pyzmq socket.recv() with NOBLOCK flag behaviour - python

I have created a simple client/server using pyzmq.
What I don't understand is that .recv() does not receive the message even though the server has sent it. The call simply throws an error instead, which I find strange.
Client.py
import zmq

context = zmq.Context()
try:
    socket = context.socket(zmq.REQ)
    socket.connect("tcp://localhost:2222")
    print("Sending request")
    socket.send(b"send the message")
    message = socket.recv(flags=zmq.NOBLOCK)
    print("Received reply %s " % message)
except Exception as e:
    print(str(e))
Server.py
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:2222")

while True:
    message = socket.recv()
    socket.send(b"Ack")
I think the client should receive the Ack and print it instead of throwing the exception.
The document says,
With flags=NOBLOCK, this raises ZMQError if no messages have arrived
Clearly the server is responding with "Ack" as soon as it receives the message.
The Error message is,
Resource temporarily unavailable

Remember that in concurrent environments there are no guarantees about the order of execution of independent processes. Even though you respond immediately to the message in server.py, the response may not reach the receiving socket before you call socket.recv. When you call socket.send the message has to travel over the network to your server, the server has to process it and respond, and then the reply has to travel back over the network to your client. That round trip takes comparatively long, and you are calling socket.recv immediately after socket.send.
So in fact when you call message = socket.recv(flags=zmq.NOBLOCK) the client socket will not have received the Ack from the server yet, and since you are using NOBLOCK an error is thrown since no messages have been received on the socket.
NOBLOCK is likely not appropriate in this scenario. You can experiment with this by adding a sleep call in between send and recv to show that the time delay of waiting for the response from the server is indeed the issue but that's not a good solution for your client code long-term.
If you want to exit after waiting for a certain amount of time you should use socket.poll instead.
event = socket.poll(timeout=3000)  # wait 3 seconds
if event == 0:
    # timeout reached before any events were queued
    pass
else:
    # events queued within our time limit
    msg = socket.recv()
Pyzmq Doc for socket.poll()
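Put together, a minimal sketch of the client using socket.poll instead of NOBLOCK might look like this (port and payload taken from the question; the 3-second timeout is arbitrary):

import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:2222")

print("Sending request")
socket.send(b"send the message")

if socket.poll(timeout=3000):              # block for up to 3 s waiting for the reply
    message = socket.recv()                # an event is queued, so recv() returns at once
    print("Received reply %s " % message)
else:
    print("No reply within 3 s")           # note: the REQ socket still owes a recv()
                                           # before it may send again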

Q : Say the server is not up; in that case the recv() in the client will block forever, which I don't want.
ZeroMQ is a fabulous framework for smart signalling/messaging in distributed systems.
Let's sketch a demo of a principally non-blocking modus operandi, with some inspiration on how the resources ought to be acquired and also gracefully released before process termination.
A bit of reading about the main conceptual elements of the ZeroMQ hierarchy (in less than five seconds) may also help.
Server.py
import zmq
from time import sleep

aContext    = zmq.Context()
aLightHouse = aContext.socket( zmq.PUB )
aRepSocket  = aContext.socket( zmq.REP )
aRepSocket.setsockopt(  zmq.LINGER,   0 )
aRepSocket.bind( "tcp://*:2222" )
aLightHouse.setsockopt( zmq.LINGER,   0 )
aLightHouse.setsockopt( zmq.CONFLATE, 1 )
aLightHouse.bind( "tcp://*:3333" )
aLightHouse_counter = 0
#------------------------------------------------------------
print( "INF: Server InS: ZeroMQ({0:}) going RTO:".format( zmq.zmq_version() ) )
#------------------------------------------------------------
while True:
    try:
        aLightHouse_counter += 1
        aLightHouse.send_string( "INF: server-RTO blink {0:}".format( repr( aLightHouse_counter ) ),
                                 zmq.NOBLOCK
                                 )
        if ( 0 < aRepSocket.poll( 0, zmq.POLLIN ) ):
            try:
                message = aRepSocket.recv( zmq.NOBLOCK );          print( "INF: .recv()ed {0:}".format( message ) )
                pass;      aRepSocket.send( b"Ack", zmq.NOBLOCK ); print( "INF: .sent() ACK" )
            except zmq.ZMQError as e:
                # handle EXC: based on ...
                print( "EXC: reported as Errno == {0:}".format( e.errno ) )
        else:
            # NOP / Sleep / do other system work-units to get processed during the infinite-loop
            pass
    except ( zmq.ZMQError, KeyboardInterrupt ):
        # handle EXC:
        print( "EXC: will break ... and terminate OoS ..." )
        break
#------------------------------------------------------------
print( "INF: will soft-SIG Server going-OoS..." )
aLightHouse.send_string( "INF: server goes OoS ... " )
#------------------------------------------------------------
print( "INF: will .close() and .term() resources on clean & graceful exit..." )
sleep( 0.987654321 )
aRepSocket.unbind( "tcp://*:2222" )
aRepSocket.close()
aLightHouse.unbind( "tcp://*:3333" )
aLightHouse.close()
aContext.term()
#------------------------------------------------------------
print( "INF: over and out" )
Client.py
import zmq

try:
    aContext   = zmq.Context()
    aReqSocket = aContext.socket( zmq.REQ )
    aBeeper    = aContext.socket( zmq.SUB )
    aReqSocket.setsockopt( zmq.LINGER,    0 )
    aReqSocket.connect( "tcp://localhost:2222" )
    aBeeper.setsockopt(    zmq.SUBSCRIBE, b"" )
    aBeeper.setsockopt(    zmq.CONFLATE,  1 )
    aBeeper.connect( "tcp://localhost:3333" )
    #------------------------------------------------------------
    print( "INF: Client InS: ZeroMQ({0:}) going RTO.".format( zmq.zmq_version() ) )
    #------------------------------------------------------------
    try:
        while True:
            if ( 0 == aBeeper.poll( 1234 ) ):
                print( "INF: Server OoS or no beep visible within a LoS for the last 1234 [ms] ... " )
            else:
                print( "INF: Server InS-beep[{0:}]".format( aBeeper.recv( zmq.NOBLOCK ) ) )
                try:
                    print( "INF: Going to send a request" )
                    aReqSocket.send( b"send the message", zmq.NOBLOCK )
                    print( "INF: Sent. Going to poll for a response to arrive..." )
                    while ( 0 == aReqSocket.poll( 123, zmq.POLLIN ) ):
                        print( "INF: .poll( 123 ) = 0, will wait longer ... " )
                    message = aReqSocket.recv( flags = zmq.NOBLOCK )
                    print( "INF: Received a reply %s " % message )
                except Exception as e:
                    print( "EXC: {0:}".format( str( e ) ) )
                    print( "INF: ZeroMQ Errno == {0:}".format( getattr( e, "errno", "n/a" ) ) )
                    print( "INF: will break and terminate" )
                    break
    except Exception as e:
        print( "EXC: {0:}".format( str( e ) ) )
finally:
    #------------------------------------------------------------
    print( "INF: will .close() and .term() resources on clean & graceful exit..." )
    aBeeper.close()
    aReqSocket.close()
    aContext.term()
    #------------------------------------------------------------
    print( "INF: over and out" )

You are using non-blocking mode, which means recv() raises an error to inform you that nothing has arrived yet and that you should try again later; in blocking mode the call simply blocks until a message arrives.
This answer is from here.
Basically, if you remove flags=zmq.NOBLOCK it will work.
Update
If you want to use non-blocking mode, you should have a look at this.
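For completeness, a minimal sketch of one such alternative, assuming you simply want the call to give up after a timeout instead of blocking forever: set a receive timeout with zmq.RCVTIMEO and catch zmq.Again (the 3-second value is arbitrary):

import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.setsockopt(zmq.RCVTIMEO, 3000)      # recv() raises zmq.Again after 3 s with no message
socket.setsockopt(zmq.LINGER, 0)           # do not hang on close with unsent messages
socket.connect("tcp://localhost:2222")

socket.send(b"send the message")
try:
    message = socket.recv()                # blocks for at most 3 s
    print("Received reply %s " % message)
except zmq.Again:
    print("No reply within 3 s")           # the REQ socket still expects a reply
                                           # before the next send()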

Related

Issue in connecting client socket with server socket

I have a server running on a desktop machine with IP address 192.168.1.11, and the client code is running on a server that is reached through an OpenVPN connection. When I run the code below, the client sends the request but the server doesn't receive it.
Server.py:
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:8080")

while True:
    message = socket.recv_pyobj()
    print("%s:%s" % (message.get(1)[0], message.get(1)[1]))
    socket.send_pyobj({1: [message.get(1)[0], message.get(1)[1]]})
Client.py
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://192.168.1.11:8080")

name = "Test"
while True:
    message = input("Test Message")
    socket.send_pyobj(({1: [name, message]}))
Thanks help is highly appreciated.
Q : "Issue in connecting client socket with server socket"
Step 0 : prove that OSI-ISO Layer-3 visibility has been achieved: traceroute <targetIP>
Step 1 : having achieved a visible route to <targetIP>, repair the code so that it meets the documented REQ/REP properties
Step 2 : having achieved a visible route to <targetIP> and a conforming REQ/REP flow, improve the robustness of the code
context = zmq.Context()
socket  = context.socket( zmq.REP )
socket.bind( "tcp://*:8080" )
#---------------------------------------------- # ROBUSTNESS CONFIGs
socket.setsockopt( zmq.LINGER,     0 )           # .set explicitly
socket.setsockopt( zmq.MAXMSGSIZE, ... )         # .set safety ceiling
socket.setsockopt( ...,            ... )         # .set ...
#---------------------------------------------- # ROBUSTNESS CONFIGs
while True:
    message = socket.recv_pyobj()                # .recv() a request from REQ-side
    print( "%s:%s" % ( message.get(1)[0],        # shall improve robustness
                       message.get(1)[1]         #       for cases other than this
                       )
           )
    socket.send_pyobj( { 1: [ message.get(1)[0], # REP must "answer" to REQ
                              message.get(1)[1]
                              ]
                         }
                       )
TARGET_IP   = "<targetIP>"                       # <targetIP> from Step 0
PORT_NUMBER = 8080

socket = context.socket( zmq.REQ )
socket.connect( "tcp://{0:}:{1:}".format( TARGET_IP, PORT_NUMBER ) )
#---------------------------------------------- # ROBUSTNESS CONFIGs
socket.setsockopt( zmq.LINGER,     0 )           # .set explicitly
socket.setsockopt( zmq.MAXMSGSIZE, ... )         # .set safety ceiling
socket.setsockopt( ...,            ... )         # .set ...
#---------------------------------------------- # ROBUSTNESS CONFIGs
name = "Test"
while True:
    message = input( "Test Message" )
    socket.send_pyobj( ( { 1: [ name,            # REQ-side sends a request
                                message           #          here
                                ]                 #          bearing a tuple
                           }                      #          with a dict
                         )                        #          having a list
                       )                          #          for a single key
    #------------------------------------------  # REQ-side now MUST also .recv()
    _ = socket.recv()                             #          before it can .send() again

How to Send New Messages from Azure IoT Edge Module Python

It seems there is not very much support for what I am trying to do, but it is supposed to be possible since it is demonstrated in the temperature sensor and sensor filter tutorial. However, there are no examples of actually creating a message from an edge module in Python; that tutorial only shows forwarding messages. There are examples of sending from a device, but devices use a different class than edge modules. From the filter example and a couple of device examples I have pieced together the following:
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license. See LICENSE file in the project root for
# full license information.
import random
import time
import sys
import iothub_client
from iothub_client import IoTHubModuleClient, IoTHubClientError, IoTHubTransportProvider
from iothub_client import IoTHubMessage, IoTHubMessageDispositionResult, IoTHubError

# messageTimeout - the maximum time in milliseconds until a message times out.
# The timeout period starts at IoTHubModuleClient.send_event_async.
# By default, messages do not expire.
MESSAGE_TIMEOUT = 10000

# global counters
RECEIVE_CALLBACKS = 0
SEND_CALLBACKS = 0

# Choose HTTP, AMQP or MQTT as transport protocol. Currently only MQTT is supported.
PROTOCOL = IoTHubTransportProvider.MQTT

# Callback received when the message that we're forwarding is processed.
def send_confirmation_callback(message, result, user_context):
    global SEND_CALLBACKS
    print ( "Confirmation[%d] received for message with result = %s" % (user_context, result) )
    map_properties = message.properties()
    key_value_pair = map_properties.get_internals()
    print ( "    Properties: %s" % key_value_pair )
    SEND_CALLBACKS += 1
    print ( "    Total calls confirmed: %d" % SEND_CALLBACKS )

# receive_message_callback is invoked when an incoming message arrives on the specified
# input queue (in the case of this sample, "input1"). Because this is a filter module,
# we will forward this message onto the "output1" queue.
def receive_message_callback(message, hubManager):
    global RECEIVE_CALLBACKS
    message_buffer = message.get_bytearray()
    size = len(message_buffer)
    print ( "    Data: <<<%s>>> & Size=%d" % (message_buffer[:size].decode('utf-8'), size) )
    map_properties = message.properties()
    key_value_pair = map_properties.get_internals()
    print ( "    Properties: %s" % key_value_pair )
    RECEIVE_CALLBACKS += 1
    print ( "    Total calls received: %d" % RECEIVE_CALLBACKS )
    hubManager.forward_event_to_output("output1", message, 0)
    return IoTHubMessageDispositionResult.ACCEPTED

def construct_message(message_body, topic):
    try:
        msg_txt_formatted = message_body
        message = IoTHubMessage(msg_txt_formatted)
        # Add a custom application property to the message.
        # An IoT hub can filter on these properties without access to the message body.
        prop_map = message.properties()
        prop_map.add("topic", topic)
        # TODO Use logging
        # Send the message.
        print( "Sending message: %s" % message.get_string() )
    except IoTHubError as iothub_error:
        print ( "Unexpected error %s from IoTHub" % iothub_error )
        return
    return message

class HubManager(object):
    def __init__(
            self,
            protocol=IoTHubTransportProvider.MQTT):
        self.client_protocol = protocol
        self.client = IoTHubModuleClient()
        self.client.create_from_environment(protocol)
        # set the time until a message times out
        self.client.set_option("messageTimeout", MESSAGE_TIMEOUT)
        # sets the callback when a message arrives on "input1" queue. Messages sent to
        # other inputs or to the default will be silently discarded.
        self.client.set_message_callback("input1", receive_message_callback, self)

    # Forwards the message received onto the next stage in the process.
    def forward_event_to_output(self, outputQueueName, event, send_context):
        self.client.send_event_async(
            outputQueueName, event, send_confirmation_callback, send_context)

    def send_message(self, message):
        # No callback
        # TODO what is the third arg?
        self.client.send_event_async(
            "output1", message, send_confirmation_callback, 0)
        self.client.send_message()

    def mypublish(self, topic, msg):
        message = construct_message(msg, topic)
        self.send_message(message)
        print('publishing %s', msg)

def main(protocol):
    try:
        print ( "\nPython %s\n" % sys.version )
        print ( "IoT Hub Client for Python" )
        hub_manager = HubManager(protocol)
        print ( "Starting the IoT Hub Python sample using protocol %s..." % hub_manager.client_protocol )
        print ( "The sample is now waiting for messages and will indefinitely.  Press Ctrl-C to exit. ")
        while True:
            hub_manager.mypublish('testtopic', 'hello world this is a module')
            time.sleep(1)
    except IoTHubError as iothub_error:
        print ( "Unexpected error %s from IoTHub" % iothub_error )
        return
    except KeyboardInterrupt:
        print ( "IoTHubModuleClient sample stopped" )

if __name__ == '__main__':
    main(PROTOCOL)
When I build and deploy this it executes on the edge device without errors and in the log, the callback reports that the messages are sent ok. However, no messages come through when I attempt to monitor D2C messages.
I used this to create and send a message from a JSON dict.
new_message = json.dumps(json_obj)
new_message = IoTHubMessage(new_message)
hubManager.forward_event_to_output("output1", new_message, 0)
You can send anything you need, even strings or whatever.
To narrow down the issue, you can install the azureiotedge-simulated-temperature-sensor module published by Microsoft to see whether the issue is related to the Edge environment or to the code.
I also wrote a sample Python module based on the Python module template which works well for me; you can refer to the code below:
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license. See LICENSE file in the project root for
# full license information.
import random
import time
import sys
import iothub_client
from iothub_client import IoTHubModuleClient, IoTHubClientError, IoTHubTransportProvider
from iothub_client import IoTHubMessage, IoTHubMessageDispositionResult, IoTHubError

# messageTimeout - the maximum time in milliseconds until a message times out.
# The timeout period starts at IoTHubModuleClient.send_event_async.
# By default, messages do not expire.
MESSAGE_TIMEOUT = 10000

# global counters
RECEIVE_CALLBACKS = 0
SEND_CALLBACKS = 0

# Choose HTTP, AMQP or MQTT as transport protocol. Currently only MQTT is supported.
PROTOCOL = IoTHubTransportProvider.MQTT

# Callback received when the message that we're forwarding is processed.
def send_confirmation_callback(message, result, user_context):
    global SEND_CALLBACKS
    print ( "Confirmation[%d] received for message with result = %s" % (user_context, result) )
    map_properties = message.properties()
    key_value_pair = map_properties.get_internals()
    print ( "    Properties: %s" % key_value_pair )
    SEND_CALLBACKS += 1
    print ( "    Total calls confirmed: %d" % SEND_CALLBACKS )

# receive_message_callback is invoked when an incoming message arrives on the specified
# input queue (in the case of this sample, "input1"). Because this is a filter module,
# we will forward this message onto the "output1" queue.
def receive_message_callback(message, hubManager):
    global RECEIVE_CALLBACKS
    message_buffer = message.get_bytearray()
    size = len(message_buffer)
    print ( "    Data: <<<%s>>> & Size=%d" % (message_buffer[:size].decode('utf-8'), size) )
    map_properties = message.properties()
    key_value_pair = map_properties.get_internals()
    print ( "    Properties: %s" % key_value_pair )
    RECEIVE_CALLBACKS += 1
    print ( "    Total calls received: %d" % RECEIVE_CALLBACKS )
    hubManager.forward_event_to_output("output1", message, 0)
    return IoTHubMessageDispositionResult.ACCEPTED

class HubManager(object):
    def __init__(
            self,
            protocol=IoTHubTransportProvider.MQTT):
        self.client_protocol = protocol
        self.client = IoTHubModuleClient()
        self.client.create_from_environment(protocol)
        # set the time until a message times out
        self.client.set_option("messageTimeout", MESSAGE_TIMEOUT)
        # sets the callback when a message arrives on "input1" queue. Messages sent to
        # other inputs or to the default will be silently discarded.
        self.client.set_message_callback("input1", receive_message_callback, self)

    # Forwards the message received onto the next stage in the process.
    def forward_event_to_output(self, outputQueueName, event, send_context):
        self.client.send_event_async(
            outputQueueName, event, send_confirmation_callback, send_context)

    def SendSimulationData(self, msg):
        print("sending message...")
        message = IoTHubMessage(msg)
        self.client.send_event_async(
            "output1", message, send_confirmation_callback, 0)
        print("finished sending message...")

def main(protocol):
    try:
        print ( "\nPython %s\n" % sys.version )
        print ( "IoT Hub Client for Python" )
        hub_manager = HubManager(protocol)
        print ( "Starting the IoT Hub Python sample using protocol %s..." % hub_manager.client_protocol )
        print ( "The sample is now waiting for messages and will indefinitely.  Press Ctrl-C to exit. ")
        while True:
            hub_manager.SendSimulationData("test msg")
            time.sleep(1)
    except IoTHubError as iothub_error:
        print ( "Unexpected error %s from IoTHub" % iothub_error )
        return
    except KeyboardInterrupt:
        print ( "IoTHubModuleClient sample stopped" )

if __name__ == '__main__':
    main(PROTOCOL)
In case it helps someone: I think you are missing await on send_message.
It seems to be the same problem I answered here.
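For reference, with the newer azure-iot-device SDK the module client methods are coroutines and sends must be awaited; a rough sketch under that assumption (the output name, property and payload are illustrative):

import asyncio
from azure.iot.device import Message
from azure.iot.device.aio import IoTHubModuleClient

async def main():
    # IoT Edge injects the connection settings into the module's environment
    client = IoTHubModuleClient.create_from_edge_environment()
    await client.connect()
    try:
        msg = Message('{"hello": "world"}')
        msg.custom_properties["topic"] = "testtopic"          # illustrative application property
        await client.send_message_to_output(msg, "output1")   # must be awaited
    finally:
        await client.disconnect()

if __name__ == "__main__":
    asyncio.run(main())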

zmq.error.ZMQError: Socket operation on non-socket: How to find alive sockets before closing?

In my application, I'm creating 2 sockets and have a try/except for this:
try:
    socketA.connect("tcp://localhost:5557")
    socketB.bind("tcp://localhost:5558")
except zmq.ZMQError as e:
    if e.errno == zmq.EINVAL:
        logger.error("Endpoint supplied is invalid")
    else:
        logger.error("The ZeroMQ error with an error number {0}".format(e.errno))
        raise ZMQError(e)
    cleanUp()
If for some reason, one of the sockets cannot .connect()/.bind(), I want to close both sockets and terminate the context in a cleanUp() function, but how will I know which sockets are alive before closing them?
Does ZeroMQ provide any information about active sockets before closing them nicely?
Given the logic above, let us use another approach:
Case A: both sockets did .connect() + .bind() respectively
Case B: either of the sockets failed in doing so.
try:
    socketA.connect( "tcp://localhost:5557" )
    socketA.setsockopt( zmq.LINGER, 0 )
    try:
        socketB.bind( "tcp://localhost:5558" )
        socketB.setsockopt( zmq.LINGER, 0 )
    except zmq.ZMQError as e:
        if ( e.errno in ( zmq.EINVAL,
                          zmq.EPROTONOSUPPORT,
                          zmq.ENOCOMPATPROTO,
                          zmq.EADDRINUSE,
                          zmq.EADDRNOTAVAIL,
                          )
             ):
            logger.error( "ZeroMQ TransportClass / Endpoint cannot be setup for [socketB]." )
        if ( e.errno in ( zmq.ENODEV,
                          zmq.ENOTSOCK,
                          )
             ):
            logger.error( "ZeroMQ request was made against a non-existent device or not using a valid socket [socketB]." )
        if ( e.errno in ( zmq.ETERM,
                          zmq.EMTHREAD,
                          )
             ):
            logger.error( "ZeroMQ Context is not in a state to handle this request for [socketB]." )
        cleanUp( aContextINSTANCE, [ socketA, socketB, ] )
except zmq.ZMQError as e:
    if ( e.errno in ( zmq.EINVAL,
                      zmq.EPROTONOSUPPORT,
                      zmq.ENOCOMPATPROTO,
                      )
         ):
        logger.error( "ZeroMQ TransportClass / Endpoint cannot be setup for [socketA]." )
    if ( e.errno in ( zmq.ETERM,
                      zmq.EMTHREAD,
                      )
         ):
        logger.error( "ZeroMQ Context is not ready to handle this request for [socketA]." )
    if ( e.errno in ( zmq.ENOTSOCK, ) ):
        logger.error( "ZeroMQ operation was requested, but not on a valid [socketA]." )
    cleanUp( aContextINSTANCE, [ socketA, ] )
finally:
    # ...
    pass

def cleanUp( aContextToTERMINATE, aListOfSocketsToCLOSE = [] ):
    for aSocket in aListOfSocketsToCLOSE:
        try:
            aSocket.close()   # external responsibility to setup LINGER as zero right at aSocket instantiation point
        except:
            pass
    try:
        aContextToTERMINATE.term()
    except:
        pass
    finally:
        # ...
        pass
I am not completely sure what you mean by "which sockets are alive". Both sockets must be closed regardless of which connect/bind call failed. In C libzmq terms, zmq_close is not the counterpart of zmq_connect/zmq_bind, but of zmq_socket.
zmq_socket in pyzmq is already called by Socket.__init__.
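In that spirit, a minimal sketch that simply closes whatever context.socket() created, regardless of which connect/bind call failed (the socket types and logging setup are illustrative):

import logging
import zmq

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

context = zmq.Context()
socketA = context.socket(zmq.PUSH)        # illustrative socket types
socketB = context.socket(zmq.PULL)
try:
    socketA.connect("tcp://localhost:5557")
    socketB.bind("tcp://localhost:5558")
    # ... use the sockets ...
except zmq.ZMQError as e:
    logger.error("ZeroMQ error %d: %s", e.errno, e.strerror)
finally:
    # close() is the counterpart of context.socket(), so it is always safe here
    socketA.close(linger=0)
    socketB.close(linger=0)
    context.term()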

Python and ZeroMQ subscriber acknowledgement to publisher

I am trying to get an acknowledgement from my subscriber back to my publisher using ZeroMQ in Python.
I tried a few code examples, using a zmq.PUSH and zmq.PULL code sequence, but to no avail.
My code as it stands:
PUB_SERVER.PY
import zmq
import random
import sys
import time

port = "5556"
if len( sys.argv ) > 1:
    port = sys.argv[1]
    int( port )

context = zmq.Context()
socket = context.socket( zmq.PUB )
socket.bind( "tcp://*:%s" % port )

while True:
    # topic = random.randrange( 9999, 10005 )
    topic = 10000
    messagedata = random.randrange( 1, 215 ) - 80
    print "%d %d" % ( topic, messagedata )
    socket.send( "%d %d" % ( topic, messagedata ) )
    time.sleep( 1 )
SUB_CLIENT.PY
import sys
import zmq

port = "5556"
if len( sys.argv ) > 1:
    port = sys.argv[1]
    int( port )
if len( sys.argv ) > 2:
    port1 = sys.argv[2]
    int( port1 )

# Socket to talk to server
context = zmq.Context()
socket = context.socket( zmq.SUB )
print "Collecting updates from weather server..."
socket.connect( "tcp://192.168.0.21:%s" % port )
if len( sys.argv ) > 2:
    socket.connect( "tcp://192.168.0.21:%s" % port1 )

# Subscribe to zipcode, default is NYC, 10001
topicfilter = "10000"
socket.setsockopt( zmq.SUBSCRIBE, topicfilter )

# Process 5 updates
total_value = 0
for update_nbr in range( 5 ):
    string = socket.recv()
    topic, messagedata = string.split()
    total_value += int( messagedata )
    print topic, messagedata

print "Average messagedata value for topic '%s' was %dF" % ( topicfilter, total_value / update_nbr )
That code gives me the output of the server in one SSH window (on a Parallella), and the received filtered messages of the client in another SSH window (on a RaspberryPi) which is working great.
Where I am lost is, once the client has gotten a filtered message from the server, how would it acknowledge that filtered message being received, and then have the server log those acknowledged messages?
Eventually, I'd want to do some intelligent decision making of sending a file to the subscriber who acknowledges.
How to acknowledge?
One may create a parallel messaging channel for exactly this kind of soft-signalling.
Extend PUB_SERVER.PY with a .SUB Rx access point:
anAckRxSOCKET = context.socket( zmq.SUB ) # create SUB side
anAckRxSOCKET.bind( "tcp://*:%s" % aServerAckPORT )     # .bind()
anAckRxSOCKET.setsockopt( zmq.SUBSCRIBE, "" ) # SUB to *-anything
# ...
anAckRxSTRING = anAckRxSOCKET.recv() # .recv()
Extend SUB_CLIENT.PY with a .PUB Tx socket to the Server side access point:
anAckTxSOCKET = context.socket( zmq.PUB ) # create PUB side(s)
anAckTxSOCKET.connect( "tcp://192.168.0.21:%s" % aServerAckPORT )
and
send ACK(s) with "a-proxy-ID" for any server-side processing you may want or need
anAckTxSOCKET.send( topicfilter ) # ACK with an "identity"-proxy
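Put together, a rough sketch of both sides with the extra ACK channel (the second port number 5557 and the 1-second poll are illustrative; also note the usual PUB/SUB slow-joiner caveat, i.e. the very first messages on either channel may be dropped before the subscriptions propagate):

# --- publisher side (extends PUB_SERVER.PY), a sketch ---
import time
import zmq

context = zmq.Context()

data_pub = context.socket( zmq.PUB )                  # data channel, as in the question
data_pub.bind( "tcp://*:5556" )

ack_sub = context.socket( zmq.SUB )                   # parallel ACK channel
ack_sub.bind( "tcp://*:5557" )
ack_sub.setsockopt( zmq.SUBSCRIBE, b"" )              # SUB to anything

while True:
    data_pub.send( b"10000 42" )                      # topic + payload
    if ack_sub.poll( 1000 ):                          # wait up to 1 s for an ACK
        print( "ACK received: %s" % ack_sub.recv() )  # log the acknowledged message here
    time.sleep( 1 )

# --- subscriber side (extends SUB_CLIENT.PY), a sketch ---
import zmq

context = zmq.Context()

data_sub = context.socket( zmq.SUB )
data_sub.connect( "tcp://192.168.0.21:5556" )
data_sub.setsockopt( zmq.SUBSCRIBE, b"10000" )

ack_pub = context.socket( zmq.PUB )
ack_pub.connect( "tcp://192.168.0.21:5557" )

for _ in range( 5 ):
    message = data_sub.recv()
    print( "got: %s" % message )
    ack_pub.send( b"10000 ACK" )                      # ACK with the topic as an "identity"-proxy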

Python lost package

I am trying to write a client/server application in Python, but I ran into a problem: on the client side I am not getting all of the sent data. First I tried to send the numbers 1 to 10 and I received 1, 2, 5, 6, 10, so a lot of numbers are missing.
Server side:
def __init__( self ):
    super( MCCommunication, self ).__init__()
    HOST, PORT = socket.gethostbyname( socket.gethostname() ), 31000
    self.server = SocketServer.ThreadingTCPServer( ( HOST, PORT ), MCRequestHandler )
    ip, port = self.server.server_address
    # Start a thread with the server
    # Future task: Make the server a QT-Thread...
    self.server_thread = threading.Thread( target = self.server.serve_forever )
    # Exit the server thread when the main thread terminates
    self.server_thread.setDaemon( True )
    self.textUpdated.emit( 'Server Started!' )
    print( 'Server Started!' )
    self.server_thread.start()

def handle( self ):
    #self.request.setblocking( 0 )
    i = 10;
    while True:
        if( self.clientname == 'MasterClient' ):
            try:
                #ans = self.request.recv( 4096 )
                #print( 'after recv' )
                """ Sendign data, testing purpose """
                while i:
                    mess = str( i );
                    postbox['MasterClient'].put( self.creatMessage( 0, 0 , mess ) )
                    i = i - 1
                while( postbox['MasterClient'].empty() != True ):
                    sendData = postbox['MasterClient'].get_nowait()
                    a = self.request.send( sendData )
                    print( a );
                    #dic = self.getMessage( sendData )
                    #print 'Sent:%s\n' % str( dic )
            except:
                mess = str( sys.exc_info()[0] )
                postbox['MasterClient'].put( self.creatMessage( 1, 0 , mess ) )
                pass

def creatMessage( self, type1 = 0, type2 = 0, message = ' ', extra = 0 ):
    return pickle.dumps( {"type1":type1, "type2":type2, "message":message, "extra":extra} );
Where the postbox['MasterClient'] is a Queue with the serialized message.
And this is the client:
def run( self ):
    sock = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
    addr = ( self.ip, self.port )
    #print addr
    sock.connect( addr )
    #sock.setblocking( 0 )
    while True:
        try:
            ans = sock.recv( 4096 )
            dic = self.getMessage( ans )
            self.recvMessageHandler( dic )
            print 'Received:%s\n' % str( dic )
        except:
            pass
The server may have sent multiple messages by the time the client attempts to read them, and if they fit within the same 4 kB buffer, a single recv() call will return all of them.
You don't show the getMessage code, but I'd guess you're doing something like pickle.loads(msg); that will only give you the first message and discard the rest of the string, hence the dropped messages. You'll also hit another issue if more than 4096 bytes are buffered by the time you read, as you could end up with a fragment of a message and thus an unpickling error.
You'll need to break the string you get back into separate messages, or better, just treat the socket as a stream and let pickle.load pull a single message from it.
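A rough sketch of the "treat the socket as a stream" approach on the client side, assuming the server keeps sending pickle.dumps output per message (the address is illustrative, the port is taken from the server snippet; error handling kept minimal):

import pickle
import socket

sock = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
sock.connect( ( "127.0.0.1", 31000 ) )    # illustrative address, port as in the server code
stream = sock.makefile( "rb" )            # file-like view of the TCP byte stream

while True:
    try:
        dic = pickle.load( stream )       # reads exactly one pickled message, leaving
                                          # any following bytes buffered for the next call
    except EOFError:
        break                             # server closed the connection
    print( "Received: %s" % dic )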
