Sending commands to the server in Python

Hello, I have written some client-server code, and right now I noticed a bug in how I am handling received commands.
These are my commands:
# Server commands
CMD_MSG, CMD_MULTI, CMD_IP, CMD_AUDIO, CMD_AUDIO_MULTI, CMD_FILE = range(6)
I send a command like this:
self.client(chr(CMD_AUDIO), data)
and receive it like this:
msg = conn.recv(2024)
if msg:
    cmd, msg = ord(msg[0]), msg[1:]
    if cmd == CMD_MSG:
        # do something
The first command seems to work, but if I call any other it seems to loop through them all. It's really bizarre.
I can post more code if needed, but any ideas on how to handle the commands being sent to my server would be great.
Cheers

Assuming you're using a stream (TCP) socket, the first rule of stream sockets is that you will not receive data in the same groups it is sent. If you send three messages of 10 bytes each, you may receive at the other end one block of 30 bytes, 30 blocks of one byte each, or anything in between.
You must structure your protocol so that the receiver knows how long each message within the stream is (either by adding a length field, or by having fixed-length message formats), and you must save the unused portion of any recv() that crosses a message boundary to use in the next message.
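For example, here is a minimal sketch of length-prefix framing (assuming Python 3; the 4-byte big-endian header and the helper names send_msg/recv_msg/recv_exactly are illustrative choices, not taken from your code):

import struct

def send_msg(sock, cmd, payload):
    # Prefix each message with a 4-byte big-endian length, then a 1-byte command.
    body = bytes([cmd]) + payload
    sock.sendall(struct.pack('>I', len(body)) + body)

def recv_exactly(sock, n):
    # Loop until exactly n bytes have arrived; a single recv() may return less.
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('socket closed mid-message')
        buf += chunk
    return buf

def recv_msg(sock):
    # Read the length header first, then exactly that many body bytes.
    (length,) = struct.unpack('>I', recv_exactly(sock, 4))
    body = recv_exactly(sock, length)
    return body[0], body[1:]   # (cmd, payload)

The receiver then dispatches on cmd exactly once per message, no matter how the bytes were split or merged on the wire.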
The alternative to stream/TCP sockets is datagram/UDP sockets. These preserve message boundaries, but do not guarantee delivery or ordering of the messages. Depending on what you're doing, this may be acceptable, but probably not.

Related

Python socket recv() doesn't get every message if sending too fast

I send mouse coordinates from a Python server to a Python client via a socket. The mouse coordinates are sent every time a mouse movement event is caught on the server, which means quite often (a dozen or so per second).
The problem occurs when the Python server and the Python client run on different hosts: then only some of the messages are delivered to the client,
e.g. the first 3 messages are delivered, the next 4 messages aren't, the next 4 are, and so on.
Everything is fine when the server and client are on the same host (localhost).
Everything is also fine when the server and client are on different hosts but, instead of the Python client, I use the standard Windows Telnet client to read the messages from the server.
I noticed that when I put a time.sleep(0.4) break between each message that is sent, all messages are delivered. The problem is that I need the information in real time, not with such a delay. Is it possible to achieve that in Python using sockets?
Below is the Python client code that I use:
import pickle
import socket
import sys

host = '192.168.1.222'
port = 8888

try:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
except socket.error, msg:
    print "Failed. Error: " + str(msg[0]) + ", error message: " + msg[1]
    sys.exit()

mySocket = socket.socket()
mySocket.connect((host, port))

while 1:
    data = mySocket.recv(1024)
    if not data: break
    load_data = pickle.loads(data)
    print 'parameter x: ' + str(load_data[0])
    print 'parameter y: ' + str(load_data[1])

mySocket.close()
You are using TCP (SOCK_STREAM), which is a reliable protocol that (contrary to UDP) does not lose any messages, even if the recipient is not reading the data fast enough. Instead, TCP will reduce the sending speed.
This means that the problem must be somewhere in your application code.
One possibility is that the problem is in your sender, i.e. that you use socket.send and do not check whether all the bytes you intended to send really got sent. This check needs to be done, since socket.send might send only part of the data if the socket buffer of the OS is full, which can happen if the client does not read the data fast enough.
Another possibility is that your socket.recv call receives more data than your pickle.loads needs, and the rest of the data gets discarded (I am not sure whether pickle.loads will throw an exception if too much data is provided). Note that TCP is not a message protocol but a stream protocol, so socket.recv may return a buffer which contains more than one pickled object while you only read the first. The chance that this happens on a real network is higher than on localhost, because by default the TCP layer will try to concatenate multiple send buffers into a single TCP packet for better use of the connection (i.e. less overhead), and the chance is high that these will then be received within the same recv call. By putting a sleep(0.4) on the sender side you have effectively switched off this optimization of TCP; see the Nagle algorithm for details.
Thus the correct way to implement what you want would be:
Make sure that all data is delivered to the server, i.e. check the return value of socket.send (or use socket.sendall).
Make sure that you unpack all messages you receive. To do this you probably need to add some message layer on top of the TCP stream, so the receiver can find the message boundaries.
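One sketch that does both at once (assuming Python 3; this relies on pickle's self-delimiting format and is not code from the question): the sender uses sendall, and the receiver wraps the socket in a file object so pickle.load reads exactly one object per call.

import pickle

def send_coords(sock, x, y):
    # sendall retries internally until every byte is handed to the kernel.
    sock.sendall(pickle.dumps((x, y)))

def recv_coords(sock):
    # A file wrapper lets pickle.load consume exactly one pickled object
    # per call, regardless of how TCP split or merged the packets.
    f = sock.makefile('rb')
    while True:
        try:
            x, y = pickle.load(f)
        except EOFError:
            break              # sender closed the connection
        print('parameter x:', x)
        print('parameter y:', y)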

Weird behavior of send() and recv()

Sorry for my bad English.
Why, if I have two send()s on the server and two recv()s on the client, does the first recv() sometimes get the content of the second send() as well, instead of taking just the content of the first one and letting the other recv() take the due and proper content of the other send()?
How can I make this work another way?
This is by design.
A TCP stream is a channel on which you can send bytes between two endpoints but the transmission is stream-based, not message based.
If you want to send messages then you need to encode them... for example by prepending a "size" field that will inform the receiver how many bytes to expect for the body.
If you send 100 bytes and then another 100 bytes, it's well possible that the receiver will instead see 200 at once, or even 50 + 150 in two different read calls. If you want message boundaries, you have to put them in the data yourself.
There is a lower layer (datagrams) that allows you to send messages, but they are limited in size and delivery is not guaranteed (i.e. it is possible that a message will get lost, that it will be duplicated, or that two messages you send will arrive in a different order).
TCP stream is built on top of this datagram service and implements all the logic needed to transfer data reliably between the two endpoints.
As an alternative there are libraries designed to provide reliable message-passing between endpoints, like ZeroMQ.
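For instance, a minimal pyzmq sketch (the PAIR socket type and the port are illustrative choices); each send() arrives as exactly one recv() on the other side:

import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PAIR)
sock.bind('tcp://*:5555')

# ZeroMQ preserves message boundaries: each send() is one message.
sock.send(b'first message')
sock.send(b'second message')   # never merged with the first at the receiver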
Most probably you use a SOCK_STREAM type socket. This is a TCP socket, which means that you push data in on one side and it comes out on the other side in the same order and without missing chunks, but there are no delimiters. So send() just sends data, and recv() receives all the data available at the current moment.
You can use SOCK_DGRAM, and then UDP will be used. In that case every send() sends a datagram and every recv() receives one. But you are not guaranteed that your datagrams will not be shuffled or lost, so you will have to deal with such problems yourself. There is also a limit on the maximal datagram size.
Or you can stick with the TCP connection, but then you have to add delimiters yourself.
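A sketch of delimiter framing over TCP (the newline delimiter and the generator shape are just one possible choice): buffer whatever recv() returns and only yield complete messages.

def iter_messages(sock, delim=b'\n'):
    buf = b''
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break                  # connection closed
        buf += chunk
        # recv() may deliver partial messages or several at once,
        # so split out every complete message currently buffered.
        while delim in buf:
            msg, buf = buf.split(delim, 1)
            yield msg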

Python TCP socket for a lot of data

We (as a project group) are currently stuck on the issue of how to handle live data on our server.
We get data updates every second, and we would like to insert them into our database (security is currently not an issue, because it is a school project). We tried Python's SocketServer and asyncio to create a TCP server to which the data can be sent.
We got this working with different libraries, etc., but we are stuck on the fact that if we keep an open connection with the client (in this case hardware which sends data every second) we can't split the different JSON or XML messages. They all get concatenated together.
We know why this happens: TCP only guarantees ordering, not message boundaries.
Any thoughts on how to handle this, so that every message sent gets split from the others?
Recreating the socket for each message isn't the right option, if I recall correctly.
What you will have to do is ensure that there is a clear delimiter for each message. For example, the first 6 characters of every message could be the length of the message: whatever reads from the socket decodes the length, then reads that number of bytes and hands the data to whatever needs it. Another way is to pick a character/byte which never appears in the content and send it immediately before a message; for example Ctrl-A (binary value 1) could be the lead-in character and Ctrl-B (binary value 2) the lead-out. Again, the server looks for these bytes framing each message.
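A sketch of the 6-character length header for JSON messages (assuming Python 3; the exact header format and function names are illustrative, not part of the answer above):

import json

def send_json(sock, obj):
    body = json.dumps(obj).encode('utf-8')
    sock.sendall(b'%06d' % len(body) + body)   # 6-digit ASCII length header

def recv_exactly(sock, n):
    buf = b''
    while len(buf) < n:                        # recv() may return fewer bytes
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('connection closed mid-message')
        buf += chunk
    return buf

def recv_json(sock):
    length = int(recv_exactly(sock, 6))
    return json.loads(recv_exactly(sock, length).decode('utf-8'))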
If you can't change the client side (the thing sending the data), then you are going to have to parse the input. You can't just add a delimiter to something that you don't control.
An alternative is to use a header that encodes the size of the message that will be sent. Let's say you use a header of 4 bytes: the client first sends the server a header with the size of the message to come, then sends the message itself (up to 4 GiB or thereabouts). The server knows that it must first read 4 bytes (the header); it decodes the size n that the header contains, then reads n bytes from the socket buffer. You are guaranteed to have read exactly one message. Using special delimiters is dangerous, as you MUST know all possible values that a client can send.
It really depends on the type of data you are receiving: the type of connection, latency, and so on. If you have a pause of 1 second between packets and your connection is consistent, you could probably get away with reading the entire buffer each time data becomes available and treating it as one message. It is not a great approach, but it might work for what you need, with no parsing involved.

Python TCP programming

I have a TCP server and a client written in Python. The aim of the programs is that the server sends numbers one after the other to the client, and the client should process each number in a separate thread.
The server loops over the number list and sends each one to the client:
for num in nums:
    client_sock.send(str(num))
and the client loops like this:
while True:
    data = tcpClientSock.recv(BUFSIZE)
    thread.start_new_thread(startFunction, (data,))
The problem is that even though the server sends the numbers in separate send() calls, the client receives them all at once.
How can I avoid this? Should I use UDP instead of TCP in this situation?
You'll have to delimit the data on the sending end: append a CR/LF to each number (since you're sending strings) so the receiver can tell where one ends.
TCP is a stream based protocol and not message based. This means there are no message boundaries for each time the server calls send(). In fact, each time send() is called, the bytes of data are just added to the stream.
On the receiving end, you'll receive bytes of the stream as they arrive. Since there are no message boundaries, you may receive part of a message or whole messages or whole + part of the next message.
In order to send message over a TCP stream, your protocol needs to establish message boundaries. This allows the receiver to interpret whether it has received a partial, full, or multiple messages.
In your example, the server is sending strings. A string terminator would serve as the message boundary. On the receiving side, you should parse out the strings and have handling for receiving partial strings.
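As a sketch (assuming Python 3, and assuming the server appends b'\n' to each number, which is my addition, not in the question's code): buffer the stream, split on the delimiter, and start a thread per complete number. tcpClientSock, BUFSIZE, and startFunction are the names from the question.

import threading

buf = b''
while True:
    data = tcpClientSock.recv(BUFSIZE)
    if not data:
        break
    buf += data
    # Only hand complete, delimiter-terminated numbers to worker threads.
    while b'\n' in buf:
        num, buf = buf.split(b'\n', 1)
        threading.Thread(target=startFunction, args=(num,)).start()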

What does it mean when Python socket.sendall returns successfully?

In my code I wrote something like this:
try:
    s.sendall(data)
except Exception as e:
    print e
Now, can I assume that if no exception was thrown by sendall, the other side of the socket (its kernel) received 'data'? If not, that means I need to send an application-level ACK, which seems unreasonable to me.
If I can assume that the other side's kernel received 'data', that would mean 'sendall' returns only when it has seen a TCP ACK for all the bytes I put in 'data'. But I couldn't find any documentation for this; on the contrary, from searching the web I got the feeling that I cannot assume an ACK was received.
can I assume that if there wasn't any exception thrown by sendall that the other side of the socket (its kernel) did receive 'data'?
No, you can't. All it tells you is that the system has accepted the data for sending. It will not wait for the peer to ACK the data (i.e. for the data to be received by the peer's OS kernel), let alone wait until the data has been processed by the peer application. This behavior is not specific to Python.
And usually it does not matter much whether the peer system's kernel received the data and put it into the application's socket buffer. All that really counts is whether the application received and processed the data, which might involve complex things like inserting the data into a database and waiting for a successful commit, or even forwarding the data to yet another system. Since it is up to the application to decide when the data is really processed, you have to make your own application-specific ACK to signal successful processing.
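A sketch of such an application-level ACK (the 2-byte b'OK' token is purely illustrative): the receiver replies only after it has really processed the data, and the sender blocks until that reply arrives.

def send_with_ack(sock, payload):
    sock.sendall(payload)
    ack = b''
    while len(ack) < 2:                 # read the full 2-byte ACK
        chunk = sock.recv(2 - len(ack))
        if not chunk:
            raise ConnectionError('peer closed before acknowledging')
        ack += chunk
    if ack != b'OK':
        raise RuntimeError('peer did not confirm processing')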
Yes you can :)
According to the socket.sendall docs:
socket.sendall(string[, flags])
Send data to the socket. The socket must be connected to a remote socket. The optional flags argument has the same meaning as for recv() above. Unlike send(), this method continues to send data from string until either all data has been sent or an error occurs. None is returned on success. On error, an exception is raised, and there is no way to determine how much data, if any, was successfully sent.
Specifically:
socket.sendall() will continue to send all data until it has completed or an error has occurred.
Update: to answer your comment about what's going on under the hood:
Looking at the socketmodule.c source code, it repeatedly tries to send all the data until there is none left to send. You can see this on L3611: } while (len > 0);. Hopefully this answers your question.
