I have a TCP server and a client written in Python. The aim of the programs is that the server sends a series of numbers, one after the other, to the client, and the client processes each number in a separate thread.
The server loops over the number list and sends each number to the client like this:
for num in nums:
    client_sock.send(str(num))
and the client loop looks like this:
while True:
    data = tcpClientSock.recv(BUFSIZE)
    thread.start_new_thread(startFunction, (data,))
The problem is that even though the server sends the numbers in separate send() calls, the client receives them all at once.
How can I avoid this? Should I use UDP instead of TCP in this situation?
You'll have to mark the message boundaries on the sending end - append a CR/LF to each number (since you're sending a string) so the receiver can split the stream back into individual values.
TCP is a stream based protocol and not message based. This means there are no message boundaries for each time the server calls send(). In fact, each time send() is called, the bytes of data are just added to the stream.
On the receiving end, you'll receive bytes of the stream as they arrive. Since there are no message boundaries, you may receive part of a message or whole messages or whole + part of the next message.
In order to send messages over a TCP stream, your protocol needs to establish message boundaries. This allows the receiver to determine whether it has received a partial message, a full message, or multiple messages.
In your example, the server is sending strings. The string termination serves as the message boundary. On the receiving side, you should be parsing out the strings and handling the case where you receive only part of a string.
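For example, here is a minimal sketch of newline-delimited framing for the numbers example above (Python 3 syntax; it reuses the question's names nums, client_sock, tcpClientSock, BUFSIZE and startFunction):

import threading

# Sender: append a newline after each number so the receiver can find the boundary.
for num in nums:
    client_sock.sendall((str(num) + "\n").encode())

# Receiver: accumulate bytes and split on the delimiter, keeping any
# trailing partial line in the buffer for the next recv().
buffer = b""
while True:
    data = tcpClientSock.recv(BUFSIZE)
    if not data:
        break
    buffer += data
    while b"\n" in buffer:
        line, buffer = buffer.split(b"\n", 1)
        threading.Thread(target=startFunction, args=(line.decode(),)).start()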
Related
I send mouse coordinates from a Python server to a Python client via a socket. The coordinates are sent every time a mouse-movement event is caught on the server, which means quite often (a dozen or so times per second).
The problem appears when I run the Python server and Python client on different hosts: then only part of the messages are delivered to the client.
E.g. the first 3 messages are delivered, the next 4 are not, then 4 are delivered, and so on.
Everything is fine when the server and client are on the same host (localhost).
Everything is also fine when the server and client are on different hosts but, instead of the Python client, I use the standard Windows Telnet client to read the messages from the server.
I noticed that when I add a time.sleep(0.4) pause between each message that is sent, all messages are delivered. The problem is that I need the information in real time, not with such a delay. Is it possible to achieve that in Python using sockets?
Below is the Python client code that I use:
import pickle
import socket
import sys

host = '192.168.1.222'
port = 8888

try:
    mySocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
except socket.error, msg:
    print "Failed to create socket. Error: " + str(msg[0]) + ", error message: " + msg[1]
    sys.exit()

mySocket.connect((host, port))

while 1:
    data = mySocket.recv(1024)
    if not data: break
    load_data = pickle.loads(data)
    print 'parameter x: ' + str(load_data[0])
    print 'parameter y: ' + str(load_data[1])

mySocket.close()
You are using TCP (SOCK_STREAM), which is a reliable protocol that (contrary to UDP) does not lose any messages, even if the recipient is not reading the data fast enough. Instead, TCP will reduce the sending speed.
This means that the problem must be somewhere in your application code.
One possibility is that the problem is in your sender, i.e. that you use socket.send and do not check whether all the bytes you intended to send were actually sent. This check needs to be done, because socket.send might send only part of the data if the OS socket buffer is full, which can happen if the client does not read the data fast enough.
Another possibility is that your socket.recv call receives more data than your pickle.loads needs and that the rest of the data gets discarded (I am not sure whether pickle.loads will throw an exception if too much data is provided). Note that TCP is not a message but a stream protocol, so socket.recv may return a buffer which contains more than one pickled object while you only read the first. The chance that this happens over a network is higher than on localhost, because by default the TCP layer will try to concatenate multiple send buffers into a single TCP packet for better use of the connection (i.e. less overhead), and the chance is then high that these buffers will be received within the same recv call. By putting a sleep(0.4) on the sender side you have effectively switched off this optimization of TCP; see the Nagle algorithm for details.
Thus the correct way to implement what you want would be:
Make sure that all data really gets sent by the server, i.e. check the return value of socket.send.
Make sure that you unpack all messages you receive. To do this you probably need to add a message layer on top of the TCP stream so you can find the message boundaries; see the sketch below.
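For instance, a minimal sketch of length-prefixed framing for the pickled coordinates (Python 3 syntax; send_msg, recv_msg and recv_exactly are hypothetical helper names):

import pickle
import struct

def send_msg(sock, obj):
    # Serialize the object and prefix it with its length as a 4-byte big-endian integer.
    payload = pickle.dumps(obj)
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exactly(sock, n):
    # Keep calling recv() until exactly n bytes have been collected (or the peer closes).
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed while reading")
        buf += chunk
    return buf

def recv_msg(sock):
    # Read the 4-byte length header, then exactly that many payload bytes.
    (length,) = struct.unpack("!I", recv_exactly(sock, 4))
    return pickle.loads(recv_exactly(sock, length))

The sender would then call send_msg(conn, (x, y)) instead of a bare send, and the client's loop would call load_data = recv_msg(mySocket), so each pickled object is unpacked exactly once regardless of how TCP chunks the stream.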
Why, if I have two send() calls on the server and two recv() calls on the client, does the first recv() sometimes also get the content of the second send(), instead of taking just the content of the first one and letting the other recv() take the "due and proper" content of the other send()?
How can I get this to work another way?
This is by design.
A TCP stream is a channel on which you can send bytes between two endpoints but the transmission is stream-based, not message based.
If you want to send messages then you need to encode them... for example by prepending a "size" field that tells the receiver how many bytes to expect for the body.
If you send 100 bytes and then another 100 bytes, it is quite possible that the receiver will instead see 200 bytes at once, or even 50 + 150 in two different read calls. If you want message boundaries, then you have to put them in the data yourself.
There is a lower layer (datagrams) that allows sending messages, however they are limited in size and delivery is not guaranteed (i.e. it is possible that a message will get lost, be duplicated, or that two messages you send will arrive in a different order).
TCP stream is built on top of this datagram service and implements all the logic needed to transfer data reliably between the two endpoints.
As an alternative there are libraries designed to provide reliable message-passing between endpoints, like ZeroMQ.
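As a rough illustration of such a library, here is a minimal ZeroMQ sketch using the pyzmq bindings (the port and the PUSH/PULL socket types are arbitrary choices for this example); each send() arrives at the other end as exactly one message:

# --- sender process ---
import zmq

ctx = zmq.Context()
push = ctx.socket(zmq.PUSH)
push.bind("tcp://*:5555")
push.send(b"first message")
push.send(b"second message")

# --- receiver process ---
import zmq

ctx = zmq.Context()
pull = ctx.socket(zmq.PULL)
pull.connect("tcp://localhost:5555")
print(pull.recv())   # b"first message"
print(pull.recv())   # b"second message"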
Most probably you are using a SOCK_STREAM type socket. This is a TCP socket, which means that you push data into one side and it comes out of the other side in the same order and without missing chunks, but there are no delimiters. So send() just appends data to the stream and recv() receives all the data available at the current moment.
You can use SOCK_DGRAM, and then UDP will be used. In that case every send() sends a datagram and every recv() receives one datagram. But you are not guaranteed that your datagrams will not be shuffled or lost, so you will have to deal with such problems yourself. There is also a limit on the maximal datagram size.
Or you can stick with the TCP connection, but then you have to add the delimiters yourself.
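To illustrate the datagram option, here is a minimal UDP sketch (address and port are arbitrary; it runs both ends in one process for brevity) where every sendto() comes out as exactly one recvfrom():

import socket

# Receiver: bind a datagram socket.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 9999))

# Sender: each sendto() is one datagram, i.e. one message.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"1", ("127.0.0.1", 9999))
send_sock.sendto(b"2", ("127.0.0.1", 9999))

# Each recvfrom() returns one whole datagram, so the boundaries are preserved
# (but over a real network datagrams may be lost, duplicated or reordered).
data, addr = recv_sock.recvfrom(1024)   # b"1"
data, addr = recv_sock.recvfrom(1024)   # b"2"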
I am sending some requests from the server to my client, but I have a problem.
When I send many messages to the client, I receive them all at once with a single socket.recv().
Is there a way to get the messages one by one?
Thanks
You need to use some kind of protocol on top of otherwise bare sockets.
See Python Twisted, or use something like nanomsg or ZeroMQ if you want a simple drop-in replacement which is message-oriented.
They are not wire-compatible with plain sockets, though, meaning they will only work if they are used on both ends.
No. TCP is a byte stream. There are no messages larger than one byte.
I assume that you are using TCP. TCP is a streaming protocol, not a datagram protocol. This means that the data is not a series of messages but a single data stream without any message boundaries. If you need message boundaries, either switch protocols (UDP is datagram-based, but has other problems) or build your own protocol on top of TCP which knows about messages.
Typical message-based protocols on top of TCP either use some message delimiter (often a newline) or prefix each message with its size.
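For the delimiter variant, one convenient option is to wrap the socket in a file object and read line by line; this is a minimal sketch (Python 3 syntax, arbitrary host/port) that assumes the sender terminates every message with "\n":

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("127.0.0.1", 8888))

# makefile() buffers the stream; each iteration yields one newline-terminated
# message regardless of how recv() chunked the bytes on the wire.
reader = sock.makefile("r")
for line in reader:
    message = line.rstrip("\n")
    print(message)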
I'm trying to write an IRC bot, but I'm not exactly sure how the receiving of data works. This is what I currently have:
while True:
    data = socket.recv(1024)
    # process data
Let's say that, for whatever reason, it takes more time to process the data. What would happen if something is sent during that time? Will it get skipped, or added to some sort of queue and processed after the current one is done?
Depending upon the protocol type the behavior will be different.
TCP:
The TCP RFC clearly states:
TCP provides a means for the receiver to govern the amount of data sent by the sender. This is achieved by returning a "window" with every ACK indicating a range of acceptable sequence numbers beyond the last segment successfully received. The window indicates an allowed number of octets that the sender may transmit before receiving further permission.
The information from Wikipedia is similar:
TCP uses an end-to-end flow control protocol to avoid having the sender send data too fast for the TCP receiver to receive and process it reliably. For example, if a PC sends data to a smartphone that is slowly processing received data, the smartphone must regulate the data flow so as not to be overwhelmed. TCP uses a sliding window flow control protocol. In each TCP segment, the receiver specifies in the receive window field the amount of additionally received data (in bytes) that it is willing to buffer for the connection. The sending host can send only up to that amount of data before it must wait for an acknowledgment and window update from the receiving host.
UDP:
UDP doesn't have any flow control mechanism like TCP does. However, there are other implementations on top of UDP, such as RUDP, that provide some of TCP's features like flow control.
Here is another interesting link on the differences between TCP & UDP.
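In practical terms for the bot loop above: bytes that arrive while you are busy processing are not skipped, they wait in the kernel's receive buffer until your next recv(). A minimal sketch illustrating this (localhost, arbitrary port, timings chosen just for the demonstration):

import socket
import threading
import time

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 9998))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.sendall(b"first\n")
    time.sleep(0.5)
    conn.sendall(b"second\n")   # sent while the client is still "processing" below
    conn.close()

threading.Thread(target=server, daemon=True).start()
time.sleep(0.1)                 # give the server a moment to start listening

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 9998))
print(cli.recv(1024))           # b"first\n"
time.sleep(2)                   # simulate slow processing; "second" arrives meanwhile
print(cli.recv(1024))           # b"second\n" - it was buffered by the OS, not lost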
Hello, I have written some client-server code and just now I noticed I had a bug in how I am handling the receiving of a command.
These are my commands:
#Server Commands
CMD_MSG, CMD_MULTI, CMD_IP, CMD_AUDIO, CMD_AUDIO_MULTI, CMD_FILE = range(6)
I send a command like this
self.client(chr(CMD_AUDIO), data)
and receive like this
msg = conn.recv(2024)
if msg:
    cmd, msg = ord(msg[0]), msg[1:]
    if cmd == CMD_MSG:
        # do something
The first command seems to work, but if I call any other one it seems to loop through them all. It's really bizarre.
I can post more code if needed.
But any ideas on how to handle the commands being sent to my server would be great.
Cheers
Assuming you're using a stream (TCP) socket, the first rule of stream sockets is that you will not receive data in the same groups it is sent. If you send three messages of 10 bytes each, you may receive at the other end one block of 30 bytes, 30 blocks of one byte each, or anything in between.
You must structure your protocol so that the receiver knows how long each message within the stream is (either by adding a length field, or by having fixed length message formats), and you must save the unused portion of any recv() that crosses a message boundary to use in the next message.
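A minimal sketch of that buffering (Python 3 syntax; the 1-byte command + 2-byte length layout and the frame_handler callback are assumptions made for illustration):

import struct

def recv_frames(conn, frame_handler, bufsize=2024):
    # Read the TCP stream and call frame_handler(cmd, body) once per complete frame.
    # Assumed frame layout: 1 command byte + 2-byte big-endian body length + body.
    pending = b""
    while True:
        data = conn.recv(bufsize)
        if not data:
            break                                  # connection closed
        pending += data
        # Extract as many complete frames as the buffer currently holds;
        # whatever is left over is the start of the next frame.
        while len(pending) >= 3:
            cmd = pending[0]
            (length,) = struct.unpack("!H", pending[1:3])
            if len(pending) < 3 + length:
                break                              # frame incomplete, wait for more data
            body, pending = pending[3:3 + length], pending[3 + length:]
            frame_handler(cmd, body)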
The alternative to stream/TCP sockets is datagram/UDP sockets. These preserve message boundaries, but do not guarantee delivery or ordering of the messages. Depending on what you're doing, this may be acceptable, but probably not.