Python socket recv function

In the Python socket module's recv method, socket.recv(bufsize[, flags]) (docs here), it states:
Receive data from the socket. The return value is a bytes object representing the data received.
The maximum amount of data to be received at once is specified by bufsize.
I'm aware that bufsize represents the maximum amount of data received at once, and that if the amount of data received is less than bufsize, it means the socket on the other end sent fewer than bufsize bytes.
Is it possible that the data returned from the 1st call to socket.recv(bufsize) is < bufsize but there is still data left in the network buffer?
Eg.
data = socket.recv(10)
print(len(data)) # outputs 5
data = socket.recv(10) # calling `socket.recv(10)` returns more data without the
# socket on the other side doing `socket.send(data)`
Can a scenario in the example ever occur and does this apply for unix domain sockets as well as regular TCP/IP sockets?

The real problem in network communication is that the receiver cannot control when and how the network delivers the data.
If the size of the data returned by recv is less than the requested size, that means that at the moment of the recv call, no more data was available in the local network buffer. So if you can make sure that:
the sender has stopped sending data
the network could deliver all the data
then a new recv call will block.
The problem is that in real-world cases, you can never be sure of the two assumptions above. TCP is a stream protocol, which only guarantees that all sent bytes will reach the receiver, and in the correct order. But it offers no guarantee on timing, and sent packets can be fragmented or re-assembled by the network (starting from the TCP stack on the sender and ending at the TCP stack on the receiver).

Found a similar post that follows up on this: How can I reliably read exactly n bytes from a TCP socket?
Basically, use socket.makefile() to get a file object and call read(num_bytes), which returns exactly the number of bytes requested, blocking until they arrive (it returns less only if the connection is closed first).
fh = socket.makefile(mode='b')
data = fh.read(LENGTH_BYTES)
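If you'd rather not wrap the socket in a file object, here is a minimal sketch of the same read-exactly-n-bytes idea as a plain loop (recv_exact is just an illustrative name, and sock is assumed to be the connected socket):
def recv_exact(sock, num_bytes):
    # Keep calling recv() until exactly num_bytes have arrived.
    chunks = []
    remaining = num_bytes
    while remaining > 0:
        chunk = sock.recv(remaining)
        if not chunk:  # peer closed the connection before sending everything
            raise ConnectionError("connection closed with %d bytes missing" % remaining)
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

data = recv_exact(sock, LENGTH_BYTES)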

Related

Is code that relies on packet boundaries for a TCP stream inherently unreliable?

From what I understand, TCP is a stream, and shouldn't rely on packets, or boundaries of any kind. A recv() call will block until the kernel network buffers have some data, and return that data, up to a set buffer size, which can optionally be provided to recv as an arg.
Despite this, there seems to be plenty of code out there that looks like this:
import construct as c
...
AgentMessageHeader = c.Struct(
    "HeaderLength" / c.Int32ub,
    "MessageType" / c.PaddedString(32, "ascii"),
)

AgentMessagePayload = c.Struct(
    "PayloadLength" / c.Int32ub,
    "Payload" / c.Array(c.this.PayloadLength, c.Byte),
)

connection = ...

while True:
    response = connection.recv()
    message = AgentMessageHeader.parse(response)
    payload_message = AgentMessagePayload.parse(response[message.HeaderLength:])
    print("Payload Message:", payload_message.Payload)
This code is an excerpt from a way to read data from a TCP websocket from an AWS API.
Here, we appear to be trusting that connection.recv() will always receive one packet worth of data, with complete headers, and payload.
Is code like this inherently wrong? Why does it work a large majority of the time? Is it because in most cases, recv will return a full packet worth of data?
Yes, it's wrong.
It may work in a protocol where only one message is sent at a time in a simple request/response dialogue. You don't have to worry about recv() returning data from multiple messages that were sent back-to-back, because that never happens. And while the kernel is permitted to return less data than you request, in practice it will return as much as is available up to the buffer size.
And at the sending end, most calls to send() will result in all the data being sent in a single segment (if it fits within the MTU) or a few segments sent back-to-back. These will likely be buffered together in the receiving kernel, and returned at once in the recv() call.
You should never depend on this in production code; the example you quoted is simplified because it's concentrating on the WebSocket features. Or the author just didn't know better.
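As a rough sketch of a more robust reader, assuming connection is a plain socket and that the fixed fields shown above (a 4-byte HeaderLength plus a 32-byte padded string, i.e. 36 bytes) really are the start of every header, you could buffer the stream with makefile() and read exact lengths instead of trusting recv() boundaries:
import construct as c

AgentMessageHeader = c.Struct(
    "HeaderLength" / c.Int32ub,
    "MessageType" / c.PaddedString(32, "ascii"),
)

fh = connection.makefile(mode="rb")        # buffered, file-like view of the TCP stream
while True:
    prefix = fh.read(36)                   # the two fixed header fields
    if len(prefix) < 36:                   # EOF: the peer closed the connection
        break
    header = AgentMessageHeader.parse(prefix)
    fh.read(header.HeaderLength - 36)      # skip any header fields we don't model
    payload_length = c.Int32ub.parse(fh.read(4))
    payload = fh.read(payload_length)      # exactly the payload bytes
    print("Payload:", payload)
The field sizes and the assumption that HeaderLength counts the whole header are guesses based on the struct definitions in the question; adjust them to the real protocol.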

How to check the amount of packets in a receive buffer of a raw socket

I have written a Linux server that receives packets of a specific EtherType (using a raw socket) and forwards them on a different Ethernet device. The thing is, the rate at which I need to receive packets is greater than the rate at which I can send them out on the other interface. So I'm relying on the socket buffer until it gets full, and then I expect packets to drop.
I have set the buffer size using
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 20 * 1024 * 1024)
And validating using getsockopt, I do see the socket was configured correctly.
The thing is, I start to drop packets much faster than I expected (nearly 10 times faster).
What I want to do is get the number of packets in the socket buffer, so that I can print the time left until it is full.
(The server is written in Python, yet I would be able to "translate" from other languages)
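There is no portable "packet count", but on Linux the FIONREAD/SIOCINQ ioctl reports pending data in the receive queue (for datagram-style sockets it may report only the size of the next pending packet rather than the total). It is also worth confirming what SO_RCVBUF the kernel actually applied, since the requested value is doubled and capped at net.core.rmem_max. A rough sketch, where the EtherType and names are placeholders:
import fcntl
import socket
import struct
import termios

BUF_SIZE = 20 * 1024 * 1024

# Raw socket bound to a specific EtherType (0x88B5 is a placeholder; needs root).
sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(0x88B5))
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_SIZE)

# The kernel doubles the requested value and caps it at net.core.rmem_max.
actual = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

# FIONREAD (a.k.a. SIOCINQ) reports pending data in the receive queue.
queued = struct.unpack("i", fcntl.ioctl(sock, termios.FIONREAD, b"\0" * 4))[0]
print("rcvbuf:", actual, "bytes queued:", queued)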

Can a Python socket receive buffer fill up if not read?

If you don't read from a Python TCP socket, will it fill up and cause an error?
In my code I use .send(), and there seems to be an ack reply from the device I'm talking to. If I don't read these out, will they build up and create a problem? Does it just keep storing them all infinitely? Surely this would cause a memory issue eventually...
thanks.
If you don't read from a TCP socket, then the recv buffer on the receiving end and the send buffer on the sending end will fill up, at which point your program will block on further send() calls.
How much memory each process will use depends on the size of those buffers, which depends on the operating system and socket options. For example, on Linux you would get into a situation like this:
$ ss -tpn
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 2595384 127.0.0.1:3333 127.0.0.1:2222 users:(("python3",pid=13088,fd=3))
ESTAB 964588 0 127.0.0.1:2222 127.0.0.1:3333 users:(("python3",pid=13087,fd=4))
The first line shows the sending process (full send queue, ~2.6MB), the second line the receiving process (full recv queue, ~1MB).
This happens because during data transfer using TCP, with each ACK the receiver tells the sender how much data it is ready to accept for the next transmission. If the receive buffer is full, the send buffer will also fill up and then no more data can be sent.
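A minimal sketch that makes this visible: write to a peer that never reads, with the sender in non-blocking mode so that instead of hanging it raises BlockingIOError once both buffers are full (socketpair() is used here for brevity; a TCP pair behaves the same way for this purpose):
import socket

a, b = socket.socketpair()   # a connected pair; b never calls recv()
a.setblocking(False)

total = 0
try:
    while True:
        total += a.send(b"x" * 65536)
except BlockingIOError:
    # a's send buffer and b's receive buffer are now both full.
    print("buffers filled after", total, "bytes")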

Sending And Receiving Bytes through a Socket, Depending On Your Internet Speed

I made a quick program that sends a file using sockets in python.
Server:
import socket, threading
#Create a socket object.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
#Bind the socket.
sock.bind( ("", 5050) )
#Start listening.
sock.listen()
#Accept client.
client, addr = sock.accept()
#Open a new jpg file.
file = open("out.jpg", "wb")
#Receive all the bytes and write them into the file.
while True:
    received = client.recv(5)
    #Stop receiving.
    if received == b'':
        file.close()
        break
    #Write bytes into the file.
    file.write(received)
Client:
import socket, threading
#Create a socket object.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
#Connect to the server.
sock.connect(("192.168.1.3", 5050))
#Open a file for read.
file = open("cpp.jpg", "rb")
#Read first 5 bytes.
read = file.read(5)
#Keep sending bytes until reaching EOF.
while read != b'':
    #Send bytes.
    sock.send(read)
    #Read the next 1024 bytes from the file.
    read = file.read(1024)
sock.close()
file.close()
From experience I learned that send can only send the amount of bytes that your network speed is capable of sending. If you do, for example, sock.send(20 GB of data), you are going to lose bytes because most network connections can't send 20 GB at once. You must send them part by part.
So my question is: how can I know the maximum amount of bytes that socket.send() can send over the internet? And how can I improve my program to send the file as quickly as possible, depending on my internet speed?
send makes no guarantees that all the data is sent (it's not directly tied to network speed; there are multiple reasons it could send less than requested), just that it lets you know how much was sent. You could explicitly write loops to send until it's all really sent, per Dunno's answer.
Or you could just use sendall and avoid the hassle. sendall is basically the wrapper described in the other answer, but Python does all the heavy lifting for you.
If you don't care about slurping the whole file into memory, you could use this to replace your whole loop structure with just:
sock.sendall(file.read())
If you're on modern Python (3.5 or higher) on a UNIX-like OS, you could optimize a bit to avoid even reading the file data into Python using socket.sendfile (which should only lead to partial send on error):
sock.sendfile(file)
If Python doesn't support os.sendfile on your OS, this is effectively just a loop that reads and sends repeatedly, but on a system that supports it, this directly copies from file to socket in the kernel, without even handling the file data in Python (which can improve throughput significantly by reducing system calls and eliminating some memory copies entirely).
Just send those bytes in a loop until all were sent, here's an example from the docs
def mysend(self, msg):
    totalsent = 0
    while totalsent < MSGLEN:
        sent = self.sock.send(msg[totalsent:])
        if sent == 0:
            raise RuntimeError("socket connection broken")
        totalsent = totalsent + sent
In your case, MSGLEN would be 1024, and since you're not using a class, you don't need the self argument
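Adapted to a standalone function, a sketch of the same loop might look like this:
def send_all(sock, msg):
    # Loop until every byte of msg has been handed to the kernel.
    totalsent = 0
    while totalsent < len(msg):
        sent = sock.send(msg[totalsent:])
        if sent == 0:
            raise RuntimeError("socket connection broken")
        totalsent = totalsent + sent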
There are input/output buffers at all steps along the way between your source and destination. Once a buffer fills, nothing else will be accepted on to it until space has been made available.
As your application attempts to send data, it will fill up a buffer in the operating system that is cleared as the operating system is able to offload that data to the network device driver (which also has a buffer).
The network device driver interfaces with the actual network and understands how to know when it can send data and how receipt will be confirmed by the other side (if at all). As data is sent, that buffer is emptied, allowing the OS to push more data from its buffer. That, in turn, frees up room for your application to push more of its data to the OS.
There are a bunch of other things that factor into this process (timeouts and max hops are two I can think of offhand), but the general process is that you have to buffer the data at each step until it can be sent to the next step.
From experience I learned that send can only send the amount of bytes that your network speed is capable of sending.
Since you are using a TCP Socket (i.e. SOCK_STREAM), speed-of-transmission issues are handled for you automatically. That is, once some bytes have been copied from your buffer (and into the socket's internal send-buffer) by the send() call, the TCP layer will make sure they make it to the receiving program, no matter how long it takes (well, within reason, anyway; the TCP layer will eventually give up on resending packets if it can't make any progress at all over the course of multiple minutes).
If you do, for example, sock.send(20 GB of data), you are going to lose bytes because most network connections can't send 20 GB at once. You must send them part by part.
This is incorrect; you are not going to "lose bytes", as the TCP layer will automatically resend any lost packets when necessary. What might happen, however, is that send() might decide not to accept all of the bytes that you offered it. That's why it is absolutely necessary to check the return value of send() to see how many bytes send() actually accepted responsibility for -- you cannot simply assume that send() will always accept all the bytes you offered to it.
So my question is: how can I know the maximum amount of bytes that socket.send() can send over the internet?
You can't. Instead, you have to look at the value returned by send() to know how many bytes send() has copied out of your buffer. That way, on your next call to send() you'll know what data to pass in (i.e. starting with the next byte after the last one that was sent in the previous call).
And how can I improve my program to send the file as quickly as possible, depending on my internet speed?
Offer send() as many bytes as you can at once; that will give it the most flexibility to optimize what it's doing behind the scenes. Other than that, just call send() in a loop, using the return value of each send() call to determine what bytes to pass to send() the next time (e.g. if the first call returns 5, you know that send() read the first 5 bytes out of your buffer and will make sure they get to their destination, so your next call to send() should pass in a buffer starting at the 6th byte of your data stream... and so on). (Or if you don't want to deal with that logic yourself, you can call sendall() like #ShadowRanger suggested; sendall() is just a wrapper containing a loop around send() that does that logic for you. The only disadvantage is that e.g. if you call sendall() on 20 gigabytes of data, it might be several hours before the sendall() call returns! Whether or not that would pose a problem for you depends on what else your program might want to accomplish, if anything, while sending the data).
That's really all there is to it for TCP.
If you were sending data using a UDP socket, on the other hand, things would be very different; in the UDP case, packets can simply be dropped, and it's up to the programmer to manage speed-of-transmission issues, packet resends, etc., explicitly. But with TCP all that is handled for you by the OS.
@Jeremy Friesner
So I can do something like this:
file = open(filename, "rb")
read = file.read(1024**3) #Read 1 gb.
totalsend = 0
#Send loop
while totalsend < filesize:
    #Try to send all the bytes.
    send = sock.send(read)
    totalsend += send
    #If not all bytes were sent, seek to the position in the file
    #where the next read will pick up the missing bytes.
    if send < 1024**3:
        file.seek(totalsend)
        read = file.read(1024**3) #Read 1 gb.
Is this correct?
Also, from this example I understood one more thing: the data you send in every loop iteration can't be bigger than your memory, because you are bringing bytes from the disk into memory. So theoretically, even if your network speed were infinite, you couldn't send all the bytes at once if the file is bigger than your memory.
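For comparison, a sketch that avoids both the seek bookkeeping and holding a full gigabyte in memory: read the file in modest chunks and, for each chunk, keep sending the unsent tail (a memoryview avoids copying) until the whole chunk is out. The chunk size and variable names are arbitrary:
CHUNK = 64 * 1024   # chunk size is arbitrary; it bounds memory use

with open(filename, "rb") as f:
    while True:
        chunk = f.read(CHUNK)
        if not chunk:              # EOF reached
            break
        view = memoryview(chunk)
        while view:                # resend whatever send() didn't take yet
            sent = sock.send(view)
            view = view[sent:]
The inner loop could equally be replaced with a single sock.sendall(chunk) call.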

Why only 1024 bytes are read in socketserver example

I am reading through the documentation examples for python socketserver at https://docs.python.org/2/library/socketserver.html
Why is the size specified as 1024 in the line self.request.recv(1024) inside the handle method? What happens if the data sent by the client is more than 1024 bytes?
Is it better to have a loop that reads 1024 bytes at a time until the socket is empty? I have copied the example here:
import SocketServer

class MyTCPHandler(SocketServer.BaseRequestHandler):
    """
    The RequestHandler class for our server.

    It is instantiated once per connection to the server, and must
    override the handle() method to implement communication to the
    client.
    """

    def handle(self):
        # self.request is the TCP socket connected to the client
        self.data = self.request.recv(1024).strip()  # why only 1024 bytes?
        print "{} wrote:".format(self.client_address[0])
        print self.data
        # just send back the same data, but upper-cased
        self.request.sendall(self.data.upper())

if __name__ == "__main__":
    HOST, PORT = "localhost", 9999

    # Create the server, binding to localhost on port 9999
    server = SocketServer.TCPServer((HOST, PORT), MyTCPHandler)

    # Activate the server; this will keep running until you
    # interrupt the program with Ctrl-C
    server.serve_forever()
When reading from a socket, you always need to read in a loop.
The reason is that even if the source sent say 300 bytes over the network it's possible for example that the data will arrive to the receiver as two separate chunks of 200 bytes and 100 bytes.
For this reason when you specify a buffer size for recv you only say the maximum amount you're willing to process, but the actual data amount returned may be smaller.
There is no way to implement a "read until the end of the message" at the Python level because the send/recv functions are simply wrappers of the TCP socket interface and that is a stream interface, without message boundaries (so there is no way to know if "all" the data has been received from the source).
This also means that in many cases you will need to add your own message boundaries if you want to talk in terms of messages (or you will need to use a higher-level, message-based network transport like 0MQ).
Note that "blocking mode" - when reading from a socket - only defines the behavior when there is no data already received by the network layer of the operating system: in that case, when blocking - the program will wait for a chunk of data; if non-blocking instead - it will return immediately without waiting. If there is any data already received by the computer, then the recv call immediately returns even if the passed buffer size is bigger - independently of the blocking/non-blocking setting.
Blocking mode doesn't mean that the recv call will wait for the buffer to be filled.
NOTE: The Python documentation is indeed misleading on the behavior of recv and hopefully will be fixed soon.
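A small illustration of the difference, assuming sock is a connected TCP socket with nothing queued at the moment of the second call:
sock.setblocking(True)
data = sock.recv(4096)       # waits until at least one byte arrives (or the peer closes)

sock.setblocking(False)
try:
    data = sock.recv(4096)   # returns immediately with whatever is already queued
except BlockingIOError:
    data = b""               # nothing was queued right now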
A TCP socket is just a stream of bytes. Think of it like reading a file. Is it better to read a file in 1024-byte chunks? It depends on the content. Often a socket, like a file, is buffered and only complete items (lines, records, whatever is appropriate) are extracted. It's up to the implementer.
In this case, a maximum of 1024 is read. If a larger amount is sent, it will be broken up. Since there is no defined message boundary in this code, it really doesn't matter. If you care to receive only complete lines, implement a loop to read data until a message boundary is determined. Perhaps read until a carriage return is detected and process a complete line of text.
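For instance, a handle() that frames on newlines could look like this sketch (it assumes the client terminates each message with b"\n"):
def handle(self):
    buf = b""
    while True:
        chunk = self.request.recv(1024)
        if not chunk:                      # client closed the connection
            break
        buf += chunk
        while b"\n" in buf:                # process every complete line received so far
            line, buf = buf.split(b"\n", 1)
            self.request.sendall(line.upper() + b"\n")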
