I use the following code snippet to send data to a web server. In this case, I don't care at all about the HTML page the server wants to send back. The server supports other clients that do need that page, so I can't reduce the amount of HTML that is going to come back at my Python client on the Raspberry Pi. What I'm seeing is that the ESP-8266 server frequently seems to hang waiting to send data back to my Python/Pi client, as if the Pi stops accepting the data until something times out. As a test, I eliminated 50% of the web page being served, and then it works fine on the Pi/Python. Should I be doing something in the Python code to set a buffer size, or issue a command to ensure the data is discarded and not kept in a socket buffer somewhere that could overflow or otherwise cause Python on the Pi to stop accepting server data?
import http.client
import socket

htmlString = ("/Interior, /COLOR,r=" + str(dimPixelIn[0]).zfill(3) +
              ",g=" + str(dimPixelIn[1]).zfill(3) +
              ",b=" + str(dimPixelIn[2]).zfill(3))
conn = http.client.HTTPConnection(awningAddress, timeout=0.3)
try:
    conn.request("GET", htmlString)
except socket.timeout:
    print("Error")
except http.client.HTTPException:
    print("Error")
finally:
    conn.close()
A TCP connection consists of two almost independent unidirectional streams. conn.close() closes only the stream from the client to the server; the stream from the server to the client is still open, and the server keeps sending data.
I know of two options to prevent the server from sending data that is not needed (a rough sketch of both follows below).
Do not use the GET method; use the OPTIONS method instead. If the server supports OPTIONS (and it should), it handles it like a GET request but sends an HTTP response with headers only and no body.
Reset the connection instead of closing it. You can reset the connection by setting the SO_LINGER socket option - see Sending a reset in TCP/IP Socket connection for an example.
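As a rough sketch of both options (not part of the original answer; it reuses awningAddress and htmlString from the question, relies on HTTPConnection's internal sock attribute, and shows the SO_LINGER trick as it works on Linux):

import http.client
import socket
import struct

conn = http.client.HTTPConnection(awningAddress, timeout=0.3)

# Option 1: ask for headers only; an OPTIONS request should come back without a body.
# conn.request("OPTIONS", htmlString)

# Option 2: arrange for close() to reset the connection instead of closing it gracefully.
conn.connect()
# l_onoff=1, l_linger=0 -> close() sends RST and discards any data still queued in either direction
conn.sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
try:
    conn.request("GET", htmlString)
finally:
    conn.close()   # resets the connection; the HTML reply is thrown away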
I have a server and a client written in Python. The server is implemented using asyncio and a library called 'websockets', so it has an asynchronous architecture. The client, on the other hand, is implemented with a library called 'websocket-client'. They are two different code bases and repositories.
In the server repository I call the serve method to start a websocket server that accepts connections from clients and allows them to send messages to the server. It looks like this:
async with serve(
    self.messages_loop, host, port, create_protocol=CentralRouterServerProtocol
) as ws_server:
    ...
The client uses the websocket-client library and connects to the websocket by calling the 'create_connection' method. Later it calls the 'send' method to send a message to the server. Code:
client = create_connection(f'ws://{central_router.public_ip}', timeout=24*60*60, header=cls.HEADERS)
# sent later, in a loop, after the user types something into the input:
cls.get_client().send(json.dumps(message_dict))
The main requirement is that the client can only send messages; it can't read them. The server sends a ping every X seconds to confirm that the connection is alive and waits another Y seconds for the client to reply. The client can't reply, because it is running in a synchronous block of code, so the server closes the connection, but the client doesn't know about it. The client is not reading from the websocket (so it can't get the information that the websocket was closed -> is that true?). Later, somebody types something into the input and the client sends a message to the server. AND NOW -> the websocket-client send method does not raise any exception (that the connection is closed), but the message will never reach the server. If the user types a message one more time, it finally gets the exception
[Errno 32] Broken pipe
but the first message after the connection close never raises an error/exception.
Why is that? What is going on? My first solution was to set ping_timeout to None on the server side. That stops the server from waiting those Y seconds for a response, so it never closes a connection. However, this is the wrong solution, because it can cause zombie connections on the server side.
Does anyone know why the client can send one more message successfully after the pipe was broken?
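For what it's worth, the same behaviour can be reproduced with bare sockets, independent of websockets. The sketch below is mine, not from the question; the 127.0.0.1:9999 address and the sleeps are arbitrary, and the exceptions are as seen on Linux. The first send() after the peer closes is accepted by the local kernel (the peer then answers with RST), and only the second send() fails with [Errno 32]:

import socket
import threading
import time

def server():
    srv = socket.socket()
    srv.bind(("127.0.0.1", 9999))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.close()                      # server closes the connection right away (FIN)

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)

cli = socket.create_connection(("127.0.0.1", 9999))
time.sleep(0.2)                       # let the FIN arrive; the client only ACKs it
cli.send(b"first")                    # accepted locally; the peer answers with RST
time.sleep(0.2)
cli.send(b"second")                   # now raises BrokenPipeError ([Errno 32])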
I have to write code in Python to create a TCP server. Different devices (clients) will connect to this server and send a file of approx 1 GB.
The problem is that after receiving approx 500 MB of the file, this server should simulate that the TCP connection has broken unexpectedly and stop responding to the client's TCP packets with ACKs. Then the client will retry to set up the connection. Is there a way to do this? How can I simulate this negative scenario in Python 2.7?
Please help!
You need to track how much data has been received by the server by storing the amount in a variable, like so:
received = 0
# when receiving data:
data = sock.recv(2048)
received += len(data)  # recv() may return fewer bytes than the buffer size
and once this value reaches the desired threshold, you can simply close the socket using
sock.close()
This will cause a connection reset (or broken pipe) error on the client, which forces the client to set up a new connection / reconnect the socket to the server.
Be aware that recv() can return fewer bytes than the buffer size you pass it, which is why the counter above adds len(data) rather than a fixed buffer size.
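Putting those pieces together, a minimal blocking server along these lines might look like the sketch below (my own, not part of the answer; the 0.0.0.0:5000 bind address, the 500 MB threshold and the single-connection loop are assumptions, and it runs on both Python 2.7 and 3). Note that close() still tears the connection down with a FIN/RST rather than silently ignoring the client's packets:

import socket

THRESHOLD = 500 * 1024 * 1024                      # ~500 MB, as in the question

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 5000))
srv.listen(1)

conn, addr = srv.accept()
received = 0
while True:
    data = conn.recv(2048)
    if not data:                                   # client finished or disconnected
        break
    received += len(data)                          # count what was actually received
    if received >= THRESHOLD:
        conn.close()                               # drop the transfer partway through
        break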
I have a basic implementation of a TCP client using Python sockets; all the client does is connect to a server and send heartbeats every X seconds. The problem is that I don't want to send the server a heartbeat if the connection is closed, but I'm not sure how to detect this situation without actually sending a heartbeat and catching an exception. When I shut down the server, I see a FIN/ACK arriving in the traffic capture and the client sends an ACK back; this is the point where I want my code to do something (or at least change some internal state of the connection). Currently, what happens is that after the server goes down and X seconds have passed since the last heartbeat, the client tries to send another heartbeat; only then do I see an RST packet in the capture and get a broken pipe exception (errno 32). Clearly the Python socket handles the transport layer and the heartbeats are part of the application layer. The problem I want to solve is not sending the redundant heartbeat after the FIN/ACK arrived from the server. Is there any simple way to know the connection state with a Python socket?
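Not from this thread, but one commonly used check for this heartbeat case is to poll the socket with a zero-timeout select and peek at it: once the server's FIN has arrived, the socket reports readable and recv() returns an empty byte string, so the client can skip the heartbeat and reconnect instead. peer_closed is a name made up for the sketch:

import select
import socket

def peer_closed(sock):
    # Returns True once the peer's FIN (or an RST) has been seen by the kernel.
    readable, _, _ = select.select([sock], [], [], 0)
    if not readable:
        return False                      # nothing pending: connection still looks open
    try:
        data = sock.recv(1, socket.MSG_PEEK)  # MSG_PEEK leaves any real data in the buffer
    except OSError:
        return True                       # e.g. connection reset
    return data == b""                    # empty read on a readable socket == orderly close

# before each heartbeat:
# if peer_closed(sock):
#     reconnect()
# else:
#     sock.sendall(b"heartbeat")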
I'm now familiar with the general cause of this problem from another SO answer and from the uWSGI documentation, which states:
If an HTTP request has a body (like a POST request generated by a form), you have to read (consume) it in your application. If you do not do this, the communication socket with your webserver may be clobbered.
However, I don't understand what exactly is happening at the TCP level for this problem to occur. Not knowing the details of this process, I would assume the server can simply discard what remains in the stream, but that's obviously not the case.
If I consume only part of the request body in my application and ultimately return a 200 response, a web browser will report a connection reset error. Who reset the connection? The webserver or the client? It seems like all the data has been sent by the client already, but the application has just not exhausted the stream. Is there something that happens when the stream is exhausted in the application that triggers the webserver to indicate it has finished reading?
My application is Python/Flask, but I've seen questions about this from several languages and frameworks. For example, this fails if exhaust() is not called on the request stream:
from flask import Flask, jsonify, request
import pandas

app = Flask(__name__)

@app.route('/upload', methods=['POST'])
def handle_upload():
    file = request.stream
    pandas.read_csv(file, nrows=100)   # read only the first part of the body
    response = ...                     # Do stuff
    file.exhaust()                     # drain the rest of the request body
    return jsonify(response)
While there is some buffering throughout the chain, a large transfer is not going to complete until the receiver consumes it. Once the buffers fill up, TCP flow control stops the client from sending anything more until they are drained. If the application never reads the rest of the body, the browser will eventually give up trying to send the file and the connection gets dropped.
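A small self-contained way to watch this stall happen (my own sketch; the 64 KiB chunk size is arbitrary) is to connect two sockets, never read on the receiving side, and send non-blocking until the kernel refuses to buffer any more:

import socket

srv = socket.socket()
srv.bind(("127.0.0.1", 0))                # any free port
srv.listen(1)
cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()                    # receiving side: never calls recv()

cli.setblocking(False)
sent = 0
try:
    while True:
        sent += cli.send(b"x" * 65536)    # fills the send buffer and the peer's receive window
except BlockingIOError:
    print("send stalled after", sent, "bytes; nothing moves until the receiver reads")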
I have a TCP server with code that looks like this (in a loop):
r, w, e = select([self.sock], [self.sock], [self.sock], 0)
if r or e:
    try:
        data = self.sock.recv(2048)
    except:
        debug("%s: .recv() crashed!" % self.id)
        debug(traceback.format_exc())
        break
For some reason, my connection from the client to this server randomly breaks, but I only see that it broke once I try to send data; then I get the error from recv(). Is there any way to detect the error without trying to send data?
Depending on how the connection was closed, you may not know it's closed until you attempt a send. By default, TCP won't automatically detect if the remote machine disappears without sending a disconnect.
What you're seeing is the correct behavior and something you need to handle. Make sure you don't assume that all exceptions from recv are "crashes", though. I don't know Python, but there are likely different exceptions (including disconnects) that you need to handle, which the code you posted doesn't deal with properly.
You should either enable TCP keepalive or send application-layer no-op packets to determine if your connection is still open.
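For the TCP keepalive route, a sketch of how that might look in Python (the idle/interval/count values are arbitrary, and the TCP_KEEP* constants are Linux-specific, hence the hasattr guard):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)   # enable keepalive probes

# Tune the probe timing where the platform exposes it (Linux):
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 10)   # seconds idle before first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 5)   # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)     # failed probes before the socket errors out

sock.connect(("example.com", 12345))      # placeholder peer; once probes fail, later send()/recv() raise an error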