I have a basic implementation of a TCP client using Python sockets. All the client does is connect to a server and send heartbeats every X seconds. The problem is that I don't want to send the server a heartbeat if the connection is closed, but I'm not sure how to detect this situation without actually sending a heartbeat and catching an exception.
When I turn off the server, I see a FIN/ACK arrive in the traffic capture and the client sends an ACK back; this is when I want my code to do something (or at least change some internal state of the connection). Currently, what happens is that after the server goes down and X seconds have passed since the last heartbeat, the client tries to send another heartbeat. Only then do I see an RST packet in the capture and get a broken pipe exception (errno 32).
Clearly the Python socket handles the transport layer and the heartbeats are part of the application layer. The problem I want to solve is not sending the redundant heartbeat after the FIN/ACK has arrived from the server. Is there any simple way to know the connection state with a Python socket?
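One way to avoid the redundant heartbeat (a minimal sketch, not the asker's code) is to poll the socket for readability just before each heartbeat. Because this client never expects application data from the server, a readable socket whose recv() returns b'' means the peer has sent its FIN; MSG_PEEK keeps the check from consuming anything that does arrive:

import select
import socket

def closed_by_peer(sock: socket.socket) -> bool:
    # Non-blocking readability check: a timeout of 0 returns immediately.
    readable, _, _ = select.select([sock], [], [], 0)
    if not readable:
        return False                                  # nothing pending, connection looks alive
    try:
        data = sock.recv(4096, socket.MSG_PEEK)       # peek so real data is not consumed
    except (ConnectionResetError, ConnectionAbortedError):
        return True                                   # an RST already arrived
    return data == b''                                # b'' means the peer sent FIN

# Before each heartbeat (hypothetical usage):
#     if closed_by_peer(sock):
#         handle_disconnect()
#     else:
#         sock.sendall(heartbeat)

The caveat is that if the server could legitimately send data, readability alone does not mean the connection is closed, which is why the sketch peeks and checks for an empty read.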
I have a server and a client written in Python. My server is implemented using asyncio and a library called 'websockets', so it has an asynchronous architecture. The client, on the other hand, is implemented with a library called 'websocket-client'. They are two different code bases and repositories.
In the server repository I call the serve method to start a websocket server that accepts connections from clients and allows them to send messages to the server. It looks like this:
async with serve(
self.messages_loop, host, port, create_protocol=CentralRouterServerProtocol
) as ws_server:
...
The client uses the websocket-client library and connects to the websocket by calling the 'create_connection' method. Later it calls the 'send' method to send a message to the server. Code:
client = create_connection(f'ws://{central_router.public_ip}', timeout=24*60*60, header=cls.HEADERS)
cls.get_client().send(json.dumps(message_dict))  # sent later in a loop, after the user types something into the input
The main requirement is that the client can only send messages; it cannot read them. The server sends a ping every X seconds to confirm that the connection is alive and then waits another Y seconds for the client to reply. The client cannot reply to the server, because it is running in a synchronous block of code. The server closes the connection, but the client doesn't know about it. The client is not reading from the websocket (so it can't get information about the closed websocket; is that true?).
After that, somebody types something into the input and the client sends a message to the server. AND NOW: the websocket-client send method does not raise any exception (that the connection is closed), but the message will never reach the server. If the user types a message one more time, it finally gets an exception
[Errno 32] Broken pipe
but the first message after the connection was closed never raises an error/exception.
Why is that? What is going on? My first solution was to set ping_timeout to None on the server side. That makes the server not wait those Y seconds for a response, so it never closes the connection. However, this is the wrong solution, because it can cause zombie connections on the server side.
Does anyone know why the client can send one more message successfully after the pipe was broken?
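For what it's worth, the same behaviour can be reproduced with plain sockets, which shows why the first send "succeeds" (a hypothetical setup, not the websocket code above): send() only copies bytes into the kernel's send buffer, the peer then answers the unexpected data with an RST, and only the next send finds the connection dead:

import socket
import time

client = socket.create_connection(("127.0.0.1", 9000))  # hypothetical server that drops idle clients
time.sleep(30)               # meanwhile the server times us out and sends FIN
client.sendall(b"first")     # usually no error: the bytes are merely buffered by the kernel
time.sleep(0.5)              # the peer answers the unexpected data with RST
client.sendall(b"second")    # raises BrokenPipeError: [Errno 32] Broken pipe

websocket-client's send goes through a plain socket send underneath, so it surfaces the error in the same delayed way; only reading from the socket (or checking it with select) reveals the close earlier.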
I am creating a file server and need to have several clients send images to a server. In the client send method, I am shutting down the socket after the image has been sent to tell the server to stop receiving. Is it possible to keep the same socket connection for the next time that client sends an image rather than reconnecting with a new socket?
No. A shutdown is a definitive operation at the underlying socket library level. It is not intended to be used as a transfer acknowledgment, but only as part of a graceful shutdown.
If you want to re-use the connection, you must use a different mechanism in your protocol to signal the end of a transmission. Common approaches are size + data (a binary protocol) or commands plus encoded data (a text protocol).
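For example, here is a minimal sketch of the size + data variant (the function names are illustrative, not from the question): each image is sent as a 4-byte big-endian length followed by the payload, so the same connection can carry the next image without any shutdown:

import socket
import struct

def send_image(sock: socket.socket, data: bytes) -> None:
    # Prefix the payload with its length so the receiver knows where it ends.
    sock.sendall(struct.pack("!I", len(data)) + data)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    # Keep reading until exactly n bytes have arrived.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed before the full frame arrived")
        buf += chunk
    return buf

def recv_image(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

The receiver simply loops on recv_image() for as long as the client keeps the connection open; the connection is only closed when the client is completely done.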
I use a TCP server in Python that implements this class:
class ThreadedTCPServer(SocketServer.ThreadingTCPServer):
pass
The normal use of it works perfectly (initiating the server, handling requests and so on).
Now I need to send a message to the clients outside of the handle function of the TcpRequestHandler(SocketServer.BaseRequestHandler) class.
I tried the following trick of using the server's internal socket (it works for UDP):
tcp_server.client_socket.send(message)
But I get this error message:
socket.error: [Errno 10057] A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied
So I assume it is not possible for TCP.
Is there any other way to do it?
I assume some servers sometimes need to send messages to their clients (messages that are not just responses to requests), but I couldn't find a good way to do it.
Thanks!
You have two general options with TCP:
Send a message to the client out of band (OOB). In this, the server connects separately to the client and the roles are reversed. The client has to listen on a port for OOB messages and acts as a server in this regard. From your problem description you don’t want to do this, which is fine.
Implement a protocol where the server can send messages to the client in response to incoming messages. You will need a way to multiplex the extra messages along with any expected return value to the initiating message. You could implement this with a shared queue on your server. You put messages into this queue outside of your handler and then when the handler is responding to messages you consume from the queue and insert them into the response.
If that sounds like something you are interested in, there is a rough sketch of the queue approach after the pros and cons below.
There are pros & cons between both approaches:
In (1) you have more socket connections to manage and you expose the client host to connections which you might not desire. The protocols are simpler because they are not multiplexed.
In (2) you only have a single TCP stream but you have to multiplex your OOB message. You also have increased latency if the client is not regularly contacting the server.
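A rough sketch of option (2), assuming the Python 3 module name socketserver (SocketServer in Python 2) and, for simplicity, a single connected client; with several clients you would want one queue per connection:

import queue
import socketserver

outgoing = queue.Queue()              # filled from anywhere outside the handler

class Handler(socketserver.BaseRequestHandler):
    def handle(self):
        while True:
            data = self.request.recv(4096)
            if not data:
                break                                 # client closed the connection
            reply = [b"echo: " + data.strip()]        # the normal response to the request
            while True:                               # drain queued server-initiated messages
                try:
                    reply.append(outgoing.get_nowait())
                except queue.Empty:
                    break
            self.request.sendall(b"\n".join(reply) + b"\n")

# Elsewhere in the program, outside the handler:
#     outgoing.put(b"server announcement")

with socketserver.ThreadingTCPServer(("0.0.0.0", 8000), Handler) as server:
    server.serve_forever()

The client only sees the queued messages the next time it talks to the server, which is the latency trade-off mentioned under (2).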
Hope that helps.
I'm working on a NetHack clone that is supposed to be played through Telnet, like many NetHack servers. As I've said, this is a clone, so it's being written from scratch in Python.
I've set up my socket server by reusing code from an SMTP server I wrote a while ago, and all of a sudden my attention jumped to this particular line of code:
s.listen(15)
My server was designed to be able to handle 15 simultaneous clients, just in case the data exchange with any of them took too long, though ideally listen(1) or listen(2) would be enough. But this case is different.
As happens with Alt.org when you telnet into their NetHack servers, people connected to my server should be able to play my roguelike remotely through a single telnet session, so I guess this connection should not be interrupted. Yet, I've read here that
[...] if you are really holding more than 128 queued connect requests you are a) taking too long to process them or b) need a heavy-weight distributed server or c) suffering a DDoS attack.
What is the better practice here? Should I keep every connection open until the connected user disconnects, or is there another way? Should I go for listen(128) (or whatever my system's socket.SOMAXCONN is), or is that bad practice?
The number in listen(number) limits the number of pending connect requests.
A connect request is pending from the moment the OS receives the initial SYN until you call the socket's accept method. So the number does not limit the number of open (established) connections; it limits the number of connections still waiting to be accepted (in the SYN_RECV state).
It is a bad idea not to answer an incoming connection, because:
The client will retransmit SYN requests until an answering SYN is received.
The client cannot distinguish between your server being unavailable and its request simply waiting in the queue.
A better idea is to answer the connection, send the client some message with the rejection reason, and then close the connection.
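To make both points concrete, here is a hedged sketch (the port, the player limit, and the play_session stub are made up): once accept() hands a connection to its own thread it no longer counts against the backlog, so long telnet sessions are not limited by listen()'s argument, and clients beyond your own limit can be accepted, told why, and closed instead of being left in the queue:

import socket
import threading

MAX_PLAYERS = 15                      # illustrative application-level limit
sessions = []                         # live game threads

def play_session(conn):
    # Placeholder for the actual game loop.
    with conn:
        conn.sendall(b"Welcome to the dungeon!\r\n")
        while conn.recv(1024):
            pass

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 2323))
listener.listen(socket.SOMAXCONN)     # the backlog only bounds not-yet-accepted connections
while True:
    conn, addr = listener.accept()    # accept promptly so the queue stays short
    sessions[:] = [t for t in sessions if t.is_alive()]
    if len(sessions) >= MAX_PLAYERS:
        conn.sendall(b"Server full, try again later.\r\n")   # answer, explain, close
        conn.close()
        continue
    thread = threading.Thread(target=play_session, args=(conn,), daemon=True)
    sessions.append(thread)
    thread.start()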
I have implemented Modbus over TCP as server software in Python. The app is multithreaded and relies heavily on the standard library. I have problems managing the connection on the server side.
Meanwhile, my Modbus over TCP client implementation works just fine.
Implementation description
The server is multithreaded; one thread manages the SOCK_STREAM socket for receiving frames.
select is used for efficiency reasons.
A semaphore is used to prevent concurrent access to the socket resource while sending or receiving.
Encapsulation of the Modbus upper layer is done transparently through send and receive methods; it is only a matter of building a frame with the right header and payload anyway...
Another thread runs; inside it, the Modbus send and receive methods are invoked.
TCP Context
TCP is up and running: bound to a port, max clients set, and listening.
Traces under Wireshark show:
Client: SYN
My app Server: SYN, ACK
Client: ACK
On the server side a brand new socket has been created as expected and bound to the client socket.
So far, all is good.
Modbus Context
Client: sends a Modbus frame, TCP flags = 0x18, which is ACK + PSH.
My app's server: does not wait and sends a single empty TCP ACK frame.
Client: expects a Modbus frame carrying the TCP ACK flag, so it treats the empty ACK as an error and asks to close the connection.
Hence, my server software cannot send any actual response afterwards, as the socket on the client side is being closed or is already closed.
My problem
I receive a Modbus frame that the main thread needs to process (server side).
Processing takes a few ms; in the meantime a TCP ACK frame is sent through my server socket, whereas I would like it not to send anything!
Do you have any idea how to manage the ACK behaviour? I have read about the Nagle algorithm, but it does not seem to be in the scope of the problem here...
I'm also not sure that any option of the setsockopt method would solve my problem, but I may be mistaken.
If you have any suggestion I am very interested...
I hope I am clear enough.
It seems like a strange requirement that all TCP packets must contain a payload, as this is very difficult to control unless you are integrated with the TCP stack. If it really is the case that the client treats the connection as broken because the ACK has no Modbus payload, I think the only thing you can do from Python is try disabling the TCP_QUICKACK socket option so that TCP waits up to 500ms before sending an ACK. This obviously won't work in all cases (or may not work at all if it takes your code > 500ms to create a response), but I don't know of any other options using the socket API from Python.
This SO answer shows how to toggle the option: Disable TCP Delayed ACKs. It should be easy to work out the opposite setting from that. Note that the option is not sticky, so you need to keep re-applying it after receiving data.
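A hedged, Linux-only sketch of that idea (socket.TCP_QUICKACK is only exposed where the platform defines it, and build_response stands in for your Modbus processing):

import socket

def handle_request(conn: socket.socket) -> None:
    # Ask the kernel for delayed ACKs so the ACK can piggyback on our response.
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_QUICKACK, 0)
    data = conn.recv(260)                                         # one Modbus TCP ADU
    if not data:
        return
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_QUICKACK, 0)   # not sticky: re-apply after each recv
    conn.sendall(build_response(data))    # if built quickly enough, the ACK rides on this frame

Even then the kernel decides when the delayed-ACK timer fires, so this is best-effort; the robust fix is really on the client side, which should accept a bare ACK as normal TCP behaviour.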