Is it possible to set a timeout for socket.close() - python

I'm working with sockets and have noticed that my program sometimes gets 'stuck' when I call socket.close(). Is it possible to set a timeout for how long socket.close() is allowed to take before just continuing? Would calling socket.shutdown() first fix the problem?
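For what it's worth, close() takes no timeout argument, and by default it should not block at all (the kernel finishes the teardown in the background); a close() that blocks usually points at SO_LINGER being enabled on the socket. A minimal sketch of the usual shutdown-then-close pattern, where close_with_timeout is a hypothetical helper:

    import socket

    def close_with_timeout(sock, timeout=5.0):
        # close() itself accepts no timeout; if it blocks, check whether
        # SO_LINGER was enabled on this socket somewhere.
        try:
            sock.settimeout(timeout)          # bound any remaining blocking calls
            sock.shutdown(socket.SHUT_RDWR)   # tell the peer both directions are done
        except socket.error:
            pass                              # peer may already be gone; nothing to do
        finally:
            sock.close()                      # release the file descriptor regardless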

Related

Python Timeout on a thread, without the use of join

So I'm working on a TCP server in Python.
We are having a problem with threads not exiting properly.
We want it to handle multiple connections, so when a new connection arrives a new thread is started. However, these threads don't always exit properly. They have return statements, but when I check threading.activeCount() I find that the count builds up.
I have no idea what the problem is. The threads build up and then eventually dissipate. I want to add a thread timeout, but I don't want to use Thread.join(), because if a new connection is established the code won't reach the receive functions, which would cause the client to time out due to a lack of response.
Any suggestions for a timeout mechanism? I can't seem to find one in the Python docs.
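There is no way to kill or time out a thread directly in the standard library; what usually works is making sure each per-connection thread cannot block forever, by putting a timeout on its socket so that recv() eventually raises and the function returns. A minimal sketch, assuming an echo-style handler (the 30-second idle limit is arbitrary):

    import socket
    import threading

    def handle_client(conn, addr):
        conn.settimeout(30.0)        # recv() raises socket.timeout instead of blocking forever
        try:
            while True:
                data = conn.recv(4096)
                if not data:         # empty bytes: the peer closed the connection
                    break
                conn.sendall(data)   # echo-style placeholder for the real protocol
        except socket.timeout:
            pass                     # idle too long: fall through and let the thread exit
        finally:
            conn.close()

    # in the accept loop:
    # thread = threading.Thread(target=handle_client, args=(conn, addr))
    # thread.daemon = True           # a stuck handler won't keep the process alive
    # thread.start()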

Abort long running http operation

In my (Python) code I have a thread listening for changes from a CouchDB feed (continuous changes). The changes request has a timeout parameter, which is too big in certain circumstances (for example, when a user wants to interrupt the program manually with ^C).
How can I abort a long-running blocking http request?
Is this possible, or do I need to reduce the timeout to make my program more responsive?
This would be unfortunate, because having a timeout small enough to make the program really responsive (say, 1 s) means that lots of connections are created (one per second!), which defeats the purpose of listening for changes and makes it very difficult to be sure we are not missing any (changes can indeed be missed in the reconnection window, so special code is needed to handle that case).
The other option is to forcefully abort the thread, but that is not really an option in Python.
If I understand correctly, you are waiting too long between requests before deciding whether to respond to your users. You are right that continuously closing and creating new connections will defeat the purpose of the changes feed.
A solution could be to use the heartbeat query parameter, with which CouchDB keeps sending newlines to tell the client that the connection is still alive.
http://localhost:5984/hello/_changes?feed=continuous&heartbeat=1000&include_docs=true
As long as you are receiving heartbeats (newlines) you can be sure the connection is alive and you are not missing changes. A bare newline indicates that no changes have occurred, whereas an actual change will be reported back as data. There is no need to close the connection: respond to your clients whenever resp != "\n".
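A minimal sketch of consuming such a feed with a streaming HTTP client (requests here is just an example, and handle_change is a hypothetical callback):

    import requests  # any streaming HTTP client works; requests is just an example

    url = "http://localhost:5984/hello/_changes"
    params = {"feed": "continuous", "heartbeat": 1000, "include_docs": "true"}

    def handle_change(line):
        print(line)                  # hypothetical: do something with one change document

    with requests.get(url, params=params, stream=True) as resp:
        for line in resp.iter_lines(chunk_size=1):
            if not line:             # blank line: a heartbeat, connection still alive
                continue
            handle_change(line)      # non-blank line: an actual change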
Blocking the thread's execution generally prevents the thread from being terminated; you need to wait until the request times out. But this is already clear.
Using a library that supports non-blocking requests might be a solution, but I don't know of one.
Anyway, you've mentioned that reducing the timeout leads to more connections. I'd suggest implementing a waiting loop between requests that can be interrupted by an external signal to terminate the thread; see the sketch below. With this loop you can control the number of requests independently of the timeout.
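A minimal sketch of such an interruptible loop, using threading.Event as the 'external signal' (fetch_changes stands in for the actual long-polling request):

    import threading

    stop = threading.Event()         # the "external signal": set it to terminate the thread

    def fetch_changes():
        pass                         # hypothetical: one long-polling changes request

    def poll_changes():
        while not stop.is_set():
            fetch_changes()
            stop.wait(5.0)           # interruptible pause: returns early once stop is set

    worker = threading.Thread(target=poll_changes)
    worker.start()
    try:
        while worker.is_alive():
            worker.join(0.5)         # short joins keep the main thread responsive to ^C
    except KeyboardInterrupt:
        stop.set()                   # the worker exits after its current request finishes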

Python socket.connect hangs sometimes

I'm learning to use sockets in python and something weird is happening.
I call socket.connect in a try block, and typically it either completes and I have a new socket connection, or it raises an exception. Sometimes, however, it just hangs.
I don't understand why sometimes it returns (even without connecting!) and other times it just hangs. What makes it hang?
I am using blocking sockets (non-blocking ones don't seem to work for connect...), so I've added a timeout, but I'd prefer connect to finish without needing to time out.
Perhaps, when it doesn't hang, it receives a response that tells it the requested ip/port is not available, and when it does hang there is just no response from the other end?
I'm on OS X 10.8 using Python 2.7.
When connect() hangs, it is usually because you are connecting to an address that is behind a firewall and the firewall just drops your packets with no response. On Linux it keeps trying for around 2 minutes and then times out and returns an error.
A firewall may well be the explanation for this behaviour. Rather than assuming the remote end will accept the connection, setting a timeout is the best option: establishing a connection is normally quick, even across a network, so with a reasonable timeout you can tell that the host is either down or dropping your packets.
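A minimal sketch of such a timeout (the 5 seconds and the address are placeholders; socket.create_connection(address, timeout) is an equivalent shortcut):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(5.0)                  # applies to connect() as well as later recv() calls
    try:
        sock.connect(("example.com", 80))
    except socket.timeout:
        print("no response at all: host down, or a firewall silently dropping packets")
        sock.close()
    except socket.error as exc:
        print("failed quickly with an active refusal: %s" % exc)   # e.g. ECONNREFUSED
        sock.close()
    else:
        sock.settimeout(None)             # back to fully blocking mode, if that is what you want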

How to handle timeouts when a process receives SIGSTOP and SIGCONT?

I have some Python code which uses threading.Timer to implement a 60-second timeout for an operation.
The problem is that this code runs in a job-control environment where it may get pre-empted by a higher priority job. In this case it will be sent SIGSTOP, and then some time later, SIGCONT. I need a way to somehow notice that this has happened and reset the timeout: obviously the operation hasn't really timed out if it's been suspended for the whole 60 seconds.
I tried adding a signal handler for SIGCONT, but it seems to get executed only after the code passed to threading.Timer has already run.
Is there some way to achieve this?
A fairly simple answer that occurred to me after posting this is to break the timer up into multiple sub-timers, e.g. ten 6-second timers, where each one starts the next in a chain. That way, if I get suspended, I only lose one of the timers and still get most of the wait before timing out.
This is of course not foolproof, especially if I get repeatedly suspended and restarted, but it's easy to do and seems like it might be good enough.
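A minimal sketch of that chained-timer idea (start_chained_timeout, on_timeout, and the 60-second/10-step split are all illustrative):

    import threading

    def start_chained_timeout(on_timeout, total=60.0, steps=10):
        # ten 6-second timers chained together: being SIGSTOPped only
        # "spends" the sub-timer that was running, not the whole budget
        state = {"remaining": steps, "timer": None, "cancelled": False}

        def tick():
            if state["cancelled"]:
                return
            state["remaining"] -= 1
            if state["remaining"] <= 0:
                on_timeout()                                   # the real timeout action
            else:
                state["timer"] = threading.Timer(total / steps, tick)
                state["timer"].start()

        state["timer"] = threading.Timer(total / steps, tick)
        state["timer"].start()

        def cancel():                                          # call when the operation finishes in time
            state["cancelled"] = True
            state["timer"].cancel()
        return cancel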
You need to rethink what you're asking for: a timeout reflects elapsed (wall-clock) time, whereas what you want to measure is the time actually used by your process.
Fortunately you can measure this with getrusage: http://docs.python.org/library/resource.html
You'll still need to set a timeout; when it fires, measure the increase in user or system time since the start of the operation, terminate the operation if it exceeds the limit, and otherwise reschedule the timeout appropriately.
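A minimal sketch of that re-arming check (abort_operation is a hypothetical hook; waiting the remaining budget before re-checking is always safe, since CPU time cannot advance faster than wall time):

    import resource
    import threading

    LIMIT = 60.0                     # allowed user+system CPU seconds for the operation

    def cpu_seconds():
        usage = resource.getrusage(resource.RUSAGE_SELF)
        return usage.ru_utime + usage.ru_stime

    def abort_operation():
        pass                         # hypothetical: cancel the long-running work

    start = cpu_seconds()

    def check_timeout():
        used = cpu_seconds() - start
        if used >= LIMIT:
            abort_operation()        # really out of budget, even accounting for SIGSTOP
        else:
            # we were presumably suspended for part of the wait, so
            # re-arm the timer for the remaining budget
            threading.Timer(LIMIT - used, check_timeout).start()

    threading.Timer(LIMIT, check_timeout).start()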
If your application is multi-threaded, the docs say that:
only the main thread can set a new signal handler, and the main thread will be the only one to receive signals
Make sure you are handling your signals from the main thread.
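For example (reset_timer is a hypothetical function that restarts the 60-second timeout):

    import signal

    def reset_timer():
        pass                         # hypothetical: cancel and restart the 60-second timeout

    def on_cont(signum, frame):
        reset_timer()                # we were just resumed after a SIGSTOP

    # signal.signal() must be called from the main thread, and the
    # handler itself also runs in the main thread
    signal.signal(signal.SIGCONT, on_cont)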

What is the correct procedure for multiple, sequential communications over a socket?

I've been struggling along with sockets, making OK progress, but I keep running into problems, and feeling like I must be doing something wrong for things to be this hard.
There are plenty of tutorials out there that implement a TCP client and server, usually where:
The server runs in an infinite loop, listening for and echoing back data to clients.
The client connects to the server, sends a message, receives the same thing back, and then quits.
That much I can handle. However, no one seems to go into the details of what you should and shouldn't be doing for sequential communications between the same two machines/processes.
I'm after the general sequence of function calls for doing multiple messages, but for the sake of asking a real question, here are some constraints:
Each event will be a single message client->server, and a single string response.
The messages are pretty short, say 100 characters max.
The events occur relatively slowly: at most, say, one every 5 seconds, and usually less often than that.
and some specific questions:
Should the server be closing the connection after its response, or trying to hang on to the connection until the next communication?
Likewise, should the client close the connection after it receives the response, or try to reuse the connection?
Does a closed connection (either through close() or through some error) mean the end of the communication, or the end of the life of the entire object?
Can I reuse the object by connecting again?
Can I do so on the same port of the server?
Or do I have to instantiate another socket object with a fresh call to socket.socket()?
What should I be doing to avoid getting 'address in use' errors?
If a recv() times out, is the socket still usable, or should I throw it away? Again, can I start a new connection with the same socket object, or do I need a whole new socket?
If you know that the two processes will communicate again soon, there is no need to close the connection. If your server has to deal with other connections as well, though, you will want to make it multithreaded.
The same. You know that both have to do the same thing, right?
You have to create a new socket on the client, and you cannot reuse the connected socket on the server side either: you have to use the new socket returned by the next (clientsocket, address) = serversocket.accept() call. You can use the same port. (Think of web servers: they always accept connections on the same port, from thousands of clients.)
In both cases (closing or not closing), you should however have a message terminator, for example a \n. Then you read from the socket until you reach the terminator. This usage is so common that Python has a construct for it: socket.makefile and file.readline. See the sketch below.
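A minimal sketch of one such newline-framed, keep-alive exchange (run the two halves in separate processes; port 9000 and the echo reply are illustrative). The SO_REUSEADDR line also answers the 'address in use' question: it lets a restarted server re-bind to the port straight away.

    import socket

    # server: one newline-terminated request in, one newline-terminated reply out
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # avoid 'address in use' on restart
    server.bind(("", 9000))
    server.listen(5)

    conn, addr = server.accept()      # a NEW socket per client; 'server' keeps listening on 9000
    f = conn.makefile("rw")           # file-like wrapper: readline() handles the \n framing
    while True:
        line = f.readline()
        if not line:                  # empty string: the client closed the connection
            break
        f.write("echo: " + line)      # 'line' still ends in \n, so the reply is framed too
        f.flush()
    conn.close()

    # client: one connection reused for several request/response rounds
    client = socket.create_connection(("localhost", 9000))
    cf = client.makefile("rw")
    for msg in ("first message", "second message"):
        cf.write(msg + "\n")
        cf.flush()
        print(cf.readline().rstrip("\n"))
    client.close()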
UPDATE:
Post the code. Probably you have not closed the connection correctly.
You can call recv() again.
UPDATE 2:
You should never assume that the connection is reliable; include mechanisms to reconnect in case of errors, as in the sketch below. With that in place, it is fine to try to keep using the same connection even across longer gaps.
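A minimal sketch of such a reconnect-and-retry mechanism (send_with_retry is a hypothetical helper; the backoff values are arbitrary):

    import socket
    import time

    def send_with_retry(address, message, attempts=3):
        # reconnect and retry instead of trusting any single connection
        for attempt in range(attempts):
            try:
                sock = socket.create_connection(address, timeout=10)
                try:
                    f = sock.makefile("rw")
                    f.write(message + "\n")
                    f.flush()
                    return f.readline().rstrip("\n")
                finally:
                    sock.close()
            except socket.error:
                time.sleep(2 ** attempt)     # simple backoff before reconnecting
        raise RuntimeError("gave up after %d attempts" % attempts)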
As for the errors you get: if you need specific help with your code, you should post small (but complete) examples.
