In my Python code I have a thread listening for changes from a CouchDB feed (continuous changes). The changes request has a timeout parameter which is too big in certain circumstances (for example, when a user wants to interrupt the program manually with ^C).
How can I abort a long-running blocking http request?
Is this possible, or do I need to reduce the timeout to make my program more responsive?
This would be unfortunate, because a timeout small enough to make the program really responsive (say, 1 s) means creating lots of connections (one per second!), which defeats the purpose of listening to changes and makes it very difficult to be sure that no changes are missed (in the re-connecting window we can indeed miss changes, so special code is needed to handle that case).
The other option is to forcefully abort the thread, but that is not really an option in python.
If I understand correctly, you are waiting too long between requests before deciding whether to respond to the user. You are right that continuously closing and creating new connections defeats the purpose of the changes feed.
A solution could be to use the heartbeat query parameter, with which CouchDB keeps sending newlines to tell the client that the connection is still alive:
http://localhost:5984/hello/_changes?feed=continuous&heartbeat=1000&include_docs=true
As long as you are getting heartbeats (newlines) you can be sure the connection is alive. A bare newline indicates that no changes have occurred, whereas an actual change is reported back as a line of data. There is no need to close the connection; just respond to your clients whenever resp != "\n".
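A minimal sketch of reading that feed with the requests library (the database name comes from the example URL above, and handle_change() is a placeholder for your own code):

import requests

url = "http://localhost:5984/hello/_changes"
params = {"feed": "continuous", "heartbeat": 1000, "include_docs": "true"}

resp = requests.get(url, params=params, stream=True)
for line in resp.iter_lines():
    if not line:                       # an empty line is just a heartbeat: still connected
        continue
    handle_change(line)                # a real change arrived; handle_change() is your code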
Blocking the thread's execution generally prevents the thread from being terminated; you have to wait until the request times out. But you already know this.
Using a library that supports non-blocking requests might be a solution, but I don't know of one.
Anyway, you've mentioned that reducing the timeout leads to more connections. I'd suggest implementing a waiting loop between requests that can be interrupted by an external signal to terminate the thread. With this loop you can control the number of requests independently of the timeout.
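A rough sketch of such an interruptible waiting loop using threading.Event (fetch_changes_once() stands in for whatever short-timeout request you end up making):

import threading

stop_event = threading.Event()         # set() this from the main thread on Ctrl+C

def poll_changes():
    while not stop_event.is_set():
        fetch_changes_once()           # placeholder: one changes request with a short timeout
        stop_event.wait(timeout=30)    # sleeps up to 30 s, but returns immediately once set

stop_event.wait() wakes as soon as the event is set, so the thread can terminate promptly without making the request timeout itself any shorter.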
Related
I'm currently working on a benchmark project, where I'm trying to stress a server with ZeroMQ requests.
I was wondering what the best way to approach this would be. I was thinking of having one context create a socket and push it into each thread, where I would send requests and wait for responses, but I'm not sure this is possible given Python's limitations.
Moreover, would it be the same socket for all threads? That is, if I'm waiting for a response on one thread (with its own socket), would it be possible for another thread to catch that response?
Thanks.
EDIT:
Test flow logic would be like this:
Client socket would use zmq.REQ.
Client sends message.
Client waits for a response.
If no response, client reconnects and tries again until limit.
I'd like to scale this operation up to any number of clients, preferably without dealing with processes unless the performance difference is significant.
How would you do this?
Q : "...can I have one context and use several sockets?"
Oh sure you can.
Moreover, you can have several Context()-instances, each one managing (almost) any number of Socket()-instances, as long as each Socket()-instance's methods are called from one and only one Python thread (a Zen-of-Zero rule: zero sharing).
Due to known GIL-lock re-[SERIAL]-isation of all the thread-based code-execution flow, this still has to and will wait for acquiring the GIL-lock ownership, which in turn permits a GIL-lock owner ( and nobody else ) to execute a fixed amount of python instructions, before it re-releases the GIL-lock to some other thread...
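As a rough sketch under those constraints, one shared Context() with a private REQ socket per thread might look like this (the endpoint, payload, and request count are invented for the example):

import threading
import zmq

ctx = zmq.Context()                    # a single shared Context()-instance

def client(endpoint, n_requests):
    sock = ctx.socket(zmq.REQ)         # each thread owns its own Socket()-instance
    sock.connect(endpoint)
    for _ in range(n_requests):
        sock.send(b"ping")             # strict send/recv alternation for REQ
        sock.recv()
    sock.close()

threads = [threading.Thread(target=client, args=("tcp://localhost:5555", 100))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
ctx.term()

Because every thread still has to take turns on the GIL, a benchmark like this mostly measures the server and the network, not Python's own CPU throughput.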
So I have been getting my feet wet with python, attempting to build a reminder system that ties into the gnome notification ui. The basic idea is you type a command into your shell like remind me to check on dinner in 20 min and then in 20 min you get a desktop notification saying "check on dinner". The way I am doing this is by having a script parse the message and write the time the notification should be sent and the message that should be sent to a log file.
The notifications are getting triggered by a Python daemon. I am using a daemon design I found online. The issue I am seeing is that when this daemon is running it takes 100% of my CPU! I stripped down all the code the daemon was doing and I still have this problem when all the daemon is doing is:
while True:
    last_modified = os.path.getmtime(self.logfile)
I presume that this is a bad approach and that I should instead notify the daemon when there is a new reminder, so that most of the time the reminder daemon is sleeping. This is just an idea, but I am having a hard time finding resources on how to notify a process when all I know is the daemon's pid. So if I suspend the daemon with something like time.sleep(time_to_next_notification), would there be a way for me to send a signal to the daemon letting it know that there is a new reminder?
Though I believe you're better off using a server-client type solution that listens on a port, what you are asking is 100% possible using the signal and os libraries. This approach will not work well with multi-threaded programs, however, as signals are only handled by the main thread in Python. Additionally, Windows doesn't implement signals in the same way, so the options there are more limited.
Signals
The "client" process can send arbitrary signals using os.kill(pid, signal). You will have to go through the available signals and determine which one you want to use (signal.SIGUSR1 or signal.SIGUSR2 are good options because they are reserved for application use and shouldn't stomp on any other default behavior).
The "daemon" process on startup must register a handler for what to do when it receives your chosen signal. The handler is a function you define that receives the signal that was received as well as the current stack frame of execution (def handler(signum, frame):). If you're only doing one thing with this handler, and it doesn't need to know what was happening when it was called, you can probably ignore both of these parameters. Then you must register the handler with signal.signal, e.g. signal.signal(signal.SIGUSR1, handler).
From there you will want to find some appropriate way to wait until the next signal without consuming too many resources. This could be as simple as looping on a time.sleep() call, or you could try to get fancy. I'm not 100% sure how execution resumes on returning from a signal handler, so you may need to concern yourself with recursion depth (i.e., make sure you don't recurse every time a signal is handled, or you'll only ever be able to handle a limited number of signals before needing to restart).
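A bare-bones sketch of both sides, assuming SIGUSR1 as the chosen signal (the sleep interval and the log-reading step are placeholders, not your actual code):

import signal
import time

class NewReminder(Exception):
    pass

def handler(signum, frame):
    # Raising here interrupts time.sleep() even on Python 3.5+, where an
    # interrupted sleep would otherwise be restarted automatically.
    raise NewReminder

signal.signal(signal.SIGUSR1, handler)

while True:
    try:
        time.sleep(3600)                   # sleep until the next notification is due
    except NewReminder:
        pass                               # woken early: fall through and re-read the log
    print("re-reading the log file")       # placeholder for the real reminder logic

The client side (the shell command) then only needs the daemon's pid:

import os, signal
os.kill(daemon_pid, signal.SIGUSR1)        # daemon_pid read from e.g. a pidfile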
Server
Having a process listen on a port (generally referred to as a server, but functionally the same as your 'daemon' description) instead of listening for operating system signals has several main advantages:
Ports are able to send data where signals are only able to trigger events
Ports behave more consistently across platforms
Ports play nice[r] with multi-threading
Ports make it easy to send messages across a network (ie: create reminder from phone and execute on PC)
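For illustration, a minimal listener using the standard library's socketserver module (the port and the one-line message format are assumptions):

import socketserver

class ReminderHandler(socketserver.StreamRequestHandler):
    def handle(self):
        line = self.rfile.readline().decode().strip()
        print("new reminder:", line)       # placeholder: parse the reminder and queue it

server = socketserver.TCPServer(("127.0.0.1", 8765), ReminderHandler)
server.serve_forever()                     # replaces the busy while-True polling loop

The daemon would run this loop instead of polling the log file, and the remind command would simply connect to the port and write a line.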
Waiting for multiple things at once
In order to address the need to wait for multiple things at once (listening for input as well as waiting to deliver the next notification), you have quite a few options:
Signals may actually be a good fit here, as signal.SIGALRM can be used as a conveniently re-settable alarm clock (if you're using UNIX); a small sketch follows after this list. You would set up the handler in the same way as before, and simply set an alarm for the next notification. After setting the alarm, you could resume listening on the port for new tasks. If a new task comes in, setting the alarm again will override the existing one, so the handler would need to retrieve the next queued notification and re-set the alarm once it is done with the first task.
Threads could either be used to poll a queue of notification tasks, or an individual thread could be created to wait for each task. This is not a particularly elegant solution, however it would be effective and easy to implement.
The most elegant solution would likely be to use asyncio coroutines; however, I am not as well versed in asyncio, and will admit they're a bit more confusing than threads.
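To make the SIGALRM option above concrete, here is a tiny UNIX-only sketch (the 20-minute delay and the notification text are just the example from the original question):

import signal

def fire_notification(signum, frame):
    print("check on dinner")               # placeholder: show the desktop notification,
                                           # then re-arm the alarm for the next reminder

signal.signal(signal.SIGALRM, fire_notification)
signal.alarm(20 * 60)                      # ring in 20 minutes; a later alarm() replaces this
signal.pause()                             # keep waiting (in real code: listen on the port)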
I'm working with Django 1.8 and Python 2.7.
In a certain part of the project, I open a socket and send some data through it. Due to the way the other end works, I need to leave some time (let's say 10 milliseconds) between each piece of data that I send:
while True:
    send(data)
    sleep(0.01)
So my question is: is it considered bad practice to simply use sleep() to create that pause? Is there maybe a more efficient approach?
UPDATED:
The reason why I need to create that pause is that the other end of the socket is an external service that takes some time to process the chunks of data I send. I should also point out that it doesn't return anything after having received, let alone processed, the data. Leaving that brief pause ensures that each chunk of data that I send gets properly processed by the receiver.
EDIT: changed the sleep to 0.01.
Yes, this is bad practice and an anti-pattern. You will tie up the "worker" which is processing this request for an unknown period of time, which will make it unavailable to serve other requests. The classic pattern for web applications is to service a request as-fast-as-possible, as there is generally a fixed or max number of concurrent workers. While this worker is continually sleeping, it's effectively out of the pool. If multiple requests hit this endpoint, multiple workers are tied up, so the rest of your application will experience a bottleneck. Beyond that, you also have potential issues with database locks or race conditions.
The standard approach to handling your situation is to use a task queue like Celery. Your web-application would tell Celery to initiate the task and then quickly finish with the request logic. Celery would then handle communicating with the 3rd party server. Django works with Celery exceptionally well, and there are many tutorials to help you with this.
If you need to provide information to the end-user, then you can generate a unique ID for the task and poll the result backend for an update by having the client refresh the URL every so often. (I think Celery will automatically generate a guid, but I usually specify one.)
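To make that concrete, a rough sketch of what the Celery task could look like (the task name, host, port, and chunking are invented for the example, not taken from your code):

# tasks.py
import socket
import time
from celery import shared_task

@shared_task
def stream_chunks(chunks, host, port):
    # chunks: a list of strings to send, one every 10 ms
    conn = socket.create_connection((host, port))
    try:
        for chunk in chunks:
            conn.sendall(chunk.encode())
            time.sleep(0.01)               # the pause now happens in a Celery worker,
    finally:                               # not in the Django request/response cycle
        conn.close()

Your view would then call stream_chunks.delay(chunks, "service.example.com", 9000) and return immediately, while the worker handles the slow, paced sending.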
Like most things, short answer: it depends.
Slightly longer answer:
If you're running it in an environment where you have many (50+ for example) connections to the webserver, all of which are triggering the sleep code, you're really not going to like the behavior. I would strongly recommend looking at using something like celery/rabbitmq so Django can dump the time delayed part onto something else and then quickly respond with a "task started" message.
If this is production, but you're the only person hitting the webserver, it still isn't great design, but if it works, it's going to be hard to justify the extra complexity of the task queue approach mentioned above.
Synopsis:
My program occasionally runs into a condition where it wants to send data over a socket, but that socket is blocked waiting for a response to a previous command that never came. Is there any way to unblock the socket and pick back up with it when this happens? If not that, how could I test whether the socket is blocked so I could close it and open a new one? (I need blocking sockets in the first place)
Details:
I'm connecting to a server over two sockets. Socket 1 is for general command communication. Socket 2 is for aborting running commands. Aborts can come at any time and frequently. Every command sent over socket 1 gets a response, such as:
socket1 send: set command data
socket1 read: set command ack
There is always some time between the send and the read, as the server doesn't send anything back until the command is finished executing.
To interrupt commands in progress, I connect over another socket and issue an abort command. I then use socket 1 to issue a new command.
I am finding that occasionally commands issued over socket 1 after an abort hang the program. It appears that socket 1 is blocked waiting for a response to a previously issued command that never returned (and that got interrupted). While it usually works, sometimes it doesn't (I didn't write the server).
In these cases, is there any way for me to check to see if socket 1 is blocked waiting for a read, and if so, abandon that read and move on? Or even any way to check at all so I can close that socket and start again?
thx!
UPDATE 1: thanks for the answers. As for why I'm using blocking sockets, it's because I'm controlling a CNC-type machine with this code, and I need to know when the command I've asked it to execute is done executing. The server returns the ACK when it's done, so that seems like a good way to handle it. I like the idea of refactoring for non-blocking but can't envision a way to get info on when the command is done otherwise. I'll look at select and the other options.
Not meaning to seem disagreeable, but you say you need blocking sockets and then go on to describe some very good reasons for needing non-blocking sockets. I would recommend refactoring to use non-blocking.
Aside from that, the only method I'm aware of to know if a socket is blocked is the fact that your program called recv or one of its variants and has not yet returned. Someone else may know an API that I don't, but setting a "blocked" boolean before the recv call and clearing it afterward is probably the best hack to get you that info. But you don't want to do that. Trust me, the refactor will be worth it in the long run.
The traditional solution to this problem is to use select. Before writing, test whether the socket will support writing, and if not, do something else (such as waiting for a response first). One level above select, Python provides the asyncore module to enable such processing. Two levels above, Twisted is an entire framework dealing with asynchronous processing of messages.
Sockets should be full duplex. If Python blocks a thread from writing to a socket while another thread is reading from the same socket I would regard it as a major bug in Python. This doesn't occur in any other programming language I've used.
What you really want is to block on a select() or poll() call. The only way to unblock a blocked socket is to receive data or a signal, which is probably not acceptable. A select() or poll() call can block waiting for one or more sockets, either on reading or writing (waiting for buffer space). They can also take a timeout if you want to wait periodically to check on other things. Take a look at my answer to Block Socket with Unix and C/C++ Help.
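In Python, a hedged sketch of that idea for the read side might look like this (the timeout and buffer size are arbitrary):

import select

def recv_with_timeout(sock, timeout=5.0, bufsize=4096):
    # Wait up to `timeout` seconds for the socket to become readable.
    readable, _, _ = select.select([sock], [], [], timeout)
    if readable:
        return sock.recv(bufsize)
    return None                            # nothing arrived: close the socket and reconnect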
I started using ZeroMQ this week, and when using the Request-Response pattern I am not sure how to have a worker safely "hang up" and close his socket without possibly dropping a message and causing the customer who sent that message to never get a response. Imagine a worker written in Python who looks something like this:
import zmq
c = zmq.Context()
s = c.socket(zmq.REP)
s.connect('tcp://127.0.0.1:9999')
for i in range(8):
    s.recv()
    s.send('reply')
s.close()
I have been doing experiments and have found that a customer at 127.0.0.1:9999 of socket type zmq.REQ who makes a fair-queued request just might have the misfortune of having the fair-queuing algorithm choose the above worker right after the worker has done its last send() but before it runs the following close() method. In that case, it seems that the request is received and buffered by the ØMQ stack in the worker process, and that the request is then lost when close() throws out everything associated with the socket.
How can a worker detach "safely" — is there any way to signal "I don't want messages anymore", then (a) loop over any final messages that have arrived during transmission of the signal, (b) generate their replies, and then (c) execute close() with the guarantee that no messages are being thrown away?
Edit: I suppose the raw state that I would want to enter is a "half-closed" state, where no further requests could be received — and the sender would know that — but where the return path is still open so that I can check my incoming buffer for one last arrived message and respond to it if there is one sitting in the buffer.
Edit: In response to a good question, corrected the description to make the number of waiting messages plural, as there could be many connections waiting on replies.
You seem to think that you are trying to avoid a “simple” race condition such as in
... = zmq_recv(fd);
do_something();
zmq_send(fd, answer);
/* Let's hope a new request does not arrive just now, please close it quickly! */
zmq_close(fd);
but I think the problem is that fair queuing (round-robin) makes things even more difficult: you might already even have several queued requests on your worker. The sender will not wait for your worker to be free before sending a new request if it is its turn to receive one, so at the time you call zmq_send other requests might be waiting already.
In fact, it looks like you might have selected the wrong data direction. Instead of having a requests pool send requests to your workers (even when you would prefer not to receive new ones), you might want to have your workers fetch a new request from a requests queue, take care of it, then send the answer.
Of course, it means using XREP/XREQ, but I think it is worth it.
Edit: I wrote some code implementing the other direction to explain what I mean.
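That code isn't reproduced here, but a minimal sketch of the worker side in the reversed direction could look like the following (the endpoint and the READY/SHUTDOWN framing are assumptions for illustration, not part of the question or the original answer):

import zmq

ctx = zmq.Context()
worker = ctx.socket(zmq.REQ)
worker.connect("tcp://127.0.0.1:9998")     # hypothetical queue/broker endpoint

worker.send(b"READY")                      # ask the queue for the first job
while True:
    job = worker.recv()
    if job == b"SHUTDOWN":                 # the queue decides when the worker stops,
        break                              # so no request can be lost in a close()
    worker.send(job.upper())               # placeholder "work"; the reply doubles as
                                           # the request for the next job
worker.close()
ctx.term()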
I think the problem is that your messaging architecture is wrong. Your workers should use a REQ socket to send a request for work and that way there is only ever one job queued at the worker. Then to acknowledge completion of the work, you could either use another REQ request that doubles as ack for the previous job and request for a new one, or you could have a second control socket.
Some people do this using PUB/SUB for the control so that each worker publishes acks and the master subscribes to them.
You have to remember that with ZeroMQ there are 0 message queues. None at all! Just messages buffered in either the sender or receiver depending on settings like High Water Mark, and type of socket. If you really do need message queues then you need to write a broker app to handle that, or simply switch to AMQP where all communication is through a 3rd party broker.
I've been thinking about this as well. You may want to implement a CLOSE message which notifies the customer that the worker is going away. You could then have the worker drain for a period of time before shutting down. Not ideal, of course, but might be workable.
There is a conflict of interest between sending requests as rapidly as possible to workers and getting reliability in case a worker crashes or dies. There is an entire section of the ZeroMQ Guide that explains different answers to this question of reliability. Read that, it'll help a lot.
tl;dr workers can/will crash and clients need a resend functionality. The Guide provides reusable code for that, in many languages.
Wouldn't the simplest solution be to have the customer timeout when waiting for the reply and then retry if no reply is received?
Try sleeping before the call to close. This is fixed in 2.1 but not in 2.0 yet.