Question
I need to make a short write to a file every 15 seconds (and sleep the rest of the time)... it seems to me that multithreading or multiprocessing would be useful to address this, by having a dedicated thread or process to do the file write. Which would be better in terms of timing/reliability, as well as memory footprint?
Background
I am writing a small Python application for a Chumby (so limited memory availability -- 128MB total system memory); to stop the default Chumby control panel from restarting after I've killed it, a temp file needs to be written to every 15 seconds or so to "fool" the watchdog process that would ordinarily restart the control panel. The main application may be busy doing other things, and I don't want to try to have to "watch the clock" as it's doing its other things to make sure it squeezes in the temp file write.
The Chumby seems to be Linux-based, so signal.setitimer() should be available (provided you have access to Python 2.6 or above).
This function allows you to install a handler that is periodically called, so you don't need a thread or process.
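For example, a minimal sketch, assuming the watchdog file lives at /tmp/chumby_alive (a hypothetical path -- use whatever file the watchdog actually checks):

import signal

def touch_watchdog_file(signum, frame):
    # overwrite the temp file so the watchdog thinks the control panel is alive
    with open('/tmp/chumby_alive', 'w') as f:
        f.write('alive\n')

signal.signal(signal.SIGALRM, touch_watchdog_file)
signal.setitimer(signal.ITIMER_REAL, 15, 15)  # first fire in 15 s, then every 15 s

# ... the main application keeps doing its other work here ...

One caveat: the handler interrupts the main thread, so a slow system call in progress may fail with EINTR and need to be retried.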
Related
In order to get a speed boost for my Python program, should I spawn a separate thread or a separate process for logging? My program uses a lot of logging and I am not sure if threading is suitable because of the GIL. A lot of resources seem to suggest that it should be fine for I/O. I think logging is I/O, but I am not sure what "should be fine" means for most of the resources out there. I just need speed.
Before you start trying to optimize a program, there are some things you should do.
To start with, you should profile your program. You could, e.g., use line_profiler.
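For instance, a rough sketch of line_profiler usage (process_records and handle are hypothetical names): decorate the suspect function with @profile and run the script under kernprof with kernprof -l -v myscript.py.

import logging

@profile                         # injected by kernprof at run time, not a normal import
def process_records(records):
    for record in records:       # kernprof reports the time spent on each line here
        logging.info("processing %s", record)
        handle(record)           # hypothetical worker function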
If it turns out that your software spends a considerable amount of time logging, there are two easy options.
Set the loglevel in production code so that no or few(er) messages are logged. There will still be some overhead left, but it should be much reduced.
Use mechanical means (like sed or grep) to completely remove the logging calls from the production code. If this doesn't improve the speed/throughput of your program, logging wasn't an issue.
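A minimal sketch of the first option, using the standard logging module:

import logging

# Raise the threshold so that DEBUG/INFO calls return early instead of
# formatting and writing messages.
logging.basicConfig(level=logging.WARNING)

logging.debug("dropped by the level check")      # cheap: filtered out before any I/O
logging.warning("this one is actually emitted")

Note that arguments are still evaluated at the call site, so avoid expensive expressions inside filtered-out logging calls.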
If neither of those is suitable and logging is a significant fraction of your program's time, you can try to implement thread- or process-based logging.
If you want to use threading for logging, you will basically need a list and a lock. The function that is called from the main thread to do logging grabs the lock, appends the text to log to the list and releases the lock. The second thread waits for the lock, grabs the lock, pops a couple of items from the list, releases the lock and writes the items to a file. Since the GIL makes sure that only one thread at a time is running Python bytecode, this will reduce the performance of your program somewhat; part of its time is spent running the bytecode from the logging thread.
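A rough sketch of that list-and-lock scheme (all names here are my own):

import threading
import time

_buffer = []
_lock = threading.Lock()

def log(text):
    # called from the main thread: append under the lock and return quickly
    with _lock:
        _buffer.append(text)

def _writer(path, interval=0.5):
    with open(path, 'a') as f:
        while True:
            with _lock:
                items = _buffer[:]        # grab everything collected so far
                del _buffer[:]
            for item in items:
                f.write(item + '\n')      # the slow I/O happens outside the lock
            f.flush()
            time.sleep(interval)

t = threading.Thread(target=_writer, args=('app.log',))
t.daemon = True    # don't keep the process alive just for logging
t.start()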
Using multiprocessing is slightly different, in that you probably want to use e.g. a Queue to send logging messages from the main process to the logging process. The logging process takes items from the Queue and writes them to disk. This means that the time spent writing the logging actions to disk is spent in a different program. But there is some overhead associated with using a Queue as well.
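A comparable sketch with multiprocessing (again, the names and the sentinel are my own choices):

import multiprocessing

def _log_worker(queue, path):
    with open(path, 'a') as f:
        for item in iter(queue.get, None):   # None serves as a shutdown sentinel
            f.write(item + '\n')
            f.flush()

if __name__ == '__main__':
    q = multiprocessing.Queue()
    worker = multiprocessing.Process(target=_log_worker, args=(q, 'app.log'))
    worker.start()

    q.put('hello from the main process')     # cheap: just serializes the message
    q.put(None)                              # tell the worker to finish
    worker.join()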
You would have to measure to see which method uses less time in your program.
I am going by these assumptions:
You have already determined that it is your logging that is bottlenecking your program.
You have a good reason why you are logging what you are logging.
The perceived slowness is most likely due to the success or failure acknowledgement from the logging action. To avoid this "command queuing", make the calls to a separate process asynchronously and skip the callback. This may end up consuming more resources, but it will alleviate the backlog in your main program.
Node.js handles this naturally, or you can roll your own Python listener. Since this will be a separate process, you can redirect the logging of your other programs to it. You can even have a separate machine handle this workload.
I'm working on a small program that collects certain data and processes it. Currently, the program runs constantly on my server, storing the data to the disk. Every once in a while I run another program that reads the stored data, processes it, sorts it, saves it to a new location, and clears the old data files.
I've never learned about threads, but it SOUNDS like this is a good place to use them? If threading works the way I think it does, I could set up a queue to hold the data, and have a separate thread that could pull data from the queue and process it as it's ready. If the queue is full, thread1 could sleep for a bit. If it's empty, thread2 could sleep for a bit.
That would reduce disk writing, get rid of disk reading, and make the data collection run side-by-side with the data processing to save time.
Is any of this accurate? I'm a senior CS student and threads have never once come up (Surely that's a little odd?). I would appreciate any tips/knowledge/advice with using threads, as well as if this is the correct solution to my "problem".
Thanks!
That does sound like a situation where some form of parallelism might be useful. However, since this is Python, you might not want to actually use threads. Python, in the standard implementation, has something called the Global Interpreter Lock. Effectively, to allow the garbage collector to work, only one thread of a Python program can actually be running Python code at any time (modules written directly in C, or external operations such as disk IO or database queries, are not "running Python code" for this purpose, though you will have called them from Python).
Because of this, threading in Python is generally only a good idea if your Python code spends significant amounts of time waiting on responses from non-Python parts of the program, or external sources. If the data collection or processing is being done outside Python (collecting from a database or website, processing in numpy, etc.), that might be reasonable. If your code isn't in this situation often enough, your program ends up wasting more time switching between threads than it gains (because if two threads are both in Python code, it still only runs one at a time).
If not, you should try the multiprocessing module instead. This is also a generally safer model, as the only things that can be shared between processes in multiprocessing are the things you explicitly share (while threads share all state, potentially allowing one thread to break another because you forgot to lock something).
Alternately, you might use subprocess. Effectively, this would be having your first program intermittently restart the second one every time it finishes a batch of data.
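A minimal sketch of that approach (the script name and data path are hypothetical):

import subprocess

# After the collector finishes writing a batch, launch the processing
# program and wait for it to exit before starting the next batch.
subprocess.call(['python', 'process_batch.py', '/data/current_batch'])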
I am aware that this question is rather high-level and may be vague. Please ask if you need any more details and I will try to edit.
I am using QuickFix with Python bindings to consume high-throughput market data from circa 30 markets simultaneously. Most of the computational work is done on separate CPUs via the multiprocessing module. These parallel processes are spawned by the main process on startup. If I wish to interact with the market in any way via QuickFix, I have to do this within the main process, thus any commands (to enter orders, for example) which come from the child processes must be piped (via an mp.Queue object we will call Q) to the main process before execution.
This raises the problem of monitoring Q, which must be done within the main process. I cannot use Q.get(), since this method blocks and my entire main process will hang until something shows up in Q. In order to decrease latency, I must check Q frequently, on the order of 50 times per second. I have been using the apscheduler to do this, but I keep getting Warning errors stating that the runtime was missed. These errors are a serious issue because they prevent me from easily viewing important information.
I have therefore refactored my application to use the code posted by MestreLion as an answer to this question. This is working for me because it starts a new thread from the main process, and it does not print error messages. However, I am worried that this will cause nasty problems down the road.
I am aware of the Global Interpreter Lock in Python (this is why I used the multiprocessing module to begin with), but I don't really understand it. Owing to the high-frequency nature of my application, I do not know if the Q monitoring thread and the main process consuming lots of incoming messages will compete for resources and slow each other down.
My questions:
Am I likely to run into trouble in this scenario?
If not, can I add more monitoring threads using the present approach and still be okay? There are at least two other things I would like to monitor at high frequency.
Thanks.
@MestreLion's solution that you've linked creates 50 threads per second in your case.
All you need is a single thread to consume the queue without blocking the rest of the main process:
import threading

def consume(queue, sentinel=None):
    # pass_to_quickfix is a placeholder for whatever hands the item to QuickFix
    for item in iter(queue.get, sentinel):
        pass_to_quickfix(item)

# daemon=True as a constructor argument needs Python 3.3+; on older versions,
# create the thread, set t.daemon = True, then call t.start()
threading.Thread(target=consume, args=[queue], daemon=True).start()
GIL may or may not matter for performance in this case. Measure it.
Without knowing your scenario, it's difficult to say anything specific. Your question suggests that the threads are waiting most of the time via get, so the GIL is not a problem. Interprocess communication may cause problems much earlier. There you could switch to another protocol, using some kind of TCP sockets. Then you can write the scheduler more efficiently with select instead of threads, as threads are also slow and resource-consuming. select is a system function that allows you to monitor many socket connections at once; it therefore scales very efficiently with the number of connections and needs nearly no CPU power for monitoring.
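To illustrate, a bare-bones sketch of select-based monitoring, assuming the child processes connect over local TCP instead of an mp.Queue (handle_message is a placeholder):

import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 9999))
server.listen(5)

sockets = [server]
while True:
    readable, _, _ = select.select(sockets, [], [])   # blocks until something is ready
    for s in readable:
        if s is server:
            conn, _ = s.accept()                      # a child process connected
            sockets.append(conn)
        else:
            data = s.recv(4096)
            if data:
                handle_message(data)                  # placeholder for your dispatch logic
            else:
                sockets.remove(s)                     # child closed its end
                s.close()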
I've recently started experimenting with using Python for web development. So far I've had some success using Apache with mod_wsgi and the Django web framework for Python 2.7. However, I have run into some issues with having processes constantly running, updating information and such.
I have written a script I call "daemonManager.py" that can start and stop all or individual python update loops (Should I call them Daemons?). It does that by forking, then loading the module for the specific functions it should run and starting an infinite loop. It saves a PID file in /var/run to keep track of the process. So far so good. The problems I've encountered are:
Now and then one of the processes will just quit. I check ps in the morning and the process is just gone. No errors were logged (I'm using the logging module), and I'm covering every exception I can think of and logging them. Also, I don't think these quitting processes have anything to do with my code, because all my processes run completely different code and exit at pretty similar intervals. I could be wrong of course. Is it normal for Python processes to just die after they've run for days/weeks? How should I tackle this problem? Should I write another daemon that periodically checks if the other daemons are still running? What if that daemon stops? I'm at a loss on how to handle this.
How can I programmatically know if a process is still running or not? I'm saving the PID files in /var/run and checking if the PID file is there to determine whether or not the process is running. But if the process just dies of unexpected causes, the PID file will remain. I therefore have to delete these files every time a process crashes (a couple of times per week), which sort of defeats the purpose. I guess I could check if a process is running at the PID in the file, but what if another process has started and was assigned the PID of the dead process? My daemon would think that the process is running fine even if it's long dead. Again I'm at a loss just how to deal with this.
I will accept any useful answer on how best to run infinite Python processes, hopefully one that also sheds some light on the problems above.
I'm using Apache 2.2.14 on an Ubuntu machine.
My Python version is 2.7.2
I'll open by stating that this is one way to manage a long-running process (LRP) -- not the de facto standard by any stretch.
In my experience, the best possible product comes from concentrating on the specific problem you're dealing with, while delegating supporting tech to other libraries. In this case, I'm referring to the act of backgrounding processes (the art of the double fork), monitoring, and log redirection.
My favorite solution is http://supervisord.org/
Using a system like supervisord, you basically write a conventional python script that performs a task while stuck in an "infinite" loop.
#!/usr/bin/python
import sys
import time

def main_loop():
    while 1:
        # do your stuff...
        time.sleep(0.1)

if __name__ == '__main__':
    try:
        main_loop()
    except KeyboardInterrupt:
        print >> sys.stderr, '\nExiting by user request.\n'
        sys.exit(0)
Writing your script this way makes it simple and convenient to develop and debug (you can easily start/stop it in a terminal, watching the log output as events unfold). When it comes time to throw into production, you simply define a supervisor config that calls your script (here's the full example for defining a "program", much of which is optional: http://supervisord.org/configuration.html#program-x-section-example).
Supervisor has a bunch of configuration options so I won't enumerate them, but I will say that it specifically solves the problems you describe:
Backgrounding/Daemonizing
PID tracking (can be configured to restart a process should it terminate unexpectedly)
Log normally in your script (stream handler if using logging module rather than printing) but let supervisor redirect to a file for you.
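As a sketch, a minimal [program] section might look like this (the paths are hypothetical):

[program:daemonmanager]
command=/usr/bin/python /home/me/daemonManager.py
autostart=true
autorestart=true
stdout_logfile=/var/log/daemonmanager.out.log
stderr_logfile=/var/log/daemonmanager.err.log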
You should consider Python processes as able to run "forever" assuming you don't have any memory leaks in your program, the Python interpreter, or any of the Python libraries / modules that you are using. (Even in the face of memory leaks, you might be able to run forever if you have sufficient swap space on a 64-bit machine. Decades, if not centuries, should be doable. I've had Python processes survive just fine for nearly two years on limited hardware -- before the hardware needed to be moved.)
Ensuring programs restart when they die used to be very simple back when Linux distributions used SysV-style init -- you just add a new line to the /etc/inittab and init(8) would spawn your program at boot and re-spawn it if it dies. (I know of no mechanism to replicate this functionality with the new upstart init-replacement that many distributions are using these days. I'm not saying it is impossible, I just don't know how to do it.)
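For reference, a SysV inittab entry of that sort looked like this (the id and path are hypothetical) -- the fields are id, runlevels, action, and the command to run:

myd1:2345:respawn:/usr/bin/python /usr/local/bin/mydaemon.py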
But even the init(8) mechanism of years gone by wasn't as flexible as some would have liked. The daemontools package by DJB is one example of process control-and-monitoring tools intended to keep daemons living forever. The Linux-HA suite provides another similar tool, though it might provide too much "extra" functionality to be justified for this task. monit is another option.
I assume you are running Unix/Linux but you don't really say. I have no direct advice on your issue, so I don't expect this to be the "right" answer to this question. But there is something to explore here.
First, if your daemons are crashing, you should fix that. Only programs with bugs should crash. Perhaps you should launch them under a debugger and see what happens when they crash (if that's possible). Do you have any trace logging in these processes? If not, add it. That might help diagnose your crash.
Second, are your daemons providing services (opening pipes and waiting for requests) or are they performing periodic cleanup? If they are periodic cleanup processes, you should use cron to launch them periodically rather than have them run in an infinite loop. Cron processes should be preferred over daemon processes. Similarly, if they are services that open ports and service requests, have you considered making them work with INETD? Again, a single daemon (inetd) should be preferred to a bunch of daemon processes.
Third, saving a PID in a file is not very effective, as you've discovered. Perhaps a shared IPC mechanism, like a semaphore, would work better. I don't have any details here though.
Fourth, sometimes I need stuff to run in the context of the website. I use a cron job that calls wget with a maintenance URL. You set a special cookie and include the cookie info on the wget command line. If the special cookie doesn't exist, return 403 rather than performing the maintenance process. The other benefit here is that database logins and other environmental concerns are avoided, since the code that serves normal web pages is serving the maintenance process.
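As a sketch, the crontab entry might look like this (the URL and cookie name are hypothetical):

*/15 * * * * wget -q -O /dev/null --header "Cookie: maint_token=SECRET" http://localhost/maintenance/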
Hope that gives you ideas. I think avoiding daemons if you can is the best place to start. If you can run your Python within mod_wsgi, that saves you having to support multiple "environments". Debugging a process that fails after running for days at a time is just brutal.
I have some long-term processes and such that must occur at given button-presses or other events in a Jython GUI I am creating.
In such situations, it seems the best option is to make a separate thread to run the called method/function in when the event happens.
What is the best way to do this? Import threading and have a class that I initialize and run when actionPerformed fires? Use invokeLater? It seems there are a lot of ways to go about this, but which would work best in the Jython-Swing environment and be the 'fastest'?
start = JButton("Analyze", actionPerformed=self.do_analysis)

def do_analysis(self):
    ...
    # large, time-consuming task
    ...
I'm not 100% sure that Jython has the same problem, but in CPython you would run into a problem with the GIL, or Global Interpreter Lock. This will mean that when your background thread is running, the GUI thread cannot start (even if it is queued to run on another core). You click a button and nothing happens :(
To get round this, I would split the long running process into short steps that can be run on an event, and queue up the event to start the next step as the current step ends. Then the GUI will be able to run between steps if it needs to. The shorter you make the steps, the more responsive the GUI will be - 50ms to 100ms should be OK.
This approach has the nice side effect that you don't need to worry about threads, locking, message queueing or anything. You can try to add these to a GUI, but the GUI events and the threads can fight, leading to some very strange and hard-to-debug errors.
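A sketch of this idea in Jython, assuming the long task has been rewritten as a generator named long_task that yields after each short chunk of work:

from javax.swing import SwingUtilities

def run_in_steps(task):
    def step():
        try:
            task.next()                      # do one short chunk of work
        except StopIteration:
            return                           # the task is finished
        SwingUtilities.invokeLater(step)     # queue the next chunk behind pending GUI events
    SwingUtilities.invokeLater(step)

run_in_steps(long_task())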
As for the "fastest", this is probably the lowest overhead for shorter background tasks. If you create a new process to run the background task (very heavy overhead in Windows), then it will run faster because it has its own core, but the start/stop overhead is high.
This is a situation where you will get the best results by remembering that Jython is running on the JVM. Jython has full access to Java classes, so use the Java threading API to set up a separate computation thread. And if the CPU load is high enough that using more cores would help, Java (the JVM) will take care of that by itself.
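For instance, a minimal Jython sketch, where do_heavy_work stands in for the actual computation:

from java.lang import Thread, Runnable

class AnalysisTask(Runnable):
    def run(self):
        do_heavy_work()              # runs on a JVM thread, off the Swing event thread

# in the actionPerformed handler:
Thread(AnalysisTask()).start()       # returns immediately, so the GUI stays responsive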
In some circumstances, with long running processes, people have used jstack -l to get the nids of running threads, and then use taskset to set the CPU affinity. The JVM nid is in hex and is the PID of the Linux process corresponding to a thread. Other OSes may have similar capabilities.
In general, it is not necessary to do anything other than to make your Jython multithreaded. If you use the Python threading module you don't have access to the full Java threading featureset, but it does use JVM threads under the hood. Just remember to limit your access to global variables or you will end up recreating the Global Interpreter Lock. The Queue module can help with this.