I'm writing some code that needs to run on different OS platforms and interact with separate processes. To write tests for it, I need to be able to create processes from python that do nothing but wait to be signaled to stop. I would like to be able to create some processes that recursively create more.
Also (this part might be a little strange), it would be best for my testing if I were able to create processes that weren't children of the creating process, so I could emulate conditions where, e.g., os.waitpid won't have permission to interact with the process, or where one process signals a factory to create a process rather than creating it directly.
If you're using Python 2.6 the multiprocessing package has some stuff you might find useful.
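As a rough sketch of what that could look like (the function and event names are mine, using nothing beyond multiprocessing itself): each worker does nothing but block on an Event until it is signaled to stop, and a nonzero depth makes workers recursively spawn children of their own.

import multiprocessing

def wait_for_stop(stop_event, depth=0):
    # optionally spawn children, which recursively spawn more
    children = []
    for _ in range(depth):
        child_stop = multiprocessing.Event()
        p = multiprocessing.Process(target=wait_for_stop,
                                    args=(child_stop, depth - 1))
        p.start()
        children.append((p, child_stop))
    stop_event.wait()                # do nothing but wait for the signal
    for p, child_stop in children:
        child_stop.set()             # pass the stop signal down the tree
        p.join()

if __name__ == '__main__':
    stop = multiprocessing.Event()
    proc = multiprocessing.Process(target=wait_for_stop, args=(stop, 2))
    proc.start()
    stop.set()                       # signal the whole tree to stop
    proc.join()

Note that this won't give you the detached, non-child processes from the second part of your question: multiprocessing always makes the workers children of their creator.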
There's a very simple example on my github. If you run spawner it will create 3 processes that run separately, but use a channel to talk back to the spawner. So if you kill the spawner process the others you have started will die. I'm afraid there's a lot of redundant code in here, I'm in the middle of a refactoring, but I hope it gives a basic idea.
I'm building a Python IDE, which needs to highlight all occurrences of the name under the cursor (using the Jedi library). The process of finding the occurrences can be quite slow.
In order to avoid freezing the GUI, I could run the search in another thread, but when the user moves quickly over several words, the background threads could pile up while working on now obsolete tasks. I would like to cancel the search for previous occurrences when user moves to new name.
Looks like killing a thread is complicated in Python. What are the other options for creating easily cancellable background tasks in Python 3.4+?
I think concurrent.futures is the answer.
You can create a thread or process pool, submit any callable, and receive a Future, which you can cancel if needed.
Reference: https://docs.python.org/3/library/concurrent.futures.html
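A minimal sketch of what that looks like (find_occurrences here is a stand-in for the actual Jedi call):

from concurrent.futures import ThreadPoolExecutor
import time

def find_occurrences(name):
    time.sleep(2)                    # stand-in for the slow Jedi search
    return [name]

executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(find_occurrences, 'old_name')

# The user moved to another word: try to drop the stale search.
if not future.cancel():
    pass                             # cancel() fails once the task has started
future = executor.submit(find_occurrences, 'new_name')
print(future.result())

One caveat: Future.cancel() only succeeds while the task is still queued; a task that has already started running cannot be interrupted this way and has to finish (or check a flag itself).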
A thread cannot be stopped by another one. This is an OS limitation rather than a Python one. The only thing you can do is have the thread periodically inspect a variable and, if it is set, stop itself (just return).
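In code, that cooperative pattern looks roughly like this (a sketch, with a sleep standing in for a chunk of real work):

import threading
import time

stop_requested = threading.Event()

def worker():
    while not stop_requested.is_set():   # inspect the flag periodically
        time.sleep(0.1)                  # stand-in for a chunk of work
    # falling off the end of the function stops the thread

t = threading.Thread(target=worker)
t.start()
stop_requested.set()                     # ask the thread to stop itself
t.join()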
Moreover, threads in Python suffer from the GIL. This means that CPU-intensive operations, when carried out in a separate thread, will still affect your main loop, as only one thread per process can execute Python bytecode at a time.
I'd recommend you to run the search in a separate process which you can easily cancel whenever you want.
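A rough sketch of that approach (again with a stand-in for the Jedi call); unlike a thread, a Process can be terminated mid-task:

import multiprocessing
import time

def find_occurrences(name, results):
    time.sleep(2)                    # stand-in for the slow Jedi search
    results.put([name])

if __name__ == '__main__':
    results = multiprocessing.Queue()
    search = multiprocessing.Process(target=find_occurrences,
                                     args=('old_name', results))
    search.start()
    # The user moved on: kill the obsolete search outright.
    search.terminate()
    search.join()
    search = multiprocessing.Process(target=find_occurrences,
                                     args=('new_name', results))
    search.start()
    print(results.get())             # blocks until the new search finishes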
What the YouCompleteMe developers do, for example, is wrap Jedi in an HTTP server which they can query in the background. If the user moves the cursor before the completion comes back, the IDE can simply drop the request.
Well, my personal favorites are work queues. If it's a one-time application you should take a look at python rq. Extremely easy and fun to use. If you want to build something more "professional-grade" take a look at something like celery.
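To give an idea of how little code RQ needs (a sketch; it assumes a running Redis server, an rq worker, and a task function living in an importable module):

from redis import Redis
from rq import Queue
from mytasks import find_occurrences   # hypothetical importable task

q = Queue(connection=Redis())
job = q.enqueue(find_occurrences, 'some_name')

# Later: drop the job if a worker hasn't picked it up yet.
job.cancel()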
You might also want to look at multiprocessing
I've been looking at examples from other people but I can't seem to get it to work properly.
It'll either use a single core, or basically freeze up Maya if given too much to process, but I never seem to get more than one core working at once.
So for example, this is kind of what I'd like it to do, on a very basic level. Mainly just let each loop run simultaneously on a different processor with the different values (in this case, the two values would use two processors)
mylist = [50, 100, 23]
newvalue = [50,51]
for j in range(0, len(newvalue)):
    exists = False
    for i in range(0, len(mylist)):
        #search list
        if newvalue[j] == mylist[i]:
            exists = True
    #add to list
    if exists == True:
        mylist.append(newvalue[j])
Would it be possible to pull this off? The actual code I'm wanting to use it on can take from a few seconds to like 10 minutes for each loop, but they could theoretically all run at once, so I thought multithreading would speed it up loads
Bear in mind I'm still relatively new to python so an example would be really appreciated
Cheers :)
There are really two different answers to this.
Maya scripts are really supposed to run in the main UI thread, and there are lots of ways they can trip you up if run from a separate thread. Maya includes a module called maya.utils which includes methods for deferred evaluation in the main thread. Here's a simple example:
import maya.cmds as cmds
import maya.utils as utils
import threading
def do_in_main():
    utils.executeDeferred(cmds.sphere)

for i in range(10):
    t = threading.Thread(target=do_in_main, args=())
    t.start()
That will allow you to do things with the Maya UI from a separate thread (there's another method in utils that will allow the calling thread to await a response too). The module is covered in the Maya documentation.
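Presumably the other method meant there is maya.utils.executeInMainThreadWithResult, which blocks the calling thread until the main thread has run the callable:

# from a worker thread: run cmds.sphere in the main thread and wait
result = utils.executeInMainThreadWithResult(cmds.sphere)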
However, this doesn't get you around the second aspect of the question. Maya Python isn't going to split up the job among processors for you: threading will let you create separate threads, but they all share the same Python interpreter, and the global interpreter lock means that they end up waiting for it rather than running along independently.
You can't use the multiprocessing module, at least not AFAIK, since it spawns new Maya instances rather than pushing script execution out to other processors in the Maya you are running within. Python aside, Maya is an old program and not very multi-core oriented in any case. Try XSI :)
Any threading stuff in Maya is tricky in any case - if you touch the main application (basically, any function from the API or a maya.whatever module) without the deferred execution above, you'll probably crash Maya. Only use it if you have to.
And, BTW, you can't use executeDeferred, etc. in batch mode, since they are implemented using the main UI loop.
What theodox says is still true today, six years later. However, one may go another route by spawning a new process using the subprocess module. You'll have to communicate and share data via sockets or something similar, since the new process is in a separate interpreter. The new interpreter runs on its own and doesn't know about Maya, but you can do any other work in it, benefiting from the multi-threaded environment your OS provides, before communicating the result back to your Maya Python script.
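As a rough sketch of that route (the worker script and its JSON protocol are made up here):

import json
import subprocess

# hand the heavy work to a plain Python interpreter outside Maya
payload = json.dumps({'values': [50, 100, 23]})
proc = subprocess.Popen(['python', '/path/to/worker.py'],   # hypothetical script
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)
out, _ = proc.communicate(payload)   # blocks until the worker finishes
result = json.loads(out)

(On Python 3 you'd encode/decode the pipe data as bytes; and since communicate() blocks, in practice you'd poll() the process periodically rather than wait on it inside Maya.)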
I've recently started experimenting with using Python for web development. So far I've had some success using Apache with mod_wsgi and the Django web framework for Python 2.7. However I have run into some issues with having processes constantly running, updating information and such.
I have written a script I call "daemonManager.py" that can start and stop all or individual python update loops (Should I call them Daemons?). It does that by forking, then loading the module for the specific functions it should run and starting an infinite loop. It saves a PID file in /var/run to keep track of the process. So far so good. The problems I've encountered are:
Now and then one of the processes will just quit. I check ps in the morning and the process is just gone. No errors were logged (I'm using the logging module), and I'm covering every exception I can think of and logging them. Also I don't think these quitting processes have anything to do with my code, because all my processes run completely different code and exit at pretty similar intervals. I could be wrong of course. Is it normal for Python processes to just die after they've run for days/weeks? How should I tackle this problem? Should I write another daemon that periodically checks if the other daemons are still running? What if that daemon stops? I'm at a loss on how to handle this.
How can I programmatically know if a process is still running or not? I'm saving the PID files in /var/run and checking if the PID file is there to determine whether or not the process is running. But if the process just dies of unexpected causes, the PID file will remain. I therefore have to delete these files every time a process crashes (a couple of times per week), which sort of defeats the purpose. I guess I could check if a process is running at the PID in the file, but what if another process has started and was assigned the PID of the dead process? My daemon would think that the process is running fine even if it's long dead. Again I'm at a loss just how to deal with this.
I will accept any useful answer on how best to run long-lived Python processes, ideally one that also sheds some light on the problems above.
I'm using Apache 2.2.14 on an Ubuntu machine.
My Python version is 2.7.2
I'll open by stating that this is one way to manage a long running process (LRP) -- not the de facto way by any stretch.
In my experience, the best possible product comes from concentrating on the specific problem you're dealing with, while delegating supporting tech to other libraries. In this case, I'm referring to the act of backgrounding processes (the art of the double fork), monitoring, and log redirection.
My favorite solution is http://supervisord.org/
Using a system like supervisord, you basically write a conventional python script that performs a task while stuck in an "infinite" loop.
#!/usr/bin/python
import sys
import time
def main_loop():
    while 1:
        # do your stuff...
        time.sleep(0.1)

if __name__ == '__main__':
    try:
        main_loop()
    except KeyboardInterrupt:
        print >> sys.stderr, '\nExiting by user request.\n'
        sys.exit(0)
Writing your script this way makes it simple and convenient to develop and debug (you can easily start/stop it in a terminal, watching the log output as events unfold). When it comes time to throw into production, you simply define a supervisor config that calls your script (here's the full example for defining a "program", much of which is optional: http://supervisord.org/configuration.html#program-x-section-example).
Supervisor has a bunch of configuration options so I won't enumerate them, but I will say that it specifically solves the problems you describe:
Backgrounding/Daemonizing
PID tracking (can be configured to restart a process should it terminate unexpectedly)
Log redirection: log normally in your script (use a stream handler if you're using the logging module, rather than printing) and let supervisord redirect the output to a file for you.
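Putting it together, a minimal program section might look something like this (the program name and paths are made up; see the linked docs for the full option list):

[program:mydaemon]
command=/usr/bin/python /opt/mydaemon/main.py
autostart=true
autorestart=true
stdout_logfile=/var/log/mydaemon.log
redirect_stderr=true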
You should consider Python processes as able to run "forever" assuming you don't have any memory leaks in your program, the Python interpreter, or any of the Python libraries / modules that you are using. (Even in the face of memory leaks, you might be able to run forever if you have sufficient swap space on a 64-bit machine. Decades, if not centuries, should be doable. I've had Python processes survive just fine for nearly two years on limited hardware -- before the hardware needed to be moved.)
Ensuring programs restart when they die used to be very simple back when Linux distributions used SysV-style init -- you just add a new line to the /etc/inittab and init(8) would spawn your program at boot and re-spawn it if it dies. (I know of no mechanism to replicate this functionality with the new upstart init-replacement that many distributions are using these days. I'm not saying it is impossible, I just don't know how to do it.)
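For reference, such an inittab entry looked like this (the id and path here are made up):

# /etc/inittab: respawn the daemon whenever it dies
pd:2345:respawn:/usr/bin/python /usr/local/bin/mydaemon.py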
But even the init(8) mechanism of years gone by wasn't as flexible as some would have liked. The daemontools package by DJB is one example of process control-and-monitoring tools intended to keep daemons living forever. The Linux-HA suite provides another similar tool, though it might provide too much "extra" functionality to be justified for this task. monit is another option.
I assume you are running Unix/Linux but you don't really say. I have no direct advice on your issue. So I don't expect to be the "right" answer to this question. But there is something to explore here.
First, if your daemons are crashing, you should fix that. Only programs with bugs should crash. Perhaps you should launch them under a debugger and see what happens when they crash (if that's possible). Do you have any trace logging in these processes? If not, add them. That might help diagnose your crash.
Second, are your daemons providing services (opening pipes and waiting for requests) or are they performing periodic cleanup? If they are periodic cleanup processes you should use cron to launch them periodically rather than have them run in an infinite loop. Cron processes should be preferred over daemon processes. Similarly, if they are services that open ports and service requests, have you considered making them work with INETD? Again, a single daemon (inetd) should be preferred to a bunch of daemon processes.
Third, saving a PID in a file is not very effective, as you've discovered. Perhaps a shared IPC, like a semaphore, would work better. I don't have any details here though.
Fourth, sometimes I need stuff to run in the context of the website. I use a cron job that calls wget with a maintenance URL. You set a special cookie and include the cookie info in the wget command line. If the special cookie doesn't exist, return 403 rather than performing the maintenance process. The other benefit here is that database logins and other environmental concerns are avoided, since the code that serves normal web pages also serves the maintenance process.
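For illustration, the cron entry might look like this (the URL and cookie are made up):

# crontab: hit the maintenance URL every 15 minutes with the magic cookie
*/15 * * * * wget -q -O /dev/null --header="Cookie: maint=SECRET" http://localhost/maintenance/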
Hope that gives you ideas. I think avoiding daemons if you can is the best place to start. If you can run your python within mod_wsgi that saves you having to support multiple "environments". Debugging a process that fails after running for days at a time is just brutal.
I have some long-term processes and such that must occur at given button-presses or other events in a Jython GUI I am creating.
In such situations, it seems the best option is to make a separate thread to run the called method/function in when the event happens.
What is the best way to do this? Import threading and have a class that I initialize and run when actionPerformed fires? Use invokeLater? It seems there are a lot of ways to go about this, but which would work best in the Jython-Swing environment and be the 'fastest'?
start = JButton("Analyze", actionPerformed=self.do_analysis)

def do_analysis(self):
    ...
    # Large Time Consuming Task
    ...
I'm not 100% sure that Jython has the same problem, but in CPython you would run into a problem with the GIL or Global Interpreter Lock. This means that when your background thread is running, the GUI thread cannot run (even if it is queued to run on another core). You click a button and nothing happens :(
To get round this, I would split the long running process into short steps that can be run on an event, and queue up the event to start the next step as the current step ends. Then the GUI will be able to run between steps if it needs to. The shorter you make the steps, the more responsive the GUI will be - 50ms to 100ms should be OK.
This approach has the nice side effect that you don't need to worry about threads, locking, message queueing or anything. You can try and add these to a GUI but the GUI events and the threads can fight, leading to some very strange and hard to debug errors.
As for the "fastest", this is probably the lowest overhead for shorter background tasks. If you create a new process to run the background task (very heavy overhead in Windows) then it will run faster becasue it has its own core, but the start/stop overhead is high.
This is a situation where you will get the best results by remembering that Jython is running on the JVM. Jython has full access to Java classes, so use the Java threading API to set up a separate computation thread. And if the CPU load is high enough that using more cores would help, Java (the jvm) will take care of that by itself.
In some circumstances, with long running processes, people have used jstack -l to get the nids of running threads, and then use taskset to set the CPU affinity. The JVM nid is in hex and is the PID of the Linux process corresponding to a thread. Other OSes may have similar capabilities.
In general, it is not necessary to do anything other than to make your Jython multithreaded. If you use the Python threading module you don't have access to the full Java threading featureset, but it does use JVM threads under the hood. Just remember to limit your access to global variables or you will end up recreating the Global Interpreter Lock. The Queue module can help with this.
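A sketch of what that might look like in the button handler from the question (heavy_analysis is a stand-in and show_result is hypothetical):

import threading
import time
from javax.swing import SwingUtilities

def heavy_analysis():
    time.sleep(5)                  # stand-in for the long-running task
    return 42

def do_analysis(self):
    def work():
        result = heavy_analysis()
        # Swing must only be touched from the event dispatch thread, so
        # hand the result back with invokeLater (Jython coerces the
        # Python callable into a Runnable).
        SwingUtilities.invokeLater(lambda: self.show_result(result))
    threading.Thread(target=work).start()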
My apologies beforehand for the length of the question, I didn't want to leave anything out.
Some background information
I'm trying to automate a data entry process by writing a Python application that uses the Windows API to simulate keystrokes, mouse movement and window/control manipulation. I have to resort to this method because I do not (yet) have the security clearance required to access the datastore/database directly (e.g. using SQL) or indirectly through a better suited API. Bureaucracy, it's a pain ;-)
The data entry process involves the correction of sales orders due to changes in article availability. The unavailable articles are either removed from the order or replaced by another suitable article.
Initially I want a human to be able to monitor the automatic data entry process to make sure everything goes right. To achieve this I slow down the actions on the one hand but also inform the user of what is currently going on through a pinned window.
The actual question
To allow the user to halt the automation process I'm registering the Pause/Break key as a hotkey and in the handler I want to pause the automation functionality. However, I'm currently struggling to figure out a way to properly pause the execution of the automation functionality. When the pause function is invoked I want the automation process to stop dead in its tracks, no matter what it is doing. I don't want it to even execute another keystroke.
UPDATE [23/01]: I actually want to do more than just pause, I want to be able to communicate with the automation process while it is running and request it to pause, skip the current sales order, give up completely and perhaps even more.
Can anybody show me The Right Way (TM) to achieve what I want?
Some more information
Here's an example of how the automation works (I'm using the pywinauto library):
from pywinauto import application
app = application.Application()
app.start_("notepad")
app.Notepad.TypeKeys("abcdef")
UPDATE [25/01]: After a few days of working on my application I've noticed I don't really use pywinauto that much; right now I'm only using it for finding windows, and then I directly use SendKeysCtypes.SendKeys to simulate keyboard input and win32api functions to simulate mouse input.
What I've found out so far
Here are a few methods I've come across so far in my search for an answer:
I could separate the automation functionality and the interface + hotkey listener in two separate processes. Let's refer to the former as "automator" and the latter as "manager". The manager can then pause the execution of the automator by sending the process a SIGSTOP signal and unpause it using the SIGCONT signal (or the Windows equivalents through SuspendThread/ResumeThread).
To be able to update the user interface the automator will need to inform the manager of its progression through some sort of an IPC mechanism.
Cons:
Would using SIGSTOP not be a little harsh? Would it even work properly? Lots of people seem to be advising against it and even calling it "dangerous".
I am worried that implementing the IPC mechanism is going to be a bit complicated. On the other hand, I have worked with DBus which wouldn't be too hard to implement.
The second method and one that lots of people seem to be suggesting involves using threads and essentially boils down to the following (simplified):
while True:
    while self.pause:      # wait here (simplified) until unpaused
        time.sleep(0.1)
    # Do the work...
However, doing it this way, it seems it will only pause after the current work is done. The only way I see this method working would be to divide the work (the entire automation process) into smaller segments (i.e. tasks). Before starting a new task, the worker thread would check if it should pause and wait.
Cons:
It seems like an implementation that divides the work into smaller segments, such as the one above, would be very ugly code-wise (aesthetically).
The way I imagine it, all statements would be transformed to look something like: queue.put((function, args)) (e.g. queue.put((app.Notepad.TypeKeys, "abcdef"))) and you'd have the automating process thread running through the tasks and continuously checking for the pause state before starting a task. That just can't be right...
The program would not actually stop dead in its tracks, but would first finish a task (however small) before actually pausing.
Progress made
UPDATE [23/01]: I've implemented a version of my application using the first method through the mentioned SuspendThread/ResumeThread functionality. So far this seems to work very nicely and also allows me to write the automation stuff just like you'd write any other script. The only quirk I've come across is that keyboard modifiers (CTRL, ALT, SHIFT) get "stuck" while paused. Something I can probably easily work around.
I've also written a test using the second method (threads and signals/message passing) and implemented the pause functionality. However, it looks really ugly (both checking for the pause flag and everything related to the "doing the work"). So if anybody can show me a proper example of something similar to the second method I'd appreciate it.
Related questions
Pausing a process?
Pausing a thread using threading class
Alex Martelli posted an answer saying:
There is no method for other threads to forcibly pause a thread (any more than there is for other threads to kill that thread) -- the target thread must cooperate by occasionally checking appropriate "flags" (a threading.Condition might be appropriate for the pause/unpause case).
He then referred to the multiprocessing module and SIGSTOP/SIGCONT.
Is there a way to indefinitely pause a thread?
Pausing a process in Windows
An answer to this question quotes the MSDN documentation regarding SuspendThread:
This function is primarily designed for use by debuggers. It is not intended to be used for thread synchronization. Calling SuspendThread on a thread that owns a synchronization object, such as a mutex or critical section, can lead to a deadlock if the calling thread tries to obtain a synchronization object owned by a suspended thread. To avoid this situation, a thread within an application that is not a debugger should signal the other thread to suspend itself. The target thread must be designed to watch for this signal and respond appropriately.
Is there any way to kill a Thread in Python?
How do I pass an exception between threads in python
Keep in mind that although at your level of abstraction "executing a keystroke" is a single atomic operation, it's implemented on the machine as a rather complicated sequence of machine instructions. So, pausing a thread at arbitrary points could leave things in an indeterminate state. Sending SIGSTOP is just as dangerous as pausing a thread at an arbitrary point. Depending on where you are in a particular step, though, your automation could potentially be broken. For example, if you pause in the middle of a timing-dependent step.
It seems to me that this problem would be best solved at the level of the automation library. I'm not very familiar with the automation library that you're using. It might be worth contacting the developers of the library to see if they have any suggestions for pausing the execution of automation steps at safe sub-step levels.
I don't know pywinauto. But I'll assume that you have something like an Application class which you obtain, with methods like SendKeys/SendMouseEvent/etc. to do things.
Create your own MyApplication class which holds a reference to pywinauto's application class. Provide the same methods but before each method check whether a pause event has occurred. If it has, you can jump into code which handles the pause event. That way you are checking for a pause every time you cause an event, but this all is handled by the one class without putting pause all over your code.
Once you've detected the pause you can handle it any way you like. For example, you can throw an exception to force giving up on the current task.
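A sketch of how that wrapper might look (all names are mine; the events would be flipped from the hotkey handler):

import threading

class AbortTask(Exception):
    """Raised to give up on the current sales order."""

class MyApplication(object):
    def __init__(self, app):
        self._app = app                  # the real pywinauto Application
        self.unpaused = threading.Event()
        self.unpaused.set()              # start in the running state
        self.abort = threading.Event()

    def _checkpoint(self):
        self.unpaused.wait()             # blocks while paused
        if self.abort.is_set():          # honoured at the next action
            raise AbortTask()

    def type_keys(self, window, keys):
        self._checkpoint()
        self._app[window].TypeKeys(keys)

The hotkey handler then calls unpaused.clear() to pause, unpaused.set() to resume, and abort.set() to make the next action raise.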
Separating the functionality and the interface into separate threads/processes is definitely the best option imho; the second solution is quicker and easier but definitely not better.
Perhaps using multiple threads and an exception would be a better idea than using multiple processes. But if you're using multiple processes then SIGSTOP might be your only way to get it to work.
Is there anything against using 2 threads for this?
1 thread for actually executing
1 thread for reading the user input
I use Python but not pywinauto; for this sort of task I use AutoHotKey. One way to implement a simple pause in an AutoHotkey script may be using a "toggle" key like ScrollLock and testing the key state in the script. Also, the script can restore the key state after switching the internal pause setting on/off.