I want to perform some operations in my application whenever any listed process starts or stops on Windows.
I have read through the subprocess module documentation, but it didn't give me a clue.
Does anyone know which method is called when a new process starts or stops, or which os module method I could override to achieve my goal?
Please help me resolve this.
The best way to do this is to take a snapshot of all running processes at the start of the program, storing them in some sort of array. Then have a thread that periodically re-enumerates the processes and compares the result against that array. If it detects a change, it sends "an interrupt" to the main process.
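A minimal polling sketch of that idea, assuming the third-party psutil package is installed (pip install psutil); the two callback parameters are illustrative:

import time
import psutil

def watch_processes(on_start, on_stop, interval=1.0):
    """Poll the process table and report started/stopped PIDs."""
    known = set(psutil.pids())
    while True:
        time.sleep(interval)
        current = set(psutil.pids())
        for pid in current - known:    # newly started processes
            on_start(pid)
        for pid in known - current:    # processes that have exited
            on_stop(pid)
        known = current

# e.g. watch_processes(on_start=print, on_stop=print)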
I am implementing a Python plugin that is part of a larger C++ program. The goal of this program is to allow the user to input a command's actions in Python. It currently receives a string from the C++ function and runs it via the exec() function. The user can then use an API to effect changes in the larger C++ program.
The current feature I am working on is a pause-execution feature. It needs to remember where it is in the code execution as well as the state of any local variables, and resume execution once a condition has been met. I am not very familiar with Python, and I would like some advice on how to implement this feature. My first design ideas:
1) Using the yield command.
This seemed like a good idea at the start, since when you call next() it remembers everything I needed it to. The problem is that yield only returns to the previous level in the call stack, as far as I can tell. So if the user calls a function that yields, it will simply return to the user's code, and not to the larger C++ program. As far as I can tell, there isn't a way to propagate the yield up the stack?
2) Threading
Create a main Python thread that spawns a thread for each command executed and kills it when it is done. If a command needs to be suspended and restarted, this could be done through a queue of locks (see the sketch after the next paragraph).
Those were the only two options I came up with. I am not sure the yield approach would work, or whether that is what it was designed for. I think the threading approach would work, but it might be overkill and take a long time to develop. I also looked for some sort of task module in Python, but couldn't find exactly what I was looking for. I was wondering if anyone has any other suggestions, as I am not very familiar with Python.
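For reference, a rough Python 3 sketch of idea 2, using threading.Event rather than a queue of locks; command_body stands in for the user's code:

import threading

class PausableCommand(threading.Thread):
    def __init__(self, command_body):
        super().__init__(daemon=True)
        self._resume = threading.Event()
        self._resume.set()                 # start out unpaused
        self.command_body = command_body

    def checkpoint(self):
        """Called by the command between steps; blocks while paused."""
        self._resume.wait()

    def pause(self):
        self._resume.clear()

    def resume(self):
        self._resume.set()

    def run(self):
        self.command_body(self.checkpoint)

# The user's code would call checkpoint() wherever it may be suspended:
def command_body(checkpoint):
    for step in range(10):
        checkpoint()                       # may block here if paused
        # ... do one step of work ...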
EDIT: As mentioned in the comments, I did not explain what needs to happen when the script "pauses". The Python plugin needs to allow the C++ program to continue execution. In my mind this means A) returning, if we are talking about a single-threaded approach, or B) sending a message (a function call?) to C++.
EDIT EDIT: As stated, I didn't fully explain the problem description. I will make another post that has a better statement of what currently exists and what needs to happen, as well as providing some pseudocode. I am new to Stack Overflow, so if this is not the appropriate response please let me know.
Whenever a signal is delivered to a Python process, execution is immediately paused until the registered signal handler function finishes executing; at that point, execution continues right where it left off. My suggestion would be to use one of the user-defined signals (signal.SIGUSR1 and signal.SIGUSR2). Take a look at the signal documentation here:
https://docs.python.org/2/library/signal.html
At the beginning of the program, you'd define a signal handler function like so:
import os
import signal

def signal_pause(signum, frame):
    if signum == signal.SIGUSR1:
        # Do your pause here - function processing, etc.
        pass
    else:
        pass
Then in the main program somewhere, you'll switch out the default signal handler for the one you just created:
signal.signal(signal.SIGUSR1, signal_pause)
And finally, whenever you want to pause, you'll send the SIGUSR1 signal like so:
os.kill(os.getpid(), signal.SIGUSR1)
Your code will immediately pause, saving its state, and head to the signal_pause function to do whatever you need to do. Once that function exits, normal program execution will resume.
EDIT: this assumes you want to do something sophisticated while you're pausing the program. If all you want to do is wait a few seconds or ask for some user input, there are some much easier ways (time.sleep or input respectively).
EDIT EDIT: this assumes you're on a Unix system.
If you need to communicate with the C++ program, then sockets are probably the way to go.
https://docs.python.org/2/library/socket.html
One of your two programs acts as the socket server, and the other connects to it as the socket client. When you want the C++ program to continue, you use socket.send() to transmit a continue message. Then your Python program would use socket.recv(), which will cause it to wait around until it receives a message back from the C++ program.
If you need two programs to send signals to each other, this is probably the safest way to go about it.
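A minimal sketch of that handshake; the host, port, and the b"continue" message are illustrative choices, not a fixed protocol:

import socket

# Python side: connect to the C++ program, which acts as the server.
sock = socket.create_connection(("127.0.0.1", 5000))
sock.sendall(b"continue")    # tell the C++ side to keep going
reply = sock.recv(1024)      # block until the C++ side answers
sock.close()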
I have several scripts that I use to do some web crawling. They are always running, and should never stop. However, after about a week, they systematically "freeze": there is no output anymore, no response to Ctrl+C or anything. The only way is to kill the process and restart it.
I suspect that these issues come from the library I use for retrieving the data (urllib2), but the issue is very hard to reproduce.
I am thus wondering how I could check the state of the process and kill/restart it automatically if it is frozen. I was thinking of creating a PID file and updating it regularly. Another script could then periodically check the last modification date of this PID file and restart the process if it is too old. I could use something like Monit to do the monitoring.
Is this how I should do it? Is there another best practice/common way for checking the responsiveness of a process?
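A minimal heartbeat sketch of what I have in mind; the file path and the five-minute threshold are illustrative:

import os
import time

HEARTBEAT = "/tmp/crawler.heartbeat"

def crawl_forever():
    while True:
        # ... fetch and process one batch of pages ...
        with open(HEARTBEAT, "w") as f:
            f.write(str(os.getpid()))    # touch the heartbeat file

# Watchdog side: restart the crawler if the heartbeat goes stale.
def is_frozen(max_age=300):
    try:
        return time.time() - os.path.getmtime(HEARTBEAT) > max_age
    except OSError:
        return True                      # no heartbeat file yet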
If you have a process that is always running, has no connected terminal, and is the process group leader - that is a daemon. You undoubtedly know all that.
There are some de facto practices in coding programs like that. One is to have a signal handler that takes SIGHUP and forces the program to reinitialize itself. This means closing all of the open log files, rereading config files, etc. I do not know how applicable that is to your problem, but it sometimes solves issues like frozen daemons at my work.
You can customize the idea by employing the SIGUSR1 and SIGUSR2 signals to do special things, like writing status to a file, or anything else. Since signals arrive asynchronously, the trap statement in shell scripts and signal handlers in Python itself will interrupt the running program, save its state, run the handler, and then resume where it left off.
In your case you may want the program to fork/exec itself and then kill the parent.
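A minimal sketch of the SIGHUP-reinitialize pattern, assuming a Unix system; the two helper functions are placeholders for whatever your daemon needs to reset:

import signal

def reopen_logs():
    pass    # close and reopen log files here

def reload_config():
    pass    # reread configuration files here

def handle_sighup(signum, frame):
    reopen_logs()
    reload_config()

signal.signal(signal.SIGHUP, handle_sighup)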
I have already read
http://wiki.wxpython.org/LongRunningTasks
http://wiki.wxpython.org/CallAfter
and searched a lot on Google, but found no answer to my problem. Because in my opinion it would be too much code and it is more a theoretical problem, I hope it is OK without code.
What I want to do with an example: I have a grid (wx.grid) with check boxes in the main thread. Then I start a new thread (thread.start_new_thread) where I go through all rows (1 second per row) and check if the checkbox is set. If it is set, some job is done.
This works if I read all the rows before I start the thread. But I need to read them while the thread is running, because the user should have the ability to uncheck or check another checkbox! But if I read them in the new thread, sometimes a "'NoneType' object is not callable" error is raised. I think this is because wx.CallAfter should be used to interact with the grid from the other thread. But I cannot use CallAfter to get the return value.
I have no idea how to solve this issue. Perhaps someone with more threading experience has an idea? If you need additional data please ask, but I think my example contains all the necessary information.
A common approach to this type of thing is to use a Queue.Queue object to pass commands to one or more worker threads. The worker thread(s) will wait on a pull from the queue until there are items in the queue ready to be pulled. Part of the command object could be a target in the GUI thread to send a message to (in a thread-safe way, like with wx.CallAfter) when the command is completed.
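A rough sketch of that pattern; Queue is the Python 2 module name (queue in Python 3), and on_row_done is an illustrative callback:

import threading
import Queue
import wx

commands = Queue.Queue()

def worker():
    while True:
        row, on_row_done = commands.get()  # blocks until work arrives
        # ... do the one-second job for this row ...
        wx.CallAfter(on_row_done, row)     # touch the GUI only via CallAfter
        commands.task_done()

t = threading.Thread(target=worker)
t.daemon = True
t.start()

# From the GUI thread, enqueue plain data (never widget references):
# commands.put((row_index, on_row_done))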
You should also take a look at the wx.lib.delayedresult module. It is similar to the above but a little more capable and robust.
I am working on a web service that requires user-input Python code to be executed on my server (we have checks for code injection). I have to import a rather large module, so I would like to make sure that I am not starting up Python and importing the module from scratch each time something runs (it takes about 4-6 s).
To do this I was planning to create a Python (3.2) daemon that imports the user-input code as a module, executes it, and then deletes/garbage-collects that module. I need to make sure that the module is completely gone from RAM, since this process will continue until the server is restarted. I have read a number of things that say this is a very difficult thing to do in Python.
What is the best way to do this? Would it be better to use exec to define a function with the user-input code (for variable scoping), then execute that function and somehow remove the function? Or is there a better way to do this process that I have missed?
You could perhaps consider creating a pool of Python daemon processes.
Their purpose would be to serve one request each and then die.
You would have to write a pool manager that ensures there are always X daemon processes waiting for an incoming request (X being the number of waiting daemon processes, chosen according to the expected workload). The pool manager would have to observe the pool of daemon processes and start new instances every time a process finishes.
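A minimal sketch of the one-request-per-process idea, assuming a Unix fork start method so the large module is imported only once, in the parent; the names are illustrative:

import multiprocessing

# import big_module   # hypothetical large module, imported once, up front

def worker(source):
    # On fork-based platforms this process inherits big_module from the
    # parent, so the 4-6 s import cost is not paid again per request.
    namespace = {}
    exec(source, namespace)

def handle_request(source):
    p = multiprocessing.Process(target=worker, args=(source,))
    p.start()
    p.join()    # everything the user code created is reclaimed when p exits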
I would like to create a subprocess.Popen object from an already running process... Is that possible somehow?
Another idea would be to serialize (pickle) the subprocess object and write it to a database so that if the main process restarts it could get the subprocess.Popen objects back from the database. I'm unsure if that works.
create a subprocess.Popen object from an already running process
Do you mean from an already running sub-process? The only way I know of to pass objects between processes is to pickle them and write them out either to a file or a database as you suggested.
Typically, sub-processes cannot be spawned from already running sub-processes, but you can keep a reference to the new process you want to create and spawn it from the main process. This could get really ugly, and I strongly suggest against it. Why, specifically, do you need to extend your process tree beyond two levels deep? This info might lead to a better answer.
Assuming you want to communicate with the "subprocess" and must do so using its standard i/o streams, you could create a wrapper around the executable that maps its stdin/out/err to a socket or named pipe.
The program that intends to control the "subprocess" can then start and stop communications at any time. You may have to provide for a locking mechanism too.
Then, assuming you're on Linux, you can access the stdin/out/err of a running process through /proc/<pid>/fd/<0,1,2>. You won't connect these to a subprocess.Popen object but open('/proc/<pid>/fd/1', 'rb') will behave like Popen().stdout.
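A small sketch of that /proc trick (Linux only); the PID is an illustrative stand-in for the process you want to attach to:

pid = 12345    # hypothetical PID of the already running process
out = open('/proc/%d/fd/1' % pid, 'rb')    # reads like Popen().stdout
line = out.readline()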