Suspend embedded Python script execution

I have an app that embeds Python scripting.
I'm adding calls to Python from C, and my problem is that I need to suspend the script's execution, let the app run, and then restore execution from where it was suspended.
The idea is that Python would call, say, a "WaitForData" function; at that point the script must suspend (pause) and the call bail out so that the app's event loop can continue. When the necessary data is present, I would like to resume execution of the script, so that it looks as if the Python call simply returned at that point.
I'm running single-threaded Python.
Any ideas how I can do this, or something similar, so that the app's event loop gets to run before the Python call exits?

Well, the only way I could come up with is to run the Python engine on a separate thread, so the main thread is blocked while the Python thread is running.
When I need to suspend, I block the Python thread and let the main thread run. When the data is ready, in the main thread's OnIdle, I block the main thread and let Python continue.
It seems to be working fine.
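A minimal Python-only sketch of that hand-off, using two threading.Event objects (the function names are illustrative; in the real app the same blocking would be driven from the C side):

import threading

# One gate per side; exactly one side is allowed to run at a time.
app_may_run = threading.Event()
script_may_run = threading.Event()

def wait_for_data():
    # Called on the script thread: suspend the script, wake the app.
    script_may_run.clear()   # close our own gate first
    app_may_run.set()        # then let the app's event loop run
    script_may_run.wait()    # block here until resume_script() is called

def resume_script():
    # Called on the app's main thread (e.g. in OnIdle) once data is ready.
    app_may_run.clear()
    script_may_run.set()     # let the script continue past wait_for_data()
    app_may_run.wait()       # block the main thread while the script runs

The trick is that each side closes its own gate before opening the other's, so control ping-pongs back and forth with only one thread ever running at a time.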

Related

Terminating a subprocess in python flask

I have a Flask application in which I have to run a bash script. On visiting a particular link (say localhost:5000/B), I execute the script using Python's subprocess library, then call wait() on the subprocess so that the script finishes before the tasks that depend on it run. After finishing those remaining tasks, I return the results in the response by rendering a template.
Sometimes I might navigate back from the page or press the cancel button (at the top of the browser). In that case I want to stop the script even if it has not completed. I have added JavaScript to the page so that when I leave it, it makes a GET request to the server at localhost:5000/C, and in the function handling that request I terminate the subprocess.
But for some reason this does not work, even when using the kill() or terminate() methods.
Can we terminate a subprocess on which we have called wait(), or not?
If there is a better way of doing this, kindly let me know.
Thanks
It turned out the subprocess was spawning another subprocess during execution, and that child process was what kept running: terminate() and kill() only signal the process you started directly, not its children.
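A common POSIX fix for that situation is to start the script in its own process group and signal the whole group; a sketch (the script name is illustrative):

import os
import signal
import subprocess

# os.setsid puts the child in its own process group, so we can later
# signal the script and every process it spawned in one call.
proc = subprocess.Popen(['bash', 'myscript.sh'], preexec_fn=os.setsid)

def cancel():
    os.killpg(os.getpgid(proc.pid), signal.SIGTERM)  # kill the whole group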

Find out what is still running after I CTRL+C the python program

After I CTRL+C to kill the Python program, I found that most of the threads and processes have been terminated (they are watching some flags), but some are still running in the background without any output to the console. Is there a way I can investigate which ones are still running?
If you are going to suggest printing something while a thread/process is running, that won't work here. First, some of the threads/processes are running methods from third-party libraries, for example websocket.run_forever. Second, for the threads/processes running my own methods, I'm pretty sure they monitor a flag and all quit once it's set. For the threads/processes running third-party methods, I call methods such as websocket.close() to terminate them, and I do see them terminate. So it's confusing to me what is still running.
You can call this wonderful dumpstacks function, which prints the current traceback of every running thread.
If your main thread (or another thread under your control) is still running at that point (e.g. waiting for other threads to finish before quitting), add the function call there.
Another option is to attach a pdb debugger to the running process and then run dumpstacks.
That should give you a very good idea of what is still running.
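In case the link goes stale: a minimal version of such a function can be built on CPython's sys._current_frames() (a sketch, not the exact function the answer links to):

import sys
import threading
import traceback

def dumpstacks():
    # Map thread ids to names so the output is readable.
    names = dict((t.ident, t.name) for t in threading.enumerate())
    for thread_id, frame in sys._current_frames().items():
        print("--- Thread %s (%s) ---" % (names.get(thread_id), thread_id))
        traceback.print_stack(frame)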
Here is what I finally did. It doesn't directly answer the question, but it solved my problem.
First, after I CTRL+C the program, I run
ps aux | grep -i "myProgram.py"
and find that only one process is still running (there were more than five before the CTRL+C).
Next, I make all the threads I create in the program daemon threads, especially those running in the main process.
Then, I run
import threading

threads = threading.enumerate()
for _t in threads:
    print _t.name        # thread name
    print _t.isAlive()   # is it still running?
    print _t.isDaemon()  # will it block interpreter exit?
so as to find anything still running after I catch the KeyboardInterrupt (from CTRL+C) and do the cleanup (set the thread/process terminate flags).
Now the program exits gracefully after I CTRL+C, and nothing keeps running in the background.

Best way to stop a Python script even if there are Threads running in the script

I have a python program run_tests.py that executes test scripts (also written in python) one by one. Each test script may use threading.
The problem is that when a test script unexpectedly crashes, it may not get a chance to tidy up all of its open threads (if any), so the test script cannot actually complete while those threads are left hanging open. When this occurs, run_tests.py gets stuck because it is waiting for the test script to finish, but it never does.
Of course, we can do our best to catch all exceptions and ensure that all threads are tidied up within each test script so that this scenario never occurs, and we can also set all threads to daemon threads, etc., but what I am looking for is a "catch-all" mechanism at the run_tests.py level that ensures we do not get stuck indefinitely due to unfinished threads within a test script. We can set guidelines for how threading is to be used in each test script, but at the end of the day we don't have full control over how each test script is written.
In short, what I need is a way to stop a test script from run_tests.py even when there are rogue threads open within it. One way is to execute the shell command killall -9 <test_script_name> or something similar, but this seems too forceful/abrupt.
Is there a better way?
Thanks for reading.
To me, this looks like a textbook application for the subprocess module.
That is, do not run the test scripts within the same Python interpreter; rather, spawn a new process for each test script. Is there any particular reason you want to run them all in one interpreter? Running each script in a subprocess isolates the scripts from each other: imports, global variables, and so on.
If you use subprocess.Popen to start the subprocesses, you get a .terminate() method to kill the process if need be.
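A sketch of what the run_tests.py side could look like with a simple watchdog loop instead of a bare wait() (the timeout value is illustrative):

import subprocess
import time

TIMEOUT = 60.0  # seconds per test script; pick what suits your suite

def run_test(script):
    proc = subprocess.Popen(['python', script])
    deadline = time.time() + TIMEOUT
    # Poll instead of wait(), so rogue threads in the test script
    # can never hang run_tests.py indefinitely.
    while proc.poll() is None:
        if time.time() > deadline:
            proc.terminate()      # ask politely first
            time.sleep(2)
            if proc.poll() is None:
                proc.kill()       # then force it
            break
        time.sleep(0.5)
    return proc.wait()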
What I actually needed to do was tidy up all threads at the end of each test script rather than at the run_tests.py level. I don't have control over the main functions of each test script, but I do have control over the tidy up functions.
So this is my final solution:
import threading

for key, thread in threading._active.iteritems():
    if thread.name != 'MainThread':
        # _Thread__stop() is a private CPython 2 internal: it only
        # marks the thread as stopped, it does not actually kill it.
        thread._Thread__stop()
I don't actually need to stop the threads. I simply need to mark them as stopped with _Thread__stop() so that the test script can exit. I hope others find this useful.

Listening for subprocess failure in python

Using subprocess.Popen(), I'm launching a process that is supposed to take a long time. However, there is a chance that the process will fail shortly after it launches (producing a return code of 1). If that happens, I want to intercept the failure and present an explanatory message to the user. Is there a way to "listen" to the process and respond if it fails? I can't just use Popen.wait() because my Python program has to keep running.
The hack I have in place right now is to time.sleep() my Python program for 0.5 seconds (which should be enough time for the subprocess to fail, if it is going to). After the Python program resumes, it polls the subprocess to determine whether it has failed.
I imagine that a better solution might use threading and Popen.wait(), but I'm a relative beginner to python.
Edit:
The subprocess is a Java daemon that I'm launching. If another instance of the daemon is already running on the system, the Java subprocess will exit with a return code of 1, and I want to intercept the messy Java exception stack trace and present an understandable error message to the user.
Two approaches:
Call Popen.wait() on a thread, as you suggested yourself, then call an error-handler function if the exit code is non-zero. Make sure the error handler is thread-safe, preferably by dispatching the error message to the main thread if your application has an event loop.
Rewrite your application to use an event loop that already supports monitoring child processes, such as pyev. If you just want to monitor one subprocess, this is probably overkill.
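A sketch of the first approach; launch_daemon and on_failure are illustrative names, and how you dispatch back to your main thread is up to your application:

import subprocess
import threading

def launch_daemon(cmd, on_failure):
    proc = subprocess.Popen(cmd)

    def watch():
        # wait() blocks, but only this watcher thread; the rest
        # of the program keeps running.
        code = proc.wait()
        if code != 0:
            on_failure(code)  # e.g. queue a message for the main thread

    watcher = threading.Thread(target=watch)
    watcher.daemon = True  # don't keep the program alive just to watch
    watcher.start()
    return proc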

User Input Python Script Executing Daemon

I am working on a web service that requires user-submitted Python code to be executed on my server (we have checks for code injection). I have to import a rather large module, so I would like to make sure that I am not starting up Python and importing that module from scratch each time something runs (the import takes about 4-6 s).
To do this I was planning to create a Python (3.2) daemon that imports the user's code as a module, executes it, and then deletes/garbage-collects that module. I need to make sure the module is completely gone from RAM, since the daemon will keep running until the server is restarted. I have read in several places that this is a very difficult thing to do in Python.
What is the best way to do this? Would it be better to use exec to define a function from the user's code (for variable scoping), execute that function, and then somehow remove the function? Or is there a better approach that I have missed?
You could consider creating a pool of Python daemon processes.
Each process would serve one request and then die, so everything the user code allocated is reclaimed by the operating system rather than having to be scrubbed out of a long-lived interpreter.
You would have to write a pool manager that ensures there are always X daemon processes waiting for an incoming request (X depending on the expected workload). The pool manager would observe the pool and start a new instance every time a process finishes. A rough sketch follows.
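This is one way the pieces could fit together, using multiprocessing; big_module, the request queue, and the pool size are all illustrative stand-ins:

import multiprocessing
import time

POOL_SIZE = 4  # number of warm workers to keep waiting

def worker(requests):
    import big_module            # hypothetical: the slow 4-6 s import, paid once per worker
    source = requests.get()      # block until one request arrives
    exec(source, {'big_module': big_module})
    # The process exits here, so the user code's memory is freed
    # by the OS; no in-process cleanup is needed.

def pool_manager(requests):
    workers = []
    while True:
        workers = [w for w in workers if w.is_alive()]
        while len(workers) < POOL_SIZE:  # top the pool back up
            w = multiprocessing.Process(target=worker, args=(requests,))
            w.start()
            workers.append(w)
        time.sleep(0.5)  # simple poll; join()/sentinels would be cleaner

if __name__ == '__main__':
    requests = multiprocessing.Queue()
    # pool_manager(requests) typically runs on its own thread, while
    # the web service hands work over with requests.put(user_code).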
