I'm using a commercial application that uses Python as part of its scripting API. One of the functions provided is something called App.run(). When this function is called, it starts a new Java process that does the rest of the execution. (Unfortunately, I don't really know what it's doing under the hood as the supplied Python modules are .pyc files, and many of the Python functions are SWIG generated).
The trouble I'm having is that I'm building the App.run() call into a larger Python application that needs to do some guaranteed cleanup code (closing a database, etc.). Unfortunately, if the subprocess is interrupted with Ctrl+C, it aborts and returns to the command line without returning control to the main Python program. Thus, my cleanup code never executes.
So far I've tried:
Registering a function with atexit... doesn't work
Putting cleanup in a class __del__ destructor... doesn't work. (App.run() is inside the class)
Creating a signal handler for Ctrl+C in the main Python app... doesn't work
Putting App.run() in a Thread... results in a Memory Fault after the Ctrl+C
Putting App.run() in a Process (from multiprocessing)... doesn't work
Any ideas what could be happening?
This is just an outline, but something like this?
import os

cpid = os.fork()
if not cpid:
    # child: change stdio handles etc.
    os.setsid()  # probably not needed
    App.run()
    os._exit(0)
os.waitpid(cpid, 0)
# clean up here
(os.fork is *nix only)
The same idea could be implemented with subprocess in an OS-agnostic way. The idea is to run App.run() in a child process and then wait for the child process to exit, regardless of how it died. On POSIX, you could also trap SIGCHLD (child process death); a sketch follows below. I'm not a Windows guru, so if that applies and subprocess doesn't work, someone else will have to chime in here.
After App.run() is called, I'd be curious what the process tree looks like. It's possible it's running an exec and taking over the Python process space. If that's happening, creating a child process is the only way I can think of to trap it.
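If you do trap SIGCHLD, a minimal sketch might look like this (POSIX only; cleanup() is a placeholder for your real cleanup code):
import os
import signal

def on_child_exit(signum, frame):
    # reap the dead child so it doesn't linger as a zombie
    os.waitpid(-1, os.WNOHANG)
    cleanup()  # placeholder for the guaranteed cleanup

signal.signal(signal.SIGCHLD, on_child_exit)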
If try: App.run() finally: cleanup() doesn't work, you could try running it in a subprocess:
import sys
from subprocess import call
rc = call([sys.executable, 'path/to/run_app.py'])
cleanup()
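Here run_app.py would just be a thin wrapper around the blocking call, something like (the import is hypothetical; use whatever module your scripting API actually provides):
from vendor_api import App  # hypothetical: however your API exposes App
App.run()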
Or if you have the code in a string, you could use the -c option, e.g.:
rc = call([sys.executable, '-c', '''import sys
print(sys.argv)
'''])
You could implement @tMC's suggestion using subprocess by adding the
preexec_fn=os.setsid argument (note: no ()), though I don't see how creating a new process group might help here. Or you could try the shell=True argument to run it in a separate shell.
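A minimal sketch of the preexec_fn variant, reusing the hypothetical run_app.py wrapper from above (POSIX only):
import os
import sys
from subprocess import call

# os.setsid is passed uncalled; it runs in the child just before exec
# and puts the child in its own session
rc = call([sys.executable, 'path/to/run_app.py'], preexec_fn=os.setsid)
cleanup()  # placeholder for the guaranteed cleanup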
You might give another try to multiprocessing:
import multiprocessing as mp

if __name__ == "__main__":
    p = mp.Process(target=App.run)
    p.start()
    p.join()
    cleanup()
Are you able to wrap the App.run() call in a try/except?
Something like:
try:
    App.run()
except (KeyboardInterrupt, SystemExit):
    print "User requested an exit..."
    cleanup()
Related
I am developing a wrapper around docker compose with python.
However, I struggle with Popen.
Here is how I launch it:
import signal
import subprocess as sp

argList = ['docker-compose', 'up']
env = {'HOME': '/home/me/somewhere'}
p = sp.Popen(argList, env=env)

def handler(signum, frame):
    p.send_signal(signum)

for s in (signal.SIGINT,):
    signal.signal(s, handler)  # to redirect Ctrl+C

p.wait()
Everything works fine: when I hit Ctrl+C, docker-compose gracefully kills the containers. However, p.wait() never returns...
Any hints?
NOTE: While writing the question, I thought I should check whether p.wait() actually returns and whether the script blocks after it (it's the last instruction in the script). Adding a print after it results in the process exiting normally; any further hints on this behavior?
When I run your code as written, it works as intended in that it causes docker-compose to exit and then p.wait() returns. However, I occasionally see this behavior:
Killing example_service_1 ... done
ERROR: 2
I think that your code may end up delivering SIGINT twice to docker-compose. That is, I think docker-compose receives an initial SIGINT when you type CTRL-C, because it has the same controlling terminal as your Python script, and then you explicitly deliver another SIGINT in your handler function.
I don't always see this behavior, so it's possible my explanation is incorrect.
In any case, I think the correct solution here is simply to ignore SIGINT in your Python code:
import signal
import subprocess
argList = ["docker-compose", "up"]
p = subprocess.Popen(argList)
signal.signal(signal.SIGINT, signal.SIG_IGN)  # ignore Ctrl+C in this process
p.wait()
With this implementation, your Python code ignores the SIGINT generated by CTRL-C, but it is received and processed normally by docker-compose.
I'm trying to port a shell script to a much more readable Python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in Python? I'd like these processes to keep running when the Python script completes. I am sure it's related to the concept of a daemon somehow, but I couldn't find how to do this easily.
While jkp's solution works, the newer way of doing things (and the way the documentation recommends) is to use the subprocess module. For simple commands it's equivalent, but it offers more options if you want to do something complicated.
Example for your case:
import subprocess
subprocess.Popen(["rm","-r","some.file"])
This will run rm -r some.file in the background. Note that calling .communicate() on the object returned from Popen will block until it completes, so don't do that if you want it to run in the background:
import subprocess
ls_output = subprocess.Popen(["sleep", "30"])
ls_output.communicate() # Will block for 30 seconds
See the documentation here.
Also, a point of clarification: "Background" as you use it here is purely a shell concept; technically, what you mean is that you want to spawn a process without blocking while you wait for it to complete. However, I've used "background" here to refer to shell-background-like behavior.
Note: This answer is less current than it was when posted in 2009. Using the subprocess module shown in other answers is now recommended in the docs
(Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions.)
If you want your process to start in the background you can either use system() and call it in the same way your shell script did, or you can spawn it:
import os
os.spawnl(os.P_DETACH, 'some_long_running_command', 'some_long_running_command')
(or, alternatively, you may try the more portable os.P_NOWAIT flag; os.P_DETACH is Windows-only).
See the documentation here.
You probably want the answer to "How to call an external command in Python".
The simplest approach is to use the os.system function, e.g.:
import os
os.system("some_command &")
Basically, whatever you pass to the system function will be executed the same as if you'd passed it to the shell in a script.
I found this here:
On Windows (Win XP), the parent process will not finish until longtask.py has finished its work. That is not what you want in a CGI script. The problem is not specific to Python; the PHP community has the same problems.
The solution is to pass DETACHED_PROCESS Process Creation Flag to the underlying CreateProcess function in win API. If you happen to have installed pywin32 you can import the flag from the win32process module, otherwise you should define it yourself:
import sys
import subprocess

DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
Use subprocess.Popen() with the close_fds=True parameter, which will allow the spawned subprocess to be detached from the Python process itself and continue running even after Python exits.
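For example, a minimal sketch (the script name is a placeholder):
import subprocess

# close_fds=True keeps the child from inheriting the parent's file descriptors
subprocess.Popen(["python", "longtask.py"], close_fds=True)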
https://gist.github.com/yinjimmy/d6ad0742d03d54518e9f
import os, time, sys, subprocess

if len(sys.argv) == 2:
    # child: do the background work
    time.sleep(5)
    print 'track end'
    if sys.platform == 'darwin':
        subprocess.Popen(['say', 'hello'])
else:
    # parent: relaunch this script as a detached child, then exit
    print 'main begin'
    subprocess.Popen(['python', os.path.realpath(__file__), '0'], close_fds=True)
    print 'main end'
Both capture output and run in the background with threading
As mentioned in this answer, if you capture the output with stdout= and then try to read(), the process blocks.
However, there are cases where you need this. For example, I wanted to launch two processes that talk over a port between them, and save their stdout to both a log file and stdout.
The threading module allows us to do that.
First, have a look at how to do the output redirection part alone in this question: Python Popen: Write to stdout AND log file simultaneously
Then:
main.py
#!/usr/bin/env python3
import os
import subprocess
import sys
import threading

def output_reader(proc, file):
    # copy the child's stdout byte-by-byte to both our stdout and the log file
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            file.buffer.write(byte)
        else:
            break

with subprocess.Popen(['./sleep.py', '0'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc1, \
     subprocess.Popen(['./sleep.py', '10'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc2, \
     open('log1.log', 'w') as file1, \
     open('log2.log', 'w') as file2:
    t1 = threading.Thread(target=output_reader, args=(proc1, file1))
    t2 = threading.Thread(target=output_reader, args=(proc2, file2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
sleep.py
#!/usr/bin/env python3
import sys
import time
for i in range(4):
    print(i + int(sys.argv[1]))
    sys.stdout.flush()
    time.sleep(0.5)
After running:
./main.py
stdout gets updated every 0.5 seconds, two lines at a time:
0
10
1
11
2
12
3
13
and each log file contains the respective log for a given process.
Inspired by: https://eli.thegreenplace.net/2017/interacting-with-a-long-running-child-process-in-python/
Tested on Ubuntu 18.04, Python 3.6.7.
You probably want to start by investigating the os module for forking child processes (open an interactive session and issue help(os)). The relevant functions are fork and the exec family. To give you an idea of how to start, put something like this in a function that performs the fork (the function needs to take a list or tuple 'args' as an argument, containing the program's name and its parameters; you may also want to define stdin, stdout and stderr for the new process):
import os
import sys

try:
    pid = os.fork()
except OSError, e:
    ## some debug output
    sys.exit(1)
if pid == 0:
    ## optionally use os.putenv(..) to set environment variables
    ## os.execv uses args[0] as the new program's argv[0]
    os.execv(args[0], args)
You can use:
import os

pid = os.fork()
if pid == 0:
    # child process: continue with other code here...
This will make the Python process run in the background.
I haven't tried this yet, but using .pyw files instead of .py files should help. .pyw files don't have a console, so in theory the process should not appear and should work like a background process.
Please don't consider this a duplicate before reading: there are a lot of questions about multithreading and keyboard interrupts, but I didn't find any that consider os.system, and that looks like it's important here.
I have a python script which makes some external calls in worker threads.
I want it to exit when I press Ctrl+C, but it looks like the main thread ignores it.
Something like this:
from threading import Thread
import sys
import os

def run(i):
    while True:
        os.system("sleep 10")
        print i

def main():
    threads = []
    try:
        for i in range(0, 3):
            threads.append(Thread(target=run, args=(i,)))
            threads[i].daemon = True
            threads[i].start()
        for i in range(0, 3):
            while True:
                threads[i].join(10)
                if not threads[i].isAlive():
                    break
    except (KeyboardInterrupt, SystemExit):
        sys.exit("Interrupted by ctrl+c\n")

if __name__ == '__main__':
    main()
Surprisingly, it works fine if I change os.system("sleep 10") to time.sleep(10).
I'm not sure what operating system and shell you are using. I'll describe Mac OS X and Linux with zsh (bash/sh should act similarly).
When you hit Ctrl+C, all programs running in the foreground in your current terminal receive the signal SIGINT. In your case it's your main python process and all processes spawned by os.system.
Processes spawned by os.system then terminate their execution. Usually when a Python script receives SIGINT, it raises a KeyboardInterrupt exception, but your main process ignores SIGINT because of os.system(): Python's os.system() calls the standard C function system(), which makes the calling process ignore SIGINT (man Linux / man Mac OS X).
So none of your Python threads receive SIGINT; only the child processes get it.
When you remove os.system() call, your python process stops ignoring SIGINT, and you get KeyboardInterrupt.
You can replace os.system("sleep 10") with subprocess.call(["sleep", "10"]). subprocess.call() doesn't make your process ignore SIGINT.
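A sketch of the question's run() with that substitution:
import subprocess

def run(i):
    while True:
        # unlike os.system, subprocess.call does not make this process
        # ignore SIGINT, so Ctrl+C still reaches the main thread
        subprocess.call(["sleep", "10"])
        print i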
I had this same problem more times than I can count back when I was first learning Python multithreading.
Adding the sleep call within the loop makes your main thread block, which allows it to still hear and honor exceptions. What you want to do is use the Event class to set a flag in your child threads that serves as an exit condition to break execution on (a sketch follows below). You can set this flag in your KeyboardInterrupt handler; just put the except clause for it in your main thread.
I'm not entirely certain what is going on with the different behavior between the Python-specific sleep and the OS-called one, but the remedy I'm offering should work for your desired end result. Just a guess: the OS-called one probably blocks the interpreter itself in a different way?
Keep in mind that in most situations where threads are required, the main thread keeps executing something, in which case the "sleeping" in your simple example would be implied.
http://docs.python.org/2/library/threading.html#event-objects
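A minimal sketch of the Event-based approach described above, reworked from the question's code:
from threading import Thread, Event
import subprocess
import sys

stop = Event()

def run(i):
    # check the exit flag on every iteration instead of looping forever
    while not stop.is_set():
        subprocess.call(["sleep", "10"])
        print i

def main():
    threads = [Thread(target=run, args=(i,)) for i in range(3)]
    for t in threads:
        t.daemon = True
        t.start()
    try:
        while any(t.is_alive() for t in threads):
            for t in threads:
                t.join(1)
    except (KeyboardInterrupt, SystemExit):
        stop.set()  # tell the workers to exit their loops
        sys.exit("Interrupted by ctrl+c\n")

if __name__ == '__main__':
    main()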
I have the following python code:
os.system("C:/Python27/python.exe C:/GUI/TestGUI.py")
sys.exit(0)
It runs the command fine, and a window pops up. However, it doesn't exit the first script. It just stays there, and I eventually have to force kill the process. No errors are produced. What's going on?
Instead of os.system, use subprocess.Popen.
This runs a command without waiting for it to finish, and then exits:
import subprocess
import sys
subprocess.Popen(["mupdf", "/home/dan/Desktop/Sieve-JFP.pdf"])
sys.exit(0)
Note that os.system(command) behaves like:
p = subprocess.Popen(command, shell=True)
p.wait()
KeyboardInterrupts and signals are only seen by the main thread of the process. If your nested command hangs due to some kind of file read or write block, you won't be able to quit the program using any keyboard commands.
Why does a read-only open of a named pipe block?
If you can't eliminate the source of the disk block, then one way is to wrap the process in a thread so you can force-kill it. But if you do this, you leave an opportunity for half-written and corrupted files on disk.
I suggest using os._exit instead of sys.exit, as sys.exit doesn't quit the program but raises a SystemExit exception (or exits a thread); os._exit(-1) quits the entire program.
import sys, subprocess
subprocess.Popen(["C:/Python27/python.exe", "C:/GUI/TestGUI.py"])
sys.exit(0)
Popen from the subprocess module is what you are looking for.
(there is a follow up to this question here)
I am working on trying to write a Python based Init system for Linux but I'm having an issue getting signals to my Python init script. From the 'man 2 kill' page:
The only signals that can be sent to process ID 1, the init process, are those for which init has explicitly installed signal handlers.
In my Python based Init, I have a test function and a signal handler setup to call that function:
import signal

def SigTest(SIG, FRM):
    print "Caught SIGHUP!"

signal.signal(signal.SIGHUP, SigTest)
From another TTY (the init script executes sh on another tty), if I send a signal it is completely ignored and the text is never printed: kill -HUP 1
I found this issue because I wrote a reaping function for my Python init to reap its child processes as they die, but they all just became zombies; it took a while to figure out that Python was never getting the SIGCHLD signal. Just to ensure my environment was sane, I wrote a C program to fork and have the child send PID 1 a signal, and it did register.
How do I install a signal handler the system will acknowledge if signal.signal(SIG, FUNC) isn't working?
I'm going to try using ctypes to register my handler with C code and see if that works, but I'd rather have a pure Python answer if at all possible.
Ideas?
(I'm not a programmer, I'm really in over my head here :p)
Test code below...
import os
import sys
import time
import signal
def SigTest(SIG, FRM):
    print "SIGINT Caught"
    print "forking for ash"
    cpid = os.fork()
    if cpid == 0:
        os.closerange(0, 4)
        sys.stdin = open('/dev/tty2', 'r')
        sys.stdout = open('/dev/tty2', 'w')
        sys.stderr = open('/dev/tty2', 'w')
        os.execv('/bin/ash', ('ash',))
    print "ash started on tty2"

signal.signal(signal.SIGHUP, SigTest)

while True:
    time.sleep(5.0)
Signal handlers mostly work in Python, but there are some problems. One is that your handler won't run until the interpreter re-enters its bytecode loop; if your program is blocked in a C function, the signal handler is not called until that function returns. You don't show the code where you are waiting. Are you using signal.pause()?
Another is that if you are in a system call, you will get an exception after the signal handler returns. You need to wrap all system calls with a retry handler (at least on Linux).
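A sketch of such a retry wrapper (the name is a placeholder; note that Python 3.5+ retries interrupted system calls automatically per PEP 475, so this matters mostly on older versions):
import errno

def retry_on_eintr(func, *args):
    # re-issue the call if it was interrupted by a signal handler
    while True:
        try:
            return func(*args)
        except OSError as e:
            if e.errno != errno.EINTR:
                raise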
It's interesting that you are writing an init replacement... That's something like a process manager. The proctools code might interest you, since it does handle SIGCHLD.
By the way, this code:
import signal

def SigTest(SIG, FRM):
    print "SIGINT Caught"

signal.signal(signal.SIGHUP, SigTest)

while True:
    signal.pause()
Does work on my system.