I am running two Python scripts in parallel on Linux using the following command:
python program01.py & python program02.py
When I stop them with a keyboard shortcut like Ctrl+C, it only closes the program that was last in the command (here, program02.py), and the other one stays running.
How can I close both of them at once with a keyboard command? I sometimes need them to keep running in situations where only a keyboard command is available.
When you run a program with & at the end, it runs in the background. To kill all background jobs, use the command jobs -p | xargs kill -9. There is no bash keyboard shortcut for that.
In that case, you may want to run your command as:
python program01.py & python program02.py &
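If you control how the scripts are launched, another option is a small Python wrapper that starts both and stops them together on Ctrl+C. This is only a sketch: the short sleep commands below stand in for program01.py and program02.py from the question.

```python
import subprocess
import sys

# Stand-ins for `python program01.py` and `python program02.py`:
cmds = [
    [sys.executable, "-c", "import time; time.sleep(0.2)"],
    [sys.executable, "-c", "import time; time.sleep(0.2)"],
]
procs = [subprocess.Popen(c) for c in cmds]

try:
    for p in procs:
        p.wait()          # Ctrl+C arrives here as KeyboardInterrupt
except KeyboardInterrupt:
    for p in procs:
        p.terminate()     # stop *both* children, not just the last one
    for p in procs:
        p.wait()          # reap them
```

Because the wrapper owns both Popen objects, one Ctrl+C in the wrapper's terminal reaches both children.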
There's no such shortcut, but you can make some:
trap 'kill -INT $(jobs -p) &> /dev/null' INT
With this in your .bashrc, Ctrl+C on a command line will interrupt all background jobs.
That means you can hit Ctrl+C to kill the foreground process, and then Ctrl+C again to kill any background processes.
Alternatively,
bind '"\C-b": "\nkill $(jobs -p)\n"'
will let you use Ctrl+B to kill all background jobs, so that you won't do it by accident when frantically hitting Ctrl+C to kill a foreground process.
This is sample code that starts a dummy thread and breaks out of a loop when the thread detects any keyboard input (Enter). I have found this to be a reliable way to stop a program when Ctrl+C doesn't work, but I'm sure there are better solutions.
import threading

def input_thread(flag):
    input()  # blocks until the user presses Enter
    flag.append(None)

dummykeypress = []
threading.Thread(target=input_thread, args=(dummykeypress,), daemon=True).start()
while True:
    if dummykeypress: break  # or exit() to kill the entire program, etc.
I have to execute two operations at the same time, one through a bash script and another through a python script. The simplest way to do it that I've found so far is to create a parent bash script to execute the two in parallel, such as this:
#!/bin/bash
bash process1.sh &
python3 process2.py &
I want to be able to interrupt the two processes at the same time using keyboard interrupt Ctrl+C. I tried adding
trap 'kill %1; kill %2' SIGINT
but the python script does not close as I'd like. In the python script there is a loop that should stop after the keyboard interrupt and perform some more operations after that, something like this
try:
    # do something
except KeyboardInterrupt:
    # Keyboard interrupt (Ctrl+C) detected
    pass
# then do some final operations
but using kill does not propagate the keyboard interrupt to the python script, it just terminates the program as it is.
Is there a way to not kill the child python script, but to propagate the SIGINT to it?
Try this:
trap 'kill -INT %1 %2' INT
kill %2 sends SIGTERM (the default signal) to the Python process, not SIGINT. You want kill -INT inside the trap code.
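You can check this from Python itself: SIGINT is the signal that surfaces as KeyboardInterrupt, so a kill -INT lets the except block and the final operations run. A minimal sketch, in which the process signals itself in place of the shell's kill:

```python
import os
import signal
import time

final_ops_ran = False
try:
    os.kill(os.getpid(), signal.SIGINT)  # same effect as `kill -INT <pid>` from the shell
    time.sleep(1)  # give the signal time to arrive; it interrupts the sleep
except KeyboardInterrupt:
    pass  # the loop from the question would stop here
# then do some final operations
final_ops_ran = True
```

Sending SIGTERM instead (plain kill) would terminate the process without ever reaching the except block.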
I'm trying to build a todo manager in python where I want to continuously run a process in the bg that will alert the user with a popup when the specified time comes. I'm wondering how I can achieve that.
I've looked at some of the answers on StackOverflow and on other sites but none of them really helped.
So, what I want to achieve is to start a background process once the user enters a task and keep it running in the background until its time comes. At the same time there might be other threads running for other tasks as well, each ending at its own end time.
So far, I've tried this:
t = Thread(target=bg_runner, kwargs={'task': task, 'lock_file': lock_file_path})
t.setName("Get Done " + task)
t.start()
t.join()
With this the thread is continuously running, but it runs in the foreground and only exits when the execution is done.
If I add t.daemon = True in the above code, the main thread immediately exits after start() and it looks like the daemon is also getting killed then.
Please let me know how this can be solved.
I'm guessing that you just don't want to see the terminal window after you launch the script. In this case, it is a matter of how you execute the script.
Try these things.
If you are using a windows computer you can try using pythonw.exe:
pythonw.exe example_script.py
If you are using Linux (and possibly macOS) you may want to use nohup in the terminal:
nohup python example_script.py
More or less, the reason you have to do this comes down to how the operating system handles processes. I am not an expert on this subject, but generally if you launch a script from a terminal, that script becomes a child process of the terminal. So if you exit that terminal, it will also terminate any child processes. The only way to get around that is to detach the process from the terminal with something like nohup.
Now if you end up adding the #!/usr/bin/env python shebang line, your os could possibly just run the script without a terminal window if you just double click the script. YMMV (Again depends on how your OS works)
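For completeness, if you are launching the script from another Python program rather than a terminal, subprocess can do roughly what nohup does: start the child in its own session with its output discarded. A sketch; the short sleep stands in for example_script.py.

```python
import subprocess
import sys

# Rough Python equivalent of `nohup python example_script.py &`:
proc = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(0.1)"],  # stand-in script
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,  # detach from this process's session/terminal
)
```

With start_new_session=True the child no longer belongs to the launcher's session, so closing the launcher's terminal does not take the child down with it.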
The first thing you need to do is prevent your script from exiting by keeping the main thread alive with a loop. Note that t.join() would block until the worker finishes, which is exactly the foreground behaviour you're seeing, so drop it:
import time
from threading import Thread

t = Thread(target=bg_runner, kwargs={'task': task, 'lock_file': lock_file_path})
t.name = "Get Done " + task
t.start()

while True:
    time.sleep(1.0)
Then you need to put it in the background:
$ nohup python alert_popup.py >> /dev/null 2>&1 &
You can get more information on controlling a background process at this answer.
Python 3.4
OS - Raspbian Jessie running on a Raspberry Pi 3
NOTE: The program "gnutv" does NOT have a stop command. It only has an option of a timer to stop the recording.
My question:
I'm still fairly new to programming and Python (self/YouTube/books taught). I am writing a program that checks a system for alarms. When an alarm is present, it triggers the program "gnutv" to begin recording video to a file. That was the easy part. I can make the program start, and record video using
Popen(["gnutv", "-out", "file", str(Videofile), str(Channel)])
The program continues to monitor the alarm inputs while the video is recording so it will know when to stop recording. BUT I can't get it to stop recording when the alarm is no longer present. I've attempted to use kill(), terminate(), and others without success (all returned errors indicating I don't know how to use these more complex commands). HOWEVER, I CAN kill the process by switching to the terminal and finding the PID using
pidof 'gnutv'
and then killing it with
kill PID#
So how can I return the PID value I get from the terminal so I can send the kill command to the terminal (again using Popen)?
i.e. - Popen(['kill', 'PID#'])
You don't need to run the kill program itself, you can just call .kill() or .terminate() on the Popen object.
import subprocess
proc = subprocess.Popen(['gnutv', '-out', 'file', str(Videofile), str(Channel)])
# Some time later...
# This is equivalent to running "kill <pid>"
proc.terminate()
# This is equivalent to running "kill -9 <pid>"
proc.kill()
If you really need the PID (hint: you don't), you can get it from the object as well; it's stored in the pid attribute.
print('Spawned gnutv (pid={})'.format(proc.pid))
You really should not be running the kill program, since that program is just a wrapper around the kill() system call in the first place. Just make the call directly, or through the wrapper provided by subprocess.
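If the alarm-monitoring loop needs to check later whether the recording is still running, poll() does that without blocking. A sketch, with a sleeping child standing in for the gnutv command:

```python
import subprocess
import sys

# Stand-in for the gnutv recording process:
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

print(proc.poll())       # None while the child is still running
proc.terminate()         # alarm cleared: ask the child to exit (SIGTERM)
proc.wait()              # reap it so returncode is set
print(proc.returncode)   # on Unix, -15 means "killed by SIGTERM"
```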
thanks for helping!
I want to start and stop a Python script from a shell script. The start works fine, but I want to stop/terminate the Python script after 10 seconds (it's a counter that keeps counting), and it won't stop... I think it is hanging on the first line.
What is the right way to start it, wait 10 seconds, and stop it?
Shell script:
python /home/pi/count1.py
sleep 10
kill /home/pi/count1.py
It's not working yet. I get the point of running the script in the background, and that part works! But I get another error from my Raspberry Pi after doing:
python /home/pi/count1.py &
sleep 10; kill /home/pi/count1.py
/home/pi/sebastiaan.sh: line 19: kill: /home/pi/count1.py: arguments must be process or job IDs
The problem has to be in this line (but what is it? Thanks for helping out!):
sleep 10; kill /home/pi/count1.py
You're right: the shell script "hangs" on the first line until the Python script finishes, and if the Python script never exits, the shell script won't continue. Therefore you have to add & at the end of the command to run it in the background. This way the Python script starts and the shell script continues.
The kill command doesn't take a path, it takes a process id. After all, you might run the same program several times, and then try to kill the first, or last one.
The bash shell supports the $! variable, which is the pid of the last background process.
Your current example script is wrong, because it doesn't run the python job and the sleep job in parallel. Without adornment, the script will wait for the python job to finish, then sleep 10 seconds, then kill.
What you probably want is something like:
python myscript.py & # <-- Note '&' to run in background
LASTPID=$! # Save $! in case you do other background-y stuff
sleep 10; kill $LASTPID # Sleep then kill to set timeout.
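If you ever move the launcher itself into Python, the sleep-then-kill pattern becomes wait() with a timeout. A sketch; the sleeping child stands in for count1.py, and the timeout is shortened from the question's 10 seconds:

```python
import subprocess
import sys

# Stand-in for `python /home/pi/count1.py`:
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
try:
    proc.wait(timeout=0.5)   # like `sleep 10` (shortened here) while the job runs
except subprocess.TimeoutExpired:
    proc.terminate()         # like `kill $LASTPID`
    proc.wait()
```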
You can terminate any process from any other one, provided the OS lets you do it, i.e. as long as it isn't some critical process belonging to the OS itself.
The kill command uses a PID to identify the process, not the process's name or command.
Use pkill if you want to kill by name instead.
You can also send it a different signal instead of SIGTERM (the request to terminate a program), one that you detect inside your Python application and respond to.
For instance, you may wish to check if the process is alive and get some data from it.
To do this, choose one of the user custom signals (SIGUSR1 or SIGUSR2) and register a handler for it within your Python program using the signal module.
To see why your script hangs, see Austin's answer.
I want to submit my long-running Python job using an ampersand. I'm going to kick this process off from an interactive Python program using a subprocess call.
How would I keep track of the submitted job programmatically in case I want to end the job from a menu option?
Example of interactive program:
Main Menu
1. Submit long running job &
2. End long running job
If you're using Python's subprocess module, you don't really need to background it again with &, do you? You can just keep your Popen object around to track the job, and it will run while the other Python process continues.
If your "outer" Python process is going to terminate, what sort of track do you need to keep? Would pgrep/pkill be suitable? Alternatively, you could have the long-running job log its PID, often under /var/run somewhere, and use that to check whether the process is still alive and/or to signal it.
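The PID-file approach can be sketched like this. The path, filename, and the sleeping child are made-up stand-ins (/var/run usually needs root, so a temp directory is used instead):

```python
import os
import signal
import subprocess
import sys
import tempfile

pid_path = os.path.join(tempfile.gettempdir(), "longjob.pid")  # stand-in for /var/run/...

# Menu option 1: submit the long-running job and record its PID.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
with open(pid_path, "w") as f:
    f.write(str(proc.pid))

# Menu option 2: read the PID back and end the job.
with open(pid_path) as f:
    pid = int(f.read())
os.kill(pid, 0)               # signal 0: raises OSError if the process is gone
os.kill(pid, signal.SIGTERM)  # actually end it
proc.wait()                   # reap it (possible here because it's our child)
os.remove(pid_path)
```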
You could use Unix signals. Here we capture SIGUSR1 to tell the process to report some info to STDOUT (get_job_status() stands for your own status function):
#!/usr/bin/env python
import signal

def signal_handler(signum, frame):
    print('Caught SIGUSR1!')
    print("Current job status is " + get_job_status())

signal.signal(signal.SIGUSR1, signal_handler)
and then from the shell
kill -USR1 <pid>