I have a long-running Python script that I would like to be called via a udev rule. The script should run unattended, in the background if you like. udev's RUN is not suitable for long-running commands, which is getting in my way: my script gets killed after a while. So I cannot call my script directly via udev.
I tried disowning it by calling it from udev RUN via a shell script:
#!/bin/bash
/path/to/pythonscript.py & disown
This still got killed.
Now I am thinking of turning my script into a daemon, e.g. using Python's daemon module. Then "all" I need to do is put a short command in my udev RUN statement that sends some sort of trigger to my daemon, and things should be fine. What is less clear to me is the best way to implement this communication. I was thinking of adding a JSON-RPC service to my daemon, listening only on the loopback device. Is there a simpler or better way to achieve what I want?
EDIT
Thanks to the below comments I came up with the following solution. Main "daemon":
import signal, time

import auto_copy

SIGNAL_RECEIVED = False

def run_daemon():
    signal.signal(signal.SIGUSR1, signal_handler)
    while True:
        time.sleep(1)
        global SIGNAL_RECEIVED
        if SIGNAL_RECEIVED:
            auto_copy.auto_copy()
            SIGNAL_RECEIVED = False

def signal_handler(dump1, dump2):
    global SIGNAL_RECEIVED
    SIGNAL_RECEIVED = True

if __name__ == "__main__":
    auto_copy.setup_logging()
    run_daemon()
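A small refinement, in case it is useful: instead of waking once per second to poll the flag, the main loop can block in signal.pause() until a signal actually arrives (Unix only). A self-contained sketch; the Timer here merely stands in for udev sending the signal, and the print stands in for the auto_copy.auto_copy() call:

```python
import os
import signal
import threading

received = {"flag": False}

def signal_handler(signum, frame):
    # Only record the fact; do the real work in the main flow, not in the handler.
    received["flag"] = True

signal.signal(signal.SIGUSR1, signal_handler)

# Simulate udev firing the trigger: send ourselves SIGUSR1 after a short delay.
threading.Timer(0.2, os.kill, args=(os.getpid(), signal.SIGUSR1)).start()

signal.pause()  # blocks until a signal is delivered, no polling needed
if received["flag"]:
    print("trigger received")  # here the daemon would call auto_copy.auto_copy()
```

This avoids the one-second worst-case latency of the sleep loop and uses no CPU while idle.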
This gets started via systemd, the unit file being
[Unit]
Description=Start auto_copy.py script
[Service]
ExecStart=/home/isaac/bin/auto_copy_daemon.py
[Install]
WantedBy=multi-user.target
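Optionally, the unit can be made a bit more robust; a sketch (the Restart line and the unit file name are my additions, not part of the original setup):

```ini
[Unit]
Description=Start auto_copy.py script

[Service]
ExecStart=/home/isaac/bin/auto_copy_daemon.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After editing the unit file, reload and enable it with systemctl daemon-reload followed by systemctl enable --now auto_copy_daemon.service (assuming that file name).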
Then there is a udev rule:
SUBSYSTEM=="block", KERNEL=="sr0", ACTION=="change", RUN+="/home/isaac/bin/send_siguser1.sh"
and finally the script that sends the signal:
#!/bin/bash
kill -SIGUSR1 $(ps -elf |grep auto_copy_daemon.py |awk '/python/ {print $4}')
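As an aside, the ps | grep | awk pipeline is fragile: it depends on ps column positions and can match the grep process itself. pkill -f matches against the full command line, so the whole script can shrink to pkill -USR1 -f auto_copy_daemon.py. A self-contained demo, with a small Python one-liner standing in for the daemon:

```shell
#!/bin/bash
# Stand-in "daemon": exits cleanly when it receives SIGUSR1.
python3 -c '
import signal, sys
signal.signal(signal.SIGUSR1, lambda *a: sys.exit(0))
signal.pause()
' &
daemon_pid=$!

sleep 0.2
# Send SIGUSR1 by matching the command line; for the real script this
# would be: pkill -USR1 -f auto_copy_daemon.py
pkill -USR1 -f "signal.pause"
wait "$daemon_pid"
echo "daemon exited with status $?"
```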
If anyone is interested, the project is on github.
Related
I'm writing a monitor service in Python that monitors another service, and while the monitoring and scheduling parts work fine, I have a hard time figuring out how to do a proper shutdown of the service using a SIGINT signal sent to the Docker container. Specifically, the service should catch the SIGINT from either a docker stop or a Kubernetes stop signal, but so far it doesn't. I have reduced the issue to a minimal test case which is easy to replicate in Docker:
import signal
import sys
import time

class MainApp:
    def __init__(self):
        self.shutdown = False
        signal.signal(signal.SIGINT, self.exit_gracefully)
        signal.signal(signal.SIGTERM, self.exit_gracefully)

    def exit_gracefully(self, signum, frame):
        print('Received:', signum)
        self.shutdown = True

    def start(self):
        print("Start app")

    def run(self):
        print("Running app")
        time.sleep(1)

    def stop(self):
        print("Stop app")

if __name__ == '__main__':
    app = MainApp()
    app.start()
    # This boolean flag should flip to false when a SIGINT or SIGTERM comes in...
    while not app.shutdown:
        app.run()
    else:  # However, this code never gets executed ...
        app.stop()
        sys.exit(0)
And the corresponding Dockerfile, again minimalistic:
FROM python:3.8-slim-buster
COPY test/TestGS.py .
STOPSIGNAL SIGINT
CMD [ "python", "TestGS.py" ]
I opted for Docker because docker stop is documented to issue the configured stop signal (SIGINT here), wait a bit, and then issue a SIGKILL. This should be an ideal test case.
However, when starting the Docker container with an interactive shell attached and stopping the container from a second shell, the stop() code never gets executed. Verifying the issue, a simple:
$ docker inspect -f '{{.State.ExitCode}}' 64d39c3b
shows exit code 137 instead of exit code 0.
Apparently, one of two things is happening. Either the signal isn't propagated into the container or the Python runtime, and this might be true, because the exit_gracefully function apparently isn't called; otherwise we would see the printout of the signal. I know that you have to be careful about how you start your code from within Docker to actually receive a SIGINT, but with the STOPSIGNAL line in the Dockerfile, a SIGINT should be issued to the container, at least to my humble understanding of the docs.
Or, the Python code I wrote isn't catching any signal at all. Either way, I simply cannot figure out why the stop code never gets called. I have spent a fair amount of time researching the web, but at this point I feel I'm running in circles. Any idea how to solve the issue of correctly ending a Python script running inside Docker using a SIGINT signal?
Thank you
Marvin
Solution:
The app must run as PID 1 inside docker to receive a SIGINT. To do so, one must use ENTRYPOINT instead of CMD. The fixed Dockerfile:
FROM python:3.8-slim-buster
COPY test/TestGS.py .
ENTRYPOINT ["python", "TestGS.py"]
Build the image:
docker build . -t python-signals
Run the image:
docker run -it --rm --name="python-signals" python-signals
And from a second terminal, stop the container:
docker stop python-signals
Then you get the expected output:
Received: 15
Stop app
It seems a bit odd to me that Docker only delivers signals to PID 1, but thankfully that's relatively easy to fix. The article below was most helpful in solving this issue.
https://itnext.io/containers-terminating-with-grace-d19e0ce34290
I'm trying to build a todo manager in Python where I want to continuously run a process in the background that will alert the user with a popup when the specified time comes. I'm wondering how I can achieve that.
I've looked at some of the answers on StackOverflow and on other sites but none of them really helped.
So, what I want to achieve is to start a background process once the user enters a task and keep it running in the background until its time comes. At the same time there might be other threads running for other tasks as well, each ending at its own end time.
So far, I've tried this:
t = Thread(target=bg_runner, kwargs={'task': task, 'lock_file': lock_file_path})
t.setName("Get Done " + task)
t.start()
t.join()
With this the thread is continuously running, but it runs in the foreground and only exits when the execution is done.
If I add t.daemon = True to the above code, the main thread exits immediately after start(), and it looks like the daemon thread is killed along with it.
Please let me know how this can be solved.
I'm guessing that you just don't want to see the terminal window after you launch the script. In this case, it is a matter of how you execute the script.
Try these things.
If you are using a windows computer you can try using pythonw.exe:
pythonw.exe example_script.py
If you are using Linux (or maybe OS X) you may want to use nohup in the terminal:
nohup python example_script.py
More or less, the reason you have to do this comes down to how the operating system handles processes. I am not an expert on the subject, but generally, if you launch a script from a terminal, that script becomes a child process of the terminal, so if you exit that terminal, it will also terminate any child processes. The only way to get around that is to detach the process from the terminal with something like nohup.
Now if you add the #!/usr/bin/env python shebang line, your OS could possibly run the script without a terminal window if you just double-click the script. YMMV (again, this depends on how your OS works).
The first thing you need to do is prevent your script from exiting by keeping the main thread alive:
import time
from threading import Thread

t = Thread(target=bg_runner, kwargs={'task': task, 'lock_file': lock_file_path})
t.setName("Get Done " + task)
t.start()

while True:
    time.sleep(1.0)
Note that there is no t.join() here: joining would block the main thread until the worker finishes, which is exactly what the loop is meant to avoid.
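Instead of spinning in a sleep loop, the main thread can also block on a threading.Event that the worker sets when it is done. A minimal sketch, with a hypothetical worker standing in for bg_runner:

```python
import threading
import time

done = threading.Event()

def bg_runner(task):
    # Hypothetical stand-in for the real task worker: pretend to work, then signal.
    time.sleep(0.2)
    print("task finished:", task)
    done.set()

t = threading.Thread(target=bg_runner, args=("demo",), daemon=True)
t.start()

# Block until the worker signals completion, instead of polling in a loop.
done.wait(timeout=5.0)
print("main thread exiting")
```

With several tasks you would keep one Event (or one Thread handle) per task and wait on them as needed.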
Then you need to put it in the background:
$ nohup python alert_popup.py >> /dev/null 2>&1 &
You can get more information on controlling a background process at this answer.
I'm trying to figure out how to properly close out my script that's supposed to start up a Django server running in a docker container (boot2docker, on Mac OS X). Here's the pertinent code block:
try:
    init_code = subprocess.check_output('./initdocker.sh', shell=True)
    subprocess.call('./startdockerdjango.sh', shell=True)
except subprocess.CalledProcessError:
    try:
        subprocess.call('./startdockerdjango.sh', shell=True)
    except KeyboardInterrupt:
        return
Here startdockerdjango.sh takes care of setting the environment variables that Docker needs and starts the server. Overall, the script is supposed to know whether to do first-time setup and initialization or simply start the container and server; catching the CalledProcessError means that first-time setup was already done and the container and server can just be started. Startup works fine, but when a user presses Ctrl-C to stop the server, the server stops normally, yet the process that started it apparently keeps running: if I press Return, I get back to the normal terminal prompt, and any shell command such as ls is carried out normally. I want to change the code so that when a user presses Ctrl-C, the server and the container it runs in stop normally, and afterward the process ends and the whole script exits. How can this be done? I don't want to just kill or terminate the process on KeyboardInterrupt, since then the server and container would be killed off abruptly instead of stopping normally.
UPDATE:
I recently tried the following according to Padraic Cunningham's comment:
try:
    init_code = subprocess.check_output('./initdocker.sh', shell=True)
    subprocess.call('./startdockerdjango.sh', shell=True)
except subprocess.CalledProcessError:
    try:
        startproc = subprocess.Popen('./startdockerdjango.sh')
    except KeyboardInterrupt:
        startproc.send_signal(SIGTERM)
        startproc.wait()
        return
This was my attempt to send a SIGTERM to the server so it shuts down gracefully, and then use wait() to wait for the process (startproc) to complete. However, this results in the container and server ending abruptly, exactly what I was trying to prevent. The same thing happens if I try SIGINT instead. What, if anything, am I doing wrong in this second approach? I still want the same overall thing as before: a single Ctrl-C that ends the container and server, then exits the script.
You might want to create the process using Popen. It will give you a little more control over how you manage the child process.
env = {"MY_ENV_VAR": "some value"}
proc = subprocess.Popen("./dockerdjango.sh", env=env)
try:
    proc.wait()
except KeyboardInterrupt:
    proc.terminate()  # on Linux this gives the child a chance to clean up,
                      # or even to ignore the signal entirely
    # Use proc.send_signal(...) and the signal module to send other signals,
    # or proc.kill() if you wish to kill the process immediately.
If you set the environment variables in Python, it will also result in fewer child processes that need to be killed.
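One caveat worth noting: passing env= replaces the child's entire environment, so usually you want to extend a copy of the parent's instead. A small sketch, using a trivial child process that just echoes the variable back:

```python
import os
import subprocess
import sys

# Extend a copy of the parent's environment rather than replacing it wholesale.
env = dict(os.environ, MY_ENV_VAR="some value")

# Hypothetical child: print the variable so we can see it was inherited.
out = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.environ['MY_ENV_VAR'])"],
    env=env,
)
print(out.decode().strip())  # → some value
```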
In the end, it wasn't worth the effort to have the script know whether to do first-time initialization or server+container startup. Instead, the script will just try first-time setup and then tell the user to run docker-compose up after a successful setup. This is a better solution for my specific situation than trying to figure out how to have Ctrl-C properly shut down the server and then exit the script.
To reset the Django server subprocess, execute in your terminal:
$ sudo lsof -i tcp:8080
$ sudo lsof -i tcp:8080|awk '{print $2}'|cut -d/ -f 1|xargs kill
I am working on a Python script to launch a server, maybe in the background or in a different process, and then do some further processing before killing the launched server.
Once the rest of the processing is over, then kill the launched server.
For Example
server_cmd = 'launch_server.exe -source '+ inputfile
print server_cmd
cmd_pid = subprocess.Popen(server_cmd).pid
...
...
... #Continue doing some processing
cmd_pid.terminate() # Once the processing is done, terminate the server
Somehow the script does not continue after launching the server, as the server may be running an infinite loop listening for requests. Is there a good way to send this process to the background so that the script doesn't wait for it?
I am using Python 2.7.8
It's odd that your script does not continue after launching the server command. In the subprocess module, Popen starts another child process while the parent process (your script) moves on.
However, your code already has a bug: cmd_pid is an int object and does not have a terminate method. You should keep the subprocess.Popen object and call terminate on it.
Making a small change resolved the problem:
server_proc = subprocess.Popen(server_cmd, stdout=subprocess.PIPE)
server_proc.terminate()
Thanks Xu for correction in terminate.
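Put together, a minimal self-contained sketch of the launch-work-terminate pattern (Python 3 here, with a sleeping child standing in for the server, since launch_server.exe is specific to the question):

```python
import subprocess
import sys
import time

# Stand-in for the server: a child that would run for a long time if not terminated.
server_proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

# ... continue doing some processing here while the server runs ...
time.sleep(0.2)

server_proc.terminate()      # ask the child to exit (SIGTERM on Unix)
server_proc.wait(timeout=5)  # reap the child so no zombie process is left behind
print("server stopped, return code:", server_proc.returncode)
```

The wait() after terminate() matters: without it the child lingers as a zombie until the parent exits.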
I have this script, and I do not know how to keep it running in the background: when I close the session, it closes too. I tried putting it in crontab, but then it does not find index.html and shows the list of files in / instead.
#! /opt/python3/bin/python3
from http.server import HTTPServer, CGIHTTPRequestHandler

port = 8000
httpd = HTTPServer(('', port), CGIHTTPRequestHandler)
print("Starting simple_httpd on port: " + str(httpd.server_port))
httpd.serve_forever()
Basically you are asking how to detach the program from your shell; here are a few options:
./scriptname.py >/dev/null 2>&1 & # sends the program to the background
Use GNU screen (or similar): run your program via screen and you can bring it back up when you log back in
Daemonize your program properly
Update:
Recently I have not written a single daemon in Python. The days of forking twice or of using a daemon library seem to be well behind us. I currently use supervisord and have heard good things about circus; these are just a couple of extra options you can use to deploy Python daemons.
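For reference, a minimal supervisord program section for a script like the one above might look like this (the program name and paths are assumptions for illustration):

```ini
[program:simple_httpd]
command=/opt/python3/bin/python3 /path/to/simple_httpd.py
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/simple_httpd.log
```

supervisord then keeps the process detached from any login session and restarts it if it dies.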