I am working on an RFID-based access control system for which I have a working Python script. Some specific details, in case they matter: the main processing is done on a Pi Zero W, which is connected by USB to a microcontroller that handles the input from the RFID module and sends it to the Pi in string format for simplicity. The Pi then compares the received string against a YAML file and a schedule, and uses GPIO to switch a door strike on or off via a power supply. The issue I'm running into is that the script stops running after about 30 minutes, and I'm not quite sure why. I think the ideal solution in any case is to daemonize it, because a cron job is too subject to failure and a daemon seems very appropriate for this use. Does anyone have any suggestions for daemonizing the script so that it starts on boot and restarts itself if it detects a failure or that it is no longer running?
As larsks said, you can create a systemd service:
sudo nano /etc/systemd/system/yourscript.service
This file should look something like this (read the documentation for more information):
[Unit]
Description=My cool script
After=multi-user.target
[Service]
User=root
WorkingDirectory=/path/to/your/script/directory/
Restart=on-failure
RestartSec=5s
ExecStart=/usr/bin/python3 your_script.py
StandardOutput=append:/var/log/your_script.log
StandardError=append:/var/log/your_script.log
SyslogIdentifier=coolawesomescript
[Install]
WantedBy=multi-user.target
Then enable and start it:
foo@bar:~$ sudo systemctl enable yourscript
foo@bar:~$ sudo systemctl start yourscript
Now your script will automatically restart whenever it crashes.
You can check whether your script is actually running with sudo systemctl status yourscript.
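For the script itself, the main thing that matters under systemd is that it exits (rather than hanging silently) when something goes wrong, so that Restart=on-failure can bring it back up. Below is a minimal sketch of that shape; the serial device path, baud rate, GPIO pin and YAML layout are assumptions for illustration, not your actual setup.

# Illustrative sketch only: /dev/ttyACM0, pin 17 and the YAML structure are
# assumed placeholders, not the asker's real configuration.
import time
import serial          # pyserial
import yaml            # PyYAML
import RPi.GPIO as GPIO

STRIKE_PIN = 17        # assumed BCM pin driving the door strike

def main():
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(STRIKE_PIN, GPIO.OUT, initial=GPIO.LOW)
    with open("/etc/door/allowed.yaml") as f:
        allowed = set(yaml.safe_load(f))        # assumed: a YAML list of tag IDs
    with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as port:
        while True:
            tag = port.readline().decode("ascii", "ignore").strip()
            if tag and tag in allowed:          # schedule check omitted for brevity
                GPIO.output(STRIKE_PIN, GPIO.HIGH)
                time.sleep(5)
                GPIO.output(STRIKE_PIN, GPIO.LOW)

if __name__ == "__main__":
    # Let any exception propagate: the process exits non-zero, systemd records
    # the traceback in the journal and Restart=on-failure starts a fresh copy.
    main()

The key point is the absence of a catch-all except around the loop: if the serial port or a GPIO call fails, the process dies with a traceback in the log and systemd restarts it after RestartSec.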
I need to deploy a Python script on an AWS machine with Ubuntu Server 18.04.
In the script there is a TCP server using a custom TCP port (let's say the 9999), which handles the clients' requests in different threads.
The problem is that I don't know the best practice for keeping this script running if anything goes wrong (for example, the main TCP server thread dies for whatever reason).
I also don't really know the best practice for running this kind of script on an AWS EC2 instance.
So far I have been starting the script manually via SSH. Everything in the script logic works well; the problem is how to start such a script and keep it running.
You should take a look at the systemd suite. It can be used to manage your script as a service: it will restart the script if it dies, or if the node is rebooted.
Here's an example service.
Create the file below in this location: /lib/systemd/system/example.service
[Unit]
Description=A short description of the script.
[Service]
Type=simple
# Script location (the file must be executable and start with a python shebang)
ExecStart=/path/to/some/script.py
# Restart the script in all circumstances (e.g. if it exits successfully, fails, or crashes).
Restart=always
[Install]
WantedBy=multi-user.target
Then set the service to start automatically on boot, and start it:
chmod 644 /lib/systemd/system/example.service
systemctl enable example
systemctl start example
There are a lot of resources available if you want to learn more about systemd. I'd suggest the links below:
[0] https://www.freedesktop.org/wiki/Software/systemd/
[1] https://github.com/torfsen/python-systemd-tutorial
[2] https://www.linode.com/docs/quick-answers/linux/start-service-at-boot/#create-a-custom-systemd-service
[3] https://medium.com/@benmorel/creating-a-linux-service-with-systemd-611b5c8b91d6
As for general best practices, it is difficult to provide advice without knowing more about your script. It is not recommended to use the Python HTTPServer module for production workloads, because it only implements basic security checks.
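That said, since the question describes a threaded TCP server on port 9999, one pattern that cooperates well with Restart=always is to keep the accept loop in the main thread and let any unhandled exception end the process, so systemd sees a non-zero exit and restarts it. A rough sketch, with the per-client handler as a placeholder (port 9999 comes from the question, everything else is assumed):

# Sketch only: the echo handler stands in for the real per-client protocol logic.
import socket
import threading

def handle_client(conn, addr):
    # Placeholder handler; replace with the real request handling.
    with conn:
        data = conn.recv(1024)
        if data:
            conn.sendall(data)

def main():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", 9999))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            threading.Thread(target=handle_client, args=(conn, addr),
                             daemon=True).start()

if __name__ == "__main__":
    main()   # an unhandled exception here kills the process; systemd restarts it

If the accept loop ran in a background thread instead, that thread could die while the process stayed up, and systemd would have no failure to react to.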
I have been working with a Raspberry Pi 3 for about 3 months, and I have had a problem since I started working with it.
I couldn't find an efficient and safe way to run a Python script on the Raspberry Pi when it powers on (without a monitor, mouse, or keyboard). At the moment I have added "$sudo run myscript.py &" to /etc/profile, but sometimes when I turn it on the script doesn't run until I connect a monitor, mouse, and keyboard and run the script from the GUI; after that it works fine (again without mouse and keyboard).
I want to know: is there any solution that will make sure my script runs after I turn the Raspberry Pi on?
Thanks a lot
You will want to set up a service and use sudo service <my_service> [start, stop, restart] to get it working on startup. See here for reference.
The /etc/profile file is executed when a new shell session is started, so unless you start at least one shell session your script will not run. Moreover, it will be terminated when the session stops, and if you start multiple sessions the script will be started once per session, which is probably not what you want.
Depending on your init system you would need to create a SysVinit or systemd service. Assuming you use a systemd-based distro (which is currently the default for most Linux distributions), you need to do the following:
Step 1: Place your script in a location from which it will be executed by the service. For example, /usr/local/bin/ may be a good choice.
Step 2: Create the service file. Assuming you want to name it myscript.service, create a file at the path /etc/systemd/system/myscript.service with the following content:
[Unit]
Description=myscript
[Service]
ExecStart="/usr/bin/python /usr/local/bin/myscript.py"
[Install]
WantedBy=multi-user.target
Step 3: Reload systemd daemon and enable your service:
systemctl daemon-reload
systemctl enable myscript
Now, after you restart your system, your service should start automatically. You can verify that with systemctl status myscript, which shows the service status.
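If you want something concrete to verify against, a trivial placeholder for /usr/local/bin/myscript.py could look like this (purely illustrative, not the asker's script):

#!/usr/bin/python
# Illustrative placeholder: prints a heartbeat so the journal and
# `systemctl status myscript` show that the service is alive.
import sys
import time

while True:
    print("myscript heartbeat")
    sys.stdout.flush()   # flush so each line reaches the journal promptly
    time.sleep(60)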
I have a python script that has a While True: in it that I would like to have run on startup on a raspberry pi running Jessie.
So far I have a startup bash script in /etc/init.d called startup.sh which contains
sudo python3 /home/pi/Desktop/Scripts/bluez3.py &
When the Raspberry Pi starts up, the script does run, but after 20 minutes it seems to stop. I have logging in my script and the timestamps stop exactly 20 minutes in.
I did some reading and I think the best option would be to run the Python script as a service on the Raspberry Pi. However, I have not been able to find a decent tutorial on how to do this (my lack of Python knowledge doesn't help).
My question is: is there another way to resolve my problem, or does anyone know of a good tutorial on how to turn the Python script into a service?
Thanks!
Given the name of your script, I'm guessing it's related to some Bluetooth stuff. It's likely that after 20 minutes, whatever you're checking or relying on in your script becomes inaccessible and throws an exception or something like that: a resource being locked, a BT device being disconnected, a module being unloaded or unavailable, or [insert edge case reason here]…
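One way to confirm that guess is to make sure the fatal exception lands in your log before the process dies; combined with the Restart=always unit below, the script comes back up and you also get the reason it stopped. A small sketch, assuming the existing loop lives in a main() function (simulated here) and using an example log path:

# Sketch: log the fatal exception, then re-raise so the process exits non-zero
# and systemd's Restart= policy kicks in. The path and main() are placeholders.
import logging
import traceback

logging.basicConfig(filename="/home/pi/bluez3.log", level=logging.INFO)

def main():
    # stands in for your existing while-True loop
    raise RuntimeError("simulated bluetooth failure")

try:
    main()
except Exception:
    logging.error("fatal error:\n%s", traceback.format_exc())
    raise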
That being said, before going all the way to a systemd service, you can first play with supervisorctl, which is just an apt install supervisor away.
Then, if you really want to launch it as a service, you can find plenty of examples in /lib/systemd/system/*.service, like the following:
[Unit]
Description=Your service
Wants=
# I guess you need bluetooth initialised first
After=bluetooth.target
[Service]
ExecStart=/usr/bin/python3 /home/pi/Desktop/Scripts/bluez3.py
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=always
[Install]
WantedBy=multi-user.target
which I customized from the sshd.service file 😉
I have a simple Python script working as a daemon. I am trying to create a systemd service file so that I can start this script at startup.
Current systemd unit file:
[Unit]
Description=Text
After=syslog.target
[Service]
Type=forking
User=node
Group=node
WorkingDirectory=/home/node/Node/
PIDFile=/var/run/zebra.pid
ExecStart=/home/node/Node/node.py
[Install]
WantedBy=multi-user.target
node.py:
if __name__ == '__main__':
    with daemon.DaemonContext():
        check = Node()
        check.run()
run() contains a while True loop.
I try to run this service with systemctl start zebra-node.service. Unfortunately the service never finishes its starting sequence - I have to press Ctrl+C.
The script is running, but the status stays at activating and after a while it changes to deactivating.
I am currently using python-daemon (but before that I tried without it and the symptoms were similar).
Should I implement some additional features in my script, or is the systemd file incorrect?
The reason it does not complete the startup sequence is that with Type=forking your startup process is expected to fork and exit (see man systemd.service and search for "forking").
Simply use only the main process, do not daemonize
One option is to do less. With systemd, there is often no need to create daemons and you may directly run the code without daemonizing.
#!/usr/bin/python -u
from somewhere import Node
check = Node()
check.run()
This allows using a simpler service Type called simple, so your unit file would look like this:
[Unit]
Description=Simplified simple zebra service
After=syslog.target
[Service]
Type=simple
User=node
Group=node
WorkingDirectory=/home/node/Node/
ExecStart=/home/node/Node/node.py
StandardOutput=syslog
StandardError=syslog
[Install]
WantedBy=multi-user.target
Note that the -u in the python shebang is not strictly necessary, but if you print anything to stdout or stderr, -u disables output buffering so that printed lines are immediately caught by systemd and recorded in the journal. Without it, output appears with some delay.
For this purpose I added the lines StandardOutput=syslog and StandardError=syslog to the unit file. If you do not care about seeing printed output in your journal, you can omit these lines.
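If you would rather not rely on the -u flag, flushing explicitly has the same effect for individual prints (assuming Python 3):

# flush=True pushes the line out immediately, just like running python with -u
print("node started", flush=True)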
systemd makes daemonization obsolete
While the title of your question explicitly asks about daemonizing, I guess the core of the question is "how do I keep my service running", and since using the main process directly is much simpler (you do not have to care about daemons at all), it could be considered an answer to your question.
I think many people use daemonizing just because "everybody does it". With systemd the reasons for daemonizing are often obsolete. There may still be cases where daemonization is justified, but they are rare nowadays.
EDIT: fixed python -p to proper python -u. thanks kmftzg
It is possible to daemonize like Schnouki and Amit describe. But with systemd this is not necessary. There are two nicer ways to initialize the daemon: socket-activation and explicit notification with sd_notify().
Socket activation works for daemons which want to listen on a network port or UNIX socket or similar. Systemd opens the socket, listens on it, and then spawns the daemon when a connection comes in. This is the preferred approach because it gives the most flexibility to the administrator. [1] and [2] give a nice introduction, [3] describes the C API, and [4] describes the Python API; a minimal sketch follows the links below.
[1] http://0pointer.de/blog/projects/socket-activation.html
[2] http://0pointer.de/blog/projects/socket-activation2.html
[3] http://www.freedesktop.org/software/systemd/man/sd_listen_fds.html
[4] http://www.freedesktop.org/software/systemd/python-systemd/daemon.html#systemd.daemon.listen_fds
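A minimal Python sketch of the daemon side of socket activation, using the listen_fds() helper documented in [4] (a matching .socket unit is also needed, not shown here):

# Sketch of a socket-activated server: systemd creates, binds and listens on the
# socket, and the service simply adopts it. Requires the python-systemd bindings.
import socket
from systemd.daemon import listen_fds

fds = listen_fds()                  # file descriptors handed over by systemd
sock = socket.fromfd(fds[0], socket.AF_INET, socket.SOCK_STREAM)

while True:
    conn, addr = sock.accept()
    with conn:
        conn.sendall(b"hello from a socket-activated service\n")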
Explicit notification means that the daemon opens the sockets itself and/or does any other initialization, and then notifies init that it is ready and can serve requests. This can be implemented with the "forking protocol", but actually it is nicer to just send a notification to systemd with sd_notify().
The Python wrapper is called systemd.daemon.notify and takes one line to use [5].
[5] http://www.freedesktop.org/software/systemd/python-systemd/daemon.html#systemd.daemon.notify
In this case the unit file would have Type=notify, and the script would call systemd.daemon.notify("READY=1") after it has established the sockets. No forking or daemonization is necessary.
You're not creating the PID file.
systemd expects your program to write its PID to /var/run/zebra.pid. Since you don't, systemd probably thinks your program is failing, hence the deactivation.
To add the PID file, install lockfile and change your code to this:
import daemon
import daemon.pidlockfile
pidfile = daemon.pidlockfile.PIDLockFile("/var/run/zebra.pid")
with daemon.DaemonContext(pidfile=pidfile):
    check = Node()
    check.run()
(Quick note: some recent update of lockfile changed its API and made it incompatible with python-daemon. To fix it, edit daemon/pidlockfile.py, remove LinkFileLock from the imports, and add from lockfile.linklockfile import LinkLockFile as LinkFileLock.)
Be careful of one other thing: DaemonContext changes the working dir of your program to /, making the WorkingDirectory of your service file useless. If you want DaemonContext to chdir into another directory, use DaemonContext(pidfile=pidfile, working_directory="/path/to/dir").
I came across this question when trying to convert some python init.d services to systemd under CentOS 7. This seems to work great for me, by placing this file in /etc/systemd/system/:
[Unit]
Description=manages worker instances as a service
After=multi-user.target
[Service]
Type=idle
User=node
ExecStart=/usr/bin/python /path/to/your/module.py
Restart=always
TimeoutStartSec=10
RestartSec=10
[Install]
WantedBy=multi-user.target
I then dropped my old init.d service file from /etc/init.d and ran sudo systemctl daemon-reload to reload systemd.
I wanted my service to auto-restart, hence the restart options. I also found that using idle for Type made more sense than simple.
Behavior of idle is very similar to simple; however, actual execution
of the service binary is delayed until all active jobs are dispatched.
This may be used to avoid interleaving of output of shell services
with the status output on the console.
More details on the options I used here.
I also experimented with keeping the old init.d service and having systemd restart it, but I ran into some issues.
[Unit]
# Added this to the above
#SourcePath=/etc/init.d/old-service
[Service]
# Replace the ExecStart from above with these
#ExecStart=/etc/init.d/old-service start
#ExecStop=/etc/init.d/old-service stop
The issue I experienced was that the init.d service script was used instead of the systemd service if both had the same name. If you killed the init.d-initiated process, the systemd service would then take over. But if you ran service <service-name> stop, it would refer to the old init.d service. So I found the best approach was to drop the old init.d service, after which the service command referred to the systemd service instead.
Hope this helps!
Also, you most likely need to set detach_process=True when creating the DaemonContext().
This is because, when python-daemon detects that it is running under an init system, it does not detach from the parent process, whereas systemd expects a daemon running with Type=forking to do exactly that. Hence you need to force it, or systemd will keep waiting and eventually kill the process.
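In practice that would look something like the sketch below, reusing the PIDLockFile import from the earlier answer; detach_process is the DaemonContext parameter whose default is driven by the check quoted next:

# Sketch: force detaching even though the process was started by an init system,
# so that a Type=forking unit sees the parent exit as it expects.
# Node comes from the question's script; the PID file path matches the unit file.
import daemon
from daemon.pidlockfile import PIDLockFile

with daemon.DaemonContext(detach_process=True,
                          pidfile=PIDLockFile("/var/run/zebra.pid")):
    check = Node()
    check.run()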
If you are curious, in python-daemon's daemon module, you will see this code:
def is_detach_process_context_required():
    """ Determine whether detaching process context is required.

        Return ``True`` if the process environment indicates the
        process is already detached:

        * Process was started by `init`; or
        * Process was started by `inetd`.

        """
    result = True
    if is_process_started_by_init() or is_process_started_by_superserver():
        result = False
Hopefully this explains it better.