Python daemon and systemd service

I have a simple Python script working as a daemon, and I am trying to create a systemd service file so that I can start this script during startup.
Current systemd unit file:
[Unit]
Description=Text
After=syslog.target
[Service]
Type=forking
User=node
Group=node
WorkingDirectory=/home/node/Node/
PIDFile=/var/run/zebra.pid
ExecStart=/home/node/Node/node.py
[Install]
WantedBy=multi-user.target
node.py:
import daemon

if __name__ == '__main__':
    with daemon.DaemonContext():
        check = Node()
        check.run()
run() contains a while True loop.
I try to run this service with systemctl start zebra-node.service. Unfortunately the service never finishes its starting sequence - I have to press Ctrl+C.
The script is running, but the status is activating, and after a while it changes to deactivating.
Now I am using python-daemon (but before that I tried without it, and the symptoms were similar).
Should I implement some additional functionality in my script, or is the systemd unit file incorrect?

The reason it does not complete the startup sequence is that for Type=forking your startup process is expected to fork and exit (see man systemd.service - search for "forking").
Simply use only the main process, do not daemonize
One option is to do less. With systemd there is often no need to create daemons; you may run the code directly without daemonizing.
#!/usr/bin/python -u
from somewhere import Node
check = Node()
check.run()
This allows using the simpler service type called simple, so your unit file would look like this:
[Unit]
Description=Simplified simple zebra service
After=syslog.target
[Service]
Type=simple
User=node
Group=node
WorkingDirectory=/home/node/Node/
ExecStart=/home/node/Node/node.py
StandardOutput=syslog
StandardError=syslog
[Install]
WantedBy=multi-user.target
Note that the -u in the Python shebang is not necessary, but if you print anything to stdout or stderr, -u ensures there is no output buffering, so printed lines are immediately picked up by systemd and recorded in the journal. Without it, output would appear with some delay.
For this purpose I added the lines StandardOutput=syslog and StandardError=syslog to the unit file. If you do not care about seeing printed output in your journal, you can leave these lines out (they do not have to be present).
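An equivalent alternative, if you prefer not to touch the shebang, is to disable buffering through Python's standard PYTHONUNBUFFERED environment variable; a sketch of just the relevant [Service] addition:
[Service]
Environment=PYTHONUNBUFFERED=1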
systemd makes daemonization obsolete
While the title of your question explicitly asks about daemonizing, I guess the core of the question is "how to make my service run", and while using the main process directly seems much simpler (you do not have to care about daemons at all), it could be considered an answer to your question.
I think many people use daemonizing just because "everybody does it". With systemd the reasons for daemonizing are often obsolete. There might still be reasons to use daemonization, but these are rare cases now.
EDIT: fixed python -p to proper python -u. thanks kmftzg

It is possible to daemonize like Schnouki and Amit describe, but with systemd this is not necessary. There are two nicer ways to initialize the daemon: socket activation and explicit notification with sd_notify().
Socket activation works for daemons which want to listen on a network port or UNIX socket or similar. systemd opens the socket, listens on it, and then spawns the daemon when a connection comes in. This is the preferred approach because it gives the most flexibility to the administrator. [1] and [2] give a nice introduction, [3] describes the C API, and [4] describes the Python API; a minimal sketch follows the references below.
[1] http://0pointer.de/blog/projects/socket-activation.html
[2] http://0pointer.de/blog/projects/socket-activation2.html
[3] http://www.freedesktop.org/software/systemd/man/sd_listen_fds.html
[4] http://www.freedesktop.org/software/systemd/python-systemd/daemon.html#systemd.daemon.listen_fds
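To illustrate, here is a minimal socket-activation sketch using the python-systemd bindings; the port 9999 and the fallback branch are assumptions for the example, and a matching .socket unit with ListenStream=9999 would be needed:
import socket
from systemd.daemon import listen_fds

# Sketch: under socket activation, systemd passes the already-bound
# listening socket as the first inherited file descriptor.
fds = listen_fds()
if fds:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=fds[0])
else:
    # Fallback for running outside systemd (port is hypothetical).
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("", 9999))
    sock.listen(5)

while True:
    conn, addr = sock.accept()
    conn.sendall(b"hello\n")  # placeholder request handling
    conn.close()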
Explicit notification means that the daemon opens the sockets itself and/or does any other initialization, and then notifies init that it is ready and can serve requests. This can be implemented with the "forking protocol", but it is actually nicer to just send a notification to systemd with sd_notify().
The Python wrapper is called systemd.daemon.notify and is a single line to use [5].
[5] http://www.freedesktop.org/software/systemd/python-systemd/daemon.html#systemd.daemon.notify
In this case the unit file would have Type=notify, and the daemon would call systemd.daemon.notify("READY=1") after it has established its sockets. No forking or daemonization is necessary.
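A minimal sketch of such a Type=notify service, assuming the python-systemd bindings are installed (initialize() stands in for whatever setup your service performs):
#!/usr/bin/python3 -u
import time
from systemd import daemon

def initialize():
    pass  # open sockets, load config, etc.

if __name__ == '__main__':
    initialize()
    daemon.notify("READY=1")  # tell systemd that startup is complete
    while True:
        time.sleep(60)  # stand-in for the real service loop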

You're not creating the PID file.
systemd expects your program to write its PID to /var/run/zebra.pid. Since you don't, systemd probably concludes that your program is failing, hence the deactivation.
To add the PID file, install lockfile and change your code to this:
import daemon
import daemon.pidlockfile

pidfile = daemon.pidlockfile.PIDLockFile("/var/run/zebra.pid")
with daemon.DaemonContext(pidfile=pidfile):
    check = Node()
    check.run()
(Quick note: some recent update of lockfile changed its API and made it incompatible with python-daemon. To fix it, edit daemon/pidlockfile.py, remove LinkFileLock from the imports, and add from lockfile.linklockfile import LinkLockFile as LinkFileLock.)
Be careful of one other thing: DaemonContext changes the working dir of your program to /, making the WorkingDirectory of your service file useless. If you want DaemonContext to chdir into another directory, use DaemonContext(pidfile=pidfile, working_directory="/path/to/dir").

I came across this question when trying to convert some Python init.d services to systemd under CentOS 7. This has worked great for me; I place this file in /etc/systemd/system/:
[Unit]
Description=manages worker instances as a service
After=multi-user.target
[Service]
Type=idle
User=node
ExecStart=/usr/bin/python /path/to/your/module.py
Restart=always
TimeoutStartSec=10
RestartSec=10
[Install]
WantedBy=multi-user.target
I then dropped my old init.d service file from /etc/init.d and ran sudo systemctl daemon-reload to reload systemd.
I wanted my service to auto restart, hence the restart options. I also found using idle for Type made more sense than simple.
Behavior of idle is very similar to simple; however, actual execution
of the service binary is delayed until all active jobs are dispatched.
This may be used to avoid interleaving of output of shell services
with the status output on the console.
More details on the options I used here.
I also experimented with keeping the old init.d service and having systemd restart the service, but I ran into some issues.
[Unit]
# Added this to the above
#SourcePath=/etc/init.d/old-service
[Service]
# Replace the ExecStart from above with these
#ExecStart=/etc/init.d/old-service start
#ExecStop=/etc/init.d/old-service stop
The issue I experienced was that the init.d service script was used instead of the systemd service when both had the same name. If you killed the init.d-initiated process, the systemd service would then take over. But if you ran service <service-name> stop, it would refer to the old init.d service. So I found the best way was to drop the old init.d service, after which the service command referred to the systemd service instead.
Hope this helps!

Also, you most likely need to set detach_process=True when creating the DaemonContext().
This is because if python-daemon detects that it is running under an init system, it does not detach from the parent process, while systemd expects that a daemon running with Type=forking will do so. Hence you need that option, otherwise systemd will keep waiting and finally kill the process.
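Put together with the PID file from Schnouki's answer, a minimal sketch looks like this (the Node import is a placeholder, as in the question):
import daemon
import daemon.pidlockfile
from somewhere import Node  # placeholder import

# detach_process=True forces python-daemon to detach even though the
# process was started by systemd, completing the Type=forking handshake.
pidfile = daemon.pidlockfile.PIDLockFile("/var/run/zebra.pid")
with daemon.DaemonContext(pidfile=pidfile, detach_process=True):
    check = Node()
    check.run()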
If you are curious, in python-daemon's daemon module you will see this code:
def is_detach_process_context_required():
    """ Determine whether detaching process context is required.

        Return ``True`` if the process environment indicates the
        process is already detached:

        * Process was started by `init`; or
        * Process was started by `inetd`.

        """
    result = True
    if is_process_started_by_init() or is_process_started_by_superserver():
        result = False
    return result
Hopefully this explains it better.

Related

How to improve Python script performance when launched from systemd?

I have a Raspberry Pi (running a Debian-based distro) which needs to keep a service based on a Python script running.
What I have done so far is create a .service file and add it to the /lib/systemd/system/ folder. Now the script is run automatically at system boot and is restarted if any crash occurs; furthermore, a little logging system has been added based on syslog.
The content of the .service file looks like this so far:
[Unit]
Description=My_Service
After=network.target network-online.target
After=local-fs.target
[Service]
Type=simple
Restart=always
ExecStartPre=/bin/mkdir -p /home/user/log
ExecStart=/usr/local/bin/python3 -u /home/user/my_service.py
SyslogIdentifier=My_Service
StandardOutput=syslog
StandardError=syslog
[Install]
WantedBy=multi-user.target
Now I've noticed that the script is slightly less performant than when it is run from a terminal.
Because it is the only script that the system should keep running, I was trying to give it the highest priority, but I am not sure how to do that.
So far I've added the following lines to the [Service] section, but I'm not sure whether this is OK or whether it is best practice.
CPUSchedulingPolicy=rr
CPUSchedulingPriority=99
Nice=-20
The question is: how can I set the maximum priority and maximum usage of system resources for such a service in order to maximise its performance?
I'm also trying to disable other system services which are not useful for my embedded system, such as bluetooth.service (e.g. with systemctl disable --now bluetooth.service); could this kind of work be good practice?
-- Edit --
No solutions found yet.
To run a Python script as a service I recommend using Supervisor.
https://rcwd.dev/long-lived-python-scripts-with-supervisor.html

Keep Python script running as a TCP socket server on AWS machine

I need to deploy a Python script on an AWS machine with Ubuntu Server 18.04.
In the script there is a TCP server using a custom TCP port (let's say 9999), which handles the clients' requests in different threads.
The problem is that I don't know the best practice for keeping this script running if there is any problem (e.g. the main TCP server thread dies for whatever reason).
Furthermore, I don't really know the best practice for running this kind of script on an AWS EC2 instance.
So far I am starting the script manually via SSH. Everything in the script logic works well; the problem is how to start such a script and keep it running.
You should take a look at the systemd suite. It can be used to manage the status of your script. It can restart the script if it dies, or if the node is rebooted.
Here's an example service.
Create the file below in this location: /lib/systemd/system/example.service
[Unit]
Description=A short description of the script.
[Service]
Type=simple
# Script location
ExecStart=/path/to/some/script.py
# Restart the script in all circumstances (e.g If it exits successfully, fails or crashes).
Restart=always
[Install]
WantedBy=multi-user.target
Then set the service to start automatically on boot and start the service:
chmod 644 /lib/systemd/system/example.service
systemctl enable example
systemctl start example
There are a lot of resources available if you want to learn more about systemd. I'd suggest the links below:
[0] https://www.freedesktop.org/wiki/Software/systemd/
[1] https://github.com/torfsen/python-systemd-tutorial
[2] https://www.linode.com/docs/quick-answers/linux/start-service-at-boot/#create-a-custom-systemd-service
[3] https://medium.com/@benmorel/creating-a-linux-service-with-systemd-611b5c8b91d6
As for general best practices, it is difficult to provide advice without knowing more about your script. It is not recommended to use the Python HTTPServer module for production workloads, because it only implements basic security checks.
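As a starting point, here is a minimal sketch of a threaded TCP server that fits the setup in the question; port 9999 comes from the question, and the echo logic is a placeholder for your request handling:
#!/usr/bin/env python3
import socketserver

class Handler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)  # read one client request
        self.request.sendall(data)      # echo it back (placeholder)

if __name__ == '__main__':
    # ThreadingTCPServer handles each client in its own thread; run it
    # under the unit above with Restart=always for crash recovery.
    with socketserver.ThreadingTCPServer(("0.0.0.0", 9999), Handler) as server:
        server.serve_forever()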

Proper way to write a daemon in 2019 in Python

TL;DR
I would like to write a daemon in Python, but I feel that PEP 3143 is overkill now that almost everybody uses systemd. I am looking for advice on a good starting point for writing a daemon in Python.
Context
By reading other related questions on SO, it seems there is a before and an after systemd. The articles I have read are more than 10 years old, and I feel that nowadays it is much simpler to achieve what I want to do.
I would like to write a program that can be run either:
In the foreground (blocking) ($ ./foo)
In the background ($ ./foo &)
In a detached state ($ ./foo start, $ ./foo stop).
Managed by systemd ($ sudo systemctl start foo)
Being able to start and stop the program by itself would require these commands:
$ daemon start
$ daemon stop
$ daemon status
Also, if the program is able to daemonize itself, it would take care of some side effects (preventing zombies, double fork, pidfile, logging...).
I have not yet figured out how to manage the log. Since a daemon is detached from a TTY, it should redirect stdin, stdout and stderr to /dev/null and use a logger instead. To use a logger I can see different options:
Use syslog, but this requires write access to /dev/log
Use daemon.log through stdout and systemd, but this requires systemd
Use a custom log file specified with --log=~/foo.log
Use stdout/stderr while the process is not detached
For the PID file, the traditional location is /var/run/, to which most users have no write access. So the user should be able to configure the --pidfile.
From this I realize that building a simple daemon is not an easy task, and I do not know where to start.
One trivial approach would be to have two separate programs: one simple blocking program that performs the task and uses stdout, and one process manager that can do what systemd does, but at user level.
If I would summarize my question in one sentence I would say:
Is it worth it to use PEP3143 Standard daemon process library in 2019 to write a daemon in Python instead of relying on a daemon manager such as systemd?
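For what it's worth, the "simple blocking program" half of that split can stay trivial; a minimal sketch (the task and interval are illustrative):
#!/usr/bin/env python3
import logging
import sys
import time

# Log to stdout and leave daemonizing, restarts, PID handling and log
# collection to whatever process manager runs this (systemd, supervisord).
logging.basicConfig(stream=sys.stdout, level=logging.INFO)

def main():
    while True:
        logging.info("doing work")  # placeholder for the real task
        time.sleep(600)

if __name__ == '__main__':
    main()
Run under systemd with Type=simple this covers the managed case; run as ./foo or ./foo & it covers the first two.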

Run Python headless on local server

So, here is my little problem:
I have a small Python program that has to run 24/7 with internet access, so using my laptop is not really a solution. But I can use a local server. My program is saved on the server. Is there a way to start the program headless on the server, so it can run for a long period of time?
Thanks
This post assumes you are using Linux. If this is not the case, I will still keep this answer around for anyone else; the general principles apply to any OS regardless.
While setsid is one way to put a program into the background, it is usually not what you want for a number of reasons:
If you ssh into the server again, there is no easy way to see the output of the program. Any output will not be kept.
If the program crashes, it won't be restarted.
If the server reboots, it won't be started.
There is no easy way to see the status, stop or restart it.
One slightly better method would be to use tmux (or the older screen). These can be used to detach a process while keeping access to its output (see this answer).
However, if you want to do things correctly, you should use a process manager/supervisor, such as systemd or supervisord.
For systemd, you can create the following file: /etc/systemd/system/yourprogramname.service
Inside it, place the following text:
[Unit]
Description=YourDescription
[Service]
ExecStart=/usr/bin/python3 /your/full/script/location.py
Restart=always
[Install]
WantedBy=multi-user.target
(These files support a number of additional options, you can view them at: http://0pointer.de/public/systemd-man/systemd.service.html)
Then reload the units with systemctl daemon-reload and enable your unit at boot with systemctl enable yourprogramname.service.
You can then:
Start it: systemctl start yourprogramname
Restart it: systemctl restart yourprogramname
Stop it: systemctl stop yourprogramname
Get the status: systemctl status yourprogramname
View the full logs: journalctl -u yourprogramname
(these commands all require sudo)

Run Python script forever, logging errors and restarting when crashes

I have a Python script that continuously processes new data and writes to MongoDB. In the script, a while loop and a sleep run the code continuously.
What is the recommended way to run the Python script forever, logging errors when they occur, and restarting when it crashes?
Will node.js's forever be suitable? I'm also running node/meteor on the same Ubuntu server.
supervisord is perfect for this sort of thing. While I used to check that programs were still running every couple of minutes with a cron job, supervisord runs all programs as child processes and monitors them, so in the event your program terminates, supervisord will automatically restart it. I no longer need to parse the output of ps to see if a program crashed.
It has a simple declarative config file and configurable logging. By default it creates log files named your-program-name-stderr.log and your-program-name-stdout.log, which are automatically handled by logrotate when supervisord is installed from an OS package manager (Debian for me).
If you don't want to configure supervisord's logging, you should look at logging in Python so you can control what goes into those files.
If you're on a Debian derivative, you should be able to install and start the daemon simply by executing apt-get install supervisor as root.
The config file is very straightforward too:
[program:myprogram]
command=/path/to/my/program/script
directory=/path/to/my/program/base
user=myuser
autostart=true
autorestart=true
redirect_stderr=True
supervisorctl also lets you interactively see what your programs are doing, and can start and stop multiple programs, e.g. supervisorctl start myprogram.
I recently wrote something similar. The basic pattern I follow is:
import time

while True:
    try:
        pass  # functionality
    except SpecificError:
        pass  # log exception
    except Exception:
        pass  # catch everything else
    finally:
        time.sleep(600)
To handle reboots you can use an init.d script or a cron @reboot job.
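For the cron route, a single @reboot crontab entry is enough (the path is hypothetical):
@reboot /usr/bin/python3 /path/to/script.py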
If you are writing a daemon, you should probably do it with this command:
http://manpages.ubuntu.com/manpages/lucid/man8/start-stop-daemon.8.html
You can spawn this from a System V /etc/init.d/ script, or use Upstart which is slowly replacing it.
Upstart: http://upstart.ubuntu.com/getting-started.html
System V: http://www.cyberciti.biz/tips/linux-write-sys-v-init-script-to-start-stop-service.html
I find System V easier to write, but if this will ever be packaged and distributed as a Debian package, I recommend writing an Upstart conf.
Definitely keep the sleep so the loop won't hog the CPU.
I don't know if this is still relevant to you, but I have been reading forever about how to do this and want to share what I did.
For me, the goal was to have a Python script always running (on my Linux computer). The Python script also has a "while True" loop in it which should theoretically run forever, but if it crashes for any reason I cannot foresee, I want the script to restart. Also, when I restart the computer, it should run the script.
I am not an expert but for me the best and most understandable was to use systemd (assuming you use Linux).
There are two nice examples of how to do this given here and here, showing how to write your .service files in either /etc/systemd/system or /lib/systemd/system. If you want to be completely correct, you should use the former:
"/etc/systemd/system/: units installed by the system administrator"
The documentation of systemd here is actually nice to read, even if you are not an expert.
Hope this helps someone!
