I have a Raspberry Pi (running a Debian-based distro) which needs to keep a service based on a Python script running.
What I have done so far is create a .service file and add it to the /lib/systemd/system/ folder. The service now runs automatically at boot and is restarted if any crash occurs; furthermore, a little logging system based on syslog has been added.
The content of the .service file looks like this so far:
[Unit]
Description=My_Service
After=network.target network-online.target
After=local-fs.target
[Service]
Type=simple
Restart=always
ExecStartPre=/bin/mkdir -p /home/user/log
ExecStart=/usr/local/bin/python3 -u /home/user/my_service.py
SyslogIdentifier=My_Service
StandardOutput=syslog
StandardError=syslog
[Install]
WantedBy=multi-user.target
Now I've noticed that the script is slightly less performant than when it is run from a terminal.
Because it is the only script that the system should keep running, I was trying to give it the highest priority, but I am not sure how to do that.
So far I've added the following lines to the [Service] section, but I'm not sure whether this is correct or best practice.
CPUSchedulingPolicy=rr
CPUSchedulingPriority=99
Nice=-20
The question is: how can I give this service maximum priority and maximum use of system resources in order to maximise its performance?
I'm also trying to disable other system services which are not useful for my embedded system, such as bluetooth.service. Could this kind of work be good practice?
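For example, this is the kind of thing I've been doing (bluetooth.service is just one example; I check what is actually running with systemctl list-units):
sudo systemctl disable --now bluetooth.service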
-- Edit --
No solutions found yet.
To run a Python script as a service, I recommend using Supervisor.
https://rcwd.dev/long-lived-python-scripts-with-supervisor.html
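An untested sketch of a minimal program section for supervisord, assuming the same script and log paths as in my unit file above:
[program:my_service]
command=/usr/local/bin/python3 -u /home/user/my_service.py
autostart=true
autorestart=true
stdout_logfile=/home/user/log/my_service.out.log
stderr_logfile=/home/user/log/my_service.err.log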
Related
I need to deploy a Python script on an AWS machine with Ubuntu Server 18.04.
The script contains a TCP server using a custom TCP port (let's say 9999), which handles clients' requests in different threads.
The problem is that I don't know the best practice for keeping this script running if anything goes wrong (the main TCP server thread dies for whatever reason).
Furthermore, I don't really know the best practice for running this kind of script on an AWS EC2 instance.
So far I am manually starting the script via SSH. Everything in the script's logic works well; the problem is how to start such a script and keep it running.
You should take a look at the systemd suite. It can be used to manage the status of your script. It can restart the script if it dies, or if the node is rebooted.
Here's an example service.
Create the file below in this location: /lib/systemd/system/example.service
[Unit]
Description=A short description of the script.
[Service]
Type=simple
# Script location
ExecStart=/path/to/some/script.py
# Restart the script in all circumstances (e.g. if it exits successfully, fails, or crashes).
Restart=always
[Install]
WantedBy=multi-user.target
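Note that since ExecStart points at the script itself rather than at an interpreter, the script must be executable (chmod +x) and begin with a shebang line such as:
#!/usr/bin/env python3
Alternatively, you can call the interpreter explicitly, e.g. ExecStart=/usr/bin/python3 /path/to/some/script.py.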
Then set the service to start automatically on boot and start the service:
chmod 644 /lib/systemd/system/example.service
systemctl enable example
systemctl start example
There are a lot of resources available if you want to learn more about systemd. I'd suggest the links below:
[0] https://www.freedesktop.org/wiki/Software/systemd/
[1] https://github.com/torfsen/python-systemd-tutorial
[2] https://www.linode.com/docs/quick-answers/linux/start-service-at-boot/#create-a-custom-systemd-service
[3] https://medium.com/@benmorel/creating-a-linux-service-with-systemd-611b5c8b91d6
As for general best practices, it is difficult to provide advice without knowing more about your script. It is not recommended to use Python's http.server module for production workloads, because it only implements basic security checks.
So, here is my little problem:
I have a small Python program that has to run 24/7 with internet access, so using my laptop is not really a solution. But I can use a local server, and my program is saved on that server. Is there a way to start the program headless on the server, so that it can run for a long period of time?
Thanks
This post assumes you are using Linux. If this is not the case, I will still keep this answer around for anyone else; the general principles will apply to any OS regardless.
While setsid is one way to put a program into the background, it is usually not what you want for a number of reasons:
If you ssh into the server again, there is no easy way to see the output of the program. Any output will not be kept.
If the program crashes, it won't be restarted.
If the server reboots, it won't be started.
There is no easy way to see its status, or to stop or restart it.
One slightly better method would be to use tmux (or the older screen). These can be used to detach a process while still having access to its output (see this answer).
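For example, a minimal tmux workflow could look like this (the session name is arbitrary):
tmux new -s myscript
python3 /your/full/script/location.py
# detach with Ctrl-b d; later, reattach with:
tmux attach -t myscript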
However, if you want to do things correctly, you should use a process manager/supervisor, such as systemd or supervisord.
For systemd, you can create the following file: /etc/systemd/system/yourprogramname.service
Inside it, place the following text:
[Unit]
Description=YourDescription
[Service]
ExecStart=/usr/bin/python3 /your/full/script/location.py
Restart=always
[Install]
WantedBy=multi-user.target
(These files support a number of additional options, you can view them at: http://0pointer.de/public/systemd-man/systemd.service.html)
Then reload the units with systemctl daemon-reload and enable your unit at boot with systemctl enable yourprogramname.service.
You can then:
Start it: systemctl start yourprogramname
Restart it: systemctl restart yourprogramname
Stop it: systemctl stop yourprogramname
Get the status: systemctl status yourprogramname
View the full logs: journalctl -u yourprogramname
(these commands all require sudo)
I wrote a server application in Python with Flask and now I would like to get it up and running on a virtual machine I have set up. Thus, I would really appreciate guidance in two areas.
How do I get the server set up so that it is perpetually running and other computers can access it? The computers will be on the same network, so I don't have to worry about a domain name or anything; I just need multiple devices to be able to access it. I am currently able to run the server on my local machine and everything works just fine.
I have my virtual linux machine set up remotely, so I SSH into it and do everything from command line, but I am a bit lost as to how to do the aforementioned stuff from the command line.
Any guidance/help is much appreciated! The web-searching I have done hasn't pointed me in the right direction. I apologize if any of my terminology was off (if so, please feel free to correct me so I learn!). Thank you!
Use systemd on Ubuntu (unit files live in /etc/systemd/system) for a simple setup; it's probably not ideal for a production setup though.
I do this sometimes for Python Flask apps that I'm prototyping. First, put your application code in /opt/my-app. I usually just cd /opt and git clone a repo there. Then, create a file called /etc/systemd/system/my-app.service. In that file, add the following:
[Unit]
Description=My App daemon
After=network.target postgresql.service
Wants=postgresql.service
[Service]
EnvironmentFile=/etc/sysconfig/my-app
# this is where your app lives
WorkingDirectory=/opt/my-app/
User=root
Group=root
Type=simple
# this starts your app
ExecStart=/usr/bin/python server.py
Restart=always
RestartSec=5s
[Install]
WantedBy=multi-user.target
Next, paste any environment variables you have into a file called /etc/sysconfig/my-app like:
DB_HOST=localhost
DB_USER=postgres
DB_PASSWORD=postgres
DB_NAME=postgres
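Inside the app, these variables are then available from the process environment; a sketch of reading them in server.py:
import os

DB_HOST = os.environ.get("DB_HOST", "localhost")
DB_USER = os.environ.get("DB_USER", "postgres")
DB_PASSWORD = os.environ.get("DB_PASSWORD", "postgres")
DB_NAME = os.environ.get("DB_NAME", "postgres")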
Then you can do:
service my-app start
service my-app stop
service my-app restart
and then you can hit the app running on the server's IP and port (just like if you ran python app.py or python server.py). To check the logs for your daemon process, if it doesn't seem to work, you can run:
journalctl -u my-app -e
In production, I'm not sure this is the best setup; it's probably better to look into something like nginx. But I do this for prototypes all the time and it's pretty great.
I have a Python script with a while True: loop in it that I would like to have run on startup on a Raspberry Pi running Jessie.
So far I have a startup bash script in /etc/init.d called startup.sh which contains
sudo python3 /home/pi/Desktop/Scripts/bluez3.py &
When the Raspberry Pi starts up, the script does run, but after 20 minutes it seems to stop. I have logging in my script and the timestamps stop exactly 20 minutes in.
I did some reading and I think the best option would be to run the Python script as a service on the Raspberry Pi. However, I have not been able to find a decent tutorial on how to do this (not helped by my lack of Python knowledge).
My question is: is there another way to resolve my problem, or does anyone know of a good tutorial on how to turn the Python script into a service?
Thanks!
Given the name of your script, I'm guessing it's related to some Bluetooth stuff. It's likely that after 20 minutes, whatever you're checking/needing in your script becomes inaccessible and throws an exception or something like that: a resource being locked, a BT device being disconnected, a module being unloaded or unavailable, or [insert edge case reason here]…
That being said, before creating a systemd service, you can first play with supervisorctl, which is just an apt install supervisor away.
Then, if you really want to launch it as a service, you can find plenty of examples in /lib/systemd/system/*.service, like the following:
[Unit]
Description=Your service
Wants=
# I guess you need bluetooth initialised first
After=bluetooth.target
[Service]
ExecStart=/usr/bin/python3 /home/pi/Desktop/Scripts/bluez3.py
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=always
[Install]
WantedBy=multi-user.target
which I customized from the sshd.service file 😉
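Note that ExecReload is only useful if the script actually handles SIGHUP; a rough sketch of what that could look like in bluez3.py (the handler body is up to you):
import signal

def on_reload(signum, frame):
    # re-read configuration here instead of restarting the whole process
    pass

signal.signal(signal.SIGHUP, on_reload)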
I have a simple Python script working as a daemon. I am trying to create a systemd service file to be able to start this script during startup.
Current systemd service file:
[Unit]
Description=Text
After=syslog.target
[Service]
Type=forking
User=node
Group=node
WorkingDirectory=/home/node/Node/
PIDFile=/var/run/zebra.pid
ExecStart=/home/node/Node/node.py
[Install]
WantedBy=multi-user.target
node.py:
if __name__ == '__main__':
    with daemon.DaemonContext():
        check = Node()
        check.run()
run() contains a while True loop.
I try to run this service with systemctl start zebra-node.service. Unfortunately, the service never finishes its starting sequence; I have to press Ctrl+C.
The script is running, but the status stays activating and after a while changes to deactivating.
Now I am using python-daemon (though I tried without it before, and the symptoms were similar).
Should I implement some additional features in my script, or is the systemd file incorrect?
The reason it does not complete the startup sequence is that for Type=forking, your startup process is expected to fork and exit (see $ man systemd.service and search for "forking").
Simply use only the main process, do not daemonize
One option is to do less. With systemd, there is often no need to create daemons and you may directly run the code without daemonizing.
#!/usr/bin/python -u
from somewhere import Node
check = Node()
check.run()
This allows using the simpler service Type called simple, so your unit file would look like:
[Unit]
Description=Simplified simple zebra service
After=syslog.target
[Service]
Type=simple
User=node
Group=node
WorkingDirectory=/home/node/Node/
ExecStart=/home/node/Node/node.py
StandardOutput=syslog
StandardError=syslog
[Install]
WantedBy=multi-user.target
Note that the -u in the Python shebang is not necessary, but in case you print something to stdout or stderr, -u makes sure there is no output buffering, so printed lines are immediately caught by systemd and recorded in the journal. Without it, they would appear with some delay.
For this purpose I added the lines StandardOutput=syslog and StandardError=syslog to the unit file. If you do not care about seeing printed output in your journal, you can leave these lines out (they do not have to be present).
systemd makes daemonization obsolete
While the title of your question explicitly asks about daemonizing, I guess the core of the question is "how do I make my service run?", and while using the main process directly seems much simpler (you do not have to care about daemons at all), it can be considered an answer to your question.
I think that many people use daemonizing just because "everybody does it". With systemd, the reasons for daemonizing are often obsolete. There might be some reasons to use daemonization, but they will be rare cases now.
EDIT: fixed python -p to proper python -u. thanks kmftzg
It is possible to daemonize like Schnouki and Amit describe. But with systemd this is not necessary. There are two nicer ways to initialize the daemon: socket-activation and explicit notification with sd_notify().
Socket activation works for daemons which want to listen on a network port or a UNIX socket or similar. systemd opens the socket, listens on it, and then spawns the daemon when a connection comes in. This is the preferred approach because it gives the most flexibility to the administrator. [1] and [2] give a nice introduction, [3] describes the C API, while [4] describes the Python API.
[1] http://0pointer.de/blog/projects/socket-activation.html
[2] http://0pointer.de/blog/projects/socket-activation2.html
[3] http://www.freedesktop.org/software/systemd/man/sd_listen_fds.html
[4] http://www.freedesktop.org/software/systemd/python-systemd/daemon.html#systemd.daemon.listen_fds
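A rough sketch of the Python side, assuming the python-systemd bindings from [4] are installed (the port is just an example):
import socket
from systemd import daemon

fds = daemon.listen_fds()
if fds:
    # systemd passed us an already-bound socket (socket activation)
    sock = socket.fromfd(fds[0], socket.AF_INET, socket.SOCK_STREAM)
else:
    # fall back to binding the socket ourselves when run by hand
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", 9999))
    sock.listen(5)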
Explicit notification means that the daemon opens the sockets itself and/or does any other initialization, and then notifies init that it is ready and can serve requests. This can be implemented with the "forking protocol", but actually it is nicer to just send a notification to systemd with sd_notify().
The Python wrapper is called systemd.daemon.notify and it is one line to use [5].
[5] http://www.freedesktop.org/software/systemd/python-systemd/daemon.html#systemd.daemon.notify
In this case the unit file would have Type=notify, and the script would call systemd.daemon.notify("READY=1") after it has established the sockets. No forking or daemonization is necessary.
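A minimal sketch of such a service, assuming the python-systemd bindings are installed:
import time
from systemd import daemon

# ... open sockets and finish all other initialization here ...

daemon.notify("READY=1")  # tell systemd the service is ready

while True:
    time.sleep(1)  # stands in for the real request-handling loop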
You're not creating the PID file.
systemd expects your program to write its PID in /var/run/zebra.pid. As you don't do this, systemd probably thinks your program is failing, hence deactivating it.
To add the PID file, install lockfile and change your code to this:
import daemon
import daemon.pidlockfile

pidfile = daemon.pidlockfile.PIDLockFile("/var/run/zebra.pid")
with daemon.DaemonContext(pidfile=pidfile):
    check = Node()
    check.run()
(Quick note: some recent update of lockfile changed its API and made it incompatible with python-daemon. To fix it, edit daemon/pidlockfile.py, remove LinkFileLock from the imports, and add from lockfile.linklockfile import LinkLockFile as LinkFileLock.)
Be careful of one other thing: DaemonContext changes the working dir of your program to /, making the WorkingDirectory of your service file useless. If you want DaemonContext to chdir into another directory, use DaemonContext(pidfile=pidfile, working_directory="/path/to/dir").
I came across this question when trying to convert some Python init.d services to systemd under CentOS 7. This seems to work great for me: place this file in /etc/systemd/system/:
[Unit]
Description=manages worker instances as a service
After=multi-user.target
[Service]
Type=idle
User=node
ExecStart=/usr/bin/python /path/to/your/module.py
Restart=always
TimeoutStartSec=10
RestartSec=10
[Install]
WantedBy=multi-user.target
I then dropped my old init.d service file from /etc/init.d and ran sudo systemctl daemon-reload to reload systemd.
I wanted my service to auto restart, hence the restart options. I also found using idle for Type made more sense than simple.
Behavior of idle is very similar to simple; however, actual execution of the service binary is delayed until all active jobs are dispatched. This may be used to avoid interleaving of output of shell services with the status output on the console.
More details on the options I used here.
I also experimented with keeping the old service and having systemd restart the service, but I ran into some issues.
[Unit]
# Added this to the above
#SourcePath=/etc/init.d/old-service
[Service]
# Replace the ExecStart from above with these
#ExecStart=/etc/init.d/old-service start
#ExecStop=/etc/init.d/old-service stop
The issue I experienced was that the init.d service script was used instead of the systemd service when both had the same name. If you killed the init.d-initiated process, the systemd script would then take over. But if you ran service <service-name> stop, it would refer to the old init.d service. So I found the best way was to drop the old init.d service, so that the service command referred to the systemd service instead.
Hope this helps!
Also, you most likely need to set detach_process=True when creating the DaemonContext().
This is because if python-daemon detects that it is running under an init system, it doesn't detach from the parent process. systemd expects a daemon process running with Type=forking to do so. Hence you need it; otherwise systemd will keep waiting, and finally kill the process.
If you are curious, in python-daemon's daemon module, you will see this code:
def is_detach_process_context_required():
    """ Determine whether detaching process context is required.

        Return ``True`` if the process environment indicates the
        process is already detached:

        * Process was started by `init`; or
        * Process was started by `inetd`.

        """
    result = True
    if is_process_started_by_init() or is_process_started_by_superserver():
        result = False

    return result
Hopefully this explains it better.
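In practice the fix is a single keyword argument; a sketch reusing the Node class from the question:
import daemon

with daemon.DaemonContext(detach_process=True):
    check = Node()
    check.run()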