I tried to run a Python script as a system service, but the service is not starting. Here is my configuration:
pyntp.service:
[Unit]
Description=Python NTP Service
After=multi-user.target
[Service]
Type=forking
ExecStart=/usr/bin/python $HOME/ntp/ntpservice.py
[Install]
WantedBy=multi-user.target
ntpservice.py:
#!/usr/bin/python
import os
import time
import json
pid = os.fork()
if pid == 0:
    print 'parent'
else:
    print 'child'

while True:
    print('123')
    time.sleep(1)
The steps to start the service are as follows:
cp pyntp.service /etc/systemd/system/
cp ntpservice.py /usr/local/bin/
systemctl daemon-reload
systemctl enable pyntp.service
systemctl start pyntp.service
The thing is, when I check the status of the pyntp service, it always looks like this:
● pyntp.service - Python NTP Service
Loaded: loaded (/usr/lib/systemd/system/pyntp.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Wed 2018-11-14 22:27:56 CST; 34min ago
Process: 801 ExecStart=/usr/bin/python $HOME/ntp/ntpservice.py (code=exited, status=0/SUCCESS)
Main PID: 801 (code=exited, status=0/SUCCESS)
Nov 14 22:27:56 HES1 systemd[1]: Started Python NTP Service.
Nov 14 22:27:56 HES1 systemd[1]: Starting Python NTP Service...
Can anyone help me resolve this? Thanks.
Your program is behaving as expected. Forking by itself isn't enough to make a daemon: your code only runs as long as its parent process does, and both forks exit when the parent process terminates. What you want is to write a proper daemon (and have it controlled by systemd). You may find this question useful for explaining some easy ways to do that: How do you create a daemon in Python?
fork is an important part of the process, but a fork on its own doesn't completely solve the problem. If you'd like a more detailed example of how to daemonize a process by hand using fork, you can read this: Python code to Daemonize a process?
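If it helps, here is a minimal sketch of that double-fork daemonization; the paths and the placeholder work loop are illustrative, not your actual NTP logic:
#!/usr/bin/python
import os
import sys
import time

def daemonize():
    # First fork: let the original parent exit so the child is detached.
    if os.fork() > 0:
        sys.exit(0)
    os.setsid()  # start a new session and drop the controlling terminal
    # Second fork: make sure the daemon can never reacquire a controlling terminal.
    if os.fork() > 0:
        sys.exit(0)
    os.chdir('/')
    os.umask(0)
    # Point stdin/stdout/stderr at /dev/null so stray prints cannot fail.
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)

if __name__ == '__main__':
    daemonize()
    while True:  # placeholder work loop; the real NTP logic would go here
        time.sleep(1)
With systemd specifically, the simplest route is usually to skip daemonizing entirely: keep the script in the foreground, use Type=simple, and let systemd handle backgrounding, restarts and logging.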
I'm trying to run a bot on a VPS, and I've created a systemd service so that my Python code runs automatically if the server ever reboots for any reason. The service is enabled, its status shows as active, and journalctl shows that the .py file has started, but that's where my progress ends. I receive no other output after the notification that the service has started, and when I check my VPS console there is 0% CPU usage, meaning the script is in fact not running.
The script is located at /home/user/projects/ytbot1/bot/main.py and runs perfectly fine when executed manually with python3 main.py.
Both the script and the .service file were given u+x permissions for root and my user, and the service is set to run only when the user is logged in (I think... all I did was set User=myusername in ytbot1.service).
[Unit]
Description=reiss YT Bot
[Service]
User=reiss
Group=reiss
Type=exec
ExecStart=/usr/bin/python3 "/home/reiss/projects/ytbot1/bot/main.py"
Restart=always
RestartSec=5
PrivateTmp=true
TimeoutSec=900
[Install]
WantedBy=multi-user.target
Here's the output from sudo systemctl status ytbot1:
● ytbot1.service - reiss YT Bot
Loaded: loaded (/etc/systemd/system/ytbot1.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2022-05-16 10:34:04 CEST; 9s ago
Main PID: 7684 (python3)
Tasks: 1 (limit: 19141)
Memory: 98.4M
CGroup: /system.slice/ytbot1.service
└─7684 /usr/bin/python3 /home/reiss/projects/ytbot1/bot/main.py
and from sudo journalctl -fu ytbot1.service:
root@vm1234567:~# journalctl -fu ytbot1.service
-- Logs begin at Mon 2022-05-16 07:41:00 CEST. --
May 16 10:07:18 vm1234567.contaboserver.net systemd[1]: Starting reiss YT Bot...
May 16 10:07:18 vm1234567.contaboserver.net systemd[1]: Started reiss YT Bot.
And it stops there; the log doesn't update or add any new information.
desired output:
-- Logs begin at Mon 2022-05-16 07:41:00 CEST. --
May 16 10:07:18 vm1234567.contaboserver.net systemd[1]: Starting reiss YT Bot...
May 16 10:07:18 vm1234567.contaboserver.net systemd[1]: Started reiss YT Bot.
Handling GoogleAPI
2022 5 15 14 38 2
./APR_2022_V20 MAY_2022_V15.mp4
DOWNLOADING VIDEOS...
[...] *Script runs, you get the picture*
Any help? Could it be that I have my .py file in the wrong place, or that something's wrong with the .service file or working directory? Maybe I should use a different version of Python? The script I'm trying to run is pretty complex, so maybe forking could be an issue (the code calls a couple of Google APIs, but setting Type=forking just makes the service startup hang and eventually time out for some reason)? I don't know, man... I appreciate feedback. Thanks!
Try using /usr/bin/python3 -u and then the file path.
The -u option tells Python not to fully buffer output.
By default, Python uses line buffering if the output is a console, otherwise full buffering. Line buffering means output is saved up until there's a complete line, and then flushed. Full buffering can buffer many lines at a time. And the systemd journal is probably not detected as a console.
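For reference, a sketch of the adjusted [Service] lines based on the unit in the question; only the -u flag and the optional environment line are new:
[Service]
# -u disables Python's output buffering, so print() output reaches the journal immediately
ExecStart=/usr/bin/python3 -u "/home/reiss/projects/ytbot1/bot/main.py"
# Alternatively, the same effect via the environment:
# Environment=PYTHONUNBUFFERED=1
Another option is to pass flush=True to the individual print() calls in the script.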
I am running a service on a Raspberry Pi that is meant to run a Python script on startup. Sometimes the Python script fails, but when it does, the service still reports success, which is wrong.
The python script below:
import cec
import sys
import time
import configparser
from tuya.devices import TuyaSmartSwitch
class SmartSwitch:
    def __init__(self, config_path):
        CONFIG = configparser.ConfigParser()
        CONFIG.read(config_path)
        try:  # connect to the smart switch
            self.device = TuyaSmartSwitch(
                username=CONFIG["TUYA"]["username"],
                password=CONFIG["TUYA"]["password"],
                location=CONFIG["TUYA"]["location"],
                device=CONFIG["TUYA"]["device"])
        except:
            print("Could not connect to the switch")
            sys.exit()

    def turn_off(self):
        self.device.turn_off()

    def turn_on(self):
        self.device.turn_on()
This is the terminal output:
pi@raspberrypi:~/subwoofer_switch $ sudo systemctl status subwoofer.service
● subwoofer.service - My script to control suboowfer smart switch
Loaded: loaded (/etc/systemd/system/subwoofer.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Thu 2020-07-02 19:46:43 BST; 23h ago
Process: 541 ExecStart=/usr/bin/python3 /home/pi/subwoofer_switch/subwoofer_control.py (code=exited, status=0/SUCCESS)
Main PID: 541 (code=exited, status=0/SUCCESS)
Jul 02 19:46:30 raspberrypi systemd[1]: Started My script to control suboowfer smart switch.
Jul 02 19:46:43 raspberrypi python3[541]: Could not connect to the switch
Jul 02 19:46:43 raspberrypi systemd[1]: subwoofer.service: Succeeded.
As you can see, the script did fail to connect, and sys.exit() should have run and closed the script, but it still reports success.
Here is the service code:
[Unit]
Description=My script to control suboowfer smart switch
After=multi-user.target
[Service]
Restart=on-failure
RestartSec=10s
Type=idle
ExecStart=/usr/bin/python3 /home/pi/subwoofer_switch/subwoofer_control.py
[Install]
WantedBy=multi-user.target
I am not sure what I am doing wrong; my hope was that if the service failed to start, it would try to run the Python script again.
If, as in your code, no parameter is supplied to sys.exit(), it defaults to zero, which means "success". So, if your intention is to exit with failure, use:
sys.exit(1)
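For example, a minimal sketch of the changed error path; connect_to_switch() is a hypothetical stand-in for the TuyaSmartSwitch(...) call in the script above:
import sys

try:
    device = connect_to_switch()  # hypothetical stand-in for TuyaSmartSwitch(...)
except Exception:
    print("Could not connect to the switch")
    sys.exit(1)  # non-zero status marks the unit as failed, so Restart=on-failure retries it
Note that Restart=on-failure only reacts to a non-zero exit status (or a signal/timeout), which is why the bare sys.exit() never triggers a restart.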
I'm deploying a Django app in a virtual environment and I'm using supervisor for the app itself and some Celery tasks. When my /etc/supervisor/conf.d/project is like this:
[program:botApp]
command = /home/ubuntu/gunicorn_start.bash;
user = ubuntu;
stdout_logfile = /home/ubuntu/logs/gunicorn_supervisor.log;
redirect_stderr = true;
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8;
it works fine: I run sudo systemctl restart supervisor and I can see it running properly. But when I add my second program to the same configuration file like this:
[program:botApp]
command = /home/ubuntu/gunicorn_start.bash;
user = ubuntu;
stdout_logfile = /home/ubuntu/logs/gunicorn_supervisor.log;
redirect_stderr = true;
environment=LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8;
[program:worker]
command=/home/ubuntu/django_env/bin/celery -A botApp worker -l info;
user=ubuntu;
numprocs=1;
stdout_logfile=/home/ubuntu/logs/celeryworker.log;
redirect_stderr = true;
autostart=true;
autorestart=true;
startsecs=10;
stopwaitsecs = 600 ;
killasgroup=true;
priority=998;
it throws the following error:
● supervisor.service - Supervisor process control system for UNIX
Loaded: loaded (/lib/systemd/system/supervisor.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Tue 2018-09-04 08:09:26 UTC; 12s ago
Docs: http://supervisord.org
Process: 21931 ExecStop=/usr/bin/supervisorctl $OPTIONS shutdown (code=exited, status=0/SUCCESS)
Process: 21925 ExecStart=/usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf (code=exited, status=2)
Main PID: 21925 (code=exited, status=2)
Sep 04 08:09:26 ip-172-31-45-13 systemd[1]: supervisor.service: Unit entered failed state.
Sep 04 08:09:26 ip-172-31-45-13 systemd[1]: supervisor.service: Failed with result 'exit-code'.
I have tried making the second program identical to the first one, with only a different name and log file, and it throws the same error. Do I need to do something extra to use two programs with supervisor? Many thanks.
Since this question was asked over a year ago, it seems doubtful we'll ever receive the answers to these questions, but the following pieces of information would have been helpful:
what Linux distribution and version are (or were) you using; e.g., Ubuntu 18.04, CentOS 7, etc.?
did you look at the logs generated by systemd? (journalctl -xu supervisord)
what, if any, messages did they contain?
did you look at the individual log files generated by your two supervisord services (e.g., /home/ubuntu/logs/celeryworker.log)?
what, if any, messages did they contain?
My gut feel is that the output of journalctl -xu supervisord will tell you what you need to know. Or at least move you a step in the right direction.
Once the configuration file has been created, you may update the Supervisor configuration and start the processes using the following commands:
sudo supervisorctl reread
sudo supervisorctl update
and then restart your programs with supervisorctl.
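For example, with the program names from the configuration above (restart and status are standard supervisorctl subcommands):
sudo supervisorctl restart botApp worker
sudo supervisorctl status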
The following Python script, named "tst_script.py", writes to a file. It works when launched from the command line but fails to create the file when launched via a systemd service.
#!/usr/bin/python
#-*- coding: utf-8 -*-
import time

if __name__ == '__main__':
    with open('test_file', 'a') as fd:
        while True:
            for i in range(10):
                fd.write('test' + str(i) + '\r\n')
                time.sleep(3)
            break
The service script named "tst_script.service" is the following:
[Unit]
Description=Python test script
[Install]
WantedBy=multi-user.target
[Service]
Type=simple
ExecStart=/usr/bin/python -O /home/gilles/tst_script.py
The service script is copied into the "/lib/systemd/system" folder and activated by:
sudo systemctl daemon-reload
sudo systemctl enable tst_script.service
We check that the service is installed and enabled by: sudo systemctl status tst_script.service
The result is:
tst_script.service - Python test script
Loaded: loaded (/lib/systemd/system/tst_script.service; enabled)
Active: inactive (dead) since ven. 2018-03-02 13:35:00 CET; 16min ago
Main PID: 10565 (code=exited, status=0/SUCCESS)
and we launch the service by: sudo systemctl start tst_script.service
We check that the script is running: sudo systemctl status tst_script.service
The result is:
tst_script.service - Python test script
Loaded: loaded (/lib/systemd/system/tst_script.service; enabled)
Active: active (running) since ven. 2018-03-02 13:51:17 CET; 1s ago
Main PID: 10738 (python)
CGroup: /system.slice/tst_script.service
└─10738 /usr/bin/python -O /home/gilles/tst_script.py
And after 30 seconds, we check that the process is completed:
tst_script.service - Python test script
Loaded: loaded (/lib/systemd/system/tst_script.service; enabled)
Active: inactive (dead) since ven. 2018-03-02 13:51:47 CET; 3s ago
Process: 10738 ExecStart=/usr/bin/python -O /home/gilles/tst_script.py (code=exited, status=0/SUCCESS)
Main PID: 10738 (code=exited, status=0/SUCCESS)
but, as a result, the file "test_file" doesn't exist...
I googled but didn't find any answer that solves my problem. The closest one I found is: running python script as a systemd service.
Do you have any idea how to solve this problem?
As Cong Ma mentioned in the comments, the most obvious problem is that you are probably looking in the wrong place. In your code you open the output file with open('test_file', 'a'), which, when you test the script from its own folder, creates the file next to the script.
The problem is that Python does not interpret file_name as /path/to/python/script/file_name; it resolves a relative path against the current working directory. When the script is started by systemd, the working directory is not the folder where the script is located.
You can handle this by configuring your service, but easier in this case is to just provide an absolute path to the output file:
import time

if __name__ == '__main__':
    with open('/home/gilles/test_file', 'a') as fd:
        while True:
            for i in range(10):
                fd.write('test' + str(i) + '\r\n')
                time.sleep(3)
            break
If you'd rather fool around with the service, you can change the working directory of systemd. This info might be helpful: Changing Working Directory of systemd service
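For example, a sketch of the [Service] section with the working directory pointed at the script's folder; only the WorkingDirectory= line is new, and the relative 'test_file' would then be created in /home/gilles:
[Service]
Type=simple
WorkingDirectory=/home/gilles
ExecStart=/usr/bin/python -O /home/gilles/tst_script.py
A purely Python-side alternative is to build the path from the script's own location, e.g. open(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'test_file'), 'a').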
I'm trying to get a Flask + SocketIO app running as a service on Ubuntu 16.04, inside a virtual environment. My server is restarted every day at 3 am (outside of my control), so I need it to automatically launch on startup.
Running the script by itself works fine:
$ python main.py
(29539) wsgi starting up on http://127.0.0.1:8081
I can tell that it's working because it's serving pages (through an nginx server set up by following this Stack Overflow answer, though I don't think that's relevant.)
Here's my /etc/systemd/system/opendc.service:
[Unit]
Description=OpenDC flask + socketio service
[Service]
Environment=PYTHON_HOME=/var/www/opendc.ewi.tudelft.nl/web-server/venv
Environment=PATH=$VIRTUAL_ENV/bin:$PATH
ExecStart=/var/www/opendc.ewi.tudelft.nl/web-server main.py
Restart=always
[Install]
WantedBy=multi-user.target
So when I try to get that going using:
$ sudo systemctl daemon-reload
$ sudo systemctl restart opendc
It doesn't serve pages anymore. The status shows:
$ sudo systemctl status opendc
* opendc.service - OpenDC flask + socketio service
Loaded: loaded (/etc/systemd/system/opendc.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Fri 2016-08-19 10:48:31 CEST; 15min ago
Process: 29533 ExecStart=/var/www/opendc.ewi.tudelft.nl/web-server main.py (code=exited, status=203/EXEC)
Main PID: 29533 (code=exited, status=203/EXEC)
Aug 19 10:48:31 opendc.ewi.tudelft.nl systemd[1]: opendc.service: Service hold-off time over, scheduling restart.
Aug 19 10:48:31 opendc.ewi.tudelft.nl systemd[1]: Stopped OpenDC flask + socketio service.
Aug 19 10:48:31 opendc.ewi.tudelft.nl systemd[1]: opendc.service: Start request repeated too quickly.
Aug 19 10:48:31 opendc.ewi.tudelft.nl systemd[1]: Failed to start OpenDC flask + socketio service.
I've looked up (code=exited, status=203/EXEC) and done some troubleshooting with what I found:
I checked that main.py is executable:
$ ls -l main.py
-rwxr-xr-x 1 leon leon 2007 Aug 19 10:46 main.py
And that main.py has this first line to point to Python in the virtual environment:
#!/var/www/opendc.ewi.tudelft.nl/web-server/venv/bin/python
So what's the problem here?
A tried and tested way of making a Python file run in a virtual environment as a service:
[Unit]
Description=Your own description
After=network.target
[Service]
Type=simple
WorkingDirectory=/home/path/to/WorkingDirectory/
Environment=VIRTUAL_ENV=/home/path/to/WorkingDirectory/venv
Environment=PATH=$VIRTUAL_ENV/bin:$PATH
ExecStart=/home/path/to/WorkingDirectory/venv/bin/python app.py
Restart=on-failure
[Install]
WantedBy=multi-user.target
I am putting this one here so I can always come back to it
I believe you made a typo: you defined PYTHON_HOME but then wrote PATH=$VIRTUAL_ENV/bin:$PATH.
You should use PATH=$PYTHON_HOME/bin:$PATH.
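Putting those pieces together, a hedged sketch of a corrected opendc unit; the paths come from the question, and the key change is that ExecStart now begins with an actual executable (the original line starts with a directory, which is the usual cause of status=203/EXEC):
[Unit]
Description=OpenDC flask + socketio service
[Service]
WorkingDirectory=/var/www/opendc.ewi.tudelft.nl/web-server
# Use the virtualenv's interpreter directly; no PATH manipulation is needed.
ExecStart=/var/www/opendc.ewi.tudelft.nl/web-server/venv/bin/python main.py
Restart=always
[Install]
WantedBy=multi-user.target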