I have a Python app with a loop that generates some files, saves video images, and does some other stuff. I installed it on a Fedora 17 PC and want it to run "forever", i.e. if it hangs (I can write some keep-alive marker to a file inside the loop) it should be restarted. It should also be started on reboot.
As I understand it, python-daemon helps with this, and systemd does on Fedora.
I have the following config file for systemd (I'm not sure about some parameters, though, as the documentation is too complicated for my level of Linux knowledge):
[Unit]
Description=TPR Daemon
[Service]
Type=forking
Restart=always
WorkingDirectory=/home/igor/tpr
PIDFile=/var/run/tpr.pid
ExecStart=/usr/bin/python /home/igor/tpr/testd.py
[Install]
WantedBy=default.target
And here is my testd.py:
import daemon
import time, sys

class MyDaemon(object):
    def __init__(self):
        pass

    def run(self):
        while True:
            print 'I am alive!'
            time.sleep(1)

if __name__ == '__main__':
    with daemon.DaemonContext(stdout=sys.stdout):
        check = MyDaemon()
        check.run()
When I run it with "sudo systemctl start tpr.service", it hangs for a while and then bails out with this message:
Warning: Unit file of tpr.service changed on disk, 'systemctl --system daemon-reload' recommended.
Job for tpr.service failed. See 'systemctl status tpr.service' and 'journalctl -xn' for details.
And here are some logs from /var/log/messages:
Aug 9 21:32:27 localhost systemd[1]: Unit tpr.service entered failed state.
Aug 9 21:32:27 localhost systemd[1]: tpr.service holdoff time over, scheduling restart.
Aug 9 21:32:27 localhost systemd[1]: Stopping TPR Daemon...
Aug 9 21:32:27 localhost systemd[1]: Starting TPR Daemon...
Aug 9 21:33:57 localhost systemd[1]: tpr.service operation timed out. Terminating.
Aug 9 21:33:57 localhost python[28702]: I am alive!
Aug 9 21:33:57 localhost python[28702]: I am alive!
Aug 9 21:33:57 localhost python[28702]: I am alive!
Aug 9 21:33:57 localhost python[28702]: I am alive!
...
Aug 9 21:33:57 localhost python[28702]: I am alive!
Aug 9 21:33:57 localhost python[28702]: I am alive!
Aug 9 21:33:57 localhost python[28702]: I am alive!
Aug 9 21:33:57 localhost python[28702]: I am alive!
Aug 9 21:33:57 localhost systemd[1]: tpr.service: control process exited, code=exited status=1
Aug 9 21:33:57 localhost systemd[1]: Failed to start TPR Daemon.
Aug 9 21:33:57 localhost systemd[1]: Unit tpr.service entered failed state.
Aug 9 21:33:57 localhost systemd[1]: tpr.service holdoff time over, scheduling restart.
Aug 9 21:33:57 localhost systemd[1]: Stopping TPR Daemon...
Aug 9 21:33:57 localhost systemd[1]: Starting TPR Daemon...
So it seems to be running, but what is this error about?
And maybe there is some simpler, more convenient way to accomplish my task and I'm just reinventing the wheel?
Update:
It seems the daemon should somehow let systemd know it has started... but how?
Aug 10 01:15:36 localhost systemd[1]: Starting TPR Daemon...
Aug 10 01:17:06 localhost systemd[1]: tpr.service operation timed out. Terminating.
Aug 10 01:17:06 localhost systemd[1]: tpr.service: control process exited, code=exited status=1
Aug 10 01:17:06 localhost systemd[1]: Failed to start TPR Daemon.
Aug 10 01:17:06 localhost systemd[1]: Unit tpr.service entered failed state.
Aug 10 01:17:06 localhost systemd[1]: tpr.service holdoff time over, scheduling restart.
Aug 10 01:17:06 localhost systemd[1]: Stopping TPR Daemon...
Aug 10 01:17:06 localhost systemd[1]: Starting TPR Daemon...
Aug 10 01:18:36 localhost systemd[1]: tpr.service operation timed out. Terminating.
Aug 10 01:18:36 localhost systemd[1]: tpr.service: control process exited, code=exited status=1
Aug 10 01:18:36 localhost systemd[1]: Failed to start TPR Daemon.
Aug 10 01:18:36 localhost systemd[1]: Unit tpr.service entered failed state.
Aug 10 01:18:36 localhost systemd[1]: tpr.service holdoff time over, scheduling restart.
Aug 10 01:18:36 localhost systemd[1]: Stopping TPR Daemon...
Aug 10 01:18:36 localhost systemd[1]: Starting TPR Daemon...
The error about the unit file changing on disk just means that the file was modified after systemd last loaded it.
After you run systemctl daemon-reload the file will be re-read, and then you'll be able to start the service.
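That is, using the unit name from the question:

sudo systemctl daemon-reload
sudo systemctl start tpr.service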
You can use startup notifications as described in the sd_notify manual page. The service type for that is notify.
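For illustration, with Type=notify the script reports "startup finished" by sending READY=1 over the datagram socket whose path systemd passes in $NOTIFY_SOCKET (the python-systemd bindings wrap this as systemd.daemon.notify(), but the raw protocol is tiny). A rough sketch of the idea, not your exact script:

import os, socket, time

def notify_systemd(message):
    # systemd puts the notification socket path in NOTIFY_SOCKET
    addr = os.environ.get('NOTIFY_SOCKET')
    if not addr:
        return  # not started with Type=notify; nothing to do
    if addr.startswith('@'):
        addr = '\0' + addr[1:]  # abstract namespace socket
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        sock.sendto(message, addr)
    finally:
        sock.close()

if __name__ == '__main__':
    notify_systemd('READY=1')  # tell systemd that startup is complete
    while True:
        print 'I am alive!'
        time.sleep(1)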
The next thing: you declare your service type as forking. Does your process really fork? It is also recommended to set the PIDFile option if you use Type=forking.
With systemd it is not necessary to fork your process at all to make it a daemon.
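In other words, the simplest route here is probably to drop python-daemon entirely and let systemd supervise a plain foreground process. A sketch of that approach, reusing the paths from the question (the -u flag is my addition so that print output reaches the journal immediately; Restart=always handles restarts after a crash, and enabling the unit starts it on boot):

[Unit]
Description=TPR Daemon

[Service]
Type=simple
WorkingDirectory=/home/igor/tpr
ExecStart=/usr/bin/python -u /home/igor/tpr/testd.py
Restart=always
RestartSec=5

[Install]
WantedBy=default.target

testd.py then shrinks to just the loop, with no DaemonContext:

import time

while True:
    print 'I am alive!'
    time.sleep(1)

After editing the unit file, run sudo systemctl daemon-reload, then sudo systemctl enable tpr.service and sudo systemctl start tpr.service.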
Related
I have a brand-new Fedora Server 36 minimal install; all it will run is KVM.
I did the install with dnf group install "Headless Virtualization" and restarted the server.
systemctl status libvirtd showed it was not running, i.e.:
[root@dell-fedora-kvm ~]# systemctl status libvirtd
○ libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: disabled)
Active: inactive (dead)
TriggeredBy: ○ libvirtd-tcp.socket
○ libvirtd-admin.socket
○ libvirtd-ro.socket
○ libvirtd-tls.socket
○ libvirtd.socket
Docs: man:libvirtd(8)
https://libvirt.org
So I started and enabled it with systemctl start libvirtd and systemctl enable libvirtd.
I restarted, but it still doesn't start automatically. When I start it manually with systemctl start libvirtd, this is what I get from status. Not sure this tells us where the issue could be.
[root@dell-fedora-kvm ~]# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2022-07-03 16:32:23 AEST; 52s ago
TriggeredBy: ○ libvirtd-tcp.socket
● libvirtd-admin.socket
● libvirtd-ro.socket
○ libvirtd-tls.socket
● libvirtd.socket
Docs: man:libvirtd(8)
https://libvirt.org
Main PID: 857 (libvirtd)
Tasks: 21 (limit: 32768)
Memory: 44.8M
CPU: 480ms
CGroup: /system.slice/libvirtd.service
├─ 857 /usr/sbin/libvirtd --timeout 120
├─ 957 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
└─ 958 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
Jul 03 16:32:23 dell-fedora-kvm systemd[1]: Started libvirtd.service - Virtualization daemon.
Jul 03 16:32:24 dell-fedora-kvm dnsmasq[957]: started, version 2.86 cachesize 150
Jul 03 16:32:24 dell-fedora-kvm dnsmasq[957]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP conntrack ipset auth cryptoh>
Jul 03 16:32:24 dell-fedora-kvm dnsmasq-dhcp[957]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
Jul 03 16:32:24 dell-fedora-kvm dnsmasq-dhcp[957]: DHCP, sockets bound exclusively to interface virbr0
Jul 03 16:32:24 dell-fedora-kvm dnsmasq[957]: reading /etc/resolv.conf
Jul 03 16:32:24 dell-fedora-kvm dnsmasq[957]: using nameserver 127.0.0.53#53
Jul 03 16:32:24 dell-fedora-kvm dnsmasq[957]: read /etc/hosts - 2 addresses
Jul 03 16:32:24 dell-fedora-kvm dnsmasq[957]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Jul 03 16:32:24 dell-fedora-kvm dnsmasq-dhcp[957]: read /var/lib/libvirt/dnsmasq/default.hostsfile
However, the service shuts down after about 2 minutes...
[root@dell-fedora-kvm ~]# systemctl status libvirtd
○ libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Sun 2022-07-03 16:37:25 AEST; 6s ago
TriggeredBy: ○ libvirtd-tcp.socket
● libvirtd-admin.socket
● libvirtd-ro.socket
○ libvirtd-tls.socket
● libvirtd.socket
Docs: man:libvirtd(8)
https://libvirt.org
Process: 994 ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS (code=exited, status=0/SUCCESS)
Main PID: 994 (code=exited, status=0/SUCCESS)
Tasks: 2 (limit: 32768)
Memory: 31.7M
CPU: 330ms
CGroup: /system.slice/libvirtd.service
├─ 957 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
└─ 958 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
Jul 03 16:35:25 dell-fedora-kvm systemd[1]: Starting libvirtd.service - Virtualization daemon...
Jul 03 16:35:25 dell-fedora-kvm systemd[1]: Started libvirtd.service - Virtualization daemon.
Jul 03 16:35:25 dell-fedora-kvm dnsmasq[957]: read /etc/hosts - 2 addresses
Jul 03 16:35:25 dell-fedora-kvm dnsmasq[957]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Jul 03 16:35:25 dell-fedora-kvm dnsmasq-dhcp[957]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Jul 03 16:37:25 dell-fedora-kvm systemd[1]: libvirtd.service: Deactivated successfully.
Jul 03 16:37:25 dell-fedora-kvm systemd[1]: libvirtd.service: Unit process 957 (dnsmasq) remains running after unit stopped.
Jul 03 16:37:25 dell-fedora-kvm systemd[1]: libvirtd.service: Unit process 958 (dnsmasq) remains running after unit stopped.
I have not made any changes to libvirtd.service. Any idea what's going on?
In Fedora 35, the default libvirt installation was switched to use virtqemud:
https://fedoraproject.org/wiki/Changes/LibvirtModularDaemons
Furthermore, regardless of whether an install is configured to use libvirtd or virtqemud, there is no need for the services to be running at install time. They all make use of systemd socket activation, so they will start automatically whenever some application tries to use libvirt. They will stay running as long as an application is connected, and shut down 2 minutes after the last app disconnects (unless VMs are actively running).
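If you want to confirm that behaviour on your machine, something along these lines should show it (this assumes the modular daemons from the change page above; the exact socket names can differ between installs):

systemctl status virtqemud.socket virtnetworkd.socket virtstoraged.socket
virsh list --all                    # connecting with any libvirt client activates the daemon on demand
systemctl status virtqemud.service  # active now; it exits again a couple of minutes after the last client disconnects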
I have the following systemd service:
[Unit]
Description=Ml api
[Service]
#user=root
ExecStart=/usr/local/bin/python3.9 -u /home/a.nikitin@corp.bsv.legal/bsv_ml_api/app.py
ExecStop=/bin/kill -15 $MAINPID
Type=forking
#KillMode=process
#SyslogIdentifier=ml-api
#SyslogFacility=daemon
Restart=on-failure
[Install]
WantedBy=multiuser.target
When I run it, I get an error. sudo journalctl -u ml.service -e shows:
ml.service - Ml api
Loaded: loaded (/usr/lib/systemd/system/ml.service; disabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Fri 2022-06-10 11:38:38 MSK; 1 day 23h ago
Main PID: 77614 (code=exited, status=203/EXEC)
Jun 10 11:38:38 srv-ml-api.corp.bsv.legal systemd[1]: Unit ml.service entered failed state.
Jun 10 11:38:38 srv-ml-api.corp.bsv.legal systemd[1]: ml.service failed.
Jun 10 11:38:38 srv-ml-api.corp.bsv.legal systemd[1]: ml.service holdoff time over, scheduling restart.
Jun 10 11:38:38 srv-ml-api.corp.bsv.legal systemd[1]: Stopped Ml api.
Jun 10 11:38:38 srv-ml-api.corp.bsv.legal systemd[1]: start request repeated too quickly for ml.service
Jun 10 11:38:38 srv-ml-api.corp.bsv.legal systemd[1]: Failed to start Ml api.
Jun 10 11:38:38 srv-ml-api.corp.bsv.legal systemd[1]: Unit ml.service entered failed state.
Jun 10 11:38:38 srv-ml-api.corp.bsv.legal systemd[1]: ml.service failed.
However, if I run it directly:
/usr/local/bin/python3.9 -u /home/a.nikitin@corp.bsv.legal/bsv_ml_api/app.py
everything is OK and the script starts:
INFO: Started server process [4401]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:1111 (Press CTRL+C to quit)
I don't know where the problem is. It's a FastAPI + uvicorn app.
Turns out I needed to specify the Python environment in the unit:
Environment="PYTHONPATH=$PYTHONPATH:/home/a.nikitin@corp.bsv.legal/.local/lib/python3.9/site-packages"
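For reference, a cleaned-up unit along those lines might look like the sketch below (paths copied from the question; Type=simple is used here because uvicorn runs in the foreground, and the site-packages path is written out literally since systemd does not do shell-style $VAR expansion inside Environment=):

[Unit]
Description=Ml api

[Service]
Type=simple
Environment="PYTHONPATH=/home/a.nikitin@corp.bsv.legal/.local/lib/python3.9/site-packages"
ExecStart=/usr/local/bin/python3.9 -u /home/a.nikitin@corp.bsv.legal/bsv_ml_api/app.py
Restart=on-failure

[Install]
WantedBy=multi-user.target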
I've created a Python script that collects info over a 5-minute period and then sends the data to AWS CloudWatch. The body of the script runs forever in a while True: block, and when I run it on its own with python3 to_cloud.py, it works fine.
I made a service for it as follows:
[Unit]
Description=Service for sending script data to cloudwatch
After=multi-user.target
Conflicts=getty@tty1.service
[Service]
Type=simple
ExecStart=/usr/bin/python3 /home/ubuntu/scripts/to_cloud.py
WorkingDirectory= /home/ubuntu/scripts
StandardInput=tty-force
[Install]
WantedBy=multi-user.target
When I start the service, it runs for the duration of the internal loop that collects the info, but then the following happens:
● to_cloud.service - Service for sending script data to cloudwatch
Loaded: loaded (/lib/systemd/system/to_cloud.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2021-06-23 16:53:44 UTC; 5s ago
Process: 191072 ExecStart=/usr/bin/python3 /home/ubuntu/scripts/to_cloud.py (code=exited, status=1/FAILURE)
Main PID: 191072 (code=exited, status=1/FAILURE)
Jun 23 16:52:43 ip-172-31-19-11 systemd[1]: Started Service for sending script data to cloudwatch.
Jun 23 16:53:44 ip-172-31-19-11 systemd[1]: to_cloud.service: Main process exited, code=exited, status=1/FAILURE
Jun 23 16:53:44 ip-172-31-19-11 systemd[1]: to_cloud.service: Failed with result 'exit-code'.
There are no logs in journalctl, so I'm wondering how to figure out what's going wrong. Thanks!
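One way to dig a failure like this out (a debugging sketch, not a known fix for this particular script) is to make sure the Python traceback actually reaches the journal and then read the unit's log:

# in the [Service] section of to_cloud.service:
#   ExecStart=/usr/bin/python3 -u /home/ubuntu/scripts/to_cloud.py
#   StandardOutput=journal
#   StandardError=journal
sudo systemctl daemon-reload
sudo systemctl restart to_cloud.service
journalctl -u to_cloud.service -e    # any uncaught exception should show up here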
I have an EC2 instance associated with Elastic Beanstalk on which my Flask app is deployed. I have implemented some REST APIs using Flask and Python.
I created another EC2 instance on Amazon Linux 2, on which I installed MongoDB Community Edition. I noticed that it has a local IP set in the /etc/mongod.conf file:
# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1
As per my understanding, I need to put the private IP of the EC2 instance running the Flask app there:
# network interfaces
net:
  port: 27017
  bindIp: <private ip of EC2 with flask app>
so that I can access the MongoDB installed on this instance from the Flask app.
# configuring mongo
application.config["MONGO_DBNAME"] = "my_db"
application.config["MONGO_URI"] = "mongodb://public_ip_of_mongodb:27017/my_app"
For some reason, as soon as I edit the /etc/mongod.conf file, the mongod service starts failing:
mongod.service - MongoDB Database Server
Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2020-08-04 17:17:28 UTC; 1min 11s ago
Docs: https://docs.mongodb.org/manual
Process: 2019 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=48)
Process: 2015 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 2012 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 2010 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)
Main PID: 1651 (code=exited, status=0/SUCCESS)
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal systemd[1]: Starting MongoDB Database Server...
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal mongod[2019]: about to fork child process, waiting until server is ready for connections.
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal mongod[2019]: forked process: 2023
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal systemd[1]: mongod.service: control process exited, code=exited status=48
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal systemd[1]: Failed to start MongoDB Database Server.
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal systemd[1]: Unit mongod.service entered failed state.
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal systemd[1]: mongod.service failed.
Even if I revert bindIp to 127.0.0.1, it still fails.
Am I missing anything here?
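For what it's worth, net.bindIp tells mongod which of its own local interfaces to listen on, so pointing it at the Flask instance's IP makes mongod fail to bind (exit status 48 generally means it could not set up its listening socket). A sketch of what the section would more typically look like, with a placeholder for the MongoDB instance's own private IP:

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1,<private ip of this MongoDB instance>

The Flask app would then point MONGO_URI at that private IP, and the MongoDB instance's security group has to allow inbound traffic on port 27017 from the app instance.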
I have a problem with Odoo 11: it doesn't want to start (at first it worked fine), so I ran this command:
systemctl status odoo.service
and I get this error:
odoo.service - Odoo Open Source ERP and CRM
Loaded: loaded (/lib/systemd/system/odoo.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2019-06-28 10:28:38 PKT; 7s ago
Process: 3256 ExecStart=/opt/odoo/odoo-bin --config /etc/odoo.conf --logfile /var/log/odoo/odoo-server.log (code=exited, status=2)
Main PID: 3256 (code=exited, status=2)
juin 10 11:08:45 server-inprotec systemd[1]: Started Odoo Open Source ERP and CRM.
juin 10 11:08:46 server-inprotec odoo[3256]: Usage: odoo [options]
juin 10 11:08:46 server-inprotec odoo[3256]: odoo: error: The config file '/etc/odoo/odoo.conf' selected with -c/--config doesn't exi
juin 10 11:08:46 server-inprotec systemd[1]: odoo.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
juin 10 11:08:46 server-inprotec systemd[1]: odoo.service: Unit entered failed state.
juin 10 11:08:46 server-inprotec systemd[1]: odoo.service: Failed with result 'exit-code'.
I looked for this error but didn't find anything. I just found that the PostgreSQL database doesn't work when I check its status in the terminal:
systemctl status postgresql@9.5-main.service
Result:
postgresql@9.5-main.service - PostgreSQL Cluster 9.5-main
Loaded: loaded (/lib/systemd/system/postgresql@.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since lun. 2019-06-10 12:04:22 CET; 1h 59min ago
Process: 2951 ExecStart=postgresql@%i --skip-systemctl-redirect %i start (code=exited, status=1/FAILURE)
juin 10 12:04:22 server-inprotec systemd[1]: Starting PostgreSQL Cluster 9.5-main...
juin 10 12:04:22 server-inprotec postgresql@9.5-main[2951]: Error: Config owner (inprotec:1000) and data owner (postgres:111) do not
juin 10 12:04:22 server-inprotec systemd[1]: postgresql@9.5-main.service: Control process exited, code=exited status=1
juin 10 12:04:22 server-inprotec systemd[1]: Failed to start PostgreSQL Cluster 9.5-main.
juin 10 12:04:22 server-inprotec systemd[1]: postgresql@9.5-main.service: Unit entered failed state.
juin 10 12:04:22 server-inprotec systemd[1]: postgresql@9.5-main.service: Failed with result 'exit-code'.
Thanks in advance.
You have to create the file /etc/odoo/odoo.conf with a proper configuration, and then start it again.
It should contain, among other things, the settings to connect to a PostgreSQL server. Setting that up is another question, of course.
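A minimal /etc/odoo/odoo.conf along those lines might look like this sketch (the values are placeholders, and the database settings have to match your PostgreSQL setup, which according to the status output above also needs fixing first):

[options]
admin_passwd = <master password>
db_host = 127.0.0.1
db_port = 5432
db_user = odoo
db_password = <database password>
addons_path = /opt/odoo/addons
logfile = /var/log/odoo/odoo-server.log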