I am trying to run one Python file from another in a thread, using subprocess. I am able to start the file, but not able to stop it.
What I want is
I want to run test.py from my main.py in a thread and stop it by entering stop in the console.
test.py
import time

while True:
    print("Hello from test.py")
    time.sleep(5)
main.py
import subprocess
import _thread

processes = []

def add_process(file_name):
    p = subprocess.Popen(["python", file_name], shell=True)
    processes.append(p)
    print("Waiting the process...")
    p.wait()

while True:
    try:
        # user input
        ui = input("> ")
        if ui == "start":
            _thread.start_new_thread(add_process, ("test.py",))
            print("Process started.")
        elif ui == "stop":
            if len(processes) > 0:
                processes[-1].kill()
                print("Process killed.")
            else:
                pass
    except KeyboardInterrupt:
        print("Exiting Program!")
        break
Output
C:\users\me>python main2.py
> start
Process started.
> Waiting the process...
Hello from test.py, after 0 seconds
Hello from test.py, after 4 seconds
> stop
Process killed.
> Hello from test.py, after 8 seconds
Hello from test.py, after 12 seconds
> stopHello from test.py, after 16 seconds
Process killed.
> Hello from test.py, after 20 seconds
>
The program is still running even after I stop it with the kill function; I have tried terminate as well. How can I stop it? Or is there an alternative to the subprocess module that can do this?
I suspect you have just started multiple processes. What happens if you replace your stop code with this:
for proc in processes:
    proc.kill()
This should kill them all.
Note that you are on Windows, and on Windows terminate() is an alias for kill(). It is not a good idea to rely on killing processes gracelessly anyway; if you need to stop your test.py, you would be better off having some means of communicating with it and having it exit gracefully.
Incidentally, you don't want shell=True, and you're better off with threading than with _thread.
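For example, a minimal reworking of main.py along those lines might look like this (a sketch, not a drop-in replacement; it still assumes a python executable is on the PATH):
import subprocess
import threading

processes = []

def add_process(file_name):
    # No shell=True: Popen now returns a handle on the python process itself,
    # so kill()/terminate() act on it instead of on an intermediate shell.
    p = subprocess.Popen(["python", file_name])
    processes.append(p)
    print("Waiting for the process...")
    p.wait()

while True:
    try:
        ui = input("> ")
        if ui == "start":
            threading.Thread(target=add_process, args=("test.py",), daemon=True).start()
            print("Process started.")
        elif ui == "stop":
            # Kill every process we started, not just the most recent one.
            while processes:
                processes.pop().kill()
            print("Processes killed.")
    except KeyboardInterrupt:
        print("Exiting Program!")
        break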
I'm trying to use supervisord events to listen to PROCESS_STATE_STOPPED events from processes managed by Supervisor.
My event listener (listener.py), looks like this:
import sys
from supervisor.childutils import listener

def write_stdout(s):
    sys.stdout.write(s)
    sys.stdout.flush()

def write_stderr(s):
    sys.stderr.write(s)
    sys.stderr.flush()

def main():
    while True:
        headers, body = listener.wait(sys.stdin, sys.stdout)
        body = dict([pair.split(":") for pair in body.split(" ")])
        write_stderr("Headers: %r\n" % repr(headers))
        write_stderr("Body: %r\n" % repr(body))
        listener.ok(sys.stdout)
        if headers["eventname"] == "PROCESS_STATE_STOPPED":
            write_stderr("Process state stopped...\n")

if __name__ == '__main__':
    main()
My corresponding entry in supervisord.conf is as follows:
[program:theprogramname]
command=/bin/cat ; the program (relative uses PATH, can take args)
process_name=%(program_name)s ; process_name expr (default %(program_name)s)
numprocs=1 ; number of processes copies to start (def 1)
...
[eventlistener:theeventlistenername]
command=python /home/mickm/listener.py ; the program (relative uses PATH, can take args)
process_name=%(program_name)s_%(process_num)s ; process_name expr (default %(program_name)s)
numprocs=1 ; number of processes copies to start (def 1)
events=PROCESS_STATE ; event notif. types to subscribe to (req'd)
autorestart=true
redirect_stderr=true
I've researched the other StackOverflow questions in this area and I've attempted to implement the accepted solutions from:
How to subscribe to PROCESS_STATE_RUNNING events for all processes
Supervisor event subscription is hanging on READY state
However, when I run listener.py, it outputs READY to STDOUT and goes no further. I've tried stopping/starting and restarting supervisor-monitored processes, but my script does not pick up anything.
To test this, I'm:
running my event listener script in terminal A;
performing a kill -9 <pid> on the /bin/cat process in terminal B.
My supervisor log is as follows:
2017-01-13 14:56:36,168 INFO success: theeventlistenername_0 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-01-13 14:56:36,168 INFO success: theprogramname entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-01-13 14:57:29,457 INFO exited: theprogramname (terminated by SIGKILL; not expected)
2017-01-13 14:57:30,460 INFO spawned: 'theprogramname' with pid 25788
2017-01-13 14:57:31,462 INFO success: theprogramname entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
While the supervisor log shows that my process has been terminated:
i) Nothing is detected by listener.py, which remains at the 'READY' prompt.
ii) supervisor restarts the process (as expected).
I'm wondering why the listener is not picking up the PROCESS_STATE_STOPPED event for the program defined in the config file (as per the Supervisor event docs). Also, whether this can be applied generally to all supervisor-managed processes.
Thanks.
Configuration variable redirect_stderr:
http://supervisord.org/configuration.html
Do not set redirect_stderr=true in an [eventlistener:x] section. Eventlisteners use stdout and stdin to communicate with supervisord. If stderr is redirected, output from stderr will interfere with the eventlistener protocol.
This is my working script:
import sys
from supervisor.childutils import listener

def write_stdout(s):
    sys.stdout.write(s)
    sys.stdout.flush()

def write_stderr(s):
    sys.stderr.write(s)
    sys.stderr.flush()

def main():
    while True:
        headers, body = listener.wait(sys.stdin, sys.stdout)
        body = dict([pair.split(":") for pair in body.split(" ")])
        write_stderr("Headers: %r\n" % repr(headers))
        write_stderr("Body: %r\n" % repr(body))
        if headers["eventname"] == "PROCESS_STATE_STOPPING":
            write_stderr("Process state stopping...\n")
        # acknowledge the event
        write_stdout("RESULT 2\nOK")

if __name__ == '__main__':
    main()
But I believe you cannot test it by simply running your script manually. Your supervisord .conf entry sets up things (such as the events subscription) that are only in place when supervisord itself launches the listener, so the protocol handshake doesn't happen when you run the script by hand.
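Once the listener is running under supervisord, you can trigger the events it subscribes to from another shell, for example:
supervisorctl restart theprogramname
supervisorctl stop theprogramname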
If you want to test it then add some logging to your listener:
[eventlistener:theeventlistenername]
command=python /home/mickm/listener.py ; the program (relative uses PATH, can take args)
process_name=%(program_name)s_%(process_num)s ; process_name expr (default %(program_name)s)
numprocs=1 ; number of processes copies to start (def 1)
events=PROCESS_STATE ; event notif. types to subscribe to (req'd)
autorestart=true
stderr_logfile=errorlogfile
stdout_logfile=applogfile
You can also inspect the headers variable to debug the data you are receiving.
With logging enabled, you can test it with tail -f on the stderr_logfile:
Headers: "{'ver': '3.0', 'poolserial': '4', 'len': '71', 'server': 'supervisor', 'eventname': 'PROCESS_STATE_RUNNING', 'serial': '4', 'pool': 'mylistener'}"
Body: "{'from_state': 'STARTING', 'processname': 'someprocess', 'pid': '345', 'groupname': 'someprocess'}"
Process state running...
And when I kill -9 the process, the above output appears in the log.
The stdout_logfile will log this kind of information (each RESULT 2\nOK acknowledgment is immediately followed by the READY handshake for the next event, hence the fused OKREADY lines):
RESULT 2
OKREADY
RESULT 2
OKREADY
RESULT 2
OKREADY
RESULT 2
OKREADY
RESULT 2
OKREADY
I'm running a daemon on a remote machine: mydaemon. This daemon should be persistently running at all times.
When I'm running a job on my remote machine, it also launches a lightweight python server process, my_remote_server.py.
One of the commands I can send to my_remote_server.py is to restart mydaemon, which I'm trying to do like this:
os.system("killall mydaemon")
subprocess.Popen(["mydaemon"], stdin=None, stdout=None, stderr=None, close_fds=True)
When my job ends, my_remote_server.py is supposed to terminate, but mydaemon should keep running. However, I see my_remote_server.py stuck as a zombie process (this is causing the system to not see my job as terminated):
820 root Z [my_remote_serve]
834 root 552 S /usr/sbin/telnetd -l /bin/sh
835 root 836 S /bin/sh
844 root 672 S mydaemon
I want to detach Parent (my_remote_server.py) from the child (mydaemon), but I can't figure out how.
--
My python version is 2.5.4
edit:
I think I understand daemonization a bit better now, but I'm still having some trouble getting the daemon to separate.
I'm leaving out the error handling here for brevity:
os.system("killall mydaemon")
if os.fork() > 0:
    return True  # my_remote_server.py returns to handle additional commands
os.setsid()
if os.fork() > 0:
    exit(0)  # first child exits after becoming session leader
os.execlp("mydaemon")  # have the second child run as the daemon
This is my ps list before I call the restart_mydaemon function
252 root 672 S mydaemon
286 root 4552 S /usr/bin/python my_remote_server.py
This is after restart_mydaemon; the first child is zombied (shouldn't it be gone?)
286 root 4552 S /usr/bin/python my_remote_server.py
300 root Z [my_remote_serve]
304 root 672 S [mydaemon]
This is when the job terminates (my_remote_server.py should have exited, but it's a zombie; the first child, however, has exited by this point)
286 root Z [my_remote_serve]
304 root 1012 S [mydaemon]
Turning a process into a daemon is a multi-step process. Part of your problem is that your parent process is not waiting for child termination correctly (read about wait/waitpid).
Your process must close any open file descriptors (stdin, stdout, stderr)
Your process needs to (change directory, set umask, whatever you want/need)
You must fork a process
That process must become a process group leader
You must fork again, to fully detach from the (grand)parent
try:
    # free parent, detach from process group
    pid = os.fork()
    if pid > 0:
        exit(0)  # parent exits
except OSError, e:
    raise Exception("%s [%d]" % (e.strerror, e.errno))

# become session leader, process group leader, detach from controlling terminal
os.setsid()

try:
    # prevent zombie process, make init cleanup
    pid = os.fork()
    if pid > 0:
        exit(0)  # parent exits
except OSError, e:
    raise Exception("%s [%d]" % (e.strerror, e.errno))

# change directories, close stdin/out/err, etc
os.chdir(MYDIR)
os.umask(MYMASK)
# close files (including stdin, stdout, stderr)
# (re)open new stdin/stdout, if desired
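Applied to the question's restart handler, a minimal sketch that also reaps the intermediate child might look like this (an illustration, assuming mydaemon is on the PATH):
import os

def restart_mydaemon():
    os.system("killall mydaemon")
    pid = os.fork()
    if pid > 0:
        os.waitpid(pid, 0)  # reap the intermediate child so it never lingers as a zombie
        return True         # parent returns to handle further commands
    os.setsid()             # become session leader, detach from the controlling terminal
    if os.fork() > 0:
        os._exit(0)         # intermediate child exits; init adopts the grandchild
    os.execlp("mydaemon", "mydaemon")  # execlp also needs argv[0], unlike the question's call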
Take a look here for reference/good explanation of how daemonization works:
http://code.activestate.com/recipes/278731/
Overall this is pretty non-trivial; I would try using the python-daemon library or a similar library:
https://pypi.python.org/pypi/python-daemon/
I wouldn't use subprocess for this. Subprocess, in my opinion, should only be used to run a command that you can't import. So in this case your my_remote_server.py could import mydaemon and run it.
You have two ways of doing this. You could make my_remote_server the parent and mydaemon the child, or have a new script that is the parent and spawns two children (mydaemon, my_remote_server).
For example, below I create a child process at global scope, since I need to access it in the HTTP handler of tornado. I set up the web server, start the child process, and then start the web server. When the endpoint is hit, the handler runs, the child process is pulled from the global scope, and we terminate it and print. You could add more endpoints to tornado to start the child again, or to stop and then restart it.
remote.py:
import multiprocessing
import deamon
import tornado.ioloop
import tornado.web

child_pro = multiprocessing.Process(target=deamon.run_deamon)
child_pro.daemon = True

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        global child_pro
        child_pro = child_pro  # type: multiprocessing.Process
        child_pro.terminate()
        print("Child killed")

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(8888)
    child_pro.start()
    tornado.ioloop.IOLoop.current().start()
deamon.py:
import time

def run_deamon():
    while True:
        time.sleep(2)
        print('child alive')
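With both files saved side by side, starting remote.py and then hitting the endpoint terminates the child (the port comes from the snippet above):
curl http://localhost:8888/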
I want a script to start a new process, such that the new process continues running after the initial script exits. I expected that I could use multiprocessing.Process to start a new process, and set daemon=True so that the main script may exit while the created process continues running.
But it seems that the second process is silently terminated when the main script exits. Is this expected behavior, or am I doing something wrong?
From the Python docs:
When a process exits, it attempts to terminate all of its daemonic child processes.
This is the expected behavior.
If you are on a unix system, you could use os.fork:
import os
import time

pid = os.fork()
if pid:
    # parent
    while True:
        print("I'm the parent")
        time.sleep(0.5)
else:
    # child
    while True:
        print("I'm just a child")
        time.sleep(0.5)
Running this creates two processes. You can kill the parent without killing the child.
For example, when you run script you'll see something like:
% script.py
I'm the parent
I'm just a child
I'm the parent
I'm just a child
...
Stop the script with ctrl-Z:
^Z
[1]+ Stopped script.py
Find the process ID number for the parent. It will be the smaller of the two process ID numbers since the parent came first:
% ps axuw | grep script.py
unutbu 6826 0.1 0.1 33792 6388 pts/24 T 15:09 0:00 python /home/unutbu/pybin/script.py
unutbu 6827 0.0 0.1 33792 4352 pts/24 T 15:09 0:00 python /home/unutbu/pybin/script.py
unutbu 6832 0.0 0.0 17472 952 pts/24 S+ 15:09 0:00 grep --color=auto script.py
Kill the parent process:
% kill 6826
Restore script.py to the foreground:
% fg
script.py
Terminated
You'll see the child process is still running:
% I'm just a child
I'm just a child
I'm just a child
...
Kill the child (in a new terminal) with
% kill 6827
Simply use the subprocess module:
import subprocess
subprocess.Popen(["sleep", "60"])
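If the child should also be detached from the parent's session, so that closing the parent's terminal cannot take it down (this is an addition to the answer above, available on Python 3.2+), one can pass start_new_session:
import subprocess

# start_new_session=True calls setsid() in the child, putting it in its own
# session so it does not receive the terminal's SIGHUP when the parent exits
subprocess.Popen(["sleep", "60"], start_new_session=True)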
Here is a related question on SO, where one of the answers gives a nice solution to this problem:
"spawning process from python"
If you are on a unix system (per the docs):
#!/usr/bin/env python3
import os
import sys
import subprocess
import multiprocessing

def to_use_in_separate_process(*args):
    print(args)
    # check args before using them:
    if len(args) > 1:
        subprocess.call((args[0], args[1]))
    print('subprocess called')

def main(apathtofile):
    print('checking os')
    if os.name == 'posix':
        print('os is posix')
        # use the fork context and start() (not run(), which would execute
        # the target in the current process) so a real child process is spawned
        ctx = multiprocessing.get_context('fork')
        p = ctx.Process(target=to_use_in_separate_process, args=('xdg-open', apathtofile))
        p.start()
    print('exiting def main')

if __name__ == '__main__':
    # parameter [1] must be some file that can be opened by xdg-open that this
    # program uses.
    if len(sys.argv) > 1:
        main(sys.argv[1])
        print('we can exit now.')
    else:
        print('no parameters...')
    print('mother program will end now!')
    sys.exit(0)
In Ubuntu the following commands keep working even after the python app exits (the trailing & backgrounds the command, so os.system returns immediately).
url = "https://www.youtube.com/watch?v=t3kcqTE6x4A"
cmd = f"mpv '{url}' && zenity --info --text 'you have watched {url}' &"
os.system(cmd)
I have written a Python script that checks a certain e-mail address and passes new e-mails to an external program. How can I get this script to execute 24/7, such as by turning it into a daemon or service in Linux? Would I also need a never-ending loop in the program, or can it be done by just having the code re-executed multiple times?
You have two options here.
Make a proper cron job that calls your script. Cron is a common name for a GNU/Linux daemon that periodically launches scripts according to a schedule you set. You add your script into a crontab, or place a symlink to it into a special directory, and the daemon handles the job of launching it in the background. You can read more at Wikipedia. There is a variety of different cron daemons, but your GNU/Linux system should have one already installed.
Use some kind of python approach (a library, for example) for your script to be able to daemonize itself. Yes, it will require a simple event loop (where your events are timer triggers, possibly provided by a sleep function).
I wouldn't recommend choosing option 2, because you would in fact be duplicating cron functionality. The Linux system paradigm is to let multiple simple tools interact to solve your problems. Unless there are additional reasons why you need a daemon (beyond triggering periodically), choose the other approach.
Also, if you use daemonize with a loop and a crash happens, no one will check the mail after that (as pointed out by Ivan Nevostruev in comments to this answer). Whereas if the script is added as a cron job, it will just trigger again.
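For example, a crontab entry (installed with crontab -e) that runs a hypothetical checking script every five minutes could look like:
*/5 * * * * /usr/bin/python /home/me/check_mail.py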
Here's a nice class that is taken from here:
#!/usr/bin/env python

import sys, os, time, atexit
from signal import SIGTERM

class Daemon:
    """
    A generic daemon class.

    Usage: subclass the Daemon class and override the run() method
    """
    def __init__(self, pidfile, stdin='/dev/null', stdout='/dev/null', stderr='/dev/null'):
        self.stdin = stdin
        self.stdout = stdout
        self.stderr = stderr
        self.pidfile = pidfile

    def daemonize(self):
        """
        do the UNIX double-fork magic, see Stevens' "Advanced
        Programming in the UNIX Environment" for details (ISBN 0201563177)
        http://www.erlenstar.demon.co.uk/unix/faq_2.html#SEC16
        """
        try:
            pid = os.fork()
            if pid > 0:
                # exit first parent
                sys.exit(0)
        except OSError, e:
            sys.stderr.write("fork #1 failed: %d (%s)\n" % (e.errno, e.strerror))
            sys.exit(1)

        # decouple from parent environment
        os.chdir("/")
        os.setsid()
        os.umask(0)

        # do second fork
        try:
            pid = os.fork()
            if pid > 0:
                # exit from second parent
                sys.exit(0)
        except OSError, e:
            sys.stderr.write("fork #2 failed: %d (%s)\n" % (e.errno, e.strerror))
            sys.exit(1)

        # redirect standard file descriptors
        sys.stdout.flush()
        sys.stderr.flush()
        si = file(self.stdin, 'r')
        so = file(self.stdout, 'a+')
        se = file(self.stderr, 'a+', 0)
        os.dup2(si.fileno(), sys.stdin.fileno())
        os.dup2(so.fileno(), sys.stdout.fileno())
        os.dup2(se.fileno(), sys.stderr.fileno())

        # write pidfile
        atexit.register(self.delpid)
        pid = str(os.getpid())
        file(self.pidfile, 'w+').write("%s\n" % pid)

    def delpid(self):
        os.remove(self.pidfile)

    def start(self):
        """
        Start the daemon
        """
        # Check for a pidfile to see if the daemon already runs
        try:
            pf = file(self.pidfile, 'r')
            pid = int(pf.read().strip())
            pf.close()
        except IOError:
            pid = None

        if pid:
            message = "pidfile %s already exist. Daemon already running?\n"
            sys.stderr.write(message % self.pidfile)
            sys.exit(1)

        # Start the daemon
        self.daemonize()
        self.run()

    def stop(self):
        """
        Stop the daemon
        """
        # Get the pid from the pidfile
        try:
            pf = file(self.pidfile, 'r')
            pid = int(pf.read().strip())
            pf.close()
        except IOError:
            pid = None

        if not pid:
            message = "pidfile %s does not exist. Daemon not running?\n"
            sys.stderr.write(message % self.pidfile)
            return  # not an error in a restart

        # Try killing the daemon process
        try:
            while 1:
                os.kill(pid, SIGTERM)
                time.sleep(0.1)
        except OSError, err:
            err = str(err)
            if err.find("No such process") > 0:
                if os.path.exists(self.pidfile):
                    os.remove(self.pidfile)
            else:
                print str(err)
                sys.exit(1)

    def restart(self):
        """
        Restart the daemon
        """
        self.stop()
        self.start()

    def run(self):
        """
        You should override this method when you subclass Daemon. It will be called after the process has been
        daemonized by start() or restart().
        """
You should use the python-daemon library; it takes care of everything.
From PyPI: Library to implement a well-behaved Unix daemon process.
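A minimal sketch using its DaemonContext manager (main_loop is a hypothetical stand-in for your polling code):
import time
import daemon

def main_loop():
    while True:
        # check the mailbox here
        time.sleep(600)

with daemon.DaemonContext():
    main_loop()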
Assuming that you really want your loop to run 24/7 as a background service: for a solution that doesn't involve injecting libraries into your code, you can simply create a service template, since you are using Linux:
[Unit]
Description = <Your service description here>
After = network.target # Assuming you want to start after network interfaces are made available
[Service]
Type = simple
ExecStart = python <Path of the script you want to run>
User = # User to run the script as
Group = # Group to run the script as
Restart = on-failure # Restart when there are errors
SyslogIdentifier = <Name of logs for the service>
RestartSec = 5
TimeoutStartSec = infinity
[Install]
WantedBy = multi-user.target # Make it accessible to other users
Place that file in your daemon service folder (usually /etc/systemd/system/), in a *.service file, and install it using the following systemctl commands (will likely require sudo privileges):
systemctl enable <service file name without .service extension>
systemctl daemon-reload
systemctl start <service file name without .service extension>
You can then check that your service is running by using the command:
systemctl | grep running
You can use fork() to detach your script from the tty and have it continue to run, like so:
import os, sys

fpid = os.fork()
if fpid != 0:
    # Running as daemon now. PID is fpid
    sys.exit(0)
Of course you also need to implement an endless loop, like:
import time

while 1:
    do_your_check()
    time.sleep(5)
Hope this gets you started.
You can also make the python script run as a service using a shell script. First create a shell script to run the python script, like this (scriptname is an arbitrary name):
#!/bin/sh
script='/home/.. full path to script'
/usr/bin/python $script &
Now make a file at /etc/init.d/scriptname:
#! /bin/sh
PATH=/bin:/usr/bin:/sbin:/usr/sbin
DAEMON=/home/.. path to shell script scriptname created to run python script
PIDFILE=/var/run/scriptname.pid

test -x $DAEMON || exit 0

. /lib/lsb/init-functions

case "$1" in
  start)
    log_daemon_msg "Starting feedparser"
    start_daemon -p $PIDFILE $DAEMON
    log_end_msg $?
    ;;
  stop)
    log_daemon_msg "Stopping feedparser"
    killproc -p $PIDFILE $DAEMON
    PID=`ps x | grep feed | head -1 | awk '{print $1}'`
    kill -9 $PID
    log_end_msg $?
    ;;
  force-reload|restart)
    $0 stop
    $0 start
    ;;
  status)
    status_of_proc -p $PIDFILE $DAEMON atd && exit 0 || exit $?
    ;;
  *)
    echo "Usage: /etc/init.d/atd {start|stop|restart|force-reload|status}"
    exit 1
    ;;
esac

exit 0
Now you can start and stop your python script using the command /etc/init.d/scriptname start or stop.
A simple and supported version is Daemonize.
Install it from Python Package Index (PyPI):
$ pip install daemonize
and then use it like:
...
import os, sys
from daemonize import Daemonize
...

def main():
    # your code here
    pass

if __name__ == '__main__':
    myname = os.path.basename(sys.argv[0])
    pidfile = '/tmp/%s' % myname  # any name
    daemon = Daemonize(app=myname, pid=pidfile, action=main)
    daemon.start()
Ubuntu has a very simple way to manage a service.
For python, the difference is that ALL the dependencies (packages) have to be in the same directory that the main file is run from.
I just managed to create such a service to provide weather info to my clients.
Steps:
Create your python application project as you normally do.
Install all dependencies locally like:
sudo pip3 install package_name -t .
Create your command line variables and handle them in code (if you need any)
Create the service file. Something (minimalist) like:
[Unit]
Description=1Droid Weather meddleware provider
[Service]
Restart=always
User=root
WorkingDirectory=/home/ubuntu/weather
ExecStart=/usr/bin/python3 /home/ubuntu/weather/main.py httpport=9570 provider=OWMap
[Install]
WantedBy=multi-user.target
Save the file as myweather.service (for example)
Make sure that your app runs if started in the current directory
python3 main.py httpport=9570 provider=OWMap
The service file produced above, named myweather.service (it is important to have the .service extension), will be treated by the system as the name of your service. That is the name you will use to interact with your service.
Copy the service file:
sudo cp myweather.service /lib/systemd/system/myweather.service
Reload the systemd daemon registry:
sudo systemctl daemon-reload
Stop the service (if it was running)
sudo service myweather stop
Start the service:
sudo service myweather start
Check the status (log file with where your print statements go):
tail -f /var/log/syslog
Or check the status with:
sudo service myweather status
Go back to the start for another iteration if needed.
This service is now running, and even if you log out it will not be affected.
And YES, if the host is shut down and restarted, this service will be restarted...
cron is clearly a great choice for many purposes. However, it doesn't create a service or daemon as you requested in the OP. cron just runs jobs periodically (meaning the job starts and stops), and no more often than once per minute. There are issues with cron -- for example, if a prior instance of your script is still running the next time the cron schedule comes around and launches a new instance, is that OK? cron doesn't handle dependencies; it just tries to start a job when the schedule says to.
If you find a situation where you truly need a daemon (a process that never stops running), take a look at supervisord. It provides a simple way to wrap a normal, non-daemonized script or program and make it operate like a daemon. This is a much better way than creating a native Python daemon.
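A minimal [program:x] section for such a script could look like this (paths are hypothetical):
[program:mailchecker]
command=/usr/bin/python /home/me/check_mail.py
autostart=true
autorestart=true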
How about using the nohup command on Linux?
I use it for running my commands on my Bluehost server.
Please advise if I am wrong.
If you are using a terminal (ssh or something) and you want to keep a long-running script working after you log out, you can try screen:
apt-get install screen
Create a virtual terminal inside (named abc): screen -dmS abc
Connect to abc: screen -r abc
Now you can run your python script: python keep_sending_mails.py
From now on, you can close your terminal directly and the python script will keep running rather than being shut down, because keep_sending_mails.py is a child process of the virtual screen rather than of the terminal (ssh).
If you want to go back and check your script's running status, use screen -r abc again.
First, read up on mail aliases. A mail alias will do this inside the mail system without you having to fool around with daemons or services or anything of the sort.
You can write a simple script that will be executed by sendmail each time a mail message is sent to a specific mailbox.
See http://www.feep.net/sendmail/tutorial/intro/aliases.html
If you really want to write a needlessly complex server, you can do this.
nohup python myscript.py &
That's all it takes. Your script simply loops and sleeps.
import time

def do_the_work():
    # one round of polling -- checking email, whatever.
    pass

while True:
    time.sleep(600)  # 10 min.
    try:
        do_the_work()
    except:
        pass
I would recommend this solution. You need to inherit from it and override the run method.
import sys
import os
from signal import SIGTERM
from abc import ABCMeta, abstractmethod

class Daemon(object):
    __metaclass__ = ABCMeta

    def __init__(self, pidfile):
        self._pidfile = pidfile

    @abstractmethod
    def run(self):
        pass

    def _daemonize(self):
        # decouple threads
        pid = os.fork()
        # stop first thread
        if pid > 0:
            sys.exit(0)
        # write pid into a pidfile
        with open(self._pidfile, 'w') as f:
            print >> f, os.getpid()

    def start(self):
        # if daemon is already started throw an error
        if os.path.exists(self._pidfile):
            raise Exception("Daemon is already started")
        # create and switch to daemon thread
        self._daemonize()
        # run the body of the daemon
        self.run()

    def stop(self):
        # check the pidfile exists
        if os.path.exists(self._pidfile):
            # read pid from the file
            with open(self._pidfile, 'r') as f:
                pid = int(f.read().strip())
            # remove the pidfile
            os.remove(self._pidfile)
            # kill daemon
            os.kill(pid, SIGTERM)
        else:
            raise Exception("Daemon is not started")

    def restart(self):
        self.stop()
        self.start()
To create something that runs like a service, you can use the Cement framework.
Cement is a CLI framework that you can deploy your application on. The first thing you must do is install it:
$ pip install cement
Command line interface of the app:
interface.py
from cement.core.foundation import CementApp
from cement.core.controller import CementBaseController, expose
from YourApp import yourApp

class MyBaseController(CementBaseController):
    class Meta:
        label = 'base'
        description = "your application description"
        arguments = [
            (['-r', '--run'],
             dict(action='store_true', help='Run your application')),
            (['-s', '--stop'],
             dict(action='store_true', help="Stop your application")),
            (['-v', '--version'],
             dict(action='version', version="Your app version")),
        ]

    @expose(hide=True)
    def default(self):
        if self.app.pargs.run:
            # Start running your app from here!
            yourApp()
        if self.app.pargs.stop:
            # Stop your application
            yourApp.stop()

class App(CementApp):
    class Meta:
        label = 'Uptime'
        base_controller = 'base'
        handlers = [MyBaseController]

with App() as app:
    app.run()
YourApp.py:
import threading

class yourApp:
    def __init__(self):
        self.loger = log_exception.exception_loger()  # assumes a local log_exception module
        thread = threading.Thread(target=self.start, args=())
        thread.daemon = True
        thread.start()

    def start(self):
        # Do everything you want here
        pass

    def stop(self):
        # Do some things to stop your application
        pass

Keep in mind that your app must run on a thread to be daemonized.
To run the app, just do this on the command line:
python interface.py --help
Use whatever service manager your system offers - for example under Ubuntu use upstart. This will handle all the details for you such as start on boot, restart on crash, etc.
You can run a process as a subprocess inside a script, or from another script, like this:
import subprocess

subprocess.Popen(arguments, close_fds=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)
Or use a ready-made utility
https://github.com/megashchik/d-handler