Python daemon exits prematurely. So does everything else

My understanding of Python's daemon module is that I can have a script that does stuff, spawns a daemon, and continues to do stuff. When the script finishes the daemon should hang around. Yes?
But that's not happening...
I have a python script that uses curses to manage a whole bunch of scripts and functions. It works wonderfully except when I use the script to spawn a daemon.
Now a daemon in this application is represented by a class. For example:
class TestDaemon(DaemonBase):
    def __init__(self, stuff):
        logger.debug("TestDaemon.__init__()")

    def run(self):
        logger.debug("TestDaemon entering loop")
        while True:
            pass

    def cleanup(self):
        super(TestDaemon, self).cleanup()
        logger.debug("TestDaemon.cleanup()")

    def load_config(self):
        super(TestDaemon, self).load_config()
        logger.debug("TestDaemon.load_config()")
And the daemon is launched with a function like:
def launch(*args, **kwargs):
    import daemon
    import lockfile
    import signal
    import os

    oDaemon = TestDaemon(stuff)

    context = daemon.DaemonContext(
        working_directory=os.path.join(os.getcwd(), sAppName),
        umask=0o077,  # chmod mode = 777 minus umask. Only current user has access
        pidfile=lockfile.FileLock('/home/sheena/.daemons/{0}__{1}.pid'.format(sAppName, sProcessName)),
    )

    context.signal_map = {
        signal.SIGTERM: oDaemon.cleanup,      # cleanup
        signal.SIGHUP: 'terminate',
        signal.SIGUSR1: oDaemon.load_config,  # reload config
    }

    logger.debug("launching daemon")
    with context:
        oDaemon.run()
    logger.debug("daemon launched")
logger.debug("daemon launched")
The program gets as far as logging "launching daemon".
After this point, everything exits and the daemon doesn't run.
Any ideas why this would happen?
There is no evidence of exceptions - exceptions are set to be logged but there are none.
Stuff I've tried:
If I put oDaemon.run() in a try block, it fails in exactly the same way.
I assumed maybe the context was set up wrong, so I replaced with context with a bare with daemon.DaemonContext(). Same problem.
I replaced:
with context:
    oDaemon.run()
with
def run():
    while True:
        pass

with context:
    run()
and the main program still exited prematurely, but at least it spawned a daemon, so I assume it doesn't like the way I put stuff in a class...

We don't know anything about this DaemonBase class, but this:
with context:
    oDaemon.run()
is a blocking call, because you have an infinite loop in run(). That is why your program cannot continue any further.
Where is the code for starting the actual daemon process?
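For what it's worth, here is a hedged sketch (not the poster's DaemonBase code) of one way to get the behaviour the question expects: fork first and let only the child enter the DaemonContext, so the blocking run() loop never ties up the launching script. launch() and logger are the names from the question; everything else here is assumed.

import os

def launch_in_background():
    pid = os.fork()
    if pid == 0:
        # child: enters the DaemonContext via launch() and blocks in oDaemon.run()
        launch()
        os._exit(0)  # defensive; not reached while run() loops
    # parent: returns immediately and can carry on with the curses UI
    logger.debug("daemon launch requested (child pid %s)", pid)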


How to kill a single process in Python at any point of its execution?

I'm fairly new to Python and have created a rudimentary application that runs a series of what are functionally macros, for the user to automate some tedious processes with an existing application. Some of these processes take a while, and others I would like to run indefinitely until the user hits a key to stop them. It is a small program and I was in a hurry, so the easiest solution was throwing a stop field into my classes and putting "if stop -> return/break" in numerous spots throughout my methods. The code below should demonstrate the general idea.
class ExampleClass:
    stop = False

    def stop_doing_stuff(self):
        self.stop = True

    def do_stuff(self):
        if self.stop:
            return
        else:
            for i in range(10000):
                if self.stop:
                    return
                else:
                    do_thing()
This strikes me as an amateur solution, and I would like to know how it could be done better. I'm sure there is a way to accomplish this with threading, but I've only briefly worked with threads in the past, so I was not sure how at the time. I was most curious, though, whether there is an even easier solution that does not involve launching a thread, since I have no need for multiple processes running concurrently.
Edit:
I forgot to mention that this is a GUI application. The user presses buttons which perform the tasks for them; however, the GUI is hidden while a task executes.
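For reference, a sketch (not the poster's code) of the stop flag done with a threading.Event, which the question hints at; ExampleWorker and do_thing are illustrative names. The worker loop checks the event once per iteration, while the GUI button handler calls stop_doing_stuff() from the main thread and do_stuff() runs in a background threading.Thread.

import threading

def do_thing():
    pass  # hypothetical unit of macro work

class ExampleWorker(object):
    def __init__(self):
        self._stop = threading.Event()

    def stop_doing_stuff(self):
        self._stop.set()              # called from the GUI button handler

    def do_stuff(self):
        for i in range(10000):
            if self._stop.is_set():   # one check replaces the scattered "if stop" branches
                return
            do_thing()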
You may catch a KeyboardInterrupt:
try:
    for i in range(10000):
        do_things()
except KeyboardInterrupt:
    return
You can immediately terminate a script with a specified exit status using sys.exit(exit_status):
import sys

def foo():
    sys.exit(0)  # exit with success status (zero)
    sys.exit(1)  # exit with error status (non-zero)
If you want to terminate the script via keyboard you need only Ctrl + C / Cmd + C it. You can handle that interrupt gracefully using signal:
#!/usr/bin/env python
import signal
import sys

def signal_handler(sig, frame):
    print('You pressed Ctrl+C!')
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)
print('Press Ctrl+C to terminate')
# ..perform work here..

On Ctrl-D, call Close() as happens with file objects

I've written a class that inherits from object and has instances of sub-objects that use some threads for tasks. There are two socket listeners that create other threads for each accepted connection. They do what they have to do. To finish them, they watch a threading.Event object to know that they have to finish.
I've noticed that when I exit the Python console they are not notified (or don't catch the notification), and the exit doesn't return control to the bash console unless Close() is called first.
My first idea to fix this was to implement the __del__ method, so the garbage collector would clean things up on exit.
class ServiceProvider(object):
    def __init__(self):
        super(ServiceProvider, self).__init__()
        #...
        self.Open()

    def Open(self):
        pass  #... Some threads are created.

    def Close(self):
        pass  #.... threading.Event to tell the threads to finish

    def __del__(self):
        self.Close()
But the behaviour is the same. If I place a print in those methods, it is written neither in __del__ nor in Close, unless Close() is called beforehand; in that case the print in __del__ is written.
Then I implemented the __enter__ and __exit__ methods to manage the with statement, and the exit behaves as expected: when the with block ends, things are released. But what I really want is something like file descriptors, where even if file.close() is not called, it is executed when the program exits.
class ServiceProvider(object):
    #...
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.Close()
Searching for more solutions, I also tried atexit, with similar results: it doesn't fix the issue. Even if I collect all the objects created from this class, doOnExit only writes its print if the objects in the list are already closed.
import atexit

global objects2Close
objects2Close = []

@atexit.register
def doOnExit():
    for obj in objects2Close:
        obj.Close()

class ServiceProvider(object):
    def __init__(self):
        super(ServiceProvider, self).__init__()
        objects2Close.append(self)
It's usually a good idea to use with when you have resources that you don't want to leak (files, connections, whatever else you care about).
Somewhere, just outside your main loop you should have something like:
with ServiceProvider(some_params) as service_provider:
    rest_of_the_code()
What this does is that, regardless of how you exit rest_of_the_code() (except for kill -9), it will call service_provider.Close() at the end. This works for exceptions and interrupts as well. Kill -9 doesn't work because the process is killed at the OS level and doesn't get a chance to recover.
I've got a solution for this issue. The information posted in this question was not related to the real issue.
It is as simple as daemon threading.
As the implementation uses some threads to listen for remote connections, they have to finish their execution when the program exits. But the program only ends when all non-daemon threads have finished.
By mistake, those listeners and talkers were not set to be daemons, and that's why the exit waits for them.
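For reference, a minimal sketch of the fix described above; listen_for_connections is a hypothetical stand-in for one of the listener functions.

import threading

def listen_for_connections():
    pass  # hypothetical listener loop

t = threading.Thread(target=listen_for_connections)
t.daemon = True   # daemon threads do not keep the interpreter alive at exit
t.start()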

Programmatically exiting python script while multithreading

I have some code which runs routinely, and every now and then (like once a month) the program seems to hang somewhere and I'm not sure where.
I thought I would implement [what has turned out to be not quite] a "quick fix" of checking how long the program has been running for. I decided to use multithreading to call the function, and then while it is running, check the time.
For example:
import datetime
import threading

def myfunc():
    # Code goes here
    pass

t = threading.Thread(target=myfunc)
t.start()
d1 = datetime.datetime.utcnow()
while threading.active_count() > 1:
    if (datetime.datetime.utcnow() - d1).total_seconds() > 60:
        print 'Exiting!'
        raise SystemExit(0)
However, this does not close the other thread (myfunc).
What is the best way to go about killing the other thread?
The docs could be clearer about this. Raising SystemExit tells the interpreter to quit, but "normal" exit processing is still done. Part of normal exit processing is .join()-ing all active non-daemon threads. But your rogue thread never ends, so exit processing waits forever to join it.
As #roippi said, you can do
t.daemon = True
before starting it. Normal exit processing does not wait for daemon threads. Your OS should then kill them when the main process exits.
Another alternative:
import os
os._exit(13) # whatever exit code you want goes there
That stops the interpreter "immediately", and skips all normal exit processing.
Pick your poison ;-)
There is no way to kill a thread. You must kill the target from within the target. The best way is with a hook and a queue. It goes something like this.
import threading
import datetime
from Queue import Queue, Empty

# add a kill_hook arg to your function; kill_hook
# is a queue used to pass messages to the main thread
def myfunc(kill_hook=None, *args, **kwargs):
    # Code goes here

    # put this somewhere which is periodically checked;
    # an ideal place to check the hook is when logging
    try:
        if kill_hook.get_nowait():  # or use kill_hook.get(True, 5) to wait longer
            print 'Exiting!'
            raise SystemExit(0)
    except Empty:
        pass

q = Queue()  # the queue used to pass the kill call
t = threading.Thread(target=myfunc, args=(q,))
t.start()

d1 = datetime.datetime.utcnow()
while threading.active_count() > 1:
    if (datetime.datetime.utcnow() - d1).total_seconds() > 60:
        # if your kill criteria are met, put something in the queue
        q.put(1)
I originally found this answer somewhere online, possibly this. Hope this helps!
Another solution would be to use a separate instance of Python to monitor the other Python process and kill it at the system level with psutil.
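A sketch of that monitoring idea, assuming the watched script's PID is known (for example, written to a pid file by that script); kill_if_stuck is a made-up helper name.

import psutil

def kill_if_stuck(pid, timeout_seconds=60):
    proc = psutil.Process(pid)
    try:
        proc.wait(timeout=timeout_seconds)   # returns if the process exits in time
    except psutil.TimeoutExpired:
        proc.terminate()                     # SIGTERM; use proc.kill() for SIGKILL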
Wow, I like the daemon and stealth os._exit solutions too!

How to use multiprocessing with class instances in Python?

I am trying to create a class that can run a separate process to do some work that takes a long time, launch a bunch of these from a main module, and then wait for them all to finish. I want to launch the processes once and then keep feeding them things to do rather than creating and destroying processes. For example, maybe I have 10 servers running the dd command, then I want them all to scp a file, etc.
My ultimate goal is to create a class for each system that keeps track of the information for the system it is tied to, like IP address, logs, runtime, etc. But that class must be able to launch a system command and then return execution back to the caller while that system command runs, to follow up with the result of the system command later.
My attempt is failing because I cannot send an instance method of a class over the pipe to the subprocess via pickle. Those are not pickleable. I therefore tried to fix it in various ways but I can't figure it out. How can my code be patched to do this? What good is multiprocessing if you can't send over anything useful?
Is there any good documentation of multiprocessing being used with class instances? The only way I can get the multiprocessing module to work is on simple functions. Every attempt to use it within a class instance has failed. Maybe I should pass events instead? I don't understand how to do that yet.
import multiprocessing
import sys
import re

class ProcessWorker(multiprocessing.Process):
    """
    This class runs as a separate process to execute worker's commands in parallel
    Once launched, it remains running, monitoring the task queue, until "None" is sent
    """

    def __init__(self, task_q, result_q):
        multiprocessing.Process.__init__(self)
        self.task_q = task_q
        self.result_q = result_q
        return

    def run(self):
        """
        Overloaded function provided by multiprocessing.Process. Called upon start() signal
        """
        proc_name = self.name
        print '%s: Launched' % (proc_name)
        while True:
            next_task_list = self.task_q.get()
            if next_task_list is None:
                # Poison pill means shutdown
                print '%s: Exiting' % (proc_name)
                self.task_q.task_done()
                break
            next_task = next_task_list[0]
            print '%s: %s' % (proc_name, next_task)
            args = next_task_list[1]
            kwargs = next_task_list[2]
            answer = next_task(*args, **kwargs)
            self.task_q.task_done()
            self.result_q.put(answer)
        return
# End of ProcessWorker class

class Worker(object):
    """
    Launches a child process to run commands from derived classes in separate processes,
    which sit and listen for something to do
    This base class is called by each derived worker
    """
    def __init__(self, config, index=None):
        self.config = config
        self.index = index

        # Launch the ProcessWorker for anything that has an index value
        if self.index is not None:
            self.task_q = multiprocessing.JoinableQueue()
            self.result_q = multiprocessing.Queue()

            self.process_worker = ProcessWorker(self.task_q, self.result_q)
            self.process_worker.start()
            print "Got here"
            # Process should be running and listening for functions to execute
        return

    def enqueue_process(target):  # No self, since it is a decorator
        """
        Used to place a command target from this class object into the task_q
        NOTE: Any function decorated with this must use fetch_results() to get the
        target task's result value
        """
        def wrapper(self, *args, **kwargs):
            self.task_q.put([target, args, kwargs])  # FAIL: target is a class instance method and can't be pickled!
        return wrapper

    def fetch_results(self):
        """
        After all processes have been spawned by multiple modules, this command
        is called on each one to retrieve the results of the call.
        This blocks until the execution of the item in the queue is complete
        """
        self.task_q.join()          # Wait for it to finish
        return self.result_q.get()  # Return the result

    @enqueue_process
    def run_long_command(self, command):
        print "I am running %s as process %s" % (command, self.name)
        # In here, I will launch a subprocess to run a long-running system command
        # p = Popen(command), etc
        # p.wait(), etc
        return

    def close(self):
        self.task_q.put(None)
        self.task_q.join()

if __name__ == '__main__':
    config = ["some value", "something else"]
    index = 7
    workers = []
    for i in range(5):
        worker = Worker(config, index)
        worker.run_long_command("ls /")
        workers.append(worker)
    for worker in workers:
        worker.fetch_results()

    # Do more work... (this would actually be done in a distributor in another class)
    for worker in workers:
        worker.close()
Edit: I tried to move the ProcessWorker class and the creation of the multiprocessing queues outside of the Worker class, and then tried to manually pickle the worker instance. Even that doesn't work, and I get the error:
RuntimeError: Queue objects should only be shared between processes through inheritance
But I am only passing references to those queues into the worker instance?? I must be missing something fundamental. Here is the modified code from the main section:
if __name__ == '__main__':
    config = ["some value", "something else"]
    index = 7
    workers = []
    for i in range(1):
        task_q = multiprocessing.JoinableQueue()
        result_q = multiprocessing.Queue()

        process_worker = ProcessWorker(task_q, result_q)
        worker = Worker(config, index, process_worker, task_q, result_q)
        something_to_look_at = pickle.dumps(worker)  # FAIL: Doesn't like queues??
        process_worker.start()
        worker.run_long_command("ls /")
So, the problem was that I was assuming that Python was doing some sort of magic that is somehow different from the way that C++/fork() works. I somehow thought that Python only copied the class, not the whole program into a separate process. I seriously wasted days trying to get this to work because all of the talk about pickle serialization made me think that it actually sent everything over the pipe. I knew that certain things could not be sent over the pipe, but I thought my problem was that I was not packaging things up properly.
This all could have been avoided if the Python docs gave me a 10,000 ft view of what happens when this module is used. Sure, it tells me what the methods of the multiprocessing module do and gives me some basic examples, but what I want to know is the "Theory of Operation" behind the scenes! Here is the kind of information I could have used. Please chime in if my answer is off. It will help me learn.
When you start a process using this module, the whole program is copied into another process. But since it is not the "__main__" process and my code was checking for that, it doesn't fire off yet another process infinitely. It just stops and sits there waiting for something to do, like a zombie. Everything that was initialized in the parent at the time of calling multiprocessing.Process() is all set up and ready to go. Once you put something in the multiprocessing.Queue or shared memory, or pipe, etc. (however you are communicating), the separate process receives it and gets to work. It can draw upon all imported modules and setup just as if it were the parent. However, once some internal state variables change in the parent or the separate process, those changes are isolated. Once the process is spawned, it becomes your job to keep them in sync if necessary, either through a queue, pipe, shared memory, etc.
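A small illustration of that isolation (names made up, not from the question): the child gets a copy of the parent's state as it was at process creation, and later changes on either side are invisible to the other.

import multiprocessing

state = {'value': 'parent-initial'}

def child():
    print('child sees: %s' % state['value'])   # 'parent-initial'
    state['value'] = 'changed-in-child'        # invisible to the parent

if __name__ == '__main__':
    p = multiprocessing.Process(target=child)
    p.start()
    state['value'] = 'changed-in-parent'       # invisible to the child
    p.join()
    print('parent sees: %s' % state['value'])  # 'changed-in-parent'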
I threw out the code and started over, but now I am only putting one extra function out in the ProcessWorker, an "execute" method that runs a command line. Pretty simple. I don't have to worry about launching and then closing a bunch of processes this way, which has caused me all kinds of instability and performance issues in the past in C++. When I switched to launching processes at the beginning and then passing messages to those waiting processes, my performance improved and it was very stable.
BTW, I looked at this link to get help, which threw me off because the example made me think that methods were being transported across the queues: http://www.doughellmann.com/PyMOTW/multiprocessing/communication.html
The second example of the first section used "next_task()" that appeared (to me) to be executing a task received via the queue.
Instead of attempting to send a method itself (which is impractical), try sending a name of a method to execute.
Provided that each worker runs the same code, it's a matter of a simple getattr(self, task_name).
I'd pass tuples (task_name, task_args), where task_args were a dict to be directly fed to the task method:
next_task_name, next_task_args = self.task_q.get()
if next_task_name:
    task = getattr(self, next_task_name)
    answer = task(**next_task_args)
    ...
else:
    # poison pill, shut down
    break
REF: https://stackoverflow.com/a/14179779
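To make that concrete, here is a small self-contained sketch of the pattern (illustrative names, not the question's Worker classes): only a method name and a dict of arguments travel through the queue, and the worker resolves the name with getattr.

import multiprocessing

class NameWorker(multiprocessing.Process):
    """Runs in its own process and executes tasks identified by name."""

    def __init__(self, task_q, result_q):
        multiprocessing.Process.__init__(self)
        self.task_q = task_q
        self.result_q = result_q

    def say_hello(self, who):
        return 'hello %s from %s' % (who, self.name)

    def run(self):
        while True:
            next_task_name, next_task_args = self.task_q.get()
            if next_task_name is None:            # poison pill means shutdown
                break
            task = getattr(self, next_task_name)
            self.result_q.put(task(**next_task_args))

if __name__ == '__main__':
    task_q = multiprocessing.Queue()
    result_q = multiprocessing.Queue()
    NameWorker(task_q, result_q).start()
    task_q.put(('say_hello', {'who': 'world'}))   # only a string and a dict get pickled
    print(result_q.get())
    task_q.put((None, None))                      # shut the worker down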
Answer on Jan 6 at 6:03 by David Lynch is not factually correct when he says that he was misled by
http://www.doughellmann.com/PyMOTW/multiprocessing/communication.html.
The code and examples provided are correct and work as advertised. next_task() is executing a task received via the queue -- try and understand what the Task.__call__() method is doing.
In my case, what tripped me up was syntax errors in my implementation of run(). It seems that the sub-process will not report this and just fails silently -- leaving things stuck in weird loops! Make sure you have some kind of syntax checker running, e.g. Flymake/Pyflakes in Emacs.
Debugging via multiprocessing.log_to_stderr() helped me narrow down the problem.

interactive scripts with threads

I'm trying to wrap the blocking calls in pyaudio with a thread to give me non-blocking access through queues. However, the problem I have is not with pyaudio, or queues, but with the issue of trying to test a thread. In keeping with "strip the example down to the minimum possible", all the pyaudio stuff has vanished, to leave only the thread class, and its instantiation in a main.
What I was hoping for was an object that I could create, and leave to get on with its stuff in the background, while I do control things with the console or tk. I figure the following max-stripped down example should have the thread doing stuff, while main runs and asks me if it is working. The raw_input prompt never appears. I would not be surprised at this if I was running it from IDLE, which is not thread safe, but I get the same behaviour if I run the script directly from the OS. I was prepared to see the raw input prompt disappear up the screen pushed by 'running' prints, but not even that happens. The prompt never appears. What's going on? It does respond to ctrl-C and to closing the window, but I'd still like to be able to see main running.
import threading
import time

class TestThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.running = True
        self.run()

    def run(self):
        while self.running:
            time.sleep(0.5)
            print 'running'

    def stop(self):
        self.running = False

if __name__ == '__main__':
    tt = TestThread()
    a = raw_input('simple stuff working ? -- ')
    tt.stop()
You should start the thread with self.start() instead of self.run(). Calling run() directly just runs the thread function in the current thread, like any other normal function call.
Normally you do not inherit from Thread. Instead, you use Thread(target=func2run).start()
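A minimal sketch combining both suggestions, in the question's Python 2 style: pass the work function as target, call start() so run() executes in a new thread, and use a threading.Event so the main script can stop it once raw_input returns.

import threading
import time

def work(stop_event):
    while not stop_event.is_set():
        time.sleep(0.5)
        print 'running'

if __name__ == '__main__':
    stop = threading.Event()
    t = threading.Thread(target=work, args=(stop,))
    t.start()                 # returns immediately; work() runs in the background
    a = raw_input('simple stuff working ? -- ')
    stop.set()                # tell the worker to finish
    t.join()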
