Run code at Pyramid shutdown - python

Pyramid supports an ApplicationCreated event. However, I can't find any ApplicationDestroyed/ApplicationShutdown event. Is it at all possible to execute a function upon shutdown?
Do I have any choice other than to go further up my stack? I.e., I'm using gevent inside uWSGI; it might be possible to get gevent or uWSGI to run my shutdown code, but it certainly isn't as pretty.

Pyramid does not support any shutdown event.
However, Python has an atexit module, whose registered handlers run on interpreter shutdown:
http://docs.python.org/library/atexit.html
import atexit

@atexit.register
def goodbye():
    print("You are now leaving the Python sector.")

Related

python apscheduler not shutting down

I'm trying to stop apscheduler from running by removing the job and shutting it down completely.
Neither is working; my function expire_data still gets triggered
from apscheduler.schedulers.blocking import BlockingScheduler

def process_bin(value):
    print("Stored:", pastebin.value)
    print("Will expire in", pastebin_duration.value, "seconds!")
    if pastebin_duration >= 0:
        scheduler = BlockingScheduler()
        job = scheduler.add_job(expire_data, 'interval', seconds=5)
        scheduler.start()
        job.remove()
        scheduler.shutdown()

def expire_data():
    print("Delete data!")
How can I stop it?
Question: I'm trying to stop the apscheduler from running
You are using a BlockingScheduler, therefore you can't: the call to scheduler.start() blocks forever, so the job.remove() and scheduler.shutdown() lines after it are never reached (see the sketch after the scheduler list below).
APScheduler BlockingScheduler
BlockingScheduler is the simplest possible scheduler.
It runs in the foreground, so when you call start(), the call never returns.
Read about Choosing the right scheduler
BlockingScheduler: use when the scheduler is the only thing running in your process
BackgroundScheduler: use when you’re not using any of the frameworks below, and want the scheduler to run in the background inside your application
AsyncIOScheduler: use if your application uses the asyncio module
GeventScheduler: use if your application uses gevent
TornadoScheduler: use if you’re building a Tornado application
TwistedScheduler: use if you’re building a Twisted application
QtScheduler: use if you’re building a Qt application
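To make that concrete, here is a minimal sketch (assuming APScheduler 3.x) of the same job with a BackgroundScheduler, whose start() returns so that the later remove() and shutdown() calls are actually reachable:

import time

from apscheduler.schedulers.background import BackgroundScheduler

def expire_data():
    print("Delete data!")

scheduler = BackgroundScheduler()
job = scheduler.add_job(expire_data, 'interval', seconds=5)
scheduler.start()   # returns immediately; the job runs in a background thread

time.sleep(12)      # let the job fire a couple of times

job.remove()        # reachable now, unlike after BlockingScheduler.start()
scheduler.shutdown()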

Twisted and a command line interface

I have a program where I want to do two things:
Interact with a server and respond to events from the server. I am doing this using twisted.
Have a command line prompt for the user where he can issue additional commands. I am using the python cmd module for this so far.
There seems to be no other choice than having two threads, as readline only has a blocking interface and needs to handle stuff like auto completion. Twisted on the other hand has to continuously run the reactor.
Now the problem is that it seems very hard to handle Ctrl-C for this. The easy solution would seem to be running the command line in the main thread and just using reactor.callFromThread for every interaction with the rest of the program. This is very easy, as overriding Cmd.onecmd can do this in a generic way. However, when I try to spawn the reactor in a thread with
from threading import Thread

t = Thread(target=reactor.run)
t.start()
I immediately get an exception
File "/usr/lib/python3.6/signal.py", line 47, in signal
handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
builtins.ValueError: signal only works in main thread
Everyone using twisted insists that the twisted reactor should run in the main thread, as that would be a better design.
When trying to do it that way and running twisted in the main thread, it will catch the Ctrl-C, exit the reactor and I am stuck with a thread that does not exit, as the call to input() inside cmdloop does not return. I tried searching for a solution to this and how to get out of the input() call, but everyone also insists that a command line interface should run in the main thread.
One potential option I found was to run twisted in the main thread and make the input thread a daemon, so it should exit when the reactor exits; however, the daemon flag did not change anything (the thread did not exit when the main thread did). Furthermore, this is likely dangerous, as the thread might be doing something important when it is killed.
Is there any way out of this?
Take a look at how invective does this with Twisted and without threads (one way to read the code might be to start at the mainpoint and work your way in).
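For reference, if you do keep the two-thread design from the question, the signal error itself can be avoided by telling Twisted not to install signal handlers. This is only a sketch of that workaround, not the invective approach:

from threading import Thread

from twisted.internet import reactor

# Signal handlers can only be installed from the main thread, so skip them.
t = Thread(target=reactor.run, kwargs={"installSignalHandlers": False}, daemon=True)
t.start()

# ... run the cmd loop in the main thread, funnelling all interaction with
# Twisted through reactor.callFromThread(...) ...

reactor.callFromThread(reactor.stop)  # clean shutdown from the cmd thread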

Stop a background process in flask without creating zombie processes

I need to start a long-running background process with subprocess when someone visits a particular view.
My code:
from flask import Flask
import subprocess

app = Flask(__name__)

@app.route("/")
def index():
    subprocess.Popen(["sleep", "10"])
    return "hi\n"

if __name__ == "__main__":
    app.run(debug=True)
This works great, for the most part.
The problem is that when the process (sleep) ends, ps -Af | grep sleep shows it as [sleep] <defunct>.
From what I've read, this is because I still have a reference to the process in flask.
Is there a way to drop this reference after the process exits?
I tried doing g.subprocess = subprocess.Popen(["sleep", "10"]), and waiting for the process to end in an @app.after_request handler so I could use del on it, but this prevents flask from returning the response until the subprocess exits - I need it to return the response before the subprocess exits.
Note:
I need the subprocess.Popen operation to be non-blocking - this is important.
As I've suggested in the comments, one of the cleanest and most robust ways of achieving this kind of thing in Python is by using celery.
Celery requires a broker transport for messaging, for which rabbitmq is the default, and at least one process with workers running. However, the thing that increases readability and maintainability is that the worker code can co-exist in the same file or files as your server app. You invoke the remote procedures as though they were simple function calls.
Celery can handle retries, post-task events, and lots of other things for free, everything with mature code hardened by years of use in production.
This is your example after rewriting it for use with Celery:
from flask import Flask
from celery import Celery
import subprocess

app = Flask(__name__)
celery_app = Celery("test")

@celery_app.task
def run_process():
    subprocess.Popen(["sleep", "5"])

@app.route("/")
def index():
    run_process.delay()
    return "hi\n"

if __name__ == "__main__":
    app.run(debug=True, port=8080)
This code works on a system with the rabbitmq server running with default options (I installed the package and started the service - no configuration whatsoever. Of course in production you would have to tune that, but if everything is to be on the same server, it may not even be needed).
With rabbitmq in place, one starts the worker process with a command line like celery worker -A bla1.celery_app -D (pip install celery in the same virtualenv as your Flask app). Then just launch the flask server and see it working.
Of course this has even more advantages if you are doing more work in Python itself than just calling an external process. It can have access to your database models, and you can perform asynchronous actions that modify objects in there (and eventually trigger responses for the user, as "flash" messages on the user session, or e-mails).
I've seen a lot of "poor man's parallel processing" using subprocess.Popen and letting it run freely, but that's often leading to zombie problems as you noted.
You could run your process in a thread (in that case, no need for Popen; just use call or check_call if you want to raise an exception when the process fails). call or check_call (or run, since Python 3.5) waits for the process to complete, so there are no zombies, and since you're running it in a thread you're not blocked.
import subprocess
import threading

def in_background():
    subprocess.call(["sleep", "10"])

@app.route("/")
def index():
    t = threading.Thread(target=in_background)
    t.start()
    return "hi\n"
Note: To wait for thread completion you'd have to use t.join() and for that you'd have to keep a reference on the t thread object.
BTW, I suppose that your real process isn't sleep; otherwise it's not very useful, and time.sleep(10) would do the same (always in a thread, of course!)
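To make that note concrete, here is a small illustrative sketch (the workers list is my assumption, not part of the answer above) that keeps references so the threads can be joined later:

import subprocess
import threading

workers = []  # keep references so the threads can be joined at shutdown

def in_background():
    subprocess.call(["sleep", "10"])

def start_worker():
    t = threading.Thread(target=in_background)
    t.start()
    workers.append(t)
    return t

# e.g. at shutdown:
for t in workers:
    t.join()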

Listening for subprocess failure in python

Using subprocess.Popen(), I'm launching a process that is supposed to take a long time. However, there is a chance that the process will fail shortly after it launches (producing a return code of 1). If that happens, I want to intercept the failure and present an explanatory message to the user. Is there a way to "listen" to the process and respond if it fails? I can't just use Popen.wait() because my python program has to keep running.
The hack I have in place right now is to time.sleep() my python program for .5 seconds (which should be enough time for the subprocess to fail if it's going to do so). After the python program resumes, it polls the subprocess to determine whether it has failed or not.
I imagine that a better solution might use threading and Popen.wait(), but I'm a relative beginner to python.
Edit:
The subprocess is a Java daemon that I'm launching. If another instance of the daemon is already running on the system, the Java subprocess will exit with a return code of 1, and I want to intercept the messy Java exception stack trace and present an understandable error message to the user.
Two approaches:
Call Popen.wait() on a thread as you suggested yourself, then call an error handler function if the exit code is non-zero (see the sketch after this list). Make sure that the error handler is thread-safe, preferably by dispatching the error message to the main thread if your application has an event loop.
Rewrite your application to use an event loop that already supports monitoring child processes, such as pyev. If you just want to monitor one subprocess, this is probably overkill.
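A minimal sketch of the first approach (the daemon command line and the on_failure handler are illustrative, not from the question):

import subprocess
import threading

def on_failure(returncode):
    # Present a friendly message instead of the raw Java stack trace.
    print("Daemon exited with code %d - is another instance already running?" % returncode)

def launch_and_watch(args):
    proc = subprocess.Popen(args)

    def watch():
        returncode = proc.wait()  # blocks only this watcher thread
        if returncode != 0:
            on_failure(returncode)

    threading.Thread(target=watch, daemon=True).start()
    return proc

proc = launch_and_watch(["java", "-jar", "daemon.jar"])  # illustrative command
# ... the rest of the program keeps running ...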

How to shutdown cherrypy from within?

I am developing on cherrypy, I start it from a python script.
For better development I wonder what is the correct way to stop cherrypy from within the main process (and not from the outside with ctrl-c or SIGTERM).
I assume I have to register a callback function from the main application to be able to stop the cherrypy main process from a worker thread.
But how do I stop the main process from within?
import sys
import cherrypy

class MyCherryPyApplication(object):
    def default(self):
        sys.exit()
    default.exposed = True

cherrypy.quickstart(MyCherryPyApplication())
Putting a sys.exit() in any request handler exits the whole server
I would have expected this to terminate only the current thread, but it terminates the whole server. That's what I wanted.
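If you would rather not raise SystemExit from a request handler, here is a sketch of the same thing using CherryPy's bus (assuming a CherryPy version where cherrypy.engine is available, i.e. 3.1+):

import cherrypy

class MyCherryPyApplication(object):
    @cherrypy.expose
    def shutdown(self):
        # Ask the engine to stop cleanly instead of raising SystemExit.
        cherrypy.engine.exit()
        return "Shutting down"

cherrypy.quickstart(MyCherryPyApplication())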
