I am developing a CherryPy application that I start from a Python script.
For better development I wonder what the correct way is to stop CherryPy from within the main process (and not from the outside with Ctrl-C or SIGTERM).
I assume I have to register a callback function from the main application to be able to stop the cherrypy main process from a worker thread.
But how do I stop the main process from within?
import cherrypy
import sys

class MyCherryPyApplication(object):

    def default(self):
        sys.exit()
    default.exposed = True

cherrypy.quickstart(MyCherryPyApplication())
Putting a sys.exit() in any request handler exits the whole server.
I would have expected this to terminate only the current thread, but it terminates the whole server. That's what I wanted.
Related
I'm currently trying to understand the signal handling in Django when receiving a SIGTERM.
Background information
I have an application with potentially long running requests, running in a Docker container. When Docker wants to stop a container, it first sends a SIGTERM signal, waits for a while, and then sends a SIGKILL. Normally, on the SIGTERM, you stop receiving new requests, and hope that the currently running requests finish before Docker decides to send a SIGKILL.
However, in my application, I want to save which requests have been tried, and I find that more important than finishing the request right now. So I'd prefer the current requests to shut down on SIGTERM, so that I can end them gracefully (saving their state), rather than waiting for the SIGKILL.
My attempt
My theory is that you can register a signal listener for SIGTERM, that performs a sys.exit(), so that a SystemExit exception is raised. I then want to catch that exception in my request handler, and save my state. As a first experiment I've created a mock project for the Django development server.
I registered the signal in the AppConfig.ready() function:
import logging
import signal
import sys

from django.apps import AppConfig

logger = logging.getLogger(__name__)

def signal_handler(signal_num, frame):
    sys.exit()

class TesterConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'tester'

    def ready(self):
        logger.info('starting ready')
        signal.signal(signal.SIGTERM, signal_handler)
and have created a request handler that catches Exceptions and BaseExceptions:
import logging
import time

from django.http import HttpResponse

logger = logging.getLogger(__name__)

def handler(request):
    try:
        logger.info('start')
        while True:
            time.sleep(1)
    except Exception:
        logger.info('exception')
    except BaseException:
        logger.info('baseexception')
    return HttpResponse('hallo')
But when I start the development server using python manage.py runserver and then send a kill signal using kill -n 15 <pid>, no 'baseexception' message gets logged ('start' does get logged).
The full code can be found here.
My question
My hypothesis is that the SIGTERM signal is handled in the main thread, so the sys.exit() call is handled in the main thread. So the exception is not raised in the thread running the request handler, and nothing is caught.
How do I change my code to have the SystemExit raised in the request handler thread? I need some information from that thread to log, so I can't just log something in the signal handler directly.
OK, I did some investigation, and I found an answer to my own question. As I somewhat suspected while posing the question, it is the kind of question where you probably want a different solution from the one being asked for. However, I'll post my findings here, in case someone finds this question in the future and is in a similar situation.
There were a couple of reasons that the above did not work. The first is that I forgot to register my app in the INSTALLED_APPS, so the code in TesterConfig.ready was not actually executed.
Next, it turns out that Django also registers a handler for the SIGTERM signal, see the Django source code. So if you send a SIGTERM to the process, this is the one that gets triggered. I temporarily commented out that line in my virtual environment to investigate some more, but of course that can never lead to a real solution.
The sys.exit() function indeed raises a SystemExit exception, but that is handled only in the thread itself. If you want to communicate between threads, you'll probably want to use a threading.Event and check it regularly in the thread you want to interrupt.
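A minimal sketch of that Event pattern (the worker function and its cleanup step are illustrative):

```python
import threading
import time

stop_event = threading.Event()

def worker():
    # Check the event regularly instead of relying on an
    # exception being raised inside this thread.
    while not stop_event.is_set():
        time.sleep(0.01)
    print("worker: stop requested, saving state...")

t = threading.Thread(target=worker)
t.start()
stop_event.set()   # e.g. called from a SIGTERM handler in the main thread
t.join()
```

The main thread (where the signal handler runs) sets the event; the worker notices it at its next check and can save its state before returning.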
If you're looking for suggestions on how to do something like this when running Django through gunicorn: I found that if you use the sync worker, you can register signals in your views.py, because the requests are handled in the main thread.
In the end I registered the signal there, and wrote a logging line and raised an Exception in the signal handler. This is then handled by the exception handling that was already in place.
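The pattern of raising an exception from a signal handler, so that existing exception handling catches it, can be sketched like this on a POSIX system (the Interrupted exception name is illustrative; here the process signals itself to demonstrate the flow):

```python
import os
import signal

class Interrupted(Exception):
    """Illustrative exception raised from the signal handler."""

def handler(signum, frame):
    raise Interrupted

caught = False
signal.signal(signal.SIGTERM, handler)
try:
    os.kill(os.getpid(), signal.SIGTERM)  # deliver SIGTERM to ourselves
    for _ in range(100):
        pass  # give the interpreter a chance to run the handler
except Interrupted:
    # The handler runs in the main thread between bytecode
    # instructions, so the exception surfaces inside the try block.
    caught = True
```

This only works because the handler, and therefore the raise, executes in the main thread; a worker thread never sees the exception, which is exactly the limitation described above.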
If a process is running and, for example, the user accidentally terminates it via the Task Manager, or the machine reboots (thus forcefully terminating processes), how can I register such an event so that the process executes some task before terminating completely?
What I've tried unsuccessfully is:
from signal import signal
from signal import SIGTERM

def foo(signum, frame):
    # signal handlers receive the signal number and the current frame
    print('hello world')

if __name__ == '__main__':
    signal(SIGTERM, foo)
    while True:
        pass
I'll run this from the command line, then navigate to task manager and end the task but foo is never called.
Based on the answer to Can I handle the killing of my windows process through the Task Manager? - it seems like the task manager kills processes using an unmaskable signal (equivalent to Linux's SIGKILL). This means that you cannot catch it.
There are other signals you can catch in Windows, like SIGBREAK, SIGCHLD, CTRL_C_EVENT and CTRL_BREAK_EVENT, but I guess the task manager does not use any of those for terminating a process.
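The unmaskable nature of such a kill can be demonstrated on POSIX, where the analogous signal is SIGKILL: attempting to install a handler for it simply fails (on Windows, signal.SIGKILL does not exist at all, which the sketch below also tolerates):

```python
import signal

err = None
try:
    # SIGKILL cannot be caught or ignored; installing a handler
    # raises OSError on POSIX (AttributeError on Windows, where
    # the constant is missing).
    signal.signal(signal.SIGKILL, lambda signum, frame: None)
except (OSError, ValueError, AttributeError) as exc:
    err = type(exc).__name__
print(err)
```

Since no handler can ever run for this signal, any cleanup has to happen earlier, e.g. in response to a catchable signal or via persistent state written during normal operation.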
We have a rich backend application that handles messaging/queuing, database queries, and computer vision. An additional feature we need is TCP communication, preferably via HTTP. The point is: this is not primarily a web application. We would expect to have a set of HTTP channels set up for different purposes. Yes, we understand about messaging, including topics and publish-subscribe: but direct TCP-based request/response also has its place.
I have looked at and tried out a half dozen Python HTTP web servers. They either implicitly or explicitly describe a requirement to run the event loop on the main thread. For us this puts the cart before the horse: the main thread is already occupied with other tasks, including coordination of the other activities.
To illustrate the intended structure I will lift code from my aiohttp-specific question How to run an aiohttp web application in a secondary thread. In that question I tried running in another standalone script but on a subservient thread:
def runWebapp():
    from aiohttp import web

    async def handle(request):
        name = request.match_info.get('name', "Anonymous")
        text = "Hello, " + name
        return web.Response(text=text)

    app = web.Application()
    app.add_routes([web.get('/', handle),
                    web.get('/{name}', handle)])
    web.run_app(app)

if __name__ == '__main__':
    from threading import Thread
    t = Thread(target=runWebapp)
    t.start()
    print("thread started, let's nap...")
    import time
    time.sleep(50)
This gives error:
RuntimeError: There is no current event loop in thread 'Thread-1'.
This error turns out to mean "hey you're not running this on the main thread".
We can logically replace aiohttp with other web servers here. Are there any for which this approach of asking the web server's event handling loop to run on a secondary thread will work? So far I have also tried cherrypy, tornado, and flask.
Note that one prominent web server that I have not tried is Django. But that one seems to require an extensive restructuring of the application around the directory structures expected (/required?) by Django. We would not want to do that, given the application has a set of other purposes that supersede this sideshow of having HTTP servers.
An approach that I have looked at is asyncio. I have not understood whether it can support running event loops on a side thread or not: if so then it would be an answer to this question.
In any case are there any web servers that explicitly support having their event loops off of the main thread?
You can create and set an event loop while on the secondary thread:
asyncio.set_event_loop(asyncio.new_event_loop())
cherrypy and flask already work without this; tornado works with this.
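The idea can be demonstrated with plain asyncio, independent of any web framework: a secondary thread creates and installs its own loop, then runs coroutines on it (the worker function and coroutine here are illustrative):

```python
import asyncio
import threading

results = []

def worker():
    # Each thread must create and install its own loop;
    # asyncio does not create one automatically off the main thread.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)

    async def work():
        await asyncio.sleep(0.01)
        return "done"

    results.append(loop.run_until_complete(work()))
    loop.close()

t = threading.Thread(target=worker)
t.start()
t.join()
```

A web server that merely asks for "the current event loop" will then find the thread-local loop you installed, which is why this one line is often all that is needed.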
On aiohttp, you get another error from it calling loop.add_signal_handler():
ValueError: set_wakeup_fd only works in main thread
You need to skip that because only the main thread of the main interpreter is allowed to set a new signal handler, which means web servers running on a secondary thread cannot directly handle signals to do graceful exit.
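The restriction itself is easy to observe with the stdlib alone: registering a signal handler from a non-main thread raises ValueError (the thread body below is illustrative):

```python
import signal
import threading

result = {}

def try_register():
    try:
        # Only the main thread of the main interpreter may do this.
        signal.signal(signal.SIGTERM, lambda signum, frame: None)
    except ValueError as exc:
        result["error"] = str(exc)

t = threading.Thread(target=try_register)
t.start()
t.join()
print(result["error"])
```

This is exactly why aiohttp's default signal wiring has to be disabled (handle_signals=False) when run_app() is called off the main thread.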
Example: aiohttp
Set the event loop before calling run_app().
aiohttp 3.8+ already uses a new event loop in run_app(), so you can skip this.
Pass handle_signals=False when calling run_app() to not add signal handlers.
asyncio.set_event_loop(asyncio.new_event_loop()) # aiohttp<3.8
web.run_app(app, handle_signals=False)
Example: tornado
Set the event loop before calling app.listen().
asyncio.set_event_loop(asyncio.new_event_loop())
app.listen(8888)
tornado.ioloop.IOLoop.current().start()
Any Python program starts on a single thread, the main thread, and creating a Thread does not change which thread is the main one.
It is awkward to give every Thread its own event loop here, but you can side-step the problem by using multiprocessing instead of threading.
That allows creating its own event loop for every single Process.
from multiprocessing import Process
from aiohttp import web

def runWebapp(port):
    async def handle(request):
        name = request.match_info.get("name", "Anonymous")
        text = "Hello, " + name
        return web.Response(text=text)

    app = web.Application()
    app.add_routes([
        web.get("/", handle),
        web.get("/{name}", handle)
    ])
    web.run_app(app, port=port)

if __name__ == "__main__":
    p1 = Process(target=runWebapp, args=(8080,))
    p2 = Process(target=runWebapp, args=(8081,))
    p1.start()
    p2.start()
I have an application which is stuck in a file.read call. Before it goes into the loop where this call is made, it forks a child which starts a gevent WSGI server. The purpose of this setup is that I want to wait for a keystroke and send this keystroke to the child WebSocket server, which broadcasts the message to the other connected WebSocket clients. My problem is that I don't know how to stop this thing.
If I press Ctrl+C, the child server process gets the SIGINT and stops. But my parent only responds if it can read something from its file. Isn't there something like an asynchronous handler? I also tried registering for SIGINT via signal.signal and manually sending the signal, but the signal handler was only called after something was written to the file.
By the way: I'm running Linux.
Pyramid supports an ApplicationCreated event. However, I can't find any ApplicationDestroyed/ApplicationShutdown event. Is it at all possible to execute a function upon shutdown?
Do I have any choice other than to go further up my stack, i.e. I'm using gevent inside uWSGI? It might be possible to get gevent or uWSGI to run my shutdown code, but it certainly isn't as pretty.
Pyramid does not support any shutdown event.
However, Python has an atexit hook that runs on interpreter shutdown:
http://docs.python.org/library/atexit.html
import atexit

@atexit.register
def goodbye():
    print("You are now leaving the Python sector.")
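One caveat worth knowing: atexit handlers run on normal interpreter shutdown (including sys.exit()), but not on os._exit() or a hard kill, so they are no substitute for signal handling when the process is terminated forcefully. A small demonstration in a child process (the message string is illustrative):

```python
import subprocess
import sys

# Run a tiny program in a subprocess that registers an atexit
# handler and then exits normally via sys.exit(): the handler runs.
prog = (
    "import atexit, sys\n"
    "atexit.register(lambda: print('cleanup ran'))\n"
    "sys.exit(0)\n"
)
out = subprocess.run(
    [sys.executable, "-c", prog], capture_output=True, text=True
).stdout
print(out.strip())
```

Replacing sys.exit(0) with os._exit(0) in the child would skip the handler entirely, which mirrors what happens on a SIGKILL.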