Gracefully stop a multi-process application in Python

I have an application which is stuck in a file.read call. Before it enters the loop where this call is made, it forks a child that starts a gevent WSGI server. The purpose of this setup is to wait for a keystroke and send it to the child websocket server, which broadcasts the message to the other connected websocket clients. My problem is that I don't know how to stop this thing.
If I press Ctrl+C, the child server process gets the SIGINT and stops. But the parent only responds once it can read something out of its file. Isn't there something like an asynchronous handler? I also tried registering for SIGINT via signal.signal and sending the signal manually, but the signal handler was only called once something was written to the file.
BTW: I'm running Linux.
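One workaround (a minimal sketch, not the application's actual code): instead of a blocking file.read, poll the descriptor with select and a timeout, so the loop regularly returns to Python and can notice a flag set by a SIGINT handler. Here a pipe stands in for the file, and the demo raises SIGINT itself to simulate Ctrl+C:

```python
import os
import select
import signal

shutting_down = False

def handle_sigint(signum, frame):
    # Runs asynchronously; just record that we should stop.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGINT, handle_sigint)

# A pipe stands in for the file the parent reads keystrokes from.
read_fd, write_fd = os.pipe()
os.write(write_fd, b"k")

received = []
while not shutting_down:
    # select with a timeout instead of a blocking read(), so the loop
    # wakes up even when nothing is written and can check the flag.
    ready, _, _ = select.select([read_fd], [], [], 0.2)
    if ready:
        received.append(os.read(read_fd, 1))
        # Here the real code would forward the keystroke to the child;
        # for the demo we raise SIGINT ourselves to simulate Ctrl+C.
        os.kill(os.getpid(), signal.SIGINT)
```

Because the loop returns control to the interpreter every 0.2 seconds, the handler set by signal.signal actually gets a chance to run even when nothing is written to the file.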

Related

Python handling system shutdown, but not crash (Or how I would like to restore state in case of something unexpected)

So I have a Flask API running as a systemd service on a piece of battery-powered hardware (used to control other hardware). I have a bunch of state that I need to save, and in case something goes wrong, like a power outage, I need to be able to restore that state.
Right now I save the state as JSON files so I can load them (if they exist) on startup. But I'd also need to be able to remove them again when the service gets the shutdown signal.
I saw somewhere that I could set KillSignal to SIGINT and handle the shutdown as a keyboard interrupt. Or something about ExecStop. Would that be enough, or is there a better way to handle such a scenario?
If you look at the shutdown logs of a Linux system you'll see 'sending SIGTERM to all processes... sending SIGKILL to all processes'. In a normal shutdown, processes get a few seconds' grace before being killed. So if you trap SIGTERM you can run your shutdown code, but it had better finish before the untrappable SIGKILL comes along. Since SIGTERM is always the signal sent to terminate a running process, trapping it is indeed the Right Way (TM) to clean up on exit. But since you are using systemd services, you could also do the cleanup in the service itself (e.g. via ExecStop).
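A minimal sketch of the SIGTERM-trap approach; the file path and state contents are made up for illustration:

```python
import json
import os
import signal
import sys
import tempfile

# Hypothetical state file; a real service would use a fixed location.
STATE_FILE = os.path.join(tempfile.gettempdir(), "app_state.json")

def save_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def handle_sigterm(signum, frame):
    # systemd sends SIGTERM first and SIGKILL after the stop timeout,
    # so this cleanup must finish quickly.
    if os.path.exists(STATE_FILE):
        os.remove(STATE_FILE)  # clean shutdown: nothing to restore
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)

save_state({"relay": "on"})
```

On startup, the service would then check whether the state file still exists: if it does, the previous run died uncleanly (e.g. power loss) and the state should be restored.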

Connect to already running python process

I want to create a daemon-like application: a separate, always-running background process, plus a console process invoked each time that somehow passes requests to the daemon. The idea is that you can run something like ui.py create something, and ui.py will send its arguments to the running daemon, which will perform the requested action. After that, ui.py will exit and the daemon will continue running.
I was thinking about doing it through network sockets or even HTTP requests, but I hope to find a more elegant solution. Maybe there is some way to establish a Pipe or Queue connection each time.
P.S. It should be cross-platform.
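One cross-platform option from the standard library is multiprocessing.connection, which provides an authenticated socket with pickle-based message passing. A sketch (the authkey and the command are made up), with both sides shown in one process for demonstration:

```python
import threading
from multiprocessing.connection import Client, Listener

AUTHKEY = b"not-a-real-secret"  # hypothetical shared secret

# Daemon side: listen on a local TCP socket (port 0 = OS-assigned).
listener = Listener(("localhost", 0), authkey=AUTHKEY)
handled = []

def daemon_loop():
    # Accept one console invocation and act on it.
    conn = listener.accept()
    args = conn.recv()           # e.g. ["create", "something"]
    handled.append(args)
    conn.send("done")
    conn.close()

t = threading.Thread(target=daemon_loop)
t.start()

# ui.py side: forward the command-line arguments and exit.
conn = Client(listener.address, authkey=AUTHKEY)
conn.send(["create", "something"])
reply = conn.recv()
conn.close()
t.join()
listener.close()
```

In a real deployment the daemon would loop over accept() forever and ui.py would be a separate process; on Unix you could also pass family="AF_UNIX" with a filesystem path instead of a TCP address.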

Non-blocking server in Twisted

I am building an application that needs to run a TCP server on a thread other than the main one. When trying to run the following code:
reactor.listenTCP(ServerConfiguration.tcpport, TcpCommandFactory())
reactor.run()
I get the following error
exceptions.ValueError: signal only works in main thread
Can I run the twisted servers on threads other than the main one?
Twisted can run in any thread - but only one thread at a time. If you want to run in the non-main thread, simply do reactor.run(installSignalHandlers=False). However, you cannot use a reactor on the non-main thread to spawn subprocesses, because their termination will never be detected. (This is a limitation of UNIX, really, not of Twisted.)
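The restriction comes from CPython itself: signal.signal() may only be called from the main thread, and installing signal handlers is exactly what reactor.run() does by default. A small stdlib-only sketch of the underlying error:

```python
import signal
import threading

errors = []

def install_handler():
    # Installing a signal handler off the main thread raises ValueError -
    # the same limitation behind Twisted's error message.
    try:
        signal.signal(signal.SIGINT, signal.SIG_DFL)
    except ValueError as exc:
        errors.append(str(exc))

t = threading.Thread(target=install_handler)
t.start()
t.join()
```

Passing installSignalHandlers=False to reactor.run() skips that signal.signal() call, which is why it lets the reactor run on a non-main thread.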

How to quit a thrift server in Python

I am writing an application in Python that uses thrift to communicate between itself and a client. Whenever I try to exit the application (using Ctrl-C or the exit button on the window), the thrift server keeps the application alive, probably because the server.serve() function enters an infinite loop. What is the best way to exit this server when the rest of the application quits?
It turns out my problem was not actually thrift-specific. I was running an infinite loop in a non-daemonic thread; therefore, Python waited for that thread to finish before the whole program would close. Setting self.daemon = True in the thread's __init__ method fixed the problem nicely.
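A stdlib-only sketch of the fix described above; the infinite loop stands in for thrift's server.serve():

```python
import threading
import time

class ServerThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        # Daemonic threads do not keep the interpreter alive, so the
        # program can exit even while run() loops forever.
        self.daemon = True

    def run(self):
        while True:            # stand-in for server.serve()
            time.sleep(0.05)

server = ServerThread()
server.start()
# The main program can now exit without joining the server thread.
```

Note that self.daemon must be set before start(); changing it afterwards raises RuntimeError.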

How to shutdown cherrypy from within?

I am developing on cherrypy, which I start from a Python script.
For better development I wonder what the correct way is to stop cherrypy from within the main process (and not from the outside with Ctrl+C or SIGTERM).
I assume I have to register a callback function from the main application to be able to stop the cherrypy main process from a worker thread.
But how do I stop the main process from within?
import sys
import cherrypy

class MyCherryPyApplication(object):

    def default(self):
        sys.exit()
    default.exposed = True

cherrypy.quickstart(MyCherryPyApplication())
Putting a sys.exit() in any request handler exits the whole server.
I would have expected this to terminate only the current thread, but it terminates the whole server - which is exactly what I wanted.
