Twisted and a command line interface - python

I have a program where I want to do two things:
Interact with a server and respond to events from the server. I am doing this using twisted.
Have a command line prompt for the user where he can issue additional commands. I am using the python cmd module for this so far.
There seems to be no choice but to use two threads, as readline only has a blocking interface and needs to handle things like autocompletion, while Twisted has to keep the reactor running continuously.
Now the problem is that it seems very hard to handle Ctrl-C in this setup. The easy solution would seem to be running the command line in the main thread and just using reactor.callFromThread for every interaction with the rest of the program. This is very easy, as overriding Cmd.onecmd can do this in a generic way. However, when I try to spawn the reactor in a thread with
t = Thread(target=reactor.run)
t.start()
I immediately get an exception
File "/usr/lib/python3.6/signal.py", line 47, in signal
handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler))
builtins.ValueError: signal only works in main thread
Everyone using Twisted insists that the reactor should run in the main thread, as that is the better design.
When I try it that way and run Twisted in the main thread, it catches the Ctrl-C and exits the reactor, but I am stuck with a thread that does not exit, because the call to input() inside cmdloop never returns. I searched for a way to break out of the input() call, but everyone also insists that a command line interface should run in the main thread.
One potential option I found was to run Twisted in the main thread and make the input thread a daemon, so that it exits when the reactor exits. However, the daemon flag did not change anything (the thread did not exit when the main thread did). Furthermore, this is likely dangerous, as the thread might be doing something important when it is killed.
Is there any way out of this?
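The generic Cmd.onecmd override mentioned in the question can be sketched as follows. This is only an illustration, not Twisted's API: a plain queue stands in for reactor.callFromThread so the sketch has no Twisted dependency, and the names calls, call_from_loop_thread, and Console are made up for the example.

```python
import cmd
import queue

# Stand-in for the reactor's thread-safe call queue. In the real program,
# reactor.callFromThread would be used and the reactor thread would run
# the queued work.
calls = queue.Queue()

def call_from_loop_thread(func, *args):
    """Hand work to the event-loop thread (stand-in for callFromThread)."""
    calls.put((func, args))

class Console(cmd.Cmd):
    prompt = '> '

    def onecmd(self, line):
        # Generic hook: every command is marshalled to the event-loop
        # thread instead of executing on the console thread.
        call_from_loop_thread(super().onecmd, line)
        # Stop the loop ourselves on 'quit', since the real handler now
        # runs elsewhere.
        return line.strip() == 'quit'

    def do_echo(self, arg):
        print(arg)

    def do_quit(self, arg):
        return True
```

With this in place, the command loop thread never touches application state directly; it only enqueues work for the loop thread.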

Take a look at how invective does this with Twisted and without threads (one way to read the code might be to start at the mainpoint and work your way in).

Related

Exit all threads in case of Error in python

I'm working on a Python project where I want the same behavior as in C for my threads: in C, when the main thread exits, it kills all other threads.
The project contains a TCP error server that is used to collect logs from other threads and other software. The TCP link is simplex.
Some errors must involve the end of the whole program.
For external software I can kill them using their PID.
For other threads I've tried sys._exit(); sometimes it works, and sometimes some threads remain.
If my other threads were looping I could use a semaphore or something like that, but it is only one iteration of a linear process.
I've thought about the Producer/Consumer design pattern, or adding a lot of lock.acquire()/lock.release() calls, but I think that would add more complexity, and it implies breaking up the linear thread.
I've had a look at other Stack Overflow questions and found these solutions:
Use sys._exit(), but its success rate is not 100%.
Convert my threads into subprocess to kill them easily, but in my case I can't.
I'm looking for a solution, a design pattern or something else to solve it.
PS: I'm a C lover, and each time I deal with Python I think of solutions as simple as calling exit() to kill all my threads.
If you make your worker threads daemon threads, they will die when all your non-daemon threads (e.g. the main thread) have exited.
http://docs.python.org/library/threading.html#threading.Thread.daemon
A thread's daemon status defaults to False (isDaemon() returns False); set it with setDaemon(True), or equivalently assign thread.daemon = True before calling start().
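A minimal demonstration of that behaviour, run in a child interpreter so the effect is observable: the child exits as soon as its main thread finishes, even though its daemon thread would otherwise sleep for a minute. (The script string here is illustrative, not from the question.)

```python
import subprocess
import sys
import textwrap

# The child process starts a daemon thread that sleeps for 60 seconds,
# then lets its main thread finish. Because the worker is a daemon, the
# interpreter exits immediately instead of waiting out the sleep.
script = textwrap.dedent("""
    import threading, time
    t = threading.Thread(target=time.sleep, args=(60,))
    t.daemon = True          # equivalent to t.setDaemon(True)
    t.start()
    print('main thread done')
""")
result = subprocess.run([sys.executable, '-c', script],
                        capture_output=True, text=True, timeout=30)
print(result.stdout.strip())
```

Flip daemon to False in the script and the child hangs for the full 60 seconds instead.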
Another solution :
To make the thread stop on a keyboard interrupt signal (Ctrl-C), you can catch the KeyboardInterrupt exception and clean up before exiting, like this:
try:
    start_thread()  # and the rest of your main
except (KeyboardInterrupt, SystemExit):
    cleanup_stop_thread()
    sys.exit()

Spawn a subprocess but kill it if main process gets killed

I am creating a program in Python that listens to various user interactions and logs them. I have these requirements/restrictions:
I need a separate process that sends those logs to a remote database every hour
I can't do it in the current process because it blocks the UI.
If the main process stops, the background process should also stop.
I've been reading about subprocess, but I can't seem to find anything on how to stop both simultaneously. I need the equivalent of spawn_link, if anybody knows some Erlang/Elixir.
Thanks!
To answer the question in the title (for visitors from Google): there are robust solutions on Linux and Windows using OS-specific APIs, and less robust but more portable psutil-based solutions.
To fix your specific problem (it is an XY problem): use a daemon thread instead of a process.
A thread would allow you to perform the I/O without blocking the GUI, even if the GUI toolkit you've chosen doesn't provide an async I/O API such as tkinter's createfilehandler() or gtk's io_add_watch().
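A minimal sketch of that design, with made-up names (log_queue, uploader, and an in-memory uploaded list standing in for the remote database): a daemon thread drains the queue and periodically "uploads" a batch, and it dies automatically when the main (UI) thread exits because it is a daemon.

```python
import queue
import threading
import time

log_queue = queue.Queue()
uploaded = []            # stand-in for the remote database

def uploader(interval=0.05):
    # In the real program the interval would be an hour, and the append
    # below would be the network write to the database.
    while True:
        time.sleep(interval)
        batch = []
        while not log_queue.empty():
            batch.append(log_queue.get())
        if batch:
            uploaded.append(batch)

t = threading.Thread(target=uploader, daemon=True)
t.start()

log_queue.put('user clicked button')   # the UI thread never blocks
time.sleep(0.2)                        # give the uploader a chance to run
```

Because the thread is a daemon, no explicit shutdown handshake is needed for the "stop when the main process stops" requirement, though a graceful flush on exit would need an atexit hook or a sentinel in the queue.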

Non-blocking server in Twisted

I am building an application that needs to run a TCP server on a thread other than the main. When trying to run the following code:
reactor.listenTCP(ServerConfiguration.tcpport, TcpCommandFactory())
reactor.run()
I get the following error
exceptions.ValueError: signal only works in main thread
Can I run the twisted servers on threads other than the main one?
Twisted can run in any thread - but only one thread at a time. If you want to run in the non-main thread, simply do reactor.run(installSignalHandlers=False). However, you cannot use a reactor on the non-main thread to spawn subprocesses, because their termination will never be detected. (This is a limitation of UNIX, really, not of Twisted.)
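The error in the question comes straight from CPython, not from Twisted: signal.signal() may only be called from the main thread, and reactor.run() calls it during startup unless installSignalHandlers=False is passed. A dependency-free reproduction of the underlying restriction:

```python
import signal
import threading

errors = []

def install_handler():
    try:
        # This is effectively what reactor.run() attempts on startup.
        signal.signal(signal.SIGINT, signal.default_int_handler)
    except ValueError as exc:
        errors.append(str(exc))

t = threading.Thread(target=install_handler)
t.start()
t.join()
print(errors[0])
```

Passing installSignalHandlers=False skips this call entirely, which is why it lets the reactor start on a non-main thread.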

Listening for subprocess failure in python

Using subprocess.Popen(), I'm launching a process that is supposed to take a long time. However, there is a chance that the process will fail shortly after it launches (producing a return code of 1). If that happens, I want to intercept the failure and present an explanatory message to the user. Is there a way to "listen" to the process and respond if it fails? I can't just use Popen.wait() because my python program has to keep running.
The hack I have in place right now is to time.sleep() my Python program for 0.5 seconds (which should be enough time for the subprocess to fail if it's going to do so). After the Python program resumes, it polls the subprocess to determine whether it has failed.
I imagine that a better solution might use threading and Popen.wait(), but I'm a relative beginner to python.
Edit:
The subprocess is a Java daemon that I'm launching. If another instance of the daemon is already running on the system, the Java subprocess will exit with a return code of 1, and I want to intercept the messy Java exception stack trace and present an understandable error message to the user.
Two approaches:
Call Popen.wait() on a thread as you suggested yourself, then call an error handler function if the exit code is non-zero. Make sure that the error handler is thread safe, preferably by dispatching the error message to the main thread if your application has an event loop.
Rewrite your application to use an event loop that already supports monitoring child processes, such as pyev. If you just want to monitor one subprocess, this is probably overkill.
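The first approach can be sketched like this, with a deliberately failing child standing in for the Java daemon. The watch helper and failures list are illustrative names; in a real application the on_failure callback would dispatch the error message to the UI's event loop rather than just recording the code.

```python
import subprocess
import sys
import threading

def watch(proc, on_failure):
    """Wait for the child on a helper thread; report non-zero exits."""
    code = proc.wait()        # blocks this helper thread, not the program
    if code != 0:
        on_failure(code)

failures = []
# Child that exits immediately with code 1, like the duplicate daemon case.
proc = subprocess.Popen([sys.executable, '-c', 'import sys; sys.exit(1)'])
watcher = threading.Thread(target=watch, args=(proc, failures.append),
                           daemon=True)
watcher.start()

# The main program would keep running here; the join is only so this
# sketch can show the result.
watcher.join(timeout=30)
print(failures)
```

This removes the fragile fixed-length sleep: the failure is reported as soon as the child actually exits, whether that takes 50 ms or 5 minutes.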

Signal handling in Python

In my program I have a bunch of threads running and I'm trying
to interrupt the main thread to get it to do something asynchronously.
So I set up a handler and send the main process a SIGUSR1 - see the code
below:
def SigUSR1Handler(signum, frame):
    self._logger.debug('Received SIGUSR1')
    return

signal.signal(signal.SIGUSR1, SigUSR1Handler)
[signal.signal(signal.SIGUSR1, signal.SIG_IGN)]
In the above case, all the threads and the main process stop - from a C point of view this was unexpected - I want the threads to continue as they were before the signal. If I put the SIG_IGN in instead, everything continues fine.
Can somebody tell me how to do this? Maybe I have to do something with the 'frame'
manually to get back to where it was..just a guess though
thanks in advance,
Edit: Thanks for your help on this.
To explain a bit more, I have thread instances writing string information to
a socket which is also output to a file. These threads run their own timers so they
independently write their outputs to the socket. When the program runs I also see
their output on stdout but it all stops as soon as I see the debug line from the signal.
I need the threads to constantly send this info but I need the main program to
take a command so it also starts doing something else (in parallel) for a while.
I thought I'd just be able to send a signal from the command line to trigger this.
Mixing signals and threads is always a little precarious. What you describe should not happen, however. Python only handles signals in the main thread. If the OS delivers the signal to another thread, that thread may be briefly interrupted (when it's performing, say, a system call) but it won't execute the signal handler. The main thread will be asked to execute the signal handler at the next opportunity.
What are your threads (including the main thread) actually doing when you send the signal? How do you notice that they all 'stop'? Is it a brief pause (easily explained by the fact that the main thread will need to acquire the GIL before handling the signal) or does the process break down entirely?
I'll sort-of answer my own question:
In my first attempt at this I was using time.sleep(run_time) in the main
thread to control how long the threads ran before they were stopped. By adding
debug output I could see that the sleep seemed to exit as soon as the
signal handler returned, so everything was shutting down normally, but early!
I've replaced the sleep with a while loop, which doesn't jump out after
the signal handler returns, so my threads keep running. That solves the
problem, but I'm still a bit puzzled about sleep()'s behaviour. (The documentation does note that any caught signal terminates the sleep() after the handler runs; since PEP 475 in Python 3.5, the sleep is automatically restarted instead.)
You should probably use a threading.Condition or threading.Event instead of sending signals. Have your main thread check it on every loop iteration and perform its special operation if it has been set.
If you insist on using signals, you'll want to move to using subprocess instead of threads, as your problem is likely due to the GIL.
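A sketch of the first suggestion, with a second thread standing in for whatever would otherwise have sent SIGUSR1 (the names special, trigger, and handled are invented for the example): the main loop waits on a threading.Event and performs its special operation when the flag is set.

```python
import threading

special = threading.Event()
handled = []

def trigger():
    # Runs on another thread; in the question this role was played by a
    # signal sent from the command line.
    special.set()

threading.Thread(target=trigger).start()

# One iteration of the main thread's loop: wait briefly for the flag
# instead of blocking in time.sleep() and worrying about early wakeups.
if special.wait(timeout=30):
    special.clear()
    handled.append('special operation')
print(handled)
```

Event.wait() also avoids the original sleep() puzzle entirely: it returns when the flag is set or the timeout expires, with no signal delivery involved.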
Watch this presentation by David Beazley.
http://blip.tv/file/2232410
It also explains some quirky behavior related to threads and signals (Python specific, not the general quirkiness of the subject :-) ).
http://pyprocessing.berlios.de/ Pyprocessing is a neat library that makes it easier to work with separate processes in Python.
