I have a Kafka consumer running in a thread inside my Django application, and I want to add monitoring and alerting for that thread. How can I monitor the thread's state (alive or dead) and raise an alert if it dies?
I have tried monitoring it with a scheduler that runs every 10 minutes and calls thread.is_alive(). The problem is that the scheduler runs in a different process and cannot access the main process's threads. How can I resolve this?
To monitor a thread in Python, you can use the built-in threading module, which provides a number of functions for working with threads. Here is an example of how you can use it to monitor a thread:
import threading
import time

def my_function():
    time.sleep(5)  # placeholder for the thread's real work

my_thread = threading.Thread(target=my_function)
my_thread.start()

if my_thread.is_alive():
    # Do something, e.g. log or report that the thread is still running
    pass
The threading module provides a number of other functions and methods that you can use to monitor and control threads, such as join() and the daemon attribute (the old isDaemon()/setDaemon() methods are deprecated aliases). For more information, see the official Python documentation for the threading module: https://docs.python.org/3/library/threading.html.
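One way around the cross-process limitation from the question is to run the watchdog as a thread inside the same process as the consumer, so is_alive() can actually see it. A minimal sketch, with a hypothetical send_alert() helper and a dummy thread standing in for the real Kafka consumer:

import threading
import time

def send_alert(message):
    # Hypothetical alerting hook: replace with email, PagerDuty, Slack, etc.
    print("ALERT:", message)

def watchdog(target_thread, interval):
    # Runs in the same process as the consumer, so is_alive() works here,
    # unlike a scheduler running in a separate process.
    while target_thread.is_alive():
        time.sleep(interval)
    send_alert("Kafka consumer thread is dead")

# Dummy stand-in for the real Kafka consumer thread:
consumer_thread = threading.Thread(target=lambda: time.sleep(5))
consumer_thread.start()

# interval=600 would match the question's 10-minute check.
monitor = threading.Thread(target=watchdog, args=(consumer_thread, 1), daemon=True)
monitor.start()
monitor.join()  # only so this demo prints the alert before exiting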
I have a Python web application in which the client (Ember.js) communicates with the server via WebSocket (I am using Flask-SocketIO).
Apart from the WebSocket server, the backend does two more things worth mentioning:
- Doing some image conversion (using graphicsmagick)
- OCR of incoming images from the client (using tesseract)
When the client submits an image, an entity for it is created in the database and its id is put in an image-conversion queue. A worker grabs it and does the image conversion; after that, the worker puts it in the OCR queue, where it is handled by the OCR queue worker.
So far so good. The WS requests are handled synchronously in separate threads (Flask-SocketIO uses Eventlet for that) and the heavy computational work happens asynchronously (in separate threads as well).
Now the problem: the whole application runs on a Raspberry Pi 3. If I do not make use of its 4 cores, I only have one ARMv8 core clocked at 1.2 GHz, which is very little power for OCR. So I decided to find out how to use multiple cores with Python. Although I read about the problems with the GIL, I found the multiprocessing package, whose documentation says: "The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads." Exactly what I wanted. So I instantly replaced
from threading import Thread
thread = Thread(target=heavy_computational_worker_thread)
thread.start()
by
from multiprocessing import Process
process = Process(target=heavy_computational_worker_thread)
process.start()
The queue needed to be shared across the multiple cores as well, so I had to change
from queue import Queue
queue = Queue()
to
import multiprocessing
queue = multiprocessing.Queue()
as well. The problem: the queue and threading modules are monkey-patched by Eventlet. If I stop using the monkey-patched versions of Thread and Queue and use the ones from multiprocessing instead, then the request thread started by Eventlet blocks forever when accessing the queue.
Now my question:
Is there any way I can make this application do the OCR and image conversion on a separate core?
I would like to keep using WebSocket and Eventlet if that's possible. The advantage I have is that the only communication interface between the processes would be the queue.
Ideas that I already had:
- Not using a Python implementation of a queue but rather using I/O. For example, a dedicated Redis instance which the different subprocesses would access
- Going a step further: starting every queue worker as a separate Python process (e.g. python3 wsserver | python3 ocrqueue | python3 imgconvqueue). Then I would have to ensure myself that access to the queue and to the database is non-blocking
The best thing would be to keep the single process and make it work with multiprocessing, though.
Thank you very much in advance
Eventlet is currently incompatible with the multiprocessing package. There is an open issue for this work: https://github.com/eventlet/eventlet/issues/210.
The alternative that I think will work well in your case is to use Celery to manage your queue. Celery will start a pool of worker processes that wait for tasks provided by the main process via a message queue (RabbitMQ and Redis are both supported).
The Celery workers do not need to use eventlet, only the main server does, so this frees them to do whatever they need to do without the limitations imposed by eventlet.
If you are interested in exploring this approach, I have a complete example that uses it: https://github.com/miguelgrinberg/flack.
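For orientation, here is a minimal sketch of that layout, assuming a Redis broker; the module and task names are hypothetical and not taken from the flack example:

# tasks.py: Celery task definitions (hypothetical names)
from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')

@app.task
def convert_image(image_id):
    # run graphicsmagick here, then hand the image to the OCR step
    ocr_image.delay(image_id)

@app.task
def ocr_image(image_id):
    pass  # run tesseract here

The workers are started separately as plain (non-eventlet) processes, e.g. with celery -A tasks worker, and the eventlet-based Flask-SocketIO server enqueues work without blocking by calling convert_image.delay(image_id).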
Yet the thread module works for me. How can I check whether a thread made by the thread module (_thread in Python 3) is running? When the function the thread is executing ends, does the thread end too, or not?
def __init__(self):
    self.thread = None
    ......
    if self.thread is None or not self.thread.isAlive():
        self.thread = thread.start_new_thread(self.dosomething, ())
    else:
        tkMessageBox.showwarning("XXXX", "There's no need to have more than two threads")
I know there is no function called isAlive() in the thread module; is there an alternative?
But there isn't any reason to use the threading module instead, is there?
Unless you really need the low-level capabilities of the internal thread (_thread) module, you really should use the threading module instead. It makes everything easier to use and comes with helpers such as is_alive().
By the way, the alternative to restarting a thread as in your example code would be to keep it running but have it wait for additional jobs. For example, you could have a queue somewhere that keeps track of all the jobs you want the thread to do; the thread works on them until the queue is empty, and then waits for new jobs to appear instead of terminating. Only at the end of the application do you signal the thread to stop waiting and terminate, as in the sketch below.
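A minimal sketch of that pattern, using a sentinel value to tell a single long-lived worker thread when to stop:

import queue
import threading

jobs = queue.Queue()
STOP = object()  # sentinel signalling the worker to terminate

def worker():
    while True:
        job = jobs.get()   # blocks until a job (or the sentinel) appears
        if job is STOP:
            break
        job()              # here a job is simply a callable

t = threading.Thread(target=worker)
t.start()

jobs.put(lambda: print("doing something"))
jobs.put(STOP)  # at application exit, signal the worker to stop
t.join()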
I'm using the amqp module with RabbitMQ. I needed to create a non-blocking consumer, and for that purpose I used the threading module.
There is one problem: I must make the threads stop on app exit. Here is the relevant part of the code:
c.start_consuming(message_callback)
while not self._stop.isSet():
    if self._stop.isSet():
        print("thread will be stopped")
    else:
        print("thread will NOT BE STOPPED")
    c.channel.wait()
The problem is that c.channel.wait() sometimes blocks and sometimes doesn't, depending on whether there are messages in the queue it is listening to (I ran some experiments, but they are inconclusive).
If c.channel.wait() took a timeout argument, I could achieve that goal by setting the timeout to, for example, 0.1 seconds. As far as I can tell from the source code, there is no timeout option.
Main question: How can I create a non-blocking consumer with the amqp module?
Sub-question 1: How can I patch the amqp code so it uses a timeout value?
Fallback solution: I may consider using the multiprocessing module so that I can kill the process at any time.
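For what it's worth, a hedged sketch of the timeout idea, assuming the py-amqp package, whose Connection.drain_events() accepts a timeout and raises socket.timeout when nothing arrives in time. The queue name and callback are hypothetical, and newer py-amqp releases also require an explicit conn.connect() after construction:

import socket
import threading
import amqp

stop = threading.Event()  # set from elsewhere on app exit

def message_callback(message):
    print("received:", message.body)

conn = amqp.Connection(host='localhost')
channel = conn.channel()
channel.basic_consume(queue='myqueue', callback=message_callback)

while not stop.is_set():
    try:
        # Wait at most 0.1 s for a message, then re-check the stop flag.
        conn.drain_events(timeout=0.1)
    except socket.timeout:
        pass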
I am trying to simulate a network of applications that run using Twisted. As part of my simulation I would like to synchronize certain events and be able to feed each process large amounts of data. I decided to use multiprocessing Events and Queues. However, my processes are getting hung.
I wrote the example code below to illustrate the problem. Specifically (about 95% of the time on my Sandy Bridge machine), the 'run_in_thread' function finishes, but the 'print_done' callback is not called until after I press Ctrl-C.
Additionally, I can change several things in the example code to make this work more reliably, such as: reducing the number of spawned processes, calling self.ready.set from reactor_ready, or changing the delay of deferLater.
I am guessing there is a race condition somewhere between the Twisted reactor and blocking multiprocessing calls such as Queue.get() or Event.wait().
What exactly is the problem I am running into? Is there a bug in my code that I am missing? Can I fix this, or is Twisted incompatible with multiprocessing events/queues?
Secondly, would something like spawnProcess or Ampoule be the recommended alternative? (as suggested in Mix Python Twisted with multiprocessing?)
Edits (as requested):
I've run into problems with all the reactors I've tried: glib2reactor, selectreactor, pollreactor, and epollreactor. The epollreactor seems to give the best results and seems to work fine for the example given below, but it still gives me the same (or a similar) problem in my application. I will continue investigating.
I'm running Gentoo Linux (kernels 3.3 and 3.4) with Python 2.7, and I've tried Twisted 10.2.0, 11.0.0, 11.1.0, 12.0.0, and 12.1.0.
In addition to my Sandy Bridge machine, I see the same issue on my dual-core AMD machine.
#!/usr/bin/python
# -*- coding: utf-8 -*-
from twisted.internet import reactor
from twisted.internet import threads
from twisted.internet import task
from multiprocessing import Process
from multiprocessing import Event

class TestA(Process):
    def __init__(self):
        super(TestA, self).__init__()
        self.ready = Event()
        self.ready.clear()
        self.start()

    def run(self):
        reactor.callWhenRunning(self.reactor_ready)
        reactor.run()

    def reactor_ready(self, *args):
        task.deferLater(reactor, 1, self.node_ready)
        return args

    def node_ready(self, *args):
        print 'node_ready'
        self.ready.set()
        return args

def reactor_running():
    print 'reactor_running'
    df = threads.deferToThread(run_in_thread)
    df.addCallback(print_done)

def run_in_thread():
    print 'run_in_thread'
    for n in processes:
        n.ready.wait()

def print_done(dfResult=None):
    print 'print_done'
    reactor.stop()

if __name__ == '__main__':
    processes = [TestA() for i in range(8)]
    reactor.callWhenRunning(reactor_running)
    reactor.run()
The short answer is yes, Twisted and multiprocessing are not compatible with each other, and you cannot reliably use them as you are attempting to.
On all POSIX platforms, child process management is closely tied to SIGCHLD handling. POSIX signal handlers are process-global, and there can be only one per signal type.
Twisted and stdlib multiprocessing cannot both have a SIGCHLD handler installed. Only one of them can. That means only one of them can reliably manage child processes. Your example application doesn't control which of them will win that ability, so I would expect there to be some non-determinism in its behavior arising from that fact.
However, the more immediate problem with your example is that you load Twisted in the parent process and then use multiprocessing to fork and not exec all of the child processes. Twisted does not support being used like this. If you fork and then exec, there's no problem. However, the lack of an exec of a new process (perhaps a Python process using Twisted) leads to all kinds of extra shared state which Twisted does not account for. In your particular case, the shared state that causes this problem is the internal "waker fd" which is used to implement deferToThread. With the fd shared between the parent and all the children, when the parent tries to wake up the main thread to deliver the result of the deferToThread call, it most likely wakes up one of the child processes instead. The child process has nothing useful to do, so that's just a waste of time. Meanwhile the main thread in the parent never wakes up and never notices your threaded task is done.
It's possible you can avoid this issue by not loading any of Twisted until you've already created the child processes. This would turn your usage into a single-process use case as far as Twisted is concerned (in each process, it would be initially loaded, and then that process would not go on to fork at all, so there's no question of how fork and Twisted interact anymore). This means not even importing Twisted until after you've created the child processes.
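A minimal sketch of that ordering, with a placeholder child workload: every child is forked before Twisted is imported anywhere, so no Twisted state (such as the internal waker fd) is shared across the fork:

import time
from multiprocessing import Process, Event

def child_main(ready):
    time.sleep(1)  # stands in for the child's real work
    ready.set()

if __name__ == '__main__':
    events = [Event() for _ in range(8)]
    children = [Process(target=child_main, args=(e,)) for e in events]
    for c in children:
        c.start()

    # Only now, with every child already forked, is Twisted loaded.
    from twisted.internet import reactor, threads

    def wait_for_children():
        for e in events:
            e.wait()

    d = threads.deferToThread(wait_for_children)
    d.addCallback(lambda _: reactor.stop())
    reactor.run()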
Of course, this only helps you out as far as Twisted goes. Any other libraries you use could run into similar trouble (you mentioned glib2, that's a great example of another library that will totally choke if you try to use it like this).
I highly recommend not using the multiprocessing module at all. Instead, use any multi-process approach that involves fork and exec, not fork alone. Ampoule falls into that category.
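For comparison, a minimal sketch of the fork-and-exec style using Twisted's own reactor.spawnProcess; the worker.py script is hypothetical:

import sys
from twisted.internet import protocol, reactor

class WorkerProtocol(protocol.ProcessProtocol):
    def outReceived(self, data):
        print("worker stdout:", data)

    def processEnded(self, reason):
        print("worker ended:", reason.value)
        reactor.stop()

# Fork *and* exec a fresh interpreter running a (hypothetical) worker.py,
# so the child shares no Twisted state with the parent.
reactor.spawnProcess(WorkerProtocol(), sys.executable,
                     [sys.executable, 'worker.py'], env=None)
reactor.run()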
I have several questions regarding Python threads.
Is a Python thread a Python or OS implementation?
When I use htop, a multi-threaded script has multiple entries with the same memory consumption and the same command, but a different PID. Does this mean that a [Python] thread is actually a special kind of process? (I know there is a setting in htop to show these threads as one process: Hide userland threads.)
Documentation says:
A thread can be flagged as a "daemon thread". The significance of this flag is that the entire Python program exits when only daemon threads are left.
My interpretation/understanding was: the main thread terminates when all non-daemon threads are terminated.
So Python daemon threads are not part of the Python program if "the entire Python program exits when only daemon threads are left"?
Python threads are implemented using OS threads in all implementations I know (CPython, PyPy, and Jython). For each Python thread, there is an underlying OS thread.
Some operating systems (Linux being one of them) show all different threads launched by the same executable in the list of all running processes. This is an implementation detail of the OS, not of Python. On some other operating systems, you may not see those threads when listing all the processes.
The process will terminate when the last non-daemon thread finishes; at that point, all the daemon threads are terminated. So those threads are part of your process, but they do not prevent it from terminating (whereas a regular thread does). This is implemented in pure Python: a process terminates when the system _exit function is called (it kills all threads), and when the main thread terminates (or sys.exit is called), the Python interpreter checks whether another non-daemon thread is running. If there is none, it calls _exit; otherwise, it waits for the non-daemon threads to finish.
The daemon-thread flag is implemented in pure Python by the threading module. When the module is loaded, a Thread object is created to represent the main thread, and its _exitfunc method is registered as an atexit hook.
The code of this function is:
class _MainThread(Thread):

    def _exitfunc(self):
        self._Thread__stop()
        t = _pickSomeNonDaemonThread()
        if t:
            if __debug__:
                self._note("%s: waiting for other threads", self)
        while t:
            t.join()
            t = _pickSomeNonDaemonThread()
        if __debug__:
            self._note("%s: exiting", self)
        self._Thread__delete()
This function is called by the Python interpreter when sys.exit is called or when the main thread terminates. When the function returns, the interpreter calls the system _exit function; the function returns once only daemon threads (if any) are left running.
When the _exit function is called, the OS terminates all of the process's threads and then terminates the process itself. The Python runtime will not call _exit until all the non-daemon threads are done.
All threads are part of the process.
My interpretation/understanding was: the main thread terminates when all non-daemon threads are terminated.
So Python daemon threads are not part of the Python program if "the entire Python program exits when only daemon threads are left"?
Your understanding is incorrect. For the OS, a process is composed of many threads, all of which are equal (there is nothing special about the main thread for the OS, except that the C runtime adds a call to _exit at the end of the main function). And the OS doesn't know about daemon threads: this is purely a Python concept.
The Python interpreter uses native threads to implement Python threads, but has to remember the list of threads it created. Using its atexit hook, it ensures that the _exit function returns to the OS only when the last non-daemon thread terminates. By "the entire Python program", the documentation means the whole process.
The following program can help understand the difference between daemon thread and regular thread:
import sys
import time
import threading

class WorkerThread(threading.Thread):
    def run(self):
        while True:
            print 'Working hard'
            time.sleep(0.5)

def main(args):
    use_daemon = False
    for arg in args:
        if arg == '--use_daemon':
            use_daemon = True
    worker = WorkerThread()
    worker.setDaemon(use_daemon)
    worker.start()
    time.sleep(1)
    sys.exit(0)

if __name__ == '__main__':
    main(sys.argv[1:])
If you execute this program with the '--use_daemon' flag, you will see that it prints only a few 'Working hard' lines. Without the flag, the program does not terminate even when the main thread finishes, and it keeps printing 'Working hard' lines until it is killed.
I'm not familiar with the implementation, so let's make an experiment:
import threading
import time

def target():
    while True:
        print 'Thread working...'
        time.sleep(5)

NUM_THREADS = 5

for i in range(NUM_THREADS):
    thread = threading.Thread(target=target)
    thread.start()
The number of threads reported by ps -o cmd,nlwp <pid> is NUM_THREADS+1 (one more for the main thread), so as long as the OS tools can detect the number of threads, they must be OS threads. I tried both CPython and Jython and, although Jython runs some extra threads of its own, ps increments the thread count by one for each extra thread I add.
I'm not sure about htop behaviour, but ps seems to be consistent.
I added the following line before starting the threads:
thread.daemon = True
When I executed it using CPython, the program terminated almost immediately and no process was found using ps, so my guess is that the program terminated together with the threads. In Jython the program behaved as before (it didn't terminate), so maybe some other JVM threads prevent the program from terminating, or daemon threads aren't supported.
Note: I used Ubuntu 11.10 with Python 2.7.2+ and Jython 2.2.1 on Java 1.6.0_23.
Python threads are, in practice, an interpreter-level construct because of the so-called global interpreter lock (GIL), even though they are technically built on OS-level threading mechanisms. On *nix they use pthreads, but the GIL effectively makes them a hybrid stuck with the application-level threading paradigm. So you will see them multiple times in ps/top output on *nix systems, yet performance-wise they behave like software-implemented threads.
No, you are just seeing your OS's underlying thread implementation. This kind of behaviour is exposed by *nix pthread-style threading, and I'm told even Windows implements threads this way.
When your program closes, it also waits for all threads to finish. If you have threads which could postpone the exit indefinitely, it may be wise to flag those threads as "daemons" and allow your program to finish even if they are still running.
Some reference material you might be interested in:
- Linux Gazette: Understanding Threading in Python
- Doug Hellmann: Multi-processing techniques in Python
- David Beazley, PyCon 2010: Understanding the Python GIL (video presentation)
There are great answers to the question, but I feel the daemon-thread part is still not explained in a simple fashion, so this answer refers just to the third question.
"main thread terminates when all non-daemon threads are terminated."
So python daemon threads are not part of Python program if "the entire Python program exits when only daemon threads are left"?
If you think about what a daemon is, it is usually a service: some code that runs in an infinite loop and serves requests, fills queues, accepts connections, and so on. Other threads use it; it has no purpose running by itself (in single-process terms).
So the program can't wait for the daemon thread to terminate, because it might never happen. Python ends the program when all non-daemon threads are done, and stops the daemon threads at that point.
To wait until a daemon thread has completed its work, use the join() method, as in the sketch below.
daemon_thread.join() will make Python wait for the daemon thread as well before exiting. join() also accepts a timeout argument.
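A minimal sketch (in Python 3, unlike the Python 2 examples above):

import threading
import time

def serve():
    time.sleep(2)            # stands in for a long-running service loop
    print('request served')

daemon_thread = threading.Thread(target=serve, daemon=True)
daemon_thread.start()

# Without this join, the interpreter could exit before serve() finishes;
# the timeout bounds how long we are willing to wait.
daemon_thread.join(timeout=5)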