I'm using Python multiprocessing for RabbitMQ consumers.
On application start I create 4 WorkerProcess instances.
def start_workers(num=4):
for i in xrange(num):
process = WorkerProcess()
process.start()
Below is my worker class.
The logic works so far: I create 4 parallel consumer processes.
The problem arises after a process gets killed and I want to create a new one. With the logic below, the new process is created as a child of the old one, and after a while the memory runs out.
Is there any way with Python multiprocessing to start a new process and kill the old one correctly?
class WorkerProcess(multiprocessing.Process):
    def __init__(self):
        super(WorkerProcess, self).__init__()
        app.logger.info('%s: Starting new process!', self.name)
def shutdown(self):
process = WorkerProcess()
process.start()
return True
def kill(self):
start_workers(1)
self.terminate()
def run(self):
try:
# Connect to RabbitMQ
credentials = pika.PlainCredentials(app.config.get('RABBIT_USER'), app.config.get('RABBIT_PASS'))
connection = pika.BlockingConnection(
pika.ConnectionParameters(host=app.config.get('RABBITMQ_SERVER'), port=5672, credentials=credentials))
channel = connection.channel()
# Declare the Queue
channel.queue_declare(queue='screenshotlayer',
auto_delete=False,
durable=True)
app.logger.info('%s: Start to consume from RabbitMQ.', self.name)
channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback, queue='screenshotlayer')
channel.start_consuming()
app.logger.info('%s: Thread is going to sleep!', self.name)
            # do what channel.start_consuming() does but with a stopping signal
#while self.stop_working.is_set():
# channel.transport.connection.process_data_events()
channel.stop_consuming()
connection.close()
except Exception as e:
self.shutdown()
return 0
Thank You
In the main process, keep track of your subprocesses (in a list) and loop over them with .join(timeout=50) (https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Process.join).
Then check if each one is alive (https://docs.python.org/2/library/multiprocessing.html#multiprocessing.Process.is_alive).
If it is not, replace it with a fresh one.
def start_workers(n):
    wks = []
    for _ in range(n):
        wks.append(WorkerProcess())
        wks[-1].start()
    while True:
        # Wait up to 50 seconds per worker; join() returns early if the
        # process has already terminated
        for p in wks:
            p.join(timeout=50)
        # Remove all terminated processes
        wks = [p for p in wks if p.is_alive()]
        # Start new processes to replace the dead ones
        for _ in range(n - len(wks)):
            wks.append(WorkerProcess())
            wks[-1].start()
I would not handle the process pool management myself. Instead, I would use the ProcessPoolExecutor from the concurrent.futures module.
There is no need to have WorkerProcess inherit from the Process class. Just write your actual code in the class and then submit it to a process pool executor. The executor keeps a pool of processes always ready to execute your tasks.
This keeps things simple and saves you headaches.
You can read more about in my blog post here: http://masnun.com/2016/03/29/python-a-quick-introduction-to-the-concurrent-futures-module.html
Example Code:
from concurrent.futures import ProcessPoolExecutor
from time import sleep

def return_after_5_secs(message):
    sleep(5)
    return message

pool = ProcessPoolExecutor(3)
future = pool.submit(return_after_5_secs, "hello")
print(future.done())
sleep(5)
print(future.done())
print("Result: " + future.result())
Related
I want to run multiple threads in parallel. Each thread picks up a task from a task queue and executes that task.
from threading import Thread
from Queue import Queue
import time
class link(object):
def __init__(self, i):
self.name = str(i)
def run_jobs_in_parallel(consumer_func, jobs, results, thread_count,
async_run=False):
def consume_from_queue(jobs, results):
while not jobs.empty():
job = jobs.get()
try:
results.append(consumer_func(job))
except Exception as e:
print str(e)
results.append(False)
finally:
jobs.task_done()
#start worker threads
if jobs.qsize() < thread_count:
thread_count = jobs.qsize()
for tc in range(1,thread_count+1):
worker = Thread(
target=consume_from_queue,
name="worker_{0}".format(str(tc)),
args=(jobs,results,))
worker.start()
if not async_run:
jobs.join()
def create_link(link):
print str(link.name)
time.sleep(10)
return True
def consumer_func(link):
return create_link(link)
# create_link takes a while to execute
jobs = Queue()
results = list()
for i in range(0,10):
jobs.put(link(i))
run_jobs_in_parallel(consumer_func, jobs, results, 25, async_run=False)
Now what is happening is: say we have 10 link objects in the jobs queue; while the threads are running in parallel, multiple threads execute the same task. How can I prevent this from happening?
Note: the above sample code does not exhibit the problem described, but I have exactly the same code except that the create_link method does some complex stuff.
I think what you need is a lock object (docs, tutorial + examples). If you create an instance of such an object you can 'lock' some parts of your code, ensuring that only one thread executes that part at a time.
I guess in your case you want to lock the line job = jobs.get().
First you have to create the lock in a scope where all threads have access to it. (You don't want a lock for every thread, but a single lock for all your threads. That means creating the lock within your thread just before acquiring it won't work.)
import threading
lock = threading.Lock()
then you can use it on your line like:
lock.acquire()
job = jobs.get()
lock.release()
or
with lock:
job = jobs.get()
The first thread to reach acquire() will lock the lock. Other threads that try to acquire() the lock will pause until the lock gets unlocked again by a call to release().
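Put together, a minimal sketch of the worker loop from the question with the lock in place (consumer_func is passed in here rather than taken from the enclosing scope):

import threading

lock = threading.Lock()

def consume_from_queue(jobs, results, consumer_func):
    # jobs is the Queue.Queue from the question
    while True:
        # only one thread at a time may inspect and take from the queue;
        # re-checking empty() inside the lock closes the gap between
        # empty() and get()
        with lock:
            if jobs.empty():
                return
            job = jobs.get()
        try:
            results.append(consumer_func(job))
        except Exception:
            results.append(False)
        finally:
            jobs.task_done()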
I am building a watchdog timer that runs another Python program, and if it fails to find a check-in from any of the threads, shuts down the whole program. This is so it will, eventually, be able to take control of needed communication ports. The code for the timer is as follows:
from multiprocessing import Process, Queue
from time import sleep
from copy import deepcopy
PATH_TO_FILE = r'.\test_program.py'
WATCHDOG_TIMEOUT = 2
class Watchdog:
def __init__(self, filepath, timeout):
self.filepath = filepath
self.timeout = timeout
self.threadIdQ = Queue()
self.knownThreads = {}
def start(self):
threadIdQ = self.threadIdQ
process = Process(target = self._executeFile)
process.start()
try:
while True:
unaccountedThreads = deepcopy(self.knownThreads)
# Empty queue since last wake. Add new thread IDs to knownThreads, and account for all known thread IDs
# in queue
while not threadIdQ.empty():
threadId = threadIdQ.get()
if threadId in self.knownThreads:
unaccountedThreads.pop(threadId, None)
else:
print('New threadId < {} > discovered'.format(threadId))
self.knownThreads[threadId] = False
# If there is a known thread that is unaccounted for, then it has either hung or crashed.
# Shut everything down.
if len(unaccountedThreads) > 0:
print('The following threads are unaccounted for:\n')
for threadId in unaccountedThreads:
print(threadId)
print('\nShutting down!!!')
break
else:
print('No unaccounted threads...')
sleep(self.timeout)
# Account for any exceptions thrown in the watchdog timer itself
except:
process.terminate()
raise
process.terminate()
def _executeFile(self):
with open(self.filepath, 'r') as f:
exec(f.read(), {'wdQueue' : self.threadIdQ})
if __name__ == '__main__':
wd = Watchdog(PATH_TO_FILE, WATCHDOG_TIMEOUT)
wd.start()
I also have a small program to test the watchdog functionality
from time import sleep
from threading import Thread
from queue import SimpleQueue
Q_TO_Q_DELAY = 0.013
class QToQ:
def __init__(self, processQueue, threadQueue):
self.processQueue = processQueue
self.threadQueue = threadQueue
Thread(name='queueToQueue', target=self._run).start()
def _run(self):
pQ = self.processQueue
tQ = self.threadQueue
while True:
while not tQ.empty():
sleep(Q_TO_Q_DELAY)
pQ.put(tQ.get())
def fastThread(q):
while True:
print('Fast thread, checking in!')
q.put('fastID')
sleep(0.5)
def slowThread(q):
while True:
print('Slow thread, checking in...')
q.put('slowID')
sleep(1.5)
def hangThread(q):
print('Hanging thread, checked in')
q.put('hangID')
while True:
pass
print('Hello! I am a program that spawns threads!\n\n')
threadQ = SimpleQueue()
Thread(name='fastThread', target=fastThread, args=(threadQ,)).start()
Thread(name='slowThread', target=slowThread, args=(threadQ,)).start()
Thread(name='hangThread', target=hangThread, args=(threadQ,)).start()
QToQ(wdQueue, threadQ)
As you can see, I need to have the threads put into a queue.Queue, while a separate object slowly feeds the output of the queue.Queue into the multiprocessing queue. If instead I have the threads put directly into the multiprocessing queue, or do not have the QToQ object sleep in between puts, the multiprocessing queue will lock up, and will appear to always be empty on the watchdog side.
Now, as the multiprocessing queue is supposed to be thread and process safe, I can only assume I have messed something up in the implementation. My solution seems to work, but also feels hacky enough that I feel I should fix it.
I am using Python 3.7.2, if it matters.
I suspect that test_program.py exits.
I changed the last few lines to this:
tq = threadQ
# tq = wdQueue # option to send messages direct to WD
t1 = Thread(name='fastThread', target=fastThread, args=(tq,))
t2 = Thread(name='slowThread', target=slowThread, args=(tq,))
t3 = Thread(name='hangThread', target=hangThread, args=(tq,))
t1.start()
t2.start()
t3.start()
QToQ(wdQueue, threadQ)
print('Joining with threads...')
t1.join()
t2.join()
t3.join()
print('test_program exit')
The calls to join() mean that the test program never exits all by itself, since none of the threads ever exit.
So, as is, t3 hangs; the watchdog program detects the unaccounted-for thread and stops the test program.
If t3 is removed from the above program, then the other two threads are well behaved and the watchdog program allows the test program to continue indefinitely.
I know that the termination notice is made available via the meta-data url and that I can do something similar to
if requests.get("http://169.254.169.254/latest/meta-data/spot/termination-time").status_code == 200
in order to determine if the notice has been posted. I run a Python service on my Spot Instances that:
Loops over long-polling SQS queues
If it gets a message, it pauses polling and works on the payload.
Working on the payload can take 5-50 minutes.
Working on the payload will involve spawning a threadpool of up to 50 threads to handle parallel uploading of files to S3, this is the majority of the time spent working on the payload.
Finally, remove the message from the queue, rinse, repeat.
The work is idempotent, so if the same payload runs multiple times, I'm out the processing time/costs, but will not negatively impact the application workflow.
I'm searching for an elegant way to now also poll for the termination notice every five seconds in the background. As soon as the termination notice appears, I'd like to immediately release the message back to the SQS queue in order for another instance to pick it up as quickly as possible.
As a bonus, I'd like to shut down the work, kill off the threadpool, and have the service enter a stasis state. If I terminate the service, supervisord will simply start it back up again.
Even bigger bonus! Is there not a python module available that simplifies this and just works?
I wrote this code to demonstrate how a thread can be used to poll for the Spot instance termination. It first starts up a polling thread, which would be responsible for checking the HTTP endpoint.
Then we create a pool of fake workers (mimicking real work to be done) and start running the pool. Eventually the polling thread will kick in (about 10 seconds into execution as implemented) and kill the whole thing.
To prevent the script from continuing to work after Supervisor restarts it, we would simply put a check at the beginning of __main__, and if the termination notice is there we sleep for 2.5 minutes, which is longer than the notice lasts before the instance is shut down.
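For illustration, a minimal sketch of that startup guard (hedged: it assumes the requests library is installed, and 150 seconds is just a value comfortably longer than the two-minute notice window):

import time
import requests

TERMINATION_URL = "http://169.254.169.254/latest/meta-data/spot/termination-time"

def termination_notice_posted():
    # A 200 status from the metadata endpoint means a spot termination
    # notice has been posted for this instance.
    try:
        return requests.get(TERMINATION_URL, timeout=1).status_code == 200
    except requests.exceptions.RequestException:
        return False

if __name__ == '__main__':
    if termination_notice_posted():
        # Outlast the two-minute notice window so supervisord's restart
        # does not pick up new work on a doomed instance.
        time.sleep(150)

The full demo follows: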
#!/usr/bin/env python
import threading
import Queue
import random
import time
import sys
import os
class Instance_Termination_Poll(threading.Thread):
"""
    Sleep for 5 seconds and eventually pretend that we then receive the
    termination event:
if requests.get("http://169.254.169.254/latest/meta-data/spot/termination-time").status_code == 200
"""
def run(self):
print("Polling for termination")
while True:
for i in range(30):
time.sleep(5)
                if i == 2:
                    print("Received Termination Poll!")
print("Pretend we returned the message to the queue.")
print("Now Kill the entire program.")
os._exit(1)
print("Well now, this is embarassing!")
class ThreadPool:
"""
Pool of threads consuming tasks from a queue
"""
def __init__(self, num_threads):
self.num_threads = num_threads
self.errors = Queue.Queue()
self.tasks = Queue.Queue(self.num_threads)
for _ in range(num_threads):
Worker(self.tasks, self.errors)
def add_task(self, func, *args, **kargs):
"""
Add a task to the queue
"""
self.tasks.put((func, args, kargs))
def wait_completion(self):
"""
Wait for completion of all the tasks in the queue
"""
try:
while True:
                if not self.tasks.empty():
time.sleep(10)
else:
break
except KeyboardInterrupt:
print "Ctrl-c received! Kill it all with Prejudice..."
os._exit(1)
self.tasks.join()
class Worker(threading.Thread):
"""
Thread executing tasks from a given tasks queue
"""
def __init__(self, tasks, error_queue):
threading.Thread.__init__(self)
self.tasks = tasks
self.daemon = True
self.errors = error_queue
self.start()
def run(self):
while True:
func, args, kargs = self.tasks.get()
try:
func(*args, **kargs)
except Exception, e:
print("Exception " + str(e))
error = {'exception': e}
self.errors.put(error)
self.tasks.task_done()
def do_work(n):
"""
    Sleeps a random amount of time, then creates a little CPU usage to
mimic some work taking place.
"""
for z in range(100):
time.sleep(random.randint(3,10))
print "Thread ID: {} working.".format(threading.current_thread())
for x in range(30000):
x*n
print "Thread ID: {} done, sleeping.".format(threading.current_thread())
if __name__ == '__main__':
num_threads = 30
# Start up the termination polling thread
term_poll = Instance_Termination_Poll()
term_poll.start()
# Create our threadpool
pool = ThreadPool(num_threads)
for y in range(num_threads*2):
pool.add_task(do_work, n=y)
# Wait for the threadpool to complete
pool.wait_completion()
I wish to have a single-producer, multiple-consumer architecture in Python using multi-threaded programming. I want an operation like this:
Producer produces the data
Consumers 1..N (N is pre-determined) wait for the data to arrive (block) and then process the SAME data in different ways.
So I need all the consumers to get the same data from the producer.
When I used Queue to perform this, I realized that all but the first consumer would be starved with the implementation I have.
One possible solution is to have a unique queue for each of the consumer threads wherein the same data is pushed in multiple queues by the producer. Is there a better way to do this ?
from threading import Thread
import time
import random
from Queue import Queue
my_queue = Queue(0)
def Producer():
global my_queue
my_list = []
for each in range (50):
my_list.append(each)
my_queue.put(my_list)
def Consumer1():
print "Consumer1"
global my_queue
print my_queue.get()
my_queue.task_done()
def Consumer2():
print "Consumer2"
global my_queue
print my_queue.get()
my_queue.task_done()
P = Thread(name = "Producer", target = Producer)
C1 = Thread(name = "Consumer1", target = Consumer1)
C2 = Thread(name = "Consumer2", target = Consumer2)
P.start()
C1.start()
C2.start()
In the example above, C2 gets blocked indefinitely as C1 consumes the data produced by P. What I would rather want is for C1 and C2 both to be able to access the SAME data as produced by P.
Thanks for any code/pointers!
Your producer creates only one job to do:
my_queue.put(my_list)
For example, put my_list twice, and both consumers work:
def Producer():
global my_queue
my_list = []
for each in range (50):
my_list.append(each)
my_queue.put(my_list)
my_queue.put(my_list)
This way you put two jobs with the same list into the queue.
However, I have to warn you: modifying the same data from different threads without thread synchronization is generally a bad idea.
Anyway, the single-queue approach would not work for you, since a single queue is meant to be processed by threads running the same algorithm.
So I advise you to go ahead with a unique queue per consumer, since other solutions are not as trivial.
How about a per-thread queue then?
As part of starting each consumer, you would also create another Queue, and add it to a list of "all thread queues". Then start the producer, passing it the list of all queues, so that it can push the data into all of them; a sketch follows below.
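A minimal sketch of that fan-out, with hypothetical names and the Python 2 Queue module used elsewhere in this thread:

from threading import Thread
from Queue import Queue

def producer(queues):
    data = list(range(50))
    # push the same data into every consumer's private queue
    for q in queues:
        q.put(data)

def consumer(name, q):
    data = q.get()  # blocks until the producer delivers
    print "%s got %d items" % (name, len(data))
    q.task_done()

queues = [Queue(), Queue()]
for i, q in enumerate(queues):
    Thread(target=consumer, args=("Consumer%d" % (i + 1), q)).start()
Thread(target=producer, args=(queues,)).start()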
A single-producer and five-consumer example, verified.
from multiprocessing import Process, JoinableQueue
import time
import os
q = JoinableQueue()
def producer():
for item in range(30):
time.sleep(2)
q.put(item)
pid = os.getpid()
print(f'producer {pid} done')
def worker():
while True:
item = q.get()
pid = os.getpid()
print(f'pid {pid} Working on {item}')
print(f'pid {pid} Finished {item}')
q.task_done()
for _ in range(5):
    Process(target=worker, daemon=True).start()
producers = []
# it is easy to extend it to multi producers.
for i in range(1):
p = Process(target=producer)
producers.append(p)
p.start()
# make sure producers done
for p in producers:
p.join()
# block until all workers are done
q.join()
print('All work completed')
Explanation:
One producer and five consumers in this example.
JoinableQueue is used to make sure all elements stored in the queue will be processed. task_done() is how a worker notifies that an element is done. q.join() waits for all elements to be marked as done.
With #2, there is no need to join every worker.
But it is important to join the producers so they finish storing their elements into the queue; otherwise, the program exits immediately.
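As a minimal illustration of that contract, q.join() only returns once task_done() has been called for every item that was put on the queue:

from multiprocessing import JoinableQueue

q = JoinableQueue()
q.put('item')
item = q.get()
q.task_done()  # without this call, q.join() below would block forever
q.join()
print('all items accounted for')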
I know it might be overkill, but... what about using the signal/slot framework from Qt? For consistency, QThread could be used instead of threading.Thread.
from __future__ import annotations # Needed for forward Consumer typehint in register_consumer
from queue import Queue
from typing import List
from PySide2.QtCore import QThread, QObject, QCoreApplication, Signal, Slot, Qt
import time
import random
def thread_name():
    # Convenience helper: returns the name of the current QThread
    return QThread.currentThread().objectName()
class Producer(QThread):
product_available = Signal(list)
def __init__(self):
QThread.__init__(self, objectName='ThreadProducer')
self.consumers: List[Consumer] = list()
# See Consumer class comments for info (exactly the same reason here)
self.internal_consumer_queue = Queue()
self.active = True
def run(self):
my_list = [each for each in range(5)]
self.product_available.emit(my_list)
print(f'Producer: from thread {QThread.currentThread().objectName()} I\'ve sent my products\n')
while self.active:
consumer: Consumer = self.internal_consumer_queue.get(block=True)
print(f'Producer: {consumer} has told me it has completed his task with my product! '
f'(Thread {thread_name()})')
if not consumer in self.consumers:
raise ValueError(f'Consumer {consumer} was not registered')
self.consumers.remove(consumer)
if len(self.consumers) == 0:
print('All consumers have completed their task! I\'m terminating myself')
self.active = False
    @Slot(object)
def on_task_done_by_consumer(self, consumer: Consumer):
self.internal_consumer_queue.put(consumer)
def register_consumer(self, consumer: Consumer):
if consumer in self.consumers:
return
self.consumers.append(consumer)
consumer.task_done_with_product.connect(self.on_task_done_by_consumer)
class Consumer(QThread):
task_done_with_product = Signal(object)
def __init__(self, name: str, producer: Producer):
self.name = name
# Super init and set Thread name
QThread.__init__(self, objectName=f'Thread_Of_{self.name}')
self.producer = producer
# See method on_product_available doc
self.internal_queue = Queue()
def run(self) -> None:
self.producer.product_available.connect(self.on_product_available, Qt.ConnectionType.UniqueConnection)
# Thread loop waiting for product availability
product = self.internal_queue.get(block=True)
print(f'{self.name}: Product {product} received and elaborated in thread {thread_name()}\n\n')
# Tell the producer I've done
self.task_done_with_product.emit(self)
# Now the thread is naturally closed
    @Slot(list)
def on_product_available(self, product: list):
"""
As a limitation of PySide, it seems that list are not supported for QueuedConnection. This work around using
internal queue might solve
"""
# This is executed in Main Loop!
        print(f'{self.name}: In thread {thread_name()} I received the product, and I\'m queuing it for being elaborated '
              f'in consumer thread')
self.internal_queue.put(product)
# Quit the thread
self.active = False
def __repr__(self):
# Needed in case of exception for representing current consumer
return f'{self.name}'
# Needed to executed main and threads event loops
app = QCoreApplication()
QThread.currentThread().setObjectName('MainThread')
producer = Producer()
c1 = Consumer('Consumer1', producer)
c1.start()
producer.register_consumer(c1)
c2 = Consumer('Consumer2', producer)
c2.start()
producer.register_consumer(c2)
producer.product_available.connect(c1.on_product_available)
producer.product_available.connect(c2.on_product_available)
# Start Producer thread for LAST!
producer.start()
app.exec_()
Results:
Producer: from thread ThreadProducer I've sent my products
Consumer1: In thread MainThread I received the product, and I'm queuing it for being elaborated in consumer thread
Consumer1: Product [0, 1, 2, 3, 4] received and elaborated in thread Thread_Of_Consumer1
Consumer2: In thread MainThread I received the product, and I'm queuing it for being elaborated in consumer thread
Consumer2: Product [0, 1, 2, 3, 4] received and elaborated in thread Thread_Of_Consumer2
Producer: Consumer1 has told me it has completed his task with my product! (Thread ThreadProducer)
Producer: Consumer2 has told me it has completed his task with my product! (Thread ThreadProducer)
All consumers have completed their task! I'm terminating myself
Notes:
The step-by-step explanation is in the code comments. If anything is unclear, I'll try my best to clarify.
Unfortunately I've not found a way to use QueuedConnection (doc here) so as to directly execute the slot in the proper thread: an internal queue has been used to pass information from the main loop to the proper thread (for both Producer and Consumer). It seems that list and object cannot be meta-registered in PySide/PyQt for queuing purposes.
I am working on an XML-RPC server which has to perform certain tasks cyclically. I am using Twisted as the core of the XML-RPC service, but I am running into a little problem:
class cemeteryRPC(xmlrpc.XMLRPC):
def __init__(self, dic):
xmlrpc.XMLRPC.__init__(self)
def xmlrpc_foo(self):
return 1
def cycle(self):
print "Hello"
time.sleep(3)
class cemeteryM( base ):
def __init__(self, dic): # dic is for cemetery
multiprocessing.Process.__init__(self)
self.cemRPC = cemeteryRPC()
def run(self):
# Start reactor on a second process
reactor.listenTCP( c.PORT_XMLRPC, server.Site( self.cemRPC ) )
p = multiprocessing.Process( target=reactor.run )
p.start()
while not self.exit.is_set():
self.cemRPC.cycle()
#p.join()
if __name__ == "__main__":
import errno
test = cemeteryM()
test.start()
# trying new method
notintr = False
while not notintr:
try:
test.join()
notintr = True
except OSError, ose:
if ose.errno != errno.EINTR:
raise ose
except KeyboardInterrupt:
notintr = True
How should I go about joining these two processes so that their respective joins don't block?
(I am pretty confused by join. Why would it block? I have googled, but can't find much helpful explanation of how to use join. Can someone explain this to me?)
Regards
Do you really need to run Twisted in a separate process? That looks pretty unusual to me.
Try to think of Twisted's Reactor as your main loop - and hang everything you need off that - rather than trying to run Twisted as a background task.
The more normal way of performing this sort of operation would be to use Twisted's .callLater or to add a LoopingCall object to the Reactor.
e.g.
from twisted.web import xmlrpc, server
from twisted.internet import task
from twisted.internet import reactor
class Example(xmlrpc.XMLRPC):
def xmlrpc_add(self, a, b):
return a + b
def timer_event(self):
print "one second"
r = Example()
m = task.LoopingCall(r.timer_event)
m.start(1.0)
reactor.listenTCP(7080, server.Site(r))
reactor.run()
Hey asdvawev - .join() in multiprocessing works just like .join() in threading - it's a blocking call the main thread runs to wait for the worker to shut down. If the worker never shuts down, then .join() will never return. For example:
from multiprocessing import Process
import time

class myproc(Process):
    def run(self):
        while True:
            time.sleep(1)
Calling start() on this means that join() will never, ever return. Typically, to prevent this, I'll use an Event() object passed into the child process to allow me to signal the child when to exit:
class myproc(Process):
def __init__(self, event):
self.event = event
Process.__init__(self)
def run(self):
while not self.event.is_set():
time.sleep(1)
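A hedged usage sketch for the Event-based shutdown (assuming the myproc class just above):

from multiprocessing import Event
import time

if __name__ == '__main__':
    event = Event()
    proc = myproc(event)
    proc.start()
    time.sleep(3)  # stand-in for the main program doing other work
    event.set()    # signal the child that it is time to exit
    proc.join()    # now returns promptly instead of blocking forever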
Alternatively, if your work is encapsulated in a queue - you can simply have the child process work off of the queue until it encounters a sentinel (typically a None entry in the queue) and then shut down.
Both of these suggestions mean that prior to calling .join() you can set the event or insert the sentinel, and when join() is called, the process will finish its current task and then exit properly.
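For example, a minimal sketch of the sentinel approach (hypothetical names):

from multiprocessing import Process, Queue

def worker(q):
    while True:
        item = q.get()
        if item is None:  # the sentinel: shut down cleanly
            break
        print(item)  # stand-in for the real work

if __name__ == '__main__':
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    for item in range(3):
        q.put(item)
    q.put(None)  # insert the sentinel before joining
    p.join()     # returns once the worker consumes the sentinel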