I'm having problems running multithreaded tasks using Python RQ (tested on v0.5.6 and v0.6.0).
Consider the following piece of code, as a simplified version of what I'm trying to achieve:
thing.py
from threading import Thread

class MyThing(object):
    def say_hello(self):
        while True:
            print "Hello World"

    def hello_task(self):
        t = Thread(target=self.say_hello)
        t.daemon = True  # seems like it makes no difference
        t.start()
        t.join()
main.py
from rq import Queue
from redis import Redis
from thing import MyThing
conn = Redis()
q = Queue(connection=conn)
q.enqueue(MyThing().say_hello, timeout=5)
When executing main.py (while rqworker is running in the background), the job is killed by the timeout within 5 seconds, as expected.
The problem is that when I enqueue a task that spawns threads, such as MyThing().hello_task, the thread runs forever and nothing happens when the 5-second timeout is over.
How can I run a multithreaded task with RQ, such that the timeout will kill the task, its sons, grandsons and their wives?
When you call t.join(), the hello_task thread blocks and waits until the say_hello thread returns, and therefore never receives the timeout signal from RQ. You can allow the main thread to run and properly receive the timeout signal by calling Thread.join with a set amount of time to wait, looping while the thread finishes running. Like so:
def hello_task(self):
    t = Thread(target=self.say_hello)
    t.start()
    while t.isAlive():
        t.join(1)  # Block for at most 1 second
That way you could also catch the timeout exception and handle it, if you wish:
from rq.timeouts import JobTimeoutException

def hello_task(self):
    t = Thread(target=self.say_hello)
    t.start()
    try:
        while t.isAlive():
            t.join(1)  # Block for at most 1 second
    except JobTimeoutException:  # raised by rq when the job hits its timeout
        print "Thread killed due to timeout"
        raise
In my Python application, I have a function that consumes messages from an Amazon SQS FIFO queue.
import json
import threading
import time

import boto3

def consume_msgs():
    sqs = boto3.client('sqs',
                       region_name='us-east-1',
                       aws_access_key_id=AWS_ACCESS_KEY_ID,
                       aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
    print('STARTING WORKER listening on {}'.format(QUEUE_URL))
    while 1:
        response = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=1,
            WaitTimeSeconds=10,
        )
        messages = response.get('Messages', [])
        for message in messages:
            try:
                print('{} > {}'.format(threading.currentThread().getName(), message.get('Body')))
                body = json.loads(message.get('Body'))
                sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message.get('ReceiptHandle'))
            except Exception as e:
                print('Exception in worker > ', e)
                sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message.get('ReceiptHandle'))
        time.sleep(10)  # pause between polls
In order to scale up, I am using multi threading to process messages.
if __name__ == '__main__':
    for i in range(3):
        t = threading.Thread(target=consume_msgs, name='worker-%s' % i)
        t.setDaemon(True)
        t.start()
    while True:
        print('Waiting')
        time.sleep(5)
The application runs as a service. If I need to deploy a new release, it has to be restarted. Is there a way to have the threads exit gracefully when the main process is being terminated? Instead of being killed abruptly, they would finish with the current message first and stop receiving new messages.
Since your threads keep looping, you cannot just join them; you also need to signal them that it's time to break out of the loop. This hint from the docs might be useful:
Daemon threads are abruptly stopped at shutdown. Their resources (such as open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signalling mechanism such as an Event.
With that, I've put the following example together, which can hopefully help a bit:
from threading import Thread, Event
from time import sleep

def fce(ident, wrap_up_event):
    cnt = 0
    while True:
        print(f"{ident}: {cnt}", wrap_up_event.is_set())
        sleep(3)
        cnt += 1
        if wrap_up_event.is_set():
            break
    print(f"{ident}: Wrapped up")

if __name__ == '__main__':
    wanna_exit = Event()
    for i in range(3):
        t = Thread(target=fce, args=(i, wanna_exit))
        t.start()
    sleep(5)
    wanna_exit.set()
A single event instance is passed to fce, which keeps running endlessly, but at the end of each iteration, before going back to the top, it checks whether the event has been set. Before exiting the script, we set this event from the controlling thread. Since the threads are no longer marked as daemon threads, we do not have to explicitly join them.
Depending on how exactly you want to shut down your script, you will need to handle an incoming signal (SIGTERM perhaps) or the KeyboardInterrupt exception for SIGINT, and perform your clean-up before exiting; the mechanics remain the same. Apart from not letting Python just stop execution right away, you need to let your threads know they should not re-enter the loop, and then wait for them to be joined.
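For SIGTERM there is no exception to catch, so a handler has to be installed explicitly. A minimal sketch (reusing the wanna_exit event from the example above) could look like this:
import signal

def handle_sigterm(signum, frame):
    # just flag the workers; they will break out of fce on their own
    wanna_exit.set()

signal.signal(signal.SIGTERM, handle_sigterm)
With that in place, sending SIGTERM lets the threads finish their current iteration before the process exits.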
The SIGINT is a bit simpler, because it's exposed as a python exception and you could do for instance this for the "main" bit:
if __name__ == '__main__':
    wanna_exit = Event()
    for i in range(3):
        t = Thread(target=fce, args=(i, wanna_exit))
        t.start()
    try:
        while True:
            sleep(5)
            print('Waiting')
    except KeyboardInterrupt:
        pass
    wanna_exit.set()
You can of course send SIGINT to a process with kill and not only from the controlling terminal.
I wrote a program that uses threads to keep a connection alive while the main program loops until it either hits an exception or is manually closed. My program runs in 1-hour intervals and the timeout for the connection is 20 minutes, so I spawn a thread for every connection element that exists inside my architecture. Thus, if we have two servers to connect to, it connects to both of them, stays connected, and loops through each server retrieving data.
The program I wrote works correctly, however I can't seem to find a way to handle the case where the program itself throws an exception. That is to say, I can't find an appropriate way to dispose of the threads when the main program raises an exception. When that happens, the program just hangs open, because the threads don't exit as well, and it has to be closed manually.
Any suggestions on how to handle cleaning up threads on program exit?
This is my thread:
def keep_vc_alive(vcenter, credentials, api):
    vm_url = str(vcenter._proxy.binding.url).split('/')[2]
    while True:
        try:
            logging.info('staying connected %s' % str(vm_url))
            vcenter.keep_session_alive()
        except:
            logging.info('unable to call current time of vcenter %s attempting to reconnect.' % str(vm_url))
            try:
                vcenter = None
                connected, api_version, uuid, vcenter = vcenter_open(60, api, *credentials)
            except:
                logging.critical('unable to call current time of vcenter %s killing application, please have administrator restart the module.' % str(vm_url))
                break
        time.sleep(60*10)
Then my exception clean-up code is as follows. Obviously I know .stop() doesn't work, but I honestly have no idea how to do what it is I'm trying to do.
except Abort:  # Exit without clearing the semaphore
    logging.exception('ApplicationError')
    try:
        config_values_vc = metering_config('VSphere', ['vcenter-ip', 'username', 'password', 'api-version'])
        for k in xrange(0, len(config_values_vc['username'])):  # Loop through each vcenter server
            vc_thread[config_values_vc['vcenter-ip'][k]].stop()
    except:
        pass
    # disconnect vcenter
    try:
        for vcenter in list_of_vc_connections:
            list_of_vc_connections[vcenter].disconnect()
    except:
        pass
    try:  # Close the db if it is open (db is defined)
        db.close()
    except:
        pass
    sys.exit(1)
except SystemExit:
    raise
except:
    logging.exception('ApplicationError')
    semaphore('ComputeLoader', False)
    logging.critical('Unexpected error: %s' % sys.exc_info()[0])
    raise
Instead of sleeping, wait on a threading.Event():
def keep_vc_alive(vcenter, credentials, api, event):  # event is a threading.Event()
    vm_url = str(vcenter._proxy.binding.url).split('/')[2]
    while not event.is_set():  # If the event got set, we exit the thread
        try:
            logging.info('staying connected %s' % str(vm_url))
            vcenter.keep_session_alive()
        except:
            logging.info('unable to call current time of vcenter %s attempting to reconnect.' % str(vm_url))
            try:
                vcenter = None
                connected, api_version, uuid, vcenter = vcenter_open(60, api, *credentials)
            except:
                logging.critical('unable to call current time of vcenter %s killing application, please have administrator restart the module.' % str(vm_url))
                break
        event.wait(timeout=60*10)  # Wait until the timeout expires, or the event is set
Then, in your main thread, set the event in the exception handling code:
except Abort:  # Exit without clearing the semaphore
    logging.exception('ApplicationError')
    event.set()  # keep_alive thread will wake up, see that the event is set, and exit
The generally accepted way to stop threads in Python is to use the threading.Event object.
The algorithm followed usually is something like the following:
import threading
import time
...
threads = []

#in the main program
stop_event = threading.Event()

#create thread and store thread and stop_event together
thread = threading.Thread(target=keep_vc_alive, args=(stop_event,))  # note the trailing comma: args must be a tuple
threads.append((thread, stop_event))

#execute thread
thread.start()
...

#in thread (i.e. keep_vc_alive)
#check is_set in stop_event
while not stop_event.is_set():
    #receive data from server, etc
    ...
...

#in exception handler
except Abort:
    #set the stop_events
    for thread, stop_event in threads:
        stop_event.set()
    #wait for threads to stop
    while 1:
        #check for any alive threads
        all_finished = True
        for thread, stop_event in threads:
            if thread.is_alive():
                all_finished = False
        if all_finished:
            break
        #keep cpu down
        time.sleep(1)
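If you do not need to do anything else while waiting, a simpler variant of that final wait loop (a sketch, assuming the stop events have already been set as above) is to join each thread directly:
#wait for threads to stop (alternative to the polling loop)
for thread, stop_event in threads:
    thread.join()  # returns once the thread has left its while loop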
In the following code
import threading
import urllib.request

def sendPostRequest():
    request = urllib.request.Request(myURL, myBody, myHeaders)
    print("created POST request", request)
    response = urllib.request.urlopen(request)
    print("finished POST", response)

for i in range(5):
    t = threading.Thread(target=sendPostRequest)
    t.daemon = True  # thread dies when main thread (only non-daemon thread) exits.
    t.start()
the line print("finished POST", response) is never reached, even though I can observe in the server logs that the request arrived successfully. The line print("created POST request", request) is reached, however.
Why is this the case?
The code makes the threads daemon threads.
According to the threading documentation:
A thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left. The initial value is inherited from the creating thread. The flag can be set through the daemon property or the daemon constructor argument.
The program may therefore end before the response is returned from the server.
Instead of using daemon threads, use non-daemon threads, or explicitly wait for the started threads to finish using Thread.join:
threads = []
for i in range(5):
    t = threading.Thread(target=sendPostRequest)
    t.start()
    threads.append(t)

for t in threads:
    t.join()
I wanted to use threading in Python to download lots of webpages, and went through the following code, which uses queues, on one of the websites.
It runs an infinite while loop. Does each thread run continuously, without ending, until all of them are complete? Am I missing something?
#!/usr/bin/env python
import Queue
import threading
import urllib2
import time

hosts = ["http://yahoo.com", "http://google.com", "http://amazon.com",
         "http://ibm.com", "http://apple.com"]

queue = Queue.Queue()

class ThreadUrl(threading.Thread):
    """Threaded Url Grab"""
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue

    def run(self):
        while True:
            #grabs host from queue
            host = self.queue.get()

            #grabs urls of hosts and prints first 1024 bytes of page
            url = urllib2.urlopen(host)
            print url.read(1024)

            #signals to queue job is done
            self.queue.task_done()

start = time.time()

def main():
    #spawn a pool of threads, and pass them queue instance
    for i in range(5):
        t = ThreadUrl(queue)
        t.setDaemon(True)
        t.start()

    #populate queue with data
    for host in hosts:
        queue.put(host)

    #wait on the queue until everything has been processed
    queue.join()

main()
print "Elapsed Time: %s" % (time.time() - start)
Setting the threads to be daemon threads causes them to exit when the main thread is done. But yes, you are correct in that your threads will run continuously for as long as there is something in the queue; otherwise they will block on queue.get.
The Queue documentation explains this detail.
The Python threading documentation explains the daemon part as well.
The entire Python program exits when no alive non-daemon threads are left.
So, when the queue is emptied, queue.join returns; then, when the interpreter exits, the daemon threads die with it.
EDIT: Correction on default behavior for Queue
Your script works fine for me, so I assume you are asking what is going on so you can understand it better. Yes, your subclass puts each thread in an infinite loop, waiting on something to be put in the queue. When something is found, it grabs it and does its thing. Then, the critical part, it notifies the queue that it's done with queue.task_done, and resumes waiting for another item in the queue.
While all this is going on with the worker threads, the main thread is waiting (join) until all the tasks in the queue are done, which will be when the threads have called queue.task_done the same number of times as there were messages in the queue. At that point the main thread finishes and exits. Since these are daemon threads, they close down too.
This is cool stuff, threads and queues. It's one of the really good parts of Python. You will hear all kinds of stuff about how threading in Python is screwed up with the GIL and such. But if you know where to use them (like in this case with network I/O), they will really speed things up for you. The general rule is if you are I/O bound, try and test threads; if you are cpu bound, threads are probably not a good idea, maybe try processes instead.
good luck,
Mike
I don't think Queue is necessary in this case. Using only Thread:
import threading, urllib2, time

hosts = ["http://yahoo.com", "http://google.com", "http://amazon.com",
         "http://ibm.com", "http://apple.com"]

class ThreadUrl(threading.Thread):
    """Threaded Url Grab"""
    def __init__(self, host):
        threading.Thread.__init__(self)
        self.host = host

    def run(self):
        #grabs urls of hosts and prints first 1024 bytes of page
        url = urllib2.urlopen(self.host)
        print url.read(1024)

start = time.time()

def main():
    #spawn a pool of threads
    for i in range(len(hosts)):
        t = ThreadUrl(hosts[i])
        t.start()

main()
print "Elapsed Time: %s" % (time.time() - start)
I am implementing a pipeline pattern with zeroMQ, using the Python bindings.
Tasks are fanned out to workers, which listen for new tasks in an infinite loop like this:
while True:
    socks = dict(self.poller.poll())
    if self.receiver in socks and socks[self.receiver] == zmq.POLLIN:
        msg = self.receiver.recv_unicode(encoding='utf-8')
        self.process(msg)
    if self.hear in socks and socks[self.hear] == zmq.POLLIN:
        msg = self.hear.recv()
        print self.pid, ":", msg
        sys.exit(0)
They exit when they get a message from the sink node confirming that all the expected results have been received.
However, a worker may miss such a message and never finish. What is the best way to have workers always finish, when they have no way of knowing (other than through the already mentioned message) that there are no further tasks to process?
Here is the testing code I wrote for checking the workers status:
#-*- coding:utf-8 -*-
"""
Test module containing tests for all modules of pypln
"""

import unittest
from servers.ventilator import Ventilator
from subprocess import Popen, PIPE
import time

class testWorkerModules(unittest.TestCase):
    def setUp(self):
        self.nw = 4
        #spawn 4 workers
        self.ws = [Popen(['python', 'workers/dummy_worker.py'], stdout=None) for i in range(self.nw)]
        #spawn a sink
        self.sink = Popen(['python', 'sinks/dummy_sink.py'], stdout=None)
        #start a ventilator
        self.V = Ventilator()
        #wait for workers and sinks to connect
        time.sleep(1)

    def test_send_unicode(self):
        '''
        Pushing unicode strings through workers to sinks.
        '''
        self.V.push_load([u'são joão' for i in xrange(80)])
        time.sleep(1)
        #[p.wait() for p in self.ws]  # wait for the workers to terminate
        wsr = [p.poll() for p in self.ws]
        while None in wsr:
            print wsr, [p.pid for p in self.ws if p.poll() == None]  # these are the unfinished workers
            time.sleep(0.5)
            wsr = [p.poll() for p in self.ws]
        self.sink.wait()
        self.sink = self.sink.returncode
        self.assertEqual([0]*self.nw, wsr)
        self.assertEqual(0, self.sink)

if __name__ == '__main__':
    unittest.main()
All the messaging stuff eventually ends up with heartbeats. If you (as a worker or a sink or whatever) discover that a component you need to work with is dead, you can basically either try to connect somewhere else or kill yourself. So if you, as a worker, discover that the sink is no longer there, just exit. This also means that you may exit even though the sink is still there but the connection is broken. But I am not sure you can do more, except perhaps set all the timeouts more reasonably...
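As a sketch of that idea (assuming the same self.poller, self.receiver and self.hear sockets as in the worker loop above, and invented POLL_TIMEOUT_MS/MAX_IDLE_POLLS constants), a worker could pass a timeout to poll and exit after too many idle cycles, treating prolonged silence as a dead pipeline:
import sys
import zmq

POLL_TIMEOUT_MS = 1000  # wake up at least once a second
MAX_IDLE_POLLS = 60     # give up after roughly a minute of silence

idle = 0
while True:
    socks = dict(self.poller.poll(POLL_TIMEOUT_MS))  # empty dict on timeout
    if not socks:
        idle += 1
        if idle >= MAX_IDLE_POLLS:
            sys.exit(0)  # assume the ventilator and sink are gone
        continue
    idle = 0  # traffic seen, reset the idle counter
    if self.receiver in socks and socks[self.receiver] == zmq.POLLIN:
        self.process(self.receiver.recv_unicode(encoding='utf-8'))
    if self.hear in socks and socks[self.hear] == zmq.POLLIN:
        self.hear.recv()  # sink confirms all results received
        sys.exit(0)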