execute functions in a queue - python

I have an example that should show what I'd like to do:
import time

queue = 2

def function():
    print 'abcd'
    time.sleep(3)

def exec_times(times):
    # do something
    function()

def exec_queue(queue):
    # do something
    function()

exec_times(3)
# other things need to keep working while it waits for the functions to finish
time.sleep(10)
The result should be:
abcd
abcd
# after the first two function executions finish
abcd
So, is there a way to do that without using threads? I mean, is there some glib function that does this job?

If you want to avoid threads, one option is to use multiple processes. If you're on Python 2.6, take a look at the multiprocessing module. If you're on Python 2.5, look at pyprocessing.
Note the "Process Pools" section in the docs for multiprocessing, which seems to handle your requirements:
One can create a pool of processes which will carry out tasks submitted to it with the Pool class.
class multiprocessing.Pool([processes[, initializer[, initargs[, maxtasksperchild]]]])
A process pool object which controls a pool of worker processes to which jobs
can be submitted. It supports asynchronous results with timeouts and callbacks
and has a parallel map implementation.
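For illustration, here is a minimal sketch (my own, not from the docs) of how the example above might map onto a pool of two worker processes; function and exec_times come from the question, everything else is an assumption:
import time
from multiprocessing import Pool

def function():
    print('abcd')
    time.sleep(3)

def exec_times(pool, times):
    # submit `times` jobs; only two run at once because the pool has two workers
    return [pool.apply_async(function) for _ in range(times)]

if __name__ == '__main__':
    pool = Pool(processes=2)  # plays the role of "queue = 2" in the question
    exec_times(pool, 3)
    # the main program keeps working here while the pool prints 'abcd'
    time.sleep(10)
    pool.close()
    pool.join()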

Related

How to do this with python multiprocessing pool

I frequently use the pattern below to parallelize tasks in Python. I do it this way because filling the input queue is quick, and once the processes are launched and running asynchronously, I can call a blocking get() in a loop and pull the results out as they are ready. For tasks which take days, this is great because I can do things like report progress.
from multiprocessing import Process, Queue

class worker():
    def __init__(self, init_dict):
        self.init_dict = init_dict

    def do_work(self, task_args):
        return task_args  # placeholder for the real work

    def __call__(self, task_queue, done_queue):
        # pull tasks until a None sentinel arrives (shutdown handling glossed over below)
        for task_args in iter(task_queue.get, None):
            task_result = self.do_work(task_args)
            done_queue.put(task_result)

if __name__ == "__main__":
    n_threads = 8
    init_dict = {}  # whatever we need to set up our class
    worker_class = worker(init_dict)
    task_queue = Queue()
    done_queue = Queue()
    some_iterator = [1, 2, 3, 4, 5]  # or, more typically, a list of files to chew through
    for task in some_iterator:
        task_queue.put(task)
    for i in range(n_threads):
        Process(target=worker_class, args=(task_queue, done_queue)).start()
    for i in range(len(some_iterator)):
        result = done_queue.get()
        # do something with result
        # print out progress stats, whatever, as tasks complete
I have glossed over a few details like catching errors, dealing with things that fail, killing zombie processes, exiting at the end of the task queue, and catching tracebacks, but you get the idea. I really love this pattern and it works perfectly for my needs. I have a lot of code that uses it.
I need more computing power, though, and want to spread the work across a cluster. Ray offers a multiprocessing pool with an API that matches Python's multiprocessing. I just can't work out how to get the above pattern to work with it. Mainly I get:
RuntimeError: Queue objects should only be shared between processes through inheritance
Does anybody have any recommendations for how I can get results as they are ready from a queue when using a pool, rather than n separate processes?
I appreciate that if I do a massive rewrite, there are probably other ways to get what I want from Ray, but I have a lot of code like this, so I want to try and keep changes minimal.
Thanks
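(For comparison, not part of the original question: with a standard multiprocessing.Pool, the same "results as they are ready" behaviour can be sketched with imap_unordered, as below; whether Ray's drop-in pool replicates this exactly is an assumption to verify against its docs.)
from multiprocessing import Pool

def do_work(task_args):
    return task_args * 10  # stand-in for the real work

if __name__ == "__main__":
    some_iterator = [1, 2, 3, 4, 5]
    with Pool(processes=8) as pool:
        # imap_unordered yields results as workers finish, so progress can be
        # reported without managing task/done queues by hand
        for result in pool.imap_unordered(do_work, some_iterator):
            print(result)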

How to make a python thread dependent on the completion of another thread?

Basically I want to make something like 15000 GET requests of the form GET www.somewebsite.com/archive/1, www.somewebsite.com/archive/2, and so on, and write each response's content to its own file locally. But doing all of those in order takes a while, and doing them all with their own thread results in all sorts of I/O and HTTP errors. However, if I do, say, 50 at a time, it works fine. What I want to do is create a chunk thread that I spawn 50 threads off of, and then spawn another chunk thread when that one is finished. But I haven't found a way to do this.
I need a way to say "don't execute any more lines until this thread is completed" or a way to queue up threads that get executed asynchronously in order.
Python has a built-in library, multiprocessing, that will allow you to implement simple batch processing with a queue.
import multiprocessing

static_input = range(100)
chunksize = 10

def work(item):
    return "Number " + str(item)

with multiprocessing.Pool() as pool:
    for out in pool.imap_unordered(work, static_input, chunksize):
        print(out)
"You need to use join method of Thread object in the end of the script."
This has been stated here by maksim skurydzin.
You might also want to take a look at the multiprocessing module here.
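To illustrate the join advice in the 50-at-a-time scenario from the question, here is a minimal sketch; fetch_archive and the chunk size are assumptions, not code from the question:
import threading

def fetch_archive(i):
    # hypothetical worker: GET www.somewebsite.com/archive/<i> and save it locally
    pass

all_ids = range(15000)
chunk_size = 50

for start in range(0, len(all_ids), chunk_size):
    chunk = all_ids[start:start + chunk_size]
    threads = [threading.Thread(target=fetch_archive, args=(i,)) for i in chunk]
    for t in threads:
        t.start()
    # block until every thread in this chunk finishes before starting the next chunk
    for t in threads:
        t.join()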

What kind of problems (if any) would there be combining asyncio with multiprocessing?

As almost everyone is aware when they first look at threading in Python, there is the GIL that makes life miserable for people who actually want to do processing in parallel - or at least give it a chance.
I am currently looking at implementing something like the Reactor pattern. Effectively I want to listen for incoming socket connections on one thread-like, and when someone tries to connect, accept that connection and pass it along to another thread-like for processing.
I'm not (yet) sure what kind of load I might be facing. I know there is currently a 2 MB cap set on incoming messages. Theoretically we could get thousands per second (though I don't know if practically we've seen anything like that). The amount of time spent processing a message isn't terribly important, though obviously quicker would be better.
I was looking into the Reactor pattern, and developed a small example using the multiprocessing library that (at least in testing) seems to work just fine. However, now/soon we'll have the asyncio library available, which would handle the event loop for me.
Is there anything that could bite me by combining asyncio and multiprocessing?
You should be able to safely combine asyncio and multiprocessing without too much trouble, though you shouldn't be using multiprocessing directly. The cardinal sin of asyncio (and any other event-loop based asynchronous framework) is blocking the event loop. If you try to use multiprocessing directly, any time you block to wait for a child process, you're going to block the event loop. Obviously, this is bad.
The simplest way to avoid this is to use BaseEventLoop.run_in_executor to execute a function in a concurrent.futures.ProcessPoolExecutor. ProcessPoolExecutor is a process pool implemented using multiprocessing.Process, but asyncio has built-in support for executing a function in it without blocking the event loop. Here's a simple example:
import time
import asyncio
from concurrent.futures import ProcessPoolExecutor

def blocking_func(x):
    time.sleep(x)  # Pretend this is expensive calculations
    return x * 5

@asyncio.coroutine
def main():
    #pool = multiprocessing.Pool()
    #out = pool.apply(blocking_func, args=(10,))  # This blocks the event loop.
    executor = ProcessPoolExecutor()
    out = yield from loop.run_in_executor(executor, blocking_func, 10)  # This does not
    print(out)

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
For the majority of cases, this function alone is good enough. If you find yourself needing other constructs from multiprocessing, like Queue, Event, Manager, etc., there is a third-party library called aioprocessing (full disclosure: I wrote it) that provides asyncio-compatible versions of all the multiprocessing data structures. Here's an example demoing that:
import time
import asyncio
import aioprocessing
import multiprocessing

def func(queue, event, lock, items):
    with lock:
        event.set()
        for item in items:
            time.sleep(3)
            queue.put(item + 5)
    queue.close()

@asyncio.coroutine
def example(queue, event, lock):
    l = [1, 2, 3, 4, 5]
    p = aioprocessing.AioProcess(target=func, args=(queue, event, lock, l))
    p.start()
    while True:
        result = yield from queue.coro_get()
        if result is None:
            break
        print("Got result {}".format(result))
    yield from p.coro_join()

@asyncio.coroutine
def example2(queue, event, lock):
    yield from event.coro_wait()
    with (yield from lock):
        yield from queue.coro_put(78)
        yield from queue.coro_put(None)  # Shut down the worker

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    queue = aioprocessing.AioQueue()
    lock = aioprocessing.AioLock()
    event = aioprocessing.AioEvent()
    tasks = [
        asyncio.ensure_future(example(queue, event, lock)),   # asyncio.async() in older Python
        asyncio.ensure_future(example2(queue, event, lock)),
    ]
    loop.run_until_complete(asyncio.wait(tasks))
    loop.close()
Yes, there are quite a few bits that may (or may not) bite you.
When you run something like asyncio it expects to run on one thread or process. This does not (by itself) work with parallel processing. You somehow have to distribute the work while leaving the IO operations (specifically those on sockets) in a single thread/process.
While your idea to hand off individual connections to a different handler process is nice, it is hard to implement. The first obstacle is that you need a way to pull the connection out of asyncio without closing it. The next obstacle is that you cannot simply send a file descriptor to a different process unless you use platform-specific (probably Linux) code from a C-extension.
Note that the multiprocessing module is known to create a number of threads for communication. Most of the time when you use communication structures (such as Queues), a thread is spawned. Unfortunately those threads are not completely invisible: for instance, they can fail to tear down cleanly (when you intend to terminate your program), and depending on their number the resource usage may be noticeable on its own.
If you really intend to handle individual connections in individual processes, I suggest examining different approaches. For instance, you can put a socket into listen mode and then simultaneously accept connections from multiple worker processes in parallel. Once a worker is finished processing a request, it can go accept the next connection, so you still use fewer resources than forking a process for each connection. SpamAssassin and Apache (mpm_prefork) use this worker model, for instance. It might end up easier and more robust depending on your use case. Specifically, you can make your workers die after serving a configured number of requests and be respawned by a master process, thereby eliminating much of the negative effect of memory leaks.
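As a rough illustration of that shared-listening-socket approach, here is a minimal sketch (my own, assuming a Unix fork start method; the request handling is a placeholder):
import socket
from multiprocessing import Process

def worker(server_sock):
    # each worker blocks in accept() on the same inherited listening socket;
    # the kernel hands every incoming connection to exactly one worker
    while True:
        conn, addr = server_sock.accept()
        try:
            data = conn.recv(4096)  # placeholder request handling
            conn.sendall(data)
        finally:
            conn.close()

if __name__ == "__main__":
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", 8000))
    server.listen(128)
    workers = [Process(target=worker, args=(server,)) for _ in range(4)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()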
Based on @dano's answer above, I wrote this function to replace the places where I used to use a multiprocessing pool + map.
import asyncio
from typing import Callable
from concurrent.futures import ProcessPoolExecutor

def asyncio_friendly_multiproc_map(fn: Callable, l: list):
    """
    This is designed to replace the use of this pattern:
        with multiprocessing.Pool(5) as p:
            results = p.map(analyze_day, list_of_days)
    by letting the caller drop in:
        asyncio_friendly_multiproc_map(analyze_day, list_of_days)
    """
    tasks = []
    with ProcessPoolExecutor(5) as executor:
        for e in l:
            tasks.append(asyncio.get_event_loop().run_in_executor(executor, fn, e))
        res = asyncio.get_event_loop().run_until_complete(asyncio.gather(*tasks))
    return res
See PEP 3156, in particular the section on Thread interaction:
http://www.python.org/dev/peps/pep-3156/#thread-interaction
This clearly documents the new asyncio methods you might use, including run_in_executor(). Note that Executor is defined in concurrent.futures; I suggest you also have a look there.
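For reference, a minimal sketch of using a concurrent.futures executor on its own, without asyncio (square is just a stand-in for real work):
from concurrent.futures import ProcessPoolExecutor, as_completed

def square(x):
    return x * x  # stand-in for real CPU-bound work

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as executor:
        futures = [executor.submit(square, i) for i in range(10)]
        # as_completed yields each future as its result becomes available
        for fut in as_completed(futures):
            print(fut.result())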

Non blocking python process or thread

I have a simple app that listens to a socket connection. Whenever certain chunks of data come in, a callback handler is called with that data. In that callback I want to send my data to another process or thread, as it could take a long time to deal with. I was originally running the code in the callback function, but it blocks!
What's the proper way to spin off a new task?
threading is the library usually used for lightweight, I/O-bound multithreading. The multiprocessing library is another option, designed more for running CPU-intensive parallel computing tasks; threading is generally the recommended library in your case.
Example
import threading, time

def my_threaded_func(arg, arg2):
    print("Running thread! Args:", (arg, arg2))
    time.sleep(10)
    print("Done!")

thread = threading.Thread(target=my_threaded_func, args=("I'ma", "thread"))
thread.start()
print("Spun off thread")
The multiprocessing module has worker pools. If you don't need a pool of workers, you can use Process to run something in parallel with your main program.
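A minimal sketch of that Process approach (the handler name and payload are assumptions, not from the question):
from multiprocessing import Process

def handle_data(data):
    # hypothetical long-running handler for one chunk of data
    print("processing", data)

if __name__ == "__main__":
    p = Process(target=handle_data, args=("some chunk",))
    p.start()  # returns immediately, so the callback is not blocked
    # ... the main program keeps listening to the socket here ...
    p.join()   # optionally wait for the worker later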
import threading
from time import sleep
import sys

# assume function defs ...

class myThread(threading.Thread):
    def __init__(self, threadID):
        threading.Thread.__init__(self)
        self.threadID = threadID

    def run(self):
        if self.threadID == "run_exe":
            run_exe()

def main():
    itemList = getItems()
    for item in itemList:
        thread = myThread("run_exe")
        thread.start()
        sleep(.1)
        listenToSocket(item)
        while thread.is_alive():
            pass  # a way to wait for the thread to finish before looping

main()
sys.exit(0)
The sleep between thread.start() and listenToSocket(item) ensures that the thread is established before you begin to listen. I implemented this code in a unit-test framework where I had to launch multiple non-blocking processes (len(itemList) times) because my other testing framework (listenToSocket(item)) was dependent on the processes.
run_exe() can trigger a subprocess call that can be blocking (i.e. invoking pipe.communicate()), so that output data from the execution will still be printed in time with the Python script's output. But the nature of threading makes this OK.
So this code solves two problems: printing the data of a subprocess without blocking script execution, AND dynamically creating and starting multiple threads sequentially (which makes maintenance of the script easier if I ever add more items to my itemList later).
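A hypothetical run_exe along those lines (a sketch only; the executable path is a placeholder):
import subprocess

def run_exe():
    # launch an external program; communicate() blocks, but only inside this
    # worker thread, so the rest of the script keeps running
    pipe = subprocess.Popen(["./some_executable"], stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    out, _ = pipe.communicate()
    print(out.decode())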

How do I run two python loops concurrently?

Suppose I have the following in Python:
# A loop
for i in range(10000):
    Do Task A

# B loop
for i in range(10000):
    Do Task B
How do I run these loops simultaneously in Python?
If you want concurrency, here's a very simple example:
from multiprocessing import Process

def loop_a():
    while 1:
        print("a")

def loop_b():
    while 1:
        print("b")

if __name__ == '__main__':
    Process(target=loop_a).start()
    Process(target=loop_b).start()
This is just the most basic example I could think of. Be sure to read http://docs.python.org/library/multiprocessing.html to understand what's happening.
If you want to send data back to the program, I'd recommend using a Queue (which in my experience is easiest to use).
You can use a thread instead if you don't mind the global interpreter lock. Processes are more expensive to instantiate but they offer true concurrency.
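Picking up the Queue suggestion above, here is a minimal sketch (my own, not from the original answer) of sending data back from a child process:
from multiprocessing import Process, Queue

def loop_a(q):
    for i in range(5):
        q.put("a%d" % i)  # send data back to the parent process

if __name__ == '__main__':
    q = Queue()
    p = Process(target=loop_a, args=(q,))
    p.start()
    for _ in range(5):
        print(q.get())  # blocks until the child has put an item
    p.join()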
There are many possible options for what you want:
use a loop
As many people have pointed out, this is the simplest way.
for i in xrange(10000):
    # use xrange instead of range
    taskA()
    taskB()
Merits: easy to understand and use, no extra library needed.
Drawbacks: taskB must be done after taskA (or the other way around); they can't run simultaneously.
multiprocessing
Another thought would be: run the two tasks as two processes at the same time. Python provides the multiprocessing library; the following is a simple example:
from multiprocessing import Process

p1 = Process(target=taskA, args=args, kwargs=kwargs)
p2 = Process(target=taskB, args=args, kwargs=kwargs)
p1.start()
p2.start()
Merits: tasks can run simultaneously in the background; you can control them (end or stop them, etc.); tasks can exchange data and can be synchronized if they compete for the same resources, etc.
Drawbacks: too heavy! The OS will frequently switch between them, and each has its own data space even if the data is redundant. If you have a lot of tasks (say 100 or more), it's not what you want.
threading
threading is like multiprocessing, just more lightweight; check out this post. Their usage is quite similar:
import threading

p1 = threading.Thread(target=taskA, args=args, kwargs=kwargs)
p2 = threading.Thread(target=taskB, args=args, kwargs=kwargs)
p1.start()
p2.start()
coroutines
Libraries like greenlet and gevent provide something called coroutines, which are supposed to be lighter-weight than threads. A minimal gevent sketch is shown below; see their docs for details.
Merits: more flexible and lightweight.
Drawbacks: extra library needed; learning curve.
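A minimal gevent sketch (my own assumption, not from the original answer; gevent.sleep(0) is what lets the greenlets yield to each other):
import gevent

def loop_a():
    for i in range(5):
        print("a", i)
        gevent.sleep(0)  # yield control so the other greenlet can run

def loop_b():
    for i in range(5):
        print("b", i)
        gevent.sleep(0)

gevent.joinall([gevent.spawn(loop_a), gevent.spawn(loop_b)])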
Why do you want to run the two processes at the same time? Is it because you think they will go faster? (There is a good chance that they won't.) Why not run the tasks in the same loop, e.g.
for i in range(10000):
    doTaskA()
    doTaskB()
The obvious answer to your question is to use threads - see the python threading module. However threading is a big subject and has many pitfalls, so read up on it before you go down that route.
Alternatively you could run the tasks in separate processes, using the python multiprocessing module. If both tasks are CPU intensive this will make better use of multiple cores on your computer.
There are other options such as coroutines, stackless tasklets, greenlets, CSP etc., but without knowing more about Task A and Task B and why they need to be run at the same time it is impossible to give a more specific answer.
from threading import Thread

def loopA():
    for i in range(10000):
        pass  # Do task A

def loopB():
    for i in range(10000):
        pass  # Do task B

threadA = Thread(target=loopA)
threadB = Thread(target=loopB)
threadA.start()  # start(), not run(): run() would execute loopA in the current thread
threadB.start()

# Do work independent of loopA and loopB
threadA.join()
threadB.join()
You could use threading or multiprocessing.
How about a single loop: for i in range(10000): do Task A, then do Task B? Without more information I don't have a better answer.
I find that using the "pool" submodule within "multiprocessing" works amazingly well for executing multiple processes at once within a Python script.
See Section: Using a pool of workers
Look carefully at "# launching multiple evaluations asynchronously may use more processes" in the example. Once you understand what those lines are doing, the following example I constructed will make a lot of sense.
import numpy as np
from multiprocessing import Pool

def desired_function(option, processes, data):
    # your code goes here; `option` lets you make choices within your script
    # to execute different sections of code in each worker process
    return result_array  # "for example"

result_array = np.zeros(some_shape)  # normally populated by 1 loop, let's try 4
processes = 4
pool = Pool(processes=processes)
args = (processes, data)  # plus whatever else needs to be passed into the desired function
multiple_results = []
for i in range(processes):  # launches each worker asynchronously with option 1-4 in this case
    multiple_results.append(pool.apply_async(desired_function, (i + 1,) + args))
results = np.array([res.get() for res in multiple_results])  # retrieves results after
                                                             # every worker is finished!
for i in range(processes):
    result_array = result_array + results[i]  # combines all datasets!
The code will basically run the desired function for a set number of processes. You will have to carefully make sure your function can distinguish between each process (hence the "option" variable). Additionally, it doesn't have to be an array that is being populated in the end, but for my example that's how I used it. Hope this simplifies things or helps you better understand the power of multiprocessing in Python!
