I am trying to create a Python script that queries multiple sites. The script works well (I use urllib2), but only for one link at a time. For multiple sites, I make the requests one after the other, which is not very efficient.
What is the ideal solution (threads, I guess) to run multiple queries in parallel and stop the others when one query returns a specific string?
I found this question, but I haven't figured out how to adapt it to stop the remaining threads:
Python urllib2.urlopen() is slow, need a better way to read several urls
Thank you in advance!
(Sorry if I made mistakes in English; I'm French. ^^)
You can use Twisted to handle multiple requests concurrently. Internally it uses epoll (or IOCP or kqueue, depending on the platform) to be notified of TCP readiness efficiently, which is cheaper than using threads. Once one request matches, you cancel the others.
Here is the Twisted HTTP agent tutorial.
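For illustration, here is a rough, untested sketch of that approach using Twisted's Agent and readBody; the URLs and the target string are placeholders:

from twisted.internet import reactor
from twisted.web.client import Agent, readBody

TARGET = b"specific string"  # placeholder for the string you search for
URLS = [b"http://example.com/a", b"http://example.com/b"]  # placeholders

agent = Agent(reactor)
pending = {}  # url -> the Deferred for that request

def check_body(body, url):
    if TARGET in body:
        print("match at %s" % url)
        for other, d in pending.items():
            if other != url:
                d.cancel()  # cancel the losing requests
        reactor.stop()

def got_response(response, url):
    return readBody(response).addCallback(check_body, url)

for url in URLS:
    d = agent.request(b"GET", url)
    d.addCallback(got_response, url)
    d.addErrback(lambda _: None)  # swallow CancelledError from the losers
    pending[url] = d

reactor.run()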
Usually this is implemented with the following pattern (sorry, my Python skills are not so good).
You have a class named Runner. This class has a long-running method that gets the information you need. It also has a cancel method that interrupts the long-running method in some way (you can make the URL request object a member field, so the cancel method calls the equivalent of request.terminate()).
The long-running method needs to accept a callback function, which it uses to signal when it is done.
Then, before you start your threads, you create instances of that class and keep them in a list. In the same loop you can start their long-running methods, passing in a callback method from your main program.
And, in the callback method, you just go through the list of all the threaded objects and call their cancel methods.
Please edit my answer with any Python-specific implementation. :)
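Here is one possible Python sketch of that pattern (untested). Note that urllib2 offers no real request.terminate(), so cancel() can only flag the thread so that its result is ignored:

import threading
import urllib2  # the question uses urllib2 (Python 2)

class Runner(threading.Thread):
    def __init__(self, url, target, callback):
        threading.Thread.__init__(self)
        self.url = url
        self.target = target
        self.callback = callback          # signals the main program when done
        self.cancelled = threading.Event()

    def run(self):                        # the long-running method
        try:
            body = urllib2.urlopen(self.url).read()
        except urllib2.URLError:
            return
        if not self.cancelled.is_set() and self.target in body:
            self.callback(self)

    def cancel(self):
        self.cancelled.set()              # result will be ignored

runners = []

def on_match(winner):                     # the callback in the main program
    for r in runners:
        if r is not winner:
            r.cancel()
    print("match from %s" % winner.url)

for url in ["http://example.com/a", "http://example.com/b"]:
    r = Runner(url, "needle", on_match)
    runners.append(r)
    r.start()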
You can run your queries with the multiprocessing library, poll for results, and shut down the queries you no longer need. The module's documentation includes information on the Process class, which has a terminate() method. If you wish to limit the number of requests sent out, check out its pooling options.
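A rough sketch of that approach (untested; the URLs and target string are placeholders):

import multiprocessing
import urllib2

def fetch(url, target, results):
    try:
        body = urllib2.urlopen(url).read()
    except urllib2.URLError:
        return
    if target in body:
        results.put(url)

if __name__ == "__main__":
    urls = ["http://example.com/a", "http://example.com/b"]
    results = multiprocessing.Queue()
    procs = [multiprocessing.Process(target=fetch, args=(u, "needle", results))
             for u in urls]
    for p in procs:
        p.start()
    winner = results.get()  # blocks until the first match (forever if none)
    for p in procs:
        p.terminate()       # shut down the queries we no longer need
    print("first match: %s" % winner)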
I want to keep a Python class instance permanently alive so I can continually interact with it. The reason is that the class is highly memory-intensive, which means that (1) I cannot fit it into memory multiple times, and (2) loading it is prohibitively slow.
I have tried implementing this using both Pyro and RPYC, but it appears that these packages always delete the object and create a new one every time a new request is made (which is exactly what I don't want). However, I did find the following option for Pyro:
@Pyro4.behavior(instance_mode="single")
This ensures that only a single instance is created. However, since multiple requests may be made simultaneously, I am not 100% sure this is safe to do. Is there a better way to accomplish what I am trying to do?
Thanks in advance for any help; it is greatly appreciated! (I've been struggling with this for quite a while now.)
If you don't want to make your class thread-safe, you can set SERVERTYPE to "multiplex"; this causes all remote method calls to be processed sequentially.
https://pythonhosted.org/Pyro4/servercode.html#server-types-and-concurrency-model:
multiplexed server (servertype "multiplex")
This server uses a connection multiplexer to process all remote method calls sequentially. No threads are used in this server. It uses the best supported selector available on your platform (kqueue, poll, select). It means only one method call is running at a time, so if it takes a while to complete, all other calls are waiting for their turn (even when they are from different proxies). The instance mode used for registering your class, won’t change the way the concurrent access to the instance is done: in all cases, there is only one call active at all times. Your objects will never be called concurrently from different threads, because there are no threads. It does still affect when and how often Pyro creates an instance of your class.
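For illustration, a minimal server sketch (class and method names are made up) that combines the multiplexed server type with single instance mode:

import Pyro4

Pyro4.config.SERVERTYPE = "multiplex"  # all calls processed sequentially

@Pyro4.behavior(instance_mode="single")  # one instance, kept alive
@Pyro4.expose
class HeavyService(object):
    def query(self, x):
        return x * 2  # stand-in for the memory-intensive work

daemon = Pyro4.Daemon()
uri = daemon.register(HeavyService)
print("ready: %s" % uri)
daemon.requestLoop()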
I've got the following problem:
I have two different classes; let's call them the interface and the workers. The interface is supposed to accept requests from outside and multiplex them to several workers.
Contrary to almost every example I have found, my situation has several peculiarities:
The workers are not supposed to be recreated for every request.
The workers are different; a request for workers[0] cannot be answered by workers[1]. This multiplexing is done in interface.
I have a number of function-like calls which are difficult to model via events or simple queues.
There are a few different requests, which would make one queue per request difficult.
For example, assume that each worker is storing a single integer number (let's say the number of calls this worker received). In non-parallel processing, I'd use something like this:
class interface(object):
    workers = None  # set somewhere else

    def get_worker_calls(self, worker_id):
        return self.workers[worker_id].get_calls()

class worker(object):
    calls = 0

    def get_calls(self):
        self.calls += 1
        return self.calls
This, obviously, doesn't work. What does?
Or, maybe more relevantly, I don't have experience with multiprocessing. Is there a design paradigm I'm missing that would easily solve the above?
Thanks!
For reference, I have considered several approaches, and I was unable to find a good one:
Use one request queue and one answer queue. I've discarded this idea since it would either block the interface for the answer-time of the current worker (making it scale badly) or require sending extra information around.
Use one request queue where each message contains a pipe to return the answer to that request. After working around being unable to send pipes via pipes, I ran into problems with the pipe closing unless both ends were sent over the connection.
Use one request queue where each message contains a queue to return the answer to that request. This fails since I cannot send queues via queues, and the reduction trick doesn't work either.
The above also applies to the respective Manager-generated objects.
Multiprocessing means you have two or more separate processes running. There is no way for one process to directly access another's memory (as you can with multithreading).
Your best shot is to use some kind of external queue mechanism; you can start with Celery or RQ. RQ is simpler, but Celery has built-in monitoring.
But you have to know that multiprocessing will only work if Celery/RQ are able to "pack" (pickle) the needed functions/classes and send them to the other process. That means you have to use module-level functions (defined at the top of the file, not belonging to any class).
You can always implement it yourself; Redis is very simple, and ZeroMQ and RabbitMQ are also good.
The Beaver library is a good example of how to deal with multiprocessing in Python using a ZeroMQ queue.
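For example, a tiny RQ sketch (assuming a local Redis server and an rq worker process started separately; the function name is made up):

from redis import Redis
from rq import Queue

def get_worker_calls(worker_id):
    # Must live at module level so RQ can pickle a reference to it;
    # the body is a placeholder for the real work.
    return worker_id

q = Queue(connection=Redis())
job = q.enqueue(get_worker_calls, 0)  # executed by the rq worker process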
I have a program that opens a lot of URLs and downloads pictures.
The program has a function that manages the link opening and picture downloading; it contains a for loop and performs some operations on a priority queue. I want to run this function for no longer than a set time period: for example, if it has been running for longer than one hour, I want to terminate it and run the rest of the program (the other functions).
I was trying to find a solution and found two questions here on Stack Overflow.
The first solution uses only the time module: First solution
The second also uses the multiprocessing module: Second solution
Can someone suggest which one would be more appropriate to use in my program? I will write pseudocode for my function:
def fun(link):
    for link in linkList:
        if link not in queue:
            queue.push(link)
        else:
            queue.updatePriority(link)
    if queue:
        top = queue.pop()
        fun(top)
This function is called from another function:
def run(startLink):
    fun(startLink)
And the run() function is called from another module.
Which method is better to use in a program that contains a lot of modules and performs a lot of operations?
The asyncio module is ideal for this task.
You can create a future, then use asyncio.wait, which supports a timeout parameter.
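A minimal sketch, assuming Python 3.7+ and the fun/startLink names from the question:

import asyncio

async def main():
    loop = asyncio.get_running_loop()
    # Run the blocking fun() from the question in a worker thread.
    future = loop.run_in_executor(None, fun, startLink)
    done, pending = await asyncio.wait([future], timeout=3600)
    if pending:
        print("fun() exceeded one hour; continuing with the rest")
        # Note: a thread cannot be force-killed, so fun() keeps running
        # in the background unless it checks a flag itself.

asyncio.run(main())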
Using multiprocessing here would be a little bit tricky, because fun is consuming a priority queue (I'm assuming a Queue.PriorityQueue) that is coming from some other part of the program. That queue cannot easily be passed between processes - you would need to create a custom multiprocessing.BaseManager subclass, register the Queue.PriorityQueue class with it, and start up the Manager server, instantiate a PriorityQueue on the server, and use a Proxy to that instance everywhere you interact with the queue. That's a lot of overhead, and also hurts performance a bit.
Since it appears you don't actually want any concurrency here - you want the rest of the program to stop while fun is running - I don't think there's a compelling reason to use multiprocessing. Instead, I think using the time-based solution makes more sense.
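For instance, a hypothetical deadline-based rewrite of the question's fun() (a sketch; linkList and queue are assumed to exist as in the question):

import time

def fun(link, deadline=None):
    if deadline is None:
        deadline = time.time() + 3600  # one hour from the first call
    for link in linkList:
        if time.time() > deadline:
            return                     # time is up; unwind and move on
        if link not in queue:
            queue.push(link)
        else:
            queue.updatePriority(link)
    if queue and time.time() <= deadline:
        fun(queue.pop(), deadline)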
I am writing an implementation of a NAT. My algorithm is as follows:
Packet comes in
Check against lookup table if external, add to lookup table if internal
Swap the source address and send the packet on its way
I have been reading about Twisted, and I am curious whether Twisted takes advantage of multicore CPUs. Assume the system has thousands of users and packets come in one right after the other. With Twisted, can the lookup-table operations take place at the same time on each core? I hear that with threads the GIL will not allow this anyway. Perhaps I could benefit from multiprocessing?
Nginx is asynchronous and happily serves thousands of users at the same time.
Using threads with Twisted is discouraged. It has very good performance when used asynchronously, but the code you write for request handlers must not block. If your handler is a fairly big piece of code, break it up into smaller parts and use Twisted's famous Deferreds to attach the parts via callbacks. It certainly requires somewhat different thinking than most programmers are used to, but it has benefits. If the code has blocking parts, like database operations or network access to other resources, try to find asynchronous libraries for those tasks too, so you can use Deferreds there as well. If you can't find asynchronous libraries, you can fall back to the deferToThread function, which runs the function you call in a different thread, returns a Deferred for it, and fires your callback when it finishes; but treat that as a last resort, when nothing else can be done.
Here is the official tutorial for Deferreds:
http://twistedmatrix.com/documents/10.1.0/core/howto/deferredindepth.html
And another nice guide, which can help to get used to think in "async mode":
http://ezyang.com/twisted/defer2.html
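And a tiny sketch of the deferToThread fallback mentioned above (lookup_table_op is a stand-in for real blocking code):

from twisted.internet import reactor
from twisted.internet.threads import deferToThread

def lookup_table_op(packet):
    return packet.upper()  # stand-in for a blocking lookup

def on_done(result):
    print("lookup finished: %s" % result)

d = deferToThread(lookup_table_op, "packet-bytes")
d.addCallback(on_done)
reactor.callLater(2, reactor.stop)  # just so the demo exits
reactor.run()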
My question is: which python framework should I use to build my server?
Notes:
This server talks HTTP with its clients: GET and POST (via pyAMF).
Clients "submit" "tasks" for processing and then, sometime later, retrieve the associated "task_result".
Submission and retrieval might be separated by days, over different HTTP connections.
The "task" is a lump of XML describing a problem to be solved, and a "task_result" is a lump of XML describing an answer.
When the server gets a "task", it queues it for processing.
The server manages this queue and, when tasks reach the top, arranges for them to be processed.
The processing is performed by a long-running (15 minutes?) external program (via subprocess), which is fed the task XML and produces a "task_result" lump of XML that the server picks up and stores (for later client retrieval).
It serves a couple of basic HTML pages showing the queue and processing status (admin purposes only).
I've experimented with twisted.web, using SQLite as the database and threads to handle the long running processes.
But I can't help feeling that I'm missing a simpler solution. Am I? If you were faced with this, what technology mix would you use?
I'd recommend using an existing message queue. There are many to choose from (see below), and they vary in complexity and robustness.
Also, avoid threads: let your processing tasks run in a different process (why do they have to run in the webserver?)
By using an existing message queue, you only need to worry about producing messages (in your webserver) and consuming them (in your long running tasks). As your system grows you'll be able to scale up by just adding webservers and consumers, and worry less about your queuing infrastructure.
Some popular python implementations of message queues:
http://code.google.com/p/stomper/
http://code.google.com/p/pyactivemq/
http://xph.us/software/beanstalkd/
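As a rough sketch of the produce/consume split (assuming the beanstalkc client for beanstalkd, the last queue linked above):

import beanstalkc

# In the webserver: enqueue the task XML and return immediately.
def submit_task(task_xml):
    q = beanstalkc.Connection(host="localhost", port=11300)
    q.put(task_xml)
    q.close()

# In a separate consumer process: take tasks off the queue and run them.
def consume_forever(process):
    q = beanstalkc.Connection(host="localhost", port=11300)
    while True:
        job = q.reserve()  # blocks until a task is available
        process(job.body)  # e.g. run the long external program
        job.delete()       # acknowledge completion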
I'd suggest the following. (Since it's what we're doing.)
A simple WSGI server (wsgiref or werkzeug). The HTTP requests coming in will naturally form a queue. No further queueing needed. You get a request, you spawn the subprocess as a child and wait for it to finish. A simple list of children is about all you need.
I used a modification of the main "serve forever" loop in wsgiref to periodically poll all of the children to see how they're doing.
A simple SQLite database can track request status. Even this may be overkill, because your XML inputs and results can just lie around in the file system.
That's it. Queueing and threads don't really enter into it. A single long-running external process is too complex to coordinate. It's simplest if each request is a separate, stand-alone child process.
If you get immense bursts of requests, you might want a simple governor to prevent creating thousands of children. The governor could be a simple queue, built using a list with append() and pop(). Every request goes in, but only as many requests as fit within some "max number of children" limit are taken out.
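A condensed, hypothetical sketch of that design (simplified to poll the children on each request rather than in a modified serve-forever loop; the "solver" command is made up):

import subprocess
from wsgiref.simple_server import make_server

MAX_CHILDREN = 4
children = []   # (task_id, Popen) pairs
waiting = []    # the governor: queued (task_id, xml_path) pairs

def reap_and_launch():
    # Drop finished children, then start queued tasks up to the limit.
    children[:] = [(t, p) for t, p in children if p.poll() is None]
    while waiting and len(children) < MAX_CHILDREN:
        task_id, xml_path = waiting.pop(0)
        proc = subprocess.Popen(["solver", xml_path])  # hypothetical tool
        children.append((task_id, proc))

def app(environ, start_response):
    if environ["REQUEST_METHOD"] == "POST":
        # ... persist the task XML to disk, then queue it ...
        waiting.append(("task-1", "/tmp/task-1.xml"))
    reap_and_launch()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"queued\n"]

make_server("", 8000, app).serve_forever()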
My reaction is to suggest Twisted, but you've already looked at it. Still, I stick by my answer. Without knowing your personal pain-points, I can at least share a few things that helped me reduce almost all of the deferred madness that arises when you have several dependent, blocking actions to perform for a client.
Inline callbacks (lightly documented here: http://twistedmatrix.com/documents/8.2.0/api/twisted.internet.defer.html) provide a means to make long chains of deferreds much more readable (to the point of looking like straight-line code). There is an excellent example of the complexity reduction this affords here: http://blog.mekk.waw.pl/archives/14-Twisted-inlineCallbacks-and-deferredGenerator.html
You don't always have to get your bulk processing to integrate nicely with Twisted. Sometimes it is easier to break a large piece of your program off into a stand-alone, easily testable/tweakable/implementable command line tool and have Twisted invoke this tool in another process. Twisted's ProcessProtocol provides a fairly flexible way of launching and interacting with external helper programs. Furthermore, if you suddenly decide you want to cloudify your application, it is not all that big of a deal to use a ProcessProtocol to simply run your bulk processing on a remote server (random EC2 instances perhaps) via ssh, assuming you have the keys setup already.
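A tiny sketch of what inlineCallbacks buys you (store_task and run_solver are hypothetical Deferred-returning helpers):

from twisted.internet import defer

@defer.inlineCallbacks
def handle_task(task_xml):
    queued = yield store_task(task_xml)  # each yield waits on a Deferred
    result = yield run_solver(queued)    # e.g. via a ProcessProtocol
    defer.returnValue(result)            # reads like straight-line code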
You can have a look at Celery.
It seems any Python web framework will suit your needs. I work with a similar system on a daily basis, and I can tell you that your solution with threads and SQLite for queue storage is about as simple as you're going to get.
Assuming order doesn't matter in your queue, threads should be acceptable. It's important to make sure you don't create race conditions with your queues or, for example, run two jobs of the same type simultaneously. If that's a concern, I'd suggest a single-threaded application that works through the queue items one by one.