I have a working Python script that checks the 6,300 or so sites we have to ensure they are up, by sending an HTTP request to each and measuring the response. Currently the script takes about 40 minutes to run completely. I am interested in ways to speed it up; two thoughts were threading or running multiple instances.
This is the order of execution now:
MySQL query to get all of the active domains to scan (6,300 give or take)
Iterate through each domain and send an HTTP request to it using urllib
If the site doesn't return '200' then log the results
repeat until complete
This seems like it could possibly be sped up significantly with threading but I am not quite sure how that process flow would look since I am not familiar with threading.
If someone could offer a sample high-level process flow and any other pointers for working with threading or offer any other insights on how to improve the script in general it would be appreciated.
The flow would look something like this:
Create a domain Queue
Create a result Queue
MySQL query to get all of the active domains to scan
Put the domains in the domain Queue
Spawn a pool of worker threads
Run the threads
Each worker will get a domain from the domain Queue, send a request and put the result in the result Queue
Wait for the threads to finish
Get everything from the result Queue and log it
You'll probably want to tune the number of threads in the pool rather than spawning 6,300 threads, one per domain.
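A minimal sketch of that flow, assuming Python 3 (urllib.request), a pool of 20 threads, and plain print() standing in for your logging step:

    import queue
    import threading
    import urllib.error
    import urllib.request

    NUM_WORKERS = 20                      # tune this; far fewer threads than domains
    domain_queue = queue.Queue()
    result_queue = queue.Queue()

    def worker():
        while True:
            try:
                domain = domain_queue.get_nowait()
            except queue.Empty:
                return                    # queue drained, thread exits
            try:
                status = urllib.request.urlopen('http://' + domain, timeout=10).getcode()
            except urllib.error.HTTPError as err:
                status = err.code
            except Exception as err:      # DNS failure, refused connection, timeout, ...
                status = str(err)
            if status != 200:
                result_queue.put((domain, status))

    def check_all(domains):
        for domain in domains:
            domain_queue.put(domain)
        threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        while not result_queue.empty():   # log everything that wasn't a 200
            print('DOWN: %s (%s)' % result_queue.get())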
You can take a look at the Scrapy framework. It's made for web scraping, it's asynchronous (built on Twisted), and it's pretty fast.
In your case you can just take the list of domains and check whether each one returns 200 without actually scraping anything. It should be much faster.
Here's the link:
http://scrapy.org/
Threading is definitely what you need. It will remove the serialized nature of your algorithm, and since it is mostly I/O-bound, you will gain a lot by sending the HTTP requests in parallel.
Your flow would become:
MySQL query to get all of the active domains to scan (6,300 give or take)
Iterate through the domains and create a thread for each that uses urllib to send an HTTP request
Log the results in threads
You can make this algorithm better by creating n worker threads with queues and adding domains to a queue instead of creating one thread per domain. I just wanted to make things a little easier for you, since you're not familiar with threads.
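If the manual queue handling still feels heavy, here is an even smaller sketch using concurrent.futures from the standard library (my suggestion, not the answerer's); check_domain stands in for your existing urllib request and log_failure for your logging step:

    from concurrent.futures import ThreadPoolExecutor

    def check_all(domains, max_workers=20):
        # check_domain / log_failure are your existing request + logging helpers (assumed)
        # pool.map() fans the calls out across the worker threads
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            for domain, status in zip(domains, pool.map(check_domain, domains)):
                if status != 200:
                    log_failure(domain, status)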
I guess you should go for threading, investigating the optimal number of workers to start so you don't overwhelm your client. The Python documentation offers good examples; take a look at "Download multiple pages concurrently?" and at urllib, threading, and multiprocessing.
I'm working through possible architectures for a problem. In as few words as possible, the problem is: I need to design a system that allows clients to connect using HTTP/REST to kick off long running processes. Each process will create a persistent connection to a third party server and write the received data to a queue. Each process will terminate only if the third party server closes the connection or another HTTP/REST request is received indicating it should be terminated.
Constraints and background:
Clients must be able to connect using HTTP/REST
System must be written in Python
I'm a lower level C guy (with enough Python experience to feel competent) but trying to wrap my head around the Python frameworks available for making this easier. My gut is to jump into the weeds and I know if I implement this as I'm thinking, I might as well have written it in C. Don't want that. I want to leverage as many frameworks and libraries for Python as possible. Performance is not a top priority.
Approaches I've considered:
In doing research, I came across Twisted which might be a fit and seems to make sense to me (thinking about this as a daemon). I'm imagining the final product would be a Twisted app that exposes a REST interface, dispatches new threads connecting to the third party service for each client request received, and would manage its own thread pool. I'm familiar with threading, though admittedly haven't done anything in Python with them yet. In a nutshell, Twisted looks very cool, though in the end, I'm left wondering if I'm overcomplicating this.
The second approach I considered is using Celery and Flask and simply letting Celery handle all the dispatching, thread management, etc. I found this article showing Celery and Flask playing nicely together. It seems like a much simpler approach.
After writing this, I'm leaning towards the second option of using Celery and Flask, though I don't know much about Celery, so looking for any advice you might have, as well as other possible architectures that I'm not considering. I really appreciate it and thank you in advance.
Yes, Twisted is overkill here.
From what you described, the combination of Celery and Flask would suffice. It would allow you to implement a REST interface that kicks off your long-running processes as Celery tasks. You can easily implement a REST method allowing clients to stop running tasks by invoking Celery's revoke method on a task's ID. Take note that Celery depends on a message broker for sending and receiving messages (frequently RabbitMQ) and a result backend for storing results (frequently Redis).
>>> from celery.task.control import revoke
>>> revoke(task_id, terminate=True)
http://docs.celeryproject.org/en/latest/userguide/workers.html#revoking-tasks
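For a rough idea of how the pieces could fit together, here is a hedged sketch assuming a local RabbitMQ broker and a Redis result backend; the listen_to_third_party task and the route layout are illustrative, not from the original post:

    # tasks.py
    from celery import Celery

    celery_app = Celery('tasks', broker='amqp://localhost', backend='redis://localhost')

    @celery_app.task
    def listen_to_third_party(feed_url):
        # placeholder: open the persistent connection, push received data onto
        # your queue, and return when the remote side closes the connection
        pass

    # web.py
    from flask import Flask, jsonify, request
    from celery.task.control import revoke
    from tasks import listen_to_third_party

    app = Flask(__name__)

    @app.route('/processes', methods=['POST'])
    def start_process():
        task = listen_to_third_party.delay(request.json['feed_url'])
        return jsonify(task_id=task.id), 202

    @app.route('/processes/<task_id>', methods=['DELETE'])
    def stop_process(task_id):
        revoke(task_id, terminate=True)   # as in the snippet above
        return '', 204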
I'm writing a program in Python for which I'm considering a local client-server model, but I am struggling to figure out the best way for the server to communicate with the client(s). A simple, canned solution would be best--I'm not looking to reinvent the wheel. Here are my needs for this program:
Runs on Linux
Server and clients are on the same system, so I don't need to go over a network.
Latency that's not likely to be annoying to an interactive user.
Multiple clients can connect to the same server.
Clients are started independently of the server and can connect/disconnect at any time.
The number of clients is measurable in dozens; I don't need to scale very high.
Clients can come in a few different flavors:
Stream readers - Reads a continuous stream of data (in practice, this is all text).
State readers - Reads some state information that updates every once in a while.
Writers - Sends some data to the server, receives some response each time.
Client type 1 seems simple enough; it's a unidirectional dumb pipe. Client type 2 is a bit more interesting. I want to avoid simply polling the server to check for new data periodically since that would add noticeable latency for the user. The server needs some way to signal to all and only the relevant clients when the state information is updated so that the client can receive the updated state from the server. Client type 3 must be bidirectional; it will send user-supplied data to the server and receive some kind of response after each send.
I've looked at Python's IPC page (http://docs.python.org/2/library/ipc.html), but I don't think any of those solutions are right for my needs. The subprocess module is completely inappropriate, and everything else is a bit more low-level than I'd like.
The similar question Efficient Python to Python IPC isn't quite the same; I don't need to transfer Python objects, I'm not especially worried about CPU efficiency for the number of clients I'll have, I only care about Linux, and none of the answers to that question are especially helpful to me anyway.
Update:
I cannot accept an answer that just points me at a framework/library/module/tool without actually giving an explanation of how it can be used for my three different server-client relationships. If you say, "All of this can be done with named pipes!" I will have to ask "How?" Code snippets would be ideal, but a high-level description of a solution can work too.
Have you already looked into ZeroMQ? It has excellent Python support, and the documented examples already cover your use cases.
It's easy to use in a single-platform, single-machine setup, and it can be very easily extended to a network later.
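To make that concrete, here is one possible (not the only) mapping of the three client types onto ZeroMQ socket patterns over local ipc:// endpoints; the endpoint paths and topic names are my own placeholders:

    # server.py
    import zmq

    ctx = zmq.Context()

    # Types 1 and 2: a PUB socket; stream readers subscribe to b"stream",
    # state readers to b"state", so updates are pushed instead of polled.
    pub = ctx.socket(zmq.PUB)
    pub.bind('ipc:///tmp/myapp-pub')

    # Type 3: a REP socket for writers that expect a response to each send.
    rep = ctx.socket(zmq.REP)
    rep.bind('ipc:///tmp/myapp-rpc')

    while True:
        # in a real server these sends would be driven by incoming data / state changes
        pub.send_multipart([b'stream', b'next chunk of text'])
        pub.send_multipart([b'state', b'serialized state blob'])
        # type 3: answer one writer request if one is waiting
        if rep.poll(timeout=100):
            rep.send(b'response to ' + rep.recv())

    # state_client.py (type 2): blocks until the server publishes a state update
    import zmq

    sub = zmq.Context().socket(zmq.SUB)
    sub.connect('ipc:///tmp/myapp-pub')
    sub.setsockopt(zmq.SUBSCRIBE, b'state')
    while True:
        topic, payload = sub.recv_multipart()
        print('new state:', payload)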
I'm looking for a Python library or a command-line tool for downloading multiple files in parallel. My current solution is to download the files sequentially, which is slow. I know you can easily write a half-assed threaded solution in Python, but I always run into annoying problems when using threading. It is for polling a large number of XML feeds from websites.
My requirements for the solution are:
1. Should be interruptible. Ctrl+C should immediately terminate all downloads.
2. There should be no leftover processes that you have to kill manually using kill, even if the main program crashes or an exception is thrown.
3. It should work on Linux and Windows too.
4. It should retry downloads, be resilient against network errors, and should time out properly.
5. It should be smart about not hammering the same server with 100+ simultaneous downloads, but queue them in a sane way.
6. It should handle important HTTP status codes like 301, 302 and 304. That means that for each file, it should take the Last-Modified value as input and only download if it has changed since last time.
7. Preferably it should have a progress bar, or it should be easy to write a progress bar for it to monitor the download progress of all files.
8. Preferably it should take advantage of HTTP keep-alive to maximize the transfer speed.
Please don't suggest how I may go about implementing the above requirements. I'm looking for a ready-made, battle-tested solution.
I guess I should describe what I want it for too... I have about 300 different data feeds as XML-formatted files served from 50 data providers. Each file is between 100 KB and 5 MB in size. I need to poll them frequently (as in once every few minutes) to determine whether any of them has new data I need to process. So it is important that the downloader uses HTTP caching to minimize the amount of data to fetch. It should also use gzip compression, obviously.
Then the big problem is how to use the bandwidth as efficiently as possible without overstepping any boundaries. For example, one data provider may consider it abuse if you open 20 simultaneous connections to their data feeds. Instead it may be better to use one or two connections that are reused for multiple files. Or your own connection may be limited in strange ways: my ISP limits the number of DNS lookups you can do, so some kind of DNS caching would be nice.
You can try pycurl; the interface is not easy at first, but once you look at the examples it's not hard to understand. I have used it to fetch thousands of web pages in parallel on a meagre Linux box.
You don't have to deal with threads, so it terminates gracefully, and there are no processes left behind.
It provides options for timeouts and HTTP status handling.
It works on both Linux and Windows.
The only problem is that it provides only basic infrastructure (essentially just a Python layer above the excellent curl library). You will have to write a few lines to achieve the features you want.
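For a flavour of those few lines, here is a sketch of pycurl's multi interface (the concurrency handling, timeouts, and lack of a refill strategy are simplifications you would want to flesh out):

    import pycurl
    from io import BytesIO

    def fetch_batch(urls):
        multi = pycurl.CurlMulti()
        handles = []
        for url in urls:
            buf = BytesIO()
            c = pycurl.Curl()
            c.setopt(pycurl.URL, url)
            c.setopt(pycurl.WRITEFUNCTION, buf.write)
            c.setopt(pycurl.CONNECTTIMEOUT, 10)
            c.setopt(pycurl.TIMEOUT, 60)
            c.setopt(pycurl.FOLLOWLOCATION, 1)      # follow 301/302 redirects
            multi.add_handle(c)
            handles.append((url, c, buf))

        while True:                                 # drive all transfers to completion
            ret, active = multi.perform()
            if ret != pycurl.E_CALL_MULTI_PERFORM:
                if active == 0:
                    break
                multi.select(1.0)                   # wait for socket activity

        for url, c, buf in handles:
            print(url, c.getinfo(pycurl.HTTP_CODE), len(buf.getvalue()))
            multi.remove_handle(c)
            c.close()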
There are lots of options but it will be hard to find one which fits all your needs.
In your case, try this approach:
Create a queue.
Put URLs to download into this queue (or "config objects" which contain the URL and other data like the user name, the destination file, etc).
Create a pool of threads
Each thread should try to fetch a URL (or a config object) from the queue and process it.
Use another thread to collect the results (i.e. another queue). When the number of result objects == number of puts in the first queue, then you're finished.
Make sure that all communication goes via the queue or the "config object". Avoid accessing data structures which are shared between threads. This should save you 99% of the problems.
I don't think such a complete library exists, so you'll probably have to write your own. I suggest taking a look at gevent for this task. They even provide a concurrent_download.py example script. Then you can use urllib2 for most of the other requirements, such as handling HTTP status codes, and displaying download progress.
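A sketch of the gevent part, using Python 3's urllib.request in place of urllib2; the monkey patching is what makes the blocking urllib calls run concurrently, and retries, caching headers, and the progress bar would still be yours to add:

    from gevent import monkey
    monkey.patch_all()                    # make socket/urllib calls cooperative

    import gevent
    import urllib.request

    def fetch(url):
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                return url, resp.status, resp.read()
        except Exception as exc:          # timeouts, HTTP errors, DNS failures, ...
            return url, None, exc

    def fetch_all(urls):
        jobs = [gevent.spawn(fetch, url) for url in urls]
        gevent.joinall(jobs)
        return [job.value for job in jobs]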
I would suggest Twisted, although it is not a ready made solution, but provides the main building blocks to get every feature you listed in an easy way and it does not use threads.
If you are interested, take a look at the following links:
http://twistedmatrix.com/documents/current/api/twisted.web.client.html#getPage
http://twistedmatrix.com/documents/current/api/twisted.web.client.html#downloadPage
As per your requirements:
1. Supported out of the box
2. Supported out of the box
3. Supported out of the box
4. Timeouts supported out of the box; other error handling is done through deferreds
5. Achieved easily using cooperators (example 7)
6. Supported out of the box
7. Not supported; solutions exist (and they are not that hard to implement)
8. Not supported, but it can be implemented (it will be relatively hard)
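For requirement 5 in particular, the cooperator pattern amounts to sharing one work generator between a fixed number of consumers. A sketch against the old twisted.web.client.downloadPage API linked above (the concurrency value and filenames are arbitrary):

    from twisted.internet import defer, reactor, task
    from twisted.web.client import downloadPage

    def parallel(urls, count, fetch):
        # `count` cooperating consumers share one generator, so at most
        # `count` downloads are in flight at any time
        coop = task.Cooperator()
        work = (fetch(url) for url in urls)
        return defer.DeferredList([coop.coiterate(work) for _ in range(count)])

    def fetch(url):
        return downloadPage(url.encode(), url.rsplit('/', 1)[-1] or 'index.html')

    urls = ['http://example.com/feed1.xml', 'http://example.com/feed2.xml']
    parallel(urls, 4, fetch).addBoth(lambda _: reactor.stop())
    reactor.run()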
Nowadays there are excellent Python libs you might want to use - urllib3 and requests
Try using aria2 through the simple Python subprocess module.
It provides all the requirements from your list, except 7, out of the box, and 7 is easy to write.
aria2c has a nice XML-RPC/JSON-RPC interface for interacting with it from your scripts.
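A sketch of the simplest way to drive it from Python, assuming aria2c is on the PATH; the flag values are just starting points:

    import subprocess

    def download_all(urls, dest_dir='feeds'):
        with open('uris.txt', 'w') as f:             # aria2c reads one URI per line
            f.write('\n'.join(urls))
        subprocess.check_call([
            'aria2c',
            '-i', 'uris.txt',                        # the URL list
            '-d', dest_dir,                          # download directory
            '-j', '8',                               # max concurrent downloads overall
            '--max-connection-per-server=2',         # don't hammer a single provider
            '--timeout=30',
            '--max-tries=3',
            '--conditional-get=true',                # skip files that haven't changed
        ])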
Does urlgrabber fit your requirements?
http://urlgrabber.baseurl.org/
If it doesn't, you could consider volunteering to help finish it. Contact the authors, Michael Stenner and Ryan Tomayko.
Update: Googling for "parallel wget" yields these, among others:
http://puf.sourceforge.net/
http://www.commandlinefu.com/commands/view/3269/parallel-file-downloading-with-wget
It seems like you have a number of options to choose from.
I used the standard libs for that, urllib.urlretrieve to be precise. I downloaded podcasts this way, via a simple thread pool, each thread using its own retrieve. I did about 10 simultaneous connections; more should not be a problem. Continuing an interrupted download, maybe not. Ctrl-C could be handled, I guess. It worked on Windows, and I installed a handler for progress bars. All in all, two screens of code, plus two screens for generating the URLs to retrieve.
This seems pretty flexible:
http://keramida.wordpress.com/2010/01/19/parallel-downloads-with-python-and-gnu-wget/
Threading isn't "half-assed" unless you're a bad programmer. The best general approach to this problem is the producer / consumer model. You have one dedicated URL producer, and N dedicated download threads (or even processes if you use the multiprocessing model).
As for all of your requirements, ALL of them CAN be done with the normal Python threading model (yes, even catching Ctrl+C -- I've done it).
I've got a situation where I'm going to be parsing websites. Each site has to have its own "parser" and possibly its own way of dealing with cookies, etc.
I'm trying to get in my head which would be a better choice.
Choice I:
I can create a multiprocessing setup, where the master (masterspawn) app gets an input URL and in turn spawns a process/function within the masterspawn app that then handles all the setup/fetching/parsing of the page/URL.
This approach would have one master app running, which in turn creates multiple instances of the internal function. Should be fast, yes/no?
Choice II:
I could create a "Twisted" kind of server that would essentially do the same thing as Choice I. The difference is that using Twisted would also impose some overhead. I'm trying to evaluate Twisted with regard to it being a "server", but I don't need it to perform the fetching of the URL.
Choice III:
I could use Scrapy. I'm inclined not to go this route, as I don't want/need the overhead that Scrapy appears to have. As I stated, each of the targeted URLs needs its own parse function, as well as its own cookie handling...
My goal is to basically have the "architected" solution spread across multiple boxes, where each client box interfaces with a master server that allocates the URLs to be parsed.
thanks for any comments on this..
-tom
There are two dimensions to this question: concurrency and distribution.
Concurrency: either Twisted or multiprocessing will do the job of concurrently handling fetching/parsing jobs. I'm not sure, though, where your premise of the "Twisted overhead" comes from. On the contrary, the multiprocessing path would incur much more overhead, since a (relatively heavyweight) OS process would have to be spawned. Twisted's way of handling concurrency is much more lightweight.
Distribution: multiprocessing won't distribute your fetch/parse jobs to different boxes. Twisted can do this, e.g. using the AMP protocol-building facilities.
I cannot comment on Scrapy, never having used it.
For this particular question I'd go with multiprocessing: it's simple to use and simple to understand. You don't particularly need Twisted, so why take on the extra complication?
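A sketch of the multiprocessing route, where SITE_PARSERS is a hypothetical mapping from a domain to its parse function (cookie handling would live inside each parser):

    from multiprocessing import Pool
    from urllib.parse import urlparse
    import urllib.request

    SITE_PARSERS = {}          # hypothetical, e.g. {'example.com': parse_example}

    def fetch_and_parse(url):
        parser = SITE_PARSERS[urlparse(url).netloc]
        with urllib.request.urlopen(url, timeout=30) as resp:
            return parser(resp.read())

    def run(urls, workers=8):
        # one OS process per worker; results come back in input order
        with Pool(processes=workers) as pool:
            return pool.map(fetch_and_parse, urls)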
One other option you might want to consider: use a message queue. Have the master drop URLs onto a queue (e.g. beanstalkd, resque, 0mq) and have worker processes pick up the URLs and process them. You'll get both concurrency and distribution: you can run workers on as many machines as you want.
My question is: which python framework should I use to build my server?
Notes:
This server talks HTTP with its clients: GET and POST (via pyAMF)
Clients "submit" "tasks" for processing and, then, sometime later, retrieve the associated "task_result"
submit and retrieve might be separated by days - different HTTP connections
The "task" is a lump of XML describing a problem to be solved, and a "task_result" is a lump of XML describing an answer.
When a server gets a "task", it queues it for processing
The server manages this queue and, when tasks get to the top, organises that they are processed.
The processing is performed by a long-running (15 minutes?) external program (via subprocess) which is fed the task XML and which produces a "task_result" lump of XML that the server picks up and stores (for later client retrieval).
It serves a couple of basic HTML pages showing the queue and processing status (admin purposes only).
I've experimented with twisted.web, using SQLite as the database and threads to handle the long running processes.
But I can't help feeling that I'm missing a simpler solution. Am I? If you were faced with this, what technology mix would you use?
I'd recommend using an existing message queue. There are many to choose from (see below), and they vary in complexity and robustness.
Also, avoid threads: let your processing tasks run in a different process (why do they have to run in the webserver?)
By using an existing message queue, you only need to worry about producing messages (in your webserver) and consuming them (in your long running tasks). As your system grows you'll be able to scale up by just adding webservers and consumers, and worry less about your queuing infrastructure.
Some popular python implementations of message queues:
http://code.google.com/p/stomper/
http://code.google.com/p/pyactivemq/
http://xph.us/software/beanstalkd/
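As an illustration of the producer/consumer split, here is a sketch using beanstalkd through the beanstalkc client library (the client library choice and the helper functions are my assumptions, not the answerer's):

    import beanstalkc

    # producer -- in the webserver, when a client submits a task
    def submit_task(task_xml):
        q = beanstalkc.Connection(host='localhost', port=11300)
        q.put(task_xml)
        q.close()

    # consumer -- a separate long-running process
    def consume_forever(run_solver, store_result):
        q = beanstalkc.Connection(host='localhost', port=11300)
        while True:
            job = q.reserve()             # blocks until a task is available
            store_result(run_solver(job.body))
            job.delete()                  # done; remove it from the queue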
I'd suggest the following. (Since it's what we're doing.)
A simple WSGI server (wsgiref or werkzeug). The HTTP requests coming in will naturally form a queue. No further queueing needed. You get a request, you spawn the subprocess as a child and wait for it to finish. A simple list of children is about all you need.
I used a modification of the main "serve forever" loop in wsgiref to periodically poll all of the children to see how they're doing.
A simple SQLite database can track request status. Even this may be overkill, because your XML inputs and results can just lie around in the file system.
That's it. Queueing and threads don't really enter into it. A single long-running external process is too complex to coordinate. It's simplest if each request is a separate, stand-alone, child process.
If you get immense bursts of requests, you might want a simple governor to prevent creating thousands of children. The governor could be a simple queue, built using a list with append() and pop(). Every request goes in, but only requests that fit within some "max number of children" limit are taken out.
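A compressed sketch of that design: wsgiref handles HTTP, children are tracked in a plain list, and the governor is just a length check (the solver command and file paths are made up):

    import subprocess
    import uuid
    from wsgiref.simple_server import make_server

    MAX_CHILDREN = 4
    children = []                                     # (task_id, Popen) pairs

    def reap():
        # forget children that have finished; their results are already on disk
        children[:] = [(tid, p) for tid, p in children if p.poll() is None]

    def app(environ, start_response):
        reap()
        if environ['REQUEST_METHOD'] == 'POST':
            if len(children) >= MAX_CHILDREN:
                start_response('503 Service Unavailable', [('Content-Type', 'text/plain')])
                return [b'busy, try again later']
            task_id = uuid.uuid4().hex
            task_path = '/var/tasks/%s.xml' % task_id     # made-up location
            length = int(environ.get('CONTENT_LENGTH') or 0)
            with open(task_path, 'wb') as f:
                f.write(environ['wsgi.input'].read(length))
            children.append((task_id, subprocess.Popen(['solver', task_path])))
            start_response('202 Accepted', [('Content-Type', 'text/plain')])
            return [task_id.encode()]
        # GET: the bare-bones status page
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [('%d tasks running' % len(children)).encode()]

    make_server('', 8000, app).serve_forever()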
My reaction is to suggest Twisted, but you've already looked at this. Still, I stick by my answer. Without knowing your personal pain points, I can at least share some things that helped me reduce almost all of the deferred madness that arises when you have several dependent, blocking actions you need to perform for a client.
Inline callbacks (lightly documented here: http://twistedmatrix.com/documents/8.2.0/api/twisted.internet.defer.html) provide a means to make long chains of deferreds much more readable (to the point of looking like straight-line code). There is an excellent example of the complexity reduction this affords here: http://blog.mekk.waw.pl/archives/14-Twisted-inlineCallbacks-and-deferredGenerator.html
You don't always have to get your bulk processing to integrate nicely with Twisted. Sometimes it is easier to break a large piece of your program off into a stand-alone, easily testable/tweakable/implementable command-line tool and have Twisted invoke this tool in another process. Twisted's ProcessProtocol provides a fairly flexible way of launching and interacting with external helper programs. Furthermore, if you suddenly decide you want to cloudify your application, it is not all that big of a deal to use a ProcessProtocol to simply run your bulk processing on a remote server (random EC2 instances perhaps) via ssh, assuming you have the keys set up already.
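To make the inlineCallbacks point concrete, a small hedged example of chaining a fetch and an external helper without nested callbacks (written against the older getPage API the links above refer to; the solver command is a placeholder):

    import tempfile
    from twisted.internet import defer, reactor, utils
    from twisted.web.client import getPage

    @defer.inlineCallbacks
    def handle_task(task_url):
        # reads like straight-line code, but each yield waits on a Deferred
        task_xml = yield getPage(task_url)
        with tempfile.NamedTemporaryFile(suffix='.xml', delete=False) as f:
            f.write(task_xml)
        # run the external solver without blocking the reactor
        result_xml = yield utils.getProcessOutput('/usr/local/bin/solver', [f.name])
        defer.returnValue(result_xml)

    # handle_task('http://example.com/task/123').addCallback(store_result)
    # reactor.run()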
You can have a look at Celery.
It seems any Python web framework will suit your needs. I work with a similar system on a daily basis and I can tell you that your solution with threads and SQLite for queue storage is about as simple as you're going to get.
Assuming order doesn't matter in your queue, threads should be acceptable. It's important to make sure you don't create race conditions with your queues or, for example, have two of the same job type running simultaneously. If that is a risk, I'd suggest a single-threaded application that does the items in the queue one by one.