A programming strategy to bypass the OS thread limit? - python

The scenario: We have a Python script that checks thousands of proxies simultaneously.
The program uses one thread per proxy to speed up the process. When it reaches the 1007th thread, the script crashes because of the thread limit.
My solution is: a global variable that gets incremented when a thread spawns and decremented when a thread finishes. The function which spawns the threads monitors the variable so that the limit is not reached.
What will your solution be, friends?
Thanks for the answers.

You want to do non-blocking I/O with the select module.
There are a couple of different specific techniques. select.select should work on every major platform. There are other variations that are more efficient (which could matter if you are checking tens of thousands of connections simultaneously), but you will then need to write code for your specific platform.
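For illustration, a minimal select.select sketch for testing many proxies at once (the PROXIES list and the 5-second timeout are made-up placeholders; the error-code handling assumes a POSIX platform):

import errno
import select
import socket

PROXIES = [('203.0.113.1', 8080), ('203.0.113.2', 3128)]  # placeholder list

pending = {}
for addr in PROXIES:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setblocking(False)
    err = s.connect_ex(addr)                 # starts a non-blocking connect
    if err in (0, errno.EINPROGRESS):
        pending[s.fileno()] = (s, addr)
    else:
        s.close()                            # failed immediately

while pending:
    socks = [s for s, _ in pending.values()]
    _, writable, _ = select.select([], socks, [], 5.0)
    if not writable:                         # nothing became ready: timed out
        break
    for s in writable:
        _, addr = pending.pop(s.fileno())
        # SO_ERROR is 0 only if the non-blocking connect actually succeeded
        if s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) == 0:
            print(addr, 'accepted the connection')
        s.close()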

I've run into this situation before. Just make a pool of Tasks, and spawn a fixed number of threads that run an endless loop which grabs a Task from the pool, runs it, and repeats. Essentially you're implementing your own thread abstraction and using the OS threads to implement it.
This does have drawbacks, the major one being that if your Tasks block for long periods of time, they can prevent the execution of other Tasks. But it does let you create an unbounded number of Tasks, limited only by memory.
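A minimal sketch of that pattern with the standard library (check_proxy and the proxy list are placeholders for whatever a Task actually does):

import queue
import threading

NUM_WORKERS = 50                    # fixed number of OS threads
tasks = queue.Queue()

def check_proxy(proxy):
    pass                            # placeholder for the real per-task work

def worker():
    while True:
        proxy = tasks.get()         # blocks until a task is available
        if proxy is None:           # sentinel: shut this worker down
            break
        try:
            check_proxy(proxy)
        finally:
            tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()

for proxy in ['203.0.113.%d:8080' % i for i in range(1000)]:  # placeholder
    tasks.put(proxy)
tasks.join()                        # wait until every task has been processed
for _ in threads:
    tasks.put(None)                 # one sentinel per worker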

Does Python have any sort of asynchronous I/O functionality? That would be the preferred answer IMO - spawning an extra thread for each outbound connection isn't as neat as having a single thread which is effectively event-driven.

Using different processes, and pipes to transfer data. Threads in Python are of limited use for this: because of the Global Interpreter Lock (GIL), they don't actually run Python code in parallel, even on a multi-core processor - and that has not changed in Python 3.
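A sketch of the process-based approach using the multiprocessing module (check_proxy and the proxy list are placeholders):

import multiprocessing

def check_proxy(proxy):
    # placeholder: the real code would try to connect through the proxy
    return proxy, True

if __name__ == '__main__':
    proxies = ['203.0.113.1:8080', '203.0.113.2:3128']  # placeholder
    pool = multiprocessing.Pool(processes=8)  # 8 worker processes
    for proxy, ok in pool.map(check_proxy, proxies):
        print(proxy, 'works' if ok else 'failed')
    pool.close()
    pool.join()

Each process has its own interpreter and its own GIL, so the checks genuinely run in parallel; the trade-off is higher per-worker overhead than threads.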

My solution is: a global variable that gets incremented when a thread spawns and decremented when a thread finishes. The function which spawns the threads monitors the variable so that the limit is not reached.
The standard way is to have each thread fetch the next task in a loop instead of dying after processing just one. That way you don't have to keep track of the number of threads, since you just fire up a fixed number of them. As a bonus, you save on thread creation/destruction.

A counting semaphore should do the trick.

import socket
import threading

maxthreads = 1000
threads_sem = threading.Semaphore(maxthreads)

class MyThread(threading.Thread):
    def __init__(self, conn, addr):
        threading.Thread.__init__(self)
        self.conn = conn
        self.addr = addr

    def run(self):
        global running
        try:
            read = self.conn.recv(4096)
            if read == b'go away\n':
                running = False
            self.conn.close()
        finally:
            threads_sem.release()   # free a slot for the next connection

sock = socket.socket()
sock.bind(('0.0.0.0', 2323))
sock.listen(1)

running = True
while running:
    conn, addr = sock.accept()
    threads_sem.acquire()           # blocks once maxthreads are running
    MyThread(conn, addr).start()

Make sure your threads get destroyed properly after they've been used, or use a thread pool, although from what I've seen thread pools aren't that effective in Python.
See here:
http://code.activestate.com/recipes/203871/

Using the select module or a similar library would most probably be a more efficient solution, but that would require bigger architectural changes.
If you just want to limit the number of threads, a global counter should be fine, as long as you access it in a thread-safe way.
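A minimal sketch of such a counter (the 0.01-second back-off is an arbitrary choice; a threading.Semaphore, as in the answer above, expresses the same idea more directly):

import threading
import time

MAX_THREADS = 1000
active = 0
lock = threading.Lock()

def run_and_count(target, args):
    global active
    try:
        target(*args)
    finally:
        with lock:
            active -= 1             # always decrement, even if the task raised

def spawn(target, *args):
    global active
    while True:                     # wait until we are under the limit
        with lock:
            if active < MAX_THREADS:
                active += 1
                break
        time.sleep(0.01)            # brief back-off before re-checking
    threading.Thread(target=run_and_count, args=(target, args)).start()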

Be careful to minimize the default thread stack size. At least on Linux, the default puts severe restrictions on the number of threads you can create: Linux reserves a chunk of the process's virtual address space for each thread's stack (usually 10 MB). 300 threads x 10 MB of stack = 3 GB of virtual address space dedicated to stacks, and on a 32-bit system you have a 3 GB limit. You can probably get away with much less.
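In CPython you can shrink it with threading.stack_size(), which must be called before the affected threads are started (a sketch; the minimum accepted size is platform-dependent, at least 32 KiB):

import threading

# Ask for a 512 KiB stack per thread instead of the platform default.
threading.stack_size(512 * 1024)

def check(proxy):
    pass                            # placeholder worker

threads = [threading.Thread(target=check, args=(n,)) for n in range(2000)]
for t in threads:
    t.start()
for t in threads:
    t.join()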

Twisted is a perfect fit for this problem. See http://twistedmatrix.com/documents/current/core/howto/clients.html for a tutorial on writing a client.
If you don't mind using alternate Python implementations, Stackless has light-weight (non-native) threads. The only company I know doing much with it is CCP: they use it for tasklets in their game, on both the client and the server. You still need to do async I/O with Stackless, because if a thread blocks, the whole process blocks.

As mentioned in another answer, why spawn a new thread for each single operation? This is a classical producer-consumer problem, isn't it? Depending a bit on how you look at it, the proxy checkers might be consumers or producers.
Anyway, the solution is to make a queue of tasks to process, and have the threads loop: check if there are any more tasks to perform in the queue, and if there aren't, wait a predefined interval and check again - see the sketch below.
You should protect your queue with some locking mechanism, e.g. semaphores, to prevent race conditions.
It's really not that difficult, but it requires a bit of thinking to get it right. Good luck!
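A sketch of that consumer loop using the standard queue module (process is a placeholder; the 1-second timeout plays the role of the "predefined interval"):

import queue
import threading

tasks = queue.Queue()               # thread-safe; does its own locking
stop = threading.Event()

def process(task):
    pass                            # placeholder for the real work

def consumer():
    while not stop.is_set():
        try:
            task = tasks.get(timeout=1.0)   # wait up to the interval
        except queue.Empty:
            continue                        # nothing to do; check again
        process(task)
        tasks.task_done()

threading.Thread(target=consumer, daemon=True).start()

Note that queue.Queue already serializes access internally, so no extra semaphores are needed around get and put.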

Related

How do I handle multiple requests to python program using zeromq and threading or async?

I have a little program that does some calculations in the background when I call it through the zerorpc module in Python 2.7.
Here is my code:
import zerorpc

is_busy = False

class Server(object):
    def calculateSomeStuff(self):
        global is_busy
        if is_busy:
            return 'I am busy!'
        is_busy = True
        # calculate some stuff
        is_busy = False
        print 'Done!'
        return

    def getIsBusy(self):
        return is_busy

s = zerorpc.Server(Server())
s.bind("tcp://0.0.0.0:66666")
s.run()
What should I change to make this program return is_busy when I call the .getIsBusy() method after .calculateSomeStuff() has started doing its job?
As far as I know, there is no way to make it asynchronous in Python 2.
You need multi-threading for real concurrency, and to exploit more than one CPU core, if that is what you are after. See the Python threading module, the GIL details and possible workarounds, and the literature.
If you want a cooperative solution, read on.
zerorpc uses gevent for asynchronous input/output. With gevent you write coroutines (also called greenlets or userland threads) which all run cooperatively on a single thread: the thread in which the gevent input/output loop is running. The gevent ioloop takes care of resuming coroutines that are waiting for some I/O event.
The key here is the word cooperative. Compare that to threads running on a single-core machine: effectively nothing runs concurrently, but the operating system preempts (suspends) a running thread to execute the next one, and so on, so that every thread gets a fair chance of moving forward.
This happens fast enough that it feels like all threads are running at the same time.
If you write your code cooperatively with the gevent input/output loop, you can achieve the same effect by being careful to call gevent.sleep(0) often enough to give the gevent ioloop a chance to run other coroutines.
It's literally cooperative multitasking; I've heard early versions of Windows worked the same way.
So, in your example, the heavy computation part likely has some loop going on. Make sure to call gevent.sleep(0) a couple of times per second and you will have the illusion of multi-threading.
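Applied to the method from the question, that might look like this (a sketch; work_chunks and do_some_work are placeholders for however the real computation is sliced up):

import gevent

def calculateSomeStuff(self):
    global is_busy
    if is_busy:
        return 'I am busy!'
    is_busy = True
    for chunk in work_chunks:       # placeholder: the heavy loop
        do_some_work(chunk)         # placeholder: one slice of computation
        gevent.sleep(0)             # yield so getIsBusy() calls get served
    is_busy = False
    return 'Done!'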
I hope my answer wasn't too confusing.

When should I be using asyncio over regular threads, and why? Does it provide performance increases?

I have a pretty basic understanding of multithreading in Python and an even basic-er understanding of asyncio.
I'm currently writing a small Curses-based program (eventually going to be using a full GUI, but that's another story) that handles the UI and user I/O in the main thread, and then has two other daemon threads (each with its own queue and a worker method that gets things from that queue):
a watcher thread that watches for time-based and conditional (e.g. posts to a message board, received messages, etc.) events to occur and then puts required tasks into...
the other (worker) daemon thread's queue which then completes them.
All three threads are continuously running concurrently, which leads me to some questions:
When the worker thread's queue (or, more generally, any thread's queue) is empty, should it be stopped until it has something to do again, or is it okay to leave it continuously running? Do concurrent threads take up a lot of processing power when they aren't doing anything other than watching their queues?
Should the two threads' queues be combined? Since the watcher thread is continuously running a single method, I guess the worker thread would be able to just pull tasks from the single queue that the watcher thread puts in.
I don't think it'll matter since I'm not multiprocessing, but is this setup affected by Python's GIL (which I believe still exists in 3.4) in any way?
Should the watcher thread be running continuously like that? From what I understand, and please correct me if I'm wrong, asyncio is supposed to be used for event-based multithreading, which seems relevant to what I'm trying to do.
The main thread is basically always just waiting for the user to press a key to access a different part of the menu. This seems like a situation asyncio would be perfect for, but, again, I'm not sure.
Thanks!
When the worker thread's queue (or, more generally, any thread's queue) is empty, should it be stopped until it has something to do again, or is it okay to leave it continuously running? Do concurrent threads take up a lot of processing power when they aren't doing anything other than watching their queues?
You should just use a blocking call to queue.get(). That will leave the thread blocked (sleeping) until an item arrives, which means the GIL will be released and no processing power (or at least a very minimal amount) will be used. Don't use non-blocking gets in a while loop, since that requires many more CPU wakeups.
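A sketch of the difference (handle is a placeholder for the real work):

import queue
import threading

work_q = queue.Queue()

def handle(item):
    pass                            # placeholder for the real work

def worker():
    while True:
        item = work_q.get()         # sleeps until an item arrives: no CPU used
        if item is None:            # sentinel to shut the thread down
            break
        handle(item)

threading.Thread(target=worker, daemon=True).start()

# Avoid the busy-wait version, which wakes the CPU constantly:
#   while True:
#       try:
#           item = work_q.get_nowait()
#       except queue.Empty:
#           continue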
Should the two threads' queues be combined? Since the watcher thread is continuously running a single method, I guess the worker thread would be able to just pull tasks from the single queue that the watcher thread puts in.
If all the watcher is doing is pulling things off one queue and immediately putting them into another queue, where they get consumed by a single worker, it sounds like unnecessary overhead - you may as well consume them directly in the worker. It's not exactly clear to me if that's the case, though: is the watcher consuming from a queue, or just putting items into one? If it is consuming from a queue, who is putting stuff into it?
I don't think it'll matter since I'm not multiprocessing, but is this setup affected by Python's GIL (which I believe still exists in 3.4) in any way?
Yes, this is affected by the GIL. Only one of your threads can run Python bytecode at a time, so you won't get true parallelism, except when threads are doing I/O (which releases the GIL). If your worker thread is doing CPU-bound work, you should seriously consider running it in a separate process via multiprocessing, if possible.
Should the watcher thread be running continuously like that? From what I understand, and please correct me if I'm wrong, asyncio is supposed to be used for event-based multithreading, which seems relevant to what I'm trying to do.
It's hard to say, because I don't know exactly what "running continuously" means. What is it doing continuously? If it spends most of its time sleeping or blocking on a queue, it's fine - both of those things release the GIL. If it's constantly doing actual work, that will require the GIL, and therefore degrade the performance of the other threads in your app (assuming they're trying to do work at the same time). asyncio is designed for programs that are I/O-bound, and can therefore be run in a single thread, using asynchronous I/O. It sounds like your program may be a good fit for that depending on what your worker is doing.
The main thread is basically always just waiting for the user to press a key to access a different part of the menu. This seems like a situation asyncio would be perfect for, but, again, I'm not sure.
Any program where you're mostly waiting for I/O is potentially a good fit for asyncio - but only if you can find a library that makes curses (or whatever other GUI library you eventually choose) play nicely with it. Most GUI frameworks come with their own event loop, which will conflict with asyncio's, so you would need a library that can reconcile the two. You'd also need to make sure you can find asyncio-compatible versions of any other synchronous-I/O based library your application uses (e.g. a database driver).
That said, you're not likely to see any kind of performance improvement by switching from your thread-based program to something asyncio-based. It'll likely perform about the same. Since you're only dealing with 3 threads, the overhead of context switching between them isn't very significant, so switching from that to a single-threaded, asynchronous I/O approach isn't going to make a very big difference. asyncio will help you avoid thread synchronization complexity (if that's an issue with your app - it's not clear that it is), and at least theoretically, it would scale better if your app potentially needed lots of threads, but it doesn't seem like that's the case. I think for you, it's basically down to which style you prefer to code in (assuming you can find all the asyncio-compatible libraries you need).
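For reference, a rough sketch of the watcher/worker pair as coroutines on a single thread (poll_for_events and handle are placeholders; this uses modern async/await syntax, whereas Python 3.4 would need @asyncio.coroutine and yield from):

import asyncio

async def poll_for_events():
    await asyncio.sleep(1)          # placeholder for the real event check
    return 'some task'

async def handle(task):
    await asyncio.sleep(0)          # placeholder for the real work

async def watcher(q):
    while True:
        q.put_nowait(await poll_for_events())

async def worker(q):
    while True:
        await handle(await q.get())

async def main():
    q = asyncio.Queue()
    await asyncio.gather(watcher(q), worker(q))

asyncio.run(main())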

Python threads seem to freeze the machine

I'm using Python threads to resolve website IP addresses. This is my worker for the resolving; it runs as a daemon thread.
import socket

def get_ip_worker():
    """This is the worker (thread) process for resolving IPs: it takes a domain
    from the domains queue, resolves it, and saves the result to another queue."""
    socket.setdefaulttimeout(3)
    while True:
        domain = domains_q.get()
        try:
            addr_info = socket.getaddrinfo(domain, 80, 0, 0, socket.SOL_TCP)
            for family, socktype, proto, name, ip in addr_info:
                if family == socket.AF_INET:      # okay, it's IPv4
                    ip, port = ip
                    processed_q.put((ip, domain))
                elif family == socket.AF_INET6:   # okay, it's IPv6
                    ip, port, flow_info, scope_id = ip
                    processed_q.put((ip, domain))
        except socket.error:
            pass
            #print 'Socket Error'
        domains_q.task_done()
EDIT: the domain = domains_q.get() line blocks until an item is available in the queue.
The problem comes when I run this with 300 threads: load average seems okay, but a simple ls -la takes 5 seconds and everything is slow. Where did I go wrong? Should I use async I/O or multiprocessing?
Do you really need to process 300 connections in parallel with 300 threads? I have never tried creating that many threads, but it may be a problem, and it is definitely not a good way of solving this. Usually there are other options. First, you do not need 300 threads to handle 300 connections: create however many threads work well on your hardware and OS, use a single thread to retrieve requests from the main queue, and pass them to a thread from a thread pool.
BTW, check whether your "retrieve from a queue" operation really blocks and waits when the queue is empty. If it doesn't, the loops may execute all the time, whether or not there are incoming requests.
What you may really need is non-blocking sockets and something like select.select() to wait until one of your sockets is ready for reading or writing. You can write that code yourself; if you'd rather not, a good asynchronous networking library like gevent (or Twisted) can help improve the architecture of your program. Utilizing the full power of multi-core CPUs is a separate question, but I've heard there are solutions, at least for gevent (based on gunicorn, which runs several processes; I have never tried it).
That said, I think you are experiencing problems not with execution speed but with the need to wait efficiently for I/O on many objects at a time. If so, avoid massive use of threads for that purpose; it is usually ineffective not only in Python but even in languages without a GIL that are better suited to multithreaded programming. multiprocessing avoids the GIL but adds its own overhead, so I would suggest not using it here.
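If you do stay with threads, a much smaller fixed pool via concurrent.futures (Python 3, or the futures backport on Python 2) is an easy middle ground; a sketch with a made-up domain list:

import socket
from concurrent.futures import ThreadPoolExecutor, as_completed

def resolve(domain):
    return domain, socket.getaddrinfo(domain, 80, 0, 0, socket.SOL_TCP)

domains = ['example.com', 'example.org']        # placeholder input
with ThreadPoolExecutor(max_workers=50) as ex:  # 50 threads instead of 300
    futures = [ex.submit(resolve, d) for d in domains]
    for f in as_completed(futures):
        try:
            domain, info = f.result()
        except socket.error:
            continue                            # resolution failed; skip
        print(domain, info[0][4][0])            # first resolved address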

Non-blocking, non-concurrent tasks in Python

I am working on an implementation of a very small library in Python that has to be non-blocking.
On some production code, at some point, a call to this library will be made, and it needs to do its own work; in its simplest form it would be a callable that needs to pass some information to a service.
This "passing information to a service" is a non-intensive task, probably sending some data to an HTTP service or something similar. It also doesn't need to be concurrent or to share information; however, it does need to terminate at some point, possibly with a timeout.
I have used the threading module before and it seems the most appropriate thing to use, but the application where this library will be used is so big that I am worried about hitting the thread limit.
In local testing I was able to hit that limit at around ~2,500 spawned threads.
There is a good possibility (given the size of the application) that I could hit that limit easily. It also makes me wary of using a Queue, given the memory implications of placing tasks into it at a high rate.
I have also looked at gevent, but I couldn't find an example of spawning something that does some work and terminates without joining. The examples I went through were calling .join() on a spawned Greenlet or on an array of greenlets.
I don't need to know the result of the work being done! It just needs to fire off, try to talk to the HTTP service, and die with a sensible timeout if it didn't succeed.
Have I misinterpreted the guides/tutorials for gevent? Is there any other way to spawn a callable in a fully non-blocking fashion that can't hit a ~2,500 limit?
This is a simple example using threading that does work as I would expect:
import time
from threading import Thread

class Synchronizer(Thread):
    def __init__(self, number):
        self.number = number
        Thread.__init__(self)

    def run(self):
        # Simulating some work
        time.sleep(5)
        print self.number

for i in range(4000):  # totally doesn't get past 2,500
    sync = Synchronizer(i)
    sync.setDaemon(True)
    sync.start()
    print "spawned a thread, number %s" % i
And this is what I've tried with gevent, where it obviously blocks at the end to see what the workers did:
import gevent

def task(pid):
    """
    Some non-deterministic task
    """
    gevent.sleep(1)
    print('Task', pid, 'done')

for i in range(100):
    gevent.spawn(task, i)
EDIT:
My problem stemmed from my lack of familiarity with gevent. While the Thread code was indeed spawning threads, it also prevented the script from terminating while it did some work.
gevent doesn't do that in the code above unless you add a .join(). All I had to do to see the gevent code do some work with the spawned greenlets was to make it a long-running process. This definitely fixes my problem, as the code that needs to spawn the greenlets runs within a framework that is itself a long-running process.
Nothing requires you to call join in gevent, if you're expecting your main thread to last longer than any of your workers.
The only reason for the join call is to make sure the main thread lasts at least as long as all of the workers (so that the program doesn't terminate early).
Why not spawn a subprocess with a connected pipe or similar? Instead of a callable, just drop your data on the pipe and let the subprocess handle it completely out of band.
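A minimal sketch of that idea with multiprocessing (send_to_service stands in for the HTTP call):

from multiprocessing import Pipe, Process

def send_to_service(data):
    pass                                # placeholder: HTTP call with a timeout

def sender(conn):
    while True:
        data = conn.recv()              # blocks until the parent sends data
        if data is None:                # sentinel: shut down
            break
        send_to_service(data)

if __name__ == '__main__':
    parent_end, child_end = Pipe()
    Process(target=sender, args=(child_end,), daemon=True).start()
    # In the production code, "calling the library" is now just:
    parent_end.send({'some': 'payload'})
    parent_end.send(None)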
As explained in Understanding Asynchronous/Multiprocessing in Python, the asyncoro framework supports asynchronous, concurrent processes. You can run tens or hundreds of thousands of concurrent processes; for reference, running 100,000 simple processes takes about 200 MB. If you want, you can mix threads in the rest of the system with coroutines in asyncoro (provided the threads and coroutines don't share variables, but use coroutine interface functions to send messages, etc.).

Twisted - should this code be run in separate threads

I am running some code that has X workers, each worker pulling tasks from a queue every second. For this I use Twisted's task.LoopingCall() function. Each worker fulfills its request (scrapes some data) and then pushes the response back to another queue. All this is done in the reactor thread, since I am not deferring any of it to other threads.
I am wondering whether I should run all these jobs in separate threads or leave them as they are. And if so, is there a problem if I call task.LoopingCall every second from each thread?
No, you shouldn't use threads. You can't call LoopingCall from a thread (unless you use reactor.callFromThread), and it wouldn't make your code faster anyway.
If you notice a performance problem, you may want to profile your workload, figure out where the CPU-intensive work is, and then put that work into multiple processes, spawned with spawnProcess. You really can't skip the step where you figure out where the expensive work is, though: there's no magic pixie dust you can sprinkle on your Twisted application that will make it faster. If you choose a part of your code which isn't very intensive and doesn't require blocking resources like CPU or disk, then you will discover that the overhead of moving work to a different process may outweigh any benefit of having it there.
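A bare-bones spawnProcess sketch (worker.py is a hypothetical script containing the CPU-heavy part):

from twisted.internet import protocol, reactor

class WorkerProtocol(protocol.ProcessProtocol):
    def outReceived(self, data):
        print('worker said:', data)      # stdout from the child process

    def processEnded(self, reason):
        reactor.stop()

reactor.spawnProcess(WorkerProtocol(), 'python',
                     ['python', 'worker.py'], env=None)
reactor.run()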
You shouldn't use threads for that. Doing it all in the reactor thread is ok. If your scraping uses twisted.web.client to do the network access, it shouldn't block, so you will go as fast as it gets.
First, beware that Twisted's reactor may run some work in an internal thread pool (DNS resolution, for example) without telling you. Of course, I haven't seen your program in particular.
Second, in Python (that is, in CPython) spawning threads to do non-blocking computation has little benefit. Read up on the GIL (Global Interpreter Lock).
