I am using Threading module in python. How to know how many max threads I can have on my system?
I've never found a hard-coded or configurable MAX value, but there is definitely a practical limit. Run the following program:
import threading
import time

def mythread():
    time.sleep(1000)

def main():
    threads = 0                 # thread counter
    y = 1000000                 # a MILLION of 'em!
    for i in range(y):
        try:
            x = threading.Thread(target=mythread, daemon=True)
            x.start()           # start each thread
            threads += 1        # count it only after start() succeeds
        except RuntimeError:    # too many throws a RuntimeError
            break
    print("{} threads created.\n".format(threads))

if __name__ == "__main__":
    main()
I suppose I should mention that this is using Python 3.
The first function, mythread(), is the function which will be executed as a thread. All it does is sleep for 1000 seconds then terminate.
The main() function is a for-loop which tries to start one million threads. The daemon property is set to True simply so that we don't have to clean up all the threads manually.
If a thread cannot be created Python throws a RuntimeError. We catch that to break out of the for-loop and display the number of threads which were successfully created.
Because daemon is set True, all threads terminate when the program ends.
If you run it a few times in a row you're likely to see that a different number of threads will be created each time. On the machine from which I'm posting this reply, I had a minimum of 18,835 during one run and a maximum of 18,863 during another run. And the more you fiddle with the code (the more code you add in order to experiment or gather more information), the fewer threads can be created.
So, how does this apply to the real world?
Well, a server may need the ability to start a triple-digit number of threads, but in most other cases you should re-evaluate your game plan if you think you're going to be generating a large number of threads.
One thing you need to consider if you're using Python: if you're using a standard distribution of Python, your system will only execute one Python thread at a time, including the main thread of your program, so adding more threads to your program or more cores to your system doesn't really get you anything when using the threading module in Python. You can research all of the pedantic details and ultracrepidarian opinions regarding the GIL / Global Interpreter Lock for more info on that.
What that means is that cpu-bound (computationally-intensive) code doesn't benefit greatly from factoring it into threads.
I/O-bound (waiting for file read/write, network read, or user I/O) code, however, benefits greatly from multithreading! So, start a thread for each network connection to your Python-based server.
Threads can also be great for triggering/throwing/raising signals at set periods, or simply to block out the processing sections of your code more logically.
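As a rough illustration of why I/O-bound code benefits, here's a sketch in which time.sleep stands in for a network wait (an assumption about your workload, not real socket code):

```python
import threading
import time

def fake_io_task():
    # stands in for a blocking network or disk wait
    time.sleep(0.5)

start = time.monotonic()
threads = [threading.Thread(target=fake_io_task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start
print("elapsed: %.2f s" % elapsed)  # roughly 0.5 s, not 2 s: the waits overlap
```

The four half-second waits overlap, so the whole thing finishes in about half a second instead of two; that overlap is exactly what you don't get from CPU-bound work under the GIL.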
Related
Is there any way to have a thread terminate itself once it is finished its target function?
I have an application that requires running a wait for connection function every 2min. This function takes around 5 seconds to run.
import time
import threading

count = 0

def fivesecondwait():
    time.sleep(5)
    print("Finished side function")

t = threading.Thread(target=twentysecondwait)

while True:
    if count == 10:
        count = 0
        t.start()
    print("Running Main Application")
    count += 1
    time.sleep(1)
If I try and join the thread the loop stops running, but if I do not include a join it results in the error "Threads can only be started once". I need the function to be called every 10 seconds and to not block the execution of the loop. So is there any way to have the thread terminate itself so it can be used again?
This is for a measurement device that needs to continue measuring every 50ms while repeatedly waiting for 3-4 seconds for a connection from an MQTT host.
Your question and your code don't quite seem to match up, but this much is true: you said, "it results in the error "Threads can only be started once"", and your code appears to create a single thread instance, which it then attempts to start() more than one time.
The error message is self-explanatory. Python will not allow you to call t.start() more than one time on the same thread instance, t. The reason is that a thread isn't just an object. It's an object that represents one execution of your code (i.e., one trip through your code).
You can't take the same trip more than one time. Even if you go to the same beach that you visited last year, even if you follow the same route, and stop to rest in all the same places, it's still a different trip. Threads obey the same model.
If you want to run twentysecondwait() multiple times "in the background," there's nothing wrong with that, but you'll have to create and start a new thread instance for each execution.
Is there a practical limit to how many thread instances can be created?
There probably is no practical limit to how many can be sequentially created and destroyed,* but each running thread occupies significant memory and uses other resources as well. So, yes, there is a limit** to how many threads can be "alive" at the same time.
Your example thread function takes 20 seconds to execute. If the program starts a new one every ten seconds, only about two will be alive at any moment, which is harmless. But if each thread ran for, say, an hour, roughly 360 would be alive at once, and if the threads never terminated at all, the count would grow without bound. The program certainly would not be able to sustain that for 90 days.
* A better-performing alternative to continually creating and destroying threads is to use a thread pool such as Python's concurrent.futures.ThreadPoolExecutor.
** The limit is going to depend on what version of Python you are running, on what OS you are running, on how much memory your computer has, etc. The limit probably is not well defined in any case.
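A minimal sketch of that pooled alternative (the task function and its timing here are stand-ins, shortened so the example finishes quickly):

```python
import concurrent.futures
import time

def slow_task(n):
    # stands in for the 20-second worker function; shortened for the example
    time.sleep(0.1)
    return n * 2

# four reusable worker threads instead of one new thread per task
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(slow_task, range(10)))
print(results)  # results come back in submission order
```

Ten tasks run on only four threads; the pool hands each idle thread the next task, so you never pay thread-creation cost per task.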
A thread will automatically terminate itself when it finishes running.
But:
a) you can't start the same thread twice and
b) you can't re-start a thread that terminated.
Simple solution: always start a new thread.
while True:
    if count == 10:
        count = 0
        t = threading.Thread(target=fivesecondwait)
        t.start()
    ...
[...] so it can be used again?
No. You could however keep a single thread running and use synchronization objects to tell the thread when to do something and when not.
This will require a bit more knowledge but be less resource-intensive.
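Here's a sketch of that single-thread approach, using threading.Event objects as the synchronization (all names here are illustrative, not from the question, and the real work is a stand-in):

```python
import threading

work_ready = threading.Event()
work_done = threading.Event()
stop = threading.Event()
results = []

def worker():
    # one long-lived thread: wait for a signal, do the job, signal back
    while not stop.is_set():
        work_ready.wait()
        work_ready.clear()
        if stop.is_set():
            break
        results.append("ran the five-second job")  # stand-in for the real work
        work_done.set()

t = threading.Thread(target=worker, daemon=True)
t.start()

for _ in range(3):
    work_done.clear()
    work_ready.set()   # tell the thread to do one unit of work
    work_done.wait()   # real code would instead sleep ~10 s between rounds

stop.set()
work_ready.set()       # wake the thread so it can notice 'stop' and exit
t.join()
print(len(results))
```

One thread is created once and reused three times; no "threads can only be started once" error, and no per-run thread-creation cost.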
I am learning about threads and found one thing confusing.
from threading import Thread
from time import sleep

a = 0

def th1():
    lasta = 0
    while a < 200:
        if a != lasta:
            lasta = a
            print(a)

thrd = Thread(target=th1)
print(a)
thrd.start()

for i in range(1, 200):
    a += 1
    sleep(0)
this prints numbers from 0 to 199, but
from threading import Thread
from time import sleep

a = 0

def th1():
    lasta = 0
    while a < 200:
        if a != lasta:
            lasta = a
            print(a)

thrd = Thread(target=th1)
print(a)
thrd.start()

for i in range(1, 200):
    a += 1
this code only prints 0 and 199.
I think what's happening is that the second version has no (let's say) stop statement that would make the program execute a different portion of the code, while the first one pauses the loop and gives the other thread a chance to execute; it then checks whether 0 seconds have passed and continues the for loop. I don't know if I'm right; if you could help me understand what is really going on, I would be glad.
Also, how can I master this kind of thing? For example, running two threads continuously and letting them do stuff according to one global variable. Because, as I can clearly see, even though I'm using a different thread, they aren't really doing stuff together; they are still waiting for each other.
Thanks!
Your specific guess is quite correct: a 0-second sleep is quite different from no statement at all. As you guessed, it suspends the running thread and allows another thread to receive control of the CPU.
If you use merely multi-threading, then you get the interleaving situation you describe here: you have two logical execution threads, but only one logical processor. For parallel processing, you need multi-processing. There are many tutorials and examples available on line; simply search for "Python multiprocessing tutorial".
Your request "how can i master this kind of things?" is far too general to be a Stack Overflow question. You master this knowledge as you master any knowledge: you find materials that match your learning styles, work through those materials, and practice.
How multiple threads will run depends on the environment. Threads do not 'do stuff together'. They run somewhat independently.
On a single processor machine, or in a program which is running multiple threads on a single processor, they never run at the same time. Each thread gets a time slice. It runs for a fixed amount of time (unless it yields early, as with sleep). When its time slice is finished, the next thread runs for a fixed amount of time, or until it yields.
In your first example, the main thread yields each time it increments, so the th1 thread will run once per increment, and will see each number.
In the second example, the main thread will run a full timeslice before the th1 thread is given execution time. This is sufficient for the loop in the main thread to cycle many times. When the th1 thread runs again, it has 'missed' many values of a.
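If the goal is for the second thread to see every value rather than whatever happens to be in the variable when it gets a timeslice, handing the values over through a queue.Queue is more reliable than depending on scheduling. A sketch, with names of my own choosing:

```python
import queue
import threading

q = queue.Queue()
seen = []

def consumer():
    while True:
        value = q.get()    # blocks until the producer hands over a value
        if value is None:  # sentinel: producer is finished
            break
        seen.append(value)

t = threading.Thread(target=consumer)
t.start()

for i in range(200):
    q.put(i)               # every value is delivered, regardless of scheduling
q.put(None)
t.join()
print(len(seen))  # 200
```

Unlike polling a shared global, nothing is "missed" here: the queue buffers every value until the consumer gets around to it.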
So, in this code I am testing some multithreading to speed up my code. If I have a large number of tasks in the queue I get a "RuntimeError: can't start new thread" error. For example, range(0,100) works, but range(0,1000) won't work. I am using threading.Semaphore(4) and it is correctly working, only processing 4 threads at a time; I tested this. I know why I am getting this error: even though I am using threading.Semaphore, it still technically starts all the threads at the start and just pauses them until it's each thread's turn to run, and starting 1000 threads at the same time is too much for the PC to handle. Is there any way to fix this problem? (Also, yes, I know about the GIL.)
def thread_test():
    threads = []
    for t in self.tasks:
        t = threading.Thread(target=utils.compareV2.run_compare_temp, args=t)
        t.start()
        threads.append(t)
    for t in threads:
        t.join()

for x in range(0, 100):
    self.tasks.append(("arg1", "arg2"))
thread_test()
Instead of starting 1000 threads and then only letting 4 do any work at a time, start 4 threads and let them all do work.
What are the extra 996 threads buying you?
They're using memory and putting pressure on the system's scheduler.
The reason you get the RuntimeError is probably that you've run out of memory for the call stacks for all of those threads. The default limit varies by platform but it's probably something close to 8MiB. A thousand of those and you're up around 8GiB... just for stack space.
You can reduce the amount of memory available for the call stack of each thread with threading.stack_size(...). This will let you fit more threads on your system (but be sure you don't set it below the amount of stack space you actually need or you'll crash your process).
Most likely, what you actually want is a number of processes close to the number of physical cores on your host or a threading system that's more light-weight than what the threading module gives you.
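A sketch of the "start 4 threads and let them all do work" approach, using a queue of tasks. The real target function (utils.compareV2.run_compare_temp) isn't shown, so a placeholder stands in for it:

```python
import queue
import threading

def run_compare_stub(a, b):
    # placeholder for the real comparison work
    return (a, b)

task_q = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    while True:
        args = task_q.get()
        if args is None:      # sentinel tells this worker to exit
            break
        r = run_compare_stub(*args)
        with results_lock:
            results.append(r)

# only four threads ever exist, no matter how many tasks there are
workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()

for x in range(1000):
    task_q.put(("arg1", "arg2"))
for _ in workers:
    task_q.put(None)          # one sentinel per worker
for w in workers:
    w.join()
print(len(results))  # 1000
```

A thousand tasks, four threads, no RuntimeError: the queue holds the backlog instead of a thousand paused thread stacks.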
I have a python script that has to take many permutations of a large dataset, score each permutation, and retain only the highest scoring permutations. The dataset is so large that this script takes almost 3 days to run.
When I check my system resources in Windows, only 12% of my CPU is being used and only 4 out of 8 cores are doing any work at all. Even if I put the python.exe process at highest priority, this doesn't change.
My assumption is that dedicating more CPU usage to running the script could make it run faster, but my ultimate goal is to reduce the runtime by at least half. Is there a python module or some code that could help me do this? As an aside, does this sound like a problem that could benefit from a smarter algorithm?
Thank you in advance!
There are a few ways to go about this, but check out the multiprocessing module. This is a standard library module for creating multiple processes, similar to threads but without the limitations of the GIL.
You can also look into the excellent Celery library. This is a distributed task queue, and has a lot of great features. It's a pretty easy install, and easy to get started with.
I can answer a HOW-TO with a simple code sample. While this is running, run /bin/top and see your processes. Simple to do. Note, I've even included how to clean up afterwards from a keyboard interrupt - without that, your subprocesses will keep running and you'll have to kill them manually.
from multiprocessing import Process
import traceback
import logging
import time

class AllDoneException(Exception):
    pass

class Dum(object):
    def __init__(self):
        self.numProcesses = 10
        self.logger = logging.getLogger()
        self.logger.setLevel(logging.INFO)
        self.logger.addHandler(logging.StreamHandler())

    def myRoutineHere(self, processNumber):
        print("I'm in process number %d" % (processNumber))
        time.sleep(10)
        # optional: raise AllDoneException

    def myRoutine(self):
        plist = []
        try:
            for pnum in range(0, self.numProcesses):
                p = Process(target=self.myRoutineHere, args=(pnum, ))
                p.start()
                plist.append(p)
            while True:
                isAliveList = [p.is_alive() for p in plist]
                if True not in isAliveList:
                    break
                time.sleep(1)
        except KeyboardInterrupt:
            self.logger.warning("Caught keyboard interrupt, exiting.")
        except AllDoneException:
            self.logger.warning("Caught AllDoneException, exiting normally.")
        except Exception:
            self.logger.warning("Caught exception, exiting: %s" % (traceback.format_exc()))
        for p in plist:
            p.terminate()

if __name__ == "__main__":
    d = Dum()
    d.myRoutine()
You should spawn new processes instead of threads to utilize cores in your CPU. My general rule is one process per core. So you split your problem input space into the number of cores available, each process getting part of the problem space.
Multiprocessing is best for this. You could also use Parallel Python.
Very late to the party - but in addition to using multiprocessing module as reptilicus said, also make sure to set "affinity".
Some python modules fiddle with it, effectively lowering the number of cores available to Python:
https://stackoverflow.com/a/15641148/4195846
Due to the Global Interpreter Lock, one Python process cannot take advantage of multiple cores. But if you can somehow parallelize your problem (which you should do anyway), then you can use multiprocessing to spawn as many Python processes as you have cores and process the data in each subprocess.
The scenario: We have a python script that checks thousands of proxies simultaneously.
The program uses threads, 1 per proxy, to speed up the process. When it reaches the 1007th thread, the script crashes because of the thread limit.
My solution is: A global variable that gets incremented when a thread spawns and decrements when a thread finishes. The function which spawns the threads monitors the variable so that the limit is not reached.
What will your solution be, friends?
Thanks for the answers.
You want to do non-blocking I/O with the select module.
There are a couple of different specific techniques. select.select should work on every major platform. There are other variations that are more efficient (and could matter if you are checking tens of thousands of connections simultaneously), but you will then need to write the code for your specific platform.
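A tiny sketch of select.select, where a local socketpair stands in for one proxy connection (the real code would pass all the pending proxy sockets in the first list):

```python
import select
import socket

# a connected local pair stands in for a proxy connection
a, b = socket.socketpair()
b.sendall(b"pong")

# block at most 1 second waiting for 'a' to become readable
readable, _, _ = select.select([a], [], [], 1.0)
data = a.recv(4096) if a in readable else b""
print(data)  # b'pong'
a.close()
b.close()
```

One thread can hand select.select a whole list of sockets and react only to the ones that are ready, which is how you check thousands of proxies without thousands of threads.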
I've run into this situation before. Just make a pool of Tasks, and spawn a fixed number of threads that run an endless loop which grabs a Task from the pool, runs it, and repeats. Essentially you're implementing your own thread abstraction and using the OS threads to implement it.
This does have drawbacks, the major one being that if your Tasks block for long periods of time they can prevent the execution of other Tasks. But it does let you create an unbounded number of Tasks, limited only by memory.
Does Python have any sort of asynchronous IO functionality? That would be the preferred answer IMO - spawning an extra thread for each outbound connection isn't as neat as having a single thread which is effectively event-driven.
Using different processes, and pipes to transfer data. Using threads in python is pretty lame. From what I heard, they don't actually run in parallel, even if you have a multi-core processor... But maybe it was fixed in python3.
My solution is: A global variable that gets incremented when a thread spawns and decrements when a thread finishes. The function which spawns the threads monitors the variable so that the limit is not reached.
The standard way is to have each thread get next tasks in a loop instead of dying after processing just one. This way you don't have to keep track of the number of threads, since you just fire a fixed number of them. As a bonus, you save on thread creation/destruction.
A counting semaphore should do the trick.
from socket import *
from threading import *

maxthreads = 1000
threads_sem = Semaphore(maxthreads)

class MyThread(Thread):
    def __init__(self, conn, addr):
        Thread.__init__(self)
        self.conn = conn
        self.addr = addr

    def run(self):
        try:
            read = self.conn.recv(4096)
            if read == b'go away\n':
                global running
                running = False
            self.conn.close()
        finally:
            threads_sem.release()

sock = socket()
sock.bind(('0.0.0.0', 2323))
sock.listen(1)

running = True
while running:
    conn, addr = sock.accept()
    threads_sem.acquire()
    MyThread(conn, addr).start()
Make sure your threads get destroyed properly after they've been used, or use a thread pool, although from what I've seen they're not that effective in Python.
see here:
http://code.activestate.com/recipes/203871/
Using the select module or a similar library would most probably be a more efficient solution, but that would require bigger architectural changes.
If you just want to limit the number of threads, a global counter should be fine, as long as you access it in a thread safe way.
Be careful to minimize the default thread stack size. At least on Linux, the default limit puts severe restrictions on the number of threads you can create. Linux reserves a chunk of the process's virtual address space for each thread's stack (usually 10MB). 300 threads x 10MB of stack = 3GB of virtual address space dedicated to stacks, and on a 32-bit system you have a 3GB limit. You can probably get away with much less.
Twisted is a perfect fit for this problem. See http://twistedmatrix.com/documents/current/core/howto/clients.html for a tutorial on writing a client.
If you don't mind using alternate Python implementations, Stackless has light-weight (non-native) threads. The only company I know doing much with it, though, is CCP; they use it for tasklets in their game on both the client and server. You still need to do async I/O with Stackless, because if a thread blocks, the process blocks.
As mentioned in another thread, why do you spawn a new thread for each single operation? This is a classical producer-consumer problem, isn't it? Depending a bit on how you look at it, the proxy checkers might be consumers or producers.
Anyway, the solution is to make a "queue" of "tasks" to process, and make the threads in a loop check if there are any more tasks to perform in the queue, and if there isn't, wait a predefined interval, and check again.
You should protect your queue with a locking mechanism, e.g. a semaphore, to prevent race conditions.
It's really not that difficult. But it requires a bit of thinking getting it right. Good luck!