Running Python Threads Simultaneously - python

I'm looking at running some code to auto-save a game every X minutes, but it also has a thread accepting keyboard input. Here's some sample code with two countdowns that I'm trying to run simultaneously, but they appear to run one after the other. How can I get them to run at the same time?
import time
import threading

def countdown(length, delay):
    length += 1
    while length > 0:
        time.sleep(delay)
        length -= 1
        print(length, end=" ")

countdown_thread = threading.Thread(target=countdown(3,2)).start()
countdown_thread2 = threading.Thread(target=countdown(3,1)).start()
Update: I'm not really sure what difference Python makes between a process and a thread (would a process show up as a second process in Windows?), but here's my updated code. It still runs sequentially, not at the same time.
import time
from threading import Thread
from multiprocessing import Process

def countdown(length, delay):
    length += 1
    while length > 0:
        time.sleep(delay)
        length -= 1
        print(length, end=" ")

p1 = Process(target=countdown(5,0.3))
print("")
p2 = Process(target=countdown(10,0.1))
print("")
Thread(target=countdown(5,0.3))
print("")
Thread(target=countdown(10,0.1))

When you create the threads, they should be created as
    Thread(target=countdown, args=(3, 2))
As-is, your code runs countdown(3,2) first and passes its return value (None) as the Thread target!
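For reference, here is the asker's first snippet with that fix applied (also note that start() returns None, so keep the Thread object and call start() separately):
import time
import threading

def countdown(length, delay):
    length += 1
    while length > 0:
        time.sleep(delay)
        length -= 1
        print(length, end=" ", flush=True)

# Pass the function itself plus its arguments; don't call it here.
countdown_thread = threading.Thread(target=countdown, args=(3, 2))
countdown_thread2 = threading.Thread(target=countdown, args=(3, 1))
countdown_thread.start()
countdown_thread2.start()
With this change the two countdowns interleave instead of running one after the other.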

AFAIK threads can't run simultaneously (in CPython, the GIL means only one thread executes Python bytecode at a time).
I suggest you take a look at multiprocessing instead:
https://docs.python.org/3/library/multiprocessing.html
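If you do want separate OS processes (which would show up as extra python processes in Task Manager, per the asker's update), a minimal sketch using multiprocessing might look like this; the __main__ guard is required on Windows:
import time
from multiprocessing import Process

def countdown(length, delay):
    length += 1
    while length > 0:
        time.sleep(delay)
        length -= 1
        print(length, end=" ", flush=True)

if __name__ == '__main__':
    p1 = Process(target=countdown, args=(5, 0.3))
    p2 = Process(target=countdown, args=(10, 0.1))
    p1.start()
    p2.start()
    p1.join()
    p2.join()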

Related

How do I delay one loop while the other runs independently?

import time

i = 1

def sendData(x):
    time.sleep(5)
    print("delayed data: ", x)

while (1):
    print(i)
    sendData(i)
    i += 1
    time.sleep(0.5)
What I want is to print a value every 5 seconds while the infinite loop runs, so I can see values printing every 0.5 seconds and another value being printed every 5 seconds.
At the moment, the loop still gets delayed because of the time.sleep(5) in the helper function. Any help is appreciated. Thank you.
You can achieve your goal using the threading library. It allows you to run code in the "background" while your main code runs alongside it.
Here's an example of how to run the sendData function in the background while the main loop executes concurrently. Notice that I modified sendData to use the global variable i instead of receiving it as a parameter, so that the main loop can keep updating i.
import threading
import time

i = 1

def sendData():
    while True:
        time.sleep(5)
        print("delayed data: ", i)

thread = threading.Thread(target=sendData)
thread.start()

while (1):
    print(i)
    i += 1
    time.sleep(0.5)
You can read more about threading and about sharing variables between threads in the Python documentation.
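One caveat about the example above: both loops run forever, so the program only stops when killed. If the main loop were finite, marking the background thread as a daemon would let the process exit without waiting for it. A minimal self-contained sketch (send_data here is a simplified stand-in):
import threading
import time

def send_data():
    while True:
        time.sleep(5)
        print("delayed data")

# daemon=True: this thread will not keep the process alive on exit
thread = threading.Thread(target=send_data, daemon=True)
thread.start()

for i in range(10):  # a finite main loop, for illustration
    print(i)
    time.sleep(0.5)
# the process exits here even though send_data is still looping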
You could achieve this with an asynchronous approach, or with multi-threading or multiprocessing.
Your current approach blocks execution because it runs step by step.
Choosing an approach depends on the task you want to perform in the sendData method, but from the name, I'd suggest asyncio should work just fine.
import asyncio

async def send_data(x):
    await asyncio.sleep(5)  # could be a network request as well
    print("delayed data: ", x)

async def main():
    i = 1
    while True:
        print(i)
        # Create a non-blocking task (runs in the background)
        asyncio.create_task(send_data(i))
        i += 1
        await asyncio.sleep(0.5)

if __name__ == '__main__':
    asyncio.run(main())
This code may help you, but you will need to modify it to fit your needs:
# importing module
import time

# running loop from 0 to 4
for i in range(0, 5):
    # printing numbers
    print("delayed data: ", i)
    # adding 0.5 seconds time delay
    time.sleep(0.5)
The output prints every 0.5 seconds, like this:
delayed data: 0
delayed data: 1
delayed data: 2
delayed data: 3
delayed data: 4

How long does it take to create a thread in Python?

I'm trying to finish my programming course and I'm stuck on one exercise.
I have to measure how long it takes Python to create threads, and whether that depends on the number of threads already created.
I wrote a simple script and I don't know if it is right:
import threading
import time

def fun1(a, b):
    c = a + b
    print(c)
    time.sleep(100)

times = []
for i in range(10000):
    start = time.time()
    threading.Thread(target=fun1, args=(55, 155)).start()
    end = time.time()
    times.append(end - start)
print(times)
In times[] I got 10000 results near 0.0 or exactly 0.0.
And now I don't know if I built the test wrong because I'm misunderstanding something, or maybe the result is correct and the time to create a thread does not depend on the number of already created ones?
Can you help me with it? If the solution is wrong, explain why; if it's correct, confirm it. :)
So there are two ways to interpret your question:
1. Whether the existence of other threads (that have not been started) affects creation time for new threads.
2. Whether other threads running in the background (threads already started) affect creation time for new threads.
Checking the first one
In this case, you simply don't start the threads:
import threading
import time

def fun1(a, b):
    c = a + b
    print(c)
    time.sleep(100)

times = []
for i in range(10):
    start = time.time()
    threading.Thread(target=fun1, args=(55, 155))  # don't start
    end = time.time()
    times.append(end - start)
print(times)
output for 10 runs:
[4.696846008300781e-05, 2.8848648071289062e-05, 2.6941299438476562e-05, 2.5987625122070312e-05, 2.5987625122070312e-05, 2.5987625122070312e-05, 2.5987625122070312e-05, 2.5987625122070312e-05, 2.5033950805664062e-05, 2.6941299438476562e-05]
As you can see, the times are about the same (as you would expect).
Checking the second one
In this case, we want the previously created threads to keep running as we create more threads. So we give each thread a task that never finishes:
import threading
import time

def fun1(a, b):
    while True:
        pass  # never ends

times = []
for i in range(100):
    start = time.time()
    threading.Thread(target=fun1, args=(55, 155)).start()
    end = time.time()
    times.append(end - start)
print(times)
Output: over 100 runs, the first creation took 0.0003440380096435547 s whereas the last took 0.3017098903656006 s, an increase of nearly three orders of magnitude.
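A side note on the measurement itself: time.time() has coarse resolution on some platforms, which is likely why the original script reported many values of exactly 0.0. time.perf_counter() is the usual choice for timing short operations; a small sketch of the same measurement:
import threading
import time

def fun1(a, b):
    time.sleep(100)

start = time.perf_counter()  # high-resolution timer
threading.Thread(target=fun1, args=(55, 155)).start()
end = time.perf_counter()
print(end - start)  # no longer collapses to exactly 0.0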

How to listen for an event with Python threading

I'm trying to implement a multi-threaded program in Python and am having trouble.
I'm trying to design a program where, when the main thread receives a specific command, the counter goes back to the previous number and continues counting.
The following is the code I tried to write:
import threading
import time

count = 0
preEvent = threading.Event()
runEvent = threading.Event()
runEvent.set()
preEvent.clear()

def pre():
    global count
    while True:
        if preEvent.isSet():
            count -= 1
            preEvent.clear()

def doSomething():
    global count
    while True:
        # if not runEvent.isSet():
        #     runEvent.wait()
        print(count)
        count += 1
        time.sleep(1)

def main():
    t1 = threading.Thread(target=pre, args=())
    t2 = threading.Thread(target=doSomething, args=())
    t1.start()
    t2.start()
    command = input("input command")
    while command != 'quit':
        command = input("input command")
        if command == 'pre':
            preEvent.set()

main()
But I encountered a few problems:
1. How do I block t1 at the same moment I input a specific command?
2. How do I make t1 start from the beginning when it resumes, instead of continuing from the point where it was blocked?
Regarding question 1, I tried adding a condition check before the print(count) command, but if I enter the 'pre' command while the program is outputting count, the program will still perform count += 1.
After that, the program goes back to check the condition and blocks t1, but this does not achieve the synchronization effect I wanted.
Is there a way to achieve this goal?
Question 2 is similar to question 1. I hope that whenever I enter the command, my program will output like this:
1
2
3
4
pre
3
4
...
But if t1 is blocked after finishing the print(count) instruction, then when the event is cleared, t1 continues executing from count += 1, so the output becomes the following:
1
2
3
4
pre
4
5
...
I tried to find information on the Internet, but I didn't know what keywords to search for.
Is there a method or library that can achieve this?
I have tried my best to describe my problem, but it may not be clear enough; if anything is unclear, I can add more explanation.
Thank you all for your patience in reading my question.
The threading module has a Condition class whose wait method blocks until notify or notify_all is called. That should accomplish what you're looking for in your first question. For question 2, you can either define a function that handles the exit case, or just recreate the thread so it starts from the beginning.
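For clarity, wait/notify live on threading.Condition (threading.Event has wait/set instead). A minimal sketch of a thread blocking on a Condition until the main thread notifies it; all names here are illustrative:
import threading

cond = threading.Condition()
ready = False

def worker():
    with cond:
        while not ready:  # guard against spurious wakeups
            cond.wait()   # blocks until notify()/notify_all()
    print("worker woke up")

t = threading.Thread(target=worker)
t.start()

with cond:
    ready = True
    cond.notify_all()  # wake the waiting thread
t.join()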
It can simply work like this; maybe it helps:
from threading import Thread, Event

count = 0
counter = Event()

def pre():
    global count
    counter.wait()
    count -= 1
    print(count)

def main():
    global count
    print(count)
    while True:
        command = input("Enter 1 to increase, Enter 2 to decrease :")
        if command == "1":
            t1 = Thread(target=pre, args=())
            t1.start()
            counter.set()
        else:
            count += 1
            print(count)

main()

Different inputs for different processes in python multiprocessing

Please bear with me as this is a bit of a contrived example of my real application. Suppose I have a list of numbers and I wanted to add a single number to each number in the list using multiple (2) processes. I can do something like this:
import multiprocessing

my_list = list(range(100))
my_number = 5
data_line = [{'list_num': i, 'my_num': my_number} for i in my_list]

def worker(data):
    return data['list_num'] + data['my_num']

pool = multiprocessing.Pool(processes=2)
pool_output = pool.map(worker, data_line)
pool.close()
pool.join()
Now however, there's a wrinkle to my problem. Suppose that I wanted to alternate adding two numbers (instead of just adding one). So around half the time, I want to add my_number1 and the other half of the time I want to add my_number2. It doesn't matter which number gets added to which item on the list. However, the one requirement is that I don't want to be adding the same number simultaneously at the same time across the different processes. What this boils down to essentially (I think) is that I want to use the first number on Process 1 and the second number on Process 2 exclusively so that the processes are never simultaneously adding the same number. So something like:
my_num1 = 5
my_num2 = 100
data_line = [{'list_num': i, 'my_num1': my_num1, 'my_num2': my_num2} for i in my_list]

def worker(data):
    # if in Process 1:
    return data['list_num'] + data['my_num1']
    # if in Process 2:
    return data['list_num'] + data['my_num2']
    # and so forth
# and so forth
Is there an easy way to specify specific inputs per process? Is there another way to think about this problem?
multiprocessing.Pool lets you pass an initializer function that is executed in each worker process before the actual given function is run.
You can use it together with a global variable to let your function know which process it is running in.
You probably want to control which initial number each process gets; you can use a Queue to tell the processes which number to pick up.
This solution is not optimal, but it works.
import multiprocessing

process_number = None

def initializer(queue):
    global process_number
    process_number = queue.get()  # atomically get the process index

def function(value):
    print("I'm process %s" % process_number)
    return value[process_number]

def main():
    queue = multiprocessing.Queue()
    for index in range(multiprocessing.cpu_count()):
        queue.put(index)
    pool = multiprocessing.Pool(initializer=initializer, initargs=[queue])
    tasks = [{0: 'Process-0', 1: 'Process-1', 2: 'Process-2'}, ...]
    print(pool.map(function, tasks))

if __name__ == '__main__':
    main()
My PC is a dual core; as you can see, only Process-0 and Process-1 did the work.
I'm process 0
I'm process 0
I'm process 1
I'm process 0
I'm process 1
...
['Process-0', 'Process-0', 'Process-1', 'Process-0', ... ]
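As an aside, if all you need is a per-worker identity, multiprocessing.current_process() already exposes one without the Queue handshake; a small sketch (worker names such as 'ForkPoolWorker-1' vary by platform and start method):
import multiprocessing

def function(value):
    # each pool worker has a distinct name, e.g. 'ForkPoolWorker-1'
    name = multiprocessing.current_process().name
    print("I'm %s" % name)
    return value

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=2)
    print(pool.map(function, range(6)))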

Always run a constant number of subprocesses in parallel

I want to use subprocesses to run 20 instances of a script in parallel. Let's say I have a big list of URLs with like 100,000 entries, and my program should ensure that 20 instances of my script are working on that list at all times. I wanted to code it as follows:
urllist = [url1, url2, url3, .. , url100000]
i = 0
while number_of_subprocesses < 20 and i < 100000:
    subprocess.Popen(['python', 'script.py', urllist[i]])
    i = i + 1
My script just writes something into a database or text file. It doesn't output anything and doesn't need more input than the URL.
My problem is that I wasn't able to find out how to get the number of active subprocesses. I'm a novice programmer, so every hint and suggestion is welcome. I was also wondering how I can manage it so that the while loop checks the conditions again once the 20 subprocesses are loaded. I thought of maybe putting another while loop over it, something like:
while i < 100000:
    while number_of_subprocesses < 20:
        subprocess.Popen(['python', 'script.py', urllist[i]])
        i = i + 1
    if number_of_subprocesses == 20:
        sleep()  # wait some time until checking again
Or maybe there's a better possibility where the while loop always checks the number of subprocesses?
I also considered using the multiprocessing module, but I found it really convenient to just call script.py with subprocess instead of a function with multiprocessing.
Maybe someone can help me and lead me in the right direction. Thanks a lot!
Taking a different approach from the above - as it seems that the callback can't be sent as a parameter:
import subprocess
import time

NextURLNo = 0
MaxProcesses = 20
MaxUrls = 100000  # Note: this would be better as len(urllist)
Processes = []

def StartNew():
    """ Start a new subprocess if there is work to do """
    global NextURLNo
    global Processes
    if NextURLNo < MaxUrls:
        proc = subprocess.Popen(['python', 'script.py', urllist[NextURLNo]])
        print("Started to process %s" % urllist[NextURLNo])
        NextURLNo += 1
        Processes.append(proc)

def CheckRunning():
    """ Check any running processes and start new ones if there are spare slots. """
    global Processes
    global NextURLNo
    for p in range(len(Processes) - 1, -1, -1):  # check the processes in reverse order
        if Processes[p].poll() is not None:  # poll() returns None while the process is still running
            del Processes[p]  # remove from list - this is why we needed reverse order
    while (len(Processes) < MaxProcesses) and (NextURLNo < MaxUrls):  # more to do and some spare slots
        StartNew()

if __name__ == "__main__":
    CheckRunning()  # this will start the max processes running
    while len(Processes) > 0:  # something is still going on
        time.sleep(0.1)  # you may wish to change this interval
        CheckRunning()
    print("Done!")
Just keep count as you start them, and use a callback from each subprocess to start a new one if there are any URL list entries left to process.
e.g. Assuming that your sub-process calls the OnExit method passed to it as it ends:
import subprocess
import time

NextURLNo = 0
MaxProcesses = 20
NoSubProcess = 0
MaxUrls = 100000

def StartNew():
    """ Start a new subprocess if there is work to do """
    global NextURLNo
    global NoSubProcess
    if NextURLNo < MaxUrls:
        # Conceptual: assumes the child somehow invokes OnExit when it ends
        subprocess.Popen(['python', 'script.py', urllist[NextURLNo], OnExit])
        print("Started to process", urllist[NextURLNo])
        NextURLNo += 1
        NoSubProcess += 1

def OnExit():
    global NoSubProcess
    NoSubProcess -= 1

if __name__ == "__main__":
    for n in range(MaxProcesses):
        StartNew()
    while NoSubProcess > 0:
        time.sleep(1)
        if NextURLNo < MaxUrls:
            for n in range(NoSubProcess, MaxProcesses):
                StartNew()
To keep a constant number of concurrent requests, you could use a thread pool:
#!/usr/bin/env python
from multiprocessing.dummy import Pool  # thread pool

def process_url(url):
    # ... handle a single url
    pass

urllist = [url1, url2, url3, .. , url100000]
for _ in Pool(20).imap_unordered(process_url, urllist):
    pass
To run processes instead of threads, remove .dummy from the import.
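Since the asker's unit of work is an external script, process_url could simply wrap the subprocess call; because each call blocks until script.py exits, the pool size caps how many instances run at once. A sketch, assuming script.py takes the URL as its only argument (the urllist here is a stand-in):
#!/usr/bin/env python
import subprocess
from multiprocessing.dummy import Pool  # thread pool

def process_url(url):
    # blocks until this script.py instance exits, so the pool
    # size (20) caps the number of live subprocesses
    subprocess.call(['python', 'script.py', url])

urllist = ['http://example.com/%d' % i for i in range(100)]  # stand-in list
for _ in Pool(20).imap_unordered(process_url, urllist):
    pass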
