I want to create processes without waiting for the other processes to finish, which they can't anyway because they run an infinite loop.
import time
from multiprocessing import Process

def child_function(param1, param2):
    print(str(param1 * param2))
    while True:
        print("doing some stuff")
        time.sleep(3)

def main_function():
    print("Initializing some things.")
    for _ in range(10):
        Process(target=child_function(3, 5)).start()

if __name__ == '__main__':
    main_function()
This code only starts the first process and then waits for it to finish. How can I start all of them?
Edit: The answer from the comments works fine, and the answer below also works fine but creates threads instead. Thank you everyone.
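The fix referred to in the comments is to pass the function object and its arguments separately instead of calling the function inside `target=`. A minimal sketch (with the infinite loop removed so the processes terminate):

```python
from multiprocessing import Process

def child_function(param1, param2):
    print(param1 * param2)

if __name__ == '__main__':
    processes = []
    for _ in range(10):
        # Pass the function itself plus its args; do not call it here.
        p = Process(target=child_function, args=(3, 5))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
```

With `target=child_function, args=(3, 5)` each `Process` runs the function in its own process, so the loop starts all ten without blocking.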
Try the Python threading module:
import time
import threading

def child_function(param1, param2):
    print(str(param1 * param2))
    while True:
        print("doing some stuff")
        time.sleep(3)

def main_function():
    print("Initializing some things.")
    for _ in range(10):
        x = threading.Thread(target=child_function, args=(3, 5))
        x.start()

main_function()
Explanation: as already mentioned in the comments, notice that we pass the function object to the thread constructor instead of calling it. You can also compare threading vs. multiprocessing and use whichever best suits the project.
I'm trying to run 2 functions at the same time.
def func1():
    print('Working')

def func2():
    print('Working')

func1()
func2()
Does anyone know how to do this?
Do this:
from threading import Thread

def func1():
    print('Working')

def func2():
    print('Working')

if __name__ == '__main__':
    Thread(target=func1).start()
    Thread(target=func2).start()
The answer about threading is good, but you need to be a bit more specific about what you want to do.
If you have two functions that both use a lot of CPU, threading (in CPython) will probably get you nowhere. Then you might want to look at the multiprocessing module, or possibly at Jython/IronPython.
If CPU-bound performance is the reason, you could even implement things in (non-threaded) C and get a much bigger speedup than running two parallel things in Python.
Without more information, it isn't easy to come up with a good answer.
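For the CPU-bound case, a minimal multiprocessing sketch (the workload function here is a made-up stand-in, not from the question):

```python
from multiprocessing import Pool

def cpu_heavy(n):
    # Stand-in for real CPU-bound work: sum of squares below n.
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    # Each input is handled by a separate worker process,
    # so the work can use multiple cores despite the GIL.
    with Pool(processes=2) as pool:
        results = pool.map(cpu_heavy, [10_000, 20_000])
    print(results)
```

Unlike threads, each worker is a full OS process with its own interpreter, which is why this scales across cores.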
This can be done elegantly with Ray, a system that allows you to easily parallelize and distribute your Python code.
To parallelize your example, you need to define your functions with the @ray.remote decorator, and then invoke them with .remote.
import ray

ray.init()

# Define the functions you want to execute in parallel
# using the ray.remote decorator.
@ray.remote
def func1():
    print("Working")

@ray.remote
def func2():
    print("Working")

# Execute func1 and func2 in parallel.
ray.get([func1.remote(), func2.remote()])
If func1() and func2() return results, you need to rewrite the above slightly, replacing ray.get([func1.remote(), func2.remote()]) with:
ret_id1 = func1.remote()
ret_id2 = func2.remote()
ret1, ret2 = ray.get([ret_id1, ret_id2])
There are a number of advantages of using Ray over the multiprocessing module or using multithreading. In particular, the same code will run on a single machine as well as on a cluster of machines.
For more advantages of Ray see this related post.
One option that looks like it makes two functions run at the same time is the threading module (example in this answer). However, it has a small delay, as the official Python documentation describes. A better module to try is multiprocessing.
Also, there's other Python modules that can be used for asynchronous execution (two pieces of code working at the same time). For some information about them and help to choose one, you can read this Stack Overflow question.
Comment from another user about the threading module
He might want to know that, because of the Global Interpreter Lock, they will not execute at exactly the same time even if the machine in question has multiple CPUs. wiki.python.org/moin/GlobalInterpreterLock
– Jonas Elfström Jun 2 '10 at 11:39
Quote from the documentation about the threading module:
CPython implementation detail: In CPython, due to the Global Interpreter Lock, only one thread can execute Python code at once (even though certain performance-oriented libraries might overcome this limitation). If you want your application to make better use of the computational resources of multi-core machines, you are advised to use multiprocessing or concurrent.futures.ProcessPoolExecutor. However, threading is still an appropriate model if you want to run multiple I/O-bound tasks simultaneously.
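For the I/O-bound case the documentation mentions, a minimal concurrent.futures sketch (time.sleep stands in for a blocking I/O call such as a network request):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(name):
    time.sleep(0.2)  # stand-in for blocking I/O (network, disk, ...)
    return name

start = time.time()
with ThreadPoolExecutor(max_workers=2) as executor:
    # map submits both calls; the two sleeps overlap in separate threads,
    # so the total is ~0.2 s rather than ~0.4 s.
    results = list(executor.map(fetch, ['a', 'b']))
elapsed = time.time() - start
print(results)
```

Because the threads spend their time blocked in I/O (here, sleeping), the GIL is released and they genuinely overlap.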
The threading module does run the functions simultaneously, but the timing is a bit off. The code below prints a "1" and a "2", each printed by a different function. I noticed that, when printed to the console, they had slightly different timings.
import time
from threading import Thread

num = 1  # loop flag; set it to anything else to stop both threads

def one():
    while 1 == num:
        print("1")
        time.sleep(2)

def two():
    while 1 == num:
        print("2")
        time.sleep(2)

p1 = Thread(target=one)
p2 = Thread(target=two)
p1.start()
p2.start()
Output (note that the gaps come from the wait between prints):
1
2
2
1
12
21
12
1
2
Not sure if there is a way to correct this, or if it matters at all. Just something I noticed.
Try this
from threading import Thread

def fun1():
    print("Working1")

def fun2():
    print("Working2")

t1 = Thread(target=fun1)
t2 = Thread(target=fun2)
t1.start()
t2.start()
In case you also want to wait until both functions have been completed:
from threading import Thread

def func1():
    print('Working')

def func2():
    print('Working')

# Define the threads and put them in a list
threads = [
    Thread(target=func1),
    Thread(target=func2),
]

# func1 and func2 run in separate threads
for thread in threads:
    thread.start()

# Wait until both func1 and func2 have finished
for thread in threads:
    thread.join()
Another approach to running multiple functions concurrently in Python, which I couldn't see among the answers, is asyncio.
import asyncio

async def func1():
    for _ in range(5):
        print(func1.__name__)
        await asyncio.sleep(0)  # switches tasks every iteration

async def func2():
    for _ in range(5):
        print(func2.__name__)
        await asyncio.sleep(0)

async def main():
    await asyncio.gather(func1(), func2())

asyncio.run(main())
Out:
func1
func2
func1
func2
func1
func2
func1
func2
func1
func2
Note: the above asyncio syntax requires Python 3.7 or later.
multiprocessing vs multithreading vs asyncio
The code below runs two functions in parallel:
from multiprocessing import Process

def test1():
    print("Test1")

def test2():
    print("Test2")

if __name__ == "__main__":
    process1 = Process(target=test1)
    process2 = Process(target=test2)
    process1.start()
    process2.start()
    process1.join()
    process2.join()
Result:
Test1
Test2
And these two snippets below run two functions concurrently:
from threading import Thread

def test1():
    print("Test1")

def test2():
    print("Test2")

thread1 = Thread(target=test1)
thread2 = Thread(target=test2)
thread1.start()
thread2.start()
thread1.join()
thread2.join()
from operator import methodcaller
from multiprocessing.pool import ThreadPool

def test1():
    print("Test1")

def test2():
    print("Test2")

caller = methodcaller("__call__")
ThreadPool().map(caller, [test1, test2])
Result:
Test1
Test2
And this code below runs two async functions concurrently and asynchronously:
import asyncio

async def test1():
    print("Test1")

async def test2():
    print("Test2")

async def call_tests():
    await asyncio.gather(test1(), test2())

asyncio.run(call_tests())
Result:
Test1
Test2
I think what you are trying to achieve can be done through multiprocessing. However, if you want to do it with threads, you can do this.
This might help
from threading import Thread
import time

def func1():
    print('Working')
    time.sleep(2)

def func2():
    print('Working')
    time.sleep(2)

th = Thread(target=func1)
th.start()
th1 = Thread(target=func2)
th1.start()
Test using APScheduler:
import time
import datetime
from apscheduler.schedulers.background import BackgroundScheduler

sched = BackgroundScheduler()
sched.start()

dt = datetime.datetime
Future = dt.now() + datetime.timedelta(milliseconds=2550)  # 2.55 seconds from now, testing start accuracy

def myjob1():
    print('started job 1: ' + str(dt.now())[:-3])  # timed to the millisecond because that's where it varies
    time.sleep(5)
    print('job 1 half at: ' + str(dt.now())[:-3])
    time.sleep(5)
    print('job 1 done at: ' + str(dt.now())[:-3])

def myjob2():
    print('started job 2: ' + str(dt.now())[:-3])
    time.sleep(5)
    print('job 2 half at: ' + str(dt.now())[:-3])
    time.sleep(5)
    print('job 2 done at: ' + str(dt.now())[:-3])

print(' current time: ' + str(dt.now())[:-3])
print('  do job 1 at: ' + str(Future)[:-3])
print('  do job 2 at: ' + str(Future)[:-3])
sched.add_job(myjob1, 'date', run_date=Future)
sched.add_job(myjob2, 'date', run_date=Future)
I got these results, which prove they are running at the same time:
current time: 2020-12-15 01:54:26.526
 do job 1 at: 2020-12-15 01:54:29.072  # I figure these both say .072 because it's one line of print code
 do job 2 at: 2020-12-15 01:54:29.072
started job 2: 2020-12-15 01:54:29.075  # notice job 2 started before job 1, though the code calls job 1 first
started job 1: 2020-12-15 01:54:29.076
job 2 half at: 2020-12-15 01:54:34.077  # the halfway point of each job completed at the same time, accurate to the millisecond
job 1 half at: 2020-12-15 01:54:34.077
job 1 done at: 2020-12-15 01:54:39.078  # job 1 finished first
job 2 done at: 2020-12-15 01:54:39.091  # job 2 was faster in the second test
I might be wrong, but with this piece of code:
import time
from multiprocessing import Process

def function_sleep():
    time.sleep(5)

start_time = time.time()
p1 = Process(target=function_sleep)
p2 = Process(target=function_sleep)
p1.start()
p2.start()
end_time = time.time()
I timed it and expected to get 5-6 seconds, while it always takes double the argument passed to sleep (10 seconds in this case). What's the matter?
Sorry guys, as mentioned in the previous comment, join() needs to be called. That's very important!
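A minimal sketch of the fix (sleeps shortened to 1 second to keep the demo quick): with join() the parent blocks until both children finish, and since the two sleeps overlap, the elapsed time is about one sleep, not two.

```python
import time
from multiprocessing import Process

def function_sleep():
    time.sleep(1)

if __name__ == '__main__':
    start_time = time.time()
    p1 = Process(target=function_sleep)
    p2 = Process(target=function_sleep)
    p1.start()
    p2.start()
    p1.join()  # wait for both children before stopping the clock
    p2.join()
    # Both children sleep concurrently, so this prints roughly 1, not 2.
    print(round(time.time() - start_time))
```

Without the join() calls, the parent reads the clock immediately after start(), which is why the measurement looked wrong in the first place.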
I'm trying to implement very simple multiprocessing code in Python 2.7, but it looks like the code runs serially, not in parallel.
The following code prints *****1***** and then hangs, while I expect *****2***** to be printed immediately after *****1*****.
import os
import multiprocessing
from time import sleep

def main():
    func1_proc = multiprocessing.Process(target=func1())
    func2_proc = multiprocessing.Process(target=func2())
    func1_proc.start()
    func2_proc.start()
    pass

def func1():
    print "*****1*****"
    sleep(100)

def func2():
    print "*****2*****"
    sleep(100)

if __name__ == "__main__":
    main()
You're calling func1 and func2 before passing their return values to Process, so func1 sleeps for 100 seconds before returning None, which is then used as the target instead of the function itself.
You should pass function objects to Process instead so that it will run them in separate processes:
func1_proc = multiprocessing.Process(target=func1)
func2_proc = multiprocessing.Process(target=func2)
If I have these two functions:
def func1():
    print "I am function1"

def func2():
    print "Function 1 is still running."

while(func1 is running):
    func2()
How do I check if function 1 is still running?
I solved this using the threading module. Below is my code.
from threading import Thread

func1_thread = Thread(target=func1)
func1_thread.start()

while func1_thread.is_alive():
    func2()
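An alternative sketch using concurrent.futures (func1/func2 here are placeholder stand-ins for the real functions): a Future's done() method reports whether the submitted function has finished, and result() returns its value.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def func1():
    time.sleep(0.3)  # placeholder for the long-running work
    return 'finished'

def func2():
    time.sleep(0.05)  # placeholder side work done while func1 runs

with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(func1)
    while not future.done():  # poll completion instead of is_alive()
        func2()
print(future.result())
```

Compared to polling a Thread, the Future also hands back the return value of func1, which a bare Thread object does not.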
I'm trying to program a loop with an asynchronous part in it. I don't want to wait for this asynchronous part every iteration, though. Is there a way to not wait for this function inside the loop to finish?
In code (example):
import time

def test():
    global a
    time.sleep(1)
    a += 1
    test()

global a
a = 10

test()
while(1):
    print a
You can put it in a thread. Instead of test(), run:
from threading import Thread
Thread(target=test).start()
print("this will be printed immediately")
To expand on blue_note's answer, let's say you have a function with arguments:
def test(b):
    global a
    time.sleep(1)
    a += 1 + b

You need to pass in your args like this:
from threading import Thread

b = 1
Thread(target=test, args=(b,)).start()
print("this will be printed immediately")

Note that args must be a tuple.
A simple way is to run test() in another thread:
import threading

th = threading.Thread(target=test)
th.start()
You should look at a library designed for asynchronous execution, such as gevent.
Examples here: http://sdiehl.github.io/gevent-tutorial/#synchronous-asynchronous-execution
import gevent

def foo():
    print('Running in foo')
    gevent.sleep(0)
    print('Explicit context switch to foo again')

def bar():
    print('Explicit context to bar')
    gevent.sleep(0)
    print('Implicit context switch back to bar')

gevent.joinall([
    gevent.spawn(foo),
    gevent.spawn(bar),
])
Use a thread: it creates a new thread in which the asynchronous function runs.
https://www.tutorialspoint.com/python/python_multithreading.htm