I've looked at many solutions but none are working for me. I have a simple function (with one argument) that runs a while loop. I would like to run that function with arg1 concurrently with the same function with arg2. At the moment it only runs the first call (the output is an endless stream of "func: 1"). Here is what I have:
import multiprocessing
from multiprocessing import Process
def func(x):
    while 2 - 1 > 0:
        print("func:", x)

Process(target=func(1).start())
Process(target=func(2).start())
I was hoping for an interleaved mix of "func: 1" and "func: 2" in the output.
Could someone please explain how to make this "simple" loop function run concurrently with itself?
Edit: The solution that worked for me was:
from multiprocessing import Process
def func(x):
    while True:
        print("func:", x)

if __name__ == '__main__':
    p1 = Process(target=func, args=(1,))
    p2 = Process(target=func, args=(2,))
    p1.start()
    p2.start()
The syntax you have is a little off. You should pass the function itself to the Process constructor rather than calling the function and passing its result. Please see the corrected syntax below:
import multiprocessing
from multiprocessing import Process

def func(x):
    while True:
        print("func:", x)

if __name__ == '__main__':
    p1 = Process(target=func, args=(1,))
    p2 = Process(target=func, args=(2,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
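Note that because func loops forever, the join() calls will block until you stop the script (for example with Ctrl-C) or call terminate() on the processes. The if __name__ == '__main__' guard is needed on platforms that use the spawn start method, notably Windows, because child processes re-import the script.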
Related
I have a black box legacy FORTRAN package for solving PDEs (partial differential equations) with a python wrapper. Sometimes the FORTRAN PDE solver takes a long time to converge. If the FORTRAN is taking too long to converge, I would like my python script to be able to force the FORTRAN to stop so the python can finish running, perhaps with an error message.
Any thoughts on how to automate this watchdog timer sort of behavior in python when working with a wrapped FORTRAN black box? Would it be possible with multiprocessing?
I found a solution that worked using Python's built-in multiprocessing module. Here is a simplified version of my solution. The FORTRAN black box has been replaced in the code below with the do_calculation() method. I would be interested in better ways to answer this question, but for now, this is the way I am doing it. Here is the most helpful YouTube video I found: https://www.youtube.com/watch?v=iwFGC_3sVio&t=622s
import multiprocessing as mp
import time
import numpy as np

def timer_func(seconds, queue):
    # Watchdog: wait for `seconds`, then signal the parent via the queue
    print(f" {seconds}-second timer started")
    time.sleep(seconds)
    print("Time up!")
    queue.put("Time up!")

def do_calculation(listy_list, queue):
    # Stand-in for the FORTRAN black box; puts its result on the queue
    print("Starting Calculation!")
    time.sleep(10)
    output = 0
    for i in listy_list:
        output = np.sqrt((output + i) * i)
    print("Done Calculating!")
    queue.put(output)

def main():
    in_list = np.arange(0, 10000)
    queue = mp.Queue()
    p1 = mp.Process(target=do_calculation, args=[in_list, queue])
    p2 = mp.Process(target=timer_func, args=[3, queue])
    p1.start()
    p2.start()
    # Whichever process finishes first puts its result on the queue
    result = queue.get()
    print(f"Queue returned {result}")
    if result == "Time up!":
        p1.terminate()
        print("p1 terminated")
    else:
        p2.terminate()
        print(f"Queue returned {result}")
    p1.join()
    p2.join()
    # print(f"result of p1 {result}")

if __name__ == '__main__':
    main()
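For what it's worth, a simpler variant that needs no separate timer process is to give join() a timeout and terminate the worker if it is still alive afterwards. This is only a sketch under the same assumptions as the code above (it reuses the do_calculation worker and relies on the result being small); run_with_timeout is just a name invented for the example.

import multiprocessing as mp
import numpy as np

# Assumes the same do_calculation(listy_list, queue) worker defined above.
def run_with_timeout(seconds):
    queue = mp.Queue()
    p = mp.Process(target=do_calculation, args=[np.arange(0, 10000), queue])
    p.start()
    p.join(seconds)            # wait at most `seconds` for the worker to finish
    if p.is_alive():           # still running -> treat it as a timeout
        p.terminate()
        p.join()
        return None
    return queue.get()         # worker finished in time: fetch its result

if __name__ == '__main__':
    result = run_with_timeout(3)
    print("Timed out" if result is None else f"Queue returned {result}")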
What I'd like is for the following program to print out:
Running Main
Running Second
Running Main
Running Second
[...]
Code:
from multiprocessing import Process
import time
def main():
    while True:
        print('Running Main')
        time.sleep(1)

def second():
    while True:
        print('Running Second')
        time.sleep(1)

p1 = Process(main())
p2 = Process(second())
p1.start()
p2.start()
But it doesn't have the desired behavior. Instead it just prints out:
Running Main
Running Main
[...]
I suspect my program doesn't work because of the while statement?
Is there any way I can overcome this problem and have my program print out what I mentioned no matter what I execute in my function?
The issue is in how you create the Process objects. The reason only the first function ever runs is the syntax: instead of handing the function to the Process, you call it immediately, so its infinite loop runs in the main process and the second Process is never even created.
When you create a Process object, avoid writing this
p1 = Process(target=main())
and rather write
p1 = Process(target=main)
That also means that if you want to pass arguments to the function, you have to write
p1 = Process(target=main, args=('hi',))
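Putting that together with the code from the question, a minimal corrected sketch (with the if __name__ == '__main__' guard added so it also works with the spawn start method, e.g. on Windows) could look like this:

from multiprocessing import Process
import time

def main():
    while True:
        print('Running Main')
        time.sleep(1)

def second():
    while True:
        print('Running Second')
        time.sleep(1)

if __name__ == '__main__':
    p1 = Process(target=main)      # pass the function itself, do not call it
    p2 = Process(target=second)
    p1.start()
    p2.start()
    p1.join()
    p2.join()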
I just want to see a simple code implementation of multiprocessing on Windows, but it doesn't enter/run the functions, neither in a Jupyter notebook nor when running the saved .py.
import time
import multiprocessing
s = [1, 4]

def subu(remo):
    s[remo-1] = remo*9
    print(f'here{remo}')
    return

if __name__ == "__main__":
    p1 = multiprocessing.Process(target=subu, args=[1])
    p2 = multiprocessing.Process(target=subu, args=[2])
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    # print("2222here")
    print(s)
    input()
The output from the .py is:
[1, 4]
[1, 4]
and the output from the Jupyter notebook is:
[1,4]
which I hoped to be:
here1
here2
[9,18]
What's wrong with the code above? And what about this code:
import concurrent
thread_num = 2
s = [1, 4]

def subu(remo):
    s[remo-1] = remo*9
    print(f'here{remo}')
    return

with concurrent.futures.ProcessPoolExecutor() as executor:
    ## or: if __name__ == "__main__":
    ##         with concurrent.futures.ProcessPoolExecutor() as executor:
    results = [executor.submit(subu, i) for i in range(thread_num)]
    for f in concurrent.futures.as_completed(results):
        print(f.result())
input()
This does not run at all in Jupyter, raising the error:
BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
I kind of know I can't expect Jupyter to run multiprocessing, but the saved .py can't run it either, and it exits without waiting for input().
There are a couple of potential problems. The worker function needs to be importable (at least on Windows) so that it can be found by the subprocesses. And since a subprocess's memory isn't visible to the parent, the results need to be returned. So, put the worker in a separate module:
subumodule.py
def subu(remo):
    remo = remo*9
    print(f'here{remo}')
    return remo
And use a process pool's existing infrastructure to return the worker's return value to the parent. You could do:
import multiprocessing
from subumodule import subu  # the worker must be importable by the child processes

if __name__ == "__main__":
    with multiprocessing.Pool(2) as pool:
        s = list(pool.map(subu, (1, 2)))  # here
    print(s)
    input()
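If you prefer the concurrent.futures interface from the second snippet, the same idea should work with ProcessPoolExecutor, again with the worker in its own importable module and the pool created under the main guard. A rough sketch, assuming the subumodule.py above:

import concurrent.futures
from subumodule import subu  # worker lives in its own module so child processes can import it

if __name__ == "__main__":
    with concurrent.futures.ProcessPoolExecutor(max_workers=2) as executor:
        futures = [executor.submit(subu, i) for i in (1, 2)]
        # results arrive in completion order, so this may print [9, 18] or [18, 9]
        s = [f.result() for f in concurrent.futures.as_completed(futures)]
    print(s)
    input()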
I'm trying to play around with multi-threading so I can get better at it, but for some weird reason my code doesn't want to follow the commands. It's supposed to go into a while loop and print, but it doesn't, and it's also not raising any errors, so which line is the mistake on?
#!/usr/bin/env python
#
#
#
import random
import thread
import time
import sys
import os
def DisplayA(name, wait):
    while True:
        print 'Display: 1'; time.sleep(wait)

def DisplayB(name, wait):
    while True:
        print 'Display: 2'; time.sleep(wait)

def DisplayC(name, wait):
    while True:
        print 'Display: 3'; time.sleep(wait)

thread.start_new_thread(DisplayA, ('Display1', 2))
thread.start_new_thread(DisplayB, ('Display2', 3))
thread.start_new_thread(DisplayC, ('Display3', 5))
Add this to the bottom:
while True:
    pass
The problem is that you're running off the bottom of your main program. This terminates the entire execution session.
Quick and short solution:
while True:
    time.sleep(1)
Do not use pass in the while loop, because it busy-waits and eats CPU; that is an expensive way of doing nothing.
If you want a more general solution, then you can import Thread from threading and use join:
from threading import Thread
...
p1 = Thread(name="A", target=DisplayA, args=('Display1',2))
p2 = Thread(name="B", target=DisplayB, args=('Display2',3))
p3 = Thread(name="C", target=DisplayC, args=('Display3',5))
p1.start()
p2.start()
p3.start()
p1.join()
p2.join()
p3.join()
This solution also works if the threads do not run endlessly, and your program can continue after the threads have finished.
You can either do what Prune suggested here, or you can suspend the main thread after starting DisplayA, DisplayB, and DisplayC.
I am working on the integration of two different frameworks, say Main_process1.py and Main_process2.py. Consider that Main_process1.py has a main() and Main_process2.py has another main().
So I have renamed the main() functions to main1() and main2() in Main_process1.py and Main_process2.py respectively, and created a new file, overall_Main.py.
The new file overall_Main.py will have two processes, one starting main1() from Main_process1.py and the other starting main2() from Main_process2.py.
Basically, an overall main process has to be created that runs the two different main processes. Please give some suggestions.
This is, like, the very basics of Python Multiprocessing:
from multiprocessing import Process
from Main_process1 import main1
from Main_process2 import main2
if __name__ == '__main__':
    p1 = Process(target=main1, args=(list your args for main1 here,))
    p2 = Process(target=main2, args=(list your args for main2 here,))
    p1.start()
    p2.start()
    # your other main code
    p1.join()
    p2.join()
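Replace the args placeholders with real argument tuples for your main1 and main2; if they take no arguments, drop the args parameter entirely (it defaults to an empty tuple).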