Multiprocessing print statement doesn't work in Spyder IDE - python

On a machine running Windows Server 2012 R2, using the Spyder IDE from Anaconda with Python 3.7, I run the following code:

import time
import multiprocessing

start = time.perf_counter()

def do_something():
    print('func start')
    time.sleep(1)
    print('func end')

if __name__ == '__main__':
    print('name is main')
    p1 = multiprocessing.Process(target=do_something)
    p1.start()
    p1.join()

finish = time.perf_counter()
print('\n\nProgram completed in ' + str(round((finish - start), 2)) + 's')
print('Goodbye!')
And I get the output
name is main
Program completed in 0.13s
Goodbye!
My expectation was that I would see the two print statements
func start
func end
and also (because .join was invoked) that the program would take >1s to complete.
I suspect that the .start() call did not successfully call the do_something function.
FYI, I am following this tutorial, which I know needs to be modified to include the if statement on Windows. I am also seeing similar code on other sites, but it doesn't seem to work on my end.
Any suggestions on troubleshooting would be much appreciated.
**EDIT: Per the comment below from Azy_Crw4282, the code works on his end and, per his suggestion, it also works for me from the cmd prompt. So this seems to be an issue specific to the Spyder IDE.
FYI, I wanted to understand whether the process was getting kicked off but the IDE wasn't capturing the output, OR the process wasn't getting kicked off at all. I tried two things: 1) the code below writes a DataFrame to csv. When doing this in the multiprocessing function, it does NOT write the file. 2) I created a global variable and changed its value in the function. Spyder keeps variable values after the code runs, and when I printed the variable it was unchanged.
So, in summary - it seems that the Spyder IDE does not work with the multiprocessing module.**
import time
import multiprocessing
import pandas as pd

start = time.perf_counter()

df = pd.DataFrame(data={'Col1': [1.1, 2.1, 3.1],
                        'Col2': [1.2, 2.2, 3.2],
                        'Col3': [1.3, 2.3, 3.3]}, columns=['Col1', 'Col2', 'Col3'])

def do_something():
    print('func start')
    df.to_csv('C:/testMp.csv')
    time.sleep(1)
    print('func end')

if __name__ == '__main__':
    print('name is main')
    p1 = multiprocessing.Process(target=do_something)
    p1.start()
    p1.join()

finish = time.perf_counter()
print('\n\nProgram completed in ' + str(round((finish - start), 2)) + 's')
print('Goodbye!')
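One way to confirm that the child process really isn't starting (as opposed to Spyder merely swallowing its output) is to have the worker write to a file instead of printing. This is a minimal sketch along the lines of the CSV test above; the log path is just an illustrative choice:

```python
import multiprocessing
import os
import tempfile

# Illustrative log location; any writable path works.
LOG_PATH = os.path.join(tempfile.gettempdir(), 'mp_child.log')

def do_something():
    # File writes survive even when an IDE console does not show
    # a child process's stdout, so this distinguishes "child ran
    # but output was lost" from "child never ran".
    with open(LOG_PATH, 'a') as f:
        f.write('func ran\n')

if __name__ == '__main__':
    p1 = multiprocessing.Process(target=do_something)
    p1.start()
    p1.join()
    if os.path.exists(LOG_PATH):
        print('child ran; log says:', open(LOG_PATH).read().strip())
    else:
        print('child process never ran')
```

If the log file appears when the script is run from the cmd prompt but not from Spyder's console, the child process itself is failing to start under Spyder, not just its output capture.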

When I ran your code, I got the following output. Can you try running your code in another IDE, the cmd line, or a terminal?
import time
import multiprocessing

start = time.perf_counter()

def do_something():
    print('func start')
    time.sleep(1)
    print('func end')

if __name__ == '__main__':
    print('name is main')
    p1 = multiprocessing.Process(target=do_something)
    p1.start()
    p1.join()

finish = time.perf_counter()
print('\n\nProgram completed in ' + str(round((finish - start), 2)) + 's')
print('Goodbye!')
Outputs:
name is main
Program completed in 0.0s
Goodbye!
func start
func end
Program completed in 1.27s
Goodbye!
The above result is probably not what you expect. In order to achieve what you want, you need to indent the final print section so that it falls inside the __main__ block.
import time
import multiprocessing

start = time.perf_counter()

def do_something():
    print('func start')
    time.sleep(1)
    print('func end')

if __name__ == '__main__':
    print('name is main')
    p1 = multiprocessing.Process(target=do_something)
    p1.start()
    p1.join()
    finish = time.perf_counter()
    print('\n\nProgram completed in ' + str(round((finish - start), 2)) + 's')
    print('Goodbye!')
Outputs:
name is main
func start
func end
Program completed in 1.33s
Goodbye!

Related

Interrupting python-wrapped FORTRAN

I have a black box legacy FORTRAN package for solving PDEs (partial differential equations) with a python wrapper. Sometimes the FORTRAN PDE solver takes a long time to converge. If the FORTRAN is taking too long to converge, I would like my python script to be able to force the FORTRAN to stop so the python can finish running, perhaps with an error message.
Any thoughts on how to automate this watchdog timer sort of behavior in python when working with a wrapped FORTRAN black box? Would it be possible with multiprocessing?
I found a solution that works using Python's built-in multiprocessing module. Here is a simplified version of my solution; the FORTRAN black box has been replaced in the code below with the do_calculation() method. I would be interested in better ways to answer this question, but for now this is how I am doing it. Here is the most helpful YouTube video I found: https://www.youtube.com/watch?v=iwFGC_3sVio&t=622s
import multiprocessing as mp
import time
import numpy as np

def timer_func(seconds, queue):
    print(f" {seconds}-second timer started")
    time.sleep(seconds)
    print("Time up!")
    queue.put("Time up!")

def do_calculation(listy_list, queue):
    print("Starting Calculation!")
    time.sleep(10)
    output = 0
    for i in listy_list:
        output = np.sqrt((output + i) * i)
    print("Done Calculating!")
    queue.put(output)

def main():
    in_list = np.arange(0, 10000)
    queue = mp.Queue()
    p1 = mp.Process(target=do_calculation, args=[in_list, queue])
    p2 = mp.Process(target=timer_func, args=[3, queue])
    p1.start()
    p2.start()
    result = queue.get()   # blocks until whichever process finishes first reports in
    print(f"Queue returned {result}")
    if result == "Time up!":
        p1.terminate()
        print("p1 terminated")
    else:
        p2.terminate()
        print(f"Queue returned {result}")
    p1.join()
    p2.join()

if __name__ == '__main__':
    main()
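A simpler variant of the same watchdog, for comparison: Process.join accepts a timeout, so the separate timer process and queue sentinel can be replaced by join(timeout) plus is_alive(). This is a sketch under the same assumption that a sleep stands in for the FORTRAN call:

```python
import multiprocessing as mp
import time

def do_calculation(seconds):
    # Stand-in for the long-running FORTRAN call.
    time.sleep(seconds)

if __name__ == '__main__':
    p = mp.Process(target=do_calculation, args=(10,))
    p.start()
    p.join(timeout=3)        # wait at most 3 seconds
    if p.is_alive():         # join timed out: worker is still running
        p.terminate()
        p.join()             # reap the terminated process
        print('Calculation timed out and was terminated')
    else:
        print('Calculation finished in time')
```

The queue-based version above is still the right shape when the worker must hand a result back to the parent; join(timeout) only tells you whether it finished.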

Need to stop execution of a method in Python 3.7 Version

On Windows, I need to stop a Python method's execution after a specific time, using Python 3.7. I tried multiprocessing.Process but it is not able to call the method.
from multiprocessing import Process
import time

def func1(name):
    print("Hi from func1")
    i = 0
    while True:
        i += 1
        print('hello', name)
        time.sleep(1)

if __name__ == '__main__':
    p = Process(target=func1, args=('bob',))
    p.start()
    time.sleep(5)
    p.terminate()
In the above code, func1 is never invoked. Please let me know if I am missing something, or if there is a better way to achieve this.
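For what it's worth, the posted code is essentially correct and does stop func1 after about 5 seconds when saved as a .py file and run from the command line; the usual culprit when the function is never invoked is launching it from an IDE console (Spyder, Jupyter), as in the question at the top of this page. One small addition worth making is a join() after terminate() to reap the child, sketched here with shorter delays:

```python
from multiprocessing import Process
import time

def func1(name):
    print("Hi from func1")
    while True:
        print('hello', name)
        time.sleep(0.5)

if __name__ == '__main__':
    p = Process(target=func1, args=('bob',))
    p.start()
    time.sleep(2)        # let the child run for a while
    p.terminate()        # ask the OS to kill the child
    p.join()             # wait for it to actually exit
    print('child stopped, exit code:', p.exitcode)
```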

Can I run two functions simultaneously no matter what these functions run?

What I'd like to do is the following program to print out:
Running Main
Running Second
Running Main
Running Second
[...]
Code:
from multiprocessing import Process
import time

def main():
    while True:
        print('Running Main')
        time.sleep(1)

def second():
    while True:
        print('Running Second')
        time.sleep(1)

p1 = Process(main())
p2 = Process(second())
p1.start()
p2.start()
But it doesn't have the desired behavior. Instead it just prints out:
Running Main
Running Main
[...]
I suspect my program doesn't work because of the while statement?
Is there any way I can overcome this problem and have my program print out what I mentioned no matter what I execute in my function?
The issue is in how you create the Process objects. Because you write main() with parentheses, the function is called immediately, right there in the parent script; since it loops forever, execution never even reaches the second line. Instead of creating a process out of a function, you are calling the function and handing Process its (never-produced) return value.
When you want to create a Process object, avoid writing this:

p1 = Process(target=main())

and rather write:

p1 = Process(target=main)

That also means that if you want to pass arguments to the function, you have to write:

p1 = Process(target=main, args=('hi',))
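Putting that together, a full corrected version of the program might look like the following (with bounded loops substituted for while True so the example terminates; note the two processes are not synchronized, so the lines are not guaranteed to alternate perfectly):

```python
from multiprocessing import Process
import time

def main():
    for _ in range(3):           # bounded stand-in for `while True`
        print('Running Main')
        time.sleep(0.2)

def second():
    for _ in range(3):
        print('Running Second')
        time.sleep(0.2)

if __name__ == '__main__':
    p1 = Process(target=main)    # the function object, not main()
    p2 = Process(target=second)
    p1.start()
    p2.start()
    p1.join()
    p2.join()
```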

python multiprocessing does not run functions

I just want to see a simple implementation of multiprocessing on Windows, but the functions are never entered/run, neither in a Jupyter notebook nor when running a saved .py file.
import time
import multiprocessing

s = [1, 4]

def subu(remo):
    s[remo-1] = remo * 9
    print(f'here{remo}')
    return

if __name__ == "__main__":
    p1 = multiprocessing.Process(target=subu, args=[1])
    p2 = multiprocessing.Process(target=subu, args=[2])
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    # print("2222here")
    print(s)
    input()
the output by .py is:
[1, 4]
[1, 4]
and the output by jupyternotebook is:
[1,4]
which I hoped to be:
here1
here2
[9,18]
what's wrong with the code above? And what about this code:

import concurrent.futures

thread_num = 2
s = [1, 4]

def subu(remo):
    s[remo-1] = remo * 9
    print(f'here{remo}')
    return

with concurrent.futures.ProcessPoolExecutor() as executor:
    ## or if __name__=="__main__":
    ##...     with concurrent.futures.ProcessPoolExecutor() as executor:
    results = [executor.submit(subu, i) for i in range(thread_num)]
    for f in concurrent.futures.as_completed(results):
        print(f.result())
input()
This does not run at all in Jupyter, raising the error:
BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
I sort of understand that I can't expect Jupyter to run multiprocessing, but the saved .py can't run it either, and it exits without waiting for input().
There are a couple of potential problems. The worker function needs to be importable (at least on Windows) so that it can be found by the subprocess. And since subprocess memory isn't visible to the parent, the results need to be returned rather than written to a global. So, put the worker in a separate module:
subumodule.py
def subu(remo):
    remo = remo * 9
    print(f'here{remo}')
    return remo
And use a process pool's existing infrastructure to return each worker's return value to the parent. You could write:
import multiprocessing
from subumodule import subu

if __name__ == "__main__":
    with multiprocessing.Pool(2) as pool:
        s = list(pool.map(subu, (1, 2)))  # here
    print(s)
    input()
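If you'd rather stay with concurrent.futures from the second snippet, the same two fixes apply: guard the pool behind __main__ and return the value instead of mutating the module-level list (each worker mutates its own copy of s, so the parent's list never changes). A minimal sketch:

```python
import concurrent.futures

def subu(remo):
    # Return the result; mutating a global would only change the
    # worker process's copy, never the parent's.
    return remo * 9

if __name__ == '__main__':
    with concurrent.futures.ProcessPoolExecutor() as executor:
        s = list(executor.map(subu, [1, 2]))
    print(s)  # [9, 18]
```

Unlike as_completed over submitted futures, executor.map yields results in input order, which is why s comes back as [9, 18].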

What mistake am I making in this python script?

I'm trying to play around with multi-threading so I can get better at it, but for some weird reason my code doesn't want to follow the commands. It's supposed to go into a while loop and print, but it doesn't, and it's also not raising any errors. Which line is the mistake on?
#!/usr/bin/env python
# (Python 2)
import random
import thread
import time
import sys
import os

def DisplayA(name, wait):
    while True:
        print 'Display: 1'; time.sleep(wait)

def DisplayB(name, wait):
    while True:
        print 'Display: 2'; time.sleep(wait)

def DisplayC(name, wait):
    while True:
        print 'Display: 3'; time.sleep(wait)

thread.start_new_thread(DisplayA, ('Display1', 2))
thread.start_new_thread(DisplayB, ('Display2', 3))
thread.start_new_thread(DisplayC, ('Display3', 5))
Add this to the bottom:

while True:
    pass

The problem is that you're running off the bottom of your main program. This terminates the entire execution session, taking the threads with it.

A quicker and better variant:

while True:
    time.sleep(1)

Do not use pass in the while loop: it busy-waits and eats CPU, an expensive way of doing nothing.
If you want a more general solution, you can import Thread from threading and use join:
from threading import Thread
...
p1 = Thread(name="A", target=DisplayA, args=('Display1',2))
p2 = Thread(name="B", target=DisplayB, args=('Display2',3))
p3 = Thread(name="C", target=DisplayC, args=('Display3',5))
p1.start()
p2.start()
p3.start()
p1.join()
p2.join()
p3.join()
This solution works also if the threads do not run endless, and your program can continue after the threads have finished.
You can either do what Prune suggested here, or you can suspend the main thread after starting DisplayA, DisplayB, and DisplayC.
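For reference, the same join-based fix in Python 3 (where print becomes a function and the low-level thread module was renamed _thread) might look like this, with bounded loops so the example terminates:

```python
import threading
import time

def display(label, wait, repeats=3):
    for _ in range(repeats):       # bounded so the example finishes
        print('Display:', label)
        time.sleep(wait)

threads = [threading.Thread(target=display, args=(i, 0.1)) for i in (1, 2, 3)]
for t in threads:
    t.start()
for t in threads:
    t.join()      # main thread blocks here instead of exiting early
print('all displays finished')
```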