I am writing my very first formal Python program using threading and multiprocessing, on a Windows machine. I am unable to launch the processes, though, with Python giving the message below. The thing is, I am not launching my threads in the main module. The threads are handled in a separate module, inside a class.
EDIT: By the way, this code runs fine on Ubuntu, just not on Windows.
RuntimeError:
        Attempt to start a new process before the current process
        has finished its bootstrapping phase.

        This probably means that you are on Windows and you have
        forgotten to use the proper idiom in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce a Windows executable.
My original code is pretty long, but I was able to reproduce the error in an abridged version. It is split into two files: the first is the main module, which does very little other than import the module that handles processes/threads and call a method. The second module is where the meat of the code is.
testMain.py:
import parallelTestModule
extractor = parallelTestModule.ParallelExtractor()
extractor.runInParallel(numProcesses=2, numThreads=4)
parallelTestModule.py:
import multiprocessing
from multiprocessing import Process
import threading
class ThreadRunner(threading.Thread):
    """ This class represents a single instance of a running thread """
    def __init__(self, name):
        threading.Thread.__init__(self)
        self.name = name

    def run(self):
        print(self.name)

class ProcessRunner:
    """ This class represents a single instance of a running process """
    def runp(self, pid, numThreads):
        mythreads = []
        for tid in range(numThreads):
            name = "Proc-" + str(pid) + "-Thread-" + str(tid)
            th = ThreadRunner(name)
            mythreads.append(th)
        for i in mythreads:
            i.start()
        for i in mythreads:
            i.join()

class ParallelExtractor:
    def runInParallel(self, numProcesses, numThreads):
        myprocs = []
        prunner = ProcessRunner()
        for pid in range(numProcesses):
            pr = Process(target=prunner.runp, args=(pid, numThreads))
            myprocs.append(pr)
        # if __name__ == 'parallelTestModule':    # This didn't work
        # if __name__ == '__main__':              # This obviously doesn't work
        # multiprocessing.freeze_support()        # added after seeing the error, to no avail
        for i in myprocs:
            i.start()
        for i in myprocs:
            i.join()
On Windows the subprocesses will import (i.e. execute) the main module at start. You need to insert an if __name__ == '__main__': guard in the main module to avoid creating subprocesses recursively.
Modified testMain.py:
import parallelTestModule

if __name__ == '__main__':
    extractor = parallelTestModule.ParallelExtractor()
    extractor.runInParallel(numProcesses=2, numThreads=4)
Try putting the top-level code in testMain.py behind the if __name__ == '__main__': guard:

import parallelTestModule

if __name__ == '__main__':
    extractor = parallelTestModule.ParallelExtractor()
    extractor.runInParallel(numProcesses=2, numThreads=4)
See the docs:
"For an explanation of why (on Windows) the if __name__ == '__main__'
part is necessary, see Programming guidelines."
which say
"Make sure that the main module can be safely imported by a new Python
interpreter without causing unintended side effects (such a starting a
new process)."
... by using if __name__ == '__main__'
Though the earlier answers are correct, there's a small complication worth remarking on.
If your main module imports another module in which global variables or class member variables are defined and initialized to (or using) new objects, you may have to guard that import in the same way:

if __name__ == '__main__':
    import my_module
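For instance, a minimal sketch of the situation (the module name and its side effect are hypothetical, not from the question):

# heavy_module.py -- a hypothetical module with a side effect at import time
from multiprocessing import Process

def _warm_up():
    print('warming up')

# Starting a process at import time is the problematic side effect:
_p = Process(target=_warm_up)
_p.start()

# main.py
# An unguarded `import heavy_module` would re-run that side effect inside
# every spawned child during its bootstrapping phase and raise the RuntimeError.
if __name__ == '__main__':
    import heavy_module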
As @Ofer said, when you are using other libraries or modules, you should import all of them inside the if __name__ == '__main__': block.
So, in my case, it ended up like this:

if __name__ == '__main__':
    import librosa
    import os
    import pandas as pd
    run_my_program()
Hello, here is my structure for multiprocessing:

from multiprocessing import Process
import time

def do_something(time_for_sleep):
    print(f'Sleeping {time_for_sleep} second...')
    time.sleep(time_for_sleep)
    print('Done Sleeping...')

p1 = Process(target=do_something, args=[1])
p2 = Process(target=do_something, args=[2])

if __name__ == '__main__':
    start = time.perf_counter()
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    finish = time.perf_counter()
    print(f'Finished in {round(finish - start, 2)} second(s)')
You don't have to put the imports inside the if __name__ == '__main__': block; just put the code you actually want to run inside it.
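A minimal sketch of that idea (the names here are illustrative, nothing beyond the standard library):

import time                              # top-level imports are fine
from multiprocessing import Process

def work():
    time.sleep(1)
    print('done')

if __name__ == '__main__':               # only the execution needs the guard
    p = Process(target=work)
    p.start()
    p.join()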
In YOLOv5 with Python 3.8.5:

if __name__ == '__main__':
    from yolov5 import train
    train.run()
In my case it was a simple bug in the code, using a variable before it was created. Worth checking that out before trying the above solutions. Why I got this particular error message, Lord knows.
The solution below should work for both Python multiprocessing and PyTorch multiprocessing.
As other answers mentioned, the fix is to have if __name__ == '__main__':, but I faced several issues in identifying where to put it because I am using several scripts and modules. As soon as I called my first function inside main, everything before it started to create multiple processes (I am not sure why).
Putting the guard at the very first line (even before the imports) worked, while wrapping only the first function call returned a timeout error. Below is the first file of my code; multiprocessing is used after calling several functions, but putting main at the top seems to be the only fix here.
if __name__ == '__main__':
    from mjrl.utils.gym_env import GymEnv
    from mjrl.policies.gaussian_mlp import MLP
    from mjrl.baselines.quadratic_baseline import QuadraticBaseline
    from mjrl.baselines.mlp_baseline import MLPBaseline
    from mjrl.algos.npg_cg import NPG
    from mjrl.algos.dapg import DAPG
    from mjrl.algos.behavior_cloning import BC
    from mjrl.utils.train_agent import train_agent
    from mjrl.samplers.core import sample_paths
    import os
    import json
    import mjrl.envs
    import mj_envs
    import time as timer
    import pickle
    import argparse
    import numpy as np

    # ===============================================================================
    # Get command line arguments
    # ===============================================================================

    parser = argparse.ArgumentParser(description='Policy gradient algorithms with demonstration data.')
    parser.add_argument('--output', type=str, required=True, help='location to store results')
    parser.add_argument('--config', type=str, required=True, help='path to config file with exp params')
    args = parser.parse_args()
    JOB_DIR = args.output
    if not os.path.exists(JOB_DIR):
        os.mkdir(JOB_DIR)
    with open(args.config, 'r') as f:
        job_data = eval(f.read())
    assert 'algorithm' in job_data.keys()
    assert any([job_data['algorithm'] == a for a in ['NPG', 'BCRL', 'DAPG']])
    job_data['lam_0'] = 0.0 if 'lam_0' not in job_data.keys() else job_data['lam_0']
    job_data['lam_1'] = 0.0 if 'lam_1' not in job_data.keys() else job_data['lam_1']
    EXP_FILE = JOB_DIR + '/job_config.json'
    with open(EXP_FILE, 'w') as f:
        json.dump(job_data, f, indent=4)

    # ===============================================================================
    # Train Loop
    # ===============================================================================

    e = GymEnv(job_data['env'])
    policy = MLP(e.spec, hidden_sizes=job_data['policy_size'], seed=job_data['seed'])
    baseline = MLPBaseline(e.spec, reg_coef=1e-3, batch_size=job_data['vf_batch_size'],
                           epochs=job_data['vf_epochs'], learn_rate=job_data['vf_learn_rate'])

    # Get demonstration data if necessary and behavior clone
    if job_data['algorithm'] != 'NPG':
        print("========================================")
        print("Collecting expert demonstrations")
        print("========================================")
        demo_paths = pickle.load(open(job_data['demo_file'], 'rb'))

        ########################################################################################
        demo_paths = demo_paths[0:3]
        print(job_data['demo_file'], len(demo_paths))
        for d in range(len(demo_paths)):
            feats = demo_paths[d]['features']
            feats = np.vstack(feats)
            demo_paths[d]['observations'] = feats
        ########################################################################################

        bc_agent = BC(demo_paths, policy=policy, epochs=job_data['bc_epochs'], batch_size=job_data['bc_batch_size'],
                      lr=job_data['bc_learn_rate'], loss_type='MSE', set_transforms=False)
        in_shift, in_scale, out_shift, out_scale = bc_agent.compute_transformations()
        bc_agent.set_transformations(in_shift, in_scale, out_shift, out_scale)
        bc_agent.set_variance_with_data(out_scale)

        ts = timer.time()
        print("========================================")
        print("Running BC with expert demonstrations")
        print("========================================")
        bc_agent.train()
        print("========================================")
        print("BC training complete !!!")
        print("time taken = %f" % (timer.time() - ts))
        print("========================================")

        # if job_data['eval_rollouts'] >= 1:
        #     score = e.evaluate_policy(policy, num_episodes=job_data['eval_rollouts'], mean_action=True)
        #     print("Score with behavior cloning = %f" % score[0][0])

    if job_data['algorithm'] != 'DAPG':
        # We throw away the demo data when training from scratch or fine-tuning with RL without explicit augmentation
        demo_paths = None

    # ===============================================================================
    # RL Loop
    # ===============================================================================

    rl_agent = DAPG(e, policy, baseline, demo_paths,
                    normalized_step_size=job_data['rl_step_size'],
                    lam_0=job_data['lam_0'], lam_1=job_data['lam_1'],
                    seed=job_data['seed'], save_logs=True)

    print("========================================")
    print("Starting reinforcement learning phase")
    print("========================================")

    ts = timer.time()
    train_agent(job_name=JOB_DIR,
                agent=rl_agent,
                seed=job_data['seed'],
                niter=job_data['rl_num_iter'],
                gamma=job_data['rl_gamma'],
                gae_lambda=job_data['rl_gae'],
                num_cpu=job_data['num_cpu'],
                sample_mode='trajectories',
                num_traj=job_data['rl_num_traj'],
                num_samples=job_data['rl_num_samples'],
                save_freq=job_data['save_freq'],
                evaluation_rollouts=job_data['eval_rollouts'])
    print("time taken = %f" % (timer.time() - ts))
I ran into the same problem. @Ofer's method is correct, but there are some details to pay attention to. The following is the successfully debugged code I modified, for your reference:
if __name__ == '__main__':
    import matplotlib.pyplot as plt
    import numpy as np

    def imgshow(img):
        img = img / 2 + 0.5
        np_img = img.numpy()
        plt.imshow(np.transpose(np_img, (1, 2, 0)))
        plt.show()

    dataiter = iter(train_loader)
    images, labels = dataiter.next()
    imgshow(torchvision.utils.make_grid(images))
    print(' '.join('%5s' % classes[labels[i]] for i in range(4)))
For the record, I don't have a subroutine; I just have a main program. But I had the same problem as you, which demonstrates that when importing a Python library mid-program, we should add:
if __name__ == '__main__':
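As an aside, since this example is PyTorch: the spawned processes typically come from the DataLoader's worker processes, so building the loader with num_workers=0 (an assumption about how train_loader is constructed, which is not shown above) sidesteps multiprocessing on Windows entirely:

# hypothetical construction of train_loader; with num_workers=0 the data
# loading happens in the main process, so nothing is spawned or re-imported
from torch.utils.data import DataLoader
train_loader = DataLoader(train_set, batch_size=4, shuffle=True, num_workers=0)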
I tried the tricks mentioned above on the following very simple code, but I still cannot stop it from resetting on any of my Windows machines, with Python 3.8/3.10. I would very much appreciate it if you could tell me where I am wrong.
print('script reset')

def do_something(inp):
    print('Done!')

if __name__ == '__main__':
    from multiprocessing import Process, get_start_method
    print('main reset')
    print(get_start_method())
    Process(target=do_something, args=[1]).start()
    print('Finished')
The output displays:
script reset
main reset
spawn
Finished
script reset
Done!
Update:
As far as I understand, you are not preventing either the script containing __main__ or the .start() call from re-running the module-level code (which doesn't happen on Linux); rather, you are suggesting workarounds so that we don't see the effects of the re-run. One has to keep all imports minimal and put them in each function separately, but it is still slow relative to Linux.
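A minimal sketch of that point (under the 'spawn' start method every child re-imports the module; that can be observed but not prevented, only kept free of side effects):

import multiprocessing

# Under 'spawn', this line runs once per process, parent and children alike.
print('module imported in', multiprocessing.current_process().name)

def do_something(inp):
    print('Done!')

if __name__ == '__main__':
    # Everything here runs in the parent only.
    p = multiprocessing.Process(target=do_something, args=[1])
    p.start()
    p.join()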
I am having trouble logging everything that happens in a module called via multiprocessing.
Script: mainScript.py
import multiprocessing

python_file = 'other_script'  # placeholder: the name of a different python file

def main():
    module = __import__(python_file)
    p = multiprocessing.Process(target=getattr(module, 'main'), args=())
    p.start()
    step_timeout = 20
    p.join(step_timeout)
Script: A different python file
import os

def main():
    print('I am a different python script')
    print(os.popen('dir').read())  # os.system() returns an exit code; os.popen() gives readable output
    os.system("call C:\\test.bat")
    return
I am having trouble logging the output from the batch files that are called, as well as the result of os.popen(command).read().
I know you can log everything that runs through a Python script by doing something like this:
python mainScript.py >> logFile.txt
Is it possible to do something like this with the multiprocessing output?
p = multiprocessing.Process(target=getattr(module, 'main'), args=()) >> logFile.txt
If so, will it also capture the output of the batch scripts?
Is it possible to redirect everything that is printed in my called process to the main process?
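A sketch of one direction, assuming the child process may open the log file itself and that subprocess can replace os.system (neither of which the current code does): Python-level prints are redirected by reassigning sys.stdout inside the child, and the batch file's console output is captured by passing the same file handle to subprocess.

import multiprocessing
import subprocess
import sys

def worker(log_path):
    # Redirect this child's Python-level prints into the log file.
    with open(log_path, 'a') as log:
        sys.stdout = log
        sys.stderr = log
        print('I am a different python script')
        # subprocess can capture the batch file's console output,
        # which os.system() would send straight to the console.
        subprocess.call('call C:\\test.bat', shell=True, stdout=log, stderr=log)

if __name__ == '__main__':
    p = multiprocessing.Process(target=worker, args=('logFile.txt',))
    p.start()
    p.join(20)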
Using Windows and Python 3.6, I am having trouble with multiprocessing once I import the function that is to be mapped.
This code works (gives output [2, 12, 30, 56, 90]):
from multiprocessing import Pool

def f(x, y):
    return x * y

if __name__ == '__main__':
    args = [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10)]
    with Pool(4) as p:
        result = p.starmap(f, args)
    print(result)
Now I move the function f to a different .py file called 'test' and instead import it:
from multiprocessing import Pool
from test import f

if __name__ == '__main__':
    args = [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10)]
    with Pool(4) as p:
        result = p.starmap(f, args)
    print(result)
with test.py only containing:
def f(x, y):
    return x * y
Running this leads to an infinite loop (it doesn't return anything, and CPU usage stays high).
What is causing this, and is there a way to fix it? I have successfully got multiprocessing to work in a program by copying all the code into one huge .py file, which is obviously not ideal.
My module, which is also a script, calls some internally defined functions that use multiprocessing.
Running the module as a script works just fine on Windows and Linux. Calling its main function from another Python script works fine on Linux but not on Windows.
The core, multi-processed function (the function passed to the multiprocessing.Process constructor as the target) never gets executed when my module calls the Process's start() method.
The module must be doing something too demanding for this usage (multiprocessing on Windows when called from a script), but how can I get to the source of this problem?
Here's some example code to demonstrate the behavior. First the module:
# -*- coding: utf-8 -*-
'my_mp_module.py'
import argparse
import itertools
import Queue
import multiprocessing

def meaty_function(**kwargs):
    'Do a meaty calculation using multiprocessing'
    task_values = kwargs['task_values']
    # Set up a queue of tasks to perform, one for each element in the task_values array
    in_queue = multiprocessing.Queue()
    out_queue = multiprocessing.Queue()
    reduce(lambda a, b: a or b,
           itertools.imap(in_queue.put, enumerate(task_values)))
    core_procargs = (
        in_queue,
        out_queue,
    )
    core_processes = [multiprocessing.Process(target=_core_function,
                                              args=core_procargs) for ii in xrange(len(task_values))]
    for p in core_processes:
        p.daemon = True  # I've tried both ways, setting this to True and False
        p.start()
    sum_of_results = 0
    for result_count in xrange(len(task_values)):
        a_result = out_queue.get(block=True)
        sum_of_results += a_result
    for p in core_processes:
        p.join()
    return sum_of_results

def _core_function(inp_queue, out_queue):
    'Perform the core calculation for each task in the input queue, placing the results in the output queue'
    while 1:
        try:
            task_idx, task_value = inp_queue.get(block=False)
            # Perform a calculation with this task value.
            task_result = task_idx + task_value  # The real calculation is more complicated than this
            out_queue.put(task_result)
        except Queue.Empty:
            break

def get_command_line_arguments(command_line=None):
    'parse the given command_line (list of strings) or from sys.argv, return the corresponding argparse.Namespace object'
    aparse = argparse.ArgumentParser(description=__doc__)
    aparse.add_argument('--task_values', '-t',
                        action='append',
                        type=int,
                        help='''The value for each task to perform.''')
    return aparse.parse_args(args=command_line)

def main(command_line=None):
    'perform a meaty calculation with the input from the command line, and print the results'
    # collect input from the command line
    args = get_command_line_arguments(command_line)
    keywords = vars(args)
    # perform a meaty calculation with the input
    meaty_results = meaty_function(**keywords)
    # display the results
    print(meaty_results)

if __name__ == '__main__':
    multiprocessing.freeze_support()
    main(command_line=None)
Now the script that calls the module:
# -*- coding: utf-8 -*-
'my_mp_script.py:'
import my_mp_module
import multiprocessing
multiprocessing.freeze_support()
my_mp_module.main(command_line=None)
Running the module as a script gives the expected results:
C:\Users\greg>python -m my_mp_module -t 0 -t 1 -t 2
6
But running another script that simply calls the module's main() function gives an error message under Windows (here I stripped out the error message duplicated from each of the multiple processes):
C:\Users\greg>python my_mp_script.py -t 0 -t 1 -t 2
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\greg\AppData\Local\Continuum\anaconda2-64\lib\multiprocessing\forking.py", line 380, in main
    prepare(preparation_data)
  File "C:\Users\greg\AppData\Local\Continuum\anaconda2-64\lib\multiprocessing\forking.py", line 510, in prepare
    '__parents_main__', file, path_name, etc
  File "C:\Users\greg\Documents\PythonCode\Scripts\my_mp_script.py", line 7, in <module>
    my_mp_module.main(command_line=None)
  File "C:\Users\greg\Documents\PythonCode\Lib\my_mp_module.py", line 72, in main
    meaty_results = meaty_function(**keywords)
  File "C:\Users\greg\Documents\PythonCode\Lib\my_mp_module.py", line 28, in meaty_function
    p.start()
  File "C:\Users\greg\AppData\Local\Continuum\anaconda2-64\lib\multiprocessing\process.py", line 130, in start
    self._popen = Popen(self)
  File "C:\Users\greg\AppData\Local\Continuum\anaconda2-64\lib\multiprocessing\forking.py", line 258, in __init__
    cmd = get_command_line() + [rhandle]
  File "C:\Users\greg\AppData\Local\Continuum\anaconda2-64\lib\multiprocessing\forking.py", line 358, in get_command_line
    is not going to be frozen to produce a Windows executable.''')
RuntimeError:
        Attempt to start a new process before the current process
        has finished its bootstrapping phase.

        This probably means that you are on Windows and you have
        forgotten to use the proper idiom in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce a Windows executable.
Linux and Windows work a little differently in the way they create additional processes. Linux forks the existing process, but Windows creates a new Python interpreter to run the spawned process. The effect here is that all your code gets re-loaded, just as if it were running for the first time. There is a similar question that may be informative to look at: How to stop multiprocessing in python running for the full script.
The solution here is to modify the my_mp_script.py script so the call to my_mp_module.main() is guarded, like so:
import my_mp_module
import multiprocessing

if __name__ == '__main__':
    my_mp_module.main(command_line=None)
Note that I've also removed the freeze_support() calls for now; those may be acceptable to put back in if needed.
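If you want to confirm which start method is in play (Python 3.4+; the value is typically 'fork' on Linux and always 'spawn' on Windows), a small check along these lines works:

import multiprocessing

if __name__ == '__main__':
    # 'spawn' re-imports the main module in every child; 'fork' does not.
    print(multiprocessing.get_start_method())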
Using Windows 7 + Python 2.6, I am trying to run a simulation model in parallel. I can launch multiple instances of the executable by double-clicking on them in my file browser. However, asynchronous calls with Popen result in each successive instance interrupting the previous one. For what it's worth, the executable returns text to the console, but I don't need to collect the results interactively.
Here's where I am so far:
import os
import multiprocessing
import subprocess

def run(c):
    exe = os.path.join("<location>", "folder", str(c), "program.exe")
    run = os.path.join("<location>", "folder", str(c), "run.dat")
    subprocess.Popen([exe, run], creationflags=subprocess.CREATE_NEW_CONSOLE)

def main():
    pool = multiprocessing.Pool(3)
    for c in range(10):
        pool.apply_async(run, (str(c),))
    pool.close()
    pool.join()

if __name__ == '__main__':
    main()
After scouring SO for a solution, I've learned that using multiprocessing may be redundant here, but I need some way to limit the number of concurrently running instances.
EDIT: Enabled by @J.F. Sebastian's comment regarding the cwd argument. Switching from the non-blocking Popen to the blocking check_call also means each pool worker waits for its run to finish, so Pool(3) genuinely limits execution to three instances at a time:
import os
import multiprocessing
import subprocess

def run(c):
    exe = os.path.join("<location>", "folder", str(c), "program.exe")
    run = os.path.join("<location>", "folder", str(c), "run.dat")
    subprocess.check_call([exe, run], cwd=os.path.join("<location>", "folder"),
                          creationflags=subprocess.CREATE_NEW_CONSOLE)

def main():
    pool = multiprocessing.Pool(3)
    for c in range(10):
        pool.apply_async(run, (str(c),))
    pool.close()
    pool.join()

if __name__ == '__main__':
    main()