I am writing my very first formal Python program using threading and multiprocessing, on a Windows machine. I am unable to launch the processes, though, with Python giving the following message. The thing is, I am not launching my threads in the main module. The threads are handled in a separate module, inside a class.
EDIT: By the way, this code runs fine on Ubuntu, but not on Windows.
RuntimeError:
Attempt to start a new process before the current process
has finished its bootstrapping phase.
This probably means that you are on Windows and you have
forgotten to use the proper idiom in the main module:
    if __name__ == '__main__':
        freeze_support()
        ...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce a Windows executable.
My original code is pretty long, but I was able to reproduce the error in an abridged version. It is split across two files: the first is the main module, which does very little other than import the module that handles processes/threads and call a method; the second module is where the meat of the code is.
testMain.py:
import parallelTestModule
extractor = parallelTestModule.ParallelExtractor()
extractor.runInParallel(numProcesses=2, numThreads=4)
parallelTestModule.py:
import multiprocessing
from multiprocessing import Process
import threading

class ThreadRunner(threading.Thread):
    """ This class represents a single instance of a running thread"""
    def __init__(self, name):
        threading.Thread.__init__(self)
        self.name = name

    def run(self):
        print self.name, '\n'

class ProcessRunner:
    """ This class represents a single instance of a running process """
    def runp(self, pid, numThreads):
        mythreads = []
        for tid in range(numThreads):
            name = "Proc-" + str(pid) + "-Thread-" + str(tid)
            th = ThreadRunner(name)
            mythreads.append(th)
        for i in mythreads:
            i.start()
        for i in mythreads:
            i.join()

class ParallelExtractor:
    def runInParallel(self, numProcesses, numThreads):
        myprocs = []
        prunner = ProcessRunner()
        for pid in range(numProcesses):
            pr = Process(target=prunner.runp, args=(pid, numThreads))
            myprocs.append(pr)
        # if __name__ == 'parallelTestModule':    # this didn't work
        # if __name__ == '__main__':              # this obviously doesn't work
        # multiprocessing.freeze_support()        # added after seeing the error, to no avail
        for i in myprocs:
            i.start()
        for i in myprocs:
            i.join()
On Windows the subprocesses will import (i.e. execute) the main module at start. You need to insert an if __name__ == '__main__': guard in the main module to avoid creating subprocesses recursively.
Modified testMain.py:
import parallelTestModule

if __name__ == '__main__':
    extractor = parallelTestModule.ParallelExtractor()
    extractor.runInParallel(numProcesses=2, numThreads=4)
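To see the mechanism at work: with the spawn start method used on Windows, each child process re-imports the main module, so any unguarded top-level statement runs again in every child. A minimal illustrative sketch (the file name and prints here are mine, not from the original code):

# spawn_demo.py - illustrative example of the re-import behaviour
import multiprocessing

print('top-level code, __name__ =', __name__)   # runs in the parent AND in every spawned child

def work():
    print('child doing work')

if __name__ == '__main__':
    p = multiprocessing.Process(target=work)
    p.start()   # the child re-imports this file, but the guard is False there
    p.join()

On Windows the top-level print fires twice (with __name__ set to '__main__' in the parent and '__mp_main__' in the child), while the guarded block runs only in the parent.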
Try putting your code inside a __main__ guard in testMain.py:
import parallelTestModule

if __name__ == '__main__':
    extractor = parallelTestModule.ParallelExtractor()
    extractor.runInParallel(numProcesses=2, numThreads=4)
See the docs:
"For an explanation of why (on Windows) the if __name__ == '__main__'
part is necessary, see Programming guidelines."
which say
"Make sure that the main module can be safely imported by a new Python
interpreter without causing unintended side effects (such a starting a
new process)."
... by using if __name__ == '__main__'
Though the earlier answers are correct, there's a small complication it would help to remark on.
In case your main module imports another module in which global variables or class member variables are defined and initialized to (or using) some new objects, you may have to condition that import in the same way:
if __name__ == '__main__':
    import my_module
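To see why, here is a minimal sketch of a hypothetical module whose import has a side effect; the module name and its contents are illustrative, not from the original answer:

# my_module.py - hypothetical module with an import-time side effect
import multiprocessing

def _background_task():
    pass

# Module-level statements execute on every import - including the implicit
# re-import performed by each spawned child process on Windows.
worker = multiprocessing.Process(target=_background_task)
worker.start()

Importing such a module only under the if __name__ == '__main__': guard, as shown above, keeps the side effect out of the child processes.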
As @Ofer said, when you are using other libraries or modules, you should import all of them inside the if __name__ == '__main__': block.
So, in my case, it ended up like this:
if __name__ == '__main__':
    import librosa
    import os
    import pandas as pd

    run_my_program()
Hello, here is my structure for multiprocessing:
from multiprocessing import Process
import time

start = time.perf_counter()

def do_something(time_for_sleep):
    print(f'Sleeping {time_for_sleep} second...')
    time.sleep(time_for_sleep)
    print('Done Sleeping...')

p1 = Process(target=do_something, args=[1])
p2 = Process(target=do_something, args=[2])

if __name__ == '__main__':
    p1.start()
    p2.start()

    p1.join()
    p2.join()

    finish = time.perf_counter()
    print(f'Finished in {round(finish - start, 2)} second(s)')
You don't have to put the imports inside the if __name__ == '__main__': block; just run the part of the program you wish to run inside it.
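That said, any unguarded top-level line (including the final print above) still re-runs in every spawned child on Windows. One common arrangement that avoids the surprise, sketched here using the same example, is to keep all the top-level work in a main() function:

from multiprocessing import Process
import time

def do_something(time_for_sleep):
    print(f'Sleeping {time_for_sleep} second...')
    time.sleep(time_for_sleep)
    print('Done Sleeping...')

def main():
    start = time.perf_counter()
    p1 = Process(target=do_something, args=[1])
    p2 = Process(target=do_something, args=[2])
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    finish = time.perf_counter()
    print(f'Finished in {round(finish - start, 2)} second(s)')

if __name__ == '__main__':
    main()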
In YOLOv5 with Python 3.8.5:
if __name__ == '__main__':
    from yolov5 import train
    train.run()
In my case it was a simple bug in the code: using a variable before it was created. Worth checking for that before trying the above solutions. Why I got this particular error message, Lord knows.
The below solution should work for both Python multiprocessing and PyTorch multiprocessing.
As other answers mentioned, the fix is to have if __name__ == '__main__':, but I faced several issues in identifying where to put it because I am using several scripts and modules. As soon as I called my first function inside main, everything before it started to create multiple processes (not sure why).
Putting the guard at the very first line (even before the imports) worked; guarding only the first function call returned a timeout error. The below is the first file of my code; multiprocessing is used after calling several functions, but putting main at the very top seems to be the only fix here.
if __name__ == '__main__':
    from mjrl.utils.gym_env import GymEnv
    from mjrl.policies.gaussian_mlp import MLP
    from mjrl.baselines.quadratic_baseline import QuadraticBaseline
    from mjrl.baselines.mlp_baseline import MLPBaseline
    from mjrl.algos.npg_cg import NPG
    from mjrl.algos.dapg import DAPG
    from mjrl.algos.behavior_cloning import BC
    from mjrl.utils.train_agent import train_agent
    from mjrl.samplers.core import sample_paths
    import os
    import json
    import mjrl.envs
    import mj_envs
    import time as timer
    import pickle
    import argparse
    import numpy as np

    # ===============================================================================
    # Get command line arguments
    # ===============================================================================

    parser = argparse.ArgumentParser(description='Policy gradient algorithms with demonstration data.')
    parser.add_argument('--output', type=str, required=True, help='location to store results')
    parser.add_argument('--config', type=str, required=True, help='path to config file with exp params')
    args = parser.parse_args()
    JOB_DIR = args.output
    if not os.path.exists(JOB_DIR):
        os.mkdir(JOB_DIR)
    with open(args.config, 'r') as f:
        job_data = eval(f.read())
    assert 'algorithm' in job_data.keys()
    assert any([job_data['algorithm'] == a for a in ['NPG', 'BCRL', 'DAPG']])
    job_data['lam_0'] = 0.0 if 'lam_0' not in job_data.keys() else job_data['lam_0']
    job_data['lam_1'] = 0.0 if 'lam_1' not in job_data.keys() else job_data['lam_1']
    EXP_FILE = JOB_DIR + '/job_config.json'
    with open(EXP_FILE, 'w') as f:
        json.dump(job_data, f, indent=4)

    # ===============================================================================
    # Train Loop
    # ===============================================================================

    e = GymEnv(job_data['env'])
    policy = MLP(e.spec, hidden_sizes=job_data['policy_size'], seed=job_data['seed'])
    baseline = MLPBaseline(e.spec, reg_coef=1e-3, batch_size=job_data['vf_batch_size'],
                           epochs=job_data['vf_epochs'], learn_rate=job_data['vf_learn_rate'])

    # Get demonstration data if necessary and behavior clone
    if job_data['algorithm'] != 'NPG':
        print("========================================")
        print("Collecting expert demonstrations")
        print("========================================")
        demo_paths = pickle.load(open(job_data['demo_file'], 'rb'))

        ########################################################################################
        demo_paths = demo_paths[0:3]
        print(job_data['demo_file'], len(demo_paths))
        for d in range(len(demo_paths)):
            feats = demo_paths[d]['features']
            feats = np.vstack(feats)
            demo_paths[d]['observations'] = feats
        ########################################################################################

        bc_agent = BC(demo_paths, policy=policy, epochs=job_data['bc_epochs'], batch_size=job_data['bc_batch_size'],
                      lr=job_data['bc_learn_rate'], loss_type='MSE', set_transforms=False)
        in_shift, in_scale, out_shift, out_scale = bc_agent.compute_transformations()
        bc_agent.set_transformations(in_shift, in_scale, out_shift, out_scale)
        bc_agent.set_variance_with_data(out_scale)

        ts = timer.time()
        print("========================================")
        print("Running BC with expert demonstrations")
        print("========================================")
        bc_agent.train()
        print("========================================")
        print("BC training complete !!!")
        print("time taken = %f" % (timer.time() - ts))
        print("========================================")

        # if job_data['eval_rollouts'] >= 1:
        #     score = e.evaluate_policy(policy, num_episodes=job_data['eval_rollouts'], mean_action=True)
        #     print("Score with behavior cloning = %f" % score[0][0])

    if job_data['algorithm'] != 'DAPG':
        # We throw away the demo data when training from scratch or fine-tuning with RL without explicit augmentation
        demo_paths = None

    # ===============================================================================
    # RL Loop
    # ===============================================================================

    rl_agent = DAPG(e, policy, baseline, demo_paths,
                    normalized_step_size=job_data['rl_step_size'],
                    lam_0=job_data['lam_0'], lam_1=job_data['lam_1'],
                    seed=job_data['seed'], save_logs=True)

    print("========================================")
    print("Starting reinforcement learning phase")
    print("========================================")

    ts = timer.time()
    train_agent(job_name=JOB_DIR,
                agent=rl_agent,
                seed=job_data['seed'],
                niter=job_data['rl_num_iter'],
                gamma=job_data['rl_gamma'],
                gae_lambda=job_data['rl_gae'],
                num_cpu=job_data['num_cpu'],
                sample_mode='trajectories',
                num_traj=job_data['rl_num_traj'],
                num_samples=job_data['rl_num_samples'],
                save_freq=job_data['save_freq'],
                evaluation_rollouts=job_data['eval_rollouts'])
    print("time taken = %f" % (timer.time() - ts))
I ran into the same problem. @Ofer's method is correct, but there are some details to pay attention to. The following is the successfully debugged code I modified, for your reference:
if __name__ == '__main__':
    import matplotlib.pyplot as plt
    import numpy as np

    def imgshow(img):
        img = img / 2 + 0.5
        np_img = img.numpy()
        plt.imshow(np.transpose(np_img, (1, 2, 0)))
        plt.show()

    dataiter = iter(train_loader)
    images, labels = dataiter.next()
    imgshow(torchvision.utils.make_grid(images))
    print(' '.join('%5s' % classes[labels[i]] for i in range(4)))
For the record, I don't have a subroutine; I just have a main program, yet I had the same problem as you. This demonstrates that when importing a Python library in the middle of a program, we should add:
if __name__ == '__main__':
I tried the tricks mentioned above on the following very simple code, but I still cannot stop it from resetting on any of my Windows machines with Python 3.8/3.10. I would very much appreciate it if you could tell me where I am wrong.
print('script reset')

def do_something(inp):
    print('Done!')

if __name__ == '__main__':
    from multiprocessing import Process, get_start_method
    print('main reset')
    print(get_start_method())
    Process(target=do_something, args=[1]).start()
    print('Finished')
The output displays:
script reset
main reset
spawn
Finished
script reset
Done!
Update:
As far as I understand, you are not preventing either the script containing __main__ or the .start() call from being re-executed (which doesn't happen on Linux); rather, you are suggesting workarounds so that we don't see the re-execution. One has to keep the imports minimal and put them in each function separately, but it is still slow relative to Linux.
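For what it's worth, the re-execution itself cannot be switched off under the spawn start method: the child must import the main module to find do_something at all. What can be done is to leave nothing unguarded at top level, so the child's import is silent. A sketch of the same test arranged that way:

def do_something(inp):
    print('Done!')

if __name__ == '__main__':
    from multiprocessing import Process, get_start_method
    print('main reset')        # only the parent process reaches this line
    print(get_start_method())  # 'spawn' on Windows
    p = Process(target=do_something, args=[1])
    p.start()
    p.join()
    print('Finished')

With this version the child still imports the file, but since the only top-level statement is a def, nothing visible happens and 'script reset' never appears.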
I have a Python app that starts from a main script, let's say main.py. Since my app is organized, main.py references and imports other .py files within the same directory that house other functions. As my app runs continuously, it imports such a function from another script, which is also supposed to run forever until it is explicitly cancelled.
Thing is, how would I cancel that specific script while leaving its affected variables untouched and the main script/larger app still running?
I do not know how I would go about targeting a specific function to stop its execution.
I use a kill function in my utils to kill any unneeded Python process whose name I know. Note: the following code was tested and works on Ubuntu Linux and macOS machines.
import os
import signal
import subprocess

def get_running_pids(process_name):
    pids = []
    p = subprocess.Popen(['ps', '-A'], stdout=subprocess.PIPE)
    out, err = p.communicate()
    for line in out.splitlines():
        if process_name in line.decode('utf-8'):
            pid = int(line.decode('utf-8').split(None, 1)[0])
            pids.append(pid)
    return pids

def kill_process_with_name(process_name):
    pids = get_running_pids(process_name)
    for pid in pids:
        os.kill(pid, signal.SIGKILL)
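As noted, the ps -A pipeline is Unix-only. If a cross-platform variant (Windows included) is ever needed, the third-party psutil package offers roughly the same functionality; a minimal sketch, assuming psutil is installed:

import psutil

def kill_process_with_name(process_name):
    # Iterate over running processes, matching on the process name.
    for proc in psutil.process_iter(['pid', 'name']):
        try:
            if process_name in (proc.info['name'] or ''):
                proc.kill()
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass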
You could set up user-defined custom exceptions, extending Python's built-in Exception object. Further reading here: Python's User Defined Exceptions.
CustomExceptions.py:
class HaltException(Exception):
    pass
main.py:
from CustomExceptions import HaltException

class Functions():
    def a(self):
        print("hey")
        self.b()
        return "1"

    def b(self):
        print("hello")
        raise HaltException()

def main():
    func_obj = Functions()
    try:
        func_obj.a()
    except HaltException as e:
        pass
    print("Awesome")

main()
Programs may name their own exceptions by creating a new exception
class (see Classes for more about Python classes). Exceptions should
typically be derived from the Exception class, either directly or
indirectly.
I'm creating a virtual OS with different files, like boot.py and Run.py
It worked for a while until I typed this in:
if Run.run == 0:
    error()
else:
    start()
There are no errors flagged for it, like red underlines or anything. Now I get this error:
AttributeError("module 'boot' has no attribute 'bootup'",)
Here's the code:
boot.py:
#Imports
import Run

#Variables
name = "PyOS"
version = 1
creator = "© Dragon Softwares"

def bootup(run):
    if Run.run == 0:
        error()
    else:
        start()

def start():
    print("Starting PyOS")
    print("Version ", version)

def error():
    print("Please Run 'Run.py' To bootup the system")

bootup()
Run.py:
#Imports
import time
import boot

#Variables
run = 0

def main():
    run = 1
    time.sleep(0.1)
    boot.bootup

main()
You have a circular import. Don't have boot import Run and vice-versa.
What happens now is:
Run.py is executed as the __main__ module.
The line import boot is reached and executed.
boot has not been imported before, so boot.py is found and loaded to be the boot module.
The line import Run is reached and executed.
Run has not been imported before, so Run.py is found and loaded to be the Run module. Note: that Run.py is also used for __main__ does not matter here.
The line import boot is reached and executed.
boot is already being imported, so the partially initialized boot module is used to set the name boot in the Run module.
Various other names are set in the Run module namespace (run and main).
The line main() is reached and executed
The main() function references boot.bootup, which doesn't exist yet, because the boot module's top-level code never reached the def bootup(run): line.
Instead of importing Run, pass the run value as a parameter to the boot function:
def main():
    run = 1
    time.sleep(0.1)
    boot.bootup(run)
Also, don't call bootup() at the end of the boot module. That function call will be executed when you import boot, and makes no sense there. If you want to present an error message, use a if __name__ == '__main__': guard:
if __name__ == '__main__':
    import sys
    error()
    sys.exit(1)
The if test will only ever be true if you ran boot.py as the script.
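Putting both fixes together, one way the two files could end up (a sketch; the behaviour of error() and start() is kept exactly as in the question):

boot.py:

name = "PyOS"
version = 1
creator = "© Dragon Softwares"

def bootup(run):
    if run == 0:
        error()
    else:
        start()

def start():
    print("Starting PyOS")
    print("Version ", version)

def error():
    print("Please Run 'Run.py' To bootup the system")

if __name__ == '__main__':
    import sys
    error()
    sys.exit(1)

Run.py:

import time
import boot

def main():
    run = 1
    time.sleep(0.1)
    boot.bootup(run)

main()

Note that boot no longer imports Run at all; the run value travels as a function argument, which removes the circular import entirely.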
How can I exit a child process when the parent process is stopped?
Following is my code.
I want to stop all execution when there is a KeyboardInterrupt.
import os, sys

scripts = ["script_1.py", "script_2.py"]

try:
    for script in scripts:
        command = 'python ' + script
        os.system(command)
except KeyboardInterrupt:
    os._exit(1)
except Exception as e:
    raise e
Since you are trying to execute Python scripts from within another Python script, let's do this in a Pythonic way and swap os.system for importlib.import_module:
import os
import importlib

scripts = ['script_1.py', 'script_2.py']

for filename in scripts:
    modulename, ext = os.path.splitext(filename)
    importlib.import_module(modulename)
If your scripts are like:
if __name__ == '__main__':
    # code here
    print('hello')
it will not work, because the if __name__ == '__main__': guard ensures the part under the if is executed only when the file is run as a script (not imported as a module).
So, in this case the better thing to do is something like:
script_1.py:
def main():
    # code here
    print('hello')
And in the main script:
import os
import importlib

scripts = ['script_1.py', 'script_2.py']

for filename in scripts:
    modulename, ext = os.path.splitext(filename)
    module = importlib.import_module(modulename)
    module.main()
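If the scripts genuinely need to run as separate processes, so that a KeyboardInterrupt in the parent can also take the children down (the original goal), a subprocess-based sketch along these lines could work, assuming the scripts sit in the current directory:

import subprocess
import sys

scripts = ['script_1.py', 'script_2.py']
procs = []

try:
    for script in scripts:
        # run each script with the same interpreter, one after another
        proc = subprocess.Popen([sys.executable, script])
        procs.append(proc)
        proc.wait()
except KeyboardInterrupt:
    # Ctrl-C in the parent: terminate any child that is still running
    for proc in procs:
        if proc.poll() is None:
            proc.terminate()
    sys.exit(1)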
I am trying to write an alarm clock program in Python, using the multiprocessing module, on Windows 7.
It all runs fine in the interpreter. But when packed into one file by PyInstaller, every time the code creates a process, two processes appear: one is the parent and the other is its child. When the code kills the parent process, the child becomes an orphan process.
The code:
from multiprocessing import Process, freeze_support
import os
import time
import winsound

def startout(seconds, name):
    freeze_support()
    print name + ':pid ' + str(os.getpid()) + ' is created'
    startTime = time.time()
    while (time.time() - startTime) < seconds:
        time.sleep(1)
    winsound.PlaySound('SystemQuestion', winsound.SND_ALIAS)
    print name + ' end'

class alarmCenter:
    def __init__(self):
        self.alarmList = {'alarm1': None, 'alarm2': None, 'alarm3': None}

    def newAlarm(self, seconds, name):
        if self.alarmList[name] != None:
            if self.alarmList[name].is_alive():
                return False
        ala = Process(target=startout, args=(seconds, name))
        ala.deamon = True
        ala.start()
        self.alarmList[name] = ala
        return True

    def stopAlarm(self, name):
        try:
            self.alarmList[name].terminate()
            self.alarmList[name].join()
            self.alarmList[name] = None
        except Exception:
            pass

    def terminateAll(self):
        for each in self.alarmList.keys():
            if self.alarmList[each] != None:
                self.alarmList[each].terminate()

if __name__ == '__main__':
    freeze_support()
    #....
Note that multiprocessing.freeze_support() is already there.
Could anyone please show me how to kill the child process or fix this bug?
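One detail worth checking in the code above: ala.deamon = True is presumably a typo for ala.daemon = True. Python silently sets the misspelled attribute and ignores it, so the alarm processes are never actually daemonized; daemonic children are terminated automatically when the parent exits, which is exactly the cleanup being asked for. A minimal sketch combining that fix with the documented freeze_support() placement (the alarm duration and name here are illustrative):

from multiprocessing import Process, freeze_support
import time

def startout(seconds, name):
    time.sleep(seconds)
    print(name + ' end')

if __name__ == '__main__':
    freeze_support()                # first statement under the guard, per the docs
    ala = Process(target=startout, args=(5, 'alarm1'))
    ala.daemon = True               # note the spelling: daemon, not deamon
    ala.start()
    ala.join()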