Python - How to stop a child process when the parent script is stopped

How can I exit the child process when the parent process is stopped?
Following is my code.
I want to stop all execution when there is a KeyboardInterrupt.
import os, sys

scripts = ["script_1.py", "script_2.py"]

try:
    for script in scripts:
        command = 'python ' + script
        os.system(command)
except KeyboardInterrupt:
    os._exit(1)
except Exception as e:
    raise e

Since you are trying to execute Python scripts from within another Python script, let's do this in a Pythonic way and replace os.system with importlib.import_module:
import os
import importlib

scripts = ['script_1.py', 'script_2.py']

for filename in scripts:
    modulename, ext = os.path.splitext(filename)
    importlib.import_module(modulename)
If your scripts are like:
if __name__ == '__main__':
    # code here
    print('hello')
it will not work, because the if __name__ == '__main__': guard is there to ensure the code under it runs only when the file is executed as a script (and not imported as a module).
So, in this case the better approach is something like this:
script_1.py:
def main():
    # code here
    print('hello')
And in the main script:
import os
import importlib

scripts = ['script_1.py', 'script_2.py']

for filename in scripts:
    modulename, ext = os.path.splitext(filename)
    module = importlib.import_module(modulename)
    module.main()
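That said, the original question was about stopping the child on KeyboardInterrupt. If the scripts really do need to run as separate processes, a minimal sketch using subprocess instead of os.system (assuming the same script names as in the question) lets the parent terminate the child explicitly:
import subprocess
import sys

scripts = ["script_1.py", "script_2.py"]

for script in scripts:
    # start each script as a child process using the same interpreter
    proc = subprocess.Popen([sys.executable, script])
    try:
        proc.wait()
    except KeyboardInterrupt:
        # make sure the child is stopped before the parent exits
        proc.terminate()
        proc.wait()
        sys.exit(1)
With this approach Ctrl+C interrupts proc.wait() in the parent, and the child is shut down before the parent exits.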

Related

Creating a Flag file

I'm relatively new to Python, so please forgive my early level of understanding!
I am working to create a kind of flag file. Its job is to monitor a Python executable: the flag file runs constantly, prints "Start" when the executable starts, "Running" while it runs, and "Stop" when it stops or crashes. If a crash occurs, I want it to be able to restart the script. So far I have this for the restart:
from subprocess import run
from time import sleep

# Path and name to the script you are trying to start
file_path = "py"

restart_timer = 2

def start_script():
    try:
        # Make sure 'python' command is available
        run("python " + file_path, check=True)
    except:
        # Script crashed, lets restart it!
        handle_crash()

def handle_crash():
    sleep(restart_timer)  # Restart the script after 2 seconds
    start_script()

start_script()
How can I implement this along with a flag file?
I'm not sure what you mean by "flag", but this minimally achieves what you want.
Main file main.py:
import subprocess
import sys
from time import sleep

restart_timer = 2
file_path = 'sub.py'  # file name of the other process

def start():
    try:
        # sys.executable -> same python executable
        subprocess.run([sys.executable, file_path], check=True)
    except subprocess.CalledProcessError:
        sleep(restart_timer)
        return True
    else:
        return False

def main():
    print("starting...")
    monitor = True
    while monitor:
        monitor = start()

if __name__ == '__main__':
    main()
Then the process that gets spawned, called sub.py:
from time import sleep
sleep(1)
print("doing stuff...")
# comment out to see change
raise ValueError("sub.py is throwing error...")
Put those files into the same directory and run it with python main.py.
You can comment out the raised error in sub.py to see the main script terminate normally.
On a larger note, this example is not meant to suggest that this is a particularly robust way to achieve what you need...
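If by "flag file" you mean a small status file that other tools can watch, a minimal sketch could wrap the same loop; the file name status.flag and the exact status strings are assumptions, not something from the question:
import subprocess
import sys
from time import sleep

FLAG_FILE = 'status.flag'  # hypothetical name for the flag file
file_path = 'sub.py'
restart_timer = 2

def write_flag(status):
    # overwrite the flag file with the current status
    with open(FLAG_FILE, 'w') as f:
        f.write(status)

def main():
    write_flag('Start')
    while True:
        write_flag('Running')
        try:
            subprocess.run([sys.executable, file_path], check=True)
        except subprocess.CalledProcessError:
            write_flag('Stop')  # the child crashed; record it and restart
            sleep(restart_timer)
        else:
            write_flag('Stop')  # the child finished normally; stop monitoring
            break

if __name__ == '__main__':
    main()
Another process can then read status.flag to see whether the monitored script is starting, running, or stopped.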

Packaging Python modules containing multiprocessing code

I am trying to package my Python project into an executable using PyInstaller. The main module contains multiprocessing code. When I run the executable, only the lines of code prior to the multiprocessing part get executed, again and again. It neither throws an exception nor exits the program.
Code in main module:
from Framework.ExcelUtility import ExcelUtility
from Framework.TestRunner import TestRunner
import concurrent.futures

class Initiator:

    def __init__(self):
        self.exec_config_dict = {}
        self.test_list = []
        self.test_names = []
        self.current_test_set = []

    def set_first_execution_order(self):
        # Code

    def set_subsequent_execution_order(self):
        # Code

    def kick_off_tests(self):
        '''Method to do multi-process execution'''
        if(__name__ == "__main__"):
            with concurrent.futures.ProcessPoolExecutor(max_workers=int(self.exec_config_dict.get('Parallel'))) as executor:
                for test in self.current_test_set:
                    # This line is not being executed from the exe file.
                    executor.submit(TestRunner().runner, test)

initiator = Initiator()
initiator.get_run_info()
initiator.set_first_execution_order()
initiator.kick_off_tests()

while len(initiator.test_list) > 0:
    initiator.set_subsequent_execution_order()
    try:
        initiator.kick_off_tests()
    except BaseException as exception:
        print(exception)
From the problem description I'm assuming you are using MS Windows, and that the main module is not named __main__.py.
In that case, multiprocessing has some special guidelines:
Make sure that the main module can be safely imported by a new Python interpreter without causing unintended side effects (such as starting a new process).
and
Instead one should protect the “entry point” of the program by using if __name__ == '__main__'
So, change the last part of your main module like this:
from multiprocessing import freeze_support

def kick_off_tests(self):
    '''Method to do multi-process execution'''
    with concurrent.futures.ProcessPoolExecutor(max_workers=int(self.exec_config_dict.get('Parallel'))) as executor:
        for test in self.current_test_set:
            executor.submit(TestRunner().runner, test)

if __name__ == '__main__':
    freeze_support()
    initiator = Initiator()
    initiator.get_run_info()
    initiator.set_first_execution_order()
    initiator.kick_off_tests()
    while len(initiator.test_list) > 0:
        initiator.set_subsequent_execution_order()
        try:
            initiator.kick_off_tests()
        except BaseException as exception:
            print(exception)
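For reference, here is a minimal, self-contained sketch of the same pattern (the worker function and test names are just placeholders, not part of the original project) that can be frozen and run to confirm that the __main__ guard and freeze_support() behave as expected:
import concurrent.futures
from multiprocessing import freeze_support

def run_test(test):
    # stand-in for TestRunner().runner(test)
    return "finished %s" % test

if __name__ == '__main__':
    freeze_support()  # no-op unless the program has been frozen (e.g. by PyInstaller)
    with concurrent.futures.ProcessPoolExecutor(max_workers=2) as executor:
        futures = [executor.submit(run_test, t) for t in ('t1', 't2', 't3')]
        for future in concurrent.futures.as_completed(futures):
            print(future.result())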

How to stop an imported Python script from running, so it only runs when called in code

When I import one of my Python scripts and run my current script, it runs and displays the output of the imported script, which is really unusual behaviour. I have only imported it in my script and have not called any of its functions in my main code. How can I prevent this from happening?
If I pass the -d flag to my main script, it runs the usual code in my main script only.
If I pass the -t flag to my main script, it runs the code from the imported Python script only.
main.py
import os
import argparse
import functions as funcs
import generate_json as gen_json
from test_compare_filesets import tester as imptd_tester

def get_json_location():
    path = os.getcwd() + '/Testdata'
    return path

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("-d", "--export-date", action="store_true", required=True)
    parser.add_argument("-t", "--execute-test", action="store_true", required=False)
    args = parser.parse_args()
    date = args.export_date
    testt = args.execute_test

    yml_directory = os.listdir('yaml/')
    yml_directory.remove('export_config.yaml')

    with open('dates/' + date + '.json', 'w') as start:
        start.close()

    for yml in yml_directory:
        print("Running export for " + yml)
        yml_file = os.path.join('yaml/' + yml)
        json_path = get_json_location()
        yml = funcs.read_config(yml_file)
        data_folder = date
        gen_json.generate_data_report(json_path, yml, data_folder)

if __name__ == '__main__':
    main()
test_files.py
import generate_report as generate_reportt

def compare_filesets(file_names, previous_data, current_data):
    for item in file_names:
        print(item + generate_reportt.compare(previous_data.get(item), current_data.get(item)) + "\n")

def test_filesets():
    '''
    Test for scenario 1
    '''
    dict_1 = generate_reportt.read_file_into_dict("dates/2018-01-01.json")
    dict_2 = generate_reportt.read_file_into_dict("dates/2018-01-02.json")
    print(" Test 1 ")
    compare_filesets(file_names=['a.json', 'b.json', 'c.json'],
                     previous_data=dict_1,
                     current_data=dict_2
                     )
This is why using the statement:
if __name__ == "__main__":
    main()
is very important. You will want to add this to the script you're importing, and put all of the code that currently runs at import time inside a main() function in that script. The __name__ variable of a script changes depending on whether the script is imported or not. If you run a script directly, its __name__ variable will be "__main__". However, if it is imported, __name__ is set to the module's name instead, and therefore everything inside the main() function of that script will not be run.
For more information: What does if __name__ == "__main__": do?
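As a sketch, if the imported script (test_compare_filesets in the question) runs code at module level, it could be restructured like this; the body of tester() is just a placeholder, since the module-level code isn't shown in the question:
def tester():
    # placeholder for the code that currently runs at import time
    print("running comparison tests...")

def main():
    tester()

if __name__ == "__main__":
    # executed only when the file is run directly, not when it is imported
    main()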

Returning a value from one Python script to another

I have two files: script1.py and script2.py. I need to invoke script2.py from script1.py and return the value from script2.py back to script1.py. But the catch is that script1.py actually runs script2.py through os.system.
script1.py:
import os
print(os.system("script2.py 34"))
script2.py:
import sys

def main():
    x = "Hello World" + str(sys.argv[1])
    return x

if __name__ == "__main__":
    x = main()
As you can see, I am able to get the value into script2.py, but not back to script1.py. How can I do that? NOTE: script2.py HAS to be called as if it were a command-line execution. That's why I am using os.system.
OK, if I understand you correctly, you want to:
pass an argument to another script
retrieve the output from that script back in the original caller
I recommend using the subprocess module. The easiest way is the check_output() function:
Run command with arguments and return its output as a byte string.
Sample solution:
script1.py:
import sys
import subprocess

s2_out = subprocess.check_output([sys.executable, "script2.py", "34"])
print(s2_out)
script2.py:
import sys

def main(arg):
    print("Hello World" + arg)

if __name__ == "__main__":
    main(sys.argv[1])
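Note that check_output() returns the child's stdout as a byte string; under Python 3 you would typically decode it before using it, for example:
s2_out = subprocess.check_output([sys.executable, "script2.py", "34"])
print(s2_out.decode().strip())  # -> Hello World34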
The recommended way to return a value from one python "script" to another is to import the script as a Python module and call the functions directly:
import another_module
value = another_module.get_value(34)
where another_module.py is:
#!/usr/bin/env python

def get_value(*args):
    return "Hello World " + ":".join(map(str, args))

def main(argv):
    print(get_value(*argv[1:]))

if __name__ == "__main__":
    import sys
    main(sys.argv)
You can both import another_module and run it as a script from the command line. If you don't need to run it as a command-line script, you can remove the main() function and the if __name__ == "__main__" block.
See also, Call python script with input with in a python script using subprocess.

How to get the caller script name

I'm using Python 2.7.6 and I have two scripts:
outer.py
import sys
import os
print "Outer file launching..."
os.system('inner.py')
which calls inner.py:
import sys
import os
print "[CALLER GOES HERE]"
I want the second script (inner.py) to print the name of the caller script (outer.py).
I can't pass inner.py a parameter with the name of the first script, because I have tons of caller/called scripts and I can't refactor all the code.
Any idea?
One idea is to use psutil.
#!env/bin/python
import psutil
me = psutil.Process()
parent = psutil.Process(me.ppid())
grandparent = psutil.Process(parent.ppid())
print grandparent.cmdline()
This is of course dependent on how you start outer.py.
This solution is OS independent.
On Linux you can get the parent process id and then the caller's name like so:
p1.py
import os
os.system('python p2.py')
p2.py
import os

pid = os.getppid()
# arguments in /proc/<pid>/cmdline are separated by NUL bytes, not spaces
cmd = open('/proc/%d/cmdline' % (pid,)).read()
caller = ' '.join(cmd.split('\0')[1:]).strip()
print caller
Running python p1.py will print p1.py.
I imagine you can do similar things in other operating systems as well.
Another, slightly shorter version for Unix only. Note that /proc/<pid>/exe points at the parent's executable (the Python interpreter itself, not the calling script), and that os.system() only prints the result rather than returning it, so capture the output with os.popen() instead:
import os
parent = os.popen('readlink -f /proc/%d/exe' % os.getppid()).read().strip()
If applicable to your situation you could also simply pass an argument that lets inner.py differentiate:
import sys
import os
print "Outer file launching..."
os.system('inner.py launcher')
inner.py:
import sys
import os

try:
    # sys.argv[0] is the script's own name; the extra argument is sys.argv[1]
    if sys.argv[1] == 'launcher':
        print 'outer.py called us'
except IndexError:
    pass
