I have a main Python script, main.py, which calls several subscripts.
My first subscript, subscript1.py, runs a few lines of code and then, at the end, opens an external program (PuTTY) using subprocess. The program being opened is a data monitor which I want to keep open the whole time.
I want to return to main.py so that subscript2.py can be run.
Problem: the Python code doesn't resume until the external program opened by subprocess is closed. How can I keep the subprocess open but carry on with my Python code?
main.py:
import subprocess
subprocess.call(['python', 'subscript1.py'])
subprocess.call(['python', 'subscript2.py'])
subscript1.py:
import subprocess
prog_path = 'C:/Programs/PUTTY.exe'
load_config = 'config_to_load'
... lines of code to check for a condition
if outcome_value == 1:
    # pass the command and its arguments as a single list
    subprocess.run([prog_path, 'load', load_config])
else:
    print("error message")
If I were you, I would use multiprocessing.Pool:
from multiprocessing import Pool
import time
import subprocess
def do_wait(_):
    subprocess.call(['python', '-c', "import time; time.sleep(1)"])

st = time.time()
with Pool(5) as p:
    print(p.map(do_wait, [1, 2, 3]))
diff = time.time() - st
print(f"total : {diff} sec")
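To make the blocking behavior concrete: subprocess.call() and subprocess.run() wait for the child to exit, while Popen starts it and returns immediately. A minimal sketch of that difference, using a sleeping Python child in place of PuTTY:

```python
import subprocess
import sys
import time

start = time.time()
# Popen launches the child and returns at once; call()/run() would
# block here until the child finished its one-second sleep.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(1)"])
elapsed = time.time() - start
print(elapsed < 0.5)  # True: control returned before the child finished
proc.wait()           # clean up the child when we are done
```

The parent is free to run other work (such as launching subscript2.py) between the Popen call and the final wait().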
I have a Python script running in the console, and I want to create another console for printing important messages, without running another Python script to do that.
I first tried to use win32console.AllocConsole() directly, but it failed with "Access is denied"
(seemingly because one process can attach to at most one console, according to the docs).
So I tried creating a new process using multiprocessing:
import sys, os
import win32api, win32con, win32console
import multiprocessing
def ShowConsole():
    win32console.FreeConsole()
    win32console.AllocConsole()
    sys.stdout = open("CONOUT$", "w")
    sys.stderr = open("CONOUT$", "w")
    print("Test")
    os.system("pause")
if __name__ == '__main__':
    p = multiprocessing.Process(target=ShowConsole)
    p.start()
But when I ran the code in PowerShell, it exited immediately with no message, and no new console was created.
None of the possible solutions I found on Stack Overflow worked for me. What should I do?
Update: It turns out that multiprocessing.Process fails to call the ShowConsole function. I used multiprocessing.dummy.Process as an alternative, and it works as expected.
The reason why multiprocessing.Process fails to call the target is still unclear.
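For context on the update: multiprocessing.dummy.Process is a thin wrapper around threading.Thread, so its target runs inside the same process, sharing memory (and, on Windows, the console) with the parent. A minimal, platform-neutral sketch of that difference:

```python
import multiprocessing.dummy

results = []

def work():
    # dummy.Process is really a thread, so it shares memory with the
    # parent; a real multiprocessing.Process would get its own copy
    # of `results` and this append would not be visible here.
    results.append("ran in the same process")

p = multiprocessing.dummy.Process(target=work)
p.start()
p.join()
print(results)  # ['ran in the same process']
```

This also means dummy.Process cannot give you a separate console of its own; it only sidesteps whatever prevented the spawned process from running the target.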
There's nothing wrong with your example above; it pops up the console when I run it. I added a "hello" in the main section to differentiate.
But since you want to pass values from the first console to the second, here's a better example: use put/get on a Queue to pass information from the first console to the second.
import win32console
import multiprocessing
import time
def secondconsole(output):
    win32console.FreeConsole()
    win32console.AllocConsole()
    while True:
        print(output.get())
if __name__ == "__main__":
    output = multiprocessing.Queue()
    multiprocessing.Process(target=secondconsole, args=[output]).start()
    while True:
        print("Hello World")
        output.put("Hello to second console")  # here you provide the data to the second console
        time.sleep(3)  # sleep for 3 seconds, just for testing
It looks like the issue might be with the way you are trying to redirect output to the new console via sys.stdout and sys.stderr. Try using the following code instead:
import sys, os, msvcrt
import win32api, win32con, win32console
import multiprocessing

def ShowConsole():
    win32console.FreeConsole()
    win32console.AllocConsole()
    # GetStdHandle returns a Win32 handle, not a C file descriptor;
    # convert it with msvcrt.open_osfhandle before calling os.dup2.
    out_fd = msvcrt.open_osfhandle(int(win32api.GetStdHandle(win32api.STD_OUTPUT_HANDLE)), os.O_TEXT)
    err_fd = msvcrt.open_osfhandle(int(win32api.GetStdHandle(win32api.STD_ERROR_HANDLE)), os.O_TEXT)
    os.dup2(out_fd, sys.stdout.fileno())
    os.dup2(err_fd, sys.stderr.fileno())
    print("Test")
    os.system("pause")
if __name__ == '__main__':
    p = multiprocessing.Process(target=ShowConsole)
    p.start()
Is there a way for child.py to send stdout "on the fly" while it is running?
Or does main.py need to wait for child.py to terminate?
With these scripts, main.py waits about 5 seconds and then prints all the lines at once.
I want process.stdout.readline() to get the latest print from child.py while child.py is still running.
main.py
import subprocess
import time
process = subprocess.Popen(["./child.py"], stdout=subprocess.PIPE)
i = 1
while i < 5:
    print(process.stdout.readline())  # to print, child.py needs to terminate first
    time.sleep(1)
    i += 1
child.py
#!/usr/bin/env python3
# coding=utf-8
import sys
import time
def run():
    i = 1
    while i < 5:
        time.sleep(1)
        print(f'ok {i}')
        i += 1
if __name__ == "__main__":
    run()
In child.py you wrote this:
print(f'ok {i}')
Replace it with this:
print(f'ok {i}', flush=True)
When testing interactively, isatty() returns True,
so child.py defaults to line-buffered output:
each line appears immediately.
When running as a subprocess connected to a pipe,
stdout defaults to block buffering instead.
Use flush=True (or an explicit flush() call) to defeat this.
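An alternative to editing child.py is to force unbuffered output from the parent side with Python's -u flag. A small self-contained sketch, using an inline child via -c rather than the original child.py:

```python
import subprocess
import sys

# The -u flag forces the child's stdout to be unbuffered, so each line
# reaches the pipe as soon as it is printed, not when the child exits.
child_code = (
    "import time\n"
    "for i in range(3):\n"
    "    print(f'ok {i}')\n"
    "    time.sleep(0.2)\n"
)

proc = subprocess.Popen([sys.executable, "-u", "-c", child_code],
                        stdout=subprocess.PIPE, text=True)
first = proc.stdout.readline().strip()
print(first)  # ok 0 -- arrives while the child is still sleeping
proc.wait()
```

Setting the environment variable PYTHONUNBUFFERED=1 for the child has the same effect when you cannot change how it is launched.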
I'm relatively new to Python, so please forgive my beginner-level understanding!
I am working to create a kind of flag file. Its job is to monitor a Python executable: the flag file runs constantly and prints "Start" when the executable starts, "Running" while it runs, and "Stop" when it stops or crashes. If a crash occurs, I want it to be able to restart the script. So far I have this down for the restart:
from subprocess import run
from time import sleep
# Path and name to the script you are trying to start
file_path = "py"
restart_timer = 2
def start_script():
    try:
        # Make sure the 'python' command is available
        run("python " + file_path, check=True)
    except:
        # Script crashed, let's restart it!
        handle_crash()

def handle_crash():
    sleep(restart_timer)  # Restarts the script after 2 seconds
    start_script()

start_script()
How can I implement this along with a flag file?
Not sure what you mean by "flag", but this minimally achieves what you want.
Main file main.py:
import subprocess
import sys
from time import sleep
restart_timer = 2
file_path = 'sub.py' # file name of the other process
def start():
    try:
        # sys.executable -> same python executable
        subprocess.run([sys.executable, file_path], check=True)
    except subprocess.CalledProcessError:
        sleep(restart_timer)
        return True
    else:
        return False

def main():
    print("starting...")
    monitor = True
    while monitor:
        monitor = start()

if __name__ == '__main__':
    main()
Then the process that gets spawned, called sub.py:
from time import sleep
sleep(1)
print("doing stuff...")
# comment out to see change
raise ValueError("sub.py is throwing error...")
Put those files into the same directory and run it with python main.py.
You can comment out the raising of the error to see the main script terminate normally.
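To connect this back to the original "flag file" question: a minimal sketch in which the monitor records "Start"/"Running"/"Stop" in a status file around the subprocess.run call. The file name status.flag and the inline child script are made up for illustration:

```python
import subprocess
import sys
from pathlib import Path

STATUS_FILE = Path("status.flag")  # hypothetical location for the flag file

def set_status(status):
    # The monitor records its view of the child process in the flag file.
    STATUS_FILE.write_text(status)

def run_once(code):
    set_status("Start")
    try:
        set_status("Running")
        # check=True raises CalledProcessError if the child crashes,
        # which is where a restart loop could be triggered.
        subprocess.run([sys.executable, "-c", code], check=True)
    finally:
        set_status("Stop")

run_once("print('child ran')")  # simulate the monitored script inline
final_status = STATUS_FILE.read_text()
print(final_status)  # Stop
STATUS_FILE.unlink()
```

Another process (or a human) can then poll the flag file to see the current state, which is what distinguishes this from the restart-only loop above.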
On a larger note, this example is not to say that this is a good way to achieve the robustness you need...
I am having trouble logging everything that happens in a module called via multiprocessing.
Script: mainScript.py
import multiprocessing
python_file = 'a_different_python_file'  # placeholder: name of the other module, without .py

def main():
    module = __import__(python_file)
    p = multiprocessing.Process(target=getattr(module, 'main'), args=())
    p.start()
    step_timeout = 20
    p.join(step_timeout)
Script: A different python file
import os
def main():
    print('I am a different python script')
    # os.system returns an exit code; use os.popen to read the output
    print(os.popen('dir').read())
    os.system("call C:\\test.bat")
    return
I am having trouble logging the output from the batch files that are called as well as the result of os.system(command).read().
I know you can log everything that run through a python script doing something like this...
python mainScript.py >> logFile.txt
Is it possible I can do something like this with the multiprocessing output?
p = multiprocessing.Process(target=getattr(module, 'main'), args=()) >> logFile.txt
If so, will it also capture the output of the batch scripts?
Is it possible to redirect everything that is printed in my called process to the main process?
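One hedged sketch of that redirection: instead of os.system, whose output bypasses Python's stdout entirely, capture the child's output with subprocess and write it to the log file yourself. The log path and echo command below are illustrative:

```python
import subprocess

LOG_FILE = "logFile.txt"  # illustrative log path

def run_and_log(cmd):
    # capture_output collects the child's stdout and stderr, including
    # anything a shell command or batch script prints; os.system would
    # send that output straight to the terminal instead.
    result = subprocess.run(cmd, capture_output=True, text=True, shell=True)
    with open(LOG_FILE, "a") as log:
        log.write(result.stdout)
        log.write(result.stderr)
    return result.returncode

rc = run_and_log("echo hello from child")
print(open(LOG_FILE).read().strip())
```

If the target function itself does the printing, the same idea applies inside the spawned process: reassign sys.stdout to a file object at the top of the function, so everything it (and its subprocess calls, if given that file as stdout) produces lands in the log.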
I have a Linux script that I'm looking to automate through subprocess. Each subprocess invocation should run the script in one subdirectory of a parent directory, and each of these subprocesses should run in a separate thread.
The way my directory is organized is as follows:
/parent/p1
/parent/p2....and so on till
/parent/p[n]
The first part of my code aims to run the process across all the subdirectories (p1, p2, p3, etc.). It works fine for a fast process. However, many of my jobs need to run in the background, for which I usually use nohup and run them manually on a separate node. So every node in my terminal should run the same job on a different directory (p1, p2, p3, etc.). The latter part of my code (using threading) aims to achieve this, but what ends up happening is that every node runs the same process (p1, p1, p1, etc.): basically, my entire 'jobs' function is being passed through runSims when I want the jobs separated out over the threads. Would someone know how I could change the threading function to place a different job on each node?
import os
import sys
import subprocess
import os.path
import threading
#takes the argument: python FOLDER_NAME #ofThreads
#Example: python /parent 8
directory = sys.argv[1] #in my case input is /parent
threads = int(sys.argv[2]) #input is 8
category_name = directory.split('/')[-1] #splits parent as a word
folder_list = next(os.walk(directory))[1] #makes a list of subdirectories [p1,p2,p3..]
def jobs(cmd):
    for i in folder_list:
        f = open("/vol01/bin/dir/nohup.out", "w")
        cmd = subprocess.call(['nohup', 'python', 'np.py', '{0}/{1}'.format(directory, i)], cwd='/vol01/bin/dir', stdout=f)
    return cmd
def runSimThreads(numThreads):
    threads = []
    for i in range(numThreads):
        t = threading.Thread(target=jobs, args=(i,))
        threads.append(t)
        t.start()
    # Wait for all threads to complete
    main_thread = threading.currentThread()
    for t in threads:
        if t is main_thread:
            continue
        t.join()
runSimThreads(threads)
That can't be your code.
import os
import sys
import subprocess
import os.path
import threading
#takes the argument: python FOLDER_NAME #ofThreads
#Example: python /parent 8
threads = 8 #input is 8
...
...
for t in threads:
    print("hello")
--output:--
TypeError: 'int' object is not iterable
You are using the same variable names everywhere, and that is confusing you (or me?).
You also do this:
def jobs(cmd):
    for i in folder_list:
        f = open("/vol01/bin/dir/nohup.out", "w")
        cmd = "something"
You are overwriting your cmd parameter variable, which means that jobs() shouldn't have a parameter variable.
Response to comment1:
import threading as thr
import time
def greet():
    print("hello world")
t = thr.Thread(target=greet)
t.start()
t.join()
--output:--
hello world
import threading as thr
import time
def greet(greeting):
    print(greeting)
t = thr.Thread(target=greet, args=("Hello, Newman.",) )
t.start()
t.join()
--output:--
Hello, Newman.
Below is the equivalent of what you are doing:
import threading as thr
import time
def greet(greeting):
    greeting = "Hello, Jerry."
    print(greeting)
t = thr.Thread(target=greet, args=("Hello, Newman.",) )
t.start()
t.join()
--output:--
Hello, Jerry.
And anyone reading that code would ask, "Why are you passing an argument to the greet() function when you don't use it?"
I'm relatively new to python
Well, your code does this:
threads = 8
#Other irrelevant stuff here
for t in threads:
    print("hello")
and that will produce the error:
TypeError: 'int' object is not iterable
Do you know why?
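To answer the closing question concretely: threads is an int, so it is not iterable. And to put a different job on each thread, pass one folder per thread instead of looping over the whole folder_list inside jobs(). A minimal sketch with stand-in folder names:

```python
import threading

folder_list = [f"p{i}" for i in range(1, 9)]  # stand-ins for /parent/p1 .. /parent/p8
results = []
lock = threading.Lock()

def job(folder):
    # Each thread gets exactly one folder, instead of every thread
    # looping over the entire folder_list.
    with lock:
        results.append(f"processed {folder}")

threads = [threading.Thread(target=job, args=(f,)) for f in folder_list]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # 8
```

In the original code, the real work inside job() would be the subprocess.call invocation for that one folder; a lock (or one output file per folder) also avoids every thread writing to the same nohup.out.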