Let's say I have a main.py script that does different stuff inside a while loop, among other things calling a binary via a terminal command with subprocess.Popen(). The binary is meant to control an external piece of hardware. At execution it first sets the configuration and then enters an infinite while loop, waiting for the user to input control commands or exit, both via the terminal.
The goal is to create the subprocess in main.py, wait a few seconds until the configuration is done (for example, 10 seconds), and then keep it alive, waiting until a command needs to be sent, while the main code continues with the 'other stuff'.
After the while loop in main.py is done, the subprocess should be killed by sending 'exit'.
Can this be done without the parent and child processes interfering with each other?
The current approach:
main.py
import subprocess
from time import sleep

# initiate subprocess to set configuration
p = subprocess.Popen('path/to/binary',
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     stdin=subprocess.PIPE,
                     universal_newlines=True)

# sleep 10 seconds until setting is done
sleep(10)
while True:
    # do stuff

    # send command to subprocess
    (stdout, stderr) = p.communicate('cmd_to_subprocess\n')

    # continue with stuff

# while loop finished
# kill subprocess
p.communicate('exit\n')
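A common alternative, since .communicate() waits for the process to exit and can only be called once, is to write each command to p.stdin directly and only wait for the process at the end. A minimal sketch, assuming the binary reads newline-terminated commands from stdin (the path and command names are placeholders taken from the question):

import subprocess
import time

# placeholder path; the real binary and its commands come from the question
p = subprocess.Popen('path/to/binary',
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     universal_newlines=True)

time.sleep(10)  # give the binary time to finish its configuration phase

def send_command(proc, cmd):
    # write one command without closing stdin, so the child keeps running
    proc.stdin.write(cmd + '\n')
    proc.stdin.flush()

while True:
    # do stuff
    send_command(p, 'cmd_to_subprocess')
    # continue with stuff
    break  # stand-in for the real loop condition

# ask the binary to shut itself down, then reap it
send_command(p, 'exit')
try:
    p.wait(timeout=10)          # timeout= needs Python 3.3+; poll() in a loop otherwise
except subprocess.TimeoutExpired:
    p.kill()

One caveat: if the binary writes a lot to stdout/stderr, read or drain those pipes (for example from a separate thread), or they can fill up and block the child.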
Related
I am running an HPC simulation on Amazon AWS with spot instances. Spot instances can be terminated by AWS with 2 minutes' notice. To check for termination you need to execute curl on a specific URL every 5 seconds. It is a simple request that returns a JSON with the termination time if AWS has initiated the termination process.
Currently I am using subprocess to run the script:
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                     bufsize=1, universal_newlines=True)
for line in p.stdout:
    if "Floating point exception" in line:
        print(line.rstrip())
    log.write(line)
    log.flush()
p.wait()
status = p.returncode
print(status)
Is it possible to add a callback that is called every 5 seconds?
The callback would check the output of the curl command; if it finds a termination time, it would set a flag in a file and exit. The main process will then end gracefully because of this flag.
To clarify, I do not want to interact with or kill the main process. This particular process (not written by me) continuously checks the content of a file and exits gracefully if it finds a specific keyword. The callback would set this keyword.
Is this the right approach?
Write a function that runs the following loop (sketched below):
launches curl in a subprocess and processes the returned JSON.
If the sim should terminate, it writes the required file and returns.
Otherwise sleep for 4.5 minutes.
Start that function in a threading.Thread before you launch the simulation.
You'd have to test what happens to your for line in p.stdout loop if the program running in p exits.
Maybe it will generate an exception. Or you might want to check p.poll() in the loop to handle that gracefully.
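Putting that together, here is a minimal sketch of such a watcher thread, using the 5-second interval from the question. The metadata URL, flag file name, and keyword are assumptions, so substitute whatever your simulation actually checks:

import json
import subprocess
import threading
import time

TERMINATION_URL = 'http://169.254.169.254/latest/meta-data/spot/instance-action'
FLAG_FILE = 'stop.flag'    # file the simulation watches (assumption)
KEYWORD = 'TERMINATE'      # keyword it looks for (assumption)

def watch_for_termination(interval=5):
    while True:
        result = subprocess.run(['curl', '-s', TERMINATION_URL],
                                stdout=subprocess.PIPE,
                                universal_newlines=True)
        try:
            info = json.loads(result.stdout)
        except ValueError:
            info = None   # empty/404 body: no termination scheduled yet
        if info and 'time' in info:
            with open(FLAG_FILE, 'w') as f:
                f.write(KEYWORD)
            return        # flag written, nothing more to do
        time.sleep(interval)

# start the watcher before launching the simulation
watcher = threading.Thread(target=watch_for_termination, daemon=True)
watcher.start()
# ... then run p = subprocess.Popen(cmd, ...) and read p.stdout as before ...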
I am running a script that launches a program via cmd and then, while the program is open, checks the program's log file for errors. If there are any, it should close the program.
I cannot use the taskkill command since I don't know the PID of the process, and the image name is the same as that of other processes that I don't want to kill.
Here is a code example:
import os, multiprocessing, time

def runprocess():
    os.system('"notepad.exe"')

if __name__ == '__main__':
    process = multiprocessing.Process(target=runprocess, args=[])
    process.start()
    time.sleep(5)
    # Continuously checking if errors in log file here...
    process_has_errors = True  # We suppose an error has been found for our case.
    if process_has_errors:
        process.terminate()
The problem is that I want the notepad window to close. It seems like the terminate() method will simply disconnect the process without closing all its tasks.
What can I do to make sure all pending tasks in a process are ended when terminating it, instead of simply disconnecting the process from those tasks?
You can use taskkill, but you have to use the /T (and maybe /F) switch so that all child processes of the cmd process are killed too. You get the process id of the cmd task via process.pid.
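As a minimal sketch of that suggestion, reusing the process object from the question's example (/T takes the whole child tree down with it, /F forces termination):

import subprocess

if process_has_errors:
    # kill the helper process and everything it spawned (cmd -> notepad)
    subprocess.call(['taskkill', '/PID', str(process.pid), '/T', '/F'])
    process.join()  # reap the multiprocessing.Process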
You could use a system call if you know the name of the process:
import os
...
if process_has_errors:
    processName = "notepad.exe"
    process.terminate()
    os.system(f"TASKKILL /F /IM {processName}")
I am using Python 2.7.8 to coordinate and automate the running of several applications many times over in a Windows environment. During each run, I use subprocess.Popen to launch several child processes, passing subprocess.PIPE for stdin and stdout to each as follows:
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
where cmd is the list of arguments.
The script waits for an external trigger to know when a given run is done, and then terminates each application it is currently running by writing a string to the stdin of each Popen object. The applications read this string and perform their own graceful shutdown (which is why I don't simply call kill() or terminate()).
# Try to shutdown process
timeout = 5
try:
    if proc.poll() is None:
        proc.stdin.write(cmd)
        # Wait to see if proc shuts down gracefully
        while timeout > 0:
            if proc.poll() is not None:
                break
            else:
                time.sleep(1)
                timeout -= 1
        else:
            # Kill it the old fashioned way
            proc.kill()
except Error:
    pass  # Process as necessary...
Once the applications are complete, I'm left with a Popen object. If I inspect the stdin or stdout members of that object, I get something like the following:
<open file '<fdopen>', mode 'wb' at 0x0277C758>
The script then loops to perform the next run, relaunching the necessary applications.
My question is: do I need to explicitly call close() on the stdin and stdout handles each time (for example, in a finally clause around the code above) in order to avoid leaks? I am wondering because the loop can run hundreds or even thousands of times during a given script.
I've looked through the subprocess.py code, but the file handles for the pipes are created by an apparently Windows-only call in the _subprocess module, so I can't get any further detail.
The pipes might eventually be closed during garbage collection, but you should not rely on that; close the pipes explicitly.
from threading import Timer

def kill_process(process):
    if process.poll() is None:  # don't send the signal unless it seems necessary
        try:
            process.kill()
        except OSError:  # ignore
            pass

# shutdown process in `timeout` seconds
t = Timer(timeout, kill_process, [proc])
t.start()
proc.communicate(cmd)
t.cancel()
The .communicate() method closes the pipes and waits for the child process to exit.
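If you keep the manual poll/kill loop from the question instead of switching to .communicate(), a minimal sketch of releasing the handles explicitly after each run (assuming proc has already exited or been killed):

# release the pipe handles explicitly instead of waiting for garbage collection
for pipe in (proc.stdin, proc.stdout, proc.stderr):
    if pipe is not None:
        pipe.close()
proc.wait()  # reap the child so the next run starts from a clean state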
I'm running a tool via Python in cmd. For each sample in a given directory I want that tool to do something. However, when I use process = subprocess.Popen(command) in the loop, the command does not wait until it is finished, resulting in 10 prompts at once. And when I use subprocess.Popen(command, stdout=subprocess.PIPE), the prompt remains black and I can't see the progress, although it does wait until the command is finished.
Does anyone know a way to call an external tool via Python in cmd that waits until the command is finished and is able to show the progress of the tool in the cmd window?
#main.py
for sample in os.listdir(os.getcwd()):
    if ".fastq" in sample and '_R1_' in sample and "Temp" not in sample:
        print time.strftime("%H:%M:%S")
        DNA_Bowtie2.DNA_Bowtie2(os.getcwd()+'\\'+sample+'\\'+sample)

#DNA_Bowtie2.py
# Run Bowtie2 command and wait for process to be finished.
process = subprocess.Popen(command, stdout=subprocess.PIPE)
process.wait()
process.stdout.read()
Edit: command is a Perl or Java command. With the above setup I cannot see the tool's output, since the prompt (Perl window or Java window) remains black.
It seems like your subprocess forks; otherwise there is no way wait() would return before the process has finished.
The order is important here: first read the output, then wait.
If you do it this way:
process.wait()
process.stdout.read()
you can experience a deadlock if the pipe buffer is completely full: the subprocess blocks writing to stdout and never reaches its end, while your program blocks on wait() and never reaches the read().
Do instead
process.stdout.read()
process.wait()
which will read until EOF.
This only applies if you want the stdout of the process at all.
If you don't want that, omit the stdout=PIPE stuff. Then the output is directed to that prompt window, and you can omit process.stdout.read() as well.
Normally, process.wait() should then prevent 10 instances from running at once. If that doesn't work, I don't know why not...
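A minimal sketch of both options, where command stands for the same Bowtie2 invocation as in the question:

import subprocess
import sys

# Option 1: capture the tool's output, echo it line by line, then wait.
process = subprocess.Popen(command, stdout=subprocess.PIPE,
                           universal_newlines=True)
for line in process.stdout:
    sys.stdout.write(line)   # progress shows up as soon as it is produced
process.wait()

# Option 2: don't redirect at all; the output goes straight to the console
# window, and wait() still blocks until the tool has finished.
# process = subprocess.Popen(command)
# process.wait()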
I wrote a Twitter-like service, and I want to do some stress testing on it with Python.
I have a client program, called "client".
I want to write a script that will start several processes of the "client" program, send a few messages, wait a few seconds, and exit.
What I wrote is:
p = subprocess.Popen(['client','c1','localhost','4981'],stdin=subprocess.PIPE)
Now I can't call the communicate method, because it waits for EOF but the process isn't over yet.
Calling stdin.flush doesn't seem to work either.
Any tips on how to do this?
(I don't have to do this in Python; if there's a way to do this with a bash script, that's also OK.)
You can use a bash loop to run several clients and put them in the background. If you need to communicate with one, just bring it to the foreground using fg and then put it back in the background using bg.
Call p.stdin.close() to signal that there are no more messages:
#!/usr/bin/python
import time
from subprocess import Popen, PIPE

# start several processes of the "client" program
processes = [Popen(['client', 'c1', 'localhost', '4981'], stdin=PIPE)
             for _ in range(5)]

# send a few messages
for p in processes:
    print >>p.stdin, message
    p.stdin.close()

# wait a few seconds (remove finished processes while we wait)
for _ in range(3):
    for p in processes[:]:
        if p.poll() is not None:
            processes.remove(p)
    time.sleep(1)

# and will exit (kill unfinished subprocesses)
for p in processes:
    if p.poll() is None:
        p.kill()
        p.wait()
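The snippet above is Python 2 (print >>p.stdin). On Python 3 the message-sending step would look roughly like this, with message again standing for whatever text you want to send:

from subprocess import Popen, PIPE

processes = [Popen(['client', 'c1', 'localhost', '4981'],
                   stdin=PIPE, universal_newlines=True)
             for _ in range(5)]

for p in processes:
    p.stdin.write(message + '\n')  # `message` is a placeholder, as above
    p.stdin.close()                # closing stdin signals EOF to the client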