Proper use of os.wait()? - python

I am trying to solve an issue with automating a series of scripts used in my workplace. I am a beginner, so I apologise for what will most likely be an easy question (hopefully); I have read the documentation, but it didn't quite make sense to me.
Essentially I have a bash script that runs a python script and an R script that need to run in order. Currently, the R script begins before the python script has finished, and I have been told here that I cannot use the shell wait builtin, because my python script launches child processes and wait cannot wait on grandchild processes.
That's fine, so the solution offered was to make the python and R scripts wait on their own child processes so that, when they exit, the bash script can properly run in order. Unfortunately I cannot figure out the proper syntax for this in my python script.
Here's what I have:
cmd = "python %s/create_keyfile.py %s %s %s %s" %(input, input, input,
input, input)
print cmd
os.system(cmd)
cmd = "python %s/uneak_name_plus_barcode_v2.py %s %s %s %s" %(input,
input, input, input, input)
print cmd
os.system(cmd)
cmd = "python %s/run_production_mode.py %s %s %s %s %s" %(input, input,
input, input, input, input)
print cmd
os.system(cmd)
Where 'input' stands for the actual inputs in my code; I probably just can't share exactly what we are doing :)
So essentially I am trying to figure out the best way of having the whole script wait on these three scripts before exiting.

Use subprocess.check_call(), not os.system()
subprocess.check_call() will block your main Python script's execution until the called command has finished.
Documentation for check_call() here
The subprocess module should always be used instead of os.system() for subprocess management and execution.
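For instance, a minimal sketch of the question's three calls rewritten with check_call(); the script directory and argument values below are placeholders, since the real inputs were redacted:

import subprocess

script_dir = "/path/to/scripts"   # placeholder for the redacted input
args = ["a", "b", "c", "d"]       # placeholders for the redacted inputs

for name in ("create_keyfile.py",
             "uneak_name_plus_barcode_v2.py",
             "run_production_mode.py"):
    cmd = ["python", "%s/%s" % (script_dir, name)] + args
    print(cmd)
    # Blocks until the script exits; raises CalledProcessError on a
    # non-zero exit status.
    subprocess.check_call(cmd)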

Thank you to all that helped; here is what caused my dilemma, for anyone who finds this through a Google search. By inserting python -c "from time import sleep; sleep(30)" into my code, I determined that the first two python scripts were waiting as expected but the final one was not (the timer would trigger immediately after that script ran). It turns out that the third python script also called another small python script with an "&" at the end of it, which caused any command to wait on it to be ignored. Simply removing this "&" allowed all the code to run sequentially. – Michael Bates
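For anyone who wants to see that effect in isolation, a minimal sketch (slow_script.py is a hypothetical name): a trailing "&" makes the shell background the command, so os.system() returns immediately instead of waiting:

import os

# The shell backgrounds the child; os.system() returns at once.
os.system("python slow_script.py &")

# Without the "&", os.system() blocks until the child exits.
os.system("python slow_script.py")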

Related

Python script using subprocess to get PID and kill it acts weird when launched from outside the directory it sits in

Thank you in advance for the time you'll spend reading this question. I am learning Python and looked up a lot before asking here; please forgive the newbie question.
So I created this script in Python 3, using the subprocess module, to search for another python script's PID while only knowing the beginning of the script's name, and to terminate it nicely.
Basically I run python clocks on my LCD screen through a Raspberry Pi and I2C, and I terminate the script, clear the LCD and turn it off. This "off" script code is provided below.
The issue is that when I run it from the directory it sits in with:
python3 off.py
It works perfectly: getting, parsing and terminating the PID, then turning off the LCD display.
Ideally I want to trigger it through telegram-cli, because I did it in bash and it worked nicely; I find it to be a nice feature. In python it fails.
So I tested and it appears that when I try to launch it from another directory like this:
python3 ~/code/off.py
The grep subprocess returns more than the one PID it returns normally when launched from the directory the script resides in. For instance (with python3 -v):
kill: failed to parse argument: '25977
26044'
The second PID number is from a subprocess created by the script; I can't seem to find what it is, as it terminates when the script ends, but it defeats the script's initial purpose.
Any help in understanding what is happening here would be really appreciated.
I came this far, as shown below, from two ugly lines of bash mixed with a call to a dummy four-line python script, so I really feel I am getting close to a proper way of achieving my first real python script.
I tried to decompose the script line by line in the interpreter and could not reproduce the error; everything behaved as expected. I only get this double PID result when running the script from an outer location.
Thank you in advance for any helpful insight on how to understand what is happening!
#!/usr/bin/env python3
import subprocess
import I2C_LCD_driver
import string

# Defining variables for searched strings and string encoding
searched_process_name = 'lcd_'
cut_grep_out_of_results = 'grep'
result_string_encoding = 'utf-8'
mylcd = I2C_LCD_driver.lcd()
LCD_NOBACKLIGHT = 0x00
run = True

def kill_script():
    # Listing processes and getting the searched process
    ps_process = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
    grep_process = subprocess.Popen(["grep", "-i", searched_process_name], stdin=ps_process.stdout, stdout=subprocess.PIPE)
    # The .stdout.close() lines below allow the previous process to receive a SIGPIPE if the next process exits.
    ps_process.stdout.close()
    # Cleaning the result until only the PID number is returned in a string
    grep_cutout = subprocess.Popen(["grep", "-v", cut_grep_out_of_results], stdin=grep_process.stdout, stdout=subprocess.PIPE)
    grep_process.stdout.close()
    awk = subprocess.Popen(["cut", "-c", "10-14"], stdin=grep_cutout.stdout, stdout=subprocess.PIPE)
    grep_cutout.stdout.close()
    output = awk.communicate()[0]
    clean_output = output.decode(result_string_encoding)
    clean_output_no_new_line = clean_output.rstrip()
    clean_output_no_quote = clean_output_no_new_line.replace("'", '')
    PID = clean_output_no_quote
    # Terminating the LCD script process
    subprocess.Popen(["kill", "-9", PID])

while run:
    kill_script()
    # Cleaning and shutting off LCD screen
    mylcd.lcd_clear()
    mylcd.lcd_device.write_cmd(LCD_NOBACKLIGHT)
    break
I found out the reason for this weird behaviour. An error on my end:
I forgot that I had named some directories with a string containing the characters I was running grep -i against, which provoked the double result when running the script from outside its directory using its full path.
Turns out the script runs pretty well using subprocess.
So in the end, I renamed the scripts I wanted to terminate with disp_ rather than lcd_, and added shell=False to my subprocess calls to make sure there was no risk of unintentionally sending the output to bash while running the script.
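As an aside, pgrep -f can replace the whole ps/grep/cut pipeline, since it matches against the full command line and prints one PID per line; a minimal sketch, assuming the renamed disp_ prefix:

import subprocess

# pgrep exits with status 1 when nothing matches, which makes
# check_output() raise CalledProcessError here.
output = subprocess.check_output(["pgrep", "-f", "disp_"])
for pid in output.decode("utf-8").split():
    subprocess.call(["kill", "-9", pid])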

Subprocess in script doesn't work, when started manually it does

I have a script that reads from an MSSQL database and passes the fetched data to a subprocess of some.exe.
The data fetching works fine, but as soon as it is supposed to start proc = subprocess.Popen(["C:\\absolute\\path\\some.exe", fetched_data]) followed by proc.wait(), it seems to skip it and goes on to the next fetched_data. I also tried to use subprocess.call(["C:\\absolute\\path\\some.exe", fetched_data]).
If I start python in the console (windows cmd) and do the exact same thing it works.
Why does calling the subprocess in the script not work and if issued manually in the console it does?
edit: The problem was that the subprocess started in the script in turn used another.exe, which couldn't be found by the subprocess (as it used the Python path as its working directory). When started from the directory where some.exe and another.exe are, the script runs fine.
fetched_data is an additional argument, therefore:
proc = subprocess.call(["C:\\absolute\\path\\some.exe", fetched_data])
It's an argument LIST, not a string, which is what subprocess expects.
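Given the edit, the underlying fix can also be expressed in the call itself: run some.exe with its own directory as the working directory, so it can find another.exe. A sketch using the cwd parameter (paths are the question's placeholders, and fetched_data comes from the question's loop):

import subprocess

# cwd makes the child resolve relative paths (like another.exe)
# against the directory the executables live in.
proc = subprocess.call(["C:\\absolute\\path\\some.exe", fetched_data],
                       cwd="C:\\absolute\\path")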

OS system call to Sicstus hangs indefinitely using Python

I'm trying to write a proofchecking application that receives proofs from a user on a website and sends them through to a Prolog script to check their validity.
I'm using Django, Python 2.7 and Sicstus. In my server "view.py" file, I call a python script "checkProof.py", passing it the raw text form of the proof the user submits. Inside that file I have the following function:
def checkProof(pFile, fFile):
    p = subprocess.Popen(['/bin/bash', '-i', '-c', 'sicstus -l ProofServer/server/proofChecker.pl -- %s %s' % (pFile, fFile)],
                         stdout=subprocess.PIPE)
    p.communicate() # Hangs here.
proofChecker.pl receives a modified version of the proof (pFile), analyses it and outputs feedback into a feedback file (fFile). The Python script loops until the feedback file is generated, and returns this to the rest of the server.
The first time I call this function, everything works fine and I get the expected output. The second time I call this function, the program hangs indefinitely at "p.communicate()".
This means that, currently, only one proof can be checked using the application between server restarts. The server should be able to check an indefinite number of proofs between restarts.
Does anyone know why this is happening? I'd be happy to include additional information if necessary.
Update
Based on advice given below, I tried three different kinds of calls to try to determine where the problem lies. The first is what I'm trying to do already - calling Sicstus on my real proofchecking code. The second was calling a very simple Prolog script that writes a hardcoded output. The third was a simple Python script that does the same:
def checkProof(pFile, fFile):
    cmd1 = 'sicstus -l ProofServer/server/proofChecker.pl -- %s %s' % (pFile, fFile)
    cmd2 = 'sicstus -l ProofServer/server/tempFeedback.pl -- %s %s' % (pFile, fFile)
    cmd3 = 'python ProofServer/server/tempFeedback.py %s %s' % (pFile, fFile)
    p = subprocess.Popen(['/bin/bash', '-i', '-c', cmd3],
                         stdout=subprocess.PIPE)
    p.communicate() # Hangs here.
In all three cases, the application continues to hang on the second attempted call. This implies that the problem is not with calling Sicstus specifically, but with the way I'm calling programs in general. This is a bit reassuring, but I'm still not sure what I'm doing wrong.
I managed to fix this issue, eventually.
I think the issue was that appending the -i (interactive) flag to bash meant that it expected input, and when it didn't get that input it suspended the process on the second call. This is what was happening when trying to replicate the process with something simpler.
I got rid of the -i flag, and found that it now raised the error "/bin/bash: sicstus: command not found", even though sicstus is on my server's PATH and I can call it fine if I ssh into the server and call it directly. I fixed this by specifying the full path. I can now check proofs an indefinite number of times between server restarts, which is great. My code is now:
def checkProof(pFile, fFile):
    cmd = '/usr/local/sicstus4.2.3/bin/sicstus -l ProofServer/server/proofChecker.pl -- %s %s' % (pFile, fFile)
    p = subprocess.Popen(['/bin/bash', '-c', cmd])
    p.communicate()
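Since the binary's full path is now hard-coded, the bash wrapper is arguably redundant; an equivalent sketch that invokes sicstus directly with an argument list, under the same paths as above:

import subprocess

def checkProof(pFile, fFile):
    # No shell in between: sicstus is executed directly with its
    # arguments passed as a list.
    p = subprocess.Popen(["/usr/local/sicstus4.2.3/bin/sicstus",
                          "-l", "ProofServer/server/proofChecker.pl",
                          "--", pFile, fFile])
    p.communicate()  # waits for sicstus to exit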

Why does my script stop executing commands after calling an .EXE?

Here is the relevant code from a Python script where a few commands are executed to copy an executable file and then execute it:
import os
import subprocess

exe_file_path = os.getcwd() + r'\name_of_executable.exe'
temp_loc = os.environ['temp']
subprocess.Popen(r'copy %s %s' % (exe_file_path, temp_loc), shell=True)
exe_file_path = os.environ['temp'] + r'\name_of_executable.exe'
subprocess.Popen(r'start %s' % (exe_file_path), shell=True)
subprocess.Popen(r'del %s' % (exe_file_path), shell=True)
Currently, name_of_executable.exe only prints out text and then calls system("pause").
After the pause is executed, I push enter and I would assume the executable would close and the Python script would continue, but the last line of Python doesn't execute.
Is this because I'm using the TEMP folder? (I'm executing from a command prompt running as administrator.) How do I get the script to work?
All three programs will be started immediately, one after another, without any waiting in between. Call communicate() on each Popen object to wait for that program's termination.
Additionally, your use of format strings is unnecessarily dangerous. ['copy', exe_file_path, temp_loc] automatically escapes any strange characters in exe_file_path and temp_loc (and is easier to read).
By the way, Python has very good functions for copying and deleting files in shutil and os; there is no need to call shell programs for that.
And instead of concatenating strings to determine exe_file_path, you should use os.path.join (although this is not that important, since your program seems locked to Windows).
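Putting those suggestions together, a sketch of the same copy/run/delete sequence with no shell involvement (file names as in the question):

import os
import shutil
import subprocess

exe_file_path = os.path.join(os.getcwd(), "name_of_executable.exe")
temp_loc = os.environ["temp"]
dest = os.path.join(temp_loc, "name_of_executable.exe")

shutil.copy(exe_file_path, dest)   # replaces the shell "copy"
subprocess.call([dest])            # blocks until the .exe exits
os.remove(dest)                    # replaces the shell "del"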

Retrieving raw_input from a system-run script

I'm using the os.system command to call a python script.
example:
OS.System("call jython script.py")
In the script I'm calling, the following command is present:
x = raw_input("Waiting for input")
If I run script.py from the command line I can input data no problem; if I run it via the automated approach I get an EOFError. I've read in the past that this happens because the system expects a computer to be running it and therefore could never receive input data in this way.
So the question is how can I get python to wait for user input while being run in an automated way?
The problem is the way you run your child script. Since you use os.system(), the script's input channel is closed immediately and the raw_input() prompt hits an EOF (end of file). And even if that didn't happen, you wouldn't have a way to actually send any input text to the child, which I assume you'd want to do, given that you are using raw_input().
You should use the subprocess module instead.
import subprocess
from subprocess import PIPE
p = subprocess.Popen(["jython", "script.py"], stdin=PIPE, stdout=PIPE)
print p.communicate("My input")
Your question is a bit unclear. What is the process calling your Python script and how is it being run? If the parent process has no standard input, the child won't have it either.
