I'm trying to write a proofchecking application that receives proofs from a user on a website and sends them through to a Prolog script to check their validity.
I'm using Django, Python 2.7 and Sicstus. In my server "view.py" file, I call a Python script "checkProof.py", passing it the raw text form of the proof the user submits. Inside that file I have the following function:
def checkProof(pFile, fFile):
    p = subprocess.Popen(['/bin/bash', '-i', '-c',
                          'sicstus -l ProofServer/server/proofChecker.pl -- %s %s' % (pFile, fFile)],
                         stdout=subprocess.PIPE)
    p.communicate()  # Hangs here.
proofChecker.pl receives a modified version of the proof (pFile), analyses it and outputs feedback into a feedback file (fFile). The Python script loops until the feedback file is generated, and returns this to the rest of the server.
The first time I call this function, everything works fine and I get the expected output. The second time I call this function, the program hangs indefinitely at "p.communicate()".
This means that, currently, only one proof can be checked using the application between server restarts. The server should be able to check an indefinite number of proofs between restarts.
Does anyone know why this is happening? I'd be happy to include additional information if necessary.
Update
Based on advice given below, I tried three different kinds of calls to try to determine where the problem lies. The first was what I'm trying to do already: calling Sicstus on my real proofchecking code. The second was calling a very simple Prolog script that writes a hardcoded output. The third was a simple Python script that does the same:
def checkProof(pFile, fFile):
    cmd1 = 'sicstus -l ProofServer/server/proofChecker.pl -- %s %s' % (pFile, fFile)
    cmd2 = 'sicstus -l ProofServer/server/tempFeedback.pl -- %s %s' % (pFile, fFile)
    cmd3 = 'python ProofServer/server/tempFeedback.py %s %s' % (pFile, fFile)
    p = subprocess.Popen(['/bin/bash', '-i', '-c', cmd3],
                         stdout=subprocess.PIPE)
    p.communicate()  # Hangs here.
In all three cases, the application continues to hang on the second attempted call. This implies that the problem is not with calling Sicstus, but just with the way I'm calling programs in general. This is a bit reassuring but I'm still not sure what I'm doing wrong.
I managed to fix this issue, eventually.
I think the issue was that appending the -i (interactive) flag to the bash call meant that bash expected input, and when it didn't get that input it suspended the process on the second call. This is what was happening when I tried to replicate the problem with something simpler.
I got rid of the -i flag, and found that it now raised the error "/bin/bash: sicstus: command not found", even though sicstus is on my server's PATH and I can call it fine if I ssh into the server and call it directly. I fixed this by specifying the full path. I can now check proofs an indefinite number of times between server restarts, which is great. My code is now:
def checkProof(pFile, fFile):
    cmd = '/usr/local/sicstus4.2.3/bin/sicstus -l ProofServer/server/proofChecker.pl -- %s %s' % (pFile, fFile)
    p = subprocess.Popen(['/bin/bash', '-c', cmd])
    p.communicate()
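For what it's worth, once the full path to sicstus is known, the bash wrapper can be dropped entirely by passing an argument list straight to Popen. A minimal sketch, assuming the same install path as above:

import subprocess

def checkProof(pFile, fFile):
    # Invoke sicstus directly by absolute path; no shell, so no PATH
    # lookups or interactive job control involved.
    p = subprocess.Popen(['/usr/local/sicstus4.2.3/bin/sicstus',
                          '-l', 'ProofServer/server/proofChecker.pl',
                          '--', pFile, fFile])
    p.communicate()  # Blocks until sicstus exits.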
Related
Thank you in advance for the time you'll take to read this question. I am learning Python and I looked up a lot before asking here; please forgive me for the newbie question.
So I created this script in Python 3, using the subprocess module, to search for another Python script's PID while knowing only the beginning of the script's name, and to terminate it nicely.
Basically I run Python clocks on my LCD screen through a Raspberry Pi and I2C, and I terminate the script, clear the LCD and turn it off. This "off" script code is provided below.
The issue is that when I run it from the directory it sits in with a:
python3 off.py
It works perfectly: getting, parsing and terminating the PID, then turning off the LCD display.
Ideally I want to trigger it through telegram-cli, because I did this in bash and it worked nicely, and I find it to be a nice feature. In Python it fails.
So I tested and it appears that when I try to launch it from another directory like this:
python3 ~/code/off.py
The grep subprocess returns more than the one PID it normally returns when launched from the script's own directory. For instance (with python3 -v):
kill: failed to parse argument: '25977
26044'
The second PID is from a subprocess created by the script; I can't work out what it is, as it terminates when the script ends, but it defeats the script's initial purpose.
Any help in understanding what is happening here would be really appreciated.
I have come this far, as shown below, from two ugly lines of bash mixed with a call to a dummy four-line Python script, so I really feel I am getting close to a proper way of achieving my first real Python script.
I tried to decompose the script line by line in the interpreter and could not reproduce the error; everything behaves as expected. I only get this double-PID result when running the script from an outside location.
Thank you in advance for any helpful insight on how to understand what is happening!
#!/usr/bin/env python3
import subprocess
import I2C_LCD_driver
import string

# Defining variables for searched strings and string encoding
searched_process_name = 'lcd_'
cut_grep_out_of_results = 'grep'
result_string_encoding = 'utf-8'
mylcd = I2C_LCD_driver.lcd()
LCD_NOBACKLIGHT = 0x00
run = True

def kill_script():
    # Listing processes and getting the searched process
    ps_process = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
    grep_process = subprocess.Popen(["grep", "-i", searched_process_name],
                                    stdin=ps_process.stdout, stdout=subprocess.PIPE)
    # The .stdout.close() lines below allow the previous process to receive a SIGPIPE if the next process exits.
    ps_process.stdout.close()
    # Cleaning the result until only the PID number is returned in a string
    grep_cutout = subprocess.Popen(["grep", "-v", cut_grep_out_of_results],
                                   stdin=grep_process.stdout, stdout=subprocess.PIPE)
    grep_process.stdout.close()
    awk = subprocess.Popen(["cut", "-c", "10-14"],
                           stdin=grep_cutout.stdout, stdout=subprocess.PIPE)
    grep_cutout.stdout.close()
    output = awk.communicate()[0]
    clean_output = output.decode(result_string_encoding)
    clean_output_no_new_line = clean_output.rstrip()
    clean_output_no_quote = clean_output_no_new_line.replace("'", '')
    PID = clean_output_no_quote
    # Terminating the LCD script process
    subprocess.Popen(["kill", "-9", PID])

while run:
    kill_script()
    # Cleaning and shutting off LCD screen
    mylcd.lcd_clear()
    mylcd.lcd_device.write_cmd(LCD_NOBACKLIGHT)
    break
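As an aside, the whole ps/grep/grep/cut pipeline can be collapsed into a single pgrep call, which never matches its own process and returns one PID per line; killing each PID separately also avoids the multi-line kill argument error quoted above. A minimal sketch, assuming Python 3.5+ for subprocess.run (note that pgrep -f still matches the full command line, so a directory name containing the pattern would still cause extra hits):

import subprocess

def kill_script():
    # pgrep -f matches the pattern against each process's full command line
    # and prints one PID per line; it never reports itself.
    result = subprocess.run(["pgrep", "-f", "lcd_"], stdout=subprocess.PIPE)
    for pid in result.stdout.decode("utf-8").split():
        # Kill each matching PID individually.
        subprocess.run(["kill", "-9", pid])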
I found out the reason for this weird behaviour. An error on my end:
I had forgotten that some of my directories have names containing the character string I was running grep -i against, which caused the double result when running the script from outside its directory using its full path.
Turns out the script runs pretty well using subprocess.
So in the end, I renamed the scripts I wanted to terminate with disp_ rather than lcd_, and added shell=False to my subprocess calls to make sure there was no risk of unintentionally sending the output through bash while running the script.
I am trying to solve an issue with automating a series of scripts used in my workplace. I am a beginner, so I apologise for what will most likely be an easy question (hopefully); I have read the literature but it didn't quite make sense to me.
Essentially I have a bash script that runs a Python script and an R script, which need to run in order. Currently, the R script begins before the Python script has finished, and I have been told here that I cannot use the shell's wait function, as my Python script launches child processes and wait cannot be used to wait on grandchild processes.
That's fine, so the solution offered was to make the Python and R scripts wait on their own child processes, so that when they exit, the bash script can properly run in order. Unfortunately I cannot figure out the proper syntax for this in my Python script.
Here's what I have:
cmd = "python %s/create_keyfile.py %s %s %s %s" %(input, input, input,
input, input)
print cmd
os.system(cmd)
cmd = "python %s/uneak_name_plus_barcode_v2.py %s %s %s %s" %(input,
input, input, input, input)
print cmd
os.system(cmd)
cmd = "python %s/run_production_mode.py %s %s %s %s %s" %(input, input,
input, input, input, input)
print cmd
os.system(cmd)
Where 'input' stands for the actual inputs in my code; I probably just can't share exactly what we are doing :)
So essentially I am trying to figure out the best way of having the whole script wait on these three scripts before exiting.
Use subprocess.check_call(), not os.system().
subprocess.check_call() will block your main Python script's execution until the called command has completed (and will raise CalledProcessError if the command exits with a non-zero status).
Documentation for check_call() is here: https://docs.python.org/2/library/subprocess.html#subprocess.check_call
The subprocess module should always be used instead of os.system() for subprocess management and execution.
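For example, the first call above could be rewritten like this (a minimal sketch; the list elements mirror the original command string, with 'input' standing in for the real values as in the question):

import subprocess

# Blocks until create_keyfile.py finishes; raises CalledProcessError
# if the script exits with a non-zero status.
subprocess.check_call(["python", "%s/create_keyfile.py" % input,
                       input, input, input, input])

The other two calls follow the same pattern.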
Thank you to all who helped; here is what caused my dilemma, for anyone searching Google for this. By inserting python -c "from time import sleep; sleep(30)" into my code, I determined that the first two Python scripts were waiting as expected but the final one was not (the timer would trigger immediately after that script ran). It turns out that the third Python script called another small Python script with a "&" at the end of its command, which caused any attempt to wait on it to be ignored. Simply removing this & allowed all the code to run sequentially. – Michael Bates
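A minimal illustration of that pitfall, using sleep as a stand-in for the inner script: the shell backgrounds the command and exits immediately, so the caller's wait returns long before the real work finishes.

import subprocess

# The trailing '&' backgrounds sleep inside the shell; the shell itself
# exits at once, so check_call returns immediately.
subprocess.check_call("sleep 30 &", shell=True)

# Without '&', check_call blocks for the full 30 seconds.
subprocess.check_call("sleep 30", shell=True)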
I have a python build script for a Xamarin application that I need to compile into different ipa's and apk's based on locale.
The script manipulates the necessary values in info.plist and the Android manifest and then builds each of the versions using subprocess.Popen to call xbuild. Or at least that's how it's supposed to work.
The problem is that the build fails whenever I interact with the subprocess in any way (and basically I need to wait until it's done before I start changing values for the next version).
This works:
build_path = os.path.dirname(os.path.realpath(__file__))
ipa_path = "/path/to/my.ipa"
cmd = '/Library/Frameworks/Mono.framework/Versions/4.6.2/Commands/xbuild /p:Configuration="Release" /p:Platform="iPhone" /p:IpaPackageDir="%s" /t:Build %s/MyApp/iOS/MyApp.iOS.csproj' % (ipa_path, build_path)
subprocess.Popen(cmd, env=os.environ, shell=True)
However it will result in the python script continuing in parallel with the build.
If I do this:
subprocess.Popen(cmd, env=os.environ, shell=True).wait()
Xbuild fail with the following error message:
Build FAILED.
Errors:
/Users/sune/dev/MyApp/iOS/MyApp.iOS.csproj: error :
/Users/sune/dev/MyApp/iOS/MyApp.iOS.csproj: There is an unclosed literal string.
Line 2434, position 56.
It fails within milliseconds of being called, whereas normally the build process takes several minutes
Any of the other shorthand methods around subprocess.Popen, such as .call and .check_call, as well as the underlying Popen.poll and Popen.communicate operations, cause the same error to happen.
What's really strange is that even calling time.sleep can provoke the same error:
subprocess.Popen(cmd, env=os.environ, shell=True)
time.sleep(2)
Which I don't get because as I understand it I should also be able to do something like this:
shell = subprocess.Popen(cmd, env=os.environ, shell=True)
while shell.poll() is None:
    time.sleep(2)
print "done"
To essentially achieve the same as calling shell.wait()
Edit: Using command list instead of string
If I use a command list and shell=False like this
cmd = [
    '/Library/Frameworks/Mono.framework/Versions/4.6.2/Commands/xbuild',
    '/p:Configuration="Release"',
    '/p:Platform="iPhone"',
    '/p:IpaPackageDir="%s' % ipa_path,
    '/t:Build %s/MyApp/iOS/MyApp.iOS.csproj' % build_path
]
subprocess.Popen(cmd, env=os.environ, shell=False)
Then this is the result:
MSBUILD: error MSBUILD0003: Please specify the project or solution file to build, as none was found in the current directory.
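One plausible reading of that error (an editorial observation, not confirmed in the thread): with shell=False each list element reaches xbuild verbatim, so '/t:Build %s/...csproj' arrives as a single argument containing a space, and the unclosed quote in '/p:IpaPackageDir="%s' is passed literally instead of being consumed by a shell. A sketch of the list with the target and project path split into separate elements and the embedded quotes dropped:

cmd = [
    '/Library/Frameworks/Mono.framework/Versions/4.6.2/Commands/xbuild',
    '/p:Configuration=Release',
    '/p:Platform=iPhone',
    '/p:IpaPackageDir=%s' % ipa_path,
    '/t:Build',
    '%s/MyApp/iOS/MyApp.iOS.csproj' % build_path,
]
subprocess.Popen(cmd, env=os.environ, shell=False)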
Any input is much appreciated. I'm banging my head against the wall here.
I firmly believe that this is not possible. It must be a shortcoming of the way the subprocess module is implemented.
xbuild spawns multiple subprocesses during the build, and if it is polled for status, the Popen object in Python will discover that one of these had a non-zero return status and stop the execution of one or more of the xbuild subprocesses, causing the build to fail as described.
I ended up using a bash script to do the compiling and using Python to manipulate XML files etc.
I'm struggling to get some python script to start a subprocess, wait until it completes and then retrieve the required data. I'm quite new to Python.
The command I wish to run as a subprocess is
./bin.testing/Eva -t --suite="temp0"
Running that command by hand in the Linux terminal produces:
in terminal mode
Evaluation error = 16.7934
I want to run the command as a Python sub-process and receive the output back. However, everything I try seems to skip the second line (ultimately, it's the second line that I want). At the moment, I have this:
def job(self, fen_file):
    from subprocess import Popen, PIPE
    from sys import exit
    try:
        eva = Popen('{0}/Eva -t --suite="{1}"'.format(self.exedir, fen_file),
                    shell=True, stdout=PIPE, stderr=PIPE)
        stdout, stderr = eva.communicate()
    except:
        print('Error running test suite ' + fen_file)
        exit("Stopping")
    print(stdout)
    .
    .
    .
    return 0
All this seems to produce is
in terminal mode
0
with the important line missing. The print statement is just so I can see what I am getting back from the sub-process; the intention is that it will be replaced with code that processes the number from the second line and returns the output. (Here I'm returning 0 just so I can get this particular bit to work first; the caller of this function prints the result, which is why there is a zero at the end of the output.) exedir is just the directory of the executable for the sub-process, and fen_file is just an ASCII file that the sub-process needs. I have tried removing the 'in terminal mode' line from the source code of the sub-process and recompiling it, but that doesn't help; it still doesn't return the important second line.
Thanks in advance; I expect what I am doing wrong is really very simple.
Edit: I ought to add that the subprocess Eva can take a second or two to complete.
Since the 2nd line is an error message, it's probably stored in your stderr variable!
To know for sure you can print your stderr in your code, or you can run the program on the command line and see if the output is split into stdout and stderr. One easy way is to do ./bin.testing/Eva -t --suite="temp0" > /dev/null. Any messages you get are stderr since stdout is redirected to /dev/null.
Also, typically with Popen the shell=True option is discouraged unless really needed. Instead pass a list:
[os.path.join(self.exedir, 'Eva'), '-t', '--suite=' + fen_file], shell=False, ...
This can avoid problems down the line if one of your arguments would normally be interpreted by the shell. (Note, I removed the ""'s, because the shell would normally eat those for you!)
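Putting both points together, a minimal sketch of the method (same exedir and fen_file as above; the error line should then show up in stderr):

import os
from subprocess import Popen, PIPE

def job(self, fen_file):
    # Argument list instead of a shell string; no quoting worries.
    eva = Popen([os.path.join(self.exedir, 'Eva'), '-t', '--suite=' + fen_file],
                stdout=PIPE, stderr=PIPE)
    stdout, stderr = eva.communicate()  # Blocks until Eva finishes.
    print(stderr)  # Expected to contain "Evaluation error = ..."
    return 0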
Try using subprocess.check_output:

output_lines = subprocess.check_output(['./bin.testing/Eva', '-t', '--suite=temp0'],
                                       stderr=subprocess.STDOUT)  # merge stderr, since the wanted line is an error message
for line in output_lines.splitlines():
    print(line)
I am trying the following:
#!/usr/bin/python
import os, subprocess
func = 'print("Hello World")'
x = subprocess.Popen(['mongo', '--eval', func], stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, stdin=subprocess.PIPE)
print x.stdout.read()
print x.stderr.read()
But all I am getting is:
MongoDB shell version: 2.2.3
followed by two new lines. How do I capture the output of function execution?
Reading the pipes gets whatever is currently inside said pipe. Your mongo is waiting to connect to the localhost. Since it doesn't return quickly enough, your read command is not getting the results. This may be because you don't have mongo running locally, but you will run into this problem repeatedly if you don't wait for the subprocess to complete.
Also, keep in mind that subprocess.Popen, to my knowledge, doesn't block. You would probably need an x.wait() call if you want the function to complete before trying to grab the output.
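A sketch of that fix in the question's Python 2 style; communicate() is used rather than wait(), since wait() can deadlock when a pipe fills up, while communicate() drains both streams as it waits:

x = subprocess.Popen(['mongo', '--eval', func], stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, stdin=subprocess.PIPE)
out, err = x.communicate()  # Waits for mongo to exit and collects both streams.
print out
print err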