I have a Python script that needs to invoke another Python script in the same directory. I did this:
from subprocess import call
call('somescript.py')
I get the following error:
call('somescript.py')
File "/usr/lib/python2.6/subprocess.py", line 480, in call
return Popen(*popenargs, **kwargs).wait()
File "/usr/lib/python2.6/subprocess.py", line 633, in __init__
errread, errwrite)
File "/usr/lib/python2.6/subprocess.py", line 1139, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
I have the script somescript.py in the same folder though. Am I missing something here?
If 'somescript.py' isn't something you could normally execute directly from the command line (i.e., $ ./somescript.py works), then you can't call it directly using call.
Remember that the way Popen works is that the first argument is the program that it executes, and the rest are the arguments passed to that program. In this case, the program is actually python, not your script. So the following will work as you expect:
subprocess.call(['python', 'somescript.py', somescript_arg1, somescript_val1,...]).
This correctly calls the Python interpreter and tells it to execute your script with the given arguments.
Note that this is different from the above suggestion:
subprocess.call(['python somescript.py'])
That will try to execute a program called python somescript.py, which clearly doesn't exist.
call('python somescript.py', shell=True)
This will also work, but using strings as input to call is not cross-platform, is dangerous if you aren't the one building the string, and should generally be avoided if at all possible.
Windows? Unix?
Unix will need a shebang line and the executable permission for this to work:
#!/usr/bin/env python
as the first line of the script, and:
chmod u+x script.py
at the command line, or
call('python script.py'.split())
as mentioned previously.
Windows should work if you add the shell=True parameter to the "call" call.
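For instance, a minimal sketch combining the two paths above (assuming the script is called somescript.py and sits in the current directory):

import sys
import subprocess

if sys.platform.startswith('win'):
    # On Windows, shell=True lets the shell find the interpreter.
    subprocess.call('python somescript.py', shell=True)
else:
    # On Unix, with the shebang line and chmod u+x applied,
    # the script can be executed directly.
    subprocess.call(['./somescript.py'])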
Check out this example:
from subprocess import call
with open('directory_of_logfile/logfile.txt', 'w') as f:
    call(['python', 'directory_of_called_python_file/called_python_file.py'], stdout=f)
import subprocess

command = 'home/project/python_files/run_file.py {} {} {}'.format(
    arg1, arg2, arg3)  # if you want to pass any arguments
p = subprocess.Popen(
    [command],
    shell=True,
    stdin=None,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    close_fds=True)
out, err = p.communicate()
subprocess.call expects the same arguments as subprocess.Popen - that is, a list of strings (the argv in C) rather than a single string.
It's quite possible that your child process attempted to run "s" with the parameters "o", "m", "e", ...
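As a sketch of the list form (using sys.executable so the child runs under the same interpreter; the script name is the one from the question):

import sys
import subprocess

# Each argument is its own list element, so nothing gets split character by character.
subprocess.call([sys.executable, 'somescript.py'])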
If you're on Linux/Unix you could avoid call() altogether and not execute an entirely new instance of the Python executable and its environment.
import os

cpid = os.fork()
if not cpid:
    import somescript
    os._exit(0)

os.waitpid(cpid, 0)
For what it's worth.
What's wrong with
import sys
from os.path import dirname, abspath
local_dir = abspath(dirname(__file__))
sys.path.append(local_dir)
import somescript
Or better still, wrap the functionality in a function, e.g. baz, then do this:
import sys
from os.path import dirname, abspath
local_dir = abspath(dirname(__file__))
sys.path.append(local_dir)
import somescript
somescript.baz()
There seem to be a lot of answers here starting Python processes or forking; is that a requirement?
First, check if somescript.py is executable and starts with something along the lines of #!/usr/bin/python.
If this is done, then you can use subprocess.call('./somescript.py').
Or as another answer points out, you could do subprocess.call(['python', 'somescript.py']).
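A small sketch of that check with a fallback (the os.access test is just one way you might verify the executable bit; the path is illustrative):

import os
import subprocess

script = './somescript.py'
if os.access(script, os.X_OK):
    # Executable bit is set, so the shebang picks the interpreter.
    subprocess.call([script])
else:
    # Fall back to invoking the interpreter explicitly.
    subprocess.call(['python', script])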
import subprocess

def main(argv):
    host = argv[0]
    type = argv[1]
    val = argv[2]
    ping = subprocess.Popen(['python ftp.py %s %s %s' % (host, type, val)],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
    out = ping.communicate()[0]
    output = str(out)
    print output
The subprocess call is a very literal-minded system call. It can be used for any generic process, and hence does not know what to do with a Python script automatically.
Try
call('python somescript.py', shell=True)
If that doesn't work, you might want to try an absolute path, and/or check permissions on your Python script...the typical fun stuff.
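For example, a rough sketch of building an absolute path next to the calling script and checking it before the call (the filename is only illustrative):

import os
import subprocess

# Absolute path to a script sitting next to the calling script.
script = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'somescript.py')
if os.path.isfile(script) and os.access(script, os.R_OK):
    subprocess.call(['python', script])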
Related
I am trying to make a basic linter script which I can run on Python files in the current directory. So far my script looks like this:
import subprocess
from os import listdir
from os.path import isfile, join
if __name__ == "__main__":
    subprocess.check_output(["black", "-l", "100", "./"])
    files = [f for f in listdir("./") if isfile(join("./", f))]
    for file in files:
        if file.endswith(".py"):
            subprocess.check_output(["flake8", file])
I want to run the code via the command line with a call such as main.py. Black performs fine and finds the .py files in the current directory and formats them without a problem. However, when trying to run a similar command with flake8, it also runs on children of the directory, such as the venv folder, which I am not interested in.
Therefore, the script includes a check to get the files in the current directory and then find the .py files. However, once I get those files, I cannot seem to use my flake8 command with subprocess.check_output. The error I get is as follows:
Traceback (most recent call last):
File "linter.py", line 18, in <module>
subprocess.check_output(["flake8", file], shell=False)
File "C:\Users\calum\AppData\Local\Programs\Python\Python38\lib\subprocess.py", line 411, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "C:\Users\calum\AppData\Local\Programs\Python\Python38\lib\subprocess.py", line 512, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['flake8', 'flask_server_controller.py']' returned non-zero exit status 1.
Could someone please explain the error and/or provide a solution to my problem? I would also like to add other linting tools to the script, such as pylint; however, I am worried I will run into the same problem without understanding it properly.
Thanks in advance.
subprocess.check_output is giving you that because of the check_ aspect.
This means that when the executable you're running returns nonzero (for example, flake8 returns nonzero when it detects lint failures), an exception will be raised.
To avoid this behaviour, I'd suggest using subprocess.run instead, and forwarding along the return code. Something like this:
import os
import subprocess
import sys


def main():
    ret = 0
    output = b''
    for filename in os.listdir('.'):
        if filename.endswith('.py'):
            proc_ret = subprocess.run(
                ('flake8', filename),
                stdout=subprocess.PIPE,
            )
            ret |= proc_ret.returncode
            output += proc_ret.stdout
    sys.stdout.buffer.write(output)
    return ret


if __name__ == '__main__':
    exit(main())
Note that this is going to be prohibitively slow: you have to incur the startup cost of flake8 for every file.
One way you can improve this is by passing all the filenames to flake8 at once:
import os
import subprocess
import sys


def main():
    filenames = (fname for fname in os.listdir('.') if fname.endswith('.py'))
    proc_ret = subprocess.run(('flake8', *filenames), stdout=subprocess.PIPE)
    sys.stdout.buffer.write(proc_ret.stdout)
    return proc_ret.returncode


if __name__ == '__main__':
    exit(main())
But this also brings up another interesting point: why are you collecting the output at all? If you let the output go to stdout, it will be printed automatically:
import os
import subprocess


def main():
    filenames = (fname for fname in os.listdir('.') if fname.endswith('.py'))
    return subprocess.call(('flake8', *filenames))


if __name__ == '__main__':
    exit(main())
And hmmm, you probably don't need to do this at all since flake8 has its own inclusion/exclusion code -- you probably just want to configure exclude properly:
# setup.cfg / tox.ini / .flake8
[flake8]
# for example, exclude recursing into the venv
exclude = venv
and then you can use flake8 . as normal
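With exclude configured, the whole script arguably collapses to a single call; a sketch, assuming the config file sits in the project root:

import subprocess


def main():
    # flake8 picks up the exclude setting from setup.cfg / tox.ini / .flake8
    return subprocess.call(('flake8', '.'))


if __name__ == '__main__':
    exit(main())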
(disclaimer: I am the current maintainer of flake8)
Dear stackoverflow users,
I'm looking for a solution to a probably quite easy problem. I want to automate some quantum chemical calculations and ran into a small problem.
Normally you start your quantum chemical program (in my case it's called orca) with your input file (*.inp) on a remote server as a background process and redirect the output into an output file (*.out) via
nohup orca H2.inp >& H2.out &
or something similar.
Now I wanted to use a python script (with some templating) to write the input file automatically. At the end the script should start the calculation in a way that I could log out of the server without stopping orca. I tried that with
subprocess.run(["orca", input_file], stdout=output_file)
but so far it did not work. How do I "emulate" the command given at the top with the subprocess module?
Regards
Update
I have one file that is called H2.xyz. The script reads and splits the filename at the dot and creates an input file named H2.inp, and the output should be written into the file H2.out.
Update 2
The input file is derived from the *xyz file via
xyzfile = str(sys.argv[1])
input_file = xyzfile.split(".")[0] + ".inp"
output_file = xyzfile.split(".")[0] + ".out"
and is created within the script via templating. In the end I want to run the script in the following way:
python3 script.py H2_0_1.xyz
Why not simply:
subprocess.Popen(f'orca {input_file} >& {output_file}',
                 shell=True, stdin=None, stdout=None, stderr=None, close_fds=True)
More info:
Run Process and Don't Wait
For me (Windows, Python 2.7) the call method works fine like this:
with open('H2.out', 'a') as out:
    subprocess.call(['orca', infile], stdout=out,
                    stderr=out,
                    shell=True)  # Yes, I know. But it's Windows.
On Linux you maybe do not need shell=True for a list of arguments.
Is the usage of subprocess important? If not, you could use os.system.
The Python call would get really short, in your case
os.system("nohup orca H2.inp >& H2.out &")
should do the trick.
I had the same problem not long ago.
Here is my solution:
import subprocess
import traceback

commandLineCode = "nohup orca H2.inp >& H2.out &"

try:
    proc = subprocess.Popen(commandLineCode,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            cwd=workingDir)
except OSError:
    print("Windows Error occurred")
    print(traceback.format_exc())

timeoutInSeconds = 100

try:
    outs, errs = proc.communicate(timeout=timeoutInSeconds)
except subprocess.TimeoutExpired:
    print("timeout")
    proc.kill()
    outs, errs = proc.communicate()

stdoutDecode = outs.decode("utf-8")
stderrDecode = errs.decode("utf-8")

for line in stdoutDecode.splitlines():
    pass  # write line to outputFile

if stderrDecode:
    for line in stderrDecode.splitlines():
        pass  # write line to error log
The OSError exception is pretty important since you never know what your OS might do wrong.
For more on the communicate() call, which interacts with the process and waits for it to finish, read:
https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate
I have been looking for an answer on how to execute a Java jar file through Python, and after looking at:
Execute .jar from Python
How can I get my python (version 2.5) script to run a jar file inside a folder instead of from command line?
How to run Python egg files directly without installing them?
I tried to do the following (both my jar and python file are in the same directory):
import os
if __name__ == "__main__":
    os.system("java -jar Blender.jar")
and
import subprocess
subprocess.call(['(path)Blender.jar'])
Neither has worked. So, I was thinking that I should use Jython instead, but I think there must be an easier way to execute jar files through Python.
Do you have any idea what I may be doing wrong? Or is there any other site where I could study more about my problem?
I would use subprocess this way:
import subprocess
subprocess.call(['java', '-jar', 'Blender.jar'])
But, if you have a properly configured /proc/sys/fs/binfmt_misc/jar you should be able to run the jar directly, as you wrote.
So, what exactly is the error you are getting?
Please post all the output you are getting from the failed execution.
This always works for me:
from subprocess import *
def jarWrapper(*args):
    process = Popen(['java', '-jar'] + list(args), stdout=PIPE, stderr=PIPE)
    ret = []
    while process.poll() is None:
        line = process.stdout.readline()
        if line != '' and line.endswith('\n'):
            ret.append(line[:-1])
    stdout, stderr = process.communicate()
    ret += stdout.split('\n')
    if stderr != '':
        ret += stderr.split('\n')
    ret.remove('')
    return ret
args = ['myJarFile.jar', 'arg1', 'arg2', 'argN'] # Any number of args to be passed to the jar file
result = jarWrapper(*args)
print result
I used the following way to execute the tika jar to extract the content of a Word document. It worked and I got the output. The command I'm trying to run is "java -jar tika-app-1.24.1.jar -t 42250_EN_Upload.docx".
from subprocess import PIPE, Popen
process = Popen(['java', '-jar', 'tika-app-1.24.1.jar', '-t', '42250_EN_Upload.docx'], stdout=PIPE, stderr=PIPE)
result = process.communicate()
print(result[0].decode('utf-8'))
Here I got the result as a tuple, hence "result[0]". Also, the string was in binary format (a bytes string). To convert it into a normal string we need to decode it with 'utf-8'.
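If you are on Python 3.7 or newer, one variation (not part of the original answer) is to let subprocess do the decoding for you with text=True:

from subprocess import run, PIPE

# text=True makes stdout/stderr plain strings, so no manual .decode() is needed.
proc = run(['java', '-jar', 'tika-app-1.24.1.jar', '-t', '42250_EN_Upload.docx'],
           stdout=PIPE, stderr=PIPE, text=True)
print(proc.stdout)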
With args: a concrete example using the Closure Compiler (https://developers.google.com/closure/) from Python:
import os
import re
src = 'test.js'
os.execlp("java", 'blablabla', "-jar", './closure_compiler.jar', '--js', src, '--js_output_file', '{}'.format(re.sub('.js$', '.comp.js', src)))
(also see here When using os.execlp, why `python` needs `python` as argv[0])
How about using os.system() like:
os.system('java -jar blabla...')
os.system(command)
Execute the command (a string) in a subshell. This is implemented by calling the Standard C function system(), and has the same limitations. Changes to sys.stdin, etc. are not reflected in the environment of the executed command.
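Note that os.system only hands back an exit status, so here is a quick sketch of checking it (the jar name is taken from the question):

import os

status = os.system('java -jar Blender.jar')
if status != 0:
    # Non-zero means java failed; on Unix the value is encoded
    # the same way as the C wait() status.
    print('java exited with status', status)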
I have a Python script that calls a shell script, which in turn calls a .exe called iv4_console. I need to print the stdout of iv4_console for debugging purposes. I used this:
Python:
import sys
import subprocess
var="rW015005000000"
proc = subprocess.Popen(["c.sh", var], shell=True, stdout=subprocess.PIPE)
output = ''
for line in iter(proc.stdout.readline, ""):
    print line
    output += line
Shell:
start_dir=$PWD
release=$1
echo Release inside shell: $release
echo Directory: $start_dir
cd $start_dir
cd ../../iv_system4/ports/visualC12/Debug
echo Debug dir: $PWD
./iv4_console.exe ../embedded/LUA/analysis/verbose-udp-toxml.lua ../../../../../logs/$release/VASP_DUN722_20160307_Krk_Krk_113048_092_1_$release.dvl &>../../../../FCW/ObjectDetectionTest/VASP_DUN722_20160307_Krk_Krk_113048_092_1_$release.xml
./iv4_console.exe ../embedded/LUA/analysis/verbose-udp-toxml.lua ../../../../../logs/$release/VASP_FL140_20170104_C60_Checkout_afterIC_162557_001_$release.dvl &>../../../../FCW/ObjectDetectionTest/VASP_FL140_20170104_C60_Checkout_afterIC_162557_001_$release.xml
exit
But this didn't work; it prints nothing. What do you think?
See my comment; the best approach (i.m.o.) would be to just use Python only.
However, in answer to your question, try:
import sys
import subprocess
var="rW015005000000"
proc = subprocess.Popen(["/bin/bash", "/full/path/to/c.sh"], stdout=subprocess.PIPE)
# Best to always avoid shell=True because of security vulnerabilities.
proc.wait() # To make sure the shell script does not continue running indefinitely in the background
output, errors = proc.communicate()
print(output.decode())
# Since subprocess.communicate() returns a bytes-string, you can use .decode() to print the actual output as a string.
You can use
import subprocess
subprocess.call(['./c.sh'])
to call the shell script from a Python file,
or
import subprocess
import shlex
subprocess.call(shlex.split('./c.sh var'))
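For reference, shlex.split just turns the command string into the argument list that call expects:

import shlex

# Splits on whitespace while respecting quoting.
print(shlex.split('./c.sh var'))  # ['./c.sh', 'var']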
Preface: I understand this question has been asked before, but I cannot find a solution to my error from looking at those previous answers.
All I want to do is call diff for the output of two different commands on the same file.
import os, sys
from subprocess import check_call
import shlex
ourCompiler = 'espressoc';
checkCompiler = 'espressocr';
indir = 'Tests/Espresso/GoodTests';
check_call(["pwd"]);
for root, dirs, filenames in os.walk(indir):
    for f in filenames:
        if len(sys.argv) == 2 and sys.argv[1] == f:
            str1 = "<(./%s ./%s) " % (ourCompiler, os.path.join(root, f))
            str2 = "<(./%s ./%s) " % (checkCompiler, os.path.join(root, f))
            check_call(["diff", str1, str2])
Why is it that I receive the following error?
diff: <(./espressoc ./Tests/Espresso/GoodTests/Init.java) : No such file or directory
diff: <(./espressocr ./Tests/Espresso/GoodTests/Init.java) : No such file or directory
Traceback (most recent call last):
File "runTest.py", line 21, in <module>
check_call(["diff", str1, str2])
File "/usr/lib/python3.5/subprocess.py", line 581, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['diff', '<(./espressoc ./Tests/Espresso/GoodTests/Init.java) ', '<(./espressocr ./Tests/Espresso/GoodTests/Init.java) ']' returned non-zero exit status 2
If I were to run this command from my shell it works fine.
diff is complaining that it can't find the file with the strange name <(./espressoc ./Tests/Espresso/GoodTests/Init.java) because that's the argument you fed it.
subprocess.Popen (check_call is a convenience function for it) directly calls what you give it; there isn't a shell to interpret redirections or anything, unless you tell it shell=True, which will then run the command via /bin/sh (on POSIX). Note the security considerations before using it.
So basically:
subprocess.check_call(['diff', '<this', '<that'])  # strange files
subprocess.check_call('diff <this <that', shell=True)  # /bin/sh does some redirection
If you wanted to be "pure" (probably more effort than it's worth), I think you could subprocess all three processes (diff, compiler 1 and 2) and handle the piping yourself. Does diff wait for 2 EOFs or something before closing stdin? Not sure how it actually deals with the double input redirection like your line has...
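As a rough sketch of that "pure" idea, one way (an assumption, not tested against your compilers) is to side-step the double-stdin question entirely: capture each compiler's output into a temporary file and diff the two files:

import subprocess
import tempfile


def run_to_tempfile(cmd):
    # Run one command and capture its stdout in a named temporary file.
    tmp = tempfile.NamedTemporaryFile(delete=False)
    subprocess.check_call(cmd, stdout=tmp)
    tmp.close()
    return tmp.name


src = './Tests/Espresso/GoodTests/Init.java'
ours = run_to_tempfile(['./espressoc', src])
check = run_to_tempfile(['./espressocr', src])

# diff exits 1 when the files differ, so use call() rather than check_call().
subprocess.call(['diff', ours, check])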