Calling cmd and passing arguments with Python subprocess

I am trying to get a command to run within python that otherwise works fine as a bat file or in the cmd command prompt.
Python version 2.7.9.
The following works fine in the command prompt:
"C:\Program Files (x86)\Common Files\Aquatic Informatics\AQUARIUS\runreport.exe" -Report="NHC_chart_1Week_1Series" -TimeSeries="Precip Total.24hr#FK Intake Forecast" -Server="aqserver-van" -username="admin" -password="admin" -label="Battery Voltage Data" -OutputFile="C:\Aquarius\Reports\Altagas\Summary\24hr_Precip_Forecast.pdf"
This basically calls a program (runreport.exe) and passes a bunch of arguments to it. If things go well a file is created.
It seems I should be able to do the same in python with subprocess.call
I have tried many different versions of the code but none of them run the program correctly. Below is my current code:
import subprocess
run_report_program_path=r'C:\Program Files (x86)\Common Files\Aquatic Informatics\AQUARIUS\runreport.exe'
reportname="-Report=NHC_chart_1Week_1Series"
timeseries="-TimeSeries=Precip Total.24hr#FK Intake Forecast"
server="-Server=aqserver-van"
user="-username=admin"
passwrd="-password=admin"
output="-OutputFile=C:\Aquarius\Reports\Altagas\Summary\24hr_Precip_Forecast.pdf"
result=subprocess.check_output([run_report_program_path, reportname, timeseries, server, user, passwrd, output], shell= True)
The runreport program does 'pop up', but it runs too quickly and doesn't produce the desired result; instead I get the error below. Does anyone see the issue?
Traceback (most recent call last):
File "C:\aquarius\scripts\API_import_and_trigger_reports_py\python_scripts\call_run_reports\test2.py", line 10, in <module>
result=subprocess.check_output([run_report_program_path, reportname, timeseries, server, user, passwrd, output], shell= True)
File "C:\Python27\lib\subprocess.py", line 573, in check_output
raise CalledProcessError(retcode, cmd, output=output)
CalledProcessError: Command '['C:\\Program Files (x86)\\Common Files\\Aquatic Informatics\\AQUARIUS\\runreport.exe', '-Report=NHC_chart_1Week_1Series', '-TimeSeries=Precip Total.24hr#FK Intake Forecast', '-Server=aqserver-van', '-username=admin', '-password=admin', '-OutputFile=C:\\Aquarius\\Reports\\Altagas\\Summary\x14hr_Precip_Forecast.pdf']' returned non-zero exit status 1
Also, in the command prompt it usually returns a nice set of errors (like time series not found, or report created successfully), I also don't get these. Is it possible to get these returned within the python session?

subprocess.check_output raises an exception if your program does not return zero. Try inserting an r before the output string; that may be why your program fails.
Also, it passes the output as a return value, so you would not see any output even if your program succeeded.
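To also see the messages that runreport prints, here is a minimal sketch that reuses the argument values from the question and assumes runreport writes its messages to stdout/stderr; shell=True is dropped because the full path to the executable is given:
import subprocess

run_report_program_path = r'C:\Program Files (x86)\Common Files\Aquatic Informatics\AQUARIUS\runreport.exe'
args = [
    run_report_program_path,
    "-Report=NHC_chart_1Week_1Series",
    "-TimeSeries=Precip Total.24hr#FK Intake Forecast",
    "-Server=aqserver-van",
    "-username=admin",
    "-password=admin",
    # raw string: otherwise \24 is read as an octal escape (the \x14 in the traceback)
    r"-OutputFile=C:\Aquarius\Reports\Altagas\Summary\24hr_Precip_Forecast.pdf",
]

try:
    # stderr=subprocess.STDOUT folds runreport's error messages into the captured output
    output = subprocess.check_output(args, stderr=subprocess.STDOUT)
    print(output)
except subprocess.CalledProcessError as e:
    print("runreport failed with exit code %d" % e.returncode)
    print(e.output)  # whatever runreport printed before exiting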

Changing the output to
output=r'-OutputFile=C:\Aquarius\Reports\Altagas\Summary\24hr_Precip_Forecast.pdf'
fixed the issue. Thank you.

Related

subprocess.call command runs but doesn't actually do anything

I get no error message when running the following code, however it doesn't perform the required task (to be more precise, the output is an empty file).
import subprocess
chemin_in_troncon = r"C:\Users\HBT\Desktop\Run\1OUTIL_CREATION_DB\donnees_out\2-FAX0W0\FAX0W0900_2.in"
chemin_strapontin = r"C:\Users\HBT\Desktop\STRAPONTIN\strapontin9120.exe"
result900 = subprocess.call([chemin_strapontin, chemin_in_troncon], shell=True)
However, when I execute the equivalent task directly from the shell with the following command, the .exe file does its job and gives me a non-empty output:
strapontin9120.exe ../Run/1OUTIL_CREATION_DB/donnees_out/2-FAX0W0/FAX0W0900_2.in
Here, I wrote this shell command while in the directory containing "strapontin9120.exe".
Any idea what is wrong in my code?
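One visible difference between the two invocations is the working directory. Below is a hedged sketch that mimics the shell session by running the .exe from its own folder, in case it resolves its input or output paths relative to the current directory; the paths are the ones from the question, and the actual cause is only a guess:
import subprocess

chemin_strapontin = r"C:\Users\HBT\Desktop\STRAPONTIN\strapontin9120.exe"
chemin_in_troncon = r"C:\Users\HBT\Desktop\Run\1OUTIL_CREATION_DB\donnees_out\2-FAX0W0\FAX0W0900_2.in"

# cwd= mimics the "cd into the exe's folder" that the working shell command relies on;
# shell=True is unnecessary here because a full path to the .exe is given.
result900 = subprocess.call(
    [chemin_strapontin, chemin_in_troncon],
    cwd=r"C:\Users\HBT\Desktop\STRAPONTIN",
)
print(result900)  # 0 normally indicates the program reported success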

Python script that gets HW info about RAM works inside the host but fails as a Rundeck job

I have a custom Python script that collects information (capacity, vendor, manufacturer, etc.) about the hardware (CPU, RAM, etc.) of a server host and dumps that information into JSON-formatted output files (e.g. memory_info.json, later to be used by another script for something else). The script uses python subprocess in order to call external programs, like "inxi" in this case.
After some testing on various hosts (different OSes, architectures, etc.) it seems to work fine, so I decided to configure it to run as a Rundeck job (for central management purposes). The Rundeck job is very simple and just executes the script in the following way:
sshpass -p ${RD_OPTION_SERVERPASSWORD} ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -q -t root@${RD_OPTION_SERVERIPADDRESS} "python3 /tmp/hw_inxi.py"
Here comes the problem:
Executing it as a Rundeck job, I get the following traceback:
Traceback (most recent call last):
File "/tmp/hw_inxi.py", line 53, in <module>
"--output-file", "print", "--output", "json"]).strip().decode()
File "/usr/lib/python3.7/subprocess.py", line 395, in check_output
**kwargs).stdout
File "/usr/lib/python3.7/subprocess.py", line 487, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['inxi', '-xxxm', '--output-file', 'print', '--output', 'json']' returned non-zero exit status 1.
Result: 1
Failed: NonZeroResultCode: Result code was 1
Execution failed: 2541 in project server_setup: [Workflow result: , step failures: {1=Dispatch failed on 1 nodes: [RUNDECK_SERVER: NonZeroResultCode: Result code was 1 + {dataContext=MultiDataContextImpl(map={ContextView(node:RUNDECK_SERVER)=BaseDataContext{{exec={exitCode=0}}}, ContextView(step:1, node:RUNDECK_SERVER)=BaseDataContext{{exec={exitCode=0}}}}, base=null)} ]}, Node failures: {RUNDECK_SERVER=[NonZeroResultCode: Result code was 1 + {dataContext=MultiDataContextImpl(map={ContextView(node:RUNDECK_SERVER)=BaseDataContext{{exec={exitCode=0}}}, ContextView(step:1, node:RUNDECK_SERVER)=BaseDataContext{{exec={exitCode=0}}}}, base=null)} ]}, status: failed]
Inxi is installed and properly configured on the destination machine, and the binary permissions seem OK. My script runs perfectly fine with no errors from within the server; the problem appears only when it is executed as a Rundeck job (Rundeck version 3.3.10).
The relative code snippet from the script can be seen below:
inxi_mem = subprocess.check_output(["inxi", "-xxxm", "--output-file", "print", "--output", "json"]).strip().decode()
memory_list = json.loads(inxi_mem)
Any ideas on what may be the culprit or how to troubleshoot the above?
(p.s. sorry if I forgot something, this is my first question)
I am using a job step to connect to the remote host on purpose, as this is the only way to use the second script with the hardware data that the first script gathered from within my Rundeck server.
The problem seems to be related to environment variables and was solved by adding the "--tty" option to the inxi command, as mentioned in this issue. The subprocess command now looks like the code below and works like a charm.
inxi_mem = subprocess.check_output(["inxi", "--tty", "-xxxm", "--output-file", "print", "--output", "json"]).strip().decode()
memory_list = json.loads(inxi_mem)
You're using the job step to connect to the remote node; a good approach is:
Configure your remote node in the model source.
Then, in a job, call the script in a command step, dispatching the job to your node.

Python: Execute several commands in CMD in a single instance

I am trying to call several "blocking" .bat files from my Python program. The first thing I need to do is change the directory in which the CMD is opened. Once the CMD is pointing to the desired location, I will call two bat files. I want them to execute sequentially.
def launchAdminConsole():
    print('Going to launch admin console')
    changeDir = 'cd dir1\\dir2\\bin \n 1.bat \n 2.bat'
    os.system("start /wait cmd /c {" + changeDir + "}")
    print("Admin Console launched")
According to this question, using /wait should make the command prompt wait, but for me it just pops up and goes away, so I am not sure whether the bat files are executed or not.
Also, I am not sure if I have formed the command-line call correctly. I googled how to execute several commands in a single instance of cmd from Python, but none of the results helped me much, so I took a guess and wrote the code above.
I need the command prompt to open, run the two bat files, and then give control back to Python. I do not need to fetch the output of the bat files; I just need to know whether the two files were executed or not. As I said before, they are blocking bat files, so if run correctly the command prompt will not close so quickly. I hope my requirement is clear; if not, comment below and I will explain more.
Edit :
Updated my code as follows
def launchAdminConsole2():
    print('Going to launch admin console')
    changeDir = 'cd dir1\\dir2\\bin'
    runOnce1 = '1.bat'
    runOnce2 = '2.bat'
    p = subprocess.Popen(changeDir, shell=True)
    p.wait()
    print(p.returncode)
    p = subprocess.call([changeDir, runOnce1, runOnce2])
    p.wait()
    print("Admin Console launched")
The return code from the change-directory call is 0, but it says that 1.bat is not found. I am sure that if the directory had changed, that file would be present in the given location.
The error is
File "C:\Users\nirma\AppData\Local\Programs\Python\Python37-32\lib\subprocess.py", line 304, in call
with Popen(*popenargs, **kwargs) as p:
File "C:\Users\nirma\AppData\Local\Programs\Python\Python37-32\lib\subprocess.py", line 756, in __init__
restore_signals, start_new_session)
File "C:\Users\nirma\AppData\Local\Programs\Python\Python37-32\lib\subprocess.py", line 1155, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
You can run several commands using cmd.exe /c if you separate them with ampersands:
cmd.exe /c "cd \dir & 1.bat & 2.bat"
Try passing that to subprocess.call and see how it goes.
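For example, a minimal sketch with the directory and .bat names from the question; subprocess.call already blocks until cmd.exe exits, so no start /wait is needed, and you can use && instead of & if the second script should only run when the first succeeds:
import subprocess

# cmd.exe runs the chained commands in one instance and exits when they finish
returncode = subprocess.call(
    ['cmd.exe', '/c', r'cd dir1\dir2\bin & 1.bat & 2.bat']
)
print('Batch files finished with exit code', returncode)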

Issues with subprocess.Popen() replacing os.system()

I am trying to generate a set of files through a Python script on a 48-core cluster which has one master and 3 slave machines. I need to generate a set of files, run some scripts on them, collect the results, and then delete them. Then I repeat the process: regenerate the files, execute, delete, and so on.
When I delete and regenerate files with the same name, I see that the slave machine complains that it cannot find the files.
I am running the Python script through os.system().
I learned from this post that it is better to use subprocess.Popen() rather than os.system(), so that it actually waits for my script to generate my files before proceeding with the execution. I could use os.system("pause") or time.sleep(whatever) for waiting, but I want to convert my os.system calls to subprocess.Popen or subprocess.call, and I am stuck here.
I ran through the python documentation and tried out subprocess.Popen('ls'), but I am not able to get a simple thing like subprocess.Popen('cd /whatever_directory') working.
It might sound silly but how do I execute such a simple command like cd through subprocess rather than os.system('cd')?
Then, I actually want to convert the following into subprocess. How do I do it?
import os, optparse
from optparse import OptionParser

parser = OptionParser()
parser.add_option("-m", "--mod", dest='module', help='Enter the entity name')
parser.add_option("-f", "--folder", dest="path", help="Enter the path")
(options, args) = parser.parse_args()

module = options.module
path = options.path
os.system('python %s/python_repeat_deckgen_remote.py -m %s' % (path, module))
I just replaced os.system with subprocess.Popen.
But it gave me a lot of complaints:
File "/usr/lib64/python2.6/subprocess.py", line 633, in __init__
errread, errwrite)
File "/usr/lib64/python2.6/subprocess.py", line 1139, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
As NPE already noted, new processes don't affect the existing one (which means os.system('cd /some/where') also has no effect on the current process). In this case, though, I think you're tripping over the fact that os.system invokes a shell to interpret the command you pass in, while subprocess.Popen does not do so by default. But you can tell it to do so:
proc = subprocess.Popen(
    'python %s/python_repeat_deckgen_remote.py -m %s' % (path, module),
    shell=True)
status = proc.wait()
If you're invoking a shell built-in command or using shell expansions, it's necessary to invoke the shell (assuming you're not willing to simulate it):
>>> import subprocess
>>> x = subprocess.Popen('echo $$')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/subprocess.py", line 679, in __init__
errread, errwrite)
File "/usr/local/lib/python2.7/subprocess.py", line 1249, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
>>> x = subprocess.Popen('echo $$', shell = True); x.wait()
81628
0
>>>
but you can bypass the shell—which can help with security issues—if your problem permits, by breaking up the command arguments into a list:
>>> x = subprocess.Popen(['echo', '$$']); x.wait()
$$
0
>>>
Note that this time the output here is the string $$, rather than a process ID, since this time the shell did not interpret the string.
For your original example, for instance, you might use:
proc = subprocess.Popen(['python',
                         os.path.join(path, 'python_repeat_deckgen_remote.py'),
                         '-m',
                         module])
which avoids issues if path and/or module contain characters special to the shell.
(You might even want to call subprocess.Popen with cwd = path and eliminate the call to os.path.join, although that depends on other things.)
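For illustration, a sketch of that cwd variant (same path and module variables as above; whether it fits depends on what else the child script expects from its working directory):
# the child process starts in `path`, so the script name no longer needs os.path.join
proc = subprocess.Popen(['python', 'python_repeat_deckgen_remote.py', '-m', module],
                        cwd=path)
status = proc.wait()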
I am not able to get a simple thing like subprocess.Popen('cd /whatever_directory') working.
Each process's current working directory is independent of other processes in the system.
When you start a subprocess (using either of these mechanisms), and get it to cd into another directory, that has no effect on the parent process, i.e. on your Python script.
To change the current working directory of your script, you should use os.chdir().
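For example:
import os

os.chdir('/whatever_directory')   # changes this Python process's own working directory
print(os.getcwd())                # relative paths now resolve from here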
I might not be answering your question directly, but for tasks that require running subprocesses, I always use plumbum.
IMO, it makes the task much simpler and more intuitive, including running on remote machines.
Using plumbum, to set the subprocess's working directory you can run the command inside a with local.cwd(path): my_cmd() context.
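A rough sketch of that pattern; the directory and module name are placeholders, and the script name is taken from the earlier question:
from plumbum import local

# commands are looked up once and called like functions; stdout is returned as a string
python = local["python"]

# local.cwd(...) as a context manager changes the working directory only inside the block
with local.cwd("/whatever_directory"):
    output = python("python_repeat_deckgen_remote.py", "-m", "some_module")
print(output)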

Subprocess module fails to run command

I'm trying to execute Google's cpplint.py on a group of my files and collect the results to one log file. However, I have not managed to beat the subprocess module. My current code is here:
import os, subprocess
rootdir = "C:/users/me/Documents/dev/"
srcdir = "project/src/"
with open(rootdir + srcdir + "log.txt", mode='w', encoding='utf-8') as logfile:
    for subdir, dirs, files in os.walk(rootdir + srcdir):
        for file in files:
            if file.endswith(".h") or file.endswith(".cpp"):
                filewithpath = os.path.join(subdir, file)
                cmd = ['c:/Python27/python.exe',
                       'C:/users/me/Documents/dev/cpplint.py',
                       '--filter=-whitespace,-legal,-build/include,-build/header_guard/',
                       filewithpath]
                output = subprocess.check_output(cmd)
                logfile.write(output.decode('ascii'))
Trying to run the above code throws an error:
File "C:\Python32\lib\site.py", line 159
file=sys.stderr)
^ SyntaxError: invalid syntax Traceback (most recent call last): File "C:\Users\me\Documents\dev\project\src\verifier.py", line 19, in <module>
output = subprocess.check_output(cmd) File "C:\Python32\lib\subprocess.py", line 511, in check_output
raise CalledProcessError(retcode, cmd, output=output) subprocess.CalledProcessError: Command '['c:/Python27/python.exe', 'C:/users/me/Documents/dev/cpplint.py', '--filter=-whitespace,-legal,-build/include,-build/header_guard/', 'C:/users/me/Documents/dev/project/src/aboutdialog.cpp']' returned non-zero exit status 1
If I substitute the cmd with something simpler like:
cmd=['C:/WinAVR-20100110/bin/avr-gcc.exe','--version']
Then the script works as expected.
I have also tried to use a single command string instead of a list of strings as cmd, but the result is the same.
When debugging the code, I copied the list-of-strings-turned-into-the-command-line-command from the debugger and ran it in the Windows command line, and the command ran as expected.
The Python interpreter running my script is Python 3.2.
Any tips are greatly appreciated.
Looks like cpplint.py is simply exiting with a non-zero return code - which it might do, for instance, if it finds errors or "lint" in the source files it is checking.
See the documentation for subprocess.check_output. Note that if the command executed returns a non-zero exit code then a subprocess.CalledProcessError is raised.
You could work around it by watching for CalledProcessError, e.g.
try:
    output = subprocess.check_output(cmd)
except subprocess.CalledProcessError as e:
    # ack! cpplint.py failed... report an error to the user?
    output = e.output  # cpplint's findings are still available on the exception
EDIT:
The SyntaxError seems to be the key here, and is probably caused by C:\Python32\lib being in your PYTHONPATH (either explicitly, or because it is your current working directory).
The Python interpreter (since about 1.5.2-ish) automatically runs import site when started. So, when this is the case, and your script goes to execute:
c:/Python27/python.exe C:/users/me/Documents/dev/cpplint.py ...
then the Python 2.7 interpreter will find C:\Python32\lib\site.py first, and try to load that, instead of the one (presumably) at C:\Python27\lib\site.py. The issue is that Python 3's site.py contains syntax incompatible with Python 2, so the process launched by subprocess.check_output is failing with a SyntaxError before it even gets a chance to run cpplint, which propagates the CalledProcessError.
Solution? Make sure Python2 gets a bonafide Python2 "PYTHONPATH", and likewise for Python3! In other words, make sure C:\Python32\lib is not in the PYTHONPATH search path when running the Python2 interpreter.
One way to do this in your case is to set an explicit environment when launching the process, e.g.:
python2_env = {"PYTHONPATH": "path/to/python2/stuff:..."}
output = subprocess.check_output(cmd, env=python2_env)
I would suggest running this first:
pipe = subprocess.Popen([cmd, options],stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stdout, stderr = pipe.communicate()
That will show you what exactly the error behind it is, since CalledProcessError is raised only if the exit code was non-zero.
I did it by replacing def main() with the following (I edited the error function too, to get a proper CSV file):
errorlog = sys.stderr
sys.stderr = open("errorlog.csv", "w")
sys.stderr.write("File;Line;Message;Category;Confidence\n")
for filename in filenames:
    ProcessFile(filename, _cpplint_state.verbose_level)
_cpplint_state.PrintErrorCounts()
sys.stderr.close()
sys.stderr = errorlog   # restore stderr before exiting
sys.exit(_cpplint_state.error_count > 0)
