I want to execute a set of statements, a combination of shell commands and Python code, in a child process from a Python script. I am using the subprocess.call() method, but it only takes one shell command as input. I want to execute some Python code after the shell command in the child process, and exit the child process once the shell + Python code has finished executing.
import os
import subprocess

command = "./darknet detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights {0}".format(latest_subdir)
proc = subprocess.Popen([command], stdout=subprocess.PIPE, shell=True)
(out, err) = proc.communicate()
with open(result_file, 'wb') as fout:  # communicate() returns bytes, so open in binary mode
    fout.write(out)
os.system('mv {0} {1}'.format(latest_subdir, processed_dir))
s3_util.upload_results([result_file])
When you send a string to os.system, you use the syntax of the system command shell. For instance, this works just fine on Linux:
os.system('ls; wc -l so.py; echo "done"')
Separate the commands with semicolons.
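The same semicolon trick works with subprocess, which also lets you capture the output. A minimal sketch of the original ask, combining a shell command with a bit of Python (run via `python3 -c`) in one child shell; the commands here are placeholders:

```python
import subprocess

# One shell string: a regular shell command, then a short Python snippet via
# `python3 -c`, separated by `;` so both run in the same child shell.
command = 'echo "shell part"; python3 -c "print(\'python part\')"'
result = subprocess.run(command, shell=True, capture_output=True, text=True)
print(result.stdout)
```

The child shell exits on its own once both parts have finished, and `result.stdout` holds the combined output.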
I wrote a Python script to run a terminal command that belongs to a 3rd party program.
import subprocess

DETACHED_PROCESS = 0x00000008

command = 'my cmd command'
process = subprocess.Popen(
    args=command,
    shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    encoding="utf-8",
    creationflags=DETACHED_PROCESS
)
code = process.wait()
print(process.stdout.readlines())
# Output: []
This script basically runs the command successfully. However, I'd like to print the output, but process.stdout.readlines() returns an empty list.
I need to run the subprocess with creationflags because of the 3rd party program's terminal command.
I've also tried creationflags=subprocess.CREATE_NEW_CONSOLE. It works, but the process takes too long because of the 3rd party program's terminal command.
Is there a way to print the output of the subprocess while using creationflags=0x00000008?
By the way, I could also use subprocess.run etc. to run the command, but I'm wondering if I can fix this.
Thank you for your time!
Edit:
I'm sorry, I forgot to say that I do get output if I use a command such as "dir". However, I can't get any output with a command such as: command = '"program.exe" test'
I'm not sure that this works for your specific case, but I use subprocess.check_output when I need to capture subprocess output.
import subprocess

DETACHED_PROCESS = 0x00000008

command = 'command'
process = subprocess.check_output(
    args=command,
    shell=True,
    stderr=subprocess.STDOUT,
    encoding="utf-8",
    creationflags=DETACHED_PROCESS
)
print(process)
This just returns a string of stdout.
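One caveat worth noting: unlike Popen, check_output raises CalledProcessError when the command exits with a non-zero status, so wrap it if failure is possible. A small sketch, using a deliberately failing shell command:

```python
import subprocess

# check_output raises CalledProcessError on a non-zero exit status;
# the exception carries the exit code and any captured output.
try:
    subprocess.check_output("exit 3", shell=True, encoding="utf-8")
    exit_code = 0
except subprocess.CalledProcessError as e:
    exit_code = e.returncode
print(exit_code)
```

This way the exit code is still available even though the call raised.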
When I run a shell command in Python, it does not show the output until the command is finished. The script that I run takes a few hours to finish, and I'd like to see its progress while it is running.
How can I have Python run it and show the output in real time?
Use the following function to run your code. In this example, I want to run an R script with two arguments. You can replace cmd with any other shell command.
from subprocess import Popen, PIPE

def run(command):
    # text=True yields str lines instead of bytes
    process = Popen(command, stdout=PIPE, shell=True, text=True)
    while True:
        line = process.stdout.readline()
        if not line:  # readline() returns '' only at EOF
            break
        print(line.rstrip())
    process.wait()

cmd = f"Rscript {script_path} {arg1} {arg2}"
run(cmd)
I am facing difficulties calling a command line from my script. I run the script but I don't get any result. Through this command line in my script I want to run a tool which produces a folder with the output files for each line. The inputpath is already defined. Can you please help me?
for line in inputFile:
    cmd = 'python3 CRISPRcasIdentifier.py -f %s/%s.fasta -o %s/%s.csv -st dna -co %s/' % (inputpath, line.strip(), outputfolder, line.strip(), outputfolder)
    os.system(cmd)
You really want to use the Python standard library module subprocess. Using functions from that module, you can construct your command line as a list of strings, where each element is processed as a single file name, option, or value. This bypasses the shell's escaping and eliminates the need to massage your script arguments before calling.
Besides, your code as posted would not work, because the body block of the for statement is not indented. Python would simply not accept this code (perhaps it was pasted into the question without the proper indentation).
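A sketch of what that looks like for the command in the question. The paths here are hypothetical stand-ins for the variables already defined in the asker's script, and the actual run is left commented out since it needs the real tool:

```python
import subprocess

inputpath = "/data/input"      # hypothetical stand-ins for the variables
outputfolder = "/data/output"  # already defined in the asker's script

def build_cmd(name):
    # Each argument is its own list element: no shell quoting is needed,
    # and spaces in file names cannot split the command apart.
    return [
        "python3", "CRISPRcasIdentifier.py",
        "-f", "{0}/{1}.fasta".format(inputpath, name),
        "-o", "{0}/{1}.csv".format(outputfolder, name),
        "-st", "dna",
        "-co", outputfolder + "/",
    ]

# With the real tool installed, the loop becomes:
# for line in inputFile:
#     subprocess.run(build_cmd(line.strip()), check=True)
```

check=True makes subprocess.run raise if the tool exits with an error, so a silent failure like the one described can't go unnoticed.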
As mentioned before, executing commands via os.system(command) is not recommended. Please use subprocess instead (see the subprocess module documentation in the Python docs). See the code here:
import subprocess

for command in input_file:
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)
    # communicate() waits for the child and returns (stdout, stderr)
    p.communicate()
    # --- do the rest
I usually do it like this for a static command:
from subprocess import check_output

def sh(command):
    return check_output(command, shell=True, universal_newlines=True)

output = sh('echo hello world | sed s/h/H/')
BUT THIS IS NOT SAFE! It's vulnerable to shell injection; you should instead do:
from subprocess import check_output
from shlex import split

def sh(command):
    return check_output(split(command), universal_newlines=True)

output = sh('echo hello world')
The difference is subtle but important. shell=True will spawn a new shell, so pipes and other shell features will work. I use this when I have a big command line with pipes that is static, meaning it does not depend on user input. This variant is vulnerable to shell injection: a user can input something like ; rm -rf / and it will run.
The second variant only accepts one command. It will not spawn a shell; instead it runs the command directly. So pipes and other shell features will not work, and it is safer.
universal_newlines=True is for getting the output as str instead of bytes. Use it for text output; if you need binary output, just omit it. The default is False.
So here is the full example:
from subprocess import check_output
from shlex import split

def sh(command):
    return check_output(split(command), universal_newlines=True)

for line in inputFile:
    cmd = 'python3 CRISPRcasIdentifier.py -f %s/%s.fasta -o %s/%s.csv -st dna -co %s/' % (inputpath, line.strip(), outputfolder, line.strip(), outputfolder)
    sh(cmd)
PS: I didn't test this.
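If you do need a pipeline without shell=True, you can wire the pipe by hand: connect one process's stdout to the next process's stdin. A sketch reproducing the earlier `echo hello world | sed s/h/H/` example, assuming POSIX echo and sed are available:

```python
from subprocess import Popen, PIPE
from shlex import split

# The shell pipeline `echo hello world | sed s/h/H/` built by hand:
# p1's stdout feeds p2's stdin, and no shell is involved.
p1 = Popen(split("echo hello world"), stdout=PIPE)
p2 = Popen(split("sed s/h/H/"), stdin=p1.stdout, stdout=PIPE,
           universal_newlines=True)
p1.stdout.close()  # allow p1 to receive SIGPIPE if p2 exits first
output = p2.communicate()[0]
print(output)  # Hello world
```

You get the convenience of pipes without exposing anything to shell injection, since neither command string ever passes through a shell.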
I have a python process running, having a logger object configured to print logs in a log file.
Now, I am trying to call a scala script through this python process, by using subprocess module of Python.
subprocess.Popen(scala_run_command, stdout=subprocess.PIPE, shell=True)
The issue is that whenever the Python process exits, it hangs the shell, which comes back to life only after explicitly running the stty sane command. My guess is that this happens because the Scala script writes to the shell's stdout, and something in that output causes the shell to lose its sanity.
For the same reason, I wanted to capture the output of the Scala script in my default log file, which I have not managed to do despite trying multiple ways.
So the question boils down to: how do I get the stdout of a shell command run through the subprocess module into a log file? Even if there is a better way to achieve this than subprocess, I would love to know the ideas.
The current state of the code looks like this:
__echo_command = 'echo ":load %s"'
__spark_console_command = 'spark;'

def run_scala_script(self, script):
    echo_command = self.__echo_command % script
    spark_console_command = self.__spark_console_command
    echo_result = subprocess.run(echo_command, stdout=subprocess.PIPE, shell=True)
    result = subprocess.run(spark_console_command, stdout=subprocess.PIPE, shell=True, input=echo_result.stdout)
    logger.info('Scala script %s completed successfully' % script)
    logger.info(result.stdout)
Use
p = subprocess.Popen(..., stdout=subprocess.PIPE, stderr=subprocess.PIPE)
followed by
stdout, stderr = p.communicate()
and then stdout and stderr will contain the output bytes from the subprocess's output streams (a stream is only captured if you pass subprocess.PIPE for it). You can then log the stdout value.
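Putting that together with the logger from the question, a minimal sketch; `echo` stands in here for the actual scala_run_command, and the logging config is a placeholder for whatever the existing logger uses:

```python
import logging
import subprocess

logging.basicConfig(level=logging.INFO)  # point this at your log file instead
logger = logging.getLogger(__name__)

# `echo` is a stand-in for the real scala_run_command.
p = subprocess.Popen(["echo", "scala output here"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                     universal_newlines=True)
stdout, stderr = p.communicate()
logger.info("script stdout: %s", stdout.strip())
if stderr:
    logger.warning("script stderr: %s", stderr.strip())
```

Because both streams are piped into the parent and forwarded to the logger, nothing is written to the controlling terminal, which should also avoid the stty sane problem.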
I work on Unix, and I have a "general tool" that launches another process (a GUI utility) in the background and exits.
I call my "general tool" from a Python script, using Popen and the proc.communicate() method.
My "general tool" runs for ~1 second, launches the GUI process in the background, and exits immediately.
The problem is that proc.communicate() keeps waiting on the process even though it has already terminated. I have to manually close the GUI (a subprocess running in the background) before proc.communicate() returns.
How can this be solved?
I need proc.communicate() to return once the main process terminates, and not to wait for the subprocesses running in the background...
Thanks!!!
EDIT:
Adding some code snippets:
The last main lines of my "general tool" (written in Perl):
if ($args->{"gui"}) {
    my $script_abs_path = abs_path($0);
    my $script_dir = dirname($script_abs_path);
    my $gui_util_path = $script_dir . "/bgutil";
    system("$gui_util_path $args->{'work_area'} &");
}
return 0;
My Python script that runs the "General Tool":
cmd = PATH_TO_MY_GENERAL_TOOL
proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stdout, dummy = proc.communicate()
exit_code = proc.returncode
if exit_code != 0:
    print 'The tool has failed with status: {0}. Error message is:\n{1}'.format(exit_code, stdout)
    sys.exit(1)
print 'This line is printed only when the GUI process is terminated...'
Don't use communicate. communicate is explicitly designed to wait until the stdout of the process has closed. Presumably Perl is not closing stdout, as it leaves it open for its own subprocess to write to.
You also don't really need Popen, since you're not using its features: you create pipes and then just reprint to stdout with your own message. And it doesn't look like you need a shell at all.
Try using subprocess.call or even subprocess.check_call.
e.g.
subprocess.check_call(cmd)
No need to check the return value as check_call throws an exception (which contains the exit code) if the process returns with a non-zero exit code. The output of the process is directly written to the controlling terminal -- no need to redirect the output.
Finally, if cmd is a compound of a path to an executable and its arguments then use shlex.split.
e.g.
cmd = "echo whoop" # or cmd = "ls 'does not exist'"
subprocess.check_call(shlex.split(cmd))
Sample code to test with:
mypython.py
import subprocess, shlex
subprocess.check_call(shlex.split("perl myperl.pl"))
print("finishing top level process")
myperl.pl
print "starting perl subprocess\n";
my $cmd = 'python -c "
import time
print(\'starting python subprocess\')
time.sleep(3);
print(\'finishing python subprocess\')
" &';
system($cmd);
print "finishing perl subprocess\n";
Output is:
$ python mypython.py
starting perl subprocess
finishing perl subprocess
finishing top level process
$ starting python subprocess
finishing python subprocess