Hi guys,
I have three commands that copy/paste folders with similar paths. I'm using this code:
from subprocess import Popen, PIPE
cmd_list = [
'cp -r /opt/some_folder_1/ /home/user_name/',
'cp -r /var/some_folder_2/ /home/user_name/',
'cp -r /etc/some_folder_3/ /home/user_name/',
]
copy_paste = Popen(
    cmd_list,
    shell=True,
    stdin=PIPE,
    stdout=PIPE,
    stderr=PIPE
)
stdout, stderr = copy_paste.communicate()
But to copy/paste all three folders this way, I have to run the code three times.
Could you help me with this, guys?
Thank you!
Do:
import subprocess
cmd_list = [
['cp', '-r', '/opt/some_folder_1/', '/home/user_name/'],
['cp', '-r', '/var/some_folder_2/', '/home/user_name/'],
['cp', '-r', '/etc/some_folder_3/', '/home/user_name/'],
]
for cmd in cmd_list:
    res = subprocess.check_output(cmd)
    # res holds the command's stdout as bytes
    # subprocess.CalledProcessError is raised for non-zero return codes
When passing a list into Popen or any of the other subprocess functions, you can't pass multiple commands in the cmd_list. The first item in the list must be the command you are running, and every remaining item is an argument for that one command. This restriction helps keep your code safer, especially when handling user-supplied input.
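To make the safety point concrete, here is a minimal sketch (with a hypothetical attacker-controlled value) of why the list form is safer than a shell string:
import subprocess

user_input = '/tmp/x; rm -rf ~'  # hypothetical malicious value
# In list form the whole string is passed to cp as one literal argument;
# no shell ever interprets the ';', so the extra command can never run.
# cp just fails on the bogus path instead of executing 'rm -rf ~'.
subprocess.check_output(['cp', '-r', user_input, '/home/user_name/'])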
Another option is to join everything together into a single command string with double ampersands. When doing so, if one command fails, the remaining commands won't run.
# Note: cmd_list here is the original list of command strings from the
# question, not the list-of-lists form above.
copy_paste = Popen(
    ' && '.join(cmd_list),
    shell=True,
    stdin=PIPE,
    stdout=PIPE,
    stderr=PIPE
)
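If you take the && route, the shell's exit status is that of the first failing command, so you can still detect failures (a small sketch, reusing the copy_paste object above):
stdout, stderr = copy_paste.communicate()
if copy_paste.returncode != 0:
    # one of the three cp commands failed; stderr says which
    print(stderr.decode())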
Related
I need to gzip files larger than 10 GB using Python on top of shell commands, and hence decided to use subprocess Popen.
Here is my code:
from subprocess import Popen, PIPE

outputdir = '/mnt/json/output/'
inp_cmd = 'gzip -r ' + outputdir
pipe = Popen(["bash"], stdout=PIPE, stdin=PIPE, stderr=PIPE)
cmd = inp_cmd.encode('utf8')
stdout_data, stderr_data = pipe.communicate(input=cmd)
It is not gzip-ing the files within the output directory.
Any way out?
The best way is to use subprocess.call() instead of Popen() with communicate().
call() waits till the command has executed completely, while with Popen() one has to explicitly use the wait() method for the execution to finish.
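A minimal sketch of that suggestion for the gzip case (assuming the same outputdir as in the question):
import subprocess

outputdir = '/mnt/json/output/'
# call() blocks until gzip finishes and returns the exit code (0 on success)
ret = subprocess.call(['gzip', '-r', outputdir])
if ret != 0:
    print('gzip failed with exit code', ret)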
Have you tried it like this:
output_dir = "/mnt/json/output/"
cmd = "gzip -r {}".format(output_dir)
proc = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stdin=subprocess.PIPE,
shell=True,
)
out, err = proc.communicate()
I have created a subprocess object. The subprocess invokes a shell, and I need to send it the shell command shown below. The code I've tried:
from subprocess import Popen, PIPE
p = Popen(["code.exe","25"],stdin=PIPE,stdout=PIPE,stderr=PIPE)
print p.communicate(input='ping 8.8.8.8')
The command doesn't execute; nothing is being input into the shell. Thanks in advance.
If I simulate code.exe to read the arg and then process stdin:
#!/usr/bin/env bash
echo "arg: $1"
echo "stdin:"
while read LINE
do
    echo "$LINE"
done < /dev/stdin
and slightly update your code:
import os
from subprocess import Popen, PIPE

cwd = os.getcwd()
exe = os.path.join(cwd, 'foo.sh')
# universal_newlines=True makes communicate() accept and return str
# rather than bytes on Python 3
p = Popen([exe, '25'], stdin=PIPE, stdout=PIPE, stderr=PIPE,
          universal_newlines=True)
out, err = p.communicate(input='aaa\nbbb\n')
for line in out.split('\n'):
    print(line)
Then the spawned process outputs:
arg: 25
stdin:
aaa
bbb
If input is changed without a \n though:
out, err = p.communicate(input='aaa')
Then it doesn't appear:
arg: 25
stdin:
Process finished with exit code 0
So you might want to look closely at the protocol between both ends of the pipe. For example, this might be enough:
input='ping 8.8.8.8\n'
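Applied to the original question, a sketch of what might work (assuming code.exe reads newline-terminated commands from stdin):
from subprocess import Popen, PIPE

p = Popen(["code.exe", "25"], stdin=PIPE, stdout=PIPE, stderr=PIPE,
          universal_newlines=True)
# the trailing newline tells the child the line is complete
out, err = p.communicate(input='ping 8.8.8.8\n')
print(out)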
Hope that helps.
I am trying to use Popen to kick off a subprocess that calls two commands (with multiple arguments) one after the other. The second command relies on the first command having run, so I was hoping to use a single subprocess to run both rather than spawning two processes and waiting on the first.
But I am running into issues because I am not sure how to give two command inputs or how to separate the commands as one single object.
Also, I am trying to avoid setting shell=True if possible.
This is essentially what I am trying to do:
for test in resources:
    command = [
        'pgh',
        'resource',
        'create',
        '--name', test['name'],
        '--description', test['description'],
    ]
    command2 = [
        'pgh',
        'assignment',
        'create',
        '--name', test['name'],
        '--user', test['user'],
    ]
    p = Popen(command, stdout=PIPE, stderr=PIPE)
    stdout, stderr = p.communicate()
    print(stdout)
    print(stderr)
As per my understanding, the following should work for you.
To pipe the first command's output into the second, use:
p1 = subprocess.Popen(command, stdout=subprocess.PIPE)
p2 = subprocess.Popen(command2, stdin=p1.stdout, stdout=subprocess.PIPE)
print(p2.communicate())
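One caveat with this piped form: it helps to close the parent's copy of p1.stdout so p2 sees EOF when p1 exits (a sketch, following the same pattern as a later answer on this page):
p1 = subprocess.Popen(command, stdout=subprocess.PIPE)
p2 = subprocess.Popen(command2, stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # p2's stdin is now the only open handle on p1's stdout
print(p2.communicate())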
You will have to launch the first command and wait for completion before launching the next command. You should do this repeatedly for each command.
This can be done as
ps = [Popen(c, stdout=PIPE, stderr=PIPE).communicate()
      for c in (command, command2)]
Note that this launches the next command irrespective of whether the first command succeeded or failed. If you want to launch the next command only if the previous one succeeds, then use
def check_execute(commands):
    return_code = 0
    for c in commands:
        p = Popen(c, stdout=PIPE, stderr=PIPE)
        result = p.communicate()
        yield result
        return_code = p.returncode
        if return_code != 0:
            break
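A hypothetical usage sketch of that generator, assuming the command and command2 lists from the question:
for stdout, stderr in check_execute([command, command2]):
    print(stdout)
    print(stderr)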
I am trying to call an executable called foo, and pass it some command line arguments. An external script calls into the executable and uses the following command:
./main/foo --config config_file 2>&1 | /usr/bin/tee temp.log
The script uses Popen to execute this command as follows:
from subprocess import Popen
from subprocess import PIPE
def run_command(command, returnObject=False):
    cmd = command.split(' ')
    print('%s' % cmd)
    p = None
    print('command : %s' % command)
    if returnObject:
        p = Popen(cmd)
    else:
        p = Popen(cmd)
        p.communicate()
        print('returncode: %s' % p.returncode)
        return p.returncode
    return p

command = "./main/foo --config config_file 2>&1 | /usr/bin/tee temp.log"
run_command(command)
However, this passes extra arguments ['2>&1', '|', '/usr/bin/tee', 'temp.log'] to the foo executable.
How can I get rid of these extra arguments getting passed to foo while maintaining the functionality?
I have tried shell=True but have read about avoiding it for security purposes (shell injection attacks). Looking for a neat solution.
Thanks
UPDATE:
- Updated the file following the tee command
The string
./main/foo --config config_file 2>&1 | /usr/bin/tee >temp.log
...is full of shell constructs. These have no meaning to anything without a shell in play. Thus, you have two options:
1. Set shell=True
2. Replace them with native Python code.
For instance, 2>&1 is the same thing as passing stderr=subprocess.STDOUT to Popen, and your tee -- since its output is redirected and it's passed no arguments -- could just be replaced with stdout=open('temp.log', 'w').
Thus:
p = subprocess.Popen(['./main/foo', '--config', 'config_file'],
                     stderr=subprocess.STDOUT,
                     stdout=open('temp.log', 'w'))
...or, if you really did want the tee command, but were just using it incorrectly (that is, if you wanted tee temp.log, not tee >temp.log):
p1 = subprocess.Popen(['./main/foo', '--config', 'config_file'],
                      stderr=subprocess.STDOUT,
                      stdout=subprocess.PIPE)
p2 = subprocess.Popen(['tee', 'temp.log'], stdin=p1.stdout)
p1.stdout.close()  # drop our own handle so p2's stdin is the only handle on p1.stdout
stdout, _ = p2.communicate()
Wrapping this in a function, and checking success for both ends might look like:
def run():
    p1 = subprocess.Popen(['./main/foo', '--config', 'config_file'],
                          stderr=subprocess.STDOUT,
                          stdout=subprocess.PIPE)
    p2 = subprocess.Popen(['tee', 'temp.log'], stdin=p1.stdout)
    p1.stdout.close()  # drop our own handle so p2's stdin is the only handle on p1.stdout
    # True if both processes were successful, False otherwise
    return p2.wait() == 0 and p1.wait() == 0
By the way -- if you want to use shell=True and return the exit status of foo, rather than tee, things get a bit more interesting. Consider the following:
p = subprocess.Popen(['bash', '-c', 'set -o pipefail; ' + command_str])
...the pipefail bash extension will force the shell to exit with the status of the first pipeline component to fail (and 0 if no components fail), rather than using only the exit status of the final component.
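For example, a sketch with a concrete command_str (reusing the pipeline from the question) and an exit-status check:
import subprocess

command_str = './main/foo --config config_file 2>&1 | /usr/bin/tee temp.log'
p = subprocess.Popen(['bash', '-c', 'set -o pipefail; ' + command_str])
# non-zero if foo OR tee failed, thanks to pipefail
print(p.wait())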
Here are a couple of "neat" code examples in addition to the explanation from @Charles Duffy's answer.
To run the shell command in Python:
#!/usr/bin/env python
from subprocess import check_call
check_call("./main/foo --config config_file 2>&1 | /usr/bin/tee temp.log",
shell=True)
without the shell:
#!/usr/bin/env python
from subprocess import Popen, PIPE, STDOUT
tee = Popen(["/usr/bin/tee", "temp.log"], stdin=PIPE)
foo = Popen("./main/foo --config config_file".split(),
stdout=tee.stdin, stderr=STDOUT)
pipestatus = [foo.wait(), tee.wait()]
Note: don't use "command arg".split() with non-literal strings.
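If the command string isn't a literal, shlex.split() is the safer tokenizer because it understands shell-style quoting (a small sketch with a made-up argument):
import shlex

# str.split(' ') would wrongly break the quoted path into two arguments
cmd = shlex.split("./main/foo --config 'config file with spaces'")
print(cmd)  # ['./main/foo', '--config', 'config file with spaces']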
See How do I use subprocess.Popen to connect multiple processes by pipes?
You may combine the answers to two StackOverflow questions:
1. piping together several subprocesses (the x | y problem)
2. Merging a Python script's subprocess' stdout and stderr (while keeping them distinguishable) (the 2>&1 problem)
I have been trying to get something like this to work for a while. The code below doesn't seem to send the correct arg to the C program arg_count, which outputs argc = 1 when I'm pretty sure it should be 2; running ./arg_count -arg from the shell outputs 2.
I have tried with another arg (so it would output 3 in the shell) and it still outputs 1 when calling via subprocess.
import subprocess
pipe = subprocess.Popen(["./args/Release/arg_count", "-arg"], shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = pipe.communicate()
result = out.decode()
print "Result : ",result
print "Error : ",err
Any idea where I'm falling over? I'm running Linux btw.
From the documentation:
The shell argument (which defaults to False) specifies whether to use
the shell as the program to execute. If shell is True, it is
recommended to pass args as a string rather than as a sequence.
Thus,
pipe = subprocess.Popen("./args/Release/arg_count -arg", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
should give you what you want.
If shell=True then your call is equivalent to:
from subprocess import Popen, PIPE
proc = Popen(['/bin/sh', '-c', "./args/Release/arg_count", "-arg"],
             stdout=PIPE, stderr=PIPE)
i.e., -arg is passed to the shell itself and not your program. Drop shell=True to pass -arg to the program:
proc = Popen(["./args/Release/arg_count", "-arg"],
stdout=PIPE, stderr=PIPE)
If you don't need to capture stderr separately from stdout then you could use check_output():
from subprocess import check_output, STDOUT
output = check_output(["./args/Release/arg_count", "-arg"])  # or
output_and_errors = check_output(["./args/Release/arg_count", "-arg"],
                                 stderr=STDOUT)