Running shell command from python script with \n - python

I am trying to run the shell command
echo -e 'FROM busybox\nRUN echo "hello world"' | docker build -t myimage:latest -
from a Jupyter notebook using the subprocess module.
I have tried the code
p = subprocess.Popen('''echo -e 'FROM busybox\nRUN echo "hello world"' | docker build -t myimage:latest - ''', shell=True)
p.communicate()
and some variations with run() or call(), but every time the output is
-e 'FROM busybox
It seems that the newline character \n is causing the problem. Any ideas how to solve it?

The \n gets parsed by Python into a literal newline. You can avoid that by using a raw string instead,
p = subprocess.run(
    r'''echo -e 'FROM busybox\nRUN echo "hello world"' | docker build -t myimage:latest - ''',
    shell=True, check=True)
but I would recommend running a single process and passing the input from Python; this also avoids a shell, which is generally desirable.
p = subprocess.run(['docker', 'build', '-t', 'myimage:latest', '-'],
    input='FROM busybox\nRUN echo "hello world"',
    text=True, check=True)
Notice also how we prefer subprocess.run() over the more primitive subprocess.Popen(); as suggested in the documentation, you want to avoid this low-level function whenever you can. With check=True we also take care to propagate any subprocess errors up to the Python parent process.
As an aside, printf is both more versatile and more portable than echo -e; I would generally recommend you to avoid echo -e altogether.
This ideone demo with nl instead of docker build demonstrates the variations, and coincidentally proves why you want to avoid echo -e even if your login shell is e.g. Bash (in which case you'd think it should be supported; but subprocess doesn't use your login shell).
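For illustration, here is a minimal sketch of the printf variant of the same pipeline (an addition of mine, assuming docker is on the PATH; POSIX printf interprets \n in its format string, so it behaves the same under plain /bin/sh):
import subprocess

# Sketch only: same pipeline, but with printf instead of echo -e
subprocess.run(
    r'''printf 'FROM busybox\nRUN echo "hello world"\n' | docker build -t myimage:latest -''',
    shell=True, check=True)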

Related

Python subprocess.run ignores --exclude clause

I have one issue with subprocess.run.
This command in a Bash shell works without any problem:
tar -C '/home/' --exclude={'/home/user1/.cache','/home/user1/.config'} -caf '/transito/user1.tar' '/home/user1' > /dev/null 2>&1
But if I execute it through Python:
cmd = "tar -C '/home/' --exclude={'/home/user1/.cache','/home/user1/.config'} -caf '/transito/user1.tar' '/home/user1' > /dev/null 2>&1"
subprocess.run(cmd, shell=True, stdout=subprocess.PIPE)
The execution completes without errors, but the --exclude clause is ignored.
Why?
Whether or not curly-brace expansion is handled depends on what the standard system shell is. With shell=True, subprocess.run() invokes /bin/sh by default. On some Linux systems, /bin/sh is bash; on others (for example Debian and Ubuntu, where it is dash) and on systems such as FreeBSD, it's a different shell that doesn't support brace expansion.
To ensure the subprocess runs with a shell that can handle braces properly, you can tell subprocess.run() what shell to use with the executable argument:
subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, executable='/bin/bash')
As a simple example of this, here's a system where /bin/sh is bash:
>>> subprocess.run("echo foo={a,b}", shell=True)
foo=a foo=b
and one where it's not:
>>> subprocess.run("echo foo={a,b}", shell=True)
foo={a,b}
but specifying another shell works:
>>> subprocess.run("echo foo={a,b}", shell=True, executable='/usr/pkg/bin/bash')
foo=a foo=b
Bash curly-brace expansion doesn't happen inside Python; the braces will be sent by subprocess as-is and will not be expanded, regardless of the arguments you use on run().
Edit: unless of course you pass executable='/bin/bash', as stated in the other answer, which does work after all.
In a bash shell,
--exclude={'/home/user1/.cache','/home/user1/.config'}
becomes:
--exclude=/home/user1/.cache --exclude=/home/user1/.config
So to achieve the same result, in Python it must be expressed like this (one of the possible ways) before sending the command string to subprocess.run:
' '.join(["--exclude=" + path for path in ['/home/user1/.cache','/home/user1/.config']])
cmd = "tar -C '/home/' " + ' '.join(["--exclude=" + path for path in ['/home/user1/.cache','/home/user1/.config']]) + " -caf '/transito/user1.tar' '/home/user1' > /dev/null 2>&1"
print(cmd) # output: "tar -C '/home/' --exclude=/home/user1/.cache --exclude=/home/user1/.config -caf '/transito/user1.tar' '/home/user1' > /dev/null 2>&1"
subprocess.run(cmd, shell=True, stdout=subprocess.PIPE)
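As an additional illustration (not part of the original answer), you can sidestep brace expansion entirely by building the argument list in Python and skipping the shell; a sketch assuming the same paths as above:
import subprocess

# Build the tar argv directly; the shell redirection to /dev/null becomes DEVNULL
excludes = ['/home/user1/.cache', '/home/user1/.config']
cmd = (['tar', '-C', '/home/']
       + ['--exclude=' + path for path in excludes]
       + ['-caf', '/transito/user1.tar', '/home/user1'])
subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)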

for loop in `Subprocess.run` results in `Syntax error: "do" unexpected`

I'm trying to run a for loop in a shell through python. os.popen runs it fine, but is deprecated on 3.x and I want the stderr. Following the highest-voted answer on How to use for loop in Subprocess.run command results in Syntax error: "do" unexpected, with which shellcheck concurs:
import subprocess
proc = subprocess.run(
    "bash for i in {1..3}; do echo ${i}; done",
    shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE, )
print(proc.stderr)
I'm ultimately trying to reset all usbs by calling this shell code https://unix.stackexchange.com/a/611305/362437 through python, so any alternate approaches to doing that would be appreciated too.
When you do
subprocess.run('foo', shell=True)
it actually runs the equivalent of
/bin/sh -c 'foo'
(except that it magically gets all quotes right :-) ). So, in your case, it executes
/bin/sh -c "bash for i in {1..3}; do echo ${i}; done"
So the "command" given with the -c switch is actually a list of three commands: bash for i in {1..3}, do echo ${i}, and done. This is going to leave you with a very confused shell.
The easiest way of fixing this is probably to remove that bash from the beginning of the string. That way, the command passed to /bin/sh makes some sense.
If you want to run bash explicitly, you're probably better off using shell=False and using a list for the first argument to preserve your quoting sanity. Something like
import subprocess
proc = subprocess.run(
    ['/bin/bash', '-c', 'for i in {1..3}; do echo ${i}; done'],
    shell=False,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE, )
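As a usage note (my addition, not from the original answer), the captured streams come back as bytes and can be inspected afterwards:
print(proc.stdout.decode())   # expected: "1\n2\n3\n"
print(proc.stderr.decode())   # empty if the loop ran without errors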

Executing a local shell function on a remote host over ssh using Python

My .profile defines a function
myps () {
    ps -aef|egrep "a|b"|egrep -v "c\-"
}
I'd like to execute it from my python script
import subprocess
subprocess.call("ssh user#box \"$(typeset -f); myps\"", shell=True)
Getting an error back
bash: -c: line 0: syntax error near unexpected token `;'
bash: -c: line 0: `; myps'
Escaping ; results in
bash: ;: command not found
script='''
. ~/.profile # load local function definitions so typeset -f can emit them
ssh user@box ksh -s <<EOF
$(typeset -f)
myps
EOF
'''
import subprocess
subprocess.call(['ksh', '-c', script]) # no shell=True
There are a few pertinent items here:
The dotfile defining this function needs to be locally invoked before you run typeset -f to dump the function's definition over the wire. By default, a noninteractive shell does not run the majority of dotfiles (any specified by the ENV environment variable is an exception).
In the given example, this is served by the . ~/.profile command within the script.
The shell needs to be one supporting typeset, so it has to be bash or ksh, not sh (as used with shell=True by default), which may be provided by ash or dash and lack this feature.
In the given example, this is served by passing ['ksh', '-c'] as the first two arguments of the argv array.
typeset needs to be run locally, so it can't be in an argv position other than the first with shell=True. (To provide an example: subprocess.Popen(['''printf '%s\n' "$#"''', 'This is just literal data!', '$(touch /tmp/this-is-not-executed)'], shell=True) evaluates only printf '%s\n' "$#" as a shell script; This is just literal data! and $(touch /tmp/this-is-not-executed) are passed as literal data, so no file named /tmp/this-is-not-executed is created).
In the given example, this is mooted by not using shell=True.
Explicitly invoking ksh -s (or bash -s, as appropriate) ensures that the shell evaluating your function definitions matches the shell you wrote those functions against, rather than passing them to sh -c, as would happen otherwise.
In the given example, this is served by ssh user@box ksh -s inside the script.
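For reference, the same idea can be driven entirely from Python without the outer ksh -c wrapper; this is a sketch under the assumptions that ksh is installed locally and remotely and that ~/.profile defines myps (user@box is the example host from the question):
import subprocess

# Capture the local function definitions with a local ksh...
defs = subprocess.run(
    ['ksh', '-c', '. ~/.profile; typeset -f'],
    stdout=subprocess.PIPE, universal_newlines=True, check=True).stdout

# ...then feed them, followed by the call itself, to a remote ksh -s over ssh
subprocess.run(['ssh', 'user@box', 'ksh', '-s'],
               input=defs + '\nmyps\n', universal_newlines=True, check=True)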
I ended up using this.
import subprocess
import sys
import re
HOST = "user#" + box
COMMAND = 'my long command with many many flags in single quotes'
ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
                       shell=False,
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
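A small follow-up sketch (my addition): since stderr is also piped, it is worth draining it too, so errors from the remote command don't go unnoticed:
error = ssh.stderr.readlines()
if error:
    print("ERROR:", error)
else:
    print(result)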
The original command was not interpreting the ; before myps properly. Using sh -c fixes that, but... (please see Charles Duffy's comments below).
Using a combination of single/double quotes sometimes makes the syntax easier to read and less prone to mistakes. With that in mind, a safe way to run the command (provided the functions in .profile are actually accessible in the shell started by the subprocess.Popen object):
subprocess.call('ssh user@box "$(typeset -f); myps"', shell=True)
An alternative (less safe) method would be to use sh -c for the subshell command:
subprocess.call('ssh user@box "sh -c $(echo typeset -f); myps"', shell=True)
# myps is treated as a command
This seemingly returned the same result:
subprocess.call('ssh user@box "sh -c typeset -f; myps"', shell=True)
There are definitely alternative methods for accomplishing these types of tasks; however, this might give you an idea of what the issue was with the original command.

using os.system for multiple line commands

I am trying to run shell code from a python file to submit another python file to a computing cluster. The shell code is as follows:
#BSUB -J Proc[1]
#BSUB -e ~/logs/proc.%I.%J.err
#BSUB -o ~/logs/proc.%I.%J.out
#BSUB -R "span[hosts=1]"
#BSUB -n 1
python main.py
But when I run it from Python like the following, I can't get it to work:
from os import system
system('bsub -n 1 < #BSUB -J Proc[1];#BSUB -e ~/logs/proc.%I.%J.err;#BSUB -o ~/logs/proc.%I.%J.out;#BSUB -R "span[hosts=1]";#BSUB -n 1;python main.py')
Is there something I'm doing wrong here?
If I understand correctly, all the #BSUB stuff is text that should be fed to the bsub command as input; bsub is run locally, then runs those commands for you on the compute node.
In that case, you can't just do:
bsub -n 1 < #BSUB -J Proc[1];#BSUB -e ~/logs/proc.%I.%J.err;#BSUB -o ~/logs/proc.%I.%J.out;#BSUB -R "span[hosts=1]";#BSUB -n 1;python main.py
That's interpreted by the shell as "run bsub -n 1 and read from a file named OH CRAP A COMMENT STARTED AND NOW WE DON'T HAVE A FILE TO READ!"
You could fix this with MOAR HACKERY (using echo or here strings taking further unnecessary dependencies on shell execution). But if you want to feed stdin input, the best solution is to use a more powerful tool for the task, the subprocess module:
import subprocess

# Open a process (no shell wrapper) that we can feed stdin to
proc = subprocess.Popen(['bsub', '-n', '1'], stdin=subprocess.PIPE)
# Feed the command series you needed to stdin, then wait for process to complete
# Per Michael Closson, can't use semi-colons, bsub requires newlines
proc.communicate(b'''#BSUB -J Proc[1]
#BSUB -e ~/logs/proc.%I.%J.err
#BSUB -o ~/logs/proc.%I.%J.out
#BSUB -R "span[hosts=1]"
#BSUB -n 1
python main.py
''')
# Assuming the exit code is meaningful, check it here
if proc.returncode != 0:
    pass  # Handle a failed process launch here
This avoids a shell launch entirely (removing the issue with needing to deal with comment characters at all, along with all the other issues with handling shell metacharacters), and is significantly more explicit about what is being run locally (bsub -n 1) and what commands are being run in the bsub session (the stdin).
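On Python 3.5 or newer, the same thing can be written a bit more compactly with subprocess.run, which feeds stdin and checks the exit status in a single call; a sketch under that assumption:
import subprocess

subprocess.run(
    ['bsub', '-n', '1'],
    input=b'''#BSUB -J Proc[1]
#BSUB -e ~/logs/proc.%I.%J.err
#BSUB -o ~/logs/proc.%I.%J.out
#BSUB -R "span[hosts=1]"
#BSUB -n 1
python main.py
''',
    check=True)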
The #BSUB directives are parsed by the bsub binary, which doesn't support ; as a delimiter. You need to use newlines. This worked for me.
#!/usr/bin/python
import subprocess
# Open a process (no shell wrapper) that we can feed stdin to
proc = subprocess.Popen(['bsub', '-n', '1'], stdin=subprocess.PIPE)
# Feed the command series you needed to stdin, then wait for process to complete
input="""#!/bin/sh
#BSUB -J mysleep
sleep 101
"""
proc.communicate(input)
*** So obviously I got the Python code from @ShadowRanger. +1 his answer. I would have posted this as a comment to his answer if SO supported Python code in a comment.

how to integrate "at" command in python

I need to integrate " echo /bin/meteo | at 23:00 today " into a Python script.
In the Python script, the command at 23:00 today should call the bash script /bin/meteo.
I installed plumbum and integrated this into my Python script:
from plumbum.cmd import echo, grep
Unfortunately I have no clue how to proceed from here.
I tried:
#!/usr/bin/python2.7
if pfd.input_pins[0].value == 0:
    cmd = "echo /bin/meteo | at 06:36 today"
    subprocess.Popen(cmd, shell=True)
but the lights in /bin/meteo are randomly switching on and off (not blinking as they should).
They do it from 06:36 until 06:37, and not only 5 times.
/bin/meteo:
#!/bin/bash -x
for i in {1..5}; do #blink 5x
echo -n -e "\x37\x00\x55" | nc -u -q 1 192.168.0.6 8899 #Zone 3 on
sleep 0.1
echo -n -e "\x3A\x00\x55" | nc -u -q 1 192.168.0.6 8899 #Zone 3 off
done
sleep 0.1
exit
subprocess.Popen will run the command:
import subprocess
cmd = "echo /bin/meteo | at 23:00 today "
subprocess.Popen(cmd, shell=True)
Execute a child program in a new process. On Unix, the class uses os.execvp()-like behavior to execute the child program. On Windows, the class uses the Windows CreateProcess() function. The arguments to Popen are as follows.
args should be a sequence of program arguments or else a single string. By default, the program to execute is the first item in args if args is a sequence. If args is a string, the interpretation is platform-dependent and described below. See the shell and executable arguments for additional differences from the default behavior. Unless otherwise stated, it is recommended to pass args as a sequence.
It is not totally clear what you want, but you can run any command like:
In [9]: cmd = "date"
In [10]: subprocess.call(cmd, shell=True)
Sun Jul 6 22:30:47 IST 2014
Or using sudo:
import subprocess
cmd = "sudo which python"
my_pass="xxxx"
subprocess.call('echo {} | sudo -S {}'.format(my_pass,cmd), shell=True)
In [29]: subprocess.call('echo {} | sudo -S {}'.format(my_pass,cmd), shell=True)
/usr/local/bin/python
Out[29]: 0
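A slightly safer variation on the sudo example (my sketch, still not something to rely on in real deployments) feeds the password via input= instead of interpolating it into a shell string:
import subprocess

my_pass = "xxxx"  # placeholder password, as in the example above
subprocess.run(['sudo', '-S', 'which', 'python'],
               input=my_pass + '\n', universal_newlines=True)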
With Python 3.4, it's easy to call a command and exchange input/output in bulk:
subprocess.check_output(["at", "23:00", "today"], input="/bin/meteo", universal_newlines=True)
Therefore in this very case, shell=True shouldn't be needed as we just call the at command with arguments and give it the script on input.
With older versions of Python, this needs to be rewritten as:
process = subprocess.Popen(["at", "23:00", "today"], stdin=subprocess.PIPE)
process.communicate(input="/bin/meteo")
With the plumbum module, you could instead use:
from plumbum.cmd import at, echo
(echo["/bin/meteo"] | at["23:00", "today"])()
But I don't believe that it's very useful.
