python subprocess: remote find -exec with a filename containing a space

So I have a remote Linux server and would like to run a Python script on my local machine that lists all files and their modification dates in a specific folder on that remote server. This is my code so far:
command = "find \""+to_directory+'''\"* -type f -exec sh -c \"stat -c \'%y:%n\' \'{}\'\" \;'''
scp_process_ = subprocess.run("ssh "+to_user+"@"+to_host+" '"+command+"' ", shell=True, capture_output=False, text=True)
Now running the command
find "/shares/Public/Datensicherung/"* -type f -exec sh -c "stat -c '%y:%n' '{}'" \;
on the server itself works fine without any error.
But as soon as I use a subprocess to run it remotely over ssh, it has a problem with "/shares/Public/Datensicherung/New folder/hi.txt", a file in a folder with a space in its name:
stat: can't stat '/shares/Public/Datensicherung/New': No such file or directory
stat: can't stat 'folder/hi.txt': No such file or directory
I know it is messed up, but that is the best solution I could build.
I would like to stick with subprocess and ssh but if you have a better solution feel free to post it.

With shell=True you are invoking three shell instances, each of which requires a layer of quoting. This is possible to do, of course, but there are many reasons to avoid it if at all possible.
First off, you can easily avoid the local shell=True and this actually improves the robustness and clarity of your Python code.
command = "find \""+to_directory+'''\"* -type f -exec sh -c \"stat -c \'%y:%n\' \'{}\'\" \;'''
scp_process_ = subprocess.run(
    ["ssh", to_user + "@" + to_host, command],
    capture_output=False, text=True)
Secondly, stat can easily accept multiple arguments, so you can take out the sh -c '...' too.
command = 'find "' + to_directory + '" -type f -exec stat -c "%y:%n" {} +'
This version also uses + in place of \;, so stat is invoked once with many file names rather than once per file (which made the sh -c '' wrapper doubly unnecessary).
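Putting both changes together, a minimal sketch (the values of to_user, to_host, and to_directory are hypothetical stand-ins for the asker's variables):
import subprocess

to_user = "user"                                 # hypothetical values for illustration
to_host = "example.com"
to_directory = "/shares/Public/Datensicherung/"

# One remote command string; the only quoting left is around the directory name.
# find hands each path to stat directly, so spaces in filenames survive.
command = 'find "' + to_directory + '" -type f -exec stat -c "%y:%n" {} +'
result = subprocess.run(
    ["ssh", to_user + "@" + to_host, command],
    capture_output=True, text=True)
print(result.stdout)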

Sometimes the issue happens because of a malformed command string. The shlex module was created precisely for communicating with a Unix shell: you build your command with shlex and then pass it into subprocess.run.
I don't see the actual final command you call, but you could also split a string into a proper argument list yourself with shlex.split.
From your example it would be something like:
import subprocess
from shlex import join

cmd = join(['ssh',
            f'{to_user}@{to_host}',
            'find',
            f'{to_directory}*',
            '-type',
            'f',
            '-exec',
            'sh',
            '-c',
            "stat -c '%y:%n' '{}'",
            ';'])
scp_process_ = subprocess.run(cmd, shell=True, capture_output=False, text=True)
You may also want to experiment with the shell=True option.
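If you build the remote command as a string instead, shlex.quote is the usual tool for protecting a single argument such as the directory name against the remote shell; a minimal sketch, assuming the same variables as in the question:
import shlex
import subprocess

# Quote the directory once for the remote shell. find passes each result to
# stat directly, so filenames with spaces are not split again.
remote_cmd = 'find ' + shlex.quote(to_directory) + ' -type f -exec stat -c "%y:%n" {} +'
subprocess.run(['ssh', f'{to_user}@{to_host}', remote_cmd])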

Related

Python subprocess.run ignores --exclude clause

I have one issue with subprocess.run.
This command in a Bash shell works without any problem:
tar -C '/home/' --exclude={'/home/user1/.cache','/home/user1/.config'} -caf '/transito/user1.tar' '/home/user1' > /dev/null 2>&1
But if I execute it through Python:
cmd = "tar -C '/home/' --exclude={'/home/user1/.cache','/home/user1/.config'} -caf '/transito/user1.tar' '/home/user1' > /dev/null 2>&1"
subprocess.run(cmd, shell=True, stdout=subprocess.PIPE)
The execution works without errors but the --exclude clause is not considered.
Why?
Whether or not curly-brace expansion is handled correctly depends on what the standard system shell is. By default, subprocess.run() invokes /bin/sh. On some Linux systems, /bin/sh is bash. On others, such as FreeBSD (or Debian-based systems, where it is dash), it's a different shell that doesn't support brace expansion.
To ensure the subprocess runs with a shell that can handle braces properly, you can tell subprocess.run() what shell to use with the executable argument:
subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, executable='/bin/bash')
As a simple example of this, here's a system where /bin/sh is bash:
>>> subprocess.run("echo foo={a,b}", shell=True)
foo=a foo=b
and one where it's not:
>>> subprocess.run("echo foo={a,b}", shell=True)
foo={a,b}
but specifying another shell works:
>>> subprocess.run("echo foo={a,b}", shell=True, executable='/usr/pkg/bin/bash')
foo=a foo=b
Bash curly-brace expansion doesn't happen inside Python; the braces will be sent by subprocess as they are and will not be expanded, regardless of the arguments you use with run().
Edit: unless of course you pass executable='/bin/bash' as stated in the other answer, which does seem to work after all.
In a bash shell,
--exclude={'/home/user1/.cache','/home/user1/.config'}
becomes:
--exclude=/home/user1/.cache --exclude=/home/user1/.config
So to achieve the same result in Python, it must be expressed like this (one possible way) before sending the command string to subprocess.run:
' '.join(["--exclude=" + path for path in ['/home/user1/.cache','/home/user1/.config']])
cmd = "tar -C '/home/' " + ' '.join(["--exclude=" + path for path in ['/home/user1/.cache','/home/user1/.config']]) + " -caf '/transito/user1.tar' '/home/user1' > /dev/null 2>&1"
print(cmd) # output: "tar -C '/home/' --exclude=/home/user1/.cache --exclude=/home/user1/.config -caf '/transito/user1.tar' '/home/user1' > /dev/null 2>&1"
subprocess.run(cmd, shell=True, stdout=subprocess.PIPE)
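As an aside, since the exclude list is being expanded in Python anyway, you could skip the shell entirely and pass an argument list; a sketch, with both streams discarded to mirror the original > /dev/null 2>&1:
import subprocess

excludes = ['/home/user1/.cache', '/home/user1/.config']
cmd = (['tar', '-C', '/home/']
       + ['--exclude=' + path for path in excludes]
       + ['-caf', '/transito/user1.tar', '/home/user1'])
subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)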

for loop in `Subprocess.run` results in `Syntax error: "do" unexpected`

I'm trying to run a for loop in a shell through python. os.popen runs it fine, but is deprecated on 3.x and I want the stderr. Following the highest-voted answer on How to use for loop in Subprocess.run command results in Syntax error: "do" unexpected, with which shellcheck concurs:
import subprocess
proc = subprocess.run(
    "bash for i in {1..3}; do echo ${i}; done",
    shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
print(proc.stderr)
I'm ultimately trying to reset all USB devices by calling this shell code https://unix.stackexchange.com/a/611305/362437 through Python, so any alternate approaches to doing that would be appreciated too.
When you do
subprocess.run('foo', shell=True)
it actually runs the equivalent of
/bin/sh -c 'foo'
(except that it magically gets all quotes right :-) ). So, in your case, it executes
/bin/sh -c "bash for i in {1..3}; do echo ${i}; done"
So the "command" given with the -c switch is actually a list of three commands: bash for i in {1..3}, do echo ${i}, and done. This is going to leave you with a very confused shell.
The easiest way of fixing this is probably to remove that bash from the beginning of the string. That way, the command passed to /bin/sh makes some sense.
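A minimal sketch of that fix; note that {1..3} itself is bash-only, so a POSIX-safe loop is used here in case /bin/sh is dash or ash:
import subprocess

proc = subprocess.run(
    "for i in 1 2 3; do echo ${i}; done",  # avoids the bash-only {1..3}
    shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)
print(proc.stdout.decode())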
If you want to run bash explicitly, you're probably better off using shell=False and using a list for the first argument to preserve your quoting sanity. Something like
import subprocess
proc = subprocess.run(
    ['/bin/bash', '-c', 'for i in {1..3}; do echo ${i}; done'],
    shell=False,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
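For the USB-reset script linked in the question, the same list form applies; a sketch with a hypothetical path to wherever you saved that script:
proc = subprocess.run(
    ['/bin/bash', '/path/to/usb_reset.sh'],  # hypothetical script location
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)
print(proc.stderr.decode())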

Running shell command from python script with \n

I am trying to run the shell command
echo -e 'FROM busybox\nRUN echo "hello world"' | docker build -t myimage:latest -
from a Jupyter notebook using subprocess.
I have tried the code
p = subprocess.Popen('''echo -e 'FROM busybox\nRUN echo "hello world"' | docker build -t myimage:latest - ''', shell=True)
p.communicate()
and some iterations with run() or call(), but every time the output is
-e 'FROM busybox
It seems that the newline character \n causes the problem. Any ideas how to solve this?
The \n gets parsed by Python into a literal newline. You can avoid that by using a raw string instead,
p = subprocess.run(
    r'''echo -e 'FROM busybox\nRUN echo "hello world"' | docker build -t myimage:latest - ''',
    shell=True, check=True)
but I would recommend running a single process and passing in the input from Python; this also avoids a shell, which is generally desirable.
p = subprocess.run(['docker', 'build', '-t', 'myimage:latest', '-'],
                   input='FROM busybox\nRUN echo "hello world"',
                   text=True, check=True)
Notice also how we prefer subprocess.run() over the more primitive subprocess.Popen(); as suggested in the documentation, you want to avoid this low-level function whenever you can. With check=True we also take care to propagate any subprocess errors up to the Python parent process.
As an aside, printf is both more versatile and more portable than echo -e; I would generally recommend you to avoid echo -e altogether.
This ideone demo with nl instead of docker build demonstrates the variations, and coincidentally proves why you want to avoid echo -e even if your login shell is e.g. Bash (in which case you'd think it should be supported; but subprocess doesn't use your login shell).
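For reference, a sketch of the printf variant; the backslashes are doubled in the Python string so that printf, not Python, interprets the \n:
p = subprocess.run(
    '''printf 'FROM busybox\\nRUN echo "hello world"\\n' | docker build -t myimage:latest -''',
    shell=True, check=True)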

Executing a local shell function on a remote host over ssh using Python

My .profile defines a function
myps () {
    ps -aef | egrep "a|b" | egrep -v "c\-"
}
I'd like to execute it from my python script
import subprocess
subprocess.call("ssh user#box \"$(typeset -f); myps\"", shell=True)
Getting an error back
bash: -c: line 0: syntax error near unexpected token `;'
bash: -c: line 0: `; myps'
Escaping ; results in
bash: ;: command not found
script = '''
. ~/.profile  # load local function definitions so typeset -f can emit them

ssh user@box ksh -s <<EOF
$(typeset -f)
myps
EOF
'''

import subprocess
subprocess.call(['ksh', '-c', script])  # no shell=True
There are a few pertinent items here:
The dotfile defining this function needs to be locally invoked before you run typeset -f to dump the function's definition over the wire. By default, a noninteractive shell does not run the majority of dotfiles (any file specified by the ENV environment variable is an exception).
In the given example, this is served by the . ~/.profile command within the script.
The shell needs to be one supporting typeset, so it has to be bash or ksh, not sh (as used with shell=True by default), which may be provided by ash or dash, both lacking this feature.
In the given example, this is served by passing ['ksh', '-c'] as the first two arguments of the argv array.
typeset needs to be run locally, so it can't be in an argv position other than the first with shell=True. (To provide an example: subprocess.Popen(['''printf '%s\n' "$#"''', 'This is just literal data!', '$(touch /tmp/this-is-not-executed)'], shell=True) evaluates only printf '%s\n' "$#" as a shell script; This is just literal data! and $(touch /tmp/this-is-not-executed) are passed as literal data, so no file named /tmp/this-is-not-executed is created.)
In the given example, this is mooted by not using shell=True.
Explicitly invoking ksh -s (or bash -s, as appropriate) ensures that the shell evaluating your function definitions matches the shell you wrote those functions against, rather than passing them to sh -c, as would happen otherwise.
In the given example, this is served by ssh user@box ksh -s inside the script.
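If your functions are written against bash rather than ksh, the same pattern works with bash on both ends; a sketch (host name as in the question):
script = '''
. ~/.profile  # load the local function definitions
ssh user@box bash -s <<EOF
$(typeset -f myps)
myps
EOF
'''
import subprocess
subprocess.call(['bash', '-c', script])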
I ended up using this.
import subprocess
import sys
import re
HOST = "user#" + box
COMMAND = 'my long command with many many flags in single quotes'
ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
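A usage note: with both stdout and stderr piped, communicate() is safer than readlines(), since the child can deadlock if the unread stderr pipe fills up:
out, err = ssh.communicate()
result = out.splitlines()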
The original command was not interpreting the ; before myps properly. Using sh -c fixes that, but... (please see Charles Duffy's comments below).
Using a combination of single/double quotes sometimes makes the syntax easier to read and less prone to mistakes. With that in mind, a safe way to run the command ( provided the functions in .profile are actually accessible in the shell started by the subprocess.Popen object ):
subprocess.call('ssh user@box "$(typeset -f); myps"', shell=True)
An alternative (less safe) method would be to use sh -c for the subshell command:
subprocess.call('ssh user@box "sh -c $(echo typeset -f); myps"', shell=True)
# myps is treated as a command
This seemingly returned the same result:
subprocess.call('ssh user@box "sh -c typeset -f; myps"', shell=True)
There are definitely alternative methods for accomplishing these type of tasks, however, this might give you an idea of what the issue was with the original command.

running a command as a super user from a python script

So I'm trying to get a process to be run as a super user from within a Python script using subprocess. In the IPython shell, something like
proc = subprocess.Popen('sudo apach2ctl restart',
                        shell=True, stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
works fine, but as soon as I stick it into a script I start getting: sudo: apach2ctl: command not found.
I would guess this is due to the way sudo handles environments on Ubuntu. (I've also tried sudo -E apach2ctl restart and sudo env path=$PATH apache2ctl restart, to no avail.)
So my question is basically: if I want to run apache2ctl restart as a super user, prompting the user for the super-user password when required, how should I go about doing this? I have no intention of storing passwords in the script.
Edit:
I've tried passing in the commands both as a string and tokenized into a list. In the Python interpreter, with a string I'll get the password prompt properly (it still doesn't work in a Python script, as in my original problem); a list just gives the help screen for sudo.
Edit 2:
So what I gather is that while Popen will work with some commands just as strings when shell=True, it takes
proc = subprocess.Popen(['sudo','/usr/sbin/apache2ctl','restart'])
without 'shell=True' to get sudo to work.
Thanks!
Try:
subprocess.call(['sudo', 'apach2ctl', 'restart'])
The subprocess needs to access the real stdin/out/err for it to be able to prompt you, and read in your password. If you set them up as pipes, you need to feed the password into that pipe yourself.
If you don't define them, then it grabs sys.stdout, etc...
Try giving the full path to apache2ctl.
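For example, with the path from the asker's later edit:
subprocess.call(['sudo', '/usr/sbin/apache2ctl', 'restart'])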
Another way is to make your user a password-less sudo user.
Type the following on the command line:
sudo visudo
Then add the following and replace the <username> with yours:
<username> ALL=(ALL) NOPASSWD: ALL
This will allow the user to execute sudo commands without being asked for a password (including applications launched by said user). This might be a security risk, though.
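A narrower sudoers rule limits the password-less grant to a single command, which reduces the exposure (path as given in the asker's edit):
<username> ALL=(ALL) NOPASSWD: /usr/sbin/apache2ctl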
I used this with Python 3.5, using the subprocess module. Hardcoding the password like this is very insecure.
The subprocess module takes the command as a list of strings, so either create a list beforehand using split() or pass the whole list later. Read the documentation for more information.
What we are doing here is echoing the password and then, using a pipe, passing it on to sudo through the -S argument.
#!/usr/bin/env python
import subprocess

sudo_password = 'mysecretpass'
command = 'apach2ctl restart'
command = command.split()

cmd1 = subprocess.Popen(['echo', sudo_password], stdout=subprocess.PIPE)
cmd2 = subprocess.Popen(['sudo', '-S'] + command, stdin=cmd1.stdout, stdout=subprocess.PIPE)
output = cmd2.stdout.read().decode()
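A usage note: in a pipeline like this it is common to also close cmd1.stdout in the parent once cmd2 has started, so that echo receives SIGPIPE if sudo exits early:
cmd1.stdout.close()  # allow echo to receive SIGPIPE if sudo exits first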
The safest way to do this is to prompt for the password beforehand and then pipe it into the command. Prompting for the password will avoid having the password saved anywhere in your code and it also won't show up in your bash history. Here's an example:
from getpass import getpass
from subprocess import Popen, PIPE
password = getpass("Please enter your password: ")
# sudo requires the flag '-S' in order to take input from stdin
proc = Popen("sudo -S apach2ctl restart".split(), stdin=PIPE, stdout=PIPE, stderr=PIPE)
# Popen only accepts byte-arrays so you must encode the string
proc.communicate(password.encode())
You have to use Popen like this:
cmd = ['sudo', 'apache2ctl', 'restart']
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
It expects a list. (Note that shell=True must be dropped when passing a list: on POSIX systems, shell=True with a list would hand everything after the first item to the shell itself rather than to sudo.)
To run a command as root and pass it the password at the prompt, you could do it like so:
import subprocess
from getpass import getpass
ls = "sudo -S ls -al".split()
cmd = subprocess.run(
    ls, stdout=subprocess.PIPE, input=getpass("password: "), encoding="ascii",
)
print(cmd.stdout)
For your example, probably something like this:
import subprocess
from getpass import getpass
restart_apache = "sudo /usr/sbin/apache2ctl restart".split()
proc = subprocess.run(
    restart_apache,
    stdout=subprocess.PIPE,
    input=getpass("password: "),
    encoding="ascii",
)
I tried all the solutions, but they did not work. I wanted to run long-running tasks with Celery, and for those I needed to run a sudo chown command with subprocess.call().
This is what worked for me:
To set the password as an environment variable, type this on the command line:
export MY_SUDO_PASS="user_password_here"
To test that it's working, type:
echo $MY_SUDO_PASS
> user_password_here
To have it set in every new shell session, add it to the end of ~/.bashrc:
nano ~/.bashrc
# ~/.bashrc
# ... existing content:
elif [ -f /etc/bash_completion ]; then
    . /etc/bash_completion
  fi
fi
# ...
export MY_SUDO_PASS="user_password_here"
You can add all your other environment variables (passwords, usernames, hosts, etc.) here later.
Once your variables are set, you can run commands like the following.
To update:
echo $MY_SUDO_PASS | sudo -S apt-get update
Or to install Midnight Commander
echo $MY_SUDO_PASS | sudo -S apt-get install mc
To start Midnight Commander with sudo
echo $MY_SUDO_PASS | sudo -S mc
Or from the Python shell (or Django/Celery), to change directory ownership recursively:
python
>>> import subprocess
>>> subprocess.call('echo $MY_SUDO_PASS | sudo -S chown -R username_here /home/username_here/folder_to_change_ownership_recursivley', shell=True)
Hope it helps.
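A related sketch: the same environment variable can be read in Python directly, which avoids echo and shell=True (names and paths as in the answer above):
import os
import subprocess

pw = os.environ['MY_SUDO_PASS']  # assumes the export from ~/.bashrc above
subprocess.run(
    ['sudo', '-S', 'chown', '-R', 'username_here',
     '/home/username_here/folder_to_change_ownership_recursivley'],
    input=pw + '\n', text=True)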
You can use this approach to catch errors, and you can even add variables to your commands:
from subprocess import Popen, PIPE

val = 'xy'
response = Popen(f"(sudo {val})", stderr=PIPE, stdout=PIPE, shell=True)
output, errors = response.communicate()
Hope this helps.
