I have the following fabfile:
from fabric.api import *
env.hosts = ['samplehost']
env.user = 'foo'
env.password = 'bar'
env.shell = ''
def exec_ls():
    run('ls')
    run('ls -l')
and I get the following output:
[samplehost] Executing task 'exec_ls'
[samplehost] run: ls
[samplehost] out: sample.txt
[samplehost] run: ls -l
[samplehost] out: rbash: ls -l: command not found
Fatal error: run() encountered an error (return code 127) while executing 'ls -l'
Aborting.
Disconnecting from samplehost... done.
The login shell for user 'foo' is '/bin/rbash'.
It seems that if I execute a command with parameters it is treated as a single command (while 'ls' without parameters works perfectly).
Please note that I've set an empty shell because otherwise Fabric tries to use '/bin/bash', and that's not allowed by the restricted shell.
Is it possible to use Fabric in a restricted shell?
The problem isn't related to the fact that rbash is being used, but to the empty value of env.shell. To fix the problem, use:
env.shell = '/bin/rbash -l -c'
Note that:
the default value for env.shell is /bin/bash -l -c, so using /bin/rbash -l -c makes sense
when env.shell is set to the empty string, the command isn't executed through any shell
the shell is what takes care of splitting the string into a command and its arguments; without a shell, the whole string is interpreted as a single command name, which isn't found, as was happening here (see the sketch just below)
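Putting it together, a minimal sketch of the corrected fabfile from the question (same task as before, only env.shell changed):
from fabric.api import env, run

env.hosts = ['samplehost']
env.user = 'foo'
env.password = 'bar'
env.shell = '/bin/rbash -l -c'  # match the account's restricted login shell

def exec_ls():
    run('ls')
    run('ls -l')  # rbash now splits this into a command and its argument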
In my environment (using a restricted shell as part of a Pure array), it appears another option is to pass the argument shell=False to the run function, as shown next.
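A minimal sketch of that alternative (in Fabric 1.x, run() accepts a per-call shell keyword; whether this helps depends on how the restricted shell executes the SSH command):
run('ls -l', shell=False)  # send the command without wrapping it in env.shell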
Check the environment of the target machine with:
echo $SHELL
Suppose you get this:
/bin/sh
Then in your Python fabfile.py:
from fabric.api import env
env.shell = "/bin/sh -c"
Related
I am using a Python script to restrict command usage via the command option in the authorized_keys file.
The command:
ssh host-name bash --login -c 'exec $0 "$@"' mkdir -p hello
My script performs the required actions to restrict the commands. After filtering, the Python script does sys.exit(1) on error and sys.exit(0) on success. After that return value, the ssh command above does not get executed. Is there something else I need to send from the Python script to the SSH daemon?
The command option in authorized_keys is not (only) used to validate the user's command; that command is run instead of the command provided by the user. This means calling sys.exit(0) from there prevents the user-provided command from ever running.
In that script, after you validate the command, you need to run it too!
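For illustration, a minimal sketch of such a wrapper (sshd exposes the client's requested command in the SSH_ORIGINAL_COMMAND environment variable; the validation rule here is just a placeholder):
#!/usr/bin/env python
import os
import subprocess
import sys

# sshd sets SSH_ORIGINAL_COMMAND when a command= option in
# authorized_keys forces this script to run instead of the
# client's command.
requested = os.environ.get('SSH_ORIGINAL_COMMAND', '')

# Placeholder validation: allow only mkdir invocations.
if not requested.startswith('mkdir '):
    sys.exit(1)

# The crucial step: actually run the validated command rather
# than just exiting with a success status.
sys.exit(subprocess.call(requested, shell=True))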
I think changing it to
ssh host-name bash --login -c 'exec $0 "$@" && mkdir -p hello'
should do the trick; otherwise bash assumes that only the part in the single quotes is the script to execute, and the remaining words merely become $0 and the positional parameters.
If the second part should be executed even if the first part fails, replace the && with a ;
My .profile defines a function
myps () {
    ps -aef|egrep "a|b"|egrep -v "c\-"
}
I'd like to execute it from my python script
import subprocess
subprocess.call("ssh user#box \"$(typeset -f); myps\"", shell=True)
Getting an error back
bash: -c: line 0: syntax error near unexpected token `;'
bash: -c: line 0: `; myps'
Escaping the ; results in
bash: ;: command not found
script = '''
. ~/.profile  # load local function definitions so typeset -f can emit them
ssh user@box ksh -s <<EOF
$(typeset -f)
myps
EOF
'''
import subprocess
subprocess.call(['ksh', '-c', script])  # no shell=True
There are a few pertinent items here:
The dotfile defining this function needs to be locally invoked before you run typeset -f to dump the function's definition over the wire. By default, a noninteractive shell does not run the majority of dotfiles (one specified by the ENV environment variable is an exception).
In the given example, this is served by the . ~/.profile command within the script.
The shell needs to be one supporting typeset, so it has to be bash or ksh, not sh (as used by shell=True by default), which may be provided by ash or dash, both lacking this feature.
In the given example, this is served by passing ['ksh', '-c'] as the first two arguments of the argv array.
typeset needs to be run locally, so it can't be in an argv position other than the first with shell=True. (To provide an example: subprocess.Popen(['''printf '%s\n' "$@"''', 'This is just literal data!', '$(touch /tmp/this-is-not-executed)'], shell=True) evaluates only printf '%s\n' "$@" as a shell script; This is just literal data! and $(touch /tmp/this-is-not-executed) are passed as literal data, so no file named /tmp/this-is-not-executed is created.)
In the given example, this is mooted by not using shell=True.
Explicitly invoking ksh -s (or bash -s, as appropriate) ensures that the shell evaluating your function definitions matches the shell you wrote those functions against, rather than passing them to sh -c, as would happen otherwise.
In the given example, this is served by ssh user@box ksh -s inside the script.
I ended up using this.
import subprocess
import sys
import re
HOST = "user#" + box
COMMAND = 'my long command with many many flags in single quotes'
ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
The original command was not interpreting the ; before myps properly. Using sh -c fixes that, but... (please see Charles Duffy's comments below).
Using a combination of single and double quotes sometimes makes the syntax easier to read and less prone to mistakes. With that in mind, a safe way to run the command (provided the functions in .profile are actually accessible in the shell started by the subprocess.Popen object):
subprocess.call('ssh user@box "$(typeset -f); myps"', shell=True)
An alternative (less safe) method would be to use sh -c for the subshell command:
subprocess.call('ssh user@box "sh -c $(echo typeset -f); myps"', shell=True)
# myps is treated as a command
This seemingly returned the same result:
subprocess.call('ssh user@box "sh -c typeset -f; myps"', shell=True)
There are definitely alternative methods for accomplishing these types of tasks; however, this might give you an idea of what the issue was with the original command.
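As a further illustration of sidestepping the quoting problem altogether, here is a sketch of the same typeset -f trick done entirely from Python without shell=True (assumptions: ksh is available on both machines, myps is defined in ~/.profile, and user@box is a placeholder host):
import subprocess

# Dump the local function definitions by sourcing the profile in a
# local ksh, exactly as the earlier answer does inside its script.
dump = subprocess.run(['ksh', '-c', '. ~/.profile; typeset -f'],
                      capture_output=True, text=True, check=True)

# Feed the definitions plus the call to the remote shell on stdin,
# so no quoting has to survive two shells.
subprocess.run(['ssh', 'user@box', 'ksh', '-s'],
               input=dump.stdout + '\nmyps\n', text=True)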
Through Fabric, I am trying to start a celerycam process using the nohup command below. Unfortunately, nothing happens. Running the same command manually, I can start the process, but not through Fabric. Any advice on how I can solve this?
def start_celerycam():
    '''Start celerycam daemon'''
    with cd(env.project_dir):
        virtualenv('nohup bash -c "python manage.py celerycam --logfile=%scelerycam.log --pidfile=%scelerycam.pid &> %scelerycam.nohup &> %scelerycam.err" &' % (env.celery_log_dir, env.celery_log_dir, env.celery_log_dir, env.celery_log_dir))
I'm using Erich Heine's suggestion to use 'dtach' and it's working pretty well for me:
def runbg(cmd, sockname="dtach"):
    return run('dtach -n `mktemp -u /tmp/%s.XXXX` %s' % (sockname, cmd))
This was found here.
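For example, a hypothetical Fabric task using it to start the celerycam process from the question:
def start_celerycam_bg():
    # Hypothetical usage; adjust the command and paths to your project.
    with cd(env.project_dir):
        runbg('python manage.py celerycam')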
In my experiments, the solution is a combination of two factors:
run process as a daemon: nohup ./command &> /dev/null &
use pty=False for fabric run
So, your function should look like this:
def background_run(command):
    command = 'nohup %s &> /dev/null &' % command
    run(command, pty=False)
And you can launch it with:
execute(background_run, your_command)
This is an instance of this issue. Background processes will be killed when the command ends. Unfortunately, CentOS 6 doesn't support pty-less sudo commands.
The final entry in the issue mentions using sudo('set -m; service servicename start'). This turns on job control, and therefore background processes are put in their own process group. As a result, they are not terminated when the command ends.
For even more information see this link.
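For example, a minimal sketch of that workaround as a Fabric task (servicename is a placeholder):
def start_service():
    # set -m turns on job control, so the backgrounded service lands in
    # its own process group and survives the end of the remote command.
    sudo('set -m; service servicename start')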
You just need to run:
run("(nohup yourcommand >& /dev/null < /dev/null &) && sleep 1")
The sleep 1 gives the detached process a moment to start before the remote shell exits.
dtach is the way to go. It's software you need to install, like a lite version of screen.
This is a better version of the dtach method found above; it will install dtach if necessary. It's found here, where you can also learn how to get the output of the process that is running in the background:
from fabric.api import run
from fabric.api import sudo
from fabric.contrib.files import exists
def run_bg(cmd, before=None, sockname="dtach", use_sudo=False):
    """Run a command in the background using dtach

    :param cmd: The command to run
    :param before: The command to run before the dtach. E.g. exporting
                   an environment variable
    :param sockname: The socket name to use for the temp file
    :param use_sudo: Whether or not to use sudo
    """
    if not exists("/usr/bin/dtach"):
        sudo("apt-get install dtach")
    if before:
        cmd = "{}; dtach -n `mktemp -u /tmp/{}.XXXX` {}".format(
            before, sockname, cmd)
    else:
        cmd = "dtach -n `mktemp -u /tmp/{}.XXXX` {}".format(sockname, cmd)
    if use_sudo:
        return sudo(cmd)
    else:
        return run(cmd)
May this help you, as it helped me run omxplayer via Fabric on a remote Raspberry Pi!
You can use:
run('nohup /home/ubuntu/spider/bin/python3 /home/ubuntu/spider/Desktop/baidu_index/baidu_index.py > /home/ubuntu/spider/Desktop/baidu_index/baidu_index.py.log 2>&1 &', pty=False)
nohup did not work for me, and I did not have tmux or dtach installed on all the boxes I wanted to use this on, so I ended up using screen like so:
run("screen -d -m bash -c '{}'".format(command), pty=False)
This tells screen to start a bash shell in a detached terminal that runs your command.
You could be running into this issue.
Try adding pty=False to the sudo command (I assume virtualenv is calling sudo or run somewhere?).
This worked for me:
sudo('python %s/manage.py celerycam --detach --pidfile=celerycam.pid' % siteDir)
Edit: I had to make sure the pid file was removed first, so this was the full code:
# Create new celerycam
sudo('rm celerycam.pid', warn_only=True)
sudo('python %s/manage.py celerycam --detach --pidfile=celerycam.pid' % siteDir)
I was able to circumvent this issue by running nohup ... & over ssh in a separate local shell script. In fabfile.py:
@task
def startup():
    local('./do-stuff-in-background.sh {0}'.format(env.host))
and in do-stuff-in-background.sh:
#!/bin/sh
set -e
set -o nounset
HOST=$1
ssh $HOST -T << HERE
nohup df -h 1>>~/df.log 2>>~/df.err &
HERE
Of course, you could also pass in the command and standard output / error log files as arguments to make this script more generally useful.
(In my case, I didn't have admin rights to install dtach, and neither screen -d -m nor pty=False / sleep 1 worked properly for me. YMMV, especially as I have no idea why this works...)
I have the following code that works great for running the ls command. I have a bash alias, alias ll='ls -alFGh'. Is it possible to get Python to run the bash command without Python loading my bash_alias file, parsing it, and then actually running the full command?
import subprocess
command = "ls" # the shell command
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=None, shell=True)
# Launch the shell command:
output = process.communicate()
print (output[0])
Trying with command = "ll" the output I get is:
/bin/sh: ll: command not found
b''
You cannot. When you run a Python process, it has no knowledge of a shell alias. There are simple ways of passing text from parent to child process (other than IPC): the command line and environment (i.e. exported) variables. Bash does not support exporting aliases.
From the bash man page: "For almost every purpose, aliases are superseded by shell functions."
Bash does support exporting functions, so I suggest you make your alias a simple function instead. That way it is exported from shell to Python to shell. For example:
In the shell:
ll() { ls -l; }
export -f ll
In Python:
import subprocess
command = "ll" # the shell command
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=None, shell=True)
output = process.communicate()
print(output[0].decode()) # Required if using Python 3
Since you are using the print() function, I have assumed you are using Python 3, in which case you need the .decode(), since a bytes object is returned.
With a bit of hackery it is possible to create and export shell functions from Python as well.
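For illustration, a sketch of that hackery (hedged: post-Shellshock releases of bash store exported functions in environment variables named BASH_FUNC_<name>%%; older releases used a different encoding, so this is version-dependent):
import os
import subprocess

env = dict(os.environ)
# Define an exported shell function directly from Python.
env['BASH_FUNC_ll%%'] = '() { ls -l; }'
out = subprocess.run(['bash', '-c', 'll'], env=env,
                     capture_output=True, text=True)
print(out.stdout)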
I need to run some bash commands via Fabric API (ssh).
I have the following String in my Python module:
newCommand = command + "'`echo -ne '\\015'"
When I print this string directly in Python the output is the expected:
command'`echo -ne '\015'
However, if I try to run this command via the Fabric API, the command is somehow modified into this:
/bin/bash -l -c "command'\`echo -ne '\015'"
Notice the '\' inserted before the backtick. Why is this happening? The '\' is breaking my command, and I can't successfully run it.
PS: the prefix "/bin/bash -l -c" is expected, since that's how Fabric works with SSH.
This is not a valid shell command:
command'`echo -ne '\015'
Even if you add the missing backtick and single quote, it's nothing like writing "command" and pressing enter.
The context your command will be run in is basically what you'd get if you'd ssh and paste a command:
clientprompt$ ssh host
Welcome to Host, User
hostprompt$ <COMMAND HERE>
You should focus your efforts on finding a single command that does what you want, rather than a sequence of keypresses you would type to do it (that's not how ssh works).