I have some FreeBSD servers that don't have sudo installed, but I want to run some commands as root automatically, like in the following function:
def autodeploy(url):
    with cd('/tmp'):
        if not exists('releasetar.sh'):
            put('/tmp/releasetar.sh', 'releasetar.sh', mode=0644)
        run("wget '{}'".format(url))
        run('su - -m -c "cd /tmp && bash /tmp/releasetar.sh"')
su with the -c option works on Linux but doesn't work on FreeBSD. How can I solve this problem? I'd like a solution that works on both Linux and FreeBSD. Thank you for your answers!
If you're using Fabric, you can just provide the -u argument from the command line to specify which user you want to run the task as:
fab -u root <task name>
For more options from the command line check out http://docs.fabfile.org/en/1.7/usage/fab.html#command-line-options
You can also set your username programmatically:

from fabric.api import run, settings

with settings(user="root"):
    run("some-command")
Install sudo from ports (/usr/ports/security/sudo).
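If the bootstrap step has to be scripted, a small helper can build the install command; this is a sketch (the function name is mine), and the choice between the ports tree and the pkg binary package manager is an assumption about your FreeBSD version:

```python
def freebsd_install_sudo_cmd(use_ports=False):
    # Build the shell command that installs sudo on FreeBSD.
    # use_ports=True compiles it from the ports tree; the default uses
    # the pkg binary package manager (available on modern FreeBSD).
    if use_ports:
        return "cd /usr/ports/security/sudo && make install clean"
    return "pkg install -y sudo"
```

The returned string must be run as root, e.g. via Fabric's run() inside settings(user="root").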
I have a command, /usr/bin/virsh dumpxml <UUID>, that I need to run inside a Python program. The thing is that the command needs to run as sudo -u <user> sudo /usr/bin/virsh dumpxml <UUID>. I try to call it with the following code:
cmd = "sudo -u <user> sudo /usr/bin/virsh dumpxml %s" % uuid
data = os.popen(cmd).read()
...
But I'm getting an error message indicating that the domain is not found; if I run the same command from bash, it works fine.
The /usr/bin/virsh dumpxml command is allowed by a rule in /etc/sudoers.d/.
Also, I need to run it with the second 'sudo' because without it, the command won't work.
Any ideas?
I'm trying to run a shell script, run.sh, which contains:

mongod --config /opt/mongodb/mongod.conf

and call it from Python:
subprocess.call(['bash', 'run.sh'])
but it says mongod: not found.
When I run it in the terminal it works.
How can I fix this?
You don't need to use bash. Just run it as a normal script, as you do in the terminal:
import subprocess
subprocess.call(['./run.sh'])
Also, it seems that mongod is not on your system PATH, so you need to use the absolute path of mongod in your run.sh:
#!/bin/bash
/opt/mongodb-linux-x86_64-ubuntu1404-3.0.6/bin/mongod --config /opt/mongodb/mongod.conf
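Note that calling the script directly (without bash) also requires the executable bit to be set. A sketch that handles this from Python (the helper name is mine):

```python
import os
import stat
import subprocess

def run_script(path):
    # './run.sh' only works if the file has the executable bit set and a
    # shebang line; set the owner-execute bit, then call it directly so
    # the shebang picks the interpreter.
    st = os.stat(path)
    os.chmod(path, st.st_mode | stat.S_IXUSR)
    if os.sep not in path:
        path = os.path.join(".", path)
    return subprocess.call([path])
```

This returns the script's exit code, just like subprocess.call.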
Try:

import os
os.system('bash run.sh')

and update the command in run.sh to:
#!/bin/sh
/usr/bin/mongod --quiet --config /opt/mongodb/mongod.conf
Through Fabric, I am trying to start a celerycam process using the nohup command below. Unfortunately, nothing is happening. Manually, using the same command, I could start the process, but not through Fabric. Any advice on how I can solve this?
def start_celerycam():
    '''Start celerycam daemon'''
    with cd(env.project_dir):
        virtualenv('nohup bash -c "python manage.py celerycam --logfile=%scelerycam.log --pidfile=%scelerycam.pid &> %scelerycam.nohup &> %scelerycam.err" &' % (env.celery_log_dir,env.celery_log_dir,env.celery_log_dir,env.celery_log_dir))
I'm using Erich Heine's suggestion to use 'dtach' and it's working pretty well for me:
def runbg(cmd, sockname="dtach"):
    return run('dtach -n `mktemp -u /tmp/%s.XXXX` %s' % (sockname, cmd))
This was found here.
From my experiments, the solution is a combination of two factors:
run process as a daemon: nohup ./command &> /dev/null &
use pty=False for fabric run
So, your function should look like this:
def background_run(command):
    command = 'nohup %s &> /dev/null &' % command
    run(command, pty=False)
And you can launch it with:
execute(background_run, your_command)
This is an instance of this issue. Background processes will be killed when the command ends. Unfortunately, CentOS 6 doesn't support pty-less sudo commands.
The final entry in the issue mentions using sudo('set -m; service servicename start'). This turns on Job Control and therefore background processes are put in their own process group. As a result they are not terminated when the command ends.
For even more information see this link.
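A tiny helper can assemble that kind of command; this is a sketch and the function name is mine:

```python
def job_control_cmd(service):
    # 'set -m' turns on job control, so the service's background process
    # gets its own process group and is not terminated when the Fabric
    # command (and its pty) ends.
    return "set -m; service {} start".format(service)
```

The returned string is what you would pass to Fabric's sudo().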
You just need to run:
run("(nohup yourcommand >& /dev/null < /dev/null &) && sleep 1")
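The redirections matter here: stdout, stderr, and stdin are all detached from the SSH channel, and the trailing sleep gives the process time to start before the shell exits. A helper that builds the same string (a sketch; the name is mine):

```python
def nohup_cmd(command):
    # Detach the command from the terminal: nohup ignores SIGHUP, all
    # three standard streams are redirected away from the SSH channel,
    # and 'sleep 1' keeps the remote shell alive long enough to spawn it.
    return "(nohup {} >& /dev/null < /dev/null &) && sleep 1".format(command)
```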
dtach is the way to go. It's software you need to install, like a lite version of screen.
This is a better version of the dtach method found above; it will install dtach if necessary. It's to be found here, where you can also learn how to get the output of the process which is running in the background:
from fabric.api import run
from fabric.api import sudo
from fabric.contrib.files import exists
def run_bg(cmd, before=None, sockname="dtach", use_sudo=False):
    """Run a command in the background using dtach

    :param cmd: The command to run
    :param before: The command to run before the dtach. E.g. exporting
        environment variable
    :param sockname: The socket name to use for the temp file
    :param use_sudo: Whether or not to use sudo
    """
    if not exists("/usr/bin/dtach"):
        sudo("apt-get install dtach")
    if before:
        cmd = "{}; dtach -n `mktemp -u /tmp/{}.XXXX` {}".format(
            before, sockname, cmd)
    else:
        cmd = "dtach -n `mktemp -u /tmp/{}.XXXX` {}".format(sockname, cmd)
    if use_sudo:
        return sudo(cmd)
    else:
        return run(cmd)
May this help you like it helped me to run omxplayer via Fabric on a remote Raspberry Pi!
You can use :
run('nohup /home/ubuntu/spider/bin/python3 /home/ubuntu/spider/Desktop/baidu_index/baidu_index.py > /home/ubuntu/spider/Desktop/baidu_index/baidu_index.py.log 2>&1 &', pty=False)
nohup did not work for me, and I did not have tmux or dtach installed on all the boxes I wanted to use this on, so I ended up using screen like so:
run("screen -d -m bash -c '{}'".format(command), pty=False)
This tells screen to start a bash shell in a detached terminal that runs your command.
You could be running into this issue.
Try adding pty=False to the sudo command (I assume virtualenv is calling sudo or run somewhere?).
This worked for me:
sudo('python %s/manage.py celerycam --detach --pidfile=celerycam.pid' % siteDir)
Edit: I had to make sure the pid file was removed first so this was the full code:
# Create new celerycam
sudo('rm celerycam.pid', warn_only=True)
sudo('python %s/manage.py celerycam --detach --pidfile=celerycam.pid' % siteDir)
I was able to circumvent this issue by running nohup ... & over ssh in a separate local shell script. In fabfile.py:
@task
def startup():
    local('./do-stuff-in-background.sh {0}'.format(env.host))
and in do-stuff-in-background.sh:
#!/bin/sh
set -e
set -o nounset
HOST=$1
ssh $HOST -T << HERE
nohup df -h 1>>~/df.log 2>>~/df.err &
HERE
Of course, you could also pass in the command and standard output / error log files as arguments to make this script more generally useful.
(In my case, I didn't have admin rights to install dtach, and neither screen -d -m nor pty=False / sleep 1 worked properly for me. YMMV, especially as I have no idea why this works...)
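Generalised, the wrapper could build the ssh invocation like this (a sketch; the parameter names and default log paths are mine, and the quoting is simplified compared to the heredoc above):

```python
def ssh_background_cmd(host, command, out_log="~/out.log", err_log="~/err.log"):
    # Build an ssh invocation that runs `command` on `host` detached via
    # nohup, so it keeps running after the ssh session (and the Fabric
    # task that spawned it) has ended.
    remote = "nohup {} 1>>{} 2>>{} &".format(command, out_log, err_log)
    return "ssh {} -T '{}'".format(host, remote)
```

The returned string can be handed to Fabric's local().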
I'm trying to set up my /etc/rc.local to automatically start up a process on reboot as another user. For some reason, the .bashrc for this user does not seem to be getting initialized.
Here's the command I added to /etc/rc.local :
sudo su -l batchuser -c "/home/batchuser/app/run_prod.sh &"
this didn't work, so I also tried this:
sudo su -l batchuser -c ". /home/batchuser/.profile; /home/batchuser/app/run_prod.sh &"
run_prod.sh just starts up a Python script. The Python script fails because it references modules on a Python path which gets initialized in the .bashrc.
EDIT: it works when I do this
sudo su -l batchuser -c "export PYTHONPATH=/my/python/path; /home/batchuser/app/run_prod.sh &"
Why does this work and not the statement above? How come the .bashrc is not getting initialized?
I have run into this same problem. I can't fully explain the behavior, but I ended up doing this type of thing:
sudo PYTHONPATH=$PYTHONPATH the_command
or more specifically for your case,
sudo PYTHONPATH=$PYTHONPATH su -l batchuser -c "/home/batchuser/app/run_prod.sh &"
Does that work for you? If it does, you may find it doesn't return immediately like you expect it to. You may need to move the & outside the quotes so it applies to the sudo command.
I am using Fabric to run commands on a remote server. The user with which I connect on that server has some sudo privileges, and does not require a password to use these privileges. When SSH'ing into the server, I can run sudo blah and the command executes without prompting for a password. When I try to run the same command via Fabric's sudo function, I get prompted for a password. This is because Fabric builds a command in the following manner when using sudo:
sudo -S -p <sudo_prompt> /bin/bash -l -c "<command>"
Obviously, my user does not have permission to execute /bin/bash without a password.
I've worked around the problem by using run("sudo blah") instead of sudo("blah"), but I wondered if there is a better solution. Is there a workaround for this issue?
Try passing shell=False to sudo. That way /bin/bash won't be added to the sudo command. sudo('some_command', shell=False)
From line 503 of fabric/operations.py:
if (not env.use_shell) or (not shell):
    real_command = "%s %s" % (sudo_prefix, _shell_escape(command))
The else block looks like this:

# V-- here's where /bin/bash is added
real_command = '%s %s "%s"' % (sudo_prefix, env.shell,
                               _shell_escape(cwd + command))
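The two code paths roughly assemble the remote command like this (a simplified sketch of Fabric 1.x behaviour, not its exact code; the default prefix and shell strings are assumptions):

```python
def build_sudo_command(command, shell=True,
                       sudo_prefix="sudo -S -p 'sudo password:'",
                       shell_path="/bin/bash -l -c"):
    # With shell=True the command is wrapped in a login shell, which is
    # why sudo must be allowed to run /bin/bash; with shell=False the
    # command goes to sudo directly, so a NOPASSWD rule for that one
    # command is enough.
    if not shell:
        return "{} {}".format(sudo_prefix, command)
    return '{} {} "{}"'.format(sudo_prefix, shell_path, command)
```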
You can use:
from fabric.api import env
# [...]
env.password = 'yourpassword'
In your /etc/sudoers file, add:

user ALL=NOPASSWD: some_command

where user is your sudo user and some_command is the command you want to run with Fabric. Then, in the Fabric script, run it with shell=False:

sudo('some_command', shell=False)

This works for me.
In your /etc/sudoers file, you could add
user ALL=NOPASSWD: /bin/bash
...where user is your Fabric username.
Obviously, you can only do this if you have root access, as /etc/sudoers is only writable by root.
Also obviously, this isn't terribly secure, as being able to execute /bin/bash leaves you open to essentially anything, so if you don't have root access and have to ask a sysadmin to do this for you, they probably won't.
Linux noob here, but I found this question while trying to install graphite-fabric onto an EC2 AMI. Fabric kept prompting for a root password.
The eventual trick was to pass the SSH private key file to Fabric:
fab -i key.pem graphite_install -H root@servername
You can also use passwords for multiple machines:
from fabric.api import env

env.hosts = ['user1@host1:port1', 'user2@host2:port2']
env.passwords = {'user1@host1:port1': 'password1', 'user2@host2:port2': 'password2'}
See this answer: https://stackoverflow.com/a/5568219/552671
I recently faced this same issue, and found Crossfit_and_Beer's answer confusing.
A supported way to achieve this is by using env.sudo_prefix, as documented in this GitHub commit (from this PR).
My example of use:
env.sudo_prefix = 'sudo '