I need to run a daemon on a remote Linux machine over SSH.
The daemon is pigpiod, part of the pigpio module (controlling the Raspberry Pi's GPIO), on Ubuntu MATE 16.04.
Commands that do not require sudo (for example, ls) run fine from the script, while those that need sudo fail.
adress='192.168.2.112' is the remote Linux machine that should run this daemon.
The code below fails (running sudo pigpiod):
def runpigpiod_remote(adress):
    result = subprocess.run(['ssh', 'guy@' + adress, 'sudo', 'pigpiod'])
The code below succeeds (running ls -l):
def runpigpiod_remote(adress):
    result = subprocess.run(['ssh', 'guy@' + adress, 'ls', '-l'])
To check whether subprocess.run is capable of executing a sudo command at all, I tried it locally on the same machine, and it succeeds:
def run_process():
    try:
        subprocess.check_output(["pidof", "pigpiod"])
        print("pigpiod already loaded")
    except subprocess.CalledProcessError:
        print("Not loaded")
        subprocess.run(['sudo', 'pigpiod'])
        if os.system("pgrep -x pigpiod") == 0:
            print("Loaded successfully")
The code has been changed (thanks to the comment from @Hamuel), as noted in "proper way to sudo over ssh":
def runpigpiod_remote(adress):
    result = subprocess.run(['ssh', '-t', 'guy@' + adress, 'sudo', 'pigpiod'])
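For completeness, a minimal sketch of that revised helper with a return-code check added (the username guy and the variable name adress come from the snippets above; the check itself is an assumption, not part of the original code):

import subprocess

def runpigpiod_remote(adress):
    # -t forces pseudo-terminal allocation so sudo can prompt for a password if it needs one
    result = subprocess.run(['ssh', '-t', 'guy@' + adress, 'sudo', 'pigpiod'])
    if result.returncode != 0:
        print("Starting pigpiod on", adress, "failed with exit code", result.returncode)
    return result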
Related
Overview
I'm trying to use python fabric to run an ssh command as root on a remote server.
The command: nohup ./foo &
foo is expected to run for several days. I must be able to disassociate foo from Fabric's remote ssh session and put foo in the background.
The Fabric FAQ says you should use something like screen or tmux when you run your fabric script (which runs the backgrounded command). I tried that, but my fabric script still hung. foo is not hanging.
Question
How do I use fabric to run this command on a remote server without the script hanging: nohup ./foo &
Details
This is my script:
#!/bin/sh
# Credit: https://unix.stackexchange.com/a/20895/6766
if "true" : '''\'
then
exec "/nfs/it/network_python/$OSREL/bin/python" "$0" "$#"
exit 127
fi
'''
from getpass import getpass
import os
from fabric import Connection, Config
assert os.geteuid()==0, "ERROR: Must run as root"
for host in ['host1.foo.local', 'host2.foo.local']:
    # Make an ssh connection to the host...
    conn = Connection(host)
    # The script always hangs at this line
    result = conn.run('nohup ./foo &', warn=True, hide=True)
I always open a tmux session to run the aforementioned script in; even doing so, the script hangs when I get to conn.run(), above.
I'm running the script on a vanilla CentOS 6.5 VM; it runs under python 2.7.10 and fabric 2.1.
The Fabric FAQ is unclear... I thought the FAQ wanted tmux used on the local side when I executed the Fabric script.
The correct way to fix this problem is to replace nohup in the remote command with screen -d -m <command>. Now I can run the whole script locally with no hangs (and I don't have to use tmux in the local terminal).
Explicitly, I have to rewrite the last line of my script in my question as:
# Remove &, and nohup...
result = conn.run('screen -d -m ./foo', warn=True, hide=True)
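Put together, the loop from the question would then look roughly like this (the hostnames and the ./foo binary are taken from the question; the sketch assumes screen is installed on the remote hosts):

from fabric import Connection

for host in ['host1.foo.local', 'host2.foo.local']:
    conn = Connection(host)
    # screen -d -m starts ./foo in a detached session, so conn.run() returns immediately
    result = conn.run('screen -d -m ./foo', warn=True, hide=True)
    if result.failed:
        print("Could not start foo on", host)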
Good day. I am using a Raspberry Pi 3 model B running Raspbian Stretch. I have a Python script named bluepyscanner.py which is basically a Python 3 variation of the bluepy scanner sample code with a small addition for a .txt log file.
from bluepy.btle import Scanner, DefaultDelegate

class ScanDelegate(DefaultDelegate):
    def __init__(self):
        DefaultDelegate.__init__(self)

    def handleDiscovery(self, dev, isNewDev, isNewData):
        if isNewDev:
            print("Discovered device", dev.addr)
        elif isNewData:
            print("Received new data from", dev.addr)

scanner = Scanner().withDelegate(ScanDelegate())
devices = scanner.scan(10.0)

for dev in devices:
    print("Device {} ({}), RSSI={} dB".format(dev.addr, dev.addrType, dev.rssi))
    for (adtype, desc, value) in dev.getScanData():
        print("  {} = {}".format(desc, value))
        with open('bluepyscanlog.txt', 'a') as the_file:
            the_file.write("{}={}\n".format(desc, value))
I can run this script perfectly when I launch it from terminal with
$ sudo python3 /home/pi/bluepyscanner.py
However, I am somehow unable to get this script to run automatically on boot. I have tried the following three methods separately and none has worked so far:
rc.local (https://www.raspberrypi.org/documentation/linux/usage/rc-local.md): I appended the following line to /etc/rc.local
python3 /home/pi/bluepyscanner.py
Cron (https://www.raspberrypi.org/documentation/linux/usage/cron.md): I used the Cron GUI and added a recurring task to be launched "at reboot"
sudo python3 /home/pi/bluepyscanner.py
systemd (https://www.raspberrypi.org/documentation/linux/usage/systemd.md): I followed the instructions on the linked documentation page with main.py replaced by my bluepyscanner.py and the working directory replaced by /home/pi
Can anyone give me a pointer on what might have gone wrong? Bluetooth is enabled and bluepy is installed in accordance with this. I don't think the script has run, because, unlike when run from the terminal, bluepyscanlog.txt was not created.
Thank you in advance for your time.
Please make these changes in your script:
...
with open('/home/pi/bluepyscanlog.txt', 'a+') as the_file:
...
and make the proper changes in your /etc/rc.local
sudo python3 /home/pi/bluepyscanner.py
Maybe you can see previous copies of bluepyscanlog.txt at /.
If this doesn't do the job, the bluetooth service may be starting after rc.local is executed. Make these modifications in your /etc/rc.local as sudo:
....
sudo service bluetooth start
sudo python3 /home/pi/bluepyscanner.py > /home/pi/bb.log
exit 0
Ensure that exit 0 is the last command in the file. If you created rc.local manually, ensure it has execute permission:
sudo chmod +x /etc/rc.local
You will see that your script is being executed.
On my Raspberry Pi these are the contents of bb.log:
Discovered device d2:xx:XX:XX:XX:XX
Device d2:xx:XX:XX:XX:XX (random), RSSI=-62 dB
Flags = 06
0x12 = 08001000
Incomplete 128b Services = xxxxxxxxxxxxxxxxxxxxxxxxx
16b Service Data = xxxxxxxxxxxxxx
Complete Local Name = xxxxxxxxxxx
Tx Power = 05
(Xs mask original content)
I'm trying to start the MySQL server from a Python 3.4 script on Mac OS X 10.10.4,
but I don't know how to pass the superuser password.
import os
os.system("sudo /usr/local/mysql/support-files/mysql.server start")
This fails with: sudo: no tty present and no askpass program specified
You are going in the right direction. On my system the MySQL server is started from /etc/init.d/mysql.
Just use the following code snippet to run your mysql server from a python script.
import os
os.system('sudo /etc/init.d/mysql start')
The best way to run root commands is to execute the file itself as root.
Simply type sudo python script.py in your shell, and then you can replace os.system('sudo /etc/init.d/mysql start')
with
os.system('/etc/init.d/mysql start')
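A small sketch of that approach end to end (the init-script path comes from the answer above; the euid check is an extra assumption added here so the script fails early if it wasn't started with sudo):

import os
import sys

# Launch this script itself with: sudo python script.py
if os.geteuid() != 0:
    sys.exit("Please run this script with sudo")

# No sudo is needed inside the call once the whole script runs as root
os.system('/etc/init.d/mysql start')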
I'm using Python Fabric to deploy binaries to an EC2 server and am attempting to run them in the background (a subshell).
All the fabric commands for performing local actions, putting files, and executing remote commands w/o elevated privileges work fine. The issue I run into is when I attempt to run the binary.
with cd("deploy"):
run('mkdir log')
sudo('iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080', user="root")
result = sudo('./dbserver &', user="root") # <---- This line
print result
if result.failed:
print "Running dbserver failed"
else:
print "DBServer now running server" # this gets printed despite the binary not running
After I log in to the server and run ps aux | grep dbserver, nothing shows up. How can I get Fabric to execute the binary? The same command ./dbserver & executed from the shell does exactly what I want it to. Thanks.
This is likely related to TTY issues, and/or to the fact that you're attempting to background a process.
Both of these are discussed in the FAQ under these two headings:
http://www.fabfile.org/faq.html#init-scripts-don-t-work
http://www.fabfile.org/faq.html#why-can-t-i-run-programs-in-the-background-with-it-makes-fabric-hang
Try making the sudo call like this:
sudo('nohup ./dbserver &', user="root", pty=False)
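A minimal sketch of the deploy block with that change applied (the directory, iptables rule, and binary name come from the question; the log redirection and mkdir -p are assumptions added here, since nohup output otherwise lands in nohup.out):

from fabric.api import cd, run, sudo

with cd("deploy"):
    run('mkdir -p log')  # -p added so re-running the deploy does not fail
    sudo('iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080', user="root")
    # pty=False plus nohup lets dbserver detach from the remote session instead of dying with it
    result = sudo('nohup ./dbserver > log/dbserver.out 2>&1 &', user="root", pty=False)
    if result.failed:
        print "Running dbserver failed"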
I'm running some deployment tasks with Fabric that need to check out/update a Mercurial repository on the machine and then perform the appropriate copying/configuration.
Every time I instantiate a new machine (we're currently using EC2 for our infrastructure), or when I run hg pull on the machine, it asks for my ssh key passphrase, which is a bit annoying when we need to initialize a dozen machines at a time.
I've tried to run ssh-add in Fabric when the new EC2 instance is initialized, but it seems that ssh-agent isn't running for that shell, and I get a "Could not open a connection to your authentication agent." message in Fabric's output.
How would I make ssh-add work when connected to the instance by the Fabric script?
A comment on fabric's issue tracker solved this for me. It's a modified version of the lincolnloop solution. Using this "run" instead of fabric's will pipe your commands through ssh locally, allowing your local ssh-agent to provide the keys.
from fabric.api import env, roles, local, output
from fabric.operations import _shell_escape
def run(command, shell=True, pty=True):
    """
    Helper function.
    Runs a command with SSH agent forwarding enabled.

    Note:: Fabric (and paramiko) can't forward your SSH agent.
    This helper uses your system's ssh to do so.
    """
    real_command = command
    if shell:
        cwd = env.get('cwd', '')
        if cwd:
            cwd = 'cd %s && ' % _shell_escape(cwd)
        real_command = '%s "%s"' % (env.shell,
            _shell_escape(cwd + real_command))
    if output.debug:
        print("[%s] run: %s" % (env.host_string, real_command))
    elif output.running:
        print("[%s] run: %s" % (env.host_string, command))
    local("ssh -A %s '%s'" % (env.host_string, real_command))
Please note that I'm running Fabric 1.3.2, and this fix won't be needed much longer.