SSH Command using Popen() odd results - python

I've been using the following code for a while now; however, recently (in the last month) it's been failing:
# Make sure WireGuard starts at boot
systemd_cmd = 'ssh %s %s@%s "systemctl enable wg-quick@wg0"' % (SSH_OPTS, SSH_USER, get_gw_ret['pub_ipv4_addr'])
p = subprocess.Popen(start_wg_cmd,shell=True,stdout=subprocess.PIPE,stderr=subprocess.PIPE)
systemd_out = p.communicate()[1]
systemd_out_message = str(systemd_out)[2:-1]
A successful run of systemctl enable wg-quick@wg0 is the goal. The expected output is something like:
Created symlink /etc/systemd/system/multi-user.target.wants/wg-quick@wg0.service → /lib/systemd/system/wg-quick@.service.
However these days it returns:
wg-quick: `wg0' already exists\n
I've narrowed it down to the above Python code, since executing this command directly in a bash shell is always successful. The complete command looks like:
ssh -o StrictHostKeyChecking=no root@10.0.0.8 "systemctl enable wg-quick@wg0"
Any ideas what might be happening?

Someone pointed out my typo:
- p = subprocess.Popen(start_wg_cmd,shell=True,stdout=subprocess.PIPE,stderr=subprocess.PIPE)
+ p = subprocess.Popen(systemd_cmd,shell=True,stdout=subprocess.PIPE,stderr=subprocess.PIPE)
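For reference, here is the corrected block. (The trailing decode().strip() is my suggestion rather than the original code; it is a cleaner way to turn the bytes into text than the str(...)[2:-1] slicing.)
# Make sure WireGuard starts at boot
systemd_cmd = 'ssh %s %s@%s "systemctl enable wg-quick@wg0"' % (SSH_OPTS, SSH_USER, get_gw_ret['pub_ipv4_addr'])
p = subprocess.Popen(systemd_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
systemd_out = p.communicate()[1]  # stderr; systemctl prints the "Created symlink" message there
systemd_out_message = systemd_out.decode().strip()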
Thanks for reading!

Related

How to execute 'su' command using parallel-ssh

I want to log in to two hosts using parallel-ssh and execute the su command. Then I want to confirm that I am the root user by printing the output of whoami.
Code:
hosts = ['myHost1', 'myHost2']
client = ParallelSSHClient(hosts, user='myUser', password='myPassword')
output = client.run_command('su')
for host in output:
    stdin = output[host].stdin
    stdin.write('rootPassword\n')
    stdin.flush()
client.join(output)
output = client.run_command('whoami')
for host, host_output in output.items():
    for line in host_output.stdout:
        print("Host [%s] - %s" % (host, line))
Result:
Host [myHost1] - myUser
Host [myHost2] - myUser
Obviously, I expect root in the output. I am following the documentation.
I've tried using all different line endings instead of \n and nothing has changed.
How can I execute the su command using parallel-ssh?
Try this:
def exec_command(hosts):
    strr = ""
    client = ParallelSSHClient(hosts, user='admin', password='admin_password')
    cmd = 'echo root_password | su -c "command" root'
    output = client.run_command(cmd)
    client.join()
    for host_out in output:
        for line in host_out.stdout:
            strr += line + " "
    return strr
The key line is:
'echo root_password | su -c "command" root'
Try putting sudo=True at the end of run_command:
output = client.run_command(<..>, sudo=True)
as shown in the docs.
It turns out that what I am trying to do is not achievable.
The first problem
I found in this post that each command runs in its own channel. That means that even if su succeeded, it wouldn't affect the second command. The author of the post recommends running
su -c whoami - root
The second problem
I managed to debug the problem further by changing host_output.stdout to host_output.stderr. It turned out that I was receiving an error which previously was not being shown on the terminal:
standard in must be a tty
Possible solutions to this problem are here. They didn't work for me but might work for you.
For me the workaround was to allow root login on all my hosts, and then log in with parallel-ssh as root directly, with all the rights already in place.
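For reference, a sketch of that final setup, logging in as root directly. The use_pty flag is a real run_command option in parallel-ssh and avoids the "standard in must be a tty" error; treat this snippet as untested, and note it follows the dict-style output of the parallel-ssh version used in the question:
from pssh.pssh_client import ParallelSSHClient  # import path for parallel-ssh 1.x; assumed

hosts = ['myHost1', 'myHost2']
# log in as root directly, so no su is needed
client = ParallelSSHClient(hosts, user='root', password='rootPassword')

# use_pty=True requests a pseudo-tty on the remote side
output = client.run_command('whoami', use_pty=True)
client.join(output)
for host, host_output in output.items():
    for line in host_output.stdout:
        print("Host [%s] - %s" % (host, line))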

how to type sudo password when using subprocess.call?

I defined a function that switches my proxy settings every now and then.
The problem is that I want it to run in a loop without manual intervention, but when I execute the program with sudo it runs smoothly the first time, then asks for my sudo password the second time. Here is the bit of code:
def ProxySetting(Proxy):
    print "ProxySetting(Proxy)"
    call("networksetup -setwebproxy 'Wi-Fi' %s on" % Proxy, shell=True)
    call("networksetup -setsecurewebproxy 'Wi-Fi' %s on" % Proxy, shell=True)
    call("networksetup -setftpproxy 'Wi-Fi' %s on" % Proxy, shell=True)
I could use threading, but I'm sure there is a way of doing this that won't cause problems. How can I hard-code my sudo password so that it is supplied at the beginning of the function?
Here is how you can execute a sudo command without an interactive prompt asking you to type your password:
from subprocess import call
pwd='my password'
cmd='ls'
call('echo {} | sudo -S {}'.format(pwd, cmd), shell=True)
Another method of passing your password to a shell command through Python that wouldn't involve it showing up in any command history or ps output is:
p = subprocess.Popen(['sudo', '-S', self.resubscribe_script], stdin=subprocess.PIPE)  # -S makes sudo read the password from stdin
p.communicate('{}\n'.format(self.sudo_password))
Note that using communicate will only allow one input to be given to stdin; there are other methods for getting a reusable input.
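If you need to write to the process more than once, hold the pipe open and write to it directly instead of using communicate. A minimal sketch (the script path and password here are placeholders):
import subprocess

# -S makes sudo read the password from stdin
p = subprocess.Popen(['sudo', '-S', '/usr/local/bin/resubscribe.sh'],  # placeholder path
                     stdin=subprocess.PIPE)
p.stdin.write('my password\n')
p.stdin.flush()
# ... later, write any further input the command expects ...
p.stdin.close()
p.wait()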

returned non-zero exit status 3 python2.7 subprocess check_output

I have written some Python code that makes a subprocess call to execute a curl command; however, I am getting an error: Command '['sh', '/tests/curlhttp.sh', 'http://www.bbc.co.uk', '80']' returned non-zero exit status 3. I have also tried running that command via the terminal on my Linux box and it seems to be OK. Here is my Python script:
def RunCURL(command):
    result = []
    # I get an error when running this
    output = check_output(command.split(" "), stderr=subprocess.STDOUT)
    print output
    # loop through and create a list of lists
    for line in output.splitlines():
        if "=" in line and "time_total" not in line:
            sublist = line.split("=")[0].strip()
            print sublist + " hello this is curl"
            result.append(sublist)
    return result
and here is my curl script I am trying to execute:
#!/bin/bash
curl -w '\ncontent_type=%{content_type}\nfilename_effective=%{filename_effective}\nftp_entry_path=%{ftp_entry_path}\nhttp_code=%{http_code}\nhttp_connect=%{http_connect}\nlocal_ip=%{local_ip}\nlocal_port=%{local_port}\nnum_connects=%{num_connects}\nnum_redirects=%{num_redirects}\nredirect_url=%{redirect_url}\nremote_ip=%{remote_ip}\nremote_port=%{remote_port}\nsize_download=%{size_download}\nsize_header=%{size_header}\nsize_request=%{size_request}\nsize_upload=%{size_upload}\nspeed_download=%{speed_download}\nspeed_upload=%{speed_upload}\nssl_verify_result=%{ssl_verify_result}\ntime_appconnect=%{time_appconnect}\ntime_connect=%{time_connect}\ntime_namelookup=%{time_namelookup}\ntime_pretransfer=%{time_pretransfer}\ntime_redirect=%{time_redirect}\ntime_starttransfer=%{time_starttransfer}\ntime_total=%{time_total}\nurl_effective=%{url_effective}\n\n' -o /dev/null -s $1:$2
I took this curl script from this blog and only changed the address field so the script accepts a URL and a port number: http://blog.kenweiner.com/2014/11/http-request-timings-with-curl.html
This is what I get when running the curl script copied from the blog itself straight into the terminal
content_type=text/html; charset=UTF-8
filename_effective=/dev/null
ftp_entry_path=
http_code=302
http_connect=000
local_ip=10.250.8.99
local_port=60839
num_connects=1
num_redirects=0
redirect_url=https://www.google.co.uk/?gfe_rd=cr&ei=_7gdWOrCLrH38Af1qoKIBw
remote_ip=216.58.204.36
remote_port=443
size_download=262
size_header=258
size_request=78
size_upload=0
speed_download=3535.000
speed_upload=0.000
ssl_verify_result=0
time_appconnect=0.062
time_connect=0.013
time_namelookup=0.001
time_pretransfer=0.062
time_redirect=0.000
time_starttransfer=0.074
time_total=0.074
url_effective=https://www.google.com/
When I run the script from a file, I get this:
content_type=
filename_effective=/dev/null
ftp_entry_path=
http_code=000
http_connect=000
local_ip=
local_port=0
num_connects=0
num_redirects=0
redirect_url=
remote_ip=
remote_port=0
size_download=0
size_header=0
size_request=0
size_upload=0
speed_download=0.000
speed_upload=0.000
ssl_verify_result=0
time_appconnect=0.000
time_connect=0.000
time_namelookup=0.000
time_pretransfer=0.000
time_redirect=0.000
time_starttransfer=0.000
time_total=0.000
url_effective=https://www.google.com/
Looking at the curl error codes, it looks like 3 means your URL is malformed. Does it work when you run it outside of Python?
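One way to see what curl is actually complaining about is to catch the exception and print the captured output. A minimal sketch for Python 2.7 (the command list mirrors the one in the error message):
import subprocess
from subprocess import check_output, CalledProcessError

try:
    output = check_output(['sh', '/tests/curlhttp.sh', 'http://www.bbc.co.uk', '80'],
                          stderr=subprocess.STDOUT)
    print output
except CalledProcessError as e:
    # e.returncode is curl's exit status; 3 means "URL malformed"
    print "exit status %d, output: %s" % (e.returncode, e.output)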

subprocess.popen seems to fail when run from crontab

I'm running a script from crontab that will just ssh and run a command and store the results in a file.
The function that seems to be failing is subprocess.Popen.
Here is the python function:
def _executeSSHCommand(sshcommand, user, node):
    '''
    Simple function to execute an ssh command on a remote node.
    '''
    sshunixcmd = '/usr/bin/ssh %s@%s \'%s\'' % (user, node, sshcommand)
    process = subprocess.Popen([sshunixcmd],
                               shell=True,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE)
    process.wait()
    result = process.stdout.readlines()
    return result
When it's run from the command line, it executes correctly, but from cron it fails with the error message below.
Here are the crontab entries:
02 * * * * /home/matt/scripts/check-diskspace.py >> /home/matt/logs/disklog.log
Here are the errors:
Sep 23 17:02:01 timmy CRON[13387]: (matt) CMD (/home/matt/scripts/check-diskspace.py >> /home/matt/logs/disklog.log)
Sep 23 17:02:01 timmy CRON[13386]: (CRON) error (grandchild #13387 failed with exit status 2)
I'm going blind trying to find exactly where I have gone so wrong. Any ideas?
The cron PATH is very limited. You should either use the absolute path to ssh (/usr/bin/ssh) or set PATH in the first line of your crontab.
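For example, at the top of the crontab (these are typical default paths; adjust to your system):
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
02 * * * * /home/matt/scripts/check-diskspace.py >> /home/matt/logs/disklog.log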
You probably need to pass ssh the -i argument to tell ssh to use a specific key file. The problem is that your environment is not set up to tell ssh which key to use.
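For example, in the function above (the key path is an assumption; point it at whatever key you normally use):
sshunixcmd = '/usr/bin/ssh -i /home/matt/.ssh/id_rsa %s@%s \'%s\'' % (user, node, sshcommand)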
The fact that you're using python here is a bit of a red herring.
For everything ssh-related in python, you might consider using paramiko. Using it, the following code should do what you want.
import paramiko
client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect(node, username=user)
stdout = client.exec_command(ssh_command)[1]
return stdout.readlines()
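If you also need stderr or the exit status, paramiko exposes those as well. A minimal sketch (the host and user names are taken from the question, and are assumptions here):
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('foo05.bar.edu', username='hutchinson')
stdin, stdout, stderr = client.exec_command('who')
print stdout.read()                      # command output
print stderr.read()                      # anything written to stderr
print stdout.channel.recv_exit_status()  # remote exit code
client.close()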
When running python scripts from cron, the environment PATH can be a hangup, as user1652558 points out.
To expand on this answer with example code to add custom PATH values to the environment for a subprocess call:
import os
import subprocess

# whatever custom PATH values you need
my_path = "/some/custom/path1:/some/custom/path2"

# append the custom values to the current PATH settings
my_env = os.environ.copy()
my_env["PATH"] = my_path + ":" + my_env["PATH"]

# subprocess call
resp = subprocess.check_output([cmd], env=my_env, shell=True)

Interface with remote computers using Python

I've just become the system admin for my research group's cluster and, in this respect, am a novice. I'm trying to make a few tools to monitor the network and need help getting started implementing them with python (my native tongue).
For example, I would like to view who is logged onto remote machines. By hand, I'd ssh in and run who, but how would I get this info into a script for manipulation? Something like:
import remote_info as ri
ri.open("foo05.bar.edu")
ri.who()
Out[1]:
hutchinson tty7 2009-08-19 13:32 (:0)
hutchinson pts/1 2009-08-19 13:33 (:0.0)
Similarly for things like cat /proc/cpuinfo to get the processor information of a node. A starting point would be really great. Thanks.
Here's a simple, cheap solution to get you started:
from subprocess import *
p = Popen('ssh servername who', shell=True, stdout=PIPE)
p.wait()
print p.stdout.readlines()
which returns (e.g.):
['usr pts/0 2009-08-19 16:03 (kakapo)\n',
'usr pts/1 2009-08-17 15:51 (kakapo)\n',
'usr pts/5 2009-08-17 17:00 (kakapo)\n']
and for cpuinfo:
p = Popen('ssh servername cat /proc/cpuinfo', shell=True, stdout=PIPE)
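To pull a single field out of that output, a quick parse might look like this (a sketch, reusing 'servername' from above):
p = Popen('ssh servername cat /proc/cpuinfo', shell=True, stdout=PIPE)
p.wait()
for line in p.stdout:
    if line.startswith('model name'):
        # e.g. prints the CPU model string once per core
        print line.split(':', 1)[1].strip()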
I've been using Pexpect, which lets you ssh into machines, send commands, read the output, and react to it, with success. I even started an open-source project around it, Proxpect - which hasn't been updated in ages, but I digress...
The pexpect module can help you interface with ssh. More or less, here is what your example would look like.
child = pexpect.spawn('ssh servername')
child.expect('Password:')
child.sendline('ABCDEF')
child.sendline('who')
child.expect('\$')      # wait for the shell prompt; sendline itself only returns a byte count
output = child.before   # everything printed before the prompt
If your needs outgrow simple "ssh remote-host.example.org who", there is an awesome Python library called RPyC. It has a so-called "classic" mode which lets you execute Python code over the network almost transparently, in a few lines of code. It is a very useful tool for trusted environments.
Here's an example from Wikipedia:
import rpyc
# assuming a classic server is running on 'hostname'
conn = rpyc.classic.connect("hostname")
# runs os.listdir() and os.stat() remotely, printing results locally
def remote_ls(path):
    ros = conn.modules.os
    for filename in ros.listdir(path):
        stats = ros.stat(ros.path.join(path, filename))
        print "%d\t%d\t%s" % (stats.st_size, stats.st_uid, filename)

remote_ls("/usr/bin")
If you're interested, there's a good tutorial on their wiki.
But, of course, if you're perfectly fine with ssh calls using Popen or just don't want to run a separate RPyC daemon, then this is definitely overkill.
This covers the bases. Notice the use of sudo for things that needed more privileges. We configured sudo to allow those commands for that user without needing a password typed.
Also, keep in mind that you should run ssh-agent to make this "make sense". But all in all, it works really well. Running deploy-control httpd configtest will check the Apache configuration on all the remote servers.
#!/usr/local/bin/python
import subprocess
import sys
# The user@host: for the SourceURLs (NO TRAILING SLASH)
RemoteUsers = [
    "deploy@host1.example.com",
    "deploy@host2.appcove.net",
]
###################################################################################################
# Global Variables
Arg = None
# Implicitly verified below in if/else
Command = tuple(sys.argv[1:])
ResultList = []
###################################################################################################
for UH in RemoteUsers:
    print "-"*80
    print "Running %s command on: %s" % (Command, UH)
    #----------------------------------------------------------------------------------------------
    if Command == ('httpd', 'configtest'):
        CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd configtest'))
    #----------------------------------------------------------------------------------------------
    elif Command == ('httpd', 'graceful'):
        CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd graceful'))
    #----------------------------------------------------------------------------------------------
    elif Command == ('httpd', 'status'):
        CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd status'))
    #----------------------------------------------------------------------------------------------
    elif Command == ('disk', 'usage'):
        CommandResult = subprocess.call(('ssh', UH, 'df -h'))
    #----------------------------------------------------------------------------------------------
    elif Command == ('uptime',):
        CommandResult = subprocess.call(('ssh', UH, 'uptime'))
    #----------------------------------------------------------------------------------------------
    else:
        print
        print "#"*80
        print
        print "Error: invalid command"
        print
        HelpAndExit()
    #----------------------------------------------------------------------------------------------
    ResultList.append(CommandResult)
    print

###################################################################################################
if any(ResultList):
    print "#"*80
    print "#"*80
    print "#"*80
    print
    print "ERRORS FOUND. SEE ABOVE"
    print
    sys.exit(1)
else:
    print "-"*80
    print
    print "Looks OK!"
    print
    sys.exit(0)
Fabric is a simple way to automate tasks like this. The version I'm currently using allows you to wrap up commands like so:
run('whoami', fail='ignore')
You can specify config options (config.fab_user, config.fab_password) for each machine you need, if you want to automate username/password handling.
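A rough sketch of a fabfile for that older (nongnu) Fabric; the config.fab_* names and the fail keyword come from this answer, while everything else (including fab_hosts) is an assumption, not verified against that API:
# fabfile.py -- sketch for the old nongnu Fabric, not the modern Fabric API
config.fab_hosts = ['foo05.bar.edu']   # assumed option name
config.fab_user = 'deploy'
config.fab_password = 'secret'

def who():
    "Show who is logged in on each host."
    run('who', fail='ignore')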
More info on Fabric here:
http://www.nongnu.org/fab/
There is a new version which is more Pythonic; I'm not sure whether that will be better for you in this case... it works fine for me at present...
