I am currently working on a script where, when I launch an EC2 instance, I send a command over Paramiko to rename the hostname. Because this is a custom AMI, I cannot do it through boto3 or the AWS CLI, so I need to do it via an SSH command.
The problem I am running into is that Paramiko seems to fail at passing my specific command. It passes other commands just fine, so I am assuming I am hitting some limitation of either Paramiko or Python and cannot seem to troubleshoot it. This is a RHEL instance, so editing the network file is the only way I can think of to do this.
If I run the command, as is, in a terminal on the host, it works. So something between Paramiko and this command seems to be the blocker.
Here is my sample script that should work, but seems to fail at running the command.
#!/usr/bin/env python
import boto3
import time
import subprocess
import paramiko
import StringIO
c = paramiko.SSHClient()
c.set_missing_host_key_policy(paramiko.AutoAddPolicy())
c.connect(hostname='12.34.56.78', username='username', key_filename='/Users/mallachar/Downloads/testkey.pem')
stdin , stdout, stderr = c.exec_command('sudo sed -i -E "s/^HOSTNAME.*/HOSTNAME=testhost.company/" /etc/sysconfig/network')
print stdout.read()
print stderr.read()
c.close()
Here is the result of printing stdout and stderr:
sudo: sorry, you must have a tty to run sudo
Pretty simple, I had to add this to the command.
get_pty=True
so
stdin , stdout, stderr = c.exec_command('sudo sed -i -E "s/^HOSTNAME.*/HOSTNAME=testhost.company/" /etc/sysconfig/network',get_pty=True)
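For completeness, a minimal sketch of the whole exchange with get_pty=True, using the same placeholder host, username, and key path as the question. Whether sudo also asks for a password once a pty is allocated depends on this AMI's sudoers configuration, so the password lines below are only a hypothetical option, not part of the original fix.
import paramiko

c = paramiko.SSHClient()
c.set_missing_host_key_policy(paramiko.AutoAddPolicy())
c.connect(hostname='12.34.56.78', username='username',
          key_filename='/Users/mallachar/Downloads/testkey.pem')

# Request a pseudo-terminal so sudo no longer refuses with "you must have a tty".
stdin, stdout, stderr = c.exec_command(
    'sudo sed -i -E "s/^HOSTNAME.*/HOSTNAME=testhost.company/" /etc/sysconfig/network',
    get_pty=True)

# Only needed if sudo also prompts for a password on the pty (hypothetical):
# stdin.write('sudo_password\n')
# stdin.flush()

print(stdout.read())
print(stderr.read())
c.close()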
Related
Consider this python script:
import subprocess
nc = subprocess.Popen(["/bin/bash"], stdin=subprocess.PIPE, text=True)
nc.stdin.write("nc localhost 2222\n")
nc.stdin.write("pwd\n")
When I listen with netcat as nc -lnvp 2222, I successfully connect and the string pwd is sent; nothing more happens, of course.
Now I get a non-stable PHP reverse shell (a completely new event) and connect to it through netcat successfully. I execute this script to upgrade the shell and print the current directory. By the way, that listener is another Popen instance.
import subprocess
nc = subprocess.Popen(["/bin/bash"], stdin=subprocess.PIPE, text=True)
nc.stdin.write("nc localhost 2222\n")
nc.stdin.write('python3 -c "import pty;pty.spawn(\'/bin/bash\')"\n')
nc.stdin.write('pwd\n')
Now when I execute that Python script, I expected the input to go through netcat, get executed in that new bash TTY, spawn a stable shell, and pass pwd to return the current directory. But the script only works up to spawning the stable shell, and then stdin input doesn't go through nc, or something else happens that I'm not aware of.
What's happening here?
Edit: I need to be able to run multiple commands. Using subprocess.communicate(input=<command>) causes a deadlock and can't accept further stdin input.
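One thing worth ruling out (an assumption, not a confirmed diagnosis): stdin.write() on a pipe is buffered, so each command may not reach bash (and then netcat) until the pipe is flushed or closed. A minimal sketch that flushes after every write and keeps stdin open for further commands:
import subprocess
import time

nc = subprocess.Popen(["/bin/bash"], stdin=subprocess.PIPE, text=True)

def send(line):
    # Write the command and flush so it actually reaches bash (and then nc)
    # right away instead of sitting in the pipe's buffer.
    nc.stdin.write(line + "\n")
    nc.stdin.flush()

send("nc localhost 2222")
time.sleep(1)  # crude pause to let the connection come up
send('python3 -c "import pty;pty.spawn(\'/bin/bash\')"')
time.sleep(1)
send("pwd")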
I have two Raspberry Pis. I am trying to transfer files from one Pi to the other using scp. I am trying to do this through Python because the program that will be transferring files is a Python file.
Below is the shell script I have for the scp part (I blurred out the password and IP):
#!/bin/sh
sshpass -p ######## scp test.txt pi@IP:/home/pi
and below is the Python script that launches that shell script.
import subprocess
subprocess.call(['./ssh.sh'])
print("DONE")
For some reason the Python script doesn't kick back any errors and hits the print line, but the file is not transferred. When I run the scp command outside of Python, the file transfers just fine. Am I doing something incorrect here?
****EDIT****
I can't even get subprocess to work with this, which is why I ended up using a shell script. Here is my attempt with subprocess:
import subprocess
subprocess.call("sshpass -p ######## scp test.txt pi@IP:/home/pi")
print("DONE")
Again I get no errors, but the file is not transferred
****EDIT #2****
So I found out that because sshpass is being used, scp isn't prompting me to add the IP to known hosts; as a result the file simply isn't transferred at all. I need a way to add this acceptance into the script, i.e. I get the following if I launch the command without sshpass:
The authenticity of host 'IP (IP)' can't be established.
ECDSA key fingerprint is 13:91:24:8e:6f:21:98:1f:5b:3a:c8:42:7a:88:e9:91.
Are you sure you want to continue connecting (yes/no)?
I want to pass "yes\n" to this prompt, as well as the password afterwards. Is this possible?
For the first query
You can use subprocess.Popen to get the output (STDOUT) and error (STDERR) of the executed command.
import subprocess
cmd = 'sshpass -p ****** scp dinesh.txt root@256.219.210.135:/root'
p = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
print "Output is ",out
print "Error is ",err
If you execute the above code with a wrong password, you will get the output below:
[root@centos /]# python code.py
Output is
Error is Permission denied, please try again.
In this case, if the file is successfully transferred, then there is no output.
If you execute a command like 'ls -l', then its output will be printed.
For your second query (****EDIT #2****)
Options are :
Passwordless SSH. Check this.
Pexpect (see the sketch below)
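A minimal pexpect sketch, assuming the same scp command as the question; the password and host IP are placeholders, and the prompt patterns are regular expressions that may need adjusting for your SSH client:
import pexpect

# Hypothetical values; substitute the real password and host IP.
password = '########'
child = pexpect.spawn('scp test.txt pi@IP:/home/pi')

# Answer the host-key prompt if it appears, then the password prompt.
i = child.expect(['continue connecting', '[Pp]assword:'])
if i == 0:
    child.sendline('yes')
    child.expect('[Pp]assword:')
child.sendline(password)
child.expect(pexpect.EOF)  # wait for the transfer to finish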
I found a much easier way of tackling all of this:
sshpass -p ###### scp -o StrictHostKeyChecking=no test.txt pi@IP:/home/pi
The -o switch lets me automatically store the IP in known_hosts, so I do not need to interact with the shell at all. The interaction from Python to the shell works with that addition; doing this solely through subprocess also works (see the sketch below).
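A minimal sketch of the subprocess-only variant, passing the command as a list so no wrapper shell script is needed; the password, file name, and IP are placeholders:
import subprocess

# Placeholders: substitute the real password and host IP.
subprocess.call(['sshpass', '-p', '########',
                 'scp', '-o', 'StrictHostKeyChecking=no',
                 'test.txt', 'pi@IP:/home/pi'])
print("DONE")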
If you don't mind trying other approaches, it is worth using SCPClient from the scp module.
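A minimal sketch with Paramiko plus the scp package (pip install scp); the host, username, and password here are placeholders:
import paramiko
from scp import SCPClient

# Placeholders: replace with the real host, user, and credentials.
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('IP', username='pi', password='########')

# Reuse the SSH transport for the file copy.
scp = SCPClient(ssh.get_transport())
scp.put('test.txt', '/home/pi')
scp.close()
ssh.close()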
I am trying to monitor the available physical disc space of a remote machine using a Python script, which executes the df -h . command using subprocess.Popen.
import subprocess
import time
command = 'ssh remoteserver "df -h ."'
while True:
    proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, err = proc.communicate()
    print output
    print err
    time.sleep(60)
The script runs fine and prints the output to the terminal when run from command line
$> python2.7 script.py
Filesystem Size Used Avail Use% Mounted on
remoteserver:/home/user
555G 447G 109G 81% /home
The script does not produce any output and seems to be blocking when it is started with the nohup command.
$> nohup python2.7 script.py &
I would like the script to work and fetch the disc space of the remote machine when started under nohup.
I'm not 100% sure of the underlying issue here, but when you invoke nohup in the shell, it disconnects stdin/stdout from the terminal process, which I suspect is causing some of the interactions you're seeing.
Given that you're doing this from a remote machine, I'd actually recommend you look at using something like Fabric as a library to do what you're after. It's pretty straightforward, and does most of the handling of terminal sessions as well as closing things down nicely for you when you're complete.
something like:
from fabric import api
from fabric.api import env
import fabric
env.host_string = '%s@%s' % (username, remote_host)
env.disable_known_hosts = True
env.password = password
fabric.state.output['stdout'] = False
fabric.state.output['stderr'] = False
results = api.run('df -h')
You might try sending stdin=subprocess.PIPE to the subprocess command, then calling proc.stdin.close() on the next line, before the communicate() call. Or you can try changing the command to 'ssh remoteserver "df -h ." </dev/null'. Others report using FNULL = open(os.devnull, 'r') and passing in FNULL to the stdin= argument, but I'm not sure if you need to call FNULL.close() after or not.
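A minimal sketch of the os.devnull variant, assuming the same command string and loop as the question's script:
import os
import subprocess
import time

command = 'ssh remoteserver "df -h ."'
while True:
    # Give ssh an empty stdin so it cannot block waiting for terminal input.
    with open(os.devnull, 'r') as devnull:
        proc = subprocess.Popen(command, shell=True, stdin=devnull,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        output, err = proc.communicate()
    print(output)
    print(err)
    time.sleep(60)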
SSH is most likely waiting for input for some reason when it is run from nohup. Perhaps it is unable to authenticate in the nohup environment and is asking for password input?
To make sure SSH is not waiting for input, try adding -o "BatchMode yes" to the ssh command and see if there are some clues in the output/error from the subprocess communicate call.
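For instance, only the command string in the question's script would change (a sketch of the suggested option, nothing else altered):
# BatchMode makes ssh fail fast instead of hanging on a password prompt.
command = 'ssh -o "BatchMode yes" remoteserver "df -h ."'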
I am writing a GUI which uses SSH commands. I tried to use the subprocess module to call ssh and set the SSH_ASKPASS environment variable so that my application can pop up a window asking for the SSH password. However, I cannot make ssh read the password using the given SSH_ASKPASS command: it always prompts in the terminal window, regardless of how I set the DISPLAY, SSH_ASKPASS, or TERM environment variables or how I pipe the standard input/output. How can I make sure that ssh is detached from the current TTY and uses the given program to read the password?
My test code was:
#!/usr/bin/env python
import os
import subprocess
env = dict(os.environ)
env['DISPLAY'] = ':9999' # Fake value (trying in OS X and Windows)
del env['TERM']
env['SSH_ASKPASS'] = '/opt/local/libexec/git-core/git-gui--askpass'
p = subprocess.Popen(['ssh', '-T', '-v', 'user@myhost.com'],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
env=env
)
p.communicate()
SSH uses the SSH_ASKPASS variable only if the process is really detached from the TTY (redirecting stdin and setting environment variables is not enough). To detach a process from the console it should fork and call os.setsid(). So the first solution I found was:
# Detach process
pid = os.fork()
if pid == 0:
    # Ensure that process is detached from TTY
    os.setsid()
    # call ssh from here
else:
    print "Waiting for ssh (pid %d)" % pid
    os.waitpid(pid, 0)
    print "Done"
There is also an elegant way to do this using the subprocess module: in the preexec_fn argument we can pass a Python function that is called in the subprocess before executing the external command. So the solution for the question is one extra line:
env = {'SSH_ASKPASS':'/path/to/myprog', 'DISPLAY':':9999'}
p = subprocess.Popen(['ssh', '-T', '-v', 'user@myhost.com'],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
env=env,
preexec_fn=os.setsid
)
Your problem is that SSH detects your TTY and talks to it directly (as is clearly stated in the man-page). You can try and run ssh without a terminal - the man page suggests it might be necessary to redirect stdin to /dev/null for ssh to think it has no terminal.
You can also use pexpect for this; it's known to work with SSH - example usage.
The Right Way (TM) to do what you're trying to do is either:
Use a library specifically for using SSH in python (for example twisted conch or paramiko)
Use public and private keys so that passwords will not be necessary
If you want a quick and dirty way of doing it for your own personal usage, you could enable passwordless login between these two machines by doing this in your terminal:
ssh-keygen -t rsa # generate a keypair (if you haven't done this already)
ssh-copy-id user@other_machine # copy your public key to the other machine
Then you can get ssh commands to go through (subprocess doesn't seem to accept a full ssh command passed as a single string) by creating a script (remember to mark it executable, e.g. chmod 755 my_script.sh) with the things you want, such as:
#!/bin/bash
ssh user@other_machine ls
and call it from your program:
import subprocess
response = subprocess.call("./my_script.sh")
print(response)
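For comparison, a minimal sketch that skips the wrapper script by passing the ssh command to subprocess as a list; this assumes the same passwordless key setup as above:
import subprocess

# With key-based auth already set up, no password prompt is needed here.
response = subprocess.call(['ssh', 'user@other_machine', 'ls'])
print(response)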
For production use of apps that need to be deployed on other people's machines, I'd go with abyx's approach of using an SSH library. Much simpler than messing with environment variables.
So I'm trying to get a process to run as a superuser from within a Python script using subprocess. In the IPython shell, something like
proc = subprocess.Popen('sudo apach2ctl restart',
shell=True, stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
works fine, but as soon as I stick it into a script I start getting: sudo: apach2ctl: command not found.
I would guess this is due to the way sudo handles environments on Ubuntu. (I've also tried sudo -E apche2ctl restart and sudo env path=$PATH apache2ctl restart, to no avail.)
So my question is basically: if I want to run apache2ctl restart as a superuser, prompting the user for the superuser password when required, how should I go about doing this? I have no intention of storing passwords in the script.
Edit:
I've tried passing in the commands as both a string and tokenized into a list. In the Python interpreter, with a string I'll get the password prompt properly (it still doesn't work in a Python script, as in my original problem); a list just gives the help screen for sudo.
Edit 2:
So what I gather is that while Popen will work with some commands just as strings when shell=True, it takes
proc = subprocess.Popen(['sudo','/usr/sbin/apache2ctl','restart'])
without 'shell=True' to get sudo to work.
Thanks!
Try:
subprocess.call(['sudo', 'apach2ctl', 'restart'])
The subprocess needs to access the real stdin/out/err for it to be able to prompt you, and read in your password. If you set them up as pipes, you need to feed the password into that pipe yourself.
If you don't define them, then it grabs sys.stdout, etc...
Try giving the full path to apache2ctl.
Another way is to make your user a password-less sudo user.
Type the following on command line:
sudo visudo
Then add the following and replace the <username> with yours:
<username> ALL=(ALL) NOPASSWD: ALL
This will allow the user to execute sudo commands without being asked for a password (including applications launched by that user). This might be a security risk, though.
I used this with Python 3.5, using the subprocess module. Using the password like this is very insecure.
The subprocess module takes the command as a list of strings, so either create the list beforehand using split() or pass the whole list later. Read the documentation for more information.
What we are doing here is echoing the password and then, through a pipe, passing it on to sudo via the '-S' argument.
#!/usr/bin/env python
import subprocess
sudo_password = 'mysecretpass'
command = 'apach2ctl restart'
command = command.split()
cmd1 = subprocess.Popen(['echo',sudo_password], stdout=subprocess.PIPE)
cmd2 = subprocess.Popen(['sudo','-S'] + command, stdin=cmd1.stdout, stdout=subprocess.PIPE)
output = cmd2.stdout.read().decode()
The safest way to do this is to prompt for the password beforehand and then pipe it into the command. Prompting for the password will avoid having the password saved anywhere in your code and it also won't show up in your bash history. Here's an example:
from getpass import getpass
from subprocess import Popen, PIPE
password = getpass("Please enter your password: ")
# sudo requires the flag '-S' in order to take input from stdin
proc = Popen("sudo -S apach2ctl restart".split(), stdin=PIPE, stdout=PIPE, stderr=PIPE)
# Popen only accepts byte-arrays so you must encode the string
proc.communicate(password.encode())
You have to use Popen like this:
cmd = ['sudo', 'apache2ctl', 'restart']
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
It expects a list (don't combine a list argument with shell=True; on POSIX, only the first element would be run as the command).
To run a command as root, and pass it the password at the command prompt, you could do it like so:
import subprocess
from getpass import getpass
ls = "sudo -S ls -al".split()
cmd = subprocess.run(
ls, stdout=subprocess.PIPE, input=getpass("password: "), encoding="ascii",
)
print(cmd.stdout)
For your example, probably something like this:
import subprocess
from getpass import getpass
restart_apache = "sudo /usr/sbin/apache2ctl restart".split()
proc = subprocess.run(
restart_apache,
stdout=subprocess.PIPE,
input=getpass("password: "),
encoding="ascii",
)
I tried all the solutions, but they did not work. I wanted to run long-running tasks with Celery, but for these I needed to run the sudo chown command with subprocess.call().
This is what worked for me:
To add safe environment variables, type the following on the command line:
export MY_SUDO_PASS="user_password_here"
To test that it's working, type:
echo $MY_SUDO_PASS
> user_password_here
To have it set automatically in every new shell, add it to the end of this file:
nano ~/.bashrc
# .bashrc
...
# existing content:
  elif [ -f /etc/bash_completion ]; then
    . /etc/bash_completion
  fi
fi
...
export MY_SUDO_PASS="user_password_here"
You can add all your environment variables passwords, usernames, host, etc here later.
If your variables are ready, you can run the following. To update packages:
echo $MY_SUDO_PASS | sudo -S apt-get update
Or, to install Midnight Commander:
echo $MY_SUDO_PASS | sudo -S apt-get install mc
To start Midnight Commander with sudo:
echo $MY_SUDO_PASS | sudo -S mc
Or, from the Python shell (or Django/Celery), to change directory ownership recursively:
python
>>> import subprocess
>>> subprocess.call('echo $MY_SUDO_PASS | sudo -S chown -R username_here /home/username_here/folder_to_change_ownership_recursively', shell=True)
Hope it helps.
You can do it this way, catch errors, and even add variables to your commands:
from subprocess import Popen, PIPE

val = 'xy'
response = Popen(f"(sudo {val})", stderr=PIPE, stdout=PIPE, shell=True)
output, errors = response.communicate()
Hope this helps.