How to pass an argument to the subprocess that expects input - python

I would like to execute a "git push" command using a Python script. I was hoping to achieve this by using Python's subprocess.Popen() method. However, when I invoke the "git push" command, the ssh agent asks me for my passphrase in order to push any changes to the repository.
And here is my question: is there any way for me to pass my passphrase, which is input expected by the ssh process, not by git itself?
import subprocess

# with shell=False the command must be a list of arguments, not one string
process = subprocess.Popen(["git", "push"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, shell=False)
process.stdin.write(b"Passphrase\n")  # the pipe carries bytes, and ssh still won't read it (see below)
process.stdin.flush()
I acknowledge that it might be easier to get rid of the ssh-agent's passphrase prompt that appears every time I git push or pull; however, just for the sake of knowing, I was curious whether there is a way to achieve it like this.

Note that GitPython issue 559 mentioned in 2016:
As far as I know, it is not possible to use keys with passphrases with GitPython due to the way git is invoked by it.
It simply does not connect standard input, which usually would be required to read the passphrase. The preferred way to do it would be to use a specific key without passphrase.
If that is not an option, git will also invoke the program pointed to by GIT_ASKPASS to read a password for use with the ssh key.
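A hedged sketch of that askpass route (the helper path and passphrase are placeholders; note that for an SSH key passphrase it is ssh's own SSH_ASKPASS hook that matters, and ssh only consults it when no TTY is attached and DISPLAY is set):

import os
import subprocess

# Hypothetical helper at /path/to/askpass.py: an executable script that
# just prints the passphrase (storing it in plain text is insecure; sketch only):
#   #!/usr/bin/env python3
#   print("my-passphrase")

env = os.environ.copy()
env["GIT_ASKPASS"] = "/path/to/askpass.py"  # git's own credential prompts
env["SSH_ASKPASS"] = "/path/to/askpass.py"  # ssh's passphrase prompt
env["DISPLAY"] = ":0"         # ssh ignores SSH_ASKPASS unless DISPLAY is set
# start_new_session=True detaches the controlling TTY, so ssh falls back to
# SSH_ASKPASS (on OpenSSH 8.4+ you can set SSH_ASKPASS_REQUIRE=force instead)
subprocess.run(["git", "push"], env=env, start_new_session=True, check=True)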
There is however a possible workaround, initially applied to GPG:
How to sign commits using the GitPython package
See 03_sign_commit_using_the_gitpython_package.py.
But again, for GPG. Check if that can be adapted for SSH.

Related

asyncssh custom scp handler

I need to implement an SCP handler which would allow me to save files to a specific directory. For example, if a user typed scp [source] [destination], the script would copy [source] to temp/[destination].
So, all I need to do is change [destination] by adding temp/ as a prefix.
I have decided to implement a custom process handler which, as you can guess, accepts an asyncssh.SSHServerProcess. The problems I have faced are:
When I try to access the process.command attribute, I get scp -t [destination], whereas I typed scp [source] [destination] on the client side (note that [source] does not appear in process.command). I am not sure whether this is the proper way to obtain the command.
After I have transformed the command, how can I actually run scp? Given scp -t temp/[destination], how do I execute this command?
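A hedged sketch of one way to approach both points with asyncssh: the client-side scp spawns scp -t [destination] ("sink" mode) on the server, which is why [source] never appears in process.command, and you can then run the real scp binary locally and bridge its stdio to the SSH channel. Names below follow the question; the server would need to be started with encoding=None (e.g. asyncssh.listen(..., process_factory=handle_scp, encoding=None)) so the streams carry bytes, and the argument parsing is deliberately naive.

import asyncio
import asyncssh

async def _pump(reader, writer):
    # Copy bytes until EOF, then signal EOF on the write side.
    while True:
        data = await reader.read(4096)
        if not data:
            break
        writer.write(data)
        await writer.drain()
    writer.write_eof()

async def handle_scp(process: asyncssh.SSHServerProcess) -> None:
    # Naive parse: assumes exactly "scp -t <dest>" with no extra flags.
    args = (process.command or "").split()
    if args[:2] == ["scp", "-t"]:
        args[-1] = "temp/" + args[-1]  # prefix the destination

    # Run the real scp binary in sink mode and bridge its stdio to the
    # SSH channel so the client-side scp can speak the protocol with it.
    proc = await asyncio.create_subprocess_exec(
        *args, stdin=asyncio.subprocess.PIPE, stdout=asyncio.subprocess.PIPE)
    await asyncio.gather(
        _pump(process.stdin, proc.stdin),
        _pump(proc.stdout, process.stdout),
    )
    process.exit(await proc.wait())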

Using the ssh agent inside a python script

I'm pulling and pushing to a GitHub repository with a Python script. For the GitHub repository, I need to use an SSH key.
If I do this manually before running the script:
eval $(ssh-agent -s)
ssh-add ~/.ssh/myprivkey
everything works fine and the script works. But after a while the key apparently expires, and I have to run those two commands again.
The thing is, if I run those commands inside the Python script with os.system(cmd), it doesn't work; it only works if I do it manually.
I know this must be a messy way to use the ssh agent, but I honestly don't know how it works, and I just want the script to work, that's all
The script runs once an hour, just in case
The reason os.system() doesn't help here is that each call runs in its own child shell: the SSH_AUTH_SOCK and SSH_AGENT_PID variables printed by ssh-agent -s die with that shell and never reach your Python process or any later command. While the normal approach would be to run your Python script in a shell where the ssh agent is already running, you can also consider an alternative approach with sshify.py:
# This utility will execute the given command (by default, your shell)
# in a subshell, with an ssh-agent process running and your
# private key added to it. When the subshell exits, the ssh-agent
# process is killed.
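If you would rather keep it all inside the script, a minimal sketch of the same idea: start the agent from Python, parse the variables it prints, and pass them to every later subprocess (the key path is taken from the question; ssh-add will still prompt once for the passphrase on the terminal):

import os
import re
import subprocess

# "ssh-agent -s" prints lines like: SSH_AUTH_SOCK=/tmp/...; export SSH_AUTH_SOCK;
out = subprocess.check_output(["ssh-agent", "-s"], text=True)
env = os.environ.copy()
for name in ("SSH_AUTH_SOCK", "SSH_AGENT_PID"):
    match = re.search(name + r"=([^;]+);", out)
    if match:
        env[name] = match.group(1)

subprocess.run(["ssh-add", os.path.expanduser("~/.ssh/myprivkey")], env=env, check=True)
subprocess.run(["git", "pull"], env=env, check=True)  # later calls reuse the same env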
Consider defining the SSH key path for the github.com host in your SSH config file, as outlined here: https://stackoverflow.com/a/65791491/14648336
On Linux, create a file called config at ~/.ssh/ and add something similar to the above answer:
Host github.com
    HostName github.com
    User your_user_name
    IdentityFile ~/.ssh/your_ssh_priv_key_file_name
This saves starting an agent each time and also avoids the custom environment variables that some other SO answers rely on when using GitPython (you mention using Python).
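If you do stay in GitPython and want to point it at a specific key without touching the config file, a minimal sketch using its custom_environment context manager (repository path and key path are placeholders):

import git

repo = git.Repo("/path/to/repo")
ssh_cmd = "ssh -i ~/.ssh/your_ssh_priv_key_file_name"
with repo.git.custom_environment(GIT_SSH_COMMAND=ssh_cmd):
    repo.remotes.origin.pull()
    repo.remotes.origin.push()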

How to execute scp locally using fabric

I am using Fabric right now to upload a large directory of files to ~100 different servers. These servers don't have rsync installed, so that's out of the question. Right now I am using the upload_project method to accomplish this, which is working well on the test servers I have, but I have heard that FTP (even SFTP) isn't allowed on some of these servers and that I might also need to limit the bandwidth of the transfer.
To avoid these problems, I am trying to use scp; however, I am having some issues. I originally thought I could just go by the code in the rsync_project method and do something like local("scp -r %s %s" % (local_str, remote_str)). However, scp still wants a password. So I echoed env.password to scp, which worked, but then it needed me to type yes to accept the host key. I know I could just echo the password and echo yes to any key prompt, but I was wondering if there was some other way to accomplish this without all of the prompts. Also, sadly, the version of Fabric on the server (which I cannot update) is a bit behind (1.6-ish) and doesn't have the context-manager functionality that can handle prompts.
I was wondering if there was some other way to accomplish this without all of the prompts
The answer is not exactly related to Fabric. In the case of your scp command, you should look into the StrictHostKeyChecking option to skip the known-hosts key check. Note that scp's options must come before the source and destination. Example usage:
scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null $src $dest
It sounds like you've addressed the issue with the password prompt; was there anything else you were trying to solve here?
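For completeness, a hedged sketch that folds the password prompt, the host-key prompt, and a bandwidth cap into one call. It assumes sshpass is installed on the machine running Fabric; local_str and remote_str stand in for your paths, and be aware that passing env.password on the command line exposes it in the process list.

from fabric.api import env, local

local_str = "/path/to/local_dir"              # placeholder
remote_str = "user@host:/path/to/remote_dir"  # placeholder

# scp -l caps bandwidth in Kbit/s; the -o options suppress the host-key prompt
local("sshpass -p '%s' scp -r -l 8192 "
      "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null "
      "%s %s" % (env.password, local_str, remote_str))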

how to check if python script is being called remotely via ssh

So I'm writing a command line utility and I've run into a problem I'm curious about.
The utility can be called with file arguments or can read from sys.stdin. Originally I was using sys.stdin.isatty() to figure out whether data is being piped in; however, I found that if I call the utility remotely via ssh server utility, sys.stdin.isatty() returns False even though no actual data is being piped in.
As a workaround I'm using - as the file argument to force reading from stdin (e.g., echo "data_here" | utility -f -), but I'm curious to know whether there's a reliable way to tell the difference between a pipe getting data from a process and a pipe that's only open because the call came in via ssh.
Systems programming is not my forte, so I'm grateful for any help I can get from you guys.
You can tell if you're being invoked via SSH by checking your environment. If you're being invoked via an SSH connection, the environment variables SSH_CONNECTION and SSH_CLIENT will be set. You can test if they are set with, say:
if "SSH_CONNECTION" in os.environ:
# do something
Another option, if you wanted to just stick with your original approach of sys.stdin.isatty(), would be to allocate a pseudo-tty for the SSH connection. Normally SSH does this by default if you just SSH in for an interactive session, but not if you supply a command. However, you can force it to do so when supplying a command by passing the -t flag:
ssh -t server utility
However, I would caution you against doing either of these. As you can see, trying to detect whether you should accept input from stdin based on whether it's a TTY can cause some surprising behavior. It could also cause frustration from users if they wanted a way to interactively provide input to your program when debugging something.
The approach of adding an explicit - argument makes it a lot more explicit and less surprising which behavior you get. Some utilities also just use the lack of any file arguments to mean to read from stdin, so that would also be a less-surprising alternative.
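Note that argparse already understands the explicit-dash convention, so supporting it costs almost nothing; a tiny sketch:

import argparse
import sys

parser = argparse.ArgumentParser()
# argparse.FileType treats "-" as sys.stdin, so "utility -f -" reads the pipe
parser.add_argument("-f", "--file", type=argparse.FileType("r"), default=sys.stdin)
args = parser.parse_args()
data = args.file.read()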
According to this answer, the SSH_CLIENT or SSH_TTY environment variable should be set. Based on that, the following code should work:
import os

def running_ssh():
    return 'SSH_CLIENT' in os.environ or 'SSH_TTY' in os.environ
A more complete example would examine the parent processes to check if any of them are sshd which would probably require the psutil module.
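A sketch of that parent-process check with psutil, assuming the daemon's process is literally named sshd:

import psutil

def invoked_via_sshd():
    # Walk up the process tree looking for an sshd ancestor.
    proc = psutil.Process()
    while proc is not None:
        if proc.name() == "sshd":
            return True
        proc = proc.parent()
    return False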

Python - Capture exit status of command executed via SSH

I need a way to capture the exit status from a command run through SSH, and I would like that status to end up in a variable. I cannot seem to get it working, though.
Command would be something simple like:
os.system("ssh -qt hostname 'sudo yum list updates --security > /tmp/yum_update_packagelist.txt';echo $?")
Anyone have an idea? Everything I've tried has either not worked at all, or ended up giving me the exit status of the ssh command, not the underlying command.
You probably want to use an SSH library like paramiko (or spur, Fabric, etc.… just google/PyPI/SO-search for "Python SSH" to see all the options and pick the one that best matches your use case). There are demos included with paramiko that do exactly what you want to do.
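For instance, a minimal paramiko sketch (hostname and username are placeholders, and it assumes your key or agent is already set up; recv_exit_status returns the remote command's status directly):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("hostname", username="your_user")

# get_pty=True mirrors the original "ssh -t", in case sudo needs a TTY
stdin, stdout, stderr = client.exec_command(
    "sudo yum list updates --security > /tmp/yum_update_packagelist.txt",
    get_pty=True)
status = stdout.channel.recv_exit_status()  # blocks until the command exits
client.close()
print(status)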
If you insist on scripting the command-line ssh tool, you (a) almost certainly want to use subprocess instead of os.system (as the os.system docs explicitly say), and (b) will need to do some bash-scripting (assuming the remote side is running bash) to pass the value back to you (e.g., wrap it in a one-liner script that prints the exit status on stderr).
If you just want to know why your existing code doesn't work, let's take a look at it:
os.system("ssh -qt hostname 'sudo yum list updates --security > /tmp/yum_update_packagelist.txt';echo $?")
First, you're running an ssh command, then a separate echo $? command, which will echo the exit status of ssh. If you wanted to echo the status of the sudo, you need to get the semicolon into the ssh command. (And if you wanted the status of the yum, inside the sudo.)
Second, os.system doesn't look at what gets printed to stdout anyway. As the docs clearly say, "the return value is the exit status of the process". So, you're getting back the exit status of the echo command, which is pretty much guaranteed to be 0.
So, to make this work, you'd need to get the echo into the right place, and then read the stdout by using subprocess.check_output or similar instead of os.system, and then parse that output to read the last line. If you do all that, it should work. But again, you shouldn't do all that; just use paramiko or another SSH library.
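A sketch of that repair, keeping the placeholder hostname from the question: move the echo inside the quoted remote command and read stdout with subprocess:

import subprocess

output = subprocess.check_output(
    ["ssh", "-q", "hostname",
     "sudo yum list updates --security > /tmp/yum_update_packagelist.txt; echo $?"],
    text=True,
)
remote_status = int(output.strip().splitlines()[-1])  # last line holds the status
print(remote_status)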
You probably want to use the Fabric API instead of directly calling the ssh executable.
From there, check out Can I catch error codes when using Fabric to run() calls in a remote shell?
