Using the ssh agent inside a Python script

I'm pulling from and pushing to a GitHub repository with a Python script. For the GitHub repository, I need to use an SSH key.
If I do this manually before running the script:
eval $(ssh-agent -s)
ssh-add ~/.ssh/myprivkey
everything works fine and the script works. But after a while, apparently the key expires, and I have to run those two commands again.
The thing is, if I run them inside the Python script with os.system(cmd), it doesn't work; it only works when I do it manually.
I know this must be a messy way to use the ssh agent, but I honestly don't know how it works, and I just want the script to work, that's all.
The script runs once an hour, in case that matters.

While the normal approach would be to run your Python script in a shell where the ssh-agent is already running, you can also consider an alternative approach with sshify.py:
# This utility will execute the given command (by default, your shell)
# in a subshell, with an ssh-agent process running and your
# private key added to it. When the subshell exits, the ssh-agent
# process is killed.
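If you would rather keep everything inside your own script, the same idea can be sketched directly in Python: start ssh-agent, copy the variables it prints into os.environ (so child processes inherit them), add the key, and run git from the same process. This is only a sketch; the key path comes from the question, the repository path and remote name are placeholders, and ssh-add will still prompt for a passphrase if the key has one.
import os
import re
import subprocess

def start_agent_and_add_key(key_path):
    # ssh-agent -s prints sh-style "VAR=value; export VAR;" lines; parse them
    # and export SSH_AUTH_SOCK / SSH_AGENT_PID into this process's environment.
    out = subprocess.check_output(["ssh-agent", "-s"], text=True)
    for name in ("SSH_AUTH_SOCK", "SSH_AGENT_PID"):
        match = re.search(rf"{name}=([^;]+);", out)
        if match:
            os.environ[name] = match.group(1)
    subprocess.check_call(["ssh-add", key_path])  # prompts for the passphrase, if any

start_agent_and_add_key(os.path.expanduser("~/.ssh/myprivkey"))
subprocess.check_call(["git", "push", "origin", "HEAD"], cwd="/path/to/repo")  # placeholder repo path
# Optionally kill the agent afterwards; ssh-agent -k reads SSH_AGENT_PID from the environment.
subprocess.check_call(["ssh-agent", "-k"])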

Consider defining the ssh key path against a host of github.com in your ssh config file as outlined here: https://stackoverflow.com/a/65791491/14648336
On Linux, create a file called config at the path ~/.ssh/ and add something similar to the example in the answer above:
Host github.com
HostName github.com
User your_user_name
IdentityFile ~/.ssh/your_ssh_priv_key_file_name
This saves you from having to start an agent each time, and it also avoids the need for custom environment variables if you are using GitPython (you mention using Python), as referenced in some other SO answers.
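If you go this route and use GitPython, the push needs no agent and no custom environment variables. A minimal sketch, assuming GitPython is installed, the clone lives at a placeholder path, and the remote is called origin:
import git  # pip install GitPython

repo = git.Repo("/path/to/local/clone")  # placeholder path
origin = repo.remote("origin")
origin.pull()        # uses the key configured for github.com in ~/.ssh/config
origin.push("HEAD")  # push the current branch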

Related

How do I execute ssh via Robot Framework's Process library

I'm at a loss for what to think, here. There must be something I'm missing since I'm able to execute this perfectly fine in another, similar context, but not here. I am trying to run the following console command via Robot Framework for the sake of testing my system's reaction to a network outage: ssh it#128.0.0.1 sudo ifconfig ens192 down (the user & host ip are modified for posting here.)
To accomplish this, I'm using the following line of code:
Run Process    'ssh it#128.0.0.1 sudo ifconfig ens192 down'    shell=True    stdout=stdout    stderr=stderr
Now, I've executed this myself in console and saw that the result was an inability to ping the ip, which is what I want. So I know for certain that the command itself isn't a problem. What is a problem is the fact that I get the following result: /bin/sh: ssh it#128.0.0.1 sudo ifconfig ens192 down: command not found
That's fine, though, because I believe this is a simple issue of the ssh instance not having the it user's $PATH. For reasons described below, I'm going to guess its $PATH variable is empty. Thus, I modified the keyword call as such:
Run Process    'ssh it#128.0.0.1 PATH\=/usr/sbin sudo ifconfig ens192 down'    shell=True    stdout=stdout    stderr=stderr
The result of this command is as follows: /bin/sh: ssh it#128.0.0.1 PATH=/usr/sbin sudo ifconfig ens192 down: No such file or directory, which I believe to be referring to the items in the PATH attribute. I've tried several different things to determine what the problem could be, such as why it would be unable to find these files in an instance where, executed manually in the console, these directories are indeed present.
Given that I have less than a year's worth of experience with Robot Framework, I'm certain that there's just something I'm not understanding about the Process library. Could I have an explanation as to what I am missing for this execution and, if possible, an explanation as to how I should be executing an ssh command via Robot Framework?
A few extra things worth pointing out:
The it user is brand new, having been created via useradd it and copying the home directory contents into its new space.
I have already added an entry into the device's sudoers file to allow the it user to execute ifconfig without requiring password entry.
On that note, I've sent the executing user's ssh keys over to the device such that it does not require a password to log into it as that user.
I have attempted to execute several basic commands. I have intermittently gotten ls to work but have been unable to get a greater sense of what $PATH the Robot Framework ssh instance has because echo doesn't seem to work.
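For what it's worth, the error message quoted above is the same one /bin/sh produces whenever an entire command line reaches it as a single quoted word: the shell then looks for an executable literally named ssh it#128.0.0.1 sudo ifconfig ens192 down. A tiny Python illustration of that shell behaviour (not Robot Framework code; ls is just a stand-in command):
import subprocess

subprocess.run("'ls -la /tmp'", shell=True)  # sh: ls -la /tmp: command not found
subprocess.run("ls -la /tmp", shell=True)    # works: the shell splits the words itself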

Modules appear to not be installed when running python through ssh

I am connecting to a remote server through
ssh user@server.com
and run
python script.py
in the appropriate directory. However, I get the error
ImportError: No module named numpy
even though I know the module is installed and the script runs with no problems when I am physically logged in to that server.
None of the answers I was able to find worked (for example this, and this). Do you have any ideas as to how I can run the script using ssh?
The remote server has Python 2.6.6 installed, and
which python
returns
/usr/bin/python
The remote server runs CentOS.
See a similar problem described here: Why does an SSH remote command get fewer environment variables than when run manually?
Compare your environment variables in the local (physical) mode to the remote mode by running env in both cases. Move the missing variables from your local profile to /etc/profile. Then log out of the SSH session and connect again.
Another approach: if you don't want to change anything, then after ssh, switch to your user via su - <your user>. This may look weird because you are already logged in as that user. The difference is that after su, all your environment variables will be set as in the local (physical) mode. Advantage: it is quick. Disadvantage: you will have to do it each time you want to run your Python script, so the first approach of configuring /etc/profile may be better in the long run.
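As for comparing the environments, one rough way to do it from the local machine, with user@server.com as a placeholder, is to diff what a plain non-interactive ssh command sees against what a login shell on the same host sees:
import subprocess

# Environment as seen by a plain non-interactive ssh command.
remote_env = set(subprocess.check_output(
    ["ssh", "user@server.com", "env"], text=True).splitlines())

# Environment as seen by a login shell (reads /etc/profile, ~/.bash_profile, ...).
login_env = set(subprocess.check_output(
    ["ssh", "user@server.com", "bash -lc env"], text=True).splitlines())

# Variables present only in the login shell (PATH entries, PYTHONPATH, ...)
# are the likely reason the module can't be found over plain ssh.
for line in sorted(login_env - remote_env):
    print(line)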

How to Use Python to execute initial commands over the terminal and then continue use

I have looked into using pxssh, subprocess, and paramiko, but have found no success. What I am ultimately trying to do is figure out a way to use SSH to access a server and execute commands from a Python script, but instead of finishing there, also have it open an instance of the terminal, with all the commands already executed, for continued use.
Currently the server has modules that clients have to manually activate using commands after they have established an SSH connection.
For example:
module python
This command would give the user access to python.
Following this the user would then be able to use python and all its commands through the ssh connection in the terminal.
The issue I have with the methods listed earlier is that they do not display an instance of the terminal. They successfully execute the commands, but since these commands have to be executed every time a new SSH connection is established, that is worthless unless I can get essentially a copy of the terminal in which the Python script executed the commands and loaded up all the modules.
Does any one have a solution to this? I've scoured the web for hours to no success.
This is a very difficult issue to explain so if anything is unclear please ask me and I will try my best to rephrase things. I am very new to all this.
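One rough sketch of the idea, in case it helps: force a pseudo-terminal with ssh -t, run the setup command from the question, then replace it with an interactive login shell so the terminal is handed back to the user with the modules already loaded. The host name is a placeholder, and this assumes the module command is available in a non-interactive remote shell:
import subprocess

# Runs "module python" on the server, then drops the user into an
# interactive login shell in the same session ($SHELL expands remotely).
subprocess.run([
    "ssh", "-t", "user@server",
    "module python && exec $SHELL -l",
])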

Getting access to VM's console with Python

I am writing a Python script that creates and runs several VMs via virsh for the user. Some of the configuration has to be done by executing commands inside the VM which I would want to do automatically.
What would be the easiest way to get remote shell access in Python? I am considering the following approaches:
To use the virsh console command as a sub-process and do I/O to it.
To bring up an SSH session to the VM. I can configure the VM before it boots by editing its file system so I know its target IP address.
Any better API for doing this. RPC?
I need to get the return values for commands so I know if they executed correctly or not. For that matter I need to be able to detect when a program I invoke has finished. Options #1 and #2 rely on scraping the output and that gets complex.
Any suggestions much appreciated.
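If option #2 is chosen, paramiko gives you the exit status of each command directly, which avoids most of the output scraping. A minimal sketch with a placeholder address, user, key, and command:
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.168.122.10", username="root", key_filename="/path/to/key")

stdin, stdout, stderr = client.exec_command("some-setup-command --flag")
exit_status = stdout.channel.recv_exit_status()  # blocks until the remote command finishes
print("succeeded" if exit_status == 0 else f"failed with status {exit_status}")

client.close()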

different behavior in Python shell and program

I'm using subprocess.Popen to instantiate an ssh-agent, add a key and push a git repository to a remote. To do this I string them together with &&. The code I'm using is
subprocess.Popen("eval $(ssh-agent) && ssh-add /root/.ssh/test_rsa && git push target HEAD", shell=True)
When I run this as a .py file, I am prompted for the key's password. This seems to work, as I get:
Identity added: /root/.ssh/test_rsa (/root/.ssh/test_rsa).
But when it tries to push the repository to the remote, an error occurs.
ssh: connect to host ***.***.***.*** port 22: Connection refused
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
However, if I simply run the same command in the interactive shell, it works. What causes this difference in behaviour, and what can I do to fix this?
The git server was on an AWS instance that was being started earlier in the script. There was a check to make sure it was running, but AWS seems to report an instance as running once boot has begun. This means there is a brief window in which the instance is running but no SSH daemon exists yet. Because the script moved very quickly into trying to push, it was falling within this window and the server was refusing its connection attempt. By the time I tried anything in the interactive shell, the instance had been running long enough that it worked.
In short, AWS says instances are running before the OS has started its services.
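Given that, one workaround is to wait until the instance's SSH port actually accepts connections before attempting the push; a sketch with placeholder address and timing values:
import socket
import subprocess
import time

def wait_for_ssh(host, port=22, timeout=300, interval=5):
    # Poll until something is listening on the SSH port, or give up.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

if wait_for_ssh("203.0.113.10"):  # placeholder instance address
    subprocess.Popen(
        "eval $(ssh-agent) && ssh-add /root/.ssh/test_rsa && git push target HEAD",
        shell=True,
    )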
