How do I execute ssh via Robot Framework's Process library - python

I'm at a loss for what to think here. There must be something I'm missing, since I'm able to execute this perfectly fine in another, similar context, but not here. I am trying to run the following console command via Robot Framework in order to test my system's reaction to a network outage: ssh it@128.0.0.1 sudo ifconfig ens192 down (the user and host IP are modified for posting here).
To accomplish this, I'm using the following line of code:
Run Process 'ssh it@128.0.0.1 sudo ifconfig ens192 down' shell=True stdout=stdout stderr=stderr
Now, I've executed this myself in a console and saw that the result was an inability to ping the IP, which is what I want, so I know for certain that the command itself isn't a problem. What is a problem is that I get the following result: /bin/sh: ssh it@128.0.0.1 sudo ifconfig ens192 down: command not found
That's fine, though, because I believe this is a simple issue of the ssh instance not having the it user's $PATH. For reasons described below, I'm going to guess its $PATH variable is empty. Thus, I modified the keyword call as follows:
Run Process 'ssh it@128.0.0.1 PATH\=/usr/sbin sudo ifconfig ens192 down' shell=True stdout=stdout stderr=stderr
The result of this command is as follows: /bin/sh: ssh it@128.0.0.1 PATH=/usr/sbin sudo ifconfig ens192 down: No such file or directory, which I believe refers to the directory in the PATH assignment. I've tried several different things to determine what the problem could be, since these directories are indeed present when the same command is executed manually in the console.
Given that I have less than a year's worth of experience with Robot Framework, I'm certain that there's just something I'm not understanding about the Process library. Could I have an explanation as to what I am missing for this execution and, if possible, an explanation as to how I should be executing an ssh command via Robot Framework?
A few extra things worth pointing out:
The it user is brand new, having been created via useradd it and copying the home directory contents into its new space.
I have already added an entry into the device's sudoers file to allow the it user to execute ifconfig without requiring password entry.
On that note, I've sent the executing user's ssh keys over to the device so that it does not require a password to log in as that user.
I have attempted to execute several basic commands. I have intermittently gotten ls to work but have been unable to get a greater sense of what $PATH the Robot Framework ssh instance has because echo doesn't seem to work.
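For what it's worth, the error message itself suggests the likely culprit: Robot Framework does not use quotes to delimit strings (arguments are separated by two or more spaces), so the single quotes reach /bin/sh as literal characters, and the shell then treats the entire quoted string, spaces and all, as a single command name. That is exactly the shape of the "command not found" message above. A minimal sketch of the failure mode in plain subprocess, which the Process library wraps:

import subprocess

# the literal quotes make the shell look for one command whose name is the
# whole string -> "/bin/sh: ssh it@128.0.0.1 ...: command not found"
subprocess.run("'ssh it@128.0.0.1 sudo ifconfig ens192 down'", shell=True)

# without them, the shell splits the words normally and runs ssh
subprocess.run("ssh it@128.0.0.1 sudo ifconfig ens192 down", shell=True)

If that diagnosis is right, dropping the quotes from the keyword call, i.e. Run Process    ssh it@128.0.0.1 sudo ifconfig ens192 down    shell=True    stdout=stdout    stderr=stderr, should behave just like the console command, no $PATH tinkering required.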

Related

bash script that is run from Python reaches sudo timeout

This is a long bash script (400+ lines) that is originally invoked from a django app like so:
os.system('./bash_script.sh &> bash_log.log')
It stops on a random command in the script. If the order of commands is changed, it hangs on another command in approx. the same location.
sshing to the machine that runs the django app, and running sudo ./bash_script.sh, asks for a password and then runs all the way.
I can't see the message it presents when it hangs; I couldn't make it redirect to the log file. I assume it's a sudo password request.
Things I've tried:
sudo -v in the script - didn't help.
ssh to the machine and manually extend the sudo timeout in /etc/sudoers - didn't help, I think because the django app is already running and uses the previous timeout.
splitting the script in two and running one part in a separate thread, like so:
import os
from subprocess import Popen
from threading import Thread

def basher(command, log_path):
    # open the log for writing; plain open() defaults to read mode
    with open(log_path, 'w') as log:
        Popen(command, stdout=log, stderr=log).wait()

script_thread = Thread(target=basher, args=('./bash_script_pt1.sh', 'bash_log_pt1.log'))
script_thread.start()
os.system('./bash_script_pt2.sh &> bash_log_pt2.log')  # I know it's deprecated, not sure if maybe it's better in this case
script_thread.join()
The logs showed that part 1 ended ok, but part 2 still hangs, albeit later in the code than when they were together.
I thought to edit /etc/sudoers from inside the Python code, and then re-login via su - user. There are snippets of how to pass the password using pty, however I don't understand the mechanics of it and could not get it to work.
I also noted that ps aux | grep bash_script.sh shows the script running twice: once as
/bin/bash bash_script.sh
and once as
sh -c bash_script.sh
I assume os.system has an internal shell=True going on.
I don't understand the Linux entities/mechanics in play to figure out what's happening.
My guess is that the django app has different, more limited permissions than the script itself does, and the script inherits those restrictions because it is being executed by the app.
You need to find out what permissions the script has when you run it just from bash, and what it has when you run it via django, and then figure out what the difference is.
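To make that comparison concrete, here is a sketch of a probe (hypothetical, nothing django-specific) that could be run both from bash and from inside the django app before the script is invoked:

import getpass
import os
import subprocess

print("user:", getpass.getuser(), "euid:", os.geteuid())
print("PATH:", os.environ.get("PATH"))
# sudo -n (non-interactive) exits nonzero instead of prompting, so a
# nonzero code here means a real run would hang on a password prompt,
# which is the same stall the script shows
print("sudo without a prompt:", subprocess.run(["sudo", "-n", "true"]).returncode)

If the two runs disagree on the user or on the sudo -n result, that would point to the cached sudo credentials expiring mid-script, and a NOPASSWD sudoers rule for the specific commands the script needs would be the targeted fix.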

Using the ssh agent inside a python script

I'm pulling and pushing to a github repository with a python script. For the github repository, I need to use an ssh key.
If I do this manually before running the script:
eval $(ssh-agent -s)
ssh-add ~/.ssh/myprivkey
everything works fine and the script works. But after a while, apparently the key expires, and I have to run those two commands again.
The thing is, if I do that inside the python script with os.system(cmd), it doesn't work; it only works if I do it manually.
I know this must be a messy way to use the ssh agent, but I honestly don't know how it works, and I just want the script to work, that's all
The script runs once an hour, just in case
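The reason the os.system(cmd) attempt fails is worth spelling out: ssh-agent -s only prints shell code that exports SSH_AUTH_SOCK and SSH_AGENT_PID, and os.system runs it in a child shell whose environment dies with it, so the Python process (and every command it launches afterwards) never sees the agent. A sketch of a workaround under that reading: capture the agent's output and export the variables from Python itself.

import os
import re
import subprocess

# ssh-agent -s prints lines like: SSH_AUTH_SOCK=/tmp/...; export SSH_AUTH_SOCK;
out = subprocess.check_output(["ssh-agent", "-s"], text=True)
for name, value in re.findall(r"(SSH_AUTH_SOCK|SSH_AGENT_PID)=([^;]+);", out):
    os.environ[name] = value

# ssh-add, and any later git/ssh subprocess, now inherits the agent socket
subprocess.run(["ssh-add", os.path.expanduser("~/.ssh/myprivkey")], check=True)

Note that this starts a fresh agent on every hourly run; the answers below avoid the agent entirely.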
While the normal approach would be to run your Python script in a shell where the ssh agent is already running, you can also consider an alternative approach with sshify.py
# This utility will execute the given command (by default, your shell)
# in a subshell, with an ssh-agent process running and your
# private key added to it. When the subshell exits, the ssh-agent
# process is killed.
Consider defining the ssh key path against a host of github.com in your ssh config file as outlined here: https://stackoverflow.com/a/65791491/14648336
On Linux, create a file called config at ~/.ssh/ and put in something similar to the above answer:
Host github.com
    HostName github.com
    User your_user_name
    IdentityFile ~/.ssh/your_ssh_priv_key_file_name
This would save the need for starting an agent each time and also prevent the need for custom environment variables if using GitPython (you mention using Python) as referenced in some other SO answers.
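As a sketch of that last point, with the key declared in ~/.ssh/config a GitPython call needs no agent and no custom environment at all (the clone path below is hypothetical):

from git import Repo

repo = Repo("/path/to/local/clone")  # hypothetical path to the checkout
repo.remotes.origin.pull()
repo.remotes.origin.push()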

ssh running a command gives different results to running it locally

I have a python script that uses Popen to create an appium server for a simulator on a mac
self.appium_process = subprocess.Popen(["/usr/local/bin/appium", "-a", self.ip, "--nodeconfig", self.node_file_path, "--relaxed-security", "-p", str(appium_port), "-dc", default_capabilities], stdout=log_file, stderr=subprocess.STDOUT)
I created a bash shell script that calls the python script. When I run the script from the local box it works and the appium logs show the connection.
I need to run this remote via ssh however. So I use the following to call the script:
ssh 10.18.66.99 automation_fw/config/testscript.sh
This, however, always ends up with the log showing:
env: node: No such file or directory
I checked, and the path to node has an extra slash before it's called:
$ which node
/usr/local/bin//node
$
I tried changing the path on the machine but no change. How can I get this to run from ssh in the same way as it runs locally on that same box?
When you are running a command via SSH you are not starting what's called a login shell (more about that here).
From the details you've shared, I would say it's something in your environment (running outside a login shell), more specifically a problem with your $PATH variable. You might want to check /etc/environment or similar paths (depending on your OS flavour) for the wrong value.
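One way to test the $PATH theory from the calling side (a sketch reusing the host and script path from the question): compare what the bare ssh command sees with what a forced login shell sees, since a login shell sources /etc/profile and ~/.bash_profile, which is typically where /usr/local/bin gets prepended.

import subprocess

# PATH as the non-login shell behind `ssh host cmd` sees it
print(subprocess.run(["ssh", "10.18.66.99", "echo $PATH"],
                     capture_output=True, text=True).stdout)

# forcing a login shell pulls the profile files in, which usually cures
# "env: node: No such file or directory"
subprocess.run(["ssh", "10.18.66.99",
                "bash -lc 'automation_fw/config/testscript.sh'"])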

run ssh command remotely without redirecting output

I want to run a python script on my server (that python script has a GUI). But I want to start it from ssh, something like this:
ssh me@server -i my_key "nohup python script.py"
... > let the script run forever
BUT it complains "unable to access video driver" since it is trying to use my ssh terminal as output.
Can I somehow make my command's output stay on the server machine instead of coming to my terminal? Basically something like "wake-on-lan" functionality: tell the server you want something and it will do everything using its own system (not sending any output back).
What about
ssh me@server -i my_key "nohup python script.py >/dev/null 2>&1"
? :)
You can use redirection to some remote logfile instead of /dev/null, of course.
EDIT: GUI applications on X usually use the $DISPLAY variable to know where they should be displayed. Moreover, X11 display servers use authorization to permit or disallow applications connecting to their display. The commands
export DISPLAY=:0 && xhost +
may be helpful for you.
Isn't it possible for you to use a python ssh extension instead of calling an external application?
It would:
run as one process
guarantee that the invocation is the same on all possible systems
lose the overhead of spawning an external process
send everything through ssh (you won't have to worry about input like ";" followed by a locally executed command)
If not, go with what Piotr Wades suggested.
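For completeness, a sketch of the "python ssh extension" route using paramiko (the host, user, and key names are the question's placeholders):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("server", username="me", key_filename="my_key")
# redirect on the remote side and background the process, so nothing comes
# back over the channel and the script outlives the session
client.exec_command("nohup python script.py >/dev/null 2>&1 &")
client.close()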

Most pythonic way of running a single command with sudo rights

I have a python script which is performing some nagios configuration. The script is running as a user which has full sudo rights (the user can run any command with sudo, without password prompt). The final step in the configuration is this:
open(NAGIOS_COMMAND_FILE, 'a').write(cmdline)
The NAGIOS_COMMAND_FILE is only writable by root, so this command should be run by root. I can think of two ways of achieving this (both unsatisfactory):
Run the whole script as root. I do not like doing this, since any error in my script will be executed with full root rights.
Put the open(NAGIOS_COMMAND_FILE, 'a').write(cmdline) command in a separate script, and use the subprocess library to call that script, with sudo. I do not like creating an extra script just to run a single command.
I suppose there is no way of changing the running user just for a single command, in my current script, or am I wrong?
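One middle ground between the two options, before the answers below: have subprocess run an existing root-capable utility rather than a second script of your own. A sketch using sudo tee -a, assuming the passwordless sudo rule from the question and reusing the script's own NAGIOS_COMMAND_FILE and cmdline variables:

import subprocess

# tee -a appends its stdin to the root-owned file; sudo supplies the
# privilege, so no helper script is needed
subprocess.run(
    ["sudo", "tee", "-a", NAGIOS_COMMAND_FILE],
    input=cmdline.encode(),
    stdout=subprocess.DEVNULL,  # tee echoes its input; silence the copy
    check=True,
)

This keeps the rest of the script unprivileged, which also addresses the concern about errors running with full root rights.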
Why don't you give write permission on NAGIOS_COMMAND_FILE to your user who has all the sudo rights?
Never, ever run a web server as root or as a user with full sudo privileges. This isn't a pythonic thing, it is a "keep my server from being pwned" thing.
Look at os.seteuid, the "principle of least privilege", and man sudoers, and run your server as a regular user such as "httpd-server", where "httpd-server" has sudoers permission to write to NAGIOS_COMMAND_FILE. And then be sure that what you write to the command file is as clean as you can make it.
It is actually possible to change user for a single command.
Fabric provides a way to log in as any user to a server. It relies on ssh connections I believe. So you could connect to localhost with a different user in your python script and execute the desired command.
http://docs.fabfile.org/en/1.4.3/api/core/decorators.html
Anyway, as others have already pointed out, it is best to give the user running the script permission to execute this one command and avoid relying on root for execution.
I would agree with the post above: either give your user write perms to the NAGIOS_COMMAND_FILE or add that user to a group that has those permissions, like nagcmd.
