I am trying to write a framework that can interact with multiple Linux machines.
For example, a test case using that framework should be able to start a server on one Linux machine, start a client on another Linux machine, and then make some configuration changes on a third machine, without waiting for any command to complete.
I have tried using pexpect, but it didn't turn out to be very useful for this.
Can anyone suggest a Python module I can use for this task?
My test case steps are:
1. Log in to the SIP server -> su -> start the SIP server
2. Log in to the voice server -> su -> make some configuration changes
3. Log in to the SIP client -> su -> start the SIP client
4. Collect logs and perform validations
In my environment I can't log in to my machines directly as root; I have to su after logging in.
You should look into using something like python-fabric. It lets you use higher-level language constructs such as context managers and generally makes driving the shell from Python much more pleasant.
Example usage:
fabfile.py
from fabric.api import run

def host_type():
    run('uname -s')
Then you can use host_type from the command line in the directory containing fabfile.py:
$ fab -H localhost,linuxbox host_type
[localhost] run: uname -s
[localhost] out: Darwin
[linuxbox] run: uname -s
[linuxbox] out: Linux
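For the test case described above, a fabfile could look roughly like the sketch below. The hostnames, script paths and configuration command are placeholders for your environment, and it assumes your login user is allowed to sudo (Fabric's sudo() handles the privilege escalation, so you don't need to log in as root directly):
# fabfile.py -- hostnames and commands below are only illustrative
from fabric.api import env, sudo, execute

env.user = 'testuser'       # normal (non-root) login user
env.password = 'secret'     # used for both SSH login and the sudo prompt

def start_sip_server():
    sudo('/opt/sip/start_sip_server.sh')    # placeholder start script

def configure_voice_server():
    sudo('sed -i "s/old/new/" /etc/voice/voice.conf')   # placeholder change

def start_sip_client():
    sudo('/opt/sip/start_sip_client.sh')    # placeholder start script

def run_testcase():
    execute(start_sip_server, hosts=['sip-server'])
    execute(configure_voice_server, hosts=['voice-server'])
    execute(start_sip_client, hosts=['sip-client'])
Running fab run_testcase then walks through the three machines in order; if the start scripts block instead of daemonizing, you will need something like nohup so that sudo() returns immediately.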
I am running a Python script that collects data inside a virtual environment hosted remotely on a VPS (Debian based).
My PC crashed and I am trying to get back to the live log output of the Python script.
I know that the script is still running because it saves its data into a CSV file, and that CSV is still being written.
If I activate the virtual environment again, I can rerun the script, but it sounds like I would then have two instances of the same script running...
I am not familiar with virtual environments and I cannot find the right way to do this without deactivating and reactivating it. I am running my script on the cheapest OVH VPS I could buy because my computer is clearly not reliable enough to run 24/7.
You might use screen to run your script in a separate terminal session. This avoids losing the log output if the SSH connection gets dropped.
The workflow would be something along the lines of (on your host):
# Install screen
$ sudo apt update
$ sudo apt install screen
# Start a screen session
$ screen
# Run your script
$ python myscript.py
If your SSH connection drops, it is enough to:
# ssh back into the host from your client
# reattach previous screen session
$ screen -r
For advanced use, the official docs are quite comprehensive.
Note: more generally, what is explained above is the basic workflow of a terminal multiplexer. You can achieve the same using tmux.
I'm trying to use fabric in python to send commands to a program on another machine.
This code works fine:
from fabric.api import env, run
env.host_string = 'xxx.xxx.xxx.xxx'
env.user = 'username'
env.password = 'password'
run('ls')
But when running
run('rostopic list')
I get
Warning: run() received nonzero return code 127 while executing 'rostopic list'!
'/bin/bash: rostopic: command not found'
However, if I run
rostopic list
on the machine itself, it runs as it's supposed to.
I'm not sure how to proceed and I don't understand why it's not working with Fabric. FYI, I tried implementing this with Paramiko but also ran into problems; however, it works fine with pxssh. The issue is that I need this to work on Windows and pxssh isn't compatible. How can I make this command work with Fabric?
From the comments you've made about the pathing issues, it sounds like you will need to use some combination of the path, cd, and/or prefix context managers in order to run ROS (Robot Operating System) commands over an SSH connection. You might also wish to troubleshoot by getting Fabric completely out of the picture, and instead work on getting the command to work via ssh -t, like so:
$ ssh user@machine -t "cd /my/home/directory && /opt/ros/indigo/bin/rostopic list"
An example using context managers with Fabric would look like this:
from fabric.api import cd, path, prefix, run

with path('/opt/ros/indigo/bin/'):
    with prefix('always run this command'):
        with cd('/my/special/directory'):
            run('rostopic list')
That's pretty contrived, but hopefully illustrates the point. Regardless, I would first make sure that you can run the command via ssh -t. Solving that problem will likely lead you to the correct way to make this happen with Fabric.
As a side/related consideration: is this a virtual environment on your remote machine? You could use the prefix context manager to activate it, like so:
with prefix('source /opt/ros/indigo/bin/activate'):
    run('rostopic list')
Or using ssh -t, you could execute:
$ ssh user@machine -t "source /opt/ros/indigo/bin/activate && rostopic list"
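On a stock ROS installation, the environment normally comes from sourcing the distribution's setup file rather than a virtualenv; if that is your situation, the same prefix idea applies. A sketch, assuming an Indigo install under /opt/ros/indigo and reusing your original connection settings:
from fabric.api import env, prefix, run

env.host_string = 'xxx.xxx.xxx.xxx'
env.user = 'username'
env.password = 'password'

# sourcing setup.bash puts rostopic (and the rest of ROS) on PATH
# for the same shell invocation that runs the command
with prefix('source /opt/ros/indigo/setup.bash'):
    run('rostopic list')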
I'm in the strange position of being both the developer of a python utility for our project, and the tester of it.
The app is ready and now I want to write a couple of blackbox tests that connect to the server where it resides (the server itself is the product that we commercialize), and launch the python application.
The Python app allows minimal command-line scripting (some parameters automatically launch functions that would otherwise require user interaction at the main menu). For the remaining user interaction, I usually use bash syntax like this:
./app -f <<< $'parameter1\nparameter2\n\n'
And finally I redirect everything to /dev/null.
If I do manual checks at the command line on the server (where I connect via SSH), everything works smoothly. The app launch lasts 30 seconds, and after 30 seconds I'm correctly returned to the prompt.
Now to the blackbox testing part. Here I'm also using python (Py.test framework), but the test code resides on another machine.
The test runner machine will connect to the Server under test via Paramiko libraries. I've already used this a lot in scripting other functionalities of the product, and it works quite well.
But the problem in this case is that the app under test that I wrote in python uses the NCurses library in its normal behaviour ("import curses"), and apparently when trying to launch this in the Py.test script:
import paramiko
...
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.load_system_host_keys()
client.connect(myhost, username=myuser, password=mypass, timeout=mytimeout)
client.exec_command("./app -f <<< $'parameter1\\nparameter2\\n\\n' >/dev/null")
Regardless of the redirection to /dev/null, exec_command() prints this to standard error, with a message about the curses initialization:
...
File "/my/path/to/app", line xxx, in curses_screen
scr = curses.initscr()
...
_curses.error: setupterm: could not find terminal
and finally the py.test script fails because the app execution crashed.
Is there some conflict between curses (used by the app under test) and Paramiko (used by the test script)? As I said, if I connect manually via SSH to the server where the app resides and launch the command line manually with the silent redirection to /dev/null, it works as I would expect.
ncurses really would like to do input/output to a terminal. /dev/null is not a terminal, and some terminal I/O mode changes will fail in that case. Occasionally someone connects the I/O to a socket, and ncurses will (usually) work in that situation.
In your environment, besides the lack of a terminal, it is possible that TERM is unset. That will make setupterm fail.
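If the test has to drive the curses app as-is, another option on the Paramiko side is to request a pseudo-terminal for the command, so the remote process has a terminal (and a TERM value) to initialize against. A sketch based on the snippet above; host, credentials and the app invocation are placeholders:
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.load_system_host_keys()
client.connect('myhost', username='myuser', password='mypass', timeout=30)

# get_pty=True allocates a remote pseudo-terminal, so curses.initscr()
# has a terminal type to set up instead of failing in setupterm
stdin, stdout, stderr = client.exec_command(
    "./app -f <<< $'parameter1\\nparameter2\\n\\n' >/dev/null",
    get_pty=True)
print(stdout.channel.recv_exit_status())
client.close()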
See also:
Setupterm could not find terminal, in Python program using curses
wrong error from curses.wrapper if curses initialization fails
I want to execute a Python script on several (15+) remote machines using SSH. After invoking the script/command, I need to disconnect the SSH session and keep the processes running in the background for as long as they are required.
I have used Paramiko and PySSH in the past, so I have no problem using them again. The only thing I need to know is how to disconnect an SSH session in Python (since normally the local script would wait for each remote machine to finish processing before moving on).
This might work, or something similar:
ssh user@remote.host nohup python scriptname.py &
Basically, have a look at the nohup command.
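If you are driving the machines from Python with Paramiko anyway, the same idea translates to something like this (host list, credentials and script name are placeholders):
import paramiko

hosts = ['host1', 'host2', 'host3']   # your 15+ machines

for host in hosts:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username='user', password='password')
    # nohup plus '&' detaches the process, and redirecting its output
    # lets the remote shell return immediately instead of waiting
    client.exec_command('nohup python scriptname.py > /dev/null 2>&1 &')
    client.close()    # safe to disconnect; the script keeps running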
On Linux machines, you can run the script with 'at'.
echo "python scriptname.py" ¦ at now
If you are going to perform repetitive tasks on many hosts, for example deploying software and running setup scripts, you should consider using something like Fabric:
Fabric is a Python (2.5 or higher) library and command-line tool for
streamlining the use of SSH for application deployment or systems
administration tasks.
It provides a basic suite of operations for executing local or remote
shell commands (normally or via sudo) and uploading/downloading files,
as well as auxiliary functionality such as prompting the running user
for input, or aborting execution.
Typical use involves creating a Python module containing one or more
functions, then executing them via the fab command-line tool.
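For this particular case (kick off a long-running script on many hosts and disconnect), a fabfile might look like the sketch below; hostnames and the script path are placeholders, and pty=False together with nohup is what keeps the process alive after Fabric closes the connection:
# fabfile.py
from fabric.api import env, run

env.hosts = ['host1', 'host2', 'host3']   # your 15+ machines
env.user = 'user'

def start_script():
    # without a pty, the backgrounded nohup'd process survives
    # the end of the SSH session
    run('nohup python scriptname.py > /dev/null 2>&1 &', pty=False)
Then fab start_script runs it on every host in turn.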
You can even use tmux in this scenario.
As per the tmux documentation:
tmux is a terminal multiplexer. It lets you switch easily between several programs in one terminal, detach them (they keep running in the background) and reattach them to a different terminal. And do a lot more
From a tmux session, you can run a script, quit the terminal, log in again and check back, since tmux keeps the session alive until the server restarts.
How to configure tmux on a cloud server
I have to write tests for a deployment script that uploads files through SSH, but I'd like them not to depend on external server configuration. This is how I see it:
1. Create 2 SSH daemons without authentication on different ports of the loopback interface.
2. Run the deployment script against these two ports.
The only question is how to run these dummy SSH daemons.
I use Python and Fabric.
If you want full control over the server's actions (e.g. in order to simulate various problem conditions and thereby do really thorough testing), I recommend Twisted: as this article shows, it makes it really easy to set up your own custom SSH server.
If you'd rather use an existing SSH server, pick one from the list here (or use the one that comes with your system, if any; or maybe sshwindows if you're on Windows) and run it with subprocess from Python as part of starting up your tests.
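For the second approach, starting a throwaway sshd from your test setup could look something like this; the config path and port are arbitrary, and run_deployment_tests() stands in for whatever your test actually does:
import subprocess

# -f: test-specific config (pointing at test host keys etc.),
# -p: listen on a high port, -D: stay in the foreground
sshd = subprocess.Popen(
    ['/usr/sbin/sshd', '-f', 'tests/sshd_test_config', '-p', '22022', '-D'])
try:
    run_deployment_tests()   # placeholder for your actual test code
finally:
    sshd.terminate()
    sshd.wait()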
Another option is to spin up a Docker container with an sshd service running. You can use a Docker image like these:
https://github.com/kabirbaidhya/fakeserver
https://github.com/panubo/docker-sshd
I've used this for testing out a deployment script (made on top of Fabric).
Here's how you use it.
Pull the image.
➜ docker pull kabirbaidhya/fakeserver
Set authorized keys for the server.
➜ cat ~/.ssh/id_rsa.pub > /path/to/authorized_keys
Run the fakeserver.
➜ docker run -d -p 2222:22 \
-v "/path/to/authorized_keys:/etc/authorized_keys/tester" \
-e SSH_USERS="tester:1001:1001" \
--name=fakeserver kabirbaidhya/fakeserver
You can now use the fakeserver from any ssh client. For instance:
➜ ssh tester@localhost -p 2222
➜ ssh tester@localhost -p 2222 "echo 'Hello World'"
If this works, you can then use any ssh clients or scripts on top of paramiko or fabric to test against this mock server.
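For example, a minimal Paramiko-based check against the container could look like this (the port, user and key path match the docker run command above):
import os
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('localhost', port=2222, username='tester',
               key_filename=os.path.expanduser('~/.ssh/id_rsa'))
stdin, stdout, stderr = client.exec_command("echo 'Hello World'")
assert stdout.read().strip() == b'Hello World'
client.close()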
Hope this helps.
Reimplementing an SSH daemon is not trivial.
If your only problem is that you don't want them to depend on existing configuration, you can start up a new sshd with -f to specify a specific configuration file and -p to run it on a given port.
You can use os.system to make calls to the shell:
import os
os.system('sshd -f myconfig -p 22022')