I am writing a Python script that creates and runs several VMs via virsh for the user. Some of the configuration has to be done by executing commands inside the VM, which I want to do automatically.
What would be the easiest way to get remote shell access in Python? I am considering the following approaches:
1. Use the virsh console command as a subprocess and do I/O to it.
2. Bring up an SSH session to the VM. I can configure the VM before it boots by editing its file system, so I know its target IP address.
3. Some better API for doing this. RPC?
I need to get the return values of commands so I know whether they executed correctly or not. For that matter, I need to be able to detect when a program I invoke has finished. Options #1 and #2 rely on scraping the output, and that gets complex.
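For option #2, this is roughly what I have in mind (just a sketch: the address, key path and command are placeholders, and it assumes the guest runs sshd and paramiko is available on the host):

    import paramiko

    def run_in_vm(host, user, key_path, command):
        """Run one command in the guest over SSH; return (exit_code, stdout, stderr)."""
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=user, key_filename=key_path)
        try:
            stdin, stdout, stderr = client.exec_command(command)
            # recv_exit_status() blocks until the remote command finishes,
            # so it also tells me *when* the program is done.
            exit_code = stdout.channel.recv_exit_status()
            return exit_code, stdout.read().decode(), stderr.read().decode()
        finally:
            client.close()

    code, out, err = run_in_vm("192.168.122.10", "root", "/path/to/key", "some-setup-command")
    if code != 0:
        print("command failed:", err)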
Any suggestions much appreciated.
I must say that I haven't coded seriously since university, and that was back in '92...
Since then I've only done some simple VB, so no real coding :)
I am trying to write some Python code that sends a remote command over SSH and retrieves the output. Basically, I am catting a text file and retrieving its contents.
I have seen various methods to do this via SSH, using an IP address, port, key, etc., while opening the tunnel.
Yet, in my case, I have to manipulate the tunnel via some special server control, with web access, invoking Ajax commands, etc.
What is relevant here is just that I already have the Python code to open and close the tunnels. What I need is a simple snippet of code to execute a cat and retrieve the output.
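To make it concrete, this is roughly the kind of snippet I'm after (only a sketch, assuming the tunnel ends up forwarding the remote sshd to a local port; the port, credentials and file path here are made up):

    import paramiko

    # The tunnel (opened and closed by my existing code) is assumed to expose
    # the remote machine's SSH port on a local forwarded port.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("127.0.0.1", port=2222, username="myuser", password="mypassword")

    stdin, stdout, stderr = client.exec_command("cat /path/to/file.txt")
    contents = stdout.read().decode()   # the whole file as one string
    client.close()

    print(contents)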
Can someone enlighten me on how that can be done? I don't need elevated access for the command, of course.
Any guidance would be much appreciated.
I am using Python 3.6.4.
TIA
I have a web application where the user can issue a request to execute a program on a different machine. The web application is written in Python. The remote program is written in C, but the language shouldn't really matter here.
So SSH is the way to go, I think (this has been answered here before). But in my use case the remote program can run for days, and obviously I don't want the ssh call to block my web application.
So I could run the remote program as a background process. But I am also interested in knowing when the program finishes and in the exit code.
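Something along these lines is the direction I'm thinking in (a sketch only, with made-up host and program names, using just the standard library): run the ssh call in a background thread so the web request returns immediately, and record the exit code once the remote program ends.

    import subprocess
    import threading

    def launch_remote(host, command, on_finish):
        """Start `command` on `host` over ssh without blocking the caller."""
        def worker():
            # This blocks inside the thread (possibly for days) until the
            # remote program exits; ssh reports the remote exit status back.
            result = subprocess.run(["ssh", host, command])
            on_finish(result.returncode)
        threading.Thread(target=worker, daemon=True).start()

    def finished(code):
        print("remote program finished with exit code", code)

    launch_remote("user@remote-host", "/opt/myprog --input data.txt", finished)

Of course that only works while the web application process itself stays up; anything more robust would have to record the exit code somewhere persistent on the remote side.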
So is there any way of achieving this?
My friends and I have written a simple telegram bot in python. The script is run on a remote shared host. The problem is that for some reason the script stops from time to time, and we want to have some sort of a mechanism to check whether it is running or not and restart it if necessary.
However, we don't have SSH access, we can't run bash scripts, and I couldn't find a way to install supervisord. Is there a way to achieve the same result by some other method?
P.S. I would appreciate it if you gave a detailed explanation, as I'm a newbie hobbyist. However, I have no problem with researching and learning new things.
You can have a small supervisor Python script whose only purpose is to start (and restart) your main application script. When your application crashes, the supervisor notices and restarts it.
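A minimal sketch of such a supervisor, assuming your main script is called bot.py and sits in the same directory:

    import subprocess
    import time

    # Restart bot.py every time it exits, whether it crashed or ended normally.
    while True:
        result = subprocess.run(["python3", "bot.py"])
        print("bot exited with code", result.returncode, "- restarting in 5 seconds")
        time.sleep(5)   # small delay so a crash loop doesn't spin at full speed

How you keep the supervisor itself running (cron, the hosting panel's task scheduler, or similar) depends on what your shared host allows.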
I need to make a python script that will do these steps in order, but I'm not sure how to go about setting this up.
SSH into a server
Copy a folder from point A to point B (cp /foo/bar/folder1 /foo/folder2)
mysql -u root -pfoobar (This database is accessible from localhost only)
create a database, do some other mysql stuff in the mysql console
Replace instances of Foo with Bar in the file foobar
Copy and edit a file
Restart a service
The fact that I have to ssh into a server, and THEN do all of this, is really confusing me. I looked into the Fabric library, but it seems to only do one command at a time and doesn't keep context from previous commands.
Look into Fabric more. It is still probably what you want.
This page has a lot of good examples.
By "context" I'm assuming you want to be able to cd into another directory and run commands from there. That's what fabric.context_managers.cd is for -- search for it on that page.
Sounds like you are doing some sort of remote deployment/configuration. There's a whole world of tools out there to set this up professionally; look into Chef and Puppet.
Alternatively if you're just looking for a quick and easy way of scripting some remote commands, maybe pexpect can do what you need.
Pexpect is a pure Python module for spawning child applications; controlling them; and responding to expected patterns in their output.
I haven't used it myself, but a quick glance at its manual suggests it can work fine with an SSH session: https://pexpect.readthedocs.org/en/latest/api/pxssh.html
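From that manual, the basic pattern looks roughly like this (untested on my side; host and credentials are placeholders):

    from pexpect import pxssh

    s = pxssh.pxssh()
    s.login("remote-host", "username", "password")
    s.sendline("uptime")        # run a command
    s.prompt()                  # wait for the shell prompt to come back
    print(s.before.decode())    # everything the command printed before the prompt
    s.logout()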
I have never used Fabric.
My way of solving this kind of issue (before I started using SaltStack) was to use pexpect to run the SSH connection and all the commands that were needed.
Maybe using a series of SQL scripts to work with the database (just to make it easier) would help.
Another way, since you need to access the remote server over SSH, would be to use paramiko to connect and execute commands remotely. It's a bit more complicated when you want to see what's happening on stdout (whereas with pexpect you will see exactly what's going on).
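For instance, with paramiko you can still follow the output as it arrives, it just takes a little more code than pexpect (a rough sketch with placeholder host, credentials and command):

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("remote-host", username="user", password="secret")

    stdin, stdout, stderr = client.exec_command("./long_running_task.sh")
    for line in stdout:          # print output line by line while it runs
        print(line, end="")
    exit_code = stdout.channel.recv_exit_status()
    client.close()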
But it all depends on what you really need.
I have looked into using pxssh, subprocess, and paramiko, but have had no success. What I am ultimately trying to do is figure out a way to not only use a Python script to SSH into a server and execute commands, but also to have it open an instance of the terminal after executing all the commands, for continued use.
Currently the server has modules that clients have to manually activate using commands after they have established an SSH connection.
For example:
module python
This command would give the user access to python.
Following this, the user would be able to use Python and all its commands through the SSH connection in the terminal.
The issue I have with the methods listed earlier is that they do not give me an instance of the terminal. They successfully execute the commands, but since these commands have to be executed every time a new SSH connection is established, it's worthless unless I can essentially get a copy of the terminal session in which the Python script executed the commands and loaded all the modules.
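To make it concrete, what I'm hoping for is something along the lines of this pexpect sketch, where the script does the setup and then hands the live session over to me (the host, prompt pattern and module name are made up, and password/host-key prompts aren't handled):

    import pexpect

    child = pexpect.spawn("ssh user@server.example.com")
    child.expect(r"\$ ")             # wait for the shell prompt (pattern is a guess)
    child.sendline("module python")  # load the module the way we normally do by hand
    child.expect(r"\$ ")
    # Hand control of the session to the person at the keyboard,
    # with the modules already loaded.
    child.interact()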
Does anyone have a solution to this? I've scoured the web for hours with no success.
This is a very difficult issue to explain, so if anything is unclear please ask me and I will try my best to rephrase things. I am very new to all this.