Using a channel for multiple commands, replies, and interactive commands? - python

I'm beginning to learn twisted.conch to automate some tasks over SSH.
I tried to modify the sample sshclient.py from http://www.devshed.com/c/a/Python/SSH-with-Twisted/4/ . It runs one command after login and prints the captured output.
What I want to do is run a series of commands, and maybe decide what to do next based on the output.
The problem I ran into is that twisted.conch.ssh.channel.SSHChannel always seems to close itself after running a command (such as df -h). The example calls sendRequest after channelOpen; then the channel is always closed after dataReceived, no matter what I did.
I'm wondering whether this is because the server sends an EOF after the command, and the channel therefore must be closed. Should I just open multiple channels for multiple commands?
Another problem is interactive commands (such as rm -i somefile). It seems that because the server doesn't send an EOF, SSHChannel.dataReceived never gets called. How do I capture the output in this situation, and how do I send back a response?

Should I just open multiple channels for multiple commands?
Yep. That's how SSH works.
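To illustrate, here's a minimal, untested sketch of the channel-per-command pattern, assuming the rest of the tutorial's sshclient.py (the transport, userauth and connection classes) stays as it is; the command list, the print and the decision logic are placeholders you'd replace with your own:

from twisted.conch.ssh import channel, common

class CommandChannel(channel.SSHChannel):
    # On current Twisted the channel name and request type are bytes;
    # use plain strings on old Python 2 releases.
    name = b'session'

    def __init__(self, command, remaining=(), **kwargs):
        channel.SSHChannel.__init__(self, **kwargs)
        self.command = command
        self.remaining = remaining   # commands still to run after this one
        self.output = b''

    def channelOpen(self, data):
        # Ask the server to run exactly one command on this channel.
        self.conn.sendRequest(self, b'exec', common.NS(self.command), wantReply=True)

    def dataReceived(self, data):
        self.output += data

    def closed(self):
        # The server closes the channel once the command has finished, so this
        # is where you inspect the output and decide what to run next.
        print('%r produced: %r' % (self.command, self.output))
        if self.remaining:
            nxt = CommandChannel(self.remaining[0], self.remaining[1:], conn=self.conn)
            self.conn.openChannel(nxt)
        else:
            self.conn.transport.loseConnection()

The first channel would be opened from your SSHConnection.serviceStarted, e.g. self.openChannel(CommandChannel(b'df -h', [b'uptime'], conn=self)), just as the tutorial opens its single channel.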
SSHChannel.dataReceived never gets called
This doesn't sound like what should happen. Perhaps you can include a minimal example which reproduces the behavior.

Related

Paramiko read stdout after every stdin [duplicate]

I am writing a program in Python which must communicate through SSH with a physical target and send some commands to this target automatically (it is for testing).
I started by doing this with Paramiko and everything was perfect until I had to send several commands where, for example, the second one must be executed in the context of the first (the first one does cd /mytargetRep and the second one is ./executeWhatIWant). I can't use exec_command to do that, because each exec_command starts a new session.
I tried to use a channel with invoke_shell(), but I have another problem with that approach: I don't know when the command execution has ended. Some commands finish very quickly while others take much longer, so I need to know when the command execution is over.
I know a workaround is to use exec_command with shell logic operators such as && or ;, for example exec_command("cd /mytargetRep && ./executeWhatIWant"). But I can't do that, because it must also be possible to execute commands manually (I have a minimalist terminal where I can send commands), so for example the user will type cd /mytargetRep and then ./executeWhatIWant rather than cd /mytargetRep && ./executeWhatIWant.
So my question is: is there a way, using Paramiko, to send several commands in the same SSH session and to know when each command has finished executing?
Thanks
It seems that you want to implement an interactive shell, yet you need to control the execution of individual commands. That's not really possible with the SSH interface alone. The "shell" channel in SSH is a black box with an input and an output, so there's nothing in Paramiko that will help you implement this.
If you need to find out when a specific command finishes or where the output of a specific command ends, you need to use features of the shell.
You can solve that by inserting a unique separator string between the commands and searching for it in the channel's output stream. With common *nix shells something like this works:
channel = ssh.invoke_shell()
channel.send('cd /mytargetRep\n')
channel.send('echo unique-string-separating-output-of-the-commands\n')
channel.send('./executeWhatIWant\n')
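And a hedged sketch of how reading that stream back might look, continuing the snippet above; it assumes the separator string never appears in real command output, and accounts for the fact that the pty allocated by invoke_shell() also echoes the command lines themselves:

SEPARATOR = 'unique-string-separating-output-of-the-commands'

buffer = ''
# Wait until the separator appears on a line of its own; the echoed
# "echo unique-string-..." command line contains it too, so a plain
# substring test is not enough.
while not any(line.strip() == SEPARATOR for line in buffer.splitlines()):
    buffer += channel.recv(1024).decode('utf-8', errors='replace')

# Everything received after that line (apart from the echoed command line
# itself) is the output of ./executeWhatIWant.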
Though I do not really think that you need that very often. Most commands that are needed to make a specific command work, like cd or set, do not really output anything.
So in most cases you can use SSHClient.exec_command and your code will be way simpler and more reliable:
Execute multiple commands in Paramiko so that commands are affected by their predecessors
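For illustration, that simpler path might look like this (reusing the ssh client from the snippet above; the combined command is the one from the question):

# Each exec_command gets its own session, so related commands are combined
# into one shell line here.
stdin, stdout, stderr = ssh.exec_command('cd /mytargetRep && ./executeWhatIWant')
output = stdout.read().decode('utf-8', errors='replace')   # blocks until the command exits
exit_status = stdout.channel.recv_exit_status()
print(exit_status, output)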
Even if you need to use something seemingly complex like su/sudo, it is still better to stick with SSHClient.exec_command:
Executing command using "su -l" in SSH using Python
For a similar question, see:
Combining interactive shell and recv_exit_status method using Paramiko

Paramiko: "Must be connected to a terminal."

I am trying to use Paramiko to access the input and output of a program running inside of a screen session. Let's assume there is a screen session screenName with a single window running the program whose I/O I wish to access. When I try something like client.exec_command('screen -r screenName'), for example, I get the message "Must be connected to a terminal" in stdout.
Searching around, I learned that for some programs, one needs to request a "pseudo-terminal" by adding the get_pty=True parameter to exec_command. When I try this, my code just hangs, and my terminal becomes unresponsive.
What else can I try to make this work?
To provide a little more context to my problem, I am creating a web app that allows users to perform basic operations that I have previously done only in a PuTTy terminal. A feature I wish to have is a kind of web terminal designed to interact with the input and output of this particular program. Effectively all I want to do is continuously pipe output from the program to my backend, and pipe input to the program.
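For reference, a minimal sketch of the attempt described above; the host, credentials and session name are placeholders, and the channel is polled instead of read with stdout.read() so the script does not hang waiting for screen to exit:

import time
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('example-host', username='user', password='secret')  # placeholders

stdin, stdout, stderr = client.exec_command('screen -r screenName', get_pty=True)
chan = stdout.channel

while not chan.exit_status_ready():
    if chan.recv_ready():
        # Forward whatever the program printed to the backend / web terminal.
        print(chan.recv(4096).decode('utf-8', errors='replace'), end='')
    else:
        time.sleep(0.1)

# Input can be sent back through stdin, e.g.:
# stdin.write('some keystrokes\n')
# stdin.flush()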

Bash script "listen" to output from an infinitely looping python script

I've written a Python script to check the status of the smoke detectors in my house. It runs every second and returns a value of 0, 1, or 2. It uses a separate function on its own thread to recognise the difference between low-battery and alarm beeps.
I need a bash script to listen for these conditions and send a netcat packet to my main server if one of the conditions for an alert is met.
I tried using
alertvalue = `python alarm.py`
and running that in a while loop, but this echoes nothing.
I thought of redirecting the Python output to a file, but some reading suggested that this would fail if the script wrote to the file at the same time the bash script attempted to read from it.
I'm wondering if I can send and receive the netcat information directly within the Python script?
Could you point me in the right direction here?
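To sketch the last idea: the standard library socket module can replace netcat inside the Python script itself. The server address, port and message format below are assumptions, and check_detectors is a stand-in for whatever function already produces the 0/1/2 value.

import socket

SERVER = ('192.168.1.10', 5000)   # placeholder address of the main server

def send_alert(value):
    """Send one alert value (0, 1 or 2) to the server as a line of text."""
    with socket.create_connection(SERVER, timeout=5) as sock:
        sock.sendall(('%d\n' % value).encode('ascii'))

# Inside the existing once-per-second loop of alarm.py:
# status = check_detectors()
# if status in (1, 2):
#     send_alert(status)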

Python Thread Breaking Terminal

Hello minds of stackoverflow,
I've run into a perplexing bug. I have a Python script that creates a new thread that ssh's into a remote machine and starts a process. However, this process does not return on its own (and I want it to keep running throughout the duration of my script). In order to force the thread to return, at the end of my script I ssh into the machine again and kill -9 the process. This is working well, except for the fact that it breaks the terminal.
To start the thread I run the following code:
t = threading.Thread(target=run_vUE_rfal, args=(vAP.IP, vUE.IP))
t.start()
The function run_vUE_rfal is as follows:
cmd = "sudo ssh -ti ~/.ssh/my_key.pem user#%s 'sudo /opt/company_name/rfal/bin/vUE-rfal -l 3 -m -d %s -i %s'" % (vUE_IP, vAP_IP, vUE_IP)
output = commands.getstatusoutput(cmd)
return
It seems that when the command is run, it somehow breaks my terminal: instead of starting a new line for each print, it pads each line with whitespace out to the width of my terminal and prints everything as what looks like one long string. Also, I am unable to see my keyboard input to that terminal, but it is still successfully read. My terminal looks something like this:
normal formatted output
normal formatted output
running vUE-rfal
print1
print2
print3_extra_long
print4
If I replace the body of the run_vUE_rfal function with some simple prints, the terminal does not break. I have many other ssh and telnet calls in this script that work fine. However, this is the only one I'm running in a separate thread, as it is the only one that does not return. I need to keep the ability to kill the process on the remote machine when my script is finished.
Any explanations to the cause and idea for a fix are much appreciated.
Thanks in advance.
It seems the process you control is changing the terminal settings. These changes bypass stdout and stderr, for good reason: ssh itself needs this, for example, to ask users for passwords even when its output is being redirected.
One way to solve this is to use the Python module pexpect (a third-party library) to launch your process, as it will create its own fake tty that you don't care about.
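A rough sketch of what that could look like in place of the commands.getstatusoutput call (untested here; the paths and IPs are the ones from your question):

import pexpect

def run_vUE_rfal(vAP_IP, vUE_IP):
    cmd = ("sudo ssh -ti ~/.ssh/my_key.pem user@%s "
           "'sudo /opt/company_name/rfal/bin/vUE-rfal -l 3 -m -d %s -i %s'"
           % (vUE_IP, vAP_IP, vUE_IP))
    # pexpect gives the command its own pseudo-tty, so whatever terminal
    # settings it changes never touch the terminal your script runs in.
    child = pexpect.spawn(cmd, timeout=None)
    child.expect(pexpect.EOF)      # block until the remote process exits
    return child.before            # everything the process printed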
BTW, to "repair" your terminal, use the reset command. As you already noticed, you can enter commands. reset will set the terminal to default settings.

Why does stdout not flush when connecting to a process that is run with supervisord?

I am using Supervisor (a process controller written in Python) to start and control my web server and associated services. At times I need to drop into pdb (or really ipdb) to debug while the server is running. I am having trouble doing this through Supervisor.
Supervisor starts and controls processes with a daemon called supervisord, and offers access through a client called supervisorctl. The client allows you to attach to one of the processes it has started, in the foreground, using the fg command, like this:
supervisor> fg webserver
All logging data gets sent to the terminal, but I do not get any text from the pdb debugger. It does accept my input, so stdin seems to be working.
As part of my investigation I was able to confirm that neither print nor raw_input sends any text out either; but in the case of raw_input, stdin is indeed working.
I was also able to confirm that this works:
sys.stdout.write('message')
sys.stdout.flush()
I thought that when I issued the fg command it would be as if I had run the process in the foreground in a standard terminal, but it appears that supervisorctl is doing something more. Regular printing does not flush, for example. Any ideas?
How can I get pdb, standard prints, etc to work properly when connecting to the foreground terminal using the fg command in supervisorctl?
(Possible helpful ref: http://supervisord.org/subprocess.html#nondaemonizing-of-subprocesses)
It turns out that Python buffers its output stream by default. In certain cases (such as this one) that results in output being held back in the buffer.
Idioms like this exist:
import os, sys
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
to force the buffer to zero.
But I think the better alternative is to start the base Python process in an unbuffered state using the -u flag. Within the supervisord.conf file, it simply becomes:
command=python -u script.py
ref: http://docs.python.org/2/using/cmdline.html#envvar-PYTHONUNBUFFERED
Also note that this dirties up your log file, especially if you are using something like ipdb with ANSI coloring. But since it is a dev environment, it is not likely that this matters.
If this is an issue - another solution is to stop the process to be debugged in supervisorctl and then run the process temporarily in another terminal for debugging. This would keep the logfiles clean if that is needed.
It could be that your webserver redirects its own stdout (internally) to a log file (i.e. it ignores supervisord's stdout redirection), and that prevents supervisord from controlling where its stdout goes.
To check if this is the case, you can tail -f the log, and see if the output you expected to see in your terminal goes there.
If that's the case, see if you can find a way to configure your webserver not to do that, or, if all else fails, try working with two terminals... (one for input, one for output)
