I am using the Python Paramiko module to run a built-in Paramiko function, SSH.execute, on a remote server. I want to run a script on the server which will require four prompts. I was planning to do a more complex version of this:
ExpectedString = 'ExpectedOutput'
Output = SSH.execute('./runScript')
if Output == ExpectedString:
    SSH.execute('Enter this')
else:
    raise SomeException
The problem is that nothing comes back for the output, because the server is waiting for a number to be entered, so the script gets stuck at this SSH.execute call. Any SSH.execute command issued after it never runs either! Should I be looking at something other than Paramiko?
You need to interact with the remote script. Actually, SSH.execute doesn't exist; I assume you're talking about exec_command. Instead of just returning the output, it gives you wrappers for the stdin, stdout and stderr streams, and you can use these directly to communicate with the remote script.
Basically, this is how you run a command and pass data over stdin (and receive output using stdout):
ssh.connect('127.0.0.1', username='foo', password='bar')
stdin, stdout, stderr = ssh.exec_command("some_script")
stdin.write('expected_input\n')
stdin.flush()
data = stdout.read().splitlines()  # read() blocks until the remote command exits
You should check for the prompts, of course, instead of relying on good timing.
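As a rough sketch of that prompt-driven approach for the first of the four prompts (the host, credentials, script name, prompt text and answer below are placeholders, not values from the question):

import time
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('127.0.0.1', username='foo', password='bar')

stdin, stdout, stderr = ssh.exec_command('./runScript')
channel = stdout.channel

def wait_for(prompt, timeout=10):
    """Collect output until the expected prompt appears (or time runs out)."""
    buf = ''
    deadline = time.time() + timeout
    while prompt not in buf and time.time() < deadline:
        if channel.recv_ready():
            buf += channel.recv(1024).decode('utf-8', 'replace')
        else:
            time.sleep(0.1)
    return buf

output = wait_for('Enter a number:')   # hypothetical prompt text
if 'Enter a number:' in output:
    stdin.write('42\n')                # answer the prompt
    stdin.flush()
else:
    raise RuntimeError('Did not see the expected prompt: %r' % output)

The same wait-then-answer step would be repeated for each of the remaining prompts.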
@leoluk - yep, I understand your problem; neither of those recommended solutions will work. The problem with exec_command, as you said, is that you can only read the output once the command completes. So, if you wanted to remotely run the command rm -i *, you wouldn't be able to read which file is about to be deleted before responding with a "yes" or a "no". The key here is to use invoke_shell. See this YouTube link - https://www.youtube.com/watch?v=lLKdxIu3-A4 - it helped and got me going.
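A minimal sketch of that invoke_shell idea, using the rm -i example (the prompt detection, the 30-second deadline and the decision to answer every confirmation with "y" are assumptions; detecting when the command has finished, e.g. by watching for the shell prompt, is left out):

import time
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('127.0.0.1', username='foo', password='bar')

shell = ssh.invoke_shell()      # interactive shell with a pseudo-terminal
time.sleep(1)
if shell.recv_ready():
    shell.recv(4096)            # throw away the login banner / first prompt
shell.send('rm -i *\n')

buf = ''
deadline = time.time() + 30
while time.time() < deadline:
    if shell.recv_ready():
        buf += shell.recv(1024).decode('utf-8', 'replace')
        if buf.rstrip().endswith('?'):   # e.g. "rm: remove regular file 'x'?"
            shell.send('y\n')            # answer each confirmation as it appears
            buf = ''
    else:
        time.sleep(0.1)
ssh.close()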
Related
I have a long-running program on my remote server, which I want to start with Paramiko.
I know how to start a program in the background (with nohup, for example), but my program first needs a few user inputs. In a normal SSH session I would pass those interactively over the terminal and then detach (Ctrl+Z, bg, disown), but I can't do this with Paramiko.
Here is what I've been trying so far:
stdin, stdout, stderr = ssh.exec_command('MyCommand', get_pty=True)
stdin.channel.send(b'MyData\r')
stdin.channel.send(b'\x1A') # This is Ctrl+Z
ssh.exec_command('bg; disown')
ssh.close()
but when the SSH connection closes, the remote program also stops running. How can I send user input to a program, that I want to continue running in the background?
You are currently executing MyCommand and bg; disown in two separate shell instances. That's why the bg; disown has no effect on MyCommand.
If you really want to emulate the interactive shell features this way, then you need to execute both commands in one real shell instance. For that, you will need to use SSHClient.invoke_shell and not SSHClient.exec_command.
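A minimal sketch of what that could look like (MyCommand and MyData are the placeholders from the question; the fixed sleeps are a crude assumption, and in practice you would wait for the program's actual prompts):

import time
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('example.com', username='foo', password='bar')

shell = ssh.invoke_shell()
shell.send('MyCommand\n')     # start the program in this shell
time.sleep(1)                 # crude: wait for it to ask for input
shell.send('MyData\r')        # feed it the interactive input
time.sleep(1)
shell.send('\x1a')            # Ctrl+Z to suspend it
time.sleep(1)
shell.send('bg; disown\n')    # resume it in the background, detach from the shell
time.sleep(1)
ssh.close()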
Though in general, that's not a good idea. See also:
What is the difference between exec_command and send with invoke_shell() on Paramiko?
If the program is yours, modify it to allow getting its input from the command line.
If you cannot modify it, use shell constructs to provide the input, like:
ssh.exec_command('nohup echo MyData | MyCommand >/dev/null 2>&1 &')
You may need to add quotes or an explicit sub-shell around the command; I'm not sure what the "operator precedence" is here.
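For example, one way to make sure nohup covers the whole pipeline would be to wrap it in an explicit shell (untested; bash -c is just one option):
ssh.exec_command('nohup bash -c "echo MyData | MyCommand" >/dev/null 2>&1 &')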
Hello minds of Stack Overflow,
I've run into a perplexing bug. I have a Python script that creates a new thread which SSHes into a remote machine and starts a process. However, this process does not return on its own (and I want it to keep running throughout the duration of my script). In order to force the thread to return, at the end of my script I SSH into the machine again and kill -9 the process. This works well, except for the fact that it breaks the terminal.
To start the thread I run the following code:
t = threading.Thread(target=run_vUE_rfal, args=(vAP.IP, vUE.IP))
t.start()
The function run_vUE_rfal is as follows:
cmd = "sudo ssh -ti ~/.ssh/my_key.pem user#%s 'sudo /opt/company_name/rfal/bin/vUE-rfal -l 3 -m -d %s -i %s'" % (vUE_IP, vAP_IP, vUE_IP)
output = commands.getstatusoutput(cmd)
return
It seems that when the command is run, it somehow breaks my terminal. It is broken in that, instead of creating a new line for each print, it appends a terminal-width's worth of whitespace to the end of each line and prints everything as seemingly one long string. Also, I am unable to see my keyboard input in that terminal, but it is still successfully read. My terminal looks something like this:
normal formatted output
normal formatted output
running vUE-rfal
print1
print2
print3_extra_long
print4
If I replace the body of the run_vUE_rfal function with some simple prints, the terminal does not break. I have many other SSH and telnet calls in this script that work fine. However, this is the only one I'm running in a separate thread, as it is the only one that does not return. I need to maintain the ability to kill the process on the remote machine when my script is finished.
Any explanations to the cause and idea for a fix are much appreciated.
Thanks in advance.
It seems the process you control is changing terminal settings. These changes bypass stderr and stdout - for good reasons. E.g., ssh itself needs this to ask users for passwords even when its output is being redirected.
A way to solve this could be to use the Python module pexpect (it's a third-party library) to launch your process, as it will allocate its own fake tty that you don't care about.
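A rough sketch of that idea, replacing commands.getstatusoutput with pexpect (the command string is the one from the question; the argument order, EOF handling and timeout are assumptions):

import pexpect

def run_vUE_rfal(vAP_IP, vUE_IP):
    cmd = ("sudo ssh -ti ~/.ssh/my_key.pem user@%s "
           "'sudo /opt/company_name/rfal/bin/vUE-rfal -l 3 -m -d %s -i %s'"
           % (vUE_IP, vAP_IP, vUE_IP))
    # pexpect gives the child its own pseudo-terminal, so whatever terminal
    # settings the remote command changes never touch your real terminal.
    child = pexpect.spawn('/bin/bash', ['-c', cmd])
    child.expect(pexpect.EOF, timeout=None)   # block until the process ends
    output = child.before
    child.close()
    return output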
BTW, to "repair" your terminal, use the reset command. As you already noticed, you can enter commands. reset will set the terminal to default settings.
I'm using Node to execute a Python script. The Python script SSHes into a server and then runs a Pig job. I want to be able to get the standard output from the Pig job and display it in the browser.
I'm using the Pexpect library to make the SSH calls, but it will not print the output of the Pig call until it has completely finished (at least the way I have it written). Any tips on how to restructure it?
child.sendline(command)
child.expect(COMMAND_PROMPT)
print(child.before)
I know I shouldn't be expecting the command prompt (because that will only show up when the process ends), but I'm not sure what I should be expecting.
Repeating my comment as an answer, since it solved the issue:
If you set child.logfile_read to a writable file-like object (e.g. sys.stdout), Pexpect will then forward the output there as it reads it.
child.logfile_read = sys.stdout
child.sendline(command)
child.expect(COMMAND_PROMPT)
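Put together with the surrounding calls, a minimal self-contained sketch might look like this (it assumes key-based SSH login; the host, the '$ ' prompt pattern and the pig invocation are placeholders, not values from the question):

import sys
import pexpect

COMMAND_PROMPT = r'\$ '                  # assumed shell prompt pattern
child = pexpect.spawn('ssh user@example.com', encoding='utf-8')
child.logfile_read = sys.stdout          # stream remote output as it arrives
child.expect(COMMAND_PROMPT)
child.sendline('pig my_job.pig')         # hypothetical Pig invocation
child.expect(COMMAND_PROMPT)             # by now the output has already been echoed above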
I am using Supervisor (a process controller written in Python) to start and control my web server and associated services. At times I need to drop into pdb (or really ipdb) to debug while the server is running. I am having trouble doing this through Supervisor.
Supervisor lets processes be started and controlled by a daemon called supervisord, and offers access through a client called supervisorctl. This client lets you attach to one of the foreground processes it has started, using the 'fg' command, like this:
supervisor> fg webserver
All logging data gets sent to the terminal. But I do not get any text from the pdb debugger. It does accept my input, so stdin seems to be working.
As part of my investigation I was able to confirm that neither print nor raw_input sends any text out either; but in the case of raw_input, stdin is indeed working.
I was also able to confirm that this works:
sys.stdout.write('message')
sys.stdout.flush()
I thought that when I issued the fg command it would be as if I had run the process in the foreground in a standard terminal ... but it appears that supervisorctl is doing something more. Regular printing does not flush, for example. Any ideas?
How can I get pdb, standard prints, etc to work properly when connecting to the foreground terminal using the fg command in supervisorctl?
(Possible helpful ref: http://supervisord.org/subprocess.html#nondaemonizing-of-subprocesses)
It turns out that Python defaults to buffering its output stream. In certain cases (such as this one) that results in output being held back in the buffer.
Idioms like this exist:
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
to force the buffer to zero.
But the better alternative I think is to start the base python process in an unbuffered state using the -u flag. Within the supervisord.conf file it simply becomes:
command=python -u script.py
ref: http://docs.python.org/2/using/cmdline.html#envvar-PYTHONUNBUFFERED
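For context, the relevant program section in supervisord.conf might look roughly like this (the program name and script path are placeholders):

[program:webserver]
; -u keeps the Python interpreter's stdout/stderr unbuffered
command=python -u script.py
autostart=true
autorestart=true
redirect_stderr=true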
Also note that this dirties up your log file - especially if you are using something like ipdb with ANSI coloring. But since it is a dev environment it is not likely that this matters.
If this is an issue - another solution is to stop the process to be debugged in supervisorctl and then run the process temporarily in another terminal for debugging. This would keep the logfiles clean if that is needed.
It could be that your webserver redirects its own stdout (internally) to a log file (i.e. it ignores supervisord's stdout redirection), and that prevents supervisord from controlling where its stdout goes.
To check if this is the case, you can tail -f the log, and see if the output you expected to see in your terminal goes there.
If that's the case, see if you can find a way to configure your webserver not to do that, or, if all else fails, try working with two terminals... (one for input, one for output).