Paramiko hangs on channel.makefile.read() in a while loop - python

I haven't been able to resolve this issue, but I suspect it's easy for someone familiar with Paramiko/ssh2 to figure out.
The code below works fine when executed only once, but when wrapped in a while loop it hangs on stdout.read(). I could not use exec_command because it was not returning the correct output (the device I am SSHing into is not a standard microcontroller, and I'm still uncertain exactly what encoding or SSH parameters it uses). Since this approach worked once, I wanted to query the device continuously, but it stopped working when I wrapped the commands in a while loop.
I also tried changing how the while loop was wrapped, including wrapping the whole code block starting with the initial SSH connection, wrapping it around channel.close, etc.
import paramiko
import time

freewave_shell = paramiko.SSHClient()
freewave_shell.set_missing_host_key_policy(paramiko.AutoAddPolicy())
freewave_shell.connect("an.ip.add.ress", username="user", password="pass")
chan = freewave_shell.invoke_shell()

while True:
    stdin = chan.makefile_stdin('wb')
    stdout = chan.makefile('rb')
    stdin.write('''
signalLevel
noiseLevel
signalMargin
VSWR
exit
''')
    print('HERE')
    print(stdout.read())
    stdout.close()
    stdin.close()

chan.close()
freewave_shell.close()

I do not think your code is anywhere near reliable.
But the primary issue is that once you close the channel's I/O, you have to reconnect the channel. So you have to move the invoke_shell call into the loop.
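Something like this might work (an untested sketch that reuses the commands from the question and opens a fresh channel on every iteration):

import time
import paramiko

freewave_shell = paramiko.SSHClient()
freewave_shell.set_missing_host_key_policy(paramiko.AutoAddPolicy())
freewave_shell.connect("an.ip.add.ress", username="user", password="pass")

while True:
    # A fresh channel per iteration: the old one is dead once its I/O is closed
    chan = freewave_shell.invoke_shell()
    stdin = chan.makefile_stdin('wb')
    stdout = chan.makefile('rb')
    stdin.write('''
signalLevel
noiseLevel
signalMargin
VSWR
exit
''')
    print(stdout.read())  # 'exit' ends the remote shell, so read() hits EOF
    stdout.close()
    stdin.close()
    chan.close()
    time.sleep(1)  # assumed polling interval; adjust as needed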

Related

How to end a python subprocess with no return?

I'm working on a BCP wrapper method in Python, but have run into an issue invoking the command with subprocess.
As far as I can tell, the BCP command doesn't return any value or indication that it has completed outside of what it prints to the terminal window, which causes subprocess.call or subprocess.run to hang while they wait for a return.
subprocess.Popen allows a manual .terminate() method, but I'm having issues getting the table to write afterwards.
The bcp command works from the command line with no issues: it loads data from a source csv according to a .fmt file and writes an error log file. My script is able to dismount the file from the log path, so I would consider the command itself irrelevant and the question to be about the behavior of the subprocess module.
This is what I'm trying at the moment:
from pathlib import Path
from time import sleep
import subprocess

process = subprocess.Popen(bcp_command)
try:
    # Wait (up to ~15 s) for the error log that bcp writes on completion
    path = Path(log_path)
    sleep_counter = 0
    while not path.is_file() and sleep_counter < 16:
        sleep(1)
        sleep_counter += 1
finally:
    process.terminate()
self.datacommand = datacommand
My idea was to check that the error log file had been written by the bcp command, as a way to tell that the process had finished. With this change my script no longer freezes, and the files are apparently being written and dismounted successfully later on in the script. The script also terminates in less than the 15 seconds the sleep loop would allow.
When the process froze my Spyder shell (and IDLE too, so it's not the IDE), I could force-terminate it by closing the console itself, and it would at least write to the server.
However, it seems that by using .terminate() the command isn't actually writing anything to the server.
I checked whether a dumb 15-second time-out (it takes about 2 seconds to do the BCP with this data) would work as well, in case it was writing an error log before the load finished.
That still resulted in an empty table on SQL Server.
How can I get subprocess to execute a command without hanging?
Well, it seems to be a more general issue about calling helper functions with Popen, as seen here:
https://github.com/dropbox/pyannotate/issues/67
I was able to fix the hanging issue by changing it to:
subprocess.Popen(bcp_command, close_fds=True)
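If you also want an upper bound on how long the script waits, a minimal sketch (assuming Python 3.3+ for the timeout parameter; bcp_command as in the question):

import subprocess

# close_fds=True stops the child from inheriting descriptors that kept the call hanging
process = subprocess.Popen(bcp_command, close_fds=True)
try:
    process.wait(timeout=15)  # assumed upper bound; the BCP normally takes ~2 s here
except subprocess.TimeoutExpired:
    process.terminate()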

Is it possible to write to input / read output from a detached subprocess?

I'm trying to manage a game server (a server for players to join, I didn't create the game) through a Python module. I noticed, however, that the server stops when the Python script stops to ask for input (from input()). Is there any way around this?
The server is run as a subprocess:
server = subprocess.Popen(r"D:\Windows\System32\cmd.exe", stdin=subprocess.PIPE, stdout=subprocess.PIPE) followed by server.stdin.write calls to run the server exe file.
The server seems to work fine if ran without a stdout pipe, but I still need to receive output from it without it stopping if possible.
I apologize for the vague question and my lack of python knowledge.
It sounds like you want to do two things:
Service a subprocess's stdout.
Wait for user input on input.
And you need to do them both simultaneously, and in something close to real time: while you block reading from the subprocess, the user can't enter any commands, and while you block reading from user input, the subprocess hangs on a stalled pipe.
The simplest way to do this is to just use a thread for each.
Without seeing any code, it's hard to show a good example, but something like this:
import subprocess
import threading

def service_proc_stdout(proc):
    while True:
        buf = proc.stdout.read(4096)  # read in chunks; a bare read() would block until EOF
        do_proc_stuff(buf)

proc = subprocess.Popen(…)

t = threading.Thread(target=service_proc_stdout, args=(proc,))
t.start()

while True:
    command = input()
    do_command_stuff(command)
It sounds like your do_command_stuff is writing to proc.stdin. That may just work, but it's possible that proc.stdin may block if you push input into it too fast, preventing you from reading user input. If you need to solve that, just start a third thread:
import queue

def service_proc_stdin(q, proc):
    while True:
        msg = q.get()
        proc.stdin.write(msg)

q = queue.Queue()
tstdin = threading.Thread(target=service_proc_stdin, args=(q, proc))
tstdin.start()
… and now, instead of directly calling proc.stdin.write(…), you call q.put(…).
Threads aren't the only way to handle the concurrency here. For example, you could use an asyncio event loop, or a manual selectors loop around non-blocking pipes. But it's probably the simplest change, at least if you don't need to share or pass anything between the threads beyond messages you push onto a queue.
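For comparison, a rough asyncio sketch of the same structure (assuming Python 3.7+; the server command line is a placeholder, and do_proc_stuff/do_command_stuff are the same stand-ins as above):

import asyncio

async def main():
    proc = await asyncio.create_subprocess_exec(
        "the_server.exe",  # placeholder command
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE)

    async def pump_stdout():
        while True:
            buf = await proc.stdout.read(4096)
            if not buf:  # EOF: the server exited
                break
            do_proc_stuff(buf)

    asyncio.create_task(pump_stdout())
    loop = asyncio.get_running_loop()
    while True:
        # input() blocks, so run it in a worker thread
        command = await loop.run_in_executor(None, input)
        do_command_stuff(command)

asyncio.run(main())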

Read from pty without endless hanging

I have a script that prints colored output when it is on a tty. A bunch of them execute in parallel, so I can't attach their stdout to the tty. I don't have control over the script's code either (to force coloring), so I want to fake it via a pty. My code:
invocation = get_invocation()
master, slave = pty.openpty()
subprocess.call(invocation, stdout=slave)
print string_from_fd(master)
And I can't figure out what should be in string_from_fd. For now, I have something like
def string_from_fd(fd):
    return os.read(fd, 1000)
It works, but that number 1000 looks strange. I think the output can be quite large, and no fixed number there may be sufficient. I tried a lot of solutions from Stack Overflow, but none of them works (it prints nothing or hangs forever).
I am not very familiar with file descriptors and all that, so any clarification if I'm doing something wrong would be much appreciated.
Thanks!
This won't work for long outputs: subprocess.call will block once the PTY's buffer is full. That's why Popen.communicate exists, but that won't work with a PTY.
The standard/easiest solution is to use the external module pexpect, which uses PTYs internally. For example,
pexpect.spawn("/bin/ls --color=auto").read()
will give you the ls output with color codes.
If you'd like to stick to subprocess, then you must use subprocess.Popen for the reason stated above. You are right in your assumption that by passing 1000 you read at most 1000 bytes, so you'll have to use a loop. os.read blocks if there is nothing to read and waits for data to appear.

The catch is how to recognize when the process has terminated, because at that point you know that no more data will arrive, and the next call to os.read would block forever. Luckily, the operating system helps you detect this situation: if all file descriptors to the pseudo terminal that could be used for writing are closed, then os.read will either return an empty string or return an error, depending on the OS. You can check for this condition and exit the loop when it happens.

The final piece to understanding the code below is how open file descriptors and subprocess go together: subprocess.Popen internally calls fork(), which duplicates the current process including all open file descriptors, and then within one of the two execution paths calls exec(), which terminates the current process in favour of a new one. In the other execution path, control returns to your Python script. So after calling subprocess.Popen there are two valid file descriptors for the slave end of the PTY: one belongs to the spawned process, one to your Python script. If you close yours, then the only file descriptor that could be used to send data to the master end belongs to the spawned process. Upon its termination, it is closed, and the PTY enters the state where calls to read on the master end fail.
Here's the code:
import os
import pty
import subprocess
master, slave = pty.openpty()
process = subprocess.Popen("/bin/ls --color", shell=True, stdout=slave,
                           stdin=slave, stderr=slave, close_fds=True)
os.close(slave)
output = []
while True:
    try:
        data = os.read(master, 1024)
    except OSError:
        break
    if not data:
        break
    output.append(data)  # in Python 3, append ".decode()" to os.read()
output = "".join(output)

Ensuring order of commands in Python

I have a .jar file that I'm running with arguments via Popen. This server takes about 4 seconds to start up, dumps out "Server Started" on the terminal, and then runs until the user quits the terminal. However, the print and webbrowser.open execute immediately because of Popen, and if I use call, they never run at all. Is there a way to ensure that the print and webbrowser don't run until after the server has started, other than using wait? Maybe grep for "Server Started"?
from subprocess import Popen
import glob
import sys
import webbrowser
reasoner = glob.glob("reasoner*.jar")
reasoner = reasoner.pop()
port = str(input("Enter connection port: "))
space = ""
portArg = ("-p", port)
portArg = space.join(portArg)
print "Navigate to the Reasoner at http://locahost:" + port
reasoner_process = Popen(["java", "-jar", reasoner, "-i", "0.0.0.0", portArg, "--dbconnect", "jdbc:h2:tcp://localhost//tmp/UXDemo;user=sa;password=admin"])
# I want the following to execute after the .jar process above
print "Opening http://locahost:" + port + "..."
webbrowser.open("http://locahost:" + port)
What you're looking to do is a very simple, special version of interacting with a CLI app. So, you have two options.
First, you can use a library like pexpect that's designed to handle driving almost any CLI application. It may be overkill, and there is a bit of a learning curve, but once you get the basics down this will make your problem trivial: you launch the JAR, block expecting "Server Started", then close.
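For instance, a rough sketch of the pexpect route (untested; reasoner and portArg are the variables built in the question, and the 30-second timeout is an assumption):

import pexpect

# Spawn the JAR and block until the startup banner appears
child = pexpect.spawn("java -jar %s -i 0.0.0.0 %s" % (reasoner, portArg))
child.expect("Server Started", timeout=30)
# ... the server is up: print the URL and open the browser here ...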
Alternatively, you can do this manually with the Popen pipes. In general this has a lot of problems, but when you know there's going to exactly one output that fits easily into 128 bytes and you don't want to do anything but block on that output and then close the pipe, none of those problems comes up. So:
reasoner_process = Popen(args, stdout=PIPE)
line = reasoner_process.stdout.readline()
if line.strip() != 'Server Started':
    pass  # error handling goes here
# Any code that you want to run while the server is running goes here
reasoner_process.stdout.close()
reasoner_process.kill()
reasoner_process.wait()
But first make sure you actually have to kill it; often closing the pipe is sufficient, in which case you can and should leave out the kill(). Then you can also check the exit code and raise if it's not 0.
Also, you probably want a with contextlib.closing(…) or whatever's appropriate, or just a try/finally to make sure you can raise an exception for error handling and not leak the child. (Python 3.2+ makes this a lot simpler, because it guarantees that both the pipes and the Popen itself are usable as context managers.)
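For example, a minimal sketch of the try/finally variant (args being the argument list built in the question):

from subprocess import Popen, PIPE

reasoner_process = Popen(args, stdout=PIPE)
try:
    line = reasoner_process.stdout.readline()
    if line.strip() != 'Server Started':
        raise RuntimeError('unexpected startup output: %r' % line)
    # code that runs while the server is up goes here
finally:
    reasoner_process.stdout.close()
    reasoner_process.kill()
    reasoner_process.wait()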
Finally, I was assuming that "runs until the user quits the terminal" means you want to wait for it to start, then leave it running while you do other stuff, then kill it. If your workflow is different, you obviously need to change the order in which you do things.

Python SSH not giving full output

I am trying to write a script that logs onto a remote machine, runs a command and returns the output. I'm doing this in python, using the paramiko library. However, for some reason the full output isn't being produced, only a single line of it.
In an attempt to isolate the problem, I created a local script, called simple, which runs the command and sends the output to a file, remote_test_output.txt. Then I simply sftp the file over instead. The file only contained the same one line. The only line of output is the same every time: the response code of the command.
When I do this all manually (ssh over, log in, and run ./simple), it all works as intended and the output file is correct. However, doing it through the script on my machine, it only returns the single line.
my code:
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.load_host_keys(os.path.expanduser(os.path.join("~", ".ssh", "known_hosts")))
ssh.connect(host, username=username, password=password)  # keyword args: the second positional parameter of connect() is the port
ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command('LD_LIBRARY_PATH=/opt/bin ./simple\n')
print "output:", ssh_stdout.read()+"end" #Reading output of the executed command
print "err:", ssh_stderr.read()#Reading the error stream of the executed command
sftp = ssh.open_sftp()
sftp.get('remote_test_output.txt', 'local_test_output.txt')
sftp.close()
What is returned:
response code: 128
What should be returned:
field1:value1
field2:value2
response code: 128
field3:value3
field4:value4
etc
Does anyone have any ideas why the command I'm trying to call isn't outputting normally?
I have to include the LD_LIBRARY_PATH variable assignment or I get a library does not exist error.
According to paramiko's documentation, the exec_command method returns a Channel object, which "behaves like a socket". First question: did you try setting the bufsize parameter to one?
This means that recv() (and possibly read()) will only return data that is already in the read buffer. So when your read() call returns, it does not mean that the command has finished executing on the remote side. You should use the exit_status_ready() method to check whether your command has completed:
http://www.lag.net/paramiko/docs/paramiko.Channel-class.html#exit_status_ready
And only after that can you read all the data. Well, this is what I guess. I may be wrong, but right now I cannot test my theory.
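Something along these lines might work (an untested sketch; it polls the channel behind the stdout file object before draining it):

import time

ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command('LD_LIBRARY_PATH=/opt/bin ./simple\n')
channel = ssh_stdout.channel
while not channel.exit_status_ready():
    time.sleep(0.1)  # poll until the remote command has finished
print "output:", ssh_stdout.read()  # the buffer should now hold the full output

One caveat: if the command produces more output than the channel window holds, the remote side can stall before exiting, so for very large outputs you would still need to drain stdout while polling.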
