Paramiko stdout.readlines() vs channel.recv() - python

I'm new to Python Paramiko. I know there are two ways to execute a command on a remote server: invoke_shell and exec_command. In some examples the output is read using stdout.readlines(), whereas in others it is read using channel.recv with the exit status as the loop condition. It is very difficult to understand the difference between the two and which one to use for my script. Can anyone please explain?

This is a rather broad question, so only briefly:
readlines vs recv – This is nothing Paramiko-specific. You have the same set of functions when reading local files or local program input. Use whatever fits your needs. If you need to read by bytes (e.g. when processing binary input), you probably want to use recv (or read). If you want to process textual input by lines, use readlines (or readline).
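For illustration, here is a minimal sketch of both reading styles with exec_command; the hostname, credentials and commands are placeholders and error handling is left out:

import paramiko

# Placeholder host, credentials and commands; adapt to your setup.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("example.com", username="user", password="secret")

# Line-oriented reading: collect the whole output as a list of text lines.
stdin, stdout, stderr = client.exec_command("ls -l /tmp")
for line in stdout.readlines():
    print(line.rstrip())

# Byte-oriented reading: pull raw chunks from the underlying channel
# until the command has finished and nothing is left to receive.
stdin, stdout, stderr = client.exec_command("cat /tmp/data.bin")
channel = stdout.channel
data = b""
while not channel.exit_status_ready() or channel.recv_ready():
    if channel.recv_ready():
        data += channel.recv(4096)

client.close()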
You also mix shell vs. exec into your question, which is a separate topic, covered here:
What is the difference between exec_command and send with invoke_shell() on Paramiko?
Overall, you would do better to ask a specific question about implementing your specific problem.

Related

Cannot get reset_input_buffer() function to work at all in Pyserial 3.5. Does anyone have any idea what may be happening?

I am trying to simulate a communication protocol where I am following a pattern, so I constantly loop through looking for the same set of characters to reply with information. I'm using an RS-232 adapter, and the protocol I am simulating is asynchronous and half-duplex: the rx/tx lines are tied together by design, which causes a sort of echo when reading after writing.
That said, I need to be able to clear the input buffer after every write I send out in order to avoid reading what I just wrote. But whenever I use reset_input_buffer(), it does not clear the last message I sent out. I have tried to fix this in a couple of ways, such as: using reset_output_buffer() together with reset_input_buffer(), calling reset_input_buffer() twice, and using flush(). None of these makes any difference. The only other method that clears the buffer is closing and immediately reopening the port, but that causes a delay that interferes with the timing, which is critical at certain points.
I'm open to any suggestions, please help!

Output from a process

I have to execute a command and store its output in a file. The output spans multiple pages, and I have to press Enter multiple times to see the complete output (similar to when a man page spans multiple screens). I am thinking of using the subprocess module, but how do I provide input to the process when it prompts?
Disclaimer: I don't know which command you're actually executing so this is just a stab in the dark.
You should not have to provide any input.
Piping the output of the command to cat solves your problem:
less testfile.txt | cat
Also, if your goal is to store the output in another file, you can simply do this (this will overwrite):
less testfile.txt > testfilecopy.txt
(and this will append):
less textfile.txt >> logfile.txt
See: https://unix.stackexchange.com/questions/15855/how-to-dump-a-man-page
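If you want to do the same from Python, a minimal subprocess sketch could look like the following; "man ls" is only a placeholder for your paging command. Because stdout is a pipe rather than a terminal here, most such commands stop paging and emit everything at once:

import subprocess

# "man ls" stands in for whatever command normally pages its output.
# With stdout redirected to a pipe, the command cannot detect a
# terminal, so it does not prompt between pages.
output = subprocess.check_output(["man", "ls"])

with open("output.txt", "wb") as f:
    f.write(output)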
The best solution is to check whether the process supports a command-line flag to run in "batch mode", disable paging, or something similar that suppresses any such "waits". But I guess you have already checked that. The fact that you have to interact with the program to page through its output tells me it's probably not a standard Unix command; those are usually quite easy to run in a subprocess.
Your best bet in that case would be to use expect. A Python implementation is available as pexpect.
Expect scripts tend to be fairly ugly and error-prone, so you have to be diligent with error handling. I have only limited practical experience with it, as I have only modified some of our existing scripts rather than written one from scratch, but from those scripts I know they work, and they work reliably.
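A rough pexpect sketch of that idea; the command and the "--More--" prompt pattern are assumptions you would replace with whatever your program actually prints:

import pexpect

# Both the command and the "--More--" prompt are placeholders.
child = pexpect.spawn("some_interactive_command")
collected = []
while True:
    index = child.expect(["--More--", pexpect.EOF])
    collected.append(child.before.decode())
    if index == 1:        # EOF: the program has finished
        break
    child.send(" ")       # answer the pager prompt with a space

with open("output.txt", "w") as f:
    f.write("".join(collected))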

Feasibility of using pipe for ruby-python communication

Currently, I have two programs, one running in Ruby and the other in Python. I need to read a file in Ruby, but I first need a library written in Python to parse the file. Currently, I use XMLRPC to have the two programs communicate. Porting the Python library to Ruby is out of the question. However, I have read that using XMLRPC has some performance overhead. Recently, I read that another solution to the Ruby-Python conundrum is the use of pipes, so I tried to experiment with that. For example, I wrote this master script in Ruby:
(0..2).each do
  slave = IO.popen(['python','slave.py'], mode='r+')
  slave.write "master"
  slave.close_write
  line = slave.readline
  while line do
    sleep 1
    p eval line
    break if slave.eof
    line = slave.readline
  end
end
The following is the Python slave:
import sys

cmd = sys.stdin.read()
while cmd:
    x = cmd
    for i in range(0, 5):
        print "{'%i'=>'%s'}" % (i, x)
        sys.stdout.flush()
    cmd = sys.stdin.read()
Everything seems to work fine:
~$ ruby master.rb
{"0"=>"master"}
{"1"=>"master"}
{"2"=>"master"}
{"3"=>"master"}
{"4"=>"master"}
{"0"=>"master"}
{"1"=>"master"}
{"2"=>"master"}
{"3"=>"master"}
{"4"=>"master"}
{"0"=>"master"}
{"1"=>"master"}
{"2"=>"master"}
{"3"=>"master"}
{"4"=>"master"}
My question is, is it really feasible to use pipes for working with objects between Ruby and Python? One consideration is that there may be multiple instances of master.rb running. Will concurrency be an issue? Can pipes handle extensive operations and objects being passed between the two? If so, would it be a better alternative to RPC?
To answer your questions in order: yes. No. If you implement it properly, yes. And it depends on what your application needs.
Basically, if all you need is simple data passing, pipes are fine; if you need to be constantly calling functions on objects in your remote process, then you'll probably be better off using some form of existing RPC instead of reinventing the wheel. Whether that should be XMLRPC or something else is another matter.
Note that RPC will have to use some underlying IPC mechanism, which could well be pipes, but might also be sockets, message queues, shared memory, or whatever.
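If you do stay with pipes, a line-oriented, language-neutral format such as JSON keeps the two sides loosely coupled and avoids eval on the Ruby side. A minimal sketch of the Python slave, assuming the master sends one request per line (the field names here are made up):

import json
import sys

for line in sys.stdin:
    request = line.strip()
    for i in range(5):
        # One JSON document per line is easy to parse from Ruby.
        sys.stdout.write(json.dumps({"index": i, "echo": request}) + "\n")
        sys.stdout.flush()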

How can I read output from another program? [duplicate]

This question already has answers here:
read subprocess stdout line by line
How can I receive input from the terminal in Python?
I am using Python to interface with another program which generates output from user input.
I am using subprocess.Popen() to send input to the program, but I can't set stdout to subprocess.PIPE because the program never seems to flush, so everything gets stuck in the buffer.
The program's standard output seems to go to the terminal, and I see output when I do not redirect stdout. However, I need Python to read and interpret the output which is now in the terminal.
Sorry if this is a stupid question, but I can't seem to get this to work.
Buffering in child processes is a common problem. Here are four possible approaches.
First, and easiest, you could read one byte at a time from your pipe. This is what I would call a "dirty hack" and it carries a performance penalty, but it's easy and it guarantees that your read() calls will only block until the first byte comes in, rather than wait for a buffer to fill up that's never going to fill up. However, this does not force the other process to flush its write buffer, so if that is the issue this approach will not help you anyway.
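A minimal sketch of that first approach, with the command as a placeholder:

import subprocess

# read(1) returns as soon as a single byte arrives, so the read never
# waits for a larger buffer to fill. It still cannot force the child
# to flush its own write buffer.
proc = subprocess.Popen(["some_command"], stdout=subprocess.PIPE)
chunks = []
while True:
    byte = proc.stdout.read(1)
    if not byte:              # empty result means EOF
        break
    chunks.append(byte)
output = b"".join(chunks)
proc.wait()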
Second, and I think next-easiest, consider using the Twisted framework which has a facility for using a virtual terminal, or pty ("pseudo-teletype" I think) to talk to your child process. However, this can affect the design of your application (possibly for the better, but this may not be in the cards for you regardless). http://twistedmatrix.com/documents/current/core/howto/process.html
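If pulling in Twisted is too heavy, the same pty idea can be sketched with the standard-library pty module; the command below is a placeholder and error handling is minimal:

import os
import pty
import subprocess

master_fd, slave_fd = pty.openpty()
proc = subprocess.Popen(["some_command"], stdout=slave_fd, stderr=slave_fd)
os.close(slave_fd)            # only the child needs the slave end now

try:
    while True:
        # The child believes it is writing to a terminal, so it
        # typically switches to line buffering and its output arrives
        # as soon as it is produced.
        data = os.read(master_fd, 1024)
        if not data:
            break
        print(data.decode(), end="")
except OSError:
    pass                      # EIO on Linux once the child closes the pty
finally:
    os.close(master_fd)
    proc.wait()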
If neither of the above options works for you, you're reduced to solving gritty I/O concurrency issues yourself.
Third, try setting your pipes (all of them, before fork()) to non-blocking mode using fcntl() with O_NONBLOCK. Then you can use select() to test for read/write readiness before trying the read/write; but you still have to catch IOError and test for EAGAIN because it can happen even in this case. This may, depending on the behavior of the child process, allow you to wait until the data really shows up before trying to read it in.
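A sketch of that non-blocking variant, again with a placeholder command:

import fcntl
import os
import select
import subprocess

proc = subprocess.Popen(["some_command"], stdout=subprocess.PIPE)
fd = proc.stdout.fileno()

# Switch the read end of the pipe to non-blocking mode.
flags = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

while True:
    # Wait (up to one second) until the pipe is readable.
    ready, _, _ = select.select([fd], [], [], 1.0)
    if ready:
        try:
            data = os.read(fd, 4096)
        except OSError:       # EAGAIN can still occur; just retry
            continue
        if not data:          # EOF
            break
        print(data.decode(), end="")
    elif proc.poll() is not None:
        break                 # child exited and nothing is left to read
proc.wait()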
The last resort is to implement the PTY logic yourself. If you've seen references to stuff like termio options, ioctl() calls, etc. then that's what you're up against. I have not done this before, because it's complicated and I have never really needed to. If this is your destiny, good luck.
Have you tried setting the bufsize in your Popen object to 0? I'm not sure if you can force the buffer to be unbuffered from the receiving side, but I'd try it.
http://docs.python.org/library/subprocess.html#using-the-subprocess-module
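For reference, a minimal sketch of that suggestion (the command is a placeholder); note that bufsize only controls the pipe object on the Python side, not the child's own stdio buffering:

import subprocess

proc = subprocess.Popen(["some_command"], stdout=subprocess.PIPE, bufsize=0)
for line in iter(proc.stdout.readline, b""):
    print(line.decode(), end="")
proc.wait()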

Closing files in Python

In this discussion about the easiest way to run a process and discard its output, I suggested the following code:
with open('/dev/null', 'w') as dev_null:
    subprocess.call(['command'], stdout=dev_null, stderr=dev_null)
Another developer suggested this version:
subprocess.call(['command'], stdout=open('/dev/null', 'w'), stderr=STDOUT)
The C++ programmer in me wants to say that when objects are released is an implementation detail, so to avoid leaving a filehandle open for an indeterminate period of time, I should use with. But a couple of resources suggest that Python always or almost always uses reference counting for code like this, in which case the filehandle should be reclaimed as soon as subprocess.call is done and using with is unnecessary.
(I guess that leaving a filehandle open to /dev/null in particular is unlikely to matter, so pretend it's an important file.)
Which approach is best?
You are correct, refcounting is not guaranteed. In fact, only CPython (which is the main implementation, yes, but not even remotely the only one) provides refcounting. If CPython ever changes that implementation detail (unlikely, yes, but possible), or your code is ever run on an alternate implementation, or you lose refcounting for any other reason, the file won't be closed. Therefore, and given that the with statement makes cleanup very easy, I would suggest you always use a context manager when you open files.
When the pipe to the null device closes is irrelevant; it won't lead to data loss in the output or anything like that. While you may want to use the with variant everywhere to ensure that your output files are always properly flushed and closed, this isn't an example where that matters.
The entire point of the with statement is to have a controlled cleanup process. You're doing it right, don't let anyone convince you otherwise.
