Giving input to terminal in python - python

I'm writing code to read serial input. Once the serial input has been read, I have to add a time stamp below it, followed by the output from a certain piece of software. To get that output, I want Python to write a certain command to the terminal and then read the output that appears there. Could you suggest how I should go about this last step, namely writing to the terminal and then reading the output? I'm a beginner in Python, so please excuse me if this sounds trivial.

To run a command and get the returned output you can use the subprocess module's check_output function.
import subprocess
output = subprocess.check_output("ls -a", shell=True)
That will return the current directory contents on macOS/Linux and store the output for you to read from later in your program. Passing shell=True lets you give the command as a single string, "ls -a". Without shell=True, you pass the command as a list of its parts, for example subprocess.check_output(["ls", "-a"]). subprocess is a great module included with Python that supports a lot of command-line execution.
So with subprocess you should be able to call another program, code, command, etc., by using a shell command.
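A small sketch of the two call styles described above (both run the "ls -a" command from the answer; the output comes back as bytes):

```python
import subprocess

# Shell-string form: the whole command is one string, run through the shell.
as_string = subprocess.check_output("ls -a", shell=True)

# List form: each part of the command is a separate list element.
as_list = subprocess.check_output(["ls", "-a"])

# Both return the command's standard output as bytes for later use.
print(as_string.decode())
```

Decoding the bytes (as above) gives you a normal string to parse in the rest of your program.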

Alternatively, you would need the software itself to have Python support built in.

Related

missing stdout before subprocess.Popen crash [duplicate]

I am using a 3rd-party python module which is normally called through terminal commands. When called through terminal commands it has a verbose option which prints to terminal in real time.
I then have another python program which calls the 3rd-party program through subprocess. Unfortunately, when called through subprocess the terminal output no longer flushes, and is only returned on completion (the process takes many hours so I would like real-time progress).
I can see the source code of the 3rd-party module and it does not set printing to be flushed such as print('example', flush=True). Is there a way to force the flushing through my module without editing the 3rd-party source code? Furthermore, can I send this output to a log file (again in real time)?
Thanks for any help.
The issue is most likely that many programs behave differently when run interactively in a terminal than when run as part of a pipeline (i.e. called using subprocess). It has very little to do with Python itself and more with the Unix/Linux architecture.
As you have noted, it is possible to force a program to flush stdout even when run in a pipeline, but it requires changes to the source code, manually adding stdout.flush() calls.
Another way to print to the screen is to "trick" the program into thinking it is working with an interactive terminal, using a so-called pseudo-terminal. There is a supporting module for this in the Python standard library, namely pty. Using that, you do not explicitly call subprocess.run (or Popen, etc.). Instead you use the pty.spawn call:
import os
import pty

def prout(fd):
    # Called by pty.spawn whenever the child produces output; the bytes we
    # return are echoed to our terminal, and can also be logged or parsed.
    data = os.read(fd, 1024)
    return data

pty.spawn("./callee.py", prout)
As can be seen, this requires a special function for handling stdout. Above, the output is simply echoed to the terminal, but of course it is possible to do other things with the text as well (such as logging or parsing).
Another way to trick the program is to use an external program called unbuffer. Unbuffer will run your script and make the program think (as with the pty call) that it is called from a terminal. This is arguably simpler if unbuffer is installed, or you are allowed to install it, on your system (it is part of the expect package). All you have to do then is change your subprocess call to
p = subprocess.Popen(["unbuffer", "./callee.py"], stdout=subprocess.PIPE)
and then of course handle the output as usual, e.g. with some code like
for line in p.stdout:
    print(line.decode(), end="")
print(p.communicate()[0].decode(), end="")
or similar. But this last part I think you have already covered, as you seem to be doing something with the output.
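If the third-party program happens to be a Python script itself (as ./callee.py suggests), there is one more stdlib-only option: setting PYTHONUNBUFFERED=1 in the child's environment disables Python's output buffering without touching its source. A minimal sketch, using an inline stand-in for the third-party script:

```python
import os
import subprocess
import sys

# Hypothetical stand-in for the third-party script: a child that prints
# a few lines with short pauses.
child_code = (
    "import time\n"
    "for i in range(3):\n"
    "    print('step', i)\n"
    "    time.sleep(0.1)\n"
)

# PYTHONUNBUFFERED=1 forces the child's stdout to be unbuffered, so each
# line arrives through the pipe as soon as it is printed.
env = dict(os.environ, PYTHONUNBUFFERED="1")
p = subprocess.Popen([sys.executable, "-c", child_code],
                     stdout=subprocess.PIPE, env=env)

with open("progress.log", "wb") as log:
    for line in p.stdout:            # appears in real time
        sys.stdout.write(line.decode())
        log.write(line)              # mirror to a log file, also live
        log.flush()
p.wait()
```

This also answers the log-file part of the question: each line is written to progress.log the moment it arrives.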

Is it possible to launch and record only the input of a bash session?

I am trying to achieve the following:
Have a Python script launch a shell. The user uses that shell for whatever purposes they need. After the shell has been closed, a log of only the input commands is available to the Python script for parsing.
All I have gotten are ways to invoke the shell through popen and similar, but that's not quite what I need.
The easiest way to do this is with pexpect. Moreover, the examples it ships with include a script.py, which
out-of-the-box acts like the UNIX script command (recording both stdin and stdout), but requires only a one-line change to do what you intend:
Change p.logfile = fout to p.logfile_send = fout, and you'll be logging only data sent to the remote process; alternately, you could make it p.logfile_recv = fout and you would log only data received by that process.
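A minimal sketch of that logfile_send idea, assuming pexpect is installed (in the real interactive case you would hand control to the user with child.interact() instead of the scripted sendline() calls used here):

```python
import pexpect

with open("input.log", "wb") as fout:
    child = pexpect.spawn("/bin/bash", ["--norc", "--noprofile"])
    child.logfile_send = fout       # log only data sent *to* the shell
    child.sendline("echo hello")    # stand-in for what a user would type
    child.sendline("exit")
    child.expect(pexpect.EOF)

# input.log now holds only the typed input, not the shell's output.
print(open("input.log").read())
```

After the shell exits, the Python script can open input.log and parse the recorded commands.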

Getting output files from external program using python

I am using Python 2.7.3 on Ubuntu 12.04. I have an external program, say 'xyz', which takes a single file as input and produces two files, say 'abc.dat' and 'gef.dat', as output.
When I used os.system or subprocess.check_output or os.popen, none of them wrote the output files to the working directory.
I need these output files for further calculations.
Plus, I have to keep calling the 'xyz' program 'n' times and keep getting the output 'abc.dat' and 'gef.dat' from it every time. Please help.
Thank you
If you use os.system, subprocess.check_output, or os.popen, you will just get the standard output of your xyz program (whatever it prints to the screen). To see the files in a directory, you can use os.listdir(), and then use those files in your script afterwards. It may also be worth using subprocess.check_call.
There may be other better and more efficient solutions.
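A small sketch of that flow, using an inline stand-in for the 'xyz' program (the real call would name the actual executable and its input file):

```python
import os
import subprocess
import sys

# Stand-in for the external 'xyz' program: it writes the two output files
# named in the question. In reality you would call the real executable.
xyz_code = "open('abc.dat', 'w').write('a')\nopen('gef.dat', 'w').write('g')\n"

# Call it 'n' times and pick up the output files after each run.
for run in range(2):
    subprocess.check_call([sys.executable, "-c", xyz_code])
    dat_files = sorted(f for f in os.listdir(".") if f.endswith(".dat"))
    print(run, dat_files)
```

check_call raises an exception if the program exits with a non-zero status, which is handy when calling it in a loop.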
First, run the program you invoke in the Python script directly and see if it generates those two files.
Assuming it does, the problem is in your Python script. Try using subprocess.Popen and then calling communicate().
Here's an example:
from subprocess import Popen

p = Popen(["xyz"])
p.communicate()
communicate() waits for the process to terminate, so you should be able to get the output files in the code executed after p.communicate().
Thank you for answering my question, but the answer turned out to be this:
import subprocess
subprocess.call("/path/to/software/xyz abc.dat", shell=True)
which gave me the desired the output.
I tried the subprocess-related commands, but they returned the error "No such file or directory". The shell=True worked like a charm.
Thank you all again for taking your time to answer my question.

Writing and reading stdout unbuffered to a file over SSH

I'm using Node to execute a Python script. The Python script SSH's into a server, and then runs a Pig job. I want to be able to get the standard out from the Pig job, and display it in the browser.
I'm using the Pexpect library to make the SSH calls, but it will not print the output of the Pig call until it has completely finished (at least the way I have it written). Any tips on how to restructure it?
child.sendline(command)
child.expect(COMMAND_PROMPT)
print(child.before)
I know I shouldn't be expecting the command prompt (cause that will only show up when the process ends), but I'm not sure what I should be expecting.
Repeating my comment as an answer, since it solved the issue:
If you set child.logfile_read to a writable file-like object (e.g. sys.stdout), Pexpect will forward the output there as it reads it.
import sys

child.logfile_read = sys.stdout  # for a bytes-mode spawn, use sys.stdout.buffer
child.sendline(command)
child.expect(COMMAND_PROMPT)

Getting live output from running unix command in python

I am using the below code for running Unix commands:
cmd = 'ls -l'
(status,output) = commands.getstatusoutput(cmd)
print output
But the problem is that it shows the output only after the command has completed, whereas I want to see the output printed as the execution progresses.
ls -l is just a dummy command; I am using a more complex command in the actual program.
Thanks!!
Since this is homework, here's what to do instead of the full solution:
Use the subprocess.Popen class to call the executable. Note that the constructor takes a named stdout argument, and take a look at subprocess.PIPE.
Read from the Popen object's stdout pipe in a separate thread to avoid deadlocks. See the threading module.
Wait until the subprocess has finished (see Popen.wait).
Wait until the thread has finished processing the output (see Thread.join). Note that this may very well happen after the subprocess has finished.
If you need more help please describe your precise problem.
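The steps above can be sketched as follows (using the dummy ls -l command from the question; the reader thread collects lines as they arrive):

```python
import subprocess
import sys
import threading

# Drain the command's stdout from a separate thread so that neither side
# can block on a full pipe (the deadlock mentioned above).
def reader(pipe, lines):
    for line in pipe:               # lines arrive as the command produces them
        lines.append(line.decode())
        sys.stdout.write(line.decode())

lines = []
p = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
t = threading.Thread(target=reader, args=(p.stdout, lines))
t.start()
p.wait()       # wait for the subprocess to finish
t.join()       # then wait for the reader to finish draining the output
```

Note the order: the subprocess can finish before the reader thread has processed the last of the output, which is why t.join() comes after p.wait().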
Unless there are simpler ways in Python which I'm not aware of, I believe you'll have to dig into the slightly more complex os.fork and os.pipe functions.
Basically, the idea is to fork your process, have the child execute your command, while having its standard output redirected to a pipe which will be read by the parent. You'll easily find examples of this kind of pattern.
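A minimal sketch of that fork/pipe pattern (the child's stdout is redirected into a pipe that the parent reads from; "echo" stands in for the real command):

```python
import os

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: send stdout into the pipe, then exec the command.
    os.close(r)
    os.dup2(w, 1)
    os.close(w)
    os.execvp("echo", ["echo", "hello from child"])
else:
    # Parent: read the child's output from the other end of the pipe.
    os.close(w)
    with os.fdopen(r) as pipe_r:
        captured = pipe_r.read()
    os.waitpid(pid, 0)
    print(captured, end="")
```

Closing the parent's copy of the write end is essential; otherwise the read() never sees end-of-file.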
Most programs will use block buffered output if they are not connected to a tty, so you need to run the program connected to a pty; the easiest way is to use pexpect:
for line in pexpect.spawn('command arg1 arg2'):
    print line
