I'm trying to work with pipes on Python 3.3/Linux, following https://stackoverflow.com/a/6193800/2375044, but if I use the following, the program "hangs":
import os

readEnd, writeEnd = os.pipe()
readFile = os.fdopen(readEnd)
firstLine = readFile.readline()  # blocks here: nothing has been written, and writeEnd is still open
Changing os.fdopen(readEnd) to os.fdopen(readEnd, 'r+'), I get
io.UnsupportedOperation: File or stream is not seekable.
I need a readline() function over the pipe, but I don't know what else to do.
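For reference, readline() blocks until data arrives or every write end of the pipe is closed, and this process still holds writeEnd open, hence the hang. A minimal single-process sketch that unblocks it:

import os

readEnd, writeEnd = os.pipe()
os.write(writeEnd, b"first line\n")  # give the reader something to read
os.close(writeEnd)                   # closing the write end produces end-of-file

readFile = os.fdopen(readEnd)
print(readFile.readline())           # 'first line\n', no hang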
I have a .exe programme that produces real-time data. I want to extract the output while the programme runs, in real time; however, it's my first time trying this out, so I wanted help in approaching it.
I have opened it with the following:
import subprocess

cmd = r'/Applications/StockSpy Realtime Stocks Quote.app/Contents/MacOS/StockSpy Realtime Stocks Quote'

with open('output.txt', 'wb') as f:
    subprocess.check_call(cmd, stdout=f)

# to read line by line
with open('output.txt') as f:
    for line in f:
        print(line)
with the aim of storing the output. However, it does not save any of the output; I get a blank text file.
I managed to save the output by following this code:
import os
from subprocess import STDOUT, check_call as x

with open(os.devnull, 'rb') as DEVNULL, open('output.txt', 'wb') as f:
    x(cmd, stdin=DEVNULL, stdout=f, stderr=STDOUT)
from How do I get all of the output from my .exe using subprocess and Popen?
What you are trying to do can be achieved in Python with something like this:
import subprocess

with subprocess.Popen(['/path/to/executable'], stdout=subprocess.PIPE) as proc:
    data = proc.stdout.read()  # data now holds what would normally
                               # be printed to stdout
    # do something with data...
In Python 2.7 I have the following code inside a certain loop:
file = open("log.txt", 'a+')
last_position = file.tell()
subprocess.Popen(["os_command_producing_error"], stderr = file)
file.seek(last_position)
error = file.read()
print(error) # example of some action with the error
The intention is that the error just written to stderr gets printed (for example), while file keeps the whole record.
I am a beginner in Python and I am not clear on what happens with stderr = file.
My problem is that error keeps being empty, even though errors keep getting logged in the file.
Could someone explain why?
I have tried closing and reopening the file, and calling file.flush() right after the subprocess line, but the effect is the same.
Edit: The code in the answer below makes sense to me and it seems to work for the author of that post. For me (on Windows) it is not working: it gives an empty err and an empty log.txt. If I run it line by line (e.g. while debugging) it does work. How can I understand and solve this problem?
Edit: I replaced Popen with call and now it works. I guess call waits for the subprocess to finish before continuing with the script.
error is empty because you are reading too soon, before the process has had a chance to write anything to the file. Popen() starts a new process; it does not wait for it to finish.
call() is equivalent to Popen(...).wait(), which does wait for the child process to exit; that is why you should see a non-empty error in that case (if the subprocess writes anything to stderr at all).
#!/usr/bin/env python
import os
import subprocess

with open("log.txt", 'a+') as file:
    file.seek(0, os.SEEK_END)  # appended output always lands at the end
    last_position = file.tell()
    subprocess.check_call(["os_command_producing_error"], stderr=file)
    file.seek(last_position)   # back to where the child started writing
    error = file.read()
    print(error)
You should be careful with mixing buffered (.read()) and unbuffered I/O (subprocess).
You don't need the external file here, to read the error:
#!/usr/bin/env python
import subprocess

error = subprocess.check_output(["os_command_producing_error"],
                                stderr=subprocess.STDOUT)
print(error)
It merges stderr into stdout and returns the combined output.
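One caveat worth noting: check_output() raises CalledProcessError when the command exits with a non-zero status, which a command producing errors may well do; the captured output is then available on the exception. A sketch:

import subprocess

try:
    error = subprocess.check_output(["os_command_producing_error"],
                                    stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
    error = e.output  # merged stdout/stderr captured before the failure
print(error)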
If you don't want to capture stdout then to get only stderr, you could use Popen.communicate():
#!/usr/bin/env python
import subprocess
p = subprocess.Popen(["os_command_producing_error"], stderr=subprocess.PIPE)
error = p.communicate()[1]
print(error)
You could both capture stderr and append it to a file:
#!/usr/bin/env python
import subprocess

error = bytearray()
p = subprocess.Popen(["os_command_producing_error"],
                     stderr=subprocess.PIPE, bufsize=1)
with p.stderr as pipe, open('log.txt', 'ab') as file:
    for line in iter(pipe.readline, b''):
        error += line
        file.write(line)
p.wait()
print(error)
See Python: read streaming input from subprocess.communicate().
Try the following code:
import subprocess
import sys

file = open("log.txt", 'a+')
sys.stderr = file
last_position = file.tell()
try:
    subprocess.call(["os_command_producing_error"])
except:
    file.close()
    err_file = open("log.txt", 'r')
    err_file.seek(last_position)
    err = err_file.read()
    print err
    err_file.close()
sys.stderr maps the standard error stream, just as sys.stdout maps standard output and sys.stdin maps standard input.
Assigning file to sys.stderr maps standard error to that file, so everything written to standard error goes into log.txt.
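One caveat worth adding: reassigning sys.stderr only affects writes made by Python code in the current process; a child process inherits the real stderr file descriptor and never consults sys.stderr. To route the child's errors into the file, pass it explicitly, as in this sketch (same placeholder command):

import subprocess
import sys

file = open("log.txt", 'a+')
sys.stderr = file
# the child does not see the sys.stderr reassignment, so pass the
# file object explicitly to redirect its standard error stream
subprocess.call(["os_command_producing_error"], stderr=file)
file.close()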
I'm trying to make multiple programs communicate using named pipes in Python.
Here's how I'm proceeding:
import os

os.mkfifo("/tmp/p")
file = os.open("/tmp/p", os.O_RDONLY)

while True:
    line = os.read(file, 255)
    print("'%s'" % line)
Then, after starting it, I send some simple data through the pipe:
echo "test" > /tmp/p
I expected test\n to show up, and then Python to block at os.read() again.
What actually happens is that Python prints 'test\n' and then prints '' (the empty string) endlessly.
Why is that happening, and what can I do about it?
From http://man7.org/linux/man-pages/man7/pipe.7.html:
If all file descriptors referring to the write end of a pipe have been closed, then an attempt to read(2) from the pipe will see end-of-file
From https://docs.python.org/2/library/os.html#os.read:
If the end of the file referred to by fd has been reached, an empty string is returned.
So, you're closing the write end of the pipe (when your echo command finishes) and Python is reporting that as end-of-file.
If you want to wait for another process to open the FIFO, then you could detect when read() returns end-of-file, close the FIFO, and open it again. The open should block until a new writer comes along.
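A minimal sketch of that reopen-on-end-of-file loop, reusing the question's /tmp/p path:

import os

while True:
    fd = os.open("/tmp/p", os.O_RDONLY)  # blocks until a writer opens the FIFO
    while True:
        data = os.read(fd, 255)
        if not data:                     # empty read: all writers have closed
            break
        print("'%s'" % data)
    os.close(fd)                         # close and reopen for the next writer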
As an alternative to user9876's answer, you can open your pipe for writing right after creating it; this allows it to stay open for writing at all times.
Here's an example contextmanager for working with pipes:
import contextlib
import os

@contextlib.contextmanager
def pipe(path):
    try:
        os.mkfifo(path)
    except FileExistsError:
        pass
    try:
        # Grab a dummy write end first so the reader below never sees
        # end-of-file. os.O_RDWR is used because a plain 'w' open of a
        # FIFO blocks until a reader appears, and Python rejects a
        # buffered 'r+' on pipes as unseekable.
        dummy = os.open(path, os.O_RDWR)
        try:
            with open(path, 'r') as reader:
                yield reader
        finally:
            os.close(dummy)
    finally:
        os.unlink(path)
And here is how you use it:
with pipe('myfile') as reader:
    while True:
        print(reader.readline(), end='')
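To try it, start the reader and then write into the FIFO from another shell (myfile is the path from the example above):
echo test > myfile
Because the dummy write end stays open, readline() simply blocks between writers instead of returning empty strings at end-of-file.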
I'm trying to see if a file I input using sys.stdin has a .gz file extension. Usually I just use a file path directly, but when I use sys.stdin it is automatically opened as a reading object.
Is there any way to get the file name from stdin without doing os.getcwd() or getting a full file path?
I was trying sys.stdin.endswith('.gz') but it doesn't work (obviously, because sys.stdin is not a string), but is there anything I can do with the sys.stdin object to just grab the extension before I proceed to process it?
import sys

file = sys.stdin
if file.endswith('.gz'):
    print 'yup'
Try it like this:
import sys

file = sys.stdin
if file.name.endswith('.gz'):
    print 'yup'
Update:
file = raw_input("Enter filename:")
sys.stdin = open(file, 'r')
if sys.stdin.name.endswith('.gz'):
    print 'yup'
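A caveat that may matter here: when input arrives via shell redirection or a pipe, sys.stdin.name is just the literal string '<stdin>', so there is no real filename to inspect. A common alternative is to accept the path as a command-line argument and open it yourself; a sketch, with the gzip handling as an illustrative assumption:

import gzip
import sys

path = sys.argv[1]  # e.g. python script.py data.gz
opener = gzip.open if path.endswith('.gz') else open
with opener(path) as f:
    for line in f:
        print line,  # trailing comma: line already ends with a newline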
Here's my code:
import subprocess

fh = open("temp.txt", "w")
process = subprocess.Popen(["test"], shell=True, stdout=fh)
If the process doesn't exit, is it necessary to free the file handle, or will killing the subprocess suffice?
Your file object was opened by your Python code and will not be closed by the subprocess; making sure it is closed properly is your responsibility.
You could either use (not the best option):
fh = open("temp.txt", "w")
process = subprocess.Popen(["test"], shell=True, stdout=fh)
fh.close()
or (better):
with open("temp.txt", "w") as fh:
process = subprocess.Popen(["test"], shell=True, stdout=fh)
The latter will make sure that your file object is always closed properly, even if the subprocess command fails with some error.
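As a usage note, the child process receives its own duplicate of the file descriptor, so closing fh in the parent does not disturb the child's output. A sketch that also reaps the process ("test" is the asker's placeholder command):

import subprocess

with open("temp.txt", "w") as fh:
    process = subprocess.Popen(["test"], shell=True, stdout=fh)
# fh is closed here, but the child still holds its own copy of the descriptor
process.wait()  # wait for the child so it does not linger as a zombie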