In Python 2.7 I have the following code inside a certain loop:
import subprocess

file = open("log.txt", 'a+')
last_position = file.tell()
subprocess.Popen(["os_command_producing_error"], stderr=file)
file.seek(last_position)
error = file.read()
print(error)  # example of some action with the error
The intention is that the error that was just written to stderr gets, say, printed, while the file keeps the whole record.
I am a beginner in Python and I am not clear on what happens in stderr = file.
My problem is that error keeps being empty, even though errors keep getting logged in the file.
Could someone explain why?
I have tried closing and reopening the file, and calling file.flush() right after the subprocess line, but the effect is the same.
Edit: The code in the answer below makes sense to me and it seems to work for the author of that post. For me (on Windows) it is not working: it gives an empty err and an empty log.txt. If I run it line by line (e.g. while debugging) it does work. How can I understand and solve this problem?
Edit: I replaced Popen with call and now it works. I guess call waits for the subprocess to finish before continuing with the script.
error is empty because you are reading too soon, before the process has had a chance to write anything to the file. Popen() starts a new process; it does not wait for it to finish.
call() is equivalent to Popen().wait(), which does wait for the child process to exit; that is why you should see a non-empty error in that case (if the subprocess writes anything to stderr at all).
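In other words, a rough sketch of that relationship, using the placeholder command from the question:

import subprocess

# call() is roughly: start the process, then wait for it to exit
p = subprocess.Popen(["os_command_producing_error"])
returncode = p.wait()  # blocks until the child finishes

With that in mind, the fix is to use a call that waits, such as check_call(), which additionally raises an error on a nonzero exit status: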
#!/usr/bin/env python
import subprocess

with open("log.txt", 'a+') as file:
    subprocess.check_call(["os_command_producing_error"], stderr=file)
    error = file.read()
print(error)
You should be careful with mixing buffered (.read()) and unbuffered I/O (subprocess).
You don't need the external file here just to read the error:
#!/usr/bin/env python
import subprocess

error = subprocess.check_output(["os_command_producing_error"],
                                stderr=subprocess.STDOUT)
print(error)
stderr=subprocess.STDOUT merges stderr into stdout, and check_output() returns the combined output.
If you don't want to capture stdout and only need stderr, you could use Popen.communicate():
#!/usr/bin/env python
import subprocess
p = subprocess.Popen(["os_command_producing_error"], stderr=subprocess.PIPE)
error = p.communicate()[1]
print(error)
You could both capture stderr and append it to a file:
#!/usr/bin/env python
import subprocess

error = bytearray()
p = subprocess.Popen(["os_command_producing_error"],
                     stderr=subprocess.PIPE, bufsize=1)
with p.stderr as pipe, open('log.txt', 'ab') as file:
    for line in iter(pipe.readline, b''):
        error += line
        file.write(line)
p.wait()
print(error)
See Python: read streaming input from subprocess.communicate().
Try the following code:
import subprocess
import sys

file = open("log.txt", 'a+')
sys.stderr = file
last_position = file.tell()
try:
    # pass the file explicitly so the child's stderr goes to log.txt;
    # check_call() raises CalledProcessError on a nonzero exit status
    subprocess.check_call(["os_command_producing_error"], stderr=file)
except subprocess.CalledProcessError:
    file.close()
    err_file = open("log.txt", 'r')
    err_file.seek(last_position)
    err = err_file.read()
    print(err)
    err_file.close()
sys.stderr refers to the standard error stream, just like sys.stdout (standard output) and sys.stdin (standard input).
Assigning the file to sys.stderr redirects error output written from Python itself to log.txt. Note that a child process writes to the underlying file descriptor, which is why the stderr=file argument is still needed for the subprocess's errors to end up in the file.
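For illustration, a minimal sketch (not from the original answer) of what the assignment affects at the Python level:

import sys

with open("log.txt", "a+") as f:
    old_stderr = sys.stderr
    sys.stderr = f
    sys.stderr.write("this line goes to log.txt\n")  # Python-level write, redirected
    sys.stderr = old_stderr  # restore, so later errors show on the terminal again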
Related
I am running a Python script using subprocess and want to save the output to a file as well as show live logs on the terminal.
I have written the code below; it saves the logs to a file but does not show the live script execution logs on the terminal.
import subprocess
import sys

TCID = sys.argv[1]
if TCID == "5_2_5_3":
    output = subprocess.check_output([sys.executable, './script.py'])
    with open('scriptout.log', 'wb') as outfile:
        outfile.write(output)
I think this will fix your issue
import subprocess

outputfile = open('scriptout.log', 'a')
process = subprocess.Popen(["ping", "127.0.0.1"],
                           stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
while True:
    output = process.stdout.readline()
    if output == b'' and process.poll() is not None:
        break
    if output:
        out = output.decode()
        outputfile.write(out)
        print(out, end="")
outputfile.close()
I also tried:
import subprocess

output = subprocess.check_output(["ping", "127.0.0.1"])
with open('scriptout.log', 'wb') as outfile:
    print(output)
    outfile.write(output)
but it prints the output only after the command execution ends. I also want to try the logging module, but I don't know how to use it, sorry :(
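For reference, here is a minimal sketch of the logging-module approach (the logger name and the ping command are just placeholders): a StreamHandler prints each line to the terminal as it arrives, and a FileHandler keeps the copy on disk.

import logging
import subprocess

logger = logging.getLogger("script")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())               # live output on the terminal
logger.addHandler(logging.FileHandler("scriptout.log"))  # persistent copy on disk

process = subprocess.Popen(["ping", "127.0.0.1"],
                           stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in iter(process.stdout.readline, b''):
    logger.info(line.decode().rstrip())  # log each line as soon as it is read
process.wait()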
I have an issue with my Raspberry Pi that starts up a Python script. How do I save the printed output to a file when it is running on boot? I found the script below on the internet, but it doesn't seem to write the printed text; it creates the file but the content is empty.
sudo python /home/pi/python.py > /home/pi/output.log
It does write its output to the file, but you cannot see it until the Python file has finished executing, because the buffer is never flushed.
If you redirect the output to a file within your Python script instead, you can periodically call flush in your code to push the output through to the file as and when you wish, something like this:
import sys
import time

outputFile = "output.txt"
with open(outputFile, "w+") as sys.stdout:
    while True:
        print("some output")
        sys.stdout.flush()  # force buffer content out to file
        time.sleep(5)       # wait 5 seconds
If you want to set the output back to the terminal afterwards, save a reference to the original stdout first, like this:
import sys
import time

outputFile = "output.txt"
original_stdout = sys.stdout
with open(outputFile, "w+") as sys.stdout:
    print("some output in file")
    sys.stdout.flush()
    time.sleep(5)
sys.stdout = original_stdout
print("back in terminal")
I'm trying to make multiple programs communicate using named pipes under Python.
Here's how I'm proceeding :
import os

os.mkfifo("/tmp/p")
file = os.open("/tmp/p", os.O_RDONLY)

while True:
    line = os.read(file, 255)
    print("'%s'" % line)
Then, after starting it, I send some simple data through the pipe:
echo "test" > /tmp/p
I expected test\n to show up here and the Python code to block at os.read() again.
What actually happens is that Python prints 'test\n' and then prints '' (an empty string) infinitely.
Why is that happening, and what can I do about it?
From http://man7.org/linux/man-pages/man7/pipe.7.html :
If all file descriptors referring to the write end of a pipe have been
closed, then an attempt to read(2) from the pipe will see end-of-file
From https://docs.python.org/2/library/os.html#os.read :
If the end of the file referred to by fd has been reached, an empty string is returned.
So, you're closing the write end of the pipe (when your echo command finishes) and Python is reporting that as end-of-file.
If you want to wait for another process to open the FIFO, then you could detect when read() returns end-of-file, close the FIFO, and open it again. The open should block until a new writer comes along.
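A minimal sketch of that reopen-on-EOF loop, using the path from the question:

import os

while True:
    fd = os.open("/tmp/p", os.O_RDONLY)  # blocks until a writer opens the FIFO
    while True:
        data = os.read(fd, 255)
        if not data:                     # empty read: all writers have closed
            break
        print("'%s'" % data)
    os.close(fd)                         # close and reopen to wait for the next writer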
As an alternative to user9876's answer, you can open your pipe for writing right after creating it; this keeps a write end open at all times, so readers never see end-of-file.
Here's an example context manager for working with pipes:
import contextlib
import os

@contextlib.contextmanager
def pipe(path):
    try:
        os.mkfifo(path)
    except FileExistsError:
        pass
    try:
        # dummy writer: 'r+' (O_RDWR) does not block when opening a FIFO on
        # Linux, unlike 'w', and keeps a write end open so readers never see EOF
        with open(path, 'r+'):
            with open(path, 'r') as reader:
                yield reader
    finally:
        os.unlink(path)
And here is how you use it:
with pipe('myfile') as reader:
    while True:
        print(reader.readline(), end='')
I am trying to redirect all stdout to a file, out.txt. But the first command's output is displayed on the terminal and the rest is fed to the file. I am not sure what's wrong in the piece of code below.
import os
import sys
import subprocess

orig_stdout = sys.stdout
f = open('out.txt', 'w')
sys.stdout = f

os.system("date")  # first command

cmd = ["ls", "-al"]
exe_cmd = subprocess.Popen(cmd, stdout=subprocess.PIPE)
output, err = exe_cmd.communicate()
print output  # second command

sys.stdout = orig_stdout
Assigning a file object to sys.stdout redirects Python code that uses sys.stdout, but it doesn't redirect code that uses the underlying file descriptor.
os.system("date")
spawns a new process that writes to the underlying file descriptor, so it's not redirected.
exe_cmd = subprocess.Popen(cmd, stdout=subprocess.PIPE)
output, err = exe_cmd.communicate()
print output #Second command
spawns a new process with a pipe that is read by the parent process. print uses the parent's sys.stdout, so it is redirected.
A standard way to redirect is to hand the file object to one of the subprocess calls. The child writes directly to the file without parent interaction.
with open('out.txt', 'w') as f:
    cmd = ["ls", "-al"]
    subprocess.call(cmd, stdout=f)
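If the os.system("date") output also needs to go to the file, one option (a sketch, not part of the original answer) is to redirect at the file-descriptor level with os.dup2, which affects spawned processes as well:

import os
import sys
import subprocess

with open('out.txt', 'w') as f:
    sys.stdout.flush()      # flush anything buffered at the Python level first
    saved_fd = os.dup(1)    # keep a copy of the original stdout descriptor
    os.dup2(f.fileno(), 1)  # point fd 1 at the file at the OS level
    os.system("date")       # the child inherits fd 1, so this is redirected too
    subprocess.call(["ls", "-al"])
    os.dup2(saved_fd, 1)    # restore the original stdout
    os.close(saved_fd)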
Here's my code:
import subprocess

fh = open("temp.txt", "w")
process = subprocess.Popen(["test"], shell=True, stdout=fh)
If the process doesn't exit, is it necessary to free the file handle, or will killing the subprocess suffice?
Your file object was opened by your Python code and will not be closed by the subprocess; making sure it is properly closed is your responsibility.
You could either use (not the best option):
fh = open("temp.txt", "w")
process = subprocess.Popen(["test"], shell=True, stdout=fh)
fh.close()
or (better):
with open("temp.txt", "w") as fh:
process = subprocess.Popen(["test"], shell=True, stdout=fh)
The latter will make sure that your file object is always closed properly, even if the subprocess command fails with some error.
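If you want the parent to block until the command finishes (the child keeps its own duplicated descriptor either way, so this only affects the parent's bookkeeping), you can wait inside the block:

import subprocess

with open("temp.txt", "w") as fh:
    process = subprocess.Popen(["test"], shell=True, stdout=fh)
    process.wait()  # block until the child exits before closing the file object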