Here's my code:
import subprocess

fh = open("temp.txt", "w")
process = subprocess.Popen(["test"], shell=True, stdout=fh)
If the process doesn't exit, is it necessary to free the file handle, or will killing the subprocess suffice?
Your file object was opened by your Python code and will not be closed by the subprocess; making sure it is properly closed is your responsibility.
You could either use (not the best option):
fh = open("temp.txt", "w")
process = subprocess.Popen(["test"], shell=True, stdout=fh)
# Closing here is safe: the child holds its own duplicate of the file
# descriptor and can keep writing until it exits.
fh.close()
or (better):
with open("temp.txt", "w") as fh:
process = subprocess.Popen(["test"], shell=True, stdout=fh)
The latter will make sure that your file object is always closed properly, even if the subprocess command fails with an error.
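For completeness, here is a small sketch (assuming you also want the parent to block until the child has finished writing, e.g. before reading temp.txt back) that combines the with block with an explicit wait():
import subprocess

with open("temp.txt", "w") as fh:
    process = subprocess.Popen(["test"], shell=True, stdout=fh)
    process.wait()  # block until the child exits and its output is complete
# fh is closed here; temp.txt contains everything the child wrote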
At the moment, I have code that unzips a file and then zips it back up into another file using Popen:
from subprocess import Popen, PIPE

with open('./file.gz', 'rb') as in_file:
    g_unzip_process = Popen(['gunzip', '-c'], stdin=in_file, stdout=PIPE)
with open('./outfile.gz', 'wb+') as out_file:
    Popen(['gzip'], stdin=g_unzip_process.stdout, stdout=out_file)
Now, I am trying to insert something in between.
It should look like this:
with open('./file.gz', 'rb') as in_file:
    g_unzip_process = Popen(['gunzip', '-c'], stdin=in_file, stdout=PIPE)
    transform_process = Popen(['python', 'transform.py'], stdin=g_unzip_process.stdout, stdout=PIPE)
with open('./outfile.gz', 'wb+') as out_file:
    Popen(['gzip'], stdin=transform_process.stdout, stdout=out_file)
My transform.py code looks like this:
import sys

for line in sys.stdin:
    sys.stdout.write(line)
sys.stdin.close()
sys.stdout.close()
After running it, why is it that my outfile.gz file is empty?
I have also tried reading each line by running:
for line in transform_process.stdout:
    print(line)
How do I make it so that those lines get written to outfile.gz?
I was able to get this working by calling Popen.wait() on the process that writes to outfile.gz.
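For reference, a sketch of the full pipeline with that fix applied (file names taken from the question; closing the parent's copies of the pipe ends is an extra step so that EOF propagates through the chain):
from subprocess import Popen, PIPE

with open('./file.gz', 'rb') as in_file, open('./outfile.gz', 'wb+') as out_file:
    g_unzip_process = Popen(['gunzip', '-c'], stdin=in_file, stdout=PIPE)
    transform_process = Popen(['python', 'transform.py'],
                              stdin=g_unzip_process.stdout, stdout=PIPE)
    gzip_process = Popen(['gzip'], stdin=transform_process.stdout, stdout=out_file)
    # Drop the parent's references to the pipe ends so each child sees EOF
    # when its upstream process exits.
    g_unzip_process.stdout.close()
    transform_process.stdout.close()
    gzip_process.wait()  # block until gzip has flushed everything to outfile.gz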
In Python 2.7 I have the following code inside a certain loop:
file = open("log.txt", 'a+')
last_position = file.tell()
subprocess.Popen(["os_command_producing_error"], stderr = file)
file.seek(last_position)
error = file.read()
print(error) # example of some action with the error
The intention is that the error just emitted on stderr gets, say, printed, while file keeps the whole record.
I am a beginner in Python and I am not clear on what happens with stderr=file.
My problem is that error keeps coming up empty, even though errors keep getting logged in the file.
Could someone explain why?
I have tried closing and reopening the file, and calling file.flush() right after the subprocess line, but the effect is the same.
Edit: The code in the answer below makes sense to me and it seems to work for the author of that post. For me (on Windows) it is not working: it gives an empty err and an empty log.txt. If I run it line by line (e.g. in a debugger) it does work. How can I understand and solve this problem?
Edit: I changed the Popen to call and now it works. I guess call() waits for the subprocess to finish before continuing with the script.
error is empty because you are reading too soon, before the process has had a chance to write anything to the file. Popen() starts a new process; it does not wait for it to finish.
call() is equivalent to Popen(...).wait(), which does wait for the child process to exit; that is why you should see a non-empty error in that case (if the subprocess writes anything to stderr).
#!/usr/bin/env python
import subprocess

with open("log.txt", 'a+') as file:
    last_position = file.tell()
    subprocess.check_call(["os_command_producing_error"], stderr=file)
    # the file offset is shared with the child; rewind to read what it wrote
    file.seek(last_position)
    error = file.read()
    print(error)
You should be careful with mixing buffered (.read()) and unbuffered I/O (subprocess).
You don't need the external file here just to read the error:
#!/usr/bin/env python
import subprocess

error = subprocess.check_output(["os_command_producing_error"],
                                stderr=subprocess.STDOUT)
print(error)
It merges stderr and stdout and returns the output.
If you don't want to capture stdout then to get only stderr, you could use Popen.communicate():
#!/usr/bin/env python
import subprocess
p = subprocess.Popen(["os_command_producing_error"], stderr=subprocess.PIPE)
error = p.communicate()[1]
print(error)
You could both capture stderr and append it to a file:
#!/usr/bin/env python
import subprocess

error = bytearray()
p = subprocess.Popen(["os_command_producing_error"],
                     stderr=subprocess.PIPE, bufsize=1)
with p.stderr as pipe, open('log.txt', 'ab') as file:
    for line in iter(pipe.readline, b''):
        error += line
        file.write(line)
p.wait()
print(error)
See Python: read streaming input from subprocess.communicate().
Try the following code:
file = open("log.txt", 'a+')
sys.stderr = file
last_position = file.tell()
try:
subprocess.call(["os_command_producing_error"])
except:
file.close()
err_file = open("log.txt", 'r')
err_file.seek(last_position)
err = err_file.read()
print err
err_file.close()
sys.stderr maps the standard error stream, just as sys.stdout maps standard output and sys.stdin maps standard input. Assigning file to sys.stderr maps standard error to that file, so all of the standard error output will be written to log.txt.
I am trying to redirect all of stdout to a file, out.txt, but the first command's output displays on the terminal and the rest is fed to the file. I am not sure what's wrong in the piece of code below.
import os
import sys
import subprocess
orig_stdout = sys.stdout
f = file('out.txt', 'w')
sys.stdout = f
os.system("date") #First command
cmd = ["ls", "-al"]
exe_cmd = subprocess.Popen(cmd, stdout=subprocess.PIPE)
output, err = exe_cmd.communicate()
print output #Second command
sys.stdout = orig_stdout
Assigning a file object to sys.stdout redirects Python code that uses sys.stdout, but doesn't redirect code that uses the underlying file descriptor.
os.system("date")
spawns a new process that uses the underlying file descriptor, so it's not redirected.
exe_cmd = subprocess.Popen(cmd, stdout=subprocess.PIPE)
output, err = exe_cmd.communicate()
print output #Second command
spawns a new process with a pipe that is read by the parent process. print uses the parent's sys.stdout, so it is redirected.
A standard way to redirect is to hand the file object to one of the subprocess calls. The child writes directly to the file without parent interaction.
with open('out.txt', 'w') as f:
    cmd = ["ls", "-al"]
    subprocess.call(cmd, stdout=f)
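If you also need the os.system-style calls captured, the descriptor-level route is to redirect file descriptor 1 itself. This is not from the original answer, just a sketch of that approach using os.dup2:
import os
import sys

with open('out.txt', 'w') as f:
    sys.stdout.flush()        # flush Python-level buffers before touching fd 1
    saved_fd = os.dup(1)      # keep a copy of the original stdout descriptor
    os.dup2(f.fileno(), 1)    # point fd 1 at the file
    try:
        os.system("date")     # the child inherits fd 1, so this lands in out.txt
    finally:
        os.dup2(saved_fd, 1)  # restore the original stdout
        os.close(saved_fd)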
I'm trying to work with pipes on Python 3.3/Linux as in https://stackoverflow.com/a/6193800/2375044, but if I use the following, the program hangs:
import os
readEnd, writeEnd = os.pipe()
readFile = os.fdopen(readEnd)
firstLine = readFile.readline()
Changing os.fdopen(readEnd) to os.fdopen(readEnd, 'r+'), I get io.UnsupportedOperation: File or stream is not seekable. I need a readline() function over the pipe, but I don't know what else to do.
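For what it's worth, a read on a pipe blocks until data arrives or the write end is closed, which would explain the hang when nothing has been written yet. A minimal single-process sketch (with hypothetical data) that gets readline() working:
import os

readEnd, writeEnd = os.pipe()
readFile = os.fdopen(readEnd)    # default 'r' mode; pipes are not seekable, so 'r+' fails
writeFile = os.fdopen(writeEnd, 'w')

writeFile.write("first line\n")
writeFile.close()                # close the write end so the reader can see EOF

firstLine = readFile.readline()
print(firstLine)                 # -> first line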
I want to append the STDOUT of subprocess.call() to an existing file. My code below overwrites the file:
import subprocess
from subprocess import STDOUT

log_file = open(log_file_path, 'r+')
cmd = r'echo "some info for the log file"'
subprocess.call(cmd, shell=True, stdout=log_file, stderr=STDOUT)
log_file.close()
I'm looking for the equivalent of >> in subprocess.call() or subprocess.Popen(). It's driving me crazy trying to find it...
UPDATE:
Following the answers so far I've updated my code to
import subprocess
log_file = open('test_log_file.log', 'a+')
cmd = r'echo "some info for the log file\n"'
subprocess.call(cmd, shell=True, stdout=log_file, stderr=subprocess.STDOUT)
log_file.close()
I'm running this code from the command line in Windows:
C:\users\aidan>test_subprocess.py
This adds the text to the log file. When I run the script again, nothing new is added. It still seems to be overwriting the file...
Use the 'a' append mode instead:
log_file = open(log_file_path, 'a+')
If you still see previous content overwritten, perhaps Windows needs you to explicitly seek to the end of the file; open the file in 'r+' mode and seek to the end:
import os
log_file = open(log_file_path, 'r+')
log_file.seek(0, os.SEEK_END)
Modify how you open log_file_path. You are opening the file for reading and writing with 'r+'. Use the 'a' append mode instead of 'r+':
log_file = open(log_file_path, 'a+')
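Putting it together with the question's own snippet, a sketch of the appending version (same echo command as above, just opened in append mode):
import subprocess

log_file = open('test_log_file.log', 'a')  # 'a' appends on every run instead of truncating
cmd = r'echo "some info for the log file"'
subprocess.call(cmd, shell=True, stdout=log_file, stderr=subprocess.STDOUT)
log_file.close()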