with open('pf_d.txt', 'w+') as outputfile:
    rc = subprocess.call([pf, 'disable'], shell=True, stdout=outputfile, stderr=outputfile)
    print outputfile.readlines()
outputfile.readlines() is returning [] even though the file is written with some data. Something is wrong here.
It looks like subprocess.call() is not blocking and the file is being written after the read. How do I solve this?
The with open('pf_d.txt', 'w+') as outputfile: construct is called a context manager. In this case, the resource is a file, represented by the handle/file object outputfile. The context manager makes sure that the file is closed when the context is left. Closing implies flushing, so re-opening the file after that will show you all of its contents. So, one option to solve your issue is to read the file after it has been closed:
with open('pf_d.txt', 'w+') as outputfile:
    rc = subprocess.call(...)

with open('pf_d.txt', 'r') as outputfile:
    print outputfile.readlines()
Another option is to re-use the same file object, after flushing and seeking:
with open('pf_d.txt', 'w+') as outputfile:
    rc = subprocess.call(...)
    outputfile.flush()
    outputfile.seek(0)
    print outputfile.readlines()
A file handle always carries a file pointer indicating the current position in the file. write() moves this pointer forward to the end of the written data. seek(0) moves it back to the beginning, so that a subsequent read() starts from the beginning of the file.
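As a small illustration (the file name is arbitrary), you can watch the position move with tell():

with open('demo.txt', 'w+') as f:
    f.write('Hello, world\n')
    print(f.tell())       # position is now at the end of the written data
    print(f.readlines())  # [] -- nothing left to read from here
    f.seek(0)             # back to the beginning (this also flushes pending writes)
    print(f.readlines())  # ['Hello, world\n']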
Related
This is a really basic question, but I want to know if it's possible to open a file and keep writing to it while the actual file gets updated in real time.
Basically I want to be able to do this and have 'File' act kinda like sys.stdout where you don't have to close the file for the output to be visible.
File = open("File.txt", "w")
File.write("Hello")
All you have to do is use the flush function:
File = open("File.txt", "w")
File.write("Hello")
File.flush()
This will have the output written to the file without closing it.
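As a rough sketch (the file name and loop are just for illustration), flushing after each write makes the data visible to other programs, e.g. a tail -f File.txt running in another terminal:

import time

File = open("File.txt", "w")
for i in range(5):
    File.write("line %d\n" % i)
    File.flush()     # push the buffered data out so other readers see it immediately
    time.sleep(1)    # simulate doing other work between writes
File.close()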
I am trying to copy the content of one file to another.
The script successfully copies the content to the output file, but when I then try to read() the output file to print its contents, it comes back blank.
from sys import argv
script, inputFile, outputFile = argv
inFile = open(inputFile)
inData = inFile.read()
outFile = open(outputFile, 'w+')
outFile.write(inData)
print("The new data is:\n",outFile.read())
inFile.close()
outFile.close()
After the write operation the file pointer is at the end of file so you'd need to reset it to the start. Also, the filesystem IO buffers may not have been flushed at that point (you haven't closed the file yet)...
Simple solution: close the outFile and reopen it for reading.
As a side note: always make sure you DO close your files whatever happens, especially when writing, or else you may end up with corrupted data. The simplest way is the with statement:
with open(...) as infile, open(...) as outfile:
    outfile.write(infile.read())
# at this point both files have been automagically closed
You forgot to return to the beginning of outFile after writing to it.
So inserting outFile.seek(0) should fix your issues.
After you are done writing, the file pointer is at the end of the file, so there is no data left to read from there. Reposition the pointer to the start of the file.
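Putting those suggestions together, a corrected version of the script could look like this (using the with statement and seek(0); closing and reopening the output file would work just as well):

from sys import argv

script, inputFile, outputFile = argv

with open(inputFile) as inFile, open(outputFile, 'w+') as outFile:
    outFile.write(inFile.read())
    outFile.seek(0)   # go back to the start before reading what was just written
    print("The new data is:\n", outFile.read())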
I know python does a lot of stuff automatically.
So if we don't close the file manually, Python can automatically close it for us.
But I have observed that just closing the file (close()) does not flush the buffer (flush()).
So is this a particular case that Python does not handle automatically?
Here, I have an example:
# no_flush_on_close.py
def write_file(filename):
    f = open(filename, 'w')
    f.write('Hello, world\n')
    f.close()

write_file('no_flush_on_close.txt')
Running this script will create the text file with the line "Hello, world" in it. It tells me that flush() was called on close(). Now, comment out the f.close() line, delete the text file and try again--same result.
The only case where this does not work is when an exception (error) is raised; then the file will not be flushed. To deal with that situation, use the context manager form of open() (AKA the with statement):
def write_file(filename):
    with open(filename, 'w') as f:
        f.write('Hello, world\n')
        raise RuntimeError('Will it flush?')  # Yes, it will flush and close
The context manager ensures that the file is properly flushed and closed, so it is a good practice to use it.
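You can check this yourself by catching the exception and reading the file back; the line is there even though the function raised (a small sketch, reusing the write_file() above with an arbitrary file name):

try:
    write_file('will_it_flush.txt')   # the with-based version that raises RuntimeError
except RuntimeError:
    pass

with open('will_it_flush.txt') as f:
    print(f.read())                   # Hello, world -- flushed and closed despite the error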
I am basically trying to read a number from a file, convert it to an int, add one to it, then rewrite the new number back to the file. However, every time I run this code and open the .txt file, it is blank. Any help would be appreciated, thanks! I am a Python newb.
f=open('commentcount.txt','r')
counts = f.readline()
f.close
counts1 = int(counts)
counts1 = counts1 + 1
print(counts1)
f2 = open('commentcount.txt','w')  # <-- (the file overwriting seems to happen here?)
f2.write(str(counts1))
Having empty files
This issue is caused by you failing to close the file descriptor. You have f.close but it should be f.close() (a function call). And you also need an f2.close() in the end.
Without the close it takes a while until the contents of the buffer arrive in the file. And it is a good practice to close file descriptors as soon as they are not used.
As a side note, you can use the following syntactic sugar to ensure that the file descriptor is closed as soon as possible:
with open(file, mode) as f:
    do_something_with(f)
Now, regarding the overwriting part:
Writing to file without overwriting the previous content.
Short answer: You don't open the file in the proper mode. Use the append mode ("a").
Long answer:
It is the intended behavior. Read the following:
>>> help(open)
Help on built-in function open in module __builtin__:

open(...)
    open(name[, mode[, buffering]]) -> file object

    Open a file using the file() type, returns a file object. This is the
    preferred way to open a file. See file.__doc__ for further information.
>>> print file.__doc__
file(name[, mode[, buffering]]) -> file object
Open a file. The mode can be 'r', 'w' or 'a' for reading (default),
writing or appending. The file will be created if it doesn't exist
when opened for writing or appending; it will be truncated when
opened for writing. Add a 'b' to the mode for binary files.
Add a '+' to the mode to allow simultaneous reading and writing.
If the buffering argument is given, 0 means unbuffered, 1 means line
buffered, and larger numbers specify the buffer size. The preferred way
to open a file is with the builtin open() function.
Add a 'U' to mode to open the file for input with universal newline
support. Any line ending in the input file will be seen as a '\n'
in Python. Also, a file so opened gains the attribute 'newlines';
the value for this attribute is one of None (no newline read yet),
'\r', '\n', '\r\n' or a tuple containing all the newline types seen.
So, reading the manuals shows that if you want the content to be kept you should open in append mode:
open(file, "a")
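To make the difference concrete, here is a tiny sketch (the file name is arbitrary) contrasting 'w', which truncates the existing content, with 'a', which keeps it:

with open('demo.txt', 'w') as f:
    f.write('first\n')

with open('demo.txt', 'w') as f:    # 'w' truncates: 'first' is gone
    f.write('second\n')

with open('demo.txt', 'a') as f:    # 'a' appends after the existing content
    f.write('third\n')

with open('demo.txt') as f:
    print(f.read())                 # prints 'second' then 'third'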
You should use the with statement. This ensures that the file descriptor is closed no matter what:
with open('file', 'r') as fd:
    value = int(fd.read())

with open('file', 'w') as fd:
    fd.write(str(value + 1))
You never close the file. If you don't properly close the file, the OS might not commit the changes. To avoid this problem it is recommended that you use Python's with statement to open files, as it will close them for you once you are done with the file.
with open('my_file.txt', 'a') as f:
    do_stuff()
Python open file parameters:
w:
Opens a file for writing only. Overwrites the file if the file exists. If the file does not exist, creates a new file for writing.
You can use a (append):
Opens a file for appending. The file pointer is at the end of the file if the file exists, that is, the file is in append mode. If the file does not exist, it creates a new file for writing.
For more information you can read the documentation on file modes.
One more piece of advice is to use with:
with open("x.txt","a") as f:
data = f.read()
............
For example:
with open(r'c:\commentcount.txt', 'r') as fp:
    counts = fp.readline()

counts = str(int(counts) + 1)

with open(r'c:\commentcount.txt', 'w') as fp:
    fp.write(counts)
Note this will only work if you already have a file named commentcount.txt with an int on its first line, since 'r' does not create a new file. Also, it keeps a single counter; it overwrites the number rather than appending a new one.
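If you also want the script to cope with a missing or empty file, one possible variant (just a sketch; starting the counter at 0 is my assumption) is:

import os

path = 'commentcount.txt'

counts = 0
if os.path.exists(path):            # 'r' would fail if the file does not exist yet
    with open(path, 'r') as fp:
        line = fp.readline().strip()
        if line:
            counts = int(line)

with open(path, 'w') as fp:         # 'w' overwrites, which is what we want for a counter
    fp.write(str(counts + 1))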
I have a python script that runs a subprocess to get some data and then process it. What I'm trying to achieve is to have the data written to a file, and then use the data from the file to do the processing (the reason is that the subprocess is slow, but can change based on the date, time, and parameters I use, and I need to run the script frequently).
I've tried various methods, including opening the file as w+ and trying to seek to the beginning after the write is done, but nothing seems to work - the file is written, but when I try to read back from it (using file.readline()) I get EOF back.
This is what I'm essentially trying to accomplish:
myFile = open(fileName, "w")
p = subprocess.Popen(args, stdout=myFile)
myFile.flush() # force the file to disk
os.fsync(myFile) # ..
myFile.close()
myFile = open(fileName, "r")
while myFile.readline():
    pass # do stuff
myFile.close()
But even though the file is correctly written (after the script runs, I can see the contents of the file), readline never returns a valid line. Like I said, I also tried using the same file object and doing seek(0) on it, with no luck. This only worked when opening the file as r+, which fails when the file doesn't already exist.
Any help would be appreciated. Also, if there's a cleaner way to do this, I'm open to it :)
PS: I realize I can Popen with stdout going to a pipe, read from the pipe and write the data to the file line by line as I go, but I'm trying to separate the creation of the data file from the reading.
The subprocess almost certainly isn't finishing before you try to read from the file. In fact, it's likely that the subprocess isn't even writing anything before you try to read from the file. For true separation you're going to have to have the subprocess write to a temporary file then replace the file you read from, so that you either read the previous version or the new version but never get to see the partially-written file from the new version.
You can do this in a number of ways; the easiest would be to change the subprocess, but I don't know if that's an option for you here. Alternatively, you can wrap it in your own separate script to manage the files. You probably don't want to call the subprocess in the script that analyses the output file either; you'll want a cronjob or something to regenerate periodically.
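One possible shape for the write-to-a-temporary-file-then-swap approach described above (all names here are illustrative; os.replace needs Python 3.3+, and the swap is atomic as long as the temporary file lives on the same filesystem):

import os
import subprocess
import tempfile

def regenerate(args, final_path):
    # Let the subprocess write into a temporary file next to the target...
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(final_path) or '.')
    with os.fdopen(fd, 'w') as tmp:
        subprocess.call(args, stdout=tmp)   # call() waits for the command to finish
    # ...then swap it into place, so readers only ever see a complete file.
    os.replace(tmp_path, final_path)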
This should work as is provided the subprocess is finishing in time (see James's answer).
If you want to wait for it to finish, add p.wait() after the Popen invocation.
What is your actual while loop, though? while myFile.readline() makes it seem as if you're not actually saving the line for anything. Try this:
myFile = open(fileName, "r")
print myFile.readlines()
myFile.close()
Or, if you want to interactively examine the state of your program:
myFile = open(fileName, "r")
import pdb; pdb.set_trace()
myFile.close()
Then you can do things like print myFile.readlines() after it stops.
@James Aylett pointed me down the right path; it appears that my problem was that subprocess.Popen wasn't finished running when I called .flush().
The solution is to call p.wait() right after the subprocess.Popen call, to allow the underlying command to finish. After doing that, .flush() does the right thing (since all the data is there), and I can proceed to read from the file.
So the above code becomes:
myFile = open(fileName, "w")
p = subprocess.Popen(args, stdout=myFile)
p.wait() # <-- Missing line
myFile.flush() # force the file to disk
os.fsync(myFile) # ..
myFile.close()
myFile = open(fileName, "r")
while myFile.readline():
    pass # do stuff
myFile.close()
And then it all works!