Imagine that you are reading the byte contents of a file in Python, with the goal of writing them to a temporary file or a BytesIO object.
What I have not been able to figure out is what happens if the file is large and it is modified while it is open for reading.
Is there a way to ensure that the file is read correctly, without errors?
I would normally deal with that by simply copying the whole file into memory first, but that does not seem wise for large files.
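For context, roughly what I have in mind is a chunked copy like the following sketch; the source path and chunk size are arbitrary examples:

import io
import shutil

# Copy the source in chunks instead of loading it into memory all at once.
# "large_input.bin" and the 1 MiB chunk size are just illustrative choices.
buffer = io.BytesIO()
with open("large_input.bin", "rb") as src:
    shutil.copyfileobj(src, buffer, length=1024 * 1024)

# If the source file is modified while this copy is running, what ends up in `buffer`?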
Related
When opening and appending to a file in python, does that file get loaded into memory? I'm asking this because I'm writing a program where I write to several files in a round-robin fashion where I have the guarantee that any one file can fit into memory but not all files can fit into memory at the same time. Opening and closing files every time I append is not an option since that would be too slow. As such, I would need all the files opened simultaneously.
The answer is no. According to the documentation, open() wraps a system call and returns a file object, not the contents of the file: https://docs.python.org/2/library/functions.html#open
Open a file, returning an object of the file type described in section
File Objects.
The file contents are not loaded into RAM unless you read the file, e.g. with read() or readlines().
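As a rough sketch of the round-robin scenario described in the question (the file names and make_record() helper are hypothetical placeholders), only the bytes you write are buffered, never the whole file:

# Keep several files open and append to them in turn.
# File names and make_record() are hypothetical placeholders.
def make_record(n):
    return f"record {n}\n"

files = [open(f"output_{i}.txt", "a") for i in range(3)]
try:
    for n in range(100):
        files[n % len(files)].write(make_record(n))  # buffers only this write, not the file contents
finally:
    for f in files:
        f.close()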
I am running a long Python program which prints values to a .txt file iteratively. I am trying to read the values with terminal tools like gedit/tail/less and plot them in Gnuplot, but I am not able to read the .txt file until the whole execution is over. What is the correct way to handle the file in this case?
Data is written out to the file when the file is closed or when the internal buffer fills up.
That is, even when you call file.write("something"), nothing is written to the file until you close it or the with block ends.
with open("temp.txt","w") as w:
w.write("hey")
x=input("touch")
w.write("\nhello")
w.write(x)
Run this code and try to read the file before answering the input prompt: it will be empty. Once the with block is over you can see the contents.
If you are going to access the file from several places, you have to be careful about this, and also avoid modifying it from multiple sources.
EDIT: I forgot to say, you have to repeatedly close the file and reopen it in append mode if you want some other program to read it while you are writing to it.
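A minimal sketch of that close-and-reopen pattern; the file name, the loop, and the one-second delay are placeholders for the real program:

import time

for i in range(100):
    with open("values.txt", "a") as f:   # reopened in append mode every iteration
        f.write(f"{i}\n")                # flushed to disk when the with block closes the file
    time.sleep(1)                        # meanwhile `tail -f values.txt` can see each new line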
I have a process that I want to run which converts files into PDFs. I'm using LibreOffice for that: I call LibreOffice as a subprocess from Python, and LibreOffice then writes a new file on my system. This is of course totally independent of my Python program. After the conversion I will read the file and then use it for something else in Python.
But is it at all possible to capture this file as a bytes object in Python instead of a file? This would eliminate the need for reading the file after the conversion, and just keeping it in memory.
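For reference, the file-based flow I have today looks roughly like this; the soffice command line, paths, and output name are just examples from my setup:

import pathlib
import subprocess

src = "report.docx"
# Ask LibreOffice to convert the document to PDF in the "out" directory.
subprocess.run(
    ["soffice", "--headless", "--convert-to", "pdf", "--outdir", "out", src],
    check=True,
)
pdf_path = pathlib.Path("out") / (pathlib.Path(src).stem + ".pdf")
pdf_bytes = pdf_path.read_bytes()   # the read-back step I would like to avoid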
I've written a script that fetches bitcoin data and saves it to .txt files or, if the .txt files already exist, updates them. The .txt files are nodes and relationships connecting the nodes for neo4j.
At the beginning of the script:
It checks whether the files exist; if they do, it opens them and appends new lines, OR
in case the files do not exist, the script creates them and starts appending lines.
The .txt files stay open the whole time while the script writes the new data; they are closed when all the data has been written or when I terminate the execution.
My question is:
Should I open, write, and close each .txt file on every iteration?
or
Should I keep it the way it is now: open the .txt files, do all the writing, and close them once the writing is done?
I am saving data from 6013 blocks. Which way would minimize the risk of corrupting the data written to the .txt files?
Keeping the files open will be faster. However, in the comments you mentioned that "loss of data previously written is not an option". The probability of corrupting a file is higher while it is open, so opening and closing the file on each iteration is more reliable.
There is also the option of keeping the data in a buffer and writing/appending the buffer to the file when all the data has been received, or on a user/system interrupt or a network timeout.
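A minimal sketch of the open-per-iteration approach; fetch_block() and the file names are hypothetical placeholders for your script:

# Open, append, and close on every iteration so each block is flushed to disk.
# fetch_block() and the file names are hypothetical placeholders.
def fetch_block(height):
    return f"node line for block {height}\n", f"relationship line for block {height}\n"

for height in range(6013):
    nodes_line, rels_line = fetch_block(height)
    with open("nodes.txt", "a") as nodes_file:
        nodes_file.write(nodes_line)
    with open("relationships.txt", "a") as rels_file:
        rels_file.write(rels_line)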
I think keeping the file open will be more efficient, because Python won't need to look up the file and open it every time you want to read from or write to it.
I guess it would look something like this:
with open(filename, "a") as file:
while True:
data = # get data
file.write(data)
"Run a benchmark and see for yourself" would be the typical answer to this kind of question.
Nevertheless, opening and closing a file does have a cost: Python needs to allocate memory for the buffer and the data structures associated with the file, and it has to call operating system functions, e.g. the open syscall, which in turn looks the file up in the cache or on disk.
On the other hand, there is a limit on the number of files that a program, the user, the whole system, etc. can have open at the same time. For example, on Linux the value in /proc/sys/fs/file-max denotes the maximum number of file handles that the kernel will allocate; when you get lots of error messages about running out of file handles, you might want to increase this limit (source).
If your program runs in such a restricted environment, then it would be good to keep the file open only when needed.
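A rough sketch of such a benchmark using timeit; the file name and write counts are arbitrary:

import timeit

def keep_open(n=1000):
    # One open/close for n writes.
    with open("bench.txt", "a") as f:
        for _ in range(n):
            f.write("x\n")

def reopen_each_time(n=1000):
    # One open/close per write.
    for _ in range(n):
        with open("bench.txt", "a") as f:
            f.write("x\n")

print("keep open:   ", timeit.timeit(keep_open, number=10))
print("reopen each: ", timeit.timeit(reopen_each_time, number=10))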
I know that it doesn't make sense to open a file for reading if it doesn't exist, unlike opening it for writing. But I need to create a file object, write data to it, and then read it later; that's why I want to use the "r+" mode. Of course I could just open the file for writing once and then open the saved file for reading, but the problem is that I don't want the file to be saved to disk. Any ideas?
Maybe you should be using a StringIO then. It imitates a file object (you can write to it and read from it) without anything being saved to disk.
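A minimal sketch; io.BytesIO works the same way if you need bytes instead of text:

import io

buf = io.StringIO()         # in-memory "file"; nothing is written to disk
buf.write("some data\n")
buf.write("more data\n")

buf.seek(0)                 # rewind before reading
print(buf.read())           # prints both lines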