I have a python script which will run various other scripts when it sees various files have been updated. It rapidly polls the files to check for updates by looking at the file modified dates.
For the most part this has worked as expected. When one of my scripts updates a file, another script is triggered and the appropriate action(s) are taken. For reference I am using pickles as the file type.
However, adding a new file and corresponding script into the mix just now, I've noticed an issue where the file has its modified date updated twice. Once when I perform the pickle.dump() and again when I exit the "with" statement (when the file closes). This means that the corresponding actions trigger twice rather than once. I guess this makes sense but what's confusing is this behaviour doesn't happen with any of my other files.
I know a simple workaround would be to poll the files slightly less frequently, since the gap between the two updates is extremely small. But I want to understand why this issue occurs for some files and not others.
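For context, here is a minimal sketch of the kind of mtime-polling loop being described; the watched file names, the poll interval and run_script() are placeholders, not the asker's actual code:

import os
import time

WATCHED = {"data_a.pkl": 0.0, "data_b.pkl": 0.0}   # path -> last seen mtime (placeholder names)

def run_script(path):
    print("would trigger the script for", path)    # placeholder for the real action

while True:
    for path, last_mtime in WATCHED.items():
        try:
            mtime = os.path.getmtime(path)
        except FileNotFoundError:
            continue                               # file not created yet
        if mtime > last_mtime:                     # modified date moved forward
            WATCHED[path] = mtime
            run_script(path)                       # this is where the double trigger shows up
    time.sleep(0.1)                                # "rapid" polling interval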
I think what you are observing is two separate events: the file being created and the file being updated.
To resolve this, create and populate the file outside of the monitored folders, and once the "with" block is over (the file is closed), move it from its temporary location to the proper place.
To do this, look at the tempfile module in the standard library.
If the pickle is big enough (typically somewhere around 4+ KB, though it will vary by OS/file system), this would be expected behavior. The majority of the pickle would be written during the dump call as buffers filled and got written, but whatever fraction doesn't consume the full file buffer would be left in the buffer until the file is closed (which implicitly flushes any outstanding buffered data before closing the handle).
I agree with the other answer that the usual solution is to write the file in a different folder (but on the same file system), then, immediately after closing it, use os.replace to perform an atomic rename that moves it from the temporary location to the final location. That way the watcher never sees the open/populate/close sequence; the file is either there in its entirety, or not at all.
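A hedged sketch of that pattern, combining the tempfile suggestion with os.replace; the file names and the payload below are placeholders:

import os
import pickle
import tempfile

def atomic_pickle_dump(obj, final_path):
    # Write under a temporary name the watcher is not looking for, in the same
    # directory (and therefore the same file system) as the destination.
    final_dir = os.path.dirname(final_path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=final_dir, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            pickle.dump(obj, f)            # all writes, plus the implicit flush on close
        os.replace(tmp_path, final_path)   # atomic: the watcher sees a single mtime change
    except BaseException:
        os.unlink(tmp_path)                # clean up the temporary file on failure
        raise

atomic_pickle_dump({"result": 42}, "data.pkl")   # "data.pkl" is a placeholder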
I need to read an application log file dynamically and, based on the newly added lines in the log, take some action, for example notify a service (a callback).
I have no control over who writes to this log file. For now, reading the whole log file is also fine, since it is an error log and gets updated infrequently, so it will not grow too much in a single uptime. In that case I can read the whole file (and work out the diff) whenever it is updated. If the solution comes from within Python's core libraries, that would be helpful.
If the log file is only ever appended to, without any lines being deleted, you can save the index of the last line you read.
When you later open the file again, you can start reading lines from that index. As far as I can see, this is possible with readlines(), or probably even better with linecache (which was mentioned in the first link).
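A minimal sketch of that idea; the log path, the poll interval and the handle() callback are placeholders, and the file is assumed small enough to re-read in full:

import time

LOG_PATH = "app_error.log"   # placeholder path
lines_seen = 0               # index of the last line already processed

def handle(line):
    print("new log line:", line.rstrip())   # placeholder for the real callback

def poll_for_new_lines():
    global lines_seen
    try:
        with open(LOG_PATH) as f:
            lines = f.readlines()            # re-read the whole (small) file
    except FileNotFoundError:
        return                               # log not created yet
    for line in lines[lines_seen:]:          # only the lines added since last time
        handle(line)
    lines_seen = len(lines)

while True:
    poll_for_new_lines()
    time.sleep(5)                            # poll interval is arbitrary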
In Python, if you either open a file without calling close(), or close the file but not using try-finally or the "with" statement, is this a problem? Or does it suffice as a coding practice to rely on the Python garbage-collection to close all files? For example, if one does this:
for line in open("filename"):
    # ... do stuff ...
... is this a problem because the file can never be closed and an exception could occur that prevents it from being closed? Or will it definitely be closed at the conclusion of the for statement because the file goes out of scope?
In your example the file isn't guaranteed to be closed before the interpreter exits. In current versions of CPython the file will be closed at the end of the for loop because CPython uses reference counting as its primary garbage collection mechanism but that's an implementation detail, not a feature of the language. Other implementations of Python aren't guaranteed to work this way. For example IronPython, PyPy, and Jython don't use reference counting and therefore won't close the file at the end of the loop.
It's bad practice to rely on CPython's garbage collection implementation because it makes your code less portable. You might not have resource leaks if you use CPython, but if you ever switch to a Python implementation which doesn't use reference counting you'll need to go through all your code and make sure all your files are closed properly.
For your example use:
with open("filename") as f:
    for line in f:
        # ... do stuff ...
Some Pythons will close files automatically when they are no longer referenced, while others will not and it's up to the O/S to close files when the Python interpreter exits.
Even for the Pythons that will close files for you, the timing is not guaranteed: it could be immediately, or it could be seconds/minutes/hours/days later.
So, while you may not experience problems with the Python you are using, it is definitely not good practice to leave your files open. In fact, CPython 3 will now warn you that it had to close files for you if you didn't do it.
Moral: Clean up after yourself. :)
Although it is quite safe to use such a construct in this particular case, there are some caveats to generalising the practice:
you can potentially run out of file descriptors; unlikely, but imagine hunting down a bug like that
you may not be able to delete said file on some systems, e.g. win32
if you run anything other than CPython, you don't know when the file is closed for you
if you open the file in write or read-write mode, you don't know when the data is flushed
The file does get garbage collected, and hence closed. The GC determines when it gets closed, not you. Obviously, this is not a recommended practice, because you might hit the open file handle limit if you do not close files as soon as you finish using them. What if, within that for loop of yours, you open more files and leave them lingering?
It is very important to close your file descriptor when you are going to use the file's content later in the same Python script; I realised this today after a long stretch of hectic debugging. The reason is that the content is only reliably written and saved to the file once you close (or flush) the file descriptor, so only then do the changes actually show up in the file.
So suppose you write content to a new file and then, without closing the file descriptor, use that file (not the descriptor) in a shell command which reads its content. In that situation the shell command will not see the content you expect, and if you try to debug it, the bug is not easy to find. You can read more in my blog entry http://magnificentzps.blogspot.in/2014/04/importance-of-closing-file-descriptor.html
During the I/O process, data is buffered: this means that it is held in a temporary location before being written to the file.
Python doesn't flush the buffer, that is, write the buffered data to the file, until it's sure you're done writing. One way to signal that is to close the file; closing flushes any remaining buffered data automatically, and you can also force it earlier with flush().
If you write to a file without closing (or flushing) it, some of the data may never make it to the target file.
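A small illustration of the buffering behaviour; the file name is a placeholder:

# Buffered data only reliably lands in the file after flush() or close().
f = open("demo.txt", "w")   # "demo.txt" is a placeholder name
f.write("hello")            # may still sit in Python's buffer, not in the file
f.flush()                   # force the buffered data out to the OS
f.close()                   # closing also flushes, and releases the handle

with open("demo.txt") as g:
    print(g.read())         # prints "hello"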
Python uses the close() method to close an open file. Once the file object is closed, you cannot read from or write to it again.
If you try to use the same file object again, it will raise ValueError, since the file is already closed.
In CPython the file is also closed automatically if the file object is reassigned to another file (the old object is garbage collected). Still, closing the file explicitly is standard practice, as it reduces the risk of the file being modified when you don't expect it.
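A tiny illustration of that ValueError; the file name is a placeholder:

# Using a file object after close() raises ValueError.
f = open("example.txt", "w")   # "example.txt" is a placeholder name
f.write("data")
f.close()

try:
    f.write("more data")       # the object is already closed
except ValueError as exc:
    print("error:", exc)       # "I/O operation on closed file."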
Another way to handle this is the with statement.
If you open a file using a with statement, a variable is bound to the file object for use inside the indented block, and it is meant to be used only within that block. The with statement itself calls the close() method after the indented code has executed, even if an exception occurs.
Syntax:
with open('file_name.text') as file:
    # some code here
I would like to check every minute whether a file like "RESULTS.ODB" has been generated, and if this file is bigger than 1.5 gigabytes, start another subprocess to get the data from it. How can I make sure that the file isn't still in the process of being written and that everything is included?
I hope you know what I mean. Any ideas on how to handle that?
Thank you very much. :)
If you have no control over the writing process, then you are at some point bound to fail somewhere.
If you do have control over the writer, a simple way to "lock" files is to create a symlink. If your symlink creation fails, there is already a write in progress. If it succeeds, you just acquired the "lock".
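A rough sketch of that symlink idea (Unix), assuming both the writer and the reader use the same helper; the lock name is a placeholder:

import os

LOCK = "RESULTS.ODB.lock"      # placeholder lock name

def acquire_lock():
    try:
        os.symlink("RESULTS.ODB", LOCK)   # atomic: fails if the link already exists
        return True
    except FileExistsError:
        return False                      # someone else holds the "lock"

def release_lock():
    os.remove(LOCK)

if acquire_lock():
    try:
        pass                              # read or write RESULTS.ODB here
    finally:
        release_lock()
else:
    print("write in progress, try again later")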
But if you do not have any control over the writing and creation of the file, there will be trouble. You can try the approach outlined here: Ensuring that my program is not doing a concurrent file write
This reads the timestamps of the file and "guesses" from them whether writing has completed or not. It is more reliable than checking the file size, as you could end up with a file over your size threshold while writing is still in progress.
In this case the problem would be the writer starting to write before you have read the file in its entirety. Now your reader would fail when the file it was reading disappeared half way through.
If you are on a Unix platform, have no control over the writer, and absolutely need to do this, I would do something like the following (a rough sketch is given after the steps):
1. Check if the file exists and, if it does, whether its "last written" timestamp is "old enough" for you to assume the writer has finished.
2. Rename the file to a different name.
3. Check that the renamed file still matches your criteria.
4. Get the data from the renamed file.
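A minimal sketch of those steps, assuming a 1.5 GB size threshold and a three-minute "old enough" age; the numbers and file names are placeholders:

import os
import time

SRC = "RESULTS.ODB"
DST = "RESULTS.ODB.processing"
MIN_SIZE = 1.5 * 1024 ** 3       # 1.5 GB threshold
MIN_AGE = 3 * 60                 # seconds since last modification

def try_grab_file():
    try:
        st = os.stat(SRC)
    except FileNotFoundError:
        return None                              # step 1: file not there yet
    if st.st_size < MIN_SIZE:
        return None                              # too small, probably not done
    if time.time() - st.st_mtime < MIN_AGE:
        return None                              # modified too recently
    os.rename(SRC, DST)                          # step 2: move it out of the way
    if os.path.getsize(DST) < MIN_SIZE:          # step 3: re-check the criteria
        return None
    return DST                                   # step 4: hand it to the reader

path = try_grab_file()
if path:
    print("ready to process", path)              # e.g. start the subprocess here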
Nevertheless, this will eventually fail and you will lose an update, as there is no way to make it atomic. Renaming removes the problem of the file being overwritten before you have read it, but if the writer decides to start writing between steps 1 and 2, not only will you receive an incomplete file, you might also break the writer if it does not like the file disappearing halfway through.
I would rather try to find a way to somehow chain the actions together. Either your writer triggering the read process or adding a locking mechanism. Writing 1.5GB of data is not instantaneous and eventually the unexpected will happen.
Or, if you definitely cannot do anything like that, could you at least ensure that your writer writes at most once every N minutes or so? If you could be sure it never writes twice within a 5-minute window, you could wait in your reader until the file is 3 minutes old, then rename it and read the renamed file. You could also check whether you can prevent the writer from overwriting. If you can, then you can safely process the file in your reader once it is "old enough" and has not changed within whatever grace period you decide to give it, and when you have read it, delete the file to allow the next update to appear.
Without knowing more about your environment and the processes involved, this is the best I can come up with. There is no universal solution to this problem; it needs a workaround tailored to your particular environment.
I am writing a script that will be polling a directory looking for new files.
In this scenario, is it necessary to do some sort of error checking to make sure the files are completely written prior to accessing them?
I don't want to work with a file before it has been written completely to disk, but because the info I want from the file is near the beginning, it seems like it could be possible to pull the data I need without realizing the file isn't done being written.
Is that something I should worry about, or will the file be locked because the OS is writing to the hard drive?
This is on a Linux system.
Typically on Linux, unless you're using locking of some kind, two processes can quite happily have the same file open at once, even for writing. There are three ways of avoiding problems with this:
Locking
By having the writer apply a lock to the file, it is possible to prevent the reader from reading the file partially. However, most locks are advisory, so it is still entirely possible to see partial results anyway. (Mandatory locks exist, but are strongly not recommended on the grounds that they're far too fragile.) It's relatively difficult to write correct locking code, and it is normal to delegate such tasks to a specialist library (i.e., to a database engine!). In particular, you don't want to use locking on networked filesystems; it's a source of colossal trouble when it works and can often go thoroughly wrong.
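For illustration, a sketch of what advisory locking looks like with the standard fcntl module (Unix-only, and it only helps if every process involved takes the lock; the file name is a placeholder):

# Advisory locking via fcntl. A non-cooperating process can still read or
# write the file freely; only processes that also call flock() are excluded.
import fcntl

with open("shared.dat", "a+") as f:            # "shared.dat" is a placeholder
    fcntl.flock(f.fileno(), fcntl.LOCK_EX)     # block until we hold the lock
    try:
        f.seek(0)
        data = f.read()                        # safe to read/write while locked
    finally:
        fcntl.flock(f.fileno(), fcntl.LOCK_UN) # release the lock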
Convention
A file can instead be created in the same directory with another name that you don't automatically look for on the reading side (e.g., .foobar.txt.tmp) and then renamed atomically to the right name (e.g., foobar.txt) once the writing is done. This can work quite well, so long as you take care to deal with the possibility of previous runs failing to correctly write the file. If there should only ever be one writer at a time, this is fairly simple to implement.
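A short writer-side sketch of that convention, using the example names from above:

# Write everything under a hidden name the reader ignores, then rename
# atomically so the reader only ever sees a complete foobar.txt.
import os

tmp_name = ".foobar.txt.tmp"    # hidden working name
final_name = "foobar.txt"       # the name the reader looks for

with open(tmp_name, "w") as f:
    f.write("the complete payload\n")   # write everything while "hidden"

os.replace(tmp_name, final_name)        # atomic rename once the file is complete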
Not Worrying About It
The most common type of file that is frequently written is a log file. These can be easily written in such a way that information is strictly only ever appended to the file, so any reader can safely look at the beginning of the file without having to worry about anything changing under its feet. This works very well in practice.
There's nothing special about Python in any of this. All programs running on Linux have the same issues.
On Unix, unless the writing application goes out of its way, the file won't be locked and you'll be able to read from it.
The reader will, of course, have to be prepared to deal with an incomplete file (bearing in mind that there may be I/O buffering happening on the writer's side).
If that's a non-starter, you'll have to think of some scheme to synchronize the writer and the reader, for example:
explicitly lock the file;
write the data to a temporary location and only move it into its final place when the file is complete (the move operation can be done atomically, provided both the source and the destination reside on the same file system).
If you have some control over the writing program, have it write the file somewhere else (like the /tmp directory) and then when it's done move it to the directory being watched.
If you don't have control of the program doing the writing (and by 'control' I mean 'edit the source code'), you probably won't be able to make it do file locking either, so that's probably out. In which case you'll likely need to know something about the file format to know when the writer is done. For instance, if the writer always writes "DONE" as the last four characters in the file, you could open the file, seek to the end, and read the last four characters.
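A small sketch of that last-four-characters check; the "DONE" marker and the file name are just the hypothetical example from above, not a real convention:

import os

def file_is_complete(path, marker=b"DONE"):
    try:
        with open(path, "rb") as f:
            f.seek(-len(marker), os.SEEK_END)   # jump to the last four bytes
            return f.read() == marker
    except OSError:                             # missing file, or file shorter than the marker
        return False

if file_is_complete("output.dat"):              # "output.dat" is a placeholder
    print("writer has finished, safe to process")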
Yes, it is something you should worry about.
I prefer the "file naming convention" and renaming solution described by Donal.