Why does shutil.copy2 sometimes modify the files' st_mtime?

I'm currently writing a little Python3 script for monitoring parts of my macOS filesystem. It checks selected folders for new and modified files and copies them (via shutil.copy2) into a review folder which I check from time to time. The test for modification uses a comparison between the st_mtime of the original and the (already) copied files. While testing it I encountered some strange behaviour: Some files got copied over and over again, despite being unmodified.
After some poking around I found out that shutil.copy2 apparently doesn't always carry over the exact st_mtime. (I also tried out shutil.copystat explicitly, with the same result -- which isn't much of a surprise.)
To illustrate my problem: When I run the following code ...
from shutil import copy2
from os import stat
source = '/Users/me/myfile'
target = source + '-copy'
copy2(source, target)
print(stat(source).st_mtime, stat(target).st_mtime)
... the result sometimes (but not always) looks like this:
1600616170.8300607 1600616170.83006
When I use the nanosecond version, st_mtime_ns, the result looks like this:
1600616170830060720 1600616170830060000
Now my question: Does anyone know what's going on here?
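For illustration, a minimal sketch of the kind of modification check described above, done with st_mtime_ns and a small tolerance; the helper name and the one-microsecond tolerance are assumptions, not part of the original script.
import os

def is_modified(source, copy, tolerance_ns=1_000):
    # Treat the file as modified only if the mtimes differ by more than the
    # tolerance (1 microsecond here), since the copy may keep only
    # microsecond precision.
    try:
        copy_mtime = os.stat(copy).st_mtime_ns
    except FileNotFoundError:
        return True  # no copy yet, so it counts as new
    return abs(os.stat(source).st_mtime_ns - copy_mtime) > tolerance_ns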

Related

Primer needed in python pathnames

I am a very novice coder, and Python is my first (and, practically speaking, only) language. I am charged as part of a research job with manipulating a collection of data analysis scripts, first by getting them to run on my computer. I was able to do this, essentially by removing all lines of code identifying paths, and running the scripts through a Jupyter terminal opened in the directory where the relevant modules and CSV files live so the script knows where to look (I know that Python defaults to the location of the terminal).
Here are the particular blocks of code whose function I don't understand
import sys
sys.path.append('C:\Users\Ben\Documents\TRACMIP_Project\mymodules/')
import altdata as altdata
I have replaced the pathname in the original code with the path name leading to the directory where the module is; the folder containing all the CSV files that end up being referenced here is also in mymodules.
This works depending on where I open the terminal, but the only way I can get it to work consistently is by opening the terminal in mymodules, which is fine for now but won't work when I need to work by accessing the server remotely. I need to understand better precisely what is being done here, and how it relates to the location of the terminal (all the documentation I've found is overly technical for my knowledge level).
Here is another segment I don't understand
import os.path
csvfile = 'csv/' + model +'_' + exp + '.csv'
if os.path.isfile(csvfile): # csv file exists
    hcsvfile = open(csvfile)
I get here that it's looking for the CSV file, but I'm not sure how. I'm also not sure why then on some occasions depending on where I open the terminal it's able to find the module but not the CSV files.
I would love an explanation of what I've presented, but more generally I would like information (or a link to information) explaining paths and how they work in scripts in modules, as well as what are ways of manipulating them. Thanks.
sys.path
This is a simple list of directories where Python will look for modules and packages (.py files and directories containing an __init__.py file; see the modules tutorial). Extending this list lets you load modules (custom libs, etc.) from non-default locations. Usually you change it at runtime; for static directories you can instead have your startup script set the PYTHONPATH environment variable.
os.path
This module implements some useful functions on pathnames.
... and lets you find out whether a file exists, whether it is a link, a directory, and so on.
Why did loading *.csv fail?
Because sys.path is responsible for module loading, and only for that. When you use a relative path:
csvfile = 'csv/' + model +'_' + exp + '.csv'
open() will look in the current working directory:
file is either a string or bytes object giving the pathname (absolute or relative to the current working directory)...
You need to use absolute paths, constructing them with the os.path module.
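As a rough sketch of that idea (the csv/ layout and the placeholder values for model and exp are assumptions, not taken from the question), you can anchor the relative path to the script's own location:
import os

# Placeholder values standing in for the question's model/exp variables.
model, exp = 'foo', 'bar'

# Anchor the relative csv/ folder to this script's own directory, so the
# path no longer depends on where the terminal was opened.
base_dir = os.path.dirname(os.path.abspath(__file__))
csvfile = os.path.join(base_dir, 'csv', model + '_' + exp + '.csv')
if os.path.isfile(csvfile):
    with open(csvfile) as hcsvfile:
        print(hcsvfile.readline())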
I agree with cdarke's comment that you are probably running into an issue with backslashes. Replacing the line with:
sys.path.append(r'C:\Users\Ben\Documents\TRACMIP_Project\mymodules')
will likely solve your problem. Details below.
In general, Python treats paths as if they're relative to the current directory (where your terminal is running). When you feed it an absolute path, which is a path that includes the root directory, like the C:\ in C:\Users\Ben\Documents\TRACMIP_Project\mymodules, then Python doesn't care about the working directory anymore; it just looks where you tell it to look.
Backslashes are used to create special characters within strings, such as line breaks (\n) and tabs (\t). The snag you've hit is that Python paths are strings first, paths second. So the \U, \B, \D, \T and \m in your path are getting misinterpreted as escape sequences and messing up Python's path interpretation. If you prefix the string with r, Python ignores the special meaning of the backslash and treats it as a literal backslash (what you want).
The reason it still works if you run the script from the mymodules directory is that Python automatically looks in the working directory for files when asked. sys.path.append(path) tells the interpreter to include that directory when it looks for modules, so that you can use files in that directory no matter where you're running the script. The faulty path still gets added, but it's meaningless: there is no directory where it points, so there's nothing to find there.
As for path manipulation in general, the "safest" way is to use the functions in os.path, which are platform-independent and will give the correct output whether you're working in a Windows or a Unix environment (usually).
EDIT: Forgot to cover the second part. Since Python paths are strings, you can build them using string operations. That's what is happening with the line
csvfile = 'csv/' + model +'_' + exp + '.csv'
Presumably model and exp are strings that appear in the filenames in the csv/ folder. With model = "foo" and exp = "bar", you'd get csv/foo_bar.csv, which is a relative path to a file (that is, relative to your working directory). The code makes sure a file actually exists at that path and then opens it. Assuming the csv/ folder is in the same directory you added in sys.path.append, this path should work regardless of where you run the file, but I'm not 100% certain on that. EDIT: outoftime pointed out that sys.path.append only applies to module loading, not to opening files, so you'll need to either expand csv/ into an absolute path or always run the script from its parent directory.
Also, I think Python is smart enough not to care about the direction of slashes in paths, but you should probably not mix them: use all backslashes or all forward slashes. os.path.join inserts the correct separator for your platform, so I'd probably change the line to
csvfile = os.path.join('csv', model + '_' + exp + '.csv')
for consistency's sake.

Strange race condition: FileNotFoundError with mkdtemp

I am getting a rather bizarre race condition in Mac OS X with Python (I've only tested Python 3.3). I am making several temporary directories, writing things to them, and then clearing them. Something along the lines of
while running:
    (do something)
    tempdir = mkdtemp('name')
    try:
        (write some stuff to tempdir)
    finally:
        shutil.rmtree(tempdir)
However, in some of the later loops of the (write some stuff to tempdir), I get errors like
with open(os.path.join("/var/folders/yc/8wpl9rlx47qgzxqpcf003k280000gn/T/tmp0fh2ztname", "file"), 'w', encoding='utf-8') as fn:
FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/yc/8wpl9rlx47qgzxqpcf003k280000gn/T/tmpups5dpname/file'
(I've inlined the temp dir path for clarity)
Notice how the path being opened is not the same as the path that it can't find. In each case, the path in the error message is the temporary directory from the previous iteration of the loop.
The error is reproducible most of the time in the same place (after about the fourth iteration), but not every time.
EDIT: I just realized this is probably relevant. The (write some stuff to tempdir) part actually happens in a subprocess. This is how I am sure about the tempdir path: I have to pass it on to the subprocess (I actually lied about the "clarity" bit, I am actually writing out a Python file with that exact with open line). This is also how I know for sure that the tempdir path is indeed different from the one being used.
I figured it out. It turns out it has nothing to do with mkdtemp (a sigh of relief that Mac OS X and Python are doing the right things there).
The problem is that I was writing out the code to a file, including the with open(os.path.join("/var/folders/yc/8wpl9rlx47qgzxqpcf003k280000gn/T/tmp0fh2ztname", "file"), 'w', encoding='utf-8') as fn: bit, and running it in a subprocess. The issue was that I was using the same file each time, and the .pyc files were not being invalidated correctly.
The error message was confusing because when Python generates a traceback, it reads the .py file (where the actual code is), but what is actually run is the .pyc file.
If I understand http://nedbatchelder.com/blog/200804/the_structure_of_pyc_files.html correctly, the timestamps in .pyc files only have one-second granularity (this explains why it was reproducible at the same place each time: the loop reached its fourth iteration within the same second).
The solution was to explicitly delete the .pyc files when writing out the file (in other circumstances you could also write out to a temp file itself, but in my case I needed the file to be importable under the same name).
Something along the lines of
if sys.version_info >= (3,):
    os.unlink(os.path.join(path_to_file, '__pycache__', 'file.cpython-%s%s.pyc' % sys.version_info[:2]))
    os.unlink(os.path.join(path_to_file, '__pycache__', 'file.cpython-%s%s.pyo' % sys.version_info[:2]))
else:
    os.unlink(os.path.join(path_to_file, 'file.pyc'))
    os.unlink(os.path.join(path_to_file, 'file.pyo'))
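As a side note, on Python 3.4+ the cached filename can also be computed rather than hand-built; here is a sketch using importlib.util.cache_from_source (path_to_file and file.py are the same hypothetical names as in the snippet above):
import contextlib
import importlib.util
import os

path_to_file = '.'  # directory holding the generated module, as in the snippet above

# cache_from_source() maps file.py to its __pycache__ entry (e.g.
# ./__pycache__/file.cpython-312.pyc), so the interpreter version does not
# have to be spelled out by hand.
with contextlib.suppress(FileNotFoundError):
    os.unlink(importlib.util.cache_from_source(os.path.join(path_to_file, 'file.py')))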

What is the fastest method of finding a file in Linux and Windows using Python?

I am writing a plug-in for RawTherapee in Python. I need to extract the version number from a file called 'AboutThisBuild.txt' that may exist anywhere in the directory tree. Although RawTherapee knows where it is installed, this data is baked into the binary file.
My plug-in is being designed to collect basic system data when run without any command-line parameters, for the purpose of short-circuiting troubleshooting. By having the version number, revision number and changeset (AKA Mercurial), I can sort out why the script may not be working as expected. OK, that is the context.
I have tried a variety of methods, some suggested elsewhere on this site. The main one is using os.walk and fnmatch.
The problem is speed. Searching the entire directory tree is like watching paint dry!
To reduce load I have tried to predict likely hiding places and only traverse these. This is quicker but has the obvious disadvantage of missing some files.
This is what I have at the moment. Tested on Linux but not Windows as yet as I am still researching where the file might be placed.
import fnmatch
import os
import sys
rootPath = ('/usr/share/doc/rawtherapee',
            '~',
            '/media/CoreData/opt/',
            '/opt')
pattern = 'AboutThisBuild.txt'
# Return the first instance of RT found in the paths searched
for CheckPath in rootPath:
    print("\n")
    print(">>>>>>>>>>>>> " + CheckPath)
    print("\n")
    for root, dirs, files in os.walk(CheckPath, True, None, False):
        for filename in fnmatch.filter(files, pattern):
            print(os.path.join(root, filename))
            break
Usually 'AboutThisBuild.txt' is stored in a directory/subdirectory called 'rawtherapee', or has the string somewhere in the directory tree. I had naively thought I could get the 5000-odd directory names, search these for 'rawtherapee', and then use os.walk to traverse those directories, but all modules and functions I have looked at collate all files in the directory (again).
Anyone have a quicker method of searching the entire directory tree or am I stuck with this hybrid option?
I am a beginner in Python, but I think I know the simplest way of finding a file in Windows.
import os
for dirpath, subdirs, filenames in os.walk('The directory you wanna search the file in'):
    if 'name of your file with extension' in filenames:
        print(dirpath)
This code will print out the directory of the file you are searching for in the console. All you have to do is get to the directory.
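If you only need the first hit, a small variation on that walk (a sketch, not part of the original answer) can return the full path and stop early:
import os

def find_first(filename, root):
    # Walk the tree and return the full path of the first match, or None.
    for dirpath, subdirs, filenames in os.walk(root):
        if filename in filenames:
            return os.path.join(dirpath, filename)
    return None

print(find_first('AboutThisBuild.txt', '/usr/share'))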
The thing about searching is that it doesn't matter too much how you get there (eg cheating). Once you have a result, you can verify it is correct relatively quickly.
You may be able to identify candidate locations fairly efficiently by guessing. For example, on Linux, you could first try looking in these locations (obviously not all are directories, but it doesn't do any harm to os.path.isfile('/;l$/AboutThisBuild.txt'))
$ strings /usr/bin/rawtherapee | grep '^/'
/lib/ld-linux.so.2
/H=!
/;l$
/9T$,
/.ba
/usr/share/rawtherapee
/usr/share/doc/rawtherapee
/themes/
/themes/slim
/options
/usr/share/color/icc
/cache
/languages/default
/languages/
/languages
/themes
/batch/queue
/batch/
/dcpprofiles
/#q=
/N6rtexif16NAISOInterpreterE
If you have it installed, you can try the locate command
If you still don't find it, move on to the brute force method
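A sketch of calling locate from Python, assuming the locate binary and its database are present (otherwise the except branch just leaves the list empty):
import subprocess

# Ask the locate database for the file; an empty list means locate is
# missing, failed, or simply has no entry for it.
try:
    output = subprocess.check_output(['locate', 'AboutThisBuild.txt'])
    candidates = output.decode().splitlines()
except (OSError, subprocess.CalledProcessError):
    candidates = []
print(candidates)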
Here is a rough equivalent of strings using Python (a Python 2 session):
>>> from string import printable, whitespace
>>> from itertools import groupby
>>> pathchars = set(printable) - set(whitespace)
>>> with open("/usr/bin/rawtherapee") as fp:
...     data = fp.read()
...
>>> for k, g in groupby(data, pathchars.__contains__):
...     if not k: continue
...     g = ''.join(g)
...     if len(g) > 3 and g.startswith("/"):
...         print g
...
/lib64/ld-linux-x86-64.so.2
/^W0Kq[
/pW$<
/3R8
/)wyX
/WUO
/w=H
/t_1
/.badpixH
/d$(
/\$P
/D$Pv
/D$#
/D$(
/l$#
/d$#v?H
/usr/share/rawtherapee
/usr/share/doc/rawtherapee
/themes/
/themes/slim
/options
/usr/share/color/icc
/cache
/languages/default
/languages/
/languages
/themes
/batch/queue.csv
/batch/
/dcpprofiles
/#q=
/N6rtexif16NAISOInterpreterE
It sounds like you need a pure Python solution here. If not, other answers will suffice.
In this case, you should traverse the folders using a queue and threads. While some may say threads are never the solution, threads are a great way of speeding things up when you are I/O bound, which you are in this case. Essentially, you'll os.listdir the current dir. If it contains your file, party like it's 1999. If it doesn't, add each subfolder to the work queue.
If you're clever, you can play with depth-first vs breadth-first traversal to get the best results.
There is a great example I have used quite successfully at work at http://www.tutorialspoint.com/python/python_multithreading.htm. See the section titled Multithreaded Priority Queue. The example could probably be updated to use thread pools, but it's not necessary.
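A minimal sketch of that queue-plus-threads idea (the function name, worker count, and timeout are assumptions of mine, not taken from the linked tutorial):
import os
import queue
import threading

def threaded_find(filename, root, workers=8):
    # Each worker pops a directory off the queue, lists it, records matches,
    # and pushes subdirectories back for other workers to pick up.
    dirs = queue.Queue()
    dirs.put(root)
    found = []

    def worker():
        while True:
            try:
                current = dirs.get(timeout=0.5)  # give up once the queue stays empty
            except queue.Empty:
                return
            try:
                entries = os.listdir(current)
            except OSError:
                continue  # unreadable directory, skip it
            for entry in entries:
                full = os.path.join(current, entry)
                if entry == filename:
                    found.append(full)
                elif os.path.isdir(full) and not os.path.islink(full):
                    dirs.put(full)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return found

if __name__ == '__main__':
    print(threaded_find('AboutThisBuild.txt', '/usr/share'))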

Limitation to Python's glob?

I'm using glob to feed file names to a loop like so:
inputcsvfiles = glob.iglob('NCCCSM*.csv')
for x in inputcsvfiles:
    csvfilename = x
    # do stuff here
The toy example that I used to prototype this script works fine with 2, 10, or even 100 input csv files, but I actually need it to loop through 10,959 files. When using that many files, the script stops working after the first iteration and fails to find the second input file.
Given that the script works absolutely fine with a "reasonable" number of entries (2-100), but not with what I need (10,959) is there a better way to handle this situation, or some sort of parameter that I can set to allow for a high number of iterations?
PS- initially I was using glob.glob, but glob.iglob fares no better.
Edit:
An expansion of above for more context...
# typical input file looks like this: "NCCCSM20110101.csv", "NCCCSM20110102.csv", etc.
inputcsvfiles = glob.iglob('NCCCSM*.csv')
# loop over individual input files
for x in inputcsvfiles:
    csvfile = x
    modelname = x[0:5]
    # ArcPy
    arcpy.AddJoin_management(inputshape, "CLIMATEID", csvfile, "CLIMATEID", "KEEP_COMMON")
    # do more stuff after
The script fails at the ArcPy line, where the "csvfile" variable gets passed into the command. The error reported is that it can't find a specified csv file (e.g., "NCCSM20110101.csv"), when in fact, the csv is definitely in the directory. Could it be that you can't reuse a declared variable (x) multiple times as I have above? Again, this will work fine if the directory being glob'd only has 100 or so files, but if there's a whole lot (e.g., 10,959), it fails seemingly arbitrarily somewhere down the list.
Try doing an ls * in a shell on those 10,000 entries and the shell would struggle too. How about walking the directory and yielding those files one by one for your purpose?
#credit - #dabeaz - generators tutorial
import os
import fnmatch

def gen_find(filepat, top):
    for path, dirlist, filelist in os.walk(top):
        for name in fnmatch.filter(filelist, filepat):
            yield os.path.join(path, name)

# Example use
if __name__ == '__main__':
    lognames = gen_find("NCCCSM*.csv", ".")
    for name in lognames:
        print(name)
One issue that arose was not with Python per se, but rather with ArcPy and/or MS handling of CSV files (more the latter, I think). As the loop iterates, it creates a schema.ini file whereby information on each CSV file processed in the loop gets added and stored. Over time, the schema.ini gets rather large and I believe that's when the performance issues arise.
My solution, although perhaps inelegant, was to delete the schema.ini file during each loop to avoid the issue. Doing so allowed me to process the 10k+ CSV files, although rather slowly. Truth be told, we wound up using GRASS and BASH scripting in the end.
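A sketch of that workaround, assuming schema.ini is created in the current working directory next to the CSV files (the exact location isn't stated above):
import glob
import os

for csvfile in glob.iglob('NCCCSM*.csv'):
    # ... ArcPy join and the rest of the per-file processing go here ...

    # Remove the schema.ini that the CSV text driver keeps appending to,
    # so it never grows large enough to slow the loop down.
    if os.path.isfile('schema.ini'):
        os.remove('schema.ini')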
If it works for 100 files but fails for 10000, then check that arcpy.AddJoin_management closes csvfile after it is done with it.
There is a limit on the number of open files that a process may have at any one time (which you can check by running ulimit -n).
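If you want to check that limit from inside the script, the resource module exposes the same numbers on Unix (a minimal sketch; resource is not available on Windows):
import resource

# Soft and hard limits on open file descriptors for this process; the soft
# limit is the number `ulimit -n` reports.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)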

Performance problems in Python: os.walk() + filecmp.dircmp().report_full_closure()

I currently have a need to compare directories after incremental data migrations occur. I wrote a python script to iterate through a list of source/destinations, perform the incremental copy from source to destination, then immediately compare the number of files and folders in each directory. To do this comparison, we very simply use:
for (path, dirs, files) in os.walk(destd):
    destFileCount += len(files)
    destDirCount += len(dirs)
If the number of files/dirs returned are different, we call another section of code to see what exactly is different. To do that, we run the following and send the output to a file:
filecmp.dircmp(sourced, destd).report_full_closure()
We use the report_full_closure piece as I'm not aware of another way to do a recursive comparison. The script then searches the resulting file for lines starting with "only in" and prints them to the screen, effectively showing us the differences.
However inefficient, this works like a charm on directories with under 90,000 files or so but once we hit that upper limit the script becomes sluggish to the extent that it isn't feasible to use it for this purpose. I suppose my questions can be separated into the following:
Am I making a logical error in using both of these modules [os.walk + filecmp.dircmp().report_full_closure()]? i.e., am I really saving time being able to skip the filecmp, or should I just only do the filecmp and skip the file/dir count altogether?
Is there any way to combine these two functions by sort of 'caching' the files from one for use in the other?
Is there a quicker way to perform either of these functions? I've searched high and low, so I'm guessing there is not.
I really appreciate your thoughts on this matter. This script has morphed and grown considerably so please forgive me if the answer is extremely obvious...
Thank you,
M
I would do something like this:
dir_diff = filecmp.dircmp(sourced, destd)
if dir_diff.left_only or dir_diff.right_only:
    dir_diff.report_full_closure()
EDIT:
Here is a nice blog post about the filecmp module.
There could be two reasons why the differences returned by dircmp are not accurate:
1. It uses os.stat to compare files (a shallow comparison), which is good most of the time, but your requirement may be different.
2. Funny files (e.g. a name that is a file on one side and a directory on the other) are shown under common, so you need to consider dir_diff.common_funny as well. Here is the modified code:
dir_diff = filecmp.dircmp(sourced, destd)
if dir_diff.left_only or dir_diff.right_only or dir_diff.common_funny:
    dir_diff.report_full_closure()
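For reference, here is a sketch of doing the recursion yourself with dircmp.subdirs and collecting the differences instead of printing a full report (the directory paths are placeholders):
import filecmp

def collect_diffs(diff):
    # Recursively gather names that exist on only one side, using the dircmp
    # objects that filecmp already builds for each common subdirectory.
    only_left = ['%s: %s' % (diff.left, name) for name in diff.left_only]
    only_right = ['%s: %s' % (diff.right, name) for name in diff.right_only]
    for sub in diff.subdirs.values():
        sub_left, sub_right = collect_diffs(sub)
        only_left += sub_left
        only_right += sub_right
    return only_left, only_right

left_only, right_only = collect_diffs(filecmp.dircmp('/path/to/source', '/path/to/dest'))
print(left_only)
print(right_only)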
