Let's consider a site where users can upload files. Can I use MD5 or SHA1 hashes of their contents as filenames? If not, what should I use to avoid collisions?
You can use almost anything as a filename, minus reserved characters. Those particular choices tell you nothing about the file itself, aside from its hash value. Provided they aren't uploading identical files, that should prevent file naming collisions. If you don't care about that, have at it.
Usually people upload files in order for someone to pull them back down. So you'd need to have a descriptor of some kind; otherwise users would need to open a mass of files to get the one they want. Perhaps a better option would be to let the user select a name (up to a character limit) and then append the datetime code. Then, in order to have a collision, you'd need to have 2 users select the exact same name at the exact same time. Include seconds in the datetime code, and the chances of collision approach (but never equal) zero.
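A minimal sketch of that idea (the sanitizing rule and the 64-character cap are just assumptions, not anything from your app):

import datetime
import os
import re

def make_upload_name(user_supplied_name, max_len=64):
    # Keep only filesystem-safe characters and cap the length (64 is an arbitrary choice).
    base = re.sub(r'[^A-Za-z0-9_.-]', '_', user_supplied_name)[:max_len]
    stem, ext = os.path.splitext(base)
    # Append a timestamp down to microseconds so identical names almost never collide.
    stamp = datetime.datetime.utcnow().strftime('%Y%m%d%H%M%S%f')
    return "%s_%s%s" % (stem, stamp, ext)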
Despite the SHA1 collision attack demonstrated a while back, the probability of an accidental SHA1 collision is still so low that it can be considered safe to use as a filename in most cases.
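For reference, hashing the uploaded contents is only a few lines with hashlib (the chunk size here is arbitrary):

import hashlib

def content_hash_name(fileobj, ext=""):
    # Hash the upload in chunks so large files don't have to fit in memory.
    h = hashlib.sha1()
    for chunk in iter(lambda: fileobj.read(65536), b""):
        h.update(chunk)
    return h.hexdigest() + ext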
The other common approach is to use a GUID/UUID for every file. Then the only question left is how you want to handle two identical files uploaded by two users. The easiest way is to treat them as two separate files, so neither is affected by the other.
Sometimes, though, you might be concerned about storage space. For example, if the uploaded files are really big, you might want to store two identical files as one to save space. Depending on the user experience of your system, you might then need to handle a few situations afterwards, such as when one of the two users removes the file. However, these are not difficult to handle and just depend on the rest of your system.
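A sketch of the UUID approach with optional content-based deduplication (seen_hashes is a placeholder for whatever lookup your storage layer provides):

import hashlib
import uuid

def stored_name_for(fileobj, seen_hashes):
    # seen_hashes: placeholder dict mapping content hash -> stored filename.
    digest = hashlib.sha256(fileobj.read()).hexdigest()  # for big uploads, hash in chunks instead
    if digest in seen_hashes:
        # Identical content already stored: reuse the existing file to save space.
        return seen_hashes[digest]
    name = uuid.uuid4().hex  # random, collision-resistant filename
    seen_hashes[digest] = name
    return name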
Let's say we have a test.zip file and we update a file:
import zipfile

zfh = zipfile.ZipFile("test.zip", mode="a")
zfh.write("/home/msala/test.txt")
zfh.close()
Repeating this "update" a few times and then using the builtin method printdir(),
I see that the archive stores not only the latest "test.txt" but all the previous copies of the file as well.
OK, I understand the zipfile library doesn't have a delete method.
Questions:
if I call the builtin method extract("/home/msala/test.txt"), which copy of the file is extracted and written to the file system?
inside the zip archive, is there any flag telling that old copies are old copies, superseded by the last one?
At the moment I list all the stored files and sort them by filename and last modification time...
The tl;dr is no, you can't do this without building a bit of extra info—but that can be done without sorting, and, even if you did have to sort, the performance cost would be irrelevant.
First, let me explain how zipfiles work. (Even if you understand this, later readers with the same problem may not.)
Unfortunately, the specification is a copyrighted and paywalled ISO document, so I can't link to it or quote it. However, the original PKZip APPNOTE.TXT, which is the de facto pre-standardization standard, is available. And numerous sites like Wikipedia have nice summaries.
A zipfile is 0 or more fragments, followed by a central directory.
Fragments are just treated as if they were all concatenated into one big file.
The body of the file can contain zip entries, in arbitrary order, along with anything you want. (This is how DOS/Windows self-extracting archives work—the unzip executable comes at the start of the first fragment.) Anything that looks like a zip entry, but isn't referenced by the central directory, is not treated as a zip entry (except when repairing a corrupted zipfile.)
Each zip entry starts with a header that gives you the filename, compression format, etc. of the following data.
The directory is a list of directory entries that contain most of the same information, plus a pointer to where to find the zip entry.
It's the order of directory entries that determines the order of the files in the archive.
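You can see that structure from Python: infolist() returns the central directory entries in directory order, so duplicate names show up as separate ZipInfo objects. A quick way to peek at it:

import zipfile

with zipfile.ZipFile("test.zip") as zfh:
    for info in zfh.infolist():
        # header_offset is the position of this entry's local header in the file body.
        print(info.filename, info.date_time, info.header_offset)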
if I call the builtin method extract("/home/msala/test.txt"), which copy of the file is extracted and written to the file system?
The behavior isn't really specified anywhere.
Extracting the whole archive should extract both files, in the order present in the zip directory (the same order given by infolist), with the second one overwriting the first.
But extracting by name doesn't have to give you both—it could give you the last one, or the first, or pick one at random.
Python gives you the last. The way this works is that, when reading the directory, it builds a dict mapping filenames to ZipInfos, just adding them as encountered, so the last one will overwrite the previous ones. (Here's the 3.7 code.) Whenever you try to access something by filename, it just looks up the filename in that dict to get the ZipInfo.
But is that something you want to rely on? I'm not sure. On the one hand, this behavior has been the same from Python 1.6 to 3.7, which is usually a good sign that it's not going to change, even if it's never been documented. On the other hand, there are open issues—including #6818, which is intended to add deletion support to the library one way or another—that could change it.
And it's really not that hard to do the same thing yourself. With the added benefit that you can use a different rule—always keep the first, always keep the one with the latest mod time, etc.
You seem to be worried about the performance cost of sorting the infolist, which is probably not worth worrying about. The time it takes to read and parse the zip directory is going to make the cost of your sort virtually invisible.
But you don't really need to sort here. After all, you don't want to be able to get all of the entries with a given name in some order, you just want to get one particular entry for each name. So, you can just do what ZipFile does internally, which takes only linear time to build, and constant time each time you search it. And you can use any rule you want here.
entries = {}
for entry in zfh.infolist():
    if entry.filename not in entries:
        entries[entry.filename] = entry
This keeps the first entry for any name. If you want to keep the last, just remove the if. If you want to keep the latest by modtime, change the condition, as in the variant sketched below. And so on.
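For instance, the latest-by-modtime rule is the same loop with a different test:

entries = {}
for entry in zfh.infolist():
    existing = entries.get(entry.filename)
    # Keep whichever entry has the most recent modification time.
    if existing is None or entry.date_time > existing.date_time:
        entries[entry.filename] = entry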
Now, instead of relying on what happens when you call extract("home/msala/test.txt"), you can call extract(entries["home/msala/test.txt"]) and know that you're getting the first/last/latest/whatever file of that name.
inside the zip archive, is there any flag telling that old copies are old copies, superseded by the last one?
No, not really.
The way to delete a file is to remove it from the central directory. Which you do just by rewriting the central directory. Since it comes at the end of the zipfile, and is almost always more than small enough to fit on even the smallest floppy, this was generally considered fine even back in the DOS days.
(But notice that if you unplug the computer in the middle of it, you've got a zipfile without a central directory, which has to be rebuilt by scanning all of the file entries. So, many newer tools will instead, at least for smaller files, rewrite the whole file to a tempfile then rename it over the original, to guarantee a safe, atomic write.)
At least some early tools would sometimes, especially for gigantic archives, rewrite the entry's pathname's first byte with a NUL. But this doesn't really mark the entry as deleted, it just renames it to "\0ome/msala/test.txt". And many modern tools will in fact treat it as meaning exactly that and give you weird errors telling you they can't find a directory named 'ome' or '' or something else fun. Plus, this means the filename in the directory entry no longer matches the filename in the file entry header, which will cause many modern tools to flag the zipfile as corrupted.
At any rate, Python's zipfile module doesn't do either of these, so you'd need to subclass ZipFile to add the support yourself.
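If you do want delete support today, the usual workaround is to rewrite the archive without the unwanted entries. A rough sketch (not production code, and the helper name is mine):

import os
import shutil
import tempfile
import zipfile

def remove_from_zip(zip_path, name_to_delete):
    # Rewrite the whole archive into a temp file, skipping the entries we want gone,
    # then replace the original (the same "rewrite and rename" idea newer tools use).
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(zip_path) or ".")
    os.close(fd)
    with zipfile.ZipFile(zip_path) as src, zipfile.ZipFile(tmp_path, "w") as dst:
        for info in src.infolist():
            if info.filename != name_to_delete:
                dst.writestr(info, src.read(info))
    shutil.move(tmp_path, zip_path)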
I solved it this way, similar to database record management.
When adding a file to the archive, I look for previously stored copies (same filename).
For each of them, I set their "comment" field to a specific marker, for example "deleted".
Then I add the new file, with an empty comment.
Whenever we like, we can "vacuum": shrink the zip archive using the usual tools (under the hood, a new archive is created, discarding the files whose comment is set to "deleted").
This way, we also get simple "versioning".
We keep all the previous copies of a file until the vacuum.
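A sketch of the marking step with Python's zipfile. Note this relies on zipfile rewriting the whole central directory when a modified archive is closed, which current CPython does but which isn't a documented guarantee:

import zipfile

def add_with_versioning(zip_path, src_path, arcname):
    with zipfile.ZipFile(zip_path, "a") as zfh:
        # Mark any previously stored copies of this name as superseded.
        for info in zfh.infolist():
            if info.filename == arcname:
                info.comment = b"deleted"  # marker only; the data stays until a "vacuum"
        # Add the new copy with an empty comment.
        zfh.write(src_path, arcname)
    # A later "vacuum" pass can rewrite the archive, skipping entries whose comment is b"deleted".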
Good evening. I am looking at developing some code that will collect EXIF data from JPEG images and store it in a MySQL database using Python v2.x. The stumbling block is that the JPEGs are scattered across a number of subdirectories (and further subdirectories) in a root folder, so for example 200 JPEGs may be stored in root > subroot > subsubroot1, as well as a further 100 in root > subroot > subroot2. Once all images are identified, they will be scanned and their respective EXIF data extracted before being added to a MySQL table.
At the moment I am just at the planning stage, but I am wondering: what would be the most efficient and pythonic way to carry out the recursive searching? I am looking to scan the root directory, append any newly identified subdirectories to a list, and then scan all subdirectory paths in the list for further subdirectories until I have a total list of all directories. This just seems a clumsy and somewhat repetitive way to do it, though, so I assume there may be a more OOP manner of carrying out this function.
Similarly, I am only looking to add new info to my MySQL table, so what would be the most efficient way to establish whether an entry already exists? The filename, both in the table and on disk, will be the JPEG's MD5 hash value. I was considering scanning through the table at the beginning of the code and placing all filenames in a set, so that before scanning a new JPEG, if an entry already exists in the set, there would be no need to extract the EXIF and I could move on to the next picture. Is this an efficient method, though, or would it be better to scan through the MySQL table each time a new image is encountered? I anticipate the set method may be the most efficient; however, the table may eventually contain tens of millions of entries, so holding the filenames for all of those entries in a set (volatile memory) may not be the best idea.
Thanks folks.
I would just write a function that scans a directory for all files: if it's a jpeg, add the full path name of the jpeg to the list of results; if it's a directory, immediately call the function with the newly discovered directory as an argument; if it's any other type of file, do nothing. This is a classic recursive divide-and-conquer strategy. It will break if there are loops in your directory path, for instance with symlinks; if this is a danger for you, then you'll have to make sure you don't traverse the same directory twice, by finding the "real" non-symlinked path of each directory and recording it.
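In practice you don't even need to write the recursion yourself: os.walk does the traversal for you, and its followlinks parameter defaults to False, which avoids the symlink-loop issue. Something like:

import os

def find_jpegs(root):
    jpegs = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith((".jpg", ".jpeg")):
                jpegs.append(os.path.join(dirpath, name))
    return jpegs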
How to avoid duplicate entries is a trickier problem and you have to consider whether you are tolerant of two differently-named files with the exact same contents (and also consider the edge cases of symlinked or multiply-hard-linked files), how new files appear in the directories you are scanning and whether you have any control over that process. One idea to speed it up would be to use os.path.getmtime(). Record the moment you start the directory traversal process. Next time around, have your recursive traversal process ignore any jpeg files with an mtime older than your recorded time. This can't be your only method of keeping track because files modified between the start and end times of your process may or may not be recorded, so you will still have to check the database for those records (for instance using the full path, a hash on the file info or a hash on the data itself, depending on what kind of duplication you're intolerant of) but used as a heuristic it should speed up the process greatly.
You could theoretically load all filenames (probably paths and not filenames) from the database into memory to speed up comparison, but if there's any danger of the table becoming very large it would be better to leave that info in the database. For instance, you could create a hash from the filename, and then simply add that to the database with a UNIQUE constraint -- the database will reject any duplicate entries, you can catch the exception and go on your way. This won't be slow if you use the aforementioned heuristic checking file mtime.
Be sure you account for the possibility of files that may be only modified and not newly created, if that's important for your application.
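As a rough sketch of the hash-plus-UNIQUE-constraint idea, here it is with sqlite3 simply because it's in the standard library; the same pattern applies to MySQL with your driver's IntegrityError:

import hashlib
import sqlite3

conn = sqlite3.connect("images.db")  # stand-in for your MySQL connection
conn.execute("CREATE TABLE IF NOT EXISTS images (md5 TEXT UNIQUE, path TEXT)")

def record_image(path):
    # Hash the file contents; the UNIQUE constraint does the duplicate check for us.
    with open(path, "rb") as fh:
        digest = hashlib.md5(fh.read()).hexdigest()
    try:
        conn.execute("INSERT INTO images (md5, path) VALUES (?, ?)", (digest, path))
        conn.commit()
        return True   # new image: go on and extract its EXIF data
    except sqlite3.IntegrityError:
        return False  # already in the table: skip it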
I'm looking to schedule FTP file transfers, but to conserve bandwidth, I would like to only upload files that have changed. What's a good reliable way to do this that will work on a variety of different hosting providers?
First, checking to see whether a local file has changed really doesn't have anything to do with FTP. You're stating that you're only going to open an FTP connection to upload a file if/when it has changed.
At a high level, the basic strategy you're going to need to employ is keeping track of when your application last checked for changes (the previous execution timestamp) and comparing that to the timestamps of the files you are interested in uploading. If a file's timestamp is more recent, it has most likely changed. I say most likely because it is possible to update only the timestamp (e.g. with touch on unix/linux).
Here's a quick example showing you how to check the modification time for all of the items in a specific directory:
import os, time

checkdir = "./"
for item in os.listdir(checkdir):
    item_path = "%s%s" % (checkdir, item)
    mtime = os.path.getmtime(item_path)
    print "%s: %s" % (item_path, mtime)
Note that this does not differentiate between file types (e.g. regular file, directory, symlink). Read the docs on os.path to discover how to determine file type so you can skip certain types, if you so choose.
You'll still need to come up with the logic to store the time of the previous 'scan' so that you refer to it in subsequent scans. A really simple way to do this would be to store a value in a file.
Make sure you use a locking strategy in case two 'scans' overlap. FTP uploads will take some time to complete.
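A minimal sketch of the "store the previous scan time in a file" idea (the state-file name and location are just placeholders):

import os
import time

STATE_FILE = "last_scan.txt"  # assumed location for the previous scan timestamp

def changed_since_last_scan(directory):
    scan_start = time.time()
    try:
        with open(STATE_FILE) as fh:
            last_scan = float(fh.read().strip())
    except (IOError, ValueError):
        last_scan = 0.0  # first run: treat everything as changed
    changed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) > last_scan:
            changed.append(path)
    # Save the start time so files modified mid-scan get picked up next run.
    with open(STATE_FILE, "w") as fh:
        fh.write(repr(scan_start))
    return changed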
I have a listing of millions of files and am uploading them to Amazon's S3. I need to create unique keys for each of the images. I'd rather not use md5 because it requires scanning the entire file, which can be slow. Additionally, there could be duplicate images, which is allowed in our application. Any suggestion for quickly generating an almost guaranteed to be unique key? Preferably, 32-characters alpha numeric (can be case sensitive). Thanks!
I would not call this a hash, since that implies generating a unique value based on the file contents.
Instead, UUIDs might be what you're after.
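For what it's worth, uuid4().hex is exactly the 32-character alphanumeric string you asked for:

import uuid

key = uuid.uuid4().hex  # 32 lowercase hex characters, random and effectively collision-free
assert len(key) == 32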
Just use an MD5 hash on the actual FILE after it's been uploaded and stored.
http://docs.python.org/library/md5.html
Add the hash to your database table, or however you're storing it.
MD5 of datetime.now() (at the time of the file upload) will be OK, imho.
I have a web server that is dynamically creating various reports in several formats (pdf and doc files). The files require a fair amount of CPU to generate, and it is fairly common to have situations where two people are creating the same report with the same input.
Inputs:
raw data, input as a string (equations, numbers, and lists of words), arbitrary length; almost 99% will be less than about 200 words
the version of the report creation tool
When a user attempts to generate a report, I would like to check to see if a file already exists with the given input, and if so return a link to the file. If the file doesn't already exist, then I would like to generate it as needed.
What solutions are already out there? I've cached simple HTTP requests before, but the keys were extremely simple (usually database IDs).
If I have to do this myself, what is the best way? The input can be several hundred words, and I was wondering how I should go about transforming the strings into keys sent to the cache.
# entire input: uses too much memory, one-to-one mapping
cache['one two three four five six seven eight nine ten eleven...']

# short keys
cache['one two']  # => 5 results, then I must narrow these down even more
Is this something that should be done in a database, or is it better done within the web app code (Python in my case)?
Thank you, everyone.
This is what Apache is for.
Create a directory that will have the reports.
Configure Apache to serve files from that directory.
If the report exists, redirect to a URL that Apache will serve.
Otherwise, the report doesn't exist, so create it. Then redirect to a URL that Apache will serve.
There's no "hashing". You have a key ("a string (equations, numbers, and lists of words), arbitrary length, almost 99% will be less than about 200 words") and a value, which is a file. Don't waste time on a hash. You just have a long key.
You can compress this key somewhat by making a "slug" out of it: remove punctuation, replace spaces with _, that kind of thing.
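A slug helper in that spirit might look like this (the 200-character cap is just an example, not a requirement from your app):

import re

def slugify(text, max_len=200):
    # Lowercase, collapse runs of non-alphanumerics into "_", trim, and cap the length.
    slug = re.sub(r'[^a-z0-9]+', '_', text.lower()).strip('_')
    return slug[:max_len]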
You should create an internal surrogate key which is a simple integer.
You're simply translating a long key to a "report" which either exists as a file or will be created as a file.
The usual thing is to use a reverse proxy like Squid or Varnish.