The task:
I am working with 4 TB of data/files, stored on an external USB disk: images, HTML, videos, executables and so on.
I want to index all those files in a sqlite3 database with the following schema:
path TEXT, mimetype TEXT, filetype TEXT, size INT
So far:
I walk recursively through the mounted directory with os.walk, run the Linux file command via Python's subprocess, and get the size with os.path.getsize(). Finally the results are written into the database, which is stored on my computer - the USB disk is mounted with -o ro, of course. No threading, by the way.
You can see the full code here http://hub.darcs.net/ampoffcom/smtid/browse/smtid.py
The problem:
The code is really slow. I realized that the deeper the directory structure, the slower the code gets. I suspect os.walk might be the problem.
The questions:
Is there a faster alternative to os.walk?
Would threading speed things up?
Is there a faster alternative to os.walk?
Yes. In fact, multiple.
scandir (which will be in the stdlib in 3.5, and is available on PyPI for earlier versions) is significantly faster than walk; see the sketch below.
The C function fts is significantly faster than scandir. I'm pretty sure there are wrappers on PyPI, although I don't know one off-hand to recommend, and it's not that hard to use via ctypes or cffi if you know any C.
The find tool uses fts, and you can always subprocess to it if you can't use fts directly.
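For illustration, here is a minimal sketch of the scandir approach (os.scandir is stdlib from 3.5 on; on older Pythons the scandir package on PyPI provides an equivalent function, and the iter_files helper name is mine):

```python
import os

def iter_files(root):
    """Yield (path, size) for every regular file under root using os.scandir.

    os.scandir reuses the directory-entry data the OS already returns, so it
    avoids many of the extra stat() calls older os.walk implementations make.
    """
    stack = [root]
    while stack:
        current = stack.pop()
        try:
            entries = list(os.scandir(current))
        except OSError:
            continue  # unreadable directory; skip it
        for entry in entries:
            if entry.is_dir(follow_symlinks=False):
                stack.append(entry.path)
            elif entry.is_file(follow_symlinks=False):
                # the stat result is usually cached on the DirEntry
                yield entry.path, entry.stat(follow_symlinks=False).st_size
```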
Would threading speed things up?
That depends on details of your system that we don't have, but… you're spending all of your time waiting on the filesystem. Unless you have multiple independent drives that are only bound together at user level (that is, not LVM or something below it like RAID) or not at all (e.g., one is just mounted under the other's filesystem), issuing multiple requests in parallel will probably not speed things up.
Still, this is pretty easy to test; why not try it and see?
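A quick way to run that experiment, assuming a hypothetical collect() doing your per-file work (here just a stat; concurrent.futures is stdlib in 3.2+, or the futures backport on 2.x):

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

def collect(path):
    """Per-file work: here just a stat, but this is where file/libmagic would go."""
    return path, os.path.getsize(path)

def benchmark(paths, workers=4):
    """Compare a sequential pass against a thread pool on the same list of paths."""
    start = time.time()
    sequential = [collect(p) for p in paths]
    t_seq = time.time() - start

    start = time.time()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        threaded = list(pool.map(collect, paths))
    t_thr = time.time() - start

    print("sequential: %.2fs   threaded(%d): %.2fs" % (t_seq, workers, t_thr))
    return sequential, threaded
```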
One more idea: you may be spending a lot of time spawning and communicating with all those file processes. There are multiple Python libraries that use the same libmagic that file does; I don't want to recommend one in particular over the others, so search PyPI for the one that suits you.
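As an illustration only, here is roughly what one of them (python-magic) looks like; the other wrappers have similar but not identical APIs, so treat the exact calls as an assumption to verify against whichever library you pick:

```python
import magic  # the python-magic package; other libmagic wrappers differ slightly

mime_detector = magic.Magic(mime=True)   # reuse one handle instead of respawning `file`
type_detector = magic.Magic()

def describe(path):
    """Return (mimetype, human-readable filetype) without spawning a subprocess."""
    return mime_detector.from_file(path), type_detector.from_file(path)
```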
As monkut suggests, make sure you're doing bulk commits, not autocommitting each insert with sqlite. As the FAQ explains, sqlite can do ~50000 inserts per second, but only a few dozen transactions per second.
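Something along these lines is enough (a sketch: the files table mirrors the schema in the question, and rows is whatever iterable your scanner produces):

```python
import sqlite3

def index_files(rows, db_path="index.db", batch_size=10000):
    """Insert (path, mimetype, filetype, size) tuples in large transactions.

    Committing every batch_size rows instead of every row is what gets you
    from a few dozen rows/sec to tens of thousands of rows/sec with sqlite.
    """
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS files
                    (path TEXT, mimetype TEXT, filetype TEXT, size INT)""")
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= batch_size:
            conn.executemany("INSERT INTO files VALUES (?, ?, ?, ?)", batch)
            conn.commit()            # one transaction per batch, not per insert
            batch = []
    if batch:
        conn.executemany("INSERT INTO files VALUES (?, ?, ?, ?)", batch)
    conn.commit()
    conn.close()
```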
While we're at it, if you can put the sqlite file on a different filesystem than the one you're scanning (or keep it in memory until you're done, then write it to disk all at once), that might be worth trying.
Finally, but most importantly:
Profile your code to see where the hotspots are, instead of guessing.
Create small data sets and benchmark different alternatives to see how much benefit you get.
Due to several edits, this question might have become a bit incoherent. I apologize.
I'm currently writing a Python server. It will never see more than 4 active users, but I'm a computer science student, so I'm planning for it anyway.
Currently, I'm about to implement a function to save a backup of the current state of all relevant variables into CSV files. Of those I currently have 10, and they will never be really big, but... well, computer science student and so on.
So, I am currently thinking about two things:
When to run a backup?
What kind of backup?
When to run:
I can either run a backup every time a variable changes, which has the advantage of always having the current state in the backup, or run one every minute or so, which has the advantage of not rewriting the file hundreds of times per minute if the server gets busy, but creates a lot of useless rewrites of the same data unless I also detect which variables have changed since the last backup.
Directly related to that is the question what kind of backup I should do.
I can either do a full backup of all variables (which is pointless if I'm backing up every time a variable changes, but might be good if I'm backing up every X minutes), a full backup of a single variable (which would be better if I'm backing up each time a variable changes, but would involve either multiple backup functions or smart detection of which variable is currently being backed up), or some sort of delta backup on the files (which would probably involve reading the current file and rewriting it with the changes, so it's probably pretty stupid, unless there is a trick for this in Python I don't know about).
I cannot use shelve because I want the data to be portable between different programming languages (Java, for example, probably cannot open Python shelve files), and I cannot use MySQL for different reasons, mainly that the machine that will run the server has no MySQL support and I don't want to use an external MySQL server, since I want the server to keep running when the internet connection drops.
I am also aware of the fact that there are several ways to do this with preimplemented functions of python and / or other software (sqlite, for example). I am just a big fan of building this stuff myself, not because I like to reinvent the wheel, but because I like to know how the things I use work. I'm building this server partly just for learning python, and although knowing how to use SQLite is something useful, I also enjoy doing the "dirty work" myself.
In my usage scenario of possibly a few requests per day I am tending towards the "backup on change" idea, but that would quickly fall apart if, for some reason, the server gets really, really busy.
So, my question basically boils down to this: Which backup method would be the most useful in this scenario, and have I possibly missed another backup strategy? How do you decide on which strategy to use in your applications?
Please note that I raise this question mostly out of a general curiosity for backup strategies and the thoughts behind them, and not because of problems in this special case.
Use sqlite. You're asking about building persistent storage using csv files, and about how to update the files as things change. What you're asking for is a lightweight, portable relational (as in, table based) database. Sqlite is perfect for this situation.
Python has had sqlite support in the standard library since version 2.5 with the sqlite3 module. Since a sqlite database is implemented as a single file, it's simple to move them across machines, and Java has a number of different ways to interact with sqlite.
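As a sketch of what that could look like for your handful of variables (names are made up; a single key/value table is plenty, and JSON-encoding the values keeps them readable from Java too):

```python
import json
import sqlite3

class StateStore(object):
    """Persist named server variables in a single sqlite file."""

    def __init__(self, path="state.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS state (name TEXT PRIMARY KEY, value TEXT)")

    def save(self, name, value):
        # JSON keeps the values readable from Java or any other language later.
        self.conn.execute(
            "INSERT OR REPLACE INTO state (name, value) VALUES (?, ?)",
            (name, json.dumps(value)))
        self.conn.commit()

    def load(self, name, default=None):
        row = self.conn.execute(
            "SELECT value FROM state WHERE name = ?", (name,)).fetchone()
        return json.loads(row[0]) if row else default
```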
I'm all for doing things for the sake of learning, but if you really want to learn about data persistence, don't marry yourself to the idea of a "csv database". I would start by looking at the Wikipedia page for Persistence. What you're thinking about is basically a "system image" for your data. The Wikipedia article describes some of the same shortcomings of this approach that you've mentioned:
State changes made to a system after its last image was saved are lost in the case of a system failure or shutdown. Saving an image for every single change would be too time-consuming for most systems.
Rather than trying to update your state wholesale at every change, I think you'd be better off looking at some other form of persistence. For example, some sort of journal could work well. This makes it simple to just append any change to the end of a log-file, or some similar construct.
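A journal can be as simple as appending one JSON line per change and replaying the file on startup; a rough sketch (the file name and helper names are made up):

```python
import json

LOG_PATH = "changes.log"   # hypothetical location for the journal

def record_change(name, value):
    """Append one change as a single JSON line; appends are cheap."""
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps({"name": name, "value": value}) + "\n")

def replay():
    """Rebuild the latest state by replaying the journal from the start."""
    state = {}
    try:
        with open(LOG_PATH) as log:
            for line in log:
                change = json.loads(line)
                state[change["name"]] = change["value"]
    except IOError:            # no journal yet
        pass
    return state
```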
However, if you end up with many concurrent users, with processes running on multiple threads, you'll run into concerns about whether or not your changes are atomic, or whether they conflict with one another. While operating systems generally have some ways of dealing with locking files for edits, you're opening up a can of worms trying to learn how that works and interacts with your system. At this point you're back to needing a database.
So sure, play around with a couple different approaches. But as soon as you're looking to just get it working in a clear and consistent manner, go with sqlite.
If your data is in CSV files, why not use a revision control system on those files? E.g. git would be pretty fast and give excellent history. The repository would be wholly contained in the directory where the files reside, so it's pretty easy to handle. You could also replicate that repository to other machines or directories easily.
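If you go that route, the bookkeeping can be driven from Python by shelling out to git; a sketch, assuming git is installed and the CSV files live in data_dir:

```python
import subprocess

def snapshot(data_dir, message="periodic backup"):
    """Commit the current state of every file in data_dir to a local git repo."""
    subprocess.check_call(["git", "init", "-q"], cwd=data_dir)   # harmless if already a repo
    subprocess.check_call(["git", "add", "-A"], cwd=data_dir)
    # `git diff --cached --quiet` exits non-zero only when something is staged.
    if subprocess.call(["git", "diff", "--cached", "--quiet"], cwd=data_dir) != 0:
        subprocess.check_call(["git", "commit", "-q", "-m", message], cwd=data_dir)
```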
I have a django site running with a mysql database backend. I accept fairly large uploads from one of the admin users to bulk import some data. The data comes in a format that is slightly different from the form it needs to be in for the database, so I need to do a little parsing.
I'd like to be able to pivot this data into CSV, write it into a cStringIO object, and then simply use MySQL's bulk import command to load that "file". I'd prefer to skip writing the file to disk first, but I can't seem to find a way around it. I've done basically this exact thing with postgresql in the past, but unfortunately this project is on mysql.
The short version: can I take an in-memory file-like object and somehow use MySQL's bulk import operation on it?
There is an excellent tutorial called Generator Tricks for Systems Programmers that addresses processing large log files, which is similar, but not identical, to your situation. As long as you can perform the needed transform with access to only the current (and possibly previous) data in the stream, this may work for you.
I have mentioned this gem in a number of answers because I think that it introduces a different way of thinking that can be quite valuable. There is a companion piece, A Curious Course on Coroutines and Concurrency, that can seriously twist your head around.
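The core idea, applied to your upload, is to chain small generators so each record is massaged as it streams through and only the final CSV buffer is materialized; a sketch with made-up field handling (io.StringIO here plays the role cStringIO does on Python 2):

```python
import csv
import io

def parse_upload(lines):
    """Yield raw records from the uploaded file, one per line."""
    for line in lines:
        yield line.rstrip("\n").split("\t")

def transform(records):
    """Massage each record into the shape the table expects (made-up example)."""
    for name, raw_date, amount in records:
        yield name.strip(), raw_date.replace("/", "-"), float(amount)

def to_csv(rows):
    """Write the transformed rows into an in-memory CSV buffer."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for row in rows:
        writer.writerow(row)
    buf.seek(0)
    return buf

# usage: buf = to_csv(transform(parse_upload(uploaded_file_lines)))
```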
If by "bulk import" you mean LOAD DATA [LOCAL] INFILE then, no, there's no way around first writing the data to some file, damn it all. You (and I) would really like to write the table directly from an array.
But some OSs, like Linux, allow a RAM-resident filesystem that eases some of the hurt. I'm not enough of a sysadmin to know how to set up one of these guys; I had to get my ISP's tech support to do it for me. I found an article that might have useful info.
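Putting the two halves together, it could look roughly like this (a sketch: it assumes the MySQLdb/mysqlclient driver with LOCAL INFILE enabled on both client and server, and a tmpfs mount such as /dev/shm so the file LOAD DATA insists on never touches a real disk):

```python
import csv
import os
import tempfile

import MySQLdb  # assumption: the MySQLdb / mysqlclient driver

def bulk_load(rows, table="import_table"):
    """Write rows to a RAM-backed temp CSV, then LOAD DATA LOCAL INFILE it."""
    # /dev/shm is tmpfs (RAM-resident) on most Linux systems, so the "file on
    # disk" that LOAD DATA requires never actually hits a physical disk.
    fd, path = tempfile.mkstemp(suffix=".csv", dir="/dev/shm")
    try:
        with os.fdopen(fd, "w") as f:
            csv.writer(f).writerows(rows)
        conn = MySQLdb.connect(db="mydb", local_infile=1)   # credentials omitted
        try:
            cursor = conn.cursor()
            # path comes from mkstemp, so plain string interpolation is safe here
            cursor.execute(
                "LOAD DATA LOCAL INFILE '%s' INTO TABLE %s "
                "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'" % (path, table))
            conn.commit()
        finally:
            conn.close()
    finally:
        os.remove(path)
```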
HTH
First, sorry for my bad English.
I'm writing a Python script which compares the files in two different directories. For performance, I want to know whether the directories are on the same physical disk or not, so that I can read them simultaneously for a performance gain.
My current idea is to parse the output of the mount command, get the /dev/sd* device paths, and use them to identify the disks. But sometimes you can mount an already mounted directory somewhere else (or something like that, I'm not so sure), so things get complicated.
Is there a better way to do that, like a library?
(If there is a cross-platform way, I would appreciate it even more, but it seems hard to find a cross-platform library for this.)
You are looking for the stat function from Linux, which Python also provides (see http://docs.python.org/library/os.html#os.stat).
Compare the st_dev field of the resulting structures; both files are on the same filesystem if the values match.
Using this function is as portable as you can get (better than mount or df).
Bonus: you do not have to run expensive exec calls or do error-prone text parsing.
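For example (same st_dev means same filesystem; note that two partitions of one physical disk still get different st_dev values, so strictly this answers "same filesystem", which is usually what matters when deciding whether parallel reads can help):

```python
import os

def same_filesystem(path_a, path_b):
    """True when both paths report the same st_dev, i.e. the same filesystem."""
    return os.stat(path_a).st_dev == os.stat(path_b).st_dev

# usage: if same_filesystem(dir1, dir2), read sequentially; otherwise
# reading the two trees in parallel may pay off.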
An easier alternative to using mount might be to invoke df <directory>.
This prints out the filesystem. Also, on my Ubuntu box, passing -P to df makes the output a little bit easier to parse.
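A sketch of that approach (more brittle than os.stat because it parses human-oriented output, but it shows the idea):

```python
import subprocess

def filesystem_of(path):
    """Return the device/filesystem name reported by `df -P` for path."""
    output = subprocess.check_output(["df", "-P", path]).decode()
    # Output: a header line, then one data line whose first column is the device.
    return output.splitlines()[1].split()[0]

# Two directories are on the same filesystem if filesystem_of() matches for both.
```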
This is what I've done for a project. I have a few data structures that are basically dictionaries with some methods that operate on the data. When I save them to disk, I write them out to .py files as code that, when imported as a module, will load the same data into such a data structure.
Is this reasonable? Are there any big disadvantages? The advantage I see is that when I want to operate with the saved data, I can quickly import the modules I need. Also, the modules can be used separately from the rest of the application because you don't need separate parser or loader functionality.
By operating this way, you may gain some modicum of convenience, but you pay many kinds of price for that. The space it takes to save your data, and the time it takes to both save and reload it, go up substantially; and your security exposure is unbounded -- you must ferociously guard the paths from which you reload modules, as it would provide an easy avenue for any attacker to inject code of their choice to be executed under your userid (pickle itself is not rock-solid, security-wise, but, compared to this arrangement, it shines;-).
All in all, I prefer a simpler and more traditional arrangement: executable code lives in one module (on a typical code-loading path, that does not need to be R/W once the module's compiled) -- it gets loaded just once and from an already-compiled form. Data live in their own files (or portions of DB, etc) in any of the many suitable formats, mostly standard ones (possibly including multi-language ones such as JSON, CSV, XML, ... &c, if I want to keep the option open to easily load those data from other languages in the future).
It's reasonable, and I do it all the time. Obviously it's not a format you use to exchange data, so it's not a good format for anything like a save file.
But for example, when I do migrations of websites to Plone, I often get data about the site (such as a list of which pages should be migrated, a list of how old URLs should be mapped to new ones, or lists of tags). These you typically get in Word and Excel format. The data often needs massaging a bit, too, and I end up with what are, for all intents and purposes, dictionaries mapping one URL to some other information.
Sure, I could save that as CSV and parse it into a dictionary. But instead I typically save it as a Python file with a dictionary. Saves code.
So, yes, it's reasonable; no, it's not a format you should use for any sort of save file. It is, however, often used for data that straddles the border to configuration, like the above.
The biggest drawback is that it's a potential security problem, since it's hard to guarantee that the files won't contain arbitrary code, which could be really bad. So don't use this approach if anyone other than you has write access to the files.
A reasonable option might be to use the Pickle module, which is specifically designed to save and restore python structures to disk.
Alex Martelli's answer is absolutely insightful and I agree with him. However, I'll go one step further and make a specific recommendation: use JSON.
JSON is simple, and Python's data structures map well onto it; and there are several standard libraries and tools for working with JSON. The json module has been in the standard library since Python 2.6 and is based on simplejson, so I would use simplejson on Python 2.5 and earlier and json everywhere else.
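For instance, a round trip is just a couple of lines, and because the file is plain text it stays readable from Java or any other language:

```python
import json

data = {"pages": ["index.html", "about.html"], "visits": 42, "owner": None}

# write: dicts, lists, strings, numbers, booleans and None nest straight into JSON
with open("data.json", "w") as f:
    json.dump(data, f, indent=2)

# read it back; the structure comes back as plain dicts and lists
with open("data.json") as f:
    restored = json.load(f)

assert restored["visits"] == 42
```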
Second choice is XML. XML is more complicated, and harder to just look at (or just edit with a text editor) but there is a vast wealth of tools to validate it, filter it, edit it, etc.
Also, if your data storage and retrieval needs become at all nontrivial, consider using an actual database. SQLite is terrific: it's small, and for small databases runs very fast, but it is a real actual SQL database. I would definitely use a Python ORM instead of learning SQL to interact with the database; my favorite ORM for SQLite would be Autumn (small and simple), or the ORM from Django (you don't even need to learn how to create tables in SQL!) Then if you ever outgrow SQLite, you can move up to a real database such as PostgreSQL. If you find yourself writing lots of loops that search through your saved data, and especially if you need to enforce dependencies (such as if foo is deleted, bar must be deleted too) consider going to a database.
I understand that certain Windows XP programs, like Photoshop, have something called "scratch disks". What I understand this to mean, and please correct me if I'm wrong, is that Photoshop manages its own virtual memory on the hard drive instead of letting Windows manage it. I understood that the reason for this is some limitation by Windows XP on how much total memory a process can take, regardless of HD space. I think it's around 3 GB. Did I get it right so far?
I am making an application in Python for running simulations. It will take a lot of memory, and will run on Windows XP. Is it possible for it to use scratch disks? How?
Until you ACTUALLY run out of memory, thinking about this is a waste of time.
When you finally do run out of memory, you'll need to use a temporary file to store objects that your process needs, but can't fit into memory.
Use pickle or shelve (see Data Persistence) to store your objects in a file. If that file happens to be on a disk named "scratch", well that's nice.
Sometimes you want your temporary files to be on a separate disk from your other working files for performance reasons. In some environments (SAN, NAS, storage arrays) your disks are virtual and looking for a "scratch" disk doesn't have any performance benefit. In other environments (i.e., you own all the hardware) you can put temporary files on some other drive, making that drive a "scratch" disk.
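A sketch of the shelve route (the path is made up; put the file on whichever disk you want to treat as "scratch"):

```python
import shelve

# The shelf behaves like a dict whose values are pickled to disk on assignment,
# so only the entries you actually touch are held in memory.
results = shelve.open("/tmp/simulation_scratch")   # hypothetical path
try:
    for step in range(1000):
        results["step_%d" % step] = {"energy": step * 0.5}   # stored on disk, not in RAM
    print(results["step_10"])
finally:
    results.close()
```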
I understood that the reason for this is some limitation by Windows XP on how much total memory a process can take, regardless of HD space. I think it's around 3 GB.
Just an FYI, this is more a limitation of a 32-bit OS rather than being a Windows XP problem. You'll have the same problem in 32-bit Vista, linux, bsd... you get the idea. If you go the 64-bit route, you don't have these problems.
For example, Windows XP x64 allows up to 8 terabytes of memory per process.
Scratch disks will benefit your application if it works with very big files.
Is that the case?
If not, then I don't think scratch disks will give your application any benefit.
Memory mapped files might be what you are looking for. Python's implementation lets you use a file like a mutable string in memory.
The Win32 API provides this as well (file mapping).
You may be able to use these functions through PyWin32.
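A small sketch of the Python side: the mapping behaves like a mutable buffer whose pages the OS moves between RAM and the file on demand, so the working set can exceed what you keep resident:

```python
import mmap

SIZE = 100 * 1024 * 1024   # 100 MB of scratch space; adjust to taste

# Create a file of the right size, then map it into memory.
with open("scratch.bin", "wb") as f:
    f.truncate(SIZE)

with open("scratch.bin", "r+b") as f:
    buf = mmap.mmap(f.fileno(), SIZE)
    buf[0:5] = b"hello"          # write through the mapping
    print(buf[0:5])              # read it back
    buf.close()
```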
You could combine S.Lott's answer about using pickle (though you should use cPickle for better performance) with SQLite.
sqlite is built into Python 2.5 and up, so all you'll need to do is import it :). Then just store the pickled objects as strings in there and you'll have a nice, fast way of accessing the data (compared to building your own method) that will help keep you organized as well.
Note: cPickle is almost identical to pickle in use. The only difference is that it is written in C.
Useful Python Docs:
sqlite3 module
pickle module
edit: It may be a good idea to have a user-controlled memory usage limit. It would be a shame to be storing a bunch of data on disk and waiting on slow-ass disk I/O when the user has 8 GB of RAM ;)
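Roughly like this (a sketch; the table layout and helper names are mine, and on Python 3 plain pickle already uses the C implementation):

```python
import pickle
import sqlite3

conn = sqlite3.connect("objects.db")
conn.execute("CREATE TABLE IF NOT EXISTS objects (key TEXT PRIMARY KEY, blob BLOB)")

def store(key, obj):
    """Pickle the object and stash it under a key."""
    conn.execute("INSERT OR REPLACE INTO objects VALUES (?, ?)",
                 (key, sqlite3.Binary(pickle.dumps(obj, pickle.HIGHEST_PROTOCOL))))
    conn.commit()

def fetch(key):
    """Load and unpickle the object, or None if the key is missing."""
    row = conn.execute("SELECT blob FROM objects WHERE key = ?", (key,)).fetchone()
    return pickle.loads(row[0]) if row else None

store("run_42", {"temperature": [20.1, 20.4], "steps": 10000})
print(fetch("run_42"))
```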
You are probably looking for something like ZODB. However, though ZODB tries hard to be transparent, no solution is going to be 100% free of artifacts. You have to write your code with an awareness that your objects primarily live in a database, but that there are multiple representations of your objects, there are caching/syncing issues, etc. Nothing is going to make this very difficult problem completely trivial for you.