Persistent database state strategies - Python

Due to several edits, this question might have become a bit incoherent. I apologize.
I'm currently writing a Python server. It will never see more than 4 active users, but I'm a computer science student, so I'm planning for it anyway.
Currently, I'm about to implement a function to save a backup of the current state of all relevant variables into CSV files. Of those I currently have 10, and they will never be really big, but... well, computer science student and so on.
So, I am currently thinking about two things:
When to run a backup?
What kind of backup?
When to run:
I can either run a backup every time a variable changes, which has the advantage of always having the current state in the backup, or run one every minute or so, which has the advantage of not rewriting the files hundreds of times per minute if the server gets busy, but will create a lot of useless rewrites of the same data unless I also detect which variables have changed since the last backup.
Directly related to that is the question of what kind of backup I should do.
I can either do:
A full backup of all variables, which is pointless if I'm backing up on every change, but might be good if I'm backing up every X minutes.
A full backup of a single variable, which would be better if I'm backing up each time a variable changes, but would involve either multiple backup functions or some smart detection of which variable is currently being backed up.
Some sort of delta backup on the files, which would probably involve reading the current file and rewriting it with the changes, so it's probably pretty stupid unless there is a trick for this in Python that I don't know about.
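For illustration, this is the kind of thing I have in mind for the "periodic backup of only the changed variables" option (a rough sketch, all names made up):

```python
import csv
import threading

class BackupManager:
    """Keeps named variables, remembers which ones changed, and rewrites
    only their CSV files when backup() is called (e.g. once a minute)."""

    def __init__(self):
        self.variables = {}   # name -> list of rows
        self.dirty = set()    # names changed since the last backup
        self.lock = threading.Lock()

    def update(self, name, rows):
        with self.lock:
            self.variables[name] = rows
            self.dirty.add(name)

    def backup(self):
        with self.lock:
            changed = {name: list(self.variables[name]) for name in self.dirty}
            self.dirty.clear()
        for name, rows in changed.items():
            with open(name + ".csv", "w", newline="") as f:
                csv.writer(f).writerows(rows)
```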
I cannot use shelves because I want the data to be portable between different programming languages (Java, for example, probably cannot open Python shelves), and I cannot use MySQL for various reasons, mainly that the machine that will run the server has no MySQL support and I don't want to use an external MySQL server, since I want the server to keep running when the internet connection drops.
I am also aware that there are several ways to do this with pre-implemented Python functionality and/or other software (SQLite, for example). I am just a big fan of building this stuff myself, not because I like to reinvent the wheel, but because I like to know how the things I use work. I'm building this server partly just to learn Python, and although knowing how to use SQLite is useful, I also enjoy doing the "dirty work" myself.
In my usage scenario of possibly a few requests per day I am tending towards the "backup on change" idea, but that would quickly fall apart if, for some reason, the server gets really, really busy.
So, my question basically boils down to this: Which backup method would be the most useful in this scenario, and have I possibly missed another backup strategy? How do you decide on which strategy to use in your applications?
Please note that I raise this question mostly out of general curiosity about backup strategies and the thinking behind them, and not because of problems in this specific case.

Use SQLite. You're asking about building persistent storage using CSV files, and about how to update the files as things change. What you're asking for is a lightweight, portable relational (as in, table-based) database. SQLite is perfect for this situation.
Python has had SQLite support in the standard library since version 2.5 with the sqlite3 module. Since an SQLite database is implemented as a single file, it's simple to move between machines, and Java has a number of different ways to interact with SQLite.
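A minimal sqlite3 session looks roughly like this (the table and column names are just for illustration):

```python
import sqlite3

# The whole database is this one file; it can be copied to another machine
# or opened from Java through an SQLite JDBC driver.
conn = sqlite3.connect("server_state.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT PRIMARY KEY, score INTEGER)")
conn.execute("INSERT OR REPLACE INTO users (name, score) VALUES (?, ?)", ("alice", 42))
conn.commit()

for name, score in conn.execute("SELECT name, score FROM users"):
    print(name, score)

conn.close()
```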
I'm all for doing things for the sake of learning, but if you really want to learn about data persistence, don't marry yourself to the idea of a "CSV database". I would start by looking at the Wikipedia page on persistence. What you're thinking about is basically a "system image" for your data. The Wikipedia article describes some of the same shortcomings of this approach that you've mentioned:
State changes made to a system after its last image was saved are lost in the case of a system failure or shutdown. Saving an image for every single change would be too time-consuming for most systems.
Rather than trying to update your state wholesale at every change, I think you'd be better off looking at some other form of persistence. For example, some sort of journal could work well. This makes it simple to just append any change to the end of a log-file, or some similar construct.
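A minimal sketch of such a journal, assuming one JSON object per line (the record format here is made up):

```python
import json
import time

def append_change(path, variable, value):
    """Append one change record to the journal; an append is cheap and
    never rewrites existing data."""
    record = {"ts": time.time(), "var": variable, "value": value}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def replay(path):
    """Rebuild the latest state by replaying the journal from the start."""
    state = {}
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            state[record["var"]] = record["value"]
    return state
```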
However, if you end up with many concurrent users, with processes running on multiple threads, you'll run into concerns about whether your changes are atomic, or whether they conflict with one another. While operating systems generally have some ways of locking files for edits, you're opening up a can of worms trying to learn how that works and interacts with your system. At this point you're back to needing a database.
So sure, play around with a couple different approaches. But as soon as you're looking to just get it working in a clear and consistent manner, go with sqlite.

If your data is in CSV files, why not use a revision control system on those files? E.g. git would be pretty fast and give excellent history. The repository would be wholly contained in the directory where the files reside, so it's pretty easy to handle. You could also replicate that repository to other machines or directories easily.
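For example, a backup routine could shell out to git after rewriting the files (the repository path and commit message below are placeholders):

```python
import subprocess

def commit_backup(repo_dir, message="automatic CSV backup"):
    """Stage the CSV files and commit only if something actually changed;
    git keeps the full history and stores the changes efficiently."""
    subprocess.run(["git", "add", "-A"], cwd=repo_dir, check=True)
    staged = subprocess.run(["git", "diff", "--cached", "--quiet"], cwd=repo_dir)
    if staged.returncode != 0:  # non-zero means there are staged changes
        subprocess.run(["git", "commit", "-m", message], cwd=repo_dir, check=True)

commit_backup("/path/to/csv/repo")  # placeholder path
```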

How to share sensitive data among programs while keeping the possibility of comparing them with other local data?

Context
As part of my studies, I am creating a bot capable of detecting scam messages, in Python 3. One of the problems I am facing is the detection of fraudulent websites.
Currently, I have a list of domain names saved in a CSV file, containing both known domains considered safe (discord.com, google.com, etc.), and known fraudulent domains (free-nitro.ru etc.)
To share this list between my personal computer and my server, I regularly "deploy" it over FTP. But since my bot also uses GitHub and a MySQL database, I'm looking for a better system to synchronize this list of domain names without letting anyone else access it.
I feel like I'm looking for a miracle solution that doesn't exist, but I don't want to overestimate my knowledge so I'm coming to you for advice, thanks in advance!
My considered solutions:
Put the domain names in a MySQL table
Advantages: no public access, live synchronization
Disadvantages: my scam detection script should be able to work offline
Hash the domain names before putting them on git
Advantages: no public access, easy to do, supports equality comparison
Disadvantages: does not support similarity comparison, which is an important part of the program
Hash domain names with locality-sensitive hashing
Advantages: no easy public access, supports equality and similarity comparison
Disadvantages: similarities are less precise than with the plaintext domains, and it is impossible to hash a new string on the server without knowing at least the seed of the random generator, so the public-access problem comes back
My opinion
It seems to me that the last solution, with the LSH, is the one that causes the fewest problems. But it is far from satisfying, and I hope to find something better.
For the LSH algorithm, I have reproduced it here (from this notebook). I get similarity coefficients between 10% and 40% lower than those obtained with the current plaintext method.
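For context, this is the general shape of the approach (a simplified MinHash-style sketch, not my actual notebook code; the 3-gram shingling and the number of hash functions are arbitrary choices):

```python
import hashlib

def shingles(s, n=3):
    """Split a string into overlapping character n-grams."""
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def minhash_signature(s, num_hashes=64, seed=0):
    """For each seeded hash function, keep the minimum hash over all shingles."""
    sig = []
    for i in range(num_hashes):
        sig.append(min(
            int(hashlib.sha1("{}:{}:{}".format(seed, i, sh).encode()).hexdigest(), 16)
            for sh in shingles(s)
        ))
    return sig

def estimated_similarity(sig_a, sig_b):
    """Fraction of matching positions approximates the Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

safe = minhash_signature("discord.com")
scam = minhash_signature("discorcl.com")  # hypothetical look-alike domain
print(estimated_similarity(safe, scam))
```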
EDIT: for clarification, maybe my intentions weren't clear enough (I'm sorry, English is not my native language and I'm bad at explaining things). The database or GitHub are just convenient ways to share information between my different bot instances. I could have one running locally on my PC, one on my VPS, another God knows where, and this is why I don't want FTP or any other synchronisation process involving an IP and/or a fixed destination folder. Ideally I'd like to be able to take my program at any time, download it wherever I want (by git clone) and just run it.
Please tell me if this isn’t clear enough, thanks :)
In the end I think I'll use yet another solution. I'm thinking of storing the domain names in the MySQL database, but only using it in my script for synchronization, keeping a local CSV version.
In short, the workflow I'm imagining:
I edit my SQL table when I want to add/remove items to it
When the bot is launched, the script connects to the DB and retrieves all the information from the table
Once the information is retrieved, it saves it in a CSV file and finishes running the rest of the script
If at launch no internet connection is available, the synchronization to the DB is not done and only the CSV file is used.
This way I get the advantages of no public access, automatic synchronization, and access even offline after the first start, and I keep support for comparison by similarity since no hashing is done.
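Roughly, the synchronization step at launch would look like this (connection details, table and file names are placeholders):

```python
import csv

import mysql.connector  # assumes the mysql-connector-python package

CSV_PATH = "domains.csv"

def sync_domains():
    """Refresh the local CSV from the central database when possible;
    fall back to the existing local copy when offline."""
    try:
        conn = mysql.connector.connect(
            host="db.example.com", user="bot", password="***", database="bot"
        )
        cur = conn.cursor()
        cur.execute("SELECT domain, is_scam FROM domains")
        rows = cur.fetchall()
        conn.close()
        with open(CSV_PATH, "w", newline="") as f:
            csv.writer(f).writerows(rows)
    except mysql.connector.Error:
        pass  # no connection: keep using the last synchronized CSV

    with open(CSV_PATH, newline="") as f:
        return list(csv.reader(f))
```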
If you think you can improve my idea, I'm interested!

Is there a way to combine the power of a DB with the transparency and simplicity of plain text files?

I've just discovered Sir - a database based on text files, but it's far from ready and it's written in JS (i.e. not for me).
My first intuition was to ask if there's something like this available for Python or C++, but since that's not the kind of question one should ask on Stack Overflow, let me put it more generally:
I like the way e.g. git is made: it stores data as easy-to-handle separate files and is astonishingly fast at the same time. Moreover, git does not need a server holding data in memory to be fast (the filesystem cache does a good enough job), and, maybe the best part, the way git keeps data in "memory" (the filesystem) is intrinsically language-agnostic.
Of course git is not a database, and databases have different challenges to master, but I still dare to ask: are there generic approaches to making databases as transparent and manually modifiable as git is?
Are there keywords, examples, generally accepted concepts or working projects (like Sir, but preferably Python or C++ based) I should get to know if I want to enhance my fuzzy, filesystem-polluting project with database-like fast technology, providing a nice query language, without having to sacrifice the simplicity of just manually editing/copying/overwriting files on the filesystem?
SQLite is exactly what you are looking for. It is built into Python as well: sqlite3.
It's just not human-readable, but neither is git's storage. It is, however, purely serverless and based on files, just like git.
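If you ever want a human-readable, diffable view of the data next to the binary file, the sqlite3 module can dump the whole database to plain SQL text:

```python
import sqlite3

conn = sqlite3.connect("data.db")
# iterdump() yields the database as SQL statements; the resulting text file
# can be read, diffed or put under version control like any other file.
with open("data_dump.sql", "w") as f:
    for statement in conn.iterdump():
        f.write(statement + "\n")
conn.close()
```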

Using Python to generate JSON Data

I'm working on a little project, the aim being to generate a report from a database for a server. The database is SQLite and contains tables like 'connections', 'downloads', etc.
The report I produce will ultimately contain a number of graphs displaying things like 'connections per day', 'top downloads this month', etc.
I plan to use flot for the graphs because the graphs it makes look very nice.
This is my current plan for how my reports will work:
A static HTML file which is the report. This will contain headings, embedded flot graphs, etc.
JSON data files. These will be generated by my report-generation Python script; they will basically contain a JSON variable for each graph representing the dataset that the graph should plot ([100,2009-2-2],[192,2009-2-3]...)
A report-generation Python script. This will load the SQLite database, run a list of set SQL queries and spit out the JSON data files.
Does this sound like a sensible setup? I can't help but feel it could be improved, but I don't see how. I want the reports to be static. The server they run on cannot take heavy loads, so a dynamically generated report is out of the question and also unnecessary for this application.
My concerns are:
I feel that the Python script is largely pointless, all of the processing performed is done by SQLite, my script is basically going to be used to store SQL queries and package up the output. With a bit more work SQLite could probably do this for me.
It seems I'm solving a problem that must have been solved many times before 'take sql queries, spit out pretty graphs in a daily report' must have been done hundreds of times. I'm just having trouble tracking down any broad implementations.
It sounds sensible to me.
You need some programming language to talk to SQLite. You could do it in C, but if you can write the necessary glue code easily in Python, why not? You'll almost certainly save more time writing it than you'll lose from not having the most efficient possible program.
There are definitely programs to analyse logs for you - I've heard of Piwik, for instance. That's for dynamic reports, but no doubt there are projects to do static reports too. But they may not fit the data and output you're after. Writing it yourself means you know precisely what you're getting, so if it's not too much work, carry on.
I feel that the Python script is largely pointless, all of the processing performed is done by SQLite, my script is basically going to be used to store SQL queries and package up the output. With a bit more work SQLite could probably do this for me.
Maybe so, but even then, Python is a great glue language. Also, if you need to do some processing that SQLite isn't good at, Python is already there.
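For example, the whole glue script can stay very small; something along these lines (a rough sketch: the column names and output file names are made up):

```python
import json
import sqlite3

conn = sqlite3.connect("server.db")

# One query per graph; each result set becomes a flot-style series of
# [x, y] pairs written out as static JSON.
queries = {
    "connections_per_day": "SELECT day, COUNT(*) FROM connections GROUP BY day",
    "top_downloads": "SELECT name, downloads FROM downloads ORDER BY downloads DESC LIMIT 10",
}

for name, sql in queries.items():
    series = [list(row) for row in conn.execute(sql)]
    with open(name + ".json", "w") as f:
        json.dump(series, f)

conn.close()
```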
It seems I'm solving a problem that must have been solved many times before 'take sql queries, spit out pretty graphs in a daily report' must have been done hundreds of times. I'm just having trouble tracking down any broad implementations.
I think you're leaning towards the general class of HTTP-served reporting. One thing out there that overlaps with your problem is Django, which provides a Python interface between a database (SQLite is supported) and a web server, along with a templating system for your output.
If you just want one or two pieces of a solution, then I recommend looking at SQLAlchemy for interfacing with the database, Jinja2 for templating, and/or Werkzeug for HTTP server interface.

Best Data Mining Database

I am an occasional Python programmer who has so far only worked with MySQL or SQLite databases. I am the computer person for everything in a small company and I have started a new project where I think it is about time to try new databases.
The sales department makes a CSV dump every week and I need to make a small scripting application that allows people from other departments to mix the information, mostly linking the records. I have all this solved; my problem is the speed. I am using plain text files for all of it and, unsurprisingly, it is very slow.
I thought about using MySQL, but then I would need to install MySQL on every desktop; SQLite is easier, but it is very slow. I do not need a full relational database, just some way to work with large amounts of data in a decent time.
Update: I think I was not being very detailed about my database usage and thus explained my problem badly. I am reading all the data, ~900 MB or more, from a CSV into a Python dictionary and then working with it. My problem is storing and, mostly, reading the data quickly.
Many thanks!
Quick Summary
You need enough memory (RAM) to solve your problem efficiently, so I think you should upgrade your memory. When reading the excellent High Scalability blog you will notice that big sites, in order to solve their problems efficiently, store the complete problem set in memory.
You do need a central database solution. I don't think doing this by hand with Python dictionaries alone will get the job done.
How to solve your problem depends on your queries. What I would try first is to put your data in Elasticsearch (see below) and query the database to see how it performs. I think this is the easiest way to tackle your problem, but as you can read below there are a lot of ways to tackle it.
We know:
You use Python as your programming language.
Your dataset is ~900 MB (I think that's pretty large, but absolutely manageable).
You have loaded all the data into a Python dictionary. I assume this is where the problem lies. Python tries to keep the dictionary (and Python dictionaries aren't the most memory-friendly structures) in memory, but you don't have enough memory (how much do you have?). When that happens you rely heavily on virtual memory, and whenever you read from the dictionary you are constantly swapping data from disk into memory. This swapping causes thrashing. I am assuming that your computer does not have enough RAM; if so, I would first add at least 2 GB of extra RAM. Once your problem set fits in memory, solving the problem will be a lot faster. My computer architecture book says, in the chapter on the memory hierarchy, that main memory access time is about 40-80 ns while disk access time is about 5 ms. That is a BIG difference.
Missing information
Do you have a central server? You should use/have a server.
What operating system does your server run: Linux/Unix/Windows/Mac OS X? In my opinion your server should run Linux, Unix or Mac OS X.
How much memory does your server have?
Could you describe your dataset (the CSV) a little better?
What kind of data mining are you doing? Do you need full-text-search capabilities? I am assuming you are not doing any complicated (SQL) queries. Performing that task with only Python dictionaries would be a complicated problem. Could you formalize the queries that you would like to perform? For example:
"get all users who work for department x"
"get all sales from user x"
Database needed
I am the computer person for everything in a small company and I have started a new project where I think it is about time to try new databases.
You are right that you need a database to solve your problem. Doing it yourself with only Python dictionaries is difficult, especially when your problem set can't fit in memory.
MySQL
I thought about using MySQL, but then I would need to install MySQL on every desktop; SQLite is easier, but it is very slow. I do not need a full relational database, just some way to work with large amounts of data in a decent time.
A centralized (client-server) database is exactly what you need to solve your problem. Let all the users access the database on one machine that you manage. You can use MySQL to solve your problem.
Tokyo Tyrant
You could also use Tokyo Tyrant to store all your data. Tokyo Tyrant is pretty fast and the data does not have to fit in RAM. It handles data retrieval more efficiently than Python dictionaries. However, if your problem can fit completely in memory, I think you should have a look at Redis (below).
Redis:
You could, for example, use Redis (quick start in 5 minutes) to store all sales in memory; Redis is extremely fast. Redis is extremely powerful and can run these kinds of queries insanely fast. The only problem with Redis is that the dataset has to fit completely in RAM, but I believe the author is working on that (nightly builds already support it). Also, as I said previously, solving your problem set completely from memory is how big sites handle their load in a timely manner.
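For instance, with the redis-py client a sorted set gives you fast top-N queries over sales (the key and member names here are made up):

```python
import redis  # assumes the redis-py package and a Redis server on localhost

r = redis.Redis(host="localhost", port=6379)

# Accumulate each sale into a per-user total inside a sorted set.
r.zincrby("sales_by_user", 250.0, "user:42")
r.zincrby("sales_by_user", 99.5, "user:7")

# Top 10 users by total sales, highest first, with their totals.
print(r.zrevrange("sales_by_user", 0, 9, withscores=True))
```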
Document stores
This article tries to evaluate KV stores against document stores like CouchDB/Riak/MongoDB. These stores are better at searching (a little slower than KV stores), but aren't good at full-text search.
Full-text-search
If you want to do full-text-search queries you could look at:
Elasticsearch (videos): when I saw the video demonstration of Elasticsearch it looked pretty cool. You could try putting (POSTing simple JSON) your data into Elasticsearch and see how fast it is. I am following Elasticsearch on GitHub and the author is committing a lot of new code to it.
Solr (tutorial): a lot of big companies are using Solr (GitHub, Digg) to power their search. They got a big boost going from MySQL full-text search to Solr.
You probably do need a full relational DBMS, if not right now then very soon. If you start now, while your problems and data are simple and straightforward, then by the time they become complex and difficult you will have plenty of experience with at least one DBMS to help you. You probably don't need MySQL on all desktops; you might install it on a server, for example, and feed data out over your network, but you would have to provide more information about your requirements, toolset and equipment to get better suggestions.
And, while the other DBMSes have their strengths and weaknesses too, there's nothing wrong with MySQL for large and complex databases. I don't know enough about SQLite to comment knowledgeably about it.
EDIT: @Eric, from your comments to my answer and the other answers I hold even more strongly the view that it is time you moved to a database. I'm not surprised that trying to do database operations on a 900 MB Python dictionary is slow. I think you first have to convince yourself, then your management, that you have reached the limits of what your current toolset can cope with, and that future development is threatened unless you rethink matters.
If your network really can't support a server-based database then (a) you really need to make your network robust, reliable and performant enough for such a purpose, but (b) if that is not an option, or not an early option, you should be thinking along the lines of a central database server passing out digests/extracts/reports to other users, rather than simultaneous, full RDBMS working in a client-server configuration.
The problems you are currently experiencing are problems of not having the right tools for the job. They are only going to get worse. I wish I could suggest a magic way in which this is not the case, but I can't and I don't think anyone else will.
Have you done any benchmarking to confirm that it is the text files that are slowing you down? If you haven't, there's a good chance that tweaking some other part of the code will speed things up enough.
It sounds like each department has their own feudal database, and this implies a lot of unnecessary redundancy and inefficiency.
Instead of transferring hundreds of megabytes to everyone across your network, why not keep your data in MySQL and have the departments upload their data to the database, where it can be normalized and accessible by everyone?
As your organization grows, having completely different departmental databases that are unaware of each other, and contain potentially redundant or conflicting data, is going to become very painful.
Does the machine this process runs on have sufficient memory and bandwidth to handle this efficiently? Putting MySQL on a slow machine and recoding the tool to use MySQL rather than text files could potentially be far more costly than simply adding memory or upgrading the machine.
Here is a performance benchmark of different database suites:
Database Speed Comparison
I'm not sure how objective the above comparison is, though, seeing as it's hosted on sqlite.org. SQLite only seems to be a bit slower when dropping tables; otherwise you shouldn't have any problems using it. Both SQLite and MySQL seem to have their own strengths and weaknesses; in some tests one is faster than the other, and in other tests the reverse is true.
If you've been experiencing lower than expected performance, perhaps it is not SQLite that is causing this. Have you done any profiling or otherwise checked that nothing else is causing your program to misbehave?
EDIT: Updated with a link to a slightly more recent speed comparison.
It has been a couple of months since I posted this question and I wanted to let you all know how I solved this problem. I am using Berkeley DB via the bsddb module instead of loading all the data into a Python dictionary. I am not fully happy, but my users are.
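The core of it is just a dict-like object backed by a file; roughly this (it uses Python 2's bsddb module, and the file name and record layout are made up):

```python
import bsddb
import csv

db = bsddb.hashopen("records.db", "c")  # "c" creates the file if needed

# Load the weekly CSV dump once; bsddb keys and values must be strings.
with open("sales_dump.csv", "rb") as f:
    for row in csv.reader(f):
        db[row[0]] = ",".join(row[1:])

if db.has_key("some_record_id"):
    print(db["some_record_id"])  # the lookup goes to the file, not a 900 MB dict
db.close()
```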
My next step is trying to get a shared server with Redis, but unless users start complaining about speed, I doubt I will get it.
Many thanks everybody who helped here, and I hope this question and answers are useful to somebody else.
If you have that problem with a CSV file, maybe you can just pickle the dictionary and generate a pickled "binary" file with the pickle.HIGHEST_PROTOCOL option. It can be faster to read and you get a smaller file. You can load the CSV file once and then generate the pickled file, allowing faster loads on subsequent accesses.
Anyway, with 900 MB of information, you're going to spend some time loading it into memory. Another approach is not to load it all in one step, but to load only the information when it is needed, maybe splitting it into different files by date or some other category (company, type, etc.).
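A minimal sketch of that pickle cache (file names are placeholders):

```python
import csv
import os
import pickle

CSV_PATH = "sales_dump.csv"
CACHE_PATH = "sales_dump.pkl"

def load_data():
    """Load from the pickle cache when it exists and is newer than the CSV;
    otherwise parse the CSV once and refresh the cache."""
    if (os.path.exists(CACHE_PATH)
            and os.path.getmtime(CACHE_PATH) >= os.path.getmtime(CSV_PATH)):
        with open(CACHE_PATH, "rb") as f:
            return pickle.load(f)

    data = {}
    with open(CSV_PATH, newline="") as f:
        for row in csv.reader(f):
            data[row[0]] = row[1:]

    with open(CACHE_PATH, "wb") as f:
        pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)
    return data
```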
Take a look at mongodb.

How close is Python to being able to wrap it in a workbook type skin?

With my luck this question will be closed too quickly. I see a tremendous possibility for a Python application that is basically like a workbook. Imagine, if you will, that instead of writing code you select from a menu of choices. For example, the File menu would have an Open command that lets the user navigate to a file, a directory of files, or a webpage, even a list of web pages, and specify those as the things that will be the base for the next actions.
Then you have a find menu. The menu would allow easy access to the various parsing tools, regular expression and string tools so you can specify the thing you want to find within the files.
Another menu item could allow you to create queries to interact with database objects.
I could go on and on. As languages become higher-level, these types of features become easier to implement. There is a tremendous advantage to developing something like this. How much time is spent reinventing the wheel for mundane tasks? Programmers have functions that they have built to do many mundane tasks, but what about democratizing the power offered by a tool like Python?
I have people in my office all the time asking how to solve problems that seem intractable to them, but when I show them that with a few lines of code their problem is solvable, except for the edge cases, they become amazed. I deflect their gratitude with the observation that it is not really that hard, except for being able to construct the right Google search to identify the right package or library to solve the problem. There is nothing amazing about my ability to use lxml and sets to pull all bolded sections from a collection of, say, 12,000 documents and compare across time and across unique identifiers in the collection how those bolded sections have evolved, changed or converged. The amazing piece is that someone wrote the libraries to do these things.
What is the advantage to the community of something like this? Imagine, if you would, an interface that looks like a workbook but interacts with an app store. If you want to pull something from an HTML file, you go to the app store and buy a plug-in that handles the work. If the workbook is built robustly enough it could be licensed to a machine, and the "apps" would be tied to a particular workbook.
Just imagine the creativity that could be unleashed if users could get over the feeling that access to this power is difficult. You may not see this, but I see Python as being very close to being able to be wrapped in something like a workbook framework. Weren't the early spreadsheet programs little more than a frame around some Fortran libraries that had been ported to C?
Comments? Or is there such an application that I have not found?
There are Python applications that are based on generating code; probably the most amazing one is Resolver One, which focuses on spreadsheets (and hinges on IronPython). With that exception, however, interacting based on the UI paradigm you have in mind (pick one of this, one of that, etc.) tends to be pretty limited in the gamut of choices it offers to let the user generate the exact application they need; there's just so much more you can say by writing even a little script than by point-and-grunt.
That being said, Python would surely be a great choice both to implement such an app and as the language to generate... if and when you have a UI sketch that looks like it can actually allow non-programmers to specify a large enough spectrum of apps in a broad enough domain! Spreadsheets have proven themselves in this sense, but I don't know of other niches or approaches that have actually done so; do you?
Your idea kinda reminded me of something I stumbled across months ago: http://www.ailab.si/orange/
Is your concept very similar to Microsoft Access? Generally, programmers tend not to write such programs because they produce such horrible code that the authors themselves would never want to use them.
