Flat file key-value store in Python

I'm looking for a flat-file, portable key-value store in Python. I'll be using strings for keys and either strings or lists for values. I looked at ZODB, but I'd like something which is more widely used and more actively developed. Do any of the dbm modules in Python require system libraries or a database server (like MySQL or the like), or can I write to a file with any of them?
If a dbm doesn't support Python lists, I imagine I can just serialize them?

You may want to consider h5py, which is a Python interface to HDF5.
In [1]: import h5py
In [2]: f = h5py.File('test.hdf5', 'w')
In [3]: f['abc'] = [1, 2, 3]
In [4]: f['d'] = 'hello'
In [5]: f.close()
In [6]: f2 = h5py.File('test.hdf5', 'r')
In [7]: f2['abc'].value
Out[7]: array([1, 2, 3])
In [8]: list(f2['abc'])
Out[8]: [1, 2, 3]
In [10]: f2['d'].value
Out[10]: 'hello'

SQLite support is included in the standard library (the sqlite3 module), but for the sake of simplicity you can use shelve:
http://docs.python.org/library/shelve.html
edit:
I haven't tested this, but dbm might be a solution for you. It has been a key-value database on UNIX since 1979.
http://docs.python.org/library/anydbm.html#module-anydbm and in case you need serialisation, you can use pickle.
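For example, a minimal shelve sketch covering the string-key/list-value case from the question (the filename 'store' is arbitrary):
import shelve

# shelve pickles values transparently, so lists work out of the box
db = shelve.open('store')
db['fruits'] = ['apple', 'pear']
db['greeting'] = 'hello'
db.close()

db = shelve.open('store')
print(db['fruits'])  # ['apple', 'pear']
db.close()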

Sorry to ask an obvious question, but wouldn't a JSON file serve you just fine?

You can look at the shelve module. It uses pickle under the hood, and allows you to create a key-value lookup that persists between launches.
Additionally, the json module with its dump and load methods would probably work pretty well too.
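A quick sketch of the json route (the filename is illustrative); string keys and list values map directly onto JSON:
import json

data = {'fruits': ['apple', 'pear'], 'greeting': 'hello'}

# dump the dict to a flat file...
with open('store.json', 'w') as f:
    json.dump(data, f)

# ...and load it back
with open('store.json') as f:
    print(json.load(f))  # {'fruits': ['apple', 'pear'], 'greeting': 'hello'}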

Get murmur hash of a file with Python 3

The documentation for the Python library Murmur is a bit sparse.
I have been trying to adapt the code from this answer:
import hashlib
from functools import partial

def md5sum(filename):
    with open(filename, mode='rb') as f:
        d = hashlib.md5()
        for buf in iter(partial(f.read, 128), b''):
            d.update(buf)
    return d.hexdigest()

print(md5sum('utils.py'))
From what I read in the answer, md5 can't operate on the whole file at once, so it needs this loop. I'm not sure exactly what happens on the line d.update(buf), though.
The public methods in hashlib.md5() are:
'block_size',
'copy',
'digest',
'digest_size',
'hexdigest',
'name',
'update'
whereas mmh3 has
'hash',
'hash64',
'hash_bytes'
No update or hexdigest methods.
Does anyone know how to achieve a similar result?
The motivation is testing for uniqueness as fast as possible; the results here suggest murmur is a good candidate.
Update -
Following the comment from @Bakuriu, I had a look at mmh3, which seems to be better documented.
The public methods inside it are:
import mmh3
print([x for x in dir(mmh3) if x[0]!='_'])
>>> ['hash', 'hash128', 'hash64', 'hash_bytes', 'hash_from_buffer']
...so no "update" method. I had a look at the source code for mmh3.hash_from_buffer, but it does not look like it contains a loop, and since it is not in Python I can't really follow it. Here is a link to the line
So for now I will just use CRC-32, which is supposed to be almost as good for this purpose, and it is well documented how to do it. If anyone posts a solution, I will test it out.
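For reference, a chunked CRC-32 sketch in the same style as the md5sum example above; zlib.crc32 accepts a running value, so no update method is needed:
import zlib
from functools import partial

def crc32sum(filename):
    crc = 0
    with open(filename, mode='rb') as f:
        for buf in iter(partial(f.read, 65536), b''):
            crc = zlib.crc32(buf, crc)  # feed the running value back in
    return '%08x' % (crc & 0xFFFFFFFF)

print(crc32sum('utils.py'))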
To hash a file using murmur, one has to load it completely into memory and hash it in one go.
import mmh3

with open('main.py', 'rb') as file:  # read bytes so the raw contents are hashed
    data = file.read()

h = mmh3.hash_bytes(data, 0xBEFFE)  # returns the digest as bytes
print(h.hex())
If your file is too large to fit into memory, you could use incremental/progressive hashing: add your data in multiple chunks and hash them on the fly (like your example above).
Is there a Python library for progressive hashing with murmur?
I tried to find one, but it seems there is none.
Is progressive hashing even possible with murmur?
There is a working implementation in C:
https://github.com/rurban/smhasher/blob/master/PMurHash.h
https://github.com/rurban/smhasher/blob/master/PMurHash.c

How would someone benchmark io.open() in Python?

Let's say I want to see if reading a file using os.open() is actually faster than io.open(). I know that one returns a file descriptor and the other returns an object, but the contents of a file can be read both ways (note that I'm interested in reading binary data: the 'b' mode or os.O_BINARY). Any thoughts?
https://docs.python.org/2/library/timeit.html is the usual way of doing this kind of microbenchmark.
from timeit import timeit

timeit("f=os.open('testfile', os.O_RDONLY);os.read(f, 999);os.close(f)",
       setup="import os")
That should get you started. You may also want to try turning the garbage collector on and off to see if there's a difference. But that's all documented in the timeit docs.
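A rough sketch of the side-by-side comparison the question asks for, reading binary data both ways (it assumes a local file named 'testfile' exists; the 999-byte read size is taken from the snippet above):
from timeit import timeit

# file-descriptor based
t_os = timeit("f = os.open('testfile', os.O_RDONLY); os.read(f, 999); os.close(f)",
              setup="import os", number=10000)
# file-object based, binary mode
t_io = timeit("f = io.open('testfile', 'rb'); f.read(999); f.close()",
              setup="import io", number=10000)

print('os.open:', t_os)
print('io.open:', t_io)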

How to save Python in-memory dictionary to a file as a Python source code?

How to get Python source code representation of in-memory Python dictionary?
I decided to ask this question after reading Thomas Kluyver's comment on Rob Galanakis' blog post titled Why bother with python and config files? In his comment Thomas states
But if you want any way to change settings inside the application
(like a preferences dialog), there’s no good way to automatically
write a correct Python file.
Assuming it uses only "basic" Python types, you can write out the repr() of the structure and then use ast.literal_eval() to read it back in afterwards.
As the article says, you're better off using JSON/YAML or other formats, but if you seriously wanted to use a Python dict and are only using basic Python types...
Writing out (using pformat to make it more human-readable):
from pprint import pformat  # instead of repr()
d = dict(enumerate('abcdefghijklmnopqrstuvwxyz'))
open('somefile.py', 'w').write(pformat(d))
Reading back:
from ast import literal_eval
d = literal_eval(open('somefile.py').read())

Is converting JSON to a dict with eval a good choice?

I am getting a JSON object from a remote server, and converting it to a Python object like this:
a = eval(response)
Is this stupid in any way, or do I have a better option?
Using eval is not a good way to process JSON:
JSON isn't even valid Python, because of true, false, and null.
eval will execute arbitrary Python code, so you are at the mercy of malicious injection of code.
Use the json module available in the standard library instead:
import json
data = json.loads("[1, 2, 3]")
If you're using a version of Python older than 2.6, you'll need to download the module yourself. It's called simplejson and can be downloaded from PyPI.
Yes, very. Use a json decoder instead:
>>> from simplejson import loads
>>> loads(response)

Loading a large dictionary using python pickle

I have a full inverted index in the form of a nested Python dictionary. Its structure is:
{word : { doc_name : [location_list] } }
For example, let the dictionary be called index; then for a word "spam", the entry would look like:
{ 'spam' : { 'doc1.txt' : [102, 300, 399], 'doc5.txt' : [200, 587] } }
I used this structure as Python dicts are pretty optimised and it makes programming easier.
For any word 'spam', the documents containing it can be given by:
index['spam'].keys()
and the posting list for a document doc1 by:
index['spam']['doc1']
At present I am using cPickle to store and load this dictionary. But the pickled file is around 380 MB and takes a long time to load: 112 seconds (approx.; I timed it using time.time()), and memory usage goes to 1.2 GB (Gnome System Monitor). Once it loads, it's fine. I have 4 GB RAM.
len(index.keys()) gives 229758
Code
import cPickle as pickle
f = open('full_index','rb')
print 'Loading index... please wait...'
index = pickle.load(f) # This takes ages
print 'Index loaded. You may now proceed to search'
How can I make it load faster? I only need to load it once, when the application starts. After that, the access time is important to respond to queries.
Should I switch to a database like SQLite and create an index on its keys? If yes, how do I store the values to have an equivalent schema, which makes retrieval easy? Is there anything else that I should look into?
Addendum
Using Tim's answer, pickle.dump(index, file, -1), the pickled file is considerably smaller: around 237 MB (it took 300 seconds to dump), and it now takes half the time to load (61 seconds, as opposed to 112 seconds earlier; timed with time.time()).
But should I migrate to a database for scalability ?
For now, I am marking Tim's answer as accepted.
PS: I don't want to use Lucene or Xapian...
This question refers to Storing an inverted index. I had to ask a new question because I wasn't able to delete the previous one.
Try the protocol argument when using cPickle.dump/cPickle.dumps. From cPickle.Pickler.__doc__:
Pickler(file, protocol=0) -- Create a pickler.
This takes a file-like object for writing a pickle data stream.
The optional proto argument tells the pickler to use the given
protocol; supported protocols are 0, 1, 2. The default
protocol is 0, to be backwards compatible. (Protocol 0 is the
only protocol that can be written to a file opened in text
mode and read back successfully. When using a protocol higher
than 0, make sure the file is opened in binary mode, both when
pickling and unpickling.)
Protocol 1 is more efficient than protocol 0; protocol 2 is
more efficient than protocol 1.
Specifying a negative protocol version selects the highest
protocol version supported. The higher the protocol used, the
more recent the version of Python needed to read the pickle
produced.
The file parameter must have a write() method that accepts a single
string argument. It can thus be an open file object, a StringIO
object, or any other custom object that meets this interface.
Converting to JSON or YAML will probably take longer than pickling most of the time; pickle stores native Python types.
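For example, a minimal sketch of dumping with the highest protocol; binary mode is required for any protocol above 0, and index stands for the dictionary from the question:
import cPickle as pickle

with open('full_index', 'wb') as f:  # 'wb': protocols > 0 are binary
    pickle.dump(index, f, -1)        # -1 selects the highest protocol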
Do you really need it to load all at once? If you don't need all of it in memory, but only the select parts you want at any given time, you may want to map your dictionary to a set of files on disk instead of a single file… or map the dict to a database table. So, if you are looking for something that saves large dictionaries of data to disk or to a database, and can utilize pickling and encoding (codecs and hashmaps), then you might want to look at klepto.
klepto provides a dictionary abstraction for writing to a database, including treating your filesystem as a database (i.e. writing the entire dictionary to a single file, or writing each entry to its own file). For large data, I often choose to represent the dictionary as a directory on my filesystem, and have each entry be a file. klepto also offers caching algorithms, so if you are using a filesystem backend for the dictionary you can avoid some speed penalty by utilizing memory caching.
>>> from klepto.archives import dir_archive
>>> d = {'a':1, 'b':2, 'c':map, 'd':None}
>>> # map a dict to a filesystem directory
>>> demo = dir_archive('demo', d, serialized=True)
>>> demo['a']
1
>>> demo['c']
<built-in function map>
>>> demo
dir_archive('demo', {'a': 1, 'c': <built-in function map>, 'b': 2, 'd': None}, cached=True)
>>> # is set to cache to memory, so use 'dump' to dump to the filesystem
>>> demo.dump()
>>> del demo
>>>
>>> demo = dir_archive('demo', {}, serialized=True)
>>> demo
dir_archive('demo', {}, cached=True)
>>> # demo is empty, load from disk
>>> demo.load()
>>> demo
dir_archive('demo', {'a': 1, 'c': <built-in function map>, 'b': 2, 'd': None}, cached=True)
>>> demo['c']
<built-in function map>
>>>
klepto also has other flags such as compression and memmode that can be used to customize how your data is stored (e.g. compression level, memory map mode, etc).
It's equally easy (the exact same interface) to use a database (MySQL, etc.) as a backend instead of your filesystem. You can also turn off memory caching, so every read/write goes directly to the archive, simply by setting cached=False.
klepto provides access to customizing your encoding, by building a custom keymap.
>>> from klepto.keymaps import *
>>>
>>> s = stringmap(encoding='hex_codec')
>>> x = [1,2,'3',min]
>>> s(x)
'285b312c20322c202733272c203c6275696c742d696e2066756e6374696f6e206d696e3e5d2c29'
>>> p = picklemap(serializer='dill')
>>> p(x)
'\x80\x02]q\x00(K\x01K\x02U\x013q\x01c__builtin__\nmin\nq\x02e\x85q\x03.'
>>> sp = s+p
>>> sp(x)
'\x80\x02UT28285b312c20322c202733272c203c6275696c742d696e2066756e6374696f6e206d696e3e5d2c292c29q\x00.'
klepto also provides a lot of caching algorithms (like mru, lru, lfu, etc.) to help you manage your in-memory cache, and will use the algorithm to do the dump and load to the archive backend for you.
You can use the flag cached=False to turn off memory caching completely and read and write directly to and from disk or a database. If your entries are large enough, you might choose to write to disk and put each entry in its own file. Here's an example that does both.
>>> from klepto.archives import dir_archive
>>> # does not hold entries in memory, each entry will be stored on disk
>>> demo = dir_archive('demo', {}, serialized=True, cached=False)
>>> demo['a'] = 10
>>> demo['b'] = 20
>>> demo['c'] = min
>>> demo['d'] = [1,2,3]
However, while this should greatly reduce load time, it might slow overall execution down a bit… it's usually better to specify the maximum amount to hold in memory cache and pick a good caching algorithm. You have to play with it to get the right balance for your needs.
Get klepto here: https://github.com/uqfoundation
A common pattern in Python 2.x is to have one version of a module implemented in pure Python, with an optional accelerated version implemented as a C extension; for example, pickle and cPickle. This places the burden of importing the accelerated version and falling back on the pure Python version on each user of these modules. In Python 3.0, the accelerated versions are considered implementation details of the pure Python versions. Users should always import the standard version, which attempts to import the accelerated version and falls back to the pure Python version. The pickle / cPickle pair received this treatment.
Protocol version 0 is the original “human-readable” protocol and is backwards compatible with earlier versions of Python.
Protocol version 1 is an old binary format which is also compatible with earlier versions of Python.
Protocol version 2 was introduced in Python 2.3. It provides much more efficient pickling of new-style classes. Refer to PEP 307 for information about improvements brought by protocol 2.
Protocol version 3 was added in Python 3.0. It has explicit support for bytes objects and cannot be unpickled by Python 2.x. This is the default protocol, and the recommended protocol when compatibility with other Python 3 versions is required.
Protocol version 4 was added in Python 3.4. It adds support for very large objects, pickling more kinds of objects, and some data format optimizations. Refer to PEP 3154 for information about improvements brought by protocol 4.
If your dictionary is huge and should only be compatible with Python 3.4 or higher, use:
pickle.dump(obj, file, protocol=4)
pickle.load(file, encoding="bytes")
or:
Pickler(file, 4).dump(obj)
Unpickler(file).load()
That said, in 2010 the json module was 25 times faster at encoding and 15 times faster at decoding simple types than pickle. My 2014 benchmark says marshal > pickle > json, but marshal's coupled to specific Python versions.
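If you want to reproduce such a comparison, a rough micro-benchmark sketch might look like this (the test dictionary is made up, and the numbers will vary by Python version and machine):
from timeit import timeit

setup = ("import json, pickle, marshal; "
         "d = {str(i): list(range(10)) for i in range(1000)}")

print('json   :', timeit("json.dumps(d)", setup=setup, number=100))
print('pickle :', timeit("pickle.dumps(d, -1)", setup=setup, number=100))
print('marshal:', timeit("marshal.dumps(d)", setup=setup, number=100))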
Have you tried using an alternative storage format such as YAML or JSON? Python has supported JSON natively since Python 2.6 via the json module, I think, and there are third-party modules for YAML.
You may also try the shelve module.
Depending on how long 'long' is, you have to think about the trade-offs: either have all the data ready in memory after a (long) startup, or load only partial data (then you need to split the data into multiple files, or use SQLite or something like it). I doubt that loading all the data upfront from e.g. SQLite into a dictionary will bring any improvement.