Compressing measurements data files - python

From measurements I get text files that basically contain a table of float numbers with dimensions 1000x1000. Each one takes up about 15MB of space which, considering that I get about 1000 result files in a series, is unacceptable to save. So I am trying to compress them as much as possible without loss of data. My idea is to group the numbers into ~1000 steps over the range I expect and save those. That would provide sufficient resolution. However, I still have 1,000,000 points to consider, and thus my resulting file is still about 4MB. I probably won't be able to compress that any further?
The bigger problem is the calculation time this takes. Right now I'd guesstimate 10-12 seconds per file, so about 3 hours for the 1000 files. Way too much. This is the algorithm I thought up; do you have any suggestions? There are probably far more efficient algorithms for this, but I am not much of a programmer...
import numpy
data=numpy.genfromtxt('sample.txt',autostrip=True, case_sensitive=True)
out=numpy.empty((1000,1000),numpy.int16)
i=0
min=-0.5
max=0.5
step=(max-min)/1000
while i<=999:
    j=0
    while j<=999:
        k=(data[i,j]//step)
        out[i,j]=k
        if data[i,j]>max:
            out[i,j]=500
        if data[i,j]<min:
            out[i,j]=-500
        j=j+1
    i=i+1
numpy.savetxt('converted.txt', out, fmt="%i")
Thanks in advance for any hints you can provide!
Jakob

I see you store the numpy arrays as text files. There is a faster and more space-efficient way: just dump it.
If your floats can be stored as 32-bit floats, then use this:
data = numpy.genfromtxt('sample.txt',autostrip=True, case_sensitive=True)
data.astype(numpy.float32).dump(open('converted.numpy', 'wb'))
then you can read it with
data = numpy.load(open('converted.numpy', 'rb'))
The files will be 1000x1000x4 Bytes, about 4MB.
The latest version of numpy supports 16-bit floats. Maybe your floats will fit within their limited range.
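For instance, a sketch of the float16 route using numpy.save/numpy.load rather than dump (the file name is just a placeholder); float16 keeps roughly three significant digits, which is in line with the ~1000-step resolution mentioned in the question:
import numpy

data = numpy.genfromtxt('sample.txt', autostrip=True)

# float16 is 2 bytes per value, so a 1000x1000 table becomes ~2MB on disk
numpy.save('converted.npy', data.astype(numpy.float16))

# .npy files load back without any parsing step
restored = numpy.load('converted.npy')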

numpy.savez_compressed will let you save lots of arrays into a single compressed, binary file.
However, you aren't going to be able to compress it that much -- if you have 15GB of data, you're not magically going to fit it in 200MB by compression algorithms. You have to throw out some of your data, and only you can decide how much you need to keep.
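A sketch of what that could look like for this series, assuming the arrays have already been reduced to int16 with the question's 1000-step binning (file names and archive keys are placeholders):
import numpy as np

# pack several quantized int16 result arrays into one compressed archive
arrays = {}
for n in range(10):                            # in practice: the full series of ~1000 files
    data = np.genfromtxt('sample%04d.txt' % n, autostrip=True)
    step = 1.0 / 1000                          # the 1000-step binning from the question
    arrays['run%04d' % n] = np.clip(data // step, -500, 500).astype(np.int16)

np.savez_compressed('series.npz', **arrays)

# arrays come back under the same keys
archive = np.load('series.npz')
first = archive['run0000']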

Use the zipfile, bz2 or gzip module to save to a zip, bz2 or gz file from python. Any compression scheme you write yourself in a reasonable amount of time will almost certainly be slower and have a worse compression ratio than these generic but optimized and compiled solutions. Also consider taking eumiro's advice.
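For instance, a minimal sketch with the gzip module (the output name is just a placeholder; numpy will also gzip automatically if the filename passed to savetxt ends in .gz):
import gzip
import numpy as np

data = np.genfromtxt('sample.txt', autostrip=True)

# write the text table through gzip instead of a plain file
with gzip.open('converted.txt.gz', 'wt') as f:
    np.savetxt(f, data, fmt='%.4f')

# genfromtxt/loadtxt decompress .gz files transparently on read
restored = np.genfromtxt('converted.txt.gz')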

Related

Indexing very large hex file with python

I'm trying to write a program that parses data from a (very) large file that contains evenly formatted rows of 8 sets of 16-bit hex values. For instance, one row would look like this:
edfc b600 edfc 2102 81fb 0000 d1fe 0eff
The data files are expected to be anywhere between 1-4 TB, so I wasn't sure what the best approach would be. If I load this file using Python's open() function, could this turn out badly? I'm worried about how much of an impact this will have on my memory if I'm loading such a large file just to index through. Alternatively, if there's a method I can use to load just the section of data I want from the file, that would be ideal, but as far as I know, I don't think that's even possible. Is this correct?
Anyway, some sort of idea as to how to approach this very general problem would be much appreciated!
Found an answer from Github. In numpy, there's a function called memmap that works for what I'm doing.
import numpy as np
samples = np.memmap("hexdump_samples", mode="r", dtype=np.int16)[100:159]
This didn't seem to cause any issues with the smaller data set I was using, and as far as I understand it shouldn't cause memory problems with the larger files either, since memmap only maps the file into virtual memory instead of reading it all into RAM.
It depends on your computer hardware and how much RAM you have. Python is an interpreted language with a bunch of safeguards, but I wouldn't risk trying to open that file with Python. I would recommend using C or C++; they are good with large amounts of data and memory management. You can then parse the data in bite-sized chunks, maybe 16MB per chunk. Python is extremely slow and memory-inefficient compared to C.

Space efficient file type to store double precision floats

I am currently running simulations written in C and later analyzing the results using Python scripts.
At the moment the C program writes the results (lots of double values) to a text file, which is slowly but surely eating a lot of disk space.
Is there a file format that is more space-efficient for storing lots of numeric values?
Ideally, but not necessarily, it should fulfill the following requirements:
Values can be appended continuously such that not all values have to be in memory at once.
The file is more or less easily readable using Python.
I feel like this should be a really common question, but looking for an answer I only found descriptions of various data types within C.
Use a binary file, but please be careful with the format of the data that you are saving. If possible, reduce the width of each variable you are using. For example, do you need to save a double or a float, or could you get away with just a 16- or 32-bit integer?
Furthermore, yes, you may apply some compression scheme to compress the data before saving and decompress it after reading, but that requires much more work and is probably overkill for what you are doing.
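On the Python side, reading such a binary file back is straightforward; a sketch, assuming the C program writes raw doubles with fwrite and the file path is a placeholder:
import numpy as np

# assuming the C side appends raw doubles with something like
#   fwrite(values, sizeof(double), n, fp);
# the whole file reads back as a flat float64 array (same machine byte order)
values = np.fromfile('results.bin', dtype=np.float64)

# or map it lazily so not everything has to sit in memory at once
values = np.memmap('results.bin', dtype=np.float64, mode='r')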

How to rapidly load data into memory with python?

I have a large csv file (5 GB) and I can read it with pandas.read_csv(). This operation takes a lot of time, 10-20 minutes.
How can I speed it up?
Would it be useful to transform the data into an sqlite format? If so, what should I do?
EDIT: More information:
The data contains 1852 columns and 350000 rows. Most of the columns are float64 and contain numbers. Some others contain strings or dates (which I suppose are treated as strings).
I am using a laptop with 16 GB of RAM and an SSD drive. The data should fit fine in memory (but I know that python tends to increase the data size).
EDIT 2 :
During the loading I receive this message
/usr/local/lib/python3.4/dist-packages/pandas/io/parsers.py:1164: DtypeWarning: Columns (1841,1842,1844) have mixed types. Specify dtype option on import or set low_memory=False.
data = self._reader.read(nrows)
EDIT: SOLUTION
Read the csv file once and save it with
data.to_hdf('data.h5', 'table')
This format is incredibly efficient
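A minimal round trip of that solution might look like the sketch below (the csv path is a placeholder; to_hdf needs the PyTables package installed):
import pandas as pd

# one-off conversion: parse the csv once (low_memory=False avoids the
# mixed-type warning from the edit above), then keep the HDF5 copy
data = pd.read_csv('data.csv', low_memory=False)
data.to_hdf('data.h5', 'table')

# later analysis runs load the binary file instead of re-parsing the csv
data = pd.read_hdf('data.h5', 'table')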
This actually depends on which part of reading it is taking 10 minutes.
If it's actually reading from disk, then obviously any more compact form of the data will be better.
If it's processing the CSV format (you can tell this because your CPU is at near 100% on one core while reading; it'll be very low on the other cores), then you want a form that's already preprocessed.
If it's swapping memory, e.g., because you only have 2GB of physical RAM, then nothing is going to help except splitting the data.
It's important to know which one you have. For example, stream-compressing the data (e.g., with gzip) will make the first problem a lot better, but the second one even worse.
It sounds like you probably have the second problem, which is good to know. (However, there are things you can do that will probably be better no matter what the problem.)
Your idea of storing it in a sqlite database is nice because it can at least potentially solve all three at once; you only read the data in from disk as-needed, and it's stored in a reasonably compact and easy-to-process form. But it's not the best possible solution for the first two, just a "pretty good" one.
In particular, if you actually do need to do array-wide work across all 350000 rows, and can't translate that work into SQL queries, you're not going to get much benefit out of sqlite. Ultimately, you're going to be doing a giant SELECT to pull in all the data and then process it all into one big frame.
Another option is writing out the shape and structure information, then writing the underlying arrays in NumPy binary form; for reading, you reverse that. NumPy's binary form just stores the raw data as compactly as possible, and it's a format that can be written blindingly quickly (it's basically just dumping the raw in-memory storage to disk). That will improve both the first and second problems.
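A hedged sketch of that route, keeping the column names in a small JSON file and the numeric block in a .npy file (paths are placeholders, and only the numeric columns are dumped this way):
import json
import numpy as np
import pandas as pd

df = pd.read_csv('data.csv', low_memory=False)
num = df.select_dtypes(include=[np.number])   # binary dump only makes sense for the numeric block

# structure goes into a small JSON file, the raw array into .npy
with open('columns.json', 'w') as f:
    json.dump(list(num.columns), f)
np.save('values.npy', num.values)

# reading reverses the two steps
with open('columns.json') as f:
    cols = json.load(f)
rebuilt = pd.DataFrame(np.load('values.npy'), columns=cols)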
Similarly, storing the data in HDF5 (either using Pandas IO or an external library like PyTables or h5py) will improve both the first and second problems. HDF5 is designed to be a reasonably compact and simple format for storing the same kind of data you usually store in Pandas. (And it includes optional compression as a built-in feature, so if you know which of the two you have, you can tune it.) It won't solve the second problem quite as well as the last option, but probably well enough, and it's much simpler (once you get past setting up your HDF5 libraries).
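As a sketch of those built-in compression knobs, extending the to_hdf call from the edit above (the settings shown are just one plausible choice):
import pandas as pd

df = pd.read_csv('data.csv', low_memory=False)   # placeholder path

# HDF5's optional compression is exposed through to_hdf; complevel (0-9)
# and complib ('zlib', 'blosc', ...) trade write speed for file size
df.to_hdf('data.h5', 'table', complib='blosc', complevel=5)

df = pd.read_hdf('data.h5', 'table')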
Finally, pickling the data may sometimes be faster. pickle is Python's native serialization format, and it's hookable by third-party modules—and NumPy and Pandas have both hooked it to do a reasonably good job of pickling their data.
(Although this doesn't apply to the question, it may help someone searching later: If you're using Python 2.x, make sure to explicitly use pickle format 2; IIRC, NumPy is very bad at the default pickle format 0. In Python 3.0+, this isn't relevant, because the default format is at least 3.)
Python has two built-in libraries called pickle and cPickle that can store any Python data structure.
cPickle is identical to pickle except that cPickle has trouble with Unicode stuff and is 1000x faster.
Both are really convenient for saving stuff that's going to be re-loaded into Python in some form, since you don't have to worry about some kind of error popping up in your file I/O.
Having worked with a number of XML files, I've found some performance gains from loading pickles instead of raw XML. I'm not entirely sure how the performance compares with CSVs, but it's worth a shot, especially if you don't have to worry about Unicode stuff and can use cPickle. It's also simple, so if it's not a good enough boost, you can move on to other methods with minimal time lost.
A simple example of usage:
>>> import pickle
>>> stuff = ["Here's", "a", "list", "of", "tokens"]
>>> fstream = open("test.pkl", "wb")
>>> pickle.dump(stuff,fstream)
>>> fstream.close()
>>>
>>> fstream2 = open("test.pkl", "rb")
>>> old_stuff = pickle.load(fstream2)
>>> fstream2.close()
>>> old_stuff
["Here's", 'a', 'list', 'of', 'tokens']
>>>
Notice the "b" in the file stream openers. This is important--it preserves cross-platform compatibility of the pickles. I've failed to do this before and had it come back to haunt me.
For your stuff, I recommend writing a first script that parses the CSV and saves it as a pickle; when you do your analysis, the script associated with that loads the pickle like in the second block of code up there.
I've tried this with XML; I'm curious how much of a boost you will get with CSVs.
If the problem is in the processing overhead, then you can divide the file into smaller pieces and handle them on different CPU cores (in separate processes). Also, for some algorithms the python time increases non-linearly with the data size, and the dividing method will help in those cases.
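One way to sketch that, using pandas' own chunked reader and a process pool instead of physically splitting the file (path, chunk size and worker count are placeholders):
import pandas as pd
from multiprocessing import Pool

def process(chunk):
    # placeholder for whatever per-chunk computation is needed
    return chunk.mean(numeric_only=True)

if __name__ == '__main__':
    # read_csv can hand the file back in pieces instead of one big frame,
    # and a process pool spreads the per-chunk work over several cores
    chunks = pd.read_csv('data.csv', chunksize=50000, low_memory=False)
    with Pool(processes=4) as pool:
        partial_results = pool.map(process, chunks)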

Creating very large NUMPY arrays in small chunks (PyTables vs. numpy.memmap)

There are a bunch of questions on SO that appear to be the same, but they don't really answer my question fully. I think this is a pretty common use-case for computational scientists, so I'm creating a new question.
QUESTION:
I read in several small numpy arrays from files (~10 MB each) and do some processing on them. I want to create a larger array (~1 TB) where each dimension in the array contains the data from one of these smaller files. Any method that tries to create the whole larger array (or a substantial part of it) in RAM is not suitable, since it floods the RAM and brings the machine to a halt. So I need to be able to initialize the larger array and fill it in small batches, so that each batch gets written to the larger array on disk.
I initially thought that numpy.memmap is the way to go, but when I issue a command like
mmapData = np.memmap(mmapFile,mode='w+', shape=(large_no1,large_no2))
the RAM floods and the machine slows to a halt.
After poking around a bit it seems like PyTables might be well suited for this sort of thing, but I'm not really sure. Also, it was hard to find a simple example in the doc or elsewhere which illustrates this common use-case.
If anyone knows how this can be done using PyTables, or if there's a more efficient/faster way to do this, please let me know! Any references to examples are appreciated!
That's weird. np.memmap should work; I've been using it with 250GB of data on a 12GB RAM machine without problems.
Does the system really run out of memory at the very moment the memmap file is created? Or does it happen later in the code? If it happens at file creation, I really don't know what the problem would be.
When I started using memmap I made some mistakes that led to running out of memory. For me, something like the code below should work:
import numpy as np

mmapData = np.memmap(mmapFile, mode='w+', shape=(smallarray_size, number_of_arrays), dtype='float64')
for k in range(number_of_arrays):
    smallarray = np.fromfile(list_of_files[k])  # list_of_files is the list with the file names
    smallarray = do_something_with_array(smallarray)
    mmapData[:, k] = smallarray
It may not be the most efficient way, but it seems to me that it would have the lowest memory usage.
PS: Be aware that the default dtypes for memmap (uint8) and fromfile (float64) are different!
HDF5 is a C library that can efficiently store large on-disk arrays. Both PyTables and h5py are Python libraries on top of HDF5. If you're using tabular data then PyTables might be preferred; if you have just plain arrays then h5py is probably more stable/simpler.
There are out-of-core numpy array solutions that handle the chunking for you. Dask.array would give you plain numpy semantics on top of your collection of chunked files (see docs on stacking.)
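For the original use case, a minimal h5py sketch could look like this (paths and sizes are placeholders): create the big dataset on disk once, then fill it row by row so only one small array is ever held in RAM.
import glob
import h5py
import numpy as np

list_of_files = sorted(glob.glob('smallarrays/*.dat'))   # placeholder paths
smallarray_size = 1250000                                # ~10 MB of float64 per file

with h5py.File('big.h5', 'w') as f:
    dset = f.create_dataset('data',
                            shape=(len(list_of_files), smallarray_size),
                            dtype='float64', chunks=True)
    for k, fname in enumerate(list_of_files):
        smallarray = np.fromfile(fname)   # process the small array here as needed
        dset[k, :] = smallarray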

Statistics / distributions over very large data sets

Looking at the discussions about simple statistics from a file of data, I wonder which of these techniques would scale best over very large data sets (~millions of entries, Gbytes of data).
Are numpy solutions that read the entire data set into memory appropriate here? See:
Binning frequency distribution in Python
You are not telling us which kind of data you have and what you want to calculate!
If you have something that is or is easily converted into positive integers of moderate size (e.g., 0..1e8), you may use bincount. Here is an example of how to make a distribution (histogram) of the byte values of all bytes in a very large file (works up to whatever your file system can manage):
import numpy as np
# number of bytes to read at a time
CHUNKSIZE = 100000000
# open the file
f = open("myfile.dat", "rb")
# cumulative distribution array
cum = np.zeros(256)
# read through the file chunk by chunk
while True:
    chunkdata = np.fromstring(f.read(CHUNKSIZE), dtype='uint8')
    # minlength=256 keeps the bincount result aligned with cum
    cum += np.bincount(chunkdata, minlength=256)
    if len(chunkdata) < CHUNKSIZE:
        break
This is very fast, the speed is really limited by the disk access. (I got approximately 1 GB/s with a file in the OS cache.)
Of course, you may want to calculate some other statistics (standard deviation, etc.), but even then you can usually use the distributions (histograms) to calculate those statistics. However, if you do not need the distribution, there may be even faster methods: calculating the average, for example, only requires adding all values together and dividing by their count.
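As an illustration, the mean and standard deviation can be read straight off the cum array built in the example above:
import numpy as np

# cum is the 256-bin count array from the loop above; the moments follow
# from the counts without touching the raw data again
values = np.arange(256)
total = cum.sum()
mean = (values * cum).sum() / total
std = np.sqrt(((values - mean) ** 2 * cum).sum() / total)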
If you have a text file, then the major challenge is in parsing the file chunk-by-chunk. The standard numpy.loadtxt and the csv module are not necessarily very efficient with very large files.
If you have floating point numbers or very large integers, the method above does not work directly, but in some cases you may just use some bits of the FP numbers or round things to closest integer, etc. In any case the question really boils down to what kind of data you really have, and what statistics you want to calculate. There is no Swiss knife which would solve all statistics problems with huge files.
Reading the data into memory is a very good option if you have enough memory. In certain cases you can do it without having enough memory (use numpy.memmap). If you have a text file with 1 GB of floating point numbers, the end result may fit into less than 1 GB, and most computers can handle that very well. Just make sure you are using a 64-bit Python.
