According to this FAQ on zlib.net, it is possible to:
access data randomly in a compressed stream
I know about the Bio.bgzf module in Biopython 1.60, which:
supports reading and writing BGZF files (Blocked GNU Zip Format), a variant of GZIP with efficient random access, most commonly used as part of the BAM file format and in tabix. This uses Python’s zlib library internally, and provides a simple interface like Python’s gzip library.
But for my use case I don't want to use that format. Basically I want something that emulates the code below:
import gzip

large_integer_new_line_start = 10**9
with gzip.open('large_file.gz', 'rt') as f:
    f.seek(large_integer_new_line_start)
but with the efficiency of the random access into the compressed stream that native zlib offers. How do I leverage that random-access capability in Python?
I gave up on doing random access on a gzipped file using Python. Instead I converted my gzipped file to a block gzipped file with a block compression/decompression utility on the command line:
zcat large_file.gz | bgzip > large_file.bgz
Then I used Biopython's tell() to get the virtual_offset of line number 1 million of the bgzipped file, and afterwards I was able to seek to that virtual_offset rapidly:
from Bio import bgzf

file = 'large_file.bgz'

handle = bgzf.BgzfReader(file)
for i in range(10**6):
    handle.readline()
virtual_offset = handle.tell()
line1 = handle.readline()
handle.close()

handle = bgzf.BgzfReader(file)
handle.seek(virtual_offset)
line2 = handle.readline()
handle.close()

assert line1 == line2
I would also like to point to the SO answer by Mark Adler here about examples/zran.c in the zlib distribution.
You are looking for dictzip.py, part of the serpento package. However, you have to compress the files with dictzip, a randomly seekable, backward-compatible variant of the gzip format.
The indexed_gzip package might be what you want. It also uses zran.c under the hood.
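For illustration, a rough sketch of how it might be used (the IndexedGzipFile name follows the package's documentation; treat the details as an assumption rather than a tested recipe):

import indexed_gzip as igzip

# Assumes `pip install indexed-gzip`. A seek index is built while reading,
# after which seeks into the uncompressed data become fast.
f = igzip.IndexedGzipFile('large_file.gz')
f.seek(10**9)        # offset into the *uncompressed* data
data = f.read(1024)  # read from that point
f.close()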
If you just want to access the file from a random point, can't you just do:

from random import randint

with open(filename) as f:
    f.seek(0, 2)               # seek to the end to find the file size
    size = f.tell()
    f.seek(randint(0, size))   # then seek to a random absolute offset
I am learning machine learning and data analysis on wav files.
I know that if I have wav files directly, I can do something like this to read in the data:
import librosa
mono, fs = librosa.load('./small_data/time_series_audio.wav', sr = 44100)
Now I'm given a gz-file "music_feature_extraction_test.tar.gz"
I'm not sure what to do now.
I tried:
import gzip

with gzip.open('music_train.tar.gz', 'rb') as f:
    for files in f:
        mono, fs = librosa.load(files, sr=44100)
but it gives me:
TypeError: lstat() argument 1 must be encoded string without null bytes, not str
Can anyone help me out?
There are several things going on:
The file you are given is a gzip-compressed tarball. Take a look at the tarfile module; it can read gzip-compressed archives directly. You'll get an iterator over its members, each of which is an individual file.
As far as I can see, librosa can't read from an in-memory buffer, so you have to unpack the tar members to temporary files. The tempfile module is your friend here: a NamedTemporaryFile gives you a self-deleting file that you can extract to and hand to librosa.
You probably want to implement this as a simple generator function that takes the tarfile name as its input, iterates over its members, and yields what librosa.load() gives you. That way everything gets cleaned up automatically.
The basic loop would therefore be (a sketch follows the list):
Open the tarball using the tarfile module. For each member:
Get a new temporary file using NamedTemporaryFile. Copy the content of the tarball member to that file. You may want to use shutil.copyfileobj to avoid reading the entire wav file into memory before writing it to disk.
The NamedTemporaryFile has a name attribute. Pass that to librosa.load().
yield the return value of librosa.load() to the caller.
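Putting those steps together, a rough, untested sketch might look like this (the helper name, the .wav member filter, and the .wav suffix on the temporary file are my own choices; adjust to your archive's layout):

import shutil
import tarfile
import tempfile

import librosa

def load_wav_members(tar_name, sr=44100):
    # Yield (member_name, mono, fs) for every .wav member of a gzipped tarball.
    with tarfile.open(tar_name, 'r:gz') as tar:
        for member in tar:
            if not member.isfile() or not member.name.endswith('.wav'):
                continue
            source = tar.extractfile(member)
            # Unpack the member to a self-deleting temporary file, copying in
            # chunks so the whole wav never has to sit in memory.
            with tempfile.NamedTemporaryFile(suffix='.wav') as tmp:
                shutil.copyfileobj(source, tmp)
                tmp.flush()
                mono, fs = librosa.load(tmp.name, sr=sr)
            yield member.name, mono, fs

# Usage:
# for name, mono, fs in load_wav_members('music_train.tar.gz'):
#     ...  # feed mono/fs into your feature extraction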
You can use PySoundFile to read from the compressed file.
https://pysoundfile.readthedocs.io/en/0.9.0/#virtual-io
import gzip
import soundfile

with gzip.open('music_train.tar.gz', 'rb') as gz_f:
    for file in gz_f:
        mono, fs = soundfile.read(file, samplerate=44100)
Maybe you should also check if you need to resample the data before processing it with librosa:
https://librosa.github.io/librosa/ioformats.html#read-specific-formats
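For instance, a resampling call might look roughly like this (keyword names follow recent librosa versions; the placeholder signal only stands in for the audio you actually loaded):

import numpy as np
import librosa

fs = 44100
mono = np.zeros(fs)  # placeholder; in practice this comes from soundfile.read / librosa.load
mono_22k = librosa.resample(mono, orig_sr=fs, target_sr=22050)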
So a quick way to write a BytesIO object to a file would be to just use:
with open('myfile.ext', 'wb') as f:
    f.write(myBytesIOObj.getvalue())
myBytesIOObj.close()
However, if I wanted to iterate over myBytesIOObj rather than writing it in one chunk, how would I go about it? I'm on Python 2.7.1. Also, if the BytesIO is huge, would writing it by iteration be more efficient?
Thanks
shutil has a utility that will write the file efficiently. It copies in chunks, defaulting to 16K. Any multiple of 4K chunks should be a good cross platform number. I chose 131072 rather arbitrarily because really the file is written to the OS cache in RAM before going to disk and the chunk size isn't that big of a deal.
import shutil

myBytesIOObj.seek(0)
with open('myfile.ext', 'wb') as f:
    shutil.copyfileobj(myBytesIOObj, f, length=131072)
BTW, there was no need to close the file object at the end. The with statement establishes a context for the file object, so the file handle is closed automatically on exit from the with block.
Since Python 3.2 it's possible to use the BytesIO.getbuffer() method as follows:
from io import BytesIO

buf = BytesIO(b'test')
with open('path/to/file', 'wb') as f:
    f.write(buf.getbuffer())
This way it doesn't copy the buffer's content; the data is written straight to the open file.
Note: StringIO buffers don't provide a getbuffer() method (as of Python 3.9).
Before streaming the BytesIO buffer to file, you might want to set its position to the beginning:
buf.seek(0)
I would like to read compressed files directly from Google Cloud Storage and open them with the Python csv package.
The code for a local file would be:
def reader(self):
    print "reading local compressed file: ", self._filename
    self._localfile = gzip.open(self._filename, 'rb')
    csvReader = csv.reader(self._localfile, delimiter=',', quotechar='"')
    return csvReader
I have played with several GCS APIs (JSON-based, cloud.storage), but none of them seem to give me something that I can stream through gzip. What is more, even if the file were uncompressed, I could not open it and give it to csv.reader (iterator type).
My compressed CSV files are about 500MB, while uncompressed they use up to a few GB. I don't think it would be a good idea to: 1 - locally download the files before opening them (unless I can overlap download and computation) or 2 - Open it entirely in memory before computing.
Finally, I currently run this code on my local machine, but ultimately I will move to AppEngine, so it must work there too.
Thanks!!
Using GCS, cloudstorage.open(filename, 'r') will give you a read-only file-like object (created earlier in the same way but with 'w':-), which you can read a chunk at a time and feed to the standard Python library's zlib module, specifically a zlib.decompressobj, if, of course, the GCS object was originally created in the complementary way (with a zlib.compressobj).
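A hedged sketch of that route (cloudstorage names per the GAE client library; the chunk size and the wbits value that accepts either a zlib or gzip wrapper are my own choices):

import zlib

import cloudstorage  # GAE GCS client library, assumed available

def uncompressed_chunks(objname, chunk_size=64 * 1024):
    # Yield uncompressed chunks of a compressed GCS object, a piece at a time.
    decomp = zlib.decompressobj(32 + 15)  # 32+15: auto-detect zlib or gzip header
    with cloudstorage.open(objname, 'r') as gcs_file:
        while True:
            chunk = gcs_file.read(chunk_size)
            if not chunk:
                break
            data = decomp.decompress(chunk)
            if data:
                yield data
    tail = decomp.flush()
    if tail:
        yield tail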
Alternatively, for convenience, you can use the standard Python library's gzip module, e.g. for the reading phase something like:
import csv
import gzip

compressed_flo = cloudstorage.open('objname', 'r')
uncompressed_flo = gzip.GzipFile(fileobj=compressed_flo, mode='rb')
csvReader = csv.reader(uncompressed_flo)
and vice versa for the earlier writing phase, of course.
Note that when you run locally (with the dev_appserver), the GCS client library uses local disk files to simulate GCS -- in my experience that's good for development purposes, and I can use gsutil or other tools when I need to interact with "real" GCS storage from my local workstation... GCS is for when I need such interaction from my GAE app (and for developing said GAE app locally in the first place:-).
So, you have gzipped files stored on GCS. You can process the data stored on GCS in a stream-like fashion. That is, you can download, unzip, and process simultaneously. This avoids:
having the unzipped file on disk;
having to wait until the download is complete before being able to process the data.
gzip files have a small header and footer, and the body is a compressed stream that can be decompressed incrementally, a chunk at a time. Python's zlib module helps you with that!
Edit: This is example code for how to decompress and analyze a zlib or gzip stream chunk-wise, purely based on zlib:
import zlib
from collections import Counter


def stream(filename):
    with open(filename, "rb") as f:
        while True:
            chunk = f.read(1024)
            if not chunk:
                break
            yield chunk


def decompress(stream):
    # Generate decompression object. Auto-detect and ignore
    # gzip wrapper, if present.
    z = zlib.decompressobj(32 + 15)
    for chunk in stream:
        r = z.decompress(chunk)
        if r:
            yield r


c = Counter()
s = stream("data.gz")
for chunk in decompress(s):
    for byte in chunk:
        c[byte] += 1

print(c)
I tested this code with an example file data.gz, created with GNU gzip.
Quotes from http://www.zlib.net/manual.html:
windowBits can also be greater than 15 for optional gzip decoding. Add 32 to windowBits to enable zlib and gzip decoding with automatic header detection, or add 16 to decode only the gzip format (the zlib format will return a Z_DATA_ERROR). If a gzip stream is being decoded, strm->adler is a crc32 instead of an adler32.
and
Any information contained in the gzip header is not retained [...]
Is it possible to append to a gzipped text file on the fly using Python ?
Basically I am doing this:
import gzip
content = "Lots of content here"
f = gzip.open('file.txt.gz', 'a', 9)
f.write(content)
f.close()
A line is appended (note "appended") to the file every 6 seconds or so, but the resulting file is just as big as a standard uncompressed file (roughly 1MB when done).
Explicitly specifying the compression level does not seem to make a difference either.
If I gzip an existing uncompressed file afterwards, its size comes down to roughly 80 kB.
I'm guessing it's not possible to "append" to a gzip file on the fly and have it compress?
Is this a case of writing to a StringIO buffer and then flushing to a gzip file when done?
That works in the sense of creating and maintaining a valid gzip file, since the gzip format permits concatenated gzip streams.
However, it doesn't work in the sense that you get lousy compression, since you are giving each instance of gzip compression so little data to work with. Compression depends on taking advantage of the history of previous data, but here gzip has been given essentially none.
You could either a) accumulate at least a few K of data, many of your lines, before invoking gzip to add another gzip stream to the file, or b) do something much more sophisticated that appends to a single gzip stream, leaving a valid gzip stream each time and permitting efficient compression of the data.
You find an example of b) in C, in gzlog.h and gzlog.c. I do not believe that Python has all of the interfaces to zlib needed to implement gzlog directly in Python, but you could interface to the C code from Python.
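A rough sketch of approach a), buffering lines and appending them as one new gzip member once a few kilobytes have accumulated (Python 3 text-append mode on gzip.open; the class name and the 16 KB threshold are arbitrary choices for illustration):

import gzip

class BufferedGzipAppender(object):
    # Accumulate text and append it to the .gz file as a new gzip member
    # once enough has built up, so each member compresses reasonably well.

    def __init__(self, path, threshold=16 * 1024):
        self.path = path
        self.threshold = threshold
        self.buffer = []
        self.buffered = 0

    def write(self, line):
        self.buffer.append(line)
        self.buffered += len(line)
        if self.buffered >= self.threshold:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        # Each flush appends one more gzip member; gunzip/zcat read them all.
        with gzip.open(self.path, 'at') as f:
            f.write(''.join(self.buffer))
        self.buffer = []
        self.buffered = 0

# appender = BufferedGzipAppender('file.txt.gz')
# appender.write("a line of content\n")  # called every few seconds
# appender.flush()                        # once at the end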
I'm querying a database and archiving the results using Python, and I'm trying to compress the data as I write it to the log files. I'm having some problems with it, though.
My code looks like this:
import codecs

log_file = codecs.open(archive_file, 'w', 'bz2')
for id, f1, f2, f3 in cursor:
    log_file.write('%s %s %s %s\n' % (id, f1 or 'NULL', f2 or 'NULL', f3))
However, my output file has a size of 1,409,780. Running bunzip2 on the file results in a file with a size of 943,634, and running bzip2 on that results in a size of 217,275. In other words, the uncompressed file is significantly smaller than the file compressed using Python's bzip codec. Is there a way to fix this, other than running bzip2 on the command line?
I tried Python's gzip codec (changing the line to codecs.open(archive_file, 'a+', 'zip')) to see if it fixed the problem. I still get large files, but I also get a gzip: archive_file: not in gzip format error when I try to uncompress the file. What's going on there?
EDIT: I originally had the file opened in append mode, not write mode. While this may or may not be a problem, the question still holds if the file's opened in 'w' mode.
As other posters have noted, the issue is that the codecs library doesn't use an incremental encoder to encode the data; instead it encodes every snippet of data fed to the write method as a compressed block. This is horribly inefficient, and just a terrible design decision for a library designed to work with streams.
The ironic thing is that there's a perfectly reasonable incremental bz2 encoder already built into Python. It's not difficult to create a "file-like" class which does the correct thing automatically.
import bz2

class BZ2StreamEncoder(object):
    def __init__(self, filename, mode):
        self.log_file = open(filename, mode)
        self.encoder = bz2.BZ2Compressor()

    def write(self, data):
        self.log_file.write(self.encoder.compress(data))

    def flush(self):
        self.log_file.write(self.encoder.flush())
        self.log_file.flush()

    def close(self):
        self.flush()
        self.log_file.close()

log_file = BZ2StreamEncoder(archive_file, 'ab')
A caveat: In this example, I've opened the file in append mode; appending multiple compressed streams to a single file works perfectly well with bunzip2, but Python itself can't handle it (although there is a patch for it). If you need to read the compressed files you create back into Python, stick to a single stream per file.
The problem seems to be that output is being written on every write(). This causes each line to be compressed in its own bzip block.
I would try building a much larger string (or list of strings, if you are worried about performance) in memory before writing it out to the file. A good size to shoot for would be 900K (or more), as that is the block size that bzip2 uses.
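A minimal sketch of that idea, sticking with the question's codecs-based stream (Python 2 style, as in the question; the function name and the 900K threshold are illustrative):

import codecs

BLOCK_TARGET = 900 * 1024  # bzip2 compresses in blocks of up to 900K

def archive_rows(cursor, archive_file):
    log_file = codecs.open(archive_file, 'w', 'bz2')
    pending, pending_size = [], 0
    for id, f1, f2, f3 in cursor:
        line = '%s %s %s %s\n' % (id, f1 or 'NULL', f2 or 'NULL', f3)
        pending.append(line)
        pending_size += len(line)
        if pending_size >= BLOCK_TARGET:
            log_file.write(''.join(pending))  # one write == one compressed chunk
            pending, pending_size = [], 0
    if pending:
        log_file.write(''.join(pending))
    log_file.close()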
The problem is due to your use of append mode, which results in files that contain multiple compressed blocks of data. Look at this example:
>>> import codecs
>>> with codecs.open("myfile.zip", "a+", "zip") as f:
...     f.write("ABCD")
On my system, this produces a file 12 bytes in size. Let's see what it contains:
>>> with codecs.open("myfile.zip", "r", "zip") as f:
>>> f.read()
'ABCD'
Okay, now let's do another write in append mode:
>>> with codecs.open("myfile.zip", "a+", "zip") as f:
>>> f.write("EFGH")
The file is now 24 bytes in size, and its contents are:
>>> with codecs.open("myfile.zip", "r", "zip") as f:
>>> f.read()
'ABCD'
What's happening here is that the decompressor expects a single compressed stream. You'll have to check the specs to see what the official behavior is with multiple concatenated streams, but in my experience it processes the first one and ignores the rest of the data. That's what Python does.
I expect that bunzip2 is doing the same thing. So in reality your file is compressed, and is much smaller than the data it contains. But when you run it through bunzip2, you're getting back only the first set of records you wrote to it; the rest is discarded.
I'm not sure how different this is from the codecs way of doing it, but if you use GzipFile from the gzip module you can incrementally append to the file. However, it's not going to compress very well unless you are writing large amounts of data at a time (maybe > 1 KB); this is just the nature of the compression algorithms. If the data you are writing isn't super important (i.e. you can deal with losing it if your process dies), then you could write a buffered GzipFile class wrapping the imported class that writes out larger chunks of data.
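A rough sketch of such a buffered wrapper (the class name and the 1 KB threshold are illustrative; anything still sitting in the buffer is lost if the process dies before close()):

import gzip

class BufferedGzipWriter(object):
    # Wrap gzip.GzipFile so many small writes are batched into fewer,
    # larger compressed writes. Expects bytes.

    def __init__(self, path, threshold=1024):
        self.gz = gzip.GzipFile(path, 'ab')
        self.threshold = threshold
        self.buffer = bytearray()

    def write(self, data):
        self.buffer.extend(data)
        if len(self.buffer) >= self.threshold:
            self.gz.write(bytes(self.buffer))
            del self.buffer[:]

    def close(self):
        if self.buffer:
            self.gz.write(bytes(self.buffer))
            del self.buffer[:]
        self.gz.close()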