Fastest way to change head of a gzip file? - python

I maintain a benchmark library with gz-compressed files that contain descriptive metadata in the first few lines. By hand, I can decompress a 246MB gz-compressed file (using gunzip), change it, and compress it back (using gzip) in under 2 minutes in the Linux terminal. On the same file, the following script takes almost 5 minutes to complete under Python 2.7.5, and had run for over 12 minutes under Python 3.4.1 before I killed it.
import os, gzip, shutil

def renew_file(file, tempfile):
    f = gzip.open(file, 'r')
    try:
        # Read and modify first line(s)
        buf = f.readline()
        buf = 'changing first line\n'
        # Write change to temporary file
        f2 = gzip.open(tempfile, 'w')
        try:
            f2.write(buf)
            shutil.copyfileobj(f, f2)
        finally:
            f2.close()
    finally:
        f.close()
    # Overwrite file
    os.rename(tempfile, file)
Any suggestions on how to achieve higher performance?

Gzip on the command line defaults to a compression level of 6. Python, however, defaults to a compression level of 9, which is slower but produces smaller files. You can either pass compresslevel=6 to gzip.open() if you want larger files sooner, or pass -9 to gzip if you want smaller files later.
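For instance, a minimal sketch of the asker's renew_file with the level lowered to 6 (written for Python 3, where GzipFile deals in bytes; the larger copy buffer is an optional tweak, not something the original code used):

import gzip
import os
import shutil

def renew_file(path, temppath):
    with gzip.open(path, 'rb') as src:
        src.readline()  # discard the original first line
        # Level 6 matches the command-line gzip default.
        with gzip.open(temppath, 'wb', compresslevel=6) as dst:
            dst.write(b'changing first line\n')  # bytes, not str, in Python 3
            shutil.copyfileobj(src, dst, 1024 * 1024)  # 1 MiB copy buffer
    os.rename(temppath, path)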

Related

get size of compressed file while compressing

I am currently trying to create a module that writes a *.gz file up to a specific size. I want to use it for a custom log handler, to cap the maximum size of a zipped logfile. I have already made my way through the gzip documentation and the zlib documentation.
I could use zlib directly and measure the length of my compressed bytearray, but then I would have to create and write the gzip file header myself. The zlib documentation itself says: "For reading and writing .gz files, see the gzip module."
But I do not see any option for getting the size of the compressed file in the gzip module.
The logfile opened via logfile = gzip.open("test.gz", "ab", compresslevel=6) does have a .size attribute, but this is the size of the uncompressed data, not the compressed file.
Also, os.path.getsize("test.gz") is zero until logfile is closed and the data is actually written to disk.
Do you have any idea how I can use the built-in gzip module to close a compressed file once it reaches a certain size, without closing and re-opening it all the time?
Or is this even possible?
Thanks for any help on this!
Update:
It is not true that no data is written to disk until the file is closed; it just takes some time to collect a few kilobytes before the file size changes. This is good enough for me and my use case, so this is solved. Thanks for any input!
My test code for this:
import os
import gzip
import time

data = 'Hello world'
limit = 10000
i = 0
logfile = gzip.open("test.gz", "wb", compresslevel=6)
while i < limit:
    msg = f"{data} {str(i)} \n"
    logfile.write(msg.encode("utf-8"))
    print(os.path.getsize("test.gz"))
    print(logfile.size)
    if i > 1000:
        logfile.flush()
        break
    #time.sleep(0.03)
    i += 1
logfile.close()
print(f"final size of *.gz file: {os.path.getsize('test.gz')}")
print(f"final size of logfile object file: {logfile.size}")
gzip compresses incrementally but buffers its output, so the exact compressed size is not knowable as you write; it does not really make sense to ask for the size of the compressed file beforehand. One thing you could do is look at the sizes of compressed files you obtain on real data from your use case and do a linear regression to get some kind of approximation of the compression ratio.
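If you do want a running count of compressed bytes, one workaround (a sketch, not the gzip module's own API) is to let zlib emit gzip-formatted output directly via wbits = 16 + 15 and count the bytes yourself. write_gz_until_limit is my name for it, and the frequent sync flushes cost some compression ratio:

import zlib

def write_gz_until_limit(path, lines, limit):
    # lines: an iterable of bytes; limit: target compressed size in bytes.
    comp = zlib.compressobj(6, zlib.DEFLATED, 16 + 15)  # gzip header/trailer
    written = 0
    with open(path, 'wb') as out:
        for line in lines:
            chunk = comp.compress(line)
            # Z_SYNC_FLUSH pushes buffered data out so the count stays current.
            chunk += comp.flush(zlib.Z_SYNC_FLUSH)
            out.write(chunk)
            written += len(chunk)
            if written >= limit:
                break
        out.write(comp.flush())  # finish the gzip stream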

Python streaming compression

I have a program constantly writing to a file, which is compressed on the fly:
import lzma
with lzma.open('file.lz', 'wb') as f:
    for ...:
        # do something
        f.write(item)
So, the file is append-only. At the same time, I need to be able to run another program which will read from this file - not streaming/following, just one-shot reading of the current content. Basically, this works:
import lzma
with lzma.open('file.lz', 'rb') as f:
    content = f.read()
But writes in the first program don't reach the file immediately; they are buffered for some time (I see buffers of 8K to 60K). When small writes happen infrequently, the file content can lag far behind the current state, and I'd like to flush it or do something similar (every n records or every n minutes). However, f.flush() doesn't seem to do anything. What's the best solution here? Maybe I overlooked something obvious?
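One workaround, sketched under the assumption that occasionally restarting the compressed stream is acceptable: the .xz container permits concatenated streams, and Python's lzma module transparently reads them back as one logical stream, so the writer can finish the current stream every n records and append a new one. write_items and rotate_every are names I made up for the sketch:

import lzma

def write_items(items, path, rotate_every=1000):
    batch = []
    for item in items:  # item: bytes
        batch.append(item)
        if len(batch) >= rotate_every:
            with lzma.open(path, 'ab') as f:  # appends a complete new stream
                f.writelines(batch)
            batch = []
    if batch:
        with lzma.open(path, 'ab') as f:
            f.writelines(batch)

After each with-block completes, a one-shot reader opening the file sees everything written so far.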

Python Gzip - Appending to file on the fly

Is it possible to append to a gzipped text file on the fly using Python ?
Basically I am doing this:-
import gzip
content = "Lots of content here"
f = gzip.open('file.txt.gz', 'a', 9)
f.write(content)
f.close()
A line is appended (note "appended") to the file every 6 seconds or so, but the resulting file is just as big as a standard uncompressed file (roughly 1MB when done).
Explicitly specifying the compression level does not seem to make a difference either.
If I gzip an existing uncompressed file afterwards, its size comes down to roughly 80KB.
I'm guessing it's not possible to "append" to a gzip file on the fly and have it compress?
Is this a case of writing to a StringIO buffer and then flushing to a gzip file when done?
That works in the sense of creating and maintaining a valid gzip file, since the gzip format permits concatenated gzip streams.
However, it doesn't work in the sense that you get lousy compression, since you are giving each instance of gzip compression so little data to work with. Compression depends on taking advantage of the history of previous data, but here gzip has been given essentially none.
You could either a) accumulate at least a few K of data, many of your lines, before invoking gzip to add another gzip stream to the file, or b) do something much more sophisticated that appends to a single gzip stream, leaving a valid gzip stream each time and permitting efficient compression of the data.
You can find an example of b) in C, in gzlog.h and gzlog.c. I do not believe that Python has all of the interfaces to zlib needed to implement gzlog directly in Python, but you could interface to the C code from Python.
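A rough sketch of option a) in Python, assuming a delay of a few kilobytes is acceptable: each flush appends a complete new gzip member, which gunzip and Python 3's gzip module both read back as one stream. BufferedGzipAppender is my name for it, not a library class:

import gzip

class BufferedGzipAppender(object):
    def __init__(self, path, threshold=8192):
        self.path = path
        self.threshold = threshold  # flush once this many bytes accumulate
        self.buf = []
        self.size = 0

    def write(self, line):  # line: bytes
        self.buf.append(line)
        self.size += len(line)
        if self.size >= self.threshold:
            self.flush()

    def flush(self):
        if self.buf:
            with gzip.open(self.path, 'ab') as f:  # append a new gzip member
                f.write(b''.join(self.buf))
            self.buf = []
            self.size = 0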

Python securely remove file

How can I securely remove a file using Python? The function os.remove(path) only removes the directory entry, but I want to securely remove the file, similar to the Apple feature called "Secure Empty Trash" that randomly overwrites the file.
What function securely removes a file using this method?
You can use srm to securely remove files, for example by calling it via Python's os.system() function.
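Assuming the srm binary is installed and on PATH, a one-line sketch (subprocess works just as well as os.system here):

import subprocess

subprocess.check_call(['srm', '/path/to/file'])  # assumes srm is installed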
You can very easily write a function in Python to overwrite a file with random data, even repeatedly, then delete it. Something like this:
import os

def secure_delete(path, passes=1):
    with open(path, "ba+") as delfile:
        length = delfile.tell()
    with open(path, "br+") as delfile:
        for i in range(passes):
            delfile.seek(0)
            delfile.write(os.urandom(length))
    os.remove(path)
Shelling out to srm is likely to be faster, however.
You can use srm, sure, but you can also easily implement it in Python. Refer to Wikipedia for the data patterns to overwrite the file content with. Note that depending on the actual storage technology, the appropriate data patterns may be quite different. Furthermore, if your file is located on a log-structured file system, or even on a file system with copy-on-write optimisation like btrfs, your goal may be unachievable from user space.
After you are done mashing up the disk area that was used to store the file, remove the file with os.remove().
If you also want to erase any trace of the file name, you can try to allocate and reallocate a whole bunch of randomly named files in the same directory, though depending on the directory inode structure (linear, btree, hash, etc.) it may be very tough to guarantee you actually overwrote the old file name.
So, at least in Python 3, using kindall's solution I only got it to append: the entire contents of the file were still intact, and every pass just added to the overall size of the file. It ended up being [Original Contents][Random Data of that Size][Random Data of that Size][Random Data of that Size], which is obviously not the desired effect.
This trickery worked for me, though. I open the file in append mode to find the length, then reopen it in r+ so that I can seek to the beginning (in append mode, what seems to have caused the undesired effect is that it was not actually possible to seek to 0).
So check this out:
import os

def secure_delete(path, passes=3):
    with open(path, "ba+", buffering=0) as delfile:
        length = delfile.tell()
    with open(path, "br+", buffering=0) as delfile:
        #print("Length of file:%s" % length)
        for i in range(passes):
            delfile.seek(0, 0)
            delfile.write(os.urandom(length))
            #wait = input("Pass %s Complete" % i)
        #wait = input("All %s Passes Complete" % passes)
        delfile.seek(0)
        for x in range(length):
            delfile.write(b'\x00')
        #wait = input("Final Zero Pass Complete")
    os.remove(path)  # Note: the TRUE shred also renames the file to all zeros
                     # (filename length considered) to thwart metadata filename
                     # collection; I didn't really care to implement that here.
Un-comment the prompts to check the file after each pass. This looked good when I tested it, with the caveat that the filename is not shredded the way the real shred -zu does it.
The answers implementing a manual solution did not work for me. My solution is as follows; it seems to work okay.
import os

def secure_delete(path, passes=1):
    length = os.path.getsize(path)
    with open(path, "br+", buffering=-1) as f:
        for i in range(passes):
            f.seek(0)
            f.write(os.urandom(length))

How do the compression codecs work in Python?

I'm querying a database and archiving the results using Python, and I'm trying to compress the data as I write it to the log files. I'm having some problems with it, though.
My code looks like this:
import codecs

log_file = codecs.open(archive_file, 'w', 'bz2')
for id, f1, f2, f3 in cursor:
    log_file.write('%s %s %s %s\n' % (id, f1 or 'NULL', f2 or 'NULL', f3))
However, my output file has a size of 1,409,780. Running bunzip2 on the file results in a file with a size of 943,634, and running bzip2 on that results in a size of 217,275. In other words, the uncompressed file is significantly smaller than the file compressed using Python's bzip codec. Is there a way to fix this, other than running bzip2 on the command line?
I tried Python's gzip codec (changing the line to codecs.open(archive_file, 'a+', 'zip')) to see if it fixed the problem. I still get large files, but I also get a gzip: archive_file: not in gzip format error when I try to uncompress the file. What's going on there?
EDIT: I originally had the file opened in append mode, not write mode. While this may or may not be a problem, the question still holds if the file's opened in 'w' mode.
As other posters have noted, the issue is that the codecs library doesn't use an incremental encoder to encode the data; instead it encodes every snippet of data fed to the write method as a compressed block. This is horribly inefficient, and just a terrible design decision for a library designed to work with streams.
The ironic thing is that there's a perfectly reasonable incremental bz2 encoder already built into Python. It's not difficult to create a "file-like" class which does the correct thing automatically.
import bz2

class BZ2StreamEncoder(object):
    def __init__(self, filename, mode):
        self.log_file = open(filename, mode)
        self.encoder = bz2.BZ2Compressor()

    def write(self, data):
        self.log_file.write(self.encoder.compress(data))

    def flush(self):
        # Note: BZ2Compressor.flush() finalizes the stream, so only call
        # this when you are done writing.
        self.log_file.write(self.encoder.flush())
        self.log_file.flush()

    def close(self):
        self.flush()
        self.log_file.close()

log_file = BZ2StreamEncoder(archive_file, 'ab')
A caveat: In this example, I've opened the file in append mode; appending multiple compressed streams to a single file works perfectly well with bunzip2, but Python itself can't handle it (although there is a patch for it). If you need to read the compressed files you create back into Python, stick to a single stream per file.
The problem seems to be that output is being written on every write(). This causes each line to be compressed in its own bzip block.
I would try building a much larger string (or list of strings, if you are worried about performance) in memory before writing it out to the file. A good size to shoot for would be 900K (or more), as that is the block size that bzip2 uses.
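A sketch of that buffering idea against the question's Python 2-era codecs file, with archive_file and cursor as in the question: since this codec compresses each write() independently, batching roughly a block's worth of rows per write lets bzip2 fill whole blocks.

import codecs

BLOCK = 900 * 1024  # roughly one bzip2 block of input
log_file = codecs.open(archive_file, 'w', 'bz2')  # archive_file from the question
buf, buf_len = [], 0
for id, f1, f2, f3 in cursor:  # cursor from the question
    line = '%s %s %s %s\n' % (id, f1 or 'NULL', f2 or 'NULL', f3)
    buf.append(line)
    buf_len += len(line)
    if buf_len >= BLOCK:
        log_file.write(''.join(buf))  # one write -> one well-filled block
        buf, buf_len = [], 0
if buf:
    log_file.write(''.join(buf))
log_file.close()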
The problem is due to your use of append mode, which results in files that contain multiple compressed blocks of data. Look at this example:
>>> import codecs
>>> with codecs.open("myfile.zip", "a+", "zip") as f:
...     f.write("ABCD")
On my system, this produces a file 12 bytes in size. Let's see what it contains:
>>> with codecs.open("myfile.zip", "r", "zip") as f:
...     f.read()
'ABCD'
Okay, now let's do another write in append mode:
>>> with codecs.open("myfile.zip", "a+", "zip") as f:
...     f.write("EFGH")
The file is now 24 bytes in size, and its contents are:
>>> with codecs.open("myfile.zip", "r", "zip") as f:
...     f.read()
'ABCD'
What's happening here is that unzip expects a single zipped stream. You'll have to check the specs to see what the official behavior is with multiple concatenated streams, but in my experience they process the first one and ignore the rest of the data. That's what Python does.
I expect that bunzip2 is doing the same thing. So in reality your file is compressed, and is much smaller than the data it contains. But when you run it through bunzip2, you're getting back only the first set of records you wrote to it; the rest is discarded.
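If you do end up with such a multi-stream file, a hedged sketch for reading everything back from Python is to loop a fresh BZ2Decompressor over each stream's leftover bytes (unused_data) instead of stopping after the first stream:

import bz2

def read_all_streams(path):
    parts = []
    with open(path, 'rb') as f:
        data = f.read()
    while data:
        dec = bz2.BZ2Decompressor()
        parts.append(dec.decompress(data))
        data = dec.unused_data  # bytes following the end of this stream
    return b''.join(parts)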
I'm not sure how different this is from the codecs way of doing it, but if you use GzipFile from the gzip module you can incrementally append to the file. It's not going to compress very well unless you write large amounts of data at a time (maybe > 1 KB); that's just the nature of the compression algorithms. If the data you are writing isn't super important (i.e., you can deal with losing it if your process dies), you could write a buffered GzipFile class wrapping the imported class that writes out larger chunks of data.
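A hedged sketch of that buffered-wrapper idea: keep a single GzipFile (one deflate stream), batch writes in memory, and push them through in larger chunks. Anything still in the buffer is lost if the process dies, and BufferedGzipWriter is my name for it, not a library class:

import gzip

class BufferedGzipWriter(object):
    def __init__(self, path, chunk=64 * 1024):
        self.gz = gzip.GzipFile(path, 'ab')
        self.chunk = chunk
        self.buf = []
        self.size = 0

    def write(self, data):  # data: bytes
        self.buf.append(data)
        self.size += len(data)
        if self.size >= self.chunk:
            self.gz.write(b''.join(self.buf))
            self.gz.flush()  # sync-flushes the deflate stream to disk
            self.buf, self.size = [], 0

    def close(self):
        if self.buf:
            self.gz.write(b''.join(self.buf))
        self.gz.close()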
