Using Memoize in python (2.7)

I do not want to extract files to disk, but rather keep the final .txt in memory and parse it. I can't find anything about using Memoize in Python 2.7.
.zip -> .gz -> .txt (data needs to be parsed)
My second choice is to unzip and parse the .txt file data. Any thoughts?

You can unzip the file and write it to an io.BytesIO object, which is essentially an in-memory file.
https://docs.python.org/2/library/io.html#buffered-streams
You can then use any method that works on a regular file, such as read, seek, etc.
This way you get a virtual file that works for any format. If you are certain that plain text is the only thing you are going to handle, the io module also provides pure text streams.
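For the pipeline above, a minimal sketch might look like this (the archive and member names are assumptions; the real names come from your actual .zip):

import io
import gzip
import zipfile

# Hypothetical names: outer.zip contains inner.gz, which holds the .txt
with zipfile.ZipFile('outer.zip') as zf:
    gz_data = io.BytesIO(zf.read('inner.gz'))  # the .gz, held in memory

with gzip.GzipFile(fileobj=gz_data) as gz:
    txt = io.BytesIO(gz.read())  # the final .txt, never written to disk

for line in txt:
    pass  # parse each line here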

Related

Reading a single-entry ZIP file incrementally from an unseekable stream in Python

We often need to unzip extremely large (unencrypted) ZIP files that are hosted by partners over HTTPS. Generally, the ZIP file format needs to download in full to be able to see the "central directory" data to identify file entries; however, in our case, we can assume there's exactly one large text file that was zipped, and we could begin extracting and parsing data immediately without needing to wait for the ZIP file to buffer.
If we were using C#, we could use https://github.com/icsharpcode/SharpZipLib/wiki/Unpack-a-zip-using-ZipInputStream, which handles this pattern elegantly.
However, it seems that the Python standard library's zipfile module doesn't support this type of streaming; it assumes that the input file-like object is seekable, and all tutorials point to iterating first over namelist() which seeks to the central directory data, then open(name) which seeks back to the file entry.
Many other examples on StackOverflow recommend using BytesIO(response.content) which might appear to pipe the content in a streaming way; however, .content in the Requests library consumes the entire stream and buffers the entire thing to memory.
Is there an alternate way to use zipfile or a third-party Python library to do this in a completely streaming way?
Is there an alternate way to use zipfile or a third-party Python library to do this in a completely streaming way?
Yes: https://github.com/uktrade/stream-unzip can do it [full disclosure: essentially written by me].
We often need to unzip extremely large (unencrypted) ZIP files that are hosted by partners over HTTPS.
The example from the README shows how to do this, using stream-unzip and httpx:
from stream_unzip import stream_unzip
import httpx

def zipped_chunks():
    # Any iterable that yields a zip file
    with httpx.stream('GET', 'https://www.example.com/my.zip') as r:
        yield from r.iter_bytes()

for file_name, file_size, unzipped_chunks in stream_unzip(zipped_chunks()):
    for chunk in unzipped_chunks:
        print(chunk)
If you do just want the first file, you can break after it:
for file_name, file_size, unzipped_chunks in stream_unzip(zipped_chunks()):
    for chunk in unzipped_chunks:
        print(chunk)
    break
Also
Generally, the ZIP file format needs to download in full to be able to see the "central directory" data to identify file entries
This isn't completely true.
Each file has a "local" header that contains its name, and it can be worked out where the compressed data for any member file ends (via information in the local header if it's there, or from the compressed data itself). While there is more information in the central file directory at the end, if you just need the names and bytes of the files, then it is possible to start unzipping a ZIP file that contains multiple files as it's downloading.
I can't claim it's absolutely possible in all cases: technically ZIP allows for many different compression algorithms and I haven't investigated them all. However, for DEFLATE, which is the one most commonly used, it is possible.
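To illustrate the local-header point, here is a hedged sketch that reads just the fixed 30-byte local file header at the very start of an archive and prints the first member's name, long before the central directory at the end is available (the archive name is hypothetical):

import struct

with open('my.zip', 'rb') as f:
    fixed = f.read(30)  # fixed-size part of a local file header
    (sig, version, flags, method, mtime, mdate,
     crc32, comp_size, uncomp_size, name_len, extra_len) = \
        struct.unpack('<IHHHHHIIIHH', fixed)
    assert sig == 0x04034b50  # b'PK\x03\x04', the local header signature
    print(f.read(name_len))   # the first member's file name follows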
It's even possible to download one specific file from a .zip without downloading the whole file. All you need is a server that allows reading bytes in ranges: fetch the end record (to know the size of the central directory), fetch the central directory (to know where the file starts and ends) and then fetch the proper bytes and handle them.
Using Onlinezip you can handle the file like a local file. Even the API is identical to ZipFile in Python.
[full disclosure: I'm the author of the library]
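A sketch of the same range-request idea without any third-party unzipper: wrap HTTP Range requests in a seekable file-like object and hand it to the standard zipfile module, which then only requests the end record, the central directory, and the bytes of the member you read. The URL and member name are assumptions, and the server must honour Range requests:

import io
import zipfile
import httpx

class HttpRangeFile(io.RawIOBase):
    # A seekable file-like object that reads bytes via HTTP Range requests
    def __init__(self, client, url):
        self.client = client
        self.url = url
        self.pos = 0
        # Size of the whole archive, from a HEAD request
        self.size = int(client.head(url).headers['Content-Length'])

    def seekable(self):
        return True

    def readable(self):
        return True

    def seek(self, offset, whence=io.SEEK_SET):
        if whence == io.SEEK_SET:
            self.pos = offset
        elif whence == io.SEEK_CUR:
            self.pos += offset
        else:  # io.SEEK_END
            self.pos = self.size + offset
        return self.pos

    def tell(self):
        return self.pos

    def read(self, n=-1):
        if n < 0:
            n = self.size - self.pos
        if n == 0:
            return b''
        headers = {'Range': 'bytes=%d-%d' % (self.pos, self.pos + n - 1)}
        data = self.client.get(self.url, headers=headers).content
        self.pos += len(data)
        return data

with httpx.Client() as client:
    remote = HttpRangeFile(client, 'https://www.example.com/my.zip')
    with zipfile.ZipFile(remote) as zf:
        data = zf.read('data.txt')  # only this member's bytes are downloaded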

In Python, is it possible to let a subprocess write to an in-memory bytes object instead of a file?

I have a process that I want to run which converts files into PDFs. I'm using LibreOffice for that: I call LibreOffice as a subprocess from Python, and LibreOffice then writes a new file on my system. This is of course totally independent of my Python program. After the conversion I read the file and then use it for something else in Python.
But is it at all possible to capture this output as a bytes object in Python instead of a file? That would eliminate the need to read the file back after the conversion, keeping everything in memory.
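LibreOffice's converter writes to an output directory rather than to stdout, so one common compromise is a short-lived temporary directory: the file touches disk only for the duration of the call, after which you hold plain bytes. A minimal sketch, assuming the usual soffice command line (the input file name is hypothetical):

import os
import subprocess
import tempfile

def convert_to_pdf_bytes(path):
    with tempfile.TemporaryDirectory() as tmp:
        # LibreOffice writes <basename>.pdf into the --outdir directory
        subprocess.run(
            ['soffice', '--headless', '--convert-to', 'pdf',
             '--outdir', tmp, path],
            check=True)
        name = os.path.splitext(os.path.basename(path))[0] + '.pdf'
        with open(os.path.join(tmp, name), 'rb') as f:
            return f.read()  # the PDF now lives only in memory

pdf_bytes = convert_to_pdf_bytes('report.docx')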

How do I open/convert .pkz files?

A python package that I'm using has data stored under a single file with a .pkz extension. How would I unzip (?) this file to view the format of data within?
Looks like what you are referencing is just a one-off file format used in sample data in scikit-learn. The .pkz is just a compressed version of a Python pickle file which usually has the extension .pkl.
Specifically, you can see this in one of their sample-data scripts, along with the fact that they are using the zlib_codec. To open it, you can go in reverse or try uncompressing from the command line.
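Going in reverse might look like this minimal sketch, assuming the .pkz is a zlib-compressed pickle as in scikit-learn's sample data (the file name is hypothetical):

import pickle
import zlib

with open('sample.pkz', 'rb') as f:
    # Reverse of zlib-compressing pickle.dumps(...)
    obj = pickle.loads(zlib.decompress(f.read()))
print(type(obj))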
Before attempting to open a PKZ file, you'll need to determine what kind of file you are dealing with and whether it is even possible to open or view the file format.
Files which are given the .PKZ extension are known as WinOnCD Images Mask files; however, other file types may also use this extension.
How to open a PKZ file:
The simplest way to open a PKZ file is to double-click it and let the default associated application open the file. If you are unable to open it this way, it may be because you do not have the correct application associated with the extension to view or edit PKZ files.
If that works, great: you have a program installed that can open it. Let's say that program is called pkzexecutor.exe; with Python, you then just have to do:
import subprocess

path_to_executor = 'C:\\Windows\\System32\\pkzexecutor.exe'
path_to_file = 'C:\\Users\\Desktop\\yourfile.pkz'
subprocess.call([path_to_executor, path_to_file])
From the source code for fetch_olivetti_faces, the file appears to be downloaded from http://cs.nyu.edu/~roweis/data/ and originally has a .mat file extension, meaning it is actually a MATLAB file. If you have access to MATLAB or another program which can read those files, try opening it from there with the original file extension and see what that gives you.
(If you want to try opening this file in Python itself, then perhaps give this question a look: Read .mat files in Python )
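If you do recover the original .mat file, a minimal sketch using SciPy (assuming the file is a MATLAB version that scipy.io supports) would be:

from scipy.io import loadmat

data = loadmat('olivettifaces.mat')  # hypothetical name of the re-downloaded file
print(data.keys())  # the MATLAB variables stored in the file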

compressed archive with quick access to individual file

I need to come up with a file format for new application I am writing.
This file will need to hold a bunch other text files which are mostly text but can be other formats as well.
Naturally, a compressed tar file seems to fit the bill.
The problem is that I want to be able to retrieve some data from the file very quickly, and getting just a particular file out of a tar.gz seems to take longer than it should. I am assuming that this is because it has to decompress the entire archive even though I just want one file. When I have a regular uncompressed tar file, I can get that data very quickly.
Let's say the file I need quickly is called data.dat.
For example the command...
tar -xzf myfile.tar.gz data.dat
... is what takes a lot longer than I'd like.
MP3 files have ID3 data and JPEG files have EXIF data that can be read quickly without reading the entire file.
I would like my data.dat file to be available in a similar way.
I was thinking that I could leave it uncompressed and separate from the rest of the files in myfile.tar.gz.
I could then create a tar file containing data.dat and myfile.tar.gz, and hopefully that data could be retrieved faster because it sits at the head of the outer tar file and is uncompressed.
Does this sound right?... putting a compressed tar inside of a tar file?
Basically, my need is to have an archive type of file with quick access to one particular file.
Tar does this just fine, but I'd also like to have that data compressed and as soon as I do that, I no longer have quick access.
Are there other archive formats that will give me that quick access I need?
As a side note, this application will be written in Python. If the solution calls for re-inventing the wheel with my own binary format, I am familiar with C and would have no problem writing the Python module in C. Ideally I'd just use tar, dd, cat, gzip, etc., though.
Thanks,
~Eric
ZIP seems to be appropriate for your situation. Files are compressed individually, which means you can access any one of them without streaming through everything that precedes it.
In Python, you can use zipfile.
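A minimal sketch of the random access this gives you (the archive and member names are hypothetical):

import zipfile

with zipfile.ZipFile('myfile.zip') as zf:
    data = zf.read('data.dat')  # seeks straight to this member; nothing else is decompressed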

File encryption with Python

Is there a way to encrypt files (.zip, .doc, .exe, ... any type of file) with Python?
I've looked at a bunch of crypto libraries for Python including pycrypto and ezpycrypto but as far as I see they only offer string encryption.
In Python versions prior to 3.0, the read method of a file object returns a string; provide this string to the encryption library of your choice, and the resulting string can be written to a file.
Keep in mind that on Windows-based operating systems, the default (text) mode used when reading files translates line endings and can corrupt binary data, so open files in binary mode ('rb'/'wb').
You can read the complete file into a string, encrypt it, and write the encrypted string to a new file. If the file is too large, you can read it in chunks.
Every time you .read from a file, you get a string (in Python < 3.0).
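A minimal sketch of the chunked approach using PyCrypto (mentioned in the question). AES in CFB mode is chosen here because it accepts chunks of any length; key and IV handling are deliberately simplified:

import os
from Crypto.Cipher import AES

key = os.urandom(16)  # in practice, derive and store the key properly
iv = os.urandom(16)
cipher = AES.new(key, AES.MODE_CFB, iv)

with open('document.doc', 'rb') as src, open('document.enc', 'wb') as dst:
    dst.write(iv)  # store the IV so decryption can recreate the cipher
    while True:
        chunk = src.read(64 * 1024)  # read in chunks to bound memory use
        if not chunk:
            break
        dst.write(cipher.encrypt(chunk))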
