Hadoop: Process image files in Python code

I'm working on a side project where we want to process images in a Hadoop MapReduce program (for eventual deployment to Amazon's Elastic MapReduce). The input to the process will be a list of all the files, each with a little extra data attached (the lat/long position of the bottom-left corner - these are aerial photos).
The actual processing needs to take place in Python code so we can leverage the Python Imaging Library. All the Python streaming examples I can find use stdin and process text input. Can I send image data to Python through stdin? If so, how?
I wrote a Mapper class in Java that takes the list of files and saves the names, the extra data, and the binary contents to a sequence file. I was thinking maybe I need to write a custom Java mapper that takes in the sequence file and pipes it to Python. Is that the right approach? If so, what should the Java code to pipe the images out, and the Python code to read them in, look like?
In case it's not obvious, I'm not terribly familiar with Java OR Python, so it's also possible I'm just biting off way more than I can chew with this as my introduction to both languages...

There are a few possible approaches that I can see:
Use both the extra data and the file contents as input to your Python program. The tricky part here will be the encoding. I frankly have no idea how streaming works with raw binary content, and I'm assuming the basic answer is "not well." The main issue is that the stdin/stdout communication between processes is very text-based, relying on delimiting input with tabs and newlines, and things like that. You would need to worry about the encoding of the image data, and probably have some sort of pre-processing step, or a custom InputFormat, so that you could represent the image as text.
Use only the extra data and the file location as input to your Python program. The program can then independently read the actual image data from the file (see the sketch after this list). The hiccup here is making sure that the file is available to the Python script. Remember this is a distributed environment, so the files would have to be in HDFS or somewhere similar, and I don't know if there are good libraries for reading files from HDFS in Python.
Do the Java-Python interaction yourself. Write a Java mapper that uses the Runtime class to start the Python process itself. This way you get full control over exactly how the two worlds communicate, but obviously it's more code and a bit more involved.
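For the second approach, a minimal sketch of what the Python streaming mapper could look like, assuming each stdin line is a tab-separated "hdfs_path, lat, lon" triple and that the hadoop fs -cat command is available on the task nodes (the field order and the output line are just illustrations, not anything from the original question):

    #!/usr/bin/env python
    # Streaming mapper sketch: stdin carries only the metadata; the mapper
    # pulls the actual image bytes itself by shelling out to "hadoop fs -cat".
    import io
    import subprocess
    import sys

    from PIL import Image

    for line in sys.stdin:
        path, lat, lon = line.rstrip("\n").split("\t")

        # Fetch the raw image bytes from HDFS (or an s3:// path on EMR).
        raw = subprocess.check_output(["hadoop", "fs", "-cat", path])

        img = Image.open(io.BytesIO(raw))
        width, height = img.size

        # Emit whatever result the job actually needs, keyed however you like.
        print("%s\t%s\t%s\t%dx%d" % (path, lat, lon, width, height))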

Related

Save Data in C++, Load from Python - Recommended Data Formats

I have a ROS/C++ simulator that saves large amounts of data to a rosbag (around 90 MB). I want to read this data frequently from Python, and since reading rosbags is slow and cumbersome, I currently have another Python script that reads the rosbag and saves the relevant contents to an HDF5 file.
It would be nice though to be able to just save the data from the simulator directly (in C++) and then read it from my scripts (in Python). So I was wondering which data format I should use.
It should be:
Fast to load from Python
Be compact (so ideally a binary of some sort)
Be easy to use
You might be wondering why I don't just save to HDF5 from my C++ simulator, but it just doesn't seem to be easy. There is basically nothing on forums such as Stack Overflow, and the HDF5 Group website is opaque, seems to have some complicated licensing, and has very poor examples. I just want something quick and dirty that I can get running this afternoon.
You may want to have a look at HDFql, as it is a high-level language (similar to SQL) to manage HDF5 files. Among others, HDFql supports C++ and Python. There are some examples that illustrate how to use HDFql in these languages here.
I see two solutions that could be useful for your problem:
LV: length-value records that you can write directly to a binary file (sketched below).
JSON: this does not add much data beyond what you need, and there are many libraries in Python and C++ that can simplify the work for you.
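As a rough illustration of the length-value idea on the Python side, here is a sketch that assumes each record is a 4-byte little-endian length followed by that many bytes of payload (the C++ writer would emit the same layout, e.g. a uint32_t length followed by the raw bytes); what the payload contains is entirely up to you:

    import struct

    def read_lv_records(path):
        """Read length-value records: 4-byte little-endian length, then payload."""
        records = []
        with open(path, "rb") as f:
            while True:
                header = f.read(4)
                if len(header) < 4:
                    break                      # clean end of file
                (length,) = struct.unpack("<I", header)
                records.append(f.read(length)) # payload bytes, still unparsed
        return records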
Protocol Buffers is an option with language bindings in C++ and Python, though it might be more of a time investment than the quick-and-dirty solution you want running this afternoon.

Debugging a python script which first needs to read large files. Do I have to load them every time anew?

I have a python script which starts by reading a few large files and then does something else. Since I want to run this script multiple times and change some of the code until I am happy with the result, it would be nice if the script did not have to read the files every time anew, because they will not change. So I mainly want to use this for debugging.
It happens too often that I run a script with a bug in it and only see the error message after several minutes, because reading the files took so long.
Are there any tricks to do something like this?
(If it is feasible, I create smaller test files)
I'm not good at Python, but it seems to be able to dynamically reload code from a changed module: How to re-import an updated package while in Python Interpreter?
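For completeness, the reload trick from that link looks roughly like this in an interactive session; mymodule, read_big_files and do_something_else are placeholder names for your own module and functions:

    try:
        from importlib import reload      # Python 3
    except ImportError:
        pass                              # Python 2 has reload() as a builtin

    import mymodule                       # placeholder: the module you keep editing
    data = mymodule.read_big_files()      # slow step, done once per session

    # ... edit mymodule.py in your editor, then:
    reload(mymodule)                      # pick up the changed code
    result = mymodule.do_something_else(data)   # re-run only the changed logic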
Some other suggestions not directly related to Python.
Firstly, try to create a smaller test file. Is the whole file required to demonstrate the bug you are observing? Most probably it is only a small part of your input file that is relevant.
Secondly, are these particular files required, or will the problem show up on any large amount of data? If it shows up only on particular files, then most probably it is related to some feature of those files and will also show up on a smaller file with the same feature. If the main reason is just the sheer amount of data, you might be able to avoid reading it by generating some random data directly in the script.
Thirdly, what is the bottleneck when reading the files? Is it just hard drive performance, or do you do some heavy processing of the data in your script before actually reaching the part that causes problems? In the latter case, you might be able to do that processing once, write the results to a new file, and then modify your script to load the processed data instead of redoing the processing each time.
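A sketch of that "process once, reload the result" idea, using pickle as the cache format; the cache file name and parse_big_files() are placeholders for whatever slow reading and processing you currently do:

    import os
    import pickle

    CACHE = "parsed_input.pickle"        # cache file name is arbitrary

    def parse_big_files():
        # Placeholder for the slow reading/processing you currently do.
        with open("huge_input.txt") as f:
            return f.readlines()

    def load_data():
        if os.path.exists(CACHE):
            with open(CACHE, "rb") as f:
                return pickle.load(f)    # fast path on later debug runs
        data = parse_big_files()         # slow path, only taken once
        with open(CACHE, "wb") as f:
            pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)
        return data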
If the hard drive performance is the issue, consider a faster filesystem. On Linux, for example, you might be able to use /dev/shm.

create pdf from python

I'm looking to generate PDFs from a Python application.
They start relatively simple, but some may become more complex (essentially letter-like documents, but they will include watermarks, for example, later on).
I've worked in raw PostScript before, and provided I can generate the correct headers etc. and a valid file at the end of it, I want to avoid using complex libraries that may not do entirely what I want; some seem to have suffered bit rot and are no longer supported (pypdf and pypdf2). Especially since I know PDF/PostScript can do exactly what I need. PDF content really isn't that complex.
I can generate EPS (Encapsulated PostScript) fine by just writing the appropriate text headers and my PostScript code to a file. But inspecting PDFs, there is a little binary header I'm not sure how to generate.
I could generate an EPS and convert it. I'm not overly happy with this as the production environment is a Windows 2008 server (Dev is Ubuntu 12.04) and making something and converting it seems very silly.
Has anyone done this before?
Am I being pedantic by not wanting to use a library?
borrowed from ask.yahoo
A PDF file starts with "%PDF-1.1" if it is a version 1.1 type of PDF file. You can read PDF files ok when they don't have binary data objects stored in them, and you could even make one using Notepad if you didn't need to store a binary object like a Paint bitmap in it.
But after seeing the "%PDF-1.1" you ignore what's after that (Adobe Reader does, too) and go straight to the end of the file to where there is a line that says "%%EOF". That's always the last thing in the file; and if that's there you know that just a few characters before that place in the file there's the word "startxref" followed by a number. This number tells a reader program where to look in the file to find the start of the list of items describing the structure of the file. These items in the list can be page objects, dictionary objects, or stream objects (like the binary data of a bitmap), and each one has "obj" and "endobj" marking out where its description starts and ends.
For fairly simple PDF files, you might be able to type the text in just like you did with Notepad to make a working PDF file that Adobe Reader and other PDF viewer programs could read and display correctly.
Doing something like this is a challenge, even for a simple file, and you'd really have to know what you're doing to get any binary data into the file where it's supposed to go; but for character data, you'd just be able to type it in. And all of the commands used in the PDF are in the form of strings that you could type in. The hardest part is calculating those numbers that give the file offsets for items in the file (such as the number following "startxref").
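To make the offset bookkeeping concrete, here is a sketch (Python 3) that writes a one-page "Hello, world" PDF by hand and computes the xref offsets and the startxref value as it builds the file; it only illustrates the structure described above and skips compression, binary streams, and error handling:

    def build_pdf(text="Hello, world!"):
        content = ("BT /F1 24 Tf 72 720 Td (%s) Tj ET" % text).encode("latin-1")
        objects = [
            b"<< /Type /Catalog /Pages 2 0 R >>",
            b"<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
            (b"<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] "
             b"/Resources << /Font << /F1 5 0 R >> >> /Contents 4 0 R >>"),
            b"<< /Length %d >>\nstream\n%s\nendstream" % (len(content), content),
            b"<< /Type /Font /Subtype /Type1 /BaseFont /Helvetica >>",
        ]

        out = bytearray(b"%PDF-1.1\n")
        offsets = []                                  # byte offset of each "N 0 obj"
        for i, body in enumerate(objects, start=1):
            offsets.append(len(out))
            out += b"%d 0 obj\n" % i
            out += body + b"\nendobj\n"

        xref_pos = len(out)                           # what startxref must point at
        out += b"xref\n0 %d\n" % (len(objects) + 1)
        out += b"0000000000 65535 f \n"               # mandatory free entry for object 0
        for off in offsets:
            out += b"%010d 00000 n \n" % off          # 20-byte in-use entries
        out += b"trailer\n<< /Size %d /Root 1 0 R >>\n" % (len(objects) + 1)
        out += b"startxref\n%d\n%%%%EOF\n" % xref_pos
        return bytes(out)

    with open("hello.pdf", "wb") as f:
        f.write(build_pdf())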
If the way the file format is laid out intrigues you, go ahead and read the PDF manual, which tells the whole story.
http://www.adobe.com/content/dam/Adobe/en/devnet/acrobat/pdfs/PDF32000_2008.pdf
but really you should probably just use a library
Thanks to @LukasGraf for providing this link http://www.gnupdf.org/Introduction_to_PDF that shows how to create a simple hello-world PDF from scratch.
As long as you're working in Python 2.7, Reportlab seems to be the best solution out there at the moment. It's quite full-featured, and can be a little complex to work with, depending on exactly what you're doing with it, but since you seem to be familiar with PDF internals in general hopefully the learning curve won't be too steep.
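For reference, a minimal ReportLab sketch for the letter-with-watermark kind of document described in the question; the file name, text, and the grey diagonal watermark are just placeholders:

    from reportlab.lib.pagesizes import letter
    from reportlab.pdfgen import canvas

    c = canvas.Canvas("letter.pdf", pagesize=letter)

    # Light grey diagonal watermark, drawn first so the text sits on top.
    c.saveState()
    c.setFillColorRGB(0.85, 0.85, 0.85)
    c.setFont("Helvetica-Bold", 60)
    c.translate(300, 400)
    c.rotate(45)
    c.drawCentredString(0, 0, "DRAFT")
    c.restoreState()

    # The letter body.
    c.setFont("Helvetica", 12)
    c.drawString(72, 720, "Dear Sir or Madam,")
    c.drawString(72, 700, "This is the body of the letter.")

    c.showPage()
    c.save()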
I recommend you to use a library. I spent a lot of time creating pdfme and learned a lot of things along the way, but it's not something you would do for a single project. If you want to use my library check the docs here.

How to parse a .shp file?

I am interested in gleaning information from an ESRI .shp file.
Specifically the .shp file of a polyline feature class.
When I open the .dbf of a feature class, I get what I would expect: a table that can be opened in Excel and contains the information from the feature class's table.
However, when I try to open a .shp file in any program (Excel, TextPad, etc.), all I get is a bunch of gibberish and unusual ASCII characters.
I would like to use Python (2.x) to interpret this file and get information out of it (in this case the vertices of the polyline).
I do not want to use any modules or non built-in tools, as I am genuinely interested in how this process would work and I don't want any dependencies.
Thank you for any hints or points in the right direction you can give!
Your question, basically, is "I have a file full of data stored in an arbitrary binary format. How can I use python to read such a file?"
The answer is, this link contains a description of the format of the file. Write a dissector based on the technical specification.
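If you do go the hand-rolled route, the fixed parts of the published shapefile spec are simple enough to read with struct; here is a rough sketch for polyline (shape type 3) records only, with no error handling and no .dbf/.shx support, so treat it as a starting point rather than a complete dissector:

    import struct

    def read_polyline_shp(path):
        """Minimal .shp dissector for polyline (shape type 3) records."""
        with open(path, "rb") as f:
            header = f.read(100)
            # File code (big-endian) must be 9994; shape type is little-endian at byte 32.
            file_code, = struct.unpack(">i", header[0:4])
            shape_type, = struct.unpack("<i", header[32:36])
            assert file_code == 9994, "not a shapefile"

            shapes = []
            while True:
                rec_header = f.read(8)
                if len(rec_header) < 8:
                    break
                # Record header is big-endian; content length is in 16-bit words.
                rec_num, content_len = struct.unpack(">ii", rec_header)
                content = f.read(content_len * 2)
                rec_type, = struct.unpack("<i", content[0:4])
                if rec_type != 3:                      # 3 = polyline
                    continue
                # Skip the 4-double bounding box, then read part/point counts.
                num_parts, num_points = struct.unpack("<ii", content[36:44])
                parts = struct.unpack("<%di" % num_parts,
                                      content[44:44 + 4 * num_parts])
                pts_off = 44 + 4 * num_parts
                points = [struct.unpack("<dd",
                                        content[pts_off + 16 * i:pts_off + 16 * i + 16])
                          for i in range(num_points)]
                shapes.append({"record": rec_num, "parts": parts, "points": points})
            return shape_type, shapes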
If you don't want to go to all the trouble of writing a parser, you should take look at pyshp, a pure Python shapefile library. I've been using it for a couple of months now, and have found it quite easy to use.
There's also a python binding to shapelib, if you search the web. But I found the pure Python solution easier to hack around with.
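For comparison, reading the same polyline vertices with pyshp takes only a few lines (the "roads" basename is a placeholder for your own shapefile):

    import shapefile                      # the pyshp module

    sf = shapefile.Reader("roads")        # reads roads.shp / roads.shx / roads.dbf
    for shape in sf.shapes():
        print(shape.shapeType, shape.points)   # 3 = polyline; points are (x, y) pairs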
It might be a long shot, but you should check out ctypes, and maybe use the .dll file that came with a program (if it even exists) that can read that type of file. In my experience, things get weird when you start digging around in .dlls.

Should I use a ramdisk for pictures that are converted and removed?

I have a little program here (Python 2.7) that runs on an old machine, and it basically keeps taking pictures (for timelapses) by running an external binary and converts them to an efficient format to save disk space.
I want to minimize the disk operations, because it's already pretty old and I want it to last some more time.
At the moment the program writes the data from the camera to disk, then converts it, and removes the original data. It does that for every image: 1) it writes a large file to disk, 2) reads it back to convert it, 3) and then deletes it. That's a bunch of disk operations that aren't necessary and could be done in RAM, because the original file doesn't have to be stored and is only used as a basis to create another one.
I was sure a ramdisk was the solution, then I googled how to do that, and Google returned a bunch of links that discourage the use of ramdisks. The reasons are many: they are not useful on modern systems (I'm running a pretty new Linux kernel); they should only be used if you want to decrypt data that shouldn't hit the disk; some tests show that a ramdisk can actually be slower than the hard drive; the operating system has a cache anyway...
So I'm confused...
In this situation, should I use a ramdisk?
Thank you.
PS: If you want more info: I have a proprietary high-res camera and a proprietary binary that I run to capture a single image. I can specify where it will write the file, which is a huge TIFF, and then the Python program runs the convert program from ImageMagick to convert it to JPEG and then compresses it into a tar.bz2, so the quality is almost the same but the file size is 1/50 of the TIFF.
My experience with ramdisks is congruent with what you've mentioned here. I lost performance when I moved to them because there was less memory available for the kernel to do its caching intelligently, and that messed things up.
However, from your question, I understand that you want to optimise for number of disk operations rather than speed in which case a RAM disk might make sense. As with most of these kinds of problems, monitoring is the right way to do it.
Another thing that struck me was that if your original image is not that big, you might want to buy a cheap USB stick and do the I/O on that rather than on your main drive. Is that not an option?
Ah, proprietary binaries that only give certain options. Yay. The simplest solution would be adding a solid-state drive. You would still be saving to disk, but disk I/O would be much faster for reading and writing.
A better solution would be outputting the tiff to stdout, perhaps in a different format, and piping it to your python program. It would never hit the hard drive at all, but it would be more work. Of course, if the binary doesn't allow you to do this, then it's moot.
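If the binary does support writing to stdout, the pipe version could look roughly like this; the "-" output argument is hypothetical (check what your capture tool actually accepts), and this uses PIL for the conversion instead of shelling out to ImageMagick:

    import io
    import subprocess
    from PIL import Image

    # Ask the capture binary to write the TIFF to stdout instead of a file
    # ("-" as the output path is an assumption about the tool's interface).
    proc = subprocess.Popen(["/usr/local/bin/capture", "-"], stdout=subprocess.PIPE)
    raw, _ = proc.communicate()

    img = Image.open(io.BytesIO(raw))              # decode the TIFF entirely in memory
    img.save("frame0001.jpg", "JPEG", quality=90)  # only the small JPEG hits the disk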
If on Debian (and possibly its derivatives), use the "/run/shm" directory.
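Concretely, with /run/shm (or /dev/shm) as the scratch directory, the existing capture-convert-compress loop only ever writes the final .tar.bz2 to the real disk; the paths and the capture command below are placeholders:

    import os
    import subprocess
    import tarfile

    SCRATCH = "/run/shm"                 # tmpfs on Debian-like systems; /dev/shm elsewhere
    ARCHIVE_DIR = "/data/timelapse"      # hypothetical destination on the real disk

    def capture_and_compress(name):
        tiff_path = os.path.join(SCRATCH, name + ".tif")
        jpeg_path = os.path.join(SCRATCH, name + ".jpg")

        # 1. Ask the proprietary capture binary to write its TIFF into tmpfs
        #    (assumes it takes the output path as an argument).
        subprocess.check_call(["/usr/local/bin/capture", tiff_path])

        # 2. Convert in RAM with ImageMagick.
        subprocess.check_call(["convert", tiff_path, "-quality", "90", jpeg_path])

        # 3. Only the small compressed archive ever touches the real disk.
        with tarfile.open(os.path.join(ARCHIVE_DIR, name + ".tar.bz2"), "w:bz2") as tar:
            tar.add(jpeg_path, arcname=name + ".jpg")

        # 4. Clean up the scratch files in tmpfs.
        os.remove(tiff_path)
        os.remove(jpeg_path)

    capture_and_compress("frame_00001")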
