How to extract JPEG image DC coefficients with PIL? - python

I have read through all the related Q&As I could find on this topic. I'm using Python 2.7.14 and the Python Imaging Library (PIL). I would like to extract the quantized DCT values of an existing JPEG image; specifically, I'm looking for the DC coefficient associated with each 8x8 pixel block. I'm not interested in changing or re-encoding the image, I just want to obtain the DC values [0,0]. This lower-level data does not seem to be accessible through any method or attribute of the Image class. Any suggestions would be greatly appreciated.
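For what it's worth, PIL/Pillow decodes straight to pixels and does not expose the entropy-decoded coefficients, so a pure-PIL route can only approximate them. A minimal sketch, recomputing the block DCTs from the decoded pixels with scipy ('photo.jpg' is a placeholder; the exact file-level values would need a JPEG-aware library):

import numpy as np
from PIL import Image
from scipy.fftpack import dct

im = Image.open('photo.jpg')
q_dc = im.quantization[0][0]                      # luma DC quantizer stored in the file
pix = np.asarray(im.convert('L'), dtype=np.float64) - 128.0  # JPEG level shift

h, w = pix.shape
h, w = h - h % 8, w - w % 8                       # drop partial edge blocks
blocks = pix[:h, :w].reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
coeffs = dct(dct(blocks, axis=-1, norm='ortho'), axis=-2, norm='ortho')
dc = coeffs[:, :, 0, 0]                           # one unquantized DC value per 8x8 block
quantized_dc = np.round(dc / q_dc)                # approximation of the quantized values

Because the pixels have already been de-quantized and rounded during decoding, the recovered values can differ slightly from the coefficients actually stored in the file.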

Related

Is there a way to extract visual data from a folium map?

I am constructing a map with meaningful data using Folium in Python, but I need to extract the underlying information (for example, an image bounded by max/min lat-long values). I have tried several approaches, but none of them gives me the data I want.
Here is a sample map, constructed using Folium, in an HTML file.
I need to use this as an RGB image rather than an interactive map. As far as I can see, there is no such functionality; at least, I could not find it. Is there a way?
Assuming there is no such way, I decided to crop the rendered map using a Selenium browser screenshot. To do that, I first had to fix the boundaries so the captured image would correspond to known latitude/longitude values. I applied fit_bounds(), but the view is not bounded by the given max/min lat-long values; there is a padding-like expansion beyond the boundaries, so this approach also failed. Could you please let me know if there is a solution? Briefly: I need the RGB image together with its lat-long values (at least the boundaries), retrieved directly from a Folium map if possible.
Thank you in advance for any support.
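A hedged sketch of the screenshot route described above: save the map to HTML, render it in headless Chrome via Selenium, and read the capture back as an RGB image (the paths, center coordinates, and window size are placeholders):

import folium
from selenium import webdriver
from PIL import Image

m = folium.Map(location=[40.0, 29.0], zoom_start=10)  # placeholder center/zoom
m.save('map.html')

options = webdriver.ChromeOptions()
options.add_argument('--headless')
driver = webdriver.Chrome(options=options)
driver.set_window_size(1024, 768)                  # capture size of your choice
driver.get('file:///absolute/path/to/map.html')    # must be an absolute file:// URL
driver.save_screenshot('map.png')
driver.quit()

rgb = Image.open('map.png').convert('RGB')         # the rendered map as RGB data

Note that this does not by itself solve the fit_bounds() padding problem; the captured extent still has to be reconciled with the intended lat-long boundaries.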

Crop features detected in OpenCV

I am trying to crop features out of a photo using OpenCV and haven't been able to find anything that shows how. I have photos of metal panels from which I am trying to crop out the rivets, to create a dataset of images that focus on just the rivets. I have been able to detect and match features using ORB, but I am unsure how to then crop out those features. Ideally each photo should yield multiple cropped images of rivets. Does anyone have experience with anything like this?
For the simple case, you can use OpenCV's template matching (which is nicely described here).
If your template is skewed, rotated, etc. in the photo, you can use feature homography
For cropping the matched part of the image, you can look at this previously answered question; a short sketch combining these steps is below.
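A minimal sketch of that combination, assuming plain template matching is sufficient (the file names and the 0.8 threshold are placeholders):

import cv2
import numpy as np

img = cv2.imread('panel.jpg', cv2.IMREAD_GRAYSCALE)
template = cv2.imread('rivet.jpg', cv2.IMREAD_GRAYSCALE)
th, tw = template.shape

result = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(result >= 0.8)                 # keep only strong matches

for i, (y, x) in enumerate(zip(ys, xs)):
    crop = img[y:y + th, x:x + tw]               # cropping is plain array slicing
    cv2.imwrite('rivet_%03d.png' % i, crop)

Strong matches tend to cluster, so in practice you may want to suppress near-duplicate locations before saving the crops.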

Python: Extract GLGCM features

There is a type of texture features called GLGCM (Gray Level Gradient Based Co-occurrence Matrix) that captures information about how different image gradients co-occur with each other.
GLGCM is different from normal GLCM.
Can anyone help me find an implementation for GLGCM in Python?
I don't have access to the paper right now, so I am not sure about the details, but what if you use GLCM on a gradient image normalized into the 0-255 range?
A GLCM implementation can be found in the scikit-image library; a rough sketch of the idea follows.
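This is a plain GLCM computed on the gradient image, not a true GLGCM; the file name is a placeholder, and the functions are spelled greycomatrix/greycoprops in older scikit-image releases:

import numpy as np
from skimage import io, filters
from skimage.feature import graycomatrix, graycoprops

img = io.imread('texture.png', as_gray=True)
grad = filters.sobel(img)                                # gradient-magnitude image
grad = np.uint8(255 * (grad - grad.min()) / (np.ptp(grad) + 1e-9))  # rescale to 0-255

glcm = graycomatrix(grad, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
contrast = graycoprops(glcm, 'contrast')                 # one example texture feature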

How to align multiple camera images using opencv

Imagine someone taking a burst shot with a camera: they will have multiple images, but since no tripod or stand was used, the images will be slightly different.
How can I align them so that they overlay neatly, and crop out the edges?
I have searched a lot, but most of the solutions involved either 3D reconstruction or MATLAB.
e.g. https://github.com/royshil/SfM-Toy-Library
Since I'm very new to OpenCV, I would prefer an easy-to-implement solution.
I have generated many datasets by manually rotating and cropping images in MS Paint, but any link to corresponding datasets (slightly rotated and translated images) would also be helpful.
EDIT: I found a solution here:
http://www.codeproject.com/Articles/24809/Image-Alignment-Algorithms
which gives close approximations to rotation and translation vectors.
How can I do better than this?
It depends on what you mean by "better" (accuracy, speed, low memory requirements, etc.). One classic approach is to align each frame #i (with i > 1) with the first frame, as follows (a sketch is given after the list):
Local feature detection, for instance via SIFT or SURF (link)
Descriptor extraction (link)
Descriptor matching (link)
Alignment estimation via perspective transformation (link)
Transform image #i to match image 1 using the estimated transformation (link)
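A compact sketch of this pipeline, using ORB rather than the patent-encumbered SIFT/SURF; the file names and the number of matches kept are placeholders:

import cv2
import numpy as np

ref = cv2.imread('frame1.jpg', cv2.IMREAD_GRAYSCALE)
img = cv2.imread('frame2.jpg', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(ref, None)        # steps 1-2: detect + describe
kp2, des2 = orb.detectAndCompute(img, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)  # step 3

src = np.float32([kp2[m.queryIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.trainIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)                # step 4

aligned = cv2.warpPerspective(img, H, (ref.shape[1], ref.shape[0]))    # step 5

RANSAC in findHomography keeps the estimate robust to the inevitable bad matches.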

Horizontal and vertical edge profiles using python-opencv

I am trying to detect a vehicle in an image (actually in a sequence of frames from a video). I am new to OpenCV and Python, and I work under Windows 7.
Is there a way to get horizontal edges and vertical edges of an image and then sum up the resultant images into respective vectors?
Is there Python code or a function available for this?
I looked at this and this but could not figure out how to do it.
You may use the following image for illustration.
EDIT
I was inspired by the idea presented in the following paper (sorry if you do not have access).
Betke, M.; Haritaoglu, E. & Davis, L. S. Real-time multiple vehicle detection and tracking from a moving vehicle. Machine Vision and Applications, Springer-Verlag, 2000, 12, 69-83.
I would take a look at the squares example for OpenCV, posted here. It uses Canny and then does a contour find to return the sides of each square. You should be able to modify this code to get the horizontal and vertical lines you are looking for. Here is a link to the documentation for the Python call of Canny; it is rather helpful for all-around edge detection. When I get home in about an hour, I can give you a working example of what you want.
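A rough sketch of that route; the thresholds are placeholders, and the [-2] index keeps the call working across OpenCV versions whose findContours return values differ:

import cv2

img = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)                      # placeholder thresholds
contours = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
for c in contours:
    x, y, w, h = cv2.boundingRect(c)                 # box around each contour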
Do some reading on Sobel filters.
http://en.wikipedia.org/wiki/Sobel_operator
You can basically get vertical and horizontal gradients at each pixel.
Here is the OpenCV function for it.
http://docs.opencv.org/modules/imgproc/doc/filtering.html?highlight=sobel#sobel
Once you have these filtered images, you can collect statistics column- and row-wise, decide whether a given column or row contains an edge, and get its location.
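A minimal sketch of that, assuming a grayscale frame (the file name is a placeholder):

import cv2
import numpy as np

img = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)       # responds to vertical edges
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)       # responds to horizontal edges

vertical_profile = np.abs(gx).sum(axis=0)            # one value per image column
horizontal_profile = np.abs(gy).sum(axis=1)          # one value per image row

Peaks in these profiles are candidate locations for the strong vertical and horizontal structure of a vehicle.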
Typically, geometric approaches to object detection are not hugely successful, as the appearance model you assume can quite easily be violated by occlusion, noise, or orientation changes.
Machine learning approaches typically work much better in my opinion and would probably provide a more robust solution to your problem. Since you appear to be working with OpenCV, you could take a look at Cascade Classifiers, for which OpenCV provides both a Haar-wavelet-based and a local-binary-pattern-based classifier.
The link I have provided is a tutorial with very complete steps explaining how to create a classifier with several prewritten utilities. Basically, you create a directory with 'positive' images of cars and a directory with 'negative' images of typical backgrounds. The utility opencv_createsamples can be used to create training images warped to simulate different orientations and average intensities from a small set of images. You then run the utility opencv_traincascade, setting a few command-line parameters to select different training options, and it outputs a trained classifier for you.
Detection can be performed using either the C++ or the Python interface with this trained classifier.
For instance, using Python you can load the classifier and perform detection on an image, getting back a set of bounding rectangles:
import cv2

image = cv2.imread('path/to/image')
cc = cv2.CascadeClassifier('path/to/classifierfile')
objs = cc.detectMultiScale(image)   # array of (x, y, w, h) rectangles
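Since each detection is an (x, y, w, h) tuple, drawing or cropping the results is straightforward, for example:

for (x, y, w, h) in objs:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite('detections.png', image)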
