There is a type of texture features called GLGCM (Gray Level Gradient Based Co-occurrence Matrix) that captures information about how different image gradients co-occur with each other.
GLGCM is different from normal GLCM.
Can anyone help me find an implementation for GLGCM in Python?
I don't have access to the paper right now, so I'm not sure about the details, but what if you use GLCM on a gradient image normalized into the 0-255 range?
A Python implementation of GLCM can be found in the scikit-image library.
I have a theoretical background in probability, and I want to apply Chebyshev's inequality to a gray image for better binarization or segmentation. Knowing the distribution of the gray image, it is possible to find the value k such that pixels do not differ from the mean by more than a certain amount.
My questions are as follows:
How can I implement Chebyshev's inequality in Python/OpenCV?
Is there any public code that I can adapt, tuning the parameters based on the image statistics?
I have seen an implementation for 1D data, but it is not clearly explained, so it is hard to follow when extending it to 2D and 3D; see this one and this.
NB: My grayscale image follows a mixed Gaussian distribution.
Thank you in advance for any suggestions.
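As a starting point, here is a minimal sketch of the idea: Chebyshev's inequality says P(|X − μ| ≥ kσ) ≤ 1/k², so choosing the fraction of pixels you are willing to flag as outliers fixes k. The function name and the synthetic image below are my own placeholders, not part of any library.

```python
import numpy as np

def chebyshev_mask(image, fraction=0.05):
    """Flag pixels whose deviation from the mean exceeds the Chebyshev
    bound: P(|X - mu| >= k*sigma) <= 1/k**2, so k = 1/sqrt(fraction)."""
    mu, sigma = image.mean(), image.std()
    k = 1.0 / np.sqrt(fraction)
    return np.abs(image.astype(float) - mu) >= k * sigma

# Placeholder example on a synthetic grayscale image.
rng = np.random.default_rng(0)
img = rng.normal(128, 10, (100, 100))
mask = chebyshev_mask(img, fraction=0.05)  # True where pixels are outliers
```

For a mixture of Gaussians, applying this globally is crude; one could instead estimate the mixture components first and apply the bound per component.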
I have read through all the possible related Q&A on this question. I'm using Python 2.7.14 and the Python Imaging Library (PIL). I would like to extract the quantized DCT values of an existing JPEG image. I'm specifically looking to extract the DC coefficient associated with each 8x8 pixel area. I'm not interested in changing or re-encoding anything; I just want to obtain the DC values [0,0]. This lower-level function does not seem to be accessible from either the methods or the attributes associated with the Image class. Any suggestions would be greatly appreciated.
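PIL indeed does not expose the coefficients stored in the JPEG bitstream. As a hedged workaround, here is a sketch that recomputes the per-block DCT from the decoded pixels with scipy; the function name is mine, and the results are unquantized approximations of the stored DC terms (reading the exact quantized values would require a libjpeg-level library).

```python
import numpy as np
from PIL import Image
from scipy.fftpack import dct

def block_dc_coefficients(path):
    """Recompute the DC ([0, 0]) DCT coefficient of every 8x8 block.

    Caveat: this works from the decoded pixels, not the JPEG bitstream,
    so the values approximate (but do not equal) the stored quantized DCs.
    """
    img = np.asarray(Image.open(path).convert('L'), dtype=float) - 128  # JPEG level shift
    h, w = img.shape
    h8, w8 = h - h % 8, w - w % 8  # ignore partial edge blocks
    dcs = np.empty((h8 // 8, w8 // 8))
    for i in range(0, h8, 8):
        for j in range(0, w8, 8):
            block = img[i:i + 8, j:j + 8]
            coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
            dcs[i // 8, j // 8] = coeffs[0, 0]  # orthonormal DC = 8 * block mean
    return dcs
```

If only the DC values are needed, note that (up to scaling) they equal the 8x8 block means, so averaging blocks directly would give the same information faster.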
I'm very new to OpenCV, and I want to create a simple object detector that uses an SVM. Instead of HOG features, I would like to extract color histograms (for example) from my object, but I couldn't find any information about this for OpenCV; everything uses HOG.
And my second question: does the Python implementation of SVM have less functionality than the C++ one (both for OpenCV)?
You can use the OpenCV function calcHist to compute histograms.
calcHist(&bgr_planes[0], 1, 0, Mat(), b_hist, 1, &histSize, &histRange, uniform, accumulate );
where,
&bgr_planes[0]: The source array(s)
1: The number of source arrays
0: The channel (dim) to be measured. In this case it is just the intensity, so we just write 0.
Mat(): A mask to be used on the source array
b_hist: The Mat object where the histogram will be stored
1: The histogram dimensionality.
histSize: The number of bins per each used dimension
histRange: The range of values to be measured per each dimension
uniform and accumulate: Flags indicating whether the bins are of uniform size and whether the histogram is cleared at the beginning
Refer to the docs for more information.
You can also look at this answer which discusses C++ OpenCV SVM implementation and this answer which discusses Python OpenCV SVM implementation to get started.
Imagine someone taking a burst shot with a camera: they will end up with multiple images, but since no tripod or stand was used, the images will be slightly different.
How can I align them so that they overlay neatly, and then crop out the edges?
I have searched a lot, but most of the solutions either perform a full 3D reconstruction or use MATLAB.
e.g. https://github.com/royshil/SfM-Toy-Library
Since I'm very new to OpenCV, I would prefer an easy-to-implement solution.
I have generated many datasets by manually rotating and cropping images in MS Paint, but any link to corresponding datasets (slightly rotated and translated images) would also be helpful.
EDIT: I found a solution here
http://www.codeproject.com/Articles/24809/Image-Alignment-Algorithms
which gives close approximations to rotation and translation vectors.
How can I do better than this?
It depends on what you mean by "better" (accuracy, speed, low memory requirements, etc.). One classic approach is to align each frame #i (with i ≥ 2) to the first frame, as follows:
Local feature detection, for instance via SIFT or SURF (link)
Descriptor extraction (link)
Descriptor matching (link)
Alignment estimation via perspective transformation (link)
Transform image #i to match image 1 using the estimated transformation (link)
I am trying to detect a vehicle in an image (actually a sequence of frames in a video). I am new to OpenCV and Python and work under Windows 7.
Is there a way to get horizontal edges and vertical edges of an image and then sum up the resultant images into respective vectors?
Is there Python code or a function available for this?
I looked at this and this but could not work out how to do it.
You may use the following image for illustration.
EDIT
I was inspired by the idea presented in the following paper (sorry if you do not have access).
Betke, M.; Haritaoglu, E. & Davis, L. S. Real-time multiple vehicle detection and tracking from a moving vehicle Machine Vision and Applications, Springer-Verlag, 2000, 12, 69-83
I would take a look at the squares example for OpenCV, posted here. It uses Canny edge detection and then a contour find to return the sides of each square. You should be able to modify this code to get the horizontal and vertical lines you are looking for. Here is a link to the documentation for the Python call of Canny; it is rather helpful for all-around edge detection. In about an hour I can get home and give you a working example of what you want.
Do some reading on Sobel filters.
http://en.wikipedia.org/wiki/Sobel_operator
You can basically get vertical and horizontal gradients at each pixel.
Here is the OpenCV function for it.
http://docs.opencv.org/modules/imgproc/doc/filtering.html?highlight=sobel#sobel
Once you have these filtered images, you can collect statistics column- and row-wise, decide whether there is an edge, and get its location.
Typically geometrical approaches to object detection are not hugely successful as the appearance model you assume can quite easily be violated by occlusion, noise or orientation changes.
Machine learning approaches typically work much better in my opinion and would probably provide a more robust solution to your problem. Since you appear to be working with OpenCV, you could take a look at Cascade Classifiers, for which OpenCV provides both Haar-wavelet and local-binary-pattern feature-based classifiers.
The link I have provided is a tutorial with very complete steps explaining how to create a classifier with several prewritten utilities. Basically you will create a directory with 'positive' images of cars and a directory with 'negative' images of typical backgrounds. A utility, opencv_createsamples, can be used to create training images warped to simulate different orientations and average intensities from a small set of images. You then run the utility opencv_traincascade, setting a few command-line parameters to select different training options, and it outputs a trained classifier for you.
Detection can be performed using either the C++ or the Python interface with this trained classifier.
For instance, using Python you can load the classifier and perform detection on an image getting back a selection of bounding rectangles using:
import cv2

image = cv2.imread('path/to/image')                   # load the input image
cc = cv2.CascadeClassifier('path/to/classifierfile')  # load the trained cascade
objs = cc.detectMultiScale(image)                     # bounding rectangles as (x, y, w, h)