How to determine the smoothness or roughness of a contour in OpenCV Python

I am trying to compare simple geometric images on a white background. These images should be labeled as similar (Ex1 Ex2 and Ex1 Ex2). Due to the nature of the objects, feature matching does not generate enough accurate matches to determine similarity, and matchShapes does not take internal features into account. I would like to detect when the outer contour has additions that distort the shape compared to the corresponding image. Is it possible to determine the smoothness / roughness of a contour using OpenCV?
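There is no single OpenCV call for this, but one common heuristic (my suggestion, not something from the post) is to compare the contour's perimeter with the perimeter of its convex hull: additions sticking out of an otherwise smooth outline drive the ratio above 1. A minimal sketch, with shape.png as a hypothetical dark shape on a white background:

import cv2

def contour_roughness(contour):
    # Perimeter of the contour divided by the perimeter of its convex hull.
    # A smooth convex outline gives roughly 1.0; bumps and protrusions raise it.
    perimeter = cv2.arcLength(contour, True)
    hull = cv2.convexHull(contour)
    return perimeter / cv2.arcLength(hull, True)

img = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input path
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
outer = max(contours, key=cv2.contourArea)
print("roughness:", contour_roughness(outer))

As an alternative along the same lines, cv2.approxPolyDP with a small epsilon could be used: counting how many vertices survive the simplification gives another rough smoothness measure.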

Related

Counting bacterial colonies on a Petri dish: blob and blob cluster detection on a heterogeneous surface

I have a task of identifying the number of bacterial colonies on a relatively diverse set of top-down photos of a Petri dish located on a table. The basic process is the following:
detect the Petri dish in the image and crop everything outside of it;
apply binary thresholding, which should result in a black background and white colonies or clusters thereof;
use a simple blob detector or watershed to identify the colonies, highlight them on the source image, and output their count.
Input example 1; Input example 2; Input example 3 (an edge case)
Problem 1
The table around the Petri dish isn't smooth and contains spots so I usually use Hough transform to detect the dish and remove everything outside of it. The problem is that there are light reflections near the edge of the Petri dish represented as rings with their radius on par with that of the dish edge, as well as other reflections that obscure the view of the colonies and affect the thresholding applied. So I need reliable code for detecting the innermost circle that has roughly the same centre as the outer border of the Petri dish and doesn't contain any further reflections, i.e. cropping at the outer border is sub-optimal.
Cropping attempt
fig. 1. Grab the first circle detected by the circle Hough transform with a radius in the range [int(image.shape[1]/4), int(image.shape[1]/2)], apply a mask, and crop to [y - r:y + r, x - r:x + r]
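A minimal sketch of that cropping step, assuming dish.jpg as a hypothetical input photo and the radius range from the caption; the HoughCircles parameters are guesses that would need tuning:

import cv2
import numpy as np

img = cv2.imread("dish.jpg")  # hypothetical input photo
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)

min_r, max_r = img.shape[1] // 4, img.shape[1] // 2
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5,
                           minDist=img.shape[1] // 2,
                           param1=100, param2=50,
                           minRadius=min_r, maxRadius=max_r)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)
    mask = np.zeros(gray.shape, np.uint8)
    cv2.circle(mask, (x, y), r, 255, -1)          # keep only the dish interior
    masked = cv2.bitwise_and(img, img, mask=mask)
    cropped = masked[y - r:y + r, x - r:x + r]    # bounding square (clamping omitted)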
Problem 2.1
The colonies usually have a colour close to that of the background (the agar), and in different areas the two can overlap (e.g. the colony colour in section A matches the background colour in section B). This renders general thresholding useless. Different photos having different brightness is also an issue for the binary method and its rigid parameters: for some images a parameter of (184, 255) works, while for others only a setting as low as (120, 255) produces something half-usable.
Gaussian blur, pyrmeanshift, binary threshold
fig 2.1. Gaussian blur (3,3) + pyrmeanshift (6,27) + binary threshold (205,255)
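For reference, the caption's pipeline maps to OpenCV roughly as follows; a minimal sketch, assuming (6, 27) are the spatial and colour radii of pyrMeanShiftFiltering and dish_cropped.jpg is a hypothetical cropped dish image. The fixed threshold is exactly the fragile part described above:

import cv2

img = cv2.imread("dish_cropped.jpg")                 # hypothetical cropped dish
blurred = cv2.GaussianBlur(img, (3, 3), 0)
shifted = cv2.pyrMeanShiftFiltering(blurred, 6, 27)  # spatial radius 6, colour radius 27
gray = cv2.cvtColor(shifted, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 205, 255, cv2.THRESH_BINARY)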
Problem 2.2
The bacterial colonies are round and sometimes form clusters of overlapping circles, which the simple blob detector tends to ignore. The algorithm is supposed to detect each cluster and identify how many colonies (circles) it contains. To tackle this, I have tried the Euclidean distance transform coupled with watershed as an alternative to the simple blob detector, but this needs to be fed a clean image containing nothing but the colonies themselves, so a robust thresholding algorithm is required to remove all the light reflections and eliminate the background (agar) gradient. There are also many spots on the Petri dish, usually smaller than the colonies and not really round; these should be ignored by the detector. I have heard of adaptive thresholding for handling a varied background, but it tends to turn the non-colony spots into full-fledged circles, which is not optimal.
Adaptive threshold - Gaussian method
fig 2.2. Adaptive threshold (Gaussian C)
An attempt at detecting colonies on input example 2
fig 3. A failed attempt at using watershed with distance transform, demonstrating that this algorithm requires a well cleaned-up and properly thresholded input
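For what it's worth, a minimal sketch of the distance-transform + watershed step, assuming a clean binary mask colonies_mask.png as input (which, as the caption shows, is exactly the hard prerequisite); the 0.6 seed threshold is a guess:

import cv2
import numpy as np

binary = cv2.imread("colonies_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical clean mask
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)

# peaks of the distance map act as one seed per colony
_, sure_fg = cv2.threshold(dist, 0.6 * dist.max(), 255, 0)
sure_fg = sure_fg.astype(np.uint8)

n_seeds, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1                             # reserve label 0 for the unknown band
markers[cv2.subtract(binary, sure_fg) == 255] = 0

color = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
markers = cv2.watershed(color, markers)           # boundaries are labelled -1
print("colonies found:", n_seeds - 1)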
I'm interested to know whether this task is feasible for a varied collection of photos taken in different lighting conditions, with colonies of different sizes and colours. If so, what are the ways to approach it?

How to find center coordinates of numbers in an image

I'm currently working on my first assignment in image processing (using OpenCV in Python). My assignment is to calculate a precise score (to tenths of a point) for one or several shooting holes in an image uploaded by a user. One of the requirements is to transform the uploaded shooting-target image to a bird's-eye view for further processing. For that, I have decided I need to find the centre coordinates of the numbers (7 & 8) to use them as the four corners of my quadrilateral.
Unfortunately, there are several limitations that need to be taken into account.
Limitations:
resolution of the processed shooting target image can vary
the image can be taken in different lighting conditions
the image processed by this part of my algorithm will always be taken under an angle (extreme angles will be automatically rejected)
the image can be slightly rotated (+/- 10 degrees)
the shooting target can be just a part of the image
the image can be only of the center black part of the target, meaning the user doesn't have to take a photo of the whole shooting target (but there always has to be the center black part on it)
this algorithm can take a maximum of 2000ms runtime
What I have tried so far:
Template matching
here I quickly realized that it was unusable, since the numbers can be slightly rotated and at a different scale
Feature matching
I have tried all of the different feature matching types (SIFT, SURF, ORB...)
unfortunately, the numbers do not have a very distinctive set of features, so matching produced quite a lot of false positives; I could possibly filter them by adding shape matching, etc.
the biggest blocker was runtime: feature matching for a single number took around 5000 ms, even after optimizations (on a MacBook Pro 2017)
Optical character recognition
I mostly tried using pytesseract library
even after thresholding the image to inverted binary (so the text of numbers 7 and 8 is black and the background white) it failed to recognize them
I also tried several ways of preprocessing the image and I played a lot with the tesseract config parameter but it didn't seem to help whatsoever
Contour detection
I easily detected all of the wanted numbers (7 & 8) as single contours, but failed to filter out the false positives (since the image can come in different resolutions, and there are two types of targets with different number sizes, I couldn't simply threshold the contours by width, height or area)
After detecting the numbers as contours, I wanted to extract them as ROIs and run OCR on them (but with so many false positives this would take a lot of time)
I also tried filtering them by using the cv2.matchShapes function on both contours and a cropped template / ROI, but it seemed really unreliable (a resolution-independent size filter is sketched below)
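One way to make the size filter resolution-independent (my suggestion, not something from the post) is to normalize contour measurements by the image size before thresholding. A minimal sketch, with target.jpg as a hypothetical input and placeholder bounds that would need tuning per target type:

import cv2

img = cv2.imread("target.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical target photo
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

image_area = float(img.shape[0] * img.shape[1])
candidates = []
for c in contours:
    rel_area = cv2.contourArea(c) / image_area         # scale-free size measure
    x, y, w, h = cv2.boundingRect(c)
    aspect = w / float(h)
    if 1e-5 < rel_area < 1e-3 and 0.3 < aspect < 1.2:  # placeholder bounds
        candidates.append(c)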
Example processed images: [six high-resolution example images were linked in the original post]
As of right now, I'm at a loss for how to proceed; I have tried everything I could think of. I would be immensely happy if any of you image recognition experts could give me any kind of advice, or even better a usable code example, to help me solve my problem.
Thank you all in advance.
Find the black disk by adaptive binarization and contour (possibly blur to erase the inner features);
Fit an ellipse to the outline, as accurately as possible;
Find at least one edge of the square (Hough lines);
Classify the edge as one of NWSE (according to angle);
Use the ellipse and the line information to reconstruct the perspective transformation (it is a homography);
Apply the inverse homography to straighten the image and obtain the exact target center and axis;
Again by adaptive binarization, find the bullet holes (center/radius);
Rate the holes by their distance to the center, relative to the black disk radius.
If the marking scheme is variable, detect the circles (Hough circles, using the known center, or detect peaks in an oblique profile starting from the center).
If necessary, you could OCR the digits, but it seems that the score is implicitly starting at one in the outer ring.
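A rough sketch of steps 1-3 above, under the assumption of a reasonably clean photo; target.jpg is a hypothetical input and all parameter values are guesses that would need tuning:

import cv2
import numpy as np

gray = cv2.imread("target.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
blur = cv2.GaussianBlur(gray, (9, 9), 0)               # erase the inner features

# step 1: adaptive binarization, then the black disk as the largest contour
binary = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 51, 10)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
disk = max(contours, key=cv2.contourArea)

# step 2: ellipse fit
(cx, cy), (major, minor), angle = cv2.fitEllipse(disk)

# step 3: candidate edges of the square (to be classified as N/W/S/E by angle)
edges = cv2.Canny(blur, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=120,
                        minLineLength=gray.shape[1] // 4, maxLineGap=10)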

Image Segmentation based on Pixel Density

I need some help developing some code that segments a binary image into components of a certain pixel density. I've been doing some research into OpenCV algorithms, but before developing my own algorithm, I wanted to ask around to make sure this hasn't already been done.
For instance, in this picture, I have code that imports it as a binary image. However, is there a way to segment the objects from the lines? I would need to segment nodes (corners) and objects (the circle in this case). However, the object does not necessarily have to be a regular shape.
The solution I thought of was to use pixel density. Most of the picture is made up of lines, and the objects have a greater pixel density than the lines. Is there a way to segment based on that?
Below is a working example of the task.
Original Picture:
Resulting images after segmentation of nodes (intersections of multiple lines) and components (electronic components like the resistor or the voltage source in the picture)
You can use an integral image to quickly compute the density of black pixels in a rectangular region. Detection of regions with high density can then be performed with a moving window in varying scales. This would be very similar to how face detection works but using only one super-simple feature.
It might be beneficial to make all edges narrow with something like skeletonizing before computing the integral image to make the result insensitive to wide lines.
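A minimal sketch of the integral-image density check, assuming diagram.png as a hypothetical binarized input; the window size and density threshold are illustrative only:

import cv2
import numpy as np

binary = cv2.imread("diagram.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
black = (binary < 128).astype(np.uint8)    # 1 where a pixel is black
sat = cv2.integral(black)                  # (h+1, w+1) summed-area table

def black_density(x, y, w, h):
    # Fraction of black pixels in the w-by-h window at (x, y), in O(1).
    s = sat[y + h, x + w] - sat[y, x + w] - sat[y + h, x] + sat[y, x]
    return s / float(w * h)

# slide a window (a single scale here; repeat at varying scales) and
# keep regions whose density exceeds that of plain lines
win = 40
dense = [(x, y) for y in range(0, black.shape[0] - win, win // 2)
                for x in range(0, black.shape[1] - win, win // 2)
                if black_density(x, y, win, win) > 0.25]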
OpenCV has some functionality for finding contours that is able to put the contours in a hierarchy. It might be what you are looking for. If not, please add some more information about your expected output!
If I understand correctly, you want to detect the lines and the circle in your image, right?
If it is the case, have a look at the Hough line transform and Hough circle transform.
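A minimal example of both transforms, assuming circuit.png as a hypothetical input; all parameters would need tuning for real drawings:

import cv2
import numpy as np

img = cv2.imread("circuit.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
edges = cv2.Canny(img, 50, 150)

# straight wire segments
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=5)

# circular components (e.g. the voltage source)
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                           param1=100, param2=40, minRadius=10, maxRadius=60)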

Contour completion in image segmentation

I am attempting to use machine learning (namely random forests) for image segmentation. The classifier uses a number of pixel-level features to classify pixels as either edge or non-edge pixels. I recently applied my classifier to a set of images that are pretty difficult to segment even manually (Image segmentation based on edge pixel map) and am still working on obtaining reasonable contours from the resulting probability map. I also applied the classifier to an easier set of images and am obtaining quite good predicted outlines (Rand index > 0.97) when I adjust the threshold to 0.95. I am interested in improving the segmentation result by filtering contours extracted from the probability map.
Here is the original image:
The expert outlines:
The probability map generated from my classifier:
This can be further refined when I convert the image to binary based on a threshold of 0.95:
I tried filling holes in the probability map, but that left me with a lot of noise and sometimes merged nearby cells. I also tried contour finding in OpenCV, but this didn't work either, as many of the contours are not completely connected: a few pixels are missing here and there in the outlines.
Edit: I ended up using Canny edge detection on the probability map.
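For the gap problem mentioned above, a small morphological closing can bridge one- or two-pixel breaks before contour finding (my suggestion, not what the author ended up using). A minimal sketch, assuming probability_map.png holds the classifier output scaled to 0-255:

import cv2

prob = cv2.imread("probability_map.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
_, binary = cv2.threshold(prob, int(0.95 * 255), 255, cv2.THRESH_BINARY)

# a 5x5 elliptical closing bridges small breaks in the outlines
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)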
The initial image seems to be well contrasted and I guess we can simply threshold to obtain a good estimate of the cells. Here is a morphological area based filtering of the thresholded image:
Threshold:
Area-based opening filter (the size parameter needs to be set based on your dataset of cells under study):
Area-based closing filter (the size parameter needs to be set based on your dataset of cells under study):
Contours using I - imerode(I):
Code snippet:
% C is the input image
C10 = C > 10; % threshold depends on the average contrast in your dataset
C10_areaopen = bwareaopen(C10, 2500); % area opening removes small components that are not cells
C10_areaopenclose = ~bwareaopen(~C10_areaopen, 100); % area closing fills small holes
se = strel('disk', 1);
figure, imshow(C10_areaopenclose - imerode(C10_areaopenclose, se)) % inner contour: I - erode(I)
To get smoother shapes I guess fine opening operations can be performed on the filtered images, thus removing any concave parts of the cells. Also for cells that are attached one could use the distance function and the watershed over the distance function to obtain segmentations of the cells: http://www.ias-iss.org/ojs/IAS/article/viewFile/862/765
I guess this can be also used on your probability/confidence maps to perform nonlinear area based filtering.

Counting the pips on dice using OpenCV

I am doing a dice value recognition hobby project that I want to run on a Raspberry Pi. For now, I am just learning OpenCV, as that seems like the hardest part for me. I have gotten this far: I have dilated, eroded, and Canny-filtered the dice, which has given me a hierarchy of contours. The image shows the bounding rectangles for the parent contours:
My question is: how would I proceed to count the pips? Is it better to do some template matching for face values, or should I mathematically test if a pip is in a valid position within the bounding box?
There could be multiple ways to do it:
Use hole filling and then morphological operators to filter for circles.
A simpler approach would be to use white pixel density (the % of white pixels): a five has a higher white pixel density than a one (a short sketch follows the references below).
Use image moments (a mathematical property representing the shape and structure of an image) to train a neural network on the different kinds of dice faces.
Reference:
Morphology
http://blogs.mathworks.com/pick/2008/05/23/detecting-circles-in-an-image/
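A minimal sketch of the density idea in OpenCV Python, assuming die_face.png is a hypothetical cropped grayscale face with dark pips on a light surface; the pip area bounds are guesses:

import cv2

face = cv2.imread("die_face.png", cv2.IMREAD_GRAYSCALE)  # hypothetical cropped face
_, pips = cv2.threshold(face, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# density heuristic: more pips means a larger white fraction
density = cv2.countNonZero(pips) / float(pips.size)

# or count the pips directly as connected components of plausible size
n, _, stats, _ = cv2.connectedComponentsWithStats(pips)
pip_count = sum(1 for i in range(1, n) if 20 < stats[i, cv2.CC_STAT_AREA] < 500)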
As Sivam Kalra said, there are many valid approaches.
I would go with template matching, as it should be robust and relatively easy to implement.
using your green regions in the Canny image, copy each found die face from the original grayscale image into a smaller search image. The search image should be slightly larger than a die face, and larger than your 6 pattern images.
optionally normalize the search image
use cvMatchTemplate with each of the 6 possible dice patterns (I recommend the CV_TM_SQDIFF_NORMED algorithm, but test which works best)
find and store the global minimum in the result image for each of the 6 matches
rotate the search image in ~2° steps from 0° to 90°, and repeat the template match for each step
the dice pattern with the lowest minimum over all steps is the correct one.
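A minimal sketch of that loop, assuming the search image die_search.png and pattern files face_1.png through face_6.png exist (both names are hypothetical) and are already normalized:

import cv2
import numpy as np

search = cv2.imread("die_search.png", cv2.IMREAD_GRAYSCALE)
patterns = [cv2.imread("face_%d.png" % i, cv2.IMREAD_GRAYSCALE)
            for i in range(1, 7)]                 # the six face patterns

best = (None, np.inf)          # (face value, score); lower SQDIFF is better
h, w = search.shape
for angle in range(0, 90, 2):  # ~2 degree steps from 0 to 90
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(search, M, (w, h))
    for value, pat in enumerate(patterns, start=1):
        result = cv2.matchTemplate(rotated, pat, cv2.TM_SQDIFF_NORMED)
        if result.min() < best[1]:
            best = (value, result.min())
print("detected face:", best[0])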
Contour hierarchy could be a good and very easy option, but you need a perpendicular view.
So you can do it with contours, but fit circles with some threshold.
(Sorry about my appalling English.)
