How to smooth a contour using OpenCV in Python?

I am trying to find the contours of human hands for gesture recognition. After some pre-processing and thresholding, I am extracting the contour. The contours I am getting are as below.
I want to smooth out the contour. I tried using scipy.ndimage.zoom, but it seems to change the dimensions of the input numpy array. How can I smooth out the contour while keeping the shape of the input array the same?
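Since the question is about keeping the array shape fixed, one option (a sketch, not the only way) is to smooth the contour coordinates themselves with a circular moving average, which leaves the number of points unchanged:

```python
import numpy as np

def smooth_contour(points, window=5):
    """Smooth a closed contour with a circular moving average.

    points: (N, 2) array of (x, y) contour coordinates.
    Returns an array of the same shape, so the point count is unchanged.
    """
    points = np.asarray(points, dtype=float)
    kernel = np.ones(window) / window
    smoothed = np.empty_like(points)
    for dim in range(points.shape[1]):
        # wrap-around padding treats the contour as closed
        padded = np.concatenate([points[-window:, dim],
                                 points[:, dim],
                                 points[:window, dim]])
        smoothed[:, dim] = np.convolve(padded, kernel, mode='same')[window:-window]
    return smoothed
```

Because the padding wraps around, the filter treats the contour as closed; scipy.ndimage.gaussian_filter1d with mode='wrap' would do the same thing with smoother Gaussian weighting.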

Related

How to improve stabilization of image sequence of face using Python OpenCV

I have been capturing a photo of my face every day for the last couple of months, resulting in a sequence of images taken from the same spot, but with slight variations in orientation of my face. I have tried several ways to stabilize this sequence using Python and OpenCV, with varying rates of success. My question is: "Is the process I have now the best way to tackle this, or are there better techniques / order to execute things in?"
My process so far looks like this:
1. Collect images; keep the original image, a downscaled version, and a downscaled grayscale version.
2. Using dlib.get_frontal_face_detector() on the grayscale image, get a rectangle containing my face.
3. Using the dlib shape predictor 68_face_landmarks.dat, obtain the coordinates of the 68 face landmarks, and extract the positions of the eyes, nose, chin and mouth (specifically landmarks 8, 30, 36, 45, 48 and 54).
4. Using a 3D representation of my face (i.e. a numpy array containing 3D coordinates of an approximation of these landmarks on my actual face in an arbitrary reference frame) and cv2.solvePnP, calculate a perspective transform matrix M1 to align the face with my 3D representation.
5. Using the transformed face landmarks (i.e. cv2.projectPoints(face_points_3D, rvec, tvec, ...) with _, rvec, tvec = cv2.solvePnP(...)), calculate the 2D rotation and translation required to align the eyes vertically, center them horizontally, and place them a fixed distance from each other, obtaining the transformation matrix M2.
6. Using M = np.matmul(M2, M1) and cv2.warpPerspective, warp the image.
Using this method, I get okay-ish results, but it seems the 68-landmark prediction is far from perfect, resulting in twitchy stabilization and sometimes very skewed images (in that I can't remember having such a large forehead...). For example, the landmark prediction for one of the corners of the eye does not always align with the actual eye, resulting in a perspective transform that skews the actual eye 20px down.
In an attempt to fix this, I have tried using SIFT to find features in two different photos (aligned using the above method) and obtain another perspective transform. I then force the features to be somewhere around my detected face landmarks so as not to align the background (using a mask in cv2.SIFT_create().detectAndCompute(...)), but this sometimes results in features being found only (or predominantly) around one of the eyes, or not around the mouth, again resulting in extremely skewed images.
What would be a good way to get a smooth sequence of images, stabilized around my face? For reference, see this video (not mine), which is stabilized around the eyes.
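One common way to tame twitchy landmarks (an assumption about what helps here, not a guaranteed fix) is to low-pass filter the landmark coordinates, or the resulting transform parameters, across frames before warping. A minimal exponential-moving-average sketch:

```python
import numpy as np

def ema_smooth(landmark_sequence, alpha=0.3):
    """Exponentially smooth per-frame landmark coordinates across time.

    landmark_sequence: (T, 68, 2) array of landmarks per frame.
    Smaller alpha = smoother (but laggier) stabilization.
    """
    seq = np.asarray(landmark_sequence, dtype=float)
    out = np.empty_like(seq)
    out[0] = seq[0]
    for t in range(1, len(seq)):
        # blend this frame's detection with the running estimate
        out[t] = alpha * seq[t] + (1 - alpha) * out[t - 1]
    return out
```

Applying the same idea to rvec/tvec or to the final matrix M is another option; either way, the per-frame detection noise gets averaged out instead of being baked into each warp.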

How to create a boundary mask around an object?

I have some processed images that have noise (background pixels) around the boundaries. Is there a way to detect only the boundary of the object itself and create a mask to remove the background pixels around the boundaries?
I'm a beginner with OpenCV, so any code samples would help.
Example:
Original Image
Processed Image
Expected Output
I have tried the findContours method, but it creates a mask that includes the noisy pixels as well.
I have also tried the erode method, but it does not give consistent results for different image sizes, so that is not the solution I'm looking for.

Edge detection from an image using Python libraries, and drawing contours

Hello everyone,
I am trying very hard to extract edges from a specific image. I have tried many ways, including:
grayscale conversion, blurring (Laplacian, Gaussian, averaging, etc.), gradients (Sobel, Prewitt, Canny)
With morphological transformations
Even thresholding with different combinations
Even HSV conversion and masking and then thresholding
Using contour methods with area thresholding
On top of all of this, I have tried different combinations of all of the above, but none of them gave an excellent result. The main problem is still too many edges/lines. The image is an orthomosaic 2D photo of a marble wall. I will upload the image. Does anyone have any ideas?
P.S. The final result should be an image that has only the "skeleton" or shape of the marbles.
Wall.tif

How to convert from edges to contours in OpenCV

I have been getting images like this after edge detection:
I'd like it to connect the edges together into straight-line polygons.
I thought this could be done using findContours with chain approximation, but that doesn't seem to be working well for me.
How can I convert an image like the one above into simple straight-line polygons (that look like skewed triangles, trapezoids and squares)?
You need to first detect the lines and then construct the contours. You can do that using HoughLines(). There is a short tutorial here.
Blur the image, then find the contours.
If the edges are that close together, a simple blurring with something like
import cv2
import numpy as np

def blur_image(image, amount=3):
    '''Blurs the image with a normalized box filter.
    Does not affect the original image.'''
    kernel = np.ones((amount, amount), np.float32) / (amount ** 2)
    return cv2.filter2D(image, -1, kernel)
should connect all the little gaps, and you can do contour detection with that.
If you then want to convert those contours into polygons, you can look to approximate those contours as polygons. A great tutorial with code for that is here.
The basic idea behind detecting polygons is running
cv2.findContours(image, cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)
Here cv2.RETR_EXTERNAL retrieves only the outermost contours, and cv2.CHAIN_APPROX_SIMPLE compresses straight runs of contour points down to their endpoints, so simple shapes already come back as a short list of vertices.

Contour completion in image segmentation

I am attempting to use machine learning (namely random forests) for image segmentation. The classifier utilizes a number of different pixel-level features to classify pixels as either edge pixels or non-edge pixels. I recently applied my classifier to a set of images that are pretty difficult to segment even manually (Image segmentation based on edge pixel map) and am still working on obtaining reasonable contours from the resulting probability map. I also applied the classifier to an easier set of images and am obtaining quite good predicted outlines (Rand index > 0.97) when I adjust the threshold to 0.95. I am interested in improving the segmentation result by filtering contours extracted from the probability map.
Here is the original image:
The expert outlines:
The probability map generated from my classifier:
This can be further refined when I convert the image to binary based on a threshold of 0.95:
I tried filling holes in the probability map, but that left me with a lot of noise and sometimes merged nearby cells. I also tried contour finding in OpenCV, but this didn't work either, as many of these contours are not completely connected - a few pixels are missing here and there in the outlines.
Edit: I ended up using Canny edge detection on the probability map.
The initial image seems to be well contrasted and I guess we can simply threshold to obtain a good estimate of the cells. Here is a morphological area based filtering of the thresholded image:
Threshold:
Area based opening filter(this needs to be set based on your dataset of cells under study):
Area based closing filter(this needs to be set based on your dataset of cells under study):
Contours using I-Erosion(I):
Code snippet:
% C is the input image
C10 = C > 10; % threshold depends on the average contrast in your dataset
C10_areaopen = bwareaopen(C10, 2500); % area opening removes small components that are not cells
C10_areaopenclose = ~bwareaopen(~C10_areaopen, 100); % area closing fills holes
se = strel('disk', 1);
figure, imshow(C10_areaopenclose - imerode(C10_areaopenclose, se)) % inner contour: I - erode(I)
To get smoother shapes, I guess fine opening operations can be performed on the filtered images, thus removing any concave parts of the cells. Also, for cells that are attached, one could use the distance function and the watershed over the distance function to obtain segmentations of the individual cells: http://www.ias-iss.org/ojs/IAS/article/viewFile/862/765
I guess this can be also used on your probability/confidence maps to perform nonlinear area based filtering.
