Good day.
I'm trying to analyze a human body contour using OpenCV. I have already recognized the head (the topmost point of the contour), the left and right arms (the leftmost and rightmost points), and the left and right legs (I split the body contour into two halves and found the bottommost point of each). I also have the shoulder points (the topmost points between the corresponding leg and foot).
But I also need to recognize the armpits, and I have no idea how. Here's what I already have (the red outline is my contour; I'm working with an HSV image, if that even matters):
You can use findContours() and then the convex hull here, which could serve you better: the armpits show up as deep gaps (convexity defects) between the contour and its hull.
How can I calculate the distance between the center of the circle and its extreme edges?
Below is the sample image, and the green line is the distance I want to calculate.
How can I do this using OpenCV?
Hints:
This is not easy to do reliably. First of all, make sure you take the picture straight on, to avoid parallax errors.
Possibly find the central circle(s) with a Hough transform. This gives you the center.
Now the hard part is to segment the blades, possibly by thresholding a luminance image or anything close to that cream color, or by using a general segmentation method.
Then figure out which blade is segmented most accurately, and find the longest distance from the center to any point of that blade (this gives you the tip).
If you don't care about accuracy, it is not impossible that Hough also gives you the outer circle.
What I'm doing
I'm trying to process (badly taken) photos of receipts, and I'm stuck at warping the perspective. My first attempt was to find the corners of the receipt using contours, which worked pretty well.
But then I have images like this one, in which part of the receipt was not captured (perhaps blocked by another piece of paper, etc.), so using the corners would yield bad results.
What I tried
I then moved on to line detection using the Hough transform. The idea is that receipts usually have a few horizontal lines running across them. This is what I have so far.
My first thought was to use findHomography with points on the two sides as the source. To calculate the y-coordinate of each destination point, I'd find the distance between that point and some reference line.
The problem
But then I realized that this is not the correct way: perspective does not preserve ratios of distances, so a line that is exactly halfway between the top and bottom of the real receipt wouldn't be halfway in the warped image.
Question
So I don't know the locations of the "destination" points; what I do know is that all of the angles between the white and red lines should be 90 degrees. How do I find the transformation matrix in this case?
I'm trying to count the number of corners detected by the Harris corner detector in Python, in real time. I searched the whole internet but couldn't find a single link or post about it. You can also suggest any other, better method for detecting and counting corners. Thanks in advance for the help.
The Harris Corner Detector is an image filter that yields a high response at corner points in an image.
If you look at the OpenCV tutorial on Harris, you'll notice the following code is used to draw circles around these high responses:
# Drawing a circle around corners
for i in range(dst_norm.shape[0]):
    for j in range(dst_norm.shape[1]):
        if int(dst_norm[i,j]) > thresh:
            cv.circle(dst_norm_scaled, (j,i), 5, (0), 2)
You can use the same loop to increment a counter instead of drawing a circle. As you can see, this involves choosing a threshold value suited to your specific image data.
I am trying to extract the tiles (letters) placed on a Scrabble board. The goal is to identify/read all the words present on the board.
An example image -
Ideally, I would like to find the four corners of the Scrabble board and apply a perspective transform for further processing.
After Perspective transform -
The algorithm that I am using is as follows -
Apply adaptive thresholding to the grayscale image of the Scrabble board.
Dilate/close the image, find the largest contour in it, then find the convex hull and completely fill the area enclosed by the hull.
Find the boundary points (contour) of the resulting image, apply contour approximation to get the corner points, then apply the perspective transform.
Corner Points found -
This approach works with images like these. But, as you can see, many square boards have a base that is curved at the top and bottom, and sometimes the base is a big circular board. With these images my approach fails. Example images and outputs -
Board with Circular base:
Points found using above approach:
I can post more such problematic images, but this image should give you an idea about the problem that I am dealing with. My question is -
How do I find the rectangular board when a circular board is also present in the image?
Some points I would like to state -
I tried using Hough lines to detect the lines in the image, find the largest vertical line(s), and then find their intersections to detect the corner points. Unfortunately, because of the tiles, all the lines come out distorted/disconnected, so my attempts failed.
I also tried applying contour approximation to all the contours found in the image (assuming that the large rectangle, too, would show up as a contour), but that approach failed as well.
I have implemented the solution in OpenCV-Python. Since it's the approach that matters here, and the question was becoming a tad too long, I didn't post the relevant code.
I am willing to share more such problematic images as well, if it is required.
Thank you!
EDIT1
#Silencer's answer has been mighty helpful for identifying the letters in the image, but I also want to find the placement of the words accurately. Hence, I feel that identifying the rows and columns is necessary, and I can only do that once a perspective transform has been applied to the board.
I wrote an answer on MSER text detection:
Trying to Plot OpenCV's MSER regions using matplotlib
The code generates the following results on your images.
You can have a try.
I think #Silencer has already given a quite promising solution.
But for the perspective transform: as you mentioned, you have already tried Hough lines to find the largest rectangle, and it fails because of the tiles.
If you have a large image data set, say more than 1000 images, you could also give a deep-learning-based approach a shot: train a model that takes the images as input and outputs the coordinates of the rectangle's corner points.
I have a contour extracted from a picture of a face using OpenCV (image below). I need to determine the points of highest curvature on this contour in Python (in the image, I need to find the red dots to the right of the red line). How can I do it?
Just find the contour, compute its curvature, and then find the maxima: curvature
This article may also be useful: article