Using OpenCV I detected some circles arranged in a specific pattern. From them I built an array holding each circle's x, y coordinates and the value sampled at that point.
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, 1, minDist,
                           param1=param1, param2=param2,
                           minRadius=minRadius, maxRadius=maxRadius)
if circles is not None:
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        # Draw the detected circle:
        cv2.circle(img, (i[0], i[1]), i[2], (0, 255, 0), 2)
        # Average the raw values in a 10x10 window around the center:
        temperatureArrayValue = np.uint16(np.mean(rawData[(i[1]-5):(i[1]+5), (i[0]-5):(i[0]+5)]))
        temperatureArray[temperatureArrayIteration][0] = i[0]  # x (image width 640)
        temperatureArray[temperatureArrayIteration][1] = i[1]  # y (image height 480)
        temperatureArray[temperatureArrayIteration][2] = temperatureArrayValue
        temperatureArrayIteration += 1  # move to the next array slot
My main goal is to detect every circle in this pattern; if one is missed, I want to insert the coordinates where the circle should be and store a value of 0 there. That lets me compare a few frames and track how the value inside each circle changes over time.
The code above fails to detect one circle (the highest one in the picture). I don't know how to compare two arrays and be sure that every element corresponds to a specific circle. The frames come from video, so the coordinates change dynamically.
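One way to keep array elements aligned with specific circles is to keep a reference set of expected centers (e.g., from a frame where every circle was detected) and, for each expected circle, take the nearest detection within a tolerance; missing circles keep the expected coordinates with value 0. A minimal sketch, with placeholder names (expectedCenters, detected, maxDist) that are not from the original code:

import numpy as np

def match_circles(expectedCenters, detected, maxDist=15):
    # expectedCenters: (N, 2) reference (x, y) positions of the pattern.
    # detected: (M, 3) rows of (x, y, value) from the current frame.
    # Returns an (N, 3) array whose row k always describes circle k.
    result = np.zeros((len(expectedCenters), 3))
    result[:, :2] = expectedCenters           # default: expected position, value 0
    for k, (ex, ey) in enumerate(expectedCenters):
        if len(detected) == 0:
            continue
        dists = np.hypot(detected[:, 0] - ex, detected[:, 1] - ey)
        j = np.argmin(dists)
        if dists[j] < maxDist:                # close enough to be the same circle
            result[k] = detected[j]           # keep detected position and value
    return result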
I'm trying to morph two images of faces using an inverse warp. I have the Delaunay triangles for both images as well as all transformation matrices for all pairs of corresponding triangles.
I have applied the matrix to every pixel inside the triangles, but the image I am getting is all messed up, and some pixels aren't being filled in either.
I suspect the vertices lists are not in order, which means the triangles don't correspond. Or it could just be me messing up the row/column order.
Here's my code:
import cv2
from scipy.spatial import Delaunay
from skimage.draw import polygon
import numpy as np

def drawDelaunay(img, landmarks, color):
    tri = Delaunay(landmarks)
    vertices = []
    for t in landmarks[tri.simplices]:
        # t = [int(i) for i in t]
        pt1 = [t[0][0], t[0][1]]
        pt2 = [t[1][0], t[1][1]]
        pt3 = [t[2][0], t[2][1]]
        cv2.line(img, pt1, pt2, color, 1, cv2.LINE_AA, 0)
        cv2.line(img, pt2, pt3, color, 1, cv2.LINE_AA, 0)
        cv2.line(img, pt3, pt1, color, 1, cv2.LINE_AA, 0)
        vertices.append([pt1, pt2, pt3])
    return img, vertices
def getAffineMat(triangle1, triangle2):
    x = np.transpose(np.matrix([*triangle1]))
    y = np.transpose(np.matrix([*triangle2]))
    # Add ones to bottom of x and y
    x = np.vstack((x, [1, 1, 1]))
    y = np.vstack((y, [1, 1, 1]))
    xInv = np.linalg.pinv(x)
    return np.dot(y, xInv)
srcImg = face2
srcRows, srcCols, srcDepth = face2.shape
destImg = np.zeros(face1.shape, dtype=np.uint8)

for triangle1, triangle2 in zip(vertices1, vertices2):
    transMat = getAffineMat(triangle1, triangle2)
    r, c = list(map(list, zip(*triangle2)))
    rr, cc = polygon(r, c)
    for row, col in zip(rr, cc):
        transformed = np.dot(transMat, [col, row, 1])
        srcX, srcY, *_ = np.array(transformed.T)
        # Check if pixel is within image boundaries
        if isWithinBounds(srcCols, srcRows, col, row):
            # Interpolate the color of the pixel from the four nearest pixels
            color = bilinearInterpolation(srcImg, srcX, srcY)
            # Set the color of the current pixel in the destination image
            destImg[row, col] = color
I wish to implement this without getAffineTransform or warpAffine. Any help would be much appreciated!
Sources:
Transfer coordinates from one triangle to another triangle
https://devendrapratapyadav.github.io/FaceMorphing/
But you don't have corresponding triangles! This looks like 2 separate Delaunay triangulations, maybe made on matching points, but still with no matching triangles. You can't run two Delaunay triangulations, one in each image, and expect them to match. You need one Delaunay triangulation, and then use the same edges on both sides (so, for at least one side, the triangulation will not be exactly Delaunay).
Look for example at the top-right corner of your images.
On one side you have 4 outgoing edges (counting those we can't see because they coincide with the image border, but they have to be there); on the other you have 6 outgoing edges.
The number of edges connected to two matching vertices has to be the same (otherwise, how could you warp anything?).
So, clearly, I think (I can only surmise, since you did not provide code for that part and postulate that the triangulation is correct, when I am pretty sure it is the triangulation that is wrong), you got two sets of matching points, then performed two Delaunay triangulations on those two sets of points, expecting to be able to match triangles, even though they are not at all the same triangles.
Edit: how to transform
(in reply to your question in the comments)
Use the same triangulation for both images. You have a list of points p₁, p₂, p₃, ..., pₙ in the first image, and a matching list of points q₁, q₂, q₃, ..., qₙ in the second image. You perform a triangulation in the 1st image. Its output should be a list of triplets of indices, such as (1,3,4), (1,2,3), ..., meaning that the optimal triangulation in the 1st image is the one made of triangles (p₁,p₃,p₄), (p₁,p₂,p₃), ...
And in the second image, you use the triangulation (q₁,q₃,q₄), (q₁,q₂,q₃), ...
Even if it is not the optimal triangulation of q₁,q₂,...,qₙ (the one that maximizes the smallest angle), it should not be far off, as long as q₁,q₂,...,qₙ are not that different from p₁,p₂,...,pₙ (which they are not supposed to be, if you matched both images consistently).
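In SciPy terms, a minimal sketch of this idea (points1 and points2 stand for your matched landmark lists; here they are random placeholders):

from scipy.spatial import Delaunay
import numpy as np

# Placeholder matched point sets; in practice these are your landmarks.
rng = np.random.default_rng(0)
points1 = rng.uniform(0, 100, (20, 2))
points2 = points1 + rng.normal(0, 2, (20, 2))

# Triangulate ONLY the first point set...
tri = Delaunay(points1)
# ...and reuse its index triplets for both images, so triangle i in
# image 1 always matches triangle i in image 2.
triangles1 = points1[tri.simplices]   # shape (n_triangles, 3, 2)
triangles2 = points2[tri.simplices]   # same connectivity, image-2 coordinates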
So the transformation matrices are the ones transforming coordinates between matching triangles (there is one transformation for each pair of matching triangles).
To decide which point (x',y') of the second image matches point (x,y) of the first image, you need to:
1. identify in which triangle (i,j,k) (that is, (pᵢ,pⱼ,pₖ)) the point (x,y) lies;
2. find the barycentric coordinates of (x,y) inside this triangle: (x,y) = αpᵢ + βpⱼ + γpₖ;
3. assume that (x',y') has the same barycentric coordinates inside the matching triangle, that is, (x',y') = αqᵢ + βqⱼ + γqₖ.
The transformation matrix (for triangle (i,j,k)) is then the one going from (x,y) to (x',y').
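A minimal sketch of those three steps in Python, assuming the shared triangulation from the snippet above (tri, triangles1, triangles2); the helper names are illustrative, not a fixed API:

import numpy as np

def barycentric(p, a, b, c):
    # Solve p = alpha*a + beta*b + gamma*c with alpha + beta + gamma = 1.
    T = np.array([[a[0] - c[0], b[0] - c[0]],
                  [a[1] - c[1], b[1] - c[1]]])
    alpha, beta = np.linalg.solve(T, np.asarray(p, dtype=float) - c)
    return alpha, beta, 1.0 - alpha - beta

def map_point(p, tri, triangles1, triangles2):
    # Step 1: find which source triangle contains p.
    idx = tri.find_simplex(p)
    if idx == -1:
        return None                   # p is outside the triangulation
    a, b, c = triangles1[idx]
    # Step 2: barycentric coordinates of p in that triangle.
    alpha, beta, gamma = barycentric(p, a, b, c)
    # Step 3: same coordinates in the matching destination triangle.
    qa, qb, qc = triangles2[idx]
    return alpha * qa + beta * qb + gamma * qc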
After successfully segmenting my lemons I would like to get their size in pixels and then convert this value to millimeters. I'm reading a thesis where the authors did that, but with strawberries. The first step was to crop the segmented strawberries in a rectangle:
The image (b) was called the 'minimum rectangle'. According to the authors, it is built from the extreme values of the region:
- the highest point
- the extreme left point
- the lowest point
- the extreme right point of the region of interest.
Once this is done, the width of the rectangle is measured, which gives the diameter of the strawberry in pixels.
In my case this is my input image:
And this is my desired output:
I'm programming in Python with OpenCV. I would like to crop my input image and then find the minimum rectangle, so that its width gives me the diameter of the lemon in pixels.
According to the thesis, to convert the measurement in pixels to a real-world measurement in millimeters, I should take a photograph of a square with 3 cm sides under the same conditions as the lemon images. Then I should segment this square, find its minimum rectangle as in the image above, and measure that rectangle's width in pixels. With their known-size reference they got, e.g., a width of 176 pixels. That way they got:
1 mm = 176/30 ≈ 5.87 pixels
With this information I would like to compute the width of my lemons, first in pixels, then converted to millimeters. Please assume that I took a photograph of a known square with 3 cm sides, the same as in the thesis. For the moment I can't get the minimum rectangle because I don't know how to obtain it, which is why I'm asking for your help.
Any suggestions would be appreciated. Thanks so much.
Once you have the thresholded image (mask) of your blob of interest (the lemon), it is very straightforward to get its (rotated) minimum area rectangle or its bounding rectangle. Use the cv2.minAreaRect function to get the former or the cv2.boundingRect function to get the latter. In both cases you need to compute the contours of the binary mask, get the outer and biggest contour, and pass that to either function.
Let's see an example for getting both:
import cv2
import numpy as np

# image path
path = "C://opencvImages//"
fileName = "TAkY2.png"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Grayscale conversion:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Thresholding:
threshValue, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
This is just to get the binary mask; you already have this. This is the result:
Now, get the contours. And, just to draw some results, prepare a couple of deep copies of the input that we will use to check things out:
# Find the big contours/blobs on the filtered image:
contours, hierarchy = cv2.findContours(binaryImage, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
# Deep copies of the input image to draw results:
minRectImage = inputImage.copy()
polyRectImage = inputImage.copy()
Now, get the contours and filter them by a minimum area (minArea) value. You want to just keep the biggest contour - that's the lemon perimeter:
# Look for the outer bounding boxes:
for i, c in enumerate(contours):
    if hierarchy[0][i][3] == -1:
        # Get contour area:
        contourArea = cv2.contourArea(c)
        # Set minimum area threshold:
        minArea = 1000
        # Look for the largest contour:
        if contourArea > minArea:
            # Option 1: Get the minimum area bounding rectangle
            # for this contour:
            boundingRectangle = cv2.minAreaRect(c)
            # Get the rectangle points:
            rectanglePoints = cv2.boxPoints(boundingRectangle)
            # Convert float array to int array:
            rectanglePoints = np.intp(rectanglePoints)
            # Draw the min area rectangle:
            cv2.drawContours(minRectImage, [rectanglePoints], 0, (0, 0, 255), 2)
            cv2.imshow("minAreaRect", minRectImage)
            cv2.waitKey(0)
This portion of code gets you these results. Note that this rectangle is rotated to encompass the minimum area of the contour, just as if you were actually measuring the lemon with a caliper:
You can also get the position of the four corners of this rectangle. Still, inside the loop, we have the following bit of code:
            # Draw the corner points:
            for p in rectanglePoints:
                cv2.circle(minRectImage, (p[0], p[1]), 3, (0, 255, 0), -1)
            cv2.imshow("minAreaRect Points", minRectImage)
            cv2.waitKey(0)
These are the corners of the min area rectangle:
You might or might not like this result. You might instead be looking for the bounding rectangle that is not rotated. In that case you can use cv2.boundingRect, but first you need to approximate the contour to a polygon-based set of points. This is the approach, continuing from the last line of code:
            # Option 2: Approximate the contour to a polygon:
            contoursPoly = cv2.approxPolyDP(c, 3, True)
            # Convert the polygon to a bounding rectangle:
            boundRect = cv2.boundingRect(contoursPoly)
            # Get the rectangle's top-left and bottom-right corners:
            rectangleX = boundRect[0]
            rectangleY = boundRect[1]
            rectangleWidth = boundRect[0] + boundRect[2]   # right edge (x + w)
            rectangleHeight = boundRect[1] + boundRect[3]  # bottom edge (y + h)
            # Draw the rectangle:
            cv2.rectangle(polyRectImage, (int(rectangleX), int(rectangleY)),
                          (int(rectangleWidth), int(rectangleHeight)), (0, 255, 0), 2)
            cv2.imshow("Poly Rectangle", polyRectImage)
            cv2.imwrite(path + "polyRectImage.png", polyRectImage)
            cv2.waitKey(0)
This is the result:
Edit:
This is the bit that actually crops the lemon from the last image:
# Crop the ROI:
croppedImg = inputImage[rectangleY:rectangleHeight, rectangleX:rectangleWidth]
This is the final output:
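For the original millimeter question, here is a minimal sketch of the final conversion using the calibration numbers quoted in the question (176 px for the 30 mm reference square); boundingRectangle is the cv2.minAreaRect result computed above, and which side corresponds to the diameter depends on your setup:

# Calibration from the 3 cm reference square (question's example values):
pixelsPerMm = 176.0 / 30.0                   # ~5.87 px per mm

# cv2.minAreaRect returns ((cx, cy), (w, h), angle); take the width:
(cx, cy), (w, h), angle = boundingRectangle
lemonDiameterMm = w / pixelsPerMm            # pick w or h to match your setup
print("Diameter: {:.1f} px = {:.1f} mm".format(w, lemonDiameterMm))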
I have this image of rectangular objects:
I want to be able to find the angle of rotation of each object (the circle facing up being 180 degrees).
My idea: first locate each part and draw a contour around each rectangular object, then identify the small circle within each object and draw its contour. Draw a straight reference line in the center of the image to act as a constant, then draw a straight line on each part with the 'north' end pointing towards the small circle. Finally, use the reference line and the part's line to find the angle between the two.
So how would I go about doing this? I'm open to other ideas, but this is the simplest I could come up with and I'm not sure how to go about doing it.
This is what I have so far. I need to index each contour and add numbers to identify each one once it's drawn. areaMin and areaMax are trackbar values for the area.
if areaMin < area < areaMax:
    cv2.drawContours(imgContour, cnt, -1, (255, 0, 0), 7)
    peri = cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, 0.02 * peri, True)
    x, y, w, h = cv2.boundingRect(approx)
    cv2.rectangle(imgContour, (x, y), (x + w, y + h), (255, 0, 0), 5)
I think you could calculate the lengths of two sides of the rectangle and their ratio, and then use the math.atan() method to get the angle.
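Building on that idea, here is a rough sketch; as a variant I use math.atan2() on the longer edge of the min-area rectangle (rather than atan of the side ratio), since it gives the edge's direction directly. This is my variant, not the answerer's exact method, and it leaves a 180° ambiguity that the small circle from your plan would resolve:

import math
import cv2
import numpy as np

def rect_angle_degrees(cnt):
    # Corners of the rotated min-area rectangle around the contour:
    box = cv2.boxPoints(cv2.minAreaRect(cnt))   # (4, 2) float32
    # Two adjacent edges of the box:
    e1 = box[1] - box[0]
    e2 = box[2] - box[1]
    # The longer edge is the rectangle's axis; its direction is the rotation.
    edge = e1 if np.linalg.norm(e1) > np.linalg.norm(e2) else e2
    return math.degrees(math.atan2(edge[1], edge[0])) % 360.0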
I'm using the code explained in https://www.pyimagesearch.com/2014/07/21/detecting-circles-images-using-opencv-hough-circles/#comment-480634 and trying to detect the small round profile images (5, to be precise) displayed in the lower half of this sample Instagram page (attached). What I can't figure out is:
1. why only one out of the 5 small round profile circles is captured by the code;
2. why there's a big circle displayed on the page, which seems quite absurd to me.
Here is the code I'm using:
import cv2
import numpy as np

# we create a copy of the original image so we can draw our detected circles
# without destroying the original image.
image = cv2.imread("instagram_page.png")
output = image.copy()
# the cv2.HoughCircles function requires an 8-bit, single channel image,
# so we'll convert from the BGR color space to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
#blurred = cv2.GaussianBlur(gray, (5, 5), 0)
# detect circles in the image. We pass in the image we want to detect circles in as the first
# argument, the circle detection method as the second argument (cv2.HOUGH_GRADIENT is currently
# the only circle detection method supported by OpenCV and will likely be the only method for
# some time), an accumulator ratio of 1.7 as the third argument, and a minDist of 1 pixel.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.7, minDist=1, param1=300, param2=100, minRadius=3, maxRadius=150)
print("Circles len -> {}".format(len(circles)))
# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles from
    # floating point to integers, allowing us to draw them on our output image.
    circles = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        orange = (39, 127, 255)
        cv2.circle(output, (x, y), r, orange, 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)

img_name = "Output"
cv2.namedWindow(img_name, cv2.WINDOW_NORMAL)
cv2.resizeWindow(img_name, 800, 800)
cv2.imshow(img_name, output)
cv2.waitKey(0)
cv2.destroyAllWindows()
I use minDist = 1 to make sure those close circles are potentially captured. Does anybody see something completely wrong with my parameters?
I played around with the parameters and managed to detect all circles (Ubuntu 16.04 LTS x64, Python 3.7, numpy==1.15.1, python-opencv==3.4.3):
circles = cv2.HoughCircles(
gray,
cv2.HOUGH_GRADIENT,
1.7,
minDist=100,
param1=48,
param2=100,
minRadius=2,
maxRadius=100
)
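(For what it's worth, my reading of why these values help, based on the OpenCV documentation rather than anything stated in the answer: raising minDist from 1 to 100 keeps the accumulator from reporting several overlapping circles for the same avatar, and lowering param1, the upper Canny edge threshold, from 300 to 48 lets the faint edges of the small profile pictures survive edge detection so they can gather enough votes.)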
So I'm currently working on a project (not school related or anything) and part of it involves being able to detect a grid in a picture and project it onto a square image, so we can get rid of any skew the image may have. My problem now is that I cannot determine which points within my image are the corners of the grid. I tried a Hough transform, but it generates many lines, including the grid's interior lines, so it is hard to automatically determine which detected lines are the edges of the grid. I also tried a contour detector, which gives a similar problem: although it is more accurate in tracing out the edges of the grid, I'm unable to pick out which contours belong to the edge of the grid and which are, say, grid lines or just miscellaneous.
A screenshot of the results from the Hough transform:
And a screenshot of the result from the contour detection:
Thanks for any help or advice in advance.
You probably need to look through the contours and find the largest 4-sided one to grab the outside of your grid.
You would use something like this helper function (processed is my preprocessed image):
import cv2

def approx(cnt):
    # Helper used below: approximate the contour to a polygon.
    peri = cv2.arcLength(cnt, True)
    return cv2.approxPolyDP(cnt, 0.02 * peri, True)

def largest_4_sided_contour(processed, show_contours=False):
    _, contours, _ = cv2.findContours(
        processed, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    # Sort the contours by area:
    contours = sorted(contours, key=cv2.contourArea, reverse=True)
    # Look at the biggest 5 (if there are more than 5, otherwise just look at all of them):
    for cnt in contours[:min(5, len(contours))]:
        # If the number of sides is about 4, that's the one we are looking
        # for, so we can stop looking and return it:
        if len(approx(cnt)) == 4:
            return cnt
    return None
There are some irregularities in your grid, so you may have to do some preprocessing or look for a range of side counts, but generally, by looking at the area of the contour and narrowing down by the number of sides, you should be able to figure something out.
You mentioned getting the corners so this is that step:
def get_rectangle_corners(cnt):
    ''' gets corners from a contour '''
    pts = cnt.reshape(cnt.shape[0], 2)
    rect = np.zeros((4, 2), dtype="float32")
    # the top-left point has the smallest sum whereas the
    # bottom-right has the largest sum
    s = pts.sum(axis=1)
    rect[0] = pts[np.argmin(s)]
    rect[2] = pts[np.argmax(s)]
    # compute the difference between the points -- the top-right
    # will have the minimum difference and the bottom-left will
    # have the maximum difference
    diff = np.diff(pts, axis=1)
    rect[1] = pts[np.argmin(diff)]
    rect[3] = pts[np.argmax(diff)]
    return rect
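To finish the original goal (projecting the grid onto a square image), the corners can feed a perspective warp. A minimal sketch, assuming processed is your thresholded grid mask and image is the original photo (both placeholders), using the two helpers above:

import cv2
import numpy as np

cnt = largest_4_sided_contour(processed)
corners = get_rectangle_corners(cnt)          # order: tl, tr, br, bl

side = 600                                    # arbitrary square output size
target = np.array([[0, 0], [side - 1, 0],
                   [side - 1, side - 1], [0, side - 1]], dtype="float32")

# Map the grid corners onto the square and warp:
M = cv2.getPerspectiveTransform(corners, target)
warped = cv2.warpPerspective(image, M, (side, side))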