Finding the center of circles in an image - python

I'm fairly new to programming and need to write a program that will find the centers of 4 discs (circles), one in each corner of a square image.
I don't know the exact coordinates of the discs, but I have a very good approximation of them. How can I go about finding the location of the center of each of the 4 discs?

The window will most likely draw images using the top-left corner as X = 0, Y = 0. Circles tend to be drawn from the centre outwards, so the centre of the circle is most likely the (X, Y) coordinate used to draw the given circle. You can probably find a way to request the values of the circles' coordinates, depending on how they're being drawn and what you're drawing them with. There are a number of ways of drawing to the screen with Python, so it depends on how you are doing it. Can you provide a little more detail?
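For instance, if you happen to be drawing the circles yourself with OpenCV (an assumption, since you haven't said what you're drawing with), the centre is simply the point you pass to the drawing call. A minimal sketch:
import cv2
import numpy as np

# Hypothetical 500x500 canvas with a disc near each corner.
# The (x, y) tuple passed to cv2.circle IS the centre, in a
# coordinate system whose origin (0, 0) is the top-left corner.
canvas = np.zeros((500, 500, 3), np.uint8)
centers = [(40, 40), (460, 40), (40, 460), (460, 460)]
for center in centers:
    cv2.circle(canvas, center, 30, (255, 255, 255), -1)

cv2.imshow("discs", canvas)
cv2.waitKey(0)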

You can look for the centroid of each shape using image moments in OpenCV. A contour's centroid is (m10/m00, m01/m00), where the moments are computed by cv2.moments:
import cv2

# read image through command line (assumes an argparse `args` dict)
img = cv2.imread(args["ipimage"])
# or load it from a path
#img = cv2.imread(R"/usr/home/dinges/4c.png")

# convert the image to grayscale
gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# convert the grayscale image to a binary image
ret, thresh = cv2.threshold(gray_image, 127, 255, 0)

# find contours in the binary image
# (OpenCV 4.x returns two values; OpenCV 3.x returned three:
#  im2, contours, hierarchy = cv2.findContours(...))
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    # calculate moments for each contour
    M = cv2.moments(c)
    # skip degenerate contours to avoid division by zero
    if M["m00"] == 0:
        continue
    # calculate the x,y coordinates of the center
    cX = int(M["m10"] / M["m00"])
    cY = int(M["m01"] / M["m00"])
    print('centroid: X:{}, Y:{}'.format(cX, cY))
    cv2.circle(img, (cX, cY), 5, (255, 255, 255), -1)
    cv2.putText(img, "centroid", (cX - 25, cY - 25), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)

# display the image
cv2.imshow("Image", img)
cv2.waitKey(0)
Reference: Find the Center of a Blob (Centroid) using OpenCV
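Since you already have good approximations of the four centers, a possible refinement (a sketch; the coordinates, window size, and file name are assumptions) is to restrict the centroid computation to a small window around each guess:
import cv2

# hypothetical approximate centers of the four corner discs (x, y)
approx_centers = [(40, 40), (460, 40), (40, 460), (460, 460)]
win = 30  # half-width of the search window, an assumed value

img = cv2.imread("4c.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 127, 255, 0)

for (ax, ay) in approx_centers:
    # crop a small window around the approximate center
    roi = thresh[ay - win:ay + win, ax - win:ax + win]
    # moments of the window treated as a binary image
    M = cv2.moments(roi, binaryImage=True)
    if M["m00"] > 0:
        # centroid within the window, shifted back to image coordinates
        cX = ax - win + M["m10"] / M["m00"]
        cY = ay - win + M["m01"] / M["m00"]
        print("refined center: ({:.1f}, {:.1f})".format(cX, cY))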

Related

Python code isn't working the way I want it to work

I'm trying to create a puzzle solver for a game I play. I have everything working seamlessly well, but when I run it in-game it doesn't work. The game creates orange squares that appear and then quickly disappear, and my goal is to click on them when they appear. The color and everything is correct, but no click is made. However, when I open the image to check whether what I'm screenshotting is being captured correctly, it starts spam-clicking the orange boxes in that image. So I'm puzzled: what do I do to fix this issue?
import cv2
import numpy as np
import pyautogui

# Define the lower and upper bounds of the orange color in RGB color space
lower_orange = np.array([255, 162, 0])
upper_orange = np.array([255, 162, 15])

while True:
    # Take a screenshot of the screen
    image = pyautogui.screenshot()
    image = np.array(image)

# Convert the image to RGB color space
rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Create a mask by thresholding the image to only include pixels in the orange color range
orange_mask = cv2.inRange(rgb_image, lower_orange, upper_orange)
# Find all the contours in the mask
contours, _ = cv2.findContours(orange_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Iterate over the contours and draw a rectangle around each one
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2)
    # Calculate the center of the contour
    cx = x + w // 2
    cy = y + h // 2
    # Click the center of the contour
    pyautogui.click(cx, cy)

# Show the image
cv2.imshow('image', image)
cv2.waitKey(1)
The big problem with your code is that your while loop only takes a screenshot and converts it into a NumPy array. The rest of your code needs to be indented to be a part of the loop.
Here is the fixed code:
import cv2
import numpy as np
import pyautogui

# Define the lower and upper bounds of the orange color in RGB color space
lower_orange = np.array([255, 162, 0])
upper_orange = np.array([255, 162, 15])

while True:
    # Take a screenshot of the screen
    image = pyautogui.screenshot()
    image = np.array(image)
    # Convert the image to RGB color space
    rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    # Create a mask by thresholding the image to only include pixels in the orange color range
    orange_mask = cv2.inRange(rgb_image, lower_orange, upper_orange)
    # Find all the contours in the mask
    contours, _ = cv2.findContours(orange_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Iterate over the contours and draw a rectangle around each one
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2)
        # Calculate the center of the contour
        cx = x + w // 2
        cy = y + h // 2
        # Click the center of the contour
        pyautogui.click(cx, cy)
    # Show the image
    cv2.imshow('image', image)
    cv2.waitKey(1)
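One small addition worth considering (not part of the original fix): as written, the loop runs forever, so you may want a clean way to quit. A common pattern is to break on a key press, e.g. replace the final cv2.waitKey(1) with:
    # inside the while loop, after cv2.imshow(...)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break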

Rotating QR code to the correct position using Python OpenCV

I'm a beginner in Python and currently studying QR code detection and decoding. I'm having a hard time rotating the detected QR code to the correct position. I already used minAreaRect() to rotate my QR code, but it doesn't work. Is there a workaround or a right way to do this? Thanks!
import cv2
import numpy

ROI2 = cv2.imread('ROI.png')
gray2 = cv2.cvtColor(ROI2, cv2.COLOR_BGR2GRAY)
blur2 = cv2.GaussianBlur(gray2, (9, 9), 0)
thresh2 = cv2.threshold(blur2, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Morph close
# kernel2 = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
# close2 = cv2.morphologyEx(thresh2, cv2.MORPH_CLOSE, kernel2, iterations=10)

# Find contours and filter for the QR code
cnts2 = cv2.findContours(thresh2, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts2 = cnts2[0] if len(cnts2) == 2 else cnts2[1]
c = sorted(cnts2, key=cv2.contourArea, reverse=True)[0]

draw = cv2.cvtColor(thresh2, cv2.COLOR_GRAY2BGR)
cv2.drawContours(draw, [c], 0, (0, 255, 0), 2)

rotrect = cv2.minAreaRect(c)
box = cv2.boxPoints(rotrect)
box = numpy.int0(box)
cv2.drawContours(draw, [box], 0, (0, 0, 255), 2)

cv2.imshow('thresh', thresh2)
cv2.imshow('ROI', ROI2)
cv2.imshow('minarearect', draw)
cv2.waitKey(0)
From my understanding, you're trying to deskew an image. To do this, we first compute the angle of the rotated bounding box and then perform an affine rotation. The idea is to use
cv2.minAreaRect + cv2.warpAffine. According to the documentation, cv2.minAreaRect returns
(center(x, y), (width, height), angle of rotation) = cv2.minAreaRect(...)
The third element gives us the angle we need to deskew the image. (Note: in OpenCV 4.5+ the returned angle convention changed to (0, 90]; the sign handling below assumes the older [-90, 0) convention.)
Input image -> Output result
Skew angle: -39.99416732788086
Code
import cv2
import numpy as np

# Load image, grayscale, invert, Otsu's threshold
image = cv2.imread('2.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = 255 - gray
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

# Compute rotated bounding box of all foreground pixels
coords = np.column_stack(np.where(thresh > 0))
angle = cv2.minAreaRect(coords)[-1]
if angle < -45:
    angle = -(90 + angle)
else:
    angle = -angle
print("Skew angle: ", angle)

# Rotate image to deskew
(h, w) = image.shape[:2]
center = (w // 2, h // 2)
M = cv2.getRotationMatrix2D(center, angle, 1.0)
rotated = cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)

cv2.imshow('rotated', rotated)
cv2.waitKey()
Note: See Python OpenCV skew correction for another approach using the Projection Profile Method to correct skew.
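For reference, a rough sketch of that Projection Profile Method (an illustration, not the linked answer's exact code; the function name and scoring are my own): rotate the binarized image over a range of candidate angles and keep the angle whose horizontal projection profile is most sharply peaked.
import cv2
import numpy as np

def projection_profile_angle(thresh, delta=1.0, limit=45.0):
    # thresh: binary image with white foreground
    (h, w) = thresh.shape[:2]
    center = (w // 2, h // 2)
    best_angle, best_score = 0.0, -1.0
    for angle in np.arange(-limit, limit + delta, delta):
        M = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(thresh, M, (w, h), flags=cv2.INTER_NEAREST)
        # row sums: a peaky profile means rows align with the content
        profile = rotated.sum(axis=1, dtype=np.float64)
        score = ((profile[1:] - profile[:-1]) ** 2).sum()
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle

# usage: skew = projection_profile_angle(thresh)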
Detect the code using QRCodeDetector::detectAndDecode and redraw it from the straight_qrcode value. The QRCodeDetector might not be able to decode all codes that you can locate using simple thresholding and contours; especially when pieces of the code are missing (including the quiet zone), the detector may have trouble.
This will always show the code in its canonical orientation, with the finder patterns in the NW, NE, and SW corners.
A simple minAreaRect will only align the code's edges to the image axes; it cannot tell which way is "up" in the QR code.
import cv2 as cv
im = cv.imread("OnDlO.png")
det = cv.QRCodeDetector()
(rv, points, straight_qrcode) = det.detectAndDecode(im)
# rv == 'testing123456'
# points:
# array([[[304. , 36. ],
# [415. , 321. ],
# [141.55959, 428.3963 ],
# [ 32. , 151. ]]], dtype=float32)
# some white padding
with_quiet_zone = cv.copyMakeBorder(straight_qrcode, 1, 1, 1, 1, borderType=cv.BORDER_CONSTANT, value=255)
# scale it up for display
larger = cv.resize(with_quiet_zone, dsize=None, fx=16, fy=16, interpolation=cv.INTER_NEAREST)
# and show it
cv.imshow("larger", larger)
cv.waitKey()
input:
output:
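As a follow-up (an addition, not part of the answer above): if you want the rotation angle of the code in the original image rather than the canonical redraw, the corner points returned by the detector give it directly. A sketch, assuming points as returned by detectAndDecode above, and assuming the usual corner order (the code's top-left corner first, going clockwise):
import numpy as np

(tl, tr, br, bl) = points[0]
# angle of the code's top edge relative to the horizontal
angle = np.degrees(np.arctan2(tr[1] - tl[1], tr[0] - tl[0]))
print("rotation: {:.1f} degrees".format(angle))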

HoughCircles circle detection not working?

I have been trying to write a program that can detect circles on my screen.
This is my screen before code processing
As you can see on the image, there are three circles that the code should detect. I am using HoughCircles function from OpenCV library to achieve this task. My code is below.
import cv2
import numpy as np
import pyautogui as gui  # assuming `gui` refers to pyautogui

ss = gui.screenshot()
img = cv2.cvtColor(np.array(ss), cv2.COLOR_RGB2BGR)
output = img.copy()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 100)
if circles is not None:
    print("circles found", len(circles))
    circles = np.round(circles[0, :]).astype("int")
    for (x, y, r) in circles:
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
    cv2.imshow("output", np.hstack([gray, output]))
    cv2.waitKey(0)

cv2.imshow("output", gray)
cv2.waitKey(0)
I first take a screenshot of my screen, then convert it for use with OpenCV.
However, this code does not detect any circles in the screenshot shown in the first picture. I know this because, when run, my program does not print "circles found". Moreover, to show that I have been taking screenshots and converting them to grayscale properly, here is an image produced by the last two lines of my code.
picture in a gray scale
To show that my code works with other circle images, here is a picture of a regular circle:
before detection
after detection
Any help would be very appreciated!
Here's an alternative solution to detect the circles without using the Hough transform. As your input image has a very distinct blue hue on the blobs of interest, you can create a segmentation mask based on their HSV values, then detect contours and approximate each contour with a circle. The last step can be implemented using cv2.minEnclosingCircle, which, as its name suggests, computes the minimum enclosing circle of a contour.
Let's see the code:
import cv2
import numpy as np

# image path
path = "D://opencvImages//"
fileName = "XUzFw.png"

# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)

# Create a deep copy of the input for results:
inputImageCopy = inputImage.copy()

# Convert the image to the HSV color space:
hsvImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2HSV)

# Set the HSV range:
lowRange = np.array([78, 0, 158])
uppRange = np.array([125, 255, 255])

# Create the HSV mask:
mask = cv2.inRange(hsvImage, lowRange, uppRange)
This generates the following segmentation mask:
As you can see, the only blobs that remain are the circles. Now, let's compute the contours and find the minimum enclosing circle:
# Find the circle blobs on the binary mask:
contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Use a list to store the center and radius of the target circles:
detectedCircles = []

# Look for the outer contours:
for i, c in enumerate(contours):
    # Approximate the contour to a circle:
    (x, y), radius = cv2.minEnclosingCircle(c)
    # Compute the center and radius:
    center = (int(x), int(y))
    radius = int(radius)
    # Draw the circles:
    cv2.circle(inputImageCopy, center, radius, (0, 0, 255), 2)
    # Store the center and radius:
    detectedCircles.append([center, radius])

# Let's see the results:
cv2.namedWindow("Circles", cv2.WINDOW_NORMAL)
cv2.imshow("Circles", inputImageCopy)
cv2.waitKey(0)
This is the result of the detection:
Additionally, you can check out the data stored in the detectedCircles list:
# Check out the detected circles:
for i in range(len(detectedCircles)):
    # Get circle data:
    center, r = detectedCircles[i]
    # Print it:
    print("i: " + str(i) + " x: " + str(center[0]) + " y: " + str(center[1]) + " r: " + str(r))
Which yields:
i: 0 x: 395 y: 391 r: 35
i: 1 x: 221 y: 391 r: 36
i: 2 x: 567 y: 304 r: 35
These are the parameters of HoughCircles that work for me. You should also consider running a Gaussian blur over the image before trying to find the circles.
I'm not a huge fan of HoughCircles. I find it really finicky, and I don't like how much of what it does is hidden inside the function; that makes tuning it mostly trial and error. (For HOUGH_GRADIENT, param1 is the upper threshold of the internal Canny edge detector and param2 is the accumulator threshold: lower values of param2 yield more, possibly spurious, circles.) These parameters work for this particular image, but I wouldn't count on them continuing to work under different lighting conditions or with different colors.
import cv2
import numpy as np

# load image
img = cv2.imread("spheres.png")

# grayscale and blur
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)

# circles
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=100, param1=65, param2=20, minRadius=20, maxRadius=50)

# draw circles
if circles is not None:
    # round to ints
    circles = np.uint16(np.around(circles))
    for circle in circles[0, :]:
        # unpack and draw
        x, y, radius = circle
        center = (x, y)
        cv2.circle(img, center, radius, (255, 0, 255), 3)

# show
cv2.imshow("Image", img)
cv2.imshow("Gray", gray)
cv2.waitKey(0)

How to fill a circle contour at the edge of an image?

In the image above, I want to eventually be able to fill in the contours of the colored circles. Unfortunately, the yellow circle at the bottom is right at the edge of the image, so cv2.findContours() doesn't work:
_, green_contours, _ = cv2.findContours(green_seg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
Does anyone know how I can fill in the yellow circle's contour even though it's at the edge of the image? Since the circle is cut off there, the pixels along the bottom edge don't close the contour (they aren't valued at 255), if that makes sense.
I looked online and some people say it's possible to draw a big box around the image and then find the contours, but when I do that, OpenCV only draws the big contour around the entire image and not the circle.
Any other thoughts would be greatly appreciated!
EDIT: The image shown above is just one case I'm considering. I'm trying to make this general enough that, for any contour at the edge of the image, I can still fill it in with cv2.drawContours().
If you have (more or less) convex polygons, you actually CAN use cv2.findContours. Having the contours, try to find the center of mass, e.g. using cv2.moments, and then use this as the seed point in cv2.floodFill.
Please see the following code snippet. I assume you can identify your polygons by color. Also, instead of some advanced center-of-mass computation, I just used the center point of each contour's bounding rectangle. Maybe that's also sufficient for your use case!?
import cv2
import numpy as np

# Set up test image
colors = [(0, 255, 0), (0, 0, 255)]
input = np.zeros((400, 400, 3), np.uint8)
cv2.circle(input, (100, 100), 50, colors[0], 10)
cv2.circle(input, (150, 350), 75, colors[1], 10)
output = input.copy()

# Iterate all colors...
for i, c in enumerate(colors):
    # Mask color
    img = np.all(output == c, axis=2).astype(np.uint8) * 255
    # Find contours with respect to OpenCV version
    cnts = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cnts = cnts[0] if len(cnts) == 2 else cnts[1]
    # Get bounding rectangles; derive seed points for flood filling; flood fill
    rects = [cv2.boundingRect(c) for c in cnts]
    seeds = [(np.int32(r[0] + r[2] / 2), np.int32(r[1] + r[3] / 2)) for r in rects]
    [cv2.floodFill(output, mask=None, seedPoint=s, newVal=c) for s in seeds]

cv2.imshow('input', input)
cv2.imshow('output', output)
cv2.waitKey(0)
cv2.destroyAllWindows()
That's the input:
And, that's the output:
Hope that helps!
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.8.1
NumPy: 1.18.1
OpenCV: 4.1.2
----------------------------------------
A simple way, perhaps not so precise, is to compute the convex hull of each contour and fill its interior with a solid color:
import cv2
import numpy as np
import sys

# load the input image
img = cv2.imread('test_images/partial_contour.png')
if img is None:
    print('!!! Failed imread')
    sys.exit(-1)

output_img = img.copy()

# isolate just the colored drawings
ret, thres_bgr = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY)
thres_bgr[np.where((thres_bgr == [255, 255, 255]).all(axis=2))] = [0, 0, 0]  # replace white pixels with black

# convert from 3 channels (BGR) to a single channel (gray)
gray_img = cv2.cvtColor(thres_bgr, cv2.COLOR_BGR2GRAY)

# this loop processes all the contours found in the image
contours, hierarchy = cv2.findContours(gray_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contourIdx, cnt in enumerate(contours):
    # compute a convex hull
    hull = cv2.convexHull(cnt)
    # fill the inside with red
    cv2.fillPoly(output_img, pts=[hull], color=(0, 0, 255))

cv2.imshow('output_img', output_img)
cv2.imwrite('fill_partial_cnt_output.png', output_img)
cv2.waitKey(0)

How can I find the angle of a T shape in OpenCV

I'm using OpenCV 2.4 to do some tracking, and I can get a contour of the shape I want, which is a T.
Input image:
I can use cv2.minAreaRect(my_t_contour) and get the angle of that rect, but that only gives me 0-180 degrees. Since this is a T shape, I want to be able to tell its orientation over the full 0-360 degrees. I was thinking of:
Split the contour into two rects
Get a line through each rect (e.g. using skeletonize > HoughLinesP)
Determine which line is which, determine their gradient (using the coordinates I get from HoughLinesP) and then determine the direction of the T.
But I'm stuck at step 1: how can I split a contour into two shapes?
Method 1: draw center of contour and center of minAreaRect of contour
dst = cv2.cvtColor(r_target, cv2.COLOR_BGR2GRAY)
dst = cv2.GaussianBlur(dst, (11, 11), 0)
ret, dst = cv2.threshold(dst, 110, 255, cv2.THRESH_BINARY_INV)
cnts = cv2.findContours(dst, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# grab the contour list regardless of OpenCV version
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    # get minAreaRect around contour and draw its center in red
    rect = cv2.minAreaRect(c)
    cv2.circle(r_target, (int(rect[0][0]), int(rect[0][1])), 7, (0, 0, 255), -1)
    # get moments of contour to get center and draw it in white
    M = cv2.moments(c)
    cX = int(M["m10"] / M["m00"])
    cY = int(M["m01"] / M["m00"])
    cv2.circle(r_target, (cX, cY), 7, (255, 255, 255), -1)
The next step would probably be to calculate a simple gradient between the two centers to determine the angle.
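A minimal sketch of that step (an addition, assuming cX, cY and rect from the loop above are still in scope): the vector from the minAreaRect centre to the mass centroid disambiguates the full 0-360 orientation, because the T's crossbar pulls the centroid away from the rectangle centre.
import math

dx = cX - rect[0][0]
dy = cY - rect[0][1]
# atan2 gives a signed angle; fold it into [0, 360)
angle = math.degrees(math.atan2(dy, dx)) % 360
print("T direction: {:.1f} degrees".format(angle))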
Method 2: skeletonize the image and get lines using HoughLinesP.
import numpy as np
import skimage.morphology

dst = cv2.cvtColor(r_target, cv2.COLOR_BGR2GRAY)
dst = cv2.GaussianBlur(dst, (11, 11), 0)
ret, dst = cv2.threshold(dst, 110, 255, cv2.THRESH_BINARY)
dst = 1 - dst / 255
dst = skimage.morphology.skeletonize(dst).astype(np.uint8)

rho = 1
theta = np.pi / 180
threshold = 1
minLineLength = 30
maxLineGap = 15
lines = cv2.HoughLinesP(dst, rho, theta, threshold, minLineLength=minLineLength, maxLineGap=maxLineGap)
for line in lines[0]:
    cv2.line(r_target, (line[0], line[1]), (line[2], line[3]), (0, 255, 0), 1, 8)
But the lines don't come out nicely. This is what the skeleton looks like:
I'm still experimenting with the variables, but is there a specific thought process around using HoughLinesP?
As a variant you can use PCA: find the direction of the first principal component and use it as the sought angle. You can check here for an example: http://docs.opencv.org/trunk/d1/dee/tutorial_introduction_to_pca.html
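A rough sketch of that idea (assuming c is the T contour found in method 1; note the first component alone still has a 180-degree ambiguity, so you may need to combine it with the centroid-offset trick from method 1):
import cv2
import numpy as np

# PCA over the contour points; the first eigenvector is the
# dominant axis of the shape
pts = c.reshape(-1, 2).astype(np.float64)
mean = np.empty((0))
mean, eigenvectors = cv2.PCACompute(pts, mean)
vx, vy = eigenvectors[0]
angle = np.degrees(np.arctan2(vy, vx))
print("principal axis: {:.1f} degrees".format(angle))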
