I'm trying to calculate the distance between two pixels, but in a specific way. I need to know the thickness of the red line in the image, so my idea was to go through the image column by column, find the coordinates of the two edge points, and calculate the distance between them. I would do this for both lines (top and bottom), repeat it for each column, and then calculate the average.
I should also do a conversion from pixels to real scale.
This is my code for now:
# Make numpy array from image
npimage = np.array(image)
# Describe what a single red pixel looks like
red = np.array([255, 0, 0], dtype=np.uint8)
firs_point = 0
first_find = False
for i in range(image.width):
    column = npimage[:, i]
    for row in column:
        comparison = row == red
        equal_arrays = comparison.all()
        if equal_arrays == True and first_find == False:
            first_x_coord = i
            first_find = True
I can't get the coordinates. Can someone help me please? Of course, if there are more optimal ways to calculate it, I will be happy to accept proposals. I am very new!
Thank you very much!
After properly masking all red pixels, you can calculate the cumulative sum per column in that mask:
Below each red line, you have a large area with a constant value: below the first red line, it's the thickness of that red line; below the second red line, it's the cumulative thickness of both red lines, and so on (if there were even more red lines).
So, now, for each column, calculate the histogram of the cumulative sum and pick out those peaks, leaving out the 0 bin in the histogram, which is the large black area at the top. Per column, you get the above-mentioned (cumulative) thickness values for all red lines. What remains is to extract the actual, single thickness values and calculate the mean over all of them.
Here's my code:
import cv2
import numpy as np
# Read image
img = cv2.imread('Dc4zq.png')
# Mask pure red (note: OpenCV uses BGR channel order, so pure red is [0, 0, 255])
mask = (img == [0, 0, 255]).all(axis=2)
# We check for two lines
n = 2
# Cumulative sum for each column
cs = np.cumsum(mask, axis=0)
# Thickness values for each column
tvs = np.zeros((n, img.shape[1]))
for c in range(img.shape[1]):
    # Calculate histogram of cumulative sum for a column
    hist = np.histogram(cs[:, c], bins=np.arange(img.shape[1]+1))
    # Get n highest histogram values
    # These are the single thickness values for a column
    tv = np.sort(np.argsort(hist[0][1:])[::-1][0:n]+1)
    tv[1:] -= tv[:-1]
    tvs[:, c] = tv
# Get mean thickness value
mtv = np.mean(tvs.flatten())
print('Mean thickness value:', mtv)
The final result is:
Mean thickness value: 18.92982456140351
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
NumPy: 1.20.1
OpenCV: 4.5.1
----------------------------------------
EDIT: I'll provide some more details on the "NumPy magic" involved.
# Calculate the histogram of the cumulative sum for a single column
hist = np.histogram(cs[:, c], bins=np.arange(img.shape[1] + 1))
Here, bins represent the intervals for the histogram, i.e. [0, 1], [1, 2], and so on. To also get the last interval [569, 570], you need to use img.shape[1] + 1 in the np.arange call, because the right limit is not included in np.arange.
# Get the actual histogram starting from bin 1
hist = hist[0][1:]
In general, np.histogram returns a tuple, where the first element is the actual histogram. We extract that, and only look at all bins larger than 0 (remember, the large black area).
Now, let's disassemble this code:
tv = np.sort(np.argsort(hist[0][1:])[::-1][0:n]+1)
This line can be rewritten as:
# Get the actual histogram starting from bin 1
hist = hist[0][1:]
# Get indices of sorted histogram; these are the actual bins
hist_idx = np.argsort(hist)
# Reverse the found indices, since we want those bins with the highest counts
hist_idx = hist_idx[::-1]
# From those indices, we only want the first n elements (assuming there are n red lines)
hist_idx = hist_idx[:n]
# Add 1, because we cut the 0 bin
hist_idx = hist_idx + 1
# As a preparation: Sort the (cumulative) thickness values
tv = np.sort(hist_idx)
By now, we have the (cumulative) thickness values for each column. To reconstruct the actual, single thickness values, we need the "inverse" of the cumulative sum. There's this nice Q&A on that topic.
# The "inverse" of the cumulative sum to reconstruct the actual thickness values
tv[1:] -= tv[:-1]
# Save thickness values in "global" array
tvs[:, c] = tv
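To illustrate that last step with a tiny standalone example (the values are chosen arbitrarily and are not from the image):
import numpy as np

# Cumulative thicknesses of two red lines, e.g. 19 px and 18 px: the histogram
# peaks give [19, 37]; subtracting the shifted array recovers the single values.
tv = np.array([19, 37])
tv[1:] -= tv[:-1]
print(tv)  # [19 18]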
Using OpenCV (the count per column includes both red lines, hence the division by 2):
img = cv2.imread(image_path)
average_line_width = np.average(np.count_nonzero((img[:,:,:]==np.array([0,0,255])).all(2),axis=0))/2
print(average_line_width)
Using PIL (same idea, with RGB channel order):
img = np.asarray(Image.open(image_path))
average_line_width = np.average(np.count_nonzero((img[:,:,:]==np.array([255,0,0])).all(2),axis=0))/2
print(average_line_width)
output in both cases:
18.430701754385964
I'm not sure I got it completely, but I used joostblack's answer to calculate the average thickness (in pixels) of each of the two lines. Here is my code with comments:
import cv2
import numpy as np
## Read the image
img = cv2.imread('img.png')
## Create a mask on the red part (I don't use hsv here)
lower_val = np.array([0,0,0])
upper_val = np.array([150,150,255])
mask = cv2.inRange(img, lower_val, upper_val)
## Apply the mask on the image
only_red = cv2.bitwise_and(img,img, mask= mask)
gray = cv2.cvtColor(only_red, cv2.COLOR_BGR2GRAY)
## Find Canny edges
edged = cv2.Canny(gray, 30, 200)
## Find contours (OpenCV 3.x returns three values; in OpenCV 4.x drop the first one)
img, contours, hier = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
## Select contours using a bonding box
coords=[]
for c in contours:
    x,y,w,h = cv2.boundingRect(c)
    ## Get coordinates of the bounding box if the width is sufficient (to avoid noise, because you have a huge red line on the left of your image)
    if w>10:
        coords.append([x,y,w,h])
## Use the previous coordinates to cut the image and compute the average thickness for one red line using the answer proposed by joostblack
for x,y,w,h in coords:
    average_line_width = np.average(np.count_nonzero(only_red[y:y+h,x:x+w],axis=0))
    print(average_line_width)
    ## Show you the selected result
    cv2.imshow('image',only_red[y:y+h,x:x+w])
    cv2.waitKey(0)
The first line is on average 6.34 pixels thick and the second 5.94 pixels (along the y axis). If you want something more precise you'll need to refine this formula!
One way of doing this is to calculate the medial axis (centreline) of the red pixels. And then, as that line is 1px wide, the number of centreline pixels gives the length of the red lines. If you also calculate the number of red pixels, you can easily determine the average line thickness using:
average thickness = number of red pixels / length of red lines
The code looks like this:
#!/usr/bin/env python3
import cv2
import numpy as np
from skimage.morphology import medial_axis
# Load image
im=cv2.imread("Dc4zq.png")
# Make mask of all red pixels and count them
mask = np.alltrue(im==[0,0,255], axis=2)
nRed = np.count_nonzero(mask)
# Get medial axis of red lines and line length
skeleton = (medial_axis(mask*255)).astype(np.uint8)
lenRed = np.count_nonzero(skeleton)
cv2.imwrite('DEBUG-skeleton.png',(skeleton*255).astype(np.uint8))
# We now know the length of the red lines and the total number of red pixels
aveThickness = nRed/lenRed
print(f'Average thickness: {aveThickness}, red line length={lenRed}, num red pixels={nRed}')
That gives the skeleton as follows:
Sample Output
Average thickness: 16.662172878667725, red line length=1261, num red pixels=21011
Related
I have a dataset with two classes of images: Cityscape and Landscape. What I want to do is calculate the gradient(orientation) of the edges of each image and show that images of cityscapes have more vertical/horizontal edges than landscape images.
What I've done is calculated vertical, horizontal, 45 degree and 135 degree edges. I've applied a Canny filter to the images, calculated the x,y gradients and also applied a threshold to the images so it shows edges above that threshold. The result of this thresholding is seen here:
This is my code for this image manipulation as well as calculating the gradients:
def gradient(image):
    # Step 1
    img = image
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    # Step 2
    bi = cv2.bilateralFilter(gray, 15, 75, 75)
    # Step 3
    dst = cv2.Canny(bi, 100, 200)
    #print(np.count_nonzero(dst)) #--> make sure it's not all zeroes
    # Step 4
    #--- create a black image to see where those edges occur ---
    mask = np.zeros_like(gray)
    #--- applying a threshold and turning those pixels above the threshold to white ---
    mask[dst > 0.1 * dst.max()] = 255
    # Step 5
    img[dst > 0.1 * dst.max()] = [255, 0, 0] #--- [255, 0, 0] --> Red ---
    Gx = cv2.Sobel(mask,cv2.CV_64F,1,0,ksize=5)
    Gy = cv2.Sobel(mask,cv2.CV_64F,0,1,ksize=5)
    #orientation of the edges
    theta = np.arctan2(Gy, Gx)
    #magnitude
    M = np.sqrt(Gx*Gx + Gy*Gy)
    #Vertical edges:
    v = abs(Gy)
    #Horizontal edges:
    h = abs(Gx)
    #45 Degree edges:
    deg45 = M*abs(np.cos(theta - np.pi/4))
    #135 Degree edges:
    deg135 = M*abs(np.cos(theta - 3*np.pi/4))
    print('Vertical:')
    #print(v)
    print(np.count_nonzero(v))
    print('Horizontal:')
    #print(h)
    print(np.count_nonzero(h))
What I want is to calculate the v,h,deg45,deg135 for the edges shown as red in the image above (Step 5). If that is not possible, then do that for the image with the white edges (Step 4). Can anyone help?
EDIT: So as to avoid confusion, what I want to do is to get the amount of vertical, horizontal etc edges in a given image, so that I can compare those numbers for cityscapes vs landscape images.
If what you want is the total number of pixels comprising horizontal vs vertical edges, I would suggest defining some threshold for horizontal vs vertical (say 15 degrees). So you can count the number of elements of theta for which
abs(theta) < pi/12 (horizontal)
or abs(theta) > pi-pi/12 (horizontal)
or pi/2 - pi/12 < abs(theta) < pi/2+pi/12 (vertical)
What you're storing in v and h are the vertical and horizontal components of the gradient at each point and what you need is to compare the values of v and h to determine for each point if the gradient vector should count as horizontal or vertical. Comparing theta is probably the most intuitive way to do this.
In order to get the number of elements of theta that satisfy a particular condition, I would suggest using a generator expression:
sum(1 for i in theta.ravel() if (abs(i) < np.pi/12) or (abs(i) > np.pi - np.pi/12))
would give you the number of horizontal edge pixels for example.
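Since theta is a 2D NumPy array in your code, a vectorized count does the same thing without the per-element loop. A minimal sketch, assuming theta and M are the orientation and magnitude arrays from your gradient() function:
import numpy as np

# Only consider pixels that actually lie on an edge (non-zero gradient magnitude)
edge = M > 0
horizontal = np.count_nonzero(edge & ((np.abs(theta) < np.pi/12) |
                                      (np.abs(theta) > np.pi - np.pi/12)))
vertical = np.count_nonzero(edge & (np.abs(np.abs(theta) - np.pi/2) < np.pi/12))
print(horizontal, vertical)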
I think the Hough transform fits your needs better if you want to count or control how many linear features you have in your image; you could count how many linear features there are for each specific orientation (in the Hough space). Since you are using Python, this and this link might be helpful!
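A rough sketch of that idea, assuming dst is the Canny output from the question's Step 3 (the Hough parameters here are guesses you would have to tune):
import cv2
import numpy as np

# Standard Hough transform on the Canny edges: rho = 1 px, theta = 1 degree, 100 votes
lines = cv2.HoughLines(dst, 1, np.pi / 180, 100)
vertical = horizontal = 0
if lines is not None:
    for rho, theta in lines[:, 0]:
        # theta is the angle of the line's normal: near 0 or pi means a vertical
        # line, near pi/2 means a horizontal line (15 degree tolerance here)
        if theta < np.pi / 12 or theta > np.pi - np.pi / 12:
            vertical += 1
        elif abs(theta - np.pi / 2) < np.pi / 12:
            horizontal += 1
print('vertical lines:', vertical, 'horizontal lines:', horizontal)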
I'm trying to cut multiple images with a green background. The center of the pictures is green and I want to cut the rest out of the picture. The problem is that I got the pictures from a video, so sometimes the green center is bigger and sometimes smaller. My actual task is to use K-Means on the knots, so I have, for example, a green background and two ropes, one blue and one red.
I use python with opencv, numpy and matplotlib.
I already cut the center, but sometimes I cut too much and sometimes too little. My image size is 1920 x 1080 in this example.
Here the knot is on the left and there is more to cut
Here the knot is in the center
Here is another example
Here is my desired output from picture 1
Example 1 which doesn't work with all algorithms
Example 2 which doesn't work with all algorithms
Example 3 which doesn't work with all algorithms
Here is my Code so far:
import numpy as np
import cv2
import matplotlib.pyplot as plt
from PIL import Image, ImageEnhance
img = cv2.imread('path')
print(img.shape)
imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
crop_img = imgRGB[500:500+700, 300:300+500]
plt.imshow(crop_img)
plt.show()
You can convert the image to the HSV color space.
src = cv2.imread('path')
imgRGB = cv2.cvtColor(src, cv2.COLOR_BGR2RGB)
imgHSV = cv2.cvtColor(imgRGB, cv2.COLOR_RGB2HSV)
Then use inRange to find only green values.
lower = np.array([20, 0, 0]) #Lower values of HSV range; green has a Hue value of 120, but in OpenCV the Hue range is smaller [0-180]
upper = np.array([100, 255, 255]) #Upper values of HSV range
imgRange = cv2.inRange(imgHSV, lower, upper)
Then use morphology operations to fill the holes left by the non-green lines (the ropes).
#kernels for morphology operations
kernel_noise = np.ones((3,3),np.uint8) #to delete small noises
kernel_dilate = np.ones((30,30),np.uint8) #bigger kernel to fill holes after ropes
kernel_erode = np.ones((38,38),np.uint8) #bigger kernel to delete pixels on the edge that were added by the dilate function
imgErode = cv2.erode(imgRange, kernel_noise, iterations=1)
imgDilate = cv2.dilate(imgErode, kernel_dilate, iterations=1)
imgErode = cv2.erode(imgDilate, kernel_erode, iterations=1)
Put the mask on the source image. You can now easily find the corners of the green screen (with the findContours function), as sketched below, or use the masked result image in the next steps.
res = cv2.bitwise_and(imgRGB, imgRGB, mask = imgErode) #put mask with green screen on src image
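For example, a minimal sketch of that findContours step, assuming the imgErode mask and imgRGB image from above (the two-value return signature is OpenCV 4.x; 3.x returns three values):
# Find the outer contours of the green-screen mask and crop the source
# image to the largest one
contours, _ = cv2.findContours(imgErode, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    biggest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(biggest)
    cropped = imgRGB[y:y+h, x:x+w]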
The code below does what you want. First it converts the image to the HSV colorspace, which makes selecting colors easier. Next a mask is made where only the green parts are selected. Some noise is removed and the rows and columns are summed up. Finally a new image is created based on the first/last rows/cols that fall in the green selection.
Since in all provided examples a little extra at the top needed to be cropped off, I've added code to do that. First I've inverted the mask. Now you can use the sum of the rows/cols to find the row/col that is fully within the green selection. This is done for the top. In the image below the window 'Roi2' is the final image.
Edit: updated code after comment by ts.
Updated result:
Code:
import numpy as np
import cv2
# load image
img = cv2.imread("gr.png")
# convert to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# set lower and upper color limits
lower_val = (30, 0, 0)
upper_val = (65,255,255)
# Threshold the HSV image to get only green colors
# the mask has white where the original image has green
mask = cv2.inRange(hsv, lower_val, upper_val)
# remove noise
kernel = np.ones((8,8),np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
# sum each row and each column of the mask
sumOfCols = np.sum(mask, axis=0)
sumOfRows = np.sum(mask, axis=1)
# Find the first and last row / column that has a sum value greater than zero,
# which means it's not all black. Store the found values in variables
for i in range(len(sumOfCols)):
    if sumOfCols[i] > 0:
        x1 = i
        print('First col: ' + str(i))
        break
for i in range(len(sumOfCols)-1,-1,-1):
    if sumOfCols[i] > 0:
        x2 = i
        print('Last col: ' + str(i))
        break
for i in range(len(sumOfRows)):
    if sumOfRows[i] > 0:
        y1 = i
        print('First row: ' + str(i))
        break
for i in range(len(sumOfRows)-1,-1,-1):
    if sumOfRows[i] > 0:
        y2 = i
        print('Last row: ' + str(i))
        break
# create a new image based on the found values
#roi = img[y1:y2,x1:x2]
#show images
#cv2.imshow("Roi", roi)
# optional: to cut off the extra part at the top:
# invert mask: all areas that are not green become white
mask_inv = cv2.bitwise_not(mask)
# search the first and last column top down for a green pixel and cut off at the lowest common point
for i in range(mask_inv.shape[0]):
    if mask_inv[i,0] == 0 and mask_inv[i,x2] == 0:
        y1 = i
        print('First row: ' + str(i))
        break
# create a new image based on the found values
roi2 = img[y1:y2,x1:x2]
cv2.imshow("Roi2", roi2)
cv2.imwrite("img_cropped.jpg", roi2)
cv2.waitKey(0)
cv2.destroyAllWindows()
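As a side note, the four search loops above can be written more compactly. A sketch, assuming the same sumOfCols/sumOfRows arrays:
# Indices of all columns/rows that contain at least one white mask pixel;
# the first and last entries are exactly the bounds found by the loops above
cols = np.flatnonzero(sumOfCols)
rows = np.flatnonzero(sumOfRows)
x1, x2 = cols[0], cols[-1]
y1, y2 = rows[0], rows[-1]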
The first step is to extract the green channel from your image; this is easy with OpenCV/NumPy and produces a grayscale image (a 2D numpy array).
import numpy as np
import cv2
img = cv2.imread('knots.png')
imgg = img[:,:,1] #extracting green channel
The second step is thresholding, which means turning the grayscale image into a binary (black and white ONLY) image, for which OpenCV has a ready-made function: https://docs.opencv.org/3.4.0/d7/d4d/tutorial_py_thresholding.html
imgt = cv2.threshold(imgg,127,255,cv2.THRESH_BINARY)[1]
Now imgt is a 2D numpy array consisting solely of 0s and 255s. You have to decide how to look for the places to cut; I suggest the following:
the topmost row of pixels containing at least 50% 255s
the bottommost row of pixels containing at least 50% 255s
the leftmost column of pixels containing at least 50% 255s
the rightmost column of pixels containing at least 50% 255s
Now we have to count the number of occurrences of 255 in each row and each column:
height = img.shape[0]
width = img.shape[1]
columns = np.apply_along_axis(np.count_nonzero,0,imgt)
rows = np.apply_along_axis(np.count_nonzero,1,imgt)
Now columns and rows are 1D numpy arrays containing the number of 255s for each column/row; knowing height and width, we can get 1D numpy arrays of bool values in the following way:
columns = columns>=(height*0.5)
rows = rows>=(width*0.5)
Here 0.5 is the 50% mentioned earlier; feel free to adjust that value to your needs. Now it is time to find the index of the first True and the last True in columns and rows.
icolumns = np.argwhere(columns)
irows = np.argwhere(rows)
leftcut = int(min(icolumns))
rightcut = int(max(icolumns))
topcut = int(min(irows))
bottomcut = int(max(irows))
Using argwhere I got numpy arrays of the indices of the Trues, then found the lowest and greatest. Finally, you can crop your image and save it:
imgout = img[topcut:bottomcut,leftcut:rightcut]
cv2.imwrite('out.png',imgout)
There are two places which might require adjusting: the % of 255s (in my example 50%) and the threshold value (127 in cv2.threshold).
EDIT: Fixed line with cv2.threshold
Based on the new images you added, I assume that you do not only want to cut out the non-green parts as you asked, but that you want a smaller frame around the ropes/knot. Is that correct? If not, you should upload the video and describe the purpose/goal of the cropping a bit more, so that we can better help you.
Assuming you want a cropped image with only the ropes, the solution is quite similar to the previous answer. However, this time the red and blue of the ropes are selected using HSV. The image is cropped based on the resulting mask. If you want the image somewhat bigger than just the ropes, you can add extra margins - but be sure to account/check for the edge of the image.
Note: the code below works for images that have a full green background, so I suggest you combine it with one of the solutions that only selects the green area. I tested this for all your images as follows: I took the code from my other answer, put it in a function and added return roi2 at the end. This output is fed into a second function that holds the code below. All images were processed successfully.
Result:
Code:
import numpy as np
import cv2
# load image
img = cv2.imread("image.JPG")
# blue
lower_val_blue = (110, 0, 0)
upper_val_blue = (179,255,155)
# red
lower_val_red = (0, 0, 150)
upper_val_red = (10,255,255)
# Convert to HSV and threshold the HSV image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask_blue = cv2.inRange(hsv, lower_val_blue, upper_val_blue)
mask_red = cv2.inRange(hsv, lower_val_red, upper_val_red)
# combine masks
mask_total = cv2.bitwise_or(mask_blue,mask_red)
# remove noise
kernel = np.ones((8,8),np.uint8)
mask_total = cv2.morphologyEx(mask_total, cv2.MORPH_CLOSE, kernel)
# sum each row and each column of the mask
sumOfCols = np.sum(mask_total, axis=0)
sumOfRows = np.sum(mask_total, axis=1)
# Find the first and last row / column that has a sum value greater than zero,
# which means it's not all black. Store the found values in variables
for i in range(len(sumOfCols)):
    if sumOfCols[i] > 0:
        x1 = i
        print('First col: ' + str(i))
        break
for i in range(len(sumOfCols)-1,-1,-1):
    if sumOfCols[i] > 0:
        x2 = i
        print('Last col: ' + str(i))
        break
for i in range(len(sumOfRows)):
    if sumOfRows[i] > 0:
        y1 = i
        print('First row: ' + str(i))
        break
for i in range(len(sumOfRows)-1,-1,-1):
    if sumOfRows[i] > 0:
        y2 = i
        print('Last row: ' + str(i))
        break
# create a new image based on the found values
roi = img[y1:y2,x1:x2]
#show image
cv2.imshow("Result", roi)
cv2.imshow("Image", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
I am trying to find centers of 2 squares in the same image which looks as follows:
I am able to detect the lines that make up the square. My output looks as follows:
As documented here to find the center of a polygon, I used moments to find center. Here is what I did.
import cv2
import numpy as np
img = cv2.imread('images/sq.png', 0)
gray = img
kernel_size = 5
blur_gray = cv2.GaussianBlur(gray,(kernel_size, kernel_size),0)
ret,thresh = cv2.threshold(blur_gray,100,255,0)
low_threshold = 50
high_threshold = 150
edges = cv2.Canny(thresh, low_threshold, high_threshold)
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi / 180 # angular resolution in radians of the Hough grid
threshold = 3 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 50 # minimum number of pixels making up a line
max_line_gap = 20 # maximum gap in pixels between connectable line segments
line_image = np.copy(img) * 0 # creating a blank to draw lines on
# Run Hough on edge detected image
# Output "lines" is an array containing endpoints of detected line segments
lines = cv2.HoughLinesP(edges, rho, theta, threshold, np.array([]),
min_line_length, max_line_gap)
for line in lines:
    for x1,y1,x2,y2 in line:
        cv2.line(line_image,(x1,y1),(x2,y2),(255,0,0),2)
        print("x1 {} y1 {} x2 {} y2 {}".format(x1,y1,x2,y2))
lines_edges = cv2.addWeighted(img, 0.5, line_image, 1, 0)
line_image_gray = cv2.cvtColor(line_image, cv2.COLOR_RGB2GRAY)
M = cv2.moments(line_image_gray)
cx = int(M['m10']/M['m00'])
cy = int(M['m01']/M['m00'])
cv2.circle(lines_edges, (cx, cy), 5, (0, 0, 255), 1)
cv2.imshow("res", lines_edges)
cv2.imshow("line_image", line_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
But this finds the center between 2 detected squares. How could I find the centers of each square while only using Hough methods?
Given that you have a requirement to use the Hough transform, I suggest you prepare the image better for it. The Canny edge detector will detect the inner and outer edges of the black line here, leading to two pairs of lines detected by Hough.
Instead, follow a procedure like this:
Find all black (or nearly-black) pixels. For example pixels where all three RGB components are below 50. This will return the squares by themselves.
Apply a morphological thinning (or a skeleton) to turn this into a 1-pixel thick outline of the squares.
Apply the Hough transform on the result, and detect line segments.
Proper pre-processing makes the Hough transform easier to set up, as there will be a larger range of parameters that yields the correct results.
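A minimal sketch of those three steps might look like this (the threshold of 50 and the Hough parameters are assumptions to be tuned; skimage is used for the thinning):
import cv2
import numpy as np
from skimage.morphology import skeletonize

img = cv2.imread('images/sq.png')
# 1. All pixels where every channel is nearly black (below 50)
mask = (img < 50).all(axis=2)
# 2. Thin the square outlines down to 1 pixel
skel = (skeletonize(mask) * 255).astype(np.uint8)
# 3. Hough transform on the thinned outlines
lines = cv2.HoughLinesP(skel, 1, np.pi / 180, threshold=30,
                        minLineLength=20, maxLineGap=5)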
Next, find segments that start or end at the same pixel, with a little bit of tolerance (i.e. start or end points are within a few pixels of each other), to determine which of the lines belong together in the same shape.
You could use this method combined with the following code to find which lines are part of the same square:
How can I check if two segments intersect?
Here 'lines' is a list of the recognized lines, and intersects(line1, line2) is a function using the process in the above link.
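For reference, a hedged sketch of such an intersects() helper, assuming each line is an (x1, y1, x2, y2) tuple as produced by HoughLinesP; it uses the standard counter-clockwise test and ignores collinear edge cases:
def ccw(ax, ay, bx, by, cx, cy):
    # True if points A, B, C make a counter-clockwise turn
    return (cy - ay) * (bx - ax) > (by - ay) * (cx - ax)

def intersects(line1, line2):
    # Segments intersect when the endpoints of each segment lie on
    # opposite sides of the other segment
    x1, y1, x2, y2 = line1
    x3, y3, x4, y4 = line2
    return (ccw(x1, y1, x3, y3, x4, y4) != ccw(x2, y2, x3, y3, x4, y4) and
            ccw(x1, y1, x2, y2, x3, y3) != ccw(x1, y1, x2, y2, x4, y4))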
squares = [[lines[0]]]
for line1 in lines:
    for square in squares:
        for line2 in square:
            if line1 != line2:
                if intersects(line1, line2):
                    square.append(line1)
                else:
                    squares.append([line1])
This gives you 'squares' that contain the lines that are a part of it. You could then use the moment function on each individually.
I'm trying to take a picture (.jpg file) and find the exact centers (x/y coords) of two differently colored circles in this picture. I've done this in python 2.7. My program works well, but it takes a long time and I need to drastically reduce the amount of time it takes to do this. I currently check every pixel and test its color, and I know I could greatly improve efficiency by pre-sampling a subset of pixels (e.g. every tenth pixel in both horizontal and vertical directions to find areas of the picture to hone in on). My question is if there are pre-developed functions or ways of finding the x/y coords of objects that are much more efficient than my code. I've already removed function calls within the loop, but that only reduced the run time by a few percent.
Here is my code:
from PIL import Image
import numpy as np
i = Image.open('colors4.jpg')
iar = np.asarray(i)
(numCols,numRows) = i.size
print numCols
print numRows
yellowPixelCount = 0
redPixelCount = 0
yellowWeightedCountRow = 0
yellowWeightedCountCol = 0
redWeightedCountRow = 0
redWeightedCountCol = 0
for row in range(numRows):
    for col in range(numCols):
        pixel = iar[row][col]
        r = pixel[0]
        g = pixel[1]
        b = pixel[2]
        brightEnough = r > 200 and g > 200
        if r > 2*b and g > 2*b and brightEnough: #yellow pixel
            yellowPixelCount = yellowPixelCount + 1
            yellowWeightedCountRow = yellowWeightedCountRow + row
            yellowWeightedCountCol = yellowWeightedCountCol + col
        if r > 2*g and r > 2*b and r > 100: # red pixel
            redPixelCount = redPixelCount + 1
            redWeightedCountRow = redWeightedCountRow + row
            redWeightedCountCol = redWeightedCountCol + col
print "Yellow circle location"
print yellowWeightedCountRow/yellowPixelCount
print yellowWeightedCountCol/yellowPixelCount
print " "
print "Red circle location"
print redWeightedCountRow/redPixelCount
print redWeightedCountCol/redPixelCount
print " "
Update: As I mentioned below, the picture is somewhat arbitrary, but here is an example of one frame from the video I am using:
First you have to clear up a few things:
What do you consider fast enough? Where is a sample image, so we can see what you are dealing with (resolution, bits per pixel)? What platform are you on (especially the CPU, so we can estimate speed)?
As you are dealing with circles (each one encoded with a different color), it should be enough to find the bounding box. So find the min and max x,y coordinates of the pixels of each color. Then your circle is:
center.x=(xmin+xmax)/2
center.y=(ymin+ymax)/2
radius =((xmax-xmin)+(ymax-ymin))/4
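A hedged NumPy sketch of that bounding-box idea for the yellow circle, reusing the iar array and the yellow test from the question (the red circle works the same way):
import numpy as np

r = iar[:, :, 0].astype(int)
g = iar[:, :, 1].astype(int)
b = iar[:, :, 2].astype(int)
# Same yellow test as in the question's loop
yellow = (r > 200) & (g > 200) & (r > 2 * b) & (g > 2 * b)

rows, cols = np.nonzero(yellow)
center_x = (cols.min() + cols.max()) / 2
center_y = (rows.min() + rows.max()) / 2
radius = ((cols.max() - cols.min()) + (rows.max() - rows.min())) / 4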
If coded right, even your approach should take just a few ms; on images around 1024x1024 resolution I estimate 10-100 ms on an average machine. You wrote that your approach is too slow, but you did not specify the time itself (in some cases 1 us is slow, in others 1 min is fast enough, so we can only guess what you need and what you got). Anyway, if you have a similar resolution and the time is 1-10 sec, then you are most likely using some slow pixel access (most likely from GDI) like get/setpixel; use bitmap Scanline[], direct pixel access with bitblt, or your own memory for the images instead.
Your approach can be sped up by using ray casting to find the approximate location of the circles.
cast horizontal rays
Their spacing should be smaller than the radius of the smallest circle you are searching for; cast as many rays as needed until each circle is hit by at least 2 rays.
cast 2 vertical rays
You can use the intersection points found in #1, so there is no need to cast many rays, just 2 ... use the horizontal ray where the intersection points are closer together, but not too close.
compute your circle properties
From the 4 intersection points, compute the center and radius: as it is an axis-aligned rectangle (+/- pixel error), just find the midpoint of either diagonal; the radius is half of the diagonal size.
As you did not share any image, we can only guess what you have. In case you do not have circles, or need an idea for a different approach, see:
Algorithms: Ellipse matching
find archery target in image of different perspectives
If you are sure of the colours of the circles, an easier method would be to filter the colors using a mask and then apply Hough circles, as Mathew Pope suggested.
Here is a snippet to get you started quick.
import cv2
import numpy as np
fn = '200px-Traffic_lights_dark_red-yellow.svg.png'
# OpenCV reads image with BGR format
img = cv2.imread(fn)
# Convert to HSV format
img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# lower mask (0-10)
lower_red = np.array([0, 50, 50])
upper_red = np.array([10, 255, 255])
mask = cv2.inRange(img_hsv, lower_red, upper_red)
# Bitwise-AND mask and original image
masked_red = cv2.bitwise_and(img, img, mask=mask)
# Check for circles using HoughCircles on opencv
circles = cv2.HoughCircles(mask, cv2.cv.CV_HOUGH_GRADIENT, 1, 20, param1=30, param2=15, minRadius=0, maxRadius=0)
print 'Radius ' + 'x = ' + str(circles[0][0][0]) + ' y = ' + str(circles[0][0][1])
One example of applying it on image looks like this. First is the original image, followed by the red colour mask obtained and the last is after circle is found using Hough circle function of OpenCV.
The output of the above method is: Radius x = 97.5 y = 99.5
Hope this helps! :)
I have an image:
and the range of colours in the image is generated as the linear interpolation through these RGB values:
rgb = [165,0,38], w = 0.0
rgb = [222,63,46], w = 0.125
rgb = [248,142,82], w = 0.25
rgb = [253,212,129], ...
rgb = [254,254,189]
rgb = [203,232,129]
rgb = [132,202,102]
rgb = [42,159,84]
rgb = [0,104,55], w = 1.0
How can I create a graph/histogram where the x-axis is the range of colours and the value is the percentage of the image with that colour of pixel?
Here's a rather brute force attempt with how I would solve this problem. Mind you, I'm finding colour distances in the RGB space but it is well known that colour distances in RGB do not mimic human perception of colours very well... but this is something to get you started. Be advised that you need numpy and matplotlib installed. matplotlib is to allow for plotting the histogram as a stem plot.
Basically, those RGB values that you have defined we can consider as keypoints. From here, we need to define the total number of bins in the histogram that we would need to compute. I set this to 64 to start. What you need to do first is interpolate the red, green and blue values for those values that you have defined so that we can create a RGB lookup table. As such, we would need to generate 64 RGB values from the beginning RGB tuple to the ending RGB tuple, using those keypoints that you have defined and we will linearly interpolate those RGB values.
This RGB lookup table will be a 64 x 3 array and the basic algorithm is to extract a RGB pixel from your image and determine the closest pixel to the lookup table from this pixel. We find this index that produces the minimum distance and we would increment the corresponding bin in the histogram. I compute this by the Euclidean distance squared. There's no point in taking the square root to get the Euclidean distance as we want to find the minimum distance. Square rooting each term won't change which pixel colour is closest to which entry in the lookup table. We would repeat this for the rest of the pixels in the image.
To compute the minimum distance, use numpy.sum as well as subtracting each pixel you get in the input image with each location in the lookup table via broadcasting. We square each of the distances, sum them up, then determine the location in the lookup table that gives us the minimum by numpy.argmin.
Now, in order to create the interpolated RGB lookup, I called numpy.interp on the red, green and blue channel keypoints where the output (y) values are from those keypoint values for the red, green and blue values you defined, and the input (x) values are dummy input values that are linearly increasing from 0 up to as many control points as we have subtracted by 1. So our input x keypoints are:
[0, 1, 2, 3, ..., N-1]
N is the total number of keypoints, and the output keypoints are the red, green and blue keypoint values respectively. In order to create a lookup of 64 values, we would need to create 64 points between 0 to N-1, and we can achieve this with numpy.linspace.
Now one intricacy with OpenCV is that images are read in BGR format. As such, I flipped the channels so that they are RGB, and I also cast the image as float32 so that we can maintain the precision when calculating the distances. Also, once I calculate the histogram, because you want percentages, I convert the histogram into percentages by dividing by the total number of values in the histogram (which is the number of pixels in the image) and multiply by 100% to get this in percentage.
Without further ado, here's my attempt in code. Your image looks like a wheat field, and so I called your image wheat.png, but rename it to whatever your image is called:
import numpy as np # Import relevant libraries
import cv2
import matplotlib.pyplot as plt
# Read in image
img = cv2.imread('wheat.png')
# Flip the channels as the image is in BGR and cast to float
img = img[:,:,::-1].astype('float32')
# control points for RGB - defined by you
rgb_lookup = np.array([[165,0,38], [222,63,46], [248,142,82],
[253,212,129], [254,254,189], [203,232,129],
[132,202,102], [42,159,84], [0,104,55]])
# Define number of bins for histogram
num_bins = 64
# Define dummy x keypoint values
x_keypt = np.arange(rgb_lookup.shape[0])
# Define interpolating x values
xp = np.linspace(x_keypt[0], x_keypt[-1], num_bins)
# Define lookup tables for red, green and blue
red_lookup = np.interp(xp, x_keypt, rgb_lookup[:,0])
green_lookup = np.interp(xp, x_keypt, rgb_lookup[:,1])
blue_lookup = np.interp(xp, x_keypt, rgb_lookup[:,2])
# Define final RGB lookup
rgb_final_lookup = np.column_stack([red_lookup, green_lookup, blue_lookup])
# Brute force
# For each pixel we have in our image, find the closest RGB distance
# from this pixel to each pixel in our lookup. Find the argmin,
# then log into histogram accordingly
hist = np.zeros(num_bins)
# Get the rows and columns of the image
rows = img.shape[0]
cols = img.shape[1]
# For each pixel
for i in np.arange(rows):
    for j in np.arange(cols):
        # Get colour pixel value
        val = img[i,j,:]
        # Find closest distance to lookup
        dists = np.sum((rgb_final_lookup - val)**2.0, axis=1)
        # Get location for histogram
        ind = np.argmin(dists)
        # Increment histogram
        hist[ind] += 1
# Get percentage calculation
hist = 100*hist / (rows*cols)
# Plot histogram
plt.stem(np.arange(num_bins), hist)
plt.title('Histogram of colours')
plt.xlabel('Bin number')
plt.ylabel('Percentage')
plt.show()
The graph we get is:
The above figure makes sense. At the beginning of your colour spectrum, there are a lot of red and yellowish pixels, which are defined near the beginning of your spectrum. The green and whitish pixels are more towards the end and don't make up most of the pixels. You'll need to play around with the number of bins to get this working to your tastes.
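If the per-pixel loop turns out to be slow, here is a sketch of a vectorized variant (it assumes the same img, rgb_final_lookup and num_bins as above); it loops over the 64 lookup entries instead of over every pixel:
pixels = img.reshape(-1, 3)
best_dist = np.full(pixels.shape[0], np.inf, dtype=np.float64)
best_idx = np.zeros(pixels.shape[0], dtype=np.int64)
for k in range(num_bins):
    # Squared distance from every pixel to lookup entry k
    d = np.sum((pixels - rgb_final_lookup[k]) ** 2, axis=1)
    closer = d < best_dist
    best_dist[closer] = d[closer]
    best_idx[closer] = k
# Histogram of closest-bin indices, converted to percentages
hist = 100.0 * np.bincount(best_idx, minlength=num_bins) / pixels.shape[0]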
Good luck!