I have an image to process. I need to detect all the circles in it; here it is.
And here is my code.
import cv2
import cv2.cv as cv

imgpath = "a.png"  # path to the input image
img = cv2.imread(imgpath)
cv2.imshow("imgorg", img)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow("gray", gray)
ret, thresh = cv2.threshold(gray, 199, 255, cv.CV_THRESH_BINARY_INV)
cv2.imshow("thresh", thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()
Then I got an image like this.
I tried to use erode and dilate to split them apart, but it doesn't work. My question is: how can I divide these touching circles into single ones, so that I can detect each of them?
Following @Micka's idea, I tried to process the image in the following way; here is my code.
import cv2
import cv2.cv as cv
import numpy as np

def findcircles(img, contours):
    minArea = 300
    minCircleRatio = 0.5
    for contour in contours:
        area = cv2.contourArea(contour)
        if area < minArea:
            continue
        (x, y), radius = cv2.minEnclosingCircle(contour)
        center = (int(x), int(y))
        radius = int(radius)
        circleArea = radius * radius * cv.CV_PI
        if area / circleArea < minCircleRatio:
            continue
        cv2.circle(img, center, radius, (0, 255, 0), 2)
    cv2.imshow("imggg", img)

img = cv2.imread("a.png")
cv2.imshow("org", img)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, threshold = cv2.threshold(gray, 199, 255, cv.CV_THRESH_BINARY_INV)
cv2.imshow("threshold", threshold)
blur = cv2.medianBlur(gray, 5)
cv2.imshow("blur", blur)
laplacian = cv2.Laplacian(blur, -1, ksize=5, delta=-50)
cv2.imshow("laplacian", laplacian)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
dilation = cv2.dilate(laplacian, kernel, iterations=1)
cv2.imshow("dilation", dilation)
result = cv2.subtract(threshold, dilation)
cv2.imshow("result", result)
contours, hierarchy = cv2.findContours(result, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
findcircles(gray, contours)
cv2.waitKey(0)
But I don't get the same result as @Micka's. I don't know which step is wrong.
Adapting the idea of @jochen I came to this:
extract the full circle mask as you've done (I called it fullForeground)
from your colored image, compute a grayscale version, blur it (median blur of size 7), and extract edges, for example with cv::Laplacian
This laplacian thresholded > 50 gives:
cv::Laplacian(blurred, lap, 0, 5); // no delta
lapMask = lap > 50; // thresholding to values > 50
This one dilated once gives:
cv::dilate(lapMask, dilatedThresholdedLaplacian, cv::Mat()); // dilate the edge mask once
Now the subtraction fullForeground - dilatedThresholdedLaplacian (the same as an and_not operation for masks of this type) gives:
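As a side note, the equivalence of the subtraction and the and_not formulation can be checked in a few lines of Python on toy 0/255 masks (purely illustrative, not part of the pipeline):

import cv2
import numpy as np

full = np.array([[255, 255], [255, 0]], np.uint8)       # toy foreground mask
edge = np.array([[255, 0], [0, 0]], np.uint8)           # toy edge mask
sub = cv2.subtract(full, edge)                          # saturating subtraction
and_not = cv2.bitwise_and(full, cv2.bitwise_not(edge))  # AND with the inverted mask
assert (sub == and_not).all()                           # identical for 0/255 masks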
from this you can compute contours. For each contour you can compute the area and compare it to the area of an enclosing circle, giving this code and result:
std::vector<std::vector<cv::Point> > contours;
cv::findContours(separated.clone(), contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);

double minArea = 500;
double minCircleRatio = 0.5;
for(unsigned int i=0; i<contours.size(); ++i)
{
    double cArea = cv::contourArea(contours[i]);
    if(cArea < minArea) continue;

    //filteredContours.push_back(contours[i]);
    //cv::drawContours(input, contours, i, cv::Scalar(0,255,0), 1);

    cv::Point2f center;
    float radius;
    cv::minEnclosingCircle(contours[i], center, radius);

    double circleArea = radius*radius*CV_PI;
    if(cArea/circleArea < minCircleRatio) continue;

    cv::circle(input, center, radius, cv::Scalar(0,0,255), 2);
}
Here is another image showing the coverage:
Hope this helps.
I think the first mistake is the value of thresh.
In your example the command cv2.threshold converts all white areas to black and everything else to white. I would suggest using a smaller value for thresh so that all black pixels get converted to white and all white or "colored" pixels (inside the circles) get converted to black, or vice versa. The value of thresh should be a little bigger than the brightest of the black pixels.
See the OpenCV documentation for threshold for more information.
Afterwards I would let opencv find all contours in the thresholded image and filter them for "valid" circles, e.g. by size and shape.
If that is not sufficient you could segment the inner circle from the rest of the image: First compute thresholdImageA with all white areas colored black. Then compute thresholdImageB with all the black areas being black. Afterwards combine both, thresholdImageA and thresholdImageB (e.g. with numpy.logical_and), to have a binary image with only the inner circle being white and the rest black. Of course the values for the threshold have to be chosen wisely to get the specific result.
That way, circles whose inner part directly touches the background will also be segmented.
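A minimal Python sketch of that combination might look as follows (the threshold values 50 and 200 are placeholders that have to be tuned to the actual image):

import cv2
import numpy as np

gray = cv2.imread('a.png', cv2.IMREAD_GRAYSCALE)

# thresholdImageA: all white areas colored black (keep pixels darker than 200)
_, thresholdImageA = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV)
# thresholdImageB: all black areas colored black (keep pixels brighter than 50)
_, thresholdImageB = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY)

# Combine: only the inner circle (neither white nor black) stays white
inner = np.logical_and(thresholdImageA > 0, thresholdImageB > 0).astype(np.uint8) * 255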
I am attempting to find the area inside an arbitrarily-shaped closed curve plotted in Python (example image below). So far, I have tried to use both the alphashape and polygon methods to achieve this, but both have failed. I am now attempting to use OpenCV and the flood-fill method to count the number of pixels inside the curve, and I will later convert that count to an area, given the area that a single pixel covers on the plot.
Example image:
testplot.jpg
In order to do this, I am doing the following, which I adapted from another post about OpenCV.
import cv2
import numpy as np

# Input image
img = cv2.imread('testplot.jpg', cv2.IMREAD_GRAYSCALE)

# Dilate to better detect contours
temp = cv2.dilate(img, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))

# Find largest contour (255-temp and cv2.RETR_TREE account for how cv2 expects
# the background to be black, not white, so I convert the background to black)
cnts, _ = cv2.findContours(255-temp, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
largestCnt = []  # I expect this to yield the blue contour
for cnt in cnts:
    if len(cnt) > len(largestCnt):
        largestCnt = cnt

# Determine center of area of largest contour
M = cv2.moments(largestCnt)
x = int(M["m10"] / M["m00"])
y = int(M["m01"] / M["m00"])

# Initial mask for flood filling, should cover entire figure
width, height = temp.shape
mask = np.ones((width + 2, height + 2), np.uint8) * 255
mask[1:width, 1:height] = 0

# Generate intermediate image, draw largest contour onto it, flood fill this contour
temp = np.zeros(temp.shape, np.uint8)
temp = cv2.drawContours(temp, largestCnt, -1, 255, cv2.FILLED)
_, temp, mask, _ = cv2.floodFill(temp, mask, (x, y), 255)
temp = cv2.morphologyEx(temp, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))

area = cv2.countNonZero(temp)  # number of pixels encircled by the blue line
From this I expect to get the same image as above, but with the center of the contour filled in white and the background and original blue contour in black. Instead, I end up with this:
result.jpg
While this at first glance appears to have accurately turned the area inside the contour white, the white area is actually larger than the area inside the contour, so the result overestimates the number of pixels inside it.
Any input on this would be greatly appreciated. I am fairly new to OpenCV so I may have misunderstood something.
EDIT:
Thanks to a comment below, I made some edits and this is now my code, with edits noted:
import cv2
import numpy as np

# EDITED INPUT IMAGE: Input image
img = cv2.imread('testplot2.jpg', cv2.IMREAD_GRAYSCALE)

# EDIT: threshold
_, temp = cv2.threshold(img, 250, 255, cv2.THRESH_BINARY_INV)

# EDIT, REMOVED: Dilate to better detect contours

# Find largest contour
cnts, _ = cv2.findContours(temp, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
largestCnt = []  # I expect this to yield the blue contour
for cnt in cnts:
    if len(cnt) > len(largestCnt):
        largestCnt = cnt

# Determine center of area of largest contour
M = cv2.moments(largestCnt)
x = int(M["m10"] / M["m00"])
y = int(M["m01"] / M["m00"])

# Initial mask for flood filling, should cover entire figure
width, height = temp.shape
mask = np.ones((width + 2, height + 2), np.uint8) * 255
mask[1:width, 1:height] = 0

# Generate intermediate image, draw largest contour, flood filled
temp = np.zeros(temp.shape, np.uint8)
temp = cv2.drawContours(temp, largestCnt, -1, 255, cv2.FILLED)
_, temp, mask, _ = cv2.floodFill(temp, mask, (x, y), 255)
temp = cv2.morphologyEx(temp, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))

area = cv2.countNonZero(temp)  # number of pixels encircled by the blue line
I input a different image with the axes and the frame that Python adds by default removed, for ease. I get what I expect at the second step (this image). However, in the result both the original contour and the area it encircles appear to have been made white, whereas I want the original contour to be black and only the area it encircles to be white. How might I achieve this?
The problem is your opening operation at the end. This morphological operation includes a dilation at the end that expands the white contour, increasing its area. Let’s try a different approach where no morphology is involved. These are the steps:
Convert your image to grayscale
Apply Otsu’s thresholding to get a binary image; let’s work with black and white pixels only.
Apply a first flood-fill operation at image location (0,0) to get rid of the outer white space.
Filter small blobs using an area filter
Find the “Curve Canvas” (the white space that encloses the curve) and locate and store its starting point at (targetX, targetY)
Apply a second flood-fill at location (targetX, targetY)
Get the area of the isolated blob with cv2.countNonZero
Let’s take a look at the code:
import cv2
import numpy as np
# Set image path
path = "C:/opencvImages/"
fileName = "cLIjM.jpg"
# Read Input image
inputImage = cv2.imread(path+fileName)
inputCopy = inputImage.copy()
# Convert BGR to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Threshold via Otsu + bias adjustment:
threshValue, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
This is the binary image you get:
Now, let’s flood-fill at the corner located at (0,0) with a black color to get rid of the first white space. This step is very straightforward:
# Flood-fill background, seed at (0,0) and use black color:
cv2.floodFill(binaryImage, None, (0, 0), 0)
This is the result, note how the first big white area is gone:
Let’s get rid of the small blobs by applying an area filter. Everything below an area of 100 will be deleted:
# Perform an area filter on the binary blobs:
componentsNumber, labeledImage, componentStats, componentCentroids = \
cv2.connectedComponentsWithStats(binaryImage, connectivity=4)
# Set the minimum pixels for the area filter:
minArea = 100
# Get the indices/labels of the remaining components based on the area stat
# (skip the background component at index 0)
remainingComponentLabels = [i for i in range(1, componentsNumber) if componentStats[i][4] >= minArea]
# Filter the labeled pixels based on the remaining labels,
# assign pixel intensity to 255 (uint8) for the remaining pixels
filteredImage = np.where(np.isin(labeledImage, remainingComponentLabels) == True, 255, 0).astype('uint8')
This is the result of the filter:
Now, what remains is the second white area. I need to locate its starting point because I want to apply a second flood-fill operation at that location. I’ll traverse the image to find the first white pixel, like this:
# Get image dimensions:
height, width = filteredImage.shape

# Store the flood-fill point here:
targetX = -1
targetY = -1

for i in range(0, width):
    for j in range(0, height):
        # Get current binary pixel:
        currentPixel = filteredImage[j, i]
        # Check if it is the first white pixel:
        if targetX == -1 and targetY == -1 and currentPixel == 255:
            targetX = i
            targetY = j

print("Flooding in X = "+str(targetX)+" Y: "+str(targetY))
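For reference, here is a vectorized NumPy version of the same column-major scan (my own addition, not part of the original answer); it finds the same first white pixel without the explicit loops:

ys, xs = np.nonzero(filteredImage == 255)  # rows and columns of all white pixels
first = np.lexsort((ys, xs))[0]            # index of the first pixel in column-major order
targetX, targetY = int(xs[first]), int(ys[first])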
There’s probably a more elegant, Python-oriented way of doing this (the vectorized scan above is one candidate), but I’m still learning the language. Feel free to improve the script (and share it here). The loop, however, gets me the location of the first white pixel, so I can now apply a second flood-fill at this exact location:
# Flood-fill background, seed at (targetX, targetY) and use black color:
cv2.floodFill(filteredImage, None, (targetX, targetY), 0)
You end up with this:
As you see, just count the number of non-zero pixels:
# Get the area of the target curve:
area = cv2.countNonZero(filteredImage)
print("Curve Area is: "+str(area))
The result is:
Curve Area is: 1510
Here is another approach using Python/OpenCV.
Read the input
Convert to HSV colorspace
Threshold on the color range of blue
Find the largest contour
Get its area and print it
Draw the contour as a white filled contour on a black background
Save the results
Input:
import cv2
import numpy as np

# read image
img = cv2.imread('closed_curve.jpg')

# convert to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# select blue color range in hsv
lower = (24,128,115)
upper = (164,255,255)

# threshold on blue in hsv
thresh = cv2.inRange(hsv, lower, upper)

# get largest contour
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contours = contours[0] if len(contours) == 2 else contours[1]
big_contour = max(contours, key=cv2.contourArea)
area = cv2.contourArea(big_contour)
print("Area =", area)

# draw filled contour on black background
result = np.zeros_like(thresh)
cv2.drawContours(result, [big_contour], -1, 255, cv2.FILLED)

# save results
cv2.imwrite("closed_curve_thresh.jpg", thresh)
cv2.imwrite("closed_curve_result.jpg", result)

# view results
cv2.imshow("threshold", thresh)
cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Threshold Image:
Result Filled Contour On Black Background:
Area Result:
Area = 2347.0
I'm trying to extract the pad section from the following image with OpenCV.
Starting with an image like this:
I am trying to extract into an image like this:
to end up with an image something like this.
I currently have the following:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('strip.png')
grayscale = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresholded = cv2.threshold(grayscale, 0, 255, cv2.THRESH_OTSU)
bbox = cv2.boundingRect(thresholded)
x, y, w, h = bbox
foreground = img[y:y+h, x:x+w]
cv2.imwrite("output.png", foreground)
Which outputs this:
If you look closely at the upper and lower parts of the image, they seem more cluttered, while the center part (which is your desired output) looks soft and smooth.
Since the center part is homogeneous, a smoothing filter (like an erosion) won't affect that part very much; the upper part, on the other hand, changes noticeably more.
As a first step, I remove the black background with a simple thresholding. Then I apply some smoothing to the image, compute the difference between the result and the original image, and threshold the final result to remove the unwanted pixels.
Then I do some morphology to remove the noisy residue of this process. At the end, with the help of the boundingRect function, I extract the desired segment (the white contour):
background removed:
the difference image after blurring with erosion:
the difference image after opening process and a threshold:
And finally the bounding box of the white objects:
The code I wrote (C++ opencv):
Mat im = imread("E:/t.jpg", 0);
resize(im, im, Size(), 0.3, 0.3); // resizing just for better visualization

Mat im1, im2, im3;

// Removing the black background:
threshold(im, im1, 50, 255, THRESH_BINARY);
vector<vector<Point>> contours_1;
findContours(im1, contours_1, RETR_CCOMP, CHAIN_APPROX_NONE);
Rect r = boundingRect(contours_1[0]);
im(r).copyTo(im);
im.copyTo(im3);
imshow("background removed", im);

// Detecting the cluttered parts and cutting them out:
erode(im, im2, Mat::ones(3, 3, CV_8U), Point(-1, -1), 3);
im2.convertTo(im2, CV_32F);
im3.convertTo(im3, CV_32F);
subtract(im2, im3, im1);
double min, max;
minMaxIdx(im1, &min, &max);
im1 = 255 * (im1 - min) / (max - min);
im1.convertTo(im1, CV_8U);
imshow("the difference image", im1);

threshold(im1, im1, 250, 255, THRESH_BINARY);
erode(im1, im1, Mat::ones(3, 3, CV_8U), Point(-1, -1), 3);
dilate(im1, im1, Mat::ones(3, 3, CV_8U), Point(-1, -1), 7);
imshow("the difference image thresholded", im1);

vector<Point> idx;
findNonZero(im1, idx);
Rect rr = boundingRect(idx);
rectangle(im, rr, Scalar(255, 255, 255), 2);
imshow("Final segmentation", im);
waitKey(0);
So I've been trying to code a Python script which takes an image as input and then cuts out a rectangle with a specific background color. What causes a problem for my coding skills, however, is that the rectangle is not at a fixed position in every image (the position will be random).
I do not really understand how to use the numpy functions for this. I have also read a bit about OpenCV, but I'm totally new to it. So far I have just cropped the images through the .crop function, but then I have to use fixed values.
This is how the input image could look; I would like to detect the position of the yellow rectangle and then crop the image to its size.
Help is appreciated, thanks in advance.
Edit: @MarkSetchell's way works pretty well, but I found an issue with a different test picture. The problem with the other picture is that there are 2 small pixels of the same color at the top and the bottom of the picture, which cause errors or a bad crop.
Updated Answer
I have updated my answer to cope with specks of noisy outlier pixels of the same colour as the yellow box. This works by running a 3x3 median filter over the image first to remove the spots:
#!/usr/bin/env python3
import numpy as np
from PIL import Image, ImageFilter

# Open image and save original as Numpy array
im = Image.open('image.png').convert('RGB')
orig = np.array(im)

# Median filter to remove outliers, then make into Numpy array
im = im.filter(ImageFilter.MedianFilter(3))
na = np.array(im)

# Find X,Y coordinates of all yellow pixels
yellowY, yellowX = np.where(np.all(na==[247,213,83], axis=2))
top, bottom = yellowY[0], yellowY[-1]
left, right = yellowX[0], yellowX[-1]
print(top, bottom, left, right)

# Extract Region of Interest from unblurred original
ROI = orig[top:bottom, left:right]
Image.fromarray(ROI).save('result.png')
Original Answer
Ok, your yellow colour is rgb(247,213,83), so we want to find the X,Y coordinates of all yellow pixels:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Open image and make into Numpy array
im = Image.open('image.png').convert('RGB')
na = np.array(im)
# Find X,Y coordinates of all yellow pixels
yellowY, yellowX = np.where(np.all(na==[247,213,83],axis=2))
# Find first and last row containing yellow pixels
top, bottom = yellowY[0], yellowY[-1]
# Find first and last column containing yellow pixels
left, right = yellowX[0], yellowX[-1]
# Extract Region of Interest
ROI=na[top:bottom, left:right]
Image.fromarray(ROI).save('result.png')
You can do the exact same thing in Terminal with ImageMagick:
# Get trim box of yellow pixels
trim=$(magick image.png -fill black +opaque "rgb(247,213,83)" -format %# info:)
# Check how it looks
echo $trim
251x109+101+220
# Crop image to trim box and save as "ROI.png"
magick image.png -crop "$trim" ROI.png
If still using ImageMagick v6 rather than v7, replace magick with convert.
What I see is dark and light gray areas on the sides and top, a white area, and a yellow rectangle with gray triangles inside the white area.
The first stage I suggest is converting the image from RGB color space to HSV color space.
The S channel in HSV space is the "color saturation" channel.
All colorless pixels (gray/black/white) are zero, and yellow pixels are above zero in the S channel.
Next stages:
Apply threshold on the S channel (convert it to a binary image).
The yellow pixels go to 255, and the others go to zero.
Find contours in thresh (find only the outer contour - only the rectangle).
Invert the polarity of the pixels inside the rectangle.
The gray triangles become 255, and the other pixels are zero.
Find contours in thresh - find the gray triangles.
Here is the code:
import numpy as np
import cv2
# Read input image
img = cv2.imread('img.png')
# Convert from BGR to HSV color space
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Get the saturation plane - all black/white/gray pixels are zero, and colored pixels are above zero.
s = hsv[:, :, 1]
# Apply threshold on s - use automatic threshold algorithm (use THRESH_OTSU).
ret, thresh = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Find contours in thresh (find only the outer contour - only the rectangle).
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2] # [-2] indexing takes return value before last (due to OpenCV compatibility issues).
# Mark rectangle with green line
cv2.drawContours(img, contours, -1, (0, 255, 0), 2)
# Assume there is only one contour, get the bounding rectangle of the contour.
x, y, w, h = cv2.boundingRect(contours[0])
# Invert polarity of the pixels inside the rectangle (on thresh image).
thresh[y:y+h, x:x+w] = 255 - thresh[y:y+h, x:x+w]
# Find contours in thresh (find the triangles).
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2] # [-2] indexing takes return value before last (due to OpenCV compatibility issues).
# Iterate triangle contours
for c in contours:
    if cv2.contourArea(c) > 4:  # Ignore very small contours
        # Mark triangle with blue line
        cv2.drawContours(img, [c], -1, (255, 0, 0), 2)
# Show result (for testing).
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
S color channel in HSV color space:
thresh - S after threshold:
thresh after inverting polarity of the rectangle:
Result (rectangle and triangles are marked):
Update:
In case there are some colored dots on the background, you can crop the largest colored contour:
import cv2
import imutils # https://pypi.org/project/imutils/
# Read input image
img = cv2.imread('img2.png')
# Convert from BGR to HSV color space
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Get the saturation plane - all black/white/gray pixels are zero, and colored pixels are above zero.
s = hsv[:, :, 1]
cv2.imwrite('s.png', s)
# Apply threshold on s - use automatic threshold algorithm (use THRESH_OTSU).
ret, thresh = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Find contours
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnts = imutils.grab_contours(cnts)
# Find the contour with the maximum area.
c = max(cnts, key=cv2.contourArea)
# Get bounding rectangle
x, y, w, h = cv2.boundingRect(c)
# Crop the bounding rectangle out of img
out = img[y:y+h, x:x+w, :].copy()
Result:
In OpenCV you can use inRange. This basically makes whatever color is in the range you specified white, and the rest black. This way all the yellow will be white.
Here is the documentation: https://docs.opencv.org/3.4/da/d97/tutorial_threshold_inRange.html
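A minimal sketch of that idea (the BGR bounds below are placeholder values around the yellow rgb(247,213,83) from the other answer and would need tuning):

import cv2
import numpy as np

img = cv2.imread('image.png')

# Pixels inside [lower, upper] become 255, everything else 0
lower = np.array([63, 193, 227])   # placeholder BGR lower bound
upper = np.array([103, 233, 255])  # placeholder BGR upper bound
mask = cv2.inRange(img, lower, upper)

# Bounding box of the white pixels, then crop
x, y, w, h = cv2.boundingRect(mask)
crop = img[y:y+h, x:x+w]
cv2.imwrite('cropped.png', crop)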
I am new to OpenCV and Python and I have been encountering a problem removing noise from my input image. I only wanted to extract the nucleus of a WBC, so I used addition to highlight the nucleus and used thresholding to remove the RBCs in the image. I successfully removed the RBCs, but the platelets are not removed and some lines appeared at the borders. I also tried using dilation, erosion, opening and closing to denoise the image, but the nucleus gets destroyed.
Here is my code:
img = cv2.imread('1.bmp')
img_2 = cv2.imread('1.bmp')

input_img = cv2.addWeighted(img, 0.55, img_2, 0.6, 0)
retval, threshold = cv2.threshold(input_img, 158, 255, cv2.THRESH_BINARY)
threshold = cv2.cvtColor(threshold, cv2.COLOR_BGR2GRAY)
retval2, threshold2 = cv2.threshold(threshold, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
blur2 = cv2.medianBlur(threshold2, 5)
Here is the original image:
After Thresholding:
If the nucleus of a WBC as you have highlighted is always the largest contour before thresholding, I would suggest using findContours to store it alone and remove the smaller blobs like this:
vector<vector<Point>> contours; // vector for storing contours
vector<Vec4i> hierarchy;
double largest_area = 0;
int largest_contour_index = 0;
Rect bounding_rect;

// Find the contours in the image
findContours(input_img, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);

for (int i = 0; i < contours.size(); i++) // iterate through each contour
{
    double a = contourArea(contours[i], false); // find the area of the contour
    if (a > largest_area) {
        largest_area = a;
        // Store the index of the largest contour
        largest_contour_index = i;
        // Find the bounding rectangle for the biggest contour
        bounding_rect = boundingRect(contours[i]);
    }
}

Scalar color(255, 255, 255);
// Draw the largest contour using the previously stored index.
Mat dst = Mat::zeros(input_img.size(), CV_8UC3);
drawContours(dst, contours, largest_contour_index, color, CV_FILLED, 8, hierarchy);
My code is C++ but you can find python examples: How to detect and draw contours using OpenCV in Python?
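For convenience, a rough Python equivalent of the snippet above might look like this (a sketch, assuming input_img is the binary image from the question's thresholding step and a recent OpenCV where findContours returns two values):

import cv2
import numpy as np

contours, hierarchy = cv2.findContours(input_img, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)

# Keep only the largest contour by area
largest = max(contours, key=cv2.contourArea)
bounding_rect = cv2.boundingRect(largest)

# Draw it filled on a black canvas
dst = np.zeros(input_img.shape, np.uint8)
cv2.drawContours(dst, [largest], -1, 255, cv2.FILLED)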
I have some questions about segmentation of a contoured image. For example, I have a cable image, and I can contour it with the threshold and drawContours functions using the code below (contoured image, threshold image). My question is: I want to extract this cable and read its RGB code. Any advice would be great! Thanks.
trs = 127              # threshold value (example; the original value is not shown)
red = (0, 0, 255)      # BGR drawing color (example)
cnt = 2                # line thickness (example)
winName = "contours"   # window name (example)

gray_image = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
ret, thresh_img = cv2.threshold(gray_image, trs, 255, cv2.THRESH_BINARY)
im2, contours, hierarchy = cv2.findContours(thresh_img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(im2, contours, -1, red, cnt)
cv2.imshow(winName, im2)
You can use cv2.contourArea(contour); more information here and here.
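For example (a minimal usage sketch, with contours as returned by cv2.findContours in the question's code):

for c in contours:
    print(cv2.contourArea(c))  # area enclosed by each contour, in pixels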
You can get the area inside some polygon "contour" using the shoelace formula.
The idea is to calculate the area incrementally by summing/subtracting the area between the polygon's sides and the axis; after one complete loop through the polygon contour, the result of the summation/subtraction will be the area inside the polygon.
double area = 0, area1 = 0;
uint_fast8_t j = numPoints - 1;
for (uint_fast8_t i = 0; i < numPoints; i++)
{
    area = area + (contour[j][0] + contour[i][0]) * (contour[j][1] - contour[i][1]);
    area1 = area1 + (contour[j][0] * contour[i][1]) - (contour[j][1] * contour[i][0]); // different form of the formula
    j = i; // j is the previous vertex to i
}
area = area / 2;
area1 = area1 / 2; // sign of area depends on direction of rotation
https://en.wikipedia.org/wiki/Shoelace_formula
https://www.mathopenref.com/coordpolygonarea.html
https://www.mathopenref.com/coordpolygonarea2.html
For Python:
https://www.101computing.net/the-shoelace-algorithm/
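A direct Python translation of the loop above might look like this (my own sketch):

def shoelace_area(contour):
    # contour is a list of (x, y) vertices; returns the signed area
    area = 0.0
    j = len(contour) - 1  # j is the previous vertex to i
    for i in range(len(contour)):
        area += (contour[j][0] + contour[i][0]) * (contour[j][1] - contour[i][1])
        j = i
    return area / 2  # sign depends on the direction of rotation

print(shoelace_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # -12.0; the 4x3 rectangle has area 12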