As shown in the images, I have a masked MRI in which certain muscles have been segmented out. I am trying to get the area of these muscles in pixels. Currently, I can count all the non-zero pixels in the image using np.count_nonzero(result); however, I am looking to get the individual areas of each left and right muscle, not just the total.
I know I could manually crop/select an ROI for each area, but I do not want to do this for hundreds of images. Also, not all muscles fall cleanly on one side, as seen in the bottom image, so I cannot just split the image in half. The images are represented as 2D arrays.
Thank you!
You can use the OpenCV library to get the number of pixels for each object in the image.
import cv2  # OpenCV library
# Read the image
image = cv2.imread(r'D:\image_sample.jpg')
# Convert the image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Get the mask (predefined threshold)
mask = cv2.inRange(gray, 200, 255)
# Find all object contours
contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for i in range(len(contours)):
    # Anchor the text label at the contour's first point
    x, y = contours[i][0][0]
    area = cv2.contourArea(contours[i])
    print(f"Area{i+1} = {area} pixels")
    # Fill each contour with an alternating color
    image = cv2.drawContours(image, contours, i, (0, 255, 255 * (i % 2)), -1)
    image = cv2.putText(image, str(area), (int(x), int(y)), cv2.FONT_HERSHEY_PLAIN, 1, (255, 0, 255), 1, cv2.LINE_AA, False)
# Show the result
cv2.imshow('image with contours', image)
cv2.waitKey()
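Note that cv2.contourArea computes the polygon area of the contour, which can differ slightly from the exact non-zero pixel count you get from np.count_nonzero. If you want the per-muscle pixel counts directly, here is a minimal sketch using cv2.connectedComponentsWithStats (available since OpenCV 3.0), assuming result is the segmented 2D array from your question:
import cv2
import numpy as np
# 'result' is assumed to be the segmented 2D array from the question
binary = (result > 0).astype(np.uint8)
# Label each connected region; stats holds one row per label
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
# Label 0 is the background, so skip it
for label in range(1, num_labels):
    print(f"Muscle {label}: {stats[label, cv2.CC_STAT_AREA]} pixels")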
Related
I am trying to extract the floor of a room image as a separate image. You can assume the goal is to extract it and display it using a basic approach. I am using OpenCV instead of segmentation approaches because I think those approaches will take time, and OpenCV will be fast. It should also work with every image, i.e. remain dynamic.
Here is what I am using right now:
Image:
Code:
import cv2
import numpy as np
# Load the image
img = cv2.imread("room.jpg")
# Convert the image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Apply Gaussian blur to remove noise
blur = cv2.GaussianBlur(gray, (5, 5), 0)
# Use Canny edge detection to find edges
edges = cv2.Canny(blur, 50, 150)
# Find contours in the image
contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Sort the contours by area
contours = sorted(contours, key=cv2.contourArea, reverse=True)
# Set a minimum area for the contour to be considered
min_area = 5000
# Loop through the contours and draw only those with an area greater than min_area
for cnt in contours:
    if cv2.contourArea(cnt) > min_area:
        cv2.drawContours(img, [cnt], -1, (0, 255, 0), 3)
        break
# Display the result
cv2.imshow("Floor", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
It is not giving the desired results; it is placing points in the wrong locations. How can I sort this out? Opinions and other suggestions would also be appreciated.
I need help with OpenCV
I have a picture with a complex form lying on the ground, and I need to extract this form from the picture and clean it of noise. But now there is a logo which I need to remove, and 4 holes to identify. What I could do so far is below. Original image:
My code so far:
import cv2
import numpy as np
# Read the original image
img = cv2.imread('Amoebe_1.jpg')
# resize image
scale_down = 0.4
img = cv2.resize(img, None, fx=scale_down, fy=scale_down, interpolation=cv2.INTER_LINEAR)
# Display original image
cv2.imshow('Original', img)
cv2.waitKey(0)
# Denoising
dst = cv2.fastNlMeansDenoisingColored(img,None,20,10,10,21)
# Canny Edge Detection
edges = cv2.Canny(image=dst, threshold1=100, threshold2=200) # Canny Edge Detection
# Contour Detection
contours1, hierarchy1 = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# draw contours on the original image for `CHAIN_APPROX_SIMPLE`
image_copy1 = img.copy()
cv2.drawContours(image_copy1, contours1, -1, (0, 255, 0), 2, cv2.LINE_AA)
# see the results
cv2.imshow('Simple approximation', image_copy1)
# Display Canny Edge Detection Image
cv2.imshow('Canny Edge Detection', edges)
cv2.waitKey(0)
#Floodfill
h,w,chn = img.shape
seed = (w // 2, h // 2)  # integer seed point (note: the flood fill below starts from the corner instead)
mask = np.zeros((h+2,w+2),np.uint8)
bucket = edges.copy()
cv2.floodFill(bucket, mask, (0,0), (255,255,255))
cv2.imshow('Mask', bucket)
cv2.waitKey(0)
cv2.destroyAllWindows()
Having a shot at this in ImageJ, extracting the red channel from the raw image gives me this:
Which is already close to a binary image. Running a small (3 px) median filter and thresholding gives this as a binary:
Running cv.findContours() on that last one and analysing contour areas should give you the little holes and the "eye". Use cv.drawContours() with the bigger objects to fill up the eye and logo area, maybe dilate() to fill small discrepancies.
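For reference, here is a minimal Python sketch of that idea, assuming the binary from the step above is saved as binary.png and that a 500-pixel area cutoff separates the holes from the bigger objects (both the filename and the cutoff are assumptions):
import cv2
import numpy as np
# Assumed input: the thresholded binary image from the step above
binary = cv2.imread('binary.png', cv2.IMREAD_GRAYSCALE)
# Find the outlines of all white regions
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Fill the bigger objects (the eye and logo area); keep small ones (the holes)
filled = binary.copy()
for c in contours:
    if cv2.contourArea(c) > 500:  # assumed area cutoff
        cv2.drawContours(filled, [c], -1, 255, -1)
# Dilate slightly to fill small discrepancies
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
filled = cv2.dilate(filled, kernel)
cv2.imshow('filled', filled)
cv2.waitKey(0)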
This is Wendy. This is an image processing question.
I have to select the bubbles properly to analyze their size and calculate their velocity across image sequences.
original image
image sequences
In the picture, the largest shape is a droplet, and inside the droplet are lots of tiny bubbles.
So far, the questions are:
1. Because the shapes of the bubbles are broken, few contours are detected, while lots of stray edges are. How can I fill in the edges so they become closed contours (like a coin: I just want the outline of each bubble, not the lines inside it)?
current progress
Code (detect contours):
img = cv2.imread("1000.tif")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (3, 3), 0)
edged = cv2.Canny(blurred, 5, 60)
cv2.imshow('edged', edged)
cv2.waitKey(0)
contours, hierarchy = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
result = img.copy()
cv2.drawContours(result, contours, -1, (0, 255, 0), 1)
cv2.imwrite("contours.tif", result)
cv2.imshow("contours", result)
cv2.waitKey(0)
Code (fill holes):
im_floodfill = edged.copy()
h, w = edged.shape[:2]
# floodFill needs a mask 2 pixels larger than the image
mask = np.zeros((h+2, w+2), np.uint8)
# Flood-fill the background from the corner, then invert to get the holes
cv2.floodFill(im_floodfill, mask, (0,0), 255)
im_floodfill_inv = cv2.bitwise_not(im_floodfill)
# Foreground = original edges plus the filled holes
im_out = edged | im_floodfill_inv
cv2.imshow("Thresholded Image", edged)
cv2.imshow("Floodfilled Image", im_floodfill)
cv2.imshow("Inverted Floodfilled Image", im_floodfill_inv)
cv2.imshow("Foreground", im_out)
cv2.waitKey(0)
cv2.imwrite("fill.jpg", im_out)
The result will be:
The velocity of the bubbles is my objective. I've tried putting the binary image sequences into ImageJ and using TrackMate (a tool for automated and semi-automated particle tracking). It has a fixed bubble-size setting for tracking; however, the sizes of the bubbles vary considerably in my case, so is there any method to track the moving bubbles?
If this isn't clear enough, please let me know and I can try to be more specific. Thank you for your time.
Here is one way in Python/OpenCV.
Read the input
Convert to gray
Threshold
Fill the holes with morphology close
Get the contours and the largest contour (though presumably just one)
Draw the contour on a copy of the input
Save the results
Input:
import cv2
import numpy as np
# read image
img = cv2.imread('vapor.jpg')
# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# threshold
#thresh = cv2.threshold(gray,5,255,cv2.THRESH_BINARY)[1]
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
# apply close morphology to fill white circles
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (45,45))
thresh_cleaned = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
# get largest contour
contours = cv2.findContours(thresh_cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
big_contour = max(contours, key=cv2.contourArea)
# draw contour
result = img.copy()
cv2.drawContours(result,[big_contour],0,(0,0,255),1)
# save image with points drawn
cv2.imwrite('vapor_thresh.jpg',thresh)
cv2.imwrite('vapor_cleaned_thresh.jpg',thresh_cleaned)
cv2.imwrite('vapor_contour.jpg',result)
cv2.imshow("thresh", thresh)
cv2.imshow("thresh_cleaned", thresh_cleaned)
cv2.imshow("contour", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Threshold image:
Morphology filled image:
Contour image:
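Since bubble size was part of the goal, note that the droplet's area in pixels can be read off the largest contour found above, e.g.:
# Polygon area of the largest contour, in pixels
area = cv2.contourArea(big_contour)
print("droplet area in pixels:", area)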
To elaborate on what my expected outcome is, I have attached a test.png file showing what I want this script to do.
Essentially, you would somehow have to find all the black pixels and find their start and stop locations.
Then I want to, so to speak, enlarge that: extract the selected pixels (the contents of the green square) into their own image and save it.
All my images are either Black and White, no other color.
Original Image
Outcome Image
Extracted from Outcome Image
If anyone could help me out with this, it'd be super appreciated! I've tried using OpenCV in Python to find the black pixels, but all my attempts have failed.
Here is one way to do that in Python/OpenCV.
Read the input
Convert to gray
Invert polarity (regions for contours need to be white)
Threshold
Get contours
Get bounding boxes
Crop regions in input according to bounding boxes
Save to disk
Input:
import cv2
import numpy as np
# read image
img = cv2.imread("black_pixels.png")
# convert img to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# invert polarity
gray = 255 - gray
# do simple binary threshold on gray image
thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY)[1]
# Get contours
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
i = 1
for c in cnts:
    # get bounding box
    x, y, w, h = cv2.boundingRect(c)
    # crop region of img using bounding box
    region = img[y:y+h, x:x+w]
    # save region to new image
    cv2.imwrite("black_region_{0}.png".format(i), region)
    i = i + 1
# display it
cv2.imshow("IMAGE", img)
cv2.imshow("GRAY", gray)
cv2.imshow("THRESHOLD", thresh)
cv2.waitKey(0)
Result:
I am working on retinal fundus images. The image consists of a circular retina on a black background. With OpenCV, I have managed to get a contour that surrounds the whole circular retina. What I need is to crop out the circular retina from the black background.
It is unclear in your question whether you want to actually crop out the information that is defined within the contour or mask out the information that isn't relevant to the contour chosen. I'll explore what to do in both situations.
Masking out the information
Assuming you ran cv2.findContours on your image, you will have received a structure that lists all of the contours available in your image. I'm also assuming that you know the index of the contour that was used to surround the object you want. Assuming this is stored in idx, first use cv2.drawContours to draw a filled version of this contour onto a blank image, then use this image to index into your image to extract out the object. This logic masks out any irrelevant information and retains only what is important - which is defined within the contour you have selected. The code to do this would look something like the following, assuming your image is a grayscale image stored in img:
import numpy as np
import cv2
img = cv2.imread('...', 0) # Read in your image
# contours, _ = cv2.findContours(...) # Your call to find the contours using OpenCV 2.4.x
_, contours, _ = cv2.findContours(...) # Your call to find the contours
idx = ... # The index of the contour that surrounds your object
mask = np.zeros_like(img) # Create mask where white is what we want, black otherwise
cv2.drawContours(mask, contours, idx, 255, -1) # Draw filled contour in mask
out = np.zeros_like(img) # Extract out the object and place into output image
out[mask == 255] = img[mask == 255]
# Show the output image
cv2.imshow('Output', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
If you actually want to crop...
If you want to crop the image, you need to define the minimum spanning bounding box of the area defined by the contour. You can find the top left and lower right corner of the bounding box, then use indexing to crop out what you need. The code will be the same as before, but there will be an additional cropping step:
import numpy as np
import cv2
img = cv2.imread('...', 0) # Read in your image
# contours, _ = cv2.findContours(...) # Your call to find the contours using OpenCV 2.4.x
_, contours, _ = cv2.findContours(...) # Your call to find the contours
idx = ... # The index of the contour that surrounds your object
mask = np.zeros_like(img) # Create mask where white is what we want, black otherwise
cv2.drawContours(mask, contours, idx, 255, -1) # Draw filled contour in mask
out = np.zeros_like(img) # Extract out the object and place into output image
out[mask == 255] = img[mask == 255]
# Now crop
(y, x) = np.where(mask == 255)
(topy, topx) = (np.min(y), np.min(x))
(bottomy, bottomx) = (np.max(y), np.max(x))
out = out[topy:bottomy+1, topx:bottomx+1]
# Show the output image
cv2.imshow('Output', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
The cropping code works such that when we define the mask to extract out the area defined by the contour, we additionally find the smallest horizontal and vertical coordinates, which define the top left corner of the contour. We similarly find the largest horizontal and vertical coordinates, which define the bottom right corner of the contour. We then use indexing with these coordinates to crop what we actually need. Note that this performs cropping on the masked image - that is, the image that removes everything but the information contained within the chosen contour.
Note with OpenCV 3.x
It should be noted that the above code assumes you are using OpenCV 2.4.x. Take note that in OpenCV 3.x, the definition of cv2.findContours has changed. Specifically, the output is a three-element tuple where the first element is the source image, while the other two outputs are the same as in OpenCV 2.4.x. Therefore, simply change the cv2.findContours statement in the above code to ignore the first output:
_, contours, _ = cv2.findContours(...) # Your call to find contours
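For completeness, OpenCV 4.x reverted to returning two values again, so code that has to run across versions often guards the unpacking, as some of the answers above do:
# Handles both the 2-tuple (OpenCV 2.4.x / 4.x) and 3-tuple (OpenCV 3.x) return
ret = cv2.findContours(...) # same arguments as before
contours = ret[0] if len(ret) == 2 else ret[1]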
Here's another approach to crop out a rectangular ROI. The main idea is to obtain a binary image of the retina with Otsu's threshold, find contours, and then extract the ROI using Numpy slicing. Assuming you have an input image like this:
Extracted ROI
import cv2
# Load image, convert to grayscale, and Otsu's threshold
image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU + cv2.THRESH_BINARY)[1]
# Find contour and sort by contour area
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
# Find bounding box and extract ROI
for c in cnts:
    x,y,w,h = cv2.boundingRect(c)
    ROI = image[y:y+h, x:x+w]
    break
cv2.imshow('ROI',ROI)
cv2.imwrite('ROI.png',ROI)
cv2.waitKey()
This is a pretty simple way: mask the image with transparency.
Read the image
Make a grayscale version.
Otsu Threshold
Apply morphology open and close to thresholded image as a mask
Put the mask into the alpha channel of the input
Save the output
Input
Code
import cv2
import numpy as np
# load image and convert to grayscale
img = cv2.imread('retina.jpeg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# threshold input image using otsu thresholding as mask and refine with morphology
ret, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
kernel = np.ones((9,9), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
# put mask into alpha channel of result
result = img.copy()
result = cv2.cvtColor(result, cv2.COLOR_BGR2BGRA)
result[:, :, 3] = mask
# save resulting masked image
cv2.imwrite('retina_masked.png', result)
Output