Image Processing: Problem with Fish measuring - python

I'm having problems with contouring a live video capture. I'm using the OpenCV library with Python.
My objective is to measure tilapia fish that pass through a tube one by one and get their approximate sizes from the bounding-rectangle dimensions. Here is a sample of one tilapia passing.
Here is one more with backlighting.
The problem is that the water is also detected as a contour and I can't seem to isolate the fish. I have tried a mix of blurring techniques, dilation, and erosion, but they don't solve my problem.
Are there any other algorithms I can use? Or should I switch to object detection?
Here's my code:
#!/usr/bin/env python3
import numpy as np
import cv2
import time

kernel = np.ones((5, 5), np.uint8)
cap = cv2.VideoCapture('white.mp4')
foreground_background = cv2.createBackgroundSubtractorMOG2()

while True:
    ret, frame = cap.read()
    if frame is None:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    #gray = cv2.GaussianBlur(gray, (21, 21), 0)
    #gray = cv2.medianBlur(gray, 11)
    gray = cv2.bilateralFilter(gray, 9, 75, 75)
    ret, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
    thresh = cv2.erode(thresh, kernel, iterations=3)
    thresh = cv2.dilate(thresh, kernel, iterations=3)
    # OpenCV 3.x returns three values here; in OpenCV 4.x drop im2
    im2, cnts, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                            cv2.CHAIN_APPROX_SIMPLE)
    for c in cnts:
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 0), 2)
    cv2.imshow('Output', thresh)
    cv2.waitKey(2)

cap.release()
cv2.destroyAllWindows()
The following was added by Mark Setchell: if anyone wants to see the individual frames, I extracted them at 10 fps and montaged them as follows.
Commands used:
ffmpeg -i a.mov -r 10 frame_%05d.png
montage -tile 6x frame_*png -geometry +10+10 result.png

I would try several things including:
Various ways to remove the background -- I see you used cv2.createBackgroundSubtractorMOG2(), but simpler methods might yield better results. Try taking the average of several dozen frames captured while there are no fish in view, then subtract that average background from all frames. Something like background = (A + B + C) / 3, where A, B, and C are background frames/matrices without fish. You can then remove the background from every frame with a simple subtraction. Additionally, you could subtract the previous frame (or several previous frames) from the current frame to highlight what changes between frames, effectively a time derivative. This could be done in a loop; a minimal sketch of the averaging idea follows below.
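Here is that sketch, assuming the first frames of the clip are fish-free (the filename and the N_BACKGROUND value are placeholders to adjust for your footage):
import cv2
import numpy as np

cap = cv2.VideoCapture('white.mp4')

# Accumulate an average background from N fish-free frames
N_BACKGROUND = 30   # assumption: no fish during the first 30 frames
acc = None
for _ in range(N_BACKGROUND):
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    acc = gray if acc is None else acc + gray
background = (acc / N_BACKGROUND).astype(np.uint8)

# Remove the background from each subsequent frame by subtraction
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, background)   # the fish stands out as high values
    _, fg = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    cv2.imshow('Foreground', fg)
    if cv2.waitKey(2) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()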
Sum along rows and columns -- column and row projections are a very easy way to highlight high-contrast regions in an image. Presumably (if dark values > light values), column-summing an image with the fish in the middle would yield a Gaussian-looking vector whose peak index corresponds to the middle column of the image (see figure below). This also gives you a means to estimate the size of the fish, by noting the width of that Gaussian-looking vector.
Note that the initial high values in the column-sum vector are a result of the lighting conditions and would be remedied by the background removal of step 1.
With the numpy package in Python you can compute column sums with something like:
import numpy as np
col_sums = np.sum(M, axis=0)   # axis=0 gives one sum per column; axis=1 would give row sums
where M is the current image/matrix/frame of interest. (A sketch that turns the projection into a length estimate follows below.)
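As a hedged follow-up, here is that sketch, assuming the background has already been removed so the fish is the only bright region in a binary mask fg (the function name is my own):
import numpy as np

def estimate_fish_length(fg):
    # Estimate the horizontal extent of the fish, in pixels, from the
    # column projection of a background-subtracted binary mask.
    col_sums = np.sum(fg > 0, axis=0)       # occupied pixels per column
    occupied = np.where(col_sums > 0)[0]    # columns touched by the fish
    if occupied.size == 0:
        return 0                            # no fish in this frame
    return occupied[-1] - occupied[0] + 1   # width of the occupied span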

Related

How to detect the location of a grid of letters in an image with openCV and Python?

I'm writing a program that takes an image that has a 4-by-4 grid of letters somewhere in it. E.g. (example image omitted)
I want to read these letters into my program and for that I'm using pytesseract for the OCR.
Before feeding the image to pytesseract I do some preprocessing with openCV to increase the odds of pytesseract working correctly.
This is the code I use for this:
import cv2
img = cv2.imread('my_image.png')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_pre_processed = cv2.threshold(img_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
And these are some sample outputs of img_pre_processed (sample images omitted).
Since the letters in the grid are spaced apart, pytesseract has a difficult time reading them when I give it the entire image as input. So it would be helpful if I knew the coordinates of every letter; then I could edit the image in such a way that pytesseract can always recognise them.
I started trying to solve this problem on my own, and the solution I'm coming up with might work, but it's getting rather complicated. So I'm wondering if there is a better way to do it.
At the moment I'm using the cv2.findContours() function to get all the contours of the objects in the image. For every contour I calculate the center coordinates and the area of the box you would be able to draw around it. I then sort these by area to get the largest contours. But here it starts to get more and more complicated. I can't just take the biggest 16 contours, because there might be unwanted objects in the picture with a bigger area than the 16 letters I want. Also, some letters like O, P, Q, ... have two contours, and their inner contour might even be bigger than another letter's outer contour (the letter I, for example).
E.g. this is an image with the 18 biggest contours marked with a green box (image omitted).
So, to continue with my way of attacking the problem, I would have to write an algorithm that finds the contours that are most likely part of the grid, while ignoring the unwanted contours and also the inner contours of letters that have two contours.
While this is possible, I'm wondering if there is a better way of doing this.
Somebody told me that if you filter the image in such a way that everything becomes blurrier, the letters turn into blobs, and it might then be possible to do pattern detection on a 4x4 grid of blobs. But I don't know how to do that, or whether it's possible.
So if somebody knows a better way to tackle this problem, or knows how to execute the plan of attack I mentioned earlier, that would be most helpful.
Thanks in advance!
You can simply filter the bounding rectangles by width and height. As this is a rule-based approach, it may need more example images to fine-tune the filter rules. (A follow-up sketch for ordering the filtered boxes appears after the code.)
import cv2

# get bounding rectangles of contours
img = cv2.imread('img.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
bbox = [cv2.boundingRect(c) for c in contours]

# filter rectangles by width and height
for x, y, w, h in bbox:
    if (4 < w < 200) and (30 < h < 200):
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imshow("img", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result: (image omitted)
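If the filter reliably yields exactly the 16 letter boxes, a hedged follow-up sketch (the helper name grid_order is my own, not part of the answer above) sorts them into reading order so each cell can be cropped and sent to pytesseract one at a time:
def grid_order(boxes):
    # Sort 16 (x, y, w, h) boxes into 4 rows of 4, each row left to right
    by_row = sorted(boxes, key=lambda b: b[1])    # sort by y first
    ordered = []
    for i in range(0, 16, 4):
        ordered.extend(sorted(by_row[i:i + 4], key=lambda b: b[0]))  # then by x
    return ordered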

How to detect multiple blocks of text from an image of a document?

I have images of Math question papers which have multiple questions per page. Example:
(math questions image omitted)
I want to use Python to extract the contents of each question separately and store them in a database table. From my research, I have a rough idea for my workflow: Pre-process image --> Find contours of each question --> Snip and send those individual images to pyTesseract --> Store the transcribed text.
I was very happy to find a great thread about a similar problem, but when I tried that approach on my image, the ROI that was identified covered the whole page. In other words, it identified all the questions as one block of text.
How do I make OpenCV recognize multiple ROIs within a page and draw bounding boxes? Is there something different to be done during the pre-processing?
Please suggest an approach - thanks so much!
First you need to convert the image to grayscale.
Then perform Otsu's threshold, which does a better job of binarizing away the background.
Specify the structuring-element shape and kernel size; the kernel size increases or decreases the area of the rectangles that get detected.
Apply dilation to the thresholded image with that kernel; dilation thickens the text so nearby characters merge into solid blocks.
Find the contours.
Loop through the identified contours; the rectangular part can then be drawn with the cv2.rectangle method.
import cv2

img = cv2.imread("text.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
ret, thresh1 = cv2.threshold(blur, 0, 255, cv2.THRESH_OTSU + cv2.THRESH_BINARY_INV)
rect_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (18, 18))
dilation = cv2.dilate(thresh1, rect_kernel, iterations=1)
contours, hierarchy = cv2.findContours(dilation, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    # Draw a rectangle around each detected text block
    rect = cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite('drawed.png', img)
Sample output image (omitted).
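Since the question's stated workflow ends with sending each snippet to pytesseract, here is a hedged continuation of the loop above (the pytesseract step is my addition, not part of the original answer):
import pytesseract

clean = cv2.imread("text.jpg")                # re-read so the green boxes aren't in the crops
questions = []
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    roi = clean[y:y + h, x:x + w]             # crop one question block
    text = pytesseract.image_to_string(roi)   # OCR the cropped block
    questions.append(text.strip())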

Python: Find Amount of Handwriting in Video

Do you know of an algorithm that can tell that there is handwriting in an image? I am not interested in knowing what the handwriting says, only whether it is present.
I have a video of someone filling a slide with handwriting. My goal is to determine how much of the slide has been filled with handwriting already.
The video in question can be downloaded here: http://www.filedropper.com/00_6
For this particular video, a great solution was already suggested in "Quantify how much a slide has been filled with handwriting".
The solution is based on summing the amount of the specific color used for the handwriting. However, if the handwriting is not blue but some other color that also appears in non-handwriting elements, this approach will not work.
Therefore, I am interested to know if there is a more general solution for determining whether handwriting is present in an image.
What I have done so far:
I was thinking of extracting the contours of the image and then somehow detecting the handwriting part based on how curvy the contours are (but I have no clue how to do that part). It might not be the best idea, though, as again it's not always correct...
import cv2
import matplotlib.pyplot as plt

img = cv2.imread('PATH_TO_IMAGE')   # replace with the path to your image
print("img shape=", img.shape)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow("image", gray)
cv2.waitKey(1)

#### extract all contours
# Find Canny edges
edged = cv2.Canny(gray, 30, 200)
cv2.waitKey(0)

# Finding Contours
# Use a copy of the image e.g. edged.copy()
# since findContours alters the image
contours, hierarchy = cv2.findContours(edged,
                                       cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cv2.imshow('Canny Edges After Contouring', edged)
cv2.waitKey(0)

print("Number of Contours found = " + str(len(contours)))

# Draw all contours
# -1 signifies drawing all contours
cv2.drawContours(img, contours, -1, (0, 255, 0), 3)
cv2.imshow('Contours', img)
cv2.waitKey(0)
You can identify the space taken up by handwriting by masking out the pixels from the template, and then doing the same for the difference between later frames and the template. You can use dilation, opening, and thresholding for this.
Let's start with your template and identify the parts we will mask:
import cv2
import numpy as np
template = cv2.imread('template.jpg')
Now, let's broaden the occupied pixels to make a zone that we will mask (hide) later:
template = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
kernel = np.ones((5, 5), np.uint8)
dilation = cv2.dilate(255 - template, kernel, iterations=5)
Then, we will threshold to turn this into a black-and-white mask:
_, thresh = cv2.threshold(dilation, 25, 255, cv2.THRESH_BINARY_INV)
In later frames, we will apply this mask to the picture by turning all the masked pixels white. For instance:
import numpy as np
import cv2

vidcap = cv2.VideoCapture('0_0.mp4')
success, image = vidcap.read()
count = 0
frames = []
while count < 500:
    frames.append(image)
    success, image = vidcap.read()
    count += 1

mask = np.where(thresh == 0)
example = frames[300]
example[mask] = [255, 255, 255]
cv2.imshow('', example)
cv2.waitKey(0)
Now, we will create a function that returns the difference between the template and a given picture. We will also use opening to get rid of the leftover single pixels that would make the result noisy.
def difference_with_mask(image):
    grayscale = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    kernel = np.ones((5, 5), np.uint8)
    dilation = cv2.dilate(255 - grayscale, kernel, iterations=5)
    _, thresh = cv2.threshold(dilation, 25, 255, cv2.THRESH_BINARY_INV)
    thresh[mask] = 255
    closing = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
    return closing

cv2.imshow('', difference_with_mask(frames[400]))
cv2.waitKey(0)
To address the fact that you don't want the hand detected as handwriting, I suggest that instead of using the mask on every individual frame, you sample one frame out of every 30 and take the 95th percentile of the last 15 sampled frames. Look at this:
from collections import deque

history = deque(maxlen=15)    # the last 15 sampled frames
results = []
for ix, frame in enumerate(frames):
    if ix % 30 == 0:          # sample one frame out of every 30
        history.append(frame)
    results.append(np.quantile(history, 0.95, axis=0))
    print(ix)
Now the example frame becomes this (the hand is removed because it wasn't present in most of the last 15 sampled frames):
As you can see, a little part of the handwriting is missing. It will come back later, because of the time-dependent percentile transformation we're doing. You'll see: in my example with frame 18,400, the text that is missing in the image above is present. Then, you can use the function I gave you, and this will be the result:
And here we go! Note that this solution, which excludes the hand, will take longer to compute because there are a few more calculations to do. Using just an image with no regard to the hand would compute almost instantly, to the extent that you could probably run it on your webcam feed in real time.
Final Example:
Here's the frame 18,400:
Final image:
You can play with the function if you want the mask to wrap more thinly around the text:
Full code:
import os
import numpy as np
import cv2
from collections import deque

vidcap = cv2.VideoCapture('0_0.mp4')
success, image = vidcap.read()
count = 0
frames = deque(maxlen=700)
while count < 500:
    frames.append(image)
    success, image = vidcap.read()
    count += 1

template = cv2.imread('template.jpg')
template = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
kernel = np.ones((5, 5), np.uint8)
dilation = cv2.dilate(255 - template, kernel, iterations=5)
cv2.imwrite('dilation.jpg', dilation)
cv2.imshow('', dilation)
cv2.waitKey(0)

_, thresh = cv2.threshold(dilation, 25, 255, cv2.THRESH_BINARY_INV)
cv2.imwrite('thresh.jpg', thresh)
cv2.imshow('', thresh)
cv2.waitKey(0)

mask = np.where(thresh == 0)

example = frames[400]
cv2.imwrite('original.jpg', example)
cv2.imshow('', example)
cv2.waitKey(0)

example[mask] = 255
cv2.imwrite('example_masked.jpg', example)
cv2.imshow('', example)
cv2.waitKey(0)

def difference_with_mask(image):
    grayscale = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    kernel = np.ones((5, 5), np.uint8)
    dilation = cv2.dilate(255 - grayscale, kernel, iterations=5)
    _, thresh = cv2.threshold(dilation, 25, 255, cv2.THRESH_BINARY_INV)
    thresh[mask] = 255
    closing = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
    return closing

cv2.imshow('', difference_with_mask(frames[400]))
cv2.waitKey(0)
masked_example = difference_with_mask(frames[400])
cv2.imwrite('masked_example.jpg', masked_example)

history = deque(maxlen=15)
results = []
for ix, frame in enumerate(frames):
    if ix % 30 == 0:
        history.append(frame)
    results.append(np.quantile(history, 0.95, axis=0))
    print(ix)
    if ix > 500:
        break

cv2.imshow('', frames[400])
cv2.waitKey(0)
cv2.imshow('', results[400].astype(np.uint8))
cv2.imwrite('percentiled_frame.jpg', results[400].astype(np.uint8))
cv2.waitKey(0)
cv2.imshow('', difference_with_mask(results[400].astype(np.uint8)))
cv2.imwrite('final.jpg', difference_with_mask(results[400].astype(np.uint8)))
cv2.waitKey(0)
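To tie this back to the original goal (how much of the slide has been filled), a hedged final step: in difference_with_mask the handwriting ends up as black (0) pixels on a white background, so the fill fraction can be estimated by counting them. The function name is my own, and note that this counts the dilated ink area, so it overestimates raw ink coverage:
def fill_percentage(diff_image):
    # diff_image is the output of difference_with_mask(); handwriting is black (0)
    handwriting_pixels = np.count_nonzero(diff_image == 0)
    return 100.0 * handwriting_pixels / diff_image.size

final = difference_with_mask(results[400].astype(np.uint8))
print('Slide filled: %.1f%%' % fill_percentage(final))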
You could try to build a template before detection, which you can then subtract from the current frame of the video. One way to build such a template is to iterate through every pixel of the frame and look up whether it has a higher value (whiter) at that coordinate than the value stored in the list.
Here is an example of such a template from your video, built by iterating through the first two seconds:
Once you have that, it is simple to detect the text. You can use the cv2.absdiff() function to take the difference of the template and the frame. Here is an example:
Once you have this image, it is trivial to search for writing (threshold + contour search or something similar).
Here is an example code:
import numpy as np
import cv2

cap = cv2.VideoCapture('0_0.mp4')  # read video
bgr = cap.read()[1]  # get first frame
frame = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)  # transform to grayscale
template = frame.copy()  # make a copy of the grayscale
h, w = frame.shape[:2]  # height, width
matrix = []  # a list for [y, x] coordinates

# fill matrix with all coordinates of the image (height x width)
for j in range(h):
    for i in range(w):
        matrix.append([j, i])

fps = cap.get(cv2.CAP_PROP_FPS)  # frames per second of the video
seconds = 2  # how many seconds of the video to search for the template
k = seconds * fps  # calculate how many frames of the video are in that many seconds
i = 0  # some iterator to count the frames
lowest = []  # list that will store the highest value seen for each pixel - that will build our template

# store the values of the first frame - just so you can compare them in the next step
for j in matrix:
    y = j[0]
    x = j[1]
    lowest.append(template[y, x])

# loop through the number of frames calculated before
while i < k:
    bgr = cap.read()[1]  # bgr image
    frame = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)  # transform to grayscale
    # iterate through every pixel (pixels are located in the matrix)
    for l, j in enumerate(matrix):
        y = j[0]  # y coordinate
        x = j[1]  # x coordinate
        temp = template[y, x]  # value of pixel in template
        cur = frame[y, x]  # value of pixel in the current frame
        if cur > temp:  # if the current frame has a higher value, update the "lowest" list
            lowest[l] = cur
    i += 1  # increment the iterator
    # just for visualization
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

i = 0  # new iterator to increment position in the "lowest" list
template = np.ones((h, w), dtype=np.uint8) * 255  # new empty white image
# iterate through the matrix and set each pixel of the new white image
# to the corresponding value in the "lowest" list
for j in matrix:
    template[j[0], j[1]] = lowest[i]
    i += 1

# just for visualization - template
cv2.imwrite("template.png", template)
cv2.imshow("template", template)
cv2.waitKey(0)
cv2.destroyAllWindows()

counter = 0  # counter of contours: logically, if the number of contours
# rapidly decreases, that means a new template is in order
mean_compare = 0  # this is needed for a simple color check to see if a contour
# is the same color as the others

# this is the difference between the frame of the video and the created template
while cap.isOpened():
    bgr = cap.read()[1]  # bgr image
    frame = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)  # grayscale
    img = cv2.absdiff(template, frame)  # resulting difference
    thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]  # thresholded image
    kernel = np.ones((5, 5), dtype=np.uint8)  # simple kernel
    thresh = cv2.dilate(thresh, kernel, iterations=1)  # dilate thresholded image
    cnts, h = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # contour search
    if len(cnts) < counter * 0.5 and counter > 50:  # check if a new template is in order
        # search for a new template again
        break
    else:
        counter = len(cnts)  # update counter
        for cnt in cnts:  # iterate through contours
            size = cv2.contourArea(cnt)  # size of contour - to filter out noise
            if 20 < size < 30000:  # noise criterion
                mask = np.zeros(frame.shape, np.uint8)  # empty mask - needed for color comparison
                cv2.drawContours(mask, [cnt], -1, 255, -1)  # draw contour on mask
                mean = cv2.mean(bgr, mask=mask)  # the mean color of the contour
                if not mean_compare:  # the first contour sets the reference color
                    mean_compare = mean
                else:
                    k1 = 0.85  # coefficient for how much smaller each channel's value may be
                    k2 = 1.15  # coefficient for how much bigger each channel's value may be
                    # condition
                    b = bool(mean_compare[0] * k1 < mean[0] < mean_compare[0] * k2)
                    g = bool(mean_compare[1] * k1 < mean[1] < mean_compare[1] * k2)
                    r = bool(mean_compare[2] * k1 < mean[2] < mean_compare[2] * k2)
                    if b and g and r:
                        cv2.drawContours(bgr, [cnt], -1, (0, 255, 0), 2)  # draw on rgb image
    # just for visualization
    cv2.imshow('img', bgr)
    if cv2.waitKey(1) & 0xFF == ord('s'):
        cv2.imwrite(str(j) + ".png", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# release the video object and destroy window
cap.release()
cv2.destroyAllWindows()
One possible result with a simple size and color filter:
NOTE: This template-search algorithm is very slow because of the nested loops, and it can probably be optimized to make it faster -- you need a little more math knowledge than I have. Also, you will need to check whether the template changes within the same video -- I'm guessing that shouldn't be too difficult. A vectorized sketch of one such optimization follows below.
A simpler idea for making it a bit faster is to resize the frames to, let's say, 20% and do the same template search. After that, resize the template back to the original size and dilate it. It will not be as nice a result, but it will make a mask of where the text and lines of the template are. Then simply draw it over the frame.
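As a hedged aside, the per-pixel nested loops can be replaced by a single NumPy element-wise maximum, which implements the same "keep the brightest value ever seen at each pixel" rule without the coordinate list (this rewrite is my own, not part of the original answer):
import cv2
import numpy as np

cap = cv2.VideoCapture('0_0.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
n_frames = int(2 * fps)        # first two seconds, as in the answer above

template = None
for _ in range(n_frames):
    ok, bgr = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # element-wise maximum keeps the brightest value seen per pixel
    template = gray if template is None else np.maximum(template, gray)

cv2.imwrite('template.png', template)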
I don't think you really need code in this case, and it would be rather long if you did. But here's an algorithm for doing it.
Use OpenCV's EAST (Efficient and Accurate Scene Text detector) model at the beginning to establish the starting text on the slide. That gives you the bounding box(es) of the initial percentage of the slide covered with slide text, as opposed to handwritten explanatory text. (A minimal loading sketch follows at the end of this answer.)
Every, say, 1-5 seconds (people don't write all that fast), compare that baseline image with the current image and the previous image.
If the current image has more text than the previous image but the initial bounding boxes are NOT the same, you have a new and rather busy slide.
If the current image has more text than the previous image and the initial bounding boxes ARE the same, more text is being added.
If the current image has less text than the previous image but the initial bounding boxes are NOT the same, you again have a new slide -- only not busy, and with space like the last one to write in.
If the current image has less text than the previous image and the initial bounding boxes ARE the same, you either have a duplicate slide with what will presumably be more text, or the teacher is erasing a section to continue or modify their explanation. Either way, you'll need some way of addressing this.
When you have a new slide, take the previous image and compare the bounding boxes of all the text, subtracting the boxes for the initial state.
Computationally, this isn't going to be cheap (you certainly won't be doing it live, at least not for a number of years), but it's robust, and sampling the text every so many seconds helps.
Personally, I would approach this as an ensemble: get the initial bounding boxes, then look at the color of the text. If you can get away with using the percentage of differently colored text, do. When you can't, you'll still be fine.
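A hedged sketch of just the EAST invocation (the frozen_east_text_detection.pb model file must be downloaded separately; the two output layer names are the standard ones published for this model, and full decoding of the geometry map into boxes is omitted):
import cv2
import numpy as np

net = cv2.dnn.readNet('frozen_east_text_detection.pb')   # assumed downloaded

frame = cv2.imread('slide.png')                          # hypothetical input frame
blob = cv2.dnn.blobFromImage(frame, 1.0, (320, 320),
                             (123.68, 116.78, 103.94), swapRB=True, crop=False)
net.setInput(blob)
scores, geometry = net.forward(['feature_fusion/Conv_7/Sigmoid',
                                'feature_fusion/concat_3'])

# scores is a coarse confidence map; the fraction of confident cells is a
# crude proxy for how much of the frame is covered by text
coverage = np.mean(scores[0, 0] > 0.5)
print('Rough text coverage: %.2f' % coverage)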
In addition to the great answers people have provided, I have two other suggestions.
The first is CNN-based methods. It's entirely workable to use an object-detection routine, or even a segmentation method (like U-Net), to differentiate between the kinds of text. It's easy because you can find millions of images from digital textbooks, and of handwritten documents, to train your model.
The second approach is to locate and extract every single symbol in the image separately (with a simple method like the one you've used so far, or with connected components). Since typographic letters and symbols have a uniform shape and style (consistent fonts, unlike handwritten letters), you can match all the found letters against sample typographic letters gathered separately to distinguish the handwritten from the typographic. Feature-point-based matching (like SURF) could be a good tool for this approach.
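A hedged fragment of that matching idea, using ORB (a patent-free alternative to SURF) to compare one extracted symbol against a typographic sample; the image names are placeholders:
import cv2

symbol = cv2.imread('extracted_symbol.png', cv2.IMREAD_GRAYSCALE)
sample = cv2.imread('typographic_A.png', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(symbol, None)
kp2, des2 = orb.detectAndCompute(sample, None)

# Hamming-distance matcher suits ORB's binary descriptors
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2) if des1 is not None and des2 is not None else []

# Many low-distance matches suggest the symbol is typographic
good = [m for m in matches if m.distance < 40]
print('Good matches:', len(good))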

How to remove image noise using opencv - python?

I am working with skin images, recognizing skin blemishes, and due to the presence of noise, mainly hairs, this work becomes more complicated.
I have an example image on which I try to highlight only the skin spot, but due to the large number of hairs the algorithm is not effective. I would like help developing an algorithm to remove, or reduce, the amount of hair, so that I can highlight only my region of interest (ROI), which is the spots.
Algorithm used to highlight skin blemishes:
import numpy as np
import cv2

# Read the image and perform threshold
img = cv2.imread('IMD006.bmp')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.medianBlur(gray, 5)
_, thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Search for contours and select the biggest one
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)

# Create a new mask for the result image
h, w = img.shape[:2]
mask = np.zeros((h, w), np.uint8)

# Draw the contour on the new mask and perform the bitwise operation
cv2.drawContours(mask, [cnt], -1, 255, -1)
res = cv2.bitwise_and(img, img, mask=mask)

# Display the result
cv2.imwrite('IMD006.png', res)
#cv2.imshow('img', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
Example image used:
How to deal with these noises to the point of improving my region of interest?
This is quite a difficult task because the hair crosses your ROI (the mole). I don't know how to remove it from the mole itself, but I can help with removing the background, as in the picture without hairs. For removing the hairs from the mole, I advise you to search for "removing watermarks from images" and "deep neural networks", to maybe train a model that removes the hairs (note that this task will be quite difficult).
That being said, for removing the background you could try the same code that you already have for detection without hairs. You will get a binary image like this:
Now your region is filled with white lines (hairs) that cross the contour of your ROI, and cv2.findContours() would pick them up too because they are connected. But if you look at the picture you will see that the white lines are quite thin, and you can remove them from the image by performing opening (cv2.morphologyEx) on the image. Opening is erosion followed by dilation, so when you erode the image with a big enough kernel size the white lines will disappear:
Now you have a white spot with some noise around it, which you can connect by performing another dilation (cv2.dilate()):
To make the ROI a bit smoother, you can blur the image with cv2.blur():
After that you can apply another threshold and search for the biggest contour. The final result:
Hope it helps a bit. Cheers!
Example code:
import numpy as np
import cv2

# Read the image and perform an OTSU threshold
img = cv2.imread('hair.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Remove hair with opening
kernel = np.ones((5, 5), np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2)

# Combine surrounding noise with ROI
kernel = np.ones((6, 6), np.uint8)
dilate = cv2.dilate(opening, kernel, iterations=3)

# Blur the image for a smoother ROI
blur = cv2.blur(dilate, (15, 15))

# Perform another OTSU threshold and search for the biggest contour
ret, thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Note: the three-value return is the OpenCV 3.x API; in 4.x drop the first value
_, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)

# Create a new mask for the result image
h, w = img.shape[:2]
mask = np.zeros((h, w), np.uint8)

# Draw the contour on the new mask and perform the bitwise operation
cv2.drawContours(mask, [cnt], -1, 255, -1)
res = cv2.bitwise_and(img, img, mask=mask)

# Display the result
cv2.imshow('img', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
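As a hedged complement to the deep-learning suggestion above, classical inpainting can also reduce the hairs inside the ROI itself: highlight thin dark structures with a blackhat morphological filter, then fill them in from their surroundings with cv2.inpaint. A minimal sketch (the kernel size and thresholds are assumptions to tune per image):
import numpy as np
import cv2

img = cv2.imread('hair.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Blackhat highlights thin dark structures (hairs) on a brighter background
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)

# Threshold the hair mask and fill those pixels from their surroundings
_, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
clean = cv2.inpaint(img, hair_mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)

cv2.imwrite('hair_removed.png', clean)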

Image Card detection using python

I want to detect how many cards are present in this image using Python. I was trying with white pixels but am not getting the correct result.
My code is given below:
import cv2
import numpy as np
img = cv2.imread('imaagi.jpg', cv2.IMREAD_GRAYSCALE)
n_white_pix = np.sum(img == 255)
print('Number of white pixels:', n_white_pix)
I am a beginner, so I am unable to find a way forward.
This solution is with respect to the image you have provided and the implementation is in OpenCV.
Code:
import cv2
import numpy as np

im = cv2.imread('C:/Users/Jackson/Desktop/cards.jpg', 1)

#--- convert the image to HSV color space ---
hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)
cv2.imshow('H', hsv[:,:,0])
cv2.imshow('S', hsv[:,:,1])

#--- find Otsu threshold on the hue and saturation channels ---
ret, thresh_H = cv2.threshold(hsv[:,:,0], 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
ret, thresh_S = cv2.threshold(hsv[:,:,1], 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

#--- add the results of the above two ---
cv2.imshow('thresh', thresh_H + thresh_S)

#--- some morphology operations to clear unwanted spots ---
kernel = np.ones((5, 5), np.uint8)
dilation = cv2.dilate(thresh_H + thresh_S, kernel, iterations=1)
cv2.imshow('dilation', dilation)

#--- find contours on the result above (three-value return is the OpenCV 3.x API) ---
(_, contours, hierarchy) = cv2.findContours(dilation, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

#--- since a few small contours were found, retain only those above a certain area ---
im2 = im.copy()
count = 0
for c in contours:
    if cv2.contourArea(c) > 500:
        count += 1
        cv2.drawContours(im2, [c], -1, (0, 255, 0), 2)

cv2.imshow('cards_output', im2)
print('There are {} cards'.format(count))
Result:
On the terminal I got: There are 6 cards
Depending on how exactly your "white pixel approach" was working (please share more details on that if possible), you could try simple image binarization, which is a well-established way of separating different objects/entities in your image (a minimal sketch follows below). Granted, it will work only on grayscale images, but that is something you can easily fix with sklearn.
It might provide optimal results right away, especially if the lighting conditions vary across images, or if you have (as seen above) cards that contain a wide variety of colors.
To circumvent this, you could also look into different color spaces, e.g. HSV.
If that still does not work, I would recommend using image-segmentation routines from OpenCV or similar libraries. The problem is that they usually bring some unwanted complexity to your project, which might not be necessary if a simple approach such as binarization works.
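A minimal sketch of that binarization suggestion (the filename and area threshold are assumptions):
import cv2

img = cv2.imread('cards.jpg', cv2.IMREAD_GRAYSCALE)

# Otsu picks the binarization threshold automatically
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Count card-sized connected regions
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cards = [c for c in contours if cv2.contourArea(c) > 500]
print('Number of cards:', len(cards))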
