Testing image of voltmeter
So I'm fairly new to the area and have been dabbling with images. I've been having some unstable results, for lack of a better word. I'm essentially trying to detect the voltmeter in variations of the above image and extract the white region, or as close to the white region as possible.
The problem is, it works for some variations of images but not for others. So, my questions are:
Is there a somewhat quick and efficient method of detecting and extracting the required region? (Quick being somewhat secondary)
What is the thought process behind a task like this for some of you with actual experience and expertise in the field?
I've tried:
Checking various color spaces and specific planes in those color spaces for ease of information extraction
Histogram Equalization, threshold, blurring, edge detection
Contouring and a couple of odd things in between.
I'm a bit stuck on achieving a "robust" or somewhat "generalized" way of detecting said region across variations of the above image.
Thanks for any feedback.
Here's the code:
import cv2
from matplotlib import pyplot as plt

im = cv2.imread("volt.jpg")
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)

# pixels above 150 are set to 200; findContours only cares about non-zero
ret, thresh = cv2.threshold(imgray, 150, 200, 0)
#imedge = cv2.Canny(imgray, 30, 200)

contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
plt.imshow(thresh)

# remember the index of the contour whose area falls in the expected range
voltcnt = 0
x = 0
for i in range(0, len(contours)):
    if 50000 < cv2.contourArea(contours[i]) < 100000:
        voltcnt = cv2.contourArea(contours[i])
        x = i

# draw that single contour (pass the list plus an index, not the raw point array)
cv2.drawContours(im, contours, x, (255, 0, 0), 3)
plt.imshow(im)
Here's the output:
You can also try cv2.SimpleBlobDetector().
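A minimal sketch of that suggestion; the parameters are untested guesses mirroring the area window used in the question's loop, and the white region is targeted via blobColor:

import cv2

# reload the image; "volt.jpg" as in the question's code
im = cv2.imread("volt.jpg")
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)

params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True          # look for bright (white) blobs, not the default dark ones
params.blobColor = 255
params.filterByArea = True           # roughly the area window the contour loop above uses
params.minArea = 50000
params.maxArea = 100000
params.filterByCircularity = False   # disable shape filters so an odd-shaped display still matches
params.filterByConvexity = False
params.filterByInertia = False

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(imgray)

im_out = cv2.drawKeypoints(im, keypoints, None, (255, 0, 0),
                           cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("volt_blobs.jpg", im_out)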
Related
I'm writing a program that takes an image containing a 4-by-4 grid of letters somewhere in it.
E.g. [example image]
I want to read these letters into my program and for that I'm using pytesseract for the OCR.
Before feeding the image to pytesseract I do some preprocessing with openCV to increase the odds of pytesseract working correctly.
This is the code I use for this:
import cv2
img = cv2.imread('my_image.png')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_pre_processed = cv2.threshold(img_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
And these are some sample outputs of img_pre_processed:
[sample output images]
Since the letters in the grid are spaced apart, pytesseract has a difficult time reading them when I give it the entire image as input. So it would be helpful if I knew the coordinates of every letter; then I could edit the image in such a way that pytesseract can always recognise them.
I started trying to solve this problem on my own, and the solution I'm coming up with might work, but it's getting rather complicated. So I'm wondering if there is a better way to do it.
At the moment I'm using the cv2.findContours() function to get all the contours of the objects in the image. For every contour I calculate the center coordinates and the area of the box you would be able to draw around it. I then sort these by area to get the largest contours. Now here it starts to get more and more complicated. I can't just take the 16 biggest contours, because there might be unwanted objects in the picture with a bigger area than the 16 letters I want. Also, some letters like O, P, Q, ... have two contours, and their inner contour might even be bigger than another letter's outer contour, like that of the letter I, for example.
E.g. this is an image with the 18 biggest contours marked with a green box: [example image]
So, to continue with my way of attacking the problem, I would have to write an algorithm that finds the contours that are most likely part of the grid, while ignoring the unwanted contours as well as the inner contours of letters that have two contours.
While this is possible, I'm wondering if there is a better way of doing this.
Somebody told me that if you filter the image in such a way that everything gets blurrier, so that all the letters become blobs, it might be possible to do pattern detection on a 4x4 grid of blobs. But I don't know how to do that, or whether it is possible.
So if somebody knows a better way to tackle this problem, or knows how to execute the plan of attack I mentioned earlier, that would be most helpful.
Thanks in advance!
You can simply filter the bounding rectangles by width and height. As this is a rule-based approach, it may need more example images to fine-tune the filter rules.
import cv2
# get bounding rectangles of contours
img = cv2.imread('img.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
bbox = [cv2.boundingRect(c) for c in contours]
# filter rectangles by width and height
for x, y, w, h in bbox:
    if (4 < w < 200) and (30 < h < 200):
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imshow("img", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
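If the width/height filter leaves exactly the 16 letter boxes, a small follow-up sketch (my own addition, reusing bbox and thresh from the code above) could sort them into grid order so each letter can be cropped for pytesseract:

# assumes exactly 16 boxes survive the filter and the rows are clearly separated vertically
boxes = [b for b in bbox if (4 < b[2] < 200) and (30 < b[3] < 200)]
boxes.sort(key=lambda b: b[1])                             # group into rows by top edge
rows = [sorted(boxes[i:i + 4]) for i in range(0, 16, 4)]   # tuples sort by x first: left to right
for r, row in enumerate(rows):
    for c, (x, y, w, h) in enumerate(row):
        letter = thresh[y:y + h, x:x + w]                  # crop one binarized letter for pytesseract
        print('grid[%d][%d] at (%d, %d)' % (r, c, x, y))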
I want to detect and count the objects in an image that touch each other, while ignoring anything that could be considered a single object. I have the basic image, on which I tried applying the cv2.HoughCircles() method to identify some circles. I then parsed the returned array and tried using cv2.circle() to draw them on the image.
However, I seem to always get too many circles returned by cv2.HoughCircles() and couldn't figure out how to only count the objects that are touching.
This is the image I was working on.
My code so far:
import numpy
import matplotlib.pyplot as pyp
import cv2
# HoughCircles expects a single-channel 8-bit image
segmented = cv2.imread('photo', cv2.IMREAD_GRAYSCALE)
houghCircles = cv2.HoughCircles(segmented, cv2.HOUGH_GRADIENT, 1, 80, param1=450, param2=10, minRadius=30, maxRadius=200)
houghArray = numpy.uint16(houghCircles)[0, :]
for circle in houghArray:
    cv2.circle(segmented, (circle[0], circle[1]), circle[2], (0, 250, 0), 3)
And this is the image I get, which is quite a far cry from what I really want.
How can I properly identify and count said objects?
Here is one way in Python/OpenCV: compute each contour's area and the area of its convex hull, then take the ratio (area / convex_hull_area). If the ratio is small enough, the contour is a cluster of blobs; otherwise it is an isolated blob.
Input:
import cv2
import numpy as np
# read input image
img = cv2.imread('blobs_connected.jpg')
# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# threshold to binary
thresh = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)[1]
# find contours
#label_img = img.copy()
contour_img = img.copy()
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
index = 1
isolated_count = 0
cluster_count = 0
for cntr in contours:
    area = cv2.contourArea(cntr)
    convex_hull = cv2.convexHull(cntr)
    convex_hull_area = cv2.contourArea(convex_hull)
    ratio = area / convex_hull_area
    #print(index, area, convex_hull_area, ratio)
    #x,y,w,h = cv2.boundingRect(cntr)
    #cv2.putText(label_img, str(index), (x,y), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1, (0,0,255), 2)
    if ratio < 0.91:
        # cluster contours in red
        cv2.drawContours(contour_img, [cntr], 0, (0,0,255), 2)
        cluster_count = cluster_count + 1
    else:
        # isolated contours in green
        cv2.drawContours(contour_img, [cntr], 0, (0,255,0), 2)
        isolated_count = isolated_count + 1
    index = index + 1
print('number_clusters:',cluster_count)
print('number_isolated:',isolated_count)
# save result
cv2.imwrite("blobs_connected_result.jpg", contour_img)
# show images
cv2.imshow("thresh", thresh)
#cv2.imshow("label_img", label_img)
cv2.imshow("contour_img", contour_img)
cv2.waitKey(0)
Clusters in Red, Isolated blobs in Green:
Textual Information:
number_clusters: 4
number_isolated: 81
Approach it in steps.
Label connected components. Two touching blobs get the same label because they're connected. So far so good.
Now separate your blobs. Use watershed (first comment) or whatever other method gives you results. I can't fully predict the watershed approach: it might deal with touching blobs of dissimilar size, or it might do something silly. The sample/tutorial also assumes a minimum peak size (0.7 * max peak); plug in something absolute in pixels instead, maybe.
Then, for each separated blob, check which label it sits on (take the coordinates of its centroid, to be safe) and note down a +1 for that label (a histogram).
Any label that has more than one separated blob sitting on it is what you're looking for.
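A rough sketch of those steps, assuming a clean binary input and using distance-transform peaks as the separation step instead of a full watershed (the filename and the 10-pixel peak threshold are assumptions to tune):

import cv2
import numpy as np

# hypothetical input: white blobs on a black background
binary = cv2.imread('blobs.png', cv2.IMREAD_GRAYSCALE)
binary = cv2.threshold(binary, 128, 255, cv2.THRESH_BINARY)[1]

# step 1: label connected components; touching blobs share a label
n_labels, labels = cv2.connectedComponents(binary)

# step 2: separate blobs via distance-transform peaks
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
# absolute peak threshold in pixels, instead of 0.7 * max peak
peaks = cv2.threshold(dist, 10, 255, cv2.THRESH_BINARY)[1].astype(np.uint8)
n_peaks, peak_labels, stats, centroids = cv2.connectedComponentsWithStats(peaks)

# step 3: histogram of separated blobs per original label
hist = np.zeros(n_labels, dtype=int)
for cx, cy in centroids[1:]:              # skip the background centroid
    hist[labels[int(cy), int(cx)]] += 1

# step 4: labels carrying more than one peak are touching blobs
touching = [lbl for lbl in range(1, n_labels) if hist[lbl] > 1]
print('labels with touching blobs:', touching)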
I want to detect how many cards are present in this image using Python. I was trying to count white pixels, but I'm not getting the correct result.
My code is given below:
import cv2
import numpy as np
img = cv2.imread('imaagi.jpg', cv2.IMREAD_GRAYSCALE)
n_white_pix = np.sum(img == 255)
print('Number of white pixels:', n_white_pix)
I am a beginner, so I am unable to figure out how to proceed.
This solution is with respect to the image you have provided and the implementation is in OpenCV.
Code:
import cv2
import numpy as np

im = cv2.imread('C:/Users/Jackson/Desktop/cards.jpg', 1)
#--- convert the image to HSV color space ---
hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)
cv2.imshow('H', hsv[:,:,0])
cv2.imshow('S', hsv[:,:,1])
#--- find Otsu threshold on hue and saturation channel ---
ret, thresh_H = cv2.threshold(hsv[:,:,0], 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
ret, thresh_S = cv2.threshold(hsv[:,:,1], 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
#--- add the result of the above two ---
cv2.imshow('thresh', thresh_H + thresh_S)
#--- some morphology operation to clear unwanted spots ---
kernel = np.ones((5, 5), np.uint8)
dilation = cv2.dilate(thresh_H + thresh_S, kernel, iterations = 1)
cv2.imshow('dilation', dilation)
#--- find contours on the result above (OpenCV 3 API; OpenCV 4 returns only contours and hierarchy) ---
(_, contours, hierarchy) = cv2.findContours(dilation, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
#--- since there were few small contours found, retain those above a certain area ---
im2 = im.copy()
count = 0
for c in contours:
    if cv2.contourArea(c) > 500:
        count += 1
        cv2.drawContours(im2, [c], -1, (0, 255, 0), 2)
cv2.imshow('cards_output', im2)
print('There are {} cards'.format(count))
Result:
On the terminal I got: There are 6 cards
Depending on how exactly your "white pixel approach" was working (please share more details on that if possible), you could try a simple image binarization, which is a well-established way of separating different objects/entities in your image. Granted, it will work only on grayscale images, but that is something you can also easily fix with sklearn.
It might not provide optimal results right away, especially if the lighting conditions vary across images, or if you have (as seen above) cards that contain a wide variety of colors.
To circumvent this, you could also try looking into different color spaces, e.g. HSV.
If that still does not work, I would recommend using image segmentation libraries from OpenCV or similar libraries. The problem is that they usually also bring some unwanted complexity to your project, which might not be necessary if a simple approach such as binarization works.
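For illustration, a minimal binarization sketch, assuming a grayscale load and Otsu's automatic threshold ('cards.jpg' is a placeholder filename):

import cv2

img = cv2.imread('cards.jpg', cv2.IMREAD_GRAYSCALE)
# Otsu's method picks the threshold from the histogram, so no hand tuning
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imshow('binary', binary)
cv2.waitKey(0)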
I am trying to do character detection: draw a box around each character, then crop it and feed it to a neural network for recognition. Everything is working, but until now I was using sets of characters on a single-color background image, where segmentation was easily done.
However with real photos I have different lighting conditions and really struggle to find the contours.
After applying some adaptive thresholding I managed to get the following results, but starting from there I really can't figure out how to properly proceed and detect each character. I can detect half of the characters easily, but not all of them, probably because they are surrounded by lots of small irrelevant contours.
I have a feeling there is just one step left, but I can't figure out which one.
findContours is capable of finding only about half of the characters.
For now, in short, I'm doing:
im_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
im_gray = cv2.GaussianBlur(im_gray, (5, 5), 0)
_, th1 = cv2.threshold(im_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cim, ctrs, hier = cv2.findContours(th1.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
and
th2 = cv2.adaptiveThreshold(im_gray,255,cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY,11,2)
Images below - original image and some variations of intermediate results.
Original picture:
After some thresholding:
After some thresholding:
Inverse thresholding:
So the question is - what is the step/steps after to segment the characters?
You can perform a difference of Gaussians. The idea is to blur the image with two different kernels and subtract the respective results:
Code:
import cv2

im = cv2.imread(img, 0)

#--- it is better to take bigger kernel sizes to remove smaller edges ---
kernel1 = 15
kernel2 = 31
blur1 = cv2.GaussianBlur(im, (kernel1, kernel1), 0)
blur2 = cv2.GaussianBlur(im, (kernel2, kernel2), 0)

#--- note: plain numpy subtraction wraps around on uint8 (cv2.subtract would clamp at 0 instead) ---
cv2.imshow('Difference of Gaussians', blur2 - blur1)
Result:
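A possible continuation, not part of the answer above: threshold the DoG output and keep character-sized contours. The 50-pixel area cutoff is a guess to tune; this reuses blur1/blur2 from the code above and the two-value findContours return of OpenCV 4.

dog = cv2.subtract(blur2, blur1)   # clamps at 0 instead of wrapping like plain numpy subtraction
_, dog_bin = cv2.threshold(dog, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(dog_bin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# keep contours large enough to plausibly be characters
boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]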
I'm currently working on an image where I have to find the outer regions of the boxes, but I failed to find the regions of the white and black boxes.
input image:
https://i.imgur.com/gec9eP5.png
output image:
https://i.imgur.com/Giz1DAW.png
Update:
If I use HLS instead of HSV I can find 3 more box regions, but 2 are still missing.
Here is the new output:
https://i.imgur.com/eUqltKI.png
And here is my code:
import cv2
import numpy as np
img = cv2.imread("1.png")
imghsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower_blue = np.array([0,50,0])
upper_blue = np.array([255,255,255])
mask_blue = cv2.inRange(imghsv, lower_blue, upper_blue)
_, contours, _ = cv2.findContours(mask_blue, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
im = np.copy(img)
cv2.drawContours(im, contours, -1, (0, 255, 0), 2)
cv2.imwrite("contours_blue.png", im)
The mask you're generating with
mask_blue = cv2.inRange(imghsv, lower_blue, upper_blue)
does not include the bottom row at all, so it's impossible to detect these outlines with
_, contours, _ = cv2.findContours(mask_blue, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
You could try to work with multiple masks / thresholds to account for the different color ranges and merge the detected contours.
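A rough sketch of that idea; the near-white and near-black HSV ranges are illustrative guesses, and this uses the two-value findContours return of OpenCV 4:

import cv2
import numpy as np

img = cv2.imread("1.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# one range per color family; tune these bounds on your own images
ranges = [
    (np.array([0, 50, 0]),  np.array([255, 255, 255])),   # saturated colors (as in the question)
    (np.array([0, 0, 200]), np.array([255, 40, 255])),    # near-white boxes
    (np.array([0, 0, 0]),   np.array([255, 255, 60])),    # near-black boxes
]

all_contours = []
for lower, upper in ranges:
    mask = cv2.inRange(hsv, lower, upper)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    all_contours.extend(contours)

cv2.drawContours(img, all_contours, -1, (0, 255, 0), 2)
cv2.imwrite("contours_merged.png", img)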
Color channel thresholding is not the optimal solution when you are dealing with objects of many different colors (not known in advance) and with a background color that is not necessarily distinctly different from all object colors. A combination of multiple thresholds/conditions could do the job for this particular image, but that same combination can fail for slightly different input, so I think this approach is generally not very good.
I think the problem is very elementary in nature, so I would recommend sticking to a simple approach. For example, if you apply the Sobel operator to your image, you get a result like the one below. The intensity of the result is weak on some borders, so I inverted the image colors to make it more visible.
There are tons of tutorials on the Sobel operator on the web, so I won't go into detail here. In your input image there is no noise, so the intensity outside and within the boxes is zero. I would therefore suggest masking out all zero values. If you do contour detection after that, you will end up with two contours per square: one on the inner side of the border and one on the outer side. If you only want to extract the contours on the outer border, see how contour hierarchy works in the OpenCV documentation. If you want the contour to lie exactly on the border, help yourself with the outer contour and erosion.
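A sketch of that pipeline under the same assumptions (a noise-free input, the OpenCV 4 findContours API, and "1.png" as in the question):

import cv2
import numpy as np

img = cv2.imread("1.png", cv2.IMREAD_GRAYSCALE)

# gradient magnitude via Sobel in x and y
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
magnitude = cv2.magnitude(gx, gy)

# mask out all zero (flat) values; anything non-zero lies on a border
mask = (magnitude > 0).astype(np.uint8) * 255

# RETR_CCOMP yields a two-level hierarchy: outer borders and their holes
contours, hierarchy = cv2.findContours(mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
# a contour with no parent (hierarchy entry [3] == -1) is on the outer side
outer = [c for c, h in zip(contours, hierarchy[0]) if h[3] == -1]

out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cv2.drawContours(out, outer, -1, (0, 255, 0), 2)
cv2.imwrite("boxes_outer.png", out)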