Thresholding a picture in OpenCV - Python

I need help thresholding a picture. I need to identify different types of gummies, but I cannot get past the thresholding part of my project.
I have various pictures but this is one of them:
I have done this using a mean-C adaptive threshold, but I need better results to find the contours afterwards.
This is the original picture:

You may get better results by converting the image from RGB to HSV color space and thresholding by hue (the color tone) and saturation (how much color there is compared to gray). Using saturation you might get most of your gummies, except the transparent ones -- those are quite hard to get.
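A minimal sketch of that saturation idea (assuming the same image file as in the code further below; the bounds are only a guess and will need tuning for your lighting):
import cv2
import numpy as np

img = cv2.imread('Downloads/gummies.jpg')
img = cv2.pyrDown(cv2.pyrDown(img))          # downscale, as in the edge-detection code below

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)   # OpenCV loads images as BGR

# Keep only strongly saturated pixels; transparent gummies will likely be missed
mask = cv2.inRange(hsv, np.array([0, 80, 50]), np.array([179, 255, 255]))

cv2.imshow('saturation mask', mask)
cv2.waitKey(0)
cv2.destroyAllWindows()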
On the other hand, you may try edge detection, since your paper is flat and the gummies really stand out. Here's the edge detection result I've got:
Here's the code:
#!/usr/bin/env python
import cv2

# Load the image and downscale it twice (each pyrDown halves the size)
img = cv2.imread('Downloads/gummies.jpg')
img = cv2.pyrDown(cv2.pyrDown(img))

# Laplacian edge detection on the 8-bit image
laplacian = cv2.Laplacian(img, cv2.CV_8U)

# Stretch the edge response into img for display (values above 255 are clipped)
cv2.normalize(laplacian, img, 0, 600, cv2.NORM_MINMAX)

cv2.imshow('frame', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
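If that edge map looks usable, a possible next step toward the contours you mentioned could be the following sketch (the threshold value is arbitrary, and the unpacking assumes OpenCV 4.x's findContours signature):
# Threshold the normalized edge image and look for contours
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, edges = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(len(contours), 'contours found')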

Related

How to improve python/tesseract Image to Text accuracy?

How can I grab an image from a region and properly use tesseract to translate it to text? This is what I currently have:
img = ImageGrab.grab(bbox =(1341,182, 1778, 213))
tesstr = pytesseract.image_to_string(np.array(img), lang ='eng')
print (tesstr)
The issue is that it translates the text incredibly wrong, because the region it's grabbing from is red text on a blue background. How can I improve its accuracy? Here is an example of what it's trying to turn from image to text:
You should read Improving the quality of the output in the Tesseract documentation and try each of the suggested methods listed there. If you still can't achieve the desired result, look at these other approaches:
Thresholding Operations using inRange
Changing Colorspaces
Image segmentation
To get the desired result, you need to get a binary mask of the image. Neither simple thresholding nor adaptive thresholding will work for this input image.
To get the binary mask:
Up-sample and convert the input image to the HSV color space.
Set lower and upper color boundaries and apply cv2.inRange.
Result:
The OCR output for version 0.37 will be:
Day 20204, 16:03:12: Your ‘Metal Triangle Foundation’
was destroved!
Code:
import cv2
import numpy as np
import pytesseract
# Load the image
img = cv2.imread("b.png")
# Up-sample
img = cv2.resize(img, (0, 0), fx=2, fy=2)
# Convert to HSV color-space
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Get the binary mask
msk = cv2.inRange(hsv, np.array([0, 0, 123]), np.array([179, 255, 255]))
# OCR
txt = pytesseract.image_to_string(msk)
print(txt)
# Display
cv2.imshow("msk", msk)
cv2.waitKey(0)
There is an option in the Tesseract API that lets you increase the DPI at which it examines the image to detect text. The higher the DPI, the higher the precision, until diminishing returns set in; more processing power is required, and the DPI should not exceed the original image's DPI.
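With pytesseract that option can be passed through the config string; a minimal sketch, assuming Tesseract 4.x (where the --dpi option is available) and an arbitrary value of 300:
# Hint Tesseract about the image resolution (value chosen arbitrarily here)
txt = pytesseract.image_to_string(msk, config="--dpi 300")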

Thresholding Resistor Bands with OpenCV

So I am trying to make a neural network that categorizes resistor strength by recognizing the color bands. Before I get to that step, I want to use OpenCV to threshold out all the colors except the resistor bands so that it is easier for the neural network to categorize. However, I do not know what threshold type is best suited for this.
I tried several ranges of HLS, RGB, and HSV, but none of them gets rid of the background of the resistor.
Note: I have already used contours to get rid of the background, so now all that is left is the resistor with the colored lines on it.
HLS in my case got rid of the colors, but kept the resistor background, as shown in the code below
frame_HLS = cv2.cvtColor(masked_data, cv2.COLOR_BGR2HLS)
frame_threshold = cv2.inRange(frame_HLS, (50, 0, 0), (139, 149, 255))
Here is the original image and the HLS output.
So overall, I am just wondering whether anyone knows if other color spaces like LUV work well for this, or whether I will just have to use contours or other methods to separate them.
You're on the right track, and color thresholding is a great approach to segmenting the resistor. The thresholding is already performing correctly; you just need a few simple steps to remove the background.
I tried several ranges of HLS, RGB, and HSV, but none of them gets rid of the background of the resistor.
To remove the background we can make use of the binary mask that cv2.inRange() generated. We simply use cv2.bitwise_and() and convert all black pixels on the mask to white with these two lines:
result = cv2.bitwise_and(original, original, mask=frame_threshold)
result[frame_threshold==0] = (255,255,255)
Here's the masked image of what you currently have (left) and after removing the background (right)
import cv2

# Load the image and keep an unmodified copy
image = cv2.imread('1.png')
original = image.copy()

# Color threshold in the HLS color space
frame_HLS = cv2.cvtColor(image, cv2.COLOR_BGR2HLS)
frame_threshold = cv2.inRange(frame_HLS, (50, 0, 0), (139, 149, 255))

# Apply the mask, then turn everything outside the mask white
result = cv2.bitwise_and(original, original, mask=frame_threshold)
result[frame_threshold==0] = (255,255,255)

cv2.imshow('result', result)
cv2.waitKey()
However, I do not know what threshold type is best suited for this.
Right now you're using color thresholding; you could continue with this method and experiment with other ranges in the HLS, RGB, or HSV color spaces. In all of these cases, you can remove the background by converting all black pixels on the mask to white. If you decide to pivot to another thresholding method, take a look at Otsu's threshold or adaptive thresholding, which calculate the threshold value automatically.
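For comparison, a minimal Otsu sketch on the same input (assuming '1.png' as in the code above; the threshold value is computed automatically from the grayscale histogram):
import cv2

image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Otsu picks the threshold automatically; the 0 passed here is ignored
otsu_value, otsu_mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print('Otsu threshold:', otsu_value)
cv2.imshow('otsu_mask', otsu_mask)
cv2.waitKey()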

Convert image to binary doesn't show white lines of a soccer field

I am inspired by the following blog post; however, I am struggling with step 2/3.
I want to create a binary image from a gray image based on the threshold values and ultimately display all the white lines on the image. My desired output looks as follows:
First, I want to isolate the soccer field by using colour-thresholding and morphology.
def isolate_field(img):
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # find green pitch
    light_green = np.array([40, 40, 40])
    dark_green = np.array([70, 255, 255])
    mask = cv2.inRange(hsv, light_green, dark_green)
    # removing small noises
    kernel = np.ones((5, 5), np.uint8)
    opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # apply mask over original frame
    return cv2.bitwise_and(img, img, mask=opening)
This gives the following output:
I am happy with the results so far, but because of the large shadow I am struggling with the image processing when I grayscale the picture. As a result, the binary thresholding is based on the sunny part in the upper-left corner instead of the white lines around the soccer field.
Following the methodology in the tutorials, I get the following output for simple thresholding:
and adaptive thresholding:
and finally, Otsu's thresholding:
How can I make sure that the white lines become more visible? I was thinking about cropping the frame so I only see the field and then using a mask based on the color white. Unfortunately, that didn't work out.
Help is much appreciated.
You can modify inRange to also exclude saturated colors (meaning the greens). I don't have your original image, so I used your intermediate result:
The result of inRange is the binary image you want. I expect you can achieve better results with the original image. I used this script on the image, which makes it easy to search for good HSV values.
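A minimal sketch of that idea, assuming the intermediate masked image is saved as 'field.png' (a hypothetical filename) and that the bounds need tuning: keep only pixels with low saturation and high value, which is roughly what the white lines are.
import cv2
import numpy as np

img = cv2.imread('field.png')                # hypothetical filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Low saturation + high value = white-ish pixels; these bounds are a starting guess
white_mask = cv2.inRange(hsv, np.array([0, 0, 160]), np.array([179, 60, 255]))

cv2.imshow('white lines', white_mask)
cv2.waitKey(0)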

How to replace color for colored objects in image?

I am trying to detect edges in the frames of a video, but edge detection methods such as Canny do not work very well, possibly due to the similarity between the boxes' color and the floor color, or the brightness. So I want to find a way to make all the red and blue boxes look as white as possible, or maybe find the best way to detect edges as well as possible for every frame, since that is the ultimate goal.
I recommend using color tracking then.
Convert to HSV
Use cv2.cvtColor with cv2.COLOR_BGR2HSV. Why HSV? Even though the brightness changes, you can still detect that color.
Filtering
You can use cv2.inRange.
Noise cancelling
Use cv2.GaussianBlur.
Contouring
Use cv2.findContours.
Find the edge
Use your method.
Repeat these steps for every color of your box; see the sketch after this answer.
Hope this helps.
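A rough sketch of that pipeline for one color (the red range here is only a starting guess and the input filename is hypothetical):
import cv2
import numpy as np

frame = cv2.imread('frame.jpg')                       # hypothetical frame from the video
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)          # 1. convert to HSV
# 2. filter one color -- these red bounds will need tuning
mask = cv2.inRange(hsv, np.array([0, 100, 100]), np.array([10, 255, 255]))
# 3. noise cancelling: blur the mask, then re-binarize it
mask = cv2.GaussianBlur(mask, (5, 5), 0)
mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)[1]
# 4. contouring (OpenCV 4.x return signature)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(frame, contours, -1, (0, 255, 0), 2)

cv2.imshow('boxes', frame)
cv2.waitKey(0)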
Just to complete my comment on your question: one can use the HSV/HLS color spaces and use inRange with the hue channel. For example:
import numpy as np
import cv2
# load image and threshold it
original = cv2.imread("a.jpg")
hsvframe = cv2.cvtColor(original, cv2.COLOR_BGR2HLS)
mask = cv2.inRange(hsvframe, (160,40,40), (180, 255, 255))
mask = mask + cv2.inRange(hsvframe, (0,40,40), (12, 255, 255)) # color red is at the beginning and end of the hue wheel
original[mask==255] = (0,255,0)
cv2.imshow("image", original)
cv2.waitKey(0)
cv2.destroyAllWindows()
Things to remember: hue goes from 0-180 in np.uint8, so if you need hue 300-360 the limits will be 150-180. The other two values range from 0-255, where 255 = 100%.
The result of this small code is:
It is not perfect, but one can refine it using the methods suggested by the other answer. I hope this helps.
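As a quick check of the hue scaling mentioned above, here is a tiny hypothetical helper that just divides degrees by two:
def hue_deg_to_cv(deg):
    # OpenCV stores hue as degrees / 2 so that it fits in np.uint8
    return int(deg // 2)

print(hue_deg_to_cv(300), hue_deg_to_cv(360))  # 150 180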

How to draw outlines of objects in an image onto a separate image

I am working on a puzzle; my final task here is to identify the edge type of a puzzle piece.
As shown in the above image, I have managed to rotate and crop out every edge of the piece at the same angle. My next step is to separate the edge line into a separate image, as shown in the image below,
then to fill up one side of the line with a color and try to process it to decide what type of edge it is.
I don't see a proper way to separate the edge line from the image for now.
My approach:
One way to do this is to scan pixel by pixel and find the black pixels that have a non-black pixel next to them. This is code that I can implement, but it feels like a primitive and time-consuming approach.
So I would appreciate any help or ideas, or any completely different way to detect the hollows and humps.
Thanks in advance.
First convert your color image to grayscale. Then apply a threshold (say, zero) to obtain a binary image. You may have to use morphological operations to further process the binary image if there are holes. Then find the contours of this image and draw them onto a new image.
Simple code is given below, using OpenCV 4.0.1 in Python 2.7.
import cv2
import numpy as np

bgr = cv2.imread('puzzle.png')
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
# threshold at zero: every non-black pixel becomes white
_, roi = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
cv2.imwrite('/home/dhanushka/stack/roi.png', roi)

cont = cv2.findContours(roi, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
output = np.zeros(gray.shape, dtype=np.uint8)
cv2.drawContours(output, cont[0], -1, (255, 255, 255))

# removing boundary: build a one-pixel-wide white frame, keep only the
# contour pixels that fall on it, and erase them from the output
boundary = 255*np.ones(gray.shape, dtype=np.uint8)
boundary[1:boundary.shape[0]-1, 1:boundary.shape[1]-1] = 0
toremove = output & boundary
output = output ^ toremove
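To view or save the resulting edge line, something like the following could be appended (the window name and filename here are arbitrary):
cv2.imshow('edge line', output)
cv2.waitKey(0)
cv2.imwrite('edge_line.png', output)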
