How to threshold a grayscale image with lighting and noise - python

I am trying to threshold images with challenging noise.
The numbers on the side are the dimensions. I have tried various standard methods:
# global thresholding (the flag 0 is cv2.THRESH_BINARY)
ret, thresh1 = cv2.threshold(img, 95, 255, cv2.THRESH_BINARY)
# adaptive thresholding with mean and Gaussian-weighted neighborhoods
thresh2 = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 7, 0.5)
thresh3 = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 3, 1.5)
# Otsu's thresholding after Gaussian filtering
blur = cv2.GaussianBlur(img, (3, 3), 0)
ret3, th3 = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
I want to segment the "lighter" portion inside the darker grey zone (or vice versa). I have played with various kernel sizes and constant values, but nothing gives me a good segmentation. Any ideas what else I can try, or how to improve the results? Some sample results I get with the code above are shown below.

Related

Thresholding Resistor Bands with OpenCV

So I am trying to make a neural network that categorizes resistor strength by recognizing the color bands. Before I get to that step, I want to use OpenCV to threshold out all the colors except the resistor bands so that it is easier for the neural network to categorize them. However, I do not know what threshold type is best suited for this.
I tried several ranges in HLS, RGB, and HSV, but none of them gets rid of the background of the resistor.
Note: I have already used contours to get rid of the background, so now all that is left is the resistor with the colored lines on it.
In my case HLS got rid of the colors but kept the resistor background, as shown in the code below:
frame_HLS = cv2.cvtColor(masked_data, cv2.COLOR_BGR2HLS)
frame_threshold = cv2.inRange(frame_HLS, (50, 0, 0), (139, 149, 255))
Here are the original image and the HLS output:
So overall, I am just wondering whether other color modes like LUV work well for this, or whether I will have to use contours or other methods to separate them.
You're on the right track, and color thresholding is a great approach to segmenting the resistor. The thresholding is already performing correctly; you just need a few simple steps to remove the background.
I tried several ranges of HLS, RGB, and HSV, but they all do not get rid of the background of the resistor.
To remove the background, we can make use of the binary mask that cv2.inRange() generated. We simply use cv2.bitwise_and() and convert all black pixels in the mask to white with these two lines:
result = cv2.bitwise_and(original, original, mask=frame_threshold)
result[frame_threshold==0] = (255,255,255)
Here's the masked image of what you currently have (left) and after removing the background (right)
import cv2

image = cv2.imread('1.png')
original = image.copy()

# color threshold in HLS space
frame_HLS = cv2.cvtColor(image, cv2.COLOR_BGR2HLS)
frame_threshold = cv2.inRange(frame_HLS, (50, 0, 0), (139, 149, 255))

# apply the mask, then turn the masked-out (black) pixels white
result = cv2.bitwise_and(original, original, mask=frame_threshold)
result[frame_threshold == 0] = (255, 255, 255)

cv2.imshow('result', result)
cv2.waitKey()
However I do not know what threshold type is best suited for this.
Right now you're using color thresholding; you could continue with this method and experiment with other ranges in the HLS, RGB, or HSV color spaces. In all of these cases, you can remove the background by converting all black pixels in the mask to white. If you decide to pivot to another thresholding method, take a look at Otsu's threshold or adaptive thresholding, which calculate the threshold value automatically.
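For example, here's a minimal sketch of both alternatives, reusing '1.png' from the code above. Note that both operate on a grayscale image, so the color information that distinguishes the bands is lost, and the blockSize/C values are just starting points:
import cv2

image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Otsu picks one global threshold automatically from the histogram
otsu_value, otsu_mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print('Otsu threshold value:', otsu_value)

# adaptive thresholding computes a local threshold per 21x21 neighborhood
adaptive_mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                      cv2.THRESH_BINARY, 21, 5)

cv2.imshow('otsu', otsu_mask)
cv2.imshow('adaptive', adaptive_mask)
cv2.waitKey()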

Converting an image to binary doesn't show the white lines of a soccer field

I am inspired by the following blog post; however, I am struggling with steps 2/3.
I want to create a binary image from a gray image based on threshold values and ultimately display all the white lines in the image. My desired output looks as follows:
First, I want to isolate the soccer field by using colour-thresholding and morphology.
import cv2
import numpy as np

def isolate_field(img):
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # find the green pitch
    light_green = np.array([40, 40, 40])
    dark_green = np.array([70, 255, 255])
    mask = cv2.inRange(hsv, light_green, dark_green)
    # remove small noise
    kernel = np.ones((5, 5), np.uint8)
    opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # apply the mask over the original frame
    return cv2.bitwise_and(img, img, mask=opening)
This gives the following output:
I am happy with the results so far, but because of the large shadow I am struggling with the image processing once I grayscale the picture. As a result, the binary threshold is driven by the sunny part in the upper-left corner instead of the white lines around the soccer field.
Following the methodology in the tutorials, I get the following output for simple thresholding:
and adaptive thresholding:
and finally, Otsu's thresholding:
How can I make sure that the white lines become more visible? I was thinking about cropping the frame so I only see the field and then using a mask based on the color white; that didn't work out, unfortunately.
Help is much appreciated.
You can modify the inRange bounds to also exclude saturated colors (meaning the greens). I don't have your original image, so I used your intermediate result:
The result of inRange is the binary image you want. I expect you can achieve better results with the original image. I used this script on the image, which makes it easy to search for good HSV values.
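As a rough sketch of that idea (the filename is hypothetical and the HSV bounds are illustrative guesses that would need tuning against the real frame):
import cv2
import numpy as np

frame = cv2.imread('field.png')  # hypothetical path to the intermediate result
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# keep bright, nearly unsaturated pixels (the white lines) and reject the
# saturated greens of the pitch
lower_white = np.array([0, 0, 180])
upper_white = np.array([180, 60, 255])
lines = cv2.inRange(hsv, lower_white, upper_white)

cv2.imshow('white lines', lines)
cv2.waitKey(0)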

How can I detect red particles on an image using python opencv?

My job is to detect red particles in an image and measure their size. I tried simple blob detection, but it works badly with a color filter, and extracting the red values using HSV also gave poor results because the image has a small resolution (I work on a Raspberry Pi using a webcam).
Here is a sample picture:
Using the HSV colour space is perfectly fine. If you show the hue and saturation components of the image, you'll see that the red particles have a relatively large hue with a small saturation.
BTW, your image is rather large in resolution. I'm going to downsample it for the purposes of fitting the images into the post as well as minimizing processing time. First, let's load in your image, resize it down to 25% resolution, then extract the HSV components:
import cv2
import numpy as np

im = cv2.imread('sample.png')
im_resize = cv2.resize(im, None, None, 0.25, 0.25)  # downsample to 25%
out = cv2.cvtColor(im_resize, cv2.COLOR_BGR2HSV)

# place hue (channel 0) and saturation (channel 1) side by side for display
stacked = np.hstack([out[..., 0], out[..., 1]])
cv2.imshow("Hue & Saturation", stacked)
cv2.waitKey(0)
cv2.destroyAllWindows()
I'm also stacking the hue and saturation channels together into a single image so we can see what it looks like and displaying this to the screen.
We get this image:
The combination of a relatively large hue component with a low saturation component is unique in comparison to the rest of the image. Let's do some simple thresholding to extract out those components where we look for areas that have a hue component that is greater than one threshold and a saturation component that is smaller than another threshold:
hue_thresh = 100
saturation_thresh = 32
thresh = np.logical_and(out[...,0] > hue_thresh, out[...,1] < saturation_thresh)
cv2.imshow("Thresholded", 255*(thresh.astype(np.uint8)))
cv2.waitKey(0)
cv2.destroyAllWindows()
I set some tuned thresholds, then use numpy.logical_and to combine both conditions. Because the image is now of type bool, and images should be of an unsigned or floating-point type for display, we convert it to uint8 and multiply by 255.
We now get this image:
As you can see, we extract the portions with a reddish hue that is uncommon in the rest of the image. The thresholds will still need to be played around with, but they work fine for this particular example.
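Since the goal is also to get the size of each particle, one possible follow-up is connected-component statistics on the mask above; a minimal sketch (the area cutoff is just a guess):
import cv2
import numpy as np

# `thresh` is the boolean mask from the previous snippet
mask = 255 * thresh.astype(np.uint8)
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
for label in range(1, num_labels):  # label 0 is the background
    area = stats[label, cv2.CC_STAT_AREA]
    if area > 2:  # illustrative noise filter
        print('particle', label, 'area (pixels):', area)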

OpenCV Dynamic segmentation method for blurred or degraded image

I have implemented segmentation for the given images, but the images may vary in color. How can I separate the background from the foreground, where the foreground contains only hollow or filled circles? My goal is to find the threshold value automatically based on the color of the image.
[Sample images][1]
import cv2

image = cv2.imread("cropped2/pnr6.jpg")
img = image.copy()

# keep only pixels whose red channel (BGR order) is very low, i.e. dark markings
MARKER_LOWER_BOUND = (0, 0, 0)
MARKER_UPPER_BOUND = (255, 255, 25)
marker_seg_mask = cv2.inRange(img, MARKER_LOWER_BOUND, MARKER_UPPER_BOUND)

cv2.imshow("threshold", marker_seg_mask)
cv2.waitKey(0)
Look at the cv2.adaptiveThreshold() function, which should do what you need.
From the docs:
Adaptive Thresholding
In the previous section, we used one global value as the threshold. But this may not be good in all conditions, e.g. when an image has different lighting in different areas. In that case, adaptive thresholding can help: the algorithm calculates the threshold for small regions of the image, so we get different thresholds for different regions of the same image, which gives better results for images with varying illumination.
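As a minimal sketch against the question's own image (blockSize=11 and C=2 are just starting values to tune):
import cv2

image = cv2.imread("cropped2/pnr6.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# threshold each pixel against a Gaussian-weighted mean of its 11x11 neighborhood
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 11, 2)

cv2.imshow("adaptive threshold", thresh)
cv2.waitKey(0)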

Trying to improve my road segmentation program in OpenCV

I am trying to make a program that can identify a road in a scene, and I proceeded by using morphological filtering and the watershed algorithm. However, the program produces either mediocre or bad results. It seems to do okay (though not well enough) if the road takes up most of the scene. In other pictures, however, the sky gets segmented instead (the watershed latches onto the clouds).
I tried to see if I could perform more image processing to improve the results, but this is the best I have so far, and I don't know how to move forward.
How can I improve my program?
Code:
import numpy as np
import cv2
from matplotlib import pyplot as plt
import imutils
def invert_img(img):
    img = 255 - img
    return img
#img = cv2.imread('images/coins_clustered.jpg')
img = cv2.imread('images/road_4.jpg')
img = imutils.resize(img, height = 300)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
thresh = invert_img(thresh)
# noise removal
kernel = np.ones((3,3), np.uint8)
opening = cv2.morphologyEx(thresh,cv2.MORPH_OPEN,kernel, iterations = 4)
# sure background area
sure_bg = cv2.dilate(opening,kernel,iterations=3)
#sure_bg = cv2.morphologyEx(sure_bg, cv2.MORPH_TOPHAT, kernel)
# Finding sure foreground area
dist_transform = cv2.distanceTransform(opening, cv2.DIST_L2, 5)
ret, sure_fg = cv2.threshold(dist_transform,0.7*dist_transform.max(),255,0)
# Finding unknown region
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg,sure_fg)
# Marker labelling
ret, markers = cv2.connectedComponents(sure_fg)
# Add one to all labels so that sure background is not 0, but 1
markers = markers+1
# Now, mark the region of unknown with zero
markers[unknown==255] = 0
'''
imgray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
imgray = cv2.GaussianBlur(imgray, (5, 5), 0)
img = cv2.Canny(imgray,200,500)
'''
markers = cv2.watershed(img,markers)
img[markers == -1] = [255,0,0]
cv2.imshow('background',sure_bg)
cv2.imshow('foreground',sure_fg)
cv2.imshow('threshold',thresh)
cv2.imshow('result',img)
cv2.waitKey(0)
For a start, segmentation problems are hard. The more general you want the solution to be, the harder it gets. Road segmentation is a well-known problem, and I'm sure you can find many papers that tackle this issue from various directions.
Something that helps me get ideas for computer vision problems is to think about what makes a target so easy for me to detect and so hard for a computer.
For example, let's look at the road in your images. What makes it unique compared to the background?
Distinct gray color.
Always has two white shoulder lines.
Always in the bottom section of the image.
Always has a separation line in the middle (yellow/white).
Pretty smooth.
Wider at the bottom, vanishing into the horizon.
Now, after we have found some unique features, we need to find ways to quantify them, so it will be obvious to the algorithm as it is obvious to us.
Work on the RGB (or, even better, HSV) image; don't convert it to gray at the beginning and lose all the color data. Look for gray areas (a rough sketch follows this list).
Again, let's find white regions (inside gray ones). You can try edge detection in the specific orientation of the shoulder lines. You are looking for lines that span about half the height of the image, etc.
Let's delete the upper half of the image. You will hardly ever have road there, and you get rid of a lot of noise in your algorithm.
See 2...
Let's check the local standard deviation, or some other smoothness feature.
If we find some shape, let's check whether it fits what we expect.
I know these are just ideas and I don't claim they are easy to implement, but if you want to improve your algorithm you must give it more "knowledge", just as you have.
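For instance, here is a rough sketch of ideas 1 and 3 combined, reusing the question's image path; the HSV bounds are guesses that would need tuning:
import cv2
import numpy as np

img = cv2.imread('images/road_4.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# "gray" pixels: any hue, low saturation, mid-range brightness
gray_mask = cv2.inRange(hsv, np.array([0, 0, 50]), np.array([180, 40, 220]))

# idea 3: ignore the upper half of the image (sky, horizon)
gray_mask[:img.shape[0] // 2, :] = 0

cv2.imshow('candidate road pixels', gray_mask)
cv2.waitKey(0)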
Exploit some domain knowledge; in other words, make some simplifying assumptions. Even basic things like "the camera's not upside down" and "the pavement has a uniform hue" will improve the common case.
If you can treat crossroads as a special case, then finding the edges of the roadway may be a simpler and more useful task than finding the roadway itself.
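A hedged sketch of that edge-first approach, using Canny followed by a probabilistic Hough transform (all parameters are illustrative starting points):
import cv2
import numpy as np

img = cv2.imread('images/road_4.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# smooth, then detect edges; 50/150 are common starting hysteresis values
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

# look for long, mostly straight segments that could be road boundaries
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=60, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)

cv2.imshow('road edge candidates', img)
cv2.waitKey(0)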
