I am trying to replace the checkered background (which represents transparency in Adobe Illustrator and Photoshop) with an actual transparent alpha channel in some PNGs, using a Python script.
First, I use template matching:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img_rgb = cv2.imread('testimages/fake1.png', cv2.IMREAD_UNCHANGED)
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
template = cv2.imread('pattern.png', 0)
w, h = template.shape[::-1]
res = cv2.matchTemplate(img_gray, template, cv2.TM_CCOEFF_NORMED)
threshold = 0.8
loc = np.where(res >= threshold)
for pt in zip(*loc[::-1]):
    if img_rgb.shape[2] == 3:
        # add an alpha channel, fully opaque by default
        rgba = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2BGRA)
        rgba[:, :, 3] = 255
        img_rgb = rgba
    # replace the matched area with a transparent rectangle
    cv2.rectangle(img_rgb, pt, (pt[0] + w, pt[1] + h), (255, 255, 255, 0), -1)
cv2.imwrite('result.png', img_rgb)
Source Image: fake1.png
Pattern Template: pattern.png
Output: result.png (the gray area is actually transparent; enlarge it a bit for easier viewing)
I know this approach has problems: in some cases the template cannot be matched fully, because part of the pattern is hidden by the graphics in the PNG image.
My question is: how can I match such a pattern perfectly using OpenCV? Via FFT filtering?
References:
How particular pixel to transparent in opencv python?
Detecting a pattern in an image and retrieving its position
https://python.plainenglish.io/how-to-remove-image-background-using-python-6f7ffa8eab15
https://answers.opencv.org/question/232506/make-the-background-of-the-image-transparent-using-a-mask/
https://dsp.stackexchange.com/questions/36679/which-image-filter-can-be-applied-to-remove-gridded-pattern-from-corrupt-jpegs
Here is one way to do that in Python/OpenCV: simply threshold on the color range of the checks.
Input:
import cv2
import numpy as np
# read input
img = cv2.imread("fake.png")
# threshold on checks
low = (230,230,230)
high = (255,255,255)
mask = cv2.inRange(img, low, high)
# invert alpha
alpha = 255 - mask
# convert img to BGRA
result = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
result[:,:,3] = alpha
# save output
cv2.imwrite('fake_transparent.png', result)
cv2.imshow('img', img)
cv2.imshow('mask', mask)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Download the resulting image to see that it is actually transparent.
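To verify programmatically rather than by eye, here is a quick sketch (assuming the file was written as above) that reads the PNG back with its alpha channel intact:
import cv2
out = cv2.imread('fake_transparent.png', cv2.IMREAD_UNCHANGED)
print(out.shape)                  # (height, width, 4) -> BGRA
print((out[:, :, 3] == 0).any())  # True if any pixel is fully transparent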
Here is one way to use DFT to process the image in Python/OpenCV/Numpy. One does need to know the size of the checkerboard pattern (light or dark square size).
Read the input
Separate channels
Apply DFT to each channel
Shift origin from top left to center of each channel
Extract magnitude and phase images from each channel
Define the checkerboard pattern size
Create a black and white checkerboard image of the same size
Apply similar DFT processing to the checkerboard image
Get the spectrum from the log(magnitude)
Threshold the spectrum to form a mask
Zero out the DC center point in the mask
Option: if needed, apply a morphological dilate to thicken the white dots, though that does not seem to be needed here
Invert the mask so the background is white and the dots are black
Convert the mask to range 0 to 1 and make 2 channels
Apply the two-channel mask to the center shifted DFT channels
Shift the center back to the top left in each masked image
Do the IDFT to get back from complex domain to real domain on each channel
Merge the resulting channels back to a BGR image as the final reconstituted image
Save results
Input:
import numpy as np
import cv2
import math
# read input
# note: opencv fft only works on grayscale
img = cv2.imread('fake.png')
hh, ww = img.shape[:2]
# separate channels
b,g,r = cv2.split(img)
# convert images to floats and do dft saving as complex output
dft_b = cv2.dft(np.float32(b), flags = cv2.DFT_COMPLEX_OUTPUT)
dft_g = cv2.dft(np.float32(g), flags = cv2.DFT_COMPLEX_OUTPUT)
dft_r = cv2.dft(np.float32(r), flags = cv2.DFT_COMPLEX_OUTPUT)
# apply shift of origin from upper left corner to center of image
dft_b_shift = np.fft.fftshift(dft_b)
dft_g_shift = np.fft.fftshift(dft_g)
dft_r_shift = np.fft.fftshift(dft_r)
# extract magnitude and phase images
mag_b, phase_b = cv2.cartToPolar(dft_b_shift[:,:,0], dft_b_shift[:,:,1])
mag_g, phase_g = cv2.cartToPolar(dft_g_shift[:,:,0], dft_g_shift[:,:,1])
mag_r, phase_r = cv2.cartToPolar(dft_r_shift[:,:,0], dft_r_shift[:,:,1])
# set check size (size of either dark or light square)
check_size = 15
# create checkerboard pattern
white = np.full((check_size,check_size), 255, dtype=np.uint8)
black = np.full((check_size,check_size), 0, dtype=np.uint8)
checks1 = np.hstack([white,black])
checks2 = np.hstack([black,white])
checks3 = np.vstack([checks1,checks2])
numht = math.ceil(hh / (2*check_size))
numwd = math.ceil(ww / (2*check_size))
checks = np.tile(checks3, (numht,numwd))
checks = checks[0:hh, 0:ww]
# apply dft to checkerboard pattern
dft_c = cv2.dft(np.float32(checks), flags = cv2.DFT_COMPLEX_OUTPUT)
dft_c_shift = np.fft.fftshift(dft_c)
mag_c, phase_c = cv2.cartToPolar(dft_c_shift[:,:,0], dft_c_shift[:,:,1])
# get spectrum from magnitude (add a tiny amount to avoid log(0))
spec = np.log(mag_c + 0.00000001)
# threshold spectrum
mask = cv2.threshold(spec, 1, 255, cv2.THRESH_BINARY)[1]
# mask DC point (center spot)
centx = int(ww/2)
centy = int(hh/2)
dot = np.zeros((3,3), dtype=np.uint8)
mask[centy-1:centy+2, centx-1:centx+2] = dot
# If needed do morphology dilate by small amount.
# But does not seem to be needed in this case
# invert mask
mask = 255 - mask
# apply mask to real and imaginary components
mask1 = (mask/255).astype(np.float32)
mask2 = cv2.merge([mask1,mask1])
complex_b = dft_b_shift*mask2
complex_g = dft_g_shift*mask2
complex_r = dft_r_shift*mask2
# shift origin from center to upper left corner
complex_ishift_b = np.fft.ifftshift(complex_b)
complex_ishift_g = np.fft.ifftshift(complex_g)
complex_ishift_r = np.fft.ifftshift(complex_r)
# do idft with normalization saving as real output and crop to original size
img_notch_b = cv2.idft(complex_ishift_b, flags=cv2.DFT_SCALE+cv2.DFT_REAL_OUTPUT)
img_notch_b = img_notch_b.clip(0,255).astype(np.uint8)
img_notch_b = img_notch_b[0:hh, 0:ww]
img_notch_g = cv2.idft(complex_ishift_g, flags=cv2.DFT_SCALE+cv2.DFT_REAL_OUTPUT)
img_notch_g = img_notch_g.clip(0,255).astype(np.uint8)
img_notch_g = img_notch_g[0:hh, 0:ww]
img_notch_r = cv2.idft(complex_ishift_r, flags=cv2.DFT_SCALE+cv2.DFT_REAL_OUTPUT)
img_notch_r = img_notch_r.clip(0,255).astype(np.uint8)
img_notch_r = img_notch_r[0:hh, 0:ww]
# combine b,g,r components
img_notch = cv2.merge([img_notch_b, img_notch_g, img_notch_r])
# write result to disk
cv2.imwrite("fake_checks.png", checks)
cv2.imwrite("fake_spectrum.png", (255*spec).clip(0,255).astype(np.uint8))
cv2.imwrite("fake_mask.png", mask)
cv2.imwrite("fake_notched.png", img_notch)
# show results
cv2.imshow("ORIGINAL", img)
cv2.imshow("CHECKS", checks)
cv2.imshow("SPECTRUM", spec)
cv2.imshow("MASK", mask)
cv2.imshow("NOTCH", img_notch)
cv2.waitKey(0)
cv2.destroyAllWindows()
Checkerboard image:
Spectrum of checkerboard:
Mask:
Result (notch filtered image):
The checkerboard pattern in the result is mitigated from the original, but still there upon close inspection.
From here, one needs to threshold on the white background and invert to make an image for the alpha channel, then convert the image to BGRA (4 channels) and insert the alpha channel into the BGRA image, as described in my other answer above.
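A minimal sketch of that alpha-channel step, assuming the notch-filtered result was saved as fake_notched.png (the white-threshold bounds are illustrative and may need tuning):
import cv2
img = cv2.imread('fake_notched.png')
# threshold on the near-white background
mask = cv2.inRange(img, (230, 230, 230), (255, 255, 255))
# invert so the background becomes transparent and the subject opaque
alpha = 255 - mask
# convert to BGRA and insert the alpha channel
result = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
result[:, :, 3] = alpha
cv2.imwrite('fake_notched_transparent.png', result)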
Since you're working on PNGs with transparent backgrounds, it would probably be equally viable to extract the content that isn't checkered, instead of trying to detect the checkered background. This could probably be achieved with a color check on all pixels using OpenCV's inRange() function. Below is a Stack Overflow link that tries to detect dark spots in an image, followed by a rough sketch of the idea.
Inrange example
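The sketch below assumes the checks are light gray and white; the inRange bounds are guesses that would need tuning per image:
import cv2
img = cv2.imread('fake1.png')
# mark pixels that look like the checkered background
checks = cv2.inRange(img, (200, 200, 200), (255, 255, 255))
# everything NOT matching the check colors is the content to keep
content_mask = cv2.bitwise_not(checks)
result = cv2.bitwise_and(img, img, mask=content_mask)
cv2.imwrite('extracted.png', result)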
I'm trying to take real-time input of hand gestures from a webcam, then process the images to feed them to a neural network. I wrote this processing function to make the hand features look prominent:
import cv2
img = cv2.imread('hand.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 2)
th3 = cv2.adaptiveThreshold(blur, 10, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 2)
ret, res = cv2.threshold(th3, 225, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
res = cv2.Canny(res, 100, 200)
cv2.imshow("Canny", res)
cv2.waitKey(0)
The input and the output images are as follows:
It's obvious that double lines, instead of single ones, are detected along the edges (all over the hand, not only the contour). I want to make them single. If I apply just the Canny edge detection algorithm, the edges are not very prominent.
One straightforward solution would be to flood-fill the background with white and then with black using cv2.floodFill, like this:
import cv2
import numpy as np
# image path
path = "D://opencvImages//"
fileName = "hand.png"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Convert the image to grayscale:
binaryImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Flood fill background (white, then black):
cv2.floodFill(binaryImage, mask=None, seedPoint=(0, 0), newVal=255)
cv2.floodFill(binaryImage, mask=None, seedPoint=(0, 0), newVal=0)
cv2.imshow("floodFilled", binaryImage)
cv2.waitKey(0)
This is the result:
If you want to get a solid mask of the hand, you could try to fill the holes inside the hand's contour, also using flood-fill and some image arithmetic, like this:
# image path
path = "D://opencvImages//"
fileName = "hand.png"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Convert the image to Grayscale:
binaryImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Isolate holes on input image:
holes = binaryImage.copy()
# Get rows and cols from input:
(rows, cols) = holes.shape[:2]
# Remove background via flood-fill on 4 outermost corners
cv2.floodFill(holes, mask=None, seedPoint=(int(0), int(0)), newVal=(255))
cv2.floodFill(holes, mask=None, seedPoint=(int(10), int(rows-10)), newVal=(255))
cv2.floodFill(holes, mask=None, seedPoint=(int(cols-10), int(10)), newVal=(255))
cv2.floodFill(holes, mask=None, seedPoint=(int(cols-10), int(rows-10)), newVal=(255))
# Get holes:
holes = 255 - holes
# Final image is original input + isolated holes:
mask = binaryImage + holes
# Deep copy for further results:
maskCopy = mask.copy()
maskCopy = cv2.cvtColor(maskCopy, cv2.COLOR_GRAY2BGR)
These are the isolated holes and hand mask:
You can then detect the bounding rectangle by processing contours, filtering small-area blobs and approximating to a rectangle, like this:
# Find the big contours/blobs on the processed image:
contours, hierarchy = cv2.findContours(mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
# Get bounding rectangles:
for c in contours:
    # Filter contour by area:
    blobArea = cv2.contourArea(c)
    minArea = 100
    if blobArea > minArea:
        # Approximate the contour to a polygon:
        contoursPoly = cv2.approxPolyDP(c, 3, True)
        # Get the polygon's bounding rectangle:
        boundRect = cv2.boundingRect(contoursPoly)
        # Get the dimensions of the bounding rect:
        rectX = boundRect[0]
        rectY = boundRect[1]
        rectWidth = boundRect[2]
        rectHeight = boundRect[3]
        # Draw rectangle:
        color = (0, 255, 0)
        cv2.rectangle(maskCopy, (int(rectX), int(rectY)), (int(rectX + rectWidth), int(rectY + rectHeight)), color, 3)
cv2.imshow("Bounding Rectangle", maskCopy)
cv2.waitKey(0)
This is the result:
It looks like you are on the right track, but as @CrisLuengo mentioned, Canny is applied to grayscale images rather than binary images. Here is an approach:
import numpy as np
import matplotlib.pyplot as plt
import cv2
img_gray = cv2.imread('hand.png',0)
sigma = 2
threshold1=30
threshold2=60
img_blur = cv2.GaussianBlur(img_gray,(5,5),sigmaX=sigma,sigmaY=sigma)
res = cv2.Canny(img_blur,threshold1=threshold1,threshold2=threshold2)
fig,ax = plt.subplots(1,2,sharex=True,sharey=True)
ax[0].imshow(img_gray,cmap='gray')
ax[1].imshow(res,cmap='gray')
plt.show()
After playing around with the parameters of the Gaussian filter and the Canny threshold values, this is what I am getting:
As you can see, most of the fingers are clearly detected, except the thumb. The lighting conditions make it difficult for Canny to compute a proper gradient there. You might either try to improve the contrast of your images through your setup (the easiest solution, in my opinion), or apply a contrast-enhancement method such as Contrast Limited Adaptive Histogram Equalization (CLAHE) before running Canny. I did not get any better results than the one above after a few trials with CLAHE, though, but it might be worth looking into. Good luck!
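For reference, a minimal sketch of applying CLAHE before Canny, assuming the same hand.png input (clipLimit and tileGridSize are typical starting values, not tuned ones):
import cv2
img_gray = cv2.imread('hand.png', 0)
# equalize contrast locally before blurring and edge detection
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
img_eq = clahe.apply(img_gray)
img_blur = cv2.GaussianBlur(img_eq, (5, 5), sigmaX=2, sigmaY=2)
edges = cv2.Canny(img_blur, threshold1=30, threshold2=60)
cv2.imshow("CLAHE + Canny", edges)
cv2.waitKey(0)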
I'm pre-processing some images in order to remove the background from my area of interest. However, the images on my bench have rounded edges due to the focus of the camera. How do I discard these rounded edges and extract only my object of interest from the image? With the code below I can remove the background of the image, but it does not work correctly because of the rounded edges.
import numpy as np
import cv2
# Read the image and get its height and width
img = cv2.imread('IMD408.bmp')
h, w = img.shape[:2]
# Transform to gray colorspace and blur the image.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray,(5,5),0)
# Make a fake rectangle around the image that will separate the main contour.
cv2.rectangle(blur, (0,0), (w,h), (255,255,255), 10)
# Perform Otsu threshold.
_,thresh = cv2.threshold(blur,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# Create a mask for bitwise operation
mask = np.zeros((h, w), np.uint8)
# Search for contours and iterate over contours. Make threshold for size to
# eliminate others.
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
for i in contours:
    cnt = cv2.contourArea(i)
    if 1000000 > cnt > 100000:
        cv2.drawContours(mask, [i], -1, 255, -1)
# Perform the bitwise operation.
res = cv2.bitwise_and(img, img, mask=mask)
# Display the result.
cv2.imwrite('IMD408.png', res)
cv2.imshow('img', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
input image:
Output:
Error:
Since you mentioned that all the images have the same hue, this should work well for them. The steps are: do some white balancing, which will increase the contrast a bit.
Get the greyscale.
Threshold the grayscale image: values greater than 127 are set to 255 (white). This gives a binary image, which will become the mask for the original image.
Apply the mask
You might have to play around with the thresholding if you want better results; here is the link for that. But this should get you started. I'm using a different OpenCV version than you, so you might have to tweak the code a bit.
import cv2
def equaliseWhiteBalance(image):
    ''' Return equalised WB of an image '''
    wb = cv2.xphoto.createSimpleWB()               # Create WB object
    imgWB = wb.balanceWhite(image)                 # Balance white on image
    b, g, r = cv2.split(imgWB)                     # Get individual b,g,r channels
    r_equ = cv2.equalizeHist(r)                    # Equalise RED channel
    g_equ = cv2.equalizeHist(g)                    # Equalise GREEN channel
    b_equ = cv2.equalizeHist(b)                    # Equalise BLUE channel
    img_equ_WB = cv2.merge([b_equ, g_equ, r_equ])  # Merge equalised channels
    return img_equ_WB
#Read the image
img = cv2.imread('IMD408.bmp')
result = img.copy()
#Get whiteBalance of image
imgWB = equaliseWhiteBalance(img)
cv2.imshow('img', imgWB)
cv2.waitKey(0)
# Get gray image
gray = cv2.cvtColor(imgWB, cv2.COLOR_BGR2GRAY)
cv2.imshow('img', gray)
cv2.waitKey(0)
# Perform threshold
_, thresh = cv2.threshold(gray,127,255,cv2.THRESH_BINARY)
cv2.imshow('img', thresh)
cv2.waitKey(0)
# Apply mask
result[thresh!=0] = (255,255,255)
cv2.imshow('img', result)
cv2.waitKey(0)
If the dark corner vignettes have different sizes per image, then I suggest looking for the centroids of contours in the binary (mask) image. Centroids that are a short distance from any corner of your image will belong to the dark vignettes, so their value can be changed from black to white.
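A rough sketch of that centroid idea, assuming the vignettes appear as black blobs in a binary mask loaded from a hypothetical mask.png (the distance threshold is an arbitrary example):
import cv2
import numpy as np
mask = cv2.imread('mask.png', 0)
h, w = mask.shape[:2]
corners = [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]
# invert so the black vignettes become white blobs that findContours can see
inv = cv2.bitwise_not(mask)
contours, _ = cv2.findContours(inv, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    M = cv2.moments(c)
    if M["m00"] == 0:
        continue
    cx, cy = M["m10"] / M["m00"], M["m01"] / M["m00"]
    # a centroid close to any corner is assumed to be a dark vignette
    if min(np.hypot(cx - x, cy - y) for x, y in corners) < 0.25 * min(h, w):
        cv2.drawContours(mask, [c], -1, 255, -1)  # change it from black to white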
I want to isolate every character in the following image:
and it should create a rectangular bounding box around each character. My code is creating a circular bounding box instead. I need to supply these isolated character images to my trained model to predict the character. I haven't done image processing before, which leads me to ask such a question.
This is the code I'm using:
# Standard imports
import cv2
import numpy as np
from PIL import Image
params = cv2.SimpleBlobDetector_Params()
# Change thresholds
params.minThreshold = 10
params.maxThreshold = 200
#Filter by Color
params.filterByColor=False
params.blobColor=255
# Filter by Area.
params.filterByArea = False
params.minArea = 50
# Filter by Circularity
params.filterByCircularity = False
params.minCircularity = 0.0785
# Filter by Convexity
params.filterByConvexity = False
params.minConvexity = 0.87
# Filter by Inertia
params.filterByInertia = False
params.minInertiaRatio = 0.01
# Read image
im = cv2.imread("C:\\xx\\testimages\\bw_plate.jpg", cv2.IMREAD_GRAYSCALE)
cv2.threshold(im,200,255,cv2.THRESH_BINARY_INV,im)
# Set up the detector with default parameters.
detector = cv2.SimpleBlobDetector_create(params)
# Detect blobs.
keypoints = detector.detect(im)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0, 0, 255),
cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show keypoints
cv2.imshow("Keypoints", im_with_keypoints)
cv2.waitKey(0)
My output with the above code is:
Why is it not detecting the 0 and 2 properly? Also, how can I create separate JPEG files for each isolated character?
The C++ implementation of my project uses the CblobResult class, which did the segmentation. Is there any equivalent library in Python?
This is what the final output must look like for every character after segmentation:
After removing the background noise, you can get an input image like this:
Then you can get what you want using following code:
import cv2
img = cv2.imread('test4.jpg', 0)
cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU, img)
# note: the three return values are the OpenCV 3.x findContours signature
image, contours, hier = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# sort contours left to right by the x coordinate of their bounding boxes
contours = sorted(contours, key=lambda ctr: cv2.boundingRect(ctr)[0])
cv2.imshow("contours", img)
cv2.waitKey(0)
d = 0
for ctr in contours:
    # Get bounding box
    x, y, w, h = cv2.boundingRect(ctr)
    # Getting ROI
    roi = image[y:y+h, x:x+w]
    cv2.imshow('character: %d' % d, roi)
    cv2.imwrite('character_%d.png' % d, roi)
    cv2.waitKey(0)
    d += 1
cv2.destroyAllWindows()
I am trying to isolate certain colored lines (the field lines) in a set of hockey images. I have applied a Hue, Lightness, Saturation (HLS) color-space filter and have managed to pass all the components of the original image that fall in the specified HLS range. However, some other parts of the image are also passed through because they satisfy the color range, for example parts of the crowd and of the playing field. I want to isolate only the required line. How can I do this?
NOTE: I have separate programs for blue and yellow because they require different HLS ranges. Some images contain more than one line. Also, as can be seen in the second image, the lines may be slightly curved; there, it would be enough to get the straight part only.
I have tried various image transformations and morphological operations with no luck. I have also experimented quite a lot with the HLS ranges and have set them so that they produce the best result on a set of images which I have, but I still don't get a satisfactory result.
Original images:
Code:
import cv2
import numpy as np
frame = cv2.imread('hockey4.jpg')
width=900
height=600
frame = cv2.resize(frame,(width,height))
# Convert BGR to HLS
hls = cv2.cvtColor(frame, cv2.COLOR_BGR2HLS)
#HLS ranges for blue
#lower array defines the lower limit and upper array defines the upper limit of the range
#The mask is a binary image where the output is white if the corresponding pixel in the input image is between the range specified by upper and lower limits
#blue
lower = np.array([90,90,19]) #90,90,19
upper = np.array([130,190,100]) #130,190,100
mask = cv2.inRange(hls, lower, upper)
# Bitwise-AND mask and original image
res = cv2.bitwise_and(frame,frame, mask= mask)
You can get better results from range thresholding than you're currently getting. Additionally, try using the morphological operations like opening and closing after thresholding to remove spurious bits and add in any pieces you didn't mean to remove.
These were the results I was able to get through thresholding with cv2.inRange() and applying opening and closing with cv2.morphologyEx():
image 1 yellow, image 1 blue, image 2 yellow, image 2 blue
Here was my code for the first image:
import cv2
import numpy as np
img = cv2.imread('0.jpg')
# crop out the top of the image where the scores are
h, w = img.shape[:2]
score_h = int(h/8)
img = img[score_h:h, :]
h, w = img.shape[:2]
# blur for better thresholding
blur = cv2.GaussianBlur(img, (5,5), 1)
# threshold in HSV space
hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
# threshold for yellow
lb_y = np.array([11, 0, 153])
ub_y = np.array([52, 255, 255])
bin_y = cv2.inRange(hsv, lb_y, ub_y)
# cv2.imshow("thresh yellow", bin_y)
# cv2.waitKey(0)
# open to remove spurious yellow bits
open_kern = np.ones((3,3), dtype=np.uint8)
bin_y = cv2.morphologyEx(bin_y, cv2.MORPH_OPEN, open_kern, iterations=2)
# cv2.imshow("opened yellow", bin_y)
# cv2.waitKey(0)
# threshold for blue
lb_b = np.array([113, 41, 191])
ub_b = np.array([119, 76, 232])
bin_b = cv2.inRange(hsv, lb_b, ub_b)
# cv2.imshow("thresh blue", bin_b)
# cv2.waitKey(0)
# open to remove spurious blue bits
kern = np.ones((3,3), dtype=np.uint8)
bin_b = cv2.morphologyEx(bin_b, cv2.MORPH_OPEN, kern, iterations=2)
# cv2.imshow("opened blue", bin_b)
# cv2.waitKey(0)
# combine to show yellow detection
rip_y = img.copy()
rip_y[bin_y==0] = 0
mark_y = cv2.addWeighted(img, .4, rip_y, .6, 1)
cv2.imshow("marked yellow", mark_y)
cv2.waitKey(0)
# cv2.imwrite('0-y.jpg',mark_y)
# combine to show blue detection
rip_b = img.copy()
rip_b[bin_b==0] = 0
mark_b = cv2.addWeighted(img, .4, rip_b, .6, 1)
cv2.imshow("marked blue", mark_b)
cv2.waitKey(0)
# cv2.imwrite('0-b.jpg',mark_b)
And for the second:
import cv2
import numpy as np
img = cv2.imread('1.jpg')
# crop out the top of the image where the scores are
h, w = img.shape[:2]
score_h = int(h/10)
img = img[score_h:h, :]
h, w = img.shape[:2]
# blur for better thresholding
blur = cv2.GaussianBlur(img, (5,5), 1)
# threshold in HSV space
hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
# threshold for yellow
lb_y = np.array([14, 79, 140])
ub_y = np.array([25, 255, 217])
bin_y = cv2.inRange(hsv, lb_y, ub_y)
# cv2.imshow("thresh yellow", bin_y)
# cv2.waitKey(0)
# open to remove spurious yellow bits
open_kern = np.ones((3,3), dtype=np.uint8)
bin_y = cv2.morphologyEx(bin_y, cv2.MORPH_OPEN, open_kern, iterations=2)
# cv2.imshow("opened yellow", bin_y)
# cv2.waitKey(0)
# threshold for blue
lb_b = np.array([113, 50, 150])
ub_b = np.array([135, 255, 255])
bin_b = cv2.inRange(hsv, lb_b, ub_b)
# cv2.imshow("thresh blue", bin_b)
# cv2.waitKey(0)
# close to fill in blue lines
kern = np.ones((3,3), dtype=np.uint8)
bin_b = cv2.morphologyEx(bin_b, cv2.MORPH_CLOSE, kern, iterations=2)
# cv2.imshow("closed blue", bin_b)
# cv2.waitKey(0)
# open to remove spurious lines
bin_b = cv2.morphologyEx(bin_b, cv2.MORPH_OPEN, kern, iterations=2)
# cv2.imshow("opened blue", bin_b)
# cv2.waitKey(0)
# combine to show yellow detection
rip_y = img.copy()
rip_y[bin_y==0] = 0
mark_y = cv2.addWeighted(img, .4, rip_y, .6, 1)
cv2.imshow("marked yellow", mark_y)
cv2.waitKey(0)
cv2.imwrite('1-y.jpg',mark_y)
# combine to show blue detection
rip_b = img.copy()
rip_b[bin_b==0] = 0
mark_b = cv2.addWeighted(img, .4, rip_b, .6, 1)
cv2.imshow("marked blue", mark_b)
cv2.waitKey(0)
cv2.imwrite('1-b.jpg',mark_b)
The first and second scripts are identical aside from their threshold values; for the second image I also apply a closing operation on the blue mask (not done for the other three thresholds) to close up the gaps in the line a bit.
It's not clear whether you're looking to do this automatically or manually for a few images. If automatically, my first approach would be to create a set of threshold values that work relatively well for all the images and be aggressive with the morphological operations. This will likely thin out the lines a good bit, but you could then use NumPy operations to fit a curve to the lines and widen it to recover roughly the original line, as in the sketch below.
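A loose sketch of that curve-fitting idea, assuming a thinned binary line mask saved as a hypothetical line_mask.png (the polynomial degree and stroke thickness are guesses to tune):
import cv2
import numpy as np
bin_line = cv2.imread('line_mask.png', 0)
ys, xs = np.nonzero(bin_line)
# fit a low-degree polynomial y = f(x) through the surviving line pixels
poly = np.poly1d(np.polyfit(xs, ys, deg=2))
# redraw the fitted curve with a wider stroke to recover the original width
canvas = np.zeros_like(bin_line)
x_range = np.arange(xs.min(), xs.max())
pts = np.stack([x_range, poly(x_range)], axis=1).astype(np.int32)
cv2.polylines(canvas, [pts], isClosed=False, color=255, thickness=9)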
If you're going to do it manually, the easiest bet is to play around with the threshold values for each image. I've recently created a tool that gives you sliders for changing the threshold range and previewing the result on the fly; check it out on GitHub if you're interested in trying it. It outputs the binary thresholded image, the colorspace you're thresholding in, and the lower and upper bounds that were used for thresholding.