Identify a line on an image using OpenCV - python

This is the frame I want to process. I want to identify a line in the image. Although I found it via Canny edge detection, I also tried corner detection, expecting the detected points to cover most of the line. Counter-intuitively, more points landed on the noise in the image than on the actual line. Does anyone know of a function in OpenCV (Python) that connects these points intelligently, joining only those along the line and not the noise?
Help will be much appreciated.
I want to identify this black line:
import cv2
import numpy as np

img = cv2.imread('fw1.jpeg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)
# minLineLength and maxLineGap must be passed as keyword arguments;
# positionally they would land in the wrong parameter slots
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100,
                        minLineLength=100, maxLineGap=10)
for line in lines:
    x1, y1, x2, y2 = line[0]
    cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite('houghlines5.jpg', img)

I think you should first increase the contrast and get rid of the noise. Contrast enhancement is described here: https://stackoverflow.com/a/46522515/8682088
Code from the linked answer:
cv::Mat img = cv::imread("E:\\Workspace\\KS\\excercise\\oBwBH.jpg", 0);
cv::Mat workingMat;
cv::GaussianBlur(img, workingMat, cv::Size(101, 101), 31, 31); // heavy blur to estimate background light
img = img - 0.7 * workingMat;                     // adjust light level
cv::normalize(img, img, 0, 255, cv::NORM_MINMAX); // use the whole range
cv::medianBlur(img, img, 5);                      // remove noise
cv::Canny(img, workingMat, 100, 200);             // extract edges; Hough lines could be used instead since it runs Canny internally
After that, I suggest using HoughLines or HoughLinesP from the OpenCV library.
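For reference, a rough Python translation of that pipeline followed by HoughLinesP (a sketch; the filename and thresholds are placeholders to tune):
import cv2
import numpy as np

img = cv2.imread('fw1.jpeg', cv2.IMREAD_GRAYSCALE)
background = cv2.GaussianBlur(img, (101, 101), 31)            # estimate background light
img = cv2.subtract(img, (0.7 * background).astype(np.uint8))  # adjust light level
img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)       # use the whole range
img = cv2.medianBlur(img, 5)                                  # remove noise
edges = cv2.Canny(img, 100, 200)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100,
                        minLineLength=100, maxLineGap=10)
vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for line in lines:
        x1, y1, x2, y2 = line[0]
        cv2.line(vis, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite('houghlines_clean.jpg', vis)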

Related

Detecting the lines (structure edges) in a charcoal drawing image

I want to detect all the lines along the structures' edges, but I can't detect every one. The picture is really noisy; I tried denoising, blurring, and other preprocessing, then ran Canny edge detection, but it still misses lines.
Here is the original image:
I want to detect every edge of those structures. Does anyone know how to do that?
I got this with denoising:
My code:
import cv2
import numpy as np

# image loaded earlier with cv2.imread(...)
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
denoised = cv2.fastNlMeansDenoising(gray_image, None, 20)
blur_image = cv2.medianBlur(denoised, 5)
cv2.imshow('blur_image 2', blur_image)
edges = cv2.Canny(blur_image, 40, 255)  # was cv2.Canny(blur, ...): blur was undefined
cv2.imshow('canny_image', edges)
# Hough transform
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 50, minLineLength=50, maxLineGap=10)
Can anyone help me?

Removing glare with opencv and keeping edges removed from the glare

I have this image: https://imgur.com/9A7542w
I am trying to get the contours of the image, but as you can see here: https://imgur.com/VU7KqiS
where there is glare, the contours of some circles are not drawn.
I assume that if I remove the glare from the photo, Canny will draw the edges correctly?
I am new to OpenCV; I've read some posts on here and tried out a few techniques, but none of them worked.
Note: I am doing this in Python.
Could anyone help? Thanks a lot.
What I tried first:
import cv2 as cv
import numpy as np

img = cv.imread('Photos/board.jpg')
canny = cv.Canny(img, 125, 175)
contours, hierarchies = cv.findContours(canny, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_NONE)
blank = np.zeros(img.shape, dtype='uint8')
cv.drawContours(blank, contours, -1, (0, 0, 255), 1)
cv.imshow('Contours drawn', blank)
cv.waitKey(0)
Results: https://imgur.com/yLVFCh2
My second attempt is a bit better, but unwanted artifacts appear in the result:
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)  # adaptiveThreshold needs a grayscale input
adaptive_thresh = cv.adaptiveThreshold(gray, 255, cv.ADAPTIVE_THRESH_MEAN_C, cv.THRESH_BINARY, 13, 2)
cv.imshow('Adaptive thresholding', adaptive_thresh)
canny = cv.Canny(adaptive_thresh, 125, 175)
cv.imshow('Canny edges', canny)
Results (there are so many white pixels appearing in the photo): https://imgur.com/mqljl1m
I think the easiest way to do this is to change the second threshold of your Canny detection, like this:
canny = cv.Canny(img, 25, 175)
With the lower threshold (second argument) set lower, you can avoid this glare effect. More info here.
You can also work in HSV space, which is more comfortable when you want to extract information from images with effects like this. More info about HSV; Fig. 3 a) speaks for itself.
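For example, a minimal sketch of running Canny on the V (value) channel in HSV space (the filename is a placeholder; the thresholds are the same as above):
import cv2 as cv

img = cv.imread('board.jpg')
hsv = cv.cvtColor(img, cv.COLOR_BGR2HSV)
h, s, v = cv.split(hsv)       # the value channel is less affected by color casts
canny = cv.Canny(v, 25, 175)  # run edge detection on the V channel only
cv.imshow('Canny on V channel', canny)
cv.waitKey(0)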
Here is the full code; you had some errors in yours (and maybe you are using an old OpenCV release):
import cv2 as cv
import numpy as np

img = cv.imread(r'C:\Users\MyUser\Desktop\board.jpeg')
canny = cv.Canny(img, 25, 175)
# OpenCV 3.x signature; on OpenCV 4.x use: contours, hierarchy = cv.findContours(...)
img2, contours, hierarchy = cv.findContours(canny, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_NONE)
blank = np.zeros(img.shape, dtype='uint8')
cv.drawContours(blank, contours, -1, (0, 0, 255), 1)
cv.imshow('Contours drawn', blank)
cv.waitKey(0)
EDIT: I also want to point out that the extracted coordinates will be difficult to use as-is. You would be better off using circle detection and line detection to extract the coordinates of the board and the pucks.
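For the circle route, cv.HoughCircles is a reasonable starting point; here is a sketch (every parameter below is a guess to tune on the board image):
import cv2 as cv
import numpy as np

img = cv.imread('board.jpg')
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
gray = cv.medianBlur(gray, 5)  # HoughCircles is sensitive to noise
circles = cv.HoughCircles(gray, cv.HOUGH_GRADIENT, dp=1, minDist=20,
                          param1=100, param2=30, minRadius=5, maxRadius=40)
if circles is not None:
    for x, y, r in np.around(circles[0]):
        cv.circle(img, (int(x), int(y)), int(r), (0, 255, 0), 2)  # draw each detected puck
cv.imshow('detected circles', img)
cv.waitKey(0)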

How to draw outlines of objects on an image to a separate image

I am working on a puzzle; my final task is to identify the edge type of each puzzle piece.
As shown in the image above, I have managed to rotate and crop out every edge of the piece at the same angle. My next step is to separate the edge line into its own image, as shown in the image below,
then fill one side of the line with a color and process it to decide what type of edge it is.
For now I don't see a proper way to separate the edge line from the image.
My approach:
One option is to scan pixel by pixel and find the black pixels that have a non-black pixel next to them. That is code I can implement, but it feels like a primitive and time-consuming approach.
I would appreciate any help or ideas, or a completely different way to detect the hollows and humps.
Thanks in advance.
First convert your color image to grayscale. Then apply a threshold, say zero, to obtain a binary image. You may have to use morphological operations to further process the binary image if there are holes; a small sketch of that follows the code below. Then find the contours of this image and draw them to a new image.
A simple example is given below, using OpenCV 4.0.1 in Python 2.7.
import cv2
import numpy as np

bgr = cv2.imread('puzzle.png')
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
_, roi = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
cv2.imwrite('/home/dhanushka/stack/roi.png', roi)
# cont[0] holds the contours (OpenCV 4.x returns (contours, hierarchy))
cont = cv2.findContours(roi, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
output = np.zeros(gray.shape, dtype=np.uint8)
cv2.drawContours(output, cont[0], -1, (255, 255, 255))
# remove the image boundary from the drawn contours
boundary = 255*np.ones(gray.shape, dtype=np.uint8)
boundary[1:boundary.shape[0]-1, 1:boundary.shape[1]-1] = 0
toremove = output & boundary
output = output ^ toremove
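If the thresholded image does contain holes, a morphological closing between the threshold and findContours steps is the usual fix; a minimal sketch continuing the snippet above (the kernel size is a guess):
kernel = np.ones((5, 5), np.uint8)                    # adjust the kernel to the hole size
roi = cv2.morphologyEx(roi, cv2.MORPH_CLOSE, kernel)  # fill small holes in the binary image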

Finding bright spots in an image using OpenCV

I want to find the bright spots in the above image and tag them with some symbol. I tried the Hough circle transform algorithm that OpenCV provides, but it throws an assertion error when I run the code. I also tried the Canny edge detection algorithm, also provided in OpenCV, and it throws an assertion error too. I would like to know whether there is some method to get this done, or how to prevent those errors.
I am new to OpenCV and any help would be really appreciated.
P.S. - I can also use scikit-image if necessary, so if this can be done with scikit-image, please tell me how.
Below is my preprocessing code:
import cv2
import numpy as np

image = cv2.imread("image1.png")
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# np.where produces float64; convert to uint8 or cv2.Laplacian raises an assertion error
binary_image = np.where(gray_image > np.mean(gray_image), 255, 0).astype(np.uint8)
binary_image = cv2.Laplacian(binary_image, cv2.CV_8UC1)
If you are just going to work with simple images like your example, where you have a black background, you can use the same basic preprocessing/thresholding and then find connected components. Use the example code below to draw a circle inside every blob in the image.
import cv2
import numpy as np

image = cv2.imread("image1.png")
# constants
BINARY_THRESHOLD = 20
CONNECTIVITY = 4
DRAW_CIRCLE_RADIUS = 4
# convert to gray
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# extract edges
binary_image = cv2.Laplacian(gray_image, cv2.CV_8UC1)
# fill in the holes between edges with dilation
dilated_image = cv2.dilate(binary_image, np.ones((5, 5)))
# threshold the black/non-black areas
_, thresh = cv2.threshold(dilated_image, BINARY_THRESHOLD, 255, cv2.THRESH_BINARY)
# find connected components
components = cv2.connectedComponentsWithStats(thresh, CONNECTIVITY, cv2.CV_32S)
# draw circles around the center of each component
# (see connectedComponentsWithStats docs for the attributes of the components tuple)
centers = components[3]
for center in centers:
    cv2.circle(thresh, (int(center[0]), int(center[1])), DRAW_CIRCLE_RADIUS, (255), thickness=-1)
cv2.imwrite("res.png", thresh)
cv2.imshow("result", thresh)
cv2.waitKey(0)
Here is the resulting image:
Edit: connectedComponentsWithStats takes a binary image as input and returns the groups of connected pixels in that image. If you wanted to implement that function yourself, a naive way would be (a sketch follows these steps):
1- Scan image pixels from top left to bottom right until you encounter a non-zero pixel that does not yet have a label (id).
2- When you encounter such a pixel, search all of its neighbours recursively (with 4-connectivity you check UP, LEFT, DOWN, RIGHT; with 8-connectivity you also check the diagonals) until you finish that region, assigning each pixel the label. Then increase your label counter.
3- Continue scanning from where you left off.
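A compact sketch of those steps (using an iterative flood fill rather than recursion, to avoid Python's recursion limit):
import numpy as np

def label_components(binary, connectivity=4):
    # naive connected-component labelling of a binary numpy array
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if connectivity == 8:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    labels = np.zeros(binary.shape, dtype=np.int32)
    current = 0
    for y in range(binary.shape[0]):
        for x in range(binary.shape[1]):
            if binary[y, x] and labels[y, x] == 0:  # unlabelled foreground pixel
                current += 1
                stack = [(y, x)]
                labels[y, x] = current
                while stack:  # flood-fill this region
                    cy, cx = stack.pop()
                    for dy, dx in offsets:
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            stack.append((ny, nx))
    return labels, current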

Remove features from binarized image

I wrote a little script to transform pictures of chalkboards into a form that I can print off and mark up.
I take an image like this:
Auto-crop it, and binarize it. Here's the output of the script:
I would like to remove the largest connected black regions from the image. Is there a simple way to do this?
I was thinking of eroding the image to eliminate the text and then subtracting the eroded image from the original binarized image, but I can't help thinking that there's a more appropriate method.
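Something like this (an untested sketch; the filename and kernel size are placeholders) is roughly what I have in mind:
import cv2
import numpy as np

binarized = cv2.imread('binarized.png', cv2.IMREAD_GRAYSCALE)  # black marks on white
inv = cv2.bitwise_not(binarized)       # marks become white for morphology
kernel = np.ones((7, 7), np.uint8)
blobs = cv2.erode(inv, kernel)         # thin text strokes vanish, big blobs survive
blobs = cv2.dilate(blobs, kernel)      # grow the surviving blobs back to size
cleaned = cv2.bitwise_not(cv2.subtract(inv, blobs))  # blobs removed, text kept
cv2.imwrite('cleaned.png', cleaned)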
Sure, you can just get the connected components (of a certain size) with findContours or floodFill and erase them, leaving some smear; a minimal sketch of that route is below.
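For instance, using connectedComponentsWithStats, which amounts to the same thing here (the area threshold and filename are guesses, and the binarized image is assumed to be black marks on white):
import cv2
import numpy as np

binarized = cv2.imread('binarized.png', cv2.IMREAD_GRAYSCALE)
inv = cv2.bitwise_not(binarized)  # components to erase must be white for labelling
n, labels, stats, centroids = cv2.connectedComponentsWithStats(inv, connectivity=8)
for i in range(1, n):  # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] > 5000:  # erase components above an area threshold
        inv[labels == i] = 0
cv2.imwrite('erased.png', cv2.bitwise_not(inv))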
However, if you want to do it right, you should think about why you have the black area in the first place: you did not use adaptive (locally adaptive) thresholding, and that made your output sensitive to shading. Try not to get the black region in the first place by running something like this:
Mat img = imread("desk.jpg", 0);
Mat img2, dst;
pyrDown(img, img2);
adaptiveThreshold(255 - img2, dst, 255, ADAPTIVE_THRESH_MEAN_C, THRESH_BINARY, 9, 10);
imwrite("adaptiveT.png", dst);
imshow("dst", dst);
waitKey(-1);
In the future, you may want to read about adaptive thresholds and how to sample colors locally. I personally found it useful to sample binary colors orthogonally to the image gradient (that is, on both sides of it). This way the samples of white and black are of equal size, which matters because there is typically more background color, and that biases the estimation. Using SWT and MSER may give you even more ideas about text segmentation.
I tried this:
import numpy as np
import cv2

im = cv2.imread('image.png')
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
grayout = 255 * np.ones((im.shape[0], im.shape[1], 1), np.uint8)
blur = cv2.GaussianBlur(gray, (5, 5), 1)
thresh = cv2.adaptiveThreshold(blur, 255, 1, 1, 11, 2)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
wcnt = 0
for item in contours:
    area = cv2.contourArea(item)
    print(wcnt, area)
    [x, y, w, h] = cv2.boundingRect(item)
    if area > 10 and area < 200:
        # count the black pixels inside the bounding box
        cntd = 0
        for i in range(x, x + w):
            for j in range(y, y + h):
                if gray[j, i] == 0:
                    cntd = cntd + 1
        density = cntd / float(h * w)
        # copy only sparse regions (likely writing) to the output
        if density < 0.5:
            for i in range(x, x + w):
                for j in range(y, y + h):
                    grayout[j, i] = gray[j, i]
            wcnt = wcnt + 1
cv2.imwrite('result.png', grayout)
You have to balance two things: removing the black spots while not losing the content on the board. The output I got is this:
Here is a Python/numpy implementation (using my own mahotas package) of the method from the top answer (almost the same, I think):
import mahotas as mh
import numpy as np
Imported mahotas & numpy with standard abbreviations
im = mh.imread('7Esco.jpg', as_grey=1)
Load the image & convert to gray
im2 = im[::2,::2]
im2 = mh.gaussian_filter(im2, 1.4)
Downsample and blur (for speed and noise removal).
im2 = 255 - im2
Invert the image
mean_filtered = mh.convolve(im2.astype(float), np.ones((9,9))/81.)
Mean filtering is implemented "by hand" with a convolution.
imc = im2 > mean_filtered - 4
You might need to adjust the number 4 here, but it worked well for this image.
mh.imsave('binarized.png', (imc*255).astype(np.uint8))
Convert to 8 bits and save in PNG format.
