How to separate the dart board from the background with OpenCV?

I use Python and OpenCV. I would like to separate the dart board from the background. I tried findContours and Canny edge detection, but I couldn't make it work.
Example image:

You can use the GrabCut algorithm.
What you have to do is specify the area of the image that will act as the foreground, as a rectangle. The algorithm will take a little time to run and will output your required image. The code here requires a bit of tweaking, though.
import numpy as np
import cv2
#a is your image
img = cv2.imread('a.jpg')
mask = np.zeros(img.shape[:2],np.uint8)
bgdModel = np.zeros((1,65),np.float64)
fgdModel = np.zeros((1,65),np.float64)
rect = (360, 85, 1670, 1900)  # (x, y, w, h) of the rectangle enclosing the foreground
cv2.grabCut(img,mask,rect,bgdModel,fgdModel,5,cv2.GC_INIT_WITH_RECT)
mask2 = np.where((mask==2)|(mask==0),0,1).astype('uint8')
img = img*mask2[:,:,np.newaxis]
cv2.imshow('image',img)
cv2.waitKey(0)
The final result in the source will give you a better result (after applying some masks), but as I said, you can modify it according to your needs.
Source:
http://docs.opencv.org/3.1.0/d8/d83/tutorial_py_grabcut.html#gsc.tab=0


Custom ROI in OpenCV

I have a video stream where I detect people using OpenCV and Python.
My ROI is rectangular, but I would like to use a custom shape, as in the figure.
It seems this is a stationary camera. If so, you can hard-code the rectangular region of interest. You can then use a mask (created with, for instance, MS Paint) to black out everything outside of the custom shape.
Result:
Code:
import cv2
# load image
img = cv2.imread('image.jpg')
# load mask
mask = cv2.imread('roi_mask.png',0)
# create subimage
roi = img[120:350,150:580]
# mask roi
masked_roi = cv2.bitwise_and(roi,roi,mask=mask)
# display result
cv2.imshow('Roi',roi)
cv2.imshow('Mask',mask)
cv2.imshow('Result',masked_roi)
cv2.waitKey(0)
cv2.destroyAllWindows()
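If you would rather build the mask in code than in an image editor, here is a minimal sketch using cv2.fillPoly (the polygon corner points below are hypothetical; replace them with the corners of your own custom shape):
import cv2
import numpy as np
img = cv2.imread('image.jpg')
roi = img[120:350,150:580]
# draw the custom shape as a white polygon on a black canvas
mask = np.zeros(roi.shape[:2], np.uint8)
points = np.array([[10,100],[200,10],[420,100],[420,220],[10,220]])  # hypothetical corners
cv2.fillPoly(mask, [points], 255)
masked_roi = cv2.bitwise_and(roi,roi,mask=mask)
cv2.imshow('Result',masked_roi)
cv2.waitKey(0)
cv2.destroyAllWindows()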

How to extract a signature from an image (Python script)?

Here is a sample image of a signature:
How can I get the signature from this image without the background, so that I can paste it over a user image?
What if the background is not white?
I have tried this; how can I customize it for different background colors?
Just started with Python myself but thought I'd have a go at a solution - came up with this:
#!/usr/bin/python2
import cv2
import numpy as np
file_name = "/tmp/signature.jpg" # your signature image...
image = cv2.imread(file_name, 1)
image = cv2.cvtColor(image, cv2.COLOR_BGR2BGRA)
# note: cv2.imwrite expects BGRA channel order, so convert BGR2BGRA; all-white
# pixels [255,255,255,255] are then set to the transparent value [0,0,0,0]
image[np.all(image == [255, 255, 255, 255], axis=2)] = [0, 0, 0, 0]
cv2.imwrite("/tmp/signature-transparent.png", image)
This script will grab your signature.jpg, make a transparent background from all the white pixels it finds, then write the result to signature-transparent.png.
Looks like this:
However it's not exactly clean around the edges! Anyone out there who can sort that?
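One way to clean up those edges, sketched here as an assumption-based variation of the script above: instead of matching pure white exactly, treat every near-white pixel as background. The tolerance of 230 is a guess you would tune for your scan:
import cv2
import numpy as np
image = cv2.imread("/tmp/signature.jpg", 1)
image = cv2.cvtColor(image, cv2.COLOR_BGR2BGRA)
# any pixel whose B, G and R values are all above the tolerance counts as background
near_white = np.all(image[:, :, :3] > 230, axis=2)
image[near_white] = [0, 0, 0, 0]
cv2.imwrite("/tmp/signature-transparent.png", image)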
It is quite a process, mainly because a number of steps are needed to add an image on top of another image of a different size. I advise you to check out all the intermediate steps in the code below to understand what happens.
I used the HSV colorspace to separate the signature from the background; this is easy to adapt if the signature or background has other colors.
I have not found Python bindings for the copyTo() method used by @BahramdunAdil. You could use numpy.copyto() instead. For that I'll refer you to this answer.
I used a different technique: to add one image on top of another, first a subimage of the same size as the signature is created. The signature can be added to the subimage, which is then put back into the main image.
Alternatively, you can take the thresholded signature and use @renedv1's method to save an alpha image. Use the sign_masked image for that. Because of the HSV range you can create a cleaner result. (Note: account for the fact that sign_masked has a black background.)
Result:
Code:
import numpy as np
import cv2
# load image
sign = cv2.imread("sign.jpg")
bg_img = cv2.imread("green_area.jpg")
# Convert BGR to HSV
hsv = cv2.cvtColor(sign, cv2.COLOR_BGR2HSV)
# define range of HSV-color of the signature
lower_val = np.array([0,0,0])
upper_val = np.array([179,255,150])
# Threshold the HSV image to get a mask that holds the signature area
mask = cv2.inRange(hsv, lower_val, upper_val)
# create an opposite: a mask that holds the background area
mask_inv= cv2.bitwise_not(mask)
# create an image of the signature with background excluded
sign_masked = cv2.bitwise_and(sign,sign,mask=mask)
# get the dimensions of the signature
height, width = sign.shape[:2]
# create a subimage of the area where the signature needs to go
placeToPutSign = bg_img[0:height,0:width]
# exclude signature area
placeToPutSign_masked = cv2.bitwise_and(placeToPutSign, placeToPutSign, mask=mask_inv)
# add signature to subimage
placeToPutSign_joined = cv2.add(placeToPutSign_masked, sign_masked)
# put subimage over main image
bg_img[0:height,0:width] = placeToPutSign_joined
# display image
cv2.imshow("result", bg_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
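As a sketch of the alternative mentioned above (not part of the original answer): you can save the thresholded signature as a transparent PNG by using the HSV mask as the alpha channel. This reuses sign and mask from the snippet above; the output file name is made up:
# build an alpha image: mask is 255 on the signature and 0 on the background
b, g, r = cv2.split(sign)
sign_rgba = cv2.merge((b, g, r, mask))
cv2.imwrite("sign_alpha.png", sign_rgba)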
Consider the steps below. For example, pretend this is your user image:
Now follow these steps:
cv::namedWindow("result", cv::WINDOW_FREERATIO);
cv::Mat signatureImg = cv::imread(R"(izrMq.jpg)");
cv::Mat userImg = cv::imread(R"(user_image.jpg)");
// make a mask
cv::Mat mask;
cv::cvtColor(signatureImg, mask, cv::COLOR_BGR2GRAY);
cv::threshold(mask, mask, 150, 255, cv::THRESH_BINARY_INV);
// now copy
cv::Mat submat = userImg(cv::Rect(userImg.cols-signatureImg.cols, userImg.rows-signatureImg.rows, signatureImg.cols, signatureImg.rows));
signatureImg.copyTo(submat, mask);
cv::imshow("result", userImg);
cv::waitKey();
And this is the result:
Hope it helps!

Finding bright spots in an image using OpenCV

I want to find the bright spots in the above image and tag them using some symbol. For this I have tried the Hough circle transform algorithm that OpenCV already provides, but it gives an assertion error when I run the code. I also tried the Canny edge detection algorithm, which is also provided in OpenCV, but it too gives an assertion error. I would like to know if there is some method to get this done, or how I can prevent those error messages.
I am new to OpenCV and any help would be really appreciated.
P.S. - I can also use Scikit-image if necessary. So if this can be done using Scikit-image then please tell me how.
Below is my preprocessing code:
import cv2
import numpy as np
image = cv2.imread("image1.png")
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
binary_image = np.where(gray_image > np.mean(gray_image),1.0,0.0)
binary_image = cv2.Laplacian(binary_image, cv2.CV_8UC1)
If you are just going to work with simple images like your example, where you have a black background, you can use the same basic preprocessing/thresholding, then find connected components. Use this example code to draw a circle at the center of each bright spot in the image.
import cv2
import numpy as np
image = cv2.imread("image1.png")
# constants
BINARY_THRESHOLD = 20
CONNECTIVITY = 4
DRAW_CIRCLE_RADIUS = 4
# convert to gray
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# extract edges
binary_image = cv2.Laplacian(gray_image, cv2.CV_8UC1)
# fill in the holes between edges with dilation
dilated_image = cv2.dilate(binary_image, np.ones((5, 5)))
# threshold the black/ non-black areas
_, thresh = cv2.threshold(dilated_image, BINARY_THRESHOLD, 255, cv2.THRESH_BINARY)
# find connected components
components = cv2.connectedComponentsWithStats(thresh, CONNECTIVITY, cv2.CV_32S)
# draw circles around center of components
#see connectedComponentsWithStats function for attributes of components variable
centers = components[3]
for center in centers:
    cv2.circle(thresh, (int(center[0]), int(center[1])), DRAW_CIRCLE_RADIUS, (255), thickness=-1)
cv2.imwrite("res.png", thresh)
cv2.imshow("result", thresh)
cv2.waitKey(0)
Here is resulting image:
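Since the question mentions Scikit-image, here is a rough equivalent sketch using skimage.measure (my addition, not the original answer; it thresholds directly and skips the Laplacian/dilation steps, which is enough for simple bright-spots-on-black images):
import cv2
from skimage.measure import label, regionprops
gray = cv2.imread("image1.png", cv2.IMREAD_GRAYSCALE)
binary = gray > 20                       # same threshold value as above
labeled = label(binary, connectivity=1)  # connectivity=1 means 4-connectivity in 2D
for region in regionprops(labeled):
    cy, cx = region.centroid             # centroid is (row, col)
    cv2.circle(gray, (int(cx), int(cy)), 4, 255, thickness=-1)
cv2.imwrite("res_skimage.png", gray)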
Edit: connectedComponentsWithStats takes a binary image as input and returns the connected pixel groups in that image. If you would like to implement that function yourself, a naive way would be (see the sketch after this list):
1- Scan the image pixels from top left to bottom right until you encounter a non-zero pixel that does not have a label (id).
2- When you encounter a non-zero pixel, search all its neighbours recursively (with 4-connectivity you check UP, LEFT, DOWN and RIGHT; with 8-connectivity you also check the diagonals) until you finish that region. Assign each pixel a label. Increase your label counter.
3- Continue scanning from where you left off.
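A minimal sketch of that procedure (my own illustration; it uses an explicit BFS flood fill instead of literal recursion, to avoid Python's recursion limit on large regions):
import numpy as np
from collections import deque
def label_components(binary, connectivity=4):
    # naive connected-component labeling: scan, then flood-fill each new region
    h, w = binary.shape
    labels = np.zeros((h, w), np.int32)
    if connectivity == 4:
        neighbours = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:  # 8-connectivity also checks the diagonals
        neighbours = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    current = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:  # step 1: unlabeled non-zero pixel
                current += 1                        # step 2: new label, flood the region
                labels[y, x] = current
                queue = deque([(y, x)])
                while queue:
                    cy, cx = queue.popleft()
                    for dy, dx in neighbours:
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            queue.append((ny, nx))
                # step 3: the outer loops continue scanning where we left off
    return labels, current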

Select a rectangle interactively and obtain its corner coordinates in OpenCV

I am trying to implement GrabCut segmentation with the functions available in OpenCV and the implementation from here.
The code is the same as shown here:
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('messi5.jpg')
mask = np.zeros(img.shape[:2],np.uint8)
bgdModel = np.zeros((1,65),np.float64)
fgdModel = np.zeros((1,65),np.float64)
rect = (50,50,450,290)
cv2.grabCut(img,mask,rect,bgdModel,fgdModel,5,cv2.GC_INIT_WITH_RECT)
mask2 = np.where((mask==2)|(mask==0),0,1).astype('uint8')
img = img*mask2[:,:,np.newaxis]
plt.imshow(img),plt.colorbar(),plt.show()
However, as you can see in the code, the rectangle coordinates are hard-coded. I want to interactively select a rectangle by displaying the input image, and then use the corner coordinates of that rectangle for further processing. I am pretty new to both Python and OpenCV. I am trying here, but your help is much needed. Thanks in advance.
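One way to do this (a sketch, not from the original thread; it assumes OpenCV 3.x or newer, which provides cv2.selectROI) is to let OpenCV's built-in selector return the (x, y, w, h) rectangle and feed it straight into grabCut:
import numpy as np
import cv2
img = cv2.imread('messi5.jpg')
# drag a rectangle in the window, then press ENTER or SPACE to confirm it
rect = cv2.selectROI('select foreground', img, showCrosshair=False, fromCenter=False)
cv2.destroyWindow('select foreground')
mask = np.zeros(img.shape[:2],np.uint8)
bgdModel = np.zeros((1,65),np.float64)
fgdModel = np.zeros((1,65),np.float64)
cv2.grabCut(img,mask,rect,bgdModel,fgdModel,5,cv2.GC_INIT_WITH_RECT)
mask2 = np.where((mask==2)|(mask==0),0,1).astype('uint8')
cv2.imshow('result', img*mask2[:,:,np.newaxis])
cv2.waitKey(0)
cv2.destroyAllWindows()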

Remove features from binarized image

I wrote a little script to transform pictures of chalkboards into a form that I can print off and mark up.
I take an image like this:
Auto-crop it, and binarize it. Here's the output of the script:
I would like to remove the largest connected black regions from the image. Is there a simple way to do this?
I was thinking of eroding the image to eliminate the text and then subtracting the eroded image from the original binarized image, but I can't help thinking that there's a more appropriate method.
Sure, you can just get the connected components (of a certain size) with findContours or floodFill and erase them, leaving some smear; see the sketch below. However, if you would like to do it right, you should think about why you have the black area in the first place.
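A minimal sketch of that quick fix (my illustration; the file name and MAX_AREA value are assumptions to tune). It labels the black regions by inverting the binary image, then erases every component larger than the threshold:
import cv2
import numpy as np
binary = cv2.imread("binarized.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
inverted = cv2.bitwise_not(binary)  # black regions become white components
n, labels, stats, _ = cv2.connectedComponentsWithStats(inverted, 8, cv2.CV_32S)
MAX_AREA = 5000  # assumption: anything larger is a shading blob, not text
for i in range(1, n):  # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] > MAX_AREA:
        binary[labels == i] = 255  # erase by painting white
cv2.imwrite("cleaned.png", binary)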
You did not use adaptive thresholding (locally adaptive), and this made your output sensitive to shading. Try not to get the black region in the first place, by running something like this:
Mat img = imread("desk.jpg", 0);
Mat img2, dst;
pyrDown(img, img2);
adaptiveThreshold(255-img2, dst, 255, ADAPTIVE_THRESH_MEAN_C,
                  THRESH_BINARY, 9, 10);
imwrite("adaptiveT.png", dst);
imshow("dst", dst);
waitKey(-1);
In the future, you may want to read about adaptive thresholds and how to sample colors locally. I personally found it useful to sample binary colors orthogonally to the image gradient (that is, on both sides of it). This way the samples of white and black are of equal size, which is a big deal, since typically there is more background color, which biases the estimation. Using SWT and MSER may give you even more ideas about text segmentation.
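Since the question is about a Python script, here is a sketch of the same steps through the Python bindings (a direct port of the C++ snippet above, with the same parameters):
import cv2
img = cv2.imread("desk.jpg", 0)  # read as grayscale
img2 = cv2.pyrDown(img)          # downsample once, for speed
dst = cv2.adaptiveThreshold(255 - img2, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                            cv2.THRESH_BINARY, 9, 10)
cv2.imwrite("adaptiveT.png", dst)
cv2.imshow("dst", dst)
cv2.waitKey(0)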
I tried this:
import numpy as np
import cv2
im = cv2.imread('image.png')
gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
grayout = 255*np.ones((im.shape[0],im.shape[1],1), np.uint8)
blur = cv2.GaussianBlur(gray,(5,5),1)
thresh = cv2.adaptiveThreshold(blur,255,1,1,11,2)
contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
wcnt = 0
for item in contours:
    area = cv2.contourArea(item)
    print(wcnt, area)
    [x, y, w, h] = cv2.boundingRect(item)
    if area > 10 and area < 200:
        roi = gray[y:y+h, x:x+w]
        cntd = 0
        # count the black pixels inside the bounding box
        for i in range(x, x+w):
            for j in range(y, y+h):
                if gray[j, i] == 0:
                    cntd = cntd + 1
        density = cntd / float(h*w)
        # keep only sparse (low-density) regions, i.e. text-like strokes
        if density < 0.5:
            for i in range(x, x+w):
                for j in range(y, y+h):
                    grayout[j, i] = gray[j, i]
        wcnt = wcnt + 1
cv2.imwrite('result.png',grayout)
You have to balance two things: removing the black spots, while not losing the contents of what is on the board. The output I got is this:
Here is a Python/numpy implementation (using my own mahotas package) of the method from the top answer (almost the same, I think):
import mahotas as mh
import numpy as np
Import mahotas & numpy with their standard abbreviations.
im = mh.imread('7Esco.jpg', as_grey=1)
Load the image & convert to gray
im2 = im[::2,::2]
im2 = mh.gaussian_filter(im2, 1.4)
Downsample and blur (for speed and noise removal).
im2 = 255 - im2
Invert the image
mean_filtered = mh.convolve(im2.astype(float), np.ones((9,9))/81.)
Mean filtering is implemented "by hand" with a convolution.
imc = im2 > mean_filtered - 4
You might need to adjust the number 4 here, but it worked well for this image.
mh.imsave('binarized.png', (imc*255).astype(np.uint8))
Convert to 8 bits and save in PNG format.
