Select rectangle interactively and obtain corner coords - OpenCV / Python

I am trying to implement GrabCut segmentation with the functions available in OpenCV, following the implementation from here.
The code is the same as shown here:
import numpy as np
import cv2
from matplotlib import pyplot as plt

img = cv2.imread('messi5.jpg')
mask = np.zeros(img.shape[:2], np.uint8)
# Temporary arrays used internally by the algorithm
bgdModel = np.zeros((1, 65), np.float64)
fgdModel = np.zeros((1, 65), np.float64)
# Hard-coded rectangle (x, y, w, h) around the foreground
rect = (50, 50, 450, 290)
cv2.grabCut(img, mask, rect, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_RECT)
# Zero out pixels marked as background (0) or probable background (2)
mask2 = np.where((mask == 2) | (mask == 0), 0, 1).astype('uint8')
img = img * mask2[:, :, np.newaxis]
plt.imshow(img), plt.colorbar(), plt.show()
However, as you can see in the code, the rectangle coordinates are hard-coded. I want to select a rectangle interactively, by displaying the input image, and use the corner coords of that rectangle for further processing. I am pretty new to both Python and OpenCV. I am trying here, but your help is much needed. Thanks in advance.
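One possible way to do that interactively is cv2.selectROI, which displays the image, lets you drag a rectangle with the mouse, and returns (x, y, w, h) - the same format grabCut expects for its rect argument. A minimal sketch, assuming OpenCV 3.2 or later:
import cv2

img = cv2.imread('messi5.jpg')
# Drag a rectangle in the window, then press ENTER or SPACE to confirm
# (a cancelled selection returns all zeros).
rect = cv2.selectROI('Select region', img, showCrosshair=False, fromCenter=False)
cv2.destroyWindow('Select region')
print(rect)  # (x, y, w, h) - pass this to cv2.grabCut
The corner coordinates are then (x, y) and (x + w, y + h).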

Related

Py: OpenCV: How to detect if an image matches another image / a color in the bottom-left corner?

I am trying to use OpenCV to detect whether the green-olive color/pixels (the #opentowork hashtag badge) are present in the bottom-left corner of LinkedIn account thumbnails.
Example image:
Piece to find:
Example without #opentowork:
I have tried template matching, but the result is really unreliable:
import cv2
import numpy as np
import matplotlib.pyplot as plt
import imutils
original = cv2.imread('orig.jpg', 0) # Piece to find
train_img = cv2.imread('1516535608688.jpg', 0) # Example image
print(cv2.matchTemplate(train_img, original, cv2.TM_CCOEFF_NORMED).max())
Then I googled and found out how to detect coordinates:
img1 = imutils.resize(train_img)  # note: with no width/height, imutils.resize returns the image unchanged
img2 = img1[197:373, 181:300]  # ROI of the image
indices = np.where(img2 != 0)  # coordinates of non-black pixels
coordinates = zip(indices[0], indices[1])
This is my first (and, I expect, last) time using OpenCV, and I have no idea how to proceed.
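Since the badge is a fairly uniform green-olive color, a simpler alternative to template matching may be to threshold the bottom-left corner in HSV space and count matching pixels. Below is a minimal sketch; the corner proportions, the HSV bounds, and the 1% pixel cutoff are all assumptions that would need tuning against real thumbnails:
import cv2
import numpy as np

img = cv2.imread('1516535608688.jpg')  # read in color this time, not grayscale
h, w = img.shape[:2]
corner = img[int(h * 0.6):, :int(w * 0.4)]  # bottom-left region (assumed size)
hsv = cv2.cvtColor(corner, cv2.COLOR_BGR2HSV)
# Assumed HSV range for the green-olive badge color - tune on real samples
mask = cv2.inRange(hsv, (35, 60, 60), (85, 255, 255))
ratio = np.count_nonzero(mask) / mask.size
print('#opentowork' if ratio > 0.01 else 'no badge')  # 1% cutoff is a guess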

Image segmentation to find cells in biological images

I have a bunch of images of cells and I want to extract where the cells are. I'm currently using circular Hough transforms and it works alright, but it screws up regularly. I'm wondering if people have any pointers. Sorry this isn't a question specifically about software - it's about how to get better performance on this image segmentation problem.
I've tried other stuff in skimage with limited success, like the contour finding, edge detection and active contours. Nothing worked well out of the box, although it could just be that I didn't fiddle with the parameters correctly. I haven't done much image segmentation, and I don't really know how this stuff works or what the best ways are to jury-rig it.
Here is the code I am currently using; it takes a grayscale image as a numpy array and looks for the cell as a circle:
import cv2
import numpy as np

smallest_dim = min(img.shape)  # note: currently unused
min_rad = int(img.shape[0] * 0.05)
max_rad = int(img.shape[0] * 0.5)
circles = cv2.HoughCircles((img * 255).astype(np.uint8), cv2.HOUGH_GRADIENT, 1, 50,
                           param1=50, param2=30, minRadius=min_rad, maxRadius=max_rad)
circles = np.uint16(np.around(circles))
x, y, r = circles[0, :][:1][0]  # take the first detected circle
Here is an example where the code found the wrong circle as the boundary of the cell. It seems like it got confused by the gunk that is surrounding the cell:
I think one issue may be the plotting of the circle (the coordinates may be wrong).
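A quick way to check that is to draw the detected circle back onto the image. A minimal sketch, reusing img, x, y, r from the question's code (they are numpy uint16 values, so cast them to int for cv2.circle):
import cv2

vis = cv2.cvtColor((img * 255).astype('uint8'), cv2.COLOR_GRAY2BGR)
cv2.circle(vis, (int(x), int(y)), int(r), (0, 0, 255), 2)  # boundary in red
cv2.circle(vis, (int(x), int(y)), 2, (0, 255, 0), 3)       # center in green
cv2.imshow('detected circle', vis)
cv2.waitKey(0)
cv2.destroyAllWindows()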
Also, like @Nicos mentioned, traditional image processing involves a lot of tweaking to make specific cases work (with more recent machine learning approaches, the tweaking instead goes into keeping models from overfitting). My attempt with skimage is shown below. The radius range, number of circles, and edge-detection image all need to be tweaked, given the potential variation among and within images. Within this image there are, at least to me, three circles with varying gradient; from the Canny edge-detection image you can sort of see that we are getting more than three circles. Furthermore, the illumination seems to vary at different locations (due to this being an SEM image).
import matplotlib.pyplot as plt
import numpy as np
import imageio
from skimage import data, color
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.feature import canny
from skimage.draw import circle_perimeter
from skimage.util import img_as_ubyte

!wget https://i.stack.imgur.com/2tsWw.jpg  # notebook shell command
# rgb to gray https://stackoverflow.com/a/51571053/868736
im = imageio.imread('2tsWw.jpg')
gray = lambda rgb: np.dot(rgb[..., :3], [0.299, 0.587, 0.114])
gray = gray(im)
image = np.array(gray[60:220, 210:450])
plt.imshow(image, cmap='gray')
edges = canny(image, sigma=3)
plt.imshow(edges, cmap='gray')
overlayimage = np.copy(image)
# https://scikit-image.org/docs/dev/auto_examples/edges/plot_circular_elliptical_hough_transform.html
hough_radii = np.arange(30, 60, 2)
hough_res = hough_circle(edges, hough_radii)
# Select the most prominent X circles
x = 1
accums, cx, cy, radii = hough_circle_peaks(hough_res, hough_radii,
                                           total_num_peaks=x)
# Draw them
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 4))
#image = color.gray2rgb(image)
for center_y, center_x, radius in zip(cy, cx, radii):
    circy, circx = circle_perimeter(center_y, center_x, radius)
    overlayimage[circy, circx] = 255
print(radii)
ax.imshow(overlayimage, cmap='gray')
plt.show()

cv2 threshold does not work correctly on second image

I am new to Python and I was playing around with background subtraction to visualize changes in pre- and post-change images.
I wrote a short and simple script using the cv2 library:
#!/usr/bin/env python
import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt

# GRAYSCALE ONLY FOR TESTING
# Test with person appearing in image
img1 = cv.imread("images/1.jpg", 0)
img2 = cv.imread("images/2.jpg", 0)
img3 = cv.subtract(img1, img2)
ret,thresh1 = cv.threshold(img3,90,255,cv.THRESH_BINARY)

# Test with satellite image of Japan landslide changes after earthquake
jl_before = cv.imread("images/japan_earthquake_before.jpg",0)
jl_after = cv.imread("images/japan_earthquake_after.jpg",0)
jl_subtraction = cv.subtract(jl_before, jl_after)
ret,thresh2 = cv.threshold(img3,20,255,cv.THRESH_BINARY)

images = [img1, img2, thresh1, jl_before, jl_after, thresh2]
titles = ["Image1", "Image2", "Changes", "Japan_Before", "Japan_After", "Japan_Changes"]
for i in range(6):
    plt.subplot(2,3,i+1), plt.imshow(images[i],'gray')
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()
The result looks like this:
Why is the mask with changes from the first set of images present in the mask of the second set of images?
I used different variables, thresh1 and thresh2.
Any help would be greatly appreciated as I can't seem to find the problem.
Because you missed a change when copy-pasting:
ret,thresh2 = cv.threshold(img3,20,255,cv.THRESH_BINARY)
                           ^^^^
The second threshold still operates on img3 (the first pair's difference image) instead of jl_subtraction, so thresh2 shows the first set's changes.
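The corrected line thresholds the second pair's difference image instead:
ret,thresh2 = cv.threshold(jl_subtraction,20,255,cv.THRESH_BINARY)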

Draw circle around imperfect circular objects

I have this image of an eye where I want to get the center of the pupil:
Original Image
I applied an adaptive threshold as well as a Laplacian to the image using this code:
import cv2
import numpy as np
from matplotlib import pyplot as plt

# Use raw strings so backslashes in Windows paths are not treated as escapes
img = cv2.imread(r'C:\Users\User\Documents\module4\input\left.jpg', 0)
image = cv2.medianBlur(img, 5)
th = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                           cv2.THRESH_BINARY, 11, 2)
laplacian = cv2.Laplacian(th, cv2.CV_64F)
cv2.imshow('output', laplacian)
cv2.imwrite(r'C:\Users\User\Documents\module4\output\output.jpg', laplacian)
cv2.waitKey(0)
cv2.destroyAllWindows()
and the resulting image looks like this: Resulting image by applying adaptive threshold
I want to draw a circle around the smaller inner circle and get its center. I've tried using contours and the circular Hough transform, but they do not correctly detect any circles in the image.
Here is my code for Circular Hough Transform:
import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread(r'C:\Users\User\Documents\module4\output\output.jpg', 0)
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=50, param2=30, minRadius=0, maxRadius=0)
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    # draw the outer circle
    cv2.circle(img, (i[0], i[1]), i[2], (255, 255, 0), 2)
    # draw the center of the circle
    cv2.circle(img, (i[0], i[1]), 2, (255, 0, 255), 3)
cv2.imshow('detected circles', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
And here is the code for applying contour:
import cv2
import numpy as np

img = cv2.imread(r'C:\Users\User\Documents\module4\output\output.jpg', 0)
# 1 = cv2.RETR_LIST, 2 = cv2.CHAIN_APPROX_SIMPLE
_, contours, hierarchy = cv2.findContours(img, 1, 2)
cnt = contours[0]
(x, y), radius = cv2.minEnclosingCircle(cnt)
center = (int(x), int(y))
radius = int(radius)
img = cv2.circle(img, center, radius, (0, 255, 255), 2)
cv2.imshow('contour', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
The resulting image of this code looks exactly like the image where I applied the adaptive threshold. I would really appreciate it if anyone can help me solve my problem; I've been stuck on this for a while now. Also, if any of you can suggest a better way to detect the center of the pupil besides this method, I would really appreciate that too.
Try applying edge detection instead of thresholding after filtering the original image, and then apply the Hough circle transform.
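A minimal sketch of that suggestion, reusing the question's input path; the Canny thresholds and the Hough radius bounds below are assumed starting points that will need tuning:
import cv2
import numpy as np

img = cv2.imread(r'C:\Users\User\Documents\module4\input\left.jpg', 0)
img = cv2.medianBlur(img, 5)
# Edge detection instead of thresholding (50/150 are assumed thresholds)
edges = cv2.Canny(img, 50, 150)
circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=50, param2=30, minRadius=5, maxRadius=80)
if circles is not None:
    for x, y, r in np.uint16(np.around(circles))[0, :]:
        cv2.circle(img, (int(x), int(y)), int(r), 255, 2)
cv2.imshow('circles on edges', img)
cv2.waitKey(0)
cv2.destroyAllWindows()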
My thought would be to use the Hough transform like you're doing. But another method might be template matching, like this. It assumes you know the approximate radius of the pupil in the image, so you can try to build a template.
import numpy as np
import matplotlib.pyplot as plt
import skimage.io
import skimage.draw
import skimage.feature

img = skimage.io.imread('Wjioe.jpg')
# just use grayscale, but you could make a separate template for each r,g,b channel
img = np.mean(img, axis=2)
(M, N) = img.shape
mm = M - 20
nn = N - 20
template = np.zeros([mm, nn])

## Create template ##
# darkest inner circle (pupil)
# note: skimage.draw.circle was renamed skimage.draw.disk in newer scikit-image
(rr, cc) = skimage.draw.circle(mm/2, nn/2, 4.5, shape=template.shape)
template[rr, cc] = -2
# iris (circle surrounding pupil)
(rr, cc) = skimage.draw.circle(mm/2, nn/2, 8, shape=template.shape)
template[rr, cc] = -1
# optional - pupil reflective spot (if centered)
(rr, cc) = skimage.draw.circle(mm/2, nn/2, 1.5, shape=template.shape)
template[rr, cc] = 1
plt.imshow(template)

normccf = skimage.feature.match_template(img, template, pad_input=True)
# center pixel
(i, j) = np.unravel_index(np.argmax(normccf), normccf.shape)
plt.imshow(img)
plt.plot(j, i, 'r*')
You're defining a 3-channel color for a grayscale image. Based on my test, it will only read the first value in that tuple. Because the first value of the color in your middle code block is 255, it draws a fully white circle; and because the first value of the color in your last code block is 0, it draws a fully black circle, which you can't see.
Just change your color values to a single-channel color (an int between 0 and 255) and you'll be fine.
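Alternatively, if you do want colored circles, convert the grayscale image to three channels before drawing. A small sketch, reusing center and radius from the contour code above:
# Give the image 3 channels so (B, G, R) colors render as expected
img_color = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cv2.circle(img_color, center, radius, (0, 255, 255), 2)  # yellow in BGR
cv2.imshow('contour', img_color)
cv2.waitKey(0)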

How to separate the dart board from the background with OpenCV?

I use Python and OpenCV. I would like to separate the dart board from the background. I tried findContours and Canny edge detection but I couldn't make it work.
Example image:
You can use the grab-cut algorithm.
What you have to do is specify, as a rectangle, the area of the image which will act as the foreground. The algorithm will take a little bit of time and will produce your required image. The code here requires a little bit of tweaking though.
import numpy as np
import cv2

img = cv2.imread('a.jpg')  # a.jpg is your image
mask = np.zeros(img.shape[:2], np.uint8)
bgdModel = np.zeros((1, 65), np.float64)
fgdModel = np.zeros((1, 65), np.float64)
# Rectangle (x, y, w, h) marking the board as foreground
rect = (360, 85, 1670, 1900)
cv2.grabCut(img, mask, rect, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_RECT)
# Keep only sure/probable foreground pixels
mask2 = np.where((mask == 2) | (mask == 0), 0, 1).astype('uint8')
img = img * mask2[:, :, np.newaxis]
cv2.imshow('image', img)
cv2.waitKey(0)
The final result in the source will look better (after applying some masks), but as I said, you can modify the code according to your needs.
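As a sketch of that refinement step (it follows the mask-based second pass from the linked tutorial), you can mark sure-foreground and sure-background pixels in a hand-edited copy of the image and rerun GrabCut with cv2.GC_INIT_WITH_MASK. The file name newmask.png is a placeholder, and the snippet continues from mask, bgdModel, and fgdModel above, using the original (unmasked) img:
# newmask.png: hand-marked copy - white strokes = sure foreground,
# black strokes = sure background, untouched elsewhere
newmask = cv2.imread('newmask.png', 0)
mask[newmask == 0] = cv2.GC_BGD     # sure background
mask[newmask == 255] = cv2.GC_FGD   # sure foreground
cv2.grabCut(img, mask, None, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_MASK)
mask2 = np.where((mask == 2) | (mask == 0), 0, 1).astype('uint8')
result = img * mask2[:, :, np.newaxis]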
Sources -
http://docs.opencv.org/3.1.0/d8/d83/tutorial_py_grabcut.html#gsc.tab=0
