Image rotation angle OpenCV Python

I am developing a plate detection algorithm.
Plate localization is done, and now a new problem has appeared.
I want to detect the rotation angle of the plate image and rotate it if necessary.
But how can I detect this?
I tried the cv2.HoughLinesP function, but the results are as in the attached image:
http://postimg.org/image/vis8errzn/
import cv2
import math

y = area.shape[0]
x = area.shape[1]

# If necessary, rotate the image by the angle detected with the Hough transform
gray = cv2.cvtColor(area, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150, apertureSize=3)
minLineLength = 10
maxLineGap = 30
lines = cv2.HoughLinesP(edges, 1, math.pi / 180, 100,
                        minLineLength=minLineLength, maxLineGap=maxLineGap)
if lines is not None:
    for x1, y1, x2, y2 in lines[0]:
        cv2.line(gray, (x1, y1), (x2, y2), (0, 255, 0), 2)
        a = ((y2 - y1) * 1.0) / ((x2 - x1) * 1.0)
        print 'a = ', a
print 'lines = ', lines
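Not from the original post, but here is a minimal sketch of how the detected segments could be turned into a deskew, continuing the snippet above (the choice of segment and the sign convention of the angle are assumptions that may need adjusting for your images):

# Pick the longest detected segment as the plate's dominant edge
if lines is not None and len(lines[0]):
    x1, y1, x2, y2 = max(lines[0],
                         key=lambda s: (s[2] - s[0]) ** 2 + (s[3] - s[1]) ** 2)
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
    h, w = area.shape[:2]
    # Rotate the plate crop so that the dominant edge becomes horizontal
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    deskewed = cv2.warpAffine(area, M, (w, h))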

I am not positive what kind of "plate" you are referring to, but I assume it is some planar object with notable features. In any case, here is an excellent tutorial by the folks at OpenCV explaining how to do what I believe your question is asking:
http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html
http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_feature_homography/py_feature_homography.html
The basic rundown of the procedure is:
Given the image files for the plate in question (in the orientation you want to consider)
Detect feature points in both images
Use a descriptor on those feature points to give them "meaning"
Match the descriptors between the two images
Calculate the homography between the two images (giving you the rotation matrix you are looking for)
All of this can be done with standard OpenCV library functions.
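For reference, a minimal sketch of those steps using ORB features and a brute-force matcher. The file names plate_template.png and plate_detected.png are placeholders, and the number of matches and the RANSAC threshold are assumptions that would need tuning:

import cv2
import numpy as np

# Hypothetical file names: the reference plate and the localized plate crop
template = cv2.imread('plate_template.png', cv2.IMREAD_GRAYSCALE)
detected = cv2.imread('plate_detected.png', cv2.IMREAD_GRAYSCALE)

# Detect feature points and compute descriptors in both images
orb = cv2.ORB_create()
kp_t, des_t = orb.detectAndCompute(template, None)
kp_d, des_d = orb.detectAndCompute(detected, None)

# Match the descriptors (Hamming norm for binary ORB descriptors)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_d, des_t), key=lambda m: m.distance)

# Estimate the homography from the best matches with RANSAC
src = np.float32([kp_d[m.queryIdx].pt for m in matches[:30]]).reshape(-1, 1, 2)
dst = np.float32([kp_t[m.trainIdx].pt for m in matches[:30]]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the detected plate back into the template's orientation
h, w = template.shape
aligned = cv2.warpPerspective(detected, H, (w, h))

If the plate is only rotated in the image plane, the in-plane angle can be read off the homography, for example with math.degrees(math.atan2(H[1, 0], H[0, 0])).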

Related

Compute the blurriness along the x direction using the Sobel operator in OpenCV in Python

We need to detect whether the images produced by our tunable lens are blurred or not.
We want to find a proxy measure for blurriness.
My current approach is to first apply a Sobel filter along the x direction, because the jumps or stripes are mostly along this direction, then compute the x-direction marginal means, and finally compute the standard deviation of these marginal means.
We expect this standard deviation to be larger for a clear image and smaller for a blurred one, because clear images should have larger jumps in pixel values.
But we get the opposite result. How could we improve this blurriness measure?
import cv2
import matplotlib.pyplot as plt

def sobel_image_central_std(PATH):
    # use the blue channel
    img = cv2.imread(PATH)[:, :, 0]
    # extract the central part of the image
    hh, ww = img.shape
    hh2 = hh // 2
    ww2 = ww // 2
    hh4 = hh // 4
    ww4 = ww // 4  # note: the original had hh // 4 here, which looks like a typo
    img_center = img[hh4:(hh2 + hh4), ww4:(ww2 + ww4)]
    # Sobel operator along x
    sobelx = cv2.Sobel(img_center, cv2.CV_64F, 1, 0, ksize=3)
    # marginal means along x, then their standard deviation as the blur proxy
    x_marginal = sobelx.mean(axis=0)
    plt.plot(x_marginal)
    return x_marginal.std()
[Example images: Blur #1, Blur #2, Clear #1, Clear #2]
In general:
Is there a way to detect if an image is blurry?
You can combine this calculation with your other question, where you are searching for the central angle.
Once you have the angle (and the center, which may lie outside the image), you can apply an axis transformation to remove the circular component of the cone. Instead you get x (radius) and y (angle), where y runs along the circular arcs.
Maybe you can get the center of the image from the camera set-up.
Then you don't need to calculate it from the intersection of the edges of the central angle. Or just do it manually once if it is fixed for all images.
Look at polar coordinate systems.
Due to the shape of the cone the image will be denser at the peak, but this should be a fixed factor. It will probably bias the result when calculating the blurriness along the transformed image.
What you could do to correct this is create a synthetic cone image with circular lines and apply the same transformation to it. Again, this requires some trial and error.
But it should give you a mask that you can use to correct the "blurriness bias".
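A minimal sketch of such an axis transformation, assuming the center of the circular pattern is already known (the value below is made up) and using OpenCV's linear polar remap:

import cv2
import numpy as np

img = cv2.imread('cone_image.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# Assumed center of the circular pattern (e.g. from the camera set-up);
# it may even lie outside the image.
center = (640.0, -120.0)
max_radius = 1200.0

# Remap to polar coordinates: x becomes radius and y becomes angle, so a
# circular arc (constant radius) maps to a vertical line in the output.
polar = cv2.linearPolar(img, center, max_radius,
                        cv2.INTER_LINEAR + cv2.WARP_FILL_OUTLIERS)

# The blur measure from the question can then be applied to `polar`
# instead of the original image.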

Classification on the basis of shape and size

How to classify objects on the basis of shape and size using machine learning?
Say I have a circle and some small dotted squares in an image. The difference between the two is their shape and size. So given an image, how do I distinguish between these objects and return the result?
In the actual problem those objects are hot spots in a solar PV panel, which are its defective parts, and I need to classify them. The input image is:
[This is more or less a square type of hot spot:]
https://i.stack.imgur.com/4JL7E.png
This answer doesn't elaborate on machine learning or any approach using classifiers.
The circles can be detected with the Hough Circle Transform from OpenCV, cv2.HoughCircles():
Documentation for Hough Circles in OpenCV
Note: by using the radius you can tune the size of the circles you want to detect. And to be honest I didn't really get what the dotted squares are; maybe you could show an example image in your question.
If there are only two different kinds of objects in the image you probably don't even need a classifier, because the two classes are already separated by the subsequent image processing (though that depends highly on your input images).
import cv2
import numpy as np

img = cv2.imread('opencv_logo.png', 0)
img = cv2.medianBlur(img, 5)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=50, param2=30, minRadius=0, maxRadius=0)
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    # draw the outer circle
    cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
    # draw the center of the circle
    cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)

cv2.imshow('detected circles', cimg)
cv2.waitKey(0)
cv2.destroyAllWindows()
The result of the code is as follows:
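If the dotted squares are not picked up by cv2.HoughCircles, a rough non-learning sketch is to separate the shapes by polygon approximation and contour area. The file name, the thresholds and the OpenCV 4 return convention of findContours are assumptions:

import cv2

img = cv2.imread('hotspots.png', 0)          # hypothetical input image
_, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    area = cv2.contourArea(cnt)
    if area < 50:                            # ignore tiny specks (tune this)
        continue
    peri = cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, 0.02 * peri, True)
    if len(approx) == 4:
        label = 'square-ish'
    elif len(approx) > 6:
        label = 'circle-ish'
    else:
        label = 'other'
    print(label, area)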

OpenCV Image Alignment using ORB

I need to precisely align two images. To do that I am using the Enhanced Correlation Coefficient (ECC), which gives me great results except for images that are rotated a lot. For example, if the reference image (base image) and the test image (that I want to align) differ by a 90-degree rotation, the ECC method doesn't work, which is expected according to the documentation of findTransformECC(), which says:
Note that if images undergo strong displacements/rotations, an initial transformation that roughly aligns the images is necessary (e.g., a simple euclidean/similarity transform that allows for the images showing the same image content approximately).
So I have to use a feature-point-based alignment method first to do some rough alignment. I tried both SIFT and ORB and I am facing the same problem with both: it works fine for some images, and for others the resulting transformation is shifted or rotated to the wrong side.
These are the input images:
I thought that the problem was caused by wrong matches, but if I use just the 10 keypoints with the smallest distance, it seems to me that all of them are good matches (I get exactly the same result when I use 100 keypoints).
This is the result of matching:
This is the result:
If you compare it with the rotated image, it is shifted to the right and upside down.
What am I missing?
This is my code:
import cv2
import numpy as np

# base_gray, test_gray, output, debug_mode and fxy come from the surrounding program

# Initiate detector
orb = cv2.ORB_create()
# find the keypoints with ORB
kp_base = orb.detect(base_gray, None)
kp_test = orb.detect(test_gray, None)
# compute the descriptors with ORB
kp_base, des_base = orb.compute(base_gray, kp_base)
kp_test, des_test = orb.compute(test_gray, kp_test)
# Debug print
base_keypoints = cv2.drawKeypoints(base_gray, kp_base, color=(0, 0, 255), flags=0, outImage=base_gray)
test_keypoints = cv2.drawKeypoints(test_gray, kp_test, color=(0, 0, 255), flags=0, outImage=test_gray)
output.debug_show("Base image keypoints",base_keypoints, debug_mode=debug_mode,fxy=fxy,waitkey=True)
output.debug_show("Test image keypoints",test_keypoints, debug_mode=debug_mode,fxy=fxy,waitkey=True)
# find matches
# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# Match descriptors.
matches = bf.match(des_base, des_test)
# Sort them in the order of their distance.
matches = sorted(matches, key=lambda x: x.distance)
# Debug print - Draw first 10 matches.
number_of_matches = 10
matches_img = cv2.drawMatches(base_gray, kp_base, test_gray, kp_test, matches[:number_of_matches], flags=2, outImg=base_gray)
output.debug_show("Matches", matches_img, debug_mode=debug_mode,fxy=fxy,waitkey=True)
# calculate transformation matrix
base_keypoints = np.float32([kp_base[m.queryIdx].pt for m in matches[:number_of_matches]]).reshape(-1, 1, 2)
test_keypoints = np.float32([kp_test[m.trainIdx].pt for m in matches[:number_of_matches]]).reshape(-1, 1, 2)
# Calculate Homography
h, status = cv2.findHomography(base_keypoints, test_keypoints)
# Warp source image to destination based on homography
im_out = cv2.warpPerspective(test_gray, h, (base_gray.shape[1], base_gray.shape[0]))
output.debug_show("After rotation", im_out, debug_mode=debug_mode, fxy=fxy)
The answer to this problem is both mundane and irritating. Assuming this is the same issue as what I've encountered (I think it is):
Problem and Explanation
Images are saved by most cameras with EXIF tags that include an "Orientation" value. Beginning with OpenCV 3.2, this orientation tag is automatically read in when an image is loaded with cv2.imread(), and the image is oriented based on the tag (there are 8 possible orientations, which include 90° rotations, mirroring and flipping). Some image viewing applications (such as Image Viewer in Linux Mint Cinnamon, and Adobe Photoshop) will display images rotated in the direction of the EXIF Orientation tag. Other applications (such as QGIS and OpenCV < 3.2) ignore the tag.
If your Image 1 has an orientation tag and Image 2 has an orientation tag, and you perform the alignment with ORB (I haven't tried SIFT for this) in OpenCV, your aligned Image 2 will appear with the correct orientation (that of Image 1) when opened in an application that reads the EXIF Orientation tag. However, if you open both images in an application that ignores the EXIF Orientation tag, they will not appear to have the same orientation. The problem becomes even more pronounced when one image has an orientation tag and the other does not.
One Possible Solution
Remove the EXIF Orientation tags prior to reading the images into OpenCV. As of OpenCV 3.4 (maybe 3.3?) there is an option to load an image while ignoring the tag, e.g. cv2.imread('image.jpg', 128), where 128 means "ignore orientation"; but when this is done, the image is loaded as grayscale (1 channel), which is not helpful if you NEED color. So, I use pyexiv2 in Python to remove the offending EXIF Orientation tag from my images:
import pyexiv2

image = path_to_image  # path to the image file
imageMetadata = pyexiv2.ImageMetadata(image)
imageMetadata.read()
try:
    del imageMetadata['Exif.Image.Orientation']
    imageMetadata.write()
except KeyError:
    pass  # no Orientation tag present (the original used `continue` inside a loop over images)
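Alternatively, in newer OpenCV builds it may be possible to keep color while still ignoring the tag by combining the imread flags; a small sketch worth verifying against your OpenCV version:

import cv2

# Combine IMREAD_COLOR with IMREAD_IGNORE_ORIENTATION so the EXIF
# Orientation tag is ignored but the image is still loaded in color.
base = cv2.imread('base.jpg', cv2.IMREAD_COLOR | cv2.IMREAD_IGNORE_ORIENTATION)
test = cv2.imread('test.jpg', cv2.IMREAD_COLOR | cv2.IMREAD_IGNORE_ORIENTATION)

base_gray = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)
test_gray = cv2.cvtColor(test, cv2.COLOR_BGR2GRAY)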

Shape recognition using Hu moments from OpenCV in Python

I have a problem using the Hu moments for shape recognition. The goal is to be able to recognize the two white circles and the two white squares on the left in the picture.
http://i.stack.imgur.com/wVzYa.jpg
I tried using the cv2.approxPolyDP method but it doesn't quite work when there is a rotation. For the white circles I used the cv2.HoughCircles method and it works pretty well. However, I really need to use the Hu moments, because they seem to be a better method.
I have this code below:
import cv2
import numpy as np

nomeimg = "coded_target.jpg"
img = cv2.imread(nomeimg)
gray = cv2.imread(nomeimg, 0)
ret, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
element = cv2.getStructuringElement(cv2.MORPH_CROSS, (4, 4))
imgbnbin = thresh
imgbnbin = cv2.dilate(imgbnbin, element)

# find contours
contours, hierarchy = cv2.findContours(imgbnbin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# eliminate small contours
# (note: the original iterated over the empty Areacontours list, so nothing was ever kept)
Areacontours = list()
for cnt in contours:
    area = cv2.contourArea(cnt)
    if area > 90:
        Areacontours.append(cnt)
contours = Areacontours

print('found objects')
print(len(contours))

print("humoments")
mom = cv2.moments(contours[0])
Humoments = cv2.HuMoments(mom)
# log-scale the Hu moments to make them comparable
Humoments2 = -np.sign(Humoments) * np.log10(np.abs(Humoments))
print(Humoments2)
It returns 7 numbers, which are the Hu invariants. I tried rotating the picture and I see that only the last two change. It also says that it found only 1 object when there are obviously more than that. Is that normal?
I thought of using templates for shape identification, but I don't know how to do it: I believe I should exploit the Hu moments of the templates and see where they fit, but I'm not sure how to achieve it.
I appreciate the help.
You can create a template image of the squares and implement a template matching technique in order to detect them in the image.
You can also detect the contour of the template image and use the function cv2.matchShapes. However, this function compares two shapes, so I guess you will have to take a window of the same size as your template and slide it over your original image to find which part is the best match (the minimum value of matchShapes).
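A minimal sketch of the contour-based comparison with cv2.matchShapes, assuming OpenCV 3+ constants, the OpenCV 4 return convention of findContours, already-binarized images, and made-up file names and threshold:

import cv2

# Hypothetical binary images: the square template and the scene to search
template = cv2.imread('square_template.png', 0)
scene = cv2.imread('scene.png', 0)

# Largest contour of the template is the reference shape
t_contours, _ = cv2.findContours(template, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
ref = max(t_contours, key=cv2.contourArea)

# Compare every contour in the scene against the reference;
# matchShapes is based on Hu moments, so it is rotation invariant.
s_contours, _ = cv2.findContours(scene, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in s_contours:
    score = cv2.matchShapes(ref, cnt, cv2.CONTOURS_MATCH_I1, 0.0)
    if score < 0.1:  # made-up threshold; lower means more similar
        x, y, w, h = cv2.boundingRect(cnt)
        print('square-like shape at', (x, y, w, h), 'score', score)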

Finding shapes in an image using OpenCV

I'm trying to look for shapes in an image using OpenCV. I know the shapes I want to match (there are some shapes I don't know about, but I don't need to find them) and their orientations. I don't know their sizes (scale) and locations.
My current approach:
Detect contours
For each contour, calculate the maximum bounding box
Match each bounding box to one of the known shapes separately. In my real project, I'm scaling the region to the template size and calculating differences in Sobel gradient, but for this demo, I'm just using the aspect ratio.
Where this approach comes undone is where shapes touch. The contour detection picks up the two adjacent shapes as a single contour (single bounding box). The matching step will then obviously fail.
Is there a way to modify my approach to handle adjacent shapes separately? Also, is there a better way to perform step 3?
For example: (Es colored green, Ys colored blue)
Failed case: (unknown shape in red)
Source code:
import cv
import sys

E = cv.LoadImage('e.png')
E_ratio = float(E.width) / E.height
Y = cv.LoadImage('y.png')
Y_ratio = float(Y.width) / Y.height
EPSILON = 0.1

im = cv.LoadImage(sys.argv[1], cv.CV_LOAD_IMAGE_GRAYSCALE)
storage = cv.CreateMemStorage(0)
seq = cv.FindContours(im, storage, cv.CV_RETR_EXTERNAL,
                      cv.CV_CHAIN_APPROX_SIMPLE)
regions = []
while seq:
    pts = [pt for pt in seq]
    x, y = zip(*pts)
    min_x, min_y = min(x), min(y)
    width, height = max(x) - min_x + 1, max(y) - min_y + 1
    regions.append((min_x, min_y, width, height))
    seq = seq.h_next()

rgb = cv.LoadImage(sys.argv[1], cv.CV_LOAD_IMAGE_COLOR)
for x, y, width, height in regions:
    pt1 = x, y
    pt2 = x + width, y + height
    if abs(float(width) / height - E_ratio) < EPSILON:
        color = (0, 255, 0, 0)
    elif abs(float(width) / height - Y_ratio) < EPSILON:
        color = (255, 0, 0, 0)
    else:
        color = (0, 0, 255, 0)
    cv.Rectangle(rgb, pt1, pt2, color, 2)

cv.ShowImage('rgb', rgb)
cv.WaitKey(0)
[Images: e.png, y.png, good case, bad case]
Before anybody asks, no, I'm not trying to break a captcha :) OCR per se isn't really relevant here: the actual shapes in my real project aren't characters -- I'm just lazy, and characters are the easiest thing to draw (and still get detected by trivial methods).
As your shapes can vary in size and ratio, you should look at scale-invariant descriptors. A number of such descriptors would be perfect for your application.
Compute those descriptors on your templates and then use some kind of simple classification to match them. It should give pretty good results with simple shapes like the ones you show.
I have used Zernike and Hu moments in the past, the latter being the more famous. You can find an example implementation here: http://www.lengrand.fr/2011/11/classification-hu-and-zernike-moments-matlab/.
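A small sketch of that idea with Hu moments as the scale-invariant descriptor and a nearest-template rule. The file names e.png and y.png come from the question; the thresholding, the scene file name and the OpenCV 4 findContours convention are assumptions:

import cv2
import numpy as np

def log_hu(contour):
    # Log-scaled Hu moments: translation, scale and rotation invariant
    hu = cv2.HuMoments(cv2.moments(contour)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Reference descriptor per known shape (templates assumed white on black)
templates = {}
for name in ('e', 'y'):
    t = cv2.imread(name + '.png', 0)
    _, t = cv2.threshold(t, 127, 255, cv2.THRESH_BINARY)
    cnts, _ = cv2.findContours(t, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    templates[name] = log_hu(max(cnts, key=cv2.contourArea))

# Label every contour in the scene by its nearest template in descriptor space
scene = cv2.imread('scene.png', 0)   # hypothetical input image
_, scene_bin = cv2.threshold(scene, 127, 255, cv2.THRESH_BINARY)
cnts, _ = cv2.findContours(scene_bin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in cnts:
    d = log_hu(cnt)
    label = min(templates, key=lambda k: float(np.linalg.norm(templates[k] - d)))
    print(label, cv2.boundingRect(cnt))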
Another thing: given your problem, you could also look at OCR technologies (optical character recognition: http://en.wikipedia.org/wiki/Optical_character_recognition).
Hope this helps a bit.
Julien
Have you tried Chamfer matching, or contour matching (correspondence) using CCH as the descriptor?
Chamfer matching uses the distance transform of the target image and the template contour. It is not exactly scale invariant, but it is fast.
The latter is rather slow, as the complexity is at least quadratic for the bipartite matching problem. On the other hand, it is invariant to scale and rotation, and probably to local distortion (for approximate matching, which IMHO is good for the bad example above).
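A rough, simplified single-scale sketch of the chamfer-matching idea with OpenCV primitives: take the distance transform of the inverted edge map of the scene, then score the template's edge points at each candidate location by their mean distance. The file names, Canny thresholds and step size are assumptions:

import cv2
import numpy as np

scene = cv2.imread('scene.png', 0)       # hypothetical input images
template = cv2.imread('e.png', 0)

# Edge maps
scene_edges = cv2.Canny(scene, 50, 150)
templ_edges = cv2.Canny(template, 50, 150)

# Distance transform: for every pixel, the distance to the nearest scene edge
dist = cv2.distanceTransform(cv2.bitwise_not(scene_edges), cv2.DIST_L2, 3)

# Coordinates of the template's edge pixels
ty, tx = np.nonzero(templ_edges)

best_score, best_pos = np.inf, None
h, w = template.shape
H, W = scene.shape
# Slide the template over the scene (coarse step to keep it cheap)
for y in range(0, H - h, 4):
    for x in range(0, W - w, 4):
        score = dist[y + ty, x + tx].mean()   # mean chamfer distance
        if score < best_score:
            best_score, best_pos = score, (x, y)

print('best match at', best_pos, 'score', best_score)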
