Is there any way I can straighten this image using OpenCV with Python? I have been trying different transformations but I can't get it to work.
Here is my code:
rows, cols, h = img.shape
M = np.float32([[1, 0, 100], [0, 1, 50]])
And then I apply an affine transformation:
dst = cv2.warpAffine(roi, M, (cols, rows))
Still I can't get the desired output of a straightened image. I've been scratching my head for almost an hour now. Can anyone help me, please?
Do you remember my previous post? This answer is based on that.
So I obtained the 4 corner points of the bounding box around the book and fed them into the homography function.
Code:
#---- 4 corner points of the bounding box
pts_src = np.array([[17.0,0.0], [77.0,5.0], [0.0, 552.0],[53.0, 552.0]])
#---- 4 corner points of the black image you want to impose it on
pts_dst = np.array([[0.0,0.0],[77.0, 0.0],[ 0.0,552.0],[77.0, 552.0]])
#---- forming the black image of specific size
im_dst = np.zeros((552, 77, 3), np.uint8)
#---- Framing the homography matrix
h, status = cv2.findHomography(pts_src, pts_dst)
#---- transforming the image bound in the rectangle to straighten
im_out = cv2.warpPerspective(im, h, (im_dst.shape[1],im_dst.shape[0]))
cv2.imwrite("im_out.jpg", im_out)
Since you already have the contour bounding box around the book, you just have to feed those 4 corner points into the array pts_src.
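In case you do not already have those 4 points in a consistent order, here is a minimal sketch of one way to extract and order them from the contour (the name book_contour is a placeholder, not something from the original code):
import cv2
import numpy as np

# `book_contour` is assumed to be the contour you already detected around the book
rect = cv2.minAreaRect(book_contour)        # rotated rectangle fitted to the contour
corners = cv2.boxPoints(rect)               # its 4 corners as floats, in arbitrary order

# Order them top-left, top-right, bottom-left, bottom-right so they line up
# with the order used in pts_dst above:
s = corners.sum(axis=1)
d = np.diff(corners, axis=1).ravel()        # y - x for each corner
pts_src = np.array([corners[np.argmin(s)],  # top-left  (smallest x + y)
                    corners[np.argmin(d)],  # top-right (smallest y - x)
                    corners[np.argmax(d)],  # bottom-left  (largest y - x)
                    corners[np.argmax(s)]], # bottom-right (largest x + y)
                   dtype=np.float64)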
Related
Following up on my own question from 4 years ago, this time in Python only:
I am looking for a way to perform texture mapping into a small region in a destination image, defined by 4 corners given as (x, y) pixel coordinates. This region is not necessarily rectangular. It is a perspective projection of some rectangle onto the image plane.
I would like to map some (rectangular) texture into the mask defined by those corners.
Mapping directly by forward-mapping the texture will not work properly, as source pixels will be mapped to non-integer locations in the destination.
This problem is usually solved by inverse-warping from the destination to the source, then coloring according to some interpolation.
OpenCV's warpPerspective doesn't work here, as it can't take a mask.
Inverse-warping the entire destination and then masking is not acceptable because the majority of the computation is redundant.
Is there a built-in opencv (or other) function that accomplishes above requirements?
If not, what is a good way to get a list of the pixels inside my ROI (defined by its corners), so that I can pass them to projectPoints?
Example background image:
I want to fill the area outlined by the red lines (defined by its corners) with some other texture, say this one
Mapping between them can be obtained by mapping the texture's corners to the ROI corners with cv2.getPerspectiveTransform
For future generations, here is how to back-warp and forward-warp only the pixels within the bbox of the warped corner points, as @Micka suggested.
Here banner is the grass image, and banner_coords_2d are the corners of the red region on the image, which is meme-man.
import cv2
import numpy as np

def transform_banner(banner_coords_2d, banner, image):
    # show_points_on_image("banner corners", image, banner_coords_2d)
    banner_height, banner_width, _ = banner.shape
    src_banner_points = np.float32([
        [0, 0],
        [banner_width - 1, 0],
        [0, banner_height - 1],
        [banner_width - 1, banner_height - 1],
    ])
    # only warp to size of bbox of warped corners, not all of the image
    warped_left = np.round(np.min(banner_coords_2d[:, 0])).astype(int)
    warped_right = np.round(np.max(banner_coords_2d[:, 0])).astype(int)
    warped_top = np.round(np.min(banner_coords_2d[:, 1])).astype(int)
    warped_bottom = np.round(np.max(banner_coords_2d[:, 1])).astype(int)
    warped_width = int(warped_right - warped_left)
    warped_height = int(warped_bottom - warped_top)
    dst_banner_points = banner_coords_2d.astype(np.float32)
    dst_banner_points[:, 0] -= warped_left
    dst_banner_points[:, 1] -= warped_top
    tform = cv2.getPerspectiveTransform(src_banner_points, dst_banner_points)
    warped_banner = cv2.warpPerspective(banner, tform, (warped_width, warped_height))
    # cv2.imshow("warped_banner", warped_banner)
    image_with_banner = image.copy()
    image_with_banner[warped_top:warped_bottom, warped_left:warped_right][warped_banner != 0] = warped_banner[warped_banner != 0]
    # cv2.imshow("image_with_banner", image_with_banner)
    return image_with_banner
This can likely be done more neatly; I am open to edits.
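For reference, a minimal usage sketch (the file names and corner coordinates are made up for illustration):
import cv2
import numpy as np

image = cv2.imread("meme_man.png")         # hypothetical destination image
banner = cv2.imread("grass.png")           # hypothetical texture to map into the region
banner_coords_2d = np.array([[120, 80],    # top-left of the target region
                             [340, 95],    # top-right
                             [110, 300],   # bottom-left
                             [355, 310]])  # bottom-right

result = transform_banner(banner_coords_2d, banner, image)
cv2.imwrite("image_with_banner.png", result)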
I've been researching and trying a couple functions to get what I want and I feel like I might be overthinking it.
One version of my code is below. The sample image is here.
My end goal is to find the angle (yellow) of the approximated line with respect to the frame (green line), as shown in the 'Final' image.
I haven't even got to the angle portion of the program yet.
The results I was obtaining from the code below were as follows (see the 'Canny', 'Closed' and 'Small Removed' images).
Anybody have a better way of creating the difference and establishing the estimated line?
Any help is appreciated.
import cv2
import numpy as np
pX = int(512)
pY = int(768)
img = cv2.imread('IMAGE LOCATION', cv2.IMREAD_COLOR)
imgS = cv2.resize(img, (pX, pY))
aimg = cv2.imread('IMAGE LOCATION', cv2.IMREAD_GRAYSCALE)
# Blur image to reduce noise and resize for viewing
blur = cv2.medianBlur(aimg, 5)
rblur = cv2.resize(blur, (384, 512))
canny = cv2.Canny(rblur, 120, 255, 1)
cv2.imshow('canny', canny)
kernel = np.ones((2, 2), np.uint8)
#fringeMesh = cv2.dilate(canny, kernel, iterations=2)
#fringeMesh2 = cv2.dilate(fringeMesh, None, iterations=1)
#cv2.imshow('fringeMesh', fringeMesh2)
closing = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, kernel)
cv2.imshow('Closed', closing)
nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(closing, connectivity=8)
#connectedComponentswithStats yields every separated component with information on each of them, such as size
sizes = stats[1:, -1]; nb_components = nb_components - 1
min_size = 200 #num_pixels
fringeMesh3 = np.zeros((output.shape))
for i in range(0, nb_components):
    if sizes[i] >= min_size:
        fringeMesh3[output == i + 1] = 255
#contours, _ = cv2.findContours(fringeMesh3, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
#cv2.drawContours(fringeMesh3, contours, -1, (0, 255, 0), 1)
cv2.imshow('final', fringeMesh3)
#cv2.imshow("Natural", imgS)
#cv2.imshow("img", img)
cv2.imshow("aimg", aimg)
cv2.imshow("Blur", rblur)
cv2.waitKey()
cv2.destroyAllWindows()
You can fit a straight line to the first white pixel you encounter in each column, starting from the bottom.
I had to trim your image because you shared a screen grab of it with a window decoration, title and frame rather than your actual image:
import cv2
import math
import numpy as np
# Load image as greyscale
im = cv2.imread('trimmed.jpg', cv2.IMREAD_GRAYSCALE)
# Get index of first white pixel in each column, starting at the bottom
yvals = (im[::-1,:]>200).argmax(axis=0)
# Make the x values 0, 1, 2, 3...
xvals = np.arange(0,im.shape[1])
# Fit a line of the form y = mx + c
z = np.polyfit(xvals, yvals, 1)
# Convert the slope to an angle
angle = np.arctan(z[0]) * 180/math.pi
Note 1: The value of z (the result of fitting) is:
array([ -0.74002694, 428.01463745])
which means the equation of the line you are looking for is:
y = -0.74002694 * x + 428.01463745
i.e. the y-intercept is at row 428 from the bottom of the image.
Note 2: Try to avoid JPEG as an intermediate format in image processing - it is lossy and changes your pixel values. Where you have thresholded and done your morphology you expect values of exactly 0 and 255, but JPEG will alter those values and you end up having to test for a range or threshold again.
Your 'Closed' image seems to quite clearly segment the two regions, so I'd suggest you focus on turning that boundary into a line that you can do something with. Connected components analysis and contour detection don't really provide any useful information here, so aren't necessary.
One quite simple approach to finding the line angle is to find the first white pixel in each row. To get only the rows that are part of your diagonal, don't include rows where that pixel is too close to either side (e.g. within 5%). That gives you a set of points (pixel locations) on the boundary of your two types of grass.
From there you can either do a linear regression to get an equation for the straight line, or you can get two points by averaging the x values for the top and bottom half of the rows, and then calculate the gradient angle from that.
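As a rough sketch of that (assuming `closing` is the binary 'Closed' image from your code; the 5% margin and variable names are just illustrative):
import numpy as np

h, w = closing.shape
xs, ys = [], []
for row in range(h):
    cols = np.flatnonzero(closing[row] > 0)
    if cols.size == 0:
        continue
    x = cols[0]                       # first white pixel in this row
    if 0.05 * w < x < 0.95 * w:       # ignore rows where it hugs either side
        xs.append(x)
        ys.append(row)

# Linear regression of the boundary as x = m*y + c, then the angle of that line
m, c = np.polyfit(ys, xs, 1)
angle_deg = np.degrees(np.arctan(m))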
An alternative approach would be doing another morphological close with a very large kernel, to end up with just a solid white region and a solid black region, which you could turn into a line with canny or findContours. From there you could either get some points by averaging, use the endpoints, or given a smooth enough result from a large enough kernel you could detect the line with hough lines.
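And a sketch of that alternative, again using `closing` (the kernel size and Hough parameters are guesses you would need to tune):
import cv2
import numpy as np

big_kernel = np.ones((51, 51), np.uint8)
solid = cv2.morphologyEx(closing, cv2.MORPH_CLOSE, big_kernel)   # one white, one black region
boundary = cv2.Canny(solid, 50, 150)                             # the dividing line
lines = cv2.HoughLinesP(boundary, 1, np.pi / 180, threshold=100,
                        minLineLength=100, maxLineGap=20)
if lines is not None:
    x1, y1, x2, y2 = lines[0][0]                                 # strongest segment
    angle_deg = np.degrees(np.arctan2(y2 - y1, x2 - x1))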
After having segmented my lemons successfully I would like to get their size in pixels and then convert this value to millimeters. I'm reading a thesis where these guys did that, but with strawberries. The first step was to crop the segmented strawberries in a rectangle:
The image (b) was called the 'minimum rectangle'. According to the authors, it is built from the extreme values of the region:
- the highest point
- the extreme left point
- the lowest point
- the extreme right point of the region of interest.
Once this is done, the width of the rectangle is measured, which gives the diameter of the strawberry in pixels.
In my case this is my input image:
And this is my desired output:
I'm programming in python with opencv. I would like to crop my input image and then find the minimum rectangle to get the width of the rectangle which will show the diameter of the lemon in pixels.
According to the thesis, to convert the measurement in pixels to a real-world measurement in millimeters, I should take a photograph of a rectangle with 3 cm sides under the same conditions as the images of the lemons were taken. Then I should segment this rectangle, find its minimum rectangle as in the image above, and get its width in pixels; since the rectangle has known dimensions (they got, e.g., 176 pixels of width), this gives the scale:
1mm = 176/30 = 5.87 pixels
With this information I would like to compute the width of my lemons, first in pixels, and then convert it to millimeters. If you can do it, please assume that I took a photograph of a known figure with 3 cm sides, the same as in the thesis. For the moment I can't get the minimum rectangle because I don't know how to get it, which is why I am asking for your help.
I would like to see your suggestions; I will appreciate any idea. Thanks so much.
Once you have the thresholded image (mask) of your blob of interest (the lemon), it is very straightforward to get its (rotated) minimum area rectangle or its bounding rectangle. Use the cv2.minAreaRect function to get the former or the cv2.boundingRect function to get the latter. In both cases you need to compute the contours of the binary mask, get the outer and biggest contour, and pass that to either function.
Let's see an example for getting both:
import cv2
import numpy as np

# image path
path = "C://opencvImages//"
fileName = "TAkY2.png"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Grayscale conversion:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Thresholding:
threshValue, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
This is just to get the binary mask, you already have this. This is the result:
Now, get the contours. And just to draw some results, prepare a couple of deep copies of the input that we will use to check out things:
# Find the big contours/blobs on the filtered image:
contours, hierarchy = cv2.findContours(binaryImage, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
# Deep copies of the input image to draw results:
minRectImage = inputImage.copy()
polyRectImage = inputImage.copy()
Now, get the contours and filter them by a minimum area (minArea) value. You want to just keep the biggest contour - that's the lemon perimeter:
# Look for the outer bounding boxes:
for i, c in enumerate(contours):
    if hierarchy[0][i][3] == -1:
        # Get contour area:
        contourArea = cv2.contourArea(c)
        # Set minimum area threshold:
        minArea = 1000
        # Look for the largest contour:
        if contourArea > minArea:
            # Option 1: Get the minimum area bounding rectangle
            # for this contour:
            boundingRectangle = cv2.minAreaRect(c)
            # Get the rectangle points:
            rectanglePoints = cv2.boxPoints(boundingRectangle)
            # Convert float array to int array:
            rectanglePoints = np.intp(rectanglePoints)
            # Draw the min area rectangle:
            cv2.drawContours(minRectImage, [rectanglePoints], 0, (0, 0, 255), 2)
            cv2.imshow("minAreaRect", minRectImage)
            cv2.waitKey(0)
This portion of code gets you these results. Note that this rectangle is rotated to encompass the minimum area of the contour, just as if you were actually measuring the lemon with a caliper:
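If you also want the diameter in physical units, here is a small sketch of the conversion (it uses the 176 px / 30 mm scale from your question; that scale is an assumption you have to measure with your own calibration square):
# Inside the same loop, right after cv2.minAreaRect:
(cx, cy), (rectWidth, rectHeight), rectAngle = boundingRectangle
diameterPixels = min(rectWidth, rectHeight)   # shorter side, roughly the lemon's diameter
pixelsPerMm = 176.0 / 30.0                    # from the 3 cm calibration square
diameterMm = diameterPixels / pixelsPerMm
print("Diameter: %.1f px = %.1f mm" % (diameterPixels, diameterMm))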
You can also get the position of the four corners of this rectangle. Still, inside the loop, we have the following bit of code:
            # Draw the corner points:
            for p in rectanglePoints:
                cv2.circle(minRectImage, (p[0], p[1]), 3, (0, 255, 0), -1)
            cv2.imshow("minAreaRect Points", minRectImage)
            cv2.waitKey(0)
These are the corners of the min area rectangle:
You might or might not like this result. You might be looking for the bounding rectangle that is not rotated. In that case you can use cv2.boundingRect, but first you need to approximate the contour to a polygon-based set of points. This is the approach, continuing from the last line of code:
            # Option 2: Approximate the contour to a polygon:
            contoursPoly = cv2.approxPolyDP(c, 3, True)
            # Convert the polygon to a bounding rectangle:
            boundRect = cv2.boundingRect(contoursPoly)
            # Get the rectangle corners (x, y) and (x + w, y + h):
            rectangleX = boundRect[0]
            rectangleY = boundRect[1]
            rectangleWidth = boundRect[0] + boundRect[2]    # right edge
            rectangleHeight = boundRect[1] + boundRect[3]   # bottom edge
            # Draw the rectangle:
            cv2.rectangle(polyRectImage, (int(rectangleX), int(rectangleY)),
                          (int(rectangleWidth), int(rectangleHeight)), (0, 255, 0), 2)
            cv2.imshow("Poly Rectangle", polyRectImage)
            cv2.imwrite(path + "polyRectImage.png", polyRectImage)
            cv2.waitKey(0)
This is the result:
Edit:
This is the bit that actually crops the lemon from the last image:
# Crop the ROI:
croppedImg = inputImage[rectangleY:rectangleHeight, rectangleX:rectangleWidth]
This is the final output:
Our team set up a vision system with a camera, a microscope and a tunable lens to look at the internal surface of a cone.
Visually speaking, the camera takes 12 images for one cone, with each image covering 30 degrees.
Now we've collected many sample images and want to make sure each "fan" (as shown below) spans at least 30 degrees.
Is there any way in Python, with cv2 or other packages, to measure this central angle? Thanks.
Here is one way to do that in Python/OpenCV.
- Read the image
- Convert to gray
- Threshold
- Use morphology open and close to smooth and fill out the boundary
- Apply Canny edge extraction
- Separate the image into top edge and bottom edge by blackening the opposite side to each edge
- Fit lines to the top and bottom edges
- Compute the angle of each edge
- Compute the difference between the two angles
- Draw the lines on the input
- Save the results
Input:
import cv2
import numpy as np
import math
# read image
img = cv2.imread('cone_shape.jpg')
# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# threshold
thresh = cv2.threshold(gray,11,255,cv2.THRESH_BINARY)[1]
# apply open then close to smooth boundary
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (13,13))
morph = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
kernel = np.ones((33,33), np.uint8)
morph = cv2.morphologyEx(morph, cv2.MORPH_CLOSE, kernel)
# apply canny edge detection
edges = cv2.Canny(morph, 150, 200)
hh, ww = edges.shape
hh2 = hh // 2
# split edge image in half vertically and blacken opposite half
top_edge = edges.copy()
top_edge[hh2:hh, 0:ww] = 0
bottom_edge = edges.copy()
bottom_edge[0:hh2, 0:ww] = 0
# get coordinates of white pixels in top and bottom
# note: need to transpose y,x in numpy to x,y for opencv
top_white_pts = np.argwhere(top_edge.transpose()==255)
bottom_white_pts = np.argwhere(bottom_edge.transpose()==255)
# fit lines to white pixels
# (x,y) is point on line, (vx,vy) is unit vector along line
(vx1,vy1,x1,y1) = cv2.fitLine(top_white_pts, cv2.DIST_L2, 0, 0.01, 0.01)
(vx2,vy2,x2,y2) = cv2.fitLine(bottom_white_pts, cv2.DIST_L2, 0, 0.01, 0.01)
# compute angle for vectors vx,vy
top_angle = (180/math.pi)*math.atan(vy1/vx1)
bottom_angle = (180/math.pi)*math.atan(vy2/vx2)
print(top_angle, bottom_angle)
# cone angle is the difference
cone_angle = math.fabs(top_angle - bottom_angle)
print(cone_angle)
# draw lines on input
lines = img.copy()
p1x1 = int(x1-1000*vx1)
p1y1 = int(y1-1000*vy1)
p1x2 = int(x1+1000*vx1)
p1y2 = int(y1+1000*vy1)
cv2.line(lines, (p1x1,p1y1), (p1x2,p1y2), (0, 0, 255), 1)
p2x1 = int(x2-1000*vx2)
p2y1 = int(y2-1000*vy2)
p2x2 = int(x2+1000*vx2)
p2y2 = int(y2+1000*vy2)
cv2.line(lines, (p2x1,p2y1), (p2x2,p2y2), (0, 0, 255), 1)
# save resulting images
cv2.imwrite('cone_shape_thresh.jpg',thresh)
cv2.imwrite('cone_shape_morph.jpg',morph)
cv2.imwrite('cone_shape_edges.jpg',edges)
cv2.imwrite('cone_shape_lines.jpg',lines)
# show thresh and result
cv2.imshow("thresh", thresh)
cv2.imshow("morph", morph)
cv2.imshow("edges", edges)
cv2.imshow("top edge", top_edge)
cv2.imshow("bottom edge", bottom_edge)
cv2.imshow("lines", lines)
cv2.waitKey(0)
cv2.destroyAllWindows()
Thresholded image:
Morphology processed image:
Edge Image:
Lines on input:
Cone Angle (in degrees):
42.03975696357633
That sounds possible. You need to do some preprocessing and filtering to figure out what works and there is probably some tweaking involved.
There are three approaches that could work.
1.)
The basic idea is to somehow get two lines and measure the angle between them.
Define a threshold to define the outer black region (out of the central angle) and set all values below it to zero.
This will also set some of the blurry stripes inside the central angle to zero so we have to try to "heal" them away. This is done by using Morphological Transformations. You can read about them here and here.
You could try the operation Closing, but I don't know if it fixes stripes. Usually it fixes dots or scratches. This answer seems to indicate that it should work on lines.
Maybe at that point apply some Gaussian blurring and do the thresholding again. Then try to use some edge or line detection.
It's basically trial and error; you have to see what works.
2.)
Another thing that could work is to try to use the arc-like scratches, maybe even strengthen them, and use the Hough Circle Transform. I think it detects arcs as well.
Just try it and see what the function returns. In the best case there are several circles / arcs that you can use to estimate the central angle.
There are several approaches on arc detection here on StackOverflow or here.
I am not sure if that's the same for all your images, but in the one above there seem to be some thin green and pink arcs that stretch all along the central angle. You could filter for that color, then convert to grayscale.
This question might be helpful.
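A rough sketch of that idea, assuming `img` is your input image read in color (all parameter values here are placeholders to tune on your own images):
import cv2
import numpy as np

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=100, maxRadius=0)
if circles is not None:
    for cx, cy, r in circles[0]:
        # knowing the center and radius, you could then check which pixels
        # along each circle are actually lit to estimate the angular extent
        print("circle center (%.0f, %.0f), radius %.0f" % (cx, cy, r))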
3.)
Apply an edge filter, e.g. Canny (skimage.feature.canny).
Try several sigmas and post the images in your question, then we can try to think on how to continue.
What could work is to calculate the convex hull around all points that are part of an edge. Then get the two lines that form the central angle from the convex hull.
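A minimal sketch of that last idea, assuming `edges` is a binary edge image (for instance from the Canny step above); picking the two longest hull segments as the sides of the fan is a heuristic, not a guarantee:
import cv2
import numpy as np

pts = cv2.findNonZero(edges)                   # all edge pixels as (x, y)
hull = cv2.convexHull(pts).reshape(-1, 2)      # convex hull vertices

# treat the two longest hull edges as the two sides of the central angle
segs = np.roll(hull, -1, axis=0) - hull
lengths = np.hypot(segs[:, 0], segs[:, 1])
two_longest = segs[np.argsort(lengths)[-2:]]
angles = np.degrees(np.arctan2(two_longest[:, 1], two_longest[:, 0]))
central_angle = abs(angles[0] - angles[1]) % 180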
I'm working on a depth map with OpenCV. I can obtain it, but it is reconstructed from the left camera's origin, and that camera has a small tilt; as you can see in the figure, the depth is "shifted" (the depth should be close to constant, with no horizontal gradient):
I would like to express it as if the angle were zero. I tried with the warpPerspective function, as you can see below, but I obtain a null field...
P = np.dot(cam,np.dot(Transl,np.dot(Rot,A1)))
dst = cv2.warpPerspective(depth, P, (2048, 2048))
with :
#Projection 2D -> 3D matrix
A1 = np.zeros((4,3))
A1[0,0] = 1
A1[0,2] = -1024
A1[1,1] = 1
A1[1,2] = -1024
A1[3,2] = 1
#Rotation matrix around the Y axis
theta = np.deg2rad(5)
Rot = np.zeros((4,4))
Rot[0,0] = np.cos(theta)
Rot[0,2] = -np.sin(theta)
Rot[1,1] = 1
Rot[2,0] = np.sin(theta)
Rot[2,2] = np.cos(theta)
Rot[3,3] = 1
#Translation matrix on the X axis
dist = 0
Transl = np.zeros((4,4))
Transl[0,0] = 1
Transl[0,2] = dist
Transl[1,1] = 1
Transl[2,2] = 1
Transl[3,3] = 1
#Camera intrinsics matrix 3D -> 2D
cam = np.concatenate((C1,np.zeros((3,1))),axis=1)
cam[2,2] = 1
P = np.dot(cam,np.dot(Transl,np.dot(Rot,A1)))
dst = cv2.warpPerspective(Z0_0, P, (2048*3, 2048*3))
EDIT LATER :
You can download the 32MB field dataset here: https://filex.ec-lille.fr/get?k=cCBoyoV4tbmkzSV5bi6. Then, load and view the image with:
from matplotlib import pyplot as plt
import numpy as np
img = np.load('testZ0.npy')
plt.imshow(img)
plt.show()
I have got a rough solution in place. You can modify it later.
I used the mouse handling operations available in OpenCV to crop the region of interest in the given heatmap.
(Did I just say I used a mouse to crop the region?) Yes, I did. To learn more about mouse functions in OpenCV SEE THIS. Besides, there are many other SO questions that can help you in this regard.:)
Using those functions I was able to obtain the following:
Now to your question of removing the tilt. I used the homography principle by taking the corner points of the image above and mapping them onto a 'white' image of a definite size. I used the cv2.findHomography() function for this.
Now using the cv2.warpPerspective() function in OpenCV, I was able to obtain the following:
Now you can apply the required scale to this image as you wanted.
CODE:
I have also attached some snippets of code for your perusal:
#First I created an image of white color of a definite size
back = np.ones((435, 379, 3)) # size
back[:] = (255, 255, 255) # white color
Next I obtained the corner points pts_src on the tilted image below:
pts_src = np.array([[25.0, 2.0],[403.0,22.0],[375.0,436.0],[6.0,433.0]])
I wanted the points above to be mapped to the points 'pts_dst' given below :
pts_dst = np.array([[2.0, 2.0], [379.0, 2.0], [379.0, 435.0],[2.0, 435.0]])
Now I used the principle of homography:
h, status = cv2.findHomography(pts_src, pts_dst)
Finally I mapped the original image to the white image using perspective transform.
fin = cv2.warpPerspective(img, h, (back.shape[1],back.shape[0]))
# img -> original tilted image.
# back -> image of white color.
Hope this helps! I also got to learn a great deal from this question.
Note: The points fed to the 'cv2.findHomography()' must be in float.
For more info on homography, visit THIS PAGE.
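For convenience, here is a minimal sketch putting the snippets above together (the file name is hypothetical, and the corner coordinates are just the example values from this answer; adapt them to your own crop):
import cv2
import numpy as np

img = cv2.imread("cropped_heatmap.png")            # hypothetical cropped ROI image

# white destination canvas of a definite size
back = np.full((435, 379, 3), 255, np.uint8)

# corners of the tilted region -> corners of the upright canvas (floats)
pts_src = np.array([[25.0, 2.0], [403.0, 22.0], [375.0, 436.0], [6.0, 433.0]])
pts_dst = np.array([[2.0, 2.0], [379.0, 2.0], [379.0, 435.0], [2.0, 435.0]])

h, status = cv2.findHomography(pts_src, pts_dst)
fin = cv2.warpPerspective(img, h, (back.shape[1], back.shape[0]))
cv2.imwrite("straightened.png", fin)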