Python - Perspective transform for OpenCV from a rotation angle

I'm working on a depth map with OpenCV. I can obtain it, but it is reconstructed from the left camera origin, and that camera has a slight tilt; as you can see in the figure, the depth is "shifted" (the depth should be close and show no horizontal gradient):
I would like to express it as if it were taken with a zero tilt angle. I tried the warpPerspective function as you can see below, but I obtain a null field...
P = np.dot(cam,np.dot(Transl,np.dot(Rot,A1)))
dst = cv2.warpPerspective(depth, P, (2048, 2048))
with:
#Projection 2D -> 3D matrix
A1 = np.zeros((4,3))
A1[0,0] = 1
A1[0,2] = -1024
A1[1,1] = 1
A1[1,2] = -1024
A1[3,2] = 1
#Rotation matrix around the Y axis
theta = np.deg2rad(5)
Rot = np.zeros((4,4))
Rot[0,0] = np.cos(theta)
Rot[0,2] = -np.sin(theta)
Rot[1,1] = 1
Rot[2,0] = np.sin(theta)
Rot[2,2] = np.cos(theta)
Rot[3,3] = 1
#Translation matrix on the X axis
dist = 0
Transl = np.zeros((4,4))
Transl[0,0] = 1
Transl[0,2] = dist
Transl[1,1] = 1
Transl[2,2] = 1
Transl[3,3] = 1
#Camera intrinsics matrix 3D -> 2D
cam = np.concatenate((C1,np.zeros((3,1))),axis=1)
cam[2,2] = 1
P = np.dot(cam,np.dot(Transl,np.dot(Rot,A1)))
dst = cv2.warpPerspective(Z0_0, P, (2048*3, 2048*3))
EDIT LATER:
You can download the 32MB field dataset here: https://filex.ec-lille.fr/get?k=cCBoyoV4tbmkzSV5bi6. Then, load and view the image with:
from matplotlib import pyplot as plt
import numpy as np
img = np.load('testZ0.npy')
plt.imshow(img)
plt.show()

I have got a rough solution in place. You can modify it later.
I used the mouse handling operations available in OpenCV to crop the region of interest in the given heatmap.
(Did I just say I used a mouse to crop the region?) Yes, I did. To learn more about mouse functions in OpenCV, SEE THIS. Besides, there are many other SO questions that can help you in this regard. :)
Using those functions I was able to obtain the following:
Now to your question of removing the tilt. I used the homography principle, taking the corner points of the image above and mapping them onto a 'white' image of a definite size. I used the cv2.findHomography() function for this.
Now, using the cv2.warpPerspective() function in OpenCV, I was able to obtain the following:
Now you can apply the required scaling to this image as you wanted.
CODE:
I have also attached some snippets of code for your perusal:
#First I created an image of white color of a definite size
back = np.ones((435, 379, 3)) # size
back[:] = (255, 255, 255) # white color
Next, I obtained the corner points pts_src on the tilted image below:
pts_src = np.array([[25.0, 2.0],[403.0,22.0],[375.0,436.0],[6.0,433.0]])
I wanted the points above to be mapped to the points 'pts_dst' given below:
pts_dst = np.array([[2.0, 2.0], [379.0, 2.0], [379.0, 435.0],[2.0, 435.0]])
Now I used the principle of homography:
h, status = cv2.findHomography(pts_src, pts_dst)
Finally I mapped the original image to the white image using perspective transform.
fin = cv2.warpPerspective(img, h, (back.shape[1],back.shape[0]))
# img -> original tilted image.
# back -> image of white color.
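For completeness, here is a minimal end-to-end sketch that strings the snippets above together. It is only a sketch: 'cropped_heatmap.png' is a hypothetical filename standing in for the cropped heatmap, and the corner coordinates are just the example values from this answer.
import cv2
import numpy as np

img = cv2.imread('cropped_heatmap.png')  # hypothetical file: the cropped, tilted heatmap
back = np.full((435, 379, 3), 255, dtype=np.uint8)  # white canvas of a definite size

# Corner points of the tilted region and their desired positions (floats)
pts_src = np.array([[25.0, 2.0], [403.0, 22.0], [375.0, 436.0], [6.0, 433.0]])
pts_dst = np.array([[2.0, 2.0], [379.0, 2.0], [379.0, 435.0], [2.0, 435.0]])

# Homography that maps the tilted corners onto the canvas corners
h, status = cv2.findHomography(pts_src, pts_dst)

# Warp the tilted image onto the canvas size and save the result
fin = cv2.warpPerspective(img, h, (back.shape[1], back.shape[0]))
cv2.imwrite('untilted.png', fin)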
Hope this helps! I also got to learn a great deal from this question.
Note: The points fed to cv2.findHomography() must be floats.
For more info on homography, visit THIS PAGE.


How to transform source image to destination image automatically using opencv

I have the source image on the left. I distorted this image manually using the grid tool in Paint, and the result is on the right side. I was wondering if OpenCV offers any function that compares both images and gives some transformation matrix I can use further on, e.g. applying this matrix to a video so that the video is distorted like the image on the right side.
I appreciate the help of this community :)
I was wondering if opencv offers any function which compares both images and gives some transformation matrix which I can use for further use.
Neither OpenCV nor any other library can calculate a transformation matrix for a non-linear transformation: it is impossible to represent a non-linear transformation as a single matrix. However, one may find a transformation matrix that is a linear approximation of the non-linear transformation. Such an approximation will not necessarily yield satisfactory results.
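For reference, such a best-fit projective (linear) approximation can be estimated from matched point pairs with cv2.findHomography. The sketch below assumes you already have matched points (the four pairs here are made-up values); as said above, it cannot capture the non-linear part of the distortion.
import cv2
import numpy as np

# Matched (x, y) pairs in the original and the distorted image (made-up values)
src_points = np.float32([[10, 10], [200, 12], [198, 180], [12, 182]])
dst_points = np.float32([[14, 8], [205, 20], [190, 186], [8, 176]])

# RANSAC makes the estimate robust when some matches are bad
H, inlier_mask = cv2.findHomography(src_points, dst_points, cv2.RANSAC, 5.0)

# Apply the linear approximation to a frame of the same size
frame = cv2.imread('frame.png')  # hypothetical input frame
approx = cv2.warpPerspective(frame, H, (frame.shape[1], frame.shape[0]))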
As the image you distort has a checkerboard pattern, you can describe the distortion by the movement of the square corners. I suggest the following two-step approach:
Step 1: Find corners in both the original and the distorted image.
OpenCV has a number of functions used in camera calibration that are designed for checkerboard images. The following function uses OpenCV's cv2.findChessboardCorners to find the corner points. Note: get_chessboard_corner_points returns relative coordinates, not exact pixel coordinates.
def get_chessboard_corner_points(img, corner_shape=(10,15)):
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    success, corners = cv2.findChessboardCorners(img, corner_shape)
    if success:
        corners = cv2.cornerSubPix(img, corners, (11,11), (-1,-1), criteria)
        corners = np.squeeze(corners, axis=1)
        for i in range(len(corners)):
            corners[i] = (corners[i][0]/img.shape[1], corners[i][1]/img.shape[0])
        #You may also try adding the boundary points of the image by uncommenting the next line.
        #corners = np.append(corners, np.array([[0,0],[0,1],[1,0],[1,1]]), 0)
        #In addition, you may add other boundary points according to your needs.
        return corners
    return []
Step 2: Distort each video frame using interpolation and the points found in the previous step.
Now that you have the points, you need to write the distortion part. The distort_image function will do the job, though you may change it according to your needs. Note: src and dest contain relative coordinates. For the target image we compute and keep the exact pixel coordinates in src_exact and dest_exact respectively; then griddata from SciPy is used for interpolation.
def distort_image(img, src, dest):
    height, width = img.shape[:2]
    src_exact = []
    dest_exact = []
    for i in range(len(src)):
        src_exact.append((int(src[i][1]*height), int(src[i][0]*width)))
        dest_exact.append((int(dest[i][1]*height), int(dest[i][0]*width)))
    grid_x, grid_y = np.mgrid[0:height - 1:complex(0, height), 0:width - 1:complex(0, width)]
    grid_z = griddata(dest_exact, src_exact, (grid_x, grid_y), method='cubic')
    map_x = np.append([], [ar[:, 1] for ar in grid_z]).reshape(height, width)
    map_y = np.append([], [ar[:, 0] for ar in grid_z]).reshape(height, width)
    map_x_32 = map_x.astype('float32')
    map_y_32 = map_y.astype('float32')
    distorted = cv2.remap(img, map_x_32, map_y_32, cv2.INTER_CUBIC)
    return distorted
And here are a few lines that allow you to test the functions presented above.
import cv2
import numpy as np
from scipy.interpolate import griddata
TEST_IMG_PATH = "" #path to original grid
SOURCE_IMG_PATH = "" #path to distorted grid
DEST_IMG_PATH = "" #path to the image to be distorted (this can be replaced by a video frame)
CORNER_SHAPE = (10, 15)
src_img = cv2.cvtColor(cv2.imread(SOURCE_IMG_PATH), cv2.COLOR_BGR2GRAY)
dest_img = cv2.cvtColor(cv2.imread(DEST_IMG_PATH), cv2.COLOR_BGR2GRAY)
test = cv2.cvtColor(cv2.imread(TEST_IMG_PATH), cv2.COLOR_BGR2RGB)
src_points = get_chessboard_corner_points(src_img, CORNER_SHAPE)
dest_points = get_chessboard_corner_points(dest_img, CORNER_SHAPE)
if len(src_points) != 0 and len(dest_points) != 0:
    result = distort_image(test, src_points, dest_points)

OpenCV: Understanding warpPerspective / perspective transform

I made a small example for myself to play around with OpenCV's warpPerspective, but the output is not quite what I expected.
My input is a bar at a 45° angle. I want to transform it so that it is vertically aligned / at a 90° angle. No problem there. However, what I don't understand is that everything around the actual destination points is black. The reason I don't understand this is that only the transformation matrix gets passed to the warpPerspective function, not the destination points themselves. So my expected output would be a bar at a 90° angle with most of the area around it yellow instead of black. Where is my error in reasoning?
# helper function
def showImage(img, title):
    fig = plt.figure()
    plt.suptitle(title)
    plt.imshow(img)
# read and show test image
img = mpimg.imread('test_transform.jpg')
showImage(img, "input image")
# source points
top_left = [194,430]
top_right = [521,103]
bottom_right = [549,131]
bottom_left = [222,458]
pts = np.array([bottom_left,bottom_right,top_right,top_left])
# target points
y_off = 400; # y offset
top_left_dst = [top_left[0], top_left[1] - y_off]
top_right_dst = [top_left_dst[0] + 39.6, top_left_dst[1]]
bottom_right_dst = [top_right_dst[0], top_right_dst[1] + 462.4]
bottom_left_dst = [top_left_dst[0], bottom_right_dst[1]]
dst_pts = np.array([bottom_left_dst, bottom_right_dst, top_right_dst, top_left_dst])
# generate a preview to show where the warped bar would end up
preview=np.copy(img)
cv2.polylines(preview,np.int32([dst_pts]),True,(0,0,255), 5)
cv2.polylines(preview,np.int32([pts]),True,(255,0,255), 1)
showImage(preview, "preview")
# calculate transformation matrix
pts = np.float32(pts.tolist())
dst_pts = np.float32(dst_pts.tolist())
M = cv2.getPerspectiveTransform(pts, dst_pts)
# warp image and draw the resulting image
image_size = (img.shape[1], img.shape[0])
warped = cv2.warpPerspective(img, M, dsize = image_size, flags = cv2.INTER_LINEAR)
showImage(warped, "warped")
The result using this code is:
Here's my input image test_transform.jpg:
And here is the same image with coordinates added:
By request, here is the transformation matrix:
[[ 6.05504680e-02 -6.05504680e-02 2.08289910e+02]
[ 8.25714275e+00 8.25714275e+00 -5.12245707e+03]
[ 2.16840434e-18 3.03576608e-18 1.00000000e+00]]
The ordering of your arrays or their positions might be at fault. Check this transformed image: the dst_pts array is np.array([[196,492],[233,494],[234,32],[196,34]]), which is more or less the blue rectangle in your preview image. (I picked the coordinates myself to make sure they are right.)
NOTE: Your source and destination points should be in the right order.
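As for the black surroundings: warpPerspective maps the whole output canvas back through the matrix, and every output pixel whose pre-image falls outside the source image is filled with the border value, which defaults to black. If you want those regions to be yellow instead, a sketch (not part of the original answer) is to pass a constant border colour:
warped = cv2.warpPerspective(img, M, dsize=image_size, flags=cv2.INTER_LINEAR,
                             borderMode=cv2.BORDER_CONSTANT,
                             borderValue=(255, 255, 0))  # yellow, since mpimg.imread gives RGB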

Using inpaint function in OpenCV via Python to interpolate broken river-data in a watershed

Background
A raster file collected via LIDAR records the topography of a watershed. To properly model the watershed, the river must appear continuous, without any breaks or interruptions. The roads in the raster file appear like dams that interrupt the river, as seen in the picture below.
Specific Area Under Consideration in the Watershed
Objective
These river breaks are the main problem and I am trying but failing to remove them.
Approach
Via Python, I used the various tools and prebuilt functions in the OpenCV library. The primary function I used in this approach is the cv2.inpaint function. This function takes in an image file and a mask file and interpolates the original image wherever the mask file pixels are nonzero.
The main step here is determining the mask file which I did by detecting the corners at the break in the river. The mask file will guide the inpaint function to fill in the pixels according to the patterns in the surrounding pixels.
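For reference, the call itself is small; a minimal sketch with hypothetical file names looks like this:
import cv2

img = cv2.imread('terrain.png', cv2.IMREAD_GRAYSCALE)      # hypothetical 8-bit raster image
mask = cv2.imread('break_mask.png', cv2.IMREAD_GRAYSCALE)   # nonzero where pixels should be filled
filled = cv2.inpaint(img, mask, 12, cv2.INPAINT_TELEA)      # 12-pixel inpainting radius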
Problem
My issue is that this interpolation happens from all directions, whereas I need it to extrapolate pixel data from the river itself only. The image below shows the flawed result: inpaint works, but it considers data from outside the river too.
Inpainted Result
Here is my code if you are so kind as to help:
import scipy.io as sio
import numpy as np
from matplotlib import pyplot as plt
import cv2
matfile = sio.loadmat('box.mat') ## box.mat original raster file linked below
ztopo = matfile['box']
#Crop smaller section for analysis
ztopo2 = ztopo[200:500, 1400:1700]
## Step 1) Isolate river
river = ztopo2.copy()
river[ztopo2<217.5] = 0
#This will become problematic for entire watershed w/o proper slicing
## Step 2) Detect Corners
dst = cv2.cornerHarris(river,3,7,0.04)
# cornerHarris arguments adjust qualities of corner markers
# Dilate Image (unnecessary)
#dst = cv2.dilate(dst,None)
# Threshold for an optimal value, it may vary depending on the image.
# This adjusts what defines a corner
river2 = river.copy()
river2[dst>0.01*dst.max()]=[150]
## Step 3) Remove river and keep corners
#Initiate loop to isolate detected corners
i=0
j=0
rows,columns = river2.shape
for i in np.arange(rows):
    for j in np.arange(columns):
        if river2[i,j] != 150:
            river2[i,j] = 0
        j = j + 1
    i = i + 1
# Save corners as new image for import during next step.
# Import must be via cv2 as thresholding and contour detection can only work on BGR files. Sio import in line 6 (matfile = sio.loadmat('box.mat')) imports 1 dimensional image rather than BGR.
cv2.imwrite('corners.png', river2)
## Step 4) Create mask image by defining and filling a contour around the previously detected corners
#Step 4 code retrieved from http://dsp.stackexchange.com/questions/2564/opencv-c-connect-nearby-contours-based-on-distance-between-them
#Article: OpenCV/C++ connect nearby contours based on distance between them
#Initiate function to specify features of contour connections
def find_if_close(cnt1,cnt2):
    row1,row2 = cnt1.shape[0],cnt2.shape[0]
    for i in xrange(row1):
        for j in xrange(row2):
            dist = np.linalg.norm(cnt1[i]-cnt2[j])
            if abs(dist) < 50 :
                return True
            elif i==row1-1 and j==row2-1:
                return False
#import image of corners created in step 3 so thresholding can function properly
img = cv2.imread('corners.png')
#Thresholding and Finding contours only works on grayscale image
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(gray,127,255,cv2.THRESH_BINARY)
contours,hier = cv2.findContours(thresh,cv2.RETR_EXTERNAL,2)
LENGTH = len(contours)
status = np.zeros((LENGTH,1))
for i,cnt1 in enumerate(contours):
    x = i
    if i != LENGTH-1:
        for j,cnt2 in enumerate(contours[i+1:]):
            x = x+1
            dist = find_if_close(cnt1,cnt2)
            if dist == True:
                val = min(status[i],status[x])
                status[x] = status[i] = val
            else:
                if status[x]==status[i]:
                    status[x] = i+1
unified = []
maximum = int(status.max())+1
for i in xrange(maximum):
    pos = np.where(status==i)[0]
    if pos.size != 0:
        cont = np.vstack(contours[i] for i in pos)
        hull = cv2.convexHull(cont) # I don't know why/how this is used
        unified.append(hull)
cv2.drawContours(img,unified,-1,(0,255,0),1) #Last argument specifies contour width
cv2.drawContours(thresh,unified,-1,255,-1)
# Thresh is the filled contour while img is the contour itself
# The contour surrounds the corners
#cv2.imshow('pic', thresh) #Produces black and white image
## Step 5) Merge via inpaint
river = np.uint8(river)
ztopo2 = np.uint8(ztopo2)
thresh[thresh>0] = 1
#new = river.copy()
merged = cv2.inpaint(river,thresh,12,cv2.INPAINT_TELEA)
plt.imshow(merged)
plt.colorbar()
plt.show()

opencv python - applying rotation matrix from rodrigues function

I am trying to simulate an image standing out of a marker. This is my code so far, which does what is pictured. Essentially, I just want to rotate the image so that it appears to stand orthogonal to the checkerboard.
As you can see, I use the code to find the transformation matrix between a normalized square image and the corresponding checkerboard corners. I then use warpPerspective to get the image you see. I know that I can use the rotation vectors from solvePnP to obtain a rotation matrix through Rodrigues(), but I don't know what the next step is.
def transformTheSurface(inputFrame):
    ret, frameLeft = capleft.read()
    capGray = cv2.cvtColor(frameLeft,cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(capGray, (5,4), None, cv2.CALIB_CB_NORMALIZE_IMAGE + cv2.CALIB_CB_ADAPTIVE_THRESH ) #,None,cv2.CALIB_CB_FAST_CHECK)
    if (found):
        npGameFrame = pygame.surfarray.array3d(inputFrame)
        inputFrameGray = cv2.cvtColor(npGameFrame,cv2.COLOR_BGR2GRAY)
        cv2.drawChessboardCorners(frameLeft, (5,4), corners, found)
        q = corners[[0, 4, 15, 19]]
        ret, rvecs, tvecs = cv2.solvePnP(objp, corners, mtx, dist)
        ptMatrix = cv2.getPerspectiveTransform( muffinCoords, q)
        npGameFrame = cv2.flip(npGameFrame, 0)
        ptMatrixWithXRot = ptMatrix * rodRotMat[0]
        #inputFrameConv = cv2.cvtColor(npGameFrame,cv2.COLOR_BGRA2GRAY)
        transMuffin = cv2.warpPerspective(npGameFrame, ptMatrix, (640, 480)) #, muffinImg, cv2.INTER_NEAREST, cv2.BORDER_CONSTANT, 0)
EDIT:
I have added some more code, in hopes of creating my own 3x3 transformation matrix. I used the following reference. Here is my code for that:
#initialization happens earlier in code
muffinCoords = np.zeros((4,2), np.float32)
muffinCoords[0] = (0,0)
muffinCoords[1] = (200,0)
muffinCoords[2] = (0,200)
muffinCoords[3] = (200,200)
A1 = np.zeros((4,3), np.float32)
A1[0] = (1,0,322)
A1[1] = (0,1,203)
A1[2] = (0,0,0)
A1[3] = (0,0,1)
R = np.zeros((4,4), np.float32)
R[3,3] = 1.0
T = np.zeros((4,4), np.float32)
T[0] = (1,0,0,0)
T[1] = (0,1,0,0)
T[2] = (0,0,1,0)
T[3] = (0,0,0,1)
#end initialization
#load calib data derived using cv2.calibrateCamera, my Fx and Fy are about 800
loadedCalibFileMTX = np.load('calibDataMTX.npy')
mtx = np.zeros((3,4), np.float32)
mtx[:3,:3] = loadedCalibFileMTX
#this is new to my code, creating what I interpret as Rx*Ry*Rz
ret, rvecCalc, tvecs = cv2.solvePnP(objp, corners, loadedCalibFileMTX, dist)
rodRotMat = cv2.Rodrigues(rvecCalc)
R[:3,:3] = rodRotMat[0]
#then I create T
T[0,3] = tvecs[0]
T[1,3] = tvecs[1]
T[2,3] = tvecs[2]
# CREATING CUSTOM TRANSFORM MATRIX
# A1 -> 2d to 3d projection matrix
# R-> rotation matrix as calculated by solve PnP, or Rx * Ry * Rz
# T -> converted translation matrix, reference from site, vectors pulled from tvecs of solvePnP
# mtx -> 3d to 2d matrix
# customTransformMat = mtx * (T * (R * A1)) {this is intended calculation of following}
first = np.dot(R, A1)
second = np.dot(T, first)
finalCalc = np.dot(mtx, second)
finalNorm = finalCalc/(finalCalc[2,2]) # to make sure that the [2,2] element is 1
transMuffin = cv2.warpPerspective(npGameFrame, finalNorm, (640, 480), None, cv2.INTER_NEAREST, cv2.BORDER_CONSTANT, 0)
#transMuffin is returned as undefined here, any help?
# using the cv2.getPerspectiveTransform method to find what you can find pictured at the top
ptMatrix = cv2.getPerspectiveTransform( muffinCoords, q)
I finally figured out the right methodology. You can find the code here: https://github.com/mikezucc/augmented-reality-fighter-pygame
NOTICE:
Almost ALL of the game code is written by Leif Theiden and is under the license specified in the .py files. The code relevant to the computer vision is in states.py. I used the game just to show others looking to get started in simple computer vision that it can be done.
My code opens a thread every time a new surface (simply a frame in PyGame) is to be displayed on the main window. I start a thread at that point and execute a simple computer vision function that does the following:
Searches a camera stream frame for the 5x4 chessboard (cv2.findChessboardCorners)
The found corners are then drawn onto the image
Using cv2.solvePnP, the approximate pose (rotation and translation vectors) is derived
The 3D points that describe a square are then projected from the 3D space determined by step 3 into 2D space. This is used to convert a predetermined 3D structure into something you can draw on a 2D image.
However, this step instead finds the transformation to get from a set of 2D square points (the dimensions of the game frame) to the newly found projected 2D points (of the 3D frame). Now you can see that what we are trying to do is simply a two-step transformation (see the sketch after this list).
I then perform a basic tutorial-style addition of the captured stream frame and the transformed game frame to get the final image
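A rough sketch of that two-step idea is below; the names objp, corners, mtx, dist and gameFrame are assumptions standing in for the values used in states.py, not code taken from the repository.
# Step A: project a 3D square lying on the marker plane into the camera image
ret, rvec, tvec = cv2.solvePnP(objp, corners, mtx, dist)
square_3d = np.float32([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
projected_2d, _ = cv2.projectPoints(square_3d, rvec, tvec, mtx, dist)
projected_2d = projected_2d.reshape(-1, 2).astype(np.float32)

# Step B: map the game frame's own corners onto those projected points and warp
h, w = gameFrame.shape[:2]
frame_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
ptMatrix = cv2.getPerspectiveTransform(frame_corners, projected_2d)
transformed_game = cv2.warpPerspective(gameFrame, ptMatrix, (640, 480))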
Variables:
+from3dTransMatrix -> the points of the 3D structure projected into 2D points; these are the red dots you see
+q -> this is the reference plane that we determine the pose from
+ptMatrix -> the final transformation, to transform the game frame to fit in the projected frame
Check out the screens in the topmost folder ;]
enjoy!

interpolation of transformPerspective openCV

I have made a small program that reads an image, transforms the perspective and then redraws the image. Currently I write each pixel to the output manually, but this way a lot of points are lost and the resulting image is very faint (the larger the transformation, the fainter the image). This is my code:
U, V = np.meshgrid(range(img_array.shape[1]), range(img_array.shape[0]))
UV = np.vstack((U.flatten(),V.flatten())).T
UV_warped = cv2.perspectiveTransform(np.array([UV]).astype(np.float32), H)
UV_warped = UV_warped[0]
UV_warped = UV_warped.astype(np.int)
x_translation = min(UV_warped[:,0])
y_translation = min(UV_warped[:,1])
new_width = np.amax(UV_warped[:,0])-np.amin(UV_warped[:,0])
new_height = np.amax(UV_warped[:,1])-np.amin(UV_warped[:,1])
UV_warped[:,0] = UV_warped[:,0] - int(x_translation)
UV_warped[:,1] = UV_warped[:,1] - int(y_translation)
# create box for image
new_img = np.ones((new_height+1, new_width+1))*255 # 0 = black 255 - white background
for uv_pix, UV_warped_pix in zip(UV, UV_warped):
    x_orig = uv_pix[0] # x in the original image
    y_orig = uv_pix[1] # y in the original image
    color = img_array[y_orig, x_orig]
    x_new = UV_warped_pix[0] # new x
    y_new = UV_warped_pix[1] # new y
    new_img[y_new, x_new] = np.array(color)
img = Image.fromarray(np.uint8(new_img))
img.save("test.jpg")
Is there a way to do this differently (with interpolation maybe?) so I won't lose so many pixels and the image is not so faint?
You are looking for the function warpPerspective (as already mentioned in the answer to your previous question, OpenCV perspective transform in python).
You can use this function like this (although I'm not familiar with Python):
cv2.warpPerspective(src_img, H_from_src_to_dst, dst_size, dst_img)
EDIT: You can refer to this OpenCV tutorial. It uses affine transformations, but similar OpenCV functions exist for perspective transformations.
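For reference, a minimal Python sketch of the same idea, adapted to the question's variables (img_array and H are assumed to be the image array and homography from the code above); warping the corners just tells us how large the output canvas has to be and how far to shift it:
import cv2
import numpy as np

# Warp the image corners to find the size and offset of the output canvas
h, w = img_array.shape[:2]
corners = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]]).reshape(-1, 1, 2)
warped_corners = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
x_min, y_min = np.floor(warped_corners.min(axis=0)).astype(int)
x_max, y_max = np.ceil(warped_corners.max(axis=0)).astype(int)

# Shift the homography so the warped image starts at (0, 0), then warp with interpolation
T = np.array([[1, 0, -x_min], [0, 1, -y_min], [0, 0, 1]], dtype=np.float64)
warped = cv2.warpPerspective(img_array, T @ H, (int(x_max - x_min), int(y_max - y_min)),
                             flags=cv2.INTER_LINEAR)
cv2.imwrite('test.jpg', warped)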
