I've been researching and trying a couple of functions to get what I want, and I feel like I might be overthinking it.
One version of my code is below. The sample image is here.
My end goal is to find the angle (yellow) of the approximated line with respect to the frame (green line), as shown in the Final image.
I haven't even got to the angle portion of the program yet.
The results I was obtaining from the code below were as follows: Canny, Closed, Small Removed.
Anybody have a better way of creating the difference and establishing the estimated line?
Any help is appreciated.
import cv2
import numpy as np
pX = int(512)
pY = int(768)
img = cv2.imread('IMAGE LOCATION', cv2.IMREAD_COLOR)
imgS = cv2.resize(img, (pX, pY))
aimg = cv2.imread('IMAGE LOCATION', cv2.IMREAD_GRAYSCALE)
# Blur image to reduce noise and resize for viewing
blur = cv2.medianBlur(aimg, 5)
rblur = cv2.resize(blur, (384, 512))
canny = cv2.Canny(rblur, 120, 255, 1)
cv2.imshow('canny', canny)
kernel = np.ones((2, 2), np.uint8)
#fringeMesh = cv2.dilate(canny, kernel, iterations=2)
#fringeMesh2 = cv2.dilate(fringeMesh, None, iterations=1)
#cv2.imshow('fringeMesh', fringeMesh2)
closing = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, kernel)
cv2.imshow('Closed', closing)
nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(closing, connectivity=8)
#connectedComponentswithStats yields every separated component with information on each of them, such as size
sizes = stats[1:, -1]; nb_components = nb_components - 1
min_size = 200 #num_pixels
fringeMesh3 = np.zeros(output.shape)
for i in range(0, nb_components):
    if sizes[i] >= min_size:
        fringeMesh3[output == i + 1] = 255
#contours, _ = cv2.findContours(fringeMesh3, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
#cv2.drawContours(fringeMesh3, contours, -1, (0, 255, 0), 1)
cv2.imshow('final', fringeMesh3)
#cv2.imshow("Natural", imgS)
#cv2.imshow("img", img)
cv2.imshow("aimg", aimg)
cv2.imshow("Blur", rblur)
cv2.waitKey()
cv2.destroyAllWindows()
You can fit a straight line to the first white pixel you encounter in each column, starting from the bottom.
I had to trim your image because you shared a screen grab of it with a window decoration, title and frame rather than your actual image:
import cv2
import math
import numpy as np
# Load image as greyscale
im = cv2.imread('trimmed.jpg', cv2.IMREAD_GRAYSCALE)
# Get index of first white pixel in each column, starting at the bottom
yvals = (im[::-1,:]>200).argmax(axis=0)
# Make the x values 0, 1, 2, 3...
xvals = np.arange(0,im.shape[1])
# Fit a line of the form y = mx + c
z = np.polyfit(xvals, yvals, 1)
# Convert the slope to an angle
angle = np.arctan(z[0]) * 180/math.pi
Note 1: The value of z (the result of fitting) is:
array([ -0.74002694, 428.01463745])
which means the equation of the line you are looking for is:
y = -0.74002694 * x + 428.01463745
i.e. the y-intercept is at row 428 from the bottom of the image.
Note 2: Try to avoid JPEG as an intermediate format in image processing - it is lossy and changes your pixel values. Where you have thresholded and done your morphology you are expecting values of exactly 255 and 0, but JPEG will lossily alter those values, and you end up having to test for a range or threshold again.
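A quick way to see the effect (a tiny self-contained check, not part of the original code; the file names are arbitrary):
import cv2
import numpy as np
# a binary mask that should contain only 0 and 255
mask = np.zeros((100, 100), np.uint8)
mask[40:60, :] = 255
cv2.imwrite('mask.png', mask)   # lossless
cv2.imwrite('mask.jpg', mask)   # lossy
print(np.unique(cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)))   # [  0 255]
print(np.unique(cv2.imread('mask.jpg', cv2.IMREAD_GRAYSCALE)))   # extra values creep in near the edge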
Your 'Closed' image seems to quite clearly segment the two regions, so I'd suggest you focus on turning that boundary into a line that you can do something with. Connected components analysis and contour detection don't really provide any useful information here, so aren't necessary.
One quite simple approach to finding the line angle is to find the first white pixel in each row. To get only the rows that are part of your diagonal, don't include rows where that pixel is too close to either side (e.g. within 5%). That gives you a set of points (pixel locations) on the boundary of your two types of grass.
From there you can either do a linear regression to get an equation for the straight line, or you can get two points by averaging the x values for the top and bottom half of the rows, and then calculate the gradient angle from that.
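A rough sketch of that row-scan plus regression idea (assuming closing is the binary 'Closed' image from the question, with values 0/255; the 5% margin and the variable names are my own choices):
import numpy as np
h, w = closing.shape
margin = int(0.05 * w)   # ignore rows whose first white pixel hugs either side
rows, cols = [], []
for y in range(h):
    x = np.argmax(closing[y] > 0)    # column of the first white pixel in this row
    if margin < x < w - margin:      # keep only points on the diagonal boundary
        rows.append(y)
        cols.append(x)
# linear regression x = m*y + c through the boundary points
m, c = np.polyfit(rows, cols, 1)
angle = np.degrees(np.arctan(m))     # angle of the boundary relative to the vertical image edge
print(angle)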
An alternative approach would be doing another morphological close with a very large kernel, to end up with just a solid white region and a solid black region, which you could turn into a line with canny or findContours. From there you could either get some points by averaging, use the endpoints, or given a smooth enough result from a large enough kernel you could detect the line with hough lines.
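If you go the large-kernel route instead, a sketch could look like this (the kernel size and Hough parameters are guesses that would need tuning on the real image):
import cv2
import numpy as np
big_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (51, 51))
solid = cv2.morphologyEx(closing, cv2.MORPH_CLOSE, big_kernel)   # one solid white and one solid black region
edges = cv2.Canny(solid, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                        minLineLength=100, maxLineGap=20)
if lines is not None:
    x1, y1, x2, y2 = lines[0][0]     # strongest detected segment
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))   # angle relative to the horizontal
    print(angle)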
Here is the grayscale uint8 image I'm working with: source grayscale image.
This image is the result of stitching 6 different colorized depth images into one. There are 3 rectangular objects in the image, and my goal is to find the edges of these objects. Obviously, I have no problem finding the external edges of the objects. But separating the objects from each other is a big pain.
Desired rectangles in image:
Input image as numpy array: https://drive.google.com/file/d/1uN9R4MgVQBzjJuMhcqWMUAhWDJCatHSf/view?usp=sharing
First of all I was trying to threshold binarize the image, following with some erosion + dilation processing to distinguish all three objects from each other. Then contours + minAreaRect would give me the necessary result. This option isn't robust enough, because objects in the scene can be so close to each other that the edge between them has the same depth as the roughness of the object surfaces. So important edges can be "blended" with object surface deviations. Consequently, sometimes, I'm getting two objects united in one object.
Using Canny edge detection with automatically calculated coefficients (from the picture median) catches all unnecessary brightness changes together with edges. Canny with manually adjusted coefficients works better, but it doesn't give a closed edge result + it is not reliable (it must be manually tweaked each time).
Another thing I tried - adjusting brightness of image nonlinearly (power-law transformation) - to increase brightness of objects surfaces leaving dark edge cavities unchanged.
p = 0.2
c = input_image.max() / (input_image.max() ** p)
output_image = (c * blur_gray.astype(np.float64) ** p).astype(np.uint8)
Here is a result: brightness adjusted image.
Threshold binarizing of this image gives better results in terms of edges. I tried Canny and Laplacian edge detection, but the obtained results give disconnected parts of the contour with some noise in the object surface areas: binarized result of Laplacian filtering. The next step, in my mind, must be some kind of edge estimation/restoration algorithm. I tried the Hough transform to get edge lines, but it didn't give any intelligible result.
It seems to me that I just go around in circles without achieving any intelligible result. So I request help. Probably my approach is fundamentally wrong, or I am missing something due to the fact that I do not have sufficient knowledge. Any ideas or suggestions?
P.S. After posting this, I'll continue and will try to implement the watershed segmentation algorithm; maybe it will work.
I tried to come up with a method to emphasize the vertical and horizontal lines separating the shapes.
I started by thresholding the original image (from numpy) and just used a [0, 10] range that seemed reasonable.
I ran a vertical and horizontal line kernel over the image to generate two masks
Vertical Kernel
Horizontal Kernel
I combined the two masks so that we'd have both of the lines separating the boxes
Now we can use findContours to find the boxes. I filtered out small contours to get just the 3 rectangles and used a 4-sided approximation to try and get just their sides.
import cv2
import numpy as np
import random
# approx n-sided shape
def approxSides(contour, numSides, step_size):
    # approx until numSides points
    num_points = 999999;
    percent = step_size;
    while num_points >= numSides:
        # get number of points
        epsilon = percent * cv2.arcLength(contour, True);
        approx = cv2.approxPolyDP(contour, epsilon, True);
        num_points = len(approx);
        # increment
        percent += step_size;
    # step back and get the points
    # there could be more than numSides points if our step size misses it
    percent -= step_size * 2;
    epsilon = percent * cv2.arcLength(contour, True);
    approx = cv2.approxPolyDP(contour, epsilon, True);
    return approx;
# convolve
def conv(mask, kernel, size, half):
    # get res
    h,w = mask.shape[:2];
    # loop
    nmask = np.zeros_like(mask);
    for y in range(half, h - half):
        print("Y: " + str(y) + " || " + str(h));
        for x in range(half, w - half):
            total = np.sum(np.multiply(mask[y-half:y+half+1, x-half:x+half+1], kernel));
            total /= 255;
            if total > half:
                nmask[y][x] = 255;
            else:
                nmask[y][x] = 0;
    return nmask;
# load numpy array
img = np.load("output_data.npy");
mask = cv2.inRange(img, 0, 10);
# resize
h,w = mask.shape[:2];
scale = 0.25;
h = int(h*scale);
w = int(w*scale);
mask = cv2.resize(mask, (w,h));
# use a line filter
size = 31; # size / 2 is max bridge size
half = int(size/2);
vKernel = np.zeros((size,size), np.float32);
for a in range(size):
    vKernel[a][half] = 1/size;
hKernel = np.zeros((size,size), np.float32);
for a in range(size):
    hKernel[half][a] = 1/size;
# run filters
vmask = cv2.filter2D(mask, -1, vKernel);
vmask = cv2.inRange(vmask, (half * 255 / size), 255);
hmask = cv2.filter2D(mask, -1, hKernel);
hmask = cv2.inRange(hmask, (half * 255 / size), 255);
combined = cv2.bitwise_or(vmask, hmask);
# contours OpenCV3.4, if you're using OpenCV 2 or 4, it returns (contours, _)
combined = cv2.bitwise_not(combined);
_, contours, _ = cv2.findContours(combined, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE);
# filter out small contours
cutoff_size = 1000;
big_cons = [];
for con in contours:
    area = cv2.contourArea(con);
    if area > cutoff_size:
        big_cons.append(con);
# do approx for 4-sided shape
colored = cv2.cvtColor(combined, cv2.COLOR_GRAY2BGR);
four_sides = [];
for con in big_cons:
    approx = approxSides(con, 4, 0.01);
    color = [random.randint(0,255) for a in range(3)];
    cv2.drawContours(colored, [approx], -1, color, 2);
    four_sides.append(approx); # not used for anything
# show
cv2.imshow("Image", img);
cv2.imshow("mask", mask);
cv2.imshow("vmask", vmask);
cv2.imshow("hmask", hmask);
cv2.imshow("combined", combined);
cv2.imshow("Color", colored);
cv2.waitKey(0);
I'm trying to find the tilt angle in a series of images which look like the created example data below. There should be a clear edge which is visible by eye. However, I'm struggling to extract the edge so far. Is Canny the right way of finding the edge here, or is there a better way?
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage.filters import gaussian_filter
# create data
xvals = np.arange(0,2000)
yvals = 10000 * np.exp((xvals - 1600)/200) + 100
yvals[1600:] = 100
blurred = gaussian_filter(yvals, sigma=20)
# create image
img = np.tile(blurred,(2000,1))
img = np.swapaxes(img,0,1)
# rotate image
rows,cols = img.shape
M = cv.getRotationMatrix2D((cols/2,rows/2),3.7,1)
img = cv.warpAffine(img,M,(cols,rows))
# convert to uint8 for Canny
img_8 = cv.convertScaleAbs(img,alpha=(255.0/65535.0))
fig,ax = plt.subplots(3)
ax[0].plot(xvals,blurred)
ax[1].imshow(img)
# find edge
ax[2].imshow(cv.Canny(img_8, 20, 100, apertureSize=5))
You can find the angle by converting your image to binary (cv2.threshold() with cv2.THRESH_BINARY) and then searching for contours.
When you locate your contour (line), you can fit a line to it with cv2.fitLine() and get two points of your line. My math is not very good, but I think that a linear equation has the form f(x) = k*x + n, and you can get k from those two points (k = (y2-y1)/(x2-x1)) and finally the angle phi = arctan(k). (If I'm wrong, please correct it.)
You can also use the rotated bounding rectangle - cv2.minAreaRect() - which already returns the angle of the rectangle (rect = cv2.minAreaRect() --> rect[2]). Hope it helps. Cheers!
Here is an example code:
import cv2
import numpy as np
import math
img = cv2.imread('angle.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, threshold = cv2.threshold(gray,170,255,cv2.THRESH_BINARY)
im, contours, hierarchy = cv2.findContours(threshold,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, False)
    if area < 10001 and 100 < perimeter < 1000:
        # first approach - fitting line and calculate with y=kx+n --> angle=tan^(-1)k
        rows,cols = img.shape[:2]
        [vx,vy,x,y] = cv2.fitLine(c, cv2.DIST_L2,0,0.01,0.01)
        lefty = int((-x*vy/vx) + y)
        righty = int(((cols-x)*vy/vx)+y)
        cv2.line(img,(cols-1,righty),(0,lefty),(0,255,0),2)
        (x1, y1) = (cols-1, righty)
        (x2, y2) = (0, lefty)
        k = (y2-y1)/(x2-x1)
        angle = math.atan(k)*180/math.pi
        print(angle)
        # second approach - cv2.minAreaRect --> returns (center (x,y), (width, height), angle of rotation)
        rect = cv2.minAreaRect(c)
        box = cv2.boxPoints(rect)
        box = np.int0(box)
        cv2.drawContours(img,[box],0,(0,0,255),2)
        print(rect[2])
cv2.imshow('img2', img)
Original image:
Output:
-3.8493663478518627
-3.7022125720977783
tribol,
It seems like you can take the gradient image G = |Gx| + |Gy| (normalize it to some known range), calculate its histogram and take the top bins of it. That will give you an approximate mask of the line. Then you can do line fitting. It'll give you a good initial guess.
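A loose sketch of that idea, using the question's uint8 image img_8 as input (the 98th-percentile cut-off is my guess at what "top bins" means):
import cv2
import numpy as np
gx = cv2.Sobel(img_8, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(img_8, cv2.CV_32F, 0, 1, ksize=3)
grad = np.abs(gx) + np.abs(gy)
grad = cv2.normalize(grad, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
# keep only the strongest gradients, e.g. the top 2% of values
mask = grad >= np.percentile(grad, 98)
# fit a line through the masked pixels as the initial guess
ys, xs = np.nonzero(mask)
pts = np.column_stack((xs, ys)).astype(np.float32)
vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
angle = np.degrees(np.arctan2(vy, vx))
print(angle)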
A very simple way of doing it is as follows... adjust my numbers to suit your knowledge of the data.
Normalise your image to a scale of 0-255.
Choose two points A and B, where A is 10% of the image width in from the left side and B is 10% in from the right side. The distance AB is now 0.8 x 2000, or 1600 px.
Go North from point A sampling your image till you exceed some sensible threshold that means you have met the tilted line. Note the Y value at this point, as YA.
Do the same, going North from point B till you meet the tilted line. Note the Y value at this point, as YB.
The angle you seek is:
arctan((YB - YA) / 1600)
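A literal sketch of those steps, assuming the uint8 image img_8 from the question (the half-of-maximum threshold and the helper name are my own choices):
import numpy as np
h, w = img_8.shape
xa, xb = int(0.1 * w), int(0.9 * w)         # columns of points A and B, 10% in from each side
thresh = 0.5 * img_8.max()                  # "some sensible threshold"
def first_bright_from_bottom(col):
    flipped = img_8[::-1, col]              # bottom row first, i.e. "go North"
    return h - 1 - np.argmax(flipped > thresh)
ya = first_bright_from_bottom(xa)
yb = first_bright_from_bottom(xb)
angle = np.degrees(np.arctan((yb - ya) / (xb - xa)))
print(angle)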
Thresholding as suggested by kavko didn't work that well, as the intensity varied from image to image (I could of course consider the histogram for each image to improve this approach). I ended up taking the maximum of the gradient in the y-direction:
from scipy import ndimage, stats
import numpy as np
def rotate_image(image):
    blur = ndimage.gaussian_filter(image, sigma=10)  # blur image first
    grad = np.gradient(blur, axis=0)  # take gradient along y-axis
    grad[grad>10000] = 0  # filter unreasonably high values
    idx_maxline = np.argmax(grad, axis=0)  # get y-indices of max slope = indices of edge
    mean = np.mean(idx_maxline)
    std = np.std(idx_maxline)
    idx = np.arange(idx_maxline.shape[0])
    idx_filtered = idx[(idx_maxline < mean+std) & (idx_maxline > mean - std)]  # filter positions where highest slope is at different position (blobs)
    slope, intercept, r_value, p_value, std_err = stats.linregress(idx_filtered, idx_maxline[idx_filtered])
    angle = np.degrees(np.arctan(slope))  # convert the regression slope to an angle in degrees
    out = ndimage.rotate(image, angle, reshape=False)
    return out
out = rotate_image(img)
plt.imshow(out)
I am trying to apply a circular mask to an image in Python. I found some example code on the web, but I'm not sure how to change the maths to get my circle in the correct place.
I have an image image_data of type numpy.ndarray with shape (3725, 4797, 3):
total_rows, total_cols, total_layers = image_data.shape
X, Y = np.ogrid[:total_rows, :total_cols]
center_row, center_col = total_rows/2, total_cols/2
dist_from_center = (X - total_rows)**2 + (Y - total_cols)**2
radius = (total_rows/2)**2
circular_mask = (dist_from_center > radius)
I see that this code applies euclidean distance to calculate dist_from_center, but I don't understand the X - total_rows and Y - total_cols part. This produces a mask that is a quarter of a circle, centered on the top-left of the image.
What role are X and Y playing on the circle? And how can I modify this code to produce a mask that is centered somewhere else in the image instead?
The algorithm you got online is partly wrong, at least for your purposes. If we have the following image, we want it masked like so:
The easiest way to create a mask like this is how your algorithm goes about it, but it's not presented in the way that you want, nor does it give you the ability to modify it in an easy way. What we need to do is look at the coordinates for each pixel in the image, and get a true/false value for whether or not that pixel is within the radius. For example, here's a zoomed in picture showing the circle radius and the pixels that were strictly within that radius:
Now, to figure out which pixels lie inside the circle, we'll need the indices of each pixel in the image. The function np.ogrid() gives two vectors, each containing the pixel locations (or indices): there's a column vector for the column indices and a row vector for the row indices:
>>> np.ogrid[:4,:5]
[array([[0],
[1],
[2],
[3]]), array([[0, 1, 2, 3, 4]])]
This format is useful for broadcasting: if we use them in certain functions, it will actually create a grid of all the indices instead of just those two vectors. We can thus use np.ogrid() to create the indices (or pixel coordinates) of the image, and then check each pixel coordinate to see if it's inside or outside the circle. In order to tell whether it's inside the circle, we can simply find the Euclidean distance from the center to every pixel location, and then if that distance is less than the circle radius, we'll mark that as included in the mask, and if it's greater than that, we'll exclude it from the mask.
Now we've got everything we need to make a function that creates this mask. Furthermore we'll add a little bit of nice functionality to it; we can send in the center and the radius, or have it automatically calculate them.
def create_circular_mask(h, w, center=None, radius=None):
    if center is None: # use the middle of the image
        center = (int(w/2), int(h/2))
    if radius is None: # use the smallest distance between the center and image walls
        radius = min(center[0], center[1], w-center[0], h-center[1])
    Y, X = np.ogrid[:h, :w]
    dist_from_center = np.sqrt((X - center[0])**2 + (Y-center[1])**2)
    mask = dist_from_center <= radius
    return mask
In this case, dist_from_center is a matrix the same height and width that is specified. It broadcasts the column and row index vectors into a matrix, where the value at each location is the distance from the center. If we were to visualize this matrix as an image (scaling it into the proper range), then it would be a gradient radiating from the center we specify:
So when we compare it to radius, it's identical to thresholding this gradient image.
Note that the final mask is a matrix of booleans; True if that location is within the radius from the specified center, False otherwise. So we can then use this mask as an indicator for a region of pixels we care about, or we can take the opposite of that boolean (~ in numpy) to select the pixels outside that region. So using this function to color pixels outside the circle black, like I did up at the top of this post, is as simple as:
h, w = img.shape[:2]
mask = create_circular_mask(h, w)
masked_img = img.copy()
masked_img[~mask] = 0
But if we wanted to create a circular mask at a different point than the center, we could specify it (note that the function is expecting the center coordinates in x, y order, not the indexing row, col = y, x order):
center = (int(w/4), int(h/4))
mask = create_circular_mask(h, w, center=center)
Which, since we're not giving a radius, would give us the largest radius so that the circle would still fit in the image bounds:
Or we could let it calculate the center but use a specified radius:
radius = h/4
mask = create_circular_mask(h, w, radius=radius)
Giving us a centered circle with a radius that doesn't extend exactly to the smallest dimension:
And finally, we could specify any radius and center we wanted, including a radius that extends outside the image bounds (and the center can even be outside the image bounds!):
center = (int(w/4), int(h/4))
radius = h/2
mask = create_circular_mask(h, w, center=center, radius=radius)
What the algorithm you found online does is equivalent to setting the center to (0, 0) and setting the radius to h:
mask = create_circular_mask(h, w, center=(0, 0), radius=h)
I'd like to offer a way to do this that doesn't involve the np.ogrid() function. I'll crop an image called "robot.jpg", which is 491 x 491 pixels. For readability I'm not going to define as many variables as I would in a real program:
Import libraries:
import matplotlib.pyplot as plt
from matplotlib import image
import numpy as np
Import the image, which I'll call "z". This is a color image so I'm also pulling out just a single color channel. Following that, I'll display it:
z = image.imread('robot.jpg')
z = z[:,:,1]
zimg = plt.imshow(z,cmap="gray")
plt.show()
robot.jpg as displayed by matplotlib.pyplot
To wind up with a numpy array (image matrix) with a circle in it to use as a mask, I'm going to start with this:
x = np.linspace(-10, 10, 491)
y = np.linspace(-10, 10, 491)
x, y = np.meshgrid(x, y)
x_0 = -3
y_0 = -6
mask = np.sqrt((x-x_0)**2+(y-y_0)**2)
Note the equation of a circle on that last line, where x_0 and y_0 are defining the center point of the circle in a grid which is 491 elements tall and wide. Because I defined the grid to go from -10 to 10 in both x and y, it is within that system of units that x_0 and y_0 set the center point of the circle with respect to the center of the image.
To see what that produces I run:
maskimg = plt.imshow(mask,cmap="gray")
plt.show()
Our "proto" masking circle
To turn that into an actual binary-valued mask, I'm just going to take every pixel below a certain value and set it to 0, and take every pixel above a certain value and set it to 256. The "certain value" will determine the radius of the circle in the same units defined above, so I'll call that 'r'. Here I'll set 'r' to something and then loop through every pixel in the mask to determine if it should be "on" or "off":
r = 7
for x in range(0, 491):
    for y in range(0, 491):
        if mask[x,y] < r:
            mask[x,y] = 0
        elif mask[x,y] >= r:
            mask[x,y] = 256
maskimg = plt.imshow(mask,cmap="gray")
plt.show()
The mask
Now I'll just multiply the mask by the image element-wise, then display the result:
z_masked = np.multiply(z,mask)
zimg_masked = plt.imshow(z_masked,cmap="gray")
plt.show()
To invert the mask I can just swap the 0 and the 256 in the thresholding loop above, and if I do that I get:
Masked version of robot.jpg
The other answers work, but they are slow, so I will propose an answer using skimage.draw.disk. Using this is faster and I find it simple to use. Simply specify the center of the circle and the radius, then use the output to create a mask.
from skimage.draw import disk
import numpy as np
mask = np.zeros((10, 10), dtype=np.uint8)
row = 4
col = 5
radius = 5
rr, cc = disk((row, col), radius, shape=mask.shape)  # center is passed as a (row, col) tuple; shape clips to the array bounds
mask[rr, cc] = 1
I'm working on a depth map with OpenCV. I can obtain it, but it is reconstructed from the left camera origin, and there is a little tilt of the latter; as you can see in the figure, the depth is "shifted" (the depth should be close and show no horizontal gradient):
I would like to express it as if seen at a zero angle. I tried the warpPerspective function as you can see below, but I obtain a null field...
P = np.dot(cam,np.dot(Transl,np.dot(Rot,A1)))
dst = cv2.warpPerspective(depth, P, (2048, 2048))
with :
#Projection 2D -> 3D matrix
A1 = np.zeros((4,3))
A1[0,0] = 1
A1[0,2] = -1024
A1[1,1] = 1
A1[1,2] = -1024
A1[3,2] = 1
#Rotation matrix around the Y axis
theta = np.deg2rad(5)
Rot = np.zeros((4,4))
Rot[0,0] = np.cos(theta)
Rot[0,2] = -np.sin(theta)
Rot[1,1] = 1
Rot[2,0] = np.sin(theta)
Rot[2,2] = np.cos(theta)
Rot[3,3] = 1
#Translation matrix on the X axis
dist = 0
Transl = np.zeros((4,4))
Transl[0,0] = 1
Transl[0,2] = dist
Transl[1,1] = 1
Transl[2,2] = 1
Transl[3,3] = 1
#Camera intrinsics matrix 3D -> 2D
cam = np.concatenate((C1,np.zeros((3,1))),axis=1)
cam[2,2] = 1
P = np.dot(cam,np.dot(Transl,np.dot(Rot,A1)))
dst = cv2.warpPerspective(Z0_0, P, (2048*3, 2048*3))
EDIT LATER :
You can download the 32MB field dataset here: https://filex.ec-lille.fr/get?k=cCBoyoV4tbmkzSV5bi6. Then, load and view the image with:
from matplotlib import pyplot as plt
import numpy as np
img = np.load('testZ0.npy')
plt.imshow(img)
plt.show()
I have got a rough solution in place. You can modify it later.
I used the mouse handling operations available in OpenCV to crop the region of interest in the given heatmap.
(Did I just say I used a mouse to crop the region?) Yes, I did. To learn more about mouse functions in OpenCV SEE THIS. Besides, there are many other SO questions that can help you in this regard.:)
Using those functions I was able to obtain the following:
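If you want to reproduce that interactive crop without writing your own mouse callback, OpenCV's built-in cv2.selectROI() is one option (an assumption on my part - the original used custom mouse handling, and the file names are hypothetical):
import cv2
img = cv2.imread('heatmap.png')                  # hypothetical file name
x, y, w, h = cv2.selectROI('select ROI', img)    # drag a rectangle, then press ENTER or SPACE
roi = img[y:y + h, x:x + w]
cv2.destroyAllWindows()
cv2.imwrite('cropped.png', roi)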
Now to your question of removing the tilt. I used the homography principle by taking the corner points of the image above and using them on a 'white' image of a definite size. I used the cv2.findHomography() function for this.
Now using the cv2.warpPerspective() function in OpenCV, I was able to obtain the following:
Now you can apply the required scale to this image as you wanted.
CODE:
I have also attached some snippets of code for your perusal:
#First I created an image of white color of a definite size
back = np.ones((435, 379, 3)) # size
back[:] = (255, 255, 255) # white color
Next I obtained the corner points pts_src on the tilted image below:
pts_src = np.array([[25.0, 2.0],[403.0,22.0],[375.0,436.0],[6.0,433.0]])
I wanted the points above to be mapped to the points 'pts_dst' given below :
pts_dst = np.array([[2.0, 2.0], [379.0, 2.0], [379.0, 435.0],[2.0, 435.0]])
Now I used the principle of homography:
h, status = cv2.findHomography(pts_src, pts_dst)
Finally I mapped the original image to the white image using perspective transform.
fin = cv2.warpPerspective(img, h, (back.shape[1],back.shape[0]))
# img -> original tilted image.
# back -> image of white color.
Hope this helps! I also got to learn a great deal from this question.
Note: The points fed to the 'cv2.findHomography()' must be in float.
For more info on Homography , visit THIS PAGE