Using 3D perception in opencv2 - python

Can anyone please explain whether it is possible, and if so how, to work with cv2.getPerspectiveTransform()?
I have 3D information about my image: I know the lengths of a and b, and also the different heights of c, d, e, f and g. I made the heights different to get more 3D information, but if that isn't needed it would be preferable.
Ultimately I need to know where the pink dot really is in the rectangle after applying the transform to the [x, y] position I get from the camera feed.

If you denote by C,D,E,F the positions of the four corners of the black polygon in the original image (each of them is a 2D point), and C',D',E',F' the positions of the corresponding points in your target image (probably (0,0), (a, 0), (a, b), (0, b)), M = cv2.getPerspectiveTransform({C,D,E,F}, {C',D',E',F'}) is the perspective transformation from one polygon to the other.
Given the position G of the vertical projection of g onto the black polygon in the original image, you can compute its position in the target image as cv2.transform(G, M). This will return a point (x,y,z), where the last coordinate z is a normalizing term. This z is zero when your point would be "at infinity" in the target image. If z is not zero, the point you are looking for is (x/z, y/z).
If z is zero, your point is at infinity, in the direction of the support of vector (x, y) (think of the case where G would be at the intersection of the supporting lines of two opposite sides of the black polygon in the source image).
If you know that the heights of c,d,e,f,g are equal, these points are also coplanar, and the exact same method applies to c,d,e,f,g instead of C,D,E,F,G.
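A minimal sketch of the recipe above (the corner pixel coordinates, the dot position G and the rectangle size a×b are made-up placeholders, not values from the question):

import numpy as np
import cv2

a, b = 400, 300  # real-world size of the rectangle, in whatever unit you want the answer in

# C, D, E, F: pixel coordinates of the four polygon corners in the camera image
src = np.float32([[112, 53], [371, 77], [389, 290], [98, 266]])
# C', D', E', F': where those corners should land in the rectified a x b rectangle
dst = np.float32([[0, 0], [a, 0], [a, b], [0, b]])

M = cv2.getPerspectiveTransform(src, dst)

# G: pixel position of the pink dot in the camera image, shaped (1, 1, 2) as OpenCV expects
G = np.float32([[[215, 160]]])

# cv2.perspectiveTransform does the division by z for you;
# cv2.transform(G, M) would return the raw homogeneous (x, y, z) triple instead.
g_rect = cv2.perspectiveTransform(G, M)
print(g_rect)  # position of the dot inside the a x b rectangle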

Related

Map corresponding points between Delaunay triangles

I'm trying to morph two images of faces using an inverse warp. I have the Delaunay triangles for both images as well as all transformation matrices for all pairs of corresponding triangles.
I have applied the matrix to every pixel inside the triangles, but the image I am getting is all messed up, and some pixels aren't being filled in either.
I suspect the vertex lists are not in the same order, which means the triangles don't correspond. Or it could just be me messing up the row/col order.
Here's my code:
from scipy.spatial import Delaunay
from skimage.draw import polygon
import numpy as np
import cv2

def drawDelaunay(img, landmarks, color):
    tri = Delaunay(landmarks)
    vertices = []
    for t in landmarks[tri.simplices]:
        # t = [int(i) for i in t]
        pt1 = [t[0][0], t[0][1]]
        pt2 = [t[1][0], t[1][1]]
        pt3 = [t[2][0], t[2][1]]
        cv2.line(img, pt1, pt2, color, 1, cv2.LINE_AA, 0)
        cv2.line(img, pt2, pt3, color, 1, cv2.LINE_AA, 0)
        cv2.line(img, pt3, pt1, color, 1, cv2.LINE_AA, 0)
        vertices.append([pt1, pt2, pt3])
    return img, vertices

def getAffineMat(triangle1, triangle2):
    x = np.transpose(np.matrix([*triangle1]))
    y = np.transpose(np.matrix([*triangle2]))
    # Add ones to bottom of x and y
    x = np.vstack((x, [1, 1, 1]))
    y = np.vstack((y, [1, 1, 1]))
    xInv = np.linalg.pinv(x)
    return np.dot(y, xInv)

srcImg = face2
srcRows, srcCols, srcDepth = face2.shape
destImg = np.zeros(face1.shape, dtype=np.uint8)

for triangle1, triangle2 in zip(vertices1, vertices2):
    transMat = getAffineMat(triangle1, triangle2)
    r, c = list(map(list, zip(*triangle2)))
    rr, cc = polygon(r, c)
    for row, col in zip(rr, cc):
        transformed = np.dot(transMat, [col, row, 1])
        srcX, srcY, *_ = np.array(transformed.T)
        # Check if pixel is within image boundaries
        if isWithinBounds(srcCols, srcRows, col, row):
            # Interpolate the color of the pixel from the four nearest pixels
            color = bilinearInterpolation(srcImg, srcX, srcY)
            # Set the color of the current pixel in the destination image
            destImg[row, col] = color
I wish to implement this without getAffineTransform or warpAffine. Any help would be much appreciated!
Sources:
Transfer coordinates from one triangle to another triangle
https://devendrapratapyadav.github.io/FaceMorphing/
But you don't have corresponding triangles! This looks like 2 separate Delaunay triangulations, maybe built on matching points, but still with no matching triangles. You can't do two Delaunay triangulations, one in each image, and expect them to match. You need one Delaunay triangulation, and then use the same edges on both sides (so, for at least one side, the triangulation will not be exactly Delaunay).
Look for example at the top-right corner of your images.
On one side you have 4 outgoing edges (counting those we can't see because they coincide with the image border, but they have to be there), on the other you have 6 outgoing edges.
The number of edges connected to two matching vertices is supposed to be the same (otherwise, how could you warp anything?).
So, I think (I can only surmise, since you did not provide any code for that part and postulate that the triangulation is correct, while I am pretty sure it is the triangulation that is wrong) that you took two sets of matching points, performed two Delaunay triangulations on those two sets, and expected to be able to match the triangles, even though they are not at all the same triangles.
Edit: how to transform
(in reply to your question in comment)
It's the same triangulation. You have a list of points p₁, p₂, p₃, ..., pₙ in the first image and a matching list of points q₁, q₂, q₃, ..., qₙ in the second image. You perform a triangulation on the first image. Its output should be a list of triplets of indices, such as (1,3,4), (1,2,3), ..., meaning that the optimal triangulation of the 1st image is the one made of triangles (p₁,p₃,p₄), (p₁,p₂,p₃), ...
And in the second image, you use the triangulation (q₁,q₃,q₄), (q₁,q₂,q₃), ...
Even if it is not the optimal triangulation of q₁,q₂,...,qₙ (the one that maximizes the smallest angle), it should not be far from it, as long as q₁,q₂,...,qₙ are not that different from p₁,p₂,...,pₙ (which they are not supposed to be, if you matched both images consistently).
So, the transformation matrices are the ones mapping coordinates between matching triangles (there is one transformation for each pair of matching triangles).
To decide which point (x',y') of the second image matches point (x,y) of the first image, you need to
identify in which triangle (i,j,k) (that is, (pᵢ,pⱼ,pₖ)) the point (x,y) lies,
find the barycentric coordinates of (x,y) inside this triangle: (x,y) = αpᵢ + βpⱼ + γpₖ,
assume that (x',y') has the same barycentric coordinates inside the matching triangle, that is (x',y') = αqᵢ + βqⱼ + γqₖ.
The transformation matrix (for triangle (i,j,k)) is the one going from (x,y) to (x',y').
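A small sketch of steps 2 and 3 above (the triangle coordinates are made-up placeholders; finding which triangle contains the point is left out):

import numpy as np

def barycentric(p, a, b, c):
    # Barycentric coordinates (alpha, beta, gamma) of point p in triangle (a, b, c)
    T = np.array([[a[0] - c[0], b[0] - c[0]],
                  [a[1] - c[1], b[1] - c[1]]], dtype=float)
    alpha, beta = np.linalg.solve(T, np.asarray(p, dtype=float) - np.asarray(c, dtype=float))
    return alpha, beta, 1.0 - alpha - beta

def map_point(p, tri_src, tri_dst):
    # Map p from a source triangle (p_i, p_j, p_k) to the matching destination triangle (q_i, q_j, q_k)
    alpha, beta, gamma = barycentric(p, *tri_src)
    a, b, c = (np.asarray(v, dtype=float) for v in tri_dst)
    return alpha * a + beta * b + gamma * c

tri_src = [(10, 10), (60, 15), (30, 70)]   # (p_i, p_j, p_k), example values
tri_dst = [(12, 14), (65, 20), (28, 75)]   # (q_i, q_j, q_k), example values
print(map_point((30, 30), tri_src, tri_dst))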

How would I make a 2D colormap where each point represents a 3d angle?

I have an HxWx3 array where the 3 denotes x, y, z coordinates (so it's HxW 3D points, organised in a 2D array of height H and width W). Actually I only care about the angle that (x, y, z) makes from the origin. I want to turn this into an RGB image of size HxW that's informative. Informative means that angles close to one another should always have similar colors, and angles far from one another should have different colors.
For context to anyone who works with computer vision: I want to do a colormap of the normal obtained from a depth map.
EDIT - If it makes it any easier, I only need one hemisphere, not a whole sphere. So circular consistency only needs to happen in one dimension, not two (I think).
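One common option (a sketch of a standard approach, not something from the original post) is plain normal-map coloring: normalize each vector and map its components from [-1, 1] to [0, 1] as RGB. The mapping is continuous and injective, so nearby directions get similar colors and different directions get different colors.

import numpy as np

def normals_to_rgb(normals):
    # normals: HxWx3 array of (x, y, z) vectors; returns an HxWx3 uint8 image
    n = normals / (np.linalg.norm(normals, axis=2, keepdims=True) + 1e-12)
    rgb = (n + 1.0) * 0.5          # map each component from [-1, 1] to [0, 1]
    return (rgb * 255).astype(np.uint8)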

Calculating how much area of an ellipsis is covered by a certain pixel in Python

I am working with Python and currently trying to figure out the following: if I place an ellipse, of which the semi-axes, the centre's location and the orientation are known, on a pixel map, and the ellipse is large enough to cover multiple pixels, how do I figure out which pixel covers which percentage of the total area of the ellipse? As an example, let's take a map of 10*10 pixels (i.e. an interval of [0,9]) and an ellipse with its centre at (6.5, 6.5), semi-axes of (0.5, 1.5) and an orientation angle of 30° between the horizontal and the semi-major axis. I honestly have no idea, so any help is appreciated.
edit: To clarify, the pixels (or cells) have an area. I know the area of the ellipse, its position and its orientation, and I want to find out how much of that area is located within pixel 1, how much within pixel 2, etc.
Following the equation of an ellipse with centre (x₀, y₀), semi-axes a, b and orientation α:
((x−x₀)·cos α + (y−y₀)·sin α)² / a² + ((x−x₀)·sin α − (y−y₀)·cos α)² / b² ≤ 1
The easiest way to find which pixels of your mesh are inside and which are out is to plug each pixel's (x, y) (together with α) into the above equation.
If the result is ≤ 1, the pixel is inside; otherwise, it is outside.
You can then count the pixels.
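If you need the actual covered fraction per pixel rather than just an in/out count, a simple extension of this test is to sample several sub-pixel points per pixel and count how many satisfy the inequality. A rough sketch using the example values from the question:

import numpy as np

cx, cy = 6.5, 6.5          # ellipse centre
a, b = 1.5, 0.5            # semi-major / semi-minor axes
alpha = np.deg2rad(30)     # orientation of the semi-major axis

def inside(x, y):
    # rotate into the ellipse's own frame, then apply the ellipse equation
    u = (x - cx) * np.cos(alpha) + (y - cy) * np.sin(alpha)
    v = -(x - cx) * np.sin(alpha) + (y - cy) * np.cos(alpha)
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0

n = 10                     # 10x10 pixel map
s = 8                      # s*s sample points per pixel
offsets = (np.arange(s) + 0.5) / s
coverage = np.zeros((n, n))
for row in range(n):
    for col in range(n):
        xs = col + offsets[:, None]          # sample x positions inside the pixel
        ys = row + offsets[None, :]          # sample y positions inside the pixel
        coverage[row, col] = inside(xs, ys).mean()

# fraction of the ellipse's total area falling in each pixel
fraction = coverage / coverage.sum()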
This is a math problem; try math.stackexchange rather than Stack Overflow.
I suggest you transform the plane: a translation to get the centre at the origin, a rotation to align the ellipse's axes with the x-y axes, and a dilation along x to turn the ellipse into a circle. Then work with a circle on rhombus-shaped tiles.
Your problem won't be any more or less tractable in the new formulation, but the math and code you have to work on will be slightly lighter.
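A small sketch of that change of coordinates (my own illustration of the suggestion, not code from the answer; the centre, semi-axes and angle are the question's example values):

import numpy as np

cx, cy, a, b = 6.5, 6.5, 1.5, 0.5   # example centre and semi-axes
alpha = np.deg2rad(30)              # example orientation

def to_circle_frame(x, y):
    # translate, rotate onto the ellipse's axes, then scale x by b/a
    u = (x - cx) * np.cos(alpha) + (y - cy) * np.sin(alpha)
    v = -(x - cx) * np.sin(alpha) + (y - cy) * np.cos(alpha)
    return u * (b / a), v           # the ellipse becomes the circle u'**2 + v**2 <= b**2

Under this mapping the square pixels become the rhombus-like tiles mentioned above.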

ReprojectImageTo3D corresponding pixel in image (Stereo Vision)

I have a disparity map.
Based on the disparity map, hovering on the 'left image' displays:
the x and y of the image, so if I hover on the top-left-most pixel, it will display x:0, y:0.
The next step is to display the distance of that specific pixel. To make my life easy, I will try to do it with reprojectImageTo3D(disp, Q).
I got Q from stereoRectify.
Now, reprojectImageTo3D in Python returns an n-by-3 matrix.
So I can see it is a row of x y z coordinates. I'm wondering how I can know which pixel these coordinates correspond to.
This is a sample of the 3D points that I saved using numpy.savetxt
http://pastebin.com/wwDCYwjA
BTW: I'm doing everything in Python, but the GUI is in Java; I don't have time to study GUIs in Python.
If you calculate your disparity map correctly, you should get an (n1, n2, 1)-dimensional array, where n1, n2 are the numbers of the image's pixels along each axis and 1 is the number of channels (a single channel, which contains the distance in pixels between corresponding pixels from the left and right images). You can check that by typing disp.shape. After that you pass your disparity map's ndarray to the reprojectImageTo3D function and get back an ndarray of shape (n1, n2, 3) (the third dimension contains the X, Y, Z coords of the 3D point). You can check that by typing:
threeDImage = reprojectImageTo3D(disp, Q)
print(threeDImage.shape)
And finally, since you built your disparity map from the left image, each pixel at (row, col) on the left image (or disparity map) corresponds to the 3D point threeDImage[row][col]. Keep in mind that row:0, column:0 is the top-left element of the matrix, following how OpenCV handles images:
0/0---column--->
|
|
row
|
|
v
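A minimal sketch of the lookup described above (the stereo matcher settings and the hovered pixel position are placeholders; left_rectified, right_rectified and Q are assumed to come from your calibration/rectification step):

import cv2
import numpy as np

# disparity computed on the rectified left image, e.g. with StereoSGBM
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
disp = stereo.compute(left_rectified, right_rectified).astype(np.float32) / 16.0

points3d = cv2.reprojectImageTo3D(disp, Q)   # shape: (rows, cols, 3)

row, col = 120, 200                          # pixel hovered in the left image
X, Y, Z = points3d[row, col]                 # 3D coordinates of that pixel
print(X, Y, Z)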

OpenCV - Estimating Box dimensions in Python

This is the continuation of my previous question. I now have an image like this.
Here the corners are detected. Now I am trying to estimate the dimensions of the bigger box, while the smaller black box's dimensions are known.
Can anyone guide me on the best way to estimate the dimensions of the box? I can do it with the simple Euclidean distance, but I don't know if that is the correct way. And even if it is, then from a list of tuples (coordinates), how can I find distances like A-B or A-D or G-H, but not like A-C or A-F?
The sequence has to be preserved in order to get correct dimensions. Also, I have two boxes here, so when I create the list of corner coordinates it contains all coordinates from A-J and I don't know which coordinates belong to which box. How can I preserve that for two different boxes? I want to run this code on more similar images.
Note: a corner in this image is not a single point but a set of points, so I clustered each set and averaged it to get a single (x, y) coordinate per corner.
I have tried my best to explain my questions. Will be extremely glad to have some answers :) Thanks.
For the
"How can I find distances like A-B or A-D or G-H but not like A-C or A-F"
part:
Here's some quick code, not efficient for images with lots of corners, but for your case it's OK. The idea is to start from the dilated edge image you got in your other question (with only the big box, but the idea is the same for the image where the small box is also present);
then, for every possible pair of corners, you look at a few points on an imaginary line between them and check whether these points actually fall on a real line in the image.
import cv2
import numpy as np

# Get intermediate points on the line between point1 and point2.
# For example, calling this function with (p1, p2, 3) returns the point
# on the line between p1 and p2, at 1/3 of the distance from p1.
def get_intermediate_point(p1, p2, ratio):
    return [p1[0] + (p2[0] - p1[0]) / ratio, p1[1] + (p2[1] - p1[1]) / ratio]

# open the dilated edge image
img = cv2.imread(dilated_edges, 0)

# corners you got from your segmentation and other question
corners = [[29, 94], [102, 21], [184, 52], [183, 547], [101, 576], [27, 509]]
nb_corners = len(corners)

# intermediate points between corners you are going to test
ratios = [2, 4, 6, 8]  # in this example: the middle point, the quarter point, etc.
nb_ratios = len(ratios)

# list which will contain all connected corners
connected_corners = []

# double loop going through all possible pairs of corners
for i in range(nb_corners - 1):
    for j in range(i + 1, nb_corners):
        cpt = 0
        c1 = corners[i]; c2 = corners[j]
        # test every intermediate point between the selected corners
        for ratio in ratios:
            p = get_intermediate_point(c1, c2, ratio)
            # check if these points fall on a white pixel in the image
            if img[int(p[0]), int(p[1])] == 255:
                cpt += 1
        # if enough of the intermediate points fall on a white pixel
        if cpt >= int(nb_ratios * 0.75):
            # then we assume that the 2 corners are indeed connected by a line
            connected_corners.append([i, j])

print(connected_corners)
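Once you have the connected pairs, the edge lengths the question asks about are just Euclidean distances between those corners. A hedged follow-up sketch (the reference edge and its real-world length are hypothetical placeholders); note this simple scaling is only meaningful if the measured edges lie roughly in the same plane and at the same depth as the reference edge, and the next answer explains the general limitation:

import numpy as np

# pixel length of every detected edge
edge_px = {tuple(pair): np.linalg.norm(np.subtract(corners[pair[0]], corners[pair[1]]))
           for pair in connected_corners}

# if one edge belongs to the small box of known size, use it as a scale reference
ref_pair = (0, 1)        # hypothetical: an edge of the known black box
ref_real_len = 5.0       # hypothetical real length of that edge, e.g. in cm
scale = ref_real_len / edge_px[ref_pair]

for pair, px in edge_px.items():
    print(pair, px * scale)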
In general you cannot, since any reconstruction is only defined up to scale.
Basically, given a calibrated camera and six 2D points (6×2 = 12 equations), you want to find six 3D points plus a scale factor (6×3 + 1 = 19 unknowns). There aren't enough equations.
In order to do so, you will have to make some assumptions and insert them into the equations.
For example:
The box edges are perpendicular to each other (which means that every two neighbouring points share at least one coordinate value).
You need to assume that you know the height of the bottom points, i.e. that they lie on the same plane as your calibration box (this gives you the Z of the visible bottom points).
Hopefully, these constraints are enough to leave you with at least as many equations as unknowns, so you can solve the linear equation set.
