Converting Blender's UV vectors into 3D coordinates - python

I have converted UV vector coordinates onto a 3D plane using Blender's bmesh module. When I use bmesh to create the new mesh with
bm.faces.new((vert))
it creates undesirable faces, or more accurately, the edges and faces generated by the function are not what I intended.
Is there a way with bmesh to prevent the generation of edges that are longer than the maximum connected distance of the mesh?
This picture shows the generated edges and faces on the left, marked in green, compared with the seam that I marked for UV unwrapping.

Each face has a list of edges, an edge can give you its length, and if it's too long you can remove it. Removing one of the edges will also delete the face.
f = bm.faces.new((vert))
for e in f.edges:
    if e.calc_length() > max_length:
        bm.edges.remove(e)
You can also get the distance between two vertices before using them to create a face with (v1.co - v2.co).length; the trick is knowing which pairs will be joined as edges and which will lie diagonally across the face.
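A minimal sketch of the check-before-creating approach, assuming coords is a list of 3D coordinates derived from your UV layout and max_length is a threshold you pick yourself (both names are placeholders, not part of the bmesh API):
import bmesh
# hypothetical inputs: 3D coordinates built from the UV data and a length threshold
coords = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
max_length = 1.5
bm = bmesh.new()
verts = [bm.verts.new(co) for co in coords]
# only create the face if no pair of consecutive vertices is further apart than max_length
if all((verts[i].co - verts[(i + 1) % len(verts)].co).length <= max_length
       for i in range(len(verts))):
    bm.faces.new(verts)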

Related

Open CV Contours - Splitting concave polygon into multiple convex ones

I have the below image in a numpy array
I want to separate the blocks into individual contours or any coordinate representation.
I then want to transform any concave polygons into multiple convex polygons.
Like this
So far I've managed to isolate each block into contours with OpenCV, but is there an easy way to split the L-shaped objects into two or more square blocks? The new contours of each shape can overlap if needed.
It may also be the case that I have an Image like this which does not have such straight lines.
I have used cv2.approxPolyDP to draw the shape, but again they are concave and I need them splitting.
Any help appreciated.
One way I can think of is, for each contour, to find its convex hull first. See this link.
Now find the defect points between the contour and its convex hull. See this link.
Now, using the defect distances, find the point with the maximum distance. This point is where the two objects of the L shape are joined. From this point, draw a line perpendicular to the contour tangent at that point, and find contours again. The resulting contours will be the two contours of the L shape.
Note: with this approach it is possible that some part of one object ends up in the other when dividing them at the boundary.
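A minimal sketch of the hull/defect part of this approach, assuming contour is one contour already obtained from cv2.findContours (the actual split, drawing the perpendicular line and re-running findContours, is not shown):
import cv2
import numpy as np
# hypothetical input: one contour of an L-shaped block from cv2.findContours
hull = cv2.convexHull(contour, returnPoints=False)
defects = cv2.convexityDefects(contour, hull)
if defects is not None:
    # each defect row is (start_idx, end_idx, farthest_point_idx, distance * 256)
    start, end, far, dist = defects[defects[:, 0, 3].argmax(), 0]
    split_point = tuple(contour[far][0])  # deepest point of the notch where the L is joined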
Ok so thanks Rahul for your answer.
I ended up finding a package that helped me triangulate the polygons, which solved my issue.
Install it with:
pip install sect
Then :
from sect.triangulation import constrained_delaunay_triangles
Take the contours generated by OpenCV - this generates them as below.
Then "smooth" the contours so there are fewer points. I've used this:
epsilon = 0.005 * cv2.arcLength(contour, True)
approx = cv2.approxPolyDP(contour, epsilon, True)
then run it through sect
constrained_delaunay_triangles([tuple(x) for x in approx.squeeze()])
The output splits the polygons into triangles, removing ALL concave polygons entirely.
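Putting the steps together, a rough end-to-end sketch, assuming mask is the binary image of the blocks and an OpenCV 4 findContours signature (the sect call mirrors the one above; its exact signature may differ between sect versions):
import cv2
from sect.triangulation import constrained_delaunay_triangles
# hypothetical input: binary image with the blocks as white regions
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
all_triangles = []
for contour in contours:
    # reduce the number of points on each contour
    epsilon = 0.005 * cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, epsilon, True)
    # triangulate the simplified polygon; every resulting triangle is convex
    all_triangles.extend(constrained_delaunay_triangles([tuple(x) for x in approx.squeeze()]))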

OpenCV - Estimating Box dimensions in Python

This is the continuation of my previous question. I now have an image like this
Here the corners are detected. Now I am trying to estimate the dimensions of the bigger box while smaller black box dimensions are known.
Can anyone guide me on the best way to estimate the dimensions of the box? I can do it with simple Euclidean distance, but I don't know if that is the correct way, or, even if it is, how, from a list of coordinate tuples, I can find distances like A-B, A-D or G-H but not ones like A-C or A-F.
The sequence has to be preserved in order to get correct dimensions. Also, I have two boxes here, so the list of corner coordinates contains all the coordinates from A to J and I don't know which coordinates belong to which box. How can I keep that separation for two different boxes? I want to run this code on more similar images.
Note: the corners in this image are not single points but sets of points, so I clustered each set of corner points and averaged them to get a single (x, y) coordinate per corner.
I have tried my best to explain my questions. Will be extremely glad to have some answers :) Thanks.
For the "How can I find distances like A-B or A-D or G-H but not like A-C or A-F" part:
Here's some quick code. It's not efficient for images with lots of corners, but for your case it's OK. The idea is to start from the dilated edge image you got in your other question (with only the big box, but the idea is the same for the image that also contains the small box);
then, for every possible combination of corners, you look at a few points on an imaginary line between them and check whether these points actually fall on a real line in the image.
import cv2
import numpy as np

# getting intermediate points on the line between point1 and point2
# for example, calling this function with (p1, p2, 3) will return the point
# on the line between p1 and p2, at 1/3 of the distance from p1
def get_intermediate_point(p1, p2, ratio):
    return [int(p1[0] + (p2[0] - p1[0]) / ratio),
            int(p1[1] + (p2[1] - p1[1]) / ratio)]

# open the dilated edge image (dilated_edges is the path to that image)
img = cv2.imread(dilated_edges, 0)

# corners you got from your segmentation and other question
corners = [[29, 94], [102, 21], [184, 52], [183, 547], [101, 576], [27, 509]]
nb_corners = len(corners)

# intermediate points between corners you are going to test
ratios = [2, 4, 6, 8]  # in this example the middle point, the quarter point, etc.
nb_ratios = len(ratios)

# list which will contain all pairs of connected corners
connected_corners = []

# double loop going through all possible pairs of corners
for i in range(nb_corners - 1):
    for j in range(i + 1, nb_corners):
        cpt = 0
        c1 = corners[i]; c2 = corners[j]
        # testing every intermediate point between the selected corners
        for ratio in ratios:
            p = get_intermediate_point(c1, c2, ratio)
            # checking if these points fall on a white pixel in the image
            if img[p[0], p[1]] == 255:
                cpt += 1
        # if enough of the intermediate points fall on a white pixel
        if cpt >= int(nb_ratios * 0.75):
            # then we assume that the 2 corners are indeed connected by a line
            connected_corners.append([i, j])

print(connected_corners)
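Once connected_corners is known, the edge lengths in pixels follow from a plain Euclidean distance per connected pair; to turn them into real-world dimensions you would scale them using the known dimensions of the small black box. A short, hypothetical follow-up:
for i, j in connected_corners:
    length_px = np.linalg.norm(np.array(corners[i]) - np.array(corners[j]))
    print("corners", i, "-", j, ":", length_px, "px")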
In general you cannot, since any reconstruction is only determined up to scale.
Basically, given a calibrated camera and 6 2D points you have 6 x 2 = 12 equations, but you are looking for 6 3D points plus a scale factor, i.e. 6 x 3 + 1 = 19 unknowns. There aren't enough equations.
In order to do it anyway, you will have to make some assumptions and insert them into the equations. For example:
The box edges are perpendicular to each other (which means that every two neighbouring points share at least one coordinate value).
You assume that you know the height of the bottom points, i.e. that they lie on the same plane as your calibration box (this gives you the Z of the visible bottom points).
Hopefully, these constraints leave you with no more unknowns than equations, so you can solve the linear system.

reduce perimeter of polygon by eliminating points

I don't know exactly how to state this question, so consider the following picture.
The polygons were generated by detecting contours of a rasterized map of different region boundaries. Notice the "inlets" created by letters in the original image. I'd like to identify sets of points which, if their endpoints were connected, would reduce the length of the polygon's perimeter by at least some value. I tried generating the convex hull for each polygon and basing the perimeter savings on the difference between the perimeter distance along the polygon between consecutive hull vertices and the straight-line distance between those vertices, but there is no guarantee that these vertices are near the edge of the "inlet".
I feel like there is a term in computational geometry for this problem but don't know what it is. Do I have to compute the distance saved for each possible combination of starting/ending points or is there a simplified algorithm which does this recursively?
An example of when using the convex hull breaks down is the polygon in the center of the following example:
Here, the convex hull connects the corners of the polygon whereas I only want to close off the large inlet on the right side of the polygon while retaining the curvature of that side.
You could try an alpha shape. An alpha shape is defined by the edges of a Delaunay triangulation whose length does not exceed alpha.
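A rough sketch of that edge-length filter, assuming points is an (N, 2) NumPy array of the polygon's vertices and alpha is a length threshold you choose (dedicated packages such as alphashape do this more carefully):
import numpy as np
from scipy.spatial import Delaunay
def alpha_shape_edges(points, alpha):
    # return the Delaunay edges no longer than alpha, as pairs of point indices
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:
        # each triangle contributes its three edges
        for i, j in ((0, 1), (1, 2), (0, 2)):
            a, b = sorted((simplex[i], simplex[j]))
            if np.linalg.norm(points[a] - points[b]) <= alpha:
                edges.add((a, b))
    return edges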

Calculating real world co-ordinates using stereo images in Python and OpenCV

I'm working on calculating the real world coordinates of an object in a scene by using a pair of stereo images. The images are simulations of perfect pinhole cameras so there is no distortion to correct and there is no rotation. I know OpenCV has a bunch of functions to calibrate stereo cameras and create disparity maps, but if all I want to calculate is the coordinates of one point, is there a simple way to do that?
1) Case of no rotation, only translation parallel to the horizontal axis of the image plane, cameras with equal focal lengths.
Denote with "f" the common focal length. Denote with "b" the baseline of the stereo pair, namely the distance between the cameras' optical centers. Given a 3D point P, visible in both cameras at horizontal image coordinates x_left and x_right, denote with "d" their disparity, namely the difference d = x_left - x_right.
By elementary geometry it then follows that the depth z_left of P in the left camera coordinates is:
z_left = b * f / d.
2) Any other case (unequal focal lengths, differences in other intrinsic parameters, non-linear lens distortion, inter-camera rotation, translation not parallel to the x axis, etc.):
Don't bother, use OpenCV.
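For case 1, a small sketch of recovering the full 3D point rather than just the depth, assuming the focal length f (in pixels), the baseline b and the principal point (cx, cy) of your simulated pinhole cameras are known (these names are assumptions about your setup, not an OpenCV API):
def triangulate_simple(x_left, y_left, x_right, f, b, cx, cy):
    # recover (X, Y, Z) in the left camera frame from a rectified stereo pair
    d = x_left - x_right        # disparity
    Z = b * f / d               # depth, as in the formula above
    X = (x_left - cx) * Z / f   # back-project the left image point
    Y = (y_left - cy) * Z / f
    return X, Y, Z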

Using 3D perception in opencv2

Can anyone please explain if it is possible, and if so how, to work with cv2.getPerspectiveTransform().
I have 3D information about my image: I know the lengths of a and b, and also the different heights of c, d, e, f and g. I made the heights different to get more 3D information, but if that isn't needed it would be preferable.
Ultimately I need to know where the pink dot really is in the rectangle after applying the transform to the [x, y] position I get from the camera feed.
If you denote by C,D,E,F the positions of the four corners of the black polygon in the original image (each of them is a 2D point), and C',D',E',F' the positions of the corresponding points in your target image (probably (0,0), (a, 0), (a, b), (0, b)), M = cv2.getPerspectiveTransform({C,D,E,F}, {C',D',E',F'}) is the perspective transformation from one polygon to the other.
Given the position G of the vertical projection of g onto the black polygon in the original image, you can compute its position in the target image as cv2.transform(G, M). This will return a point (x,y,z), where the last coordinate z is a normalizing term. This z is zero when your point would be "at infinity" in the target image. If z is not zero, the point you are looking for is (x/z, y/z).
If z is zero, your point is at infinity, in the direction of the support of vector (x, y) (think of the case where G would be at the intersection of the supporting lines of two opposite sides of the black polygon in the source image).
If you know that the heights of c,d,e,f,g are equal, these points are also coplanar, and the exact same method applies to c,d,e,f,g instead of C,D,E,F,G.
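A small sketch of the whole mapping in code, with hypothetical pixel coordinates for C, D, E, F and the projected point G; cv2.perspectiveTransform is used here because it performs the division by z described above for you (points must be float32 arrays, with G shaped (N, 1, 2)):
import cv2
import numpy as np
# hypothetical pixel coordinates of the polygon corners in the camera image
src = np.float32([[120, 310], [480, 295], [500, 620], [100, 640]])  # C, D, E, F
a, b = 400, 300  # known rectangle dimensions
dst = np.float32([[0, 0], [a, 0], [a, b], [0, b]])                  # C', D', E', F'
M = cv2.getPerspectiveTransform(src, dst)
# G: projection of the pink dot onto the polygon's plane, in image pixels
G = np.float32([[[275, 430]]])
mapped = cv2.perspectiveTransform(G, M)  # position of the dot inside the a x b rectangle
print(mapped[0, 0])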
