OpenCV triangulatePoints acting strange in Python

I am trying to calculate 3D points using OpenCV as the start of a multiple-view reconstruction. I perform the standard sequence of finding matching points using SIFT, then getting the fundamental and essential matrices with a known camera calibration matrix. After recovering the pose of the second camera relative to the first, I go on to try to triangulate the points. All other parts of the code work as expected, but this one part is glitchy. I am using OpenCV 4.3.0. Sometimes triangulatePoints just crashes the IDE (Spyder), sometimes it gives me the points, and sometimes it gives me a bunch of points at [1,1,1]. The IDE also crashes if the number of points is over 200, and the more points there are, the glitchier it seems to get.
This is getting frustrating, any help would be appreciated.
Here is a snippet of the code.
import cv2 as cv
import numpy as np
from scipy.spatial.transform import Rotation as R

F, mask = cv.findFundamentalMat(pts1, pts2, cv.FM_LMEDS)
# We select only inlier points
pts1 = pts1[mask.ravel() == 1]
pts2 = pts2[mask.ravel() == 1]
print(len(pts1), len(pts2))
print(cv.__version__)
E, mask_2 = cv.findEssentialMat(pts1, pts2, focal=f_x, pp=(O_x, O_y), method=cv.FM_LMEDS, prob=0.999, threshold=3.0)
print("Essential Matrix")
print(E)
print(" ")
points, R_1, t_1, mask_2 = cv.recoverPose(E, pts1, pts2, focal=f_x, pp=(O_x, O_y), mask=mask_2)
print("Rotation Matrix")
print(R_1)
print(" ")
R_M = R.from_matrix(R_1)
R_1_E = R_M.as_euler('zyx', degrees=True)
print("angles (z,y,x) or (alpha, beta, gamma); Z is dir of principal ray, Y is vert and X is horiz")
print(R_1_E)
print("Translation")
print(t_1)
# Camera intrinsics
K = np.array([[f_x, 0, O_x],
              [0, f_x, O_y],
              [0, 0, 1]])
# Projection matrices: camera 1 at the origin, camera 2 from the recovered pose
Pr_1 = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])
Pr_2 = np.hstack((np.dot(K, R_1), np.dot(K, t_1)))
#Pr_2 = np.hstack((R_1, t_1))
pts1_t = pts1[:200].T
pts2_t = pts2[:200].T
#print(pts1_t)
points4D = cv.triangulatePoints(Pr_1, Pr_2, pts1_t, pts2_t)
#print(points4D.T[:3].T)
coordinate_eucl = cv.convertPointsFromHomogeneous(points4D.T)
coordinate_eucl = coordinate_eucl.reshape(-1, 3)
px, py, pz = coordinate_eucl.T
coordP = []
for i in range(len(px)):
    coordP.append([px[i], py[i], pz[i]])
print(coordP[:20])

The triangulatePoints function in OpenCV Python does not work very well.
I used the functions created by Eliasvan, here:
https://github.com/Eliasvan/Multiple-Quadrotor-SLAM/blob/master/Work/python_libs/triangulation.py
I used the Linear_LS_Triangulation function; it worked very well for me, and it is fast.
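For reference, the core of such a linear triangulation is small enough to sketch directly in NumPy. The snippet below is not the repo's exact function, just a minimal homogeneous DLT variant of the same idea; Pr_1, Pr_2, pts1, and pts2 are assumed to be the projection matrices and inlier points from the question's code.

import numpy as np

def triangulate_dlt(u1, u2, P1, P2):
    # Triangulate one point pair with the homogeneous DLT method.
    # Each image point contributes two linear constraints on the 3D point X.
    A = np.array([u1[0] * P1[2] - P1[0],
                  u1[1] * P1[2] - P1[1],
                  u2[0] * P2[2] - P2[0],
                  u2[1] * P2[2] - P2[1]])
    # The least-squares solution is the right singular vector of A
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Hypothetical usage with the arrays from the question:
# points3D = np.array([triangulate_dlt(u1, u2, Pr_1, Pr_2)
#                      for u1, u2 in zip(pts1, pts2)])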


Closing holes in mesh while retaining point cloud shape

I have a .ply file that I am attempting to turn into a mesh for the purposes of ray tracing. It looks like this in the Open3D visualizer, and it is supposed to represent a part of a city.
I used Open3D to build the following mesh (the k-d tree is just to get a small number of points, since the file is huge):
import numpy as np
import open3d as o3d
import scipy.spatial

input_file = "san.ply"
pcd = o3d.io.read_point_cloud(input_file)
point_cloud_in_numpy = np.asarray(pcd.points)
color = np.asarray(pcd.colors)
kd = scipy.spatial.cKDTree(point_cloud_in_numpy)  # create kdtree for fast querying
near = kd.query_ball_point([0, 0, 0], 100)
items = point_cloud_in_numpy[near]
colors = color[near]
pcd2 = o3d.geometry.PointCloud()
pcd2.colors = o3d.utility.Vector3dVector(colors)
pcd2.points = o3d.utility.Vector3dVector(items)
pcd2.estimate_normals()
distances = pcd2.compute_nearest_neighbor_distance()
avg_dist = np.mean(distances)
radius = 2 * avg_dist
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(
    pcd2,
    o3d.utility.DoubleVector([radius, radius * 2]))
vertices = np.asarray(mesh.vertices)
faces = np.asarray(mesh.triangles)
o3d.visualization.draw_geometries([mesh])
However, when visualizing the mesh, the result has many holes and is just not at all suitable for ray tracing. I also tried using the create_from_point_cloud_poisson method instead, but I kept getting the following error:
[ERROR] /Users/yixing/repo/Open3D/build/poisson/src/ext_poisson/PoissonRecon/Src/FEMTree.IsoSurface.specialized.inl (Line 1463)
operator()
Failed to close loop [6: 87 64 18] | (113981): (2752,2560,2196)
which I found no way to fix online. I looked around, but the best I found was pymeshfix, which doesn't even work here because "The input is assumed to represent a single closed solid object", which my point cloud obviously is not. I'm just looking for a good way to perform surface reconstruction that keeps the shape of the city while also fixing all the holes and making the surfaces created by nearby points watertight.
Maybe you can close the holes with fill_holes() from the tensor-based TriangleMesh:
mesh = o3d.t.geometry.TriangleMesh.from_legacy(mesh).fill_holes().to_legacy()
fill_holes() takes a parameter for the maximum hole size to be closed:
http://www.open3d.org/docs/latest/python_api/open3d.t.geometry.TriangleMesh.html#open3d.t.geometry.TriangleMesh.fill_holes
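A minimal sketch of the full round-trip, assuming the mesh variable from the question above; the hole_size value is only an illustrative guess and needs tuning:

# Convert to the tensor-based mesh, close holes, and convert back.
tmesh = o3d.t.geometry.TriangleMesh.from_legacy(mesh)
tmesh = tmesh.fill_holes(hole_size=1.0)  # illustrative bound on hole size; tune for your scene
mesh_filled = tmesh.to_legacy()
o3d.visualization.draw_geometries([mesh_filled])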

How to extract subimages from an image?

What are the ways to count and extract all subimages given a master image?
Sample 1
Input:
The output should be 8 subimages.
Sample 2
Input:
The output should be 6 subimages.
Note: These image samples were taken from the internet. Images can be of random dimensions.
Is there a way to draw lines of separation in these images and then split based on those lines? e.g.:
I don't think there'll be a general solution to extract all single figures properly from arbitrary tables of figures (as shown in the two examples), at least not using some kind of "simple" image-processing techniques.
For "perfect" tables with a constant grid layout and constant colour space between single figures (as shown in the two examples), the following approach might be an idea:
Calculate the mean standard deviation in the x and y directions, and threshold using some custom parameter. The mean standard deviation within the constant colour spaces should be near zero. A custom parameter is needed here, since there'll be artifacts, e.g. from JPG compression, whose effects might be more or less severe.
Do some binary closing on the mean standard deviations using custom parameters. There might be small constant colour spaces around captions or similar, cf. the second example. Again, custom parameters are needed here, too.
From the resulting binary "signal", we can extract the start and stop positions for each subimage, and thus the subimage itself by slicing the original image. Attention: that works only if the tables have a constant grid layout!
Here's some code for the described approach:
import cv2
import numpy as np
from skimage.morphology import binary_closing

def extract_from_table(image, std_thr, kernel_x, kernel_y):
    # Threshold on mean standard deviation in x and y direction
    std_x = np.mean(np.std(image, axis=1), axis=1) > std_thr
    std_y = np.mean(np.std(image, axis=0), axis=1) > std_thr

    # Binary closing to close small whitespaces, e.g. around captions
    std_xx = binary_closing(std_x, np.ones(kernel_x))
    std_yy = binary_closing(std_y, np.ones(kernel_y))

    # Find start and stop positions of each subimage
    start_y = np.where(np.diff(np.int8(std_xx)) == 1)[0]
    stop_y = np.where(np.diff(np.int8(std_xx)) == -1)[0]
    start_x = np.where(np.diff(np.int8(std_yy)) == 1)[0]
    stop_x = np.where(np.diff(np.int8(std_yy)) == -1)[0]

    # Extract subimages
    return [image[y1:y2, x1:x2, :]
            for y1, y2 in zip(start_y, stop_y)
            for x1, x2 in zip(start_x, stop_x)]

for file in ['image1.jpg', 'image2.png']:
    img = cv2.imread(file)
    cv2.imshow('image', img)
    subimages = extract_from_table(img, 5, 21, 11)
    print('{} subimages found.'.format(len(subimages)))
    for i in subimages:
        cv2.imshow('subimage', i)
        cv2.waitKey(0)
The print output is:
8 subimages found.
6 subimages found.
Also, each subimage is shown for visualization purposes.
For both images, the same parameters were suitable, but that's just some coincidence here!
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
NumPy: 1.20.1
OpenCV: 4.5.1
scikit-image: 0.18.1
----------------------------------------
I could only extract the sub-images using a simple array-slicing technique. I am not sure if this is what you are looking for, but if one knows the number of table rows and columns, I think you can extract the sub-images:
import cv2

image = cv2.imread('table.jpg')
p = 2  # number of rows
q = 4  # number of columns
height, width, channels = image.shape
height_patch = height // p
width_patch = width // q

x = 0
for i in range(0, p * height_patch, height_patch):
    for j in range(0, q * width_patch, width_patch):
        crop = image[i:i + height_patch, j:j + width_patch]
        cv2.imwrite("image_{0}.jpg".format(x), crop)
        x += 1
        # cv2.imshow('crop', crop)
        # cv2.waitKey(0)

how to find Y face of the cube in Maya with Python

Sorry for such a specific question, guys; I think only people with knowledge of Maya will be able to answer. In Maya I have cubes of different sizes, and I need to find with Python which face of a cube is pointing down along the Y axis. (The pivot is at the center.) Any tips would be appreciated.
Thanks a lot :)
import re
from maya import cmds
from pymel.core.datatypes import Vector, Matrix, Point

obj = 'pCube1'
# Get the world transformation matrix of the object
obj_matrix = Matrix(cmds.xform(obj, query=True, worldSpace=True, matrix=True))

# Iterate through all faces
for face in cmds.ls(obj + '.f[*]', flatten=True):
    # Get face normal in object space
    face_normals_text = cmds.polyInfo(face, faceNormals=True)[0]
    # Convert to a list of floats
    face_normals = [float(digit) for digit in re.findall(r'-?\d*\.\d*', face_normals_text)]
    # Create a Vector object and multiply with matrix to get world space
    v = Vector(face_normals) * obj_matrix
    # Check if vector faces downwards
    if max(abs(v[0]), abs(v[1]), abs(v[2])) == -v[1]:
        print(face, v)
If you just need a quick solution without vector math, PyMEL, or the API, you can use cmds.polySelectConstraint to find the faces aligned with a normal. All you need to do is select all the faces, then use the constraint to keep only the ones pointing the right way. This will select all the faces in a mesh that point along a given axis:
import maya.cmds as cmds

def select_faces_by_axis(mesh, axis=(0, 1, 0), tolerance=45):
    cmds.select(mesh + ".f[*]")
    cmds.polySelectConstraint(mode=3, type=8, orient=2, orientaxis=axis, orientbound=(0, tolerance))
    cmds.polySelectConstraint(dis=True)  # remember to turn the constraint off!
The axis is the x, y, z direction you want, and tolerance is the slop in degrees you'll tolerate. To get the downward-facing faces you'd do
select_faces_by_axis('your_mesh_here', (0, -1, 0))
or
select_faces_by_axis('your_mesh_here', (0, -1, 0), 1)
# this would get faces only within 1 degree of downward
This method has the advantage of operating mostly in Maya's C++ code, so it's going to be faster than Python-based methods that loop over all the faces in a mesh.
With PyMEL the code can be a bit more compact. Selecting the faces pointing downwards:
import pymel.core as pm

n = pm.PyNode("pCubeShape1")
s = []
for f in n.faces:
    if f.getNormal(space='world')[1] < 0.0:
        s.append(f)
pm.select(s)

How to find face neighbours in Maya?

I have a problem where I need to select faces that are next to one pre-selected face.
This may be done easily, but the problem is that when I get a neighbour face, I need to know in which direction it is facing.
So now I am able to select faces that are connected by an edge, but I can't get the face that is, for example, left or right of the first selected face. I have tried multiple approaches but can't find a solution.
I tried with:
pickWalk - cmds.pickWalk() - the problem with this is that its behavior can't be predicted, since it walks the mesh from the camera perspective.
polyInfo - cmds.polyInfo() - this is a very useful function and the closest to an answer. In this approach I extract the edges of a face and then see which faces neighbour it via edgeToFace(). This works well but doesn't solve my problem: when polyInfo returns faces that share edges, it doesn't return them in a way that lets me always know that, for example, edgesList[0] is the edge pointing left or right. Hence, when I use this on different faces, the resulting face may be facing a different direction in each case.
The hard way, with many conversions from vertex to edge, then to face, etc. But again it's the same problem: I don't know which edge is the top or left one.
The connectedFaces() method, which I call on the selected face and which returns the faces connected to it. But still it's the same problem: I don't know which face is facing which way.
To be clear, I'm not using a pre-selected list of faces and checking them; I need to find the faces without knowing or keeping their names somewhere. Does someone know a way that works with a selection of faces?
To elaborate on my question, I made an image to make it clear:
As you can see from the example, given a selected face I need to be able to select any of the pointed faces, but it must be exactly the face I ask for. Other methods select all neighbouring faces, but I need a method where I can say "select right" and it will select the face to the right of the first selected one.
This is one solution that should be fairly consistent under the rule that up/down/left/right is aligned with the mesh's transformation (local space), though it could be world space too.
The first thing I would do is build a face-relative coordinate system for every mesh face, using the average face vertex position, the face normal, and the world-space Y axis of the mesh's transformation. This involves a little vector math, so I will use the API to make it easier. This first part makes a coordinate system for each face and stores it in lists for future querying. See below.
from maya import OpenMaya, cmds

meshTransform = 'polySphere'
meshShape = cmds.listRelatives(meshTransform, c=True)[0]
meshMatrix = cmds.xform(meshTransform, q=True, ws=True, matrix=True)
primaryUp = OpenMaya.MVector(*meshMatrix[4:7])
# have a secondary up vector for faces that are facing the same way as the original up
secondaryUp = OpenMaya.MVector(*meshMatrix[8:11])

sel = OpenMaya.MSelectionList()
sel.add(meshShape)
meshObj = OpenMaya.MObject()
sel.getDependNode(0, meshObj)

meshPolyIt = OpenMaya.MItMeshPolygon(meshObj)
faceNeighbors = []
faceCoordinates = []

while not meshPolyIt.isDone():
    normal = OpenMaya.MVector()
    meshPolyIt.getNormal(normal)
    # use the secondary up if the normal is facing the same direction as the object Y
    up = primaryUp if (1 - abs(primaryUp * normal)) > 0.001 else secondaryUp

    center = meshPolyIt.center()

    faceArray = OpenMaya.MIntArray()
    meshPolyIt.getConnectedFaces(faceArray)
    faceNeighbors.append([faceArray[i] for i in range(faceArray.length())])

    xAxis = up ^ normal
    yAxis = normal ^ xAxis

    matrixList = [xAxis.x, xAxis.y, xAxis.z, 0,
                  yAxis.x, yAxis.y, yAxis.z, 0,
                  normal.x, normal.y, normal.z, 0,
                  center.x, center.y, center.z, 1]
    faceMatrix = OpenMaya.MMatrix()
    OpenMaya.MScriptUtil.createMatrixFromList(matrixList, faceMatrix)
    faceCoordinates.append(faceMatrix)

    meshPolyIt.next()
These functions look up and return which face is next to the given one in a particular direction (X and Y) relative to that face. A dot product is used to see which neighbour lies most in that particular direction. This should work with any number of neighbours, but it only returns the single face that is most in that direction.
def getUpFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(0, 1, 0))

def getDownFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(0, -1, 0))

def getRightFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(1, 0, 0))

def getLeftFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(-1, 0, 0))

def getDirectionalFace(faceIndex, axis):
    faceMatrix = faceCoordinates[faceIndex]
    closestDotProd = -1.0
    nextFace = -1
    for n in faceNeighbors[faceIndex]:
        # express the neighbour's coordinate system relative to this face
        nMatrix = faceCoordinates[n] * faceMatrix.inverse()
        nVector = OpenMaya.MVector(nMatrix(3, 0), nMatrix(3, 1), nMatrix(3, 2))
        dp = nVector * axis
        if dp > closestDotProd:
            closestDotProd = dp
            nextFace = n
    return nextFace
So you would call it like this:
getUpFace(123)
where the number is the index of the face whose "up" neighbour you want.
Give this a try and see if it satisfies your needs.
You can also do this with polyListComponentConversion:
import pprint
import maya.cmds as cmds

init_face = cmds.ls(sl=True)
# get edges
edges = cmds.polyListComponentConversion(init_face, ff=True, te=True)
# get neighbour faces
faces = cmds.polyListComponentConversion(edges, fe=True, tf=True, bo=True)
# show neighbour faces
cmds.select(faces)
# print face normal of each neighbour face
pprint.pprint(cmds.polyInfo(faces, fn=True))
The easiest way of doing this is using PyMEL's connectedFaces() on the MeshFace:
http://download.autodesk.com/us/maya/2011help/pymel/generated/classes/pymel.core.general/pymel.core.general.MeshFace.html
import pymel.core as pm
sel = pm.ls(sl=True)[0]
pm.select(sel.connectedFaces())

Detect loops/intersections in matplotlib scatter plot

At some point in my work, I came up with this kind of scatter plot.
I would like my script to be able to detect the fact that the curve "loops" and to give me the point (or an approximation thereof) where it does so: for instance, in this case it would be about [0.2, 0.1].
I tried to play around with some representative quantities of my points, like the norm and/or the argument, as in the following piece of code:
import numpy as np
x,y = np.genfromtxt('points.dat',unpack=True)
norm = np.sqrt(x**2+y**2)
arg = np.arctan2(y,x)
left,right = np.meshgrid(norm,norm)
norm_diff = np.fabs(left - right)
mask = norm_diff == 0.
norm_diff_ma = np.ma.masked_array(norm_diff,mask)
left,right = np.meshgrid(arg,arg)
arg_diff = np.fabs(left - right)
mask = arg_diff == 0.
arg_diff_ma = np.ma.masked_array(arg_diff,mask)
list_of_indices = np.ma.where((norm_diff_ma<1.0e-04)*(arg_diff_ma<1.0e-04))
But it does not work as intended, perhaps because the dataset contains too many points and the distance between two aligned points is of the same order of magnitude as the distance between the points in the "loop cluster"...
I was thinking about detecting clusters, or maybe even detecting lines in the scatter plot and then checking whether any two lines intersect, but I am afraid my skills in image processing only go so far.
Is there any algorithm or trick that any of you can think of that would work here?
A representative data sample can be found here.
Edit 08/13/2015 16h18: after the short discussion with @DrBwts, I took a closer look at the data I obtained after a pyplot.contour() call, using the following routine to extract all the vertices:
def contour_points(contour, steps=1):
    try:
        loc_arr = np.row_stack([path.interpolated(steps).vertices
                                for linecol in contour.collections
                                for path in linecol.get_paths()])
    except ValueError:
        loc_arr = np.empty((0, 2))
    finally:
        return loc_arr

y, x = contour_points(CS, steps=1).T
It turns out the points with coordinates (x, y) are ordered, in the sense that a call to pyplot.plot() connects the dots correctly.
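Since the points are ordered, one straightforward way to locate the loop point is a brute-force self-intersection test on the polyline, checking every pair of non-adjacent segments. A minimal O(n^2) sketch, assuming the x and y arrays produced by the routine above:

import numpy as np

def segment_intersection(p1, p2, p3, p4):
    # Return the intersection point of segments p1-p2 and p3-p4, or None.
    d1 = p2 - p1
    d2 = p4 - p3
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:  # parallel or degenerate segments
        return None
    diff = p3 - p1
    t = (diff[0] * d2[1] - diff[1] * d2[0]) / denom
    s = (diff[0] * d1[1] - diff[1] * d1[0]) / denom
    if 0.0 <= t <= 1.0 and 0.0 <= s <= 1.0:
        return p1 + t * d1
    return None

pts = np.column_stack((x, y))
for i in range(len(pts) - 1):
    for j in range(i + 2, len(pts) - 1):  # skip the segment sharing an endpoint
        hit = segment_intersection(pts[i], pts[i + 1], pts[j], pts[j + 1])
        if hit is not None:
            print("loop closes near", hit)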
