I have a PLY file that I am attempting to turn into a mesh for the purposes of ray tracing. This is what it looks like in the open3d visualizer, and it is supposed to represent a part of a city:
I used open3d to make the following mesh as follows (the kdtree is just to get a small number of points, as the file is huge):
import numpy as np
import open3d as o3d
import scipy.spatial

input_file = "san.ply"
pcd = o3d.io.read_point_cloud(input_file)
point_cloud_in_numpy = np.asarray(pcd.points)
color = np.asarray(pcd.colors)
kd = scipy.spatial.cKDTree(point_cloud_in_numpy) #create kdtree for fast querying
near = kd.query_ball_point([0, 0, 0], 100)
items = point_cloud_in_numpy[near]
colors = color[near]
pcd2 = o3d.geometry.PointCloud()
pcd2.colors = o3d.utility.Vector3dVector(colors)
pcd2.points = o3d.utility.Vector3dVector(items)
pcd2.estimate_normals()
distances = pcd2.compute_nearest_neighbor_distance()
avg_dist = np.mean(distances)
radius = 2 * avg_dist
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(
    pcd2,
    o3d.utility.DoubleVector([radius, radius * 2]))
vertices = np.asarray(mesh.vertices)
faces = np.asarray(mesh.triangles)
o3d.visualization.draw_geometries([mesh])
However, when graphing the mesh, we get something that looks like this:
There are many holes, and it is just not at all optimal for ray tracing. I also tried using the create_from_point_cloud_poisson method instead, but I kept getting the following error:
[ERROR] /Users/yixing/repo/Open3D/build/poisson/src/ext_poisson/PoissonRecon/Src/FEMTree.IsoSurface.specialized.inl (Line 1463)
operator()
Failed to close loop [6: 87 64 18] | (113981): (2752,2560,2196)
which I found no way to fix online. I tried looking around, but the best I found was pymeshfix, which doesn't work here because "The input is assumed to represent a single closed solid object", which my point cloud obviously is not. I'm just looking for a good way to perform surface reconstruction that keeps the shape of the city while also fixing all the holes and making the surfaces created by nearby points watertight.
Maybe you can close the holes with fill_holes() from the tensor-based TriangleMesh:
mesh = o3d.t.geometry.TriangleMesh.from_legacy(mesh).fill_holes().to_legacy()
fill_holes() takes a parameter for the maximum size of holes to be closed:
http://www.open3d.org/docs/latest/python_api/open3d.t.geometry.TriangleMesh.html#open3d.t.geometry.TriangleMesh.fill_holes
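For example, a minimal sketch of that round trip, continuing from the mesh and radius variables above (the hole_size value is an assumption you would tune to your scene's scale):

tmesh = o3d.t.geometry.TriangleMesh.from_legacy(mesh)
tmesh = tmesh.fill_holes(hole_size=radius * 4)  # assumed cap on hole size; tune as needed
mesh = tmesh.to_legacy()
o3d.visualization.draw_geometries([mesh])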
Background: I'm trying to create a circular polygon and add it to a KML using simplekml.
The KML knows that there should be a polygon added, and it has the proper colour, width, and description, but whenever I zoom to the location it leads me to coordinates (0, 0) and there is no polygon.
My code to create the polygon looks like:
pol = kml.newpolygon(name=pnt.name)
pol.description = ("A buffer for " + pnt.name)
pol.innerboundaryis = [newCoord]
pol.style.linestyle.color = simplekml.Color.green
pol.style.linestyle.width = 5
pol.style.polystyle.color = simplekml.Color.changealphaint(100, simplekml.Color.green)
where 'newCoord' is a 2D array with all of the lat/long information stored in it.
Because I thought the array might not be formatting the data properly, I tried to form a simple triangular polygon using the code:
pol1 = kml.newpolygon(name=pnt.name)
pol1.innerboundaryis = [(46.714,-75.6667),(44.60796,-74.502),(46.13910,-74.57411),(46.714,-75.6667)]
pol1.style.linestyle.color = simplekml.Color.green
pol1.style.linestyle.width = 5
pol1.style.polystyle.color = simplekml.Color.changealphaint(100, simplekml.Color.green)
but it has the same issue as the first.
I've tried forming the polygon with both .innerboundaryis() and .outerboundaryis() without success, and I'm running out of ideas.
Edit: I should add that I'm opening the KML file in Google Earth.
There is almost no documentation on this issue online so I figured I would post the answer to my question for anyone who has this issue in the future.
This is the code that I used that got the polygon working.
newCoords = []
pol = kml.newpolygon(name=pnt.name)
pol.description = ("A buffer for " + pnt.name)
if pnt.name in bufferList:
    bufferRange = input('Enter the buffer range. ')
    for i in range(360):
        newCoords.append(( math to calculate Lat, math to calculate Long ))
        pol.outerboundaryis.coords.addcoordinates([newCoords[i]])
pol.style.linestyle.color = simplekml.Color.green
pol.style.linestyle.width = 5
pol.style.polystyle.color = simplekml.Color.changealphaint(100, simplekml.Color.green)
You need to put your coordinates into a list before adding them to the polygon's outer boundary with the coords.addcoordinates() function. Additionally, it must be a one-dimensional list, so each latitude/longitude pair must be stored together in the same tuple.
You can input floats directly with '.outerboundaryis()', example:
pol.outerboundaryis = [(18.333868,-34.038274), (18.370618,-34.034421),
(18.350616,-34.051677),(18.333868,-34.038274)]
But '.addcoordinates()' only accepts lists and integers.
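For completeness, here is a minimal runnable sketch of the same idea, with a hypothetical center point and buffer radius standing in for the elided math (note that simplekml expects coordinates as (longitude, latitude) tuples):

import math
import simplekml

kml = simplekml.Kml()
center_lat, center_lon = 46.714, -75.6667  # hypothetical buffer center
radius_deg = 0.05                          # hypothetical buffer radius in degrees

pol = kml.newpolygon(name="buffer")
newCoords = []
for i in range(361):  # 361 steps so the ring closes back on its first point
    angle = math.radians(i)
    newCoords.append((center_lon + radius_deg * math.sin(angle),
                      center_lat + radius_deg * math.cos(angle)))
pol.outerboundaryis = newCoords
pol.style.linestyle.color = simplekml.Color.green
pol.style.linestyle.width = 5
pol.style.polystyle.color = simplekml.Color.changealphaint(100, simplekml.Color.green)
kml.save("buffer.kml")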
Sorry for such a specific question, guys; I think only people with knowledge of Maya will be able to answer. In Maya I have cubes of different sizes, and I need to find with Python which face of a cube is pointing down the Y axis (the pivot is at the center). Any tips will be appreciated.
Thanks a lot :)
import re
from maya import cmds
from pymel.core.datatypes import Vector, Matrix

obj = 'pCube1'

# Get the world transformation matrix of the object
obj_matrix = Matrix(cmds.xform(obj, query=True, worldSpace=True, matrix=True))

# Iterate through all faces
for face in cmds.ls(obj + '.f[*]', flatten=True):
    # Get the face normal in object space
    face_normals_text = cmds.polyInfo(face, faceNormals=True)[0]
    # Convert the string to a list of floats
    face_normals = [float(digit) for digit in re.findall(r'-?\d*\.\d*', face_normals_text)]
    # Create a Vector object and multiply with the matrix to get the world-space normal
    v = Vector(face_normals) * obj_matrix
    # Check whether the normal's dominant axis is -Y (i.e. the face points down)
    if max(abs(v[0]), abs(v[1]), abs(v[2])) == -v[1]:
        print(face, v)
If you just need a quick solution without vector math, PyMEL, or the API, you can use cmds.polySelectConstraint to find the faces aligned with a normal. All you need to do is select all the faces, then use the constraint to keep only the ones pointing the right way. This will select all the faces in a mesh that point along a given axis:
import maya.cmds as cmds

def select_faces_by_axis(mesh, axis=(0, 1, 0), tolerance=45):
    cmds.select(mesh + ".f[*]")
    cmds.polySelectConstraint(mode=3, type=8, orient=2, orientaxis=axis, orientbound=(0, tolerance))
    cmds.polySelectConstraint(dis=True)  # remember to turn the constraint off!
The axis is the x, y, z axis you want, and tolerance is the slop in degrees you'll tolerate. Since Maya's default up axis is Y, to get the downward-facing faces you'd do
select_faces_by_axis('your_mesh_here', (0, -1, 0))
or
select_faces_by_axis('your_mesh_here', (0, -1, 0), 1)
# this would get faces only within 1 degree of downward
This method has the advantage of operating mostly in Maya's C++ code, so it's going to be faster than Python-based methods that loop over all the faces in a mesh.
With pymel the code can be a bit more compact. Selecting the faces pointing downwards:
import pymel.core as pm

n = pm.PyNode("pCubeShape1")
s = []
for f in n.faces:
    if f.getNormal(space='world')[1] < 0.0:
        s.append(f)
pm.select(s)
At some point in my work, I came up with this kind of scatter plot.
I would like my script to be able to detect the fact that it "loops" and to give me the point (or an approximation thereof) where it does so: for instance, in this case it would be about [0.2, 0.1].
I tried to play around with some representative quantities of my points, like the norm and/or the argument, as in the following piece of code.
import numpy as np
x,y = np.genfromtxt('points.dat',unpack=True)
norm = np.sqrt(x**2+y**2)
arg = np.arctan2(y,x)
left,right = np.meshgrid(norm,norm)
norm_diff = np.fabs(left - right)
mask = norm_diff == 0.
norm_diff_ma = np.ma.masked_array(norm_diff,mask)
left,right = np.meshgrid(arg,arg)
arg_diff = np.fabs(left - right)
mask = arg_diff == 0.
arg_diff_ma = np.ma.masked_array(arg_diff,mask)
list_of_indices = np.ma.where((norm_diff_ma<1.0e-04)*(arg_diff_ma<1.0e-04))
But it does not work as intended: this might be because the dataset contains too many points, and the distance between two aligned points is in any case of the same order of magnitude as the distance between the points in the "loop cluster"...
I was thinking about detecting clusters, or maybe even detecting lines in the scatter plot and then checking whether any two lines intersect, but I am afraid my skills in image processing only go so far.
Is there any algorithm or trick that any of you can think of that would work here?
A representative data sample can be found here.
Edit 08/13/2015 16h18: after the short discussion with @DrBwts I took a closer look at the data I obtained after a pyplot.contour() call, using the following routine to extract all the vertices:
def contour_points(contour, steps=1):
    try:
        loc_arr = np.row_stack([path.interpolated(steps).vertices
                                for linecol in contour.collections
                                for path in linecol.get_paths()])
    except ValueError:
        loc_arr = np.empty((0, 2))
    finally:
        return loc_arr

y, x = contour_points(CS, steps=1).T
It turns out the points of coordinates (x, y) are ordered, in the sense that a call to pyplot.plot() connects the dots correctly.
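Given that ordering, one approach that should work (a brute-force sketch, assuming x and y hold the ordered contour vertices from the routine above) is to test every pair of non-adjacent segments of the polyline for intersection; the first hit gives approximately the point where the curve loops:

import numpy as np

def segments_intersect(p1, p2, p3, p4):
    # Return the intersection point of segments p1-p2 and p3-p4, or None.
    d1, d2 = p2 - p1, p4 - p3
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:  # parallel or degenerate segments
        return None
    diff = p3 - p1
    t = (diff[0] * d2[1] - diff[1] * d2[0]) / denom
    u = (diff[0] * d1[1] - diff[1] * d1[0]) / denom
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return p1 + t * d1
    return None

def first_self_intersection(pts):
    # Brute-force O(n^2) scan of an ordered polyline for its first self-intersection.
    n = len(pts)
    for i in range(n - 1):
        for j in range(i + 2, n - 1):  # skip adjacent segments, which share a vertex
            hit = segments_intersect(pts[i], pts[i + 1], pts[j], pts[j + 1])
            if hit is not None:
                return hit
    return None

pts = np.column_stack([x, y])  # ordered vertices from contour_points()
print(first_self_intersection(pts))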
I want to generate a surface that should look like a hemisphere. What I have done so far is to read an already existing BEM mesh and show the scalar values on it. But now I have to show the scalar values on a hemisphere instead of the BEM mesh, and I don't know how to generate a triangular mesh that looks like a hemisphere.
This hemisphere needs to contain a set of N points (x, y, z) [using mlab.triangular_mesh], and at each vertex I need to represent a data value (float), either as a value or using variations in a colormap (e.g. blue for the lowest value of the data to red for the highest). data is an array of 2562 float values and could be randomly generated, as it comes from other code. The points also come from another piece of code; they have shape (2562, 3), but their shape is not a hemisphere.
This is the program I used for viewing the BEM surface:
fname = data_path + '/subjects/sample/bem/sample-5120-5120-5120-bem-sol.fif'
surfaces = mne.read_bem_surfaces(fname, add_geom=True)
print("Number of surfaces : %d" % len(surfaces))
head_col = (0.95, 0.83, 0.83)  # light pink
colors = [head_col]
try:
    from enthought.mayavi import mlab
except ImportError:
    from mayavi import mlab
mlab.figure(size=(600, 600), bgcolor=(0, 0, 0))
for c, surf in zip(colors, surfaces):
    points = surf['rr']
    faces = surf['tris']
    s = data
    mlab.triangular_mesh(points[:, 0], points[:, 1], points[:, 2], faces,
                         color=c, opacity=1, scalars=s[:, 0])
# mesh = mlab.triangular_mesh(x, y, z, triangles, representation='wireframe', opacity=0)
# point_data = mesh.mlab_source.dataset.point_data
# point_data.scalars = t
# point_data.scalars.name = 'Point data'
# mesh2 = mlab.pipeline.set_active_attribute(mesh, point_scalars='Point data')
As others have pointed out, your question is not very clear and does not include an easily reproducible example; your example would take considerable work for us to reproduce, and you have not described the steps you have taken very clearly.
What you are trying to do is easy. Scalars can be defined for each vertex (i.e., each VTK point):
surf = mlab.triangular_mesh(x,y,z,triangles)
surf.mlab_source.scalars = t
And you need to set a flag to get them to appear, which I think might be your problem:
surf.actor.mapper.scalar_visibility=True
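Putting those pieces together, here is a minimal self-contained sketch (the two-triangle mesh and the scalar values are made up purely for illustration):

import numpy as np
from mayavi import mlab

x = np.array([0.0, 1.0, 1.0, 0.0])
y = np.array([0.0, 0.0, 1.0, 1.0])
z = np.zeros(4)
triangles = [(0, 1, 2), (0, 2, 3)]
t = np.array([0.0, 1.0, 2.0, 3.0])  # one scalar per vertex

surf = mlab.triangular_mesh(x, y, z, triangles)
surf.mlab_source.scalars = t
surf.actor.mapper.scalar_visibility = True
mlab.show()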
Here is some code to generate a half-sphere. It produces VTK polydata. I'm not 100% sure whether the mayavi source is the same source type as triangular_mesh, but I think it is.
import numpy as np
from mayavi import mlab

res = 250.  # desired resolution (number of samples on the sphere)
phi, theta = np.mgrid[0:np.pi:np.pi/res, 0:np.pi:np.pi/res]
x = np.cos(theta) * np.sin(phi)
y = np.sin(theta) * np.sin(phi)
z = np.cos(phi)
mlab.mesh(x, y, z, color=(1, 1, 1))
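If you specifically need the half-sphere as a triangular mesh with one scalar per vertex, here is a sketch of one way to build it (the upper-hemisphere parametrization, the resolution, and the random scalars are all assumptions standing in for your own points and data):

import numpy as np
from scipy.spatial import Delaunay
from mayavi import mlab

res = 50
# Upper hemisphere: polar angle phi in [0, pi/2], azimuth theta in [0, 2*pi]
phi, theta = np.mgrid[0:np.pi/2:res*1j, 0:2*np.pi:res*1j]
x = (np.cos(theta) * np.sin(phi)).ravel()
y = (np.sin(theta) * np.sin(phi)).ravel()
z = np.cos(phi).ravel()

# Triangulate in the (theta, phi) parameter plane to get the face indices
tri = Delaunay(np.column_stack([theta.ravel(), phi.ravel()])).simplices

data = np.random.rand(x.size)  # stand-in for your 2562 per-vertex floats
mlab.triangular_mesh(x, y, z, tri, scalars=data)
mlab.show()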
I've looked and looked for a solution to this problem and am turning up nothing.
I'm generating rectangular FITS images through matplotlib and subsequently applying WCS coordinates to them using AstroPy (or PyFITS). My images are in galactic latitude and longitude, so the header keywords appropriate for my maps should be GLON-CAR and GLAT-CAR (for Cartesian projection). I've looked at other maps that use this same map projection in SAO DS9 and the coordinates work great... the grid is perfectly orthogonal as it should be. The FITS standard projections can be found here.
But when I generate my maps, the coordinates are not at all Cartesian. Here's a side-by-side comparison of my map (left) and another reference map of roughly the same region (right). Both are listed as GLON-CAR and GLAT-CAR in the FITS header, but mine is screwy when viewed in SAO DS9 (note that the coordinate grid is something SAO DS9 generates based on the data in the FITS header, or at least on data stored somewhere in the FITS file):
This is problematic, because the coordinate-assigning algorithm will assign incorrect coordinates to each pixel if the projection is wrong.
Has anyone encountered this, or know what could be the problem?
I've tried applying other projections (just to see how they perform in SAO DS9) and they come out fine... but my Cartesian and Mercator projections do not come out with the orthogonal grid like they should.
I can't believe this would be a bug in AstroPy, but I can't find any other cause... unless my arguments in the header are incorrectly formatted, but I still don't see how that could cause the problem I'm experiencing. Or would you recommend using something else? (I've looked at matplotlib basemap but have had some trouble getting that to work on my computer).
My header code is below:
from __future__ import division
import numpy as np
from astropy.io import fits as pyfits  # or use 'import pyfits', same thing
#(lots of code in between: defining variables and simple calculations...
#probably not relevant)
header['BSCALE'] = (1.00000, 'REAL = TAPE*BSCALE + BZERO')
header['BZERO'] = (0.0)
header['BUNIT'] = ('mag ', 'UNIT OF INTENSITY')
header['BLANK'] = (-100.00, 'BLANK VALUE')
header['CRVAL1'] = (glon_center, 'REF VALUE POINT DEGR') #FIRST COORDINATE OF THE CENTER
header['CRPIX1'] = (center_x+0.5, 'REF POINT PIXEL LOCATION') ## REFERENCE X PIXEL
header['CTYPE1'] = ('GLON-CAR', 'COORD TYPE : VALUE IS DEGR')
header['CDELT1'] = (-glon_length/x_length, 'COORD VALUE INCREMENT WITH COUNT DGR') ### degrees per pixel
header['CROTA1'] = (0, 'CCW ROTATION in DGR')
header['CRVAL2'] = (glat_center, 'REF VALUE POINT DEGR') #Y COORDINATE OF THE CENTER
header['CRPIX2'] = (center_y+0.5, 'REF POINT PIXEL LOCATION') #Y REFERENCE PIXEL
header['CTYPE2'] = ('GLAT-CAR', 'COORD TYPE: VALUE IS DEGR') # WAS CAR OR TAN
header['CDELT2'] = (glat_length/y_length, 'COORD VALUE INCREMENT WITH COUNT DGR') #degrees per pixel
header['CROTA2'] = (rotation, 'CCW ROTATION IN DEGR') #NEGATIVE ROTATES CCW around origin (bottom left).
header['DATAMIN'] = (data_min, 'Minimum data value in the file')
header['DATAMAX'] = (data_max, 'Maximum data value in the file')
header['TELESCOP'] = ("Produced from 2MASS")
pyfits.update(filename, map_data, header)
Thanks for any help you can provide.
In the modern definition of the -CAR projection (from Calabretta et al.), the GLON-CAR/GLAT-CAR projection only produces a rectilinear grid if CRVAL2 is set to zero. If CRVAL2 is not zero, then the grid is curved (this has nothing to do with Astropy). You can try to fix this by adjusting CRVAL2 and CRPIX2 so that CRVAL2 is zero. Does this help?
Just to clarify what I mean, try, after your code above, and before writing out the file:
header['CRPIX2'] -= header['CRVAL2'] / header['CDELT2']
header['CRVAL2'] = 0.
Any luck?
If you look at the header for the 'reference' file you looked at, you'll see that CRVAL2 is zero there. Just to be clear, there's nothing wrong with CRVAL2 being non-zero, but the grid is then no longer rectilinear.
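As a quick sanity check, here is a sketch using astropy.wcs with a hypothetical header (CRVAL2 = 10.0, CDELT2 = 0.1, CRPIX2 = 50.0 before the shift) showing that the adjusted header still maps the old reference pixel to the same sky position:

from astropy.wcs import WCS

w = WCS(naxis=2)
w.wcs.ctype = ['GLON-CAR', 'GLAT-CAR']
w.wcs.cdelt = [-0.1, 0.1]
# Apply the shift from above: CRPIX2 -= CRVAL2 / CDELT2, then CRVAL2 = 0
w.wcs.crpix = [50.0, 50.0 - 10.0 / 0.1]
w.wcs.crval = [30.0, 0.0]

# The old reference pixel (50, 50) should still land at glat = 10 degrees
print(w.wcs_pix2world([[50.0, 50.0]], 1))  # -> approximately [[30., 10.]]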