For a project I am working on, I have successfully performed the SfM (structure from motion) procedure on road image data and generated a .ply file containing point cloud coordinates (X, Y, Z), RGB values, and normals (nx, ny, nz).
Now I am interested in calculating curvature values for each point from the data I have. I have come across "Surface Curvature MATLAB Equivalent in Python", but that implementation is said to work only when X, Y, and Z are 2D arrays.
Are Gaussian and mean curvatures applicable if the road surface is very rough? Given the (X, Y, Z) coordinate data of size N×3 and the (nx, ny, nz) normal data of size N×3, how should I find the curvature? Is there an existing implementation in Python?
I'm actually writing a Python library for working with point clouds.
Using the raw point cloud, the only "curvature" concept I'm aware of is the one computed from the eigenvalues obtained from the neighbourhood of each point.
If that is what you mean, here is some example code:
from pyntcloud import PyntCloud
cloud = PyntCloud.from_file("Box.ply")
This is the example point cloud inside Box.ply:
The steps to compute the curvature are:
Get k-neighbors of each point:
k_neighbors = cloud.get_neighbors(k=10)
Compute the eigenvalues for each point using its k (10 in this case) neighbours:
ev = cloud.add_scalar_field("eigen_values", k_neighbors=k_neighbors)
Compute the curvature from those eigenvalues:
cloud.add_scalar_field("curvature", ev=ev)
Saving the cloud with the new scalar fields:
cloud.to_file("out.ply")
Here is the point cloud inside out.ply coloured according to the curvature value assigned to each point (white indicates higher curvature values):
Here is an example of how to filter the point cloud to keep only the points with a curvature value above the mean:
curvature = cloud.points["curvature(K(16))"]  # the scalar-field name encodes the neighbourhood size used when the field was added
cloud.points = cloud.points[curvature > curvature.mean()]
cloud.to_file("out.ply")
And the new content of out.ply:
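If you prefer not to depend on pyntcloud, the same eigenvalue-based curvature (often called "surface variation") can be computed with numpy and scipy alone. A minimal sketch, assuming points is an (N, 3) numpy array; the function name and the choice k=10 are illustrative:

import numpy as np
from scipy.spatial import cKDTree

def eigen_curvature(points, k=10):
    # Curvature proxy: lambda_min / (lambda_1 + lambda_2 + lambda_3), where the
    # lambdas are the eigenvalues of each point's local covariance matrix.
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)  # k neighbours plus the point itself
    curvature = np.empty(len(points))
    for i, neighbours in enumerate(idx):
        cov = np.cov(points[neighbours].T)     # 3x3 covariance of the neighbourhood
        eigenvalues = np.linalg.eigvalsh(cov)  # sorted in ascending order
        curvature[i] = eigenvalues[0] / eigenvalues.sum()
    return curvature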
Related
I have a 50 by 50 grid of evenly spaced (x, y) points, each of which has a third scalar value. This can be visualized using a contour plot, which I have added. I am interested in the regions indicated by the red circles. These regions of low "Z-values" are what I want to extract from the data.
2D contour plot of 50 x 50 evenly spaced grid points:
I want to do this by using clustering (machine learning), which can be lightning quick when applied correctly. The problem, however, is that the points are evenly spaced, so the density of the dataset is the same everywhere.
I have tried using the DBSCAN algorithm with a custom distance metric that takes the Z value of each point into account. I have defined the distance between two points as follows:
import numpy as np

def custom_distance(point1, point2):
    # average of the two Z values
    average_Z = (point1[2] + point2[2]) / 2
    # Euclidean distance in the x-y plane
    distance = np.sqrt(np.square(point1[0] - point2[0]) + np.square(point1[1] - point2[1]))
    # scale the planar distance by the average Z value
    return distance * average_Z
This essentially determines the Euclidean distance between two points in the x-y plane and scales it by the average of their Z values. In the picture below I have tested this distance function in a DBSCAN run. Each point in this 50 by 50 grid has a Z value of 1, except for four clusters that I placed at random, whose points each have a Z value of 10. The algorithm is able to find these clusters in the data based on their Z value, as can be seen below.
DBSCAN clustering result using scalar value distance determination:
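For reference, such a callable metric can be passed straight to scikit-learn's DBSCAN. A minimal sketch, assuming data is an (N, 3) array of [x, y, z] rows; the eps and min_samples values are illustrative, not tuned, and a callable metric makes the pairwise-distance computation slow for large N:

import numpy as np
from sklearn.cluster import DBSCAN

# data: (N, 3) array with one [x, y, z] row per grid point (assumed)
labels = DBSCAN(eps=2.0, min_samples=5, metric=custom_distance).fit_predict(data)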
Encouraged by these results, I tried to apply the approach to my actual data, only to be disappointed. Since the x and y values of my data are very large, I simply rescaled them to the range 0 to 49; the Z values I left untouched. The results of the clustering can be seen in the image below:
Clustering result on original data:
This does not come close to what I want or what I was expecting. For some reason the clusters that are found are rectangular, and the light regions of low Z values that I am interested in are not extracted by this approach.
Is there any way I can make the DBSCAN algorithm work in this way? I suspect that it is currently failing because of the difference in scale between the x, y, and z values. I am also open to tips or recommendations on other approaches to defining and finding the lighter regions in the data.
How can one design simple code to automatically quantify a 2D rough surface geometrically, based on given scatter points? For example, using a single number r: r = 0 for a smooth surface, r = 1 for a very rough surface, and 0 < r < 1 for a surface somewhere in between.
To illustrate the question more explicitly, the attached figure below shows several sketches of 2D rough surfaces. The dots are the scatter points with given coordinates. Every two adjacent dots can be connected, and a normal vector of each segment can be computed (marked with an arrow). I would like to design a function like
def roughness(x, y):
    ...
    return r
where x and y are sequences of the coordinates of the scatter points. For example, in case (a), x=[0,1,2,3,4,5,6] and y=[0,1,0,1,0,1,0]; in case (b), x=[0,1,2,3,4,5] and y=[0,0,0,0,0,0]. Calling roughness(x, y) should give r = 1 (very rough) for case (a) and r = 0 (smooth) for case (b), and perhaps r = 0.5 (medium) for case (d). The question thus comes down to: what appropriate components do we need to put inside the function roughness?
Some initial thoughts:
Roughness of a surface is a local concept, considered only within a specific area, i.e. using only several local points around the location of interest. Use the mean of the local normal vectors? This may fail: (a) and (b) have the same mean, (0, 1), but (a) is rough and (b) is smooth. Use the variance of the local normal vectors? This may also fail: (c) and (d) have the same variance, but (c) is rougher than (d).
Maybe something like this:
import numpy as np

def roughness(x, y):
    # angle of each segment between successive points
    t = np.arctan2(np.diff(y), np.diff(x))
    # sine of the turning angle between successive segments:
    # sin(t[i+1] - t[i]) = sin(t[i+1])*cos(t[i]) - cos(t[i+1])*sin(t[i])
    ts = np.sin(t)
    tc = np.cos(t)
    dt = ts[1:] * tc[:-1] - tc[1:] * ts[:-1]
    # mean of the squared turning-angle sines
    return np.sum(dt**2) / len(dt)
This would give you something like what you're asking for.
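Applied to the example cases from the question, a quick check:

import numpy as np

# case (a): zig-zag surface -> every turning angle is 90 degrees, so r = 1.0
print(roughness(np.array([0, 1, 2, 3, 4, 5, 6]), np.array([0, 1, 0, 1, 0, 1, 0])))
# case (b): flat surface -> all turning angles are zero, so r = 0.0
print(roughness(np.array([0, 1, 2, 3, 4, 5]), np.array([0, 0, 0, 0, 0, 0])))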
Maybe you should consider a protocol definition:
1) geometric definition of the surface first
2) grant unto that geometric surface intrinsic properties.
2.a) A step function can be based on a quadratic curve between two peaks or two troughs, with their connecting point as the focus of the 'roughness quadratic', using the slope to define roughness, in analogy to the science behind road speed bumps.
2.b) Elliptical objects can be defined by a combination of deformation analysis and circles centred on the incongruity within the body. This can be solved in many ways analogous to step functions.
2.c) Flat lines: select points that deviate from the mean and fit locally over a window of 5-20 consecutive points, or whatever is appropriate.
3) Define a proper threshold that fits whatever intuition you are defining as "roughness", or apply the conventions of any relevant professional field.
This branched approach might be quicker to program, but I am certain this solution can be refactored into a Euclidean construct of 3-point ellipticals, if someone is up for a geometry problem.
The mathematical definitions of many surface parameters can be found at the link below; they can easily be implemented in numpy:
https://www.keyence.com/ss/products/microscope/roughness/surface/parameters.jsp
Image (d) shows a particular challenge: you essentially want to flatten the shape before doing the calculation, which requires prior knowledge of the type of geometry you want to fit. I found an application, Gwyddion, that can do this in 3D, but it can only interface with Python 2.7, not 3.
If you know which base shape lies underneath:
1) fit the known shape;
2) calculate the arc distance between each two points;
3) remap the numbers by subtracting the fit from 1) from the original data and assigning new coordinates according to 2);
4) perform normal 2D/3D roughness calculations, as sketched below.
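A minimal sketch of steps 1) and 4) for the 2D case, assuming the base shape can be approximated by a polynomial (the degree is an assumption, and Ra, the arithmetic mean deviation of the profile, is just one of the parameters defined at the link above):

import numpy as np

def roughness_Ra(x, y, degree=2):
    # 1) fit the assumed base shape: a polynomial of the given degree
    baseline = np.polyval(np.polyfit(x, y, degree), x)
    # residual profile after removing the base shape
    residual = y - baseline
    # 4) Ra: the arithmetic mean deviation of the residual profile
    return np.mean(np.abs(residual))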
I am currently working on an inverse problem, scripted in Python, in which observed GPS data is used to infer the displacement below the Earth's surface (e.g. an earthquake). The displacement beneath the Earth can be discretized on a grid, and the equation that describes this problem can be written as:
G(n×n) · S(n×1) = U(n×1)    [1]
where G is the model matrix that describes the problem, S is the vector of displacements below the Earth's surface, and U is the vector of surface observations. Here n is the number of grid points.
To penalize "non-physical" solutions, we can regularize the inverse problem by assuming that the displacement at a grid point is similar to that of its neighbours, via a Laplacian smoothing matrix. Mathematically, we want to minimize the Laplacian of the displacement, which in matrix form can be written as:
F(n×n) · S(n×1) = 0(n×1)    [2]
where F is the Laplacian matrix, S is the displacement column vector, and "0" is a zero column vector; n is the number of grid points used in the inversion. Both equations [1] and [2] can be optimized together with methods such as scipy.optimize.nnls, which is used in this script; a sketch of such a stacked solve is shown below.
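For illustration only, a minimal sketch of solving [1] and [2] jointly with scipy.optimize.nnls, assuming G, F, and U have already been built and alpha is a hypothetical regularization weight:

import numpy as np
from scipy.optimize import nnls

# stack the data equation [1] on top of the smoothing equation [2];
# alpha (hypothetical) trades data misfit against smoothness
A = np.vstack([G, alpha * F])
b = np.concatenate([U, np.zeros(F.shape[0])])
S, residual_norm = nnls(A, b)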
The problem is in building the Laplacian matrix: I am not sure how to start constructing it. Is there any scipy (or other library) method that achieves what I want? I have found some scipy methods for Laplacian smoothing used in image processing, but I don't know whether they work for this particular case. The other option is to build the Laplacian with a finite-difference scheme, but I am unsure how to start creating that.
Thank you
Maybe the NetworkX library in Python could be an option. Considering that the displacement beneath the earth can be discretized on a grid, you can build a network G(N, E), where N is the set of nodes in your grid and E the set of edges connecting them.
To each node and edge you can attach a weight value (e.g. a displacement measure).
Moreover, NetworkX has a function for computing the Laplacian matrix; a sketch follows.
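A minimal sketch, assuming an n-by-n grid; nx.grid_2d_graph builds the grid graph and nx.laplacian_matrix returns a scipy sparse matrix, densified here for use in the inversion (the grid size is illustrative):

import networkx as nx

n = 10  # illustrative grid size
grid = nx.grid_2d_graph(n, n)  # nodes are (i, j) grid coordinates, edges connect neighbours
F = nx.laplacian_matrix(grid).toarray()  # (n*n) x (n*n) graph Laplacian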
I'm working with Python on 3D point cloud files in XYZ format. I need to calculate the distance from each point to the center and then label (and colour, for better visualization) the points according to it. So far I got this cloud classification using the following code:
import math
import numpy as np
from scipy.spatial import distance

# points, N_CLASSES, g_distance2color and point_bbox_list are defined elsewhere
xyz_coordinates = points[:, 0:3]
xyz_min = np.amin(xyz_coordinates, axis=0)  # (3,) minimum of each axis
xyz_max = np.amax(xyz_coordinates, axis=0)  # (3,) maximum of each axis
xyz_center = (xyz_min + xyz_max) / 2
# Euclidean distance from the center to the farthest corner; this yields circular bands
xyz_max_euclidean = distance.euclidean(xyz_center, xyz_max)
xyz_cut = xyz_max_euclidean / N_CLASSES

# compute the Euclidean distance for every point and assign a normalized class label
for i in range(xyz_coordinates.shape[0]):
    label = int(math.floor(distance.euclidean(xyz_center, xyz_coordinates[i]) / xyz_cut))
    point_bbox_list.append(np.concatenate([xyz_coordinates[i], g_distance2color[str(label)], np.array([label])], 0))
But as you can see, I am calculating the Euclidean distance from the center to each of the points, and this is not correct: in this case, for example, the limits of the walls or the table are not classified correctly. I imagined this kind of graph, but with square-shaped coloured bands. So far I have successfully calculated the bounding boxes for each object, shown here, but I cannot get the results I expect. I have also tried the Mahalanobis distance, but the classification turns out as an ellipsoid. Is there any other distance metric that I can use?
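As an aside, the per-point loop above can be vectorized with numpy; a minimal sketch of the same binning, reusing xyz_coordinates, xyz_center and xyz_cut from the snippet above:

import numpy as np

# distances from the center for all points at once
dists = np.linalg.norm(xyz_coordinates - xyz_center, axis=1)
labels = np.floor(dists / xyz_cut).astype(int)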
I have experimental results for a scalar field over a plane.
The data points are taken over a 2D grid of (x, y) coordinates in the plane, but there is a ring of (x, y) points in the plane where no data is taken, in some sense because the field isn't defined there.
How can I perform an interpolation over the 2D grid that takes this large lack of data points into account and doesn't try to interpolate inside the ring?
Preferably I'd like to use one of the interpolators in the Python Scipy package.
Thank you
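A minimal sketch of one possible starting point with scipy's griddata, assuming x, y, z are 1D arrays of the measured points (with the ring locations already removed) and xi, yi form the target grid; note that griddata fills the whole convex hull, so the undefined region is blanked out afterwards with a hypothetical ring_mask:

import numpy as np
from scipy.interpolate import griddata

# interpolate only from the locations where data exists
zi = griddata((x, y), z, (xi, yi), method='linear')
# blank out the region where the field is undefined (ring_mask is an assumed boolean mask)
zi[ring_mask] = np.nan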