Scipy / Numpy Riemann Sum Height - python

I am working on a visualization that models the trajectory of an object over a planar surface. Currently, the algorithm I have been provided with uses a simple trajectory function (where velocity and gravity are provided) and Runge-Kutta integration to check n points along the curve for a point where velocity becomes 0. We are discounting any atmospheric interaction.
What I would like to do is introduce a non-planar surface, say from a digital terrain model (raster). My thought is to calculate a Riemann sum at each pixel and determine whether the offset from the planar surface is equal to or less than the offset of the underlying topography from the planar surface.
Is it possible, using numpy or scipy, to calculate the height of a Riemann rectangle? Alternatively, the area of the rectangle (midpoint is fine) would work, since I know the width and can calculate the height.

For computing Riemann sums you could look into numpy.cumsum(). I am not sure whether you can apply it to a surface directly or only to a 1D array. However, you could always loop through all the rows of your terrain and store each row in a two-dimensional array as you go, leaving you with an array of all the terrain heights.
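A minimal sketch of that idea in numpy, assuming a 1D profile of terrain heights sampled along the ground track and a known pixel width (the array values and dx below are made up for illustration):

import numpy as np

# Terrain heights sampled per pixel along the trajectory's ground track
# (placeholder values), and the pixel width in the same horizontal units.
terrain = np.array([102.0, 104.5, 103.2, 101.8, 100.9, 99.7])
dx = 5.0

# Midpoint-rule heights: average of each pair of adjacent samples.
mid_heights = 0.5 * (terrain[:-1] + terrain[1:])

# Area of each Riemann rectangle, plus the running total via cumsum.
areas = mid_heights * dx
running_area = np.cumsum(areas)

# For a full raster (2D array) the same idea applies row by row:
dem = np.random.rand(4, 6) * 100          # stand-in for a DTM raster
row_cumsum = np.cumsum(dem, axis=1) * dx  # cumulative area along each row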

Related

How to determine if an object is flat or not from depth image?

I have a 2x2 matrix of distances from a depth sensor.
The matrix is cropped so that only the points we are interested in are in the frame (all the points in the cropped image belong to the object).
My question is how can we determine if this object is flat or not?
The depth image is acquired from Realsense d435. I read the depth image and then multiply it by depth_scale.
The object is recognized using AI on the RGB image that is aligned with the depth image.
And I have 4 points on the object, so all the distances in that rectangle contain the distance of the object from the sensor.
My first idea was to use the standard deviation of all the points, but this falls apart if the image is taken from an angle (since the standard deviation won't be 0).
From an angle, the distance to a flat object changes uniformly along the y axis. Maybe we can use this information somehow?
The 2x2 matrix is a numpy array in python. Maybe there are some libraries which do this already.
After reprojecting your four depth measurements to the 3D space, it becomes a problem of deciding if your set of points is coplanar. There are several ways you can go about it.
One way to do it is to reproject the points to 3D and fit a plane to all four of them there. Since you're fitting a plane to four points in three dimensions, you get an over-determined system, and it's very unlikely that all points would lie exactly on the estimated plane. At this stage, you could prescribe some tolerance to determine "goodness of fit". For instance, you could look at the R^2 coefficient.
To fit the plane you can use scipy.linalg.lstsq. Here's a good description of how it can be done: Fit plane to a set of points in 3D.
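A minimal sketch of such a fit for a plane of the form z = a*x + b*y + c, using scipy.linalg.lstsq (the point values and the tolerance below are placeholders to be tuned to your data):

import numpy as np
from scipy.linalg import lstsq

# Four reprojected 3D points (made-up values for illustration).
pts = np.array([[0.00, 0.00, 1.00],
                [0.10, 0.00, 1.01],
                [0.00, 0.10, 0.99],
                [0.10, 0.10, 1.02]])

# Least-squares fit of the plane z = a*x + b*y + c.
A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
coeffs, _, _, _ = lstsq(A, pts[:, 2])

# Residuals: how far each measured z lies from the fitted plane (along z).
residuals = pts[:, 2] - A @ coeffs
ss_res = np.sum(residuals**2)
ss_tot = np.sum((pts[:, 2] - pts[:, 2].mean())**2)
r_squared = 1.0 - ss_res / ss_tot if ss_tot > 0 else 1.0

# Decide flatness against an experimentally tuned tolerance.
is_flat = ss_res < 1e-4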
Another way to approach the problem is by calculating the volume of the tetrahedron spanned by the four points in 3D. If they are coplanar (or close to coplanar), the volume of such a tetrahedron should be equal to (or close to) 0. Assuming your points reprojected to 3D can be described by (x_0, y_0, z_0), ..., (x_3, y_3, z_3), the volume of the tetrahedron is equal to:
volume = abs(numpy.linalg.det(tetrahedron)) / 6, where
tetrahedron = np.array([[x_0, y_0, z_0, 1], [x_1, y_1, z_1, 1], [x_2, y_2, z_2, 1], [x_3, y_3, z_3, 1]])
To check if your points are on the same plane (equivalently, if the tetrahedron has a small enough volume), it is now sufficient to check if
volume < TOL
for some defined small tolerance value, which must be determined experimentally.
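Wrapped into a small helper (the tolerance and the example points are placeholders to be tuned to your data):

import numpy as np

def is_coplanar(p0, p1, p2, p3, tol=1e-6):
    # Volume of the tetrahedron spanned by the four points.
    tetrahedron = np.array([[*p0, 1.0],
                            [*p1, 1.0],
                            [*p2, 1.0],
                            [*p3, 1.0]])
    volume = abs(np.linalg.det(tetrahedron)) / 6.0
    return volume < tol

# Example: the first three points define a plane, the fourth lies slightly off it.
print(is_coplanar((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0.001)))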
You can define a surface by choosing three of the four 3D points.
Evaluate the distance from the remaining point to the surface.
As for how to choose the three points, it may be good to choose the combination that maximizes the area of the triangle.
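A small sketch of that check with numpy (the example points are illustrative only, and the tolerance you compare d against should reflect the sensor noise):

import numpy as np

def point_plane_distance(p0, p1, p2, q):
    # Distance from point q to the plane through p0, p1, p2.
    p0, p1, p2, q = map(np.asarray, (p0, p1, p2, q))
    normal = np.cross(p1 - p0, p2 - p0)       # plane normal
    normal = normal / np.linalg.norm(normal)  # unit normal
    return abs(np.dot(q - p0, normal))        # perpendicular distance

d = point_plane_distance((0, 0, 0), (1, 0, 0), (0, 1, 0), (0.5, 0.5, 0.002))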

How to calculate the average/best Rotation between two coordinate systems?

Through a sensor I get the rotation from points in coordinate system A to points in coordinate system B. The measured rotations between the coordinate systems are not 100% identical due to the noise of the sensor.
How can I determine the average or optimal rotation matrix between the coordinate systems? Similar to this problem: stackoverflow: Averaging Quaternions, but contrary to that I do not want to use quaternions; instead I want to try some least-squares approach.
Given: Rba(n): Rotation matrix from a to b, measured at n different time points
Wanted: Rba optimal
My approach: Minimization of the squared distance.
First I define n random points in space and apply the rotations to these points.
And now I can calculate the rotation by means of the Kabsch algorithm, using singular value decomposition to minimize the squared distance between the input points and the transformed points.
However, what I don't understand is that the calculated rotation matrix seems to be dependent on the input points. That is, I get different rotation matrices as a result for different input points, although the applied rotation matrices Rba(n) remain the same.
Why is that? And what is the right way?
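A sketch of the setup described above, with synthetic stand-in rotations and points, just to make the Kabsch/SVD step concrete (nothing here is sensor data, and this only illustrates the approach, not why the result depends on the chosen points):

import numpy as np

rng = np.random.default_rng(0)

def random_rotation():
    # Random proper rotation matrix, standing in for a measured Rba(n).
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q

R_measured = [random_rotation() for _ in range(5)]

# Random points in frame a and their images under each measured rotation.
pts_a = rng.normal(size=(10, 3))
pts_b = np.vstack([pts_a @ R.T for R in R_measured])
pts_a_tiled = np.vstack([pts_a] * len(R_measured))

# Kabsch: SVD of the cross-covariance, with a reflection correction.
H = pts_a_tiled.T @ pts_b
U, _, Vt = np.linalg.svd(H)
d = np.sign(np.linalg.det(Vt.T @ U.T))
R_avg = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T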

Quantify roughness of a 2D surface based on given scatter points geometrically

How can one design simple code to automatically and geometrically quantify the roughness of a 2D surface based on given scatter points? For example, use a number r, with r=0 for a smooth surface, r=1 for a very rough surface, and 0 < r < 1 for a surface somewhere in between.
To more explicitly illustrate this question, the attached figure below is used to show several sketches of 2D rough surfaces. The dots are the scattered points with given coordinates. Accordingly, every two adjacent dots can be connected and a normal vector of each segment can be computed (marked with arrow). I would like to design a function like
def roughness(x, y):
    ...
    return r
where x and y are sequences of coordinates of each scatter point. For example, in case (a), x=[0,1,2,3,4,5,6], y=[0,1,0,1,0,1,0]; in case (b), x=[0,1,2,3,4,5], y=[0,0,0,0,0,0]. When we call the function roughness(x, y), we will get r=1 (very rough) for case (a) and r=0 (smooth) for case (b). Maybe r=0.5 (medium) for case (d). The question is refined to what appropriate components do we need to put inside the function roughness?
Some initial thoughts:
Roughness of a surface is a local concept, which we only consider within a specific range of area, i.e. only with several local points around the location of interest. Use the mean of the local normal vectors? This may fail: (a) and (b) have the same mean, (0, 1), but (a) is a rough surface and (b) is a smooth surface. Use the variance of the local normal vectors? This may also fail: (c) and (d) have the same variance, but (c) is rougher than (d).
maybe something like this:
import numpy as np

def roughness(x, y):
    # angles of the segments between successive points
    t = np.arctan2(np.diff(y), np.diff(x))
    # sine of the differences between successive angles:
    # sin(t2 - t1) = sin(t2)*cos(t1) - cos(t2)*sin(t1)
    ts = np.sin(t)
    tc = np.cos(t)
    dt = ts[1:] * tc[:-1] - tc[1:] * ts[:-1]
    # mean of the squared angle differences
    return np.sum(dt**2) / len(dt)
Would that give you something like what you're asking for?
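For instance, on the example cases from the question this metric gives 1.0 for the zig-zag case (a) and 0.0 for the flat case (b):

print(roughness([0, 1, 2, 3, 4, 5, 6], [0, 1, 0, 1, 0, 1, 0]))  # case (a): 1.0
print(roughness([0, 1, 2, 3, 4, 5], [0, 0, 0, 0, 0, 0]))        # case (b): 0.0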
Maybe you should consider a protocol definition:
1) geometric definition of the surface first
2) grant unto that geometric surface intrinsic properties.
2.a) step function can be based on quadratic curve between two peaks or two troughs with their concatenated point as the focus of the 'roughness quadratic' using the slope to define roughness in analogy to the science behind road speed-bumps.
2.b) elliptical objects can be defined by a combination of deformation analysis with centered circles on the incongruity within the body. This can be solved in many ways analogous to step functions.
2.c) flat lines: select points that deviate from the mean and do a Newtonian around with a window of 5-20 concatenated points, or whatever is clever.
3) define a proper threshold that fits whatever intuition you are defining as "roughness", or apply the conventions of any professional field to your liking.
This branched approach might be quicker to program, but I am certain this solution can be refactored into a Euclidean construct of 3-point ellipticals, if someone is up for a geometry problem.
The mathematical definitions of many surface parameters can be found here, which can be easily put into numpy:
https://www.keyence.com/ss/products/microscope/roughness/surface/parameters.jsp
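For instance, two of the most common line-roughness parameters listed there, the arithmetic mean deviation Ra and the root-mean-square deviation Rq, are essentially one-liners in numpy (heights taken relative to the mean line):

import numpy as np

def Ra(y):
    # Arithmetic mean deviation of the profile from its mean line.
    y = np.asarray(y, dtype=float)
    return np.mean(np.abs(y - y.mean()))

def Rq(y):
    # Root-mean-square deviation of the profile from its mean line.
    y = np.asarray(y, dtype=float)
    return np.sqrt(np.mean((y - y.mean())**2))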
Image (d) shows a challenge: basically you want to flatten the shape before doing the calculation. This requires prior knowledge of the type of geometry you want to fit. I found an app Gwyddion that can do this in 3D, but it can only interface with Python 2.7, not 3.
If you know which base shape lies underneath:
1) fit the known shape
2) calculate the arc distance between each two points
3) remap the numbers by subtracting 1) from the original data and assigning new coordinates according to 2)
4) perform normal 2D/3D roughness calculations
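A rough sketch of that workflow for a profile whose base shape can be fitted with a polynomial (the polynomial degree and the Ra-style roughness of the residual are assumptions; for circles or other base shapes the fitting step changes accordingly):

import numpy as np

def flattened_roughness(x, y, base_degree=2):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)

    # 1) fit the known base shape
    base = np.polyval(np.polyfit(x, y, base_degree), x)

    # 2) arc distance between each two points along the fitted base
    seg = np.hypot(np.diff(x), np.diff(base))
    s = np.concatenate([[0.0], np.cumsum(seg)])  # new coordinate along the base

    # 3) residual heights after subtracting the base shape
    resid = y - base

    # 4) ordinary roughness of the flattened profile (arithmetic mean deviation)
    return s, resid, np.mean(np.abs(resid - resid.mean()))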

Computing diameter-lines of a 3D spherical mask

Background
For an algorithm I'm working on, I currently use a 3D sphere as a binary mask: an NxNxN array with the voxels inside a sphere of radius N//2 set to True. Further processing does a computation for each voxel set to True.
This proved computationally intensive for my specific task as N grew large (the number of voxels grows as O(N^3)), so I now want to reduce my binary mask to a subsample of lines radiating from the array center, within the radius.
Objective
I want a 3D binary mask of the lines in gray in the image.
To have a bit of control over the number of voxels, I would have a parameter (say l) regulating the number of lines sampled in each 2D circle, and maybe a second one (say k) for the number of z-rotations.
What I tried
I am using numpy and scipy, and I thought that I could use the scipy.ndimage.interpolation.rotate method to rotate a single line around on a plane, then use that complete 2D mask to rotate around the z-axis.
This proved difficult, as the interpolation uses some deep magic regarding splines that discards my True values on rotation.
I am thinking that I could compute mathematically which voxel should be set to True by following some line-equations, but I'm at a loss to find them.
Any idea how to get there?
Update: Solution!
Thanks to jkalden, who helped me think this through and gave code samples, I have this:
rmax is the radius of the sphere, n_theta and n_phi the number of polar and azimuthal lines to use.
out_mask = np.zeros((rmax*2,) * 3, dtype=bool)
# for each phi = one circle in the azimuthal circles
for phi in np.linspace(0, np.deg2rad(360), n_phi, endpoint=False):
    # for all lines in the polar circle of this azimuthal circle
    for theta in np.linspace(0, np.deg2rad(360), n_theta, endpoint=False):
        # for all distances (0-rmax) along these lines
        for r in range(rmax):
            coords = spherical_to_cartesian([r, theta, phi]) + rmax
            out_mask[tuple(coords)] = True
With the spherical_to_cartesian from this code sample.
Which gives me this (with rmax = 50 and n_theta = n_phi = 8):
(The center area is left out of my function by choice.)
I propose changing the coordinate system to spherical coordinates. Thus, you will choose your 2D circle by an azimuthal angle, and a line is then defined by additionally choosing a polar angle. The variable along the line is then just the radius, and you can use numpy.linspace to discretize it. Doing so might also save time during the calculation.
You can switch your coordinate system any time by using the bijective relation which is implemented e.g. here or here.
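The referenced code sample is not reproduced above, but a conventional spherical-to-Cartesian conversion looks roughly like this (the physics convention with theta as the polar angle and phi as the azimuthal angle is assumed, as is rounding to integer voxel offsets so the loop above can index the mask):

import numpy as np

def spherical_to_cartesian(rtp):
    # Convert (r, theta, phi) to integer (x, y, z) voxel offsets.
    r, theta, phi = rtp
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return np.round([x, y, z]).astype(int)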

Width of an arbitrary polygon

I need a way to characterize the size of sets of 2-D points, so I can determine whether to render them as individual points in a space or as representative polygons, depending on the scale of the viewport. I already have an algorithm to calculate the convex hull of the set to produce the representative polygon, but I need a way to characterize its size. One obvious measure is the maximum distance between points on the convex hull, which is the diameter of the set. But I'm really more interested in the size of its cross-section perpendicular to its diameter, to figure out how narrow the bounding polygon is. Is there a simple way to do this, given the sorted list of vertices and the indices of the furthest points (ideally in Python)?
Or alternatively, is there an easy way to calculate the radii of the minimal area bounding ellipse of a set of points? I have seen some approaches to this problem, but nothing that I can readily convert to Python, so I'm really looking for something that's turnkey.
You can compute:
the size of its cross-section perpendicular to its diameter
with the following steps:
Find the convex hull
Find the two points a and b which are furthest apart
Find the direction vector d = (a - b).normalized() between those two
Rotate your axes so that this direction vector lies horizontal, using the matrix:
[ d.x, d.y]
[-d.y, d.x]
Find the minimum and maximum y value of points in this new coordinate system. The difference is your "width"
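A minimal sketch of these steps using scipy.spatial.ConvexHull (the brute-force furthest-pair search is an assumption made for brevity; rotating calipers would be faster for large hulls):

import numpy as np
from scipy.spatial import ConvexHull
from itertools import combinations

def hull_width(points):
    # Width of a 2D point set perpendicular to its diameter.
    pts = np.asarray(points, dtype=float)
    hull = pts[ConvexHull(pts).vertices]

    # Steps 1-2: furthest pair of hull points.
    a, b = max(combinations(hull, 2),
               key=lambda ab: np.linalg.norm(ab[0] - ab[1]))

    # Step 3: unit direction vector of the diameter.
    d = (a - b) / np.linalg.norm(a - b)

    # Step 4: rotate so the diameter lies horizontal.
    rot = np.array([[ d[0], d[1]],
                    [-d[1], d[0]]])
    rotated = hull @ rot.T

    # Step 5: extent perpendicular to the diameter.
    return rotated[:, 1].max() - rotated[:, 1].min()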
Note that this is not a particularly good definition of "width" - a better one is:
The minimal perpendicular distance between two distinct parallel lines each having at least one point in common with the polygon's boundary but none with the polygon's interior
Another useful definition of size might be twice the average distance between points on the hull and the center
# convexhullpoints: (n, 2) numpy array of hull vertices
center = convexhullpoints.mean(axis=0)
size = 2 * np.mean(np.linalg.norm(convexhullpoints - center, axis=1))
