Calculate interpolation coefficients for random points in scipy/python?

Given a set of points with coordinates x, y, z and a function value f(x, y, z), I want to interpolate a new function value for a new coordinate.
Since this computation will be done on a GPU, I want to precompute the interpolation coefficients with Python.
So basically I need some kind of function which returns interpolation coefficients for a given set of points and function values.
Usually the interpolation uses just a handful of points (8-16).
I thought about using the griddata function from python, see:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html#scipy.interpolate.griddata
or using an RBF Function.
But I'm not sure how to get the interpolation scheme/coefficients
from these Python functions. I'm also not sure which interpolation scheme is best suited for this kind of problem.
Thank you for any suggestions.
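
One possible route (a sketch, not necessarily the best scheme for your data) is linear interpolation via scipy.spatial.Delaunay and barycentric coordinates, which is essentially what griddata's 'linear' method does internally. The enclosing simplex's vertex indices and barycentric weights are exactly the coefficients you could precompute and ship to the GPU:

import numpy as np
from scipy.spatial import Delaunay

points = np.random.rand(16, 3)            # known (x, y, z) coordinates (made-up data)
values = np.random.rand(16)               # f(x, y, z) at those coordinates

# query points chosen inside the convex hull so find_simplex succeeds
centroid = points.mean(axis=0)
queries = 0.5 * points[:5] + 0.5 * centroid

tri = Delaunay(points)
simplex = tri.find_simplex(queries)       # index of the containing tetrahedron per query
vertices = tri.simplices[simplex]         # indices of the 4 corner points
T = tri.transform[simplex]                # affine transform to barycentric coordinates
bary = np.einsum('nij,nj->ni', T[:, :3, :], queries - T[:, 3, :])
weights = np.c_[bary, 1.0 - bary.sum(axis=1)]   # 4 weights per query, summing to 1

# On the GPU you would only need `vertices` and `weights`:
interpolated = (values[vertices] * weights).sum(axis=1)
print(interpolated)

Note that find_simplex returns -1 for query points outside the convex hull, so that case needs a fallback (e.g. nearest neighbour).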

Related

How to fit a plane to points in 3D using available methods in Python

I have a bunch of points in 3D (x, y and z) and want to find the best plane fitting these points. I used the Rbf method of scipy, but it gives me interpolated values and I want the equation of the plane. I have also read this solution and this one, but I still cannot find the equation of the plane. The important note is that my points usually form curved surfaces, so a linear least-squares plane is not what I want. I do appreciate any help in advance.
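
If what's wanted is an explicit equation rather than an interpolator (an assumption on my part), one option is to fit a low-order polynomial surface z = f(x, y) by least squares. A sketch with made-up data:

import numpy as np

# hypothetical data; replace x, y, z with the real point coordinates
rng = np.random.default_rng(0)
x, y = rng.random(50), rng.random(50)
z = 1.0 + 0.5 * x - 0.3 * y + 0.2 * x**2 + rng.normal(scale=0.01, size=50)

# quadratic surface z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2
A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
print(coeffs)   # the coefficients a..f of the surface equation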

Use Quadpy to numerically integrate sets of points on a Sphere

I have a set of values on a sphere in three dimensions. I want to numerically integrate them, and I heard that quadpy offers good speed and functionality. However, I do not have a function
def func(x, y, z):
    # do something
    return f
which I could pass to quadpy. Can I just use its integration somehow to numerically integrate my set of points with one of their schemes? Otherwise, if someone knows a good and fast numpy or scipy alternative, I'd be OK with that as well.
quadpy author here. All methods in quadpy are Gaussian integration, meaning that you must be able to evaluate a function at given points. (The magic in Gaussian integration is in how the points are chosen.)
If you only have numerical data here and there, the best you can do is probably form Voronoi cells, i.e., for each point i compute the area V_i of the region closest to that point, and then compute
sum_i V_i * f(x_i)
As an approximation, you can use meshzoo to create a spherical mesh and assign the triangles to their closest x_i.
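
As a sketch of that weighted sum with plain scipy (assuming scipy >= 1.5 for SphericalVoronoi.calculate_areas), using made-up data:

import numpy as np
from scipy.spatial import SphericalVoronoi

points = np.random.randn(200, 3)
points /= np.linalg.norm(points, axis=1, keepdims=True)   # project onto the unit sphere
f_values = points[:, 2] ** 2                               # example data: f = z^2

sv = SphericalVoronoi(points, radius=1.0, center=np.zeros(3))
areas = sv.calculate_areas()            # V_i: area of each point's Voronoi cell
integral = np.sum(areas * f_values)     # sum_i V_i * f(x_i)
print(integral)                          # should be close to 4*pi/3 for f = z^2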

How to define a higher-degree spline using python?

I am using the SciPy CubicSpline interpolation based on a certain number of points (diagram not reproduced here).
My problem is that the second derivative of the CubicSpline function looks a little bit edgy.
In order to smooth that second-derivative curve I need a higher degree of spline interpolation. Is there a SciPy built-in function (similar to CubicSpline) or an easy way to do that? (A B-spline function won't work.)
make_interp_spline should be able to construct BSpline objects of higher degrees (FITPACK only goes up to k=5, which is hardcoded fairly deep down).
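
For illustration, a minimal sketch with made-up data (assuming enough sample points for the requested degree):

import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(0, 10, 30)
y = np.sin(x)

spl = make_interp_spline(x, y, k=7)    # a BSpline object of degree 7
d2 = spl.derivative(2)                 # its second derivative, also a BSpline
print(d2(5.0))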

scipy.interpolate.splrep: What do the derivatives beyond der=1 represent?

I've been using scipy.interpolate.splrep to create path animations in a 3D application. I use the first derivative (der=1) option to get the tangent to the curve at a given point, which then allows me to calculate a forward vector to orient the object following that path.
This is when it occurred to me that splrep can also compute the second (der=2) and third (der=3) derivatives (and presumably more?).
I know that the first derivative of a spline at a given query point corresponds to the tangent to the spline at that point. What I'd like to know is: what do the derivatives beyond der=1 represent, and what can the values splrep returns for them be used for?
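
For context, a quick sketch of querying those derivatives (the data here is made up). In the path-animation reading, the second derivative is the acceleration along the parameter and the third the jerk; with the default cubic fit (k=3), der cannot exceed 3:

import numpy as np
from scipy.interpolate import splrep, splev

t = np.linspace(0, 2 * np.pi, 50)
x = np.cos(t)

tck = splrep(t, x)                     # cubic spline (k=3) by default
first = splev(1.0, tck, der=1)         # tangent / velocity along the parameter
second = splev(1.0, tck, der=2)        # acceleration along the parameter
third = splev(1.0, tck, der=3)         # jerk (rate of change of acceleration)
print(first, second, third)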

DTW for 3D gesture recognition

I'm using DTW to compare gestures in 3D space, relying on 3-axis accelerometer data, using the Python MLPY module.
I'm in doubt whether I need to apply DTW to each axis (x, y, z) independently and then sum up the resulting costs, or whether there is some way to combine the axes before running DTW. I think just running DTW on the norm of the vector is misleading, as that would discard useful information.
What would you suggest?
I would use the Euclidean distance; see the bottom of my earlier answer.
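
A minimal sketch of that idea (plain numpy rather than MLPY, since I'm not certain of MLPY's multivariate support): DTW where the local cost between two samples is the Euclidean distance across the three axes, so the axes are combined per sample instead of being warped independently:

import numpy as np

def dtw_euclidean(a, b):
    """a, b: arrays of shape (n_samples, 3) of accelerometer frames."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])       # Euclidean frame distance
            cost[i, j] = d + min(cost[i - 1, j],          # insertion
                                 cost[i, j - 1],          # deletion
                                 cost[i - 1, j - 1])      # match
    return cost[n, m]

# hypothetical gesture recordings
gesture_a = np.random.randn(40, 3)
gesture_b = np.random.randn(55, 3)
print(dtw_euclidean(gesture_a, gesture_b))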
