So I have a 4D force field, as in x, y, and z with a force attached to each point.
I'm a bit at a loss on how to interpolate this, but I came across this:
http://docs.scipy.org/doc/scipy-dev/reference/generated/scipy.ndimage.interpolation.map_coordinates.html
It looks like I can make an array of x, y, z, F and then interpolate it, but I'm really not too sure.
Does anyone know any good Python libraries for doing this?
Example input is:
x y z Force
0 0 0 35
0 0 1 32
0 0 2 23
...
2 5 9 54
The way you described it (x, y, z plus a force value), this is a 3D field, not 4D.
The purpose of interpolation is to estimate the field (in this case Force) at any point (x, y, z),
even if you don't have a measured force at precisely that point. Instead, you have the forces at a number of nearby points and you interpolate them to get a meaningful force at (x, y, z). In 3D you would need at least a trilinear interpolation algorithm, and coding it up (and keeping the indices in order) is non-trivial.
In Python, you can use Scipy's interpolation routines.
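For instance, if your samples lie on a regular grid (as the example input suggests), one option is scipy.interpolate.RegularGridInterpolator; the map_coordinates routine from the question also works, but it expects query points in array-index coordinates. A minimal sketch, where the grid sizes and the random force values are just assumptions standing in for your data:
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Assumed regular grid matching the example input: x in 0..2, y in 0..5, z in 0..9
x = np.arange(3)
y = np.arange(6)
z = np.arange(10)
force = np.random.rand(3, 6, 10)  # force[i, j, k] is the force at (x[i], y[j], z[k]); random stand-in data

interp = RegularGridInterpolator((x, y, z), force, method='linear')
print(interp([[0.5, 2.3, 7.1]]))  # interpolated force at an off-grid point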
So, if I understand you correctly, you have the force given at some points but not at others, and you want to come up with an interpolation?
In NMR, linear interpolation is still standard practice. As an alternative, you can give splines a try, but that's more costly. It mostly depends on the nature of your force: how fast does it change? Build your interpolation routine around that.
I am writing a Python script for some geometrical data manipulation (calculating motion trajectories for a multi-drive industrial machine). Generally, the idea is that there is a given shape (let's say an ellipse, but in the general case it can be any convex shape defined by a series of 2D points) which is rotated, and its uppermost tangent point must be followed. I don't have a problem with the latter part, but I need a little hint with the 2D shape preparation.
Let's say that the ellipse was defined with too few points, for example 25. (As I said, ultimately this can be any shape, for example a rounded hexagon.) To maintain the necessary precision I need far more points (let's say 1000), preferably equally distributed over the whole shape, or with a higher density of points near corners, sharp curves, etc.
I have a few things ringing in my head. I guess that the DFT (FFT) would be a good starting point for this resampling; analyzing scipy.signal.resample() I have found that there are far more functions in the scipy.signal package which sound promising to me...
What I'm asking for is a suggestion on which way to go and which tool to try for this job. Maybe there is a tool meant exactly for what I'm looking for, or maybe I'm overthinking this and one of the FFT-based implementations like resample() will work just fine (of course, after some adjustments at the start and end of the shape to make sure it closes without issues)?
scipy.signal sounds promising; however, as far as I understand, it is meant for time-series data, not geometrical data. I guess this may cause some problems, as my data isn't a function (in the mathematical sense).
Thanks and best regards!
As far as I understand, what you want is an interpolated version of your original data.
The DFT (or FFT) will not achieve this, since it performs a Fourier transform (which is not what you want).
Theoretically speaking, to interpolate your data you need to define a function that calculates the result at the new data points.
So, let's say your data contains 5 points, at each of which a single number is stored (1D, to keep it simple), and you want a new array sampled twice as finely, filled with a linear interpolation of your original data.
Using numpy.interp:
import numpy as np
original_data = [2, 0, 3, 5, 1]  # your data in 1D
new_data_resolution = 0.5        # new sampling distance (i.e., your x-axis resolution)
interp_data = np.interp(
    x=np.arange(0, len(original_data) - 1 + new_data_resolution, new_data_resolution),  # new sampling points (new axis)
    xp=np.arange(len(original_data)),  # original sampling points
    fp=original_data,                  # original data values
)
# now interp_data contains (5 - 1) / 0.5 + 1 = 9 points
After this, you will have an array of length (5 - 1) / new_data_resolution + 1 (greater than 5, since new_data_resolution < 1), whose values are (in this case) a linear interpolation of your original data.
Once you have understood this example, you can dive into the scipy.interpolate module to get a better understanding of the interpolation functions (my example uses a linear function to fill in the data at the missing points).
Applying this to n-dimensional arrays is straightforward: just iterate over each dimension of your data.
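For the closed-shape resampling asked about above, one concrete option from scipy.interpolate is a periodic parametric spline. A minimal sketch, where the 25-point ellipse is only a stand-in for your input shape:
import numpy as np
from scipy.interpolate import splprep, splev

# Stand-in for your coarse input shape: 25 points on an ellipse
t = np.linspace(0, 2 * np.pi, 25, endpoint=False)
x, y = 3 * np.cos(t), 1 * np.sin(t)
x, y = np.append(x, x[0]), np.append(y, y[0])  # close the loop so the first and last points coincide

# Fit a periodic parametric spline through the points, then resample at 1000 parameter values
tck, u = splprep([x, y], s=0, per=True)
x_fine, y_fine = splev(np.linspace(0, 1, 1000), tck)
Because the spline is periodic, the resampled curve closes smoothly; note that equally spaced parameter values are only approximately equally spaced along the curve, so you may want to reparametrize by arc length if that matters for your machine trajectories.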
I have a mixed-integer problem. I need to minimize a function which is a weighted least-squares regression, the weights being dependent on the regression (iteratively reweighted least squares). Seven parameters define my piecewise regression. I need to find a local minimum around a first guess.
I tried to write the problem in gekko, but I somehow found it very difficult to implement. After many tries, I stopped at "negative DOF".
Anyway, I decided to brute force the problem. It works, but it's slow. I build a cube (itertools) around my working point in 7D and calculate the weighted square errors at each of the 3^7 points. I have boundaries for each dimension, and sometimes my working point is on one of the faces of my 7D domain. Technically, I have 2^p * 3^(7-p) points. I now have a list of all the values, find the minimum, move my working point there and restart building a cube, excluding all the points that I have already calculated in the previous loop steps.
Now I want to accelerate it by calculating the gradient at my working point and moving faster (skipping a step or two in my loop). np.gradient will require a 7D array in order to work correctly.
Given a point and 7 ranges around that point, how do I build a 7D array efficiently? And how do I compute the image of this array under my function (i.e., the function value at every grid point)?
Please don't say 7 for loops.
Regardless of whether your function is vectorized, you can use an approach with np.indices like this:
import numpy as np
base_grid = np.indices(7 * (3,), sparse=False) - 1  # shape (7, 3, 3, 3, 3, 3, 3, 3)
This produces an array of all the combinations of -1, 0, 1 that you need. np.meshgrid does something similar, but the arrays will be separated into a tuple, which is inconvenient.
At each iteration, you modify the grid with your step (scale) and offset:
current_grid = base_grid * scale + offset.reshape(7, *(1,) * 7)  # reshape the 7-element offset so it broadcasts against the leading axis
If your function is vectorized, you can call it directly: the grid is a single array of shape (7, 3, 3, 3, 3, 3, 3, 3), i.e. seven stacked 3x3x3x3x3x3x3 coordinate arrays. If it accepts seven separate inputs, just use star expansion.
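For instance, assuming your error function f accepts seven array arguments and broadcasts over them:
values = f(*current_grid)  # evaluates f at all 3**7 grid points at once; values.shape == (3,) * 7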
If your function is not vectorized, you can still evaluate it with a single flat loop over all the grid points rather than seven nested loops, for example by reshaping the grid into a list of 7D points:
pts = current_grid.reshape(7, -1).T          # shape (3**7, 7): one row per grid point
vals = np.array([f(*p) for p in pts])        # call f once per point
vals = vals.reshape(current_grid.shape[1:])  # back to the (3,) * 7 cube
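From there (using vals and current_grid from the sketch above), finding the minimum and moving the working point there, as the question describes, is just an argmin:
best = np.unravel_index(np.argmin(vals), vals.shape)  # multi-index of the best grid point
offset = current_grid[(slice(None),) + best]          # the new 7D working point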
Does anyone know of a good way to deal with a point cloud heightfield in python? I've done some googling but I'm not sure if any of the results really describe what I need. I have over 20 million points in the form of (lat, lon, altitude) (or if you'd rather, (x, y, altitude)) and want to be able to interpolate to an arbitrary position. But my experience is that a simple quadratic interpolation is terrible for altitude data. E.g. picture the following height points, and wanting to interpolate at point X:
4 3 2
4 3 2 2 0 0
3 2 0 0 2
2 22 0 2 2
2 X 2 3
0 0 0 2 3
0 2 3 4
2 3
Clearly X is down in the bottom of some sort of deep channel running ENE from the bottom left (or WSW from the top right). But quadratic interpolation will state that X is at around height 2 - aka not in a canyon at all, but on its rim. Quadratic interpolation cares only about distance, not angle. But in terrain, angle matters a lot. If you have a distant point, and several closer points in the same direction, the distant point is virtually meaningless, but if you have a distant point in a direction with no other points in the way, that point can be very meaningful.
A good triangulation could capture the geometry, obviously - it's easy enough to interpolate a point on a triangular facet. But a Delaunay triangulation wouldn't cut it - it would just reinforce the screwup, because it doesn't like long thin shapes like canyons, and would instead prefer to tessellate the points across the edges of the canyon in the above example.
I can think up algorithms to capture the terrain well (such as "projecting" from each point with a line toward X representing a weighted slope contribution, and weakening the weighting by each point the line passes, relative to how close it passes... then doing a least-squares fit for all slopes and points). Another option that comes to mind is using quadratic interpolation in all directions from X to get slopes in each direction, then iteratively adjusting X up or down to converge to a solution where it's not jutting unusually up or down as a whole. But any new algorithm would probably be significant work to implement, and I hate reinventing the wheel (probably suboptimally). Surely there must be some sort of fast library that would be good for this sort of thing? Hopefully pythonic? :)
I am estimating the fundamental matrix and the essential matrix using the built-in functions in OpenCV. I provide input points to the functions using ORB and a brute-force matcher. These are the problems I am facing:
1. The essential matrix that I compute from the built-in function does not match the one I find from the mathematical computation using the fundamental matrix, E = K.t() * F * K.
2. As I vary the number of points used to compute F and E, the values of F and E keep changing. The function uses the RANSAC method. How do I know which value is the correct one?
3. I am also using a built-in function to decompose E and find the correct R and T from the 4 possible solutions. The values of R and T also change with the changing E. More concerning is the fact that the direction vector T changes without a pattern: say it was in the X direction for one value of E; if I change E, it changes to Y or Z. Why is this happening? Has anyone else had the same problem?
How do I resolve this? My project involves taking measurements of objects from images.
Any suggestions or help would be welcome!
Both F and E are defined only up to a scale factor. It may help to normalize the matrices, e.g. by dividing by the last element.
RANSAC is a randomized algorithm, so you will get a different result every time. You can test how much it varies by triangulating the points, or by computing the reprojection errors. If the results vary too much, you may want to increase the number of RANSAC trials or decrease the distance threshold, to make sure that RANSAC converges to the correct solution.
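As a rough sketch of that comparison, here is how you might check E = K.t() * F * K against the direct estimate; the intrinsics K, the two-view geometry, and the synthetic correspondences below are assumptions standing in for your calibration and your ORB matches:
import cv2
import numpy as np

# Synthetic stand-in data; in your case pts1/pts2 come from ORB + brute-force matching
rng = np.random.default_rng(0)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                            # assumed camera intrinsics
R_true, _ = cv2.Rodrigues(np.array([[0.0], [0.1], [0.0]]))  # small rotation between the two views
t_true = np.array([[1.0], [0.2], [0.0]])                    # translation between the two views

X = rng.uniform(-1.0, 1.0, (50, 3))                         # 50 random 3D points ...
X[:, 2] = rng.uniform(4.0, 8.0, 50)                         # ... in front of both cameras

def project(P, K, R, t):
    x = (K @ (R @ P.T + t)).T                               # pinhole projection
    return x[:, :2] / x[:, 2:]

pts1 = project(X, K, np.eye(3), np.zeros((3, 1)))
pts2 = project(X, K, R_true, t_true)

F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
E_from_F = K.T @ F @ K                                      # E = K^T F K (same K for both views)
E, _ = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC)

# Both matrices are only defined up to scale (and sign), so normalize before comparing;
# dividing by the last element, as suggested above, also works when it is not near zero.
print(E_from_F / np.linalg.norm(E_from_F))
print(E / np.linalg.norm(E))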
Yes, computing the fundamental matrix gives a different matrix every time, because it is only defined up to a scale factor (and the RANSAC estimation is randomized).
It is a rank-2 matrix with 7 degrees of freedom: 9 entries, minus 1 for the arbitrary overall scale, minus 1 for the rank-2 constraint det(F) = 0.
The fundamental matrix is a 3x3 matrix, and F33 (3rd row, 3rd column) is commonly fixed as the scale factor.
You may ask why we fix F33 to a constant: because x_left^T * F * x_right = 0 is a homogeneous equation with infinitely many solutions (any scalar multiple of F also satisfies it), so we remove the scale ambiguity by adding the constraint that F33 is constant.
As part of my research, I'm required to integrate data-defined functions over small subsets of their domain, many times over. This is a long post: answers to any of the three questions below will be acknowledged!
For example, let's say I have a large 1-D domain x, and a function y to be integrated over some subset of x.
In a typical example, the domain x would have 1000 grid points, where at step 'i' my data function y[i,:] is a numpy array the same size as x. Typically y will be a numpy array of shape (1000,1000).
Now, for each value of i, I need to employ quadrature, for many points in x, to find the integral of y[i,arr] over arr, where arr is a subdomain of x.
Here's my first problem: when arr is small (say 3 points), methods like scipy.integrate.cumtrapz won't give a good approximation - there are only three values in y[i,arr].
At each step i, one has to do such integrations for subdomains arr ranging in size from 3 to approximately 200 points. The output of these integrations is used to update y[i+1,:], so I believe much error is being introduced by my current use of cumtrapz.
Edit: Many thanks to @Fabian Rost, who provided an answer to what was Question 1: whether errors were in fact being introduced. He also proposed using linear interpolation, as in Question 2 below, and gave an estimate of how long such a technique would take. I guess what really remains is whether there is a faster technique than the one proposed.
My proposed solution is to:
Create a new interpolating object, say y2, for y[i,arr2], where arr2 is a sufficiently larger subdomain than arr.
Create a new linspace x2 corresponding to the intersection of arr with arr2, then use an existing function-integration method like scipy.integrate.quadrature to integrate y2 over x2.
The result from step 2 is probably a really good approximation to the integral for y[i,arr].
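A minimal sketch of steps 1 and 2, using scipy.interpolate.interp1d and scipy.integrate.quad; the array sizes, the index i, and the particular arr/arr2 windows below are assumptions purely for illustration:
import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import quad

x = np.linspace(0.0, 1.0, 1000)        # the 1-D domain
y = np.random.rand(1000, 1000)         # stand-in for the data; row i is y[i, :]

i = 0
arr = np.arange(400, 403)              # small subdomain (3 points)
arr2 = np.arange(390, 413)             # sufficiently larger subdomain around arr

y2 = interp1d(x[arr2], y[i, arr2], kind='linear')  # step 1: interpolating object on arr2
integral, err = quad(y2, x[arr[0]], x[arr[-1]])    # step 2: integrate it over the arr range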
Question 2: Are all these steps necessary? That is, is there a built-in that will do all this for me?
Question 3: I believe if I want to avoid errors, I have to do these interpolations->integrations, over 1000 iterations, for at least 200 subdomains at each iteration. This can clearly become quite costly. Is there a more Pythonic way to do this?
Answers to any of these questions are GREATLY appreciated! Thanks so much for reading.
Assuming linear interpolation is a good model, you could define a continuous data function and integrate it using scipy.integrate.quad like this:
import numpy as np
import scipy.integrate

xp = np.linspace(0, 1000, 1000)            # sample positions
yp = np.random.randn(1000)                 # data values at those positions
datafunc = lambda x: np.interp(x, xp, yp)  # continuous, piecewise-linear data function
scipy.integrate.quad(datafunc, 3, 1000)
Depending on the domain size, the integration takes 2 to 4 ms on my machine. That would mean something like 4 hours for 1000 * 200 integrations, which I think is OK if you only need to do it once. But the time will depend heavily on your data.