How does this interpolating function work? (Python)

I am trying to write a function which interpolates some data, so that you can then choose any value on the x-axis and find the corresponding point on the y-axis.
For example:
f = f_from_data([3, 4, 6], [0, 1, 2])
print(f(3.5))
produces the answer
0.5
I came across an answer which looks like this:
import scipy.interpolate

def f_from_data(xs, ys):
    return scipy.interpolate.interp1d(xs, ys)
Can someone please explain how this works? I understand interp1d, but I'm not sure how this one simple line of code can produce an answer when, for example,
print(f(5))
is called on the result.

A simple example may help. interp1d is a class that acts like a function: constructing it returns not a number but another function-like object. When you then call that object with an x value, it returns the interpolated value of y at that x. You can feed this function single points or whole arrays:
import numpy as np
from scipy.interpolate import interp1d

X = [3, 4, 6]
Y = [0, 1, 2]
f = interp1d(X, Y, bounds_error=False)
print(f(3.5))

X2 = np.linspace(3, 6, 5)
print(X2)
print(f(X2))
0.5
[ 3. 3.75 4.5 5.25 6. ]
[ 0. 0.75 1.25 1.625 2. ]
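Note that bounds_error=False tells interp1d not to raise an error for x values outside the data range [3, 6]; with the default fill_value it returns nan there instead:
print(f(7.0))  # nan, since 7 is outside the data range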

Your example uses linear interpolation: straight line segments connecting the data points.
So, for your given data (xs = [3, 4, 6] and ys = [0, 1, 2]) the function is piecewise linear.
[Plot: the blue points are the input data, the green line is the interpolated function, and the red dot is the test point f(3.5) == 0.5.]
To calculate f(5.0):
First, you have to find out which line segment you are on.
x == 5 is in the second segment, between 4 and 6, so we are looking for point C (5, y) between points A (4, 1) and B (6, 2).
C is on the line, so AC = k * AB where 0 <= k <= 1; this gives us two equations in two unknowns (k and y). Solving, we get
y = Ay + (By - Ay) * (Cx - Ax) / (Bx - Ax)
and substituting in,
y = 1 + (2 - 1) * (5 - 4) / (6 - 4)
  = 1.5
so the interpolated point is C (5, 1.5) and the function returns f(5.0) = 1.5
From the above, you should be able to write your own f() function given xs and ys; and this is exactly what scipy.interpolate.interp1d(xs, ys) does: it takes xs and ys and returns an interpolating function, i.e.
f = scipy.interpolate.interp1d([3, 4, 6], [0, 1, 2])
# f is now a function that you can call, like
f(5.0)  # => 1.5
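For illustration, here is a minimal pure-Python sketch of such an f(), following the derivation above (it assumes xs is sorted and skips interp1d's extras such as vectorized input):

def f_from_data(xs, ys):
    def f(x):
        # find the segment [xs[i], xs[i+1]] that contains x
        for i in range(len(xs) - 1):
            if xs[i] <= x <= xs[i + 1]:
                # y = Ay + (By - Ay) * (Cx - Ax) / (Bx - Ax)
                return ys[i] + (ys[i + 1] - ys[i]) * (x - xs[i]) / (xs[i + 1] - xs[i])
        raise ValueError("x outside interpolation range")
    return f

f = f_from_data([3, 4, 6], [0, 1, 2])
print(f(3.5))  # 0.5
print(f(5.0))  # 1.5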

To quote the documentation:
This class returns a function whose call method uses interpolation
to find the value of new points.
Thus, calling the returned function with an x value gives the corresponding interpolated y value.

Related

numpy polynomial.Polynomial.fit() gives different coefficients than polynomial.polyfit()

I do not understand why polynomial.Polynomial.fit() gives coefficients very different from the expected coefficients:
import numpy as np
x = np.linspace(0, 10, 50)
y = x**2 + 5 * x + 10
print(np.polyfit(x, y, 2))
print(np.polynomial.polynomial.polyfit(x, y, 2))
print(np.polynomial.polynomial.Polynomial.fit(x, y, 2))
Gives:
[ 1. 5. 10.]
[10. 5. 1.]
poly([60. 75. 25.])
The first two results are OK, and thanks to this answer I understand why the two arrays are in reversed order.
However, I do not understand the meaning of the third result. The coefficients look wrong, though the polynomial I get this way seems to give correct predicted values.
The answer is slightly hidden in the docs, of course. Looking at the class
numpy.polynomial.polynomial.Polynomial(coef, domain=None, window=None)
it is clear that in general the coefficients [a, b, c, ...] are for the polynomial a + b * x + c * x**2 + .... However, there are the keyword parameters domain and window, both defaulting to [-1, 1]. I am not deeply into that class, so I am not sure about the purpose, but it is clear that a remapping takes place. Now, polynomial.Polynomial.fit() is a class method that automatically takes the x data as domain, but still maps it onto the window. Hence, in the OP [0, 10] is mapped onto [-1, 1]. This is done by x' = x / 5 - 1, or equivalently x = 5 * x' + 5. Putting the latter into the OP's polynomial we get
(5 * x' + 5)**2 + 5 * (5 * x' + 5) + 10 = 25 * x'**2 + 75 * x' + 60
Voila.
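As a quick numerical check of this mapping (a sketch relying on the fitted object's domain, window, and coef attributes):

import numpy as np

x = np.linspace(0, 10, 50)
y = x**2 + 5 * x + 10
p = np.polynomial.polynomial.Polynomial.fit(x, y, 2)
print(p.domain)  # [ 0. 10.]
print(p.window)  # [-1.  1.]
xs = x / 5 - 1   # the mapping of the data domain [0, 10] onto the window [-1, 1]
print(np.allclose(p.coef[0] + p.coef[1] * xs + p.coef[2] * xs**2, y))  # True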
To get the expected result one has to put
print(np.polynomial.polynomial.Polynomial.fit(x, y, 2, window=[0, 10]))
which gives
poly([10. 5. 1.])
Buried in the docs:
Note that the coefficients are given in the scaled domain defined by the linear mapping between the window and domain. convert can be used to get the coefficients in the unscaled data domain.
So use:
poly.convert()
This will rescale your coefficients to what you are probably expecting.
Example for data generated from 1 + 2x + 3x²:
from numpy.polynomial import Polynomial
test_poly = Polynomial.fit([0, 1, 2, 3, 4, 5],
                           [1, 6, 17, 34, 57, 86],
                           2)
print(test_poly)
print(test_poly.convert())
Output:
poly([24.75 42.5 18.75])
poly([1. 2. 3.])
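Both representations evaluate to the same values; only the coefficient basis differs. For example, test_poly(3) and test_poly.convert()(3) both give 34 (up to floating-point error).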

How to plot curve with given polynomial coefficients?

using Python I have an array with coefficients from a polynomial, let's say
polynomial = [1,2,3,4]
which means the equation:
y = 4x³ + 3x² + 2x + 1
(so the array is in reversed order)
Now how do I plot this into a visual curve in the Jupyter Notebook?
There was a similar question:
Plotting polynomial with given coefficients
but I didn't understand the answer (like, what are a and b?).
And what do I need to import to make this happen?
First, you have to decide the limits for x in your plot. Let's say x goes from -2 to 2. Let's also ask for a hundred points on our curve (this can be any sufficiently large number for your interval, so that you get a smooth-looking curve).
Let's create that array:
lower_limit = -2
upper_limit = 2
num_pts = 100
x = np.linspace(lower_limit, upper_limit, num_pts)
Now, let's evaluate y at each of these points. Numpy has a handy polyval() that'll do this for us. Remember that it wants the coefficients ordered from highest exponent to lowest, so you'll have to reverse the polynomial list:
poly_coefs = polynomial[::-1] # [4, 3, 2, 1]
y = np.polyval(poly_coefs, x)
Finally, let's plot everything:
plt.plot(x, y, '-r')
You'll need the following imports:
import numpy as np
from matplotlib import pyplot as plt
If you don't want to import numpy, you can also write vanilla python methods to do the same thing:
def linspace(start, end, num_pts):
    step = (end - start) / (num_pts - 1)
    return [start + step * i for i in range(num_pts)]

def polyval(coefs, xvals):
    # coefs ordered from highest exponent to lowest, like np.polyval
    yvals = []
    for x in xvals:
        y = 0
        for power, c in enumerate(reversed(coefs)):
            y += c * (x ** power)
        yvals.append(y)
    return yvals
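These drop-in replacements can be used the same way; a usage sketch (plotting still needs the matplotlib import from above):

xs = linspace(-2, 2, 100)
ys = polyval(polynomial[::-1], xs)
plt.plot(xs, ys, '-r')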

How do I write a code for an f(x) equation to give its corresponding f(-x) in python?

I'm trying to write a function that accepts inputs of coefficients for polynomial f(x) and returns the corresponding f(-x). This is what I have so far:
import numpy as np
coeff = [1, -5, 2, -5]
poly = np.poly1d(coeff)
print (poly)
This prints out:
1x³ - 5x² + 2x - 5
I'm stuck here, since using poly(-x) with any numeric value of x just evaluates the whole polynomial. Is there any workaround here? What I want is for the code to form f(-x), i.e. 1(-x)³ - 5(-x)² + 2(-x) - 5, and print out:
-1x³ - 5x² - 2x - 5
Thank you!
The issue is that you are defining your polynomial by its coefficients. I would instead define the variable x as a polynomial and let the module itself handle all the manipulations.
import numpy as np
x = np.poly1d([1, 0])  # the polynomial "x"
p = x**3 - 5*x**2 + 2*x - 5
print(p(-x))
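This works because calling a poly1d with another poly1d composes the two polynomials (np.polyval documents this behavior), so p(-x) is itself a poly1d; printing it gives the desired -1x³ - 5x² - 2x - 5.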
Alternatively, work directly on the coefficient list. f(-x) flips the sign of the odd-power terms only, so negate just those coefficients (note that multiplying the whole list by -1 would give -f(x), not f(-x)):
import numpy as np
coeff = np.array([1, -5, 2, -5])
poly1 = np.poly1d(coeff)
print(poly1)
# poly1d coefficients are ordered from highest power to lowest
powers = np.arange(len(coeff) - 1, -1, -1)
poly2 = np.poly1d(coeff * (-1) ** powers)
print(poly2)

I don't receive the arrays I want, can someone help me?

I have to make a program with 4 functions. 3 of them have to be the 3 coordinates x, y, z, and the other one has to be the function which gives the np.array with all the coordinates together. But I only receive one giant array in which all the values of each coordinate appear as a single element. I don't know if I'm explaining myself well, but here is my code:
import numpy as np
def x(t):
    x = (5.25 * t)
    return x

def y(t):
    y = (-0.365 * (t**2)) + (7.15 * t) + 34
    return y

def z(t):
    z = (-0.49 * (t**2)) + (9.9 * t)
    return z

def f(t):
    f = np.array([x(t), y(t), z(t)])
    return f

t = np.arange(0, 21, 0.4275996114)
M = f(t)
print(M)
I have to print the first 50 coordinates of the ball up to time 20 seconds, but I receive the 50 values of x as if they were only one element of an array.
The question was not very clear, but I think I understood your problem and what you are asking. Basically you want to get a matrix M in which each row is a numpy array containing the three x, y, z coordinates, with as many rows as there are measurements in 20 seconds.
M = np.empty((0, 3), float)
t = np.arange(0, 21, 0.4275996114)
for time in t:
    M = np.append(M, [f(time)], axis=0)
print(M)
Explanation
First, we create what will become our matrix, specifying that it has 3 columns and starts with 0 rows. We want it to contain decimal numbers, so we specify float as the type:
M = np.empty((0, 3), float)
Then, a problem in your code is that you call the f function only once, passing the entire t array of times as an argument. You actually have to call f once for each instant of time contained in t.
To do that, loop over the elements of t and append the result of each call to f() as a row of the M matrix.
t = np.arange(0, 21, 0.4275996114)
for time in t:
    M = np.append(M, [f(time)], axis=0)
Partial Output
This output shows only the rows for times from 0 to about 3 seconds; it is just an example of the format of the output obtained:
[[ 0. 34. 0. ]
[ 2.24489796 36.9906001 4.14364385]
[ 4.48979592 39.84772596 8.10810311]
[ 6.73469388 42.57137757 11.89337776]
[ 8.97959184 45.16155495 15.49946782]
[11.2244898 47.61825808 18.92637328]
[13.46938776 49.94148697 22.17409413]
[15.71428572 52.13124162 25.24263039]]
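As an aside (an alternative not in the original answer): since f(t) already returns a (3, N) array when t is an array, a transpose gives the same matrix without a loop:

t = np.arange(0, 21, 0.4275996114)
M = f(t).T  # shape (50, 3): one row per instant; columns are x, y, z
print(M)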

2D Gaussian Fit for intensities at certain coordinates in Python

I have a set of coordinates (x, y, z(x, y)) which describe intensities (z) at coordinates x, y. For a set number of these intensities at different coordinates, I need to fit a 2D Gaussian that minimizes the mean squared error.
The data is in numpy matrices and for each fitting session I will have either 4, 9, 16 or 25 coordinates. Ultimately I just need to get the central position of the gaussian (x_0, y_0) that has smallest MSE.
All of the examples that I have found use scipy.optimize.curve_fit but the input data they have is over an entire mesh rather than a few coordinates.
Any help would be appreciated.
Introduction
There are multiple ways to approach this. You can use non-linear methods (e.g. scipy.optimize.curve_fit), but they'll be slow and aren't guaranteed to converge. You can linearize the problem (fast, unique solution), but any noise in the "tails" of the distribution will cause issues. There are actually a few tricks you can apply to this particular case to avoid the latter issue. I'll show some examples, but I don't have time to demonstrate all of the "tricks" right now.
Just as a side note, a general 2D gaussian has 6 parameters, so you won't be able to fully fit things with 4 points. However, it sounds like you might be assuming that there's no covariance between x and y and that the variances are the same in each direction (i.e. a perfectly "round" bell curve). If that's the case, then you only need four parameters. If you know the amplitude of the gaussian, you'll only need three. However, I'm going to start with the general solution; you can simplify it later on if you want to.
For the moment, let's focus on solving this problem using non-linear methods (e.g. scipy.optimize.curve_fit).
The general equation for a 2D gaussian is (directly from wikipedia):
f(x, y) = A * exp(-(a*(x - x0)**2 + 2*b*(x - x0)*(y - y0) + c*(y - y0)**2))
where the matrix [[a, b], [b, c]] is essentially 0.5 times the inverse of the covariance matrix, A is the amplitude, and (x0, y0) is the center.
Generate simplified sample data
Let's write the equation above out:
import numpy as np
import matplotlib.pyplot as plt

def gauss2d(x, y, amp, x0, y0, a, b, c):
    inner = a * (x - x0)**2
    inner += 2 * b * (x - x0) * (y - y0)  # cross term of the quadratic form
    inner += c * (y - y0)**2
    return amp * np.exp(-inner)
And then let's generate some example data. To start with, we'll generate some data that will be easy to fit:
np.random.seed(1977)  # For consistency
x, y = np.random.random((2, 10))
x0, y0 = 0.3, 0.7
amp, a, b, c = 1, 2, 3, 4
zobs = gauss2d(x, y, amp, x0, y0, a, b, c)

fig, ax = plt.subplots()
scat = ax.scatter(x, y, c=zobs, s=200)
fig.colorbar(scat)
plt.show()
Note that we haven't added any noise, and the center of the distribution is within the range that we have data (i.e. center at 0.3, 0.7 and a scatter of x,y observations between 0 and 1). For the moment, let's stick with this, and then we'll see what happens when we add noise and shift the center.
Non-linear fitting
To start with, let's use scipy.optimize.curve_fit to perform a non-linear least-squares fit to the gaussian function. (On a side note, you can play around with the exact minimization algorithm by using some of the other functions in scipy.optimize.)
The scipy.optimize functions expect a slightly different function signature than the one we originally wrote above. We could write a wrapper to "translate", but let's just re-write the gauss2d function instead:
def gauss2d(xy, amp, x0, y0, a, b, c):
    x, y = xy
    inner = a * (x - x0)**2
    inner += 2 * b * (x - x0) * (y - y0)
    inner += c * (y - y0)**2
    return amp * np.exp(-inner)
All we did was have the function expect the independent variables (x & y) as a single 2xN array.
Now we need to make an initial guess at what the guassian curve's parameters actually are. This is optional (the default is all ones, if I recall correctly), but you're likely to have problems converging if 1, 1 is not particularly close to the "true" center of the gaussian curve. For that reason, we'll use the x and y values of our largest observed z-value as a starting point for the center. I'll leave the rest of the parameters as 1, but if you know that they're likely to consistently be significantly different, change them to something more reasonable.
Here's the full, stand-alone example:
import numpy as np
import scipy.optimize as opt
import matplotlib.pyplot as plt

def main():
    x0, y0 = 0.3, 0.7
    amp, a, b, c = 1, 2, 3, 4
    true_params = [amp, x0, y0, a, b, c]
    xy, zobs = generate_example_data(10, true_params)
    x, y = xy

    # use the location of the largest observation as the initial center guess
    i = zobs.argmax()
    guess = [1, x[i], y[i], 1, 1, 1]
    pred_params, uncert_cov = opt.curve_fit(gauss2d, xy, zobs, p0=guess)

    zpred = gauss2d(xy, *pred_params)
    print('True parameters: ', true_params)
    print('Predicted params:', pred_params)
    print('Residual, RMS(obs - pred):', np.sqrt(np.mean((zobs - zpred)**2)))

    plot(xy, zobs, pred_params)
    plt.show()

def gauss2d(xy, amp, x0, y0, a, b, c):
    x, y = xy
    inner = a * (x - x0)**2
    inner += 2 * b * (x - x0) * (y - y0)
    inner += c * (y - y0)**2
    return amp * np.exp(-inner)

def generate_example_data(num, params):
    np.random.seed(1977)  # For consistency
    xy = np.random.random((2, num))
    zobs = gauss2d(xy, *params)
    return xy, zobs

def plot(xy, zobs, pred_params):
    x, y = xy
    yi, xi = np.mgrid[:1:30j, -.2:1.2:30j]
    xyi = np.vstack([xi.ravel(), yi.ravel()])

    zpred = gauss2d(xyi, *pred_params)
    zpred.shape = xi.shape

    fig, ax = plt.subplots()
    ax.scatter(x, y, c=zobs, s=200, vmin=zpred.min(), vmax=zpred.max())
    im = ax.imshow(zpred, extent=[xi.min(), xi.max(), yi.max(), yi.min()],
                   aspect='auto')
    fig.colorbar(im)
    ax.invert_yaxis()
    return fig

main()
In this case, we exactly(ish) recover our original "true" parameters.
True parameters: [1, 0.3, 0.7, 2, 3, 4]
Predicted params: [ 1. 0.3 0.7 2. 3. 4. ]
Residual, RMS(obs - pred): 1.01560615193e-16
As we'll see in a second, this won't always be the case...
Adding Noise
Let's add some noise to our observations. All I've done here is change the generate_example_data function:
def generate_example_data(num, params):
    np.random.seed(1977)  # For consistency
    xy = np.random.random((2, num))
    noise = np.random.normal(0, 0.3, num)
    zobs = gauss2d(xy, *params) + noise
    return xy, zobs
However, the resulting plot looks quite different. And as far as the parameters go:
True parameters: [1, 0.3, 0.7, 2, 3, 4]
Predicted params: [ 1.129 0.263 0.750 1.280 32.333 10.103 ]
Residual, RMS(obs - pred): 0.152444640098
The predicted center hasn't changed much, but the b and c parameters have changed quite a bit.
If we change the center of the function to somewhere slightly outside of our scatter of points:
x0, y0 = -0.3, 1.1
We'll wind up with complete nonsense as a result in the presence of noise! (It still works correctly without noise.)
True parameters: [1, -0.3, 1.1, 2, 3, 4]
Predicted params: [ 0.546 -0.939 0.857 -0.488 44.069 -4.136]
Residual, RMS(obs - pred): 0.235664449826
This is a common problem when fitting a function that decays to zero. Any noise in the "tails" can result in a very poor fit. There are a number of strategies to deal with this. One of the easiest is to weight the inversion by the observed z-values. Here's an example for the 1D case (focusing on linearizing the problem): How can I perform a least-squares fitting over multiple data sets fast? If I have time later, I'll add an example of this for the 2D case.
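In the meantime, here is a rough sketch of one way to apply such z-weighting with curve_fit itself, via its sigma argument (my assumption here; the linked answer instead weights the linearized problem):

# curve_fit treats sigma as per-point uncertainty, so smaller sigma means
# more weight; weight points with larger observed z more heavily
weights = np.clip(np.abs(zobs), 1e-3, None)  # guard against zero/negative z from noise
pred_params, uncert_cov = opt.curve_fit(gauss2d, xy, zobs, p0=guess,
                                        sigma=1.0 / weights)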
