My question is: What is the best approach to iterative polynomial multiplication in Python?
I thought an interesting project would be to write a function in Python to generate the coefficients and exponents of each term for a Chebyshev polynomial of a given degree. The recurrence that generates such a polynomial (written T_n(x)) is:
With:
T_0(x) = 1 and T_1(x) = x,
T_n(x) = 2x * T_{n-1}(x) - T_{n-2}(x)
What I have so far isn't very useful, but I am having trouble kind of wrapping my brain around how to get this going. What I want to happen is the following:
>> chebyshev(4)
[[8,4], [-8,2], [1,0]]
This list represents the Chebyshev polynomial of the 4th degree:
T_4(x) = 8x^4 - 8x^2 + 1
import sys
def chebyshev(n, a=[1,0], b=[1,1]):
z = [2,1]
result = []
if n == 0:
return a
if n == 1:
return b
print >> sys.stderr, ([z[0]*b[0],
z[1]+b[1]],
a) # This displays the proper result for n = 2
return result
The one solution I found on the web didn't work, so I am hoping someone can shed some light.
p.s. More information on Chebyshev polynomials: CSU Fullerton, Wikipedia - Chebyshev polynomials. They are very cool/useful, and tie together some really interesting trig functions/properties; worth a read.
SciPy has an implementation for Chebyshev
http://www.scipy.org/doc/api_docs/SciPy.special.orthogonal.html
I would suggest looking at their code.
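In the meantime, here is a minimal sketch (not SciPy's code) of the iterative approach: represent each polynomial as a list of coefficients indexed by exponent, and apply the recurrence T_n = 2x*T_{n-1} - T_{n-2} directly.

def chebyshev_coeffs(n):
    # T0 = 1 and T1 = x, as lists of coefficients indexed by exponent
    t_prev, t_curr = [1], [0, 1]
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        # multiply t_curr by 2x: shift exponents up by one, double each coefficient
        shifted = [0] + [2 * c for c in t_curr]
        # subtract t_prev, padding it with zeros to the same length
        padded = t_prev + [0] * (len(shifted) - len(t_prev))
        t_prev, t_curr = t_curr, [s - p for s, p in zip(shifted, padded)]
    return t_curr

# [coeff, exponent] pairs for the nonzero terms, highest degree first
print([[c, e] for e, c in reversed(list(enumerate(chebyshev_coeffs(4)))) if c])
# -> [[8, 4], [-8, 2], [1, 0]]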
The best implementation for Chebyshev is:
// Computes T_n(x), with -1 <= x <= 1
real T( int n, real x )
{
return cos( n*acos(x) ) ;
}
If you test this against other implementations, including explicit polynomial evaluation and iteratively computing the recurrence relation, this is actually just as fast. Try it yourself.
Generally:
Explicit polynomial evaluation is the worst (for large n)
Recursive evaluation is a little better
Cosine evaluation is the best
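A quick way to convince yourself of the equivalence (a sanity check, not a benchmark): compare the closed form cos(n*acos(x)) against the recurrence for a few values of n.

from math import cos, acos

def T_closed(n, x):
    return cos(n * acos(x))    # valid for -1 <= x <= 1

def T_recur(n, x):
    a, b = 1.0, x              # T0, T1
    for _ in range(n):
        a, b = b, 2 * x * b - a
    return a

x = 0.3
for n in range(6):
    print(n, T_closed(n, x), T_recur(n, x))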
orthopy (a project of mine) also supports computation of Chebyshev polynomials. With
import orthopy
# from sympy.abc import x
x = 0.5
normalization = "normal" # or "classical", "monic"
evaluator = orthopy.c1.chebyshev1.Eval(x, normalization)
for _ in range(10):
print(next(evaluator))
0.5641895835477564
0.39894228040143276
-0.39894228040143265
...
you get the values of the polynomials with increasing degree at x = 0.5. You can use a list/vector of multiple values, or even sympy symbolics.
Computation happens with recurrence relations of course. If you're interested in the coefficients, check out
rc = orthopy.c1.chebyshev1.RecurrenceCoefficients("monic", symbolic=True)
I am trying to find the probability that a random variable exceeds a specific value, i.e. pr(x > a), where a is some constant, typically much higher than the average of x, and x does not follow any standard Gaussian distribution. So I wanted to fit some other probability density function and take the integral of the pdf of x from a to inf. As this is a problem of modelling the spikes, I considered it an Extreme Value analysis problem, and found that the Weibull distribution might be appropriate.
Regarding extreme value distributions, the Weibull distribution has a very "not-easy-to-implement" integral, and I therefore figured I could just get the pdf from Scipy and do a Riemann sum. I also thought I could simply evaluate the kernel density, get the pdf, and approximate the integral with the same Riemann sum.
I found a Q here on Stack which provided a neat method for doing Riemann sums in Python, and I adapted that code to fit my problem. But when I evaluate the integral I get weird numbers, indicating that something is wrong with either the KDE or the Riemann sum function.
Two scenarios, the first with the Weibull, in accordance with the Scipy documentation:
import numpy as np
import scipy.stats as ss

x = theData
x_grid = np.linspace(0, np.max(x), len(x))
p = ss.weibull_min.fit(x[x != 0], floc=0)  # returns (shape, loc fixed at 0, scale)
pd = ss.weibull_min.pdf(x_grid, p[0], p[1], p[2])
which produces the fitted Weibull pdf (shown as a plot in the original post). I then also tried the KDE method as follows:
pd = ss.gaussian_kde(x).pdf(x_grid)
which I subsequently run through the following function:
def riemannSum(a, b, n):
dx = (b - a) / n
s = 0.0
x = a
for i in range(n):
s += pd[x]
x += dx
return s * dx
print(riemannSum(950.0, 1612.0, 10000))
print(riemannSum(0.0, 1612.0, 100000))
In the case of the Weibull, it gives me
>> 0.272502150549
>> 18.2860384829
and in the case of the KDE, I get
>> 0.448450460469
>> 18.2796021034
This is obviously wrong. Taking the integral of the entire thing should give me 1, and 18.2+ is quite far off.
Am I wrong in my assumptions of what I can do with these density functions? Or have I made some mistake in the Riemann sum function?
the Weibull distribution has a very "not-easy-to-implement" integral
Huh?!
The Weibull distribution has a very well-defined CDF, so implementing the integral is pretty much a one-liner (ok, make it two for clarity):
from math import exp

def WeibullCDF(x, lmbd, k):
    q = pow(x / lmbd, k)   # (x/lambda)^k
    return 1.0 - exp(-q)
And, of course, there is ss.weibull_min.cdf(x_grid, p[0], p[1], p[2]) if you want to pick it from the library.
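And for the original goal of pr(x > a), no integration is needed at all: SciPy's fitted distributions expose the survival function 1 - CDF directly. A sketch reusing the question's fitted parameters p:

import scipy.stats as ss
# p = ss.weibull_min.fit(x[x != 0], floc=0), as in the question (assumed here)
tail = ss.weibull_min.sf(950.0, p[0], p[1], p[2])  # pr(x > 950) = 1 - cdf(950)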
I know there is an accepted answer that worked for you, but I stumbled across this while looking to see how to do a Riemann sum of a probability density, and others may too, so I will give this a go.
Basically, I think you had (what is now) an older version of numpy that allowed floating-point indexing, so your pd variable pointed to an array of pdf values corresponding to the values at x_grid. Nowadays numpy raises an error when you try to use a floating-point index, but since yours didn't, you were simply reading the pdf at whatever grid position the truncated index happened to land on. What you needed to do was calculate the pdf at the new values you wanted to use in your Riemann sum.
I edited the code from the question to create a method that works for calculating the integral of the pdf.
import numpy as np
from scipy.stats import weibull_min

def riemannSum(a, b, n):
    # p holds the fitted Weibull parameters (shape, loc, scale) from the question
    dx = (b - a) / n
    s = 0.0
    pd = weibull_min.pdf(np.linspace(a, b, n), p[0], p[1], p[2])
    for i in range(n):
        s += pd[i]
    return s * dx
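As a cross-check on the corrected sum, numpy's trapezoidal rule gives essentially the same integral without the explicit loop (again assuming p holds the fitted parameters from the question):

import numpy as np
from scipy.stats import weibull_min

grid = np.linspace(0.0, 1612.0, 100000)
print(np.trapz(weibull_min.pdf(grid, p[0], p[1], p[2]), grid))  # should be close to 1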
The Riemann implementation below can also be used (it uses Java instead of Python, sorry).
import static java.lang.Math.exp;
import static java.lang.Math.pow;
import java.util.Optional;
import java.util.function.BiFunction;
import java.util.function.BinaryOperator;
import java.util.function.Function;
import java.util.stream.IntStream;
public class WeibullPDF
{
public interface Riemann extends BiFunction<Function<Double, Double>, Integer,
BinaryOperator<Double>> { }
public static void main(String args[])
{
int N=100000;
Riemann s = (f, n) -> (a, b) ->
    IntStream.range(0, n)
        .mapToDouble(i -> f.apply(a + i * ((b - a) / n)) * ((b - a) / n)).sum();
double k=1.5;
Optional<Double> weibull =
Optional.of(s.apply(x->k*pow(x,k-1)*exp(-pow(x,k)),N).apply(0.0,1612.0));
weibull.ifPresent(System.out::println); //prints 0.9993617886716168
}
}
Imagine a simulation experiment in which the output is n total numbers, where k of them are sampled from an exponential random variable with rate a and n-k are sampled from an exponential random variable with rate b. The constraints are that 0 < a ≤ b and 0 ≤ k ≤ n, but a, b, and k are all unknown. Further, because of details of the simulation experiment, when a << b, k ≈ 0, and when a = b, k ≈ n/2.
My goal is to estimate either a or b (I don't care about k, and I don't need to estimate both a and b: just one of the two is fine). From speculation, it seems as though estimating just b might be the easiest path (when a << b, there is pretty much nothing to use to estimate a and plenty to estimate b, and when a = b, there is still plenty to estimate b). I want to do it in Python ideally, but I am open to any free software.
My first approach was to use scipy.optimize to optimize a likelihood function where, for each number in my dataset, I compute P(X=x) for an exponential with rate a, compute the same for an exponential with rate b, and simply choose the larger of the two:
from sys import stdin
from math import exp,log
from scipy.optimize import fmin
DATA = None
def pdf(x,l): # compute P(X=x) for an exponential rv X with rate l
return l*exp(-1*l*x)
def logML(X,la,lb): # compute the log-ML of data points X given two exponentials with rates la and lb where la < lb
ml = 0.0
for x in X:
ml += log(max(pdf(x,la),pdf(x,lb)))
return ml
def f(x): # objective function to minimize
assert DATA is not None, "DATA cannot be None"
la,lb = x
if la > lb: # force la <= lb
return float('inf')
elif la <= 0 or lb <= 0:
return float('inf') # force la and lb > 0
return -1*logML(DATA,la,lb)
if __name__ == "__main__":
DATA = [float(x) for x in stdin.read().split()] # read input data
Xbar = sum(DATA)/len(DATA) # compute mean
x0 = [1/Xbar,1/Xbar] # start with la = lb = 1/mean
    result = fmin(f, x0, disp=False)  # run the simplex search with output suppressed
print("ML Rates: la = %f and lb = %f" % tuple(result))
This unfortunately didn't work very well. For some selections of the parameters, it's within an order of magnitude, but for others, it's absurdly off. Given my problem (with its constraints) and my goal of estimating the larger parameter of the two exponentials (without caring about the smaller parameter nor the number of points that came from either), any ideas?
I posted the question in more general statistical terms on the stats Stack Exchange, and it got an answer:
https://stats.stackexchange.com/questions/291642/how-to-estimate-parameters-of-mixture-of-2-exponential-random-variables-ideally
Also, I tried the following, which worked decently well:
First, for every integer percentile (1st percentile, 2nd percentile, ..., 99th percentile), I compute the estimate of b using the closed-form quantile equation for an exponential distribution (the q-th quantile is the (q*100)-th percentile, and the q-th quantile = -ln(1 - q) / λ, so λ = -ln(1 - q) / (q-th quantile)). The result is a list where each i-th element corresponds to the b estimate using the (i+1)-th percentile.
Then, I perform peak-calling on this list using the Python implementation of the Matlab peak-calling function. Then, I take the list of resulting peaks and return the minimum. It seems to work fairly well.
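For reference, a sketch of that procedure using scipy.signal.find_peaks (available in newer SciPy) in place of the Matlab-style peak caller; here data stands in for the sample, and the details of the original peak caller differ:

import numpy as np
from scipy.signal import find_peaks

qs = np.arange(1, 100)                                   # 1st..99th percentiles
lam = -np.log(1 - qs / 100.0) / np.percentile(data, qs)  # per-percentile estimates of b
peaks, _ = find_peaks(lam)                               # indices of local maxima
b_est = lam[peaks].min() if peaks.size else lam.max()    # minimum over the peaks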
I will implement the EM solution in the Stack Exchange post as well and see which works better.
EDIT: I implemented the EM solution, and it seems to work decently well in my simulations (n = 1000, various a and b).
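For reference, a minimal EM sketch for this two-exponential mixture (these are the standard EM updates for exponential mixtures, not necessarily the exact code I ran):

import numpy as np

def em_two_exponentials(x, iters=200):
    x = np.asarray(x, dtype=float)
    la, lb, w = 0.5 / x.mean(), 2.0 / x.mean(), 0.5   # rough starting guesses
    for _ in range(iters):
        pa = w * la * np.exp(-la * x)            # weighted density under rate la
        pb = (1 - w) * lb * np.exp(-lb * x)      # weighted density under rate lb
        r = pa / (pa + pb)                       # E-step: responsibility of component a
        w = r.mean()                             # M-step: mixing weight
        la = r.sum() / (r * x).sum()             # M-step: weighted MLE of each rate
        lb = (1 - r).sum() / ((1 - r) * x).sum()
    return (la, lb) if la <= lb else (lb, la)    # enforce a <= b ordering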
I have used numpy's polyfit and obtained a very good fit (using a 7th order polynomial) for two arrays, x and y. My relationship is thus;
y(x) = p[0]* x^7 + p[1]*x^6 + p[2]*x^5 + p[3]*x^4 + p[4]*x^3 + p[5]*x^2 + p[6]*x^1 + p[7]
where p is the polynomial array output by polyfit.
Is there a way to reverse this method easily, so I have a solution in the form of,
x(y) = p[0]*y^n + p[1]*y^(n-1) + ... + p[n]*y^0
No, there is no easy way in general. Closed-form solutions for the roots of arbitrary polynomials do not exist for degree five and above, let alone degree seven.
Doing the fit in the reverse direction is possible, but only on monotonically varying regions of the original polynomial. If the original polynomial has minima or maxima on the domain you are interested in, then even though y is a function of x, x cannot be a function of y because there is no 1-to-1 relation between them.
If you are (i) OK with redoing the fitting procedure, and (ii) OK with working piecewise on single monotonic regions of your fit at a time, then you could do something like this:
import numpy as np
# generate a random coefficient vector a
degree = 1
a = 2 * np.random.random(degree+1) - 1
# an assumed true polynomial y(x)
def y_of_x(x, coeff_vector):
"""
Evaluate a polynomial with coeff_vector and degree len(coeff_vector)-1 using Horner's method.
Coefficients are ordered by increasing degree, from the constant term at coeff_vector[0],
to the linear term at coeff_vector[1], to the n-th degree term at coeff_vector[n]
"""
    coeff_rev = coeff_vector[::-1]
    b = 0
    for c in coeff_rev:  # 'c', not 'a', to avoid shadowing the module-level coefficient vector
        b = b * x + c
    return b
# generate some data
my_x = np.arange(-1, 1, 0.01)
my_y = y_of_x(my_x, a)
# verify that polyfit in the "traditional" direction gives the correct result
# [::-1] b/c polyfit returns coeffs in backwards order rel. to y_of_x()
p_test = np.polyfit(my_x, my_y, deg=degree)[::-1]
print p_test, a
# fit the data using polyfit but with y as the independent var, x as the dependent var
p = np.polyfit(my_y, my_x, deg=degree)[::-1]
# define x as a function of y
def x_of_y(yy, a):
return y_of_x(yy, a)
# compare results
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(my_x, my_y, '-b', x_of_y(my_y, p), my_y, '-r')
Note: this code does not check for monotonicity but simply assumes it.
By playing around with the value of degree, you should see that the code only works well for all random values of a when degree=1. It occasionally does OK for other degrees, but not when there are lots of minima/maxima. It never does perfectly for degree > 1 because approximating parabolas with square-root functions doesn't always work, etc.
I want to solve this kind of problem:
dy/dt = 0.01*y*(1-y), find t when y = 0.8 (0<t<3000)
I've tried the ode function in Python, but it can only calculate y when t is given.
So are there any simple ways to solve this problem in Python?
PS: This function is just a simple example. My real problem is so complex that it can't be solved analytically. So I want to know how to solve it numerically. And I think this problem is more like an optimization problem:
Objective function y(t) = 0.8, Subject to dy/dt = 0.01*y*(1-y), and 0<t<3000
PPS: My real problem is:
objective function: F(t) = 0.85,
subject to: F(t) = sqrt(x(t)^2+y(t)^2+z(t)^2),
x''(t) = (1/F(t)-1)*250*x(t),
y''(t) = (1/F(t)-1)*250*y(t),
z''(t) = (1/F(t)-1)*250*z(t)-10,
x(0) = 0, y(0) = 0, z(0) = 0.7,
x'(0) = 0.1, y'(0) = 1.5, z'(0) = 0,
0<t<5
This differential equation can be solved analytically quite easily:
dy/dt = 0.01 * y * (1-y)
rearrange to gather y and t terms on opposite sides
0.01 dt = 1/(y * (1-y)) dy
The lhs integrates trivially to 0.01 * t; the rhs is slightly more complicated. We can always decompose such a quotient into a sum of simpler quotients with constant numerators (partial fractions):
1/(y * (1-y)) = A/y + B/(1-y)
The values for A and B can be worked out by putting the rhs on the same denominator and comparing constant and first order y terms on both sides. In this case it is simple, A=B=1. Thus we have to integrate
1/y + 1/(1-y) dy
The first term integrates to ln(y); the second term can be integrated with a change of variables u = 1-y, giving -ln(1-y). Our integrated equation therefore looks like:
0.01 * t + C = ln(y) - ln(1-y)
not forgetting the constant of integration (it is convenient to write it on the lhs here). We can combine the two logarithm terms:
0.01 * t + C = ln( y / (1-y) )
In order to solve t for an exact value of y, we first need to work out the value of C. We do this using the initial conditions. It is clear that if y starts at 1, dy/dt = 0 and the value of y never changes. Thus plug in the values for y and t at the beginning
0.01 * 0 + C = ln( y(0) / (1 - y(0)) )
This will give a value for C (assuming y(0) is not 0 or 1); then use y = 0.8 to get a value for t. Note that because of the logarithm, t remains moderate unless the initial value of y is incredibly small: with y(0) = 0.5, for example, C = 0 and t = 100 * ln(4) ≈ 138.6 when y reaches 0.8. It is of course also straightforward to rearrange the equation above to express y in terms of t, so you can plot the function as well.
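As a quick numerical check of the closed form (assuming y(0) = 0.5, so that C = 0):

from math import log

y0, y = 0.5, 0.8
C = log(y0 / (1 - y0))                # = 0 for y0 = 0.5
t = 100.0 * (log(y / (1 - y)) - C)    # invert 0.01*t + C = ln(y/(1-y))
print(t)                              # ~138.63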
Edit: Numerical integration
For a more complex ODE which cannot be solved analytically, you will have to integrate numerically. Initially we only know the value of the function at zero time, y(0) (we have to know at least that in order to uniquely define the trajectory of the function), and how to evaluate the gradient. The idea of numerical integration is that we can use our knowledge of the gradient (which tells us how the function is changing) to work out what the value of the function will be in the vicinity of our starting point. The simplest way to do this is Euler integration:
y(dt) = y(0) + dy/dt * dt
Euler integration assumes that the gradient is constant between t=0 and t=dt. Once y(dt) is known, the gradient can be calculated there also and in turn used to calculate y(2 * dt) and so on, gradually building up the complete trajectory of the function. If you are looking for a particular target value, just wait until the trajectory goes past that value, then interpolate between the last two positions to get the precise t.
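A minimal sketch of this scheme for the example ODE, assuming y(0) = 0.5:

dt = 0.01                          # step size; smaller is more accurate but slower
t, y = 0.0, 0.5                    # assumed initial condition
while y < 0.8 and t < 3000:
    t_prev, y_prev = t, y
    y += 0.01 * y * (1 - y) * dt   # Euler step: y(t+dt) ~ y(t) + y'(t)*dt
    t += dt
if y >= 0.8:
    # linear interpolation between the last two points for a better estimate of t
    print(t_prev + dt * (0.8 - y_prev) / (y - y_prev))   # ~138.6 for this start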
The problem with Euler integration (and with all other numerical integration methods) is that its results are only accurate when its assumptions are valid. Because the gradient is not constant between pairs of time points, a certain amount of error will arise for each integration step, which over time will build up until the answer is completely inaccurate. In order to improve the quality of the integration, it is necessary to use more sophisticated approximations to the gradient. Check out for example the Runge-Kutta methods, which are a family of integrators which remove progressive orders of error term at the cost of increased computation time. If your function is differentiable, knowing the second or even third derivatives can also be used to reduce the integration error.
Fortunately of course, somebody else has done the hard work here, and you don't have to worry too much about solving problems like numerical stability or have an in depth understanding of all the details (although understanding roughly what is going on helps a lot). Check out http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode for an example of an integrator class which you should be able to use straightaway. For instance
from scipy.integrate import ode
def deriv(t, y):
return 0.01 * y * (1 - y)
my_integrator = ode(deriv)
my_integrator.set_initial_value(0.5)
t = 0.1 # start with a small value of time
while t < 3000:
y = my_integrator.integrate(t)
if y > 0.8:
print "y(%f) = %f" % (t, y)
break
t += 0.1
This code will print out the first t value when y passes 0.8 (or nothing if it never reaches 0.8). If you want a more accurate value of t, keep the y of the previous t as well and interpolate between them.
As an addition to Krastanov's answer:
Aside from PyDSTool, there are other packages, like Pysundials and Assimulo, which provide bindings to the solver IDA from Sundials. This solver has root-finding capabilities.
Use scipy.integrate.odeint to handle your integration, and analyse the results afterward.
import numpy as np
from scipy.integrate import odeint
ts = np.arange(0,3000,1) # time series - start, stop, step
def rhs(y,t):
return 0.01*y*(1-y)
y0 = np.array([0.5]) # initial value (y0 = 1 would sit at a fixed point of this ODE)
ys = odeint(rhs,y0,ts)
Then analyse the numpy array ys to find your answer (the dimensions of ts match those of ys). (This may not work the first time because I am constructing it from memory.)
This might involve using the scipy interpolate function for the ys array, such that you get a result at time t.
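For instance, since this trajectory is monotonically increasing (for 0 < y0 < 1), a single linear interpolation recovers the crossing time; a sketch reusing ts and ys from the block above:

t_hit = np.interp(0.8, ys[:, 0], ts)  # t at which y first equals 0.8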
EDIT: I see that you wish to solve a spring in 3D. This should be fine with the above method; the odeint documentation on the scipy website has examples for systems such as coupled springs, and these could be extended.
What you are asking for is an ODE integrator with root-finding capabilities. They exist, and the low-level code for such integrators is supplied with scipy, but they have not yet been wrapped in python bindings.
For more information see this mailing list post that provides a few alternatives: http://mail.scipy.org/pipermail/scipy-user/2010-March/024890.html
You can use the following example implementation which uses backtracking (hence it is not optimal as it is a bolt-on addition to an integrator that does not have root finding on its own): https://github.com/scipy/scipy/pull/4904/files
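For what it's worth, newer SciPy releases (1.0 and later) ship solve_ivp, which has built-in event detection and so provides exactly this root-finding capability; a sketch for the example ODE:

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    return 0.01 * y * (1 - y)

def hit_08(t, y):
    return y[0] - 0.8        # root when y crosses 0.8

hit_08.terminal = True       # stop integrating at the first crossing

sol = solve_ivp(rhs, (0, 3000), [0.5], events=hit_08)
print(sol.t_events[0])       # time(s) at which y = 0.8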
I have a range of data that I have approximated using a polynomial of degree 2 in Python. I want to calculate the area underneath this polynomial between 0 and 1.
Is there a calculus, or similar package from numpy that I can use, or should I just make a simple function to integrate these functions?
I'm a little unclear what the best approach for defining mathematical functions is.
Thanks.
If you're integrating only polynomials, you don't need to represent a general mathematical function; use numpy.poly1d, which has an integ method for integration.
>>> import numpy
>>> p = numpy.poly1d([2, 4, 6])
>>> print p
2
2 x + 4 x + 6
>>> i = p.integ()
>>> i
poly1d([ 0.66666667, 2. , 6. , 0. ])
>>> integrand = i(1) - i(0) # Use call notation to evaluate a poly1d
>>> integrand
8.6666666666666661
For integrating arbitrary numerical functions, you would use scipy.integrate, passing ordinary Python functions as integrands. For integrating functions analytically, you would use sympy. It doesn't sound like you want either in this case, especially not the latter.
Look, Ma, no imports!
>>> coeffs = [2., 4., 6.]
>>> sum(coeff / (i+1) for i, coeff in enumerate(reversed(coeffs)))
8.6666666666666661
>>>
Our guarantee: Works for a polynomial of any positive degree or your money back!
Update from our research lab: Guarantee extended; s/positive/non-negative/ :-)
Update: Here's the industrial-strength version that is robust in the face of stray ints in the coefficients, has no function call in the loop, and uses neither enumerate() nor reversed() in the setup:
>>> icoeffs = [2, 4, 6]
>>> tot = 0.0
>>> divisor = float(len(icoeffs))
>>> for coeff in icoeffs:
... tot += coeff / divisor
... divisor -= 1.0
...
>>> tot
8.6666666666666661
>>>
It might be overkill to resort to general-purpose numeric integration algorithms for your special case...if you work out the algebra, there's a simple expression that gives you the area.
You have a polynomial of degree 2: f(x) = ax^2 + bx + c
You want to find the area under the curve for x in the range [0,1].
The antiderivative is F(x) = ax^3/3 + bx^2/2 + cx + C
The area under the curve from 0 to 1 is: F(1) - F(0) = a/3 + b/2 + c
So if you're only calculating the area for the interval [0,1], you might consider using this simple expression rather than resorting to the general-purpose methods.
'quad' in scipy.integrate is the general purpose method for integrating functions of a single variable over a definite interval. In a simple case (such as the one described in your question) you pass in your function and the lower and upper limits, respectively. 'quad' returns a tuple comprised of the integral result and an upper bound on the error term.
from scipy import integrate as TG
fnx = lambda x: 3*x**2 + 9*x # some polynomial of degree two
aoc, err = TG.quad(fnx, 0, 1)
Note: after I posted this, I saw an answer posted before mine which represents polynomials using 'poly1d' in NumPy. My scriptlet just above can also accept a polynomial in this form:
import numpy as NP
px = NP.poly1d([2,4,6])
aoc, err = TG.quad(px, 0, 1)
# returns (8.6666666666666661, 9.6219328800846896e-14)
If one is integrating quadratic or cubic polynomials from the get-go, an alternative to deriving the explicit integral expressions is to use Simpson's rule; it is a deep fact that this method exactly integrates polynomials of degree 3 and lower.
To borrow Mike Graham's example (I haven't used Python in a while; apologies if the code looks wonky):
>>> import numpy
>>> p = numpy.poly1d([2, 4, 6])
>>> print p
2
2 x + 4 x + 6
>>> integrand = (1 - 0)*(p(0) + 4*p((0 + 1)/2.0) + p(1))/6.0
uses Simpson's rule to compute the value of integrand. You can verify for yourself that the method works as advertised.
I deliberately did not simplify the expression for integrand, to indicate that the 0 and 1 can be replaced with arbitrary values u and v, and the code will still find the integral of the function from u to v.
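In other words (a sketch of that generalization):

def simpson(f, u, v):
    # Simpson's rule on [u, v]; exact for polynomials of degree <= 3
    return (v - u) * (f(u) + 4.0 * f((u + v) / 2.0) + f(v)) / 6.0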