I am trying to use scipy.optimize.fsolve to work out the x-intercept(s):
from scipy.optimize import fsolve
from numpy import array, empty
counter = 0
def f(x_):
    global counter
    counter += 1
    return pow(x_, 3) * 3 - 9.5 * pow(x_, 2) + 10 * x_
x0_ = empty(2)
x0_[0] = 1
x0_[1] = 6
res = fsolve(f, x0=x0_)
print(counter)
print(res)
The function f(x) is plotted here: https://www.desmos.com/calculator/8j8djr01da
The result of this code is:
74
[0. 0.]
I expect the result to be
[0, 1.575, 3.175]
Can someone please offer some help?
Plus:
I can't understand the documentation of fsolve's x0 parameter: is it just a guess? I would really appreciate it if you could explain.
Plus Plus:
I will be working with lots of equations with unknown expressions and exponentials, and I am really looking for a way to work out the x-intercepts, in other words the roots, given the expression of f(x). I would be so glad if you can help.
You get the set of all roots for a polynomial by
numpy.roots([3, -9.5, +10, 0])
array([1.58333333+0.90905934j, 1.58333333-0.90905934j,
0. +0.j ])
It is not clear what your other expected real roots are; fsolve will only find the real root 0.
Of course, if you take the coefficients that you used in the Desmos graphing tool
numpy.roots([2, -9.5, +10, 0])
you will actually get the expected
array([3.17539053, 1.57460947, 0. ])
For scalar non-polynomial functions the interface scipy.optimize.root_scalar is perhaps more suitable, especially if you can provide a bracketing interval.
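For example, a minimal sketch (the brackets are my own, read off the graph of the Desmos coefficients) of how root_scalar pins down each real root from a sign change:
from scipy.optimize import root_scalar

def f(x):
    # coefficients from the Desmos graph: 2x^3 - 9.5x^2 + 10x
    return 2 * x**3 - 9.5 * x**2 + 10 * x

# each bracket contains exactly one sign change of f
for bracket in [(-1, 0.5), (1, 2), (3, 4)]:
    sol = root_scalar(f, bracket=bracket)
    print(sol.root)  # 0.0, ~1.5746, ~3.1754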
I just want to point out that in the first step you defined your function incorrectly:
it should be
def f(x_):
    # global counter
    # counter += 1
    return pow(x_, 3) * 2 - 9.5 * pow(x_, 2) + 10 * x_
but not pow(x_, 3) * 3 - 9.5 * pow(x_, 2) + 10 * x_
If you then set x0_ precisely:
x0_ = [0, 1, 3]  # according to the intersections on the graph
res = fsolve(f, x0=x0_)
you get the anticipated output:
[0. 1.57460947 3.17539053]
Sometimes you just have to be more careful :)
I do not understand why polynomial.Polynomial.fit() gives coefficients very different from the expected coefficients:
import numpy as np
x = np.linspace(0, 10, 50)
y = x**2 + 5 * x + 10
print(np.polyfit(x, y, 2))
print(np.polynomial.polynomial.polyfit(x, y, 2))
print(np.polynomial.polynomial.Polynomial.fit(x, y, 2))
Gives:
[ 1. 5. 10.]
[10. 5. 1.]
poly([60. 75. 25.])
The first two results are OK, and thanks to this answer I understand why the two arrays are in reversed order.
However, I do not understand the meaning of the third result. The coefficients look wrong, though the polynomial that I got this way seems to give correct predicted values.
The answer is slightly hidden in the docs, of course. Look at the class numpy.polynomial.polynomial.Polynomial(coef, domain=None, window=None):
It is clear that in general the coefficients [a, b, c, ...] are for the polynomial a + b * x + c * x**2 + .... However, there are the keyword parameters domain and window, both with default [-1, 1]. I am not deeply familiar with that class, so I am not sure of their purpose, but it is clear that a remapping takes place. Now, polynomial.Polynomial.fit() is a class method that automatically takes the x data as the domain, but still makes the mapping to the window. Hence, in the OP [0, 10] is mapped onto [-1, 1]. This is done by x' = x / 5 - 1, i.e. x = 5 * x' + 5. Putting the latter into the OP polynomial we get
( 5 x' + 5 )**2 + 5 * ( 5 * x' + 5 ) + 10 = 25 * x'**2 + 75 * x' + 60
Voila.
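A quick numerical check of this remapping (a sketch, assuming the fitted coefficients above):
import numpy as np
from numpy.polynomial.polynomial import Polynomial

p = Polynomial([60., 75., 25.])               # coefficients in the window variable x'
x = np.linspace(0, 10, 50)
xp = x / 5 - 1                                # map the data domain [0, 10] onto the window [-1, 1]
print(np.allclose(p(xp), x**2 + 5 * x + 10))  # True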
To get the expected result one has to put
print(np.polynomial.polynomial.Polynomial.fit(x, y, 2, window=[0, 10] ) )
which gives
poly([10. 5. 1.])
Buried in the docs:
Note that the coefficients are given in the scaled domain defined by the linear mapping between the window and domain. convert can be used to get the coefficients in the unscaled data domain.
So use:
poly.convert()
This will rescale your coefficients to what you are probably expecting.
Example for data generated from 1 + 2x + 3x^2:
from numpy.polynomial import Polynomial

test_poly = Polynomial.fit([0, 1, 2, 3, 4, 5],
                           [1, 6, 17, 34, 57, 86],
                           2)
print(test_poly)
print(test_poly.convert())
Output:
poly([24.75 42.5 18.75])
poly([1. 2. 3.])
I have a set of data; each column corresponds to a spectrum at a certain time. I want to fit the spectrum at a generic time (t_i) as a linear combination of the spectrum at time 0 (in the first column), at time 5 (in column 30) and time 35 (in column 210). So the equation I want to fit is:
S(t_i) = a * S(t_0) + b * S(t_5) + c * S(t_35)
where:
0 <= a, b, c <= 1
a + b + c = 1
I found the solution at this question (Minimizing Least Squares with Algebraic Constraints and Bounds) super useful. But when I try it with my set of data the results are obviously wrong. I tried changing the method to 'Nelder-Mead', but it doesn't respect my bounds, so I get negative values.
This is my script:
t0 = df.iloc[:, 0]    # Spectrum at time 0
t5 = df.iloc[:, 30]   # Spectrum at time 5
t35 = df.iloc[:, 120] # Spectrum at time 35
ti = df.iloc[:, 20]
# Bounds that make every coefficient be between 0 and 1
bnds = [(0, 1), (0, 1), (0, 1)]
# Constrain the sum of the coefficients to 1
cons = [{"type": "eq", "fun": lambda x: x[0] + x[1] + x[2] - 1}]
xinit = np.array([1, 0, 0])
fun = lambda x: np.sum((ti - (x[0] * t0 + x[1] * t5 + x[2] * t35))**2)
res = minimize(fun, xinit, method='Nelder-Mead', bounds=bnds, constraints=cons)
print(res.x)
If I use the Nelder-Mead method I get [ 0.02732053  1.01961422 -0.04504698]; if I don't specify the method I get [1. 0. 0.] (I believe that in this case the SLSQP method is being used).
The data I'm referring to is similar to the following:
0 3.333 5 35.001
0.001045089 0.001109701 0.001169798 0.000725486
0.001083051 0.001138815 0.001176665 0.000713021
0.001090994 0.001142676 0.001186642 0.000716149
0.001096258 0.001156476 0.001190218 0.00071286
Can you identify the problem? Can you suggest other ways to solve this problem? I have also tried using least_squares, but it failed.
The result of a local optimization strongly depends on the initial values.
It might return [1, 0, 0] for the case you stated above because there simply was no possibility for the optimizer to find a "downhill-only" way to [0. 1. 0.].
In fact, you might have started in a local minimum and all ways out of the dip went uphill. So the optimizer chose to stay. That's how these optimizers work.
Try
xinit = np.array([0.0, 1.0, 0.0])
for t_i = t5 and I am quite sure the optimizer will return the initial value.
For your case, do what I stated here: run the optimizer several times, each time picking random initial values inside your boundaries. You can take the code posted there and just add your constraints; use SLSQP or trust-constr.
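A minimal sketch of that multi-start idea (the spectra here are synthetic stand-ins; swap in your real t0, t5, t35 and ti columns):
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t0, t5, t35 = rng.random(100), rng.random(100), rng.random(100)  # placeholder spectra
ti = 0.2 * t0 + 0.7 * t5 + 0.1 * t35                             # target with a known mixture

fun = lambda x: np.sum((ti - (x[0] * t0 + x[1] * t5 + x[2] * t35))**2)
bnds = [(0, 1)] * 3
cons = [{"type": "eq", "fun": lambda x: x.sum() - 1}]

best = None
for _ in range(20):
    xinit = rng.dirichlet(np.ones(3))  # random start that already sums to 1
    res = minimize(fun, xinit, method="SLSQP", bounds=bnds, constraints=cons)
    if res.success and (best is None or res.fun < best.fun):
        best = res
print(best.x)  # should recover roughly [0.2, 0.7, 0.1]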
Given this function:
def f(x):
    return (1 - x**2)**m * ((1 - x)/2)**n
where m and n are constants, let's say both 0.5 for the sake of an example.
I'm trying to use functions from scipy.optimize to solve for x given a value of y. I'm only interested in x values from -1 to 1. Plotting the function with
x = numpy.arange(0, 1, 0.1)
matplotlib.pyplot.plot(x, f(x))
shows that the function is a kind of distorted parabola covering the range from about 0 to 0.65. So let's try solving it for y = 0.3:
def f(x):
    return (1 - x**2)**m * ((1 - x)/2)**n - 0.3

print(scipy.optimize.newton_krylov(f, 0.5))
0.6718791645800665
This looks about right for one of the possible solutions. But there are two. The second should be around -0.9. Try as I might with the initial guess, I can't get it to find this second solution. The Newton-Krylov method gives no convergence at all for initial guesses below 0, and none of the other solvers can find it either.
Am I missing something? What am I doing wrong?
The method converges at least for x=-0.9:
scipy.optimize.newton_krylov(f, -0.9)
#array(-0.9527983).
It diverges for x approximately in [-0.85...0.06].
This is because newton_krylov uses the Jacobian of the function. This makes it a gradient descent method, so your solutions always converge to a local minimum. Furthermore, because your function is parabolic, you have a very interesting option!
The first step is to find the maximum of f(x) and split your search domain in two. Next you can make an initial guess in each domain and solve with newton_krylov.
import numpy
from scipy.optimize import minimize, newton_krylov

m = n = 0.5  # constants, as in the question

def f(x):
    # Here is our function
    return (1 - x**2)**m * ((1 - x)/2)**n

def minf(x):
    # Negated f, so that minimizing finds the maximum of f
    return -f(x)

def fy(x):
    # Distance from the target y value of 0.3
    return abs(f(x) - .3)

if __name__ == "__main__":
    x = numpy.arange(-1., 1., 1e-3, dtype=float)
    # pyplot.plot(x, f(x))
    # pyplot.show()
    minx = minimize(minf, 0.0)['x']
    # Make an initial guess in each domain, on either side of the maximum
    a1 = minx - 1.6 * minx
    a2 = minx + 1.6 * minx
    print(newton_krylov(fy, a1))
    print(newton_krylov(fy, a2))
The output then is:
[0.67187916]
[-0.95279992]
Given inputs of:
present value = 11, a summation of future values = 126, and n = 7 (periods of change),
how can I solve for a rate of change that would create a chain of values summing to the FV? This is different from just solving for a rate of return between 11 and 126; this is solving for the rate of return that makes the summation equal 126. I've been trying different ideas and looking up IRR and NPV functions, but the summation aspect is stumping me.
In case the summation aspect isn't clear: if I assume a rate of 1.1, that turns PV = 11 into the list below (which adds up to nearly the FV of 126). How can I solve for r knowing only n, the summation FV, and PV?
11
12.1
13.31
14.641
16.1051
17.71561
19.487171
21.4358881
total = 125.7947691
Thank you.
EDIT:
I attempted to create a sort of iterator, but it's hanging after the first loop...
for r in (1.01, 1.02, 1.03, 1.04, 1.05, 1.06, 1.07, 1.08, 1.09, 1.10, 1.11, 1.12):
    print r
    test = round(11 * (1 - r**8) / (1 - r), 0)
    print test
    while True:
        if round(126, 0) == round(11 * (1 - r**8) / (1 - r), 0):
            answer = r
            break
        else:
            pass
EDIT 2:
IV = float(11)
SV = float(126)
N = 8

# sum of a geometric series: SV = IV * (1 - r^n) / (1 - r)
# rearranged: r^n - (SV/IV)*r + ((SV - IV)/IV) = 0
# long-form polynomial to be solved, with an n of 3 for example:
# 1*r^3 + 0*r^2 - (SV/IV)*r + ((SV - IV)/IV)
# each polynomial coefficient can go into numpy.roots to solve
# for the r that makes the polynomial zero.
import numpy
array = numpy.roots([1., 0., 0., 0., 0., 0., 0., (-SV)/IV, (SV-IV)/IV])
for i in array:
    if i > 1:
        a = str(i)
        b = a.split("+")
        answer = float(b[0])
        print answer
I'm getting a ValueError saying that my string "1.10044876702" can't be converted to float. Any ideas?
SOLVED: i.real gets the real part of the complex number, so there is no need for the split or string conversion, i.e.:
for i in array:
    if i > 1:
        a = i.real
        answer = float(a)
        print answer
Sum of a geometric series
Subbing in,
126 = 11 * (1 - r**8) / (1 - r)
where we need to solve for r. After rearranging,
r**8 - (126/11)*r + (115/11) = 0
then using NumPy
import numpy as np
np.roots([1., 0., 0., 0., 0., 0., 0., -126./11, 115./11])
gives
array([-1.37597528+0.62438671j, -1.37597528-0.62438671j,
-0.42293755+1.41183514j, -0.42293755-1.41183514j,
0.74868844+1.1640769j , 0.74868844-1.1640769j ,
1.10044877+0.j , 1.00000000+0.j ])
where the first six roots are imaginary and the last is invalid (it gives a divide-by-zero in the original equation), so the only usable answer is r = 1.10044877.
Edit:
Per the Numpy docs, np.roots expects an array-like object (i.e. a list) containing the polynomial coefficients. So the parameters above can be read as 1.0*r^8 + 0.*r^7 + 0.*r^6 + 0.*r^5 + 0.*r^4 + 0.*r^3 + 0.*r^2 - 126./11*r + 115./11, which is the polynomial to be solved.
Your iterative solver is pretty crude; it will get you a ballpark answer, but the calculation time grows exponentially with the desired degree of accuracy. We can do much better!
No general analytic solution is known for an eighth-order equation, so some numeric method is needed.
If you really want to code your own solver from scratch, the simplest is Newton-Raphson method - start with a guess, then iteratively evaluate the function and offset your guess by the error divided by the first derivative to hopefully converge on a root - and hope that your initial guess is a good one and your equation has real roots.
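For illustration, here is a bare-bones Newton-Raphson sketch (the helper name and starting guess are my own) applied to the eighth-order equation above:
def newton_raphson(f, df, x0, tol=1e-12, max_iter=100):
    # Refine x by stepping f(x)/f'(x) until the step is negligible
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError('did not converge')

f = lambda r: r**8 - (126. / 11.) * r + 115. / 11.
df = lambda r: 8. * r**7 - 126. / 11.
print(newton_raphson(f, df, 1.2))  # ~1.10044876702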
If you care more about getting good answers quickly, np.roots is hard to beat: it computes the eigenvalues of the companion matrix to simultaneously find all roots, both real and complex.
Edit 2:
Your iterative solver is hanging because of your while True clause - r never changes in the loop, so you will never break. Also, else: pass is redundant and can be removed.
After a good bit of rearranging, your code becomes:
import numpy as np
def iterative_test(rng, fn, goal):
    return min(rng, key=lambda x: abs(goal - fn(x)))

rng = np.arange(1.01, 1.20, 0.01)
fn = lambda x: 11. * (1. - x**8) / (1. - x)
goal = 126.
sol = iterative_test(rng, fn, goal)
print('Solution: {} -> {}'.format(sol, fn(sol)))
which results in
Solution: 1.1 -> 125.7947691
Edit 3:
Your last solution is looking much better, but you must keep in mind that the degree of the polynomial (and hence the length of the array passed to np.roots) changes as the number of periods changes.
import numpy as np
def find_rate(present_value, final_sum, periods):
    """
    Given the initial value, sum, and number of periods in
    a geometric series, solve for the rate of growth.
    """
    # The formula for the sum of a geometric series is
    #   final_sum = sum_i[0..periods](present_value * rate**i)
    # which can be reduced to
    #   final_sum = present_value * (1 - rate**(periods+1)) / (1 - rate)
    # and then rearranged as
    #   rate**(periods+1) - (final_sum / present_value)*rate + (final_sum / present_value - 1) = 0

    # Build the polynomial
    poly = [0.] * (periods + 2)
    poly[ 0] = 1.
    poly[-2] = -1. * final_sum / present_value
    poly[-1] = 1. * final_sum / present_value - 1.
    # Find the roots
    roots = np.roots(poly)
    # Discard unusable roots
    roots = [rt for rt in roots if rt.imag == 0. and rt.real != 1.]
    # Should be zero or one roots left
    if len(roots):
        return roots[0].real
    else:
        raise ValueError('no solution found')

def main():
    pv, fs, p = 11., 126., 7
    print('Solution for present_value = {}, final_sum = {}, periods = {}:'.format(pv, fs, p))
    print('rate = {}'.format(find_rate(pv, fs, p)))

if __name__ == "__main__":
    main()
This produces:
Solution for present_value = 11.0, final_sum = 126.0, periods = 7:
rate = 1.10044876702
Solving for the polynomial roots is overkill. This computation is usually done with a solver such as Newton's method applied directly to the exponential formula. It works for fractional durations too.
For example, https://math.stackexchange.com/questions/502976/newtons-method-annuity-due-equation
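A sketch of that approach using scipy.optimize.newton (which falls back to the secant method when no derivative is supplied); the starting guess is my own:
from scipy.optimize import newton

IV, SV, n = 11.0, 126.0, 8  # initial value, target sum, number of terms

def f(r):
    # Geometric-series sum minus the target; r = 1 must be avoided
    return IV * (1 - r**n) / (1 - r) - SV

print(newton(f, x0=1.05))  # ~1.10044876702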
I want to solve a set of linear equations in 10 variables.
I created the first array like this:
A=np.random.random_integers(15, size=(10,10))
and I want the values after the equals sign to be 0
(A.x + d.y + .... + N = 0)
so I did something like this:
b=np.zeros(shape=(10))
but when I apply the linear algebra function
print linalg.solve(A, b)
I just get an array of 10 zeros as a result:
[ 0. 0. 0. 0. -0. -0. -0. -0. 0. 0.]
Can anyone help?
I do not understand the meaning of the second line of code.
Though, with this:
A=np.random.random_integers(15, size=(10,10))
b=np.zeros(shape=(10))
you are solving the system:
A * x = b
which means that you have:
A[1,1] * x_1 + A[1,2] * x_2 + ... + A[1,10] * x_10 = 0
A[2,1] * x_1 + A[2,2] * x_2 + ... + A[2,10] * x_10 = 0
...
So the zero vector x is always a perfect solution: you are looking for an x such that A x = 0, and x = 0 satisfies that (for a non-singular A, it is the only solution). Try
b = np.random.random_integers(15, size=(10,1))
and the x resulting from linalg.solve(A, b) will specify the linear combination of the columns of A that sums to the random b vector.
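A quick sketch of that fix (using NumPy's newer default_rng in place of the deprecated random_integers):
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(1, 16, size=(10, 10)).astype(float)  # random coefficient matrix
b = rng.integers(1, 16, size=(10, 1)).astype(float)   # random non-zero right-hand side

x = np.linalg.solve(A, b)     # solve A @ x = b
print(np.allclose(A @ x, b))  # True: this x reproduces b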
In https://stackoverflow.com/questions/12910513/how-to-verify-the-results-of-a-linear-equation-system you tried numpy.svd (which is singular value decomposition, which I think you do not want) and numpy.lstsq, which tries to find an inexact solution that minimizes the least-squares distance (e.g. for overdetermined matrices).
I might not have understood your goal; please clarify exactly what you are looking for.