SciPy offers many useful tools for root finding, notably fsolve. Typically a program has the following form:
def eqn(x, a, b):
    return x + 2*a - b**2

fsolve(eqn, x0=0.5, args=(a, b))
and will find a root of eqn(x) = 0 given some arguments a and b.
However, what if I have a problem where I want to solve for the variable a, with x and b supplied as the function arguments? Of course, I could recast the initial equation as
def eqn(a, x, b)
but this seems long-winded and inefficient. Instead, is there a way I can simply tell fsolve (or another root-finding algorithm) which variable I want to solve for?
You can go with your first idea in a more concise way using lambda functions:
fsolve(lambda a, x, b: eqn(x, a, b), x0=0.5, args=(x, b))
That is, rearrange the arguments in the lambda wrapper so you don't have to write a separate def eqn2(a,x,b).
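For example, a complete sketch (the fixed values x=1 and b=3 are made up here just to have something concrete to solve against):

```python
from scipy.optimize import fsolve

def eqn(x, a, b):
    return x + 2*a - b**2

# Fix x and b, solve for a: x + 2*a - b**2 = 0  =>  a = (b**2 - x) / 2
x_fixed, b_fixed = 1.0, 3.0
a_root, = fsolve(lambda a, x, b: eqn(x, a, b), x0=0.5, args=(x_fixed, b_fixed))
print(a_root)  # ~4.0, since a = (9 - 1) / 2
```

The lambda simply reorders the arguments so that the unknown comes first, which is all fsolve requires.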
Let's say I have an equation:
2x + 6 = 12
With algebra we can see that x = 3. How can I make a program in Python that can solve for x? I'm new to programming, and I looked at eval() and exec() but I can't figure out how to make them do what I want. I do not want to use external libraries (e.g. SAGE), I want to do this in just plain Python.
How about SymPy? Their solver looks like what you need. Have a look at their source code if you want to build the library yourself…
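For instance, a minimal SymPy sketch for the equation above (SymPy is an external library, so it does require an install):

```python
from sympy import symbols, Eq, solve

# Solve 2x + 6 = 12 symbolically
x = symbols('x')
solutions = solve(Eq(2*x + 6, 12), x)
print(solutions)  # [3]
```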
There are two ways to approach this problem: numerically and symbolically.
To solve it numerically, you have to first encode it as a "runnable" function - stick a value in, get a value out. For example,
def my_function(x):
    return 2*x + 6
It is quite possible to parse a string to automatically create such a function; say you parse 2x + 6 into a list, [6, 2] (where the list index corresponds to the power of x - so 6*x^0 + 2*x^1). Then:
def makePoly(arr):
    def fn(x):
        return sum(c*x**p for p, c in enumerate(arr))
    return fn

my_func = makePoly([6, 2])
my_func(3)  # returns 12
You then need another function which repeatedly plugs an x-value into your function, looks at the difference between the result and what it wants to find, and tweaks its x-value to (hopefully) minimize the difference.
def dx(fn, x, delta=0.001):
    return (fn(x + delta) - fn(x)) / delta

def solve(fn, value, x=0.5, maxtries=1000, maxerr=0.00001):
    for tries in range(maxtries):
        err = fn(x) - value
        if abs(err) < maxerr:
            return x
        slope = dx(fn, x)
        x -= err / slope
    raise ValueError('no solution found')
There are lots of potential problems here - finding a good starting x-value, assuming that the function actually has a solution (i.e. there are no real-valued answers to x^2 + 2 = 0), hitting the limits of computational accuracy, etc. But in this case, the error-minimization function is suitable and we get a good result:
solve(my_func, 16) # returns (x =) 5.000000000000496
Note that this solution is not absolutely, exactly correct. If you need it to be perfect, or if you want to try solving families of equations analytically, you have to turn to a more complicated beast: a symbolic solver.
A symbolic solver, like Mathematica or Maple, is an expert system with a lot of built-in rules ("knowledge") about algebra, calculus, etc; it "knows" that the derivative of sin is cos, that the derivative of kx^p is kpx^(p-1), and so on. When you give it an equation, it tries to find a path, a set of rule-applications, from where it is (the equation) to where you want to be (the simplest possible form of the equation, which is hopefully the solution).
Your example equation is quite simple; a symbolic solution might look like:
=> LHS([6, 2]) RHS([16])
# rule: pull all coefficients into LHS
LHS, RHS = [lh - rh for lh, rh in zip_longest(LHS, RHS, fillvalue=0)], [0]
=> LHS([-10,2]) RHS([0])
# rule: solve first-degree poly
if RHS == [0] and len(LHS) == 2:
    LHS, RHS = [0, 1], [-LHS[0]/LHS[1]]
=> LHS([0,1]) RHS([5])
and there is your solution: x = 5.
I hope this gives the flavor of the idea; the details of implementation (finding a good, complete set of rules and deciding when each rule should be applied) can easily consume many man-years of effort.
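The rule sketch above can be made runnable for the first-degree case; this is only an illustration of the idea, not a general solver:

```python
from itertools import zip_longest

def solve_linear(lhs, rhs):
    """Solve lhs == rhs, where each side is a coefficient list
    [c0, c1] meaning c0 + c1*x (first-degree equations only)."""
    # rule: pull all coefficients onto the LHS
    lhs = [l - r for l, r in zip_longest(lhs, rhs, fillvalue=0)]
    # rule: solve the first-degree polynomial c0 + c1*x = 0
    if len(lhs) == 2 and lhs[1] != 0:
        return -lhs[0] / lhs[1]
    raise ValueError('not a solvable first-degree equation')

print(solve_linear([6, 2], [16]))  # 5.0
```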
Python may be good, but it isn't God...
There are a few different ways to solve equations. SymPy has already been mentioned, if you're looking for analytic solutions.
If you're happy to just have a numerical solution, Numpy has a few routines that can help. If you're just interested in solutions to polynomials, numpy.roots will work. Specifically for the case you mentioned:
>>> import numpy
>>> numpy.roots([2,-6])
array([3.0])
For more complicated expressions, have a look at scipy.optimize.fsolve.
Either way, you can't escape using a library.
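For instance, a non-polynomial equation such as cos(x) = x can be handed straight to fsolve (the equation here is just a made-up example):

```python
import numpy as np
from scipy.optimize import fsolve

# Solve cos(x) - x = 0; the root (the "Dottie number") is about 0.739085
root, = fsolve(lambda x: np.cos(x) - x, x0=1.0)
print(root)
```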
If you only want to solve the extremely limited set of equations mx + c = y for positive integer m, c, y, then this will do:
import re

def solve_linear_equation(equ):
    """
    Given an input string of the format "3x+2=6", solves for x.
    The format must be as shown - no whitespace, no decimal numbers,
    no negative numbers.
    """
    match = re.match(r"(\d+)x\+(\d+)=(\d+)", equ)
    m, c, y = match.groups()
    m, c, y = float(m), float(c), float(y)  # Convert from strings to numbers
    x = (y - c) / m
    print("x = %f" % x)
Some tests:
>>> solve_linear_equation("2x+4=12")
x = 4.000000
>>> solve_linear_equation("123x+456=789")
x = 2.707317
>>>
If you want to recognise and solve arbitrary equations, like sin(x) + e^(i*pi*x) = 1, then you will need to implement some kind of symbolic maths engine, similar to maxima, Mathematica, MATLAB's solve() or Symbolic Toolbox, etc. As a novice, this is beyond your ken.
Use a different tool. Something like Wolfram Alpha, Maple, R, Octave, Matlab or any other algebra software package.
As a beginner you should probably not attempt to solve such a non-trivial problem.
I'm looking for an efficient way to compute the result of a (complex) mathematical function.
Right now it looks comparable to:
import concurrent.futures

def f(x):
    return x**2

def g(x):
    if not x:
        return 1
    return f(x)*5

def h(x):
    return g(x)

# params: the iterable of input values, defined elsewhere
with concurrent.futures.ProcessPoolExecutor() as executor:
    print(list(executor.map(h, params)))
Since every function call is costly in Python, the code should already run faster if f(x) is merged into g(x). Unfortunately, in that case the 'return ...' line of g(x) becomes very long. Furthermore, there are currently six functions defined in total, so the complete formula occupies several lines.
So, what's a clever way to compute the result of a physics formula?
EDIT:
Thank you so far, but my question is not really about this specific snippet of code; it is more about the way to implement physics formulas in Python. For example, one could also define the expression as a string and evaluate it using eval(), but that is obviously slower.
To be more specific: I have a potential and want to evaluate it in parallel, so I call my version of h(x) via the map function of a ProcessPoolExecutor (with different values each time). But is it best practice to define the function as a function that calls other functions, or one that uses variables? Is there a more efficient way?
def formula(x):
    if not x:
        return 1
    return x*x*5
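To see whether merging the functions actually pays off, one can time both versions with the standard-library timeit module (a rough sketch; the exact numbers vary by machine, and the gap is usually small):

```python
import timeit

def f(x):
    return x**2

def g(x):          # split version: g calls f
    if not x:
        return 1
    return f(x) * 5

def merged(x):     # merged version: one call, same result
    if not x:
        return 1
    return x * x * 5

t_split = timeit.timeit(lambda: g(3.0), number=100_000)
t_merged = timeit.timeit(lambda: merged(3.0), number=100_000)
print(t_split, t_merged)
```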
I don't think the line is in danger of being problematically long, but if you're concerned about the length of the return ... line you could use intermediate values, e.g.:
def g(x):
    if x == 0:
        return 1
    x2 = x ** 2
    return x2 * 5
As an aside, in this context it is incorrect to use the is operator as in x is 0. It does not check for numerical equality, which is what == does. The is operator checks that the two operands refer to exactly the same object in memory, which happens to have the same behaviour as == in this case because the Python interpreter is intelligently reusing number objects. It can lead to confusing errors, for example:
a = 1234
b = 1233
a == (b + 1) # True
a is (b + 1) # False
In practice, is is mainly used to check whether a value is None.
Hi everyone, is it possible to solve the following:
x = np.matrix([[8.3,-20.6],[-20.6,65.8]])
y is a function of P:
y = lambda P: np.matrix([[0.02*P, -0.02*P], [-0.02*P, 0.04*P]])
I want to find the value of P given the conditions that P > 0 and det(x - y) == 0.
Is there any handy way to do this?
Thank you very much!
Shawn
If you don't mind using an additional package, scipy.optimize has a number of methods that would be perfect. Otherwise, you could implement your own zero finding function.
If you want to go the scipy route:
1) Define your problem as a function that takes your unknown parameter (P) as its first argument and returns the value you want to drive to zero:
def zerofn(P, x, y):
    return np.linalg.det(x - y(P))
2) Optimize this function using scipy. The example here uses a simple Newton-Raphson zero finder, but there are many other options, which you might want if you need to specify parameter bounds (e.g. P > 0).
import scipy.optimize as opt
opt.newton(zerofn, x0=1, args=(x, y))
# returns 160.25865914054651
The result of this zero finder is your optimized value of P.
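Putting both steps together (a sketch using np.array rather than np.matrix, which is deprecated in recent NumPy versions):

```python
import numpy as np
import scipy.optimize as opt

x = np.array([[8.3, -20.6], [-20.6, 65.8]])
y = lambda P: np.array([[0.02 * P, -0.02 * P], [-0.02 * P, 0.04 * P]])

def zerofn(P, x, y):
    # the quantity we want to drive to zero
    return np.linalg.det(x - y(P))

P_opt = opt.newton(zerofn, x0=1, args=(x, y))
print(P_opt)  # ~160.26; det(x - y(P_opt)) is numerically zero
```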
I have a project in which one step of the process is to solve for R(k, d, a), where k means the kth step.
My friend suggested I do this in sympy, but I don't know how.
from sympy import *

k = symbols('k')
d = symbols('d')
a = symbols('a')
R = Function('R')(k, d, a)
print(R)
In fact I don't know how to define a function in sympy with a class method, and this expression fails.
def R(k, d, a):
    # k: number of the kth step
    # d: parameter used for initialization
    # R(0, d, a) should be given
    if k == 0:
        return 100*d
    r = d*(1 - (1 - (d/R(k-1, d, a))**2)**0.5)**0.5
    B = -3/2*d
    D = (R(k-1, d, a))**3*(3*a*d/R(k-1, d, a) - 2) + r**2*(r - 3/2*d)
    # here I define R(k, d, a) in terms of R(k-1, d, a) - is that appropriate?
    # x^3 + B*x^2 + C*x + D = 0, where C = 0
    # x represents R(k, d, a) here
    x = symbols('x')
    y = solve(x**3 + x**2*B + D, x)
    return max(y)
Here I want y to be a list of the solutions, restricted to the real ones, and then to return the biggest of them, but I don't know how to realize that.
Finally, for each k I will need another function in which R(k, d, a) is a parameter. I think I can do that myself with a for loop; it is not hard for me.
The hardest part is how to get R(k, d, a).
I don't need complex roots, but if I wanted them, how could I get them?
Thank you for reading!
What you have looks basically OK. Three suggestions, however:
make sure your function always returns a SymPy value; e.g. don't return 100*d, since d might be an int - return S(100)*d instead;
wherever you have division, make sure that it is not suffering from Python 2's integer truncation, wherein 1/2 -> 0; e.g. write B = -S(3)/2*d instead of what you have (and use that B in your expression for D, writing (r + B) at the end of it);
max will not be able to sort the roots if complex roots are present, so it would be better to select the real ones by hand: y = [i for i in solve(x**3 + x**2*B + D, x) if i.is_real].
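A minimal sketch of those suggestions in action (the numeric values of d and D below are made up purely to demonstrate the pattern; they are not from the original problem):

```python
from sympy import S, symbols, solve

d = S(100)         # S(100) keeps the value exact instead of a plain Python int
B = -S(3) / 2 * d  # exact rational -150, no integer truncation
D = S(250000)      # hypothetical value, only to demonstrate the root filtering

x = symbols('x')
real_roots = [i for i in solve(x**3 + x**2 * B + D, x) if i.is_real]
biggest = max(real_roots)  # safe now that only real roots remain
```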
Using excel solver, it is easy to find a solution (optimum value for x and y )for this equation:
(x*14.80461) + (y * -4.9233) + (10*0.4803) ≈ 0
However, I can't figure out how to do this in Python. The existing scipy.optimize library functions like fsolve() or leastsq() seem to work with only one variable... (I might just not know how to use them.)
Any suggestions?
Thanks!
>>> def f(x):
...     return x[0]*14.80461 + x[1]*(-4.9233) + x[2]*(10*0.4803)
...
>>> def vf(x):
...     return [f(x), 0, 0]
...
>>> xx = fsolve(vf, x0=[0, 0, 1])
>>> f(xx)
8.8817841970012523e-16
Since the solution is not unique, different initial values for an unknown lead to different (valid) solutions.
EDIT: Why this works. Well, it's a dirty hack. It's just that fsolve and its relatives deal with systems of equations. What I did here, I defined a system of three equations (f(x) returns a three-element list) for three variables (x has three elements). Now fsolve uses a Newton-type algorithm to converge to a solution.
Clearly, the system is underdetermined: you can specify arbitrary values of two variables, say, x[1] and x[2], and find x[0] to satisfy the only non-trivial equation you have. You can see this explicitly by specifying a couple of different initial guesses for x0 and observing different outputs, all of which satisfy f(x)=0 up to a certain tolerance.
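To see that explicitly (the second starting guess below is arbitrary, chosen just for illustration):

```python
from scipy.optimize import fsolve

def f(x):
    return x[0] * 14.80461 + x[1] * (-4.9233) + x[2] * (10 * 0.4803)

def vf(x):
    # pad with trivial equations so fsolve sees three equations, three unknowns
    return [f(x), 0, 0]

sol_a = fsolve(vf, x0=[0, 0, 1])
sol_b = fsolve(vf, x0=[1, 2, 3])
print(f(sol_a), f(sol_b))  # both residuals ~0
```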