I am trying to solve a system of nonlinear equations in Python. The equations have the form:
(1) x^2 + y^2 = a^2
(2) (x - b)^2 + y^2 = c^2
where x and y are the variables and a, b, c are parameters. I would like a function to which I can pass the parameters a, b, c and which returns the values of x and y. How can I do that?
What I currently have is:
from scipy.optimize import fsolve
def equation(var, *data):
    a, b, c = data
    x, y = var
    eq1 = x**2 + y**2 - a**2
    eq2 = (x - b)**2 + y**2 - c**2
    return [eq1, eq2]

x, y = fsolve(equation, args=data)
But this does not quite work. Can someone help?
I think it is just missing the initial values:
from scipy.optimize import fsolve
def equation(var, *data):
    a, b, c = data
    x, y = var
    eq1 = x**2 + y**2 - a**2
    eq2 = (x - b)**2 + y**2 - c**2
    return [eq1, eq2]

x, y = fsolve(equation, [1, 1], args=(1, 1, 1))
print(x, y)
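Note that fsolve only returns the single root it converges to from the given starting point; for these two symmetric circles, a different initial guess should pick up the mirrored intersection, for example:
x, y = fsolve(equation, [1, -1], args=(1, 1, 1))
print(x, y)  # same x as before, y with the opposite sign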
Taking inspiration from the geometrical interpretation of Thierry Lathuille, there may be no real need for a nonlinear solver at all.
First, Eq. (1) gives y^2 = a^2 - x^2, and substituting this into Eq. (2) yields (x-b)^2 + a^2 - x^2 = c^2. The latter simplifies to b^2 - 2bx + a^2 = c^2, so xSol = (a^2 + b^2 - c^2)/(2b) is the only candidate for x (the symmetry suggests there will be only one solution for x). With this solution for x, check the sign of a^2 - xSol^2: if it is negative, there is no solution; if it is non-negative, the solutions are (xSol, +ySol) and (xSol, -ySol) with ySol = np.sqrt(a^2 - xSol^2).
Based on some testing with the code below, the solution above seems to work (but correct me if I'm wrong).
from scipy.optimize import fsolve
import numpy as np
abc = (1,1,4)
# Nonlinear optimizer
def equation(var, *data):
    a, b, c = data
    x, y = var
    eq1 = x**2 + y**2 - a**2
    eq2 = (x - b)**2 + y**2 - c**2
    return [eq1, eq2]

x, y = fsolve(equation, [1, 1], args=abc)
print(x, y)
# Geometric solution
a, b, c = abc
xSol = (a**2+b**2-c**2)/(2*b)
if a**2 - xSol**2 < 0:
    print("No solution")
else:
    ySol = np.sqrt(a**2 - xSol**2)
    print(xSol, ySol)
    print(xSol, -ySol)
I am trying to solve an equation, but the solve() function is taking over 10 minutes even on a high-RAM Colab notebook. Are there any simplifications of the problem I can make to speed this along? Here is the code:
from sympy import symbols, sqrt, Eq, solve

x, y, x_0, y_0, x_new, y_new, t, f = symbols('x y x_0 y_0 x_new y_new t f')
D = (2 * (1 - t) * sqrt(x * y) + t * (x + y)) / (2 * (x + y) * sqrt(x * y))
D_old = D.subs([(x, x_0), (y, y_0)])
D_new = D.subs([(x, x_new), (y, y_new)])
delta_D = D_new - D_old
target = Eq(delta_D, f)
answer = solve(target, x_new)
If it is taking a long time, you must be trying to solve for one of the x or y values, which requires solving a messy cubic equation in many variables. It would be better to substitute in the values of interest and then use nsolve to find the roots of interest. Otherwise, you can get a symbolic solution to the generic cubic, g3 = solve(a*x**3 + b*x**2 + c*x + d, x), and then substitute in the corresponding expressions for the coefficients of collect(sympy.solvers.solvers.unrad(target.rewrite(Add))[0], v), where v is the variable of interest. But I won't bog this down with more details until it is clear what you really want to do.
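For the numeric route, a minimal sketch of the "substitute, then nsolve" idea could look like the following; the numeric values plugged in are hypothetical placeholders, not values from the question:
from sympy import symbols, sqrt, Eq, nsolve

x, y, x_0, y_0, x_new, y_new, t, f = symbols('x y x_0 y_0 x_new y_new t f')
D = (2 * (1 - t) * sqrt(x * y) + t * (x + y)) / (2 * (x + y) * sqrt(x * y))
delta_D = D.subs([(x, x_new), (y, y_new)]) - D.subs([(x, x_0), (y, y_0)])
target = Eq(delta_D, f)

# Fix every symbol except the unknown x_new (placeholder values), then solve numerically
numeric = target.subs({x_0: 1.0, y_0: 2.0, y_new: 2.0, t: 0.5, f: 0.01})
print(nsolve(numeric, x_new, 1.0))  # 1.0 is the initial guess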
I am trying to integrate a function over an annular domain; the integrand has a change of phase with respect to the angular direction of the annulus.
My attempt to solve it is the following:
import numpy as np
from scipy import integrate
def f(x0, y0):
    r = np.sqrt(x0**2 + y0**2)
    if r >= rIn and r <= rOut:
        theta = np.arctan(y0 / x0)
        R = np.sqrt((x - x0)**2 + (y - y0)**2 + z**2)
        integrand = (np.exp(-1j * (k*R + theta))) / R
        return integrand
    else:
        return 0
# Test
rIn = 0.5
rOut = 1.5
x = 1
y = 1
z = 1
k = 3.66
I = integrate.dblquad(f, -rOut, rOut, lambda x0: -rOut, lambda x0: rOut)
My problem is that I don't know how to get rid of the division by zero occurring when I evaluate theta.
Any help will be more than appreciated!
Use numpy.arctan2 instead; it only has problems when both x and y are zero, in which case the angle is undetermined.
Also, I see that your integrand is complex; in this case you will probably have to handle the real and imaginary parts separately, as done here.
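Putting both suggestions together, a minimal sketch (reusing the test values from the question) might look like this:
import numpy as np
from scipy import integrate

rIn, rOut = 0.5, 1.5
x, y, z, k = 1, 1, 1, 3.66

def f(x0, y0, part):
    r = np.sqrt(x0**2 + y0**2)
    if not (rIn <= r <= rOut):
        return 0.0
    theta = np.arctan2(y0, x0)  # well defined everywhere except the origin
    R = np.sqrt((x - x0)**2 + (y - y0)**2 + z**2)
    integrand = np.exp(-1j * (k * R + theta)) / R
    return integrand.real if part == 'real' else integrand.imag

# Integrate the real and imaginary parts separately, then recombine
I_re, _ = integrate.dblquad(f, -rOut, rOut, lambda x0: -rOut, lambda x0: rOut, args=('real',))
I_im, _ = integrate.dblquad(f, -rOut, rOut, lambda x0: -rOut, lambda x0: rOut, args=('imag',))
print(I_re + 1j * I_im)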
If you look at this example:
(B/2)sin(x) + (B + ¾A)cos(x) = sin(x) + 2cos(x)
it’s easy to see that
B/2 = 1, B + ¾A = 2
and applying some basic linear algebra
B = 2, A = 0
However, if you run this code in Python using sympy:
from sympy import *; var('x A B')
P1 = (B/2)*sin(x) + (B + 3*A/4)*cos(x)
P2 = sin(x) + 2*cos(x)
solve(Eq(P1, P2), [A,B])
you get this
[(-2*B*tan(x)/3 - 4*B/3 + 4*tan(x)/3 + 4/3, B)]
Is there a way to get the result in terms of A and B?
It seems like a bit of a hack, but it works: I substitute sin(x) and cos(x) with x and y, which turns both sides into polynomials. Then I can just get the coefficients, make equations out of those, and solve them just fine.
from sympy import *; var('x y A B')
P1 = (B/2)*sin(x) + (B + 3*A/4)*cos(x)
P2 = sin(x) + 2*cos(x)
# Replace sin(x) -> x and cos(x) -> y so both sides become polynomials
P1s = P1.subs({sin(x): x, cos(x): y})
P2s = P2.subs({sin(x): x, cos(x): y})
# Pair up the coefficients of the two polynomials and turn each pair into an equation
eqs = tuple(map(lambda p: Eq(*p), zip(Poly(P1s, [x, y]).coeffs(), Poly(P2s, [x, y]).coeffs())))
These equations sympy does solve:
sol = solve(eqs)
{A: 0, B: 2}
And I can even put those into the original equation to see if something weird happened:
P1.subs(sol), P2.subs(sol)
(sin(x) + 2*cos(x), sin(x) + 2*cos(x))
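For what it's worth, a small alternative sketch that skips the substitution and compares the coefficients of sin(x) and cos(x) directly (using the same symbols as above) would be:
# Collect everything on one side and read off the sin(x)/cos(x) coefficients
diff = (P1 - P2).expand()
eqs2 = [Eq(diff.coeff(sin(x)), 0), Eq(diff.coeff(cos(x)), 0)]
print(solve(eqs2, [A, B]))  # {A: 0, B: 2}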
I have the following negative quadratic equation
-0.03402645959398278*x^2 + 156.003469*x - 178794.025
I want to know if there is a straightforward way (using numpy/scipy or any other library) to get the value of x where the derivative (the slope) is zero, i.e. the maximum. I'm aware I could:
change the sign of the equation and apply scipy.optimize.minimize, or
take the derivative of the equation and find the value where the slope is zero
For instance:
import numpy as np
from scipy.optimize import minimize

quad_eq = np.poly1d([-0.03402645959398278, 156.003469, -178794.025])
############SCIPY####################
neg_quad_eq = np.poly1d(np.negative(quad_eq))
fit = minimize(neg_quad_eq, x0=15)
slope_zero_neg = fit.x[0]
maxima = np.polyval(quad_eq, slope_zero_neg)
print(maxima)
##################numpy######################
first_dev = np.polyder(quad_eq)
slope_zero = first_dev.r
maxima = np.polyval(quad_eq, slope_zero)
print(maxima)
Is there a more straightforward way to get the same result?
You don't need all that code. The first derivative of a*x^2 + b*x + c is 2*a*x + b, so solving 2*a*x + b = 0 for x yields x = -b / (2*a), which is exactly the maximum you are searching for (since a < 0):
import numpy as np
import matplotlib.pyplot as plt
def func(x, a=-0.03402645959398278, b=156.003469, c=-178794.025):
    result = a * x**2 + b * x + c
    return result

def func_max(a=-0.03402645959398278, b=156.003469, c=-178794.025):
    maximum_x = -b / (2 * a)
    maximum_y = a * maximum_x**2 + b * maximum_x + c
    return maximum_x, maximum_y
x = np.linspace(-50000, 50000, 100)
y = func(x)
mx, my = func_max()
print('maximum:', mx, my)
maximum: 2292.384674478263 15.955750522436574
and verify
plt.plot(x, y)
plt.axvline(mx, color='r')
plt.axhline(my, color='r')
I have one exponential equation with two unknowns, say:
y*exp(ix) = sqrt(2) + i * sqrt(2)
Manually, I can transform it to system of trigonometric equations:
y * cos x = sqrt(2)
y * sin x = sqrt(2)
How can I do it automatically in sympy?
I tried this:
from sympy import *
x = Symbol('x', real=True)
y = Symbol('y', real=True)
eq = Eq(y * cos(I * x), sqrt(2) + I * sqrt(2))
print([e.trigsimp() for e in eq.as_real_imag()])
but I only got two identical equations, except one had "re" in front of it and the other "im".
You can call the method .rewrite(sin) or .rewrite(cos) to obtain the desired form of your equation. Unfortunately, as_real_imag cannot be called on an Equation directly, but you could do something like this:
from sympy import *
def eq_as_real_imag(eq):
    lhs_ri = eq.lhs.as_real_imag()
    rhs_ri = eq.rhs.as_real_imag()
    return Eq(lhs_ri[0], rhs_ri[0]), Eq(lhs_ri[1], rhs_ri[1])
x = Symbol('x', real=True)
y = Symbol('y', real=True)
original_eq = Eq(y*exp(I*x), sqrt(2) + I*sqrt(2))
trig_eq = original_eq.rewrite(sin) # Eq(y*(I*sin(x) + cos(x)), sqrt(2) + sqrt(2)*I)
eq_real, eq_imag = eq_as_real_imag(trig_eq)
print(eq_real) # Eq(y*cos(x), sqrt(2))
print(eq_imag) # Eq(y*sin(x), sqrt(2))
(You might also have more luck just working with expressions (implicitly understood to be 0) instead of an Equation, e.g. eq.lhs - eq.rhs, in order to call the method as_real_imag directly.)
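For example, the expression-based variant could look like this small sketch, reusing the symbols defined above:
expr = y*exp(I*x) - (sqrt(2) + I*sqrt(2))
re_part, im_part = expr.rewrite(sin).expand().as_real_imag()
print(Eq(re_part, 0))  # Eq(y*cos(x) - sqrt(2), 0)
print(Eq(im_part, 0))  # Eq(y*sin(x) - sqrt(2), 0)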