Solve two simultaneous equations: one contains a Python function

On the map below, I have two known points A and B with their coordinates (longitude, latitude). I need to derive the coordinates of a point C which lies on the line AB and is 100 kilometres away from A.
First I created a function to calculate the distances between two points in kilometres:
# pip install haversine
from haversine import haversine
def get_distance(lat_from, long_from, lat_to, long_to):
    distance_in_km = haversine((lat_from, long_from),
                               (lat_to, long_to),
                               unit='km')
    return distance_in_km
Then, using the slope and the distance, the coordinates of point C should be the solution to the equations below:
# line segment AB and AC share the same slope, so
# (15.6-27.3)/(41.6-34.7) = (y-27.3)/(x-34.7)
# the distance between A and C is 100 km, so
# get_distance(y,x,27.3,34.7) = 100
Then I try to solve these two equations in Python:
from sympy import symbols, Eq, solve
slope = (15.6-27.3)/(41.6-34.7)
x, y = symbols('x y')
eq1 = Eq(y-slope*(x-34.7)-27.3)
eq2 = Eq(get_distance(y,x,34.7,27.3)-100)
solve((eq1,eq2), (x, y))
The error is TypeError: can't convert expression to float. I think I understand the error: the get_distance function expects its inputs to be floats, while my x and y in eq2 are sympy.core.symbol.Symbol objects.
I tried adding np.float(x), but the same error remains.
Is there a way to solve equations like these? Or do you have better ways to achieve what is needed?
Many thanks!
# here is a simple example of solving equations:
from sympy import symbols, Eq, solve
x, y = symbols('x y')
eq1 = Eq(2*x-y)
eq2 = Eq(x+2-y)
solve((eq1,eq2), (x, y))
# output: {x: 2, y: 4}

You can calculate that point directly. We can implement a Python version of the intermediate-point calculation for latitude/longitude.
Be aware these calculations assume the Earth is a sphere and take its curvature into account, i.e. this is not a Euclidean approximation like the one in your original post.
Say we have two (lat, long) points A and B:
import numpy as np
A = (52.234869, 4.961132)
B = (46.491267, 26.994655)
EARTH_RADIUS = 6371.009
We can then calculate the intermediate-point fraction f by taking 100 / (the distance between A and B in km):
from sklearn.neighbors import DistanceMetric  # in recent scikit-learn versions DistanceMetric lives in sklearn.metrics
dist = DistanceMetric.get_metric('haversine')
point_1 = np.array([A])
point_2 = np.array([B])
delta = dist.pairwise(np.radians(point_1), np.radians(point_2) )[0][0]
f = 100 / (delta * EARTH_RADIUS)
phi_1, lambda_1 = np.radians(point_1)[0]
phi_2, lambda_2 = np.radians(point_2)[0]
a = np.sin((1-f) * delta) / np.sin(delta)
b = np.sin( f * delta) / np.sin(delta)
x = a * np.cos(phi_1) * np.cos(lambda_1) + b * np.cos(phi_2) * np.cos(lambda_2)
y = a * np.cos(phi_1) * np.sin(lambda_1) + b * np.cos(phi_2) * np.sin(lambda_2)
z = a * np.sin(phi_1) + b * np.sin(phi_2)
phi_n = np.arctan2(z, np.sqrt(x**2 + y**2) )
lambda_n = np.arctan2(y,x)
The point C, going from A to B at a distance of 100 km from A, is then
C = np.degrees( phi_n ), np.degrees(lambda_n)
In this case
(52.02172458025681, 6.384361456573444)
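As a quick sanity check (a minimal sketch, reusing the haversine package from the question; the coordinates of C are the values printed above), the computed point should lie roughly 100 km from A:
# pip install haversine
from haversine import haversine
A = (52.234869, 4.961132)
C = (52.02172458025681, 6.384361456573444)
print(haversine(A, C, unit='km'))  # prints approximately 100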

Related

Speeding up Sympy solve() on a particular equation

I am trying to solve an equation, but the solve() function is taking over 10 minutes even on a high-RAM Colab notebook. Are there any simplifications to the problem that I can make to speed this along? Here is the code:
from sympy import symbols, sqrt, Eq, solve
x, y, x_0, y_0, x_new, y_new, t, f = symbols('x y x_0 y_0 x_new y_new t f')
D = (2 * (1 - t) * sqrt(x * y) + t * (x + y)) / (2 * (x + y) * sqrt(x * y))
D_old = D.subs([(x, x_0), (y, y_0)])
D_new = D.subs([(x, x_new), (y, y_new)])
delta_D = D_new - D_old
target = Eq(delta_D, f)
answer = solve(target, x_new)
If it is taking a long time you must be trying to solve for one of the x or y values. This will require solving a messy cubic equation in many variables. It would be better if you just substituted in the values of interest and then used nsolve to find the roots of interest. Otherwise, you can get a symbolic solution to the generic cubic g3 = solve(a*x**3 + b*x**2 + c*x + d, x) and then substitute in the corresponding expressions for the coefficients of collect(sympy.solvers.solvers.unrad(target.rewrite(Add))[0], v) where v is the variable of interest. But I won't bog this down with more details until it is clear what you really want to do.
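A minimal sketch of the suggested nsolve route, with made-up numbers for x_0, y_0, y_new, t and f purely for illustration:
from sympy import symbols, sqrt, nsolve
x, y, x_0, y_0, x_new, y_new, t, f = symbols('x y x_0 y_0 x_new y_new t f')
D = (2 * (1 - t) * sqrt(x * y) + t * (x + y)) / (2 * (x + y) * sqrt(x * y))
delta_D = D.subs([(x, x_new), (y, y_new)]) - D.subs([(x, x_0), (y, y_0)])
# substitute every known quantity so only x_new remains, then solve numerically
expr = (delta_D - f).subs([(x_0, 1.0), (y_0, 2.0), (y_new, 2.0), (t, 0.5), (f, 0.01)])
print(nsolve(expr, x_new, 1.0))  # initial guess x_new = 1.0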

Find maxima for a negative parabolic equation

I have the following negative quadratic equation
-0.03402645959398278*x^2 + 156.003469*x - 178794.025
I want to know if there is a direct way (using numpy/scipy or any other library) to get the value of x where the derivative (slope) is zero, i.e. the maximum. I'm aware I could:
change the sign of the equation and apply scipy.optimize.minimize, or
use the derivative of the equation and find the value where the slope is zero
For instance:
import numpy as np
from scipy.optimize import minimize
quad_eq = np.poly1d([-0.03402645959398278, 156.003469, -178794.025])
############SCIPY####################
neg_quad_eq = np.poly1d(np.negative(quad_eq))
fit = minimize(neg_quad_eq, x0=15)
slope_zero_neg = fit.x[0]
maxima = np.polyval(quad_eq, slope_zero_neg)
print(maxima)
##################numpy######################
import numpy as np
first_dev = np.polyder(quad_eq)
slope_zero = first_dev.r
maxima = np.polyval(quad_eq, slope_zero)
print(maxima)
Is there a more direct way to get the same result?
You don't need all that code... The first derivative of a x^2 + b x + c is 2a x + b, so solving 2a x + b = 0 for x yields x = -b / (2a), which is where the maximum you are searching for lies.
import numpy as np
import matplotlib.pyplot as plt
def func(x, a=-0.03402645959398278, b=156.003469, c=-178794.025):
    result = a * x**2 + b * x + c
    return result

def func_max(a=-0.03402645959398278, b=156.003469, c=-178794.025):
    maximum_x = -b / (2 * a)
    maximum_y = a * maximum_x**2 + b * maximum_x + c
    return maximum_x, maximum_y
x = np.linspace(-50000, 50000, 100)
y = func(x)
mx, my = func_max()
print('maximum:', mx, my)
maximum: 2292.384674478263 15.955750522436574
and verify
plt.plot(x, y)
plt.axvline(mx, color='r')
plt.axhline(my, color='r')
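The same formula can also be applied directly to the poly1d object from the question; a small sketch (re-creating quad_eq here for self-containment):
import numpy as np
quad_eq = np.poly1d([-0.03402645959398278, 156.003469, -178794.025])
a, b, c = quad_eq.coeffs
mx = -b / (2 * a)       # x-coordinate of the vertex
print(mx, quad_eq(mx))  # same maximum as above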

How to expand one exponential complex equation to two trigonometric ones in sympy?

I have one exponential equation with two unknowns, say:
y*exp(ix) = sqrt(2) + i * sqrt(2)
Manually, I can transform it to system of trigonometric equations:
y * cos x = sqrt(2)
y * sin x = sqrt(2)
How can I do it automatically in sympy?
I tried this:
from sympy import *
x = Symbol('x', real=True)
y = Symbol('y', real=True)
eq = Eq(y * cos(I * x), sqrt(2) + I * sqrt(2))
print([e.trigsimp() for e in eq.as_real_imag()])
but I only got two identical equations, except that one had "re" in front of it and the other "im".
You can call the method .rewrite(sin) or .rewrite(cos) to obtain the desired form of your equation. Unfortunately, as_real_imag cannot be called on an Equation directly but you could do something like this:
from sympy import *
def eq_as_real_imag(eq):
    lhs_ri = eq.lhs.as_real_imag()
    rhs_ri = eq.rhs.as_real_imag()
    return Eq(lhs_ri[0], rhs_ri[0]), Eq(lhs_ri[1], rhs_ri[1])
x = Symbol('x', real=True)
y = Symbol('y', real=True)
original_eq = Eq(y*exp(I*x), sqrt(2) + I*sqrt(2))
trig_eq = original_eq.rewrite(sin) # Eq(y*(I*sin(x) + cos(x)), sqrt(2) + sqrt(2)*I)
eq_real, eq_imag = eq_as_real_imag(trig_eq)
print(eq_real) # Eq(y*cos(x), sqrt(2))
print(eq_imag) # Eq(y*sin(x), sqrt(2))
(You might also have more luck working with expressions (implicitly understood to be 0) instead of an Equation, e.g. eq.lhs - eq.rhs, in order to call the method as_real_imag directly.)
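If the goal is ultimately the values of x and y, the two real equations can then be handed to solve (a short follow-up to the snippet above; from sympy import * already provides solve):
sol = solve([eq_real, eq_imag], [x, y])
print(sol)  # the principal solution is x = pi/4, y = 2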

Couple Differential Equations using Python

I am trying to solve a system of geodesic orbital equations using Python. They are coupled ordinary differential equations. I've tried different approaches, but they all yielded the wrong shape (the plot of r and phi should be some periodic function). Any idea on how to do this?
Here are my constants
G = 4.30091252525 * (pow(10, -3)) #Gravitational constant in (parsec*km^2)/(Ms*sec^2)
c = 0.0020053761 #speed of light , AU/sec
M = 170000 #mass of the central body, in solar masses
m = 10 #mass of the orbiting body, in solar masses
rs = 2 * G * M / pow(c, 2) #Schwarzschild radius
Lz= 0.000024 #Angular momentum
h = Lz / m #Just the constant in equation
E= 1.715488e-007 #energy
And initial conditions are:
Y(0) = rs
Phi(0) = math.pi
Orbital equations
The way I tried to do it:
def rhs(t, u):
    Y, phi = u
    dY = np.sqrt((E**2 / (m**2 * c**2) - (1 - rs / Y) * (c**2 + h**2 / Y**2)))
    dphi = L / Y**2
    return [dY, dphi]
Y0 = np.array([rs,math.pi])
sol = solve_ivp(rhs, [1, 1000], Y0, method='Radau', dense_output=True)
It seems like you are looking at the spatial coordinates in an invariant plane of the geodesic equations of an object moving in Schwarzschild gravity.
One can use many different methods which preserve as much of the underlying geometric structure of the model as possible, like symplectic geometric integrators or perturbation theory. As Lutz Lehmann pointed out in the comments, the default method for solve_ivp is the Dormand-Prince (4)5 stepper, which works in extrapolation mode, that is, it advances with the order-5 step while the step-size selection is driven by the error estimate of the order-4 step.
Warning: your initial condition for Y equals the Schwarzschild radius, so these equations may fail or require special treatment (especially the time component of the equations, which you have not included here!). It may be that you have to switch to different coordinates that remove the singularity at the event horizon. Moreover, the solutions may not be periodic curves but quasi-periodic, so they may not close up nicely.
For a quick and dirty treatment, but possibly a fairly accurate one, I would differentiate the first equation
(dr / dtau)^2 = (E2_mc2 - c2) + (2*GM)/r - (h^2)/(r^2) + (r_schw*h^2)/(r^3)
with respect to the proper time tau, then cancel the common factor dr / dtau on both sides, and end up with an equation whose left-hand side is the second derivative of the radius r. Then turn this second-order equation into a pair of first-order equations for r and its rate of change v, i.e.
dphi / dtau = h / (r^2)
dr / dtau = v
dv / dtau = - GM / (r^2) + h^2 / (r^3) - 3*r_schw*(h^2) / (2*r^4)
and, from the original equation for r and its first derivative dr / dtau, calculate an initial value for the rate of change v = dr / dtau, i.e. solve for v with r = r0:
(v0)^2 = (E2_mc2 - c2) + (2*GM)/r0 - (h^2)/(r0^2) + (r_schw*h^2)/(r0^3)
Maybe some kind of python code like this may work:
import math
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
#from ode_helpers import state_plotter
# u = [phi, Y, V, t] or if time is excluded
# u = [phi, Y, V]
def f(tau, u, param):
    E2_mc2, c2, GM, h, r_schw = param
    Y = u[1]
    f_phi = h / (Y**2)
    f_Y = u[2]  # this is the dr / dtau auxiliary equation
    f_V = - GM / (Y**2) + h**2 / (Y**3) - 3*r_schw*(h**2) / (2*Y**4)
    #f_time = (E2_mc2 * Y) / (Y - r_schw)  # this is the equation of the time coordinate
    return [f_phi, f_Y, f_V]  # or [f_phi, f_Y, f_V, f_time]
# from the initial value for r = Y0 and given energy E,
# calculate the initial rate of change dr / dtau = V0
def ivp(Y0, param, sign):
    E2_mc2, c2, GM, h, r_schw = param
    V0 = math.sqrt((E2_mc2 - c2) + (2*GM)/Y0 - (h**2)/(Y0**2) + (r_schw*h**2)/(Y0**3))
    return sign*V0
G = 4.30091252525 * (pow(10, -3)) #Gravitational constant in (parsec*km^2)/(Ms*sec^2)
c = 0.0020053761 #speed of light , AU/sec
M = 170000 #mass of the central body, in solar masses
m = 10 #mass of the orbiting body, in solar masses
Lz= 0.000024 #Angular momentum
h = Lz / m #Just the constant in equation
E= 1.715488e-007 #energy
c2 = c**2
E2_mc2 = (E**2) / (c2*m**2)
GM = G*M
r_schw = 2*GM / c2
param = [E2_mc2, c2, GM, h, r_schw]
Y0 = r_schw
sign = 1 # or -1
V0 = ivp(Y0, param, sign)
tau_span = np.linspace(1, 1000, num=1000)
u0 = [math.pi, Y0, V0]
sol = solve_ivp(lambda tau, u: f(tau, u, param), [1, 1000], u0, t_eval=tau_span)
Double-check the equations; mistakes and inaccuracies are possible.
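To see whether the orbit comes out periodic, the solution can be converted to Cartesian coordinates and plotted (a sketch continuing from the code above; sol.y[0] holds phi and sol.y[1] holds Y):
phi_sol, Y_sol = sol.y[0], sol.y[1]
plt.plot(Y_sol * np.cos(phi_sol), Y_sol * np.sin(phi_sol))
plt.gca().set_aspect('equal')
plt.show()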

Generating random numbers a, b, c such that a^2 + b^2 + c^2 = 1

To do some simulations in Python, I'm trying to generate numbers a,b,c such that a^2 + b^2 + c^2 = 1. I think generating some a between 0 and 1, then some b between 0 and sqrt(1 - a^2), and then c = sqrt(1 - a^2 - b^2) would work.
Floating point values are fine, the sum of squares should be close to 1. I want to keep generating them for some iterations.
Being new to Python, I'm not really sure how to do this. Negatives are allowed.
Edit: Thanks a lot for the answers!
According to this answer at stats.stackexchange.com, you should use normally distributed values to get uniformly distributed values on a sphere. That would mean you could do:
import numpy as np
abc = np.random.normal(size=3)
a,b,c = abc/np.sqrt(sum(abc**2))
Just in case you're interested in the probability densities, I decided to do a comparison between the different approaches:
import numpy as np
import random
import math
def MSeifert():
    a = 1
    b = 1
    while a**2 + b**2 > 1:  # discard any a and b whose sum of squares already exceeds 1
        a = random.random()
        b = random.random()
    c = math.sqrt(1 - a**2 - b**2)  # fixed c
    return a, b, c

def VBB():
    x = np.random.uniform(0, 1, 3)  # random numbers in [0, 1)
    x /= np.sqrt(x[0] ** 2 + x[1] ** 2 + x[2] ** 2)
    return x[0], x[1], x[2]

def user3684792():
    theta = random.uniform(0, 0.5*np.pi)
    phi = random.uniform(0, 0.5*np.pi)
    return np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)

def JohanL():
    abc = np.random.normal(size=3)
    a, b, c = abc/np.sqrt(sum(abc**2))
    return a, b, c

def SeverinPappadeux():
    cos_th = 2.0*random.uniform(0, 1.0) - 1.0
    sin_th = math.sqrt(1.0 - cos_th*cos_th)
    phi = random.uniform(0, 2.0*math.pi)
    return sin_th * math.cos(phi), sin_th * math.sin(phi), cos_th
And plotting the distributions:
%matplotlib notebook
import matplotlib.pyplot as plt
f, axes = plt.subplots(3, 4)
for func_idx, func in enumerate([MSeifert, JohanL, user3684792, VBB]):
    axes[0, func_idx].set_title(str(func.__name__))
    res = [func() for _ in range(50000)]
    for idx in range(3):
        axes[idx, func_idx].hist([i[idx] for i in res], bins='auto')
axes[0, 0].set_ylabel('a')
axes[1, 0].set_ylabel('b')
axes[2, 0].set_ylabel('c')
plt.tight_layout()
With the result:
Explanation: The rows show the distributions for a, b and c respectively while the columns show the histograms (distributions) of the different approaches.
The only approaches that give a uniformly random distribution in the range (-1, 1) are JohanL's and Severin Pappadeux's. All other approaches have features like spikes or functional behavior in the range [0, 1). Note that these two solutions currently give values between -1 and 1, while all other approaches give values between 0 and 1.
I think it is actually a cool problem, and a nice way to do this is to just use spherical polar coordinates and generate the angles at random.
import random
import numpy as np
def random_pt():
    theta = random.uniform(0, 0.5*np.pi)
    phi = random.uniform(0, 0.5*np.pi)
    return np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)
You could do it like this:
import random
import math
def three_random_numbers_adding_to_one():
    a = 1
    b = 1
    while a**2 + b**2 > 1:  # discard any a and b whose sum of squares already exceeds 1
        a = random.random()
        b = random.random()
    c = math.sqrt(1 - a**2 - b**2)  # fixed c
    return a, b, c
a, b, c = three_random_numbers_adding_to_one()
print(a**2 + b**2 + c**2)
However, floats have only limited precision, so these won't add up to exactly 1, just approximately.
You may need to check if the numbers generated with this function are "random enough". It could be that this setup biases the "randomness".
The "right" answer depends on whether you are looking for a uniform random distribution in space, or on the surface of a sphere, or something else. If you are looking for points on the surface of a sphere, you still have to worry about the cos(theta) factor which will cause points to appear "bunched up" near the poles of the sphere. Since exact nature is not clear from your question, here is a "totally random" distribution that should work:
x = np.random.uniform(0,1,3) # random numbers in [0, 1)
x /= np.sqrt(x[0] ** 2 + x[1] ** 2 + x[2] ** 2)
Another advantage here is that since we are using numpy arrays, you can quickly scale to large sets of points too, by using x = np.random.uniform(0, 1, (3, n)) for any n.
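For example, a vectorized sketch for n points at once (note the normalization along axis 0, so each column is scaled by its own length):
import numpy as np
n = 10000
x = np.random.uniform(0, 1, (3, n))          # one point per column
x /= np.linalg.norm(x, axis=0)               # normalize each column to unit length
print(np.allclose((x**2).sum(axis=0), 1.0))  # True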
Time to add another solution, heh...
This time it is truly uniform point picking on the unit sphere - check http://mathworld.wolfram.com/SpherePointPicking.html for details
import math
import random
def random_pt():
    cos_th = 2.0*random.uniform(0, 1.0) - 1.0
    sin_th = math.sqrt(1.0 - cos_th*cos_th)
    phi = random.uniform(0, 2.0*math.pi)
    return sin_th * math.cos(phi), sin_th * math.sin(phi), cos_th
for k in range(0, 100):
    a, b, c = random_pt()
    print("{0} {1} {2} {3}".format(a, b, c, a*a + b*b + c*c))
