Finding the radius of curvature of a given polar curve in Python

We are given a curve l/r = 1 + e*cos(theta), and we have to find the radius of curvature at any point (r, theta) in polar form using the formula
ROC = (r**2 + r1**2)**(3/2) / (r**2 + 2*r1**2 - r*r2), where r1 = dr/dtheta and r2 = d^2r/dtheta^2.
My try -
from sympy import symbols, cos, diff, simplify, Rational, pprint

theta, l, e = symbols('theta l e')
r = l / (1 + e*cos(theta))          # the conic l/r = 1 + e*cos(theta)
r1 = diff(r, theta)                 # dr/dtheta
r2 = diff(r, theta, 2)              # d^2r/dtheta^2 (the original attempt differentiated only once)
roc = (r**2 + r1**2)**Rational(3, 2) / (r**2 + 2*r1**2 - r*r2)
pprint(simplify(roc))
Expected Output -
(2 * r**(3/2)) / sqrt(a)
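For reference, the expected form is the standard result for the parabola. A small check, under the assumption that the expected output corresponds to e = 1 and l = 2*a, reusing roc, l, e and theta from the attempt above:

a = symbols('a', positive=True)
roc_parabola = simplify(roc.subs([(e, 1), (l, 2*a)]))
pprint(roc_parabola)  # should be equivalent to 2*r**(3/2)/sqrt(a), with r = 2*a/(1 + cos(theta))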

Related

Sympy integration running into error, but Mathematica works fine

I'm trying to integrate an equation using sympy, but the evaluation keeps erroring out with:
TypeError: cannot add <class 'sympy.matrices.immutable.ImmutableDenseMatrix'> and <class 'sympy.core.numbers.Zero'>
However, when I integrate the same equation in Mathematica, I get the result I'm looking for. I'm hoping someone can explain what is happening under the hood with sympy and how its integration differs from Mathematica's.
Because there are a lot of equations involved, I am only going to post the Python representation of the equations which are used to make up the variables called in the integration. To know for certain that the integration is an issue, I have independently checked each equation involved and made sure the results match between Python and Mathematica. I can confirm that only the integration fails.
Integration variable setup in Python:
import numpy as np
import sympy as sym
from sympy import I, Matrix, integrate
from sympy.integrals import Integral
from sympy.functions import exp
from sympy.physics.vector import cross, dot
c_speed = 299792458
lambda_u = 0.01
k_u = (2 * np.pi)/lambda_u
K_0 = 0.2
gamma_0 = 2.0
beta_0 = sym.sqrt(1 - 1 / gamma_0**2)
t = sym.symbols('t')
t_start = (-2 * lambda_u) / c_speed
t_end = (3 * lambda_u) / c_speed
beta_x_of_t = sym.Piecewise( (0.0, sym.Or(t < t_start, t > t_end)),
((-1 * K_0)/gamma_0 * sym.sin(k_u * c_speed * t), sym.And(t_start <= t, t <= t_end)) )
beta_z_of_t = sym.Piecewise( (beta_0, sym.Or(t < t_start, t > t_end)),
(beta_0 * (1 - K_0**2/ (4 * beta_0 * gamma_0**2)) + K_0**2 / (4 * beta_0 * gamma_0**2) * sym.sin(2 * k_u * c_speed * t), sym.And(t_start <= t, t <= t_end)) )
beta_xp_of_t = sym.diff(beta_x_of_t, t)
beta_zp_of_t = sym.diff(beta_z_of_t, t)
Python Integration:
n = Matrix([(0, 0, 1)])
beta = Matrix([(beta_x_of_t, 0, beta_z_of_t)])
betap = Matrix([(beta_xp_of_t, 0, beta_zp_of_t)])
def rad(n, beta, betap):
    return integrate(n.cross((n-beta).cross(betap)), (t, t_start, t_end))
rad(n, beta, betap)
# Output is the error above
Mathematica integration:
rad[n_, beta_, betap_] :=
  NIntegrate[
    Cross[n, Cross[n - beta, betap]],
    {t, tstart, tend}, AccuracyGoal -> 3]
rad[{0, 0, 1}, {betax[t], 0, betaz[t]}, {betaxp[t], 0, betazp[t]}]
# Output is {0.00150421, 0., 0.}
I did see similar questions such as this one and this question, but I'm not entirely sure whether they are relevant here (the second link seems to be a closer match, but not quite a fit).
Because you're working with imprecise floats (and even use np.pi instead of sym.pi) while sympy tries to find exact symbolic solutions, sympy's expressions get rather wild, with small constants like 6.67e-11 mixed with much larger values.
Here is an attempt to use more symbolic expressions, and only integrate the x-coordinate of the function.
import sympy as sym
from sympy import I, Matrix, integrate, S
from sympy.integrals import Integral
from sympy.functions import exp
from sympy.physics.vector import cross, dot
c_speed = 299792458
lambda_u = S(1) / 100
k_u = (2 * sym.pi) / lambda_u
K_0 = S(2) / 100
gamma_0 = 2
beta_0 = sym.sqrt(1 - S(1) / gamma_0 ** 2)
t = sym.symbols('t')
t_start = (-2 * lambda_u) / c_speed
t_end = (3 * lambda_u) / c_speed
beta_x_of_t = sym.Piecewise((0, sym.Or(t < t_start, t > t_end)),
((-1 * K_0) / gamma_0 * sym.sin(k_u * c_speed * t), sym.And(t_start <= t, t <= t_end)))
beta_z_of_t = sym.Piecewise((beta_0, sym.Or(t < t_start, t > t_end)),
(beta_0 * (1 - K_0 ** 2 / (4 * beta_0 * gamma_0 ** 2)) + K_0 ** 2 / (
4 * beta_0 * gamma_0 ** 2) * sym.sin(2 * k_u * c_speed * t),
sym.And(t_start <= t, t <= t_end)))
beta_xp_of_t = sym.diff(beta_x_of_t, t)
beta_zp_of_t = sym.diff(beta_z_of_t, t)
n = Matrix([(0, 0, 1)])
beta = Matrix([(beta_x_of_t, 0, beta_z_of_t)])
betap = Matrix([(beta_xp_of_t, 0, beta_zp_of_t)])
func = n.cross((n - beta).cross(betap))[0]
print(integrate(func, (t, t_start, t_end)))
This doesn't give an error, and outputs just zero.
sym.lambdify can be used to convert the function to a numpy function, and then to plot it and integrate it numerically.
func_np = sym.lambdify(t, func)
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad
t_start_np = float(t_start.evalf())
t_end_np = float(t_end.evalf())
ts = np.linspace(t_start_np, t_end_np, 100000)
plt.plot(ts, func_np(ts))
print("approximate integral (np.trapz):", np.trapz(ts, func_np(ts)))
print("approximate integral (scipy's quad):", quad(func_np, t_start_np, t_end_np))
This outputs:
approximate integral (np.trapz): 0.04209721470548062
approximate integral (scipy's quad): (-2.3852447794681098e-18, 5.516333374450447e-10)
Note the huge values on the y-axis and the small values on the x-axis. These values, as well as Mathematica's, could well be rounding errors.
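If the full vector result (as returned by the Mathematica call) is wanted, one possible follow-up is to lambdify every component of the symbolic cross product and integrate each one numerically with quad. A minimal sketch, assuming the matrices n, beta, betap and the float limits t_start_np, t_end_np from the snippets above are still in scope:

# Sketch: integrate each component of n x ((n - beta) x betap) numerically.
vec = n.cross((n - beta).cross(betap))
vec_np = [sym.lambdify(t, comp) for comp in vec]
result = [quad(f, t_start_np, t_end_np)[0] for f in vec_np]
print(result)  # y and z are symbolically zero; the x entry matches the quad result above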

Python: how to integrate functions with two unknown parameters numerically

I have two functions:
rho(u) = np.exp( (-2.0 / 0.2) * (u**0.2 - 1.0) )
psi( w*(x-u) ) = (1/(4.0 * math.sqrt(np.pi))) * np.exp(- ((w * (x-u))**2) / 4.0) * (2.0 - (w * (x-u))**2)
I want to integrate rho(u) * psi( w*(x-u) ) with respect to u, so that the result of the integral is a function of w and x.
Here's my Python code snippet as I try to solve this integral.
import numpy as np
import math
import matplotlib.pyplot as plt
from scipy import integrate
x = np.linspace(0,10,1000)
w = np.linspace(0,10,500)
u = np.linspace(0,10,1000)
rho = np.exp((-2.0/0.2)*(u**0.2-1.0))
value = np.zeros((500,1000),dtype="float32")
# Integrate the products of rho with
# (1/(4.0*math.sqrt(np.pi)))*np.exp(- ((w[i]*(x[j]-u))**2) / 4.0)*(2.0 - (w[i]*(x[j]-u))**2)
for i in range(len(w)):
    for j in range(len(x)):
        value[i,j] = value[i,j] + integrate.simps(rho*(1/(4.0*math.sqrt(np.pi)))*np.exp(- ((w[i]*(x[j]-u))**2) / 4.0)*(2.0 - (w[i]*(x[j]-u))**2), u)
plt.imshow(value,origin='lower')
plt.colorbar()
As illustrated above, I used nested for loops to do the integration, which we all know is inefficient. So I want to ask whether there are methods that avoid the for loops.
Here is a possibility using scipy.integrate.quad_vec. It executes in 6 seconds on my machine, which I believe is acceptable. It is true, however, that I have used a step of 0.1 only for both x and w, but such a resolution seems to be a good compromise on a single core.
from functools import partial
import matplotlib.pyplot as plt
from numpy import empty, exp, linspace, pi, sqrt
from scipy.integrate import quad_vec
from time import perf_counter
def func(u, x):
    rho = exp(-10 * (u ** 0.2 - 1))
    var = w * (x - u)
    psi = exp(-var ** 2 / 4) * (2 - var ** 2) / 4 / sqrt(pi)
    return rho * psi
begin = perf_counter()
x = linspace(0, 10, 101)
w = linspace(0, 10, 101)
res = empty((x.size, w.size))
for i, xVal in enumerate(x):
    res[i], err = quad_vec(partial(func, x=xVal), 0, 10)
print(f'{perf_counter() - begin} s')
plt.contourf(w, x, res)
plt.colorbar()
plt.xlabel('w')
plt.ylabel('x')
plt.show()
UPDATE
I had not realised it, but quad_vec can also work with a multi-dimensional array. The updated approach below doubles the resolution in both x and w while keeping the execution time at around 7 seconds. Additionally, there are no more visible for loops.
import matplotlib.pyplot as plt
from numpy import exp, mgrid, pi, sqrt
from scipy.integrate import quad_vec
from time import perf_counter
def func(u):
    rho = exp(-10 * (u ** 0.2 - 1))
    var = w * (x - u)
    psi = exp(-var ** 2 / 4) * (2 - var ** 2) / 4 / sqrt(pi)
    return rho * psi
begin = perf_counter()
x, w = mgrid[0:10:201j, 0:10:201j]
res, err = quad_vec(func, 0, 10)
print(f'{perf_counter() - begin} s')
plt.contourf(w, x, res)
plt.colorbar()
plt.xlabel('w')
plt.ylabel('x')
plt.show()
Addressing the comment
Just add the following lines before plt.show() to have both axes scale logarithmically.
plt.gca().set_xlim(0.05, 10)
plt.gca().set_ylim(0.05, 10)
plt.gca().set_xscale('log')
plt.gca().set_yscale('log')
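For comparison, the original fixed-sample approach can also be vectorized directly with broadcasting, integrating along the u axis in a single call. A sketch with illustrative grid sizes (this trades memory for the loops, roughly 80 MB per intermediate array at these sizes):

import numpy as np
from scipy.integrate import simpson  # older scipy versions expose this as simps

x = np.linspace(0, 10, 101)[:, None, None]
w = np.linspace(0, 10, 101)[None, :, None]
u = np.linspace(0, 10, 1001)[None, None, :]

rho = np.exp(-10.0 * (u**0.2 - 1.0))
var = w * (x - u)
psi = np.exp(-var**2 / 4.0) * (2.0 - var**2) / (4.0 * np.sqrt(np.pi))

# Simpson's rule along the u axis gives a (len(x), len(w)) result in one call.
res = simpson(rho * psi, x=u.ravel(), axis=-1)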

Converting XYZ coordinates to longitude/latitude in Python

I've got some XYZ coordinates in kilometers (obtained using WGS) with the origin at the center of the Earth. Is it possible to convert these into latitude and longitude?
In addition: how can I do this quickly in Python?
It's simply the reverse of this question: Converting from longitude/latitude to Cartesian coordinates
Based on daphshez's answer, you can use this code.
Here, x, y and z are in km and R is the approximate radius of the Earth, also in km.
import numpy as np
R = 6371
lat = np.degrees(np.arcsin(z/R))
lon = np.degrees(np.arctan2(y, x))
This is how you do it, taking both radii of the ellipsoid into account. Based on
https://gist.github.com/govert/1b373696c9a27ff4c72a
and verified.
import math
x = float(4333216) #in meters
y = float(3193635) #in meters
z = float(3375365) #in meters
a = 6378137.0 # WGS84 semi-major axis, in meters
b = 6356752.314245 # WGS84 semi-minor axis, in meters
f = (a - b) / a
f_inv = 1.0 / f
e_sq = f * (2 - f)
eps = e_sq / (1.0 - e_sq)
p = math.sqrt(x * x + y * y)
q = math.atan2((z * a), (p * b))
sin_q = math.sin(q)
cos_q = math.cos(q)
sin_q_3 = sin_q * sin_q * sin_q
cos_q_3 = cos_q * cos_q * cos_q
phi = math.atan2((z + eps * b * sin_q_3), (p - e_sq * a * cos_q_3)) # geodetic latitude (radians)
lam = math.atan2(y, x) # longitude (radians)
v = a / math.sqrt(1.0 - e_sq * math.sin(phi) * math.sin(phi)) # prime vertical radius of curvature
h = (p / math.cos(phi)) - v # height above the ellipsoid (meters)
lat = math.degrees(phi)
lon = math.degrees(lam)
print(lat,lon,h)
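If an extra dependency is acceptable, the same conversion can be delegated to a library. A hedged sketch using pyproj, assuming the EPSG codes 4978 (WGS84 geocentric/ECEF, in meters) and 4979 (WGS84 geographic 3D):

from pyproj import Transformer

# ECEF x, y, z in meters -> longitude, latitude in degrees and ellipsoidal height in meters
ecef_to_geodetic = Transformer.from_crs("EPSG:4978", "EPSG:4979", always_xy=True)
lon, lat, h = ecef_to_geodetic.transform(4333216.0, 3193635.0, 3375365.0)
print(lat, lon, h)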

Using functions to create Electric Field array for 2d Density Plot and 3d Surface Plot

Below is my code. I'm supposed to use the electric field equation and the given variables to create a density plot and a surface plot of the equation. I'm getting "invalid dimensions for image data", probably because the function E takes multiple variables and tries to display them all as multiple dimensions. I know the issue is that I have to turn E into an array so that the density plot can be displayed, but I cannot figure out how to do so. Please help.
import numpy as np
from numpy import array,empty,linspace,exp,cos,sqrt,pi
import matplotlib.pyplot as plt
lam = 500 #Nanometers
x = linspace(-10*lam,10*lam,10)
z = linspace(-20*lam,20*lam,10)
w0 = lam
E0 = 5
def E(E0,w0,x,z,lam):
    E = np.zeros((len(x),len(z)))
    for i in z:
        for j in x:
            E = ((E0 * w0) / w(z,w0,zR(w0,lam)))
            E = E * exp((-r(x)**2) / (w(z,w0,zR(w0,lam)))**2)
            E = E * cos((2 * pi / lam) * (z + (r(x)**2 / (2 * Rz(z,zR,lam)))))
    return E

def r(x):
    r = sqrt(x**2)
    return r

def w(z,w0,lam):
    w = w0 * sqrt(1 + (z / zR(w0,lam))**2)
    return w

def Rz(z,w0,lam):
    Rz = z * (1 + (zR(w0,lam) / z)**2)
    return Rz

def zR(w0,lam):
    zR = pi * lam
    return zR
p = E(E0,w0,x,z,lam)
plt.imshow(p)
It took me way too much time and thinking, but I finally figured it out after searching for examples of code for similar problems. The correct code looks like this:
import numpy as np
from numpy import array,empty,linspace,exp,cos,sqrt,pi
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
lam = 500*10**-9 # 500 nm, expressed in meters
x1 = linspace(-10*lam,10*lam,100)
z1 = linspace(-20*lam,20*lam,100)
[x,y] = np.meshgrid(x1,z1)
w0 = lam
E0 = 5
r = sqrt(x**2)
zR = pi * lam
w = w0 * sqrt(1 + (y / zR)**2)
Rz = y * (1 + (zR / y)**2)
E = (E0 * w0) / w
E = E * exp((-r**2 / w**2))
E = E * cos((2 * pi / lam) * (y + (r**2 / (2 * Rz))))
def field(x,y):
    lam = 500*10**-9
    k = (5 * lam) / lam * sqrt(1 + (y / (pi*lam))**2)
    k *= exp(((-sqrt(x**2)**2 / (lam * sqrt(1 + (y / pi * lam)**2))**2)))
    k *= cos((2 / lam) * (y + ((sqrt(x**2)**2 / (2 * y * (1 + (pi * lam / y)**2))))))
    return k
#Density Plot
f = field(x,y)
plt.imshow(f)
plt.show()
#Surface Plot
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
surf = ax.plot_surface(x,y,E,rstride=1,cstride=1)
plt.show()
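One small follow-up, not part of the original answer: passing extent to imshow keeps the density plot's axes in physical units instead of pixel indices. A sketch reusing f, x1 and z1 from the code above:

plt.imshow(f, origin='lower',
           extent=[x1.min(), x1.max(), z1.min(), z1.max()],
           aspect='auto')
plt.colorbar()
plt.xlabel('x (m)')
plt.ylabel('z (m)')
plt.show()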

Smoothing signal - convolution

I have an iterative model in Python which generates a signal using a function that contains a derivative. As the model iterates, the signal becomes noisy. I suspect it may be an issue with computing the numerical derivative. I've tried to smooth this by applying a low-pass filter, convolving the noisy signal with a Gaussian kernel. I use the code snippet:
nw = 256
std = 40
window = gaussian(nw, std, sym=True)
filtered = convolve(current, window, mode='same') / np.sum(window)
where current is my signal, and gaussian and convolve have been imported from scipy. This seems to give a slight improvement, and the first 2 or 3 iterations appear very smooth. However, after that the signal becomes extremely noisy again, despite the fact that the low-pass filter is positioned inside the iterative loop.
Can anyone suggest where I might be going wrong or how I could better tackle this problem? Thanks.
EDIT: As suggested, I've included the code I'm using below. At 5 iterations the noise on the signal is clearly apparent.
import numpy as np
from scipy import special
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from scipy.signal import convolve
from scipy.signal import gaussian
# Constants
B = 426400E-9 # tesla
R = 71723E3
Rkm = R / 1000.
Omega = 1.75e-4 #8.913E-4 # rads/s
period = (2. * np.pi / Omega) / 3600. # Gets period in hours
Bj = 2.0 * B
mdot = 1000.
sigmapstar = 0.05
# Create rhoe array
rho0 = 5.* R
rho1 = 100. * R
rhoe = np.linspace(rho0, rho1, int(2e5))
# Define flux function and z component of equatorial field strength
Fe = B * R**3 / rhoe
Bze = B * (R/rhoe)**3
def derivs(u, rhoe, p):
    """Computes the derivative"""
    wOmegaJ = u
    Bj, sigmapstar, mdot, B, R = p
    # Compute the derivative of w/omegaJ wrt rhoe (**Fe and Bjz have been subbed)
    dwOmegaJ = (((8.0*np.pi*sigmapstar*B**2 * (R**6)) / (mdot * rhoe**5))
                *(1.0-wOmegaJ) - (2.*wOmegaJ/rhoe))
    res = dwOmegaJ
    return res
its = 5 # number of iterations to perform
i = 0
# Loop to iterate
while i < its:
    # Define the initial condition of rigid corotation
    wOmegaJ_0 = 1
    params = [Bj, sigmapstar, mdot, B, R]
    init = wOmegaJ_0
    # Compute numerical solution to Hill eqn
    u = odeint(derivs, init, rhoe, args=(params,))
    wOmega = u[:,0]
    # Calculate I_rho
    i_rho = 8. * np.pi * sigmapstar * Omega * Fe * ( 1. - wOmega)
    dx = rhoe[1] - rhoe[0]
    differential = np.gradient(i_rho, dx)
    jpara = 1. * differential / (4 * np.pi * rhoe * Bze )
    jpari = 2. * B * jpara
    # Remove infinity and NaN values
    jpari[~np.isfinite(jpari)] = 0.0
    # Convolve to smooth curve
    nw = 256
    std = 40
    window = gaussian(nw, std, sym=True)
    filtered = convolve(jpari, window, mode='same') / np.sum(window)
    jpari = filtered
    # Pedersen conductivity as function of jpari
    sigmapstar0 = 0.05
    jstar = 0.01e-6
    jstarstar = 0.25e-6
    s1 = 0.1e6 # (Am^-2)^-1
    s2 = 9.9e6 # (Am^-2)^-1
    n = 8.
    # Calculate new sigmapstar (realistic conductivity)
    sigmapstarNew = sigmapstar0 + 0.5 * (s1 + s2/(1 + (jpari/jstarstar)**n)**(1./n)) * (np.sqrt(jpari**2 + jstar**2) + jpari)
    diff = np.abs(sigmapstar - sigmapstarNew) / sigmapstar * 100
    diff = max(diff)
    sigmapstar = 0.5 * sigmapstar + 0.5 * sigmapstarNew # Weighted averaging
    i += 1
    print(diff)
# Plot jpari
ax = plt.subplot(111)
ax.plot(rhoe/R, jpari * 1e6)
ax.axhline(0, ls=':')
ax.set_xlabel(r'$\rho_e / R_{UCD}$')
ax.set_ylabel(r'$j_{\parallel i} $ / $ \mu$ A m$^{-2}$')
ax.set_xlim([0,80])
ax.set_ylim(-0.01,0.01)
plt.locator_params(nbins=5)
plt.draw()
plt.show()
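For reference, here is the smoothing step in isolation: a minimal, self-contained sketch of the same Gaussian-kernel convolution applied to synthetic noisy data, using the same illustrative window length and width as above.

import numpy as np
from scipy.signal import convolve
from scipy.signal.windows import gaussian  # older scipy exposes this as scipy.signal.gaussian

rng = np.random.default_rng(0)
xs = np.linspace(0, 1, 4000)
noisy = np.sin(2 * np.pi * 3 * xs) + 0.2 * rng.standard_normal(xs.size)

# A normalised Gaussian window acts as a low-pass (weighted moving average) filter.
window = gaussian(256, 40, sym=True)
smoothed = convolve(noisy, window, mode='same') / np.sum(window)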
