I am trying to fit some data with a double Gaussian function using scipy.optimize.curve_fit:
[Image: double Gaussian fit with two centroids]
def _2gaussian(x, amp1, cen1, sigma1, amp2, cen2, sigma2):
    g1 = amp1 * (1 / (sigma1 * (np.sqrt(2 * np.pi)))) * (np.exp((-1.0 / 2.0) * (((x - cen1) / sigma1) ** 2)))
    g2 = amp2 * (1 / (sigma2 * (np.sqrt(2 * np.pi)))) * (np.exp((-1.0 / 2.0) * (((x - cen2) / sigma2) ** 2)))
    return g1 + g2  # + cen2
So my problem is: my data are quite symmetric, so I was trying to use a double Gaussian fit with a common/shared centroid for the two Gaussian functions. I have tried writing cen1 instead of cen2 in the formula above (while keeping cen2 in the parameter list), and also eliminating cen2 as a parameter entirely. Neither worked: the fit I obtain is just a flat curve with a big spike in the middle. Do you have any suggestions on how to get a single shared centroid?
[Image: double Gaussian fit with cen1 only]
The lowest points in your data start at around y = 4000. But your Gaussian function has no offset term, so it will always start at y = 0. You either need to normalize your data, or add an offset like this:
def _2gaussian(x, amp1, cen1, sigma1, amp2, cen2, sigma2, offset):
    g1 = amp1 * (1 / (sigma1 * (np.sqrt(2 * np.pi)))) * (np.exp((-1.0 / 2.0) * (((x - cen1) / sigma1) ** 2)))
    g2 = amp2 * (1 / (sigma2 * (np.sqrt(2 * np.pi)))) * (np.exp((-1.0 / 2.0) * (((x - cen2) / sigma2) ** 2)))
    return g1 + g2 + offset  # + cen2
Then I highly recommend using the p0 argument of curve_fit, which lets you specify an initial guess for each parameter in your fitting function. In your case, your initial guess for offset would be about 4000.
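Putting both pieces together, here is a minimal sketch of a shared-centroid fit (x_data and y_data are placeholder names for your arrays; the p0 values are illustrative guesses, not tuned):
import numpy as np
from scipy.optimize import curve_fit

def _2gaussian_shared(x, amp1, sigma1, amp2, sigma2, cen, offset):
    # Two Gaussians that share a single centroid cen, plus a constant baseline
    g1 = amp1 / (sigma1 * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - cen) / sigma1) ** 2)
    g2 = amp2 / (sigma2 * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - cen) / sigma2) ** 2)
    return g1 + g2 + offset

# amp1, sigma1, amp2, sigma2, cen, offset
p0 = [np.ptp(y_data), 1.0, np.ptp(y_data) / 2, 5.0, x_data[np.argmax(y_data)], 4000]
popt, pcov = curve_fit(_2gaussian_shared, x_data, y_data, p0=p0)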
Related
I need to generate a double 3D gyroid structure. For this, I'm using vedo:
from matplotlib import pyplot as plt
from scipy.constants import speed_of_light
from vedo import *
import numpy as np
# Parameters
a = 5
length = 100
width = 100
height = 10
pi = np.pi
x, y, z = np.mgrid[:length, :width, :height]
def gen_strut(start, stop):
    '''Generate the strut parameter t for the gyroid surface. Create a linear gradient.'''
    strut_param = np.ones((length, 1))
    strut_param = strut_param * np.linspace(start, stop, width)
    t = np.repeat(strut_param[:, :, np.newaxis], height, axis=2)
    return t
plt = Plotter(shape=(1, 1), interactive=False, axes=3)
scale=0.5
cox = cos(scale * pi * x / a)
siy = sin(scale * pi * y / a)
coy = cos(scale * pi * y / a)
siz = sin(scale * pi * z / a)
coz = cos(scale * pi * z / a)
six = sin(scale * pi * x / a)
U1 = ((six ** 2) * (coy ** 2) +
      (siy ** 2) * (coz ** 2) +
      (siz ** 2) * (cox ** 2) +
      (2 * six * coy * siy * coz) +
      (2 * six * coy * siz * cox) +
      (2 * cox * siy * siz * coz)) - (gen_strut(0, 1.3) ** 2)
threshold = 0
iso1 = Volume(U1).isosurface(threshold).c('silver').alpha(1)
cube = TessellatedBox(n=(int(length-1), int(width-1), int(height-1)), spacing=(1, 1, 1))
iso_cut = cube.cutWithMesh(iso1).c('silver').alpha(1)
# Combine the two meshes into a single mesh
plt.at(0).show([cube, iso1], "Double Gyroid 1", resetcam=False)
plt.interactive().close()
The result looks quite good, but now I'm struggling with exporting the volume. Although vedo has over 300 examples, I did not find anything in the documentation about exporting this as a watertight volume for 3D printing. How can I achieve this?
I assume you mean that you want to extract a watertight mesh as an STL (?).
This is a non-trivial problem because it is only well defined on the subset of mesh regions where inside/outside is not ambiguous; in those cases fill_holes() seems to do a decent job.
Other cases should be dealt with "manually". E.g., you can access the boundaries with mesh.boundaries() and try to snap the vertices to a closest common vertex. This script is not a solution, but I hope it gives some ideas on how to proceed.
import numpy as np
from vedo import *  # vedo's namespace also provides the sin/cos used below
# Parameters
a = 5
length = 100
width = 100
height = 10
def gen_strut(start, stop):
    strut_param = np.ones((length, 1))
    strut_param = strut_param * np.linspace(start, stop, width)
    t = np.repeat(strut_param[:, :, np.newaxis], height, axis=2)
    return t
scale=0.5
pi = np.pi
x, y, z = np.mgrid[:length, :width, :height]
cox = cos(scale * pi * x / a)
siy = sin(scale * pi * y / a)
coy = cos(scale * pi * y / a)
siz = sin(scale * pi * z / a)
coz = cos(scale * pi * z / a)
six = sin(scale * pi * x / a)
U1 = ((six ** 2) * (coy ** 2) +
      (siy ** 2) * (coz ** 2) +
      (siz ** 2) * (cox ** 2) +
      (2 * six * coy * siy * coz) +
      (2 * six * coy * siz * cox) +
      (2 * cox * siy * siz * coz)) - (gen_strut(0, 1.3) ** 2)
iso = Volume(U1).isosurface(0).c('silver').backcolor("p5").lw(1).flat()
cube = TessellatedBox(n=(length-1, width-1, height-1)).c('red5').alpha(1)
cube.triangulate().compute_normals()
cube.cut_with_mesh(iso).compute_normals()
print(iso.boundaries(return_point_ids=True))
print(cube.boundaries(return_point_ids=True))
print(iso.boundaries().join().lines())
show(iso, cube).close()
merge(iso, cube).clean().backcolor("p5").show().close()
iso.clone().fill_holes(15).backcolor("p5").show().close()
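Once the surface is actually closed, a minimal export sketch (the file name is arbitrary; write() is vedo's generic exporter, and the STL is only printable if the mesh really is watertight) would be:
# Merge, clean and save to STL; verify watertightness before printing
closed = merge(iso, cube).clean()
write(closed, "gyroid.stl")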
I am writing a function to solve three nonlinear equations. I have the function that calculates them, and giving an initial guess works fine. But now I have a constraint on x between 1/4 and 1/3. For that, I am making two for loops: one for printing 13 different values, the other for the constraint.
However, the code does not give me a result at all:
import numpy as np
import math
from scipy.optimize import fsolve

def equations(vars):
    x, y, z = vars
    eq1 = (x / (1 - x)) - ((2.5 * np.cos(z)) / (8 * np.pi * np.sin(z) ** 2)) * (1 + design_ratio * np.tan(z))
    eq2 = (y / (1 + y)) - (2.5 / (8 * np.pi * np.cos(z))) * (1 - design_ratio * (1 / np.tan(z)))
    eq3 = np.tan(z) - (1 - x) / (1.40 * (1 + y))
    return [eq1, eq2, eq3]

n = 13
for i in range(0, n):
    for j in range(25555, 33333):
        x = 0.00001 * x
        x, y, z = fsolve(equations, (0.328, 0.048, 28))
        print(x, y, z)
Frankly, using a loop is a terrible approach to handle the box constraint 1/4 <= x <= 1/3. Since fsolve doesn't support (box) constraints, you can rewrite the problem
Solve F(x,y,z) = 0 with 1/4 <= x <= 1/3
as an equivalent minimization problem
min np.sum(F(x,y,z)**2) s.t. 1/4 <= x <= 1/3
and solve it by means of scipy.optimize.minimize like this:
import numpy as np
from scipy.optimize import minimize

# design_ratio is assumed to be defined at module level, as in the question
def F(vars):
    x, y, z = vars
    eq1 = (x / (1 - x)) - ((2.5 * np.cos(z)) / (8 * np.pi * np.sin(z) ** 2)) * (1 + design_ratio * np.tan(z))
    eq2 = (y / (1 + y)) - (2.5 / (8 * np.pi * np.cos(z))) * (1 - design_ratio * (1 / np.tan(z)))
    eq3 = np.tan(z) - (1 - x) / (1.40 * (1 + y))
    return np.array([eq1, eq2, eq3])

bounds = [(1./4, 1./3), (None, None), (None, None)]
res = minimize(lambda vars: np.sum(F(vars)**2), x0=(0.328, 0.048, 28), bounds=bounds)
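The solution can then be read off the result object; a residual near zero means the original system F(x, y, z) = 0 is satisfied within the bounds:
print(res.x)        # optimal (x, y, z)
print(res.fun)      # sum of squared residuals; ~0 means the equations are met
print(res.success)  # whether the optimizer reports convergence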
I've created a bilateral filter in Python and added numba to (in theory) run it on my GPU. Yet when I set the target to CUDA it's extremely slow and doesn't seem to use my graphics card at all, while setting it to parallel makes it very fast. The best explanation I have is that my function just isn't well optimized for a GPU because it can't apply one function to all elements at once (it needs the x and y coordinates to work). So my question is: why does this run slowly with the target set to CUDA?
import math
from numba import guvectorize, float64, prange

# diameter, sIntesity and sSpace are globals defined elsewhere in the script
@guvectorize([(float64[:, :], float64[:, :])], '(n,m)->(n,m)', target='cuda', nopython=True)
def apply_filter(img, filteredImage):
    imh, imw = img.shape[:2]
    radius = diameter // 2
    hd = int((diameter - 1) / 2)
    # print("starting work in cuda")
    for h in prange(hd, imh - hd):
        for w in prange(hd, imw - hd):
            Wp = 0
            filteredPixel = 0
            startX = h
            startY = w
            for x in prange(0, diameter):
                for y in prange(0, diameter):
                    currentX = startX - (radius - x)
                    currentY = startY - (radius - y)
                    intensityDifference = img[currentX][currentY] - img[startX][startY]
                    intensity = (1.0 / (2 * math.pi * (sIntesity ** 2)) * math.exp(-(intensityDifference ** 2) / (2 * sIntesity ** 2)))
                    distance = ((currentX - startX) ** 2 + (currentY - startY) ** 2) ** (1 / 2)
                    smoothing = (1.0 / (2 * math.pi * (sSpace ** 2))) * math.exp(-(distance ** 2) / (2 * sSpace ** 2))
                    weight = intensity * smoothing
                    filteredPixel += img[currentX][currentY] * weight
                    Wp += weight
            filteredImage[h][w] = int(round(filteredPixel / Wp))
    # print("done!")
I am trying to implement SABR (Stochastic alpha, beta, rho) in Python to calculate implied volatility. This link here explains SABR very accurately and concisely starting on slide 17: http://lesniewski.us/papers/presentations/MIT_March2014.pdf
The method seems easy enough, but the problem I am having is that I get a ZeroDivisionError every time I run the program. I believe this may be because I am choosing my initial alpha, rho, and sigma0 incorrectly during calibration. However, I cannot find online how to choose the initial values to guarantee that a minimum will be found.
Here is my code:
from math import log, sqrt
from scipy.optimize import minimize

# args = [alpha, rho, sigma0]
# The other parameters (T, K, F0, beta, marketVol) are globals
def calcImpliedVol(args):
    alpha = args[0]
    rho = args[1]
    sigma0 = args[2]
    # From MIT powerpoint, slide 21
    Fmid = (F0 + K) / 2.0
    gamma1 = 1.0 * beta / Fmid
    gamma2 = 1.0 * beta * (beta - 1) / Fmid**2
    xi = 1.0 * alpha / (sigma0 * (1 - beta)) * (F0**(1-beta) - K**(1-beta))
    e = T * alpha**2  # From MIT powerpoint, slide 19
    # From MIT powerpoint, slide 21
    impliedVol = \
        1.0 * alpha * log(F0/K) / D(rho, xi) * \
        (1 + ((2 * gamma2 - gamma1**2 + 1 / Fmid**2)/24.0 * (sigma0 * Fmid**beta / alpha)**2 + \
        (rho * gamma1 / 4.0) * (sigma0 * Fmid**beta / alpha) + ((2 - 3 * rho**2) / 24.0)) * e) - \
        marketVol
    # Returns the difference between the SABR implied vol and the market vol
    return impliedVol

# From MIT powerpoint, slide 21
def D(rho, xi):
    result = log((sqrt(1 - 2 * rho * xi + xi**2) + xi - rho) / (1 - rho))
    return result

# Find alpha, rho, sigma0 that minimize calcImpliedVol; renamed from
# optimize() so the function does not shadow scipy.optimize
def calibrate():
    result = minimize(calcImpliedVol, [alpha_init, rho_init, sigma0_init])
    return result
Thanks so, so much for any help!
This is a little late, but bounding is the right way to go:
bounds = [(0.0001, None), (-0.9999, 0.9999), (0.0001, None)]
x_solved = minimize(obj_func, initial_guess, args=(mkt_vols, F, K, tau, beta),
                    method='L-BFGS-B', bounds=bounds, tol=0.00001)
alpha represents the base volatility (the ATM vol can be used here as an initialization), so it is bounded below by 0.0. rho is the correlation between the axes, so it is bounded by -1 and 1. nu (the vol of volatility) is bounded below by 0.0.
The coefficient tuple is returned with:
x_solved.x
Work with bounding the search intervals.
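Since obj_func isn't shown above, here is a hypothetical least-squares objective matching that call signature; sabr_vol is a stand-in name for whatever SABR implied-vol function (e.g. Hagan's expansion) you calibrate against, not a real library call:
import numpy as np

def obj_func(params, mkt_vols, F, K, tau, beta):
    alpha, rho, nu = params
    # sabr_vol is hypothetical: your SABR implied-vol formula, evaluated per strike
    model_vols = np.array([sabr_vol(alpha, rho, nu, F, k, tau, beta) for k in K])
    return np.sum((model_vols - mkt_vols) ** 2)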
I'm trying to write a program that takes New York City x/y coordinates and turns them into lat/lng decimal points. I'm new to planar/globe mapping. I've included the constants that NYC provides on their website. Also, if there is a good article on how to do this, I would love to learn! Below is the program I have written, with the output commented at the bottom along with what the ideal values should be. I'm kinda just stumbling in the dark on this.
#!/usr/bin/python
from math import *
"""
Supplied by NYC
Lambert Conformal Conic:
Standard Parallel: 40.666667
Standard Parallel: 41.033333
Longitude of Central Meridian: -74.000000
Latitude of Projection Origin: 40.166667
False Easting: 984250.000000
False Northing: 0.000000
"""
x = 981106 #nyc x coord
y = 195544 #nyc y coord
a = 6378137 #' major radius of ellipsoid, map units (NAD 83)
e = 0.08181922146 #' eccentricity of ellipsoid (NAD 83)
angRad = pi/180 #' number of radians in a degree
pi4 = pi/4 #' Pi / 4
p0 = 40.166667 * angRad #' latitude of origin
p1 = 40.666667 * angRad #' latitude of first standard parallel
p2 = 41.033333 * angRad #' latitude of second standard parallel
m0 = -74.000000 * angRad #' central meridian
x0 = 984250.000000 #' False easting of central meridian, map units
m1 = cos(p1) / sqrt(1 - ((e ** 2) * sin(p1) ** 2))
m2 = cos(p2) / sqrt(1 - ((e ** 2) * sin(p2) ** 2))
t0 = tan(pi4 - (p0 / 2))
t1 = tan(pi4 - (p1 / 2))
t2 = tan(pi4 - (p2 / 2))
t0 = t0 / (((1 - (e * (sin(p0)))) / (1 + (e * (sin(p0)))))**(e / 2))
t1 = t1 / (((1 - (e * (sin(p1)))) / (1 + (e * (sin(p1)))))**(e / 2))
t2 = t2 / (((1 - (e * (sin(p2)))) / (1 + (e * (sin(p2)))))**(e / 2))
n = log(m1 / m2) / log(t1 / t2)
f = m1 / (n * (t1 ** n))
rho0 = a * f * (t0 ** n)
x = x - x0
pi2 = pi4 * 2
rho = sqrt((x ** 2) + ((rho0 - y) ** 2))
theta = atan(x / (rho0 - y))
t = (rho / (a * f)) ** (1 / n)
lon = (theta / n) + m0
x = x + x0
lat0 = pi2 - (2 * atan(t))
part1 = (1 - (e * sin(lat0))) / (1 + (e * sin(lat0)))
lat1 = pi2 - (2 * atan(t * (part1 ** (e / 2))))
while abs(lat1 - lat0) > 0.000000002:  # iterate until the latitude converges
    lat0 = lat1
    part1 = (1 - (e * sin(lat0))) / (1 + (e * sin(lat0)))
    lat1 = pi2 - (2 * atan(t * (part1 ** (e / 2))))
lat = lat1 / angRad
lon = lon / angRad
print(lat, lon)
#output : 41.9266666432 -74.0378981653
#should be 40.703778, -74.011829
I'm pretty stuck; I have a ton of these that need geocoding.
Thanks for any help!
One word answer: pyproj
>>> from pyproj import Proj
>>> pnyc = Proj(
... proj='lcc',
... datum='NAD83',
... lat_1=40.666667,
... lat_2=41.033333,
... lat_0=40.166667,
... lon_0=-74.0,
... x_0=984250.0,
... y_0=0.0)
>>> x = [981106.0]
>>> y = [195544.0]
>>> lon, lat = pnyc(x, y, inverse=True)
>>> lon, lat
([-74.037898165369015], [41.927378144152335])
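The Proj object also accepts plain scalars, so a one-off point can be converted without building lists:
>>> pnyc(981106.0, 195544.0, inverse=True)
(-74.037898165369015, 41.927378144152335)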
These formulas should help you out:
http://www.linz.govt.nz/geodetic/conversion-coordinates/projection-conversions/lambert-conformal-conic#lbl3
owww. you'd be better off using a library for this. a little searching suggests the python interface to gdal should do the job.
this question uses gdal, but not via the python api (they just call gdal via a command line from within python), so it might still help.
you might be best asking at gis stackexchange for more info.
i'm unclear where you got the code above from. if you link to it, i or someone could check for obvious implementation errors.
Rather than trying to work through all the math, you could just pick a grid over your map surface and find out the lat/long of those grid points, then use interpolation to do the conversion. Depending on the linearity of the projection it might not take many points to get good accuracy.
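A sketch of that grid idea, using pyproj (from the answer above) to get exact lat/long at the reference points and scipy to interpolate between them; the grid extents and density here are made up and should be matched to your data:
import numpy as np
from pyproj import Proj
from scipy.interpolate import LinearNDInterpolator

pnyc = Proj(proj='lcc', datum='NAD83', lat_1=40.666667, lat_2=41.033333,
            lat_0=40.166667, lon_0=-74.0, x_0=984250.0, y_0=0.0)

# Reference grid in map units (extents are illustrative)
xs, ys = np.meshgrid(np.linspace(900000, 1050000, 21),
                     np.linspace(120000, 280000, 21))
lons, lats = pnyc(xs.ravel(), ys.ravel(), inverse=True)

# Interpolators mapping map coordinates (x, y) -> lon and lat
pts = np.column_stack([xs.ravel(), ys.ravel()])
lon_at = LinearNDInterpolator(pts, lons)
lat_at = LinearNDInterpolator(pts, lats)

print(lon_at(981106, 195544), lat_at(981106, 195544))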