Unexpected behaviour when plotting 3D scenes with mayavi - python

I want to plot a sphere with latitude circles in 3D using mayavi. But I don't want the latitudes at equidistant angular spacing; instead they should be arranged according to this: https://en.wikipedia.org/wiki/Spherical_segment
This should result in spherical segments that all have the same surface area.
So far, so good. Let theta be the polar angle and phi the azimuthal angle. Then I have the following code:
import numpy as np
from mayavi import mlab

## Create a sphere
r = 1.0
pi = np.pi
cos = np.cos
sin = np.sin
arccos = np.arccos

phi, theta = np.mgrid[-0.5*pi:0.5*pi:101j, 0:1*pi:101j]
x = r*sin(phi)*cos(theta)
y = r*sin(phi)*sin(theta)
z = r*cos(phi)

## Basic settings mlab
mlab.figure(1, bgcolor=(1, 1, 1), fgcolor=(0, 0, 0), size=(500, 500))
mlab.clf()
mlab.mesh(x, y, z, color=(0.9, 0., 0.), opacity=0.3)

phi1 = np.linspace(0, 2 * np.pi, 100)
theta1 = arccos(np.linspace(0, 1, 11))
for i in range(len(theta1)):
    x_pol = np.cos(phi1) * np.cos(theta1[i])
    y_pol = np.sin(phi1) * np.cos(theta1[i])
    z_pol = np.ones_like(phi1) * np.sin(theta1[i])
    mlab.plot3d(x_pol, y_pol, z_pol, color=(0, 0, 0), opacity=0.2, tube_radius=None)
mlab.show()
The result is shown in image0 below.
As you can see, the segments are not correctly arranged. So I changed the order in theta1:
theta1=arccos(np.linspace(1,0,11))
The result is shown in image1 below. As you can see, the arrangement of the segments didn't change.
So why is that? Arranging the angular spacing from 0...1 should give a different result than arranging it from 1...0, but apparently it doesn't.
Does anyone have a clue what I did wrong?
Thanks
image0
image1

The two ranges contain the same values; the segments are the same, just traversed in reversed order. Since the loop draws every circle in theta1, the order in which they are drawn makes no difference to the final image.
See the values of theta:
In [1]: np.flip(np.linspace(0,1,11), 0), np.linspace(1,0,11)
Out[1]:
(array([ 1. , 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0. ]),
array([ 1. , 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0. ]))

Thanks for your reply. I am not sure whether I got your point. In the first case theta looks like this:
In [1]: np.arccos(np.linspace(1,0,11))
Out[1]:
array([0. , 0.45102681, 0.64350111, 0.79539883, 0.92729522,
1.04719755, 1.15927948, 1.26610367, 1.36943841, 1.47062891,
1.57079633])
In the second case it looks like:
In [1]: np.arccos(np.linspace(0,1,11))
Out[1]:
array([1.57079633, 1.47062891, 1.36943841, 1.26610367, 1.15927948,
1.04719755, 0.92729522, 0.79539883, 0.64350111, 0.45102681,
0. ])
So to me, it seems correct.

OK,
sometimes it takes me quite a while ^^
I figured out what I did wrong. I simply changed
np.arccos(np.linspace(0,1,11))
to
np.pi/2 - np.arccos(np.linspace(0,1,11))
which produces the correct output. The reason: the loop places each circle at z_pol = sin(theta1[i]), and sin(pi/2 - arccos(t)) = t, so the circles now sit at equally spaced heights. Equal spacing in z is exactly the equal-area condition for spherical segments, since a segment's area depends only on its height.
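As a quick check of that reasoning (a small snippet, assuming only NumPy):
import numpy as np
theta1 = np.pi/2 - np.arccos(np.linspace(0, 1, 11))
print(np.sin(theta1))  # ~[0.0, 0.1, 0.2, ..., 1.0]: equally spaced circle heights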
correct Image
Well, sometimes you (I) don't see the forest for the trees... ^^
Greetings...

Related

Fit data to integral using quad - magnetic hysteresis loop

I'm having trouble getting a fit to converge: depending on my start parameters, it either fails to converge or gives a NaN error. I'm using quad to integrate and fitting with lmfit. Any help is appreciated.
I'm fitting my data to a Langevin function, weighted by a log-normal distribution. Stack Overflow won't let me post an image of the function because of my reputation score, but it's in the code below.
I'm plugging in H (field) and fitting for Ms, Dm, and sigma, while mu_0, Msb, kb, and T are all constants.
Here's what I'm working with, using some example data:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy
from numpy import vectorize, sqrt, log, inf, exp, pi, tanh
from scipy.constants import k, mu_0
from lmfit import Parameters
from scipy.integrate import quad
x_data = [-7.0, -6.5, -6.0, -5.5, -5.0, -4.5, -4.0, -3.5, -3.0, -2.5, -2.0, -1.5, -1.0,
-0.95, -0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4,
-0.35, -0.3, -0.25, -0.2, -0.1,-0.05, 3e-6, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3,
0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 1.0,
1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0]
y_data = [-61.6, -61.6, -61.6, -61.5, -61.5, -61.4, -61.3, -61.2, -61.1, -61.0, -60.8,
-60.4, -59.8, -59.8, -59.7, -59.5, -59.4, -59.3, -59.1, -58.9, -58.7, -58.4,
-58.1, -57.7, -57.2, -56.5, -55.6, -54.3, -52.2, -48.7, -41.8, -27.3, 2.6,
30.1, 43.1, 49.3, 52.6, 54.5, 55.8, 56.6, 57.3, 57.8, 58.2, 58.5, 58.7, 59.0,
59.1, 59.3, 59.5, 59.6, 59.7, 59.8, 59.9, 60.5, 60.8, 61.0, 61.2, 61.3, 61.4,
61.4, 61.5, 61.6, 61.6, 61.7, 61.7]
params = Parameters()
params.add('Dm' , value = 8e-9 , vary = True, min = 0, max = 1) # magnetic diameter (m)
params.add('s' , value = 0.4 , vary = True, min = 0.0, max = 10.0) # sigma, unitless
params.add('Ms' , value = 61.0 , vary = True) #, min = 30.0 , max = 100.0) # saturation magnetization (emu/g)
params.add('Msb', value = 446000 * 1e-16, vary = False) # Bulk magnetite saturation magnetization (A/m)
params.add('T' , value = 300 , vary = False) # Temperature (K)
def Mag(x_data, params):
    v = params.valuesdict() # put parameters into a dictionary
    def numerator(D, x_data, params):
        # langevin
        a_numerator = pi * v['Msb'] * x_data * D**3
        a_denominator = 6 * k * v['T']
        a = a_numerator / a_denominator
        langevin = (1 / tanh(a)) - (1 / a)
        # PDF
        exp_num = (log(D / v['Dm']))**2
        exp_denom = 2 * v['s']
        exponential = exp(-exp_num / exp_denom)
        pdf = exponential / (sqrt(2 * pi) * v['s'] * D)
        return D**3 * langevin * pdf
    def denominator(D, params):
        # PDF
        exp_num = (log(D / v['Dm']))**2
        exp_denom = 2 * v['s']
        exponential = exp(-exp_num / exp_denom)
        pdf = exponential / (sqrt(2 * pi) * v['s'] * D)
        return D**3 * pdf
    # return the ratio of the integrals
    return v['Ms'] * quad(numerator, 0, inf, args=(x_data, params))[0] / quad(denominator, 0, inf, args=(params,))[0]

# vectorize over the field values
vcurve = np.vectorize(Mag, excluded=set([1]))
plt.plot(x_data, vcurve(x_data, params))
plt.scatter(x_data, y_data)
This plots the data and the fit equation with start parameters. I have an issue somewhere with units in the Langevin and have to multiply the numerator by 1e-16 to get the curve looking correct...
from lmfit import minimize, Minimizer, Parameters, Parameter, report_fit
def fit_function(params, x_data, y_data):
    model1 = vcurve(x_data, params)
    resid1 = y_data - model1
    return resid1
minner = Minimizer(fit_function, params, fcn_args=(x_data, y_data))
result = minner.minimize()
report_fit(result)
result.params.pretty_print()
Depending on the sigma (s) value I choose, which should be able to range from 0 to infinity, the integral won't converge, giving the following error:
/var/folders/pz/tbd_dths0_512bm6l43vpg680000gp/T/ipykernel_68003/1413445460.py:39: IntegrationWarning: The algorithm does not converge. Roundoff error is detected
in the extrapolation table. It is assumed that the requested tolerance
cannot be achieved, and that the returned result (if full_output = 1) is
the best which can be obtained.
return v['Ms'] * quad(numerator, 0, inf, args=(x_data, params))[0] / quad(denominator, 0, inf,args=(params))[0]
I'm stuck on why the fit isn't converging. Is this an issue because I'm using very small numbers or is this an issue with quad/lmfit? Thank you!
Having parameters that are closer to order 1 (say, between 1e-7 and 1e7) is a good idea. If you expect a parameter to be in the 1e-9 (or 1e-16!) range, you could definitely scale it (in the fitting function) so that the value passed back and forth by the fitting algorithm is closer to order 1. But I sort of doubt that is the main problem you are having.
It looks to me like your Mag function is not very sensitive to the values of your variable parameters Dm and s. I am not 100% sure why that is. Have you verified that calculations using your "Mag" or "vcurve" do what you expect them to do?
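Following up on the scaling idea above, a minimal sketch of what that could look like. The parameter name Dm_nm and the toy model are hypothetical, not from the original code; the point is only that the optimizer sees a value near 8 while the physics uses 8e-9 m:
from lmfit import Parameters

params = Parameters()
params.add('Dm_nm', value=8.0, vary=True, min=0.1, max=100.0)  # hypothetical: diameter in nanometers

def model(x, params):
    v = params.valuesdict()
    Dm = v['Dm_nm'] * 1e-9  # convert back to meters inside the model
    return x * Dm           # stand-in for the real Langevin/log-normal model

print(model(1.0, params))  # 8e-09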

Constrain specific values in Scipy curve fitting

I have what may be quite a basic question, but quick googling was not able to solve it.
I have some experimental data that I need to fit with an equation like
a * exp(-x/t)
and, when more components are needed, the expression is
a * exp(-x/t1) + b * exp(-x/t2) + ... + n * exp(-x/tn)
for n components.
Right now I have the following code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

x = np.array([0.0001, 0.0004, 0.0006, 0.0008, 0.001, 0.0015, 0.002, 0.004, 0.006, 0.008, 0.01, 0.05, 0.1, 0.2, 0.5, 0.6, 0.8, 1, 1.5, 2, 4, 6, 8])
y1 = np.array([5176350.00, 5144208.69, 4998297.04, 4787100.79, 4555731.93, 4030741.17, 3637802.79, 2949911.45, 2816472.26, 2831962.09, 2833262.53, 2815205.34, 2610685.14, 3581566.94, 1820610.74, 2100882.80, 1762737.50, 1558251.40, 997259.21, 977892.00, 518709.91, 309594.88, 186184.52])
y2 = np.array([441983.26, 423371.31, 399370.82, 390603.58, 378351.08, 356511.93, 349582.29, 346425.39, 351191.31, 329363.40, 325154.86, 352906.21, 333150.81, 301613.81, 94043.05, 100885.77, 86193.40, 75548.26, 27958.11, 20262.68, 27945.10])
y = y1  # fit one dataset at a time

def fitcurve(x, a, b, t1, t2):
    return a * np.exp(-x / t1) + b * np.exp(-x / t2)

popt, pcov = curve_fit(fitcurve, x, y)
print('a = ', popt[0], 'b = ', popt[1], 't1 = ', popt[2], 't2 = ', popt[3])
plt.plot(x, y, 'bo')
plt.plot(x, fitcurve(x, *popt))
Something important is that a + b + ... + n is equal to 1; the coefficients are basically the percentage of each component. Ideally, I want to fit 1, 2, 3, and 4 components and see which provides the better fit.
I am afraid that your data cannot be fitted with a simple sum of exponential functions. Did you draw the points on a graph to see the shape of the curve?
This looks more like a function of the logistic kind (but not exactly logistic) than a sum of exponentials.
I could provide some advice on fitting a sum of exponentials (even with the condition on the sum of the coefficients; see the sketch just below), but it would be of no use with your data. Of course, if you have other data that is well suited to a sum of exponentials, I would be pleased to show how to proceed.
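For the sum-to-one condition specifically, a common trick (my illustration, not part of the comment above) is to fit only n-1 amplitudes and define the last one as 1 minus their sum. Shown here for two components with hypothetical toy data:
import numpy as np
from scipy.optimize import curve_fit

def two_exp_constrained(x, a, t1, t2):
    # the second amplitude is forced to (1 - a), so the amplitudes sum to 1
    return a * np.exp(-x / t1) + (1 - a) * np.exp(-x / t2)

# toy data generated from known parameters, just to show the call
x = np.linspace(0.01, 5, 50)
y = 0.3 * np.exp(-x / 0.2) + 0.7 * np.exp(-x / 2.0)

popt, pcov = curve_fit(two_exp_constrained, x, y, p0=[0.5, 0.1, 1.0],
                       bounds=([0, 0, 0], [1, np.inf, np.inf]))
print(popt)  # should recover approximately [0.3, 0.2, 2.0]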
I am not going into the model-fitting procedure, but what you can do is accept a variable number of parameters (*args) and then try the fit for various numbers of exponentials. You can make use of NumPy's broadcasting to achieve this.
EDIT: you have to take care of the number of elements in args; only even numbers work now. I leave it up to you to edit that part in (trivial).
Target
We want to fit $$\sum_{i=1}^{N} a_i \exp(-b_i x)$$ for a variable number of terms $N$.
Implementation:
import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize, ndimage

x = np.array([0.0001, 0.0004, 0.0006, 0.0008, 0.0010, 0.0015, 0.0020, 0.0040, 0.0060, 0.0080, 0.0100, 0.0500, 0.1000, 0.2000, 0.5000, 0.6000, 0.8000, 1.0000, 1.5000, 2.0000, 4.0000, 6.0000, 8.0000, 10.0000])
y = np.array([416312.6500, 387276.6400, 364153.7600, 350981.7000, 336813.8800, 314992.6100, 310430.4600, 318255.1700, 318487.1700, 291768.9700, 276617.3000, 305250.2100, 272001.3500, 260540.5600, 173677.1900, 155821.5500, 151502.9700, 83559.9000, 256097.3600, 20761.8400, 1.0000, 1.0000, 1.0000, 1.0000])

# variable-args fit: the first half of args are amplitudes, the second half are rates
def fitcurve(x, *args):
    args = np.array(args)
    half = len(args) // 2
    y = args[:half] * np.exp(-x[:, None] * args[half:])
    return y.sum(-1)

# data seems to contain an outlier?
# y = ndimage.median_filter(y, 5)

popt, pcov = optimize.curve_fit(fitcurve, x, y,
                                bounds=(0, np.inf),
                                p0=np.ones(6),  # set variable size
                                maxfev=1000,
                                )

fig, ax = plt.subplots()
ax.plot(x, y, 'bo')
# ax.set_yscale('log')
ax.set_xscale('symlog')
ax.plot(x, fitcurve(x, *popt))
fig.show()
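To try a different number of exponential terms, only the length of p0 changes (it must be even: amplitudes first, then rates). For example, a four-term fit:
popt, pcov = optimize.curve_fit(fitcurve, x, y, bounds=(0, np.inf), p0=np.ones(8), maxfev=1000)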

how to sample HSV space with perceptual uniformity

I can sample HSV space (with fixed s and v) like so:
import numpy as np

hue_gradient = np.linspace(0, 360, 16)  # sample 16 equally spread hues
hsv = np.ones(shape=(1, len(hue_gradient), 3), dtype=float) * 0.75  # saturation and value fixed at 0.75
hsv[:, :, 0] = hue_gradient  # hue in the first channel
hsv
array([[[ 0. , 0.75, 0.75],
[ 24. , 0.75, 0.75],
[ 48. , 0.75, 0.75],
[ 72. , 0.75, 0.75],
[ 96. , 0.75, 0.75],
[120. , 0.75, 0.75],
[144. , 0.75, 0.75],
[168. , 0.75, 0.75],
[192. , 0.75, 0.75],
[216. , 0.75, 0.75],
[240. , 0.75, 0.75],
[264. , 0.75, 0.75],
[288. , 0.75, 0.75],
[312. , 0.75, 0.75],
[336. , 0.75, 0.75],
[360. , 0.75, 0.75]]])
However, these colors are not perceptually uniform.
I can confirm this by computing deltaE2000 differences (delta_e_cie2000 from the colormath package). The result looks like this:
The values are deltaE values; colors 0-15 correspond to the hue angle positions. As you can see, some color differences are below the perceptual threshold.
So the question is: is it possible to uniformly sample HSV space with s and v fixed? If not, how can I sample the space so that the colors are arranged as neighbors by hue similarity, with s and v varying as little as they have to?
I tried a few things, but in the end this seemed to work. It spaces the hue values uniformly and then nudges them until they are perceptually uniform.
from colormath import color_objects, color_diff, color_conversions

SAT = 1.0
VAL = 1.0
COLOR_COUNT = 16
NUDGE_SIZE = 0.2

def hue_to_lab(hue):
    return color_conversions.convert_color(
        color_objects.HSVColor(hue, SAT, VAL), color_objects.LabColor
    )

def get_equally_spaced(number, iters=100):
    # Create hues with evenly spaced values in hue space
    hues = [360 * x / number for x in range(number)]
    for _ in range(iters):
        # Convert hues to CIELAB colours
        cols = [hue_to_lab(h) for h in hues]
        # Work out the perceptual differences between pairs of adjacent
        # colours
        deltas = [
            color_diff.delta_e_cie2000(cols[i], cols[i - 1]) for i in range(len(cols))
        ]
        # Nudge each hue towards whichever adjacent colour is furthest
        # away perceptually
        nudges = [
            (deltas[(i + 1) % len(deltas)] - deltas[i]) * NUDGE_SIZE
            for i in range(len(deltas))
        ]
        hues = [(h + d) % 360 for (h, d) in zip(hues, nudges)]
    return hues

print(get_equally_spaced(COLOR_COUNT, iters=1000))
NUDGE_SIZE can mess it up if set wrong (changing it to 2 here results in nothing resembling a rainbow) and I think the best value depends on how many iterations you’re doing and how many colours you’re generating. The delta_e_cie2000 values for adjacent colours (with the settings given) are [16.290288769191324, 16.290288766871242, 16.290288753399196, 16.290288726186013, 16.290288645469946, 16.290288040904777, 16.290288035037598, 16.290288051426675, 16.290288079361915, 16.290288122430887, 16.290288180738187, 16.290288265350803, 16.290288469198916, 16.29028866254433, 16.2902887136652], which are pretty uniform: I think iters=1000 is overkill for this few colours. I’m using normal lists here, but it should translate to NumPy arrays—and probably run a bit faster.
The algorithm works like this:
1. Start with a naïve evenly spaced set of hues.
2. Calculate the perceptual differences between adjacent pairs of colours.
3. Move each hue slightly towards whichever of its neighbours is most different to it perceptually. The size of this movement is proportional to NUDGE_SIZE.
4. Repeat steps 2-3 until the hues have been nudged iters times.
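To actually display the nudged hues, you can convert them back to RGB. A minimal sketch, assuming the same colormath package as above (sRGBColor and get_rgb_hex() are colormath calls, but treat the exact usage as an assumption):
from colormath import color_objects, color_conversions

SAT = 1.0
VAL = 1.0

def hue_to_hex(hue):
    # HSV (fixed SAT/VAL as above) -> sRGB hex string for plotting or CSS
    rgb = color_conversions.convert_color(
        color_objects.HSVColor(hue, SAT, VAL), color_objects.sRGBColor
    )
    return rgb.get_rgb_hex()

hues = get_equally_spaced(16, iters=1000)
print([hue_to_hex(h) for h in hues])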

Minimization of an equation using Python

I have four vectors.
x = [0.4, -0.3, 0.9]
y1 = [0.3, 1, 0]
y2 = [1, -0.9, 0.5]
y3 = [0.6, 0.01, 0.8]
I need to minimize the following expression (the norm of the residual, as in the answer code below):
||x - a*y1 - b*y2 - g*y3||
where 0 <= a, b, g <= 1. I have tried to use scipy.minimize but I could not understand how it can be applied to this equation. Is there an optimization library I can use, or an easier way to do this in Python?
My ultimate goal is to find the values of a, b, g between 0 and 1 that give the minimum value, given these four vectors as input.
Edit 0: I fixed the problem by using a Bounds instance. The array x in the result should be what you are looking for. Here is the output:
fun: 0.34189582276366093
hess_inv: <3x3 LbfgsInvHessProduct with dtype=float64>
jac: array([ 6.91014296e-01, 3.49720253e-07, -2.88657986e-07])
message: b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
nfev: 40
nit: 8
status: 0
success: True
x: array([0. , 0.15928136, 0.79907217])
I worked on it a little. I got stuck with an error, but I feel like I am on the right track. Here is the code:
import numpy as np
from scipy.optimize import Bounds, minimize

def cost_function(ini):
    x = np.array([0.4, -0.3, 0.9])
    y1 = np.array([0.3, 1, 0])
    y2 = np.array([1, -0.9, 0.5])
    y3 = np.array([0.6, 0.01, 0.8])
    L = np.linalg.norm(np.transpose(x) - np.dot(ini[0], y1) - np.dot(ini[1], y2) - np.dot(ini[2], y3))
    return L

ini = np.random.rand(3)
min_b = np.zeros(3)
max_b = np.ones(3)
bnds = Bounds(min_b, max_b)
print(minimize(cost_function, x0=ini, bounds=bnds))
However, I am getting the error ValueError: length of x0 != length of bounds, although the lengths are equal. I could not find a solution; maybe you will. Good luck! Let me know if you find a solution and whether it works!
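One workaround worth trying (an assumption about the cause, since this looks like a SciPy version quirk): skip the Bounds instance and pass a plain sequence of (min, max) pairs, which minimize also accepts. A minimal sketch:
import numpy as np
from scipy.optimize import minimize

def cost_function(ini):
    x = np.array([0.4, -0.3, 0.9])
    y1 = np.array([0.3, 1, 0])
    y2 = np.array([1, -0.9, 0.5])
    y3 = np.array([0.6, 0.01, 0.8])
    return np.linalg.norm(x - ini[0]*y1 - ini[1]*y2 - ini[2]*y3)

res = minimize(cost_function, x0=np.random.rand(3), bounds=[(0, 1)] * 3)  # one (min, max) pair per variable
print(res.x)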

Plotting several graphs with values extracted from one array

I have a NumPy array, let's say one with 4 rows and 6 columns (the number of columns is always even):
m = np.round(np.random.rand(4, 6), 2)
array([[ 0.99, 0.48, 0.05, 0.26, 0.92, 0.44],
[ 0.81, 0.54, 0.19, 0.38, 0.5 , 0.02],
[ 0.11, 0.96, 0.04, 0.69, 0.78, 0.31],
[ 0.5 , 0.53, 0.94, 0.77, 0.6 , 0.75]])
I now want to plot graphs according to the column pairs, in this case
Graph 1: x-values=m[:,1] and y-values=m[:,0]
Graph 2: x-values=m[:,3] and y-values=m[:,2]
Graph 3: x-values=m[:,5] and y-values=m[:,4]
The first two columns are basically a pair of values, the next two are another pair, and the last two are a pair as well.
All the graphs should be in the same plot!
I need a general solution for plotting multiple graphs like this, for an arbitrary but EVEN number of columns. Something like a loop!
Hope somebody can help me :)
You can loop over the column pairs:
import matplotlib.pyplot as plt

i = 1
while i < len(m[0]):
    x = m[:, i]
    y = m[:, i - 1]
    plt.plot(x, y)
    plt.savefig('placeholderName_%d.png' % i)
    plt.close()
    i = i + 2
Note that I'm starting at 1 and incrementing by two; this conforms to the example you presented.
This gives terrible results with the m array you specified, but if that was just a sample and your data is more realistic, the following should do:
for i in range(m.shape[1] // 2):
    plt.figure()
    plt.plot(m[:, 2 * i], m[:, 2 * i + 1])
If you want all the plots in the same figure, just move the plt.figure() out of the loop:
plt.figure()
for i in range(m.shape[1] // 2):
    plt.plot(m[:, 2 * i], m[:, 2 * i + 1])
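If the pairs should stay distinguishable in that shared figure, a small extension adds a legend (the label text is just an example; this variant takes x from the odd column and y from the even column, as in the question):
import numpy as np
import matplotlib.pyplot as plt

m = np.round(np.random.rand(4, 6), 2)

plt.figure()
for i in range(m.shape[1] // 2):
    plt.plot(m[:, 2 * i + 1], m[:, 2 * i], label='pair %d' % (i + 1))
plt.legend()
plt.show()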
