I am trying to calculate the FWHM of spectra using Python. The physics behind the spectral shape is a bit complicated for me, and I can't fit the data with a simple Gaussian or Lorentzian profile.
So far I have managed to interpolate the data and draw a straight line parallel to the x axis through the half maximum.
How can I find the coordinates of the intersection of the two lines on both sides of the peak?
I know that if I hover the cursor over those points it will give me the coordinates, but I want to automate this process so that it becomes much more user friendly. How can I do that?
from matplotlib import pyplot as mp
import numpy as np

def peak(x, c):
    return np.exp(-np.power(x - c, 2) / 16.0)

def lin_interp(x, y, i, half):
    # Linear interpolation between samples i and i+1 to the level `half`
    return x[i] + (x[i+1] - x[i]) * ((half - y[i]) / (y[i+1] - y[i]))

def half_max_x(x, y):
    half = max(y)/2.0
    signs = np.sign(np.add(y, -half))
    # Indices where consecutive samples straddle the half-maximum level
    zero_crossings = (signs[0:-1] != signs[1:])
    zero_crossings_i = np.where(zero_crossings)[0]
    return [lin_interp(x, y, zero_crossings_i[0], half),
            lin_interp(x, y, zero_crossings_i[1], half)]
# make some fake data
x=np.linspace(0,20,21)
y=peak(x,10)
# find the two crossing points
hmx = half_max_x(x,y)
# print the answer
fwhm = hmx[1] - hmx[0]
print("FWHM:{:.3f}".format(fwhm))
# a convincing plot
half = max(y)/2.0
mp.plot(x,y)
mp.plot(hmx, [half, half])
mp.show()
The (x, y) coordinates of the two points are (hmx[0], half) and (hmx[1], half).
In addition to the previous answer: if the baseline is not 0, use half = ((max - min) / 2) + min instead. That's what I did to solve my problem. Thanks.
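For reference, a minimal sketch of that baseline correction applied to the function above (assuming the same x, y arrays and the lin_interp helper):
def half_max_x_baseline(x, y):
    # Sketch: measure the half maximum from the baseline (minimum) instead of from zero
    baseline = min(y)
    half = (max(y) - baseline) / 2.0 + baseline
    signs = np.sign(np.add(y, -half))
    zero_crossings = (signs[0:-1] != signs[1:])
    zero_crossings_i = np.where(zero_crossings)[0]
    return [lin_interp(x, y, zero_crossings_i[0], half),
            lin_interp(x, y, zero_crossings_i[1], half)]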
I have created a simple visualization function in Python using pyplot. It takes a dataframe, a lower and upper limit, and the start/end points to visualize. Here is the full code:
import math
import matplotlib.pyplot as plt

def visualize(DATASET, DATASET_LIMITS, DATASET_START, DATASET_END):
    # DATASET = df_I
    # DATASET_LIMITS = df_I_limits
    # DATASET_START = 0
    # DATASET_END = len(df_I)
    plt.figure(figsize=(20, 10))
    values = []
    values_above = []
    for data in DATASET['Temp'].iloc[DATASET_START:DATASET_END]:
        if data < DATASET_LIMITS[0] or data > DATASET_LIMITS[1]:
            values.append(math.nan)
            values_above.append(data)
        else:
            values.append(data)
            values_above.append(math.nan)
    plt.plot(range(0, DATASET_END - DATASET_START), values, 'b-')
    plt.plot(range(0, DATASET_END - DATASET_START), values_above, 'r-')
    plt.hlines(DATASET_LIMITS[0], 0, DATASET_END - DATASET_START, colors='g', linestyles='dashed')
    plt.hlines(DATASET_LIMITS[1], 0, DATASET_END - DATASET_START, colors='g', linestyles='dashed')
    plt.show()
Here is what a generated graph looks like:
You can already see some gaps where the dashed green limit line bisects the graph, but here's a zoomed-in version to show the problem more clearly. Here is just the largest spike in the data, in the ~205000 range:
You can clearly see that the red and blue segments of the graph are not connected. I believe this is likely due to the method that I use to visualize the data, that being two arrays - values and values_above. Is there a better way of accomplishing this graphing behaviour? Or perhaps a way to get connected lines using this approach?
Since you didn't provide a dataset, I'm going to post a simple function I usually use on these occasions.
Take the x and y coordinates from your dataframe and pass them to this function as modify_coords(x, y, y_lim), where y_lim is the y coordinate of your horizontal line. The function inserts new points where the intersections happen; afterwards you can proceed with your usual plotting code.
import numpy as np

def modify_coords(x, y, y_lim):
    """If a line segment defined by `(x1, y1) -> (x2, y2)` intercepts
    a limiting y-value, divide this segment by inserting a new point
    such that y_newpoint = y_lim.
    """
    xv, yv = [x[0]], [y[0]]
    for i in range(len(x) - 1):
        xc, xn = x[i:i+2]
        yc, yn = y[i:i+2]
        if ((yc < y_lim) and (yn > y_lim)) or ((yc > y_lim) and (yn < y_lim)):
            # Linear interpolation to the exact crossing point
            xv.append(((y_lim - yc) / ((yn - yc) / (xn - xc))) + xc)
            yv.append(y_lim)
        xv.append(xn)
        yv.append(yn)
    return np.array(xv), np.array(yv)
Note:
It uses linear interpolation to add the new points.
You can see this function in action at this question/answer, where I used a much smoother function as a test. Give it a try on your data and adapt it as needed to achieve your goal.
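For instance, here is a hedged sketch of that last step for the temperature-limits use case above (the data and limit values are placeholders, not the actual df_I / df_I_limits objects): apply modify_coords once per limit, then mask with np.where so both line segments share the inserted crossing points:
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data standing in for DATASET['Temp'] and DATASET_LIMITS
y = np.random.normal(50, 10, 500)
x = np.arange(len(y), dtype=float)
lower, upper = 30.0, 70.0

# Insert the exact crossing points for both limits
x2, y2 = modify_coords(x, y, lower)
x2, y2 = modify_coords(x2, y2, upper)

# Both masks include the crossing points themselves, so the blue and red
# segments meet exactly on the limit lines
inside = (y2 >= lower) & (y2 <= upper)
outside = (y2 <= lower) | (y2 >= upper)

plt.plot(x2, np.where(inside, y2, np.nan), 'b-')
plt.plot(x2, np.where(outside, y2, np.nan), 'r-')
plt.hlines([lower, upper], x2[0], x2[-1], colors='g', linestyles='dashed')
plt.show()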
Ray Casting Algorithm
MandelBulb Ray Casting Algorithm Python Example
So, if I understand correctly, the ray casting algorithm requires that an observer be located external to the 3D fractal at which point vectors are drawn from the observer toward a point on the plane normal to the vector and intersecting the origin.
It would seem to me that this would either severely limit the rendered view of the fractal or require stereoscopic 3D reconstruction of the fractal using multiple observer positions (which seems difficult to me). Additionally, no information can be gathered regarding the internal structure of the fractal.
Other Algorithms
Alternatively, Direct Volume Rendering seems intuitive enough; however, it is computationally expensive and potentially inefficient in and of itself. Indirect Volume Rendering using an algorithm such as marching cubes also seems to involve a bit of a learning curve.
Somewhere in the PDF from the 2nd link, cut-plane views are discussed as a way to see slices of the fractal.
Question:
Why not use cut planes as a rendering method?
1) Using a modified ray tracing algorithm, say we put the observer at point Q at the origin (0, 0, 0).
2) Let us then emit rays from the origin toward the incident plane spanned by y & z point combinations that is slicing the fractal.
3) Calculate the distance to the fractal surface using the algorithm in the 1st link. If the x component of the computed distance is within a certain tolerance, dx, of the slicing plane, then the y & z coordinates, along with the x value of the slicing plane, are stored as the x, y, z coordinates. These coordinates now represent the surface at that specific slice in x.
4) Let us say that the slicing plane has one degree of freedom in the x direction. By moving the plane in its degree of freedom, we can receive yet another set of x, y, z coordinates for a given slice.
5) The final result is a calculable surface generated by the point cloud created in the previous steps.
6) Additionally, the degree of freedom of the slicing plane can be altered to create another point cloud, which can then be verified against the previous one as a means of post-processing.
Please see the image below as a visual aid (the sphere represents the MandelBulb).
Below is my Python code so far, adapted from the first link. I successfully generate the plane of points and am able to get the directions from the origin to the points on the plane. There must be something fundamentally flawed in the distance estimator function, because that's where everything breaks down and I get NaNs for the total distances.
import numpy as np

def get_plane_points(x, y_res=500, z_res=500, y_min=-10, y_max=10, z_min=-10, z_max=10):
    y = np.linspace(y_min, y_max, y_res)
    z = np.linspace(z_min, z_max, z_res)
    x, y, z = np.meshgrid(x, y, z)
    x, y, z = x.reshape(-1), y.reshape(-1), z.reshape(-1)
    P = np.vstack((x, y, z)).T
    return P

def get_directions(P):
    v = np.array(P - 0)
    v = v / np.linalg.norm(v, axis=1)[:, np.newaxis]
    return v

#jit
def DistanceEstimator(positions, plane_loc, iterations, degree):
    m = positions.shape[0]
    x, y, z = np.zeros(m), np.zeros(m), np.zeros(m)
    x0, y0, z0 = positions[:, 0], positions[:, 1], positions[:, 2]
    dr = np.zeros(m) + 1
    r = np.zeros(m)
    theta = np.zeros(m)
    phi = np.zeros(m)
    zr = np.zeros(m)
    for _ in range(iterations):
        r = np.sqrt(x * x + y * y + z * z)
        dx = .01
        x_loc = plane_loc
        idx = (x < x_loc + dx) & (x > x_loc - dx)
        dr[idx] = np.power(r[idx], degree - 1) * degree * dr[idx] + 1.0
        theta[idx] = np.arctan2(np.sqrt(x[idx] * x[idx] + y[idx] * y[idx]), z[idx])
        phi[idx] = np.arctan2(y[idx], x[idx])
        zr[idx] = r[idx] ** degree
        theta[idx] = theta[idx] * degree
        phi[idx] = phi[idx] * degree
        x[idx] = zr[idx] * np.sin(theta[idx]) * np.cos(phi[idx]) + x0[idx]
        y[idx] = zr[idx] * np.sin(theta[idx]) * np.sin(phi[idx]) + y0[idx]
        z[idx] = zr[idx] * np.cos(theta[idx]) + z0[idx]
    return 0.5 * np.log(r) * r / dr

def trace(directions, plane_location, max_steps=50, iterations=50, degree=8):
    total_distance = np.zeros(directions.shape[0])
    keep_iterations = np.ones_like(total_distance)
    steps = np.zeros_like(total_distance)
    for _ in range(max_steps):
        positions = total_distance[:, np.newaxis] * directions
        distance = DistanceEstimator(positions, plane_location, iterations, degree)
        total_distance += distance * keep_iterations
        steps += keep_iterations
    # return 1 - (steps / max_steps) ** power
    return total_distance

def run():
    plane_location = 2
    plane_points = get_plane_points(x=plane_location)
    directions = get_directions(plane_points)
    distance = trace(directions, plane_location)
    return distance
I am eager to hear thoughts on this and what limitations/issues I may encounter. Thanks in advance for the help!
If I am not mistaken, it is not impossible for this algorithm to work. There is inherent potential for problems with any assumptions made about the internal structure of the MandelBulb and what positions an observer is allowed to occupy. That is, if the observer starts out in a zone of convergence, then the ray tracing algorithm will return nothing meaningful, since the furthest distance that could be measured is 0. This is because the current ray tracing algorithm terminates upon first contact with the surface. It is likely this could be altered, however.
Rather than slicing the fractal with plane P, it might make more sense to prevent the termination of the ray upon first contact and instead terminate based on a distance that is known to lie past the surface of the MandelBulb, as sketched below.
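A minimal sketch of that idea (untested, and assuming the DistanceEstimator and directions arrays from the question): keep marching every ray until its accumulated distance exceeds a bound assumed to lie beyond the MandelBulb, instead of stopping at the first surface contact:
def trace_through(directions, plane_location, max_steps=50, iterations=50,
                  degree=8, max_distance=4.0, eps=1e-4):
    # max_distance is an assumed bound lying safely past the fractal surface
    total_distance = np.zeros(directions.shape[0])
    for _ in range(max_steps):
        positions = total_distance[:, np.newaxis] * directions
        distance = DistanceEstimator(positions, plane_location, iterations, degree)
        # Advance by at least eps so rays do not stall on the surface,
        # and freeze rays that have already travelled past max_distance
        still_marching = total_distance < max_distance
        total_distance += np.maximum(distance, eps) * still_marching
    return total_distance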
In order to draw a path between two points on a map with many points (almost two thousand), I use the following function:
import numpy as np
import matplotlib.pyplot as plt

def path_between_cities(self, cities_with_coordinates, from_to):
    from matplotlib.lines import Line2D
    # coordinates from chosen path
    x = [int(from_to[0][2]), int(from_to[1][2])]
    y = [int(from_to[0][1]), int(from_to[1][1])]
    # create line
    line = Line2D(x, y, linestyle='-', color='k')
    # create axis
    x_ = np.array((0, 2000))
    y_ = np.array((0, 6000))
    plt.plot(x_, y_, 'o')
    for item in cities_with_coordinates:
        name = item[0]
        y_coord = int(item[1])
        x_coord = int(item[2])
        plt.plot([x_coord], [y_coord], marker='o', markersize=1, color='blue')
    plt.axes().add_line(line)
    plt.axis('scaled')
    plt.show()
My goal is to extract all points (coordinates) which are found below the drawn line.
I know that you can do this using the cross product of vectors
Given a large number of vectors, what would be the most efficient way of achieving this in the context above?
Each cross product operation is still O(1). You can run the below function for all the points and see which of them are below, bringing it to a linear time check.
def ccw(a, b, c):
    """Returns a positive value if c is above the directed line ab,
    a negative value if it is below, and 0 if the points are collinear."""
    return (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y)

# a and b are the vertices of the line and c is the test point.
Unless you have some other information about the points, you would have to check each point to see if it is below a particular line.
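If the points are already stored in NumPy arrays, the same linear-time check can be done in a single vectorized call instead of a Python loop (a sketch, assuming the line endpoints a and b are (x, y) tuples and xs, ys are arrays of the city coordinates):
import numpy as np

def points_below_line(xs, ys, a, b):
    # Sign of the cross product of (b - a) with (p - a) for every point p;
    # negative means p lies clockwise of the directed line a -> b,
    # i.e. below it when b lies to the right of a.
    cross = (b[0] - a[0]) * (ys - a[1]) - (xs - a[0]) * (b[1] - a[1])
    return cross < 0

# Hypothetical usage with arrays collected in path_between_cities():
# below = points_below_line(x_coords, y_coords, (x[0], y[0]), (x[1], y[1]))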
I have a numpy array filled with intensity readings at different radii in a uniform circle (for context, this is a 1D radiative transfer project for protostellar formation models: while much better models exist, my supervisor wants me to have the experience of producing one so I understand how others work).
I want to take that 1d array, and "rotate" it through a circle, forming a 2D array of intensities that could then be shown with imshow (or, with a bit of work, aplpy). The final array needs to be 2d, and the projection needs to be Cartesian, not polar.
I can do it with nested for loops, and I can do it with lookup tables, but I have a feeling there must be a neat way of doing it in numpy or something.
Any ideas?
EDIT:
I have had to go back and recreate my (frankly horrible) mess of for loops and if statements that I had before. If I really tried, I could probably get rid of one of the loops and one of the if statements by condensing things down. However, the aim is not to make it work with for loops, but to see if there is a built-in way to rotate the array.
impB is an array that differs slightly from what I stated before. It's actually just a list of radii where particles are detected. I then bin those into radius bins to get the intensity (or frequency, if you prefer) in each bin. R is the scale factor for my radius, as I run the model in a dimensionless way. iRes is a resolution scale factor, essentially how often I want to sample my radial bins. Everything else should be clear.
import math
import numpy as np

radJ = np.ndarray(shape=(2*iRes, 2*iRes))  # Create array of 2xRadius square
for i in range(iRes):
    n = len(impB[np.where(impB[:] < ((i+1.) * (R / iRes)))])  # Count number of things within this radius +1
    m = len(impB[np.where(impB[:] <= ((i) * (R / iRes)))])  # Count number of things in this radius
    a = (((i + 1) * (R / iRes))**2 - ((i) * (R / iRes))**2) * math.pi  # A normalisation factor based on area.....dont ask
    for x in range(iRes):
        for y in range(iRes):
            # Checks for radius (annulus of bin i, in pixel units) and puts it in cartesian space
            if i**2 <= (x**2 + y**2) < (i + 1)**2:
                radJ[x+iRes, y+iRes] = (n-m) / a  # Put in actual intensity bins
                radJ[x+iRes, -y+iRes] = (n-m) / a
                radJ[-x+iRes, y+iRes] = (n-m) / a
                radJ[-x+iRes, -y+iRes] = (n-m) / a
Nested loops are a simple approach for that. With ri_data_r and y containing your radius values (distance from the middle pixel) and the array to rotate, respectively, I would suggest:
from scipy import interpolate
import numpy as np

y = np.random.rand(100)
ri_data_r = np.linspace(-len(y)/2, len(y)/2, len(y))
interpol_index = interpolate.interp1d(ri_data_r, y)
xv = np.arange(-1, 1, 0.01)  # adjust your matrix values here
X, Y = np.meshgrid(xv, xv)
profilegrid = np.ones(X.shape, float)
for i, x in enumerate(X[0, :]):
    for k, y in enumerate(Y[:, 0]):
        current_radius = np.sqrt(x ** 2 + y ** 2)
        profilegrid[i, k] = interpol_index(current_radius)
print(profilegrid)
This will give you exactly what you are looking for. You just have to take your array and calculate a symmetric array ri_data_r that has the same length as your data array and contains the distance between each data point and the middle of the array. The code above does this automatically.
I stumbled upon this question in a different context and I hope I understood it right. Here are two other ways of doing this. The first uses skimage.transform.warp with interpolation of the desired order (here we use order=0, nearest-neighbour). This method is slower but more precise and needs less memory than the second method.
The second one does not use interpolation, so it is faster, but it is also less precise and needs far more memory because it stores each 2D array containing one tilt until the end, where they are averaged with np.nanmean().
The difference between the two solutions stems from the problem of handling the center of the final image, where the tilts overlap the most: the first one would simply add up values with each tilt, ending up out of the original range. This was "solved" by clipping the matrix in each step to a global_min and global_max (see the code). The second one solves it by taking the mean of the tilts where they overlap, which forces us to use np.nan.
Please, read the Example of usage and Sanity check sections in order to understand the plot titles.
Solution 1:
import numpy as np
from skimage.transform import warp

def rotate_vector(vector, deg_angle):
    # Credit goes to skimage.transform.radon
    assert vector.ndim == 1, 'Pass only 1D vectors, e.g. use array.ravel()'
    center = vector.size // 2
    square = np.zeros((vector.size, vector.size))
    square[center, :] = vector
    rad_angle = np.deg2rad(deg_angle)
    cos_a, sin_a = np.cos(rad_angle), np.sin(rad_angle)
    R = np.array([[cos_a, sin_a, -center * (cos_a + sin_a - 1)],
                  [-sin_a, cos_a, -center * (cos_a - sin_a - 1)],
                  [0, 0, 1]])
    # Approx. 80% of time is spent in this function
    return warp(square, R, clip=False, output_shape=((vector.size, vector.size)))

def place_vectors(vectors, deg_angles):
    matrix = np.zeros((vectors.shape[-1], vectors.shape[-1]))
    global_min, global_max = 0, 0
    for i, deg_angle in enumerate(deg_angles):
        tilt = rotate_vector(vectors[i], deg_angle)
        global_min = tilt.min() if global_min > tilt.min() else global_min
        global_max = tilt.max() if global_max < tilt.max() else global_max
        matrix += tilt
    matrix = np.clip(matrix, global_min, global_max)
    return matrix
Solution 2:
Credit for the idea goes to my colleague Michael Scherbela.
import numpy as np

def rotate_vector(vector, deg_angle):
    assert vector.ndim == 1, 'Pass only 1D vectors, e.g. use array.ravel()'
    square = np.ones([vector.size, vector.size]) * np.nan
    radius = vector.size // 2
    r_values = np.linspace(-radius, radius, vector.size)
    rad_angle = np.deg2rad(deg_angle)
    ind_x = np.round(np.cos(rad_angle) * r_values + vector.size/2).astype(int)
    ind_y = np.round(np.sin(rad_angle) * r_values + vector.size/2).astype(int)
    ind_x = np.clip(ind_x, 0, vector.size-1)
    ind_y = np.clip(ind_y, 0, vector.size-1)
    square[ind_y, ind_x] = vector
    return square

def place_vectors(vectors, deg_angles):
    matrices = []
    for deg_angle, vector in zip(deg_angles, vectors):
        matrices.append(rotate_vector(vector, deg_angle))
    matrix = np.nanmean(np.array(matrices), axis=0)
    return np.nan_to_num(matrix, copy=False, nan=0.0)
Example of usage:
r = 100 # Radius of the circle, i.e. half the length of the vector
n = int(np.pi * r / 8) # Number of vectors, e.g. number of tilts in tomography
v = np.ones(2*r) # One vector, e.g. one tilt in tomography
V = np.array([v]*n) # All vectors, e.g. a sinogram in tomography
# Rotate 1D vector to a specific angle (output is 2D)
angle = 45
rotated = rotate_vector(v, angle)
# Rotate each row of a 2D array according to its angle (output is 2D)
angles = np.linspace(-90, 90, num=n, endpoint=False)
inplace = place_vectors(V, angles)
Sanity check:
These are just simple checks which by no means cover all possible edge cases. Depending on your use case you might want to extend the checks and adjust the method.
# I. Sanity check
# Assuming n <= πr and v = np.ones(2r)
# Then sum(inplace) should be approx. equal to (n * (2πr - n)) / π
# which is an area that should be covered by the tilts
desired_area = (n * (2 * np.pi * r - n)) / np.pi
covered_area = np.sum(inplace)
covered_frac = covered_area / desired_area
print(f'This method covered {covered_frac * 100:.2f}% '
'of the area which should be covered in total.')
# II. Sanity check
# Assuming n <= πr and v = np.ones(2r)
# Then a circle M with radius m <= r should be the largest circle which
# is fully covered by the vectors. I.e. its mean should be no less than 1.
# If n = πr then m = r.
# m = n / π
m = int(n / np.pi)
# Code for circular mask not included
mask = create_circular_mask(2*r, 2*r, center=None, radius=m)
m_area = np.mean(inplace[mask])
print(f'Full radius r={r}, radius m={m}, mean(M)={m_area:.4f}.')
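create_circular_mask is not included above; a minimal sketch of such a helper, assuming the (height, width, center, radius) signature implied by the call, might look like this:
def create_circular_mask(h, w, center=None, radius=None):
    # Hypothetical helper matching the call create_circular_mask(2*r, 2*r, center=None, radius=m):
    # boolean mask that is True inside a filled circle in an h x w image
    if center is None:
        center = (w // 2, h // 2)
    if radius is None:
        radius = min(center[0], center[1], w - center[0], h - center[1])
    Y, X = np.ogrid[:h, :w]
    dist = np.sqrt((X - center[0]) ** 2 + (Y - center[1]) ** 2)
    return dist <= radius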
Code for plotting:
import matplotlib.pyplot as plt
plt.figure(figsize=(16, 8))
plt.subplot(121)
rotated = np.nan_to_num(rotated) # not necessary in case of the first method
plt.title(
f'Output of rotate_vector(), angle={angle}°\n'
f'Sum is {np.sum(rotated):.2f} and should be {np.sum(v):.2f}')
plt.imshow(rotated, cmap=plt.cm.Greys_r)
plt.subplot(122)
plt.title(
f'Output of place_vectors(), r={r}, n={n}\n'
f'Covered {covered_frac * 100:.2f}% of the area which should be covered.\n'
f'Mean of the circle M is {m_area:.4f} and should be 1.0.')
plt.imshow(inplace)
circle=plt.Circle((r, r), m, color='r', fill=False)
plt.gcf().gca().add_artist(circle)
plt.gcf().gca().legend([circle], [f'Circle M (m={m})'])
I have been writing a Python script (GitHub LINK) for visualizing asteroid/comet/meteoroid orbits. The script also plots the position of planets and their orbits.
It works just right for orbits with small semi-major axis (i.e. "smaller" orbits). But when I have an orbit that goes way beyond Neptune (e.g. of a Halley-type comet), and from certain perspectives, there is a weird "wraparound" (for lack of a better word) effect.
Let me show you what I mean:
Image compilation: http://i.imgur.com/onSZG8s.png
This image shows the plot from a perspective where it does not break.
When you rotate the same plot a bit to the right, it is as if the orbit folded in half and reversed its direction!
And if you look at the plot from a great distance, you can see that the ellipse is plotted as it should be.
And here is a minimal version of the code with which the issue can be reproduced. The "wraparound" occurs only when the perspective of the camera is closely parallel with the large orbit.
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import matplotlib.pyplot as plt

def orbitalElements2Cartesian(a, e, I, peri, node, E):
    """ Convert orbital elements to Cartesian coordinates in the Solar System.
    Args:
        a (float): semi-major axis (AU)
        e (float): eccentricity
        I (float): inclination (degrees)
        peri (float): longitude of perihelion (degrees)
        node (float): longitude of ascending node (degrees)
        E (float): eccentric anomaly (radians)
    """
    # The source of equations used:
    # http://farside.ph.utexas.edu/teaching/celestial/Celestialhtml/node34.html

    # Check if the orbit is parabolic or hyperbolic
    if e >= 1:
        e = 0.99999999

    # Convert degrees to radians
    I, peri, node = map(np.radians, [I, peri, node])

    # True anomaly
    theta = 2*np.arctan(np.sqrt((1.0 + e)/(1.0 - e))*np.tan(E/2.0))

    # Distance from the Sun to the point on the orbit
    r = a*(1.0 - e*np.cos(E))

    # Cartesian coordinates
    x = r*(np.cos(node)*np.cos(peri + theta) - np.sin(node)*np.sin(peri + theta)*np.cos(I))
    y = r*(np.sin(node)*np.cos(peri + theta) + np.cos(node)*np.sin(peri + theta)*np.cos(I))
    z = r*np.sin(peri + theta)*np.sin(I)

    return x, y, z

if __name__ == '__main__':

    # Example orbital elements
    # a, e, incl, peri, node
    orb_elements = np.array([
        [2.363, 0.515, 4.0, 205.0, 346.1],
        [0.989, 0.089, 3.1, 55.6, 21.2],
        [0.898, 0.460, 1.3, 77.1, 331.2],
        [104.585332285, 0.994914, 89.3950, 130.8767, 282.4633]
    ])

    # Setup the plot
    fig = plt.figure()
    ax = fig.gca(projection='3d')

    # Eccentric anomaly (full range)
    E = np.linspace(-np.pi, np.pi, 100)

    # Plot the given orbits
    for i, orbit in enumerate(orb_elements):
        a, e, I, peri, node = orbit

        # Take extra steps in E if the orbit is very large
        if a > 50:
            E = np.linspace(-np.pi, np.pi, int((a/20.0)*100))

        # Get the orbit in the cartesian space
        x, y, z = orbitalElements2Cartesian(a, e, I, peri, node, E)

        # Plot orbits
        ax.plot(x, y, z, c='#32CD32')

    # Add limits (in AU)
    ax.set_xlim3d(-5, 5)
    ax.set_ylim3d(-5, 5)
    ax.set_zlim3d(-5, 5)

    plt.tight_layout()
    plt.show()
I am a bit dumbfounded by this and cannot seem to find a proper solution. I would greatly appreciate some help!
matplotlib isn't great for complex 3D plots in my experience (I've had similar strange behaviour with out-of-axis values). Something like mayavi could be worth considering, as it's designed for 3D plots...
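For example, a single orbit computed with orbitalElements2Cartesian() could be drawn in mayavi with just a few calls (a sketch, assuming mayavi is installed and x, y, z are the arrays from the loop above):
from mayavi import mlab

# Sketch: draw the orbit as a plain 3D line (tube_radius=None keeps it a line)
mlab.plot3d(x, y, z, color=(0.2, 0.8, 0.2), tube_radius=None)
mlab.show()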
A possible workaround is given in this blog: basically, just set out-of-axis values to np.nan for your required axis. If I add the following to your example,
for r in [x, y, z]:
    for i in np.arange(len(r)):
        # Mask any point that falls outside the +/-5 AU axis limits
        if r[i] < -5 or r[i] > 5:
            x[i] = np.nan
            y[i] = np.nan
            z[i] = np.nan
it removes the wraparound.
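The same masking can also be done without the explicit loop, for example (a sketch, assuming x, y, z are NumPy arrays and the axis limits are +/-5 AU as in the question):
# Mask every coordinate of any point lying outside the plotted cube
out_of_bounds = (np.abs(x) > 5) | (np.abs(y) > 5) | (np.abs(z) > 5)
x[out_of_bounds] = np.nan
y[out_of_bounds] = np.nan
z[out_of_bounds] = np.nan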
I had similar issues and wanted to make something a bit more user friendly. I moved all of the functions in this library over to JavaScript and created a WebGL interface in Three.js which lets you do what you want here, and it also plots the location of the asteroid/comet with animation via time functions. You just need a web browser to use it. Check it out :)
http://rankinstudio.com/asteroids/asteroids.html