Generate profiles through a 2D array at an angle without altering pixels - python

I'd like to plot two profiles through the highest intensity point in a 2D numpy array, which is an image of a blob (i.e. a line through the semi-major axis, and another line through the semi-minor axis). The blob is rotated at an angle theta counterclockwise from the standard x-axis and is asymmetric.
It is a 600x600 array with a maximum intensity of 1 (at only one pixel), located right at the center, (300, 300). The rotation angle from the x-axis (which gives the direction of the semi-major axis) is theta = 89.54 degrees. I do not want to use scipy.ndimage.rotate because it uses spline interpolation and I do not want to change any of my pixel values, but I suppose a nearest-neighbor interpolation method would be okay.
I tried generating lines corresponding to the major and minor axes across the image, but the result was not right at all (the peak was far less than 1), so maybe I did something wrong. The code for this is below:
import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage

def profiles_at_angle(image, axis, theta):
    theta = np.deg2rad(theta)
    if axis == 'major':
        x_0, y_0 = 0, 300 - 300*np.tan(theta)
        x_1, y_1 = 599, 300 + 300*np.tan(theta)
    elif axis == 'minor':
        x_0, y_0 = 300 - 300*np.tan(theta), 599
        x_1, y_1 = 300 + 300*np.tan(theta), -599
    num = 600
    x, y = np.linspace(x_0, x_1, num), np.linspace(y_0, y_1, num)
    z = ndimage.map_coordinates(image, np.vstack((x, y)))
    fig, axes = plt.subplots(nrows=2)
    axes[0].imshow(image, cmap='gray')
    axes[0].axis('image')
    axes[1].plot(z)
    plt.xlim(250, 350)
    plt.show()
profiles_at_angle(image, 'major', theta)
Did I do something obviously wrong in my code above? Or how else can I accomplish this? Thank you.
Edit: Here are some example images. Sorry for the bad quality; my browser crashed every time I tried uploading them anywhere so I had to take photos of the screen.
Figure 1: This is the result of my code above, which is clearly wrong since the peak should be at 1. I'm not sure what I did wrong though.
Figure 2: I made this plot below by just taking the profiles through the standard x and y axes, ignoring any rotation (this only looks good coincidentally because the real angle of rotation is so close to 90 degrees, so I was able to just switch the labels and get this). I want my result to look something like this, but taking the correct rotation angle into account.
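For reference, the axis-aligned profiles in that figure are just row and column slices through the peak pixel, with no interpolation at all (a minimal sketch, assuming image and the peak at (300, 300) as described above):
row_profile = image[300, :]   # slice along the x-axis through the peak
col_profile = image[:, 300]   # slice along the y-axis through the peak
plt.plot(row_profile, label='x profile')
plt.plot(col_profile, label='y profile')
plt.xlim(250, 350)
plt.legend()
plt.show()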
Edit: It could be useful to run tests on this method using data very much like my own (it's a 2D Gaussian with nearly the same parameters):
image = np.random.random((600, 600))

def generate(data_set):
    xvec = np.arange(0, np.shape(data_set)[1], 1)
    yvec = np.arange(0, np.shape(data_set)[0], 1)
    X, Y = np.meshgrid(xvec, yvec)
    return X, Y

def gaussian_func(xy, x0, y0, sigma_x, sigma_y, amp, theta, offset):
    x, y = xy
    a = (np.cos(theta))**2/(2*sigma_x**2) + (np.sin(theta))**2/(2*sigma_y**2)
    b = -np.sin(2*theta)/(4*sigma_x**2) + np.sin(2*theta)/(4*sigma_y**2)
    c = (np.sin(theta))**2/(2*sigma_x**2) + (np.cos(theta))**2/(2*sigma_y**2)
    inner = a * (x-x0)**2
    inner += 2*b*(x-x0)*(y-y0)
    inner += c * (y-y0)**2
    return (offset + amp * np.exp(-inner)).ravel()

xx, yy = generate(image)
image = gaussian_func((xx.ravel(), yy.ravel()), 300, 300, 5, 4, 1, 1.56, 0)
image = np.reshape(image, (600, 600))

This should do it for you. You just did not properly compute your lines.
import numpy as np
import scipy.ndimage
import matplotlib.pyplot as plt

theta = 65
peak = np.argwhere(image == 1)[0]

x = np.linspace(peak[0] - 100, peak[0] + 100, 1000)

# line through the peak at angle theta (major axis) ...
y = lambda x: (x - peak[1]) * np.tan(np.deg2rad(theta)) + peak[0]
y_maj = np.linspace(y(peak[1] - 100), y(peak[1] + 100), 1000)
# ... and the perpendicular line through the peak (minor axis)
y = lambda x: -(x - peak[1]) / np.tan(np.deg2rad(theta)) + peak[0]
y_min = np.linspace(y(peak[1] - 100), y(peak[1] + 100), 1000)
del y

z_min = scipy.ndimage.map_coordinates(image, np.vstack((x, y_min)))
z_maj = scipy.ndimage.map_coordinates(image, np.vstack((x, y_maj)))

fig, axes = plt.subplots(nrows=2)
axes[0].imshow(image)
axes[0].plot(x, y_maj)
axes[0].plot(x, y_min)
axes[0].axis('image')
axes[1].plot(z_min)
axes[1].plot(z_maj)
plt.show()
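If you also want to honour the "no interpolation of pixel values" requirement from the question, note that map_coordinates takes an order argument (spline order, 3 by default); order=0 gives nearest-neighbour sampling, so the profile only ever contains values that already exist in the array:
z_min = scipy.ndimage.map_coordinates(image, np.vstack((x, y_min)), order=0)
z_maj = scipy.ndimage.map_coordinates(image, np.vstack((x, y_maj)), order=0)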

Related

Inverse of numpy.gradient function

I need to create a function which would be the inverse of the np.gradient function, where the Vx, Vy arrays (velocity component vectors) are the input and the output is an array of anti-derivatives (Arrival Time) at the data points x, y.
I have data on a (x,y) grid with scalar values (time) at each point.
I have used the numpy gradient function and linear interpolation to determine the gradient vector Velocity (Vx,Vy) at each point (See below).
I have achieved this by:
#LinearTriInterpolator applied to a delaunay triangular mesh
LTI= LinearTriInterpolator(masked_triang, time_array)
#Gradient requested at the mesh nodes:
(Vx, Vy) = LTI.gradient(triang.x, triang.y)
The first image below shows the velocity vectors at each point, and the point labels represent the time value which formed the derivatives (Vx,Vy)
The next image shows the resultant scalar value of the derivatives (Vx,Vy) plotted as a colored contour graph with associated node labels.
So my challenge is:
I need to reverse the process!
Using the gradient vectors (Vx,Vy) or the resultant scalar value to determine the original Time-Value at that point.
Is this possible?
Knowing that the numpy.gradient function is computed using second order accurate central differences in the interior points and either first or second order accurate one-sided (forward or backward) differences at the boundaries, I am sure there is a function which would reverse this process.
I was thinking that taking a line integral from the original point (t=0 at x1,y1) to any point (xi,yi) over the Vx,Vy plane would give me the sum of the velocity components. I could then divide this value by the distance between the two points to get the time taken.
Would this approach work? And if so, which numpy integrate function would be best applied?
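For intuition, in one dimension inverting np.gradient is just a cumulative integration plus the unknown starting value; a quick sanity check of that idea (the 2D case with scattered data is what I am struggling with):
import numpy as np
x = np.linspace(0, 10, 101)
t = x**2                                 # a known "time" profile
v = np.gradient(t, x)                    # forward step: the gradient
# trapezoidal cumulative integral of the gradient recovers t up to small error
t_rec = t[0] + np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * np.diff(x))))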
An example of my data can be found here [http://www.filedropper.com/calculatearrivaltimefromgradientvalues060820]
Your help would be greatly appreciated
EDIT:
Maybe this simplified drawing might help understand where I'm trying to get to..
EDIT:
Thanks to #Aguy who has contributed to this code. I have tried to get a more accurate representation using a meshgrid of spacing 0.5 x 0.5 m and calculating the gradient at each mesh point; however, I am not able to integrate it properly. I also have some edge effects which are affecting the results and which I don't know how to correct.
import numpy as np
from scipy import interpolate
from matplotlib import pyplot
from mpl_toolkits.mplot3d import Axes3D

# Create mesh grid with a spacing of 0.5 x 0.5
stepx = 0.5
stepy = 0.5
xx = np.arange(min(x), max(x), stepx)
yy = np.arange(min(y), max(y), stepy)
xgrid, ygrid = np.meshgrid(xx, yy)
grid_z1 = interpolate.griddata((x, y), Arrival_Time, (xgrid, ygrid), method='linear')  # Interpolate the Time values

# Format data
X = np.ravel(xgrid)
Y = np.ravel(ygrid)
zs = np.ravel(grid_z1)
Z = zs.reshape(X.shape)

# Calculate gradient
(dx, dy) = np.gradient(grid_z1)  # Find gradient for points on meshgrid
Velocity_dx = dx / stepx  # velocity ms/m
Velocity_dy = dy / stepx  # velocity ms/m
Resultant = (Velocity_dx**2 + Velocity_dy**2)**0.5  # Resultant scalar value ms/m
Resultant = np.ravel(Resultant)

# Plot original data F(X,Y) on the meshgrid
fig = pyplot.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(x, y, Arrival_Time, color='r')
ax.plot_trisurf(X, Y, Z)
ax.set_xlabel('X-Coordinates')
ax.set_ylabel('Y-Coordinates')
ax.set_zlabel('Time (ms)')
pyplot.show()

# Plot the derivative of f'(X,Y) on the meshgrid
fig = pyplot.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(X, Y, Resultant, color='r', s=0.2)
ax.plot_trisurf(X, Y, Resultant)
ax.set_xlabel('X-Coordinates')
ax.set_ylabel('Y-Coordinates')
ax.set_zlabel('Velocity (ms/m)')
pyplot.show()

# Integrate to compare with the original data input
dxintegral = np.nancumsum(Velocity_dx, axis=1) * stepx
dyintegral = np.nancumsum(Velocity_dy, axis=0) * stepy
valintegral = np.ma.zeros(dxintegral.shape)
for i in range(len(yy)):
    for j in range(len(xx)):
        valintegral[i, j] = np.ma.sum([dxintegral[0, len(xx) // 2],
                                       dyintegral[i, len(yy) // 2],
                                       dxintegral[i, j],
                                       -dxintegral[i, len(xx) // 2]])
valintegral = valintegral * np.isfinite(dxintegral)
Now np.gradient is applied at every mesh node: (dx,dy) = np.gradient(grid_z1)
Now in my process I would analyse the gradient values above and make some adjustments (there are some unusual edge effects being created which I need to rectify), and would then integrate the values to get back to a surface which would be very similar to f(x,y) shown above.
I need some help adjusting the integration function:
# Integrate to compare with the original data input
dxintegral = np.nancumsum(Velocity_dx, axis=1) * stepx
dyintegral = np.nancumsum(Velocity_dy, axis=0) * stepy
valintegral = np.ma.zeros(dxintegral.shape)
for i in range(len(yy)):
    for j in range(len(xx)):
        valintegral[i, j] = np.ma.sum([dxintegral[0, len(xx) // 2],
                                       dyintegral[i, len(yy) // 2],
                                       dxintegral[i, j],
                                       -dxintegral[i, len(xx) // 2]])
valintegral = valintegral * np.isfinite(dxintegral)
And now I need to calculate the new 'Time' values at the original (x,y) point locations.
UPDATE (08-09-20): I am getting some promising results using the help from #Aguy. The results can be seen below (with the blue contours representing the original data, and the red contours representing the integrated values).
I am still working on an integration approach which can remove the inaccuracies at the areas of min(y) and max(y).
from matplotlib.tri import (Triangulation, UniformTriRefiner, CubicTriInterpolator,
                            LinearTriInterpolator, TriInterpolator, TriAnalyzer)
import pandas as pd
from scipy.interpolate import griddata
import matplotlib.pyplot as plt
import numpy as np
from scipy import interpolate

# -------------------------------------------------------------------------
# STEP 1: Import data from Excel file, and set variables
# -------------------------------------------------------------------------
df_initial = pd.read_excel(
    r'C:\Users\morga\PycharmProjects\venv\Development\Trial'
    r'.xlsx')
Input data can be found here: link
df_initial = df_initial.sort_values(by='Delay', ascending=True)  # Update dataframe and sort by Delay
x = df_initial['X'].to_numpy()
y = df_initial['Y'].to_numpy()
Arrival_Time = df_initial['Delay'].to_numpy()

# Create mesh grid with a spacing of 0.5 x 0.5
stepx = 0.5
stepy = 0.5
xx = np.arange(min(x), max(x), stepx)
yy = np.arange(min(y), max(y), stepy)
xgrid, ygrid = np.meshgrid(xx, yy)
grid_z1 = interpolate.griddata((x, y), Arrival_Time, (xgrid, ygrid), method='linear')  # Interpolate the Time values

# Calculate gradient (velocity ms/m)
(dy, dx) = np.gradient(grid_z1)  # Find gradient for points on meshgrid
Velocity_dx = dx / stepx  # x velocity component ms/m
Velocity_dy = dy / stepx  # y velocity component ms/m

# Integrate to compare with the original data input
dxintegral = np.nancumsum(Velocity_dx, axis=1) * stepx
dyintegral = np.nancumsum(Velocity_dy, axis=0) * stepy
valintegral = np.ma.zeros(dxintegral.shape)  # Array of zeros with the same shape as dxintegral
for i in range(len(yy)):
    for j in range(len(xx)):
        valintegral[i, j] = np.ma.sum(
            [dxintegral[0, len(xx) // 2], dyintegral[i, len(xx) // 2],
             dxintegral[i, j], -dxintegral[i, len(xx) // 2]])
valintegral[np.isnan(dx)] = np.nan
min_value = np.nanmin(valintegral)
valintegral = valintegral + (min_value * -1)

# Plot results
fig = plt.figure()
ax = fig.add_subplot()
ax.scatter(x, y, color='black', s=7, zorder=3)
ax.set_xlabel('X-Coordinates')
ax.set_ylabel('Y-Coordinates')
ax.contour(xgrid, ygrid, valintegral, levels=50, colors='red', zorder=2)
ax.contour(xgrid, ygrid, grid_z1, levels=50, colors='blue', zorder=1)
ax.set_aspect('equal')
plt.show()
TL;DR
You have multiple challenges to address in this issue, mainly:
Potential reconstruction (scalar field) from its gradient (vector field)
But also:
Observations in a concave hull with a non-rectangular grid;
Numerical 2D line integration and numerical inaccuracy.
It seems it can be solved by choosing an ad hoc interpolant and a smart way to integrate (as pointed out by #Aguy).
MCVE
First, let's build an MCVE to highlight the key points mentioned above.
Dataset
We recreate a scalar field and its gradient.
import numpy as np
from scipy import interpolate
import matplotlib.pyplot as plt

def f(x, y):
    return x**2 + x*y + 2*y + 1

Nx, Ny = 21, 17
xl = np.linspace(-3, 3, Nx)
yl = np.linspace(-2, 2, Ny)
X, Y = np.meshgrid(xl, yl)
Z = f(X, Y)
zl = np.arange(np.floor(Z.min()), np.ceil(Z.max()) + 1, 2)
dZdy, dZdx = np.gradient(Z, yl, xl, edge_order=1)
V = np.hypot(dZdx, dZdy)
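For reference, and as a quick check of the finite-difference gradient above, the analytic gradient of this test field is df/dx = 2x + y and df/dy = x + 2:
dZdx_exact = 2*X + Y   # analytic df/dx of f(x, y) = x**2 + x*y + 2*y + 1
dZdy_exact = X + 2     # analytic df/dy
# interior points of np.gradient agree closely; only the first-order edges differ
print(np.abs(dZdx - dZdx_exact).max(), np.abs(dZdy - dZdy_exact).max())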
The scalar field looks like:
axe = plt.axes(projection='3d')
axe.plot_surface(X, Y, Z, cmap='jet', alpha=0.5)
axe.view_init(elev=25, azim=-45)
And, the vector field looks like:
axe = plt.contour(X, Y, Z, zl, cmap='jet')
axe.axes.quiver(X, Y, dZdx, dZdy, V, units='x', pivot='tip', cmap='jet')
axe.axes.set_aspect('equal')
axe.axes.grid()
Indeed, the gradient is normal to the potential levels. We also plot the gradient magnitude:
axe = plt.contour(X, Y, V, 10, cmap='jet')
axe.axes.set_aspect('equal')
axe.axes.grid()
Raw field reconstruction
If we naively reconstruct the scalar field from the gradient:
SdZx = np.cumsum(dZdx, axis=1) * np.diff(xl)[0]
SdZy = np.cumsum(dZdy, axis=0) * np.diff(yl)[0]

Zhat = np.zeros(SdZx.shape)
for i in range(Zhat.shape[0]):
    for j in range(Zhat.shape[1]):
        # integrate down the first column in y, then across row i in x
        Zhat[i,j] += np.sum([SdZy[i,0], -SdZy[0,0], SdZx[i,j], -SdZx[i,0]])

Zhat += Z[0,0] - Zhat[0,0]
We can see the global result is roughly correct, but levels are less accurate where the gradient magnitude is low:
Interpolated field reconstruction
If we increase the grid resolution and pick a specific interpolant (as is usual when dealing with mesh grids), we can get a finer field reconstruction:
r = np.stack([X.ravel(), Y.ravel()]).T
Sx = interpolate.CloughTocher2DInterpolator(r, dZdx.ravel())
Sy = interpolate.CloughTocher2DInterpolator(r, dZdy.ravel())

Nx, Ny = 200, 200
xli = np.linspace(xl.min(), xl.max(), Nx)
yli = np.linspace(yl.min(), yl.max(), Ny)
Xi, Yi = np.meshgrid(xli, yli)
ri = np.stack([Xi.ravel(), Yi.ravel()]).T

dZdxi = Sx(ri).reshape(Xi.shape)
dZdyi = Sy(ri).reshape(Xi.shape)

SdZxi = np.cumsum(dZdxi, axis=1) * np.diff(xli)[0]
SdZyi = np.cumsum(dZdyi, axis=0) * np.diff(yli)[0]

Zhati = np.zeros(SdZxi.shape)
for i in range(Zhati.shape[0]):
    for j in range(Zhati.shape[1]):
        Zhati[i,j] += np.sum([SdZyi[i,0], -SdZyi[0,0], SdZxi[i,j], -SdZxi[i,0]])

Zhati += Z[0,0] - Zhati[0,0]
Which definitely performs way better:
So basically, increasing the grid resolution with an ad hoc interpolant may help you get a more accurate result. The interpolant also solves the need to go from a triangular mesh to a regular rectangular grid in order to perform the integration.
Concave and convex hull
You have also pointed out inaccuracy on the edges. That is the result of the combination of the interpolant choice and the integration methodology. The integration methodology fails to properly compute the scalar field when it reaches a concave region with few interpolated points. The problem disappears when choosing a mesh-free interpolant able to extrapolate.
To illustrate it, let's remove some data from our MCVE:
q = np.full(dZdx.shape, False)
q[0:6,5:11] = True
q[-6:,-6:] = True
dZdx[q] = np.nan
dZdy[q] = np.nan
Then the interpolant can be constructed as follows:
q2 = ~np.isnan(dZdx.ravel())
r = np.stack([X.ravel(), Y.ravel()]).T[q2,:]
Sx = interpolate.CloughTocher2DInterpolator(r, dZdx.ravel()[q2])
Sy = interpolate.CloughTocher2DInterpolator(r, dZdy.ravel()[q2])
Performing the integration, we see that in addition to the classical edge effect we also have less accurate values in concave regions (wiggly dash-dot lines where the hull is concave), and we have no data outside the convex hull, as Clough-Tocher is a mesh-based interpolant:
Vl = np.arange(0, 11, 1)
axe = plt.contour(X, Y, np.hypot(dZdx, dZdy), Vl, cmap='jet')
axe.axes.contour(Xi, Yi, np.hypot(dZdxi, dZdyi), Vl, cmap='jet', linestyles='-.')
axe.axes.set_aspect('equal')
axe.axes.grid()
So basically the errors we are seeing in the corners are most likely due to an integration issue combined with an interpolation limited to the convex hull.
To overcome this we can choose a different interpolant, such as an RBF (radial basis function) kernel, which is able to extrapolate data outside the convex hull:
Sx = interpolate.Rbf(r[:,0], r[:,1], dZdx.ravel()[q2], function='thin_plate')
Sy = interpolate.Rbf(r[:,0], r[:,1], dZdy.ravel()[q2], function='thin_plate')
dZdxi = Sx(ri[:,0], ri[:,1]).reshape(Xi.shape)
dZdyi = Sy(ri[:,0], ri[:,1]).reshape(Xi.shape)
Notice the slightly different interface of this interpolator (mind how parameters are passed).
The result is the following:
We can see that the region outside the convex hull can be extrapolated (RBFs are mesh-free). So choosing the ad hoc interpolant is definitely a key point in solving your problem. But we still need to be aware that extrapolation may perform well here but is somewhat meaningless and dangerous.
Solving your problem
The answer provided by #Aguy is perfectly fine, as it sets up a clever way to integrate that is not disturbed by missing points outside the convex hull. But as you mentioned, there is inaccuracy in concave regions inside the convex hull.
If you wish to remove the edge effect you detected, you will have to resort to an interpolant able to extrapolate as well, or find another way to integrate.
Interpolant change
Using RBF interpolant seems to solve your problem. Here is the complete code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import interpolate

df = pd.read_excel('./Trial-Wireup 2.xlsx')
x = df['X'].to_numpy()
y = df['Y'].to_numpy()
z = df['Delay'].to_numpy()

r = np.stack([x, y]).T
#S = interpolate.CloughTocher2DInterpolator(r, z)
#S = interpolate.LinearNDInterpolator(r, z)
S = interpolate.Rbf(x, y, z, epsilon=0.1, function='thin_plate')

N = 200
xl = np.linspace(x.min(), x.max(), N)
yl = np.linspace(y.min(), y.max(), N)
X, Y = np.meshgrid(xl, yl)

#Zp = S(np.stack([X.ravel(), Y.ravel()]).T)
Zp = S(X.ravel(), Y.ravel())
Z = Zp.reshape(X.shape)

dZdy, dZdx = np.gradient(Z, yl, xl, edge_order=1)

SdZx = np.nancumsum(dZdx, axis=1) * np.diff(xl)[0]
SdZy = np.nancumsum(dZdy, axis=0) * np.diff(yl)[0]

Zhat = np.zeros(SdZx.shape)
for i in range(Zhat.shape[0]):
    for j in range(Zhat.shape[1]):
        #Zhat[i,j] += np.nansum([SdZy[i,0], -SdZy[0,0], SdZx[i,j], -SdZx[i,0]])
        Zhat[i,j] += np.nansum([SdZx[0,N//2], SdZy[i,N//2], SdZx[i,j], -SdZx[i,N//2]])

Zhat += Z[100,100] - Zhat[100,100]

lz = np.linspace(0, 5000, 20)
axe = plt.contour(X, Y, Z, lz, cmap='jet')
axe = plt.contour(X, Y, Zhat, lz, cmap='jet', linestyles=':')
axe.axes.plot(x, y, '.', markersize=1)
axe.axes.set_aspect('equal')
axe.axes.grid()
Which graphically renders as follows:
The edge effect is gone because the RBF interpolant can extrapolate over the whole grid. You can confirm this by comparing the results of the mesh-based interpolants.
Linear
Clough Tocher
Integration variable order change
We can also try to find a better way to integrate and mitigate the edge effect, e.g. by changing the integration variable order:
Zhat[i,j] += np.nansum([SdZy[N//2,0], SdZx[N//2,j], SdZy[i,j], -SdZy[N//2,j]])
With a classic linear interpolant, the result is quite correct, but we still have an edge effect at the bottom left corner:
As you noticed, the problem occurs in the middle of the axis, in the region where the integration starts and lacks a reference point.
Here is one approach:
First, in order to be able to do the integration, it's good to be on a regular grid. Using the variable names x and y as shorthand for your triang.x and triang.y, we can first create a grid:
import numpy as np
n = 200 # Grid density
stepx = (max(x) - min(x)) / n
stepy = (max(y) - min(y)) / n
xspace = np.arange(min(x), max(x), stepx)
yspace = np.arange(min(y), max(y), stepy)
xgrid, ygrid = np.meshgrid(xspace, yspace)
Then we can interpolate dx and dy on the grid using the same LinearTriInterpolator function:
fdx = LinearTriInterpolator(masked_triang, dx)
fdy = LinearTriInterpolator(masked_triang, dy)
dxgrid = fdx(xgrid, ygrid)
dygrid = fdy(xgrid, ygrid)
Now comes the integration part. In principle, any path we choose should get us to the same value. In practice, since there are missing values and different densities, the choice of path is very important to get a reasonably accurate answer.
Below I choose to integrate over dxgrid in the x direction from 0 to the middle of the grid at n/2. Then I integrate over dygrid in the y direction from 0 to the i point of interest. Then over dxgrid again from n/2 to the point j of interest. This is a simple way to make sure most of the path of integration is inside the bulk of available data, by simply picking a path that goes mostly through the "middle" of the data range. Other considerations would lead to different path selections.
So we do:
dxintegral = np.nancumsum(dxgrid, axis=1) * stepx
dyintegral = np.nancumsum(dygrid, axis=0) * stepy
and then (by somewhat brute force for clarity):
valintegral = np.ma.zeros(dxintegral.shape)
for i in range(n):
    for j in range(n):
        valintegral[i, j] = np.ma.sum([dxintegral[0, n // 2], dyintegral[i, n // 2],
                                       dxintegral[i, j], -dxintegral[i, n // 2]])
valintegral = valintegral * np.isfinite(dxintegral)
valintegral would be the result up to an arbitrary constant which can help put the "zero" where you want.
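For instance, if you happen to know the true time at some reference node (hypothetical indices i0, j0 below), you can pin the constant there:
i0, j0 = 0, 0          # hypothetical reference node with known time t0
t0 = 0.0
valintegral = valintegral - valintegral[i0, j0] + t0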
With your data shown here:
ax.tricontourf(masked_triang, time_array)
This is what I'm getting reconstructed when using this method:
ax.contourf(xgrid, ygrid, valintegral)
Hopefully this is somewhat helpful.
If you want to revisit the values at the original triangulation points, you can use interp2d on the valintegral regular grid data.
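A sketch of that lookup, assuming xspace, yspace and valintegral from above (interp2d is deprecated in recent SciPy, so RegularGridInterpolator is shown as the equivalent; note the (y, x) axis order matching the array shape):
from scipy.interpolate import RegularGridInterpolator
lookup = RegularGridInterpolator((yspace, xspace), np.asarray(valintegral),
                                 bounds_error=False, fill_value=np.nan)
time_at_nodes = lookup(np.column_stack([y, x]))   # y, x are the original triangulation points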
EDIT:
In reply to your edit, your adaptation above has a few errors:
Change the line (dx,dy) = np.gradient(grid_z1) to (dy,dx) = np.gradient(grid_z1)
In the integration loop change the dyintegral[i, len(yy) // 2] term to dyintegral[i, len(xx) // 2]
Better to replace the line valintegral = valintegral * np.isfinite(dxintegral) with valintegral[np.isnan(dx)] = np.nan
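Put together, the corrected statements look like this (a sketch keeping the same variable names, dropped into the code above):
(dy, dx) = np.gradient(grid_z1)   # 1. unpack in (dy, dx) order

valintegral[i, j] = np.ma.sum([dxintegral[0, len(xx) // 2],
                               dyintegral[i, len(xx) // 2],   # 2. x index, not len(yy)
                               dxintegral[i, j],
                               -dxintegral[i, len(xx) // 2]])

valintegral[np.isnan(dx)] = np.nan   # 3. mask missing data instead of the np.isfinite multiply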

Matplotlib: inverse affine transformation to get an equal aspect with different x and y limits

I have the 2D coordinates of a geometric shape as x and y arrays. Using a combination of translation and rotation I can get the shape rotated about its geometric center by a given angle alpha (See below for a minimal example).
As shown in the code below, this can be achieved by first shifting the geometric center of the shape to the origin of the coordinates, then applying the rotation (multiplying by the 2D rotation matrix) then translating it back to its original position.
In this example, let's assume that the shape is a rectangle:
import numpy as np
from numpy import cos, sin, linspace, concatenate
import matplotlib.pyplot as plt

def rotate(x, y, alpha):
    """
    Rotate the shape by an angle alpha (given in degrees)
    """
    # Get the center of the shape
    x_center = (x.max() + x.min()) / 2.0
    y_center = (y.max() + y.min()) / 2.0
    # Shift the center of the shape to the origin of coordinates
    x0 = x - x_center
    y0 = y - y_center
    angle_rad = np.deg2rad(alpha)
    rot_mat = np.array([
        [cos(angle_rad), -sin(angle_rad)],
        [sin(angle_rad), cos(angle_rad)]
    ])
    xy = np.vstack((x0, y0))
    xnew, ynew = rot_mat @ xy
    # Translate it back to its original location
    xnew += x_center
    ynew += y_center
    return xnew, ynew

z0, z1, z2, z3 = 4 + 0.6*1j, 4 + 0.8*1j, 8 + 0.8*1j, 8 + 0.6*1j
xy = concatenate((
    linspace(z0, z1, 10, endpoint=False),
    linspace(z1, z2, 10, endpoint=False),
    linspace(z2, z3, 10, endpoint=False),
    linspace(z3, z0, 10, endpoint=True)
))
x = xy.real
y = xy.imag
xrot, yrot = rotate(x, y, alpha=-45.0)

# The x and y limits
xlow, xup = 0, 10
ylow, yup = -1.5, 3.0

plt.plot(x, y, label='original shape')
plt.plot(xrot, yrot, label='rotated shape')
plt.xlim((xlow, xup))
plt.ylim((ylow, yup))
plt.legend()
plt.show()
We get the following plot:
As you can see, the shape gets rotated, but it is stretched/skewed as well because the aspect was not set to equal. We can check that by setting:
plt.gca().set_aspect('equal')
And this shows the rotated shape without being skewed:
The problem is that I am plotting this shape with other data that has an x range much larger than the y range. So, setting an equal aspect is not a solution in this case.
To be more precise, I want the rotated shape (orange color) in the first figure to show up correctly like the second figure. My approach is to find the inverse skew matrix in the first figure (resulting from the difference between x and y limits) and multiply it by the rotated shape to get the expected result.
Unfortunately, using trial and error I couldn't get the correct skew matrix.
Any help is greatly appreciated.
EDIT
From a linear algebra perspective, how to express that deformation of the rotated shape in the first figure in terms of skewing and scaling transformations?
When performing the desired rotation, the vertices of the rectangle will lose their meaning in data coordinates, and the initial rectangle will become a trapezoid. Apparently this is desired. So the question becomes essentially how to perform a rotation in screen coordinates about a given point center in data coordinates.
The solution might look a little complicated, which is due to a callback being used. This is necessary, to keep the center point in screen coordinates synchronized with possible axis limit changes.
from matplotlib import pyplot as plt
from matplotlib.transforms import Affine2D

x, y = (4, 0.6)
dx, dy = (4, 0.2)

fig, ax = plt.subplots()
# The x and y limits
xlow, xup = 0, 10
ylow, yup = -1.5, 3.0
ax.set(xlim=(xlow, xup), ylim=(ylow, yup))

rect1 = plt.Rectangle((x, y), width=dx, height=dy, facecolor="none", edgecolor="C0")
ax.add_patch(rect1)
rect2 = plt.Rectangle((x, y), width=dx, height=dy, facecolor="none", edgecolor="C1")
ax.add_patch(rect2)

def lim_change(evt=None):
    center = (x + dx/2, y + dy/2)
    trans = ax.transData + Affine2D().rotate_deg_around(*ax.transData.transform_point(center), -45)
    rect2.set_transform(trans)

lim_change()
cid = ax.callbacks.connect("xlim_changed", lim_change)
cid = ax.callbacks.connect("ylim_changed", lim_change)
plt.show()
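Regarding the linear-algebra formulation asked about in the question's edit: the data-to-screen mapping of the axes is an anisotropic scaling, so a rigid rotation R applied in screen space corresponds to S^-1 R S in data space, where S = diag(sx, sy) holds the (hypothetical) pixels-per-data-unit factors of the two axes. A rough data-coordinate sketch of the same idea (ignoring the callback machinery above, which is still needed to track limit changes):
import numpy as np

def rotate_screen_space(x, y, alpha_deg, center, sx, sy):
    """Rotate data coordinates so the result looks rigid on screen.
    sx, sy are assumed pixels-per-data-unit factors, e.g. sx = axes_width_px / (xup - xlow)."""
    a = np.deg2rad(alpha_deg)
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    S = np.diag([sx, sy])
    M = np.linalg.inv(S) @ R @ S            # effective transform in data coordinates
    xn, yn = M @ np.vstack((x - center[0], y - center[1]))
    return xn + center[0], yn + center[1]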

How can I make my 2D Gaussian fit to my image

I am trying to fit a 2D Gaussian to an image to find the location of the brightest point in it. My code looks like this:
import numpy as np
import astropy.io.fits as fits
import os
from astropy.stats import mad_std
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
from matplotlib.patches import Circle
from lmfit.models import GaussianModel
from astropy.modeling import models, fitting

def gaussian(xycoor, x0, y0, sigma, amp):
    '''This function is the Gaussian function'''
    x, y = xycoor  # x and y taken from the fit function. Starts at 0, increases by 1, goes to length of axis
    A = 1 / (2*sigma**2)
    eq = amp*np.exp(-A*((x-x0)**2 + (y-y0)**2))  # Gaussian
    return eq

def fit(image):
    med = np.median(image)
    image = image - med
    image = image[0,0,:,:]
    max_index = np.where(image >= np.max(image))
    x0 = max_index[1]  # Middle of X axis
    y0 = max_index[0]  # Middle of Y axis
    x = np.arange(0, image.shape[1], 1)  # Starts at 0, increases by 1, goes to length of axis
    y = np.arange(0, image.shape[0], 1)  # Starts at 0, increases by 1, goes to length of axis
    xx, yy = np.meshgrid(x, y)  # Creates a grid to plot the function over
    sigma = np.std(image)  # The standard dev given in the Gaussian
    amp = np.max(image)  # Amplitude
    guess = [x0, y0, sigma, amp]  # The initial guess for the Gaussian fitting
    low = [0, 0, 0, 0]  # Start of data array
    # Upper bounds -- x0: length of x axis, y0: length of y axis, st dev: max value in image, amplitude: 2x the max value
    upper = [image.shape[0], image.shape[1], np.max(image), np.max(image)*2]
    bounds = [low, upper]
    params, pcov = curve_fit(gaussian, (xx.ravel(), yy.ravel()), image.ravel(), p0=guess, bounds=bounds)  # Optimal fit. Not sure what pcov is.
    return params

def plotting(image, params):
    fig, ax = plt.subplots()
    ax.imshow(image)
    ax.scatter(params[0], params[1], s=10, c='red', marker='x')
    circle = Circle((params[0], params[1]), params[2], facecolor='none', edgecolor='red', linewidth=1)
    ax.add_patch(circle)
    plt.show()

data = fits.getdata('AzTECC100.fits')  # Read in file
med = np.median(data)
data = data - med
data = data[0,0,:,:]
parameters = fit(data)

# Generates a Gaussian based on the parameters given
plotting(data, parameters)
The image plots and the code gives no errors, but the fitting isn't working: it just puts an x wherever x0 and y0 are. The pixel values in my image are very small (the max value is 0.0007 and the std dev is 0.0001), while x and y are a few orders of magnitude larger. So I believe the problem is that, because of this, my eq goes to zero almost everywhere, so curve_fit fails. Is there a better way to construct my Gaussian so that it fits correctly?
I do not have access to your image. Instead I have generated some test "image" as follows:
y, x = np.indices((51,51))
x -= 25
y -= 25
data = 3 * np.exp(-0.7 * ((x+2)**2 + (y-1)**2))
Also, I have modified your code for plotting to increase the radius of the circle by 10:
circle = Circle((params[0], params[1]), 10 * params[2], ...)
and I commented out two more lines:
# image = image[0,0,:,:]
# data = data[0,0,:,:]
The result that I get is shown in the attached image and it looks reasonable to me:
Could it be that the issue is in how you access the data from the FITS file (e.g., image = image[0,0,:,:])? Are the data a 4D array? Why do you have 4 indices?
I also saw that you have asked a similar question here: Astropy.model 2DGaussian issue in which you tried to use just astropy.modeling. I will look into that question.
NOTE: you can replace code such as
max_index = np.where(image >= np.max(image))
x0 = max_index[1] #Middle of X axis
y0 = max_index[0] #Middle of Y axis
with
y0, x0 = np.unravel_index(np.argmax(data), data.shape)
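As an aside, since the comment in the question's code says "Not sure what pcov is": curve_fit returns the estimated covariance matrix of the fitted parameters, so the one-sigma uncertainties are the square roots of its diagonal:
perr = np.sqrt(np.diag(pcov))   # 1-sigma uncertainties of (x0, y0, sigma, amp)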

Fitting 2D sum of gaussians, scipy.optimise.leastsq (Ans: Use curve_fit!)

I want to fit an 2D sum of gaussians to this data:
After failing at fitting a sum to this initially, I instead sampled each peak separately (image) and returned a fit by finding its moments (essentially using this code).
Unfortunately, this results in an incorrect peak position measurement, due to the overlapping signal of the neighbouring peaks. Below is a plot of the sum of the separate fits. Obviously their peaks all lean toward the centre. I need to account for this in order to return the correct peak positions.
I've got working code which plots a 2D Gaussian envelope function (twoD_Gaussian()), and I pass this through optimize.leastsq as a 1D array using numpy.ravel and an appropriate error function; however, this results in nonsense output.
I tried fitting a single peak within the sum and got the following erroneous output:
I'd appreciate any advice on what I could try to make this work, or alternative approaches if this isn't appropriate. All input welcomed, of course!
Code below:
from scipy import optimize
import numpy as np
import matplotlib.pyplot as plt

def twoD_Gaussian(amp0, x0, y0, amp1=13721, x1=356, y1=247, amp2=14753, x2=291, y2=339, sigma=40):
    x0 = float(x0)
    y0 = float(y0)
    x1 = float(x1)
    y1 = float(y1)
    x2 = float(x2)
    y2 = float(y2)
    return lambda x, y: (amp0*np.exp(-(((x0-x)/sigma)**2+((y0-y)/sigma)**2)/2))+(
        amp1*np.exp(-(((x1-x)/sigma)**2+((y1-y)/sigma)**2)/2))+(
        amp2*np.exp(-(((x2-x)/sigma)**2+((y2-y)/sigma)**2)/2))

def fitgaussian2D(x, y, data, params):
    """Returns (height, x, y, width_x, width_y)
    the gaussian parameters of a 2D distribution found by a fit"""
    errorfunction = lambda p: np.ravel(twoD_Gaussian(*p)(*np.indices(np.shape(data))) - data)
    p, success = optimize.leastsq(errorfunction, params)
    return p

# Create data indices
I = image  # Red channel of a scanned image, equivalent to the 1st image displayed in this post.
p = np.asarray(I).astype('float')
w, h = np.shape(I)
x, y = np.mgrid[0:h, 0:w]
xy = (x, y)

# Scanned at 150 dpi = 5.91 dots per mm
dpmm = 5.905511811
plot_width = 40*dpmm

# Create function indices
fdims = int(np.round(plot_width/2))
xdims = (RC[0] - fdims, RC[0] + fdims)
ydims = (RC[1] - fdims, RC[1] + fdims)
fx = np.linspace(xdims[0], xdims[1], int(np.round(plot_width)))
fy = np.linspace(ydims[0], ydims[1], int(np.round(plot_width)))
fx, fy = np.meshgrid(fx, fy)

# Crop image for display
crp_data = image[xdims[0]:xdims[1], ydims[0]:ydims[1]]
z = crp_data

# Parameters obtained from separate fits
Amp = (13245, 13721, 15374)
px = (410, 356, 290)
py = (350, 247, 339)
initial_guess_sum = (Amp[0], px[0], py[0], Amp[1], px[1], py[1], Amp[2], px[2], py[2])
initial_guess_peak3 = (Amp[0], px[0], py[0])  # Try fitting single peak within sum

fitted_pars = fitgaussian2D(x, y, z, initial_guess_sum)
#fitted_pars = fitgaussian2D(x, y, z, initial_guess_peak3)

data_fitted = twoD_Gaussian(*fitted_pars)(fx, fy)
#data_fitted = twoD_Gaussian(*initial_guess_sum)(fx, fy)

fig = plt.figure(figsize=(10, 30))
ax = fig.add_subplot(111, aspect="equal")
#fig, ax = plt.subplots(1)
cb = ax.imshow(p, cmap=plt.cm.jet, origin='lower',
               extent=(x.min(), x.max(), y.min(), y.max()))
ax.contour(fx, fy, data_fitted.reshape(fx.shape[0], fy.shape[1]), 4, colors='w')
ax.set_xlim(int(RC[0])-135, int(RC[0])+135)
ax.set_ylim(int(RC[1])+135, int(RC[1])-135)
#plt.colorbar(cb)
plt.show()
I tried any number of other things before giving up and trying curve_fit again, albeit with more knowledge of passing lambda functions. It worked. Example output and code are below (still with redundancies), for the sake of posterity.
import scipy.optimize as opt

def twoD_Gaussian(amp0, x0, y0, amp1=13721, x1=356, y1=247, amp2=14753, x2=291, y2=339, sigma=40):
    x0 = float(x0)
    y0 = float(y0)
    x1 = float(x1)
    y1 = float(y1)
    x2 = float(x2)
    y2 = float(y2)
    return lambda x, y: (amp0*np.exp(-(((x0-x)/sigma)**2+((y0-y)/sigma)**2)/2))+(
        amp1*np.exp(-(((x1-x)/sigma)**2+((y1-y)/sigma)**2)/2))+(
        amp2*np.exp(-(((x2-x)/sigma)**2+((y2-y)/sigma)**2)/2))

def twoD_GaussianCF(xy, amp0, x0, y0, amp1=13721, amp2=14753, x1=356, y1=247, x2=291, y2=339, sigma_x=12, sigma_y=12):
    x, y = xy
    x0 = float(x0)
    y0 = float(y0)
    x1 = float(x1)
    y1 = float(y1)
    x2 = float(x2)
    y2 = float(y2)
    g = (amp0*np.exp(-(((x0-x)/sigma_x)**2+((y0-y)/sigma_y)**2)/2))+(
        amp1*np.exp(-(((x1-x)/sigma_x)**2+((y1-y)/sigma_y)**2)/2))+(
        amp2*np.exp(-(((x2-x)/sigma_x)**2+((y2-y)/sigma_y)**2)/2))
    return g.ravel()

# Create data indices
I = image  # Red channel of a scanned image, equivalent to the 1st image displayed in this post.
p = np.asarray(I).astype('float')
w, h = np.shape(I)
x, y = np.mgrid[0:h, 0:w]
xy = (x, y)

N_points = 3
display_width = 80

initial_guess_sum = (Amp[0], px[0], py[0], Amp[1], px[1], py[1], Amp[2], px[2], py[2])

popt, pcov = opt.curve_fit(twoD_GaussianCF, xy, np.ravel(p), p0=initial_guess_sum)

data_fitted = twoD_Gaussian(*popt)(x, y)
peaks = [(popt[1], popt[2]), (popt[5], popt[6]), (popt[7], popt[8])]

fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, aspect="equal")
cb = ax.imshow(p, cmap=plt.cm.jet, origin='lower',
               extent=(x.min(), x.max(), y.min(), y.max()))
ax.contour(x, y, data_fitted.reshape(x.shape[0], y.shape[1]), 20, colors='w')
ax.set_xlim(int(RC[0])-135, int(RC[0])+135)
ax.set_ylim(int(RC[1])+135, int(RC[1])-135)

for k in range(0, N_points):
    plt.plot(peaks[k][0], peaks[k][1], 'bo', markersize=7)
plt.show()
If all you care about is the centroid of each gaussian, I would just go with scipy.optimize.minimize. Multiply your data by -1 and then do some coarse sampling to find minima. The height of each peak will be offset by the neighboring gaussians but the positions are unchanged, so if you find a local extreme value then that must be the centroid of a gaussian.
If you need the other parameters, it might make sense to find the centroids as I suggest and then use leastsq to find the amplitudes and widths. It might add a lot of overhead if you're running these fits many times, but it would significantly reduce the number of free parameters in the least squares fit.
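A rough sketch of that coarse-sample-then-refine idea for one peak (the step size is an assumption to tune; data is your image, and scipy.ndimage.map_coordinates is used only to get sub-pixel values for the local minimiser):
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import map_coordinates

neg = -np.asarray(data, dtype=float)            # flip sign so peaks become minima

# coarse sampling on a sparse grid to get a seed near the strongest peak
step = 10                                       # assumed coarse step
ys, xs = np.mgrid[0:neg.shape[0]:step, 0:neg.shape[1]:step]
k = np.argmin(neg[ys, xs])
seed = np.array([ys.ravel()[k], xs.ravel()[k]], dtype=float)

# local refinement on the bilinearly interpolated image
cost = lambda pt: map_coordinates(neg, [[pt[0]], [pt[1]]], order=1)[0]
res = minimize(cost, seed, method='Nelder-Mead')
print("centroid (row, col):", res.x)
Repeat with different seeds (or after masking out the peaks already found) to get the other centroids.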

Color matplotlib quiver field according to magnitude and direction

I'm attempting to achieve the same behavior as this function in Matlab, whereby the color of each arrow corresponds to both its magnitude and direction, essentially drawing its color from a wheel. I saw this question, but it only seems to work for barbs. I also saw this answer, but quiver complains that the color array must be two-dimensional.
What is the best way to compute C for matplotlib.pyplot.quiver, taking into account both magnitude and direction?
Even though this is quite old now, I've come across the same problem. Based on matplotlib's quiver demo and my own answer to this post, I created the following example. The idea is to map the angle of a vector to the color using the hue of the HSV color space. The absolute value of the vector is used as the saturation and the value.
import numpy as np
import matplotlib.colors
import matplotlib.pyplot as plt

def vector_to_rgb(angle, absolute):
    """Get the rgb value for the given `angle` and the `absolute` value

    Parameters
    ----------
    angle : float
        The angle in radians
    absolute : float
        The absolute value of the gradient

    Returns
    -------
    array_like
        The rgb value as a tuple with values [0..1]
    """
    global max_abs

    # normalize angle
    angle = angle % (2 * np.pi)
    if angle < 0:
        angle += 2 * np.pi

    return matplotlib.colors.hsv_to_rgb((angle / 2 / np.pi,
                                         absolute / max_abs,
                                         absolute / max_abs))

X = np.arange(-10, 10, 1)
Y = np.arange(-10, 10, 1)
U, V = np.meshgrid(X, Y)

angles = np.arctan2(V, U)
lengths = np.sqrt(np.square(U) + np.square(V))
max_abs = np.max(lengths)

c = np.array(list(map(vector_to_rgb, angles.flatten(), lengths.flatten())))

fig, ax = plt.subplots()
q = ax.quiver(X, Y, U, V, color=c)
plt.show()
The color wheel is the following. The code for generating it is mentioned in the Edit.
Edit
I just noticed that the linked Matlab function "renders a vector field as a grid of unit-length arrows. The arrow direction indicates vector field direction, and the color indicates the magnitude". So my example above is not really what is asked in the question. Here are some modifications.
The left graph is the same as above. The right one does what the cited Matlab function does: a unit-length arrow plot with the color indicating the magnitude. The center one does not use the magnitude but only the direction in the color, which might be useful too. I hope other combinations are clear from this example.
import numpy as np
import matplotlib.colors
import matplotlib.pyplot as plt

def vector_to_rgb(angle, absolute):
    """Get the rgb value for the given `angle` and the `absolute` value

    Parameters
    ----------
    angle : float
        The angle in radians
    absolute : float
        The absolute value of the gradient

    Returns
    -------
    array_like
        The rgb value as a tuple with values [0..1]
    """
    global max_abs

    # normalize angle
    angle = angle % (2 * np.pi)
    if angle < 0:
        angle += 2 * np.pi

    return matplotlib.colors.hsv_to_rgb((angle / 2 / np.pi,
                                         absolute / max_abs,
                                         absolute / max_abs))

X = np.arange(-10, 10, 1)
Y = np.arange(-10, 10, 1)
U, V = np.meshgrid(X, Y)

angles = np.arctan2(V, U)
lengths = np.sqrt(np.square(U) + np.square(V))
max_abs = np.max(lengths)

# hue is direction, saturation and value are magnitude
c1 = np.array(list(map(vector_to_rgb, angles.flatten(), lengths.flatten())))

ax = plt.subplot(131)
ax.set_title("Hue is direction,\nsaturation and value are magnitude")
q = ax.quiver(X, Y, U, V, color=c1)

# color is direction only
c2 = np.array(list(map(vector_to_rgb, angles.flatten(),
                       np.ones_like(lengths.flatten()) * max_abs)))

ax = plt.subplot(132)
ax.set_title("Color is direction only")
q = ax.quiver(X, Y, U, V, color=c2)

# color is magnitude only
c3 = np.array(list(map(vector_to_rgb, 2 * np.pi * lengths.flatten() / max_abs,
                       max_abs * np.ones_like(lengths.flatten()))))

# create unit-length vectors ...
U_ddash = np.ones_like(U)
V_ddash = np.zeros_like(V)
# ... and rotate them to the original directions
U_dash = U_ddash * np.cos(angles) - V_ddash * np.sin(angles)
V_dash = U_ddash * np.sin(angles) + V_ddash * np.cos(angles)

ax = plt.subplot(133)
ax.set_title("Uniform length,\nColor is magnitude only")
q = ax.quiver(X, Y, U_dash, V_dash, color=c3)

plt.show()
To plot the color wheel, use the following code. Note that this uses the max_abs value from above, which is the maximum value that the color saturation and value can reach. The vector_to_rgb() function is also re-used here.
ax = plt.subplot(236, projection='polar')
n = 200
t = np.linspace(0, 2 * np.pi, n)
r = np.linspace(0, max_abs, n)
rg, tg = np.meshgrid(r, t)
c = np.array(list(map(vector_to_rgb, tg.T.flatten(), rg.T.flatten())))
cv = c.reshape((n, n, 3))
m = ax.pcolormesh(t, r, cv[:,:,1], color=c, shading='auto')
m.set_array(None)
ax.set_yticklabels([])
I don't know if you've since found that quiver with matplotlib 1.4.x has 3D capability. This capability is limited when attempting to colour the arrows, however.
A friend and I wrote the following script (in half an hour or so) to plot my experiment data using hex values from a spreadsheet, for my thesis. We're going to make this more automated once we're done with the semester, but the issue with passing a colour map to quiver is that it can't accept a vector form for some reason.
This link is to my git repository where the code I used, slightly neatened up by another friend, is hosted.
I hope I can save someone the time it took me.
