I am looking into how the intensity of a ring changes depending on angle. Here is an example of an image:
What I would like to do is take a circle of values from within the center of that doughnut and plot them vs angle. What I'm currently doing is using scipy.ndimage.interpolation.rotate and taking slices radially through the ring, and extracting the maximum of the two peaks and plotting those vs angle.
crop = np.ones((width, width)) # this is my image
slices = np.arange(0, width, 1)
stack = np.zeros((2*width, len(slices)))
angles = np.linspace(0, 2*np.pi, len(slices))
for j in range(len(slices)): # take slices
    stack[:, j] = rotate(crop, slices[j], reshape=False)[:, width]
However I don't think this is doing what I'm actually looking for. I'm mostly struggling with how to extract the data I want. I have also tried applying a mask, which looks like this:
to the image, but then I don't know how to get the values within that mask in the correct order (i.e. in order of increasing angle from 0 to 2pi).
Any other ideas would be of great help!
I made a different input image to help verify correctness:
import numpy as np
import scipy as sp
import scipy.interpolate
import matplotlib.pyplot as plt
# Mock up an image.
W = 100
x = np.arange(W)
y = np.arange(W)
xx,yy = np.meshgrid(x,y)
image = xx//5*5 + yy//5*5
image = image / np.max(image) # scale into [0,1]
plt.imshow(image, interpolation='nearest', cmap='gray')
plt.show()
To sample values from circular paths in the image, we first build an interpolator, because we want to access arbitrary locations. We also wrap it with np.vectorize so it can be evaluated point-wise on arrays of coordinates.
Then, we generate the coordinates of N points on the circle's circumference using the parametric definition of the circle x(t) = sin(t), y(t) = cos(t).
N should be at least twice the circumference (Nyquist–Shannon sampling theorem).
interp = sp.interpolate.interp2d(x, y, image)
vinterp = np.vectorize(interp)
for r in (15, 30, 45): # radii for circles around image's center
    xcenter = len(x)/2
    ycenter = len(y)/2
    arclen = 2*np.pi*r
    angle = np.linspace(0, 2*np.pi, int(2*arclen), endpoint=False)
    value = vinterp(xcenter + r*np.sin(angle),
                    ycenter + r*np.cos(angle))
    plt.plot(angle, value, label='r={}'.format(r))
plt.legend()
plt.show()
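Side note: scipy.interpolate.interp2d has since been deprecated (and removed in recent SciPy versions). A minimal alternative sketch using scipy.ndimage.map_coordinates, reusing image and W from above, would be:
import numpy as np
import scipy.ndimage as ndi
import matplotlib.pyplot as plt

xcenter = ycenter = W / 2
for r in (15, 30, 45):
    angle = np.linspace(0, 2*np.pi, int(2 * 2*np.pi*r), endpoint=False)
    # map_coordinates expects coordinates in (row, col) = (y, x) order
    rows = ycenter + r*np.cos(angle)
    cols = xcenter + r*np.sin(angle)
    value = ndi.map_coordinates(image, np.vstack([rows, cols]), order=1)
    plt.plot(angle, value, label='r={}'.format(r))
plt.legend()
plt.show()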
Related
I am calculating the Fourier transform of simple 2D shapes (a circle, for example). I am puzzled by a behaviour of the FFT that I do not understand: whether my shape is centered on one pixel or on the corner of four pixels, the resulting FFT is centered on one pixel. Here is a bit of code to explain:
import numpy as np
import matplotlib.pyplot as plt
nPix = 1000
pixelCentered = False
X,Y = np.indices((nPix,nPix),dtype=float)
if pixelCentered:
    X -= nPix/2 # this is centering my grid on one pixel
    Y -= nPix/2 # this is centering my grid on one pixel
else:
    X -= (nPix - 1)/2 # this is centering my grid on the corner of four pixels
    Y -= (nPix - 1)/2 # this is centering my grid on the corner of four pixels
x = X/(nPix/2)
y = Y/(nPix/2)
R = np.sqrt(x**2+y**2)
I = (R<0.5).astype(float)
PSF = np.fft.fftshift(np.fft.fft2(I))
plt.figure(1)
plt.clf()
plt.imshow(I)
plt.figure(2)
plt.clf()
plt.imshow(np.abs(PSF))
Since my grid is even, the exact center of the grid is at the corner of the four central pixels. However, I create my shape based on the coordinate system X, Y, so I can control the position of the center of my shape. If I set pixelCentered to True, the exact center of my grid is on pixel (nPix/2, nPix/2). In this case I expect the maximum pixel to be pixel (nPix/2, nPix/2). Now, if I set pixelCentered to False, the exact center of my array is not inside a pixel but at the corner of the four central pixels. In this case I expect the four central pixels of the FFT to share the maximum value. But this is not what I observe. In fact, regardless of the displacement I impose on the circle, the maximum of the FFT of my circle is always centered on pixel (nPix/2, nPix/2).
What am I missing here? Why is the maximum intensity not moving around with the shape?
Cheers,
Thanks to Homer512's comments I can safely answer my own question. Basically, the trick is to add a linear phase ramp of pi/2 (per unit of x and y) in the spatial domain: by the Fourier shift theorem, multiplying the input by a linear phase shifts the transform, and this particular ramp shifts the PSF by half a pixel in each direction so that it sits on the corner of the four central pixels. This is only valid if you play with your zero padding without changing your coordinate system.
Here is some example code for you to play with:
import numpy as np
import matplotlib.pyplot as plt
nPix = 1000
X,Y = np.indices((nPix,nPix),dtype=float)
X -= (nPix)/2 #this is centering my grid on one pixel
Y -= (nPix)/2 #this is centering my grid on one pixel
x = X/(nPix/2)
y = Y/(nPix/2)
R = np.sqrt(x**2+y**2)
oversampling = 5
I = (R<(1/oversampling)).astype(float)
phase = -(np.pi/2*x)-(np.pi/2*y) #phase you add to center the image
complexPhase = I * np.exp(1j*phase)
plt.figure(1)
plt.clf()
plt.imshow(I)
PSF = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(complexPhase)))
plt.figure(2)
plt.clf()
plt.imshow(np.real(PSF*np.conj(PSF)))
plt.ylim([496,504])
plt.xlim([496,504])
Now, if you prefer rescaling your x and y to get your shape sized properly, and you still want the FFT centered on the geometrical center of the grid, this is how you should do it:
import numpy as np
import matplotlib.pyplot as plt
nPix = 1000
X,Y = np.indices((nPix,nPix),dtype=float)
X -= (nPix)/2 #this is centering my grid on one pixel
Y -= (nPix)/2 #this is centering my grid on one pixel
oversampling = 2
x = X/(nPix/2)*oversampling
y = Y/(nPix/2)*oversampling
R = np.sqrt(x**2+y**2)
I = (R<1).astype(float)
#Now your additional phase should scale with the oversampling
phase = -(np.pi/oversampling/2*x)-(np.pi/oversampling/2*y)
complexPhase = I * np.exp(1j*phase)
plt.figure(1)
plt.clf()
plt.imshow(I)
PSF = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(complexPhase)))
plt.figure(2)
plt.clf()
plt.imshow(np.real(PSF*np.conj(PSF)))
plt.ylim([496,504])
plt.xlim([496,504])
I hope this is clear.
Special note for opticians: this is very annoying, because if you do wavefront sensing and use the FFT in your reconstructions, it means that you will always get an offset tip/tilt phase from trying to center the PSF in the array, which is not the natural state of the FFT.
I have made a workflow code to detect the edges of a flame in an image. I could get the edge line; it consists of many pixel points stored in an array (data in my code). Now, based on data, I would like to calculate the length of the edge. The idea is to calculate the distance between every pair of consecutive points in data and sum them all to get the length. I'm really stuck on this part. Please help me, many thanks.
Here is a processed image:
Here is the original image that was converted to the processed image; I put it in the code to compare the result:
import cv2
import matplotlib.pyplot as plt
if __name__ == '__main__':
    path = '1897_1.jpg' # processed image
    pic = cv2.imread(path)
    original = cv2.imread('1897_2.jpg') # original image
    img2 = cv2.flip(original, 1)
    b, g, r = cv2.split(pic)
    img4 = cv2.flip(b, 1)
    h, w = img4.shape
    data = []
    th_val = 20
    for i in range(h):
        for j in range(w):
            val = img4[i, j]
            if val >= th_val:
                data.append(j)
                break
    b1 = range(len(data))
    b2 = len(data)
    result = [b2]
    print(b2)
    plt.figure(figsize=(10, 8))
    plt.subplot(121)
    plt.imshow(img4)
    plt.plot(data, b1)
    plt.axis('off')
    plt.subplot(122)
    plt.plot(data, b1)
    plt.imshow(img2)
    plt.axis('off')
I came up with a very simple solution; it is far from optimal, but it works for this example and is a good starting point. Unfortunately, this solution is not optimal for the blue channel, where the curve is not smooth, but it works for the green and red channels.
data contains the width (column) coordinate of the first pixel in each row that exceeds the threshold. So consecutive points are separated by 1 pixel on the vertical axis and by data[i+1] - data[i] on the horizontal axis. These two values can be considered the two legs of a right triangle, and the hypotenuse is the distance we want to calculate. So, here is the solution:
length = 0
for i in range(0, len(data)-1):
    cathetus = data[i+1] - data[i]
    hypotenuse = (cathetus**2 + 1**2)**0.5
    length += hypotenuse
print(length)
Update
I have come up with two solutions: a hardcoded one and one in the form of a function. Let us start with the first one: the mean is a rather good approximator for signal + noise. When you do not have very strong noise or missing data, you may use this approach. In the example below we select points with x in [1,2,3], then we calculate the mean y for these points and assign that mean to coordinate x=2. Next we select points with x in [2,3,4], and so on. As a result, we obtain a mean_data list with y coordinates and mean_x with x coordinates. We can then calculate the length with the approach described above (see the sketch after the snippet below). You may also increase the amount of smoothing by averaging over 4 or more points from data.
mean_data = []
mean_x = range(1, len(data)-1)
for i in range(0, len(data)-2):
    mean_d = (data[i] + data[i+1] + data[i+2])/3
    mean_data.append(mean_d)
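For completeness, a minimal sketch of the length calculation on the smoothed curve, exactly as before (mean_data comes from the snippet above; the vertical step between consecutive rows is still 1 pixel):
length = 0
for i in range(len(mean_data) - 1):
    dx = mean_data[i+1] - mean_data[i] # horizontal change of the smoothed edge
    length += (dx**2 + 1**2)**0.5      # the vertical step is 1 pixel per row
print(length)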
Another approach is to use smoothing tools from the scipy package. One of them is described below. When calculating the length you will have to adjust to the new x axis xnew (a sketch of the adjusted length calculation follows the snippet).
from scipy.interpolate import make_interp_spline # scipy.interpolate.spline has been removed
import numpy as np
# transform the initial data to np.arrays
b1_ = np.array(b1)
data_ = np.array(data)
# create a new x axis with more data points
xnew = np.linspace(b1_.min(), b1_.max(), 50) # 50 is the number of points in between
smoothed_data = make_interp_spline(b1_, data_)(xnew)
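And a sketch of the length on the resampled curve; note that the step along the axis is now xnew[i+1] - xnew[i] instead of 1 pixel (xnew and smoothed_data come from the snippet above):
length = 0
for i in range(len(xnew) - 1):
    dy = xnew[i+1] - xnew[i]                   # step along the resampled axis
    dx = smoothed_data[i+1] - smoothed_data[i] # change of the smoothed edge position
    length += (dx**2 + dy**2)**0.5
print(length)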
I have solved Laplace's equation on an annulus with a central hole (the blue-red colourmap part of the plot: https://imgur.com/gallery/le6ToAG). I would like to discard part of this plot (anything outside the black circle I've drawn onto the plot), and I want the black x I drew to be the centre of a new, smaller disk, with an off-centre hole in it that used to be the central hole of the bigger disk.
Just to be clear, I want to get this: https://imgur.com/gallery/VcMLpOm. I also wonder if it would be possible to save the values of this cropped disk in another array somehow.
This is the closest thing to what I need that I have found so far: remove part of a plot in matplotlib
however, I don't know how to implement it in my case. I feel like I should define a new meshgrid somehow, but I'm not sure how to tell it to have an off-centre hole in the plot; moreover, this does not help with saving the values of the smaller disk with the off-centre hole into a new array.
This is my code for plotting the original annulus:
import numpy as np
import matplotlib.pyplot as plt
#initial conditions
Nr = 50
N_phi = 50
radius = 10
r2 = 2
T1 = 35
T2 = 4
# define for plot
r = np.linspace(r2, radius, Nr)
phi = np.linspace(0, 2*np.pi, N_phi)
R, phi = np.meshgrid(r, phi)
X = R*np.cos(phi)
Y = R*np.sin(phi)
#initialise matrix
T = np.ones((Nr, N_phi))
#print(np.shape(T))
#add solution to laplace's equation to matrix T
for i in reversed(range(0, Nr)):
    T[:,i] = T1 + ((T2 - T1)/np.log(r2/radius))*np.log(r[i]/radius)
#plot
plt.figure()
yes = plt.contourf(X,Y,T,cmap='jet')
plt.colorbar(yes)
plt.show()
I hope someone can guide me in the right direction. Thanks.
As a first start, a possibility would be to add
T = np.ma.array(T) # mask a circle in the middle:
outside = np.sqrt((X + 3)**2 + (Y - 3)**2) > 5
T[outside] = np.ma.masked
right before plt.figure(), according to https://matplotlib.org/gallery/images_contours_and_fields/contourf_demo.html#sphx-glr-gallery-images-contours-and-fields-contourf-demo-py
But I think this looks nicer if the base contourf plot is prepared with higher resolution in X and Y...
PS: e.g. setting
#initial conditions
Nr = 500
N_phi = 500
made it look quite sharp
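To also save the values of the cropped disk into another array (the second part of the question), a minimal sketch building on the mask above (X, Y, T and outside as defined in the snippets):
import numpy as np

inside = ~outside
T_cropped = np.ma.getdata(T)[inside]        # 1-D array of the kept values
X_cropped, Y_cropped = X[inside], Y[inside] # their coordinates

# or keep the 2-D shape, with everything outside the circle masked
T_masked = np.ma.masked_where(outside, T)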
I'm working in Python 2.7 with 3D numpy arrays, and trying to retrieve only the pixels that fall on a 2D tilted disc.
Here is my code to plot the border of the disc (= a circle) I am interested in:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
#creating a 3d numpy array (empty in this example, but will represent a binary 3D image in my application)
space=np.zeros((40,40,20))
r = 8 #radius of the circle
theta = np.pi / 4 # "tilt" of the circle
phirange = np.linspace(0, 2 * np.pi) #to make a full circle
#center of the circle
center=[20,20,10]
#computing the values of the circle in spherical coordinates and converting them
#back to cartesian
for phi in phirange:
    x = r * np.cos(theta) * np.cos(phi) + center[0]
    y = r * np.sin(phi) + center[1]
    z = r * np.sin(theta) * np.cos(phi) + center[2]
    space[int(round(x)), int(round(y)), int(round(z))] = 1
x,y,z = space.nonzero()
#plotting
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z, zdir='z', c= 'red')
plt.show()
The plot gives the following figure :
which is a good start, but now I want a way to retrieve only the values of the pixels of space which are located in the disc defined by the circle : the ones in the pink zone in the following image (in my application, space will be a 3D binary image, here it is numpy.zeros() just to be able to plot and show you the disc I want):
How should I proceed?
I guess there is some numpy masking involved, and I understand how you would do it in 2D (like in this question), but I'm having trouble applying this to 3D.
One easy way would be to calculate the normal vector to your disc's plane. You can use your spherical coordinates for that: don't add the centre, set phi to zero, swap the cos and sin of theta, and put a minus sign on the sin.
Let's call that vector v. The plane is given by v0*x0 + v1*x1 + v2*x2 == c; you can calculate c by inserting a point from your circle for x.
Next you can make a 2D grid for x0 and x1 and solve for x2. This gives you the height x2 as a function of the x0, x1 mesh. For these points you can calculate the distance from your disc centre and discard the points that are too far off. This you would indeed do using a mask.
Finally, depending on how precisely you want to plot, you could round the x2 values to grid units, but for a surface plot, for example, I wouldn't do that.
To get a 3D mask as you describe, you would round x2 and then, starting from an all-zero space, set the disc pixels using space[x0, x1, x2] = True. This assumes that you have masked x0, x1, x2 as described earlier.
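A minimal sketch of these steps, assuming the r, theta, center and grid shape from the question (the variable names are mine):
import numpy as np

r, theta = 8, np.pi / 4
center = np.array([20, 20, 10])

# normal vector of the disc plane (phi = 0, cos/sin of theta swapped, minus on the sin)
v = np.array([-np.sin(theta), 0.0, np.cos(theta)])
c = v @ center # any point of the circle (or its centre) gives the same value

# 2d grid for x0, x1; solve v0*x0 + v1*x1 + v2*x2 == c for x2
x0, x1 = np.meshgrid(np.arange(40), np.arange(40), indexing='ij')
x2 = (c - v[0]*x0 - v[1]*x1) / v[2]

# discard grid points whose distance from the disc centre exceeds the radius
dist = np.sqrt((x0 - center[0])**2 + (x1 - center[1])**2 + (x2 - center[2])**2)
inside = dist <= r

# 3d mask: round x2 to grid units and set the disc pixels
disc = np.zeros((40, 40, 20), dtype=bool)
disc[x0[inside], x1[inside], np.round(x2[inside]).astype(int)] = True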
Well, that is a math problem; you could ask it on the Mathematics Stack Exchange site.
From my perspective, you should first find the surface your disc is in, and do the area calculation within that surface, for example by the method you mentioned in the linked question.
numpy and matplotlib are definitely not responsible for the projection here; you are.
Without clearly pointing out which surface (or which kind of surface) the points lie in, the area does not mean anything; the equation does not guarantee it is a plane.
I have created an ellipse using matplotlib.patches.ellipse as shown below:
patch = mpatches.Ellipse(center, major_ax, minor_ax, angle=angle_deg, fc='none', ls='solid', ec='g', lw=3)
What I want is a list of all the integer coordinates enclosed inside this patch.
I.e. If I was to plot this ellipse along with every integer point on the same grid, how many of those points are enclosed in the ellipse?
I have tried seeing if I can extract the equation of the ellipse so I can loop through each point and check whether it falls within the curve, but I can't seem to find an obvious way to do this. It becomes more complicated because the major axis of the ellipse can be oriented at any angle. The information to do this must be stored in the patch somewhere, but I can't seem to find it.
Any advice on this would be much appreciated.
Ellipse objects have a method contains_point which will return 1 if the point is in the ellipse and 0 otherwise.
Stealing from @DrV's answer:
import matplotlib.pyplot as plt
import matplotlib.patches
import numpy as np
# create an ellipse
el = matplotlib.patches.Ellipse((50,-23), 10, 13.7, 30, facecolor=(1,0,0,.2), edgecolor='none')
# calculate the x and y points possibly within the ellipse
y_int = np.arange(-30, -15)
x_int = np.arange(40, 60)
# create a list of possible coordinates
g = np.meshgrid(x_int, y_int)
coords = list(zip(*(c.flat for c in g)))
# create the list of valid coordinates (from untransformed)
ellipsepoints = np.vstack([p for p in coords if el.contains_point(p, radius=0)])
# just to see if this works
fig = plt.figure()
ax = fig.add_subplot(111)
ax.add_artist(el)
ep = np.array(ellipsepoints)
ax.plot(ellipsepoints[:,0], ellipsepoints[:,1], 'ko')
plt.show()
This will give you the result as below:
If you really want to use the methods offered by matplotlib, then:
import matplotlib.pyplot as plt
import matplotlib.patches
import numpy as np
# create an ellipse
el = matplotlib.patches.Ellipse((50,-23), 10, 13.7, 30, facecolor=(1,0,0,.2), edgecolor='none')
# find the bounding box of the ellipse
bb = el.get_window_extent()
# calculate the x and y points possibly within the ellipse
x_int = np.arange(np.ceil(bb.x0), np.floor(bb.x1) + 1, dtype='int')
y_int = np.arange(np.ceil(bb.y0), np.floor(bb.y1) + 1, dtype='int')
# create a list of possible coordinates
g = np.meshgrid(x_int, y_int)
coords = np.array(list(zip(*(c.flat for c in g))))
# create a list of transformed points (transformed so that the ellipse is a unit circle)
transcoords = el.get_transform().inverted().transform(coords)
# find the transformed coordinates which are within a unit circle
validcoords = transcoords[:,0]**2 + transcoords[:,1]**2 < 1.0
# create the list of valid coordinates (from untransformed)
ellipsepoints = coords[validcoords]
# just to see if this works
fig = plt.figure()
ax = fig.add_subplot(111)
ax.add_artist(el)
ep = np.array(ellipsepoints)
ax.plot(ellipsepoints[:,0], ellipsepoints[:,1], 'ko')
Seems to work:
(Zooming in reveals that even the points hanging on the edge are inside.)
The point here is that matplotlib handles ellipses as transformed circles (translate, rotate, scale, anything affine). If the transform is applied in reverse, the result is a unit circle at origin, and it is very simple to check if a point is within that.
Just a word of warning: The get_window_extent may not be extremely reliable, as it seems to use the spline approximation of a circle. Also, see tcaswell's comment on the renderer-dependency.
In order to find a more reliable bounding box, you may:
create a horizontal and vertical vector into the plot coordinates (their position is not important, ([0,0],[1,0]) and ([0,0], [0,1]) will do)
transform these vectors into the ellipse coordinates (the get_transform, etc.)
find in the ellipse coordinate system (i.e. the system where the ellipse is a unit circle around the origin) the four tangents of the circle which are parallel to these two vectors
find the intersection points of these tangents (4 intersections, but 2 diagonal ones are enough)
transform the intersection points back to the plot coordinates
This will give an accurate (but of course limited by the numerical precision) axis-aligned bounding box; a sketch of these steps is given below.
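A sketch of those steps, assuming (as in the snippets above) that el.get_transform() maps the unit circle onto the data coordinates, i.e. before the ellipse is added to an axes:
import numpy as np

trans = el.get_transform()
inv = trans.inverted()

# images of the horizontal and vertical unit vectors in the unit-circle frame
origin = inv.transform([0.0, 0.0])
dx = inv.transform([1.0, 0.0]) - origin
dy = inv.transform([0.0, 1.0]) - origin

def unit_normal(d):
    n = np.array([-d[1], d[0]])
    return n / np.linalg.norm(n)

# tangents of the unit circle parallel to dx are {p : n1 . p = +-1}, likewise for dy
n1, n2 = unit_normal(dx), unit_normal(dy)
A = np.array([n1, n2])
corner_a = np.linalg.solve(A, [1.0, 1.0])   # two diagonal intersection points
corner_b = np.linalg.solve(A, [-1.0, -1.0])

# back to plot coordinates: two opposite corners of the axis-aligned bounding box
pa, pb = trans.transform(corner_a), trans.transform(corner_b)
xlo, xhi = sorted([pa[0], pb[0]])
ylo, yhi = sorted([pa[1], pb[1]])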
However, you may use a simple approximation:
all possible points are within a circle whose center is the same as that of the ellipse and whose diameter is the same as that of the major axis of the ellipse
In other words, all possible points are within a square bounding box between x0 ± m/2 and y0 ± m/2, where (x0, y0) is the center of the ellipse and m is the major axis.
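A sketch of that approximation (el.center, el.width and el.height are the Ellipse's own attributes; whichever of width/height is larger is the major axis):
import numpy as np

x0, y0 = el.center
m = max(el.width, el.height) # length of the major axis

# square box guaranteed to contain the ellipse
x_int = np.arange(int(np.floor(x0 - m/2)), int(np.ceil(x0 + m/2)) + 1)
y_int = np.arange(int(np.floor(y0 - m/2)), int(np.ceil(y0 + m/2)) + 1)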
I'd like to offer another solution that uses the Path object's contains_points() method instead of contains_point():
First get the coordinates of the ellipse and make it into a Path object:
elpath=Path(el.get_verts())
(NOTE that el.get_paths() won't work for some reason.)
Then call the path's contains_points():
validcoords=elpath.contains_points(coords)
Below I'm comparing @tacaswell's solution (method 1), @DrV's (method 2) and my own (method 3) (I've enlarged the ellipse by ~5 times):
import numpy
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
from matplotlib.path import Path
import time
#----------------Create an ellipse----------------
el=Ellipse((50,-23),50,70,30,facecolor=(1,0,0,.2), edgecolor='none')
#---------------------Method 1---------------------
t1 = time.time()
for ii in range(50):
    y = numpy.arange(-100, 50)
    x = numpy.arange(-30, 130)
    g = numpy.meshgrid(x, y)
    coords = numpy.array(list(zip(*(c.flat for c in g))))
    ellipsepoints = numpy.vstack([p for p in coords if el.contains_point(p, radius=0)])
t2 = time.time()
print('time of method 1', t2 - t1)
#---------------------Method 2---------------------
t2 = time.time()
for ii in range(50):
    y = numpy.arange(-100, 50)
    x = numpy.arange(-30, 130)
    g = numpy.meshgrid(x, y)
    coords = numpy.array(list(zip(*(c.flat for c in g))))
    invtrans = el.get_transform().inverted()
    transcoords = invtrans.transform(coords)
    validcoords = transcoords[:,0]**2 + transcoords[:,1]**2 <= 1.0
    ellipsepoints = coords[validcoords]
t3 = time.time()
print('time of method 2', t3 - t2)
#---------------------Method 3---------------------
t3 = time.time()
for ii in range(50):
    y = numpy.arange(-100, 50)
    x = numpy.arange(-30, 130)
    g = numpy.meshgrid(x, y)
    coords = numpy.array(list(zip(*(c.flat for c in g))))
    #------Create a path from ellipse's vertices------
    elpath = Path(el.get_verts())
    # call contains_points()
    validcoords = elpath.contains_points(coords)
    ellipsepoints = coords[validcoords]
t4 = time.time()
print('time of method 3', t4 - t3)
#---------------------Plot it ---------------------
fig, ax = plt.subplots()
ax.add_artist(el)
ep = numpy.array(ellipsepoints)
ax.plot(ellipsepoints[:,0], ellipsepoints[:,1], 'ko')
plt.show(block=False)
I got these execution times:
time of method 1 62.2502269745
time of method 2 0.488734006882
time of method 3 0.588987112045
So the contains_point() approach is way slower. The coordinate-transformation method is faster than mine, but when you have irregularly shaped contours/polygons, the Path method would still work.
Finally the result plot:
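As an aside on the point about irregular contours above, the same contains_points() call works for an arbitrary polygonal path, not just ellipses (a small sketch with made-up vertices):
from matplotlib.path import Path
import numpy as np

# an arbitrary closed contour
poly = Path([(0, 0), (4, 1), (5, 4), (2, 6), (-1, 3), (0, 0)])

xg, yg = np.meshgrid(np.arange(-2, 7), np.arange(-1, 8))
pts = np.column_stack([xg.ravel(), yg.ravel()])
inside = poly.contains_points(pts)
print(pts[inside]) # integer points inside the polygon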