I am trying to generate a lattice of points in the shape of a Menger sponge or Sierpinski sponge.
https://en.wikipedia.org/wiki/Menger_sponge This link details how the shape is mathematically constructed.
I wanted to find a way where I could make this shape using recursion to remove the necessary cubes.
I looked online but I could only find code which generated 3d renderings of the shape and not a lattice of points.
It is worth mentioning that I am not familiar with OO programming, which seemed to be the general approach in the examples I found.
I then tried to make a 2D version to see if I could implement it, but the only version I got to work was by manually subtracting the areas needed.
This is what I've gotten to work, only removing the first square from the centre:
import numpy as np
import matplotlib.pyplot as plt

size = 12
x = []
y = []
for index_x in np.arange(size):
    for index_y in np.arange(size):
        x = np.append(x, index_x)
        y = np.append(y, index_y)

# step 1: remove central box
x_box = []
y_box = []
for index_1 in np.arange(144):
    if (x[index_1] < size/3 or x[index_1] >= 2/3 * size or
            y[index_1] < size/3 or y[index_1] >= 2/3 * size):
        x_box = np.append(x_box, x[index_1])
        y_box = np.append(y_box, y[index_1])

# step 2: remove central square in each surrounding square
# Do the same steps as above but for the other smaller squares

fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(x_box, y_box)
ax.set_title('Menger Sponge')
ax.set_xlabel('x')
ax.set_ylabel('y')
plt.show()
This is what my code produces.
Is there an easier / better way of implementing this?
You need to add a recursive element to your code. I would also suggest thinking in terms of 2D (and eventually 3D) matrices instead of 1D arrays, and exploring numpy's abilities in depth:
import numpy as np

def menger(matrix, size):
    quotient, remainder = divmod(size, 3)
    if remainder == 0:
        for x in np.arange(0, size, quotient):
            for y in np.arange(0, size, quotient):
                view = matrix[x:x + quotient, y:y + quotient]
                if (x // quotient) % 3 == 1 and (y // quotient) % 3 == 1:
                    view *= 0
                menger(view, quotient)

if __name__ == "__main__":
    import matplotlib.pyplot as plt

    SIZE = 27
    matrix = np.ones((SIZE, SIZE))
    menger(matrix, SIZE)
    plt.matshow(matrix)
    plt.colorbar()
    plt.show()
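Since the original goal is a 3D lattice of points, here is a sketch (my own extension of the code above, not part of the original answer) of the same recursion in three dimensions. In the Menger sponge a sub-cube is removed when at least two of its three indices within the current 3x3x3 block are central; np.argwhere() then extracts the surviving lattice points:

import numpy as np

def menger_3d(matrix, size):
    quotient, remainder = divmod(size, 3)
    if remainder == 0:
        for x in np.arange(0, size, quotient):
            for y in np.arange(0, size, quotient):
                for z in np.arange(0, size, quotient):
                    view = matrix[x:x + quotient, y:y + quotient, z:z + quotient]
                    # Indices of this sub-cube within the current 3x3x3 block
                    central = ((x // quotient) % 3 == 1,
                               (y // quotient) % 3 == 1,
                               (z // quotient) % 3 == 1)
                    if sum(central) >= 2:  # the 6 face centers and the middle cube
                        view *= 0
                    else:
                        menger_3d(view, quotient)

SIZE = 27
lattice = np.ones((SIZE, SIZE, SIZE))
menger_3d(lattice, SIZE)
points = np.argwhere(lattice)  # the (x, y, z) lattice points that remain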
Lately I've been studying filtered back projection, and I downloaded some code from github.com. I was confused by the process of the filtered back projection. Here is part of his code:
# (the snippet assumes these imports from the surrounding file)
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

def backproject(sinogram, theta):
    """Backprojection function.
    inputs:  sinogram - [n x m] numpy array where n is the number of projections and m the number of angles
             theta - vector of length m denoting the angles represented in the sinogram
    output:  backprojArray - [n x n] backprojected 2-D numpy array"""
    imageLen = sinogram.shape[0]  # sinogram is [n x m], so imageLen = n (height)
    reconMatrix = np.zeros((imageLen, imageLen))
    x = np.arange(imageLen) - imageLen/2
    y = x.copy()
    X, Y = np.meshgrid(x, y)
    plt.ion()
    fig2, ax = plt.subplots()
    im = plt.imshow(reconMatrix, cmap='gray')
    theta = theta * np.pi / 180
    numAngles = len(theta)
    for n in range(numAngles):
        Xrot = X * np.sin(theta[n]) - Y * np.cos(theta[n])
        XrotCor = np.round(Xrot + imageLen/2)
        XrotCor = XrotCor.astype('int')
        projMatrix = np.zeros((imageLen, imageLen))
        m0, m1 = np.where((XrotCor >= 0) & (XrotCor <= (imageLen - 1)))
        s = sinogram[:, n]
        projMatrix[m0, m1] = s[XrotCor[m0, m1]]
        reconMatrix += projMatrix
        im.set_data(Image.fromarray((reconMatrix - np.min(reconMatrix)) / np.ptp(reconMatrix) * 255))
        ax.set_title('Theta = %.2f degrees' % (theta[n] * 180 / np.pi))
        fig2.canvas.draw()
        fig2.canvas.flush_events()
    plt.close()
    plt.ioff()
    backprojArray = np.flipud(reconMatrix)
    return backprojArray
The for loop has confused me for two weeks. Firstly, I really don't understand the following code:
Xrot = X*np.sin(theta[n])-Y*np.cos(theta[n])
XrotCor = np.round(Xrot+imageLen/2)
I don't know how it works geometrically. I have drawn the matrix and so on, but I still don't understand the principles. Lastly, for the code im.set_data(Image.fromarray((reconMatrix-np.min(reconMatrix))/np.ptp(reconMatrix)*255)), what does it mean? I only know the direct back projection, and I really don't know why there's a 255.
Xrot = X*np.sin(theta[n])-Y*np.cos(theta[n])
This is the simple back projection algorithm. I am also learning it, so I will try to make it as simple and concise as possible.
There are some steps for FBP:
1. Input the sinogram image (Radon transform)
2. Create the filter (the Ram-Lak filter works best, but you can try other high-pass filters as well)
3. Forward Fourier transform (dft function)
4. Apply the filter
5. Inverse Fourier transform
6. Backprojection (basically reversing the sinogram technique)
Backprojection simply means projecting the values back along each angle and adding them up to recover the original image.
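As a rough illustration of the filtering part (steps 3 to 5), here is a minimal sketch assuming a sinogram with projections along axis 0; this is my own example, not code from the question:

import numpy as np

def ramp_filter(sinogram):
    """Apply a Ram-Lak (ramp) filter to each projection of an [n x m] sinogram."""
    n = sinogram.shape[0]
    ramp = np.abs(np.fft.fftfreq(n))                           # |f|, the ramp in frequency space
    proj_fft = np.fft.fft(sinogram, axis=0)                    # forward FFT of each projection
    filtered = np.fft.ifft(proj_fft * ramp[:, None], axis=0)   # apply filter, inverse FFT
    return np.real(filtered)                                   # discard tiny imaginary residue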
im.set_data(Image.fromarray((reconMatrix-np.min(reconMatrix))/np.ptp(reconMatrix)*255))
I believe this code is just normalizing the image, nothing else: it rescales reconMatrix so that its minimum maps to 0 and its maximum to 255 (np.ptp is the peak-to-peak range, i.e. max minus min), which is the standard 8-bit grayscale range. That is why the 255 is there.
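For instance, the rescaling in isolation (my own toy example):

import numpy as np

arr = np.array([[-2.0, 0.0],
                [ 2.0, 6.0]])
norm = (arr - np.min(arr)) / np.ptp(arr) * 255  # min -> 0, max -> 255
print(norm)  # [[  0.    63.75]
             #  [127.5  255.  ]]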
Using Python, I have an array with coefficients from a polynomial, let's say
polynomial = [1,2,3,4]
which means the equation:
y = 4x³ + 3x² + 2x + 1
(so the array is in reversed order)
Now how do I plot this into a visual curve in the Jupyter Notebook?
There was a similar question:
Plotting polynomial with given coefficients
but I didn't understand the answer (like, what are a and b?).
And what do I need to import to make this happen?
First, you have to decide the limits for x in your plot. Let's say x goes from -2 to 2. Let's also ask for a hundred points on our curve (this can be any sufficiently large number for your interval, so that you get a smooth-looking curve).
Let's create that array:
lower_limit = -2
upper_limit = 2
num_pts = 100
x = np.linspace(lower_limit, upper_limit, num_pts)
Now, let's evaluate y at each of these points. Numpy has a handy polyval() that'll do this for us. Remember that it wants the coefficients ordered from highest exponent to lowest, so you'll have to reverse the polynomial list:
poly_coefs = polynomial[::-1] # [4, 3, 2, 1]
y = np.polyval(poly_coefs, x)
Finally, let's plot everything:
plt.plot(x, y, '-r')
You'll need the following imports:
import numpy as np
from matplotlib import pyplot as plt
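Putting it all together (the same pieces as above, assembled into one runnable snippet):

import numpy as np
from matplotlib import pyplot as plt

polynomial = [1, 2, 3, 4]            # coefficients, lowest order first
x = np.linspace(-2, 2, 100)
y = np.polyval(polynomial[::-1], x)  # np.polyval wants highest order first
plt.plot(x, y, '-r')
plt.show()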
If you don't want to import numpy, you can also write vanilla python methods to do the same thing:
def linspace(start, end, num_pts):
    step = (end - start) / (num_pts - 1)
    return [start + step * i for i in range(num_pts)]

def polyval(coefs, xvals):
    yvals = []
    for x in xvals:
        y = 0
        for power, c in enumerate(reversed(coefs)):
            y += c * (x ** power)
        yvals.append(y)
    return yvals
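Usage then mirrors the numpy version above (still using plt from the imports listed earlier):

polynomial = [1, 2, 3, 4]          # lowest-order coefficient first, as in the question
x = linspace(-2, 2, 100)
y = polyval(polynomial[::-1], x)   # reverse to highest-order first
plt.plot(x, y, '-r')
plt.show()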
I'm trying to generate a regular set of n points within the volume of a sphere. I found this similar answer (https://scicomp.stackexchange.com/questions/29959/uniform-dots-distribution-in-a-sphere) on generating a uniform, regular set of n points on the surface of a sphere, with the following code:
import numpy as np

n = 5000
r = 1
z = []
y = []
x = []
alpha = 4.0*np.pi*r*r/n
d = np.sqrt(alpha)
m_nu = int(np.round(np.pi/d))
d_nu = np.pi/m_nu
d_phi = alpha/d_nu
count = 0
for m in range(0, m_nu):
    nu = np.pi*(m+0.5)/m_nu
    m_phi = int(np.round(2*np.pi*np.sin(nu)/d_phi))
    for n in range(0, m_phi):
        phi = 2*np.pi*n/m_phi
        xp = r*np.sin(nu)*np.cos(phi)
        yp = r*np.sin(nu)*np.sin(phi)
        zp = r*np.cos(nu)
        x.append(xp)
        y.append(yp)
        z.append(zp)
        count = count + 1
which works as intended:
How can I modify this to generate a regular set of n points in the volume of a sphere?
Another method to do this, yielding uniformity in volume:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
dim_len = 30
spacing = 2 / dim_len
point_cloud = np.mgrid[-1:1:spacing, -1:1:spacing, -1:1:spacing].reshape(3, -1).T
point_radius = np.linalg.norm(point_cloud, axis=1)
sphere_radius = 0.5
in_points = point_radius < sphere_radius
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(point_cloud[in_points, 0], point_cloud[in_points, 1], point_cloud[in_points, 2], )
plt.show()
Output (matplotlib mixes up the view, but it is a uniformly sampled sphere, in volume):
Uniform sampling, then checking if points are in the sphere or not by their radius.
Uniform sampling reference [see this answer's edit history for naive sampling].
This method has the drawback of generating redundant points which are then discarded.
It has the upside of vectorization, which probably makes up for the drawback. I didn't check.
With fancy indexing, one could generate the same points as this method without generating redundant points, but I doubt it can be easily (or at all) vectorized.
Sample uniformly along X. For every value of X, you draw two Y from X²+Y²=1. Sample uniformly between these two Y. Then for every (X, Y) pair, you draw two Z from X²+Y²+Z²=1. Sample uniformly between these two Z.
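A sketch of that procedure (my own reading of it, assuming numpy and a unit sphere; the three coordinates are drawn sequentially, each conditional on the previous):

import numpy as np

rng = np.random.default_rng()
n = 5000
x = rng.uniform(-1, 1, n)                # sample uniformly along X
y_bound = np.sqrt(1 - x**2)              # the two Y roots of X² + Y² = 1
y = rng.uniform(-y_bound, y_bound)       # sample uniformly between them
z_bound = np.sqrt(1 - x**2 - y**2)       # the two Z roots of X² + Y² + Z² = 1
z = rng.uniform(-z_bound, z_bound)       # sample uniformly between them
points = np.column_stack((x, y, z))      # every point lies inside the unit sphere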
I have a numpy array filled with intensity readings at different radii in a uniform circle (for context, this is a 1D radiative transfer project for protostellar formation models: while much better models exist, my supervisor wants me to have the experience of producing one so that I understand how others work).
I want to take that 1d array, and "rotate" it through a circle, forming a 2D array of intensities that could then be shown with imshow (or, with a bit of work, aplpy). The final array needs to be 2d, and the projection needs to be Cartesian, not polar.
I can do it with nested for loops, and I can do it with lookup tables, but I have a feeling there must be a neat way of doing it in numpy or something.
Any ideas?
EDIT:
I have had to go back and recreate my (frankly horrible) mess of for loops and if statements that I had before. If I really tried, I could probably get rid of one of the loops and one of the if statements by condensing things down. However, the aim is not to make it work with for loops, but see if there is a built in way to rotate the array.
impB is an array that differs slightly from what I stated it was before. It's actually just a list of radii where particles are detected. I then bin those into radius bins to get the intensity (or frequency, if you prefer) in each radius. R is the scale factor for my radius, as I run the model in a dimensionless way. iRes is a resolution scale factor, essentially how often I want to sample my radial bins. Everything else should be clear.
# (assumes numpy as np and math are imported, and impB, R, iRes are defined above)
radJ = np.ndarray(shape=(2*iRes, 2*iRes))  # Create array of 2 x Radius square

for i in range(iRes):
    n = len(impB[np.where(impB[:] < ((i+1.) * (R / iRes)))])  # Count number of things within this radius + 1
    m = len(impB[np.where(impB[:] <= ((i) * (R / iRes)))])    # Count number of things in this radius
    a = (((i + 1) * (R / iRes))**2 - ((i) * (R / iRes))**2) * math.pi  # A normalisation factor based on area..... don't ask
    for x in range(iRes):
        for y in range(iRes):
            # Check that the pixel falls in radius bin i, then mirror it
            # into all four quadrants of the Cartesian grid
            if i**2 <= (x**2 + y**2) < (i + 1)**2:
                radJ[x+iRes, y+iRes] = (n-m) / a  # Put in actual intensity bins
                radJ[x+iRes, -y+iRes] = (n-m) / a
                radJ[-x+iRes, y+iRes] = (n-m) / a
                radJ[-x+iRes, -y+iRes] = (n-m) / a
Nested loops are a simple approach for that. With ri_data_r and y containing your radius values (difference to the middle pixel) and the array for rotation, respectively, I would suggest:
from scipy import interpolate
import numpy as np

y = np.random.rand(100)
ri_data_r = np.linspace(-len(y)/2, len(y)/2, len(y))
interpol_index = interpolate.interp1d(ri_data_r, y)
xv = np.arange(-1, 1, 0.01)  # adjust your matrix values here
X, Y = np.meshgrid(xv, xv)
profilegrid = np.ones(X.shape, float)
for i, x in enumerate(X[0, :]):
    for k, y in enumerate(Y[:, 0]):
        current_radius = np.sqrt(x ** 2 + y ** 2)
        profilegrid[i, k] = interpol_index(current_radius)
print(profilegrid)
This will give you exactly what you are looking for. You just have to take your array and calculate a symmetric array ri_data_r that has the same length as your data array and contains the distance between each data point and the middle of the array. The code above does this automatically.
I stumbled upon this question in a different context and I hope I understood it right. Here are two other ways of doing this. The first uses skimage.transform.warp with interpolation of the desired order (here we use order=0, nearest-neighbor). This method is slower but more precise, and needs less memory than the second method.
The second one does not use interpolation, and is therefore faster, but also less precise and needs way more memory, because it stores each 2D array containing one tilt until the end, where they are averaged with np.nanmean().
The difference between the two solutions stems from the problem of handling the center of the final image, where the tilts overlap the most: the first one would just keep adding values with each tilt, ending up outside the original range. This was "solved" by clipping the matrix in each step to a global_min and global_max (consult the code). The second one solves it by taking the mean of the tilts where they overlap, which forces us to use np.nan.
Please read the Example of usage and Sanity check sections in order to understand the plot titles.
Solution 1:
import numpy as np
from skimage.transform import warp

def rotate_vector(vector, deg_angle):
    # Credit goes to skimage.transform.radon
    assert vector.ndim == 1, 'Pass only 1D vectors, e.g. use array.ravel()'
    center = vector.size // 2
    square = np.zeros((vector.size, vector.size))
    square[center, :] = vector
    rad_angle = np.deg2rad(deg_angle)
    cos_a, sin_a = np.cos(rad_angle), np.sin(rad_angle)
    R = np.array([[cos_a, sin_a, -center * (cos_a + sin_a - 1)],
                  [-sin_a, cos_a, -center * (cos_a - sin_a - 1)],
                  [0, 0, 1]])
    # Approx. 80% of time is spent in this function
    return warp(square, R, clip=False, output_shape=(vector.size, vector.size))

def place_vectors(vectors, deg_angles):
    matrix = np.zeros((vectors.shape[-1], vectors.shape[-1]))
    global_min, global_max = 0, 0
    for i, deg_angle in enumerate(deg_angles):
        tilt = rotate_vector(vectors[i], deg_angle)
        global_min = tilt.min() if global_min > tilt.min() else global_min
        global_max = tilt.max() if global_max < tilt.max() else global_max
        matrix += tilt
    matrix = np.clip(matrix, global_min, global_max)
    return matrix
Solution 2:
Credit for the idea goes to my colleague Michael Scherbela.
import numpy as np

def rotate_vector(vector, deg_angle):
    assert vector.ndim == 1, 'Pass only 1D vectors, e.g. use array.ravel()'
    square = np.ones([vector.size, vector.size]) * np.nan
    radius = vector.size // 2
    r_values = np.linspace(-radius, radius, vector.size)
    rad_angle = np.deg2rad(deg_angle)
    ind_x = np.round(np.cos(rad_angle) * r_values + vector.size/2).astype(int)
    ind_y = np.round(np.sin(rad_angle) * r_values + vector.size/2).astype(int)
    ind_x = np.clip(ind_x, 0, vector.size - 1)
    ind_y = np.clip(ind_y, 0, vector.size - 1)
    square[ind_y, ind_x] = vector
    return square

def place_vectors(vectors, deg_angles):
    matrices = []
    for deg_angle, vector in zip(deg_angles, vectors):
        matrices.append(rotate_vector(vector, deg_angle))
    matrix = np.nanmean(np.array(matrices), axis=0)
    return np.nan_to_num(matrix, copy=False, nan=0.0)
Example of usage:
r = 100 # Radius of the circle, i.e. half the length of the vector
n = int(np.pi * r / 8) # Number of vectors, e.g. number of tilts in tomography
v = np.ones(2*r) # One vector, e.g. one tilt in tomography
V = np.array([v]*n) # All vectors, e.g. a sinogram in tomography
# Rotate 1D vector to a specific angle (output is 2D)
angle = 45
rotated = rotate_vector(v, angle)
# Rotate each row of a 2D array according to its angle (output is 2D)
angles = np.linspace(-90, 90, num=n, endpoint=False)
inplace = place_vectors(V, angles)
Sanity check:
These are just simple checks which by no means cover all possible edge cases. Depending on your use case you might want to extend the checks and adjust the method.
# I. Sanity check
# Assuming n <= πr and v = np.ones(2r)
# Then sum(inplace) should be approx. equal to (n * (2πr - n)) / π
# which is an area that should be covered by the tilts
desired_area = (n * (2 * np.pi * r - n)) / np.pi
covered_area = np.sum(inplace)
covered_frac = covered_area / desired_area
print(f'This method covered {covered_frac * 100:.2f}% '
'of the area which should be covered in total.')
# II. Sanity check
# Assuming n <= πr and v = np.ones(2r)
# Then a circle M with radius m <= r should be the largest circle which
# is fully covered by the vectors. I.e. its mean should be no less than 1.
# If n = πr then m = r.
# m = n / π
m = int(n / np.pi)
# Code for circular mask not included (a possible implementation is sketched below)
mask = create_circular_mask(2*r, 2*r, center=None, radius=m)
m_area = np.mean(inplace[mask])
print(f'Full radius r={r}, radius m={m}, mean(M)={m_area:.4f}.')
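The circular mask helper is not part of the original code; here is a possible implementation (an assumption on my part, written to match the call signature used above):

import numpy as np

def create_circular_mask(h, w, center=None, radius=None):
    # Boolean mask, True inside a circle of the given radius
    if center is None:
        center = (w // 2, h // 2)
    if radius is None:
        radius = min(center[0], center[1], w - center[0], h - center[1])
    Y, X = np.ogrid[:h, :w]
    dist = np.sqrt((X - center[0])**2 + (Y - center[1])**2)
    return dist <= radius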
Code for plotting:
import matplotlib.pyplot as plt
plt.figure(figsize=(16, 8))
plt.subplot(121)
rotated = np.nan_to_num(rotated) # not necessary in case of the first method
plt.title(
f'Output of rotate_vector(), angle={angle}°\n'
f'Sum is {np.sum(rotated):.2f} and should be {np.sum(v):.2f}')
plt.imshow(rotated, cmap=plt.cm.Greys_r)
plt.subplot(122)
plt.title(
f'Output of place_vectors(), r={r}, n={n}\n'
f'Covered {covered_frac * 100:.2f}% of the area which should be covered.\n'
f'Mean of the circle M is {m_area:.4f} and should be 1.0.')
plt.imshow(inplace)
circle=plt.Circle((r, r), m, color='r', fill=False)
plt.gcf().gca().add_artist(circle)
plt.gcf().gca().legend([circle], [f'Circle M (m={m})'])
I'm trying to acquire the bifurcation diagram for the equation below (x is a function of t):

dx/dt = rx + x³ − x⁵

(The original post showed the expected diagram as an image.)
And here is my snippet:
import numpy as np
import matplotlib.pyplot as plt

def pitch(r, x):
    return r * x + np.power(x, 3) - np.power(x, 5)

n = 10000
r = np.linspace(-200, 200, n)
iterations = 1000
last = 100
x = 0

for i in range(iterations):
    x = pitch(r, x)
    if i >= (iterations - last):
        plt.plot(r, x, ',k', alpha=0.02)

plt.title("Bifurcation diagram")
plt.show()
But the generated plot is not what it is supposed to be:
Edit:
Here is my recent attempt:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def pitch(s, x, r):
    x = s[0]
    dxdt = r * x + np.power(x, 3) - np.power(x, 5)
    return [dxdt]

t = np.linspace(0, 100)
s0 = [-50]
r = np.linspace(-200, 200)
for i in r:
    s = odeint(pitch, s0, t, args=(i,))
    plt.plot(s, i, ',k', alpha=0.02)
plt.title("Bifurcation diagram")
plt.show()
With this error:
ValueError: x and y must have same first dimension
Could you give me some advice to fix this problem?!
I found a link to this post and decided to post a few remarks that might be helpful to someone stumbling upon it in the future.
I did not analyze the equation in detail, but it is clear at first sight that something interesting happens when r is close to 0.
So we could study the behavior of the system for r in [-10, 10].
You are right to use odeint instead of solving the Cauchy problem with a hand-coded Euler method.
This equation has attractors in the sense that the solution soon "forgets" the initial condition and slides towards an attractor, yet the choice of attractor depends on where we start relative to 0. Large positive initial conditions slide to the positive attractor and vice versa, as −x⁵ is the term that defines the behavior at large x.
What we need to do is, for each r in the range and for each initial condition, put a mark at the attractor that the solution slides to.
We first create a canvas to put marks into:
diagram = np.zeros((200,200))
And then for each combination of (r,s0) we put a point on the canvas at (r,s[-1]).
Here is the complete code:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
from matplotlib import cm  # needed for cm.Greys below

def pitch(s, t, r):
    # odeint passes the time variable as the second argument; it is unused here
    x = s[0]
    dxdt = r * x + np.power(x, 3) - np.power(x, 5)
    return [dxdt]

t = np.arange(0, 100, 2)
N = 200  # Number of points along each side of the diagram
diagram = np.zeros((N, N))
rmin, rmax = -10, 10
rrange = np.arange(rmin, rmax, (rmax - rmin) / N)
smin, smax = -5.0, 5.0
srange = np.arange(smin, smax, 2 * (smax - smin) / N)

for i in rrange:
    for s0 in srange:
        s = odeint(pitch, [s0], t, args=(i,))
        imgx = int((i - rmin) * N / (rmax - rmin))
        imgy = int((s[-1, 0] - smin) / (smax - smin) * N)
        imgx = min(N - 1, max(0, imgx))  # make sure we stay
        imgy = min(N - 1, max(0, imgy))  # within the diagram bounds
        diagram[imgy, imgx] = 1

plt.title("Bifurcation diagram")
plt.imshow(np.flipud(diagram), cmap=cm.Greys,
           extent=[rmin, rmax, smin, smax], aspect=(rmax - rmin) / (smax - smin))
plt.xlabel("r")
plt.ylabel("x")
plt.show()
And the resulting plot:
When you zoom into the region around 0 by setting (rmin, rmax) to (-0.5, 0.5), you can see that the branches of the diagram do not start at 0. Instead, as in the diagram drawn in the original post, the branches start at roughly r = -0.25.