I have the following code snippet (for Hough circle transform):
for r in range(1, 11):
    for t in range(0, 360):
        trad = np.deg2rad(t)
        b = x - r * np.cos(trad)
        a = y - r * np.sin(trad)
        b = np.floor(b).astype('int')
        a = np.floor(a).astype('int')
        A[a, b, r-1] += 1
Where A is a 3D array of shape (height, width, 10), and
height and width represent the size of a given image.
My goal is to convert the snippet exclusively to numpy code.
My attempt is this:
arr_r = np.arange(1, 11)
arr_t = np.deg2rad(np.arange(0, 360))
arr_cos_t = np.cos(arr_t)
arr_sin_t = np.sin(arr_t)
arr_rcos = arr_r[..., np.newaxis] * arr_cos_t[np.newaxis, ...]
arr_rsin = arr_r[..., np.newaxis] * arr_sin_t[np.newaxis, ...]
arr_a = (y - arr_rsin).flatten().astype('int')
arr_b = (x - arr_rcos).flatten().astype('int')
Where x and y are two scalar values.
I am having trouble converting the increment part: A[a, b, r-1] += 1. I thought of this: A[a, b, r] counts the number of occurrences of the triple (a, b, r), so one clue was to use a Cartesian product (but the arrays are too large for that).
Any tips or tricks I can use?
Thank you very much!
Edit: after filling A, I need (a, b, r) = argmax(A). The tuple (a, b, r) identifies a circle, and its value in A represents the confidence. So I want the tuple with the highest value in A. This is part of the voting step of the Hough circle transform: finding circle parameters with unknown radius.
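For reference, once A is filled, I would expect to recover that triple with something like this (a sketch; the third axis is 0-based while the radii run 1..10):
a, b, r_idx = np.unravel_index(np.argmax(A), A.shape)
best_circle = (a, b, r_idx + 1)  # map the 0-based radius axis back to radius r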
Method #1
Here's one way leveraging broadcasting to get the counts and update A (this assumes the a and b values computed in the intermediate steps are non-negative) -
d0,d1,d2 = A.shape
arr_r = np.arange(1, 11)
arr_t = np.deg2rad(np.arange(0, 360))
arr_b = np.floor(x - arr_r[:,None] * np.cos(arr_t)).astype('int')
arr_a = np.floor(y - arr_r[:,None] * np.sin(arr_t)).astype('int')
idx = (arr_a*d1*d2) + (arr_b * d2) + (arr_r-1)[:,None]
A.flat[:idx.max()+1] += np.bincount(idx.ravel())
# OR A.flat += np.bincount(idx.ravel(), minlength=A.size)
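If you'd rather skip the manual flat-index arithmetic altogether, np.add.at performs the same unbuffered in-place accumulation directly on A; a sketch using the arrays from above (it handles repeated (a, b, r) triples correctly, though it is usually slower than bincount):
np.add.at(A, (arr_a, arr_b, (arr_r - 1)[:, None]), 1)  # index arrays broadcast to (10, 360)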
Method #2
Alternatively, we could avoid bincount and replace the last step of Method #1, like so -
idx.ravel().sort()  # in-place sort through the raveled view
idx.shape = -1  # flatten in place
grp_idx = np.flatnonzero(np.concatenate(([True], idx[1:] != idx[:-1], [True])))
A.flat[idx[grp_idx[:-1]]] += np.diff(grp_idx)
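The same grouping can also be written with np.unique, which sorts internally and returns the duplicate counts directly; a short equivalent sketch:
uniq, counts = np.unique(idx, return_counts=True)
A.flat[uniq] += counts  # one update per distinct (a, b, r) flat index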
Improvement with numexpr
We could also leverage the numexpr module for faster sine and cosine computations, like so -
import numexpr as ne
arr_r2D = arr_r[:,None]
arr_b = ne.evaluate('floor(x - arr_r2D * cos(arr_t))').astype(int)
arr_a = ne.evaluate('floor(y - arr_r2D * sin(arr_t))').astype(int)
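Note that numexpr only speeds up the trigonometric part; the accumulation step is unchanged, e.g. reusing the bincount update from Method #1:
idx = (arr_a * d1 * d2) + (arr_b * d2) + (arr_r - 1)[:, None]
A.flat += np.bincount(idx.ravel(), minlength=A.size)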
Related
I am trying to filter out all non-gray values within a given tolerance with the following code. It gives the expected results but runs too slowly to use in practice. Is there a way to do the following using numpy operations?
for i in range(height):
    for j in range(width):
        r, g, b = im_arr[i][j]
        r = (r + 150) / 2
        g = (g + 150) / 2
        b = (b + 150) / 2
        mean = (r + g + b) / 3
        diffr = abs(mean - r)
        diffg = abs(mean - g)
        diffb = abs(mean - b)
        maxdev = 2
        if (diffr + diffg + diffb) > maxdev:
            im_arr[i][j][0] = 0
            im_arr[i][j][1] = 0
            im_arr[i][j][2] = 0
Looping in plain Python is slow: one of the advantages of numpy is that traversing the arrays is highly optimized. Without commenting on the algorithm itself, you can get the same results using only numpy, which will be much faster:
Since im_arr is an image, it is very likely that its dtype is np.uint8.
That is only 8 bits, so you have to be careful about overflow. In your code, when you add 150 to a number, the result will be of type np.int64. But if you add 150 to an 8-bit np.ndarray, the result will still be of type np.uint8, and it can overflow.
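For instance, a quick illustration of the wrap-around:
a = np.array([200], dtype=np.uint8)
print(a + 150)   # [94]   -- uint8 arithmetic wraps around at 256
print(a + 150.)  # [350.] -- the float promotes the whole array to float64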
You can either change the array type (using astype) or add a float, which will automatically promote the array to float
mod_img = (im_arr + 150.)/2 # the trailing "." in 150. matters: it promotes the array to float
signed_dif = mod_img - np.mean(mod_img, axis=2, keepdims=True)
collapsed_dif = np.sum(np.abs(signed_dif), axis=2)
maxdev = 2
im_arr[collapsed_dif > maxdev] = 0
This can be done without any loops. I'll break every step out into a dedicated line:
import numpy as np
im_arr = np.random.rand(300, 400, 3) # Assuming this is what your image looks like
img_shifted = (im_arr + 150) / 2 # This can be done in one go
mean_v = np.mean(img_shifted, axis=2) # Compute the mean along the channel axis
diff_img = np.abs(mean_v[:,:,None] - img_shifted) # Broadcasting to subtract n x m from n x m x k
maxdev = 2
selection = np.sum(diff_img, axis=2) > maxdev
im_arr[selection] = 0 # Using fancy indexing with booleans
I am trying to evaluate a function at discretized points and store the result in column-major order, like this:
import numpy as np

N = 3
n = N * N
h = 1 / (N + 1)  # step size
h2 = h**2

deltaX = np.zeros(N)
deltaY = np.zeros(N)

def Function(x, y):
    output = -20. * np.pi * np.sin(2 * np.pi * x) * np.sin(4 * np.pi * y)
    return output

## Equally spaced delta:
for i in range(1, N + 1):
    deltaX[i - 1] = i * h
    deltaY[i - 1] = i * h

### Lexicographic row order ###
### Evaluation of the function at deltaX and deltaY
feval = np.zeros((n, 1))
How should I approach evaluating the discretization of this function?
Good news: your function properly uses numpy operations, so it is already completely vectorized. That means you can evaluate it at every element of the input arrays in one call.
The shapes of the inputs don't have to match exactly; they just have to broadcast together, which means only non-singleton dimensions need to match.
So start by creating the appropriate input arrays. Numpy provides the tools to do this elegantly without looping:
N = 3
h = 1 / (N + 1)
delta_x = np.arange(1., N + 1.) * h
delta_y = np.linspace(h, N * h, N)[:, None]
I deliberately used two different ways to create the coordinate arrays, to serve as an example. In practice, you'd want to use one of the two methods.
The index [:, None] turns delta_y into a column vector: None introduces a new singleton axis. There are any number of other ways to do the same thing, like `delta_y = ....reshape(-1, 1)`.
Read the numpy docs for all of the functions used here.
Now that you have a column in the y direction and a row in the x, you can call Function as just
val = Function(delta_x, delta_y)
The operation of arranging the 2D matrix val into a 1D array is called raveling. By default it uses row-major order, the order numpy uses in memory, also called "C" order. The alternative is to interpret the array in column-major order, as Matlab does; this is called Fortran order, and it requires a copy of the data since that's not how the elements are laid out in memory.
One way to ravel in Fortran order:
feval = val.ravel(order='F')
An alternative is to transpose and use C order:
feval = val.T.ravel()
The last two lines can be combined, so you end up with 3 lines:
delta_x = h * np.arange(1., N + 1.)
delta_y = h * np.arange(1., N + 1.)[:, None]
feval = Function(delta_x, delta_y).ravel(order='F')
You could make it into a one-liner, but that's pushing it.
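If you want to convince yourself the broadcasting is right, a quick check against an explicit loop (a sketch, reusing Function, n and N from the question):
feval_loop = np.zeros(n)
for j in range(N):           # x index varies slowest in Fortran order
    for i in range(N):       # y index varies fastest
        feval_loop[j * N + i] = Function(delta_x[j], delta_y[i, 0])
print(np.allclose(feval, feval_loop))  # True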
I looked around a lot but it's hard to find an answer. Basically, when one interpolates v -> w, one would normally use one of the many interpolation functions. But I want to get the matrix A such that Av = w.
In my case w is a 200x200 matrix, with v being a random subset of w with half as many points. I don't really care for fancy math; it could be as simple as weighting the known points by inverse squared distance. I already tried implementing it all with some for loops, but it only really works for small sizes. Maybe it helps explain my question.
import numpy as np
from random import sample

def testScatter(xbig, ybig):
    NumberOfPoints = int(xbig * ybig / 2)  # half as many points as in the full sample

    # choose random coordinates
    Index = sample(range(xbig * ybig), NumberOfPoints)
    IndexYScatter = np.remainder(Index, xbig)
    IndexXScatter = np.array((Index - IndexYScatter) / xbig, dtype=int)

    InterpolationMatrix = np.zeros((xbig * ybig, NumberOfPoints), dtype=np.float32)
    WeightingSum = np.zeros(xbig * ybig)

    coordsSamplePoints = []
    for i in range(NumberOfPoints):  # first set all the given points (no need to interpolate)
        coordsSamplePoints.append(IndexYScatter[i] + xbig * IndexXScatter[i])
        InterpolationMatrix[coordsSamplePoints[i], i] = 1
        WeightingSum[coordsSamplePoints[i]] = 1

    for x in range(xbig * ybig):  # now comes the interpolation
        if x not in coordsSamplePoints:
            YIndexInterpol = x % xbig  # xcoord in interpolated matrix
            XIndexInterpol = (x - YIndexInterpol) / xbig  # ycoord in interp. matrix
            for y in range(NumberOfPoints):
                XIndexScatter = IndexXScatter[y]
                YIndexScatter = IndexYScatter[y]
                distanceSquared = ((np.float32(YIndexInterpol) - np.float32(YIndexScatter))**2
                                   + (np.float32(XIndexInterpol) - np.float32(XIndexScatter))**2)
                InterpolationMatrix[x, y] = 1 / distanceSquared
                WeightingSum[x] += InterpolationMatrix[x, y]

    return InterpolationMatrix / WeightingSum[:, None], IndexXScatter, IndexYScatter
You need to spend some time with the Numpy documentation, starting at the top and working your way down. Studying answers here on SO for questions asking how to vectorize an operation on Numpy arrays will also help. If you find that you are iterating over indices and performing calculations with Numpy arrays, there is probably a better way.
First cut...
The first for loop can be replaced with:
coordsSamplePoints = IndexYScatter + (xbig * IndexXScatter)
InterpolationMatrix[coordsSamplePoints,np.arange(coordsSamplePoints.shape[0])] = 1
WeightingSum[coordsSamplePoints] = 1
This mainly makes use of elementwise arithmetic and index arrays; the complete numpy indexing tutorial is worth reading.
You can test this by enhancing the function to execute the for loop alongside the Numpy way, then comparing the results.
...
IM = InterpolationMatrix.copy()
WS = WeightingSum.copy()

for i in range(NumberOfPoints):  # first set all the given points (no need to interpolate)
    coordsSamplePoints.append(IndexYScatter[i] + xbig * IndexXScatter[i])
    InterpolationMatrix[coordsSamplePoints[i], i] = 1
    WeightingSum[coordsSamplePoints[i]] = 1

cSS = IndexYScatter + (xbig * IndexXScatter)
IM[cSS, np.arange(cSS.shape[0])] = 1
WS[cSS] = 1

# TEST Validity
print((cSS == coordsSamplePoints).all(),
      (IM == InterpolationMatrix).all(),
      (WS == WeightingSum).all())
...
The outer loop:
...
for x in range(xbig * ybig):  # now comes the interpolation
    if x not in coordsSamplePoints:
        YIndexInterpol = x % xbig  # xcoord in interpolated matrix
        XIndexInterpol = (x - YIndexInterpol) / xbig  # ycoord in interp. matrix
...
Can be replaced with:
...
space = np.arange(xbig * ybig)
mask = ~(space == cSS[:,None]).any(0)
iP = space[mask] # points to interpolate
yIndices = iP % xbig
xIndices = (iP - yIndices) / xbig
...
Complete solution:
import random
import numpy as np

def testScatter(xbig, ybig):
    NumberOfPoints = int(xbig * ybig / 2)  # half as many points as in the full sample

    # choose random coordinates
    Index = random.sample(range(xbig * ybig), NumberOfPoints)
    IndexYScatter = np.remainder(Index, xbig)
    IndexXScatter = np.array((Index - IndexYScatter) / xbig, dtype=int)

    InterpolationMatrix = np.zeros((xbig * ybig, NumberOfPoints), dtype=np.float32)
    WeightingSum = np.zeros(xbig * ybig)

    coordsSamplePoints = IndexYScatter + (xbig * IndexXScatter)
    InterpolationMatrix[coordsSamplePoints, np.arange(coordsSamplePoints.shape[0])] = 1
    WeightingSum[coordsSamplePoints] = 1

    IM = InterpolationMatrix
    cSS = coordsSamplePoints
    WS = WeightingSum

    space = np.arange(xbig * ybig)
    mask = ~(space == cSS[:, None]).any(0)
    iP = space[mask]  # points to interpolate
    yIndices = iP % xbig
    xIndices = (iP - yIndices) / xbig

    dSquared = ((yIndices[:, None] - IndexYScatter) ** 2
                + (xIndices[:, None] - IndexXScatter) ** 2)
    IM[iP, :] = 1 / dSquared
    WS[iP] = IM[iP, :].sum(1)

    return IM / WS[:, None], IndexXScatter, IndexYScatter
I'm getting about a 200x improvement over your original with (100, 100) for the arguments. Probably some other minor improvements are possible, but they won't affect execution time significantly.
Broadcasting is another Numpy skill that is a must.
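For instance, the dSquared line above works because a column vector broadcasts against a row vector into the full table of pairwise values:
col = np.arange(3)[:, None]  # shape (3, 1)
row = np.arange(4)           # shape (4,)
print((col - row) ** 2)      # shape (3, 4): all pairwise squared differences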
I am creating a couple of multi-dimensional arrays using NumPy and initialising them based on the index, as follows:
pos_data = []

# Some typical values
d = 2  # Could also be 3
vol_ext = (1000, 500)  # If d = 3, this will have another entry
ratio = [5.0, 8.0]  # Again, if d = 3, it will have another entry

for i in range(d):
    pos_data.append(np.zeros(vol_ext))

if d == 2:
    for y in range(vol_ext[1]):
        for x in range(vol_ext[0]):
            pos_data[0][x, y] = (x - 1.0) * ratio[0]
            pos_data[1][x, y] = (y - 1.0) * ratio[1]
elif d == 3:
    for z in range(vol_ext[2]):
        for y in range(vol_ext[1]):
            for x in range(vol_ext[0]):
                pos_data[0][x, y, z] = (x - 1.0) * ratio[0]
                pos_data[1][x, y, z] = (y - 1.0) * ratio[1]
                pos_data[2][x, y, z] = (z - 1.0) * ratio[2]
The looping is a bit ugly and slow as well. Additionally, if I have a 3-dimensional object then I have to have another nested loop.
I was wondering if there is a Pythonic way to generate these values as they are just based on the x, y and z indices. I tried to use the combinatorics bit from itertools, but I could not make it work.
It's easy with np.meshgrid:
pos_data = np.meshgrid(*(r * (np.arange(s) - 1.0)
for s, r in zip(vol_ext, ratio)), indexing='ij')
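A quick sanity check of the result, assuming the vol_ext and ratio from the question:
print(pos_data[0].shape)  # (1000, 500)
print(pos_data[0][2, 0])  # 5.0 == (2 - 1.0) * ratio[0]
print(pos_data[1][0, 2])  # 8.0 == (2 - 1.0) * ratio[1]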
I would generate a two or three dimensional numpy.meshgrid of data, then scale each entry by the ratio per slice.
For the 2D case:
(X, Y) = np.meshgrid(np.arange(vol_ext[1]), np.arange(vol_ext[0]))
pos_data = [(Y - 1) * ratio[0], (X - 1) * ratio[1]]
For the 3D case:
(X, Y, Z) = np.meshgrid(np.arange(vol_ext[2]), np.arange(vol_ext[1]), np.arange(vol_ext[0]))
pos_data = [(Z - 1) * ratio[0], (Y - 1) * ratio[1], (X - 1) * ratio[2]]
Example using your 2D data
pos_data has been generated by your code. I've created a new list pos_data2 that stores the equivalent list using the above solution:
In [40]: vol_ext = (1000, 500)
In [41]: (X, Y) = np.meshgrid(np.arange(vol_ext[1]), np.arange(vol_ext[0]))
In [42]: pos_data2 = [(Y - 1) * ratio[0], (X - 1) * ratio[1]]
In [43]: np.allclose(pos_data[0], pos_data2[0])
Out[43]: True
In [44]: np.allclose(pos_data[1], pos_data2[1])
Out[44]: True
Making this adaptive based on vol_ext
We can combine this with a list comprehension, taking advantage of the fact that numpy.meshgrid returns a list of arrays:
pts = [np.arange(v) for v in reversed(vol_ext)]
pos_data = [(D - 1) * R for (D, R) in zip(reversed(np.meshgrid(*pts)), ratio)]
The first line generates the range of points for each desired dimension as a list. The list comprehension then iterates over each grid of points, paired via zip with the matching ratio, and applies the calculation per slice.
Example Run
In [49]: pts = [np.arange(v) for v in reversed(vol_ext)]
In [50]: pos_data2 = [(D - 1) * R for (D, R) in zip(reversed(np.meshgrid(*pts)), ratio)]
In [51]: np.all([np.allclose(p, p2) for (p, p2) in zip(pos_data, pos_data2)])
Out[51]: True
The last line goes through each slice and ensures both lists align.
I think there are a couple of things to consider:
is there a reason that pos_data has to be a list?
don't introduce another variable (d) that you have to hard-code when it is always just the length of some other tuple.
With these in mind, you can solve your problem of variable numbers of for loops using itertools.product, which basically is just a shorthand for nested for loops. The positional args for product are the ranges of the loops.
My implementation is:
from itertools import product
vol_ext = (1000, 500) # If d = 3, this will have another entry
ratio = [5.0, 8.0] # Again, if d = 3, it will have another entry
pos_data_new = np.zeros((len(ratio), *vol_ext))

# Now loop over each dimension in `vol_ext`. Since `product` expects
# positional arguments, we have to unpack a tuple of `range(vol)`s.
for inds in product(*(range(vol) for vol in vol_ext)):
    # `inds` is now a tuple. Combine it with a slice in the first
    # dimension, and use it as an array on the right-hand side
    # to do the computation.
    pos_data_new[(slice(None),) + inds] = (np.array(inds) - 1) * ratio
I don't think this will be any faster, but it certainly looks nicer.
Note that pos_data_new is now a single array; getting it back as a list along the first dimension, as in the original example, is simple enough.
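For instance:
pos_data = list(pos_data_new)  # a length-d list of arrays, each shaped vol_ext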
I have a numpy array filled with intensity readings at different radii in a uniform circle (for context, this is a 1D radiative transfer project for protostellar formation models: while much better models exist, my supervisor wants me to have the experience of producing one so I understand how others work).
I want to take that 1d array, and "rotate" it through a circle, forming a 2D array of intensities that could then be shown with imshow (or, with a bit of work, aplpy). The final array needs to be 2d, and the projection needs to be Cartesian, not polar.
I can do it with nested for loops, and I can do it with lookup tables, but I have a feeling there must be a neat way of doing it in numpy or something.
Any ideas?
EDIT:
I have had to go back and recreate the (frankly horrible) mess of for loops and if statements I had before. If I really tried, I could probably get rid of one of the loops and one of the if statements by condensing things down. However, the aim is not to make it work with for loops, but to see if there is a built-in way to rotate the array.
impB is an array that differs slightly from what I stated it was before. It's actually just a list of radii where particles are detected. I then bin those into radius bins to get the intensity (or frequency, if you prefer) in each radius. R is the scale factor for my radius, since I run the model in a dimensionless way. iRes is a resolution scale factor, essentially how often I want to sample my radial bins. Everything else should be clear.
import math
import numpy as np

radJ = np.zeros((2*iRes, 2*iRes))  # Create array of 2xRadius square

for i in range(iRes):
    n = len(impB[np.where(impB[:] < ((i+1.) * (R / iRes)))])  # Count number of things within this radius +1
    m = len(impB[np.where(impB[:] <= ((i) * (R / iRes)))])  # Count number of things in this radius
    a = (((i + 1) * (R / iRes))**2 - ((i) * (R / iRes))**2) * math.pi  # A normalisation factor based on area... don't ask
    for x in range(iRes):
        for y in range(iRes):
            if (x**2 + y**2) < (i + 1)**2:  # Checks for radius bin i, and puts in cartesian space
                if (x**2 + y**2) >= i**2:
                    radJ[x+iRes, y+iRes] = (n-m) / a  # Put in actual intensity bins
                    radJ[x+iRes, -y+iRes] = (n-m) / a
                    radJ[-x+iRes, y+iRes] = (n-m) / a
                    radJ[-x+iRes, -y+iRes] = (n-m) / a
Nested loops are a simple approach for this. With ri_data_r and y containing your radius values (the offsets from the middle pixel) and the array to rotate, respectively, I would suggest:
from scipy import interpolate
import numpy as np

y = np.random.rand(100)
ri_data_r = np.linspace(-len(y)/2, len(y)/2, len(y))
interpol_index = interpolate.interp1d(ri_data_r, y)
xv = np.arange(-1, 1, 0.01)  # adjust your matrix values here
X, Y = np.meshgrid(xv, xv)
profilegrid = np.ones(X.shape, float)
for i, x in enumerate(X[0, :]):
    for k, yy in enumerate(Y[:, 0]):  # use yy so the data array y is not shadowed
        current_radius = np.sqrt(x ** 2 + yy ** 2)
        profilegrid[i, k] = interpol_index(current_radius)
print(profilegrid)
This will give you exactly what you are looking for. You just have to take your array and calculate a symmetric array ri_data_r of the same length as your data array, containing the distance between each data point and the middle of the array. The code above does this automatically.
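Since interp1d accepts array arguments, the two loops can also be dropped entirely; a vectorized sketch of the same idea (the result is symmetric, so the flipped axis order relative to the loop version does not change it):
X, Y = np.meshgrid(xv, xv)
profilegrid = interpol_index(np.sqrt(X ** 2 + Y ** 2))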
I stumbled upon this question in a different context and I hope I understood it right. Here are two other ways of doing this. The first uses skimage.transform.warp with interpolation of the desired order (here order=0, nearest-neighbour). This method is slower but more precise and needs less memory than the second one.
The second one does not use interpolation, so it is faster but also less precise, and it needs far more memory because it stores every 2D array containing one tilt until the end, when they are averaged with np.nanmean().
The difference between the two solutions stems from the problem of handling the center of the final image, where the tilts overlap the most: the first would just keep adding values with each tilt, ending up outside the original range. This is "solved" by clipping the matrix in each step to a global_min and global_max (consult the code). The second solves it by taking the mean of the tilts where they overlap, which forces us to use np.nan.
Please read the Example of usage and Sanity check sections in order to understand the plot titles.
Solution 1:
import numpy as np
from skimage.transform import warp

def rotate_vector(vector, deg_angle):
    # Credit goes to skimage.transform.radon
    assert vector.ndim == 1, 'Pass only 1D vectors, e.g. use array.ravel()'
    center = vector.size // 2
    square = np.zeros((vector.size, vector.size))
    square[center, :] = vector
    rad_angle = np.deg2rad(deg_angle)
    cos_a, sin_a = np.cos(rad_angle), np.sin(rad_angle)
    R = np.array([[cos_a, sin_a, -center * (cos_a + sin_a - 1)],
                  [-sin_a, cos_a, -center * (cos_a - sin_a - 1)],
                  [0, 0, 1]])
    # Approx. 80% of time is spent in this function
    return warp(square, R, clip=False, output_shape=(vector.size, vector.size))

def place_vectors(vectors, deg_angles):
    matrix = np.zeros((vectors.shape[-1], vectors.shape[-1]))
    global_min, global_max = 0, 0
    for i, deg_angle in enumerate(deg_angles):
        tilt = rotate_vector(vectors[i], deg_angle)
        global_min = min(global_min, tilt.min())
        global_max = max(global_max, tilt.max())
        matrix += tilt
    matrix = np.clip(matrix, global_min, global_max)
    return matrix
Solution 2:
Credit for the idea goes to my colleague Michael Scherbela.
import numpy as np

def rotate_vector(vector, deg_angle):
    assert vector.ndim == 1, 'Pass only 1D vectors, e.g. use array.ravel()'
    square = np.ones([vector.size, vector.size]) * np.nan
    radius = vector.size // 2
    r_values = np.linspace(-radius, radius, vector.size)
    rad_angle = np.deg2rad(deg_angle)
    ind_x = np.round(np.cos(rad_angle) * r_values + vector.size/2).astype(int)
    ind_y = np.round(np.sin(rad_angle) * r_values + vector.size/2).astype(int)
    ind_x = np.clip(ind_x, 0, vector.size - 1)
    ind_y = np.clip(ind_y, 0, vector.size - 1)
    square[ind_y, ind_x] = vector
    return square

def place_vectors(vectors, deg_angles):
    matrices = []
    for deg_angle, vector in zip(deg_angles, vectors):
        matrices.append(rotate_vector(vector, deg_angle))
    matrix = np.nanmean(np.array(matrices), axis=0)
    return np.nan_to_num(matrix, copy=False, nan=0.0)
Example of usage:
r = 100 # Radius of the circle, i.e. half the length of the vector
n = int(np.pi * r / 8) # Number of vectors, e.g. number of tilts in tomography
v = np.ones(2*r) # One vector, e.g. one tilt in tomography
V = np.array([v]*n) # All vectors, e.g. a sinogram in tomography
# Rotate 1D vector to a specific angle (output is 2D)
angle = 45
rotated = rotate_vector(v, angle)
# Rotate each row of a 2D array according to its angle (output is 2D)
angles = np.linspace(-90, 90, num=n, endpoint=False)
inplace = place_vectors(V, angles)
Sanity check:
These are just simple checks which by no means cover all possible edge cases. Depending on your use case you might want to extend the checks and adjust the method.
# I. Sanity check
# Assuming n <= πr and v = np.ones(2r)
# Then sum(inplace) should be approx. equal to (n * (2πr - n)) / π
# which is an area that should be covered by the tilts
desired_area = (n * (2 * np.pi * r - n)) / np.pi
covered_area = np.sum(inplace)
covered_frac = covered_area / desired_area
print(f'This method covered {covered_frac * 100:.2f}% '
'of the area which should be covered in total.')
# II. Sanity check
# Assuming n <= πr and v = np.ones(2r)
# Then a circle M with radius m <= r should be the largest circle which
# is fully covered by the vectors. I.e. its mean should be no less than 1.
# If n = πr then m = r.
# m = n / π
m = int(n / np.pi)
# Code for circular mask not included
mask = create_circular_mask(2*r, 2*r, center=None, radius=m)
m_area = np.mean(inplace[mask])
print(f'Full radius r={r}, radius m={m}, mean(M)={m_area:.4f}.')
Code for plotting:
import matplotlib.pyplot as plt
plt.figure(figsize=(16, 8))
plt.subplot(121)
rotated = np.nan_to_num(rotated) # not necessary in case of the first method
plt.title(
f'Output of rotate_vector(), angle={angle}°\n'
f'Sum is {np.sum(rotated):.2f} and should be {np.sum(v):.2f}')
plt.imshow(rotated, cmap=plt.cm.Greys_r)
plt.subplot(122)
plt.title(
f'Output of place_vectors(), r={r}, n={n}\n'
f'Covered {covered_frac * 100:.2f}% of the area which should be covered.\n'
f'Mean of the circle M is {m_area:.4f} and should be 1.0.')
plt.imshow(inplace)
circle = plt.Circle((r, r), m, color='r', fill=False)
plt.gca().add_artist(circle)
plt.gca().legend([circle], [f'Circle M (m={m})'])