Parallelizing array row similarity calculations in Python

I have a large-ish array artist_topic_probs (112,312 item rows by ~100 feature columns), and I want to calculate the pairwise cosine similarity between a (large) sample of random pairs of rows from this array. Here are the relevant bits of my current code:
import numpy as np

# the number of random pairs to check (10 million here)
random_sample_size = 10000000
# I want to make sure they're unique, and that I'm never comparing a row to itself
# so I generate my set of comparisons like so:
np.random.seed(99)
comps = set()
while len(comps) < random_sample_size:
    a = np.random.randint(0, 112312)
    b = np.random.randint(0, 112312)
    if a != b:
        comp = tuple(sorted([a, b]))
        comps.add(comp)
# convert to list at the end to ensure sort order
# not positive if this is needed...I've seen conflicting opinions
comps = list(sorted(comps))
This generates a list of tuples, where each tuple holds the two rows between which I'll calculate similarity. Then I just use a simple loop to calculate all the similarities:
from scipy.spatial.distance import cosine

c_dists = []
for a, b in comps:
    c_dists.append(cosine(artist_topic_probs[a], artist_topic_probs[b]))
(of course, cosine here gives distance, not a similarity, but we can easily get that with sim = 1.0 - dist. I used similarity in the title because it's the more common term)
This works fine, but isn't too fast, and I need to repeat the procedure many times. I have 32 cores to work with, so parallelization seems like a good bet, but I'm not sure the best way to go about it. My idea was something like:
import multiprocessing as mp

pool = mp.Pool(processes=32)
c_dists = [pool.apply(cosine, args=(artist_topic_probs[a], artist_topic_probs[b]))
           for a, b in comps]
But testing this approach out on my laptop with some test data hasn't been working (it just hangs, or at least is taking so much longer than the simple loop that I got sick of waiting and killed it). My concern is the indexing of the matrix being some sort of bottleneck, but I'm not sure. Any ideas on how to effectively parallelize this (or otherwise speed up the process)?

First of all, you might want to use itertools.combinations and random.sample to get unique pairs in the future, but it won't work in this case due to memory issues. Then, multiprocessing is not multithreading: spawning a new process involves a large system overhead. There is little sense in spawning a process for each individual task; a task must be well worth the overhead to justify starting a new process, so you'd better split all the work into separate jobs (into as many pieces as the number of cores you want to use). Also, don't forget that the multiprocessing implementation serialises the entire namespace and loads it into memory N times, where N is the number of processes. This can lead to intensive swapping if you don't have enough RAM to store N copies of your huge array, so you might want to reduce the number of cores.
Updated to restore initial order as you requested.
I made a test data-set of identical vectors, hence cosine must return a vector of zeros.
from __future__ import division, print_function
import math
import multiprocessing as mp
from scipy.spatial.distance import cosine
from operator import itemgetter
import itertools


def worker(enumerated_comps):
    return [(ind, cosine(artist_topic_probs[a], artist_topic_probs[b])) for ind, (a, b) in enumerated_comps]


def slice_iterable(iterable, chunk):
    """
    Slices an iterable into chunks of size n
    :param chunk: the number of items per slice
    :type chunk: int
    :type iterable: collections.Iterable
    :rtype: collections.Generator
    """
    _it = iter(iterable)
    return itertools.takewhile(
        bool, (tuple(itertools.islice(_it, chunk)) for _ in itertools.count(0))
    )


# Test data
artist_topic_probs = [range(10) for _ in xrange(10)]
comps = tuple(enumerate([(1, 2), (1, 3), (1, 4), (1, 5)]))

n_cores = 2
chunksize = int(math.ceil(len(comps) / n_cores))
jobs = tuple(slice_iterable(comps, chunksize))
pool = mp.Pool(processes=n_cores)
work_res = pool.map_async(worker, jobs)
c_dists = map(itemgetter(1), sorted(itertools.chain(*work_res.get())))
print(c_dists)
Output:
[2.2204460492503131e-16, 2.2204460492503131e-16, 2.2204460492503131e-16, 2.2204460492503131e-16]
These values are fairly close to zero.
P.S.
From the multiprocessing.Pool.apply docs:
Equivalent of the apply() built-in function. It blocks until the result is ready, so apply_async() is better suited for performing work in parallel. Additionally, func is only executed in one of the workers of the pool.

scipy.spatial.distance.cosine, as you can see by following the link, introduces significant overhead in your computations: for each invocation it computes the norms of the two vectors being compared, and for the size of your sample this amounts to 20 million norms computed. If you precompute the norms of your ~100 thousand vectors in advance, you can save approximately 60% of your computation time, because each call consists of a dot product, u·v, and two norm calculations, and each of these three operations has roughly the same operation count.
Further, you're using explicit loops. If you could move your logic inside a vectorized NumPy operation, you could trim another large slice of your computational time.
Finally, you talk about cosine similarity, but consider that scipy.spatial.distance.cosine computes the cosine distance instead. The relationship is simple, cs = 1 - cd, but I haven't seen this accounted for in your posted code.
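To make both points concrete, here is a minimal vectorized sketch (not the asker's code; it assumes artist_topic_probs and comps exist as described in the question, and for 10 million pairs you may want to process the index array in chunks to bound memory):
import numpy as np

idx = np.asarray(comps)                              # shape (n_pairs, 2)
norms = np.linalg.norm(artist_topic_probs, axis=1)   # one norm per row, computed once
A = artist_topic_probs[idx[:, 0]]                    # left row of each pair
B = artist_topic_probs[idx[:, 1]]                    # right row of each pair
dots = np.einsum('ij,ij->i', A, B)                   # row-wise dot products
sims = dots / (norms[idx[:, 0]] * norms[idx[:, 1]])  # cosine similarity
c_dists = 1.0 - sims                                 # cosine distance, matching scipy's convention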

Related

Python: how to speed up this function and make it more scalable?

I have the following function, which accepts an indicator matrix of shape (20,000 x 20,000), and I have to run the function 20,000 x 20,000 = 400,000,000 times. Note that the indicator_Matrix has to be in the form of a pandas DataFrame when passed as a parameter into the function, as my actual problem's dataframe has a time index and integer columns, but I have simplified this a bit for the sake of understanding the problem.
Pandas Implementation
import numpy as np
import pandas as pd

indicator_Matrix = pd.DataFrame(np.random.randint(0,2,[20000,20000]))

def operations(indicator_Matrix):
    s = indicator_Matrix.sum(axis=1)
    d = indicator_Matrix.div(s, axis=0)
    res = d[d>0].mean(axis=0)
    return res.iloc[-1]
I tried to improve it by using NumPy, but it is still taking ages to run. I also tried concurrent.futures.ThreadPoolExecutor, but it still takes a long time to run, with not much improvement over the list comprehension.
Numpy Implementation
indicator_Matrix = pd.DataFrame(np.random.randint(0,2,[20000,20000]))

def operations(indicator_Matrix):
    s = indicator_Matrix.to_numpy().sum(axis=1)
    d = (indicator_Matrix.to_numpy().T / s).T
    d = pd.DataFrame(d, index=indicator_Matrix.index, columns=indicator_Matrix.columns)
    res = d[d>0].mean(axis=0)
    return res.iloc[-1]

output = [operations(indicator_Matrix) for i in range(0,20000**2)]
Note that the reason I convert d to a dataframe again is that I need to obtain the column means and retain only the last column mean using .iloc[-1]. d[d>0].mean(axis=0) returns column means, i.e.
2478 1.0
0 1.0
Update: I am still stuck on this problem. I wonder if using GPU packages like cudf and CuPy on my local desktop would make any difference.
Assuming the answer of @CrazyChucky is correct, one can implement a faster parallel Numba implementation. The idea is to use plain loops and to read the data contiguously, which makes the computation cache-friendly/memory-efficient. Here is an implementation:
import numpy as np
import numba as nb

@nb.njit(['(int_[:,:],)', '(int_[:,::1],)', '(int_[::1,:],)'], parallel=True)
def compute_fastest(matrix):
    n, m = matrix.shape
    sum_by_row = np.zeros(n, matrix.dtype)
    is_row_major = matrix.strides[0] >= matrix.strides[1]
    if is_row_major:
        for i in nb.prange(n):
            s = 0
            for j in range(m):
                s += matrix[i, j]
            sum_by_row[i] = s
    else:
        for chunk_id in nb.prange(0, (n+63)//64):
            start = chunk_id * 64
            end = min(start+64, n)
            for j in range(m):
                for i2 in range(start, end):
                    sum_by_row[i2] += matrix[i2, j]
    count = 0
    s = 0.0
    for i in range(n):
        value = matrix[i, -1] / sum_by_row[i]
        if value > 0:
            s += value
            count += 1
    return s / count

# output = [compute_fastest(indicator_Matrix.to_numpy()) for i in range(0,20000**2)]
Pandas dataframes can contain both row-major and column-major arrays. Depending on the memory layout, it is better to iterate over the rows or over the columns; this is why there are two implementations of the sum based on is_row_major. There are also 3 Numba signatures: one for row-major contiguous arrays, one for column-major contiguous arrays, and one for non-contiguous arrays. Numba will compile the 3 function variants and automatically pick the best one at runtime. The JIT compiler of Numba can generate a faster implementation (e.g. using SIMD instructions) when the input 2D array is known to be contiguous.
Experimental Results
This computation is about 14.5 times faster than operations_simpler on my i5-9600KF processor (6 cores). It still takes a lot of time, but the computation is memory-bound and nearly optimal on my machine: it is bound by the main memory, which has to be read:
On a 2000x2000 dataframe with 32-bit integers:
- operations: 86.310 ms/iter
- operations_simpler: 5.450 ms/iter
- compute_fastest: 0.375 ms/iter
- optimal: 0.345-0.370 ms/iter
If you want to get faster code, then you need to use more compact data types. For example, a uint8 data type is large enough to contain the values 0 and 1, and it is 4 times smaller in memory than the default integer type on Windows. This means the code can be up to 4 times faster in this case. The smaller the data type, the faster the program. One could even try to pack 8 columns into 1 using bit tricks (see the sketch below), though that is generally significantly slower using Numba unless you have a lot of available cores.
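As an illustration of the bit-packing idea (a hypothetical sketch, assuming NumPy >= 1.17 for the count argument of unpackbits; it is not part of the answer's benchmark):
import numpy as np

# Pack 8 binary columns into one uint8 column, cutting memory use by 8x vs uint8
mat = np.random.randint(0, 2, (4, 16)).astype(np.uint8)
packed = np.packbits(mat, axis=1)  # shape (4, 2): 16 columns -> 2 bytes per row
row_sums = np.unpackbits(packed, axis=1, count=mat.shape[1]).sum(axis=1)
assert np.array_equal(row_sums, mat.sum(axis=1))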
Notes & Discussion
The above code works only with uniformly-typed columns. If this is not the case, you can split the dataframe into multiple column groups and convert each group to a NumPy array so as to then call the Numba function (modified to support groups); a sketch of that splitting is shown below. Note that the @CrazyChucky code has a similar issue: a dataframe with mixed column dtypes converted to a NumPy array results in an object-dtype NumPy array, which is very inefficient (especially as a row-major NumPy array).
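A hedged sketch of that splitting step (df stands in for the mixed-dtype dataframe; this is only an illustration, not the answer's code):
import pandas as pd

# Convert each dtype group separately so no group falls back to a slow
# object-dtype NumPy array.
arrays_by_dtype = {
    str(dtype): df.select_dtypes(include=[dtype]).to_numpy()
    for dtype in df.dtypes.unique()
}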
Note that using a GPU will not make the computation faster unless the input dataframe is already stored in GPU memory. Indeed, CPU-GPU data transfers are more expensive than just reading the RAM (due to the interconnect overhead, generally a fairly slow PCI link). Note that GPU memory is quite limited compared to CPU memory. If the target dataframe(s) do not need to be transferred, then using cudf is relatively simple and should give a small speed-up. For faster code, one needs to implement a fast CUDA kernel, but this is clearly far from easy for dataframes with mixed data types. In the end, the resulting speed-up should be main_ram_throughput / gpu_ram_throughput, assuming there is no data transfer. Note that this factor is generally 5-12. Note also that CUDA and cudf require an Nvidia GPU.
Finally, reducing the input data size or just the amount of computation is certainly the best solution (as indicated in the comment by @zvone), since the task is very computationally intensive.
You're doing some extra math you don't have to. In plain English, what you're doing is:
1. Summing each row
2. Turning the list of sums "sideways" and dividing each column by it
3. Taking the mean of each column, ignoring values ≤ 0
4. Returning only the rightmost mean
After step one, you no longer need anything but the rightmost column; you can ignore the other columns, only dividing and averaging the one whose result you care about. Changing your code accordingly:
def operations_simpler(indicator_matrix):
    sums = indicator_matrix.sum(axis=1)
    last_column = indicator_matrix.iloc[:, -1]
    divided = last_column / sums
    return divided[divided > 0].mean()
...yields the same result, and takes about a hundredth of the time. Extrapolating from shorter test runs, this cuts the time for 400,000,000 runs on my machine from about 114 years down to... about 324 days. Still not great. So far I've not managed to get it to run any faster by converting to NumPy, compiling with Numba, or employing multiprocessing, but I'll go ahead and post this for now in case it's helpful.
Note: You're unlikely to see any improvement from threading with compute-heavy work like this; if anything, you'd want to use multiprocessing. concurrent.futures offers executors for both. Threads are mostly useful to avoid waiting around for I/O.
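For what it's worth, a minimal process-based sketch of that suggestion (frames is a hypothetical list of independent dataframes; whether this beats the serial loop depends on the per-task cost versus the pickling/IPC overhead):
from concurrent.futures import ProcessPoolExecutor

# Each input and each result is pickled and shipped between processes,
# so the per-task work must be large enough to pay for that.
if __name__ == '__main__':
    with ProcessPoolExecutor() as executor:
        results = list(executor.map(operations_simpler, frames))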
As per the previous answer, you can use Numba, or you can try two other alternatives. One is Dask, a distributed computing package that can parallelize your function's execution by dividing your data into smaller chunks and distributing the computation across many CPU cores or even multiple machines.
import dask.array as da

def operations(indicator_matrix):
    s = indicator_matrix.sum(axis=1)
    d = indicator_matrix.div(s, axis=0)
    res = d[d > 0].mean(axis=0)
    return res.iloc[-1]

indicator_matrix_dask = da.from_array(indicator_matrix, chunks=(1000, 1000))
output_dask = indicator_matrix_dask.map_blocks(operations, dtype=float)
output = output_dask.compute()
Or you can use CuPy, which uses the GPU to speed up your function's execution:
import cupy as cp
import pandas as pd

def operations(indicator_matrix):
    s = cp.sum(indicator_matrix, axis=1)
    d = cp.divide(indicator_matrix.T, s).T
    d = pd.DataFrame(d, index=indicator_matrix.index, columns=indicator_matrix.columns)
    res = d[d > 0].mean(axis=0)
    return res.iloc[-1]

indicator_matrix_cupy = cp.asarray(indicator_matrix)
output_cupy = operations(indicator_matrix_cupy)
output = cp.asnumpy(output_cupy)

Efficient Way to Repeatedly Split Large NumPy Array and Record Middle

I have a large NumPy array nodes = np.arange(100_000_000) and I need to rearrange this array by:
1. Recording and then removing the middle value in the array
2. Split the array into the left half and right half
3. Repeat Steps 1-2 for each half
4. Stop when all values are exhausted
So, for a smaller input example nodes = np.arange(10), the output would be:
[5 2 8 1 4 7 9 0 3 6]
This was accomplished by naively doing:
import numpy as np

def split(node, out):
    mid = len(node) // 2
    out.append(node[mid])
    return node[:mid], node[mid+1:]

def reorder(a):
    nodes = [a.tolist()]
    out = []
    while nodes:
        tmp = []
        for node in nodes:
            for n in split(node, out):
                if n:
                    tmp.append(n)
        nodes = tmp
    return np.array(out)

if __name__ == "__main__":
    nodes = np.arange(10)
    print(reorder(nodes))
However, this is way too slow for nodes = np.arange(100_000_000) and so I am looking for a much faster solution.
You can vectorize your function with Numpy by working on groups of slices.
Here is an implementation:
import numpy as np

# Similar to [e for tmp in zip(a, b) for e in tmp],
# but on Numpy arrays and much faster
def interleave(a, b):
    assert len(a) == len(b)
    return np.column_stack((a, b)).reshape(len(a) * 2)

# n is the length of the input range (len(a) in your example)
def fast_reorder(n):
    if n == 0:
        return np.empty(0, dtype=np.int32)
    startSlices = np.array([0], dtype=np.int32)
    endSlices = np.array([n], dtype=np.int32)
    allMidSlices = np.empty(n, dtype=np.int32)  # Similar to "out" in your implementation
    midInsertCount = 0  # Actual size of allMidSlices
    # Generate a bunch of middle values as long as there are valid slices to split
    while midInsertCount < n:
        # Generate the new mid/left/right slices
        midSlices = (endSlices + startSlices) // 2
        # Computing the next slices is not needed for the last step
        if midInsertCount + len(midSlices) < n:
            # Generate the next slices (possibly with invalid ones)
            newStartSlices = interleave(startSlices, midSlices + 1)
            newEndSlices = interleave(midSlices, endSlices)
            # Discard invalid slices
            isValidSlices = newStartSlices < newEndSlices
            startSlices = newStartSlices[isValidSlices]
            endSlices = newEndSlices[isValidSlices]
        # Fast appending
        allMidSlices[midInsertCount:midInsertCount + len(midSlices)] = midSlices
        midInsertCount += len(midSlices)
    return allMidSlices[0:midInsertCount]
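As a quick sanity check against the example from the question:
print(fast_reorder(10))  # [5 2 8 1 4 7 9 0 3 6]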
On my machine, this is 89 times faster than your scalar implementation with the input np.arange(100_000_000), dropping from 2min35 to 1.75s. It also consumes far less memory (roughly 3-4 times less). Note that if you want faster code, then you probably need to use a native language like C or C++.
Edit:
The question has been updated to have a much smaller input array so I leave the below for historical reasons. Basically it was likely a typo but we often get accustomed to computers working with insanely large numbers and when memory is involved they can be a real problem.
There is already a numpy based solution submitted by someone else that I think fits the bill.
Your code requires an insane amount of RAM just to hold 100 billion 64-bit integers. Do you have 800 GB of RAM? Then you convert the NumPy array to a list, which will be substantially larger than the array (each packed 64-bit int in the NumPy array becomes a much less memory-efficient Python int object, and the list holds a pointer to each of those objects). Then you make a lot of slices of the list, which will not duplicate the data but will duplicate the pointers to the data and use even more RAM. You also append all the result values to a list, a single value at a time. Lists are generally very fast for adding items, but at such an extreme size this will not only be slow; the way the list is allocated is likely to be extremely wasteful RAM-wise and contribute to major problems (I believe they over-allocate when they reach a certain level of fullness, so you will end up allocating more RAM than you need and doing many allocations and likely copies). What kind of machine are you running this on? There are ways to improve your code, but unless you're running it on a supercomputer I don't know that you're ever going to finish that calculation. I only... only? have 32 GB of RAM, and I'm not going to even try to create a 100-billion-element int64 NumPy array, as I don't want to use up SSD write life on a mass of virtual memory.
As for improving your code: stick to NumPy arrays; don't change to a Python list, as that will greatly increase the RAM you need. Preallocate a NumPy array to put the answer in. Then you need a new algorithm. Anything recursive or recursive-like (i.e. a loop splitting the input) will require tracking a lot of state; your nodes list is going to be extraordinarily gigantic and again use a lot of RAM. You could use len(a) to mark values that are removed from your list and scan through the entire array each time to figure out what to do next, but that saves RAM in exchange for a tremendous amount of searching through a gigantic array. I feel like there is an algorithm to cut numbers from each end and place them in the output while only tracking the beginning and end, but I haven't figured it out, at least not yet.
I also think there is a simpler algorithm where you just track the number of splits you've done instead of making a giant list of slices and keeping it all in memory. Take the middle of the left half, then the middle of the right half, then count up one; when you take the middle of the left half's left half you know you have to jump to the right half, then the count is one, so you jump over to the original right half's left half, and so on... Based on the depth into the halves and the length of the input, you should be able to jump around without scanning or tracking all of those slices, though I haven't been able to dedicate much time to thinking this through.
With a problem of this nature, if you really need to push the limits, you should consider using C/C++ so you can be as efficient as possible with RAM usage, and because you're doing an insane number of tiny operations, which doesn't map well to Python performance.

Generate a specific number of permutations

I have browsed SO extensively and I have found many questions about generating all possible permutations, but none regarding generating a specific number of permutations.
I developed, thanks to many SO questions, a decent permutation test routine. However, I have to repeat it many times, and it is taking too long.
my code:
import numpy as np

def exact_mc_perm_test(ys, nmc, boolean_selection):
    # ys: all time series
    # boolean_selection: picks the subsample out of ys
    # nmc: number of shuffles
    # sample difference in mean
    mean_ys = np.mean(ys)
    diff = np.abs(np.mean(ys[boolean_selection]) - mean_ys)
    k = 0
    for j in np.arange(nmc):
        # in-place shuffling
        np.random.shuffle(ys)
        # difference now between the fixed full-series mean and the shuffled subsample values
        diff_shuffled = np.abs(np.mean(ys[boolean_selection]) - mean_ys)
        k += diff < diff_shuffled
    return k / nmc
I took this SO answer and modified it for my specific test.
I have to run it over a 3D array stored in an xarray dataset. The dataset has (lon, lat, time) coordinates, and I need to run the test for each (lon, lat) position (along the time dimension).
I run it using itertools.chain:
for ii in chain.from_iterable(zip(*dataset.variable())):
    iis = ii[selected_position].values
    ind_x = dataset.lon == ii.lon
    ind_y = dataset.lat == ii.lat
    dataset.perm_test[ind_y, ind_x] = exact_mc_perm_test1(iis, ii.values, 1000., selected_position)
Ideally I want to run a permutation test with 20000 permutations. The two loops (over (lon, lat) positions and over the 20000 shuffles) add up.
I am looking to speed up the permutation test code.
Therefore I thought about trying to generate a 2D array of shape (len(ys), 20000) with essentially 20000 shuffled copies of the ys array, then accessing them all at once and calculating the 20000 differences (diff in the code). (Or finding a trade-off between memory usage and looping, so maybe doing 5 loops of 4000 shuffles at a time.)
I could not figure out or find a way to do this.
The permutations command from itertools generates all the possible permutations which in my case are too many to handle.
I have looked at the random library but couldn't find something that fits my need. Any suggestion?
Take a look at compress() and permutations() from itertools:
from itertools import compress, permutations

for perm in compress(permutations(iterable, r=length), boolean_selection):
    print(perm)
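For the batched-shuffle idea described in the question, here is a hedged sketch (assuming ys is a 1-D NumPy array, boolean_selection a boolean mask of the same length, and NumPy >= 1.20 for Generator.permuted; for 20000 shuffles the matrix may be large, so it can also be built in blocks of a few thousand rows as suggested):
import numpy as np

rng = np.random.default_rng(0)
n_perm = 20000

# One row per shuffle: each row of `shuffled` is an independent permutation of ys
shuffled = rng.permuted(np.tile(ys, (n_perm, 1)), axis=1)

mean_ys = ys.mean()
observed = np.abs(ys[boolean_selection].mean() - mean_ys)
diffs_shuffled = np.abs(shuffled[:, boolean_selection].mean(axis=1) - mean_ys)

# Same statistic as k / nmc in the loop version
p_value = np.mean(observed < diffs_shuffled)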

How to compute outer product of two lists of arrays efficiently in some parallel fashion?

I have two lists of vectors.
import numpy as np

A = np.random.rand(100,2000)
B = np.random.rand(100,1000)
I need to calculate the outer product of the first entry of A with the first entry of B. Then the second, then the third and so on.
A naive loop
outers = []
for a, b in zip(A, B):
    outers.append(np.outer(a, b))
takes ≈ 730 ms (via %timeit) on my computer.
In the end outers is a 100 entry long list of 2000x1000 arrays, which is correct.
There must be a more efficient way of parallelising this task, since right now we first compute A[0] with B[0] and only then A[1] with B[1], whereas we could compute them all independently and in parallel.
If you want to do NumPy array operations in parallel, Dask is an excellent choice. For example, you can do this operation as follows:
import dask.array as da
dA = da.from_array(A, chunks=(10, A.shape[1]))
dB = da.from_array(B, chunks=(10, B.shape[1]))
task_graph = dA[:, :, None] * dB[:, None]
result = task_graph.compute()
The compute() step is flexible enough to apply the computation on multiple threads, multiple processes, multiple cores, multiple machines, etc.
For the particular example in your question, you're not going to gain much over a serial approach, as the overhead involved in chunking the input arrays and concatenating the output array is significant compared to the cost of simply doing 100 outer products. For larger problems, though, such an approach can lead to significant speedups.
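For reference, the same broadcasting pattern also works in plain, serial NumPy, which is often the simplest baseline to compare against (note that it materialises the full 100 x 2000 x 1000 result, about 1.6 GB in float64):
import numpy as np

A = np.random.rand(100, 2000)
B = np.random.rand(100, 1000)

# Batched outer products: outers[i] == np.outer(A[i], B[i])
outers = A[:, :, None] * B[:, None, :]  # shape (100, 2000, 1000)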

Python - Multi processing to mount an array

I'm using griddata to build ("mount") an array with a great number of slices, and I would like to know if I can calculate the function for each slice on each of my 4 cores in order to speed up the process.
from numpy import *

size = 8
Y = arange(2000)
X = arange(2000)
(xx, yy) = meshgrid(X, Y)
array = zeros((Y.shape[0], X.shape[0], size))
array[:,:,0] = 0
array[:,:,1] = X+Y
array[:,:,2] = X**2+Y**2+X+Y
array[:,:,3] = X**3+Y**3+X**2+Y**2+X+Y
array[:,:,4] = X**4+Y**4+X**3+Y**3+X**2+Y**2+X+Y
array[:,:,5] = X**5+Y**5+X**4+Y**4+X**3+Y**3+X**2+Y**2+X+Y
array[:,:,6] = X**6+Y**6+X**5+Y**5+X**4+Y**4+X**3+Y**3+X**2+Y**2+X+Y
array[:,:,7] = X**7+Y**7+X**6+Y**6+X**5+Y**5+X**4+Y**4+X**3+Y**3+X**2+Y**2+X+Y
So here I would like to calculate array[:,:,0] & array[:,:,1] with the first core, then array[:,:,2] & array[:,:,3] with the second core, and so on...?
----EDIT LATER---
There is no link between different "slices"...My different functions are independent
array[:,:,0] = 0
array[:,:,1] = X+Y
array[:,:,2] = X*np.cos(X)+Y*np.sin(Y)
array[:,:,3] = X**3+np.sin(X)+X**2+Y**2+np.sin(Y)
...
You can try with multiprocessing.Pool:
from multiprocessing import Pool
import numpy as np

size = 8
Y = np.arange(2000)
X = np.arange(2000)
(xx, yy) = np.meshgrid(X, Y)
array = np.zeros((Y.shape[0], X.shape[0], size))

def func(i):  # you need to call a function with Pool
    array_ = np.zeros((Y.shape[0], X.shape[0]))
    for j in range(1, i):
        array_ += X**j + Y**j
    return array_

if __name__ == '__main__':
    p = Pool(4)  # if you have 4 cores in your processor
    result = p.map(func, range(1, 8))
    for i in range(1, 8):
        array[:, :, i] = result[i-1]
Keep in mind that multiprocessing in Python does not share memory; that's why you have to create the array_ inside func and assemble the results in the for-loop at the end of the code.
As your application (with these dimensions) doesn't need a lot of computing time, it is possible that you will be slower with this method. Also, you will create multiple copies of all your variables, which may cause a memory overflow.
You should also double-check the func I wrote, as I didn't completely verify that it does what it is supposed to do :)
If you want to apply a single function over an array of data, then using e.g. a multiprocessing.Pool is a good solution, provided that both the input and output of the calculation are relatively small.
You want to do many different calculations to two input arrays, which results in an array being returned for every one of those calculations.
Since separate processes do not share memory, the X and Y arrays have to be transported to each worker process when it is started. And the result of each calculation (which is also a NumPy array of the same size as X and Y) has to be returned to the parent process.
Depending on e.g. the size of the arrays and the number of cores, the overhead from transferring all those arrays between the worker processes and the parent process via interprocess communication ("IPC") will cost time, reducing the advantage of using multiple cores.
Keep in mind that the parent process has to listen for and handle IPC requests from all the worker processes. So you've shifted the bottleneck from calculation to communication.
So it is not a given that multiprocessing will actually improve performance in this case. It depends on the details of the actual problem (number of cores, array size, amount of physical memory et cetera).
You will have to do some careful performance measurements using e.g. Pool or Process with realistic array sizes.
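A minimal measurement sketch along those lines (the slice function here is hypothetical and uses floats to avoid integer overflow for large powers; the Pool timing includes process start-up and IPC, which is exactly the overhead discussed above):
import time
import numpy as np
from multiprocessing import Pool

def one_slice(i):
    # one representative slice: a full (2000, 2000) result, like in the question
    X = np.arange(2000, dtype=np.float64)
    Y = np.arange(2000, dtype=np.float64)
    return np.zeros((2000, 2000)) + X ** i + Y ** i

if __name__ == '__main__':
    t0 = time.perf_counter()
    serial = [one_slice(i) for i in range(1, 8)]
    t1 = time.perf_counter()
    with Pool(4) as p:
        parallel = p.map(one_slice, range(1, 8))
    t2 = time.perf_counter()
    print('serial: %.4f s   pool: %.4f s' % (t1 - t0, t2 - t1))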
Three things:
The most important question is: why are you doing this?
Your NumPy build may already be making use of multiple cores. I am not sure off the top of my head how to check (see questions like this), or, if absolutely necessary, take a look at the Numexpr library: https://github.com/pydata/numexpr
About the "Y" in your likely XY problem - you are re-calculating data that you can instead re-use:
from numpy import *

size = 8
Y = arange(2000)
X = arange(2000)
(xx, yy) = meshgrid(X, Y)
array = zeros((Y.shape[0], X.shape[0], size))
array[..., 0] = 0
for i in range(1, size):
    array[..., i] = X ** i + Y ** i + array[..., i - 1]
