I want to use a high-dimensional numpy array to store the norms of weighted sums of matrices.
For example:
mat1, mat2, mat3, mat4 = np.random.rand(3, 3), np.random.rand(3, 3), np.random.rand(3, 3), np.random.rand(3, 3)
res = np.empty((8, 7, 6, 5))
for i in range(8):
    for j in range(7):
        for p in range(6):
            for q in range(5):
                res[i, j, p, q] = np.linalg.norm(i * mat1 + j * mat2 + p * mat3 + q * mat4)
Are there any methods to avoid this nested loop?
Solution
Here's one way you can do it, via adding axes with None (equivalent to np.newaxis):
def weighted_norms(mat1, mat2, mat3, mat4):
    P = mat1 * np.arange(8)[:, None, None]
    Q = mat2 * np.arange(7)[:, None, None]
    R = mat3 * np.arange(6)[:, None, None]
    S = mat4 * np.arange(5)[:, None, None]
    summation = S + R[:, None] + Q[:, None, None] + P[:, None, None, None]
    return np.linalg.norm(summation, axis=(4, 5))
Veracity and a simple benchmark
In [6]: output = weighted_norms(mat1, mat2, mat3, mat4)
In [7]: np.allclose(output, res)
Out[7]: True
In [8]: %timeit weighted_norms(mat1, mat2, mat3, mat4)
71.3 µs ± 446 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Explanation
By adding two new axes to the np.arange objects, you can force the broadcasting you want, producing 0 * mat1, 1 * mat1, 2 * mat1 ....
The real tricky bit is then constructing the (8, 7, 6, 5, 3, 3) array (which is the shape before evaluating the norm which collapses the last two dimensions).
Notice that the summation of all the weighted 3D arrays starts with the last array, S, and progressively adds more weighted 3D arrays. The way it does this is by adding a new axis to broadcast over at each step.
For example, the shape of S is (5, 3, 3) and in order to correctly add R you need to insert a new axis. So the shape of R goes from (6, 3, 3) to (6, 1, 3, 3). This second dimension of size 1 is what allows us to broadcast the sum of S over R such that each array in the 3D S is added to each array in R (that's one level of nested loop).
Then we need to add Q (for every array in Q, for every array in R, for every array in S), so we need to insert two new axes turning Q from (7, 3, 3) to (7, 1, 1, 3, 3).
Finally, P goes from (8, 3, 3) to (8, 1, 1, 1, 3, 3).
It may help to "visualize" this by overlaying the shapes:
              (5, 3, 3)   <- S
+          (6, 1, 3, 3)   <- R[:, None]
-----------------------
           (6, 5, 3, 3)
+       (7, 1, 1, 3, 3)   <- Q[:, None, None]
-----------------------
        (7, 6, 5, 3, 3)
+    (8, 1, 1, 1, 3, 3)   <- P[:, None, None, None]
-----------------------
     (8, 7, 6, 5, 3, 3)
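For comparison, here is a minimal sketch of the same construction done in a single expression (my variant, not the answer's code): give each weight vector its own axis up front and let broadcasting handle all four levels at once.

# Sketch, assuming mat1..mat4 are the 3x3 arrays from the question.
# Each arange gets a distinct axis among the first four dimensions;
# multiplying by a (3, 3) matrix broadcasts it onto the trailing two axes.
import numpy as np

i = np.arange(8)[:, None, None, None, None, None]  # (8, 1, 1, 1, 1, 1)
j = np.arange(7)[None, :, None, None, None, None]  # (1, 7, 1, 1, 1, 1)
p = np.arange(6)[None, None, :, None, None, None]  # (1, 1, 6, 1, 1, 1)
q = np.arange(5)[None, None, None, :, None, None]  # (1, 1, 1, 5, 1, 1)

summation = i * mat1 + j * mat2 + p * mat3 + q * mat4  # (8, 7, 6, 5, 3, 3)
res2 = np.linalg.norm(summation, axis=(4, 5))          # (8, 7, 6, 5)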
Generalizing
Here's a generalized version using a helper function for adding axes just to clean up the code a little:
from typing import Tuple
import numpy as np
def add_axes(x: np.ndarray, n: int) -> np.ndarray:
    """
    Inserts `n` new axes into `x` from axis 1 onward.
    e.g., for `x.shape == (3, 3)`, `add_axes(x, 2)` gives shape `(3, 1, 1, 3)`
    """
    return np.expand_dims(x, axis=(*range(1, n + 1),))
def weighted_norms(arrs: Tuple[np.ndarray, ...], weights: Tuple[int, ...]) -> np.ndarray:
    if len(arrs) != len(weights):
        raise ValueError("Number of arrays must match number of weights")
    # start from zeros: np.empty would add uninitialized memory into the sum
    summation = np.zeros((weights[-1], *arrs[-1].shape))
    for i, (x, w) in enumerate(zip(arrs[::-1], weights[::-1])):
        summation = summation + add_axes(x * add_axes(np.arange(w), 2), i)
    return np.linalg.norm(summation, axis=(-1, -2))
Usage:
In [10]: arrs = (mat1, mat2, mat3, mat4)
In [11]: weights = (8, 7, 6, 5)
In [12]: output = weighted_norms(arrs, weights)
In [13]: np.allclose(output, res)
Out[13]: True
In [14]: %timeit weighted_norms(arrs, weights)
109 µs ± 3.07 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
I am looking for a more optimized way to convert a (n,n) or (n,n,1) matrix to a (n,n,3) matrix. I start out with an (n,n,3), but my dimensions get reduced after I perform a sum over the second axis to (n,n). Essentially, I want to keep the original size of the array and have the second axis just repeated 3 times. The reason I need this is that I will later be broadcasting it with another (n,n,3) array, but they need the same dimensions.
My current method works, but does not seem elegant.
a = np.random.random((n, n))
b = a.flatten().tolist()
a = np.array(list(zip(b, b, b)))  # list() needed on Python 3, where zip is lazy
a.shape = n, n, 3
This setup has the desired result, but is clunky and hard to follow. Is there perhaps a way to go directly from an (n,n) to an (n,n,3) by duplicating the second index? or perhaps a way to not downsize the array to begin with?
None or np.newaxis is a common way of adding a dimension to an array. reshape with (3,3,1) works just as well:
In [64]: arr=np.arange(9).reshape(3,3)
In [65]: arr1 = arr[...,None]
In [66]: arr1.shape
Out[66]: (3, 3, 1)
repeat, as a function or a method, replicates this.
In [72]: arr2=arr1.repeat(3,axis=2)
In [73]: arr2.shape
Out[73]: (3, 3, 3)
In [74]: arr2[0,0,:]
Out[74]: array([0, 0, 0])
But you might not need to do this. With broadcasting a (3,3,1) works with a (3,3,3).
In [75]: (arr1+arr2).shape
Out[75]: (3, 3, 3)
In fact it will broadcast with a (3,) to produce (3,3,3).
In [77]: arr1+np.ones(3,int)
Out[77]:
array([[[1, 1, 1],
[2, 2, 2],
...
[[7, 7, 7],
[8, 8, 8],
[9, 9, 9]]])
So arr1+np.zeros(3,int) is another way of expanding that (3,3,1) to (3,3,3).
The broadcasting rules are:
(3,3,1) + (3,) => (3,3,1) + (1,1,3) => (3,3,3)
broadcasting adds dimensions at the start as needed.
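A quick way to check such rules without building arrays, assuming NumPy 1.20 or later (my addition):

import numpy as np
# broadcast_shapes applies the same rules the arithmetic would
print(np.broadcast_shapes((3, 3, 1), (3,)))  # (3, 3, 3)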
When you sum on an axis, you can keep the original number of dimensions with a parameter:
In [78]: arr2.sum(axis=2).shape
Out[78]: (3, 3)
In [79]: arr2.sum(axis=2, keepdims=True).shape
Out[79]: (3, 3, 1)
This is handy if you want to subtract the mean from an array along any dimension:
arr2-arr2.mean(axis=2, keepdims=True)
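For instance, a minimal check (my example) that the keepdims pattern centers each slice:

import numpy as np
arr = np.arange(24, dtype=float).reshape(2, 3, 4)
centered = arr - arr.mean(axis=2, keepdims=True)  # (2, 3, 4) - (2, 3, 1)
print(np.allclose(centered.mean(axis=2), 0))      # True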
You can first create a new axis (axis 2) on a and then use np.repeat along that new axis:
np.repeat(a[:,:,None], 3, axis = 2)
Or another approach, flatten the array, repeat elements and then reshape:
np.repeat(a.ravel(), 3).reshape(n,n,3)
The result comparison:
import numpy as np
n = 4
a = np.random.random((n, n))
b = a.flatten().tolist()
a1 = np.array(list(zip(b, b, b)))  # list() needed on Python 3
a1.shape = n, n, 3
# a1 is the result from the original method
(np.repeat(a[:, :, None], 3, axis=2) == a1).all()
# True
(np.repeat(a.ravel(), 3).reshape(n, n, 3) == a1).all()
# True
Timing with the built-in numpy.repeat also shows a speedup:
import numpy as np
n = 4
a = np.random.random((n, n))
def rep():
    b = a.flatten().tolist()
    a1 = np.array(list(zip(b, b, b)))
    a1.shape = n, n, 3
%timeit rep()
# 100000 loops, best of 3: 7.11 µs per loop
%timeit np.repeat(a[:,:,None], 3, axis = 2)
# 1000000 loops, best of 3: 1.64 µs per loop
%timeit np.repeat(a.ravel(), 3).reshape(4,4,3)
# 1000000 loops, best of 3: 1.9 µs per loop
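Two more standard options, not benchmarked above (my additions): np.tile, which copies, and np.broadcast_to, which returns a read-only view and so costs almost nothing when the (n, n, 3) array is only needed for broadcasting against another (n, n, 3) array.

import numpy as np
n = 4
a = np.random.random((n, n))
a3_copy = np.tile(a[:, :, None], (1, 1, 3))          # real copy, (n, n, 3)
a3_view = np.broadcast_to(a[:, :, None], (n, n, 3))  # read-only view, no copy
print(a3_copy.shape, a3_view.shape)  # (4, 4, 3) (4, 4, 3)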
If I try
x = np.append(x, (2,3))
the tuple (2,3) does not get appended to the end of the array, rather 2 and 3 get appended individually, even if I originally declared x as
x = np.array([], dtype = tuple)
or
x = np.array([], dtype = (int,2))
What is the proper way to do this?
I agree with @user2357112's comment:
appending to NumPy arrays is catastrophically slower than appending to ordinary lists. It's an operation that they are not at all designed for
Here's a little benchmark:
# measure execution time
import timeit
import numpy as np

def f1(num_iterations):
    x = np.dtype((np.int32, (2, 1)))
    for i in range(num_iterations):
        x = np.append(x, (i, i))

def f2(num_iterations):
    x = np.array([(0, 0)])
    for i in range(num_iterations):
        x = np.vstack((x, (i, i)))

def f3(num_iterations):
    x = []
    for i in range(num_iterations):
        x.append((i, i))
    x = np.array(x)

N = 50000
print(timeit.timeit('f1(N)', setup='from __main__ import f1, N', number=1))
print(timeit.timeit('f2(N)', setup='from __main__ import f2, N', number=1))
print(timeit.timeit('f3(N)', setup='from __main__ import f3, N', number=1))
I wouldn't use either np.append or vstack; I'd just build the Python list properly and then use it to construct the np.array.
EDIT
Here's the benchmark output on my laptop:
append: 12.4983000173
vstack: 1.60663705793
list: 0.0252208517006
[Finished in 14.3s]
You need to supply the shape to numpy dtype, like so:
x = np.dtype((np.int32, (1,2)))
x = np.append(x,(2,3))
Outputs
array([dtype(('<i4', (1, 2))), 2, 3], dtype=object)
Reference: http://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html
If I understand what you mean, you can use vstack:
>>> a = np.array([(1,2),(3,4)])
>>> a = np.vstack((a, (4,5)))
>>> a
array([[1, 2],
[3, 4],
[4, 5]])
I do not have any special insight as to why this works, but:
x = np.array([1, 3, 2, (5,7), 4], dtype=object)  # dtype=object is required for ragged input on NumPy >= 1.24
mytuple = [(2, 3)]
mytuplearray = np.empty(len(mytuple), dtype=object)
mytuplearray[:] = mytuple
y = np.append(x, mytuplearray)
print(y) # [1 3 2 (5, 7) 4 (2, 3)]
As others have correctly pointed out, this is a slow operation with numpy arrays. If you're just building some code from scratch, try to use some other data type. But if you know your array will always remain small or you're not going to append much or if you have existing code that you need to tweak quickly, then go ahead.
simplest way:
x=np.append(x,None)
x[-1]=(2,3)
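A quick check of what this trick produces (my example): appending None promotes the array to object dtype, which is what lets the tuple land as a single element.

import numpy as np
x = np.array([1, 3, 2], dtype=object)
x = np.append(x, None)  # promotes/keeps dtype=object
x[-1] = (2, 3)
print(x)        # [1 3 2 (2, 3)]
print(x.dtype)  # object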
np.append is easy to use with a case like:
In [94]: np.append([1,2,3],4)
Out[94]: array([1, 2, 3, 4])
but its first example is harder to understand. It shows the same sort of flat concatenate that bothers you:
>>> np.append([1, 2, 3], [[4, 5, 6], [7, 8, 9]])
array([1, 2, 3, 4, 5, 6, 7, 8, 9])
Stripped of dimensional tests, np.append does
In [166]: np.append(np.array([1,2],int),(2,3))
Out[166]: array([1, 2, 2, 3])
In [167]: np.concatenate([np.array([1,2],int),np.array((2,3))])
Out[167]: array([1, 2, 2, 3])
So except for the simplest cases you need to understand what np.array((2,3)) does, and how concatenate handles dimensions.
So apart from the speed issues, np.append can be trickier to use than the interface suggests. The parallels to list append are only superficial.
As for append (or concatenate) with dtype=object (not dtype=tuple) or a compound dtype ('i,i'), I couldn't tell you what happens without testing. At a minimum the inputs should already be arrays, and should have a matching dtype. Otherwise the results can be unpredictable.
edit
Don't trust the timings in https://stackoverflow.com/a/38985245/901925. The functions don't produce the same things.
Corrected functions:
In [233]: def g1(num_iterations):
     ...:     x = np.ones((0, 2), int)
     ...:     for i in range(num_iterations):
     ...:         x = np.append(x, [(i, i)], axis=0)
     ...:     return x
     ...:
     ...: def g2(num_iterations):
     ...:     x = np.ones((0, 2), int)
     ...:     for i in range(num_iterations):
     ...:         x = np.vstack((x, (i, i)))
     ...:     return x
     ...:
     ...: def g3(num_iterations):
     ...:     x = []
     ...:     for i in range(num_iterations):
     ...:         x.append((i, i))
     ...:     x = np.array(x)
     ...:     return x
     ...:
In [234]: g1(3)
Out[234]:
array([[0, 0],
[1, 1],
[2, 2]])
In [235]: g2(3)
Out[235]:
array([[0, 0],
[1, 1],
[2, 2]])
In [236]: g3(3)
Out[236]:
array([[0, 0],
[1, 1],
[2, 2]])
np.append and np.vstack timings are much closer. Both use np.concatenate to do the actual joining. They differ in how the inputs are processed prior to sending them to concatenate.
In [237]: timeit g1(1000)
9.69 ms ± 6.25 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [238]: timeit g2(1000)
12.8 ms ± 7.53 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [239]: timeit g3(1000)
537 µs ± 2.22 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
And the wrong results: f1 produces a 1d object-dtype array, because the starting value is an object-dtype array (a dtype object, not an array of pairs) and there's no axis parameter. f2 duplicates the starting array.
In [240]: f1(3)
Out[240]: array([dtype(('<i4', (2, 1))), 0, 0, 1, 1, 2, 2], dtype=object)
In [241]: f2(3)
Out[241]:
array([[0, 0],
[0, 0],
[1, 1],
[2, 2]])
Not only is it slower to use np.append or np.vstack in a loop, it is also hard to do it right.
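When the final size is known up front, preallocating and filling in place is one more pattern the timings above don't cover (my addition, a sketch):

import numpy as np

n = 5
x = np.empty((n, 2), dtype=int)  # allocate once
for i in range(n):
    x[i] = (i, i)                # fill rows in place, no reallocation
print(x.shape)  # (5, 2)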
I want to get the rank of each element, so I use argsort in numpy:
np.argsort(np.array((1,1,1,2,2,3,3,3,3)))
array([0, 1, 2, 3, 4, 5, 6, 7, 8])
It gives equal elements different ranks. Can I get equal elements the same rank, like:
array([0, 0, 0, 3, 3, 5, 5, 5, 5])
If you don't mind a dependency on scipy, you can use scipy.stats.rankdata, with method='min':
In [14]: a
Out[14]: array([1, 1, 1, 2, 2, 3, 3, 3, 3])
In [15]: from scipy.stats import rankdata
In [16]: rankdata(a, method='min')
Out[16]: array([1, 1, 1, 4, 4, 6, 6, 6, 6])
Note that rankdata starts the ranks at 1. To start at 0, subtract 1 from the result:
In [17]: rankdata(a, method='min') - 1
Out[17]: array([0, 0, 0, 3, 3, 5, 5, 5, 5])
If you don't want the scipy dependency, you can use numpy.unique to compute the ranking. Here's a function that computes the same result as rankdata(x, method='min') - 1:
import numpy as np
def rankmin(x):
    # unique values, the inverse mapping, and the count of each value
    u, inv, counts = np.unique(x, return_inverse=True, return_counts=True)
    # exclusive cumulative sum of the counts: the sorted position at which
    # each unique value first appears, i.e. its min-rank
    csum = np.zeros_like(counts)
    csum[1:] = counts[:-1].cumsum()
    # map every element back to the min-rank of its value
    return csum[inv]
For example,
In [137]: x = np.array([60, 10, 0, 30, 20, 40, 50])
In [138]: rankdata(x, method='min') - 1
Out[138]: array([6, 1, 0, 3, 2, 4, 5])
In [139]: rankmin(x)
Out[139]: array([6, 1, 0, 3, 2, 4, 5])
In [140]: a = np.array([1,1,1,2,2,3,3,3,3])
In [141]: rankdata(a, method='min') - 1
Out[141]: array([0, 0, 0, 3, 3, 5, 5, 5, 5])
In [142]: rankmin(a)
Out[142]: array([0, 0, 0, 3, 3, 5, 5, 5, 5])
By the way, a single call to argsort() does not give ranks. You can find an assortment of approaches to ranking in the question Rank items in an array using Python/NumPy, including how to do it using argsort().
Alternatively, a pandas Series has a rank method, which does what you need with method="min":
import pandas as pd
pd.Series((1,1,1,2,2,3,3,3,3)).rank(method="min")
# 0 1
# 1 1
# 2 1
# 3 4
# 4 4
# 5 6
# 6 6
# 7 6
# 8 6
# dtype: float64
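To match the question's 0-based integer ranks exactly, a small follow-up (my sketch):

import numpy as np
import pandas as pd
a = np.array([1, 1, 1, 2, 2, 3, 3, 3, 3])
ranks = (pd.Series(a).rank(method="min") - 1).astype(int).to_numpy()
print(ranks)  # [0 0 0 3 3 5 5 5 5]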
With focus on performance, here's an approach -
def rank_repeat_based(arr):
    idx = np.concatenate(([0], np.flatnonzero(np.diff(arr)) + 1, [arr.size]))
    return np.repeat(idx[:-1], np.diff(idx))
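A quick check on the question's (sorted) example, as a sketch:

arr = np.array([1, 1, 1, 2, 2, 3, 3, 3, 3])
print(rank_repeat_based(arr))  # [0 0 0 3 3 5 5 5 5]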
For a generic case where the elements of the input array are not already sorted, we would need argsort() to keep track of the positions. So we would have a modified version, like so -
def rank_repeat_based_generic(arr):
    sidx = np.argsort(arr, kind='mergesort')
    idx = np.concatenate(([0], np.flatnonzero(np.diff(arr[sidx])) + 1, [arr.size]))
    return np.repeat(idx[:-1], np.diff(idx))[sidx.argsort()]
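And a quick check on an unsorted input, matching the rankdata result shown earlier (my sketch):

x = np.array([60, 10, 0, 30, 20, 40, 50])
print(rank_repeat_based_generic(x))  # [6 1 0 3 2 4 5]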
Runtime test
Testing out all the approaches listed thus far to solve the problem on a large dataset.
Sorted array case :
In [96]: arr = np.sort(np.random.randint(1,100,(10000)))
In [97]: %timeit rankdata(arr, method='min') - 1
1000 loops, best of 3: 635 µs per loop
In [98]: %timeit rankmin(arr)
1000 loops, best of 3: 495 µs per loop
In [99]: %timeit (pd.Series(arr).rank(method="min")-1).values
1000 loops, best of 3: 826 µs per loop
In [100]: %timeit rank_repeat_based(arr)
10000 loops, best of 3: 200 µs per loop
Unsorted case :
In [106]: arr = np.random.randint(1,100,(10000))
In [107]: %timeit rankdata(arr, method='min') - 1
1000 loops, best of 3: 963 µs per loop
In [108]: %timeit rankmin(arr)
1000 loops, best of 3: 869 µs per loop
In [109]: %timeit (pd.Series(arr).rank(method="min")-1).values
1000 loops, best of 3: 1.17 ms per loop
In [110]: %timeit rank_repeat_based_generic(arr)
1000 loops, best of 3: 1.76 ms per loop
I've written a function for a similar purpose, using pure Python and NumPy only. Please have a look; I put comments in as well.
def my_argsort(array):
    # this type conversion lets us work with python lists and pandas series
    array = np.array(array)
    # create a mapping for unique values: keys are values from the array,
    # values are their positions in the sorted unique values (sorting here
    # is what makes the mapping correct; a raw set has arbitrary order)
    unique_values = sorted(set(array))
    mapping = dict(zip(unique_values, range(len(unique_values))))
    # apply the mapping to our array
    # np.vectorize works similar to map(), and can work with dictionaries
    array = np.vectorize(mapping.get)(array)
    return array
Hope that helps. (Note that this produces dense ranks, e.g. [0 0 0 1 1 2 2 2 2] for the question's example, rather than the min-style ranks asked for.)
Complex solutions are unnecessary for this problem.
> ary = np.sort([1, 1, 1, 2, 2, 3, 3, 3, 3]) # or anything; must be sorted.
> a = np.diff(ary).cumsum(); a
array([0, 0, 1, 1, 2, 2, 2, 2])
> b = np.r_[0, a]; b # ties get first open rank
array([0, 0, 0, 1, 1, 2, 2, 2, 2])
> c = np.flatnonzero(ary[1:] != ary[:-1])
> np.r_[0, 1 + c][b] # ties get last open rank
array([0, 0, 0, 3, 3, 5, 5, 5, 5])
I am surprised this specific question hasn't been asked before, but I really didn't find it on SO nor in the documentation of np.sort.
Say I have a random numpy array holding integers, e.g:
> temp = np.random.randint(1,10, 10)
> temp
array([2, 4, 7, 4, 2, 2, 7, 6, 4, 4])
If I sort it, I get ascending order by default:
> np.sort(temp)
array([2, 2, 2, 4, 4, 4, 4, 6, 7, 7])
but I want the solution to be sorted in descending order.
Now, I know I can always do:
reverse_order = np.sort(temp)[::-1]
but is this last statement efficient? Doesn't it create a copy in ascending order, and then reverses this copy to get the result in reversed order? If this is indeed the case, is there an efficient alternative? It doesn't look like np.sort accepts parameters to change the sign of the comparisons in the sort operation to get things in reverse order.
temp[::-1].sort() sorts the array in place (sorting the reversed view in ascending order leaves the underlying array in descending order), whereas np.sort(temp)[::-1] creates a new array.
In [25]: temp = np.random.randint(1,10, 10)
In [26]: temp
Out[26]: array([5, 2, 7, 4, 4, 2, 8, 6, 4, 4])
In [27]: id(temp)
Out[27]: 139962713524944
In [28]: temp[::-1].sort()
In [29]: temp
Out[29]: array([8, 7, 6, 5, 4, 4, 4, 4, 2, 2])
In [30]: id(temp)
Out[30]: 139962713524944
>>> a=np.array([5, 2, 7, 4, 4, 2, 8, 6, 4, 4])
>>> np.sort(a)
array([2, 2, 4, 4, 4, 4, 5, 6, 7, 8])
>>> -np.sort(-a)
array([8, 7, 6, 5, 4, 4, 4, 4, 2, 2])
For short arrays I suggest using np.argsort() by finding the indices of the sorted negated array, which is slightly faster than reversing the sorted array:
In [37]: temp = np.random.randint(1,10, 10)
In [38]: %timeit np.sort(temp)[::-1]
100000 loops, best of 3: 4.65 µs per loop
In [39]: %timeit temp[np.argsort(-temp)]
100000 loops, best of 3: 3.91 µs per loop
Be careful with dimensions.
Let

x                                     # initial numpy array
I = np.argsort(x) or I = x.argsort()
y = np.sort(x)                        # (x.sort() sorts in place and returns None)
z                                     # reverse sorted array
Full Reverse
z = x[I[::-1]]
z = -np.sort(-x)
z = np.flip(y)
np.flip changed in NumPy 1.15; earlier versions (up to 1.14) required the axis argument. Solution: pip install --upgrade numpy.
First Dimension Reversed
z = y[::-1]
z = np.flipud(y)
z = np.flip(y, axis=0)
Second Dimension Reversed
z = y[::-1, :]
z = np.fliplr(y)
z = np.flip(y, axis=1)
Testing
Testing on a 100×10×10 array 1000 times.
Method       | Time (ms)
-------------+----------
y[::-1]      | 0.126659   # only in first dimension
-np.sort(-x) | 0.133152
np.flip(y)   | 0.121711
x[I[::-1]]   | 4.611778
x.sort()     | 0.024961
x.argsort()  | 0.041830
np.flip(x)   | 0.002026
This is mainly due to reindexing rather than argsort.
# Timing code
import time
import numpy as np
def timeit(fun, xs):
    t = time.time()
    for i in range(len(xs)):  # inline and map gave much worse results for x[-I], 5*t
        fun(xs[i])
    t = time.time() - t
    print(np.round(t, 6))
I, N = 1000, (100, 10, 10)
xs = np.random.rand(I,*N)
timeit(lambda x: np.sort(x)[::-1], xs)
timeit(lambda x: -np.sort(-x), xs)
timeit(lambda x: np.flip(np.sort(x)), xs)  # x.sort() returns None, so sort out of place here
timeit(lambda x: x[x.argsort()[::-1]], xs)
timeit(lambda x: x.sort(), xs)
timeit(lambda x: x.argsort(), xs)
timeit(lambda x: np.flip(x), xs)
np.flip() and reversed indexing are basically the same. Below is a benchmark using three different methods. It seems np.flip() is slightly faster. Negation is slower because it is applied twice, so reversing the sorted array is faster than that.
** Note that np.flip() is faster than np.fliplr() according to my tests.
def sort_reverse(x):
    return np.sort(x)[::-1]

def sort_negative(x):
    return -np.sort(-x)

def sort_flip(x):
    return np.flip(np.sort(x))
arr=np.random.randint(1,10000,size=(1,100000))
%timeit sort_reverse(arr)
%timeit sort_negative(arr)
%timeit sort_flip(arr)
and the results are:
6.61 ms ± 67.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
6.69 ms ± 64.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
6.57 ms ± 58.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Hello, I was searching for a solution to reverse-sort a two-dimensional numpy array, and I couldn't find anything that worked, but I think I have stumbled on a solution which I am uploading just in case anyone is in the same boat.
x=np.sort(array)
y=np.fliplr(x)
np.sort sorts ascending, which is not what you want, but the command fliplr flips the rows left to right! Seems to work!
Hope it helps you out!
I guess it's similar to the suggestion of -np.sort(-a) above, but I was put off that approach by the comment that it doesn't always work. Perhaps my solution won't always work either; however, I have tested it with a few arrays and it seems to be OK.
Unfortunately when you have a complex array, only np.sort(temp)[::-1] works properly. The two other methods mentioned here are not effective.
You could sort the array first (ascending by default) and then apply np.flip() (https://docs.scipy.org/doc/numpy/reference/generated/numpy.flip.html). FYI, it works with datetime objects as well.
Example:
x = np.array([2, 3, 1, 0])
x_sort_asc = np.sort(x)
print(x_sort_asc)
# [0 1 2 3]
x_sort_desc = np.flip(x_sort_asc)
print(x_sort_desc)
# [3 2 1 0]
Here is a quick trick
In[3]: import numpy as np
In[4]: temp = np.random.randint(1,10, 10)
In[5]: temp
Out[5]: array([5, 4, 2, 9, 2, 3, 4, 7, 5, 8])
In[6]: sorted = np.sort(temp)
In[7]: rsorted = list(reversed(sorted))
In[8]: sorted
Out[8]: array([2, 2, 3, 4, 4, 5, 5, 7, 8, 9])
In[9]: rsorted
Out[9]: [9, 8, 7, 5, 5, 4, 4, 3, 2, 2]
I suggest using this:
np.arange(start_index, end_index, intervals)[::-1]
for example:
np.arange(10, 20, 0.5)
np.arange(10, 20, 0.5)[::-1]
Then your result:
[ 19.5, 19. , 18.5, 18. , 17.5, 17. , 16.5, 16. , 15.5,
15. , 14.5, 14. , 13.5, 13. , 12.5, 12. , 11.5, 11. ,
10.5, 10. ]