I have a numpy array, X, of shape (200, 200, 1500). I also have a function, func, that essentially returns the mean of an array (it does a few other things, but they are all numpy operations; you can think of it as np.mean). Now if I want to apply this function along the last axis I could just do np.apply_along_axis(func, 2, X). But I also have a truth array of shape (200, 200, 1500). I want to only apply func to places where the truth array has True, ignoring any places where the truth array is False. So going back to the np.mean example, it would take the mean at each (i, j) index along the last axis but ignore some arbitrary set of indices.
So in practice, my solution would be to convert X into a new array Y with shape (200, 200) whose elements are lists. This would be done using the truth array. Then I would apply func to each list in the array. The problem is this seems very time consuming, and I feel like there is a numpy-oriented solution for this. Is there?
If the array-of-lists approach I described is the best way, how would I go about combining X and the truth array to get Y?
Any suggestions or comments appreciated.
In [268]: X = np.random.randint(0,100,(200,200,1500))
Let's check how apply works with just np.mean:
In [269]: res = np.apply_along_axis(np.mean, 2, X)
In [270]: res.shape
Out[270]: (200, 200)
In [271]: timeit res = np.apply_along_axis(np.mean, 2, X)
1.2 s ± 36.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
An equivalent using iteration on the first two dimensions. I'm using reshape to make it easier to write; speed should be about the same with a double loop.
In [272]: res1 = np.reshape([np.mean(row) for row in X.reshape(-1,1500)],(200,200))
In [273]: np.allclose(res, res1)
Out[273]: True
In [274]: timeit res1 = np.reshape([np.mean(row) for row in X.reshape(-1,1500)],(200,200))
906 ms ± 13 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
So apply may be convenient, but it is not a speed tool.
For speed in numpy you need to maximize the use of compiled code and avoid unnecessary python-level loops.
In [275]: res2 = np.mean(X,axis=2)
In [276]: np.allclose(res2,res)
Out[276]: True
In [277]: timeit res2 = np.mean(X,axis=2)
120 ms ± 619 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
If using apply in your new case is hard, you don't lose anything by using something you do understand.
masked
In [278]: mask = np.random.randint(0,2, X.shape).astype(bool)
The [272] iteration can be adapted to work with mask:
In [279]: resM1 = np.reshape([np.mean(row[m]) for row,m in zip(X.reshape(-1,1500),mask.reshape(-1,1500))],X.shape[:2])
In [280]: timeit resM1 = np.reshape([np.mean(row[m]) for row,m in zip(X.reshape(-1,1500),mask.reshape(-1,1500))],X.shape[:2])
1.43 s ± 18.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
This might have problems if row[m] is empty; np.mean([]) produces a warning and a nan value.
Applying the mask to X before any further processing loses the dimensional information.
In [282]: X[mask].shape
Out[282]: (30001416,)
apply only works with one array, so it will be awkward (though not impossible) to use it to iterate on both X and mask. A structured array with data and mask fields might do the job (see the sketch below). But as the previous timings show, there's no speed advantage.
masked array
I don't usually expect masked arrays to offer speed, but in this case it helps:
In [285]: xM = np.ma.masked_array(X, ~mask)
In [286]: resMM = np.ma.mean(xM, axis=2)
In [287]: np.allclose(resM1, resMM)
Out[287]: True
In [288]: timeit resMM = np.ma.mean(xM, axis=2)
849 ms ± 20.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
np.nanmean
There's a set of functions that use np.nan masking:
In [289]: Xfloat = X.astype(float)
In [290]: Xfloat[~mask] = np.nan
In [291]: resflt = np.nanmean(Xfloat, axis=2)
In [292]: np.allclose(resM1, resflt)
Out[292]: True
In [293]: %%timeit
...: Xfloat = X.astype(float)
...: Xfloat[~mask] = np.nan
...: resflt = np.nanmean(Xfloat, axis=2)
...:
...:
2.17 s ± 200 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
This doesn't help :(
Related
I can certainly do
a[a == 0] = something
that sets every entry of a that equals zero to something. Equivalently, I could write
a[np.equal(a, 0)] = something
Now, imagine a is an array of dtype=object. I cannot write a[a is None] because, of course, a itself isn't None. The intention is clear: I want the is comparison to be broadcast like any other ufunc. This list from the docs lists nothing like an is-ufunc.
Why is there none, and, more interestingly to me: what would be a performant replacement?
There are two things at play here.
The first (and more important) one is that is is implemented directly in the Python interpreter with no option to redirect to a dunder method. Numpy arrays, like many other objects, have an __eq__ method that implements the == operation. a is None is treated approximately as id(a) == id(None), with no recourse for an elementwise implementation under any circumstance. That's just how python works.
The second aspect is that numpy is fundamentally designed for storing numbers. Object arrays are a special case that stores references to objects as the array's data, which is similar to how lists store object references, but the similarity ends there. The elements of a list are always references to objects, even when the list contains homogeneous integers, for example. A numpy array of dtype int does not contain python objects: each consecutive element of the array is a raw binary integer, not a reference to a python object wrapper. Even if python allowed you to override the is operator, applying it elementwise would be meaningless.
So if you want to compare objects, use python lists:
mylist = [...]
mylist = [something if x is None else x for x in mylist]
If you insist on using a numpy array, either (a) use numerical arrays and mark None elements with something else, like np.nan; (b) treat the array as a list, in which case you will have to apply is to each element, a python-level construct, so there is no "performant" way to do it at that point; or (c) just use ==, which triggers python-level equality comparison, which is equivalent to is for the singleton None.
Except for operations like reshape and indexing that don't depend on dtype (other than the itemsize), operations on object dtype arrays are performed at list-comprehension speeds, iterating on the elements and applying an appropriate method to each. Sometimes that method doesn't exist, as with np.sin, and the operation raises an error.
To illustrate, consider the array from one of the comments:
In [132]: a = np.array([1, None, 0, np.nan, ''])
In [133]: a
Out[133]: array([1, None, 0, nan, ''], dtype=object)
The object array test:
In [134]: a==None
Out[134]: array([False, True, False, False, False])
In [135]: timeit a==None
5.16 µs ± 73.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
An equivalent comprehension:
In [136]: [x is None for x in a]
Out[136]: [False, True, False, False, False]
In [137]: timeit [x is None for x in a]
1.52 µs ± 18.6 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
It's faster, even if we cast the result back to array (not a cheap step):
In [138]: timeit np.array([x is None for x in a])
4.67 µs ± 95.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Iteration on the list version of the array is even faster:
In [139]: timeit np.array([x is None for x in a.tolist()])
2.52 µs ± 48.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Let's look at the full assignment action:
In [141]: a[[x is None for x in a.tolist()]]
Out[141]: array([None], dtype=object)
In [142]: %%timeit a1=a.copy()
...: a1[[x is None for x in a1.tolist()]] = np.nan
...:
...:
4.03 µs ± 10 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [143]: %%timeit a1=a.copy()
...: a1[a1==None] = np.nan
...:
...:
6.18 µs ± 28.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
The usual caveat applies: things might scale differently with larger arrays.
Well, I have a simple problem that is giving me a headache. Basically I have two two-dimensional arrays, full of [x, y] coordinates, and I want to compare the first with the second and generate a third array that contains all the elements of the first array that don't appear in the second. It's simple, but I couldn't make it work at all. The size varies a lot: the first array can have between a thousand and 2 million coordinates, while the second array has between 1 and a thousand.
This operation will occur many times and the larger the first array, the more times it will occur
sample:
arr1 = np.array([[0, 3], [0, 4], [1, 3], [1, 7], ])
arr2 = np.array([[0, 3], [1, 7]])
result = np.array([[0, 4], [1, 3]])
In Depth: Basically I have a binary image with variable resolution, it is composed of 0 and 1 (255) and I analyze each pixel individually (with an algorithm that is already optimized), but (on purpose) every time this function is executed it analyzes only a fraction of the pixels, and when it is finished it gives me back all the coordinates of these pixels. The problem is that when it executes it runs the following code:
ones = np.argwhere(img == 255) # ones = pixels array
It takes about 0.02 seconds and is by far the slowest part of the code. My idea is to create this variable once and, each time the function ends, remove the already-analyzed pixels and pass the new array as a parameter, continuing until the array is empty.
Not sure what you intend to do with the extra dimensions, as the set difference, like any filtering, inherently loses the shape information.
Anyway, NumPy does provide np.setdiff1d() to solve this problem elegantly.
EDIT With the clarifications provided, you seem to be looking for a way to compute the set difference along a given axis, i.e. the elements of the sets are actually arrays.
There is no built-in specifically for this in NumPy, but it is not too difficult to craft one.
For simplicity, we assume that the operating axis is the first one (so that the elements of the set are the rows arr[i]), that only unique elements appear in the first array, and that the arrays are 2D.
They are all based on the idea that the asymptotically best approach is to build a set() from the second array and then use it to filter the entries of the first array.
The idiomatic way to build such a set in Python / NumPy is to use:
set(map(tuple, arr))
where the mapping to tuple freezes each arr[i], making the rows hashable and hence usable with set().
Unfortunately, since the filtering would produce results of unpredictable size, NumPy arrays are not the ideal container for the result.
To solve this issue, one can use:
an intermediate list
import numpy as np
def setdiff2d_list(arr1, arr2):
    delta = set(map(tuple, arr2))
    return np.array([x for x in arr1 if tuple(x) not in delta])
np.fromiter() followed by np.reshape()
import numpy as np
def setdiff2d_iter(arr1, arr2):
    delta = set(map(tuple, arr2))
    return np.fromiter((x for xs in arr1 if tuple(xs) not in delta for x in xs), dtype=arr1.dtype).reshape(-1, arr1.shape[-1])
NumPy's advanced indexing
def setdiff2d_idx(arr1, arr2):
    delta = set(map(tuple, arr2))
    idx = [tuple(x) not in delta for x in arr1]
    return arr1[idx]
Convert both inputs to set() (will force uniqueness of the output elements and will lose ordering):
import numpy as np
def setdiff2d_set(arr1, arr2):
    set1 = set(map(tuple, arr1))
    set2 = set(map(tuple, arr2))
    return np.array(list(set1 - set2))
Alternatively, the advanced indexing can be built using broadcasting, np.any() and np.all():
def setdiff2d_bc(arr1, arr2):
    idx = (arr1[:, None] != arr2).any(-1).all(1)
    return arr1[idx]
Some form of the above methods were originally suggested in @QuangHoang's answer.
A similar approach could also be implemented in Numba, following the same idea as above but using a hash instead of the actual array view arr[i] (because of the limitations in what is supported inside a set() by Numba) and pre-computing the output size (for speed):
import numpy as np
import numba as nb
@nb.njit
def mul_xor_hash(arr, init=65537, k=37):
    result = init
    for x in arr.view(np.uint64):
        result = (result * k) ^ x
    return result

@nb.njit
def setdiff2d_nb(arr1, arr2):
    # : build `delta` set using hashes
    delta = {mul_xor_hash(arr2[0])}
    for i in range(1, arr2.shape[0]):
        delta.add(mul_xor_hash(arr2[i]))
    # : compute the size of the result
    n = 0
    for i in range(arr1.shape[0]):
        if mul_xor_hash(arr1[i]) not in delta:
            n += 1
    # : build the result
    result = np.empty((n, arr1.shape[-1]), dtype=arr1.dtype)
    j = 0
    for i in range(arr1.shape[0]):
        if mul_xor_hash(arr1[i]) not in delta:
            result[j] = arr1[i]
            j += 1
    return result
While they all give the same result:
funcs = setdiff2d_iter, setdiff2d_list, setdiff2d_idx, setdiff2d_set, setdiff2d_bc, setdiff2d_nb
arr1 = np.array([[0, 3], [0, 4], [1, 3], [1, 7]])
print(arr1)
# [[0 3]
# [0 4]
# [1 3]
# [1 7]]
arr2 = np.array([[0, 3], [1, 7], [4, 0]])
print(arr2)
# [[0 3]
# [1 7]
# [4 0]]
result = funcs[0](arr1, arr2)
print(result)
# [[0 4]
# [1 3]]
for func in funcs:
    print(f'{func.__name__:>24s}', np.all(result == func(arr1, arr2)))
# setdiff2d_iter True
# setdiff2d_list True
# setdiff2d_idx True
# setdiff2d_set False # because of ordering
# setdiff2d_bc True
# setdiff2d_nb True
their performance seems to vary:
for func in funcs:
    print(f'{func.__name__:>24s}', end=' ')
    %timeit func(arr1, arr2)
# setdiff2d_iter 16.3 µs ± 719 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
# setdiff2d_list 14.9 µs ± 528 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
# setdiff2d_idx 17.8 µs ± 1.75 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)
# setdiff2d_set 17.5 µs ± 1.31 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)
# setdiff2d_bc 9.45 µs ± 405 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
# setdiff2d_nb 1.58 µs ± 51.8 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
The Numba-based approach proposed seems to outperform the other approaches by a fair margin (some 10x using the given input).
Similar timings are observed with larger inputs:
np.random.seed(42)
arr1 = np.random.randint(0, 100, (1000, 2))
arr2 = np.random.randint(0, 100, (1000, 2))
print(setdiff2d_nb(arr1, arr2).shape)
# (736, 2)
for func in funcs:
    print(f'{func.__name__:>24s}', end=' ')
    %timeit func(arr1, arr2)
# setdiff2d_iter 3.51 ms ± 75.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# setdiff2d_list 2.92 ms ± 32.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# setdiff2d_idx 2.61 ms ± 38.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# setdiff2d_set 3.52 ms ± 67.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# setdiff2d_bc 25.6 ms ± 198 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
# setdiff2d_nb 192 µs ± 1.66 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
(As a side note, setdiff2d_bc() is the most negatively affected by the size of the second input).
Depending on how large your arrays are: if they are not too large (a few thousand points), you can
use broadcasting to compare each point in arr1 to each point in arr2,
use any to check for inequality along the last dimension (the coordinates),
use all to check that a point of arr1 differs from every point in arr2.
Code:
idx = (arr1[:,None]!=arr2).any(-1).all(1)
arr1[idx]
Output:
array([[0, 4],
[1, 3]])
update: for longer data, you can try a set and a loop:
set_arr2 = set(map(tuple, arr2))
idx = [tuple(point) not in set_arr2 for point in arr1]
arr1[idx]
I have a=np.array([array([1,2,3,4],[2,3,4,5]),array([6,7,8,9])]). I want to take a dot product of both of the arrays with some vector v.
I tried to vectorize the np.dot function.
vfunc = np.vectorize(np.dot), and I applied vfunc to my array a: vfunc(a, v), where v is the vector I want to take the dot product with. However, I get this error: ValueError: setting an array element with a sequence. Is there any other way to do this?
Since you are passing an object dtype array as argument, you need to specify 'O' (object) result type as well. Without otypes, vectorize tries to deduce the return dtype and may do so wrongly. That's just one of the pitfalls of using np.vectorize:
In [196]: f = np.vectorize(np.dot, otypes=['O'])
In [197]: x = np.array([[1,2,3],[1,2,3,4]])
/usr/local/bin/ipython3:1: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
#!/usr/bin/python3
In [199]: f(x, x)
Out[199]: array([14, 30], dtype=object)
Another problem with np.vectorize is that it is slower than alternatives:
In [200]: f1 = np.frompyfunc(np.dot, 2,1)
In [201]: f1(x,x)
Out[201]: array([14, 30], dtype=object)
In [202]: np.array([np.dot(i,j) for i,j in zip(x,x)])
Out[202]: array([14, 30])
In [203]: timeit f(x, x)
27.1 µs ± 229 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [204]: timeit f1(x,x)
16.9 µs ± 135 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [205]: timeit np.array([np.dot(i,j) for i,j in zip(x,x)])
21.3 µs ± 201 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
np.vectorize has a clear speed disclaimer. Read the full docs; it isn't as simple a function as you might think. The name can be misleading.
I have a three-dimensional array like
A = np.array([[[1, 1],
               [1, 0]],
              [[1, 2],
               [1, 0]],
              [[1, 0],
               [0, 0]]])
Now I would like to obtain an array that has the nonzero value in a given position if exactly one distinct nonzero value (possibly along with zeros) occurs in that position across the first axis. It should have zero if that position contains only zeros or more than one distinct nonzero value. For the example above, I would like
[[1,0],
[1,0]]
since
in A[:,0,0] there are only 1s
in A[:,0,1] there are 0, 1 and 2, so more than one nonzero value
in A[:,1,0] there are 0 and 1, so 1 is retained
in A[:,1,1] there are only 0s
I can find how many nonzero elements there are with np.count_nonzero(A, axis=0), but I would like to keep 1s or 2s even if there are several of them. I looked at np.unique but it doesn't seem to support what I'd like to do.
Ideally, I'd like a function like np.count_unique(A, axis=0) which would return an array in the original shape, e.g. [[1, 3],[2, 1]], so I could check whether 3 or more occur and then ignore that position.
All I could come up with was a list comprehension iterating over the two positional axes of the array that I'd like to obtain:
[[len(np.unique(A[:, i, j])) for j in range(A.shape[2])] for i in range(A.shape[1])]
Any other ideas?
You can use np.diff on the sorted array to stay at the numpy level for the counting task.
def diffcount(A):
    B=A.copy()
    B.sort(axis=0)
    C=np.diff(B,axis=0)>0
    D=C.sum(axis=0)+1
    return D

# [[1 3]
#  [2 1]]
It seems to be a little faster on big arrays:
In [62]: A=np.random.randint(0,100,(100,100,100))
In [63]: %timeit diffcount(A)
46.8 ms ± 769 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [64]: timeit [[len(np.unique(A[:, i, j])) for j in range(A.shape[2])]\
for i in range(A.shape[1])]
149 ms ± 700 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Finally, counting unique values is simpler than sorting, so a ln(A.shape[0]) factor can be won.
A way to win this factor is to use the set mechanism:
In [81]: %timeit np.apply_along_axis(lambda a:len(set(a)),0,A)
183 ms ± 1.17 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Unfortunately, this is not faster.
Another way is to do it by hand:
def countunique(A,Amax):
    res=np.empty(A.shape[1:],A.dtype)
    c=np.empty(Amax+1,A.dtype)
    for i in range(A.shape[1]):
        for j in range(A.shape[2]):
            T=A[:,i,j]
            for k in range(c.size): c[k]=0
            for x in T:
                c[x]=1
            res[i,j]= c.sum()
    return res
At python level:
In [70]: %timeit countunique(A,100)
429 ms ± 18.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Which is not so bad for a pure python approach. Then just shift this code to low level with numba:
import numba
countunique2=numba.jit(countunique)
In [71]: %timeit countunique2(A,100)
3.63 ms ± 70.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Which will be difficult to improve a lot.
One approach would be to use A as first-axis indices to set a boolean array of the same length along the other two axes, and then simply count the non-zeros along its first axis. Two variants are possible: one keeping it as 3D, and another reshaping into 2D for some performance benefit, as indexing into 2D is faster. Thus, the two implementations would be -
def nunique_axis0_maskcount_app1(A):
    m,n = A.shape[1:]
    mask = np.zeros((A.max()+1,m,n),dtype=bool)
    mask[A,np.arange(m)[:,None],np.arange(n)] = 1
    return mask.sum(0)

def nunique_axis0_maskcount_app2(A):
    m,n = A.shape[1:]
    A.shape = (-1,m*n)
    maxn = A.max()+1
    N = A.shape[1]
    mask = np.zeros((maxn,N),dtype=bool)
    mask[A,np.arange(N)] = 1
    A.shape = (-1,m,n)
    return mask.sum(0).reshape(m,n)
Runtime test -
In [154]: A = np.random.randint(0,100,(100,100,100))
# @B. M.'s soln
In [155]: %timeit f(A)
10 loops, best of 3: 28.3 ms per loop
# @B. M.'s soln using slicing: (B[1:] != B[:-1]).sum(0)+1
In [156]: %timeit f2(A)
10 loops, best of 3: 26.2 ms per loop
In [157]: %timeit nunique_axis0_maskcount_app1(A)
100 loops, best of 3: 12 ms per loop
In [158]: %timeit nunique_axis0_maskcount_app2(A)
100 loops, best of 3: 9.14 ms per loop
Numba method
Using the same strategy as nunique_axis0_maskcount_app2, but getting the counts directly at C level with numba, we would have -
from numba import njit

@njit
def nunique_loopy_func(mask, N, A, p, count):
    for j in range(N):
        mask[:] = True
        mask[A[0,j]] = False
        c = 1
        for i in range(1,p):
            if mask[A[i,j]]:
                c += 1
                mask[A[i,j]] = False
        count[j] = c
    return count

def nunique_axis0_numba(A):
    p,m,n = A.shape
    A.shape = (-1,m*n)
    maxn = A.max()+1
    N = A.shape[1]
    mask = np.empty(maxn,dtype=bool)
    count = np.empty(N,dtype=int)
    out = nunique_loopy_func(mask, N, A, p, count).reshape(m,n)
    A.shape = (-1,m,n)
    return out
Runtime test -
In [328]: np.random.seed(0)
In [329]: A = np.random.randint(0,100,(100,100,100))
In [330]: %timeit nunique_axis0_maskcount_app2(A)
100 loops, best of 3: 11.1 ms per loop
# @B.M.'s numba soln
In [331]: %timeit countunique2(A,A.max()+1)
100 loops, best of 3: 3.43 ms per loop
# Numba soln posted in this post
In [332]: %timeit nunique_axis0_numba(A)
100 loops, best of 3: 2.76 ms per loop
I used:
df['ids'] = df['ids'].values.astype(set)
to turn lists into sets, but the output was a list not a set:
>>> x = np.array([[1, 2, 2.5],[12,35,12]])
>>> x.astype(set)
array([[1.0, 2.0, 2.5],
[12.0, 35.0, 12.0]], dtype=object)
Is there an efficient way to turn a list into a set in NumPy?
EDIT 1:
My input is as big as below:
I have 3,000 records. Each has 30,000 ids: [[1,...,12,13,...,30000], [1,..,43,45,...,30000],...,[...]]
First flatten your ndarray to obtain a single dimensional array, then apply set() on it:
set(x.flatten())
Edit: since it seems you just want an array of sets, not a set of the whole array, you can do value = [set(v) for v in x] to obtain a list of sets.
The current state of your question (which can change at any time): how can I efficiently remove duplicate elements from each of many large arrays?
import numpy as np
rng = np.random.default_rng()
arr = rng.random((3000, 30000))
out1 = list(map(np.unique, arr))
#or
out2 = [np.unique(subarr) for subarr in arr]
Runtimes in an IPython shell:
>>> %timeit list(map(np.unique, arr))
5.39 s ± 37.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
>>> %timeit [np.unique(subarr) for subarr in arr]
5.42 s ± 58.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Update: as @hpaulj pointed out in his comment, my dummy example is biased since floating-point random numbers will almost certainly be unique. So here's a more life-like example with integer numbers:
>>> arr = rng.integers(low=1, high=15000, size=(3000, 30000))
>>> %timeit list(map(np.unique, arr))
4.98 s ± 83.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
>>> %timeit [np.unique(subarr) for subarr in arr]
4.95 s ± 51.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In this case the elements of the output list have varying lengths, since there are actual duplicates to remove.
A couple of earlier 'row-wise' unique questions:
vectorize numpy unique for subarrays
Numpy: Row Wise Unique elements
Count unique elements row wise in an ndarray
In a couple of these the count is more interesting than the actual unique values.
If the number of unique values per row differs, then the result cannot be a (2d) array. That's a pretty good indication that the problem cannot be fully vectorized. You need some sort of iteration over the rows.