I have a long unicode string:
import random
import re
import numpy as np

alphabet = range(0x0FFF)
mystr = ''.join(chr(random.choice(alphabet)) for _ in range(100))
mystr = re.sub(r'\W', '', mystr)
I would like to view it as a series of code points, so at the moment, I am doing the following:
arr = np.array(list(mystr), dtype='U1')
I would like to be able to manipulate the string as numbers, and eventually get some different code points back. Now I'd like to invert the transformation:
mystr = ''.join(arr.tolist())
These transformations are reasonably fast and invertible, but the intermediate list takes up an unnecessary amount of space.
Is there a way to convert a numpy array of unicode characters to and from a Python string without converting to a list first?
Afterthoughts
I can get arr to appear as a single string with something like
buf = arr.view(dtype='U' + str(arr.size))
This results in a 1-element array containing the entire original. The inverse is possible as well:
buf.view(dtype='U1')
The only issue is that the type of the extracted element is np.str_, not str.
fromiter works, but is really slow, since it goes through the iterator protocol. It's much faster to encode your data to UTF-32 (in system byte order) and use numpy.frombuffer:
In [56]: x = ''.join(chr(random.randrange(0x0fff)) for i in range(1000))
In [57]: codec = 'utf-32-le' if sys.byteorder == 'little' else 'utf-32-be'
In [58]: %timeit numpy.frombuffer(bytearray(x, codec), dtype='U1')
2.79 µs ± 47 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [59]: %timeit numpy.fromiter(x, dtype='U1', count=len(x))
122 µs ± 3.82 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [60]: numpy.array_equal(numpy.fromiter(x, dtype='U1', count=len(x)), numpy.frombuffer(bytearray(x, codec), dtype='U1'))
Out[60]: True
I've used sys.byteorder to determine whether to encode in utf-32-le or utf-32-be. Also, using bytearray instead of encode gets a mutable bytearray instead of an immutable bytes object, so the resulting array is writable.
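Here is a small demonstration of that writability point (a sketch based on the snippets above):

import sys
import numpy

codec = 'utf-32-le' if sys.byteorder == 'little' else 'utf-32-be'
a = numpy.frombuffer(bytearray('abc', codec), dtype='U1')
a[0] = 'z'  # fine: the array wraps a mutable bytearray
b = numpy.frombuffer('abc'.encode(codec), dtype='U1')
# b[0] = 'z'  # ValueError: assignment destination is read-only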
As for the reverse conversion, arr.view(dtype=f'U{arr.size}')[0] works, but using item() is a bit faster and produces an ordinary string object, avoiding possible weird edge cases where numpy.str_ doesn't quite behave like str:
In [72]: a = numpy.frombuffer(bytearray(x, codec), dtype='U1')
In [73]: type(a.view(dtype=f'U{a.size}')[0])
Out[73]: numpy.str_
In [74]: type(a.view(dtype=f'U{a.size}').item())
Out[74]: str
In [75]: %timeit a.view(dtype=f'U{a.size}')[0]
3.63 µs ± 34 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [76]: %timeit a.view(dtype=f'U{a.size}').item()
2.14 µs ± 23.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
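Putting both directions together, a round-trip pair of helpers might look like this (a sketch; the function names are my own, and the null caveat below still applies):

import sys
import numpy

codec = 'utf-32-le' if sys.byteorder == 'little' else 'utf-32-be'

def str_to_arr(s):
    # writable array of single code points
    return numpy.frombuffer(bytearray(s, codec), dtype='U1')

def arr_to_str(arr):
    # view the whole array as one string, then extract an ordinary str
    return arr.view(dtype=f'U{arr.size}').item()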
Finally, be aware that NumPy doesn't handle nulls like normal Python string objects do. NumPy can't distinguish between 'asdf\x00\x00\x00' and 'asdf', so using NumPy arrays for string operations is not safe if your data may contain null code points.
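A quick demonstration of that caveat:

In [77]: a = numpy.array(['asdf\x00\x00\x00'])
In [78]: a[0]
Out[78]: 'asdf'
In [79]: len(a[0])
Out[79]: 4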
The fastest way I have found to convert a string to an array is
arr = np.array([mystr]).view(dtype='U1')
Another (slower) way to convert a string to an array of unicode code points, based on @Daniel Mesejo's comment:
arr = np.fromiter(mystr, dtype='U1', count=len(mystr))
Looking at the source code for fromiter shows that setting the count parameter to the length of the string will cause the entire array to be allocated at once, instead of performing multiple reallocations.
To convert back to a string:
str(arr.view(dtype=f'U{arr.size}')[0])
For most purposes, the final conversion to Python str is not necessary since np.str_ is a subclass of str.
arr.view(dtype=f'U{arr.size}')[0]
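Wrapped up as functions, the round trip from this answer looks like this (a sketch; the names are mine):

import numpy as np

def str_to_arr(mystr):
    # one code point per element
    return np.array([mystr]).view(dtype='U1')

def arr_to_str(arr):
    # returns np.str_, which is a subclass of str
    return arr.view(dtype=f'U{arr.size}')[0]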
Appendix: Timing of frombuffer vs array
String length 100:
mystr = ''.join(chr(random.choice(range(1, 0x1000))) for _ in range(100))
%timeit np.array([mystr]).view(dtype='U1')
1.43 µs ± 27.9 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit np.frombuffer(bytearray(mystr, 'utf-32-le'), dtype='U1')
1.2 µs ± 9.06 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
String length 10000:
mystr = ''.join(chr(random.choice(range(1, 0x1000))) for _ in range(10000))
%timeit np.array([mystr]).view(dtype='U1')
4.33 µs ± 13.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit np.frombuffer(bytearray(mystr, 'utf-32-le'), dtype='U1')
10.9 µs ± 29.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
String length 1000000:
mystr = ''.join(chr(random.choice(range(1, 0x1000))) for _ in range(1000000))
%timeit np.array([mystr]).view(dtype='U1')
672 µs ± 1.64 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit np.frombuffer(bytearray(mystr, 'utf-32-le'), dtype='U1')
732 µs ± 5.22 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Related
I have a 1D array of integers with D elements (i.e. idx = np.array([i0, i1, ...]), s.t. idx.size = D), where each element corresponds to the index along that dimension of an ND array with D dimensions (i.e. data s.t. data.ndim = D). How can I index the data array using the index array idx?
In Python I would do data[tuple(idx)], but tuples aren't supported in numba nopython mode.
My current workaround is to use data.ravel() and convert from ND indices to 1D indices of the flattened array, but it seems like there must be an easier (and computationally faster) solution. Is there a take_along_each_axis(data, idx) method somewhere?
Let's do a bit of time testing:
In [135]: data = np.ones((100,100,100,100)); idx = (50,50,50,50)
That's nearly a GB of memory: not so big that it causes a memory error, but big enough to make a reasonable test. Actually, I get the same basic-indexing time with much smaller arrays, and with other idx values.
In [136]: timeit data[idx]
212 ns ± 9.25 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
The interpreter translates that indexing into a method call:
In [137]: timeit data.__getitem__(idx)
283 ns ± 4.37 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
Indexing the 'flat' array can be done with:
In [138]: timeit data.flat[np.ravel_multi_index(idx,data.shape)]
6.65 µs ± 75.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
or taking the conversion out of the loop:
In [139]: %%timeit x=np.ravel_multi_index(idx,data.shape)
...: data.flat[x]
574 ns ± 23.7 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [142]: %%timeit x=np.ravel_multi_index(idx,data.shape);df=data.flat
...: df[x]
345 ns ± 6.39 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
I think there are cases where flat indexing is faster, but this isn't one.
So for a standalone operation, I don't see the point in writing an njit version. I suppose if it's part of some larger operation it could be worth it.
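That said, if the lookup has to live inside a larger njit function, one workaround is to compute the flat index by hand (a sketch, assuming numba is installed and data is C-contiguous; numba can index the shape tuple with a loop variable):

import numpy as np
from numba import njit

@njit
def getitem_nd(data, idx):
    # equivalent of data[tuple(idx)] for a C-contiguous array
    flat = 0
    for d in range(idx.shape[0]):
        flat = flat * data.shape[d] + idx[d]
    return data.ravel()[flat]

data = np.ones((100, 100, 100, 100))
idx = np.array([50, 50, 50, 50])
assert getitem_nd(data, idx) == data[50, 50, 50, 50]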
I've been exploring the performance differences between numpy functions and Python's normal built-in functions, and I want to know how numpy functions are so optimized that there's almost a 100x speedup.
Below is some code that I wrote to highlight the execution time differences between numpy mean() and manual calculation of mean using sum() and len()
import numpy as np
import time
n = 10**7
a = np.random.randn(n)
start = time.perf_counter()
mean = sum(a)/len(a)
seconds1 = time.perf_counter()-start
start = time.perf_counter()
mean = np.mean(a)
seconds2 = time.perf_counter()-start
print("First method takes time {:.3f}s".format(seconds1))
print("Second method takes time {:.3f}s".format(seconds2))
Output:
First method takes time 1.687s
Second method takes time 0.013s
Make a numpy array:
In [130]: a=np.arange(10000)
Apply the numpy sum function:
In [131]: timeit np.sum(a)
16.2 µs ± 22.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
mean is a bit slower, since it has to divide by the number of elements (and may do a few other checks):
In [132]: timeit np.mean(a)
34.9 µs ± 198 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
np.sum actually delegates the action to the sum method of the array, so using that directly is a bit faster:
In [133]: timeit a.sum()
13.3 µs ± 25.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Python sum isn't a bad function, but it iterates over its argument. Iterating (in Python code) on an array is slow:
In [134]: timeit sum(a)
1.16 ms ± 2.55 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Converting the array to a list first saves time:
In [135]: timeit sum(a.tolist())
369 µs ± 7.95 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Better yet if we just time the list operation:
In [136]: %%timeit alist=a.tolist()
...: sum(alist)
57.2 µs ± 294 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
When working with numpy arrays, it is best to use numpy's own methods (or numpy functions). Conversely, when using Python functions, it is generally better to work with lists.
Using a numpy function on a list is slow, because it has to first convert the list to an array:
In [137]: %%timeit alist=a.tolist()
...: np.sum(alist)
795 µs ± 28 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
I have the following issue: I have a matrix yj of size (m, 200) (m = 3683), and I have a dictionary that, for each key, returns a numpy array of row indices for yj (the array size varies from key to key, in case anyone is wondering).
Now, I have to access this matrix lots of times (around 1M times) and my code is slowing down because of the indexing (I've profiled the code and it takes 65% of time on this step).
Here is what I've tried out:
First of all, use the indices for slicing:
>> %timeit yj[R_u_idx_train[1]]
10.5 µs ± 79.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
The variable R_u_idx_train is the dictionary that has the row indices.
I thought that maybe boolean indexing might be faster:
>> %timeit yj[R_u_idx_train_mask[1]]
10.5 µs ± 159 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
R_u_idx_train_mask is a dictionary that returns a boolean array of size m where the indices given by R_u_idx_train are set to True.
I also tried np.ix_
>> cols = np.arange(0,200)
>> %timeit ix_ = np.ix_(R_u_idx_train[1], cols); yj[ix_]
42.1 µs ± 353 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
I also tried np.take
>> %timeit np.take(yj, R_u_idx_train[1], axis=0)
2.35 ms ± 88.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
And while this seems great, it is not, since it gives an array of shape (R_u_idx_train[1].shape[0], R_u_idx_train[1].shape[0]) (it should be (R_u_idx_train[1].shape[0], 200)). I guess I'm not using the method correctly.
I also tried np.compress
>> %timeit np.compress(R_u_idx_train_mask[1], yj, axis=0)
14.1 µs ± 124 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Finally I tried to index with a boolean matrix
>> %timeit yj[R_u_idx_train_mask2[1]]
244 µs ± 786 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
So, is 10.5 µs ± 79.7 ns per loop the best I can do? I could try to use cython but that seems like a lot of work for just indexing...
Thanks a lot.
A very smart solution was given by V.Ayrat in the comments.
>> newdict = {k: yj[R_u_idx_train[k]] for k in R_u_idx_train.keys()}
>> %timeit newdict[1]
202 ns ± 6.7 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
Anyway maybe it would still be cool to know if there is a way to speed it up using numpy!
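Since the index dictionary is fixed, the trick is to pay the fancy-indexing cost once up front and make every later access a plain dict lookup. A minimal sketch with made-up data:

import numpy as np

m = 3683
yj = np.random.randn(m, 200)
R_u_idx_train = {1: np.random.randint(0, m, size=100)}  # example only

# index once up front ...
newdict = {k: yj[v] for k, v in R_u_idx_train.items()}

# ... then each of the ~1M accesses is just a lookup
sub = newdict[1]  # shape (100, 200)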
def match_score(vendor, company):
return max(fuzz.ratio(vendor, company), fuzz.partial_ratio(vendor, company), fuzz.token_sort_ratio(vendor, company))
Note: fuzz is from the fuzzywuzzy library
========================
vendor = 'RED DEER TELUS STORE'
When I try this code:
df['Vendor']=vendor
df['Score'] = np.array(match_score(tuple(df['Vendor']), tuple(df['Company'])))
I get one set of scores. However, when I try this almost identical code, I get a different 'Score':
df['Score'] = np.array(match_score(vendor, tuple(df['Company'])))
My logic in #2 is that the vendor is the same across the entire column, so there's no need to put it in a tuple; I can just pass it as a string and make the processing faster.
Can anyone explain why passing an entire column where vendor in each cell = 'RED DEER TELUS STORE' gives a different result than just passing 'RED DEER TELUS STORE' to the function as a string? Thanks!
tuple versus tolist:
In [166]: x=np.arange(10000)
In [167]: timeit tuple(x)
1.14 ms ± 26.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [168]: timeit list(x)
1.12 ms ± 2.01 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [169]: timeit x.tolist()
296 µs ± 9.98 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Or with a Series:
In [170]: ds = pd.Series(x)
In [171]: timeit tuple(ds)
1.22 ms ± 1.57 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [172]: timeit list(ds)
1.23 ms ± 1.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [173]: timeit ds.to_list()
394 µs ± 22.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
With a series of string values (object dtype):
In [184]: ds = pd.Series(['' for _ in range(1000)])
In [185]: ds[:] = vendor
In [186]: timeit tuple(ds)
104 µs ± 1.11 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [187]: timeit ds.to_list()
27.2 µs ± 179 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
As to why you get a difference between passing the Series/tuple versus the string, I think you need to examine the fuzz code/docs in more detail. Maybe even test the function(s) with small examples. I don't have fuzz installed, so can't explore that part of your calculations.
You might even want to make up some lists (or tuples) of strings, and experiment with those. I don't think this is a numpy/pandas issue. It's a matter of learning to use fuzz correctly.
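For example, a row-wise version that keeps the fuzz calls operating on plain strings might look like this (a sketch, assuming fuzzywuzzy is installed and df has a 'Company' column):

from fuzzywuzzy import fuzz

def match_score(vendor, company):
    return max(fuzz.ratio(vendor, company),
               fuzz.partial_ratio(vendor, company),
               fuzz.token_sort_ratio(vendor, company))

vendor = 'RED DEER TELUS STORE'
df['Score'] = df['Company'].apply(lambda company: match_score(vendor, company))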
I get a big array (an image with 12 Mpix) in the array format from the Python standard library. Since I want to perform operations on that array, I wish to convert it to a numpy array.
I tried the following:
import numpy
import array
from datetime import datetime
test = array.array('d', [0]*12000000)
t = datetime.now()
numpy.array(test)
print(datetime.now() - t)
I get a result of between one and two seconds: equivalent to a loop in Python.
Is there a more efficient way of doing this conversion?
np.array(test) # 1.19s
np.fromiter(test, dtype=int) # 1.08s
np.frombuffer(test) # 459ns !!!
asarray(x) is almost always the best choice for any array-like object.
array and fromiter are slow because they perform a copy. Using asarray allows this copy to be elided:
>>> import array
>>> import numpy as np
>>> test = array.array('d', [0]*12000000)
# very slow - this makes multiple copies that grow each time
>>> %timeit np.fromiter(test, dtype=test.typecode)
626 ms ± 3.97 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# fast memory copy
>>> %timeit np.array(test)
63.5 ms ± 639 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
# which is equivalent to doing the fast construction followed by a copy
>>> %timeit np.asarray(test).copy()
63.4 ms ± 371 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
# so doing just the construction is way faster
>>> %timeit np.asarray(test)
1.73 µs ± 70.2 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
# marginally faster, but at the expense of verbosity and type safety if you
# get the wrong type
>>> %timeit np.frombuffer(test, dtype=test.typecode)
1.07 µs ± 27.3 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
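A quick way to confirm that asarray wraps the existing buffer rather than copying it (a sketch continuing the session above):

>>> a = np.asarray(test)
>>> np.shares_memory(a, test)
True
>>> a[0] = 42.0
>>> test[0]  # the change is visible through the original array.array
42.0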