Related
I'm trying to convert a list, say, L = [1, 2, 3, 4, 5, 6, 7, 8, ... , n] into another list L' = [1, 2, -3, -4, 5, 6, -7, -8, ...., ±n] in Python.
My question is if there is a shorter/more efficient way of doing that than using a for-loop:
for i in range(len(L)):
if i%4 > 1:
L[i] *= -1
e.g. by slicing.
The for loop is fairly efficient, since it doesn't waste space for a new L', unless you need to preserve the original list of course, in which case your code is wrong.
If you just care about speed instead of efficiency, you could start out with a numpy array instead of a list, since you'd be able to come up with an operation that may execute faster than the list operation.
If you care about shorter, the comprehension offered by #wkl is the way to go.
l_prime = [-x if i%4 > 1 else x for i, x in enumerate(l)]
Here's the implementations, timed with timeit (standard library):
from timeit import timeit
import numpy as np
import itertools, operator
def samuel(l):
for i in range(len(l)):
if i % 4 > 1:
l[i] *= -1
return l
def chai(l):
return list(map(operator.mul, l, itertools.cycle([1, 1, -1, -1])))
def wkl(l):
return [-x if i % 4 > 1 else x for i, x in enumerate(l)]
def vladimir(l):
ar = np.array(l)
ar[2::4] *= -1
ar[3::4] *= -1
return ar.tolist()
# ensure all outcomes are the same
assert samuel(list(range(1000))) == chai(list(range(1000))) == wkl(list(range(1000))) == vladimir(list(range(1000)))
print('samuel: ', timeit(lambda: samuel(list(range(1000))), number=100000))
print('chai: ', timeit(lambda: chai(list(range(1000))), number=100000))
print('wkl: ', timeit(lambda: wkl(list(range(1000))), number=100000))
print('vladimir: ', timeit(lambda: vladimir(list(range(100000))), number=1000))
Result:
samuel: 6.736065300000519
chai: 3.7625152999999045
wkl: 7.069251500000064
vladimir: 6.424349999997503
The numpy solution would be made to be faster, without the list conversions, as stated:
def vladimir_a(ar):
ar[2::4] *= -1
ar[3::4] *= -1
return ar.tolist()
ar = np.array(list(range(1000)))
print('vladimir array: ', timeit(lambda: vladimir_a(ar), number=100000))
Result:
vladimir array: 1.269356699999662
(I'm aware ar will be modified 100,000 times, but it doesn't affect the performance)
Edit: actually, that's unfair - the ranges were all in the timed section, this would be fair (and not so great)
def vladimir_a(ar):
ar[2::4] *= -1
ar[3::4] *= -1
return ar.tolist()
print('vladimir array: ', timeit(lambda: vladimir_a(np.array(range(1000))), number=100000))
Result:
vladimir array: 6.5144264999998995
So you may need to do some timing in your actual use case to find what's fastest there. Constructing the same array 100000 times (or the same list) clearly isn't what you're doing, and one would expect you are operating on a large dataset for speed to even be a consideration.
With Python 3, you can use itertools's cycle to repeat an iterable over and over again, and then use map to combine two iterables with some function:
>>> import itertools, operator
>>> list(map(operator.mul, range(10), itertools.cycle([1, 1, -1, -1])))
[0, 1, -2, -3, 4, 5, -6, -7, 8, 9]
>>> list(map(operator.mul, [2, 3, 5, 7, 11, 13, 17, 19], itertools.cycle([1, 1, -1, -1])))
[2, 3, -5, -7, 11, 13, -17, -19]
Here's a comparison of various methods:
import time
import numpy as np
n = int(1e7)
s1 = time.time()
L1 = list(range(1, n))
for i in range(len(L1)):
if i % 4 > 1:
L1[i] *= -1
e1 = time.time()
s2 = time.time()
L2 = list(range(1, n))
L2 = np.array(L2)
L2[np.arange(n - 1) % 4 > 1] *= -1
L2 = L2.tolist()
e2 = time.time()
s3 = time.time()
L3 = list(range(1, n))
L3 = np.array(L3)
k = (n - 1) // 4
pat = [False, False, True, True]
idx = np.concatenate((np.tile(pat, k), pat[:n - k * 4 - 1])).astype(bool)
L3[idx] *= -1
L3 = L3.tolist()
e3 = time.time()
s4 = time.time()
L4 = list(range(1, n))
L4 = np.array(L4)
L4[2::4] *= -1
L4[3::4] *= -1
L4 = L4.tolist()
e4 = time.time()
assert all(np.array(L1) == np.array(L2))
assert all(np.array(L1) == np.array(L3))
assert all(np.array(L1) == np.array(L4))
print(f'time1: {e1 - s1:.2f}s, time2: {e2 - s2:.2f}s, time3: {e3 - s3:.2f}s, time4: {e4 - s4:.2f}s')
prints
time1: 3.76s, time2: 1.72s, time3: 1.58s, time4: 1.52s
To add to #Grismar’s answer, numpy arrays can be dramatically faster than iteration because they do operations in parallel (even on CPUs).
Here is how to solve your problem using numpy arrays (with slicing):
import numpy as np
# If you already have a list `L`:
ar = np.array(L)
# But it’s faster to use numpy to
# generate the array of numbers
# (here: integers from 1 to n):
n = 15
ar = np.arange(1, n + 1)
ar[2::4] *= -1
ar[3::4] *= -1
# You can convert a numpy array into a list.
# Note that this step may not be necessary
# at all, because numpy arrays can be
# used where you would use lists.
# But for completeness, here is how to do it:
L_prime = ar.tolist()
A function you can use:
def make_array(n):
ar = np.arange(1, n + 1)
ar[2::4] *= -1
ar[3::4] *= -1
return ar
I have a list:
lst = [ 1,2,3,4,5,6,7,8]
I want to increment all numbers above index 4.
for i in range(4,len(lst)):
lst[i]+=2
Since this operation needs to be done many time, I want to do it the most efficient way possible.
How can I do this fast.
Use Numpy for fast array manipulations, check the example below:
import numpy as np
lst = np.array([1,2,3,4,5,6,7,8])
# add 2 at all indices from 4 till the end of the array
lst[4:] += 2
print(lst)
# array([ 1, 2, 3, 4, 7, 8, 9, 10])
If you are updating large ranges of a large list many times, use a more suitable data structure so that the updates don't take O(n) time each.
One such data structure is a segment tree, where each list element corresponds to a leaf node in a tree; the true value of the list element can be represented as the sum of the values on the path between the leaf node and the root node. This way, adding a number to a single internal node is effectively like adding it to all of the list elements represented by that subtree.
The data structure supports get/set operations by index in O(log n) time, and add-in-range operations also in O(log n) time. The solution below uses a binary tree, implemented using a list of length <= 2n.
class RangeAddList:
def __init__(self, vals):
# list length
self._n = len(vals)
# smallest power of 2 >= list length
self._m = 1 << (self._n - 1).bit_length()
# list representing binary tree; leaf nodes offset by _m
self._vals = [0]*self._m + vals
def __repr__(self):
return '{}({!r})'.format(self.__class__.__name__, list(self))
def __len__(self):
return self._n
def __iter__(self):
for i in range(self._n):
yield self[i]
def __getitem__(self, i):
if i not in range(self._n):
raise IndexError()
# add up values from leaf to root node
t = 0
i += self._m
while i > 0:
t += self._vals[i]
i >>= 1
return t + self._vals[0]
def __setitem__(self, i, x):
# add difference (new value - old value)
self._vals[self._m + i] += x - self[i]
def add_in_range(self, i, j, x):
if i not in range(self._n + 1) or j not in range(self._n + 1):
raise IndexError()
# add at internal nodes spanning range(i, j)
i += self._m
j += self._m
while i < j:
if i & 1:
self._vals[i] += x
i += 1
if j & 1:
j -= 1
self._vals[j] += x
i >>= 1
j >>= 1
Example:
>>> r = RangeAddList([0] * 10)
>>> r.add_in_range(0, 4, 10)
>>> r.add_in_range(6, 9, 20)
>>> r.add_in_range(3, 7, 100)
>>> r
RangeAddList([10, 10, 10, 110, 100, 100, 120, 20, 20, 0])
It turns out that NumPy is so well-optimized, you need to go up to lists of length 50,000 or so before the segment tree catches up. The segment tree is still only about twice as fast as NumPy's O(n) range updates for lists of length 100,000 on my machine. You may want to benchmark with your own data to be sure.
This is a fast way of doing it:
lst1 = [1, 2, 3, 4, 5, 6, 7, 8]
new_list = [*lst[:4], *[x+2 for x in lst1[4:]]]
# or even better
new_list[4:] = [x+2 for x in lst1[4:]]
In terms of speed, numpy isn't faster for lists this small:
import timeit
import numpy as np
lst1 = [1, 2, 3, 4, 5, 6, 7, 8]
npa = np.array(lst)
def numpy_it():
global npa
npa[4:] += 2
def python_it():
global lst1
lst1 = [*lst1[:4], *[x+2 for x in lst1[4:]]]
print(timeit.timeit(numpy_it))
print(timeit.timeit(python_it))
For me gets:
1.7008036
0.6737076000000002
But for anything serious numpy beats generating a new list for the slice that needs replacing, which beats regenerating the entire list (which beats in-place replacement with a loop like in your example):
import timeit
import numpy as np
lst1 = list(range(0, 10000))
npa = np.array(lst1)
lst2 = list(range(0, 10000))
lst3 = list(range(0, 10000))
def numpy_it():
global npa
npa[4:] += 2
def python_it():
global lst1
lst1 = [*lst1[:4], *[x+2 for x in lst1[4:]]]
def python_it_slice():
global lst2
lst2[4:] = [x+2 for x in lst2[4:]]
def python_inplace():
global lst3
for i in range(4, len(lst3)):
lst3[i] = lst3[i] + 2
n = 10000
print(timeit.timeit(numpy_it, number=n))
print(timeit.timeit(python_it_slice, number=n))
print(timeit.timeit(python_it, number=n))
print(timeit.timeit(python_inplace, number=n))
Results:
0.057994199999999996
4.3747423
4.5193105000000005
9.949074000000001
Use assign to slice:
lst[4:] = [x+2 for x in lst[4:]]
Test (on my ancient ThinkPad i3-3110, Python 3.5.2):
import timeit
lst = [1, 2, 3, 4, 5, 6, 7, 8]
def python_it():
global lst
lst = [*lst[:4], *[x+2 for x in lst[4:]]]
def python_it2():
global lst
lst[4:] = [x+2 for x in lst[4:]]
print(timeit.timeit(python_it))
print(timeit.timeit(python_it2))
Prints:
1.2732834180060308
0.9285018060181756
use python builtin map function and lambda
lst = [1,2,3,4,5,6,7,8]
lst[4:] = map(lambda x:x+2, lst[4:])
print(lst)
# [1, 2, 3, 4, 7, 8, 9, 10]
First off, apologies for the vague title, I couldn't think of an appropriate name for this issue.
I have 3 numpy arrays in the follwing formats:
N = ([[13, 14, 15], [2, 5, 7], [4, 6, 8] ... several hundred thousand elements long
e1 = [1, 0, 0]
e2 = [0, 1, 0]
The idea is to create a fourth array, 'v', which shall have the same dimensions as 'N', but will be given values based on an if statement. Here is what I currently have which should better explain the issue:
v = np.zeros([len(N), 3])
for i in range(0, len(N)):
if((N*e1)[i,0] != 0):
v[i] = np.cross(N[i],e1)
else:
v[i] = np.cross(N[i],e2)
This code does what I require it to but does so in a longer than anticipated time (> 5 mins). Is there any form of list comprehension or similar concept I could use to increase the efficiency of the code?
You can use numpy.where to replace if-else and vectorize the process with broadcasting, here is an option with numpy.where:
import numpy as np
np.where(np.repeat(N[:,0] != 0, 3).reshape(1000,3), np.cross(N, e1), np.cross(N, e2))
Some benchmarks here:
1) Data set up:
N = np.array([np.random.randint(0,10,3) for i in range(1000)])
N
#array([[3, 5, 0],
# [5, 0, 8],
# [4, 6, 0],
# ...,
# [9, 4, 2],
# [6, 9, 3],
# [2, 9, 2]])
e1 = np.array([1, 0, 0])
e2 = np.array([0, 1, 0])
2) Timing:
def forloop():
v = np.zeros([len(N), 3]);
for i in range(0, len(N)):
if((N*e1)[i,0] != 0):
v[i] = np.cross(N[i],e1)
else:
v[i] = np.cross(N[i],e2)
return v
def forloop2():
v = np.zeros([len(N), 3])
# Only calculate this one time.
my_product = N*e1
for i in range(0, len(N)):
if my_product[i,0] != 0:
v[i] = np.cross(N[i],e1)
else:
v[i] = np.cross(N[i],e2)
return v
%timeit forloop()
10 loops, best of 3: 25.5 ms per loop
%timeit forloop2()
100 loops, best of 3: 12.7 ms per loop
%timeit np.where(np.repeat(N[:,0] != 0, 3).reshape(1000,3), np.cross(N, e1), np.cross(N, e2))
10000 loops, best of 3: 71.9 µs per loop
3) Result checking for all methods:
v1 = forloop()
v2 = np.where(np.repeat(N[:,0] != 0, 3).reshape(1000,3), np.cross(N, e1), np.cross(N, e2))
v3 = forloop2()
(v3 == v1).all()
# True
(v1 == v2).all()
# True
I'm not certain what it is you're trying to do, but I know why this specific code is so slow for you. The worst offender is (N*e1). That's a simple calculation, and it runs pretty fast with numpy, but you're executing it inside of the loop, len(N) times!.
I am able to execute your code with N == 1000000 in less than 15 seconds on my machine by pulling that outside of the loop. Example below.
v = np.zeros([len(N), 3])
# Only calculate this one time.
my_product = N*e1
for i in range(0, len(N)):
if my_product[i,0] != 0):
v[i] = np.cross(N[i],e1)
else:
v[i] = np.cross(N[i],e2)
The other answer demonstrates how to avoid the for loop and if statements for a lot of extra speed at the cost of somewhat less readable code.
I am trying to translate every element of a numpy.array according to a given key:
For example:
a = np.array([[1,2,3],
[3,2,4]])
my_dict = {1:23, 2:34, 3:36, 4:45}
I want to get:
array([[ 23., 34., 36.],
[ 36., 34., 45.]])
I can see how to do it with a loop:
def loop_translate(a, my_dict):
new_a = np.empty(a.shape)
for i,row in enumerate(a):
new_a[i,:] = map(my_dict.get, row)
return new_a
Is there a more efficient and/or pure numpy way?
Edit:
I timed it, and np.vectorize method proposed by DSM is considerably faster for larger arrays:
In [13]: def loop_translate(a, my_dict):
....: new_a = np.empty(a.shape)
....: for i,row in enumerate(a):
....: new_a[i,:] = map(my_dict.get, row)
....: return new_a
....:
In [14]: def vec_translate(a, my_dict):
....: return np.vectorize(my_dict.__getitem__)(a)
....:
In [15]: a = np.random.randint(1,5, (4,5))
In [16]: a
Out[16]:
array([[2, 4, 3, 1, 1],
[2, 4, 3, 2, 4],
[4, 2, 1, 3, 1],
[2, 4, 3, 4, 1]])
In [17]: %timeit loop_translate(a, my_dict)
10000 loops, best of 3: 77.9 us per loop
In [18]: %timeit vec_translate(a, my_dict)
10000 loops, best of 3: 70.5 us per loop
In [19]: a = np.random.randint(1, 5, (500,500))
In [20]: %timeit loop_translate(a, my_dict)
1 loops, best of 3: 298 ms per loop
In [21]: %timeit vec_translate(a, my_dict)
10 loops, best of 3: 37.6 ms per loop
In [22]: %timeit loop_translate(a, my_dict)
I don't know about efficient, but you could use np.vectorize on the .get method of dictionaries:
>>> a = np.array([[1,2,3],
[3,2,4]])
>>> my_dict = {1:23, 2:34, 3:36, 4:45}
>>> np.vectorize(my_dict.get)(a)
array([[23, 34, 36],
[36, 34, 45]])
Here's another approach, using numpy.unique:
>>> a = np.array([[1,2,3],[3,2,1]])
>>> a
array([[1, 2, 3],
[3, 2, 1]])
>>> d = {1 : 11, 2 : 22, 3 : 33}
>>> u,inv = np.unique(a,return_inverse = True)
>>> np.array([d[x] for x in u])[inv].reshape(a.shape)
array([[11, 22, 33],
[33, 22, 11]])
This approach is much faster than np.vectorize approach when the number of unique elements in array is small.
Explanaion: Python is slow, in this approach the in-python loop is used to convert unique elements, afterwards we rely on extremely optimized numpy indexing operation (done in C) to do the mapping. Hence, if the number of unique elements is comparable to the overall size of the array then there will be no speedup. On the other hand, if there is just a few unique elements, then you can observe a speedup of up to x100.
I think it'd be better to iterate over the dictionary, and set values in all the rows and columns "at once":
>>> a = np.array([[1,2,3],[3,2,1]])
>>> a
array([[1, 2, 3],
[3, 2, 1]])
>>> d = {1 : 11, 2 : 22, 3 : 33}
>>> for k,v in d.iteritems():
... a[a == k] = v
...
>>> a
array([[11, 22, 33],
[33, 22, 11]])
Edit:
While it may not be as sexy as DSM's (really good) answer using numpy.vectorize, my tests of all the proposed methods show that this approach (using #jamylak's suggestion) is actually a bit faster:
from __future__ import division
import numpy as np
a = np.random.randint(1, 5, (500,500))
d = {1 : 11, 2 : 22, 3 : 33, 4 : 44}
def unique_translate(a,d):
u,inv = np.unique(a,return_inverse = True)
return np.array([d[x] for x in u])[inv].reshape(a.shape)
def vec_translate(a, d):
return np.vectorize(d.__getitem__)(a)
def loop_translate(a,d):
n = np.ndarray(a.shape)
for k in d:
n[a == k] = d[k]
return n
def orig_translate(a, d):
new_a = np.empty(a.shape)
for i,row in enumerate(a):
new_a[i,:] = map(d.get, row)
return new_a
if __name__ == '__main__':
import timeit
n_exec = 100
print 'orig'
print timeit.timeit("orig_translate(a,d)",
setup="from __main__ import np,a,d,orig_translate",
number = n_exec) / n_exec
print 'unique'
print timeit.timeit("unique_translate(a,d)",
setup="from __main__ import np,a,d,unique_translate",
number = n_exec) / n_exec
print 'vec'
print timeit.timeit("vec_translate(a,d)",
setup="from __main__ import np,a,d,vec_translate",
number = n_exec) / n_exec
print 'loop'
print timeit.timeit("loop_translate(a,d)",
setup="from __main__ import np,a,d,loop_translate",
number = n_exec) / n_exec
Outputs:
orig
0.222067718506
unique
0.0472617006302
vec
0.0357889199257
loop
0.0285375618935
The numpy_indexed package (disclaimer: I am its author) provides an elegant and efficient vectorized solution to this type of problem:
import numpy_indexed as npi
remapped_a = npi.remap(a, list(my_dict.keys()), list(my_dict.values()))
The method implemented is similar to the approach mentioned by John Vinyard, but even more general. For instance, the items of the array do not need to be ints, but can be any type, even nd-subarrays themselves.
If you set the optional 'missing' kwarg to 'raise' (default is 'ignore'), performance will be slightly better, and you will get a KeyError if not all elements of 'a' are present in the keys.
Assuming your dict keys are positive integers, without huge gaps (similar to a range from 0 to N), you would be better off converting your translation dict to an array such that my_array[i] = my_dict[i], and using numpy indexing to do the translation.
A code using this approach is:
def direct_translate(a, d):
src, values = d.keys(), d.values()
d_array = np.arange(a.max() + 1)
d_array[src] = values
return d_array[a]
Testing with random arrays:
N = 10000
shape = (5000, 5000)
a = np.random.randint(N, size=shape)
my_dict = dict(zip(np.arange(N), np.random.randint(N, size=N)))
For these sizes I get around 140 ms for this approach. The np.get vectorization takes around 5.8 s and the unique_translate around 8 s.
Possible generalizations:
If you have negative values to translate, you could shift the values in a and in the keys of the dictionary by a constant to map them back to positive integers:
def direct_translate(a, d): # handles negative source keys
min_a = a.min()
src, values = np.array(d.keys()) - min_a, d.values()
d_array = np.arange(a.max() - min_a + 1)
d_array[src] = values
return d_array[a - min_a]
If the source keys have huge gaps, the initial array creation would waste memory. I would resort to cython to speed up that function.
If you don't really have to use dictionary as substitution table, simple solution would be (for your example):
a = numpy.array([your array])
my_dict = numpy.array([0, 23, 34, 36, 45]) # your dictionary as array
def Sub (myarr, table) :
return table[myarr]
values = Sub(a, my_dict)
This will work of course only if indexes of d cover all possible values of your a, in other words, only for a with usigned integers.
I have a list of integer numbers and I want to write a function that returns a subset of numbers that are within a range. Something like NumbersWithinRange(list, interval) function name...
I.e.,
list = [4,2,1,7,9,4,3,6,8,97,7,65,3,2,2,78,23,1,3,4,5,67,8,100]
interval = [4,20]
results = NumbersWithinRange(list, interval) # [4,4,6,8,7,8]
maybe i forgot to write one more number in results, but that's the idea...
The list can be as big as 10/20 million length, and the range is normally of a few 100.
Any suggestions on how to do it efficiently with python - I was thinking to use bisect.
Thanks.
I would use numpy for that, especially if the list is that long. For example:
In [101]: list = np.array([4,2,1,7,9,4,3,6,8,97,7,65,3,2,2,78,23,1,3,4,5,67,8,100])
In [102]: list
Out[102]:
array([ 4, 2, 1, 7, 9, 4, 3, 6, 8, 97, 7, 65, 3,
2, 2, 78, 23, 1, 3, 4, 5, 67, 8, 100])
In [103]: good = np.where((list > 4) & (list < 20))
In [104]: list[good]
Out[104]: array([7, 9, 6, 8, 7, 5, 8])
# %timeit says that numpy is MUCH faster than any list comprehension:
# create an array 10**6 random ints b/w 0 and 100
In [129]: arr = np.random.randint(0,100,1000000)
In [130]: interval = xrange(4,21)
In [126]: %timeit r = [x for x in arr if x in interval]
1 loops, best of 3: 14.2 s per loop
In [136]: %timeit good = np.where((list > 4) & (list < 20)) ; new_list = list[good]
100 loops, best of 3: 10.8 ms per loop
In [134]: %timeit r = [x for x in arr if 4 < x < 20]
1 loops, best of 3: 2.22 s per loop
In [142]: %timeit filtered = [i for i in ifilter(lambda x: 4 < x < 20, arr)]
1 loops, best of 3: 2.56 s per loop
The pure-Python Python sortedcontainers module has a SortedList type that can help you. It maintains the list automatically in sorted order and has been tested passed tens of millions of elements. The sorted list type has a bisect function you can use.
from sortedcontainers import SortedList
data = SortedList(...)
def NumbersWithinRange(items, lower, upper):
start = items.bisect(lower)
end = items.bisect_right(upper)
return items[start:end]
subset = NumbersWithinRange(data, 4, 20)
Bisecting and indexing will be much faster this way than scanning the entire list. The sorted containers module is very fast and has a performance comparison page with benchmarks against alternative implementations.
If the list isn't sorted, you need to scan the entire list:
lst = [ 4,2,1,...]
interval=[4,20]
results = [ x for x in lst if interval[0] <= x <= interval[1] ]
If the list is sorted, you can use bisect to find the left and right indices that
bound your range.
left = bisect.bisect_left(lst, interval[0])
right = bisect.bisect_right(lst, interval[1])
results = lst[left+1:right]
Since scanning the list is O(n) and sorting is O(n lg n), it probably is not worth sorting the list just to use bisect unless you plan on doing lots of range extractions.
I think this should be sufficiently efficient:
>>> nums = [4,2,1,7,9,4,3,6,8,97,7,65,3,2,2,78,23,1,3,4,5,67,8,100]
>>> r = [x for x in nums if 4 <= x <21]
>>> r
[4, 7, 9, 4, 6, 8, 7, 4, 5, 8]
Edit:
After J.F. Sebastian's excellent observation, modified the code.
Using iterators
>>> from itertools import ifilter
>>> A = [4,2,1,7,9,4,3,6,8,97,7,65,3,2,2,78,23,1,3,4,5,67,8,100]
>>> [i for i in ifilter(lambda x: 4 < x < 20, A)]
[7, 9, 6, 8, 7, 5, 8]
Let's create a list similar to what you described:
import random
l = [random.randint(-100000,100000) for i in xrange(1000000)]
Now test some possible solutions:
interval=range(400,800)
def v2():
""" return a list """
return [i for i in l if i in interval]
def v3():
""" return a generator """
return list((i for i in l if i in interval))
def v4():
def te(x):
return x in interval
return filter(te,l)
def v5():
return [i for i in ifilter(lambda x: x in interval, l)]
print len(v2()),len(v3()), len(v4()), len(v5())
cmpthese.cmpthese([v2,v3,v4,v5],micro=True, c=2)
Prints this:
rate/sec usec/pass v5 v4 v2 v3
v5 0 6929225.922 -- -0.4% -1.0% -1.6%
v4 0 6903028.488 0.4% -- -0.6% -1.2%
v2 0 6861472.487 1.0% 0.6% -- -0.6%
v3 0 6817855.477 1.6% 1.2% 0.6% --
HOWEVER, watch what happens if interval is a set instead of a list:
interval=set(range(400,800))
cmpthese.cmpthese([v2,v3,v4,v5],micro=True, c=2)
rate/sec usec/pass v5 v4 v3 v2
v5 5 201332.569 -- -20.6% -62.9% -64.6%
v4 6 159871.578 25.9% -- -53.2% -55.4%
v3 13 74769.974 169.3% 113.8% -- -4.7%
v2 14 71270.943 182.5% 124.3% 4.9% --
Now comparing with numpy:
na=np.array(l)
def v7():
""" assume you have to convert from list => numpy array and return a list """
arr=np.array(l)
tgt = np.where((arr >= 400) & (arr < 800))
return [arr[x] for x in tgt][0].tolist()
def v8():
""" start with a numpy list but return a python list """
tgt = np.where((na >= 400) & (na < 800))
return na[tgt].tolist()
def v9():
""" numpy all the way through """
tgt = np.where((na >= 400) & (na < 800))
return [na[x] for x in tgt][0]
# or return na[tgt] if you prefer that syntax...
cmpthese.cmpthese([v2,v3,v4,v5, v7, v8,v9],micro=True, c=2)
rate/sec usec/pass v5 v4 v7 v3 v2 v8 v9
v5 5 185431.957 -- -17.4% -24.7% -63.3% -63.4% -93.6% -93.6%
v4 7 153095.007 21.1% -- -8.8% -55.6% -55.7% -92.3% -92.3%
v7 7 139570.475 32.9% 9.7% -- -51.3% -51.4% -91.5% -91.5%
v3 15 67983.985 172.8% 125.2% 105.3% -- -0.2% -82.6% -82.6%
v2 15 67861.438 173.3% 125.6% 105.7% 0.2% -- -82.5% -82.5%
v8 84 11850.476 1464.8% 1191.9% 1077.8% 473.7% 472.6% -- -0.0%
v9 84 11847.973 1465.1% 1192.2% 1078.0% 473.8% 472.8% 0.0% --
Clearly numpy is faster than pure python as long as you can work with numpy all the way through. Otherwise, use a set for the interval to speed up a bit...
I think you are looking for something like this..
b=[i for i in a if 4<=i<90]
print sorted(set(b))
[4, 5, 6, 7, 8, 9, 23, 65, 67, 78]
If your data set isn't too sparse, you could use "bins" to store and retrieve the data. For example:
a = [4,2,1,7,9,4,3,6,8,97,7,65,3,2,2,78,23,1,3,4,5,67,8,100]
# Initalize a list of 0's [0, 0, ...]
# This is assuming that the minimum possible value is 0
bins = [0 for _ in range(max(a) + 1)]
# Update the bins with the frequency of each number
for i in a:
bins[i] += 1
def NumbersWithinRange(data, interval):
result = []
for i in range(interval[0], interval[1] + 1):
freq = data[i]
if freq > 0:
result += [i] * freq
return result
This works for this test case:
print(NumbersWithinRange(bins, [4, 20]))
# [4, 4, 4, 5, 6, 7, 7, 8, 8, 9]
For simplicity, I omitted some bounds checking in the function.
To reiterate, this may work better in terms of space and time usage, but it depends heavily on your particular data set. The less sparse the data set, the better it will do.