I have two lists of different lengths that hold the ranges of the lower and upper bounds I use to filter a nested list "lst". To keep the example small, I just repeat part of the data 10 times. I want to iterate over the combinations of lower and upper limits twice, and here is my attempt:
import numpy as np
import itertools
lst = [[4.256, 3.8518], [2.2121, 1.6064], [3.9662, 3.2433], [5.1571, 5.8898], [4.4909, 3.7328], [9.38, 10.2276], [4.8912, 5.846], [4.5729, 3.5768], [6.25, 5.2267], [3.1019, 4.1603], [7.7822, 14.9629], [4.7673, 12.1189]]
lst_long = lst * 10
lower_limit = np.arange(1, 3, 0.1).tolist()
upper_limit = np.arange(9, 12, 0.1).tolist()
def create_combo(a, b):
    for sublist1 in itertools.product(a, b):
        for sublist2 in itertools.product(a, b):
            yield sublist1[0], sublist1[1], sublist2[0], sublist2[1]

for lower1, upper1, lower2, upper2 in create_combo(lower_limit, upper_limit):
    filtered_list = [sublist for sublist in lst_long if lower1 <= sublist[0] <= upper1 and lower2 <= sublist[1] <= upper2]
    x = [lst[0] for lst in filtered_list]
    y = [lst[1] for lst in filtered_list]
This code currently takes over 9 seconds to run on my PC. As the ranges expand, the run time will grow rapidly. Therefore, I am looking for suggestions on how to perform the iteration more efficiently. Is there any feature from any package that could speed up the process?
Thank you.
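One commonly suggested direction is a minimal sketch like the following (not a benchmarked answer; it assumes the lst_long, lower_limit, upper_limit, and create_combo defined above): convert the nested list to a NumPy array once, so each filter becomes a boolean mask instead of a Python-level list comprehension.
import numpy as np

arr = np.asarray(lst_long)         # shape (len(lst_long), 2)
col0, col1 = arr[:, 0], arr[:, 1]

for lower1, upper1, lower2, upper2 in create_combo(lower_limit, upper_limit):
    mask = (lower1 <= col0) & (col0 <= upper1) & (lower2 <= col1) & (col1 <= upper2)
    x = col0[mask]                 # x values of the filtered rows, as a NumPy array
    y = col1[mask]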
I am seeking to sample n random permutations of a list in Python.
This is my code:
import random
from itertools import permutations

obj = [ 5 8 9 ... 45718 45719 45720]
# type(obj) = numpy.ndarray
pairs = random.sample(list(permutations(obj, 2)), k=150)
Although the code does what I want it to, it causes memory issues. I sometimes get a MemoryError when running on the CPU, and when running on the GPU my virtual machine crashes.
How can I make the code work in a more memory-efficient manner?
This avoids using permutations at all:
import random

count = len(obj)

def index2perm(i, obj):
    i1, i2 = divmod(i, len(obj) - 1)
    if i1 <= i2:
        i2 += 1
    return (obj[i1], obj[i2])

pairs = [index2perm(i, obj) for i in random.sample(range(count*(count-1)), k=3)]
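To make the index trick concrete, here is a small added illustration (not from the original answer), showing that every index in range(count*(count-1)) maps to a distinct ordered pair with no element paired with itself:
obj_demo = ['a', 'b', 'c']
print([index2perm(i, obj_demo) for i in range(len(obj_demo) * (len(obj_demo) - 1))])
# [('a', 'b'), ('a', 'c'), ('b', 'a'), ('b', 'c'), ('c', 'a'), ('c', 'b')]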
Building on Pablo Ruiz's excellent answer, I suggest wrapping his sampling solution into a generator function that yields unique permutations by keeping track of what it has already yielded:
import numpy as np
def unique_permutations(sequence, r, n):
    """Yield n unique permutations of r elements from sequence."""
    seen = set()
    while len(seen) < n:
        # This line of code adapted from Pablo Ruiz's answer:
        candidate_permutation = tuple(np.random.choice(sequence, r, replace=False))
        if candidate_permutation not in seen:
            seen.add(candidate_permutation)
            yield candidate_permutation

obj = list(range(10))

for permutation in unique_permutations(obj, 2, 15):
    pass  # do something with the permutation

# Or, to save the result as a list:
pairs = list(unique_permutations(obj, 2, 15))
My assumption is that you are sampling a small subset of the very large number of possible permutations, in which case collisions will be rare enough that keeping a seen set will not be expensive.
Warnings: this function enters an infinite loop if you ask for more permutations than are possible given the inputs. It will also get increasingly slow as n gets close to the number of possible permutations, since collisions will become increasingly frequent.
If I were to put this function in my code base, I would put a shield at the top that calculated the number of possible permutations and raised a ValueError exception if n exceeded that number, and maybe output a warning if n exceeded one tenth that number, or something like that.
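A minimal sketch of that guard (illustrative names, not part of the original answer; math.perm requires Python 3.8+):
import math
import warnings

def check_requested_count(sequence, r, n):
    # Total number of r-permutations of the sequence.
    possible = math.perm(len(sequence), r)
    if n > possible:
        raise ValueError(f"requested {n} permutations, but only {possible} are possible")
    if n > possible // 10:
        warnings.warn("n is a large fraction of the possible permutations; "
                      "collision-based sampling may become slow")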
You can avoid materializing the permutation iterator, which could be massive in memory. You can generate random permutations by sampling the list with replace=False.
import numpy as np
obj = np.array([5,8,123,13541,42])
k = 15
permutations = [tuple(np.random.choice(obj, 2, replace=False)) for _ in range(k)]
print(permutations)
This problem becomes much harder if, for example, you require that there be no repetitions among your random permutations.
Edit: no-repetitions code
I think this is the best possible approach for the no-repetition case.
We index all possible permutations from 0 to n**2-n-1 in a permutation matrix whose diagonal must be avoided. We sample those indexes without repetition and without listing them all, map each sampled index to the coordinates of a permutation (skipping the diagonal), and then read the permutations off the matrix indexes.
import random
import numpy as np
obj = np.array([1,2,3,10,43,19,323,142,334,33,312,31,12])
k = 150
obj_len = len(obj)
indexes = random.sample(range(obj_len**2-obj_len), k)
def mapm(m):
    return m + m // obj_len + 1

permutations = [(obj[mapm(i) // obj_len], obj[mapm(i) % obj_len]) for i in indexes]
This approach is not based on any assumption, does not materialize the permutations, and its performance does not depend on a while loop retrying failed inserts of duplicates, since no duplicates are ever generated.
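As an added sanity check (not part of the original answer), mapping every index once should visit each off-diagonal cell of the matrix exactly once, i.e. no element is ever paired with itself:
all_pairs = {(mapm(i) // obj_len, mapm(i) % obj_len) for i in range(obj_len**2 - obj_len)}
assert len(all_pairs) == obj_len**2 - obj_len      # every off-diagonal cell reached once
assert all(row != col for row, col in all_pairs)   # the diagonal is never hit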
I want to add two numpy arrays of different sizes starting at a specific index. As I need to do this a couple of thousand times with large arrays, it needs to be efficient, and I am not sure how to do it efficiently without iterating through each cell.
a = [5,10,15]
b = [0,0,10,10,10,0,0]
res = add_arrays(b,a,2)
print(res)  # => [0, 0, 15, 20, 25, 0, 0]
naive approach:
# b is the bigger array
def add_arrays(b, a, i):
    for j in range(len(a)):
        b[i + j] += a[j]
    return b
You might assign the smaller array into a zeros array and then add; I would do it the following way:
import numpy as np
a = np.array([5,10,15])
b = np.array([0,0,10,10,10,0,0])
z = np.zeros(b.shape,dtype=int)
z[2:2+len(a)] = a # 2 is offset
res = z+b
print(res)
output
[ 0 0 15 20 25 0 0]
Disclaimer: I assume that offset + len(a) is always less than or equal to len(b).
Nothing wrong with your approach. You cannot get better asymptotic time or space complexity. If you want to reduce code lines (which is not an end in itself), you could use slice assignment and some other utils:
def add_arrays(b, a, i):
    b[i:i+len(a)] = map(sum, zip(b[i:i+len(a)], a))
But the functional overhead should make this less performant, if anything.
Some docs:
map
sum
zip
It should be faster than Daweo's answer, 1.5-5x (depending on the size ratio between a and b).
result = b.copy()
result[offset: offset+len(a)] += a
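For example, with the sample data from the question (offset 2), this reproduces the expected result:
import numpy as np

a = np.array([5, 10, 15])
b = np.array([0, 0, 10, 10, 10, 0, 0])
offset = 2
result = b.copy()
result[offset: offset + len(a)] += a
print(result)   # [ 0  0 15 20 25  0  0]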
Suppose I have a Python list of arbitrary length k. Now, suppose I would like a random sample of n (where n <= k!) distinct permutations of that list. I was tempted to try:
import random
import itertools
k = 6
n = 10
mylist = list(range(0, k))
j = random.sample(list(itertools.permutations(mylist)), n)
for i in j:
    print(i)
But, naturally, this code becomes unusably slow when k gets too large. Given that the number of permutations I am looking for, n, is going to be relatively small compared to the total number of permutations, computing all of the permutations is unnecessary. Yet it's important that none of the permutations in the final list are duplicates.
How would you achieve this more efficiently? Remember, mylist could be a list of anything, I just used list(range(0, k)) for simplicity.
You can generate permutations, and keep track of the ones you have already generated. To make it more versatile, I made a generator function:
import random
k = 6
n = 10
mylist = list(range(0, k))
def perm_generator(seq):
    seen = set()
    length = len(seq)
    while True:
        perm = tuple(random.sample(seq, length))
        if perm not in seen:
            seen.add(perm)
            yield perm

rand_perms = perm_generator(mylist)
j = [next(rand_perms) for _ in range(n)]
for i in j:
    print(i)
Naïve implementation
Below is the naïve implementation I did (well implemented by #Tomothy32, pure PSL using a generator):
import numpy as np

mylist = np.array(mylist)
perms = set()
for i in range(n):                       # (1) Draw N samples from the permutation universe U (#U = k!)
    while True:                          # (2) Endless loop
        perm = np.random.permutation(k)  # (3) Generate a random permutation from U
        key = tuple(perm)
        if key not in perms:             # (4) Check if the permutation has already been drawn (hash table)
            perms.add(key)               # (5) Insert into the set
            break                        # (6) Break the endless loop
    print(i, mylist[perm])
It relies on numpy.random.permutation, which randomly permutes a sequence.
The key idea is:
to generate a new random permutation (indices randomly permuted);
to check whether the permutation already exists and store it (as a tuple of int, because it must be hashable) to prevent duplicates;
then to permute the original list using the index permutation.
This naïve version does not directly suffer from the factorial complexity O(k!) of the itertools.permutations function, which generates all k! permutations before sampling from them.
About Complexity
There is something interesting about the algorithm design and complexity...
If we want to be sure that the loop can end, we must enforce N <= k!, but the code does not check this. Furthermore, assessing the complexity requires knowing how many times the endless loop will actually run before a new random tuple is found and breaks it.
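As a rough added estimate (my assumption, not from the original answer): once m distinct permutations out of k! have been drawn, the next accepted draw takes about k!/(k!-m) attempts on average, so the expected total number of attempts can be sketched as:
import math

def expected_attempts(k, N):
    total = math.factorial(k)
    # Expected attempts to collect N distinct permutations out of k! (geometric waiting times).
    return sum(total / (total - m) for m in range(N))

print(expected_attempts(6, 10))   # ~10.06: almost no retries when N << k!
print(expected_attempts(3, 6))    # ~14.7:  many retries when N approaches k!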
Limitation
Let's encapsulate the function written by #Tomothy32:
import math

def get_perms(seq, N=10):
    rand_perms = perm_generator(seq)
    return [next(rand_perms) for _ in range(N)]
For instance, this call works for very small k < 7:
get_perms(list(range(k)), math.factorial(k))
But it will fail, in O(k!) time and memory, as k grows, because it boils down to randomly finding the single missing key once the other k!-1 keys have already been found.
Always look on the bright side...
On the other hand, it seems the method can generate a reasonable number of permuted tuples in a reasonable amount of time when N <<< k!. For example, it is possible to draw more than N=5000 tuples of length k, with 10 < k < 1000, in less than one second.
When k and N are kept small and N <<< k!, the algorithm seems to have a complexity that is:
constant with respect to k;
linear with respect to N.
This is somewhat valuable.
I'm trying to use numpy/pandas to construct a sliding-window-style comparator. I've got a list of lists, each of which is a different length. I want to compare each list to another list as depicted below:
lists = [[10,15,5],[5,10],[5]]
window_diff(lists[1], lists[0]) = 25
The window diff for lists[0] and lists[1] would give 25 using the window-sliding technique shown in the image below. Because lists[1] is the shorter list, we shift it once to the right, resulting in 2 windows of comparison. If you sum the last row in the image, you get the total difference between the two lists over the two windows of comparison; in this case a total of 25. Note that we are taking the absolute difference.
The function should aggregate the total window_diff between each list and the other lists, so in this case
tot = total_diffs(lists)
tot  # >> [40, 30, 20]
# where tot[0] represents the sum of lists[0]'s window_diff with all other lists.
I wanted to know whether there is a quick route to doing this in pandas or numpy. Currently I am using a very long-winded process of for-looping through each of the lists and then comparing element by element, shifting the shorter list along the longer list.
My approach works fine for short lists, but my dataset is 10,000 lists long and some of these lists contain 60 or so data points, so speed is a criterion here. I was wondering whether numpy or pandas has something that could help with this? Thanks
Sample problem data
import random

lists = [[random.randint(0, 1000) for r in range(random.randint(0, 60))] for x in range(100000)]
Steps:
For each pair of lists from the input list of lists, create sliding windows over the bigger array and then get the absolute difference against the smaller one in that pair. We can use NumPy strides to get those sliding windows.
Get the total sum and store this summation as the pair-wise difference.
Finally, sum along each row and column of the 2D array from the previous step; their sum is the final output.
Thus, the implementation would look something like this -
import itertools
import numpy as np

def strided_app(a, L, S=1):  # Window len = L, Stride len/stepsize = S
    a = np.asarray(a)
    nrows = ((a.size - L) // S) + 1
    n = a.strides[0]
    return np.lib.stride_tricks.as_strided(a, shape=(nrows, L), strides=(S*n, n))
N = len(lists)
pair_diff_sums = np.zeros((N, N), dtype=type(lists[0][0]))
for i, j in itertools.combinations(range(N), 2):
    A, B = lists[i], lists[j]
    if len(A) > len(B):
        pair_diff_sums[i, j] = np.abs(strided_app(A, L=len(B)) - B).sum()
    else:
        pair_diff_sums[i, j] = np.abs(strided_app(B, L=len(A)) - A).sum()
out = pair_diff_sums.sum(1) + pair_diff_sums.sum(0)
For really heavy datasets, here's one method using one more level of looping -
N = len(lists)
out = np.zeros((N),dtype=type(lists[0][0]))
for k, i in enumerate(lists):
    for j in lists:
        if len(i) > len(j):
            out[k] += np.abs(strided_app(i, L=len(j)) - j).sum()
        else:
            out[k] += np.abs(strided_app(j, L=len(i)) - i).sum()
strided_app is inspired from here.
Sample input, output -
In [77]: lists
Out[77]: [[10, 15, 5], [5, 10], [5]]
In [78]: pair_diff_sums
Out[78]:
array([[ 0, 25, 15],
       [25,  0,  5],
       [15,  5,  0]])
In [79]: out
Out[79]: array([40, 30, 20])
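To make the windowing step concrete, here is a small added illustration (not part of the original answer) of what strided_app produces for the first sample list with window length 2:
print(strided_app(lists[0], L=2))
# [[10 15]
#  [15  5]]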
Just to complete #Divakar's great answer, and for its application to very large datasets:
import itertools

N = len(lists)
out = np.zeros(N, dtype=type(lists[0][0]))
for i, j in itertools.combinations(range(N), 2):
    A, B = lists[i], lists[j]
    if len(A) > len(B):
        diff = np.abs(strided_app(A, L=len(B)) - B).sum()
    else:
        diff = np.abs(strided_app(B, L=len(A)) - A).sum()
    out[i] += diff
    out[j] += diff
It does not create unnecessarily large intermediate arrays, and it updates a single output vector while iterating only over the upper triangle of the pair matrix.
It will still take a while to compute, as there is a tradeoff between computational complexity and larger-than-RAM datasets. Solutions for larger-than-RAM datasets often rely on iteration, and Python is not great at that: iterating in Python over a large dataset is very slow.
Translating the code above to Cython could speed things up a bit.
I want to create a large list containing 20,000 points in the form of:
[[x, y], [x, y], [x, y]]
where x and y can be any random integer between 0 and 1000. How would I be able to do this such that there are no duplicate coordinates [x, y]?
You could just use a while loop to pad it out until it's big enough:
>>> from random import randint
>>> n, N = 1000, 20000
>>> points = {(randint(0, n), randint(0, n)) for i in xrange(N)}
>>> while len(points) < N:
... points |= {(randint(0, n), randint(0, n))}
...
>>> points = list(list(x) for x in points)
Your initial idea was probably slow because it was iterating over lists to check containment, which is O(n). This uses sets, which are faster, and then only converts to the list structure once at the end.
Try this:
import itertools
x = range(0, 10)
aList = []
for pair in itertools.combinations(x, 2):
    for i in range(0, 10):
        aList.append(pair)
print aList
If you want points between 0 and 10, all unique and stored in a list, this does it; if you need them in random order, then use some random function to shuffle them.
Since n = 1001 is relatively small in your case, random.sample(population, k) will do just fine, taking a random sample of 20000 pairs from the space of possible pairs (no duplicates):
import random
print random.sample([[x, y] for x in xrange(1001) for y in xrange(1001)], 20000)
This is the most concise and readable solution. (But if n is very big, generating the entire space of points will not be computationally efficient.)
An approach that avoids while loops with unknown iteration counts, and avoids storing huge lists in memory, is to use random.sample to produce unique encoded values from a single range (in Py3) or xrange (in Py2), so no huge temporaries are ever generated; a simple mathematical operation splits each "encoded" value back into two values:
import random
xys = random.sample(range(1001 * 1001), 20000)
[divmod(xy, 1001) for xy in xys] # Wrap divmod in list() if you must have list, not tuple
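A quick added check (illustrative, not part of the original answer) that the decoded pairs are unique and in range:
import random

xys = random.sample(range(1001 * 1001), 20000)
points = [divmod(xy, 1001) for xy in xys]
assert len(set(points)) == 20000                                   # all pairs distinct
assert all(0 <= x <= 1000 and 0 <= y <= 1000 for x, y in points)   # coordinates in range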