I would like to know how to use the python random.sample() function within a for-loop to generate multiple sample lists that are not identical.
For example, right now I have:
for i in range(3):
    sample = random.sample(range(10), k=2)
This will generate 3 sample lists containing two numbers each, but I would like to make sure none of those sample lists are identical. (It is okay if values repeat across lists, e.g., (2,1), (3,2), (3,7) would be okay, but (2,1), (1,2), (5,4) would not, since (2,1) and (1,2) contain the same values.)
If you specifically need to "use random.sample() within a for-loop", then you could keep track of samples that you've seen, and check that new ones haven't been seen yet.
import random

seen = set()
for i in range(3):
    while True:
        sample = random.sample(range(10), k=2)
        print(f'TESTING: {sample = }')  # For demo
        fr = frozenset(sample)
        if fr not in seen:
            seen.add(fr)
            break
    print(sample)
Example output:
TESTING: sample = [0, 7]
[0, 7]
TESTING: sample = [0, 7]
TESTING: sample = [1, 5]
[1, 5]
TESTING: sample = [7, 0]
TESTING: sample = [3, 5]
[3, 5]
Here I made seen a set to allow fast lookups, and I converted sample to a frozenset so that order doesn't matter in comparisons. It has to be frozen because a set can't contain another set.
However, this could be very slow with different inputs, especially a larger range of i or a smaller range to draw samples from. In theory the loop's runtime is unbounded, since it may keep re-drawing samples it has already seen; in practice it terminates as long as enough distinct samples exist.
Alternatives
There are other ways to do the same thing that could be much more performant. For example, you could take a big random sample, then chunk it into the desired size:
n = 3
k = 2
upper = 10

def chunks(lst, size):
    # Helper: split lst into consecutive pieces of length `size`
    return [lst[i:i + size] for i in range(0, len(lst), size)]

sample = random.sample(range(upper), k=k*n)
for chunk in chunks(sample, k):
    print(chunk)
Example output:
[6, 5]
[3, 0]
[1, 8]
With this approach, the lists can never be identical; in fact you'll never even get a repeated value across lists (like the 3 in [[2,1], [3,2], [3,7]]), because the one big sample consists entirely of unique numbers.
This approach was inspired by Sven Marnach's answer on "Non-repetitive random number in numpy", which I coincidentally just read today.
It looks like you are trying to make a nested list of list items, without repetition, drawn from the original list. You can try the code below.
import random

mylist = list(range(50))

def randomlist(mylist, k):
    length = lambda: len(mylist)
    newlist = []
    while length() >= k:
        # Pop k random elements out of mylist to form the next group
        newlist.append([mylist.pop(random.randint(0, length() - 1)) for _ in range(k)])
    # Whatever is left over (fewer than k elements) becomes the last group
    newlist.append(mylist)
    return newlist

randomlist(mylist, 6)
[[2, 20, 36, 46, 14, 30],
[4, 12, 13, 3, 28, 5],
[45, 37, 18, 9, 34, 24],
[31, 48, 11, 6, 19, 17],
[40, 38, 0, 7, 22, 42],
[23, 25, 47, 41, 16, 39],
[8, 33, 10, 43, 15, 26],
[1, 49, 35, 44, 27, 21],
[29, 32]]
This should do the trick.
import random
import math

# create set to store samples
a = set()
# number of distinct elements in the population
m = 10
# sample size
k = 2
# number of samples
n = 3

# this protects against an infinite loop (see Safety Note)
if n > math.comb(m, k):
    print(
        f"Error: {math.comb(m, k)} is the number of {k}-combinations "
        f"from a set of {m} distinct elements."
    )
    exit()

# the meat
while len(a) < n:
    a.add(tuple(sorted(random.sample(range(m), k=k))))

print(a)
With a set you are guaranteed to get a collection with no duplicate elements. However, a set would happily contain both (1, 2) and (2, 1), which is why sorted is applied: if [1, 2] is drawn, sorted([1, 2]) returns [1, 2], and if [2, 1] is subsequently drawn, sorted([2, 1]) also returns [1, 2], which won't be added because (1, 2) is already in the set. We use tuple because objects in a set have to be hashable and list objects are not.
I hope this helps. Any questions, please let me know.
Safety Note
To avoid an infinite loop when you change 3 to some large number, you need to know the maximum number of possible samples of the type that you desire.
The relevant mathematical concept for this is a combination.
Suppose your first argument of random.sample() is range(m) where
m is some arbitrary positive integer. Note that this means that the
sample will be drawn from a population of m distinct members
without replacement.
Suppose that you wish to have n samples of length k in total.
The number of possible k-combinations from the set of m distinct elements is
m! / (k! * (m - k)!)
You can get this value via
from math import comb
num_comb = comb(m, k)
comb(m, k) gives the number of different ways to choose k elements from m elements without repetition and without order, which is exactly what we want.
So in the example above, m = 10, k = 2, n = 3.
With these m and k, the number of possible k-combinations from the set of m distinct elements is 45.
You need to ensure that n is less than 45 if you want to use those specific m and k and avoid an infinite loop.
I find myself in a unique situation in which I need to multiply single elements within a listed pair of numbers where each pair is nested within a parent list of elements. For example, I have my pre-defined variables as:
output = []
initial_list = [[1,2],[3,4],[5,6]]
I am trying to calculate an output such that each element is the product of a unique combination (always of length len(initial_list)) consisting of a single element from each pair. Using my example of initial_list, I am looking to generate an output of length pow(2, len(initial_list)) that is scalable for any number "n" of pairs in initial_list (with a minimum of 2 pairs). So in this case each element of the output would be as follows:
output[0] = 1 * 3 * 5
output[1] = 1 * 3 * 6
output[2] = 1 * 4 * 5
output[3] = 1 * 4 * 6
output[4] = 2 * 3 * 5
output[5] = 2 * 3 * 6
output[6] = 2 * 4 * 5
output[7] = 2 * 4 * 6
In my specific case, the order of output assignments does not matter other than output[0], which I need to be equivalent to the product of the first element in each pair in initial_list. What is the best way to proceed to generate an output list such that each element is a unique combination of every element in each list?
...
My initial approach consisted of using:
from itertools import combinations
from itertools import permutations
from itertools import product
to somehow generate a list of every possible combination, then multiply its elements together and append each product to the output list, but I couldn't figure out a way to implement the tools successfully. I have since tried to create a recursive function that combines for x in range(2): with nested recursive calls, but once again I cannot figure out a solution.
Someone more experienced and smarter than me, please help me out; any and all help is appreciated! Thank you!
Without using any external library
def multi_comb(my_list):
    """
    Return the product of every possible combination of
    one element from each pair in `my_list`, which has
    the form [[a1, a2], [b1, b2], ...].
    Arg: list
    Return: list
    """
    if not my_list:
        return [1]
    a, b = my_list.pop(0)
    result = multi_comb(my_list)
    left = [a * i for i in result]
    right = [b * i for i in result]
    return left + right

print(multi_comb([[1, 2], [3, 4], [5, 6]]))
# Output
# [15, 18, 20, 24, 30, 36, 40, 48]
I am using recursion to get the result. Here's a walk-through of how this works.
Instead of taking a top-down approach, we can take a bottom-up approach to better understand how this program works.
At the last step, a and b become 5 and 6 respectively. Calling multi_comb() with an empty list returns [1], so left and right become [5] and [6], and we return [5, 6] to the previous step.
At the second-to-last step, a and b are 3 and 4 respectively. From the last step we got [5, 6] as the result. After multiplying each of the values inside that result by a and b (notice left and right), we return [15, 18, 20, 24] to the previous step.
At the first step, our starting step, a and b are 1 and 2 respectively. The value returned from the previous step, [15, 18, 20, 24], becomes our result. We multiply both a and b by this result and return the final output, [15, 18, 20, 24, 30, 36, 40, 48].
Note:
This program works only if the list is in the form [[a1, a2], [b1, b2], [c1, c2], ...], as stated by the OP in the comments. Solving for sublists of n items each requires slightly different code, but the concept is the same as in this answer; see the sketch below.
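As a minimal sketch of that generalization, assuming sublists of arbitrary length (the name multi_comb_n is hypothetical, and this version avoids mutating the input list):
def multi_comb_n(my_list):
    # Same recursion, but loop over every element of the first sublist
    if not my_list:
        return [1]
    rest = multi_comb_n(my_list[1:])
    return [x * r for x in my_list[0] for r in rest]

print(multi_comb_n([[1, 2, 3], [4, 5]]))
# [4, 5, 8, 10, 12, 15]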
This problem can also be solved using dynamic programming:
output = [1]
for arr in initial_list:
    # pair-wise products of the previous output and the new sublist
    output = [a * b for a in arr for b in output]
This problem is easy to solve if you have just one subarray -- the output is the given subarray.
Suppose you have solved the problem for the first n - 1 subarrays and got the output. Now a new subarray is appended; how should the output change? The new output is all pair-wise products of the previous output and the "new" subarray, as the trace below shows.
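A quick trace of how output evolves for the example initial_list (same loop as above, values written out):
# start:          output = [1]
# after [1, 2]:   output = [1, 2]
# after [3, 4]:   output = [3, 6, 4, 8]
# after [5, 6]:   output = [15, 30, 20, 40, 18, 36, 24, 48]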
Look closely, there's an easy pattern. Let there be n sublists, each with 2 elements: one at index 0 and one at index 1. Now, the indices selected can be represented as a binary string of length n.
It'll start with 0000..000, then 0000...001, 0000...010 and so on. So all you need to do is:
lst = [[1, 2], [3, 4], [5, 6]]
n = len(lst)
output = []
for i in range(2**n):
    binary = bin(i)[2:].zfill(n)  # binary representation, padded to n digits
    p = 1
    for j in range(n):
        if binary[j] == "1":
            p *= lst[j][1]  # include jth list's element at index 1
        else:
            p *= lst[j][0]  # include jth list's element at index 0
    output.append(p)
The problem with scaling this solution is that, since you're generating all possible combinations, the time complexity is O(2^n).
Your idea to use itertools.product is great!
import itertools
initial_list = [[1,2],[3,4],[5,6]]
combinations = list(itertools.product(*initial_list))
# [(1, 3, 5), (1, 3, 6), (1, 4, 5), (1, 4, 6), (2, 3, 5), (2, 3, 6), (2, 4, 5), (2, 4, 6)]
Now, you can get the product of each tuple in combinations using for-loops, or using functools.reduce, or you can use math.prod, which was introduced in Python 3.8:
import itertools
import math
initial_list = [[1,2],[3,4],[5,6]]
output = [math.prod(c) for c in itertools.product(*initial_list)]
# [15, 18, 20, 24, 30, 36, 40, 48]
import itertools
import functools
import operator
initial_list = [[1,2],[3,4],[5,6]]
output = [functools.reduce(operator.mul, c) for c in itertools.product(*initial_list)]
# [15, 18, 20, 24, 30, 36, 40, 48]
import itertools
output = []
for c in itertools.product(*initial_list):
    p = 1
    for x in c:
        p *= x
    output.append(p)
# output == [15, 18, 20, 24, 30, 36, 40, 48]
Note: if you are more familiar with lambdas, operator.mul is pretty much equivalent to lambda x,y: x*y.
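As a tiny illustration of that equivalence (the values here are arbitrary):
import functools
import operator

values = (2, 3, 4)
print(functools.reduce(operator.mul, values))        # 24
print(functools.reduce(lambda x, y: x * y, values))  # 24, same result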
itertools.product and math.prod are a nice fit -
from itertools import product
from math import prod
input = [[1,2],[3,4],[5,6]]
output = [prod(x) for x in product(*input)]
print(output)
[15, 18, 20, 24, 30, 36, 40, 48]
Suppose I have a numpy array such as:
a = np.arange(9)
>> array([0, 1, 2, 3, 4, 5, 6, 7, 8])
If I want to raise each element to succeeding powers of two, I can do it this way:
power_2 = np.power(a,2)
power_4 = np.power(a,4)
Then I can combine the arrays by:
np.c_[power_2,power_4]
>> array([[ 0, 0],
[ 1, 1],
[ 4, 16],
[ 9, 81],
[ 16, 256],
[ 25, 625],
[ 36, 1296],
[ 49, 2401],
[ 64, 4096]])
What's an efficient way to do this if I don't know the degree of the even monomial (highest multiple of 2) in advance?
One thing to observe is that x^(2^n) = (...(((x^2)^2)^2)...^2)
meaning that you can compute each column from the previous by taking the square.
If you know the number of columns in advance you can do something like:
import functools as ft
import numpy as np

a = np.arange(5)
n = 4
out = np.empty((*a.shape, n), a.dtype)
out[:,0] = a
# Note: this works by side-effect!
# The optional second argument of np.square is "out", i.e. an
# array to write the result to (nonetheless the result is also
# returned directly)
ft.reduce(np.square,out.T)
out
# array([[ 0, 0, 0, 0],
# [ 1, 1, 1, 1],
# [ 2, 4, 16, 256],
# [ 3, 9, 81, 6561],
# [ 4, 16, 256, 65536]])
If the number of columns is not known in advance, then the most efficient method is to make a list of columns, append to it as needed, and only at the end use np.column_stack or np.c_ (if using np.c_, do not forget to cast the list to a tuple first). A sketch of that is below.
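Here is a minimal sketch of that list-append approach, following the squaring idea above; max_power is just an illustrative stopping cutoff, not something from the question:
import numpy as np

a = np.arange(5)
max_power = 8              # illustrative cutoff for the highest exponent

cols = [np.square(a)]      # first column: a**2
e = 2
while e * 2 <= max_power:  # each new column is the square of the previous one
    cols.append(np.square(cols[-1]))
    e *= 2

out = np.column_stack(cols)
print(out)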
The straightforward approach is:
exponents = [2**n for n in a]
[a**e for e in exponents]
This works fine for relatively small numbers, but I see what looks like numerical overflow on the larger numbers. (Although I can compute those high powers just fine using scalars.)
The most elegant way I could think of is to not calculate the exponents beforehand. Since your exponents follow a very easy pattern, you can express everything using one list comprehension.
result = [item**2*index for index,item in enumerate(a)]
If you are working with quite large datasets, this will cause some serious overhead: the statement does all calculations immediately and stores every calculated element in one large list. To mitigate this, you could use a generator expression, which will generate the data on the fly.
result = (item**2*index for index,item in enumerate(a))
See here for more details.
NumPy has a repeat function that repeats each element of an array a given (per-element) number of times.
I want to implement a function that does a similar thing, but repeats variably sized blocks of consecutive elements rather than individual elements. Essentially I want the following function:
import numpy as np

def repeat_blocks(a, sizes, repeats):
    b = []
    start = 0
    for i, size in enumerate(sizes):
        end = start + size
        b.extend([a[start:end]] * repeats[i])
        start = end
    return np.concatenate(b)
For example, given
a = np.arange(20)
sizes = np.array([3, 5, 2, 6, 4])
repeats = np.array([2, 3, 2, 1, 3])
then
repeat_blocks(a, sizes, repeats)
returns
array([ 0, 1, 2,
0, 1, 2,
3, 4, 5, 6, 7,
3, 4, 5, 6, 7,
3, 4, 5, 6, 7,
8, 9,
8, 9,
10, 11, 12, 13, 14, 15,
16, 17, 18, 19,
16, 17, 18, 19,
16, 17, 18, 19 ])
I want to push these loops into numpy in the name of performance. Is this possible? If so, how?
Here's one vectorized approach using cumsum -
# Get repeats for each group using group lengths/sizes
r1 = np.repeat(np.arange(len(sizes)), repeats)
# Get total size of output array, as needed to initialize output indexing array
N = (sizes*repeats).sum() # or np.dot(sizes, repeats)
# Initialize indexing array with ones as we need to set up incremental indexing
# within each group when cumulatively summed at the final stage.
# Two steps here:
# 1. Within each group, we have multiple sequences, so set up the offsetting
#    at each sequence length by the sequence lengths preceding it.
id_ar = np.ones(N, dtype=int)
id_ar[0] = 0
insert_index = sizes[r1[:-1]].cumsum()
insert_val = (1-sizes)[r1[:-1]]
# 2. For each group, make sure the indexing starts from the next group's
# first element. So, simply assign 1s there.
insert_val[r1[1:] != r1[:-1]] = 1
# Assign index-offseting values
id_ar[insert_index] = insert_val
# Finally index into input array for the group repeated o/p
out = a[id_ar.cumsum()]
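To sanity-check it, the same steps can be wrapped in a function and run against the example from the question (the name repeat_blocks_vectorized is just for illustration):
import numpy as np

def repeat_blocks_vectorized(a, sizes, repeats):
    # Same cumsum-based steps as above, wrapped for reuse
    r1 = np.repeat(np.arange(len(sizes)), repeats)
    N = (sizes * repeats).sum()
    id_ar = np.ones(N, dtype=int)
    id_ar[0] = 0
    insert_index = sizes[r1[:-1]].cumsum()
    insert_val = (1 - sizes)[r1[:-1]]
    insert_val[r1[1:] != r1[:-1]] = 1
    id_ar[insert_index] = insert_val
    return a[id_ar.cumsum()]

a = np.arange(20)
sizes = np.array([3, 5, 2, 6, 4])
repeats = np.array([2, 3, 2, 1, 3])
print(repeat_blocks_vectorized(a, sizes, repeats))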
This function is a great candidate to speed up using Numba:
import numba
import numpy as np

@numba.njit
def repeat_blocks_jit(a, sizes, repeats):
    out = np.empty((sizes * repeats).sum(), a.dtype)
    start = 0
    oi = 0
    for i, size in enumerate(sizes):
        end = start + size
        for rep in range(repeats[i]):
            oe = oi + size
            out[oi:oe] = a[start:end]
            oi = oe
        start = end
    return out
This is significantly faster than Divakar's pure NumPy solution, and a lot closer to your original code. I made no effort at all to optimize it. Note that np.dot() and np.repeat() can't be used here, but that doesn't matter when all the code gets compiled.
Plus, since it is njit, meaning "nopython" mode, you can even use @numba.njit(nogil=True) and get a multicore speedup if you have many of these calls to make.
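For example, calling it with the arrays from the question should reproduce the expected output:
a = np.arange(20)
sizes = np.array([3, 5, 2, 6, 4])
repeats = np.array([2, 3, 2, 1, 3])
print(repeat_blocks_jit(a, sizes, repeats))
# Should print the same array as the repeat_blocks() example above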
This task is from www.codingame.com.
Task
Write a program which, using a given number of strengths,
identifies the two closest strengths and shows their difference as an integer.
Info
n = Number of horses
pi = strength of each horse
d = difference
1 < n < 100000
0 < pi ≤ 10000000
My code currently
def get_dif(a, b):
    return abs(a - b)

horse_str = [10, 5, 15, 17, 3, 8, 11, 28, 6, 55, 7]
n = len(horse_str)
d = 10000001

for x in range(len(horse_str)):
    for y in range(x, len(horse_str) - 1):
        d = min([get_dif(horse_str[x], horse_str[y + 1]), d])

print(d)
Test cases
[3,5,8, 9] outputs: 1
[10, 5, 15, 17, 3, 8, 11, 28, 6, 55, 7] outputs: 1
Problem
They both work, but the next test gives me a very long list of horse strengths, and I get "Process has timed out. This may mean that your solution is not optimized enough to handle some cases."
How can I optimise it? Thank you!
EDIT ONE
Default code given
import sys
import math

# Auto-generated code below aims at helping you parse
# the standard input according to the problem statement.

n = int(input())
for i in range(n):
    pi = int(input())

# Write an action using print
# To debug: print("Debug messages...", file=sys.stderr)
print("answer")
Since you can use the built-in sort (which runs in O(n log n) and avoids the costly hand-written double loop with O(n**2) complexity that times out on a very big list), let me propose something:
sort the list
compute the minimum of the absolute differences of adjacent values, passing a generator expression to the min function
The minimum has to be the absolute difference of adjacent values: once the list is sorted, the closest pair must be adjacent, since any value lying between two non-adjacent values is at least as close to each of them. Since the list is sorted using a fast algorithm, the heavy lifting is done for you.
like this:
horse_str = [10, 5, 15, 17, 3, 8, 11, 28, 6, 55, 7]
sh = sorted(horse_str)
print(min(abs(sh[i]-sh[i+1]) for i in range(len(sh)-1)))
I also get 1 as a result (I hope I didn't miss anything)
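And a sketch of how this plugs into the auto-generated scaffold from the question, assuming the input format shown there (n on the first line, then one strength per line):
n = int(input())
strengths = sorted(int(input()) for _ in range(n))
print(min(strengths[i + 1] - strengths[i] for i in range(n - 1)))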