Hi,
I'm trying to find a general expression to obtain the exponents of a multivariate polynomial of a given order and number of variables, like the one presented in this reference in equation (3).
Here is my current code, which uses an itertools.product generator.
import itertools

def generalized_taylor_expansion_exponents( order, n_variables ):
    """
    Find the exponents of a multivariate polynomial expression of order
    `order` and `n_variables` number of variables.
    """
    exps = (p for p in itertools.product(range(order+1), repeat=n_variables) if sum(p) <= order)
    # discard the first element, which is all zeros
    exps.next()
    return exps
The desired output is this:
for i in generalized_taylor_expansion_exponents(order=3, n_variables=3):
    print i
(0, 0, 1)
(0, 0, 2)
(0, 0, 3)
(0, 1, 0)
(0, 1, 1)
(0, 1, 2)
(0, 2, 0)
(0, 2, 1)
(0, 3, 0)
(1, 0, 0)
(1, 0, 1)
(1, 0, 2)
(1, 1, 0)
(1, 1, 1)
(1, 2, 0)
(2, 0, 0)
(2, 0, 1)
(2, 1, 0)
(3, 0, 0)
Actually this code executes fast, because only the generator object is created. If I want to fill a list with values from this generator, execution really starts to be slow, mainly because of the high number of calls to sum. Typical values for order and n_variables are 5 and 10, respectively.
How can I significantly improve execution speed?
Thanks for any help.
Davide Lasagna
Actually your biggest performance issue is that most of the tuples you're generating are too big and need to be thrown away. The following should generate exactly the tuples you want.
def generalized_taylor_expansion_exponents( order, n_variables ):
    """
    Find the exponents of a multivariate polynomial expression of order
    `order` and `n_variables` number of variables.
    """
    pattern = [0] * n_variables
    for current_sum in range(1, order + 1):
        pattern[0] = current_sum
        yield tuple(pattern)
        while pattern[-1] < current_sum:
            for i in range(2, n_variables + 1):
                if 0 < pattern[n_variables - i]:
                    pattern[n_variables - i] -= 1
                    if 2 < i:
                        pattern[n_variables - i + 1] = 1 + pattern[-1]
                        pattern[-1] = 0
                    else:
                        pattern[-1] += 1
                    break
            yield tuple(pattern)
        pattern[-1] = 0
I would try writing it recursively so as to generate only the desired elements:
def _gtee_helper(order, n_variables):
    if n_variables == 0:
        yield ()
        return
    for i in range(order + 1):
        for result in _gtee_helper(order - i, n_variables - 1):
            yield (i,) + result

def generalized_taylor_expansion_exponents(order, n_variables):
    """
    Find the exponents of a multivariate polynomial expression of order
    `order` and `n_variables` number of variables.
    """
    result = _gtee_helper(order, n_variables)
    result.next()  # discard the first element, which is all zeros
    return result
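For reference, here is a self-contained Python 3 sketch of this recursive approach (using the builtin next() in place of the Python 2 .next() method; the name exponents is illustrative):

```python
def _gtee_helper(order, n_variables):
    # base case: a zero-variable polynomial has one empty exponent tuple
    if n_variables == 0:
        yield ()
        return
    # give the first variable exponent i, then distribute what is left
    for i in range(order + 1):
        for rest in _gtee_helper(order - i, n_variables - 1):
            yield (i,) + rest

def exponents(order, n_variables):
    gen = _gtee_helper(order, n_variables)
    next(gen)  # discard the all-zeros tuple
    return gen

print(list(exponents(order=2, n_variables=2)))
# [(0, 1), (0, 2), (1, 0), (1, 1), (2, 0)]
```

Since only the valid exponent tuples are ever built, there is no filtering cost at all.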
Related
I would like to know all combinations of 0s and 1s that I can obtain in a list of a given length, defining the number of 0 elements and 1 elements.
Sample:
Length: 4
Number of 0: 2
Number of 1: 2 (this is the length minus the number of zeroes)
I want to obtain following list:
combination = [[0,0,1,1], [0,1,0,1], [0,1,1,0], [1,0,1,0], [1,0,0,1], [1,1,0,0]]
I have tried itertools.product, but I cannot fix the number of 0s and 1s.
I did a filter to group all combinations depending on the sum of the list (if the sum is 2, I have all the combinations of my sample). However, I need to know all combinations for a length of 106 elements (0s and 1s) and my laptop cannot handle it.
Assuming that your question is: "list all combinations of an equal number
of zeroes and ones for a given number of zeroes".
We use "product" to iterate over all possible sequences of double the length
of the number of zeroes, filtering out all occurrences where zeroes and
ones are not equal (so sum(list) must be equal to number of zeroes).
Print the length of the list.
Repeat to print the actual list.
This will not work well for a large number of zeroes.
import itertools
num_zeroes = 2
base_tuple = (0, 1) # * num_zeroes
perms = itertools.product(base_tuple, repeat=2 * num_zeroes)
p2 = itertools.filterfalse(lambda x: sum(x) != num_zeroes, perms)
print(len(list(p2)))
# 6
perms = itertools.product(base_tuple, repeat=2 * num_zeroes)
p2 = itertools.filterfalse(lambda x: sum(x) != num_zeroes, perms)
print(list(p2))
# [(0, 0, 1, 1), (0, 1, 0, 1), (0, 1, 1, 0),
# (1, 0, 0, 1), (1, 0, 1, 0), (1, 1, 0, 0)]
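For a larger number of zeroes, here is a sketch of a filter-free alternative: rather than generating all 2**(2*num_zeroes) products and discarding most of them, choose the positions of the ones directly with itertools.combinations, so only valid lists are ever produced (the function name is illustrative):

```python
from itertools import combinations

def equal_zeroes_ones(num_zeroes):
    length = 2 * num_zeroes
    # every choice of positions for the ones yields exactly one valid list
    for one_positions in combinations(range(length), num_zeroes):
        lst = [0] * length
        for i in one_positions:
            lst[i] = 1
        yield lst

print(len(list(equal_zeroes_ones(2))))
# 6
```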
There's actually a very nice recursive solution to the problem of generating all binary strings of length n with exactly k bits set. There are of course n!/(k!(n-k)!) such strings.
Let's assume we have a function binaryNK(n, k) which returns the required set. The key observation is that this set can be partitioned into two subsets:
those with a leading 1 followed by each member of binaryNK(n-1, k-1)
those with a leading 0 followed by each member of binaryNK(n-1, k)
Of course, 1 is only valid when k > 0, and 2 is only valid when n > k. The terminating condition is n == 0, at which point we have a solution.
Here's some simple code to illustrate:
def binaryNK(s, n, k):
    if n == 0: print(s)
    if k > 0: binaryNK(s+"1", n-1, k-1)
    if n > k: binaryNK(s+"0", n-1, k)

binaryNK("", 5, 2)
Output:
11000
10100
10010
10001
01100
01010
01001
00110
00101
00011
There is an issue that I am trying to solve which requires me to generate the indices for an n-dimensional list. E.g. [5, 4, 3] describes a 3-dimensional list, so valid indices are [0, 0, 0], [0, 0, 1], [0, 1, 0] ... [2, 2, 1] ..... [4, 3, 2]. The best I could come up with is a recursive algorithm, but this isn't constant space:
def f1(dims):
    def recur(res, lst, depth, dims):
        if depth == len(dims):
            res.append(lst[::])
            return
        curr = dims[depth]
        for i in range(curr):
            lst[depth] = i
            recur(res, lst, depth + 1, dims)
    res = []
    lst = [0] * len(dims)
    recur(res, lst, 0, dims)
    return res
The dimensions can be any number, e.g. 4D, 5D, 15D, etc. Each time it would be given in the form of a list, e.g. 5D would be [3, 2, 1, 5, 2], and I would need to generate all the valid indices while using constant space (just while loops and index manipulation). How would I go about generating these efficiently without the help of any built-in Python functions (just while, for loops, etc.)?
This is a working solution in constant space (a single loop variable i, and a vector idx of which modified copies are being yielded, and a single temporary length variable n to avoid calling len() on every iteration):
def all_indices(dimensions):
    n = len(dimensions)
    idx = [0] * n
    while True:
        for i in range(n):
            yield tuple(idx)
            if idx[i] + 1 < dimensions[i]:
                idx[i] += 1
                break
            else:
                idx[i] = 0
        if not any(idx):
            break
print(list(all_indices([3, 2, 1])))
Result:
[(0, 0, 0), (1, 0, 0), (2, 0, 0), (0, 0, 0), (0, 1, 0), (1, 1, 0), (2, 1, 0), (0, 1, 0), (0, 0, 0)]
As pointed out in the comments, there's duplicates there, a bit sloppy, this is cleaner:
def all_indices(dimensions):
    n = len(dimensions)
    idx = [0] * n
    yield tuple(idx)  # yield the initial 'all zeroes' state
    while True:
        for i in range(n):
            if idx[i] + 1 < dimensions[i]:
                idx[i] += 1
                yield tuple(idx)  # yield new states
                break
            else:
                idx[i] = 0  # no yield, repeated state
        if not any(idx):
            break
print(list(all_indices([3, 2, 1])))
Alternatively, you could yield before the break instead of at the start of the loop, but I feel having the 'all zeroes' at the start looks cleaner.
The break is there to force a depth first on running through the indices, which ensures the loop never reaches 'all zeroes' again before having passed all possibilities. Try removing the break and then passing in something like [2, 1, 2] and you'll find it is missing a result.
I think a break is actually the 'clean' way to do it, since it allows using a simple for instead of using a while with a more complicated condition and a separate increment statement. You could do that though:
def all_indices3(dimensions):
    n = len(dimensions)
    idx = [1] + [0] * (n - 1)
    yield tuple([0] * n)
    while any(idx):
        yield tuple(idx)
        i = 0
        while i < n and idx[i] + 1 == dimensions[i]:
            idx[i] = 0
            i += 1
        if i < n:
            idx[i] += 1
This has the same result, but only uses while, if and yields the results in the same order.
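As a quick sanity check on the generator (a sketch repeating the cleaner version from above), the number of tuples produced should equal the product of the dimensions, with no duplicates:

```python
def all_indices(dimensions):
    # odometer-style iteration: bump the first index that can still grow,
    # resetting the lower ones, until we wrap back around to all zeroes
    n = len(dimensions)
    idx = [0] * n
    yield tuple(idx)
    while True:
        for i in range(n):
            if idx[i] + 1 < dimensions[i]:
                idx[i] += 1
                yield tuple(idx)
                break
            else:
                idx[i] = 0
        if not any(idx):
            break

result = list(all_indices([3, 2, 2]))
print(len(result))  # 12 == 3 * 2 * 2
```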
Instructions: the rat can move only up or right.
Input:
The first line contains the table size n and the number of cheeses m.
Each of the following lines gives the position x, y of a cheese.
Output:
The maximum number of cheeses eaten
Example 1:
input : 1 1 1
output : 1 1
Example 2:
input :
3 2
1 2
3 1
output : 1
Example 3:
input :
5 5
2 3
3 2
4 3
4 5
5 2
output: 3
How can I solve this with Python?
I tried
def maxAverageOfPath(table, N):
    dp = [[0 for i in range(N)] for j in range(N)]
    dp[0][0] = table[0][0]
    # Initialize first column of total table(dp) array
    for i in range(0, N):
        dp[i][0] = 0
    for j in range(0, N):
        dp[0][j] = 0
    for i in range(0, N):
        for j in range(0, N):
            print(i, j)
            if i == N-1 and j == N-1:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
                continue
            if i == N-1:
                dp[i][j] = table[i][j + 1]
                continue
            if j == N-1:
                dp[i][j] = table[i + 1][j]
                continue
            dp[i][j] = max(table[i + 1][j], table[i][j + 1])
    return dp
but failed...
For dynamic programming you want an edge condition(s) and a way to score where you are right now. After that it's more-or-less smart brute force. The smart part comes from memoizing so you don't repeat work.
Here's a basic recursive approach for python that does the following:
Organize the table of cheese as a frozen set of tuples. This can be hashed for memoization and you can determine whether a location is in the set in constant time.
Creates an edge condition for the end (when both coordinates are N) and an edge condition for when you walk off the map -- that just returns 0.
Uses lru_cache to memoize. You can implement this yourself easily.
from functools import lru_cache

def hasCheese(table, location):
    ''' Helper function just to improve readability '''
    return 1 if location in table else 0

@lru_cache()
def maxC(table, N, location = (0, 0)):
    # edge conditions: final square and off the grid
    if location[0] == N and location[1] == N:
        return hasCheese(table, location)
    if any(l > N for l in location):
        return 0
    # recursion
    score_here = hasCheese(table, location)
    return max(score_here + maxC(table, N, (location[0] + 1, location[1])),
               score_here + maxC(table, N, (location[0], location[1] + 1)))

t = frozenset([(2, 3), (3, 2), (4, 3), (4, 5), (5, 2)])
N = 5
print(maxC(t, N))
# prints 3
If you want to do this in a top-down manner using a matrix, you need to be very careful that you always have the previous index set. It's easier to make mistakes doing it this way because you need to get the indexes and order just right. When you set it up as two nested increasing loops, that means the next value is always the current cell plus the max of the two cells one unit less — you should always be looking backward in the matrix. It's not clear what you are trying to do when you are looking forward with this:
dp[i][j] = table[i][j + 1]
because j+1 has not been determined yet.
Since the cheese coordinates are 1-indexed, an easy way forward is to make your matrix zero-indexed and N+1 in size. Then when you start your for loops at 1 you can always look at a lower index without undershooting the matrix and avoid a lot of the if/else logic. For example:
def hasCheese(table, location):
    ''' Helper function just to improve readability '''
    return 1 if location in table else 0

def maxAverageOfPath(table, N):
    # matrix is sized one unit bigger
    dp = [[0 for i in range(N+1)] for j in range(N+1)]
    # iterate 1-5 inclusive
    for i in range(1, N+1):
        for j in range(1, N+1):
            # because the zeroth row and column are already zero,
            # this works without undershooting the table
            dp[i][j] = hasCheese(table, (i, j)) + max(dp[i][j-1], dp[i-1][j])
    # solution is in the corner
    return dp[N][N]

t = {(2, 3), (3, 2), (4, 3), (4, 5), (5, 2)}
N = 5
print(maxAverageOfPath(t, N))  # 3
When you're done, your matrix will look like:
[0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0]
[0, 0, 0, 1, 1, 1]
[0, 0, 1, 1, 1, 1]
[0, 0, 1, 2, 2, 3]
[0, 0, 2, 2, 2, 3]
Your starting point is at (1, 1) in the top left and your answer is in the bottom right corner.
At each point you have two options to move further :
array [row] [col+1]
array [row+1] [col]
As we have to find a path which involves the maximum cheese, this can be achieved by recurring through the array like below:
Solution =>
array[i][j] + Max(Recur(array[i][j+1]), Recur(array[i+1][j]))
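A minimal sketch of that recurrence with memoization, assuming the cheese positions from Example 3 are stored in a set of 1-indexed coordinates rather than a 2D array (the names here are illustrative):

```python
from functools import lru_cache

cheese = {(2, 3), (3, 2), (4, 3), (4, 5), (5, 2)}
N = 5

@lru_cache(maxsize=None)
def best(i, j):
    # off the grid: no more cheese reachable
    if i > N or j > N:
        return 0
    here = 1 if (i, j) in cheese else 0
    # take the better of moving right or up from here
    return here + max(best(i + 1, j), best(i, j + 1))

print(best(1, 1))  # 3
```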
How can I randomly shuffle a list so that none of the elements remains in its original position?
In other words, given a list A with distinct elements, I'd like to generate a permutation B of it so that
this permutation is random
and for each n, a[n] != b[n]
e.g.
a = [1,2,3,4]
b = [4,1,2,3] # good
b = [4,2,1,3] # good
a = [1,2,3,4]
x = [2,4,3,1] # bad
I don't know the proper term for such a permutation (is it "total"?), thus I'm having a hard time googling. The correct term appears to be "derangement".
After some research I was able to implement the "early refusal" algorithm as described e.g. in this paper [1]. It goes like this:
import random

def random_derangement(n):
    while True:
        v = [i for i in range(n)]
        for j in range(n - 1, -1, -1):
            p = random.randint(0, j)
            if v[p] == j:
                break
            else:
                v[j], v[p] = v[p], v[j]
        else:
            if v[0] != 0:
                return tuple(v)
The idea is: we keep shuffling the array, once we find that the permutation we're working on is not valid (v[i]==i), we break and start from scratch.
A quick test shows that this algorithm generates all derangements uniformly:
N = 4

# enumerate all derangements for testing
import itertools
counter = {}
for p in itertools.permutations(range(N)):
    if all(p[i] != i for i in p):
        counter[p] = 0

# make M probes for each derangement
M = 5000
for _ in range(M*len(counter)):
    # generate a random derangement
    p = random_derangement(N)
    # is it really?
    assert p in counter
    # ok, record it
    counter[p] += 1

# the distribution looks uniform
for p, c in sorted(counter.items()):
    print p, c
Results:
(1, 0, 3, 2) 4934
(1, 2, 3, 0) 4952
(1, 3, 0, 2) 4980
(2, 0, 3, 1) 5054
(2, 3, 0, 1) 5032
(2, 3, 1, 0) 5053
(3, 0, 1, 2) 4951
(3, 2, 0, 1) 5048
(3, 2, 1, 0) 4996
I chose this algorithm for simplicity; this presentation [2] briefly outlines other ideas.
References:
[1] An analysis of a simple algorithm for random derangements. Merlini, Sprugnoli, Verri. WSPC Proceedings, 2007.
[2] Generating random derangements. Martínez, Panholzer, Prodinger.
Such permutations are called derangements. In practice you can just try random permutations until hitting a derangement; their ratio approaches the inverse of e as n grows.
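A sketch of that rejection approach: shuffle a copy and retry until no element is left in its original position. Since roughly 1/e of all permutations are derangements, the expected number of attempts is about e ≈ 2.72, independent of n:

```python
import random

def random_derangement(xs):
    # shuffle until no element coincides with its original position;
    # assumes the elements of xs are distinct
    while True:
        ys = xs[:]
        random.shuffle(ys)
        if all(a != b for a, b in zip(xs, ys)):
            return ys

print(random_derangement([1, 2, 3, 4]))
```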
As a possible starting point, the Fisher-Yates shuffle goes like this.
import random

def swap(xs, a, b):
    xs[a], xs[b] = xs[b], xs[a]

def permute(xs):
    for a in xrange(len(xs)):
        b = random.choice(xrange(a, len(xs)))
        swap(xs, a, b)
Perhaps this will do the trick?
def derange(xs):
    for a in xrange(len(xs) - 1):
        b = random.choice(xrange(a + 1, len(xs) - 1))
        swap(xs, a, b)
    swap(xs, len(xs) - 1, random.choice(xrange(len(xs) - 1)))
Here's the version described by Vatine:
def derange(xs):
    for a in xrange(1, len(xs)):
        b = random.choice(xrange(0, a))
        swap(xs, a, b)
    return xs
A quick statistical test:
from collections import Counter

def test(n):
    derangements = (tuple(derange(range(n))) for _ in xrange(10000))
    for k, v in Counter(derangements).iteritems():
        print('{} {}'.format(k, v))
test(4):
(1, 3, 0, 2) 1665
(2, 0, 3, 1) 1702
(3, 2, 0, 1) 1636
(1, 2, 3, 0) 1632
(3, 0, 1, 2) 1694
(2, 3, 1, 0) 1671
This does appear uniform over its range, and it has the nice property that each element has an equal chance to appear in each allowed slot.
But unfortunately it doesn't include all of the derangements. There are 9 derangements of size 4. (The formula and an example for n=4 are given on the Wikipedia article).
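That count is easy to confirm by brute force (a quick sketch):

```python
from itertools import permutations

n = 4
# a derangement leaves no element in its original position
derangements = [p for p in permutations(range(n))
                if all(p[i] != i for i in range(n))]
print(len(derangements))  # 9
```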
This should work
import random

totalrandom = False
array = [1, 2, 3, 4]
it = 0

while totalrandom == False:
    it += 1
    shuffledArray = sorted(array, key=lambda k: random.random())
    total = 0
    for i in array:
        if array[i-1] != shuffledArray[i-1]: total += 1
    if total == 4:
        totalrandom = True
    if it > 10*len(array):
        print("'Total random' shuffle impossible")
        exit()

print(shuffledArray)
Note the variable it which exits the code if too many iterations are called. This accounts for arrays such as [1, 1, 1] or [3]
EDIT
Turns out that if you're using this with large arrays (bigger than 15 or so), it will be CPU intensive. Using a randomly generated 100 element array and upping it to len(array)**3, it takes my Samsung Galaxy S4 a long time to solve.
EDIT 2
After about 1200 seconds (20 minutes), the program ended saying 'Total Random shuffle impossible'. For large arrays, you need a very large number of permutations... Say len(array)**10 or something.
Code:
import random, time

totalrandom = False
array = []
it = 0

for i in range(1, 100):
    array.append(random.randint(1, 6))

start = time.time()
while totalrandom == False:
    it += 1
    shuffledArray = sorted(array, key=lambda k: random.random())
    total = 0
    for i in array:
        if array[i-1] != shuffledArray[i-1]: total += 1
    if total == 4:
        totalrandom = True
    if it > len(array)**3:
        end = time.time()
        print(end-start)
        print("'Total random' shuffle impossible")
        exit()

end = time.time()
print(end-start)
print(shuffledArray)
Here is a smaller one, with more Pythonic syntax:
import random

def derange(s):
    d = s[:]
    while any(a == b for a, b in zip(d, s)):
        random.shuffle(d)
    return d
All it does is shuffle the list until there is no element-wise match. Also, be careful: it will run forever if a list that cannot be deranged is passed. That happens when there are duplicates. To remove duplicates, simply call the function like this: derange(list(set(my_list_to_be_deranged))).
import random

a = [1, 2, 3, 4]
c = []
i = 0
while i < len(a):
    while 1:
        k = random.choice(a)
        #print k, a[i]
        if k == a[i]:
            pass
        else:
            if k not in c:
                if i == len(a)-2:
                    if a[len(a)-1] not in c:
                        if k == a[len(a)-1]:
                            c.append(k)
                            break
                    else:
                        c.append(k)
                        break
                else:
                    c.append(k)
                    break
    i = i+1
print c
A quick way is to shuffle your list repeatedly until you are left with a list that satisfies your condition.
import random
import copy

def is_derangement(l_original, l_proposal):
    return all(l_original[i] != item for i, item in enumerate(l_proposal))

l_original = [1, 2, 3, 4, 5]
l_proposal = copy.copy(l_original)

while not is_derangement(l_original, l_proposal):
    random.shuffle(l_proposal)

print(l_proposal)
What is an elegant way to create the set of ALL vectors of dimension N, where each element is an integer between 0 and K inclusive ([0, K])?
My current code is:
import numpy as np
from sets import Set

def nodes_init(n, k):
    nodes = {}
    e = np.identity(n)
    nodes[tuple(np.zeros(n))] = 0
    s = Set()
    s.add(tuple(np.zeros(n)))
    s_used = Set()
    while len(s) != 0:
        node = s.pop()
        if node in s_used:
            continue
        s_used.add(node)
        for i in xrange(len(e)):
            temp = node + e[i]
            temp = cap(temp, k)
            temp = tuple(temp)
            nodes[temp] = 0
            if not temp in s_used:
                s.add(temp)
    return nodes

def cap(t, k):
    for i in xrange(len(t)):
        if t[i] > k:
            t[i] = k
    return t
and I don't like it.
The keys of the dictionary nodes are the desired vectors.
Use itertools
from itertools import product

def nodes_iter(n, k):
    """ returns generator (lazy iterator) rather than creating whole list """
    return product(range(k+1), repeat=n)
Example usage:
for node in nodes_iter(3, 1):
    print node
(0, 0, 0)
(0, 0, 1)
(0, 1, 0)
(0, 1, 1)
(1, 0, 0)
(1, 0, 1)
(1, 1, 0)
(1, 1, 1)