Instructions: the rat can move only up or right.
Input:
The first line contains the table size n and the number of cheeses m.
Each of the next m lines gives the position x, y of a cheese.
Output :
The maximum number of cheeses the rat can eat.
Example 1:
input :
1 1
1 1
output : 1
Example 2:
input :
3 2
1 2
3 1
output : 1
Example 3:
input :
5 5
2 3
3 2
4 3
4 5
5 2
output: 3
How can I solve this with Python?
I tried:
def maxAverageOfPath(table, N):
    dp = [[0 for i in range(N)] for j in range(N)]
    dp[0][0] = table[0][0]
    # Initialize first column of total table(dp) array
    for i in range(0, N):
        dp[i][0] = 0
    for j in range(0, N):
        dp[0][j] = 0
    for i in range(0, N):
        for j in range(0, N):
            print(i, j)
            if i == N-1 and j == N-1:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
                continue
            if i == N-1 :
                dp[i][j] = table[i][j + 1]
                continue
            if j == N-1 :
                dp[i][j] = table[i + 1][j]
                continue
            dp[i][j] = max(table[i + 1][j], table[i][j + 1])
    return dp
but failed...
For dynamic programming you want one or more edge conditions and a way to score where you are right now. After that it's more-or-less smart brute force. The smart part comes from memoizing so you don't repeat work.
Here's a basic recursive approach for Python that does the following:
Organizes the table of cheese as a frozen set of tuples. This can be hashed for memoization and you can determine whether a location is in the set in constant time.
Creates an edge condition for the end (when both coordinates are N) and an edge condition for when you walk off the map -- that just returns 0.
Uses lru_cache to memoize. You can implement this yourself easily.
from functools import lru_cache

def hasCheese(table, location):
    ''' Helper function just to improve readability '''
    return 1 if location in table else 0

@lru_cache()
def maxC(table, N, location = (0, 0)):
    # edge conditions: final square and off the grid
    if location[0] == N and location[1] == N:
        return hasCheese(table, location)
    if any(l > N for l in location):
        return 0
    # recursion
    score_here = hasCheese(table, location)
    return max(score_here + maxC(table, N, (location[0] + 1, location[1])),
               score_here + maxC(table, N, (location[0], location[1] + 1))
               )

t = frozenset([(2, 3), (3, 2), (4, 3), (4, 5), (5, 2)])
N = 5

print(maxC(t, N))
# prints 3
If you want to do this in a top-down manner using a matrix, you need to be very careful that you always have the previous index set. It's easier to make mistakes doing it this way because you need to get the indices and the order just right. When you set it up as two nested increasing loops, the next value is always the current cell plus the max of the two cells one unit back; you should always be looking backward in the matrix. It's not clear what you are trying to do when you look forward with this:
dp[i][j] = table[i][j + 1]
because j+1 has not been determined yet.
Since the cheese coordinates are 1-indexed, an easy way forward is to make your matrix zero-indexed and N+1 in size. Then, when you start your for loops at 1, you can always look at a lower index without undershooting the matrix and avoid a lot of the if/else logic. For example:
def hasCheese(table, location):
    ''' Helper function just to improve readability '''
    return 1 if location in table else 0

def maxAverageOfPath(table, N):
    # matrix is sized one unit bigger
    dp = [[0 for i in range(N+1)] for j in range(N+1)]
    # iterate 1-5 inclusive
    for i in range(1, N+1):
        for j in range(1, N+1):
            # because the zeroth row and column are already zero, this works without undershooting the table
            dp[i][j] = hasCheese(table, (i, j)) + max(dp[i][j-1], dp[i-1][j])
    # solution is in the corner
    return dp[N][N]

t = {(2, 3), (3, 2), (4, 3), (4, 5), (5, 2)}
N = 5

print(maxAverageOfPath(t, N)) #3
When you're done, your matrix will look like:
[0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0]
[0, 0, 0, 1, 1, 1]
[0, 0, 1, 1, 1, 1]
[0, 0, 1, 2, 2, 3]
[0, 0, 2, 2, 2, 3]
Your starting point is at (1, 1), in the top-left, and your answer is in the bottom-right corner.
At each point you have two options to move further:
array[row][col+1]
array[row+1][col]
Since we have to find a path that collects the maximum cheese, this can be achieved by recursing through the array as below:
Solution =>
array[i][j] + Max(Recur(array[i][j+1]), Recur(array[i+1][j]))
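For illustration, here is a sketch of that recurrence in Python (my own addition, not from the answer above; the function name max_cheese, the use of lru_cache, and the 1-indexed coordinates are illustrative choices based on the examples earlier in this thread):
from functools import lru_cache

def max_cheese(cheese, n):
    # cheese: frozenset of 1-indexed (row, col) positions; n: board size
    @lru_cache(maxsize=None)
    def recur(r, c):
        if r > n or c > n:  # walked off the board, nothing more to collect
            return 0
        here = 1 if (r, c) in cheese else 0
        return here + max(recur(r, c + 1), recur(r + 1, c))
    return recur(1, 1)

print(max_cheese(frozenset({(2, 3), (3, 2), (4, 3), (4, 5), (5, 2)}), 5))  # 3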
I would like to know all combinations of 0s and 1s that I can obtain in a list of a given length, specifying the number of 0 elements and 1 elements.
Sample:
Length: 4
Number of 0: 2
Number of 1: 2 (this information is length - number of zeroes)
I want to obtain following list:
combination = [[0,0,1,1], [0,1,0,1], [0,1,1,0], [1,0,1,0], [1,0,0,1], [1,1,0,0]]
I have tried itertools.product, but I cannot fix the number of 0s and 1s.
I did a filter to group all combinations depending on the sum of the list (if the sum is 2, I have all the combinations of my sample). However, I need to know all combinations for a length of 106 elements (0s and 1s), and my laptop cannot handle it.
Assuming that your question is: "list all combinations of an equal number of zeroes and ones for a given number of zeroes".
We use product to iterate over all possible sequences of double the length of the number of zeroes, filtering out all occurrences where zeroes and ones are not equal (so sum(list) must be equal to the number of zeroes).
Print the length of the list.
Repeat to print the actual list.
This will not work well for a large number of zeroes.
import itertools
num_zeroes = 2
base_tuple = (0, 1) # * num_zeroes
perms = itertools.product(base_tuple, repeat=2 * num_zeroes)
p2 = itertools.filterfalse(lambda x: sum(x) != num_zeroes, perms)
print(len(list(p2)))
# 6
perms = itertools.product(base_tuple, repeat=2 * num_zeroes)
p2 = itertools.filterfalse(lambda x: sum(x) != num_zeroes, perms)
print(list(p2))
# [(0, 0, 1, 1), (0, 1, 0, 1), (0, 1, 1, 0),
# (1, 0, 0, 1), (1, 0, 1, 0), (1, 1, 0, 0)]
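As an aside (a sketch of my own, not part of the answer above): for larger lengths you can avoid generating and filtering all 2**length sequences by choosing the positions of the ones directly with itertools.combinations, which enumerates only the C(length, num_ones) valid rows:
from itertools import combinations

def zero_one_lists(length, num_ones):
    # pick which positions hold a 1; every other position stays 0
    for ones_at in combinations(range(length), num_ones):
        row = [0] * length
        for pos in ones_at:
            row[pos] = 1
        yield row

print(list(zero_one_lists(4, 2)))
# [[1, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1], [0, 1, 1, 0], [0, 1, 0, 1], [0, 0, 1, 1]]
Note that for a length of 106 the number of valid rows is itself astronomically large, so you still cannot materialise them all in a list; you can only iterate over them.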
There's actually a very nice recursive solution to the problem of generating all binary strings of length n with exactly k bits set. There are of course n!/(k!(n-k)!) such strings.
Let's assume we have a function binaryNK(n, k) which returns the required set. The key observation is that this set can be partitioned into two subsets:
1. those with a leading 1 followed by each member of binaryNK(n-1, k-1)
2. those with a leading 0 followed by each member of binaryNK(n-1, k)
Of course, case 1 is only valid when k > 0, and case 2 is only valid when n > k. The terminating condition is n == 0, at which point we have a solution.
Here's some simple code to illustrate:
def binaryNK(s, n, k):
    if n == 0: print(s)
    if k > 0: binaryNK(s+"1", n-1, k-1)
    if n > k: binaryNK(s+"0", n-1, k)

binaryNK("", 5, 2)
Output:
11000
10100
10010
10001
01100
01010
01001
00110
00101
00011
There is an issue that I am trying to solve which requires me to generate the indices for an n-dimensional list. E.g. [5, 4, 3] is a 3-dimensional list, so valid indices are [0, 0, 0], [0, 0, 1], [0, 1, 0] ... [2, 2, 1] ... [4, 3, 2]. The best I could come up with is a recursive algorithm, but this isn't constant space:
def f1(dims):
    def recur(res, lst, depth, dims):
        if depth == len(dims):
            res.append(lst[::])
            return
        curr = dims[depth]
        for i in range(curr):
            lst[depth] = i
            recur(res, lst, depth + 1, dims)
    res = []
    lst = [0] * len(dims)
    recur(res, lst, 0, dims)
    return res
The dimensions can be any number, i.e. 4D, 5D, 15D, etc. Each time it would be given in the form of a list, e.g. 5D would be [3,2,1,5,2], and I would need to generate all the valid indices for these while using constant space (just while loops and index processing). How would I go about generating these efficiently without the help of any built-in Python functions (just while, for loops, etc.)?
This is a working solution in constant space (a single loop variable i, a vector idx of which modified copies are being yielded, and a single temporary length variable n to avoid calling len() on every iteration):
def all_indices(dimensions):
    n = len(dimensions)
    idx = [0] * n
    while True:
        for i in range(n):
            yield tuple(idx)
            if idx[i] + 1 < dimensions[i]:
                idx[i] += 1
                break
            else:
                idx[i] = 0
        if not any(idx):
            break

print(list(all_indices([3, 2, 1])))
Result:
[(0, 0, 0), (1, 0, 0), (2, 0, 0), (0, 0, 0), (0, 1, 0), (1, 1, 0), (2, 1, 0), (0, 1, 0), (0, 0, 0)]
As pointed out in the comments, there are duplicates there, which is a bit sloppy; this is cleaner:
def all_indices(dimensions):
    n = len(dimensions)
    idx = [0] * n
    yield tuple(idx)  # yield the initial 'all zeroes' state
    while True:
        for i in range(n):
            if idx[i] + 1 < dimensions[i]:
                idx[i] += 1
                yield tuple(idx)  # yield new states
                break
            else:
                idx[i] = 0  # no yield, repeated state
        if not any(idx):
            break

print(list(all_indices([3, 2, 1])))
Alternatively, you could yield before the break instead of at the start of the loop, but I feel having the 'all zeroes' at the start looks cleaner.
The break is there to force a depth first on running through the indices, which ensures the loop never reaches 'all zeroes' again before having passed all possibilities. Try removing the break and then passing in something like [2, 1, 2] and you'll find it is missing a result.
I think a break is actually the 'clean' way to do it, since it allows using a simple for instead of using a while with a more complicated condition and a separate increment statement. You could do that though:
def all_indices3(dimensions):
    n = len(dimensions)
    idx = [1] + [0] * (n - 1)
    yield tuple([0] * n)
    while any(idx):
        yield tuple(idx)
        i = 0
        while i < n and idx[i] + 1 == dimensions[i]:
            idx[i] = 0
            i += 1 % n
        if i < n:
            idx[i] += 1
This has the same result, but only uses while and if, and yields the results in the same order.
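For example (a quick check of my own, not from the answer above), running it on the [2, 1, 2] case mentioned earlier yields each of the 2*1*2 = 4 index tuples exactly once:
print(list(all_indices3([2, 1, 2])))
# [(0, 0, 0), (1, 0, 0), (0, 0, 1), (1, 0, 1)]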
I want to create an infinite loop that counts up and down from 0 to 100 to 0 (and so on) and only stops when some convergence criterion inside the loop is met, so basically something like this:
for i in range(0, infinity):
    for j in range(0, 100, 1):
        print(j) # (in my case 100 lines of code)
    for j in range(100, 0, -1):
        print(j) # (same 100 lines of code as above)
Is there any way to merge the two for loops over j into one so that I don't have write out the same code inside the loops twice?
Use the chain function from itertools:
import itertools
for i in range(0, infinity):
    for j in itertools.chain(range(0, 100, 1), range(100, 0, -1)):
        print(j) # (in my case 100 lines of code)
As suggested by @chepner, you can use itertools.cycle() for the infinite loop:
from itertools import cycle, chain

for i in cycle(chain(range(0, 100, 1), range(100, 0, -1))):
    ....
As well as the other answers you can use a bit of maths:
while True:
    for i in range(200):
        if i > 100:
            i = 200 - i
Here's yet another possibility:
while notConverged:
    for i in xrange(-100, 101):
        print 100 - abs(i)
If you've got a repeated set of code, use a function to save space and effort:
def function(x, y, z, num_from_for_loop):
    # 100 lines of code

while not condition:
    for i in range(1, 101):
        if condition:
            break
        function(x, y, z, i)
    for i in range(100, 0, -1):
        if condition:
            break
        function(x, y, z, i)
You could even use a while True
If you're using Python 3.5+, you can use generalized unpacking:
for j in (*range(0, 100, 1), *range(100, 0, -1)):
or prior to Python 3.5, you can use itertools.chain:
from itertools import chain
...
for j in chain(range(0, 100, 1), range(100, 0, -1)):
up = True # since we want to go from 0 to 100 first
while True: # for infinite loop
    # For up == True we will print 0-->100 (0,100,1)
    # For up == False we will print 100-->0 (100,0,-1)
    start, stop, step = (0, 100, 1) if up else (100, 0, -1)
    for i in range(start, stop, step):
        print(i)
    up = not up # if we have just printed 0-->100 (ie up == True), we want to print 100-->0 next, so flip up
    # up will help toggle between 0-->100 and 100-->0
def up_down(lowest_value, highest_value):
    current = lowest_value
    delta = 1
    while True: # Begin infinite loop
        yield current
        current += delta
        if current <= lowest_value or current >= highest_value:
            delta *= -1 # Turn around when either limit is hit
This defines a generator, which will continue to yield values for as long as you need. For example:
>>> u = up_down(0, 10)
>>> count = 0
>>> for j in u:
        print(j) # for demonstration purposes
        count += 1 # your other 100 lines of code here
        if count >= 25: # your ending condition here
            break
0
1
2
3
4
5
6
7
8
9
10
9
8
7
6
5
4
3
2
1
0
1
2
3
4
I became curious whether it's possible to implement this kind of triangle oscillator without conditionals and enumerations. Well, one option is the following:
def oscillator(magnitude):
    i = 0
    x = y = -1
    double_magnitude = magnitude + magnitude
    while True:
        yield i
        x = (x + 1) * (1 - (x // (double_magnitude - 1)))  # instead of (x + 1) % double_magnitude
        y = (y + 1) * (1 - (y // (magnitude - 1)))  # instead of (y + 1) % magnitude
        difference = x - y  # difference ∈ {0, magnitude}
        derivative = (-1 * (difference > 0) + 1 * (difference == 0))
        i += derivative
The idea behind this is to take 2 sawtooth waves with different periods and subtract one from another. The result will be a square wave with values in {0, magnitude}. Then we just substitute {0, magnitude} with {+1, -1} respectively to get derivative values for our target signal.
Let's look at example with magnitude = 5:
o = oscillator(5)
[next(o) for _ in range(21)]
This outputs [0, 1, 2, 3, 4, 5, 4, 3, 2, 1, 0, 1, 2, 3, 4, 5, 4, 3, 2, 1, 0].
If abs() is allowed, it can be used for simplicity. For example, the following code gives the same output as above:
[abs(5 - ((x + 5) % 10)) for x in range(21)]
This is more of a partial answer than a direct answer to your question, but you can also use the notion of trigonometric functions and their oscillation to imitate a 'back and forth' loop.
If we have a cos function with an amplitude of 100, shifted left and upwards so that f(0) = 0 and 0 <= f(x) <= 100, we then have the formula f(x) = 50(cos(x-pi)+1) (a plot of the graph may be found here). The range is what you require, and oscillation occurs so there's no need to negate any values.
>>> from math import cos, pi
>>> f = lambda x: 50*(cos(x-pi)+1)
>>> f(0)
0.0
>>> f(pi/2)
50.0
>>> f(pi)
100.0
>>> f(3*pi/2)
50.0
>>> f(2*pi)
0.0
The issue of course comes in that the function doesn't give integer values so easily, thus it's not that helpful - but this may be useful for future readers where trigonometric functions might be helpful for their case.
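For instance (a sketch of my own, just to illustrate the caveat), sampling one period at evenly spaced points and rounding does give an integer sweep from 0 up to 100 and back, but the values follow the cosine curve rather than a straight triangle ramp:
from math import cos, pi

f = lambda x: 50 * (cos(x - pi) + 1)

# 200 samples per period; values rise from 0 to 100 and fall back towards 0,
# but the first few stay at 0 because the cosine is flat near its minimum
wave = [round(f(k * pi / 100)) for k in range(200)]
print(wave[0], max(wave), wave[100])  # 0 100 100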
I had a similar problem a while ago where I also wanted to create values in the form of an infinite triangle wave, but wanted to step over some values. I ended up using a generator (and the range function, as others also have been using):
def tri_wave(min, max, step=1):
    while True:
        yield from range(min, max, step)
        yield from range(max, min, -1 * step)
With carefully selected values on min, max and step (i.e. evenly divisible),
for value in tri_wave(0, 8, 2):
    print(value, end=", ")
I get the min and max value only once, which was my goal:
...0, 2, 4, 6, 8, 6, 4, 2, 0, 2, 4, 6, 8, 6, 4...
I was using Python 3.6 at the time.
Here is a simple and straightforward solution that does not require any imports:
index = 0
constant = 100
while True:
    print(index)
    if index == constant:
        step = -1
    elif index == 0:
        step = 1
    index += step
How can I randomly shuffle a list so that none of the elements remains in its original position?
In other words, given a list A with distinct elements, I'd like to generate a permutation B of it so that
this permutation is random
and for each n, a[n] != b[n]
e.g.
a = [1,2,3,4]
b = [4,1,2,3] # good
b = [3,4,1,2] # good
a = [1,2,3,4]
x = [2,4,3,1] # bad
I don't know the proper term for such a permutation (is it "total"?), so I'm having a hard time googling. The correct term appears to be "derangement".
After some research I was able to implement the "early refusal" algorithm as described e.g. in this paper [1]. It goes like this:
import random

def random_derangement(n):
    while True:
        v = [i for i in range(n)]
        for j in range(n - 1, -1, -1):
            p = random.randint(0, j)
            if v[p] == j:
                break
            else:
                v[j], v[p] = v[p], v[j]
        else:
            if v[0] != 0:
                return tuple(v)
The idea is: we keep shuffling the array; once we find that the permutation we're working on is not valid (v[i] == i), we break and start from scratch.
A quick test shows that this algorithm generates all derangements uniformly:
N = 4

# enumerate all derangements for testing
import itertools
counter = {}
for p in itertools.permutations(range(N)):
    if all(p[i] != i for i in p):
        counter[p] = 0

# make M probes for each derangement
M = 5000
for _ in range(M*len(counter)):
    # generate a random derangement
    p = random_derangement(N)
    # is it really?
    assert p in counter
    # ok, record it
    counter[p] += 1

# the distribution looks uniform
for p, c in sorted(counter.items()):
    print p, c
Results:
(1, 0, 3, 2) 4934
(1, 2, 3, 0) 4952
(1, 3, 0, 2) 4980
(2, 0, 3, 1) 5054
(2, 3, 0, 1) 5032
(2, 3, 1, 0) 5053
(3, 0, 1, 2) 4951
(3, 2, 0, 1) 5048
(3, 2, 1, 0) 4996
I choose this algorithm for simplicity, this presentation [2] briefly outlines other ideas.
References:
[1] An analysis of a simple algorithm for random derangements. Merlini, Sprugnoli, Verri. WSPC Proceedings, 2007.
[2] Generating random derangements. Martínez, Panholzer, Prodinger.
Such permutations are called derangements. In practice you can just try random permutations until hitting a derangement; their proportion approaches 1/e as n grows.
As a possible starting point, the Fisher-Yates shuffle goes like this.
def swap(xs, a, b):
    xs[a], xs[b] = xs[b], xs[a]

def permute(xs):
    for a in xrange(len(xs)):
        b = random.choice(xrange(a, len(xs)))
        swap(xs, a, b)
Perhaps this will do the trick?
def derange(xs):
    for a in xrange(len(xs) - 1):
        b = random.choice(xrange(a + 1, len(xs) - 1))
        swap(xs, a, b)
    swap(xs, len(xs) - 1, random.choice(xrange(len(xs) - 1)))
Here's the version described by Vatine:
def derange(xs):
    for a in xrange(1, len(xs)):
        b = random.choice(xrange(0, a))
        swap(xs, a, b)
    return xs
A quick statistical test:
from collections import Counter

def test(n):
    derangements = (tuple(derange(range(n))) for _ in xrange(10000))
    for k, v in Counter(derangements).iteritems():
        print('{} {}'.format(k, v))
test(4):
(1, 3, 0, 2) 1665
(2, 0, 3, 1) 1702
(3, 2, 0, 1) 1636
(1, 2, 3, 0) 1632
(3, 0, 1, 2) 1694
(2, 3, 1, 0) 1671
This does appear uniform over its range, and it has the nice property that each element has an equal chance to appear in each allowed slot.
But unfortunately it doesn't include all of the derangements. There are 9 derangements of size 4. (The formula and an example for n=4 are given on the Wikipedia article).
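A quick check of my own (using the same enumeration idea as the uniformity test further up) confirms that count:
from itertools import permutations

print(sum(1 for p in permutations(range(4)) if all(p[i] != i for i in range(4))))
# 9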
This should work
import random

totalrandom = False
array = [1, 2, 3, 4]
it = 0

while totalrandom == False:
    it += 1
    shuffledArray = sorted(array, key=lambda k: random.random())
    total = 0
    for i in array:
        if array[i-1] != shuffledArray[i-1]: total += 1
    if total == 4:
        totalrandom = True
    if it > 10*len(array):
        print("'Total random' shuffle impossible")
        exit()

print(shuffledArray)
Note the variable it which exits the code if too many iterations are called. This accounts for arrays such as [1, 1, 1] or [3]
EDIT
Turns out that if you're using this with large arrays (bigger than 15 or so), it will be CPU intensive. Using a randomly generated 100 element array and upping it to len(array)**3, it takes my Samsung Galaxy S4 a long time to solve.
EDIT 2
After about 1200 seconds (20 minutes), the program ended saying 'Total Random shuffle impossible'. For large arrays, you need a very large number of permutations... Say len(array)**10 or something.
Code:
import random, time

totalrandom = False
array = []
it = 0

for i in range(1, 100):
    array.append(random.randint(1, 6))

start = time.time()
while totalrandom == False:
    it += 1
    shuffledArray = sorted(array, key=lambda k: random.random())
    total = 0
    for i in array:
        if array[i-1] != shuffledArray[i-1]: total += 1
    if total == 4:
        totalrandom = True
    if it > len(array)**3:
        end = time.time()
        print(end-start)
        print("'Total random' shuffle impossible")
        exit()

end = time.time()
print(end-start)
print(shuffledArray)
Here is a smaller one, with Pythonic syntax:
import random

def derange(s):
    d = s[:]
    while any([a == b for a, b in zip(d, s)]): random.shuffle(d)
    return d
All it does is shuffle the list until there is no element-wise match. Also, be careful that it'll run forever if a list that cannot be deranged is passed. That happens when there are duplicates. To remove duplicates, simply call the function like this: derange(list(set(my_list_to_be_deranged))).
import random
a=[1,2,3,4]
c=[]
i=0
while i < len(a):
    while 1:
        k=random.choice(a)
        #print k,a[i]
        if k==a[i]:
            pass
        else:
            if k not in c:
                if i==len(a)-2:
                    if a[len(a)-1] not in c:
                        if k==a[len(a)-1]:
                            c.append(k)
                            break
                    else:
                        c.append(k)
                        break
                else:
                    c.append(k)
                    break
    i=i+1
print c
A quick way is to keep shuffling your list until you are left with a list that satisfies your condition.
import random
import copy

def is_derangement(l_original, l_proposal):
    return all([l_original[i] != item for i, item in enumerate(l_proposal)])

l_original = [1, 2, 3, 4, 5]
l_proposal = copy.copy(l_original)

while not is_derangement(l_original, l_proposal):
    random.shuffle(l_proposal)

print(l_proposal)
Hi,
I'm trying to find a general expression to obtain the exponents of a multivariate polynomial of order order and with n_variables variables, like the one presented in this reference in equation (3).
Here is my current code, which uses an itertools.product generator.
import itertools

def generalized_taylor_expansion_exponents( order, n_variables ):
    """
    Find the exponents of a multivariate polynomial expression of order
    `order` and `n_variable` number of variables.
    """
    exps = (p for p in itertools.product(range(order+1), repeat=n_variables) if sum(p) <= order)
    # discard the first element, which is all zeros..
    exps.next()
    return exps
The desired output is this:
for i in generalized_taylor_expansion_exponents(order=3, n_variables=3):
    print i
(0, 0, 1)
(0, 0, 2)
(0, 0, 3)
(0, 1, 0)
(0, 1, 1)
(0, 1, 2)
(0, 2, 0)
(0, 2, 1)
(0, 3, 0)
(1, 0, 0)
(1, 0, 1)
(1, 0, 2)
(1, 1, 0)
(1, 1, 1)
(1, 2, 0)
(2, 0, 0)
(2, 0, 1)
(2, 1, 0)
(3, 0, 0)
Actually this code executes fast, because only the generator object is created. If I want to fill a list with values from this generator, execution really starts to be slow, mainly because of the high number of calls to sum. Typical values for order and n_variables are 5 and 10, respectively.
How can I significantly improve execution speed?
Thanks for any help.
Davide Lasagna
Actually your biggest performance issue is that most of the tuples you're generating are too big and need to be thrown away. The following should generate exactly the tuples you want.
def generalized_taylor_expansion_exponents( order, n_variables ):
    """
    Find the exponents of a multivariate polynomial expression of order
    `order` and `n_variable` number of variables.
    """
    pattern = [0] * n_variables
    for current_sum in range(1, order+1):
        pattern[0] = current_sum
        yield tuple(pattern)
        while pattern[-1] < current_sum:
            for i in range(2, n_variables + 1):
                if 0 < pattern[n_variables - i]:
                    pattern[n_variables - i] -= 1
                    if 2 < i:
                        pattern[n_variables - i + 1] = 1 + pattern[-1]
                        pattern[-1] = 0
                    else:
                        pattern[-1] += 1
                    break
            yield tuple(pattern)
        pattern[-1] = 0
I would try writing it recursively so as to generate only the desired elements:
def _gtee_helper(order, n_variables):
    if n_variables == 0:
        yield ()
        return
    for i in range(order + 1):
        for result in _gtee_helper(order - i, n_variables - 1):
            yield (i,) + result

def generalized_taylor_expansion_exponents(order, n_variables):
    """
    Find the exponents of a multivariate polynomial expression of order
    `order` and `n_variable` number of variables.
    """
    result = _gtee_helper(order, n_variables)
    result.next() # discard the first element, which is all zeros
    return result