Please excuse my naivete as I don't have much programming experience. While googling something for an unrelated question, I stumbled upon this:
https://www.geeksforgeeks.org/find-number-of-solutions-of-a-linear-equation-of-n-variables/
I completely understand the first (extremely inefficient) bit of code. But the second:
def countSol(coeff, n, rhs):
    # Create and initialize a table
    # to store results of subproblems
    dp = [0 for i in range(rhs + 1)]
    dp[0] = 1

    # Fill table in bottom-up manner
    for i in range(n):
        for j in range(coeff[i], rhs + 1):
            dp[j] += dp[j - coeff[i]]

    return dp[rhs]
confuses me. My question being: why does this second program count the number of non-negative integer solutions?
I have written out several examples, including the one given in the article, and I understand that it does indeed do this. And I understand how it is populating the list. But I don't understand exactly why this works.
Please excuse what must be, to some, an ignorant question. But I would quite like to understand the logic, as I think it rather clever that such a little snippet is able to answer a question as general as "How many non-negative integer solutions exist?" (for some general equation).
This algorithm is pretty cool and demonstrates the power of looking for a solution from a different perspective.
Let's take an example: 3x + 2y + z = 6, where LHS is the left-hand side and RHS is the right-hand side.
dp[k] will keep track of the number of unique ways to arrive at an RHS value of k by substituting non-negative integer values for the LHS variables.
The i loop iterates over the variables in the LHS. The algorithm begins with setting all the variables to zero. So, the only possible k value is zero, hence
k 0 1 2 3 4 5 6
dp[k] = 1 0 0 0 0 0 0
For i = 0, we will update dp to reflect what happens if x is 1 or 2. We don't care about x > 2 because the solutions are all non-negative and 3x would be too big. The j loop is responsible for updating dp and dp[k] gets incremented by dp[k - 3] because we can arrive at RHS value k by adding one copy of the coefficient 3 to k-3. The result is
k 0 1 2 3 4 5 6
dp[k] = 1 0 0 1 0 0 1
Now the algorithm continues with i = 1, updating dp to reflect all possible RHS values where x is 0, 1, or 2 and y is 0, 1, 2, or 3. This time the j loop increments dp[k] by dp[k-2] because we can arrive at RHS value k by adding one copy of the coefficient 2 to k-2, resulting in
k 0 1 2 3 4 5 6
dp[k] = 1 0 1 1 1 1 2
Finally, the algorithm incorporates z = 1, 2, 3, 4, 5, or 6, resulting in
k 0 1 2 3 4 5 6
dp[k] = 1 1 2 3 4 5 7
In addition to computing the answer in pseudo-polynomial time, dp encodes the answer for every RHS <= the input right hand side.
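If you want to watch the table being built, here is a small variant of countSol (the name count_solutions_trace is just for illustration) that prints dp after each variable is processed:

def count_solutions_trace(coeff, rhs):
    # same bottom-up table as countSol, but printing dp after each coefficient
    dp = [0] * (rhs + 1)
    dp[0] = 1
    for c in coeff:
        for j in range(c, rhs + 1):
            dp[j] += dp[j - c]
        print(c, dp)
    return dp[rhs]

count_solutions_trace([3, 2, 1], 6)
# 3 [1, 0, 0, 1, 0, 0, 1]
# 2 [1, 0, 1, 1, 1, 1, 2]
# 1 [1, 1, 2, 3, 4, 5, 7]

The three printed rows match the tables above, and the final entry dp[6] = 7 is the answer.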
Here is a 5x5 matrix with all cells unknown; it looks something like this:
A1+B1+C1+D1+E1| 1
A2+B2+C2+D2+E2| 0
A3+B3+C3+D3+E3| 1
A4+B4+C4+D4+E4| 3
A5+B5+C5+D5+E5| 2
_______________
2 1 2 1 1
So, the sum of each row can be seen on the right, and the sum of each column can be seen at the bottom. Each cell can only be 0 or 1, and as an example, here is the solution to the specific puzzle I have typed out above:
0+0+1+0+0| 1
0+0+0+0+0| 0
1+0+0+0+0| 1
1+1+0+0+1| 3
0+0+1+1+0| 2
____________
2 1 2 1 1
As you can see, summing the rows and columns gives the results on the right and bottom.
My question: How would you go about entering the original matrix with unknowns and having python iterate each cell with 0 or 1 until the puzzle is complete?
You don't really need a matrix -- just use vectors (tuples) of length 25. They can represent 5x5 matrices according to the following scheme:
0 1 2 3 4
5 6 7 8 9
10 11 12 13 14
15 16 17 18 19
20 21 22 23 24
These are the indices of such tuples. Note that the row and column of an index can be obtained from the function divmod.
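For instance, just to make the indexing concrete:

# index 13 of the flat tuple corresponds to row 2, column 3 of the 5x5 grid
q, r = divmod(13, 5)
print(q, r)   # 2 3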
You can use product from itertools to iterate over the 2**25 possible ways of filling in the matrix.
These ideas lead to the following code:
from itertools import product

# n x n matrices will be represented by tuples of length n**2,
# in row-major order

# the following function calculates row and column sums:
def all_sums(array, n):
    row_sums = [0] * n
    col_sums = [0] * n
    for i, x in enumerate(array):
        q, r = divmod(i, n)
        row_sums[q] += x
        col_sums[r] += x
    return row_sums, col_sums

# in what follows, row_sums, col_sums are lists of target values
def solve_puzzle(row_sums, col_sums):
    n = len(row_sums)
    for p in product(range(2), repeat=n * n):
        if all_sums(p, n) == (row_sums, col_sums):
            return p
    return "no solution"

solution = solve_puzzle([1, 0, 1, 3, 2], [2, 1, 2, 1, 1])
for i in range(0, 25, 5):
    print(solution[i:i + 5])
Output:
(0, 0, 0, 0, 1)
(0, 0, 0, 0, 0)
(0, 0, 0, 1, 0)
(1, 1, 1, 0, 0)
(1, 0, 1, 0, 0)
In this case brute-force was feasible. If you go much beyond 5x5 it would no longer be feasible, and more sophisticated algorithms would be required.
This is a special case of an integer linear programming problem. The 0-1 special case of integer linear programming is unfortunately still NP-complete, though many algorithms exist, including heuristic ones. You can use an existing library to do this for you.
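As a rough sketch (assuming the third-party PuLP package is available; any 0-1 ILP solver would work similarly), the 5x5 puzzle above could be modelled like this:

import pulp  # third-party ILP modelling library

row_sums = [1, 0, 1, 3, 2]
col_sums = [2, 1, 2, 1, 1]
n = len(row_sums)

prob = pulp.LpProblem("binary_matrix", pulp.LpMinimize)  # pure feasibility problem
x = [[pulp.LpVariable(f"x_{i}_{j}", cat="Binary") for j in range(n)]
     for i in range(n)]

for i in range(n):
    prob += pulp.lpSum(x[i][j] for j in range(n)) == row_sums[i]
for j in range(n):
    prob += pulp.lpSum(x[i][j] for i in range(n)) == col_sums[j]

prob.solve()
for i in range(n):
    print([int(pulp.value(x[i][j])) for j in range(n)])

The solver may return a different valid filling than the brute-force search, since the puzzle can have several solutions.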
I tried to sort an array by permuting it with itself
(the array contains all the numbers from 0 to its length - 1)
To test it I used random.shuffle, but it had some unexpected results:
import random
import numpy as np

a = np.array(range(10))
random.shuffle(a)
a = a[a]
a = a[a]
print(a)
# not a sorted array
# [9 5 2 3 1 7 6 8 0 4]
a = np.array([2,1,4,7,6,5,0,3,8,9])
a = a[a]
a = a[a]
print(a)
# [0 1 2 3 4 5 6 7 8 9]
So, for some reason, applying the permutation in the second example, which starts from an unsorted array, returns the sorted array as expected, but the shuffled array doesn't work the same way.
Does anyone know why? Or, if there is an easier way to sort using a permutation or something similar, that would be great.
TL;DR
There is no reason to expect a = a[a] to sort the array. In most cases it won't. In case of a coincidence it might.
What is the operation c = b[a]? or Applying a permutation
When you use an array a obtained by shuffling range(n) as a mask for an array b of the same size n, you are applying a permutation, in the mathematical sense, to the elements of b. For instance:
import numpy as np

a = [2,0,1]
b = np.array(['Alice','Bob','Charlie'])
print(b[a])
# ['Charlie' 'Alice' 'Bob']
In this example, array a represents the permutation (2 0 1), which is a cycle of length 3. Since the length of the cycle is 3, if you apply it three times, you will end up where you started:
a = [2,0,1]
b = np.array(['Alice','Bob','Charlie'])
c = b
for i in range(3):
c = c[a]
print(c)
# ['Charlie' 'Alice' 'Bob']
# ['Bob' 'Charlie' 'Alice']
# ['Alice' 'Bob' 'Charlie']
Note that I used strings for the elements of b to avoid confusing them with indices. Of course, I could have used numbers from range(n):
a = [2,0,1]
b = np.array([0,1,2])
c = b
for i in range(3):
c = c[a]
print(c)
# [2 0 1]
# [1 2 0]
# [0 1 2]
You might notice an interesting, but unsurprising, fact: the first line is equal to a; in other words, the first result of applying a to b is equal to a itself. This is because b was initialised to [0 1 2], which represents the identity permutation id; thus, the permutations that we find by repeatedly applying a to b are:
id == a^0
a
a^2
a^3 == id
Can we always go back where we started? or The rank of a permutation
It is a well-known result of algebra that if you apply the same permutation again and again, you will eventually end up on the identity permutation. In algebraic notations: for every permutation a, there exists an integer k such that a^k == id.
Can we guess the value of k?
The minimum such value of k is called the rank (more commonly, the order) of a permutation.
If a is a cycle, then the minimum possible k is the length of the cycle. In our previous example, a was a cycle of length 3, so it took three applications of a before we found the identity permutation again.
How about a cycle of length 2? A cycle of length 2 is just "swapping two elements". For instance, swapping elements 0 and 1:
a = [1,0,2]
b = np.array([0,1,2])
c = b
for i in range(2):
c = c[a]
print(c)
# [1 0 2]
# [0 1 2]
We swap 0 and 1, then we swap them back.
How about two disjoint cycles? Let's try a cycle of length 3 on the first three elements, simultaneously with swapping the last two elements:
a = [2,0,1,3,4,5,7,6]
b = np.array([0,1,2,3,4,5,6,7])
c = b
for i in range(6):
c = c[a]
print(c)
# [2 0 1 3 4 5 7 6]
# [1 2 0 3 4 5 6 7]
# [0 1 2 3 4 5 7 6]
# [2 0 1 3 4 5 6 7]
# [1 2 0 3 4 5 7 6]
# [0 1 2 3 4 5 6 7]
As you can see by carefully examining the intermediary results, there is a period of length 3 on the first three elements, and a period of length 2 on the last two elements. The overall period is the least common multiple of the two periods, which is 6.
What is k in general? A well-known theorem of algebra states: every permutation can be written as a product of disjoint cycles. The rank of a cycle is the length of the cycle. The rank of a product of disjoint cycles is the least common multiple of the ranks of cycles.
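To make that concrete, here is a small helper (my own sketch, not part of numpy) that computes the rank of a permutation from its cycle decomposition:

from math import gcd

def permutation_rank(perm):
    # perm is a list such that position i maps to perm[i]
    n, seen, rank = len(perm), set(), 1
    for start in range(n):
        if start in seen:
            continue
        # follow the orbit of `start` to get the length of its cycle
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = perm[j]
            length += 1
        rank = rank * length // gcd(rank, length)   # lcm of cycle lengths
    return rank

print(permutation_rank([2, 0, 1, 3, 4, 5, 7, 6]))        # 6
print(permutation_rank([2, 1, 4, 7, 6, 5, 0, 3, 8, 9]))  # 4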
A coincidence in your code: sorting [2,1,4,7,6,5,0,3,8,9]
Let us go back to your python code.
a = np.array([2,1,4,7,6,5,0,3,8,9])
a = a[a]
a = a[a]
print(a)
# [0 1 2 3 4 5 6 7 8 9]
How many times did you apply permutation a? Note that because of the assignment a =, array a changed between the first and the second a = a[a] lines. Let us dispel some confusion by using a different variable name for every different value. Your code is equivalent to:
a = np.array([2,1,4,7,6,5,0,3,8,9])
a2 = a[a]
a4 = a2[a2]
print(a4)
Or equivalently:
a = np.array([2,1,4,7,6,5,0,3,8,9])
a4 = (a[a])[a[a]]
This last line looks a little bit complicated. However, a cool result of algebra is that composition of permutations is associative. You already knew that addition and multiplication were associative: x+(y+z) == (x+y)+z and x(yz) == (xy)z. Well, it turns out that composition of permutations is associative as well! Using numpy's masks, this means that:
a[b[c]] == (a[b])[c]
Thus your python code is equivalent to:
a = np.array([2,1,4,7,6,5,0,3,8,9])
a4 = ((a[a])[a])[a]
print(a4)
Or without the unneeded parentheses:
a = np.array([2,1,4,7,6,5,0,3,8,9])
a4 = a[a][a][a]
print(a4)
Since a4 is the identity permutation, this tells us that the rank of a divides 4. Thus the rank of a is 1, 2 or 4, which means a can be written as a product of disjoint swaps and length-4 cycles (plus fixed points). The only permutation of rank 1 is the identity itself. Permutations of rank 2 are products of disjoint swaps, and we can see that this is not the case for a. Thus the rank of a must be exactly 4.
You can find the cycles by choosing an element, and following its orbit: what values is that element successively transformed into? Here we see that:
0 is transformed into 2; 2 is transformed into 4; 4 is transformed into 6; 6 is transformed into 0;
1 remains untouched;
3 becomes 7; 7 becomes 3;
5 is untouched; 8 and 9 are untouched.
Conclusion: Your numpy array represents the permutation (0 -> 2 -> 4 -> 6 -> 0)(3 <-> 7), and its rank is the least common multiple of 4 and 2, lcm(4,2) == 4.
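A quick numeric check of this conclusion:

import numpy as np

a = np.array([2, 1, 4, 7, 6, 5, 0, 3, 8, 9])
print(np.array_equal(a[a], np.arange(10)))        # False: two applications are not enough
print(np.array_equal(a[a][a][a], np.arange(10)))  # True: the rank is indeed 4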
It took some time, but I figured out a way to do it.
numpy doesn't have this feature, but pandas does.
Using df.reindex I can sort a DataFrame by its index:
import pandas as pd
import numpy as np
train_df = pd.DataFrame(range(10))
train_df = train_df.reindex(np.random.permutation(train_df.index))
print(train_df)  # shuffled DataFrame containing all values up to 9
train_df = train_df.reindex(range(10))
print(train_df)  # sorted DataFrame
I have two equal-sized arrays (array1 and array2) of 0s and 1s. How do I get all the arrays whose bitwise union with array1 results in array2? For example, if array1 = [1, 1, 1] and array2 = [1, 1, 1], the output should be all eight arrays: [0, 0, 0], [1, 0, 0], ..., [1, 1, 1]. Are there efficient solutions, or is brute force the only way?
My try:
I tried to calculate the bitwise difference first, and if any bit is negative then return false (it is not possible to combine the first array with any kind of array to get array2). If all bits are non-negative then... if a bit in the difference is 0 then it can be replaced by either 0 or 1 (this is a wrong assumption, though, and fails for array1 = [0,0], array2 = [0,0]), and if any bit in the difference is 1 then the required array has to have a 1 at that place to make it 1.
Here's how I would go about solving this problem:
First, let's think about this. You need to find all arrays of binary values that, when combined (via some operator) with a known binary value, equal a new binary value. Don't try to solve the problem yet. Assume you need to go from 00 to 11. How many possible answers are there? Assume you need to go from 11 to 11. How many possible answers are there? Can you do any better (in the worst case) than a brute force approach? That'll give you a complexity bound.
With that rough bound in mind, tackle the bits of the question that are a bit curious. Drill down into the question a little more. What is the 'bitwise union operator'? Is it 'and'? Is it 'or'? Is it something more complicated? 'Bitwise union' sounds like B[i] = A[i] OR X[i], but anyone asking that question could mean something else.
Depending on the answer to questions 1 and 2, you have a lot to work with here. I can think of a few different options, but I think from here you can come up with an algorithm.
Once you have a solution, you need to think about 'Can I do a better job here?' A lot of that goes back to your initial impressions about the problem and how it's constructed, and what/how much you think you can optimize.
Note: I will explain the following with an example input:
A = [0 0 1 0 1 1], B = [1 1 1 0 1 1]
Assuming you want to calculate X for the equation A OR X = B, let us see what the options are for each combination of bits in A and B:
A    OR X    = B
--------------------
0      0       0
0      1       1
1     N.A.     0
1    (0,1)     1
If any bit in A is 1, and its corresponding B bit is 0, there are no solutions possible. Return an empty set.
If the corresponding bits in A and B are both 1, the corresponding bit in X does not matter.
Now, see that one solution for X is B itself (provided condition #1 above is satisfied). Hence, let's construct a number start_num = B. This will be one solution, and the other solutions will be constructed from it.
start_num = B = [1 1 1 0 1 1]
The 'choice' bits are those where X can take any value, i.e. those positions where A=1 and B=1. Let us make another number choice = A AND B, so that choice = 1 denotes those positions. Also notice that, if there are k positions where choice = 1, the total number of solutions is 2^k.
choice = A AND B = [0 0 1 0 1 1] ,hence, k = 3
Store these 'choice' positions in an array (of length k), starting from the right (LSB = 0). Let us call this array pos_array.
pos_array = [0 1 3]
Notice that all the 'choice' bits in start_num are set to 1. Hence, all the other solutions will have some (1 <= p <= k) of these bits set to 0. Now that we know which bits are to be changed, we need to make these solutions in an efficient manner.
This can be done by generating the solutions in an order where each solution differs from the previous one in just one position (essentially a Gray-code order), which makes each new solution cheap to compute. For example, if we have two 'choice' bits, the following shows the difference between simply running through all combinations in decreasing order and going through them in a 1-bit-change order:
1-bit-toggle order            decreasing order
--------------------------    --------------------------
1 1   // start                1 1   // start
1 0   // toggle bit 0         1 0   // subtract 1
0 0   // toggle bit 1         0 1   // subtract 1
0 1   // toggle bit 0         0 0   // subtract 1
(We want to exploit the speed of bitwise operations, hence we will use the 1-bit-toggle order).
Now, we will build each solution: (This is not actual C code, just an explanation)
addToSet(start_num);   // add the initial solution to the set
for(i=1; i<2^k; i++)
{
    // find the position of the lowest set bit of i:
    // that tells us which 'choice' bit to toggle (Gray-code order)
    pos = 0;
    count = i;
    while( (count & 1) == 0 )
    {
        count = count>>1;
        pos++;
    }
    toggle(start_num[pos_array[pos]]);  // update start_num by toggling the desired bit
    addToSet(start_num);                // add the updated vector to the set
}
If this code is run on the above example, the following toggle statements will be executed:
toggle(start_num[0])
toggle(start_num[1])
toggle(start_num[0])
toggle(start_num[3])
toggle(start_num[0])
toggle(start_num[1])
toggle(start_num[0])
, which will result in the following additions:
addToSet([1 1 1 0 1 0])
addToSet([1 1 1 0 0 0])
addToSet([1 1 1 0 0 1])
addToSet([1 1 0 0 0 1])
addToSet([1 1 0 0 0 0])
addToSet([1 1 0 0 1 0])
addToSet([1 1 0 0 1 1])
, which, in addition to the already-present initial solution [1 1 1 0 1 1], completes the set.
NOTE: I am not an expert in bitwise operations, among other things. I think there are better ways to write the algorithm, making better use of bit-access pointers and bitwise binary operations (and I will be glad if someone can suggest improvements). What I am proposing with this solution is the general approach to this problem.
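For reference, here is a rough Python sketch of the same approach (the function name or_preimages and its structure are purely illustrative, not from any library):

def or_preimages(A, B):
    # all 0/1 lists X with (A[i] or X[i]) == B[i] for every i,
    # generated in the 1-bit-toggle (Gray-code) order described above
    n = len(A)
    if any(a == 1 and b == 0 for a, b in zip(A, B)):
        return []                     # condition #1 violated: no solutions
    # bit positions (counted from the right, LSB = 0) where A = B = 1
    pos_array = sorted(n - 1 - i for i in range(n) if A[i] == 1 and B[i] == 1)
    k = len(pos_array)
    start = list(B)                   # B itself is always one solution
    solutions = [tuple(start)]
    for i in range(1, 2 ** k):
        pos = (i & -i).bit_length() - 1      # position of the lowest set bit of i
        idx = n - 1 - pos_array[pos]         # convert bit position to list index
        start[idx] ^= 1                      # toggle that 'choice' bit
        solutions.append(tuple(start))
    return solutions

print(or_preimages([0, 0, 1, 0, 1, 1], [1, 1, 1, 0, 1, 1]))   # 2**3 = 8 solutions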
You can construct the digit options for each slot i by evaluating:
for d in (0, 1):
    if (array1[i] or d) == array2[i]:
        digits[i].append(d)
Then you just need to iterate over i.
The objective is to construct a list of lists: [[0,1],[1],[0,1]] showing the valid digits in each slot. Then you can use itertools.product() to construct all of the valid arrays:
arrays = list(itertools.product(*digits))
You can put all this together using list comprehensions and this would result in:
list(it.product(*[[d for d in (0, 1) if (x or d) == y] for x, y in zip(array1, array2)]))
In action:
>>> import itertools as it
>>> a1, a2 = [1,1,1], [1,1,1]
>>> list(it.product(*[[d for d in (0, 1) if (x or d) == y] for x, y in zip(a1, a2)]))
[(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
>>> a1, a2 = [1,0,0], [1,1,1]
>>> list(it.product(*[[d for d in (0, 1) if (x or d) == y] for x, y in zip(a1, a2)]))
[(0, 1, 1), (1, 1, 1)]
>>> a1, a2 = [1,0,0], [0,1,1]
>>> list(it.product(*[[d for d in (0, 1) if (x or d) == y] for x, y in zip(a1, a2)]))
[]
I am researching how Python implements dictionaries. One of the equations in the Python dictionary implementation describes the pseudo-random probing for an empty dictionary slot using the equation
j = ((j*5) + 1) % 2**i
which is explained here.
I have read this question, How are Python's Built In Dictionaries Implemented?, and basically understand how dictionaries are implemented.
What I don't understand is why/how the equation:
j = ((j*5) + 1) % 2**i
cycles through all the remainders modulo 2**i. For instance, if i = 3, for a total starting size of 8, j goes through the cycle:
0 1 6 7 4 5 2 3 0
if the starting size is 16, it would go through the cycle:
0 1 6 15 12 13 2 11 8 9 14 7 4 5 10 3 0
This is very useful for probing all the slots in the dictionary. But why does it work? Why does j = ((j*5)+1) % 2**i work, but not j = ((j*6)+1) % 2**i or j = ((j*3)+1) % 2**i, both of which get stuck in smaller cycles?
I am hoping to get a more intuitive understanding of this than the equation just works and that's why they used it.
This is the same principle that pseudo-random number generators use, as Jasper hinted at, namely linear congruential generators. A linear congruential generator is a sequence that follows the relationship X_(n+1) = (a * X_n + c) mod m. From the wiki page,
The period of a general LCG is at most m, and for some choices of factor a much less than that. The LCG will have a full period for all seed values if and only if:
m and c are relatively prime.
a - 1 is divisible by all prime factors of m.
a - 1 is divisible by 4 if m is divisible by 4.
It's easy to see that 5 is the smallest a (apart from the trivial a = 1) to satisfy these requirements, namely:
2^i and 1 are relatively prime.
4 is divisible by 2.
4 is divisible by 4.
Also interestingly, 5 is not the only number that satisfies these conditions. 9 will also work. Taking m to be 16, using j=(9*j+1)%16 yields
0 1 10 11 4 5 14 15 8 9 2 3 12 13 6 7
The proof for these three conditions can be found in the original Hull-Dobell paper on page 5, along with a bunch of other PRNG-related theorems that also may be of interest.
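As a quick empirical check (a small helper written for this answer, not from CPython), you can count how many distinct slots each multiplier reaches:

def slots_visited(a, m, start=0):
    # number of distinct values of j the recurrence j -> (a*j + 1) % m
    # reaches before it starts repeating
    seen, j = set(), start
    while j not in seen:
        seen.add(j)
        j = (a * j + 1) % m
    return len(seen)

for a in (3, 5, 6, 9):
    print(a, slots_visited(a, 16))
# a = 5 and a = 9 reach all 16 slots; a = 3 and a = 6 repeat earlier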
Question: A set of numbers separated by spaces is passed as input. The program must print the largest snake sequence present in the numbers. A snake sequence is made up of adjacent numbers such that for each number, the number on its right or left is +1 or -1 of its value. If multiple snake sequences of maximum length are possible, print the snake sequence appearing in the natural input order.
Example Input/Output 1:
Input:
9 8 7 5 3 0 1 -2 -3 1 2
Output:
3 2 1 0 1
Example Input/Output 2:
Input:
-5 -4 -3 -1 0 1 4 6 5 4 3 4 3 2 1 0 2 -3 9
Output:
6 5 4 3 4 3 2 1 0 -1 0 1 2
Example Input/Output 3:
Input:
5 6 7 9 8 8
Output:
5 6 7 8 9 8
I have searched online and have only found references for finding a snake sequence when a grid of numbers is given, not an array.
My solution so far:
Create a 2D array where, for each number from the input, one value is the number itself and the second value is the maximum-length sequence that can be generated starting from that number. But this doesn't always generate the maximum-length sequence, and it doesn't work at all when there are two snakes of maximum length.
Assuming that the order in the original set of numbers does not matter, as seems to be the case in your question, this seems to be an instance of the Longest Path Problem, which is NP-hard.
Think of it that way: You can create a graph from your numbers, with edges between all pairs of nodes that have a difference of one. Now, the longest simple (acyclic) path in this graph is your solution. Your first example would correspond to this graph and path. (Note that there are two 1 nodes for the two ones in the input set.)
While this in itself does not solve your problem, it should help you getting started finding an algorithm to solve (or approximate) it, now that you know a better/more common name for the problem.
One algorithm works like this: starting from each of the numbers, determine the "adjacent" numbers and do a sort of depth-first search through the graph to determine the longest path. Remember to temporarily remove the visited nodes from the graph. This has a worst-case complexity of O(2^n) 1), but apparently it's sufficient for your examples.
def longest_snake(numbers, counts, path):
    best = path
    for n in sorted(counts, key=numbers.index):
        if counts[n] > 0 and (path == [] or abs(path[-1] - n) == 1):
            counts[n] -= 1
            res = longest_snake(numbers, counts, path + [n])
            if len(res) > len(best):
                best = res
            counts[n] += 1
    return best
Example:
>>> from collections import Counter
>>> numbers = list(map(int, "9 8 7 5 3 0 1 -2 -3 1 2".split()))
>>> longest_snake(numbers, Counter(numbers), [])
[3, 2, 1, 0, 1]
Note that this algorithm will reliably find a maximum "snake" sequence, using no number more often than allowed. However, it may not find the specific sequence that's expected as the output, i.e. "the snake sequence appearing in the natural input order", whatever that's supposed to mean.
To get closer to the "natural order", you might try the numbers in the same order as they appear in the input (as I did with sorted), but that does not work perfectly, either. Anyway, I'm sure you can figure out the rest by yourself.
1) In this special case, the graph has a branching factor of 2, thus O(2^n); in the more general case, the complexity would be closer to O(n!).