I am trying to find the number of ways to construct an array such that consecutive positions contain different values.
Specifically, I need to construct an array with n elements such that each element is between 1 and k, inclusive. I also want the first and last elements of the array to be 1 and x, respectively.
Here is what I tried:
def countArray(n, k, x):
    # Return the number of ways to fill in the array.
    if x > k:
        return 0
    if x == 1:
        return 0
    def fact(n):
        if n == 0:
            return 1
        fact_range = n + 1
        T = [1 for i in range(fact_range)]
        for i in range(1, fact_range):
            T[i] = i * T[i-1]
        return T[fact_range-1]
    ways = fact(k) / (fact(n-2) * fact(k-(n-2)))
    return int(ways)
In short, I computed C(k, n-2) (k choose n-2) to find the number of ways. How could I solve this?
It passes one of the base cases, countArray(4, 3, 2), but fails the other 16 cases.
Let X(n) be the number of ways of constructing an array of length n, starting with 1 and ending in x (and not repeating any numbers). Let Y(n) be the number of ways of constructing an array of length n, starting with 1 and NOT ending in x (and not repeating any numbers).
Then there are these recurrence relations (for n >= 1):
X(n+1) = Y(n)
Y(n+1) = X(n)*(k-1) + Y(n)*(k-2)
In words: If you want an array of length n+1 ending in x, then you need an array of length n not ending in x. And if you want an array of length n+1 not ending in x, then you can either add any of the k-1 symbols to an array of length n ending in x, or you can take an array of length n not ending in x, and add any of the k-2 symbols that aren't x and don't repeat the last value.
For the base case, n=1: if x is 1 then X(1)=1 and Y(1)=0; otherwise X(1)=0 and Y(1)=1.
This gives you an O(n)-time method of computing the result.
def ways(n, k, x):
    M = 10**9 + 7
    wx = (x == 1)
    wnx = (x != 1)
    for _ in range(n-1):
        wx, wnx = wnx, wx * (k-1) + wnx * (k-2)
        wnx = wnx % M
    return wx
print(ways(100, 5, 2))
In principle you can reduce this to O(log n) by expressing the recurrence relations as a matrix and computing the matrix power (mod M), but it's probably not necessary for the question.
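For completeness, here is a minimal sketch of that matrix-power idea (my addition, not part of the original answer). It assumes the same recurrences and modulus as above; the state vector is (X(n), Y(n)) and one step multiplies by the matrix [[0, 1], [k-1, k-2]] (mod M):

M = 10**9 + 7

def mat_mul(A, B):
    # 2x2 matrix product mod M.
    return [[sum(a * b for a, b in zip(row, col)) % M for col in zip(*B)] for row in A]

def mat_pow(A, e):
    # Binary exponentiation of a 2x2 matrix mod M.
    R = [[1, 0], [0, 1]]
    while e:
        if e & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        e >>= 1
    return R

def ways_log(n, k, x):
    X1, Y1 = (1, 0) if x == 1 else (0, 1)
    P = mat_pow([[0, 1], [k - 1, k - 2]], n - 1)
    # (X(n), Y(n)) = P applied to (X(1), Y(1)); we only need X(n).
    return (P[0][0] * X1 + P[0][1] * Y1) % M

print(ways_log(100, 5, 2))  # should agree with ways(100, 5, 2) above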
[Additional working]
We have the recurrence relations:
X(n+1) = Y(n)
Y(n+1) = X(n)*(k-1) + Y(n)*(k-2)
Using the first, we can replace the Y(_) in the second with X(_+1) to reduce it down to a single variable. Then:
X(n+2) = X(n)*(k-1) + X(n+1)*(k-2)
Using standard techniques, we can solve this linear recurrence relation exactly.
In the case x!=1, we have:
X(n) = ((k-1)^(n-1) + (-1)^n) / k
And in the case x=1, we have:
X(n) = ((k-1)^(n-1) + (1-k)(-1)^n) / k
We can compute these mod M using Fermat's little theorem because M is prime. So 1/k = k^(M-2) mod M.
Thus we have (with a little bit of optimization) this short program that solves the problem and runs in O(log n) time:
M = 10**9 + 7

def ways2(n, k, x):
    S = -1 if n % 2 else 1
    return ((pow(k-1, n-1, M) + S) * pow(k, M-2, M) - S*(x == 1)) % M
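A quick cross-check (my addition) that the closed form agrees with the O(n) recurrence from earlier, assuming ways and ways2 are both defined as above with M = 10**9 + 7:

for n in range(2, 9):
    for k in range(2, 6):
        for x in range(1, k + 1):
            assert ways(n, k, x) % M == ways2(n, k, x), (n, k, x)
print("closed form agrees with the recurrence on small cases")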
Could you try this DP version? It passed all tests. It's inspired by @PaulHankin's answer and takes a DP approach; I'll run performance tests later to see how it compares for large inputs.
def countArray(n, k, x):
    # Return the number of ways to fill in the array.
    big_mod = 10 ** 9 + 7
    if x == 1:
        dp = [[1], [0]]
    else:
        dp = [[1], [1]]
    for _ in range(n-2):
        dp[0].append(dp[0][-1] * (k - 1) % big_mod)
        dp[1].append((dp[0][-1] - dp[1][-1]) % big_mod)
    return dp[1][-1]
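For example (assuming the function above):

print(countArray(4, 3, 2))    # 3, the sample case from the question
print(countArray(100, 5, 2))  # should agree with ways(100, 5, 2) from the earlier answer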
Using Python, I would like to implement a function that takes a natural number n as input and outputs a list of natural numbers [y1, y2, y3, ...] such that n + y1*y1, n + y2*y2, n + y3*y3, and so forth are each again a square.
What I tried so far is to obtain one y-value using the following function:
def find_square(n: int) -> tuple[int, int]:
    if n % 2 == 1:
        y = (n-1)//2
        x = n + y*y
        return (y, x)
    return None
It works fine; e.g. find_square(13689) gives me a correct solution, y=6844. It would be great to have an algorithm that yields all possible y-values, such as y=44 or y=156.
The simplest slow approach is of course, for a given N, just to iterate over all possible Y and check whether N + Y^2 is a square.
But there is a much faster approach using an integer factorization technique:
Let's notice that to solve the equation N + Y^2 = X^2, that is, to find all integer pairs (X, Y) for a given fixed integer N, we can rewrite this equation as N = X^2 - Y^2 = (X + Y) * (X - Y), which follows from the famous school formula for the difference of squares.
Now let's rename the two factors as A, B, i.e. N = (X + Y) * (X - Y) = A * B, which means that X = (A + B) / 2 and Y = (A - B) / 2.
Notice that A and B should have the same parity, either both odd or both even; otherwise the formulas above would not give whole numbers when dividing by 2.
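Before describing the fast version, here is a tiny illustration of that pairing idea (my own sketch, not the fast code from this answer), using plain trial division on the question's example N = 13689:

def all_y_by_factor_pairs(N):
    # Enumerate factor pairs N = A * B with A >= B and the same parity,
    # then recover X = (A + B) // 2 and Y = (A - B) // 2.
    ys = set()
    B = 1
    while B * B <= N:
        if N % B == 0:
            A = N // B
            if (A - B) % 2 == 0:
                ys.add((A - B) // 2)
        B += 1
    return sorted(ys)

print(all_y_by_factor_pairs(13689))
# [0, 44, 156, 240, 520, 756, 2280, 6844]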
We will factorize N into all possible pairs of factors (A, B) of the same parity. For fast factorization in the code below I used the Pollard Rho algorithm, which is simple to implement yet quite fast. Two extra algorithms are needed as helpers for Pollard Rho: the Fermat primality test (which allows fast checking of whether a number is probably prime) and trial-division factorization (which factors out small factors that could otherwise cause Pollard Rho to fail).
Pollard Rho has time complexity O(N^(1/4)) for a composite number, which is very fast even for 64-bit numbers. Any faster factorization algorithm can be substituted if a bigger search space is needed. The running time of my fast algorithm is dominated by the factorization; the remaining part is blazingly fast, just a few loop iterations with simple formulas.
If your N is itself a square (hence we know its root easily), then Pollard Rho can factor N much faster still, within O(N^(1/8)) time. Even for 128-bit numbers that means very little work, about 2^16 operations, and I hope you're solving your task for numbers smaller than 128 bits.
If you want to process a range of possible N values, then the fastest way to factorize them is to use techniques similar to the Sieve of Eratosthenes: using a set of prime numbers, it computes the factors of every N within some range. Using a sieve for a range of Ns is much faster than factorizing each N separately with Pollard Rho.
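A rough sketch of that sieve idea (my addition; the answer's actual code below sticks to Pollard Rho): precompute the smallest prime factor of every number up to a bound, then factor any N in range by repeated division.

def build_spf(limit):
    # spf[n] = smallest prime factor of n, computed sieve-style.
    spf = list(range(limit + 1))
    for p in range(2, int(limit ** 0.5) + 1):
        if spf[p] == p:                       # p is prime
            for multiple in range(p * p, limit + 1, p):
                if spf[multiple] == multiple:
                    spf[multiple] = p
    return spf

def factor_with_spf(n, spf):
    fs = []
    while n > 1:
        fs.append(spf[n])
        n //= spf[n]
    return fs

spf = build_spf(10**6)
print(factor_with_spf(13689, spf))   # [3, 3, 3, 3, 13, 13]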
After factoring N into pairs (A, B), we compute (X, Y) from (A, B) by the formulas above, and output the resulting Y values as the solutions of the fast algorithm.
The following example code is implemented in pure Python. Of course one can use Numba to speed it up; Numba usually gives a 30-200x speedup, bringing Python to roughly the speed of optimized C++. But I thought the main thing here is to implement the fast algorithm; Numba optimizations can easily be done afterwards.
I added time measurement to the following code. Although it is pure Python, my fast algorithm still achieves about an 8500x speedup compared to the regular brute-force approach for a limit of 1,000,000.
You can change the limit variable to tweak the amount of searched space, or the num_tests variable to tweak the number of tests.
The following code implements both solutions: the fast solution find_fast() described above, plus a very tiny brute-force solution find_slow(), which is very slow as it scans all possible candidates. The slow solution is only used to check correctness in the tests and to measure the speedup.
The code below uses nothing except a few standard Python library modules; no external modules are needed.
Try it online!
def find_slow(N):
    import math
    def is_square(x):
        root = int(math.sqrt(float(x)) + 0.5)
        return root * root == x, root
    l = []
    for y in range(N):
        if is_square(N + y ** 2)[0]:
            l.append(y)
    return l
def find_fast(N):
    import itertools, functools
    Prod = lambda it: functools.reduce(lambda a, b: a * b, it, 1)
    fs = factor(N)
    mfs = {}
    for e in fs:
        mfs[e] = mfs.get(e, 0) + 1
    fs = sorted(mfs.items())
    del mfs
    Ys = set()
    for take_a in itertools.product(*[
            (range(v + 1) if k != 2 else range(1, v)) for k, v in fs]):
        A = Prod([p ** t for (p, _), t in zip(fs, take_a)])
        B = N // A
        assert A * B == N, (N, A, B, take_a)
        if A < B:
            continue
        X = (A + B) // 2
        Y = (A - B) // 2
        assert N + Y ** 2 == X ** 2, (N, A, B, X, Y)
        Ys.add(Y)
    return sorted(Ys)
def trial_div_factor(n, limit = None):
    # https://en.wikipedia.org/wiki/Trial_division
    fs = []
    while n & 1 == 0:
        fs.append(2)
        n >>= 1
    all_checked = False
    for d in range(3, (limit or n) + 1, 2):
        if d * d > n:
            all_checked = True
            break
        while True:
            q, r = divmod(n, d)
            if r != 0:
                break
            fs.append(d)
            n = q
    if n > 1 and all_checked:
        fs.append(n)
        n = 1
    return fs, n
def fermat_prp(n, trials = 32):
    # https://en.wikipedia.org/wiki/Fermat_primality_test
    import random
    if n <= 16:
        return n in (2, 3, 5, 7, 11, 13)
    for i in range(trials):
        if pow(random.randint(2, n - 2), n - 1, n) != 1:
            return False
    return True
def pollard_rho_factor(n):
    # https://en.wikipedia.org/wiki/Pollard%27s_rho_algorithm
    import math, random
    fs, n = trial_div_factor(n, 1 << 7)
    if n <= 1:
        return fs
    if fermat_prp(n):
        return sorted(fs + [n])
    for itry in range(8):
        failed = False
        x = random.randint(2, n - 2)
        for cycle in range(1, 1 << 60):
            y = x
            for i in range(1 << cycle):
                x = (x * x + 1) % n
                d = math.gcd(x - y, n)
                if d == 1:
                    continue
                if d == n:
                    failed = True
                    break
                return sorted(fs + pollard_rho_factor(d) + pollard_rho_factor(n // d))
            if failed:
                break
    assert False, f'Pollard Rho failed! n = {n}'
def factor(N):
    import functools
    Prod = lambda it: functools.reduce(lambda a, b: a * b, it, 1)
    fs = pollard_rho_factor(N)
    assert N == Prod(fs), (N, fs)
    return sorted(fs)
def test():
    import random, time
    limit = 1 << 20
    num_tests = 20
    t0, t1 = 0, 0
    for i in range(num_tests):
        if (round(i / num_tests * 1000)) % 100 == 0 or i + 1 >= num_tests:
            print(f'test {i}, ', end = '', flush = True)
        N = random.randrange(limit)
        tb = time.time()
        r0 = find_slow(N)
        t0 += time.time() - tb
        tb = time.time()
        r1 = find_fast(N)
        t1 += time.time() - tb
        assert r0 == r1, (N, r0, r1, t0, t1)
    print(f'\nTime slow {t0:.05f} sec, fast {t1:.05f} sec, speedup {round(t0 / max(1e-6, t1))} times')

if __name__ == '__main__':
    test()
Output:
test 0, test 2, test 4, test 6, test 8, test 10, test 12, test 14, test 16, test 18, test 19,
Time slow 26.28198 sec, fast 0.00301 sec, speedup 8732 times
For the easiest solution, you can try this:
import math

n = 13689  # or we can ask the user to input a square number
for i in range(1, 9999):
    if math.sqrt(n + i**2).is_integer():
        print(i)
I am trying to define a function which will approximate pi in Python using one of Euler's methods. His formula is as follows:

pi/4 = (3/4) * (5/4) * (7/8) * (11/12) * (13/12) * ...

where each numerator is an odd prime and each denominator is the multiple of 4 closest to the numerator.
My code so far is this:
def pi_euler1(n):
    numerator = list(range(2, n))
    for i in numerator:
        j = 2
        while i * j <= numerator[-1]:
            if i * j in numerator:
                numerator.remove(i * j)
            j += 1
    for k in numerator:
        if (k + 1) % 4 == 0:
            denominator = k + 1
        else:
            denominator = k - 1
        #Because all primes are odd, both numbers inbetween them are divisible by 2,
        #and by extension 1 of the 2 numbers is divisible by 4
        term = numerator / denominator
I know this is wrong, and also incomplete. I'm just not quite sure what the TypeError I'm getting actually means. I'm quite stuck with it; I want to create a list of the terms and then find their product. Am I on the right lines?
Update:
I have worked my way around this, fixing the obvious errors thanks to msconi and Johanc, and now have the following code:
import math

def pi_euler1(n):
    numerator = list(range(2, 13 + math.ceil(n*(math.log(n)+math.log(math.log(n))))))
    denominator = []
    for i in numerator:
        j = 2
        while i * j <= numerator[-1]:
            if (i * j) in numerator:
                numerator.remove(i * j)
            j += 1
    numerator.remove(2)
    for k in numerator:
        if (k + 1) % 4 == 0:
            denominator.append(k+1)
        else:
            denominator.append(k-1)
    a = 1
    for i in range(n):
        a *= numerator[i] / denominator[i]
    return 4*a
This seems to work. When I tried to plot a graph of the errors from pi on a semilog-y scale I was getting a domain error, but I needed to change the upper bound of the range to n+1 because log(0) is undefined. Thank you, guys.
Here is the code with some small modifications to get it working:
import math

def pi_euler1(n):
    lim = n * n + 4
    numerator = list(range(3, lim, 2))
    for i in numerator:
        j = 3
        while i * j <= numerator[-1]:
            if i * j in numerator:
                numerator.remove(i * j)
            j += 2
    euler_product = 1
    for k in numerator[:n]:
        if (k + 1) % 4 == 0:
            denominator = k + 1
        else:
            denominator = k - 1
        factor = k / denominator
        euler_product *= factor
    return euler_product * 4

print(pi_euler1(3))
print(pi_euler1(10000))
print(math.pi)
Output:
3.28125
3.148427801913721
3.141592653589793
Remarks:
You only want the odd primes, so you can start with a list of odd numbers.
j can start with 3 and increment in steps of 2. In fact, j can start at i because all the multiples of i smaller than i*i are already removed earlier.
In general it is very bad practice to remove elements from the list over which you are iterating. See e.g. this post. Internally, Python uses an index into the list over which it iterates. Coincidentally, this is not a problem in this specific case, because only numbers larger than the current one are removed.
Also, removing elements from a very long list is very slow, as each time the complete list needs to be moved to fill the gap. Therefore, it is better to work with two separate lists.
You didn't calculate the resulting product, nor did you return it.
As you notice, this formula converges very slowly.
As mentioned in the comments, the previous version interpreted n as the limit for the highest prime, while in fact n should be the number of primes. I adapted the code to rectify that: the above version uses a crude limit; the version below tries a tighter approximation for the limit.
Here is a reworked version, without removing from the list you're iterating. Instead of removing elements, it just marks them. This is much faster, so a larger n can be used in a reasonable time:
import math

def pi_euler_v3(n):
    if n < 3:
        lim = 6
    else:
        lim = n*n
        while lim / math.log(lim) / 2 > n:
            lim //= 2
    print(n, lim)
    numerator = list(range(3, lim, 2))
    odd_primes = []
    for i in numerator:
        if i is not None:
            odd_primes.append(i)
            if len(odd_primes) >= n:
                break
            j = i
            while i * j < lim:
                numerator[(i*j-3) // 2] = None
                j += 2
    if len(odd_primes) != n:
        print(f"Wrong limit calculation, only {len(odd_primes)} primes instead of {n}")
    euler_product = 1
    for k in odd_primes:
        denominator = k + 1 if k % 4 == 3 else k - 1
        euler_product *= k / denominator
    return euler_product * 4

print(pi_euler_v3(100000))
print(math.pi)
Output:
3.141752253548891
3.141592653589793
In term = numerator / denominator you are dividing a list by a number, which doesn't make sense. Divide k by the denominator in the loop, so that each numerator element is used for one of the equation's factors in turn. Then you can multiply them into a running product, term *= k / denominator, where term is initialized as term = 1 at the beginning.
Another issue is the first loop, which won't give you the first n prime numbers. For example, for n=3, list(range(2 , n)) = [2]. Therefore, the only prime you will get is 2.
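Putting those two fixes together, a minimal sketch might look like this (my own illustration, not the poster's code; the prime generation here is simple trial division, good enough for small n):

import math

def pi_euler_sketch(n):
    # Collect the first n odd primes by trial division against the primes found so far.
    primes = []
    candidate = 3
    while len(primes) < n:
        if all(candidate % p for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 2
    term = 1
    for k in primes:
        denominator = k + 1 if (k + 1) % 4 == 0 else k - 1
        term *= k / denominator
    return 4 * term

print(pi_euler_sketch(10000))  # slowly approaches math.pi
print(math.pi)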
Examples:

1. Input = 4
   Output = 111
   Explanation:
   1 = 1³ (divisors of 1)
   2 = 1³ + 2³ (divisors of 2)
   3 = 1³ + 3³ (divisors of 3)
   4 = 1³ + 2³ + 4³ (divisors of 4)
   ------------------------
   sum = 111 (output)

2. Input = 5
   Output = 237
   Explanation:
   1 = 1³ (divisors of 1)
   2 = 1³ + 2³ (divisors of 2)
   3 = 1³ + 3³ (divisors of 3)
   4 = 1³ + 2³ + 4³ (divisors of 4)
   5 = 1³ + 5³ (divisors of 5)
   -----------------------------
   sum = 237 (output)
x = int(raw_input().strip())
tot = 0
for i in range(1, x+1):
    for j in range(1, i+1):
        if i % j == 0:
            tot += j**3
print tot
Using this code I can find the answer for small numbers, less than one million. But I want to find the answer for very large numbers. Is there an algorithm that can solve it efficiently for large numbers?
Offhand I don't see a slick way to make this truly efficient, but it's easy to make it a whole lot faster. If you view your examples as matrices, you're summing them a row at a time. This requires, for each i, finding all the divisors of i and summing their cubes. In all, this requires a number of operations proportional to x**2.
You can easily cut that to a number of operations proportional to x, by summing the matrix by columns instead. Given an integer j, how many integers in 1..x are divisible by j? That's easy: there are x//j multiples of j in the range, so divisor j contributes j**3 * (x // j) to the grand total.
def better(x):
    return sum(j**3 * (x // j) for j in range(1, x+1))
That runs much faster, but still takes time proportional to x.
There are lower-level tricks you can play to speed that in turn by constant factors, but they still take O(x) time overall. For example, note that x // j == 1 for all j such that x // 2 < j <= x. So about half the terms in the sum can be skipped, replaced by closed-form expressions for a sum of consecutive cubes:
def sum3(x):
    """Return sum(i**3 for i in range(1, x+1))"""
    return (x * (x+1) // 2)**2

def better2(x):
    result = sum(j**3 * (x // j) for j in range(1, x//2 + 1))
    result += sum3(x) - sum3(x//2)
    return result
better2() is about twice as fast as better(), but to get faster than O(x) would require deeper insight.
Quicker
Thinking about this in spare moments, I still don't have a truly clever idea. But the last idea I gave can be carried to a logical conclusion: don't just group together divisors with only one multiple in range, but also those with two multiples in range, and three, and four, and ... That leads to better3() below, which does a number of operations roughly proportional to the square root of x:
def better3(x):
    result = 0
    for i in range(1, x+1):
        q1 = x // i
        # value i has q1 multiples in range
        result += i**3 * q1
        # which values have i multiples?
        q2 = x // (i+1) + 1
        assert x // q1 == i == x // q2
        if i < q2:
            result += i * (sum3(q1) - sum3(q2 - 1))
        if i+1 >= q2:  # this becomes true when i reaches roughly sqrt(x)
            break
    return result
Of course O(sqrt(x)) is an enormous improvement over the original O(x**2), but for very large arguments it's still impractical. For example better3(10**6) appears to complete instantly, but better3(10**12) takes a few seconds, and better3(10**16) is time for a coffee break ;-)
Note: I'm using Python 3. If you're using Python 2, use xrange() instead of range().
One more
better4() has the same O(sqrt(x)) time behavior as better3(), but does the summations in a different order that allows for simpler code and fewer calls to sum3(). For "large" arguments, it's about 50% faster than better3() on my box.
def better4(x):
    result = 0
    for i in range(1, x+1):
        d = x // i
        if d >= i:
            # d is the largest divisor that appears `i` times, and
            # all divisors less than `d` also appear at least that
            # often. Account for one occurrence of each.
            result += sum3(d)
        else:
            i -= 1
            lastd = x // i
            # We already accounted for i occurrences of all divisors
            # < lastd, and all occurrences of divisors >= lastd.
            # Account for the rest.
            result += sum(j**3 * (x // j - i)
                          for j in range(1, lastd))
            break
    return result
It may be possible to do better by extending the algorithm in "A Successive Approximation Algorithm for Computing the Divisor Summatory Function". That takes O(cube_root(x)) time for the possibly simpler problem of summing the number of divisors. But it's much more involved, and I don't care enough about this problem to pursue it myself ;-)
Subtlety
There's a subtlety in the math that's easy to miss, so I'll spell it out, but only as it pertains to better4().
After d = x // i, the comment claims that d is the largest divisor that appears i times. But is that true? The actual number of times d appears is x // d, which we did not compute. How do we know that x // d in fact equals i?
That's the purpose of the if d >= i: guarding that comment. After d = x // i we know that
x == d*i + r
for some integer r satisfying 0 <= r < i. That's essentially what floor division means. But since d >= i is also known (that's what the if test ensures), it must also be the case that 0 <= r < d. And that's how we know x // d is i.
This can break down when d >= i is not true, which is why a different method needs to be used then. For example, if x == 500 and i == 51, d (x // i) is 9, but it's certainly not the case that 9 is the largest divisor that appears 51 times. In fact, 9 appears 500 // 9 == 55 times. While for positive real numbers
d == x/i
if and only if
i == x/d
that's not always so for floor division. But, as above, the first does imply the second if we also know that d >= i.
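A tiny empirical check of that claim (my addition): whenever d = x // i satisfies d >= i, we do get x // d == i.

for x in range(1, 2000):
    for i in range(1, x + 1):
        d = x // i
        if d >= i:
            assert x // d == i, (x, i, d)
print("claim holds for all x < 2000")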
Just for Fun
better5() rewrites better4() for about another 10% speed gain. The real pedagogical point is to show that it's easy to compute all the loop limits in advance. Part of the point of the odd code structure above is that it magically returns 0 for a 0 input without needing to test for that. better5() gives up on that:
def isqrt(n):
    "Return floor(sqrt(n)) for int n > 0."
    g = 1 << ((n.bit_length() + 1) >> 1)
    d = n // g
    while d < g:
        g = (d + g) >> 1
        d = n // g
    return g

def better5(x):
    assert x > 0
    u = isqrt(x)
    v = x // u
    return (sum(map(sum3, (x // d for d in range(1, u+1)))) +
            sum(x // i * i**3 for i in range(1, v)) -
            u * sum3(v-1))
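As a quick consistency check (my addition; it assumes the functions above, including sum3, are defined in the same session), all versions can be compared against a direct double loop on small inputs:

def brute(x):
    # Straight translation of the original row-by-row summation.
    return sum(j**3 for i in range(1, x + 1) for j in range(1, i + 1) if i % j == 0)

for x in range(1, 300):
    assert brute(x) == better(x) == better2(x) == better3(x) == better4(x) == better5(x), x
print("all versions agree for x in 1..299")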
def sum_divisors(n):
    sum = 0
    i = 0
    for i in range(1, n):
        if n % i == 0 and n != 0:
            sum = sum + i
    # Return the sum of all divisors of n, not including n
    return sum

print(sum_divisors(0))
# 0
print(sum_divisors(3))    # Should sum of 1
# 1
print(sum_divisors(36))   # Should sum of 1+2+3+4+6+9+12+18
# 55
print(sum_divisors(102))  # Should be sum of 2+3+6+17+34+51
# 114
I got this code from LeetCode.
class Solution(object):
    def myPow(self, x, n):
        if n == 0:
            return 1
        if n == -1:
            return 1 / x
        return self.myPow(x * x, n / 2) * ([1, x][n % 2])
This code is used to implement pow(x, n), which means x**n in Python.
I want to know why it can implement pow(x, n).
It doesn't seem to make sense to me...
I understand
if n == 0:
and
if n == -1:
But the core code:
self.myPow(x * x, n / 2) * ([1, x][n % 2])
is really hard to understand.
BTW, this code only works on Python 2.7. If you want to test on Python 3, you should change
myPow(x*x, n / 2) * ([1, x][n % 2])
to
myPow(x*x, n // 2) * ([1, x][n % 2])
The recursive function computes a power (most probably an integer power, non-negative or -1) of a number, as you might have expected (something like x = 2.2 and n = 9).
(And this seems to be written for Python 2.x, since n/2 is expected to give an integer result; in Python 3 that would be n//2.)
The initial returns are very straight-forward math.
if n == 0:
    return 1
if n == -1:
    return 1 / x

When the power is 0, you return 1, and when the power is -1, you return 1/x.
Now the next line consists of two elements:
self.myPow(x * x, n/2)
and
[1, x][n%2]
The first one, self.myPow(x * x, n/2), simply halves the power (when it is not 0 or -1) by squaring the base x.
(This is most probably to speed up the calculation by reducing the number of multiplications needed - imagine you have to compute 2^58. With plain multiplication, you would have to multiply the number 58 times. But if you halve the power each time and solve it recursively, you end up with a much smaller number of operations.)
Example:
x^8 = (x^2)^4 = y^4  # thus you reduce the number of operations you need to perform
Here, you pass x^2 as the next argument in the recursion (that is, y) and repeat until the power is 0 or -1.
The next element takes the halved power modulo 2. This is to handle the odd case (that is, when the power n is odd).
[1,x][n%2] #is 1 when n is even, is x when n is odd
If n is odd, then by doing n/2 you lose one x in the process. Thus you have to make up for it by multiplying self.myPow(x * x, n / 2) by that x. But if n is even, you do not lose an x, so you do not need to multiply the result by x, only by 1.
Illustratively:
x^9 = (x^2)^4 * x #take a look the x here
but
x^8 = (x^2)^4 * 1 #take a look the 1 here
Thus, this:
[1, x][n % 2]
is to multiply the previous recursion by either 1 (for even n case) or x (for odd n case) and is equivalent to ternary expression:
1 if n % 2 == 0 else x
This is the divide-and-conquer technique. The implementation above is a fast way of computing exponentiation. At each call, half of the multiplications are eliminated.
Assuming that n is even, x^n can be written as below (if n is odd, it requires one extra multiplication):
x^n = (x^2)^(n/2)
or
x^n = (x^(n/2))^2
The function shown above implements the 1st version. It's easy to implement the 2nd one also (I removed recursion base cases below)
r = self.myPow(x,n/2)
return r * r * ([1, x][n % 2])
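For reference, a self-contained sketch of that second variant with the base cases put back in (my addition, with negative exponents handled by inverting the result):

def my_pow(x, n):
    # Square the half power: x^n = (x^(n//2))^2, times x if n is odd.
    if n == 0:
        return 1
    if n < 0:
        return 1 / my_pow(x, -n)
    r = my_pow(x, n // 2)
    return r * r * (x if n % 2 else 1)

print(my_pow(2.0, 10))   # 1024.0
print(my_pow(2.0, -3))   # 0.125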
The right answer might be below
class Solution:
    def myPow(self, x: float, n: int) -> float:
        if n == 0:
            return 1
        if n > 0:
            return self.myPow(x * x, int(n / 2)) * ([1, x][n % 2])
        else:
            return self.myPow(x * x, int(n / 2)) * ([1, 1/x][n % 2])
I'm having a bit of trouble controlling the results from a data generating algorithm I am working on. Basically it takes values from a list and then lists all the different combinations that reach a specific sum. So far the code works fine (haven't tested scaling it with many variables yet), but I need to allow negative numbers to be included in the list.
The way I think I can solve this problem is to put a collar on the possible results, to prevent infinite results (if apples is 2 and oranges is -1, then for any sum there are infinitely many solutions; but if I put a limit on either one, it cannot go on forever).
So here's super basic code that computes the weight limits:
import math

data = [-2, 10, 5, 50, 20, 25, 40]
target_sum = 100
max_percent = .8  # no value can exceed 80% of total (this is to prevent infinite solutions)

for node in data:
    max_value = abs(math.floor((target_sum * max_percent)/node))
    print node, "'s max value is ", max_value
Here's the code that generates the results (the first function builds a table of what's possible and the second function composes the actual results; details/pseudocode of the algorithm are here: Can brute force algorithms scale?):
from collections import defaultdict

data = [-2, 10, 5, 50, 20, 25, 40]
target_sum = 100

# T[x, i] is True if 'x' can be solved
# by a linear combination of data[:i+1]
T = defaultdict(bool)           # all values are False by default
T[0, 0] = True                  # base case
for i, x in enumerate(data):    # i is index, x is data[i]
    for s in range(target_sum + 1):  # set the range of one higher than sum to include sum itself
        for c in range(s / x + 1):
            if T[s - c * x, i]:
                T[s, i+1] = True

coeff = [0]*len(data)

def RecursivelyListAllThatWork(k, sum):  # Using last k variables, make sum
    # /* Base case: If we've assigned all the variables correctly, list this
    #  * solution.
    #  */
    if k == 0:
        # print what we have so far
        print(' + '.join("%2s*%s" % t for t in zip(coeff, data)))
        return
    x_k = data[k-1]
    # /* Recursive step: Try all coefficients, but only if they work. */
    for c in range(sum // x_k + 1):
        if T[sum - c * x_k, k - 1]:
            # mark the coefficient of x_k to be c
            coeff[k-1] = c
            RecursivelyListAllThatWork(k - 1, sum - c * x_k)
            # unmark the coefficient of x_k
            coeff[k-1] = 0

RecursivelyListAllThatWork(len(data), target_sum)
My problem is, I don't know where/how to integrate my limiting code into the main code in order to restrict results and allow for negative numbers. When I add a negative number to the list, it displays it but does not include it in the output. I think this is because it is not being added to the table (first function), and I'm not sure how to add it (while still keeping the program's structure so I can scale it with more variables).
Thanks in advance and if anything is unclear please let me know.
edit: a bit unrelated (and if it detracts from the question just ignore this), but since you're looking at the code already: is there a way I can utilize both CPUs on my machine with this code? Right now when I run it, it only uses one CPU. I know the technical methods of parallel computing in Python, but I'm not sure how to logically parallelize this algorithm.
You can restrict results by changing both loops over c from
for c in range(s / x + 1):
to
max_value = int(abs((target_sum * max_percent)/x))
for c in range(max_value + 1):
This will ensure that any coefficient in the final answer will be an integer in the range 0 to max_value inclusive.
A simple way of adding negative values is to change the loop over s from
for s in range(target_sum + 1):
to
R=200 # Maximum size of any partial sum
for s in range(-R,R+1):
Note that if you do it this way then your solution will have an additional constraint.
The new constraint is that the absolute value of every partial weighted sum must be <=R.
(You can make R large to avoid this constraint reducing the number of solutions, but this will slow down execution.)
The complete code looks like:
from collections import defaultdict

data = [-2, 10, 5, 50, 20, 25, 40]
target_sum = 100

# T[x, i] is True if 'x' can be solved
# by a linear combination of data[:i+1]
T = defaultdict(bool)           # all values are False by default
T[0, 0] = True                  # base case

R = 200            # Maximum size of any partial sum
max_percent = 0.8  # Maximum weight of any term

for i, x in enumerate(data):    # i is index, x is data[i]
    for s in range(-R, R+1):    # allow partial sums anywhere from -R to R
        max_value = int(abs((target_sum * max_percent)/x))
        for c in range(max_value + 1):
            if T[s - c * x, i]:
                T[s, i+1] = True

coeff = [0]*len(data)

def RecursivelyListAllThatWork(k, sum):  # Using last k variables, make sum
    # /* Base case: If we've assigned all the variables correctly, list this
    #  * solution.
    #  */
    if k == 0:
        # print what we have so far
        print(' + '.join("%2s*%s" % t for t in zip(coeff, data)))
        return
    x_k = data[k-1]
    # /* Recursive step: Try all coefficients, but only if they work. */
    max_value = int(abs((target_sum * max_percent)/x_k))
    for c in range(max_value + 1):
        if T[sum - c * x_k, k - 1]:
            # mark the coefficient of x_k to be c
            coeff[k-1] = c
            RecursivelyListAllThatWork(k - 1, sum - c * x_k)
            # unmark the coefficient of x_k
            coeff[k-1] = 0

RecursivelyListAllThatWork(len(data), target_sum)