I have integer input: 0 < a, K, N < 10^9
I need to find all b numbers that satisfy:
a + b <= N
(a + b) % K = 0
For example: 10 6 40 -> [2, 8, 14, 20, 26]
I tried a simple brute force and it failed (Time Limit Exceeded). Can anyone suggest a better approach? Thanks
a, K, N = [int(x) for x in input().split()]
count = 0
b = 1
while (a + b <= N):
    if ((a + b) % K) == 0:
        count += 1
        print(b, end=" ")
    b += 1
if (count == 0):
    print(-1)
The first condition is trivial in the sense that it just poses an upper limit on b. The second condition can be rephrased using the definition of % as
a + b = P * K
for some arbitrary integer P. From this, it is simple to compute the smallest b by finding the smallest P that gives a non-negative result for P * K - a. In other words,
P * K - a >= 0
P * K >= a
P >= a / K
P = ceil(a / K)
So you have
b0 = ceil(a / K) * K - a
b = range(b0, N - a + 1, K)
range is a lazy sequence, so it won't compute the values up front. You can force that by doing list(b).
At the same time, if you only need the count of elements, range objects will do the math on the limits and step size for you conveniently, all without computing the actual values, so you can just do len(b).
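Putting this together, here is a minimal sketch of the whole approach (my own code, not from the question; it assumes b must be positive, as the brute force implies by starting at b = 1, and prints -1 when there is no valid b):

a, K, N = map(int, input().split())

# smallest non-negative b with (a + b) % K == 0, i.e. ceil(a / K) * K - a
b0 = -(-a // K) * K - a
if b0 == 0:
    b0 = K          # assumption: b = 0 is excluded, matching the brute force

bs = range(b0, N - a + 1, K)   # further solutions are K apart, and a + b <= N
if bs:                         # an empty range is falsy
    print(*bs)
else:
    print(-1)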
To find the list of bs, you can use some maths. First, note that (a + b) % K is equivalent to (a % K + b % K) % K. Also, (a + b) % K == 0 means that a + b is a multiple of K, say m * K. So the smallest value of b is m * K - a for the smallest m that keeps this positive. Once you find that value, you can simply add K repeatedly to find all the other values of b.
b = k - a % k
Example: a = 19, k = 11, b = 11 - 19 % 11 = 11 - 8 = 3
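As a quick check against the sample input from the question (using the lowercase names of this answer; this snippet is mine, not part of the original post):

a, k, n = 10, 6, 40
b = k - a % k                        # smallest positive b, per the formula above
print(list(range(b, n - a + 1, k)))  # [2, 8, 14, 20, 26]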
I am trying to find the number of ways to construct an array such that consecutive positions contain different values.
Specifically, I need to construct an array with n elements such that each element is between 1 and k, inclusive. I also want the first and last elements of the array to be 1 and x.
Here is what I tried:
def countArray(n, k, x):
    # Return the number of ways to fill in the array.
    if x > k:
        return 0
    if x == 1:
        return 0

    def fact(n):
        if n == 0:
            return 1
        fact_range = n + 1
        T = [1 for i in range(fact_range)]
        for i in range(1, fact_range):
            T[i] = i * T[i - 1]
        return T[fact_range - 1]

    ways = fact(k) / (fact(n - 2) * fact(k - (n - 2)))
    return int(ways)
In short, I computed C(k, n-2) to find the number of ways. How could I solve this?
It passes one of the base cases, countArray(4, 3, 2), but fails the other 16 cases.
Let X(n) be the number of ways of constructing an array of length n, starting with 1 and ending in x (and not repeating any numbers). Let Y(n) be the number of ways of constructing an array of length n, starting with 1 and NOT ending in x (and not repeating any numbers).
Then there are these recurrence relations (for n >= 1):
X(n+1) = Y(n)
Y(n+1) = X(n)*(k-1) + Y(n)*(k-2)
In words: if you want an array of length n+1 ending in x, then you need an array of length n not ending in x. And if you want an array of length n+1 not ending in x, then you can either add any of the k-1 symbols other than x to an array of length n ending in x, or you can take an array of length n not ending in x and add any of the k-2 symbols that aren't x and don't repeat the last value.
For the base case n=1: if x is 1 then X(1)=1 and Y(1)=0; otherwise X(1)=0 and Y(1)=1.
This gives you an O(n)-time method of computing the result.
def ways(n, k, x):
    M = 10**9 + 7
    wx = (x == 1)    # X(1): arrays of length 1 ending in x
    wnx = (x != 1)   # Y(1): arrays of length 1 not ending in x
    for _ in range(n - 1):
        wx, wnx = wnx, wx * (k - 1) + wnx * (k - 2)
        wnx = wnx % M
    return wx

print(ways(100, 5, 2))
In principle you can reduce this to O(log n) by expressing the recurrence relations as a matrix and computing the matrix power (mod M), but it's probably not necessary for the question.
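For completeness, here is a hedged sketch of that O(log n) idea, assuming the same recurrence and base case as above; the 2x2 transition matrix and the helper names are mine, not from the answer:

M = 10**9 + 7

def mat_mul(A, B):
    # 2x2 matrix product mod M
    return [[sum(A[i][t] * B[t][j] for t in range(2)) % M for j in range(2)]
            for i in range(2)]

def mat_pow(A, e):
    # fast exponentiation by repeated squaring
    R = [[1, 0], [0, 1]]
    while e:
        if e & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        e >>= 1
    return R

def ways_logn(n, k, x):
    # [X(n+1), Y(n+1)] = T @ [X(n), Y(n)] with T = [[0, 1], [k-1, k-2]]
    T = mat_pow([[0, 1], [k - 1, k - 2]], n - 1)
    x1, y1 = (1, 0) if x == 1 else (0, 1)   # base case X(1), Y(1)
    return (T[0][0] * x1 + T[0][1] * y1) % M

print(ways_logn(100, 5, 2))   # should match ways(100, 5, 2) above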
[Additional working]
We have the recurrence relations:
X(n+1) = Y(n)
Y(n+1) = X(n)*(k-1) + Y(n)*(k-2)
Using the first, we can replace the Y(_) in the second with X(_+1) to reduce it down to a single variable. Then:
X(n+2) = X(n)*(k-1) + X(n+1)*(k-2)
Using standard techniques, we can solve this linear recurrence relation exactly.
In the case x!=1, we have:
X(n) = ((k-1)^(n-1) + (-1)^n) / k
And in the case x=1, we have:
X(n) = ((k-1)^(n-1) + (1-k)(-1)^n) / k
We can compute these mod M using Fermat's little theorem because M is prime. So 1/k = k^(M-2) mod M.
Thus we have (with a little bit of optimization) this short program that solves the problem and runs in O(log n) time:
M = 10**9 + 7

def ways2(n, k, x):
    S = -1 if n % 2 else 1   # S = (-1)^n
    return ((pow(k-1, n-1, M) + S) * pow(k, M-2, M) - S*(x==1)) % M
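As a quick sanity check (not part of the original answer), the closed form should agree with the O(n) version:

print(ways(100, 5, 2))    # O(n) recurrence
print(ways2(100, 5, 2))   # O(log n) closed form; the two outputs should match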
You could try this DP version (it passed all the tests). It's inspired by @PaulHankin's answer and takes a DP approach; I'll benchmark it later to see how it differs for large inputs:
def countArray(n, k, x):
    # Return the number of ways to fill in the array.
    big_mod = 10 ** 9 + 7
    if x == 1:
        dp = [[1], [0]]
    else:
        dp = [[1], [1]]
    for _ in range(n - 2):
        dp[0].append(dp[0][-1] * (k - 1) % big_mod)
        dp[1].append((dp[0][-1] - dp[1][-1]) % big_mod)
    return dp[1][-1]
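A quick check against the sample case from the question (assuming the function above):

print(countArray(4, 3, 2))   # 3, the case the original attempt also handled correctly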
So, I'm trying to fit some pairs of x, y data with a quadratic regression; a sample formula can be found at http://polynomialregression.drque.net/math.html.
Below is my code, which does the regression using that explicit formula and using numpy's built-in functions:
import numpy as np

x = [6.230825, 6.248279, 6.265732]
y = [0.312949, 0.309886, 0.306639472]

toCheck = x[2]

def evaluateValue(coeff, x):
    c, b, a = coeff
    val = np.around(a + b*x + c*x**2, 9)
    act = 0.306639472
    error = np.abs(act - val) * 100 / act
    print("Value = {:.9f} Error = {:.2f}%".format(val, error))

###### Using numpy ######################
coeff = np.polyfit(x, y, 2)
evaluateValue(coeff, toCheck)

################# Using explicit formula
def determinant(a, b, c, d, e, f, g, h, i):
    # the matrix is [[a,b,c],[d,e,f],[g,h,i]]
    return a*(e*i - f*h) - b*(d*i - g*f) + c*(d*h - e*g)

a = b = c = d = e = m = n = p = 0
a = len(x)
for i, j in zip(x, y):
    b += i
    c += i**2
    d += i**3
    e += i**4
    m += j
    n += j*i
    p += j*i**2

det = determinant(a, b, c, b, c, d, c, d, e)
c0 = determinant(m, b, c, n, c, d, p, d, e) / det
c1 = determinant(a, m, c, b, n, d, c, p, e) / det
c2 = determinant(a, b, m, b, c, n, c, d, p) / det
evaluateValue([c2, c1, c0], toCheck)

###### Using another explicit alternative
def determinantAlt(a, b, c, d, e, f, g, h, i):
    return a*e*i - a*f*h - b*d*i + b*g*f + c*d*h - c*e*g  # <- brackets removed

a = b = c = d = e = m = n = p = 0
a = len(x)
for i, j in zip(x, y):
    b += i
    c += i**2
    d += i**3
    e += i**4
    m += j
    n += j*i
    p += j*i**2

det = determinantAlt(a, b, c, b, c, d, c, d, e)
c0 = determinantAlt(m, b, c, n, c, d, p, d, e) / det
c1 = determinantAlt(a, m, c, b, n, d, c, p, e) / det
c2 = determinantAlt(a, b, m, b, c, n, c, d, p) / det
evaluateValue([c2, c1, c0], toCheck)
This code gives the following output:
Value = 0.306639472 Error = 0.00%
Value = 0.308333580 Error = 0.55%
Value = 0.585786477 Error = 91.03%
As you can see, these are different from each other, and the third one is totally wrong. Now my questions are:
1. Why is the explicit formula giving a slightly wrong result, and how can I improve it?
2. How is numpy giving such an accurate result?
3. In the third case, just by expanding the parentheses, how come the result changes so drastically?
So there are a few things that are going on here that are unfortunately plaguing the way you are doing things. Take a look at this code:
for i, j in zip(x, y):
    b += i
    c += i**2
    d += i**3
    e += i**4
    m += j
    n += j*i
    p += j*i**2
You are building features such that the x values are not only squared, but also cubed and raised to the fourth power.
If you print out each of these values before you put them into the 3 x 3 matrix to solve:
In [35]: a = b = c = d = e = m = n = p = 0
    ...: a = len(x)
    ...: for i, j in zip(x, y):
    ...:     b += i
    ...:     c += i**2
    ...:     d += i**3
    ...:     e += i**4
    ...:     m += j
    ...:     n += j*i
    ...:     p += j*i**2
    ...: print(a, b, c, d, e, m, n, p)

3 18.744836 117.12356813829001 731.8283056811686 4572.738547313946 0.9294744720000001 5.807505391292503 36.28641270376207
When dealing with floating-point arithmetic, and especially with small values, the order of operations matters. What's happening here is that, by fluke, the mix of small and large values that have been computed produces a determinant that is very small. Therefore, when you compute the determinant using the factored form and the expanded form, notice how you get slightly different results, but also look at the precision of the values:
In [36]: det = determinant(a,b,c,b,c,d,c,d,e)
In [37]: det
Out[37]: 1.0913403514223319e-10
In [38]: det = determinantAlt(a,b,c,b,c,d,c,d,e)
In [39]: det
Out[39]: 2.3283064365386963e-10
The determinant is on the order of 10^-10! The reason there's a discrepancy is that, with floating-point arithmetic, both determinant methods should theoretically yield the same result, but in reality they give slightly different results due to something called error propagation. Because a floating-point number is represented with a finite number of bits, the order of operations changes how the error propagates, so even though you removed the parentheses and the formulas essentially match, the order of operations used to reach the result is now different. This article is an essential read for any software developer who deals with floating-point arithmetic regularly: What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Therefore, when you solve the system with Cramer's Rule, you inevitably divide by this main determinant. Even though the difference between the two determinants is only on the order of 10^-10, you are dividing by a number of that same tiny magnitude, so that small absolute difference turns into very different coefficients.
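To make that concrete with the two determinant values printed above (a rough illustration of mine, not part of the regression code): dividing the same Cramer's-Rule numerator by each determinant already changes the answer by roughly a factor of two.

det_factored = 1.0913403514223319e-10   # determinant(...) from above
det_expanded = 2.3283064365386963e-10   # determinantAlt(...) from above

numerator = 1.0                         # any fixed numerator from Cramer's Rule
print(numerator / det_factored)         # ~9.2e9
print(numerator / det_expanded)         # ~4.3e9, about a 2x swing in the coefficient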
NumPy doesn't have this problem because it solves the system with least squares and the pseudo-inverse rather than with Cramer's Rule. I would not recommend using Cramer's Rule to find regression coefficients, mostly from experience and because there are more robust ways of doing it.
However, to solve your particular problem, it helps to normalize the data so that its dynamic range is centered at 0. The features you use to construct your coefficient matrix are then more sensible, and the computation has an easier time with the data. In your case, something as simple as subtracting the mean of the x values should work. As such, if you have new data points you want to predict, you must subtract the mean of the x data before doing the prediction.
Therefore, at the beginning of your code, perform mean subtraction and regress on this data. I've shown you where I've modified the code from your source above:
import numpy as np

x = [6.230825, 6.248279, 6.265732]
y = [0.312949, 0.309886, 0.306639472]

# Calculate mean
me = sum(x) / len(x)

# Make new dataset that is mean subtracted
xx = [pt - me for pt in x]

#toCheck = x[2]
# Data point to check is now mean subtracted
toCheck = x[2] - me

def evaluateValue(coeff, x):
    c, b, a = coeff
    val = np.around(a + b*x + c*x**2, 9)
    act = 0.306639472
    error = np.abs(act - val) * 100 / act
    print("Value = {:.9f} Error = {:.2f}%".format(val, error))

###### Using numpy ######################
coeff = np.polyfit(xx, y, 2) # Change
evaluateValue(coeff, toCheck)

################# Using explicit formula
def determinant(a, b, c, d, e, f, g, h, i):
    # the matrix is [[a,b,c],[d,e,f],[g,h,i]]
    return a*(e*i - f*h) - b*(d*i - g*f) + c*(d*h - e*g)

a = b = c = d = e = m = n = p = 0
a = len(x)
for i, j in zip(xx, y): # Change
    b += i
    c += i**2
    d += i**3
    e += i**4
    m += j
    n += j*i
    p += j*i**2

det = determinant(a, b, c, b, c, d, c, d, e)
c0 = determinant(m, b, c, n, c, d, p, d, e) / det
c1 = determinant(a, m, c, b, n, d, c, p, e) / det
c2 = determinant(a, b, m, b, c, n, c, d, p) / det
evaluateValue([c2, c1, c0], toCheck)

###### Using another explicit alternative
def determinantAlt(a, b, c, d, e, f, g, h, i):
    return a*e*i - a*f*h - b*d*i + b*g*f + c*d*h - c*e*g  # <- brackets removed

a = b = c = d = e = m = n = p = 0
a = len(x)
for i, j in zip(xx, y): # Change
    b += i
    c += i**2
    d += i**3
    e += i**4
    m += j
    n += j*i
    p += j*i**2

det = determinantAlt(a, b, c, b, c, d, c, d, e)
c0 = determinantAlt(m, b, c, n, c, d, p, d, e) / det
c1 = determinantAlt(a, m, c, b, n, d, c, p, e) / det
c2 = determinantAlt(a, b, m, b, c, n, c, d, p) / det
evaluateValue([c2, c1, c0], toCheck)
When I run this, we now get:
In [41]: run interp_test
Value = 0.306639472 Error = 0.00%
Value = 0.306639472 Error = 0.00%
Value = 0.306639472 Error = 0.00%
As some final reading for you, this is a similar problem that someone else encountered which I addressed in their question: Fitting a quadratic function in python without numpy polyfit. The summary is that I advised them not to use Cramer's Rule and to use least-squares through the pseudo-inverse. I showed them how to get exactly the same results without using numpy.polyfit. Also, using least-squares generalizes where if you have more than 3 points, you can still fit a quadratic through your points so that the model has the smallest error possible.
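If you want to skip Cramer's Rule entirely, here is a hedged sketch of the least-squares-through-the-pseudo-inverse route mentioned above; the function and variable names are mine, not from the linked answer:

import numpy as np

def quad_fit_pinv(x, y):
    # Design matrix with columns [x^2, x, 1]; solve with the pseudo-inverse
    x = np.asarray(x, dtype=float)
    A = np.column_stack([x**2, x, np.ones_like(x)])
    return np.linalg.pinv(A) @ np.asarray(y, dtype=float)   # [c, b, a]

x = [6.230825, 6.248279, 6.265732]
y = [0.312949, 0.309886, 0.306639472]
c, b, a = quad_fit_pinv(x, y)
print(a + b * x[2] + c * x[2]**2)   # should come out close to 0.306639472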