I was writing a program where I need to calculate insanely huge numbers.
k = int(input())
print(int((2**k) * 5 % (10**9 + 7)))
Here, k is of the order of 10^9.
As expected, this was rather slow (taking up to 5 seconds to calculate), whereas my program needs to finish computing in 1 second.
After a little research online I found the built-in function pow(), and wrote:
p = 10**9 + 7
print(int(pow(2, k - 1, p) * 10))
This works fine for small numbers but messes up for large ones. I can understand why: this isn't exactly the quantity I want, since the multiplication by 10 happens after the modular reduction, so the result may no longer be reduced modulo p (for small values of k the modulus never kicks in, so the answer happens to come out right).
I also found libraries like gmpy2 and numpy, but I don't know how to use them since I'm just a beginner with Python.
So how can I write an expression for what I want to calculate that is fast enough and doesn't err at large numbers either?
You can optimize your operation by passing the modulus as the third argument of the built-in pow, multiplying the result by 5, and reducing modulo p once more:
def func(k):
    p = 10**9 + 7
    # Reduce again after multiplying by 5 so the result stays below p.
    return pow(2, k, p) * 5 % p
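For small k the direct computation is still feasible, so you can sanity-check the modular version against it (a quick sketch, assuming func as defined above):
k = 20
p = 10**9 + 7
assert func(k) == (2**k) * 5 % p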
def exponentiation(base,n):
if n == 0:
return 1
if n % 2 == 0:
        return exponentiation(base * base, n // 2)
else:
        return base * exponentiation(base * base, (n - 1) // 2)
if __name__ == '__main__':
print(len(str(exponentiation(2, 66666666))))
For very large integers, the computer becomes quite sluggish at multiplying numbers. I know that 1 gigabyte of RAM can hold an integer as large as 2^8000000000 (eight billion bits), but this program slows down far before that limit is reached.
I wanted to use exponentiation by squaring to improve the rate at which the program does the multiplications, but it still seems as though the program has a problem handling such large integers.
Just use the built-in ** operator for this. It works significantly faster.
big_number_a = 2 ** 66666666
big_number_b = exponentiation(2, 66666666)
big_number_a == big_number_b # True
Also, don't try converting such a huge number to a decimal string with str unless you really have to. That part is super slow.
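A rough way to compare the two yourself (a sketch; absolute numbers depend on your machine, and the exponent is kept small so both runs finish quickly):
import timeit

# exponentiation() is the recursive function from the question.
print(timeit.timeit('2 ** 66666', number=100))
print(timeit.timeit('exponentiation(2, 66666)', globals=globals(), number=100))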
Yes, there is a faster way:
exponentiation = pow
This is about twice as fast as your method, and it works for non-integers as well.
The exponentiation time in your code is negligible, though; most of its time is spent converting the integer to a string. If you want the number of digits an integer n has, use int(math.log10(n)) + 1 instead.
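For instance (a sketch; note the log10 approach can be off by one exactly at powers of ten due to float rounding):
import math

n = 2 ** 66666666
print(int(math.log10(n)) + 1)   # digit count without the slow str(n)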
What is the fastest way to compute e^x, given that x can be a floating-point value?
Right now I use Python's math library to compute it; below is the complete code. The main logic is result = -0.490631 + 0.774275 * math.exp(0.474907 * sum); the rest is the file-handling code that the problem demands.
import math
import sys
def sum_digits(n):
r = 0
while n:
r, n = r + n % 10, n // 10
return r
def _print(string):
fo = open("output.txt", "w+")
fo.write(string)
fo.close()
try:
f = open('input.txt')
except IOError:
_print("error")
sys.exit()
data = f.read()
num = data.split('\n', 1)[0]
try:
val = int(num)
except ValueError:
_print("error")
sys.exit()
sum = sum_digits(val)
f.close()
if (sum == 2):
_print("1")
else:
result = -0.490631 + 0.774275 * math.exp(0.474907 * sum)
_print(str(math.ceil(result)))
The right-hand side of result is the equation of a curve (the solution to a programming problem) that I derived in Wolfram Mathematica using my own data set.
But this doesn't seem to pass the performance bar of the assessment!
I have also tried the Newton-Raphson approach, but convergence for larger x is a problem, and beyond that, calculating the natural log ln(x) is a challenge there again!
I don't have any language constraint, so any solution is acceptable. Also, if Python's math library is the fastest, as some of the comments say, can anyone give insight into the time complexity and execution time of this program, in short, its efficiency?
I don't know if the exponential curve math is accurate in this code, but it certainly isn't the slow point.
First, you read the input data in one read call. The data does have to be read, but that call loads the entire file. Since the next step takes the first line only, readline would be more appropriate. The split itself is at least O(n), where n is the file size, and may process data you then ignore because you only handle one line.
Second, you convert that line into an int. This goes through Python's long-integer support, and the operation can be O(n) or O(n^2). A single-pass algorithm would multiply the accumulated number by 10 for each digit, allocating one or two new (longer) integers each time.
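Conceptually, that single-pass conversion looks like this (an illustration only; Python's real int() is implemented in C):
def parse_int(s):
    n = 0
    for c in s:
        n = n * 10 + int(c)   # each step allocates a new, longer integer
    return n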
Third, sum_digits breaks that long integer back down into digits. It does so using division, which is expensive, and with two separate operations rather than divmod. That's O(n^2), because each division has to process every remaining higher digit for each digit extracted. And it's only needed because of the conversion you just did.
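For reference, a divmod variant of the question's sum_digits, with one division per digit instead of two:
def sum_digits(n):
    r = 0
    while n:
        n, d = divmod(n, 10)   # quotient and remainder from a single division
        r += d
    return r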
Summing the digits found in a string is more easily done with something like sum(int(c) for c in l if c.isdigit()), where l is the input line. It's not particularly fast, as there's quite a bit of overhead in the per-digit conversions, but it does make a single pass with a fairly tight loop; it's somewhere between O(n) and O(n log n), because the running sum itself can grow large.
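As a concrete sketch of that suggestion, applied to the first line of the input file:
with open('input.txt') as f:
    line = f.readline()
# Sum the digits straight from the text; no int() round-trip needed.
digit_sum = sum(int(c) for c in line if c.isdigit())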
As for the unknown exponential curve, the existence of an exception for a low number is concerning. There's likely some other option that's both faster and more accurate if the answer's an integer anyway.
Lastly, you have at least four distinct output data formats: error, 2, 3.0, 3e+20. Do you know which of these is expected? Perhaps you should be using formatted output rather than str to convert your numbers.
One extra note: if the data is really large, processing it in chunks will definitely speed things up (instead of running out of memory, needing to swap, etc.). Since you're computing a digit sum, the space complexity can be reduced from O(n) to O(log n).
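A hypothetical chunked version of the digit sum (the helper name and chunk size are illustrative):
def digit_sum_first_line(path, chunk_size=1 << 16):
    total = 0
    with open(path) as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:                # end of file
                break
            end = chunk.find('\n')       # stop after the first line
            if end != -1:
                chunk = chunk[:end]
            total += sum(int(c) for c in chunk if c.isdigit())
            if end != -1:
                break
    return total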
Q(x)=[Q(x−1)+Q(x−2)]^2
Q(0)=0, Q(1)=1
I need to find Q(29). I wrote the code below in Python, but it takes too long. How can I get the output (any language would be fine)?
Here is the code I wrote:
a=0
b=1
for i in range(28):
c=(a+b)*(a+b)
a=b
b=c
print(b)
I don't think this is a tractable problem with programming. The reason your code is slow is that the numbers involved grow very rapidly, and Python uses arbitrary-precision integers, so it takes its time computing the result.
Try your code with double-precision floats:
a=0.0
b=1.0
for i in range(28):
c=(a+b)*(a+b)
a=b
b=c
print(b)
The answer is inf. This is because the answer is much, much larger than the largest representable double-precision number, which is roughly 10^308. You could try using finite-precision integers, but those have an even smaller representable maximum. Note that using doubles leads to loss of precision, but surely you don't want to know every single digit of your huge number (side note: I happen to know that you do, making your job even harder).
So here's some math background for my skepticism: Your recurrence relation goes
Q[k] = (Q[k-2] + Q[k-1])^2
You can formulate a more tractable sequence from the square root of this sequence:
P[k] = sqrt(Q[k])
P[k] = P[k-2]^2 + P[k-1]^2
If you can solve for P, you'll know Q = P^2.
Now, consider this sequence:
R[k] = R[k-1]^2
Starting from the same initial values, this will always be smaller than P[k], since
P[k] = P[k-2]^2 + P[k-1]^2 >= P[k-1]^2
(but this will be a "pretty close" lower bound as the first term will always be insignificant compared to the second). We can construct this sequence:
R[k] = R[k-1]^2 = R[k-2]^4 = R[k-3]^8 = ... = R[k-m]^(2^m) = R[0]^(2^k)
Since P[1 give or take] starts with value 2, we should consider
R[k] = 2^(2^k)
as a lower bound for P[k], give or take a few exponents of 2. For k=28 this is
P[28] > 2^(2^28) = 2^(268435456) = 10^(log10(2)*2^28) ~ 10^80807124
That's at least 80807124 digits for the final value of P, which is the square root of the number you're looking for. That makes Q[28] larger than 10^1.6e8. If you printed that number into a text file, it would take more than 150 megabytes.
If you imagine you're trying to handle these integers exactly, you'll see why it takes so long, and why you should reconsider your approach. What if you could compute that huge number? What would you do with it? How long would it take python to print that number on your screen? None of this is trivial, so I suggest that you try to solve your problem on paper, or find a way around it.
Note that you can use a symbolic math package such as sympy in python to get a feeling of how hard your problem is:
import sympy as sym
b0 = sym.symbols('b0')
a = 0
b = b0
for k in range(28):
c = (a+b)**2
a = b
b = c
print(c)
This will take a while, but it will fill your screen with the explicit expression for Q[k] with only b0 as parameter. You would "only" have to substitute your values into that monster to obtain the exact result. You could also try sym.simplify on the expression, but I couldn't wait for that to return anything meaningful.
During lunchtime I let your loop run, and it finished. The result has about 49 million digits:
>>> import math
>>> print(math.log10(c))
49287457.71120789
So my lower bound for k=28 is a bit large, probably due to off-by-one errors in the exponent. The memory needed to store this integer is
>>> import sys
>>> sys.getsizeof(c)
21830612
that is roughly 20 MB.
This can be solved with brute force but it is still an interesting problem since it uses two different "slow" operations and there are trade-offs in choosing the correct approach.
There are two places where the native Python implementation of the algorithm is slow: the multiplication of large numbers and the conversion of large numbers to a string.
Python uses the Karatsuba algorithm for multiplication. It has a running time of O(n^1.585) where n is the length of the numbers. It does get slower as the numbers get larger but you can compute Q(29).
The algorithm for converting a Python integer to its decimal representation is much slower: it has a running time of O(n^2). For large numbers, it is much slower than multiplication.
Note: the times for conversion to a string also include the actual calculation time.
On my computer, computing Q(25) requires ~2.5 seconds but conversion to a string requires ~3 minutes 9 seconds. Computing Q(26) requires ~7.5 seconds but conversion to a string requires ~12 minutes 36 seconds. As the size of the number doubles, multiplication time increases by a factor of 3 and the running time of string conversion increases by a factor of 4. The running time of the conversion to string dominates. Computing Q(29) takes about 3 minutes and 20 seconds but conversion to a string will take more than 12 hours (I didn't actually wait that long).
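To reproduce these measurements, a rough sketch (times vary by machine; valid for k >= 1, and on Python 3.11+ the default int-to-str digit limit must be raised first):
import sys
import time

# Python 3.11+ caps huge int-to-str conversions by default; lift the cap.
if hasattr(sys, 'set_int_max_str_digits'):
    sys.set_int_max_str_digits(10**8)

def Q(k):
    a, b = 0, 1
    for _ in range(k - 1):
        a, b = b, (a + b) * (a + b)
    return b

t0 = time.perf_counter()
n = Q(25)
t1 = time.perf_counter()
s = str(n)                      # the slow O(n^2) conversion
t2 = time.perf_counter()
print('compute: %.1fs  to-string: %.1fs  digits: %d' % (t1 - t0, t2 - t1, len(s)))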
One option is the gmpy2 module, which provides access to the very fast GMP library. With gmpy2, Q(26) can be calculated in ~0.2 seconds and converted into a string in ~1.2 seconds. Q(29) can be calculated in ~1.7 seconds and converted into a string in ~15 seconds. Multiplication in GMP is O(n*ln(n)). Conversion to decimal is faster than Python's O(n^2) algorithm but still slower than multiplication.
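A minimal sketch of the gmpy2 route, assuming gmpy2 is installed:
from gmpy2 import mpz

a, b = mpz(0), mpz(1)
for _ in range(28):
    a, b = b, (a + b) * (a + b)
# num_digits(10) avoids a full string conversion; it may overcount by one.
print(b.num_digits(10))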
The fastest option is Python's decimal module. Instead of a radix-2 (binary) internal representation, it uses a radix-10 (actually a power of 10) internal representation. Calculations are slightly slower, but conversion to a string is very fast; it is just O(n). Calculating Q(29) requires ~9.2 seconds, but calculating and conversion together require only ~9.5 seconds. The time for conversion to a string is only ~0.3 seconds.
Here is an example program using decimal. It also sums the individual digits of the final value.
import decimal
decimal.getcontext().prec = 200000000
decimal.getcontext().Emax = 200000000
decimal.getcontext().Emin = -200000000
def sum_of_digits(x):
    return sum(int(t) for t in str(x))
a = decimal.Decimal(0)
b = decimal.Decimal(1)
for i in range(28):
c = (a + b) * (a + b)
a = b
b = c
temp = str(b)
print(i, len(temp), sum_of_digits(temp))
I didn't include the time for converting the millions of digits into strings and adding them in the discussion above. That time should be the same for each version.
This WILL take too long, since it is a kind of geometric progression that tends to infinity.
Example:
a=0
b=1
c=1*1 = 1
a=1
b=1
c=2*2 = 4
a=1
b=4
c=5*5 = 25
a=4
b=25
c= 29*29 = 841
a=25
b=841
...
You could check whether c % 10 == 0 and divide the factor out, then multiply it back in at the end the same number of times, but you would still end up with the same large number. If you really need to do this calculation, try using C++; it should run faster than Python.
Here's your code written in C++
#include <cstdlib>
#include <iostream>
using namespace std;
int main(int argc, char *argv[])
{
long long int a=0;
long long int b=1;
long long int c=0;
for(int i=0;i<28;i++){
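        // Caution: (a+b)*(a+b) overflows a 64-bit long long after only a few
        // iterations, so the true Q(29) needs a big-integer library.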
c=(a+b)*(a+b);
a=b;
b=c;
}
cout << c;
return 0;
}
We recently delved into infinite series in calculus, and that said, I'm having a lot of fun with it. I derived my own inverse-tan infinite series in Python and evaluated it at 1 to get pi/4, then multiplied by 4 to get pi. I know it's not the fastest algorithm, so please let's not discuss my algorithm. What I would like to discuss is how to represent very, very small numbers in Python. What I notice is that as my program iterates the series, it stops somewhere around 20 decimal places (give or take). I tried using the decimal module, and that only pushed it to about 509. I want an (almost) infinite representation.
Is there a way to do such a thing? I reckon no data type can handle such immensity, but if you can show me a way around that, I would appreciate it very much.
Python's decimal module lets you specify the "context", which controls how precise the representation will be.
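For example, a minimal sketch:
from decimal import Decimal, getcontext

getcontext().prec = 1000           # work with 1000 significant digits
print(Decimal(1) / Decimal(7))     # ~1000 digits of 1/7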
I might recommend gmpy2 for this type of thing - you can do the calculation on rational numbers (arbitrary precision) and convert to decimal at the last step.
Here's an example - substitute your own algorithm as needed:
import gmpy2
# See https://gmpy2.readthedocs.org/en/latest/mpfr.html
gmpy2.get_context().precision = 10000
pi = 0
for n in range(1000000):
# Formula from http://en.wikipedia.org/wiki/Calculating_pi#Arctangent
numer = pow(2, n + 1)
denom = gmpy2.bincoef(n + n, n) * (n + n + 1)
frac = gmpy2.mpq(numer, denom)
pi += frac
# Print every 1000 iterations
if n % 1000 == 0:
print(gmpy2.mpfr(pi))
I am messing around with lambda functions, and I understand what I can do with them in a simple fashion, but when I try something more advanced I run into errors and I don't see why.
Here is what I am trying; if you can tell me where I am going wrong, it would be appreciated.
import math
C = lambda n,k: math.factorial(n)/(math.factorial(k))(math.factorial(n-k))
print C(10,5)
I should note that I am running into the errors when trying the code on Codepad; I do not have access to IDLE.
Try this:
from __future__ import division
from math import factorial
C = lambda n, k : factorial(n) / factorial(k) * factorial(n-k)
print C(10,5)
> 3628800.0
You were missing a *, and it's also possible that the division should produce a fractional result, so the old integer / operator won't do. That's why I'm importing the new / operator, which performs true division. (Note that from __future__ imports must come first in the file.)
UPDATE:
Well, after all it seems it's Codepad's fault: it supports Python 2.5.1, and factorial was added in Python 2.6. Just implement your own factorial function and be done with it, or better yet, start using a real Python interpreter.
def factorial(n):
fac = 1
for i in xrange(1, n+1):
fac *= i
return fac
I think you're missing a * between the last two factorial terms. You're getting an error because (math.factorial(k))(math.factorial(n-k)) evaluates math.factorial(k) to an integer and then tries to call it like a function, something like 120(math.factorial(n-k)), which makes no sense.
Presumably the value you wish to compute is “n-choose-k”, the number of combinations of n things taken k at a time. The formula for that is n!/(k! * (n-k)!). When the missing * is added to your calculation, it produces n!/k! * (n-k)!, which equals (n!/k!)*(n-k)!. (Note, k! evenly divides n!.) For example, with n=10 and k=5, C(10,5) should be 3628800/(120*120) = 252, but your calculation would give 3628800/120*120 = 3628800, which is incorrect by a factor of 14400.
You can of course fix the parenthesization:
>>> C = lambda n,k: math.factorial(n)/(math.factorial(k)*math.factorial(n-k))
>>> C(10,5)
252
But note that if math.factorial(j) takes j-1 multiplications to calculate, then C(n,k) takes n-1+k-1+n-k-1+1 = 2*n-2 multiplications and one division. That's about four times as many multiply operations as necessary. The code shown below uses j multiplies and j divides, where j is the smaller of k and n-k, so j is at most n/2. On some machines division is much slower than multiplication, but on most machines j multiplies and j divides will run a lot faster than 2*n-2 multiplications and one division.
More importantly, C(n,k) is far smaller than n!. Computing via the n!/(k!*(n-k)!) formula requires more than 64-bit precision whenever n exceeds 20. For example, C(21,1) returns the value 21L. By contrast, the code below computes up to D(61,30)=232714176627630544 before requiring more than 64 bits to compute D(62,31)=465428353255261088L. (I named the function below “D” instead of “C” to avoid name clash.)
For small computations on big fast machines, the extra multiplies and extra precision requirements are unimportant. However, for big computations on small machines, they become important.
In short, the order of multiplications and divisions in D() keeps the intermediate values as small as possible; the largest values appear in the last pass of the for loop. Also note that in the for loop, i is always an exact divisor of c*j, so no truncation occurs. This is a fairly standard algorithm for computing “n-choose-k”.
def D(n, k):
c, j, k = 1, n, min(k,n-k)
for i in range(1,k+1):
c, j = c*j/i, j-1
return c
Results from interpreter:
>>> D(10,5)
252
>>> D(61,30)
232714176627630544
>>> D(62,31)
465428353255261088L