How do I create a Python function called mySqrt that approximates the square root of a number n using Newton's algorithm? Here's what I tried so far:
    def newguess(x):
        result = x/2
        return result

    def mySqrt(n):
        result = (1/2) * (oldguess + (n/oldguess))
        return result

    v = newguess(45)
    t = mySqrt(65)
    print(t)
I think this is what you are looking for:
    def my_sqrt(n):
        approx = n/2
        closer = (approx + n/approx)/2
        while closer != approx:
            approx = closer
            closer = (approx + n/approx)/2
        return approx
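A quick sanity check (under Python 2, pass a float such as 45.0 so the divisions aren't integer divisions; the printed digits are approximate):

    print(my_sqrt(45))  # ~6.708203932499369, matching math.sqrt(45)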
The Newton method finds an approximated solution r of the equation f(x) = 0 as follows:
[Initialize] Set r to some initial guess. Set epsilon := 0.00001 (precision)
[Iterate] While abs(f(r)) > epsilon Repeat r := r - f(r)/f'(r)
[End] Return r
In step 1 above, epsilon is the precision you want to achieve: the smaller the epsilon (i.e. the higher the precision), the longer your program will take. In step 2, f'(r) stands for the derivative of f at r.
Now, you want to compute sqrt(a) for any value of a >= 0 using the Newton method.
By definition x = sqrt(a) means x^2 = a or x^2 - a = 0. Let f(x) = x^2 - a. Finding a solution r of f(x) = 0 is equivalent to finding r = sqrt(a). Note that in this case we have f'(x) = 2*x.
If we now apply the above algorithm to this case with a/2 as the initial guess (actually anything between 0 and a), we get:
[Initialize] Set r := a/2 and epsilon := 0.000000001
[Iterate] While abs(r^2 - a) > epsilon Repeat r := r - (r^2 - a)/(2*r)
[End] Return r
So the only thing you have to do now is to translate these three simple steps into a Python program.
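For example, a direct translation might look like this (a minimal sketch; the signature and the default epsilon are my own choices):

    def mySqrt(a, epsilon=1e-9):
        """Approximate sqrt(a) with Newton's method on f(r) = r**2 - a."""
        if a == 0:
            return 0.0                     # avoid dividing by an initial guess of 0
        r = a / 2.0                        # [Initialize] anything in (0, a) works
        while abs(r * r - a) > epsilon:    # [Iterate] while |f(r)| > epsilon
            r = r - (r * r - a) / (2 * r)  # r := r - f(r)/f'(r)
        return r                           # [End]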
Here is a solution which uses 50 iterations to approximate the value:
    def mySqrt(n):
        newGuess = n/2
        for i in range(50):
            newGuess = 0.5*(newGuess + (n/newGuess))
        return newGuess
I'm trying to evaluate a Taylor polynomial for the natural logarithm, ln(x), centred at a=1 in Python. I'm using the series given on Wikipedia; however, when I try a simple calculation like ln(2.7), instead of giving me something close to 1 it gives me a gigantic number. Is there something obvious that I'm doing wrong?
    def log(x):
        n = 1000
        s = 0
        for i in range(1, n):
            s += ((-1)**(i+1))*((x-1)**i)/i
        return s
Using the Taylor series ln(x) = (x-1) - (x-1)^2/2 + (x-1)^3/3 - ... this gives a gigantic number for ln(2.7) instead of a value near 1.
EDIT: If anyone stumbles across this, an alternative way to evaluate the natural logarithm of some real number is to use numerical integration (e.g. Riemann sum, midpoint rule, trapezoid rule, Simpson's rule, etc.) to evaluate the integral that is often used to define the natural logarithm: ln(x) = integral from 1 to x of (1/t) dt.
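For instance, here is a minimal midpoint-rule sketch of that integral (the name ln_integral and the subinterval count n are my own choices):

    from math import log  # only used to check the approximation

    def ln_integral(x, n=100000):
        """Approximate ln(x) = integral of 1/t dt from t=1 to t=x (midpoint rule)."""
        h = (x - 1) / n  # signed width of each subinterval
        return sum(h / (1 + (i + 0.5) * h) for i in range(n))

    print(ln_integral(2.7))  # ~0.9932517730
    print(log(2.7))          # 0.9932517730102834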
That series only converges when |x - 1| <= 1 (i.e. for 0 < x <= 2), and 2.7 is outside that range. For larger x you will need a different series.
For example this one (found here):
    def ln(x): return 2*sum(((x-1)/(x+1))**i/i for i in range(1,100,2))
output:
    ln(2.7)       # 0.9932517730102833
    math.log(2.7) # 0.9932517730102834
Note that it takes a lot more than 100 terms to converge as x gets bigger (up to the point where it becomes impractical).
You can compensate for that by adding the logarithms of smaller factors of x:
    def ln(x):
        if x > 2: return ln(x/2) + ln(2)  # ln(x) = ln(x/2 * 2) = ln(x/2) + ln(2)
        return 2*sum(((x-1)/(x+1))**i/i for i in range(1,1000,2))
which is something you can also do in your Taylor-based function to support x > 1:
    def log(x):
        if x > 1: return log(x/2) - log(0.5)  # ln(2) = -ln(1/2)
        n = 1000
        s = 0
        for i in range(1, n):
            s += ((-1)**(i+1))*((x-1)**i)/i
        return s
These series also take more terms to converge when x gets closer to zero, so you may want to work them in the other direction as well, keeping the actual value to compute between 0.5 and 1:
    def log(x):
        if x > 1: return log(x/2) - log(0.5)    # ln(x/2 * 2) = ln(x/2) + ln(2)
        if x < 0.5: return log(2*x) + log(0.5)  # ln(x*2 / 2) = ln(x*2) - ln(2)
        ...
If performance is an issue, you'll want to store ln(2) or log(0.5) somewhere and reuse it instead of computing it on every call. For example:
    ln2 = None

    def ln(x):
        if x <= 2:
            return 2*sum(((x-1)/(x+1))**i/i for i in range(1,10000,2))
        global ln2
        if ln2 is None: ln2 = ln(2)
        n2 = 0
        while x > 2: x, n2 = x/2, n2+1
        return ln2*n2 + ln(x)
The program is correct, but the Mercator series has the following caveat:
The series converges to the natural logarithm (shifted by 1) whenever −1 < x ≤ 1.
In your code the series variable is x - 1, and for ln(2.7) that is 1.7 > 1, so the series diverges and you shouldn't expect a result close to 1.
The Python function math.frexp(x) can be used to advantage here to modify the problem so that the Taylor series works with a value close to one. math.frexp(x) is described as:
Return the mantissa and exponent of x as the pair (m, e). m is a float and e is an integer such that x == m * 2**e exactly. If x is zero, returns (0.0, 0), otherwise 0.5 <= abs(m) < 1. This is used to “pick apart” the internal representation of a float in a portable way.
Using math.frexp(x) should not be regarded as "cheating" because it is presumably implemented just by accessing the bit fields in the underlying binary floating point representation. It isn't absolutely guaranteed that the representation of floats will be IEEE 754 binary64, but as far as I know every platform uses this. sys.float_info can be examined to find out the actual representation details.
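For example:

    import math
    math.frexp(2.7)  # (0.675, 2), since 2.7 == 0.675 * 2**2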
Much like the other answer does, you can use the standard logarithmic identities as follows: let m, e = math.frexp(x). Then log(x) = log(m * 2^e) = log(m) + e * log(2). log(2) can be precomputed to full precision ahead of time and is just a constant in the program. Here is some code illustrating this to compute two similar Taylor series approximations to log(x). The number of terms in each series was determined by trial and error rather than rigorous analysis.
taylor1 implements log(1 + x) = x - (1/2)*x^2 + (1/3)*x^3 - ...
taylor2 implements log(x) = 2 * [t + (1/3)*t^3 + (1/5)*t^5 + ...], where t = (x - 1) / (x + 1).
    import math

    _LOG_OF_2 = 0.69314718055994530941723212145817656807550013436025

    def taylor1(x):
        m, e = math.frexp(x)
        log_of_m = 0
        num_terms = 36
        sign = 1
        m_minus1_power = m - 1
        for k in range(1, num_terms + 1):
            log_of_m += sign * m_minus1_power / k
            sign = -sign
            m_minus1_power *= m - 1
        return log_of_m + e * _LOG_OF_2

    def taylor2(x):
        m, e = math.frexp(x)
        num_terms = 12
        half_log_of_m = 0
        t = (m - 1) / (m + 1)
        t_squared = t * t
        t_power = t
        denominator = 1
        for k in range(num_terms):
            half_log_of_m += t_power / denominator
            denominator += 2
            t_power *= t_squared
        return 2 * half_log_of_m + e * _LOG_OF_2
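A quick check against the standard library (the last digit or two may differ between the approximations and math.log):

    print(taylor1(2.7))   # ~0.9932517730102834
    print(taylor2(2.7))   # ~0.9932517730102834
    print(math.log(2.7))  # 0.9932517730102834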
This seems to work well over most of the domain of log(x), but as x approaches 1 (and log(x) approaches 0) the transformation x = m * 2^e actually produces a less accurate result. So a better algorithm would first check whether x is close to 1, say abs(x - 1) < 0.5, and if so just compute the Taylor series approximation directly on x.
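A minimal sketch of that guard (log_near_one and better_log are hypothetical names; taylor2 is the function above):

    def log_near_one(x):
        """Mercator series on x directly, intended for |x - 1| < 0.5."""
        t = x - 1
        total, term = 0.0, t
        for k in range(1, 40):  # 40 terms is plenty for |t| < 0.5
            total += term / k if k % 2 == 1 else -term / k
            term *= t
        return total

    def better_log(x):
        if abs(x - 1) < 0.5:
            return log_near_one(x)  # skip the m * 2**e transformation near 1
        return taylor2(x)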
My answer just uses the Taylor series for ln(x). I really hope this helps. It is simple and straight to the point.
I am trying to compute square roots with the Newton-Raphson algorithm for random numbers given by this formula:
a = m * 10^c
where m is a random float in the range (0, 1) and c is a random integer in the range (-300, 300).
The code I wrote works perfectly with a root precision of 0.01 and c in the range (-30, 30), but freezes or returns wrong results when I use the c range given in the task.
Here is the code for the Newton function:
    def newton_raphson(a):
        # iterations and results are global lists defined elsewhere in the script
        iterations_count = 0
        x_n_result = a/2
        while abs(x_n_result - a / x_n_result) > 0.01:
            x_n_result = (x_n_result + a/x_n_result)/2
            iterations_count = iterations_count + 1
            if x_n_result*x_n_result == a:
                break
        iterations.append(iterations_count)
        results.append(x_n_result)
        print("Result of function", x_n_result)
        return
And here is the part where the numbers to root are randomized:
    for i in range(0, 100):
        m = random.uniform(0, 1)
        c = random.randint(-30, 30)
        a = m * 10**c
        random_c.append(c)
        numbers.append(a)
        print("Number to root : ", i, "|", a, '\n')
        newton_raphson(a)
The plot of the number of iterations against the value of c is drawn with:
    plt.bar(random_c, iterations, color='red')
The script is supposed to take the root of 100 random numbers and then plot the number of iterations required against the value of c. As I said, the problem appears with the proper range of c values. I believe it has something to do with the range of the variables. Any suggestion on how to solve this?
The first observation is that your logic will get you a square root, not a cubic root.
The second is that your random numbers can contain negative values, which will never converge for a square root.
If you really wanted a cubic root, you could do it like this:
    def cubic(number):
        result = number
        while abs(number/result/result - result) > 0.01:
            result += (number/result/result - result)/2
        return result
You could also approach this in a generic fashion by creating a general-purpose Newton-Raphson function that takes a delta function as a parameter to apply to a number parameter:
    def newtonRaphson(delta, n):
        result = n
        while abs(delta(n, result)) > 0.01:
            result += delta(n, result)/2
        return result

    def cubic(n, r): return n/r/r - r

    def sqrt(n, r): return n/r - r
Then use the newtonRaphson function with your chosen delta function:
    newtonRaphson(sqrt, 25)   # 5.000023178253949
    newtonRaphson(cubic, 125) # 5.003284700817307
So I stumbled upon this script in another thread here, and it returns a negative d value even though my p and q values are both prime. Any reason for this? Possibly just a faulty script?
    def egcd(a, b):
        x,y, u,v = 0,1, 1,0
        while a != 0:
            q, r = b//a, b%a
            m, n = x-u*q, y-v*q
            b,a, x,y, u,v = a,r, u,v, m,n
        gcd = b
        return gcd, x, y

    def main():
        p = 153143042272527868798412612417204434156935146874282990942386694020462861918068684561281763577034706600608387699148071015194725533394126069826857182428660427818277378724977554365910231524827258160904493774748749088477328204812171935987088715261127321911849092207070653272176072509933245978935455542420691737433
        q = 156408916769576372285319235535320446340733908943564048157238512311891352879208957302116527435165097143521156600690562005797819820759620198602417583539668686152735534648541252847927334505648478214810780526425005943955838623325525300844493280040860604499838598837599791480284496210333200247148213274376422459183
        e = 65537
        ct = 313988037963374298820978547334691775209030794488153797919908078268748481143989264914905339615142922814128844328634563572589348152033399603422391976806881268233227257794938078078328711322137471700521343697410517378556947578179313088971194144321604618116160929667545497531855177496472117286033893354292910116962836092382600437895778451279347150269487601855438439995904578842465409043702035314087803621608887259671021452664437398875243519136039772309162874333619819693154364159330510837267059503793075233800618970190874388025990206963764588045741047395830966876247164745591863323438401959588889139372816750244127256609
        # compute n
        n = p * q
        # Compute phi(n)
        phi = (p - 1) * (q - 1)
        # Compute modular inverse of e
        gcd, a, b = egcd(e, phi)
        d = a
        print( "n: " + str(d) );
        # Decrypt ciphertext
        pt = pow(ct,d,n)
        print( "pt: " + str(pt) )

    if __name__ == "__main__":
        main()
This can happen, I'll explain why below, but for practical purposes you'll want to know how to fix it. The answer to that is to add phi to d and use that value instead: everything will work as RSA should.
So why does it happen? The algorithm computes the extended gcd. The result of egcd is a*e + b*phi = gcd, and in the case of RSA, we have gcd = 1 so a*e + b*phi = 1.
If you look at this equation modulo phi (which is the order of the multiplicative group), then a*e == 1 mod phi which is what you need to make RSA work. In fact, by the same congruence, you can add or subtract any multiple of phi to a and the congruence still holds.
Now look at the equation again: a*e + b*phi = 1. We know e and phi are positive integers. You can't have all positive integers in this equation or else no way would it add up to 1 (it would be much larger than 1). So that means either a or b is going to be negative. Sometimes it will be a that is negative, other times it will be b. When it is b, then your a comes out as you would expect: a positive integer that you then assign to the value d. But the other times, you get a negative value for a. We don't want that, so simply add phi to it and make that your value of d.
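In Python the whole fix is one modulo operation, since % with a positive modulus always returns a non-negative result. A small sketch using the egcd above, with tiny illustrative numbers (e = 7, phi = 40):

    gcd, a, b = egcd(7, 40)   # returns (1, -17, 3): 7*(-17) + 40*3 == 1
    d = a % 40                # -17 % 40 == 23, equivalent to adding phi once when a < 0
    assert (d * 7) % 40 == 1  # 23 * 7 == 161 == 4*40 + 1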
I am trying to implement Theil's index (http://en.wikipedia.org/wiki/Theil_index) in Python to measure inequality of revenue in a list.
The formula is basically Shannon's entropy, so it deals with logs. My problem is that I have a few revenues at 0 in my list, and log(0) makes my formula unhappy. I believe adding a tiny float to 0 wouldn't work, since log(tinyFloat) is a huge negative number that would mess my index up.
[EDIT]
Here's a snippet (taken from another, much cleaner, and freely available implementation):
    from math import log, exp  # the snippet assumes these; error_if_not_1 is defined elsewhere in the original

    def error_if_not_in_range01(value):
        if (value <= 0) or (value > 1):
            raise Exception, \
                str(value) + ' is not in [0,1)!'

    def H(x):
        n = len(x)
        entropy = 0.0
        sum = 0.0
        for x_i in x:  # work on all x[i]
            print x_i
            error_if_not_in_range01(x_i)
            sum += x_i
            group_negentropy = x_i*log(x_i)
            entropy += group_negentropy
        error_if_not_1(sum)
        return -entropy

    def T(x):
        print x
        n = len(x)
        maximum_entropy = log(n)
        actual_entropy = H(x)
        redundancy = maximum_entropy - actual_entropy
        inequality = 1 - exp(-redundancy)
        return redundancy, inequality
Is there any way out of this problem?
If I understand you correctly, the formula you are trying to implement is the following:
T = (1/n) * sum over i of (x_i / mean(x)) * ln(x_i / mean(x))
In this case, your problem is calculating the natural logarithm of Xi / mean(X), when Xi = 0.
However, since that has to be multiplied by Xi / mean(X) first, if Xi == 0 the value of ln(Xi / mean(X)) doesn't matter because it will be multiplied by zero. You can treat the value of the formula for that entry as zero, and skip calculating the logarithm entirely.
In the case that you are implementing Shannon's formula directly, the same holds:
H(x) = -sum over i of p_i * ln(p_i) = sum over i of p_i * ln(1/p_i)
In both the first and second form, calculating the log is not necessary if Pi == 0, because whatever value it is, it will have been multiplied by zero.
UPDATE:
Given the code you quoted, you can replace x_i*log(x_i) with a function as follows:
    def Group_negentropy(x_i):
        if x_i == 0:
            return 0
        else:
            return x_i*log(x_i)

    def H(x):
        n = len(x)
        entropy = 0.0
        sum = 0.0
        for x_i in x:  # work on all x[i]
            print x_i
            error_if_not_in_range01(x_i)  # note: this check must also be loosened to allow x_i == 0
            sum += x_i
            group_negentropy = Group_negentropy(x_i)
            entropy += group_negentropy
        error_if_not_1(sum)
        return -entropy
I have got this code to solve Newton's method for a given polynomial and initial guess value. I want to turn it into an iterative process, which is what Newton's method actually is. The program should keep running until the output value x_n becomes constant; that final value of x_n is the actual root. Also, when used in my algorithm, this method should always produce a positive root between 0 and 1. So, would converting a negative output (root) into a positive number make any difference? Thank you.
    import copy

    poly = [[-0.25,3], [0.375,2], [-0.375,1], [-3.1,0]]

    def poly_diff(poly):
        """ Differentiate a polynomial. """
        newlist = copy.deepcopy(poly)
        for term in newlist:
            term[0] *= term[1]
            term[1] -= 1
        return newlist

    def poly_apply(poly, x):
        """ Apply a value to a polynomial. """
        sum = 0.0
        for term in poly:
            sum += term[0] * (x ** term[1])
        return sum

    def poly_root(poly):
        """ Returns a root of the polynomial"""
        poly_d = poly_diff(poly)
        x = float(raw_input("Enter initial guess:"))
        x_n = x - (float(poly_apply(poly, x)) / poly_apply(poly_d, x))
        print x_n

    if __name__ == "__main__":
        poly_root(poly)
First, in poly_diff, you should check to see if the exponent is zero, and if so simply remove that term from the result. Otherwise you will end up with the derivative being undefined at zero.
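A sketch of that fix, keeping the structure of the original poly_diff (it assumes the question's imports and term layout):

    def poly_diff(poly):
        """ Differentiate a polynomial, dropping constant terms. """
        newlist = copy.deepcopy(poly)
        for term in newlist:
            term[0] *= term[1]
            term[1] -= 1
        # a constant [c, 0] became [0, -1]; drop it so x ** -1 never appears
        return [term for term in newlist if term[1] >= 0]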
    def poly_root(poly):
        """ Returns a root of the polynomial"""
        poly_d = poly_diff(poly)
        x = None
        x_n = float(raw_input("Enter initial guess:"))
        while x != x_n:
            x = x_n
            x_n = x - (float(poly_apply(poly, x)) / poly_apply(poly_d, x))
        return x_n
That should do it. However, I think it is possible that for certain polynomials this may not terminate, due to floating point rounding error. It may end up in a repeating cycle of approximations that differ only in the least significant bits. You might terminate when the percentage of change reaches a lower limit, or after a number of iterations.
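For instance, a sketch of such a stopping rule (the tolerance and iteration cap are my own choices; poly_diff, poly_apply and raw_input are as above):

    def poly_root(poly, tol=1e-12, max_iter=100):
        """ Newton's method with a relative tolerance and an iteration cap. """
        poly_d = poly_diff(poly)
        x = float(raw_input("Enter initial guess:"))
        for _ in range(max_iter):
            x_n = x - float(poly_apply(poly, x)) / poly_apply(poly_d, x)
            if abs(x_n - x) <= tol * max(abs(x_n), 1.0):  # relative change is small enough
                return x_n
            x = x_n
        return x  # best estimate after max_iter iterations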
    import copy

    poly = [[1,64], [2,109], [3,137], [4,138], [5,171], [6,170]]

    def poly_diff(poly):
        newlist = copy.deepcopy(poly)
        for term in newlist:
            term[0] *= term[1]
            term[1] -= 1
        return newlist

    def poly_apply(poly, x):
        sum = 0.0
        for term in poly:
            sum += term[0] * (x ** term[1])
        return sum

    def poly_root(poly):
        poly_d = poly_diff(poly)
        x = float(input("Enter initial guess:"))
        x_n = x - (float(poly_apply(poly, x)) / poly_apply(poly_d, x))
        print (x_n)

    if __name__ == "__main__":
        poly_root(poly)