How do I calculate and print out the value of ln(1+x) using the series expansion:
ln(1+x) = sum from n = 1 to infinity of [ x^(2n-1)/(2n-1) - x^(2n)/(2n) ]
using a while loop, including terms whose magnitude is greater than 10^-8. Print out the sum after each number of terms to show the result converging.
So far this is my code but it calculates lnsum2 to be a very large number and hence never ends.
n = 1
lnsum2 = np.cumsum((((-1)**(n+1)*(x**n)/n)))
while lnsum2 > 10**-8:
    n += 1
    lnsum2 = lnsum2 + np.cumsum((((-1)**(n+1)*(x**n)/n)))
else:
    print('The sum of terms greater than 10^-8 is:', lnsum2)
Many thanks.
Right I've now got code that works using a while loop. Thanks for all the help!!
Maybe it's a bit overkill, but here's a nice solution using sympy to evaluate the infinite series.
from sympy.abc import k
from sympy import Sum, oo as inf
import math

x = 0.5
result = Sum(
    (
        x**(2*k-1) /
        (2*k-1)
    ) - (
        x**(2*k) / (2*k)
    ),
    (k, 1, inf)).doit()

#print(result)  # 0.5*hyper((0.5, 1), (3/2,), 0.25) - 0.14384103622589
print(float(result))  # 0.4054651081081644
print(math.log(x+1, math.e))  # 0.4054651081081644
EDIT:
I think the problem with your original code is that you haven't quite implemented the series (if I'm understanding the figure in your question correctly). It looks like the series you're trying to implement can be represented as
sum from n = 1 to infinity of [ x^(2n-1)/(2n-1) - x^(2n)/(2n) ]
whereas your code actually implements this series
(-1)^2 * x^1 / 1  +  sum from n = 2 to infinity of [ (-1)^(n+1) * x^n / n ]
EDIT 2:
If you really have to do the iterations yourself, rather than using sympy, here is code which works:
import math

x = 0.5
n = 0
sums = []
while True:
    n += 1
    this_sum = (x**(2*n-1) / (2*n-1)) - (x**(2*n) / (2*n))
    if abs(this_sum) < 1e-8:
        break
    sums.append(this_sum)
lnsum = sum(sums)

print('The sum of terms greater than 10^-8 is:\t\t', lnsum)
print('math.log yields:\t\t\t\t', math.log(x+1, math.e))
Output:
The sum of terms greater than 10^-8 is: 0.4054651046035002
math.log yields: 0.4054651081081644
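Incidentally, the original question also asks to print the sum after each number of terms to show the convergence; a small variation of the loop above would do that (this is just a sketch, not the asker's final code):

import math

x = 0.5
n = 0
lnsum = 0.0
while True:
    n += 1
    term = (x**(2*n-1) / (2*n-1)) - (x**(2*n) / (2*n))
    if abs(term) < 1e-8:
        break
    lnsum += term
    print('sum after', n, 'terms:', lnsum)  # shows the result converging
print('math.log gives:', math.log(1 + x))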
I have some simple Mathematica code that I'm struggling to convert to Python and could use some help:
a = ((-1)^(n))*4/(Pi*(2 n + 1));
f = a*Cos[(2 n + 1)*t];
sum = Sum[f, {n, 0, 10}];
Plot[sum, {t, -2 \[Pi], 2 \[Pi]}]
The plot looks like this:
For context, I have a function f(t):
I need to plot the sum of the first 10 terms. In Mathematica this was pretty straightforward, but for some reason I just can't seem to figure out how to make it work in Python. I've tried defining a function a(n), but when I try to set f(t) equal to the sum using my list of odd numbers, it doesn't work because t is not defined, but t is a variable. Any help would be much appreciated.
Below is a sample of one of the many different things I've tried. I know that it's not quite right in terms of getting the parity of the terms to alternate, but more importantly I just want to figure out how to get 'f' to be the sum of the first 10 terms of the summation:
n = list(range(1, 20, 2))

def a(n):
    return ((-1)**(n))*4/(np.pi*n)

f = 0
for i in n:
    f += a(i)*np.cos(i*t)
Modifying your code, look at the parts which are different; the main mistake was that you were not computing the terms based on n running from 0 to 10:
n = np.arange(0, 10)
t = np.linspace(-2 * np.pi, 2 * np.pi, 10000)

def a(n):
    return ((-1)**(n))*4/(np.pi*(2*n+1))

f = 0
for i in n:
    f += a(i)*np.cos((2*i + 1) * t)
However, you could also write this in matrix form and avoid the loop entirely, using vectors and broadcasting:
n = np.arange(10)[:,None]
t = np.linspace(-2 * np.pi, 2 *np.pi, 10000)[:,None]
a = ((-1) ** n) * 4 / (np.pi*(2*n + 1))
f = (a * np.cos((2 * n + 1) * t.T )).sum(axis=0)
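To reproduce the Mathematica plot, either version of f can be fed straight into matplotlib; the following is a minimal sketch (the imports and plotting calls are my addition, not part of the answer above):

import numpy as np
import matplotlib.pyplot as plt

n = np.arange(10)[:, None]
t = np.linspace(-2 * np.pi, 2 * np.pi, 10000)[:, None]
a = ((-1) ** n) * 4 / (np.pi * (2 * n + 1))
f = (a * np.cos((2 * n + 1) * t.T)).sum(axis=0)  # same vectorised sum as above

plt.plot(t.ravel(), f)  # f has shape (10000,); flatten t to match
plt.xlabel('t')
plt.ylabel('sum of the first 10 terms')
plt.show()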
Problem Statement
Edit: I have transcribed the image as suggested, although I think some terms are better shown in the picture if anything here is unclear:
This function takes in a positive integer n and returns the sum of the following series Sn, as long as the absolute value of each term is larger than stop.
Sn = 1 − 1/2 + 1/3 − 1/4 + ... + (−1)^(n+1)/n + ...
You can assume that stop is a float value and 0 < stop < 1.
You need not round the output.
For example, if stop = 0.249, then Sn is evaluated with only four terms.
Sn = 1 − 1/2 + 1/3 − 1/4
For example, if stop = 0.199, then Sn is evaluated with only five terms.
Sn = 1 − 1/2 + 1/3 − 1/4 + 1/5
The built-in function abs() is useful. You should use a while loop.
Test cases:
print( alternating_while(0.249) )
print( alternating_while(0.199) )
gives:
0.5833333333333333
0.7833333333333332
Now for this question, I want to get the sum of this series based on the conditions stipulated in the question.
My problem is that I don't understand how to express the formula given in the question in code, because I'm not familiar with how the while loop works. Can someone show me how?
def alternating_while(stop):
    total = 0
    n = 1
    term = 1
    while abs(term) > stop:
        total = (-1) ** (n + 1) / n + alternating_while(n - 1)
    return total
No reason to use recursion as it wasn't mentioned as a requirement. Just check the term in the while loop for the stop condition:
Python 3.8+ (for the := operator):
def alternating_while(stop):
    n = 1
    total = 0
    while abs(term := (-1)**(n+1)/n) > stop:
        total += term
        n += 1
    return total

print(alternating_while(0.249))
print(alternating_while(0.199))
Output:
0.5833333333333333
0.7833333333333332
Pre-Python 3.8 version:
def alternating_while(stop):
    n = 1
    total = 0
    while True:
        term = (-1)**(n+1)/n
        if abs(term) <= stop:
            break
        total += term
        n += 1
    return total
Or:
def alternating_while(stop):
    n = 1
    total = 0
    term = (-1)**(n+1)/n
    while abs(term) > stop:
        total += term
        n += 1
        term = (-1)**(n+1)/n  # redundant
    return total
The key is "alternating". You can just increment the current denominator one at a time. If it is odd, you add. Otherwise, you subtract. abs is not really required; I'm not sure why they would mention it.
def alternating_while(stop):
    total = 0
    denom = 1
    while 1/denom > stop:
        if denom & 1:
            total += 1/denom
        else:
            total -= 1/denom
        denom += 1
    return total

print(alternating_while(0.249))
print(alternating_while(0.199))
Output:
0.5833333333333333
0.7833333333333332
You need to cycle between adding and subtracting. The itertools module has a very helpful cycle class which you could utilise thus:
from itertools import cycle
from operator import add, sub

def get_term(d=2):
    while True:
        yield 1 / d
        d += 1

def calc(stop=0.199):
    c = cycle((sub, add))
    term = get_term()
    Sn = 1
    while (t := next(term)) > stop:
        Sn = next(c)(Sn, t)
    return Sn

print(calc())
Output:
0.6936474305598223
Note:
The reference in the problem statement to absolute values seems to be irrelevant as no terms will ever be negative
I understand you need to use while in this particular problem, and this answer won't immediately help you as it is probably a few steps ahead of the current level of your course. The hope however is that you'll find it intriguing, and will perhaps come back to it in the future when you start being interested in performance and the topics introduced here.
from math import ceil

def f(stop):
    n = ceil(1 / stop) - 1
    return sum([(2 * (k & 1) - 1) / k for k in range(1, n + 1)])
Explanation
First, we want to establish ahead of time n, so that we avoid a math evaluation at each loop to decide whether to stop or not. Instead, the main loop is now for k in range(1, n + 1) which will go from 1 to n, included.
We use the oddness of k (k & 1) to determine the sign of each term, i.e. +1 for k == 1, -1 for k == 2, etc.
We make the series of terms in a list comprehension (for speed).
(A point often missed by many Pythonistas): building the list using such a comprehension and then summing it is, counter-intuitively, slightly faster than summing directly from a generator. In other words, sum([expr for k in generator]) is faster than sum(expr for k in generator). Note: I haven't tested this with Python 3.11 and that version of Python has many speed improvements.
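If you want to check that claim on your own machine, a quick (admittedly unscientific) comparison with timeit might look like the following; the exact numbers will vary with interpreter version and hardware:

from timeit import timeit

n = 10_000
list_comp = timeit(lambda: sum([(2 * (k & 1) - 1) / k for k in range(1, n + 1)]), number=200)
generator = timeit(lambda: sum((2 * (k & 1) - 1) / k for k in range(1, n + 1)), number=200)
print('list comprehension:  ', list_comp)
print('generator expression:', generator)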
For fun, you can slightly change the loop above to return the elements of the terms and inspect them:
def g(stop):
    n = ceil(1 / stop) - 1
    return [(2 * (k & 0x1) - 1, k) for k in range(1, n + 1)]
>>> g(.249)
[(1, 1), (-1, 2), (1, 3), (-1, 4)]
I'm trying to evaluate a Taylor polynomial for the natural logarithm, ln(x), centred at a=1 in Python. I'm using the series given on Wikipedia however when I try a simple calculation like ln(2.7) instead of giving me something close to 1 it gives me a gigantic number. Is there something obvious that I'm doing wrong?
def log(x):
    n = 1000
    s = 0
    for i in range(1, n):
        s += ((-1)**(i+1))*((x-1)**i)/i
    return s
Using the Taylor series ln(x) = sum from n = 1 to infinity of (-1)^(n+1) * (x-1)^n / n, this gives an enormous result instead of something close to 1.
EDIT: If anyone stumbles across this, an alternative way to evaluate the natural logarithm of some real number is to use numerical integration (e.g. Riemann sum, midpoint rule, trapezoid rule, Simpson's rule, etc.) to evaluate the integral that is often used to define the natural logarithm, ln(x) = integral from 1 to x of dt/t.
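For what it's worth, here is a minimal sketch of that integration idea using the midpoint rule; the function name and step count are arbitrary choices of mine, not something from the original post:

import math

def ln_midpoint(x, num_steps=100_000):
    # approximate ln(x) = integral from 1 to x of dt/t with the midpoint rule
    h = (x - 1) / num_steps              # width of each subinterval
    total = 0.0
    for k in range(num_steps):
        t = 1 + (k + 0.5) * h            # midpoint of the k-th subinterval
        total += h / t                   # rectangle area h * f(t) with f(t) = 1/t
    return total

print(ln_midpoint(2.7))                  # should be close to math.log(2.7)
print(math.log(2.7))                     # 0.9932517730102834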
That series only converges when 0 < x <= 2 (equivalently |x - 1| <= 1), and 2.7 is outside that range. For larger x you will need a different series.
For example this one (found here):
def ln(x): return 2*sum(((x-1)/(x+1))**i/i for i in range(1,100,2))
output:
ln(2.7) # 0.9932517730102833
math.log(2.7) # 0.9932517730102834
Note that it takes a lot more than 100 terms to converge as x gets bigger (up to a point where it'll become impractical)
You can compensate for that by adding the logarithms of smaller factors of x:
def ln(x):
    if x > 2: return ln(x/2) + ln(2)  # ln(x) = ln(x/2 * 2) = ln(x/2) + ln(2)
    return 2*sum(((x-1)/(x+1))**i/i for i in range(1,1000,2))
which is something you can also do in your Taylor based function to support x>1:
def log(x):
    if x > 1: return log(x/2) - log(0.5)  # ln(2) = -ln(1/2)
    n = 1000
    s = 0
    for i in range(1, n):
        s += ((-1)**(i+1))*((x-1)**i)/i
    return s
These series also take more terms to converge when x gets closer to zero so you may want to work them in the other direction as well to keep the actual value to compute between 0.5 and 1:
def log(x):
    if x > 1: return log(x/2) - log(0.5)    # ln(x/2 * 2) = ln(x/2) + ln(2)
    if x < 0.5: return log(2*x) + log(0.5)  # ln(x*2 / 2) = ln(x*2) - ln(2)
    ...
If performance is an issue, you'll want to store ln(2) or log(0.5) somewhere and reuse it instead of computing it on every call
for example:
ln2 = None

def ln(x):
    if x <= 2:
        return 2*sum(((x-1)/(x+1))**i/i for i in range(1,10000,2))
    global ln2
    if ln2 is None: ln2 = ln(2)
    n2 = 0
    while x > 2: x, n2 = x/2, n2+1
    return ln2*n2 + ln(x)
The program is correct, but the Mercator series has the following caveat:
The series converges to the natural logarithm (shifted by 1) whenever −1 < x ≤ 1.
The series diverges when x > 1, so you shouldn't expect a result close to 1.
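To see this concretely (this example is mine, not part of the answer): since the series does converge for arguments in (−1, 1], you can still get ln(2.7) from the same code by evaluating it at 1/2.7 and negating, because ln(2.7) = −ln(1/2.7) and 1/2.7 − 1 ≈ −0.63 lies inside the interval of convergence:

import math

def log(x):  # the question's Taylor series about a = 1
    s = 0
    for i in range(1, 1000):
        s += ((-1)**(i+1))*((x-1)**i)/i
    return s

print(-log(1/2.7))    # close to 0.9933
print(math.log(2.7))  # 0.9932517730102834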
The Python function math.frexp(x) can be used to advantage here to modify the problem so that the Taylor series is working with a value close to one. math.frexp(x) is described as:
Return the mantissa and exponent of x as the pair (m, e). m is a float
and e is an integer such that x == m * 2**e exactly. If x is zero,
returns (0.0, 0), otherwise 0.5 <= abs(m) < 1. This is used to “pick
apart” the internal representation of a float in a portable way.
Using math.frexp(x) should not be regarded as "cheating" because it is presumably implemented just by accessing the bit fields in the underlying binary floating point representation. It isn't absolutely guaranteed that the representation of floats will be IEEE 754 binary64, but as far as I know every platform uses this. sys.float_info can be examined to find out the actual representation details.
Much like the other answer does you can use the standard logarithmic identities as follows: Let m, e = math.frexp(x). Then log(x) = log(m * 2^e) = log(m) + e * log(2). log(2) can be precomputed to full precision ahead of time and is just a constant in the program. Here is some code illustrating this to compute the two similar Taylor series approximations to log(x). The number of terms in each series was determined by trial and error rather than rigorous analysis.
taylor1 implements log(1 + x) = x - (1/2) * x^2 + (1/3) * x^3 - ...
taylor2 implements log(x) = 2 * [t + (1/3) * t^3 + (1/5) * t^5 + ...], where t = (x - 1) / (x + 1).
import math
import struct

_LOG_OF_2 = 0.69314718055994530941723212145817656807550013436025

def taylor1(x):
    m, e = math.frexp(x)
    log_of_m = 0
    num_terms = 36
    sign = 1
    m_minus1_power = m - 1
    for k in range(1, num_terms + 1):
        log_of_m += sign * m_minus1_power / k
        sign = -sign
        m_minus1_power *= m - 1
    return log_of_m + e * _LOG_OF_2

def taylor2(x):
    m, e = math.frexp(x)
    num_terms = 12
    half_log_of_m = 0
    t = (m - 1) / (m + 1)
    t_squared = t * t
    t_power = t
    denominator = 1
    for k in range(num_terms):
        half_log_of_m += t_power / denominator
        denominator += 2
        t_power *= t_squared
    return 2 * half_log_of_m + e * _LOG_OF_2
This seems to work well over most of the domain of log(x), but as x approaches 1 (and log(x) approaches 0) the transformation provided by x = m * 2^e actually produces a less accurate result. So a better algorithm would first check if x is close to 1, say abs(x-1) < .5, and if so then just compute the Taylor series approximation directly on x.
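As a quick sanity check (my addition, not part of the answer above, and assuming taylor1 and taylor2 from the code above are in scope), you can compare both functions against math.log across a few magnitudes; agreement should be good everywhere except for some loss of accuracy very close to x = 1, as noted above:

import math

for x in (0.001, 0.5, 0.9999, 1.0001, 2.7, 1e6):
    print(x, taylor1(x), taylor2(x), math.log(x))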
My answer is just using the Taylor series for ln(x). I really hope this helps. It is simple and straight to the point.
I would like to generate an array of random numbers that sum to 0. If the sum had to be 1, I could just divide the values by their sum; however, this approach is not applicable when the sum is 0.
Maybe I could compute the opposite of each value I sample, so I would always have a pair of numbers, such that their sum is 0. However this approach reduces the "randomness" I would like to have in my random array.
Are there better approaches?
Edit: the array length can vary (from 3 to few hundreds), but it has to be fixed before sampling.
There is a Dirichlet-Rescale (DRS) algorithm that generates random numbers summing to a given value. As its description says, it has the feature that
the vectors are uniformly distributed over the valid region of the
domain of all possible vectors, bounded by the constraints.
There is also a Python library for it.
You could use sklearn's StandardScaler. It scales your data to have a variance of 1 and a mean of 0. A mean of 0 is equivalent to a sum of 0.
from sklearn.preprocessing import StandardScaler, MinMaxScaler
import numpy as np
rand_numbers = StandardScaler().fit_transform(np.random.rand(100,1, ))
If you don't want to use sklearn you can standardize by hand, the formula is pretty simple:
rand_numbers = np.random.rand(1000,1, )
rand_numbers = (rand_numbers - np.mean(rand_numbers)) / np.std(rand_numbers)
The problem here is the variance of 1, which causes numbers greater than 1 or smaller than -1. Therefore you divide the array by its maximum absolute value.
rand_numbers = rand_numbers*(1/max(abs(rand_numbers)))
Now you have an array with values between -1 and 1 with a sum really close to zero.
print(sum(rand_numbers))
print(min(rand_numbers))
print(max(rand_numbers))
Output:
[-1.51822999e-14]
[-0.99356294]
[1.]
With this solution you will always have exactly one 1 or one -1 in your data. If you want to avoid this, you can add a positive random factor to the divisor: rand_numbers*(1/(max(abs(rand_numbers))+randomfactor))
Edit
As @KarlKnechtel mentioned, the division by the standard deviation is redundant with the division by the maximum absolute value.
The above can be simply done by:
rand_numbers = np.random.rand(100000,1, )
rand_numbers = rand_numbers - np.mean(rand_numbers)
rand_numbers = rand_numbers / max(abs(rand_numbers))
I would try the following solution:
import random

def draw_randoms_while_sum_not_zero(eps):
    r = random.uniform(-1, 1)
    sum = r
    yield r
    while abs(sum) > eps:
        if sum > 0:
            r = random.uniform(-1, 0)
        else:
            r = random.uniform(0, 1)
        sum += r
        yield r
As floating point numbers are not perfectly accurate, you can never be sure that the numbers you draw will sum to exactly 0. You need to decide what margin is acceptable and call the above generator.
It'll yield (lazily return) random numbers as you need them as long as they don't sum up to 0 ± eps
epss = [0.1, 0.01, 0.001, 0.0001, 0.00001]

for eps in epss:
    lengths = []
    for _ in range(100):
        lengths.append(len(list(draw_randoms_while_sum_not_zero(eps))))
    print(f'{eps}: min={min(lengths)}, max={max(lengths)}, avg={sum(lengths)/len(lengths)}')
Results:
0.1: min=1, max=24, avg=6.1
0.01: min=1, max=174, avg=49.27
0.001: min=4, max=2837, avg=421.41
0.0001: min=5, max=21830, avg=4486.51
1e-05: min=183, max=226286, avg=48754.42
Since you are fine with the approach of generating lots of numbers and dividing by the sum, why not generate n/2 positive numbers and divide them by their sum, then generate n/2 negative numbers and divide those by the absolute value of their sum?
Want a random positive-to-negative mix? Randomly generate that split first and then continue, as sketched below.
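A minimal sketch of that idea (the helper name and the numpy usage are mine, not the answer's):

import numpy as np

def random_zero_sum(n):
    # hypothetical helper: n//2 positives normalised to sum to 1,
    # the rest negatives normalised to sum to -1, shuffled together
    pos = np.random.rand(n // 2)
    pos /= pos.sum()                  # positives now sum to exactly 1
    neg = -np.random.rand(n - n // 2)
    neg /= -neg.sum()                 # negatives now sum to exactly -1
    out = np.concatenate([pos, neg])
    np.random.shuffle(out)
    return out

x = random_zero_sum(6)
print(x, x.sum())                     # sum is 0 up to float rounding
# note: individual values can exceed 1 in magnitude after normalisation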
One way to generate such a list is to pair each number with its opposite.
If that is not a desirable property, you can introduce some extra randomness by adding / subtracting the same random value to different opposite couples, e.g.:
import random

def exact_sum_uniform_random(num, min_val=-1.0, max_val=1.0, epsilon=0.1):
    items = [random.uniform(min_val, max_val) for _ in range(num // 2)]
    opposites = [-x for x in items]
    if num % 2 != 0:
        items.append(0.0)
    for i in range(len(items)):
        diff = random.random() * epsilon
        if items[i] + diff <= max_val \
                and any(opposite - diff >= min_val for opposite in opposites):
            items[i] += diff
            modified = False
            while not modified:
                j = random.randint(0, num // 2 - 1)
                if opposites[j] - diff >= min_val:
                    opposites[j] -= diff
                    modified = True
    result = items + opposites
    random.shuffle(result)
    return result

random.seed(0)
x = exact_sum_uniform_random(3)
print(x, sum(x))
# [0.7646391433441265, -0.7686875811622043, 0.004048437818077755] 2.2551405187698492e-17
EDIT
If the upper and lower limits are not strict, a simple way to construct a zero sum sequence is to sum-normalize two separate sequences to 1 and -1 and join them together:
def norm(items, scale):
    return [item / scale for item in items]

def zero_sum_uniform_random(num, min_val=-1.0, max_val=1.0):
    a = [random.uniform(min_val, max_val) for _ in range(num // 2)]
    a = norm(a, sum(a))
    b = [random.uniform(min_val, max_val) for _ in range(num - len(a))]
    b = norm(b, -sum(b))
    result = a + b
    random.shuffle(result)
    return result

random.seed(0)
n = 3
x = zero_sum_uniform_random(n)
print(x, sum(x))
# [1.0, 2.2578843364303585, -3.2578843364303585] 0.0
Note that both approaches will not have, in general, a uniform distribution.
I'm being asked to add the first 100 terms of the sequence (1 + 1/2 + 1/4 + 1/8 ... etc.).
What I've been trying is something like:
for x in range(101):
    n = ((1)/(2**x))
    sum(n)
This gives me an error; I guess you can't put ranges to a power.
print(n)
will give me a list of all the values, but I need them summed together.
Is anyone able to give me a hand?
I'm using qtconsole, if that's of any relevance; I'm quite new to this, if you haven't already guessed.
You keep only one value at a time. If you want the sum, you need to aggregate the results; for that you need an accumulator with an initial value, to which you add the current term on each iteration:
n = 0  # initial value
for x in range(100):
    n += 1 / 2**x  # add current term
print(n)
Hmm, there is actually a formula for the sum of a geometric series: S_n = a * (1 - r^n) / (1 - r).
In your question, a is 1, r is 0.5 and n is 100, so we can do:
a = 1
r = 0.5
n = 100
print(a * (1 - r ** n) / (1 - r))
It is important to initialize sum_n to zero. With each iteration, you add (1/2**x) from your sequence/series to sum_n until you reach n_range.
n_range = 101
sum_n = 0  # initialize sum_n to zero
for x in range(n_range):
    sum_n += (1/(2**x))
print(sum_n)
You are getting an error because sum takes an iterable and you are passing it a float:
sum(iterable[, start])
To solve your problem, as others have suggested, you need to initialize an accumulator and add each term to it on every iteration.
If you absolutely must use the sum function:
>>> import math
>>> sum(map(lambda x:math.pow(2,-x),range(100)))
2.0