What's wrong with this function to solve cubic equations? - python

I am using Python 2 and the fairly simple method given in Wikipedia's article "Cubic function". This could also be a problem with the cube root function I have to define in order to create the function mentioned in the title.
# Cube root and cubic equation solver
#
# Copyright (c) 2013 user2330618
#
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, you can obtain one at http://www.mozilla.org/MPL/2.0/.
from __future__ import division
import cmath
from cmath import log, sqrt
def cbrt(x):
    """Computes the cube root of a number."""
    if x.imag != 0:
        return cmath.exp(log(x) / 3)
    else:
        if x < 0:
            d = (-x) ** (1 / 3)
            return -d
        elif x >= 0:
            return x ** (1 / 3)
def cubic(a, b, c, d):
    """Returns the real roots to cubic equations in expanded form."""
    # Define the discriminants
    D = (18 * a * b * c * d) - (4 * (b ** 3) * d) + ((b ** 2) * (c ** 2)) - \
        (4 * a * (c ** 3)) - (27 * (a ** 2) * d ** 2)
    D0 = (b ** 2) - (3 * a * c)
    i = 1j  # Because I prefer i over j
    # Test for some special cases
    if D == 0 and D0 == 0:
        return -(b / (3 * a))
    elif D == 0 and D0 != 0:
        return [((b * c) - (9 * a * d)) / (-2 * D0),
                ((b ** 3) - (4 * a * b * c) + (9 * (a ** 2) * d)) / (-a * D0)]
    else:
        D1 = (2 * (b ** 3)) - (9 * a * b * c) + (27 * (a ** 2) * d)
        # More special cases
        if D != 0 and D0 == 0 and D1 < 0:
            C = cbrt((D1 - sqrt((D1 ** 2) - (4 * (D0 ** 3)))) / 2)
        else:
            C = cbrt((D1 + sqrt((D1 ** 2) - (4 * (D0 ** 3)))) / 2)
        u_2 = (-1 + (i * sqrt(3))) / 2
        u_3 = (-1 - (i * sqrt(3))) / 2
        x_1 = (-(b + C + (D0 / C))) / (3 * a)
        x_2 = (-(b + (u_2 * C) + (D0 / (u_2 * C)))) / (3 * a)
        x_3 = (-(b + (u_3 * C) + (D0 / (u_3 * C)))) / (3 * a)
        if D > 0:
            return [x_1, x_2, x_3]
        else:
            return x_1
I've found that this function is capable of solving some simple cubic equations:
print cubic(1, 3, 3, 1)
-1.0
And a while ago I had gotten it to a point where it could solve equations with two roots. I've just done a rewrite and now it's gone haywire. For example, these coefficients are the expanded form of (2x - 4)(x + 4)(x + 2) and it should return [4.0, -4.0, -2.0] or something similar:
print cubic(2, 8, -8, -32)
[(-4+1.4802973661668753e-16j), (2+2.9605947323337506e-16j), (-2.0000000000000004-1.1842378929335002e-15j)]
Is this more a mathematical or a programming mistake I'm making?
Update: Thank you, everyone, for your answers, but there are more problems with this function than I've listed so far. For example, I often get an error relating to the cube root function:
print cubic(1, 2, 3, 4)  # Correct solution: about -1.65
...
    if x > 0:
TypeError: no ordering relation is defined for complex numbers

print cubic(1, -3, -3, -1)  # Correct solution: about 3.8473
...
    if x > 0:
TypeError: no ordering relation is defined for complex numbers

Wolfram Alpha confirms that the roots of your last cubic are indeed
(-4, -2, 2)
and not, as you say,
... it should return [4.0, -4.0, -2.0]
Notwithstanding that (I presume) typo, your program gives
[(-4+1.4802973661668753e-16j), (2+2.9605947323337506e-16j), (-2.0000000000000004-1.1842378929335002e-15j)]
which, to an accuracy of 10**(-15), are exactly the correct roots. The tiny imaginary parts are probably due, as others have said, to rounding.
Note that you'll have to use exact arithmetic to always cancel correctly if you are using a solution like Cardano's. This is one of the reasons why programs like MAPLE or Mathematica exist; there is often a disconnect between the formula and the implementation.
To get only the real portion of a number in pure Python, use the .real attribute. Example:
a = 3.0+4.0j
print a.real
3.0
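Going one step further, a small helper like the one below collapses roots whose imaginary part is pure rounding noise back to floats. This is my own sketch (the name and tolerance are arbitrary), not part of the asker's code:

```python
def as_real(z, tol=1e-10):
    """Collapse z to a float when its imaginary part is below tol."""
    z = complex(z)
    return z.real if abs(z.imag) < tol else z

roots = [(-4 + 1.4802973661668753e-16j), (2 + 2.9605947323337506e-16j),
         (-2.0000000000000004 - 1.1842378929335002e-15j)]
print([as_real(z) for z in roots])  # [-4.0, 2.0, -2.0000000000000004]
```

Genuinely complex roots pass through unchanged, so this is safe to apply to every result of the cubic solver.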

Hooked's answer is the way to go if you want to do this numerically. You can also do it symbolically using sympy:
>>> from sympy import roots
>>> roots('2*x**3 + 8*x**2 - 8*x - 32')
{2: 1, -4: 1, -2: 1}
This gives you the roots and their multiplicity.
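If a plain numeric answer is enough and you don't mind a numpy dependency, numpy.roots does the same job from the coefficient list alone (shown here as an alternative sketch, not part of the original answer):

```python
import numpy as np

r = np.roots([2, 8, -8, -32])  # coefficients of 2x**3 + 8x**2 - 8x - 32, highest degree first
print(sorted(r.real))          # real parts of the roots: -4, -2, 2 (up to rounding)
```

Under the hood this finds the eigenvalues of the companion matrix, which is numerically more robust than evaluating Cardano's formula directly.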

You are using integer values, which Python 2 does not automatically convert to floats.
The more general solution is to write the coefficients in the function as float numbers: 18.0 instead of 18, and so on. That will do the trick.
An illustration - from the code:
>>> 2**(1/3)
1
>>> 2**(1/3.)
1.2599210498948732
>>>

Related

Karatsuba algorithm, slight inaccuracy

I've spent a while trying to implement Karatsuba's algorithm in Python, and I'm getting close, but when I try to multiply two larger numbers (over ~10^15) my result starts to get inaccurate, and I can't figure out why.
Side question: is there a way for my base case to be "both x and y are strictly less than 10" instead of "either x or y is less than 10"?
from math import ceil, log

def karatsuba(x, y):
    # 1. Split ints
    if x <= 10 or y <= 10:
        # Base case
        return x * y
    n_x = ceil(log(x, 10))  # Nb of digits in x
    n_y = ceil(log(y, 10))
    n = max(n_x, n_y)
    b = int(x % (10 ** (n // 2)))
    a = int(x / (10 ** (n // 2)))
    d = int(y % (10 ** (n // 2)))
    c = int(y / (10 ** (n // 2)))
    # 2. Recursive calls
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    kara = karatsuba((a + b), (c + d))
    res = ac * (10 ** (2 * (n // 2))) + (kara - ac - bd) * (10 ** (n // 2)) + bd
    return res
Example :
x = 151222321858446622145369417738339374
y = 875336699541236667457869597252254524
karatsuba(x, y)
returns:
132370448112535269852891372864998437604548273605778561898354233338827976
instead of:
132370448112535277024334963430875927265604725663292579898354233338827976
You lose precision by going through float due to your / divisions. Use // instead. Then you also don't need to convert back to int. Better yet, use divmod:
N = 10 ** (n // 2)
a, b = divmod(x, N)
c, d = divmod(y, N)
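To see why the / route loses digits while divmod does not, here is a quick check using the question's first operand (the split point 10**18 is chosen for a 36-digit number):

```python
x = 151222321858446622145369417738339374
N = 10 ** 18                 # split point for a 36-digit number
print(int(x / N) == x // N)  # False: float division keeps only ~16 significant digits
a, b = divmod(x, N)          # exact quotient and remainder in one call
print(a * N + b == x)        # True: the split reconstructs x exactly
```

Since a float holds about 16 significant decimal digits, int(x / N) cannot recover the exact 18-digit quotient, while // and divmod stay in exact integer arithmetic throughout.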

np.int64 behaves differently from int in math-operations

I have come across a very strange problem where I do a lot of math and the result is inf or nan when my input is of type <class 'numpy.int64'>, but I get the correct (checked analytically) results when my input is of type <class 'int'>. The only library functions I use are np.math.factorial(), np.sum() and np.array(). I also use a generator object to sum over series, and the Boltzmann constant from scipy.constants.
My question is essentially this: are there any known cases where np.int64 objects behave very differently from int objects?
When I run with np.int64 input, I get the RuntimeWarnings overflow encountered in long_scalars, divide by zero encountered in double_scalars and invalid value encountered in double_scalars. However, the largest number I plug into the factorial function is 36, and I don't get these warnings with int input.
Below is a code that reproduces the behaviour. I was unable to find out more exactly where it comes from.
import numpy as np
import scipy.constants as const

# Some representative numbers
sigma = np.array([1, 2])
sigma12 = 1.5
mole_weights = np.array([10, 15])
T = 100
M1, M2 = mole_weights / np.sum(mole_weights)
m0 = np.sum(mole_weights)
fac = np.math.factorial

def summation(start, stop, func, args=None):
    # Sum func over all ints from start up to and including stop,
    # passing 'args' as additional arguments
    if args is not None:
        return sum(func(i, args) for i in range(start, stop + 1))
    else:
        return sum(func(i) for i in range(start, stop + 1))

def delta(i, j):
    # Kronecker delta
    if i == j:
        return 1
    else:
        return 0

def w(l, r):
    # l, r are ints, returns a float
    return 0.25 * (2 - ((1 / (l + 1)) * (1 + (-1) ** l))) * np.math.factorial(r + 1)

def omega(ij, l, r):
    # l, r are ints, ij is an ID, returns a float
    if ij in (1, 2):
        return sigma[ij - 1] ** 2 * np.sqrt(
            (np.pi * const.Boltzmann * T) / mole_weights[ij - 1]) * w(l, r)
    elif ij in (12, 21):
        return 0.5 * sigma12 ** 2 * np.sqrt(
            2 * np.pi * const.Boltzmann * T / (m0 * M1 * M2)) * w(l, r)
    else:
        raise ValueError('(' + str(ij) + ', ' + str(l) + ', ' + str(r) + ') are non-valid arguments for omega.')

def A_prime(p, q, r, l):
    '''
    p, q, r, l are ints. Returns a float
    '''
    F = (M1 ** 2 + M2 ** 2) / (2 * M1 * M2)
    G = (M1 - M2) / M2

    def inner(w, args):
        i, k = args
        return ((8 ** i * fac(p + q - 2 * i - w) * (-1) ** (r + i) * fac(r + 1) * fac(
            2 * (p + q + 2 - i - w)) * 2 ** (2 * r) * F ** (i - k) * G ** w) /
            (fac(p - i - w) * fac(q - i - w) * fac(r - i) * fac(p + q + 1 - i - r - w) * fac(2 * r + 2) * fac(
                p + q + 2 - i - w)
             * 4 ** (p + q + 1) * fac(k) * fac(i - k) * fac(w))) * (
            2 ** (2 * w - 1) * M1 ** i * M2 ** (p + q - i - w)) * 2 * (
            M1 * (p + q + 1 - i - r - w) * delta(k, l) - M2 * (r - i) * delta(k, l - 1))

    def sum_w(k, i):
        return summation(0, min(p, q, p + q + 1 - r) - i, inner, args=(i, k))

    def sum_k(i):
        return summation(l - 1, min(l, i), sum_w, args=i)

    return summation(l - 1, min(p, q, r, p + q + 1 - r), sum_k)

def H_i(p, q):
    '''
    p, q are ints. Returns a float
    '''
    def inner(r, l):
        return A_prime(p, q, r, l) * omega(12, l, r)

    def sum_r(l):
        return summation(l, p + q + 2 - l, inner, args=l)

    val = 8 * summation(1, min(p, q) + 1, sum_r)
    return val

p, q = np.int64(8), np.int64(8)
print(H_i(p, q))            # nan
print(H_i(int(p), int(q)))  # 1.3480582058153066e-08
Numpy's int64 is a 64-bit integer, meaning it consists of 64 bits that are each either 0 or 1. Thus the smallest representable value is -2**63 and the biggest is 2**63 - 1.
Python's int is essentially unlimited in length, so it can represent any value; it is the equivalent of a BigInteger in Java. Internally it is stored, in effect, as a list of fixed-size digits that together make up a single large number.
What you have here is a classic integer overflow. You mentioned that you "only" plug 36 into the factorial function, but the factorial function grows very fast, and 36! = 3.7e41 > 9.2e18 = 2**63 - 1, so you get a number bigger than you can represent in an int64!
Since int64s are also called longs, this is exactly what the warning overflow encountered in long_scalars is trying to tell you.
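You can verify this with plain Python (no numpy needed), since Python's unlimited int computes 36! without trouble:

```python
import math

INT64_MAX = 2 ** 63 - 1
print(math.factorial(36) > INT64_MAX)  # True: 36! ~ 3.7e41 overflows an int64
# Largest n whose factorial still fits in an int64:
n = 1
while math.factorial(n + 1) <= INT64_MAX:
    n += 1
print(n)  # 20
```

So anything beyond 20! silently wraps or overflows in int64 arithmetic, which is exactly the range these inner sums reach.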

Python holding back the value from the previous execution

This is a program to find the roots of a quadratic equation, but when I execute the program more than once, the values from the previous execution still remain in the list root. How can I clear it?
When I put del root in the function quad(), it gives the error UnboundLocalError: local variable 'root' referenced before assignment. Why?
import math
import cmath

root = []

def roots(a: int, b: int, c: int):
    if ((b ** 2) - 4 * a * c) >= 0:
        x1 = (-b + (math.sqrt((b ** 2) - 4 * a * c))) / (2 * a)
        x2 = (-b - (math.sqrt((b ** 2) - 4 * a * c))) / (2 * a)
    else:
        x1 = (-b + cmath.sqrt((b ** 2) - 4 * a * c)) / (2 * a)
        x2 = (-b - cmath.sqrt((b ** 2) - 4 * a * c)) / (2 * a)
    root.append(x1)
    root.append(x2)
    return root

def quad():
    a = int(input("enter the co-efficient of x^2-integer"))
    b = int(input("enter the co-efficient of x-integer"))
    c = int(input("enter the constant-integer"))
    roots(a, b, c)
    print(root)
    del root
Convert root to a local variable:
import math
import cmath

def calculate_roots(a: int, b: int, c: int):
    roots = []
    if ((b ** 2) - 4 * a * c) >= 0:
        x1 = (-b + (math.sqrt((b ** 2) - 4 * a * c))) / (2 * a)
        x2 = (-b - (math.sqrt((b ** 2) - 4 * a * c))) / (2 * a)
    else:
        x1 = (-b + cmath.sqrt((b ** 2) - 4 * a * c)) / (2 * a)
        x2 = (-b - cmath.sqrt((b ** 2) - 4 * a * c)) / (2 * a)
    roots.append(x1)
    roots.append(x2)
    return roots

def quad():
    a = int(input("enter the co-efficient of x^2-integer"))
    b = int(input("enter the co-efficient of x-integer"))
    c = int(input("enter the constant-integer"))
    roots = calculate_roots(a, b, c)
    print(roots)
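With roots local to the function, every call starts from a fresh empty list. A condensed version of the same function (restated here so the snippet runs on its own) shows that nothing carries over between calls:

```python
import math
import cmath

def calculate_roots(a, b, c):
    roots = []  # fresh list on every call
    disc = (b ** 2) - 4 * a * c
    root_disc = math.sqrt(disc) if disc >= 0 else cmath.sqrt(disc)
    roots.append((-b + root_disc) / (2 * a))
    roots.append((-b - root_disc) / (2 * a))
    return roots

print(calculate_roots(1, -3, 2))  # [2.0, 1.0]
print(calculate_roots(1, 2, 1))   # [-1.0, -1.0], two entries, not four
```

This also answers the del question: del root inside quad() fails because assigning or deleting a name inside a function makes Python treat it as local there, so the global list is never in scope to delete.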

Predict the number of iterations required - iterated weighted average

Pardon me, but I couldn't find a better title. Please look at this super-simple Python program:
x = start = 1.0
target = 0.1
coeff = 0.999
for c in range(100000):
    print('{:5d} {:f}'.format(c, x))
    if abs(target - x) < abs((x - start) * 0.01):
        break
    x = x * coeff + target * (1 - coeff)
Brief explanation: this program moves x towards target by iteratively computing the weighted average of x and target, with coeff as the weight. It stops when the remaining distance to target falls below 1% of the distance already covered.
The number of iterations remains the same no matter the initial value of x and target.
How can I set coeff in order to predict how many iterations will take place?
Thanks a lot.
Let's make this a function, f.
f(0) is the initial value (start, in this case 1.0).
f(x) = f(x - 1) * c + T * (1 - c).
(So f(1) is the next value of x, f(2) is the one after that, and so on. We want to find the value of x where |T - f(x)| < 0.01 * |f(0) - f(x)|)
So let's rewrite f(x) to be linear:
f(x) = f(x - 1) * c + T * (1 - c)
= (f(x - 2) * c + T * (1 - c)) * c + T * (1 - c)
= (f(x - 2) * c ** 2 + T * c * (1 - c)) + T * (1 - c)
= ((f(x - 3) * c + T * (1 - c)) * c ** 2 + T * c * (1 - c)) + T * (1 - c)
= f(x - 3) * c ** 3 + T * c ** 2 * (1 - c) + T * c * (1 - c) + T * (1 - c)
= f(0) * c ** x + T * c ** (x - 1) * (1 - c) + T * c ** (x - 2) * (1 - c) + ... + T * c * (1 - c) + T * (1 - c)
= f(0) * c ** x + (T * (1 - c)) [(sum r = 0 to x - 1) (c ** r)]
# Summation of a geometric series
= f(0) * c ** x + (T * (1 - c)) ((1 - c ** x) / (1 - c))
= f(0) * c ** x + T (1 - c ** x)
So, the nth value of x will be start * c ** n + target * (1 - c ** n).
We want:
|T - f(x)| < 0.01 * |f(0) - f(x)|
|T - f(0) * c ** x - T (1 - c ** x)| < 0.01 * |f(0) - f(0) * c ** x - T (1 - c ** x)|
|(c ** x) * T - (c ** x) f(0)| < 0.01 * |(1 - c ** x) * f(0) - (1 - c ** x) * T|
(c ** x) * |T - f(0)| < 0.01 * (1 - c ** x) * |T - f(0)|
c ** x < 0.01 * (1 - c ** x)
c ** x < 0.01 - 0.01 * c ** x
1.01 * c ** x < 0.01
c ** x < 1 / 101
x < log (1 / 101) / log c
(I somehow ended up with x < when it should be x >, but it gives the correct answer. With c = 0.999, x > 4612.8, and it terminates on step 4613).
In the end, it is independent of start and target.
Also, for a general percentage difference of p,
c ** x < p * (1 - c ** x)
c ** x < p - p * c ** x
(1 + p) * c ** x < p
c ** x < p / (1 + p)
x > log (p / (1 + p)) / log c
So for a coefficient of c, there will be log (1 / 101) / log c steps.
If you have the number of steps you want, call it I, you have
I = log_c(1 / 101)
c ** I = 1 / 101
c = (1 / 101) ** (1 / I)
So c should be set to the Ith root of 1 / 101.
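As a quick sketch of that inversion (the function name is mine), with p the target fraction from the stopping criterion:

```python
def coeff_for_steps(steps, p=0.01):
    """Weight c such that c ** steps == p / (1 + p)."""
    return (p / (1 + p)) ** (1 / steps)

print(round(coeff_for_steps(4613), 6))  # 0.999, recovering the question's coeff
```

With the default p = 0.01, p / (1 + p) is exactly 1 / 101, matching the derivation above.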
Your code reduces the distance between x and the target by a factor of coeff in each execution of the loop. Thus, if start is greater than target, we get the formula
target - x = (target - start) * coeff ** c
where c is the number of loops we have done.
Your ending criterion is (again, if start is greater than target),
x - target < (start - x) * 0.01
Solving for x by algebra we get
x < (target + 0.01 * start) / (1 + 0.01)
Substituting that into our first expression and simplifying a bit makes both start and target drop out of the inequality--now you see why those values did not matter--and we get
coeff ** c < 0.01 / (1 + 0.01)
Solving for c we get
c > log(0.01 / (1 + 0.01), coeff)
So the final answer for the number of loops is
ceil(log(0.01 / (1 + 0.01), coeff))
or alternatively, if you do not like logarithms to an arbitrary base,
ceil(log(0.01 / (1 + 0.01)) / log(coeff))
You could replace that first logarithm in that last expression with its result, but I left it that way to see what different result you would get if you change the constant in your end criterion away from 0.01.
The result of that expression in your particular case is
4613
which is correct. Note that both the ceil and log functions are in Python's math module, so remember to import them before doing the calculation. Also note that Python's floating-point calculations are not exact, so your actual number of loops may differ from that by one if you change the values of coeff or of 0.01.
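The formula can be checked against a direct simulation of the question's loop, using the same constants (start = 1.0, target = 0.1, coeff = 0.999):

```python
import math

coeff, start, target = 0.999, 1.0, 0.1
predicted = math.ceil(math.log(0.01 / (1 + 0.01)) / math.log(coeff))

x, steps = start, 0
while abs(target - x) >= abs((x - start) * 0.01):
    x = x * coeff + target * (1 - coeff)
    steps += 1

print(predicted, steps)  # 4613 4613
```

Changing start and target leaves steps unchanged, as the derivation predicts; only coeff (and the 0.01 threshold) matters.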

Python 3.4.1 - Don't show the final result?

Make a program to find the roots of a quadratic equation using the Bhaskara formula, calculating the square root with number ** 0.5.
I can enter a, b and c, but when I run the program it doesn't show the result for the roots x1 and x2...
This is my code so far:
a = int(input("a "))
b = int(input("b "))
c = int(input("c "))
delta = b * b - 4 * a * c
if (delta >= 0):
    x1 = (-b + delta ** 0.5) / 2 * a
    x2 = (-b - (delta) ** 0.5) / 2 * a
    print("x1: ", x1)
    print("x2: ", x2)
Every quadratic with real values a, b, and c has two roots (except in the case where the delta is 0), but sometimes the roots are complex numbers, not real numbers.
In particular, if I remember correctly:
If delta > 0, there are two real roots.
If delta == 0, there is only one root, the real number -b/(2*a).
If delta < 0, there are two complex roots (which are always conjugates).
If you do your math with complex numbers, you can use the same formula, (-b +/- delta**0.5) / 2a, for all three cases, and you'll get two real numbers, or 0 twice, or two complex numbers, as appropriate.
There are also ways to calculate the real and imaginary parts of the third case without doing complex math, but since Python makes complex math easy, why bother unless you're specifically trying to learn about those ways?
So, if you always want to print 2 roots, all you have to do is remove that if delta >= 0: line (and dedent the next few lines). Raising a negative float to the 0.5 power will give you a complex automatically, and that will make the rest of the expression complex. Like this:
delta = b * b - 4 * a * c
x1 = (-b + delta ** 0.5) / (2 * a)  # parentheses: divide by 2a, not (.../2)*a
x2 = (-b - delta ** 0.5) / (2 * a)
print("x1: ", x1)
print("x2: ", x2)
If you only want 0-2 real roots, your code is already correct as-is. You might want to add a check for delta == 0, or just for x1 == x2, so you don't print the same value twice. Like this:
delta = b * b - 4 * a * c
if delta >= 0:
    x1 = (-b + delta ** 0.5) / (2 * a)
    x2 = (-b - delta ** 0.5) / (2 * a)
    print("x1: ", x1)
    if x1 != x2:
        print("x2: ", x2)
If you want some kind of error message, all you need to do is add an else clause. Something like this:
delta = b * b - 4 * a * c
if delta >= 0:
    x1 = (-b + delta ** 0.5) / (2 * a)
    x2 = (-b - delta ** 0.5) / (2 * a)
    print("x1: ", x1)
    print("x2: ", x2)
else:
    print("No real solutions because of negative delta: ", delta)
Which one do you want? I have no idea. That's a question for you to answer. Once you decide what output you want for, say, 3, 4, and 5, you can pick the version that gives you that output.
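For a concrete check of the complex-number path, take a = 1, b = 2, c = 5 (arbitrary coefficients chosen so delta is negative), dividing by (2 * a):

```python
a, b, c = 1, 2, 5                # delta = 4 - 20 = -16
delta = b * b - 4 * a * c
x1 = (-b + delta ** 0.5) / (2 * a)
x2 = (-b - delta ** 0.5) / (2 * a)
print(isinstance(x1, complex))   # True: a negative number ** 0.5 is complex in Python 3
print(x1, x2)                    # the roots are -1 + 2i and -1 - 2i, up to rounding
```

No if is needed: the ** 0.5 operation produces a complex result by itself, and the rest of the arithmetic follows along.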
