I'm having trouble with the slow computation of my Python code. Based on the pycallgraph below, the bottleneck seems to be the function miepython.miepython.mie_S1_S2 (highlighted in pink), which takes 0.47 seconds per call.
The source code for this module is as follows:
import numpy as np
from numba import njit, int32, float64, complex128
__all__ = ('ez_mie',
           'ez_intensities',
           'generate_mie_costheta',
           'i_par',
           'i_per',
           'i_unpolarized',
           'mie',
           'mie_S1_S2',
           'mie_cdf',
           'mie_mu_with_uniform_cdf',
           )
@njit((complex128, float64, float64[:]), cache=True)
def _mie_S1_S2(m, x, mu):
    """
    Calculate the scattering amplitude functions for spheres.
    The amplitude functions have been normalized so that when integrated
    over all 4*pi solid angles, the integral will be qext*pi*x**2.
    The units are weird, sr**(-0.5)
    Args:
        m: the complex index of refraction of the sphere
        x: the size parameter of the sphere
        mu: array of angles, cos(theta), to calculate scattering amplitudes
    Returns:
        S1, S2: the scattering amplitudes at each angle mu [sr**(-0.5)]
    """
    nstop = int(x + 4.05 * x**0.33333 + 2.0) + 1
    a = np.zeros(nstop - 1, dtype=np.complex128)
    b = np.zeros(nstop - 1, dtype=np.complex128)
    _mie_An_Bn(m, x, a, b)
    nangles = len(mu)
    S1 = np.zeros(nangles, dtype=np.complex128)
    S2 = np.zeros(nangles, dtype=np.complex128)
    nstop = len(a)
    for k in range(nangles):
        pi_nm2 = 0
        pi_nm1 = 1
        for n in range(1, nstop):
            tau_nm1 = n * mu[k] * pi_nm1 - (n + 1) * pi_nm2
            S1[k] += (2 * n + 1) * (pi_nm1 * a[n - 1]
                                    + tau_nm1 * b[n - 1]) / (n + 1) / n
            S2[k] += (2 * n + 1) * (tau_nm1 * a[n - 1]
                                    + pi_nm1 * b[n - 1]) / (n + 1) / n
            temp = pi_nm1
            pi_nm1 = ((2 * n + 1) * mu[k] * pi_nm1 - (n + 1) * pi_nm2) / n
            pi_nm2 = temp
    # calculate norm = sqrt(pi * Qext * x**2)
    n = np.arange(1, nstop + 1)
    norm = np.sqrt(2 * np.pi * np.sum((2 * n + 1) * (a.real + b.real)))
    S1 /= norm
    S2 /= norm
    return [S1, S2]
The source code is jitted by Numba, so I would expect it to be faster than it actually is. The total number of inner-loop iterations in this function is around 25,000 (len(mu)=50, len(a)-1=500).
Any ideas on how to speed up this computation? Is something hindering the fast computation of Numba? Or, do you think the computation is already fast enough?
[More details]
In the above, another function _mie_An_Bn is being used. This function is also jitted, and the source code is as follows:
@njit((complex128, float64, complex128[:], complex128[:]), cache=True)
def _mie_An_Bn(m, x, a, b):
    """
    Compute arrays of Mie coefficients A and B for a sphere.
    This estimates the size of the arrays based on Wiscombe's formula. The length
    of the arrays is chosen so that the error when the series are summed is
    around 1e-6.
    Args:
        m: the complex index of refraction of the sphere
        x: the size parameter of the sphere
    Returns:
        An, Bn: arrays of Mie coefficients
    """
    psi_nm1 = np.sin(x)  # nm1 = n-1 = 0
    psi_n = psi_nm1 / x - np.cos(x)  # n = 1
    xi_nm1 = complex(psi_nm1, np.cos(x))
    xi_n = complex(psi_n, np.cos(x) / x + np.sin(x))
    nstop = len(a)
    if m.real > 0.0:
        D = _D_calc(m, x, nstop + 1)
        for n in range(1, nstop):
            temp = D[n] / m + n / x
            a[n - 1] = (temp * psi_n - psi_nm1) / (temp * xi_n - xi_nm1)
            temp = D[n] * m + n / x
            b[n - 1] = (temp * psi_n - psi_nm1) / (temp * xi_n - xi_nm1)
            xi = (2 * n + 1) * xi_n / x - xi_nm1
            xi_nm1 = xi_n
            xi_n = xi
            psi_nm1 = psi_n
            psi_n = xi_n.real
    else:
        for n in range(1, nstop):
            a[n - 1] = (n * psi_n / x - psi_nm1) / (n * xi_n / x - xi_nm1)
            b[n - 1] = psi_n / xi_n
            xi = (2 * n + 1) * xi_n / x - xi_nm1
            xi_nm1 = xi_n
            xi_n = xi
            psi_nm1 = psi_n
            psi_n = xi_n.real
The example inputs are like the following:
m = 1.336-2.462e-09j
x = 8526.95
mu = np.array([-1., -0.7500396, 0.46037385, 0.5988121, 0.67384093, 0.72468684, 0.76421644, 0.79175856, 0.81723714, 0.83962897, 0.85924182, 0.87641596, 0.89383665, 0.90708978, 0.91931481, 0.93067567, 0.94073113, 0.94961222, 0.95689496, 0.96467123, 0.97138347, 0.97791831, 0.98339434, 0.98870543, 0.99414948, 0.9975728, 0.9989995, 0.9989995, 0.9989995, 0.9989995, 0.9989995, 0.99899951, 0.99899951, 0.99899951, 0.99899951, 0.99899951, 0.99899951, 0.99899951, 0.99899951, 0.99899951, 0.99899952, 0.99899952,
0.99899952, 0.99899952, 0.99899952, 0.99899952, 0.99899952, 0.99899952, 0.99899952, 1. ])
I am focusing on _mie_S1_S2 since it appears to be the most expensive function on the provided example dataset.
First of all, you can pass the parameter fastmath=True to the JIT decorator to accelerate the computation, provided no values like +Inf, -Inf, -0 or NaN are computed.
Then you can pre-compute some expensive expressions containing divisions or implicit integer-to-float conversions. Note that (2 * n + 1) / n = 2 + 1/n and (n + 1) / n = 1 + 1/n. This can be useful to reduce the number of precomputed arrays, but it did not change the performance on my machine (this may differ depending on the target architecture). Note also that such a precomputation has a slight impact on the result accuracy (most of the time negligible, and sometimes better than the reference implementation).
On my machine, this strategy makes the code 4.5 times faster with fastmath=True and 2.8 times faster without.
The k-based loop can be parallelized using Numba's parallel=True and prange. However, this may not always be faster on all machines (especially ones with many cores) since the loop is already quite fast.
Here is the final code:
import numba as nb  # needed for nb.prange below

@njit((complex128, float64, float64[:]), cache=True, parallel=True)  # add fastmath=True for the extra speedup discussed above
def _mie_S1_S2_opt(m, x, mu):
    nstop = int(x + 4.05 * x**0.33333 + 2.0) + 1
    a = np.zeros(nstop - 1, dtype=np.complex128)
    b = np.zeros(nstop - 1, dtype=np.complex128)
    _mie_An_Bn(m, x, a, b)
    nangles = len(mu)
    S1 = np.zeros(nangles, dtype=np.complex128)
    S2 = np.zeros(nangles, dtype=np.complex128)
    # Pre-compute the n-dependent factors once, outside the hot loop
    factor1 = np.empty(nstop, dtype=np.float64)
    factor2 = np.empty(nstop, dtype=np.float64)
    factor3 = np.empty(nstop, dtype=np.float64)
    for n in range(1, nstop):
        factor1[n - 1] = (2 * n + 1) / (n + 1) / n
        factor2[n - 1] = (2 * n + 1) / n
        factor3[n - 1] = (n + 1) / n
    nstop = len(a)
    for k in nb.prange(nangles):
        pi_nm2 = 0
        pi_nm1 = 1
        for n in range(1, nstop):
            i = n - 1
            tau_nm1 = n * mu[k] * pi_nm1 - (n + 1.0) * pi_nm2
            S1[k] += factor1[i] * (pi_nm1 * a[i] + tau_nm1 * b[i])
            S2[k] += factor1[i] * (tau_nm1 * a[i] + pi_nm1 * b[i])
            temp = pi_nm1
            pi_nm1 = factor2[i] * mu[k] * pi_nm1 - factor3[i] * pi_nm2
            pi_nm2 = temp
    # calculate norm = sqrt(pi * Qext * x**2)
    n = np.arange(1, nstop + 1)
    norm = np.sqrt(2 * np.pi * np.sum((2 * n + 1) * (a.real + b.real)))
    S1 /= norm
    S2 /= norm
    return [S1, S2]
%timeit -n 1000 _mie_S1_S2_opt(m, x, mu)
On my machine with 6 cores, the final optimized implementation is 12 times faster with fastmath=True and 8.8 times faster without. Note that using similar strategies in the other functions may also help to speed them up.
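As a sanity check, one might compare the two implementations on the example inputs (hypothetical snippet; it assumes both _mie_S1_S2 and _mie_S1_S2_opt are defined and that m, x, mu are the values from the question):
import numpy as np

S1_ref, S2_ref = _mie_S1_S2(m, x, mu)
S1_opt, S2_opt = _mie_S1_S2_opt(m, x, mu)
# The optimized version should agree with the reference to within rounding
# (fastmath=True may change the last few bits).
print(np.allclose(S1_ref, S1_opt), np.allclose(S2_ref, S2_opt))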
I have come across a very strange problem where I do a lot of math and the result is inf or nan when my input is of type <class 'numpy.int64'>, but I get the correct (checked analytically) results when my input is of type <class 'int'>. The only library functions I use are np.math.factorial(), np.sum() and np.array(). I also use a generator object to sum over series, and the Boltzmann constant from scipy.constants.
My question is essentially this: are there any known cases where np.int64 objects behave very differently from int objects?
When I run with np.int64 input, I get the RuntimeWarnings overflow encountered in long_scalars, divide by zero encountered in double_scalars and invalid value encountered in double_scalars. However, the largest number I plug into the factorial function is 36, and I don't get these warnings when I use int input.
Below is code that reproduces the behaviour. I was unable to find out more exactly where it comes from.
import numpy as np
import scipy.constants as const

# Some representable numbers
sigma = np.array([1, 2])
sigma12 = 1.5
mole_weights = np.array([10, 15])
T = 100
M1, M2 = mole_weights / np.sum(mole_weights)
m0 = np.sum(mole_weights)
fac = np.math.factorial

def summation(start, stop, func, args=None):
    # sum the function func over all ints from start up to and including stop,
    # passing 'args' as additional arguments
    if args is not None:
        return sum(func(i, args) for i in range(start, stop + 1))
    else:
        return sum(func(i) for i in range(start, stop + 1))

def delta(i, j):
    # Kronecker delta
    if i == j:
        return 1
    else:
        return 0

def w(l, r):
    # l, r are ints, returns a float
    return 0.25 * (2 - ((1 / (l + 1)) * (1 + (-1) ** l))) * np.math.factorial(r + 1)

def omega(ij, l, r):
    # l, r are ints, ij is an ID, returns a float
    if ij in (1, 2):
        return sigma[ij - 1] ** 2 * np.sqrt(
            (np.pi * const.Boltzmann * T) / mole_weights[ij - 1]) * w(l, r)
    elif ij in (12, 21):
        return 0.5 * sigma12 ** 2 * np.sqrt(
            2 * np.pi * const.Boltzmann * T / (m0 * M1 * M2)) * w(l, r)
    else:
        raise ValueError('(' + str(ij) + ', ' + str(l) + ', ' + str(r) + ') are non-valid arguments for omega.')

def A_prime(p, q, r, l):
    '''
    p, q, r, l are ints. Returns a float.
    '''
    F = (M1 ** 2 + M2 ** 2) / (2 * M1 * M2)
    G = (M1 - M2) / M2

    def inner(w, args):
        i, k = args
        return ((8 ** i * fac(p + q - 2 * i - w) * (-1) ** (r + i) * fac(r + 1) * fac(
            2 * (p + q + 2 - i - w)) * 2 ** (2 * r) * F ** (i - k) * G ** w) /
            (fac(p - i - w) * fac(q - i - w) * fac(r - i) * fac(p + q + 1 - i - r - w) * fac(2 * r + 2) * fac(
                p + q + 2 - i - w)
             * 4 ** (p + q + 1) * fac(k) * fac(i - k) * fac(w))) * (
            2 ** (2 * w - 1) * M1 ** i * M2 ** (p + q - i - w)) * 2 * (
            M1 * (p + q + 1 - i - r - w) * delta(k, l) - M2 * (r - i) * delta(k, l - 1))

    def sum_w(k, i):
        return summation(0, min(p, q, p + q + 1 - r) - i, inner, args=(i, k))

    def sum_k(i):
        return summation(l - 1, min(l, i), sum_w, args=i)

    return summation(l - 1, min(p, q, r, p + q + 1 - r), sum_k)

def H_i(p, q):
    '''
    p, q are ints. Returns a float.
    '''
    def inner(r, l):
        return A_prime(p, q, r, l) * omega(12, l, r)

    def sum_r(l):
        return summation(l, p + q + 2 - l, inner, args=l)

    val = 8 * summation(1, min(p, q) + 1, sum_r)
    return val

p, q = np.int64(8), np.int64(8)
print(H_i(p, q))            # nan
print(H_i(int(p), int(q)))  # 1.3480582058153066e-08
Numpy's int64 is a 64-bit integer, meaning it consists of 64 bits that are each either 0 or 1. Thus the smallest representable value is -2**63 and the biggest one is 2**63 - 1.
Python's int is essentially unlimited in length, so it can represent any value. It is comparable to a BigInteger in Java: internally it is stored as a sequence of fixed-size digits that together are treated as one large number.
What you have here is a classic integer overflow. You mentioned that you "only" plug 36 into the factorial function, but the factorial function grows very fast: 36! is about 3.7e41, which is far bigger than 2**63 - 1 (about 9.2e18), so you get a number larger than can be represented in an int64!
Since int64s are also called longs, this is exactly what the warning overflow encountered in long_scalars is trying to tell you!
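To see the difference in isolation, here is a minimal sketch (the exact warning text varies between NumPy versions):
import numpy as np

big = 2 ** 62
print(big * 4)            # Python int: exact result, 18446744073709551616
print(np.int64(big) * 4)  # np.int64: wraps around past 2**63 - 1 and emits a RuntimeWarning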
I have two functions right now. The first is a simple recursive function that calculates exponents in n steps.
The second function, which is my primary problem, should do it in n/2 steps. I am confused about how to control the number of recursive calls, which is represented by n.
This is an assignment question: I am not allowed to use loops of any kind ("while" and "for" are not allowed), only if/then, so please go easy on me because I know it looks simple.
def simple_recursive_power(x, n):
    print("n=" + str(n))
    if n == 1:
        return x
    else:
        return x * simple_recursive_power(x, n - 1)

print("the simple recurse method=" + str(simple_recursive_power(3, 3)))
""the above works, the one below is working the wrong way""
def advanced_recursive_power(x, n):
    print("n=" + str(n))
    if n <= 1:
        return x
    else:
        return x * advanced_recursive_power(x, n - 1/2)

print("advanced recursion=" + str(advanced_recursive_power(3, 3)))
The better exponential function that takes half of the cycles does not just need an adjustment of N, it needs a better algorithm.
The simple exponent works like this: take N steps, at each step multiply what you have with X.
If you want to halve the number of steps, the crucial detail to notice is that, if you multiply with X*X, you are taking two steps at a time.
def advanced_recursive_power(x, n):
    print("n=" + str(n))
    if n == 0:
        return 1
    elif n == 1:
        return x
    else:
        return x * x * advanced_recursive_power(x, n - 2)
Now, this cuts down the number of function invocations, but not the number of multiplications: for example with N = 7, we went from X * X * X * X * X * X * X to X * (X * X) * (X * X) * (X * X). If we could just pre-calculate X * X, we could actually cut down on multiplications as well... This will calculate X2 = X * X once, then X * X2 * X2 * X2, with four multiplications instead of six:
def super_advanced_recursive_power(x, n):
    print("n=" + str(n))
    if n % 2 == 0:
        start = 1
    else:
        start = x
    return start * simple_recursive_power(x * x, n // 2)
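For example, assuming simple_recursive_power from above is in scope:
print(super_advanced_recursive_power(3, 7))   # 2187, i.e. 3 ** 7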
You can drastically cut down the number of steps if you pass on powers:
def arp(x, n):
    """Advanced recursive power equivalent to x ** n"""
    print('#', x, n)
    if n == 0:
        return 1
    elif n == 1:
        return x
    elif n % 2:  # odd exponential
        return x * arp(x * x, (n - 1) // 2)
    else:  # even exponential
        return arp(x * x, n // 2)
This takes O(log n) steps only.
>>> arp(3, 15)
# 3 15
# 9 7
# 81 3
# 6561 1
14348907
This is the equivalent of expressing addition as a series of decrements and increments:
def recursive_add(x, y):
    if y == 0:
        return x
    return recursive_add(x + 1, y - 1)
This uses the fact that x + y == (x + 1) + (y - 1). Similarly, for powers the relation x ** n == (x * x) ** (n / 2) holds true. While this trick is pretty slow for addition (the number of steps is linear in y), it is fast for powers, because the exponent shrinks geometrically instead of by one.
This exploits that even powers repeat terms. For example, 2**8 can be written as ((2 * 2) * (2 * 2)) * ((2 * 2) * (2 * 2)) - notice how the term 2 * 2 and (2 * 2) * (2 * 2) repeats. We can rewrite 2**8 as ((2 ** 2) ** 2) ** 2. This is exactly what the last term for even exponentials does recursively.
For odd exponentials, we have the problem that we would go from, say, 2 ** 3 to 4 ** 1.5. Thus, we use x ** n == x * (x ** (n - 1)) to go from an odd to an even exponent. Since we have excluded the case of n == 1, we know that n >= 3 and thus it is safe to proceed with x * x, (n-1) // 2 directly.
The net effect is that n is halved on each step, not just once.
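As a quick sanity check one might run (the for loops here are only test scaffolding, not part of the assignment's recursive solution; arp prints its intermediate steps as it goes):
for base in (2, 3, 5):
    for exp in range(12):
        assert arp(base, exp) == base ** exp
print("all powers match")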
Pardon me, but I could not find a better title. Please look at this super-simple Python program:
x = start = 1.0
target = 0.1
coeff = 0.999

for c in range(100000):
    print('{:5d} {:f}'.format(c, x))
    if abs(target - x) < abs((x - start) * 0.01):
        break
    x = x * coeff + target * (1 - coeff)
Brief explanation: this program moves x towards target by iteratively computing the weighted average of x and target, with coeff as the weight. It stops when the remaining difference drops below 1% of the initial difference.
The number of iterations remains the same no matter the initial values of x and target.
How can I set coeff in order to predict how many iterations will take place?
Thanks a lot.
Let's make this a function, f, where T stands for target and c for coeff.
f(0) is the initial value (start, in this case 1.0).
f(x) = f(x - 1) * c + T * (1 - c).
(So f(1) is the next value of x, f(2) is the one after that, and so on. We want to find the value of x where |T - f(x)| < 0.01 * |f(0) - f(x)|.)
So let's rewrite f(x) to be linear:
f(x) = f(x - 1) * c + T * (1 - c)
= (f(x - 2) * c + T * (1 - c)) * c + T * (1 - c)
= (f(x - 2) * c ** 2 + T * c * (1 - c)) + T * (1 - c)
= ((f(x - 3) * c + T * (1 - c)) * c ** 2 + T * c * (1 - c)) + T * (1 - c)
= f(x - 3) * c ** 3 + T * c ** 2 * (1 - c) + T * c * (1 - c) + T * (1 - c)
= f(0) * c ** x + T * c ** (x - 1) * (1 - c) + T * c ** (x - 2) * (1 - c) + ... + T * c * (1 - c) + T * (1 - c)
= f(0) * c ** x + (T * (1 - c)) [(sum r = 0 to x - 1) (c ** r)]
# Summation of a geometric series
= f(0) * c ** x + (T * (1 - c)) ((1 - c ** x) / (1 - c))
= f(0) * c ** x + T (1 - c ** x)
So, the nth value of x will be start * c ** n + target * (1 - c ** n).
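One can check this closed form against the loop from the question; a small sketch using the question's values:
start, target, coeff = 1.0, 0.1, 0.999

def closed_form(n):
    # nth value of x according to the formula derived above
    return start * coeff ** n + target * (1 - coeff ** n)

x = start
for n in range(5):
    print(n, x, closed_form(n))   # the two columns agree
    x = x * coeff + target * (1 - coeff)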
We want:
|T - f(x)| < 0.01 * |f(0) - f(x)|
|T - f(0) * c ** x - T (1 - c ** x)| < 0.01 * |f(0) - f(0) * c ** x - T (1 - c ** x)|
|(c ** x) * T - (c ** x) f(0)| < 0.01 * |(1 - c ** x) * f(0) - (1 - c ** x) * T|
(c ** x) * |T - f(0)| < 0.01 * (1 - c ** x) * |T - f(0)|
c ** x < 0.01 * (1 - c ** x)
c ** x < 0.01 - 0.01 * c ** x
1.01 * c ** x < 0.01
c ** x < 1 / 101
x < log (1 / 101) / log c
(I somehow ended up with x < when it should be x >, but it gives the correct answer. With c = 0.999, x > 4612.8, and it terminates on step 4613).
In the end, it is independent of start and target.
Also, for a general stopping fraction p (0.01 in your code),
c ** x < p * (1 - c ** x)
c ** x < p - p * c ** x
(1 + p) * c ** x < p
c ** x < p / (1 + p)
x > log(p / (1 + p)) / log c
So for a coefficient of c, the loop takes about log(1 / 101) / log c steps.
If you have the number of steps you want, call it I, you have
I = log_c(1 / 101)
c ** I = 1 / 101
c = (1 / 101) ** (1 / I)
So c should be set to the Ith root of 1 / 101.
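In code, a hypothetical helper based on this result might look like:
def coeff_for_steps(I, p=0.01):
    # Choose coeff so the loop stops after about I iterations,
    # where p is the stopping fraction (0.01 in the question).
    return (p / (1 + p)) ** (1.0 / I)

print(coeff_for_steps(4613))   # roughly 0.999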
Your code reduces the distance between x and the target by a factor of coeff in each execution of the loop. Thus, if start is greater than target, after c loops we have the formula
x - target = (start - target) * coeff ** c
where c is the number of loops we have done.
Your ending criterion is (again, if start is greater than target)
x - target < (start - x) * 0.01
Solving for x by algebra we get
x < (target + 0.01 * start) / (1 + 0.01)
Substituting our first expression into the ending criterion and simplifying a bit makes both start and target drop out of the inequality--now you see why those values did not matter--and we get
coeff ** c < 0.01 / (1 + 0.01)
Solving for c (and remembering that the logarithm to base coeff is decreasing, since coeff < 1) we get
c > log(0.01 / (1 + 0.01), coeff)
So the final answer for the number of loops is
ceil(log(0.01 / (1 + 0.01), coeff))
or alternatively, if you do not like logarithms to an arbitrary base,
ceil(log(0.01 / (1 + 0.01)) / log(coeff))
You could replace that first logarithm in that last expression with its result, but I left it that way to see what different result you would get if you change the constant in your end criterion away from 0.01.
The result of that expression in your particular case is
4613
which is correct. Note that both the ceil and log functions are in Python's math module, so remember to import them before doing that calculation. Also note that Python's floating-point calculations are not exact, so your actual number of loops may differ from that by one if you change the values of coeff or of 0.01.
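Putting that expression into code (a small sketch using Python's math module):
from math import ceil, log

coeff = 0.999
print(ceil(log(0.01 / (1 + 0.01)) / log(coeff)))   # 4613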
I am using Python 2 and the fairly simple method given in Wikipedia's article "Cubic function". This could also be a problem with the cube root function I have to define in order to create the function mentioned in the title.
# Cube root and cubic equation solver
#
# Copyright (c) 2013 user2330618
#
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, you can obtain one at http://www.mozilla.org/MPL/2.0/.

from __future__ import division
import cmath
from cmath import log, sqrt

def cbrt(x):
    """Computes the cube root of a number."""
    if x.imag != 0:
        return cmath.exp(log(x) / 3)
    else:
        if x < 0:
            d = (-x) ** (1 / 3)
            return -d
        elif x >= 0:
            return x ** (1 / 3)

def cubic(a, b, c, d):
    """Returns the real roots to cubic equations in expanded form."""
    # Define the discriminants
    D = (18 * a * b * c * d) - (4 * (b ** 3) * d) + ((b ** 2) * (c ** 2)) - \
        (4 * a * (c ** 3)) - (27 * (a ** 2) * d ** 2)
    D0 = (b ** 2) - (3 * a * c)
    i = 1j  # Because I prefer i over j
    # Test for some special cases
    if D == 0 and D0 == 0:
        return -(b / (3 * a))
    elif D == 0 and D0 != 0:
        return [((b * c) - (9 * a * d)) / (-2 * D0), ((b ** 3) - (4 * a * b * c)
                + (9 * (a ** 2) * d)) / (-a * D0)]
    else:
        D1 = (2 * (b ** 3)) - (9 * a * b * c) + (27 * (a ** 2) * d)
        # More special cases
        if D != 0 and D0 == 0 and D1 < 0:
            C = cbrt((D1 - sqrt((D1 ** 2) - (4 * (D0 ** 3)))) / 2)
        else:
            C = cbrt((D1 + sqrt((D1 ** 2) - (4 * (D0 ** 3)))) / 2)
        u_2 = (-1 + (i * sqrt(3))) / 2
        u_3 = (-1 - (i * sqrt(3))) / 2
        x_1 = (-(b + C + (D0 / C))) / (3 * a)
        x_2 = (-(b + (u_2 * C) + (D0 / (u_2 * C)))) / (3 * a)
        x_3 = (-(b + (u_3 * C) + (D0 / (u_3 * C)))) / (3 * a)
        if D > 0:
            return [x_1, x_2, x_3]
        else:
            return x_1
I've found that this function is capable of solving some simple cubic equations:
print cubic(1, 3, 3, 1)
-1.0
And a while ago I had gotten it to a point where it could solve equations with two roots. I've just done a rewrite and now it's gone haywire. For example, these coefficients are the expanded form of (2x - 4)(x + 4)(x + 2) and it should return [4.0, -4.0, -2.0] or something similar:
print cubic(2, 8, -8, -32)
[(-4+1.4802973661668753e-16j), (2+2.9605947323337506e-16j), (-2.0000000000000004-1.1842378929335002e-15j)]
Is this more a mathematical or a programming mistake I'm making?
Update: Thank you, everyone, for your answers, but there are more problems with this function than I've enumerated so far. For example, I often get an error relating to the cube root function:
print cubic(1, 2, 3, 4) # Correct solution: about -1.65
...
if x > 0:
TypeError: no ordering relation is defined for complex numbers
print cubic(1, -3, -3, -1) # Correct solution: about 3.8473
if x > 0:
TypeError: no ordering relation is defined for complex numbers
Wolfram Alpha confirms that the roots to your last cubic are indeed
(-4, -2, 2)
and not as you say
... it should return [4.0, -4.0, -2.0]
Notwithstanding that (I presume) typo, your program gives
[(-4+1.4802973661668753e-16j), (2+2.9605947323337506e-16j), (-2.0000000000000004-1.1842378929335002e-15j)]
which, to an accuracy of 10**(-15), are the same roots as the correct solution. The tiny imaginary part is, as others have said, probably due to rounding.
Note that you'll have to use exact arithmetic to always cancel correctly if you are using a solution like Cardano's. This is one of the reasons why programs like MAPLE or Mathematica exist; there is often a disconnect between the formula and the implementation.
To get only the real portion of a number in pure Python you access its .real attribute. Example:
a = 3.0+4.0j
print a.real
>> 3.0
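Building on that, a small sketch (the helper name and the 1e-12 tolerance are arbitrary choices, not from the original code) that keeps only the roots whose imaginary part is negligible:
def real_roots(candidates, tol=1e-12):
    # Keep only the real parts of the values whose imaginary part is negligible.
    return [r.real for r in candidates if abs(complex(r).imag) < tol]

print real_roots(cubic(2, 8, -8, -32))   # approximately [-4.0, 2.0, -2.0]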
Hooked's answer is the way to go if you want to do this numerically. You can also do it symbolically using sympy:
>>> from sympy import roots
>>> roots('2*x**3 + 8*x**2 - 8*x - 32')
{2: 1, -4: 1, -2: 1}
This gives you the roots and their multiplicity.
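If you then want plain floats, something like this works (a small sketch; roots() returns a dict mapping each root to its multiplicity):
>>> sorted(float(r) for r in roots('2*x**3 + 8*x**2 - 8*x - 32'))
[-4.0, -2.0, 2.0]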
You are using integer values, which are not automatically converted to floats by Python 2 when dividing.
The more generic solution would be to write the coefficients in the function as floats - 18.0 instead of 18, etc. That will do the trick.
An illustration - from the code:
>>> 2**(1/3)
1
>>> 2**(1/3.)
1.2599210498948732
>>>