Evaluating polynomials in modular arithmetic - Python

I need to repeatedly evaluate a polynomial of the form
f(x)=c(0)+c(1)*x+...+c(k-1)*x^(k-1) mod p
where k is an integer, p is a large prime number, and c(0),...,c(k-1) are between 1 and p.
For my application, k=10 and p should be greater than 1000.
I would prefer to do this in Python and as fast as possible. I don't know enough about modular arithmetic in Python to implement this efficiently (e.g. how to exploit Mersenne primes p=2^q-1, for which reduction mod p can be done with shifts and adds, or how to avoid trouble when adding integers of very different orders of magnitude, ...).
Motivation: k-independent hashing, see https://en.wikipedia.org/wiki/K-independent_hashing. This seems to be a very popular academic subject, but I was not able to find any implementations for k>2.

In general, you can compute the value of a polynomial using the following construction:
def value(poly, x):
    """Evaluates a polynomial POLY for a given x.

    The polynomial is expressed as a list of coefficients, with
    the coefficient for x ** N at poly[N].
    This means that x ** 2 + 2*x + 3 is expressed as [3, 2, 1].
    """
    v = 0
    # Bit messy, but we're basically walking our polynomial
    # coefficients from highest power to lowest
    for coeff in reversed(poly):
        v = v * x + coeff
    return v
To evaluate this modulo a value, we can simply change the inner loop to v = (v * x + coeff) % p (and pass our modulus as the parameter p). Note the parentheses: % binds more tightly than +, so without them only the coefficient would be reduced.
We can show that the example polynomial (x^2 + 2x + 3) is computed correctly by unwinding the loop: what we have is (((1) * x + 2) * x + 3) (each parenthesis level is one iteration through the loop), which simplifies to 1 * x * x + 2 * x + 3, clearly the expected polynomial.
By using this, we should never end up with an intermediate value larger than p * x.
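Putting this together for the k-independent-hashing use case from the question: below is a minimal sketch, assuming coefficients drawn uniformly at random; the factory name make_k_independent_hash is my own invention.

import random

def make_k_independent_hash(k, p):
    """Return one member of a k-independent hash family mod the prime p.

    Minimal sketch: random coefficients, plain % reduction (no
    Mersenne-prime shift tricks).
    """
    coeffs = [random.randrange(1, p) for _ in range(k)]

    def h(x):
        # Horner's rule, reducing mod p at every step so intermediate
        # values stay bounded as described above.
        v = 0
        for c in reversed(coeffs):
            v = (v * x + c) % p
        return v

    return h

h = make_k_independent_hash(k=10, p=1009)  # a prime > 1000, as in the question
print(h(42))

If p really is a Mersenne prime 2^q - 1, the % p could in principle be replaced by a shift-and-add reduction, but for p around 1000 the plain % is already fast.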


Finding roots of a non-linear expression when multiplied with a linear expression

Here is a simple polynomial equation:
b^2 + 2b + 1 = 0
I could easily solve this as:
import numpy as np
from scipy.optimize import fsolve
eq = lambda b : np.power(b,2) + 2*b + 1
fsolve(eq, np.linspace(0,1,2))
Similarly I could solve any equation that has a finite number of terms. But how do I solve an equation with an infinite number of terms, which is given as:
The above equation could be written as:
5 = (1 - l) * (5.5 + 4.0*l + 4*l^2 + 6*l^3 + 5*l^4 + 5*l^5 + 5*l^6 + 5*l^7 + 5*l^8 + 5*l^9 + 5*l^10 )
when n goes from 1 to 10. But I want to solve this for a sufficiently large value of n, such that LHS ~= RHS.
I know the values of the LHS and of G1 through Ginf, but cannot figure out how to compute the value of lambda here.
I tried looking at numpy polynomial functions but could not find a function that is relevant here.
The following glosses over the fact that I do not 100% understand the coefficient notation G_t:t+n (what kind of dependency is that supposed to indicate exactly?)
Obviously, the solution will depend on the coefficients. If, as your example suggests, the coefficients are all equal above some index n_0, then your r.h.s. expression is a telescoping sum and equal to G_t:1 + sum_{n=1}^{n_0} [G_t:n - G_t:n+1] * l^n. Be sure to note that this sum is finite, so you know how to proceed from here.
One caveat: you must have |l| < 1 otherwise the series does not converge and the r.h.s. is undefined, although some kind of continuation argument may be possible.
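As a concrete illustration (my own addition, not part of the original answers): solving the question's degree-10 truncation numerically with fsolve, with the coefficients copied from the example above and the starting guess placed inside |l| < 1, where the series converges.

import numpy as np
from scipy.optimize import fsolve

# Degree-10 truncation from the question:
# 5 = (1 - l) * (5.5 + 4.0*l + 4*l**2 + 6*l**3 + 5*l**4 + ... + 5*l**10)
coeffs = [5.5, 4.0, 4.0, 6.0] + [5.0] * 7

def residual(l):
    series = sum(c * l**i for i, c in enumerate(coeffs))
    return (1 - l) * series - 5

root = fsolve(residual, x0=0.5)  # start inside |l| < 1
print(root)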

Theoretical vs actual time-complexity for algorithm calculating 2^n

I am trying to compute the time-complexity and compare it with the actual computation times.
If I am not mistaken, the time-complexity is O(log(n)), but looking at the actual computation times it looks more like O(n) or even O(nlog(n)).
What could be reason for this difference?
def pow(n):
    """Return 2**n, where n is a nonnegative integer."""
    if n == 0:
        return 1
    x = pow(n//2)
    if n % 2 == 0:
        return x*x
    return 2*x*x
Theoretical time-complexity: [derivation posted as an image in the original question]
Actual run times: [timing table posted as an image in the original question]
I suspected your time calculation was not accurate, so I redid it using timeit; here are my stats:
import timeit
# N
sx = [10, 100, 1000, 10e4, 10e5, 5e5, 10e6, 2e6, 5e6]
# average runtime in seconds
sy = [timeit.timeit('pow(%d)' % i, number=100, globals=globals()) for i in sx]
Update:
Well, the code did run in O(n*log(n))...! A possible explanation is that multiplication/division is not O(1) for large numbers, so this part doesn't hold:
T(n) = 1 + T(n//2)
     = 1 + 1 + T(n//4)
       ^   ^
      mul div    # each "1" stands for one multiply/divide,
                 # and neither stays O(1) when n is large
Experiment with multiplication and division:
mul = lambda x: x*x
div = lambda x: x//2
s1 = [timeit.timeit('mul(%d)' % i, number=1000, globals=globals()) for i in sx]
s2 = [timeit.timeit('div(%d)' % i, number=1000, globals=globals()) for i in sx]
The plots look the same for mul and div: they are not O(1). Small integers seem to be handled more efficiently, but there is no big difference for large integers, so I don't know what else the cause could be. (I'll keep the answer here in case it helps, though.)
The number of iterations will be log(n,2) but each iteration needs to perform a multiplication between two numbers that are twice as large as the preceding iteration's.
The best multiplication algorithms for variable precision numbers perform in O(N * log(N) * log(log(N))) or O(N^log(3)) where N is the number of digits (bits or words) needed to represent the number. It would seem that the two complexities combine to produce execution times that are larger than O(log(n)) in practice.
The digit count of the two numbers at each iteration is 2^i, so the total time is the sum of the multiplication (x*x) complexities for the numbers going through the log(n) iterations.
To compute the function's time complexity based on the Schönhage–Strassen multiplication algorithm, we would need to add up the time complexity of each iteration using O(N * log(N) * log(log(N))):
∑ 2^i * log(2^i) * log(log(2^i)) [i = 0...log(n)]
∑ 2^i * i * log(i) [i = 0...log(n)]
which would be quite complex, so let's look at a simpler scenario.
If Python's variable precision multiplications used the most naive O(N^2) algorithm, the worst case time could be expressed as:
∑ (2^i)^2 [i = 0...log(n)]
∑ 4^i [i = 0...log(n)]
(4^(log(n)+1)-1)/3 # because ∑K^i [i=0..n] = (K^(n+1)-1)/(K-1)
( 4*4^log(n) - 1 ) / 3
( 4*(2^log(n))^2 - 1 ) / 3
(4*n^2-1)/3 # 2^log(n) = n
(4/3)*n^2-1/3
This would be O(n^2), which suggests that the log(n) iteration time cancels itself out in favour of the multiplication's complexity profile.
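As a quick sanity check of that closed form (my own addition, assuming base-2 logs and n a power of two, as in the derivation above):

# Check that sum(4**i, i = 0..log2(n)) == (4*n**2 - 1) / 3
for n in [2, 8, 64, 1024]:
    log_n = n.bit_length() - 1          # log2(n) for powers of two
    lhs = sum(4**i for i in range(log_n + 1))
    rhs = (4 * n**2 - 1) // 3
    assert lhs == rhs, (n, lhs, rhs)
print("closed form checks out")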
We get the same result if we apply this reasoning to the Karatsuba multiplication algorithm: O(N^log(3)):
∑ (2^i)^log(3) [i=0..log(n)]
∑ (2^log(3))^i [i=0..log(n)]
∑ 3^i [i=0..log(n)]
( 3^(log(n)+1) - 1 ) / 2 # because ∑K^i [i=0..n] = (K^(n+1)-1)/(K-1)
( 3*3^log(n) - 1 ) / 2
( 3*(2^log(3))^log(n) - 1 ) / 2
( 3*(2^log(n))^log(3) - 1 ) / 2
(3/2)*n^log(3) - 1/2
which corresponds to O(n^log(3)) and corroborates the theory.
Note that the last column of your measurement table is misleading because you make n progress exponentially. This changes the meaning of t[i]/t[i-1] and its interpretation for evaluating the time complexity. It would be more meaningful if the progression from N[i-1] to N[i] were linear.
Taking the N[i]/N[i-1] ratio into account in the calculation, I found that the results seem to correlate more with O(n^log(3)), which would suggest that Python uses Karatsuba for large integer multiplications (for version 3.7.1 on macOS). However, this correlation is very weak.
FINAL ANSWER: O(log(N))
After doing more tests, I realized that there are wild variations in the time taken to multiply large numbers. Sometimes larger numbers take considerably less time than smaller ones. This makes the timing figures suspect and correlation to a time complexity based on a small and irregular sample is not going to be conclusive.
With a larger and more evenly distributed sample, the time strongly correlates (0.99) with log(N). This would mean the differences introduced by multiplication overhead only impact fixed points in the value range. Intentionally selecting values of N that are orders of magnitude apart exacerbated the impact of these fixed points thus skewing the results.
So you can ignore all the nice theories I wrote above: the data shows that the time complexity is indeed O(log(n)). You just have to use a more meaningful sample (and better rate-of-change calculations).
It's because multiplying two small numbers is O(1), but multiplying two long numbers of magnitude N is not; with the schoolbook algorithm it costs on the order of O(log(N)^2): https://en.wikipedia.org/wiki/Multiplication_algorithm
So the cost of each step grows, and the total is no longer O(log(N)).
This can get complex, and there are different cases you will have to examine for different values of n, since the function is recursive. The master theorem should explain it: https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)
You have to consider the true input size of the function. It's not the magnitude of n, but the number of bits needed to represent n, which is logarithmic in the magnitude. That is, dividing a number by 2 doesn't cut the input size in half: it only reduces it by 1 bit. This means that for an n-bit number (whose value is between 2^n and 2^(n+1)), the running time is indeed logarithmic in magnitude, but linear in the number of bits.
n        ln n               bits to represent n
------------------------------------------------
10       between 2 and 3     4 (1010)
100      between 4 and 5     7 (1100100)
1000     just under 7       10 (1111101000)
10000    between 9 and 10   14 (10011100010000)
Each time you multiply n by 10, you only increase the input size by 3-4 bits, not by a factor of 10.
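The table is easy to reproduce with int.bit_length (a small illustration of the point, not from the original answer):

for n in [10, 100, 1000, 10000]:
    print(n, n.bit_length(), bin(n)[2:])  # e.g. 10 -> 4 bits -> 1010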
For large enough integer values Python will internally switch to a "long representation", and in your case this happens somewhere after n=63, so your theoretical time complexity should be correct only for values of n < 63.
For the "long representation", multiplying two numbers (x * y) has complexity bigger than O(1):
for x == y (e.g. x*x) the complexity is around O(Py_SIZE(x)² / 2);
for x != y (e.g. 2*x) multiplication is performed like "schoolbook long multiplication", so the complexity will be O(Py_SIZE(x)*Py_SIZE(y)). In your case this might affect performance a little too, because 2*x*x will do (2*x)*x, while the faster way would be 2*(x*x).
And so for n >= 63 the theoretical complexity must also account for the complexity of multiplications.
It's possible to measure the "pure" complexity of the custom pow (ignoring the complexity of multiplication) if you can reduce the cost of each multiplication to O(1). For example:
SQUARE_CACHE = {}
HALFS_CACHE = {}

def square_and_double(x, do_double=False):
    key = hash((x, do_double))
    if key not in SQUARE_CACHE:
        if do_double:
            SQUARE_CACHE[key] = 2 * square_and_double(x, False)
        else:
            SQUARE_CACHE[key] = x*x
    return SQUARE_CACHE[key]

def half_and_remainder(x):
    key = hash(x)
    if key not in HALFS_CACHE:
        HALFS_CACHE[key] = divmod(x, 2)
    return HALFS_CACHE[key]

def pow(n):
    """Return 2**n, where n is a non-negative integer."""
    if n == 0:
        return 1
    x = pow(n//2)
    return square_and_double(x, do_double=bool(n % 2 != 0))

def pow_alt(n):
    """Return 2**n, where n is a non-negative integer."""
    if n == 0:
        return 1
    half_n, remainder = half_and_remainder(n)
    x = pow_alt(half_n)
    return square_and_double(x, do_double=bool(remainder != 0))
import timeit
import math

# Values of n:
sx = sorted([int(x) for x in [100, 1000, 10e4, 10e5, 5e5, 10e6, 2e6, 5e6, 10e7, 10e8, 10e9]])

# Fill the caches of `square_and_double` and `half_and_remainder` so that
# both `x*x` and `divmod(x, 2)` are O(1) lookups during timing:
[pow_alt(n) for n in sx]

# Average runtime in ms:
sy = [timeit.timeit('pow_alt(%d)' % n, number=500, globals=globals())*1000 for n in sx]

# Theoretical values:
base = 2
sy_theory = [sy[0]]
t0 = sy[0] / math.log(sx[0], base)
sy_theory.extend([t0*math.log(x, base) for x in sx[1:]])

print("real timings:")
print(sy)
print("\ntheory timings:")
print(sy_theory)

print('\n\nt/t_prev:')
print("real:")
print(['--' if i == 0 else "%.2f" % (sy[i]/sy[i-1]) for i in range(len(sy))])
print("\ntheory:")
print(['--' if i == 0 else "%.2f" % (sy_theory[i]/sy_theory[i-1]) for i in range(len(sy_theory))])
# OUTPUT:
real timings:
[1.7171500003314577, 2.515988002414815, 4.5264500004122965, 4.929114998958539, 5.251838003459852, 5.606903003354091, 6.680275000690017, 6.948587004444562, 7.609975000377744, 8.97067000187235, 16.48820400441764]
theory timings:
[1.7171500003314577, 2.5757250004971866, 4.292875000828644, 4.892993172417281, 5.151450000994373, 5.409906829571465, 5.751568172583011, 6.010025001160103, 6.868600001325832, 7.727175001491561, 8.585750001657289]
t/t_prev:
real:
['--', '1.47', '1.80', '1.09', '1.07', '1.07', '1.19', '1.04', '1.10', '1.18', '1.84']
theory:
['--', '1.50', '1.67', '1.14', '1.05', '1.05', '1.06', '1.04', '1.14', '1.12', '1.11']
The results are still not perfect, but they are close to the theoretical O(log(n)).
You can generate textbook-like results if you count what textbooks count: the steps taken:
def pow(n):
    """Return 2**n, where n is a nonnegative integer."""
    global calls
    calls += 1
    if n == 0:
        return 1
    x = pow(n//2)
    if n % 2 == 0:
        return x*x
    return 2*x*x

def steppow(n):
    global calls
    calls = 0
    pow(n)
    return calls

sx = [math.pow(10, n) for n in range(1, 11)]
sy = [steppow(n)/math.log(n) for n in sx]
print(sy)
Then it produces something like this:
[2.1714724095162588, 1.737177927613007, 1.5924131003119235, 1.6286043071371943, 1.5634601348517065, 1.5200306866613815, 1.5510517210830421, 1.5200306866613813, 1.4959032154445342, 1.5200306866613813]
Where the 1.52... appears to be some kind of favourite. That is expected: the call count is floor(log2(n)) + 2, so dividing by ln(n) gives e.g. 35/ln(10^10) ≈ 1.52 for n = 10^10, and the ratio tends towards 1/ln(2) ≈ 1.44 as n grows.
But the actual runtime also includes the seemingly innocent mathematical operations, which grow in complexity as the number physically grows in memory. CPython uses a number of multiplication implementations, branching at various points:
long_mul is the entry point:
if (Py_ABS(Py_SIZE(a)) <= 1 && Py_ABS(Py_SIZE(b)) <= 1) {
    stwodigits v = (stwodigits)(MEDIUM_VALUE(a)) * MEDIUM_VALUE(b);
    return PyLong_FromLongLong((long long)v);
}
z = k_mul(a, b);
If the numbers fit into a CPU word, they get multiplied in place (but the result may be larger, hence the long long (*)); otherwise they go to k_mul(), which stands for Karatsuba multiplication and which also checks a couple of things based on size and value:
i = a == b ? KARATSUBA_SQUARE_CUTOFF : KARATSUBA_CUTOFF;
if (asize <= i) {
    if (asize == 0)
        return (PyLongObject *)PyLong_FromLong(0);
    else
        return x_mul(a, b);
}
For shorter numbers a classic algorithm, x_mul(), is used, and the shortness check also depends on whether the product is a square, because x_mul() has an optimized code path for calculating x*x-like expressions. Above a certain in-memory size the algorithm stays with Karatsuba, but first there is one more check on how different the magnitudes of the two values are:
if (2 * asize <= bsize)
return k_lopsided_mul(a, b);
possibly branching to yet another algorithm, k_lopsided_mul(), which is still Karatsuba, but optimized for multiplying numbers with significant difference in magnitude.
In short, even the 2*x*x has significance: if you replace it with x*x*2, the timeit results differ:
2*x*x: [0.00020009249478223623, 0.0002965123323532072, 0.00034258906889154733, 0.0024181753953639975, 0.03395215528201522, 0.4794894526936972, 4.802882867816082]
x*x*2: [0.00014974939375012042, 0.00020265231347948998, 0.00034002925019471775, 0.0024501731290706985, 0.03400164511014836, 0.462764023966729, 4.841786565730171]
(measured as
sx = [math.pow(10,n) for n in range(1,8)]
sy = [timeit.timeit('pow(%d)' % i, number=100, globals=globals()) for i in sx]
)
(*) By the way, as the size of the result is often overestimated (for example, long*long may or may not fit into a long afterwards), there is also a long_normalize function, which at the end spends time freeing the extra memory (see the comment above it) and sets the correct size on the internal object; that involves a loop counting the zeroes in front of the actual number.

Python program to multiply two polynomials where each term of a polynomial is represented as a pair of integers (coefficient, exponent)?

The function takes two lists (having tuples as values) as input.
I have the following algorithm in mind, but I am struggling to write it properly:
first, make the required number of dictionaries, in which each coefficient of p1 is multiplied with all the coefficients of polynomial p2;
then add together all the dictionary coefficients that have the same power.
def multpoly(p1, p2):
    dp1 = dict(map(reversed, p1))
    dp2 = dict(map(reversed, p2))
    kdp1 = list(dp1.keys())
    kdp2 = list(dp2.keys())
    rslt = {}
    if len(kdp1) >= len(kdp2):
        kd1 = kdp1
        kd2 = kdp2
    elif len(kdp1) < len(kdp2):
        kd1 = kdp2
        kd2 = kdp1
    for n in kd2:
        for m in kd1:
            rslt[n] = {m: 0}
            if len(dp1) <= len(dp2):
                rslt[n][m+n] = rslt[n][m+n] + dp1[n]*dp2[m]
            elif len(dp1) > len(dp2):
                rslt[n][m+n] = rslt[n][m+n] + dp2[n]*dp1[m]
    return rslt
If I understand correctly, you want a function to multiply two polynomials and return the result. In the future, try and post a specific question. Here is code that will work for you:
def multiply_terms(term_1, term_2):
    new_c = term_1[0] * term_2[0]
    new_e = term_1[1] + term_2[1]
    return (new_c, new_e)

def multpoly(p1, p2):
    """
    :params p1, p2: lists of tuples, where each tuple is a pair of
    term coefficient and exponent
    """
    # multiply terms
    result_poly = []
    for term_1 in p1:
        for term_2 in p2:
            result_poly.append(multiply_terms(term_1, term_2))
    # collect like terms (a set ensures each exponent is collected once)
    collected_terms = []
    exps = {term[1] for term in result_poly}
    for e in exps:
        count = 0
        for term in result_poly:
            if term[1] == e:
                count += term[0]
        collected_terms.append((count, e))
    return collected_terms
Note, however, that there are definitely much better ways to represent these polynomials so that the multiplication is faster and easier to code. Your idea with the dict is slightly better, but still messy. You could use a list where the index represents the exponent and the value represents the coefficient; for example, you could represent 2x^4 + 3x + 1 as [1, 3, 0, 0, 2]. A sketch of that approach follows.
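Here is a minimal sketch of multiplication in that dense-list representation (my own code; the name multpoly_dense is illustrative):

def multpoly_dense(p1, p2):
    """Multiply polynomials given as dense coefficient lists,
    where index == exponent (2x^4 + 3x + 1 is [1, 3, 0, 0, 2])."""
    result = [0] * (len(p1) + len(p2) - 1)
    for i, a in enumerate(p1):
        for j, b in enumerate(p2):
            result[i + j] += a * b
    return result

print(multpoly_dense([1, 1], [2, 1]))  # (1 + x)(2 + x) -> [2, 3, 1]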

Sum of powers for lists of tuples

My assignment is to create a function to sum the powers of tuples.
def sumOfPowers(tups, primes):
    x = 0
    for i in range(1, len(primes) + 1):
        x += pow(tups, i)
    return x
So far I have this.
tups - list of one or more tuples, primes - list of one or more primes
It doesn't work because the inputs are tuples and not single integers. How could I fix this to make it work for lists?
Sample output:
sumOfPowers([(2,3), (5,6)], [3,5,7,11,13,17,19,23,29]) == 2**3 + 5**6
True
sumOfPowers([(2,10**1000000 + 1), (-2,10**1000000 + 1), (3,3)], primes)
27
Sum of powers of [(2,4),(3,5),(-6,3)] is 2^4 + 3^5 + (−6)^3
The purpose of the primes is to perform the computation of a^k1 + ... + a^kn modulo every prime in the list entered (i.e. perform the sum computation specified by the first input modulo each of the primes in the second input list, then solve using the Chinese remainder theorem).
Primes list used in the example input:
15481619,15481633,15481657,15481663,15481727,15481733,15481769,15481787
,15481793,15481801,15481819,15481859,15481871,15481897,15481901,15481933
,15481981,15481993,15481997,15482011,15482023,15482029,15482119,15482123
,15482149,15482153,15482161,15482167,15482177,15482219,15482231,15482263
,15482309,15482323,15482329,15482333,15482347,15482371,15482377,15482387
,15482419,15482431,15482437,15482447,15482449,15482459,15482477,15482479
,15482531,15482567,15482569,15482573,15482581,15482627,15482633,15482639
,15482669,15482681,15482683,15482711,15482729,15482743,15482771,15482773
,15482783,15482807,15482809,15482827,15482851,15482861,15482893,15482911
,15482917,15482923,15482941,15482947,15482977,15482993,15483023,15483029
,15483067,15483077,15483079,15483089,15483101,15483103,15483121,15483151
,15483161,15483211,15483253,15483317,15483331,15483337,15483343,15483359
,15483383,15483409,15483449,15483491,15483493,15483511,15483521,15483553
,15483557,15483571,15483581,15483619,15483631,15483641,15483653,15483659
,15483683,15483697,15483701,15483703,15483707,15483731,15483737,15483749
,15483799,15483817,15483829,15483833,15483857,15483869,15483907,15483971
,15483977,15483983,15483989,15483997,15484033,15484039,15484061,15484087
,15484099,15484123,15484141,15484153,15484187,15484199,15484201,15484211
,15484219,15484223,15484243,15484247,15484279,15484333,15484363,15484387
,15484393,15484409,15484421,15484453,15484457,15484459,15484471,15484489
,15484517,15484519,15484549,15484559,15484591,15484627,15484631,15484643
,15484661,15484697,15484709,15484723,15484769,15484771,15484783,15484817
,15484823,15484873,15484877,15484879,15484901,15484919,15484939,15484951
,15484961,15484999,15485039,15485053,15485059,15485077,15485083,15485143
,15485161,15485179,15485191,15485221,15485243,15485251,15485257,15485273
,15485287,15485291,15485293,15485299,15485311,15485321,15485339,15485341
,15485357,15485363,15485383,15485389,15485401,15485411,15485429,15485441
,15485447,15485471,15485473,15485497,15485537,15485539,15485543,15485549
,15485557,15485567,15485581,15485609,15485611,15485621,15485651,15485653
,15485669,15485677,15485689,15485711,15485737,15485747,15485761,15485773
,15485783,15485801,15485807,15485837,15485843,15485849,15485857,15485863
I am not quite sure if I understand you correctly, but maybe you are looking for something like this:
from functools import reduce

def sumOfPowersModuloPrimes(tups, primes):
    return [reduce(lambda x, y: (x + y) % p, (pow(b, e, p) for b, e in tups), 0)
            for p in primes]
You shouldn't run into any memory issues as your (intermediate) values never exceed max(primes). If your resulting list is too large, then return a generator and work with it instead of a list.
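For example, checking it against the question's first sample (using the first few primes from the list above):

primes = [15481619, 15481633, 15481657]
residues = sumOfPowersModuloPrimes([(2, 3), (5, 6)], primes)
print(residues)  # one residue per prime
print(all(r == (2**3 + 5**6) % p for r, p in zip(residues, primes)))  # True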
Ignoring primes, since they don't appear to be used for anything:
def sumOfPowers(tups, primes):
    return sum(pow(x, y) for x, y in tups)
Is it possible that you are supposed to compute the sum modulo one or more of the prime numbers? Something like
2**3 + 5**2 mod 3 = 8 + 25 mod 3 = 33 mod 3 = 0
(where a+b mod c means to take the remainder of the sum a+b after dividing by c).
One guess at how multiple primes would be used is to use the product of the primes as the
divisor.
def sumOfPower(tups, primes):
    # There are better ways to compute this product. The loop
    # is for explanatory purposes only.
    c = 1
    for p in primes:
        c *= p
    return sum(pow(x, y, c) for x, y in tups)
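On Python 3.8+ the product loop can be written with math.prod; I've also reduced the final sum so the result itself stays below the modulus (my own variant, assuming that is the desired behaviour):

import math

def sumOfPower(tups, primes):
    modulus = math.prod(primes)  # same as the loop above
    return sum(pow(x, y, modulus) for x, y in tups) % modulus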
(I also seem to remember a relationship between a mod pq and the pair (a mod p, a mod q) when p and q are distinct primes; that is the Chinese remainder theorem. The simpler identity a mod pq == (a mod p) mod q does not hold in general, though.)
Another is to return one sum for each prime:
def sumOfPower(tups, primes):
    return [sum(pow(x, y, c) for x, y in tups) for c in primes]
def sumOfPowers(powerPairs, unusedPrimesParameter):
    total = 0
    for base, exponent in powerPairs:
        total += base ** exponent
    return total
Or, shorter:
def sumOfPowers(powerPairs, unusedPrimesParameter):
    return sum(base ** exponent for base, exponent in powerPairs)
perform the sum computation specified by each input modulo each of the primes in the second input list
That's a completely different thing. However, you still haven't really explained what your function is supposed to do and how it should work. Given that you mentioned Euler's theorem and the Chinese remainder theorem, I guess there is a lot more to it than you have let on so far. You probably want to reduce those large powers using Euler's theorem. I'm not willing to guess further at what is going on, though; this seems to involve a non-trivial math problem you should solve on paper first.
def sumOfPowers(powerPairs, primes):
    for prime in primes:
        total = 0
        for base, exponent in powerPairs:
            total += pow(base, exponent, prime)
        # do something with the sum here
        # Chinese remainder theorem?
    return something
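Since the Chinese remainder theorem is only hinted at above, here is a minimal sketch of how the per-prime sums could be recombined (the helper crt is my own; it assumes the moduli are distinct primes, hence pairwise coprime, and pow(x, -1, m) needs Python 3.8+):

def crt(residues, moduli):
    # Chinese remainder theorem for pairwise-coprime moduli.
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m) is the modular inverse
    return x % M

# Recover 2**3 + 5**6 from its residues modulo two primes:
primes = [15481619, 15481633]
residues = [(2**3 + 5**6) % p for p in primes]
print(crt(residues, primes))  # 15633 == 2**3 + 5**6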

How can I create functions that handle polynomials?

I have these problems about polynomials and I've spent about 4 hours on this, but I just can't get it. I'm new to Python and programming and I've tried working it out on paper, but I just don't know.
Write and test a Python function negate(p) that negates the polynomial represented by the list of its coefficients p and returns a new polynomial (represented as a list). In other words, write a function that makes the list of numbers negative.
Write a Python function eval_polynomial(p, x) that returns the value of P(x), where P is the polynomial represented by the list of its coefficients p. For example, eval_polynomial([1, 0, 3], 2) should return 1*2^2 + 0*2 + 3 = 7. Use a single while loop.
Write and test a function multiply_by_one_term(p, a, k) that multiplies a given polynomial p, represented by a list of coefficients, by ax^k and returns the product as a new list.
I would really appreciate it if someone could help me.
I'd recommend using numpy.poly1d and numpy.polymul, where the coefficients are ordered as a0*x**2 + a1*x + a2.
For example, to represent 3*x**2 + 2*x + 1:
p1 = numpy.poly1d([3,2,1])
And with the resulting poly1d object you can operate using *, / and so on...:
print(p1*p1)
#    4      3      2
# 9 x + 12 x + 10 x + 4 x + 1
If you want to build your own functions, assuming that p contains the coefficients in order: a0 + a1*x + a2*x**2 + ...:
def eval_polynomial(p, x):
    return sum(a*x**i for i, a in enumerate(p))

def multiply_by_one_term(p, a, k):
    return [0]*k + [a*i for i in p]
Note: my evaluate function uses exponentials, which can be avoided with Horner's rule, as posted in another answer; Horner's rule is also what Numpy's polyval function uses.
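For reference, a quick polyval check (note that np.polyval expects the highest-order coefficient first, matching the eval_polynomial([1, 0, 3], 2) example from the question):

import numpy as np

# np.polyval evaluates with Horner's scheme, highest power first:
print(np.polyval([1, 0, 3], 2))  # 1*2**2 + 0*2 + 3 == 7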
Please use Horner's Method instead!
For polynomials, you should consider Horner's Method. Its main feature is that computing a polynomial of order N requires only N multiplies and N additions -- no exponentials:
def eval_polynomial(P, x):
    """
    Compute polynomial P(x) where P is a vector of coefficients, highest
    order coefficient at P[0]. Uses Horner's Method.
    """
    result = 0
    for coeff in P:
        result = x * result + coeff
    return result
>>> eval_polynomial([1, 0, 3], 2)
7
You can work through it by hand, or follow the link to see how it works.
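For instance, tracing eval_polynomial([1, 0, 3], 2) step by step:

result = 0
result = 2 * 0 + 1  ->  1   (coefficient of x**2)
result = 2 * 1 + 0  ->  2   (coefficient of x)
result = 2 * 2 + 3  ->  7   (constant term)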
