How to work with polynomials over Galois fields in SymPy

How are Galois fields represented in SymPy? I couldn't find any documentation for this online, but SymPy contains a module called "galoistools", so I thought I should give it a try. I tried the following experiment:
from sympy import *
x = symbols("x")
A = [LC(Poly(i*x, modulus=8) * Poly(j*x, modulus=8)) for i in range(1, 8) for j in range(1, i+1)]
B = [LC(Poly(i*x, domain=GF(8)) * Poly(j*x, domain=GF(8))) for i in range(1, 8) for j in range(1, i+1)]
However, the resulting lists A and B are identical, so I'm obviously misunderstanding how this is supposed to be used. I'm trying to work in GF(8), i.e. GF(2^3), which is not the same as computing modulo 8.

At present SymPy does not have support for finite fields other than Z/pZ. The existing class GF(n) is misleadingly named; it actually implements Z/nZ as you observed.
However, using the low-level routines in galoistools module one can create a class for general finite fields GF(p^n) and for polynomials over such a field: see this answer where these classes are implemented (for the purpose of computing an interpolating polynomial, but they can be used for other things too). This is just a minimal class; it does not interface with advanced polynomial manipulation methods that are implemented in SymPy.
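For a taste of what those low-level routines look like, here is a minimal sketch of multiplication in GF(8), representing elements as coefficient lists over GF(2) and reducing modulo an irreducible cubic (the choice of x**3 + x + 1 is mine, for illustration):
from sympy.polys.domains import ZZ
from sympy.polys.galoistools import gf_mul, gf_rem
p = 2                   # characteristic
irred = [1, 0, 1, 1]    # x**3 + x + 1, irreducible over GF(2)
a = [1, 1, 0]           # the element x**2 + x
b = [1, 0, 1]           # the element x**2 + 1
# multiply in GF(2)[x], then reduce modulo the irreducible polynomial
product = gf_rem(gf_mul(a, b, p, ZZ), irred, p, ZZ)
print(product)          # [1, 1], i.e. x + 1
The galoistools module also provides gf_add, gf_pow_mod and friends for the remaining field operations.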

Related

Reducing multiple inequalities to one in python

I have two equalities and an inequality relationship between them. I would like to simplify those to a single inequality (I do not need to solve the unknowns). Here is my code:
import numpy as np
from sympy.abc import x, y
from sympy import reduce_inequalities
eq1 = np.multiply(array_1[0], (1-delta))+delta*Q[0]*[x,y]
eq2 = np.multiply(array_1[1], (1-delta)) + delta*Q[1]*[x,y]
print(reduce_inequalities(eq1, eq2))
array_1 is a 1x4 array I defined previously, and I only need one element (I choose the elements by slicing the array). Q is a 4x2 array I defined previously. The unknowns are x and y. The output I get is True.
Is there any way to simplify this with sympy or any other python library in such a way I could use the simplified version later on?
Edit: I forgot to mention delta is also defined previously.

Sympy series expansion with numerical integration

I want to make a series expansion for a function F(e,Eo) up to a certain power of e and integrate over the Eo variable numerically.
What I thought was using SymPy to make the power series in e, and then use MPMath for the numerical integration over Eo.
Below is an example code. I receive the message that it cannot create an mpf from the expression. I guess the problem has to do with the fact that the series from SymPy has an O(e**5) term at the end, and that later I want the numerical integration to yield a function of e instead of a number.
import sympy as sp
import numpy as np
from mpmath import *
e = sp.symbols('e')
Eo = sp.symbols('Eo')
expr = sp.sin(e-2*Eo).series(e, 0, 5)
F = lambda Eo : expr
I = quad(F, [0, 2*np.pi])
print(I)
It’s evident that I need a different approach, but I would still need the numerical integration for my actual code because it has much more complicated expressions that could not be integrated analytically.
Edit: I should have chosen a complex function of real variables for the example code; I am trying this (the expansion and integration) for functions such as:
expr = (cos(Eo) - e - I*sqrt(1 - e ** 2)*sin(Eo)) ** 2 * (cos(2*(Eo - e*sin(Eo))) + I*sin(2*(Eo - e*sin(Eo))))/(1 - e*cos(Eo)) ** 4
Here is an approach similar to Wrzlprmft's answer but with a different way of handling coefficients, via SymPy's polynomial module:
from sympy import sin, pi, symbols, Integral, Poly
e, Eo = symbols("e Eo")
def integrate_coeff(coeff):
    partial_integral = coeff.integrate((Eo, 0, 2*pi))
    return partial_integral.n() if partial_integral.has(Integral) else partial_integral
expr = sin(e - sin(2*Eo))
degree = 5
coeffs = Poly(expr.series(e, 0, degree).removeO(), e).all_coeffs()
new_coeffs = [integrate_coeff(c) for c in coeffs]
print((Poly(new_coeffs, e).as_expr() + e**degree).series(e, 0, degree))
The main code is three lines: (1) extract coefficients of e up to given degree; (2) integrate each, numerically if we must; (3) print the result, presenting it as a series rather than a polynomial (hence the trick with adding e**degree, to make SymPy aware that the series continues). Output:
-6.81273574401304e-108 + 4.80787886126883*e + 3.40636787200652e-108*e**2 - 0.801313143544804*e**3 - 2.12897992000408e-109*e**4 + O(e**5)
I want the numerical integration to show a function of e instead of a number.
This is fundamentally impossible.
Either your integration is analytical or numerical, and if it is numerical it will only handle and yield numbers for you (the words numerical and number are similar for a reason).
If you want to split the integration into numerical and analytical components, you have to do so yourself – or hope that SymPy automatically splits the integration as needed, which it unfortunately is not yet capable of.
This is how I would do it (details are commented in the code):
from sympy import sin, pi, symbols, Integral
from itertools import islice
e, Eo = symbols("e Eo")
expr = sin(e - sin(2*Eo))
# Create a generator yielding the first five summands of the series.
# This avoids the O(e**5) term.
series = islice(expr.series(e, 0, None), 5)
integral = 0
for power, summand in enumerate(series):
    # Remove the e from the expression …
    Eo_part = summand/e**power
    # … and ensure that it worked:
    assert not Eo_part.has(e)
    # Integrate the Eo part:
    partial_integral = Eo_part.integrate((Eo, 0, 2*pi))
    # If the integral cannot be evaluated analytically, …
    if partial_integral.has(Integral):
        # … replace it by the numerical estimate:
        partial_integral = partial_integral.n()
    # Re-attach the e component and add it to the sum:
    integral += partial_integral*e**power
Note that I did not use NumPy or MPMath at all (though SymPy uses the latter under the hood for numerical estimates). Unless you really really know what you’re doing, mixing those two with SymPy is a bad idea as they are not even aware of SymPy expressions.
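(If you really do need mpmath's quad somewhere, the safe bridge is sympy.lambdify, which compiles a SymPy expression into an mpmath-aware callable. A minimal sketch, with sin(sin(2*Eo)) as a stand-in Eo-only coefficient:)
from sympy import sin, symbols, lambdify
from mpmath import quad, pi
Eo = symbols("Eo")
coeff = sin(sin(2*Eo))             # an Eo-only coefficient, for illustration
f = lambdify(Eo, coeff, "mpmath")  # mpmath-aware callable
print(quad(f, [0, 2*pi]))          # plain numerical integration, no type errors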

Python using scipy.optimise to find the solution to an equation

I want to solve an equation using scipy.optimize.
I want to find the solution, n, for the equation
a**n + b**n = c**n
where
a=2.3
b=2.4
c=2.94
I have a list of triplets (a,b,c) I want to experiment with, and I know the range of the exponent n will always be 2.0 < n < 4.0. Could I use this fact to speed up the convergence of the solution?
If your function is scalar and accepts a scalar (your case), and if you know that:
- your solution lies in a given interval, and the function is continuous in that interval (your case);
- you are interested in one solution, not necessarily in all solutions (if there is more than one) in that interval;
then you can speed up the solution using the bisection algorithm, implemented in SciPy as scipy.optimize.bisect, which requires the conditions above to guarantee convergence.
The idea behind the algorithm is quite simple: the bracketing interval is halved at every step, so the number of iterations grows only logarithmically with the desired accuracy.
The algorithm is based on the intermediate value theorem from calculus.
EDIT: I couldn't resist, here you have a MWE
import scipy.optimize as opt
def sol(a, b, c):
    f = lambda n: a**n + b**n - c**n
    return opt.bisect(f, 2, 4)
print(sol(2.3, 2.4, 2.94))
> 3.1010655957
As requested in the comments, here's how to do it using mpmath.
We supply the a, b, c parameters as strings rather than as Python floats for maximum accuracy. Converting strings to mpf (mp floats) will be as accurate as the current precision allows. If instead we convert from Python floats then we'd be using numbers that suffer from the imprecision inherent in Python floats.
mp.dps allows us to set the precision in the form of the number of decimal digits.
The mpmath findroot function accepts an initial approximation argument. This can be a single value, or it may be an interval, given as a list or a tuple. It's ok to use Python floats in that interval.
from mpmath import mp
mp.dps = 30
a, b, c = [mp.mpf(u) for u in ('2.3', '2.4', '2.94')]
def f(x):
    return a**x + b**x - c**x
x = mp.findroot(f, [2, 4])
print(x, f(x))
output
3.10106559575904097402104750305 -3.15544362088404722164691426113e-30
By default, findroot uses a simple secant solver. The docs recommend using the 'anderson' or 'ridder' solvers when supplying an interval, but for this equation all 3 solvers give identical results.
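For instance, passing the solver explicitly is a one-line change:
x = mp.findroot(f, [2, 4], solver='ridder')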

Model measurement and error in NumPy

I'd like to try the SciPy suite instead of Octave for doing the statistics in my lab experiments. Most of my questions were answered here; there is just one more thing left:
I usually have an error attached to the measurements, in Octave I just did the following:
R.val = 10;
R.err = 0.1;
U.val = 4;
U.err = 0.1;
And then I would calculate I with it like so:
I.val = U.val / R.val;
I.err = sqrt(
(1 / R.val * U.err)^2
+ (U.val / R.val^2 * R.err)^2
);
When I had a bunch of measurements, I usually used a structure array, like this:
R(0).val = 1;
R(0).err = 0.1;
…
R(15).val = 100;
R(15).err = 9;
Then I could do R(0).val or directly access all of them using R.val and I had a column vector with all the values, for mean(R.val) for instance.
How could I represent this using SciPy/NumPy/Python?
This kind of error propagation is exactly what the uncertainties Python package does. It does so transparently and by correctly handling correlations:
from uncertainties import ufloat
R = ufloat(10, 0.1)
U = ufloat(4, 0.1)
I = U/R
print(I)
prints 0.4+/-0.0107703296143, after automatically determining and calculating the error formula that you typed manually in your example. Also, I.n and I.s are respectively the nominal value (your val) and the standard deviation (your err).
Arrays holding numbers with uncertainties can also be used (http://pythonhosted.org/uncertainties/numpy_guide.html).
(Disclaimer: I'm the author of this package.)
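As a minimal sketch of the array support mentioned above, using the unumpy submodule (the values here are made up):
from uncertainties import unumpy as unp
R = unp.uarray([10.0, 20.0, 30.0], [0.1, 0.2, 0.3])   # nominal values, errors
U = unp.uarray([4.0, 5.0, 6.0], [0.1, 0.1, 0.1])
I = U / R                      # error propagation is element-wise
print(unp.nominal_values(I))   # the .val part
print(unp.std_devs(I))         # the .err part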
The easiest is indeed to use NumPy structured arrays, which give you the possibility to define arrays of homogeneous elements (records) that are themselves composed of named elements (fields).
For example, you could define
import numpy as np
R = np.empty(15, dtype=[('val',float),('err',float)])
and then fill the corresponding columns:
R['val'] = ...
R['err'] = ...
Alternatively, you could define the array at once if you have your val and err in two lists:
R = np.array(list(zip(val_list, err_list)), dtype=[('val',float),('err',float)])
In both cases, you can access individual elements by indices, like R[0] (which would give you a specific object, a np.void, that still gives you the possibility to access the fields separately), or by slices R[1:-1]...
With your example, you could do:
I = np.empty_like(R)
I['val'] = U['val'] / R['val']
I['err'] = np.sqrt((1 / R['val'] * U['err'])**2 + (U['val'] / R['val']**2 * R['err'])**2)
You could also use record arrays, which are basic structured arrays with the __getattr__ and __setattr__ methods overloaded so that you can access the fields as attributes (like R.val) as well as by index (like the standard R['val']). Of course, as these basic methods are overloaded, record arrays are not as efficient as structured arrays.
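A minimal sketch of that attribute-style access (with made-up values):
import numpy as np
R = np.array([(10.0, 0.1), (12.0, 0.2)], dtype=[('val', float), ('err', float)])
Rrec = R.view(np.recarray)   # same memory, attribute-style access
print(Rrec.val.mean())       # like Octave's mean(R.val)
print(Rrec[0].err)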
For just one measurement, a simple namedtuple would probably suffice.
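For instance, a minimal sketch mirroring the Octave fields:
from collections import namedtuple
Measurement = namedtuple('Measurement', ['val', 'err'])
R = Measurement(val=10, err=0.1)
U = Measurement(val=4, err=0.1)
print(U.val / R.val)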
And instead of structure arrays you can use NumPy's record arrays. They seem to be a little more of a mouthful, though.
Also, the Google cache of NumPy for Matlab Users (the direct link doesn't work for me at the moment) can help with counterparts of some basic operations.
There is a package for representing quantities along with uncertainties in Python. It is called quantities! (also on PyPI).

mrdivide function in MATLAB: what is it doing, and how can I do it in Python?

I have this line of MATLAB code:
a/b
I am using these inputs:
a = [1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9]
b = ones(25, 18)
This is the result (a 1x25 matrix):
[5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
What is MATLAB doing? I am trying to duplicate this behavior in Python, and the mrdivide documentation in MATLAB was unhelpful. Where does the 5 come from, and why are the rest of the values 0?
I have tried this with other inputs and receive similar results, usually just a different first element and zeros filling the remainder of the matrix. In Python when I use linalg.lstsq(b.T,a.T), all of the values in the first matrix returned (i.e. not the singular one) are 0.2. I have already tried right division in Python and it gives something completely off with the wrong dimensions.
I understand what a least square approximation is, I just need to know what mrdivide is doing.
Related:
Array division- translating from MATLAB to Python
MRDIVIDE or the / operator actually solves the xb = a linear system, as opposed to MLDIVIDE or the \ operator which will solve the system bx = a.
To solve a system xb = a with a non-symmetric, non-invertible matrix b, you can either rely on mrdivide(), which is done via a factorization of b with Gaussian elimination, or pinv(), which is done via singular value decomposition, zeroing the singular values below a (default) tolerance level.
Here is the difference (for the case of mldivide): What is the difference between PINV and MLDIVIDE when I solve A*x=b?
When the system is overdetermined, both algorithms provide the same answer. When the system is underdetermined, PINV will return the solution x that has the minimum norm (min NORM(x)). MLDIVIDE will pick the solution with the least number of non-zero elements.
In your example:
% solve xb = a
a = [1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9];
b = ones(25, 18);
the system is underdetermined, and the two different solutions will be:
x1 = a/b; % MRDIVIDE: sparsest solution (min L0 norm)
x2 = a*pinv(b); % PINV: minimum norm solution (min L2)
>> x1 = a/b
Warning: Rank deficient, rank = 1, tol = 2.3551e-014.
ans =
5.0000 0 0 ... 0
>> x2 = a*pinv(b)
ans =
0.2 0.2 0.2 ... 0.2
In both cases the approximation error of xb-a is non-negligible (non-exact solution) and the same, i.e. norm(x1*b-a) and norm(x2*b-a) will return the same result.
What is MATLAB doing?
A great breakdown of the algorithms (and checks on properties) invoked by the '\' operator, depending upon the structure of the matrix b, is given in this post on scicomp.stackexchange.com. I am assuming similar options apply for the / operator.
For your example, MATLAB is most probably doing Gaussian elimination, giving the sparsest solution amongst an infinitude of possible ones (that's where the 5 comes from).
What is Python doing?
Python, in linalg.lstsq uses pseudo-inverse/SVD, as demonstrated above (that's why you get a vector of 0.2's). In effect, the following will both give you the same result as MATLAB's pinv():
from numpy import *
a = array([1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9])
b = ones((25, 18))
# xb = a: solve b.T x.T = a.T instead; either of the following
# lines gives the same minimum-norm (pinv) solution:
x2 = linalg.lstsq(b.T, a.T, rcond=None)[0]
x2 = dot(a, linalg.pinv(b))
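To verify the equal-residual claim, a quick check in the same session (x1 reconstructed from the MATLAB output above):
x1 = zeros(25)
x1[0] = 5.0                          # MATLAB's sparse mrdivide solution
print(linalg.norm(dot(x1, b) - a))   # same residual …
print(linalg.norm(dot(x2, b) - a))   # … as the minimum-norm solution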
TL;DR: A/B = np.linalg.solve(B.conj().T, A.conj().T).conj().T
I did not find that the earlier answers provided a satisfactory substitute, so I dug further into Matlab's reference documents for mrdivide and found the solution. I cannot explain the actual mathematics here or take credit for coming up with the answer; I'm just following Matlab's explanation. Additionally, I wanted to post the actual detail from Matlab to give credit. If it's a copyright issue, someone tell me and I'll remove the actual text.
%/ Slash or right matrix divide.
% A/B is the matrix division of B into A, which is roughly the
% same as A*INV(B) , except it is computed in a different way.
% More precisely, A/B = (B'\A')'. See MLDIVIDE for details.
%
% C = MRDIVIDE(A,B) is called for the syntax 'A / B' when A or B is an
% object.
%
% See also MLDIVIDE, RDIVIDE, LDIVIDE.
% Copyright 1984-2005 The MathWorks, Inc.
Note that the ' symbol indicates the complex conjugate transpose. In python using numpy, that requires .conj().T chained together.
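Turning that doc comment into code, a minimal sketch (np.linalg.lstsq handles a non-square B, but note it returns the minimum-norm solution, like pinv, rather than MATLAB's sparse one):
import numpy as np
def mrdivide(A, B):
    # A/B = (B'\A')': solve the transposed system by least squares,
    # then undo the conjugate transpose
    return np.linalg.lstsq(B.conj().T, A.conj().T, rcond=None)[0].conj().T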
Per this handy "cheat sheet" of NumPy for MATLAB users, the counterpart of the backslash operator is linalg.lstsq(b, a); here linalg is numpy.linalg, a lightweight version of the full scipy.linalg.
a/b finds the least squares solution to the system of linear equations x*b = a.
If b is invertible, this is a*inv(b); if it isn't, it is the x which minimises norm(x*b - a).
You can read more about least squares on wikipedia.
According to the MATLAB documentation, mrdivide will return at most k non-zero values, where k is the computed rank of b. My guess is that in your case MATLAB solves the least squares problem obtained by replacing b by b(1,:) (which has the same rank). In this case the Moore-Penrose inverse b2 = b(1,:); inv(b2*b2')*b2*a' is defined and gives the same answer.
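A quick NumPy check of that guess (reusing a and b as defined in the question):
import numpy as np
a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3, 4, 5, 6, 7, 8, 9])
b = np.ones((25, 18))
b2 = b[0, :]                          # first row, same rank as b
x0 = np.dot(b2, a) / np.dot(b2, b2)   # scalar Moore-Penrose solution
print(x0)                             # 5.0, matching MATLAB's first entry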
