I use the SymPy solve() function to solve a large number of equations. All variables in the equations are defined as symbols. Variable names start with the letter P or F. I use solve() to express one specific P variable (the one I observe) in terms of F variables only, i.e. solve() substitutes all other P variables with F variables. The sum of the coefficients of the F variables should ideally be 1 or close to 1 (e.g. 0.99).
This works well up to a certain point, but once the number of equations and their length become large, solve() starts to give me wrong results: the sum of the coefficients becomes negative (e.g. -7, ...). It looks like solve() has problems substituting and carrying over all the variables and their coefficients.
Is there a way to correct this problem?
Dictionary of equations under link: https://drive.google.com/open?id=1VBQucrDU-o1diCd6i4rR3MlRh95qycmK
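For illustration, a stripped-down version of the setup (two hypothetical equations, not the real data) looks like this:
from sympy import symbols, Eq, solve

P1, P2, F1, F2 = symbols("P1 P2 F1 F2")
eqs = [Eq(P1, 0.6*F1 + 0.4*P2),
       Eq(P2, 0.3*F1 + 0.7*F2)]
sol = solve(eqs, [P1, P2])
# P1 is expressed in F variables only; the coefficients sum to 1
print(sol[P1])  # ~0.72*F1 + 0.28*F2
The full script that reproduces the problem is below: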
import json
from sympy import Symbol, Add, Eq, solve

# Get data
# data from link above
with open("C:\\Test\\dict.json") as f:
    equations = json.load(f)

comp = []
expressions = []
for p, equation_components in equations.items():
    p = Symbol(p)
    comp.append(p)
    expression = []
    for name, multiplier in equation_components.items():
        if type(multiplier) == float or type(multiplier) == int:
            expression.append(Symbol(name) * multiplier)
        else:
            expression.append(Symbol(name) * Symbol(multiplier))
    expressions.append(Eq(p, Add(*expression)))

# Solution for variable P137807
print("Solving...")
# Works for slice :364 !!!!!
solutions = solve(expressions[:364], comp[:364], simplify=False, rational=False)
# Gives wrong results for slice :366 and above !!!!!
# solutions = solve(expressions[:366], comp[:366], simplify=False, rational=False)

vm_symbol = Symbol("P137807")
solution_1 = solutions[vm_symbol]
print("\n")
print("Solution_1:")
print(solution_1)
print("\n")

# Sum of coefficients
list_sum = []
for i in solution_1.args:
    if str(i.args[1]) != "ANaN":
        list_sum.append(i.args[0])
coeff_sum = sum(list_sum)
print("Sum:")
print(coeff_sum)
...
I just wanted to mark the problem as solved and provide a reference to the solution. Please see the SymPy issue "numerical instability when solving n=385 linear equations with Float coefficients" (#17136).
The solution that worked for me was to use the following solver instead of the SymPy solve() function:
def ssolve(eqs, syms):
    """return the solution of linear system of equations
    with symbolic coefficients and a unique solution.

    Examples
    ========

    >>> eqs=[x-1,x+2*y-z-2,x+z+w-6,2*y+z+x-2]
    >>> v=[x,y,z,w]
    >>> ssolve(eqs, v)
    {x: 1, z: 0, w: 5, y: 1/2}
    """
    from sympy import Add, ordered  # imports added so the function is self-contained
    from sympy.solvers.solveset import linear_coeffs
    v = list(syms)
    N = len(v)
    # convert equations to coefficient dictionaries
    print('checking linearity')
    d = []
    v0 = v + [0]
    for e in [i.rewrite(Add) for i in eqs]:
        co = linear_coeffs(e, *v)
        di = dict([(i, c) for i, c in zip(v0, co) if c or not i])
        d.append(di)
    print('forward solving')
    sol = {}
    impl = {}
    done = False
    while not done:
        # check for those that are done
        more = set([i for i, di in enumerate(d) if len(di) == 2])
        did = 0
        while more:
            di = d[more.pop()]
            c = di.pop(0)
            x = list(di)[0]
            a = di.pop(x)
            K = sol[x] = -c/a
            v.remove(x)
            changed = True
            did += 1
            # update everyone else
            for j, dj in enumerate(d):
                if x not in dj:
                    continue
                dj[0] += dj.pop(x)*K
                if len(dj) == 2:
                    more.add(j)
        if did:
            print('found', did, 'definitions')
        # solve implicitly for the next variable
        dcan = [i for i in d if len(i) > 2]
        if not dcan:
            done = True
        else:
            # take shortest first
            di = next(ordered(dcan, lambda i: len(i)))
            done = False
            x = next(ordered(i for i in di if i))
            c = di.pop(x)
            for k in di:
                di[k] /= -c
            impl[x] = di.copy()
            di.clear()
            v.remove(x)
            # update everyone else
            for j, dj in enumerate(d):
                if x not in dj:
                    continue
                done = False
                c = dj.pop(x)
                for k in impl[x]:
                    dj[k] = dj.get(k, 0) + impl[x][k]*c
    have = set(sol)
    sol[0] = 1
    while N - len(have):
        print(N - len(have), 'to backsub')
        for k in impl:
            if impl[k] and not set(impl[k]) - have - {0}:
                sol[k] = sum(impl[k][vi]*sol[vi] for vi in impl[k])
                impl[k].clear()
                have.add(k)
    sol.pop(0)
    return sol
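Applied to the system from the question, the call would look roughly like this (a usage sketch, assuming the expressions and comp lists built earlier; ssolve accepts the Eq objects directly because it rewrites them with rewrite(Add)):
solutions = ssolve(expressions, comp)
solution_1 = solutions[Symbol("P137807")]
print(solution_1)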
I have code for doing constrained optimization in MATLAB:
this one is the system of objective equations, obtained by differentiating the function according to the constraints for this case:
function f = funcobj(X)
    f = [(2 - 3*X(1)*2 + X(3)*(1-X(1)) + X(4)*(5+X(1)/5));
         (3 - 4*X(2) + 3*X(3) + 2*X(4));
         (X(1) + 3*X(2) - X(1).^2/2 - 5.5);
         (5*X(1) + 2*X(2) + X(1).^2/10 - 10)];
end
and this one is the Jacobian function (computed with finite differences):
function [f0, jac] = jacobian(x)
    h = 1.0e-4;
    n = length(x);
    jac = zeros(n, n);
    f0 = funcobj(x);
    for zz = 1:n
        temp = x(zz);
        x(zz) = temp + h;
        f1 = funcobj(x);
        x(zz) = temp;
        jac(:, zz) = (f1 - f0)/h;
        disp((f1 - f0)/h);
    end
    % disp(jac)
and this one is the main script:
clc; close all; clear all
% initial
X(1,:) = [1 1 1 1]';
niter = 30; tol = 1e-6;
for ii = 1:niter-1
    disp(X(ii,:));
    [f, dp] = jacobian(X(ii,:));
    dX = inv(dp)*f;
    X(ii+1,:) = X(ii,:)' - dX;
    fprintf('Iteration=%i Solution=%.4f \n', ii, X(ii+1))
    if abs(X(ii+1,:) - X(ii,:)) < tol
        r = X(ii+1,:);
        disp('The Solution is convergent')
        break
    end
end
x = r(1); y = r(2); lambda_1 = r(3); lambda_2 = r(4);
f = (2*x) + (3*y) - (x).^3 - 2*(y.^2);
disp('Case 1')
disp(['x=' num2str(x) ', y = ' num2str(y), ', f = ' num2str(f)])
disp(['lambda_1 = ' num2str(lambda_1), ', lambda_2 = ' num2str(lambda_2)])
When I try to convert it to Python, I am still confused by the X array and how to rewrite jacobian in Python. This is my attempt:
import numpy as np

def funcobj(z):
    f = np.array([[2-3*z[0]**2 + z[2]*(1-z[0])+z[3]*(5+z[0]/5)],
                  [(-4*z[1]+3*z[2]+2*z[3])],
                  [z[0]+3*z[1]-(z[2]**2)/2-5.5],
                  [5*z[0]+2*z[1]+(z[0]**2)/10-10]])
    print(f)
    return f

def jacobian(X):
    h = 1.0e-4
    n = len(X)
    print(n)
    jac = np.zeros([4,4])
    f0 = funcobj(X)
    for i in range(0, n):
        temp = X[i]
        X[i] = temp + h
        f1 = funcobj(X)
        X[i] = temp
        #print((f1-f0)/h)
        jac[0,i] = (f1-f0)/h
    return (f0, jac)

X = np.array([[1],[1],[1],[1]])
niter = 30
tol = 1e-6
for i in range(0, niter):
    jacobian(X[:,i])
    if abs(X[:,i]-X[:,i-1]) < tol:
        r = X[:,i]
        print('The Solution is convergent')
        break
How can I fix this code? I still get an error in Python.
Your funcobj returns a np.ndarray with shape (n, 1) instead of (n,). Note that, contrary to MATLAB, in NumPy the former corresponds to a matrix while the latter corresponds to a vector. Next, in the line jac[0, i] = (f1-f0)/h you are trying to assign a np.ndarray to a single matrix element; it should be jac[:, i] instead. Note also that range starts at 0 by default, since Python has 0-based indexing.
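To see the shape distinction concretely (a quick check, separate from the fix that follows):
import numpy as np

print(np.ones((4, 1)).shape)  # (4, 1): a 2-D column array, like MATLAB's default
print(np.ones(4).shape)       # (4,):  a flat 1-D vector, which is what funcobj should return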
In code:
import numpy as np

def funcobj(z):
    f1 = 2-3*z[0]**2 + z[2]*(1-z[0])+z[3]*(5+z[0]/5)
    f2 = (-4*z[1]+3*z[2]+2*z[3])
    f3 = z[0]+3*z[1]-(z[2]**2)/2-5.5
    f4 = 5*z[0]+2*z[1]+(z[0]**2)/10-10
    return np.array((f1, f2, f3, f4))

def jacobian(X):
    h = 1.0e-4
    n = len(X)
    jac = np.zeros([4,4])
    f0 = funcobj(X)
    for i in range(n):
        temp = X[i]
        X[i] = temp + h
        f1 = funcobj(X)
        X[i] = temp
        #print((f1-f0)/h)
        jac[:,i] = (f1-f0)/h
    return (f0, jac)

x0 = np.ones(4)
# works as expected
print(jacobian(x0))
Now it's your turn to go on from here and implement the main algorithm in python.
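One way the remaining main loop could look, mirroring the MATLAB script (this is only a sketch, not the only option; it reuses the funcobj and jacobian defined above and uses np.linalg.solve instead of explicitly inverting the Jacobian):
x = np.ones(4)                      # initial guess, like X(1,:) = [1 1 1 1]
niter, tol = 30, 1e-6
for it in range(niter):
    f0, jac = jacobian(x)
    dx = np.linalg.solve(jac, f0)   # Newton step: solve jac * dx = f0
    x_new = x - dx
    if np.all(np.abs(x_new - x) < tol):
        x = x_new
        print('The solution is convergent after', it + 1, 'iterations')
        break
    x = x_new
print(x)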
I'm new to Python, but I have been working on code that solves an integral equation whose integration range also changes with the unknown parameter. I tried to use the SymPy solve function, but it does not return any result. I solved the problem with a for loop, but it is really slow and inefficient. I'm sure there must be a better solution, such as a dedicated solver, or maybe a different approach. Am I messing something up? I also attach the code.
import sympy as sy
from sympy.solvers import solve

alphasum = 1.707
Lky = 3.078
g = 8
Ep = 195
sigp1 = 1401.927
sigp0 = 1476
e = 2.718282
u = 0.05
k = 0.007
lsl = sy.Symbol('lsl')

def lefthand(g, Ep):
    return g * Ep

def rigthhand(sigp1, sigp0, e, u, alphasum, Lky, k, lsl):
    return (sigp1 - (-sigp1 + 2 * sigp0 - 2 * sigp0 * (1 - e ** (-u * (((alphasum / Lky) * lsl) + k * lsl)))))

equr = (sy.integrate(rigthhand(sigp1, sigp0, e, u, alphasum, Lky, k, lsl), (lsl, 0, lsl)))
equl = lefthand(g, Ep)
print(equr)
print(equl)
print(equr - equl)
result = solve(equr - equl, lsl, warn=True, check=False, minimal=True, quick=True, simplify=True, quartics=True)
print(result)
You can use nsolve to find numerical solutions:
In [11]: sympy.nsolve(equr-equl, lsl, 8)
Out[11]: 8.60275245116315
In [12]: sympy.nsolve(equr-equl, lsl, -4)
Out[12]: -4.53215114758428
When u is 0 your equation is linear and you can solve it for lsl; let this be an initial guess for the value of lsl at a larger value of u, then increase u toward the target value. Let's use capital U for the parameter we will control, replacing it with values closer and closer to u:
>>> U = Symbol('U')
>>> equr = (sy.integrate(rigthhand(sigp1, sigp0, e, U, alphasum, Lky, k, lsl), (lsl, 0, lsl)))
>>> equl = lefthand(g, Ep)
>>> z = equr - equl
>>> u0 = solve(z.subs(U, 0), lsl)[0]
>>> for i in range(1, 10):  # get to u in 9 steps
...     u0 = nsolve(z.subs(U, i*u/10.), u0)
...
>>> print(u0)
-4.71178322070344
Now define a larger value of u:
>>> u = 0.3
>>> for i in range(1, 10):  # get to u in 9 steps
...     u0 = nsolve(z.subs(U, i*u/10.), u0)
...
>>> u0
-2.21489271112540
Our initial guess will (usually) be pretty good since it is exact when u is 0; if it fails, you might need to take more than 9 steps to reach the target value.
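A small helper wrapping this stepping idea might look like the following (hypothetical name step_to; it reuses z, U and lsl from above, assumes solve and nsolve are imported from sympy, and reaches the target value of u in the final step):
def step_to(z, U, lsl, target_u, steps=9):
    # exact solution of the linear case (u = 0) is the starting guess
    guess = solve(z.subs(U, 0), lsl)[0]
    for i in range(1, steps + 1):
        guess = nsolve(z.subs(U, i*target_u/steps), guess)
    return guess

print(step_to(z, U, lsl, 0.05))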
I have an ordinary differential equation like this:
DiffEq = Eq(-ℏ*ℏ*diff(Ψ,x,2)/(2*m) + m*w*w*(x*x)*Ψ/2 - E*Ψ , 0)
I want to perform a change of variables:
sp.Eq(u, x*sqrt(m*w/ℏ))
sp.Eq(Ψ, H*exp(-u*u/2))
How can I do this with SymPy?
Use the following function:
def variable_change(ODE, dependent_var,
                    independent_var,
                    new_dependent_var=None,
                    new_independent_var=None,
                    dependent_var_relation=None,
                    independent_var_relation=None,
                    order=2):

    if new_dependent_var is None:
        new_dependent_var = dependent_var
    if new_independent_var is None:
        new_independent_var = independent_var

    # change of the independent variable
    if new_independent_var != independent_var:
        for i in range(order, -1, -1):
            # replace the i-th derivative
            a = D(dependent_var, independent_var, i)
            ξ = Function("ξ")(independent_var)
            b = D(dependent_var.subs(independent_var, ξ), independent_var, i)
            rel = solve(independent_var_relation, new_independent_var)[0]
            for j in range(order, 0, -1):
                b = b.subs(D(ξ, independent_var, j), D(rel, independent_var, j))
            b = b.subs(ξ, new_independent_var)
            rel = solve(independent_var_relation, independent_var)[0]
            b = b.subs(independent_var, rel)
            ODE = ODE.subs(a, b)
        ODE = ODE.subs(independent_var, rel)

    # change of the dependent variable
    if new_dependent_var != dependent_var:
        ODE = (ODE.subs(dependent_var.subs(independent_var, new_independent_var),
                        (solve(dependent_var_relation, dependent_var)[0])))
        ODE = ODE.doit().expand()

    return ODE.simplify()
For the example posted:
from sympy import *
from sympy import diff as D

E, ℏ, w, m, x, u = symbols("E, ℏ, w, m, x, u")
Ψ, H = map(Function, ["Ψ", "H"])
Ψ, H = Ψ(x), H(u)

DiffEq = Eq(-ℏ*ℏ*D(Ψ, x, 2)/(2*m) + m*w*w*(x*x)*Ψ/2 - E*Ψ, 0)
display(DiffEq)
display(Eq(u, x*sqrt(m*w/ℏ)))
display(Eq(Ψ, H*exp(-u*u/2)))

newODE = variable_change(ODE=DiffEq,
                         independent_var=x,
                         new_independent_var=u,
                         independent_var_relation=Eq(u, x*sqrt(m*w/ℏ)),
                         dependent_var=Ψ,
                         new_dependent_var=H,
                         dependent_var_relation=Eq(Ψ, H*exp(-u*u/2)),
                         order=2)
display(newODE)
Under this substitution, the differential equation returned is:
Eq((-E*H + u*w*ℏ*D(H, u) + w*ℏ*H/2 - w*ℏ*D(H, (u, 2))/2)*exp(-u**2/2), 0)
If anyone is wondering how to do this as well in CoCalc notebooks (or anywhere else where you can mix Sage and Python): here I define basically the same variables and functions as the OP did in his accepted answer, and after the substitution the result is converted back to Sage:
# Sage objects
var("E w m x u")
var("h_bar", latex_name=r'\hbar')
Ψ = function("Ψ")(x)
H = function('H')(u)

DiffEq = (-h_bar*h_bar*Ψ.diff(x, 2)/(2*m) + m*w*w*(x*x)*Ψ/2 - E*Ψ == 0)
display(DiffEq)
display(u == x*sqrt(m*w/h_bar))
display(Ψ == H*exp(-u*u/2))

# Function is purely sympy
newODE = variable_change(
    ODE = DiffEq._sympy_(),
    independent_var = x._sympy_(),
    new_independent_var = u._sympy_(),
    independent_var_relation = (u == x*sqrt(m*w/h_bar))._sympy_(),
    dependent_var = Ψ._sympy_(),
    new_dependent_var = H._sympy_(),
    dependent_var_relation = (Ψ == H*exp(-u*u/2))._sympy_(),
    order = 2
)
display(newODE._sage_())
Note that the only difference is that here things are converted to SymPy when used as arguments inside the OP's function (it will probably break if you don't!). Once you call _sympy_() on a variable or expression, every SymPy object gets a _sage_() method to convert back.
The result given was:
# Sage object again
1/2*(2*h_bar*u*w*diff(H(u), u) + h_bar*w*H(u) - h_bar*w*diff(H(u), u, u) - 2*E*H(u))*e^(-1/2*u^2) == 0
which is just the OP's result; Sage simply arranges the operands a little differently.
Note: in order to avoid overriding things in Sage after importing everything from SymPy, you may want to import only diff (as D), Function, and solve from the main library. You might also want to rename SymPy's solve to something else to avoid shadowing Sage's own sage.symbolic.relation.solve.
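For example, one possible import block along those lines (just a suggestion; if you rename solve, remember to use the new name inside variable_change as well):
# keep Sage's globals intact: import only what the sympy-based helper needs
from sympy import Function, symbols
from sympy import diff as D
from sympy import solve as sym_solve   # renamed so Sage's solve is not shadowed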
I am using SymPy to solve some equations and I am running into a problem. I have this issue with many equations, but I will illustrate with an example. I have an equation with multiple variables and I want to solve it in terms of all the variables except one, which is excluded. Take, for instance, the equation 0 = 2^n*(2-a) - b + 1. Here there are three variables: a, b and n. I want to get values for a and b that are not in terms of n, so a and b may not contain n.
2^n*(2-a) - b + 1 = 0
# Since we don't want to solve in terms of n we know that (2 - a)
# has to be zero and -b + 1 has to be zero.
2 - a = 0
a = 2
-b + 1 = 0
b = 1
I want SymPy to do this. Maybe I'm just not looking at the right documentation, but I have found no way to do it. When I use solve and instruct it to solve for the symbols a and b, SymPy returns a single solution where a is defined in terms of n and b. I assume this means I am free to choose b and n. However, I don't want to fix n to a specific value; I want the solution to hold for any n.
Code:
import sympy

n = sympy.var("n", integer=True)
a = sympy.var("a")
b = sympy.var("b")
f = 2**n*(2-a) - b + 1
solutions = sympy.solve(f, [a, b], dict=True)
# this returns: "[{a: 2**(-n)*(2**(n + 1) - b + 1)}]",
# a single solution where b and n are free variables.
# However, this means I would have to choose an n; I don't
# want to do that, I want it to hold for any n.
I really hope someone can help me. I have been searching google for hours now...
OK, here's what I came up with. This seems to solve the type of equations you're looking for. I've provided some tests as well. Of course, this code is rough and can easily be made to fail, so I'd take it more as a starting point than a complete solution.
import sympy

n = sympy.Symbol('n')
a = sympy.Symbol('a')
b = sympy.Symbol('b')
c = sympy.Symbol('c')
d = sympy.Symbol('d')
e = sympy.Symbol('e')

f = sympy.sympify(2**n*(2-a) - b + 1)
g = sympy.sympify(2**n*(2-a) - 2**(n-1)*(c+5) - b + 1)
h = sympy.sympify(2**n*(2-a) - 2**(n-1)*(e-1) + (c-3)*9**n - b + 1)
i = sympy.sympify(2**n*(2-a) - 2**(n-1)*(e+4) + (c-3)*9**n - b + 1 + (d+2)*9**(n+2))

def rewrite(expr):
    if expr.is_Add:
        return sympy.Add(*[rewrite(f) for f in expr.args])
    if expr.is_Mul:
        return sympy.Mul(*[rewrite(f) for f in expr.args])
    if expr.is_Pow:
        if expr.args[0].is_Number:
            if expr.args[1].is_Symbol:
                return expr
            elif expr.args[1].is_Add:
                base = expr.args[0]
                power = sympy.solve(expr.args[1])
                sym = expr.args[1].free_symbols.pop()
                return sympy.Mul(sympy.Pow(base, -power[0]), sympy.Pow(base, sym))
            else:
                return expr
        else:
            return expr
    else:
        return expr

def my_solve(expr):
    if not expr.is_Add:
        return None
    consts_list = []
    equations_list = []
    for arg in expr.args:
        if not sympy.Symbol('n') in arg.free_symbols:
            consts_list.append(arg)
        elif arg.is_Mul:
            coeff_list = []
            for nested_arg in arg.args:
                if not sympy.Symbol('n') in nested_arg.free_symbols:
                    coeff_list.append(nested_arg)
            equations_list.append(sympy.Mul(*coeff_list))
    equations_list.append(sympy.Add(*consts_list))
    results = {}
    for eq in equations_list:
        var_name = eq.free_symbols.pop()
        val = sympy.solve(eq)[0]
        results[var_name] = val
    return results

print(my_solve(rewrite(f)))
print(my_solve(rewrite(g)))
print(my_solve(rewrite(h)))
print(my_solve(rewrite(i)))
I have a class that takes in lists of 1's and 0's and performs GF(2) finite field arithmetic operations. It used to work until I tried to make it take the input in polynomial format. As for how the finite field arithmetic will be done after fixing the regex issue, I was thinking about overloading the operators.
The actual code in parsePolyToListInput(input) works when used outside the class. The problem seems to be in the regex, which raises an error because it will only accept a string (this makes sense), but the method does not seem to initialize with self.expr as a parameter (that's a problem). The #staticmethod just before the initialization was an attempt to salvage the unbound error as the polynomial was passed in, but this is apparently completely wrong. Just to save you time if you decide to look at any of the arithmetic operations: modular_inverse does not work (it seems to be due to the formatting issue after every iteration of the while loop for division in that function, and what the return type is):
import re

class gf2poly:
    #binary arithmetic on polynomials
    ##staticmethod
    def __init__(self, expr):
        self.expr = expr
        #self.expr = [int(i) for i in expr]
        self.expr = gf2poly.parsePolyToListInput(self.expr)
    def convert(self): #to clarify the input if necessary
        convertToString = str(self.expr)
        print "expression is %s"%(convertToString)
    def id(self): #returns modulus 2 (1,0,0,1,1,....) for input lists
        return [int(self.expr[i])%2 for i in range(len(self.expr))]
    def listToInt(self): #converts list to integer for later use
        result = gf2poly.id(self)
        return int(''.join(map(str,result)))
    def prepBinary(a,b): #converts to base 2 and orders min and max for use
        a = gf2poly.listToInt(a); b = gf2poly.listToInt(b)
        bina = int(str(a),2); binb = int(str(b),2)
        a = min(bina,binb); b = max(bina,binb)
        return a,b
    #staticmethod
    def outFormat(raw):
        raw = str(raw[::-1]); g = [] #reverse binary string for enumeration
        [g.append(i) for i,c in enumerate(raw) if c == '1']
        processed = "x**"+' + x**'.join(map(str, g[::-1]))
        if len(g) == 0: return 0 #return 0 if list empty
        return processed #returns result in gf(2) polynomial form
    def parsePolyToListInput(poly):
        c = [int(i.group(0)) for i in re.finditer(r'\d+', poly)] #re.finditer returns an iterator
        #m = max(c)
        return [1 if x in c else 0 for x in xrange(max(c), -1, -1)]
        #return d
    def add(self,other): #accepts 2 lists as parameters
        a = gf2poly.listToInt(self); b = gf2poly.listToInt(other)
        bina = int(str(a),2); binb = int(str(b),2)
        m = bina^binb; z = "{0:b}".format(m)
        return z #returns binary string
    def subtract(self,other): #basically same as add() but built differently
        result = [self.expr[i] ^ other.expr[i] for i in range(len(max(self.expr,other.expr)))]
        return int(''.join(map(str,result)))
    def multiply(a,b): #a,b are lists like (1,0,1,0,0,1,....)
        a,b = gf2poly.prepBinary(a,b)
        g = []; bitsa = "{0:b}".format(a)
        [g.append((b<<i)*int(bit)) for i,bit in enumerate(bitsa)]
        m = reduce(lambda x,y: x^y,g); z = "{0:b}".format(m)
        return z #returns product of 2 polynomials in gf2
    def divide(a,b): #a,b are lists like (1,0,1,0,0,1,....)
        a,b = gf2poly.prepBinary(a,b)
        bitsa = "{0:b}".format(a); bitsb = "{0:b}".format(b)
        difflen = len(str(bitsb)) - len(str(bitsa))
        c = a<<difflen; q=0
        while difflen >= 0 and b != 0: #a is divisor, b is dividend, b/a
            q+=1<<difflen; b = b^c # b/a because of sorting in prep
            lendif = abs(len(str(bin(b))) - len(str(bin(c))))
            c = c>>lendif; difflen -= lendif
        r = "{0:b}".format(b); q = "{0:b}".format(q)
        return r,q #returns r remainder and q quotient in gf2 division
    def remainder(a,b): #separate function for clarity when calling
        r = gf2poly.divide(a,b)[0]; r = int(str(r),2)
        return "{0:b}".format(r)
    def quotient(a,b): #separate function for clarity when calling
        q = gf2poly.divide(a,b)[1]; q = int(str(q),2)
        return "{0:b}".format(q)
    def extendedEuclideanGF2(a,b): # extended euclidean. a,b are GF(2) polynomials in list form
        inita,initb=a,b; x,prevx=0,1; y,prevy = 1,0
        while sum(b) != 0:
            q = gf2poly.quotient(a,b)
            q = list(q); q = [int(x) for x in q]
            #q = list(q);
            #q = tuple([int(i) for i in q])
            q = gf2poly(q)
            a,b = b,gf2poly.remainder(a,b)
            #a = map(list, a);
            #b = [list(x) for x in a];
            #a = [int(x) for x in a]; b = [int(x) for x in b];
            b = list(b); b = [int(x) for x in b]
            #b = list(b);
            #b = tuple([int(i) for i in b])
            b = gf2poly(b)
            #x,prevx = (prevx-q*x, x);
            #y,prevy=(prevy-q*y, y)
            print "types ",type(q),type(a),type(b)
            #q=a//b; a,b = b,a%b; x,prevx = (prevx-q*x, x); y,prevy=(prevy-q*y, y)
            #print("%d * %d + %d * %d = %d" % (inita,prevx,initb,prevy,a))
        return a,prevx,prevy # returns gcd of (a,b), and factors s and t
    def modular_inverse(a,mod): # where a,mod are GF(2) polynomials in list form
        gcd,s,t = gf2poly.extendedEuclideanGF2(a,mod); mi = gf2poly.remainder(s,mod)
        #gcd,s,t = ext_euc_alg_i(a,mod); mi = s%mod
        if gcd !=1: return False
        #print ("%d * %d mod %d = 1"%(a,mi,mod))
        return mi # returns modular inverse of a,mod
I usually test it with this input:
a = x**14 + x**1 + x**0
p1 = gf2poly(a)
b = x**6 + x**2 + x**1
p2 = gf2poly(b)
The first thing you might notice about my code is that it's not very good. There are 2 reasons for that:
1) I wrote it so that the 1st version could do work in the finite field GF(2), and output in polynomial format. Then the next versions were supposed to be able to take polynomial inputs, and also perform the crucial 'modular inverse' function which is not working as planned (this means it's actually not working at all).
2) I'm teaching myself Python (I'm actually teaching myself programming overall), so any constructive criticism from pro Python programmers is welcome as I'm trying to break myself of beginner habits as quickly as possible.
EDIT:
Maybe some more of the code I've been testing with will help clarify what works and what doesn't:
t1 = [1,1,1]; t2 = [1,0,1]; t3 = [1,1]; t4 = [1, 0, 1, 1, 1, 1, 1]
t5 = [1,1,1,1]; t6 = [1,1,0,1]; t7 = [1,0,1,1,0]
f1 = gf2poly(t1); f2 = gf2poly(t2); f3 = gf2poly(t3); f4 = gf2poly(t4)
f5 = gf2poly(t5);f6 = gf2poly(t6);f7 = gf2poly(t7)
##print "subtract: ",a.subtract(b)
##print "add: ",a.add(b)
##print "multiply: ",gf2poly.multiply(f1,f3)
##print "multiply: ",gf2poly.multiply(f1,f2)
##print "multiply: ",gf2poly.multiply(f3,f4)
##print "degree a: ",a.degree()
##print "degree c: ",c.degree()
##print "divide: ",gf2poly.divide(f1,b)
##print "divide: ",gf2poly.divide(f4,a)
##print "divide: ",gf2poly.divide(f4,f2)
##print "divide: ",gf2poly.divide(f2,a)
##print "***********************************"
##print "quotient: ",gf2poly.quotient(f2,f5)
##print "remainder: ",gf2poly.remainder(f2,f5)
##testq = gf2poly.quotient(f4,f2)
##testr = gf2poly.remainder(f4,f2)
##print "quotient: ",testq,type(testq)
##print "remainder: ",testr,type(testr)
##print "***********************************"
##print "outFormat testp: ",gf2poly.outFormat(testq)
##print "outFormat testr: ",gf2poly.outFormat(testr)
##print "***********************************"
#print "gf2poly.modular_inverse(): ",gf2poly.modular_inverse(f2,f3)
print "p1 ",p1 #,type(f2),type(f3)
#print "parsePolyToListInput ",gf2poly.parsePolyToListInput(a)
Part of your problem is that you haven't declared self as an argument for parsePolyToListInput. When you call a method, the instance you call it on is implicitly bound as the first argument. Naming the first argument self is a convention, not a strict requirement; here the instance is being bound to poly, which you then try to run a regexp over.
It looks to me like there's some confusion in your design about what is behavior of individual instances of the class and what is class-level or module-level behavior. In Python, it's perfectly acceptable to leave something that doesn't take an instance of the class as a parameter defined as a module-level function rather than shoehorning it in awkwardly. parsePolyToListInput might be one such function.
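A possible module-level version might look like this (hypothetical name parse_poly_to_list; same body as the method in the question):
import re

# module-level helper: no implicit instance argument to worry about
def parse_poly_to_list(poly):
    degrees = [int(m.group(0)) for m in re.finditer(r'\d+', poly)]
    return [1 if d in degrees else 0 for d in xrange(max(degrees), -1, -1)]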
Your add implementation, similarly, has a comment saying it "accepts 2 lists as parameters". In fact, it's going to get a gf2poly instance as its first argument; this is probably right if you're planning to do operator overloading, but it means the second argument should be a gf2poly instance as well.
EDIT:
Yeah, your example code shows a breakdown between class behavior and instance behavior. Either your multiply call should look something like this:
print "multiply: ",f1.multiply(f3)
Or multiply shouldn't be a method at all:
gfpoly.py:
def multiply(f1, f2):
    a, b = prepBinary(f1, f2)   # use the arguments, not undefined a, b
    g = []; bitsa = "{0:b}".format(a)
    [g.append((b<<i)*int(bit)) for i, bit in enumerate(bitsa)]
    m = reduce(lambda x, y: x^y, g); z = "{0:b}".format(m)
    return z  # returns product of 2 polynomials in gf2
That latter approach is, for instance, how the standard math library does things.
The advantage of defining a multiplication method is that you could name it appropriately (http://docs.python.org/2/reference/datamodel.html#special-method-names) and use it with the * operator:
print "multiply: ",f1 *f3