Get coefficients of a polynomial pyomo constraint - python

In addition to the question here I would like to know how to obtain the coefficients of an arbitrary polynomial constraint of a pyomo model. So, for
from pyomo.environ import ConcreteModel, Var, Constraint, Integers

m = ConcreteModel()
m.x_1 = Var()
m.x_2 = Var()
m.x_3 = Var(within=Integers)
m.x_4 = Var(within=Integers)
m.c = Constraint(expr=2*m.x_1**2 + 5*m.x_1*m.x_2 + m.x_4 <= 2)
I would like to have
coeff[c] = [2, 5, 1].

To my knowledge, there is no easy way to do this without walking the expression tree for arbitrary polynomials (since you could have (x-3)^2+5x+6).
One approach could be to sympy-ify the pyomo expression and ask sympy for those values: How to extract all coefficients in sympy
The current implementation of differentiate actually makes use of sympy: https://github.com/Pyomo/pyomo/blob/4997726dd1f11bdb86589ff1c2f4badc654a69ad/pyomo/core/base/symbolic.py#L128
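For example, once the constraint body has been converted to a sympy expression, the coefficients can be read off with sympy's Poly. A minimal sketch of the sympy side only; the x1, x2, x4 symbols below are stand-ins for the converted pyomo variables (the pyomo-to-sympy conversion itself is what the links above cover):
import sympy

x1, x2, x4 = sympy.symbols("x1 x2 x4")

# Stand-in for the sympy-ified body of m.c: 2*x_1**2 + 5*x_1*x_2 + x_4
expr = 2*x1**2 + 5*x1*x2 + x4

# Expand first so forms like (x - 3)**2 are multiplied out, then read the
# coefficient of each monomial from a Poly over the constraint's variables.
poly = sympy.Poly(sympy.expand(expr), x1, x2, x4)
print(poly.coeffs())  # [2, 5, 1]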

Related

How to solve a system of equations with symbolic dimension in Sympy?

Suppose I have a vector of unknowns x_i, with i=1,...,N, where N is symbolic, and I have some nonlinear system of M equations, f_m(x_1,...,x_N) = 0, for m=1,...,M. How should I approach this problem with Sympy?
To make things more concrete: For example, suppose that I have the following system, described by the generic equation eq below, valid for all i:
from sympy import Symbol, Idx, IndexedBase, Sum, exp

N = Symbol('N', integer=True)
i = Idx('i', (1, N))
x = IndexedBase('x', real=True)
eq = exp(x[i]) + Sum(x[i], i)  # This is a generic equation, valid for all i
How should I approach this? Is there a way to look for a solution for all x[i]?
How to state this problem to Sympy? If there are clarifications needed, I would be glad to provide them.

Numpy: Solve linear equation system with one unknown + number

I would like to solve a linear equation system in numpy in order to check whether a point lines up with a vector or not.
Given are the following equations for a vector2:
point[x] = vector1[x] + λ * vector2[x]
point[y] = vector1[y] + λ * vector2[y]
Numpy's linalg.solve() offers the option to solve two equations of the form:
ax + by = c
by defining the parameters a and b in a numpy.array().
But I can't seem to find a way to deal with equations with one fixed parameter like:
m*x + b = 0
Am I missing a point or do I have to deal with another solution?
Thanks in advance!
Hi, I'll give this question a try.
The documentation for numpy.linalg.solve says:
Computes the “exact” solution, x, of the well-determined, i.e., full rank, linear matrix equation ax = b.
Note the assumptions made on the matrix!
Lambda the same
If the lambda for the point[x] and point[y] equations should be the same, then just concatenate all the vectors.
x_new = np.concatenate([x,y])
vec1_new = np.concatenate([vec1_x,vec1_y])
...
This will probably overdetermine your system, meaning you have too many equations and only one parameter to determine (the well-determined assumption is violated). My approach would be to go with least squares.
numpy.linalg.lstsq provides a least-squares method, where the equation y = mx + c is solved. For your case this is y = point[x], x = vector2[x] and c = vector1[x].
This is copied from the numpy.linalg.lstsq example:
x = np.array([0, 1, 2, 3])
y = np.array([-1, 0.2, 0.9, 2.1])
A = np.vstack([x, np.ones(len(x))]).T  # stack x and a column of ones as the columns of A
m, c = np.linalg.lstsq(A, y, rcond=None)[0]
Lambda different
If the lambdas are different, stack vector2[x] and vector2[y] horizontally and you have [lambda_1, lambda_2] to find. You will probably also have more equations than lambdas, so you will end up with a least-squares solution.
Note
Keep in mind that even if you construct your system from a straight line and a fixed lambda, you might need a least-squares approach due to rounding and numeric differences.
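To make that concrete, here is a rough sketch for the single-shared-lambda case using np.linalg.lstsq; the point and vector values below are made up for illustration, not taken from the question:
import numpy as np

point = np.array([3.0, 5.0])
vector1 = np.array([1.0, 1.0])
vector2 = np.array([1.0, 2.0])

# point = vector1 + lambda * vector2  ->  vector2 * lambda = point - vector1
A = vector2.reshape(-1, 1)  # one column: the single unknown lambda
b = point - vector1
lam, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)

print(lam)        # [2.]
print(residuals)  # a residual near zero means the point lines up with the vector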
You can solve your equation 2*x + 4 = 0 with sympy:
from sympy.abc import x
from sympy import Eq, solve
eq = Eq(2 * x + 4, 0)
print(solve(eq))

Constrained non-linear optimization - overdetermined system of polynomial equations

I have a system of 21 polynomial equations in a total of 12 unknowns a, ..., l. Each equation has the general form V1*abc + V2*abd + ... + V64*jkl = x, where V1, ..., V64 are each either 0 or 1, i.e., each equation contains on the left hand side the sum of some products of three different unknowns.
There is a set of constraints: a + b + c + d = 1, e + f + g + h = 1, i + j + k + l = 1. The sum of all xs (right hand sides) is equal to 1.
I have as an input a vector of xs. Is there a solver which could provide me the values of a, ..., l which yield a vector of xs as close as possible to the original xs while adhering to the constraints? I'm looking for a python implementation.
I looked in scipy.optimize but I'm not able to establish which method is preferable for my problem.
You might want to try this python binding for IPOPT. IPOPT is an optimization library that uses an interior-point solver for finding (local) optima of functions with generalized constraints, both equality and inequality constraints. As you've described your problem, you won't care about the inequality constraints.
A candidate function for your optimization objective would be the sum of the squared differences for your 21 polynomial equations. Let's say you start with your initial x, which is a 21-element vector (x_1, ..., x_21); then your objective would be:
(V1_1*abc + V2_1*abd + ... + V64_1*jkl - x_1)^2 + (V1_2*abc + V2_2*abd + ... + V64_2*jkl - x_2)^2 + ... + (V1_21*abc + V2_21*abd + ... + V64_21*jkl - x_21)^2
To use IPOPT, you will need to compute the partial derivatives of your constraints and objective with respect to all of your variables a, ..., l.
If IPOPT won't work for you, you might even be able to use scipy.optimize with this objective function. From the docs, it looks like scipy.optimize will try to pick the method appropriate for your problem based upon how you define it; if you define your constraints and objective correctly, scipy.optimize should pick the correct method.
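If you go the scipy.optimize route, a hedged sketch of wiring the objective and the three equality constraints into scipy.optimize.minimize with SLSQP could look like this; the V selectors and target xs below are random placeholders, not your actual input:
import itertools
import numpy as np
from scipy.optimize import minimize

# the 64 products: one unknown from each group (a-d, e-h, i-l), indexed 0-11
triples = list(itertools.product(range(0, 4), range(4, 8), range(8, 12)))
rng = np.random.default_rng(0)
V = rng.integers(0, 2, size=(21, 64))  # made-up 0/1 selectors per equation
x = rng.random(21)
x = x / x.sum()                        # targets summing to 1, as in the question

def objective(v):
    lhs = np.array([sum(Vm[p] * v[i] * v[j] * v[k]
                        for p, (i, j, k) in enumerate(triples)) for Vm in V])
    return np.sum((lhs - x) ** 2)

constraints = [
    {"type": "eq", "fun": lambda v: v[0:4].sum() - 1.0},
    {"type": "eq", "fun": lambda v: v[4:8].sum() - 1.0},
    {"type": "eq", "fun": lambda v: v[8:12].sum() - 1.0},
]

v0 = np.full(12, 0.25)  # satisfies all three sum constraints
res = minimize(objective, v0, method="SLSQP", constraints=constraints)
print(res.x, res.fun)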

Iterative polynomial multiplication -- Chebyshev polynomials in Python

My question is: What is the best approach to iterative polynomial multiplication in Python?
I thought an interesting project would be to write a function in Python to generate the coefficients and exponents of each term for a Chebyshev polynomial of a given degree. The recurrence relation to generate such a polynomial (represented by T_n(x)) is:
T_0(x) = 1
T_1(x) = x
T_n(x) = 2x*T_{n-1}(x) - T_{n-2}(x)
What I have so far isn't very useful, but I am having trouble kind of wrapping my brain around how to get this going. What I want to happen is the following:
>> chebyshev(4)
[[8, 4], [-8, 2], [1, 0]]
This list represents the Chebyshev polynomial of the 4th degree:
T_4(x) = 8x^4 - 8x^2 + 1
import sys

def chebyshev(n, a=[1, 0], b=[1, 1]):
    z = [2, 1]
    result = []
    if n == 0:
        return a
    if n == 1:
        return b
    print >> sys.stderr, ([z[0]*b[0],
                           z[1]+b[1]],
                          a)  # This displays the proper result for n = 2
    return result
The one solution I found on the web didn't work, so I am hoping someone can shed some light.
p.s. More information on Chebyshev polynomials: CSU Fullerton, Wikipedia - Chebyshev polynomials. They are very cool/useful, and tie together some really interesting trig functions/properties; worth a read.
SciPy has an implementation for Chebyshev
http://www.scipy.org/doc/api_docs/SciPy.special.orthogonal.html
I would suggest looking at their code.
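As a separate minimal sketch (not from the answer above), numpy.polynomial.chebyshev can produce the [coefficient, exponent] pairs the question asks for by converting T_n from the Chebyshev basis to the power basis:
from numpy.polynomial import chebyshev as C

def chebyshev_terms(n):
    cheb_coeffs = [0] * n + [1]              # T_n in the Chebyshev basis
    power_coeffs = C.cheb2poly(cheb_coeffs)  # ordered from low to high degree
    return [[int(c), e] for e, c in enumerate(power_coeffs) if c != 0][::-1]

print(chebyshev_terms(4))  # [[8, 4], [-8, 2], [1, 0]]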
The best implementation for Chebyshev is:
// Computes T_n(x), with -1 <= x <= 1
double T( int n, double x )
{
    return cos( n*acos(x) ) ;
}
If you test this against other implementations, including explicit polynomial evaluation and iteratively computing the recurrence relation, this is actually just as fast. Try it yourself.
Generally:
Explicit polynomial evaluation is the worst (for large n)
Recursive evaluation is a little better
cosine evaluation is the best
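As a quick sanity check (my addition, not part of the answer above), the cos(n*acos(x)) identity agrees with NumPy's Chebyshev evaluator at an arbitrary test point:
import math
from numpy.polynomial import chebyshev as C

def T(n, x):
    # T_n(x) via the trigonometric identity, valid for -1 <= x <= 1
    return math.cos(n * math.acos(x))

n, x = 4, 0.5
coeffs = [0] * n + [1]       # select T_n in the Chebyshev basis
print(T(n, x))               # -0.5
print(C.chebval(x, coeffs))  # -0.5, same value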
orthopy (a project of mine) also supports computation of Chebyshev polynomials. With
import orthopy
# from sympy.abc import x
x = 0.5
normalization = "normal" # or "classical", "monic"
evaluator = orthopy.c1.chebyshev1.Eval(x, normalization)
for _ in range(10):
    print(next(evaluator))
0.5641895835477564
0.39894228040143276
-0.39894228040143265
...
you get the values of the polynomials with increasing degree at x = 0.5. You can use a list/vector of multiple values, or even sympy symbolics.
Computation happens with recurrence relations of course. If you're interested in the coefficients, check out
rc = orthopy.c1.chebyshev1.RecurrenceCoefficients("monic", symbolic=True)

Calculating the area underneath a mathematical function

I have a range of data that I have approximated using a polynomial of degree 2 in Python. I want to calculate the area underneath this polynomial between 0 and 1.
Is there a calculus, or similar package from numpy that I can use, or should I just make a simple function to integrate these functions?
I'm a little unclear what the best approach for defining mathematical functions is.
Thanks.
If you're integrating only polynomials, you don't need to represent a general mathematical function; use numpy.poly1d, which has an integ method for integration.
>>> import numpy
>>> p = numpy.poly1d([2, 4, 6])
>>> print p
2
2 x + 4 x + 6
>>> i = p.integ()
>>> i
poly1d([ 0.66666667, 2. , 6. , 0. ])
>>> integrand = i(1) - i(0) # Use call notation to evaluate a poly1d
>>> integrand
8.6666666666666661
For integrating arbitrary numerical functions, you would use scipy.integrate with normal Python functions for functions. For integrating functions analytically, you would use sympy. It doesn't sound like you want either in this case, especially not the latter.
Look, Ma, no imports!
>>> coeffs = [2., 4., 6.]
>>> sum(coeff / (i+1) for i, coeff in enumerate(reversed(coeffs)))
8.6666666666666661
>>>
Our guarantee: Works for a polynomial of any positive degree or your money back!
Update from our research lab: Guarantee extended; s/positive/non-negative/ :-)
Update: Here's the industrial-strength version that is robust in the face of stray ints in the coefficients, without having a function call in the loop, and uses neither enumerate() nor reversed() in the setup:
>>> icoeffs = [2, 4, 6]
>>> tot = 0.0
>>> divisor = float(len(icoeffs))
>>> for coeff in icoeffs:
...     tot += coeff / divisor
...     divisor -= 1.0
...
>>> tot
8.6666666666666661
>>>
It might be overkill to resort to general-purpose numeric integration algorithms for your special case...if you work out the algebra, there's a simple expression that gives you the area.
You have a polynomial of degree 2: f(x) = ax^2 + bx + c
You want to find the area under the curve for x in the range [0,1].
The antiderivative F(x) = ax^3/3 + bx^2/2 + cx + C
The area under the curve from 0 to 1 is: F(1) - F(0) = a/3 + b/2 + c
So if you're only calculating the area for the interval [0,1], you might consider using this simple expression rather than resorting to the general-purpose methods.
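For example, with a = 2, b = 4, c = 6 (the same quadratic the other answers use), the closed form reproduces the 8.666... value directly:
a, b, c = 2.0, 4.0, 6.0
area = a / 3 + b / 2 + c
print(area)  # 8.666666666666666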
'quad' in scipy.integrate is the general purpose method for integrating functions of a single variable over a definite interval. In a simple case (such as the one described in your question) you pass in your function and the lower and upper limits, respectively. 'quad' returns a tuple comprised of the integral result and an upper bound on the error term.
from scipy import integrate as TG
fnx = lambda x: 3*x**2 + 9*x # some polynomial of degree two
aoc, err = TG.quad(fnx, 0, 1)
Note: after I posted this I noticed an answer, posted before mine, which represents polynomials using 'poly1d' in Numpy. My scriptlet just above can also accept a polynomial in this form:
import numpy as NP
px = NP.poly1d([2,4,6])
aoc, err = TG.quad(px, 0, 1)
# returns (8.6666666666666661, 9.6219328800846896e-14)
If one is integrating quadratic or cubic polynomials from the get-go, an alternative to deriving the explicit integral expressions is to use Simpson's rule; it is a deep fact that this method exactly integrates polynomials of degree 3 and lower.
To borrow Mike Graham's example (I haven't used Python in a while; apologies if the code looks wonky):
>>> import numpy
>>> p = numpy.poly1d([2, 4, 6])
>>> print p
2
2 x + 4 x + 6
>>> integrand = (1 - 0) * (p(0) + 4*p((0 + 1)/2.0) + p(1)) / 6.0
uses Simpson's rule to compute the value of integrand. You can verify for yourself that the method works as advertised.
Of course, I did not simplify the expression for integrand to indicate that the 0 and 1 can be replaced with arbitrary values u and v, and the code will still work for finding the integral of the function from u to v.
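Here is a small sketch of that generalized form, Simpson's rule from u to v, which is exact for any polynomial of degree 3 or lower:
import numpy

def simpson(f, u, v):
    # (v - u)/6 * (f(u) + 4*f((u+v)/2) + f(v)): exact for cubics and below
    return (v - u) * (f(u) + 4.0 * f((u + v) / 2.0) + f(v)) / 6.0

p = numpy.poly1d([2, 4, 6])
print(simpson(p, 0.0, 1.0))  # 8.666666666666666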
