Numpy: Solve linear equation system with one unknown + number - python

I would like to solve a linear equation system in numpy in order to check whether a point lies on the line defined by a vector or not.
Given are the following two equations:
point[x] = vector1[x] + λ * vector2[x]
point[y] = vector1[y] + λ * vector2[y]
NumPy's linalg.solve() offers the option to solve two equations of the form:
ax + by = c
by defining the parameters a and b in a numpy.array().
But I can't seem to find a way to deal with equations with one fixed parameter like:
m*x + b = 0
Am I missing something, or do I have to take another approach?
Thanks in advance!

Hi, I will give this question a try.
The numpy.linalg.solve documentation says:
Computes the “exact” solution, x, of the well-determined, i.e., full rank, linear matrix equation ax = b.
Note the assumptions made on the matrix!
Lambda the same
If the lambda in your point[x] and point[y] equations should be the same, then just concatenate all the vectors:
x_new = np.concatenate([x,y])
vec1_new = np.concatenate([vec1_x,vec1_y])
...
This will probably overdetermine your system, meaning you have too many equations and only one parameter to determine (the well-determined assumption is violated). My approach would be to go with least squares.
numpy.linalg.lstsq provides a least-squares method, in which the equation y = mx + c is solved. For your case this is y = point[x], x = vector2[x] and c = vector1[x].
This is copied from the numpy.linalg.lstsq example:
x = np.array([0, 1, 2, 3])
y = np.array([-1, 0.2, 0.9, 2.1])
A = np.vstack([x, np.ones(len(x))]).T  # stack x and a column of ones as the design matrix
m, c = np.linalg.lstsq(A, y, rcond=None)[0]
Lambda different
If the lambdas are different, stack vector2[x] and vector2[y] horizontally and you have [lambda_1, lambda_2] to find. You will probably also have more equations than lambdas, and so will find a least-squares solution.
Note
Keep in mind that even if you construct your system from a straight line and a fixed lambda, you might need a least-squares approach due to rounding and numeric differences.
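Putting this together for the original point/vector question: below is a minimal sketch (the values of point, vector1 and vector2 are made-up examples) that solves for a shared lambda with np.linalg.lstsq and checks whether the point lies on the line.
import numpy as np

# Hypothetical example values; substitute your own point and vectors.
point = np.array([3.0, 5.0])
vector1 = np.array([1.0, 1.0])
vector2 = np.array([1.0, 2.0])

# point = vector1 + lam * vector2  =>  vector2 * lam = point - vector1
A = vector2.reshape(-1, 1)   # two equations, one unknown lam
b = point - vector1
lam = np.linalg.lstsq(A, b, rcond=None)[0]

# The point lies on the line if the residual is (numerically) zero.
print(lam, np.allclose(A @ lam, b))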

You can solve an equation like 2*x + 4 = 0 with sympy:
from sympy.abc import x
from sympy import Eq, solve
eq = Eq(2 * x + 4, 0)
print(solve(eq))


How do I get multiple outputs for 2 simultaneous equations in python?

I'm trying to plot a graph with 2 simultaneous equations, but I don't need to solve them; I'm just trying to get multiple results from substitution, like when x is 1 or when y is 0.
My equations are 5x + 2y = 20, y = 2x + 1
All the solutions I found only solve the equations, rather than substituting values to get multiple results.
Help please! Preferably with numpy or sympy functions; I'm trying to learn those.
Expanding on the answer from @Michael Rovinsky above, since you mentioned plotting, I would modify the code as follows:
import matplotlib.pyplot as plt  # used for plotting

def get_f1_val(x):
    return x * -2.5 + 10

def get_f2_val(x):
    return x * 2 + 1

# Select the values you want to use for x; you can use a for loop if there's
# no specific value needed, or if you need a lot of x values for the graph.
x_val = [1, 2, 3, 4, 5]
f1_y_val = []
f2_y_val = []
for xval in x_val:
    f1_y_val.append(get_f1_val(xval))
    f2_y_val.append(get_f2_val(xval))

plt.plot(x_val, f1_y_val)
plt.plot(x_val, f2_y_val)
plt.show()
And technically you do have to solve the equations to plot the graph, no matter what; there isn't a way to plot something without "solving" it.
You are asking two different questions: how to plot and how to get values. If you are plotting, the plotting engine will supply the values; you just have to put the equations in a form that it can work with, in this case as univariate equations.
>>> from sympy import var, solve, Eq, plot
>>> var('x y')
(x, y)
>>> eqs = Eq(5*x + 2*y, 20), Eq(y, 2*x + 1) # put into Eq form
Since the equations are linear in y we can use the single solution for y -- an expression in terms of x -- as the expressions to plot:
>>> plot(*[solve(i,y)[0] for i in eqs], (x,-1,1))
(Note: plot_implicit can plot a single equation in two variables without you having to solve for one or the other.)
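For example, a minimal sketch of that route, reusing eqs from above (the plot ranges here are assumptions chosen to cover both lines):
>>> from sympy import plot_implicit
>>> p1 = plot_implicit(eqs[0], (x, -1, 1), (y, -2, 15), show=False)
>>> p2 = plot_implicit(eqs[1], (x, -1, 1), (y, -2, 15), show=False)
>>> p1.extend(p2)  # overlay the second plot on the first
>>> p1.show()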
Normalize 5x + 2y = 20 to y = -2.5x + 10:
def get_values(x):
    return [x * -2.5 + 10, x * 2 + 1]

print(get_values(-1))
print(get_values(0))
print(get_values(1))

How to solve nonlinear equations using a for loop in python?

I am trying to solve nonlinear equations in Python. I have tried using SymPy's solver, but it doesn't seem to work in a for loop statement. I am trying to solve for the variable x over a range of inputs [N].
I have attached my code below:
import numpy as np
import matplotlib.pyplot as plt
from sympy import *

f_curve_coefficients = [-7.14285714e-02, 1.96333333e+01, 6.85130952e+03]
S = [0.2122, 0, 0]

a2 = f_curve_coefficients[0]
a1 = f_curve_coefficients[1]
a0 = f_curve_coefficients[2]

s2 = S[0]
s1 = S[1]
s0 = S[2]

answer = []
x = symbols('x')
for N in range(0, 2500, 5):
    solve([a2*x**2 + a1*N*x + a0*N**2 - s2*x**2 - s1*x - s0 - 0])
    answer.append(x)
print(answer)
There could be more efficient ways of solving this problem than using sympy; any help will be much appreciated.
Note I am still new to Python, after transitioning from Matlab. I could easily solve this problem in Matlab (and could attach the code), but I am battling with it in Python.
Answering your question "There could be more efficient ways of solving this problem than using sympy":
you can use fsolve to find the roots of a nonlinear equation.
fsolve returns the roots of the (non-linear) equations defined by func(x) = 0 given a starting estimate
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html
Below is the code:
from scipy.optimize import fsolve
import numpy as np

def f(variables):
    (x, y) = variables
    first_eq = 2*x + y - 1
    second_eq = x**2 + y**2 - 1
    return [first_eq, second_eq]

roots = fsolve(f, (-1, -1))  # fsolve(equations, X_0)
print(roots)
# [ 0.8 -0.6]
print(np.isclose(f(roots), [0.0, 0.0]))  # func(root) should be almost 0.0.
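As a sketch of how this applies to your loop over N (reusing the coefficient names from your question; the starting estimate of 1.0 is an arbitrary assumption):
from scipy.optimize import fsolve

a2, a1, a0 = -7.14285714e-02, 1.96333333e+01, 6.85130952e+03
s2, s1, s0 = 0.2122, 0, 0

answer = []
for N in range(0, 2500, 5):
    # the same expression as in the question, as a plain function of x
    def g(x, N=N):
        return a2*x**2 + a1*N*x + a0*N**2 - s2*x**2 - s1*x - s0
    answer.append(fsolve(g, 1.0)[0])  # 1.0 is an arbitrary starting estimate
print(answer[:5])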
If you prefer sympy you can use nsolve.
>>> nsolve([x+y**2-4, exp(x)+x*y-3], [x, y], [1, 1])
[0.620344523485226]
[1.83838393066159]
The first argument is a list of equations, the second is list of variables and the third is an initial guess.
Also, for details you can check out a similar question asked earlier on Stack Overflow regarding ways to solve nonlinear equations in Python:
How to solve a pair of nonlinear equations using Python?
According to this documentation, the output of solve is the solution. Nothing is assigned to x; that's still just the symbol.
x = symbols('x')
for N in range(0, 2500, 5):
    result = solve(a2*x**2 + a1*N*x + a0*N**2 - s2*x**2 - s1*x - s0)
    answer.append(result)

numpy fit coefficients to linear combination of polynomials

I have data that I want to fit with polynomials. I have 200,000 data points, so I want an efficient algorithm. I want to use the numpy.polynomial package so that I can try different families and degrees of polynomials. Is there some way I can formulate this as a system of equations like Ax=b? Is there a better way to solve this than with scipy.minimize?
import numpy as np
from scipy.optimize import minimize as mini

x1 = np.random.random(2000)
x2 = np.random.random(2000)
y = 20 * np.sin(x1) + x2 - np.sin(30 * x1 - x2 / 10)

def fitness(x, degree=5):
    poly1 = np.polynomial.polynomial.polyval(x1, x[:degree])
    poly2 = np.polynomial.polynomial.polyval(x2, x[degree:])
    return np.sum((y - (poly1 + poly2)) ** 2)

# It seems like I should be able to solve this as a system of equations
# x = np.linalg.solve(np.concatenate([x1, x2]), y)

# minimize the sum of the squared residuals to find the optimal polynomial coefficients
x = mini(fitness, np.ones(10))
print(fitness(x.x))
Your intuition is right. You can solve this as a system of equations of the form Ax = b.
However:
The system is overdetermined and you want the least-squares solution, so you need to use np.linalg.lstsq instead of np.linalg.solve.
You can't use polyval because you need to separate the coefficients and powers of the independent variable.
This is how to construct the system of equations and solve it:
A = np.stack([x1**0, x1**1, x1**2, x1**3, x1**4, x2**0, x2**1, x2**2, x2**3, x2**4]).T
xx = np.linalg.lstsq(A, y)[0]
print(fitness(xx)) # test the result with original fitness function
Of course you can generalize over the degree:
A = np.stack([x1**p for p in range(degree)] + [x2**p for p in range(degree)]).T
With the example data, the least squares solution runs much faster than the minimize solution (800µs vs 35ms on my laptop). However, A can become quite large, so if memory is an issue minimize might still be an option.
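As a side note, a possible shortcut for building A is numpy's polyvander, which generates the power columns x**0 through x**(deg) directly (a sketch, equivalent to the stacking above):
from numpy.polynomial import polynomial as P

degree = 5
A = np.hstack([P.polyvander(x1, degree - 1), P.polyvander(x2, degree - 1)])
xx = np.linalg.lstsq(A, y, rcond=None)[0]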
Update:
Without any knowledge about the internals of the polynomial function things become tricky, but it is possible to separate terms and coefficients. Here is a somewhat ugly way to construct the system matrix A from a function like polyval:
def construct_A(valfunc, degree):
    columns1 = []
    columns2 = []
    for p in range(degree):
        c = np.zeros(degree)
        c[p] = 1
        columns1.append(valfunc(x1, c))
        columns2.append(valfunc(x2, c))
    return np.stack(columns1 + columns2).T

A = construct_A(np.polynomial.polynomial.polyval, 5)
xx = np.linalg.lstsq(A, y)[0]
print(fitness(xx))  # test the result with original fitness function

Get all positive integral solutions for a linear equation

A game I played has a riddle that involves solving the following equation:
x*411 + y*295 + z*161 = 3200
Not wanting to think, I just slapped it into sympy, which I hadn't really used up to that point:
>>> from sympy import *
>>> x, y, z = symbols('x y z', integer=True, positive=True)
>>> solve(x*411 + y*295 + z*161 - 3200, [x, y, z])
[{x: -295*y/411 - 161*z/411 + 3200/411}]
Hmm, this only gave me a dependent solution, but I want all possible solutions in the domain I constrained the variables to, e.g. (assuming there are no other solutions) [{x: 4, y: 2, z:6}] or [(4, 2, 6)]
Of course I could now manually substitute two variables in a nested loop, or solve it by hand (as I did to get the solution above), but I want to know how to get sympy (or another library) to do it for me.
SymPy can solve Diophantine equations but doesn't have a built-in way to generate positive solutions. With Sage one can do that easily: here is a four-line snippet that generates all nonnegative integer solutions of your equation.
p = MixedIntegerLinearProgram()
w = p.new_variable(integer=True, nonnegative=True)
p.add_constraint(411*w[0] + 295*w[1] + 161*w[2] == 3200)
p.polyhedron().integral_points()
The output is ((4, 2, 6),)
Behind the scenes, integral_points will most likely just run nested loops, although when that doesn't seem to work it tries to use Smith normal form.
I know you wanted positive solutions, but (a) it's easy to exclude any zero-containing tuples from the answer; (b) it's also easy to replace x by x-1, etc., prior to solving; and (c) sticking to "nonnegative" makes it easy to create a polyhedron using the Mixed Integer Linear Programming module, as above.
According to the documentation one can also build a Polyhedron object directly from a system of inequalities ("Hrep"). This would allow one to say x >= 1 explicitly, etc., but I haven't succeeded with this route.
With SymPy
The output of SymPy's Diophantine module is a parametric solution, like
(t_0, 2627*t_0 + 161*t_1 - 19200, -4816*t_0 - 295*t_1 + 35200)
in your example. This can be used in a loop to generate solutions in a pretty efficient way. The sticky point is finding bounds for parameters t_0 and t_1. Since this is just an illustration, I looked at the last expression above and plugged the limits 35200/4816 and 35200/295 directly in the loops below.
from sympy import *

x, y, z = symbols('x y z')
[s] = diophantine(x*411 + y*295 + z*161 - 3200)
print(s)
t_0, t_1 = s[2].free_symbols
for t0 in range(int(35200/4816) + 1):
    for t1 in range(int(35200/295) + 1):
        sol = [expr.subs({t_0: t0, t_1: t1}) for expr in s]
        if min(sol) > 0:
            print(sol)
The output is [4, 2, 6].

Calculating the area underneath a mathematical function

I have a range of data that I have approximated using a polynomial of degree 2 in Python. I want to calculate the area underneath this polynomial between 0 and 1.
Is there a calculus package, or something similar, in numpy that I can use, or should I just write a simple function to integrate these polynomials?
I'm a little unclear what the best approach for defining mathematical functions is.
Thanks.
If you're integrating only polynomials, you don't need to represent a general mathematical function; use numpy.poly1d, which has an integ method for integration.
>>> import numpy
>>> p = numpy.poly1d([2, 4, 6])
>>> print(p)
   2
2 x + 4 x + 6
>>> i = p.integ()
>>> i
poly1d([ 0.66666667,  2.        ,  6.        ,  0.        ])
>>> integral = i(1) - i(0)  # use call notation to evaluate a poly1d
>>> integral
8.6666666666666661
For integrating arbitrary numerical functions, you would use scipy.integrate with normal Python functions as the integrands. For integrating functions analytically, you would use sympy. It doesn't sound like you want either in this case, especially not the latter.
Look, Ma, no imports!
>>> coeffs = [2., 4., 6.]
>>> sum(coeff / (i+1) for i, coeff in enumerate(reversed(coeffs)))
8.6666666666666661
>>>
Our guarantee: Works for a polynomial of any positive degree or your money back!
Update from our research lab: Guarantee extended; s/positive/non-negative/ :-)
Update: Here's the industrial-strength version that is robust in the face of stray ints in the coefficients, without a function call in the loop, and that uses neither enumerate() nor reversed() in the setup:
>>> icoeffs = [2, 4, 6]
>>> tot = 0.0
>>> divisor = float(len(icoeffs))
>>> for coeff in icoeffs:
... tot += coeff / divisor
... divisor -= 1.0
...
>>> tot
8.6666666666666661
>>>
It might be overkill to resort to general-purpose numeric integration algorithms for your special case... if you work out the algebra, there's a simple expression that gives you the area.
You have a polynomial of degree 2: f(x) = ax² + bx + c
You want to find the area under the curve for x in the range [0, 1].
The antiderivative is F(x) = ax³/3 + bx²/2 + cx + C
The area under the curve from 0 to 1 is: F(1) - F(0) = a/3 + b/2 + c
So if you're only calculating the area for the interval [0, 1], you might consider using this simple expression rather than resorting to the general-purpose methods.
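As a quick check with the coefficients used in the other answers (a = 2, b = 4, c = 6):
a, b, c = 2, 4, 6
print(a / 3 + b / 2 + c)  # 8.666..., matching the poly1d result above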
'quad' in scipy.integrate is the general purpose method for integrating functions of a single variable over a definite interval. In a simple case (such as the one described in your question) you pass in your function and the lower and upper limits, respectively. 'quad' returns a tuple comprised of the integral result and an upper bound on the error term.
from scipy import integrate as TG
fnx = lambda x: 3*x**2 + 9*x # some polynomial of degree two
aoc, err = TG.quad(fnx, 0, 1)
[Note: after I posted this I saw an answer, posted before mine, which represents polynomials using 'poly1d' in NumPy. My scriptlet just above can also accept a polynomial in this form:
import numpy as NP
px = NP.poly1d([2,4,6])
aoc, err = TG.quad(px, 0, 1)
# returns (8.6666666666666661, 9.6219328800846896e-14) ]
If one is integrating quadratic or cubic polynomials from the get-go, an alternative to deriving the explicit integral expressions is to use Simpson's rule; it is a deep fact that this method exactly integrates polynomials of degree 3 and lower.
To borrow Mike Graham's example (I haven't used Python in a while; apologies if the code looks wonky):
>>> import numpy
>>> p = numpy.poly1d([2, 4, 6])
>>> print(p)
   2
2 x + 4 x + 6
>>> integral = (1 - 0) * (p(0) + 4*p((0 + 1)/2) + p(1)) / 6
The last line uses Simpson's rule to compute the value of the integral. You can verify for yourself that the method works as advertised.
Of course, I deliberately did not simplify the expression for the integral, to indicate that the 0 and 1 can be replaced with arbitrary values u and v, and the code will still find the integral of the function from u to v.
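A minimal sketch of that generalization (the helper name simpson is just illustrative):
def simpson(p, u, v):
    # Simpson's rule; exact for polynomials of degree 3 and lower
    return (v - u) * (p(u) + 4 * p((u + v) / 2) + p(v)) / 6

print(simpson(p, 0, 1))  # 8.666..., same as before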
