Getting the non-trivial solution to a set of linear equations - python

I'm trying to write a program that will let me solve a system of equations using numpy; however, I want the solution to be non-trivial (not all zeros). Obviously the program is just going to set everything to 0 and, boom, problem solved. I attempted to use a while loop (like below), but quickly found out it's going to keep spitting 0 back at me. I don't care if I end up using numpy; I'm open to other solutions if they're more elegant.
I actually haven't solved this particular set by hand; maybe the trivial solution is the only solution. If so, the principle still applies: numpy always seems to spit 0 back.
Any help would be appreciated! Thanks.
import numpy as np

x1 = .5
x2 = .3
x3 = .2
x4 = .05
a = np.array([[x1, x2], [x3, x4]])
b = np.array([0, 0])
ans = np.linalg.solve(a, b)
while ans[0] == 0 and ans[1] == 0:
    print("got here")
    ans = np.linalg.solve(a, b)
print(ans)

In your case, the matrix a is invertible, so your system of linear equations has exactly one solution, and that solution is [0, 0]. Are you wondering why you only get that unique solution?
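If you actually want nontrivial solutions, the coefficient matrix has to be singular, and the solutions then form the matrix's null space. Here is a minimal sketch using scipy.linalg.null_space (assuming scipy is available, and using a deliberately singular example matrix rather than your a):

import numpy as np
from scipy.linalg import null_space

# A deliberately singular matrix: row 2 is twice row 1,
# so A @ x = 0 has nonzero solutions.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

ns = null_space(A)    # columns form an orthonormal basis of the null space
print(ns)             # a normalized multiple of (-2, 1), up to sign
print(A @ ns[:, 0])   # ~ [0, 0]: any scalar multiple is a solution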

Check out SymPy and its solve function and matrix operations. Here are the documentation pages for both:
http://docs.sympy.org/latest/tutorial/matrices.html
http://docs.sympy.org/latest/tutorial/solvers.html
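For instance, a minimal sketch of the matrix route (using a hypothetical singular matrix; Matrix.nullspace() returns basis vectors for the nontrivial solutions of A*x = 0):

from sympy import Matrix

# Singular example: the second row is twice the first.
A = Matrix([[1, 2],
            [2, 4]])

print(A.nullspace())   # [Matrix([[-2], [1]])] -> x = t * (-2, 1) for any t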

Related

Pinv not inverting my complex matrix entirely correctly

My code is quite extensive, so I'm not sure how to share it in a way that's easy to read, but my main question concerns the pinv function in numpy.linalg.
I am inverting a non-square complex matrix. After inverting, I find that the absolute values are correct, but either the real or the imaginary part (always one, never both) is negative when it should be positive, and vice versa.
I thought multiplying by -1 would resolve the problem, but as mentioned, it's never both signs that are wrong. Does anyone have any idea why pinv would do this?
I wrote code for this without using np.linalg.pinv, and it worked fine. Here is my code (X and Y are my matrices):
import numpy as np

Xt = np.transpose(X)   # note: for complex X, the Hermitian transpose X.conj().T is the usual choice
X1 = np.matmul(Xt, X)
X2 = np.matmul(X, Xt)
try:
    Xinv = np.linalg.inv(X1)      # left pseudoinverse: (Xt X)^-1 Xt
    W = np.matmul(Xinv, Xt)
    print("1")
except np.linalg.LinAlgError:
    Xinv = np.linalg.inv(X2)      # right pseudoinverse: Xt (X Xt)^-1
    W = np.matmul(Xt, Xinv)
    print("2")
# W = np.linalg.pinv(X, rcond=1e-5)
W = np.matmul(W, Y)
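For what it's worth, here is a hedged sanity check of that normal-equation approach against np.linalg.pinv on a small random complex matrix (random data, not the asker's). Note the Hermitian transpose X.conj().T: using a plain transpose on complex data produces exactly the kind of sign flips described above.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
Y = rng.standard_normal((5, 1))

Xh = X.conj().T                            # Hermitian (conjugate) transpose
W_normal = np.linalg.inv(Xh @ X) @ Xh @ Y  # left pseudoinverse via normal equations
W_pinv = np.linalg.pinv(X) @ Y             # SVD-based pseudoinverse

print(np.allclose(W_normal, W_pinv))       # True for this well-conditioned X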

Is there a programmable method for calculating the exponent value of a power sum

Say I have an equation:
a^x + b^x + c^x = n
Since I know a, b, c and n, is there a way to solve for x?
I have been struggling with this problem for a while now, and I can't seem to find a solution online.
My current method is to iterate over x until the left-hand side is "close enough" to n. This is pretty slow, and it sits inside an already computationally expensive algorithm.
Example:
3^x + 5^x + 7^x = 83
How do I go about solving for x? (It's 2 in this case.)
I tried the equation in WolframAlpha and it seems to know how to solve it, but any other program fails to do so.
I should probably also mention that x is not an integer (it's mostly in the 0.01 to 0.05 range in my case).
You can use the scipy library; you can install it with the command pip install scipy.
Then this code will work:
from scipy.optimize import root

def eqn(x):
    return 3**x + 5**x + 7**x - 83

myroot = root(eqn, 2)
print(myroot.x)
Here, root takes two arguments, root(fun, x0), where fun is the function defining the equation and x0 is a rough estimate of the root. For example, if you know that your root will fall in the range (0, 1), you can pass 0 as the rough estimate.
Also make sure the equation is rearranged so that the right-hand side equals 0.
In our case, 3^x + 5^x + 7^x = 83 becomes 3^x + 5^x + 7^x - 83 = 0.
Reference Documentation
If you want to stick to base Python, it is easy enough to implement Newton's method for this problem:
from math import log

def solve(a, b, c, n, guess, tol=1e-12):
    x = guess
    for i in range(100):
        # Newton step: x_new = x - f(x)/f'(x), with f(x) = a**x + b**x + c**x - n
        x_new = x - (a**x + b**x + c**x - n) / (log(a)*a**x + log(b)*b**x + log(c)*c**x)
        if abs(x - x_new) < tol:
            return x_new
        x = x_new
    return "Doesn't converge on a root"
Newton's method can fail to converge in some pathological cases, hence the escape valve; in practice it converges very rapidly.
For example:
>>> solve(3,5,7,83,1)
2.0
Despite all this, I think that Cute Panda's answer is superior. It is easy enough to do a straightforward implementation of such numerical algorithms, one that works adequately in most cases, but naive implementations like the one given above tend to be vulnerable to excessive round-off error, among other problems. scipy uses highly optimized routines that are implemented much more robustly.
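For instance, when a bracketing interval is known, scipy's brentq is a particularly robust choice. A sketch, under the assumption that the root lies somewhere in [0, 10]:

from scipy.optimize import brentq

def f(x):
    return 3**x + 5**x + 7**x - 83

# f(0) = -80 < 0 and f(10) > 0, so a root is bracketed in [0, 10]
x = brentq(f, 0, 10)
print(x)   # 2.0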

Solve Linear Equation with constraints

I am pretty new to the subject of linear programming and would appreciate any pointers.
I have a slightly complicated equation but here is a simpler version of the problem:
x1 + x2 = 10
#subject to the following constraints:
0 <= x1 <= 5 and
3x1 <= x2 <= 20
Basically, x2 has to have a value greater than 3 times that of x1. So in this case the solutions are x1 = [0, 1, 2] and, correspondingly, x2 = [10, 9, 8].
There is a lot of material out there on minimizing or maximizing an objective function, but this is not one of those problems. What do you call solving this type of problem, and what is the recommended way to solve it, preferably using some Python libraries, to find one or multiple feasible solutions?
Your problem could be stated as
min 0*x1 + 0*x2 ("zero coefficients")
subject to
x1 + x2 = 10
3*x1 - x2 <= 0
x2 <= 20 (note that this constraint follows from x1, x2 >= 0 and their sum being 10)
This can easily be fed into a linear programming package such as pulp. I am more of an R user than a Python user, so I cannot provide details. You could also solve it online without any programming.
EDIT: rereading your question, I see that your desired solutions are not continuous (e.g. it seems you are not looking for [2.5, 7.5] as a solution) but are restricted to integer values. The problem would then be called a "mixed integer problem" instead of a "linear problem". Pulp, however, should be able to solve it if you declare the variables x1 and x2 as integers.
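For what it's worth, a minimal pulp sketch of that mixed-integer formulation might look like this (a sketch of the toy problem above, not of your full model; the constant objective just asks for any feasible point):

import pulp

prob = pulp.LpProblem("feasibility", pulp.LpMinimize)
x1 = pulp.LpVariable("x1", lowBound=0, upBound=5, cat="Integer")
x2 = pulp.LpVariable("x2", lowBound=0, upBound=20, cat="Integer")

prob += 0                  # constant ("zero coefficients") objective: any feasible point will do
prob += x1 + x2 == 10
prob += 3 * x1 - x2 <= 0   # i.e. x2 >= 3*x1

prob.solve()
print(pulp.LpStatus[prob.status], pulp.value(x1), pulp.value(x2))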
Another point is whether you are after ALL integer solutions given the constraints. There have been some discussions about that here on Stack Overflow; however, I am unsure whether pulp can do that out of the box.

Can I make a Min Z = max(a,b,c) in PuLP

I was wondering if I could make a multiple-objective function in PuLP, following this question: Can I make a Min Z = max(a,b,c) in PuLP. However, when using this code
ilp_prob = pulp.LpProblem("Minimize Problem", pulp.LpMinimize)
x = []
if m > 3:
    return 1, 1
for i in range(m):
    temp = []
    for j in range(len(jobs)):
        temp += [pulp.LpVariable("x_%s_%s" % ((i+1), (j+1)), 0, 1, cat='Binary')]
    x += [temp]
ilp_prob += max([pulp.lpSum([jobs[j]*x[i][j] for j in range(len(jobs))]) for i in range(m)])
for i in range(len(jobs)):
    ilp_prob += pulp.lpSum([x[j][i] for j in range(m)]) == 1
ilp_prob.solve()
It just returns all 1s in x[0] and all 0s in x[1].
I'm pretty sure you can't just use Python's (!) max on PuLP's internal expressions. Those solvers work on a very specific problem specification, LP standard form, which has no concept of that.
The exception would be if PuLP overloaded this max function for its data structures (I don't know if that's possible at all in Python), but I'm pretty sure PuLP does not support reformulations like that (some reformulation is needed, as again: the target is the standard form).
cvxpy, for example, does not overload Python's max but introduces a customized max function, which internally transforms your problem.
That being said, I'm surprised your code runs without a critical error. But I'm too lazy to check PuLP's sources here.
Have a look at the usual LP/IP formulation guides.
A first idea would be:
target: min(max(a, b, c))
reformulation:
introduce a new variable z
add the constraints:
z >= a
z >= b
z >= c
assumption: the objective somehow wants to minimize z (maximizing will get you in trouble, as the problem becomes unbounded!)
this is the case here, as the final objective for our target would look like:
min(z)
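In PuLP terms, a minimal sketch of that reformulation could look like this (a, b, c stand in for whatever expressions you are taking the max of; this is a sketch of the technique, not a fix for the full model above):

import pulp

prob = pulp.LpProblem("min_max", pulp.LpMinimize)
a = pulp.LpVariable("a", lowBound=0)
b = pulp.LpVariable("b", lowBound=0)
c = pulp.LpVariable("c", lowBound=0)
z = pulp.LpVariable("z")

prob += z          # objective: minimize the auxiliary variable z
prob += z >= a     # together these force z = max(a, b, c) at the optimum
prob += z >= b
prob += z >= c
# ... plus the constraints that tie a, b, c to the rest of the model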
Remark: one has to be careful that the problem stays linear/convex (depending on the solver). In this case (our simple example; I did not check your whole model) I don't see a problem, but in more complex cases, min(max(complex_expression)) subject to complex constraints, this might introduce non-convexity (and then it can't be solved by conic solvers, including LP solvers).
And just to throw a keyword into the ring: your approach/objective sounds a bit like robust optimization, where usually some worst-case scenario is optimized. Not all multi-objective optimization problems treat multiple objective components like that.

How to handle number overflow?

I am calculating a trend line slope using numpy:
from numpy import matrix, random

xs = []
ys = []
my_x = 0
for i in range(2000):
    my_x += 1
    ys.append(5*my_x + random.rand())
    xs.append(my_x)

A = matrix(xs).T
b = matrix(ys).T
N = A.T*A
U = A.T*b
print(N, U)
a = (N.I*U)[0, 0]
print(a)
The result I get is a = -8.2053307679 instead of the expected 5. It probably happens because the numbers in the variable N are too big.
How can I overcome this problem? Any help will be appreciated.
When I run the code, the answer is as you would expect:
[[2668667000]] [[ 1.33443472e+10]]
5.00037927592
It's probably due to the fact that you're on a 32-bit system and I'm on a 64-bit system, so the default integer dtype overflows for you (N is about 2.7e9, which doesn't fit in an int32). Instead, you can use
A = matrix(xs, dtype='float64').T
b = matrix(ys, dtype='float64').T
Just FYI, when using numpy you'll be much more efficient if you work on vectorizing your algorithms. For example, you could replace the first several lines with this:
xs = np.arange(2000)
ys = 5 * xs + np.random.rand(2000)
Edit: one more thing. Numerically, it is a bad idea to explicitly invert matrices when doing computations like these. It would be better to use something like a = np.linalg.solve(N, U)[0, 0] in your algorithm. It won't make a big difference here, but if you move to more complicated problems it definitely will! For some discussion of this, take a look at this article.
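Putting those two suggestions together, a minimal end-to-end sketch might look like this:

import numpy as np

xs = np.arange(1, 2001, dtype=np.float64)
ys = 5 * xs + np.random.rand(2000)

A = xs[:, np.newaxis]            # 2000 x 1 design matrix, float64 so nothing overflows
N = A.T @ A                      # normal equations, as in the question
U = A.T @ ys[:, np.newaxis]
a = np.linalg.solve(N, U)[0, 0]  # solve instead of explicit inversion
print(a)                         # ~5.0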
:) The problem was solved by using:
A = matrix(xs, float64).T
b = matrix(ys, float64).T
(where float64 is numpy.float64)
