I have two expressions (eq1 and eq2) and an inequality relating them. I would like to simplify these to a single inequality (I do not need to solve for the unknowns). Here is my code:
import numpy as np
from sympy.abc import x, y
from sympy import reduce_inequalities

# array_1, Q, and delta are defined earlier (see below)
eq1 = np.multiply(array_1[0], (1 - delta)) + delta*Q[0]*[x, y]
eq2 = np.multiply(array_1[1], (1 - delta)) + delta*Q[1]*[x, y]
print(reduce_inequalities(eq1, eq2))
array_1 is a 1x4 array I defined previously, and I only need one element at a time (I choose the elements by slicing the array). Q is a 4x2 array I defined previously. The unknowns are x and y. The output I get is True.
Is there any way to simplify this with sympy or any other Python library in such a way that I could use the simplified version later on?
Edit: I forgot to mention delta is also defined previously.
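For what it's worth, here is a self-contained sketch of the setup with made-up stand-ins for array_1, Q, and delta, and with the two expressions combined into an explicit inequality, which is what reduce_inequalities expects (treat the values and the choice of solving for x as illustrative only):

import numpy as np
from sympy import symbols, reduce_inequalities

x, y = symbols('x y', real=True)

# Made-up stand-ins for the arrays the question defines elsewhere
delta = 0.5
array_1 = np.array([1.0, 2.0, 3.0, 4.0])
Q = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0],
              [7.0, 8.0]])

eq1 = array_1[0]*(1 - delta) + delta*(Q[0, 0]*x + Q[0, 1]*y)
eq2 = array_1[1]*(1 - delta) + delta*(Q[1, 0]*x + Q[1, 1]*y)

# reduce_inequalities needs a relational, not two bare expressions
print(reduce_inequalities(eq1 >= eq2, [x]))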
My task is to integrate the following equation for 1D Fresnel diffraction (the equation, highlighted in red, was in an image that is not reproduced here):
The point is that you are Fourier transforming the aperture to a pattern on the screen, but for now I am just focusing on integrating the horizontal strip in 1D (ignoring the height for now), so yprime is ignored. You also have fixed z and k, and j is the imaginary unit. I have written the following code for it:
import math
import numpy as np
import cmath
k=5
z=5
x=0
j=cmath.sqrt(-1)
func=math.exp((j*k/2*z)(x-xp)*(x-xp))
def X(xp1, xp2, function, N):
    h = (xp2 - xp1)/N
    y = 0.0
    xp = xp1
    for x in np.arange(1, N/2 + 1):  # summing odd order y terms
        y += 4*f(xp)
        xp += 2*h
    xp = xp1 + 2*h
    for x in np.arange(0, N/2):  # summing even order y terms
        y += 2*f(x)
        xp += 2*h
    integral = (h/3)*(y + f(xp1) + f(xp2))
    return integral
print(simpson(0,5,func,10))
However, it says that xp is not defined, but I have clearly defined xp in the function.
Does anyone have an idea what could be wrong?
Thanks
EDIT: here is a neater version of my code, but it is still asking me to define xp.
import math
import cmath
import numpy as np

lamda = 0.2
k = (2*math.pi)/lamda
z = 0.1

def expfunc(x, xp):
    func = math.exp(((1j)*k/2*z)(x-(xp))*(x-(xp)))
    return(func)

def X(xp1, xp2, x, f, N):
    h = (xp2 - xp1)/N
    y = 0.0
    xp = xp1
    for i in np.arange(1, N/2 + 1):  # summing odd order y terms
        y += 4*f(xp)
        xp += 2*h
    xp = xp1 + 2*h
    for i in np.arange(0, N/2):  # summing even order y terms
        y += 2*f(xp)
        xp += 2*h
    integral = (h/3)*(y + f(xp1) + f(xp2))
    return integral

print(X(0, 1, x, expfunc, 10))
You try to use the variable xp before you have defined it.
import math
import numpy as np
import cmath
k=5
z=5
x=0
j=cmath.sqrt(-1)
func=math.exp((j*k/2*z)(x-xp)*(x-xp)) #xp is not defined yet
You gave initial values for everything else except xp.
When you define func as you did,
func=math.exp((j*k/2*z)(x-xp)*(x-xp))
you define a single value called func. What you probably want is something like this:
func = lambda x, xp: cmath.exp((j*k/2*z)*(x - xp)*(x - xp))  # note the added * that was missing; cmath.exp handles the complex argument
and then change the calls of func to
y += 4*f(x, xp)
and
y += 2*f(x, xp)
I believe the issue is y+=4*f(xp) inside the first for loop of the function X.
At the very end you have print(X(0,1,x,expfunc,10)), where expfunc is acting as f in the line y+=4*f(xp). The function expfunc takes two arguments, one of them named xp. Although the variable you pass to f is named xp, the function only sees that you have passed in the first argument, x, and not the second argument, xp.
Further, I do not see the variable x in print(X(0,1,x,expfunc,10)) defined anywhere.
Also, the second snippet of code is quite different from the first. If the same questions apply, then you should remove the first snippet altogether and/or rephrase your questions, because from what I see in the second chunk the error you claim to be getting should not be raised.
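Putting those fixes together, here is one way a corrected version might look. This is only a sketch: it adds the missing multiplication operator, uses cmath.exp since the argument is complex, passes both arguments to f, gives x a concrete value, and groups the exponent as k/(2*z) on the assumption that the Fresnel factor k/(2z) was intended rather than the literal k/2*z from the original code:

import math
import cmath

lamda = 0.2
k = (2*math.pi)/lamda
z = 0.1

def expfunc(x, xp):
    # explicit *, cmath.exp for the complex argument,
    # and k/(2*z) assuming the Fresnel factor k/(2z) is intended
    return cmath.exp((1j*k/(2*z))*(x - xp)**2)

def X(xp1, xp2, x, f, N):
    # composite Simpson's rule over xp in [xp1, xp2]; N must be even
    h = (xp2 - xp1)/N
    total = f(x, xp1) + f(x, xp2)
    for i in range(1, N):
        total += (4 if i % 2 else 2)*f(x, xp1 + i*h)
    return (h/3)*total

print(X(0, 1, 0.0, expfunc, 10))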
So I am starting with an equation setting an expression equal to a fraction, which I use to solve for both x and y:
import sympy
from sympy.abc import x, y

mrs = y/x
ratio = sympy.Rational(2, 5)  # exact 2/5; a Python float here would give {0.4*x}
x = sympy.solveset(sympy.Eq(mrs, ratio), x)
y = sympy.solveset(sympy.Eq(mrs, ratio), y)
In the end, solving for y returns:
{2*x/5}
Which is a FiniteSet
But solving for x returns:
{5*y/2} \ {0}
Which is a Complement
I don't get why solving for one variable gives me a FiniteSet while solving for the other doesn't. Also, would there be a way to solve for the other variable so as to get a FiniteSet instead of a Complement?
What do you expect as a result? Could you solve this problem by hand and write down the expected solution? And why would you want a FiniteSet as the solution?
I myself cannot come up with a better notation than sympy's, since x=0 needs to be excluded.
When you continue working with the solutions, sympy can easily work with both FiniteSet and Complement. Mathematically, these are not completely different structures. The difference is that sympy needs to represent these solutions internally and cannot use the same construction for everything; rather, it uses small building blocks to create the solution. The result you get with type(x) is simply the last building block used.
EDIT: Some math here: x=0 does not solve the equation y/x=2/5 for any y, so it must be excluded from the solution set.
If you solve for y, then x=0 is already excluded, since y/0 is not well defined.
If you solve for x, then y=0 is a priori possible, since 0/x=0 for x!=0. Thus sympy needs to exclude x=0 manually, which it does by removing 0 from the set of solutions.
Now, since we know that x=0 can never be a solution of the equation we can exclude it before even trying to solve the equation. Therefore we do
x = sympy.symbols('x', real=True, nonzero=True)
right at the beginning of the example (before the definition of mrs). The rest can remain unchanged.
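A minimal sketch of the full example with that change; whether solveset drops the exclusion automatically may depend on the SymPy version, so treat the printed result as illustrative:

import sympy

x = sympy.symbols('x', real=True, nonzero=True)
y = sympy.symbols('y', real=True)

mrs = y/x
ratio = sympy.Rational(2, 5)
sol_x = sympy.solveset(sympy.Eq(mrs, ratio), x)
print(sol_x)  # ideally just {5*y/2}, with no Complement needed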
I need to solve this:
Check if A^T * n * A = n, where A is the test matrix, A^T is its transpose, and n = [[1,0,0,0],[0,-1,0,0],[0,0,-1,0],[0,0,0,-1]].
I don't know how to check for equality due to the numerical errors in the float multiplication. How do I go about doing this?
Current code:
def trans(A):
    n = numpy.matrix([[1,0,0,0],[0,-1,0,0],[0,0,-1,0],[0,0,0,-1]])
    c = numpy.matrix.transpose(A) * n * numpy.matrix(A)
I have then tried
if c == n:
    return True
I have also tried assigning variables to every element of matrix and then checking that each variable is within certain limits.
Typically, the way that numerical-precision limitations are overcome is by allowing for some epsilon (or error-value) between the actual value and expected value that is still considered 'equal'. For example, I might say that some value a is equal to some value b if they are within plus/minus 0.01. This would be implemented in python as:
def float_equals(a, b, epsilon):
    return abs(a - b) < epsilon
Of course, for matrices entered as lists, this isn't quite so simple. We have to check that all values are within the epsilon of their partner. One example solution would be as follows, assuming your matrices are standard Python lists:
from itertools import product  # need this to generate indexes

def matrix_float_equals(A, B, epsilon):
    return all(abs(A[i][j] - B[i][j]) < epsilon
               for i, j in product(range(len(A)), repeat=2))
all returns True iff every value in an iterable is truthy (an element-wise "and"). product generates the Cartesian product of its input iterables, with the repeat keyword as a shorthand for repeating the same iterable; given a range repeated twice, it produces a tuple for every (i, j) index pair. Of course, this method of index generation assumes square, equally-sized matrices. For non-square matrices you have to get more creative, but the idea is the same.
However, as is typically the way in Python, there are libraries that do this kind of thing for you. NumPy's allclose does exactly this: it compares two numpy arrays for equality element-wise within some tolerance. If you're working with matrices in Python for numerical analysis, numpy is really the way to go; I would get familiar with its basic API.
If a and b are numpy arrays or matrices of the same shape, then you can use allclose:
if numpy.allclose(a, b):
    # a is approximately equal to b: do something ...
This checks that, for all i and j, |a_ij - b_ij| <= atol + rtol*|b_ij|, where the absolute tolerance atol defaults to 1e-8 and the relative tolerance rtol defaults to 1e-5. Thus it is safe to use even if your calculations introduce numerical errors.
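For completeness, here is a sketch of how the trans function from the question could return the comparison directly, using plain numpy arrays and the @ operator instead of numpy.matrix (which is deprecated). The rotation matrix at the end is just a made-up example input that should satisfy A^T n A = n:

import numpy

def trans(A):
    # the metric matrix n from the question
    n = numpy.array([[1, 0, 0, 0],
                     [0, -1, 0, 0],
                     [0, 0, -1, 0],
                     [0, 0, 0, -1]])
    c = A.T @ n @ A
    return numpy.allclose(c, n)  # tolerant, element-wise comparison

# Example: a pure spatial rotation satisfies A^T n A = n exactly
theta = 0.3
A = numpy.array([[1, 0, 0, 0],
                 [0, numpy.cos(theta), -numpy.sin(theta), 0],
                 [0, numpy.sin(theta), numpy.cos(theta), 0],
                 [0, 0, 0, 1]])
print(trans(A))  # True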
I need to invert a large, dense matrix, which I had hoped to do with SciPy's gmres. Fortunately, the dense matrix A follows a pattern, so I do not need to store it in memory. The LinearOperator class allows us to construct an object which acts as the matrix for GMRES and can directly compute the matrix-vector product A*v. That is, we write a function mv(v) which takes a vector v as input and returns mv(v) = A*v. Then we can use the LinearOperator class to create A_LinOp = LinearOperator(shape=shape, matvec=mv). We can pass the linear operator to SciPy's gmres command to evaluate the matrix-vector products without ever having to fully load A into memory.
The documentation for the LinearOperator is found here: LinearOperator Documentation.
Here is my problem: to write the routine that computes the matrix-vector product mv(v) = A*v, I need another input vector C. The entries of A are of the form A[i,j] = f(C[i] - C[j]). So, what I really want is for mv to take two inputs: one fixed vector C, and one variable input v for which we want to compute A*v.
MATLAB has a similar setup, where one would write x = gmres(@(v) mv(v,C), b), where b is the right-hand side of the problem Ax = b, mv is the function that takes the variable input v for which we want to compute A*v, and C is the fixed, known vector needed for the assembly of A.
My problem is that I can't figure out how to make the LinearOperator class accept two inputs, one variable and one "fixed", like I can in MATLAB.
Is there a way to do the analogous operation in SciPy? Alternatively, if anyone knows of a better way of inverting a large, dense matrix (50000 x 50000) whose entries follow a pattern, I would greatly appreciate any suggestions.
Thanks!
EDIT: I should have stated this information up front. The matrix is actually (in block form) [A C; C^T 0], where A is N x N (N large), C is N x 3, the 0 block is 3 x 3, and C^T is the transpose of C. This array C is the same array as the one mentioned above. The entries of A follow the pattern A[i,j] = f(C[i] - C[j]).
I wrote mv(v,C) to construct A*v[i] row by row, for i = 0..N, by computing sum_j f(C[i]-C[j])*v[j] (in practice I do numpy.dot(FC, v), where FC[j] = f(C[i]-C[j]), which works well), and then at the end doing the computations for the C^T rows. I was hoping eventually to replace the large for loop with a multiprocessing call to parallelize it, but that's a future consideration. I will also look into using Cython to speed up the computations.
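A common way to bind the fixed argument in Python is a closure (or functools.partial), which plays the role of MATLAB's @(v) mv(v, C). Here is a minimal runnable sketch with toy stand-ins for N, f, C, and b; the real ones come from the problem:

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Toy stand-ins, purely illustrative
N = 100
C = np.random.rand(N, 3)
b = np.ones(N)

def f(d):
    # hypothetical kernel applied to rows of differences
    return np.exp(-np.sum(d**2, axis=-1))

def mv(v, C):
    # A[i, j] = f(C[i] - C[j]); A is applied row by row, never stored
    out = np.empty(len(v))
    for i in range(len(v)):
        out[i] = np.dot(f(C[i] - C), v)
    return out

# The lambda fixes C, leaving the one-argument matvec that
# LinearOperator expects; functools.partial(mv, C=C) works too
A_op = LinearOperator((N, N), matvec=lambda v: mv(v, C))
x, info = gmres(A_op, b)
print(info)  # 0 means converged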
This is very late, but if you're still interested...
Your A matrix must be very low rank, since it's a nonlinearly transformed version of a rank-2 matrix. Plus it's symmetric. That means it's trivial to invert: get the truncated eigenvalue decomposition with, say, 5 eigenvalues, A = U*S*U', then invert that: A^-1 = U*S^-1*U'. S is diagonal, so this is inexpensive. You can get the truncated eigenvalue decomposition with eigh.
That takes care of A. Then for the rest: use the block matrix inversion formula. It looks nasty, but I will bet you 100,000,000 Prussian francs that it's 50x faster than the direct method you were using.
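For reference, the block inversion formula in question, written in LaTeX notation (assuming A and the Schur complement S are invertible):

\begin{bmatrix} A & C \\ C^{T} & 0 \end{bmatrix}^{-1}
=
\begin{bmatrix}
A^{-1} + A^{-1} C\, S^{-1} C^{T} A^{-1} & -A^{-1} C\, S^{-1} \\
-S^{-1} C^{T} A^{-1} & S^{-1}
\end{bmatrix},
\qquad S = -\,C^{T} A^{-1} C .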
I faced the same situation (some years later than you) of needing to pass more than one argument to LinearOperator, but for another problem. The solution I found was to use global variables, to avoid passing the extra variables as arguments to the function.
Using excel solver, it is easy to find a solution (optimum value for x and y )for this equation:
(x*14.80461) + (y * -4.9233) + (10*0.4803) ≈ 0
However, I can't figure out how to do this in Python. The existing scipy.optimize library functions like fsolve() or leastsq() seem to work with only one variable... (I might just not know how to use them)...
Any suggestions?
Thanks!
>>> from scipy.optimize import fsolve
>>> def f(x):
...     return x[0]*14.80461 + x[1]*(-4.9233) + x[2]*(10*0.4803)
...
>>> def vf(x):
...     return [f(x), 0, 0]
...
>>> xx = fsolve(vf, x0=[0, 0, 1])
>>> f(xx)
8.8817841970012523e-16
Since the solution is not unique, different initial values for an unknown lead to different (valid) solutions.
EDIT: Why does this work? Well, it's a dirty hack: fsolve and its relatives deal with systems of equations. What I did here was define a system of three equations (f(x) returns a three-element list) in three variables (x has three elements). fsolve then uses a Newton-type algorithm to converge to a solution.
Clearly, the system is underdetermined: you can specify arbitrary values for two of the variables, say x[1] and x[2], and find x[0] to satisfy the only non-trivial equation you have. You can see this explicitly by specifying a couple of different initial guesses x0 and observing different outputs, all of which satisfy f(x)=0 up to a certain tolerance.
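A quick sketch of that check; the alternative starting points below are made up:

from scipy.optimize import fsolve

def f(x):
    return x[0]*14.80461 + x[1]*(-4.9233) + x[2]*(10*0.4803)

def vf(x):
    return [f(x), 0, 0]

# Different initial guesses converge to different, equally valid roots
for x0 in ([0, 0, 1], [1, 1, 1], [-2, 5, 0.5]):
    xx = fsolve(vf, x0=x0)
    print(xx, f(xx))  # f(xx) is ~0 each time, but xx differs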