What is the easiest way to compute a matrix inverse using Python

I wanted to compute a matrix inverse without importing NumPy into Python. I want to know if it is possible or not.

Your question title asks:
What is the easiest way to compute a matrix inverse using Python
import numpy
numpy.linalg.inv(your_matrix)
or the same with SciPy instead of NumPy -- that's definitely the easiest, for you as a programmer.
What is your reason not to use numpy?
You can of course look up an algorithm and implement it manually. But the built-in functions are based on the Fortran LAPACK routines, which have been tested and optimized for the last 50 years... they will be hard to surpass...
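To make that concrete, here is a minimal pure-Python sketch of what a manual implementation involves -- Gauss-Jordan elimination with partial pivoting (the function name `invert` and the singularity tolerance are my own choices, not from any library):

```python
def invert(matrix):
    # Gauss-Jordan elimination with partial pivoting -- a pure-Python
    # sketch; numpy.linalg.inv is far faster and more robust.
    n = len(matrix)
    # Augment a copy of the matrix with the identity matrix.
    aug = [list(map(float, row)) + [float(i == j) for j in range(n)]
           for i, row in enumerate(matrix)]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest entry
        # in this column to improve numerical stability.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular (or nearly so)")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Normalize the pivot row, then eliminate the column elsewhere.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    # The right half of the augmented matrix is now the inverse.
    return [row[n:] for row in aug]
```

For example, `invert([[4.0, 7.0], [2.0, 6.0]])` recovers the 2×2 inverse. This is O(n³) like the library routines, but without their cache-aware blocking or careful floating-point handling.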

Related

Is there a NumPy equivalent for SciPy's cKDTree function?

I'm trying to convert a radial distribution function for my use, but the code I'm looking at uses a cKDTree. The problem is that I want to use only numpy in my function.
Does anyone know an equivalent function in numpy that can be used or a way to make an equivalent "tree"?
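For modest point counts, the tree can be replaced by brute-force broadcasting. A NumPy-only sketch (the function names are mine, assuming the goal is the pairs-within-a-radius query that a cKDTree typically serves in a radial distribution function):

```python
import numpy as np

def pairwise_distances(points):
    # Brute-force all-pairs Euclidean distances via broadcasting:
    # a numpy-only stand-in for tree queries (O(N^2) memory,
    # fine for modest N).
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def neighbors_within(points, r):
    # Index pairs (i, j) with i < j closer than r -- roughly what
    # cKDTree.query_pairs(r) would return.
    d = pairwise_distances(points)
    i, j = np.nonzero(np.triu(d < r, k=1))
    return list(zip(i.tolist(), j.tolist()))
```

This trades the tree's O(N log N) query scaling for simplicity, so it only makes sense up to a few thousand points.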

Implementation of NumPy exponential function

I'm trying to perform an evaluation of total floating-point operations (FLOPs) of a neural network.
My problem is the following. I'm using a sigmoid function. My question is how to evaluate the FLOPs of the exponential function. I'm using TensorFlow, which relies on NumPy for the exp function.
I tried to dig into the Numpy code but didn't find the implementation ... I saw some subjects here talking about fast implementation of exponential but it doesn't really help.
My guess is that it would use a Taylor or Chebyshev approximation.
Do you have any clue about this? And if so an estimation of the amount of FLOPs. I tried to find some references as well on Google but nothing really standardized ...
Thank you a lot for your answers.
I looked into it for a bit, and what I found is that NumPy indeed uses a C implementation, as seen here.
TensorFlow, though, doesn't use the NumPy implementation; instead it uses the scalar_logistic_op function from the C++ library Eigen. The source for that can be found here.
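To make the FLOP estimate concrete, here is a sketch of the kind of polynomial evaluation the question guesses at -- a Horner-form Taylor series for exp. This is illustrative only: production libm implementations use argument reduction plus a short minimax polynomial, not a raw Taylor series.

```python
import math

def exp_taylor(x, n_terms=20):
    # Horner evaluation of the Taylor series of exp around 0:
    #   1 + x/1 * (1 + x/2 * (1 + x/3 * (...)))
    # Each step costs roughly one divide, one multiply, and one
    # add, so ~3*(n_terms - 1) FLOPs for the whole approximation.
    result = 1.0
    for k in range(n_terms - 1, 0, -1):
        result = result * (x / k) + 1.0
    return result
```

For |x| around 1, twenty terms already agree with math.exp to well below float64 rounding; a minimax polynomial after range reduction achieves similar accuracy with roughly 10-20 FLOPs, which is a reasonable order of magnitude to budget per exp call.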

Scipy Linear algebra LinearOperator function utilised in Conjugate Gradient

I am preconditioning a matrix using spilu; however, to pass this preconditioner into cg (the built-in conjugate gradient method) it is necessary to use the LinearOperator function. Can someone explain the matvec parameter to me, and why I need to use it? Below is my current code:
Ainv = scla.spilu(A, drop_tol=1e-7)
Ainv = scla.LinearOperator(Ainv.shape, matvec=Ainv)
scla.cg(A, b, maxiter=maxIterations, M=Ainv)
However, this doesn't work, and I am given the error TypeError: 'SuperLU' object is not callable. I have played around and tried
Ainv=scla.LinearOperator(Ainv.shape,matvec=Ainv.solve)
instead. This seems to work, but I want to know why matvec needs Ainv.solve rather than just Ainv, and whether it is the right thing to feed LinearOperator.
Thanks for your time
Without having much experience with this part of scipy, some comments:
According to the docs, you don't have to use a LinearOperator: the M parameter accepts {sparse matrix, dense matrix, LinearOperator}, so you can use explicit matrices too!
The idea/advantage of the LinearOperator: many iterative methods (e.g. cg, gmres) do not need to know the individual entries of a matrix to solve a linear system A*x = b; such solvers only require the computation of matrix-vector products (docs).
Depending on the task, sometimes even matrix-free approaches are available, which can be much more efficient.
The working approach you presented is indeed the correct one (other sources do it similarly, and some course materials do it like that).
The idea of using solve() here instead of the inverse matrix is to avoid forming the inverse explicitly (which might be very costly).
A similar idea is very common in BFGS-based optimization algorithms, although the wiki might not give much insight here.
scipy has an extra LinearOperator for exactly this purpose of not forming the inverse explicitly! (Although I think it's only used for statistics / completing some optimization, I successfully built some L-BFGS-based optimizers with it.)
Source: a scicomp.stackexchange discussion of this without touching scipy.
Because of that, I would assume spilu is going for this completely too (returning an object with a solve method).
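Putting the pieces together, a runnable sketch of the pattern from the question (the small tridiagonal SPD test system here is my own example, not from the original code). The key line is matvec=ilu.solve: the SuperLU object returned by spilu is not callable, but its solve method applies the approximate inverse to a vector, which is exactly the action cg needs from the preconditioner M:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as scla

# Small symmetric positive-definite test system (my own example).
n = 50
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1],
             shape=(n, n), format="csc")
b = np.ones(n)

# spilu returns a SuperLU object; it is not callable, but its
# .solve method applies the incomplete-LU factors to a vector.
ilu = scla.spilu(A, drop_tol=1e-7)
M = scla.LinearOperator(A.shape, matvec=ilu.solve)  # not matvec=ilu

x, info = scla.cg(A, b, M=M, maxiter=200)  # info == 0 on convergence
```

Because the preconditioner only ever needs to map a vector v to (approximately) A⁻¹v, the inverse matrix itself is never formed.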

Python and associated Legendre polynomials

I have been searching for a Python implementation of the associated Legendre polynomials for quite a long time and have found nothing satisfying. There is an implementation in scipy.special, but it is not vectorized. I found a solution using the pygsl interface to the GSL library, but I had a hard time getting everything compiled.
Does anyone know a better way to get access to the associated Legendre polynomials in an efficiently vectorized way, i.e. where the Legendre functions can be applied to multidimensional matrices?
scipy.special.lpmv is vectorized.
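For example, lpmv(m, v, x) broadcasts over array arguments, so associated Legendre values for a whole grid come from a single call. As a check, this sketch compares against the closed form P_2^1(x) = -3x√(1-x²) (with SciPy's Condon-Shortley sign convention):

```python
import numpy as np
from scipy.special import lpmv

# lpmv broadcasts, so x can be an array of any shape.
x = np.linspace(-0.9, 0.9, 101)
vals = lpmv(1, 2, x)                      # P_2^1 evaluated pointwise
expected = -3.0 * x * np.sqrt(1.0 - x**2)  # closed form for comparison
```

The same call works on multidimensional arrays (e.g. a meshgrid of angles), which is what the question asks for.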

Is there a python version of MATLAB's place function

MATLAB's place function is pretty handy for determining a matrix that gives you the desired eigenvalues of a system.
I'm trying to implement it in Python, but numpy.place doesn't seem to be an analogous function, and I can't for the life of me find anything better in the numpy or scipy documentation.
I found this and other control functions in the Python Control Systems Library.
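The library's place function covers the general case; for a single-input system the underlying idea can also be sketched in plain NumPy via Ackermann's formula. The function below is my own illustration, not the library's implementation, and assumes the pair (A, b) is controllable:

```python
import numpy as np

def place_ackermann(A, b, poles):
    # Ackermann's formula for single-input pole placement:
    #   K = e_n^T * C^{-1} * phi(A)
    # where C is the controllability matrix and phi is the desired
    # characteristic polynomial. Assumes (A, b) is controllable.
    A = np.atleast_2d(np.asarray(A, dtype=float))
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    n = A.shape[0]
    # Controllability matrix [b, Ab, ..., A^(n-1) b].
    ctrb = np.hstack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
    # Desired characteristic polynomial evaluated at A (Horner form).
    coeffs = np.poly(poles)
    phi = np.zeros_like(A)
    for c in coeffs:
        phi = phi @ A + c * np.eye(n)
    # Row vector e_n^T C^{-1}, then multiply by phi(A).
    return np.linalg.solve(ctrb.T, np.eye(n)[:, -1]) @ phi
```

For the double integrator A = [[0, 1], [0, 0]], b = [0, 1] with desired poles {-1, -2}, this yields K = [2, 3], so A - bK has the requested eigenvalues. Note that Ackermann's formula is numerically fragile for larger systems, which is why the library's place (based on more robust algorithms) is preferable in practice.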
