Block-diagonal Schur factorization in Python

I have a real non-diagonalizable matrix that I'm looking to decompose as tidily as possible. I would love to put it in Jordan normal form, but since that's problematic numerically I'm looking for the next best thing. I've discovered that there are FORTRAN and MATLAB routines that will do a block-diagonal Schur factorization of a matrix. The FORTRAN implementation in SLICOT is MB03RD and the MATLAB implementation is bdschur (which for all I know could just be a wrapper around MB03RD).
I don't have MATLAB on my computer, and the code that's generating my matrices is in Python, so I'm looking for an equivalent function in Python. Old documentation for Python Control Systems Library indicated that an emulation of bdschur was planned, but it doesn't show up anywhere in the current docs. The Slycot repository has a FORTRAN file for MB03RD, but I can't seem to find much documentation for Slycot, and when I import it very few functions actually appear to be wrapped as Python functions.
I would like to know if anyone knows of a way to call an equivalent routine in Python, or if there exists some other similar decomposition that has an implementation in Python.
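For what it's worth, scipy does expose the plain real Schur decomposition (scipy.linalg.schur), which is the numerically stable form that MB03RD then refines toward block-diagonal shape; the refinement step itself isn't in scipy. A minimal sketch of the scipy part:

```python
# Sketch: scipy gives the real Schur form A = Z T Z^T, with T
# quasi-upper-triangular (1x1 and 2x2 blocks on the diagonal).
# The block-diagonal grouping done by MB03RD/bdschur is a further step.
import numpy as np
from scipy.linalg import schur

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])  # non-diagonalizable: a Jordan block for 2

T, Z = schur(A, output='real')   # A = Z @ T @ Z.T, Z orthogonal
assert np.allclose(Z @ T @ Z.T, A)
```

scipy.linalg.schur also accepts a `sort` argument for ordered Schur forms, which helps if you want eigenvalue clusters grouped together before any block-diagonalization.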

Related

Signal Convolution in C++ like Python np.convolve

I am writing a numerical simulation code where a convolution of a signal and a response function is needed (full mode). Now this sounds like a standard problem and I have used np.convolve etc. in python to great effect.
However, given that I need faster computation (this convolution needs to be performed millions of times per simulation), I have started to implement this in C++, but I have struggled to find an analogue of np.convolve or scipy.signal.fftconvolve in C++, where I could just plug in two std::vector&lt;double&gt; arrays and get the result of the discrete convolution. The only thing remotely resembling what I need is the convolution implementation from Numerical Recipes; however, compared with the numpy results, that implementation seems to be simply wrong.
So my question boils down to: where can I find a library/code that performs the convolution just like the Python implementations do? Surely there must be some already existing, fast solution.
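When porting to C++, it may help to pin down exactly what np.convolve in 'full' mode computes; here is a direct-sum reference in Python that an FFT-based C++ version (e.g. FFTW with both inputs zero-padded to length n + m - 1) should reproduce:

```python
# Reference definition of 'full' discrete convolution, same convention
# as np.convolve(a, b, mode='full'): out[k] = sum_{i+j=k} a[i] * b[j].
import numpy as np

def conv_full(a, b):
    n, m = len(a), len(b)
    out = [0.0] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            out[i + j] += a[i] * b[j]
    return out

a = [1.0, 2.0, 3.0]
b = [0.0, 1.0, 0.5]
assert np.allclose(conv_full(a, b), np.convolve(a, b, mode='full'))
```

A line-by-line C++ translation of this double loop (two nested for loops over std::vector&lt;double&gt;) is a good correctness baseline to test any faster FFT-based version against.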

Calculate principal value integral with Julia

With the Python package scipy one can find the principal value of an integral (given that the pole is of low order) using the "cauchy" weighting method; see scipy.integrate.quad (consider for instance this question, where its usage is demonstrated). Is something analogous possible within the Julia ecosystem? (Of course one can import scipy easily, but the native integration packages of Julia should, in principle, be superior.)
There doesn't seem to be a native Julia library that does this. GSL has it (https://www.gnu.org/software/gsl/doc/html/integration.html#qawc-adaptive-integration-for-cauchy-principal-values), so you can call it through https://github.com/JuliaMath/GSL.jl
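For comparison, the scipy call the question refers to looks like this; the same weight='cauchy' scheme is what GSL's QAWC routine implements:

```python
# scipy's Cauchy-weighted quadrature computes
#   P.V. integral of f(x) / (x - c) over [a, b]
# with weight='cauchy' and wvar=c (the pole location).
from scipy.integrate import quad

# P.V. of 1/(x - 0) over [-1, 1] is 0 by symmetry (here f(x) = 1)
val, err = quad(lambda x: 1.0, -1.0, 1.0, weight='cauchy', wvar=0.0)
assert abs(val) < 1e-8
```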

Is there a python version of MATLAB's place function

MATLAB's place function is pretty handy for determining a matrix that gives you the desired eigenvalues of a system.
I'm trying to implement it in Python, but numpy.place doesn't seem to be an analogous function, and I can't for the life of me find anything better in the numpy or scipy documentation.
Found this and other control functions in Python Control Systems Library
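scipy also gained an implementation, scipy.signal.place_poles (python-control provides control.place as well). A minimal check with the scipy version:

```python
# Pole placement: find K such that the eigenvalues of A - B K are
# the requested poles (here a double integrator, poles at -2 and -3).
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
poles = np.array([-3.0, -2.0])

res = place_poles(A, B, poles)
K = res.gain_matrix
assert np.allclose(sorted(np.linalg.eigvals(A - B @ K).real), sorted(poles))
```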

numpy: code to update least squares with more observations

I am looking for a numpy-based implementation of ordinary least squares that would allow the fit to be updated with more observations. Something along the lines of Applied Statistics algorithm AS 274 or R's biglm.
Failing that, a routine for updating a QR decomposition with new rows would also be of interest.
Any pointers?
scikits.statsmodels has a recursive OLS in the sandbox that updates the inverse of X'X, which could be used for this (it is currently only used to calculate recursive OLS residuals).
Nathaniel Smith posted code to the scipy-user mailing list for OLS when the data is too large to fit in memory. The main code updates X'X.
I think econpy also has a function for this.
Pandas has an expanding OLS, but it may not be easy to use in an online fashion.
Nathaniel's code might be the closest to biglm. I don't think there is anything for the general linear model (error covariance different from the identity).
All need some work before they can be used for this. I don't know of any python(-wrapped) code that would update QR.
update:
see http://mail.scipy.org/pipermail/scipy-dev/2010-February/013853.html
there is incremental QR and Cholesky available in CHOLMOD, but I didn't try it (either license or compiling-on-Windows problems), and I don't think I ever got incremental_qr to work
see attachments
http://mail.scipy.org/pipermail/scipy-dev/2010-February/013844.html
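As a rough illustration of the X'X-updating approach the options above use (a minimal sketch, not AS 274, and without its pivoting and stability machinery), here is a rank-one recursive least-squares update of beta and P = (X'X)^-1 via the Sherman-Morrison identity; rls_update is a made-up helper name:

```python
# Recursive OLS: fold one new observation (x, y) into an existing fit
# without refitting from scratch. P tracks (X'X)^-1.
import numpy as np

def rls_update(beta, P, x, y):
    x = x.reshape(-1, 1)
    Px = P @ x
    gain = Px / (1.0 + (x.T @ Px).item())   # Kalman-style gain vector
    resid = y - (x.T @ beta).item()
    beta = beta + gain.ravel() * resid
    P = P - gain @ Px.T                      # Sherman-Morrison downdate of P
    return beta, P

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=50)

# seed the fit with the first 3 rows, then update row by row
P = np.linalg.inv(X[:3].T @ X[:3])
beta = P @ X[:3].T @ y[:3]
for i in range(3, 50):
    beta, P = rls_update(beta, P, X[i], y[i])

assert np.allclose(beta, np.linalg.lstsq(X, y, rcond=None)[0])
```

Note that updating (X'X)^-1 directly is less numerically stable than updating a QR factorization (which is what AS 274 and biglm effectively do), so this is only a sketch of the idea.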
You might try the pythonequations project at http://code.google.com/p/pythonequations/downloads/list, though it may be more than you need; it does use scipy and numpy. That code is the middleware for the http://zunzun.com online curve and surface fitting web site (I'm the author). The source code comes with many examples. Alternatively, the web site alone may be sufficient - please give it a try.
James Phillips
This is not a detailed answer yet, but:
AFAIK, a QR update like this is not implemented in numpy, but in any case I'd ask you to specify in more detail what you are actually aiming for.
In particular, why would it not be acceptable to just calculate a new estimate for x (of Ax = b) using the k latest observations whenever a (bunch of) new observations arrives? (With modern hardware, k can indeed be quite large.)
The LSQ.F90 part of the file compiles easily enough with,
gfortran-4.4 -shared -fPIC -g -o lsq.so LSQ.F90
and this works in Python,
from ctypes import cdll
lsq = cdll.LoadLibrary('./lsq.so')
As soon as I figure out the function call I'll include it in this answer.

Nash equilibrium in Python

Is there a Python library out there that solves for the Nash equilibrium of two-person zero-sum games? I know the solution can be written down in terms of linear constraints and, in theory, scipy should be able to optimize it. However, for two-person zero-sum games the solution is exact and unique, but some of the solvers fail to converge for certain problems.
Rather than listing all of the libraries on the Linear programming page of the Python website, I would like to know which library would be most effective in terms of ease of use and speed.
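For reference, the linear-constraint formulation alluded to above can be solved directly with scipy.optimize.linprog (a minimal sketch; zero_sum_value is a made-up helper name, and the payoff matrix is from the row player's perspective):

```python
# Zero-sum game as an LP: maximize the game value v subject to
# v <= x . M[:, j] for every opponent column j, with x a probability vector.
import numpy as np
from scipy.optimize import linprog

def zero_sum_value(M):
    m, n = M.shape
    # decision variables: x (row strategy, length m) and v (game value)
    c = np.append(np.zeros(m), -1.0)            # linprog minimizes, so use -v
    A_ub = np.hstack([-M.T, np.ones((n, 1))])   # -x.M[:,j] + v <= 0
    b_ub = np.zeros(n)
    A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)
    b_eq = [1.0]                                # strategy sums to 1
    bounds = [(0, None)] * m + [(None, None)]   # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[m]

# matching pennies: value 0, optimal mixed strategy (1/2, 1/2)
x, v = zero_sum_value(np.array([[1.0, -1.0], [-1.0, 1.0]]))
assert abs(v) < 1e-8 and np.allclose(x, [0.5, 0.5])
```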
Raymond Hettinger wrote a recipe for solving zero-sum payoff matrices. It should serve your purposes alright.
As for a more general library for solving game theory, there's nothing specifically designed for that. But, like you said, scipy can tackle optimization problems like this. You might be able to do something with GarlicSim, which claims to be for "any kind of simulation: Physics, game theory..." but I've never used it before so I can't recommend it.
There is Gambit, which is a little difficult to set up, but has a python API.
I've just started putting together some game theory python code: http://drvinceknight.github.com/Gamepy/
There's code which:
solves matching games,
calculates Shapley values in cooperative games,
runs agent based simulations to identify emergent behaviour in normal form games,
(clumsily - my Python-fu is still growing) uses the lrs library (written in C: http://cgm.cs.mcgill.ca/~avis/C/lrs.html) to calculate the solutions to normal form games (this is, I believe, what you want).
The code is all available on github and that site (the first link at the beginning of this answer) explains how the code works and gives user examples.
You might also want to check out 'Gambit' which I've never used.
