With the Python package scipy one can compute the Cauchy principal value of an integral (provided the pole is of low order) using the "cauchy" weighting method of scipy.integrate.quad (consider for instance this question, where its usage is demonstrated). Is something analogous possible within the Julia ecosystem? Of course one can import scipy easily, but the native integration packages of Julia should, in principle, be superior.
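For reference, a minimal sketch of the scipy usage being referred to (the pole location is passed via wvar, and quad integrates f(x)/(x - wvar); here the exact principal value of the integral of 1/x over [-1, 2] is ln 2):

    import numpy as np
    from scipy import integrate

    # Cauchy principal value of the integral of 1/(x - 0) over [-1, 2]
    val, err = integrate.quad(lambda x: 1.0, -1.0, 2.0, weight='cauchy', wvar=0.0)
    print(val, np.log(2.0))  # both ~0.6931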
There doesn't seem to be a native Julia library that does this. GSL has it (https://www.gnu.org/software/gsl/doc/html/integration.html#qawc-adaptive-integration-for-cauchy-principal-values), so you can call it through https://github.com/JuliaMath/GSL.jl.
I have a real non-diagonalizable matrix that I'm looking to decompose as tidily as possible. I would love to put it in Jordan normal form, but since that's problematic numerically I'm looking for the next best thing. I've discovered that there are FORTRAN and MATLAB routines that will do a block-diagonal Schur factorization of a matrix. The FORTRAN implementation in SLICOT is MB03RD and the MATLAB implementation is bdschur (which for all I know could just be a wrapper around MB03RD).
I don't have MATLAB on my computer, and the code that's generating my matrices is in Python, so I'm looking for an equivalent function in Python. Old documentation for Python Control Systems Library indicated that an emulation of bdschur was planned, but it doesn't show up anywhere in the current docs. The Slycot repository has a FORTRAN file for MB03RD, but I can't seem to find much documentation for Slycot, and when I import it very few functions actually appear to be wrapped as Python functions.
I would like to know if anyone knows of a way to call an equivalent routine in Python, or if there exists some other similar decomposition that has an implementation in Python.
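For context, the closest readily available decomposition in scipy itself seems to be the ordered real Schur form, which gives a quasi-triangular (not block-diagonal) factor; a minimal sketch, in case it is a useful starting point:

    import numpy as np
    from scipy.linalg import schur

    # Non-diagonalizable: a 2x2 Jordan block for eigenvalue 1, plus eigenvalue -3
    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, -3.0]])

    # Real Schur form A = Z T Z^T, left-half-plane eigenvalues ordered first
    T, Z, sdim = schur(A, output='real', sort='lhp')
    print(np.allclose(Z @ T @ Z.T, A))  # True
    print(sdim)  # number of eigenvalues in the left half-plane (here 1)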
LAPACK contains many more functions than scipy's "lapack interface" exposes. Is there any reason behind this, and is there an OS-independent way to call LAPACK functions directly?
I realize that I could call the dynamic library directly, but that means writing my own wrapper, and that is not what I want.
To give a real use case: I need to call dsbgv to solve a generalized eigenproblem for a banded matrix. It is orders of magnitude faster than using eig, which is for general matrices.
scipy.linalg.lapack is an organically grown (tm) set of wrappers, added by different people with different goals, needs, motivations and time budgets over quite a few years.
cython_lapack is a complete set of wrappers for a certain (old enough) version of LAPACK. It is lower-level, however: you need to supply all of the LAPACK arguments yourself and ensure the correct array ordering, alignment, etc.
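To illustrate the difference, a small sketch (which routines appear in the f2py wrappers depends on your scipy version, so the sbgv lookup below may succeed or fail):

    import numpy as np
    from scipy.linalg import get_lapack_funcs

    a = np.array([[2.0, 1.0], [1.0, 2.0]])

    # High-level wrapper: syev is wrapped and handles workspace queries for you
    syev, = get_lapack_funcs(('syev',), (a,))
    w, v, info = syev(a)
    print(w)  # eigenvalues [1., 3.]

    # Check whether sbgv (banded generalized eigenproblem) made it into
    # the wrappers of the installed scipy
    try:
        sbgv, = get_lapack_funcs(('sbgv',), (a,))
        print('sbgv is wrapped')
    except ValueError:
        print('sbgv is not wrapped here; cython_lapack or a hand wrapper is needed')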
Following on from this question, is there a way to use any method other than MLE (maximum-likelihood estimation) for fitting a continuous distribution in scipy? I think my data may be causing the MLE method to diverge, so I want to try the method of moments instead, but I can't find out how to do it in scipy. Specifically, I'm expecting to find something like
    scipy.stats.genextreme.fit(data, method=method_of_moments)
Does anyone know if this is possible, and if so how to do it?
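Update: newer scipy releases (1.7 and later, if I remember correctly) added a method keyword to fit, so on a current scipy something very close to the guess above should work:

    params = scipy.stats.genextreme.fit(data, method='MM')  # method of moments instead of MLE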
A few things to mention:
1) scipy does not have support for GMM (the generalized method of moments). There is some support for GMM via statsmodels (http://statsmodels.sourceforge.net/stable/gmm.html), and you can also access many R routines via Rpy2 (R is bound to have every flavour of GMM ever invented): http://rpy.sourceforge.net/rpy2/doc-2.1/html/index.html
2) Regarding stability of convergence: if this is the issue, then the problem is probably not with the objective being maximised (e.g. likelihood, as opposed to a generalised moment) but with the optimiser. Gradient optimisers can be really fussy (or rather, the problems we give them are not really suited for gradient optimisation, leading to poor convergence).
If statsmodels and Rpy do not give you the routine you need, it is perhaps a good idea to write your moment computation out explicitly and see how you can maximise it yourself - perhaps a custom-made little tool would work well for you? A minimal sketch of that idea follows.
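For a distribution whose moment equations invert cleanly, no optimiser is even needed. For a gamma distribution, mean = k*theta and var = k*theta**2, so the method-of-moments estimates fall out directly (a sketch, not a general-purpose fitter):

    import numpy as np
    from scipy import stats

    data = stats.gamma.rvs(a=2.5, scale=1.3, size=10_000, random_state=0)

    # Method of moments for gamma(k, theta): mean = k*theta, var = k*theta**2
    mean, var = data.mean(), data.var()
    theta = var / mean   # scale
    k = mean / theta     # shape, equivalently mean**2 / var
    print(k, theta)      # close to 2.5 and 1.3

    # Compare with scipy's MLE fit (location pinned at 0 for comparability)
    print(stats.gamma.fit(data, floc=0))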
I am working on multi-objective optimization in Matlab, and am using fminimax from the Optimization Toolbox. I want to know whether fminimax applies Pareto optimization, and if not, why? Also, can you suggest a multi-objective optimization package in Matlab or Python that does use Pareto optimality?
For Python, DEAP may be the one you're looking for. Extensive documentation with a lot of real-life examples, and a really helpful Google Groups forum. It implements two robust MO algorithms: NSGA-II and SPEA-II.
Edit (as requested)
I am using DEAP for my MSc thesis, so I will let you know how we are using Pareto optimality. Setting DEAP up is pretty straightforward, as you will see in the examples. Use this one as a starting point. This is the short version, which uses the built-in algorithms and operators. Read both and then follow these guidelines.
As the OneMax example is single-objective, it doesn't use MO algorithms. However, it's easy to implement them:
Change your evaluation function so it returns an n-tuple with the desired scores. If you want to minimize the standard deviation too, something like return sum(individual), numpy.std(individual) would work.
Also, modify the weights parameter of the base.Fitness object so it matches that returned n-tuple. A positive float means maximization, while a negative one means minimization. You can use any real number, but I would stick with 1.0 and -1.0 for the sake of simplicity.
Change your genetic operators to cxSimulatedBinaryBounded(), mutPolynomialBounded() and selNSGA2(), for crossover, mutation and selection operations, respectively. These are the suggested methods, as they were developed by the NSGA-II authors.
If you want to use one of the ready-to-go algorithms built into DEAP, choose eaMuPlusLambda().
When calling the algorithm, remember to change the halloffame parameter from HallOfFame() to ParetoFront(). This will return all non-dominated individuals, instead of a lexicographically sorted list of the "best individuals in all generations". You can then resolve your Pareto front as desired: weighted sum, custom lexicographic sorting, etc. A minimal sketch pulling all of this together follows this list.
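Here is that sketch; it is only a schematic wiring of the pieces above (the bounds, population size and probabilities are placeholders, not recommendations):

    import random
    import numpy
    from deap import algorithms, base, creator, tools

    # Maximize the sum, minimize the standard deviation
    creator.create("FitnessMulti", base.Fitness, weights=(1.0, -1.0))
    creator.create("Individual", list, fitness=creator.FitnessMulti)

    def evaluate(individual):
        return sum(individual), numpy.std(individual)

    LOW, UP, NDIM = 0.0, 1.0, 10

    toolbox = base.Toolbox()
    toolbox.register("attr_float", random.uniform, LOW, UP)
    toolbox.register("individual", tools.initRepeat, creator.Individual,
                     toolbox.attr_float, NDIM)
    toolbox.register("population", tools.initRepeat, list, toolbox.individual)
    toolbox.register("evaluate", evaluate)
    toolbox.register("mate", tools.cxSimulatedBinaryBounded, low=LOW, up=UP, eta=20.0)
    toolbox.register("mutate", tools.mutPolynomialBounded, low=LOW, up=UP,
                     eta=20.0, indpb=1.0 / NDIM)
    toolbox.register("select", tools.selNSGA2)

    pop = toolbox.population(n=100)
    pareto = tools.ParetoFront()
    algorithms.eaMuPlusLambda(pop, toolbox, mu=100, lambda_=100,
                              cxpb=0.7, mutpb=0.3, ngen=50, halloffame=pareto)
    for ind in pareto:
        print(ind.fitness.values)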
I hope that helps. Take into account that there's also a full, somewhat more advanced, NSGA2 example available here.
For fminimax and fgoalattain it looks like the answer is no. However, the genetic algorithm solver, gamultiobj, is Pareto set-based, though I'm not sure if it's the kind of multi-objective optimization function you want to use. gamultiobj implements the NSGA-II evolutionary algorithm. There's also this package that implements the Strength Pareto Evolutionary Algorithm 2 (SPEA-II) in C with a Matlab mex interface. It's a bit old, so you might want to recompile it (you'll need to anyway if you're not on 32-bit Windows).
MATLAB's place function is pretty handy for determining a matrix that gives you the desired eigenvalues of a system.
I'm trying to implement it in Python, but numpy.place doesn't seem to be an analogous function and I can't for the life of me find anything better in the numpy or scipy documentation.
Found this and other control functions in the Python Control Systems Library.
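For reference, something along these lines should work with that library (scipy also later grew scipy.signal.place_poles); a quick sketch:

    import numpy as np
    import control

    # Double integrator: xdot = A x + B u
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])

    K = control.place(A, B, [-2.0, -3.0])  # state-feedback gain
    print(np.linalg.eigvals(A - B @ K))    # ~[-2., -3.]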