Solution method used for Scipy solve_banded matrix - python

Does anyone know what method scipy.linalg.solve_banded uses to solve the system of equations? The documentation does not state the solution method used by the function. Usually the Thomas algorithm, a.k.a. TDMA, is used for these types of systems but I was wondering if this Scipy function uses some other solution method.

The GitHub source shows that SciPy uses the LAPACK routine gbsv() to solve this. You can read about gbsv() here and here.
I am not sure whether this is the same as the Thomas algorithm. It looks like both use LU decomposition, though.
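For what it's worth, here is a minimal solve_banded call on a tridiagonal system (the matrix and right-hand side are made up for illustration). The (1, 1) argument says there is one subdiagonal and one superdiagonal, and the matrix must be packed into banded storage, diagonals as rows:

```python
import numpy as np
from scipy.linalg import solve_banded

# Tridiagonal system A x = b with
# A = [[ 2, -1,  0],
#      [-1,  2, -1],
#      [ 0, -1,  2]]
# Banded storage: row 0 = superdiagonal (padded on the left),
# row 1 = main diagonal, row 2 = subdiagonal (padded on the right).
ab = np.array([
    [0.0, -1.0, -1.0],   # superdiagonal
    [2.0,  2.0,  2.0],   # main diagonal
    [-1.0, -1.0, 0.0],   # subdiagonal
])
b = np.array([1.0, 2.0, 3.0])

x = solve_banded((1, 1), ab, b)  # (1, 1): one sub- and one superdiagonal

# Verify against the dense system
A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
assert np.allclose(A @ x, b)
```

Note that the banded storage convention (diagonals stacked as rows, with unused corners padded) matches the layout that LAPACK's gbsv expects.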

Related

Implementation of NumPy exponential function

I'm trying to perform an evaluation of total floating-point operations (FLOPs) of a neural network.
My problem is the following. I'm using a sigmoid function. My question is how to evaluate the FLOPs of the exponential function. I'm using TensorFlow, which relies on NumPy for the exp function.
I tried to dig into the NumPy code but didn't find the implementation ... I saw some questions here talking about fast implementations of the exponential, but they don't really help.
My guess is that it would use a Taylor or Chebyshev approximation.
Do you have any clue about this? And if so, an estimate of the number of FLOPs? I tried to find some references on Google as well, but nothing really standardized ...
Thank you a lot for your answers.
I looked into it for a bit, and what I found is that NumPy indeed uses a C implementation, as seen here.
TensorFlow, though, doesn't use the NumPy implementation; instead it uses the scalar_logistic_op functor from the C++ library Eigen. The source for that can be found here.
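As a rough, back-of-the-envelope sketch only: if you assume some fixed FLOP budget for exp (the EXP_FLOPS value below is a placeholder assumption, not a measured cost of any implementation), you can budget the sigmoid per element like this:

```python
import numpy as np

def sigmoid(x):
    # Standard logistic sigmoid using numpy's exp
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical per-call cost of exp, in FLOPs. This is an assumption:
# the real cost depends on the libm / SIMD implementation in use.
EXP_FLOPS = 20

def sigmoid_flops(n_activations, exp_flops=EXP_FLOPS):
    # Per element: one negation, one exp, one add, one divide
    # = 3 elementary ops + whatever exp costs.
    return n_activations * (3 + exp_flops)
```

The point of the sketch is just that the total is dominated by whatever you assume for exp; the other three operations per element are exact.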

NMaximize in Mathematica equivalent in Python

I am trying to find an equivalent of the "NMaximize" optimization command in Mathematica in Python. I tried googling, but it did not help much.
The Mathematica docs describe the methods usable within NMaximize as: Possible settings for the Method option include "NelderMead", "DifferentialEvolution", "SimulatedAnnealing", and "RandomSearch".
Have a look at scipy's optimize module, which also supports:
Nelder-Mead (scipy.optimize.minimize with method="Nelder-Mead")
differential evolution (scipy.optimize.differential_evolution)
and much more...
It is very important to find the correct tool for your optimization problem! This is at least dependent on:
Discrete variables?
Smooth optimization function?
Linear, Conic, Non-convex optimization problem?
and again: much more...
Compared to Mathematica's approach, you will have to choose the method a priori within scipy (to some extent).
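As a quick sketch (the objective function here is made up for illustration), two of the NMaximize-style methods map onto scipy like this. One caveat: scipy minimizes, so you maximize by negating the objective:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Maximize f(x, y) = -(x - 1)^2 - (y + 2)^2 by minimizing its negation.
def neg_f(v):
    x, y = v
    return (x - 1) ** 2 + (y + 2) ** 2

# Global, bound-constrained, roughly analogous to
# Method -> "DifferentialEvolution" in NMaximize.
result = differential_evolution(neg_f, bounds=[(-5, 5), (-5, 5)], seed=0)

# Local, derivative-free, roughly analogous to Method -> "NelderMead".
local = minimize(neg_f, x0=[0.0, 0.0], method="Nelder-Mead")
```

Here result.x should land near the maximizer (1, -2); unlike NMaximize, differential_evolution requires explicit bounds on every variable.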

How to refer to the curve_fit() (method, function?) of scipy.optimize

The use of scipy.optimize.curve_fit() has been important in my current (astrophysics-related) research project. Now that I'm working on a publication I want to make reference to scipy.optimize.curve_fit() in my paper. The current draft of my paper refers to curve_fit() as follows
...are fit using the curve_fit() function in the optimize module
of SciPy.
I want to make sure that my use of the words "function" and "module" are correct. I am still learning the structure of modules, methods, and functions in Python and wanted to make sure that I am referring to them correctly.
Bonus: The SciPy website's citation guidelines state:
For any specific algorithm, also consider citing the original author’s paper (this can often be found under the “References” section of the docstring).
As far as I can tell, curve_fit() has no references specified in its docstring, and neither does leastsq(), which it relies on heavily. I am planning on just citing the general SciPy library (as specified in the citation guidelines on the website) rather than a specific algorithm paper. Is there a more specific reference anyone can point me to?
The notes in the SciPy reference to curve_fit() indicate it uses the Levenberg-Marquardt algorithm through leastsq(). The notes under leastsq say it is a wrapper around MINPACK's lmdif and lmder algorithms. Under scipy.optimize.root() it mentions the Levenberg-Marquardt implementation in MINPACK, and points to: More, Jorge J., Burton S. Garbow, and Kenneth E. Hillstrom. 1980. User Guide for MINPACK-1. (R102 in the SciPy 0.13.0 Reference Guide Bibliography).
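For context, a minimal curve_fit() call (the model and data below are synthetic): with no bounds given, it dispatches to the Levenberg-Marquardt code in MINPACK via leastsq(), which is the path the references above describe.

```python
import numpy as np
from scipy.optimize import curve_fit

# Model with two free parameters a, b.
def model(x, a, b):
    return a * np.exp(-b * x)

# Synthetic data: true parameters (2.5, 1.3) plus small noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 4, 50)
y = model(x, 2.5, 1.3) + 0.01 * rng.standard_normal(x.size)

# With no bounds, curve_fit uses Levenberg-Marquardt (method="lm").
popt, pcov = curve_fit(model, x, y)
```

popt holds the fitted parameters and pcov their estimated covariance, so np.sqrt(np.diag(pcov)) gives the one-sigma parameter uncertainties often quoted in papers.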

Does Matlab's fminimax apply Pareto optimality?

I am working on multi-objective optimization in Matlab, and am using fminimax in the Optimization Toolbox. I want to know if fminimax applies Pareto optimization, and if not, why? Also, can you suggest a multi-objective optimization package in Matlab or Python that does use Pareto?
For Python, DEAP may be the one you're looking for. It has extensive documentation with a lot of real-life examples, and a really helpful Google Groups forum. It implements two robust MO algorithms: NSGA-II and SPEA2.
Edit (as requested)
I am using DEAP for my MSc thesis, so I will let you know how we are using Pareto optimality. Setting DEAP up is pretty straightforward, as you will see in the examples. Use this one as a starting point. This is the short version, which uses the built-in algorithms and operators. Read both and then follow these guidelines.
As the OneMax example is single-objective, it doesn't use MO algorithms. However, it's easy to implement them:
Change your evaluation function so it returns an n-tuple with the desired scores. If you want to minimize the standard deviation too, something like return sum(individual), numpy.std(individual) would work.
Also, modify the weights parameter of the base.Fitness object so it matches that returned n-tuple. A positive float means maximization, while a negative one means minimization. You can use any real number, but I would stick with 1.0 and -1.0 for the sake of simplicity.
Change your genetic operators to cxSimulatedBinaryBounded(), mutPolynomialBounded() and selNSGA2(), for crossover, mutation and selection operations, respectively. These are the suggested methods, as they were developed by the NSGA-II authors.
If you want to use one of the embedded ready-to-go algorithms in DEAP, choose MuPlusLambda().
When calling the algorithm, remember to change the halloffame parameter from HallOfFame() to ParetoFront(). This will return all non-dominated individuals, instead of the best lexicographically sorted "best individuals in all generations". Then you can resolve your Pareto Front as desired: weighted sum, custom lexicographic sorting, etc.
I hope that helps. Take into account that there's also a full, somewhat more advanced, NSGA-II example available here.
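To make the Pareto-front idea concrete without pulling in DEAP, here is a plain-Python sketch of what ParetoFront() collects (a hypothetical helper, not DEAP code, assuming both objectives are minimized):

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective
    # and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep only the non-dominated points.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Five candidate solutions with two objectives each (both minimized).
points = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(points)
# (3, 4) is dominated by (2, 3), and (5, 5) by (1, 5),
# so the front is [(1, 5), (2, 3), (4, 1)].
```

This is exactly the set ParetoFront() hands back at the end of a run, which you can then post-process with a weighted sum, lexicographic sort, or whatever suits your problem.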
For fminimax and fgoalattain it looks like the answer is no. However, the genetic algorithm solver, gamultiobj, is Pareto set-based, though I'm not sure if it's the kind of multi-objective optimization function you want to use. gamultiobj implements the NSGA-II evolutionary algorithm. There's also this package that implements the Strength Pareto Evolutionary Algorithm 2 (SPEA2) in C with a Matlab mex interface. It's a bit old, so you might want to recompile it (you'll need to anyway if you're not on 32-bit Windows).

Is there a python version of MATLAB's place function

MATLAB's place function is pretty handy for determining a matrix that gives you the desired eigenvalues of a system.
I'm trying to implement it in Python, but numpy.place doesn't seem to be an analogous function, and I can't for the life of me find anything better in the numpy or scipy documentation.
I found this and other control functions in the Python Control Systems Library (python-control).
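Besides python-control's control.place, SciPy itself ships scipy.signal.place_poles, which covers the same use case. A small sketch on a double integrator (the system and pole locations are just an example):

```python
import numpy as np
from scipy.signal import place_poles

# Double-integrator state-space model: x'' = u.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Desired closed-loop pole locations.
desired = np.array([-2.0, -3.0])

res = place_poles(A, B, desired)
K = res.gain_matrix  # state-feedback gain, u = -K x

# The eigenvalues of A - B K should match the requested poles.
closed_loop_poles = np.linalg.eigvals(A - B @ K)
```

Like MATLAB's place, it returns the feedback gain; res.computed_poles lets you check how closely the requested poles were achieved.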
