NaN/inf values in scikit-learn manifold learning functions - python

I have a manifold learning / non-linear dimensionality reduction problem where I know the distances between objects up to some threshold, and beyond that I only know that the distance is "far". Also, in some cases some of the distances might be missing. I am trying to use sklearn.manifold to find a 1D representation. A natural representation would be to encode "far" distances as inf and missing distances as nan.
However, it seems that currently scikit-learn does not support nan and inf values in distance matrices given to manifold learning functions in sklearn.manifold, since I get ValueError: Array contains NaN or infinity.
Is there a conceptual reason for this? Some methods seem especially well suited to handling inf, e.g. non-metric MDS. I also know that implementations of these methods in other languages are able to handle missing/inf values.
Instead of using inf I have considered setting "far" values to a very large number, but I am not sure how this will affect the results.
Update:
I dug into the code of sklearn.manifold.MDS._smacof_single() and found a piece of code with a comment saying that "similarities with 0 are considered as missing values". Is this an undocumented way to specify missing values? Does it work with all manifold functions?

Short answer: as you mentioned, non-metric MDS is capable of working with incomplete dissimilarity matrices. You are right: dissimilarities set to zero are interpreted as missing values when using MDS(metric=False). This won't work for other manifold learning procedures that are not based on non-metric MDS, but similar (undocumented) approaches might be available.
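For illustration, a minimal sketch of this zero-as-missing convention with non-metric MDS (the dissimilarity values here are made up):

    import numpy as np
    from sklearn.manifold import MDS

    # Symmetric dissimilarity matrix; the off-diagonal zero at (0, 3)/(3, 0)
    # encodes a missing distance, which MDS(metric=False) treats as unknown.
    D = np.array([[0.0, 1.0, 2.0, 0.0],
                  [1.0, 0.0, 1.5, 3.0],
                  [2.0, 1.5, 0.0, 1.0],
                  [0.0, 3.0, 1.0, 0.0]])

    mds = MDS(n_components=1, metric=False, dissimilarity="precomputed",
              random_state=0)
    embedding = mds.fit_transform(D)
    print(embedding.ravel())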
On your question concerning the replacement of inf by high values: this will certainly shape your low-dimensional representation. Whether it is valid is a conceptual question that can only be answered knowing the origin of the inf values. If the inf entries mean something like "these data are really distant from each other", replacement by high values can make sense (as in your case). If they rather represent missing knowledge about the dissimilarity, I would not recommend replacing them with large values. If there is no other solution (like non-metric MDS or matrix completion), I would rather replace them with the median of the measurable distances in such cases (check out imputation).
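For the imputation route, a minimal numpy sketch that fills unknown dissimilarities with the median of the observed ones (values are illustrative):

    import numpy as np

    # nan marks a missing dissimilarity; the diagonal stays at zero.
    D = np.array([[0.0, 1.0, np.nan],
                  [1.0, 0.0, 2.5],
                  [np.nan, 2.5, 0.0]])

    observed = D[np.isfinite(D) & (D > 0)]          # known, off-diagonal distances
    D_filled = np.where(np.isfinite(D), D, np.median(observed))
    print(D_filled)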
Check out my answer to a similar question from 2017.

Related

Solving a large (150 variable) system of linear, ordinary differential equations; running into floating point rounding and/or stiffness problems

EDIT: The original post was too vague. I am looking for an algorithm to solve a large, solvable, linear IVP system that can handle very small floating point values. Solving for the eigenvectors and eigenvalues is impossible with numpy.linalg.eig(), as the returned values are complex when they should not be; it does not support numpy.float128 either, and the matrix is not symmetric, so numpy.linalg.eigh() won't work. Sympy could do it given an infinite amount of time, but after running it for 5 hours I gave up. scipy.integrate.solve_ivp() works with implicit methods (I have tried Radau and BDF), but the output is wildly wrong. Are there any libraries, methods, algorithms, or solutions for working with this many very small numbers?
Feel free to ignore the rest of this.
I have a 150x150 sparse matrix (~500 nonzero entries of 22500) representing a system of first-order, linear differential equations. I'm attempting to find the eigenvalues and eigenvectors of this matrix to construct a function that serves as the analytical solution to the system, so that I can just give it a time and it will give me values for each variable. I've used this method in the past for similar 40x40 matrices, and it's much faster (tens, in some cases hundreds of times) than scipy.integrate.solve_ivp(); it also makes post-model analysis much easier, as I can find maximum values and maximum rates of change using scipy.optimize.fmin(), or evaluate my function at inf to see where things settle if left long enough.
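For context, a tiny sketch of that eigendecomposition approach for a linear system x' = A x (the 2x2 matrix is just a stand-in for the real 150x150 one):

    import numpy as np

    A = np.array([[-1.0,  0.0],
                  [ 1.0, -0.5]])
    x0 = np.array([1.0, 0.0])

    eigvals, eigvecs = np.linalg.eig(A)
    c = np.linalg.solve(eigvecs, x0)          # x0 expressed in the eigenbasis

    def x_at(t):
        # x(t) = V @ diag(exp(lambda * t)) @ V^-1 @ x0
        return (eigvecs * np.exp(eigvals * t)) @ c

    print(x_at(3.0))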
This time around, however, numpy.linalg.eig() doesn't seem to like my matrix and is giving me complex values, which I know are wrong because I'm modeling a physical system that can't have complex rates of growth or decay (or sinusoidal solutions), much less complex values for its variables. I believe this to be a stiffness or floating point rounding problem where the underlying LAPACK algorithm is unable to handle either the very small values (smallest is ~3e-14, and most nonzero values are of similar scale) or disparity between some values (largest is ~4000, but values greater than 1 only show up a handful of times).
I have seen suggestions for similar users' problems to use sympy to solve for the eigenvalues, but when it hadn't solved my matrix after 5 hours I figured it wasn't a viable solution for my large system. I've also seen suggestions to use numpy.real_if_close() to remove the imaginary portions of the complex values, but I'm not sure this is a good solution either; several eigenvalues from numpy.linalg.eig() are 0, which is a sign of error to me, but additionally almost all the real portions are of the same scale as the imaginary portions (exceedingly small), which makes me question their validity as well. My matrix is real, but unfortunately not symmetric, so numpy.linalg.eigh() is not viable either.
I'm at a point where I may just run scipy.integrate.solve_ivp() for an arbitrarily long time (a few thousand hours) which will probably take a long time to compute, and then use scipy.optimize.curve_fit() to approximate the analytical solutions I want, since I have a good idea of their forms. This isn't ideal as it makes my program much slower, and I'm also not even sure it will work with the stiffness and rounding problems I've encountered with numpy.linalg.eig(); I suspect Radau or BDF would be able to navigate the stiffness, but not the rounding.
Anybody have any ideas? Any other algorithms for finding eigenvalues that could handle this? Can numpy.linalg.eig() work with numpy.float128 instead of numpy.float64 or would even that extra precision not help?
I'm happy to provide additional details upon request. I'm open to changing languages if needed.
As mentioned in the comment chain above, the best solution for this is to use a matrix exponential, which is a lot simpler (and apparently less error-prone) than diagonalizing your system with eigenvectors and eigenvalues.
For my case I used scipy.sparse.linalg.expm() since my system is sparse. It's fast, accurate, and simple. My only complaint is the loss of evaluation at infinity, but it's easy enough to work around.
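A minimal sketch of that approach, with a toy 3-state system standing in for the real 150x150 sparse rate matrix:

    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import expm

    # Toy decay chain x' = A x; the real A would hold the ~500 rate coefficients.
    A = csc_matrix(np.array([[-2.0,  0.0, 0.0],
                             [ 2.0, -0.7, 0.0],
                             [ 0.0,  0.7, 0.0]]))
    x0 = np.array([1.0, 0.0, 0.0])

    def x_at(t):
        # x(t) = expm(A * t) @ x0, no diagonalization needed
        return expm(A * t) @ x0

    print(x_at(5.0))

For many evaluation times, scipy.sparse.linalg.expm_multiply() can compute the action of the matrix exponential on x0 over a whole grid of t values at once.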

General Question about (Hyper)parameter optimization via Python

I have a dataset with numerical values. Those values are used alongside constants to calculate different factors. Based on these factors decisions are made which ultimately lead to a single numerical value.
My goal is to find the maximum (or minimum when multiplied by -1) of this numerical value by changing those constants.
I have been looking at the SciPy library, specifically hyperparameter optimization (machine learning) and minimize(). But after reading through several tutorials I am a little bit lost on which one to use and how to implement it.
Is there someone who would be so kind and guide me onto the right track?
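A minimal sketch of the usual pattern with scipy.optimize.minimize(), assuming the whole pipeline (constants -> factors -> decisions -> final value) can be wrapped in a single Python function; all names and the toy objective are illustrative:

    import numpy as np
    from scipy.optimize import minimize

    def evaluate_pipeline(constants):
        # Stand-in for the real computation: constants -> factors -> decisions -> value.
        a, b = constants
        return -(a - 2.0) ** 2 - (b + 1.0) ** 2 + 5.0

    def objective(constants):
        return -evaluate_pipeline(constants)      # minimize the negative to maximize

    result = minimize(objective, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
    print(result.x, -result.fun)                  # best constants, maximized value

If the decision logic makes the objective non-smooth, a derivative-free method (Nelder-Mead above) or a global optimizer such as scipy.optimize.differential_evolution() is usually the safer choice.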

Clustering on large, mixed type data

I'm dealing with a dataframe of dimension 4 million x 70. Most columns are numeric, some are categorical, and there are occasional missing values. It is essential that the clustering is run on all data points, and we are looking to produce around 400,000 clusters (so subsampling the dataset is not an option).
I have looked at using Gower's distance metric for mixed type data, but this produces a dissimilarity matrix of dimension 4 million x 4 million, which is just not feasible to work with since it has 10^13 elements. So, the method needs to avoid dissimilarity matrices entirely.
Ideally, we would use an agglomerative clustering method, since we want a large amount of clusters.
What would be a suitable method for this problem? I am struggling to find a method which meets all of these requirements, and I realise it's a big ask.
Plan B is to use a simple rules-based grouping method based on categorical variables alone, handpicking only a few variables to cluster on since we will suffer from the curse of dimensionality otherwise.
The first step is going to be turning those categorical values into numbers somehow, and the second step is going to be putting the now all numeric attributes into the same scale.
Clustering is computationally expensive, so you might try a third step of representing this data by the top 10 components of a PCA (or however many components have an eigenvalue > 1) to reduce the columns.
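As a sketch of those preprocessing steps with scikit-learn (column names and toy data are illustrative; real missing values would additionally need an imputer):

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.decomposition import PCA
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    numeric_cols = ["x1", "x2"]
    categorical_cols = ["cat1"]

    preprocess = ColumnTransformer(
        [("num", StandardScaler(), numeric_cols),
         ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols)],
        sparse_threshold=0.0,                     # keep the output dense for PCA
    )

    # On the real data keep ~10 components (or choose by eigenvalue > 1).
    pipeline = Pipeline([("prep", preprocess), ("pca", PCA(n_components=2))])

    df = pd.DataFrame({"x1": [1.0, 2.0, 3.0, 4.0],
                       "x2": [10.0, 8.0, 6.0, 4.0],
                       "cat1": ["a", "b", "a", "b"]})
    reduced = pipeline.fit_transform(df)
    print(reduced.shape)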
For the clustering step, you'll have your choice of algorithms. I would think something hierarchical would be helpful for you: even though you expect a high number of clusters, it makes intuitive sense that those clusters would fall under larger clusters that continue to make sense all the way down to a small number of "parent" clusters. A popular choice might be HDBSCAN, but I tend to prefer trying OPTICS. The implementation in the free ELKI toolkit seems to be the fastest (it takes some experimenting to figure out), because it runs in Java. The output of ELKI is a little strange: it writes a file for every cluster, so you then have to use Python to loop through the files and create your final mapping, unfortunately. But it's all doable (including executing the ELKI command) from Python if you're building an automated pipeline.
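If you stay in Python, scikit-learn also ships an OPTICS implementation that works directly on the feature matrix (so no 4 million x 4 million dissimilarity matrix is ever built); a minimal sketch on synthetic data, with parameters that would need tuning on the real set:

    from sklearn.cluster import OPTICS
    from sklearn.datasets import make_blobs

    # Synthetic stand-in for the PCA-reduced data from the previous step.
    X, _ = make_blobs(n_samples=1000, centers=5, n_features=10, random_state=0)

    clusterer = OPTICS(min_samples=20, n_jobs=-1)
    labels = clusterer.fit_predict(X)             # -1 marks points labelled as noise
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(n_clusters, "clusters found")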

The difference between C++ (LAPACK, sgels) and Python (Numpy, lstsq) results

I am comparing the numerical results of C++ and Python computations. In C++, I make use of LAPACK's sgels function to compute the coefficients of a linear regression problem. In Python, I use Numpy's linalg.lstsq function for a similar task.
What is the mathematical difference between the methods used by sgels and linalg.lstsq?
What is the expected error (e.g. 6 significant digits) when comparing the results (i.e. the regression coefficients) numerically?
FYI: I am by no means a C++ or Python expert, which makes it difficult to understand what is going on inside the functions.
Taking a look at the source of numpy, in the file linalg.py, lstsq relies on LAPACK's zgelsd() for complex input and dgelsd() for real input. Here are the differences from sgels():
dgelsd() is for double while sgels() is for float: there is a difference in precision...
dgels() makes use of the QR factorization of the matrix A and assumes that A has full rank. The condition number of the matrix must be reasonable to get a significant result. See this course for the logic of the method. On the other hand, dgelsd() makes use of the singular value decomposition of A. In particular, A can be rank-deficient, and small singular values are discarded depending on the additional argument rcond or on machine precision. Notice that numpy's default value for rcond is -1: negative values refer to machine precision. See this course for the logic.
According to LAPACK's benchmark, one can expect dgels() to be about 5 times faster than dgelsd().
You may see significant differences between the results of sgels() and dgelsd() if the matrix is ill-conditioned. Indeed, there is a bound on the error of the linear regression which depends on the algorithm and on the value of rcond that is used. See the LAPACK user guide on Error Bounds for Linear Least Squares Problems for estimates of the errors, and Further Details: Error Bounds for Linear Least Squares Problems for technical details.
As a conclusion, sgels() and dgels() can be used if the measurements in b are accurate and easily related to the explanatory variables. For instance, if sensors are placed at the exits of exhaust pipes, it's easy to guess which motors are running. But sometimes the linear link between the source and the measurements is not precisely known (uncertainty on the terms of A), or discriminating polluters on the basis of measurements becomes more difficult (some polluters are far from the set of sensors and A is ill-conditioned). In this kind of situation, dgelsd() and tuning the rcond argument can help. Whenever in doubt, use dgelsd() and estimate the error on the estimated x according to LAPACK's user guide.
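To make the role of rcond concrete, a small numpy sketch with a nearly rank-deficient design matrix:

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [1.0, 1.0 + 1e-12],
                  [1.0, 1.0 - 1e-12]])
    b = np.array([2.0, 2.0, 2.0])

    # rcond=None: singular values below machine precision * max(M, N) are dropped.
    x_default, _, rank_default, sv = np.linalg.lstsq(A, b, rcond=None)

    # A larger rcond also drops the tiny second singular value, i.e. treats A as rank 1.
    x_trunc, _, rank_trunc, _ = np.linalg.lstsq(A, b, rcond=1e-6)

    print(sv)                          # one O(1) and one O(1e-12) singular value
    print(rank_default, rank_trunc)    # 2 vs 1
    print(x_default, x_trunc)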

Is the format/structure of SciPy's condensed distance matrix stable?

Several SciPy functions are documented as taking a "condensed distance matrix as returned by scipy.spatial.distance.pdist". Now, inspection shows that what pdist returns is the row-major 1D-array form of the upper off-diagonal part of the distance matrix. This is all well and good, and natural and obvious, but is it documented or defined anywhere? I'd rather not assume anything about a data structure that'll suddenly change. (Granted, there isn't a lot of things it could change to, but I guess one possibility would be to wrap the array in an object that allows matrix-like indexing.)
Honestly, this is a better question for the scipy users or dev list, as it's about future plans for scipy.
However, the structure is fairly rigorously documented in the docstrings for both scipy.spatial.distance.pdist and scipy.spatial.distance.squareform.
E.g. for pdist:
Returns a condensed distance matrix Y. For
each :math:`i` and :math:`j` (where :math:`i<j<n`), the
metric ``dist(u=X[i], v=X[j])`` is computed and stored in the
:math:`ij`th entry.
See ``squareform`` for information on how to calculate the index of
this entry or to convert the condensed distance matrix to a
redundant square matrix.
Because of this, and the fact that so many other functions in scipy.spatial expect a distance matrix in this form, I'd seriously doubt it's going to change without a number of deprecation warnings and announcements.
Modules in scipy itself (as opposed to scipy's scikits) are fairly stable, and there's a great deal of consideration put into backwards compatibility when changes are made (and because of this, there's quite a bit of legacy "cruft" in scipy: e.g. the fact that the core scipy module is just numpy with different defaults on a couple of functions).
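To make the condensed layout concrete, a small sketch with pdist/squareform and the standard index formula for the pair (i, j) with i < j:

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    X = np.array([[0.0, 0.0],
                  [1.0, 0.0],
                  [0.0, 2.0]])

    d = pdist(X)          # condensed form: upper triangle, row-major
    D = squareform(d)     # redundant n x n square form
    assert np.allclose(squareform(D), d)

    def condensed_index(i, j, n):
        # Position of the (i, j) distance, i < j, in the length n*(n-1)//2 array.
        return n * i - i * (i + 1) // 2 + (j - i - 1)

    n = len(X)
    assert np.isclose(d[condensed_index(0, 2, n)], D[0, 2])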
