griddata runtime error -- Python / SciPy (Interpolation)

I use SciPy's griddata function for interpolation.
What does the following error message, which appears while Python is executing the griddata function, mean?
File "C:\Python25\lib\site-packages\scipy\interpolate\ndgriddata.py", line 182, in griddata
ip = LinearNDInterpolator(points, values, fill_value=fill_value)
File "interpnd.pyx", line 192, in interpnd.LinearNDInterpolator.__init__ (scipy\interpolate\interpnd.c:2524)
File "qhull.pyx", line 917, in scipy.spatial.qhull.Delaunay.__init__ (scipy\spatial\qhull.c:4030)
File "qhull.pyx", line 170, in scipy.spatial.qhull._construct_delaunay (scipy\spatial\qhull.c:1269)
RuntimeError: Qhull error

This typically means that the point set you passed in cannot be triangulated. Some common cases when this might occur:
You have 2D data, but all the points lie along a line. In this case there is no triangulation of the data into non-degenerate triangles.
You have 3D data, but all the points lie on a plane, so there is no decomposition into non-degenerate tetrahedra. And so on in higher dimensions.
In these cases, interpolation does not make sense either, so this failure is not an indication of a bug but of incorrect usage of griddata.
Typically, Qhull prints additional information on what went wrong to stderr, so check the program output to see what it says.
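For example, here is a minimal sketch (assuming a reasonably recent SciPy) that triggers the degenerate 2D case with collinear points:
import numpy as np
from scipy.interpolate import griddata

# All points lie on the line y = x, so Delaunay triangulation is
# impossible and griddata cannot build a linear interpolant.
points = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
values = np.array([0.0, 1.0, 2.0, 3.0])

try:
    griddata(points, values, (1.5, 1.5), method='linear')
except Exception as err:  # surfaces as a Qhull/runtime error
    print(err)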

This indicates that the Qhull (http://www.qhull.org) code used by the function is not returning a result because of an error.
Does this always happen, or only for certain inputs?
Can you post an example input which causes the error?

Related

Why can't I do TIN Interpolation in QGIS?

I want to do TIN Interpolation on a layer, but when I fill in all the fields with the right data (vector layer, interpolation attribute, extent, etc.) the algorithm does not run and shows me this message:
Traceback (most recent call last):
File "C:/PROGRA~1/QGIS 3.14/apps/qgis/./python/plugins\processing\algs\qgis\TinInterpolation.py", line 188, in processAlgorithm
writer.writeFile(feedback)
Exception: unknown
Execution failed after 0.08 seconds
Does anybody have an idea about it? Thank you.
I had the same issue. I converted a DXF file into a shapefile and then tried to use TIN interpolation, but it didn't work. Then I realized that my DXF file contained some very small lines and polylines, and after removing them the interpolation went just fine. I don't really have an explanation, but maybe this helps you.
It is because of some small lines in your file that cannot be handled by the interpolation. You can use the Generalizer3 plugin in QGIS to remove those lines.

Not enough memory to perform factorization expm scipy.sparse.linalg.splu

I have been trying to use TieDIE. In a few words, this software includes an algorithm that finds a significant subnetwork when you pass it some query nodes and a network. With smaller networks it works just fine, but the network I am interested in is quite big: it has 21988 nodes and 360474 edges. TieDIE generates an initial network kernel using SciPy (Matlab is also an option for generating this kernel, but I do not own a license). During the generation of this kernel I get the following error:
Not enough memory to perform factorization.
Traceback (most recent call last):
  File "Trials.py", line 44, in <module>
    diffuser = SciPYKernel(network_path)
  File "lib/kernel_scipy.py", line 83, in __init__
    self.kernel = expm(time_T*L)
  File "/home/agmoreno/TieDIE-trials/TieDIE/local/lib/python2.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 602, in expm
    return _expm(A, use_exact_onenorm='auto')
  File "/home/agmoreno/TieDIE-trials/TieDIE/local/lib/python2.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 665, in _expm
    X = _solve_P_Q(U, V, structure=structure)
  File "/home/agmoreno/TieDIE-trials/TieDIE/local/lib/python2.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 699, in _solve_P_Q
    return spsolve(Q, P)
  File "/home/agmoreno/TieDIE-trials/TieDIE/local/lib/python2.7/site-packages/scipy/sparse/linalg/dsolve/linsolve.py", line 198, in spsolve
    Afactsolve = factorized(A)
  File "/home/agmoreno/TieDIE-trials/TieDIE/local/lib/python2.7/site-packages/scipy/sparse/linalg/dsolve/linsolve.py", line 440, in factorized
    return splu(A).solve
  File "/home/agmoreno/TieDIE-trials/TieDIE/local/lib/python2.7/site-packages/scipy/sparse/linalg/dsolve/linsolve.py", line 309, in splu
    ilu=False, options=_options)
MemoryError
The most interesting thing about this is that I am using a cluster computer with 64 CPUs and 700 GB of RAM, and according to ps the software peaks at 1.3% of memory usage (~10 GB) at some point during execution before crashing. I have been told that there is no limit on RAM usage... So I really have no clue about what could be happening.
Maybe someone here could help me find an alternative to SciPy or a way to solve this.
Is it possible that the memory error occurs because only one node is being used? If that is the case, how could I distribute the work across the nodes?
Thanks in advance.
That's right, for a very large network like that you'll need high memory on a single node. The easiest solution is of course a workaround; a few options:
(1) Is there any way you can reduce the size of your input network while still capturing the relevant biology? Maybe just keep the nodes within 2 steps of your input nodes (see the sketch after this list).
(2) Use the new Cytoscape API to do the diffusion for you: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005598 (https://github.com/idekerlab/heat-diffusion)
(3) Use PageRank instead of computing a heat kernel (not ideal, as we've shown that diffusion tends to work better on biological networks).
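For option (1), a rough, illustrative sketch of extracting the 2-step neighborhood with networkx (two_step_subnetwork and all names in it are hypothetical, not part of TieDIE):
import networkx as nx

# Illustrative helper: keep only nodes within 2 steps of the query nodes,
# then run TieDIE on the (much smaller) induced subgraph.
def two_step_subnetwork(edges, query_nodes):
    G = nx.Graph()
    G.add_edges_from(edges)
    keep = set()
    for n in query_nodes:
        if n in G:
            # all nodes at shortest-path distance <= 2 from n
            keep.update(nx.single_source_shortest_path_length(G, n, cutoff=2))
    return G.subgraph(keep).copy()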
Hope this helps!
-Evan Paull (TieDIE developer/lead author)

petsc4py: Creating AIJ Matrix from csc_matrix results in TypeError

I am trying to create a PETSc matrix from an already existing CSC matrix. With this in mind I created the following example code:
import numpy as np
import scipy.sparse as sp
import math
from petsc4py import PETSc

n = 100
# complex-valued sparse matrix
A = sp.csc_matrix((n, n), dtype=np.complex128)
print A.shape
A[1:5, :] = 1 + 1j*5*math.pi

# createAIJ's csr argument expects CSR-ordered arrays, so convert first
A = A.tocsr()
p1 = A.indptr
p2 = A.indices
p3 = A.data
petsc_mat = PETSc.Mat().createAIJ(size=A.shape, csr=(p1, p2, p3))
This works perfectly well as long as the matrix consists of real values only. When the matrix is complex, running this piece of code results in a
TypeError: Cannot cast array data from dtype('complex128') to dtype('float64') according to the rule 'safe'.
I tried to figure out where the error occurs exactly, but could not make much sense of the traceback:
petsc_mat = PETSc.Mat().createAIJ(size=A.shape,csr=(p1,p2,p3))
File "Mat.pyx", line 265, in petsc4py.PETSc.Mat.createAIJ (src/petsc4py.PETSc.c:98970)
File "petscmat.pxi", line 662, in petsc4py.PETSc.Mat_AllocAIJ (src/petsc4py.PETSc.c:24264)
File "petscmat.pxi", line 633, in petsc4py.PETSc.Mat_AllocAIJ_CSR (src/petsc4py.PETSc.c:23858)
File "arraynpy.pxi", line 136, in petsc4py.PETSc.iarray_s (src/petsc4py.PETSc.c:8048)
File "arraynpy.pxi", line 117, in petsc4py.PETSc.iarray (src/petsc4py.PETSc.c:7771)
Is there an efficient way of creating a PETSc matrix (from which I want to retrieve some eigenpairs later) from a complex SciPy CSC matrix?
I would be really happy if you guys could help me find my (hopefully not too obvious) mistake.
I had trouble getting PETSc to work, so I configured it more than once, and in the last run I obviously forgot the option --with-scalar-type=complex.
This is what I should have done:
Either check the log file $PETSC_DIR/arch-linux2-c-opt/conf/configure.log.
Or take a look at the reconfigure-arch-linux2-c-opt.py.
There you can find all the options you used to configure PETSc. In case you use SLEPc as well, you need to recompile it too. After I added the option (--with-scalar-type=complex) to the reconfigure script and ran it, everything worked perfectly fine.
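A quick way to check from petsc4py which scalar type your build uses (a small sketch; PETSc.ScalarType reflects the configure option):
from petsc4py import PETSc
import numpy as np

# PETSc.ScalarType mirrors the --with-scalar-type configure option:
# float64 for a real build, complex128 for a complex build.
print(np.dtype(PETSc.ScalarType))
assert np.dtype(PETSc.ScalarType).kind == 'c', "rebuild PETSc with --with-scalar-type=complex"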

ValueError: too many boolean indices

I am trying to visualize some data using matplotlib, but I get this error:
File "C:\Python27\lib\site-packages\matplotlib\mlab.py", line 2775, in griddata
tri = delaunay.Triangulation(x,y)
File "C:\Python27\lib\site-packages\matplotlib\delaunay\triangulate.py", line 98, in __init__
duplicates = self._get_duplicate_point_indices()
File "C:\Python27\lib\site-packages\matplotlib\delaunay\triangulate.py", line 137, in _get_duplicate_point_indices
return j_sorted[mask_duplicates]
ValueError: too many boolean indices
It happens when I call the function
data=griddata(self.dataX,self.dataY,self.dataFreq,xi,yi)
Does anyone know why I get that error? I suppose it is something with the parameters, but I can't figure out what.
It might be worth updating your matplotlib. There has been a lot of work on the triangulation code that made it into v1.3.0.
The what's new page for matplotlib v1.3.0 can be found at http://matplotlib.org/users/whats_new.html#triangular-grid-interpolation
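On newer matplotlib versions the triangulation-based interpolation can be used directly; a minimal sketch, where x, y and freq are made-up equal-length 1-D arrays standing in for self.dataX, self.dataY and self.dataFreq:
import numpy as np
import matplotlib.tri as mtri

# stand-in data; in the question these would be dataX, dataY, dataFreq
x = np.random.rand(50)
y = np.random.rand(50)
freq = np.sin(x) * np.cos(y)

triang = mtri.Triangulation(x, y)                 # Delaunay triangulation
interp = mtri.LinearTriInterpolator(triang, freq)
xi, yi = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
data = interp(xi, yi)                             # masked array on the grid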

Factor Analysis using MDP in Python

Excuse my ignorance, I'm very new to Python. I'm trying to perform factor analysis in Python using MDP (though I can use another library if there's a better solution).
I have an m by n matrix (called matrix) and I tried to do:
import mdp
mdp.nodes.FANode()(matrix)
but I get back an error. I'm guessing maybe my matrix isn't formed properly? My goal is to find out how many components are in the data and which rows load onto which components.
Here is the traceback:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "mdp/signal_node.py", line 630, in __call__
return self.execute(x, *args, **kwargs)
File "mdp/signal_node.py", line 611, in execute
self._pre_execution_checks(x)
File "mdp/signal_node.py", line 480, in _pre_execution_checks
self.train(x)
File "mdp/signal_node.py", line 571, in train
self._check_input(x)
File "mdp/signal_node.py", line 429, in _check_input
if not x.ndim == 2:
AttributeError: 'list' object has no attribute 'ndim'
Does anyone have any idea what's going on, and feel like explaining it to a Python newbie?
I have absolutely no experience with MDP, but it looks like it expects your matrix to be passed as a NumPy array instead of a list. NumPy is a package for high-performance scientific computing. You can go to the NumPy home page and install it. After doing so, try altering your code to this:
import mdp, numpy
mdp.nodes.FANode()(numpy.array(matrix))
As Stephen said, the data must be a NumPy array. More precisely, it must be a 2D array, with the first index representing the different samples and the second index representing the data dimensions (using the wrong order here can lead to the "singular matrix" error).
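To make that layout concrete, a minimal sketch (the data here is made up, just to show the orientation):
import numpy as np
import mdp

# rows are samples, columns are observed variables
matrix = np.random.random((200, 10))

fa = mdp.nodes.FANode()
output = fa(matrix)   # calling the node trains it and then executes it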
You should also take a look at the MDP documentation, which should answer all your questions. If that doesn't help there is the MDP user mailing list.
