I want to run TIN Interpolation on a layer, but when I fill in all the fields with the right data (vector layer, interpolation attribute, extent, etc.) the algorithm does not run and shows me this message:
Traceback (most recent call last):
File "C:/PROGRA~1/QGIS 3.14/apps/qgis/./python/plugins\processing\algs\qgis\TinInterpolation.py", line 188, in processAlgorithm
writer.writeFile(feedback)
Exception: unknown
Execution failed after 0.08 seconds
Does anybody have an idea about it? Thank you.
I had the same issue. I converted a DXF file into a shapefile and then tried to use TIN interpolation, but it didn't work. Then I realized that my DXF file contained some very small lines and polylines, and after removing them the interpolation went just fine. I don't really have an explanation, but maybe this could help you.
It is because of some small lines in your file that the interpolation cannot handle. You can use the Generalizer3 plugin in QGIS to remove those lines; a console-based alternative is sketched below.
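Not from the original answer: if you prefer the QGIS Python console, a hedged sketch that filters out near-zero-length features with the built-in extract-by-expression algorithm (the layer path and the 0.001 threshold are assumptions; adjust both to your data and its CRS units):

import processing

# Hypothetical cleanup step: keep only features longer than a tiny threshold,
# then run TIN Interpolation on the filtered layer instead of the original.
result = processing.run("native:extractbyexpression", {
    "INPUT": "contours.shp",            # assumed input layer
    "EXPRESSION": "$length > 0.001",    # drop degenerate lines
    "OUTPUT": "memory:cleaned",
})
cleaned_layer = result["OUTPUT"]        # feed this to TIN Interpolation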
I am writing a bit of code to let me remove seams from an image. Currently, my code lets me find, highlight, and remove seams from a greyscale image. I now need to remove seams from color images, and my code will not do that. Can anyone help me modify my code to allow this?
My code:
import numpy as np
import cv2

def getEdgeImage(img, margin=10):
    # Gradient magnitude from horizontal and vertical derivative filters.
    kernel = np.float64([[-1, 0, 1]])
    Ix = cv2.filter2D(img, cv2.CV_64F, kernel)
    Iy = cv2.filter2D(img, cv2.CV_64F, kernel.T)  # vertical gradient uses the transposed kernel
    I = np.hypot(Ix, Iy)
    m = I.max()
    # Make the left/right borders expensive so seams stay inside the image.
    I[:, :margin] = m
    I[:, -margin:] = m
    return I

def getEnergyMap(img, repulseMask=None, attractMask=None):
    edges = getEdgeImage(img)
    if attractMask is not None:
        edges[attractMask == 1] = -10   # cheap pixels attract seams
    if repulseMask is not None:
        edges[repulseMask == 1] = 235   # expensive pixels repel seams
    # Dynamic programming pass: each row accumulates the minimum cost of its
    # three upper neighbours (erosion with a 3-wide kernel is a running min).
    kernel = np.ones(3, np.float64)
    for i in range(1, len(edges)):
        minAbove = cv2.erode(edges[i - 1], kernel).T[0]
        edges[i] += minAbove
    return edges

def getSeam(img, repulseMask=None, attractMask=None):
    energyMap = getEnergyMap(img, repulseMask, attractMask)
    # Start at the cheapest pixel in the bottom row, then walk upwards,
    # always moving to the cheapest of the three pixels above.
    y = len(energyMap) - 1
    x = np.argmin(energyMap[y])
    seam = [(x, y)]
    while len(seam) < len(energyMap):
        x, y = seam[-1]
        newY = y - 1
        newX = x + np.argmin(energyMap[newY, x - 1:x + 2]) - 1
        seam.append((newX, newY))
    return seam

img1 = cv2.imread("image.jpg")  # [::2,::2]
# attractMask = img1 * 0
# repulseMask = img1 * 0
seam = getSeam(img1)
The attract and repulse masks are unimportant to the code currently; they just let me manually plug in pixel coordinates to increase or decrease the number of seams passing through those areas.
The error I get when I run this code:
Traceback (most recent call last):
File "Program.py", line 110, in <module>
seam=getSeam(img1)
File "Program.py", line 62, in getSeam
energyMap=getEnergyMap(img,repulseMask,attractMask)
File "Program.py", line 58, in getEnergyMap
edges[i]+=minAbove
ValueError: operands could not be broadcast together with shapes (960,3) (960,) (960,3)
Is there any way I can get this to work with my code? I'll modify the functions if that is what I need to do.
Then try this: split the channels and give them to the function individually.
# Note: cv2.imread loads images in BGR order, so channel 0 is blue.
b = img1[:, :, 0]
seam_b = getSeam(b)
g = img1[:, :, 1]
seam_g = getSeam(g)
r = img1[:, :, 2]
seam_r = getSeam(r)
After this, pass the results to your post-processing function individually.
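A caveat not in the original answer: the three per-channel seams will usually land in different places, so removing them independently can misalign the channels. A common alternative (an assumption here, not the answer's method) is to compute a single seam on a grayscale copy and apply it to every channel:

# Hedged alternative sketch: one shared seam computed from the luminance.
gray = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
seam = getSeam(gray)  # remove this same seam from each of b, g, r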
I have been trying to use TieDIE. In a few words, this software includes an algorithm that finds significant subnetworks when you pass it some query nodes and a network. With smaller networks it works just fine, but the network I am interested in is quite big: it has 21988 nodes and 360474 edges. TieDIE generates an initial network kernel using scipy (Matlab is also an option for generating this kernel, but I do not own a license). During the generation of this kernel I get the following error:
Not enough memory to perform factorization.
Traceback (most recent call last):
  File "Trials.py", line 44, in <module>
    diffuser = SciPYKernel(network_path)
  File "lib/kernel_scipy.py", line 83, in __init__
    self.kernel = expm(time_T*L)
  File "/home/agmoreno/TieDIE-trials/TieDIE/local/lib/python2.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 602, in expm
    return _expm(A, use_exact_onenorm='auto')
  File "/home/agmoreno/TieDIE-trials/TieDIE/local/lib/python2.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 665, in _expm
    X = _solve_P_Q(U, V, structure=structure)
  File "/home/agmoreno/TieDIE-trials/TieDIE/local/lib/python2.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 699, in _solve_P_Q
    return spsolve(Q, P)
  File "/home/agmoreno/TieDIE-trials/TieDIE/local/lib/python2.7/site-packages/scipy/sparse/linalg/dsolve/linsolve.py", line 198, in spsolve
    Afactsolve = factorized(A)
  File "/home/agmoreno/TieDIE-trials/TieDIE/local/lib/python2.7/site-packages/scipy/sparse/linalg/dsolve/linsolve.py", line 440, in factorized
    return splu(A).solve
  File "/home/agmoreno/TieDIE-trials/TieDIE/local/lib/python2.7/site-packages/scipy/sparse/linalg/dsolve/linsolve.py", line 309, in splu
    ilu=False, options=_options)
MemoryError
The most interesting thing about this is that I am using a cluster computer with 64 CPUs and 700 GB of RAM, and according to ps monitoring the software peaks at 1.3% of memory usage (~10 GB) at some point of the execution, crashing later. I have been told that there is no limit on the usage of RAM... so I really have no clue about what could be happening.
Maybe someone here could help me find an alternative to scipy, or solve this.
Is it possible that the memory error appears because just one node is being used? If that is the case, how could I distribute the work across the nodes?
Thanks in advance.
That's right, for a very large network like that you'll need high memory on a single node. The easiest solution is of course a workaround, one of:
(1) Is there any way you can reduce the size of your input network while still capturing the relevant biology? Maybe just look for all the nodes within 2 steps of your input nodes? (See the sketch below this answer.)
(2) Use the new Cytoscape API to do the diffusion for you: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005598 (https://github.com/idekerlab/heat-diffusion)
(3) Use PageRank instead of computing a heat kernel (not ideal, as we've shown that Diffusion tends to work better on biological networks).
Hope this helps!
-Evan Paull (TieDIE developer/lead author)
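Not part of the answer above: a hedged sketch of option (1) using networkx (TieDIE itself doesn't do this; the graph variable and the example query nodes are assumptions for illustration):

import networkx as nx

def two_step_subnetwork(G, query_nodes, radius=2):
    # Keep only nodes within `radius` steps of any query node.
    keep = set()
    for n in query_nodes:
        if n in G:
            keep.update(nx.ego_graph(G, n, radius=radius).nodes())
    return G.subgraph(keep).copy()

# small = two_step_subnetwork(G, ["TP53", "EGFR"])
# Write `small` back out in TieDIE's input format before building the kernel.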
This might be a noob question...
I'm following this tutorial on Emotion Recognition With Python, OpenCV and a Face Dataset
When I run the training code, I get the following error:
OpenCV Error: Bad argument (Wrong input image size. Reason: Training and Test images must be of equal size! Expected an image with 122500 elements, but got 4.) in predict, file /Users/jenkins/miniconda/1/x64/conda-bld/conda_1486587097465/work/opencv-3.1.0/build/opencv_contrib/modules/face/src/fisher_faces.cpp, line 132
Traceback (most recent call last):
File "trainModel.py", line 64, in <module>
correct = run_recognizer()
File "trainModel.py", line 52, in run_recognizer
pred, conf = fishface.predict(image)
cv2.error: /Users/jenkins/miniconda/1/x64/conda-bld/conda_1486587097465/work/opencv-3.1.0/build/opencv_contrib/modules/face/src/fisher_faces.cpp:132: error: (-5) Wrong input image size. Reason: Training and Test images must be of equal size! Expected an image with 122500 elements, but got 4. in function predict
It is complaining that the image size is not 350×350 = 122500 elements, although all the images in my dataset folder are the correct size, 350×350 px.
Also, my user name is not 'jenkins' as it says in /Users/jenkins/miniconda… I am not sure where that comes from or how to replace it with the correct path to fisher_faces.cpp.
Thanks for your help!
Don't worry about that path. The OpenCV library you are using was built on someone else's machine, and the error messages have that machine's paths baked in. It is just telling you which OpenCV source file the error occurred in, namely fisher_faces.cpp.
(In this case, Jenkins is a popular build bot.)
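Beyond the path question, one way to hunt down the "got 4 elements" mismatch is to check what actually reaches predict(). A minimal sketch (the dataset path and folder layout are assumptions loosely based on the tutorial, not your actual setup):

import cv2
import glob

# Hypothetical check: every image should load as a 350x350 array,
# i.e. 122500 elements; print anything that doesn't.
for path in glob.glob("dataset/*/*.jpg"):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None or img.size != 122500:
        print(path, None if img is None else img.shape)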
I use scipy's griddata function for interpolation.
What does the following error message mean? It appears when Python executes the griddata function:
File "C:\Python25\lib\site-packages\scipy\interpolate\ndgriddata.py", line 182, in griddata
ip = LinearNDInterpolator(points, values, fill_value=fill_value)
File "interpnd.pyx", line 192, in interpnd.LinearNDInterpolator.__init__ (scipy\interpolate\interpnd.c:2524)
File "qhull.pyx", line 917, in scipy.spatial.qhull.Delaunay.__init__ (scipy\spatial\qhull.c:4030)
File "qhull.pyx", line 170, in scipy.spatial.qhull._construct_delaunay (scipy\spatial\qhull.c:1269)
RuntimeError: Qhull error
This typically means that the point set you passed in cannot be triangulated. Some common cases when this might occur:
You have 2D data, but all the points lie along a line. In that case the data cannot be triangulated into non-degenerate triangles.
You have 3D data, but all the points lie in a plane, so there is no decomposition into non-degenerate tetrahedra. And so on in higher dimensions.
In these cases interpolation does not make sense either, so this failure indicates incorrect usage of griddata rather than a bug.
Typically, Qhull prints additional information on what went wrong to stderr, so check the program output to see what it says.
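To make the degenerate case concrete, a minimal reproduction (a hedged sketch; the sample points are made up): all input points lie on the line y = x, so Delaunay triangulation, and hence griddata, must fail.

import numpy as np
from scipy.interpolate import griddata

pts = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])  # collinear points
vals = np.array([0.0, 1.0, 2.0])
try:
    griddata(pts, vals, np.array([[0.5, 0.5]]))
except Exception as e:  # scipy surfaces this as a Qhull error
    print(type(e).__name__, e)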
This indicates that the qhull (http://www.qhull.org) code used by the function is not returning a result because of an error.
Does this always happen, or only for certain inputs?
Can you post an example input which causes the error?
Excuse my ignorance, I'm very new to Python. I'm trying to perform factor analysis in Python using MDP (though I can use another library if there's a better solution).
I have an m by n matrix (called matrix) and I tried to do:
import mdp
mdp.nodes.FANode()(matrix)
but I get back an error. I'm guessing maybe my matrix isn't formed properly? My goal is to find out how many components are in the data and which rows load onto which components.
Here is the traceback:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "mdp/signal_node.py", line 630, in __call__
return self.execute(x, *args, **kwargs)
File "mdp/signal_node.py", line 611, in execute
self._pre_execution_checks(x)
File "mdp/signal_node.py", line 480, in _pre_execution_checks
self.train(x)
File "mdp/signal_node.py", line 571, in train
self._check_input(x)
File "mdp/signal_node.py", line 429, in _check_input
if not x.ndim == 2:
AttributeError: 'list' object has no attribute 'ndim'
Does anyone have any idea what's going on, and feel like explaining it to a Python newbie?
I have absolutely no experience with MDP, but it looks like it expects your matrices to be passed as NumPy arrays instead of lists. NumPy is a package for high-performance scientific computing. You can go to the NumPy home page and install it. After doing so, try altering your code to this:
import mdp, numpy
mdp.nodes.FANode()(numpy.array(matrix))
As Stephen said, the data must be a NumPy array. More precisely, it must be a 2D array, with the first index representing the different samples and the second index representing the data dimensions (using the wrong order here can lead to the "singular matrix" error).
You should also take a look at the MDP documentation, which should answer all your questions. If that doesn't help, there is the MDP user mailing list.
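For reference, a minimal shape sketch (the sizes and output_dim are assumptions, not from the question): FANode wants a 2D array of shape (n_samples, n_dimensions).

import mdp
import numpy as np

matrix = np.random.randn(200, 6)      # 200 samples, 6 observed variables
fa = mdp.nodes.FANode(output_dim=2)   # ask for 2 latent factors
factors = fa(matrix)                  # calling the node trains, then executes
print(factors.shape)                  # (200, 2)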