I am trying to visualize some data using matplotlib, but I got this error:
File "C:\Python27\lib\site-packages\matplotlib\mlab.py", line 2775, in griddata
tri = delaunay.Triangulation(x,y)
File "C:\Python27\lib\site-packages\matplotlib\delaunay\triangulate.py", line 98, in __init__
duplicates = self._get_duplicate_point_indices()
File "C:\Python27\lib\site-packages\matplotlib\delaunay\triangulate.py", line 137, in _get_duplicate_point_indices
return j_sorted[mask_duplicates]
ValueError: too many boolean indices
It happens when I call the function
data=griddata(self.dataX,self.dataY,self.dataFreq,xi,yi)
Does anyone know why I get this error? I suppose it is something to do with the parameters, but I can't figure out what.
Might be worth updating your matplotlib. There has been a lot of work on the triangulation code that has made it into v1.3.0.
The what's new page for matplotlib v1.3.0 can be found at http://matplotlib.org/users/whats_new.html#triangular-grid-interpolation
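If you do upgrade, here is a minimal sketch of the triangulation-based interpolation that ships with matplotlib 1.3 (the scattered data below is made up for illustration, not the asker's dataX/dataY/dataFreq):
import numpy as np
import matplotlib.tri as mtri

# Random scattered sample points and values, purely illustrative
x = np.random.random(50)
y = np.random.random(50)
z = x * y

triang = mtri.Triangulation(x, y)
interp = mtri.LinearTriInterpolator(triang, z)

# Evaluate on a regular grid; points outside the triangulation come back masked
xi, yi = np.meshgrid(np.linspace(0, 1, 25), np.linspace(0, 1, 25))
zi = interp(xi, yi)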
I am trying to run the code from the following GitHub repo:
https://github.com/iamkrut/image_inpainting_resnet_unet
I haven't changed anything in the code, and it raises a ValueError ("object too deep") when it tries to save the image. The error seems to come from these two lines:
images = img_tensor.cpu().detach().permute(0,2,3,1)
plt.imsave(join(data_dir, 'samples', image), images[index,:,:,:3])
Here is the error:
File "train.py", line 205, in <module>
data_dir=args.data_dir)
File "train.py", line 94, in train_net
plt.imsave(join(data_dir, 'samples', image), images[index,:,:,:]);
File "C:\ProgramData\Anaconda3\envs\torch2\lib\site-packages\matplotlib\pyplot.py", line 2140, in imsave
return matplotlib.image.imsave(fname, arr, **kwargs)
File "C:\ProgramData\Anaconda3\envs\torch2\lib\site-packages\matplotlib\image.py", line 1498, in imsave
_png.write_png(rgba, fname, dpi=dpi)
ValueError: object too deep for desired array
Anyone know what could be causing this or how to fix it?
Thank you
The matplotlib package does not understand the PyTorch tensor datatype. You should convert the tensor to a NumPy array and then use the matplotlib functions:
import torch
import matplotlib.pyplot as plt

a = torch.rand(10, 3, 20, 20)  # a batch of NCHW image tensors
plt.imsave("test.jpg", a.cpu().detach().permute(0, 2, 3, 1)[0, ...])  # Error: still a tensor
plt.imsave("test.jpg", a.cpu().detach().permute(0, 2, 3, 1).numpy()[0, ...])  # OK: NumPy array
I managed to fix the code by changing the lines to
images = img_tensor.cpu().numpy()[0]      # first image of the batch, as a NumPy array
images = np.transpose(images, (1, 2, 0))  # CHW -> HWC, the layout imsave expects
plt.imsave(join(data_dir, 'samples', image), images)
Still not sure what was wrong with the previous version. So if anyone knows please tell me.
I am trying to create a PETSc matrix from an already existing CSC matrix. With this in mind I created the following example code:
import numpy as np
import scipy.sparse as sp
import math
from petsc4py import PETSc

n = 100
A = sp.csc_matrix((n, n), dtype=np.complex128)
print(A.shape)
A[1:5, :] = 1 + 1j * 5 * math.pi

p1 = A.indptr
p2 = A.indices
p3 = A.data
petsc_mat = PETSc.Mat().createAIJ(size=A.shape, csr=(p1, p2, p3))
This works perfectly well as long as the matrix only consists of real values. When the matrix is complex, running this piece of code results in a
TypeError: Cannot cast array data from dtype('complex128') to dtype('float64') according to the rule 'safe'.
I tried to figure out where the error occurs exactly, but could not make much sense of the traceback:
petsc_mat = PETSc.Mat().createAIJ(size=A.shape,csr=(p1,p2,p3))
File "Mat.pyx", line 265, in petsc4py.PETSc.Mat.createAIJ (src/petsc4py.PETSc.c:98970)
File "petscmat.pxi", line 662, in petsc4py.PETSc.Mat_AllocAIJ (src/petsc4py.PETSc.c:24264)
File "petscmat.pxi", line 633, in petsc4py.PETSc.Mat_AllocAIJ_CSR (src/petsc4py.PETSc.c:23858)
File "arraynpy.pxi", line 136, in petsc4py.PETSc.iarray_s (src/petsc4py.PETSc.c:8048)
File "arraynpy.pxi", line 117, in petsc4py.PETSc.iarray (src/petsc4py.PETSc.c:7771)
Is there an efficient way of creating a PETSc matrix (from which I want to retrieve some eigenpairs later) from a complex SciPy CSC matrix?
I would be really happy if you guys could help me find my (hopefully not too obvious) mistake.
I had trouble getting PETSc to work, so I configured it more than once, and in the last run I evidently forgot the option --with-scalar-type=complex.
This is what I should have done:
Either check the log file $PETSC_DIR/arch-linux2-c-opt/conf/configure.log,
or take a look at reconfigure-arch-linux2-c-opt.py.
There you can find all the options you used to configure PETSc. In case you use SLEPc as well, you also need to recompile it. Once I added the option --with-scalar-type=complex to the reconfigure script and ran it, everything worked perfectly fine.
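As a quick sanity check (a minimal sketch; it assumes nothing beyond petsc4py being importable), you can ask petsc4py which scalar type the underlying PETSc build uses:
from petsc4py import PETSc
# A complex build (--with-scalar-type=complex) should report a complex
# dtype such as complex128; a default build reports a float dtype.
print(PETSc.ScalarType)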
I use SciPy's griddata function for interpolation.
What does the following error message mean? It appears when Python executes the griddata function:
File "C:\Python25\lib\site-packages\scipy\interpolate\ndgriddata.py", line 182, in griddata
ip = LinearNDInterpolator(points, values, fill_value=fill_value)
File "interpnd.pyx", line 192, in interpnd.LinearNDInterpolator.__init__ (scipy\interpolate\interpnd.c:2524)
File "qhull.pyx", line 917, in scipy.spatial.qhull.Delaunay.__init__ (scipy\spatial\qhull.c:4030)
File "qhull.pyx", line 170, in scipy.spatial.qhull._construct_delaunay (scipy\spatial\qhull.c:1269)
RuntimeError: Qhull error
This typically means that the point set you passed in cannot be triangulated. Some common cases when this might occur:
You have 2D data, but all the points lie along a line. In this case there is no triangulation of the data into non-degenerate triangles.
You have 3D data, but all the points lie in a plane, so there is no decomposition into non-degenerate tetrahedra. And so on in higher dimensions.
In these cases, interpolation does not make sense either, so this failure is not an indication of a bug, but incorrect usage of griddata.
Typically, Qhull prints additional information on what went wrong to stderr, so check the program output to see what it says.
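For instance, a minimal reproduction with made-up collinear points (not the asker's data) fails in exactly this way:
import numpy as np
from scipy.interpolate import griddata

points = np.column_stack([np.arange(5.0), np.arange(5.0)])  # 2D, but all on one line
values = np.ones(5)
griddata(points, values, (0.5, 0.5))  # raises a Qhull error: degenerate input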
This indicates that the Qhull (http://www.qhull.org) code used by the function is failing and not returning a result.
Does this always happen, or only for certain inputs?
Can you post an example input which causes the error?
Excuse my ignorance, I'm very new to Python. I'm trying to perform factor analysis in Python using MDP (though I can use another library if there's a better solution).
I have an m by n matrix (called matrix) and I tried to do:
import mdp
mdp.nodes.FANode()(matrix)
but I get back an error. I'm guessing my matrix isn't formed properly? My goal is to find out how many components are in the data and which rows load onto which components.
Here is the traceback:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "mdp/signal_node.py", line 630, in __call__
return self.execute(x, *args, **kwargs)
File "mdp/signal_node.py", line 611, in execute
self._pre_execution_checks(x)
File "mdp/signal_node.py", line 480, in _pre_execution_checks
self.train(x)
File "mdp/signal_node.py", line 571, in train
self._check_input(x)
File "mdp/signal_node.py", line 429, in _check_input
if not x.ndim == 2:
AttributeError: 'list' object has no attribute 'ndim'
Does anyone have any idea what's going on, and feel like explaining it to a Python newbie?
I have absolutely no experience with MDP, but it looks like it expects your matrices to be passed as a NumPy array instead of a list. NumPy is a package for high-performance scientific computing. You can go to the NumPy home page and install it. After doing so, try altering your code to this:
import mdp, numpy
mdp.nodes.FANode()(numpy.array(matrix))
As Stephen said, the data must be a NumPy array. More precisely, it must be a 2D array, with the first index representing the different samples and the second index representing the data dimensions (using the wrong order here can lead to the "singular matrix" error).
You should also take a look at the MDP documentation, which should answer all your questions. If that doesn't help there is the MDP user mailing list.
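A minimal sketch of that shape convention (the array here is random placeholder data, not the asker's matrix):
import numpy as np
import mdp

data = np.random.random((500, 10))  # 500 samples (rows), 10 dimensions (columns)
fa = mdp.nodes.FANode()
output = fa(data)  # calling the node trains it on first use, then executes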
I'm having trouble with the Scatter function in the MPI4Py Python module.
My assumption is that I should be able to pass it a single list as the send buffer. However, I get a consistent error message when I do that, or indeed when I add the other two arguments, recvbuf and root:
File "code/step3.py", line 682, in subbox_grid
i = mpi_communicator.Scatter(station_range, station_data)
File "Comm.pyx", line 427, in mpi4py.MPI.Comm.Scatter (src/
mpi4py_MPI.c:44993)
File "message.pxi", line 321, in mpi4py.MPI._p_msg_cco.for_scatter
(src/mpi4py_MPI.c:14497)
File "message.pxi", line 232, in mpi4py.MPI._p_msg_cco.for_cco_send
(src/mpi4py_MPI.c:13630)
File "message.pxi", line 36, in mpi4py.MPI.message_simple (src/
mpi4py_MPI.c:11904)
ValueError: message: expecting 2 or 3 items
Here is the relevant code snippet, starting a few lines above line 682 mentioned above.
for station in stations:
    #snip--do some stuff with station
    station_data = []
    station_range = range(1, len(station))
    mpi_communicator = MPI.COMM_WORLD
    i = mpi_communicator.Scatter(station_range, nsm)
    #snip--do some stuff with station[i]
    nsm = combine(avg, wt, dnew, nf1, nl1, wti[i], wtm, station[i].id)
    station_data = mpi_communicator.Gather(station_range, nsm)
I've tried a number of combinations when initializing station_range, but I must not be understanding the Scatter argument types properly. Can a Python/MPI guru clarify this?
If you want to move raw buffers (as with the uppercase Scatter and Gather), you provide a triplet [buffer, size, type]. Look at the demos for examples of this. If you want to send Python objects, you should use the higher-level interface and call the lowercase scatter and gather methods, which use pickle internally.
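A minimal sketch of the lowercase, pickle-based interface (the per-rank computation is a placeholder, not the asker's combine logic):
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# The root builds one list entry per rank; scatter hands one item to each rank
send_data = list(range(size)) if rank == 0 else None
item = comm.scatter(send_data, root=0)

result = item * item  # placeholder for the real per-rank work
gathered = comm.gather(result, root=0)
if rank == 0:
    print(gathered)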