I have a numpy array containing data that I would like to visualize on a 2D grid. Some of the data is unphysical and I would like to mask it out. However, I could not figure out how to set the mask attribute of tricontour correctly. I tried:
import matplotlib.pyplot as mp
import numpy as np
with open('some_data.dat', 'r') as infile:
    x, y, z = np.loadtxt(infile, usecols=(0, 1, 2), unpack=True)
isbad = np.less(z, 1.4) | np.greater(z, 2.1)
mp.tricontourf(x, y, z, mask = isbad)
But the resulting figure is simply not masked. I then tried the approach from "masking part of a contourf plot in matplotlib", i.e.
z2 = np.ma.array(z, mask= isbad)
mp.tricontourf(x, y, z2)
which did not work either. I want to use tricontourf instead of contourf, because I do not want to grid my data.
z[isbad] = np.nan
results in a Segmentation fault when calling tricontourf
Here's the figure, the red colours are the ones I would like to mark as unphysical.
Here comes the trick. I need to collect the indices of the triangles (which are indices into z!), evaluate whether each corner is good or bad, and then accept only the triangles for which at least one corner is valid (reducing the mask's shape from (ntri, 3) to (ntri,)):
import matplotlib.tri as tr

triang = tr.Triangulation(x, y)
# mask a triangle only if all three of its corners are bad
mask = np.all(isbad[triang.triangles], axis=1)
triang.set_mask(mask)
colplt = mp.tricontourf(triang, z)
mp.colorbar()
Inspired by this link: http://matplotlib.org/examples/pylab_examples/tripcolor_demo.html
wsj's answer didn't work for me since it didn't remove certain masked points (I think when not all of the nodes were bad).
This solution did:
import numpy
import matplotlib.pyplot as plt

z[isbad] = numpy.nan
z = numpy.ma.masked_invalid(z)
vmin, vmax = z.min(), z.max()   # range of the valid data only
z = z.filled(fill_value=-999)   # fill value well below the lowest contour level
levels = numpy.linspace(vmin, vmax, n_points)   # n_points: desired number of contour levels
plt.tricontourf(x, y, z, levels=levels)
Related
I have interpolated my NIfTI image (MRI data) using linear interpolation, but the problem is that the field of view of the output changes (the input is 400*400 but the output is 4000*3000). I am searching for a way to keep the dimensions while interpolating. Do you have a solution?
Thank you
import numpy as np
import nibabel as nib
from scipy.interpolate import interp2d

t1_ = "name.nii.gz"
img_t1_ = nib.load(t1_)
img_t1_ = np.double(img_t1_.get_fdata())
slice_ = 300
img_t1_ = np.rot90(img_t1_[:,:,slice_,0])
x = np.linspace(0, img_t1_.shape[1], img_t1_.shape[1])
y = np.linspace(0, img_t1_.shape[0], img_t1_.shape[0])
X, Y = np.meshgrid(x, y)
Z = img_t1_
x2 = np.linspace(0, img_t1_.shape[1], 9*img_t1_.shape[1])
y2 = np.linspace(0, img_t1_.shape[0], 9*img_t1_.shape[0])
print(x2.shape[0], y2.shape[0])
tmp_z_ = np.zeros((x2.shape[0], y2.shape[0]))
f_linear = interp2d(x, y, Z, kind='linear')
Z2 = f_linear(x2, y2)
I assume that by keeping the dimensions you mean keeping the aspect ratio (i.e. input: 400x400 -> output: 4000x4000). Also, I'm not exactly sure what you are trying to achieve with your interpolation. However, depending on the use case, the zoom function in scipy's ndimage module could do the job. If you choose the parameter order=1, this corresponds to linear interpolation.
I'm not familiar with the packages you use to import your images. The following example assumes that the image data has been loaded into img_data, a two-dimensional float array. Just adapt it to your use case.
from scipy.ndimage import zoom
interpolated_img_data = zoom(img_data, 10, order=1)
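If you need an exact output shape rather than a single magnification factor, zoom also accepts one factor per axis. A minimal sketch, assuming a hypothetical target shape of (4000, 4000):
from scipy.ndimage import zoom

# target shape chosen only for illustration; adjust to your case
target_shape = (4000, 4000)
factors = (target_shape[0] / img_data.shape[0],
           target_shape[1] / img_data.shape[1])

# order=1 corresponds to linear interpolation
interpolated_img_data = zoom(img_data, factors, order=1)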
I have a 3D mask which is an ellipsoid. I have extracted the coordinates of the mask using np.argwhere. The coordinates can be assigned to x, y, z as in the example code. My question is: how can I get my mask back (in the form of a 3D numpy boolean array of the same shape) from the coordinates x, y, z?
import numpy as np
import scipy
import skimage
from skimage import draw
mask = skimage.draw.ellipsoid(10,12,18)
print(mask.shape)
coord = np.argwhere(mask)
x = coord[:,0]
y = coord[:,1]
z = coord[:,2]
The above code gives me a boolean mask of shape (23, 27, 39), and now I want to construct the same mask of exactly the same shape using the x, y, z coordinates. How can it be done?
I would like to modify the question above a bit. If I now rotate my coordinates using a quaternion, which gives me a new set of coordinates x1, y1, z1, how can I construct a boolean mask of shape (23, 27, 39), the same as that of the original mask?
import quaternion
angle1 = 90
rotation = np.exp(quaternion.quaternion(0,0, 1) * angle1*(np.pi/180) / 2)
coord_rotd = quaternion.rotate_vectors(rotation, coord)
x1 = coord_rotd[:,0]
y1 = coord_rotd[:,1]
z1 = coord_rotd[:,2]
You can use x, y and z directly to reconstruct your mask. First, create a new array with the same shape as your mask; I pre-filled it with zeros (i.e. False). Next, set each coordinate defined by x, y and z to True:
new_mask = np.zeros_like(mask)
new_mask[x,y,z] = True
# Check if mask and new_mask is the same
np.allclose(mask, new_mask)
# True
If you are asking whether you can reconstruct your mask knowing only x, y and z, that is not possible, because you lose the information about what is not filled. Just imagine having your ellipsoid in a corner of a huge cube: knowing only what the ellipsoid looks like, how would you know how large the cube is?
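To make the point concrete, here is a small sketch (using the x, y, z and mask from the question): the best you can infer from the coordinates alone is the bounding box of the filled voxels, not the size of the surrounding volume:
import numpy as np

# the tightest volume you can infer from the coordinates alone is their bounding box,
# which is smaller than the original (23, 27, 39) volume around the ellipsoid
inferred_shape = (x.max() + 1, y.max() + 1, z.max() + 1)
print(inferred_shape)
print(mask.shape)   # the true volume size cannot be recovered from x, y, z alone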
Regarding your second question:
You have to fix your coordinates first, because they can fall outside of your scene. So I defined a function that takes care of this:
def fixCoordinates(coord, shape):
    # move to the positive edge
    # (removes negative indices; you could also add +1 here
    # to keep a margin around your ellipse)
    coord -= coord.min(0)
    # trim coordinates outside of the scene, axis by axis
    for i, s in enumerate(shape):
        coord[coord[:, i] >= s, i] = s - 1
    # return coordinates and change dtype
    return coord.astype(int)
And if you modify your code slightly, you can use the same strategy as before:
# your code
import quaternion
angle1 = 90
rotation = np.exp(quaternion.quaternion(0,0, 1) * angle1*(np.pi/180) / 2)
coord_rotd = quaternion.rotate_vectors(rotation, coord)
# Create new mask
new_mask2 = np.zeros_like(new_mask)
# Fix coordinates
coord_rotd = fixCoordinates(coord_rotd, mask.shape)
x1 = coord_rotd[:,0]
y1 = coord_rotd[:,1]
z1 = coord_rotd[:,2]
# create new mask, similar as before
new_mask2[x1, y1, z1] = True
Given your example rotation, you can now plot both masks (which have the same shape) side by side:
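The figure is not reproduced here, but a minimal sketch of how such a side-by-side comparison could be plotted (showing one axial slice of each mask; the slice index is an arbitrary choice):
import matplotlib.pyplot as plt

# show one slice of the original and the rotated mask next to each other
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(mask[:, :, mask.shape[2] // 2])
ax1.set_title('original mask')
ax2.imshow(new_mask2[:, :, new_mask2.shape[2] // 2])
ax2.set_title('rotated mask')
plt.show()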
If you know the shape of your old mask, try this:
new_mask = np.full(old_mask_shape, True) # Fill new_mask with True everywhere
new_mask[x,y,z] = False # Set False for the ellipsoid part alone
Note:
old_mask_shape should be the same as shape of the image on which you intend to apply the mask.
If you want a True mask rather than a False one (i.e. the ellipsoid part True and everywhere else False), just interchange True and False in the two lines of code above.
I have a 2D triangle mesh with n vertices that is stored in a variable tri (a matplotlib.tri.Triangulation object); I can plot the mesh with matplotlib's tripcolor function easily enough and everything works fine. However, I also have an (r,g,b) triple for each vertex (vcolors), and these values do not fall along a single dimension, so they can't easily be converted to a color-map (for example, imagine overlaying a triangle mesh on a large photo of a park, then assigning each vertex the color of the pixel beneath it).
I thought I would be able to do something like this:
matplotlib.pyplot.tripcolor(tri, vcolors)
but this raises:
ValueError: Collections can only map rank 1 arrays
Is there a convenient way to convert a vcolors-like (n x 3) matrix into something usable by tripcolor? Is there an alternative to tripcolor that accepts vertex colors?
One thing I have tried is to make my own colormap:
z = numpy.asarray(range(len(vcolors)), dtype=np.float) / (len(vcolors) - 1)
cmap = matplotlib.colors.Colormap(vcolors, N=len(vcolors))
matplotlib.pyplot.tripcolor(tri, z, cmap=cmap)
matplotlib.pyplot.show()
This, however, did nothing: no figure appears and no error is raised; the function returns a figure handle, but nothing ever gets rendered (I'm using an IPython notebook). Note that if I call the following, a plot appears just fine:
tripcolor(tri, np.zeros(len(vcolors)))
matplotlib.pyplot.show()
I'm using Python 2.7.
After rooting around in matplotlib's tripcolor and Colormap code, I came up with the following solution, which seems to work only as long as one uses 'gouraud' shading (otherwise, it does a very poor job of deducing the face colors; see below).
The trick is to create a colormap that, when given n evenly spaced numbers between 0 and 1 (inclusive) reproduces the original array of colors:
def colors_to_cmap(colors):
    '''
    colors_to_cmap(nx3_or_nx4_rgba_array) yields a matplotlib colormap object that
    will reproduce the colors in the given array when passed a list of n evenly
    spaced numbers between 0 and 1 (inclusive), where n is the length of the argument.
    Example:
      cmap = colors_to_cmap(colors)
      zs = np.asarray(range(len(colors)), dtype=np.float) / (len(colors)-1)
      # cmap(zs) should reproduce colors; cmap[zs[i]] == colors[i]
    '''
    colors = np.asarray(colors)
    if colors.shape[1] == 3:
        colors = np.hstack((colors, np.ones((len(colors),1))))
    steps = (0.5 + np.asarray(range(len(colors)-1), dtype=np.float))/(len(colors) - 1)
    return matplotlib.colors.LinearSegmentedColormap(
        'auto_cmap',
        {clrname: ([(0, col[0], col[0])] +
                   [(step, c0, c1) for (step,c0,c1) in zip(steps, col[:-1], col[1:])] +
                   [(1, col[-1], col[-1])])
         for (clridx,clrname) in enumerate(['red', 'green', 'blue', 'alpha'])
         for col in [colors[:,clridx]]},
        N=len(colors))
Again, note that 'gouraud' shading is required for this to work. To demonstrate how it fails without it, the following code blocks show my particular use case. (I am plotting part of a flattened cortical sheet with a partially transparent data overlay.) In this code, there are 40,886 vertices (in the_map.coordinates) and 81,126 triangles (in the_map.indexed_faces); the colors array has shape (40886, 3).
The following code works fine with 'gouraud' shading:
tri = matplotlib.tri.Triangulation(the_map.coordinates[0],
                                   the_map.coordinates[1],
                                   triangles=the_map.indexed_faces.T)
cmap = colors_to_cmap(colors)
zs = np.asarray(range(the_map.vertex_count), dtype=np.float) / (the_map.vertex_count - 1)
plt.figure(figsize=(16,16))
plt.tripcolor(tri, zs, cmap=cmap, shading='gouraud')
But without 'gouraud' shading, the face-colors are perhaps being assigned according to the average of their vertices (have not verified this), which is clearly wrong:
plt.figure(figsize=(16,16))
plt.tripcolor(tri, zs, cmap=cmap)
A much simpler way of creating the color map is via from_list:
z = numpy.arange(n)
cmap = matplotlib.colors.LinearSegmentedColormap.from_list(
'mymap', rgb, N=len(rgb)
)
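For completeness, a short sketch of how such a colormap could then be passed to tripcolor (reusing the tri from the question and the z and cmap defined above; as noted in the accepted answer, 'gouraud' shading is needed for an exact vertex-to-color mapping):
import matplotlib.pyplot as plt

# z and cmap as defined above: one evenly spaced value per vertex, mapped by the custom colormap
plt.tripcolor(tri, z, cmap=cmap, shading='gouraud')
plt.show()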
While use of a colormap is obligatory for the tripcolor function, the PolyCollection and TriMesh classes (from matplotlib.collections) that it calls internally can deal with RGB color arrays as well. I have used the following code, based on the tripcolor source, to draw a triangle mesh with given RGB face colors:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.tri import Triangulation
from matplotlib.collections import PolyCollection

tri = Triangulation(...)   # your triangulation
colors = ...               # (ntri, 3) RGB array, one row per face

maskedTris = tri.get_masked_triangles()
verts = np.stack((tri.x[maskedTris], tri.y[maskedTris]), axis=-1)
collection = PolyCollection(verts)
collection.set_facecolor(colors)
plt.gca().add_collection(collection)
plt.gca().autoscale_view()
To set colors per vertex (Gouraud shading), use a TriMesh instead (again with set_facecolor).
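A rough sketch of what that might look like, following the same pattern as the PolyCollection code above; this mirrors what tripcolor uses internally for 'gouraud' shading, but treat it as an untested outline (vertex_colors is my own placeholder for an (nvertices, 3) RGB array):
from matplotlib.collections import TriMesh
import matplotlib.pyplot as plt

mesh = TriMesh(tri)                  # tri: the Triangulation from above
mesh.set_facecolor(vertex_colors)    # vertex_colors: assumed (nvertices, 3) RGB array
plt.gca().add_collection(mesh)
plt.gca().autoscale_view()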
I have some data whose PDF I plotted using matplotlib's hist2d function.
The result looks like this:
The hist2d function returns a triple of arrays: H,xedges,yedges. H being the 2D histogram value.
Now I'd like to take this discrete H matrix and turn it into a function that returns the value of H for any given (x, y) input.
In other words, I'd like to turn my 2D histogram into a 2D step function. Is there a specific, computationally cheap function that I could use for that purpose?
This looks like a pretty simple operation (usually done in image processing, but with pixel indices instead of real numbers), yet I'm unable to find anything about it. Can you please help me?
You can construct an interpolator from the counts like this:
from numpy import random, histogram2d, diff
import matplotlib.pyplot as plt
from scipy.interpolate import interp2d
# Generate sample data
n = 10000
x = random.randn(n)
y = -x + random.randn(n)
# bin
nbins = 100
H, xedges, yedges = histogram2d(x, y, bins=nbins)
# Figure out centers of bins
def centers(edges):
    return edges[:-1] + diff(edges[:2])/2
xcenters = centers(xedges)
ycenters = centers(yedges)
# Construct interpolator
pdf = interp2d(xcenters, ycenters, H)
# test
plt.pcolor(xedges, yedges, pdf(xedges, yedges))
Result:
Note that this will be linearly interpolated rather than step-wise. For a quicker version which assumes a regular grid, this will also work:
from numpy import meshgrid, vectorize
def position(edges, value):
    return int((value - edges[0])/diff(edges[:2]))

@vectorize
def pdf2(x, y):
    return H[position(yedges, y), position(xedges, x)]
# test - note we need the meshgrid here to get the right shapes
xx, yy = meshgrid(xcenters, ycenters)
plt.pcolor(xedges, yedges, pdf2(xx, yy))
I am trying to write a simple python code for a plot of intensity vs wavelength for a given temperature, T=200K.
So far I have this...
import scipy as sp
import math
import matplotlib.pyplot as plt
import numpy as np
pi = np.pi
h = 6.626e-34
c = 3.0e+8
k = 1.38e-23
def planck(wav, T):
    a = 2.0*h*pi*c**2
    b = h*c/(wav*k*T)
    intensity = a/ ( (wav**5)*(math.e**b - 1.0) )
    return intensity
I don't know how to define the wavelength (wav) and thus produce the plot of Planck's formula. Any help would be appreciated.
Here's a basic plot. To plot using plt.plot(x, y, fmt) you need two arrays x and y of the same size, where x is the x coordinate of each point to plot and y is the y coordinate, and fmt is a string describing how to plot the numbers.
So all you need to do is create an evenly spaced array of wavelengths (an np.array which I named wavelengths). This can be done with arange(start, end, spacing) which will create an array from start to end (not inclusive) spaced at spacing apart.
Then compute the intensity using your function at each of those points in the array (the result will be stored in another np.array), and then call plt.plot to plot them. Note that numpy lets you do mathematical operations on whole arrays in a vectorized form, which is computationally efficient.
import matplotlib.pyplot as plt
import numpy as np
h = 6.626e-34
c = 3.0e+8
k = 1.38e-23
def planck(wav, T):
    a = 2.0*h*c**2
    b = h*c/(wav*k*T)
    intensity = a/ ( (wav**5) * (np.exp(b) - 1.0) )
    return intensity
# generate x-axis in increments from 1nm to 3 micrometer in 1 nm increments
# starting at 1 nm to avoid wav = 0, which would result in division by zero.
wavelengths = np.arange(1e-9, 3e-6, 1e-9)
# intensity at 4000K, 5000K, 6000K, 7000K
intensity4000 = planck(wavelengths, 4000.)
intensity5000 = planck(wavelengths, 5000.)
intensity6000 = planck(wavelengths, 6000.)
intensity7000 = planck(wavelengths, 7000.)
plt.plot(wavelengths*1e9, intensity4000, 'r-')  # plot intensity4000 versus wavelength in nm as a red line
plt.plot(wavelengths*1e9, intensity5000, 'g-') # 5000K green line
plt.plot(wavelengths*1e9, intensity6000, 'b-') # 6000K blue line
plt.plot(wavelengths*1e9, intensity7000, 'k-') # 7000K black line
# show the plot
plt.show()
And you see:
You probably will want to clean up the axes labels, add a legend, plot the intensity at multiple temperatures on the same plot, among other things. Consult the relevant matplotlib documentation.
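For instance, a minimal sketch of that cleanup (the label strings and axis units are my own choice; the plt.plot calls replace the corresponding ones above and go before plt.show()):
# label each curve so the legend has an entry per temperature
plt.plot(wavelengths*1e9, intensity4000, 'r-', label='4000 K')
plt.plot(wavelengths*1e9, intensity5000, 'g-', label='5000 K')
plt.xlabel('wavelength (nm)')
plt.ylabel('spectral radiance (W sr$^{-1}$ m$^{-3}$)')
plt.legend()
plt.show()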
You may also want to use the RADIS library, which allows you to plot the Planck function against wavelength, or against frequency / wavenumber, if needed!
from radis import sPlanck
sPlanck(wavelength_min=135, wavelength_max=3000, T=4000).plot()
sPlanck(wavelength_min=135, wavelength_max=3000, T=5000).plot(nfig='same')
sPlanck(wavelength_min=135, wavelength_max=3000, T=6000).plot(nfig='same')
sPlanck(wavelength_min=135, wavelength_max=3000, T=7000).plot(nfig='same')
Just want to point out that there seems to be an equivalent of what OP wants to do in astropy:
https://docs.astropy.org/en/stable/api/astropy.modeling.physical_models.BlackBody.html
Unfortunately, it is not yet very clear to me how to switch between the wavelength-based and the frequency-based expressions.
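For reference, a minimal sketch of evaluating the astropy model over a wavelength grid (my own example, not taken from the linked page; note that by default BlackBody returns the spectral radiance per unit frequency, even when evaluated at wavelengths):
import numpy as np
import matplotlib.pyplot as plt
from astropy import units as u
from astropy.modeling.models import BlackBody

wavelengths = np.linspace(100, 3000, 1000) * u.nm
bb = BlackBody(temperature=5000 * u.K)
radiance = bb(wavelengths)   # spectral radiance per unit frequency by default

plt.plot(wavelengths.value, radiance.value)
plt.xlabel('wavelength (nm)')
plt.show()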