Show "Heat Map" image with alpha values - Matplotlib / Python - python

I'm trying to plot some data to analyze them.
My data is defined as below:
class Data(object):
    def __init__(self, rows=200, cols=300):
        """
        The Data constructor
        """
        # The data grid
        self.cols = cols
        self.rows = rows
        # The 2D data structure
        self.data = numpy.zeros((rows, cols), float)
At first, I had this method:
def generate_data_heat_map(data, x_axis_label, y_axis_label, plot_title, file_path):
    plt.figure()
    plt.title(plot_title)
    fig = plt.imshow(data.data, extent=[0, data.cols, data.rows, 0])
    plt.xlabel(x_axis_label)
    plt.ylabel(y_axis_label)
    plt.colorbar(fig)
    plt.savefig(file_path + '.png')
    plt.close()
This gives me something like a heat map image (second figure), because I'm passing it an MxN luminance array (grayscale, float values only). I don't know why this doesn't generate a grayscale image, but so far I haven't worried about it, because that is the result I wanted.
After some more calculation, I wrote this method to visualize my data, using data_property as the RGB channels and data_uncertainty as the alpha channel:
def generate_data_uncertainty_heat_map(data_property, data_uncertainty, x_axis_label, y_axis_label, plot_title, file_path):
    plt.figure()
    uncertainty = numpy.zeros((data_property.rows, data_property.cols, 4))
    uncertainty[..., :3] = data_property.data[..., numpy.newaxis]
    uncertainty[..., 3] = data_uncertainty.data
    plt.title(plot_title)
    fig = plt.imshow(uncertainty.data, extent=[0, data_property.cols, data_property.rows, 0])
    plt.xlabel(x_axis_label)
    plt.ylabel(y_axis_label)
    plt.colorbar(fig)
    plt.savefig(file_path + '.png')
    plt.close()
But, of course, this gives me a grayscale image with alpha values, since I am repeating the same value for R, G and B. What I would really like is the result of the first (colored) method, with alpha values computed from the uncertainty about the data.
I've also noticed that my color bar no longer tells me anything about my data (it's in RGB, so I can't use it to analyze my data).
I don't know how to achieve the result I want, which is a "heat map" plot whose alpha values come from my uncertainty_data, with a color bar representing this uncertainty. Something like merging the two images above:
This is my color:
This is my alpha:
With the conversion suggested by @BlazBratanic, I think I can see a little bit of color (not sure about it), but it's far from what I was expecting.
All my values are between 0.0 and 1.0.
Thank you in advance.

Use Matplotlib's cm module to map your grayscale values to color values. If I remember correctly, "jet" is the default colormap. So you would do something like:
uncertainty = plt.cm.jet(data_property.data)
uncertainty[..., 3] = data_uncertainty.data
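Plugged into the plotting function from the question, a minimal sketch might look like this (the function name generate_colored_uncertainty_heat_map and the explicit ScalarMappable for the colorbar are my additions; it assumes both arrays hold floats in [0.0, 1.0]):
import matplotlib.pyplot as plt

def generate_colored_uncertainty_heat_map(data_property, data_uncertainty,
                                          x_axis_label, y_axis_label,
                                          plot_title, file_path):
    plt.figure()
    plt.title(plot_title)
    # Map the property values through "jet" to get an MxNx4 RGBA array...
    rgba = plt.cm.jet(data_property.data)
    # ...then overwrite the alpha channel with the uncertainty values.
    rgba[..., 3] = data_uncertainty.data
    plt.imshow(rgba, extent=[0, data_property.cols, data_property.rows, 0])
    plt.xlabel(x_axis_label)
    plt.ylabel(y_axis_label)
    # An RGBA image is not a scalar mappable, so build one explicitly
    # if a colorbar for the underlying property values is wanted.
    mappable = plt.cm.ScalarMappable(cmap=plt.cm.jet)
    mappable.set_array(data_property.data)
    plt.colorbar(mappable, ax=plt.gca())
    plt.savefig(file_path + '.png')
    plt.close()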

Related

Reduce color list to more common colors - Python

I have a list of 1723 colors in hex codes (I could also convert them to RGB, but the issue is not the format of the color), like this: cols = ['#A62E2E', '#D99036', '#D9C27E', '#D9AB9A', '#592C22'].
I'm trying to reduce the number of colors in that list to 1/10th of what it is now by grouping similar colors. So in my example, the 1723 colors would be mapped to 172.
I have already checked these posts: post1, post2, post3, post4, post5, but they are not exactly what I want. Basically, I want to create the groups dynamically from the color list I have, not from a preexisting one.
It would also be very beneficial to keep as much variety as possible, so preferably I'd like the groups to be as distinct as possible.
What I've already tried:
The only solution I've found so far is a function from a stackoverflow post
def closest_color(rgb):
    r, g, b = rgb
    color_diffs = []
    for color in nodes['Rgb']:
        cr, cg, cb = color
        color_diff = sqrt(abs(r - cr)**2 + abs(g - cg)**2 + abs(b - cb)**2)
        color_diffs.append((color_diff, color))
    return min(color_diffs)[1]
This basically gives you the closest color to your argument from a list, but it requires a preexisting list of colors to do the mapping.
The way I'm thinking it could be done is to iterate over my list, leaving the current element out, call this function on the rest of the list and group the two colors, and repeat until I have only 172 colors. However, I'm not sure whether that will give me enough distinct colors, or how to merge the two colors I get at each step for that matter.
I don't know enough about colors to figure out a way of doing that without messing up my color range.
Here is a method that clusters the colors using scipy. Please note that using RGB is not recommended: you would need to transform your data to a uniform color space before clustering. The transformation in the example is included only as a placeholder; it is not really useful, since YIQ is not a uniform color space. There are different modules that can be used to perform the transformation.
The final list is in rgb_clusters.
import colorsys
from scipy.cluster.vq import kmeans2
from numpy.random import random_sample
n_colors = 1732
n_seek = n_colors // 10
rgb_data = random_sample((n_colors, 3))
# Let's change from RGB to another colour space. YIQ is not a
# good choice: a uniform colour space should be
# used (see https://en.wikipedia.org/wiki/Color_appearance_model).
# in this example I use YIQ for its simplicity.
yiq_data = [colorsys.rgb_to_yiq(*rgb) for rgb in rgb_data]
yiq_clusters, mapping = kmeans2(yiq_data, n_seek, minit='++')
rgb_clusters = [colorsys.yiq_to_rgb(*yiq) for yiq in yiq_clusters]
# Let's see the results graphically
from matplotlib import pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(*zip(*rgb_clusters), c=rgb_clusters, s=20, label="centroids")
ax.scatter(*zip(*rgb_data), c=rgb_data, s=6, label="data")
for o_data, idx_cluster in zip(rgb_data, mapping):
    cluster = rgb_clusters[idx_cluster]
    ax.plot(*zip(o_data, cluster), c=cluster)
ax.set_xlabel("R")
ax.set_ylabel("G")
ax.set_zlabel("B")
plt.legend()
Result:
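If scikit-image is available, a perceptually more uniform space such as CIELAB can be substituted for YIQ. The following is a sketch under that assumption (rgb2lab/lab2rgb expect float RGB in [0, 1]; the cluster centres end up in rgb_clusters as before):
from numpy.random import random_sample
from scipy.cluster.vq import kmeans2
from skimage.color import rgb2lab, lab2rgb

n_colors = 1723
n_seek = n_colors // 10
rgb_data = random_sample((n_colors, 3))
# CIELAB distances roughly match perceived colour differences,
# which suits Euclidean k-means better than raw RGB or YIQ.
lab_data = rgb2lab(rgb_data.reshape(-1, 1, 3)).reshape(-1, 3)
lab_clusters, mapping = kmeans2(lab_data, n_seek, minit='++')
rgb_clusters = lab2rgb(lab_clusters.reshape(-1, 1, 3)).reshape(-1, 3)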

Plotting per-point alpha values in 3D scatterplot throws ValueError

I have data in form of a 3D array, with "intensities" at every point. Depending on the intensity, I want to plot the point with a higher alpha. There are a lot of low-value outliers, so color coding (with scalar floats) won't work since they eclipse the real data.
What I have tried:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import to_rgb, to_rgba

# this generates a 3D array with higher values around the center
a = np.array([0, 1, 2, 3, 4, 5, 4, 3, 2, 1])
aa = np.outer(a, a)
aaa = np.einsum("ij,jk,jl", aa, aa, aa)

x_, y_, z_, v_ = [], [], [], []
for x in range(aaa.shape[0]):
    for y in range(aaa.shape[1]):
        for z in range(aaa.shape[2]):
            x_.append(x)
            y_.append(y)
            z_.append(z)
            v_.append(aaa[x, y, z])

r, g, b = to_rgb("blue")
color = np.array([[r, g, b, a] for a in v_])

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(x_, y_, z_, c=color)
plt.show()
The scatterplot documentation says that color can be a 2D array of RGBA rows, which is what I pass. However, when I try to run the code, I get the following error:
ValueError: 'c' argument has 4000 elements, which is inconsistent with 'x' and 'y' with size 1000.
I just found my own answer.
The "A 2D array in which the rows are RGB or RGBA." statement in the documentation was a bit confusing; one needs to convert the RGBA rows with to_rgba first, so the list comprehension should read:
color = np.array([to_rgba([r, g, b, a]) for a in v_])
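A vectorized variant of the same idea (my own sketch, not from the original post): the intensities can be scaled into [0, 1] before being used as alpha, since RGBA components outside that range are not valid colours.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import to_rgb

a = np.array([0, 1, 2, 3, 4, 5, 4, 3, 2, 1])
aa = np.outer(a, a)
aaa = np.einsum("ij,jk,jl", aa, aa, aa)

# Grid coordinates and intensities, flattened; alpha scaled to [0, 1]
x_, y_, z_ = (idx.ravel() for idx in np.indices(aaa.shape))
v_ = aaa.ravel() / aaa.max()

r, g, b = to_rgb("blue")
color = np.column_stack([np.full_like(v_, r),
                         np.full_like(v_, g),
                         np.full_like(v_, b),
                         v_])

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(x_, y_, z_, c=color)
plt.show()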

(python image-processing) Using equalize_hist()

I'm doing image equalization.
I've learned that equalization means making the bars in the histogram reach similar heights.
But when I equalize my image data, it just changes the data values, so the heights in the histogram are the same as before equalization.
What did I do wrong?
In short, I want the x-axis to run from 0 to 15 and to get similar heights in the histogram (which is what I understand equalization to be).
[Images/links in the original post: my assignment, after equalization, full code]
making data
I made the data with lots of full() and append() calls.
(If someone knows a better way to solve my assignment, please tell me.)
reshaping and mixing data
table = data.reshape((5, 149))
img2 = array(sorted(table, key = lambda k: random.random()))
plot data
f = plt.figure()
f.show(plt.hist(img2.flatten(), bins = 256))
equalize
img2_ex = ex.equalize_hist(img2)
plot equalize data
f = plt.figure()
f.show(plt.hist(img2_ex.flatten(), bins = 256))
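For what it's worth, skimage.exposure.equalize_hist maps the data through its cumulative distribution function and returns floats in [0, 1], so identical input values stay identical and the bar heights of a value-by-value histogram are preserved; only the bar positions move. A minimal sketch (assuming ex is skimage.exposure and img2 is the array built above) of rescaling the equalized result onto the 0 ~ 15 axis and binning it into 16 bins:
import numpy as np
import matplotlib.pyplot as plt
from skimage import exposure as ex

img2_ex = ex.equalize_hist(img2)                 # floats in [0, 1]
img2_16 = np.floor(img2_ex * 15).astype(int)     # rescale onto 0..15
plt.figure()
plt.hist(img2_16.flatten(), bins=16, range=(0, 16))
plt.show()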

tripcolor using RGB values for each vertex

I have a 2D triangle mesh with n vertices that is stored in a variable tri (a matplotlib.tri.Triangulation object); I can plot the mesh with matplotlib's tripcolor function easily enough and everything works fine. However, I also have an (r,g,b) triple for each vertex (vcolors), and these values do not fall along a single dimension, so they can't easily be converted to a color-map (for example, imagine overlaying a triangle mesh on a large photo of a park, then assigning each vertex the color of the pixel beneath it).
I thought I would be able to do something like this:
matplotlib.pyplot.tripcolor(tri, vcolors)
ValueError: Collections can only map rank 1 arrays
Is there a convenient way to convert a vcolors-like (n x 3) matrix into something usable by tripcolor? Is there an alternative to tripcolor that accepts vertex colors?
One thing I have tried is to make my own colormap:
z = numpy.asarray(range(len(vcolors)), dtype=np.float) / (len(vcolors) - 1)
cmap = matplotlib.colors.Colormap(vcolors, N=len(vcolors))
matplotlib.pyplot.tripcolor(tri, z, cmap=cmap)
matplotlib.pyplot.show()
This, however, did nothing: no figure appears and no error is raised; the function returns a figure handle but nothing ever gets rendered (I'm using an IPython notebook). Note that if I call the following, a plot appears just fine:
tripcolor(tri, np.zeros(len(vcolors)))
matplotlib.pyplot.show()
I'm using Python 2.7.
After rooting around in matplotlib's tripcolor and Colormap code, I came up with the following solution, which seems to work only as long as one uses 'gouraud' shading (otherwise, it does a very poor job of deducing the face colors; see below).
The trick is to create a colormap that, when given n evenly spaced numbers between 0 and 1 (inclusive) reproduces the original array of colors:
def colors_to_cmap(colors):
    '''
    colors_to_cmap(nx3_or_nx4_rgba_array) yields a matplotlib colormap object
    that will reproduce the colors in the given array when passed a list of n
    evenly spaced numbers between 0 and 1 (inclusive), where n is the length
    of the argument.
    Example:
      cmap = colors_to_cmap(colors)
      zs = np.asarray(range(len(colors)), dtype=np.float) / (len(colors)-1)
      # cmap(zs) should reproduce colors; cmap(zs[i]) == colors[i]
    '''
    colors = np.asarray(colors)
    if colors.shape[1] == 3:
        colors = np.hstack((colors, np.ones((len(colors), 1))))
    steps = (0.5 + np.asarray(range(len(colors) - 1), dtype=np.float)) / (len(colors) - 1)
    return matplotlib.colors.LinearSegmentedColormap(
        'auto_cmap',
        {clrname: ([(0, col[0], col[0])] +
                   [(step, c0, c1) for (step, c0, c1) in zip(steps, col[:-1], col[1:])] +
                   [(1, col[-1], col[-1])])
         for (clridx, clrname) in enumerate(['red', 'green', 'blue', 'alpha'])
         for col in [colors[:, clridx]]},
        N=len(colors))
Again, note that 'gouraud' shading is required for this to work. To demonstrate why this fails, the following code blocks show my particular use case. (I am plotting part of a flattened cortical sheet with a partially transparent data overlay). In this code, there are 40,886 vertices (in the_map.coordinates) and 81,126 triangles (in the_map.indexed_faces); the colors array has shape (40886, 3).
The following code works fine with 'gouraud' shading:
tri = matplotlib.tri.Triangulation(the_map.coordinates[0],
                                   the_map.coordinates[1],
                                   triangles=the_map.indexed_faces.T)
cmap = colors_to_cmap(colors)
zs = np.asarray(range(the_map.vertex_count), dtype=np.float) / (the_map.vertex_count - 1)
plt.figure(figsize=(16,16))
plt.tripcolor(tri, zs, cmap=cmap, shading='gouraud')
But without 'gouraud' shading, the face colors are perhaps being assigned according to the average of their vertices (I have not verified this), which is clearly wrong:
plt.figure(figsize=(16,16))
plt.tripcolor(tri, zs, cmap=cmap)
A much simpler way of creating the color map is via from_list:
z = numpy.arange(n)
cmap = matplotlib.colors.LinearSegmentedColormap.from_list(
'mymap', rgb, N=len(rgb)
)
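A usage sketch under the same assumptions (rgb is the n x 3 per-vertex colour array and tri the Triangulation; 'gouraud' shading is still needed for per-vertex colours):
import numpy
import matplotlib
import matplotlib.pyplot as plt

n = len(rgb)
z = numpy.arange(n)
cmap = matplotlib.colors.LinearSegmentedColormap.from_list('mymap', rgb, N=n)
plt.tripcolor(tri, z, cmap=cmap, shading='gouraud')
plt.show()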
While the tripcolor function requires a colormap, the PolyCollection and TriMesh classes (from matplotlib.collections) that it calls internally can deal with RGB color arrays as well. I have used the following code, based on the tripcolor source, to draw a triangle mesh with given RGB face colors:
from matplotlib.collections import PolyCollection
from matplotlib.tri import Triangulation
import numpy as np
import matplotlib.pyplot as plt

tri = Triangulation(...)   # the mesh
colors = ...               # n x 3 RGB array, one row per face
maskedTris = tri.get_masked_triangles()
verts = np.stack((tri.x[maskedTris], tri.y[maskedTris]), axis=-1)
collection = PolyCollection(verts)
collection.set_facecolor(colors)
plt.gca().add_collection(collection)
plt.gca().autoscale_view()
To set colors per vertex (Gouraud shading), use a TriMesh instead (with set_facecolor).

Center a colormap around 0 in Mayavi

I'm plotting a point cloud and coloring by residual error. I'd like the colormap to remain centered on 0, so that 0 error is white.
I see answers for matplotlib. What about Mayavi?
from mayavi import mlab
mlab.points3d(x, y, z, e, colormap='RdBu')
You can set the vmin and vmax of the colormap explicitly with mlab.points3d, so you could just make sure that vmin = -vmax. Something like this:
mylimit = 10
mlab.points3d(x, y, z, e, colormap='RdBu',vmin=-mylimit,vmax=mylimit)
Or, you could set the limit automatically with something like:
mylimit = max(abs(e.min()),abs(e.max()))
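Putting the two together (a sketch, assuming e is a NumPy array of residual errors and x, y, z are the point coordinates from the question):
from mayavi import mlab

mylimit = max(abs(e.min()), abs(e.max()))   # symmetric range around zero
mlab.points3d(x, y, z, e, colormap='RdBu', vmin=-mylimit, vmax=mylimit)
mlab.colorbar()
mlab.show()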
In case anybody wishes to do this so that the full extent of the colorbar is used, here is a solution I made (with help from here) for Mayavi that stretches the colorbar so that its centre is on zero:
import numpy as np

# Mayavi surface
s = mlab.surf(data)
# Get the lut table of the data
lut = s.module_manager.scalar_lut_manager.lut.table.asarray()
maxd = np.max(data)
mind = np.min(data)
# Data range
dran = maxd - mind
# Proportion of the data range at which the centred value (zero) lies
zdp = abs(mind / dran)
# The +0.5's here are because floats are rounded down when converted to ints
# Index of the zero point as a portion of the distance along the colormap
cmzi = int(zdp * 255 + 0.5)
# linspace from 0 to 127, with the number of points matching the portion below zero
topi = np.linspace(0, 127, cmzi) + 0.5
# and for the other side
boti = np.linspace(128, 255, 255 - cmzi) + 0.5
# convert these linspaces to ints and build the new lut from them
shift_index = np.hstack([topi.astype(int), boti.astype(int)])
s.module_manager.scalar_lut_manager.lut.table = lut[shift_index]
# Force an update of the figure now that we have changed the LUT
mlab.draw()
Note that if you wish to do this multiple times for the same surface (i.e. if you're modifying the Mayavi scalars rather than redrawing the plot), you need to make a record of the initial lut table and modify that each time.
