Missing filled contours when using contourf - python

I am trying to plot a function over a 2D domain using contourf. Unfortunately, my first attempt did not work out well: a region of the plot was unexpectedly not covered by any contours. For debugging purposes, I have reduced the problem to the smallest dataset I could find that still reveals the issue with missing filled contours:
import matplotlib.pyplot as plt
import numpy as np
v = np.array([0, 1, 2, 3])
x, y = np.meshgrid(v, v)
z = np.array([[5.5e-14, 5.5e-14, 5.5e-14, 5.5e-14],
              [2e-13, 2e-13, 2e-13, 2e-13],
              [2.2e-13, 2.2e-13, 2.2e-13, 2.2e-13],
              [0, 0, 0, 0]])
fig, ax = plt.subplots()
cntr = ax.contourf(x, y, z)
fig.colorbar(cntr, ax=ax)
plt.show()
This gives the following plot:
As you can see, there are no filled contours from y = 1.5 to approximately y = 2.0.
Another strange thing I observed: if I normalize the z matrix, e.g. by multiplying it by 1e14 before plotting, it works fine.
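For reference, here is a minimal sketch of that rescaling workaround (the scale factor and the colorbar label are my own choices, not part of the original code):

import matplotlib.pyplot as plt
import numpy as np

v = np.array([0, 1, 2, 3])
x, y = np.meshgrid(v, v)
z = np.array([[5.5e-14, 5.5e-14, 5.5e-14, 5.5e-14],
              [2e-13, 2e-13, 2e-13, 2e-13],
              [2.2e-13, 2.2e-13, 2.2e-13, 2.2e-13],
              [0, 0, 0, 0]])

# Rescale so the contouring works on O(1) values, then label the
# colorbar with the true units.
scale = 1e14
fig, ax = plt.subplots()
cntr = ax.contourf(x, y, z * scale)
fig.colorbar(cntr, ax=ax, label='z (in units of 1e-14)')
plt.show()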

Related

coloring lines generated from numerical points

I calculate the eigenvalues of large matrices that depend on a parameter and would like to plot each eigenvalue in its own color. So I do not have functions that I could conveniently plot in different colors; instead, I just have a set of points that get connected by interpolation. My problem is that the lines should intersect, but that cannot be achieved with this numerical approach.
Maybe it is best explained with a small example.
import numpy as np
import numpy.linalg
import matplotlib.pyplot as plt
def mat(x):
    # return np.array([[np.sin(x), 0], [0, -np.sin(x)]])
    return np.array([[np.sin(x), 0, 0], [0, -np.sin(x), 0], [0, 0, np.sin(10*x)+x]])
fig=plt.figure()
fig.suptitle('wrong colors')
ax=fig.add_subplot(111)
# x = np.linspace(-1,1,100) # no, not that easy, the intersection points are difficult to find
x = np.sort(np.random.uniform(low=-1, high=1, size=1000))
#evs = np.zeros((2, len(x)))
evs = np.zeros((3, len(x)))
for i in range(len(x)):
    evs[:, i] = np.linalg.eigvalsh(mat(x[i]))
print(np.shape(evs))
ax.plot(x, evs[0,:], color='C0')
ax.plot(x, evs[1,:], color='C1')
ax.plot(x, evs[2,:], color='C2')
# reference plot: this is how it should look
fig2 = plt.figure()
fig2.suptitle('correct colors')
ax2 = fig2.add_subplot(111)
ax2.plot(x, np.sin(x), color='C0')
ax2.plot(x, -np.sin(x), color='C1')
ax2.plot(x, np.sin(10*x)+x, color='C2')
plt.show()
So what I get is this:
What I would like to have is this:
One difficulty is that the intersection point is difficult to calculate and usually not included. That's OK; I don't need the point itself, as the plot is purely informative. But the colors should be shown correctly. Any suggestions how I could achieve something like this easily?
To give you an idea of where this is to be used, have a look at the following picture.
Here, the straight lines in the middle should have a different color than the curved ones.
Besides the matrix being a lot more complex, the image is created in the same way as above.
EDIT: My example was not good and clear, so I have come up with one that is closer to my real problem. The matrix is numeric and I cannot diagonalize it analytically, i.e. I cannot know whether a branch is sin, cos, or maybe some mix like np.sin(2*x+0.2)+np.cos(x)**2.
Here you go: just concatenate the first part of one branch with the last part of the other.
import numpy as np
import numpy.linalg
import matplotlib.pyplot as plt
def mat(x):
    return np.array([[np.sin(x), 0], [0, -np.sin(x)]])
fig=plt.figure()
fig.suptitle('wrong colors')
ax=fig.add_subplot(111)
x = np.linspace(-1,1,100)
evs = np.zeros((2, len(x)))
for i in range(len(x)):
    evs[:, i] = np.linalg.eigvalsh(mat(x[i]))
print(np.shape(evs))
ax.plot(x, np.concatenate((evs[0, :len(x)//2], evs[1, len(x)//2:])), color='C0')
ax.plot(x, np.concatenate((evs[1, :len(x)//2], evs[0, len(x)//2:])), color='C1')
plt.show()
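If hardcoding the midpoint is too restrictive, one refinement (a sketch assuming a single crossing, reusing x and evs from the snippet above) is to split where the two sorted branches come closest together:

# eigvalsh returns the eigenvalues in ascending order, so at a true
# crossing the gap between the sorted branches shrinks to (nearly) zero
split = np.argmin(evs[1, :] - evs[0, :])
ax.plot(x, np.concatenate((evs[0, :split], evs[1, split:])), color='C0')
ax.plot(x, np.concatenate((evs[1, :split], evs[0, split:])), color='C1')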

Is there anything in matplotlib that behaves like alpha but reversed?

A good way to show the concentration of data points in a plot is to use a scatter plot with non-unit transparency. As a result, the areas with more concentration appear darker.
import numpy as np
import matplotlib.pyplot as plt

# this is a synthetic example
N = 10000  # a very, very large number
x = np.random.normal(0, 1, N)
y = np.random.normal(0, 1, N)
plt.scatter(x, y, marker='.', alpha=0.1)  # an area full of dots, darker wherever there are more of them
which gives something like this:
Imagine the case where we want to emphasize the outliers. The situation is then almost reversed: a plot in which the less concentrated areas are bolder. (There might be a trick that works for my simple example, but imagine a general case where the distribution of points is not known in advance, or where it is difficult to define a rule for transparency/weight on color.)
I was wondering whether there is anything as handy as alpha but designed specifically for this job. Other ideas for emphasizing outliers are also welcome.
UPDATE: This is what happens when more than one data point is scattered in the same area:
I'm looking for something like the picture below: the more data points, the less transparent the marker.
To answer the question: You can calculate the density of points, normalize it and encode it in the alpha channel of a colormap.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
# this is synthetic example
N = 10000 # a very very large number
x = np.random.normal(0, 1, N)
y = np.random.normal(0, 1, N)
fig, (ax,ax2) = plt.subplots(ncols=2, figsize=(8,5))
ax.scatter(x, y, marker='.', alpha=0.1)
values = np.vstack([x,y])
kernel = stats.gaussian_kde(values)
weights = kernel(values)
weights = weights/weights.max()
cols = plt.cm.Blues([0.8, 0.5])
cols[:,3] = [1., 0.005]
cmap = LinearSegmentedColormap.from_list("", cols)
ax2.scatter(x, y, c=weights, s = 1, marker='.', cmap=cmap)
plt.show()
Left is the original image, right is the image where higher density points have a lower alpha.
Note, however, that this is undesirable, because high-density transparent points are indistinguishable from low-density ones. I.e. in the right image it really looks as though you have a hole in the middle of your distribution.
Clearly, a solution with a colormap that does not contain the background color is a lot less confusing to the reader.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
# this is synthetic example
N = 10000 # a very very large number
x = np.random.normal(0, 1, N)
y = np.random.normal(0, 1, N)
fig, ax = plt.subplots(figsize=(5,5))
values = np.vstack([x,y])
kernel = stats.gaussian_kde(values)
weights = kernel(values)
weights = weights/weights.max()
ax.scatter(x, y, c = weights, s=9, edgecolor="none", marker='.', cmap="magma")
plt.show()
Here, low-density points are still emphasized by a darker color, but at the same time it's clear to the viewer that the highest density lies in the middle.
As far as I know, there is no "direct" solution to this quite interesting problem. As a workaround, I propose this solution:
import numpy as np
import matplotlib.pyplot as plt

N = 10000  # a very, very large number
x = np.random.normal(0, 1, N)
y = np.random.normal(0, 1, N)
fig = plt.figure() # create figure directly to be able to extract the bg color
ax = fig.gca()
ax.scatter(x, y, marker='.') # plot all markers without alpha
bgcolor = ax.get_facecolor() # extract current background color
# plot with alpha, "overwriting" dense points
ax.scatter(x, y, marker='.', color=bgcolor, alpha=0.2)
This plots all points without transparency and then plots all points again with some transparency, "overwriting" those points with the highest density the most. Setting the alpha to higher values puts more emphasis on outliers, and vice versa.
Of course the color of the second scatter plot needs to match your background color. In my example this is done by extracting the background color and setting it as the new scatter plot's color.
This solution is independent of the kind of distribution; it only depends on the density of the points. However, it draws every point twice, so rendering may take slightly longer.
Reproducing the edit in the question, my solution shows exactly the desired behavior: the leftmost marker is a single point and is the darkest; the rightmost consists of three points and is the lightest.
x = [0, 1, 1, 2, 2, 2]
y = [0, 0, 0, 0, 0, 0]
fig = plt.figure() # create figure directly to be able to extract the bg color
ax = fig.gca()
ax.scatter(x, y, marker='.', s=10000) # plot all markers without alpha
bgcolor = ax.get_facecolor() # extract current background color
# plot with alpha, "overwriting" dense points
ax.scatter(x, y, marker='.', color=bgcolor, alpha=0.2, s=10000)
Assuming that the distributions are centered around a specific point (e.g. (0,0) in this case), I would use this:
import numpy as np
import matplotlib.pyplot as plt
N = 500
# 0 mean, 0.2 std
x = np.random.normal(0,0.2,N)
y = np.random.normal(0,0.2,N)
# calculate the distance to (0, 0).
color = np.sqrt((x-0)**2 + (y-0)**2)
plt.scatter(x , y, c=color, cmap='plasma', alpha=0.7)
plt.show()
Results:
I don't know if it helps, because it's not exactly what you asked for, but you can simply color the points whose values are bigger than some threshold. For example:
import numpy as np
import matplotlib.pyplot as plt
num = 100
threshold = 80
x = np.linspace(0, 100, num=num)
y = np.random.normal(size=num)*45
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x[np.abs(y) < threshold], y[np.abs(y) < threshold], color="#00FFAA")
ax.scatter(x[np.abs(y) >= threshold], y[np.abs(y) >= threshold], color="#AA00FF")
plt.show()

How to change the axis dimension from pixels to length in matplotlib? Is there any code in general?

Since the complete simulation is too big to post here, only the code to plot the spectrum is given (I think this is enough):
import pylab

d = i.sum(axis=2)  # i is the simulation output array (not shown here)
pylab.figure(figsize=(15, 15))
pylab.imshow(d)
pylab.axis('tight')
pylab.show()
The spectrum axes are given in pixels, but I would like to have them in units of length. I hope you can give me some advice.
Do you mean that you want the axis ticks to show your custom dimensions instead of the number of pixels in d? If so, use the extent keyword of imshow, which specifies the data coordinates (left, right, bottom, top) that the image is mapped onto:
import numpy
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
d = numpy.random.normal(size=(20, 40))
fig = plt.figure()
s = fig.add_subplot(1, 1, 1)
s.imshow(d, extent=(0, 1, 0, 0.5), interpolation='none')
fig.tight_layout()
fig.savefig('tt.png')
I'm guessing a bit at what your problem is, so let's start by stating my interpretation: you have some 2D data d that you plot using imshow, and the units on the x and y axes are the number of pixels. For example, in the following we see the x axis labelled from 0 to 10 for the number of data points:
import numpy as np
import matplotlib.pyplot as plt
# Generate a fake d
x = np.linspace(-1, 1, 10)
y = np.linspace(-1, 1, 10)
X, Y = np.meshgrid(x, y)
d = np.sin(X**2 + Y**2)
plt.imshow(d)
If this correctly describes your issue, then the solution is to avoid using imshow, which is designed to plot images. Firstly this helps because imshow attempts to interpolate to give a smoother image (which may hide features in the spectrum), and secondly, because the input is treated as an image, there is no meaningful x and y data for it to plot.
The best alternative is plt.pcolormesh, which generates a pseudocolor plot of a 2D array and takes as arguments X and Y, both 2D arrays of the points to which the values of d correspond.
For example:
# Generate a fake d
x = np.linspace(-1, 1, 10)
y = np.linspace(-1, 1, 10)
X, Y = np.meshgrid(x, y)
d = np.sin(X**2 + Y**2)
plt.pcolormesh(X, Y, d)
Now the x and y values correspond to the values of X and Y.

Plot of 3D matrix with colour scale - Python

I would like to plot a 3D matrix - essentially a box of numbers, each labelled by an (x, y, z) triad of coordinates - by assigning a different colour to each (x, y, z) point according to its magnitude (for example, bigger numbers in red and smaller numbers in blue).
I cannot plot sections of the matrix; rather, I need to plot the whole matrix together.
If we call matrix3D my matrix, its elements are built this way:
matrix3D[x][y][z] = np.exp(-(x**2+y**2+z**2))
How can I obtain the desired plot?
EDIT: Using Mayavi2 Contour3D(), I have tried to write the following:
import numpy as np
from mayavi import mlab

X = np.arange(0, n_x, 1)
Y = np.arange(0, n_y, 1)
Z = np.arange(0, n_z, 1)
# indexing='ij' keeps the grid axes aligned with matrix3D's dimensions
X, Y, Z = np.meshgrid(X, Y, Z, indexing='ij')
obj = mlab.contour3d(X, Y, Z, matrix3D, contours=4, transparent=True)
where n_x, n_y, n_z are the dimensions of the three axes. How can I actually see and/or save the image now?
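For the display/save part of the question, a minimal sketch using mlab's own helpers (assuming the contour3d call above produced a valid scene):

mlab.show()                    # open the interactive scene window
mlab.savefig('contour3d.png')  # or write the scene to a file (call before/instead of show)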
If you need to plot the whole thing, I think you're best off taking a look at mayavi. It will let you plot a volume, and you should be able to get the results you need.
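For instance, a minimal sketch of a volume rendering with mayavi (the grid spacing and resolution here are my own assumptions, not taken from the question):

import numpy as np
from mayavi import mlab

# sample the Gaussian from the question on a 3D grid
x, y, z = np.mgrid[-2:2:40j, -2:2:40j, -2:2:40j]
matrix3D = np.exp(-(x**2 + y**2 + z**2))

# render the whole box as a translucent volume
mlab.pipeline.volume(mlab.pipeline.scalar_field(matrix3D))
mlab.show()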
I know you said you need to plot the whole thing at once, but this might still be of some use. You can use contourf to plot a slice like this:
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(10)
y = np.arange(10)
z = np.arange(10)
# build the matrix on a full 3D grid; indexing the array as
# matrix3D[x][y][z] with whole index arrays would not fill it element-wise
X, Y, Z = np.meshgrid(x, y, z, indexing='ij')
matrix3D = np.exp(-(X**2 + Y**2 + Z**2))
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.contourf(x, y, matrix3D[:, :, 3])
plt.show()
This gives you a slice of the 3D matrix (in this example the 4th slice).

How does one draw the X = 0 plane using matplotlib (mpl3d)?

Here is an answer that lets one plot a plane using matplotlib, but if one uses the normal vector [1, 0, 0], nothing gets plotted! This makes sense given the way the code is set up (the meshgrid is on the X-Y plane, and the Z values then determine the surface).
So, how can I plot the X = 0 plane using matplotlib?
This is less generic than the example that you linked, but it does the trick:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
yy, zz = np.meshgrid(range(2), range(2))
xx = yy*0
ax = plt.subplot(projection='3d')
ax.plot_surface(xx, yy, zz)
plt.show()
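If you need the plane at an arbitrary offset x = c rather than x = 0, the same trick works: replace xx = yy*0 with xx = yy*0 + c.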
