Matplotlib troubleshooting, matplotlib.pyplot.xlim() - python

I am having difficulty understanding how matplotlib.pyplot.xlim() works.
I am plotting a simple plot of x values vs y values. The y values are numerical points in the range of 100-600. The x values are of magnitude e-09 to e-13. So, I plot x against y. This is my plot, with generic pseudocode
import numpy as np
import matplotlib.pyplot as plt

x = np.array([...])  # x values, magnitudes roughly 1e-13 to 1e-9
y = np.array([...])  # y values, roughly 100 to 600
plt.plot(x, y)
plt.ylim(0, 400)
plt.show()
As you can tell, there's plenty of structure between 0 and 0.5. I would like to look at that.
So, I try
plt.plot(x,y)
plt.xlim(0,0.5)
plt.ylim(0,400)
plt.show()
The output plot is completely blank. I see nothing.
So, I try xlim = -1 to +1:
plt.plot(x,y)
plt.xlim(-1,1)
plt.ylim(0,400)
plt.show()
This is the output plot.
Using the original plot, how can I set the x-axis to see the actual data?

As you clearly mentioned
The x values are of magnitude e-09 to e-13.
So if you want to see the values that lie between 0.5e-9 and 1e-8, you should do:
plt.xlim(0.5e-9, 1e-8)
instead of
plt.xlim(0, 0.5)
where you have no values to show, since your x values lie between 1e-13 and 1e-9.
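If the goal on the original plot is simply to frame the full data range, a minimal sketch is to take the x-limits directly from the data. The arrays below are synthetic stand-ins with the magnitudes described in the question, since the real data is not shown:
import numpy as np
import matplotlib.pyplot as plt
# Synthetic stand-in data: x spans ~1e-13 to 1e-9, y lies between ~100 and 600
x = np.logspace(-13, -9, 200)
y = 350 + 250 * np.sin(np.linspace(0, 10, 200))
plt.plot(x, y)
plt.xlim(x.min(), x.max())  # frame the x-axis around the actual data range
plt.ylim(0, 400)
plt.show()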

If your x-values range in magnitude from 1e-13 to 1e-9, you have completely different length scales. In this case a logarithmic axis may be appropriate. Note that this only works if all x-values are strictly positive.
plt.xscale('log')
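As a small sketch of that in context (same kind of synthetic stand-in data as in the sketch above; the only addition to the question's plotting code is the xscale call):
import numpy as np
import matplotlib.pyplot as plt
x = np.logspace(-13, -9, 200)  # synthetic x values spanning several decades
y = 350 + 250 * np.sin(np.linspace(0, 10, 200))
plt.plot(x, y)
plt.xscale('log')  # spreads the decades 1e-13 ... 1e-9 evenly across the axis
plt.ylim(0, 400)
plt.show()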

Related

Creating a pseudo color plot with a linear and nonlinear axis and computing values based on the center of grid values

I have the equation: z(x,y)=1+x^(2/3)y^(-3/4)
I would like to calculate values of z for x = [0, 100] and y = [10^1, 10^4]. I will do this for 100 points in each axis direction. My grid, then, will be 100x100 points. In the x-direction I want the points spaced linearly. In the y-direction I want the points spaced logarithmically.
Were I to need these values I could easily go through the following:
import numpy as np
x = np.linspace(0, 100, 100)
y = np.logspace(1, 4, 100)
z = np.zeros((len(x), len(y)))
for i in range(len(x)):
    for j in range(len(y)):
        z[i, j] = 1 + x[i]**(2/3) * y[j]**(-3/4)
The problem for me comes with visualizing these results. I know that I would need to create a grid of points. I feel my options are to create a meshgrid with the values and then use pcolor.
My issue here is that the values at the center of the blocks do not coincide with the calculated values. In the x-direction I could fix this by shifting the x-vector by half of dx (the step between successive values). I'm not so sure how I would do this for the y-axis. Furthermore, if I wanted to compute values for each of the y-direction values, including the end points, they would not all show up.
In the final visualization I would like to have the y-axis as a log scale and the x-axis as a linear scale. I would also like the tick marks to fall in the center of the cells, correlating with the correct value. Can someone point me to the correct plotting functions for this? I have to resolve the issue using pcolor or pcolormesh.
Should you require more details, please let me know.
In current matplotlib, you can use pcolormesh with shading='nearest', and it will center the blocks on the values:
import numpy as np
import matplotlib.pyplot as plt
# x, y, z as computed in the question
y_plot = np.log10(y)
z[5, 5] = 0  # zero out one cell to make the centering more evident
plt.pcolormesh(x, y_plot, z, shading="nearest")
plt.colorbar()
ax = plt.gca()
ax.set_xticks(x)
ax.set_yticks(y_plot)
plt.axvline(x[5])
plt.axhline(y_plot[5])
plt.show()
Output:

Plotting KDE with logarithmic x-data in Matplotlib

I want to plot a KDE for some data that covers a large range in x-values, therefore I want to use a logarithmic scale for the x-axis. For plotting I was using seaborn and the solution from Plotting 2D Kernel Density Estimation with Python, both of which fail once I set the xscale to logarithmic. When I take the logarithm of my x-data beforehand, everything looks fine, except that the ticks and tick labels are still linear, with the logarithm of the actual values as the labels. I could manually change the ticks using something like:
labels = np.array(ax.get_xticks().tolist(), dtype=np.float64)
new_labels = [r'$10^{%.1f}$' % (labels[i]) for i in range(len(labels))]
ax.set_xticklabels(new_labels)
but in my eyes that just looks wrong and is nothing close to the axis labels (including the minor ticks) that I would get if I just used
ax.set_xscale('log')
Is there an easier way to plot a KDE with logarithmic x-data? Or is it possible to just change the tick or label scale without changing the scaling of the data, so that I could plot the logarithmic values of x and change the scaling of the labels afterwards?
Edit:
The plot I want to create looks like this:
The two right columns are what it is supposed to look like. There I used the x data with the logarithm already applied. I don't like the labels on the x-axis, though.
The left column displays the plots when the original data is used for the KDE and all the other plots, and the scale is changed afterwards using
ax.set_xscale('log')
For some reason the KDE does not look the way it is supposed to. This is also not a result of erroneous data, since it looks just fine if the logarithmic data is used.
Edit 2:
A working example of code is
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
data = np.random.multivariate_normal((0, 0), [[0.8, 0.05], [0.05, 0.7]], 100)
x = np.power(10, data[:, 0])
y = data[:, 1]
fig, ax = plt.subplots(2, 1)
sns.kdeplot(data=np.log10(x), data2=y, ax=ax[0])
sns.kdeplot(data=x, data2=y, ax=ax[1])
ax[1].set_xscale('log')
plt.show()
The ax[1] plot is not displayed correctly for me (the x-axis is inverted), but the general behavior is the same as for the case described above. I believe the problem lies with the bandwidth of the kde, which should probably account for the logarithmic x-data.
I found an answer that works for me and wanted to post it in case someone else has a similar problem.
Based on the accepted answer from this post, I defined a function that first applies the logarithm to the x-data and, after the KDE has been performed, transforms the x-values back to the original values. Afterwards I can simply plot the contours and use ax.set_xscale('log').
import numpy as np
import scipy.stats as st

def logx_kde(x, y, xmin, xmax, ymin, ymax):
    x = np.log10(x)
    # Perform the kernel density estimate on a regular grid in log-x space
    xx, yy = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
    positions = np.vstack([xx.ravel(), yy.ravel()])
    values = np.vstack([x, y])
    kernel = st.gaussian_kde(values)
    f = np.reshape(kernel(positions).T, xx.shape)
    return np.power(10, xx), yy, f
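A usage sketch for this function follows. The data generation here is hypothetical, mirroring the working example above; note that xmin/xmax are given in log10 units because the KDE runs on log10(x):
import numpy as np
import matplotlib.pyplot as plt
# Hypothetical data, mirroring the working example above
data = np.random.multivariate_normal((0, 0), [[0.8, 0.05], [0.05, 0.7]], 100)
x = np.power(10, data[:, 0])
y = data[:, 1]
# xmin/xmax in log10 units; the returned xx is already back on the original scale
xx, yy, f = logx_kde(x, y, xmin=-3, xmax=3, ymin=-3, ymax=3)
fig, ax = plt.subplots()
ax.contourf(xx, yy, f, levels=20)
ax.set_xscale('log')
plt.show()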

Setting both axes logarithmic in bar plot matplotlib

I have data that is already binned and that I want to plot as a histogram. For this reason I'm using the plt.bar() function. I'd like to set both axes of the plot to a logarithmic scale.
Setting plt.bar(x, y, width=10, color='b', log=True) lets me make the y-axis logarithmic, but I can't make the x-axis logarithmic that way.
I've tried plt.xscale('log'), but unfortunately this doesn't work right: the x-axis ticks vanish and the bars no longer have equal width.
I would be grateful for any help.
By default, the bars of a bar plot have a width of 0.8. Therefore they appear wider for smaller x values on a logarithmic scale. If, instead of specifying a constant width, one uses the distance between the bin edges and supplies this to the width argument, the bars will have the correct width. One also needs to set align="edge" for this to work.
import matplotlib.pyplot as plt
import numpy as np; np.random.seed(1)
x = np.logspace(0, 5, num=21)
y = (np.sin(1.e-2*(x[:-1]-20))+3)**10
fig, ax = plt.subplots()
ax.bar(x[:-1], y, width=np.diff(x), log=True, ec="k", align="edge")
ax.set_xscale("log")
plt.show()
I cannot reproduce missing ticklabels for a logarithmic scaling. This may be due to some settings in the code that are not shown in the question or due to the fact that an older matplotlib version is used. The example here works fine with matplotlib 2.0.
If the goal is to have equal-width bars, assuming the data points are not equidistant, then the most proper solution is to set the width as
plt.bar(x, y, width=c*np.array(x), color='b', log=True)
for a constant c appropriate for the plot. The alignment can be anything.
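A minimal sketch of that idea (the data and the constant c = 0.1 here are illustrative choices, not from the original answer):
import numpy as np
import matplotlib.pyplot as plt
x = np.logspace(0, 5, num=20)  # illustrative log-spaced x positions
y = np.random.rand(20) * 100 + 1
c = 0.1  # width proportional to x, so every bar looks equally wide on a log axis
plt.bar(x, y, width=c * x, color='b', log=True)
plt.xscale('log')
plt.show()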
I know it is a very old question and you might have solved it already, but I came across this post because I was dealing with something similar, only on the y-axis, and I managed to solve it just by using ax.set_ylim(df['my data'].min()+100, df['my data'].max()+100). On the y-axis I have some information which I thought was best shown on a log scale, but when I set a log scale I couldn't read the numbers properly (as this post describes for the x-axis), so I dropped the idea of using log and used the min and max arguments instead. It scales my graph much like a log scale would. I am still looking for another way that doesn't need that +/-100 in set_ylim.
While this does not actually use pyplot.bar, I think this method could be helpful in achieving what the OP is trying to do. I found it easier than trying to calibrate the width as a function of the log scale, though it involves more steps. Create a line collection whose width is independent of the chart scale.
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.collections as coll

# Generate data and sort into bins
a = np.random.logseries(0.5, 1000)
hist, bin_edges = np.histogram(a, bins=20, density=False)
x = bin_edges[:-1]  # remove the top edge from bin_edges to match the dimensions of hist

# one vertical line per bin, running from 0 up to the bin count
lines = []
for i in range(len(x)):
    pair = [(x[i], 0), (x[i], hist[i])]
    lines.append(pair)

linecoll = coll.LineCollection(lines, linewidths=10, linestyles='solid')
fig, ax = plt.subplots()
ax.add_collection(linecoll)
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_xlim(min(x)/10, max(x)*10)
ax.set_ylim(0.1, 1.1*max(hist))  # since this is an unweighted histogram, the log y-axis doesn't make much sense
plt.show()
Resulting plot - no frills
One drawback is that the "bars" will be centered, but this could be changed by offsetting the x-values by half of the linewidth value ... I think it would be
x_new = x + (linewidth/2)*10**round(np.log10(x),0).

Fill area under curve in matplotlib python on log scale

I'm trying to fill the area under a curve with matplotlib. The script below works fine.
import matplotlib.pyplot as plt
from math import sqrt
x = range(100)
y = [sqrt(i) for i in x]
plt.plot(x,y,color='k',lw=2)
plt.fill_between(x,y,0,color='0.8')
plt.show()
However, if I set the y-scale to logarithmic (see below), it sometimes fills the area above the curve! Can anyone help me? I would like to fill the area between the curve and y = 0.
x = range(100)
y = [sqrt(i) for i in x]
plt.plot(x,y,color='k',lw=2)
plt.fill_between(x,y,0,color='0.8')
plt.yscale('log')
plt.show()
Thanks in advance!
With a logarithmic y-scale, fill_between(x, y, 0) tells matplotlib to fill the region between log(0) = -infinity and log(y). Naturally, it balks. You can avoid the problem by changing 0 to some small number like 1e-6.
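A quick sketch of that workaround, lightly adapted from the question's code (the x range starts at 1 here so every y value is strictly positive):
import matplotlib.pyplot as plt
from math import sqrt
x = range(1, 100)  # start at 1 so every y value is strictly positive
y = [sqrt(i) for i in x]
plt.plot(x, y, color='k', lw=2)
plt.fill_between(x, y, 1e-6, color='0.8')  # fill down to a small positive floor instead of 0
plt.yscale('log')
plt.show()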
As mentioned, 0 -> -inf in a log scale. Thus, any plotted value that was less than or equal to zero would be problematic (requiring an infinite ylim in log space). This problem exists independently of whether you are using fill_between() or not.
Fortunately, matplotlib provides a way to handle this nicely. In the default behavior, matplotlib masks every value less than or equal to zero. In your example, this means that your entire y=0 line is masked and excluded from the polygon defining the filled-between area. The result is that the polygon is simply closed by drawing a line from (100,10) down and leftward to (0,0). Another option is to clip the values. In this case, they are set to 1e-300 and are not consulted when determining the ylim of the plot. So to get your desired result, do the following:
plt.yscale('log', nonposy='clip')
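Note that in Matplotlib 3.3 and later the nonposy keyword was renamed, so the equivalent call there is:
import matplotlib.pyplot as plt
# Matplotlib >= 3.3: nonposy was renamed to nonpositive
plt.yscale('log', nonpositive='clip')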

Plot a histogram such that bar heights sum to 1 (probability)

I'd like to plot a normalized histogram from a vector using matplotlib. I tried the following:
plt.hist(myarray, normed=True)
as well as:
plt.hist(myarray, normed=1)
but neither option produces a y-axis from [0, 1] such that the bar heights of the histogram sum to 1.
If you want the sum of all bars to be equal unity, weight each bin by the total number of values:
weights = np.ones_like(myarray) / len(myarray)
plt.hist(myarray, weights=weights)
Note for Python 2.x: add a cast to float() for one of the operands of the division, as otherwise you would end up with zeros due to integer division.
It would be more helpful if you posted a more complete working (or in this case non-working) example.
I tried the following:
import numpy as np
import matplotlib.pyplot as plt
x = np.random.randn(1000)
fig = plt.figure()
ax = fig.add_subplot(111)
n, bins, rectangles = ax.hist(x, 50, density=True)
fig.canvas.draw()
plt.show()
This will indeed produce a bar-chart histogram with a y-axis that goes from [0,1].
Further, as per the hist documentation (i.e. ax.hist? from ipython), I think the sum is fine too:
*normed*:
    If *True*, the first element of the return tuple will
    be the counts normalized to form a probability density, i.e.,
    ``n/(len(x)*dbin)``. In a probability density, the integral of
    the histogram should be 1; you can verify that with a
    trapezoidal integration of the probability density function::

        pdf, bins, patches = ax.hist(...)
        print np.sum(pdf * np.diff(bins))
Giving this a try after the commands above:
np.sum(n * np.diff(bins))
I get a return value of 1.0 as expected. Remember that normed=True doesn't mean that the sum of the value at each bar will be unity, but rather that the integral over the bars is unity. In my case np.sum(n) returned approximately 7.2767.
I know this answer is too late considering the question is dated 2010, but I came across it as I was facing a similar problem myself. As already stated in the answer, normed=True means that the total area under the histogram is equal to 1, but the sum of the heights is not. However, for the convenience of physical interpretation of a histogram, I wanted to make one with the sum of the heights equal to 1.
I found a hint in the following question - Python: Histogram with area normalized to something other than 1
But I was not able to find a way of making bars mimic the histtype="step" feature of hist(). This diverted me to: Matplotlib - Stepped histogram with already binned data
If the community finds it acceptable I should like to put forth a solution which synthesises ideas from both the above posts.
import numpy as np
import matplotlib.pyplot as plt

# Let X be the array whose histogram needs to be plotted.
nx, xbins, ptchs = plt.hist(X, bins=20)
plt.clf()  # Get rid of this histogram since it is not the one we want.

nx_frac = nx/float(len(X))  # Each bin count divided by the total number of objects.
width = xbins[1] - xbins[0]  # Width of each bin.
x = np.ravel(list(zip(xbins[:-1], xbins[:-1]+width)))
y = np.ravel(list(zip(nx_frac, nx_frac)))

plt.plot(x, y, linestyle="dashed", label="MyLabel")
# ... Further formatting.
This has worked wonderfully for me, though in some cases I have noticed that the left-most "bar" or the right-most "bar" of the histogram does not close down by touching the lowest point of the y-axis. In such a case, adding an element 0 at the beginning or the end of y achieved the necessary result.
Just thought I'd share my experience. Thank you.
Here is another simple solution using the np.histogram() method.
import numpy as np
import matplotlib.pyplot as plt

myarray = np.random.random(100)
results, edges = np.histogram(myarray, density=True)  # density=True replaces the removed normed argument
binWidth = edges[1] - edges[0]
plt.bar(edges[:-1], results*binWidth, binWidth)
You can indeed check that the total sums up to 1 with:
>>> print(sum(results * binWidth))
1.0
The easiest solution is to use seaborn.histplot, or seaborn.displot with kind='hist', and specify stat='probability'
probability: or proportion: normalize such that bar heights sum to 1
density: normalize such that the total area of the histogram equals 1
data: pandas.DataFrame, numpy.ndarray, mapping, or sequence
seaborn is a high-level API for matplotlib
Tested in python 3.8.12, matplotlib 3.4.3, seaborn 0.11.2
Imports and Data
import seaborn as sns
import matplotlib.pyplot as plt
# load data
df = sns.load_dataset('penguins')
sns.histplot
axes-level plot
# create figure and axes
fig, ax = plt.subplots(figsize=(6, 5))
p = sns.histplot(data=df, x='flipper_length_mm', stat='probability', ax=ax)
sns.displot
figure-level plot
p = sns.displot(data=df, x='flipper_length_mm', stat='probability', height=4, aspect=1.5)
Since matplotlib 3.0.2, normed=True is deprecated. To get the desired output I had to do:
import numpy as np
import matplotlib.pyplot as plt

data = np.random.randn(1000)
bins = np.linspace(-3.0, 3.0, 51)  # 51 edges -> 50 equal-width bins
counts, _ = np.histogram(data, bins=bins)

density = True
if density:  # equivalent of normed=True
    counts_weighter = counts.sum()
else:  # equivalent of normed=False
    counts_weighter = 1.0

plt.hist(bins[:-1], bins=bins, weights=counts/counts_weighter)
Trying to specify weights and density simultaneously as arguments to plt.hist() did not work for me. If anyone knows of a way to get that working without having access to the normed keyword argument then please let me know in the comments and I will delete/modify this answer.
If you want bin centres then don't use bins[:-1], which are the bin edges - you need to choose a suitable scheme for how to calculate the centres (which may or may not be trivially derived).
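For equal-width bins, a common scheme (a small sketch under that assumption) is the midpoint of each pair of adjacent edges:
import numpy as np
bins = np.linspace(-3.0, 3.0, 51)
centres = 0.5 * (bins[:-1] + bins[1:])  # midpoint of each pair of adjacent bin edges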
