I have two very sparse arrays that look like:
Array A: min = -68093253945.0, max = 8.54631971208e+13
Array B: min = -1e+15, max = 1.87343e+14
Each array also has concentrations at certain levels, e.g. near 2000, near 1m, near 0.05, and so on.
I am trying to compare these two arrays in terms of concentration, in a way that is invariant to the number of entries in each. I also want to account for huge outliers if possible, and perhaps compress the bins to lie between 0 and 1 or something of that sort.
The aim is to make a histogram via:
plt.hist(A, alpha=0.5, label='A')  # plt.hist passes its arguments to np.histogram
plt.ion()
plt.hist(B, alpha=0.5, label='B')
plt.title("Histogram of Values")
plt.legend(loc='upper right')
plt.savefig('valuecomp.png')
How do I do this? I have experimented with:
from scipy import stats
from sklearn import preprocessing

A = stats.zscore(A)
B = stats.zscore(B)

A = preprocessing.scale(A)
B = preprocessing.scale(B)

A = preprocessing.scale(A, axis=0, with_mean=True, with_std=True, copy=True)
B = preprocessing.scale(B, axis=0, with_mean=True, with_std=True, copy=True)
And then, for my histograms, adding normed=True (density=True in current matplotlib) and range=(0,100). All of these methods give me a histogram with a massive vertical chunk near 0.0 instead of distributing the values smoothly. range=(0,100) looks good, but it ignores any values outside [0, 100], like 1m.
Perhaps I need to remove outliers from my data first and then do a histogram?
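A minimal sketch of that outlier-clipping idea, assuming A and B are the arrays above (the 1st/99th percentile cutoffs and the bin count are my assumptions):

import numpy as np
import matplotlib.pyplot as plt

# Clip both arrays to a shared robust range: 1st to 99th percentile of the pooled data
lo, hi = np.percentile(np.concatenate([A, B]), [1, 99])
bins = np.linspace(lo, hi, 50)

# density=True normalizes each histogram, so arrays of different sizes are comparable
plt.hist(np.clip(A, lo, hi), bins=bins, alpha=0.5, density=True, label='A')
plt.hist(np.clip(B, lo, hi), bins=bins, alpha=0.5, density=True, label='B')
plt.legend(loc='upper right')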
#sascha's suggestion of using AstroML was a good one, but the knuth and freedman versions seem to take astronomically long (excuse the pun), and the blocks version simply thinned the blocks.
In the end I took the sigmoid of each value via scipy.special.expit and plotted the histogram of that. It was the only way I could get this to work.
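A minimal sketch of that expit approach (the bin count of 50 is my assumption):

from scipy.special import expit
import matplotlib.pyplot as plt

# expit(x) = 1/(1 + exp(-x)) squashes every value into (0, 1);
# note that |x| beyond ~40 saturates to exactly 0.0 or 1.0 in float64
plt.hist(expit(A), bins=50, alpha=0.5, label='A')
plt.hist(expit(B), bins=50, alpha=0.5, label='B')
plt.title("Histogram of Values")
plt.legend(loc='upper right')
plt.savefig('valuecomp.png')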
I have several numpy matrices collected over some time. I now want to visualize these matrices and explore visual similarities among them. The matrices contain small numbers from 0.0 to 1.0.
To compare them, I want to ensure that the same "areas" get colored with the same color, e.g. that 0.01 to 0.02 is always red, and 0.02 to 0.03 is always green. I have two questions:
I found another question which has this code snippet:
a = np.random.normal(0.0,0.5,size=(5000,10))**2
a = a/np.sum(a,axis=1)[:,None] # Normalize
plt.pcolor(a)
What is the effect of the second line, specifically the [:,None] statement? I tried normalizing a matrix by:
max_a = a/10  # Normalize
print(max_a.shape)
plt.pcolor(max_a)
but there is not much visual difference compared to the visualization of the unnormalized matrix. When I then add the [:,None] statement I get an error
ValueError: too many values to unpack (expected 2)
which is expected, since the shape is now (10, 1, 10). I therefore want to know what the brackets do and how to read the statement.
Secondly, and related: I want to make sure that I can visually compare the matrices. I therefore want to fix the "colorization", e.g. the ranges in which a color is green or red, so that I do not end up with 0 to 0.1 as green in plot A and 0 to 0.1 as red in plot B. How can I fix the "translation" from floats to colors? Do I have to normalize each matrix with the same constant, e.g. 10? Or do I normalize them with a unique value -- do I even need normalization here?
[:,None] adds a new axis, so you can divide each row by the sum of its columns; it is the same as using np.sum(a, axis=1)[:, np.newaxis]. When you sum all columns with np.sum(a, axis=1) you get a 1d array with shape (5000,), but to normalize your matrix by the summed columns through broadcasting you need a 2d array with shape (5000, 1); that's why the new axis is needed.
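A minimal sketch of the shapes involved:

import numpy as np

a = np.random.normal(0.0, 0.5, size=(5000, 10))**2
row_sums = np.sum(a, axis=1)     # shape (5000,)
print(row_sums[:, None].shape)   # (5000, 1) -- broadcastable against (5000, 10)
a = a / row_sums[:, None]        # divide each row by its sum
print(a.sum(axis=1)[:3])         # -> [1. 1. 1.]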
You can get fixed colors by fixing the scale of your colormap: plt.pcolor(max_a, vmin=0, vmax=1)
Adding a discrete colorbar might also help:
import matplotlib.pyplot as plt
from matplotlib import cm

cmap = cm.get_cmap('jet', 10)  # a colormap quantized to 10 discrete colors
plt.pcolor(a, cmap=cmap, vmin=0, vmax=1)
plt.colorbar()
I have a series of simple mass-radius relationships (so a 2d plot) that I'd like to include in one plot, colored according to how good a fit each one is to my data. I have the radii (x), the masses (y), and a separate 1d array that quantifies how well each M-R relationship fits my data. This 1d array can be likened to error, but it isn't calculated using a standard Python function (I calculate it myself).
Ideally, my end result is a series of ~2000 mass-radius relationships on one plot, where each mass-radius relationship is color coded according to its agreement with my data. So something like this, but in grayscale instead of two colors:
Here's a snippet of what I'm trying to do, which obviously isn't working, as I didn't even define a colormap:
for i in range(10):
    plt.plot(x, y, c=error[i])
plt.colorbar()
plt.show()
And again, I'd like to have each element in error correspond to a color in greyscale.
I know this is simple so I'm definitely outing myself as an amateur here, but I really appreciate any help!
EDIT: Here is the code snippet where I made the plot:
for i in range(2396):
    if eps[i] == 0.:
        plt.plot(f[i,:,1], f[i,:,0], c='g', linewidth=0.1)
    else:
        plt.plot(f[i,:,1], f[i,:,0], c='r', linewidth=0.1)
plt.xlabel('Radius')
plt.ylabel('Mass')
plt.title('Neutron Star Mass-Radius Relationships')
You have one fit value for each series of points:
Here is a script to plot multiple series on a single plot, where each series (i.e. each line) is colored based on a third fit variable:
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

fit = np.random.rand(25)
cmap = mpl.cm.get_cmap('binary')
color_gradients = cmap(fit)  # this line changed! it was incorrect before

fig, (ax1, ax2) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [30, 1]})

for i, _ in enumerate(fit):
    x = sorted(np.random.randint(100, size=25))
    y = sorted(np.random.randint(100, size=25))
    ax1.plot(x, y, c=color_gradients[i])

cb = mpl.colorbar.ColorbarBase(ax2, cmap=cmap,
                               orientation='vertical',
                               ticks=[0, 1])
Now responding to your questions from the comments:
How does fit play into the rest of the plot?
fit is an array of random decimals between 0 and 1, corresponding to the "error" values for each series:
>>> fit
array([0.76458568, 0.15017328, 0.70686393, 0.98885091, 0.18449953,
0.62506401, 0.49513702, 0.69138913, 0.96844495, 0.48937011,
0.09878352, 0.68965829, 0.13524182, 0.95419698, 0.39844843,
0.63095159, 0.95933663, 0.00693236, 0.98212815, 0.16262205,
0.26274884, 0.56880703, 0.68233984, 0.18304883, 0.66759496])
fit is used to generate the divisions of the color gradient in these lines:
cmap = mpl.cm.get_cmap('binary')
color_gradients = cmap(fit)
I'm not sure where the specific documentation for this is, but basically, passing an array of numbers to the cmap returns an array of RGBA color values spaced according to the array passed:
>>> color_gradients
array([[0.23529412, 0.23529412, 0.23529412, 1. ],
[0.85098039, 0.85098039, 0.85098039, 1. ],
[0.29411765, 0.29411765, 0.29411765, 1. ],
[0.00784314, 0.00784314, 0.00784314, 1. ],
       ...
So this array can be used to assign a specific color to each line, based on its fit. This assumes that higher numbers are better fits, and that you want better fits to be colored darker.
Note that before I had color_gradient_divisions = [(1/len(fit))*i for i in range(len(fit))], which was incorrect as it evenly divides the color map into 25 pieces, not actually returning values corresponding to the fit.
The cmap is also passed to the colorbar when constructing it. Often you can just call plt.colorbar to create one, but here matplotlib doesn't automatically know what to create a colorbar for, as the lines are separate and manually colored. So instead we create 2 axes, one for the plot and one for the colorbar (spacing them accordingly with the gridspec_kw argument), and then use mpl.colorbar.ColorbarBase to make the colorbar (I also removed a norm argument because I don't think it is needed).
why have you used an underscore in the for loop?
This is a pattern in Python, typically meaning "I'm not using this thing". enumerate returns an iterator of tuples with the structure (index, value). So enumerate(fit) returns (0, 0.76458568), (1, 0.15017328), etc. (based on the data shown above). I am only using the index i to get the corresponding position (and color) in color_gradients (ax1.plot(x, y, c=color_gradients[i])). Even though the values from fit are also returned by enumerate, I am not using them, so I point them to _ instead. If I were using them within the loop, I would use a typical variable name.
enumerate is the encouraged way to loop over an iterable when you need both the index of each value and the value itself. People also tend to use for i in range(len(fit)) to do this (which works fine!), but the further I've gone with Python, the more I've seen people avoid that.
This was a little bit of a confusing example; I set my loop to iterate over fit b/c I was conceptualizing "creating one graph for each value in fit". But I could have just looped over color_gradients (for c in color_gradients) which might be more clear.
But with your real data, something like enumerate may be helpful if you are looping over multiple aligned arrays. In my example, I just create new random data within each loop. But you will likely want an array of fit values, an array of color values, an array (of series) of radii, and an array (of series) of masses, such that the ith element of each array corresponds to the same star. You may be iterating over one array while needing the same position in another (zip is used for this too, as sketched below).
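A minimal sketch of that pattern, with hypothetical radii and masses arrays standing in for your real data:

import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

# Hypothetical aligned data: fit[i], radii[i], and masses[i] all describe star i
rng = np.random.default_rng(0)
radii = np.sort(rng.random((5, 25)), axis=1)
masses = np.sort(rng.random((5, 25)), axis=1)
fit = rng.random(5)

colors = mpl.cm.get_cmap('binary')(fit)

fig, ax = plt.subplots()
for r, m, c in zip(radii, masses, colors):  # zip walks the aligned arrays together
    ax.plot(r, m, c=c)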
I'll leave this second answer here, even though it wasn't what OP was getting at:
You have one fit value for each point:
Here, each pair of x,y coordinates has its own fit value:
import numpy as np
import matplotlib.pyplot as plt
x = np.random.randint(100, size=25)
y = np.random.randint(100, size=25)
fit = np.random.rand(25)
plt.scatter(x, y, c=fit, cmap='binary')
plt.colorbar()
Note that with either approach, poorly fitting points or lines may be nearly invisible, since low values map to white in the 'binary' colormap.
So basically I have some data and I need a way to smooth it out (so that the line produced from it is smooth, not jittery). When plotted, the data currently looks like this:
and what I want it to look like is this:
I tried using this numpy method to get the equation of the line, but it did not work for me because the graph repeats (there are multiple readings, so the graph rises, saturates, then falls, then repeats multiple times), so there isn't really a single equation that can represent it.
I also tried this but it did not work for the same reason as above.
The graph is defined as such:
gx = [] #x is already taken so gx -> graphx
gy = [] #same as above
#Put in data
#Get nice data #[this is what I need help with]
#Plot nice data and original data
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
The method I think would be most applicable is taking the average of every 2 points and assigning that value to both points, but this idea doesn't sit right with me, as potential values may be lost.
You could use an infinite-horizon filter (an exponentially weighted moving average):
import numpy as np
import matplotlib.pyplot as plt

x = 0.85  # adjust x to use more or less of the previous value
k = np.sin(np.linspace(0.5, 1.5, 100)) + np.random.normal(0, 0.05, 100)

# filtered = oldvalue*x + newvalue*(1-x)
filtered = np.zeros_like(k)
filtered[0] = k[0]
for i in range(1, len(k)):
    # uses x% of the previous filtered value and (1-x)% of the new value
    filtered[i] = filtered[i-1]*x + k[i]*(1-x)

plt.plot(k)
plt.plot(filtered)
plt.show()
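As a rule of thumb: the closer x is to 1, the heavier the smoothing but the more the filtered curve lags behind the raw signal; the closer x is to 0, the more closely it tracks the data but the less it smooths.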
I figured it out: by averaging every 4 results I was able to significantly smooth out the graph. Here is a demonstration:
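A minimal sketch of that averaging approach, assuming gx and gy hold the raw readings from the question (the window size of 4 matches the description):

import numpy as np
import matplotlib.pyplot as plt

window = 4
# each output point is the mean of 4 consecutive readings
smoothed = np.convolve(gy, np.ones(window)/window, mode='valid')

plt.plot(gx, gy, label='original')
plt.plot(gx[window-1:], smoothed, label='smoothed')
plt.legend()
plt.show()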
Hope this helps whoever needs it
I am trying to plot some data as a histogram in matplotlib, using high-precision values as x-axis ticks. The data is between 0 and 0.4, but most values are really close to each other, like:
0.05678, 0.05879, 0.125678, 0.129067
I used np.around() to round the values (and it rounded them, as it should, to between 0 and 0.4), but it didn't work quite right for all of the data.
Here is an example of one that worked somewhat right, and here is one that didn't: you can see there are points after 0.4, which is just not right.
Here is the code I used in Jupyter Notebook:
plt.hist(x=[advb_ratios, adj_ratios, verb_ratios], color=['r','y','b'], bins=10, label=['adverbs','adjectives','verbs'])
plt.xticks(np.around(ranks,1))
plt.xlabel('Argument Rank')
plt.ylabel('Frequency')
plt.legend()
plt.show()
The code is the same for both histograms, only the x data being plotted differs; all x values used are between 0 and 1.
So my questions are:
Is there a way to fix that and reflect my data as it is?
Is it better to give my rank values different labels that separate them more from one another, for example 1, 2, 3, 4 -- or will I then lose the precision of my data and some useful info?
What is the general approach in such situations? Would a different kind of plot help? Which one?
I don't understand your problem: the fact that your data is between 0 and 0.4 should not influence the way it is displayed. I don't see why you need to do anything else but call plt.hist().
In addition, you can pass an array to the bins argument to indicate which bin edges you want, so you could do something like this to force your bins to always be the same size:
import numpy as np
import matplotlib.pyplot as plt

# Fake data
x1 = np.random.normal(loc=0, scale=0.1, size=(1000,))
x2 = np.random.normal(loc=0.2, scale=0.1, size=(1000,))
x3 = np.random.normal(loc=0.4, scale=0.1, size=(1000,))

plt.hist([x1, x2, x3], bins=np.linspace(0, 0.4, 10))
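Note that np.linspace(0, 0.4, 10) gives 10 bin edges, i.e. 9 equal-width bins; use np.linspace(0, 0.4, 11) if you want exactly 10 bins.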
Assume I have the following observations:
1,2,3,4,5,6,7,100
Now I want to make a plot of how the observations are distributed percentage-wise:
First 12.5% of the observations is <=1 (1 out of 8)
First 50% of the observations is <=4 (4 out of 8)
First 87.5% of the observations is <=7 (7 out of 8)
First 100% of the observations is <=100 (8 out of 8)
My questions:
What is such a plot called (so, max observation on the y axis per percentile, percentile on the x axis)? A kind of histogram?
How can I create such a plot in Matplotlib/NumPy?
Thanks
I'm not sure what such a plot would be called (edit: it appears it's called a cumulative frequency plot, or something similar). However, it's easy to do.
Essentially, if you have sorted data, then the percentage of observations <= a value at index i is just (i+1)/len(data). It's easy to create an x array using arange that satisfies this. So, for example:
import numpy as np
from matplotlib import pylab

a = np.array([1, 2, 3, 4, 5, 6, 7, 100])
pylab.plot(np.arange(1, len(a)+1)/len(a), a,  # This part is required
           '-', drawstyle='steps')            # This part is stylistic
Gives:
If you'd prefer your x axis to go from 0 to 100 rather than 0 to 1, multiply the x values by 100.
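For example:

pylab.plot(100*np.arange(1, len(a)+1)/len(a), a, '-', drawstyle='steps')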
Note too that this works for your example data because it is already sorted. If you are using unsorted data, sort it first, for example with np.sort or the in-place sort method:
c = np.random.randn(100)
c.sort()
pylab.plot(np.arange(1, len(c)+1)/len(c), c, '-', drawstyle='steps')