Matplotlib graphing distribution with two colors - python

The goal here is to color values above a certain threshold one color and values below that threshold another color. The code below tries to just separate the data into two histograms, but it only looks balanced if the threshold is at 50%. I'm assuming I must play around with the discreetlevel variable.
finalutilityrange is a vector with a bunch of values (you must generate it to test the code) which I am trying to graph. The value deter determines whether a value is colored blue or red. discreetlevel is just the number of bins I want.
import random
import numpy as np
import matplotlib.pyplot as plt

discreetlevel = 10
deter = 2

piraterange = []
nonpiraterange = []
for value in finalutilityrange:
    if value >= deter:
        piraterange.append(value)
    else:
        nonpiraterange.append(value)

plt.hist(piraterange, bins=discreetlevel, density=False, cumulative=False, color='b')
plt.hist(nonpiraterange, bins=discreetlevel, density=False, cumulative=False, color='r')
plt.title("Histogram")
plt.xlabel("Utility")
plt.ylabel("Probability")
plt.show()

This solution is a bit more complex than @user2699's. I am just presenting it for completeness. You have full control over the patch objects that hist returns, so if you can ensure that the threshold you are using falls exactly on a bin edge, it is easy to change the color of selected patches. You can do this because hist accepts a sequence of bin edges as the bins parameter.
import numpy as np
from matplotlib import pyplot as plt
# Make sample data
finalutilityrange = np.random.randn(100)
discreetlevel = 10
deter = 0.2
# Manually create `discreetlevel` bins anchored to `deter`
binsAbove = round(discreetlevel * np.count_nonzero(finalutilityrange > deter) / finalutilityrange.size)
binsBelow = discreetlevel - binsAbove
binwidth = max((finalutilityrange.max() - deter) / binsAbove,
               (deter - finalutilityrange.min()) / binsBelow)
bins = np.concatenate([
    np.arange(deter - binsBelow * binwidth, deter, binwidth),
    np.arange(deter, deter + (binsAbove + 0.5) * binwidth, binwidth),
])
# Use the bins to make a single histogram
h, bins, patches = plt.hist(finalutilityrange, bins, color='b')
# Change the appropriate patches to red
plt.setp([p for p, b in zip(patches, bins) if b >= deter], color='r')
The result is a homogeneous histogram with bins of different colors.
The bins may be a tad wider than if you did not anchor to deter. Either the first or last bin will generally go a little past the edge of the data.

This answer doesn't address your code since it isn't self-contained, but for what you're trying to do the default histogram should work:
import numpy as np
import matplotlib.pyplot as plt

x = np.random.randn(100)
idx = x < 0.2  # threshold to separate values
plt.hist([x[idx], x[~idx]], color=['b', 'r'])
plt.show()
Explanation:
The first line after the imports just generates some random data to test.
The second creates an index for where the data is below some threshold; this can be negated with ~ to find where it's above the threshold.
The last line plots the histogram. The command takes a list of separate groups to plot, which doesn't make a big difference here, but it will if normed=True (see the sketch below).
There's more the hist plot can do, so look over the documentation before you accidentally reimplement it yourself.
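For reference, here is a minimal sketch of the same grouped call with density=True, the modern spelling of normed=True. Note also that passing both groups in a single call makes them share the same bin edges, which two separate hist calls would not:
import numpy as np
import matplotlib.pyplot as plt

x = np.random.randn(100)
idx = x < 0.2  # threshold to separate values
# One call with a list of groups: shared bin edges, one colour per group.
plt.hist([x[idx], x[~idx]], color=['b', 'r'], density=True)
plt.show()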

Just as above, do:
import numpy as np
import matplotlib.pyplot as plt

x = np.random.randn(100)
threshold_x = 0.2  # threshold to separate values
x_lower, x_upper = (
    [v for v in x if v < threshold_x],
    [v for v in x if v >= threshold_x],
)
plt.hist([x_lower, x_upper], color=['b', 'r'])
plt.show()

Colormap with maximum distinguishable colours

Using matplotlib in Python I'm plotting anywhere between 20 and 50 lines. With matplotlib's sliding colour scales these become indistinguishable after a certain number of lines are plotted (well before 20).
While I've seen a few examples of code in Matlab and C# to create colour maps of an arbitrary number of colours which are maximally distinguishable from one another, I can't find anything for Python.
Can anyone point me in the direction of something in Python that will do this?
Cheers
I loved the idea of the palette created by @xuancong84 and modified his code a bit to make it not depend on the alpha channel. I drop it here for others to use, thank you @xuancong84!
import math
import numpy as np
from matplotlib.colors import ListedColormap
from matplotlib.cm import hsv

def generate_colormap(number_of_distinct_colors: int = 80):
    if number_of_distinct_colors == 0:
        number_of_distinct_colors = 80

    number_of_shades = 7
    number_of_distinct_colors_with_multiply_of_shades = int(math.ceil(number_of_distinct_colors / number_of_shades) * number_of_shades)

    # Create an array with uniformly drawn floats taken from <0, 1) partition
    linearly_distributed_nums = np.arange(number_of_distinct_colors_with_multiply_of_shades) / number_of_distinct_colors_with_multiply_of_shades

    # We are going to reorganise monotonically growing numbers in such way that there will be single array with saw-like pattern
    # but each saw tooth is slightly higher than the one before
    # First divide linearly_distributed_nums into number_of_shades sub-arrays containing linearly distributed numbers
    arr_by_shade_rows = linearly_distributed_nums.reshape(number_of_shades, number_of_distinct_colors_with_multiply_of_shades // number_of_shades)

    # Transpose the above matrix (columns become rows) - as a result each row contains saw tooth with values slightly higher than row above
    arr_by_shade_columns = arr_by_shade_rows.T

    # Keep number of saw teeth for later
    number_of_partitions = arr_by_shade_columns.shape[0]

    # Flatten the above matrix - join each row into single array
    nums_distributed_like_rising_saw = arr_by_shade_columns.reshape(-1)

    # HSV colour map is cyclic (https://matplotlib.org/tutorials/colors/colormaps.html#cyclic), we'll use this property
    initial_cm = hsv(nums_distributed_like_rising_saw)

    lower_partitions_half = number_of_partitions // 2
    upper_partitions_half = number_of_partitions - lower_partitions_half

    # Modify lower half in such way that colours towards beginning of partition are darker
    # First colours are affected more, colours closer to the middle are affected less
    lower_half = lower_partitions_half * number_of_shades
    for i in range(3):
        initial_cm[0:lower_half, i] *= np.arange(0.2, 1, 0.8 / lower_half)

    # Modify second half in such way that colours towards end of partition are less intense and brighter
    # Colours closer to the middle are affected less, colours closer to the end are affected more
    for i in range(3):
        for j in range(upper_partitions_half):
            modifier = np.ones(number_of_shades) - initial_cm[lower_half + j * number_of_shades: lower_half + (j + 1) * number_of_shades, i]
            modifier = j * modifier / upper_partitions_half
            initial_cm[lower_half + j * number_of_shades: lower_half + (j + 1) * number_of_shades, i] += modifier

    return ListedColormap(initial_cm)
These are the colours I get:
from matplotlib import pyplot as plt
import numpy as np
N = 16
M = 7
H = np.arange(N*M).reshape([N,M])
fig = plt.figure(figsize=(10, 10))
ax = plt.pcolor(H, cmap=generate_colormap(N*M))
plt.show()
Recently, I also encountered the same problem. So I created the following simple Python code to generate visually distinguishable colors for matplotlib in Jupyter notebooks. It is not quite maximally perceptually distinguishable, but it works better than most of the built-in colormaps in matplotlib.
The algorithm splits the HSV scale into 2 chunks, the 1st chunk with increasing RGB value, the 2nd chunk with decreasing alpha so that the colors blend into the white background.
Take note that if you are using any toolkit other than Jupyter notebook, you must make sure the background is white; otherwise, the alpha blending will be different and the resulting colors will also differ.
Furthermore, the distinctiveness of colors depends heavily on your computer screen, projector, etc. A palette distinguishable on one screen is not necessarily distinguishable on another. You must physically test it out if you want to use it for a presentation.
import math
import numpy as np
import matplotlib.cm
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

def generate_colormap(N):
    arr = np.arange(N) / N
    N_up = int(math.ceil(N / 7) * 7)
    arr.resize(N_up)
    arr = arr.reshape(7, N_up // 7).T.reshape(-1)
    ret = matplotlib.cm.hsv(arr)
    n = ret[:, 3].size
    a = n // 2
    b = n - a
    for i in range(3):
        ret[0:n // 2, i] *= np.arange(0.2, 1, 0.8 / a)
    ret[n // 2:, 3] *= np.arange(1, 0.1, -0.9 / b)
    # print(ret)
    return ret
N = 16
H = np.arange(N*N).reshape([N,N])
fig = plt.figure(figsize=(10, 10))
ax = plt.pcolor(H, cmap=ListedColormap(generate_colormap(N*N)))
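If you need fully opaque colours (for example on a non-white background), the idea behind the modified palette above can be sketched by pre-blending each RGBA colour against white instead of relying on the alpha channel. This is only an illustration; blend_with_white is a hypothetical helper, not part of either answer:
import numpy as np
from matplotlib.cm import hsv
from matplotlib.colors import ListedColormap

def blend_with_white(rgba):
    # rgba is an (N, 4) array as returned by calling a colormap
    rgb, alpha = rgba[:, :3], rgba[:, 3:4]
    blended = alpha * rgb + (1.0 - alpha)  # white background contributes (1, 1, 1)
    return np.hstack([blended, np.ones_like(alpha)])  # force alpha back to 1

opaque_cmap = ListedColormap(blend_with_white(hsv(np.linspace(0, 1, 32))))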

Unequal width binned histogram in python

I have an array with probability values stored in it. Some values are 0. I need to plot a histogram such that there is an equal number of elements in each bin. I tried using matplotlib's hist function, but that only lets me decide the number of bins. How do I go about plotting this? (A normal plot and hist work, but they're not what is needed.)
I have 10000 entries. Only 200 have values greater than 0, and those lie between 0.0005 and 0.2. The distribution isn't even: only one element has the value 0.2, whereas roughly 2000 have the value 0.0005. So plotting it was an issue, as the bins had to be of unequal width with equal numbers of elements.
The task does not make much sense to me, but the following code does what I understood as the thing to do.
I also think the last lines of the code are what you really wanted to do: use different bin widths to improve the visualization (without targeting an equal number of samples within each bin). For that I used astroML's hist with bins='blocks' (astropy supports this too).
Code
# Python 3 -> beware the // operator!
import numpy as np
import matplotlib.pyplot as plt
from astroML import plotting as amlp
N_VALUES = 1000
N_BINS = 100
# Create fake data
prob_array = np.random.randn(N_VALUES)
prob_array /= np.max(np.abs(prob_array),axis=0) # scale a bit
# Sort array
prob_array = np.sort(prob_array)
# Calculate bin-borders,
bin_borders = [np.amin(prob_array)] + [prob_array[(N_VALUES // N_BINS) * i] for i in range(1, N_BINS)] + [np.amax(prob_array)]
print('SAMPLES: ', prob_array)
print('BIN-BORDERS: ', bin_borders)
# Plot hist
counts, x, y = plt.hist(prob_array, bins=bin_borders)
plt.xlim(bin_borders[0], bin_borders[-1] + 1e-2)
print('COUNTS: ', counts)
plt.show()
# And this is, what i think, what you really want
fig, (ax1, ax2) = plt.subplots(2)
left_blob = np.random.randn(N_VALUES // 10) + 3
right_blob = np.random.randn(N_VALUES) + 110
both = np.hstack((left_blob, right_blob)) # data is hard to visualize with equal bin-widths
ax1.hist(both)
amlp.hist(both, bins='blocks', ax=ax2)
plt.show()
Output
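For completeness, np.quantile can compute the equal-count bin edges directly; here is a minimal sketch of that (my own addition, not part of the code above):
import numpy as np
import matplotlib.pyplot as plt

prob_array = np.random.randn(1000)
# 101 quantiles -> 100 bins, each holding (roughly) the same number of samples
edges = np.quantile(prob_array, np.linspace(0, 1, 101))
plt.hist(prob_array, bins=edges)
plt.show()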

Python/Matplotlib: Randomly select "sample" scatter points for different marker

Pretty much exactly what the question states, but a little context:
I'm creating a program to plot a large number of points (~10,000, but it will be more later on). This is being done using matplotlib's plt.scatter. This command is part of a loop that saves the figure, so I can later animate it.
What I want to be able to do is randomly select a small portion of these particles (say, maybe 100?) and give them a different marker than the rest, even though they're part of the same data set. This is so I can use them as placeholders to see the motion of individual particles, as well as the bulk material.
Is there a way to use a different marker for a small subset of the same data?
For reference, the particles are uniformly distributed just using the numpy random sampler, but my code for that is:
for i in range(N):  # N number of particles
    particle_position[i] = np.random.uniform(0, xmax)  # Initialize in spatial domain
    particle_velocity[i] = np.random.normal(0, 5)  # Initialize in velocity space

for i in range(maxtime):
    plt.scatter(particle_position, particle_velocity, s=1, c=norm_xvel, cmap=br_disc, lw=0)
The position and velocity change on each iteration of the main loop (there's quite a bit of code), but these are the main initialization and plotting routines.
I had an idea that perhaps I could randomly select a bunch of i values from range(N), and use an ax.scatter() command to plot them on the same axes?
Here is a possible solution to have a subset of your points identified with a different marker:
import matplotlib.pyplot as plt
import numpy as np
SIZE = 100
SAMPLE_SIZE = 10
def select_subset(seq, size):
    """selects a subset of the data using ...
    """
    return seq[:size]

points_x = np.random.uniform(-1, 1, size=SIZE)
points_y = np.random.uniform(-1, 1, size=SIZE)

plt.scatter(points_x, points_y, marker=".", color="blue")
plt.scatter(select_subset(points_x, SAMPLE_SIZE),
            select_subset(points_y, SAMPLE_SIZE),
            marker="o", color="red")
plt.show()
It uses plt.scatter twice: once on the full data set, once on the sample points.
You will have to decide how you want to select the sample of points - that choice is isolated in the select_subset function; a random variant is sketched below.
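One way to make the selection random (my own assumption about what you might want, not part of the original answer) is np.random.choice without replacement:
import numpy as np

def select_subset(seq, size):
    """Select a random subset of a numpy array (sketch using np.random.choice)."""
    idx = np.random.choice(len(seq), size=size, replace=False)
    return seq[idx]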
You could also extract the sample points from the data set to prevent marking them twice, but numpy is rather inefficient at deleting or resizing.
Maybe a better method is to use a mask? A mask has the advantage of leaving your original data intact and in order.
Here is a way to proceed with masks:
import matplotlib.pyplot as plt
import numpy as np
import random
SIZE = 100
SAMPLE_SIZE = 10
def make_mask(data_size, sample_size):
    mask = np.array([True] * sample_size + [False] * (data_size - sample_size))
    np.random.shuffle(mask)
    return mask
points_x = np.random.uniform(-1, 1, size=SIZE)
points_y = np.random.uniform(-1, 1, size=SIZE)
mask = make_mask(SIZE, SAMPLE_SIZE)
not_mask = np.invert(mask)
plt.scatter(points_x[not_mask], points_y[not_mask], marker=".", color="blue")
plt.scatter(points_x[mask], points_y[mask], marker="o", color="red")
plt.show()
As you see, scatter is called once on a subset of the data points (the ones not selected in the sample), and a second time on the sampled subset, and draws each subset with its own marker. It is efficient & leaves the original data intact.
The code below does what you want. I have selected a random set v_sub_index of N_sub indices in the correct range (0 to N) and drawn those samples (with the _sub suffix) from the larger arrays particle_position and particle_velocity. Please note that you don't have to loop to generate random samples; numpy has great functionality for that without for loops.
import numpy as np
import matplotlib.pyplot as pl
N = 100
xmax = 1.
v_sigma = 2.5 / 2. # 95% of the samples contained within 0, 5
v_mean = 2.5 # mean at 2.5
N_sub = 10
v_sub_index = np.random.randint(0, N, N_sub)
particle_position = np.random.rand(N) * xmax
particle_velocity = np.random.randn(N) * v_sigma + v_mean
particle_position_sub = np.array(particle_position[v_sub_index])
particle_velocity_sub = np.array(particle_velocity[v_sub_index])
particle_position_nosub = np.delete(particle_position, v_sub_index)
particle_velocity_nosub = np.delete(particle_velocity, v_sub_index)
pl.scatter(particle_position_nosub, particle_velocity_nosub, color='b', marker='o')
pl.scatter(particle_position_sub , particle_velocity_sub , color='r', marker='^')
pl.show()

Planck's Formula for Blackbody spectrum

I am trying to write a simple python code for a plot of intensity vs wavelength for a given temperature, T=200K.
So far I have this...
import scipy as sp
import math
import matplotlib.pyplot as plt
import numpy as np
pi = np.pi
h = 6.626e-34
c = 3.0e+8
k = 1.38e-23
def planck(wav, T):
    a = 2.0*h*pi*c**2
    b = h*c/(wav*k*T)
    intensity = a / ((wav**5) * (math.e**b - 1.0))
    return intensity
I don't know how to define the wavelength (wav) array and thus produce the plot of Planck's formula. Any help would be appreciated.
Here's a basic plot. To plot using plt.plot(x, y, fmt) you need two arrays x and y of the same size, where x is the x coordinate of each point to plot and y is the y coordinate, and fmt is a string describing how to plot the numbers.
So all you need to do is create an evenly spaced array of wavelengths (an np.array which I named wavelengths). This can be done with np.arange(start, end, spacing), which will create an array from start to end (not inclusive), spaced spacing apart.
Then compute the intensity using your function at each of those points in the array (which will be stored in another np.array), and then call plt.plot to plot them. Note that numpy lets you do mathematical operations on whole arrays in vectorized form, which is computationally efficient.
import matplotlib.pyplot as plt
import numpy as np
h = 6.626e-34
c = 3.0e+8
k = 1.38e-23
def planck(wav, T):
    a = 2.0*h*c**2
    b = h*c/(wav*k*T)
    intensity = a / ((wav**5) * (np.exp(b) - 1.0))
    return intensity
# generate x-axis in increments from 1nm to 3 micrometer in 1 nm increments
# starting at 1 nm to avoid wav = 0, which would result in division by zero.
wavelengths = np.arange(1e-9, 3e-6, 1e-9)
# intensity at 4000K, 5000K, 6000K, 7000K
intensity4000 = planck(wavelengths, 4000.)
intensity5000 = planck(wavelengths, 5000.)
intensity6000 = planck(wavelengths, 6000.)
intensity7000 = planck(wavelengths, 7000.)
plt.plot(wavelengths*1e9, intensity4000, 'r-')
# plot intensity4000 versus wavelength in nm as a red line
plt.plot(wavelengths*1e9, intensity5000, 'g-') # 5000K green line
plt.plot(wavelengths*1e9, intensity6000, 'b-') # 6000K blue line
plt.plot(wavelengths*1e9, intensity7000, 'k-') # 7000K black line
# show the plot
plt.show()
And you see:
You probably will want to clean up the axes labels, add a legend, plot the intensity at multiple temperatures on the same plot, among other things. Consult the relevant matplotlib documentation.
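As a rough sketch of that clean-up (reusing planck and wavelengths from the block above; the labels and units are my own choices, not from the original answer):
for T, fmt in [(4000., 'r-'), (5000., 'g-'), (6000., 'b-'), (7000., 'k-')]:
    plt.plot(wavelengths * 1e9, planck(wavelengths, T), fmt, label='%d K' % T)
plt.xlabel('Wavelength (nm)')
plt.ylabel('Spectral radiance (W sr^-1 m^-3)')
plt.legend()
plt.show()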
You may also want to use the RADIS library, which allows you to plot the Planck function against wavelength, or against frequency / wavenumber, if needed!
from radis import sPlanck
sPlanck(wavelength_min=135, wavelength_max=3000, T=4000).plot()
sPlanck(wavelength_min=135, wavelength_max=3000, T=5000).plot(nfig='same')
sPlanck(wavelength_min=135, wavelength_max=3000, T=6000).plot(nfig='same')
sPlanck(wavelength_min=135, wavelength_max=3000, T=7000).plot(nfig='same')
Just want to point out that there seems to be an equivalent of what the OP wants to do in astropy:
https://docs.astropy.org/en/stable/api/astropy.modeling.physical_models.BlackBody.html
Unfortunately, it is not yet clear to me how to switch between the wavelength-based and frequency-based expressions.
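For reference, a minimal sketch of evaluating it (assuming astropy is installed; BlackBody accepts either wavelength or frequency Quantities, and by default returns a spectral radiance in per-frequency units):
import numpy as np
import astropy.units as u
from astropy.modeling.physical_models import BlackBody

bb = BlackBody(temperature=5000 * u.K)
wavelengths = np.linspace(100, 3000, 500) * u.nm
radiance = bb(wavelengths)  # evaluated at the given wavelengths
print(radiance.unit)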

Two colour scatter plot in R or in python

I have a dataset of three columns and n rows. Column 1 contains the name, column 2 value1, and column 3 value2 (rank2).
I want to plot a scatter plot with the outlier values displaying names.
The R commands I am using in are:
tiff('scatterplot.tiff')
data<-read.table("scatterplot_data", header=T)
attach(data)
reg1<-lm(A~B)
plot(A,B,col="red")
abline(reg1)
outliers<-data[which(2^(data[,2]-data[,3]) >= 4 | 2^(data[,2]-data[,3]) <=0.25),]
text(outliers[,2], outliers[,3],labels=outliers[,1],cex=0.50)
dev.off()
and I get a figure like this:
What I want is for the labels on the lower half to be one colour and the labels in the upper half to be another colour, say green and red respectively.
Any suggestions, or adjustment in the commands?
You already have a logical test that works to your satisfaction. Just use it in the color spec to text:
text(outliers[,2], outliers[,3], labels=outliers[,1], cex=0.50,
     col=c("blue", "green")[
         which(2^(data[,2]-data[,3]) >= 4, 2^(data[,2]-data[,3]) <= 0.25)])
It's untested of course because you offered no test case, but my reasoning is that the which() function should return 1 for the differences >= 4, 2 for the ones <= 0.25, and integer(0) for all the others, and that this should give the proper alignment of color choices with the 'outliers' vector.
Using python, with matplotlib (pylab) for plotting and scipy/numpy to fit the data. The trick with numpy is to create an index or mask to filter out the results that you want.
EDIT: Want to selectively color the top and bottom outliers? It's a simple combination of both masks that we created:
import scipy as sci
import numpy as np
import pylab as plt
# Create some data
N = 1000
X = np.random.normal(5,1,size=N)
Y = X + np.random.normal(0,5.5,size=N)/np.random.normal(5,.1)
NAMES = ["foo"]*1000 # Customize names here
# Fit a degree-1 polynomial (np.polyfit; modern SciPy no longer re-exports polyfit)
a, b = np.polyfit(X, Y, 1)
# Find all points above the line
idx = (X*a + b) < Y
# Scatter according to that index
plt.scatter(X[idx],Y[idx], color='r')
plt.scatter(X[~idx],Y[~idx], color='g')
# Find top 10 outliers
err = ((X*a+b) - Y) ** 2
idx_L = np.argsort(err)[-10:]
for i in idx_L:
    plt.text(X[i], Y[i], NAMES[i])
# Color the outliers purple or black
top = idx_L[idx[idx_L]]
bot = idx_L[~idx[idx_L]]
plt.scatter(X[top],Y[top], color='purple')
plt.scatter(X[bot],Y[bot], color='black')
XF = np.linspace(0,10,1000)
plt.plot(XF, XF*a + b, 'k--')
plt.axis('tight')
plt.show()
