I am trying to create a 3D bar plot using matplotlib in Python and apply a colormap that is tied to some data (a 4th dimension) that is not explicitly plotted. I think what makes this even more complicated is that I want this 4th dimension to be a range of values rather than a single value.
So far I have managed to create the 3D bar plot with a colormap tied to the z-dimension, thanks primarily to this post: how to plot gradient fill on the 3d bars in matplotlib. The code can be found below.
import numpy as np
import glob,os
from matplotlib import pyplot as plt
import matplotlib.colors as cl
import matplotlib.cm as cm
from mpl_toolkits.mplot3d import Axes3D
os.chdir('./')
# axis details for the bar plot
x = ['1', '2', '3', '4', '5'] # labels
x_tick_locks = np.arange(0.1, len(x) + 0.1, 1)
x_axis = np.arange(len(x))
y = ['A', 'B']
y_tick_locks = np.arange(-0.1, len(y) - 0.1, 1)
y_axis = np.arange(len(y))
x_axis, y_axis = np.meshgrid(x_axis, y_axis)
x_axis = x_axis.flatten()
y_axis = y_axis.flatten()
x_data_final = np.ones(len(x) * len(y)) * 0.5
y_data_final = np.ones(len(x) * len(y)) * 0.5
z_axis = np.zeros(len(x)*len(y))
z_data_final = [[30, 10, 15, 20, 25], [10, 15, 15, 28, 40]]
values_min = [[5, 1, 6, 8, 3], [2, 1, 3, 9, 4]]
values_max = [[20, 45, 11, 60, 30], [11, 28, 6, 30, 40]]
cmap_max = np.max(values_max)  # global maximum across the nested lists (60)
cmap_min = np.min(values_min)  # global minimum across the nested lists (1)
############################### FOR 3D SCALED GRADIENT BARS ###############################
def make_bar(ax, x0=0, y0=0, width = 0.5, height=1 , cmap="plasma",
norm=cl.Normalize(vmin=0, vmax=1), **kwargs ):
# Make data
u = np.linspace(0, 2*np.pi, 4+1)+np.pi/4.
v_ = np.linspace(np.pi/4., 3./4*np.pi, 100)
v = np.linspace(0, np.pi, len(v_)+2 )
v[0] = 0 ; v[-1] = np.pi; v[1:-1] = v_
#print(u)
x = np.outer(np.cos(u), np.sin(v))
y = np.outer(np.sin(u), np.sin(v))
z = np.outer(np.ones(np.size(u)), np.cos(v))
xthr = np.sin(np.pi/4.)**2 ; zthr = np.sin(np.pi/4.)
x[x > xthr] = xthr; x[x < -xthr] = -xthr
y[y > xthr] = xthr; y[y < -xthr] = -xthr
z[z > zthr] = zthr ; z[z < -zthr] = -zthr
x *= 1./xthr*width; y *= 1./xthr*width
z += zthr
z *= height/(2.*zthr)
#translate
x += x0; y += y0
#plot
ax.plot_surface(x, y, z, cmap=cmap, norm=norm, **kwargs)
def make_bars(ax, x, y, height, width=1):
widths = np.array(width)*np.ones_like(x)
x = np.array(x).flatten()
y = np.array(y).flatten()
h = np.array(height).flatten()
w = np.array(widths).flatten()
norm = cl.Normalize(vmin=0, vmax=h.max())
for i in range(len(x.flatten())):
make_bar(ax, x0=x[i], y0=y[i], width = w[i] , height=h[i], norm=norm)
############################### FOR 3D SCALED GRADIENT BARS ###############################
# Creating graph surface
fig = plt.figure(figsize=(9,6))
ax = fig.add_subplot(111, projection= Axes3D.name)
ax.azim = 50
ax.dist = 10
ax.elev = 30
ax.invert_xaxis()
ax.set_box_aspect((1, 0.5, 1))
ax.zaxis.labelpad=7
ax.text(0.9, 2.2, 0, 'Group', 'x')
ax.text(-2, 0.7, 0, 'Class', 'y')
ax.set_xticks(x_tick_locks)
ax.set_xticklabels(x, ha='left')
ax.tick_params(axis='x', which='major', pad=-2)
ax.set_yticks(y_tick_locks)
ax.set_yticklabels(y, ha='right', rotation=30)
ax.tick_params(axis='y', which='major', pad=-5)
ax.set_zlabel('Number')
make_bars(ax, x_axis, y_axis, z_data_final, width=0.2, )
fig.colorbar(plt.cm.ScalarMappable(cmap = 'plasma'), ax = ax, shrink=0.8)
#plt.tight_layout() # doesn't seem to work properly for 3d plots?
plt.show()
As I mentioned, I don't want the colormap to be tied to the z-axis but rather to a 4th dimension, which is a range. In other words, I want the colours of the colormap to range from cmap_min to cmap_max (so the minimum is 1 and the maximum is 60); then, for the bar with a z_data_final entry of 30, for example, its colours should correspond to the range 5 to 20.
Some other posts seem to provide a solution for a single 4th-dimensional value, e.g. (python) plot 3d surface with colormap as 4th dimension, function of x,y,z or How to make a 4d plot using Python with matplotlib; however, I wasn't able to find anything specific to bar plots with a range of values as the 4th-dimensional data.
I would appreciate any guidance in this matter, thanks in advance.
This is the 3D bar plot with colormap tied to the z-dimension
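One possible direction, sketched only (the bar_facecolors helper below is hypothetical and not part of the code above): instead of passing cmap/norm to plot_surface inside make_bar, build explicit face colours per bar by mapping the bar's height fraction onto that bar's own [min, max] range while normalising against the global cmap_min/cmap_max, so all bars share one colour scale:
def bar_facecolors(z, height, v_min, v_max, cmap_min, cmap_max):
    # z is the surface height array built inside make_bar (0 at the base, `height` at the top)
    frac = z / height                          # 0..1 up the bar
    values = v_min + frac * (v_max - v_min)    # rescale into this bar's own value range
    global_norm = cl.Normalize(vmin=cmap_min, vmax=cmap_max)
    return cm.plasma(global_norm(values))      # colour against the global 1..60 range

# inside make_bar, the plotting call would then become something like:
# ax.plot_surface(x, y, z, facecolors=bar_facecolors(z, height, v_min, v_max,
#                                                    cmap_min, cmap_max), **kwargs)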
I am using Python to batch process some data and plot it. I can fit it quite well using scipy.optimize.curve_fit, a bi-exponential function, and some sensible initial guesses. Here is a code snippet:
def biexpfunc(x, a, b, c, d, e):
y_new = []
for i in range(len(x)):
y = (a * np.exp(b*x[i])) + (c * np.exp(d*x[i])) + e
y_new.append(y)
return y_new
x = np.linspace(0, 160, 100)
y = biexpfunc(x, 50, -0.2, 50, -0.1, 10)
jitter_y = y + 0.5 *np.random.rand(len(y)) - 0.1
plt.scatter(x, jitter_y)
sigma = np.ones(len(x))
sigma[[0, -1]] = 0.01
popt, pcov = curve_fit(biexpfunc, x, jitter_y, p0 = (50, -0.2, 50, -0.1, 10),
sigma = sigma)
x_fit = np.linspace(0, x[-1])
y_fit = biexpfunc(x_fit, *popt)
plt.plot(x_fit, y_fit, 'r--')
plt.show()
I know how to interpolate this to find y for a given value of x (by putting it back into the function), but how can I find x for a given value of y? I feel like there must be a sensible method that doesn't require rearranging and defining a new function (partly because maths is not my strong suit and I don't know how to!). If the curve fits the data well, is there a way to simply read off a value? Any assistance would be greatly appreciated!
Turns out, your question has nothing to do with curve fitting but is actually about root finding. scipy.optimize has a whole arsenal of functions for this task, and choosing and configuring the right one is sometimes difficult. I might not be the best guide here, but since no one else stepped up...
Root finding tries to determine x-values for which f(x) is zero. To find an x0 where f(x0) is a certain y0-value, we simply transform the function into g(x) = f(x)-y0.
Since your function is monotonic, at most one root is to be expected for a given y-value. We also know the x-interval in which to search, so bisect seems like a reasonable strategy:
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import curve_fit, bisect
def biexpfunc(x, a, b, c, d, e):
return (a * np.exp(b*x)) + (c * np.exp(d*x)) + e
np.random.seed(123)
x = np.linspace(0, 160, 100)
y = biexpfunc(x, 50, -0.2, 50, -0.1, 10)
jitter_y = y + 0.5 *np.random.rand(len(y)) - 0.1
fig, ax = plt.subplots(figsize=(10, 8))
ax.scatter(x, jitter_y, marker="x", color="blue", label="raw data")
#your curve fit routine
sigma = np.ones(len(x))
sigma[[0, -1]] = 0.01
popt, pcov = curve_fit(biexpfunc, x, jitter_y, p0 = (50, -0.2, 50, -0.1, 10), sigma = sigma)
x_fit = np.linspace(x.min(), x.max(), 100)
y_fit = biexpfunc(x_fit, *popt)
ax.plot(x_fit, y_fit, 'r--', label="fit")
#y-value for which we want to determine the x-value(s)
y_test=55
test_popt = popt.copy()
test_popt[-1] -= y_test
#here, the bisect method tries to establish the x for which f(x)=0
x_test=bisect(biexpfunc, x.min(), x.max(), args=tuple(test_popt))
#we calculate the deviation from the expected y-value
tol_test, = np.abs(y_test - biexpfunc(np.asarray([x_test]), *popt))
#and mark the determined point in the graph
ax.axhline(y_test, ls="--", color="grey")
ax.axvline(x_test, ls="--", color="grey")
ax.plot(x_test, y_test, c="tab:orange", marker="o", markersize=15, alpha=0.5)
ax.annotate(f"X: {x_test:.2f}, Y: {y_test:.2f}\ntol: {tol_test:.4f}",
xy=(x_test, y_test), xytext=(50, 50), textcoords="offset points",
arrowprops=dict(facecolor="tab:orange", shrink=0.05),)
ax.legend(title="root finding: bisect")
plt.show()
Sample output:
Another way to determine roots for more complex functions is, surprise, root. The script is mostly identical; only the root routine is slightly different, and we can, for instance, choose the root-finding method:
import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import curve_fit, root
def biexpfunc(x, a, b, c, d, e):
return (a * np.exp(b*x)) + (c * np.exp(d*x)) + e
np.random.seed(123)
x = np.linspace(0, 160, 100)
y = biexpfunc(x, 50, -0.2, 50, -0.1, 10)
jitter_y = y + 0.5 *np.random.rand(len(y)) - 0.1
fig, ax = plt.subplots(figsize=(10, 8))
ax.scatter(x, jitter_y, marker="x", color="blue", label="raw data")
#your curve fit routine
sigma = np.ones(len(x))
sigma[[0, -1]] = 0.01
popt, pcov = curve_fit(biexpfunc, x, jitter_y, p0 = (50, -0.2, 50, -0.1, 10), sigma = sigma)
x_fit = np.linspace(x.min(), x.max(), 100)
y_fit = biexpfunc(x_fit, *popt)
ax.plot(x_fit, y_fit, 'r--', label="fit")
#y-value for which we want to determine the x-value(s)
y_test=55
test_popt = popt.copy()
test_popt[-1] -= y_test
#calculate corresponding x-value with root finding
r=root(biexpfunc, x.mean(), args=tuple(test_popt), method="lm")
x_test, = r.x
tol_test, = np.abs(y_test - biexpfunc(r.x, *popt))
#mark point in graph
ax.axhline(y_test, ls="--", color="grey")
ax.axvline(x_test, ls="--", color="grey")
ax.plot(x_test, y_test, c="tab:orange", marker="o", markersize=15, alpha=0.5)
ax.annotate(f"X: {x_test:.2f}, Y: {y_test:.2f}\ntol: {tol_test:.4f}",
xy=(x_test, y_test), xytext=(50, 50), textcoords="offset points",
arrowprops=dict(facecolor="tab:orange", shrink=0.05))
ax.legend(title="root finding: lm")
plt.show()
Sample output:
In this case the graphs look identical. This is not necessarily so for every function; just like for curve fitting, choosing the right approach can dramatically improve the outcome.
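Since the fitted curve in this example is monotonically decreasing, a quick numerical alternative (a sketch reusing x_fit, y_fit and y_test from the scripts above) is to invert the fit by interpolation; np.interp needs its sample x-coordinates in increasing order, hence the reversal:
# look up the x at which the fitted curve crosses y_test
x_test_interp = np.interp(y_test, y_fit[::-1], x_fit[::-1])
print(x_test_interp)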
I could really use a tip to help me plot a decision boundary to separate two classes of data. I created some sample data (from a Gaussian distribution) via Python NumPy. In this case, every data point is a 2D coordinate, i.e., a 1-column vector consisting of 2 rows. E.g.,
[ 1
2 ]
Let's assume I have 2 classes, class1 and class2, and I created 100 data points for class1 and 100 data points for class2 via the code below (assigned to the variables x1_samples and x2_samples).
mu_vec1 = np.array([0,0])
cov_mat1 = np.array([[2,0],[0,2]])
x1_samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 100)
mu_vec1 = mu_vec1.reshape(1,2).T # to 1-col vector
mu_vec2 = np.array([1,2])
cov_mat2 = np.array([[1,0],[0,1]])
x2_samples = np.random.multivariate_normal(mu_vec2, cov_mat2, 100)
mu_vec2 = mu_vec2.reshape(1,2).T
When I plot the data points for each class, it would look like this:
Now, I came up with an equation for a decision boundary to separate both classes and would like to add it to the plot. However, I am not really sure how I can plot this function:
def decision_boundary(x_vec, mu_vec1, mu_vec2):
g1 = (x_vec-mu_vec1).T.dot((x_vec-mu_vec1))
g2 = 2*( (x_vec-mu_vec2).T.dot((x_vec-mu_vec2)) )
return g1 - g2
I would really appreciate any help!
EDIT:
Intuitively (if I did my math right) I would expect the decision boundary to look somewhat like this red line when I plot the function...
Your question is more complicated than a simple plot: you need to draw the contour that maximizes the inter-class distance. Fortunately it's a well-studied field, particularly for SVM machine learning.
The easiest method is to install the scikit-learn package, which provides a lot of cool methods to draw boundaries: scikit-learn: Support Vector Machines
Code:
# -*- coding: utf-8 -*-
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
import scipy
from sklearn import svm
mu_vec1 = np.array([0,0])
cov_mat1 = np.array([[2,0],[0,2]])
x1_samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 100)
mu_vec1 = mu_vec1.reshape(1,2).T # to 1-col vector
mu_vec2 = np.array([1,2])
cov_mat2 = np.array([[1,0],[0,1]])
x2_samples = np.random.multivariate_normal(mu_vec2, cov_mat2, 100)
mu_vec2 = mu_vec2.reshape(1,2).T
fig = plt.figure()
plt.scatter(x1_samples[:,0],x1_samples[:,1], marker='+')
plt.scatter(x2_samples[:,0],x2_samples[:,1], c= 'green', marker='o')
X = np.concatenate((x1_samples,x2_samples), axis = 0)
Y = np.array([0]*100 + [1]*100)
C = 1.0 # SVM regularization parameter
clf = svm.SVC(kernel = 'linear', gamma=0.7, C=C )
clf.fit(X, Y)
Linear Plot
w = clf.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(-5, 5)
yy = a * xx - (clf.intercept_[0]) / w[1]
plt.plot(xx, yy, 'k-')
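For reference, this line is just the zero level of the linear decision function w.x + b = 0, solved for the second coordinate:

w[0]*xx + w[1]*yy + clf.intercept_[0] = 0   =>   yy = -(w[0]/w[1])*xx - clf.intercept_[0]/w[1]

which is exactly what a = -w[0] / w[1] and the yy line above compute.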
Non-linear Plot (RBF kernel)
C = 1.0 # SVM regularization parameter
clf = svm.SVC(kernel = 'rbf', gamma=0.7, C=C )
clf.fit(X, Y)
h = .02 # step size in the mesh
# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contour(xx, yy, Z, cmap=plt.cm.Paired)
Implementation
If you want to implement it yourself, you need to solve the SVM quadratic programming problem (see the Wikipedia article on support vector machines).
Unfortunately, for non-linear boundaries like the one you drew, it's a harder problem that relies on the kernel trick, and there isn't a clear-cut solution.
Based on the way you've written decision_boundary you'll want to use the contour function, as Joe noted above. If you just want the boundary line, you can draw a single contour at the 0 level:
f, ax = plt.subplots(figsize=(7, 7))
c1, c2 = "#3366AA", "#AA3333"
ax.scatter(*x1_samples.T, c=c1, s=40)
ax.scatter(*x2_samples.T, c=c2, marker="D", s=40)
x_vec = np.linspace(*ax.get_xlim())
ax.contour(x_vec, x_vec,
decision_boundary(x_vec, mu_vec1, mu_vec2),
levels=[0], cmap="Greys_r")
Which makes:
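A quick note on why this works, since it is easy to miss: x_vec has shape (50,) and mu_vec1 has shape (2, 1), so x_vec - mu_vec1 broadcasts to shape (2, 50), and (x_vec - mu_vec1).T.dot(x_vec - mu_vec1) comes out as the (50, 50) grid that contour expects:

assert decision_boundary(x_vec, mu_vec1, mu_vec2).shape == (50, 50)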
Those were some great suggestions, thanks a lot for your help! I ended up solving the equation analytically, and this is the solution I ended up with (I just want to post it for future reference):
# 2-category classification with random 2D-sample data
# from a multivariate normal distribution
import numpy as np
from matplotlib import pyplot as plt
def decision_boundary(x_1):
""" Calculates the x_2 value for plotting the decision boundary."""
return 4 - np.sqrt(-x_1**2 + 4*x_1 + 6 + np.log(16))
# Generating a Gaussian dataset:
# creating random vectors from the multivariate normal distribution
# given mean and covariance
mu_vec1 = np.array([0,0])
cov_mat1 = np.array([[2,0],[0,2]])
x1_samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 100)
mu_vec1 = mu_vec1.reshape(1,2).T # to 1-col vector
mu_vec2 = np.array([1,2])
cov_mat2 = np.array([[1,0],[0,1]])
x2_samples = np.random.multivariate_normal(mu_vec2, cov_mat2, 100)
mu_vec2 = mu_vec2.reshape(1,2).T # to 1-col vector
# Main scatter plot and plot annotation
f, ax = plt.subplots(figsize=(7, 7))
ax.scatter(x1_samples[:,0], x1_samples[:,1], marker='o', color='green', s=40, alpha=0.5)
ax.scatter(x2_samples[:,0], x2_samples[:,1], marker='^', color='blue', s=40, alpha=0.5)
plt.legend(['Class1 (w1)', 'Class2 (w2)'], loc='upper right')
plt.title('Densities of 2 classes with 25 bivariate random patterns each')
plt.ylabel('x2')
plt.xlabel('x1')
ftext = 'p(x|w1) ~ N(mu1=(0,0)^t, cov1=I)\np(x|w2) ~ N(mu2=(1,1)^t, cov2=I)'
plt.figtext(.15,.8, ftext, fontsize=11, ha='left')
# Adding decision boundary to plot
x_1 = np.arange(-5, 5, 0.1)
bound = decision_boundary(x_1)
plt.plot(x_1, bound, 'r--', lw=3)
x_vec = np.linspace(*ax.get_xlim())
x_1 = np.arange(0, 100, 0.05)
plt.show()
And the code can be found here
EDIT:
I also have a convenience function for plotting decision regions for classifiers that implement fit and predict methods, e.g., the classifiers in scikit-learn, which is useful if the solution cannot be found analytically. A more detailed description of how it works can be found here.
You can create your own equation for the boundary by parametrizing it in polar form around a centre (x0, y0), with the radius written as a truncated Fourier series:

r(theta) = sum over i = 0..n of [ a_i*sin(i*theta) + b_i*cos(i*theta) ]

You have to find the position x0 and y0, as well as the constants a_i and b_i for the radius equation, so you have 2*(n+1) + 2 variables. Using scipy.optimize.leastsq is straightforward for this type of problem.
The code attached below builds the residual for leastsq, penalizing the points outside the boundary. The result for your problem, obtained with:
x, y = find_boundary(x2_samples[:,0], x2_samples[:,1], n)
ax.plot(x, y, '-k', lw=2.)
x, y = find_boundary(x1_samples[:,0], x1_samples[:,1], n)
ax.plot(x, y, '--k', lw=2.)
using n=1:
using n=2:
using n=5:
using n=7:
import numpy as np
from numpy import sin, cos, pi
from scipy.optimize import leastsq
def find_boundary(x, y, n, plot_pts=1000):
def sines(theta):
ans = np.array([sin(i*theta) for i in range(n+1)])
return ans
def cosines(theta):
ans = np.array([cos(i*theta) for i in range(n+1)])
return ans
def residual(params, x, y):
x0 = params[0]
y0 = params[1]
c = params[2:]
r_pts = ((x-x0)**2 + (y-y0)**2)**0.5
thetas = np.arctan2((y-y0), (x-x0))
m = np.vstack((sines(thetas), cosines(thetas))).T
r_bound = m.dot(c)
delta = r_pts - r_bound
delta[delta>0] *= 10
return delta
# initial guess for x0 and y0
x0 = x.mean()
y0 = y.mean()
params = np.zeros(2 + 2*(n+1))
params[0] = x0
params[1] = y0
params[2:] += 1000
popt, pcov = leastsq(residual, x0=params, args=(x, y),
ftol=1.e-12, xtol=1.e-12)
thetas = np.linspace(0, 2*pi, plot_pts)
m = np.vstack((sines(thetas), cosines(thetas))).T
c = np.array(popt[2:])
r_bound = m.dot(c)
x_bound = popt[0] + r_bound*cos(thetas)
y_bound = popt[1] + r_bound*sin(thetas)
return x_bound, y_bound
I like the mglearn library to draw decision boundaries. Here is one example from the book "Introduction to Machine Learning with Python" by A. Mueller:
import matplotlib.pyplot as plt
import mglearn
from sklearn.neighbors import KNeighborsClassifier

# X, y: a small 2D toy dataset, e.g. X, y = mglearn.datasets.make_forge() as in the book
fig, axes = plt.subplots(1, 3, figsize=(10, 3))
for n_neighbors, ax in zip([1, 3, 9], axes):
clf = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X, y)
mglearn.plots.plot_2d_separator(clf, X, fill=True, eps=0.5, ax=ax, alpha=.4)
mglearn.discrete_scatter(X[:, 0], X[:, 1], y, ax=ax)
ax.set_title("{} neighbor(s)".format(n_neighbors))
ax.set_xlabel("feature 0")
ax.set_ylabel("feature 1")
axes[0].legend(loc=3)
If you want to use scikit-learn, you can write your code like this:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
# read data
data = pd.read_csv('ex2data1.txt', header=None)
X = data[[0,1]].values
y = data[2]
# use LogisticRegression
log_reg = LogisticRegression()
log_reg.fit(X, y)
# Coefficient of the features in the decision function. (from theta 1 to theta n)
parameters = log_reg.coef_[0]
# Intercept (a.k.a. bias) added to the decision function. (theta 0)
parameter0 = log_reg.intercept_
# Plotting the decision boundary
fig = plt.figure(figsize=(10,7))
x_values = [np.min(X[:, 0] - 5), np.max(X[:, 0] + 5)]
# calculate the y values of the decision boundary at the two x endpoints
y_values = np.dot((-1./parameters[1]), (np.dot(parameters[0],x_values) + parameter0))
colors=['red' if l==0 else 'blue' for l in y]
plt.scatter(X[:, 0], X[:, 1], label='Logistics regression', color=colors)
plt.plot(x_values, y_values, label='Decision Boundary')
plt.show()
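For reference (assuming the usual convention theta_0 + theta_1*x1 + theta_2*x2 for the decision function, matching the comments above), the plotted boundary is the set of points where that expression is zero:

theta_0 + theta_1*x1 + theta_2*x2 = 0   =>   x2 = -(theta_1*x1 + theta_0) / theta_2

which is what the y_values line computes, with parameter0 playing the role of theta_0 and parameters holding theta_1 and theta_2.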
see: Building-a-Logistic-Regression-with-Scikit-learn
Just solved a very similar problem with a different approach (root finding) and wanted to post this alternative as an answer here for future reference:
def discr_func(x, y, cov_mat, mu_vec):
"""
Calculates the value of the discriminant function for a dx1 dimensional
sample given covariance matrix and mean vector.
Keyword arguments:
x_vec: A dx1 dimensional numpy array representing the sample.
cov_mat: numpy array of the covariance matrix.
mu_vec: dx1 dimensional numpy array of the sample mean.
Returns a float value as result of the discriminant function.
"""
x_vec = np.array([[x],[y]])
W_i = (-1/2) * np.linalg.inv(cov_mat)
assert(W_i.shape[0] > 1 and W_i.shape[1] > 1), 'W_i must be a matrix'
w_i = np.linalg.inv(cov_mat).dot(mu_vec)
assert(w_i.shape[0] > 1 and w_i.shape[1] == 1), 'w_i must be a column vector'
omega_i_p1 = (((-1/2) * (mu_vec).T).dot(np.linalg.inv(cov_mat))).dot(mu_vec)
omega_i_p2 = (-1/2) * np.log(np.linalg.det(cov_mat))
omega_i = omega_i_p1 - omega_i_p2
assert(omega_i.shape == (1, 1)), 'omega_i must be a scalar'
g = ((x_vec.T).dot(W_i)).dot(x_vec) + (w_i.T).dot(x_vec) + omega_i
return float(g)
#g1 = discr_func(x, y, cov_mat=cov_mat1, mu_vec=mu_vec_1)
#g2 = discr_func(x, y, cov_mat=cov_mat2, mu_vec=mu_vec_2)
x_est50 = list(np.arange(-6, 6, 0.1))
y_est50 = []
for i in x_est50:
y_est50.append(scipy.optimize.bisect(lambda y: discr_func(i, y, cov_mat=cov_est_1, mu_vec=mu_est_1) - \
discr_func(i, y, cov_mat=cov_est_2, mu_vec=mu_est_2), -10,10))
y_est50 = [float(i) for i in y_est50]
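A sketch of how those boundary points could then be overlaid on the existing scatter plot (line style borrowed from the analytical answer above):

plt.plot(x_est50, y_est50, 'r--', lw=3)
plt.show()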
Here is the result:
(blue: the quadratic case, red: the linear case with equal variances)
I know this question has been answered in a very thorough way analytically. I just wanted to share a possible 'hack' for the problem. It is unwieldy but gets the job done.
Start by building a mesh grid of the 2D area and then, based on the classifier, build a class map of the entire space. Subsequently, detect changes in the decision made row-wise, store the edge points in a list, and scatter-plot the points.
def disc(x): # returns the class of the point based on location x = [x,y]
temp = 0.5 + 0.5*np.sign(disc0(x)-disc1(x))
# disc0() and disc1() are the discriminant functions of the respective classes
return 0*temp + 1*(1-temp)
num = 200
a = np.linspace(-4,4,num)
b = np.linspace(-6,6,num)
X,Y = np.meshgrid(a,b)
def decColor(x,y):
temp = np.zeros((num,num))
print(x.shape, np.size(x, axis=0))
for l in range(num):
for m in range(num):
p = np.array([x[l,m],y[l,m]])
#print p
temp[l,m] = disc(p)
return temp
boundColorMap = decColor(X,Y)
group = 0
boundary = []
for x in range(num):
group = boundColorMap[x,0]
for y in range(num):
if boundColorMap[x,y]!=group:
boundary.append([X[x,y],Y[x,y]])
group = boundColorMap[x,y]
boundary = np.array(boundary)
Sample Decision Boundary for a simple bivariate gaussian classifier
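A lighter-weight variant of the same idea (a sketch, assuming the X, Y and boundColorMap arrays built above): let matplotlib trace the edge of the 0/1 class map instead of scanning it row by row:

plt.contour(X, Y, boundColorMap, levels=[0.5], colors='k')
plt.show()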
Given two bi-variate normal distributions, you can use Gaussian Discriminant Analysis (GDA) to come up with a decision boundary as the difference between the log of the 2 pdf's.
Here's a way to do it using scipy multivariate_normal (the code is not optimized):
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
from numpy.linalg import norm
from numpy.linalg import inv
from scipy.spatial.distance import mahalanobis
def normal_scatter(mean, cov, p):
size = 100
sigma_x = cov[0,0]
sigma_y = cov[1,1]
mu_x = mean[0]
mu_y = mean[1]
x_ps, y_ps = np.random.multivariate_normal(mean, cov, size).T
x,y = np.mgrid[mu_x-3*sigma_x:mu_x+3*sigma_x:1/size, mu_y-3*sigma_y:mu_y+3*sigma_y:1/size]
grid = np.empty(x.shape + (2,))
grid[:, :, 0] = x; grid[:, :, 1] = y
z = p*multivariate_normal.pdf(grid, mean, cov)
return x_ps, y_ps, x,y,z
# Dist 1
mu_1 = np.array([1, 1])
cov_1 = .5*np.array([[1, 0], [0, 1]])
p_1 = .5
x_ps, y_ps, x,y,z = normal_scatter(mu_1, cov_1, p_1)
plt.plot(x_ps,y_ps,'x')
plt.contour(x, y, z, cmap='Blues', levels=3)
# Dist 2
mu_2 = np.array([2, 1])
#cov_2 = np.array([[2, -1], [-1, 1]])
cov_2 = cov_1
p_2 = .5
x_ps, y_ps, x,y,z = normal_scatter(mu_2, cov_2, p_2)
plt.plot(x_ps,y_ps,'.')
plt.contour(x, y, z, cmap='Oranges', levels=3)
# Decision Boundary
X = np.empty(x.shape + (2,))
X[:, :, 0] = x; X[:, :, 1] = y
g = np.log(p_1*multivariate_normal.pdf(X, mu_1, cov_1)) - np.log(p_2*multivariate_normal.pdf(X, mu_2, cov_2))
plt.contour(x, y, g, [0])
plt.grid()
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot([mu_1[0], mu_2[0]], [mu_1[1], mu_2[1]], 'k')
plt.show()
If cov_1 != cov_2, you get a non-linear boundary. The decision boundary is given by g above.
Then to plot the decision hyper-plane (line in 2D), you need to evaluate g for a 2D mesh, then get the contour which will give a separating line.
You can also assume equal covariance matrices for both distributions, which gives a linear decision boundary. In this case, you can replace the calculation of g in the code above with the following:
W = inv(cov_1).dot(mu_1-mu_2)
x_0 = 1/2*(mu_1+mu_2) - cov_1.dot(np.log(p_1/p_2)).dot((mu_1-mu_2)/mahalanobis(mu_1, mu_2, cov_1))
X = np.empty(x.shape + (2,))
X[:, :, 0] = x; X[:, :, 1] = y
g = (X-x_0).dot(W)
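In this equal-covariance case g is affine in X, so its zero level is a straight line through x_0 with normal vector W, and it is drawn exactly as before:

plt.contour(x, y, g, [0])
plt.show()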
I use this method from the book Python Machine Learning (2nd ed.):
import numpy as np
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
def plot_decision_regions(X, y, classifier, test_idx=None, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.3, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.8,
c=colors[idx],
marker=markers[idx],
label=cl,
edgecolor='black')
# highlight test samples
if test_idx:
# plot all samples
X_test, y_test = X[test_idx, :], y[test_idx]
plt.scatter(X_test[:, 0],
X_test[:, 1],
c='',
edgecolor='black',
alpha=1.0,
linewidth=1,
marker='o',
s=100,
label='test set')
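A minimal usage sketch (assuming a fitted scikit-learn classifier clf and 2D feature/label arrays X and y, as in the other answers here):

plot_decision_regions(X, y, classifier=clf)
plt.xlabel('feature 0')
plt.ylabel('feature 1')
plt.legend(loc='upper left')
plt.show()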
Since version 1.1, sklearn has a function for this:
https://scikit-learn.org/stable/modules/generated/sklearn.inspection.DecisionBoundaryDisplay.html#sklearn.inspection.DecisionBoundaryDisplay
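A minimal sketch (assuming scikit-learn >= 1.1, a fitted classifier clf and the 2D data X, y from the question):

import matplotlib.pyplot as plt
from sklearn.inspection import DecisionBoundaryDisplay

# draw the filled decision regions, then overlay the raw points
disp = DecisionBoundaryDisplay.from_estimator(clf, X, response_method="predict", alpha=0.3)
disp.ax_.scatter(X[:, 0], X[:, 1], c=y, edgecolor="k")
plt.show()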
I have two random variables, x and y, and I want to fit a curve to them that plateaus. I've been able to do this using an exponential fit, but I'd like to do so with a quadratic fit as well.
How can I get the fit to flatten out at the top? FWIW, the y data were generated such that no value goes above 4300, so the new curve should probably respect that cap.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
x = np.asarray([70,37,39,42,35,35,44,40,42,51,65,32,56,51,33,47,33,42,33,44,46,38,53,38,54,54,51,46,50,51,48,48,50,32,54,60,41,40,50,49,58,35,53,66,41,48,43,54,51])
y = np.asarray([3781,3036,3270,3366,2919,2966,3326,2812,3053,3496,3875,1823,3510,3615,2987,3589,2791,2819,1885,3570,3431,3095,3678,2297,3636,3569,3547,3553,3463,3422,3516,3538,3671,1888,3680,3775,2720,3450,3563,3345,3731,2145,3364,3928,2720,3621,3425,3687,3630])
def polyfit(x, y, degree):
results = {}
coeffs = np.polyfit(x, y, degree)
# Polynomial Coefficients
results['polynomial'] = coeffs.tolist()
# r-squared, fit values, and average
p = np.poly1d(coeffs)
yhat = p(x)
ybar = np.sum(y)/len(y)
ssreg = np.sum((yhat-ybar)**2)
sstot = np.sum((y - ybar)**2)
results['determination'] = ssreg / sstot
return results, yhat, ybar
def plot_polyfit(x=None, y=None, degree=None):
# degree = degree of the fitting polynomial
xmin = min(x)
xmax = max(x)
fig, ax = plt.subplots(figsize=(5,4))
p = np.poly1d(np.polyfit(x, y, degree))
t = np.linspace(xmin, xmax, len(x))
ax.plot(x, y, 'ok', t, p(t), '-', markersize=3, alpha=0.6, linewidth=2.5)
results, yhat, ybar = polyfit(x,y,degree)
R_squared = results['determination']
textstr = r'$r^2=%.2f$' % (R_squared, )
props = dict(boxstyle='square', facecolor='lightgray', alpha=0.5)
fig.text(0.05, 0.95, textstr, transform=ax.transAxes, fontsize=12,
verticalalignment='top', bbox=props)
results['polynomial'][0]
plot_polyfit(x=x, y=y, degree=2)
In contrast, I can use the same functions and get the curve to plateau better when the data look like this:
x2 = np.asarray([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12])
y2 = np.asarray([2, 4, 8, 12, 14, 18, 20, 21, 22, 23, 24, 24])
plot_polyfit(x=x2, y=y2, degree=2)
Edits suggested by @tstanisl:
def plot_newfit(xdat, ydat):
x,y = xdat, ydat
xmax = 4300
def new_fit(A,x,B):
return A*(x - xmax)**2+B # testing this out
fig, axs = plt.subplots(figsize=(5,4))
# Find best fit.
popt, pcov = curve_fit(new_fit, x, y)
# Top plot
# Plot data and best fit curve.
axs.plot(x, y,'ok', alpha=0.6)
axs.plot(np.sort(x), new_fit(np.sort(x), *popt),'-')
#r2
residuals = y - new_fit(x, *popt)
ss_res = np.sum(residuals**2)
ss_tot = np.sum((y-np.mean(y))**2)
r_squared = 1 - (ss_res / ss_tot)
r_squared
# Add text
textstr = r'$r^2=%.2f$' % (r_squared, )
props = dict(boxstyle='square', facecolor='lightgray', alpha=0.5)
fig.text(0.05, 0.95, textstr, transform=axs.transAxes, fontsize=12,
verticalalignment='top', bbox=props)
plot_newfit(x,y)
You just need to slightly modify new_fit() so that it fits A and B rather than x and B.
Set xmax to the desired location of the peak. Using x.max() will guarantee that the fitted curve flattens at the last sample.
def new_fit(x, A, B):
xmax = x.max() # or 4300
return A*(x - xmax)**2+B # testing this out
Result:
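If the parabola should also be guaranteed to open downward, so the fit genuinely flattens at xmax rather than forming a valley there, one option (a sketch, using the corrected new_fit above) is to bound A during the fit:

# force A <= 0 so the vertex at xmax is a maximum
popt, pcov = curve_fit(new_fit, x, y, bounds=([-np.inf, -np.inf], [0, np.inf]))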
I'm not too familiar with scipy.optimize, but if you find the Euclidean distance between the point containing your x-max and the one containing your y-max, halve it, and do some trigonometry, you could use that coordinate either to force your quadratic through it or to include it in your array (I'm not sure the first option is possible, but the second should lessen the downward curve).
I can provide the proof if you don't understand.
I have a waveform from an ultrasonic sensor. Based on its peaks I have calculated the radii (object distances from the sensor), and I would like to use matplotlib to plot those radii on a colormap that accentuates all the possible locations of objects in the sensor's field of view. The result should be a colormap containing circles with the calculated radii, where results with a bigger intensity at a given radius (value) get a brighter colour.
Based on measured radiuses: [ 0. 3.434 6.868 10.302]
And values: [1, 5, 1, 3]
This drawing would illustrate what I want (sorry for the bad gimp skills, these are supposed to be circles):
In real life the colourmap is supposed to be a lot more "fluctuating" with no such perfectly defined narrow circles.
Here's my code that only gives me a blank graph:
def plot_2D_heatmap(self, radiuses, values):
print(radiuses)
#[ 0. 3.434 6.868 10.302]
print(values)
#[1, 5, 1, 3]
#calculate the x and y coordinates in mm for each measured radius and angle
angles = np.linspace(0, 2*np.pi, 36) #every 10 degrees
no_of_coordinates = len(radiuses) * len(angles)
X = []
Y = []
Z = np.zeros((no_of_coordinates,no_of_coordinates))
for r in range(len(radiuses)):
for a in range(len(angles)):
x = radiuses[r] * np.cos(angles[a])
y = radiuses[r] * np.sin(angles[a])
X.append(x)
Y.append(y)
Z[a][r] = values[r]
'''
print(r)
print(a)
print(values[r])
'''
norm = cm.colors.Normalize(vmax=abs(np.array(Z)).max(), vmin=-abs(np.array(Z)).max())
fig, ax = plt.subplots()
cset1 = ax.contourf(
X, Y, Z, 4,
norm=norm)
plt.show()
And here is some code that produces roughly the result I want, but the circles are "inside out" (the centres should be at (0,0)), and I feel I shouldn't be doing this so "manually":
print(radiuses)
#[ 0. 3.434 6.868 10.302]
print(values)
#[1, 5, 1, 3]
#calculate the x and y coordinates in mm for each measured radius and angle
x = np.linspace(-20, 20, 40)
y = np.linspace(-20, 20, 40)
X, Y = np.meshgrid(y,x)
angles = np.linspace(0, 2*np.pi, 360) #every 1 degrees
no_of_coordinates = len(radiuses) * len(angles)
Z = np.zeros((40, 40))
for r in range(len(radiuses)):
for a in range(len(angles)):
x = radiuses[r] * np.sin(angles[a])
y = radiuses[r] * np.cos(angles[a])
x = round(x)
y = round(y)
Z[x][y] = values[r]
norm = cm.colors.Normalize(vmax=abs(np.array(Z)).max(), vmin=-abs(np.array(Z)).max())
fig, ax = plt.subplots()
print(X)
print(Y)
print(Z)
cset1 = ax.contourf(
X, Y, Z, [1, 2],
norm=norm)
plt.colorbar(cset1)
plt.show()
Your Z represents the plot but does not use the same coordinates as the plot. In other words, (0, 0) on the plot is actually about Z[20, 20]. Key changes when looping through Z should be:
x = int(round(x)) + 20
y = int(round(y)) + 20
Just to be clear, I made a few minor changes to your code so that it runs without errors and gives what you've shown. In the end, the key changes produce the following plot, which is hopefully what you want.
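For completeness, a sketch of what the corrected loop might look like with those offsets applied (assuming the 40x40 grid of the second snippet, whose plot coordinates run from -20 to 20, so that plot coordinate 0 lands near index 20):

for r in range(len(radiuses)):
    for a in range(len(angles)):
        x = radiuses[r] * np.sin(angles[a])
        y = radiuses[r] * np.cos(angles[a])
        ix = int(round(x)) + 20   # shift plot coordinates onto array indices
        iy = int(round(y)) + 20
        Z[ix][iy] = values[r]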