So I'm making a graphical calculator that shows the intersections between graphs and the axes. I found the method from "Intersection of two graphs in Python, find the x value" to work most of the time; however, trying to plot the x-axis intersection of x**2 as follows
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(-5, 5, 0.01)
g = (x) ** 2
plt.plot(x, g, '-')
idx = np.argwhere(np.diff(np.sign(g))).flatten()
plt.plot(x[idx], g[idx], 'ro')
plt.show()
doesn't put the dot at the (0,0) point. I assume it has something to do with the fact that 0 is not in g, so the graph doesn't actually pass through the point exactly and instead just gets really close to it. So I experimented with changing idx to
epsilon = 0.0001
# or another real small number
idx = g < epsilon
Unfortunately, that only seemed to mark a lot of points near the actual x-intercept instead of just one.
You are close. Instead, I just search for where the absolute value of the derivative (np.diff) is at a minimum, such that
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(-5, 5, 0.01)
g = x**2
plt.plot(np.abs(np.diff(g)))
plt.show()
which shows that the minimum is at index 500.
Then all you need to do is return the index of the minimum value with argmin and plot that point:
idx = np.argmin(np.abs(np.diff(g)))
plt.plot(x, g, '-')
plt.scatter(x[idx],g[idx])
plt.show()
You'll need to modify the idx variable to return multiple roots, but for the question you posted, this should be sufficient.
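One possible way to handle multiple roots (my own sketch, not part of the original answer) is to keep every local minimum of np.abs(g) that falls below a small tolerance; this catches both sign-change roots and tangent roots like the one at x = 0:
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(-5, 5, 0.01)
g = (x**2 - 1) * x**2          # hypothetical example with roots at -1, 0 and 1

abs_g = np.abs(g)
tol = 0.05                     # should comfortably exceed max|g'| * step / 2 near the roots
# indices that are local minima of |g| and close enough to zero
interior = (abs_g[1:-1] <= abs_g[:-2]) & (abs_g[1:-1] <= abs_g[2:]) & (abs_g[1:-1] < tol)
idx = np.flatnonzero(interior) + 1

plt.plot(x, g, '-')
plt.plot(x[idx], g[idx], 'ro')
plt.show()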
I'm making contour plots which are basically analytical or numerical solutions to a fluid dynamics system. I don't think the technical details matter too much, but here are my plots. The first plot is the numerical (matrix system) solution, and the second plot is the nice closed-form (single formula) solution.
As can be seen, my second plot has the bubbles on the right-hand side. Looking at the legend/scale, I have negative values. I'd like to not have negative values, or not plot them, although I'm not sure how to adjust this within my code. I've spent some time looking into how to adjust the z values to be positive only, but I can't seem to get it. I'll drop my plot code, and then my nice closed-form function that is used in the plot.
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import scipy.special as sp1
from mpl_toolkits.mplot3d import Axes3D
def v(r, z, gamma):
    a = r * (1 - z/gamma)
    sums = 0
    for n in range(1, 26):
        sums += (sp1.iv(1, (n*np.pi*r)/gamma) / (n * sp1.iv(1, (n*np.pi)/gamma))) * np.sin(n*np.pi*z/gamma)
    return a - (2/np.pi) * sums
def plot_contour(a, filename=None, zlabel='v(r,z)', cmap=plt.cm.gnuplot):
    fig = plt.figure(figsize=(5, 4))
    ax = fig.add_subplot(111)
    x = np.arange(a.shape[0])
    y = np.arange(a.shape[1])
    X, Y = np.meshgrid(x, y)
    Z = a[X, Y]
    cset = ax.contourf(X, Y, Z, 20, cmap=cmap)
    ax.set_xlabel('r')
    ax.set_ylabel('z')
    ax.set_title('\u0393=2.5')
    ax.axis('off')
    ax.set_aspect(1)
    cb = fig.colorbar(cset, shrink=0.5, aspect=5)
    cb.set_label(zlabel)
    if filename:
        fig.savefig(filename, dpi=1600)
        plt.close(fig)
        return filename
    else:
        return ax
...
plot_contour(v1, 'gamma25e+1')
This is all the necessary code. The rest of it is the matrix solution, which is just a bunch of linear algebra. Any help on what I need to add or adjust to prevent negative values from showing up in the second plot would be appreciated. It should look exactly like the first.
I've spent some time looking into how to adjust the z values to being positive only
What you can do depends greatly on what you want to do with the values below zero. If your sole purpose is to make the points below zero show as zero, you can simply set them to zero; keep in mind, however, that this displays an incorrect result.
x = np.arange(a.shape[0])
y = np.arange(a.shape[1])
X, Y = np.meshgrid(x, y)
Z = a[X, Y]
Z[Z < 0] = 0
Another solution is to subtract the minimum value of your data, so that the minimum value of the result is 0.
x = np.arange(a.shape[0])
y = np.arange(a.shape[1])
X, Y = np.meshgrid(x, y)
Z = a[X, Y]
Z -= np.amin(Z)
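If you would rather leave the negative regions out of the plot entirely, a minimal sketch (reusing ax, cmap and the same X, Y, Z as in plot_contour) is to restrict the filled contour levels to non-negative values; contourf then simply leaves anything below the lowest level unfilled:
levels = np.linspace(0, Z.max(), 21)                    # 20 bands, all at or above zero
cset = ax.contourf(X, Y, Z, levels=levels, cmap=cmap)   # regions with Z < 0 are left blank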
I have a 2D output matrix (say, Z) which was calculated as a function of two variables x,y.
x varies in a non-uniform manner like [1e-5,5e-5,1e-4,5e-4,1e-3,5e-3,1e-2]
y varies in a uniform manner like [300,400,500,600,700,800]
[ say, Z = np.random.rand(7,6) ]
I was trying to plot a colormap of the matrix Z by first creating a meshgrid for x,y and then using pcolormesh. Since my x values are non-uniform, I do not know how to proceed. Any input would be greatly appreciated.
No need for meshgrids; regarding the non-uniform axes: In your case a log-scale works fine:
import numpy as np
from matplotlib import pyplot as plt
x = [1e-5,5e-5,1e-4,5e-4,1e-3,5e-3,1e-2]
y = [300,400,500,600,700,800]
# either enlarge x and y by one number (right-most
# endpoint for those bins), or make Z smaller as I did
Z = np.random.rand(6,5)
fig = plt.figure()
ax = fig.gca()
ax.pcolormesh(x,y,Z.T)
ax.set_xscale("log")
fig.show()
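If you would rather keep Z at the full 7x6 shape instead of dropping a row and column, a small sketch under the assumption of matplotlib >= 3.3 (where shading='nearest' treats x and y as cell centers rather than bin edges):
Z_full = np.random.rand(7, 6)
ax.pcolormesh(x, y, Z_full.T, shading='nearest')  # x and y now match the columns/rows of Z_full.T
ax.set_xscale("log")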
I'm trying to plot a probability distribution (say, the probability of k events). It should be plotted with a log scale on the horizontal axis, since the behavior at large values of k looks like k^{-alpha}, so it's a straight line for large k on a log-scale plot.
But k = 0 happens.
I want to plot this in a way that is easy to interpret.
As an example, consider a probability defined so that p_0 = 0.5 and for k = 1, 2, 3, ... we set p_k = Ck^{-2}, where, if I've calculated correctly, C = 3/pi^2. This should sum to 1 and produce a nice straight line for k > 0, but obviously I can't stick 0 into a log scale. Nevertheless, it's important that the person looking at the image understands that 0 exists and has significant probability.
I'm using matplotlib (in python), but really I'm interested in how we could visualize this. The implementation can be sorted later.
In order to put 0 into the plot, you have to apply symlog to the x axis and log to the y axis. I am putting some code here in case you are not familiar with matplotlib, so you can start with the code below. For details, please check the documentation.
import numpy as np
import matplotlib.pyplot as plt
n = 100
x = np.arange(0, n)
y = 3/(np.pi*np.pi)/(x[1:])**2
y = np.concatenate([[0.5], y])
fig, ax = plt.subplots(1, 1, figsize=(7.2, 7.2))
ax.plot(x, y, 'x')
ax.set_xlim(-1, n)
ax.set_xscale('symlog')
ax.set_yscale('log')
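If the transition around zero looks too cramped or too wide, the width of the linear segment of the symlog scale can be tuned; a one-line sketch (the keyword name assumes matplotlib >= 3.3, where it is linthresh rather than linthreshx):
ax.set_xscale('symlog', linthresh=1)  # linear between -1 and 1, logarithmic beyond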
I am working with a large number of 3D points, each with x,y,z values stored in numpy arrays.
For background, the points will always fall within a cylinder of fixed radius, and height = max z value of the points.
My objective is to split the bounding cylinder (or column, if that is easier) into, e.g., 1 m height strata, and then count the number of points within each cell of a regular grid (e.g. 1 m x 1 m) overlaid on each stratum.
Conceptually, the operation would be the same as overlaying a raster and counting the points intersecting each pixel.
The grid of cells can form a square or a disk, it doesn't matter.
After a lot of searching and reading, my current thinking is to use some combination of numpy.linspace and numpy.meshgrid to generate the vertices of each cell stored within an array and test each cell against each point to see if it is 'in'. This seems inefficient, especially when working with thousands of points.
The numpy / scipy suite seems well suited to the problem, but I have not found a solution yet. Any suggestions would be much appreciated.
I have included a few example points and some code to visualize the data.
# Setup
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# Load in X,Y,Z values from a sub-sample of 10 points for testing
# XY Values are scaled to a reasonable point of origin
z_vals = np.array([3.08,4.46,0.27,2.40,0.48,0.21,0.31,3.28,4.09,1.75])
x_vals = np.array([22.88,20.00,20.36,24.11,40.48,29.08,36.02,29.14,32.20,18.96])
y_vals = np.array([31.31,25.04,31.86,41.81,38.23,31.57,42.65,18.09,35.78,31.78])
# This plot is instructive to visualize the problem
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x_vals, y_vals, z_vals, c='b', marker='o')
plt.show()
I am not sure I understand perfectly what you are looking for, but since every "cell" seems to have a 1m side for all directions, couldn't you:
round all your values to integers (rasterize your data) probably with some floor function;
create a bijection from these integer coordinates to something more convenient with something like:
(64**2)*x + (64)*y + z # assuming all values are in [0,63]
You can put z at the beginning instead if you want to focus more easily on height later;
compute the histogram of each "cell" (several functions from numpy or scipy can do it);
revert the bijection if needed (i.e. recover the "true" coordinates of each cell once the count is known).
Maybe I didn't understand well, but in case it helps...
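A minimal sketch of this idea (my own illustration, assuming the coordinates are non-negative and fit in [0, 63] after flooring; np.bincount does the counting):
import numpy as np

x_vals = np.array([22.88, 20.00, 20.36, 24.11, 40.48])
y_vals = np.array([31.31, 25.04, 31.86, 41.81, 38.23])
z_vals = np.array([3.08, 4.46, 0.27, 2.40, 0.48])

# rasterize to integer cell coordinates (1 m cells)
xi, yi, zi = (np.floor(v).astype(int) for v in (x_vals, y_vals, z_vals))

# bijection: one integer key per cell, assuming every index is in [0, 63]
keys = (64**2) * xi + 64 * yi + zi

counts = np.bincount(keys, minlength=64**3)   # points per cell
# revert the bijection for the occupied cells
occupied = np.flatnonzero(counts)
cx, cy, cz = occupied // 64**2, (occupied // 64) % 64, occupied % 64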
Thanks #Baruchel. It turns out the n-dimensional histograms suggested by #DilithiumMatrix provide a fairly simple solution to the problem I posted. After some reading, here is my current solution for anyone else who faces a similar problem.
As this is my first Python/Numpy effort any improvements/suggestions, especially regarding performance, would be welcome. Thanks.
# Setup
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# Load in X,Y,Z values from a sub-sample of 10 points for testing
# XY Values are scaled to a reasonable point of origin
z_vals = np.array([3.08,4.46,0.27,2.40,0.48,0.21,0.31,3.28,4.09,1.75])
x_vals = np.array([22.88,20.00,20.36,24.11,40.48,29.08,36.02,29.14,32.20,18.96])
y_vals = np.array([31.31,25.04,31.86,41.81,38.23,31.57,42.65,18.09,35.78,31.78])
# Updated code below
# Variables needed for 2D,3D histograms
xmax, ymax, zmax = int(x_vals.max())+1, int(y_vals.max())+1, int(z_vals.max())+1
xmin, ymin, zmin = int(x_vals.min()), int(y_vals.min()), int(z_vals.min())
xrange, yrange, zrange = xmax-xmin, ymax-ymin, zmax-zmin
xedges = np.linspace(xmin, xmax, (xrange + 1), dtype=int)
yedges = np.linspace(ymin, ymax, (yrange + 1), dtype=int)
zedges = np.linspace(zmin, zmax, (zrange + 1), dtype=int)
# Make the 2D histogram
h2d, xedges, yedges = np.histogram2d(x_vals, y_vals, bins=(xedges, yedges))
assert h2d.sum() == len(x_vals), "Unclassified points in the array"  # total count should equal the number of points
extent = [xedges[0], xedges[-1], yedges[0], yedges[-1]]
plt.imshow(h2d.transpose(), extent=extent, interpolation='none', origin='lower')
# Transpose is needed because histogram2d returns h[x, y] while imshow expects [row, col] = [y, x];
# origin='lower' puts the first row at the bottom so the image lines up with the data axes
# Plot settings, not sure yet on matplotlib update/override objects
plt.grid(b=True, which='both')
plt.xticks(xedges)
plt.yticks(yedges)
plt.xlabel('X-Axis')
plt.ylabel('Y-Axis')
plt.plot(x_vals, y_vals, 'ro')
plt.show()
# 3-dimensional histogram with 1 x 1 x 1 m bins. Produces point counts in each 1m3 cell.
xyzstack = np.stack([x_vals,y_vals,z_vals], axis=1)
h3d, Hedges = np.histogramdd(xyzstack, bins=(xedges, yedges, zedges))
assert h3d.sum() == len(x_vals), "Unclassified points in the array"  # total count should equal the number of points
h3d.shape # Shape of the array should be same as the edge dimensions
testzbin = np.sum(np.logical_and(z_vals >= 1, z_vals < 2)) # Slice to test with
np.sum(h3d[:,:,1]) == testzbin # Test num points in second bins
np.sum(h3d, axis=2) # Sum of all vertical points above each x,y 'pixel'
# np.sum(h3d, axis=2) collapses the z bins, so it matches h2d as long as every z value falls within zedges
# Remaining issue - how to get a r x c count of empty z bins.
# i.e. for each 'pixel' how many z bins contained no points?
# Possible solution is to reshape to use logical operators
count2d = h3d.reshape(xrange * yrange, zrange) # Maintain dimensions per num 3D cells defined
zerobins = (count2d == 0).sum(1)
zerobins.shape
# Get back to x,y grid with counts - ready for output as image with counts=pixel digital number
bincount_pixels = zerobins.reshape(xrange,yrange)
# Appears to work, perhaps there is a way without reshaping?
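# A possible alternative without reshaping (just a sketch): count empty z bins by
# summing a boolean mask along the z axis; gives the same values as bincount_pixels
zeros_per_pixel = (h3d == 0).sum(axis=2)  # shape (xrange, yrange)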
PS: if you are facing a similar problem, scikit patch extraction looks like another possible solution.