Plotting Error Bars As Gradients - python

I have a series of observations for risk and performance (x, y co-ordinates). These are based on a series of bootstrap samples. I also have the 5th and 95th percentiles of the bootstrap samples - these represent the 90% probability window around the observations.
As it stands I am plotting this data as a scatter, then using error bars to show the probability window around the observation. Note the error bars are not symmetrical on either axis.
I'd like to instead plot these as an ellipse around the center point, ideally with a gradient going from an alpha of 1 at the center to ~0.2 at the edge.
Could anyone give me a nudge in the correct direction for this? Equally if anyone has a more elegant way to demonstrate a probability window around points on a scatter I'm happy to try it.
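One way to approximate the alpha gradient is to stack many concentric translucent ellipses: where they overlap (the center) the color accumulates toward opaque, fading toward the rim. A minimal sketch with made-up center and widths (for asymmetric percentiles you would center the ellipse at the midpoint of the 5th–95th range and use the range as the width/height):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse

def gradient_ellipse(ax, x, y, width, height, color="tab:blue", n=30):
    """Approximate a radial alpha gradient by stacking n concentric
    translucent ellipses: overlap makes the center most opaque."""
    for frac in np.linspace(1.0, 1.0 / n, n):
        ax.add_patch(Ellipse((x, y), width * frac, height * frac,
                             facecolor=color, edgecolor="none", alpha=2.0 / n))

fig, ax = plt.subplots()
gradient_ellipse(ax, 0.5, 0.6, 0.3, 0.2)
ax.scatter([0.5], [0.6], color="k", s=5)  # the observation itself
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
```

The per-layer alpha of `2/n` is a rough tuning knob: more layers give a smoother fade, and the single outermost layer sets the edge transparency.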

Related

How to visualize correlation of discrete data using scatter_matrix in python?

import matplotlib.pyplot as plt
from pandas import plotting as pp  # scatter_matrix lives in pandas.plotting

for attribute in ['alcohol', 'chlorides', 'density']:
    compare = wine_data[["quality", attribute]]
    plot = pp.scatter_matrix(compare)
    plt.show()
I got the following graph. Quality is an integer in the range 0-10, while ['alcohol','chlorides','density'] are continuous data. The correlations between ['alcohol','chlorides','density'] and quality are 0.432733, -0.305599 and -0.207202, respectively. How do I interpret the three graphs below? Is there a better way to visualize the correlation of discrete data?
I prefer Seaborn's regplot function, which will draw the same scatterplot you see here with a regression line fitted on top of it. The regression line helps you see whether the correlation is positive or negative (upward/downward sloping), and regplot also shades a confidence band around the line.
https://seaborn.pydata.org/generated/seaborn.regplot.html
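As an aside, the line regplot draws is an ordinary least-squares fit, so with plain NumPy/Matplotlib you can sketch the same idea (minus the shaded confidence band) on made-up data:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 0.5 * x + rng.normal(0, 1, 100)  # built-in positive correlation

slope, intercept = np.polyfit(x, y, 1)  # least-squares line, as regplot fits
fig, ax = plt.subplots()
ax.scatter(x, y, s=10)
xs = np.linspace(x.min(), x.max(), 50)
ax.plot(xs, slope * xs + intercept, color="red")
```

A positive slope corresponds to a positive correlation, which is exactly the visual cue the regression line gives you on the discrete quality data.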

Python contour plot vs pcolormesh for probability map

I have two arrays of (x, y) points to plot, and at each of these points there is a probability of some event happening, so each point has a value ranging from 0 to 1. My idea was to assign these probabilities to their respective (x, y) coordinates and display the result as a heatmap. The code to plot this is as follows:
plt.pcolormesh(xcoord,ycoord,des_mag)
plt.show()
Where xcoord and ycoord are arrays. I could only make this run if I made des_mag a 2D array, in this case a 2000x2000 array with entries only on the diagonal, since xcoord and ycoord each contain 2000 coordinates. All the des_mag values vary from 0 to 1. When I run this, the output is simply a graph with a solid background and one tiny grid point of a different color in the corner. I'm 95% confident the issue is my lack of understanding of what I need to pass to the plot, but I can't find many examples that clarify the issue. Any suggestions would be greatly appreciated.
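The usual fix is that pcolormesh wants C defined on a full 2D grid, not one value per scattered point. One way to get there is to bin the scattered points and take the mean probability per cell; a sketch with synthetic stand-ins for the 2000 points:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
xcoord = rng.uniform(0, 1, 2000)   # stand-ins for the scattered points
ycoord = rng.uniform(0, 1, 2000)
des_mag = rng.uniform(0, 1, 2000)  # probability at each point

# Aggregate the scattered values onto a regular grid: per-bin mean probability.
nbins = 50
psum, xedges, yedges = np.histogram2d(xcoord, ycoord, bins=nbins, weights=des_mag)
counts, _, _ = np.histogram2d(xcoord, ycoord, bins=[xedges, yedges])
mean_p = np.divide(psum, counts, out=np.zeros_like(psum), where=counts > 0)

fig, ax = plt.subplots()
# histogram2d returns H[x, y]; pcolormesh expects C[y, x], hence the transpose
mesh = ax.pcolormesh(xedges, yedges, mean_p.T)
fig.colorbar(mesh, ax=ax, label="probability")
```

Empty bins are left at 0 here; masking them (e.g. with `np.ma.masked_where`) is another option if 0 is a meaningful probability.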

Compute integral of a "map" along a given direction

EDIT: this is for Python!
I'm trying to compute an integral of a "map" to produce a line density, but I want my integration direction to vary based on the map itself. Here is the idea:
Now, the test case that I am using is the following picture:
So I want to integrate along the cross direction of the "plume", and as such my axes are rotated by an angle of around 90 + theta compared to the regular x and y axes, where theta is rotation angle of the plume with respect to the (original) x axis.
I have tried rotating the entire matrix of values with ndimage.rotate and integrating the rotated picture along the normal x and y directions. However, this does not produce the desired final line density, because I lose information about the "direction" of the plume (all integrations are done for a northern plume).
I'm sure there must be a way to perform an integration along a specified axis but I can't quite find it.
Thanks in advance!
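One alternative to ndimage.rotate that avoids interpolation and keeps the geometry explicit: project every pixel coordinate onto the cross-plume axis and bin-sum the map values, which is a discrete line integral along the plume direction. A sketch with a synthetic straight plume at an assumed known angle theta:

```python
import numpy as np

# Synthetic test map: a Gaussian "plume" ridge at angle theta to the x axis
theta = 30.0  # degrees, assumed known for the test case
n = 201
yy, xx = np.mgrid[0:n, 0:n].astype(float)
t = np.deg2rad(theta)
# signed distance of each pixel from the plume axis (its cross coordinate)
s = (xx - n // 2) * np.sin(t) - (yy - n // 2) * np.cos(t)
img = np.exp(-s**2 / (2 * 4.0**2))

# Bin-sum the map values by cross coordinate: this integrates along the
# plume direction without any image rotation or interpolation loss.
edges = np.linspace(s.min(), s.max(), 201)
profile, _ = np.histogram(s.ravel(), bins=edges, weights=img.ravel())
centers = 0.5 * (edges[:-1] + edges[1:])
```

Because the cross coordinate is signed, the profile also keeps the direction information that a rotate-then-sum approach discards.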

How to plot coarse-grained average of a set of data points?

I have a set of discrete 2-dimensional data points, each with a measured value associated with it. I would like a scatter plot with the points colored by their measured values. But the data points are so dense that differently colored points overlap, which is bad for visualization. So I am wondering whether I could color each point by a coarse-grained average of the measured values of the points near it. Does anyone know how to implement this in Python?
Thanks!
I got it done using sklearn.neighbors.RadiusNeighborsRegressor(); the idea is to take the average of the values of the neighbors within a specific radius. Suppose the coordinates of the data points are in the list temp_coors and the values associated with these points are in coloring; then coloring can be coarse-grained in the following way:
from sklearn.neighbors import RadiusNeighborsRegressor

# Fit on the points themselves, then predict to replace each value with
# the uniform-weight average over all neighbors within smoothing_radius.
r_neigh = RadiusNeighborsRegressor(radius=smoothing_radius, weights='uniform')
r_neigh.fit(temp_coors, coloring)
coloring = r_neigh.predict(temp_coors)
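If scikit-learn is not at hand, the same uniform-weight radius average can be sketched with SciPy's cKDTree (synthetic data here just for illustration):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
coords = rng.uniform(0, 1, (500, 2))
values = coords[:, 0] + rng.normal(0, 0.05, 500)  # signal plus noise

# Average each point's value over all neighbors inside a fixed radius,
# mirroring what RadiusNeighborsRegressor(weights='uniform') predicts.
tree = cKDTree(coords)
neighbors = tree.query_ball_point(coords, r=0.1)  # each list includes the point itself
smoothed = np.array([values[idx].mean() for idx in neighbors])
```

`smoothed` can then be passed as the `c=` argument of `plt.scatter` to color the points by their coarse-grained values.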

matplotlib radar plot min values

I started with the matplotlib radar example, but values below some minimum disappear.
I have a gist here.
The result looks like
As you can see in the gist, the values for D and E in series A are both 3 but they don't show up at all.
There is some scaling going on.
In order to find out what the problem is, I started with the original values and removed them one by one.
When I removed a whole series, the scale would shrink.
Here is an example (removing Factor 5), where the scale in the [0, 0.2] range shrinks.
From
to
I don't care so much about the scaling but I would like my values at 3 score to show up.
Many thanks
Actually, the values for D and E in series A do show up, although they are plotted at the center of the plot. This is because the limits of your "y-axis" are autoscaled.
If you want to have a fixed "minimum radius", you can simply put ax.set_ylim(bottom=0) in your for-loop.
If you want the minimum radius to be relative to the lowest plotted value, you can include something like ax.set_ylim(np.asarray(list(data.values())).flatten().min() - margin) in the for-loop, where margin is the distance from the lowest plotted value to the center of the plot.
With fixed center at radius 0 (added markers to better show that the points are plotted):
By setting margin = 1, and using the relative y-limits, I get this output:
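The core of the fix can be sketched on a minimal polar plot with made-up values (the gist's full series structure is omitted):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

labels = ["A", "B", "C", "D", "E"]
values = [7.0, 6.0, 5.0, 3.0, 3.0]  # the low 3s from the question
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False)

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
# close the polygon by repeating the first point
ax.plot(np.append(angles, angles[0]), np.append(values, values[0]), marker="o")
ax.set_xticks(angles)
ax.set_xticklabels(labels)
ax.set_ylim(bottom=0)  # pin the center at radius 0 so the 3s stay visible
```

Without the `set_ylim` call, autoscaling puts the lowest value at the center of the plot, which is why the 3s seemed to vanish.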