I use matplotlib to plot temperature data for combustion simulations, with temperature in the flame ranging from 3200 K to 5500 K and temperature outside of the flame ranging from 300 K to 1000 K. I want to generate projection plots using two different colormaps, one for within the flame and one for outside of it, to show slight variations in both regions. I don't see any temperatures in the intermediate region of 1000 K to 3200 K, so I waste resolution in my colormap by using one map for the entire 300 K to 5500 K range. I tried using some of the diverging maps, but they still miss the small variations at the high and low ends.
Does anyone have any suggestions for how to combine two colormaps into one, using one of the colormaps for each temperature range?
EDIT
To make my question more specific: I want to use Matplotlib's 'hot' colormap for data points in the 3200-5500 K range and 'cool' for data points in the 300-1000 K range.
Is there any way to get the source code for these two colormaps, normalize them to their respective start and end points, and combine both into one cmap?
Here's a great write up on creating custom color maps:
http://cresspahl.blogspot.com/2012/03/expanded-control-of-octaves-colormap.html
You can simply place the color transitions around your two data ranges and leave the portion that's not represented unchanged.
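A minimal sketch of that approach, using Matplotlib's built-in 'hot' and 'cool' maps (the sample data here is made up; adapt the boundaries to your grid):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap, BoundaryNorm

# Sample 128 colors from each source colormap.
cool = plt.get_cmap('cool')(np.linspace(0, 1, 128))
hot = plt.get_cmap('hot')(np.linspace(0, 1, 128))

# Stack them into one 256-color map: 'cool' low, 'hot' high.
combined = ListedColormap(np.vstack((cool, hot)), name='cool_hot')

# Map 300-1000 K onto the 'cool' half and 3200-5500 K onto the
# 'hot' half; the empty 1000-3200 K gap folds into the first 'hot' bin.
bounds = np.concatenate((np.linspace(300, 1000, 129),
                         np.linspace(3200, 5500, 129)[1:]))
norm = BoundaryNorm(bounds, combined.N)

# Toy data drawn from the two temperature regimes.
data = np.where(np.random.rand(50, 50) < 0.5,
                np.random.uniform(300, 1000, (50, 50)),
                np.random.uniform(3200, 5500, (50, 50)))
plt.pcolormesh(data, cmap=combined, norm=norm)
plt.colorbar()
plt.show()

The colorbar will show both gradients stitched together, with the unused 1000-3200 K region collapsed to the seam between them.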
Related
I'm working on a dataset of SMS records [datetime_entry, sms_sent] and I was looking to copy a really effective trend visual from a well-cited electricity demand study. Does anyone know the name of this plot, or the implementation of something similar in Python (as I'm not sure this was done in Python)?
I know how to subplot the 4 charts after splitting the data by quarter; I'm just stumped on the plot type and stylization.
This is what matplotlib calls an eventplot.
Essentially, each vertical line represents an occurrence of MWh demand during that specific hour. So each row in the plot should have as many vertical lines as there are days in that quarter.
While it works in this plot for these data, relying on the combination of alpha level and data density can be slightly unreliable as the data change, since the number of overlapping points is not readily visible. So you can also create a similar visualization using hist2d, where you manually specify your bins.
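A rough sketch of both approaches, with made-up SMS timestamps standing in for the real records:

import numpy as np
import matplotlib.pyplot as plt

# Made-up data: for each day in a quarter, the hours at which
# an SMS was sent (one array of hours per day).
rng = np.random.default_rng(0)
events = [rng.uniform(0, 24, size=rng.integers(5, 40)) for _ in range(90)]

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 6))

# eventplot: one row per day, one vertical tick per SMS.
ax1.eventplot(events, linewidths=0.5, alpha=0.5)
ax1.set_xlabel('hour of day')
ax1.set_ylabel('day of quarter')

# hist2d alternative: explicit bins make the overlap counts visible.
hours = np.concatenate(events)
days = np.repeat(np.arange(len(events)), [len(e) for e in events])
ax2.hist2d(hours, days, bins=(48, 90))
ax2.set_xlabel('hour of day')
ax2.set_ylabel('day of quarter')

plt.tight_layout()
plt.show()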
I am trying to create a 3D graph of a dataset that contains one x-axis (time_step) and about 8 y-axes at different scales. The reason I need to do so is to show a significant change in the series that can be observed in all the different features at a particular point in time.
The issue I am running into is that the y-axes are on different scales. For example, bonding_energy is around 1000, whereas potential is around -58000. What I would like to do is rescale the y-axes to place the lines roughly parallel to one another.
Here is a sample of the dataset I am working with:
Here is an example of the image I am trying to create:
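For illustration, a minimal sketch of one possible approach with made-up data (only the column names bonding_energy and potential come from the question; everything else is assumed): min-max normalize each series onto [0, 1], then lay the lines out along the depth axis of a 3D plot.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection on older Matplotlib)

# Made-up stand-in for the real frame: time_step plus features on
# wildly different scales (bonding_energy ~ 1000, potential ~ -58000).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    'time_step': np.arange(100),
    'bonding_energy': 1000 + rng.standard_normal(100).cumsum(),
    'potential': -58000 + 50 * rng.standard_normal(100).cumsum(),
})

fig = plt.figure()
ax = fig.add_subplot(projection='3d')

features = [c for c in df.columns if c != 'time_step']
for i, col in enumerate(features):
    y = df[col].to_numpy()
    # Min-max normalize each series so all lines share one vertical
    # scale and sit roughly parallel along the depth axis.
    scaled = (y - y.min()) / (y.max() - y.min())
    ax.plot(df['time_step'], np.full(len(y), float(i)), scaled)

ax.set_xlabel('time_step')
ax.set_yticks(range(len(features)))
ax.set_yticklabels(features)
ax.set_zlabel('normalized value')
plt.show()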
I have worked out a table somewhat like the one in the link. The ultimate goal for plotting is to find out whether there is a seasonal change pattern for certain products in a state. I have tried to figure out a 3-D plot in Python, with the x-axis being product name, the y-axis being month, and the z-axis being the YR2012 and YR2013 values respectively.
And another small question related to this: how can I make Python know that the SALESMONTH column contains month-type data rather than plain integers?
Thanks!
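On the SALESMONTH sub-question, one common trick (sketched here with a made-up frame) is to map the integers onto an ordered categorical of month names, so that sorting, grouping, and axis labels follow calendar order rather than treating them as plain numbers:

import calendar
import pandas as pd

# Made-up frame with an integer SALESMONTH column (1-12).
df = pd.DataFrame({'SALESMONTH': [1, 12, 2], 'YR2012': [10, 9, 12]})

months = list(calendar.month_abbr)[1:]  # ['Jan', 'Feb', ..., 'Dec']
df['SALESMONTH'] = pd.Categorical(
    df['SALESMONTH'].map(lambda m: months[m - 1]),
    categories=months, ordered=True)

print(df.sort_values('SALESMONTH'))  # now sorts Jan, Feb, ..., Dec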
I am currently working on a project where I have to bin up to 10-dimensional data. This works totally fine with numpy.histogramdd; however, I have one serious obstacle:
My parameter space is pretty large, but only a fraction is actually inhabited by data (say, maybe a few % or so...). In these regions, the data is quite rich, so I would like to use relatively small bin widths. The problem here, however, is that the RAM usage totally explodes. I see usage of 20 GB+ for only 5 dimensions, which is simply not practical. I tried defining the grid myself, but the problem persists...
My idea would be to manually specify the bin edges, where I just use very large bin widths for empty regions in the data space. Only in regions where I actually have data would I need to go to a finer scale.
I was wondering if anyone here already knows of an implementation of this that works in an arbitrary number of dimensions.
thanks 😊
I think you should first remap your data, then create the histogram, and then interpret the histogram knowing the values have been transformed. One possibility would be to tweak the histogram tick labels so that they display mapped values.
One possible way of doing it, for example, would be:
Sort one dimension of data as a one-dimensional array;
Integrate this array, so you have a cumulative distribution;
Find the steepest part of this distribution, and choose a horizontal interval corresponding to a "good" bin size for the peak of your histogram - that is, a size that gives you good resolution;
Find the size of this same interval along the vertical axis. That will give you a bin size to apply along the vertical axis;
Create the bins using the vertical span of that bin - that is, "draw" horizontal, equidistant lines to create your bins, instead of the most common way of drawing vertical ones;
That way, you'll have lots of bins where data is more dense, and fewer bins where data is more sparse.
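A minimal sketch of that idea, using quantiles of the empirical cumulative distribution as bin edges (equal-count bins), applied independently per dimension and fed to numpy.histogramdd:

import numpy as np

def adaptive_edges(x, n_bins):
    # Quantiles of the empirical cumulative distribution are the
    # "horizontal, equidistant lines" described above, so every bin
    # ends up holding roughly the same number of points.
    return np.quantile(x, np.linspace(0, 1, n_bins + 1))

# Made-up 5-D data set, dense only around the origin.
rng = np.random.default_rng(0)
data = rng.normal(size=(100000, 5))

# One adaptive edge array per dimension keeps the grid small:
# 20**5 bins here, however skewed the data are.
edges = [adaptive_edges(data[:, d], 20) for d in range(data.shape[1])]
hist, _ = np.histogramdd(data, bins=edges)
print(hist.shape)  # (20, 20, 20, 20, 20)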
Two things to consider:
The mapping function is the cumulative distribution of the sorted values along that dimension. This can be quite arbitrary. If the distribution resembles some well-known algebraic function, you could define it mathematically and use it to perform a two-way transform between actual value data and "adaptive" histogram data;
This applies to only one dimension. Care must be taken as to how this would work if the histograms from multiple dimensions are to be combined.
I am using Python to plot points. The plot shows the relationship between area and the number of points of interest (POIs) in that area. I have 3000 area values and 3000 POI counts.
Now the plot looks like this:
The problem is that, at the lower left side, points severely overlap each other, so it is hard to get enough information. Most areas are not that big, and they don't have many POIs.
I want to make a plot with little overlap. I am wondering whether I can use an unevenly distributed axis or a histogram to make a beautiful plot. Can anyone help me?
I would suggest using a logarithmic scale for the y axis. You can either use pyplot.semilogy(...) or pyplot.yscale('log') (http://matplotlib.org/api/pyplot_api.html).
Note that points with values <= 0 on the log-scaled axis will not be rendered.
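For example (with made-up, skewed numbers standing in for the 3000 real points):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
area = rng.lognormal(mean=2.0, sigma=1.0, size=3000)
pois = np.round(rng.lognormal(mean=1.0, sigma=1.0, size=3000)) + 1

plt.scatter(area, pois, s=5, alpha=0.3)
plt.yscale('log')  # equivalently: plt.semilogy(area, pois, '.')
plt.xlabel('area')
plt.ylabel('# of POIs')
plt.show()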
I think we have two major choices here: first, adjusting this plot; and second, choosing to display your data in another type of plot.
For the first option, I would suggest clipping the boundaries. You have plenty of space around the borders; if you limit the plot to the boundaries of your data, it will scale better. On top of that, you may choose to plot the points as smaller dots, so that they seem to overlap less.
The second option would be to display your data in a different type of plot, such as a histogram. This might give better insight into the distribution of your data among different bins. But it would be a completely different type of view compared with the former plot.
I would suggest first trying to adjust the plot by limiting its boundaries to the data points, so that the plot area has enough room to scale the data, and trying histograms afterwards. But as I mentioned, these are two different things and would give different insights about your data.
For adjusting you might try this:
x1, x2, y1, y2 = plt.axis()   # read off the current axis limits
plt.axis((x1, x2, y1, y2))    # substitute tighter values here
You would probably need to make minor adjustments to the axis variables. Note that there are definitely better options than this, but it was the first thing that came to mind.
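Putting both suggestions together (smaller dots, limits clipped to the data), a sketch with made-up numbers:

import numpy as np
import matplotlib.pyplot as plt

# Made-up stand-in for the 3000 (area, POI count) points.
rng = np.random.default_rng(0)
area = rng.lognormal(2.0, 1.0, 3000)
pois = rng.lognormal(1.0, 1.0, 3000)

plt.scatter(area, pois, s=4, alpha=0.3)  # smaller, translucent dots
plt.margins(0.02)                        # keep the limits close to the data
plt.xlabel('area')
plt.ylabel('# of POIs')
plt.show()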