I have two n-by-n arrays containing the U and V velocity components, and I wish to plot the streamlines.
I am aware this can be done for regularly spaced arrays using streamplot. The array I am currently working with is relatively close to regularly spaced, so the issues are not that big. However, I will soon be working on arrays that are far from evenly spaced, with the coordinates following curves.
I've looked for a way to plot these but I cannot seem to find any modules. The only two ways I could think of doing it are:
approximate the actual solution on a regularly spaced grid
use a vector field instead of a streamline field (similar to what was done in Streamlines for irregular spaced wind data in R?)
My question is whether there is a module I can use and, if not, what would be the best approach to achieve a suitable result.
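For the regridding approach, a minimal sketch using scipy.interpolate.griddata to resample the velocities onto a regular grid and then calling streamplot. The curvilinear coordinates and the velocity field below are made-up stand-ins for your n-by-n arrays:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

# Made-up curvilinear sample data: the points follow curves
# rather than a rectangular lattice.
s, t = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
x = s + 0.2 * np.sin(3 * t)
y = t + 0.2 * np.sin(3 * s)
u = -np.cos(np.pi * y)
v = np.cos(np.pi * x)

# Resample U and V onto the regular grid that streamplot expects.
xi = np.linspace(x.min(), x.max(), 200)
yi = np.linspace(y.min(), y.max(), 200)
X, Y = np.meshgrid(xi, yi)
pts = np.column_stack([x.ravel(), y.ravel()])
U = griddata(pts, u.ravel(), (X, Y), method='linear')
V = griddata(pts, v.ravel(), (X, Y), method='linear')

# streamplot masks the NaNs that griddata returns outside the
# convex hull of the original points.
plt.streamplot(X, Y, U, V, density=1.5)
plt.show()
```

The target resolution (200 here) trades accuracy of the resampled field against the cost of the interpolation.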
Related
I am stuck with Python and matplotlib's imshow(). The aim is to show a two-dimensional color map which represents three dimensions.
My x-axis is represented by an array 'TG' (93 entries). My y-axis is a set of arrays dependent on 'TG'; to be precise, there are 93 different arrays, each of length 340. My z-axis is also a set of arrays dependent on 'TG', of the same size as y (93x340).
Basically, what I have is a set of two-dimensional measurements which I want to plot in color depending on a third array. Is there a clever way to do that? I tried to find out on my own first, but all I found addressed the more common problem of just a z-plane (a two-dimensional plot). So I have two matrices of size (93x340) and one array (93). Do you have any helpful advice?
Without more detail on your specific problem, it's hard to guess the best way to represent your data. I am going to give an example; hopefully it is relevant.
Suppose we are collecting the height and weight of a group of people. Maybe the index of the person is your first dimension, and the height and weight depend on who it is. Then one way to represent this data is to use height and weight as the x and y axes, and plot each person as a dot in that two-dimensional space.
In this example, the person index doesn't really have much meaning, thus no color is needed.
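As a minimal sketch of that representation (the height/weight numbers are made up; the c= argument of scatter is shown for the case where, as in your question, a third quantity should drive the color):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
height = rng.normal(170, 10, 93)   # cm, one value per person
weight = rng.normal(70, 12, 93)    # kg
age = rng.integers(18, 80, 93)     # a third quantity, only if color is needed

plt.scatter(height, weight, c=age, cmap='viridis')
plt.colorbar(label='age [years]')
plt.xlabel('height [cm]')
plt.ylabel('weight [kg]')
plt.show()
```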
I'm working with some instrument data that records the temperature at a specific latitude, longitude, and pressure (height) coordinate. I need to create a 3D grid from this instrument data that I can then use to take vertical cross sections of the interpolated gridded data. I've looked at pretty much every interpolation function/library I can find and I'm still having trouble just wrapping my head around how to do this.
I'd prefer not to use Mayavi, since it seems to bug out on my school's server and I'd rather not try to deal with fixing it right now.
The data is currently in 4 separate 1d arrays and I used those to mock up some scatter plots of what I'm trying to get.
Here is the structure of my instrument data points:
And here is what I'm trying to create:
Ultimately, I'd like to create some kind of 3d contour from these points that I can take slices of. Each of the plotted points has a corresponding temperature attached to it, which is really what I think is throwing me off in terms of dimensions and whatnot.
There are a few options to go from the unstructured data which you have to a structured dataset.
The simplest option might be to use the scipy interpolate.griddata method, which can interpolate unstructured points using nearest-neighbour, linear, or cubic interpolation.
Another option is to define your grid and then average all of the unstructured points which fall into each grid cell, giving you some gridded representation of the data. You could use a tool such as CIS to do this easily (full disclosure, I wrote this package to do exactly this kind of thing).
Or, there are more complicated methods of interpolating the data by trying to determine the most likely value of the grid points based on the unstructured data, for example using kriging with the pyKriging package, though I've never used this.
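For the first option, a minimal sketch of gridding the four 1-D arrays with scipy.interpolate.griddata; the instrument values below are fabricated stand-ins for your latitude/longitude/pressure/temperature arrays:

```python
import numpy as np
from scipy.interpolate import griddata

# Fabricated stand-ins for the four 1-D instrument arrays.
rng = np.random.default_rng(0)
lat = rng.uniform(-30, 30, 2000)
lon = rng.uniform(0, 60, 2000)
pressure = rng.uniform(100, 1000, 2000)          # hPa
temperature = 300 - 0.05 * pressure + 0.1 * lat  # made-up field

# Regular 3-D target grid.
gl = np.linspace(-30, 30, 40)
gn = np.linspace(0, 60, 40)
gp = np.linspace(100, 1000, 30)
LAT, LON, PRES = np.meshgrid(gl, gn, gp, indexing='ij')

T = griddata((lat, lon, pressure), temperature,
             (LAT, LON, PRES), method='linear')

# A vertical cross section is then just a slice of the gridded cube,
# e.g. temperature over latitude and pressure at one longitude index:
section = T[:, 20, :]
```

Note that linear interpolation returns NaN for grid points outside the convex hull of the observations; method='nearest' avoids that, at the cost of blocky extrapolation.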
I've been stuck on this for the last couple of days. I have two sets of unstructured data: the first set has only 9583 points and the second one has 60000 points. I need to interpolate a particular field of the first set (x1[len = 9583], y1[len = 9583], temperature1[len = 9583]) onto the second set (x2[len = 60000], y2[len = 60000], temperature2[len = 60000]). I have tried Python's "SmoothBivariateSpline" and looked for other options, but I did not find any reasonable solution. I'm open to either Python or C++. It would be highly appreciated if anybody could help me figure out a solution. I attached a figure for your convenience.
The figure shows the two sets of unstructured points.
There's a bunch on SO on unstructured interpolation, including:
Python/Scipy 2D Interpolation (Non-uniform Data)
What to do if I want 3D spline/smooth interpolation of random unstructured data?
3D interpolation of 2 lists in python
And of course, there's always the SciPy Interpolation Documentation
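For the scattered (x, y, temperature) layout described in the question, a minimal sketch with scipy.interpolate.griddata; the arrays below are fabricated so the snippet runs on its own:

```python
import numpy as np
from scipy.interpolate import griddata

# Fabricated stand-ins: 9583 source points, 60000 target points.
rng = np.random.default_rng(0)
x1, y1 = rng.uniform(0, 1, (2, 9583))
temperature1 = np.sin(4 * x1) * np.cos(4 * y1)
x2, y2 = rng.uniform(0, 1, (2, 60000))

temperature2 = griddata((x1, y1), temperature1, (x2, y2),
                        method='linear')

# Target points outside the convex hull of the source set come back
# as NaN with 'linear'; fill those with nearest-neighbour values.
nan = np.isnan(temperature2)
temperature2[nan] = griddata((x1, y1), temperature1,
                             (x2[nan], y2[nan]), method='nearest')
```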
Contrary to the title of your question, your data looks more like what NumPy calls "1D data" (that is, XY data) rather than "2D data" (XYZ data). It also looks like your data is mostly sampling and noise, rather than a fit to some mathematical function. So, I'd suggest starting at the top of the list from the SciPy docs, skipping over the univariate and multivariate stuff and trying something in the 1-D Splines section, perhaps scipy.interpolate.BSpline.
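If the data really is 1-D (XY) and noisy, a smoothing spline may suit better than an interpolating one. A minimal sketch using splrep and BSpline from scipy.interpolate, on fabricated samples:

```python
import numpy as np
from scipy.interpolate import splrep, BSpline

# Fabricated noisy XY samples; x must be increasing for splrep.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 50))
y = np.sin(x) + rng.normal(0, 0.1, 50)

# s controls the smoothing (roughly m * noise variance here);
# s=0 would interpolate every point, noise included.
tck = splrep(x, y, s=0.5)
spline = BSpline(*tck)

x_fine = np.linspace(x.min(), x.max(), 500)
y_fine = spline(x_fine)
```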
I am currently working on a project where I have to bin up to 10-dimensional data. This works totally fine with numpy.histogramdd; however, I face one serious obstacle:
My parameter space is pretty large, but only a fraction of it is actually inhabited by data (say, maybe a few percent or so...). In those regions the data is quite rich, so I would like to use relatively small bin widths. The problem, however, is that the RAM usage totally explodes. I see usage of 20 GB+ for only 5 dimensions, which is already completely impractical. I tried defining the grid myself, but the problem persists...
My idea would be to manually specify the bin edges, using very large bin widths for empty regions of the data space. Only in regions where I actually have data would I need to go to a finer scale.
I was wondering if anyone here knows of such an implementation already which works in arbitrary numbers of dimensions.
thanks 😊
I think you should first remap your data, then create the histogram, and then interpret the histogram knowing the values have been transformed. One possibility would be to tweak the histogram tick labels so that they display mapped values.
One possible way of doing it would be:
Sort one dimension of the data as a one-dimensional array;
Integrate this array, so you have a cumulative distribution;
Find the steepest part of this distribution, and choose a horizontal interval corresponding to a "good" bin size for the peak of your histogram - that is, a size that gives you good resolution;
Find the size of this same interval along the vertical axis. That will give you a bin size to apply along the vertical axis;
Create the bins using the vertical span of that bin - that is, "draw" horizontal, equidistant lines to create your bins, instead of the most common way of drawing vertical ones;
That way, you'll have lots of bins where the data is more dense, and fewer bins where the data is more sparse (a sketch of this construction follows below).
Two things to consider:
The mapping function is the cumulative distribution of the sorted values along that dimension. This can be quite arbitrary. If the distribution resembles some well known algebraic function, you could define it mathematically and use it to perform a two-way transform between actual value data and "adaptive" histogram data;
This applies to only one dimension. Care must be taken as to how this would work if the histograms from multiple dimensions are to be combined.
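A minimal sketch of that construction for one dimension: equidistant horizontal cuts through the cumulative distribution are exactly the quantiles of the data, so np.quantile yields the adaptive bin edges directly (the clustered sample data is made up):

```python
import numpy as np

def adaptive_edges(values, n_bins):
    """Bin edges holding roughly equal counts: equidistant horizontal
    cuts through the empirical cumulative distribution."""
    return np.quantile(values, np.linspace(0, 1, n_bins + 1))

# Made-up data: a dense cluster inside a very wide, sparse range.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 0.1, 10_000),
                       rng.uniform(-50, 50, 100)])

edges = adaptive_edges(data, 50)   # narrow bins near 0, wide ones far out
hist, _ = np.histogram(data, bins=edges)

# numpy.histogramdd accepts one such edge array per dimension:
# hist_nd, edges_nd = np.histogramdd(points, bins=[edges0, edges1, ...])
```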
I have the following issue: I have two NumPy arrays containing the x, y coordinates of many (potentially tens of thousands of) disks, whose (different) radii are contained in a third array r. Both the radii and the coordinates are measured in the same units. I would like to plot the disks with the exact radius given by r, but in a fast way.
I have found some partial solutions (matplotlib's scatter can use the three arrays and is reasonably fast, but its sizes are scaled in points rather than data units; Circle instances are too slow; and Mayavi is in my opinion overkill since the problem is 2D).
Speed is important because of the large number of circles I want to plot. Is there a simple native solution that hasn't come to my mind?
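One option worth trying is matplotlib's EllipseCollection, which draws all the disks as a single artist and, with units='xy', sizes them in data units rather than points. A minimal sketch (the coordinates and radii are made up; the offset_transform keyword assumes matplotlib >= 3.6, older versions call it transOffset):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import EllipseCollection

# Made-up centres and radii for tens of thousands of disks.
rng = np.random.default_rng(0)
n = 20_000
x, y = rng.uniform(0, 100, (2, n))
r = rng.uniform(0.05, 0.5, n)

fig, ax = plt.subplots()
# widths/heights are full diameters, in data units thanks to units='xy'.
coll = EllipseCollection(widths=2 * r, heights=2 * r, angles=np.zeros(n),
                         units='xy',
                         offsets=np.column_stack([x, y]),
                         offset_transform=ax.transData,
                         facecolor='C0', edgecolor='none')
ax.add_collection(coll)
ax.set_xlim(0, 100)
ax.set_ylim(0, 100)
ax.set_aspect('equal')   # keeps the disks circular on screen
plt.show()
```

A PatchCollection built from Circle patches also works and is far faster than adding the circles one by one, but a single EllipseCollection is usually lighter still.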