I want to visualize such 4D data in the 2D plane. Is there any way to do that?
You can use principal component analysis (PCA) to reduce the dimensions; scikit-learn has an easy-to-use implementation. PCA uses eigenvectors to determine the most 'important' directions in the higher-dimensional space and projects the values into the lower-dimensional space. Keep in mind, though, that the projected points lose some information from the higher-dimensional input.
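A minimal sketch of that approach, assuming the 4D data sits in a NumPy array X of shape (n_samples, 4) (the random values below are just a stand-in):

# Sketch: project 4D data down to 2D with scikit-learn's PCA
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

X = np.random.rand(200, 4)            # stand-in for your 4D data
pca = PCA(n_components=2)             # keep the two most 'important' directions
X2d = pca.fit_transform(X)            # shape (n_samples, 2)
print(pca.explained_variance_ratio_)  # fraction of variance each component retains

plt.scatter(X2d[:, 0], X2d[:, 1])
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.show()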
You can use a pairs plot. Look at this link: Visualizing Data with Pairs Plots in Python.
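For example, a quick sketch with seaborn, assuming the 4D data is in a pandas DataFrame df with one column per dimension (random values as a stand-in):

# Sketch: pairwise scatter plots of all four dimensions
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.DataFrame(np.random.rand(200, 4), columns=["d1", "d2", "d3", "d4"])
sns.pairplot(df)   # one scatter plot per pair of dimensions, histograms on the diagonal
plt.show()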
Depending on the data type, you can plot them on a 2D plane with the following mapping (a sketch follows after this answer):
Dim 1-2: X and Y axes
Dim 3: plotted point size
Dim 4: plotted point color gradient
Or, if you use 3D plotting software, you can plot the points in 3D with a color gradient for the fourth dimension; by rotating the view you can see all of the dimensions.
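A rough sketch of the 2D mapping listed above, assuming the four dimensions are the columns of an array data (random values here are placeholders):

# Sketch: dims 1-2 as x/y, dim 3 as marker size, dim 4 as color
import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(100, 4)               # stand-in for your 4D data
x, y, size, color = data.T

plt.scatter(x, y, s=200 * size, c=color, cmap="viridis")
plt.colorbar(label="dim 4")
plt.xlabel("dim 1")
plt.ylabel("dim 2")
plt.show()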
I have two matrices: m x m and m x 1.
The 2D matrix's data depends on the 1D matrix, so I want to show this relationship in one plot.
Earlier I used a heatmap to visualize the 2D matrix, and I am thinking of using one again now.
Both matrices have the same label ticks, so my idea is to duplicate the 1D matrix and place the copies next to the 2D matrix's labels so that they share the same label ticks:
Is it possible to achieve such a plot in Python, especially in seaborn or matplotlib?
Or maybe there is another, better way to visualize such data?
For now, I see one issue here: the 1D matrices lack a legend, but I have no idea where or how it should be placed.
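One layout that might work (only a sketch under the assumptions above, with m2d standing in for the m x m matrix and m1d for the m x 1 matrix) is a matplotlib gridspec with thin strips for the duplicated 1D matrix next to the main heatmap, so the panels share ticks:

# Sketch: 2D heatmap with the 1D matrix repeated as thin strips on the top and left
import numpy as np
import matplotlib.pyplot as plt

m = 6
m2d = np.random.rand(m, m)                  # stand-in for the m x m matrix
m1d = np.random.rand(m)                     # stand-in for the m x 1 matrix

fig, axes = plt.subplots(
    2, 2, figsize=(6, 6),
    gridspec_kw={"width_ratios": [1, 12], "height_ratios": [1, 12]},
    sharex="col", sharey="row",
)
axes[0, 0].axis("off")                                  # empty corner
axes[0, 1].imshow(m1d[np.newaxis, :], aspect="auto")    # 1D matrix along the top
axes[1, 0].imshow(m1d[:, np.newaxis], aspect="auto")    # 1D matrix along the left
im = axes[1, 1].imshow(m2d, aspect="auto")              # the 2D matrix itself
fig.colorbar(im, ax=axes[1, 1])                         # legend for the 2D matrix
plt.show()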
I was wondering whether it is possible to create a 3D vector field plot from a 4D numpy array.
The array is in the shape (10,10,10,3) which is basically the three vector components at each respective grid point. Grid spacing is 1.
There is a matplotlib function ax.quiver() (https://matplotlib.org/stable/gallery/mplot3d/quiver3d.html) that I would like to use. It works for irregular grids, which is why it takes six arrays as input: x, y, z meshgrids for the grid and u, v, w for the vector components.
I know how to create the xyz meshgrids, but I am not sure how to extract the vector components uvw from my 4d array.
Help would be appreciated!
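If I understand the layout correctly, the components are just slices along the last axis. A sketch, assuming the array is called field with shape (10, 10, 10, 3) and the last axis holds u, v, w:

# Sketch: 3D quiver plot from a (10, 10, 10, 3) array, grid spacing 1
import numpy as np
import matplotlib.pyplot as plt

field = np.random.rand(10, 10, 10, 3)       # stand-in for the 4D array

# grid coordinates; indexing="ij" keeps the meshgrid axes aligned with the array axes
x, y, z = np.meshgrid(np.arange(10), np.arange(10), np.arange(10), indexing="ij")

# the vector components are slices along the last axis
u = field[..., 0]
v = field[..., 1]
w = field[..., 2]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.quiver(x, y, z, u, v, w, length=0.5, normalize=True)
plt.show()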
My aim is to compute the maximum and minimum diameters of a slice of a 3D tubular shape (as shown in the attached figure).
What I tried: several rotations of the skeleton of the shape to make it parallel to the Z axis, then cutting the interpolated tubular shape at some voxel to get the slice.
However, these rotations require a lot of interpolation, so I would like to avoid them and instead use a method that computes the diameters (max and min) without rotations.
Any help please?
I have a numpy array of 3D points that I convert to 2D via the following equation:
https://wikimedia.org/api/rest_v1/media/math/render/svg/198f15da062c7ce00598d7a2f9bd8169d7042ed3
How can I convert the point back to 3D?
I used the top-down view matrix that is in the image above, found on Wikipedia: https://en.wikipedia.org/wiki/Orthographic_projection
# To 2D from 3D: apply the projection matrix, then keep only the x and y columns
import numpy as np
points2D = np.array([np.matmul(camera_pos, point) for point in points3D])[:, :2]
You cannot convert it back using just the projected points. Note that your projection basically keeps the (x, y) values and discards the z value, so there is no way to know what z was after doing this.
For instance, consider the points u = [1,2,3] and v=[1,2,-3]. These both project to [1,2,0], so there is no way to know if we should make [1,2,0] into u or v when we try to invert (undo) the projection.
In terms of the matrix operation, this is because projection matrices (other than the identity) are not invertible.
You will need more information than just the projected points to be able to recover the original points.
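A small sketch illustrating this with the top-down orthographic projection (which simply zeros out the z component):

# Sketch: two different 3D points project to the same 2D point, and the matrix is singular
import numpy as np

P = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 0]])              # orthographic projection onto the xy-plane

u = np.array([1, 2, 3])
v = np.array([1, 2, -3])

print(P @ u)                           # [1 2 0]
print(P @ v)                           # [1 2 0] -- same image, different originals

print(np.linalg.matrix_rank(P))        # 2 < 3, so P has no inverse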
I'm using Healpy (the HEALPix tools developed in Python) to read and write full-sky CMB maps.
I am confused as to how the coordinates of pixels on a sky map translate into entries of a numpy ndarray. How does one HEALPix pixel in a FITS file translate into an ndarray entry?
For example, let's say I have a generated CMB map, Nside=64, lmax=64, using the default RING scheme. The total number of pixels is given by Npix=12*Nside**2, so for my example that is 49152 pixels in total.
The FITS file I have is in the format 1 column by 48 rows. (The single column holds all temperature values.) So I transform this file into an n-dimensional numpy ndarray. The shape of this array is (48, 1024).
Question 1: Is this 2-d array a projection of all the map pixels on the sky into "matrix" form? I think it is. Using my example, 48*1024 = 49152, the total number of pixels.
Question 2: Where does the 1024 standard come from? What does this mean in terms of a HEALPix map and its coordinates? How does one 2-dim ndarray entry (pixel) relate to another (by angle, by HEALPix position, etc.)?
For another example, take a map with Nside=1024. When I convert this into a 2-dimensional ndarray, I get the shape (12288, 1024).
EDIT: Why does this convention of 1024-element arrays exist? How does this correspond to the pixel coordinates on a map?
The best source of information about the HEALPix pixelization is the HEALPix primer:
http://healpix.jpl.nasa.gov/html/intro.htm
A map is a 1-dimensional vector; each index is a pixel in the sky.
You can turn a pixel number into coordinates on the sky using hp.pix2ang.
Shaping it into rows of 1024 is just a trick to make reading from and writing to disk faster.
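A minimal sketch of reading such a map with healpy and relating pixel indices to sky coordinates (the filename my_map.fits is just a placeholder):

# Sketch: read a HEALPix map as a flat 1D array and convert pixel indices to angles
import numpy as np
import healpy as hp

m = hp.read_map("my_map.fits")         # 1D array of length Npix = 12 * Nside**2
nside = hp.get_nside(m)                # recover Nside from the array length

ipix = np.arange(len(m))               # pixel indices 0 .. Npix-1 (RING ordering)
theta, phi = hp.pix2ang(nside, ipix)   # colatitude and longitude (radians) per pixel

# The (48, 1024) or (12288, 1024) shape is the same vector reshaped into rows of 1024:
# m.reshape(-1, 1024) reproduces that layout, and m.reshape(-1) flattens it back.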