I have a NumPy array of shape (size_x, size_y) holding different values. These values are a Gaussian random field, and the size in both dimensions is given.
Also, I have a NumPy array of shape (nr_points, 2) holding two-dimensional coordinates, where nr_points is the given number of xy-coordinates in this array.
The sizes (size_x, size_y) differ from the bounding box of the points given in the second array.
How do I efficiently scale and map the values of the first array to the points?
Here is a graphical sketch of the desired task.
Normalize the coordinate values to the range of the field array's size, which will probably produce fractional (non-integer) coordinates.
scale = (field_array_size - 1) / (coord_max - coord_min)    # per-axis factor mapping the coordinate range onto the valid index range
scaled_coords = coordinates * scale
normed_coords = scaled_coords - scaled_coords.min(axis=0)   # shift so the smallest coordinate lands at index 0
the coordinate x values should be scaled to the field array's x-dimension size
the coordinate y values should be scaled to the field array's y-dimension size
You can only index the field array with integers, so you have two choices:
round the new coordinates to zero decimal places, convert them to ints, and use them as indices, or
interpolate the field array values at the fractional coordinates (a sketch of both options follows below).
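A minimal sketch of both options, assuming the field array is called field and the point array coordinates (both names and the sample shapes are placeholders); the interpolation variant uses scipy.ndimage.map_coordinates:

import numpy as np
from scipy.ndimage import map_coordinates   # only needed for the interpolation option

field = np.random.rand(50, 60)                          # stand-in for the (size_x, size_y) random field
coordinates = np.random.uniform(-5.0, 15.0, (200, 2))   # stand-in for the (nr_points, 2) points

# per-axis scaling onto the valid index range [0, size - 1]
coord_min = coordinates.min(axis=0)
coord_max = coordinates.max(axis=0)
scale = (np.array(field.shape) - 1) / (coord_max - coord_min)
normed_coords = (coordinates - coord_min) * scale       # fractional indices, shape (nr_points, 2)

# option 1: round to the nearest integer index
idx = np.rint(normed_coords).astype(int)
values_nearest = field[idx[:, 0], idx[:, 1]]

# option 2: bilinear interpolation at the fractional indices
values_interp = map_coordinates(field, normed_coords.T, order=1)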
I have a problem here.
My data is a NumPy array of 3D shape
(256, 256, 1) = (x coordinate, y coordinate, pressure value)
Now I would like to draw a contour plot using this np array.
But the problem is that the shape of my data does not fit into plt.contourf
Any idea on how to preprocess my data before feeding it to contourf?
Since you have a single value for each position [M, N], you can simply squeeze out that dimension and represent the data as a matrix of shape [M, N].
data = data.squeeze(2)   # drop the trailing singleton dimension -> shape (256, 256)
plt.contourf(data)
The squeezed and original arrays contain exactly the same data; they are just represented slightly differently.
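If you also want physical x/y coordinates on the axes instead of array indices, contourf accepts coordinate grids as well; a small sketch, assuming evenly spaced coordinates (the data and the ranges here are placeholders):

import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(256, 256, 1)           # stand-in for the pressure data described above
data2d = data.squeeze(2)                     # shape (256, 256)
x = np.linspace(0.0, 1.0, data2d.shape[1])   # placeholder x range
y = np.linspace(0.0, 1.0, data2d.shape[0])   # placeholder y range
X, Y = np.meshgrid(x, y)
plt.contourf(X, Y, data2d)
plt.colorbar()
plt.show()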
I have an object array of size (22, 2) for the y-axis, and the corresponding x-axis is an object array of size (22, 2) as well. I need to calculate the area between the curve y = f(x) and the x-axis for each entry.
I wrote this code:
Area = np.trapz(y, x)
I got an object array in which each element consists of many values, but I expected to get a single value (the area) for each y = f(x).
How can I correctly calculate the area for each function f(x) stored in the object array?
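A minimal sketch under the assumption that each cell of the two object arrays holds a 1-D sequence of samples for one curve (the names y_obj and x_obj, and the dummy data, are placeholders for your actual arrays):

import numpy as np

# y_obj / x_obj stand in for your two (22, 2) object arrays; each cell holds one curve's samples
y_obj = np.empty((22, 2), dtype=object)
x_obj = np.empty((22, 2), dtype=object)
for idx in np.ndindex(y_obj.shape):
    x_obj[idx] = np.linspace(0.0, 1.0, 10)   # dummy x samples
    y_obj[idx] = np.random.rand(10)          # dummy y samples

areas = np.empty(y_obj.shape)                # one scalar area per stored curve
for idx in np.ndindex(y_obj.shape):
    areas[idx] = np.trapz(np.asarray(y_obj[idx], dtype=float),
                          np.asarray(x_obj[idx], dtype=float))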
I have a 3D image represented in an array of size 50x50x50. Every element of this 3D array is a pixel. I've differentiated every pixel in the x, y, z directions. How can I represent this in the array?
After differentiating, I get a list of size 3, and each entry is a 50,50,50 array. This list therefore holds the differentiated image for the x, y and z directions, which is very nearly what I want. But I would like an array of shape 50,50,50,3 rather than 3,50,50,50.
This is what I would want represented: every pixel has a value for x, y and z.
My code:
array_image = full_image[0:50, 0:50, 0:50]   # crop a 50x50x50 block
Gradient = np.gradient(array_image)          # list of 3 arrays, one per axis, each (50, 50, 50)
If you look at the np.gradient doc carefully, it actually returns what you want, just with a different shape.
gradient : ndarray or list of ndarray.
A set of ndarrays (or a single ndarray if there is only one dimension)
corresponding to the derivatives of f with respect to each dimension.
Each derivative has the same shape as f.
So your Gradient is a list of gradients of array_image, one per dimension.
res = np.zeros([50, 50, 50, 3])
for i in range(3):
    res[:, :, :, i] = Gradient[i]   # move the derivative along axis i into the last dimension
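Equivalently, the same reordering can be written in one line with np.stack:

res = np.stack(Gradient, axis=-1)   # shape (50, 50, 50, 3)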
Given a list of irregularly spaced data points that are used to describe a line or curve or some noisy function, I would like to convert the data to an image-like numpy array, where each "pixel" either contains data or not (say, 1/0 for data / no data). A similar question was asked here; however, I do not necessarily have a data point for each possible x position, and my points are not necessarily integer values.
For example, say I wanted my output image array to span -10 <= x <= 30, 0 <= y <= 20 with each element in the array spanning 1 distance unit (image array would be 20x40). I have data points [(0.1, 0), (20, 5), (23.5, 18)] that I want to "plot". I would like to find all elements in the array that fall on the line segments created by connecting these points, and populate the array by setting these elements equal to 1 and the rest equal to 0.
EDIT: This is a start, but it seems to convert an entire plot into an image, not just the data (so it includes axes, labels, etc.).
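One way to rasterize just the data, as described above, is to sample each line segment densely and mark the array bins the samples fall into; a sketch using the example bounds and points from the question (the sampling density is an arbitrary choice):

import numpy as np

# image spans -10 <= x <= 30, 0 <= y <= 20, one element per distance unit -> 20 rows by 40 columns
x_min, x_max, y_min, y_max = -10.0, 30.0, 0.0, 20.0
ny, nx = 20, 40
points = np.array([(0.1, 0), (20, 5), (23.5, 18)], dtype=float)

img = np.zeros((ny, nx), dtype=int)
for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
    n = 10 * max(nx, ny)                                               # dense sampling along the segment
    xs = np.linspace(x0, x1, n)
    ys = np.linspace(y0, y1, n)
    cols = np.clip(((xs - x_min) / (x_max - x_min) * nx).astype(int), 0, nx - 1)
    rows = np.clip(((ys - y_min) / (y_max - y_min) * ny).astype(int), 0, ny - 1)
    img[rows, cols] = 1                                                # 1 = data, 0 = no data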
I have a 3D array created using the NumPy mgrid command so that each element has a certain value and the indexes retain the spatial information. For example, if one summed over the z-axis (3rd dimension), the resultant 2D array could be used in matplotlib with imshow() to obtain an image with different binned pixel values.
My question is: How can I obtain the index values for each element in this grid (a,b,c)?
I need to use the index values to calculate the relative angle of each point to the origin of the grid (e.g. theta = arcsin(sqrt(x^2 + y^2) / sqrt(x^2 + y^2 + z^2))).
Maybe this can be translated to another 3D grid where each element is the array [a,b,c]?
I'm not exactly clear on your meaning, but if you are looking for 3d arrays that contain the indices x, y, and z, then the following may suit your needs; assume your data is held in a 3D array called "abc":
import numpy as nm
# one integer index array per axis, each with the same shape as abc
x, y, z = nm.mgrid[[slice(dm) for dm in abc.shape]]
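Those index arrays can then be plugged straight into the angle formula from the question; a short sketch (the data array here is a placeholder, and the 0/0 at the grid origin is guarded with a where clause):

import numpy as np

abc = np.zeros((4, 5, 6))                    # placeholder data array
x, y, z = np.mgrid[[slice(dm) for dm in abc.shape]]

r_xy = np.sqrt(x**2 + y**2)
r = np.sqrt(x**2 + y**2 + z**2)
# arcsin(sqrt(x^2 + y^2) / sqrt(x^2 + y^2 + z^2)), with the origin mapped to 0
theta = np.arcsin(np.divide(r_xy, r, out=np.zeros_like(r), where=r > 0))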