I have a numpy array like this:
[[[0, 0, 0], [1, 0, 0], ..., [1919, 0, 0]],
 [[0, 1, 0], [1, 1, 0], ..., [1919, 1, 0]],
 ...,
 [[0, 1019, 0], [1, 1019, 0], ..., [1919, 1019, 0]]]
To create it I use this function (thanks to @Divakar and @unutbu for their help in another question):
def indices_zero_grid(m, n):
    I, J = np.ogrid[:m, :n]
    out = np.zeros((m, n, 3), dtype=int)
    out[..., 0] = I
    out[..., 1] = J
    return out
I can access this array like this:
>>> out = indices_zero_grid(3, 2)
>>> out
array([[[0, 0, 0],
        [0, 1, 0]],

       [[1, 0, 0],
        [1, 1, 0]],

       [[2, 0, 0],
        [2, 1, 0]]])
>>> out[1, 1]
array([1, 1, 0])
Now I want to plot a 2D histogram where (x, y) (i.e. out[x, y]) are the coordinates and the third value is the number of occurrences. I've tried using a normal matplotlib plot, but I have so many values per coordinate (I need 1920x1080) that the program needs too much memory.
If I understand correctly, you want an image of size 1920x1080 which colors the pixel at coordinate (x, y) according to the value of out[x, y].
In that case, you could use
import numpy as np
import matplotlib.pyplot as plt

def indices_zero_grid(m, n):
    I, J = np.ogrid[:m, :n]
    out = np.zeros((m, n, 3), dtype=int)
    out[..., 0] = I
    out[..., 1] = J
    return out

h, w = 1920, 1080
out = indices_zero_grid(h, w)
out[..., 2] = np.random.randint(256, size=(h, w))
plt.imshow(out[..., 2])
plt.show()
which yields
Notice that the other two "columns", out[..., 0] and out[..., 1], are not used. This suggests that indices_zero_grid is not really needed here.
plt.imshow can accept a plain array of shape (1920, 1080) with one scalar value per location; the position of each value within the array tells imshow where to color that cell. Unlike a scatter plot, you don't need to generate the coordinates yourself.
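As a minimal sketch of that last point (random values stand in for your real occurrence counts, which I don't have), the plot can be produced from the plain 2D array alone:

import numpy as np
import matplotlib.pyplot as plt

h, w = 1920, 1080
counts = np.random.randint(256, size=(h, w))  # stand-in for your per-pixel counts
plt.imshow(counts)
plt.show()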
Let's say I have a two-dimensional array
import numpy as np
a = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
and I would like to replace the third vector (in the second dimension) with zeros. I would do
a[:, 2] = np.array([0, 0, 0])
But what if I would like to do that programmatically? I mean, let's say the variable x = 1 contained the dimension along which I want to do the replacing. What would the function replace(arr, dimension, value, arr_to_be_replaced) have to look like if I wanted to call it as replace(a, x, 2, np.array([0, 0, 0]))?
numpy has a similar function, insert. However, it doesn't replace along dimension i; it returns a copy with an additional vector inserted.
All solutions are welcome, but I'd prefer one that doesn't recreate the array, so as to save memory.
arr[:, 1]
is basically shorthand for
arr[(slice(None), 1)]
that is, a tuple with slice elements and integers.
Knowing that, you can construct a tuple of slice objects manually, adjust the values depending on an axis parameter and use that as your index. So for
import numpy as np
arr = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
axis = 1
idx = 2
arr[:, idx] = np.array([0, 0, 0])
#      ^- axis position
you can use
slices = [slice(None)] * arr.ndim
slices[axis] = idx
arr[tuple(slices)] = np.array([0, 0, 0])
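Wrapped up as the replace function the question asks for (the argument names just mirror the question's call; this is a sketch of the same tuple-of-slices idea, modifying the array in place):

import numpy as np

def replace(arr, axis, idx, replacement):
    # Build an index equivalent to arr[:, ..., idx, ..., :] with idx at position `axis`
    slices = [slice(None)] * arr.ndim
    slices[axis] = idx
    arr[tuple(slices)] = replacement  # in-place, no copy of arr is made

a = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
replace(a, 1, 2, np.array([0, 0, 0]))
print(a)
# [[1 1 0]
#  [2 2 0]
#  [3 3 0]]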
I have multiple numpy arrays like so:
[1, 5, 0, 0]
[2, 1, 3, 1]
[1, 3, 4, 1]
All my arrays have different values and shapes.
I want to write a function that will pad all my arrays to the same shape.
Currently, I am doing something like this (inside a for loop):
width = int(7000 - size[0])
height = int(7000 - size[1])
data = np.pad(data, (width, height), 'constant', constant_values=(0,0))
Where data is the array being edited, and 7000x7000 is my largest array.
This is giving me a MemoryError.
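A likely reason for the blow-up (a hedged guess): np.pad treats a plain (width, height) tuple as (pad_before, pad_after) applied to every axis, so the padded result ends up far larger than 7000x7000. A minimal sketch that instead pads each axis separately, up to an assumed target shape of (7000, 7000):

import numpy as np

def pad_to(data, target):
    # Pad only after the existing data, with a separate amount per axis
    pad_rows = target[0] - data.shape[0]
    pad_cols = target[1] - data.shape[1]
    return np.pad(data, ((0, pad_rows), (0, pad_cols)),
                  'constant', constant_values=0)

small = np.ones((3, 4))
print(pad_to(small, (7000, 7000)).shape)  # (7000, 7000)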
I tried my own code with this:
import numpy

def arrays(arr):
    return numpy.array(arr[::-1], float)

arr = input().strip().split(' ')
result = arrays(arr)
print(result)
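For example, entering 1 2 3 4 at the prompt makes arr equal to ['1', '2', '3', '4'], and the script prints the reversed float array [4. 3. 2. 1.].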
I have a 2D numpy array (think greyscale image). I want to assign a certain value to a list of coordinates in this array, such that:
import numpy as np

img = np.zeros((5, 5))
coords = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])

def bad_use_of_numpy(img, coords):
    for i, coord in enumerate(coords):
        img[coord[0], coord[1]] = 255
    return img

bad_use_of_numpy(img, coords)
This works, but I feel like I can take advantage of numpy functionality to make it faster. I also might have a use case later to do something like the following:
img = np.zeros((5, 5))
coords = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])
vals = np.array([1, 2, 3, 4])

def bad_use_of_numpy(img, coords, vals):
    for i, coord in enumerate(coords):
        img[coord[0], coord[1]] = vals[i]
    return img

bad_use_of_numpy(img, coords, vals)
Is there a more vectorized way of doing that?
We can unpack each row of coords as row, col indices for indexing into img and then assign.
Now, since the question is tagged Python 3.x, we can simply unpack with [*coords.T] and then assign -
img[[*coords.T]] = 255
Generically, we can use tuple to unpack -
img[tuple(coords.T)] = 255
We can also compute the linear indices and then assign with np.put -
np.put(img, np.ravel_multi_index(coords.T, img.shape), 255)
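The second use case from the question, with per-coordinate values, works with the same tuple-unpacked index (a small sketch building on the forms above):

import numpy as np

img = np.zeros((5, 5))
coords = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])
vals = np.array([1, 2, 3, 4])

# (row indices, col indices) -> assign vals element-wise
img[tuple(coords.T)] = vals
print(img)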
I have a set of coordinate means (3D) and a set of standard deviations (3D) accompanying them like this:
means = [[x1, y1, z1],
         [x2, y2, z2],
         ...
         [xn, yn, zn]]

stds = [[sx1, sy1, sz1],
        [sx2, sy2, sz2],
        ...
        [sxn, syn, szn]]
so the problem is N x 3.
I am looking to generate 1000 coordinate sample sets (N x 3 x 1000) randomly using np.random.normal(). Currently I generate the samples using a for loop:
for i in range(0, 1000):
    samples = np.random.normal(means, stds)
But I have the feeling I can lose the for loop and let numpy do it faster and in one call, anybody know how I should code that?
Alternatively, use the size argument:
import numpy as np

means = [[0, 0, 0], [1, 1, 1]]
std = [[1, 1, 1], [1, 1, 1]]

# 100 samples
print(np.random.normal(means, std, size=(100, len(means), 3)))
You can repeat your means and stds arrays 1000 times, and then call np.random.normal() once.
import numpy

means = [[0, 0, 0],
         [1, 1, 1]]
stds = [[1, 1, 1],
        [2, 2, 2]]

means = numpy.array(means) * numpy.ones(1000)[:, None, None]
stds = numpy.array(stds) * numpy.ones(1000)[:, None, None]
samples = numpy.random.normal(means, stds)
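For comparison, a sketch of the one-call form that skips the repeated arrays entirely by broadcasting means and stds against the size argument (shapes assumed as in the question, giving a (1000, N, 3) result):

import numpy as np

means = np.array([[0, 0, 0], [1, 1, 1]])  # shape (N, 3)
stds = np.array([[1, 1, 1], [2, 2, 2]])   # shape (N, 3)

# 1000 samples per (mean, std) entry in a single call
samples = np.random.normal(means, stds, size=(1000,) + means.shape)
print(samples.shape)  # (1000, 2, 3)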
How could I smooth the x[1,3] and x[3,2] elements of the array,
import numpy as np

x = np.array([[0, 0, 0, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 0, 0]])
with two two-dimensional gaussian functions of width 1 and 2, respectively? In essence I need a function that allows me to smooth single "point like" array elements with gaussians of differing widths, such that I get an array with smoothly varying values.
I am a little confused by the question you asked and the comments you have posted. It seems to me that you want to use scipy.ndimage.filters.gaussian_filter, but I don't understand what you mean by:
[...] gaussian functions with different sigma values to each pixel. [...]
In fact, since you use a 2-dimensional array x, the gaussian filter will have 2 parameters. The rule is: one sigma value per dimension, rather than one sigma value per pixel.
Here is a short example:
import matplotlib.pyplot as pl
import numpy as np
import scipy as sp
import scipy.ndimage

n = 200   # width/height of the array
m = 1000  # number of points

sigma_y = 3.0
sigma_x = 2.0

# Create input array
x = np.zeros((n, n))
i = np.random.choice(range(0, n * n), size=m)
x[i // n, i % n] = 1.0

# Plot input array
pl.imshow(x, cmap='Blues', interpolation='nearest')
pl.xlabel("$x$")
pl.ylabel("$y$")
pl.savefig("array.png")

# Apply gaussian filter
sigma = [sigma_y, sigma_x]
y = sp.ndimage.filters.gaussian_filter(x, sigma, mode='constant')

# Display filtered array
pl.imshow(y, cmap='Blues', interpolation='nearest')
pl.xlabel("$x$")
pl.ylabel("$y$")
pl.title(r"$\sigma_x = " + str(sigma_x) + r"\quad \sigma_y = " + str(sigma_y) + "$")
pl.savefig("smooth_array_" + str(sigma_x) + "_" + str(sigma_y) + ".png")
Here is the initial array:
Here are some results for different values of sigma_x and sigma_y:
This makes it possible to properly account for the influence of the second parameter of scipy.ndimage.filters.gaussian_filter.
However, according to the previous quote, you might be more interested in the assignment of different weights to each pixel. In this case, scipy.ndimage.filters.convolve is the function you are looking for. Here is the corresponding example:
import matplotlib.pyplot as pl
import numpy as np
import scipy as sp
import scipy.ndimage

# Arbitrary weights
weights = np.array([[0, 0, 1, 0, 0],
                    [0, 2, 4, 2, 0],
                    [1, 4, 8, 4, 1],
                    [0, 2, 4, 2, 0],
                    [0, 0, 1, 0, 0]],
                   dtype=float)
weights = weights / np.sum(weights)

y = sp.ndimage.filters.convolve(x, weights, mode='constant')

# Display filtered array
pl.imshow(y, cmap='Blues', interpolation='nearest')
pl.xlabel("$x$")
pl.ylabel("$y$")
pl.savefig("smooth_array.png")
And the corresponding result:
I hope this will help you.
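If you really do need a different gaussian width per point, as in the original question (x[1, 3] smoothed with width 1, x[3, 2] with width 2), one possible sketch is to filter each point-like element separately with its own sigma and add the results; this only stays cheap when there are few such points:

import numpy as np
import scipy.ndimage

x = np.zeros((5, 5))
points = [((1, 3), 1.0),  # (coordinate, sigma)
          ((3, 2), 2.0)]

y = np.zeros_like(x)
for (r, c), sigma in points:
    impulse = np.zeros_like(x)
    impulse[r, c] = 1.0
    # Smooth each point-like element with its own gaussian width, then accumulate
    y += scipy.ndimage.gaussian_filter(impulse, sigma, mode='constant')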