I've got 3x 3D arrays which are the red, green and blue channels of a 3D RGB image. What is an elegant way in numpy to create a histogram volume of the input channels?
The operation would be equivalent to
""" assume R, G and B are 3D arrays and output is a 3D array filled with zeros """
for x in x_dim:
for y in y_dim:
for z in z_dim:
output[ R[x][y][z] ][ G[x][y][z] ][ B[x][y][z] ] += 1
This code is too slow for large images. Can numpy improve the efficiency of the above algorithm?
You can do it using numpy.histogramdd but, as you say, the method proposed by @jozzas won't work. What you have to do is flatten each of your three 3D arrays and then combine them into a 2D array of shape (x_dim*y_dim*z_dim, 3), which you pass to histogramdd. The fact that your original data are 3D is a red herring, since the spatial information is irrelevant to calculating the histogram.
Here is an example using random data in the channel cubes:
import numpy
n = 400 # approximate largest cube size that works on my laptop
# Fill channel cubes with random 8-bit integers
r = numpy.random.randint(256, size=(n,n,n)).astype(numpy.uint8)
g = numpy.random.randint(256, size=(n,n,n)).astype(numpy.uint8)
b = numpy.random.randint(256, size=(n,n,n)).astype(numpy.uint8)
# reorder data into a form suitable for histogramming
data = numpy.vstack((r.flat, g.flat, b.flat)).astype(numpy.uint8).T
# Destroy originals to save space
del(r); del(g); del(b)
m = 256 # size of 3d histogram cube
hist, edges = numpy.histogramdd(
    data, bins=m, range=((-0.5, 255.5), (-0.5, 255.5), (-0.5, 255.5))
)
# Check that it worked
assert hist.sum() == n**3, 'Failed to conserve pixels'
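Once the cube is built, looking up the count of voxels with an exact colour is a direct index into hist (a usage note on the code above; the bin edges at -0.5 and 255.5 put each integer value in its own bin):
count = hist[200, 10, 3]  # number of voxels with R=200, G=10, B=3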
This does use a lot more memory than you would expect because histogramdd seems to be using 64-bit floats to do its work, even though we are sending it 8-bit integers.
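A rough back-of-the-envelope calculation of that overhead (just a sketch, assuming histogramdd promotes the samples to float64 internally, as observed above):
n = 400
num_voxels = n ** 3                   # 64 million voxels
samples_uint8 = num_voxels * 3 * 1    # ~0.19 GB for the (n**3, 3) uint8 samples
samples_float64 = num_voxels * 3 * 8  # ~1.5 GB if promoted to float64
hist_cube = 256 ** 3 * 8              # ~0.13 GB for the 256**3 float64 histogram
print(samples_uint8 / 1e9, samples_float64 / 1e9, hist_cube / 1e9)  # sizes in GB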
Assuming 8-bit channels, the 3-tuple of integers (R,G,B) can be thought of as a single number in base 256: R*256**2 + G*256 + B. Thus we can convert the 3 arrays R,G,B into a single array of "color values" and use np.bincount to produce the desired histogram.
import numpy as np
def using_bincount(r, g, b):
    r = r.ravel().astype('int32')
    g = g.ravel().astype('int32')
    b = b.ravel().astype('int32')
    output = np.zeros((base*base*base), dtype='int32')
    result = np.bincount(r*base**2 + g*base + b)
    output[:len(result)] += result
    output = output.reshape((base, base, base))
    return output
def using_histogramdd(r, g, b):
    data = np.vstack((r.flat, g.flat, b.flat)).astype(np.uint8).T
    del(r); del(g); del(b)
    hist, edges = np.histogramdd(
        data, bins=base, range=([0, base], [0, base], [0, base])
    )
    return hist
np.random.seed(0)
n = 200
base = 256
r = np.random.randint(base, size=(n,n,n)).astype(np.uint8)
g = np.random.randint(base, size=(n,n,n)).astype(np.uint8)
b = np.random.randint(base, size=(n,n,n)).astype(np.uint8)
if __name__ == '__main__':
    bhist = using_bincount(r, g, b)
    hhist = using_histogramdd(r, g, b)
    assert np.allclose(bhist, hhist)
These timeit results suggest using_bincount is faster than using_histogramdd, perhaps because histogramdd is built for handling floats and bins which are ranges, while bincount is solely for counting integers.
% python -mtimeit -s'import test' 'test.using_bincount(test.r,test.g,test.b)'
10 loops, best of 3: 1.07 sec per loop
% python -mtimeit -s'import test' 'test.using_histogramdd(test.r,test.g,test.b)'
10 loops, best of 3: 8.42 sec per loop
You can use numpy's histogramdd to compute the histogram of an n-dimensional array. If you don't want a histogram for each 2d slice, be sure to set the bins for that dimension to 1.
To get an overall histogram, you could compute the histograms individually for the R, G and B channels and then take the maximum of the three for each bin.
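A minimal sketch of that per-channel approach (the array names and sizes are illustrative; note this produces three 1D intensity histograms combined by an elementwise maximum, not the joint 3D RGB histogram the question asks for, which is why the other answer says it won't work here):
import numpy as np

R = np.random.randint(256, size=(50, 50, 50))  # hypothetical 8-bit channel cubes
G = np.random.randint(256, size=(50, 50, 50))
B = np.random.randint(256, size=(50, 50, 50))

bins = np.arange(257)  # one bin per intensity value 0..255
hist_r, _ = np.histogram(R, bins=bins)
hist_g, _ = np.histogram(G, bins=bins)
hist_b, _ = np.histogram(B, bins=bins)

overall = np.maximum.reduce([hist_r, hist_g, hist_b])  # elementwise maximum per bin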
I want to vectorise the dot product of several 3x3 matrices (rotation matrices around the x-axis) with several 3x1 vectors. The application is the transformation of points (approx. 500k per array) from one coordinate system to another.
In the example here there are only four of each. Hence, the result should again be four 3x1 vectors, or equivalently each of the components x, y, z should be a vector of length 4. But I cannot get the dimensions figured out: the dot product with tensordot results in a shape of (4,3,4), of which I need the diagonals again:
x,y,z = np.zeros((3,4,1))
rota = np.arange(4* 3 * 3).reshape((4,3, 3))
v= np.arange(4 * 3).reshape((4, 3))
result = np.zeros_like(v, dtype = np.float64)
vec_rotated = np.tensordot(rota,v, axes=([-1],[1]))
for i in range(result.shape[0]):
    result[i,:] = vec_rotated[i,:,i]
x,y,z = result.T
How can I vectorise the complete thing?
Use np.einsum for an efficient solution -
x,y,z = np.einsum('ijk,ik->ji',rota,v)
Alternative with np.matmul/@ operator in Python 3.x -
x,y,z = np.matmul(rota,v[:,:,None])[...,0].T
x,y,z = (rota@v[...,None])[...,0].T
Another way works via a transpose to obtain one component per diagonal:
vec_rotated = vec_rotated.transpose((1,0,2))
x,y,z = np.diag(vec_rotated[0,:,:]),np.diag(vec_rotated[1,:,:]),np.diag(vec_rotated[2,:,:])
I am reading an image captured through opencv and want to map a function to every pixel value in the image. The output is an m x n x 3 numpy array, where m and n are the coordinates of length and width of the image and the three values are the corresponding blue, green, and red values for each pixel.
I first thought to run a nested for loop over each value in the image. However, it takes a long time to run, so I am looking for a more efficient way to iterate over the image quickly.
Here is the nested for loop:
a = list()
for row in img:
    for col in row:
        a.append(np.sqrt(np.prod(col[1:])))
adjusted = np.asarray(a).reshape((img.shape[0], img.shape[1]))
This code works, but I would like to make it run faster. I know vectorization could be an option, but I do not know how to apply it to only part of an array rather than the whole array. To do this, I think I could reshape it to img.reshape((np.prod(img.shape[:2]), 3)) and then loop over each set of three values, but I do not know the correct function/iterator to use.
Also, if opencv/numpy/scipy has another function that does just this, it would be a great help. I'm also open to other options, but I wanted to give some ideas that I had.
In the end, I want to take the input and calculate the geometric mean of the red and green values and create an n x m array of the geometric means. Any help would be appreciated!
This can be vectorized using the axis parameter in np.prod(). Setting axis=-1 will cause the product to only be taken on the last axis.
To perform this product on only the last two channels, index the array to extract only those channels using img[..., 1:]
You can replace your code with the following line:
adjusted = np.sqrt(np.prod(img[..., 1:], axis=-1))
For fun, let's profile these two functions using some simulated data:
import numpy as np
img = np.random.random((100,100,3))
def original_function(img):
    a = []
    for row in img:
        for col in row:
            a.append(np.sqrt(np.prod(col[1:])))
    adjusted = np.asarray(a).reshape((img.shape[0], img.shape[1]))
    return adjusted

def improved_function(img):
    return np.sqrt(np.prod(img[:,:,1:], axis=-1))
>>> %timeit -n 100 original_function(img)
100 loops, best of 3: 55.5 ms per loop
>>> %timeit -n 100 improved_function(img)
100 loops, best of 3: 115 µs per loop
500x improvement in speed! The beauty of numpy vectorization :)
What I am trying to do is take a numpy array representing 3D image data and calculate the hessian matrix for every voxel. My input is an array of shape (Z,X,Y), and I can easily take a slice along z to retrieve a single original image.
gx, gy, gz = np.gradient(imgs)
gxx, gxy, gxz = np.gradient(gx)
gyx, gyy, gyz = np.gradient(gy)
gzx, gzy, gzz = np.gradient(gz)
And I can access the hessian for an individual voxel as follows:
x = 100
y = 100
z = 63
H = [[gxx[z][x][y], gxy[z][x][y], gxz[z][x][y]],
[gyx[z][x][y], gyy[z][x][y], gyz[z][x][y]],
[gzx[z][x][y], gzy[z][x][y], gzz[z][x][y]]]
But this is cumbersome and I can't easily slice the data.
I have tried using reshape as follows
H = H.reshape(Z, X, Y, 3, 3)
But when I test this by retrieving the hessian for a specific voxel, the value returned from the reshaped array is completely different from the one in the original array.
I think I could use zip somehow but I have only been able to find that for making lists of tuples.
Bonus: if there is a faster way to accomplish this, please let me know; I essentially need to calculate the three eigenvalues of the hessian matrix for every voxel in the 3D data set. Calculating the hessian values is really fast, but finding the eigenvalues for a single 2D image slice takes about 20 seconds. Are there any GPU or TensorFlow accelerated libraries for image processing?
We can use a list comprehension to get the hessians -
H_all = np.array([np.gradient(i) for i in np.gradient(imgs)]).transpose(2,3,4,0,1)
Just to give it a bit of explanation : [np.gradient(i) for i in np.gradient(imgs)] loops through the two levels of outputs from np.gradient calls, resulting in a (3 x 3) shaped tensor at the outer two axes. We need these two as the last two axes in the final output. So, we push those at the end with the transpose.
Thus, H_all holds all the hessians and hence we can extract our specific hessian given x,y,z, like so -
x = 100
y = 100
z = 63
H = H_all[z, x, y]
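For the bonus question, one option (not from the original answer, just a hedged sketch) is to feed the stacked Hessians straight to np.linalg.eigvalsh, which broadcasts over the leading axes and exploits the symmetry of the Hessian (the mixed partials agree up to numerical error):
import numpy as np

imgs = np.random.rand(20, 30, 30)  # small synthetic volume, purely for illustration
H_all = np.array([np.gradient(i) for i in np.gradient(imgs)]).transpose(2, 3, 4, 0, 1)

# eigvalsh treats the trailing (3, 3) axes as a stack of symmetric matrices
# and returns ascending eigenvalues, giving an array of shape (Z, X, Y, 3)
eigvals = np.linalg.eigvalsh(H_all)

z, x, y = 10, 15, 15
print(eigvals[z, x, y])  # the three Hessian eigenvalues at one voxel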
I have an image's numpy array of shape (224,224,4). Each pixel has 4 values - r, g, b, alpha. I need to extract the (r,g,b) values for each pixel where its alpha channel is 255.
I thought to first delete all elements in the array where alpha value is <255, and then extract only the first 3 values(r,g,b) of these remaining elements, but doing it in simple loops in Python is very slow. Is there a fast way to do it using numpy operations?
Something similar to this? https://stackoverflow.com/a/21017621/4747268
This should work: arr[arr[:,:,3]==255][:, :3] (the boolean mask collapses the two spatial axes, leaving an (N, 4) array, so only two indices remain).
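A quick self-contained check of that one-liner (the array contents here are synthetic, not from the question):
import numpy as np

arr = np.random.randint(0, 256, size=(224, 224, 4), dtype=np.uint8)  # hypothetical RGBA image

rgb = arr[arr[:, :, 3] == 255][:, :3]

print(rgb.shape)  # (N, 3): one row of (r, g, b) per fully opaque pixel
assert rgb.shape[0] == np.count_nonzero(arr[:, :, 3] == 255)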
something like this?
import numpy as np
x = np.random.random((255,255,4))
y = np.where(x[:,:,3] >0.5)
res = x[y][:,0:3]
where you have to adapt > 0.5 to your needs (e.g. == 255). The result will be a matrix with all matching pixels stacked vertically.
I am trying to downsample a fixed [Mx1] vector into any given [Nx1] size by averaging. I have a dynamic window size that changes every time depending upon the required output array. In some cases I get lucky and the window size is an integer that fits perfectly, and sometimes I get a floating-point number as the window size. But how can I use a floating-point window size to make an [Nx1] vector from a fixed [Mx1] vector?
Below is the code that I have tried:
chunk = 0.35
def fixed_meanVector(vec, chunk):
    size = (vec.size*chunk)  # size of output according to the chunk
    R = (vec.size/size)      # window size to transform array into chunk size
    pad_size = math.ceil(float(vec.size)/R)*R - vec.size
    vec_padded = np.append(vec, np.zeros(pad_size)*np.NaN)
    print "Org Vector: ", vec.size, "output Size: ", size, "Windows Size: ", R, "Padding size", pad_size
    newVec = scipy.nanmean(vec_padded.reshape(-1,R), axis=1)
    print "New Vector shape: ", newVec.shape
    return newVec

print "Word Mean of N values Similarity: ", cosine(fixed_meanVector(vector1, chunk),
                                                   fixed_meanVector(vector2, chunk))
Output:
New Vector shape: (200,)
Org Vector: 400 output Size: 140.0 Windows Size: 2.85714285714 Padding size 0.0
New Vector shape: (200,)
0.46111661289
In the above example, I need to downsample the [Mx1] ([400x1]) vector to [Nx1] ([140x1]) dimensions. So, dynamically, a window size of [2.857x1] should be used to downsample the [Mx1] vector. But in this case I get a [200x1] vector as my output instead of [140x1], because the floating-point window size is floored (2.85 -> 2), so the vector is effectively downsampled with a [2x1] window.
The padding is zero because my window size fits the new [Nx1] dimensions exactly. So, is there any way to use such window sizes to downsample an [Mx1] vector?
It is possible, but not natural, to vectorise that as soon as M % N > 0, because the number of cells used to build each output bin is not constant (between 3 and 4 in your case).
The natural method is to run through the array, adjusting at each bin:
the idea is to fill each bin until it overflows, then cut off the overflow (the carry) and keep it for the next bin. The last carry is always null using int arithmetic.
The code:
import numpy as np

def resized(data, N):
    M = data.size
    res = np.empty(N, data.dtype)
    carry = 0
    m = 0
    for n in range(N):
        sum = carry
        while m*N - n*M < M:
            sum += data[m]
            m += 1
        carry = (m - (n+1)*M/N) * data[m-1]
        sum -= carry
        res[n] = sum*N/M
    return res
Test:
In [5]: resized(np.ones(7),3)
Out[5]: array([ 1., 1., 1.])
In [6]: %timeit resized(rand(400),140)
1000 loops, best of 3: 1.43 ms per loop
It works, but not very quickly. Fortunately, you can speed it up with numba:
from numba import jit
resized2=jit(resized)
In [7]: %timeit resized2(rand(400),140)
1 loops, best of 3: 8.21 µs per loop
Probably faster than any pure numpy solution (here for M = 3*N):
IN [8]: %timeit rand(402).reshape(-1,3).mean(1)
10000 loops, best of 3: 39.2 µs per loop
Note it also works if M < N (upsampling):
In [9]: resized(arange(4.),9)
Out[9]: array([ 0. , 0. , 0.75, 1. , 1.5 , 2. , 2.25, 3. , 3. ])
You're doing it backwards: you build the window for your required decimation, not the other way around.
Nyquist says you can't have bandwidth above fs/2, or you'll get nasty aliasing.
So to solve it you don't just "average", but low-pass filter, so that frequencies above fs/2 are below your acceptable noise floor.
Moving averages are a valid type of low-pass filter; you're just applying them to the wrong array.
The usual chain for arbitrary decimation is:
Upsample -> Lowpass -> Downsample
So, to be able to arbitrarily decimate from N to M samples, the algorithm is:
find the LCM of your current number of samples N and your target number of samples M
upsample by LCM/N
design a filter with stop frequency ws <= M/LCM
downsample by LCM/M
What you call the averaging method is a FIR filter with a rectangular window.
If you use the first zero of that window's frequency response as the stop band, then you can calculate the needed window size K as
2/K <= M/LCM
so you must use a window of size:
K = ceil(2*LCM/M)
Obviously, you don't need to implement all of this yourself. Just design a proper window with ws <= M/LCM and apply it using scipy.signal.resample.
And if the ceil applied to the window size messes up your results, don't use rectangular windows; there are plenty of better filters you can use.
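As a rough sketch of the chain above for the question's sizes (this is not from the answer; it uses scipy.signal.resample_poly, which performs the upsample -> FIR lowpass -> downsample steps internally, and the input vector here is just a placeholder):
import numpy as np
from math import gcd
from scipy.signal import resample_poly

n_current, n_target = 400, 140                          # M and N from the question
lcm = n_current * n_target // gcd(n_current, n_target)  # LCM(400, 140) = 2800
up = lcm // n_current                                   # upsample factor: 7
down = lcm // n_target                                  # downsample factor: 20

vec = np.random.rand(n_current)                         # placeholder for the real data
resampled = resample_poly(vec, up, down)                # length 400 * 7 / 20 = 140
print(resampled.shape)                                  # (140,)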