OpenCV easy way to call fillConvexPoly() on 3d area? - python

I have a grayscale 3D image represented as a numpy array. Dimensions are height x width x depth. Given a square = [p1,p2,p3,p4] I want to call fillConvexPoly(square, 100) on every depth layer of the array. I know I can just loop through the depth and call the function a few hundred times, but I feel like doing so fails to take advantage of the fact that I am working with a numpy array. Is there a faster way to accomplish this?

All you need to do is index the rectangle in the first two dimensions; that selects the rectangle in every channel along the third dimension. Then you can simply fill with whatever values you like. For example, I'll create a stack of 100 random 5x5 images and assign 1 to every interior pixel, leaving only the border with the random values it started with. Only the first image is printed below, but every layer looks the same: ones on the inside, original values around the edge.
>>> import numpy as np
>>> imgs = np.random.rand(5, 5, 100)
>>> imgs[:, :, 0]
array([[ 0.17818592,  0.7427181 ,  0.83685674,  0.27231489,  0.037665  ],
       [ 0.61994589,  0.64282216,  0.20543185,  0.65049771,  0.52236919],
       [ 0.78862153,  0.86612292,  0.48208187,  0.1233576 ,  0.18561781],
       [ 0.09628382,  0.08812067,  0.50085837,  0.92871428,  0.28052041],
       [ 0.87715376,  0.38269949,  0.76995739,  0.83079243,  0.90698188]])
>>> imgs[1:4, 1:4] = 1
>>> imgs[:, :, 0]
array([[ 0.17818592,  0.7427181 ,  0.83685674,  0.27231489,  0.037665  ],
       [ 0.61994589,  1.        ,  1.        ,  1.        ,  0.52236919],
       [ 0.78862153,  1.        ,  1.        ,  1.        ,  0.18561781],
       [ 0.09628382,  1.        ,  1.        ,  1.        ,  0.28052041],
       [ 0.87715376,  0.38269949,  0.76995739,  0.83079243,  0.90698188]])
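If the quadrilateral is not axis-aligned, a related trick is to rasterize the polygon once into a 2D mask with fillConvexPoly and then broadcast that mask over the depth axis, so OpenCV is called only once instead of once per layer. A minimal sketch (the volume shape and corner points are just assumed examples; fillConvexPoly expects points in (x, y) = (column, row) order):
import numpy as np
import cv2
# assumed example volume: height x width x depth
vol = np.random.rand(100, 120, 50).astype(np.float32)
# an arbitrary convex quadrilateral, points given as (x, y)
square = np.array([[10, 20], [60, 25], [55, 70], [12, 65]], dtype=np.int32)
# rasterize the polygon once into a 2D mask...
mask = np.zeros(vol.shape[:2], dtype=np.uint8)
cv2.fillConvexPoly(mask, square, 1)
# ...then one boolean-indexed assignment fills every depth layer at once
vol[mask.astype(bool)] = 100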

Related

Avoid using for loop. Python 3

I have an array of shape (3,2):
import numpy as np
arr = np.array([[0.,0.],[0.25,-0.125],[0.5,-0.125]])
I am trying to build a matrix (matrix) of shape (6,2) containing, stacked row-wise, the outer product of each row i of arr with itself. At the moment I am using a for loop such as:
size = np.shape(arr)
matrix = np.zeros((size[0]*size[1], size[1]))
for i in range(np.shape(arr)[0]):
    prod = np.outer(arr[i], arr[i].T)
    matrix[size[1]*i:size[1]+size[1]*i, :] = prod
Resulting:
matrix = array([[ 0.      ,  0.      ],
                [ 0.      ,  0.      ],
                [ 0.0625  , -0.03125 ],
                [-0.03125 ,  0.015625],
                [ 0.25    , -0.0625  ],
                [-0.0625  ,  0.015625]])
Is there any way to build this matrix without using a for loop (e.g. broadcasting)?
Extend the array to 3D with None/np.newaxis so that the first axis stays aligned while the second axis is multiplied pair-wise, then perform the multiplication with broadcasting and reshape back to 2D:
matrix = (arr[:,None,:]*arr[:,:,None]).reshape(-1,arr.shape[1])
We can also use np.einsum:
matrix = np.einsum('ij,ik->ijk',arr,arr).reshape(-1,arr.shape[1])
The einsum string representation might be more intuitive, as it lets us visualize three things:
Axes that are aligned (axis=0 here).
Axes that are getting summed up (none here).
Axes that are kept, i.e. element-wise multiplied (axis=1 here).
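As a quick check, here is a sketch (using the arr from the question) confirming that both one-liners reproduce the loop result:
import numpy as np
arr = np.array([[0., 0.], [0.25, -0.125], [0.5, -0.125]])
# loop version from the question
loop = np.zeros((arr.shape[0]*arr.shape[1], arr.shape[1]))
for i in range(arr.shape[0]):
    loop[arr.shape[1]*i:arr.shape[1]*(i+1), :] = np.outer(arr[i], arr[i])
# broadcasting and einsum versions
broadcast = (arr[:, None, :]*arr[:, :, None]).reshape(-1, arr.shape[1])
einsum = np.einsum('ij,ik->ijk', arr, arr).reshape(-1, arr.shape[1])
print(np.allclose(loop, broadcast), np.allclose(loop, einsum))  # True True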

How to blur 3D array of points, while maintaining their original values? (Python)

I have a sparse 3D array of values. I am trying to turn each "point" into a fuzzy "sphere", by applying a Gaussian filter to the array.
I would like the original value at the point (x,y,z) to remain the same. I just want to create falloff values around this point... But applying the Gaussian filter changes the original (x,y,z) value as well.
I am currently doing this:
dataCube = scipy.ndimage.filters.gaussian_filter(dataCube, 3, truncate=8)
Is there a way for me to normalize this, or do something so that my original values are still in this new dataCube? I am not necessarily tied to using a Gaussian filter, if that is not the best approach.
You can do this using a convolution with a kernel that has 1 as its central value, and a width smaller than the spacing between your data points.
1-d example:
import numpy as np
import scipy.signal
data = np.array([0,0,0,0,0,5,0,0,0,0,0])
kernel = np.array([0.5,1,0.5])
scipy.signal.convolve(data, kernel, mode="same")
gives
array([ 0. , 0. , 0. , 0. , 2.5, 5. , 2.5, 0. , 0. , 0. , 0. ])
Note that fftconvolve might be much faster for large arrays. You also have to specify what should happen at the boundaries of your array.
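A minimal sketch of the fftconvolve variant on the same 1-d data (it works in floating point, so the result is numerically close rather than bit-identical):
import numpy as np
from scipy import signal
data = np.array([0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0], dtype=float)
kernel = np.array([0.5, 1.0, 0.5])
# same call pattern as convolve, but FFT-based; pays off on large arrays
smoothed = signal.fftconvolve(data, kernel, mode="same")
print(smoothed)  # approximately [0 0 0 0 2.5 5 2.5 0 0 0 0]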
Update: 3-d example
import numpy as np
from scipy import signal
# first build the smoothing kernel
sigma = 1.0 # width of kernel
x = np.arange(-3,4,1) # coordinate arrays -- make sure they contain 0!
y = np.arange(-3,4,1)
z = np.arange(-3,4,1)
xx, yy, zz = np.meshgrid(x,y,z)
kernel = np.exp(-(xx**2 + yy**2 + zz**2)/(2*sigma**2))
# apply to sample data
data = np.zeros((11,11,11))
data[5,5,5] = 5.
filtered = signal.convolve(data, kernel, mode="same")
# check output
print(filtered[:, 5, 5])
gives
[ 0.          0.          0.05554498  0.67667642  3.0326533   5.
  3.0326533   0.67667642  0.05554498  0.          0.        ]

Probability functions convolution in python

There are N distributions which take on integer values 0, ... with associated probabilities. As an example, assume 3 variables given as [value, prob] pairs:
import numpy as np
x = np.array([ [0,0.3],[1,0.2],[3,0.5] ])
y = np.array([ [10,0.2],[11,0.4],[13,0.1],[14,0.3] ])
z = np.array([ [21,0.3],[23,0.7] ])
As there are N variables, I convolve first x+y, then add z, and so on.
Unfortunately numpy.convolve() takes 1-d arrays as input, so it does not fit this case directly. I could pad the variables so they all take values 0,1,2,...,23 (with Pr=0 where a value does not occur)... but I feel there must be a much better solution.
Does anyone have a suggestion for making it more efficient? Thanks in advance.
I don't see a built-in method for this in Scipy; there's a way to define custom discrete random variables, but those don't support addition. Here is an approach using pandas, assuming import pandas as pd and x, y, z as in your example:
values = np.add.outer(x[:,0], y[:,0]).flatten()
probs = np.multiply.outer(x[:,1], y[:,1]).flatten()
df = pd.DataFrame({'values': values, 'probs': probs})
conv = df.groupby('values').sum()
result = conv.reset_index().values
The output is
array([[ 10.  ,   0.06],
       [ 11.  ,   0.16],
       [ 12.  ,   0.08],
       [ 13.  ,   0.13],
       [ 14.  ,   0.31],
       [ 15.  ,   0.06],
       [ 16.  ,   0.05],
       [ 17.  ,   0.15]])
With more than two variables, you don't have to go back and forth between numpy and pandas: the additional variables can be included at the beginning.
values = np.add.outer(np.add.outer(x[:,0], y[:,0]), z[:,0]).flatten()
probs = np.multiply.outer(np.multiply.outer(x[:,1], y[:,1]), z[:,1]).flatten()
Aside: it would be better to keep values and probabilities in separate numpy arrays, if they have different intrinsic data types (integers vs reals).
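For reference, the same group-and-sum step can be done in plain numpy, keeping values and probabilities in separate arrays as suggested above; a sketch for x+y (np.unique labels the duplicate sums and np.bincount adds up their probabilities):
import numpy as np
x_vals, x_probs = np.array([0, 1, 3]), np.array([0.3, 0.2, 0.5])
y_vals, y_probs = np.array([10, 11, 13, 14]), np.array([0.2, 0.4, 0.1, 0.3])
# all pairwise sums and the corresponding product probabilities
vals = np.add.outer(x_vals, y_vals).ravel()
probs = np.multiply.outer(x_probs, y_probs).ravel()
# group duplicate sums and add their probabilities
uniq, inverse = np.unique(vals, return_inverse=True)
summed = np.bincount(inverse, weights=probs)
print(np.column_stack((uniq, summed)))  # same table as the pandas result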

Image filtering in Python (image normalization)

I want to write code that does image filtering. I use a simple 3x3 kernel and then the scipy.ndimage.filters.convolve() function. After filtering, the range of the values is -1.27 to 1.12. How do I normalize the data after filtering? Do I need to clip values (set values less than zero to zero, and values greater than 1 to 1), or use linear normalization? Is it OK if values after filtering fall outside the range [0,1]?
>>> import numpy as np
>>> x = np.random.randn(10)
>>> x
array([-0.15827641, -0.90237627,  0.74738448,  0.80802178,  0.48720684,
        0.56213483, -0.34239788,  1.75621007,  0.63168393,  0.99192999])
You could clip values outside your range, although you would lose that information:
>>> np.clip(x,0,1)
array([ 0.        ,  0.        ,  0.74738448,  0.80802178,  0.48720684,
        0.56213483,  0.        ,  1.        ,  0.63168393,  0.99192999])
To preserve the scaling, you can linearly renormalise into the range 0 to 1:
>>> (x - np.min(x))/(np.max(x) - np.min(x))
array([ 0.27988553,  0.        ,  0.6205406 ,  0.64334869,  0.52267744,
        0.55086084,  0.21063013,  1.        ,  0.57702102,  0.71252388])
Is it OK if values after filtering are greater than range [0,1]?
This really depends on your use case for the filtered image.
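For completeness, a minimal end-to-end sketch putting both options side by side (the 3x3 sharpening kernel and the random image are just assumed stand-ins):
import numpy as np
from scipy import ndimage
img = np.random.rand(64, 64)  # stand-in for a grayscale image in [0, 1]
kernel = np.array([[ 0., -1.,  0.],
                   [-1.,  5., -1.],
                   [ 0., -1.,  0.]])  # simple 3x3 sharpening kernel
filtered = ndimage.convolve(img, kernel, mode="reflect")
clipped = np.clip(filtered, 0, 1)  # option 1: clip, discards out-of-range detail
rescaled = (filtered - filtered.min()) / (filtered.max() - filtered.min())  # option 2: rescale to [0, 1]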

Saving an array inside a column of a matrix in numpy shape error

Let's say I do some calculation and get a 3 by 3 matrix each time through a loop. Assume that each time, I want to save such a matrix in a column of a bigger matrix whose number of rows equals 9 (the total number of elements in the smaller matrix). First I reshape the smaller matrix, then try to save it into one column of the big matrix. A simple code for only one column looks something like this:
import numpy as np
Big = np.zeros((9,3))
Small = np.random.rand(3,3)
Big[:,0]= np.reshape(Small,(9,1))
print(Big)
But python throws me the following error:
Big[:,0]= np.reshape(Small,(9,1))
ValueError: could not broadcast input array from shape (9,1) into shape (9)
I also tried to use flatten, but that didn't work either. Is there any way to create a shape (9,) array from the small matrix, or any other way to handle this error?
Your help is greatly appreciated!
try:
import numpy as np
Big = np.zeros((9,3))
Small = np.random.rand(3,3)
Big[:,0]= np.reshape(Small,(9,))
print(Big)
or:
import numpy as np
Big = np.zeros((9,3))
Small = np.random.rand(3,3)
Big[:,0]= Small.reshape((9,1))
print(Big)
or:
import numpy as np
Big = np.zeros((9,3))
Small = np.random.rand(3,3)
Big[:,[0]]= np.reshape(Small,(9,1))
print(Big)
Each case gets me:
[[ 0.81527817  0.          0.        ]
 [ 0.4018887   0.          0.        ]
 [ 0.55423212  0.          0.        ]
 [ 0.18543227  0.          0.        ]
 [ 0.3069444   0.          0.        ]
 [ 0.72315677  0.          0.        ]
 [ 0.81592963  0.          0.        ]
 [ 0.63026719  0.          0.        ]
 [ 0.22529578  0.          0.        ]]
Explanation
The shape of the slice Big[:, 0] you are assigning to is (9,), one-dimensional. The array you are assigning from has shape (9, 1), two-dimensional. You need to reconcile this either by making the source one-dimensional (np.reshape(Small, (9,)) instead of np.reshape(Small, (9, 1))), or by making the target two-dimensional (Big[:, [0]] instead of Big[:, 0]). The exception is the version that assigns Big[:, 0] = Small.reshape((9, 1)); there numpy evidently drops the trailing singleton dimension during the assignment.
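As an aside, since the question mentions trying flatten: a quick sketch showing that ravel and flatten also work here, because both return a one-dimensional (9,) array that broadcasts cleanly into the column:
import numpy as np
Big = np.zeros((9, 3))
Small = np.random.rand(3, 3)
Big[:, 0] = Small.ravel()    # view where possible
Big[:, 1] = Small.flatten()  # always a copy
print(Big)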
