Finding matching submatrices inside a matrix - python

I have a 100x200 2D array, expressed as a numpy array, consisting of black (0) and white (255) cells; it comes from a bitmap file. I also have 2D shapes (it's easiest to think of them as letters) that are likewise 2D arrays of black and white cells.
I know I can naively iterate through the matrix, but this is going to be a 'hot' portion of my code, so speed is a concern. Is there a fast way to perform this in numpy/scipy?
I looked briefly at Scipy's correlate function. I am not interested in 'fuzzy matches', only exact matches. I also looked at some academic papers but they are above my head.

You can use correlate. You'll need to set your black values to -1 and your white values to 1 (or vice versa) so that you know what value the correlation peak will take, and so that it occurs only where the letter matches exactly.
The following code does what I think you want.
import numpy
from scipy import signal
# Set up the inputs
a = numpy.random.randn(100, 200)
a[a<0] = 0
a[a>0] = 255
b = numpy.random.randn(20, 20)
b[b<0] = 0
b[b>0] = 255
# put b somewhere in a
a[37:37+b.shape[0], 84:84+b.shape[1]] = b
# Now the actual solution...
# Set the black values to -1
a[a==0] = -1
b[b==0] = -1
# and the white values to 1
a[a==255] = 1
b[b==255] = 1
max_peak = numpy.prod(b.shape)
# c will contain max_peak where the overlap is perfect
c = signal.correlate(a, b, 'valid')
overlaps = numpy.where(c == max_peak)
print(overlaps)
This outputs (array([37]), array([84])), the locations of the offsets set in the code.
You will likely find that if the letter size multiplied by the big array size is bigger than roughly N*log(N), where N is the corresponding size of the big array you're searching (for each dimension), then you will get a speed-up by switching to an FFT-based algorithm such as scipy.signal.fftconvolve (bearing in mind that you'll need to flip each axis of one of the inputs, with flipud and fliplr, since a convolution is a correlation with a flipped kernel). The only modification is to the assignment of c:
c = signal.fftconvolve(a, numpy.fliplr(numpy.flipud(b)), 'valid')
Comparing the timings on the sizes above:
In [5]: timeit c = signal.fftconvolve(a, numpy.fliplr(numpy.flipud(b)), 'valid')
100 loops, best of 3: 6.78 ms per loop
In [6]: timeit c = signal.correlate(a, b, 'valid')
10 loops, best of 3: 151 ms per loop

Here is a method you may be able to use, or adapt, depending upon the details of your requirements. It uses ndimage.label and ndimage.find_objects:
1. Label the image using ndimage.label; this finds all the blobs in the array and labels them with integers.
2. Get the slices of these blobs using ndimage.find_objects.
3. Use set intersection to see whether the found blobs correspond to your wanted blobs (a sketch of this step follows the example images below).
Code for steps 1 and 2:
import scipy
from scipy import ndimage
import matplotlib.pyplot as plt
#flatten to ensure greyscale.
im = scipy.misc.imread('letters.png',flatten=1)
objects, number_of_objects = ndimage.label(im)
letters = ndimage.find_objects(objects)
#to save the images for illustrative purposes only:
plt.imsave('ob.png',objects)
for i,j in enumerate(letters):
    plt.imsave('ob'+str(i)+'.png', objects[j])
(Image examples in the original answer: the example input, the labelled array, and the isolated blobs to test against.)
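Step 3 is not shown above. A minimal sketch of one way to do it, assuming the letters and objects arrays from the code above and a hypothetical binary template array holding the letter you want to find (this compares arrays directly rather than via a literal set intersection):
import numpy as np
# find_objects returns slices for labels 1..n, so letters[i] corresponds to label i+1
for i, sl in enumerate(letters):
    blob = (objects[sl] == i + 1)          # binary patch for blob i
    if blob.shape == template.shape and np.array_equal(blob, template > 0):
        print('blob %d matches the template' % i)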

Related

Most efficient way to change the mean and standard deviation of each channel in an RGB image to custom values?

I have a dataset of 512x512 RGB images. I have two 1x3 arrays
mean_value = [0.485,0.456,0.406]
std_value = [0.229,0.224,0.225]
Each element represents the desired mean and standard deviation of one RGB channel.
This is the formula used to compute the new image, where subscript 1 denotes the original image and subscript 2 the desired mean and std: img_2 = mean_2 + (img_1 - mean_1) * (std_2 / std_1).
So far my code contained a for loop to run through all the channels.
for i in range(3):
    norm_tmp = np.subtract(img[:,:,i], np.mean(img[:,:,i]))
    std_mult = np.divide(std_value[i], np.std(img[:,:,i]))
    img[:,:,i] = np.add(mean_value[i], np.multiply(norm_tmp, std_mult))
However, I would like to avoid this and possibly find a numpy implementation that does it without the for loop. I've seen some attempts that use openCV functions as well, so if there's an efficient function, I'd also be happy with that but so far I haven't been able to come to a satisfying solution yet.
You can do this with broadcasting, computing the per-channel statistics over the spatial axes:
import numpy as np
# random data
np.random.seed(42)
img = np.random.rand(512,512,3)
# per-channel statistics over the spatial axes; shape (3,) broadcasts against (512,512,3)
norm_tmp = img - img.mean(axis=(0,1))
std_mul = np.asarray(std_value) / img.std(axis=(0,1))
scaled_img = np.asarray(mean_value) + norm_tmp * std_mul
(Output image omitted; the original answer shows the result with a colour scale on the left.)
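A quick sanity check (a sketch, using the arrays above) confirms that each channel now has the requested statistics:
import numpy as np
# per-channel mean/std over the spatial axes should match the targets
print(np.allclose(scaled_img.mean(axis=(0, 1)), mean_value))   # True
print(np.allclose(scaled_img.std(axis=(0, 1)), std_value))     # True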

Remove connected components below a threshold in a 3-d array

I am working with a 3-D numpy array in Python and want to do post-processing on the CNN output of Brain Tumor Segmentation images. We get a 3-D (208x208x155) numpy array with values of 0/1/2/4 for each pixel. I want to remove connected components below a threshold of 1000 to get better results.
I tried erosion-dilation but don't get good results. Can anyone help me?
Ok, so shrink and grow will, as you realised yourself, not be the way to approach this problem. What you need to do is region labelling, and it seems that Scipy has a method that will let you do that for nd images.
I assume that by threshold less than 1000 you mean sum of the pixel values in the connected components.
Here is an outline of how I would do it.
import numpy as np
from scipy.ndimage import label
segmentation_mask = [...] # This should be your 3D mask (a numpy array).
# Let us create a binary mask.
# It is 0 everywhere `segmentation_mask` is 0 and 1 everywhere else.
binary_mask = segmentation_mask.copy()
binary_mask[binary_mask != 0] = 1
# Now, we perform region labelling. This way, every connected component
# gets its own label value (1 .. num_labels; 0 is the background).
labelled_mask, num_labels = label(binary_mask)
# Let us now remove all the too small regions.
refined_mask = segmentation_mask.copy()
minimum_cc_sum = 1000
for label_id in range(1, num_labels + 1):
    if np.sum(refined_mask[labelled_mask == label_id]) < minimum_cc_sum:
        refined_mask[labelled_mask == label_id] = 0
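As an aside, the loop over labels can also be avoided. A hedged sketch using scipy.ndimage.sum, assuming the same variables as above:
import numpy as np
from scipy import ndimage
# sum of segmentation_mask over every label at once (labels 1..num_labels)
label_ids = np.arange(1, num_labels + 1)
label_sums = ndimage.sum(segmentation_mask, labelled_mask, index=label_ids)
small = label_ids[label_sums < minimum_cc_sum]
refined_mask = segmentation_mask.copy()
refined_mask[np.isin(labelled_mask, small)] = 0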

Optimize 4D Numpy array construction

I have a 4D array data of shape (50,8,2048,256), which represents 50 groups of 8 images of 2048x256 pixels each. times is an array of shape (50,8) giving the time at which each image was taken.
I calculate a 1st order polynomial fit at each pixel across the 8 images in each group, giving me an array of shape (50,2048,256,2). This is essentially a vector plot for each of the 50 groups. The code I use to store the polynomials is:
fits = np.ones((50,2048,256,2))
times = times.reshape(50,8,1).repeat(2048,2).reshape(50,8,2048,1).repeat(256,3)
for group in range(50):
    for xpos in range(2048):
        for ypos in range(256):
            px_data = data[:,:,ypos,xpos]
            fits[group,ypos,xpos,:] = np.polyfit(times[group,:,ypos,xpos],data[group,:,ypos,xpos],1)
Now the challenge is that I want to generate an array new_data of shape (50,12,2048,256) where I use the polynomial coefficients from fits and the times from new_time to generate 50 groups of 12 images.
I figure I can use something like np.polyval(fits, new_time) to generate the images but I'm very confused with how to phrase it. It should be something like:
new_data = np.ones((50,12,2048,256))
for i,(times,fit) in enumerate(zip(new_times,fits)):
    new_data[i] = np.polyval(fit,times)
But I'm getting broadcasting errors. Any assistance would be greatly appreciated!
Update
Ok, so I changed the code a bit so that it does work and do exactly what I want, but it is terribly slow with all these loops (~1 minute per group meaning this would take me almost an hour to run!). Can anyone suggest a way to optimize this to speed it up?
# Generate the polynomials for each pixel in each group
fits = np.ones((50,2048,256,2))
times = np.arange(0,50*8*grptme,grptme).reshape(50,8)
times = times.reshape(50,8,1).repeat(2048,2).reshape(50,8,2048,1).repeat(256,3)
for group in range(50):
    for xpos in range(2048):
        for ypos in range(256):
            fits[group,xpos,ypos] = np.polyfit(times[group,:,xpos,ypos],data[group,:,xpos,ypos],1)
# Create new array of 12 images per group using the polynomials for each pixel
new_data = np.ones((50,12,2048,256))
times = np.arange(0,50*12*grptme,grptme).reshape(50,12)
times = times.reshape(50,12,1).repeat(2048,2).reshape(50,12,2048,1).repeat(256,3)
for group in range(50):
    for img in range(12):
        for xpos in range(2048):
            for ypos in range(256):
                new_data[group,img,xpos,ypos] = np.polynomial.polynomial.polyval(times[group,img,xpos,ypos],fits[group,xpos,ypos])
Regarding speed: I see a lot of loops, which is exactly what numpy lets you avoid. If I understand your problem correctly, you want to fit a first-order polynomial to 50 groups of 8 data points, 2048 * 256 times. The shape of your image plays no role in the fit itself, so my suggestion is to flatten the images, because np.polyfit can fit several sets of y-values against the same x-values at once.
From the doc string
x : array_like, shape (M,)
x-coordinates of the M sample points ``(x[i], y[i])``.
y : array_like, shape (M,) or (M, K)
y-coordinates of the sample points. Several data sets of sample
points sharing the same x-coordinates can be fitted at once by
passing in a 2D-array that contains one dataset per column.
So I would go for
# Generate the polynomials for each pixel in each group
fits = np.ones((50,2048*256,2))
times = np.arange(0,50*8*grptme,grptme).reshape(50,8)
data_fit = data.reshape((50,8,2048*256))
for group in range(50):
    fits[group] = np.polyfit(times[group],data_fit[group],1).T
fits_original_shape = fits.reshape((50,2048,256,2))
The transpose is necessary because you want the parameters in the last index, while np.polyfit returns the coefficient index first and the data sets second.
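To make the transpose concrete, here is a small illustrative shape check (10 pixel columns stand in for the 2048*256):
import numpy as np
t = np.arange(8.0)             # times[group], shape (8,)
y = np.random.rand(8, 10)      # data_fit[group] with 10 pixels, shape (8, 10)
coeffs = np.polyfit(t, y, 1)   # shape (2, 10): coefficient index first, data sets second
print(coeffs.T.shape)          # (10, 2): data sets first, parameters last, as stored in fits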
Then, evaluating the fits is basically the same trick again:
# Create new array of 12 images per group using the polynomials for each pixel
new_data = np.zeros((50,12,2048*256))
times = np.arange(0,50*12*grptme,grptme).reshape(50,12)
#times = times.reshape(50,12,1).repeat(2048,2).reshape(50,12,2048,1).repeat(256,3)
for group in range(50):
    # polyfit returns coefficients highest power first; polyval expects lowest first, hence [::-1]
    new_data[group] = np.polynomial.polynomial.polyval(times[group],fits[group].T[::-1]).T
new_data_original_shape = new_data.reshape((50,12,2048,256))
The two transposes (and the reversal of the coefficient axis) are again needed because of how polyval orders the coefficients and data sets, so that the result matches the shapes of your arrays.
One could probably also avoid the loop over the groups with some more advanced numpy magic, but with this the code already runs much faster (see the sketch below).
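For completeness, here is a hedged sketch of what that numpy magic could look like: a closed-form least-squares line fit broadcast over all groups and pixels at once, assuming data of shape (50, 8, 2048, 256), times of shape (50, 8) and a new_times array of shape (50, 12). It trades the group loop for larger temporary arrays, so memory may become the limiting factor:
import numpy as np
t = times[:, :, None, None]                       # (50, 8, 1, 1)
t_mean = t.mean(axis=1, keepdims=True)            # (50, 1, 1, 1)
y_mean = data.mean(axis=1, keepdims=True)         # (50, 1, 2048, 256)
# slope = cov(t, y) / var(t), computed per group and per pixel
slope = ((t - t_mean) * (data - y_mean)).sum(axis=1) / ((t - t_mean) ** 2).sum(axis=1)
intercept = y_mean[:, 0] - slope * t_mean[:, 0]   # both (50, 2048, 256)
# evaluate at the 12 new times per group
new_t = new_times[:, :, None, None]               # (50, 12, 1, 1)
new_data = intercept[:, None] + slope[:, None] * new_t   # (50, 12, 2048, 256)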
I hope it helps!

Efficient processing of pixel + neighborhood in numpy image

I have a range image of a scene. I traverse the image and calculate the average change in depth under the detection window. The detection windows changes size based on the average depth of the surrounding pixels of the current location. I accumulate the average change to produce a simple response image.
Most of the time is spent in the for loop; it takes about 40+ s for a 512x52 image on my machine. I was hoping for some speed-up. Is there a more efficient or faster way to traverse the image? Is there a better pythonic/numpy/scipy way to visit each pixel? Or should I go learn Cython?
EDIT: I have reduced the running time to about 18 s by using scipy.misc.imread() instead of skimage.io.imread(). I'm not sure what the difference is; I will try to investigate.
Here is a simplified version of the code:
import matplotlib.pylab as plt
import numpy as np
from skimage.io import imread
from skimage.transform import integral_image, integrate
import time
def intersect(a, b):
    '''Determine the intersection of two rectangles'''
    rect = (0,0,0,0)
    r0 = max(a[0],b[0])
    c0 = max(a[1],b[1])
    r1 = min(a[2],b[2])
    c1 = min(a[3],b[3])
    # Do we have a valid intersection?
    if r1 > r0 and c1 > c0:
        rect = (r0,c0,r1,c1)
    return rect
# Setup data
depth_src = imread("test.jpg", as_grey=True)
depth_intg = integral_image(depth_src) # integrate to find sum depth in region
depth_pts = integral_image(depth_src > 0) # integrate to find num points which have depth
boundary = (0,0,depth_src.shape[0]-1,depth_src.shape[1]-1) # rectangle to intersect with
# Image to accumulate response
out_img = np.zeros(depth_src.shape)
# Average dimensions of bbox/detection window per unit length of depth
model = (0.602,2.044) # width, height
start_time = time.time()
for (r,c), junk in np.ndenumerate(depth_src):
    # Find points around current pixel
    r0, c0, r1, c1 = intersect((r-1, c-1, r+1, c+1), boundary)
    # Calculate average of depth of points around current pixel
    scale = integrate(depth_intg, r0, c0, r1, c1) * 255 / 9.0
    # Based on average depth, create the detection window
    r0 = r - (model[0] * scale/2)
    c0 = c - (model[1] * scale/2)
    r1 = r + (model[0] * scale/2)
    c1 = c + (model[1] * scale/2)
    # Used scale optimised detection window to extract features
    r0, c0, r1, c1 = intersect((r0,c0,r1,c1), boundary)
    depth_count = integrate(depth_pts,r0,c0,r1,c1)
    if depth_count:
        depth_sum = integrate(depth_intg,r0,c0,r1,c1)
        avg_change = depth_sum / depth_count
        # Accumulate response
        out_img[r0:r1,c0:c1] += avg_change
print(time.time() - start_time, " seconds")
plt.imshow(out_img)
plt.gray()
plt.show()
Michael, interesting question. It seems that the main performance problem is that each pixel in the image has two integrate() calls computed on it, one over a 3x3 window and one over a window whose size is not known in advance. Calculating individual integrals in this way is extremely inefficient, regardless of which numpy functions you use; it's an algorithmic issue, not an implementation issue.
Consider an image of size N x N. You can calculate all integrals of any size K x K in that image using only approximately 4*N*N operations, not (as one might naively expect) N*N*K*K. The way to do that is to first calculate an image of sliding sums over a window of K in each row, and then sliding sums over the result in each column. Updating each sliding sum to move to the next pixel requires only adding the newest pixel of the current window and subtracting the oldest pixel of the previous window, hence two operations per pixel regardless of window size. We have to do that twice (for rows and columns), therefore 4 operations per pixel.
I am not sure if there is a sliding window sum built into numpy, but this answer suggests a couple of ways to do it, using stride tricks: https://stackoverflow.com/a/12713297/1828289. You can certainly accomplish the same with one loop over columns and one loop over rows (taking slices to extract a row/column).
Example:
import numpy
# img is a 2D ndarray
# K is the size of sums to calculate using sliding window
row_sums = numpy.zeros_like(img)
for i in range(img.shape[0]):
    if i >= K:
        # drop the row that left the window, add the row that entered it
        row_sums[i,:] = row_sums[i-1,:] - img[i-K,:] + img[i,:]
    elif i > 0:
        # window not full yet: just accumulate
        row_sums[i,:] = row_sums[i-1,:] + img[i,:]
    else:  # i == 0
        row_sums[i,:] = img[i,:]
col_sums = numpy.zeros_like(img)
for j in range(img.shape[1]):
    if j >= K:
        col_sums[:,j] = col_sums[:,j-1] - row_sums[:,j-K] + row_sums[:,j]
    elif j > 0:
        col_sums[:,j] = col_sums[:,j-1] + row_sums[:,j]
    else:  # j == 0
        col_sums[:,j] = row_sums[:,j]
# here col_sums[i,j] equals numpy.sum(img[i-K+1:i+1, j-K+1:j+1]) for i >= K-1 and j >= K-1
# the first K-1 rows and columns of col_sums contain partial sums and can be ignored
How do you best apply that to your case? I think you might want to pre-compute the integrals for 3x3 (average depth) and also for several larger sizes, and use the value of the 3x3 to select one of the larger sizes for the detection window (assuming I understand the intent of your algorithm). The range of larger sizes you need might be limited, or artificially limiting it might still work acceptably well, just pick the nearest size. Calculating all integrals together using sliding sums is so much more efficient that I am almost certain it is worth calculating them for a lot of sizes you would never use at a particular pixel, especially if some of the sizes are large.
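As a sketch of that idea (hedged: it builds an integral image with np.cumsum instead of the explicit loops above, and the candidate window sizes are purely illustrative):
import numpy as np
def sliding_sums(img, K):
    # K x K window sums at every valid position, computed from an integral image
    ii = np.pad(np.cumsum(np.cumsum(img, axis=0), axis=1),
                ((1, 0), (1, 0)), mode='constant')
    return ii[K:, K:] - ii[:-K, K:] - ii[K:, :-K] + ii[:-K, :-K]
# precompute the 3x3 average depth and sums for a few candidate window sizes,
# then pick the nearest candidate per pixel instead of calling integrate() each time
avg3 = sliding_sums(depth_src, 3) / 9.0
candidate_sizes = [5, 9, 15, 21]   # illustrative only
precomputed = {K: sliding_sums(depth_src, K) for K in candidate_sizes}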
P.S. This is a minor addition, but you may want to avoid calling intersect() for every pixel: either (a) only process pixels which are farther from the edge than the max integral size, or (b) add margins to the image of the max integral size on all sides, filling the margins with either zeros or nans, or (c) (best approach) use slices to take care of this automatically: a slice index outside the boundary of an ndarray is automatically limited to the boundary, except of course negative indexes are wrapped around.
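A tiny demonstration of option (c), slice clipping:
import numpy as np
a = np.arange(10)
print(a[7:15])   # [7 8 9] -- an out-of-range stop is clipped to the end of the array
print(a[-3:5])   # []      -- but a negative start wraps around (to index 7 here), so guard against it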
EDIT: added example of sliding window sums

Minimum Distance Algorithm using GDAL and Python

I'm trying to implement the Minimum Distance Algorithm for image classification using GDAL and Python. After calculating the mean pixel-value of the sample areas and storing them into a list of arrays ("sample_array"), I read the image into an array called "values". With the following code I loop through this array:
values = valBD.ReadAsArray()
# loop through pixel columns
for X in range(0, XSize):
    # loop through pixel lines
    for Y in range(0, YSize):
        # initialize variables
        minDist = 9999
        # read the pixel value once, before it is overwritten below
        iPixelVal = values[Y, X]
        # get minimum distance
        for iSample in range(0, sample_count):
            # dist = calc_distance(values[jPixel, iPixel], sample_array[iSample])
            # computing minimum distance
            mean = sample_array[iSample]
            dist = math.sqrt((iPixelVal - mean) * (iPixelVal - mean))  # only for testing
            if dist < minDist:
                minDist = dist
                values[Y, X] = iSample
classBD.WriteArray(values, xoff=0, yoff=0)
This procedure takes very long for big images, so I want to ask whether somebody knows a faster method. I don't know much about the access speed of different variables in Python. Or maybe someone knows a library I could use.
Thanks in advance,
Mario
You should definitely be using NumPy. I work with some pretty large raster datasets and NumPy burns through them. On my machine, with the code below there's no noticeable delay for a 1000 x 1000 array. An explanation of how this works follows the code.
import numpy as np
from scipy.spatial.distance import cdist
# some starter data
dim = (1000,1000)
values = np.random.randint(0, 10, dim)
# cdist will want 'samples' as a 2-d array
samples = np.array([1, 2, 3]).reshape(-1, 1)
# this could be a one-liner
# 'values' must have the same number of columns as 'samples'
mins = cdist(values.reshape(-1, 1), samples)
outvalues = mins.argmin(axis=1).reshape(dim)
cdist() calculates the "distance" from each element in values to each of the elements in samples. This generates a 1,000,000 x 3 array, where each row n holds the distance from pixel n in the original array to each of the sample values [1, 2, 3]. argmin(axis=1) gives you the index of the minimum value along each row, which is what you want. A quick reshape gives you the rectangular format you'd expect for an image.
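Applied to the question's setup, a hedged sketch (assuming values from valBD.ReadAsArray() and sample_array as the list of per-class mean values; the output dtype may need casting for classBD.WriteArray):
import numpy as np
from scipy.spatial.distance import cdist
samples = np.asarray(sample_array, dtype=float).reshape(-1, 1)
dists = cdist(values.reshape(-1, 1).astype(float), samples)
classified = dists.argmin(axis=1).reshape(values.shape)
classBD.WriteArray(classified, xoff=0, yoff=0)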
Agree with Thomas K: use PIL, or else write a C-function and wrap it using e.g. ctypes, or at very least use some numPy matrix operations.
Or else use pypy on your existing code (JIT-compiled code can be 100x faster, on image code). Try pypy and tell us what speedup you got.
Bottom line: never do per-pixel work like this natively in CPython; the interpreting and memory-management overhead will kill you.
