Nested for loop to numpy convolve - python

How can I improve the speed of this function?
def foo(mri_data, radius):
    mask = mri_data.copy()
    ny = len(mri_data[0,:])
    nx = len(mri_data[:])
    for y in xrange(0, ny):
        for x in xrange(0, nx):
            if (mri_data[x-radius:x+radius, y-radius:y+radius] != 1.0).all():
                mask[x,y] = 0.0
    return mask.copy()
It takes in image slices in the form of a numpy array, iterates through each pixel, and tests a bounding box around that pixel. If no value in the box is equal to 1, then we discard that pixel by setting it to 0.
I've been told I can use numpy.convolve but I am uncertain how this relates.
EDIT: The image's values are binary-ranged, so the lowest value is 0.0 and the max is 1.0, with values in between, e.g. 0.767.

One of the cases where you can abuse convolution. I wouldn't use it, but the boundaries are otherwise tedious...
import numpy as np
from scipy.ndimage import convolve

mask = mri_data.copy()
# cast to int so the convolution counts pixels rather than returning booleans
not_one = (mri_data != 1.0).astype(int) # are you sure you want to compare floats for exact equality like that?!
conv = convolve(not_one, np.ones((2*radius, 2*radius)))
all_not_one = (conv == (2*radius)**2)
mask[all_not_one] = 0
Should do the same thing really...
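Wrapped up as a drop-in replacement (a sketch; foo_conv is my own name, and while interior pixels should match the loop version, behaviour right at the borders depends on the convolution's boundary handling):
import numpy as np
from scipy.ndimage import convolve

def foo_conv(mri_data, radius):
    # count, for each pixel, how many pixels in its (2r x 2r) box are != 1.0
    not_one = (mri_data != 1.0).astype(int)
    counts = convolve(not_one, np.ones((2*radius, 2*radius)))
    mask = mri_data.copy()
    # zero a pixel only when *every* pixel in its box differed from 1.0
    mask[counts == (2*radius)**2] = 0.0
    return mask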

What you're doing is called a binary_dilation, but there is a small bug in your code: you get negative indices when x or y is smaller than radius. Those negative numbers are interpreted under numpy's indexing rules (they count from the end of the axis), which is not what you want here, so two edges of your image come out wrong.
Here is some code that uses binary dilation to accomplish the same thing and fixes the bug mentioned above.
import numpy as np
from scipy.ndimage import binary_dilation

def foo(mri_data, radius):
    structure = np.ones((2*radius, 2*radius))
    # I set the origin here to match your code
    mask = binary_dilation(mri_data == 1, structure, origin=-1)
    return np.where(mask, mri_data, 0)
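A minimal usage sketch (made-up data, not the poster's MRI slices):
import numpy as np

rng = np.random.default_rng(0)
img = np.where(rng.random((6, 6)) > 0.7, 1.0, 0.4)  # a few exact 1.0 pixels

print(foo(img, radius=1))  # pixels with no 1.0 anywhere in their box become 0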

Related

Labeling a matrix

I've been trying to write code that labels a binary matrix, i.e. a function that finds all connected components in an image and assigns a unique label to all points in the same component. The problem is that I found a function, imbinarize(), that creates a binary image, and I want to know how to do the same thing without that function (because I don't know how it works).
EDIT: I realized that binarizing the image isn't needed, because it is assumed that all images passed as arguments are already binarized. So I changed my code. It happens that the code is not working, and I think the problem is in one of the cycles, but I can't understand why.
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt

def connected_components(image):
    M = image * 1
    # write your code here
    (row, column) = M.shape #shape of the matrix
    #Second step
    L = 2
    #Third step
    q = []
    #Fourth step
    #Method to look for ones starting on the pixel (0, 0) and going from left to right and top-down
    for i in np.arange(row):
        for j in np.arange(column):
            if M[i][j] == 1:
                M[i][j] = L
                q.append(M[i-1][j])
                q.append(M[i+1][j])
                q.append(M[i][j-1])
                q.append(M[i][j+1])
                #Fifth step
                while len(q) != 0: #same as saying 'while q is not empty'
                    if q[0] == 1:
                        M[0] = L
                        q.append(M[i-1][j])
                        q.append(M[i+1][j])
                        q.append(M[i][j-1])
                        q.append(M[i][j+1])
                #Sixth step
                L = L + 1
    #Seventh step: goes to the beginning of the for-cycle
    return labels
imbinarize in its most simple form thresholds an image so that any intensity beyond a certain threshold is assigned a binary 1 / True, and a binary 0 / False otherwise. It is actually more sophisticated than this, as it uses some image morphology for noise removal and adaptive thresholds to find the optimal value separating foreground from background. Since I see this post as being more about validating the connected-components algorithm you've created, I'm going to assume the basic thresholding is fine and consider the actual imbinarize algorithm out of scope for your needs.
Once you read in the image with matplotlib, it is most likely going to be three channels so you'll need to convert the image into grayscale first, then threshold after. We can make this more adaptive based on the number of channels that exist.
Therefore, let's define a function to threshold the image for us. You'll need to play around with the threshold until you get good results. Also take note that plt.imread reads in float32 values, so the threshold will be defined between [0-1]. We can try 0.5 as a good start:
def binarize(im, threshold=0.5):
    if len(im.shape) == 3:
        gray = 0.299*im[...,0] + 0.587*im[...,1] + 0.114*im[...,2]
    else:
        gray = im
    return (gray >= threshold).astype(np.uint8)
This will check if the input image is RGB. If it is, it converts to grayscale using the ITU-R BT.601 luma weights (0.299, 0.587, 0.114). Once we have the grayscale image, simply return a new image where everything that meets the threshold and beyond gets assigned an integer 1 and everything else integer 0. I've converted the result to an integer type because your connected-components algorithm assumes a 0/1 labelling.
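A quick usage sketch (with a made-up RGB array standing in for plt.imread output):
import numpy as np

rgb = np.random.rand(4, 4, 3).astype(np.float32)  # hypothetical image in [0, 1]
M = binarize(rgb)
print(M)  # a 4 x 4 matrix of 0s and 1s, ready for connected_components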
You can then replace your code with:
#First step
Image = plt.imread(image) #reads the image given as the argument
M = binarize(Image) #binarize() converts the image to a binary matrix
(row, column) = M.shape #shape of the matrix
Minor Note
In your test code, you supply a test image directly, whereas your actual code performs an imread operation. imread expects a string, so specifying the actual array will produce an error. If you want to accommodate both an array and a string, check whether the input is a string or an array:
if type(image) is str:
    Image = plt.imread(image) #reads the image given as the argument
else:
    Image = image
M = binarize(Image) #binarize() converts the image to a binary matrix
(row, column) = M.shape #shape of the matrix

Python IndexError: Out of bounds

I've created a class to which I pass an image (a 2D array, 1280x720). It's supposed to iterate through the image, looking for the highest value:
import numpy as np

class myCv:
    def maxIntLoc(self,image):
        intensity = image[0,0] #columns, rows
        coordinates = (0,0)
        for y in xrange(0,len(image)):
            for x in xrange(0,len(image[0])):
                if np.all(image[x,y] > intensity):
                    intensity = image[x,y]
                    coordinates = (x,y)
        return (intensity,coordinates)
Yet when I run it I get the error:
if np.all(image[x,y] > intensity):
IndexError: index 720 is out of bounds for axis 0 with size 720
Any help would be great as I'm new to Python.
Thanks,
Shaun
Regardless of the index error that you are experiencing, which has been addressed by others, iterating through pixels/voxels is not an efficient way to manipulate images. The issue becomes particularly evident in multi-dimensional images, where you face the curse of dimensionality.
The correct way to do this is to use vectorisation, in programming languages that support it (e.g. Python, Julia, MATLAB). Through this method you will achieve the result you're looking for far more efficiently (often thousands of times faster). Read up on vectorisation (a.k.a. array programming) for more. In Python this could in principle be done with generators, but they are not suitable for images as they don't actually produce results until called; the practical tool is NumPy arrays.
Here is an example:
Masking image matrices by vectorisation
from numpy.random import randint
from matplotlib.pyplot import figure, imshow, title, grid, show
def mask_img(img, thresh, replacement):
    # Copy the image before masking. Using .copy() is essential so we
    # don't modify the caller's array in place (plain assignment would alias it).
    masked = img.copy()
    # 'replacement' is the value that replaces anything
    # (in this case) below the threshold.
    masked[img < thresh] = replacement # mask using vectorised boolean indexing
    return masked
# Initial image to be masked (arbitrary example here).
# In this example, we assign a 100 x 100 matrix of random integers
# between 0 and 255 as our sample image.
initial_image = randint(0, 256, [100, 100])
threshold = 150 # Threshold
# Masking process.
masked_image = mask_img(initial_image, threshold, 0)
# Plots.
fig = figure(figsize=[16, 9])
fig.add_subplot(121)
imshow(initial_image, interpolation='None', cmap='gray')
title('Initial image')
grid(False)
fig.add_subplot(122)
imshow(masked_image, interpolation='None', cmap='gray')
title('Masked image')
grid(False)
show()
Which returns a figure with the initial image on the left and the masked image on the right (image omitted).
Of course you can put the masking process (function) in a loop to do this on a batch of images. You can modify the indices and do it on 3D, 4D (e.g. MRI), or 5D (e.g. CAT scan) images too, without the need to iterate over each individual pixel or voxel.
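For instance, the same boolean-mask indexing carries over unchanged to a 3D volume (a sketch with made-up data):
import numpy as np

volume = np.random.rand(64, 64, 64)  # hypothetical 3D scan
masked = volume.copy()
masked[volume < 0.5] = 0  # vectorised mask, no per-voxel loop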
Hope this helps.
In Python, like most programming languages, indices start at 0.
So you can only access pixels 0 through 719 along that axis.
Check with a debug print that len(image) and len(image[0]) really do return 1280 and 720.
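Once the bounds are sorted out, note that the loop isn't needed at all; a sketch of a vectorised version (max_int_loc is my own name, assuming a 2D grayscale array):
import numpy as np

def max_int_loc(image):
    # index of the maximum in the flattened array, mapped back to (row, col)
    coordinates = np.unravel_index(np.argmax(image), image.shape)
    return image[coordinates], coordinates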

Numpy image slicing returning black patches/ wrong values

The end goal is to take an image and slice it up into samples that I save. The problem is that my slices are randomly returning black / incorrect patches. Below is a small sample program.
import scipy.ndimage as ndimage
import scipy.misc as misc
import numpy as np

image32 = misc.imread("work0.png")
patches = np.zeros((36, 8, 8))
for i in range(4):
    for j in range(4):
        patches[i*4 + j] = image32[i:i+8,j:j+8]
        misc.imsave("{0}{1}.png".format(i,j), patches[i*4 + j])
An example input image and the all-black patch produced at position 0,0 are shown in the original post (images omitted).
Two things:
You are initializing your patch matrix with the wrong data type. By default, numpy makes the patches matrix np.float64, and if you save with that, you won't get the results you expect. Specifically, as Mr. F's answer explains, some scaling is performed on floating-point images: the minimum and maximum values of the image get scaled to black and white respectively, so an image with a completely uniform background has equal minimum and maximum and gets visualized as black. The best thing is therefore to respect the original image's data type, namely setting the dtype of your patches matrix to np.uint8.
Judging from your for loop indexing, you want to extract out 8 x 8 patches that are non-overlapping. This means that if you have a 32 x 32 image with 8 x 8 patches, you have 16 patches in total arranged in a 4 x 4 grid.
Therefore, you need to change the patches statement so that it has 16 in the first dimension, not 36. In addition, you'll have to adjust the way you're indexing into your image to extract out the 8 x 8 patches because right now, the patches are overlapping. Specifically, you want to make the image patch indexing go from 8*i to 8*(i+1) for the rows and 8*j to 8*(j+1) for the columns. If you substitute sample values of i and j yourself, you'll see that we get unique 8 x 8 patches for each grid in your image.
With both of the above things I noted, the modified code should be:
import scipy.ndimage as ndimage
import scipy.misc as misc
import numpy as np

image32 = misc.imread('work0.png')
patches = np.zeros((16,8,8), dtype=np.uint8) # Change
for i in range(4):
    for j in range(4):
        patches[i*4 + j] = image32[8*i:8*(i+1),8*j:8*(j+1)] # Change
        misc.imsave("{0}{1}.png".format(i,j), patches[i*4 + j])
When I do this and take a look at the output images, I get what I expect.
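As an aside, for non-overlapping patches you can drop the loops entirely with a reshape/swapaxes trick; a sketch (to_patches is my own name, assuming the image dimensions divide evenly by the patch size):
import numpy as np

def to_patches(image, p):
    # (H, W) -> (H//p, p, W//p, p) -> (H//p, W//p, p, p) -> (n, p, p)
    h, w = image.shape
    return image.reshape(h//p, p, w//p, p).swapaxes(1, 2).reshape(-1, p, p)

# patches = to_patches(image32, 8)  # 16 x 8 x 8 patches for a 32 x 32 image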
To be absolutely sure, let's plot the segments using matplotlib. You've conveniently saved all of the patches in patches so it shouldn't be a problem showing what we need. However, I'll place some code in comments so that you can read in the images that were saved from disk with your above code so you can verify that it still works, regardless of looking at patches or the images on disk:
import matplotlib.pyplot as plt

plt.figure()
for i in range(4):
    for j in range(4):
        plt.subplot(4, 4, 4*i + j + 1)
        img = patches[4*i + j]
        # or you can do this:
        # img = misc.imread('{0}{1}.png'.format(i,j))
        img = np.dstack([img, img, img])
        plt.imshow(img)
plt.show()
The weird thing about matplotlib.pyplot.imshow is that a single-channel image (as in your case) with the same intensity everywhere gets visualized as black no matter what the colour map is, much like what we experienced with imsave, because the colour limits are rescaled per image. Therefore, I artificially made this an RGB image with all channels equal, so it gets visualized as grayscale when we show it.
We get the expected 4 x 4 grid of patches (image omitted).
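Alternatively (a small sketch, assuming 8-bit data), you can keep the patches single channel and pin the colour limits instead of stacking channels:
# fixed vmin/vmax stops imshow renormalising a uniform patch to black
plt.imshow(patches[0], cmap='gray', vmin=0, vmax=255)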
According to this answer the issue is that imsave normalizes the data so that the computed minimum is defined as black (and, if there is a distinct maximum, that is defined as white).
This led me to go digging as to why the suggested use of uint8 did work to create the desired output. As it turns out, in the source there is a function called bytescale that gets called internally.
Actually, imsave itself is a very thin wrapper around toimage followed by save (from the image object). Inside of toimage if mode is None (which it is by default), that's when bytescale gets invoked.
It turns out that bytescale has an if statement that checks for the uint8 data type, and if the data is in that format, it returns the data unaltered. But if not, then the data is scaled according to a max and min transformation (where 0 and 255 are the default low and high pixel values to compare to).
This is the full snippet of code linked above:
if data.dtype == uint8:
    return data
if high < low:
    raise ValueError("`high` should be larger than `low`.")
if cmin is None:
    cmin = data.min()
if cmax is None:
    cmax = data.max()
cscale = cmax - cmin
if cscale < 0:
    raise ValueError("`cmax` should be larger than `cmin`.")
elif cscale == 0:
    cscale = 1
scale = float(high - low) / cscale
bytedata = (data * 1.0 - cmin) * scale + 0.4999
bytedata[bytedata > high] = high
bytedata[bytedata < 0] = 0
return cast[uint8](bytedata) + cast[uint8](low)
For the blocks of your data that are all 255, cscale will be 0, which will be checked for and changed to 1. Then the line
bytedata = (data * 1.0 - cmin) * scale + 0.4999
will result in the whole image block having the float value 0.4999, which then truncates to 0 when cast from float to uint8, for example:
In [102]: np.cast[np.uint8](0.4999)
Out[102]: array(0, dtype=uint8)
You can see in the body of bytescale that there are only two possible ways to return: either your data is type uint8 and it's returned as-is, or else it goes through this kind of silly scaling process. So in the end, it is indeed correct, and good practice, to be using uint8 for the pieces of your code that specifically load from or save to an image format via these functions.
So this cascade of stuff is why you were getting all zeros in the outputted image file and why the other suggestion of using dtype=np.uint8 actually helps you. It's not because you need to avoid floating point data for images, just because of this bizarre convention to check and scale data on the part of imsave.
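You can reproduce the effect with plain NumPy (a sketch of the same arithmetic, not a call into SciPy itself):
import numpy as np

data = np.full((8, 8), 255.0)       # a uniform float block, e.g. all white
cmin, cmax = data.min(), data.max()
cscale = (cmax - cmin) or 1         # cscale == 0 gets bumped to 1
bytedata = (data - cmin) * (255.0 / cscale) + 0.4999
print(bytedata.astype(np.uint8))    # all zeros: the block renders as black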

Moving average of an array in Python

I have an array where discrete sine-wave values are recorded and stored. I want to find the max and min of the waveform. Since the sine-wave data is voltage recorded with a DAQ, there will be some noise, so I want to do a weighted average. Assuming self.yArray contains my sine-wave values, here is my code so far:
filterarray = []
filtersize = 2
length = len(self.yArray)
for x in range (0, length-(filtersize+1)):
    for y in range (0,filtersize):
        summation = sum(self.yArray[x+y])
        ave = summation/filtersize
        filterarray.append(ave)
My issue seems to be in the second for loop, where depending on my averaging window size (filtersize), I want to sum up the values in the window to take the average of them. I receive an error saying:
summation = sum(self.yArray[x+y])
TypeError: 'float' object is not iterable
I am an EE with very little experience in programming, so any help would be greatly appreciated!
The other answers correctly describe your error, but this type of problem really calls out for using numpy. Numpy will run faster, be more memory efficient, and is more expressive and convenient for this type of problem. Here's an example:
import numpy as np
import matplotlib.pyplot as plt
# make a sine wave with noise
times = np.arange(0, 10*np.pi, .01)
noise = .1*np.random.ranf(len(times))
wfm = np.sin(times) + noise
# smoothing it with a running average in one line using a convolution
# using a convolution, you could also easily smooth with other filters
# like a Gaussian, etc.
n_ave = 20
smoothed = np.convolve(wfm, np.ones(n_ave)/n_ave, mode='same')
plt.plot(times, wfm, times, -.5+smoothed)
plt.show()
If you don't want to use numpy, it should also be noted that there's a logical error in your program that results in the TypeError. The problem is that in the line
summation = sum(self.yArray[x+y])
you're using sum within the loop where you're also accumulating the sum. So either use sum over a slice without the inner loop, or loop through the array and add up the elements yourself, but not both (doing both, i.e. applying sum to a single indexed element, is what triggers the error in the first place). That is, here are two solutions:
filterarray = []
filtersize = 2
length = len(self.yArray)
for x in range (0, length-(filtersize+1)):
    summation = sum(self.yArray[x:x+filtersize]) # sum over section of array
    ave = summation/filtersize
    filterarray.append(ave)
or
filterarray = []
filtersize = 2
length = len(self.yArray)
for x in range (0, length-(filtersize+1)):
    summation = 0.
    for y in range (0,filtersize):
        summation += self.yArray[x+y]
    ave = summation/filtersize
    filterarray.append(ave)
self.yArray[x+y] is returning a single item out of the self.yArray list. If you are trying to get a subset of the yArray, you can use the slice operator instead:
summation = sum(self.yArray[x:y])
to return an iterable that the sum builtin can use.
A bit more information about Python slices can be found here (scroll down to the "Sequences" section): http://docs.python.org/2/reference/datamodel.html#the-standard-type-hierarchy
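A two-line illustration of the difference (hypothetical values):
y_array = [1.0, 2.0, 3.0, 4.0]
print(sum(y_array[1:3]))  # slice -> iterable -> 5.0; sum(y_array[1]) raises TypeError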
You could use numpy, like:
import numpy
filtersize = 2
ysums = numpy.cumsum(numpy.array(self.yArray, dtype=float))
ylags = numpy.roll(ysums, filtersize)
ylags[0:filtersize] = 0.0
moving_avg = (ysums - ylags) / filtersize
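A quick check of the cumsum trick against a direct convolution (a sketch with made-up data in y; the first filtersize-1 entries of moving_avg are partial windows, so only the tail is compared):
import numpy
y = numpy.random.rand(100)
filtersize = 2
ysums = numpy.cumsum(y)
ylags = numpy.roll(ysums, filtersize)
ylags[0:filtersize] = 0.0
moving_avg = (ysums - ylags) / filtersize
ref = numpy.convolve(y, numpy.ones(filtersize)/filtersize, mode='valid')
print(numpy.allclose(moving_avg[filtersize-1:], ref))  # True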
Your original code attempts to call sum on the float value stored at yArray[x+y], where x+y is evaluating to some integer representing the index of that float value.
Try:
summation = sum(self.yArray[x:y])
Indeed numpy is the way to go. One of the nice features of Python is list comprehensions, which let you do away with the typical nested for-loop constructs. Here's an example for your particular problem:
import numpy as np
step=2
res = [np.sum(myarr[i:i+step], dtype=float)/step for i in range(len(myarr)-step+1)]

Numpy manipulating array of True values dependent on x/y index

So I have an array (it's large - 2048x2048), and I would like to do some element-wise operations dependent on where the elements are. I'm very confused about how to do this (I was told not to use for loops; when I tried that, my IDE froze and it went really slowly).
Onto the question:
h = aperatureimage
h[:,:] = 0
indices = np.where(aperatureimage>1)
for True in h:
    h[index] = np.exp(1j*k*z)*np.exp(1j*k*(x**2+y**2)/(2*z))/(1j*wave*z)
So I have an index, which is (I'm assuming here) essentially a 'cropped' version of my larger aperatureimage array. *Note: the aperture image is a grayscale image converted to an array; it has a shape or text on it, and I would like to find all the 'white' regions of the aperture and perform my operation on them.
How can I access the individual x/y values of index so I can perform my exponential operation? When I try index[:,None], the program spits out 'ValueError: broadcast dimensions too large'. I also get 'array is not broadcastable to correct shape'. Any help would be appreciated!
One more clarification: x and y are the only values I would like to change (essentially the points in my array where there is white, z, k, and whatever else are defined previously).
EDIT:
I'm not sure the code I posted above is correct, it returns two empty arrays. When I do this though
index = (aperatureimage==1)
print len(index)
Actually, nothing I've done so far works correctly. I have a 2048x2048 image with a 128x128 white square in the middle of it. I would like to convert this image to an array, look through all the values and determine the index values (x,y) where the array is not black (I only have white/black, bilevel image didn't work for me). I would then like to take all the values (x,y) where the array is not 0, and multiply them by the h[index] value listed above.
I can post more information if necessary. If you can't tell, I'm stuck.
EDIT2: Here's some code that might help - I think I have the problem above solved (I can now access members of the array and perform operations on them). But, for some reason, the Fx values in my for loop never increase; it loops over Fy forever...
import sys, os
from scipy.signal import *
import numpy as np
import Image, ImageDraw, ImageFont, ImageOps, ImageEnhance, ImageColor

def createImage(aperature, type):
    imsize = aperature*8 #Add 0 padding to make it nice
    middle = imsize/2 # The middle (physical 0) of our image will be the imagesize/2
    im = Image.new("L", (imsize,imsize)) #Make a grayscale image with imsize*imsize pixels
    draw = ImageDraw.Draw(im) #Create a new draw method
    box = ((middle-aperature/2, middle-aperature/2), (middle+aperature/2, middle+aperature/2)) #Bounding box for aperature
    if type == 'Rectangle':
        draw.rectangle(box, fill = 'white') #Draw rectangle in the box and color it white
    del draw
    return im, middle

def Diffraction(aperaturediameter = 1, type = 'Rectangle', z = 2000000, wave = .001):
    # Constants
    deltaF = 1/8 # Image will be 8mm wide
    z = 1/3.
    wave = 0.001
    k = 2*pi/wave
    # Now let's get to work
    aperature = aperaturediameter * 128 # Aperaturediameter (in mm) to some pixels
    im, middle = createImage(aperature, type) #Create an image depending on type of aperature
    aperaturearray = np.array(im) # Turn image into numpy array
    # Fourier Transform of Aperature
    Ta = np.fft.fftshift(np.fft.fft2(aperaturearray))/(len(aperaturearray))
    # Transforming and calculating of Transfer Function Method
    H = aperaturearray.copy() # Copy image so H (transfer function) has the same dimensions as aperaturearray
    H[:,:] = 0 # Set H to 0
    U = aperaturearray.copy()
    U[:,:] = 0
    index = np.nonzero(aperaturearray) # Find nonzero elements of aperaturearray
    H[index[0],index[1]] = np.exp(1j*k*z)*np.exp(-1j*k*wave*z*((index[0]-middle)**2+(index[1]-middle)**2)) # Free space transfer for ap array
    Utfm = abs(np.fft.fftshift(np.fft.ifft2(Ta*H))) # Compute intensity at distance z
    # Fourier Integral Method
    apindex = np.nonzero(aperaturearray)
    U[index[0],index[1]] = aperaturearray[index[0],index[1]] * np.exp(1j*k*((index[0]-middle)**2+(index[1]-middle)**2)/(2*z))
    Ufim = abs(np.fft.fftshift(np.fft.fft2(U))/len(U))
    # Save image
    fim = Image.fromarray(np.uint8(Ufim))
    fim.save("PATH\Fim.jpg")
    ftfm = Image.fromarray(np.uint8(Utfm))
    ftfm.save("PATH\FTFM.jpg")
    print "that may have worked..."
    return

if __name__ == '__main__':
    Diffraction()
You'll need numpy, scipy, and PIL to work with this code.
When I run this, it goes through the code, but there is no data in them (everything is black). Now I have a real problem here as I don't entirely understand the math I'm doing (this is for HW), and I don't have a firm grasp on Python.
U[index[0],index[1]] = aperaturearray[index[0],index[1]] * np.exp(1j*k*((index[0]-middle)**2+(index[1]-middle)**2)/(2*z))
Should that line work for performing elementwise calculations on my array?
Could you perhaps post a minimal, yet complete, example? One that we can copy/paste and run ourselves?
In the meantime, in the first two lines of your current example:
h = aperatureimage
h[:,:] = 0
you set both 'aperatureimage' and 'h' to 0. That's probably not what you intended. You might want to consider:
h = aperatureimage.copy()
This generates a copy of aperatureimage, while your original code simply points h at the same array as aperatureimage, so changing one changes the other.
Be aware that copying very large arrays may cost you more memory than you would prefer.
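A two-line illustration of the aliasing (made-up array):
import numpy as np

a = np.ones((3, 3))
h = a            # h is the same array, not a copy
h[:, :] = 0
print(a.sum())   # 0.0 -- zeroing h zeroed a as well; a.copy() would avoid this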
What I think you are trying to do is this:
import numpy as np
N = 2048
M = 64
a = np.zeros((N, N))
a[N/2-M:N/2+M,N/2-M:N/2+M]=1
x,y = np.meshgrid(np.linspace(0, 1, N), np.linspace(0, 1, N))
b = a.copy()
indices = np.where(a>0)
b[indices] = np.exp(x[indices]**2+y[indices]**2)
Or something similar. This, in any case, sets some values in 'b' based on the x/y coordinates where 'a' is bigger than 0. Try visualizing it with imshow. Good luck!
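For instance, a quick look (sketch):
import matplotlib.pyplot as plt

plt.imshow(b, cmap='gray')
plt.colorbar()
plt.show()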
Concerning the edit
You should normalize your output so it fits into an 8-bit integer. Currently, one of your arrays has a maximum value much larger than 255 and the other a maximum much smaller. Try this instead:
fim = Image.fromarray(np.uint8(255*Ufim/np.amax(Ufim)))
fim.save("PATH\Fim.jpg")
ftfm = Image.fromarray(np.uint8(255*Utfm/np.amax(Utfm)))
ftfm.save("PATH\FTFM.jpg")
Also consider np.zeros_like() instead of copying and clearing H and U.
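For example (a sketch of that suggestion; the complex dtype is my assumption, since H and U are later filled with complex exponentials):
H = np.zeros_like(aperaturearray, dtype=complex)
U = np.zeros_like(aperaturearray, dtype=complex)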
Finally, I personally very much like working with ipython when developing something like this. If you put the code from your Diffraction function at the top level of your script (in place of the if __name__ == '__main__' block), you can access the variables directly from ipython. A quick command like np.amax(Utfm) would show you that there are indeed values != 0. imshow() is always a nice way to look at matrices.
