Be warned, this is a newbie question.
I acquired some noisy data (a 1×200 pixel slice from a grayscale image) for which I am trying to build a simple FFT low-pass filter. I understand the general principle of the Fourier transform, but I ran into trouble trying to implement it.
My script works well on example data, but behaves strangely on my real data.
I think I must be mixing up dimensions at some point, but after quite a few long hours I cannot find where. My suspicion comes from the fact that the output of print(signal.shape) (please see below) differs between the example and the real data. Furthermore, scipy.fftpack.rfft(signal) seems to do nothing to my signal instead of computing its frequency-domain representation, as it should.
My script:
(will run out-of-the-box using example data, just by copy-pasting everything below #example data)
import cv2
import numpy as np
from scipy.fftpack import rfft, irfft, fftfreq, fft, ifft
import matplotlib as mpl
import matplotlib.pyplot as plt
#===========================================
#GETTING DATA AND SETTING CONSTANTS
#===========================================
REACH = 100
COURSE = 180
CENTER = (cx, cy)  # cx, cy (like img, contours and index below) are defined earlier, not shown
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)
gray2 = gray.copy()
#drawing initial vector
cv2.line(gray, (cx, cy + REACH), (cx, cy - REACH), 0, 5)
cv2.circle(gray, (cx, cy + REACH), 10, 0, -1)
cv2.circle(gray, (cx, cy), REACH, 0, 5)
#flooding contour with white
cv2.drawContours(gray2, contours, index, 255, -1)
#real data
signal = gray2[(cy - REACH):(cy + REACH), (cx-0.5):(cx+0.5)]
time = np.linspace(0, 2*REACH, num=200)
#example data
time = np.linspace(0,10,2000)
signal = np.cos(5*np.pi*time) + np.cos(7*np.pi*time)
#=============================================
#THE FFT TRANSFORM & FILTERING
#=============================================
#signal filtering
f_signal = rfft(signal)
W = fftfreq(signal.size, d=time[1]-time[0])
cut_f_signal = f_signal.copy()
cut_f_signal[(W>5)] = 0
cut_signal = irfft(cut_f_signal)
#==================================
#FROM HERE IT'S ONLY PLOTTING
#==================================
print(signal.shape)
plt.figure(figsize=(8,8))
ax1 = plt.subplot(321)
ax1.plot(signal)
ax1.set_title("Original Signal", color='green', fontsize=16)
ax2 = plt.subplot(322)
ax2.plot(np.abs(f_signal))
plt.xlim(0,100)
ax2.set_title("FFT Signal", color='green', fontsize=16)
ax3 = plt.subplot(323)
ax3.plot(cut_f_signal)
plt.xlim(0,100)
ax3.set_title("Filtered FFT Signal", color='green', fontsize=16)
ax4 = plt.subplot(324)
ax4.plot(cut_signal)
ax4.set_title("Filtered Signal", color='green', fontsize=16)
for i in [ax1, ax2, ax3, ax4]:
    i.tick_params(labelsize=16, labelcolor='green')
plt.tight_layout()
plt.show()
The result on real data:
parameters:
signal = gray2[(cy - REACH):(cy + REACH), (cx-0.5):(cx+0.5)]
time = np.linspace(0, 2*REACH, num=200)
filtering parameter:
cut_f_signal[(W<0.05)] = 0
Output:
output of signal.shape is (200L, 1L)
The result on example data:
parameters:
signal = np.cos(5*np.pi*time) + np.cos(7*np.pi*time)
time = np.linspace(0,10,2000)
filtering parameter:
cut_f_signal[(W>5)] = 0
Output:
output of signal.shape is (2000L,)
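Looking at these outputs, the shape difference is almost certainly the culprit: scipy.fftpack.rfft transforms along the last axis by default, and for a (200, 1) array that axis has length 1, so the transform effectively does nothing. A minimal fix, assuming cx and cy are integers, is to slice a single column so that signal is one-dimensional:
# take one integer column instead of a half-pixel float range; shape becomes (200,)
signal = gray2[cy - REACH:cy + REACH, cx].astype(float)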
So I began to think about it, and after a while I realized the flaw in my approach. My base data is an image, and I take a slice of it to generate the signal above.
So instead of trying to implement a less-than-satisfying home-brewed FFT filter to smooth the signal, it is in fact much better and easier to smooth the image with one of the numerous battle-tested filters (Gaussian, bilateral, etc.) available in equally numerous image libraries (in my case, OpenCV)...
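For example, a minimal sketch of that approach with OpenCV (kernel size and filter parameters are arbitrary choices, not recommendations):
import cv2

# Gaussian blur of the grayscale image from above
smoothed = cv2.GaussianBlur(gray2, (5, 5), 0)

# or an edge-preserving bilateral filter
smoothed2 = cv2.bilateralFilter(gray2, 9, 75, 75)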
Thanks to the people that took the time to try and help!
I am using astropy.visualization to make a color image of M66.
Before doing anything, I learnt that I have to cast my RGB .fts arrays to float with numpy.float_():
forCasting = np.float_()
### READING
b = fits.open("data/"+"M66-Blue.fts")[0].data
r = fits.open("data/"+"M66-Red.fts")[0].data
g = fits.open("data/"+"M66-Green.fts")[0].data
### CASTING
r = np.array(r,forCasting)
g = np.array(g,forCasting)
b = np.array(b,forCasting)
so that I could proceed with my stretch like this:
stretch = SqrtStretch() + ZScaleInterval()
r = stretch(b)
g = stretch(r)
b = stretch(g)
plt.imshow(r, origin='lower')
plt.show()
plt.imshow(g, origin='lower')
plt.show()
plt.imshow(b, origin='lower')
plt.show()
Then I just use the method make_lupton_rgb from astropy.visualization as follows, but I get a super dark image in which I cannot distinguish anything. Does anybody know why the final image is so dark here? Do you have any suggestions?
### SAVING
# rgb_default = make_lupton_rgb(r, g, b, minimum=1000, stretch=900, Q=100, filename="provafinale.png")
rgb_default = make_lupton_rgb(r, g, b, filename="provafinale.png")
plt.imshow(rgb_default, origin='lower')
plt.show()
Thanks!
It looks like you have to set the stretch and Q arguments of make_lupton_rgb.
The default values are stretch=5 and Q=8, which give a dark result.
I have no experience with astropy or with astronomy.
I just played with the arguments, and got a bright image using stretch=1 and Q=0:
rgb_default = make_lupton_rgb(r, g, b, minimum=0, stretch=1, Q=0, filename="provafinale.png")
I also tried computing minimum and stretch using np.percentile, to linearly stretch the output.
I tested the code using an m8_050507_9i9m image from index_fits.
Here is the code I used for testing:
import numpy as np
from astropy.io import fits
from astropy.visualization import SqrtStretch
from astropy.visualization import ZScaleInterval
from astropy.visualization import make_lupton_rgb
from matplotlib import pyplot as plt
forCasting = np.float_()
### READING
# http://www.mistisoftware.com/astronomy/index_fits.htm
r = fits.open("m8_050507_9i9m_R.FIT")[0].data
g = fits.open("m8_050507_9i9m_G.FIT")[0].data
b = fits.open("m8_050507_9i9m_B.FIT")[0].data
# Crop the top and the right margin (contains black pixels)
r = r[:, :-200]
g = g[:, :-200]
b = b[:, :-200]
### CASTING
r = np.array(r,forCasting)
g = np.array(g,forCasting)
b = np.array(b,forCasting)
stretch = SqrtStretch() + ZScaleInterval()
# stretch each channel in place; note the assignments must not swap channels
r = stretch(r)
g = stretch(g)
b = stretch(b)
plt.imshow(r, origin='lower')
plt.imshow(g, origin='lower')
plt.imshow(b, origin='lower')
### SAVING
# https://docs.astropy.org/en/stable/api/astropy.visualization.make_lupton_rgb.html
# astropy.visualization.make_lupton_rgb(image_r, image_g, image_b, minimum=0, stretch=5, Q=8, filename=None)[source]
# Return a Red/Green/Blue color image from up to 3 images using an asinh stretch.
# The input images can be int or float, and in any range or bit-depth.
lo_val, up_val = np.percentile(np.hstack((r.flatten(), g.flatten(), b.flatten())), (0.5, 99.5)) # Get the value of lower and upper 0.5% of all pixels
stretch_val = up_val - lo_val
rgb_default = make_lupton_rgb(r, g, b, minimum=lo_val, stretch=stretch_val, Q=0, filename="provafinale.png")
# Cut the top rows - contains black pixels
rgb_default = rgb_default[100:, :, :]
plt.imshow(rgb_default, origin='lower')
plt.show()
Result:
I am having issues fitting a Gaussian to my data. Currently the output of my code looks like this, where orange is the data, blue is the Gaussian fit, and green is an in-built Gaussian fitter; I do not wish to use the in-built fitter, as it never quite begins at zero and I do not have access to its code. I would like my output to look something like this, where the curve drawn in red is the Gaussian fit.
I have tried reading the curve_fit documentation, but at best I get a fit that looks like this, which fits over all the data. However, that is undesirable, as I am only interested in the central peak. This is my main issue: I do not know how to get curve_fit to fit a Gaussian to the central peak, as in the second image.
I have considered using a weights function like np.random.choice(), or taking the data file's maximum value and then looking at the second derivative on either side of the central peak to see where the inflection changes, but I am unsure how best to implement this.
How would I best go about this? I have done a lot of googling but can't quite get my head around changing curve_fit to suit my needs.
Cheers for any pointers!
This is a data file.
https://drive.google.com/file/d/1qrAkD74U6L46GoGnvMiUHdPuLEToS6Pv/view?usp=sharing
This is my code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from matplotlib.pyplot import figure
plt.close('all')
fpathB4 = r'E:\.1. Work - Current Projects + Old Projects\Current Projects\PF 4MHz Laser System\.8. 1050 SYSTEM\AC traces'  # raw string so the backslashes survive
fpath = fpathB4.replace('\\', '/') + '/'
filename = '300'
with open(fpath + filename) as f:
    dataraw = f.readlines()
FWHM = dataraw[8].split(':')[1].split()[0]
FWHM = float(FWHM)
print("##### For AC file -", filename, "#####")
print("Auto-co guess -", FWHM, "ps")
pulseduration = FWHM/np.sqrt(2)
pulseduration = str(pulseduration)
dataraw = dataraw[15:]
print("Pulse duration -", pulseduration, "ps" + "\n")
time = np.array([])
acf1 = np.array([]) #### DATA
fit = np.array([]) #### Gaussian fit
for k in dataraw:
    data = k.split()
    time = np.append(time, float(data[0]))
    acf1 = np.append(acf1, float(data[1]))
    fit = np.append(fit, float(data[2]))
n = len(time)
y = acf1.copy()
x = time.copy()
mean = sum(x*y)/sum(y)                       # intensity-weighted mean as initial guess
sigma = np.sqrt(sum(y*(x-mean)**2)/sum(y))   # intensity-weighted standard deviation as initial guess
def gaus(x, a, x0, sigma):
    return a*np.exp(-(x-x0)**2/(2*sigma**2))
popt,pcov = curve_fit(gaus,x,y,p0=[1,mean,sigma])
plt.plot(x,gaus(x,*popt)/np.max(gaus(x,*popt)))
figure(num=1, figsize=(8, 3), dpi=96, facecolor='w', edgecolor='k') # figsize = (length, height)
plt.plot(time, acf1/np.max(acf1), label = 'Data - ' + filename, linewidth = 1)
plt.plot(time, fit/np.max(fit), label = r'$FWHM_{{\Delta t}}$ (ps) = ' + pulseduration)
plt.autoscale(enable = True, axis = 'x', tight = True)
plt.title("Auto-Correlation Data")
plt.xlabel("Time (ps)")
plt.ylabel("Intensity (a.u.)")
plt.legend()
I think the problem might be that the data is not completely Gaussian-like. It seems you have some kind of Airy/sinc pattern due to the time resolution of your acquisition instrument. Still, if you are only interested in the center, you can fit it with a single Gaussian:
import numpy as np  # needed by the model function below
import fitwrap as fw
import pandas as pd

df = pd.read_csv('300', skip_blank_lines=True, skiprows=13, sep=r'\s+')
def gaussian_no_offset(x, x0=2, sigma=1, amp=300):
    return amp*np.exp(-(x-x0)**2/sigma**2)
fw.fit(gaussian_no_offset, df.time, df.acf1)
x0: 2.59158 +/- 0.00828 (0.3%) initial:2
sigma: 0.373 +/- 0.0117 (3.1%) initial:1
amp: 355.02 +/- 9.65 (2.7%) initial:300
If you want to be slightly more precise, I can think of a sinc-squared function for the peak plus a broad Gaussian offset. The fit seems nicer, but it really depends on what the data actually represents:
def sinc(x, x0=2.5, amp=300, width=1, amp_g=20, sigma=3):
    return amp*(np.sinc((x-x0)/width))**2 + amp_g*np.exp(-(x-x0)**2/sigma**2)
fw.fit(sinc, df.time, df.acf1)
x0: 2.58884 +/- 0.0021 (0.1%) initial:2.5
amp: 303.84 +/- 3.7 (1.2%) initial:300
width: 0.49211 +/- 0.00565 (1.1%) initial:1
amp_g: 81.32 +/- 2.11 (2.6%) initial:20
sigma: 1.512 +/- 0.0351 (2.3%) initial:3
I'd add a constant offset to the Gaussian equation and limit its range via the bounds parameter of curve_fit, so that the whole curve isn't raised higher.
So your equation would be:
def gaus(x, y0, a, x0, sigma):  # the independent variable must come first for curve_fit
    return y0 + a*np.exp(-(x-x0)**2/(2*sigma**2))
and the curve_fit bounds (in the same parameter order y0, a, x0, sigma) would be something like this:
curve_fit(..... , bounds=[[0, a_min, x0_min, sigma_min], [0.1, a_max, x0_max, sigma_max]])
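For the asker's main point, fitting only the central peak, a simple approach is to restrict the data passed to curve_fit to a window around the maximum. A minimal sketch, assuming the time and acf1 arrays from the question's script and an arbitrary, hand-picked half-width of 1 ps:
import numpy as np
from scipy.optimize import curve_fit

def gaus(x, y0, a, x0, sigma):
    return y0 + a*np.exp(-(x-x0)**2/(2*sigma**2))

center = time[np.argmax(acf1)]          # position of the tallest sample
mask = np.abs(time - center) < 1.0      # keep only points near the peak (1 ps is a guess)
popt, pcov = curve_fit(gaus, time[mask], acf1[mask],
                       p0=[0., acf1.max(), center, 0.3])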
I implemented the code given by Cris Luengo for convolution in the frequency domain; however, I'm not getting the intended gradient image in the x direction.
Image without flipping the kernel in x and y direction:
Image after flipping the kernel:
If you notice, the second image is the same as the one given by the ImageFilter.Kernel filter from the Pillow library. Also, one thing to notice: I don't have to flip the kernel if I apply the Sobel kernel in the y direction; there I get exactly the intended image.
This is my code:
import numpy as np
from scipy import misc
from scipy import fftpack
import matplotlib.pyplot as plt
from PIL import Image,ImageDraw,ImageOps,ImageFilter
from pylab import figure, title, imshow, hist, grid,show
im1=Image.open("astronaut.png").convert('L')
# im1=ImageOps.grayscale(im1)
img=np.array(im1)
# kernel = np.ones((3,3)) / 9
# kernel=np.array([[0,-1,0],[-1,4,-1],[0,-1,0]])
kernel=np.array([[-1,0,1],[-2,0,2],[-1,0,1]])
kernel=np.rot90(kernel,2)
print(kernel)
sz = (img.shape[0] - kernel.shape[0], img.shape[1] - kernel.shape[1])  # total amount of padding
kernel = np.pad(kernel, (((sz[0]+1)//2, sz[0]//2), ((sz[1]+1)//2, sz[1]//2)), 'constant')
kernel = fftpack.ifftshift(kernel)
filtered = np.real(fftpack.ifft2(fftpack.fft2(img) * fftpack.fft2(kernel))) + np.imag(fftpack.ifft2(fftpack.fft2(img) * fftpack.fft2(kernel)))
filtered=np.maximum(0,np.minimum(filtered,255))
im2=Image.open("astronaut.png").convert('L')
u = im2.filter(ImageFilter.Kernel((3,3), [-1,0,1,-2,0,2,-1,0,1], scale=1, offset=0))
fig2=figure()
ax1 = fig2.add_subplot(221)
ax2 = fig2.add_subplot(222)
ax3 = fig2.add_subplot(223)
ax1.title.set_text('Original Image')
ax2.title.set_text('After convolving in freq domain')
ax3.title.set_text('imagefilter conv')
ax1.imshow(img,cmap='gray')
ax2.imshow(filtered,cmap='gray')
ax3.imshow(np.array(u),cmap='gray')
show()
We can use the np.fft module's FFT implementation too. Here is how we can obtain the convolution with the horizontal Sobel kernel in the frequency domain (by the convolution theorem):
h, w = im.shape
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])  # sobel_filter_x
k = len(kernel) // 2 # assuming odd-length square kernel, here it's 3x3
kernel_padded = np.pad(kernel, [(h//2-k-1, h//2-k), (w//2-k-1, w//2-k)])
im_freq = np.fft.fft2(im) # input image frequency
kernel_freq = np.fft.fft2(kernel_padded) # kernel frequency
out_freq = im_freq * kernel_freq # frequency domain convolution output
out = np.fft.ifftshift(np.fft.ifft2(out_freq)).real # spatial domain output
The below figure shows the input, kernel and output images in spatial and frequency domain:
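As a sanity check (my addition, not part of the original answer), the frequency-domain output can be compared against a direct spatial convolution. scipy.signal.fftconvolve computes a true convolution (it flips the kernel internally), so it should agree with out except near the borders, where circular FFT wrap-around and zero padding differ:
import numpy as np
from scipy.signal import fftconvolve

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
ref = fftconvolve(im, sobel_x, mode='same')  # spatial-domain reference
# 'out' from the FFT code above should be close to 'ref' away from the image edges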
I'm trying to understand scipy.signal.deconvolve.
From the mathematical point of view, a convolution is just a multiplication in Fourier space, so I would expect that for two functions f and g:
Deconvolve(Convolve(f,g) , g) == f
In numpy/scipy this is either not the case or I'm missing an important point.
Although there are some questions related to deconvolve on SO already (like here and here) they do not address this point, others remain unclear (this) or unanswered (here). There are also two questions on SignalProcessing SE (this and this) the answers to which are not helpful in understanding how scipy's deconvolve function works.
The question would be:
How do you reconstruct the original signal f from a convolved signal, assuming you know the convolving function g?
Or in other words: how does the pseudocode Deconvolve(Convolve(f,g), g) == f translate into numpy/scipy?
Edit: Note that this question is not targeted at preventing numerical inaccuracies (although this is also an open question) but at understanding how convolve/deconvolve work together in scipy.
The following code tries to do that with a Heaviside function and a Gaussian filter.
As can be seen in the image, the result of deconvolving the convolution is not at all the original Heaviside function. I would be glad if someone could shed some light on this issue.
import numpy as np
import scipy.signal
import matplotlib.pyplot as plt
# Define heaviside function
H = lambda x: 0.5 * (np.sign(x) + 1.)
#define gaussian
gauss = lambda x, sig: np.exp(-( x/float(sig))**2 )
X = np.linspace(-5, 30, num=3501)
X2 = np.linspace(-5,5, num=1001)
# convolve a Heaviside with a Gaussian
H_c = np.convolve( H(X), gauss(X2, 1), mode="same" )
# deconvolve the result
H_dc, er = scipy.signal.deconvolve(H_c, gauss(X2, 1) )
#### Plot ####
fig , ax = plt.subplots(nrows=4, figsize=(6,7))
ax[0].plot( H(X), color="#907700", label="Heaviside", lw=3 )
ax[1].plot( gauss(X2, 1), color="#907700", label="Gauss filter", lw=3 )
ax[2].plot( H_c/H_c.max(), color="#325cab", label="convoluted" , lw=3 )
ax[3].plot( H_dc, color="#ab4232", label="deconvoluted", lw=3 )
for i in range(len(ax)):
    ax[i].set_xlim([0, len(X)])
    ax[i].set_ylim([-0.07, 1.2])
    ax[i].legend(loc=4)
plt.show()
Edit: Note that there is a Matlab example showing how to convolve/deconvolve a rectangular signal using
yc=conv(y,c,'full')./sum(c);
ydc=deconv(yc,c).*sum(c);
In the spirit of this question it would also help if someone was able to translate this example into python.
After some trial and error I found out how to interpret the results of scipy.signal.deconvolve(), and I post my findings as an answer.
Let's start with working example code:
import numpy as np
import scipy.signal
import matplotlib.pyplot as plt
# let the signal be box-like
signal = np.repeat([0., 1., 0.], 100)
# and use a gaussian filter
# the filter should be shorter than the signal
# the filter should be such that it's much bigger than zero everywhere
gauss = np.exp(-( (np.linspace(0,50)-25.)/float(12))**2 )
print(gauss.min())  # = 0.013 >> 0
# calculate the convolution (np.convolve and scipy.signal.convolve identical)
# the keyword argument mode="same" ensures that the convolution spans the same
# shape as the input array.
#filtered = scipy.signal.convolve(signal, gauss, mode='same')
filtered = np.convolve(signal, gauss, mode='same')
deconv, _ = scipy.signal.deconvolve( filtered, gauss )
#the deconvolution has n = len(signal) - len(gauss) + 1 points
n = len(signal)-len(gauss)+1
# so we need to expand it by
s = (len(signal)-n)//2  # integer division so s can be used as an index
#on both sides.
deconv_res = np.zeros(len(signal))
deconv_res[s:len(signal)-s-1] = deconv
deconv = deconv_res
# now deconv contains the deconvolution
# expanded to the original shape (filled with zeros)
#### Plot ####
fig , ax = plt.subplots(nrows=4, figsize=(6,7))
ax[0].plot(signal, color="#907700", label="original", lw=3 )
ax[1].plot(gauss, color="#68934e", label="gauss filter", lw=3 )
# we need to divide by the sum of the filter window to get the convolution normalized to 1
ax[2].plot(filtered/np.sum(gauss), color="#325cab", label="convoluted" , lw=3 )
ax[3].plot(deconv, color="#ab4232", label="deconvoluted", lw=3 )
for i in range(len(ax)):
    ax[i].set_xlim([0, len(signal)])
    ax[i].set_ylim([-0.07, 1.2])
    ax[i].legend(loc=1, fontsize=11)
    if i != len(ax)-1:
        ax[i].set_xticklabels([])
plt.savefig(__file__ + ".png")
plt.show()
This code produces the following image, showing exactly what we want (Deconvolve(Convolve(signal,gauss) , gauss) == signal)
Some important findings are:
The filter should be shorter than the signal
The filter should be much bigger than zero everywhere (here > 0.013 is good enough)
Using the keyword argument mode = 'same' to the convolution ensures that it lives on the same array shape as the signal.
The deconvolution has n = len(signal) - len(gauss) + 1 points.
So in order to let it also reside on the same original array shape, we need to expand it by s = (len(signal)-n)//2 on both sides.
Of course, further findings, comments and suggestion to this question are still welcome.
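As a side note (my own addition, but in line with the scipy documentation): scipy.signal.deconvolve performs what amounts to polynomial long division, returning a quotient and a remainder, which is also why the output has len(signal) - len(gauss) + 1 points:
import numpy as np
import scipy.signal

# convolving two sequences multiplies their polynomial coefficients...
prod = np.convolve([1, 2, 3], [1, 1])            # -> [1, 3, 5, 3]
# ...and deconvolve divides them again
quot, rem = scipy.signal.deconvolve(prod, [1, 1])
print(quot)  # [1. 2. 3.]    -- the original sequence
print(rem)   # [0. 0. 0. 0.] -- zero remainder for an exact division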
As written in the comments, I cannot help with the example you posted originally. As @Stelios has pointed out, the deconvolution might not work out due to numerical issues.
I can, however, reproduce the example you posted in your Edit:
Here is the code, a direct translation of the Matlab source:
import numpy as np
import scipy.signal
import matplotlib.pyplot as plt
x = np.arange(0., 20.01, 0.01)
y = np.zeros(len(x))
y[900:1100] = 1.
y += 0.01 * np.random.randn(len(y))
c = np.exp(-(np.arange(len(y))) / 30.)
yc = scipy.signal.convolve(y, c, mode='full') / c.sum()
ydc, remainder = scipy.signal.deconvolve(yc, c)
ydc *= c.sum()
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(4, 4))
ax[0][0].plot(x, y, label="original y", lw=3)
ax[0][1].plot(x, c, label="c", lw=3)
ax[1][0].plot(x[0:2000], yc[0:2000], label="yc", lw=3)
ax[1][1].plot(x, ydc, label="recovered y", lw=3)
plt.show()
My goal is to trace drawings that have a lot of separate shapes in them and to split these shapes into individual images. The drawings are black on white. I'm quite new to numpy, opencv & co., but here is my current thinking:
scan for black pixels
black pixel found -> watershed
find watershed boundary (as polygon path)
continue searching, but ignore points within the already found boundaries
I'm not very good at this kind of thing; is there a better way?
First I tried to find the rectangular bounding box of the watershed results (this is more or less a collage of examples):
from numpy import *
import numpy as np
from scipy import ndimage
np.set_printoptions(threshold=np.nan)
a = np.zeros((512, 512)).astype(np.uint8) #unsigned integer type needed by watershed
y, x = np.ogrid[0:512, 0:512]
m1 = ((y-200)**2 + (x-100)**2 < 30**2)
m2 = ((y-350)**2 + (x-400)**2 < 20**2)
m3 = ((y-260)**2 + (x-200)**2 < 20**2)
a[m1+m2+m3]=1
markers = np.zeros_like(a).astype(int16)
markers[0, 0] = 1
markers[200, 100] = 2
markers[350, 400] = 3
markers[260, 200] = 4
res = ndimage.watershed_ift(a.astype(uint8), markers)
unique(res)
B = argwhere(res.astype(uint8))
(ystart, xstart), (ystop, xstop) = B.min(0), B.max(0) + 1
tr = a[ystart:ystop, xstart:xstop]
print(tr)
Somehow, when I use the original array (a), argwhere seems to work, but after the watershed (res) it just outputs the complete array again.
The next step could be to find the polygon path around the shape, but the bounding box would be great for now!
Please help!
@Hooked has already answered most of your question, but I was in the middle of writing this up when he answered, so I'll post it in the hope that it's still useful...
You're trying to jump through a few too many hoops. You don't need watershed_ift.
You can use scipy.ndimage.label to differentiate separate objects in a boolean array and scipy.ndimage.find_objects to find the bounding box of each object.
Let's break things down a bit.
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
def draw_circle(grid, x0, y0, radius):
    ny, nx = grid.shape
    y, x = np.ogrid[:ny, :nx]
    dist = np.hypot(x - x0, y - y0)
    grid[dist < radius] = True
    return grid
# Generate 3 circles...
a = np.zeros((512, 512), dtype=bool)
draw_circle(a, 100, 200, 30)
draw_circle(a, 400, 350, 20)
draw_circle(a, 200, 260, 20)
# Label the objects in the array.
labels, numobjects = ndimage.label(a)
# Now find their bounding boxes (This will be a tuple of slice objects)
# You can use each one to directly index your data.
# E.g. a[slices[0]] gives you the original data within the bounding box of the
# first object.
slices = ndimage.find_objects(labels)
#-- Plotting... -------------------------------------
fig, ax = plt.subplots()
ax.imshow(a)
ax.set_title('Original Data')
fig, ax = plt.subplots()
ax.imshow(labels)
ax.set_title('Labeled objects')
fig, axes = plt.subplots(ncols=numobjects)
for ax, sli in zip(axes.flat, slices):
    ax.imshow(labels[sli], vmin=0, vmax=numobjects)
    tpl = 'BBox:\nymin:{0.start}, ymax:{0.stop}\nxmin:{1.start}, xmax:{1.stop}'
    ax.set_title(tpl.format(*sli))
fig.suptitle('Individual Objects')
plt.show()
Hopefully that makes it a bit clearer how to find the bounding boxes of the objects.
Use the ndimage library from scipy. The function label places a unique tag on each connected block of pixels above a threshold, which identifies the unique clusters (shapes). Starting with your definition of a:
from scipy import ndimage
image_threshold = .5
label_array, n_features = ndimage.label(a>image_threshold)
# Plot the resulting shapes
import pylab as plt
plt.subplot(121)
plt.imshow(a)
plt.subplot(122)
plt.imshow(label_array)
plt.show()
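To go from the labels to the individual images the question asks for, a minimal sketch (the filename pattern is my own choice) combines label with ndimage.find_objects:
import numpy as np
from scipy import ndimage

# bounding-box slices, one per labeled shape
slices = ndimage.find_objects(label_array)
for i, sli in enumerate(slices, start=1):
    # binary mask of shape i, cropped to its bounding box
    crop = (label_array[sli] == i).astype(np.uint8) * 255
    plt.imsave('shape_%d.png' % i, crop, cmap='gray')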