Python - plt.savefig() resulting in blank image without plt.show()

I am working on a crowd simulation and I tried to get a simple snapshot of the state at a given time, like this: new to the site, so here is the link. I work with Spyder, and the code works wonderfully when I display the image in IPython with plt.show(), but when I try to save the images with plt.savefig() (I removed plt.show() beforehand, so that is not the issue) I end up with blank images. Here is the code:
p, v, t = resolve()  # p[c][i] is the position vector of individual i at time t[c]
N = len(t)  # number of instant
n = len(m)  # number of individual
murs_x = [w[0] for w in W]  # wall points x coordinates
murs_y = [w[1] for w in W]  # wall points y coordinates
conv = 39.3701  # inch/m
L = longueur*conv/50  # width of figure
H = (largeur + decalage)*conv/50  # height of figure
for c in range(N):
    fig1 = plt.figure(num="aff", figsize=(L, H), dpi=200)  # arbitrary num, allow to recreate the figure
    ax = fig1.add_axes([1, 1, 1, 1])
    ax.set_axis_off()  # visual purpose
    ax.set_frame_on(False)  # visual purpose
    ax.axis([0, longueur, 0, largeur + decalage])
    ax.scatter(murs_x, murs_y, s=0.01, marker='.')
    for i in range(n):
        if p[c][i][1] <= (largeur + r[i]):  # presence condition for individual i
            ax.add_artist(plt.Circle((p[c][i][0], p[c][i][1]), r[i], alpha=1))
            # drawing of the circle representing individual i
    # here is the plt.show(), unused
    fig1.savefig(str(c) + ".png")  # trying to save instant c visual representation
    fig1.clf()
Moreover, without the two "visual purpose" lines, the images are not totally blank but rather look like this: another link.
I first attempted to use matplotlib.animation to create a video, but I had the same issue of a blank video with two cropped zeros in the upper right corner. I suppose the issue is linked to the artist class (I had better results using scattered points instead of circles to represent each individual), but I am a beginner and do not know how to handle it precisely. At least the size of the image is the expected one.
Thanks for reading this.
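A likely culprit (an educated guess, not verified against the original data): fig.add_axes takes [left, bottom, width, height] in figure fractions, so add_axes([1, 1, 1, 1]) places the axes entirely outside the figure and the saved file comes out empty. A minimal sketch with dummy data, assuming that diagnosis:
import matplotlib.pyplot as plt

positions = [(1.0, 1.0), (2.5, 0.5), (4.0, 1.5)]  # dummy individuals, not the simulation's data
radius = 0.3

fig = plt.figure(figsize=(4, 2), dpi=200)
ax = fig.add_axes([0, 0, 1, 1])   # axes filling the whole figure instead of sitting outside it
ax.set_axis_off()
ax.axis([0, 5, 0, 2])
for x, y in positions:
    ax.add_artist(plt.Circle((x, y), radius, alpha=1))
fig.savefig("frame0.png")         # the circles show up in the saved PNG
plt.close(fig)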

Related

How to label inner/outer of contour when slice/rasterize 3D objects to image stack?

For 3D printing, we slice digital objects into image stacks in order to build them layer by layer with a 3D printer. Once the slicing is done, how can we label the inner/outer regions to set the solid parts?
The STL model:
The Slices:
Sample of one image stack (sliced):
but the need is to keep or label the inner/outer of the contours, say the inner is black so the 3D printer will print it and skip the white outer. The goal is to fill the inside of the contours, as in the following image:
Try 1
import numpy as np
import pyvista as pv

mesh = pv.read('./haus.stl')
slices = mesh.slice_along_axis(n=20, axis='z', progress_bar=True)

# show a single slice with camera settings
slices[15].plot(cpos=[0, 1, 1], line_width=5, parallel_projection=True)

# save slices (outcome is as the step-3 image stack)
for i in range(20):
    p = pv.Plotter(off_screen=True)
    p.add_mesh(slices[i])
    p.camera_position = 'zy'
    p.enable_parallel_projection()
    im_name = "im_slice_" + str(i) + ".jpg"
    p.screenshot(im_name)

# Try voxelize (as in the answer from https://stackoverflow.com/questions/75300529)
voxels = pv.voxelize(mesh, density=mesh.length / 100)

# Try pv.Plane() (not tested yet)
plane = pv.Plane()
plane.compute_implicit_distance(mesh, inplace=True)
np.sign(plane.point_data['implicit_distance'])
#i_resolution=?, j_resolution=?

# Try vtk (not tested yet)
# https://stackoverflow.com/questions/68191368
voxelize model:
voxelize sliced:
but the voxelized slices don't seem very suitable; a very fine mesh would need to be built to recover the boundaries.
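As an aside (untested against haus.stl, and not one of the tries above): pyvista can also classify a regular grid of probe points as inside/outside a closed surface, which gives a binary solid/empty mask per slice without voxelizing the whole mesh. A rough sketch:
import numpy as np
import pyvista as pv

mesh = pv.read('./haus.stl')
xmin, xmax, ymin, ymax, zmin, zmax = mesh.bounds

# regular grid of probe points on one z-plane (resolution chosen arbitrarily)
nx, ny = 200, 200
x = np.linspace(xmin, xmax, nx)
y = np.linspace(ymin, ymax, ny)
z = 0.5 * (zmin + zmax)
xx, yy = np.meshgrid(x, y)
points = pv.PolyData(np.column_stack([xx.ravel(), yy.ravel(), np.full(xx.size, z)]))

# mark which probe points fall inside the closed STL surface
enclosed = points.select_enclosed_points(mesh, check_surface=False)
mask = enclosed['SelectedPoints'].reshape(ny, nx)  # 1 = inside (solid), 0 = outside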
Try 2 VTK example
show STL:
Just add STL reader and Mapper:
filename = './haus.stl'
reader = vtkSTLReader()
reader.SetFileName(filename)
reader.Update()
stlMapper = vtk.vtkPolyDataMapper()
stlMapper.SetInputConnection(reader.GetOutputPort())
polydata = stlMapper
print("Get GetOrigin", polydata.GetCenter())
sphereSource = reader
slice result:
Try 2 almost does the job, but I cannot figure out the SetExtent/SetOrigin effect. The output images all fit to the contours' dimensions, so each output image's WxH is not identical.
Try 3 3D Slicer example
Only some code is changed, as follows:
inputModelFile = "./data/haus.stl"
outputDir = "./outputs/"
...
for i in range(80, 140, 10):
    imageio.imwrite(f"{outputDir}/image_{i:03}.jpg", 255 - outputLabelmapVolumeArray[i])  # Inverting Colors
The result seems acceptable, but the code still needs revising later to match the resolution, position, spacing, etc. So, is there a leaner and more efficient way to automate similar work?
You may want to try out the combination of vtkFeatureEdges, vtkStripper and vtkTriangleFilter, e.g.:
from vedo import *
msh = Mesh('https://vedo.embl.es/examples/data/cow.vtk')
slices = []
for z in np.arange(-.50, .50, .15):
    line = msh.clone().cut_with_plane(origin=(0, 0, z), normal='z')
    cap = line.cap(True)
    slices.append(cap)
show(slices, msh.alpha(0.1), axes=1)
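If the filled caps are also needed as an image stack on disk (an extra step I'm assuming, not part of the answer above), each cap can be rendered off-screen and screenshotted, for example:
import numpy as np
from vedo import Mesh, show

msh = Mesh('https://vedo.embl.es/examples/data/cow.vtk')
for k, z in enumerate(np.arange(-.50, .50, .15)):
    cap = msh.clone().cut_with_plane(origin=(0, 0, z), normal='z').cap(True)
    plt = show(cap.color('black'), bg='white', offscreen=True)  # off-screen render of one filled slice
    plt.screenshot(f"slice_{k:03}.png")  # arbitrary file naming
    plt.close()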

matplotlib pcolormesh behaviour with large arrays

I am trying to plot an array, but I run into a problem when the array gets too large. It also seems to depend on whether or not the data is monotonic. In the example below, I increase the size of the array that I am plotting. The expected behaviour is the same image on the left- and right-hand sides, but this is not so. Furthermore, the size of the array I am able to plot correctly seems to depend on the size of the axes itself. When the figures are exported there is no problem; it is only the on-screen display that looks like the screenshot below:
len_xs = [4850, 4860, 4870]  # sizes to try and plot
for j in np.arange(3):
    x = np.arange(len_xs[j])  # dummy x variable
    y = 2**np.linspace(-4.18676100, 1.39659793, 538)  # log2 spaced y variable
    z = np.random.random((len(y), len(x)))  # random variable to plot
    plt.subplot(3, 2, j*2+1)
    if j == 0: plt.title('Right way up')
    plt.pcolormesh(x, y, z, shading='nearest')
    plt.gca().set_yscale('log', base=2)
    plt.subplot(3, 2, j*2+2)
    if j == 0: plt.title('Upside down')
    plt.pcolormesh(x, y[::-1], z[::-1], shading='nearest')  # flip y and z
    plt.gca().set_yscale('log', base=2)
    plt.text(1000, 1, 'nx = %g' % len_xs[j], bbox=dict(facecolor='white', alpha=0.8))  # annotate size of x
plt.savefig('demo.png', dpi=600)
The cause is unknown, but since the saved image (which is correct) uses a high DPI, setting the display DPI to match makes the on-screen figure render with the same content as the saved file:
plt.rcParams['figure.dpi'] = 600
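Equivalently (my own note, not from the answer above), the DPI can be set per figure instead of globally, as long as it matches the DPI passed to savefig:
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(8, 6), dpi=600)  # display DPI matches the save DPI
# ... pcolormesh calls as above ...
fig.savefig('demo.png', dpi=600)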

Mayavi Contour 3d

If I plot 3D data using the contour3d option of Mayavi, there are 3 default contours, but how are they spaced? I understand the number of contours can be changed, but can they be placed at user-specified values (I would guess that is possible)? I would like to know how the 3 default contours are drawn: does it depend on the maximum value of the scalar and on how it is distributed?
As it happens I just had the same problem and found a solution.
Here is some sample code:
import numpy as np
from mayavi import mlab
from mayavi.api import Engine

def fun(x, y, z):
    return np.cos(x) * np.cos(y) * np.cos(z)

# grid of points to evaluate the function on (the range here is chosen arbitrarily,
# as the original snippet did not define x, y, z)
x, y, z = np.mgrid[-2:2:50j, -2:2:50j, -2:2:50j]

# create engine and assign figure to it
engine = Engine()
engine.start()
fig = mlab.figure(figure=None, engine=engine)
contour3d = mlab.contour3d(x, y, z, fun, figure=fig)
scene = engine.scenes[0]

# get a handle for the plot
iso_surface = scene.children[0].children[0].children[0]

# the following line will print everything that you can modify on that object
iso_surface.contour.print_traits()

# now let's modify the number of contours and the min/max
# you can also do these steps manually in the mayavi pipeline editor
iso_surface.compute_normals = False  # without this only 1 contour will be displayed
iso_surface.contour.number_of_contours = 2
iso_surface.contour.minimum_contour = -1.3
iso_surface.contour.maximum_contour = 1.3
Now about the meaning of the contours. Well, the number obviously says how many contours are created. The min/max values then define a linear space over which the contours are spread. The compute_normals setting should basically influence the shrinkage/expansion along the surface normals.
Edit: Here's a tip. When you've got your plot window, click on the mayavi pipeline icon in the top left. There you can modify your object (usually the lowest one in the tree). When you press the red record button and start modifying things, it will give you the corresponding lines of code.
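To answer the user-specified-values part directly: as far as I know (this is the standard mlab keyword rather than something shown in the answer above), contour3d also accepts an explicit list of contour values via its contours argument. A minimal sketch:
import numpy as np
from mayavi import mlab

x, y, z = np.mgrid[-2:2:50j, -2:2:50j, -2:2:50j]
s = np.cos(x) * np.cos(y) * np.cos(z)

# passing a list to `contours` draws iso-surfaces at exactly these values
mlab.contour3d(x, y, z, s, contours=[-0.5, 0.0, 0.5])
mlab.show()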

How to remove/omit smaller contour lines using matplotlib

I am trying to plot contour lines of pressure level. I am using a netCDF file which contains high-resolution data (the grid spacing ranges from 3 km to 27 km). Due to the high-resolution data set, I get a lot of pressure values that do not need to be plotted (or rather, I don't mind omitting certain contour lines of insignificant values). I have written a plotting script based on the examples given in this link http://matplotlib.org/basemap/users/examples.html.
After plotting, the image looks like this
In the image I have encircled the contours which are small and not required to be plotted. I would also like to plot all the contour lines more smoothly, as mentioned in the above image. Overall I would like to get a contour image like this:
Possible solutions I can think of are:
Find out the number of points required for plotting each contour and mask/omit those lines if the number is small.
or
Find the area of each contour (as I want to omit only the circled contours) and omit/mask the smaller ones (a sketch of computing such an area appears right after this list).
or
Reduce the resolution (only contour) by increasing the distance to 50 km - 100 km.
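Here is that area sketch (my own addition; the answers below use a diameter estimate instead): the area of a closed contour line can be computed from its vertex array with the shoelace formula.
import numpy as np

def contour_area(verts):
    """Absolute area of a closed polygon given its (N, 2) vertex array,
    e.g. path.vertices from a matplotlib contour path (shoelace formula)."""
    x, y = verts[:, 0], verts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# quick check: a unit square has area 1
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])
print(contour_area(square))  # 1.0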
I am able to successfully get the points using SO thread Python: find contour lines from matplotlib.pyplot.contour()
But I am not able to implement any of the solutions suggested above using those points.
Any help implementing the suggested solutions is really appreciated.
Edit:-
# Andras Deak
I used a print 'diameter is ', diameter line just above the del(level.get_paths()[kp]) line to check whether the code filters out the required diameters. Here are the filtered messages when I set if diameter < 15000:
diameter is 9099.66295612
diameter is 13264.7838257
diameter is 445.574234531
diameter is 1618.74618114
diameter is 1512.58974168
However the resulting image shows no effect; everything looks the same as the image posted above. I am pretty sure that I saved the figure (after plotting the wind barbs).
Regarding the solution for reducing the resolution, plt.contour(x[::2,::2], y[::2,::2], mslp[::2,::2]) works. I still have to apply some filter to make the curves smooth.
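(For reference, and purely as an illustration rather than something from the thread: scipy.ndimage, which the full script below already imports, has a Gaussian filter that can be applied to the field before contouring. Sketch with stand-in data:)
import numpy as np
import scipy.ndimage

# smooth the pressure field before contouring; sigma (in grid points) is an
# arbitrary choice here and needs tuning for the real data
mslp_demo = np.random.random((100, 100))            # stand-in for the real mslp array
mslp_smooth = scipy.ndimage.gaussian_filter(mslp_demo, sigma=3)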
Full working example code for removing lines:-
Here is the example code for your review
#!/usr/bin/env python
from netCDF4 import Dataset
import matplotlib
matplotlib.use('agg')
import matplotlib.pyplot as plt
import numpy as np
import scipy.ndimage
from mpl_toolkits.basemap import interp
from mpl_toolkits.basemap import Basemap

# Set default map
west_lon = 68
east_lon = 93
south_lat = 7
north_lat = 23

nc = Dataset('ncfile.nc')
# Get this variable for later calculation
temps = nc.variables['T2']
time = 0  # We will take only the first interval for this example

# Draw basemap
m = Basemap(projection='merc', llcrnrlat=south_lat, urcrnrlat=north_lat,
            llcrnrlon=west_lon, urcrnrlon=east_lon, resolution='l')
m.drawcoastlines()
m.drawcountries(linewidth=1.0)

# This sets the standard grid point structure at full resolution
x, y = m(nc.variables['XLONG'][0], nc.variables['XLAT'][0])

# Set figure margins
width = 10
height = 8
plt.figure(figsize=(width, height))
plt.rc("figure.subplot", left=.001)
plt.rc("figure.subplot", right=.999)
plt.rc("figure.subplot", bottom=.001)
plt.rc("figure.subplot", top=.999)
plt.figure(figsize=(width, height), frameon=False)

# Convert Surface Pressure to Mean Sea Level Pressure
stemps = temps[time] + 6.5 * nc.variables['HGT'][time] / 1000.
mslp = nc.variables['PSFC'][time] * np.exp(9.81 / (287.0 * stemps) * nc.variables['HGT'][time]) * 0.01 + (
    6.7 * nc.variables['HGT'][time] / 1000)

# Contour only at 2 hPa intervals
level = []
for i in range(int(mslp.min()), int(mslp.max()), 1):
    if i % 2 == 0:
        if i >= 1006 and i <= 1018:
            level.append(i)

# Save mslp values to upload to the SO thread
# np.savetxt('mslp.txt', mslp, fmt='%.14f', delimiter=',')

P = plt.contour(x, y, mslp, V=2, colors='b', linewidths=2, levels=level)

# Solution suggested by Andras Deak
for level in P.collections:
    for kp, path in enumerate(level.get_paths()):
        # include test for "smallness" of your choice here:
        # I'm using a simple estimation for the diameter based on the
        # x and y diameter...
        verts = path.vertices  # (N,2)-shape array of contour line coordinates
        diameter = np.max(verts.max(axis=0) - verts.min(axis=0))
        if diameter < 15000:  # threshold to be refined for your actual dimensions!
            #print 'diameter is ', diameter
            del(level.get_paths()[kp])  # no remove() for Path objects :(
            #level.remove()  # This does not work. produces ValueError: list.remove(x): x not in list
plt.gcf().canvas.draw()
plt.savefig('dummy', bbox_inches='tight')
plt.close()
After the plot is saved I get the same image
You can see that the lines are not removed yet. Here is the link to mslp array which we are trying to play with http://www.mediafire.com/download/7vi0mxqoe0y6pm9/mslp.txt
If you want the x and y data used in the above code, I can upload them for your review.
Smooth line
Your code to remove the smaller circles is working perfectly. However, the other issue I asked about in the original post (smooth lines) does not seem to be solved. I used your code to slice the array down to a minimal size and contoured it. I used the following code to reduce the array size:
slice = 15
CS = plt.contour(x[::slice,::slice],y[::slice,::slice],mslp[::slice,::slice], colors='b', linewidths=1, levels=levels)
The result is below.
After searching for a few hours I found this SO thread with a similar issue:
Regridding regular netcdf data
But none of the solutions provided there works. The questions similar to mine do not have proper solutions. If this issue is solved then the code is perfect and complete.
General idea
Your question seems to have 2 very different halves: one about omitting small contours, and another one about smoothing the contour lines. The latter is simpler, since I can't really think of anything else other than decreasing the resolution of your contour() call, just like you said.
As for removing a few contour lines, here's a solution which is based on directly removing contour lines individually. You have to loop over the collections of the object returned by contour(), and for each element check each Path, and delete the ones you don't need. Redrawing the figure's canvas will get rid of the unnecessary lines:
# dummy example based on matplotlib.pyplot.clabel example:
import matplotlib
import numpy as np
import matplotlib.cm as cm
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt

delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, Y = np.meshgrid(x, y)
Z1 = mlab.bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)
Z2 = mlab.bivariate_normal(X, Y, 1.5, 0.5, 1, 1)
# difference of Gaussians
Z = 10.0 * (Z2 - Z1)

plt.figure()
CS = plt.contour(X, Y, Z)
for level in CS.collections:
    for kp, path in reversed(list(enumerate(level.get_paths()))):
        # go in reversed order due to deletions!
        # include test for "smallness" of your choice here:
        # I'm using a simple estimation for the diameter based on the
        # x and y diameter...
        verts = path.vertices  # (N,2)-shape array of contour line coordinates
        diameter = np.max(verts.max(axis=0) - verts.min(axis=0))
        if diameter < 1:  # threshold to be refined for your actual dimensions!
            del(level.get_paths()[kp])  # no remove() for Path objects :(

# this might be necessary on interactive sessions: redraw figure
plt.gcf().canvas.draw()
Here's the original (left) and the removed version (right) for a diameter threshold of 1 (note the little piece of the 0 level at the top):
Note that the little line at the top is removed while the huge cyan one in the middle isn't, even though both correspond to the same collections element, i.e. the same contour level. If we didn't want to allow this, we could've called CS.collections[k].remove(), which would probably be a much safer way of doing the same thing (but it wouldn't allow us to differentiate between multiple lines corresponding to the same contour level).
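For completeness, a minimal sketch of that safer, whole-level variant (with dummy data; k is just a hypothetical index of the level to drop):
import numpy as np
import matplotlib.pyplot as plt

X, Y = np.meshgrid(np.linspace(-2, 2, 100), np.linspace(-2, 2, 100))
Z = np.exp(-(X**2 + Y**2))
CS = plt.contour(X, Y, Z)

k = 0                        # hypothetical index of the level to remove
CS.collections[k].remove()   # drops every line belonging to that contour level
plt.gcf().canvas.draw()
plt.show()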
To show that fiddling around with the cut-off diameter works as expected, here's the result for a threshold of 2:
All in all it seems quite reasonable.
Your actual case
Since you've added your actual data, here's the application to your case. Note that you can directly generate the levels in a single line using np, which will almost give you the same result. The exact same can be achieved in 2 lines (generating an arange, then selecting those that fall between p1 and p2). Also, since you're setting levels in the call to contour, I believe the V=2 part of the function call has no effect.
import numpy as np
import matplotlib.pyplot as plt

# insert actual data here...
Z = np.loadtxt('mslp.txt', delimiter=',')
X, Y = np.meshgrid(np.linspace(0, 300000, Z.shape[1]), np.linspace(0, 200000, Z.shape[0]))
p1, p2 = 1006, 1018

# this is almost the same as the original, although it will produce
# [p1, p1+2, ...] instead of [Z.min()+n, Z.min()+n+2, ...]
levels = np.arange(np.maximum(Z.min(), p1), np.minimum(Z.max(), p2), 2)

# control
plt.figure()
CS = plt.contour(X, Y, Z, colors='b', linewidths=2, levels=levels)

# modified
plt.figure()
CS = plt.contour(X, Y, Z, colors='b', linewidths=2, levels=levels)
for level in CS.collections:
    for kp, path in reversed(list(enumerate(level.get_paths()))):
        # go in reversed order due to deletions!
        # include test for "smallness" of your choice here:
        # I'm using a simple estimation for the diameter based on the
        # x and y diameter...
        verts = path.vertices  # (N,2)-shape array of contour line coordinates
        diameter = np.max(verts.max(axis=0) - verts.min(axis=0))
        if diameter < 15000:  # threshold to be refined for your actual dimensions!
            del(level.get_paths()[kp])  # no remove() for Path objects :(

# this might be necessary on interactive sessions: redraw figure
plt.gcf().canvas.draw()
plt.show()
Results, original(left) vs new(right):
Smoothing by resampling
I've decided to tackle the smoothing problem as well. All I could come up with is downsampling your original data, then upsampling again using griddata (interpolation). The downsampling part could also be done with interpolation, although the small-scale variation in your input data might make this problem ill-posed. So here's the crude version:
import scipy.interpolate as interp  # the new one

# assume you have X, Y, Z, levels defined as before
# start resampling stuff
dN = 10  # use every dN'th element of the gridded input data
my_slice = (slice(None, None, dN), slice(None, None, dN))

# downsampled data
X2, Y2, Z2 = X[my_slice], Y[my_slice], Z[my_slice]
# same as X2 = X[::dN,::dN] etc.

# upsampling with griddata over the original mesh
Zsmooth = interp.griddata(np.array([X2.ravel(), Y2.ravel()]).T, Z2.ravel(), (X, Y), method='cubic')

# plot
plt.figure()
CS = plt.contour(X, Y, Zsmooth, colors='b', linewidths=2, levels=levels)
You can freely play around with the grids used for interpolation, in this case I just used the original mesh, as it was at hand. You can also play around with different kinds of interpolation: the default 'linear' one will be faster, but less smooth.
Result after downsampling(left) and upsampling(right):
Of course you should still apply the small-line-removal algorithm after this resampling business, and keep in mind that this heavily distorts your input data (since if it wasn't distorted, then it wouldn't be smooth). Also, note that due to the crude method used in the downsampling step, we introduce some missing values near the top/right edges of the region under consideration. If this is a problem, you should consider doing the downsampling based on griddata as I've noted earlier.
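For reference, a rough sketch of that griddata-based downsampling step (my own addition; it assumes X, Y, Z are defined as before), which avoids the missing values at the edges:
import numpy as np
import scipy.interpolate as interp

# downsample onto a coarse regular grid covering the same region
nx_coarse, ny_coarse = 30, 20   # arbitrary coarse resolution, tune as needed
Xc, Yc = np.meshgrid(np.linspace(X.min(), X.max(), nx_coarse),
                     np.linspace(Y.min(), Y.max(), ny_coarse))
Zc = interp.griddata((X.ravel(), Y.ravel()), Z.ravel(), (Xc, Yc), method='linear')

# then upsample back onto the original (X, Y) mesh as in the snippet above
Zsmooth = interp.griddata((Xc.ravel(), Yc.ravel()), Zc.ravel(), (X, Y), method='cubic')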
This is a pretty bad solution, but it's the only one that I've come up with. Use the get_contour_verts function in this solution you linked to, possibly with the matplotlib._cntr module so that nothing gets plotted initially. That gives you a list of contour lines, sections, vertices, etc. Then you have to go through that list and pop the contours you don't want. You could do this by calculating a minimum diameter, for example; if the max distance between points is less than some cutoff, throw it out.
That leaves you with a list of LineCollection objects. Now if you make a Figure and Axes instance, you can use Axes.add_collection to add all of the LineCollections in the list.
I checked this out really quick, but it seemed to work. I'll come back with a minimum working example if I get a chance. Hope it helps!
Edit: Here's an MWE of the basic idea. I wasn't familiar with plt._cntr.Cntr, so I ended up using plt.contour to get the initial contour object. As a result, you end up making two figures; you just have to close the first one. You can replace checkDiameter with whatever function works. I think you could turn the line segments into a Polygon and calculate areas, but you'd have to figure that out on your own. Let me know if you run into problems with this code, but it at least works for me.
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

def checkDiameter(seg, tol=.3):
    # Function for screening line segments. NB: Not actually a proper diameter.
    diam = (seg[:, 0].max() - seg[:, 0].min(),
            seg[:, 1].max() - seg[:, 1].min())
    return not (diam[0] < tol or diam[1] < tol)

# Create testing data
x = np.linspace(-1, 1, 21)
xx, yy = np.meshgrid(x, x)
z = np.exp(-(xx**2 + .5*yy**2))

# Original plot with plt.contour
fig0, ax0 = plt.subplots()
# Make sure this contour object actually has a tiny contour to remove
cntrObj = ax0.contour(xx, yy, z, levels=[.2, .4, .6, .8, .9, .95, .99, .999])

# Primary loop: Copy contours into a new LineCollection
lineNew = list()
for lineOriginal in cntrObj.collections:
    # Get properties of the original LineCollection
    segments = lineOriginal.get_segments()
    propDict = lineOriginal.properties()
    propDict = {key: value for (key, value) in propDict.items()
                if key in ['linewidth', 'color', 'linestyle']}  # Whatever parameters you want to carry over
    # Filter out the lines with small diameters
    segments = [seg for seg in segments if checkDiameter(seg)]
    # Create new LineCollection out of the OK segments
    if len(segments) > 0:
        lineNew.append(mpl.collections.LineCollection(segments, **propDict))

# Make a new plot with only these line collections; display results
fig1, ax1 = plt.subplots()
ax1.set_xlim(ax0.get_xlim())
ax1.set_ylim(ax0.get_ylim())
for line in lineNew:
    ax1.add_collection(line)
plt.show()
FYI: The bit with propDict is just to automate bringing over some of the line properties from the original plot. You can't use the whole dictionary at once, though. First, it contains the old plot's line segments, but you can just swap those for the new ones. But second, it appears to contain a number of parameters that are in conflict with each other: multiple linewidths, facecolors, etc. The {key for key in propDict if I want key} workaround is my way to bypass that, but I'm sure someone else can do it more cleanly.

Correct method and Python package that can find width of an image's feature

The input is a spectrum with colorful (sorry) vertical lines on a black background. Given the approximate x coordinate of that band (as marked by X), I want to find the width of that band.
I am unfamiliar with image processing. Please direct me to the correct method of image processing and a Python image processing package that can do the same.
I am thinking of PIL; OpenCV gives me the impression of being overkill for this particular application.
What if I want to make this an expert system that can classify them in the future?
I'll give a complete minimal working example (as suggested by sega_sai). I don't have access to your original image, but you'll see it doesn't really matter! The peak distributions found by the code below are:
Mean values at: 26.2840960523 80.8255092125
import numpy as np
from PIL import Image
from scipy.optimize import leastsq
import matplotlib.pyplot as plt

# Load the picture with PIL, process if needed
pic = np.asarray(Image.open("band2.png"))

# Average over the colour channels, then project onto the x axis
pic_avg = pic.mean(axis=2)
projection = pic_avg.sum(axis=0)

# Set the min value to zero for a nice fit
projection /= projection.mean()
projection -= projection.min()

# Fit function, two Gaussians, adjust as needed
def fitfunc(p, x):
    return p[0]*np.exp(-(x-p[1])**2/(2.0*p[2]**2)) + \
           p[3]*np.exp(-(x-p[4])**2/(2.0*p[5]**2))

errfunc = lambda p, x, y: fitfunc(p, x) - y

# Use scipy to fit, p0 is the initial guess
p0 = np.array([0, 20, 1, 0, 75, 10])
X = np.arange(len(projection))
p1, success = leastsq(errfunc, p0, args=(X, projection))
Y = fitfunc(p1, X)

# Output the result
print("Mean values at:", p1[1], p1[4])

# Plot the result
plt.subplot(211)
plt.imshow(pic)
plt.subplot(223)
plt.plot(projection)
plt.subplot(224)
plt.plot(X, Y, 'r', lw=5)
plt.show()
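Since the question asks for the band width rather than its centre, a small follow-up (my own addition, reusing the fitted p1 from the code above) converts each fitted sigma into a full width at half maximum:
import numpy as np

# FWHM of a Gaussian = 2*sqrt(2*ln 2)*sigma, roughly 2.3548*sigma
fwhm_1 = 2 * np.sqrt(2 * np.log(2)) * abs(p1[2])
fwhm_2 = 2 * np.sqrt(2 * np.log(2)) * abs(p1[5])
print("Band widths (FWHM, in pixels):", fwhm_1, fwhm_2)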
Below is a simple thresholding method to find the lines and their widths; it should work quite reliably for any number of lines. The yellow and black image below was processed using this script; the red/black plot illustrates the found lines using parameters of threshold = 0.3 and min_line_width = 5.
The script averages the rows of an image, and then determines the basic start and end positions of each line based on a threshold (which you can set between 0 and 1) and a minimum line width (in pixels). By using thresholding and a minimum line width you can easily filter your input images to get the lines out of them. The first function, find_lines, returns all the lines in an image as a list of tuples containing the start, end, center, and width of each line. The second function, find_closest_band_width, is called with the specified x_position and returns the width of the line closest to this position (assuming you want the distance to the centre of each line). As the lines are saturated (255 cut-off per channel), their cross-sections are not far from a uniform distribution, so I don't believe trying to fit any kind of distribution would really help much; it just unnecessarily complicates things.
from PIL import Image, ImageStat

def find_lines(image_file, threshold, min_line_width):
    im = Image.open(image_file)
    width, height = im.size
    hist = []
    lines = []
    start = end = 0
    for x in range(width):
        column = im.crop((x, 0, x + 1, height))
        stat = ImageStat.Stat(column)
        ## normalises by 2 * 255 as in your example the colour is yellow
        ## if your images start using white lines change this to 3 * 255
        hist.append(sum(stat.sum) / (height * 2 * 255))
    for index, value in enumerate(hist):
        if value > threshold and end >= start:
            start = index
        if value < threshold and end < start:
            if index - start < min_line_width:
                start = 0
            else:
                end = index
                center = start + (end - start) / 2.0
                width = end - start
                lines.append((start, end, center, width))
    return lines

def find_closest_band_width(x_position, lines):
    distances = [((value[2] - x_position) ** 2) for value in lines]
    index = distances.index(min(distances))
    return lines[index][3]

## set your threshold, and min_line_width for finding lines
lines = find_lines("8IxWA_sample.png", 0.7, 4)
## sets x_position to the 59th pixel
print('width of nearest line:', find_closest_band_width(59, lines))
I don't think that you need anything fancy for your particular task.
I would just use PIL + scipy. That should be enough, because you essentially need to take your image, make a 1D projection of it, and then fit a Gaussian or something like that to it. The information about the approximate location of the band should be used as a first guess for the fitter.
