Checking for overlap of pixels from two different images - python

I have a color picture with red and blue colors. I separated the blue and red color signals and created black and white images from them, as such:
first image and second image
Now, I want to see if the white spots in the second image overlap the squiggly lines in the first image.
My approach is the following:
First, detect the center coordinates of the white spots in the 2nd image, avoiding the big white clusters. I only care about the white spots that are in the vicinity of the squiggly lines in the first image.
Then use the following MATLAB code to see whether a white spot is on top of the squiggly lines from the first image.
The code is courtesy of rayryeng:
val = 0;   % Value to match
count = 0; % Number of coordinates whose neighbourhood contains a match
N = 50;    % Radius of neighbourhood
% Generate 2D grid of coordinates
[x, y] = meshgrid(1 : size(img, 2), 1 : size(img, 1));
% For each coordinate to check...
for kk = 1 : size(coord, 1)
    a = coord(kk, 1); b = coord(kk, 2); % Get the pixel locations
    mask = (x - a).^2 + (y - b).^2 <= N*N; % Mask of valid locations within the neighbourhood
    pix = img(mask); % Get the valid pixels
    count = count + any(pix(:) == val); % Add either 0 or 1, depending on whether we found any matching pixels
end
Where I am stuck: I'm having trouble detecting the center points of the white spots in the 2nd image, especially because I want to avoid the clusters of white spots. I just want to detect the spots that are in the vicinity of the squiggly lines in the first image.
I am willing to try any language that has good image analysis libraries. How do I go about this?
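Since any language with decent image libraries is fair game: the MATLAB neighbourhood check above translates almost line for line to NumPy. A minimal sketch, assuming a grayscale line image and made-up filename and spot coordinates:
import numpy as np
import cv2

img = cv2.imread('lines.png', cv2.IMREAD_GRAYSCALE)  # hypothetical: the first (squiggly-line) image
coord = np.array([[120, 45], [300, 220]])            # hypothetical (x, y) spot centres
val, N = 0, 50                                       # value to match, neighbourhood radius

y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]      # 2D grid of coordinates
count = 0
for a, b in coord:
    mask = (x - a)**2 + (y - b)**2 <= N*N            # disk of radius N around (a, b)
    count += int((img[mask] == val).any())           # add 1 if the disk touches a matching pixel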

This seemed to work pretty well:
pos = rp.WeightedCentroid; % all positions (rp comes from regionprops)
for ct = size(pos,1):-1:1 % check them backwards so deleting rows is safe
    d = sum((pos - pos(ct,:)).^2, 2); % squared distance to all points (kd-trees could be used for speedup)
    d(ct) = []; % drop the zero distance of the point to itself
    if min(d) < 50^2, pos(ct,:) = []; end % remove the point if any other point is too close
end
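A rough Python/OpenCV equivalent of this centroid-plus-pruning step might look as follows; a sketch, assuming the spots image is binary and taking 200 px as an arbitrary cutoff for the "big clusters":
import cv2
import numpy as np

spots = cv2.imread('spots.png', cv2.IMREAD_GRAYSCALE)        # hypothetical: the second image
_, bw = cv2.threshold(spots, 127, 255, cv2.THRESH_BINARY)

n, labels, stats, centroids = cv2.connectedComponentsWithStats(bw)
max_area = 200                                               # assumed cutoff for "big clusters"
pos = np.array([centroids[i] for i in range(1, n)            # label 0 is the background
                if stats[i, cv2.CC_STAT_AREA] < max_area])

keep = []                                                    # prune points with a close neighbour
for i, p in enumerate(pos):
    d = np.sum((pos - p)**2, axis=1)                         # squared distances to all points
    d[i] = np.inf                                            # ignore the distance to itself
    if d.min() >= 50**2:
        keep.append(p)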

Related

Finding patches of certain colors with a size smaller than x (x = number of pixels)

Segmentation
Blue Mask
In this example you can see the Segmentation and a mask showing all the places where the Segmentation has the color blue (0,0,155,255).
There is some blue noise, represented as small blue streaks between the green and red areas and between the green and orange areas.
I would like to remove a blue segment if its area is smaller than, let's say, 50 pixels, and replace it with the color surrounding the blue area, without mixing any colors. The end result should only contain the 6 original colors.
Ideally I would like to perform this process for all 6 colors in the image.
How would I go about this? Is there any built-in function that can do this?
I would apply findContours on the (thresholded) masks, one per color, and collect the segmented representations. Then render each of the colors separately, as you've done with the blue mask.
Then I'd use these functions https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_features/py_contour_features.html#contour-features
E.g. filter by area = cv2.contourArea(cnt) to mark the small regions.
That is, iterate the contours, compare if area < ..., and collect them:
For each selected small region, you can check the surroundings to see which colors are adjacent. That could be done e.g. by sampling some point from the contour (it is a list of coordinates) and scanning in various directions, comparing the color until finding a different one. That could be helped by finding the extreme points and starting from there; see below:
import cv2

# ... produce a masked image for each color, put them in masks = [] ...
# ... colors = [] ... per each mask/segmented region etc.
for m in masks:
    bw = cv2.cvtColor(m, cv2.COLOR_BGR2GRAY)
    ret, thresh = cv2.threshold(bw, 127, 255, 0)  # or whatever appropriate params
    contours, hierarchy = cv2.findContours(thresh, 1, 2)
    streaks = []
    for c in contours:
        if cv2.contourArea(c) < minSize:  # minSize: your area cutoff
            streaks.append(c)
            # or process directly; maybe a function here, or simplify the contour
            # to reduce the number of points:
            # epsilon = 0.1 * cv2.arcLength(c, True)
            # approx = cv2.approxPolyDP(c, epsilon, True)
            for pt in c:  # or iterate the approx, or skip points ... one point may be enough
                x, y = pt[0]
                # check with an offset... scan in some direction until a black/different
                # color is found, or pointPolygonTest() is False, etc.
The extreme points can be found, which may make the scanning more efficient (c is a contour):
leftmost = tuple(c[c[:,:,0].argmin()][0])
rightmost = tuple(c[c[:,:,0].argmax()][0])
So, having the leftmost coordinate of the contour, the scan should go to the left, and for the rightmost one to the right, etc. There are border cases when a small region is near the border of the image; then the search should iterate over the directions.
Then you can change the color of these small regions to that adjacent one - either in the representation (some class or a tuple) or directly in the image with cv2.fillPoly(...). fillPoly may be used to reconstruct the image of the segmentation.
There could be several adjacent areas with different colours, so if it matters which colour to select, it needs more specifications, e.g. comparing the areas of these adjacent ones and selecting the bigger/smaller one, random etc.
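To pick the replacement colour concretely, one option (a sketch of my own, not from the answer above) is to dilate a filled mask of the small contour and take the most common colour on the resulting ring just outside it:
import cv2
import numpy as np

def dominant_neighbour_colour(img, contour):
    # Return the most common colour on a thin ring around `contour`
    mask = np.zeros(img.shape[:2], np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, -1)             # fill the small region
    ring = cv2.dilate(mask, np.ones((5, 5), np.uint8)) - mask  # ~2-px border ring
    pixels = img[ring > 0]                                     # colours just outside the region
    colours, counts = np.unique(pixels.reshape(-1, 3), axis=0, return_counts=True)
    return tuple(int(v) for v in colours[counts.argmax()])
This avoids the direction-by-direction scanning, and border cases near the image edge take care of themselves.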
Finding a suitable algorithm to fill in the small contours with the colours around them seemed too complicated, so I came up with this (with the help of Todor):
import os
import numpy as np
from PIL import Image
import time
import cv2
from joblib import Parallel, delayed
root = 'Mask/'
files = os.listdir(root)
def despeckling(file):
    # for file in files: -> if you don't want to use multiple threads to compute this
    imgpath = os.path.join(root, file)
    img1 = Image.open(imgpath)  # open the file
    img1 = img1.convert("RGB")  # convert to RGB
    pixels1 = img1.load()
    # blue -> green
    newimgarray0 = np.array(img1)  # writable copy of the original, for fillPoly below
    for y in range(img1.size[1]):  # turn img1 into a binary image containing...
        for x in range(img1.size[0]):
            if pixels1[x, y] != (0, 0, 155):  # ...the color you want to isolate, on...
                pixels1[x, y] = (0, 0, 0)  # ...the background color (black)
    img1arr = np.array(img1)
    grayarr1 = cv2.cvtColor(img1arr, cv2.COLOR_RGB2GRAY)  # convert to grayscale, as cv2.findContours can't process anything else
    contours1, hierarchy = cv2.findContours(grayarr1, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)  # return the contours without any extrapolation (cv2.CHAIN_APPROX_NONE), disregarding hierarchy (cv2.RETR_LIST)
    shapes1 = []  # empty list to store the contours in
    for contour1 in contours1:
        if cv2.contourArea(contour1) < 1000:  # the minimum contour area (here it is 1000)
            shapes1.append(contour1)  # store the small contours in shapes1
    newimgarray1 = cv2.fillPoly(newimgarray0, shapes1, color=(0, 174, 0))  # fill the contours deemed too small (<1000) with the next color in line
    newimg1 = Image.fromarray(newimgarray1)
    # repeat for all colors until no small patch is left
    newdir = 'despeckled/'  # the output folder (must already exist)
    newimg1.save(os.path.join(newdir, file))

run = Parallel(n_jobs=-1)(delayed(despeckling)(file) for file in files)  # parallelisation of the process
As I have 6 colors, this is the order I go through them:
blue -> green, green -> orange, orange -> red, red -> purple, purple -> blue, blue -> green, green -> orange, orange -> red, red -> purple
This way I can ensure that all the small patches end up belonging to one bigger patch.
There are for sure better ways to do this, but this was the easiest for me since I am still a noob. :D

removing background 'noise' from signal images (RGB)

I have some signal images:
As you can see, some of them contain color signals and some are just gray/black signals.
My task is to extract the pure signal on a white background only. That means I need to remove everything but the signal in the image.
I checked that the dashed lines, dotted lines, and solid lines (top and bottom) all have RGB values close to 0;0;0 (e.g. 0;0;0, 2;2;2 or 8;8;8).
Therefore, the first thing that came to my mind was to access the RGB values of each pixel and assign white wherever all three values are the same. Using this heavy computation I can extract all color signals, because the RGB values are never the same for colors like red, blue, green (or their shades, to some extent).
However, that process would also remove signals whose pixel values are the same across channels. That happens mostly with black signals (the first two samples, for example).
I also thought of extracting the signal if it keeps its horizontal and some vertical continuity, but to be honest I don't know how to write the code for it.
I am not asking for a code solution to this challenge.
I would like to have different opinions on how I can successfully extract the original signal.
I am looking forward to having your ideas, insights and sources. Thanks
Note: All of my images (about 3k) are in one folder and I am going to apply one universal algorithm to accomplish the task.
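As a side note on the "heavy computation": the equal-RGB test can be vectorized with NumPy instead of looping over pixels. A minimal sketch, assuming a BGR uint8 image, a hypothetical filename, and an arbitrary tolerance of 20:
import cv2
import numpy as np

img = cv2.imread('signal.png')                     # hypothetical filename, BGR uint8
b = img[..., 0].astype(int)
g = img[..., 1].astype(int)
r = img[..., 2].astype(int)
gray_like = (np.abs(r - g) <= 20) & (np.abs(r - b) <= 20) & (np.abs(g - b) <= 20)
cleaned = img.copy()
cleaned[gray_like] = 255                           # paint near-gray pixels (lines, grid) white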
You can find the horizontal and vertical lines using Hough transform.
After finding the lines, it's simple to erase them.
Removing the lines is only the first stage, but it looks like a good starting point...
Keeping the colored pixels (as you suggested) is also simple task.
You have mentioned you are not asking any code solution, but I decided to demonstrate my suggestion using MATLAB code:
close all
clear
origI = imread('I.png'); %Read image
I = imbinarize(rgb2gray(origI)); %Convert to binary
I = ~I; %Invert - the line color should be white.
%Apply hough transform: Find lines with angles very close to 0 degrees and with angles close to 90 degrees.
[H,theta,rho] = hough(I, 'RhoResolution', 1, 'Theta', [-0.3:0.02:0.3, -90:0.02:-89.7, 89.7:0.02:89.98]);
P = houghpeaks(H, numel(H), 'Threshold', 0.1, 'NHoodSize', [11, 1]); %Use low thresholds
lines = houghlines(I,theta,rho,P,'FillGap',25,'MinLength',200); %Fill large gaps and keep only the long lines.
%Plot the lines for debugging, and erase them by drawing black lines over them
J = im2uint8(I);
figure, imshow(I), hold on
for k = 1:length(lines)
xy = [lines(k).point1; lines(k).point2];
plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','green');
% Plot beginnings and ends of lines
plot(xy(1,1),xy(1,2),'x','LineWidth',2,'Color','yellow');
plot(xy(2,1),xy(2,2),'x','LineWidth',2,'Color','red');
% Draw black line over each line.
J = insertShape(J, 'Line', [xy(1,1), xy(1,2), xy(2,1), xy(2,2)], 'Color', 'Black');
end
%Covert J image to binary (because MATLAB function insertShape returns RGB output).
J = imbinarize(rgb2gray(J));
figure, imshow(J)
%Color mask: 1 where color is not black or white.
I = double(origI);
C = (abs(I(:,:,1) - I(:,:,2)) > 20) | (abs(I(:,:,1) - I(:,:,3)) > 20) | (abs(I(:,:,2) - I(:,:,3)) > 20);
figure, imshow(C)
%Build a mask that combines "lines" mask and "color" mask.
Mask = J | C;
Mask = cat(3, Mask, Mask, Mask);
%Put white color where mask value is 0.
K = origI;
K(~Mask) = 255;
figure, imshow(K)
Detected lines:
Result after deleting lines:
Final result:
As you can see there are still leftovers.
I applied a second iteration (same code) over the above result.
The result improved:
You may try removing the leftovers using morphological operations.
It's going to be difficult without erasing the dashed graph.
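For the morphological clean-up, a small Python/OpenCV sketch of one possibility; the 3x3 kernel is an assumption and would need tuning so the dashed graph survives:
import cv2
import numpy as np

result = cv2.imread('result.png', cv2.IMREAD_GRAYSCALE)        # hypothetical: output of the pipeline
_, bw = cv2.threshold(result, 200, 255, cv2.THRESH_BINARY_INV) # leftovers as white
opened = cv2.morphologyEx(bw, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
leftovers = cv2.subtract(bw, opened)                           # specks removed by the opening
result[leftovers > 0] = 255                                    # paint them white in the result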
Iterating all the PNG image files:
Place the code in an m file (MATLAB script file).
Place the m file in the same folder of the PNG image files.
Here is the code:
%ExtractSignals.m
close all
clear
%List all PNG files in the working directory (where ExtractSignals.m is placed).
imagefiles = dir('*.png');
nfiles = length(imagefiles);
result_images = cell(1, nfiles); %Allocate cell array for storing output images
for ii = 1:nfiles
    currentfilename = imagefiles(ii).name; %PNG file name
    origI = imread(currentfilename); %Read image
    %Verify origI is in RGB format (just in case...)
    if (size(origI, 3) ~= 3)
        error([currentfilename, ' is not an RGB image!']);
    end
    I = imbinarize(rgb2gray(origI)); %Convert to binary
    I = ~I; %Invert - the line color should be white.
    %Apply hough transform: Find lines with angles very close to 0 degrees and with angles close to 90 degrees.
    [H,theta,rho] = hough(I, 'RhoResolution', 1, 'Theta', [-0.3:0.02:0.3, -90:0.02:-89.7, 89.7:0.02:89.98]);
    P = houghpeaks(H, numel(H), 'Threshold', 0.1, 'NHoodSize', [11, 1]); %Use low thresholds
    lines = houghlines(I,theta,rho,P,'FillGap',25,'MinLength',200); %Fill large gaps and keep only the long lines.
    %Erase the lines by drawing black lines over them
    J = im2uint8(I);
    %figure, imshow(I), hold on
    for k = 1:length(lines)
        xy = [lines(k).point1; lines(k).point2];
        %plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','green');
        %plot(xy(1,1),xy(1,2),'x','LineWidth',2,'Color','yellow'); %Plot beginnings and ends of lines
        %plot(xy(2,1),xy(2,2),'x','LineWidth',2,'Color','red');
        %Draw a black line over each detected line.
        J = insertShape(J, 'Line', [xy(1,1), xy(1,2), xy(2,1), xy(2,2)], 'Color', 'Black');
    end
    %Convert J image to binary (because MATLAB function insertShape returns RGB output).
    J = imbinarize(rgb2gray(J));
    %figure, imshow(J)
    %Color mask: 1 where color is not black or white.
    I = double(origI);
    C = (abs(I(:,:,1) - I(:,:,2)) > 20) | (abs(I(:,:,1) - I(:,:,3)) > 20) | (abs(I(:,:,2) - I(:,:,3)) > 20);
    %figure, imshow(C)
    %Build a mask that combines "lines" mask and "color" mask.
    Mask = J | C;
    Mask = cat(3, Mask, Mask, Mask);
    %Put white color where mask value is 0.
    K = origI;
    K(~Mask) = 255;
    %figure, imshow(K)
    %Second iteration - applied by "copy and paste" of the above code (it is recommended to use a function instead).
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    origI = K; %Set origI to the result of the first iteration
    I = imbinarize(rgb2gray(origI)); %Convert to binary
    I = ~I; %Invert - the line color should be white.
    %Apply hough transform: Find lines with angles very close to 0 degrees and with angles close to 90 degrees.
    [H,theta,rho] = hough(I, 'RhoResolution', 1, 'Theta', [-0.3:0.02:0.3, -90:0.02:-89.7, 89.7:0.02:89.98]);
    P = houghpeaks(H, numel(H), 'Threshold', 0.1, 'NHoodSize', [11, 1]); %Use low thresholds
    lines = houghlines(I,theta,rho,P,'FillGap',25,'MinLength',200); %Fill large gaps and keep only the long lines.
    %Erase the lines by drawing black lines over them
    J = im2uint8(I);
    for k = 1:length(lines)
        xy = [lines(k).point1; lines(k).point2];
        %Draw a black line over each detected line.
        J = insertShape(J, 'Line', [xy(1,1), xy(1,2), xy(2,1), xy(2,2)], 'Color', 'Black');
    end
    %Convert J image to binary (because MATLAB function insertShape returns RGB output).
    J = imbinarize(rgb2gray(J));
    %Color mask: 1 where color is not black or white.
    I = double(origI);
    C = (abs(I(:,:,1) - I(:,:,2)) > 20) | (abs(I(:,:,1) - I(:,:,3)) > 20) | (abs(I(:,:,2) - I(:,:,3)) > 20);
    %Build a mask that combines "lines" mask and "color" mask.
    Mask = J | C;
    Mask = cat(3, Mask, Mask, Mask);
    %Put white color where mask value is 0.
    K = origI;
    K(~Mask) = 255;
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    %Store result image in a cell array
    result_images{ii} = K;
end
%Display all result images
for ii = 1:nfiles
    figure;
    imshow(result_images{ii});
    title(['Processed ', imagefiles(ii).name]);
end

Any way to determine no of text lines in image?

Actually, I have to find the number of text lines in a given image. For example, take these two images:
from PIL import ImageGrab
img1=ImageGrab.grab([0,0,200,80])
img2=ImageGrab.grab([300,0,500,80])
The first one is img1, and the second one is img2.
How can I get the number of text lines in an image, so that it outputs 5 for img1, and 4 for img2?
If you want to do this without OCR-ing the text, the typical approach is to determine, for each pixel row in the image, whether it contains one color or more than one color.
Rows with one color can be assumed to be background; any transition from more than one color to a single color is the "bottom" line of a text row. Count those transitions and you'll have the number of lines of text in the image.
This assumes:
characters of one line do not extend completely to the bottom of the cell they are drawn in (otherwise there might never be an empty row, e.g. if the top line has a g and the bottom one an f, or similar configurations)
there is only text and no pictures (as in your samples).
You can find the number of lines in a text image using OpenCV:
import cv2

grayscale = cv2.cvtColor(your_text_image, cv2.COLOR_BGR2GRAY)
# converting to a binary image
_, binary = cv2.threshold(grayscale, 0, 255, cv2.THRESH_OTSU)
# inverting to have white text on a black background
binary = 255 - binary
# calculating the y-axis histogram (average ink per row)
hist = cv2.reduce(binary, 1, cv2.REDUCE_AVG).reshape(-1)
# append every y position corresponding to the bottom of a text line
h = binary.shape[0]
lines = []
for y in range(h - 1):
    if hist[y + 1] <= 2 < hist[y]:
        lines.append(y)
number_of_lines = len(lines)
First, threshold the image.
Then calculate the mean pixel value of each horizontal row, from top to bottom.
After getting all the values, find the transitions/significant gaps. A gap counts as significant once it exceeds a white-row threshold you have to choose (how many white rows must lie between two text lines).
The number of contiguous black-pixel clusters is your answer.
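A minimal sketch of that row-profile counting, assuming a hypothetical filename and an ink threshold of 2 (both guesses):
import cv2
import numpy as np

img = cv2.imread('text.png', cv2.IMREAD_GRAYSCALE)  # hypothetical filename
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU)
dark_rows = (255 - bw).mean(axis=1) > 2             # rows that contain ink
transitions = np.diff(dark_rows.astype(int))        # +1 wherever a text line starts
number_of_lines = int((transitions == 1).sum() + dark_rows[0])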

Most efficient way to find center of two circles in a picture

I'm trying to take a picture (.jpg file) and find the exact centers (x/y coords) of two differently colored circles in this picture. I've done this in python 2.7. My program works well, but it takes a long time and I need to drastically reduce the amount of time it takes to do this. I currently check every pixel and test its color, and I know I could greatly improve efficiency by pre-sampling a subset of pixels (e.g. every tenth pixel in both horizontal and vertical directions to find areas of the picture to hone in on). My question is if there are pre-developed functions or ways of finding the x/y coords of objects that are much more efficient than my code. I've already removed function calls within the loop, but that only reduced the run time by a few percent.
Here is my code:
from PIL import Image
import numpy as np

i = Image.open('colors4.jpg')
iar = np.asarray(i)
(numCols, numRows) = i.size
print numCols
print numRows

yellowPixelCount = 0
redPixelCount = 0
yellowWeightedCountRow = 0
yellowWeightedCountCol = 0
redWeightedCountRow = 0
redWeightedCountCol = 0

for row in range(numRows):
    for col in range(numCols):
        pixel = iar[row][col]
        r = pixel[0]
        g = pixel[1]
        b = pixel[2]
        brightEnough = r > 200 and g > 200
        if r > 2*b and g > 2*b and brightEnough: # yellow pixel
            yellowPixelCount = yellowPixelCount + 1
            yellowWeightedCountRow = yellowWeightedCountRow + row
            yellowWeightedCountCol = yellowWeightedCountCol + col
        if r > 2*g and r > 2*b and r > 100: # red pixel
            redPixelCount = redPixelCount + 1
            redWeightedCountRow = redWeightedCountRow + row
            redWeightedCountCol = redWeightedCountCol + col

print "Yellow circle location"
print yellowWeightedCountRow/yellowPixelCount
print yellowWeightedCountCol/yellowPixelCount
print " "
print "Red circle location"
print redWeightedCountRow/redPixelCount
print redWeightedCountCol/redPixelCount
print " "
Update: As I mentioned below, the picture is somewhat arbitrary, but here is an example of one frame from the video I am using:
First you have to clear up a few things:
What do you consider fast enough? Where is a sample image, so we can see what you are dealing with (resolution, bits per pixel)? What platform (especially which CPU), so we can estimate speed?
As you are dealing with circles (each one encoded with a different color), it should be enough to find the bounding box. So find the min and max x,y coordinates of the pixels of each color. Then your circle is:
center.x=(xmin+xmax)/2
center.y=(ymin+ymax)/2
radius =((xmax-xmin)+(ymax-ymin))/4
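In Python, this bounding-box idea reduces to a few NumPy reductions. A sketch, reusing the yellow thresholds from the question's code:
import numpy as np
from PIL import Image

iar = np.asarray(Image.open('colors4.jpg'))
r = iar[..., 0].astype(int)
g = iar[..., 1].astype(int)
b = iar[..., 2].astype(int)
yellow = (r > 200) & (g > 200) & (r > 2 * b) & (g > 2 * b)  # same test as the loop version
ys, xs = np.nonzero(yellow)             # rows and columns of all yellow pixels
cx = (xs.min() + xs.max()) / 2          # center.x = (xmin + xmax) / 2
cy = (ys.min() + ys.max()) / 2          # center.y = (ymin + ymax) / 2
radius = ((xs.max() - xs.min()) + (ys.max() - ys.min())) / 4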
If coded right, even your approach should take just a few ms. On images around 1024x1024 resolution I estimate 10-100 ms on an average machine. You wrote that your approach is too slow, but you did not specify the time itself (in some cases 1 us is slow, in others 1 min is fine, so we can only guess what you need and what you got). Anyway, if you have a similar resolution and the time is 1-10 sec, then you are most likely using some slow pixel access (most likely GDI get/setpixel); use bitmap Scanline[], direct pixel access with bitblt, or your own memory for images instead.
Your approach can also be sped up by using ray casting to find the approximate location of the circles:
1. Cast horizontal rays.
Their spacing should be smaller than the radius of the smallest circle you search for. Cast as many rays as needed to hit each circle with at least 2 rays.
2. Cast 2 vertical rays.
You can use the intersection points found in #1, so there is no need to cast many rays, just 2. Use the horizontal ray whose intersection points are closer together, but not too close.
3. Compute your circle properties.
From the 4 intersection points, compute the center and radius. As they form an axis-aligned rectangle (+/- pixel error), it is as easy as finding the midpoint of any diagonal; the radius is half of the diagonal size.
As you did not share any image, we can only guess what you have. In case you do not have circles, or need an idea for a different approach, see:
Algorithms: Ellipse matching
find archery target in image of different perspectives
If you are sure of the colours of the circles, an easier method would be to filter the colors using a mask and then apply Hough circles, as Mathew Pope suggested.
Here is a snippet to get you started quick.
import cv2
import numpy as np
fn = '200px-Traffic_lights_dark_red-yellow.svg.png'
# OpenCV reads image with BGR format
img = cv2.imread(fn)
# Convert to HSV format
img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# lower mask (0-10)
lower_red = np.array([0, 50, 50])
upper_red = np.array([10, 255, 255])
mask = cv2.inRange(img_hsv, lower_red, upper_red)
# Bitwise-AND mask and original image
masked_red = cv2.bitwise_and(img, img, mask=mask)
# Check for circles using HoughCircles on opencv
circles = cv2.HoughCircles(mask, cv2.cv.CV_HOUGH_GRADIENT, 1, 20, param1=30, param2=15, minRadius=0, maxRadius=0)
print 'Radius ' + 'x = ' + str(circles[0][0][0]) + ' y = ' + str(circles[0][0][1])
One example of applying it to an image looks like this. First is the original image, followed by the red colour mask obtained, and last is the result after the circle is found using the Hough circle function of OpenCV.
The circle found using the above method is centered at x = 97.5, y = 99.5.
Hope this helps! :)

Trim scanned images with PIL?

What would be the approach to trim an image that's been input using a scanner and therefore has a large white/black area?
The entropy solution seems problematic and computationally over-intensive. Why not edge detect?
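A cheap variant in that spirit is the classic PIL trim idiom: difference the image against its corner colour and crop to the bounding box of what remains. A sketch, not either answerer's method; the fuzz value for scanner noise is a guess:
from PIL import Image, ImageChops

def trim(im, fuzz=30):
    # Crop away a border that matches the top-left pixel colour
    bg = Image.new(im.mode, im.size, im.getpixel((0, 0)))
    diff = ImageChops.difference(im, bg)
    diff = ImageChops.add(diff, diff, 2.0, -fuzz)  # (diff + diff)/2 - fuzz: suppress faint noise
    bbox = diff.getbbox()                          # bounding box of the non-zero region
    return im.crop(bbox) if bbox else im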
I just wrote this Python code to solve this same problem for myself. My background was dirty white-ish, so the criteria I used were darkness and color. I simplified this by just taking the smallest of the R, G or B values for each pixel, so that black or saturated red both stood out the same. I also used the average of the however-many darkest pixels for each row or column. Then I started at each edge and worked my way in until I crossed a threshold.
Here is my code:
#these values set how sensitive the bounding box detection is
threshold = 200 #the average of the darkest values must be _below_ this to count (0 is darkest, 255 is lightest)
obviousness = 50 #how many of the darkest pixels to include (1 would mean a single dark pixel triggers it)

from PIL import Image
import numpy as np

def find_line(vals):
    #implement edge detection once, use many times
    for i, tmp in enumerate(vals):
        tmp.sort()
        average = float(sum(tmp[:obviousness]))/len(tmp[:obviousness])
        if average <= threshold:
            return i
    return i #i is left over from failed threshold finding, it is the bounds

def getbox(img):
    #get the bounding box of the interesting part of a PIL image object
    #this is done by getting the darkest of the R, G or B value of each pixel
    #and finding where the edge gets dark/colored enough
    #returns a tuple of (left, upper, right, lower)
    width, height = img.size #for making a 2d array
    retval = [0, 0, width, height] #values will be disposed of, but this is a black image's box
    pixels = list(img.getdata())
    vals = [] #store the value of the darkest color
    for pixel in pixels:
        vals.append(min(pixel)) #the darkest of the R, G or B values
    #make 2d array
    vals = np.array([vals[i * width:(i + 1) * width] for i in xrange(height)])
    #start with upper bounds
    forupper = vals.copy()
    retval[1] = find_line(forupper)
    #next, do lower bounds
    forlower = vals.copy()
    forlower = np.flipud(forlower)
    retval[3] = height - find_line(forlower)
    #left edge, same as before but rotate the data so the left edge is the top edge
    forleft = vals.copy()
    forleft = np.swapaxes(forleft, 0, 1)
    retval[0] = find_line(forleft)
    #and the right edge is the bottom edge of the rotated array
    forright = vals.copy()
    forright = np.swapaxes(forright, 0, 1)
    forright = np.flipud(forright)
    retval[2] = width - find_line(forright)
    if retval[0] >= retval[2] or retval[1] >= retval[3]:
        print "error, bounding box is not legit"
        return None
    return tuple(retval)

if __name__ == '__main__':
    image = Image.open('cat.jpg')
    box = getbox(image)
    print "result is: ", box
    result = image.crop(box)
    result.show()
For starters, here is a similar question, a related question, and another related question.
Here is just one idea; there are certainly other approaches. I would select an arbitrary crop edge, measure the entropy* on either side of the line, then proceed to re-select the crop line (probably using something like a bisection method) until the entropy of the cropped-out portion falls below a defined threshold. I suspect you may need to resort to a brute-force root-finding method, as you will not have a good indication of when you have cropped too little. Then repeat for the remaining 3 edges.
*I recall discovering that the entropy method on the referenced website was not completely accurate, but I could not find my notes (I'm sure it was in an SO post, however).
Edit:
Other criteria for the "emptiness" of an image portion (other than entropy) might be contrast ratio or contrast ratio on an edge-detect result.
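For illustration, a minimal sketch of the entropy criterion for one edge; a linear scan stands in for the suggested bisection, and the strip width and 1.0-bit threshold are assumptions:
import numpy as np
from PIL import Image

def strip_entropy(gray, x0, x1):
    # Shannon entropy (in bits) of the vertical strip gray[:, x0:x1]
    hist, _ = np.histogram(gray[:, x0:x1], bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

gray = np.asarray(Image.open('scan.png').convert('L'))  # hypothetical filename
left = 0
while left < gray.shape[1] // 2 and strip_entropy(gray, left, left + 10) < 1.0:
    left += 10  # advance while the 10-px strip still looks "empty"
# repeat analogously for the right, top and bottom edges, then crop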
