Local histogram equalization in MATLAB / Python

I am very new to MATLAB. I want to write code for local histogram equalization. I have already written code for global histogram equalization, and I know that local equalization means equalizing each part of the image separately, but my question is: how should I choose these parts? For example, should I equalize each block of 100 pixels separately from the other pixels? In other words, how can I split the image into parts and then equalize each part?

The most naive way to do what you ask is to split your image into non-overlapping blocks, run your global histogram equalization code on each block and save the result to the output. Suppose you defined the rows and columns of these non-overlapping blocks as the variables rows and cols. In your case, let's say each block is 100 x 100, so rows = 100; cols = 100;. You would simply loop over each non-overlapping block, do your histogram equalization, then write the result to the same locations in the output.
Something like this below, assuming your image is stored in im:
rows = 100;
cols = 100;
out = zeros(size(im)); % Declare output variable
for ii = 1 : rows : size(im, 1)
    for jj = 1 : cols : size(im, 2)
        % Get the block, clamping the end indices to the image borders
        row_begin = ii;
        row_end = min(size(im, 1), ii + rows - 1);
        col_begin = jj;
        col_end = min(size(im, 2), jj + cols - 1);
        blk = im(row_begin : row_end, col_begin : col_end, :);
        % Perform histogram equalization with the block stored in blk
        % ...
        % Assume the output of this is stored in O
        out(row_begin : row_end, col_begin : col_end, :) = O;
    end
end
Note the intricacy of the variable blk that stores the non-overlapping block. We let the beginning row and column simply be the loop counters ii and jj, but the ending row and column must be bounded by the dimensions of the image; that's why the min call is there. Otherwise, the ending row and column is simply the beginning row and column plus the block size in the corresponding dimension, minus 1 (MATLAB ranges are inclusive). Also note that I've used : to index into the third dimension in case you have a colour image; a grayscale image does not affect this code. You finally use the same indexing when storing the result in the output image. Note that I've assumed this is stored in the variable O, which is the output of your customized histogram equalization function.
The output out will contain your locally histogram equalized image. Take note that you could theoretically do this in one line using blockproc in the image processing toolbox if you have it. This processes distinct blocks in your image and applies some function to it. Assuming your histogram equalization function is called hsteq, you would simply do this:
rows = 100; cols = 100;
out = blockproc(im, [rows, cols], @(s) hsteq(s.data));
The first input is the image you want to process, the second input defines the block size and finally the last element is the function you want to apply to each block. Note that blockproc supplies a customized structure into your function and so what is important is that you pull out the data field in the structure. This should produce the same output as the code above with loops.

We can use tile-based local (adaptive) histogram equalization to implement AHE (as suggested in the other answer), but in that case we need a bilinear interpolation-like technique to prevent sudden changes of contrast at the edges of the tiles; e.g., observe such artifacts in the equalized output of the following Python implementation (here a 50x50 window was used for the tile):
import numpy as np

def AHE(im, tile_x=8, tile_y=8):  # the output shown above used a 50x50 tile
    h, w = im.shape
    out = np.zeros(im.shape)  # Declare output variable
    for i in range(0, h, tile_x):
        for j in range(0, w, tile_y):
            # Get the block
            blk = im[i: min(i + tile_x, h), j: min(j + tile_y, w)]
            probs = get_distr(blk)
            out[i: min(i + tile_x, h), j: min(j + tile_y, w)] = CHE(blk, probs)
    return out

def CHE(im, probs):
    # Map intensities through the scaled cumulative distribution
    # (assumes an integer-valued image in [0, 255])
    T = np.array(list(map(int, 255 * np.cumsum(probs))))
    return T[im]

def get_distr(im):
    # Normalized 256-bin histogram of the block
    hist, _ = np.histogram(im.flatten(), 256, [0, 256])
    return hist / hist.sum()
We could instead implement the AHE algorithm from this thesis. That implementation yields better results, without the boundary artifacts:
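The key idea for removing those artifacts is to interpolate between the mappings of neighbouring tiles instead of applying each tile's mapping only inside its own block. A minimal sketch of that interpolation idea (not the thesis algorithm itself), assuming a grayscale uint8 image:

import numpy as np

def AHE_interp(im, tile=50):
    h, w = im.shape
    ny, nx = (h + tile - 1) // tile, (w + tile - 1) // tile
    # Per-tile intensity mappings: scaled cumulative histograms
    maps = np.zeros((ny, nx, 256))
    for ti in range(ny):
        for tj in range(nx):
            blk = im[ti*tile:min((ti+1)*tile, h), tj*tile:min((tj+1)*tile, w)]
            hist, _ = np.histogram(blk, 256, [0, 256])
            maps[ti, tj] = 255 * np.cumsum(hist) / hist.sum()
    # Fractional tile coordinates of every pixel, relative to tile centres
    ys, xs = np.mgrid[0:h, 0:w]
    fy = np.clip((ys - tile / 2) / tile, 0, ny - 1)
    fx = np.clip((xs - tile / 2) / tile, 0, nx - 1)
    y0, x0 = np.floor(fy).astype(int), np.floor(fx).astype(int)
    y1, x1 = np.minimum(y0 + 1, ny - 1), np.minimum(x0 + 1, nx - 1)
    wy, wx = fy - y0, fx - x0
    v = im.astype(int)
    # Bilinearly blend the mapped values of the four surrounding tiles
    out = ((1 - wy) * (1 - wx) * maps[y0, x0, v] +
           (1 - wy) * wx * maps[y0, x1, v] +
           wy * (1 - wx) * maps[y1, x0, v] +
           wy * wx * maps[y1, x1, v])
    return out.astype(np.uint8)

Each pixel is transformed by a weighted mix of the four nearest tile mappings, so the mapping varies smoothly across tile boundaries instead of jumping.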

Related

Labeling a matrix

I've been trying to write code that labels a binary matrix, i.e. I want a function that finds all connected components in an image and assigns a unique label to all points in the same component. The problem is that I found a function, imbinarize(), that creates a binary image, and I want to know how to do it without that function (because I don't know how to do it).
EDIT: I realized that binarizing the image isn't needed, because it is assumed that all images passed as arguments are already binarized. So I changed my code. However, the code is not working, and I think the problem is in one of the cycles, but I can't understand why.
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt

def connected_components(image):
    M = image * 1
    # write your code here
    (row, column) = M.shape  # shape of the matrix
    # Second step
    L = 2
    # Third step
    q = []
    # Fourth step
    # Method to look for ones starting on the pixel (0, 0) and going from left to right and top-down
    for i in np.arange(row):
        for j in np.arange(column):
            if M[i][j] == 1:
                M[i][j] = L
                q.append(M[i-1][j])
                q.append(M[i+1][j])
                q.append(M[i][j-1])
                q.append(M[i][j+1])
                # Fifth step
                while len(q) != 0:  # same as saying 'while q is not empty'
                    if q[0] == 1:
                        M[0] = L
                        q.append(M[i-1][j])
                        q.append(M[i+1][j])
                        q.append(M[i][j-1])
                        q.append(M[i][j+1])
                # Sixth step
                L = L + 1
    # Seventh step: goes to the beginning of the for-cycle
    return labels
imbinarize in its most simple form thresholds an image such that any intensity beyond a certain threshold is assigned a binary 1 / True, and a binary 0 / False otherwise. It is actually more sophisticated than this, as it uses some image morphology for noise removal as well as adaptive thresholds to find the most optimal value separating foreground from background. As I see this post as more about validating the connected components algorithm you've created, I'm going to assume that the basic algorithm is fine and that the actual thresholding algorithm is out of scope for your needs.
Once you read in the image with matplotlib, it is most likely going to have three channels, so you'll need to convert the image into grayscale first and then threshold. We can make this more adaptive based on the number of channels that exist.
Therefore, let's define a function to threshold the image for us. You'll need to play around with the threshold until you get good results. Also take note that plt.imread reads PNG images in as float32 values, so the threshold will be defined within [0, 1]. We can try 0.5 as a good start:
def binarize(im, threshold=0.5):
    if len(im.shape) == 3:
        # Weighted RGB-to-grayscale conversion
        gray = 0.299*im[...,0] + 0.587*im[...,1] + 0.114*im[...,2]
    else:
        gray = im
    return (gray >= threshold).astype(np.uint8)
This will check if the input image is in RGB. If it is, convert to grayscale accordingly; the conversion uses the ITU-R BT.601 weights (0.299, 0.587, 0.114). Once we have the grayscale image, simply return a new image where everything that meets the threshold and beyond gets assigned an integer 1 and everything else is integer 0. I've converted the result to an integer type because your connected components algorithm assumes a 0/1 labelling.
You can then replace your code with:
#First step
Image = plt.imread(image) #reads the image on the argument
M = binarize(Image) #binarize() converts an image to a binary matrix
(row, column) = M.shape #shape of the matrix
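If picking the threshold by hand proves fiddly, an automatic choice can be substituted. The sketch below is a rough take on Otsu's method, which picks the threshold that maximizes the between-class variance of the two resulting pixel populations; it illustrates the idea, not the exact imbinarize internals:

import numpy as np

def otsu_threshold(gray):
    # Histogram the grayscale intensities (works for [0, 1] floats too)
    hist, edges = np.histogram(gray, bins=256)
    p = hist / hist.sum()
    bins = (edges[:-1] + edges[1:]) / 2      # bin centres
    w0 = np.cumsum(p)                        # background class weight
    w1 = 1 - w0                              # foreground class weight
    mu = np.cumsum(p * bins)                 # cumulative mean
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    # The best threshold maximizes the between-class variance
    return bins[np.nanargmax(between)]

For a grayscale array gray, (gray >= otsu_threshold(gray)).astype(np.uint8) then replaces the hand-picked 0.5.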
Minor Note
In your test code, you are supplying a test image directly, whereas your actual code performs an imread operation. imread expects a string, so by specifying the actual array, your code will produce an error. If you want to accommodate both an array and a string, you should check whether the input is a string or an array:
if type(image) is str:
    Image = plt.imread(image) #reads the image on the argument
else:
    Image = image
M = binarize(Image) #binarize() converts an image to a binary matrix
(row, column) = M.shape #shape of the matrix
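As for the labelling loop itself, which the answer above treats as out of scope: a working version of the queue-based algorithm the question's steps describe might look like this sketch (assuming a 0/1 numpy array as input; note that coordinates, not pixel values, go into the queue, that items are popped off it, and that out-of-bounds neighbours are guarded):

import numpy as np
from collections import deque

def connected_components(image):
    M = image.astype(int).copy()
    h, w = M.shape
    L = 2  # labels start at 2 so they cannot be confused with foreground 1s
    for i in range(h):
        for j in range(w):
            if M[i, j] == 1:
                # Flood the whole component with label L using a queue
                q = deque([(i, j)])
                M[i, j] = L
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and M[ny, nx] == 1:
                            M[ny, nx] = L
                            q.append((ny, nx))
                L += 1
    return M  # labelled matrix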

How to extract subimages from an image?

What are the ways to count and extract all subimages given a master image?
Sample 1
Input:
Output should be 8 subimages.
Sample 2
Input:
Output should have 6 subimages.
Note: These image samples are taken from the internet. Images can be of random dimensions.
Is there a way to draw lines of separation in these images and then split based on those details?
I don't think there'll be a general solution to extract all single figures properly from arbitrary tables of figures (as shown in the two examples), at least not using some kind of "simple" image-processing techniques.
For "perfect" tables with a constant grid layout and constant colour space between single figures (as shown in the two examples), the following approach might be an idea:
Calculate the mean standard deviation in the x and y directions, and threshold using some custom parameter. The mean standard deviation within the constant colour spaces should be near zero. A custom parameter will be needed here, since there'll be artifacts, e.g. from JPG compression, whose effects might be more or less severe.
Do some binary closing on the mean standard deviations using custom parameters. There might be small constant colour spaces around captions or similar, cf. the second example. Again, custom parameters will be needed here, too.
From the resulting binary "signal", we can extract the start and stop positions for each subimage, and thus the subimage itself by slicing from the original image. Attention: that works only if the tables have a constant grid layout!
That'd be some code for the described approach:
import cv2
import numpy as np
from skimage.morphology import binary_closing

def extract_from_table(image, std_thr, kernel_x, kernel_y):
    # Threshold on mean standard deviation in x and y direction
    std_x = np.mean(np.std(image, axis=1), axis=1) > std_thr
    std_y = np.mean(np.std(image, axis=0), axis=1) > std_thr
    # Binary closing to close small whitespaces, e.g. around captions
    std_xx = binary_closing(std_x, np.ones(kernel_x))
    std_yy = binary_closing(std_y, np.ones(kernel_y))
    # Find start and stop positions of each subimage
    start_y = np.where(np.diff(np.int8(std_xx)) == 1)[0]
    stop_y = np.where(np.diff(np.int8(std_xx)) == -1)[0]
    start_x = np.where(np.diff(np.int8(std_yy)) == 1)[0]
    stop_x = np.where(np.diff(np.int8(std_yy)) == -1)[0]
    # Extract subimages
    return [image[y1:y2, x1:x2, :]
            for y1, y2 in zip(start_y, stop_y)
            for x1, x2 in zip(start_x, stop_x)]

for file in ['image1.jpg', 'image2.png']:
    img = cv2.imread(file)
    cv2.imshow('image', img)
    subimages = extract_from_table(img, 5, 21, 11)
    print('{} subimages found.'.format(len(subimages)))
    for i in subimages:
        cv2.imshow('subimage', i)
        cv2.waitKey(0)
The print output is:
8 subimages found.
6 subimages found.
Also, each subimage is shown for visualization purposes.
For both images, the same parameters were suitable, but that's just some coincidence here!
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
NumPy: 1.20.1
OpenCV: 4.5.1
scikit-image: 0.18.1
----------------------------------------
I could only extract the sub-images using a simple array slicing technique. I am not sure if this is what you are looking for, but if one knows the table's rows and columns, I think you can extract the sub-images.
import cv2

image = cv2.imread('table.jpg')
p = 2  # number of rows
q = 4  # number of columns
height, width, channels = image.shape  # note: shape is (rows, cols, channels)
height_patch = height // p
width_patch = width // q

x = 0
for i in range(0, p * height_patch, height_patch):
    for j in range(0, q * width_patch, width_patch):
        crop = image[i:i + height_patch, j:j + width_patch]
        cv2.imwrite("image_{0}.jpg".format(x), crop)
        x += 1
        # cv2.imshow('crop', crop)
        # cv2.waitKey(0)
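If the image dimensions are not an exact multiple of the grid, np.array_split distributes the remainder across the chunks for you. A sketch of the same split using it (reusing the 'table.jpg' filename and the 2x4 grid from above):

import cv2
import numpy as np

image = cv2.imread('table.jpg')
x = 0
for row_band in np.array_split(image, 2, axis=0):       # 2 rows of figures
    for crop in np.array_split(row_band, 4, axis=1):    # 4 columns of figures
        cv2.imwrite("image_{0}.jpg".format(x), crop)
        x += 1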

Modifying a subarray selected by two consecutive boolean arrays

I would like to modify a subset of an array which is selected on by two consecutive boolean arrays. In my use case, I have an image im, and I'm pulling out a region of interest roi using some condition. With the pixels in the ROI, I'm doing some computations, and derive a sub-region target. I now want to apply some transformation to the pixels in target in the original image. In code:
x, y = np.meshgrid(np.arange(800), np.arange(600))
im = (x + y) // 6 # a test image to work with (integer-valued, so the bitmask below is valid)
# use some arbitrary conditions to illustrate this
roi = (im > 64) & (im < 128)
target = (im[roi] & 0xf) > 0x7
Now I want to do something like
im[roi][target] = 0
but this fails silently because im[roi] does not directly reference the original array's elements (unlike a normal slice).
What's a nice clean way to do what I want here?
You can obtain the combined mask like so:
roi[roi] = target   # True only where both conditions hold
im[roi] etc.
This overwrites roi. If you need to keep it:
roit = np.zeros_like(roi)
roit[roi] = target
im[roit] etc.
Or use indices:
import operator
I, J = map(operator.itemgetter(target), np.where(roi))  # row/column indices of the target pixels
im[I, J] etc.
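Putting the second variant together with the setup from the question, a small end-to-end sketch:

import numpy as np

x, y = np.meshgrid(np.arange(800), np.arange(600))
im = (x + y) // 6
roi = (im > 64) & (im < 128)
target = (im[roi] & 0xf) > 0x7

roit = np.zeros_like(roi)
roit[roi] = target   # combined mask: True only where both conditions held
im[roit] = 0         # a plain boolean index writes through to im

# every former target pixel inside the ROI is now zero
assert not ((im[roi] & 0xf) > 0x7).any()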

Jython convert picture to grayscale and then negate it

Please bear with me, I've only started Python a few weeks ago.
I am using JES.
I have made a function to convert a picture to grayscale. I created two names for each color: r and r1, g and g1, b and b1. The idea behind this was to keep the original values in memory, so the picture could be restored to its original color.
def grayScale(pic):
    for p in getPixels(pic):
        r = int(getRed(p))
        g = int(getGreen(p))
        b = int(getBlue(p))  # I have tried this with and without the int()
        r1 = r
        g1 = g
        b1 = b
        new = (r + g + b) / 3
        color = makeColor(new, new, new)
        setColor(p, color)

def restoreColor(pic):
    for p in getPixels(pic):
        setColor(p, makeColor(r1, g1, b1))
It's not working. The error: "local or global name could not be found." I understand why I am getting this error; however, if I try to define the names within restoreColor, it will give the grayscale values. I don't know how to structure my code so that it holds on to the original values. I have looked at questions about local and global variables/names, but I cannot work out, within the rudimentary syntax I have learnt, how to do this.
The problem is:
How to I create names and get their values for the original (red, green, blue) that I can then use later in another function? Everything I have tried, has returned the altered (grayscale) values.
Just to add an "artistic" point of view:
You are using (r + g + b) / 3 in your program, but there are other algorithms:
1) The lightness method averages the most prominent and least prominent colors:
(max(R, G, B) + min(R, G, B)) / 2
2) The average method (yours) simply averages the values:
(R + G + B) / 3
3) The luminosity method is a more sophisticated version of the average method. It also averages the values, but it forms a weighted average to account for human perception. We’re more sensitive to green than other colors, so green is weighted most heavily. The formula for luminosity is:
0.21 R + 0.71 G + 0.07 B
This can make a big difference (luminosity is far more contrasted):
original | average | luminosity
Code:
for px in getPixels(pic):
    level = int(0.21 * getRed(px) + 0.71 * getGreen(px) + 0.07 * getBlue(px))
    color = makeColor(level, level, level)
    setColor(px, color)
And to negate / invert, simply do:
level = 255 - level
Which gives:
def greyScaleAndNegate(pic):
    for px in getPixels(pic):
        level = 255 - int(0.21*getRed(px) + 0.71*getGreen(px) + 0.07*getBlue(px))
        color = makeColor(level, level, level)
        setColor(px, color)

file = pickAFile()
picture = makePicture(file)
greyScaleAndNegate(picture)
show(picture)
original | luminosity | negative
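For reference outside JES, the three conversion formulas above in plain numpy (an illustrative sketch; the array here is a stand-in, not a JES object):

import numpy as np

rgb = np.random.randint(0, 256, (4, 4, 3))  # stand-in for an HxWx3 image
R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]

lightness  = (np.maximum(np.maximum(R, G), B) + np.minimum(np.minimum(R, G), B)) / 2
average    = (R + G + B) / 3
luminosity = 0.21 * R + 0.71 * G + 0.07 * B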
The variables declared inside the function body are local variables, i.e. they exist only inside that function. To write to a global variable inside a function, you have to first declare it as such:
r1 = 0

def grayScale(pic):
    global r1
    for p in getPixels(pic):
        r = getRed(p)
        r1 = r
The second problem with your code is that you save only the value of the last pixel of the image, because every iteration you will overwrite the previously stored value. One way of dealing with this is using a list of color values.
reds = []

def grayScale(pic):
    for p in getPixels(pic):
        r = getRed(p)
        reds.append(r)

def restoreColor(pic):
    i = 0
    for p in getPixels(pic):
        setColor(p, makeColor(reds[i]))
        i += 1
You need to store the r1, g1 and b1 values somewhere for each pixel: in the grayScale function the values are overwritten on each iteration of the loop, and, when the method finishes, the variables go out of scope and cannot be accessed at all. So if you want to use them later, you need to store them somehow, for each pixel of the original image.
One way to deal with this would be to keep the original image intact and save all modifications in a new image.
Another way is to store the original data in a list:
original_pixels = []

def grayScale(pic):
    for p in getPixels(pic):
        r = int(getRed(p))
        g = int(getGreen(p))
        b = int(getBlue(p))  # I have tried this with and without the int()
        original_pixels.append((r, g, b))
        new = (r + g + b) / 3
        color = makeColor(new, new, new)
        setColor(p, color)

def restoreColor(pic):
    for (p, original_rgb) in zip(getPixels(pic), original_pixels):
        (r, g, b) = original_rgb
        setColor(p, makeColor(r, g, b))
Here in grayScale we're storing the original rgb values in a list called original_pixels; then in restoreColor we're iterating over both getPixels(pic) and original_pixels using Python's zip function.
For completeness' sake, I'd like to point out that this code should not be used to manipulate real images in a real application - a specialized image processing library should be used instead.
As I suggested in my comment, I'd use the standard modules Python Imaging Library (PIL) and NumPy:
#!/usr/bin/env python
import PIL.Image as Image
import numpy as np

# Load
in_img = Image.open('/tmp/so/avatar.png')
in_arr = np.asarray(in_img, dtype=np.uint8)

# Create output array; numpy shapes are (rows, cols, channels),
# whereas PIL's size is (width, height), so take the shape from in_arr
out_arr = np.zeros((in_arr.shape[0], in_arr.shape[1], 3), dtype=np.uint8)

# Convert to greyscale: average the three colour channels per pixel
for r in range(len(in_arr)):
    for c in range(len(in_arr[r])):
        avg = (int(in_arr[r][c][0]) + int(in_arr[r][c][1]) + int(in_arr[r][c][2])) // 3
        out_arr[r][c][0] = avg
        out_arr[r][c][1] = avg
        out_arr[r][c][2] = avg

# Write to file
out_img = Image.fromarray(out_arr)
out_img.save('/tmp/so/avatar-grey.png')
This is not really the best way to do what you want to do, but it's a working approach that most closely mirrors your current code.
Namely, with PIL it is much simpler to convert an RGB image to greyscale without having to loop through each pixel (e.g. in_img.convert('L'))
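For instance, a sketch of that shortcut, combined with the negation the question is ultimately after (ImageOps.invert computes 255 - level for each pixel):

from PIL import Image, ImageOps

img = Image.open('/tmp/so/avatar.png').convert('L')  # greyscale in one call
neg = ImageOps.invert(img)                           # negate: 255 - level
neg.save('/tmp/so/avatar-negative.png')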

Correct method and Python package that can find width of an image's feature

The input is a spectrum with colorful (sorry) vertical lines on a black background. Given the approximate x coordinate of one band (marked by X), I want to find the width of that band.
I am unfamiliar with image processing. Please direct me to the correct method of image processing and a Python image processing package that can do the same.
I am thinking of PIL; OpenCV gave me the impression of being overkill for this particular application.
What if I want to make this an expert system that can classify them in the future?
I'll give a complete minimal working example (as suggested by sega_sai). I don't have access to your original image, but you'll see it doesn't really matter! The peak distributions found by the code below are:
Mean values at: 26.2840960523 80.8255092125
from PIL import Image
import numpy as np
from scipy.optimize import leastsq

# Load the picture with PIL, process if needed
pic = np.asarray(Image.open("band2.png"))

# Average the pixel values along the vertical axis
pic_avg = pic.mean(axis=2)
projection = pic_avg.sum(axis=0)

# Set the min value to zero for a nice fit
projection /= projection.mean()
projection -= projection.min()

# Fit function, two Gaussians, adjust as needed
def fitfunc(p, x):
    return p[0]*np.exp(-(x-p[1])**2/(2.0*p[2]**2)) + \
           p[3]*np.exp(-(x-p[4])**2/(2.0*p[5]**2))
errfunc = lambda p, x, y: fitfunc(p, x) - y

# Use scipy to fit, p0 is the initial guess
p0 = np.array([0, 20, 1, 0, 75, 10])
X = np.arange(len(projection))
p1, success = leastsq(errfunc, p0, args=(X, projection))
Y = fitfunc(p1, X)

# Output the result
print("Mean values at:", p1[1], p1[4])

# Plot the result
from pylab import *
subplot(211)
imshow(pic)
subplot(223)
plot(projection)
subplot(224)
plot(X, Y, 'r', lw=5)
show()
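Since the question asks for a band's width rather than just its position, the fitted standard deviations p1[2] and p1[5] from the snippet above can be converted to full widths at half maximum (the standard Gaussian FWHM relation):

# Continuing the snippet above: width from the fitted Gaussian sigmas
fwhm_left = 2 * np.sqrt(2 * np.log(2)) * abs(p1[2])   # ~2.355 * sigma
fwhm_right = 2 * np.sqrt(2 * np.log(2)) * abs(p1[5])
print("FWHM of the two bands:", fwhm_left, fwhm_right)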
Below is a simple thresholding method to find the lines and their width; it should work quite reliably for any number of lines. The yellow and black image below was processed using this script; the red/black plot illustrates the found lines using parameters of threshold = 0.3 and min_line_width = 5.
The script averages the rows of an image, and then determines the start and end positions of each line based on a threshold (which you can set between 0 and 1) and a minimum line width (in pixels). By using thresholding and a minimum line width you can easily filter your input images to get the lines out of them. The first function, find_lines, returns all the lines in an image as a list of tuples containing the start, end, center, and width of each line. The second function, find_closest_band_width, is called with the specified x_position and returns the width of the line closest to this position (using the distance to each line's centre). As the lines are saturated (255 cut-off per channel), their cross-sections are not far from a uniform distribution, so I don't believe trying to fit any kind of distribution would really help much; it just unnecessarily complicates things.
from PIL import Image, ImageStat

def find_lines(image_file, threshold, min_line_width):
    im = Image.open(image_file)
    width, height = im.size
    hist = []
    lines = []
    start = end = 0
    for x in range(width):
        column = im.crop((x, 0, x + 1, height))
        stat = ImageStat.Stat(column)
        ## normalises by 2 * 255 as in your example the colour is yellow
        ## if your images start using white lines change this to 3 * 255
        hist.append(sum(stat.sum) / (height * 2 * 255))
    for index, value in enumerate(hist):
        if value > threshold and end >= start:
            start = index
        if value < threshold and end < start:
            if index - start < min_line_width:
                start = 0
            else:
                end = index
                center = start + (end - start) / 2.0
                width = end - start
                lines.append((start, end, center, width))
    return lines

def find_closest_band_width(x_position, lines):
    distances = [((value[2] - x_position) ** 2) for value in lines]
    index = distances.index(min(distances))
    return lines[index][3]

## set your threshold, and min_line_width for finding lines
lines = find_lines("8IxWA_sample.png", 0.7, 4)

## sets x_position to the 59th pixel
print('width of nearest line:', find_closest_band_width(59, lines))
I don't think that you need anything fancy for your particular task. I would just use PIL + scipy; that should be enough. You essentially need to take your image, make a 1D projection of it, and then fit a Gaussian or something like that to it. The information about the approximate location of the band should be used as a first guess for the fitter.
