This SO answer suggested the following code:
import SimpleITK as sitk
import numpy as np
# Create a noisy Gaussian blob test image
img = sitk.GaussianSource(sitk.sitkFloat32, size=[240,240,48], mean=[120,120,24])
img = img + sitk.AdditiveGaussianNoise(img,10)
# Create a ramp image of the same size
h = np.arange(0.0, 255,1.0666666666, dtype='f4')
h2 = np.reshape(np.repeat(h, 240*48), (48,240,240))
himg = sitk.GetImageFromArray(h2)
print(himg.GetSize())
# Match the histogram of the Gaussian image with the ramp
result = sitk.HistogramMatching(img, himg)
# Display the 3d image
import itkwidgets
itkwidgets.view(result)
Why do I need two images to do Histogram equalization?
Because I want to do histogram equalization, and this is histogram matching. This article explains the difference.
It's a bit of a work-around to achieve histogram equalization through histogram matching.
'himg' is a ramp image, so the intensities go from 0 to 255. All intensities are equally represented, so its histogram is flat.
So we're matching your image's histogram with a flat histogram. The net result is histogram equalization.
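As an aside, SimpleITK also wraps ITK's AdaptiveHistogramEqualization filter, which can do the equalization in one step. A minimal sketch, under the assumption (from the ITK documentation) that alpha=0 and beta=0 make the filter behave like classical histogram equalization within each window:
import SimpleITK as sitk
# Assumption: per the ITK docs, alpha=0 and beta=0 give classical histogram
# equalization computed over a window of the given radius; a larger radius
# makes the effect more global. 'img' is the noisy blob from the code above.
heq = sitk.AdaptiveHistogramEqualizationImageFilter()
heq.SetAlpha(0.0)
heq.SetBeta(0.0)
heq.SetRadius(10)
equalized = heq.Execute(img)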
I'm trying to write code that takes TEM (Transmission Electron Microscope) TIFF images and computes the FFT. But I always get plain red, green, or blue images.
Here's what the RAW TEM images look like:
Here's what the FFT image should look like:
But instead I get:
Here's my code :
import numpy as np
import diplib as dip
import matplotlib.pyplot as plt
from PIL import Image
from ncempy.io import dm
img1 = dip.ImageReadTIFF('RAW_FFT.tif')
f = np.fft.fft2(img1)
f = np.fft.fftshift(f)
plt.imshow(abs(f))
plt.show()
Do you have any idea what could be the problem? I even tried to convert the image to np.array and do FFT step by step but I get the same result.
The FFT output is complex, and without logarithmic scaling the strongest Fourier coefficients are so much brighter than all the other points that everything else appears black.
See for details: https://homepages.inf.ed.ac.uk/rbf/HIPR2/fourier.htm
import cv2
import numpy as np

# Read the input as grayscale
img = cv2.imread('inputfolder/yourimage.jpg', 0)

def fft_image_inv(image):
    # 2D FFT, with the zero-frequency component shifted to the center
    f = np.fft.fft2(image)
    fshift = np.fft.fftshift(f)
    # Log-scale the magnitude so the huge dynamic range fits an 8-bit display
    magnitude_spectrum = 15 * np.log(np.abs(fshift))
    return magnitude_spectrum

fft = fft_image_inv(img)
cv2.imwrite('outputfolder/yourimage.jpg', fft)
Output:
There are multiple issues here. First, sometimes grayscale images are written to file as if they were RGB images. In a TIFF file, this can be as simple as storing a grayscale color map: the pixel values are interpreted as indices into the map, and the loaded image becomes an RGB image instead of a grayscale image, even though it has only grayscale colors.
This is the case here. All three channels have exactly the same information, but there are three channels stored, and your FFT will compute the same thing three times!
After loading the image with dip.ImageReadTIFF(), you can use parentheses to index one of the channels:
img1 = dip.ImageReadTIFF('RAW_FFT.tif')
img1 = img1(0)
We now have an actual gray-scale image. This should get rid of the red color in the output.
After computing the FFT, we have a floating-point image with a very high dynamic range (the largest magnitude, at the middle pixel, is 437536704). pyplot, by default, will show floating-point images with 0 and all negative values as black, and 1 and all larger values as white (actual colors depend of course on the color map it uses). So your display will be all white. Use the vmax parameter to imshow to determine the value shown as white. Setting this to 1e6 should give you a similar display as in the GMS software.
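A minimal sketch of that suggestion (1e6 is just the value mentioned above, not a universal choice; 'f' is the shifted FFT from the question's code):
import matplotlib.pyplot as plt
# Map 1e6 and anything above it to white; 0 stays black.
plt.imshow(abs(f), vmax=1e6)
plt.show()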
Instead of pyplot you can use DIPlib for display. Its interactive viewer will let you use a slider to manually set the grayscale limits, and you can manually select to display the magnitude, as well as choose a logarithmic mapping (which tend to be most useful for displaying the frequency domain).
f = dip.FourierTransform(img1)
dip.viewer.ShowModal(f)
Alternatively, you can use a static display, which uses pyplot under the hood:
f.Show((0, 1e6))
or
f.Show('log')
I have an image with a faded square. I need the faded square restored to its original color, which can be seen around the edges of the image. How do I process only the center square of the image so that it matches the edges?
I tried using histogram equalization but with no success as the difference was only enhanced.
Original Image:
After histogram equalization:
You can try applying edge-preserving smoothing instead of histogram equalization.
For example, you can try a bilateral filter or a guided filter.
There are OpenCV implementations, but I never tried them; a hedged Python sketch follows the MATLAB code below.
The following MATLAB code demonstrates the filters:
I = rgb2gray(im2double(imread('I.jpg')));
G = imguidedfilter(I, 'DegreeOfSmoothing', 0.005);
J = imsharpen(G, 'Amount', 2);
figure;imshow(J)
B = imbilatfilt(I);
K = imsharpen(B, 'Amount', 2);
figure;imshow(K)
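Since I haven't tried the OpenCV versions, this is only a sketch of what they might look like in Python. cv2.bilateralFilter is in core OpenCV; cv2.ximgproc.guidedFilter requires the opencv-contrib build; the parameter values are illustrative, not tuned:
import cv2
import numpy as np
# Load as grayscale float in [0, 1], mirroring MATLAB's rgb2gray(im2double(...)).
I = cv2.imread('I.jpg', cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
# Bilateral filter: smooths while preserving edges.
B = cv2.bilateralFilter(I, d=9, sigmaColor=0.1, sigmaSpace=9)
# Guided filter (opencv-contrib-python); here the image guides itself.
G = cv2.ximgproc.guidedFilter(guide=I, src=I, radius=8, eps=0.005)
cv2.imshow('bilateral', B)
cv2.imshow('guided', G)
cv2.waitKey(0)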
If this doesn't seem to work, try this approach using histogram equalization:
from matplotlib import pyplot as plt
import cv2
# Load in image as grayscale
image = cv2.imread('1.jpg', 0)
plt.hist(image.ravel(), 256, [0,256])
The pixels are clustered around the mid range intensities. To increase the contrast of the image, histogram equalization stretches out the intensity values over the whole range to obtain a wider and more uniform distribution. You can do this with the built-in function, cv2.equalizeHist()
equalize = cv2.equalizeHist(image)
plt.hist(equalize.ravel(), 256, [0,256])
The intensity ranges are now evenly distributed. Histogram equalization considers the global contrast of the image and works great when the histogram of the image is confined to a particular region. Here's the result
In some cases where there are intensity variations across a large region, CLAHE (Contrast Limited Adaptive Histogram Equalization) may be better. CLAHE is implemented in OpenCV as cv2.createCLAHE()
clahe = cv2.createCLAHE().apply(image)
plt.hist(clahe.ravel(), 256, [0,256])
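If the defaults don't help, CLAHE's parameters can be tuned. Continuing from the snippet above, the values below are illustrative, not tuned for this image:
# clipLimit caps contrast amplification per tile; tileGridSize sets how many
# local regions the image is split into.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
result = clahe.apply(image)
plt.hist(result.ravel(), 256, [0,256])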
So far I have been able to perform median blurring and edge detection. Now I want to further remove noise from the image. In MATLAB, the region property functions (regionprops) were used to remove all white regions whose total pixel area was less than the mean pixel area. How can I implement this in Python?
import cv2
import numpy as np
from matplotlib import pyplot as plt

# NOTE: 'image' must be loaded beforehand, e.g. image = cv2.imread('farms.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
print(gray.shape)
blur = cv2.bilateralFilter(gray, 9, 75, 75)
median = cv2.medianBlur(gray, 5)

# display input and output image
titles = ["bilateral smoothing", "median blur"]
images = [blur, median]
plt.figure(figsize=(20, 20))
for i in range(2):
    plt.subplot(1, 2, i + 1)
    plt.imshow(images[i])
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()

sobelX = cv2.Sobel(median, cv2.CV_64F, 1, 0)
sobelY = cv2.Sobel(median, cv2.CV_64F, 0, 1)
sobelX = np.uint8(np.absolute(sobelX))
sobelY = np.uint8(np.absolute(sobelY))
SobelCombined = cv2.bitwise_or(sobelX, sobelY)

cv2.imshow('img', SobelCombined)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here is MATLAB code that performs the same task:
close all
%upload image of farm
figure,
farm = imread('small_farms.JPG');%change this to the file path of image
imshow(farm);%this shows the original image
%convert the image to grayscale for 2D manipulation
gfarm = rgb2gray(farm);
figure,
imshow(gfarm);%show grayscaled image
%median filters take a m*n area around a coordinate and
%find the median pixel value and set that coordinate to that
%pixel value. It's a method of removing noise or details in an
%image. may want to tune dimensions of filter.
A = medfilt2(gfarm,[4 4]);
figure,
imshow(A);
%perform a logarithmic edge detection filter,
%this picks out the edges of the image, log setting
%was found to work best, although 'Sobel' can also be tried
B = edge(A,'log');
%show results of the edge filter
figure,
imshow(B,[]);
%find the areas of the lines made
areas = regionprops(B,'Area');
%find the mean (a multiple of the standard deviation can be added; here it is zeroed out)
men = mean([areas.Area])+0*std([areas.Area]);
%find max pixel area
big = max([areas.Area]);
%remove regions that are too small
C = bwpropfilt(B,'Area',[men big]);
%perform a dilation on the remaining pixels, this
%helps fill in gaps. The size and shape of the dilation
%can be tuned below.
SE = strel('square',4);
C = imdilate(C,SE);
areas2 = regionprops(C,'Area');
%place white border around image to find areas of farms
%that go off the picture
[h,w] = size(C);
C(1,:) = 1;
C(:,1) = 1;
C(h,:) = 1;
C(:,w) = 1;
C = C<1;
%fill in holes
C = imfill(C,'holes');
%show final processed image
figure,imshow(C);
%the section below is for display purpose
%it creates the boundaries of the image and displays them
%in a rainbow fashion
figure,
[B,L,n,A] = bwboundaries(C,'noholes');
imshow(label2rgb(L, @jet, [.5 .5 .5]))
hold on
for k = 1:length(B)
    boundary = B{k};
    plot(boundary(:,2), boundary(:,1), 'w', 'LineWidth', 2)
end
%The section below prints out the areas of each found
%region by pixel values. These values need to be scaled
%by the real measurements of the images to get relevant
%metrics
centers = regionprops(C,'Centroid','Area');
for k=1:length(centers)
    if(centers(k).Area > mean([centers.Area])-std([areas.Area]))
        text(centers(k).Centroid(1),centers(k).Centroid(2),string(centers(k).Area));
    end
end
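To answer the Python side of the question: below is an untested sketch of the area-filtering step using OpenCV's connected-component statistics. It assumes a binary image (for example, a thresholded SobelCombined from the code above) and mirrors MATLAB's bwpropfilt(B, 'Area', [men big]):
import cv2
import numpy as np
# 'SobelCombined' stands in for the binary edge image from the question.
binary = (SobelCombined > 0).astype(np.uint8)
# Label the white regions and gather per-region statistics.
num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
# Row 0 of stats is the background; the remaining rows are the white regions.
areas = stats[1:, cv2.CC_STAT_AREA]
mean_area = areas.mean()
# Keep only the labels whose area is at least the mean area.
keep = np.where(areas >= mean_area)[0] + 1  # +1 skips background label 0
filtered = np.isin(labels, keep).astype(np.uint8) * 255
cv2.imshow('filtered', filtered)
cv2.waitKey(0)
cv2.destroyAllWindows()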
I'm working on a project to measure and visualize image similarity. The images in my dataset come from photographs of images in books, some of which have very high or low exposure rates. For example, the images below come from two different books; the one on the top is an over-exposed reprint of the one on the bottom, wherein the exposure looks good:
I'd like to normalize each image's exposure in Python. I thought I could do so with the following naive approach, which attempts to stretch each image's pixel values to fill the range 0 to 255:
from scipy.ndimage import imread
import sys

def normalize(img):
    '''
    Normalize the exposure of an image.

    #args:
      {numpy.ndarray} img: an array of image pixels with shape:
        (height, width)

    #returns:
      {numpy.ndarray} an image with shape of `img` wherein
        all values are normalized such that the min=0 and max=255
    '''
    _min = img.min()
    _max = img.max()
    return (img - _min) * 255 / (_max - _min)

img = imread(sys.argv[1])
normalized = normalize(img)
Only after running this did I realize that this normalization will only help images whose lightest value is less than 255 or whose darkest value is greater than 0.
Is there a straightforward way to normalize the exposure of an image such as the top image above? I'd be grateful for any thoughts others can offer on this question.
Histogram equalisation works surprisingly well for this kind of thing. It's usually better for photographic images, but it's helpful even on line art, as long as there are some non-black/white pixels.
It works well for colour images too: split the bands up, equalize each one separately, and recombine.
I tried on your sample image:
Using libvips:
$ vips hist_equal sample.jpg x.jpg
Or from Python with pyvips:
x = pyvips.Image.new_from_file("sample.jpg")
x = x.hist_equal()
x.write_to_file("x.jpg")
It's very hard to say if it will work for you without seeing a larger sample of your images, but you may find an "auto-gamma" useful. There is one built into ImageMagick and the description - so that you can calculate it yourself - is:
Automagically adjust gamma level of image.
This calculates the mean values of an image, then applies a calculated
-gamma adjustment so that the mean color in the image will get a value of 50%.
This means that any solid 'gray' image becomes 50% gray.
This works well for real-life images with little or no extreme dark
and light areas, but tend to fail for images with large amounts of
bright sky or dark shadows. It also does not work well for diagrams or
cartoon like images.
You can try it out yourself on the command line very simply before you go and spend a lot of time coding something that may not work:
convert Tribunal.jpg -auto-gamma result.png
You can do -auto-level as per your own code beforehand, and a thousand other things too:
convert Tribunal.jpg -auto-level -auto-gamma result.png
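If you'd rather stay in Python, here is a hedged sketch of the same auto-gamma idea: pick a gamma so the mean intensity maps to mid-gray. The file names are placeholders, and it assumes a grayscale image whose mean lies strictly between 0 and 1:
import numpy as np
from PIL import Image
# Load as grayscale in [0, 1].
img = np.asarray(Image.open('Tribunal.jpg').convert('L'), dtype=np.float64) / 255.0
# Solve mean ** gamma == 0.5 for gamma, then apply it.
gamma = np.log(0.5) / np.log(img.mean())
out = np.clip(img ** gamma, 0.0, 1.0)
Image.fromarray((out * 255).astype(np.uint8)).save('result.png')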
I ended up using a numpy implementation of the histogram normalization method #user894763 pointed out. Just save the script below as normalize.py; then you can call:
python normalize.py cats.jpg
Script:
import numpy as np
from scipy.misc import imsave
from scipy.ndimage import imread
import sys

def get_histogram(img):
    '''
    calculate the normalized histogram of an image
    '''
    height, width = img.shape
    hist = [0.0] * 256
    for i in range(height):
        for j in range(width):
            hist[img[i, j]] += 1
    return np.array(hist) / (height * width)

def get_cumulative_sums(hist):
    '''
    find the cumulative sum of a numpy array
    '''
    return [sum(hist[:i+1]) for i in range(len(hist))]

def normalize_histogram(img):
    # calculate the image histogram
    hist = get_histogram(img)
    # get the cumulative distribution function
    cdf = np.array(get_cumulative_sums(hist))
    # determine the mapped intensity for each unit of the cdf
    sk = np.uint8(255 * cdf)
    # apply the mapping to every pixel
    height, width = img.shape
    Y = np.zeros_like(img)
    for i in range(0, height):
        for j in range(0, width):
            Y[i, j] = sk[img[i, j]]
    # optionally, get the new histogram for comparison
    new_hist = get_histogram(Y)
    # return the transformed image
    return Y

img = imread(sys.argv[1])
normalized = normalize_histogram(img)
imsave(sys.argv[1] + '-normalized.jpg', normalized)
Output:
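As a side note, the double Python loops above are slow on large images; for 8-bit grayscale input the same mapping can be written with numpy alone. A sketch (function name is mine, not from the original script):
import numpy as np
def normalize_histogram_fast(img):
    # Per-intensity counts, then the cumulative distribution function.
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum() / float(img.size)
    # Scale the CDF to [0, 255] and map every pixel through it.
    sk = np.uint8(255 * cdf)
    return sk[img]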
I have been having difficulty trying to generate a histogram for a 640x480 grayscale image I am working with.
I am using Python 2.7.3, OpenCV 2.4.6 (Python bindings) and Numpy
The image below was generated from the same image, using an executable software tool (programmed in C++)
The properties for this histogram were:
bins = 50
hist_width = 250
normalised_height_max = 50
The image specs are therefore 250x50
I have consulted this documentation:
Histogram Calculation in OpenCV
http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_calculation/histogram_calculation.html
Hist.py - OpenCV Python Samples
https://github.com/Itseez/opencv/blob/master/samples/python2/hist.py
The code in the second reference runs fine, yet when I tried to edit it to get block-style columns rather than thin lines, I couldn't get it right:
import cv2
import numpy as np
cv2.namedWindow('colorhist', cv2.CV_WINDOW_AUTOSIZE)
img = cv2.imread('sample_image.jpg')
h = np.zeros((50,256))
bins = np.arange(32).reshape(32,1)
hist_item = cv2.calcHist([img],[0],None,[32],[0,256])
cv2.normalize(hist_item,hist_item,64,cv2.NORM_MINMAX)
hist=np.int32(np.around(hist_item))
pts = np.column_stack((bins,hist))
cv2.polylines(h,[pts],False,(255,255,255))
h=np.flipud(h)
cv2.imshow('colorhist',h)
cv2.waitKey(0)
I am aiming to make my histogram with the following specs:
bins = 32
hist_width = 256
normalised_height_max = 64
How can I fix this code in order to achieve a histogram like the one above with the specs specified?
I have managed to solve the problem:
import cv2
import numpy as np
#Create window to display image
cv2.namedWindow('colorhist', cv2.CV_WINDOW_AUTOSIZE)
#Set hist parameters
hist_height = 64
hist_width = 256
nbins = 32
bin_width = hist_width/nbins
#Read image in grayscale mode
img = cv2.imread('sample_image.jpg',0)
#Create an empty image for the histogram
h = np.zeros((hist_height,hist_width))
#Create array for the bins
bins = np.arange(nbins,dtype=np.int32).reshape(nbins,1)
#Calculate and normalise the histogram
hist_item = cv2.calcHist([img],[0],None,[nbins],[0,256])
cv2.normalize(hist_item,hist_item,hist_height,cv2.NORM_MINMAX)
hist=np.int32(np.around(hist_item))
pts = np.column_stack((bins,hist))
#Loop through each bin and plot the rectangle in white
for x,y in enumerate(hist):
    cv2.rectangle(h,(x*bin_width,y),(x*bin_width + bin_width-1,hist_height),(255),-1)
#Flip upside down
h=np.flipud(h)
#Show the histogram
cv2.imshow('colorhist',h)
cv2.waitKey(0)
This was the result:
Note that the bottom of the image is slightly different from the C++ implementation. I assume this is due to rounding somewhere in the code.