I am trying to do some image analysis in Python (I have to use Python). I need to do both a global and a local histogram equalization. The global version works well, but the local version, using a 7x7 footprint, gives a very poor result.
This is the global version:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from scipy import ndimage,misc
import scipy.io as io
from scipy.misc import toimage
import numpy as n
import pylab as py
from numpy import *
mat = io.loadmat('image.mat')
image=mat['imageD']
def histeq(im, nbr_bins=256):
    # get image histogram
    imhist, bins = histogram(im.flatten(), nbr_bins, normed=True)
    cdf = imhist.cumsum()  # cumulative distribution function
    cdf = 0.6 * cdf / cdf[-1]  # normalize
    # use linear interpolation of cdf to find new pixel values
    im2 = interp(im.flatten(), bins[:-1], cdf)
    # return the image and the cumulative histogram used to map
    return im2.reshape(im.shape), cdf
im=image
im2,cdf = histeq(im)
To do the local version, I am trying to use a generic filter like so (using the same image as loaded previously):
def func(x):
    cdf = []
    xhist, bins = histogram(x, 256, normed=True)
    cdf = xhist.cumsum()
    cdf = 0.6 * cdf / cdf[-1]
    im_out = interp(x, bins[:-1], cdf)
    midval = interp(x[24], bins[:-1], cdf)  # x[24] is the center of the flattened 7x7 window
    return midval
print(im.shape)
im3=ndimage.filters.generic_filter(im, func,size=im.shape,footprint=n.ones((7,7)))
Does anyone have any suggestions/thoughts as to why the second version will not work? I'm really stuck and any comments would be greatly appreciated! Thanks in advance!
You could use the scikit-image library to perform global and local histogram equalization. Stealing with pride from the linked example, below is the snippet. The equalization is done with a disk-shaped kernel (or footprint), but you could change this to a square by setting kernel = np.ones((N, M)); a square-footprint variant is sketched after the snippet.
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from skimage import data
from skimage.util import img_as_ubyte
from skimage import exposure
import skimage.morphology as morp
from skimage.filters import rank
# Original image
img = img_as_ubyte(data.moon())
# Global equalize
img_global = exposure.equalize_hist(img)
# Local Equalization, disk shape kernel
# Better contrast with disk kernel but could be different
kernel = morp.disk(30)
img_local = rank.equalize(img, selem=kernel)
fig, (ax_img, ax_global, ax_local) = plt.subplots(1, 3)
ax_img.imshow(img, cmap=plt.cm.gray)
ax_img.set_title('Low contrast image')
ax_img.set_axis_off()
ax_global.imshow(img_global, cmap=plt.cm.gray)
ax_global.set_title('Global equalization')
ax_global.set_axis_off()
ax_local.imshow(img_local, cmap=plt.cm.gray)
ax_local.set_title('Local equalization')
ax_local.set_axis_off()
plt.show()
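For the square footprint mentioned above (matching the question's 7x7 window), a minimal variant of the local step, reusing img and rank.equalize from the snippet, could look like this:

# Square 7x7 footprint instead of a disk; the size is a free choice
kernel_square = np.ones((7, 7), dtype=np.uint8)
img_local_square = rank.equalize(img, selem=kernel_square)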
I want to reduce the contrast of my whole dataset as an experiment. The dataset is in a NumPy array of np.array(X).reshape(-1, 28, 28, 1). I tried to use the library Albumentations, which works really well for motion blur and Gaussian noise, but I didn't find a way to reduce the contrast with that library. How can I do this?
You would need to rescale each pixel's difference from the mean value. Here is an example, using this image as the source, with extra code using PIL and imshow for visuals:
import numpy as np
import PIL.Image
import matplotlib.pyplot as plt
im = PIL.Image.open('a-600-600-image-of-a-building.jpg')
np_im = np.array(im)
np_mean = np.mean(np_im)
for factor in [1.0, 0.9, 0.5, 0.1]:
    plt.figure()
    plt.title("{}".format(factor))
    reduced_contrast = (np_im - np_mean) * factor + np_mean
    # cast back to uint8 so PIL can build an RGB image from the float result
    new_im = PIL.Image.fromarray(reduced_contrast.astype(np.uint8))
    plt.imshow(np.asarray(new_im))
    plt.savefig("{}.png".format(factor))
Output:
The relevant line is reduced_contrast = (np_im - np_mean) * factor + np_mean
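Applied to the question's (N, 28, 28, 1) dataset, the same line vectorizes per image. A minimal sketch, assuming X is that NumPy array and using an arbitrary factor:

# Hypothetical: X is the (N, 28, 28, 1) dataset from the question
X = np.asarray(X, dtype=np.float32)
means = X.mean(axis=(1, 2, 3), keepdims=True)  # one mean per image
factor = 0.5                                   # factor < 1 reduces contrast
X_reduced = (X - means) * factor + means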
I'm trying to work with ITK in Python (instead of OpenCV, as I'm mostly using 3D image data) but can't get the filters working.
I'll skip the exact error messages as they depend on what I'm trying. You can reproduce them with the example below based on the ITK documentation. I create a blob using a 2D Gaussian and then try to extract its contours.
The approximate_signed_distance_map_image_filter acts as expected but the contour_extractor2_d_image_filter crashes on me in various ways no matter what I do.
Any ideas on how to solve this?
Minimal (2D) example
import itk
import matplotlib.pyplot as plt
import numpy as np
fig, axs = plt.subplots(1,3)
print('creating blob from 2d gaussian histogram')
arr = np.random.multivariate_normal([0,0], [[1,0],[0,1]], 100000)
h = np.histogram2d(arr[:,0],arr[:,1], bins=[30,30])
axs[0].set_title('Blob')
axs[0].imshow(h[0], cmap='gray')
print('applying itk approximate_signed_distance_map_image_filter')
arr_image = itk.image_view_from_array(h[0])
asdm = itk.approximate_signed_distance_map_image_filter(arr_image, inside_value=1000, outside_value=0)
asdm_arr = itk.array_from_image(asdm)
axs[1].set_title('signed distance')
axs[1].imshow(asdm_arr)
print('applying itk contour_extractor2_d_image_filter')
ce2d = itk.contour_extractor2_d_image_filter(itk.output(asdm), contour_value=1000)
ce2d_arr = itk.array_from_image(ce2d)
# also not working
# ce2d = itk.ContourExtractor2DImageFilter.New()
# ce2d.SetInput(asdm);
# ce2d.SetContourValue(0);
# ce2d.Update()
# ce2d_arr = itk.array_from_image(ce2d.GetOutput())
axs[2].set_title('contour')
axs[2].imshow(ce2d_arr)
plt.show()
import nibabel as nib
import numpy as np
import torch
from skimage.transform import radon,iradon
dir = "/hdd1/Data/3D_CT/train/000000098656.nii.gz"
nib_loader = nib.load(dir).get_fdata()
theta = np.linspace(0,180,360)
slices = nib_loader[150,:,:]
rt = radon(slices,theta,circle=True)
print(slices.max())
print(slices.min())
print(rt.max())
print(rt.min())
The output is:
slices.max() = 1220.0
slices.min() = -1024.0
rt.max() = 0.0
rt.min() = -510128.35438634525
The radon transform changes the image's intensity scale. How can I fix this, so that the radon-transformed image has the same scale as the slice image?
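Not a fix for the transform itself, but if the goal is just to display the sinogram on the slice's intensity range, a hedged min-max rescaling sketch (using rt and slices from the code above) would be:

# For display only: map rt into the slice's intensity range.
# The radon transform sums intensity along projection lines, so its
# values are naturally on a much larger scale than the input image.
rt_norm = (rt - rt.min()) / (rt.max() - rt.min())
rt_rescaled = rt_norm * (slices.max() - slices.min()) + slices.min()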
I am trying to cross-correlate two images, and thus locate the template image on the first image, by finding the maximum correlation value.
I drew an image with some random shapes (first image) and cut out one of these shapes (template). Now, when I use scipy's correlate2d and locate the points in the correlation with the maximum value, several points appear. From my knowledge, shouldn't there be only one point where the overlap is at its maximum?
The idea behind this exercise is to take some part of an image, and then correlate that to some previous images from a database. Then I should be able to locate this part on the older images based on the maximum value of correlation.
My code looks something like this:
from matplotlib import pyplot as plt
from PIL import Image
import numpy as np
import scipy.signal as sp
img = Image.open('test.png').convert('L')
img = np.asarray(img)
temp = Image.open('test_temp.png').convert('L')
temp = np.asarray(temp)
corr = sp.correlate2d(img, temp, boundary='symm', mode='full')
plt.imshow(corr, cmap='hot')
plt.colorbar()
coordin = np.where(corr == np.max(corr))  # finds all coordinates where there is a maximum correlation
listOfCoordinates = list(zip(coordin[1], coordin[0]))
for i in range(len(listOfCoordinates)):  # plot all those coordinates
    plt.plot(listOfCoordinates[i][0], listOfCoordinates[i][1], 'c*', markersize=5)
This yields the figure:
Cyan stars are points with max correlation value (255).
I expect there to be only one point in "corr" to have the max value of correlation, but several appear. I have tried to use different modes of correlating, but to no avail.
This is the test image I use when correlating.
This is the template, cut from the original image.
Can anyone give some insight to what I might be doing wrong here?
You are probably overflowing the numpy type uint8.
Try using:
img = np.asarray(img,dtype=np.float32)
temp = np.asarray(temp,dtype=np.float32)
Untested.
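For intuition on why this helps (my illustration, not part of the original answer): uint8 arithmetic wraps around modulo 256, so the large sums a correlation produces get silently corrupted:

import numpy as np

a = np.array([200], dtype=np.uint8)
print(a + a)                     # [144], because 400 % 256 == 144
print(a.astype(np.float32) + a)  # [400.], promoted, no wraparound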
Applying
img = img - img.mean()
temp = temp - temp.mean()
before computing the 2D cross-correlation corr should give you the expected result.
Cleaning up the code, for a full example:
from imageio import imread
from matplotlib import pyplot as plt
import scipy.signal as sp
import numpy as np
img = imread('https://i.stack.imgur.com/JL2LW.png', pilmode='L')
temp = imread('https://i.stack.imgur.com/UIUzJ.png', pilmode='L')
corr = sp.correlate2d(img - img.mean(),
                      temp - temp.mean(),
                      boundary='symm',
                      mode='full')
# coordinates where there is a maximum correlation
max_coords = np.where(corr == np.max(corr))
plt.plot(max_coords[1], max_coords[0],'c*', markersize=5)
plt.imshow(corr, cmap='hot')
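As a follow-up sketch (my addition, reusing corr, img, and temp from above): once the correlation has a single clear peak, the template's top-left corner in img can be recovered from it; with mode='full', the peak is offset by the template size minus one:

# Locate the correlation peak and undo the 'full'-mode offset
y_peak, x_peak = np.unravel_index(np.argmax(corr), corr.shape)
top_left = (y_peak - temp.shape[0] + 1, x_peak - temp.shape[1] + 1)
print(top_left)  # (row, col) of the template's top-left corner in img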
I was calculating the KL divergence between the histograms of 3 images:
import numpy as np
import scipy.misc
from skimage.io import ImageCollection, imread
from skimage import color
import skimage
from sklearn.datasets import load_sample_image
# all images in grayscale
lena = scipy.misc.lena().astype('uint8')
china = skimage.img_as_ubyte(color.rgb2grey( load_sample_image("china.jpg")) )
flower = skimage.img_as_ubyte(color.rgb2grey( load_sample_image("flower.jpg")) )
# histograms for all images
hist_lena, bin_edges_lena = np.histogram(lena, bins = range(256))
hist_china, bin_edges_china = np.histogram(china, bins = range(256))
hist_flower, bin_edges_flower = np.histogram(flower, bins = range(256))
When I use scipy.stats.entropy to compare each image's histogram with itself, I get different results:
# http://docs.scipy.org/doc/scipy-dev/reference/generated/scipy.stats.entropy.html
from scipy.stats import entropy
print(entropy(pk=hist_lena, qk=hist_lena))      # nan
print(entropy(pk=hist_china, qk=hist_china))    # -0.0
print(entropy(pk=hist_flower, qk=hist_flower))  # nan
I was expecting zero (unsigned?) as the result.
Am I applying the entropy function correctly?
Does it seem correct to apply this function to image histograms?