imnoise in Python?

I'm trying to add white noise to an image after applying a lowpass filter. I know how to do it in MATLAB but don't know what to call to make it work in Python.
import matplotlib.pyplot as plt
import numpy as np
import scipy.misc
from scipy import ndimage
import Image
J = imnoise(im,'salt & pepper',0.02);
figure.imshow(J)
What else do I need to import? Or is there another way to add noise?

scikit-image provides a function random_noise which is similar to imnoise in MATLAB.
skimage.util.random_noise(image, mode='gaussian', seed=None, clip=True, **kwargs)
It supports the following modes:
'gaussian'  Gaussian-distributed additive noise.
'localvar'  Gaussian-distributed additive noise, with specified local variance at each point of the image.
'poisson'   Poisson-distributed noise generated from the data.
'salt'      Replaces random pixels with 1.
'pepper'    Replaces random pixels with 0.
's&p'       Replaces random pixels with 0 or 1.
'speckle'   Multiplicative noise using out = image + n*image, where n is uniform noise with specified mean & variance.
Note that one difference from imnoise in MATLAB is that the output of this function is always a floating-point image.
If the input is, for instance, a uint8 grayscale image, it is converted to float first, and the output is not converted back to the class of the input image.
Therefore, if you care about the class of the image, you should convert the output yourself, for example using skimage.img_as_ubyte, as sketched below.
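For the question's salt & pepper case, a minimal sketch (the zeros array is a placeholder for your image; the amount keyword plays the role of imnoise's 0.02 density):
import numpy as np
from skimage import img_as_ubyte
from skimage.util import random_noise

im = np.zeros((64, 64), dtype=np.uint8)            # placeholder for your image
noisy = random_noise(im, mode='s&p', amount=0.02)  # float64 result in [0, 1]
noisy_u8 = img_as_ubyte(noisy)                     # back to the input's uint8 class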

In discussing denoising, this tutorial adds white noise to an image with noisy = l + 0.4 * l.std() * np.random.random(l.shape), where l is the image. (Note that np.random.random draws uniform values in [0, 1), so this noise has a positive mean; subtract 0.5 if you want it zero-mean.)
http://scipy-lectures.github.io/advanced/image_processing/#denoising
In general, you should be able to add noise simply by adding a matrix filled with the noise that you want to use to the original picture.
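For instance, a zero-mean Gaussian version of the same idea (a sketch; l is a stand-in for your image):
import numpy as np

l = np.random.random((128, 128))                   # stand-in image
noise = 0.4 * l.std() * np.random.randn(*l.shape)  # zero-mean Gaussian noise matrix
noisy = l + noise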

Code sample:
import numpy as np
import cv2
from matplotlib import pyplot as plt
from skimage.util import random_noise
I = cv2.imread('image.jpg', 1)           # 1 / -1: color mode; 0: grayscale mode
I = cv2.cvtColor(I, cv2.COLOR_BGR2RGB)   # OpenCV loads BGR; convert so pyplot shows true colors
gauss = random_noise(I, mode='gaussian', seed=None, clip=True)
sp = random_noise(I, mode='s&p', seed=None, clip=True)
plt.subplot(231), plt.imshow(I), plt.title('Original')
plt.subplot(232), plt.imshow(gauss), plt.title('Gaussian')
plt.subplot(233), plt.imshow(sp), plt.title('Salt & Pepper')
plt.show()
More info: http://scikit-image.org/docs/0.13.x/api/skimage.util.html#skimage.util.random_noise

Although there is no built-in function like MATLAB's imnoise(image, noiseType, amount), we can easily add the required amount of random-valued impulse noise or salt-and-pepper noise to an image manually.
1. To add random-valued impulse noise:
import random as r

def addRvinGray(image, n):  # add random-valued impulse noise to a grayscale image
    '''parameters:
    image: numpy array; the input image you want to add noise to.
    n: noise level (in percent)'''
    k = 0                               # counter variable
    ih = image.shape[0]
    iw = image.shape[1]
    noisypixels = (ih * iw * n) // 100  # number of pixels to alter (integer)
    for i in range(ih * iw):
        if k < noisypixels:
            # set a random pixel to a random intensity (0-255)
            image[r.randrange(0, ih)][r.randrange(0, iw)] = r.randrange(0, 256)
            k += 1
        else:
            break
    return image
2. To add salt-and-pepper noise:
import random as r

def addSaltGray(image, n):  # add salt-&-pepper noise to a grayscale image
    k = 0
    salt = True
    ih = image.shape[0]
    iw = image.shape[1]
    noisypixels = (ih * iw * n) // 100
    for i in range(ih * iw):
        if k < noisypixels:  # keep track of the noise level
            if salt:
                image[r.randrange(0, ih)][r.randrange(0, iw)] = 255
                salt = False
            else:
                image[r.randrange(0, ih)][r.randrange(0, iw)] = 0
                salt = True
            k += 1
        else:
            break
    return image
Note: for color images, first split the image into three or four channels, depending on the input, using the OpenCV function (B, G, R) = cv2.split(image) or (B, G, R, A) = cv2.split(image). After splitting, perform the same operations on all channels, and at the end merge the channels back together: merged = cv2.merge([B, G, R]). A sketch follows.
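For example, a minimal sketch of that recipe for a three-channel image, reusing the addSaltGray function defined above:
import cv2

def addSaltColor(image, n):
    # split into channels, add noise to each, then merge back
    (B, G, R) = cv2.split(image)
    B = addSaltGray(B, n)
    G = addSaltGray(G, n)
    R = addSaltGray(R, n)
    return cv2.merge([B, G, R])
I hope this will help someone.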

Related

How can I add Rayleigh noise to an image in python?

I am doing the following image processing using cv2:
import numpy as np
import cv2
image = cv2.imread('./tomatoes.png',cv2.IMREAD_GRAYSCALE)
noise_std = 0.1
noise = np.random.rayleigh(noise_std, image.shape)
noisy_image = image + noise
cv2.imwrite('noisy_image.jpg', noisy_image)
cv2.imshow('Noisy Image', noisy_image)
cv2.waitKey(0)
The problem is: I only receive a white window dialog when the noise is added to the image.
Here is how to add Rayleigh noise in Python/OpenCV. You have a couple of issues. First, convert your image to float to match the result from the noise generation. Second, use addWeighted to combine the noise and the image. Since the Rayleigh noise amplitude is very small, it needs a large weight. (Note: I have purposely chosen a very large weight to make the noise very visible.)
Input:
import numpy as np
import cv2

img = cv2.imread('lena.png', cv2.IMREAD_GRAYSCALE)
image = img.astype(np.float64)
noise_std = 0.2
noise = np.random.rayleigh(noise_std, img.shape)
# combine image and (heavily weighted) noise, then clip before converting
# back to uint8 so bright pixels saturate instead of wrapping around
noisy_image = cv2.addWeighted(image, 1, noise, 70, 0.0)
noisy_image = np.clip(noisy_image, 0, 255).astype(np.uint8)
cv2.imwrite('lena_rayleigh_noise.png', noisy_image)
cv2.imshow('Image', img)
cv2.imshow('Noise', noise)
cv2.imshow('Noisy Image', noisy_image)
cv2.waitKey(0)
Result:
ADDITION
Here are the results of filtering the above image with Rayleigh noise using statistical filters from the ImageMagick script statsfilt at http://www.fmwconcepts.com/imagemagick/statsfilt/index.php. See Wikipedia, for example, the Lp mean at https://en.wikipedia.org/wiki/Lp_space and the contraharmonic mean at https://en.wikipedia.org/wiki/Contraharmonic_mean. These two seem to work better than the others, including the geometric mean and the harmonic mean. I recommend using an exponent (p) of 2 or higher.
Lp Mean with p=2:
Contraharmonic Mean with p=2:
Contraharmonic Mean with p=3:
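For reference, the contraharmonic mean is easy to sketch in Python with NumPy/SciPy, following the formula on the Wikipedia page above (this is an assumption of the formula, not the statsfilt script itself):
import numpy as np
from scipy.ndimage import uniform_filter

def contraharmonic_mean(img, size=3, p=2.0):
    # out = sum(I^(p+1)) / sum(I^p) over a size x size window;
    # positive p suppresses pepper noise, negative p suppresses salt
    img = img.astype(np.float64)
    num = uniform_filter(img ** (p + 1), size=size)
    den = uniform_filter(img ** p, size=size)
    return num / np.maximum(den, 1e-12)  # guard against division by zero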
ImageMagick also has a noise reduction function called enhance. I did five iterations:
convert lena_rayleigh_noise.png -enhance -enhance -enhance -enhance -enhance lena_rayleigh_noise_enhance5.png
It does a nice job for the white noise, but leaves the black noise.

How to get the FFT image from a raw TEM tiff?

I'm trying to write code that takes TEM (Transmission Electron Microscope) TIFF images and computes the FFT. But I always get plain red, green or blue images.
Here's what the RAW TEM images look like:
Here's what the FFT image should look like:
But instead I get:
Here's my code:
import numpy as np
import diplib as dip
import matplotlib.pyplot as plt
from PIL import Image
from ncempy.io import dm
img1 = dip.ImageReadTIFF('RAW_FFT.tif')
f = np.fft.fft2(img1)
f = np.fft.fftshift(f)
plt.imshow(abs(f))
plt.show()
Do you have any idea what the problem could be? I even tried converting the image to an np.array and doing the FFT step by step, but I get the same result.
The FFT is complex-valued, and without a logarithmic scaling the zero-frequency peak of the Fourier transform is so much brighter than all the other points that everything else appears black.
see for details: https://homepages.inf.ed.ac.uk/rbf/HIPR2/fourier.htm
import cv2
import numpy as np

img = cv2.imread('inputfolder/yourimage.jpg', 0)

def fft_image_inv(image):
    f = np.fft.fft2(image)
    fshift = np.fft.fftshift(f)
    # add 1 inside the log to avoid log(0) at zero-magnitude bins
    magnitude_spectrum = 15 * np.log(np.abs(fshift) + 1)
    return magnitude_spectrum

fft = fft_image_inv(img)
cv2.imwrite('outputfolder/yourimage.jpg', fft)
Output:
There are multiple issues here. First, sometimes grayscale images are written to file as if they were RGB images (in a TIFF file, this can be as simple as storing a grayscale color map: the pixel values are interpreted as indices into the map, and the loaded image becomes an RGB image instead of a grayscale one, even though it has only grayscale colors).
This is the case here. All three channels have exactly the same information, but three channels are stored, so your FFT computes the same thing three times!
After loading the image with dip.ImageReadTIFF(), you can use parentheses to index one of the channels:
img1 = dip.ImageReadTIFF('RAW_FFT.tif')
img1 = img1(0)
We now have an actual gray-scale image. This should get rid of the red color in the output.
After computing the FFT, we have a floating-point image with a very high dynamic range (the largest magnitude, at the middle pixel, is 437536704). pyplot, by default, will show floating-point images with 0 and all negative values as black, and 1 and all larger values as white (actual colors depend of course on the color map it uses). So your display will be all white. Use the vmax parameter to imshow to determine the value shown as white. Setting this to 1e6 should give you a similar display as in the GMS software.
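For example, using the f computed by the numpy code in the question:
import numpy as np
import matplotlib.pyplot as plt

plt.imshow(np.abs(f), vmax=1e6)  # values >= 1e6 are shown as white
plt.show()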
Instead of pyplot you can use DIPlib for display. Its interactive viewer will let you use a slider to manually set the grayscale limits, and you can manually select to display the magnitude, as well as choose a logarithmic mapping (which tend to be most useful for displaying the frequency domain).
f = dip.FourierTransform(img1)
dip.viewer.ShowModal(f)
Alternatively, you can use a static display, which uses pyplot under the hood:
f.Show((0, 1e6))
or
f.Show('log')

Python: Normalize image exposure

I'm working on a project to measure and visualize image similarity. The images in my dataset come from photographs of images in books, some of which have very high or low exposure rates. For example, the images below come from two different books; the one on the top is an over-exposed reprint of the one on the bottom, wherein the exposure looks good:
I'd like to normalize each image's exposure in Python. I thought I could do so with the following naive approach, which attempts to center each pixel value between 0 and 255:
from scipy.ndimage import imread
import sys

def normalize(img):
    '''
    Normalize the exposure of an image.
    #args:
      {numpy.ndarray} img: an array of image pixels with shape:
        (height, width)
    #returns:
      {numpy.ndarray} an image with shape of `img` wherein
        all values are normalized such that the min=0 and max=255
    '''
    _min = img.min()
    _max = img.max()
    return (img - _min) * 255 / (_max - _min)

img = imread(sys.argv[1])
normalized = normalize(img)
Only after running this did I realize that this normalization will only help images whose lightest value is less than 255 or whose darkest value is greater than 0.
Is there a straightforward way to normalize the exposure of an image such as the top image above? I'd be grateful for any thoughts others can offer on this question.
Histogram equalisation works surprisingly well for this kind of thing. It's usually better for photographic images, but it's helpful even on line art, as long as there are some non-black/white pixels.
It works well for colour images too: split the bands up, equalize each one separately, and recombine.
I tried on your sample image:
Using libvips:
$ vips hist_equal sample.jpg x.jpg
Or from Python with pyvips:
import pyvips

x = pyvips.Image.new_from_file("sample.jpg")
x = x.hist_equal()
x.write_to_file("x.jpg")
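For colour images, the split/equalise/recombine idea mentioned above could be sketched with scikit-image instead (an assumption for illustration, not how libvips handles bands internally):
import numpy as np
from skimage import io, exposure, img_as_ubyte

img = io.imread("sample.jpg")  # H x W x 3
# equalise each band separately, then stack them back together
eq = np.stack([exposure.equalize_hist(img[..., c]) for c in range(img.shape[-1])], axis=-1)
io.imsave("x.jpg", img_as_ubyte(eq))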
It's very hard to say if it will work for you without seeing a larger sample of your images, but you may find an "auto-gamma" useful. There is one built into ImageMagick and the description - so that you can calculate it yourself - is:
Automagically adjust gamma level of image.
This calculates the mean values of an image, then applies a calculated
-gamma adjustment so that the mean color in the image will get a value of 50%.
This means that any solid 'gray' image becomes 50% gray.
This works well for real-life images with little or no extreme dark
and light areas, but tend to fail for images with large amounts of
bright sky or dark shadows. It also does not work well for diagrams or
cartoon like images.
You can try it out yourself on the command line very simply before you go and spend a lot of time coding something that may not work:
convert Tribunal.jpg -auto-gamma result.png
You can do -auto-level as per your own code beforehand, and a thousand other things too:
convert Tribunal.jpg -auto-level -auto-gamma result.png
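The gamma calculation described above is also easy to sketch in Python, assuming -gamma applies out = in**(1/gamma) on intensities normalised to [0, 1] (a rough equivalent, not ImageMagick's exact implementation):
import numpy as np
from skimage import io, img_as_float, img_as_ubyte

img = img_as_float(io.imread('Tribunal.jpg', as_gray=True))
mean = img.mean()
gamma = np.log(mean) / np.log(0.5)  # chosen so that mean ** (1/gamma) == 0.5
out = img ** (1.0 / gamma)
io.imsave('result.png', img_as_ubyte(out))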
I ended up using a numpy implementation of the histogram normalization method #user894763 pointed out. Just save the below as normalize.py then you can call:
python normalize.py cats.jpg
Script:
import numpy as np
from scipy.misc import imsave     # note: removed in SciPy >= 1.2; imageio offers equivalents
from scipy.ndimage import imread  # note: removed in SciPy >= 1.2
import sys

def get_histogram(img):
    '''
    calculate the normalized histogram of an image
    '''
    height, width = img.shape
    hist = [0.0] * 256
    for i in range(height):
        for j in range(width):
            hist[img[i, j]] += 1
    return np.array(hist) / (height * width)

def get_cumulative_sums(hist):
    '''
    find the cumulative sum of a numpy array
    '''
    return [sum(hist[:i+1]) for i in range(len(hist))]

def normalize_histogram(img):
    # calculate the image histogram
    hist = get_histogram(img)
    # get the cumulative distribution function
    cdf = np.array(get_cumulative_sums(hist))
    # determine the mapping value for each unit of the cdf
    sk = np.uint8(255 * cdf)
    # map each pixel through the mapping values
    height, width = img.shape
    Y = np.zeros_like(img)
    for i in range(0, height):
        for j in range(0, width):
            Y[i, j] = sk[img[i, j]]
    # optionally, get the new histogram for comparison
    new_hist = get_histogram(Y)
    # return the transformed image
    return Y

img = imread(sys.argv[1])
normalized = normalize_histogram(img)
imsave(sys.argv[1] + '-normalized.jpg', normalized)
Output:
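As an aside, the nested loops above are slow in pure Python; the same equalization can be written in vectorized form with np.bincount and np.cumsum (a sketch, assuming a uint8 grayscale input):
import numpy as np

def normalize_histogram_fast(img):
    hist = np.bincount(img.ravel(), minlength=256)  # histogram of pixel values
    cdf = hist.cumsum() / img.size                  # cumulative distribution function
    sk = np.uint8(255 * cdf)                        # 256-entry lookup table
    return sk[img]                                  # map every pixel through the table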

OpenCV/python: How to change image pixels' values using a formula?

I'm trying to stretch an image's histogram using a logarithmic transformation. Basically, I am applying a log operation to each pixel's intensity. When I'm trying to change image's value in each pixel, the new values are not saved but the histogram looks OK. Also, the maximum value is not correct. This is my code:
import cv2
import numpy as np
import math
from matplotlib import pyplot as plt

img = cv2.imread('messi.jpg', 0)
img2 = img
for i in range(0, img2.shape[0]-1):
    for j in range(0, img2.shape[1]-1):
        if (math.log(1+img2[i,j], 2)) < 0:
            img2[i,j] = 0
        else:
            img2[i,j] = np.int(math.log(1+img2[i,j], 2))
            print (np.int(math.log(1+img2[i,j], 2)))

print (img2.ravel().max())
cv2.imshow('LSP', img2)
cv2.waitKey(0)
fig = plt.gcf()
fig.canvas.set_window_title('LSP histogram')
plt.hist(img2.ravel(), 256, [0,256]); plt.show()
img3 = img2
B = np.int(img3.max())
A = np.int(img3.min())
print ("Maximum intensity = ", B)
print ("minimum intensity = ", A)
This is also the histogram I get:
However, the maximum intensity shows 186! This isn't applying the proper logarithmic operation at all.
Any ideas?
The code you wrote performs a logarithmic transformation applied to the image intensities. The reason why you are getting such a high spurious intensity as the maximum is because your for loops are wrong. Specifically, your range is incorrect. range is exclusive of the ending interval, which means that you must go up to img.shape[0] and img.shape[1] respectively, and not img.shape[0]-1 or img.shape[1]-1. Therefore, you are missing the last row and last column of the image, and these don't get touched by logarithmic operation. The maximum that is reported is from one of these pixels in the last row or column that you didn't touch.
Once you correct this, you don't get those bad intensities anymore:
for i in range(0, img2.shape[0]):      # Change
    for j in range(0, img2.shape[1]):  # Change
        if (math.log(1+img2[i,j], 2)) < 0:
            img2[i,j] = 0
        else:
            img2[i,j] = np.int(math.log(1+img2[i,j], 2))
Doing that now gives us:
('Maximum intensity = ', 7)
('minimum intensity = ', 0)
However, what you're going to get now is a very dark image. The histogram that you have shown us illustrates that all of the image pixels are in the dark range... roughly between [0-7]. Because of that, the majority of your image is going to be dark if you use uint8 as the data type for visualization. Take note that I searched for the Lionel Messi image that's part of the OpenCV tutorials, and this is the image I found:
Source: https://opencv-python-tutroals.readthedocs.org/en/latest/_images/roi.jpg
Your code is converting this to grayscale, and that's fine for the purpose of your question. Now, using the above image, if you actually show what the histogram count looks like as well as what the intensities are per bin in the histogram, this is what we get for img2:
In [41]: np.unique(img2)
Out[41]: array([0, 1, 2, 3, 4, 5, 6, 7], dtype=uint8)
In [42]: np.bincount(img2.ravel())
Out[42]: array([ 86, 88, 394, 3159, 14841, 29765, 58012, 19655])
As you can see, the bulk of the image pixels are hovering between the [0-7] range, which is why everything looks black. If you want to see this better, scale the image by roughly 255 / 7 = 36 so we can see the image better:
img2 = 36*img2
cv2.imshow('LSP',img2)
cv2.waitKey(0)
We get this image:
I also get this histogram:
That personally looks very ugly... at least to me. As such, I would recommend that you choose a more meaningful image transformation if you want to stretch the histogram. In fact, the log operation compresses the dynamic range of the histogram. If you want to stretch the histogram, go the opposite way and try a power-law operation. Specifically, given an input intensity in, the output is defined as:
out = c*in^(p)
Here p is a power and c is a constant chosen so that the maximum intensity of the input gets mapped to the same maximum intensity in the output, and nothing larger. That can be done by calculating c so that:
c = (img2.max()) / (img2.max()**p)
... where p is the power you want. In addition, the transformation via power-law can be explained with this nice diagram:
Source: http://www.nptel.ac.in/courses/117104069/chapter_8/8_14.html
Basically, powers that are less than 1 perform an intensity expansion where darker intensities get pushed towards the lighter side. Similarly, powers that are greater than 1 perform an intensity compression where lighter intensities get pushed to the darker side. In your case, you want to expand the histogram, and so you want the first option. Specifically, try making the intensities that are smaller go towards the larger range. This can be done by choosing a power that's smaller than 1... try 0.5 for example.
You'd modify your code so that it is like this:
img2 = img2.astype(np.float64)  # Cast to float
c = (img2.max()) / (img2.max()**(0.5))
for i in range(0, img2.shape[0]):
    for j in range(0, img2.shape[1]):
        img2[i,j] = int(c*img2[i,j]**(0.5))

# Cast back to uint8 for display
img2 = img2.astype(np.uint8)
Doing that, I get this image:
I also get this histogram:
Minor Note
If I can suggest something in terms of efficiency, I wouldn't recommend that you loop through the entire image and set each pixel individually... that's not how numpy arrays are supposed to be used. You can achieve what you want vectorized in a single line of code.
For your original code, use np.log2 rather than math.log with base 2, since np.log2 operates on whole numpy arrays:
import cv2
import numpy as np
from matplotlib import pyplot as plt

# Your code
img = cv2.imread('messi.jpg', 0)

# New code
img2 = np.log2(1 + img.astype(np.float64)).astype(np.uint8)

# Back to your code
img2 = 36*img2  # Edit from before
cv2.imshow('LSP', img2)
cv2.waitKey(0)
fig = plt.gcf()
fig.canvas.set_window_title('LSP histogram')
plt.hist(img2.ravel(), 256, [0,256]); plt.show()
img3 = img2
B = int(img3.max())
A = int(img3.min())
print ("Maximum intensity = ", B)
print ("minimum intensity = ", A)
cv2.destroyAllWindows()  # Don't forget this
Similarly, if you want to apply a power-law transformation, it's very simple:
import cv2
import numpy as np
from matplotlib import pyplot as plt

# Your code
img = cv2.imread('messi.jpg', 0)

# New code
c = (img.max()) / (img.max()**(0.5))
img2 = (c*img.astype(np.float64)**(0.5)).astype(np.uint8)
#... rest of code as before

How can I prevent Numpy/ SciPy gaussian blur from converting image to grey scale?

I want to perform a Gaussian blur on an image, but I don't want it to be converted to grayscale. Is there any way to perform this operation and keep the color?
from scipy import misc, ndimage
import numpy as np

a = misc.imread('A.jpg')
# A retains its color
misc.imsave('color.jpg', a)
# A_G_Blur gets converted to grey scale, I want to prevent this
a_g_blure = ndimage.uniform_filter(a, size=11)
# I want it to keep its color
misc.imsave('now_grey.jpg', a_g_blure)
a is a 3-d array with shape (M, N, 3). The problem is that ndimage.uniform_filter(a, size=11) applies a filter with length 11 to each dimension of a, including the third axis that holds the color channels. When you apply a filter with length 11 to an axis with length 3, the resulting values are all pretty close to the average of the three values, so you get something pretty close to grayscale. (Depending on the image, you might have some color left.)
What you actually want is to apply a 2-d filter to each color channel separately. You can do this by giving a tuple as the size argument, using a size of 1 for the last axis:
a_g_blure = ndimage.uniform_filter(a, size=(11, 11, 1))
Note: uniform_filter is not a Gaussian blur. For that, you would use scipy.ndimage.gaussian_filter. You might also be interested in the filters provided by scikit-image. In particular, see skimage.filters.gaussian_filter.
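To get an actual Gaussian blur while keeping the colors, the same per-axis trick works with scipy.ndimage.gaussian_filter (a sketch reusing a from the question; a sigma of 0 on the last axis means no smoothing across the color channels):
from scipy import ndimage

blurred = ndimage.gaussian_filter(a, sigma=(5, 5, 0))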
For a gaussian blur, I recommend using skimage.filters.gaussian_filter.
from skimage.io import imread
from skimage.filters import gaussian_filter

sigma = 5  # blur radius
img = imread('path/to/img')
# this mixes the color channels and returns a grayscale-looking result
grayscale_blur = gaussian_filter(img, sigma=sigma)
# passing multichannel=True blurs each channel separately and keeps the colors
color_blur = gaussian_filter(img, sigma=sigma, multichannel=True)
(In newer scikit-image releases this function is skimage.filters.gaussian, and the multichannel flag has been replaced by channel_axis.)
