I have 10 greyscale brain MRI scans from BrainWeb. They are stored as a 4d numpy array, brains, with shape (10, 181, 217, 181). Each of the 10 brains is made up of 181 slices along the z-plane (going through the top of the head to the neck) where each slice is 181 pixels by 217 pixels in the x (ear to ear) and y (eyes to back of head) planes respectively.
All of the brains have dtype('float64'). The maximum pixel intensity across all brains is ~1328 and the minimum is ~0. For example, for the first brain, brains[0].max() gives 1328.338086605072 and brains[0].min() gives 0.0003886114541273855. Below is a plot of a slice of brains[0]:
I want to binarize all these brain images by rescaling the pixel intensities from [0, 1328] to {0, 1}. Is my method correct?
I do this by first normalising the pixel intensities to [0, 1]:
normalized_brains = brains/1328
And then by using the binomial distribution to binarize each pixel:
binarized_brains = np.random.binomial(1, (normalized_brains))
The plotted result looks correct:
A 0 pixel intensity represents black (background) and 1 pixel intensity represents white (brain).
I experimented with another method to normalise an image from this post, but it gave me just a black image. This is because np.finfo(np.float64).max is 1.7976931348623157e+308, so the normalization step
normalized_brains = brains/1.7976931348623157e+308
just returned an array of (effectively) zeros, which in the binarization step also led to an array of zeros.
Am I binarising my images using a correct method?
Your method of converting the image to a binary image basically amounts to random dithering, which is a poor method of creating the illusion of grey values on a binary medium. Old-fashioned print is a binary medium, and over the centuries printers have fine-tuned methods for representing grey-value photographs in print. This process is called halftoning, and it is shaped in part by properties of ink on paper that we do not have to deal with in binary images.
So what methods have people come up with outside of print? Ordered dithering (mostly Bayer matrix), and error diffusion dithering. Read more about dithering on Wikipedia. I wrote a blog post showing how to implement all of these methods in MATLAB some years ago.
I would recommend you use error diffusion dithering for your particular application. Here is some code in MATLAB (taken from my blog post linked above) for the Floyd-Steinberg algorithm; I hope that you can translate this to Python:
img = imread('https://i.stack.imgur.com/d5E9i.png');
img = img(:,:,1);
out = double(img);
sz = size(out);
for ii=1:sz(1)
   for jj=1:sz(2)
      old = out(ii,jj);
      %new = 255*(old >= 128); % Original Floyd-Steinberg
      new = 255*(old >= 128+(rand-0.5)*100); % Simple improvement
      out(ii,jj) = new;
      err = new-old;
      if jj<sz(2)
         % right
         out(ii  ,jj+1) = out(ii  ,jj+1)-err*(7/16);
      end
      if ii<sz(1)
         if jj<sz(2)
            % right-down
            out(ii+1,jj+1) = out(ii+1,jj+1)-err*(1/16);
         end
         % down
         out(ii+1,jj  ) = out(ii+1,jj  )-err*(5/16);
         if jj>1
            % left-down
            out(ii+1,jj-1) = out(ii+1,jj-1)-err*(3/16);
         end
      end
   end
end
imshow(out)
Resampling the image before applying the dithering greatly improves the results:
img = imresize(img,4);
% (repeat code above)
imshow(out)
NOTE that the above process expects the input to be in the range [0,255]. It is easy to adapt to a different range, say [0,1328] or [0,1], but it is also easy to scale your images to the [0,255] range.
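If it helps, here is a rough (untested) Python/NumPy translation of the MATLAB loop above, using the original thresholding line rather than the randomized variant; it assumes a 2-D array scaled to [0, 255] and returns 0/1 values:

import numpy as np

def floyd_steinberg(img):
    # Floyd-Steinberg error diffusion; expects a 2-D array scaled to [0, 255]
    out = np.asarray(img, dtype=np.float64).copy()
    rows, cols = out.shape
    for i in range(rows):
        for j in range(cols):
            old = out[i, j]
            new = 255.0 if old >= 128 else 0.0         # original Floyd-Steinberg threshold
            out[i, j] = new
            err = new - old
            if j + 1 < cols:
                out[i, j + 1] -= err * 7 / 16          # right
            if i + 1 < rows:
                if j + 1 < cols:
                    out[i + 1, j + 1] -= err * 1 / 16  # right-down
                out[i + 1, j] -= err * 5 / 16          # down
                if j > 0:
                    out[i + 1, j - 1] -= err * 3 / 16  # left-down
    return (out > 0).astype(np.uint8)                  # binary 0/1 output

# one possible way to apply it per slice of the first brain (scaling by the ~1328 maximum):
# binarized = np.stack([floyd_steinberg(s * 255.0 / 1328) for s in brains[0]])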
Have you tried a threshold on the image?
This is a common way to binarize images, rather than trying to apply a random binomial distribution. You could try something like:
binarized_brains = (brains > threshold_value).astype(int)
which returns an array of 0s and 1s according to whether the image value was less than or greater than your chosen threshold value.
You will have to experiment with the threshold value to find the best one for your images, but the image does not need to be normalized first.
If this doesn't work well, you can also experiment with the thresholding options available in the skimage filters package.
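For example, here is a minimal sketch using Otsu's method from skimage.filters; the per-brain loop is just one way you might apply it to your 4-D array (brains is the array from your question):

import numpy as np
from skimage.filters import threshold_otsu

binarized_brains = np.zeros(brains.shape, dtype=np.uint8)
for i in range(brains.shape[0]):
    t = threshold_otsu(brains[i])                      # one automatic threshold per brain
    binarized_brains[i] = (brains[i] > t).astype(np.uint8)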
It is easy in OpenCV. As mentioned, a very common way is to define a threshold, but your result looks like you are assigning random values to your intensities instead of thresholding them.
import cv2

im = cv2.imread('brain.png', cv2.IMREAD_GRAYSCALE)

# Option 1: let Otsu's method choose the threshold automatically
(th, brain_bw) = cv2.threshold(im, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

# Option 2: define the threshold yourself
th = ...  # DEFINE HERE
(th, im_bin) = cv2.threshold(im, th, 255, cv2.THRESH_BINARY)

cv2.imwrite('binBrain.png', brain_bw)
Input image: brain; output image: binBrain.
I am really new to opencv and a beginner to python.
I have this image:
I want to somehow apply proper thresholding to keep nothing but the 6 digits.
The bigger picture is that I intend to try to perform manual OCR on the image for each digit separately, using the k-nearest neighbours algorithm on a per-digit level (kNearest.findNearest).
The problem is that I cannot clean up the digits sufficiently, especially the '7' digit which has this blue-ish watermark passing through it.
The steps I have tried so far are the following:
I am reading the image from disk
# IMREAD_UNCHANGED is -1
image = cv2.imread(sys.argv[1], cv2.IMREAD_UNCHANGED)
Then I'm keeping only the blue channel to get rid of the blue watermark around digit '7', effectively converting it to a single channel image
image = image[:,:,0]
# opened with -1, which means "as is",
# so the blue channel is the first in BGR
Then I'm multiplying it a bit to increase contrast between the digits and the background:
image = cv2.multiply(image, 1.5)
Finally I perform Binary+Otsu thresholding:
_,thressed1 = cv2.threshold(image,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
As you can see the end result is pretty good except for the digit '7' which has kept a lot of noise.
How can I improve the end result? Please include example result images where possible; they are easier to understand than code snippets alone.
You can try applying medianBlur to the gray image with different kernel sizes (such as 3 and 51), divide the two blurred results, and threshold the quotient. Something like this:
#!/usr/bin/python3
# 2018/09/23 17:29 (CST)
# (Happy Mid-Autumn Festival)
import cv2
import numpy as np
fname = "color.png"
bgray = cv2.imread(fname)[...,0]
blured1 = cv2.medianBlur(bgray,3)
blured2 = cv2.medianBlur(bgray,51)
divided = np.ma.divide(blured1, blured2).data
normed = np.uint8(255*divided/divided.max())
th, threshed = cv2.threshold(normed, 100, 255, cv2.THRESH_OTSU)
dst = np.vstack((bgray, blured1, blured2, normed, threshed))
cv2.imwrite("dst.png", dst)
The result:
Why not just keep values in the image that are above a certain threshold?
Like this:
import cv2
import numpy as np
img = cv2.imread("./a.png")[:,:,0] # the last readable image
new_img = []
for line in img:
    new_img.append(np.array(list(map(lambda x: 0 if x < 100 else 255, line))))
new_img = np.array(list(map(lambda x: np.array(x), new_img)))
cv2.imwrite("./b.png", new_img)
Looks great:
You could probably play with the threshold even more and get better results.
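For what it's worth, the same per-pixel rule can be written as a single vectorized NumPy expression (a small sketch using the same 100/255 values), which avoids the Python-level loops:

import cv2
import numpy as np

img = cv2.imread("./a.png")[:, :, 0]                    # blue channel, as above
new_img = np.where(img < 100, 0, 255).astype(np.uint8)  # same rule, vectorized
cv2.imwrite("./b.png", new_img)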
It doesn't seem easy to completely remove the annoying stamp.
What you can do is flatten the background intensity by
computing a lowpass image (Gaussian filter, morphological closing); the filter size should be a little larger than the character size;
dividing the original image by the lowpass image.
Then you can use Otsu.
As you see, the result isn't perfect.
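A rough OpenCV sketch of that idea (the file name and the 51x51 Gaussian kernel are assumptions; the kernel just needs to be a little larger than the characters):

import cv2

gray = cv2.imread('digits.png')[:, :, 0]            # blue channel; file name is a placeholder
lowpass = cv2.GaussianBlur(gray, (51, 51), 0)       # background (lowpass) estimate
flattened = cv2.divide(gray, lowpass, scale=255)    # divide the original by the lowpass image
_, binary = cv2.threshold(flattened, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite('flattened_otsu.png', binary)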
I tried a slightly different approach than Yves, on the blue channel:
Apply median filter (r=2):
Use Edge detection (e.g. Sobel operator):
Automatic thresholding (Otsu)
Closing of the image
This approach seems to make the output a little less noisy. However, one has to address the holes in the numbers. This can be done by detecting black contours which are completely surrounded by white pixels and simply filling them with white.
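Roughly, that pipeline could look like this in OpenCV (the file name, kernel sizes and structuring element are assumptions; the Sobel magnitude is converted back to 8-bit before Otsu):

import cv2
import numpy as np

blue = cv2.imread('digits.png')[:, :, 0]             # blue channel; file name is a placeholder
blurred = cv2.medianBlur(blue, 5)                    # median filter, roughly r = 2
sx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0)
sy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1)
edges = cv2.convertScaleAbs(np.hypot(sx, sy))        # edge magnitude as 8-bit
_, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
cv2.imwrite('digits_closed.png', closed)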
I'm trying to emulate the linear invert function in GIMP with OpenCV and Python. I can't find much information on how that function is implemented, besides it being applied in linear light. Since I read that OpenCV imports linear BGR images, I tried a normal inversion on the OpenCV image, but I can only replicate GIMP's common (non-linear) inversion method.
Inversion function:
def negative(image):
    img_negative = 255 - image
    return img_negative
Original
Linear Inverted (Negative) Image on GIMP
Inverted (Negative) Image on GIMP
Any insight would be appreciated.
It took a little bit of trial and error, but in addition to inverting the image, you also have to do some scaling and translation.
What I did specifically: in addition to inverting the image, I truncated any values beyond intensity 153 in every channel, saturating them to 153. Using this intermediate output, I then shifted the range so that the lowest value maps to 102 and the highest value maps to 255. This is simply done by adding 102 to every value in the intermediate output.
When I did that, I got a similar image to the one you're after.
In other words:
import cv2
import numpy as np
im = cv2.imread('input.png') # Your image goes here
im_neg = 255 - im
im_neg[im_neg >= 153] = 153 # Step #1
im_neg = im_neg + 102 # Step #2
cv2.imshow('Output', np.hstack((im, im_neg)))
cv2.waitKey(0)
cv2.destroyWindow('Output')
Thankfully, the minimum and maximum values are 0 and 255 which makes this process simpler. I get this output and note that I'm concatenating both images together:
Take note your desired image is stored in im_neg. If you want to see just the image by itself:
Compared to yours:
It's not exactly what you see in the output image you provide, especially because there seems to be some noise around the coloured squares, but this is the closest I could get it to and one could argue that the result I produced is better perceptually.
Hope this helps!
GIMP as of 2.10 works in a linear color space, and if you look at the original source code for the function, it's just a bitwise NOT. So here's what the code should look like in opencv-python:
import numpy as np
import cv2
#https://www.pyimagesearch.com/2015/10/05/opencv-gamma-correction/
def adjust_gamma(image, gamma=1.0):
    # build a lookup table mapping the pixel values [0, 255] to
    # their adjusted gamma values
    invGamma = 1.0 / gamma
    table = np.array([((i / 255.0) ** invGamma) * 255
                      for i in np.arange(0, 256)]).astype("uint8")
    # apply gamma correction using the lookup table
    return cv2.LUT(image, table)

def invert_linear(img):
    x = adjust_gamma(img, 1/2.2)
    x = cv2.bitwise_not(x)
    y = adjust_gamma(x, 2.2)
    return y
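A short usage example (the file names are placeholders; note that the 2.2 gamma above is only an approximation of the sRGB transfer curve GIMP uses):

img = cv2.imread('input.png')            # any 8-bit BGR image
inverted = invert_linear(img)
cv2.imwrite('linear_inverted.png', inverted)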
I'm working on a project to measure and visualize image similarity. The images in my dataset come from photographs of images in books, some of which have very high or low exposure rates. For example, the images below come from two different books; the one on the top is an over-exposed reprint of the one on the bottom, wherein the exposure looks good:
I'd like to normalize each image's exposure in Python. I thought I could do so with the following naive approach, which attempts to center each pixel value between 0 and 255:
from scipy.ndimage import imread
import sys

def normalize(img):
    '''
    Normalize the exposure of an image.
    #args:
      {numpy.ndarray} img: an array of image pixels with shape:
        (height, width)
    #returns:
      {numpy.ndarray} an image with shape of `img` wherein
        all values are normalized such that the min=0 and max=255
    '''
    _min = img.min()
    _max = img.max()
    return (img - _min) * 255 / (_max - _min)

img = imread(sys.argv[1])
normalized = normalize(img)
Only after running this did I realize that this normalization will only help images whose lightest value is less than 255 or whose darkest value is greater than 0.
Is there a straightforward way to normalize the exposure of an image such as the top image above? I'd be grateful for any thoughts others can offer on this question.
Histogram equalisation works surprisingly well for this kind of thing. It's usually better for photographic images, but it's helpful even on line art, as long as there are some non-black/white pixels.
It works well for colour images too: split the bands up, equalize each one separately, and recombine.
I tried on your sample image:
Using libvips:
$ vips hist_equal sample.jpg x.jpg
Or from Python with pyvips:
x = pyvips.Image.new_from_file("sample.jpg")
x = x.hist_equal()
x.write_to_file("x.jpg")
It's very hard to say if it will work for you without seeing a larger sample of your images, but you may find an "auto-gamma" useful. There is one built into ImageMagick and the description - so that you can calculate it yourself - is:
Automagically adjust gamma level of image. This calculates the mean values of an image, then applies a calculated -gamma adjustment so that the mean color in the image will get a value of 50%. This means that any solid 'gray' image becomes 50% gray. This works well for real-life images with little or no extreme dark and light areas, but tends to fail for images with large amounts of bright sky or dark shadows. It also does not work well for diagrams or cartoon-like images.
You can try it out yourself on the command line very simply before you go and spend a lot of time coding something that may not work:
convert Tribunal.jpg -auto-gamma result.png
You can do -auto-level as per your own code beforehand, and a thousand other things too:
convert Tribunal.jpg -auto-level -auto-gamma result.png
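If you would rather stay in Python, the idea described above can be sketched in a few lines of NumPy; this is only an approximation of the effect, not ImageMagick's exact -auto-gamma implementation, and it assumes an 8-bit image whose mean lies strictly between 0 and 1 after scaling:

import numpy as np

def auto_gamma(img):
    # choose a gamma so that the image mean maps to 50% grey
    norm = img / 255.0
    gamma = np.log(0.5) / np.log(norm.mean())   # solves mean ** gamma == 0.5
    return np.uint8(np.clip(norm ** gamma, 0, 1) * 255)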
I ended up using a numpy implementation of the histogram normalization method #user894763 pointed out. Just save the code below as normalize.py, then you can call:
python normalize.py cats.jpg
Script:
import numpy as np
from scipy.misc import imsave
from scipy.ndimage import imread
import sys

def get_histogram(img):
    '''
    calculate the normalized histogram of an image
    '''
    height, width = img.shape
    hist = [0.0] * 256
    for i in range(height):
        for j in range(width):
            hist[img[i, j]] += 1
    return np.array(hist) / (height * width)

def get_cumulative_sums(hist):
    '''
    find the cumulative sum of a numpy array
    '''
    return [sum(hist[:i+1]) for i in range(len(hist))]

def normalize_histogram(img):
    # calculate the image histogram
    hist = get_histogram(img)
    # get the cumulative distribution function
    cdf = np.array(get_cumulative_sums(hist))
    # determine the normalization values for each unit of the cdf
    sk = np.uint8(255 * cdf)
    # normalize the normalization values
    height, width = img.shape
    Y = np.zeros_like(img)
    for i in range(0, height):
        for j in range(0, width):
            Y[i, j] = sk[img[i, j]]
    # optionally, get the new histogram for comparison
    new_hist = get_histogram(Y)
    # return the transformed image
    return Y

img = imread(sys.argv[1])
normalized = normalize_histogram(img)
imsave(sys.argv[1] + '-normalized.jpg', normalized)
Output:
I have an image, using steganography I want to save the data in border pixels only.
In other words, I want to save data only in the least significant bits (LSB) of the border pixels of an image.
Is there any way to get border pixels to store data( max 15 characters text) in the border pixels?
Please help me out.
OBTAINING BORDER PIXELS:
Masking operations are one of many ways to obtain the border pixels of an image. The code would be as follows:
import cv2
import numpy as np

a = cv2.imread('cal1.jpg')
bw = 20  # width of border required
mask = np.ones(a.shape[:2], dtype="uint8")
cv2.rectangle(mask, (bw, bw), (a.shape[1]-bw, a.shape[0]-bw), 0, -1)
output = cv2.bitwise_and(a, a, mask=mask)
cv2.imshow('out', output)
cv2.waitKey(5000)
After I get an array of ones with the same dimensions as the input image, I use the cv2.rectangle function to draw a rectangle of zeros. The first argument is the image you want to draw on, the second argument is the start (x,y) point, and the third argument is the end (x,y) point. The fourth argument is the color, and the fifth ('-1' here) is the thickness of the rectangle drawn (-1 fills the rectangle). You can find the documentation for the function here.
Now that we have our mask, you can use the cv2.bitwise_and function (documentation) to perform an AND operation on the pixels. Basically, pixels that are ANDed with '1' pixels in the mask retain their values, while pixels that are ANDed with '0' pixels in the mask are set to 0. This way you will have the output as follows:
The input image was:
You have the border pixels now!
Using LSB planes to store your info is not a good idea. It makes sense when you think about it. A simple lossy compression would affect most of your hidden data. Saving your image as JPEG would result in loss of info or severe affected info. If you want to still try LSB, look into bit-plane slicing. Through bit-plane slicing, you basically obtain bit planes (from MSB to LSB) of the image. (image from researchgate.net)
I have done it in Matlab but am not quite sure about doing it in Python. In Matlab,
the function bitget(image, 1) returns the LSB of the image. I found a question on bit-plane slicing using Python here. Though unanswered, you might want to look into the posted code.
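In Python/NumPy the equivalent of bitget is plain bitwise arithmetic; a minimal sketch for reading and writing the LSB plane (the file names and the payload bit are just examples):

import cv2

img = cv2.imread('cal1.jpg', cv2.IMREAD_GRAYSCALE)
lsb_plane = img & 1                   # equivalent of Matlab's bitget(img, 1)
bit_to_hide = 1                       # example payload bit
stego = (img & 0xFE) | bit_to_hide    # clear the LSB, then write the new bit
cv2.imwrite('stego.png', stego)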
To access the border pixels and enter data into them:
The shape of an image is accessed by t = img.shape, which returns a tuple of the number of rows, columns, and channels. The component index selects a colour channel (OpenCV stores pixels as BGR). int(r[0]) is the value being stored, where r holds the data you want to embed.
import cv2
img = cv2.imread('xyz.png')
t = img.shape
print(t)
component = 2
r = ['10', '20', '30', '40']  # example payload; r was not defined in the original code
img.itemset((0,0,component),int(r[0]))
img.itemset((0,t[1]-1,component),int(r[1]))
img.itemset((t[0]-1,0,component),int(r[2]))
img.itemset((t[0]-1,t[1]-1,component),int(r[3]))
print(img.item(0,0,component))
print(img.item(0,t[1]-1,component))
print(img.item(t[0]-1,0,component))
print(img.item(t[0]-1,t[1]-1,component))
cv2.imwrite('output.png',img)
I can only ever find examples in C/C++ and they never seem to map well to the OpenCV API. I'm loading video frames (both from files and from a webcam) and want to reduce them to 16 color, but mapped to a 24-bit RGB color-space (this is what my output requires - a giant LED display).
I read the data like this:
ret, frame = self._vid.read()
image = cv2.cvtColor(frame, cv2.COLOR_RGB2BGRA)
I did find the below python example, but cannot figure out how to map that to the type of output data I need:
import numpy as np
import cv2
img = cv2.imread('home.jpg')
Z = img.reshape((-1,3))
# convert to np.float32
Z = np.float32(Z)
# define criteria, number of clusters(K) and apply kmeans()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 8
ret,label,center=cv2.kmeans(Z,K,None,criteria,10,cv2.KMEANS_RANDOM_CENTERS)
# Now convert back into uint8, and make original image
center = np.uint8(center)
res = center[label.flatten()]
res2 = res.reshape((img.shape))
cv2.imshow('res2',res2)
cv2.waitKey(0)
cv2.destroyAllWindows()
That obviously works for the OpenCV image viewer, but trying to do the same errors out in my output code, since I need an RGB or RGBA format. My output works like this:
for y in range(self.height):
    for x in range(self.width):
        self._led.set(x,y,tuple(image[y,x][0:3]))
Each color is represented as an (r,g,b) tuple.
Any thoughts on how to make this work?
I think the following could be faster than kmeans, especially with K = 16.
Convert the color image to gray
Contrast stretch this gray image so that the resulting gray levels are between 0 and 255 (use normalize with NORM_MINMAX)
Calculate the histogram of this stretched gray image using 16 as the number of bins (calcHist)
Now you can modify these 16 values of the histogram. For example you can sort and assign ranks (say 0 to 15), or assign 16 uniformly distributed values between 0 and 255 (I think these could give you a consistent output for a video)
Backproject this histogram onto the stretched gray image (calcBackProject)
Apply a color-map to this backprojected image (you might want to scale the backprojected image before applying a colormap using applyColorMap). A rough sketch of the whole pipeline is below.
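The uniform output levels and the JET colormap in this sketch are just example choices; calcBackProject here simply maps each pixel to the value stored in its histogram bin:

import cv2
import numpy as np

frame = cv2.imread('home.jpg')                                # or a frame from your video capture
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
stretched = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)
hist = cv2.calcHist([stretched], [0], None, [16], [0, 256])   # 16 bins
hist[:] = np.linspace(0, 255, 16).reshape(-1, 1)              # 16 uniformly spaced output levels
quantized = cv2.calcBackProject([stretched], [0], hist, [0, 256], scale=1)
colored = cv2.applyColorMap(quantized, cv2.COLORMAP_JET)      # or index into your own 16-colour LUT
cv2.imwrite('quantized16.png', colored)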
Tip for kmeans:
If you are using kmeans for video, you can use the cluster centers from the previous frame as the initial positions in kmeans for the current frame. That way, it'll take less time to converge, so kmeans in the subsequent frames will most probably run faster.
You can speed up your processing by applying the k-means on a downscaled version of your image. This will give you the cluster centroids. You can then quantize each pixel of the original image by picking the closest centroid.
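A sketch of that idea (the 0.25 scale factor and K = 16 are assumptions; for very large images you may want to compute the distances in chunks to limit memory use):

import cv2
import numpy as np

img = cv2.imread('home.jpg')
small = cv2.resize(img, None, fx=0.25, fy=0.25)               # cluster on a downscaled copy
Z = np.float32(small.reshape(-1, 3))
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 16
_, _, centers = cv2.kmeans(Z, K, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)

# quantize every full-resolution pixel to its nearest centroid
pixels = np.float32(img.reshape(-1, 3))
dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
labels = dists.argmin(axis=1)
quantized = np.uint8(centers)[labels].reshape(img.shape)
cv2.imwrite('quantized.png', quantized)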