Add diff of images into one image (Linux/Python) - python

I'm looking for a way to blend only the differences between images into one image. I'm looking for a Linux command or a way to achieve this with Python.
Example:
Source images:
The result should be:
Another use case:
http://3.bp.blogspot.com/-h3yuVc0hyvc/ToqQDE0Bf4I/AAAAAAAAGj0/HON-gM_9PhU/s1600/JayBumpOllieStichedFinishedRS.jpg
Thanks!!
Vince

It would make sense to start from an image that contains only the background and compare each frame with it. The background can be computed as the median over the whole sequence. If we assume the background median image is a0.jpg and the following three frames with the dots are a1.jpg, a2.jpg and a3.jpg, then they can be merged using the compare_images function of scikit-image, modifying only the pixels where a change was encountered. Note that due to JPEG compression there is a tolerance threshold (th) set to 0.1. You can play with that value (between 0 and 1) for more or less sensitivity.
The following script should do something like that:
import numpy as np
import skimage.io as io
from skimage.util import compare_images

im0 = io.imread('a0.jpg')  # median of the source images (background)
frames = [io.imread('a%d.jpg' % i) for i in (1, 2, 3)]  # source images 1-3

im_all = np.copy(im0)
th = 0.1  # tolerance threshold for JPEG compression artefacts

for i, im in enumerate(frames, start=1):
    # difference between the frame and the background
    d = compare_images(im, im0, method='diff')
    d = np.max(np.abs(d), -1)  # collapse the colour channels
    # copy only the changed pixels into the result
    im_all[d > th] = im[d > th]
    # save the change mask (converted to 8-bit so it can be written as JPEG)
    io.imsave("d%d.jpg" % i, ((d > th) * 255).astype(np.uint8))

io.imsave("im_all.jpg", im_all)

This is not exactly what I asked for, but it does the job well enough for my needs:
convert 1.jpg 2.jpg 3.jpg -evaluate-sequence max evalresult.png
With the example image with the clouds it doesn't work very well (because the clouds are white), but in another context it is great (when the differences are brighter than the background).

Related

Python: Normalize image exposure

I'm working on a project to measure and visualize image similarity. The images in my dataset come from photographs of images in books, some of which have very high or low exposure levels. For example, the images below come from two different books; the one on top is an over-exposed reprint of the one on the bottom, in which the exposure looks good:
I'd like to normalize each image's exposure in Python. I thought I could do so with the following naive approach, which attempts to scale each pixel value to between 0 and 255:
from scipy.ndimage import imread  # removed in newer SciPy; imageio.imread is a drop-in replacement
import sys

def normalize(img):
    '''
    Normalize the exposure of an image.
    #args:
      {numpy.ndarray} img: an array of image pixels with shape:
        (height, width)
    #returns:
      {numpy.ndarray} an image with shape of `img` wherein
        all values are normalized such that the min=0 and max=255
    '''
    _min = img.min()
    _max = img.max()
    # parentheses so the image is first shifted to zero, then scaled
    return (img - _min) * 255 / (_max - _min)

img = imread(sys.argv[1])
normalized = normalize(img)
Only after running this did I realize that this normalization will only help images whose lightest value is less than 255 or whose darkest value is greater than 0.
Is there a straightforward way to normalize the exposure of an image such as the top image above? I'd be grateful for any thoughts others can offer on this question.
Histogram equalisation works surprisingly well for this kind of thing. It's usually better for photographic images, but it's helpful even on line art, as long as there are some non-black/white pixels.
It works well for colour images too: split the bands up, equalize each one separately, and recombine.
I tried it on your sample image:
Using libvips:
$ vips hist_equal sample.jpg x.jpg
Or from Python with pyvips:
import pyvips

x = pyvips.Image.new_from_file("sample.jpg")
x = x.hist_equal()
x.write_to_file("x.jpg")
It's very hard to say whether it will work for you without seeing a larger sample of your images, but you may find an "auto-gamma" useful. There is one built into ImageMagick, and the description - so that you can calculate it yourself - is:
Automagically adjust gamma level of image.
This calculates the mean values of an image, then applies a calculated
-gamma adjustment so that the mean color in the image will get a value of 50%.
This means that any solid 'gray' image becomes 50% gray.
This works well for real-life images with little or no extreme dark
and light areas, but tend to fail for images with large amounts of
bright sky or dark shadows. It also does not work well for diagrams or
cartoon like images.
You can try it out yourself on the command line very simply before you go and spend a lot of time coding something that may not work:
convert Tribunal.jpg -auto-gamma result.png
You can do -auto-level as per your own code beforehand, and a thousand other things too:
convert Tribunal.jpg -auto-level -auto-gamma result.png
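If you want the same adjustment in Python rather than shelling out to convert, a rough numpy sketch of the idea (assuming an 8-bit image and imageio for I/O; the gamma is chosen so the mean maps to mid-grey):

import imageio
import numpy as np

img = imageio.imread('Tribunal.jpg').astype(np.float64) / 255.0
gamma = np.log(img.mean()) / np.log(0.5)       # exponent that sends the mean to 50% grey
out = np.clip(img ** (1.0 / gamma), 0.0, 1.0)  # equivalent of applying -gamma <value>
imageio.imwrite('result.png', (out * 255).astype(np.uint8))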
I ended up using a numpy implementation of the histogram normalization method #user894763 pointed out. Just save the code below as normalize.py; then you can call:
python normalize.py cats.jpg
Script:
import numpy as np
from scipy.misc import imsave     # removed in newer SciPy; imageio.imwrite is a drop-in replacement
from scipy.ndimage import imread  # likewise, imageio.imread can be used instead
import sys

def get_histogram(img):
    '''
    calculate the normalized histogram of an image
    '''
    height, width = img.shape
    hist = [0.0] * 256
    for i in range(height):
        for j in range(width):
            hist[img[i, j]] += 1
    return np.array(hist) / (height * width)

def get_cumulative_sums(hist):
    '''
    find the cumulative sum of a numpy array
    '''
    return [sum(hist[:i+1]) for i in range(len(hist))]

def normalize_histogram(img):
    # calculate the image histogram
    hist = get_histogram(img)
    # get the cumulative distribution function
    cdf = np.array(get_cumulative_sums(hist))
    # determine the normalization values for each unit of the cdf
    sk = np.uint8(255 * cdf)
    # normalize the normalization values
    height, width = img.shape
    Y = np.zeros_like(img)
    for i in range(0, height):
        for j in range(0, width):
            Y[i, j] = sk[img[i, j]]
    # optionally, get the new histogram for comparison
    new_hist = get_histogram(Y)
    # return the transformed image
    return Y

img = imread(sys.argv[1])
normalized = normalize_histogram(img)
imsave(sys.argv[1] + '-normalized.jpg', normalized)
Output:

Remove background of the image using opencv Python

I have two images, one with only the background and the other with background + a detectable object (in my case it's a car). Below are the images:
I am trying to remove the background such that I only have the car in the resulting image. Following is the code with which I am trying to get the desired results:
import numpy as np
import cv2
original_image = cv2.imread('IMG1.jpg', cv2.IMREAD_COLOR)
gray_original = cv2.cvtColor(original_image, cv2.COLOR_BGR2GRAY)
background_image = cv2.imread('IMG2.jpg', cv2.IMREAD_COLOR)
gray_background = cv2.cvtColor(background_image, cv2.COLOR_BGR2GRAY)
foreground = np.absolute(gray_original - gray_background)
foreground[foreground > 0] = 255
cv2.imshow('Original Image', foreground)
cv2.waitKey(0)
The resulting image by subtracting the two images is
Here is the problem. The expected resulting image should be a car only.
Also, if you take a close look at the two images, you'll see that they are not exactly the same; the camera moved a little, so the background has been disturbed a little. My question is: with these two images, how can I subtract the background? I do not want to use grabCut or the backgroundSubtractorMOG algorithm right now because I do not yet know what is going on inside those algorithms.
What I am trying to do is to get the following resulting image
Also, if possible, please guide me towards a general way of doing this, not only for this specific case, i.e. I have a background in one image and background+object in the second image. What could be the best possible way of doing this? Sorry for such a long question.
I solved your problem using OpenCV's watershed algorithm. You can find the theory and examples of watershed here.
First I selected several points (markers) to dictate where the object I want to keep is and where the background is. This step is manual and can vary a lot from image to image. Also, it requires some repetition until you get the desired result. I suggest using a tool to get the pixel coordinates.
Then I created an empty integer array of zeros, with the size of the car image. And then I assigned some values (1:background, [255,192,128,64]:car_parts) to pixels at marker positions.
NOTE: When I downloaded your image I had to crop it to get the one with the car. After cropping, the image has size of 400x601. This may not be what the size of the image you have, so the markers will be off.
Afterwards I used the watershed algorithm. The 1st input is your image and 2nd input is the marker image (zero everywhere except at marker positions). The result is shown in the image below.
I set all pixels with value greater than 1 to 255 (the car), and the rest (background) to zero. Then I dilated the obtained image with a 3x3 kernel to avoid losing information on the outline of the car. Finally, I used the dilated image as a mask for the original image, using the cv2.bitwise_and() function, and the result lies in the following image:
Here is my code:
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Load the image
img = cv2.imread("/path/to/image.png", 3)
# Create a blank image of zeros (same dimension as img)
# It should be grayscale (1 color channel)
marker = np.zeros_like(img[:,:,0]).astype(np.int32)
# This step is manual. The goal is to find the points
# which create the result we want. I suggest using a
# tool to get the pixel coordinates.
# Dictate the background and set the markers to 1
marker[204][95] = 1
marker[240][137] = 1
marker[245][444] = 1
marker[260][427] = 1
marker[257][378] = 1
marker[217][466] = 1
# Dictate the area of interest
# I used different values for each part of the car (for visibility)
marker[235][370] = 255 # car body
marker[135][294] = 64 # rooftop
marker[190][454] = 64 # rear light
marker[167][458] = 64 # rear wing
marker[205][103] = 128 # front bumper
# rear bumper
marker[225][456] = 128
marker[224][461] = 128
marker[216][461] = 128
# front wheel
marker[225][189] = 192
marker[240][147] = 192
# rear wheel
marker[258][409] = 192
marker[257][391] = 192
marker[254][421] = 192
# Now we have set the markers, we use the watershed
# algorithm to generate a marked image
marked = cv2.watershed(img, marker)
# Plot this one. If it does what we want, proceed;
# otherwise edit your markers and repeat
plt.imshow(marked, cmap='gray')
plt.show()
# Make the background black, and what we want to keep white
marked[marked == 1] = 0
marked[marked > 1] = 255
# Use a kernel to dilate the image, to not lose any detail on the outline
# I used a kernel of 3x3 pixels
kernel = np.ones((3,3),np.uint8)
dilation = cv2.dilate(marked.astype(np.float32), kernel, iterations = 1)
# Plot again to check whether the dilation is according to our needs
# If not, repeat by using a smaller/bigger kernel, or more/less iterations
plt.imshow(dilation, cmap='gray')
plt.show()
# Now apply the mask we created on the initial image
final_img = cv2.bitwise_and(img, img, mask=dilation.astype(np.uint8))
# cv2.imread reads the image as BGR, but matplotlib uses RGB
# BGR to RGB so we can plot the image with accurate colors
b, g, r = cv2.split(final_img)
final_img = cv2.merge([r, g, b])
# Plot the final result
plt.imshow(final_img)
plt.show()
If you have a lot of images you will probably need to create a tool to annotate the markers graphically, or even an algorithm to find markers automatically.
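As a very rough sketch of such an annotation tool (an assumption on my part, using matplotlib's ginput to collect click coordinates; the value assigned per click would still need to be chosen per region):

import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread("/path/to/image.png", 3)
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
clicks = plt.ginput(n=-1, timeout=0)   # click marker points; middle-click or Enter to finish
plt.close()

marker = np.zeros(img.shape[:2], dtype=np.int32)
for x, y in clicks:
    marker[int(y), int(x)] = 255       # e.g. 255 for the object; use 1 for background clicks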
The problem is that you're subtracting arrays of unsigned 8-bit integers. This operation wraps around when the result would be negative.
To demonstrate:
>>> import numpy as np
>>> a = np.array([[10,10]],dtype=np.uint8)
>>> b = np.array([[11,11]],dtype=np.uint8)
>>> a - b
array([[255, 255]], dtype=uint8)
Since you're using OpenCV, the simplest way to achieve your goal is to use cv2.absdiff().
>>> cv2.absdiff(a,b)
array([[1, 1]], dtype=uint8)
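In the question's setup that might look something like this (a sketch; the threshold value of 30 is just a guess):

import cv2

gray_original = cv2.cvtColor(cv2.imread('IMG1.jpg'), cv2.COLOR_BGR2GRAY)
gray_background = cv2.cvtColor(cv2.imread('IMG2.jpg'), cv2.COLOR_BGR2GRAY)

diff = cv2.absdiff(gray_original, gray_background)            # overflow-safe difference
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)     # keep only clear changes
foreground = cv2.bitwise_and(gray_original, gray_original, mask=mask)
cv2.imshow('Foreground', foreground)
cv2.waitKey(0)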
I recommend using OpenCV's grabcut algorithm. You first draw a few lines on the foreground and background, and keep doing this until your foreground is sufficiently separated from the background. It is covered here: https://docs.opencv.org/trunk/d8/d83/tutorial_py_grabcut.html
as well as in this video: https://www.youtube.com/watch?v=kAwxLTDDAwU
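For reference, a minimal grabCut sketch along the lines of that tutorial (the rectangle coordinates are hypothetical and would need to roughly frame the car):

import cv2
import numpy as np

img = cv2.imread('IMG1.jpg')
mask = np.zeros(img.shape[:2], np.uint8)
bgdModel = np.zeros((1, 65), np.float64)
fgdModel = np.zeros((1, 65), np.float64)

rect = (50, 50, 450, 290)  # hypothetical (x, y, w, h) box around the object
cv2.grabCut(img, mask, rect, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_RECT)

# definite/probable background becomes 0, everything else 1
mask2 = np.where((mask == 2) | (mask == 0), 0, 1).astype('uint8')
result = img * mask2[:, :, np.newaxis]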

Correlating two skeletonized images: Python

import cv2
import numpy as np
from PIL import Image
from skimage import morphology
from scipy import signal
img = cv2.imread('thin.jpg',0)
img1 = cv2.imread('thin1.jpg',0)
cv2.imshow('image1',img)
cv2.imshow('image2',img1)
ret,img = cv2.threshold(img,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
ret,img1 = cv2.threshold(img1,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
size = np.size(img)
size1 = np.size(img1)
skel = np.zeros(img.shape,np.uint8)
skel1 = np.zeros(img1.shape,np.uint8)
element = cv2.getStructuringElement(cv2.MORPH_CROSS,(3,3))
img = 255 - img
img1 = 255 - img1
img = cv2.dilate(img,element,iterations=8)
img1 = cv2.dilate(img1,element,iterations=8)
done = False
while not done:
    eroded = cv2.erode(img, element)
    eroded1 = cv2.erode(img1, element)
    temp = cv2.dilate(eroded, element)
    temp1 = cv2.dilate(eroded1, element)
    temp = cv2.subtract(img, temp)
    temp1 = cv2.subtract(img1, temp1)
    skel = cv2.bitwise_or(skel, temp)
    skel1 = cv2.bitwise_or(skel1, temp1)
    img = eroded.copy()
    img1 = eroded1.copy()
    zeros = size - cv2.countNonZero(img)
    if zeros == size:
        done = True
cv2.imshow('IMAGE',skel)
cv2.imshow('TEMPLATE',skel1)
cv2.imwrite("image.jpg",skel)
if cv2.waitKey(0) & 0xFF == ord('q'):
    cv2.destroyAllWindows()
This is the code I used to convert two grayscale images into two skeletonized images using binarization and thinning, and the result is obtained. Now, with these two skeletonized images, I want to compare them to see whether they match. How can I correlate them? Do we need to convert the skeletonized images into 2D arrays? Can anyone suggest a solution? Thanks in advance.
There are a number of ways you can compare the images to see if they match. The simplest is to do a pixelwise subtraction to create a new image and then sum the pixels in the new image. If they sum to zero you have an exact match. The larger the sum the worse the match.
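A minimal sketch of that, reusing skel and skel1 from the question (assuming they are the same size):

import cv2
import numpy as np

diff = cv2.absdiff(skel, skel1)
score = int(np.sum(diff))   # 0 means an exact match; larger values mean a worse match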
You will however have a problem using most comparison techniques on a skeletonized image. You take the image and reduce it to skinny little lines that are unlikely to overlap for images that only deviate from each other by a little bit.
With skeletonized images you often need to compare features. For example, identify the points of intersection of the skeleton, and use the location of those points for comparing images. In your sample image you might be able to extract the lines (I see three major ones) and then compare images based on the location of the lines.
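One rough way to get such intersection points from a binary skeleton (a sketch: count 8-connected neighbours and keep skeleton pixels with three or more of them):

import cv2
import numpy as np

sk = (skel > 0).astype(np.float32)                 # skeleton from the question, as 0/1
kernel = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]], dtype=np.float32)
neighbours = cv2.filter2D(sk, -1, kernel)          # neighbour count at every pixel
branch_points = np.argwhere((sk == 1) & (neighbours >= 3))   # (row, col) coordinates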
Binary images are already represented as 2D numpy arrays.
This is a complex problem. You can do this by reshaping the images to two vectors (assuming they are exactly the same size), and then calculating the correlation coefficient:
np.corrcoef(img.reshape(-1), img1.reshape(-1))
One possible solution would be to correlate (or subtract) the blurred version of each skeletonized image with one another.
That way, the unavoidable little offsets between skeleton lines wouldn't have such a negative impact on the outcome as if you subtracted the skeletons directly (since the skeleton lines would most probably not overlay exactly over one another).
I'm assuming here that the original images weren't similar to each other in the first place, otherwise you wouldn't need to skeletonize them, right?
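As a sketch of that idea (the blur size is arbitrary and would need tuning):

import cv2
import numpy as np

blur1 = cv2.GaussianBlur(skel.astype(np.float32), (15, 15), 0)
blur2 = cv2.GaussianBlur(skel1.astype(np.float32), (15, 15), 0)
score = np.corrcoef(blur1.ravel(), blur2.ravel())[0, 1]   # closer to 1 means more similar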

Image Analysis: Finding proteins in an image

I am attempting to write a program that will automatically locate a protein in an image, this will ultimately be used to differentiate between two proteins of different heights that are present.
The white area on top of the background is a membrane in which the proteins sit and the white blobs that are present are the proteins. The proteins have two lobes hence they appear in pairs (actually one protein).
I have been writing a script in Fiji (Jython) to try to locate the proteins so we can work out their height from the local background. So far this involves applying adaptive histogram equalisation and then subtracting the background with a rolling ball of radius 10 pixels. After that I apply a kernel of sorts, 10 pixels by 10 pixels, which works out the average of the 5 centre pixels and divides it by the average of the pixels on the 4 edges of the kernel to get a ratio. If the ratio is above a certain value, it is a candidate.
The output I got was this image, which, apart from some wrapping and sensitivity (ratio=2.0) issues, seems to be OK. My questions are:
Is this a reasonable approach or is there an obviously better way of doing this?
Can you suggest a way on from here? I am a little stuck now and not really sure how to proceed.
code if necessary: http://pastebin.com/D45LNJCu
Thanks!
Sam
How about starting off a bit more simply: use the Harris-point approach and detect local maxima. E.g.:
import numpy as np
from PIL import Image
from scipy import ndimage
import matplotlib.pyplot as plt

roi = 2.5
peak_threshold = 120

im = Image.open('Q766c.png')
image = np.array(im.convert('L'), dtype=float)  # work on a grayscale copy
size = int(2 * roi + 1)

image_max = ndimage.maximum_filter(image, size=size, mode='constant')
mask = (image == image_max)
image *= mask

# Remove the image borders
image[:size] = 0
image[-size:] = 0
image[:, :size] = 0
image[:, -size:] = 0

# Find peaks
image_t = (image > peak_threshold) * 1

# Get coordinates of peaks
f = np.transpose(image_t.nonzero())

# Show
plt.imshow(np.asarray(im))
plt.plot(f[:, 1], f[:, 0], 'o', markeredgewidth=0.45, markeredgecolor='b', markerfacecolor='None')
plt.axis('off')
plt.savefig('local_max.png', format='png', bbox_inches='tight')
plt.show()
Which gives this:
ImageJ "Find maxima" does also similar.
Here is the Jython code
from ij import ImagePlus, IJ, Prefs
from ij.plugin import RGBStackMerge
from ij.process import ImageProcessor, ImageConverter
from ij.plugin.filter import Binary, MaximumFinder
from jarray import array
# define background is black (0)
Prefs.blackBackground = True
# find maxima
#imp = IJ.getImage()
imp = ImagePlus('http://i.stack.imgur.com/Q766c.png')
ImageConverter(imp).convertToGray8()
ip = imp.getProcessor()
segip = MaximumFinder().findMaxima( ip, 10, 200, MaximumFinder.SINGLE_POINTS , False, False)
# display detection result
binner = Binary()
binner.setup("dilate", None)
binner.run(segip)
segimp = ImagePlus("seg", segip)
mergeimp = RGBStackMerge.mergeChannels(array([segimp, imp, None, None, None, None, None], ImagePlus), True)
mergeimp.show()
EDIT: Updated the code to allow processing a PNG (RGB) image and to load the image directly from this thread. See comments for more details.

Remove features from binarized image

I wrote a little script to transform pictures of chalkboards into a form that I can print off and mark up.
I take an image like this:
Auto-crop it, and binarize it. Here's the output of the script:
I would like to remove the largest connected black regions from the image. Is there a simple way to do this?
I was thinking of eroding the image to eliminate the text and then subtracting the eroded image from the original binarized image, but I can't help thinking that there's a more appropriate method.
Sure, you can just get connected components (of a certain size) with findContours or floodFill and erase them, leaving some smear. However, if you want to do it right, you should think about why you have the black area in the first place.
You did not use adaptive (locally adaptive) thresholding, and this made your output sensitive to shading. Try not to get the black region in the first place by running something like this:
Mat img = imread("desk.jpg", 0);
Mat img2, dst;
pyrDown(img, img2);
adaptiveThreshold(255 - img2, dst, 255, ADAPTIVE_THRESH_MEAN_C, THRESH_BINARY, 9, 10);
imwrite("adaptiveT.png", dst);
imshow("dst", dst);
waitKey(-1);
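The same idea in Python (a rough equivalent of the C++ snippet above, with the same parameters):

import cv2

img = cv2.imread("desk.jpg", 0)
img2 = cv2.pyrDown(img)                      # downsample, as in the C++ version
dst = cv2.adaptiveThreshold(255 - img2, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                            cv2.THRESH_BINARY, 9, 10)
cv2.imwrite("adaptiveT.png", dst)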
In the future, you may want to read about adaptive thresholds and how to sample colours locally. I personally found it useful to sample binary colours orthogonally to the image gradient (that is, on both sides of it). This way the samples of white and black are of equal size, which matters because there is typically more background colour, which biases the estimation. Using SWT and MSER may give you even more ideas about text segmentation.
I tried this:
import numpy as np
import cv2
im = cv2.imread('image.png')
gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
grayout = 255*np.ones((im.shape[0],im.shape[1],1), np.uint8)
blur = cv2.GaussianBlur(gray,(5,5),1)
thresh = cv2.adaptiveThreshold(blur,255,1,1,11,2)
contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
wcnt = 0
for item in contours:
    area = cv2.contourArea(item)
    print(wcnt, area)
    [x, y, w, h] = cv2.boundingRect(item)
    if area > 10 and area < 200:
        roi = gray[y:y+h, x:x+w]
        cntd = 0
        for i in range(x, x+w):
            for j in range(y, y+h):
                if gray[j, i] == 0:
                    cntd = cntd + 1
        density = cntd / float(h * w)
        if density < 0.5:
            for i in range(x, x+w):
                for j in range(y, y+h):
                    grayout[j, i] = gray[j, i]
            wcnt = wcnt + 1
cv2.imwrite('result.png',grayout)
You have to balance two things: removing the black spots without losing the content of what is on the board. The output I got is this:
Here is a Python numpy implementation (using my own mahotas package) of the method from the top answer (almost the same, I think):
import mahotas as mh
import numpy as np
Imported mahotas & numpy with standard abbreviations
im = mh.imread('7Esco.jpg', as_grey=1)
Load the image & convert to gray
im2 = im[::2,::2]
im2 = mh.gaussian_filter(im2, 1.4)
Downsample and blur (for speed and noise removal).
im2 = 255 - im2
Invert the image
mean_filtered = mh.convolve(im2.astype(float), np.ones((9,9))/81.)
Mean filtering is implemented "by hand" with a convolution.
imc = im2 > mean_filtered - 4
You might need to adjust the number 4 here, but it worked well for this image.
mh.imsave('binarized.png', (imc*255).astype(np.uint8))
Convert to 8 bits and save in PNG format.
