Algorithm to compare two images with pattern - Python

I would like to ask you for help. I am a student, and for academic research I am designing a system in which one of the modules is responsible for comparing low-resolution, simple images (img, jpg, jpeg, png, gif). However, I need guidance on whether I can write an implementation in Python and how to get started. Maybe some of you have worked on something like this before and could share your knowledge.
Issue 1 - simple version
The input data must be compared with a set of pattern images, and the output should contain the degree of similarity (as a percentage) and the pattern image to which the given input is most similar. In this version, the assumption is that the input image is not modified in any way (i.e., not rotated, tilted, etc.).
Issue 2 - difficult version
The input data must be compared with a set of pattern images, and the output should contain the degree of similarity (as a percentage) and the pattern image to which the given input is most similar. In this version, the assumption is that the input image can be rotated.
Can some of you tell me what I need to do and how to start? I will appreciate any help.

As a starter, you could read in the images using matplotlib, or the python imaging library (PIL).
Comparing to a pattern could be done with a cross-correlation, which you could compute using scipy or numpy. As you only have a few pixels, I would go for numpy, which does not use Fourier transforms.
import pylab as P
import numpy as N

# read the images
im1 = P.imread('4Fsjx.jpg')
im2 = P.imread('xUHhB.jpg')

# reduce colour images to a single greyscale channel
if im1.ndim == 3:
    im1 = im1.mean(axis=-1)
if im2.ndim == 3:
    im2 = im2.mean(axis=-1)

# do the cross-correlation (np.convolve only handles 1-D arrays, so flatten first)
conv = N.convolve(im1.ravel(), im2.ravel())
# a measure for similarity then is:
sim = conv.sum()
Please note, this is a very quick and dirty approach, and you should spend some thought on how to improve it, not even including the rotation that you mentioned. Anyhow, this code can read in your images and give you a measure for similarity, although the convolution only works on a single channel, which is why colour images are reduced to greyscale above. I hope it gives you something to start with.
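One possible refinement (not part of the original answer) is to use a normalised correlation instead of a raw sum, so the score lands in [-1, 1] and is comparable across patterns. The sketch below assumes both images are already same-sized greyscale arrays:

import numpy as np

def normalised_similarity(a, b):
    """Pearson correlation of the flattened pixel values; 1.0 means identical up to brightness/contrast."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))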

Here is a start as some pseudo code. I would strongly recommend getting numpy/scipy to help with this.
import glob
import scipy.misc  # note: scipy.misc.imread was removed in newer SciPy versions; imageio.imread is the usual replacement

# read the template images:
files = glob.glob('*.templates')
listOfImages = []
for elem in files:
    imagea = scipy.misc.imread(elem)
    listOfImages.append(imagea)

# read the input/test image
targetImage = scipy.misc.imread(targetImageName)
Now loop through each entry of listOfImages and compute the "distance". Note that this is probably the hardest part: how will you decide if two images are similar? Using direct pixel comparisons? Using image histograms? Using some image alignment metric (this would be useful for your difficult version)? One simple gotcha: I noticed that your uploaded images were different sizes. If the images are of different sizes, then you will have to sweep one over the other. Also, can the images be scaled? Then you will need either a scale-invariant metric or to repeat the sweep over different scales.
# keep track of the min distance
minDistance = Distance(targetImage, listOfImages[0])
minIndex = 0
for index, elem in enumerate(listOfImages):
    currentDistance = Distance(targetImage, elem)
    if currentDistance < minDistance:
        minDistance = currentDistance
        minIndex = index
The distance function is where the challenges are, but I'll leave that for you.
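As a placeholder for that Distance function, here is one simple possibility (not from the answer above): mean squared error after resizing the second image to the first one's shape. It assumes greyscale numpy arrays and is only a baseline, not a robust metric:

import numpy as np
from scipy import ndimage

def Distance(imageA, imageB):
    """Mean squared pixel difference after resizing imageB to imageA's shape (greyscale arrays)."""
    a = imageA.astype(float)
    b = imageB.astype(float)
    if a.shape != b.shape:
        zoom = [sa / sb for sa, sb in zip(a.shape, b.shape)]
        b = ndimage.zoom(b, zoom)
    return np.mean((a - b) ** 2)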

Related

Improve accuracy of OpenCV AlignMTB for HDR or exposure fusion

I am using OpenCV to exposure-fuse bracketed images. I started from this article: https://learnopencv.com/exposure-fusion-using-opencv-cpp-python/
In that article alignMTB is used for aligning and MergeMertens for the exposure fusion.
The relevant part of my code is:
MAXBITS = 9
EXCLUDE_RANGE = 3
CUT = True

print("Aligning images using AlignMTB ... ")
for filename in file_list:
    print("alignMTB: reading image " + filename)
    im = cv2.imread(filename)
    images.append(im)

alignMTB = cv2.createAlignMTB(MAXBITS, EXCLUDE_RANGE, CUT)
alignMTB.process(images, images)

# Merge using Exposure Fusion
print("\nMerging using Exposure Fusion ... ")
mergeMertens = cv2.createMergeMertens()
exposureFusion = mergeMertens.process(images)
(The parameter constants are at the top of my script, but for clarity I put them here in the copied code.)
However, the alignment of my hand-held images is pretty bad. I used to use align_image_stack and enfuse, and align_image_stack always aligns the images (much) better than alignMTB.
Now, createAlignMTB(MAXBITS, EXCLUDE_RANGE, CUT) accepts 3 optional parameters, but at first I simply used alignMTB = cv2.createAlignMTB() without parameters. However, as the alignment wasn't as good as align_image_stack's (sometimes not good at all), I started playing with the parameters. But whatever values I use, nothing changes: the fused images are identical, and there is absolutely no improvement or worsening in alignment.
I tried this with opencv-python 4.6.0.66 and now 4.7.0.68.
When I try the same with HDR using createCalibrateDebevec or one of the others, the alignment with alignMTB is just as poor. It is really the alignMTB step.
Is there something I am doing wrong or completely misunderstanding?
EDIT: I also use ORB (preferably) and ECC to simply stack images to reduce noise. ORB does a great job at alignment and is fast, but I can't combine it with merging images. Is there an option to use ORB as the alignment step?
EDIT 2: I can now use ORB (and ECC and SIFT) to align the images pairwise against a reference image, save each aligned image to a tmp folder, and then merge them all. Compared to the total process (10~40 seconds for ORB), the saving/loading of the tmp files costs only 0.1~2 seconds depending on the number of images. So far, so good.
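A rough sketch of the pairwise ORB alignment step (illustrative only; the helper name and parameter values are not from the original workflow, and greyscale copies of the reference and moving image are assumed):

import cv2
import numpy as np

def align_to_reference(ref_gray, img, img_gray):
    """Warp img onto ref_gray using ORB features and a RANSAC-estimated affine transform."""
    orb = cv2.ORB_create(5000)
    kp_ref, des_ref = orb.detectAndCompute(ref_gray, None)
    kp_img, des_img = orb.detectAndCompute(img_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_img, des_ref), key=lambda m: m.distance)[:500]
    src = np.float32([kp_img[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = ref_gray.shape
    return cv2.warpAffine(img, M, (w, h))

The warped images could then be fed straight into mergeMertens.process instead of going through temporary files.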
Still I would very much like to know how I can improve the alignMTB results.

calculate particle size distribution from AFM measurements

I am trying to obtain a radius and diameter distribution from some AFM (atomic force microscopy) measurements. So far I have tried Gwyddion, ImageJ and different workflows in Matlab.
At the moment the best result I have found is to use Gwyddion, take the phase image, high-pass filter it, and then try edge detection with 'Laplacian of Gaussian'. The result is shown in figure 3. However, this image is still too noisy and doesn't really capture the edges of all the particles (some are merged together, others do not have a clear perimeter).
In the end I need an image which segments each of the spherical particles which I can use for blob detection/analysis to obtain size/radius information.
Can anyone recommend a different method?
I would definitely try granulometry; it was designed for something very similar. There is a good explanation of granulometry here, starting at page 158.
Granulometry performs consecutive openings of increasing size that erase the different patterns according to their dimensions: the bigger the pattern, the later it is erased. It gives you a curve that represents the distribution of pattern sizes in your image, which is exactly what you want.
However, it will not give you any information about positions inside the image. If you want a rough model of the blobs present in your image, you can take a look at the ultimate opening.
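A minimal granulometry sketch with scipy (my own illustration, not from the answer), assuming the particle image has already been thresholded to a binary mask:

import numpy as np
from scipy import ndimage

def granulometry(binary, max_radius=30):
    """Remaining foreground area after opening with disks of growing radius."""
    areas = []
    for r in range(1, max_radius + 1):
        y, x = np.ogrid[-r:r + 1, -r:r + 1]
        disk = x**2 + y**2 <= r**2
        areas.append(ndimage.binary_opening(binary, structure=disk).sum())
    return np.array(areas)

# the negative discrete derivative of this curve is the size distribution:
# sizes = -np.diff(granulometry(mask))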
Maybe you can use Avizo; it's a powerful piece of software for dealing with image problems, especially for 3D data (CT).

pattern recognition of microscopy images with python

I would like to find a zoomed microscopy image in a dozen overview images. I would prefer a python/numpy/scipy solution.
My knowledge of pattern recognition is negligible. Anyway, here is what I tried:
My first idea was to extract the most important structures from the images by setting everything greater [smaller] than some threshold to 255 [0] in the greyscale image.
For example, I then have the following pattern:
The overview image might then look like this:
Here is a version where the region of the pattern is highlighted:
I would now like to find a way to get the pixel number, at which the pattern occurs in the overview image.
It is very important to note, that I do not have information about:
the orientation of the two images with respect to each other
the scaling of the images
in principle, there might even be some stretching between the images, but this might be too hard to implement.
I do not know if the pattern is in the image at all. Therefore I have to check about 20 images.
For a fixed scaling, my attempt was to use
result = scipy.signal.fftconvolve()
and see how the maximum of result varies when I rotate the pattern before doing the convolution. Taking the maximum value gives me the correct angle at which the images overlap.
However, this is not a nice solution because it already takes some minutes, and varying the scaling or applying further transformations as well would take forever.
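In rough code, the idea is something like this (a sketch only; overview and pattern stand for the thresholded greyscale arrays, and the 5-degree step is arbitrary):

import numpy as np
from scipy import signal, ndimage

best = {'angle': None, 'peak': -np.inf, 'pos': None}
for angle in range(0, 360, 5):
    rotated = ndimage.rotate(pattern, angle, reshape=False)
    # correlation = convolution with the template flipped in both axes
    result = signal.fftconvolve(overview, rotated[::-1, ::-1], mode='same')
    peak = result.max()
    if peak > best['peak']:
        pos = np.unravel_index(result.argmax(), result.shape)
        best = {'angle': angle, 'peak': peak, 'pos': pos}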
I guess there are better approaches out there!

Python/OpenCV: Computing a depth map from stereo images

I have two stereo images that I'd like to use to compute a depth map. While I unfortunately do not know C/C++, I do know python-- so when I found this tutorial, I was optimistic.
Unfortunately, the tutorial appears to be somewhat out of date. It not only needs to be tweaked to run at all (renaming 'createStereoBM' to 'StereoBM') but when it does run, it doesn't give a good result, even on the example stereo-images that were used in the tutorial itself.
Here's an example:
import numpy as np
import cv2
from matplotlib import pyplot as plt
imgL = cv2.imread('Yeuna9x.png',0)
imgR = cv2.imread('SuXT483.png',0)
stereo = cv2.StereoBM(1, 16, 15)
disparity = stereo.compute(imgL, imgR)
plt.imshow(disparity,'gray')
plt.show()
The result:
This looks very different from what the author of the tutorial achieves:
(source: opencv.org)
Tweaking the parameters does not improve matters. All the documentation I've been able to find is for the original C version of OpenCV, not the Python library equivalent. I unfortunately haven't been able to use it to improve things.
Any help would be appreciated!
You have the images the wrong way around.
Look at the images: the tin behind the lamp lets you work out the camera locations of the two images.
Just change this:
# v
imgR = cv2.imread('Yeuna9x.png',0)
imgL = cv2.imread('SuXT483.png',0)
# ^
If you look at the image in the tutorial which they say is the left frame, it is the same as your right one.
Here's my result after the change.
It is possible that you need to keep adjusting the parameters of the block matching algorithm.
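For reference, on current OpenCV versions the same pipeline (with the images swapped as above) would look roughly like this; numDisparities and blockSize are the two parameters most worth sweeping, and the values below are only a starting point:

import cv2
from matplotlib import pyplot as plt

imgL = cv2.imread('SuXT483.png', 0)  # left frame
imgR = cv2.imread('Yeuna9x.png', 0)  # right frame

# numDisparities must be a multiple of 16, blockSize an odd number >= 5
stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(imgL, imgR)

plt.imshow(disparity, 'gray')
plt.show()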
Have a look at this blog article: https://erget.wordpress.com/2014/03/13/building-an-interactive-gui-with-opencv/
The article's author has written a set of classes to make the process of calibrating the cameras more streamlined than the OpenCV tutorial. These classes are available as a PyPI package: https://github.com/erget/StereoVision
Hope this helps :)
The camera is translated vertically instead of horizontally. Rotate the images 90 degrees, then try. (Prove it to yourself by rotating the screen. I just picked up my laptop and turned it on its edge.)
You mention different software; perhaps it is a row-major/column-major kind of thing between the original and pyOpenCV.

How do I find images with a similar color using Python and PIL?

I have a lot of images in a folder, and I would like to find images with a similar color to a pre-chosen image.
I would like to be able to do something like:
python find_similar.py sample.jpg
and have that return something like:
234324.jpg
55.jpg
9945.jpg
345434.jpg
104.jpg
Is this doable?
I cannot give you a canned solution, but here's an angle to tackle the problem. It's not PIL-specific, and it might be entirely bogus, since I have no experience in image processing.
Perform color quantization on the image. That gives you a palette that encodes the color information in the image without any shape information.
Run a principal components analysis to get the dominant components in the color cube. Strictly, you could run this without quantization first, but it might be too expensive.
Do a least-squares fitting on the principal components of different images.
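A minimal sketch of the first step, colour quantization with PIL (my own illustration; the helper name and palette size are arbitrary):

from PIL import Image
import numpy as np

def dominant_palette(path, n_colors=8):
    """Reduce the image to an n-colour palette and return it as an (n, 3) RGB array."""
    im = Image.open(path).convert('RGB')
    quantized = im.quantize(colors=n_colors)
    palette = quantized.getpalette()[:n_colors * 3]
    return np.array(palette, dtype=float).reshape(n_colors, 3)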
Hope this helps.
The algorithm for finding similar images is discussed in a question on Stack Overflow; you might want to implement one of those in Python and PIL.
Also, you can straightaway use the ImageChops module from PIL and use the difference method to compare two images like this:
from PIL import Image, ImageChops  # on modern Pillow; plain "import Image" only works on the legacy PIL

im1 = Image.open("original.jpg")
im2 = Image.open("sample.jpg")
diff = ImageChops.difference(im2, im1)
That might help you get some idea about the difference between your original image and the others.
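If a single number is needed, one possibility (not part of the original answer) is to average the difference image, assuming both images have the same size and mode:

import numpy as np

score = np.asarray(diff, dtype=float).mean()  # 0 means identical; larger means more different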
There is another similar question on Stack Overflow which discusses this.
