I need some help in trying to figure something out. I currently have a Python script which generates two images using the imshow method in matplotlib. My task is to find the correlation between these two images, or in other words the similarity between the two images. Both images are the same size and both use the jet colormap.
Let me know if this is clear enough or if I need to explain in more detail. It would be helpful if someone could provide example code of how to do this.
Have you looked at the scipy signal processing toolkit?
from scipy import signal
cor = signal.correlate2d(im1, im2)
will calculate the 2D correlation for you.
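If what you actually want is a single similarity score rather than a full correlation map, a minimal sketch (assuming im1 and im2 are 2D NumPy arrays of the same shape, i.e. the data you passed to imshow) is to flatten both and compute the Pearson correlation coefficient:
import numpy as np
# Flatten both images and compute the Pearson correlation coefficient.
# Near 1: the images vary together; near 0: no linear relationship;
# near -1: one is roughly an inverted copy of the other.
r = np.corrcoef(im1.ravel(), im2.ravel())[0, 1]
print("similarity:", r)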
Suppose I have this image
I would like to check whether this image is bimodal or not. How can I do this using Python? After a couple of Google searches, I found out that a bimodal image is supposed to have two dominant peaks in its histogram, but I'm not getting any idea of how to do this in Python. Any advice? Thanks in advance.
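For what it's worth, one way to test that idea is to compute the grayscale histogram and count its prominent peaks. This is only a rough sketch; the file name, the smoothing window, and the prominence threshold are all assumptions you would need to tune:
from PIL import Image
import numpy as np
from scipy.signal import find_peaks
# Load the image as grayscale and build a 256-bin intensity histogram.
img = np.asarray(Image.open("image.png").convert("L"))
hist, _ = np.histogram(img, bins=256, range=(0, 256))
# Lightly smooth the histogram, then count prominent peaks; an image is
# roughly "bimodal" if two dominant peaks remain.
smooth = np.convolve(hist, np.ones(5) / 5, mode="same")
peaks, _ = find_peaks(smooth, prominence=smooth.max() * 0.1)
print("bimodal" if len(peaks) == 2 else "not bimodal", peaks)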
I want to compare how close these two images are (red in similar areas), but I can't go pixel by pixel because their color locations are not exactly the same. Does anyone know what would be a good approach here?
Thanks,
I personally would advise using the indico image features API. Basically, you pass in the image you're dealing with and get back a set of features that represent higher-level morphological structures within that image.
If you compute cosine similarity on top of those features, you'll get a more intuitive similarity metric.
There's a great github link showing how to do exactly this with a front-end slapped on if that's what you're looking for here: https://github.com/IndicoDataSolutions/imagesimilarity
The code itself is pretty straightforward though:
from indicoio import image_features
from scipy import spatial
features_1 = image_features(<path_to_image>, <api_key>)
features_2 = image_features(<path_to_image>, <api_key>)
similarity = 1 - spatial.distance.cosine(features_1, features_2)  # This is what you want
The full docs are here
Full disclosure: I am the CEO of indico, so I'm biased, but I really do think it would help in this case.
I've been looking for a way to align at least two images in Python.
There are some very nice scripts out there, but I'd like a script that accepts the coordinates of a point in one image and uses those coordinates to align the second (or more) image.
I've been reading about phase correlation algorithms, but I'd appreciate some help on how to use the coordinates of a user-defined point.
Thank you!
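In case it helps as a starting point, here is a minimal sketch that aligns a second image to a reference by a pure translation, assuming you already have the coordinates of the same point picked by the user in both images (the file name, point coordinates, and the use of scipy.ndimage.shift are my own placeholders, not from the question):
import numpy as np
from PIL import Image
from scipy import ndimage
# Placeholder inputs: the image to move and the matching point in each image.
moving = np.asarray(Image.open("second.png").convert("L"), dtype=float)
point_ref = np.array([120.0, 85.0])   # (row, col) of the feature in the reference image
point_mov = np.array([132.0, 78.0])   # (row, col) of the same feature in the second image
# Shift the second image so the selected point lands on the reference point.
offset = point_ref - point_mov
aligned = ndimage.shift(moving, shift=offset, order=1, mode="nearest")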
I know that there are many ways to upscale (interpolate) an image using bilinear, bicubic,... Somehow, these same algorithms can also be used to downscale an image. But when it comes to subsampling, I've come across only two methods: Gaussian-blur then discard some rows and columns, or block-averaging.
Is there such a thing as bilinear, bicubic, ... subsampling?
Instead of Gaussian-blurring then discarding, or block-averaging, can/should I do Gaussian-blurring then block-averaging? Or bilinear, bicubic, ... subsampling, if there is such a thing?
What I'm doing is trying to build Gaussian and Laplacian image pyramids, but I'm not sure how to decimate and interpolate the image. Some Python code with Scipy/Numpy would really help. Thank you very much.
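Since SciPy/NumPy code was asked for, here is a minimal sketch of the classic pyramid operations (blur then drop every other row/column to decimate; zoom to expand), assuming img is a float NumPy array; the sigma value and the choice of scipy.ndimage are my own assumptions:
import numpy as np
from scipy import ndimage

def decimate(img, sigma=1.0):
    # Gaussian-blur to suppress frequencies that would alias,
    # then keep every second row and column (one pyramid level down).
    blurred = ndimage.gaussian_filter(img, sigma=sigma)
    return blurred[::2, ::2]

def expand(img):
    # Upsample by a factor of 2 with bilinear interpolation (order=1).
    return ndimage.zoom(img, 2, order=1)

def laplacian_level(img):
    # One Laplacian pyramid level: the detail lost by decimating.
    small = decimate(img)
    up = expand(small)[: img.shape[0], : img.shape[1]]
    return img - up, small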
I have a lot of images in a folder, and I would like to find images with a similar color to a pre-chosen image.
I would like to be able to do something like:
python find_similar.py sample.jpg
and have that return something like:
234324.jpg
55.jpg
9945.jpg
345434.jpg
104.jpg
Is this doable?
I cannot give you a canned solution, but here's an angle to tackle the problem. It's not PIL-specific, and it might be entirely bogus, since I have no experience in image processing.
Perform color quantization on the image. That gives you a palette that encodes the color information in the image without any shape information.
Run a principal components analysis to get the dominant components in the color cube. Strictly, you could run this without quantization first, but it might be too expensive.
Do a least-squares fitting on the principal components of different images.
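To make the quantization step concrete, here is a rough sketch using PIL's built-in quantizer to pull out a small palette and compare two images by the squared distance between their dominant colors. The color count and the nearest-neighbour matching are my own simplifications, not a full PCA:
import numpy as np
from PIL import Image

def dominant_colors(path, n_colors=8):
    # Quantize the image to n_colors and return the palette entries,
    # ordered by how many pixels map to each color.
    img = Image.open(path).convert("RGB").quantize(colors=n_colors)
    palette = np.array(img.getpalette()[: n_colors * 3]).reshape(-1, 3)
    counts = sorted(img.getcolors(), reverse=True)  # (count, palette_index)
    order = [idx for _, idx in counts]
    return palette[order].astype(float)

def color_distance(path_a, path_b):
    # Sum of squared distances between each dominant color of A and its
    # nearest dominant color in B; smaller means more similar palettes.
    a, b = dominant_colors(path_a), dominant_colors(path_b)
    return sum(((b - c) ** 2).sum(axis=1).min() for c in a)
You could then score every file in the folder against sample.jpg with color_distance and sort the results to get a listing like the one above.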
Hope this helps.
The algorithm for finding similar images is discussed in a question on Stack Overflow; you might want to implement one of those in Python & PIL.
Also, you can use the ImageChops module from PIL directly; its difference method compares two images like this:
from PIL import Image, ImageChops
im1 = Image.open("original.jpg")
im2 = Image.open("sample.jpg")
# Pixel-wise absolute difference of the two images.
diff = ImageChops.difference(im2, im1)
That might help you get some idea of the difference between your original image and the others.
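If you want to boil that difference image down to a single number, one simple option (my suggestion, not part of ImageChops) is the mean absolute difference:
import numpy as np
# 0 means identical images; larger values mean a bigger average pixel difference.
score = np.asarray(diff, dtype=float).mean()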
There is another similar question on Stack Overflow which discusses this.