Python template/image similarity in a larger picture

Most neural nets for images are great at detecting objects and labeling them: they can take a picture and label some of the objects in it -- think YOLOv5.
Template matching, on the other hand, looks for a template that stays mostly the same within a larger image -- think cv2.matchTemplate.
What I hope to have is something kind of in between the two: given a manually entered template image, give me the rectangles in a larger picture where this template occurs - but it must be scale invariant and transform invariant (within reason).
The OpenCV version is too strict about what it counts as a match -- a 10% size change or a slight rotation can make matching fail. That makes it not robust enough to be useful here.
Take for instance the following (below), where we see highlighted pictures of airplanes.
This would be the ideal output.
The input would be one of the small green squares; ideally any one of them would work.
Are there things out there that can do this already?
Essentially, a cv2.matchTemplate that is more reasonably "fuzzy".
Or, if I were doing this with balls, I would use a clean picture of a ball or even a baseball as the template, and then highlight the 3 balls in the following image.
I don't need image recognition, I need image... similarity with a given template (something better than cv2.matchTemplate, because that one is terrible).
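The closest workaround I know of is looping cv2.matchTemplate over several resized copies of the template and keeping every location whose score clears a threshold. Below is a minimal sketch of that idea, with placeholder filenames, scale range, and threshold; it softens the scale problem but still breaks on rotation.

import cv2
import numpy as np

# Multi-scale template matching sketch: resize the template over a range of
# scales and keep every location whose normalized score clears a threshold.
# Filenames, the scale range, and the 0.8 threshold are placeholders to tune.
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

boxes = []
for scale in np.linspace(0.5, 2.0, 16):
    t = cv2.resize(template, None, fx=scale, fy=scale)
    if t.shape[0] > scene.shape[0] or t.shape[1] > scene.shape[1]:
        continue  # template larger than the scene at this scale
    scores = cv2.matchTemplate(scene, t, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(scores >= 0.8)
    for x, y in zip(xs, ys):
        boxes.append((int(x), int(y), t.shape[1], t.shape[0]))  # x, y, w, h

print(boxes)  # overlapping hits still need merging (non-maximum suppression)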

For those interested in the future: I ultimately had to go with a full YOLOv5 network and custom training.
I was unable to find a "cheaper" solution.

Related

Python quality inspection with opencv (ssim)

I'm currently an intern at a quality inspection company. My job is to write a program that can detect faulty products (for example, a missing screw). They take a picture of every single product. My idea is to choose an image that could serve as a benchmark, compare the other images to it using the SSIM score, and maybe display the faulty part with a rectangle. Is this a viable idea? (It's a strange internship, because it seems like I'm the only one there who can code...) That's why I'm asking here.
It sounds like a good idea if your goal is to classify different objects within images by comparing them against a benchmark image.
But in my experience, the SSIM score is sensitive to angle, lighting, and environment.
So, in conclusion, if your goal is to tell apart clearly different objects in images, your idea would work. But if your goal is to distinguish between objects that are essentially the same, it might not be able to.
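A minimal sketch of the benchmark-vs-product comparison with scikit-image's structural_similarity; the filenames and the 0.9 threshold are placeholders, and it assumes both photos are the same size and taken under the same conditions.

import cv2
from skimage.metrics import structural_similarity

# Compare a product photo against a benchmark photo of a known-good product.
benchmark = cv2.imread("benchmark.png", cv2.IMREAD_GRAYSCALE)
product = cv2.imread("product.png", cv2.IMREAD_GRAYSCALE)

score, diff = structural_similarity(benchmark, product, full=True)
print(f"SSIM score: {score:.3f}")

if score < 0.9:  # placeholder threshold: below it, flag and roughly localize the difference
    diff = (diff * 255).astype("uint8")
    mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(product, (x, y), (x + w, y + h), 255, 2)
    cv2.imwrite("faulty_marked.png", product)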

How to find an exact match of an image in hashed data with openCV

For my school project, I need to find images in a large dataset. I'm working with Python and OpenCV. So far, I've managed to find an exact match of an image in the dataset, but it takes a lot of time even though I only had 20 images for the test code. So I've searched a few pages of Google and tried the code on these pages:
image hashing
building an image hashing search engine
feature matching
Also, I've been thinking of searching through the hashed dataset, saving the matching paths, then finding the best feature-matching image among them. But most of the time, the narrowed-down working set is very different from my query image.
Image hashing is really great. It looks like what I need, but there is a problem: I need to find an exact match, not similar photos. So I'm asking: if you have any suggestion, or a piece of code that might help or improve the reference code I've linked, can you share it with me? I'd be really happy to try or research whatever you send or suggest.
OpenCV is probably the wrong tool for this. The algorithms there are geared towards finding similar matches, not exact ones. The general idea is to use machine learning to teach the code to recognize what a car looks like so it can detect cars in videos, even when the color or form changes (driving in the shadow, a different make, etc.).
I've found two approaches work well when trying to build an image database.
Use a normal hash algorithm like SHA-256 plus maybe some metadata (file or image size) to find matches
Resize the image down to 4x4 or even 2x2. Use the pixel RGB values as "hash".
The first approach is to reduce the image to a number. You can then put the number in a look up table. When searching for the image, apply the same hashing algorithm to the image you're looking for. Use the new number to look in the table. If it's there, you have a match.
Note: In all cases, hashing can produce the same number for different pictures. So you have to compare all the pixels of two pictures to make sure it's really an exact match. That's why it sometimes helps to add information like the picture size (in pixels, not file size in bytes).
The second approach lets you find pictures which look very similar to the eye but are in fact slightly different. Imagine cropping off a single pixel column on the left or tilting the image by 0.01°. To you, the images will look the same, but to a computer they will be totally different. The second approach tries to average such small changes out. The cost is that you will get more collisions, especially for B&W pictures.
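A rough sketch of both approaches; the dataset path and sizes are placeholders, and a real lookup should still confirm candidate matches pixel by pixel, as noted above.

import hashlib
from pathlib import Path
from PIL import Image

def exact_hash(path):
    # Approach 1: reduce the image to a number with SHA-256 over the decoded
    # pixels plus the image size, so a re-saved but pixel-identical copy matches.
    img = Image.open(path).convert("RGB")
    return hashlib.sha256(img.tobytes() + str(img.size).encode()).hexdigest()

def tiny_hash(path, size=4):
    # Approach 2: shrink to size x size and use the raw RGB bytes as a coarse key,
    # so tiny crops or tilts average out (at the cost of more collisions).
    return Image.open(path).convert("RGB").resize((size, size)).tobytes()

dataset = sorted(Path("dataset").glob("*.jpg"))
exact_table = {exact_hash(p): p for p in dataset}
fuzzy_table = {tiny_hash(p): p for p in dataset}

query = "query.jpg"
print("exact match:", exact_table.get(exact_hash(query), "none"))
print("fuzzy match:", fuzzy_table.get(tiny_hash(query), "none"))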
Finding exact image matches using hash functions can be done with the undouble library (disclaimer: I am also the author). It works using a multi-step process of pre-processing the images (grayscaling, normalizing, and scaling), computing the image hash, and then grouping images based on a threshold value.

Discard images from a group of similar images

I am generating images (thumbnails) from a video every 3 seconds. Now I need to discard/remove all the similar images. Is there a way I could do this?
I generate thumbnails using FFMPEG. I read about various image-diff solutions like the one given in this SO post, but I do not want to do this manually. How, and with what parameters, can I tell whether a particular image is similar to the other images present?
You can calculate the Structural Similarity Index between images and, based on the score, keep or discard an image. There are other measures you can use; basically you need a method that returns a score. Try PIL or OpenCV:
https://pillow.readthedocs.io/en/3.1.x/reference/ImageChops.html?highlight=difference
https://www.pyimagesearch.com/2017/06/19/image-difference-with-opencv-and-python/
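For instance, a minimal sketch that walks the thumbnails in order and drops any frame too close to the last kept one, scoring frames with PIL's ImageChops.difference; the folder name and the threshold of 10 are placeholders to tune.

from pathlib import Path
import numpy as np
from PIL import Image, ImageChops

def diff_score(img_a, img_b):
    # Mean absolute pixel difference between two same-size frames; 0 = identical.
    diff = ImageChops.difference(img_a.convert("L"), img_b.convert("L"))
    return np.asarray(diff, dtype=np.float32).mean()

thumbs = sorted(Path("thumbnails").glob("*.png"))
kept = [thumbs[0]]
last = Image.open(thumbs[0])

for path in thumbs[1:]:
    frame = Image.open(path)
    if diff_score(last, frame) > 10:  # different enough from the last kept frame
        kept.append(path)
        last = frame

print(f"kept {len(kept)} of {len(thumbs)} thumbnails")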
I don't have enough reputation to comment my idea on your problem, so I will just go ahead and post it as an answer in the hope of helping you.
I am a bit confused by the term "similar", but since you are referring to video frames I am going to assume that you want to avoid keeping "similar" frames that were captured because of poor camera movement. If that's the case, you might want to consider using salient point descriptors.
To be more specific, you can detect salient points (using, for instance, Harris), then compute a point descriptor (such as SURF), and discard the frames that are found to have "too many" points in common with a pre-selected frame.
Keep in mind that for the above process to be successful, the frames must be as sharp as possible; I guess you don't want to extract a blurred frame as a thumbnail anyway. So applying blurred-image detection might be useful in your case.
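A rough sketch of that idea, substituting ORB (which ships with stock OpenCV) for Harris + SURF, since SURF sits in the patented contrib modules; the match-count threshold is a guess to calibrate on your own frames.

import cv2

def good_match_count(img_a, img_b, ratio=0.75):
    # Detect keypoints, describe them, and count distinctive matches
    # between the two frames (Lowe's ratio test).
    orb = cv2.ORB_create(nfeatures=500)
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des_a, des_b, k=2)
    return sum(1 for pair in matches
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)

frame_a = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
frame_b = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

if good_match_count(frame_a, frame_b) > 100:  # "too many" similar points
    print("near-duplicate frames: discard one")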

Simple quick robust image comparison

I have an image-finding and "blur-compare" task, and I could not figure out which methods I should use.
The setup is this: a box of, say, 100x100 pixels either is mostly filled by an object or not. To the human eye this object is always almost the same, but it may change through blur, slight rescaling, 3-dimensional tilting, shifting sideways or up/down by a pixel or two, or other very small graphical changes.
What is a simple, quick, robust, and reliable way to check whether the transformed object is there or not? Pointers to Python packages as well as code would be nice.
Not sure I entirely understand your question, but I'll give it a shot..
Assuming:
we just want to know if there is some object in a box.
the empty box is always the same
perfect box alignment etc.
You can do this:
subtract the query image from your empty box image.
sum all pixels
if the value is zero the images are identical, therefore no change, so no object.
Obviously there actually is some difference between the box parts of the two images, but the key thing is that the non-object parts of the images should be as similar as possible in both pictures. If that is the case, we can use the above method with a threshold test as the third step. Provided the threshold is set reasonably, it should give a decent prediction of whether the box is empty or not.
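A minimal sketch of that subtract-sum-threshold test; the filenames and the threshold value are placeholders, and casting to a signed type avoids uint8 wrap-around when subtracting.

import cv2
import numpy as np

empty_box = cv2.imread("empty_box.png", cv2.IMREAD_GRAYSCALE).astype(np.int32)
query = cv2.imread("query_box.png", cv2.IMREAD_GRAYSCALE).astype(np.int32)

# Sum of absolute pixel differences; 0 would mean the images are identical.
score = np.abs(query - empty_box).sum()

THRESHOLD = 50_000  # tune on a few known empty and filled examples
print("object present" if score > THRESHOLD else "box looks empty")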

Detect the location of an image within a larger image

How do you detect the location of an image within a larger image? I have an unmodified copy of the image. This image is then changed to an arbitrary resolution and placed randomly within a much larger image which is of an arbitrary size. No other transformations are conducted on the resulting image. Python code would be ideal, and it would probably require libgd. If you know of a good approach to this problem you'll get a +1.
There is a quick and dirty solution, and that's simply sliding a window over the target image and computing some measure of similarity at each location, then picking the location with the highest similarity. Then you compare the similarity to a threshold, if the score is above the threshold, you conclude the image is there and that's the location; if the score is below the threshold, then the image isn't there.
As a similarity measure, you can use normalized correlation or sum of squared differences (aka L2 norm). As people mentioned, this will not deal with scale changes. So you also rescale your original image multiple times and repeat the process above with each scaled version. Depending on the size of your input image and the range of possible scales, this may be good enough, and it's easy to implement.
A proper solution is to use affine invariants. Try looking up "wide-baseline stereo matching"; people have looked at this problem in that context. The methods used are generally something like this:
Preprocessing of the original image
Run an "interest point detector". This will find a few points in the image which are easily localizable, e.g. corners. There are many detectors, a detector called "harris-affine" works well and is pretty popular (so implementations probably exist). Another option is to use the Difference-of-Gaussians (DoG) detector, it was developed for SIFT and works well too.
At each interest point, extract a small sub-image (e.g. 30x30 pixels)
For each sub-image, compute a "descriptor", some representation of the image content in that window. Again, many descriptors exist. Things to look at are how well the descriptor describes the image content (you want two descriptors to match only if they are similar) and how invariant it is (you want it to be the same even after scaling). In your case, I'd recommend using SIFT. It is not as invariant as some other descriptors, but can cope with scale well, and in your case scale is the only thing that changes.
At the end of this stage, you will have a set of descriptors.
Testing (with the new test image).
First, you run the same interest point detector as in step 1 and get a set of interest points. You compute the same descriptor for each point, as above. Now you have a set of descriptors for the target image as well.
Next, you look for matches. Ideally, to each descriptor from your original image, there will be some pretty similar descriptor in the target image. (Since the target image is larger, there will also be "leftover" descriptors, i.e. points that don't correspond to anything in the original image.) So if enough of the original descriptors match with enough similarity, then you know the target is there. Moreover, since the descriptors are location-specific, you will also know where in the target image the original image is.
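A condensed sketch of this pipeline using OpenCV's built-in SIFT (cv2.SIFT_create, available in recent opencv-python builds); the filenames and the minimum match count of 10 are placeholders.

import cv2
import numpy as np

original = cv2.imread("original.png", cv2.IMREAD_GRAYSCALE)
target = cv2.imread("large_target.png", cv2.IMREAD_GRAYSCALE)

# Steps 1-2: interest points and descriptors for both images.
sift = cv2.SIFT_create()
kp_o, des_o = sift.detectAndCompute(original, None)
kp_t, des_t = sift.detectAndCompute(target, None)

# Step 3: match descriptors and keep the distinctive ones (Lowe's ratio test).
pairs = cv2.BFMatcher().knnMatch(des_o, des_t, k=2)
good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

if len(good) >= 10:  # enough matches: estimate where the original sits in the target
    src = np.float32([kp_o[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_t[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = original.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    print(cv2.perspectiveTransform(corners, H).reshape(-1, 2))  # projected corners
else:
    print("not enough matches: original probably not in the target image")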
You probably want cross-correlation. (Autocorrelation is correlating a signal with itself; cross correlating is correlating two different signals.)
What correlation does for you, over simply checking for exact matches, is that it will tell you where the best matches are, and how good they are. Flip side is that, for a 2-D picture, it's something like O(N^3), and it's not that simple an algorithm. But it's magic once you get it to work.
EDIT: Aargh, you specified an arbitrary resize. That's going to break any correlation-based algorithm. Sorry, you're outside my experience now and SO won't let me delete this answer.
http://en.wikipedia.org/wiki/Autocorrelation is my first instinct.
Take a look at Scale-Invariant Feature Transforms; there are many different flavors that may be more or less tailored to the type of images you happen to be working with.
