Separate Image Into Specific Lines and Match to Images in Python

I have a bunch of images, all of which look fairly similar to the one below:
In these images, the lines are distinct (at least to human eyes), but they intersect often. I am not very familiar with Python image processing, but I have heard OpenCV can be useful for these kinds of things. I would like to separate the lines somehow.
Furthermore, I would like to compare these images to the NFL route tree (I would separate the tree into 9 different routes by hand, giving me 9 different images to compare against, so just imagine that):
The question really boils down to this: how do I separate out the distinct lines in the image (e.g. 8 different lines for the example image from Mark Andrews), and then compare these lines to find which of my 9 route images is the nearest match?
If opencv is not the correct package for this, just let me know, and let me know if there is a package or packages that will allow me to do this. Thanks!

Related

How to find an exact match of an image in hashed data with openCV

For my school project, I need to find images in a large dataset. I'm working with Python and OpenCV. So far, I've managed to find an exact match of an image in the dataset, but it takes a lot of time even though I only had 20 images for the test code. So I've searched a few pages of Google and tried the code on these pages:
image hashing
building an image hashing search engine
feature matching
I've also been thinking of searching through the hashed dataset, saving the matching paths, and then finding the best feature-matching image among them. But most of the time, the narrowed-down set is very different from my query image.
The image hashing is really great. It looks like what I need, but there is a problem: I need to find an exact match, not similar photos. So I'm asking you guys: if you have any suggestions, or a piece of code that might help or improve the reference code I've linked, can you share it with me? I'd be really happy to try or research whatever you send or suggest.
OpenCV is probably the wrong tool for this. The algorithms there are geared towards finding similar matches, not exact ones. The general idea is to use machine learning to teach the code to recognize what a car looks like so it can detect cars in videos, even when the color or form changes (driving in shadow, a different make, etc.).
I've found two approaches work well when trying to build an image database.
Use a normal hash algorithm like SHA-256 plus maybe some metadata (file or image size) to find matches
Resize the image down to 4x4 or even 2x2. Use the pixel RGB values as "hash".
The first approach is to reduce the image to a number. You can then put the number in a look up table. When searching for the image, apply the same hashing algorithm to the image you're looking for. Use the new number to look in the table. If it's there, you have a match.
Note: In all cases, hashing can produce the same number for different pictures. So you have to compare all the pixels of two pictures to make sure it's really an exact match. That's why it sometimes helps to add information like the picture size (in pixels, not file size in bytes).
The second approach allows you to find pictures which are very similar to the eye but in fact slightly different. Imagine cropping off a single pixel column on the left or tilting the image by 0.01°. To you, the image will be the same, but to a computer, they will be totally different. The second approach tries to average such small changes out. The cost here is that you will get more collisions, especially for B&W pictures.
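A minimal sketch of both approaches in Python, assuming the images can be read with OpenCV; the file names below are placeholders:

    import hashlib
    import cv2

    def exact_hash(path):
        # Approach 1: hash the raw file bytes; identical files map to the same key.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def tiny_hash(path, size=4):
        # Approach 2: shrink to a tiny thumbnail so small crops or tilts average out.
        img = cv2.imread(path)
        small = cv2.resize(img, (size, size), interpolation=cv2.INTER_AREA)
        return tuple(small.flatten())

    # Build a lookup table once, then probe it with the query image.
    table = {}
    for p in ["a.png", "b.png", "c.png"]:   # placeholder file names
        table.setdefault(exact_hash(p), []).append(p)

    candidates = table.get(exact_hash("query.png"), [])  # placeholder query image
    # As noted above, compare candidates pixel by pixel to confirm a true exact match.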
Finding exact image matches using hash functions can be done with the undouble library (Disclaimer: I am also the author). It works using a multi-step process of pre-processing the images (grayscaling, normalizing, and scaling), computing the image hash, and grouping the images based on a threshold value.
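A rough sketch of the undouble workflow as described in the project's documentation; I have not verified it against the current release, so treat the method names and parameters as assumptions and check the README:

    from undouble import Undouble

    # Perceptual hashing ('phash'); the library also offers other hash methods.
    model = Undouble(method="phash", hash_size=8)
    model.import_data("path/to/images/")   # placeholder directory
    model.compute_hash()
    model.group(threshold=0)               # threshold=0 keeps only identical hashes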

Multi-colored edges in Python

I'm testing an algorithm that finds a shortest path between two given vertices in a graph and returns a list of vertices after each turn (it actually gives three paths: one is the shortest path in the graph, and the other two are extra paths that are also important for us and are used for further shortest-path calculations). On each turn the weights of the graph's edges change somehow, so every turn we get a new triple of lists (paths). I would like to visualize how these paths evolve by drawing the graph (which is actually a grid representing a city, e.g. New York), with each kind of path drawn in a certain colour, so that on each turn there is a grid with three coloured paths. Once more: on every turn the paths will be different, so the picture will change. What is the best way to represent this?

One more question: sometimes an edge will belong to two or even three of these paths, and I'd like to show that, so it would be nice if there were a way to colour such an edge with two or three colours at once. Ideally it would look like two or three thinner edges laid alongside each other, but I could only find examples where several lines of different colours are drawn one after another along the path. Is there a way to do it the first way?
I'm sorry for being discursive but I've never dealt with graphics in Python and I desperately need help. Thanks!
If you want to show the image in a GUI, it depends on the GUI toolkit that you want to use. In the Tkinter toolkit that comes with most Python distributions you could use the Canvas widget. There are several tutorials online [1], [2]. Most GUI toolkits have a similar functionality, but they can have different names.
If you want to save an image to a file, there are many graphics libraries you could use, depending on what kind of format you want to save it to.
For example the Python bindings to the Cairo library can save a picture as PDF or SVG vector formats.
The Pillow library on the other hand supports many bitmap formats.
There are many others; matplotlib, agg, gd are just some examples.
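As a minimal sketch of the "parallel thinner edges" idea with the Tkinter Canvas mentioned above: the same edge is drawn once per path, each copy shifted by a small perpendicular offset so the colours sit side by side. The coordinates and colours are placeholders:

    import tkinter as tk

    root = tk.Tk()
    canvas = tk.Canvas(root, width=300, height=300, bg="white")
    canvas.pack()

    # One horizontal edge of the grid that is shared by three paths.
    x1, y1, x2, y2 = 50, 150, 250, 150
    colors = ["red", "green", "blue"]      # one colour per path using this edge

    for i, color in enumerate(colors):
        # Shift each copy a few pixels perpendicular to the edge so the three
        # paths appear as thin parallel lines along the same edge.
        offset = (i - 1) * 4               # -4, 0, +4 pixels
        canvas.create_line(x1, y1 + offset, x2, y2 + offset, fill=color, width=3)

    root.mainloop()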

Image recognition - finding similar images [duplicate]

This question already has answers here:
Checking images for similarity with OpenCV
Setup is as follows:
Database with paintings
robot that takes shots of paintings
I want to compare the shots the robot made with the images in our database.
Problem is that the shots won't be perfect. The painting will most likely be IN the shot, but the shot will also contain the wall and other objects, and the incidence of light will also cause problems. Therefore I want to find images in the database that are similar to a certain degree.
I've been reading up on PIL, scipy, openCV, machine learning.
Is there anything you guys can recommend for this problem?
Thanks in advance.
Edit: I'm aware of the solutions presented in other posts, such as comparing histograms, template matching, and feature matching. Comparing histograms is not going to cut it in my application, and neither will feature matching, as it is too much of a workload. Template matching might, but the angles at which the shots will be taken won't be anywhere near perfect.
You could use the SSIM index. There is a Python implementation in the scikit-image package.
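A small sketch of that comparison; the file names are placeholders, and note that in older scikit-image versions the function is skimage.measure.compare_ssim instead:

    import cv2
    from skimage.metrics import structural_similarity as ssim

    # SSIM needs images of the same size; grayscale keeps the example simple.
    shot = cv2.imread("robot_shot.png", cv2.IMREAD_GRAYSCALE)       # placeholder
    painting = cv2.imread("db_painting.png", cv2.IMREAD_GRAYSCALE)  # placeholder
    shot = cv2.resize(shot, (painting.shape[1], painting.shape[0]))

    score = ssim(shot, painting)   # 1.0 means identical, lower means less similar
    print(f"SSIM: {score:.3f}")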
Your problem sounds more like an application of feature detection and matching. Given a shot captured by the robot, you extract features from it, and compare them against the list of features you have in your database (each image having a lot of features). You might want to look at SURF, or some other descriptor that does your job. OpenCV has very well documented implementations for many variants. Feature matching would be the last stage where you actually make a decision about a match or a non-match.
Note that all of this is really heavy on processing, so forget real-time.
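A sketch of that pipeline with OpenCV; it uses ORB rather than SURF, since SURF is patented and only ships in the contrib build, while ORB comes with a stock install. File names are placeholders:

    import cv2

    shot = cv2.imread("robot_shot.png", cv2.IMREAD_GRAYSCALE)       # placeholder
    painting = cv2.imread("db_painting.png", cv2.IMREAD_GRAYSCALE)  # placeholder

    # Detect keypoints and compute binary descriptors for both images.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(shot, None)
    kp2, des2 = orb.detectAndCompute(painting, None)

    # Brute-force matching with Lowe's ratio test to drop ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

    # The more matches survive, the more likely the shot shows this painting.
    print(f"{len(good)} good matches against this database image")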

Cell Counting: Selective; Only count cells positive for all stainings

If I take multiple images in different fluorescent channels (after staining the cells with some antibody/maker), how can I automatically quantitate the fraction of cells positive for each marker? Has anyone done something like this in Python?
I can already use Fiji (ImageJ) to count the cells containing only one staining type, but I can't make it run a selective count on merged images which contain two staining types. Since Fiji interacts well with Python, I was thinking of writing a script that looks at each image containing only one staining type, obtains the x-y coordinates of the cells in each image, and checks for matches between them. I am not sure if that's a good idea though, and I was wondering if anyone has done something similar or has a more efficient way of getting the task done.
Thanks for your help!
You could use contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) on a thresholded image to find the almost round-shaped cells and count them with len(contours).
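A rough sketch of that idea: threshold one channel's image, find the contours, and count the ones large enough to be cells. The file name and area cutoff are placeholder assumptions:

    import cv2

    # Load one fluorescent channel as grayscale (placeholder file name).
    img = cv2.imread("channel_red.png", cv2.IMREAD_GRAYSCALE)

    # Otsu's method picks a threshold separating stained cells from background.
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # OpenCV 4.x returns (contours, hierarchy); 3.x returns (image, contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Keep only contours big enough to be cells (the cutoff is an assumption).
    cells = [c for c in contours if cv2.contourArea(c) > 50]
    print(f"{len(cells)} cells detected in this channel")

    # For the double-positive count, repeat this per channel, take each contour's
    # centroid via cv2.moments, and match centroids across channels by distance.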

Using OpenCV and Python to stitch puzzle images together

I am trying to use OpenCV and Python to stitch together several hundred puzzle piece images into one large, complete image. All of the images are digitized and are in a PNG format. The pieces were originally from a scan and extracted into individual pieces, so they have transparent backgrounds and are each a single piece. What is the process of comparing them and finding their matches using OpenCV?
The plan is that the images and puzzle pieces will always be different, and this Python program will take a scan of all the pieces laid out, crop out the pieces (which it does now), and reassemble the puzzle.
If this is a small fun project that you are trying to do, you can compare image histograms or use SIFT/SURF. I don't think there is an implementation of SIFT or SURF in the Python API; if you can find a compatible equivalent, you can use that.
How well comparing images works is very much dependent on the dataset that you have. Some techniques work better than others.
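If you try the histogram route first, a minimal sketch with OpenCV could look like this; the file names are placeholders, and since the pieces have transparent backgrounds the alpha channel is used as a mask so the background does not dominate the histogram:

    import cv2

    def hsv_hist(path):
        # Hue/Saturation histogram; ignore fully transparent pixels via the alpha mask.
        img = cv2.imread(path, cv2.IMREAD_UNCHANGED)           # placeholder path
        mask = img[:, :, 3] if img.ndim == 3 and img.shape[2] == 4 else None
        hsv = cv2.cvtColor(img[:, :, :3], cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], mask, [30, 32], [0, 180, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    h1 = hsv_hist("piece_001.png")
    h2 = hsv_hist("piece_002.png")

    # Correlation: 1.0 means identical colour distributions, lower means less alike.
    score = cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)
    print(f"histogram similarity: {score:.3f}")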
