I am trying to detect electrical symbols in electrical schematics.
I think this could be done in two ways:
the classical way with OpenCV: I tried to recognise the shapes with OpenCV and Python, but some symbols are too complex;
the deep learning way: I tried Mask R-CNN on a handmade dataset of symbols, but nothing was really successful.
Here is a really simple example of what I would like to do:
I think it would be easy to build a dataset of symbols, but every symbol would have the same form and the context of the image would not be represented.
How do you think I could handle this problem?
QATM: Quality-Aware Template Matching for Deep Learning might be what you are looking for.
Original paper: https://arxiv.org/abs/1903.07254
And the following GitHub repository contains an example with an electrical schematic:
https://github.com/kamata1729/QATM_pytorch
Since the components of an electrical schematic are always the same, I would first try template matching with OpenCV. I guess you will have to cut out the components and make rotated copies to find all of them. It would also help to have higher-resolution images.
The next idea would be a convolution whose kernel is basically the component you expect to find in the image.
Lastly, an idea which will definitely give you more certain results, but is pretty much overkill, is to use Google's image recognition, which you can use with Python and train on your own images:
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_template_matching/py_template_matching.html
Related
I want to identify three different objects in a satellite wind image. The problem is that the three of them are somewhat similar. I tried to identify them using template matching, but it didn't work. The three objects are as follows.
Here the orientation of the object is not important, but the type of head on the line is important. Can you suggest a way to proceed?
Assuming your image consists of only pure black and pure white pixels,
you can find contours and take the bounding rectangle or minAreaRect of each of them.
https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=minarearect#minarearect
Then iterate over the contours, treating their rectangles as separate images, and classify those images. You may use template matching there too.
Good luck!
Have you thought about machine learning?
For example, a small CNN used for digit recognition could be "retrained" on a small set of your images; Keras also has a data augmentation feature to help train a robust classifier.
There is a very good blog post by Yash Katariya at https://yashk2810.github.io/Applying-Convolutional-Neural-Network-on-the-MNIST-dataset/, in which the MNIST dataset is loaded and the network is trained; it goes through all of the stages you would need in order to use ML for your problem.
You mention you've tried template matching; you also mention that rotation is not important, which implies to me that an object could be rotated, and that would cause template matching to fail.
You could look into LBP (Local Binary Patterns), or maybe OpenCV's Haar classifier (though it is sensitive to rotation).
Beyond the items I have suggested, there is a great tutorial at https://gogul09.github.io/software/image-classification-python which uses features and machine learning; you may benefit from looking at it to apply to this problem.
While not directly answering your question, I hope I have given you a set of tools that will solve it with some time invested and some reading.
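As a rough illustration of the augmentation idea mentioned above, here is a plain-numpy stand-in (Keras's own augmentation utilities do this far more thoroughly, with zooms, elastic shifts, and so on):

```python
import numpy as np

def augment(symbol, n_copies=8, max_shift=3, rng=None):
    """Tiny stand-in for data augmentation: generate shifted and rotated
    variants of one symbol image so a small training set goes further."""
    if rng is None:
        rng = np.random.default_rng(0)
    variants = []
    for _ in range(n_copies):
        img = np.rot90(symbol, k=int(rng.integers(0, 4)))               # random 90-degree rotation
        img = np.roll(img, int(rng.integers(-max_shift, max_shift + 1)), axis=0)
        img = np.roll(img, int(rng.integers(-max_shift, max_shift + 1)), axis=1)
        variants.append(img)
    return variants

# one hand-drawn "symbol" becomes eight training samples
symbol = np.zeros((28, 28), dtype=np.uint8)
symbol[10:18, 8:20] = 255
variants = augment(symbol)
```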
This is a fairly straightforward question, but I am new to the field. Using this tutorial I have a great way of detecting certain patterns or features. However, the images I'm testing are large, and often the feature I'm looking for occupies only a small fraction of the image. When I run it on the entire picture the classification is bad, though when zoomed in and cropped the classification is good.
I've considered writing a script that breaks an image into many smaller images and runs the test on all of them (time isn't a huge concern). However, this still seems inefficient and less than ideal. I'm wondering about suggestions for the best, but also easiest to implement, solution.
I'm using Python.
This may seem to be a simple question, which it is, but the answer is not so simple. Localisation is a difficult task and requires much more leg work than classifying an entire image. There are a number of different tools and models that people have experimented with. Some models include R-CNN which looks at many regions in a manner not too dissimilar to what you suggested. Alternatively you could look at a model such as YOLO or TensorBox.
There is no one answer to this, and this gets asked a lot! For example: Does Convolutional Neural Network possess localization abilities on images?
The term you want to be looking for in research papers is "Localization". If you are looking for a dirty solution (that's not time sensitive) then sliding windows is definitely a first step. I hope that this gets you going in your project and you can progress from there.
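The sliding-window baseline mentioned above can be sketched like this (the window size and stride below are arbitrary placeholders to tune for your data):

```python
import numpy as np

def sliding_windows(image, window=(64, 64), stride=32):
    """Yield (x, y, crop) patches covering the image; run the classifier on
    each crop and keep high-scoring locations. Slow but a solid first baseline."""
    h, w = image.shape[:2]
    for y in range(0, h - window[1] + 1, stride):
        for x in range(0, w - window[0] + 1, stride):
            yield x, y, image[y:y + window[1], x:x + window[0]]

# toy image just to show the patch grid it produces
image = np.zeros((128, 256), dtype=np.uint8)
patches = list(sliding_windows(image))
```

A smaller stride finds objects more reliably at the cost of many more classifier calls; running the same loop over several image scales gives a basic image pyramid.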
Suppose I have an image of a car taken with my mobile camera, and another image of the same car downloaded from the internet.
(For simplicity please assume that both the images contain the same side view projection of the same car.)
How can I detect that both the images are representing the same object i.e. the car, in this case, using OpenCV?
I've tried template matching, feature matching (ORB), etc., but those are not providing satisfactory results.
SIFT feature matching might produce better results than ORB. However, the main problem here is that you have only one image of each type (one from the mobile camera and one from the Internet). If you have a large number of images of this car model, then you can train a machine learning system on them. Later you can submit one image of the car to that system, and there is a much higher chance of it recognizing the car.
From a machine learning point of view, using only one image as the master and matching another with it is analogous to teaching a child the letter "A" using only one handwritten letter "A", and expecting him/her to recognize any handwritten letter "A" written by anyone.
Think about how you can mathematically describe the car's features so that every car is different. Maybe every car has a different size of wheels? Maybe the distance from the door handle to bottom of the side window is a unique characteristic for every car? Maybe every car's proportion of front side window's to rear side window's width is an individual feature of that car?
You probably can't answer yes with 100% confidence to any of these questions. But what you can do is combine them into a multidimensional feature vector and perform classification.
Now, the crucial part is that since you're doing manual feature description, you need to do excellent work and test every step of the way. For example, you need to design features that are scale and perspective invariant. Here, I'd recommend reading about how face detection was designed to fulfill that requirement.
Will machine learning be a better solution? That depends greatly on two things: firstly, what kind of data you are planning to throw at the algorithm; secondly, how well you can control the process.
What most people don't realize is that machine learning is not some magical solution to every problem. It is a tool, and like every tool it needs proper handling to produce results. If I were to give you advice, I'd say you will not handle it very well yet.
My suggestion: get acquainted with basic feature extraction and general image processing algorithms. Edge detection (Canny, Sobel), contour finding, shape description, hough transform, morphological operations, masking, etc. Without those at your fingertips, I'd say in that particular case, even Machine Learning will not save you.
I'm sorry: there is no shortcut here. You need to do your homework in order to make that one work. But don't let that scare you. It's a great project. Good luck!
I want to find the closest match for an image. For example, I have stored images of a car and a bike. Given an image of a bike, it should compare it to the stored car and bike and report whichever is closest based on shape.
I wanted to know whether it is possible to find the contour and match it against the closest one.
I tried using a Haar cascade in OpenCV, training on car and bike images,
but the results were not correct, maybe because of the small amount of training data. (I don't want to use this.)
You're going to have an uphill struggle using standard techniques like contours for this.
Edit >> So, using your bike example, what will happen here?
Neural networks and other machine learning algorithms are your best bet instead,
specifically of the 'multiclass classification' variety. With training (time), they stand a better chance of sorting your cars from your bicycles.
As for code examples, have a look here. It's not an image-based example, but you need to start somewhere :)
Also helpful (and tested working by myself) is the DLIB library:
With only 8 training images of a stop sign, its internal structure formed this representation:
Pretty nifty.
There is no reason you can't use your contour results as inputs to an ML classifier, as long as they somehow increase the signal-to-noise ratio (i.e., they're somehow more 'bikey' than not).
In my opinion, the following methods can be applied and tested:
GF-HOG
Deformable Parts Model
Neural Networks
A combination would be an awesome idea, but GF-HOG will add overhead.
Read up on it; it's pretty cool tech, and I think it will answer your query.
http://personal.ee.surrey.ac.uk/Personal/R.Hu/ICIP.html
As you may have heard, there is an online font recognition service called WhatTheFont.
I'm curious about the tech behind this tool. I think we can basically separate it into two parts:
Generate images from font files of various formats; see http://www.fileinfo.com/filetypes/font for a list of font file extensions.
Compare the submitted image with all generated images.
I'd appreciate any advice or Python code on how to implement the two steps above.
As the OP states, there are two parts (and probably also a third part):
Use PIL to generate images from fonts.
Use an image analysis toolkit, like OpenCV (which has Python bindings), to compare different shapes. There are a variety of standard techniques for comparing objects to see whether they're similar. For example, scale-invariant moments work fairly well and are part of the OpenCV toolkit.
Most of the standard tools in #2 are designed to look for similar but not necessarily identical shapes, but for font comparison this might not be what you want, since the differences between fonts can lie in very fine details. For fine-detail analysis, try comparing the x and y profiles of a perimeter path around each letter, appropriately normalized, of course. (This, or a more mathematically complicated variant of it, has been used with good success in font analysis.)
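Step 1 (rendering glyphs from fonts with PIL) can be sketched as follows; the default bitmap font is a placeholder, and with a real font file you would call ImageFont.truetype instead:

```python
from PIL import Image, ImageDraw, ImageFont

def render_glyph(text, size=(64, 64), font=None):
    """Render a character to a grayscale image. With a real font file, pass
    font=ImageFont.truetype("SomeFont.ttf", 48) instead of the default."""
    if font is None:
        font = ImageFont.load_default()      # built-in bitmap font, no file needed
    img = Image.new("L", size, color=0)      # black background
    draw = ImageDraw.Draw(img)
    draw.text((8, 8), text, fill=255, font=font)
    return img

glyph = render_glyph("A")
```

Rendering each letter of each candidate font this way produces the reference images that step 2 compares against.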
I can't offer Python code, but here are two possible approaches.
"Eigen-characters." In face recognition, given a large training set of normalized facial images, you can use principal component analysis (PCA) to obtain a set of "eigenfaces" which, when the training faces are projected upon this subspace, exhibit the greatest variance. The "coordinates" of the input test faces with respect to the space of eigenfaces can be used as the feature vector for classification. The same thing can be done with textual characters, i.e., many versions of the character 'A'.
Dynamic Time Warping (DTW). This technique is sometimes used for handwritten character recognition. The idea is that the trajectory taken by the tip of a pencil (i.e., d/dx, d/dy) is similar for similar characters, and DTW absorbs some of the variation across instances of a single person's writing. Similarly, the outline of a character can be represented as a trajectory, which then becomes the feature vector for each font. I guess the DTW part is less necessary for font recognition, because a machine rather than a human creates the characters, but it may still be useful for disambiguating spatial ambiguities.
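The eigen-characters idea can be sketched with plain numpy PCA; the two toy "glyphs" below are made-up stand-ins for many rendered versions of real characters:

```python
import numpy as np

def eigencharacters(samples, k=2):
    """PCA on flattened glyph images: the top-k principal components are the
    'eigen-characters'; projecting a glyph onto them gives its feature vector."""
    X = np.stack([s.ravel().astype(float) for s in samples])
    mean = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:k]
    def project(img):
        return components @ (img.ravel().astype(float) - mean)
    return components, project

# toy training set: noisy variants of two 8x8 "glyphs"
rng = np.random.default_rng(0)
base_a = np.zeros((8, 8)); base_a[:, 2] = 1.0       # vertical stroke
base_b = np.zeros((8, 8)); base_b[4, :] = 1.0       # horizontal stroke
samples = [base_a + rng.normal(0, 0.05, (8, 8)) for _ in range(10)] + \
          [base_b + rng.normal(0, 0.05, (8, 8)) for _ in range(10)]

components, project = eigencharacters(samples, k=2)
feat_a = project(base_a)
feat_b = project(base_b)
```

The projected coordinates (`feat_a`, `feat_b`) are the low-dimensional feature vectors one would feed to a classifier.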
This question is a little old, so here goes an updated answer.
You should take a look into this paper DeepFont: Identify Your Font from An Image. Basically it's a neural network trained on tons of images. It was presented commercially in this video.
Unfortunately, there is no code available. However, there is an independent implementation available here. You'll need to train it yourself, since weights are not provided, but the code is really easy to follow. In addition to this, consider that this implementation is only for a few fonts.
There is also a link to the dataset and a repo to generate more data.
Hope it helps.