I have an image of an analog voltmeter:
I'd like to detect the needle and then find its direction/angle.
I'm using Python and OpenCV and have tried contours, but I haven't managed to solve it so far:
#I'll insert my code attempts here soon!
A Hough transform should work well for this kind of image.
The needle is the longest and strongest straight object, and a proper choice of threshold could give a good result.
You can also remove some thin segments of the scale and make the needle thinner with a morphological erosion.
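Something along these lines might serve as a rough starting point for the Hough approach (an untested sketch; the filename and every threshold value are placeholders to tune for your image):

```python
import cv2
import numpy as np

# Load the meter image and binarize the dark markings (values are placeholders).
img = cv2.imread("voltmeter.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
_, bw = cv2.threshold(img, 100, 255, cv2.THRESH_BINARY_INV)

# Thin the markings so the needle dominates the remaining straight structures.
bw = cv2.erode(bw, np.ones((3, 3), np.uint8))

# Probabilistic Hough transform: the needle should appear as the longest segment.
lines = cv2.HoughLinesP(bw, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)

if lines is not None:
    # Pick the longest detected segment and report its angle.
    x1, y1, x2, y2 = max(lines[:, 0, :],
                         key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]))
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
    print("needle angle (degrees):", angle)
```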
Here is what you can do:
Simple thresholding to detect all the dark patterns
Erase all the small patterns
Erase the patterns that touch the border.
Geodesic diameter computation in order to detect the longest pattern.
PCA in order to determine the orientation.
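A minimal sketch of these steps (I substitute "largest remaining connected component" for a true geodesic-diameter computation, and the filename, threshold value and area cutoff are assumptions to adapt):

```python
import cv2
import numpy as np

img = cv2.imread("voltmeter.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical filename

# 1. Simple thresholding to keep the dark patterns.
_, bw = cv2.threshold(img, 100, 255, cv2.THRESH_BINARY_INV)

# 2./3. Drop small components and components touching the image border.
n, labels, stats, _ = cv2.connectedComponentsWithStats(bw)
h, w = bw.shape
best, best_area = None, 0
for i in range(1, n):
    x, y, cw, ch, area = stats[i]
    touches_border = x == 0 or y == 0 or x + cw == w or y + ch == h
    if area < 200 or touches_border:
        continue
    # 4. Stand-in for the geodesic diameter: keep the largest remaining component.
    if area > best_area:
        best, best_area = i, area

# 5. PCA on the component's pixel coordinates gives the needle orientation.
if best is not None:
    ys, xs = np.where(labels == best)
    pts = np.column_stack([xs, ys]).astype(np.float64)
    pts -= pts.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(pts.T))
    major = eigvecs[:, np.argmax(eigvals)]        # principal axis of the needle
    print("needle orientation (degrees):", np.degrees(np.arctan2(major[1], major[0])))
```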
Related
I have the following JPG image. I want to find the edges where the white page meets the black background, so I can rotate the contents a few degrees clockwise. My aim is to straighten the text for use with Tesseract OCR conversion. I don't see the need to rotate the individual text blocks as I have seen in similar examples.
In the Canny Edge Detection docs, the third argument (200 in e.g. edges = cv.Canny(img,100,200)) is maxVal, and pixels above it are said to be 'sure to be edges'. Is there any way to determine these (max/min) values ahead of any trial-and-error approach?
I have used code examples which utilize the Python cv2 module, but the edge detection in them is set up for simpler applications.
Is there any approach I can use to take the text out of the equation, for example only detecting edge lines greater than a specified length?
Any suggestions would be appreciated.
Below is an example of edge detection (above image, same min/max values). The outer edge of the page is clearly defined. The image is high-contrast black and white with even lighting, so I can't see a need for an adaptive threshold; simple global thresholding is working. It's just a question of what ratio to use.
I don't have the answer to this yet, but to add: I now have the contours of the above doc.
I used the find-contours tutorial with some customization of the file loading. Note: removing the words gives a thinner/cleaner outline.
Consider Otsu thresholding. Its chief virtue is that it adapts the threshold to the illumination within the image. In your case, the blank margins might be the saving grace.
Consider working on a series of 2x-reduced-resolution images, where each new pixel is the min() (or even the max()!) of the original four pixels. These reduced images might help you focus on the features that matter for your use case.
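A small sketch of that 2x min-reduction (a grayscale, 2-D input is assumed):

```python
import numpy as np

def reduce_min(img):
    # Halve the resolution, each new pixel being the min() of a 2x2 block
    # (swap .min for .max to get the other variant mentioned above).
    h, w = img.shape[:2]
    img = img[:h - h % 2, :w - w % 2]          # crop so both dimensions are even
    h2, w2 = img.shape[0] // 2, img.shape[1] // 2
    return img.reshape(h2, 2, w2, 2).min(axis=(1, 3))

# Build a short pyramid of progressively reduced grayscale images:
# pyramid = [gray]
# for _ in range(3):
#     pyramid.append(reduce_min(pyramid[-1]))
```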
The usual way to deskew scanned text is to binarize and then keep changing theta until the "sum of pixels across a raster row" is zero, or small, for the rows between text lines. In particular, with few descenders and decent inter-line spacing, we will see "lots" of pixels on each line of text and "near zero" between text lines when theta matches the original printing orientation. That lets us recover (1) pixels per line and (2) inter-line spacing, assuming we've found a near-optimal theta.
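One common way to score that idea in code is to rotate the binarized page over a range of angles and keep the angle whose horizontal projection profile is "peakiest" (a sketch; the angle range, step and filename are assumptions):

```python
import cv2
import numpy as np

def deskew_angle(bw, angles=np.arange(-5, 5.1, 0.25)):
    # `bw` is binary with text pixels = 255. At the correct angle, rows alternate
    # between many text pixels and near-zero gaps, so the variance of the
    # row-sum profile is maximal there.
    h, w = bw.shape
    center = (w // 2, h // 2)
    best_angle, best_score = 0.0, -1.0
    for theta in angles:
        M = cv2.getRotationMatrix2D(center, float(theta), 1.0)
        rotated = cv2.warpAffine(bw, M, (w, h), flags=cv2.INTER_NEAREST)
        profile = rotated.sum(axis=1)          # sum of pixels across each raster row
        score = profile.var()
        if score > best_score:
            best_angle, best_score = float(theta), score
    return best_angle

# gray = cv2.imread("page.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical filename
# _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# print(deskew_angle(bw))
```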
In your particular case, focusing on the ... leader dots seems a promising approach to finding the globally optimal deskew correction angle. Discarding large rectangles of pixels in the left and right regions of the image could actually reduce noise and enhance the accuracy of such an approach.
I need to delete the noise from this image. My problem is that I need a neat contour without all the lines like in this image.
Do you have any suggestions how to do that using python?
Looking at your example images, I suppose you are looking for an image-processing algorithm that finds the edges of your image (in your case, the border lines of the ground plan).
Have a look at the Canny edge detection algorithm, which might be well suited for this task. A tutorial with an example implementation in Python can be found here.
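A minimal example of that (the filename and the two hysteresis thresholds are placeholders to adjust for your scan):

```python
import cv2

img = cv2.imread("floor_plan.png", cv2.IMREAD_GRAYSCALE)   # hypothetical filename
blurred = cv2.GaussianBlur(img, (5, 5), 0)   # suppress thin noise lines before edge detection
edges = cv2.Canny(blurred, 50, 150)          # lower/upper hysteresis thresholds
cv2.imwrite("edges.png", edges)
```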
I am working on a form extraction module which detects text in specific segments of an image. So far I am able to remove the text and retain only the bounding box in the image.
My next step was to extract each box in the image. To do that I am trying to detect corners in the image, but here is where I am stuck. I tried template matching. This was the result. Although the results look promising, the drawback is that this method is very time-consuming, and a few corners are still not detected.
I also tried Shi-Tomasi Corner Detector after dilating the image.
What would be the best approach to solve this problem?
I suggest you detect the lines instead, e.g. using Hough transform, followed by edge chaining, followed by robust line fitting on each chain.
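A rough sketch of that direction (no proper edge chaining here; segments are simply split into near-horizontal and near-vertical, and corner candidates are taken as their pairwise intersections; the filename and parameters are assumptions):

```python
import cv2
import numpy as np

# Hypothetical input: the form image with text removed, only box outlines left.
img = cv2.imread("form_boxes.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

# Probabilistic Hough transform gives the box sides as line segments.
segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                       minLineLength=40, maxLineGap=5)

horiz, vert = [], []
for x1, y1, x2, y2 in (segs[:, 0, :] if segs is not None else []):
    (horiz if abs(x2 - x1) > abs(y2 - y1) else vert).append((x1, y1, x2, y2))

def intersect(h, v):
    # Intersection of the two segments extended to full lines.
    x1, y1, x2, y2 = map(float, h)
    x3, y3, x4, y4 = map(float, v)
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return px, py

# Corner candidates; in practice keep only intersections close to both segments
# and cluster nearby candidates into single corners.
corners = [p for h in horiz for v in vert if (p := intersect(h, v)) is not None]
```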
I am trying to extract the tiles (letters) placed on a Scrabble board. The goal is to identify/read all possible words present on the board.
An example image -
Ideally, I would like to find the four corners of the Scrabble board and apply a perspective transform for further processing.
After Perspective transform -
The algorithm that I am using is as follows -
Apply adaptive thresholding to the grayscale image of the Scrabble board.
Dilate / Close the image, find the largest contour in the given image, then find the convex hull, and completely fill the area enclosed by the convex hull.
Find the boundary points (contour) of the resultant image, then apply contour approximation to get the corner points, then apply a perspective transform (a condensed code sketch of these steps follows).
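For reference, a condensed sketch of the pipeline above (OpenCV 4.x return signatures; the filename, block size and output size are placeholders, and the four corner points may need re-ordering before the transform):

```python
import cv2
import numpy as np

img = cv2.imread("scrabble.jpg")                       # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1. Adaptive thresholding of the grayscale board image.
bw = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                           cv2.THRESH_BINARY_INV, 31, 7)

# 2. Close the gaps, take the largest contour and fill its convex hull.
bw = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hull = cv2.convexHull(max(contours, key=cv2.contourArea))
mask = np.zeros_like(gray)
cv2.drawContours(mask, [hull], -1, 255, cv2.FILLED)

# 3. Contour of the filled mask, approximated down to (ideally) four corners,
#    then the perspective transform.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
board = max(contours, key=cv2.contourArea)
approx = cv2.approxPolyDP(board, 0.02 * cv2.arcLength(board, True), True)

if len(approx) == 4:
    src = approx.reshape(4, 2).astype(np.float32)      # may need re-ordering
    size = 800
    dst = np.float32([[0, 0], [size, 0], [size, size], [0, size]])
    warped = cv2.warpPerspective(img, cv2.getPerspectiveTransform(src, dst), (size, size))
```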
Corner Points found -
This approach works with images like these. But, as you can see, many square boards have a base, which is curved at the top and the bottom. Sometimes, the base is a big circular board. And with these images my approach fails. Example images and outputs -
Board with Circular base:
Points found using above approach:
I can post more such problematic images, but this image should give you an idea about the problem that I am dealing with. My question is -
How do I find the rectangular board when a circular board is also present in the image?
Some points I would like to state -
I tried using Hough lines to detect the lines in the image, find the largest vertical line(s), and then find their intersections to detect the corner points. Unfortunately, because of the tiles, all the lines seem to be distorted/disconnected, and hence my attempts have failed.
I have also tried to apply contour approximation to all the contours found in the image (I was assuming that the large rectangle, too, would be a contour), but that approach failed as well.
I have implemented the solution in OpenCV-Python. Since the approach is what matters here, and the question was becoming a tad too long, I didn't post the relevant code.
I am willing to share more such problematic images as well, if it is required.
Thank you!
EDIT1
@Silencer's answer has been mighty helpful to me for identifying letters in the image, but I want to accurately find the placement of the words in the image. Hence, I feel identifying the rows and columns is necessary, and I can do that only when a perspective transform is applied to the board.
I wrote an answer on MSER text detection:
Trying to Plot OpenCV's MSER regions using matplotlib
The code generates the following results on your images.
You can give it a try.
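A minimal version of that MSER detection looks roughly like this (filenames are placeholders):

```python
import cv2

img = cv2.imread("scrabble.jpg")                 # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()
regions, _ = mser.detectRegions(gray)

# Draw bounding boxes of the detected stable regions (letter candidates).
vis = img.copy()
for pts in regions:
    x, y, w, h = cv2.boundingRect(pts.reshape(-1, 1, 2))
    cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("mser_regions.png", vis)
```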
I think @Silencer has already given a quite promising solution.
But regarding the perspective transform: as you mentioned, you have already tried Hough lines to find the largest rectangle, and it fails because of the tiles present.
Given that you have a large image dataset, maybe more than 1000 images, you could also give a deep-learning-based approach a shot: train a model with the images as input and the corresponding rectangle boundary point coordinates as output.
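As an illustration only, a very small Keras model for that kind of corner regression could look like this (the architecture, input size and normalization are all assumptions; the labels would be the 8 corner coordinates, normalized to [0, 1]):

```python
import tensorflow as tf

# Tiny CNN that maps a resized board image to 8 numbers (x, y for each corner).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(256, 256, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(8),                      # 4 corner points
])
model.compile(optimizer="adam", loss="mse")

# model.fit(images, corner_coords, epochs=..., validation_split=0.1)
# where `images` is (N, 256, 256, 3) and `corner_coords` is (N, 8).
```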
I am doing a dice value recognition hobby project that I want to run on a Raspberry Pi. For now, I am just learning OpenCV, as that seems like the hardest thing for me. I have gotten this far, where I have dilated, eroded, and Canny-filtered the image to isolate the dice. This has given me a hierarchy of contours. The image shows the bounding rectangles for the parent contours:
My question is: how would I proceed to count the pips? Is it better to do some template matching for face values, or should I mathematically test if a pip is in a valid position within the bounding box?
There could be multiple ways to do it:
Use hole filling and then morphological operators to filter circles.
A simpler approach would be to use white pixel density (the percentage of white pixels); a five would have a higher white pixel density (a sketch of this idea follows the references below).
Use image moments (mathematical properties which represent the shape and structure of an image) to train a neural network on the different kinds of dice faces.
Reference:
Morphology
http://blogs.mathworks.com/pick/2008/05/23/detecting-circles-in-an-image/
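A rough sketch of the white-pixel-density idea from the second point (the calibration fractions are made-up placeholders you would measure once on known faces; light pips on a darker face are assumed, otherwise invert the threshold):

```python
import cv2
import numpy as np

def value_from_density(die_roi_gray, calibration):
    # Fraction of white (pip) pixels in a cropped die face, mapped to the
    # face value with the nearest calibrated fraction.
    _, bw = cv2.threshold(die_roi_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    density = np.count_nonzero(bw) / bw.size
    return min(calibration, key=lambda v: abs(calibration[v] - density))

# Hypothetical calibration measured from example faces:
# calibration = {1: 0.03, 2: 0.06, 3: 0.09, 4: 0.12, 5: 0.15, 6: 0.18}
# value = value_from_density(roi, calibration)
```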
As Sivam Kalra said, there are many valid approaches.
I would go with template matching, as it should be robust and relatively easy to implement.
Using your green regions in the Canny image, copy each found die face from the original grayscale image into a smaller search image. The search image should be slightly larger than a die face, and larger than your 6 pattern images.
Optionally normalize the search image.
Use cvMatchTemplate (cv2.matchTemplate in Python) with each of the 6 possible dice patterns (I recommend the CV_TM_SQDIFF_NORMED method, cv2.TM_SQDIFF_NORMED in Python, but test which works best).
Find and store the global minimum in the result image for each of the 6 matches.
Rotate the search image in ~2° steps from 0° to 90°, and repeat the template match for each step.
The dice pattern with the lowest minimum over all steps is the correct one (see the sketch below).
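A sketch of that loop (the patterns and the search image are assumed to be same-dtype grayscale images, with each pattern smaller than the search image):

```python
import cv2
import numpy as np

def best_face(search_img, patterns):
    # `patterns` is a list of the 6 face templates (values 1..6). The face with the
    # lowest normalized squared-difference minimum over all rotations wins.
    h, w = search_img.shape
    center = (w // 2, h // 2)
    best_value, best_score = None, np.inf
    for value, tpl in enumerate(patterns, start=1):
        for theta in np.arange(0, 90, 2):                     # ~2 degree steps
            M = cv2.getRotationMatrix2D(center, float(theta), 1.0)
            rotated = cv2.warpAffine(search_img, M, (w, h))
            result = cv2.matchTemplate(rotated, tpl, cv2.TM_SQDIFF_NORMED)
            score = result.min()                              # global minimum of this match
            if score < best_score:
                best_value, best_score = value, score
    return best_value
```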
Contour hierarchy could be a good and very easy option, but you need a perpendicular (top-down) view.
So you can do it with contours, fitting circles with some threshold.
(Sorry about my appalling English.)
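A sketch of that contour-hierarchy idea (assumes a top-down view and dice faces that threshold to white with dark pips, so each pip becomes a hole, i.e. a child contour; the filename and area cutoff are placeholders):

```python
import cv2

img = cv2.imread("dice.jpg", cv2.IMREAD_GRAYSCALE)        # hypothetical filename
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, hierarchy = cv2.findContours(bw, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)

for i, cnt in enumerate(contours):
    # Outer contours (parent == -1) that are large enough to be a die face.
    if hierarchy[0][i][3] == -1 and cv2.contourArea(cnt) > 500:
        pips = sum(1 for j in range(len(contours)) if hierarchy[0][j][3] == i)
        print("die", i, "shows", pips)
```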