cv2.contourArea not working properly - python

What I'm trying to do is find big light-polluted areas in a photo of nighttime Earth. I converted the source photo to grayscale and then to a binary image with a threshold. cv2.findContours works properly, but when I try to get rid of small contours it only deletes some of them.
Source image
import cv2

image_orig = cv2.imread(r'C:\Users\pc\Desktop\middleeast.jpg')
image_gray = cv2.cvtColor(image_orig, cv2.COLOR_BGR2GRAY)
_, image_threshold = cv2.threshold(image_gray, 60, 255, cv2.THRESH_BINARY)
# OpenCV 3 returns (image, contours, hierarchy)
_, contours, _ = cv2.findContours(image_threshold, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

# draw and save all contours
image_contours = image_orig.copy()
cv2.drawContours(image_contours, contours, -1, (255, 255, 0), 1)
cv2.imshow('image_contours', image_contours)
cv2.imwrite(r'C:\Users\pc\Desktop\middleEastAllContours.jpg', image_contours)

# try to remove the small contours
for counter, contour in enumerate(contours):
    if cv2.contourArea(contour) < 250.0:
        contours.pop(counter)

image_big_contours = image_orig.copy()
cv2.drawContours(image_big_contours, contours, -1, (255, 255, 0), 1)
cv2.imshow('big contours', image_big_contours)
cv2.waitKey(0)
As you can see there still are plenty of small light polluted areas contoured. How can I get rid of them?
All contours comparison
Big contours comparison

I think the problem is in the pop inside the for loop: when you pop a contour, you skip the next one.
For example, if you pop contour number 10, the next contour becomes number 10, but you will skip it because the next iteration looks at contour 11.
I am not a Python expert (I don't really know how the two loop variables are used), but you could try doing counter = counter - 1 after the pop. Another option is to iterate the list backwards (start at the last element and finish at the first), or to build a new filtered list instead of popping, as sketched below.
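A minimal sketch of that filtering step, reusing the names from the question's code; building a new list avoids mutating the list while iterating over it:

big_contours = [c for c in contours if cv2.contourArea(c) >= 250.0]

image_big_contours = image_orig.copy()
cv2.drawContours(image_big_contours, big_contours, -1, (255, 255, 0), 1)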

Related

Skewing text - How to take advantage of existing edges

I have the following JPG image. I want to find the edges where the white page meets the black background, so I can rotate the contents a few degrees clockwise. My aim is to straighten the text for Tesseract OCR conversion. I don't see the need to rotate the individual text blocks, as I have seen done in similar examples.
In the Canny Edge Detection docs, the third argument (200 in edges = cv.Canny(img,100,200)) is maxVal and marks pixels that are 'sure to be edges'. Is there any way to determine these (max/min) values ahead of any trial-and-error approach?
I have used code examples that utilize the Python cv2 module, but their edge detection is set up for simpler applications.
Is there any approach I can use to take the text out of the equation? For example, only detecting edge lines greater than a specified length?
Any suggestions would be appreciated.
Below is an example of edge detection (the above image, same min/max values). The outer edge of the page is clearly defined. The image is high-contrast b/w with even lighting, so I can't see a need for an adaptive threshold; a simple global one is working. It's just a question of what ratio to use.
I don't have the answer to this yet, but to add: I now have the contours of the above doc.
I used the find-contours tutorial with some customization of the file loading. Note: removing the words gives a thinner/cleaner outline.
Consider Otsu thresholding. Its chief virtue is that it picks the threshold automatically from the image histogram, so it adapts to the overall illumination of each image. In your case, the blank margins might be the saving grace.
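A one-line sketch, assuming gray already holds the grayscale page (the threshold value passed in is ignored when the Otsu flag is set):

_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)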
Consider working on a series of 2x-reduced-resolution images, where each new pixel is the min() (or even the max()!) of the original four pixels. These reduced images might help you focus on the features that matter for your use case (a small sketch of the reduction follows).
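A small sketch of that 2x reduction, assuming a single-channel uint8 image (odd trailing rows/columns are simply dropped):

import numpy as np

def reduce_min(img):
    h, w = img.shape
    blocks = img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.min(axis=(1, 3))   # swap in .max(axis=(1, 3)) for the max variant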
The usual way to deskew scanned text is to binarize and then keep changing theta until the "sum of pixels across a raster line" drops to zero, or nearly so, between the lines of text. In particular, with few descenders and decent inter-line spacing, we will see "lots" of pixels on each line of text and "near zero" between text lines when theta matches the original printing orientation. That lets us recover (1) pixels per line and (2) inter-line spacing, assuming we've found a near-optimal theta.
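A minimal sketch of that projection-profile search, using the binary image from the Otsu step above and assuming text pixels are white (255); the angle range and step are arbitrary:

import cv2
import numpy as np

def deskew_angle(binary, angles=np.arange(-5.0, 5.0, 0.1)):
    h, w = binary.shape
    center = (w // 2, h // 2)
    best_angle, best_score = 0.0, -1.0
    for theta in angles:
        M = cv2.getRotationMatrix2D(center, theta, 1.0)
        rotated = cv2.warpAffine(binary, M, (w, h), flags=cv2.INTER_NEAREST)
        row_sums = rotated.sum(axis=1)
        # Text rows give high sums, inter-line gaps give near-zero sums;
        # the variance of this profile peaks at the true orientation.
        score = np.var(row_sums)
        if score > best_score:
            best_angle, best_score = theta, score
    return best_angle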
In your particular case, focusing on the ... leader dots seems a promising approach to finding the globally optimal deskew correction angle. Discarding large rectangles of pixels in the left and right regions of the image could actually reduce noise and enhance the accuracy of such an approach.
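Following on from the sketch above, that could be as simple as cropping to a central band so the leader dots dominate the profile (the 0.3/0.7 fractions are guesses):

h, w = binary.shape
band = binary[:, int(0.3 * w):int(0.7 * w)]
angle = deskew_angle(band)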

Extract Data from an Image with Python/OpenCV/Tesseract?

I'm trying to extract some contents from a cropped image. I tried pytesseract and OpenCV template matching, but the results are very poor. Template matching sometimes fails due to the poor quality of the icons, and Tesseract gives me a line of text with false characters.
I'm trying to grab the values like this:
0:26 83 1 1
Any thoughts or techniques?
A technique you could use would be to blur your image. From what it looks like, the image is already kind of low-res and blurry, so you wouldn't need to blur it very hard. Whenever I need a blur in OpenCV I normally choose the Gaussian blur, since it weights each pixel together with its surrounding pixels. Once the image is blurred, I would threshold, or adaptive-threshold, the image. At that point the result should be mostly hard lines with little bits of short lines mixed in between. Afterwards, dilate the thresholded image just enough that the areas with many hard edges connect. Once the dilation has been performed, find the contours of that image and sort them by their height within the image. Since I assume the position of those numbers won't change, you will only have to sort your contours based on their height in the image. Once you have sorted your contours, just create bounding boxes around them and read the text from there (a rough sketch of this first method follows the outline below).
However, if you want to do this the quick and dirty way, you can always just manually create your own ROI's around each area you want to read and do it that way.
First Method
Gaussian blur the image
Threshold the image
Dilate the image
Find Contours
Sort Contours based on height
Create bounding boxes around relevant contours
Second Method
Manually create ROI's around the area you want to read text from
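A rough sketch of that first method, assuming OpenCV 4 and pytesseract are available and that 'cropped.png' stands in for the cropped screenshot; the kernel size and threshold choices are guesses:

import cv2
import pytesseract

img = cv2.imread('cropped.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

blurred = cv2.GaussianBlur(gray, (3, 3), 0)          # light blur, the image is already low-res
_, thresh = cv2.threshold(blurred, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
dilated = cv2.dilate(thresh, cv2.getStructuringElement(cv2.MORPH_RECT, (5, 3)))

contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Sort the bounding boxes top-to-bottom (then left-to-right) so the values
# come out in a stable order.
boxes = sorted((cv2.boundingRect(c) for c in contours), key=lambda b: (b[1], b[0]))

for x, y, w, h in boxes:
    roi = gray[y:y + h, x:x + w]
    text = pytesseract.image_to_string(roi, config='--psm 7')   # treat each box as one text line
    print(text.strip())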

Extracting data from graphs in a scanned document

EDIT: This is a deeper explanation of a question I asked earlier, which is still not solved for me.
I'm currently trying to write some code that can extract data from some uncommon graphs in a book. I scanned the pages of the book, and by using OpenCV I would like to detect some features of the graphs in order to convert them into usable data. In the left graph I'm looking for the height of the "triangles", and in the right graph the distance from the center to the points where the dotted lines intersect with the gray area. In both cases I would like to convert these values into numeric data for further usage.
For the left graph, I thought of detecting all the individual colors and computing the area of each sector by counting the number of pixels in that color. Once I have the areas of these sectors, I can easily calculate their heights using basic math. The following code snippet shows how far I've already gotten with identifying different colors. However, I can't manage to make this work accurately. It always seems to detect some colors of other sectors as well, or not detect all pixels of one sector. I think it has something to do with the boundaries I'm using, but I can't quite figure out how to make them work. Does someone know how I can determine these values?
import numpy as np
import cv2

img = cv2.imread('images/test2.jpg')

# BGR lower/upper bounds for one sector's colour -- these are the
# boundaries I can't figure out how to set properly
lower = np.array([0, 0, 100])
upper = np.array([50, 56, 150])

mask = cv2.inRange(img, lower, upper)           # pixels inside the bounds become 255
output = cv2.bitwise_and(img, img, mask=mask)   # keep only the masked pixels

cv2.imshow('img', img)
cv2.imshow('mask', mask)
cv2.imshow('output', output)
cv2.waitKey(0)
cv2.destroyAllWindows()
For the right graph, I still have no idea how to extract data from it. I thought of identifying the center by detecting all the dotted lines, and then, by detecting the intersections of these dotted lines with the gray area, I could measure the distance between the center and these intersections. However, I haven't yet figured out how to do this properly, since it sounds quite complex. The following code snippet shows how far I've gotten with the line detection. In this case too, the detection is far from accurate. Does someone have an idea how to tackle this problem?
import numpy as np
import cv2

# Reading the image
img = cv2.imread('test2.jpg')

# Convert the image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Apply edge detection
edges = cv2.Canny(gray, 50, 150, apertureSize=3)

# Line detection
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100, minLineLength=50, maxLineGap=20)
for line in lines:
    x1, y1, x2, y2 = line[0]
    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)

cv2.imwrite('linesDetected.jpg', img)
For the left image, using your approach, try looking at the RGB histogram: if you want to use the relative areas of the segments, the segment colors should show up as significant peaks.
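A rough sketch of that idea, though done in HSV hue space rather than RGB (my own swap, since hue peaks are easier to isolate); adjacent bins can land on the same sector, so treat the peak picking as illustrative only:

import cv2
import numpy as np

img = cv2.imread('images/test2.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

hist = cv2.calcHist([hsv], [0], None, [180], [0, 180]).ravel()  # hue histogram
peaks = np.argsort(hist)[-6:]                                   # the 6 most frequent hues

for hue in sorted(int(p) for p in peaks):
    # Count pixels within a small hue window around each peak as a stand-in
    # for that sector's area; the window width and S/V floors are guesses.
    mask = cv2.inRange(hsv, (max(hue - 5, 0), 60, 60), (hue + 5, 255, 255))
    print(hue, cv2.countNonZero(mask), 'pixels')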
Another alternative could be to use the Hough Circle Transform, which should work on circle segments. See also here.
For the right image ... let me think ...
You could create an "empty" diagram with no data inside. You know the locations of the circle segments ("cake pieces"). Then you could identify the area where the data is (the dark one), either by using a grey threshold, an RGB threshold, or findContours, or by looking at Watershed / Distance Transform.
In the end the idea is to make a boolean overlay between the cleared image and the data that was found. Then you can identify which share of each circle segment is covered or, knowing the center, find the farthest covered point from the center.
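A hedged sketch of that overlay, assuming you already have, per segment, a boolean mask from the empty template (sector_mask) and a mask of the dark data pixels (data_mask), both single-channel 0/255 images of the same size; the names are made up:

import cv2
import numpy as np

def covered_fraction(sector_mask, data_mask):
    # Share of the segment that the data covers.
    overlap = cv2.bitwise_and(sector_mask, data_mask)
    return cv2.countNonZero(overlap) / max(cv2.countNonZero(sector_mask), 1)

def farthest_point_from_center(data_mask, center):
    # Farthest data pixel from the known center (cx, cy), and its distance.
    ys, xs = np.nonzero(data_mask)
    d = np.hypot(xs - center[0], ys - center[1])
    i = int(np.argmax(d))
    return (int(xs[i]), int(ys[i])), float(d[i])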

cv2.findContours recognising arbitrary contours

I am trying to identify numbers in images. I am using the cv2.findContours function to "separate" the digits in the photo. But even after several modifications to the image, the function recognises arbitrary contours, even in the absolute top-left corner of the image, although the final modified image has a completely black background with only the digits in white. Why is this so? Complete source code along with the photo and everything else:
https://github.com/tanmay-edgelord/HandwrittenDigitRecognition
Source code:
https://github.com/tanmay-edgelord/HandwrittenDigitRecognition/blob/master/performRecognition.ipynb
In this code I am using the line rects = [cv2.boundingRect(ctr) for ctr in ctrs] to get bounding rectangles for the contours returned by the function. Upon printing them out, I found that many of the bounding rectangles are (0, 0, 0, 0). If any further details/clarification are required, please comment.

Using findContour and the resulting contours to approximate line segments to find line intersection

I am attempting to write a program to find the intersections of the lines outlining a rectangular object (e.g. a computer screen). I originally attempted to use Hough lines; however, due to variable lighting conditions as well as content appearing on the screen, the lines that get drawn are not always the outline of the screen. Additionally, there may be a large number of other random objects in the frame.
My next approach was to use contours, which always seem to outline the screen while also coping with the variable content displayed on it. How do I go about using the contours to approximate a line?
I used
print len(contours)
and consistently received a fairly large number of contours (length > 200).
Feel free to comment asking for clarity.
EDIT
The green "lines" are the contours found by findContours. I am primarily interested in the contour surrounding the screen content. How can I use those contours to find lines approximating them, and then find the point of intersection of those lines? This is from a webcam stream, so conditions, angle, and distance may not stay constant.
A first step might be to use the size of the contours to filter out those you are not interested in, since the smaller contours usually correspond to stuff displayed on the screen.
Also, findContours can return the contours in a hierarchy of nested contours. This will tell you which contour is contained within another and allow you to get the outer-most contour, the second outer-most one, and so on. If you are trying to get the screen, it would have to be a large contour, possibly the second largest, nested just below the contour for the monitor itself (a small sketch follows).
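A small sketch of the hierarchy retrieval, assuming OpenCV 4's two-value return and a binarized frame called binary; with RETR_TREE, hierarchy[0][i] holds [next, previous, first_child, parent] indices for contour i:

contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# A parent index of -1 means the contour is outer-most (e.g. the monitor itself);
# its direct children are candidates for the screen area.
outermost = [i for i, h in enumerate(hierarchy[0]) if h[3] == -1]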
After obtaining the potential contours, which are just lists of points, you can do a robust fit of a single rectangle using just those points, either by RANSAC (with a model for rectangles, not a homography) or by a Hough transform modified for this case.
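Not the RANSAC/Hough fitting described above, but a simpler stand-in that often works for a screen outline: approximate a large candidate contour with a four-point polygon, whose vertices are then the line intersections (the names and tolerance are assumptions):

candidate = max(contours, key=cv2.contourArea)            # or a large child from the hierarchy
peri = cv2.arcLength(candidate, True)
approx = cv2.approxPolyDP(candidate, 0.02 * peri, True)   # 2% of the perimeter as tolerance
if len(approx) == 4:
    corners = approx.reshape(4, 2)                        # the four corner intersections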
