Draw contours correctly on the image (OpenCV/Python)

I am trying to draw contours around my test image, using Canny edge detection in the background.
The findContours method works fine for my image, but when I call the drawContours method on that image, it does not show anything.
Here is what I have tried:
import cv2
import numpy as np
image = cv2.imread('/path/to/image.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (11, 11), 0)
cv2.imshow("blurred", blurred)
canny = cv2.Canny(blurred, 30, 150)
(_, cnts, _) = cv2.findContours(canny.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print "Contours in the image, %d" % (len(cnts))
shape = image.copy()
cv2.drawContours(shape.copy(), cnts, -1, (0, 255, 0), 2)
cv2.imshow("Edges", shape)
From what I gather from the docs, the fourth argument to drawContours specifies the colour of the edges to be drawn. But nothing is drawn instead of the green edges that I am expecting.
len(cnts) returns 2 for me
Here is the image I am trying it out with
I am using opencv version 3.0.0
Relevant SO question
EDIT: After changing the 3rd argument of cv2.findContours() to cv2.CHAIN_APPROX_NONE, it is still not showing the final green edges (or any colour for that matter) on the final cv2.imshow("Edges", shape) image. Here is what I get from the Canny edge image:

You have to modify the last two lines of your code:
You are already storing a copy of image in shape
shape = image.copy()
So why do you use shape.copy() in cv2.drawContours() again?
Replace it as follows:
cv2.drawContours(shape, cnts, -1, (0, 255, 0), 2)
cv2.imshow("Edges", shape)
NOTE: You have already copied the image once, so draw the contours on that copy. You don't have to make a copy of the copy.
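As a side note: if no window appears at all even with that fix, remember that cv2.imshow only renders while cv2.waitKey is pumping GUI events, so end the script along these lines (a minimal sketch):

cv2.drawContours(shape, cnts, -1, (0, 255, 0), 2)
cv2.imshow("Edges", shape)
cv2.waitKey(0)  # the window is only drawn/refreshed while waitKey runs
cv2.destroyAllWindows()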
This is what you get as a result:

Related

Drawing OpenCV contours and saving as a transparent image

I'm trying to draw contours I have found using findContours.
If I draw like this, I get a black background with the contour drawn on it:
out = np.zeros_like(someimage)
cv2.drawContours(out, contours, -1, 255, 1)
cv2.imwrite('contours.png',out)
If I draw like this, I get a fully transparent image with no drawn contours:
out = np.zeros((55, 55, 4), dtype=np.uint8)
cv2.drawContours(out, contours, -1, 255, 1)
cv2.imwrite('contours.png',out)
How do I go about making an image of size (55, 55) and drawing a contour on it, while keeping a transparent background?
Thanks
To work with transparent images in OpenCV you need to utilize the fourth channel after BGR, called alpha, which controls the transparency. So instead of creating a three-channel image, create one with four channels, and while drawing make sure you set the fourth channel to 255.
mask = np.zeros((55, 55, 4), dtype=np.uint8)
cv2.drawContours(mask, cnts, -1, (255, 255, 255, 255), 1)  # change the first three channel values to any color you want
cv2.imwrite('res.png', mask)
Input image whose contours are to be drawn:
Result
In Python/OpenCV, use the black-and-white contour image both as the alpha channel and, converted, as the 3-channel BGR image.
cntr_img = np.zeros((55, 55), dtype=np.uint8)  # single channel, so GRAY2BGRA below works
cv2.drawContours(cntr_img, contours, -1, 255, 1)
out = cv2.cvtColor(cntr_img, cv2.COLOR_GRAY2BGRA)
out[:,:,3] = cntr_img
cv2.imwrite('contours.png', out)
This works for me in Python/OpenCV. I am using a white blob on black background for input, since I do not have a contour image available. The contour image needs to be grayscale.
Input:
import cv2
import numpy as np
# read image
img = cv2.imread('mask.png')
# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
out = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGRA)
out[:,:,3] = gray
# write output
cv2.imwrite('mask_transp.png',out)
# display it
cv2.imshow("out", out)
cv2.waitKey(0)
Transparent result (download to see it since it is white on transparent background):

Is it possible to use OpenCV contour in python in a way that corners are not cut off?

I am using the OpenCV contour function in python. For example on an image like this:
contours, _ = cv2.findContours(img_expanded_padded, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
It works well except that it cuts off the corners on the inside of the contour as seen above. Are there any options that would leave this corner in?
Travelling along the contours and filling them in manually would be too computationally expensive. The above is only an example; this will be performed many times on images of 5400x5400 pixels or more...
I can find the edges with the code below, and the corners are filled as a result, but then I need to extract these as contours again.
# FIND ALL HORIZONTAL AND VERTICAL EDGES AND COMBINE THEM
edges_expanded_x = np.absolute(cv2.Sobel(img_expanded_padded, cv2.CV_64F, 1, 0, ksize=3))
edges_expanded_y = np.absolute(cv2.Sobel(img_expanded_padded, cv2.CV_64F, 0, 1, ksize=3))
edges_expanded = np.logical_or(edges_expanded_x, edges_expanded_y)
# GET RID OF DOUBLE EDGE THAT RESULTS FROM SOBEL FILTER
edges_expanded = np.multiply(img_expanded_padded, edges_expanded)
Are there any OpenCV settings or functions I can use to accomplish this?
EDIT:
I should clarify that my goal is to have a single-pixel continuous contour. I need the contours themselves, not an array of the entire image including the contours.
EDIT: The above images are zoomed into my test image. The actual pixels are as shown by the red grids in the images below.
There is no need to use cv2.Sobel; you can simply draw the contours with cv2.drawContours on a black background. The black background can be created with np.zeros.
img = cv2.imread('contouring.png',0)
contours, _ = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
bgr = np.zeros((img.shape[0], img.shape[1]), dtype='uint8')  # black background
cv2.drawContours(bgr, contours, -1, (255, 255, 255), 1)
If you want the contour lines to be thick, you can use cv2.dilate. Then, to prevent the corners being cut, cv2.bitwise_and can be used along with cv2.bitwise_not, as shown below:
bgr = cv2.dilate(bgr, np.ones((31, 31), np.uint8), iterations=1)
bgr = cv2.bitwise_and(bgr, cv2.bitwise_not(img))
This gives contours which are 15 pixels thick.
EDIT - The first image of thin contours is still cutting corners. To obtain single-pixel contours which do not cut corners, we can use a kernel size of 3x3:
img = cv2.imread('contouring.png',0)
contours, _ = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
bgr = np.zeros((img.shape[0], img.shape[1]), dtype='uint8')
cv2.drawContours(bgr, contours, -1, (255,255,255), 1)
bgr = cv2.dilate(bgr, np.ones((3, 3), np.uint8), iterations=1)
bgr = cv2.bitwise_and(bgr, cv2.bitwise_not(img))
This gives us
I have checked it by using cv2.bitwise_and between bgr and img, and I obtain a black image, indicating that no white pixels are cutting corners.

drawContours on a different position

I would like to draw contours in the middle of a blank image. I don't know how to set the location where the contour will be drawn. This is the line I use:
cv2.drawContours(bimg, c, -1, 255, 1)
bimg is the blank image and c is the contour I've extracted from an image. I believe I can move the contour by manipulating c, but I don't understand how c is actually structured.
You can look at the official OpenCV documentation on contours. This code can be used to find the contours of a thresholded image and draw them on a white background in red:
img = cv2.imread('image_name.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh1 = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)  # threshold the grayscale image, not the BGR one
_, cnts, _ = cv2.findContours(thresh1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
bgr = np.ones((img.shape[0], img.shape[1], 3), dtype='uint8') * 255  # white 3-channel background of the input's size
cv2.drawContours(bgr, cnts, -1, (0, 0, 255), 1)  # red in BGR
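The code above draws the contours where they were found; it does not reposition them. To actually move a contour, note that each contour is a NumPy array of shape (N, 1, 2) holding (x, y) point coordinates, so you can translate it by adding an offset to the points, or simply pass the offset argument of cv2.drawContours. A minimal sketch, assuming bimg and c from the question:

import cv2
import numpy as np

# c is a single contour: an int array of shape (N, 1, 2) holding (x, y) points
x, y, w, h = cv2.boundingRect(c)

# shift that moves the contour's bounding box to the centre of bimg
dx = bimg.shape[1] // 2 - (x + w // 2)
dy = bimg.shape[0] // 2 - (y + h // 2)

# option 1: translate the point array itself, then draw
c_shifted = (c + np.array([dx, dy])).astype(np.int32)
cv2.drawContours(bimg, [c_shifted], -1, 255, 1)

# option 2: let drawContours apply the shift for you
# cv2.drawContours(bimg, [c], -1, 255, 1, offset=(dx, dy))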

Get external contour using opencv (python)

I am trying to get the external contour of an image using opencv and python.
I found a solution to this problem here (Process image to find external contour), but the solution does not work for me - instead of the contour image it opens two new windows (one which is all black and the other one black and white).
This is the code I am using:
import cv2 # Import OpenCV
import numpy as np # Import NumPy
# Read in the image as grayscale - Note the 0 flag
im = cv2.imread("img.jpg", 0)
# Run findContours - Note the RETR_EXTERNAL flag
# Also, we want to find the best contour possible with CHAIN_APPROX_NONE
_, contours, hierarchy = cv2.findContours(im.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Create an output of all zeroes that has the same shape as the input
# image
out = np.zeros_like(im)
# On this output, draw all of the contours that we have detected
# in white, and set the thickness to be 3 pixels
cv2.drawContours(out, contours, -1, 255, 3)
# Spawn new windows that shows us the donut
# (in grayscale) and the detected contour
cv2.imshow('Donut', im)
cv2.imshow('Output Contour', out)
# Wait indefinitely until you push a key. Once you do, close the windows
cv2.waitKey(0)
cv2.destroyAllWindows()
The illustration shows the two windows I get instead of the contour.
You are making some mistakes that compromise your result. The documentation says:
For better accuracy, use binary images (see step 3).
finding contours is like finding white object from black background (see step 2).
You are not sticking to these rules, so you don't get good results. Also, you are plotting your results onto a black image, so they are not visible.
Below is the full solution for your case.
I am also using an adaptive threshold for better results.
# Step 1: Read in the image as grayscale - Note the 0 flag
im = cv2.imread("/home/jorge/Downloads/input.jpg", 0)
cv2.imshow('Original', im)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Step 2: Inverse the image to get black background
im2 = im.copy()
im2 = 255 - im2
cv2.imshow('Inverse', im2)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Step 3: Get an adaptive binary image
im3 = cv2.adaptiveThreshold(im2, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 11, 2)
cv2.imshow('Inverse_binary', im3)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Step 4: find contours
_, contours, hierarchy = cv2.findContours(im3.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Step 5: Create a white image instead of a black one, so the contours can be plotted in black
out = 255*np.ones_like(im)
# on a single-channel image only the first value of the colour tuple is used,
# so (0, 255, 0) draws with value 0, i.e. black
cv2.drawContours(out, contours, -1, (0, 255, 0), 3)
cv2.drawContours(im, contours, -1, (0, 255, 0))
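To actually see the result, reuse the windowing code from your own snippet (a short sketch):

cv2.imshow('Donut', im)
cv2.imshow('Output Contour', out)
cv2.waitKey(0)
cv2.destroyAllWindows()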

Python + OpenCV: OCR Image Segmentation

I am trying to do OCR on this toy example of receipts, using Python 2.7 and OpenCV 3.1.
Grayscale + blur + external edge detection + segmentation of each area in the receipt (for example "Category", to see later which one is marked, in this case cash).
I find it complicated, when the image is "skewed", to properly transform it and then "automatically" segment each section of the receipt.
Example:
Any suggestion?
The code below is an example that gets as far as the edge detection, but it only works when the receipt is like the first image. My issue is not the image-to-text step; it is the pre-processing of the image.
Any help more than appreciated! :)
import os
os.chdir('/path/to/your/directory')  # put your own directory here
import cv2
import numpy as np
image = cv2.imread("Rent-Receipt.jpg", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(image, (5, 5), 0)
#blurred = cv2.bilateralFilter(gray,9,75,75)
# apply Canny Edge Detection
edged = cv2.Canny(blurred, 0, 20)
#Find external contour
(_,contours, _) = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
A great tutorial on the first step you described is available at pyimagesearch (and they have great tutorials in general).
In short, as described by Ella, you would have to use cv2.CHAIN_APPROX_SIMPLE. A slightly more robust method would be to use cv2.RETR_LIST instead of cv2.RETR_EXTERNAL and then sort the contours by area, as this should work decently even on white backgrounds, or when the page inscribes a bigger shape in the background, etc. A sketch of that suggestion follows.
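A minimal sketch of the sorting idea, assuming edged is the Canny output from your snippet (two-value findContours as in OpenCV 4.x; on 3.x take the middle return value instead):

# retrieve every contour and sort by area, largest first; the receipt
# outline is then typically the first (or among the first) entries
contours, _ = cv2.findContours(edged, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)
page_candidate = contours[0]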
Coming to the second part of your question, a good way to segment the characters would be to use the Maximally Stable Extremal Regions (MSER) extractor available in OpenCV. A complete implementation in CPP is available here, in a project I was helping out with recently. The Python implementation would go along these lines (the code below works for OpenCV 3.0+; for the OpenCV 2.x syntax, look it up online):
import cv2
img = cv2.imread('test.jpg')
mser = cv2.MSER_create()
#Resize the image so that MSER can work better
img = cv2.resize(img, (img.shape[1]*2, img.shape[0]*2))
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
vis = img.copy()
regions = mser.detectRegions(gray)
hulls = [cv2.convexHull(p.reshape(-1, 1, 2)) for p in regions[0]]
cv2.polylines(vis, hulls, 1, (0,255,0))
cv2.namedWindow('img', 0)
cv2.imshow('img', vis)
while cv2.waitKey() != ord('q'):
    continue
cv2.destroyAllWindows()
This gives the output as
Now, to eliminate the false positives, you can simply cycle through the points in hulls and calculate the perimeter (the sum of the distances between all adjacent points in hulls[i], where hulls[i] is the list of points of one convex hull). If the perimeter is too large, classify it as not a character.
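A minimal sketch of that filtering step, using cv2.arcLength for the closed perimeter (the threshold value is a made-up example you would tune per image):

MAX_PERIMETER = 300  # hypothetical threshold; tune for your image size

# keep only the hulls whose closed perimeter looks character-sized
filtered_hulls = [h for h in hulls if cv2.arcLength(h, True) <= MAX_PERIMETER]
cv2.polylines(vis, filtered_hulls, 1, (0, 255, 0))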
The diagonal lines across the image appear because the border of the image is black. They can simply be removed by adding the following line right after the image is read:
img = img[5:-5,5:-5,:]
which gives the output
The option off the top of my head requires extracting the 4 corners of the skewed image. This is done by using cv2.CHAIN_APPROX_SIMPLE instead of cv2.CHAIN_APPROX_NONE when finding contours. Afterwards, you could use cv2.approxPolyDP and hopefully remain with the 4 corners of the receipt (if all your images are like this one, there is no reason why it shouldn't work).
Now use cv2.findHomography and cv2.warpPerspective to rectify the image according to source points, which are the 4 points extracted from the skewed image, and destination points that should form a rectangle, for example the full image dimensions.
Here you could find code samples and more information:
OpenCV-Geometric Transformations of Images
Also this answer may be useful - SO - Detect and fix text skew
EDIT: Corrected the second chain approx to cv2.CHAIN_APPROX_NONE.
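A minimal sketch of that pipeline, under the stated assumptions that the receipt is the largest external contour and that the polygon approximation yields exactly 4 points (the 0.02 epsilon factor is illustrative):

import cv2
import numpy as np

img = cv2.imread('Rent-Receipt.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edged = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 0, 20)

cnts = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]  # OpenCV 3.x/4.x compatible
page = max(cnts, key=cv2.contourArea)  # assume the receipt is the biggest contour

# approximate the outline; with luck only the 4 corners remain
peri = cv2.arcLength(page, True)
approx = cv2.approxPolyDP(page, 0.02 * peri, True)

if len(approx) == 4:
    h, w = img.shape[:2]
    src = approx.reshape(4, 2).astype(np.float32)
    # destination rectangle; in practice sort src into a consistent corner
    # order (e.g. top-left, bottom-left, bottom-right, top-right) to match dst
    dst = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]])
    H, _ = cv2.findHomography(src, dst)
    rectified = cv2.warpPerspective(img, H, (w, h))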
Preprocessing the image by converting the desired foreground text to black while turning the unwanted background to white can help to improve OCR accuracy. In addition, removing the horizontal and vertical lines can improve results. Here's the preprocessed image after removing unwanted noise such as the horizontal/vertical lines. Note the removed border and table lines:
import cv2
# Load in image, convert to grayscale, and threshold
image = cv2.imread('1.jpg')
gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# Find and remove horizontal lines
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (35,2))
detect_horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2)
cnts = cv2.findContours(detect_horizontal, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(thresh, [c], -1, (0, 0, 0), 3)
# Find and remove vertical lines
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,35))
detect_vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=2)
cnts = cv2.findContours(detect_vertical, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    cv2.drawContours(thresh, [c], -1, (0, 0, 0), 3)
# Mask out unwanted areas for result
result = cv2.bitwise_and(image,image,mask=thresh)
result[thresh==0] = (255,255,255)
cv2.imshow('thresh', thresh)
cv2.imshow('result', result)
cv2.waitKey()
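The cleaned-up result can then be handed straight to an OCR engine. A minimal sketch, assuming pytesseract (and the Tesseract binary) is installed:

import pytesseract

# --psm 6 treats the image as a single uniform block of text
text = pytesseract.image_to_string(result, config='--psm 6')
print(text)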
Try using the Stroke Width Transform. A Python 3 implementation of the algorithm is available here at SWTloc.
EDIT : v2.0.0 onwards
Install the Library
pip install swtloc
Transform The Image
import numpy as np  # needed for np.pi below
import swtloc as swt
imgpath = 'images/path_to_image.jpeg'
swtl = swt.SWTLocalizer(image_paths=imgpath)
swtImgObj = swtl.swtimages[0]
# Perform SWT transformation with the numba engine
swt_mat = swtImgObj.transformImage(text_mode='lb_df', gaussian_blurr=False,
                                   minimum_stroke_width=3, maximum_stroke_width=12,
                                   maximum_angle_deviation=np.pi/2)
Localize Letters
localized_letters = swtImgObj.localizeLetters(minimum_pixels_per_cc=10,
localize_by='min_bbox')
Localize Words
localized_words = swtImgObj.localizeWords(localize_by='bbox')
There are multiple parameters in the .transformImage, .localizeLetters and .localizeWords functions that you can play around with to get the desired results.
Full disclosure: I am the author of this library.
