Creating your own contour in OpenCV using Python

I have a set of boundary points of an object.
I want to draw it as a contour using OpenCV, but I have no idea how to convert my points to the contour representation, i.e. the same representation that is returned by the following call:
contours,_ = cv2.findContours(image,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
Any ideas?
Thanks

By looking at the format of the contours I would think something like this should be sufficient:
contours = [numpy.array([[1,1],[10,50],[50,50]], dtype=numpy.int32) , numpy.array([[99,99],[99,60],[60,99]], dtype=numpy.int32)]
This small program gives a running example:
import numpy
import cv2
contours = [numpy.array([[1,1],[10,50],[50,50]], dtype=numpy.int32) , numpy.array([[99,99],[99,60],[60,99]], dtype=numpy.int32)]
drawing = numpy.zeros([100, 100],numpy.uint8)
for cnt in contours:
    cv2.drawContours(drawing,[cnt],0,(255,255,255),2)
cv2.imshow('output',drawing)
cv2.waitKey(0)

To create your own contour from a Python list of points L:
L = [[x1,y1],[x2,y2],[x3,y3],[x4,y4],[x5,y5],[x6,y6],[x7,y7],[x8,y8],[x9,y9],...[xn,yn]]
Create a numpy array ctr from L, reshape it and force its type:
ctr = numpy.array(L).reshape((-1,1,2)).astype(numpy.int32)
ctr is our new contour; let's draw it on an existing image:
cv2.drawContours(image,[ctr],0,(255,255,255),1)
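Putting those steps together, a minimal runnable sketch (the point list and the blank canvas here are made-up illustration values, not from the question):
import numpy
import cv2

# Made-up example points; any list of (x, y) pairs will do
L = [[10, 10], [90, 20], [80, 80], [20, 70]]
ctr = numpy.array(L).reshape((-1, 1, 2)).astype(numpy.int32)

# Drawing on a blank canvas here instead of an existing image
image = numpy.zeros((100, 100), dtype=numpy.uint8)
cv2.drawContours(image, [ctr], 0, (255, 255, 255), 1)
cv2.imshow('contour', image)
cv2.waitKey(0)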

A contour is simply a curve joining all continuous points, so to create your own contour you can create a np.array() with your (x,y) points in clockwise order:
points = np.array([[25,25], [70,10], [150,50], [250,250], [100,350]])
That's it!
There are two methods to draw the contour onto an image depending on what you need:
Contour outline
If you only need the contour outline, use cv2.drawContours()
cv2.drawContours(image,[points],0,(0,0,0),2)
Filled contour
To get a filled contour, you can either use cv2.fillPoly() or cv2.drawContours() with thickness=-1
cv2.fillPoly(image, [points], [0,0,0]) # OR
# cv2.drawContours(image,[points],0,(0,0,0),-1)
Full example code for completeness
import cv2
import numpy as np
# Create blank white image
image = np.ones((400,400), dtype=np.uint8) * 255
# List of (x,y) points in clockwise order
points = np.array([[25,25], [70,10], [150,50], [250,250], [100,350]])
# Draw points onto image
cv2.drawContours(image,[points],0,(0,0,0),2)
# Fill points onto image
# cv2.fillPoly(image, [points], [0,0,0])
cv2.imshow('image', image)
cv2.waitKey()

To add to Cherif KAOUA's answer, I found I had to convert my numpy array to a list and zip it. Reading in an array of points from a text file:
import numpy as np

contour = []
with open(array_of_points, 'r') as f:
    next(f)  # line one of my file gives the number of points
    for l in f:
        row = l.split()
        numbers = [int(n) for n in row]
        contour.append(numbers)
ctr = np.array(contour).reshape((-1, 1, 2)).astype(np.int32)
ctr = ctr.tolist()
ctr = zip(*[iter(ctr)] * len(contour))
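As an aside, the zip(*[iter(...)] * n) idiom in the last line regroups a flat sequence into tuples of n elements; a tiny illustration with made-up data:
# zip(*[iter(seq)] * n) chunks a sequence into tuples of n elements
seq = [1, 2, 3, 4, 5, 6]
print(list(zip(*[iter(seq)] * 2)))  # [(1, 2), (3, 4), (5, 6)]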

Related

Polyline and rectangle intersection

I would like to find out whether a polyline and a rectangle intersect in OpenCV + Python:
A = cv2.rectangle(frame,(384,0),(510,128),(0,255,0),3)
pts = np.array([[1300,900],[1750,700],[1000,200],[600,200]], np.int32)
pts = pts.reshape((-1,1,2))
B = cv2.polylines(frame,[pts],True,(244,66,66),7)
How can I find out whether A intersects B?
Thank you
OpenCV and NumPy don't have direct geometry intersection functions.
You can write your own (see Numpy and line intersections), or a common technique is to draw the rectangle filled with a colour and then check whether the points along the line on the same image are that colour.
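For illustration, a minimal sketch of a variant of that drawing-based technique: each shape is rendered onto its own blank mask and the masks are ANDed. The shapes are taken from the question; the frame size (1080x1920) is an assumption for illustration.
import cv2
import numpy as np

# Render each shape on its own blank mask; any non-zero pixel in the
# AND of the two masks means the shapes intersect.
mask_rect = np.zeros((1080, 1920), dtype=np.uint8)
mask_line = np.zeros((1080, 1920), dtype=np.uint8)

cv2.rectangle(mask_rect, (384, 0), (510, 128), 255, -1)  # filled rectangle

pts = np.array([[1300, 900], [1750, 700], [1000, 200], [600, 200]], np.int32)
pts = pts.reshape((-1, 1, 2))
cv2.polylines(mask_line, [pts], True, 255, 7)

intersects = cv2.bitwise_and(mask_rect, mask_line).any()
print("Intersect:", bool(intersects))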

Count protuberances in dendrite with openCV (python)

I'm trying to count dendritic spines (the tiny protuberances) in mouse dendrites obtained by fluorescent microscopy, using Python and OpenCV.
Here is the original image, from which I'm starting:
Raw picture:
After some preprocessing (code below) I've obtained these contours:
Raw picture with contours (White):
What I need to do is to recognize all protuberances, obtaining something like this:
Raw picture with contours in White and expected counts in red:
What I intended to do, after preprocessing the image (binarizing, thresholding and reducing its noise), was to draw the contours and try to find convexity defects in them. The problem is that some of the "spines" (the technical name of those protuberances) are not recognized because they end up merged together in the same convexity defect, which underestimates the count. Is there any way to be more "precise" when marking convexity defects?
Raw image with contour marked in White. Red dots mark spines that were identified with my code. Green dots mark spines I still can't recognize:
My Python code:
import cv2
import numpy as np
from matplotlib import pyplot as plt
#Image loading and preprocessing:
img = cv2.imread('Prueba.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.pyrMeanShiftFiltering(img,5,11)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret,thresh1 = cv2.threshold(gray,5,255,0)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
img1 = cv2.morphologyEx(thresh1, cv2.MORPH_OPEN, kernel)
img1 = cv2.morphologyEx(img1, cv2.MORPH_OPEN, kernel)
img1 = cv2.dilate(img1,kernel,iterations = 5)
#Drawing of contours. Some spines were detached from the main shaft due to
#poor image quality. The main idea of the code below is to identify the shaft
#as the biggest contour, and count any smaller one as a spine too.
_, contours,_ = cv2.findContours(img1,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
print("Number of contours detected: "+str(len(contours)))
cv2.drawContours(img,contours,-1,(255,255,255),6)
plt.imshow(img)
plt.show()
lengths = [len(i) for i in contours]
cnt = lengths.index(max(lengths))
#The contour of the main shaft is stored in cnt
cnt = contours.pop(cnt)
#Finding convexity points with hull:
hull = cv2.convexHull(cnt)
#The next lines are just for visualization. All centroids of smaller contours
#are marked as spines.
for i in contours:
    M = cv2.moments(i)
    centroid_x = int(M['m10']/M['m00'])
    centroid_y = int(M['m01']/M['m00'])
    centroid = np.array([[[centroid_x, centroid_y]]])
    print(centroid)
    cv2.drawContours(img,centroid,-1,(0,255,0),25)
    cv2.drawContours(img,centroid,-1,(255,0,0),10)
cv2.drawContours(img,hull,-1,(0,255,0),25)
cv2.drawContours(img,hull,-1,(255,0,0),10)
plt.imshow(img)
plt.show()
#Finally, the number of spines is computed as the sum between smaller contours
#and protuberances in the main shaft.
spines = len(contours)+len(hull)
print("Number of identified spines: " + str(spines))
I know my code has many smaller problems to solve yet, but I think the biggest one is the one presented here.
Thanks for your help, and have a good one!
I would approximate the contour to a polygon, as Silencer suggests (don't use the convex hull). Maybe you should simplify the contour just a little bit, so that you keep most of the detail of the shape.
This way you will have many vertices that you have to filter: by looking at the angle at each vertex you can tell whether it is concave or convex. Each spine is one or more convex vertices between concave vertices (if you have several consecutive convex vertices, keep only the sharpest one).
EDIT: in order to compute the angle you can do the following. Let's say that a, b and c are three consecutive vertices:
angle1 = arctan((by - ay) / (bx - ax))
angle2 = arctan((cy - by) / (cx - bx))
angleDiff = angle2 - angle1
if (angleDiff < -PI) angleDiff = angleDiff + 2*PI
if (angleDiff > 0) concave
else convex
Or vice versa, depending on whether your contour is clockwise or counterclockwise, black or white. If you sum the angleDiff of all vertices of any polygon, the result should be 2*PI; if it is -2*PI, then the last "if" should be swapped.
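A minimal sketch of this vertex classification, assuming cnt is the main shaft contour from the question's code. Instead of arctangent differences it uses the sign of the 2D cross product of the two edge vectors at each vertex, which avoids the division by zero when bx == ax; the epsilon value is a made-up tuning assumption.
import cv2
import numpy as np

# Approximate the contour to a polygon, then classify each vertex.
# The sign convention depends on contour orientation; flip if needed.
approx = cv2.approxPolyDP(cnt, epsilon=2.0, closed=True)
pts = approx.reshape(-1, 2).astype(np.int64)

convex = []
n = len(pts)
for i in range(n):
    a, b, c = pts[i - 1], pts[i], pts[(i + 1) % n]
    cross = (b[0] - a[0]) * (c[1] - b[1]) - (b[1] - a[1]) * (c[0] - b[0])
    if cross > 0:
        convex.append(i)

# Count maximal runs of consecutive convex vertices as spine candidates
# (this simple sketch does not merge a run that wraps past index 0).
runs = 0
for i, v in enumerate(convex):
    if i == 0 or v != convex[i - 1] + 1:
        runs += 1
print("Spine candidates:", runs)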

Correctly closing polygons generated using skimage.measure.find_contours()

I'm currently using skimage.measure.find_contours() to find contours on a surface. Now that I've found the contours, I need to be able to find the area enclosed within them.
When all of the vertices are within the data set this is fine, as I have a fully enclosed polygon.
However, how do I ensure the polygon is fully enclosed if the contour breaches the edge of the surface, either at an edge or at a corner? When this happens I would like to use the edge of the surface as additional vertices to close off the polygon. For example, in the following image, with contours shown, you can see that the contours end at the edge of the image; how do I close them up? Also, in the case of the brown contour, which is just a single line, I don't think I want an area returned; how would I single out this case?
I know I can check for enclosed contours/polygons by checking whether the last vertex of the polygon is the same as the first.
I have code for calculating the area inside a polygon, taken from here:
def find_area(array):
    a = 0
    ox, oy = array[0]
    for x, y in array[1:]:
        a += (x*oy - y*ox)
        ox, oy = x, y
    return -a/2
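As a sanity check (the example data here is made up), the function returns 1.0 for a counterclockwise unit square whose vertex list is closed by repeating the first point, and the closed-polygon test mentioned above is just a comparison of the first and last vertices:
import numpy as np

# Counterclockwise unit square, closed by repeating the first vertex
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]], dtype=float)
print(find_area(square))  # 1.0

# Closed-polygon check: first vertex == last vertex
print(np.allclose(square[0], square[-1]))  # True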
I just need help in closing off the polygons. And checking for the different cases that might occur.
Thanks
Update:
After applying the solution suggested by @soupault I have this code:
import numpy as np
import matplotlib.pyplot as plt
from skimage import measure
# Construct some test data
x, y = np.ogrid[-np.pi:np.pi:100j, -np.pi:np.pi:100j]
r = np.sin(np.exp((np.sin(x)**3 + np.cos(y)**2)))
# Coordinates of point of interest
pt = [(49,75)]
# Apply thresholding to the surface
threshold = 0.8
blobs = r > threshold
# Make a labelled image based on the thresholding regions
blobs_labels = measure.label(blobs, background = 0)
# Show the thresholded regions
plt.figure()
plt.imshow(blobs_labels, cmap='spectral')
# Apply regionprops to characterise each of the regions
props = measure.regionprops(blobs_labels, intensity_image = r)
# Loop through each region in regionprops and identify if the point of
# interest is in that region. If so, plot the region and print its area.
plt.figure()
plt.imshow(r, cmap='Greys')
plt.plot(pt[0][0], pt[0][1],'rx')
for prop in props:
    coords = prop.coords
    if np.sum(np.all(coords[:,[1,0]] == pt[0], axis=1)):
        plt.plot(coords[:,1],coords[:,0],'r.')
        print(prop.area)
This solution assumes that each pixel is 1x1 in size. In my real data this isn't the case, so I have also applied the following function to linearly interpolate the data onto an integer grid. I believe you could apply a similar function to make the effective pixel size smaller and increase the resolution of the data.
import numpy as np
from scipy import interpolate
def interpolate_patch(x, y, patch):
    x_interp = np.arange(np.ceil(x[0]), x[-1], 1)
    y_interp = np.arange(np.ceil(y[0]), y[-1], 1)
    f = interpolate.interp2d(x, y, patch, kind='linear')
    patch_interp = f(x_interp, y_interp)
    return x_interp, y_interp, patch_interp
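A hypothetical usage sketch: the coordinate vectors and patch below are made-up data (pixels 0.5 units wide), and it assumes a SciPy version where interp2d is still available:
# Resample a patch sampled every 0.5 units onto an integer grid.
# x, y and patch are made-up example data.
x = np.arange(0, 10, 0.5)
y = np.arange(0, 10, 0.5)
patch = np.random.rand(len(y), len(x))
x_i, y_i, patch_i = interpolate_patch(x, y, patch)
print(patch_i.shape)  # (len(y_i), len(x_i))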
If you need to measure the properties of different regions, it is natural to start by finding the regions (not contours).
In this case the algorithm will be the following:
1. Prepare a labeled image:
   1.a either fill the areas between the different contour lines with different colors;
   1.b or apply some image thresholding function, and then run skimage.measure.label (http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.label);
2. Execute regionprops using the labeled image as an input (http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.regionprops);
3. Iterate over the regions in regionprops and calculate the desired parameters (area, perimeter, etc.).
Once you have identified the regions in your image via regionprops, you can call .coords for each of them to get the enclosed contour.
If anyone needs to close open contours along the image edges (and make a polygon), here is a way:
import shapely.geometry as sgeo
import shapely.ops as sops
def close_contour_with_image_edge(contour, image_shape):
    """
    This function uses shapely because it is the easiest way to do this.
    :param contour: contour generated by skimage.measure.find_contours()
    :param image_shape: tuple (rows, cols), the standard return of numpy shape
    :return: a closed shapely Polygon
    """
    # make the contour a linestring
    contour_line = sgeo.LineString(contour)
    # make the image box a linestring
    box_rows, box_cols = image_shape[0], image_shape[1]
    img_box = sgeo.LineString(coordinates=(
        (0, 0),
        (0, box_cols - 1),
        (box_rows - 1, box_cols - 1),
        (box_rows - 1, 0),
        (0, 0)
    ))
    # intersect the box with the open contour, then keep the shortest
    # piece of the box that touches both ends of the contour
    edge_points = img_box.intersection(contour_line)
    edge_parts = sops.split(img_box, edge_points)
    edge_parts = [part for part in edge_parts.geoms
                  if part.touches(edge_points.geoms[0]) and part.touches(edge_points.geoms[1])]
    edge_parts.sort(reverse=False, key=lambda x: x.length)
    contour_edge = edge_parts[0]
    # weld the contour and the edge piece together
    contour_line = contour_line.union(contour_edge)
    contour_line = sops.linemerge(contour_line)
    contour_polygon = sgeo.Polygon(contour_line.coords)
    return contour_polygon
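A hypothetical usage sketch, reusing the test surface r from the update above; a contour is open when its first and last points differ (this assumes every open contour actually touches the image edge):
import numpy as np
from skimage import measure

# Close every open contour found on the surface `r` and report the
# resulting polygon areas.
contours = measure.find_contours(r, 0.8)
for c in contours:
    if not np.array_equal(c[0], c[-1]):
        polygon = close_contour_with_image_edge(c, r.shape)
        print(polygon.area)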

How do I display the contours of an image using OpenCV Python?

I followed this tutorial from the official documentation and ran their code:
import numpy as np
import cv2
im = cv2.imread('test.jpg')
imgray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(imgray,127,255,0)
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, contours, -1, (0,255,0), 3)
That is OK: no errors, but nothing is displayed. I want to display the result they got, as shown in their picture:
How can I display the result of the contours like that (just the left result or the right one)?
I know I must use cv2.imshow(something) but how in this specific case ?
First off, that example only shows you how to draw contours with the simple approximation. Bear in mind that even if you draw the contours with the simple approximation, it will be visualized as having a blue contour drawn completely around the rectangle as seen in the left image. You will not be able to get the right image by simply drawing the contours onto the image. In addition, you want to compare two sets of contours - the simplified version on the right with its full representation on the left. Specifically, you need to replace the cv2.CHAIN_APPROX_SIMPLE flag with cv2.CHAIN_APPROX_NONE to get the full representation. Take a look at the OpenCV doc on findContours for more details: http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html#findcontours
In addition, even though you draw the contours onto the image, it doesn't display the results. You'll need to call cv2.imshow for that. However, drawing the contours themselves will not show you the difference between the full and simplified version. The tutorial mentions that you need to draw circles at each contour point so we shouldn't use cv2.drawContours for this task. What you should do is extract out the contour points and draw circles at each point.
As such, create two images like so:
# Your code
import numpy as np
import cv2
im = cv2.imread('test.jpg')
imgray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(imgray,127,255,0)
## Step #1 - Detect contours using both methods on the same image
contours1, _ = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
contours2, _ = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
## Step #2 - Reshape to 2D matrices
contours1 = contours1[0].reshape(-1,2)
contours2 = contours2[0].reshape(-1,2)
## Step #3 - Draw the points as individual circles in the image
img1 = im.copy()
img2 = im.copy()
for (x, y) in contours1:
    cv2.circle(img1, (x, y), 1, (255, 0, 0), 3)
for (x, y) in contours2:
    cv2.circle(img2, (x, y), 1, (255, 0, 0), 3)
Take note that the above code is for OpenCV 2. For OpenCV 3, there is an additional output to cv2.findContours that is the first output which you can ignore in this case:
_, contours1, _ = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
_, contours2, _ = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
Now let's walk through the code slowly. The first part of the code is what you provided. Now we move onto what is new.
Step #1 - Detect contours using both methods
Using the thresholded image, we detect contours using both the full and simple approximations. This gets stored in two lists, contours1 and contours2.
Step #2 - Reshape to 2D matrices
The contours themselves get stored as a list of NumPy arrays. For the simple image provided, there should only be one contour detected, so extract out the first element of the list, then use numpy.reshape to reshape the 3D matrices into their 2D forms where each row is a (x, y) point.
Step #3 - Draw the points as individual circles in the image
The next step would be to take each (x, y) point from each set of contours and draw them on the image. We make two copies of the original image in colour form, then we use cv2.circle and iterate through each pair of (x, y) points for both sets of contours and populate two different images - one for each set of contours.
Now, to get the figure you see above, there are two ways you can do this:
Create an image that stores both of these results together side by side, then show this combined image.
Use matplotlib, combined with subplot and imshow so that you can display two images in one window.
I'll show you how to do it using both methods:
Method #1
Simply stack the two images side by side, then show the image after:
out = np.hstack([img1, img2])
# Now show the image
cv2.imshow('Output', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
I stack them horizontally so that they are a combined image, then show this with cv2.imshow.
Method #2
You can use matplotlib:
import matplotlib.pyplot as plt
# Spawn a new figure
plt.figure()
# Show the first image on the left column
plt.subplot(1,2,1)
plt.imshow(img1[:,:,::-1])
# Turn off axis numbering
plt.axis('off')
# Show the second image on the right column
plt.subplot(1,2,2)
plt.imshow(img2[:,:,::-1])
# Turn off the axis numbering
plt.axis('off')
# Show the figure
plt.show()
This should display both images in separate subfigures within an overall figure window. If you take a look at how I'm calling imshow here, you'll see that I am swapping the RGB channels because OpenCV reads in images in BGR format. If you want to display images with matplotlib, you'll need to reverse the channels as the images are in RGB format (as they should be).
To address your question in your comments, you would take which contour structure you want (contours1 or contours2) and search the contour points. contours is a list of all possible contours, and within each contour is a 3D matrix that is shaped in a N x 1 x 2 format. N would be the total number of points that represent the contour. I'm going to remove the singleton second dimension so we can get this to be a N x 2 matrix. Also, let's use the full representation of the contours for now:
points = contours1[0].reshape(-1,2)
I am going to assume that your image only has one object, hence my indexing into contours1 with index 0. I unravel the matrix so that it becomes a single row vector, then reshape the matrix so that it becomes N x 2. Next, we can find the minimum point by:
min_x = np.argmin(points[:,0])
min_point = points[min_x,:]
np.argmin finds the location of the smallest value in an array that you supply. In this case, we want to operate along the x coordinate, or the columns. Once we find this location, we simply index into our 2D contour point array and extract out the contour point.
You should add cv2.imshow("Title", img) at the end of your code. It should look like this:
import numpy as np
import cv2
im = cv2.imread('test.jpg')
imgray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(imgray,127,255,0)
contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(im, contours, -1, (0,255,0), 3)
cv2.imshow("title", im)
cv2.waitKey()
Add these 2 lines at the end:
cv2.imshow("title", im)
cv2.waitKey()
Also, be aware that you have img instead of im in your last line.

Extracting images from contours

I am trying to figure out how to make a script that cuts out images from a sheet of images. I can't understand what to do after I get the contours of the images. My train of thought is to load a sheet, convert it to grayscale, find the contours, use them to cut out the images from the original colored image and save them individually.
import numpy as np
from sys import argv
from PIL import Image
from skimage import measure
# Initialization
spritesToFind = argv[1]
spriteSize = argv[2]
sheet = Image.open(argv[3])
# To grayscale, so contour finding is easy
grayscale = sheet.convert('L')
# Let numpy do the heavy lifting for converting pixels to black or white
data = np.asarray(grayscale).copy()
# Find the contours we need
contours = measure.find_contours(data, 0.8)
# Now we put it back in PIL land
sprite = Image.fromarray(data)
sprite.save(str(spritesToFind), "PNG")
If you just want to cut out the smallest rectangle containing the contour, you can use the (x,y) coordinates in the contour to create a bounding box going from (min(x), min(y)) to (max(x), max(y)).
If you want to zero out everything not inside a contour, you should look into how to determine whether a point is within a polygon, and then zero out every point that is not inside the contour.
In the following, contours is a list of contours found using measure.find_contours, and x is your image. This shows both how to extract a rectangular bounding box (image_patch) and how to extract just the pixels that are part of the polygon (new_image):
from matplotlib import path

contour = contours[0]
path = path.Path(contour)
# find_contours returns float coordinates, so cast the bounds to int
top = int(min(contour[:,0]))
bottom = int(max(contour[:,0]))
left = int(min(contour[:,1]))
right = int(max(contour[:,1]))
new_image = np.zeros([bottom-top, right-left])
for xr in range(new_image.shape[0]):
    for yr in range(new_image.shape[1]):
        if path.contains_point([top+xr, left+yr]):
            new_image[xr, yr] = x[top+xr, left+yr]
image_patch = x[top:bottom, left:right]
plt.imshow(image_patch)
plt.show()
plt.imshow(new_image)
plt.show()
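To connect this back to the original goal of saving each sprite individually, a hedged sketch: sheet and contours come from the question's code, and the output file names are made up.
from PIL import Image
import numpy as np

# Crop each contour's bounding box from the original colour sheet and
# save it as its own file. find_contours returns float (row, col)
# coordinates, hence the int casts.
data_colour = np.asarray(sheet)
for i, contour in enumerate(contours):
    top, bottom = int(min(contour[:, 0])), int(max(contour[:, 0]))
    left, right = int(min(contour[:, 1])), int(max(contour[:, 1]))
    patch = data_colour[top:bottom, left:right]
    Image.fromarray(patch).save('sprite_{}.png'.format(i))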
