Approximate polygons from image (map) - python

So I have this map:
I've already done some filtering and now I have the following image:
Finally, I want to find the coordinates of each of the polygons in the image, but using the find_contours function is not giving me good results, as you can see in the following image.
On the left side you can see all of the contours (polygons) found (one color for each), and on the right side an example of a polygon it found (which is clearly wrong).
Given the black-and-white image, how can I find the coordinates of each of the polygons in it?

When I downloaded the image, the lines were quite disjoint, so the first thing I did was dilate it:
import numpy as np
from skimage import io, measure, morphology

img = io.imread('img.png', as_grey=True)  # as_gray in newer scikit-image releases
img = morphology.binary_dilation(img, selem=np.ones((5, 5)))
Then I inverted the image, so the lines became black and the enclosed areas became white:
img_inverted = np.invert(img)
Once you have the inverted image, label it so each region can be processed separately:
img_inverted_labeled = measure.label(img_inverted)
n_lbls = np.unique(img_inverted_labeled)[1:]
Now collect each labeled region's contour (polygon):
pols = []
for i in n_lbls:
    img_part = (img_inverted_labeled == i)
    pols.append(measure.find_contours(img_part, level=0)[0])
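If you want simplified polygon vertices rather than the dense contour points that find_contours returns, scikit-image's measure.approximate_polygon can reduce each contour. A minimal sketch, assuming the pols list built above (the tolerance value is a guess you would tune for your map):
from skimage import measure
# Douglas-Peucker simplification of each dense contour; tolerance is in pixels
simplified = [measure.approximate_polygon(p, tolerance=2.0) for p in pols]
for k, poly in enumerate(simplified):
    print('polygon %d has %d vertices' % (k, len(poly)))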
Hope this helps.

Related

Feature extraction and take color histogram

I am working on feature extraction for image processing. I have a photo of a bird from which I have to extract the bird area and determine what color the bird is. I used the Canny edge detector to get the edges of the bird.
How can I extract only the bird area and make the background blue?
An OpenCV solution would also be fine.
%matplotlib inline
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage import io, feature

# joining path components avoids backslash-escape problems on Windows
filename = os.path.join(os.getcwd(), 'image', 'image_bird.jpeg')
bird = io.imread(filename, as_grey=True)  # as_gray in newer scikit-image releases
plt.imshow(bird)

edges = feature.canny(bird, sigma=1)
plt.imshow(edges)
The actual bird image can be taken from the bird link.
1. Identify the edges in your image.
2. Binarize the image via automatic thresholding.
3. Use contour detection to identify black regions which are inside a white region and merge them with the white region. (Mockup, image may vary slightly.)
4. Use the created image as a mask to color the background: simply set each background (black) pixel to its respective color.
As you can see, the approach is far from perfect, but it should give you a general idea of how to accomplish your task. The final image quality might be improved by slightly eroding the mask to tighten it to the contours of the bird. You can then also use the mask to calculate your color histogram by taking only foreground pixels into account (see the sketch below).
Edit: Look here:
Eroded mask
Final image
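To illustrate the histogram point above, here is a minimal sketch of a mask-restricted color histogram with OpenCV; the filenames, and the assumption that the mask is white (255) on the bird, are mine and not part of the original answer:
import cv2
image = cv2.imread('bird.jpg')                       # hypothetical input file
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)  # hypothetical binary mask, 255 = bird
# one 32-bin histogram per BGR channel, counting only pixels where mask > 0
histograms = [cv2.calcHist([image], [ch], mask, [32], [0, 256]) for ch in range(3)]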
Based on this article https://www.pyimagesearch.com/2016/04/11/finding-extreme-points-in-contours-with-opencv/
and this question, CV - Extract differences between two images,
I wrote some Python code, shown below. As my predecessor said, it is also far from perfect. The main disadvantage of this code is the set of constants that must be chosen manually: minThres (50), maxThres (100), the dilate iteration count, and the erode iteration count.
import cv2
import numpy as np

pictureRaw = cv2.imread("bird.jpg")
## convert to gray
pictureGray = cv2.cvtColor(pictureRaw, cv2.COLOR_BGR2GRAY)
## blur
pictureGaussian = cv2.GaussianBlur(pictureGray, (7,7), 0)
## canny edge detector - you must specify threshold values
pictureCanny = cv2.Canny(pictureGaussian, 50, 100)
## perform a series of dilations + erosions to remove any small regions of noise
pictureDilate = cv2.dilate(pictureCanny, None, iterations=20)
pictureErode = cv2.erode(pictureDilate, None, iterations=5)
## find the nonzero regions in the eroded image
imask2 = pictureErode > 0
## create a blue canvas shaped like pictureRaw
canvas = np.full_like(pictureRaw, np.array([255,0,0]), dtype=np.uint8)
## copy the original pixels through the mask
canvas[imask2] = pictureRaw[imask2]
cv2.imwrite("result.png", canvas)

Python: How to keep region inside canny close edge's area

I'm using the Canny algorithm to find the edges.
Next, I want to keep the region inside the closed curves.
My code sample is:
import cv2
import numpy as np

img1 = cv2.imread('coins.jpg')
img = cv2.imread('coins.jpg', 0)
edges = cv2.Canny(img, 120, 200)

# mark the edge pixels, then paint them red on a white background
markers = np.zeros_like(img)
markers[edges < 50] = 0
markers[edges == 255] = 1
img1[markers == 1] = [0,0,255]
img1[markers == 0] = [255,255,255]

cv2.imshow('Original', img)
cv2.imshow('Canny', img1)
# wait for the user to press a key
cv2.waitKey(0)
My output image is
I want to show the original pixel values inside the coins. Is that possible?
I suggest you use a union-find structure to get the connected components of the white pixels of your img1. (You can find the details of this algorithm on Wikipedia: https://en.wikipedia.org/wiki/Disjoint-set_data_structure.)
Once you have the connected components, my best idea is to take the connected components that do not contain any point on the border of the picture (they should correspond to the interiors of your coins) and color them with the colors of img.
Sure, you may have some kind of triangles between your coins that will still be colored, but you could remove the corresponding connected components by hand.
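A concrete sketch of that idea, using OpenCV's built-in connected components instead of a hand-rolled union-find (assuming img1 is the red-on-white image produced by the code above and coins.jpg is the original):
import cv2
import numpy as np
img1_gray = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
white = (img1_gray == 255).astype(np.uint8)
n_labels, labels = cv2.connectedComponents(white)
# labels of white components that touch the image border (the outer background)
border = set(labels[0, :]) | set(labels[-1, :]) | set(labels[:, 0]) | set(labels[:, -1])
interior = [lbl for lbl in range(1, n_labels) if lbl not in border]
# restore the original pixel values inside the coins
original = cv2.imread('coins.jpg')
result = img1.copy()
for lbl in interior:
    result[labels == lbl] = original[labels == lbl]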
Not really. The coin outlines are not continuous, so any kind of filling will leak.
You can repair the edges by some form of morphological processing (erosion), but this will bring the coins in contact and create unreachable regions between them.
As a fallback solution, you can try a Hough circle detector and mask inside the disks.
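A rough sketch of that fallback; every Hough parameter below is an illustrative guess that would need tuning for the actual coin image:
import cv2
import numpy as np
img1 = cv2.imread('coins.jpg')
gray = cv2.imread('coins.jpg', 0)
blurred = cv2.medianBlur(gray, 5)
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, 1, 40,
                           param1=200, param2=50, minRadius=20, maxRadius=80)
mask = np.zeros_like(gray)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(mask, (int(x), int(y)), int(r), 255, -1)  # filled disk
# keep the original pixels inside the detected disks, white elsewhere
result = np.full_like(img1, 255)
result[mask == 255] = img1[mask == 255]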

Draw circle around imperfect circular objects

I have this image of an eye where I want to get the center of the pupil:
Original Image
I applied an adaptive threshold as well as a Laplacian to the image using this code:
import cv2
import numpy as np
from matplotlib import pyplot as plt

# raw strings avoid backslash escapes in Windows paths
img = cv2.imread(r'C:\Users\User\Documents\module4\input\left.jpg', 0)
image = cv2.medianBlur(img, 5)
th = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                           cv2.THRESH_BINARY, 11, 2)
laplacian = cv2.Laplacian(th, cv2.CV_64F)
cv2.imshow('output', laplacian)
cv2.imwrite(r'C:\Users\User\Documents\module4\output\output.jpg', laplacian)
cv2.waitKey(0)
cv2.destroyAllWindows()
and the resulting image looks like this: Resulting image by applying adaptive threshold
I want to draw a circle around the smaller inner circle and get its center. I've tried using contours and the circular Hough transform, but neither correctly detects any circles in the image.
Here is my code for Circular Hough Transform:
import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread(r'C:\Users\User\Documents\module4\output\output.jpg', 0)
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=50, param2=30, minRadius=0, maxRadius=0)
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    # draw the outer circle
    cv2.circle(img, (i[0], i[1]), i[2], (255,255,0), 2)
    # draw the center of the circle
    cv2.circle(img, (i[0], i[1]), 2, (255,0,255), 3)
cv2.imshow('detected circles', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
And here is the code for applying contour:
import cv2
import numpy as np

img = cv2.imread(r'C:\Users\User\Documents\module4\output\output.jpg', 0)
# OpenCV 3.x returns three values here; OpenCV 4.x returns only (contours, hierarchy)
_, contours, hierarchy = cv2.findContours(img, 1, 2)
cnt = contours[0]
(x, y), radius = cv2.minEnclosingCircle(cnt)
center = (int(x), int(y))
radius = int(radius)
img = cv2.circle(img, center, radius, (0,255,255), 2)
cv2.imshow('contour', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
The resulting image of this code looks exactly like the image to which I applied the adaptive threshold. I would really appreciate it if anyone could help me solve my problem; I've been stuck with this for a while now. Also, if any of you can suggest a better way to detect the center of the pupil besides this method, I would really appreciate that too.
Try applying edge detection instead of thresholding after filtering the original image, and then apply the Hough circle transform.
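A minimal sketch of that suggestion; the filename and all parameter values are illustrative guesses. Note that with cv2.HOUGH_GRADIENT, cv2.HoughCircles runs Canny internally (param1 is the upper Canny threshold), so the filtered image can be passed directly:
import cv2
import numpy as np
img = cv2.imread('left.jpg', 0)   # hypothetical filename
blurred = cv2.medianBlur(img, 5)  # filter before edge detection
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=150, param2=30, minRadius=0, maxRadius=0)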
My thought would be to use the Hough transform like you're doing, but another method might be template matching, like this. Assuming you know the approximate radius of the pupil in the image, you can try to build a template.
import numpy as np
import matplotlib.pyplot as plt
from skimage import io, draw, feature

img = io.imread('Wjioe.jpg')
# just use grayscale, but you could make a separate template for each r,g,b channel
img = np.mean(img, axis=2)
(M, N) = img.shape
mm = M - 20
nn = N - 20
template = np.zeros([mm, nn])

## Create template ##
# darkest inner circle (pupil); draw.circle was renamed draw.disk in newer scikit-image
(rr, cc) = draw.circle(mm/2, nn/2, 4.5, shape=template.shape)
template[rr, cc] = -2
# iris (circle surrounding pupil)
(rr, cc) = draw.circle(mm/2, nn/2, 8, shape=template.shape)
template[rr, cc] = -1
# optional - pupil reflective spot (if centered)
(rr, cc) = draw.circle(mm/2, nn/2, 1.5, shape=template.shape)
template[rr, cc] = 1
plt.imshow(template)

normccf = feature.match_template(img, template, pad_input=True)
# center pixel of the best match
(i, j) = np.unravel_index(np.argmax(normccf), normccf.shape)
plt.imshow(img)
plt.plot(j, i, 'r*')
You're defining a 3-channel color for a grayscale image. Based on my test, it will only read the first value in that tuple. Because the first value in your other colors (in the middle code) is 255, it draws a fully white circle, and because the first value in your last color (in your last code) is 0, it draws a fully black circle, which you can't see.
Just change your color values to a single-channel color (an int between 0 and 255) and you'll be fine.
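For example, the drawing call from the contour code above would become (a one-line sketch, reusing img, center, and radius from that snippet):
# a single gray value instead of a BGR tuple, since img was loaded single-channel
img = cv2.circle(img, center, radius, 255, 2)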

Filling "holes" of an image in python with cv2 not working

I am trying to fill the "holes" in red blood cells in an image after performing a binary threshold. Almost all red blood cells have a black center when the binary threshold is inverted. I want to remove those centers.
Example image:
This is my code:
import cv2
from PIL import Image
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.morphology import watershed
image = cv2.imread("blood_cells.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
darker = cv2.equalizeHist(gray)
ret,thresh = cv2.threshold(darker,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
newimg = cv2.bitwise_not(thresh)
im2, contours, hierarchy = cv2.findContours(newimg,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    cv2.drawContours(newimg, [cnt], 0, 255, -1)
And it worked. I filled the holes using findContours() and drawContours().
But when I try to compute the Euclidean distance transform in order to apply the watershed algorithm, I get only 52 unique segments, although there should be more. Here is the code, in case it is helpful:
D = ndimage.distance_transform_edt(newimg)
localMax = peak_local_max(D, indices=False, min_distance=20, labels=thresh)
markers = ndimage.label(localMax, structure=np.ones((3, 3)))[0]
labels = watershed(-D, markers, mask=thresh)
print("[INFO] {} unique segments found".format(len(np.unique(labels)) - 1))
I tried to segment each cell, but the results were quite off. Only the inside of the cells that had "holes" got segmented.
The first image shows my result; the second shows roughly how it should look:
I then filled the holes manually, just to see if my segmentation code works, and it does. The error should be somewhere between the part where I drew the contours and the part where I calculated the Euclidean distance. Could anybody explain to me what could be wrong? I am clueless.
Your problem lies in the following line:
labels = watershed(-D, markers, mask=thresh)
You're passing as mask an inverted, uncorrected result from thresholding:
Giving you this bad segmentation:
Whereas you should be passing the corrected, filled-in mask:
labels = watershed(-D, markers, mask=newimg)
Giving you the result you probably expect:

Extracting images from contours

I am trying to figure out how to write a script that cuts out images from a sheet of images. I can't understand what to do after I get the contours of the images. My plan is to load a sheet, convert it to grayscale, find the contours, use them to cut the images out of the original colored image, and save them individually.
import numpy as np
from sys import argv
from PIL import Image
from skimage import measure
# Initialization
spritesToFind = argv[1]
spriteSize = argv[2]
sheet = Image.open(argv[3])
# To grayscale, so contour finding is easy
grayscale = sheet.convert('L')
# Let numpy do the heavy lifting for converting pixels to black or white
data = np.asarray(grayscale).copy()
# Find the contours we need
contours = measure.find_contours(data, 0.8)
# Now we put it back in PIL land
sprite = Image.fromarray(data)
sprite.save(str(spritesToFind), "PNG")
If you just want to cut out the smallest rectangle containing the contour, you can use the (x, y) coordinates in the contour to create a bounding box going from (min(x), min(y)) to (max(x), max(y)).
If you want to zero out everything not inside a contour, you should look into how to determine whether a point is within a polygon, and then zero out every point that is not inside the contour.
In the following, contours is a list of contours found using measure.find_contours, and x is your image. This shows both how to extract a rectangular bounding box (image_patch) and how to extract just the pixels that are part of the polygon (new_image):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import path

contour = contours[0]
poly_path = path.Path(contour)

# contour coordinates are floats, so cast the bounding box to ints
top = int(min(contour[:,0]))
bottom = int(max(contour[:,0]))
left = int(min(contour[:,1]))
right = int(max(contour[:,1]))

new_image = np.zeros([bottom-top, right-left])
for xr in range(new_image.shape[0]):
    for yr in range(new_image.shape[1]):
        if poly_path.contains_point([top+xr, left+yr]):
            new_image[xr, yr] = x[top+xr, left+yr]

image_patch = x[top:bottom, left:right]
plt.imshow(image_patch)
plt.show()
plt.imshow(new_image)
plt.show()
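As a side note, the pixel-by-pixel loop can be slow for large patches; matplotlib's Path also provides contains_points, which tests all coordinates at once. A vectorized sketch under the same assumptions (poly_path, top, bottom, left, right, and x as above):
import numpy as np
# build all (row, col) coordinates of the patch in one array
rows, cols = np.mgrid[top:bottom, left:right]
coords = np.column_stack([rows.ravel(), cols.ravel()])
inside = poly_path.contains_points(coords).reshape(rows.shape)
new_image = np.where(inside, x[top:bottom, left:right], 0)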
