Issue with precision of cv2.findContours - python

I'm trying to port some Matlab routines that do shape analysis to Python, and I've run into an issue with cv2.findContours: it can't separate two contours when the difference between them is on the scale of one pixel, whereas Matlab's bwboundaries captures the difference in contours correctly. Any idea how I could fix this?
Here's a working example.
In Python:
import numpy as np
import cv2
import matplotlib.pyplot as plt
plt.interactive(True)
X,Y = np.meshgrid(np.linspace(-1,1,100),np.linspace(-1,1,100))
deltaX = np.min(np.diff(X[0,:]))
mask = np.zeros(X.shape)
I1 = X**2/0.75**2+Y**2/0.5**2 < 1
I2 = X**2/(0.75-deltaX)**2+Y**2/0.4**2 < 1
mask[I1] = 1
mask[I2]=0
plt.pcolormesh(mask)
contours, hierarchy = cv2.findContours(mask.astype('uint8'),cv2.RETR_CCOMP,cv2.CHAIN_APPROX_NONE)
[plt.plot(contours[r][:,0,0],contours[r][:,0,1],'m') for r in range(0,len(contours))]
In Matlab:
[X,Y] = meshgrid(linspace(-1,1,100),linspace(-1,1,100));
deltaX = min(diff(X(1,:)));
mask = zeros(size(X));
I1 = X.^2/0.75^2+Y.^2/0.5^2 < 1;
I2 = X.^2/(0.75-deltaX)^2+Y.^2/0.4^2 < 1;
mask(I1) = 1;
mask(I2)=0;
pcolor(mask);shading flat
[Boun,Label,nobj,Adj] = bwboundaries(mask);
hold on
for i = 1:length(Boun)
    plot(Boun{i}(:,2), Boun{i}(:,1), 'm', 'linewidth', 2);
end
giving, respectively:

Related

vectorizing custom python function with numpy array

Not sure if that is the correct terminology. Basically, I'm trying to take a black-and-white image and first transform it so that all white pixels that border black pixels stay white, while everything else turns black. That part of the program works fine and is done in find_edges. Next I need to calculate the distance from each element in the image to the closest white pixel. Right now I am doing that with a for-loop that is insanely slow. Is there a way to write the find_nearest_edge function purely with numpy, without needing a for-loop to call it on each element? Thanks.
####
from PIL import Image
import numpy as np
from scipy.ndimage import binary_erosion
####
def find_nearest_edge(arr, point):
    w, h = arr.shape
    x, y = point
    xcoords, ycoords = np.meshgrid(np.arange(w), np.arange(h))
    target = np.sqrt((xcoords - x)**2 + (ycoords - y)**2)
    target[arr == 0] = np.inf
    shortest_distance = np.min(target[target > 0.0])
    return shortest_distance

def find_edges(img):
    img = img.convert('L')
    img_np = np.array(img)
    kernel = np.ones((3, 3))
    edges = img_np - binary_erosion(img_np, kernel)*255
    return edges

a = Image.open('a.png')
x, y = a.size
edges = find_edges(a)
out = Image.fromarray(edges.astype('uint8'), 'L')
out.save('b.png')

dists = []
for _x in range(x):
    for _y in range(y):
        dist = find_nearest_edge(edges, (_x, _y))
        dists.append(dist)
print(dists)
Images:
You can use KDTree to compute distances fast.
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import binary_erosion
from scipy.spatial import KDTree
def find_edges(img):
    img_np = np.array(img)
    kernel = np.ones((3, 3))
    edges = img_np - binary_erosion(img_np, kernel)*255
    return edges

def find_closest_distance(img):
    # NOTE: assuming input is binary image and white is any non-zero value!
    white_pixel_points = np.array(np.where(img))
    tree = KDTree(white_pixel_points.T)
    img_meshgrid = np.array(np.meshgrid(np.arange(img.shape[0]),
                                        np.arange(img.shape[1]))).T
    distances, _ = tree.query(img_meshgrid)
    return distances
test_image = np.zeros((200, 200))
rectangle = np.ones((30, 80))
test_image[20:50, 60:140] = rectangle
test_image[150:180, 60:140] = rectangle
test_image[60:140, 20:50] = rectangle.T
test_image[60:140, 150:180] = rectangle.T
test_image = test_image * 255
edge_image = find_edges(test_image)
distance_image = find_closest_distance(edge_image)
fig, axes = plt.subplots(1, 3, figsize=(12, 5))
axes[0].imshow(test_image, cmap='Greys_r')
axes[1].imshow(edge_image, cmap='Greys_r')
axes[2].imshow(distance_image, cmap='Greys_r')
plt.show()
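A related option, not used in the answer above: scipy.ndimage.distance_transform_edt computes, for every pixel, the Euclidean distance to the nearest zero-valued pixel, which gives the same kind of distance map directly. A minimal sketch, assuming edge_image is the binary edge image produced above:
from scipy.ndimage import distance_transform_edt
# invert the mask so that the edge (white) pixels become the zeros
# that distances are measured to
distance_image_edt = distance_transform_edt(edge_image == 0)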
You can make your code 25X faster by just changing find_nearest_edge as follows. Many other optimizations are possible, but this is the biggest bottleneck in your code.
from numba import njit

@njit
def find_nearest_edge(arr, point):
    x, y = point
    shortest_distance = np.inf
    for i in range(arr.shape[0]):
        for j in range(arr.shape[1]):
            if arr[i, j] == 0:
                continue
            shortest_distance = min(shortest_distance, (i - x)**2 + (j - y)**2)
    return np.sqrt(shortest_distance)
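A quick usage note: the first call triggers numba's JIT compilation, so time a second call when benchmarking. A minimal sketch, assuming edges is the array returned by find_edges in the question above:
dist = find_nearest_edge(edges, (10, 20))   # first call compiles the function
dist = find_nearest_edge(edges, (10, 20))   # later calls run at compiled speed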

How to create voronoi art using python?

I am trying to create a simple Voronoi image based on another Stack Overflow question I've seen: Create Voronoi art with rounded region edges
In the first example the user provides, he is "painting manually" each pixel based on the minimum distance to each centroid (each associated with a color). I have tried replicating his code but am having some trouble.
Voronoi.py:
import random
import numpy as np
import itertools as it
import matplotlib.pyplot as plt
n_centroids = 10
h = 5
w = 5
x = 5
y = 5
img = [x,y]
centroids = [(random.randint(0, h), random.randint(0, w)) for _ in range(n_centroids)]
colors = np.array([np.random.choice(range(256), size=3) for _ in range(n_centroids)]) / 255
for x, y in it.product(range(h), range(w)):
    distances = np.sqrt([(x - c[0])**2 + (y - c[1])**2 for c in centroids])
    centroid_i = np.argmin(distances)
    img[x, y] = colors[centroid_i]
plt.imshow(img, cmap='gray')
plt.show()
When I try to assign img[x,y] = colors[centroid_i], I keep getting this error:
TypeError: list indices must be integers or slices, not tuple
I believe it is due to how I am declaring img but cannot quite figure it out.
This is what the end result should look like:
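The TypeError happens because img is declared as a plain Python list ([x, y]), and a list cannot be indexed with a tuple like img[x, y]. A minimal sketch of a declaration that does support per-pixel colour assignment (the shape shown just mirrors the h and w used above):
img = np.zeros((h, w, 3))   # h x w pixels, 3 colour channels, initially black
With a numpy array like this, img[x, y] = colors[centroid_i] assigns an RGB triple to a single pixel, and plt.imshow(img) can render the result.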

Generating boolean matrix from image

I am trying to classify an image by selecting a pixel at random, then finding all pixels in the image that are a certain euclidian distance in color space from that original pixel. My current script takes a prohibitively long time. I wonder if I am able to use this equation to generate a boolean matrix that will allow quicker manipulation of the image.
(x - cx)^2 + (y - cy)^2 + (z - cz)^2 < r^2
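In numpy terms, I imagine that boolean matrix would look something like this (just an untested sketch; the file name and seed pixel are placeholders):
import numpy as np
from PIL import Image
rgb = np.array(Image.open('picture.png'), dtype=float)[..., :3]   # H x W x 3, alpha dropped if present
cx, cy, cz = rgb[50, 50]     # colour of a hypothetical seed pixel
r = 30.0                     # threshold radius in colour space
# evaluate (x - cx)^2 + (y - cy)^2 + (z - cz)^2 < r^2 for every pixel at once
mask = np.sum((rgb - np.array([cx, cy, cz]))**2, axis=-1) < r**2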
Here is the code I am using now:
import PIL, glob, numpy, random, math, time
def zone_map(picture, threshold):
    im = PIL.Image.open(picture)
    pix = im.load()
    [width, height] = im.size
    mask = numpy.zeros((width, height))
    while 0 in mask:
        x = random.randint(0, width - 1)    # randint is inclusive on both ends
        y = random.randint(0, height - 1)
        if mask[x, y] == 0:
            point = pix[x, y]
            to_average = {(x, y): pix[x, y]}
            start = time.clock()
            for row in range(0, width):
                for column in range(0, height):
                    if euclid_dist(point, pix[row, column]) <= threshold:
                        to_average[(row, column)] = pix[row, column]
            #to_average = in_sphere(pix, point)
            end = time.clock()
            print(end - start)
            to_average_sum = (0, 0, 0)
            for value in to_average.values():
                to_average_sum = tuple_sum(to_average_sum, value)
            average = tuple_divide(to_average_sum, len(to_average.values()))
            for coordinate in to_average.keys():
                pix[coordinate] = average
                mask[coordinate] = 1
            unique, counts = numpy.unique(mask, return_counts=True)
            progress = dict(zip(unique, counts))
            print((progress[1] / progress[0]) * 100, '%')
    im.save()
    return im

def euclid_dist(tuple1, tuple2):
    """
    Finds the Euclidean distance between two points in n-dimensional space.
    """
    tot_sq = 0
    for num1, num2 in zip(tuple1, tuple2):
        tot_sq += (num1 - num2)**2
    return math.sqrt(tot_sq)

def tuple_sum(tuple1, tuple2):
    """
    Returns a tuple comprised of the element-wise sums of the input tuples.
    """
    sums = []
    for num1, num2 in zip(tuple1, tuple2):
        sums.append(num1 + num2)
    return tuple(sums)

def tuple_divide(tuple1, divisor):
    """
    Divides the numerical values of a tuple by divisor, yielding integer results.
    """
    quotients = []
    for value in tuple1:
        quotients.append(int(round(value / divisor)))
    return tuple(quotients)
Any information on how to incorporate the described boolean matrix, or any other ideas on how to speed this up, would be greatly appreciated.
Just load the image as a numpy array, and then use array operations instead of looping over pixels:
import numpy as np
import matplotlib.pyplot as plt
import PIL
def zone_map(picture, threshold, show=True):
    with PIL.Image.open(picture) as img:
        rgb = np.array(img, dtype=np.float)
    height, width, _ = rgb.shape
    mask = np.zeros_like(rgb)
    while not np.any(mask):
        # get random pixel
        position = np.random.randint(height), np.random.randint(width)
        color = rgb[position]
        # get euclidean distance of all pixels in colour space
        distance = np.sqrt(np.sum((rgb - color)**2, axis=-1))
        # threshold
        mask = distance < threshold
    if show:  # show output
        fig, (ax1, ax2) = plt.subplots(1, 2)
        ax1.imshow(rgb.astype(np.uint8))
        ax2.imshow(mask, cmap='gray')
        fig.suptitle('Random color: {}'.format(color))
    return mask

def test():
    zone_map("Lenna.jpg", threshold=20)
    plt.show()

Python: How to get real depth from disparity map

I am a complete beginner. I am trying to obtain a real depth map from a left and right image. I've used OpenCV to get the disparity map via block matching, as you can see in the code below.
import cv2
import cv2.cv as cv
import sys
import numpy as np
def getDisparity(imgLeft, imgRight, method="BM"):
    gray_left = cv2.cvtColor(imgLeft, cv.CV_BGR2GRAY)
    gray_right = cv2.cvtColor(imgRight, cv.CV_BGR2GRAY)
    print gray_left.shape
    c, r = gray_left.shape
    if method == "BM":
        sbm = cv.CreateStereoBMState()
        disparity = cv.CreateMat(c, r, cv.CV_32F)
        sbm.SADWindowSize = 11
        sbm.preFilterType = 1
        sbm.preFilterSize = 5
        sbm.preFilterCap = 61
        sbm.minDisparity = -50
        sbm.numberOfDisparities = 112
        sbm.textureThreshold = 507
        sbm.uniquenessRatio = 0
        sbm.speckleRange = 8
        sbm.speckleWindowSize = 0
        gray_left = cv.fromarray(gray_left)
        gray_right = cv.fromarray(gray_right)
        cv.FindStereoCorrespondenceBM(gray_left, gray_right, disparity, sbm)
        disparity_visual = cv.CreateMat(c, r, cv.CV_8U)
        cv.Normalize(disparity, disparity_visual, 0, 255, cv.CV_MINMAX)
        disparity_visual = np.array(disparity_visual)
    elif method == "SGBM":
        sbm = cv2.StereoSGBM()
        sbm.SADWindowSize = 9
        sbm.numberOfDisparities = 0
        sbm.preFilterCap = 63
        sbm.minDisparity = -21
        sbm.uniquenessRatio = 7
        sbm.speckleWindowSize = 0
        sbm.speckleRange = 8
        sbm.disp12MaxDiff = 1
        sbm.fullDP = False
        disparity = sbm.compute(gray_left, gray_right)
        disparity_visual = cv2.normalize(disparity, alpha=0, beta=255, norm_type=cv2.cv.CV_MINMAX, dtype=cv2.cv.CV_8U)
    return disparity_visual

imgLeft = cv2.imread('1.png')
imgRight = cv2.imread('2.png')

try:
    method = "BM"
except IndexError:
    method = "BM"

disparity = getDisparity(imgLeft, imgRight, method)
cv2.imshow("disparity", disparity)
#cv2.imshow("left", imgLeft)
#cv2.imshow("right", imgRight)
cv2.waitKey(0)
My question is: what is the easiest way to obtain a real depth map (distance) from the disparity using Python?
In order to calculate depth for stereo, you need to know the translation and rotation between the cameras. If you have that, you can take each disparity value and use triangulation to calculate the depth for that 3D point.
I recommend reading http://www.robots.ox.ac.uk/~vgg/hzbook/
for a detailed explanation.
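If you have the calibration, OpenCV can do the triangulation for you: cv2.stereoRectify returns a 4x4 reprojection matrix Q, and cv2.reprojectImageTo3D turns a disparity map into per-pixel 3D coordinates. A minimal sketch, assuming K1, D1, K2, D2 (camera matrices and distortion coefficients), R and T (rotation and translation between the cameras), the image size, and a raw disparity map are already available:
import cv2
import numpy as np
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2,
                                                  (width, height), R, T)
# if the disparity comes from StereoBM/StereoSGBM.compute(), divide it by 16.0 first
points_3d = cv2.reprojectImageTo3D(disparity.astype(np.float32), Q)
depth = points_3d[:, :, 2]   # the Z coordinate is the depth of each pixel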
Assuming your cameras are calibrated and the images rectified, you can use the formula provided by this tutorial, which is:
disparity = baseline * focal_length / depth
So,
depth = baseline * focal_length / disparity
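As a minimal numpy sketch of that formula (the baseline and focal length below are placeholder values, and the disparity must be the raw disparity in pixels, not the 0-255 normalized image returned by getDisparity above):
import numpy as np
baseline = 0.1        # distance between the camera centres, in metres (placeholder)
focal_length = 700.0  # focal length in pixels (placeholder)
# a tiny dummy disparity map so the snippet runs on its own
disparity = np.array([[10.0, 20.0], [0.0, 40.0]], dtype=np.float32)
depth = np.zeros_like(disparity)
valid = disparity > 0                     # avoid dividing by zero
depth[valid] = baseline * focal_length / disparity[valid]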

What is a good way to get a similarity measure of two images that contain a line chart?

I have tried the dHash algorithm, applied to each image; a Hamming distance is then calculated on the two hashes, and the lower the number, the higher the similarity.
from PIL import Image
import os
import shutil
import glob
from plotData import *
def hamming_distance(s1, s2):
    # Return the Hamming distance between equal-length sequences
    if len(s1) != len(s2):
        raise ValueError("Undefined for sequences of unequal length")
    return sum(ch1 != ch2 for ch1, ch2 in zip(s1, s2))

def dhash(image, hash_size=8):
    # Grayscale and shrink the image in one step.
    image = image.convert('L').resize(
        (hash_size + 1, hash_size),
        Image.ANTIALIAS,
    )
    pixels = list(image.getdata())
    # Compare adjacent pixels.
    difference = []
    for row in xrange(hash_size):
        for col in xrange(hash_size):
            pixel_left = image.getpixel((col, row))
            pixel_right = image.getpixel((col + 1, row))
            difference.append(pixel_left > pixel_right)
    # Convert the binary array to a hexadecimal string.
    decimal_value = 0
    hex_string = []
    for index, value in enumerate(difference):
        if value:
            decimal_value += 2**(index % 8)
        if (index % 8) == 7:
            hex_string.append(hex(decimal_value)[2:].rjust(2, '0'))
            decimal_value = 0
    return ''.join(hex_string)
orig = Image.open('imageA.png')
modif = Image.open('imageB.png')
hammingDistanceValue = hamming_distance(dhash(orig),dhash(modif))
print hammingDistanceValue
Unfortunately, this approach produces false positives because it does not really look at the line-chart shape as the primary similarity feature. I guess I'd need some kind of machine-learning approach, maybe from OpenCV or similar. Can anyone point me in the right direction to something that compares with high precision?
This is the initial image to compare against a collection of similar images.
This is a positive match.
This is a false match.
Update: I added some OpenCV magic to jme's suggestion below: I try to detect significant features first. However, it still produces false positives, since the overall similarity indicator is the accumulated value over all features and does not take into account differences that can give a line chart a totally different meaning.
False Positive example
Example of preprocessed image with significant features marked as red dots
from PIL import Image
import os
import numpy as np
from scipy.interpolate import interp1d
import os.path
import shutil
import glob
from plotData import *
import cv2
from matplotlib import pyplot as plt
def load_image(path):
    #data = Image.open(path)
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray, 25, 0.01, 10)
    corners = np.int0(corners)
    for i in corners:
        x, y = i.ravel()
        cv2.circle(img, (x, y), 3, 255, -1)
    return np.mean((255 - np.array(img))**2, axis=2)

symbol = "PBYI"
x = np.arange(1000)

if not os.path.exists('clusters1DSignal/'+symbol+'/'):
    os.mkdir('clusters1DSignal/'+symbol+'/')
else:
    shutil.rmtree('clusters1DSignal/'+symbol+'/')
    os.mkdir('clusters1DSignal/'+symbol+'/')
shutil.copyfile('rendered/'+symbol+'.png', "clusters1DSignal/"+symbol+"/"+symbol+'.png')

img1 = load_image('rendered/'+symbol+'.png')
y1 = np.argmax(img1, axis=0)
f1 = interp1d(np.linspace(0, 1000, len(y1)), y1)
z1 = f1(x)

for filename in glob.iglob('rendered/*.png'):
    try:
        img2 = load_image(filename)
    except:
        continue
    y2 = np.argmax(img2, axis=0)
    f2 = interp1d(np.linspace(0, 1000, len(y2)), y2)
    z2 = f2(x)
    result = np.linalg.norm(z1 - z2)
    if result < 2100:
        print str(result) + ": " + filename
        symbolCompare = filename.split("/")[1].replace(".png", "")
        shutil.copyfile('rendered/'+symbolCompare+'.png', "clusters1DSignal/"+symbol+"/"+str(result)+"_"+symbolCompare+".png")
The approach I'd take is this: first, convert each image to a 1D signal by finding, for each x pixel, a representative y pixel where the image is red. You could take the mean of the y pixels, but for simplicity I'll just take the first one that isn't white:
# imports used by the snippets below
import numpy as np
from PIL import Image
from scipy.interpolate import interp1d

def load_image(path):
    data = Image.open(path)
    return np.mean((255 - np.array(data))**2, axis=2)

img1 = load_image("one.png")
img2 = load_image("two.png")
img3 = load_image("three.png")

y1 = np.argmax(img1, axis=0)
y2 = np.argmax(img2, axis=0)
y3 = np.argmax(img3, axis=0)
y1, y2, and y3 are 1D arrays which represent the functions in the first, second, and third images. Now we simply treat each array as a vector and find the L2 distance between them. We prefer the L2 distance because the Hamming distance would be overly sensitive for this task.
We have a slight problem: the images have different widths, so the y arrays aren't of compatible sizes. A quick-and-dirty fix is to interpolate them to a longer, common length (we'll use 1000):
f1 = interp1d(np.linspace(0, 1000, len(y1)), y1)
f2 = interp1d(np.linspace(0, 1000, len(y2)), y2)
f3 = interp1d(np.linspace(0, 1000, len(y3)), y3)
x = np.arange(1000)
z1 = f1(x)
z2 = f2(x)
z3 = f3(x)
Now we can find the distance between the images:
>>> np.linalg.norm(z1 - z2)
2608.5368359281415
>>> np.linalg.norm(z1 - z3)
5071.1340610709549
>>> np.linalg.norm(z2 - z3)
5397.379183811714
