I'm trying to skeletonize the features below in order to extract information about 1) the length of the features and 2) the curvature of the features. I came across this skeletonization approach, which iteratively erodes and dilates the image. Shown below is the result, which looks awful. Can someone recommend a different approach to skeletonizing my features?
Here's the example from the link above:
import cv2
import numpy as np
img = cv2.imread('sofsk.png',0)
size = np.size(img)
skel = np.zeros(img.shape,np.uint8)
ret,img = cv2.threshold(img,127,255,0)
element = cv2.getStructuringElement(cv2.MORPH_CROSS,(3,3))
done = False
while not done:
    eroded = cv2.erode(img,element)
    temp = cv2.dilate(eroded,element)
    temp = cv2.subtract(img,temp)
    skel = cv2.bitwise_or(skel,temp)
    img = eroded.copy()
    zeros = size - cv2.countNonZero(img)
    if zeros==size:
        done = True
cv2.imshow("skel",skel)
cv2.waitKey(0)
cv2.destroyAllWindows()
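One commonly suggested alternative (my suggestion, not part of the post above) is the thinning-based skeletonization in scikit-image, which tends to give a cleaner one-pixel-wide skeleton than the morphological skeleton. A minimal sketch, assuming the same sofsk.png input:

import cv2
import numpy as np
from skimage.morphology import skeletonize

img = cv2.imread('sofsk.png', 0)                    # same input file as above
ret, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
skeleton = skeletonize(binary > 0)                  # boolean, one-pixel-wide skeleton
skel_u8 = skeleton.astype(np.uint8) * 255           # back to uint8 for saving/display
cv2.imwrite('skel_thinning.png', skel_u8)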
I have captured the same page with three browsers using the code below (the height of the image changes dynamically as content is added to or removed from that particular page).
ele = driver.find_element_by_xpath('//div[@id="content" and @class="active bg-forms bg-forms-light "]')
time.sleep(2)
total_height = ele.size["height"]+100
print(total_height)
driver.set_window_size(1920, total_height)
time.sleep(5)
print(driver.get_window_size())
Now I want to check whether these three images are the same. For that I have used structural_similarity, but because the images have different sizes I am getting an error.
from skimage.metrics import structural_similarity as compare_ssim
import argparse
import imutils
import cv2
# load the two input images
imageA = cv2.imread("Test_chrome.jpg")
imageB = cv2.imread("Test_edge.jpg")
imageC = cv2.imread("Test_fire.jpg")
print(imageA.shape)
print(imageB.shape)
print(imageC.shape)
print(imageA.size)
print(imageB.size)
print(imageC.size)
# convert the images to grayscale
grayA = cv2.cvtColor(imageA, cv2.COLOR_BGR2GRAY)
grayB = cv2.cvtColor(imageB, cv2.COLOR_BGR2GRAY)
grayC = cv2.cvtColor(imageC, cv2.COLOR_BGR2GRAY)
print(grayA.size)
print(grayB.size)
print(grayC.size)
# compute the Structural Similarity Index (SSIM) between the two
# images, ensuring that the difference image is returned
(scoreaandb, diffaandb) = compare_ssim(grayA, grayB, full=True)
diffaandb = (diffaandb * 255).astype("uint8")
print("SSIMaandb: {}".format(scoreaandb))
(scorebandc, diffbandc) = compare_ssim(grayB, grayC, full=True)
diffbandc = (diffbandc * 255).astype("uint8")
print("SSIMbandc: {}".format(scorebandc))
(scoreaandc, diffaandc) = compare_ssim(grayA, grayC, full=True)
diffaandc = (diffaandc * 255).astype("uint8")
print("SSIMaandc: {}".format(scoreaandc))
Output:
(5956, 1920, 3)
(7306, 1920, 3)
(5994, 1908, 3)
34306560
42082560
34309656
11435520
14027520
11436552
I then resized the images with cv2.resize as below:
from skimage.metrics import structural_similarity as compare_ssim
import argparse
import imutils
import cv2
# load the two input images
imageA = cv2.imread("Test_Chrome.jpg")
imageB = cv2.imread("Test_edge.jpg")
imageC = cv2.imread("Test_fire.jpg")
print(imageA.shape)
print(imageB.shape)
print(imageC.shape)
print(imageA.size)
print(imageB.size)
print(imageC.size)
Resized_ImageB = cv2.resize(imageB,(imageA.shape[1],imageA.shape[0]))
Resized_ImageC = cv2.resize(imageC,(imageA.shape[1],imageA.shape[0]))
# convert the images to grayscale
grayA = cv2.cvtColor(imageA, cv2.COLOR_BGR2GRAY)
grayB = cv2.cvtColor(Resized_ImageB, cv2.COLOR_BGR2GRAY)
grayC = cv2.cvtColor(Resized_ImageC, cv2.COLOR_BGR2GRAY)
print(grayA.size)
print(grayB.size)
print(grayC.size)
# compute the Structural Similarity Index (SSIM) between the two
# images, ensuring that the difference image is returned
(scoreaandb, diffaandb) = compare_ssim(grayA, grayB, full=True)
diffaandb = (diffaandb * 255).astype("uint8")
print("SSIMaandb: {}".format(scoreaandb))
(scorebandc, diffbandc) = compare_ssim(grayB, grayC, full=True)
diffbandc = (diffbandc * 255).astype("uint8")
print("SSIMbandc: {}".format(scorebandc))
(scoreaandc, diffaandc) = compare_ssim(grayA, grayC, full=True)
diffaandc = (diffaandc * 255).astype("uint8")
print("SSIMaandc: {}".format(scoreaandc))
I also tried imagehash:
import imagehash
from PIL import Image
hash0 = imagehash.average_hash(Image.open('planoverview_Chrome.jpg'))
hash1 = imagehash.average_hash(Image.open('planoverview_Chromemodified.jpg'))
cutoff = 1 # maximum bits that could be different between the hashes.
print(hash0)
print(hash1)
if hash0 - hash1 < cutoff:
    print('images are similar')
else:
    print('images are not similar')
Output:
0307070307ffffff
0307070307ffffff
images are similar
It reports that both images are the same.
Question:
I set the window size (driver.set_window_size(1920, 6080)) before saving the screenshot and compared the similarity; this works fine for Edge and Chrome. But the problem is that when the content of the page grows, the image is not captured as expected.
Later I resized the images before converting them to grayscale and compared the three images, but SSIM does not detect minor changes: it either reports that all are the same or that the images are completely different.
Is there an effective way in Python, using some library, to compare images of different sizes and tell whether they are all the same or different?
My approach would be to use normalized cross-correlation. Since your images have different dimensions, you first need to resize them to the same size. OpenCV offers
void cv::matchTemplate(InputArray image, InputArray templ, OutputArray result, int method, InputArray mask=noArray())
As the method I suggest using cv.TM_CCORR_NORMED.
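A minimal sketch of how this could be applied to the screenshots above; the resize step and score handling are my assumptions, not part of the original answer:

import cv2

imageA = cv2.imread("Test_chrome.jpg", cv2.IMREAD_GRAYSCALE)
imageB = cv2.imread("Test_edge.jpg", cv2.IMREAD_GRAYSCALE)

# Bring both images to the same dimensions first.
imageB = cv2.resize(imageB, (imageA.shape[1], imageA.shape[0]))

# With image and template the same size, the result is a single 1x1 correlation map.
result = cv2.matchTemplate(imageA, imageB, cv2.TM_CCORR_NORMED)
score = float(result[0, 0])
print("normalized cross-correlation: {:.4f}".format(score))  # 1.0 means identical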
Goal
My goal is to detect PV modules in a dataset of infrared images. After the preprocessing phase, which mainly removes the noisy background, I want to find edges so they can be used for further processing (HoughLinesP etc.; a sketch of that follow-up step is shown below). I already get quite satisfying results; however, I'd like to verify my approach and ask for tips.
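For reference, this is roughly how that follow-up step would look when run on a binary edge/skeleton image such as the output of detect_edges() below; the file name and parameter values here are placeholder assumptions, not my tuned values:

import cv2
import numpy as np

edges = cv2.imread('edges.png', 0)  # hypothetical file: a saved detect_edges() result
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=50, maxLineGap=10)
vis = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(vis, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite('hough_lines.png', vis)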
Attached images (preprocessed, background removed)
Then I increase contrast (that gives me slightly better results)
And then apply Canny edge detection with some erosion and dilation.
Final results
I would like to improve detection for images that look similar to the 3rd image - the edges are not detected on the left and right of both rectangles (the middle is fine). I tried increasing the contrast, and it worked fine on that particular image, but it also resulted in more false edges being detected on other images from my dataset. Is there a way to tackle this better?
Code
In case my approach makes sense, here is my code.
import cv2
import numpy as np


def increase_contrast(input_image):
    bgr_image = cv2.cvtColor(input_image, cv2.COLOR_GRAY2BGR)
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    cl = clahe.apply(l)
    limg = cv2.merge((cl, a, b))
    increased_contrast_image = cv2.cvtColor(limg, cv2.COLOR_LAB2BGR)
    return increased_contrast_image


def detect_edges(input_image):
    hysteresis_min_thresh = 35
    hysteresis_max_thresh = 45
    canny_image = cv2.Canny(image=input_image, threshold1=hysteresis_min_thresh,
                            threshold2=hysteresis_max_thresh, apertureSize=3)
    kernel_size = (7, 7)
    kernel_shape = cv2.MORPH_CROSS
    kernel = cv2.getStructuringElement(kernel_shape, kernel_size)
    dilation_steps = 4
    dilated = cv2.dilate(canny_image, (3, 3), iterations=dilation_steps)
    size = np.size(dilated)
    skel = np.zeros(dilated.shape, np.uint8)
    img = dilated
    done = False
    while not done:
        eroded = cv2.erode(img, kernel)
        temp = cv2.dilate(eroded, kernel)
        temp = cv2.subtract(img, temp)
        skel = cv2.bitwise_or(skel, temp)
        img = eroded.copy()
        zeros = size - cv2.countNonZero(img)
        if zeros == size:
            done = True
    return skel


def process_image(img_path):
    input_image = cv2.imread(img_path)
    input_image = cv2.cvtColor(input_image, cv2.COLOR_BGR2GRAY)
    increased_contrast_image = increase_contrast(input_image)
    cv2.imshow('increased_contrast_image', increased_contrast_image)
    image_scaling = 3
    scaled_image = cv2.resize(src=increased_contrast_image, dsize=(0, 0), fx=image_scaling, fy=image_scaling)
    gaussian_blur = 7
    blurred_image = cv2.blur(scaled_image, (gaussian_blur, gaussian_blur))
    canny_image = detect_edges(blurred_image)
    cv2.imshow('canny_image', canny_image)
    cv2.waitKey()


# DON'T FORGET TO VERIFY THIS PATH
img_path = "data/plasma_results/background/9.JPG"
process_image(img_path)
Questions
Is my approach (Canny) a valid and sensible one for this case?
How can I improve my algorithm so it works better on images with less visible edges between modules?
Edit
Original images
As requested in the comments, these are the unprocessed infrared images in the plasma palette.
Below I have attached two images. I want the first image to be cropped into a heart shape according to the mask image (second image).
I searched for solutions but could not find a simple and easy way to do this. Kindly help me with the solution.
2 images:
Image to be cropped:
Mask image:
Let's start by loading the temple image from sklearn:
import matplotlib.pyplot as plt
from sklearn.datasets import load_sample_images
dataset = load_sample_images()
temple = dataset.images[0]
plt.imshow(temple)
Since we need to use the second image as a mask, we must do a binary thresholding operation. This will create a black-and-white mask image, which we can then use to mask the first image.
import cv2
heart = cv2.imread(r'path_to_im\heart.jpg', cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(heart, thresh=180, maxval=255, type=cv2.THRESH_BINARY)
We can now trim the image so its dimensions are compatible with the temple image:
temple_x, temple_y, _ = temple.shape
heart_x, heart_y = mask.shape
x_heart = min(temple_x, heart_x)
x_half_heart = mask.shape[0]//2
heart_mask = mask[x_half_heart-x_heart//2 : x_half_heart+x_heart//2+1, :temple_y]
plt.imshow(heart_mask, cmap='Greys_r')
Now we have to slice the image that we want to mask so it fits the dimensions of the actual mask. Another option would have been to resize the mask, which is doable, but we'd then end up with a distorted heart image. To apply the mask, we have cv2.bitwise_and:
temple_width_half = temple.shape[1]//2
temple_to_mask = temple[:,temple_width_half-x_half_heart:temple_width_half+x_half_heart]
masked = cv2.bitwise_and(temple_to_mask,temple_to_mask,mask = heart_mask)
plt.imshow(masked)
If you want to instead make the masked (black) region transparent:
tmp = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
_,alpha = cv2.threshold(tmp,0,255,cv2.THRESH_BINARY)
b, g, r = cv2.split(masked)
rgba = [b,g,r, alpha]
masked_tr = cv2.merge(rgba)
plt.axis('off')
plt.imshow(masked_tr)
Since I am on a remote server, cv2.imshow doesn't work for me, so I import plt instead.
This code does what you are looking for:
import cv2
import matplotlib.pyplot as plt
img_org = cv2.imread('~/temple.jpg')
img_mask = cv2.imread('~/heart.jpg')
##Resizing images
img_org = cv2.resize(img_org, (400,400), interpolation = cv2.INTER_AREA)
img_mask = cv2.resize(img_mask, (400,400), interpolation = cv2.INTER_AREA)
for h in range(len(img_mask)):
    for w in range(len(img_mask)):
        if img_mask[h][w][0] == 0:
            for i in range(3):
                img_org[h][w][i] = 0
        else:
            continue
plt.imshow(img_org)
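The same masking can also be done without the explicit pixel loop, using a NumPy boolean index; a minimal sketch under the same assumptions (both images already resized to 400x400 as above):

# Zero out every pixel of img_org where the mask's first channel is 0
# (equivalent to the loop above, but vectorized).
img_org[img_mask[:, :, 0] == 0] = 0
plt.imshow(img_org)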
I am working on lane lines detection. My current working strategy is:
Defining a region of interest where lane lines could be
Warping the image to get a bird's-eye view
Converting the image to YUV color space
Normalizing the Y channel
Fitting the second order polynomial and sliding window approach
Everything works fine, but where there are shadows the algorithm does not work.
I have tried adaptive thresholding and Otsu thresholding but have not succeeded.
Source Image without Shadow:
Processed Source Image without Shadow:
Source Image with Shadow:
Processed Source Image with Shadow:
In the second image it can be seen that the shadowed area is not detected. Shadows pull the image values down, so I tried to threshold the image with new values lower than the previous ones; the new image can be found here:
This technique does not work, as it introduces a lot of noise.
Currently I am trying background subtraction and shadow removal techniques, but they are not working. I have been stuck on this problem for the last 2-3 weeks.
Any help will really be appreciated...
import cv2
import matplotlib.pyplot as plt
import numpy as np
from helper_functions import undistort, threshholding, unwarp,sliding_window_polyfit
from helper_functions import polyfit_using_prev_fit,calc_curv_rad_and_center_dist
from Lane_Lines_Finding import RoI
img = cv2.imread('./test_images/new_test.jpg')
new =undistort(img)
new = cv2.cvtColor(new, cv2.COLOR_RGB2BGR)
#new = threshholding(new)
h,w = new.shape[:2]
# define source and destination points for transform
imshape = img.shape
vertices = np.array([[
(257,670),
(590, 446),
(722, 440),
(1150,650)
]],
dtype=np.int32)
p1 = (170,670)
p2 = (472, 475)
p3 = (745, 466)
p4 = (1050,650)
vertices = np.array([[p1,
p2,
p3,
p4
]],
dtype=np.int32)
masked_edges = RoI(new, vertices)
#masked_edges = cv2.cvtColor(masked_edges, cv2.COLOR_RGB2BGR)
src = np.float32([(575,464),
(707,464),
(258,682),
(1049,682)])
dst = np.float32([(450,0),
(w-450,0),
(450,h),
(w-450,h)])
warp_img, M, Minv = unwarp(masked_edges, src, dst)
warp_img = increase_brightness_img(warp_img)
warp_img = contrast_img(warp_img)
YUV = cv2.cvtColor(warp_img, cv2.COLOR_RGB2YUV)
Y,U,V = cv2.split(YUV)
Y_equalized= cv2.equalizeHist(Y)
YUV = cv2.merge((Y,U,V))
thresh_min = 253
thresh_max = 255
binary = np.zeros_like(Y)
binary[(Y_equalized>= thresh_min) & (Y_equalized <= thresh_max)] = 1
kernel_opening= np.ones((3,3),np.uint8)
opening = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel_opening)
kernel= np.ones((7,7),np.uint8)
dilation = cv2.dilate(opening,kernel,iterations = 3)
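For completeness, a minimal sketch of step 5 from the list above (fitting a second-order polynomial to the thresholded lane pixels), continuing the script above; this is only an illustration, not the sliding_window_polyfit helper from helper_functions:

ys, xs = np.nonzero(dilation)      # coordinates of non-zero pixels in the binary lane mask
fit = np.polyfit(ys, xs, 2)        # x = A*y**2 + B*y + C, i.e. x as a function of y
plot_y = np.arange(dilation.shape[0])
fit_x = np.polyval(fit, plot_y)    # x position of the fitted curve at every row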
I have an image that contains ~400 dots. I've been using the SimpleBlobDetector to find the keypoints. I then loop over all keypoints to find the center of each one (code below). This works well, but I'm also interested in the moment information, as I imagine .pt only averages the positions of all of the pixels associated with a keypoint.
import cv2
import numpy as np
import csv
im = cv2.imread("8f3secshim.bmp", cv2.IMREAD_GRAYSCALE)
detector = cv2.SimpleBlobDetector_create()
keypoints = detector.detect(im)
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]),
(0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow("Keypoints", im_with_keypoints)
x = np.empty([len(keypoints), 2])
for i in range(len(keypoints)):
    x[i] = keypoints[i].pt
I wanted to do something along the lines of this:
M = np.empty([len(keypoints), 1])
for j in range(len(keypoints)):
    M[j] = cv2.moments(keypoints[j])
but it fails.
I've tried abandoning SimpleBlobDetector and using the treatment of moments described here: http://docs.opencv.org/trunk/dd/d49/tutorial_py_contour_features.html, but that has failed as well.
If anyone has any suggestions they would be much appreciated.
I seem to have worded my question incorrectly. If anyone is looking to find the moments associated with individual objects in a given image, the following code can be used.
import cv2
import numpy as np
img = cv2.imread('8f3secshim.bmp',0)
ret,thresh = cv2.threshold(img,127,255,0)
im2,contours,hierarchy = cv2.findContours(thresh, 1, 2)
print(len(contours))
a = np.empty([len(contours), 1])
cx = np.empty([len(contours), 1])
cy = np.empty([len(contours), 1])
for i in range(0, len(contours)):
    Mi = cv2.moments(contours[i])
    # if any m00 moment is equal to zero the code can not be completed...
    if Mi['m00'] != 0:
        cx[i] = Mi['m10']/Mi['m00']
        cy[i] = Mi['m01']/Mi['m00']
    a[i] = cv2.contourArea(contours[i])
x = np.hstack((cx, cy, a))
# x is a len(contours) x 3 matrix.
I'm sure there is probably a more elegant way to do this, but this works.
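For what it's worth, here is a slightly more compact version of the same idea (my rewrite, not the original code); note that, unlike the code above, it simply skips contours whose m00 is zero rather than leaving empty rows:

import cv2
import numpy as np

img = cv2.imread('8f3secshim.bmp', 0)
ret, thresh = cv2.threshold(img, 127, 255, 0)
# findContours returns (image, contours, hierarchy) in OpenCV 3 and
# (contours, hierarchy) in OpenCV 4; [-2] picks the contours in both cases.
contours = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]

rows = []
for c in contours:
    M = cv2.moments(c)
    if M['m00'] != 0:
        rows.append((M['m10'] / M['m00'], M['m01'] / M['m00'], cv2.contourArea(c)))
x = np.array(rows)  # one (cx, cy, area) row per contour with non-zero m00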