I have 750+ images, like this 'test.png', that I need to subtract the vignetting in 'vig-raw.png' from. I just started using opencv-python, so "I don't even know what I don't know".
Using GIMP, I desaturated 'vig-raw.png' to create 'vig-desat.png', which I then converted with Color to Alpha to create 'vig-alpha.png'.
This is my attempt to subtract 'vig-alpha.png' from 'test.png'.
import cv2 as cv
import numpy as np
img1 = cv.imread('test.png') # read as 3-channel BGR
img1 = cv.cvtColor(img1, cv.COLOR_BGR2BGRA) # add alpha channel to RGB image
print(img1[0][0]) # show alpha
img2 = cv.imread('vig-alpha.png',flags=cv.IMREAD_UNCHANGED) # read RGBA image
print(img2[0][0]) #show alpha
img3 = cv.subtract(img1, img2)
img3 = cv.resize(img3, (500,250))
print(img3[0][0]) # show alpha
cv.imshow('result',img3)
cv.waitKey()
cv.destroyAllWindows()
However, this is the 'result'. I need to produce a uniform shading throughout the image while leaving the original colors intact. I don't know the correct terminology for this sort of thing, and it's hard to search for a solution with what I do know. Thanks in advance.
EDIT: As per Rotem's answer, image file format matters. StackOverflow converted the PNG files I posted to JPEG, which did affect results while checking their answer. See the comment I left on Rotem's answer below for more information.
The vignette template is not supposed to be subtracted; it is supposed to be used for scaling.
The vignette correction process is known as flat-field correction, which applies:
G = m / (F - D)
C = (R - D) * G
where D is the dark field or dark frame.
We don't have a dark frame sample, so we may assume that the dark frame is all zeros.
Assuming D=zeros, the correction formula is:
G = m / F
C = R * G
m = mean(F), and F is vig-alpha.
R is test.png.
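As a quick sanity check of the formula: at a dark corner where F = 0.5 and the template mean is m = 0.8, the gain is G = 0.8/0.5 = 1.6, so that corner of R is brightened by 60%, while pixels where F equals the mean are left unchanged. That is why the template is used as a scale factor rather than subtracted.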
For computing G (name it inv_vig_norm), we may use the following stages:
Read vig-alpha.png as grayscale, and convert it to float in the range [0, 1] (vig_norm is F):
vig = cv2.imread('vig-alpha.png', cv2.IMREAD_GRAYSCALE)
vig_norm = vig.astype(np.float32) / 255
Divide m by F:
vig_mean_val = cv2.mean(vig_norm)[0]
inv_vig_norm = vig_mean_val / vig_norm # Compute G = m/F
Compute C = R * G - scale img1 by inv_vig_norm:
inv_vig_norm = cv2.cvtColor(inv_vig_norm, cv2.COLOR_GRAY2BGR)
img2 = cv2.multiply(img1, inv_vig_norm, dtype=cv2.CV_8U) # Compute: C = R * G
For removing noise and artifacts, we may apply Median Blur and Gaussian Blur over vig (it may be required because the site converted vig-alpha.png to JPEG format).
Code sample:
import cv2
import numpy as np
img1 = cv2.imread('test.png')
vig = cv2.imread('vig-alpha.png', cv2.IMREAD_GRAYSCALE) # Read vignette template as grayscale
vig = cv2.medianBlur(vig, 15) # Apply median filter for removing artifacts and extreme pixels.
vig_norm = vig.astype(np.float32) / 255 # Convert vig to float32 in range [0, 1]
vig_norm = cv2.GaussianBlur(vig_norm, (51, 51), 30) # Blur the vignette template (because there are still artifacts, maybe because SO converted the image to JPEG).
#vig_max_val = vig_norm.max() # For avoiding "false colors" we may use the maximum instead of the mean.
vig_mean_val = cv2.mean(vig_norm)[0]
# vig_max_val / vig_norm
inv_vig_norm = vig_mean_val / vig_norm # Compute G = m/F
inv_vig_norm = cv2.cvtColor(inv_vig_norm, cv2.COLOR_GRAY2BGR) # Convert inv_vig_norm to 3 channels before using cv2.multiply. https://stackoverflow.com/a/48338932/4926757
img2 = cv2.multiply(img1, inv_vig_norm, dtype=cv2.CV_8U) # Compute: C = R * G
cv2.imshow('inv_vig_norm', cv2.resize(inv_vig_norm / inv_vig_norm.max(), (500, 250))) # Show inv_vig_norm for testing
cv2.imshow('img1', cv2.resize(img1, (500, 250)))
cv2.imshow('result', cv2.resize(img2, (500, 250)))
cv2.waitKey()
cv2.destroyAllWindows()
Results:
img1:
inv_vig_norm:
img2:
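Since the question mentions 750+ images, the same correction can be applied in a loop once inv_vig_norm is computed. A minimal sketch (the input/output folder names are assumptions; inv_vig_norm is built exactly as in the sample above):
import glob
import os
import cv2
import numpy as np

# Build the inverse vignette map once, exactly as in the sample above.
vig = cv2.imread('vig-alpha.png', cv2.IMREAD_GRAYSCALE)
vig = cv2.medianBlur(vig, 15)
vig_norm = cv2.GaussianBlur(vig.astype(np.float32) / 255, (51, 51), 30)
inv_vig_norm = cv2.cvtColor(cv2.mean(vig_norm)[0] / vig_norm, cv2.COLOR_GRAY2BGR)

# Apply the same correction to every image (folder names are assumptions).
os.makedirs('corrected', exist_ok=True)
for path in glob.glob('input/*.png'):
    img = cv2.imread(path)
    corrected = cv2.multiply(img, inv_vig_norm, dtype=cv2.CV_8U)  # C = R * G
    cv2.imwrite(os.path.join('corrected', os.path.basename(path)), corrected)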
For my class project I am trying to extract ridges and valleys from a finger image. An example is given below.
#The code I am using
import cv2
import numpy as np
import math
import fingerprint_enhancer
clip_hist_percent=25
image = cv2.imread("")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Calculate grayscale histogram
hist = cv2.calcHist([gray],[0],None,[256],[0,256])
hist_size = len(hist)
# Calculate cumulative distribution from the histogram
accumulator = []
accumulator.append(float(hist[0]))
for index in range(1, hist_size):
    accumulator.append(accumulator[index - 1] + float(hist[index]))
# Locate points to clip
maximum = accumulator[-1]
clip_hist_percent *= (maximum/100.0)
clip_hist_percent /= 2.0
# Locate left cut
minimum_gray = 0
while accumulator[minimum_gray] < clip_hist_percent:
    minimum_gray += 1
# Locate right cut
maximum_gray = hist_size -1
while accumulator[maximum_gray] >= (maximum - clip_hist_percent):
    maximum_gray -= 1
# Calculate alpha and beta values
alpha = 255 / (maximum_gray - minimum_gray)
beta = -minimum_gray * alpha
auto_result = cv2.convertScaleAbs(image, alpha=alpha, beta=beta)
gray = cv2.cvtColor(auto_result, cv2.COLOR_BGR2GRAY)
# compute gamma = log(mid*255)/log(mean)
mid = 0.5
mean = np.mean(gray)
gamma = math.log(mid*255)/math.log(mean)
# do gamma correction
img_gamma1 = np.power(auto_result,gamma).clip(0,255).astype(np.uint8)
g1 = cv2.cvtColor(img_gamma1, cv2.COLOR_BGR2GRAY)
# blur = cv2.GaussianBlur(g1,(2,1),0)
thresh2 = cv2.adaptiveThreshold(g1, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
cv2.THRESH_BINARY, 199, 3)
# blur = cv2.GaussianBlur(thresh2,(2,1),0)
blur=((3,3),1)
erode_=(5,5)
dilate_=(3, 3)
dilate = cv2.dilate(cv2.erode(cv2.GaussianBlur(thresh2/255, blur[0],
blur[1]), np.ones(erode_)), np.ones(dilate_))*255
out = fingerprint_enhancer.enhance_Fingerprint(dilate)
I am having difficulty extracting the lines on the finger. I tried adjusting the brightness and contrast, applied calcHist, adaptive thresholding, applied blur, and then applied the Gabor filters (as per the UTKARSH code). The result looks like the above.
We can clearly see that the lower part of the image has many spurious lines. My project requirement is to get clear lines from the RGB image. Could anyone help me with the steps and the code?
Thank you in advance
reference:
https://github.com/Utkarsh-Deshmukh/Fingerprint-Enhancement-Python
https://ieeexplore.ieee.org/abstract/document/7358782
There are several strange things (IMO) about your code.
First you do a contrast stretch that sets the 12.5% darkest pixels to black and the 12.5% brightest pixels to white. You probably already have this number of white pixels, so not much happens there, but you do remove all the information in the darkest region of the fingerprint.
Next you threshold. Here you remove most of the remaining information. Thresholding is something you should leave until the very last step of any processing. In particular, the algorithm implemented in fingerprint_enhancer.enhance_Fingerprint() takes a gray-scale image as input. You should not binarize its input at all!
I would start with a local contrast stretch, then you can directly apply the enhancement algorithm:
import cv2
import fingerprint_enhancer
image = cv2.imread("zMxbO.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Apply local contrast stretch
se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25)) # larger than the width of the widest ridges
low = cv2.morphologyEx(gray, cv2.MORPH_OPEN, se) # locally lowest grayvalue
high = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, se) # locally highest grayvalue
gray = (gray - low) / (high - low + 1e-6)
# Apply fingerprint enhancement
out = fingerprint_enhancer.enhance_Fingerprint(gray, resize=True)
The local contrast stretch yields this:
The finger print enhancement algorithm now yields this:
Note that things go wrong around the edges, where the background was cut out and replaced with white, as well as in the dark region, where the noise dominates and the enhancement algorithm hallucinates a bit. I don't think you can extract meaningful information from that area; better illumination would be necessary.
I started an OCR project a few days ago. The input image is a really noisy gray image with white letters. With the EAST text detector it is possible to recognize the text and draw borders around it.
After that I crop the rectangles and do some image processing. Then I pass the processed parts to pytesseract, but with bad results. Images and source code are below. Maybe someone has a good idea for better image processing and/or pytesseract settings.
Images
Input image
Rectangles after Recognition
First part
Second part
Third part
Tesseract Result
AY U N74 O54
Source code for image processing
kernel = cv2.getStructuringElement(cv2.MORPH_RECT , (8,8))
kernel2 = np.ones((3,3),np.uint8)
kernel3 = np.ones((5,5),np.uint8)
gray = cv2.cvtColor(cropped, cv2.COLOR_BGR2GRAY)
gray = cv2.resize(gray, None, fx=7, fy=7)
gray = cv2.GaussianBlur(gray, (5,5), 1)
#cv2.medianBlur(gray, 5)
gray = cv2.dilate(gray, kernel3, iterations = 1)
gray = cv2.erode(gray, kernel3, iterations = 1)
gray = cv2.morphologyEx(gray, cv2.MORPH_DILATE, kernel3)
gray = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
gray = cv2.bitwise_not(gray)
ts_img = Image.fromarray(gray)
txt = pytesseract.image_to_string(ts_img, config='--oem 3 --psm 12 -c tessedit_char_whitelist=12345678ABCDEFGHIJKLMNOPQRSTUVWXYZ load_system_dawg=false load_freq_dawg=false')
I tried some other psm settings like psm 11, psm 8 and psm 6. The results are different, but also bad.
I guess the biggest problem is the black spots that are connected to the letters and digits, but I have no idea how to remove them.
I appreciate any help :)
OCR software will perform poorly when interpreting this text as a word or sentence because it's expecting real English words and not a random combination of characters. I'd recommend analyzing the text as individual characters. I solved the (example) problem by first determining which groups of labeled pixels (connected components of a thresholded image) are characters based on the size and location of the group. Then for each image portion containing a (single) character I use easyocr to obtain the character. I found that pytesseract performs poorly or not at all on single characters (even when setting --psm 10 and other arguments). The code below produces this result:
OCR out: 6UAE005X0721295
import cv2
import matplotlib.pyplot as plt
import numpy as np
import easyocr
reader = easyocr.Reader(["en"])
# Threshold image and determine connected components
img_bgr = cv2.imread("C5U3m.png")
img_gray = cv2.cvtColor(img_bgr[35:115, 30:], cv2.COLOR_BGR2GRAY)
ret, img_bin = cv2.threshold(img_gray, 195, 255, cv2.THRESH_BINARY_INV)
retval, labels = cv2.connectedComponents(255 - img_bin, np.zeros_like(img_bin), 8)
fig, axs = plt.subplots(4)
axs[0].imshow(img_gray, cmap="gray")
axs[0].set_title("grayscale")
axs[1].imshow(img_bin, cmap="gray")
axs[1].set_title("thresholded")
axs[2].imshow(labels, vmin=0, vmax=retval - 1, cmap="tab20b")
axs[2].set_title("connected components")
# Find and process individual characters
OCR_out = ""
all_img_chars = np.zeros((labels.shape[0], 0), dtype=np.uint8)
labels_xmin = [np.argwhere(labels == i)[:, 1].min() for i in range(0, retval)]
# Process the labels (connected components) from left to right
for i in np.argsort(labels_xmin):
    label_yx = np.argwhere(labels == i)
    label_ymin = label_yx[:, 0].min()
    label_ymax = label_yx[:, 0].max()
    label_xmin = label_yx[:, 1].min()
    label_xmax = label_yx[:, 1].max()
    # Characters are large blobs that don't border the top/bottom edge
    if label_yx.shape[0] > 250 and label_ymin > 0 and label_ymax < labels.shape[0]:
        img_char = img_bin[:, label_xmin - 3 : label_xmax + 3]
        all_img_chars = np.hstack((all_img_chars, img_char))
        # Use EasyOCR on single char (pytesseract performs poorly on single characters)
        OCR_out += reader.recognize(img_char, detail=0)[0]
axs[3].imshow(all_img_chars, cmap="gray")
axs[3].set_title("individual characters")
fig.show()
print("Thruth: 6UAE005X0721295")
print("OCR out: " + OCR_out)
One of the biggest challenges in tesseract OCR text recognition is the uneven illumination of images.
I need an algorithm that can decide whether an image contains uneven illumination or not.
Test Images
I attached images of a no-illumination image, a glare image (white-spotted image) and a shadow-containing image.
If we give an image to the algorithm, the algorithm should divide it into two classes:
No uneven illumination - our no-illumination image will fall into this category.
Uneven illumination - our glare image (white-spotted image) and shadow-containing image will fall into this category.
No Illumination Image - Category A
Uneven Illumination Image (glare image (white-spotted image)) - Category B
Uneven Illumination Image (shadow-containing image) - Category B
Initial Approach
Change colour space to HSV
Histogram analysis of the value channel of HSV to identify the uneven illumination.
Instead of the first two steps, we can use the perceived brightness channel instead of the value channel of HSV
Set a low threshold value to get the number of pixels which are less than the low threshold
Set a high threshold value to get the number of pixels which are higher than the high threshold
Use the percentage of low-value pixels and the percentage of high-value pixels to detect uneven lighting conditions (setting a threshold for the percentages as well)
But I could not find big similarities between the uneven illumination images. I just found, from the histogram analysis, that some pixels have low values and some pixels have high values.
Basically, what I feel is this: set a low threshold value and count how many pixels fall below it, and set a high threshold value and count how many pixels fall above it. With those pixel counts, can we come to a conclusion about detecting uneven lighting conditions in images? Here we need to finalize the two threshold values, and the percentage of pixels, to reach that conclusion.
import cv2
import numpy as np
from matplotlib import pyplot as plt

def show_hist_v(img_path):
    img = cv2.imread(img_path)
    hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv_img)
    histr = cv2.calcHist([v], [0], None, [256], [0, 256])
    plt.plot(histr)
    plt.show()
    low_threshold = np.count_nonzero(v < 50)
    high_threshold = np.count_nonzero(v > 200)
    total_pixels = img.shape[0] * img.shape[1]
    percent_low = low_threshold / total_pixels * 100
    percent_high = high_threshold / total_pixels * 100
    print("Total Pixels - {}\n Pixels More than 200 - {} \n Pixels Less than 50 - {} \n Pixels percentage more than 200 - {} \n Pixels percentage less than 50 - {} \n".format(
        total_pixels, high_threshold, low_threshold, percent_high, percent_low))
    return total_pixels, high_threshold, low_threshold, percent_low, percent_high
So can someone improve my initial approach, or suggest a better approach than this, to detect uneven illumination in images for general cases?
Also, I tried perceived brightness instead of the value channel: since the value channel takes the maximum of the (b, g, r) values, perceived brightness is a good choice, I think.
def get_perceive_brightness(float_img):
    float_img = np.float64(float_img)  # uint8 would overflow
    b, g, r = cv2.split(float_img)
    float_brightness = np.sqrt(
        (0.241 * (r ** 2)) + (0.691 * (g ** 2)) + (0.068 * (b ** 2)))
    brightness_channel = np.uint8(np.absolute(float_brightness))
    return brightness_channel
def show_hist_v(img_path):
    img = cv2.imread(img_path)
    v = get_perceive_brightness(img)
    histr = cv2.calcHist([v], [0], None, [256], [0, 256])
    plt.plot(histr)
    plt.show()
    low_threshold = np.count_nonzero(v < 50)
    high_threshold = np.count_nonzero(v > 200)
    total_pixels = img.shape[0] * img.shape[1]
    percent_low = low_threshold / total_pixels * 100
    percent_high = high_threshold / total_pixels * 100
    print("Total Pixels - {}\n Pixels More than 200 - {} \n Pixels Less than 50 - {} \n Pixels percentage more than 200 - {} \n Pixels percentage less than 50 - {} \n".format(
        total_pixels, high_threshold, low_threshold, percent_high, percent_low))
    return total_pixels, high_threshold, low_threshold, percent_low, percent_high
Histogram analysis of perceived brightness channel
As Ahmet suggested.
def get_percentage_of_binary_pixels(img=None, img_path=None):
    if img is None:
        if img_path is not None:
            gray_img = cv2.imread(img_path, 0)
        else:
            return "No img or img_path"
    else:
        print(img.shape)
        if len(img.shape) > 2:
            gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        else:
            gray_img = img
    h, w = gray_img.shape
    gaussian_blur = cv2.GaussianBlur(gray_img, (5, 5), 0)
    thresh_value, otsu_img = cv2.threshold(gaussian_blur, 0, 255,
                                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cv2.imwrite("binary/{}".format(img_path.split('/')[-1]), otsu_img)
    black_pixels = np.count_nonzero(otsu_img == 0)
    # white_pixels = np.count_nonzero(otsu_img == 255)
    black_pixels_percentage = black_pixels / (h * w) * 100
    # white_pixels_percentage = white_pixels / (h * w) * 100
    return black_pixels_percentage
When we get more than 35% black pixels with Otsu binarization, we can detect uneven illumination in around 80% of the images. When the illumination change occurs only in a small region of the image, the detection fails.
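As a rough decision rule, what I mean is something like this (35% is the empirical cutoff I found; the file name is just a placeholder):
black_pct = get_percentage_of_binary_pixels(img_path='sample.jpg')  # placeholder path
if black_pct > 35:
    print("Uneven illumination suspected")
else:
    print("Illumination looks even")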
Thanks in advance
I suggest using the division trick to separate the text from the background, and then calculating statistics on the background only. After setting some reasonable thresholds it is easy to create a classifier for the illumination.
import cv2
from matplotlib import pyplot as plt

def get_image_stats(img_path, lbl):
    img = cv2.imread(img_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (25, 25), 0)
    no_text = gray * ((gray / blurred) > 0.99)                      # select background only
    no_text[no_text < 10] = no_text[no_text > 20].mean()            # convert black pixels to mean value
    no_bright = no_text.copy()
    no_bright[no_bright > 220] = no_bright[no_bright < 220].mean()  # disregard bright pixels
    print(lbl)
    std = no_bright.std()
    print('STD:', std)
    bright = (no_text > 220).sum()
    print('Bright pixels:', bright)
    plt.figure()
    plt.hist(no_text.reshape(-1, 1), 25)
    plt.title(lbl)
    if std > 25:
        print("!!! Detected uneven illumination")
    if no_text.mean() < 200 and bright > 8000:
        print("!!! Detected glare")
This results in:
good_img
STD: 11.264569863071165
Bright pixels: 58
glare_img
STD: 15.00149131296984
Bright pixels: 15122
!!! Detected glare
uneven_img
STD: 57.99510339944441
Bright pixels: 688
!!! Detected uneven illumination
Now let's analyze the histograms and apply some common sense. We expect the background to be even and have low variance, as is the case in "good_img". If it has high variance, then its standard deviation is high and we have a case of uneven brightness. On the lower image you can see 3 (smaller) peaks that correspond to the 3 differently illuminated areas. The largest peak in the middle is the result of setting all black pixels to the mean value. I believe it is safe to call images with an STD above 25 an "uneven illumination" case.
It is easy to spot a high number of bright pixels when there is glare (see the image on the right). A glared image looks like a good image, apart from the hot spot. Setting a threshold on the number of bright pixels to something like 8000 (1.5% of the total image size) should be good for detecting such images. There is also a possibility that the background is very bright everywhere; if the mean of the no_text pixels is above 200, that is the case and there is no need to look for hot spots.
Why don't you remove the lighting effect from the images?
For instance:
If we try to read it with pytesseract, the output will be ' \n\f'
But if we remove the lighting:
import cv2
import pytesseract
img = cv2.imread('img2.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
smooth = cv2.GaussianBlur(gray, (95, 95), 0)
division = cv2.divide(gray, smooth, scale=192)
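The read step itself is not shown above; presumably it is something like this minimal sketch (using the default pytesseract settings, as an assumption):
# OCR on the illumination-corrected image (default pytesseract settings are an assumption)
text = pytesseract.image_to_string(division)
print(text)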
And reading it with pytesseract, part of the output will be:
.
.
.
Dosage & use
See package insert for compicic
information,
Instruction:
Keep all medicines out of the re.
Read the instructions carefully
Storage:
Store at temperature below 30°C.
Protect from Heat, light & moisture. BATCH NO. : 014C003
MFG. DATE - 03-2019
—— EXP. DATE : 03-2021
GENIX Distributed
AS Exclusi i :
genx PHARMA PRIVATE LIMITED Cevoka Pv 2 A ‘<
» 45-B, Kore ci
Karachi-75190, | Pakisier al Pei yaa fans
www.genixpharma.com
Repeat for the last image:
And reading it with pytesseract, part of the output will be:
.
.
.
Dosage & use
See package insert for complete prescribing
information. Rx Only
Instruction:
Keep all medicines out of the reach of children.
Read the instructions carefully before using.
Storage:
Store at temperature below 30°C. 5
Protect from Neat, light & moisture. BATCH NO, : 0140003
MFG. DATE : 03-2019
EXP. DATE : 03-2021
Manufactured by:
GENI N Exclusively Distributed by:
GENIX PHARMA PRIVATE LIMITED Ceyoka (Pvt) Ltd.
44, 45-B, Korangi Creek Road, 55, Negombe Road,
Karachi-75190, Pakistan. Peliyagoda, Snianka,
www. genixpharma.com
Update
You can find the illuminated part using erosion and dilation.
Result:
Code:
import cv2
import imutils
import numpy as np
from skimage import measure
from imutils import contours
img = cv2.imread('img2.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (95, 95), 0)
thresh = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)[1]
thresh = cv2.erode(thresh, None, iterations=2)
thresh = cv2.dilate(thresh, None, iterations=4)
labels = measure.label(thresh, connectivity=2, background=0)  # connectivity=2 is 8-connectivity (newer skimage removed the neighbors argument)
mask = np.zeros(thresh.shape, dtype="uint8")
for label in np.unique(labels):
    if label == 0:
        continue
    labelMask = np.zeros(thresh.shape, dtype="uint8")
    labelMask[labels == label] = 255
    numPixels = cv2.countNonZero(labelMask)
    if numPixels > 300:
        mask = cv2.add(mask, labelMask)
cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = contours.sort_contours(cnts)[0]
for (i, c) in enumerate(cnts):
    (x, y, w, h) = cv2.boundingRect(c)
    ((cX, cY), radius) = cv2.minEnclosingCircle(c)
    cv2.circle(img, (int(cX), int(cY)), int(radius),
               (0, 0, 255), 3)
    cv2.putText(img, "#{}".format(i + 1), (x, y - 15),
                cv2.FONT_HERSHEY_SIMPLEX, 0.45, (0, 0, 255), 2)
cv2.imshow("Image", img)
cv2.waitKey(0)
Though I only tested with the second image; you may need to change the parameters for the other images.
Here is a quick solution in ImageMagick. But it can easily be implemented in Python/OpenCV as shown further down.
Use division normalization.
Read the input
Optionally convert to grayscale
Copy the image and blur it
Divide the image by its blurred copy
Save the results
Input:
convert 8W0bp.jpg \( +clone -blur 0x13 \) +swap -compose divide -composite x1.png
convert ob87W.jpg \( +clone -blur 0x13 \) +swap -compose divide -composite x2.png
convert HLJuA.jpg \( +clone -blur 0x13 \) +swap -compose divide -composite x3.png
Results:
In Python/OpenCV:
import cv2
import numpy as np
import skimage.filters as filters
# read the image
img = cv2.imread('8W0bp.jpg')
#img = cv2.imread('ob87W.jpg')
#img = cv2.imread('HLJuA.jpg')
# convert to gray
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# blur
smooth = cv2.GaussianBlur(gray, (33,33), 0)
# divide gray by the blurred image
division = cv2.divide(gray, smooth, scale=255)
# sharpen using unsharp masking
sharp = filters.unsharp_mask(division, radius=1.5, amount=2.5, multichannel=False, preserve_range=False)
sharp = (255*sharp).clip(0,255).astype(np.uint8)
# save results
cv2.imwrite('8W0bp_division.jpg',division)
cv2.imwrite('8W0bp_division_sharp.jpg',sharp)
#cv2.imwrite('ob87W_division.jpg',division)
#cv2.imwrite('ob87W_division_sharp.jpg',sharp)
#cv2.imwrite('HLJuA_division.jpg',division)
#cv2.imwrite('HLJuA_division_sharp.jpg',sharp)
# show results
cv2.imshow('smooth', smooth)
cv2.imshow('division', division)
cv2.imshow('sharp', sharp)
cv2.waitKey(0)
cv2.destroyAllWindows()
Results:
Here is my pipeline:
%matplotlib inline
import numpy as np
import cv2
from matplotlib import pyplot as plt
from scipy.signal import find_peaks
I use the functions:
def get_perceived_brightness(float_img):
    float_img = np.float64(float_img)  # uint8 would overflow
    b, g, r = cv2.split(float_img)
    float_brightness = np.sqrt((0.241 * (r ** 2)) + (0.691 * (g ** 2)) + (0.068 * (b ** 2)))
    brightness_channel = np.uint8(np.absolute(float_brightness))
    return brightness_channel
# from: https://stackoverflow.com/questions/46300577/find-locale-minimum-in-histogram-1d-array-python
def smooth(x, window_len=11, window='hanning'):
    if x.ndim != 1:
        raise ValueError("smooth only accepts 1 dimension arrays.")
    if x.size < window_len:
        raise ValueError("Input vector needs to be bigger than window size.")
    if window_len < 3:
        return x
    if not window in ['flat', 'hanning', 'hamming', 'bartlett', 'blackman']:
        raise ValueError("Window is one of 'flat', 'hanning', 'hamming', 'bartlett', 'blackman'")
    s = np.r_[x[window_len-1:0:-1], x, x[-2:-window_len-1:-1]]
    if window == 'flat':  # moving average
        w = np.ones(window_len, 'd')
    else:
        w = eval('np.' + window + '(window_len)')
    y = np.convolve(w / w.sum(), s, mode='valid')
    return y
I load the image
image_file_name = 'im3.jpg'
image = cv2.imread(image_file_name)
# image category
category = 0
# gray convertion
image_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
height = image.shape[0]
width = image.shape[1]
First test. Does the image have any big white spots?
# First test. Does the image have any big white spots?
saturation_thresh = 250
raw_saturation_region = cv2.threshold(image_gray, saturation_thresh, 255, cv2.THRESH_BINARY)[1]
num_raw_saturation_regions, raw_saturation_regions,stats, _ = cv2.connectedComponentsWithStats(raw_saturation_region)
# index 0 is the background -> to remove
area_raw_saturation_regions = stats[1:,4]
min_area_bad_spot = 1000 # this can be calculated as percentage of the image area
if (np.max(area_raw_saturation_regions) > min_area_bad_spot):
    category = 2  # there is at least one spot
The result for the normal image:
The result for the image with spots:
The result for the image with shadows:
If the image passes the first test, I run the second test: is the image dark?
# Second test. Is the image dark?
min_mean_intensity = 60
if category == 0:
    mean_intensity = np.mean(image_gray)
    if (mean_intensity < min_mean_intensity):
        category = 3  # dark image
If the image also passes the second test, I run the third test: is the image uniformly illuminated?
window_len = 15 # odd number
delay = int((window_len-1)/2) # delay is the shift introduced from the smoothing. It's half window_len
# for example if the window_len is 15, the delay is 7
# in fact hist.shape = 256 and smoothed_hist.shape = 270 (= 256 + 2*delay)
if category == 0:
    perceived_brightness = get_perceived_brightness(image)
    hist, bins = np.histogram(perceived_brightness.ravel(), 256, [0, 256])
    # smoothed_hist is shifted from the original one
    smoothed_hist = smooth(hist, window_len)
    # smoothed histogram synchronized with the original histogram
    sync_smoothed_hist = smoothed_hist[delay:-delay]
    # if there is more than one peak with:
    #   20 < bin < 250
    #   prominence >= mean histogram value
    # the image could have shadows (but it could also have a background with some colors)
    mean_hist = int(height * width / 256)
    peaks, _ = find_peaks(sync_smoothed_hist, prominence=mean_hist)
    selected_peaks = peaks[(peaks > 20) & (peaks < 250)]
    if (selected_peaks.size > 1):
        category = 4  # there are shadows
The histogram for the normal image:
The histogram for the image with spots:
The histogram for the image with shadows:
If the image passes all the tests, then it's normal.
# all tests are passed. The image is ok
if (category == 0):
    category = 1  # the image is ok
I want to select each component of this image:
In practice, each and every triangle, by its label. I can't figure out how.
I have this code:
#!/usr/bin/python
import cv2
import numpy as np
img = cv2.imread('invMehs.png', -1)
imGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, imBw = cv2.threshold(imGray, 250, 255, cv2.THRESH_BINARY)
invBwMesh = cv2.bitwise_not(imBw)
Mask = np.ones(imBw.shape, dtype="uint8") * 255
connectivity = 4
output = cv2.connectedComponentsWithStats(imBw, connectivity, cv2.CV_32S)
num_labels = output[0]
labels = output[1]
stats = output[2]
centroids = output[3]
labels = labels + 1
b = ( labels == 1)
cv2.imwrite('tst.jpg',labels[b])
But the image is complety black :S
Thank you very much.
The image you want to save (labels[b]) only contains the thin lines (grey level 1). When saving the image in JPEG format, the compression algorithm smooths them, and since they differ from the background by only 1 grey level, they are erased. That's why you get a black image.
Saving in PNG format does not alter the image labels.
In order to keep all labels for each connected component (0 for the background), the write call should be:
cv2.imwrite('labels.png',output[1])
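And to select a single triangle, build a mask from its label. A minimal sketch reusing the variables from the question (the label index 5 is arbitrary):
labels = output[1]
# Select one connected component (label index 5 is arbitrary here) as a white-on-black mask
mask = np.uint8(labels == 5) * 255
cv2.imwrite('component_5.png', mask)
# Or spread all labels over the full 0-255 range so they are visible when saved
vis = np.uint8(255 * labels / max(labels.max(), 1))
cv2.imwrite('labels_vis.png', vis)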