Subtract vignetting template from image in OpenCV Python

I have 750+ images, like this 'test.png', that I need to subtract the vignetting in 'vig-raw.png' from. I just started using opencv-python, so "I don't even know what I don't know".
Using GIMP, I desaturated 'vig-raw.png' to create 'vig-desat.png', which I then converted with Color to Alpha to create 'vig-alpha.png'.
This is my attempt to subtract 'vig-alpha.png' from 'test.png'.
import cv2 as cv
import numpy as np
img1 = cv.imread('test.png') # read BGR color image
img1 = cv.cvtColor(img1, cv.COLOR_BGR2BGRA) # add alpha channel to BGR image
print(img1[0][0]) # show alpha
img2 = cv.imread('vig-alpha.png',flags=cv.IMREAD_UNCHANGED) # read RGBA image
print(img2[0][0]) #show alpha
img3 = cv.subtract(img1, img2)
img3 = cv.resize(img3, (500,250))
print(img3[0][0]) # show alpha
cv.imshow('result',img3)
cv.waitKey()
cv.destroyAllWindows()
However, this is the 'result'. I need to produce a uniform shading throughout the image while leaving the original colors intact. I don't know the correct terminology for this sort of thing, and it's hard to search for a solution with what I do know. Thanks in advance.
EDIT: As per Rotem's answer, image file format matters. Stack Overflow converted the PNG files I posted to JPEG, which did affect results while checking their answer. See the comment I left on Rotem's answer below for more information.

A vignette template is not supposed to be subtracted; it is supposed to be scaled.
Vignette correction is known as flat-field correction, which applies:
G = m / (F - D)
C = (R - D) * G
where D is the dark field or dark frame.
We don't have a dark frame sample, so we may assume the dark frame is all zeros.
Assuming D = zeros, the correction formula is:
G = m / F
C = R * G
m = mean(F), and F is vig-alpha.
R is test.png.
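As a quick numeric sanity check of the scaling idea (synthetic values of my own, not taken from the actual images): a dark region of F gets a gain above 1, so the shading evens out:
import numpy as np
F = np.array([0.9, 0.45], dtype=np.float32) # flat field: a bright center pixel and a dark corner pixel
R = np.array([180.0, 90.0], dtype=np.float32) # raw pixels shaded by the same vignette
m = F.mean() # m = 0.675
G = m / F # gains: [0.75, 1.5]
C = R * G # corrected: [135.0, 135.0] - uniform, as desired
print(C)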
For computing G (call it inv_vig_norm), we may use the following stages:
Read vig-alpha.png as grayscale, and convert it to float in range [0, 1] (vig_norm is F):
vig = cv2.imread('vig-alpha.png', cv2.IMREAD_GRAYSCALE)
vig_norm = vig.astype(np.float32) / 255
Divide m by F:
vig_mean_val = cv2.mean(vig_norm)[0]
inv_vig_norm = vig_mean_val / vig_norm # Compute G = m/F
Compute C = R * G, i.e. scale img1 by inv_vig_norm:
inv_vig_norm = cv2.cvtColor(inv_vig_norm, cv2.COLOR_GRAY2BGR)
img2 = cv2.multiply(img1, inv_vig_norm, dtype=cv2.CV_8U) # Compute: C = R * G
For removing noise and artifacts, we may apply Median Blur and Gaussian Blur over vig (it may be required because the site converted vig-alpha.png to JPEG format).
Code sample:
import cv2
import numpy as np
img1 = cv2.imread('test.png')
vig = cv2.imread('vig-alpha.png', cv2.IMREAD_GRAYSCALE) # Read vignette template as grayscale
vig = cv2.medianBlur(vig, 15) # Apply median filter for removing artifacts and extreme pixels.
vig_norm = vig.astype(np.float32) / 255 # Convert vig to float32 in range [0, 1]
vig_norm = cv2.GaussianBlur(vig_norm, (51, 51), 30) # Blur the vignette template (because there are still artifacts, maybe because SO converted the image to JPEG).
#vig_max_val = vig_norm.max() # For avoiding "false colors" we may use the maximum instead of the mean.
vig_mean_val = cv2.mean(vig_norm)[0]
# vig_max_val / vig_norm
inv_vig_norm = vig_mean_val / vig_norm # Compute G = m/F
inv_vig_norm = cv2.cvtColor(inv_vig_norm, cv2.COLOR_GRAY2BGR) # Convert inv_vig_norm to 3 channels before using cv2.multiply. https://stackoverflow.com/a/48338932/4926757
img2 = cv2.multiply(img1, inv_vig_norm, dtype=cv2.CV_8U) # Compute: C = R * G
cv2.imshow('inv_vig_norm', cv2.resize(inv_vig_norm / inv_vig_norm.max(), (500, 250))) # Show inv_vig_norm for testing
cv2.imshow('img1', cv2.resize(img1, (500, 250)))
cv2.imshow('result', cv2.resize(img2, (500, 250)))
cv2.waitKey()
cv2.destroyAllWindows()
Results:
img1:
inv_vig_norm:
img2:

Related

How do I develop a negative film image using Python

I have tried inverting a negative film image's color with the bitwise_not() function in Python, but it has this blue tint. I would like to know how I could develop a negative film image that looks somewhat good. Here's the outcome of what I did. (I just cropped the negative image for a new test I was doing, so don't mind that.)
If you don't use exact maximum and minimum, but 1st and 99th percentile, or something nearby (0.1%?), you'll get some nicer contrast. It'll cut away outliers due to noise, compression, etc.
Additionally, you'll want to mess with gamma, or scale the values linearly, to achieve white balance.
I'll apply a "gray world assumption" and scale each plane so the mean is gray. I'll also mess with gamma, but that's just messing around.
And... all of that completely ignores gamma mapping, both of the "negative" and of the outputs.
import numpy as np
import cv2 as cv
import skimage
im = cv.imread("negative.png")
(bneg,gneg,rneg) = cv.split(im)
def stretch(plane):
    # take 1st and 99th percentile
    imin = np.percentile(plane, 1)
    imax = np.percentile(plane, 99)
    # stretch the image
    plane = (plane - imin) / (imax - imin)
    return plane
b = 1 - stretch(bneg)
g = 1 - stretch(gneg)
r = 1 - stretch(rneg)
bgr = cv.merge([b,g,r])
cv.imwrite("positive.png", bgr * 255)
b = 1 - stretch(bneg)
g = 1 - stretch(gneg)
r = 1 - stretch(rneg)
# gray world
b *= 0.5 / b.mean()
g *= 0.5 / g.mean()
r *= 0.5 / r.mean()
bgr = cv.merge([b,g,r])
cv.imwrite("positive_grayworld.png", bgr * 255)
b = 1 - np.clip(stretch(bneg), 0, 1)
g = 1 - np.clip(stretch(gneg), 0, 1)
r = 1 - np.clip(stretch(rneg), 0, 1)
# goes in the right direction
b = skimage.exposure.adjust_gamma(b, gamma=b.mean()/0.5)
g = skimage.exposure.adjust_gamma(g, gamma=g.mean()/0.5)
r = skimage.exposure.adjust_gamma(r, gamma=r.mean()/0.5)
bgr = cv.merge([b,g,r])
cv.imwrite("positive_gamma.png", bgr * 255)
Here's what happens when gamma is applied to the inverted picture... a reasonably tolerable transfer function results from applying the same factor twice, instead of applying its inverse.
Trying to "undo" the gamma while ignoring that the values were inverted... causes serious distortions:
And the min/max values for contrast stretching also affect the whole thing.
A simple photo of a negative simply won't do. It'll include stray light that offsets the black point, at the very least. You need a proper scan of the negative.
Here is one simple way to do that in Python/OpenCV. Basically one stretches each channel of the image to full dynamic range separately. Then recombines. Then inverts.
Input:
import cv2
import numpy as np
import skimage.exposure
# read image
img = cv2.imread('boys_negative.png')
# separate channels (note: cv2.split actually returns B,G,R order, but each channel gets the same treatment, so the naming is immaterial here)
r,g,b = cv2.split(img)
# stretch each channel
r_stretch = skimage.exposure.rescale_intensity(r, in_range='image', out_range=(0,255)).astype(np.uint8)
g_stretch = skimage.exposure.rescale_intensity(g, in_range='image', out_range=(0,255)).astype(np.uint8)
b_stretch = skimage.exposure.rescale_intensity(b, in_range='image', out_range=(0,255)).astype(np.uint8)
# combine channels
img_stretch = cv2.merge([r_stretch, g_stretch, b_stretch])
# invert
result = 255 - img_stretch
cv2.imshow('input', img)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save results
cv2.imwrite('boys_negative_inverted.jpg', result)
Result:
Caveat: This works for this image, but may not be a universal solution for all images.
ADDITION
In the above, I did not clip when stretching, as I wanted to preserve all information. But if one wants to clip and use skimage.exposure.rescale_intensity for stretching, then it is easy enough, as follows:
import cv2
import numpy as np
import skimage.exposure
# read image
img = cv2.imread('boys_negative.png')
# separate channels (cv2.split actually returns B,G,R order; as above, the naming does not matter here)
r,g,b = cv2.split(img)
# compute clip points -- clip 1% only on high side
clip_rmax = np.percentile(r, 99)
clip_gmax = np.percentile(g, 99)
clip_bmax = np.percentile(b, 99)
clip_rmin = np.percentile(r, 0)
clip_gmin = np.percentile(g, 0)
clip_bmin = np.percentile(b, 0)
# stretch each channel
r_stretch = skimage.exposure.rescale_intensity(r, in_range=(clip_rmin,clip_rmax), out_range=(0,255)).astype(np.uint8)
g_stretch = skimage.exposure.rescale_intensity(g, in_range=(clip_gmin,clip_gmax), out_range=(0,255)).astype(np.uint8)
b_stretch = skimage.exposure.rescale_intensity(b, in_range=(clip_bmin,clip_bmax), out_range=(0,255)).astype(np.uint8)
# combine channels
img_stretch = cv2.merge([r_stretch, g_stretch, b_stretch])
# invert
result = 255 - img_stretch
cv2.imshow('input', img)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save results
cv2.imwrite('boys_negative_inverted2.jpg', result)
Result:

How to remove noise around numbers using OpenCV

I'm trying to use Tesseract-OCR to get the readings on the images below, but I'm having issues getting consistent results with the spotted background. I have the below configuration for my pytesseract:
CONFIG = f"—psm 6 -c tessedit_char_whitelist=01234567890ABCDEFGHIJKLMNOPQRSTUVWXYZÅÄabcdefghijklmnopqrstuvwxyzåäö.,-"
I have also tried the below image pre-processing with some good results, but still not perfect:
blur = cv2.blur(img,(4,4))
(T, threshInv) = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
What I want is to consistently be able to identify the numbers and the decimal separator. What image pre-processing could help in getting consistent results on images as below?
That was a challenge but I think I have an interesting approach: pattern matching.
If you zoom in, you realize that the pattern in the back only has 4 possible dots: a single full pixel, a double full pixel, and a double pixel with a medium left or right. So what I did was grab these 4 patterns from the image with 17.160.000,00 and got to work. Save these to load again; I just grabbed them on the fly.
import cv2
import numpy as np
img = cv2.imread('C:/Users/***/17.jpg', cv2.IMREAD_GRAYSCALE)
pattern_1 = img[2:5,1:5]
pattern_2 = img[6:9,5:9]
pattern_3 = img[6:9,11:15]
pattern_4 = img[9:12,22:26]
# just to show it carries over to other pics ;)
img = cv2.imread('C:/Users/****/6.jpg', cv2.IMREAD_GRAYSCALE)
Actual Pattern Matching
Next we match all the patterns and threshold to find all occurrences. I used 0.7 but you can play around with it a little. These patterns take off some pixels on the side and only match a single pixel on the left, so we pad twice (one with an extra) to hit both for the first 3 patterns. The last one is the single pixel, so it doesn't need it.
res_1 = cv2.matchTemplate(img,pattern_1,cv2.TM_CCOEFF_NORMED )
thresh_1 = cv2.threshold(res_1,0.7,1,cv2.THRESH_BINARY)[1].astype(np.uint8)
pat_thresh_1 = np.pad(thresh_1,((1,1),(1,2)),'constant')
pat_thresh_15 = np.pad(thresh_1,((1,1),(2,1)), 'constant')
res_2 = cv2.matchTemplate(img,pattern_2,cv2.TM_CCOEFF_NORMED )
thresh_2 = cv2.threshold(res_2,0.7,1,cv2.THRESH_BINARY)[1].astype(np.uint8)
pat_thresh_2 = np.pad(thresh_2,((1,1),(1,2)),'constant')
pat_thresh_25 = np.pad(thresh_2,((1,1),(2,1)), 'constant')
res_3 = cv2.matchTemplate(img,pattern_3,cv2.TM_CCOEFF_NORMED )
thresh_3 = cv2.threshold(res_3,0.7,1,cv2.THRESH_BINARY)[1].astype(np.uint8)
pat_thresh_3 = np.pad(thresh_3,((1,1),(1,2)),'constant')
pat_thresh_35 = np.pad(thresh_3,((1,1),(2,1)), 'constant')
res_4 = cv2.matchTemplate(img,pattern_4,cv2.TM_CCOEFF_NORMED )
thresh_4 = cv2.threshold(res_4,0.7,1,cv2.THRESH_BINARY)[1].astype(np.uint8)
pat_thresh_4 = np.pad(thresh_4,((1,1),(1,2)),'constant')
Editing the Image
Now the only thing left to do is remove all the matches from the image. Since we have a mostly white background, we just set them to 255 to blend in.
img[pat_thresh_1==1] = 255
img[pat_thresh_15==1] = 255
img[pat_thresh_2==1] = 255
img[pat_thresh_25==1] = 255
img[pat_thresh_3==1] = 255
img[pat_thresh_35==1] = 255
img[pat_thresh_4==1] = 255
Output
Edit:
Take a look at Abstract's answer as well for refining this output and Tesseract fine-tuning.
You may find a solution using a slightly more complex approach by filtering in the frequency domain instead of the spatial domain. The thresholds might require some tweaking depending on how tesseract performs with the output images.
Implementation:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('C:\\Test\\number.jpg', cv2.IMREAD_GRAYSCALE)
# Perform 2D FFT
f = np.fft.fft2(img)
fshift = np.fft.fftshift(f)
magnitude_spectrum = 20*np.log(np.abs(fshift))
# Squash all of the frequency magnitudes above a threshold
for idx, x in np.ndenumerate(magnitude_spectrum):
    if x > 195:
        fshift[idx] = 0
# Inverse FFT back into the real-spatial-domain
f_ishift = np.fft.ifftshift(fshift)
img_back = np.fft.ifft2(f_ishift)
img_back = np.real(img_back)
img_back = cv2.normalize(img_back, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
out_img = np.copy(img)
# Use the inverted FFT image to keep only the black values below a threshold
for idx, x in np.ndenumerate(img_back):
    if x < 100:
        out_img[idx] = 0
    else:
        out_img[idx] = 255
plt.subplot(131),plt.imshow(img, cmap = 'gray')
plt.title('Input Image'), plt.xticks([]), plt.yticks([])
plt.subplot(132),plt.imshow(img_back, cmap = 'gray')
plt.title('Reversed FFT'), plt.xticks([]), plt.yticks([])
plt.subplot(133),plt.imshow(out_img, cmap = 'gray')
plt.title('Output'), plt.xticks([]), plt.yticks([])
plt.show()
Output:
Median Blur Implementation:
import cv2
import numpy as np
img = cv2.imread('C:\\Test\\number.jpg', cv2.IMREAD_GRAYSCALE)
blur = cv2.medianBlur(img, 3)
for idx, x in np.ndenumerate(blur):
    if x < 20:
        blur[idx] = 0
cv2.imshow("Test", blur)
cv2.waitKey()
Output:
Final Edit:
So using Eumel's solution and combining this bit of code at the bottom of it yields a 100% successful result:
img[pat_thresh_1==1] = 255
img[pat_thresh_15==1] = 255
img[pat_thresh_2==1] = 255
img[pat_thresh_25==1] = 255
img[pat_thresh_3==1] = 255
img[pat_thresh_35==1] = 255
img[pat_thresh_4==1] = 255
# Eumel's code above this line (this part additionally needs: import pytesseract and from PIL import Image)
img = cv2.erode(img, np.ones((3,3)))
cv2.imwrite("out.png", img)
cv2.imshow("Test", img)
print(pytesseract.image_to_string(Image.open("out.png"), lang='eng', config='--psm 10 --oem 3 -c tessedit_char_whitelist=0123456789.,'))
Output Image Examples:
Whitelisting the tesseract characters appears to help quite a bit as well to prevent false identification.

How to augment scanned document image with creases, folds and wrinkles?

I am creating a synthetic dataset to train a model that needs to find documents in an image. The documents will be far from perfect, i.e. they have been folded, creased and wrinkled.
I could find a few ways of doing it in Photoshop, but I was wondering if someone has a better idea of doing this augmentation in OpenCV without trying to reverse engineer the Photoshop process.
for example (from https://www.photoshopessentials.com/photo-effects/folds-creases/):
to:
or crinkles (from https://www.myjanee.com/tuts/crumple/crumple.htm):
I have tried to put all your distortions together in one script in Python/OpenCV.
Input:
Wrinkles:
import cv2
import numpy as np
import math
import skimage.exposure
# read desert car image and convert to float in range 0 to 1
img = cv2.imread('desert_car.png').astype("float32") / 255.0
hh, ww = img.shape[:2]
# read wrinkle image as grayscale and convert to float in range 0 to 1
wrinkles = cv2.imread('wrinkles.jpg',0).astype("float32") / 255.0
# resize wrinkles to same size as desert car image
wrinkles = cv2.resize(wrinkles, (ww,hh), fx=0, fy=0)
# apply linear transform to stretch wrinkles to make shading darker
#wrinkles = skimage.exposure.rescale_intensity(wrinkles, in_range=(0,1), out_range=(0,1)).astype(np.float32)
# shift image brightness so mean is (near) mid gray
mean = np.mean(wrinkles)
shift = mean - 0.4
wrinkles = cv2.subtract(wrinkles, shift)
# create folds image as diagonal grayscale gradient as float as plus and minus equal amount
hh1 = math.ceil(hh/2)
ww1 = math.ceil(ww/3)
val = math.sqrt(0.2)
grady = np.linspace(-val, val, hh1, dtype=np.float32)
gradx = np.linspace(-val, val, ww1, dtype=np.float32)
grad1 = np.outer(grady, gradx)
# flip grad in different directions
grad2 = cv2.flip(grad1, 0)
grad3 = cv2.flip(grad1, 1)
grad4 = cv2.flip(grad1, -1)
# concatenate to form folds image
foldx1 = np.hstack([grad1-0.1,grad2,grad3])
foldx2 = np.hstack([grad2+0.1,grad3,grad1+0.2])
folds = np.vstack([foldx1,foldx2])
#folds = (1-val)*folds[0:hh, 0:ww]
folds = folds[0:hh, 0:ww]
# add the folds image to the wrinkles image
wrinkle_folds = cv2.add(wrinkles, folds)
# draw creases as blurred lines on black background
creases = np.full((hh,ww), 0, dtype=np.float32)
ww2 = 2*ww1
cv2.line(creases, (0,hh1), (ww-1,hh1), 0.25, 1)
cv2.line(creases, (ww1,0), (ww1,hh-1), 0.25, 1)
cv2.line(creases, (ww2,0), (ww2,hh-1), 0.25, 1)
# blur crease image
creases = cv2.GaussianBlur(creases, (3,3), 0)
# add crease to wrinkles_fold image
wrinkle_folds_creases = cv2.add(wrinkle_folds, creases)
# threshold wrinkles and invert
thresh = cv2.threshold(wrinkle_folds_creases,0.7,1,cv2.THRESH_BINARY)[1]
thresh = cv2.cvtColor(thresh,cv2.COLOR_GRAY2BGR)
thresh_inv = 1-thresh
# convert from grayscale to bgr
wrinkle_folds_creases = cv2.cvtColor(wrinkle_folds_creases,cv2.COLOR_GRAY2BGR)
# do hard light composite and convert to uint8 in range 0 to 255
# see CSS specs at https://www.w3.org/TR/compositing-1/#blendinghardlight
low = 2.0 * img * wrinkle_folds_creases
high = 1 - 2.0 * (1-img) * (1-wrinkle_folds_creases)
result = ( 255 * (low * thresh_inv + high * thresh) ).clip(0, 255).astype(np.uint8)
# save results
cv2.imwrite('desert_car_wrinkles_adjusted.jpg',(255*wrinkles).clip(0,255).astype(np.uint8))
cv2.imwrite('desert_car_wrinkles_folds.jpg', (255*wrinkle_folds).clip(0,255).astype(np.uint8))
cv2.imwrite('wrinkle_folds_creases.jpg', (255*wrinkle_folds_creases).clip(0,255).astype(np.uint8))
cv2.imwrite('desert_car_result.jpg', result)
# show results
cv2.imshow('wrinkles', wrinkles)
cv2.imshow('wrinkle_folds', wrinkle_folds)
cv2.imshow('wrinkle_folds_creases', wrinkle_folds_creases)
cv2.imshow('thresh', thresh)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Wrinkles adjusted:
Wrinkles with folds:
Wrinkles with folds and creases:
Result:
The proper way to apply the wrinkles to the image is to use hard light blending in Python/OpenCV.
Read the (cat) image as grayscale and convert to range 0 to 1
Read the wrinkles image as grayscale and convert to range 0 to 1
Resize the wrinkles image to the same dimensions as the cat image
Linearly stretch the wrinkles dynamic range to make the wrinkles more contrasted
Threshold the wrinkles image and also get its inverse
Shift the brightness of the wrinkles image so that the mean is mid-gray (important for hard light composition)
Convert the wrinkles image to 3 channel gray
Apply the hard light composition
Save the results.
Cat image:
Wrinkle image:
import cv2
import numpy as np
# read cat image and convert to float in range 0 to 1
img = cv2.imread('cat.jpg').astype("float32") / 255.0
hh, ww = img.shape[:2]
# read wrinkle image as grayscale and convert to float in range 0 to 1
wrinkles = cv2.imread('wrinkles.jpg',0).astype("float32") / 255.0
# resize wrinkles to same size as cat image
wrinkles = cv2.resize(wrinkles, (ww,hh), fx=0, fy=0)
# apply linear transform to stretch wrinkles to make shading darker
# C = A*x+B
# x=1 -> 1; x=0.25 -> 0
# 1 = A + B
# 0 = 0.25*A + B
# Solve simultaneous equations to get:
# A = 1.33
# B = -0.33
wrinkles = 1.33 * wrinkles -0.33
# threshold wrinkles and invert
thresh = cv2.threshold(wrinkles,0.5,1,cv2.THRESH_BINARY)[1]
thresh = cv2.cvtColor(thresh,cv2.COLOR_GRAY2BGR)
thresh_inv = 1-thresh
# shift image brightness so mean is mid gray
mean = np.mean(wrinkles)
shift = mean - 0.5
wrinkles = cv2.subtract(wrinkles, shift)
# convert wrinkles from grayscale to rgb
wrinkles = cv2.cvtColor(wrinkles,cv2.COLOR_GRAY2BGR)
# do hard light composite and convert to uint8 in range 0 to 255
# see CSS specs at https://www.w3.org/TR/compositing-1/#blendinghardlight
low = 2.0 * img * wrinkles
high = 1 - 2.0 * (1-img) * (1-wrinkles)
result = ( 255 * (low * thresh_inv + high * thresh) ).clip(0, 255).astype(np.uint8)
# save results
cv2.imwrite('cat_wrinkled.jpg', result)
# show results
cv2.imshow('Wrinkles', wrinkles)
cv2.imshow('Result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Wrinkled Cat image:
This is not an answer to your question. It's more about using a blending mode suitable for your application. See more details about blending modes on the wiki page. This might help you address the quality loss. The following code implements the first few blend modes under Multiply and Screen from the wiki page. It does not address the Plastic Wrap filter or the effects added using the Brushes in the Photoshop tutorial you refer to.
You'll still have to generate the overlays (image b in the code), and I agree with Nelly's comment regarding augmentation.
import cv2 as cv
import numpy as np
a = cv.imread("image.jpg").astype(np.float32)/255.0
b = cv.imread("gradients.jpg").astype(np.float32)/255.0
multiply_blended = a*b
multiply_blended = (255*multiply_blended).astype(np.uint8)
screen_blended = 1 - (1 - a)*(1 - b)
screen_blended = (255*screen_blended).astype(np.uint8)
overlay_blended = 2*a*b*(a < 0.5).astype(np.float32) + (1 - 2*(1 - a)*(1 - b))*(a >= 0.5).astype(np.float32)
overlay_blended = (255*overlay_blended).astype(np.uint8)
photoshop_blended = (2*a*b + a*a*(1 - 2*b))*(b < 0.5).astype(np.float32) + (2*a*(1 - b) + np.sqrt(a)*(2*b - 1))*(b >= 0.5).astype(np.float32)
photoshop_blended = (255*photoshop_blended).astype(np.uint8)
pegtop_blended = (1 - 2*b)*a*a + 2*b*a
pegtop_blended = (255*pegtop_blended).astype(np.uint8)
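To eyeball the different modes, one could simply write each result out (a usage sketch of my own; the filenames are placeholders):
cv.imwrite("multiply.jpg", multiply_blended)
cv.imwrite("screen.jpg", screen_blended)
cv.imwrite("overlay.jpg", overlay_blended)
cv.imwrite("photoshop_softlight.jpg", photoshop_blended)
cv.imwrite("pegtop_softlight.jpg", pegtop_blended)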
Photoshop Soft Light:
Without too much work I came up with this result. It's far from perfect but I think it is in the right direction.
from PIL import Image, ImageDraw, ImageFilter
import requests
from io import BytesIO
response = requests.get('https://icatcare.org/app/uploads/2018/07/Thinking-of-getting-a-cat.png')
img1 = Image.open(BytesIO(response.content)).convert('RGB') # Image.blend needs matching modes and sizes
response = requests.get('https://st2.depositphotos.com/5579432/8172/i/950/depositphotos_81721770-stock-photo-paper-texture-crease-white-paper.jpg')
img2 = Image.open(BytesIO(response.content)).convert('RGB').resize(img1.size)
final_img = Image.blend(img1, img2, 0.5)
From this:
And this:
We get this (blend 0.5):
Or this (blend 0.333):
Here is also one with folds:
As you are creating a static synthetic data set, a more realistic and possibly the simplest solution seems to be using DocCreator to randomly generate the data set for you.
With the given sample:
One can generate the following data set
Via Image > Degradation > Color Degradation > 3D distortion
Then you choose the Mesh (Load mesh...) and finally hit the save random images... button and select the constraints.
Generating a data set with more subtle distortions is possible by changing the Phy and the Theta upper and lower bounds.
The project offers a demo that allows one to better assess whether it is applicable to your purposes.

Image de-blurring

This post is divided in two parts.
Part One
I have a little issue converting an image from grayscale back to RGB.
Image in question:
I use this code to convert it:
equ = cv2.cvtColor(equ, cv2.COLOR_GRAY2RGB)
without any success though...
Part Two
Moreover, I need to de-blur such an image. Here I found some code that uses a Wiener filter to do so, but when I implement it, it doesn't seem to work effectively. Here is the code:
psf = np.ones((5, 5)) / 25
img = convolve2d(equ, psf, 'same')
img += 0.1 * img.std() * np.random.standard_normal(img.shape)
#deconvolved_img = restoration.wiener(img, psf, 1100)
deconvolved = restoration.wiener(img, psf, 1, clip=False)
plt.imshow(deconvolved, cmap='gray')
and this is the output:
Any help for the two problems is greatly appreciated!
To equalize a color image, it seems a common thing to do is
convert the image to HSV or YUV
split the image into separate components (e.g. H, S, V)
equalize on Value channel (or all three if you want)
merge the channels back together
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
split = list(cv2.split(hsv)) # list of the H, S, V channel planes (cv2.split returns a tuple)
split[2] = cv2.equalizeHist(split[2])
hsv = cv2.merge(split)
img = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
For "deblurring", I sometimes use an unsharp mask. From the Wikipedia page on unsharp masking, the formula for this operation is
sharpened = original + (original − blurred) × amount
which can be rearranged to
sharpened = original×(1 + amount) + blurred×(-amount)
Wikipedia says a good starting point for amount is 0.5 to 1.5. In my app I have a spinbox that lets it vary between 0 and 10. For blurring I use a Gaussian blur with kernel size varying from 1 to 31 (must be odd and integer). To do the matrix math, I prefer to use OpenCV functions because they are often faster than NumPy and they will usually autoscale output to values between 0 and 255 (e.g. for 8 bit and 8 bit/3 channel images). Here we use addWeighted which does
dst = src1*alpha + src2*beta + gamma;
amount = 1.5
ksize = 3
blur = cv2.GaussianBlur(img, (ksize, ksize), 0) # kernel size must be a (width, height) tuple of odd ints
unsharp = cv2.addWeighted(img, 1 + amount, blur, -amount, 0)
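A minimal end-to-end usage of the above (the file names are placeholders of my own):
import cv2
img = cv2.imread('photo.jpg') # placeholder input path
amount = 1.5
ksize = 3
blur = cv2.GaussianBlur(img, (ksize, ksize), 0)
unsharp = cv2.addWeighted(img, 1 + amount, blur, -amount, 0)
cv2.imwrite('photo_sharpened.jpg', unsharp)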

How do I adjust brightness, contrast and vibrance with opencv python?

I am new to image processing. I program in Python 3 and use the OpenCV image processing library. I want to adjust the following attributes.
Brightness
Contrast
Vibrance
Hue
Saturation
Lightness
For 4, 5 and 6, I am using the following code to convert to HSV space.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)
h += value # 4
s += value # 5
v += value # 6
final_hsv = cv2.merge((h, s, v))
img = cv2.cvtColor(final_hsv, cv2.COLOR_HSV2BGR)
The only tutorial I found for 1 and 2 is here. The tutorial uses C++, but I program in Python. Also, I do not know how to adjust 3 (vibrance). I would very much appreciate the help, thanks!
Thanks to @MarkSetchell for providing the link.
In short, the answer uses NumPy only, and the formula can be written as below.
new_image = (old_image) × (contrast/127 + 1) - contrast + brightness
Here contrast and brightness are integers in the range [-127, 127]. Dividing by 127 (the midpoint of the 8-bit range) maps the contrast value to a gain between 0 and 2.
Also, below is the code I used.
brightness = 50
contrast = 30
img = np.int16(img)
img = img * (contrast/127+1) - contrast + brightness
img = np.clip(img, 0, 255)
img = np.uint8(img)
A simple way to adjust brightness, suitable for both color and monochrome images, is:
img = cv2.imread('your path',0)
brt = 40
img[img < 255-brt] += brt
cv2.imshow('img', img)
where brt can be a positive number to increase brightness or a negative one to darken.
The following links show a before and after of an image processed with this code, with brt = 40:
input image
output image
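Note that the masked addition above deliberately skips pixels that would overflow past 255, leaving them unchanged rather than saturating them. If saturation is preferred, cv2.add and cv2.subtract clip automatically; a small alternative sketch (grayscale, as in the snippet above):
import cv2
img = cv2.imread('your path', 0)
brt = 40
brighter = cv2.add(img, brt) # sums above 255 are clipped to 255
darker = cv2.subtract(img, brt) # differences below 0 are clipped to 0
# for a color image, pass a per-channel scalar such as (brt, brt, brt, 0) instead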
I am not sure if this would help, but for changing brightness and contrast I personally switch the image to PIL.Image and use PIL.ImageEnhance, which comes in handy when working with ratios or percentages.
image = PIL.Image.open("path_to_image")
#increasing the brightness 20%
new_image = PIL.ImageEnhance.Brightness(image).enhance(1.2)
#increasing the contrast 20%
new_image = PIL.ImageEnhance.Contrast(image).enhance(1.2)
I still have not found a clean way for vibrance. For more on ImageEnhance, I'd suggest reading the official doc - https://pillow.readthedocs.io/en/stable/reference/ImageEnhance.html
For Conversion, I use this ..
NOTE - OpenCV uses BGR and PIL uses RGB channels. So things can get messy if not converted properly.
#convert pil.image to opencv (numpy.ndarray)
#need numpy for this; PIL gives RGB order, so swap to OpenCV's BGR
cv_image = cv2.cvtColor(numpy.array(pil_image), cv2.COLOR_RGB2BGR)
#convert opencv to pil.image
image = cv2.cvtColor(cv_image, cv2.COLOR_BGR2RGB)
pil_image = Image.fromarray(image)
Here is one way to do the vibrance in Python/OpenCV.
Convert to HSV. Then create a sigmoid function LUT.
(The sigmoid function increases linearly from the origin, but then tapers off to flat.)
See https://en.wikipedia.org/wiki/Sigmoid_function
Apply the LUT to the S channel.
Convert back to BGR.
Input:
import cv2
import numpy as np
# read image
img = cv2.imread('yellow_building.jpg')
# convert image to hsv colorspace as floats
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)
print(np.amax(s), np.amin(s), s.dtype)
# set vibrance
vibrance=1.4
# create 256 element non-linear LUT for sigmoidal function
# see https://en.wikipedia.org/wiki/Sigmoid_function
xval = np.arange(0, 256)
lut = (255*np.tanh(vibrance*xval/255)/np.tanh(1)+0.5).astype(np.uint8)
# apply lut to saturation channel
new_s = cv2.LUT(s,lut)
# combine new_s with original h and v channels
new_hsv = cv2.merge([h,new_s,v])
# convert back to BGR
result = cv2.cvtColor(new_hsv, cv2.COLOR_HSV2BGR)
# save output image
cv2.imwrite('yellow_building_vibrance.jpg', result)
# display images
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
