OpenCV Python Bindings for GrabCut Algorithm

I've been trying to use the OpenCV implementation of the GrabCut method via the Python bindings. I have tried the versions in both cv and cv2, but I am having trouble figuring out the correct parameters to get the method to run correctly. I have tried several permutations of the parameters (basically every example I've seen on GitHub) and nothing seems to work. Here are a couple of examples I have tried to follow:
Example 1
Example 2
And here is the method's documentation and a known bug report:
Documentation
Known Grabcut Bug
I can get the code to execute using the example below, but it returns a blank (all black) image mask.
img = Image("pills.png")
mask = img.getEmpty(1)
bgModel = cv.CreateMat(1, 13*5, cv.CV_64FC1)
fgModel = cv.CreateMat(1, 13*5, cv.CV_64FC1)
for i in range(0, 13*5):
    cv.SetReal2D(fgModel, 0, i, 0)
    cv.SetReal2D(bgModel, 0, i, 0)
rect = (150,70,170,220)
tmp1 = np.zeros((1, 13 * 5))
tmp2 = np.zeros((1, 13 * 5))
cv.GrabCut(img.getBitmap(), mask, rect, tmp1, tmp2, 5, cv.GC_INIT_WITH_RECT)
I am using SimpleCV to load the images. The mask type and return type from img.getBitmap() are:
iplimage(nChannels=1 width=730 height=530 widthStep=732 )
iplimage(nChannels=3 width=730 height=530 widthStep=2192 )
If someone has a working example of this code I would love to see it. For what it is worth I am running on OSX Snow Leopard, and my version of OpenCV was installed from the SVN repository (as of a few weeks ago). For reference my input image is this:
I've tried mapping the result mask enum values to something more visible, so it is not the return values that are the problem. The code below still returns a completely black image. I will try a couple more values.
img = Image("pills.png")
mask = img.getEmpty(1)
bgModel = cv.CreateMat(1, 13*5, cv.CV_64FC1)
fgModel = cv.CreateMat(1, 13*5, cv.CV_64FC1)
for i in range(0, 13*5):
    cv.SetReal2D(fgModel, 0, i, 0)
    cv.SetReal2D(bgModel, 0, i, 0)
rect = (150,70,170,220)
tmp1 = np.zeros((1, 13 * 5))
tmp2 = np.zeros((1, 13 * 5))
cv.GrabCut(img.getBitmap(), mask, rect, tmp1, tmp2, 5, cv.GC_INIT_WITH_MASK)
mask[mask == cv.GC_BGD] = 0
mask[mask == cv.GC_PR_BGD] = 0
mask[mask == cv.GC_FGD] = 255
mask[mask == cv.GC_PR_FGD] = 255
result = Image(mask)
result.show()
result.save("result.png")

Kat, this version of your code seems to work for me.
import numpy as np
import matplotlib.pyplot as plt
import cv2
filename = "pills.png"
im = cv2.imread(filename)
h,w = im.shape[:2]
mask = np.zeros((h,w),dtype='uint8')
rect = (150,70,170,220)
tmp1 = np.zeros((1, 13 * 5))
tmp2 = np.zeros((1, 13 * 5))
cv2.grabCut(im,mask,rect,tmp1,tmp2,10,mode=cv2.GC_INIT_WITH_RECT)
plt.figure()
plt.imshow(mask)
plt.colorbar()
plt.show()
Produces a figure like this, with labels 0, 2 and 3.

Your mask is filled with the following values:
GC_BGD defines an obvious background pixel.
GC_FGD defines an obvious foreground (object) pixel.
GC_PR_BGD defines a possible background pixel.
GC_PR_FGD defines a possible foreground pixel.
Which are all part of an enum:
enum {
    GC_BGD = 0,     // background
    GC_FGD = 1,     // foreground
    GC_PR_BGD = 2,  // most probably background
    GC_PR_FGD = 3   // most probably foreground
};
Which translates to the colors: completely black, very black, dark black, and black. I think you'll find that if you add the following code (taken from your example 1 and slightly modified) your mask will look nicer:
mask[mask == cv.GC_BGD] = 0       # certain background is black
mask[mask == cv.GC_PR_BGD] = 63   # possible background is dark grey
mask[mask == cv.GC_FGD] = 255     # foreground is white
mask[mask == cv.GC_PR_FGD] = 192  # possible foreground is light grey
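If you then want the segmented image rather than just the visualization, a common follow-up (a minimal sketch, reusing im and mask from the cv2 example above) is to zero out everything the algorithm did not label as foreground:
import cv2
import numpy as np
# treat foreground (255) and probable foreground (192) as "keep"
fg = np.where(mask >= 192, 255, 0).astype(np.uint8)
segmented = cv2.bitwise_and(im, im, mask=fg)
cv2.imwrite("segmented.png", segmented)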

Related

Edge Detection minimum line length?

I'm trying to filter out short lines from my Canny edge detection. Here's what I'm currently using, along with a brief explanation:
I start by taking a single channel of the image and running cv2's Canny edge detection. Then I scan through each pixel and check whether any of the pixels around it are white (True, 255). If so, I add the pixel to a group of true pixels and then check every pixel around it (looping until there are no white/True pixels left). I then replace all the pixels in the group with black/False if the group count is less than a designated threshold (in this case, 100 pixels).
While this works (as shown below) it's awfully slow. I'm wondering if there's a faster, easier way to do this.
import cv2
img = cv2.imread("edtest.jpg")
img_r = img.copy()
img_r[:, :, 0] = 0
img_r[:, :, 1] = 0
img_r = cv2.GaussianBlur(img_r, (3, 3), 0)
basic_edge = cv2.Canny(img_r, 240, 250)
culled_edge = basic_edge.copy()
min_threshold = 100
for x in range(len(culled_edge)):
    print(x)
    for y in range(len(culled_edge[x])):
        test_pixels = [(x, y)]
        true_pixels = [(x, y)]
        while len(test_pixels) != 0:
            xorigin = test_pixels[0][0]
            yorigin = test_pixels[0][1]
            if 0 < xorigin < len(culled_edge) - 1 and 0 < yorigin < len(culled_edge[0]) - 1:
                for testx in range(3):
                    for testy in range(3):
                        if culled_edge[xorigin-1+testx][yorigin-1+testy] == 255 and (xorigin-1+testx, yorigin-1+testy) not in true_pixels:
                            test_pixels.append((xorigin-1+testx, yorigin-1+testy))
                            true_pixels.append((xorigin-1+testx, yorigin-1+testy))
            test_pixels.pop(0)
        if 1 < len(true_pixels) < min_threshold:
            for i in range(len(true_pixels)):
                culled_edge[true_pixels[i][0]][true_pixels[i][1]] = 0
cv2.imshow("basic_edge", basic_edge)
cv2.imshow("culled_edge", culled_edge)
cv2.waitKey(0)
Source Image:
Canny Detection and Filtered (Ideal) Results:
The operation you are applying is called an area opening. I don't think there is an implementation in OpenCV, but you can find one in either scikit-image (skimage.morphology.area_opening) or DIPlib (dip.BinaryAreaOpening).
For example with DIPlib (disclosure: I'm an author) you'd amend your code as follows:
import diplib as dip
# ...
basic_edge = cv2.Canny(img_r, 240, 250)
min_threshold = 100
culled_edge = dip.BinaryAreaOpening(basic_edge > 0, min_threshold)
The output, culled_edge, is now a dip.Image object, which is compatible with NumPy arrays and you should be able to use it as such in many situations. If there's an issue, then you can cast it back to a NumPy array with culled_edge = np.array(culled_edge).
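If you prefer scikit-image, the function named above does the same thing; a sketch under the same assumptions (basic_edge and min_threshold as in your code):
from skimage.morphology import area_opening
# drop connected edge groups smaller than min_threshold pixels
opened = area_opening(basic_edge > 0, area_threshold=min_threshold)
culled_edge = (opened * 255).astype('uint8')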

Creating custom colormap for opencv python

I created a custom colormap in a text file and read it in Python 3.6.
Mapping each color in a for loop takes approx. 9 seconds.
Here is the snippet:
for x in range(256):
    # z = int(rgb_c[x][0])
    # r = int(rgb_c[x][1])
    # g = int(rgb_c[x][2])
    # b = int(rgb_c[x][3])
    # Apply color to ndvi
    # ndvi_col[ndvi_g == z[x]] = [r[x], g[x], b[x]]
    ndvi_col[ndvi_g == int(rgb_c[x][0])] = [int(rgb_c[x][1]), int(rgb_c[x][2]), int(rgb_c[x][3])]
I've heard the PyPy JIT compiler can increase speed and performance; would it have any impact on this for loop? I even tried a separate list, but nothing changed.
I am open to any suggestions to improve speed and performance.
Posting a solution in case it helps. The original code is on GitHub.
#!/usr/bin/env python
'''
OpenCV Custom Colormap Example
Copyright 2015 by Satya Mallick <spmallick@learnopencv.com>
'''
import cv2
import numpy as np
def applyCustomColorMap(im_gray):
    lut = np.zeros((256, 1, 3), dtype=np.uint8)
    # Red
    lut[:, 0, 0] = [255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,253,251,249,247,245,242,241,238,237,235,233,231,229,227,225,223,221,219,217,215,213,211,209,207,205,203,201,199,197,195,193,191,189,187,185,183,181,179,177,175,173,171,169,167,165,163,161,159,157,155,153,151,149,147,145,143,141,138,136,134,132,131,129,126,125,122,121,118,116,115,113,111,109,107,105,102,100,98,97,94,93,91,89,87,84,83,81,79,77,75,73,70,68,66,64,63,61,59,57,54,52,51,49,47,44,42,40,39,37,34,33,31,29,27,25,22,20,18,17,14,13,11,9,6,4,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
    # Green
    lut[:, 0, 1] = [255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,254,252,250,248,246,244,242,240,238,236,234,232,230,228,226,224,222,220,218,216,214,212,210,208,206,204,202,200,198,196,194,192,190,188,186,184,182,180,178,176,174,171,169,167,165,163,161,159,157,155,153,151,149,147,145,143,141,139,137,135,133,131,129,127,125,123,121,119,117,115,113,111,109,107,105,103,101,99,97,95,93,91,89,87,85,83,82,80,78,76,74,72,70,68,66,64,62,60,58,56,54,52,50,48,46,44,42,40,38,36,34,32,30,28,26,24,22,20,18,16,14,12,10,8,6,4,2,0]
    # Blue
    lut[:, 0, 2] = [195,194,193,191,190,189,188,187,186,185,184,183,182,181,179,178,177,176,175,174,173,172,171,170,169,167,166,165,164,163,162,161,160,159,158,157,155,154,153,152,151,150,149,148,147,146,145,143,142,141,140,139,138,137,136,135,134,133,131,130,129,128,127,126,125,125,125,125,125,125,125,125,125,125,125,125,125,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,127,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126,126]
    # Apply custom colormap through LUT
    im_color = cv2.LUT(im_gray, lut)
    return im_color
if __name__ == '__main__':
    im = cv2.imread("pluto.jpg", cv2.IMREAD_GRAYSCALE)
    im = cv2.cvtColor(im, cv2.COLOR_GRAY2BGR)
    im_color = applyCustomColorMap(im)
    cv2.imwrite('/tmp/colormap_algae.jpg', im_color)
    cv2.imshow("Pseudo Colored Image", im_color)
    cv2.waitKey(0)
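The LUT above covers the hardcoded-colormap case; if you need to keep your own ndvi_g-to-color table from the text file, plain NumPy fancy indexing gives the same speedup as cv2.LUT. A sketch, assuming (as in your snippet) that each row of rgb_c is [gray_value, r, g, b] and ndvi_g is a single-channel uint8 image:
import numpy as np
# build the 256-entry table once, outside any loop
lut = np.zeros((256, 3), dtype=np.uint8)
for row in rgb_c:
    lut[int(row[0])] = [int(row[1]), int(row[2]), int(row[3])]
# map every pixel in one vectorized operation instead of 256 masked scans
ndvi_col = lut[ndvi_g]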

How do I do the equivalent of Gimp's Colors, Auto, White Balance in Python-Fu?

The only function I can find is gimp-color-balance, which takes the applicable parameters: preserve-lum(osity), cyan-red, magenta-green, and yellow-blue.
I'm not sure what values to pass for these parameters to duplicate the menu option in the title.
To complete the answer of @banderlog013, I think the GIMP doc specifies that the end pixels of each channel are first discarded, then the remaining ranges are stretched. I believe the right code is:
img = cv2.imread('test.jpg')
balanced_img = np.zeros_like(img)  # Initialize final image
for i in range(3):  # i stands for the channel index
    hist, bins = np.histogram(img[..., i].ravel(), 256, (0, 256))
    bmin = np.min(np.where(hist > (hist.sum() * 0.0005)))
    bmax = np.max(np.where(hist > (hist.sum() * 0.0005)))
    balanced_img[..., i] = np.clip(img[..., i], bmin, bmax)
    balanced_img[..., i] = (balanced_img[..., i] - bmin) / (bmax - bmin) * 255
I obtain good results with it, try it out!
According to the GIMP doc, we need to discard the pixel colors at each end of the Red, Green and Blue histograms that are used by only 0.05% of the pixels in the image, and stretch the remaining range as much as possible (Python code):
import numpy as np
import cv2  # opencv-python
import matplotlib.pyplot as plt
img = cv2.imread('test.jpg')
x = []
# get histogram for each channel
for i in cv2.split(img):
    hist, bins = np.histogram(i, 256, (0, 256))
    # discard colors at each end of the histogram which are used by only 0.05%
    tmp = np.where(hist > hist.sum() * 0.0005)[0]
    i_min = tmp.min()
    i_max = tmp.max()
    # stretch hist
    tmp = (i.astype(np.int32) - i_min) / (i_max - i_min) * 255
    tmp = np.clip(tmp, 0, 255)
    x.append(tmp.astype(np.uint8))
# combine image back and show it
s = np.dstack(x)
plt.imshow(s[::, ::, ::-1])
The result is pretty much the same as after GIMP's 'Colors -> Auto -> White Balance'.
UPD: we need np.clip() because OpenCV and NumPy cast int32 to uint8 differently:
# Numpy
np.array([-10, 260]).astype(np.uint8)
>>> array([246, 4], dtype=uint8)
# but we need just [0, 255]
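For comparison, clipping first makes out-of-range values saturate instead of wrapping:
# Numpy, with np.clip before the cast
np.clip(np.array([-10, 260]), 0, 255).astype(np.uint8)
>>> array([  0, 255], dtype=uint8)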
From what I understand after a quick look at the source code (and more or less confirmed with a test image), these are unrelated, and under the hood Colors > Auto > White Balance:
obtains the histogram for each channel
gets the values that determine the bottom and top 0.6%
stretches the range of values for that channel, using these two values as the black and white points, via an internal call that is very similar to "Levels".
Proof with a synthetic image:
Before:
After:
All this isn't hard to do in Python.
How to essentially get the equivalent of GIMP's Colors --> Auto --> White Balance feature:
Tested on Ubuntu 20.04.
Download the below code from my eRCaGuy_hello_world repo here: python/auto_white_balance_img.py.
Install dependencies:
pip3 install opencv-python # for cv2
pip3 install numpy
Now here is some fully-functional code, unlike some of the other answers here, which are snippets lacking things like import statements. I'm borrowing from @Canette Ouverture's answer here, and @banderlog013's answer here.
Create file auto_white_balance_img.py:
#!/usr/bin/python3
import cv2
import numpy as np
file_in = 'test.jpg'
file_in_base = file_in[:-4]  # strip file extension
file_in_extension = file_in[-4:]
img = cv2.imread(file_in)
# From @banderlog013's answer: https://stackoverflow.com/a/54864315/4561887
x = []
# get histogram for each channel
for i in cv2.split(img):
    hist, bins = np.histogram(i, 256, (0, 256))
    # discard colors at each end of the histogram which are used by only 0.05%
    tmp = np.where(hist > hist.sum() * 0.0005)[0]
    i_min = tmp.min()
    i_max = tmp.max()
    # stretch hist
    tmp = (i.astype(np.int32) - i_min) / (i_max - i_min) * 255
    tmp = np.clip(tmp, 0, 255)
    x.append(tmp.astype(np.uint8))
# stack the stretched channels back into one BGR image
img_out1 = np.dstack(x)
# From @Canette Ouverture's answer: https://stackoverflow.com/a/56365560/4561887
img_out2 = np.zeros_like(img)  # Initialize final image
for channel_index in range(3):
    hist, bins = np.histogram(img[..., channel_index].ravel(), 256, (0, 256))
    bmin = np.min(np.where(hist > (hist.sum() * 0.0005)))
    bmax = np.max(np.where(hist > (hist.sum() * 0.0005)))
    img_out2[..., channel_index] = np.clip(img[..., channel_index], bmin, bmax)
    img_out2[..., channel_index] = ((img_out2[..., channel_index] - bmin) /
                                    (bmax - bmin) * 255)
# Write new files
cv2.imwrite(file_in_base + '_out1' + file_in_extension, img_out1)
cv2.imwrite(file_in_base + '_out2' + file_in_extension, img_out2)
Make auto_white_balance_img.py executable:
chmod +x auto_white_balance_img.py
Now set the file_in variable in the file above to your desired input image path, then run it with:
python3 auto_white_balance_img.py
# OR
./auto_white_balance_img.py
Assuming you have set file_in = 'test.jpg', it will produce these two files:
test_out1.jpg # The result from @banderlog013's answer here
test_out2.jpg # The result from @Canette Ouverture's answer here
I use this function to auto white balance images. Unlike the GIMP function, it does not normalize image contrast, so it is useful for low-contrast images too.
import numpy as np
from imageio import imread
import matplotlib.pyplot as plt
def auto_white_balance(im, p=.6):
    '''Stretch each channel histogram to same percentile as mean.'''
    # get mean values
    p0, p1 = np.percentile(im, p), np.percentile(im, 100-p)
    for i in range(3):
        ch = im[:,:,i]
        # get channel values
        pc0, pc1 = np.percentile(ch, p), np.percentile(ch, 100-p)
        # stretch channel to same range as mean
        ch = (p1 - p0) * (ch - pc0) / (pc1 - pc0) + p0
        im[:,:,i] = ch
    return im
def test():
    im = imread('imageio:astronaut.png')
    # distort white balance
    im[:,:,0] = im[:,:,0] * .6
    im[:,:,1] = im[:,:,1] * .8
    plt.imshow(im)
    plt.show()
    im2 = auto_white_balance(im)
    im2 = np.clip(im2, 0, 255)  # or 0, 1 for float images
    plt.imshow(im2)
    plt.show()
if __name__ == "__main__":
    test()
If you want the equivalent of the GIMP function, use fixed values instead:
p0, p1 = 0, 255
K, cool. Figured out how to script one up.
Use it if you like. Does alright by me.
https://github.com/doyousketch2/eAWB

How to get grabCut to work opencv python with GC_INIT_WITH_MASK

I am trying to get the messi example to work: https://docs.opencv.org/3.1.0/d8/d83/tutorial_py_grabcut.html
In my setup, I want the entire process to be automated.
For example, I grab an image from the web:
http://wanderlustandlipstick.com/travel-tips/opting-out-full-body-scanners/
And using some opencv tools I autogenerate the following mask:
Black is supposed to be a certain background, White is supposed to be a certain foreground, and Grey is supposed to be unknown.
Following the messi tutorial (https://docs.opencv.org/3.1.0/d8/d83/tutorial_py_grabcut.html), below is my code. However, it only shows the small white circle area, as if it is treating grey like black (certain background).
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread("imagescan.png")
dimy = np.shape(img)[0] # seems to be backwards (x,y)
# https://stackoverflow.com/questions/22490721/how-can-i-get-the-x-and-y-dimensions-of-a-ndarray-numpy-python
dimx = np.shape(img)[1]
mask = np.zeros((dimy,dimx),np.uint8) # zeroes as array/matrix size of image
bgdModel = np.zeros((1,65), np.float64)
fgdModel = np.zeros((1,65), np.float64)  # separate arrays; grabCut updates both in place
newmask = cv2.imread('imagemask.png',0)
# informational purposes
removeBg = (newmask == 0)
removeBg = np.ravel(removeBg)
np.bincount(removeBg)
keepFg = (newmask == 255)
keepFg = np.ravel(keepFg)
np.bincount(keepFg)
#otherEl = (not (newmask == 0 or newmask == 255)) # throws error
#otherEl = np.ravel(otherEl)
#np.bincount(otherEl)
# appears at least one of each elements is required
# otherwise throws bgdSamples.empty error / fgdSamples.empty error
mask[newmask == 0] = 0
mask[newmask == 255] = 1
mask, bgdModel, fgdModel = cv2.grabCut(img,mask,None,bgdModel,fgdModel,5,cv2.GC_INIT_WITH_MASK)
mask2 = np.where((mask==2)|(mask==0),0,1).astype('uint8')
img2 = img*mask2[:,:,np.newaxis]
plt.imshow(img2),plt.colorbar(),plt.show()
The result is just a mask of the circle, as if the gray area is being treated as black.
In the mask image, you basically have 3 colors: black, white, and grey. In the following lines of code, you're setting background and foreground, but not the probable foreground.
mask[newmask == 0] = 0
mask[newmask == 255] = 1
Try using the OpenCV-provided constants (cv2.GC_BGD etc.) to avoid confusion.
# this line sets the grey areas - meaning any color not 0 and not 255 - to probable foreground.
mask = np.where(((newmask>0) & (newmask<255)),cv2.GC_PR_FGD,0).astype('uint8')
mask[newmask == 0] = cv2.GC_BGD
mask[newmask == 255] = cv2.GC_FGD
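With the mask built this way, the rest of your pipeline runs unchanged; using the named constants for the extraction step too (instead of the magic numbers 2 and 0) keeps it readable. A sketch with your variable names:
mask, bgdModel, fgdModel = cv2.grabCut(img, mask, None, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_MASK)
# keep certain and probable foreground, zero out the rest
mask2 = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype('uint8')
img2 = img * mask2[:, :, np.newaxis]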

How do I increase the contrast of an image in Python OpenCV

I am new to Python OpenCV. I have read some documents and answers here but I am unable to figure out what the following code means:
if (self.array_alpha is None):
    self.array_alpha = np.array([1.25])
    self.array_beta = np.array([-100.0])
# add a beta value to every pixel
cv2.add(new_img, self.array_beta, new_img)
# multiply every pixel value by alpha
cv2.multiply(new_img, self.array_alpha, new_img)
I have come to know that every pixel can be transformed as X = aY + b, where a and b are scalars. I understand this part, but I do not understand the code itself or how to use it to increase contrast.
Till now, I have only managed to read the image using img = cv2.imread('image.jpg', 0)
Thanks for your help
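For reference, a self-contained version of the snippet above (file name assumed; note the code applies beta before alpha, i.e. X = a(Y + b) rather than aY + b):
import cv2
import numpy as np
new_img = cv2.imread('image.jpg')   # any 8-bit image
array_alpha = np.array([1.25])      # gain: scales pixel values (contrast)
array_beta = np.array([-100.0])     # bias: shifts pixel values (brightness)
# shift every pixel down by 100; results saturate at 0
cv2.add(new_img, array_beta, new_img)
# then scale every pixel by 1.25; results saturate at 255
cv2.multiply(new_img, array_alpha, new_img)
cv2.imshow('adjusted', new_img)
cv2.waitKey(0)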
I would like to suggest a method using the LAB color space.
LAB color space expresses color variations across three channels. One channel for brightness and two channels for color:
L-channel: representing lightness in the image
a-channel: representing change in color between red and green
b-channel: representing change in color between yellow and blue
In the following I perform adaptive histogram equalization on the L-channel and convert the resulting image back to BGR color space. This enhances the brightness while also limiting contrast sensitivity. I have done the following using OpenCV 3.0.0 and python:
Code:
import cv2
import numpy as np
img = cv2.imread('flower.jpg', 1)
# converting to LAB color space
lab= cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l_channel, a, b = cv2.split(lab)
# Applying CLAHE to L-channel
# feel free to try different values for the limit and grid size:
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
cl = clahe.apply(l_channel)
# merge the CLAHE enhanced L-channel with the a and b channel
limg = cv2.merge((cl,a,b))
# Converting image from LAB Color model to BGR color space
enhanced_img = cv2.cvtColor(limg, cv2.COLOR_LAB2BGR)
# Stacking the original image with the enhanced image
result = np.hstack((img, enhanced_img))
cv2.imshow('Result', result)
Result:
The enhanced image is on the right
You can run the code as it is.
To know what CLAHE (Contrast Limited Adaptive Histogram Equalization) is about, refer to this Wikipedia page.
For Python, I haven't found an OpenCV function that provides contrast. As others have suggested, there are some techniques to automatically increase contrast using a very simple formula.
In the official OpenCV docs, it is suggested that this equation can be used to apply both contrast and brightness at the same time:
new_img = alpha*old_img + beta
where alpha corresponds to contrast and beta to brightness. Different cases:
alpha 1 beta 0 --> no change
0 < alpha < 1 --> lower contrast
alpha > 1 --> higher contrast
-127 < beta < +127 --> good range for brightness values
In C/C++, you can implement this equation using cv::Mat::convertTo, but we don't have access to that part of the library from Python. To do it in Python, I would recommend using the cv::addWeighted function, because it is quick and it automatically forces the output to be in the range 0 to 255 (e.g. for a 24-bit color image, 8 bits per channel). You could also use convertScaleAbs as suggested by @nathancy.
import cv2
img = cv2.imread('input.png')
contrast = 1.5    # example alpha value, assumed for illustration
brightness = 20   # example beta value, assumed for illustration
# call addWeighted function. use beta = 0 to effectively only operate on one image
out = cv2.addWeighted(img, contrast, img, 0, brightness)
The above formula and code is quick to write and will make changes to brightness and contrast. But they yield results that are significantly different from photo editing programs. The rest of this answer will yield a result that reproduces the brightness and contrast behavior of the GIMP and of LibreOffice. It's more lines of code, but it gives a nice result.
Contrast
In the GIMP, contrast levels go from -127 to +127. I adapted the formulas from here to fit in that range.
f = 131*(contrast + 127)/(127*(131-contrast))
new_image = f*(old_image - 127) + 127 = f*(old_image) + 127*(1-f)
To figure out brightness, I figured out the relationship between brightness and levels and used information in this levels post to arrive at a solution.
# pseudocode
if brightness > 0:
    shadow = brightness
    highlight = 255
else:
    shadow = 0
    highlight = 255 + brightness
new_img = ((highlight - shadow)/255)*old_img + shadow
brightness and contrast in Python and OpenCV
Putting it all together, using the reference "mandrill" image from USC SIPI:
import cv2
import numpy as np
# Open a typical 24 bit color image. For this kind of image there are
# 8 bits (0 to 255) per color channel
img = cv2.imread('mandrill.png')  # mandrill reference image from USC SIPI
s = 128
img = cv2.resize(img, (s,s), 0, 0, cv2.INTER_AREA)
def apply_brightness_contrast(input_img, brightness = 0, contrast = 0):
    if brightness != 0:
        if brightness > 0:
            shadow = brightness
            highlight = 255
        else:
            shadow = 0
            highlight = 255 + brightness
        alpha_b = (highlight - shadow)/255
        gamma_b = shadow
        buf = cv2.addWeighted(input_img, alpha_b, input_img, 0, gamma_b)
    else:
        buf = input_img.copy()
    if contrast != 0:
        f = 131*(contrast + 127)/(127*(131-contrast))
        alpha_c = f
        gamma_c = 127*(1-f)
        buf = cv2.addWeighted(buf, alpha_c, buf, 0, gamma_c)
    return buf
font = cv2.FONT_HERSHEY_SIMPLEX
fcolor = (0,0,0)
blist = [0, -127, 127, 0, 0, 64]  # list of brightness values
clist = [0, 0, 0, -64, 64, 64]  # list of contrast values
out = np.zeros((s*2, s*3, 3), dtype = np.uint8)
for i, b in enumerate(blist):
    c = clist[i]
    print('b, c: ', b, ', ', c)
    row = s*int(i/3)
    col = s*(i%3)
    print('row, col: ', row, ', ', col)
    out[row:row+s, col:col+s] = apply_brightness_contrast(img, b, c)
    msg = 'b %d' % b
    cv2.putText(out, msg, (col, row+s-22), font, .7, fcolor, 1, cv2.LINE_AA)
    msg = 'c %d' % c
    cv2.putText(out, msg, (col, row+s-4), font, .7, fcolor, 1, cv2.LINE_AA)
cv2.putText(out, 'OpenCV', (260,30), font, 1.0, fcolor, 2, cv2.LINE_AA)
cv2.imwrite('out.png', out)
I manually processed the images in the GIMP and added text tags in Python/OpenCV:
Note: @UtkarshBhardwaj has suggested that Python 2.x users must cast the contrast correction calculation to float in order to get a floating-point result, like so:
...
if contrast != 0:
    f = float(131*(contrast + 127))/(127*(131-contrast))
...
Contrast and brightness can be adjusted using alpha (α) and beta (β), respectively. These variables are often called the gain and bias parameters. The expression can be written as new_img = alpha*old_img + beta, the same formula as above.
OpenCV already implements this as cv2.convertScaleAbs(); just provide user-defined alpha and beta values:
import cv2
image = cv2.imread('1.jpg')
alpha = 1.5 # Contrast control (1.0-3.0)
beta = 0 # Brightness control (0-100)
adjusted = cv2.convertScaleAbs(image, alpha=alpha, beta=beta)
cv2.imshow('original', image)
cv2.imshow('adjusted', adjusted)
cv2.waitKey()
Before -> After
Note: For automatic brightness/contrast adjustment take a look at automatic contrast and brightness adjustment of a color photo
There are quite a few answers here ranging from simple to complex. I want to add another on the simpler side that seems a little more practical for actual contrast and brightness adjustments.
def adjust_contrast_brightness(img, contrast: float = 1.0, brightness: int = 0):
    """
    Adjusts contrast and brightness of an uint8 image.
    contrast: (0.0, inf) with 1.0 leaving the contrast as is
    brightness: [-255, 255] with 0 leaving the brightness as is
    """
    brightness += int(round(255*(1-contrast)/2))
    return cv2.addWeighted(img, contrast, img, 0, brightness)
We do the a*x+b adjustment through the addWeighted() function. However, to change the contrast without also modifying the brightness, the data needs to be zero-centered. That's not the case with OpenCV's default uint8 datatype, so we also need to adjust the brightness according to how the distribution is shifted.
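For instance (values here are just illustrative):
import cv2
img = cv2.imread('test.jpg')
# a bit more contrast and a bit more brightness; tweak to taste
out = adjust_contrast_brightness(img, contrast=1.3, brightness=10)
cv2.imwrite('adjusted.jpg', out)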
The best explanation for X = aY + b (in fact it is f(x) = ax + b) is provided at https://math.stackexchange.com/a/906280/357701
A simpler one, adjusting just lightness/luma/brightness for contrast, is below:
import cv2
img = cv2.imread('test.jpg')
cv2.imshow('test', img)
cv2.waitKey(1000)
imghsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# push dark V (value) pixels darker and bright ones brighter
imghsv[:,:,2] = [[max(pixel - 25, 0) if pixel < 190 else min(pixel + 25, 255) for pixel in row] for row in imghsv[:,:,2]]
cv2.imshow('contrast', cv2.cvtColor(imghsv, cv2.COLOR_HSV2BGR))
cv2.waitKey(1000)
raw_input()  # Python 2; use input() on Python 3
img = cv2.imread("/x2.jpeg")
image = cv2.resize(img, (1800, 1800))
alpha=1.5
beta=20
new_image=cv2.addWeighted(image,alpha,np.zeros(image.shape, image.dtype),0,beta)
cv2.imshow("new",new_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
