RGB to HSV conversion using PIL - python

I'm trying to automate the enhancement of some images that are to be transferred to a digital photo frame. I have code in place that resizes, adds a date/time to the least significant (least detailed) corner of the image, and pastes together pairs of portrait images to avoid displaying a single portrait on the frame's 41:20 low-resolution screen.
I've implemented a brightness-stretching filter for those pictures where the lighting wasn't so good, using the colorsys.rgb_to_hsv function to calculate H, S, V bands, operating on the V one and then converting back to RGB before saving a JPEG in the digital frame. Obviously, the conversion takes a lot of time, even using itertools tricks; I managed to improve things using psyco.
However, I noticed an example for PIL's Image.convert in which RGB is converted to the XYZ color space using a 4×3 conversion matrix (passed as a 12-tuple) as the second argument to the convert method, and I got to wonder:
How can I convert RGB to HSV (and then HSV back to RGB) using a custom matrix in the convert method call? (Minor rounding errors are not important in this case, so I don't mind that each band will be expressed as a series of 0…255 integers)
Thank you in advance.

Although I've seen references[1] that claim the HSV color space is a linear transformation from RGB, which would seem to imply that it could be done with a matrix, I have been unable to find or work out for myself what such a matrix would look like. In a way this doesn't really surprise me, based on all the [similar] non-matrix procedural implementations I've also seen -- the way they go about it doesn't look linear.
Anyway, while looking into this, I ran across a [somewhat dated] article in former SGI researcher Paul Haeberli's online computer graphics notebook titled Matrix Operations for Image Processing which describes how to do a number of different color transformations using 4x4 matrices which might help you. All of the examples given operate directly on RGB color images and, like geometric matrix transformations, any sequence of them can be combined into a single matrix using concatenation.
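For illustration, this is the kind of matrix the convert call accepts: a sketch based on the RGB-to-XYZ example in the PIL handbook, a 4x3 matrix flattened into a 12-tuple (the file name is hypothetical, and, as the paragraph above suggests, a true RGB-to-HSV conversion doesn't appear to fit this linear form):

from PIL import Image

# RGB -> CIE XYZ coefficients from the PIL handbook's convert example (4x3 matrix as a 12-tuple)
rgb2xyz = (
    0.412453, 0.357580, 0.180423, 0,
    0.212671, 0.715160, 0.072169, 0,
    0.019334, 0.119193, 0.950227, 0,
)

im = Image.open("photo.jpg")        # hypothetical input image
out = im.convert("RGB", rgb2xyz)    # each output band is a linear combination of the input R, G, B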
Hope this helps.
[1]: Colour Space Conversions <http://www.poynton.com/PDFs/coloureq.pdf>, section 2.7.3, HSL (Hue Saturation and Lightness):
"This represents a wealth of similar colour spaces, alternative names include HSI (intensity), HSV (value), HCI (chroma / colourfulness), HVC, TSD (hue saturation and darkness) etc. Most of these colour spaces are linear transforms from RGB and are therefore device dependent and non-linear. Their advantage lies in the extremely intuitive manner of specifying colour. It is very easy to select a desired hue and to then modify it slightly by adjustment of its saturation and intensity."

The formula to transform an RGB value to an HSV value can be found here: http://www.rapidtables.com/convert/color/rgb-to-hsv.htm. I once needed it the other way around, and made the following function for it.
def hsb2rgb(hsb):
    '''
    Transforms a hsb array to the corresponding rgb tuple
    In: hsb = array of three ints (h between 0 and 360, s and b between 0 and 100)
    Out: rgb = array of three ints (between 0 and 255)
    '''
    H = float(hsb[0] / 360.0)
    S = float(hsb[1] / 100.0)
    B = float(hsb[2] / 100.0)
    if (S == 0):
        R = int(round(B * 255))
        G = int(round(B * 255))
        B = int(round(B * 255))
    else:
        var_h = H * 6
        if (var_h == 6):
            var_h = 0  # H must be < 1
        var_i = int(var_h)
        var_1 = B * (1 - S)
        var_2 = B * (1 - S * (var_h - var_i))
        var_3 = B * (1 - S * (1 - (var_h - var_i)))
        if (var_i == 0):
            var_r = B ; var_g = var_3 ; var_b = var_1
        elif (var_i == 1):
            var_r = var_2 ; var_g = B ; var_b = var_1
        elif (var_i == 2):
            var_r = var_1 ; var_g = B ; var_b = var_3
        elif (var_i == 3):
            var_r = var_1 ; var_g = var_2 ; var_b = B
        elif (var_i == 4):
            var_r = var_3 ; var_g = var_1 ; var_b = B
        else:
            var_r = B ; var_g = var_1 ; var_b = var_2
        R = int(round(var_r * 255))
        G = int(round(var_g * 255))
        B = int(round(var_b * 255))
    return [R, G, B]
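A quick sanity check of the function above (pure green is hue 120 with full saturation and brightness, and a grey results whenever S is 0):

print(hsb2rgb([120, 100, 100]))   # -> [0, 255, 0]
print(hsb2rgb([0, 100, 100]))     # -> [255, 0, 0]
print(hsb2rgb([0, 0, 50]))        # -> [128, 128, 128]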

Related

Create depth map image as 24-bit (Carla)

I have a depth map encoded in 24 bits (labeled "Original").
With the code below:
carla_img = cv.imread('carla_deep.png', flags=cv.IMREAD_COLOR)
carla_img = carla_img[:, :, :3]
carla_img = carla_img[:,:,::-1]
gray_depth = ((carla_img[:,:,0] + carla_img[:,:,1] * 256.0 + carla_img[:,:,2] * 256.0 * 256.0)/((256.0 * 256.0 * 256.0) - 1))
gray_depth = gray_depth * 1000
I am able to convert it as in the "Converted" image.
As shown here: https://carla.readthedocs.io/en/latest/ref_sensors/
How can I reverse this process (without using any large external libraries, at most OpenCV)? In Python I create a depth map with the help of OpenCV, and I want to save it in the Carla format (24-bit).
This is how I create depth map:
imgL = cv.imread('leftImg.png',0)
imgR = cv.imread('rightImg.png',0)
stereo = cv.StereoBM_create(numDisparities=128, blockSize=17)
disparity = stereo.compute(imgL,imgR)
CameraFOV = 120
Focus_length = width /(2 * math.tan(CameraFOV * math.pi / 360))
camerasBaseline = 0.3
depthMap = (camerasBaseline * Focus_length) / disparity
How can I save the obtained depth map in the same form as in the picture marked "Original"?
Docs say:
normalized = (R + G * 256 + B * 256 * 256) / (256 * 256 * 256 - 1)
in_meters = 1000 * normalized
So if you have a depth map in_meters, you do the reverse, by rearranging the equations.
You need to make sure your depth map (from block matching) is in units of meters. Your calculations there look sensible, assuming your cameras have a baseline of 0.3 meters.
First variant
Take the calculation apart, using division and modulo operations.
Various .astype calls are required to turn floats into integers, and wider integers into narrower ones (a reasonable assumption for pictures).
import numpy as np

normalized = in_meters / 1000
BGR = (normalized * (2**24-1)).astype(np.uint32)
BG, R = np.divmod(BGR, 2**8)   # R is the lowest byte
B, G = np.divmod(BG, 2**8)
carla_img = np.dstack([B, G, R]).astype(np.uint8)  # BGR order, as OpenCV expects
Second variant
One could also do this with a view, reinterpreting the uint32 data as four uint8 values. This assumes a little endian system, which is a fair assumption but one needs to be aware of it.
...
reinterpreted = BGR.view(np.uint8) # lowest byte first, i.e. order is RGBx
reinterpreted.shape = BGR.shape + (4,) # np.view refuses to add a dimension
carla_img = reinterpreted[:,:,(2,1,0)] # select BGR
# this may require a .copy() to get data without holes (OpenCV may want this)
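Either way, a quick sanity check is to decode the result with the formula from the docs and compare it against the input (a sketch, assuming in_meters is a float array of depths within the 0-1000 m range):

# round trip: decode the encoded image with the documented formula
decoded = (carla_img[:, :, 2] + carla_img[:, :, 1] * 256.0
           + carla_img[:, :, 0] * 256.0 * 256.0) / (256.0**3 - 1) * 1000
assert np.allclose(decoded, in_meters, atol=1e-4)   # within one quantization step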
Disclaimer
I could not test the code in this answer because you haven't provided usable data.

Convert single 8 bit value (0-255) to 3, 8 bit values

Alright, so the best way I can explain this is with this site: https://www.colorspire.com/rgb-color-wheel/. There is a color wheel; when you first load the site, don't move the little pointer in the square. Just drag the bar, and you'll see the R, G, B values change, but you will also notice that one of the values is always 0. How can I recreate that, but instead of dragging a bar, I input a single 8-bit value (out of 255)? So I could run the program with, say, the value 170 and it would somehow map that to a certain color.
You are actually changing the hue when you move the bar, so try this:
import colorsys
color = int(input("Enter a value from 0-359: "))
test_color = colorsys.hsv_to_rgb(color / 360.0, 1, 1)
References
HSV to RGB Color Conversion
import math
import colorsys
from sty import fg, Style, RgbFg   # terminal colour output used below

# value holds the single 0-255 input
test_color = colorsys.hsv_to_rgb(int(value) / 255.0, 1, 1)
r, g, b = test_color
r = math.trunc(r * 255)
g = math.trunc(g * 255)
b = math.trunc(b * 255)
fg.orange = Style(RgbFg(r, g, b))
msg = fg.orange + str(f" r = {r}, g = {g}, b = {b}")
print(msg)
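A self-contained version of the same idea, without the terminal-colour dependency (a sketch; value_to_rgb is just an illustrative helper, and mapping the 0-255 input onto the fully saturated hue wheel reproduces the slider behaviour where at least one channel is always 0):

import colorsys

def value_to_rgb(value):
    '''Map a single 0-255 value onto the fully saturated hue wheel.'''
    h = value / 255.0                        # spread the input over the full hue range
    r, g, b = colorsys.hsv_to_rgb(h, 1, 1)   # s = v = 1, so at least one channel is 0
    return tuple(int(round(c * 255)) for c in (r, g, b))

print(value_to_rgb(170))   # -> (0, 0, 255), a pure blue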

Python GDAL, how to change brightness without parsing each pixel

I'm currently trying to figure out how to increase/decrease the brightness of a .tiff file without processing each pixel individually (too slow and power-hungry). Right now, in the front-end micro-service, the user moves an ng-slider to choose the desired brightness, which is sent to the back end, where it is used to compute a new .tiff.
So I'm wondering whether there is a GDAL function I haven't found that can directly alter the image and increase/decrease the brightness at will!
The code currently looks like this (I'm also trying to change the contrast, but I could find my way once I understand how to change the brightness):
# Contrast & Luminosity
def get_correctMap(path, luminosity, contrast):
    ds = gdal.Open(path)
    # To normalize
    band1 = ds.GetRasterBand(1)
    # Get the max value
    maxValue = int(2**16 - 1)
    if band1.DataType == gdal.GDT_UInt16:
        maxValue = int(2**16 - 1)
    elif band1.DataType == gdal.GDT_Byte:
        maxValue = int(2**8 - 1)
    else:
        LOGGER.info(f"band type {band1.DataType} not handled: use default size of value (16 bits)")
    band1 = ds.ReadAsArray(0, 0, ds.RasterXSize, ds.RasterYSize)[0]
    band2 = ds.ReadAsArray(0, 0, ds.RasterXSize, ds.RasterYSize)[1]
    band3 = ds.ReadAsArray(0, 0, ds.RasterXSize, ds.RasterYSize)[2]
    # ReadAsArray returns (row, col) arrays, so index [y, x]
    for x in range(0, ds.RasterXSize):
        for y in range(0, ds.RasterYSize):
            r = float(band1[y, x]) / maxValue
            g = float(band2[y, x]) / maxValue
            b = float(band3[y, x]) / maxValue
            # Convert to HLS, then apply luminosity and contrast
            (h, l, s) = colorsys.rgb_to_hls(r, g, b)
            l = min(max(0, l + (l - 0.5) * (luminosity - 0.5)), 1)
            s = min(max(0, s + (s - 0.5) * (contrast - 0.5)), 1)
            (r, g, b) = colorsys.hls_to_rgb(h, l, s)
            band1[y, x] = int(r * maxValue)
            band2[y, x] = int(g * maxValue)
            band3[y, x] = int(b * maxValue)
    # Need to save the changes here, but this is obviously already far too slow to pursue
    # and save the new bands
    ds.FlushCache()
    return path
I hope you know a better way that I couldn't find!
Thanks in advance.
A first lead could be to use the latest features provided by OpenLayers, but that is no longer a back-end solution; I'm digging into it.
https://geoadmin.github.io/ol3/apidoc/ol.layer.Tile.html
EDIT: The contrast and luminosity features are only implemented in OpenLayers 3, not in later versions (including mine, OL 5), so the proper answer is: it is not possible.
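For reference, the Python-side pixel loop itself can be avoided by operating on whole NumPy arrays at once. A minimal sketch of a vectorized brightness adjustment (adjust_brightness, factor and max_value are illustrative names, and the HSV conversion uses matplotlib.colors rather than anything GDAL-specific, so this approximates the HLS approach above rather than reproducing it exactly):

import numpy as np
from osgeo import gdal
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb   # vectorized converters

def adjust_brightness(path, factor, max_value=255):
    '''Scale the V channel of an RGB raster by factor, with no Python-level pixel loop.'''
    ds = gdal.Open(path)
    rgb = ds.ReadAsArray()[:3].transpose(1, 2, 0).astype(np.float64) / max_value  # (rows, cols, 3)
    hsv = rgb_to_hsv(rgb)
    hsv[..., 2] = np.clip(hsv[..., 2] * factor, 0, 1)    # brightness channel
    out = (hsv_to_rgb(hsv) * max_value).round().astype(np.uint16)
    return out   # write back with a GDAL driver as needed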

RGB to HSI Conversion - Hue always calculated as 0

So I've been trying to create this RGB to HSI conversion algorithm for a project I'm working on but I have run into several roadblocks while doing it.
I've so far narrowed the problems down to two possible issues:
The program will not detect which of the two values compared in the if-statement is true and just defaults to the initial if-statement
The program is not calculating the correct values when calculating the hue of the image as it always defaults to the inverse cosine's default value.
Here is the code:
import cv2
import numpy as np

def RGB_TO_HSI(img):
    with np.errstate(divide='ignore', invalid='ignore'):
        bgr = cv2.split(img)
        intensity = np.divide(bgr[0] + bgr[1] + bgr[2], 3)
        saturation = 1 - 3 * np.divide(np.minimum(bgr[2], bgr[1], bgr[0]), bgr[2] + bgr[1] + bgr[0])

        def calc_hue(bgr):
            blue = bgr[0]
            green = bgr[1]
            sqrt_calc = np.sqrt(((bgr[2] - bgr[1]) * (bgr[2] - bgr[1])) + ((bgr[2] - bgr[0]) * (bgr[1] - bgr[0])))
            if green.any >= blue.any:
                hue = np.arccos(1/2 * ((bgr[2]-bgr[1]) + (bgr[2] - bgr[0])) / sqrt_calc)
            else:
                hue = 360 - np.arccos(1/2 * ((bgr[2]-bgr[1]) + (bgr[2] - bgr[0])) / sqrt_calc)
            hue = np.int8(hue)
            return hue

        hue = calc_hue(bgr)
        hsi = cv2.merge((intensity, saturation, calc_hue(bgr)))
Here is the formula I used for the conversion
Thanks in advance for any tips or ideas
OK, there is a lot going on here. I know you are new, but before you post a question, check the docs, otherwise you'll end up using the libraries incorrectly (e.g. np.minimum() does not compare three values at once).
Pay attention to your variable types. Your code does operations on np.uint8 values as if they were np.int32. Always try to keep your variable types consistent.
Enough said; without changing your code too much, here is what I came up with:
import cv2
import numpy as np
from math import pi

def BGR_TO_HSI(img):
    with np.errstate(divide='ignore', invalid='ignore'):
        bgr = np.int32(cv2.split(img))
        blue = bgr[0]
        green = bgr[1]
        red = bgr[2]
        intensity = np.divide(blue + green + red, 3)
        minimum = np.minimum(np.minimum(red, green), blue)
        saturation = 1 - 3 * np.divide(minimum, red + green + blue)
        sqrt_calc = np.sqrt(((red - green) * (red - green)) + ((red - blue) * (green - blue)))
        if (green >= blue).any():
            hue = np.arccos((1/2 * ((red-green) + (red - blue)) / sqrt_calc))
        else:
            hue = 2*pi - np.arccos((1/2 * ((red-green) + (red - blue)) / sqrt_calc))
        hue = hue*180/pi
        hsi = cv2.merge((hue, saturation, intensity))
        return hsi
I hope it helped
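One refinement worth noting (not part of the answer above): the (green >= blue).any() test picks a single branch for the whole image, whereas the HSI formula chooses the branch per pixel. np.where expresses that elementwise; a sketch that reuses red, green, blue and sqrt_calc from the function above (the small epsilon to avoid division by zero is my addition):

theta = np.arccos(0.5 * ((red - green) + (red - blue)) / (sqrt_calc + 1e-10))
hue = np.where(green >= blue, theta, 2 * pi - theta) * 180 / pi   # per-pixel branch selection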

How to generate random 'greenish' colors

Anyone have any suggestions on how to make randomized colors that are all greenish? Right now I'm generating the colors by this:
color = (randint(100, 200), randint(120, 255), randint(100, 200))
That mostly works, but I get brownish colors a lot.
Simple solution: use the HSL or HSV color space instead of RGB (convert the result back to RGB afterwards if you need to). The difference is the meaning of the tuple: where RGB means values for red, green and blue, in HSL the H is the hue (120 degrees, or 0.33, meaning green, for example), the S is the saturation and the V is the brightness. So keep the H at a fixed value (or, for even more random colors, randomize it by adding/subtracting a small random number) and randomize the S and the V. See the Wikipedia article.
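For instance, a minimal sketch with colorsys (the hue window around green and the saturation/brightness ranges are guesses to be tuned):

import colorsys
import random

h = random.uniform(0.28, 0.40)    # hue near green (green is at 0.33)
s = random.uniform(0.5, 1.0)
v = random.uniform(0.5, 1.0)
color = tuple(int(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))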
As others have suggested, generating random colours is much easier in the HSV colour space (or HSL, the difference is pretty irrelevant for this)
So, code to generate random "green'ish" colours, and (for demonstration purposes) display them as a series of simple coloured HTML span tags:
#!/usr/bin/env python2.5
"""Random green colour generator, written by dbr, for
http://stackoverflow.com/questions/1586147/how-to-generate-random-greenish-colors
"""

def hsv_to_rgb(h, s, v):
    """Converts HSV value to RGB values
    Hue is in range 0-359 (degrees), value/saturation are in range 0-1 (float)

    Direct implementation of:
    http://en.wikipedia.org/wiki/HSL_and_HSV#Conversion_from_HSV_to_RGB
    """
    h, s, v = [float(x) for x in (h, s, v)]

    hi = int(h / 60) % 6          # which 60-degree sector the hue falls in
    f = (h / 60) - int(h / 60)    # fractional position within the sector

    p = v * (1 - s)
    q = v * (1 - f * s)
    t = v * (1 - (1 - f) * s)

    if hi == 0:
        return v, t, p
    elif hi == 1:
        return q, v, p
    elif hi == 2:
        return p, v, t
    elif hi == 3:
        return p, q, v
    elif hi == 4:
        return t, p, v
    elif hi == 5:
        return v, p, q

def test():
    """Check examples on..
    http://en.wikipedia.org/wiki/HSL_and_HSV#Examples
    ..work correctly
    """
    def verify(got, expected):
        if got != expected:
            raise AssertionError("Got %s, expected %s" % (got, expected))

    verify(hsv_to_rgb(0, 1, 1), (1, 0, 0))
    verify(hsv_to_rgb(120, 0.5, 1.0), (0.5, 1, 0.5))
    verify(hsv_to_rgb(240, 1, 0.5), (0, 0, 0.5))

def main():
    """Generate 50 random RGB colours, and create some simple coloured HTML
    span tags to verify them.
    """
    test() # Run simple test suite

    from random import randint, uniform

    for i in range(50):
        # Tweak these values to change colours/variance
        h = randint(90, 140) # Select random green'ish hue from hue wheel
        s = uniform(0.2, 1)
        v = uniform(0.3, 1)

        r, g, b = hsv_to_rgb(h, s, v)

        # Convert to 0-255 range for HTML output
        r, g, b = [x*255 for x in (r, g, b)]
        print "<span style='background:rgb(%i, %i, %i)'> </span>" % (r, g, b)

if __name__ == '__main__':
    main()
The output (when viewed in a web-browser) should look something along the lines of:
Edit: I didn't know about the colorsys module. Instead of the above hsv_to_rgb function, you could use colorsys.hsv_to_rgb, which makes the code much shorter (it's not quite a drop-in replacement, as my hsv_to_rgb function expects the hue to be in degrees instead of 0-1):
#!/usr/bin/env python2.5
from colorsys import hsv_to_rgb
from random import randint, uniform

for x in range(50):
    h = uniform(0.25, 0.38) # Select random green'ish hue from hue wheel
    s = uniform(0.2, 1)
    v = uniform(0.3, 1)

    r, g, b = hsv_to_rgb(h, s, v)

    # Convert to 0-255 range for HTML output
    r, g, b = [x*255 for x in (r, g, b)]
    print "<span style='background:rgb(%i, %i, %i)'> </span>" % (r, g, b)
Check out the colorsys module:
http://docs.python.org/library/colorsys.html
Use the HSL or HSV color space. Randomize the hue to be close to green, then choose completely random stuff for the saturation and V (brightness).
If you stick with RGB, you basically just need to make sure the G value is greater than the R and B, and try to keep the blue and red values similar so that the hue doesn't go too crazy. Extending from Slaks, maybe something like (I know next to nothing about Python):
greenval = randint(100, 255)
redval = randint(20,(greenval - 60))
blueval = randint((redval - 20), (redval + 20))
color = (redval, greenval, blueval)
So in this case you are lucky enough to want variations on a primary color, but for artistic uses like this it is better to specify color wheel coordinates rather than primary color magnitudes.
You probably want something from the colorsys module like:
colorsys.hsv_to_rgb(h, s, v)
Convert the color from HSV coordinates to RGB coordinates.
The solution with the HSx color space is a very good one. However, if you need something extremely simplistic and have no specific requirements about the distribution of the colors (like uniformity), a simplistic RGB-based solution would be just to make sure that the G value is greater than both R and B:
rr = randint(100, 200)
rb = randint(100, 200)
rg = randint(max(rr, rb) + 1, 255)
This will give you "greenish" colors. Some of them will be ever so slightly greenish. You can increase the guaranteed degree of greenishness by increasing (absolutely or relatively) the lower bound in the last randint call.
What you want is to work in terms of HSL instead of RGB. You could find a range of hue that satisfies "greenish" and pick a random hue from it. You could also pick random saturation and lightness but you'll probably want to keep your saturation near 1 and your lightness around 0.5 but you can play with them.
Below is some ActionScript code to convert HSL to RGB. I haven't touched Python in a while or I'd post the Python version.
I find that greenish is something like 0.47*PI to 0.8*PI.
/**
#param h hue [0, 2PI]
#param s saturation [0,1]
#param l lightness [0,1]
#return object {r,g,b} {[0,1],[0,1][0,1]}
*/
public function hslToRGB(h:Number, s:Number, l:Number):Color
{
    var q:Number = (l<0.5)?l*(1+s):l+s-l*s;
    var p:Number = 2*l-q;
    var h_k:Number = h/(Math.PI*2);
    var t_r:Number = h_k+1/3;
    var t_g:Number = h_k;
    var t_b:Number = h_k-1/3;
    if (t_r < 0) ++t_r; else if (t_r > 1) --t_r;
    if (t_g < 0) ++t_g; else if (t_g > 1) --t_g;
    if (t_b < 0) ++t_b; else if (t_b > 1) --t_b;
    var c:Color = new Color();
    if (t_r < 1/6) c.r = p+((q-p)*6*t_r);
    else if (t_r < 1/2) c.r = q;
    else if (t_r < 2/3) c.r = p+((q-p)*6*(2/3-t_r));
    else c.r = p;
    if (t_g < 1/6) c.g = p+((q-p)*6*t_g);
    else if (t_g < 1/2) c.g = q;
    else if (t_g < 2/3) c.g = p+((q-p)*6*(2/3-t_g));
    else c.g = p;
    if (t_b < 1/6) c.b = p+((q-p)*6*t_b);
    else if (t_b < 1/2) c.b = q;
    else if (t_b < 2/3) c.b = p+((q-p)*6*(2/3-t_b));
    else c.b = p;
    return c;
}
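In Python, colorsys.hls_to_rgb gives the equivalent conversion, so a rough sketch of the same idea (the greenish hue range above, 0.47*PI to 0.8*PI, converted to colorsys's 0-1 hue scale; the saturation/lightness ranges are guesses):

import colorsys
import random
from math import pi

h = random.uniform(0.47 * pi, 0.8 * pi) / (2 * pi)   # greenish hue on colorsys's 0-1 scale
s = random.uniform(0.8, 1.0)                          # keep saturation near 1
l = random.uniform(0.4, 0.6)                          # lightness around 0.5
r, g, b = colorsys.hls_to_rgb(h, l, s)                # note the argument order: h, l, s
color = (int(r * 255), int(g * 255), int(b * 255))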
The simplest way to do this is to make sure that the red and blue components are the same, like this: (Forgive my Python)
rb = randint(100, 200)
color = (rb, randint(120, 255), rb)
I'd go with the HSV approach everyone else mentioned. Another approach would be to get a nice high-resolution photo with some greenery in it, crop out the non-green parts, and pick random pixels from it using PIL.
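A sketch of that photo-sampling idea (greenery.jpg is a hypothetical, pre-cropped image that contains only greenery):

from PIL import Image
from random import randrange

img = Image.open("greenery.jpg").convert("RGB")
w, h = img.size
color = img.getpixel((randrange(w), randrange(h)))   # a random (r, g, b) sample from the photo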
