If I had an RGB decimal triplet such as 255, 165, 0, what could I do to convert it to CMYK?
For example:
>>> red, green, blue = 255, 165, 0
>>> rgb_to_cmyk(red, green, blue)
(0, 35, 100, 0)
Here's a Python port of a JavaScript implementation.
RGB_SCALE = 255
CMYK_SCALE = 100

def rgb_to_cmyk(r, g, b):
    if (r, g, b) == (0, 0, 0):
        # black
        return 0, 0, 0, CMYK_SCALE

    # rgb [0,255] -> cmy [0,1]
    c = 1 - r / RGB_SCALE
    m = 1 - g / RGB_SCALE
    y = 1 - b / RGB_SCALE

    # extract out k [0, 1]
    min_cmy = min(c, m, y)
    c = (c - min_cmy) / (1 - min_cmy)
    m = (m - min_cmy) / (1 - min_cmy)
    y = (y - min_cmy) / (1 - min_cmy)
    k = min_cmy

    # rescale to the range [0,CMYK_SCALE]
    return c * CMYK_SCALE, m * CMYK_SCALE, y * CMYK_SCALE, k * CMYK_SCALE
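A quick sanity check against the example in the question (the results are floats, so round them for display):
>>> [round(v, 2) for v in rgb_to_cmyk(255, 165, 0)]
[0.0, 35.29, 100.0, 0.0]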
The accepted answer provided a nice way to go from RGB to CMYK, but the question title also asks for vice versa. So here's my contribution for the conversion from CMYK to RGB:
def cmyk_to_rgb(c, m, y, k, cmyk_scale, rgb_scale=255):
    r = rgb_scale * (1.0 - c / float(cmyk_scale)) * (1.0 - k / float(cmyk_scale))
    g = rgb_scale * (1.0 - m / float(cmyk_scale)) * (1.0 - k / float(cmyk_scale))
    b = rgb_scale * (1.0 - y / float(cmyk_scale)) * (1.0 - k / float(cmyk_scale))
    return r, g, b
Unlike patapouf_ai's answer, this function doesn't result in negative rgb values.
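A minimal round-trip check, feeding the accepted answer's rgb_to_cmyk output back through this function (floats again, so round for display):
>>> [round(v) for v in cmyk_to_rgb(*rgb_to_cmyk(255, 165, 0), cmyk_scale=100)]
[255, 165, 0]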
But converting a full image from RGB to CMYK, or vice versa, is as simple as:
from PIL import Image

image = Image.open(path_to_image)

if image.mode == 'CMYK':
    rgb_image = image.convert('RGB')
if image.mode == 'RGB':
    cmyk_image = image.convert('CMYK')
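If you then want to write the converted image to disk, keep in mind that the file format must support CMYK (the filename below is just an example):
cmyk_image.save('image_cmyk.tif')  # TIFF and JPEG can store CMYK; PNG cannot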
Following up on Mr. Fooz's implementation.
There are two possible implementations of CMYK. There is the one where the proportions are with respect to white space (used, for example, in GIMP), which is the one implemented by Mr. Fooz, but there is also another implementation of CMYK (used, for example, by LibreOffice) which gives the colour proportions with respect to the total colour space. If you wish to use CMYK to model the mixing of paints or inks, then the second one might be better, because colours can then simply be added together linearly, using a weight for each colour (0.5 for a half-half mixture).
Here is the second version of CMYK with back conversion:
rgb_scale = 255
cmyk_scale = 100

def rgb_to_cmyk(r, g, b):
    if (r == 0) and (g == 0) and (b == 0):
        # black
        return 0, 0, 0, cmyk_scale

    # rgb [0,255] -> cmy [0,1]
    c = 1 - r / float(rgb_scale)
    m = 1 - g / float(rgb_scale)
    y = 1 - b / float(rgb_scale)

    # extract out k [0,1]
    min_cmy = min(c, m, y)
    c = c - min_cmy
    m = m - min_cmy
    y = y - min_cmy
    k = min_cmy

    # rescale to the range [0,cmyk_scale]
    return c * cmyk_scale, m * cmyk_scale, y * cmyk_scale, k * cmyk_scale
def cmyk_to_rgb(c, m, y, k):
    r = rgb_scale * (1.0 - (c + k) / float(cmyk_scale))
    g = rgb_scale * (1.0 - (m + k) / float(cmyk_scale))
    b = rgb_scale * (1.0 - (y + k) / float(cmyk_scale))
    return r, g, b
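To illustrate the linear-mixing property claimed above, here is a minimal sketch (the mix_cmyk helper and the ink colours are mine, for illustration only): a 50/50 mixture of pure cyan and pure yellow is just the component-wise average, and it converts back to a green.
def mix_cmyk(color1, color2, w1=0.5, w2=0.5):
    # weighted, component-wise mixture of two CMYK colours
    return tuple(w1 * a + w2 * b for a, b in zip(color1, color2))

cyan = rgb_to_cmyk(0, 255, 255)        # (100.0, 0.0, 0.0, 0.0)
yellow = rgb_to_cmyk(255, 255, 0)      # (0.0, 0.0, 100.0, 0.0)
green = mix_cmyk(cyan, yellow)         # (50.0, 0.0, 50.0, 0.0)
print(cmyk_to_rgb(*green))             # (127.5, 255.0, 127.5)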
Using a CMYK conversion like the one given in the accepted answer (at the time of this writing) is not accurate for most practical purposes.
CMYK is based on how four kinds of ink form colors on paper; however, the color mixture of inks is considerably more complex than the mixture of the "lights" used to form colors in the RGB color model.
As CMYK is useful, above all, when printing images, any conversion to CMYK needs to take the printing condition into account, including what printer and what paper is used for printing. An accurate conversion to CMYK for printing purposes is not trivial and requires calibrating the printer and measuring CMYK patches on a test sheet, among other things.
There is no meaning for CMYK colors that is as ubiquitous as sRGB is for RGB, as illustrated by the International Color Consortium's page of CMYK characterization data.
See also my color article on this subject.
For this conversion to be useful, you need a color management system, with profiles describing the RGB system and the CMYK system being converted.
http://en.wikipedia.org/wiki/CMYK_color_model#Conversion
Here is a discussion of how to solve this problem using ICC profiles:
How can one perform color transforms with ICC profiles on a set of arbitrary pixel values (not on an image data structure)?
Here is a link to pyCMS, which uses ICC color profiles to do the conversion:
http://www.cazabon.com/pyCMS/
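For reference, pyCMS's functionality now lives in Pillow as PIL.ImageCms. A minimal sketch of a profile-based conversion (the .icc path is hypothetical; you need a profile that matches your actual printing condition):
from PIL import Image, ImageCms

img = Image.open('photo.jpg')                          # an RGB image
srgb = ImageCms.createProfile('sRGB')
cmyk = ImageCms.getOpenProfile('USWebCoatedSWOP.icc')  # hypothetical profile path
out = ImageCms.profileToProfile(img, srgb, cmyk, outputMode='CMYK')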
I tried using the back computation provided by bisounours_tronconneuse and it failed for CMYK (96, 63, 0, 12). The result should be like this.
Converting the w3schools JavaScript (code here) to Python, the code below now returns correct results:
def cmykToRgb(c, m, y, k):
    c = float(c) / 100.0
    m = float(m) / 100.0
    y = float(y) / 100.0
    k = float(k) / 100.0
    r = round(255.0 - ((min(1.0, c * (1.0 - k) + k)) * 255.0))
    g = round(255.0 - ((min(1.0, m * (1.0 - k) + k)) * 255.0))
    b = round(255.0 - ((min(1.0, y * (1.0 - k) + k)) * 255.0))
    return (r, g, b)
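For the failing case above, this port returns:
>>> cmykToRgb(96, 63, 0, 12)
(9, 83, 224)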
Related
Question:
I have defined my own colorspace (Yellow-Blue) using some loops, and want to convert a standard HD image from RGB to YB in real time, with some post-processing filters, but the method I wrote performs the task at a very slow speed.
Context:
I was wondering what colors dogs would see, and found that they cannot distinguish between green and red:
So I decided to define my own YB colorspace, as shown in this scheme:
calculating.py
bits = 8
values = 2 ** bits - 1
color_count = values * 6

def hues():
    lst = []
    for i in range(color_count):
        r = g = b = 0
        turn = (i // values) + 1

        if turn == 1:
            r = values
            g = i % values
            b = 0
        elif turn == 2:
            r = values - i % values
            g = values
            b = 0
        elif turn == 3:
            r = 0
            g = values
            b = i % values
        elif turn == 4:
            r = 0
            g = values - i % values
            b = values
        elif turn == 5:
            r = i % values
            g = 0
            b = values
        elif turn == 6:
            r = values
            g = 0
            b = values - i % values

        r = round(r / values * 255)
        g = round(g / values * 255)
        b = round(b / values * 255)

        lst.append((r, g, b))
    return lst
def dues():
    lst = []
    for i in range(color_count):
        r = g = b = 0
        turn = (i // values) + 1

        if turn == 1:
            r = values
            g = values
            b = round((values - i % values) / 2)
        elif turn == 2:
            r = values
            g = values
            b = round((i % values) / 2)
        elif turn == 3:
            if i % values < values / 2:
                r = values
                g = values
                b = round(values / 2 + i % values)
            else:
                r = round(3 / 2 * values - i % values)
                g = round(3 / 2 * values - i % values)
                b = values
        elif turn == 4:
            r = round((values - i % values) / 2)
            g = round((values - i % values) / 2)
            b = values
        elif turn == 5:
            r = round((i % values) / 2)
            g = round((i % values) / 2)
            b = values
        elif turn == 6:
            if i % values < values / 2:
                r = round(values / 2 + i % values)
                g = round(values / 2 + i % values)
                b = values
            else:
                r = values
                g = values
                b = round(3 / 2 * values - i % values)

        r = round(r / values * 255)
        g = round(g / values * 255)
        b = round(b / values * 255)

        lst.append((r, g, b))
    return lst
def rgb_to_hsl(color: tuple):
    r, g, b = color
    r /= 255
    g /= 255
    b /= 255

    cmax = max(r, g, b)
    cmin = min(r, g, b)
    delta = cmax - cmin

    h = 0
    l = (cmax + cmin) / 2

    if delta == 0:
        h = 0
    elif cmax == r:
        h = ((g - b) / delta) % 6
    elif cmax == g:
        h = ((b - r) / delta) + 2
    elif cmax == b:
        h = ((r - g) / delta) + 4
    h *= 60

    if delta == 0:
        s = 0
    else:
        s = delta / (1 - abs(2 * l - 1))

    return h, s, l
def hsl_to_rgb(color: tuple):
    h, s, l = color

    c = (1 - abs(2 * l - 1)) * s
    x = c * (1 - abs((h / 60) % 2 - 1))
    m = l - c / 2

    r = g = b = 0
    if 0 <= h < 60:
        r = c
        g = x
    elif 60 <= h < 120:
        r = x
        g = c
    elif 120 <= h < 180:
        g = c
        b = x
    elif 180 <= h < 240:
        g = x
        b = c
    elif 240 <= h < 300:
        r = x
        b = c
    elif 300 <= h < 360:
        r = c
        b = x

    r = round((r + m) * 255)
    g = round((g + m) * 255)
    b = round((b + m) * 255)
    return r, g, b
On saving the list values I obtained the expected Hues:
Now the main processing includes pixel-by-pixel conversion of color in this order:
Obtaining RGB
RGB --> HSL
Change value of hue to corresponding value in dues_hsl list
New HSL --> RGB
Set new RGB value at same coordinates in another array
This is repeated for every pixel in the image, and it took about 58 seconds on a test image of 481 x 396 pixels.
Input and output:
Code for the same:
defining.py
from PIL import Image
import numpy as np
from calculating import hues, dues
from calculating import rgb_to_hsl as hsl
from calculating import hsl_to_rgb as rgb

hues = hues()
dues = dues()
# Hues = human hues
# Dues = dog hues

hues_hsl = [hsl(i) for i in hues]
dues_hsl = [hsl(i) for i in dues]

img = np.array(Image.open('dog.png').convert('RGB'))
arr_blank = np.zeros(img.shape[0:3])
print(arr_blank.shape)
print(img.shape[0:3])
total = img.shape[0] * img.shape[1]

for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        hsl_val = hsl(tuple(img[i, j]))
        h = dues_hsl[hues_hsl.index(min(hues_hsl, key=lambda x: abs(x[0] - hsl_val[0])))][0]
        pixel = np.array(rgb((h, hsl_val[1], hsl_val[2])))
        arr_blank[i, j, :] = pixel
        print(f'{i * img.shape[1] + j} / {total} --- {(i * img.shape[1] + j)/total*100} %')

print(arr_blank)
data = Image.fromarray(arr_blank.astype('uint8'), 'RGB')
data.save('dog_color.png')
Conclusion:
After this I want to add a Gaussian blur filter too, post-conversion, in real time, but this is taking too long for just one frame. Is there a way the speed can be improved?
Machine info:
If this info is helpful: i7-10750H @ 2.6 GHz, SSD, 16 GB RAM
Thanks!
I had forgotten Pillow also does HSV just as well, so no need for OpenCV.
This executes in about 0.45 seconds on my machine.
from PIL import Image
import numpy as np
values = 2 ** 8 - 1
color_count = values * 6
def dog_hues():
    # ... from original post, removed for brevity ...
    return lst
# Convert the dog_hues() list into an image of size 256x1
hue_map_img = Image.new("RGB", (color_count, 1))
hue_map_img.putdata(dog_hues())
hue_map_img = hue_map_img.resize((256, 1), Image.LANCZOS)
# Get the hues out of it
hsv_array = np.array(hue_map_img.convert("HSV"))
hue_map = hsv_array[:, :, 0].flatten()
# Read in the dog, convert it to HSV
img = np.array(Image.open("dog.jpg").convert("HSV"))
# Remap hue
img[:, :, 0] = hue_map[img[:, :, 0]]
# Convert back to RGB and save
img = Image.fromarray(img, "HSV").convert("RGB")
img.save("dog_hsv.jpg")
1st remark: you can't really change colorspace like this. When you see a color that the human eye (and therefore human RGB image formats) interprets as yellow, like (255, 255, 0), you can't know whether it is made of a pure yellow frequency (570 nm for example) that excites both our red and green cones but not the blue ones, or of a mixture of red frequencies (690 nm for example) and green frequencies (530 nm), or of any other spectrum that leads to the same result: red and green cones saturated (255, 255) and the blue one untouched (0).
And you need that information to deduce how the two dog cones are impacted.
In other words, there isn't any mapping between human color and dog color. In math terms, there is a projection between the real color space (∞-dimensional, a spectrum) and the human color space (3D, to simplify: r, g, and b). There is another projection between the real color space and the dog colorspace (2D, also to simplify). But those projection axes are not included one in the other, so there isn't any projection between the 3D human color space and the 2D dog colorspace. There is no way to know how a dog sees a color with only the knowledge of how a human sees it; you need to know the real color. You could do this with hyperspectral cameras (and compute both projections to get both the human RGB image and the dog YB image). And that assumes the quite naive (but correct to a first approximation) idea that those colors follow elementary college-level linear algebra, which, in reality, they don't exactly.
That being said, PIL- or OpenCV-based solutions are a solution. But more generally speaking, if you don't trust PIL or OpenCV or any existing library's color model and really want to reinvent the wheel (I respect that; there is no better way to understand things than to reinvent the wheel), then one rule you must abide by is: never, ever iterate over pixels. If you do that, you have lost the performance match. Python is very, very slow. The only reason it is still a popular language, and why there are still fast programs made with Python, is that Python coders do whatever it takes so that the computation-heavy loops (in image processing, the loops over the pixels) are not actually executed in Python.
So you must rely on numpy to perform your operations on all pixels, and not write the for loops yourself.
For example, here is a rewrite of your rgb_to_hsl doing batch computation with numpy. That is, rgb_to_hsl is not meant to be called with a single color, but with a whole 2D array of colors, that is, an image:
import numpy as np

def rgb_to_hsl(image):
    # rgb holds the r, g, b channels between 0 and 1 (as you did for the
    # individual r, g, b variables, but it is easier (see below) to keep them
    # as a single array). rgb is not just a triplet (unlike your r, g, b) but
    # a 2d-array of triplets (so a 3d-array).
    rgb = image / 255

    # Likewise, cmax, cmin, delta are not scalars as in your code, but
    # 2d-arrays of such scalars.
    cmax = rgb.max(axis=2)  # axis=2 means that axis 0 and 1 are kept, and max
                            # is computed along axis 2, that is along the 3
                            # values of each triplet. So rgb is a HxWx3
                            # 3d-array (axis 0 = y, axis 1 = x, axis 2 = color
                            # channel). cmax is a HxW 2d-array.
    cmin = rgb.min(axis=2)  # likewise
    delta = cmax - cmin     # same code. But this is done on all HxW cmax and cmin

    h = np.zeros_like(delta)  # 2d-array of 0
    l = (cmax + cmin) / 2     # 2d-array of (cmax + cmin) / 2

    # Here comes the trickier part. We need to separate the cases, and do the
    # computation in each subset concerning those cases.
    case1 = delta == 0
    h[case1] = 0  # In reality, we could skip this, since h is already 0 everywhere

    # Note the & ~case1: the grey pixels of case1 must be excluded from the
    # other cases, otherwise we would divide by delta == 0 (your scalar code
    # avoided this implicitly through elif).
    case2 = (cmax == rgb[:, :, 0]) & ~case1   # cmax == r
    h[case2] = (rgb[case2, 1] - rgb[case2, 2]) / delta[case2] % 6
    case3 = (cmax == rgb[:, :, 1]) & ~case1   # cmax == g
    h[case3] = (rgb[case3, 2] - rgb[case3, 0]) / delta[case3] + 2
    case4 = (cmax == rgb[:, :, 2]) & ~case1   # cmax == b
    h[case4] = (rgb[case4, 0] - rgb[case4, 1]) / delta[case4] + 4

    h *= 60  # same code, applied to all HxW values of h

    s = np.zeros_like(h)
    s[case1] = 0  # same remark. I just mimic your code as much as possible,
                  # but that is already the default value.
    s[~case1] = delta[~case1] / (1 - abs(2 * l[~case1] - 1))
    # ~case1 is the opposite of case1. So, the equivalent of the else in your code.

    # returns 3 2d HxW arrays for h, s and l
    return h, s, l
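A hypothetical usage, calling it once on the whole image instead of once per pixel:
from PIL import Image
img = np.array(Image.open('dog.png').convert('RGB')).astype(float)
h, s, l = rgb_to_hsl(img)  # three HxW arrays, with no Python-level pixel loop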
I have an OpenCV image, as usual in BGR color space, and I need to convert it to CMYK. I searched online but found basically only (slight variations of) the following approach:
import numpy

def bgr2cmyk(cv2_bgr_image):
    bgrdash = cv2_bgr_image.astype(float) / 255.0

    # Calculate K as (1 - whatever is biggest out of Rdash, Gdash, Bdash)
    K = 1 - numpy.max(bgrdash, axis=2)

    with numpy.errstate(divide="ignore", invalid="ignore"):
        # Calculate C
        C = (1 - bgrdash[..., 2] - K) / (1 - K)
        C = 255 * C
        C = C.astype(numpy.uint8)

        # Calculate M
        M = (1 - bgrdash[..., 1] - K) / (1 - K)
        M = 255 * M
        M = M.astype(numpy.uint8)

        # Calculate Y
        Y = (1 - bgrdash[..., 0] - K) / (1 - K)
        Y = 255 * Y
        Y = Y.astype(numpy.uint8)

    return (C, M, Y, K)
This works fine; however, it feels quite slow: for an 800 x 600 px image it takes about 30 ms on my i7 CPU. Typical operations with cv2, like thresholding and the like, take only a few ms for the same image, so since this is all numpy I was expecting this CMYK conversion to be faster.
However, I haven't found anything that makes this significantly faster. There is a conversion to CMYK via PIL.Image, but the resulting channels do not look as they do with the algorithm listed above.
Any other ideas?
There are several things you should do:
shake the math
use integer math where possible
optimize beyond what numpy can do
Shaking the math
Given
RGB' = RGB / 255
K = 1 - max(RGB')
C = (1-K - R') / (1-K)
M = (1-K - G') / (1-K)
Y = (1-K - B') / (1-K)
You see what you can factor out.
RGB' = RGB / 255
J = max(RGB')
K = 1 - J
C = (J - R') / J
M = (J - G') / J
Y = (J - B') / J
Integer math
Don't normalize to [0,1] for these calculations. The max() can be done on integers. The differences can too. K can be calculated entirely with integer math.
J = max(RGB)
K = 255 - J
C = 255 * (J - R) / J
M = 255 * (J - G) / J
Y = 255 * (J - B) / J
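As a rough sketch (my code, not part of the original answer), the factored math can be written in plain numpy for comparison with the Numba versions below. The division still promotes to float, so this is the "shaken math" rather than the fully integer path, and, like the question's code, it leaves C, M, Y undefined for pure-black pixels (J = 0):
import numpy as np

def bgr2cmyk_factored(bgr_img):
    J = bgr_img.max(axis=2)        # max over the B, G, R channels, per pixel
    K = 255 - J
    with np.errstate(divide="ignore", invalid="ignore"):
        scale = 255.0 / J          # the factored-out 255/J
        C = ((J - bgr_img[..., 2]) * scale).astype(np.uint8)
        M = ((J - bgr_img[..., 1]) * scale).astype(np.uint8)
        Y = ((J - bgr_img[..., 0]) * scale).astype(np.uint8)
    return C, M, Y, K.astype(np.uint8)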
Numba
import numba
Numba will optimize that code beyond simply using numpy library routines. It will also do the parallelization as indicated. Choosing the numpy error model and allowing fastmath will cause division by zero to not throw an exception or warning, but also make the math a little faster.
Both variants significantly outperform a plain Python/numpy solution. Much of that is due to better use of CPU registers and caches, rather than intermediate arrays, as is usual with numpy.
First variant: ~1.9 ms
@numba.njit(parallel=True, error_model="numpy", fastmath=True)
def bgr2cmyk_v4(bgr_img):
    bgr_img = np.ascontiguousarray(bgr_img)
    (height, width) = bgr_img.shape[:2]
    CMYK = np.empty((height, width, 4), dtype=np.uint8)
    for i in numba.prange(height):
        for j in range(width):
            B, G, R = bgr_img[i, j]
            J = max(R, G, B)
            K = np.uint8(255 - J)
            C = np.uint8(255 * (J - R) / J)
            M = np.uint8(255 * (J - G) / J)
            Y = np.uint8(255 * (J - B) / J)
            CMYK[i, j] = (C, M, Y, K)
    return CMYK
Thanks to Cris Luengo for pointing out further refactoring potential (pulling out 255/J), leading to a second variant. It takes ~1.6 ms
@numba.njit(parallel=True, error_model="numpy", fastmath=True)
def bgr2cmyk_v5(bgr_img):
    bgr_img = np.ascontiguousarray(bgr_img)
    (height, width) = bgr_img.shape[:2]
    CMYK = np.empty((height, width, 4), dtype=np.uint8)
    for i in numba.prange(height):
        for j in range(width):
            B, G, R = bgr_img[i, j]
            J = np.uint8(max(R, G, B))
            Jinv = np.uint16((255 * 256) // J)  # fixed point math
            K = np.uint8(255 - J)
            C = np.uint8(((J - R) * Jinv) >> 8)
            M = np.uint8(((J - G) * Jinv) >> 8)
            Y = np.uint8(((J - B) * Jinv) >> 8)
            CMYK[i, j] = (C, M, Y, K)
    return CMYK
This fixed-point math causes floor rounding. For round-to-nearest, the expression must be ((J - R) * Jinv + 128) >> 8. That costs a bit more time (~1.8 ms).
What else?
I think that numba/LLVM didn't apply SIMD here. Some investigation revealed that the Loop Vectorizer doesn't like any of the instances it was asked to consider.
An OpenCL kernel might be even faster. OpenCL can run on CPUs.
Numba can also use CUDA.
I would start by profiling which part is the bottleneck, e.g. how fast is it without the / (1 - K) calculation?
Precalculating 1/(1 - K) might help; even precalculating 255/(1 - K) is possible:
K = 1 - numpy.max(bgrdash, axis=2)
kRez255 = 255 / (1 - K)

with numpy.errstate(divide="ignore", invalid="ignore"):
    # Calculate C
    C = (1 - bgrdash[..., 2] - K) * kRez255
    C = C.astype(numpy.uint8)

    # Calculate M
    M = (1 - bgrdash[..., 1] - K) * kRez255
    M = M.astype(numpy.uint8)

    # Calculate Y
    Y = (1 - bgrdash[..., 0] - K) * kRez255
    Y = Y.astype(numpy.uint8)

return (C, M, Y, K)
But only profiling can show if it is the calculation at all which slows down the conversion.
I am implementing bilinear interpolation to resize an image. The function for bilinear interpolation and resizing is as follows:
import math
import numpy as np

def bl_resize(original_img, new_h, new_w):
    old_h, old_w, c = original_img.shape
    resized = np.ones((new_h, new_w, c))
    w_scale_factor = (old_w - 1) / (new_w - 1) if new_w != 0 else 0
    h_scale_factor = (old_h - 1) / (new_h - 1) if new_h != 0 else 0
    for i in range(new_h):
        for j in range(new_w):
            for k in range(c):
                x = i * h_scale_factor
                y = j * w_scale_factor
                x_floor = math.floor(x)
                x_ceil = min(old_h - 1, math.ceil(x))
                y_floor = math.floor(y)
                y_ceil = min(old_w - 1, math.ceil(y))

                if (x_ceil == x_floor) and (y_ceil == y_floor):
                    q = original_img[int(x), int(y), k]
                else:
                    v1 = original_img[x_floor, y_floor, k]
                    v2 = original_img[x_ceil, y_floor, k]
                    v3 = original_img[x_floor, y_ceil, k]
                    v4 = original_img[x_ceil, y_ceil, k]

                    q1 = v1 * (x_ceil - x) + v2 * (x - x_floor)
                    q2 = v3 * (x_ceil - x) + v4 * (x - x_floor)
                    q = q1 * (y_ceil - y) + q2 * (y - y_floor)

                resized[i, j, k] = q
    return resized.astype(np.uint8)
I am using x_ceil = min(old_h - 1, math.ceil(x)) and y_ceil = min(old_w - 1, math.ceil(y)) to avoid accessing an index larger than the dimensions of the original image array. Without them I would get an index-out-of-range error for the last index in both dimensions.
The resized image using this code contains a black grid on it. Here are some output images. The first image is of a shrunken version of the original image and the second one is that of the enlarged one!
EDIT: I have identified what exactly is causing the problem, but I don't understand why it does. Changing the scale factor for both dimensions from old/new to (old - 1)/(new - 1) led to grid-free results. I want to understand how the scale factor values can create this problem.
Well, after doing some debugging I figured out the reason. The black grid is caused by zero values being incorrectly assigned to pixels where either x or y takes an integer value: in that case x_ceil == x_floor (or y_ceil == y_floor), so both interpolation weights along that axis, (x_ceil - x) and (x - x_floor), are zero, which results in q = 0.
I have documented everything here: https://meghal-darji.medium.com/implementing-bilinear-interpolation-for-image-resizing-357cbb2c2722#f91e-235aaa8634b8
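For what it's worth, here is a minimal sketch (my own variant, not from the linked article) of one way to handle the degenerate cases inside the inner loop: interpolate along one axis only when the other axis lands exactly on a pixel.
if (x_ceil == x_floor) and (y_ceil == y_floor):
    q = original_img[int(x), int(y), k]
elif x_ceil == x_floor:
    # x is integral: interpolate along y only
    q1 = original_img[int(x), y_floor, k]
    q2 = original_img[int(x), y_ceil, k]
    q = q1 * (y_ceil - y) + q2 * (y - y_floor)
elif y_ceil == y_floor:
    # y is integral: interpolate along x only
    q1 = original_img[x_floor, int(y), k]
    q2 = original_img[x_ceil, int(y), k]
    q = q1 * (x_ceil - x) + q2 * (x - x_floor)
else:
    ...  # the general 2D case from the original code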
I'm currently working on a volume rendering project in python where I use a compositing ray casting function to produce an image, given a 3D volume consisting of voxels. The function (which I show below) works correctly, but has a very long runtime. Do you guys have tips on how to make this function faster? The code is Python 3.6.8 and uses various numpy arrays.
def render_compositing(self, view_matrix: np.ndarray, volume: Volume, image_size: int, image: np.ndarray):
    # Clear the image
    self.clear_image()

    # U, V, View vectors. See documentation in parent's class
    u_vector = view_matrix[0:3]
    v_vector = view_matrix[4:7]
    view_vector = view_matrix[8:11]

    # Center of the image. Image is squared
    image_center = image_size / 2

    # Center of the volume (3-dimensional)
    volume_center = [volume.dim_x / 2, volume.dim_y / 2, volume.dim_z / 2]

    # Define a step size to make the loop faster
    step = 2 if self.interactive_mode else 1

    for i in range(0, image_size, step):
        for j in range(0, image_size, step):
            sum_color = TFColor(0, 0, 0, 0)
            for k in range(0, image_size, step):
                # Get the voxel coordinate X
                voxel_coordinate_x = u_vector[0] * (i - image_center) + v_vector[0] * (j - image_center) + \
                                     view_vector[0] * (k - image_center) + volume_center[0]
                # Get the voxel coordinate Y
                voxel_coordinate_y = u_vector[1] * (i - image_center) + v_vector[1] * (j - image_center) + \
                                     view_vector[1] * (k - image_center) + volume_center[1]
                # Get the voxel coordinate Z
                voxel_coordinate_z = u_vector[2] * (i - image_center) + v_vector[2] * (j - image_center) + \
                                     view_vector[2] * (k - image_center) + volume_center[2]

                color = self.tfunc.get_color(
                    get_voxel(volume, voxel_coordinate_x, voxel_coordinate_y, voxel_coordinate_z))
                sum_color.r = color.a * color.r + (1 - color.a) * sum_color.r
                sum_color.g = color.a * color.g + (1 - color.a) * sum_color.g
                sum_color.b = color.a * color.b + (1 - color.a) * sum_color.b
                sum_color.a = color.a + (1 - color.a) * sum_color.a

            red = sum_color.r
            green = sum_color.g
            blue = sum_color.b
            alpha = sum_color.a

            # Compute the color value (0...255)
            red = math.floor(red * 255) if red < 255 else 255
            green = math.floor(green * 255) if green < 255 else 255
            blue = math.floor(blue * 255) if blue < 255 else 255
            alpha = math.floor(alpha * 255) if alpha < 255 else 255

            # Assign color to the pixel i, j
            image[(j * image_size + i) * 4] = red
            image[(j * image_size + i) * 4 + 1] = green
            image[(j * image_size + i) * 4 + 2] = blue
            image[(j * image_size + i) * 4 + 3] = alpha
I don't understand why you want to use Python for this code. Isn't a shader the better approach if you are concerned about speed?
Anyway, here are a few things that can be done in the current code:
Voxel coordinates can be calculated using numpy: you can make a 3-channel 2D image and compute the x, y, z coordinates for an entire slice (k) in a single shot.
The step above can be further optimized by storing an image of the x, y, z coordinates of the first slice (k = 0) and a constant per-slice step view_vector * step_size. Every other slice can then be computed simply as (XYZ at k = 0) + k * step_size.
Use early ray termination by thresholding the alpha value at 0.999 or 0.99. This does not look like much but gives a lot of speed gain; a sketch follows below.
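A minimal sketch of the early-termination idea (my own illustration; note that the question's loop composites each new sample in front of the accumulated color, so the ray would need to be marched front-to-back, accumulating opacity, for this to be valid):
accumulated_a = 0.0
for k in range(0, image_size, step):
    color = ...  # sample the volume as in the question
    # front-to-back alpha accumulation
    accumulated_a = accumulated_a + (1 - accumulated_a) * color.a
    if accumulated_a > 0.99:
        # the pixel is effectively opaque; later samples cannot
        # change it visibly, so stop marching this ray
        break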
Say I have 80 (or n) polar coordinates that are pretty evenly distributed across a circular area. I want a unique color for each polar coordinate.
If you imagine a color wheel like this (though it could be a different transformation if you like), I'd like one of its colors given a polar coordinate.
At first I was not using the actual polar coordinates, and just scaled one of the channels by some even stride, like RGB (255, i * stride, 255). But now I'd like different colors from all over the spectrum (or at least more than a single color tone).
I thought of just using an image of a color wheel and then sampling it, but that seems kind of weak. Isn't there a formula I could use to convert the polar coordinates to some assumed/generated RGB, HSV, or CMYK space?
I'm working in Python 3, but I'm mostly interested in the formulas/algorithm. I'm not using any specific plotting API.
You could use a conversion from HSV or HSL to RGB; many packages such as Colour (numpy-vectorised) or python-colormath (vanilla Python) have implementations:
From Colour, assuming you have Numpy and the tsplit and tstack definitions:
def RGB_to_HSV(RGB):
    """
    Converts from *RGB* colourspace to *HSV* colourspace.

    Parameters
    ----------
    RGB : array_like
        *RGB* colourspace array.

    Returns
    -------
    ndarray
        *HSV* array.

    Notes
    -----
    -   Input *RGB* colourspace array is in domain [0, 1].
    -   Output *HSV* colourspace array is in range [0, 1].

    References
    ----------
    -   :cite:`EasyRGBj`
    -   :cite:`Smith1978b`
    -   :cite:`Wikipediacg`

    Examples
    --------
    >>> RGB = np.array([0.49019608, 0.98039216, 0.25098039])
    >>> RGB_to_HSV(RGB)  # doctest: +ELLIPSIS
    array([ 0.2786738...,  0.744     ,  0.98039216])
    """

    maximum = np.amax(RGB, -1)
    delta = np.ptp(RGB, -1)

    V = maximum

    R, G, B = tsplit(RGB)

    S = np.asarray(delta / maximum)
    S[np.asarray(delta == 0)] = 0

    delta_R = (((maximum - R) / 6) + (delta / 2)) / delta
    delta_G = (((maximum - G) / 6) + (delta / 2)) / delta
    delta_B = (((maximum - B) / 6) + (delta / 2)) / delta

    H = delta_B - delta_G
    H = np.where(G == maximum, (1 / 3) + delta_R - delta_B, H)
    H = np.where(B == maximum, (2 / 3) + delta_G - delta_R, H)

    H[np.asarray(H < 0)] += 1
    H[np.asarray(H > 1)] -= 1
    H[np.asarray(delta == 0)] = 0

    HSV = tstack((H, S, V))

    return HSV
def HSV_to_RGB(HSV):
    """
    Converts from *HSV* colourspace to *RGB* colourspace.

    Parameters
    ----------
    HSV : array_like
        *HSV* colourspace array.

    Returns
    -------
    ndarray
        *RGB* colourspace array.

    Notes
    -----
    -   Input *HSV* colourspace array is in domain [0, 1].
    -   Output *RGB* colourspace array is in range [0, 1].

    References
    ----------
    -   :cite:`EasyRGBn`
    -   :cite:`Smith1978b`
    -   :cite:`Wikipediacg`

    Examples
    --------
    >>> HSV = np.array([0.27867384, 0.74400000, 0.98039216])
    >>> HSV_to_RGB(HSV)  # doctest: +ELLIPSIS
    array([ 0.4901960...,  0.9803921...,  0.2509803...])
    """

    H, S, V = tsplit(HSV)

    h = np.asarray(H * 6)
    h[np.asarray(h == 6)] = 0

    i = np.floor(h)
    j = V * (1 - S)
    k = V * (1 - S * (h - i))
    l = V * (1 - S * (1 - (h - i)))  # noqa

    i = tstack((i, i, i)).astype(np.uint8)

    RGB = np.choose(
        i, [
            tstack((V, l, j)),
            tstack((k, V, j)),
            tstack((j, V, l)),
            tstack((j, k, V)),
            tstack((l, j, V)),
            tstack((V, j, k)),
        ],
        mode='clip')

    return RGB
def RGB_to_HSL(RGB):
    """
    Converts from *RGB* colourspace to *HSL* colourspace.

    Parameters
    ----------
    RGB : array_like
        *RGB* colourspace array.

    Returns
    -------
    ndarray
        *HSL* array.

    Notes
    -----
    -   Input *RGB* colourspace array is in domain [0, 1].
    -   Output *HSL* colourspace array is in range [0, 1].

    References
    ----------
    -   :cite:`EasyRGBl`
    -   :cite:`Smith1978b`
    -   :cite:`Wikipediacg`

    Examples
    --------
    >>> RGB = np.array([0.49019608, 0.98039216, 0.25098039])
    >>> RGB_to_HSL(RGB)  # doctest: +ELLIPSIS
    array([ 0.2786738...,  0.9489796...,  0.6156862...])
    """

    minimum = np.amin(RGB, -1)
    maximum = np.amax(RGB, -1)
    delta = np.ptp(RGB, -1)

    R, G, B = tsplit(RGB)

    L = (maximum + minimum) / 2

    S = np.where(L < 0.5, delta / (maximum + minimum),
                 delta / (2 - maximum - minimum))
    S[np.asarray(delta == 0)] = 0

    delta_R = (((maximum - R) / 6) + (delta / 2)) / delta
    delta_G = (((maximum - G) / 6) + (delta / 2)) / delta
    delta_B = (((maximum - B) / 6) + (delta / 2)) / delta

    H = delta_B - delta_G
    H = np.where(G == maximum, (1 / 3) + delta_R - delta_B, H)
    H = np.where(B == maximum, (2 / 3) + delta_G - delta_R, H)

    H[np.asarray(H < 0)] += 1
    H[np.asarray(H > 1)] -= 1
    H[np.asarray(delta == 0)] = 0

    HSL = tstack((H, S, L))

    return HSL
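To tie this back to the question: a simple mapping (my own sketch, using only the standard library's colorsys rather than the vectorised code above) is to use the angle as hue and the radius as saturation, which reproduces the colour-wheel picture:
import colorsys
import math

def polar_to_rgb(theta, r, r_max=1.0):
    # angle -> hue in [0, 1), radius -> saturation, full value
    h = (theta % (2 * math.pi)) / (2 * math.pi)
    s = min(r / r_max, 1.0)
    rgb = colorsys.hsv_to_rgb(h, s, 1.0)
    return tuple(round(c * 255) for c in rgb)

print(polar_to_rgb(0.0, 1.0))              # pure red: (255, 0, 0)
print(polar_to_rgb(2 * math.pi / 3, 1.0))  # pure green: (0, 255, 0)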