I tried to execute the code from this answer, but I can't seem to get away from dividing by zero.
I also tried to copy this code from CamanJS for transforming from RGB to HSV, but I get the same thing:
RuntimeWarning: invalid value encountered in divide
The CamanJS code is:
Convert.rgbToHSV = function(r, g, b) {
  var d, h, max, min, s, v;
  r /= 255;
  g /= 255;
  b /= 255;
  max = Math.max(r, g, b);
  min = Math.min(r, g, b);
  v = max;
  d = max - min;
  s = max === 0 ? 0 : d / max;
  if (max === min) {
    h = 0;
  } else {
    h = (function() {
      switch (max) {
        case r:
          return (g - b) / d + (g < b ? 6 : 0);
        case g:
          return (b - r) / d + 2;
        case b:
          return (r - g) / d + 4;
      }
    })();
    h /= 6;
  }
  return {
    h: h,
    s: s,
    v: v
  };
};
My code, based on the answer from here:
import Image
import numpy as np

def rgb_to_hsv(rgb):
    hsv = np.empty_like(rgb)
    hsv[..., 3] = rgb[..., 3]
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    maxc = np.amax(rgb[..., :3], axis=-1)
    print maxc
    minc = np.amin(rgb[..., :3], axis=-1)
    print minc
    hsv[..., 2] = maxc
    dif = (maxc - minc)
    hsv[..., 1] = np.where(maxc == 0, 0, dif / maxc)
    #rc = (maxc - r) / (maxc - minc)
    #gc = (maxc - g) / (maxc - minc)
    #bc = (maxc - b) / (maxc - minc)
    hsv[..., 0] = np.select(
        [dif == 0, r == maxc, g == maxc, b == maxc],
        [np.zeros(maxc.shape),
         (g - b) / dif + np.where(g < b, 6, 0),
         (b - r) / dif + 2,
         (r - g) / dif + 4])
    hsv[..., 0] = (hsv[..., 0] / 6.0) % 1.0
    idx = (minc == maxc)
    hsv[..., 0][idx] = 0.0
    hsv[..., 1][idx] = 0.0
    return hsv
The RuntimeWarning is raised in both versions wherever I divide by maxc or by dif (because they contain zero values).
I hit the same problem with the original code by @unutbu. CamanJS seems to avoid it by handling every pixel separately, that is, one r, g, b combination at a time.
I also get a ValueError of "shape mismatch: objects cannot be broadcast to a single shape" when the select function is executed, but I double-checked all the shapes of the choices and they are all (256, 256).
Edit:
I corrected the function using this Wikipedia article and updated the code... now I only get the RuntimeWarning.
The error comes from the fact that numpy.where (and numpy.select) computes all its arguments, even if they aren't used in the output. So in your line hsv[...,1] = np.where(maxc==0, 0, dif/maxc), dif / maxc is computed even for elements where maxc == 0, but then only the ones where maxc != 0 are used. This means that your output is fine, but you still get the RuntimeWarning.
If you want to avoid the warning (and make your code a little faster), do something like:
nz = maxc != 0 # find the nonzero values
hsv[nz, 1] = dif[nz] / maxc[nz]
You'll also have to change the numpy.select statement, because it also evaluates all its arguments.
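For example, the same masking idea applied to the hue branches replaces the numpy.select call entirely, so each division only ever sees the pixels of its own branch. This is just a sketch assuming the r, g, b, maxc, dif arrays from your function (hue_channel is my name for it):

import numpy as np

def hue_channel(r, g, b, maxc, dif):
    h = np.zeros_like(maxc)
    nz = dif != 0                              # pixels that actually have a hue
    rmax = nz & (r == maxc)                    # red is the max channel
    gmax = nz & (g == maxc) & ~rmax            # green is the max (and not red)
    bmax = nz & (b == maxc) & ~rmax & ~gmax    # blue is the max (neither above)
    h[rmax] = (g[rmax] - b[rmax]) / dif[rmax] + np.where(g[rmax] < b[rmax], 6, 0)
    h[gmax] = (b[gmax] - r[gmax]) / dif[gmax] + 2
    h[bmax] = (r[bmax] - g[bmax]) / dif[bmax] + 4
    return (h / 6.0) % 1.0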
Related
Question:
I have defined my own colorspace (Yellow-Blue) using some loops, and want to convert a standard HD image from RGB to YB in real time, with some post-processing filters, but the method I wrote does the job far too slowly.
Context:
I was wondering what colors dogs would see, and found that they cannot distinguish between green and red:
So I decided to define my own YB colorspace, as shown in this scheme:
calculating.py
bits = 8
values = 2 ** bits - 1
color_count = values * 6
def hues():
    lst = []
    for i in range(color_count):
        r = g = b = 0
        turn = (i // values) + 1
        if turn == 1:
            r = values
            g = i % values
            b = 0
        elif turn == 2:
            r = values - i % values
            g = values
            b = 0
        elif turn == 3:
            r = 0
            g = values
            b = i % values
        elif turn == 4:
            r = 0
            g = values - i % values
            b = values
        elif turn == 5:
            r = i % values
            g = 0
            b = values
        elif turn == 6:
            r = values
            g = 0
            b = values - i % values
        r = round(r / values * 255)
        g = round(g / values * 255)
        b = round(b / values * 255)
        lst.append((r, g, b))
    return lst
def dues():
    lst = []
    for i in range(color_count):
        r = g = b = 0
        turn = (i // values) + 1
        if turn == 1:
            r = values
            g = values
            b = round((values - i % values) / 2)
        elif turn == 2:
            r = values
            g = values
            b = round((i % values) / 2)
        elif turn == 3:
            if i % values < values / 2:
                r = values
                g = values
                b = round((values / 2 + i % values))
            else:
                r = round((3 / 2 * values - i % values))
                g = round((3 / 2 * values - i % values))
                b = values
        elif turn == 4:
            r = round((values - i % values) / 2)
            g = round((values - i % values) / 2)
            b = values
        elif turn == 5:
            r = round((i % values) / 2)
            g = round((i % values) / 2)
            b = values
        elif turn == 6:
            if i % values < values / 2:
                r = round((values / 2 + i % values))
                g = round((values / 2 + i % values))
                b = values
            else:
                r = values
                g = values
                b = round((3 / 2 * values - i % values))
        r = round(r / values * 255)
        g = round(g / values * 255)
        b = round(b / values * 255)
        lst.append((r, g, b))
    return lst
def rgb_to_hsl(color: tuple):
    r, g, b = color
    r /= 255
    g /= 255
    b /= 255
    cmax = max(r, g, b)
    cmin = min(r, g, b)
    delta = cmax - cmin
    h = 0
    l = (cmax + cmin) / 2
    if delta == 0:
        h = 0
    elif cmax == r:
        h = ((g - b) / delta) % 6
    elif cmax == g:
        h = ((b - r) / delta) + 2
    elif cmax == b:
        h = ((r - g) / delta) + 4
    h *= 60
    if delta == 0:
        s = 0
    else:
        s = delta / (1 - abs(2 * l - 1))
    return h, s, l
def hsl_to_rgb(color: tuple):
    h, s, l = color
    c = (1 - abs(2 * l - 1)) * s
    x = c * (1 - abs((h / 60) % 2 - 1))
    m = l - c / 2
    r = g = b = 0
    if 0 <= h < 60:
        r = c
        g = x
    elif 60 <= h < 120:
        r = x
        g = c
    elif 120 <= h < 180:
        g = c
        b = x
    elif 180 <= h < 240:
        g = x
        b = c
    elif 240 <= h < 300:
        r = x
        b = c
    elif 300 <= h < 360:
        r = c
        b = x
    r = round((r + m) * 255)
    g = round((g + m) * 255)
    b = round((b + m) * 255)
    return r, g, b
On saving the list values I obtained the expected Hues:
Now the main processing includes pixel-by-pixel conversion of color in this order:
Obtaining RGB
RGB --> HSL
Change value of hue to corresponding value in dues_hsl list
New HSL --> RGB
Set new RGB value at same coordinates in another array
This is repeated for every pixel in the image, and took about 58 seconds on a test image of dimensions 481 x 396 pixels
Input and output:
Code for the same:
defining.py
from PIL import Image
import numpy as np
from calculating import hues, dues
from calculating import rgb_to_hsl as hsl
from calculating import hsl_to_rgb as rgb
hues = hues()
dues = dues()
# Hues = human hues
# Dues = dog hues
hues_hsl = [hsl(i) for i in hues]
dues_hsl = [hsl(i) for i in dues]
img = np.array(Image.open('dog.png').convert('RGB'))
arr_blank = np.zeros(img.shape[0:3])
print(arr_blank.shape)
print(img.shape[0:3])
total = img.shape[0] * img.shape[1]
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        hsl_val = hsl(tuple(img[i, j]))
        h = dues_hsl[hues_hsl.index(min(hues_hsl, key=lambda x: abs(x[0] - hsl_val[0])))][0]
        pixel = np.array(rgb((h, hsl_val[1], hsl_val[2])))
        arr_blank[i, j, :] = pixel
        print(f'{i * img.shape[1] + j} / {total} --- {(i * img.shape[1] + j) / total * 100} %')
print(arr_blank)
data = Image.fromarray(arr_blank.astype('uint8'), 'RGB')
data.save('dog_color.png')
Conclusion:
After this I want to add a Gaussian blur filter too, post-conversion, in real time, but this is taking too long for just one frame. Is there a way the speed can be improved?
Machine info:
If this info is helpful: i7-10750H @ 2.6 GHz, SSD, 16 GB RAM
Thanks!
I had forgotten Pillow also does HSV just as well, so no need for OpenCV.
This executes in about 0.45 seconds on my machine.
from PIL import Image
import numpy as np
values = 2 ** 8 - 1
color_count = values * 6
def dog_hues():
    # ... from original post, removed for brevity ...
    return lst
# Convert the dog_hues() list into a 1-pixel-high image, then resize it to 256x1
hue_map_img = Image.new("RGB", (color_count, 1))
hue_map_img.putdata(dog_hues())
hue_map_img = hue_map_img.resize((256, 1), Image.LANCZOS)
# Get the hues out of it
hsv_array = np.array(hue_map_img.convert("HSV"))
hue_map = hsv_array[:, :, 0].flatten()
# Read in the dog, convert it to HSV
img = np.array(Image.open("dog.jpg").convert("HSV"))
# Remap hue
img[:, :, 0] = hue_map[img[:, :, 0]]
# Convert back to RGB and save
img = Image.fromarray(img, "HSV").convert("RGB")
img.save("dog_hsv.jpg")
1st remark: you can't really change colorspace like this. When you see a color that the human eye (and therefore human RGB image formats) interprets as yellow, like (255, 255, 0), you can't know whether it is made of a yellow frequency (570 nm for example) that excites both our red and green cones but not the blue ones, or of a mixture of red frequencies (690 nm for example) and green frequencies (530 nm), or of any other spectrum that leaves the red and green cones saturated (255, 255) and the blue one untouched (0).
And you need that information to deduce how the two dog cones are impacted.
In other words, there isn't any mapping between human color and dog color. In math words, there is a projection between real color space (∞-dimensional, a spectrum) and human color space (3D; to simplify: r, g, and b). There is another projection between real color space and dog colorspace (2D, also simplifying). But those projection axes are not included one in the other, so there is no projection between the 3D human color space and the 2D dog colorspace. There is no way to know how a dog sees a color with only the knowledge of how a human sees it; you need to know the real color. You could do this with hyperspectral cameras (and apply both projections to compute both the human RGB image and the dog YB image). And that assumes the quite naive (but correct to a first approximation) idea that those colors follow elementary college-level linear algebra, which, in reality, they don't exactly.
That being said, PIL- or OpenCV-based solutions are a solution. But more generally speaking, if you don't trust PIL or OpenCV, or any existing library's color model, and really want to invent your own wheel (I respect that; there is no better way to understand things than to reinvent the wheel), then one rule you must abide by is: never, ever iterate over pixels. If you do that, you have lost the performance match. Python is very, very slow. The only reason it is still a popular language, and why there are still fast programs made with Python, is that Python coders do whatever it takes so that the computation-heavy loops (in image processing, those are the loops over the pixels) are not actually executed in Python.
So you must rely on numpy to perform your operation on all pixels, and not write the for loops yourself.
For example, here is a rewrite of your rgb_to_hsl doing batch computation with numpy. That is, rgb_to_hsl is not meant to be called with a single color, but with a whole 2D array of colors, that is, an image:
import numpy as np

def rgb_to_hsl(image):
    # rgb holds the r, g, b channels between 0 and 1 (as you did for individual
    # r, g, b variables, but it is easier (see below) to keep them as a single
    # array). rgb is not just a triplet (unlike your r, g, b) but a 2d-array
    # of triplets (so a 3d-array).
    rgb = image / 255
    # Likewise, cmax, cmin, delta are not scalars as in your code, but
    # 2d-arrays of such scalars.
    cmax = rgb.max(axis=2)  # axis=2 means that axes 0 and 1 are kept, and max
                            # is computed along axis 2, that is along the 3
                            # values of each triplet. So rgb is a HxWx3
                            # 3d-array (axis 0 = y, axis 1 = x, axis 2 = color
                            # channel). cmax is a HxW 2d-array.
    cmin = rgb.min(axis=2)  # likewise
    delta = cmax - cmin  # same code. But this is done on all HxW cmax and cmin
    h = np.zeros_like(delta)  # 2d-array of 0
    l = (cmax + cmin) / 2     # 2d-array of (cmax+cmin)/2
    # Here comes a trickier part. We need to separate cases, and do the
    # computation in each subset concerning those cases.
    case1 = delta == 0
    h[case1] = 0  # In reality, we could skip this, since h is already 0 everywhere
    # The == tests are done against the channels of rgb (your scalar r, g, b
    # don't exist here), and & ~case1 keeps the gray pixels out of the
    # divisions by delta, mirroring your elif chain.
    case2 = (cmax == rgb[..., 0]) & ~case1
    h[case2] = (rgb[case2, 1] - rgb[case2, 2]) / delta[case2] % 6
    case3 = (cmax == rgb[..., 1]) & ~case1 & ~case2
    h[case3] = (rgb[case3, 2] - rgb[case3, 0]) / delta[case3] + 2
    case4 = (cmax == rgb[..., 2]) & ~case1 & ~case2 & ~case3
    h[case4] = (rgb[case4, 0] - rgb[case4, 1]) / delta[case4] + 4
    h *= 60  # Same code, applied to all HxW values of h
    s = np.zeros_like(h)
    s[case1] = 0  # same remark. I just mimic your code as much as possible,
                  # but that is already the default value
    s[~case1] = delta[~case1] / (1 - abs(2 * l[~case1] - 1))
    # ~case1 is the opposite of case1. So, equivalent of the else in your code.
    # Returns 3 2d HxW arrays for h, s and l.
    return h, s, l
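A minimal usage sketch of that batch version (my addition, reusing the dog.png name from the question): one call converts the whole image, with no Python-level pixel loop.

import numpy as np
from PIL import Image

image = np.array(Image.open("dog.png").convert("RGB"))
h, s, l = rgb_to_hsl(image)  # three HxW arrays in one vectorized call
print(h.shape, s.shape, l.shape)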
I have an OpenCV image, as usual in BGR color space, and I need to convert it to CMYK. I searched online but found basically only (slight variations of) the following approach:
import numpy

def bgr2cmyk(cv2_bgr_image):
    bgrdash = cv2_bgr_image.astype(float) / 255.0
    # Calculate K as (1 - whatever is biggest out of Rdash, Gdash, Bdash)
    K = 1 - numpy.max(bgrdash, axis=2)
    with numpy.errstate(divide="ignore", invalid="ignore"):
        # Calculate C
        C = (1 - bgrdash[..., 2] - K) / (1 - K)
        C = 255 * C
        C = C.astype(numpy.uint8)
        # Calculate M
        M = (1 - bgrdash[..., 1] - K) / (1 - K)
        M = 255 * M
        M = M.astype(numpy.uint8)
        # Calculate Y
        Y = (1 - bgrdash[..., 0] - K) / (1 - K)
        Y = 255 * Y
        Y = Y.astype(numpy.uint8)
    return (C, M, Y, K)
This works fine; however, it feels quite slow: for an 800 x 600 px image it takes about 30 ms on my i7 CPU. Typical operations with cv2 like thresholding take only a few ms for the same image, so since this is all numpy I was expecting this CMYK conversion to be faster.
However, I haven't found anything that makes this significantly faster. There is a conversion to CMYK via PIL.Image, but the resulting channels do not look as they do with the algorithm listed above.
Any other ideas?
There are several things you should do:
shake the math
use integer math where possible
optimize beyond what numpy can do
Shaking the math
Given
RGB' = RGB / 255
K = 1 - max(RGB')
C = (1-K - R') / (1-K)
M = (1-K - G') / (1-K)
Y = (1-K - B') / (1-K)
You see what you can factor out.
RGB' = RGB / 255
J = max(RGB')
K = 1 - J
C = (J - R') / J
M = (J - G') / J
Y = (J - B') / J
Integer math
Don't normalize to [0,1] for these calculations. The max() can be done on integers. The differences can too. K can be calculated entirely with integer math.
J = max(RGB)
K = 255 - J
C = 255 * (J - R) / J
M = 255 * (J - G) / J
Y = 255 * (J - B) / J
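Before reaching for Numba, that integer form can already be expressed in plain numpy; a sketch under the assumption of a uint8 BGR input (bgr2cmyk_int is my name):

import numpy as np

def bgr2cmyk_int(bgr):
    bgr16 = bgr.astype(np.uint16)   # room for 255 * diff (max 65025)
    J = bgr16.max(axis=2)
    Jsafe = np.maximum(J, 1)        # avoid 0/0 for pure black pixels
    K = (255 - J).astype(np.uint8)
    C = (255 * (J - bgr16[..., 2]) // Jsafe).astype(np.uint8)
    M = (255 * (J - bgr16[..., 1]) // Jsafe).astype(np.uint8)
    Y = (255 * (J - bgr16[..., 0]) // Jsafe).astype(np.uint8)
    return (C, M, Y, K)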
Numba
import numba
Numba will optimize that code beyond simply using numpy library routines. It will also do the parallelization as indicated. Choosing the numpy error model and allowing fastmath will cause division by zero to not throw an exception or warning, but also make the math a little faster.
Both variants significantly outperform a plain Python/numpy solution. Much of that is due to better use of CPU registers and caches, rather than intermediate arrays, as is usual with numpy.
First variant: ~1.9 ms
@numba.njit(parallel=True, error_model="numpy", fastmath=True)
def bgr2cmyk_v4(bgr_img):
    bgr_img = np.ascontiguousarray(bgr_img)
    (height, width) = bgr_img.shape[:2]
    CMYK = np.empty((height, width, 4), dtype=np.uint8)
    for i in numba.prange(height):
        for j in range(width):
            B, G, R = bgr_img[i, j]
            J = max(R, G, B)
            K = np.uint8(255 - J)
            C = np.uint8(255 * (J - R) / J)
            M = np.uint8(255 * (J - G) / J)
            Y = np.uint8(255 * (J - B) / J)
            CMYK[i, j] = (C, M, Y, K)
    return CMYK
Thanks to Cris Luengo for pointing out further refactoring potential (pulling out 255/J), leading to a second variant. It takes ~1.6 ms
@numba.njit(parallel=True, error_model="numpy", fastmath=True)
def bgr2cmyk_v5(bgr_img):
    bgr_img = np.ascontiguousarray(bgr_img)
    (height, width) = bgr_img.shape[:2]
    CMYK = np.empty((height, width, 4), dtype=np.uint8)
    for i in numba.prange(height):
        for j in range(width):
            B, G, R = bgr_img[i, j]
            J = np.uint8(max(R, G, B))
            Jinv = np.uint16((255 * 256) // J)  # fixed point math
            K = np.uint8(255 - J)
            C = np.uint8(((J - R) * Jinv) >> 8)
            M = np.uint8(((J - G) * Jinv) >> 8)
            Y = np.uint8(((J - B) * Jinv) >> 8)
            CMYK[i, j] = (C, M, Y, K)
    return CMYK
This fixed-point math causes floor rounding. For round-to-nearest, the expression must be ((J - R) * Jinv + 128) >> 8. That costs a bit more time (~1.8 ms).
What else?
I think that numba/LLVM didn't apply SIMD here. Some investigation revealed that the Loop Vectorizer doesn't like any of the instances it was asked to consider.
An OpenCL kernel might be even faster. OpenCL can run on CPUs.
Numba can also use CUDA.
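On that last point, here is a rough, untested sketch of what the CUDA route could look like with numba.cuda (the per-pixel body mirrors bgr2cmyk_v4; the kernel and wrapper names are mine, and this assumes a CUDA-capable GPU):

import numpy as np
from numba import cuda, int32

@cuda.jit
def bgr2cmyk_kernel(bgr, cmyk):
    i, j = cuda.grid(2)  # one thread per pixel
    if i < bgr.shape[0] and j < bgr.shape[1]:
        B = int32(bgr[i, j, 0])
        G = int32(bgr[i, j, 1])
        R = int32(bgr[i, j, 2])
        J = max(R, max(G, B))
        cmyk[i, j, 3] = 255 - J
        if J > 0:
            cmyk[i, j, 0] = 255 * (J - R) // J
            cmyk[i, j, 1] = 255 * (J - G) // J
            cmyk[i, j, 2] = 255 * (J - B) // J
        else:
            cmyk[i, j, 0] = 0
            cmyk[i, j, 1] = 0
            cmyk[i, j, 2] = 0

def bgr2cmyk_cuda(bgr_img):
    h, w = bgr_img.shape[:2]
    d_out = cuda.device_array((h, w, 4), dtype=np.uint8)
    threads = (16, 16)
    blocks = ((h + 15) // 16, (w + 15) // 16)
    bgr2cmyk_kernel[blocks, threads](cuda.to_device(bgr_img), d_out)
    return d_out.copy_to_host()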
I would start by profiling which part is the bottleneck.
E.g., how fast is it without the / (1 - K) calculation?
Precalculating 1 / (1 - K) might help; even precalculating 255 / (1 - K) is possible:
K = 1 - numpy.max(bgrdash, axis=2)
kRez255 = 255 / (1 - K)
with numpy.errstate(divide="ignore", invalid="ignore"):
    # Calculate C
    C = (1 - bgrdash[..., 2] - K) * kRez255
    C = C.astype(numpy.uint8)
    # Calculate M
    M = (1 - bgrdash[..., 1] - K) * kRez255
    M = M.astype(numpy.uint8)
    # Calculate Y
    Y = (1 - bgrdash[..., 0] - K) * kRez255
    Y = Y.astype(numpy.uint8)
return (C, M, Y, K)
But only profiling can show if it is the calculation at all which slows down the conversion.
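To check that, a throwaway harness like this (my sketch, timing the question's bgr2cmyk on a random 800 x 600 test image) is enough:

import timeit
import numpy as np

img = np.random.randint(0, 256, (600, 800, 3), dtype=np.uint8)
print(timeit.timeit(lambda: bgr2cmyk(img), number=50) / 50, "s per call")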
I am working on a 2D collider system that breaks up shapes into one possible primitive: impenetrable segments that are defined by two points. To provide collision detection for this system, I am using a static collision detection approach that calculates the distance between the edge of one segment and the currently handled segment (point/line distance) once every frame. If the distance is too small, a collision is triggered during that frame. This works fine but has the known problem of tunneling if one or more bodies exhibit high speeds. So I am tinkering with alternatives.
Now I want to introduce continuous collision detection (CCD) that operates on dynamic points / dynamic segments. My problem is: I don't exactly know how. I do know how to do continuous collision between two moving points, a moving point and a static segment but not how to do CCD between a moving point (defined by point P) and a moving segment (defined by points U and V, both can move completely freely).
illustration of problem
I have seen similar questions being asked on SO and other platforms, but not with these exact requirements:
both point and segment are moving
segment can be rotating and stretching (because U and V are moving freely)
collision time and collision point need to be found accurately between two frames (CCD, no static collision test)
I prefer a mathematically perfect solution, if possible (no iterative approximation algorithms, no swept volumes)
note: the swept line shape will not always be a convex polygon, because of the freedom of the U,V points (see image)
note: testing for a collision with the swept volume test is inaccurate because a collision point with the polygon does not mean a collision point in the actual movement (see image, the point will have left the polygon once the actual segment has crossed the trajectory of the point)
So far I came up with the following approach, given:
sP (P at start of frame),
eP (P at end of frame),
sU (U at start of frame),
eU (U at end of frame),
sV (V at start of frame),
eV (V at end of frame)
Question: Will they collide? If yes, when and where?
To answer the question of "if", I found this paper to be useful: https://www.cs.ubc.ca/~rbridson/docs/brochu-siggraph2012-ccd.pdf (section 3.1) but I could not derive the answers to "when" and "where". I also found an alternative explanation of the problem here: http://15462.courses.cs.cmu.edu/fall2018/article/13 (3rd Question)
Solution:
Model temporal trajectory of each point during a frame as linear movement (line trajectory for 0 <= t <= 1)
P(t) = sP * (1 - t) + eP * t
U(t) = sU * (1 - t) + eU * t
V(t) = sV * (1 - t) + eV * t
A point on the segment is modeled the same way (0 <= a <= 1 represents a location on the segment defined by U and V):
UV(a, t) = U(t) * (1 - a) + V(t) * a
Model collision by equating point and segment equations:
P(t) = UV(a, t)
P(t) = U(t) * (1 - a) + V(t) * a
Derive a function for the vector from point P to a point on the segment (see picture of F):
F(a, t) = P(t) - (1 - a) * U(t) - a * V(t)
To now find a collision, one needs to find a and t, so that F(a, t) = (0, 0) and a,t in [0, 1]. This can be modeled as a root finding problem with 2 variables.
Insert the temporal trajectory equations into F(a, t):
F(a, t) = (sP * (1 - t) + eP * t) - (1 - a) * (sU * (1 - t) + eU * t) - a * (sV * (1 - t) + eV * t)
Separate the temporal trajectory equations by dimension (x and y):
Fx(a, t) = (sP.x * (1 - t) + eP.x * t) - (1 - a) * (sU.x * (1 - t) + eU.x * t) - a * (sV.x * (1 - t) + eV.x * t)
Fy(a, t) = (sP.y * (1 - t) + eP.y * t) - (1 - a) * (sU.y * (1 - t) + eU.y * t) - a * (sV.y * (1 - t) + eV.y * t)
Now we have two equations (Fx, Fy) and two variables (a, t) that we want to solve for, so we should be able to use a solver to get a and t and only then check if they lie within [0, 1]... right?
When I plug this into Python sympy to solve:
from sympy import symbols, Eq, solve

def main():
    sxP, syP, exP, eyP = symbols("sxP syP exP eyP")
    sxU, syU, exU, eyU = symbols("sxU syU exU eyU")
    sxV, syV, exV, eyV = symbols("sxV syV exV eyV")
    a, t = symbols("a t")
    eq1 = Eq((sxP * (1 - t) + exP * t)
             - (1 - a) * (sxU * (1 - t) + exU * t)
             - a * (sxV * (1 - t) + exV * t), 0)
    eq2 = Eq((syP * (1 - t) + eyP * t)
             - (1 - a) * (syU * (1 - t) + eyU * t)
             - a * (syV * (1 - t) + eyV * t), 0)
    sol = solve((eq1, eq2), (a, t), dict=True)
    print(sol)

if __name__ == "__main__":
    main()
I get a solution that is HUGE in size, and it takes sympy about 5 minutes to evaluate.
I cannot use such a big expression in my actual engine code, and this solution just does not seem right to me.
What I want to know is:
Am I missing something here? I think this problem seems rather easy to understand, but I cannot figure out a mathematically accurate way to find a time (t) and point (a) of impact solution for dynamic points / dynamic segments. Any help is greatly appreciated, even if someone tells me that this is not possible to do like that.
TLDR
I did read "...like 5 minutes to evaluate...".
No way, too long; this needs to be a real-time solution for many lines and points.
Sorry, this is not a complete answer (I did not rationalize and simplify the equation) that will find the point of intercept; that I leave to you.
Also, I can see several approaches to the solution, as it revolves around a triangle (see image) that, when flat, is the solution. The approach below finds the point in time when the long side of the triangle is equal to the sum of the shorter two.
Solving for u (time)
This can be done as a simple quadratic with the coefficients derived from the 3 starting points and the vector over unit time of each point. Solving for u:
The image below gives more details.
The point P is the start position of the point.
The points L1, L2 are the start points of the line ends.
The vector V1 is for the point, over unit time (along the green line).
The vectors V2, V3 are for the line ends over unit time.
u is the unit time.
A is the point (blue), and B and C are the line end points (red).
There may be a point in time u where A is on the line B,C. At this point in time the lengths of the lines AB (as a) and AC (as c) sum to equal the length of the line BC (as b) (orange line).
That means that when b - (a + c) == 0, the point is on the line. In the image the lengths are squared, as this simplifies it a little: b^2 - (a^2 + c^2) == 0.
At the bottom of the image is the equation (quadratic) in terms of u, P, L1, L2, V1, V2, V3.
That equation needs to be rearranged such that you get (???)u^2 + (???)u + (???) = 0.
Sorry, doing that manually is very tedious and very prone to mistakes. I don't have the tools at hand to do that, nor do I use Python, so the math lib you are using is unknown to me. However, it should be able to help you find how to calculate the coefficients for (???)u^2 + (???)u + (???) = 0.
Update
Ignore most of the above, as I made a mistake: b - (a + c) == 0 is not the same as b^2 - (a^2 + c^2) == 0. The first one is the one needed, and that is a problem when dealing with radicals (note that there could still be a solution using a + bi == sqrt(a^2 + b^2), where i is the imaginary number).
Another solution
So I explored the other options.
The simplest has a slight flaw: it will return the time of intercept, but that result must be validated, as it will also return the time when the point intercepts the infinite line rather than the line segment BC.
Thus when a result is found, you test it by dividing the dot product of the found point and line segment by the square of the line segment's length. See the function isPointOnLine in the test snippet.
To solve I use the fact that the cross product of the line BC and the vector from B to A will be 0 when the point is on the line.
Some renaming
Using the image above I renamed the variables so that it is easier for me to do all the fiddly bits.
/*
point P is {a,b}
point L1 is {c,d}
point L2 is {e,f}
vector V1 is {g,h}
vector V2 is {i,j}
vector V3 is {k,l}
Thus for points A,B,C over time u */
Ax = (a+g*u)
Ay = (b+h*u)
Bx = (c+i*u)
By = (d+j*u)
Cx = (e+k*u)
Cy = (f+l*u)
/* Vectors BA and BC at u */
Vbax = ((a+g*u)-(c+i*u))
Vbay = ((b+h*u)-(d+j*u))
Vbcx = ((e+k*u)-(c+i*u))
Vbcy = ((f+l*u)-(d+j*u))
/*
thus Vbax * Vbcy - Vbay * Vbcx == 0 at intercept
*/
This gives the quadratic
0 = ((a+g*u)-(c+i*u)) * ((f+l*u)-(d+j*u)) - ((b+h*u)-(d+j*u)) * ((e+k*u)-(c+i*u))
Rearranging we get
0 = -((i*l)-(h*k)+g*l+i*h+(i+k)*j-(g+i)*j)*u* u -(d*g-c*l-k*b-h*e+l*a+g*f+i*b+c*h+(i+k)*d+(c+e)*j-((f+d)*i)-((a+c)*j))*u +(c+e)*d-((a+c)*d)+a*f-(c*f)-(b*e)+c*b
The coefficients are thus
A = -((i*l)-(h*k)+g*l+i*h+(i+k)*j-(g+i)*j)
B = -(d*g-c*l-k*b-h*e+l*a+g*f+i*b+c*h+(i+k)*d+(c+e)*j-((f+d)*i)-((a+c)*j))
C = (c+e)*d-((a+c)*d)+a*f-(c*f)-(b*e)+c*b
We can solve using the quadratic formula (see image top right).
Note that there could be two solutions. In the example I ignored the second solution. However as the first may not be on the line segment you need to keep the second solution if within the range 0 <= u <= 1 just in case the first fails. You also need to validate that result.
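If you do have Python at hand, sympy can do the tedious rearranging for you. A sketch (my addition) that expands the cross-product condition with the single-letter names above and reads off the u coefficients, which you can check against A, B and C:

from sympy import symbols, Poly

a, b, c, d, e, f, g, h, i, j, k, l, u = symbols("a b c d e f g h i j k l u")

# Vbax * Vbcy - Vbay * Vbcx == 0 at the intercept
cross = (((a + g*u) - (c + i*u)) * ((f + l*u) - (d + j*u))
         - ((b + h*u) - (d + j*u)) * ((e + k*u) - (c + i*u)))
print(Poly(cross, u).all_coeffs())  # [A, B, C] of A*u**2 + B*u + C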
Testing
To avoid errors I had to test the solution
Below is a snippet that generates a random pair of lines and then generates random point paths until an intercept is found.
The functions of interest are
movingLineVPoint, which returns the unit time of the first intercept, if any.
isPointOnLine, to validate the result.
const ctx = canvas.getContext("2d");
canvas.addEventListener("click", test);
const W = 256, H = W, D = (W ** 2 * 2) ** 0.5;
canvas.width = W; canvas.height = H;
const rand = (m, M) => Math.random() * (M - m) + m;
const Tests = 300;
var line1, line2, path, count = 0;
setTimeout(test, 0);

// creating P point L line
const P = (x, y) => ({x, y, get arr() { return [this.x, this.y] }});
const L = (l1, l2) => ({l1, l2, vec: P(l2.x - l1.x, l2.y - l1.y), get arr() { return [this.l1, this.l2] }});
const randLine = () => L(P(rand(0, W), rand(0, H)), P(rand(0, W), rand(0, H)));
const isPointOnLine = (p, l) => {
    const x = p.x - l.l1.x;
    const y = p.y - l.l1.y;
    const u = (l.vec.x * x + l.vec.y * y) / (l.vec.x * l.vec.x + l.vec.y * l.vec.y);
    return u >= 0 && u <= 1;
}

// See answer illustration for names
// arguments in order Px,Py,L1x,l1y,l2x,l2y,V1x,V1y,V2x,V2y,V3x,V3y
function movingLineVPoint(a, b, c, d, e, f, g, h, i, j, k, l) {
    var A = -(i*l) - (h*k) + g*l + i*h + (i+k)*j - (g+i)*j;
    var B = -d*g - c*l - k*b - h*e + l*a + g*f + i*b + c*h + (i+k)*d + (c+e)*j - ((f+d)*i) - ((a+c)*j)
    var C = +(c+e)*d - ((a+c)*d) + a*f - (c*f) - (b*e) + c*b
    // Find roots if any. Could be up to 2
    // Using the smallest root >= 0 and <= 1
    var u, D, u1, u2;
    // if A is tiny we can ignore
    if (Math.abs(A) < 1e-6) {
        if (B !== 0) {
            u = -C / B;
            if (u < 0 || u > 1) { return } // !!!! no solution !!!!
        } else { return }                  // !!!! no solution !!!!
    } else {
        B /= A;
        D = B * B - 4 * (C / A);
        if (D > 0) {
            D **= 0.5;
            u1 = 0.5 * (-B + D);
            u2 = 0.5 * (-B - D);
            if ((u1 < 0 || u1 > 1) && (u2 < 0 || u2 > 1)) { return } // !!!! no solution !!!!
            if (u1 < 0 || u1 > 1) { u = u2 }      // is first out of range
            else if (u2 < 0 || u2 > 1) { u = u1 } // is second out of range
            else if (u1 < u2) { u = u1 }          // first is smallest
            else { u = u2 }
        } else if (D === 0) {
            u = 0.5 * -B;
            if (u < 0 || u > 1) { return } // !!!! no solution !!!!
        } else { return }                  // !!!! no solution !!!!
    }
    return u;
}

function test() {
    if (count > 0) { return }
    line1 = randLine();
    line2 = randLine();
    count = Tests;
    subTest();
}

function subTest() {
    path = randLine();
    ctx.clearRect(0, 0, W, H);
    drawLines();
    const u = movingLineVPoint(
        path.l1.x, path.l1.y,
        line1.l1.x, line1.l1.y,
        line2.l1.x, line2.l1.y,
        path.vec.x, path.vec.y,
        line1.vec.x, line1.vec.y,
        line2.vec.x, line2.vec.y
    );
    if (u !== undefined) { // intercept found maybe
        pointAt = P(path.l1.x + path.vec.x * u, path.l1.y + path.vec.y * u);
        lineAt = L(
            P(line1.l1.x + line1.vec.x * u, line1.l1.y + line1.vec.y * u),
            P(line2.l1.x + line2.vec.x * u, line2.l1.y + line2.vec.y * u)
        );
        const isOn = isPointOnLine(pointAt, lineAt);
        if (isOn) {
            drawResult(pointAt, lineAt);
            count = 0;
            info.textContent = "Found at: u= " + u.toFixed(4) + ". Click for another";
            return;
        }
    }
    setTimeout((--count < 0 ? test : subTest), 18);
}

function drawLine(line, col = "#000", lw = 1) {
    ctx.lineWidth = lw;
    ctx.strokeStyle = col;
    ctx.beginPath();
    ctx.lineTo(...line.l1.arr);
    ctx.lineTo(...line.l2.arr);
    ctx.stroke();
}

function markPoint(p, size = 3, col = "#000", lw = 1) {
    ctx.lineWidth = lw;
    ctx.strokeStyle = col;
    ctx.beginPath();
    ctx.arc(...p.arr, size, 0, Math.PI * 2);
    ctx.stroke();
}

function drawLines() {
    drawLine(line1);
    drawLine(line2);
    markPoint(line1.l1);
    markPoint(line2.l1);
    drawLine(path, "#0B0", 1);
    markPoint(path.l1, 2, "#0B0", 2);
}

function drawResult(pointAt, lineAt) {
    ctx.clearRect(0, 0, W, H);
    drawLines();
    markPoint(lineAt.l1, 2, "red", 1.5);
    markPoint(lineAt.l2, 2, "red", 1.5);
    markPoint(pointAt, 2, "blue", 3);
    drawLine(lineAt, "#BA0", 2);
}
div {position: absolute; top: 10px; left: 12px}
canvas {border: 2px solid black}
<canvas id="canvas" width="1024" height="1024"></canvas>
<div><span id="info">Click to start</span></div>
There are two parts of @Blindman67's solution I don't understand:
Solving for b^2 - (a^2 + c^2) = 0 instead of sqrt(b^2) - (sqrt(a^2) + sqrt(c^2)) = 0
The returned timestamp being clamped to the range [0, 1]
Maybe I'm missing something obvious, but in any case, I designed a solution that addresses these concerns:
All quadratic terms are solved for, not just one
The returned timestamp has no limits
sqrt(b^2) - (sqrt(a^2) + sqrt(c^2)) = 0 is solved for, instead of b^2 - (a^2 + c^2) = 0
Feel free to recommend ways this could be optimized:
import numpy as np
from scipy import optimize

# pnt, crt_1, and crt_2 are points, each with x,y and dx,dy attributes
# returns a list of timestamps for which pnt is on the segment
# whose endpoints are crt_1 and crt_2
def colinear_points_collision(pnt, crt_1, crt_2):
    a, b, c, d = pnt.x, pnt.y, pnt.dx, pnt.dy
    e, f, g, h = crt_1.x, crt_1.y, crt_1.dx, crt_1.dy
    i, j, k, l = crt_2.x, crt_2.y, crt_2.dx, crt_2.dy
    m = a - e
    n = c - g
    o = b - f
    p = d - h
    q = a - i
    r = c - k
    s = b - j
    u = d - l
    v = e - i
    w = g - k
    x = f - j
    y = h - l
    # Left-hand expansion
    r1 = n * n + p * p
    r2 = 2 * o * p + 2 * m * n
    r3 = m * m + o * o
    r4 = r * r + u * u
    r5 = 2 * q * r + 2 * s * u
    r6 = q * q + s * s
    coef_a = 4 * r1 * r4                        # t^4 coefficient
    coef_b = 4 * (r1 * r5 + r2 * r4)            # t^3 coefficient
    coef_c = 4 * (r1 * r6 + r2 * r5 + r3 * r4)  # t^2 coefficient
    coef_d = 4 * (r2 * r6 + r3 * r5)            # t coefficient
    coef_e = 4 * r3 * r6                        # constant
    # Right-hand expansion
    q1 = w * w + y * y - n * n - p * p - r * r - u * u
    q2 = 2 * (v * w + x * y - m * n - o * p - q * r - s * u)
    q3 = v * v + x * x - m * m - o * o - q * q - s * s
    coef1 = q1 * q1                # t^4 coefficient
    coef2 = 2 * q1 * q2            # t^3 coefficient
    coef3 = 2 * q1 * q3 + q2 * q2  # t^2 coefficient
    coef4 = 2 * q2 * q3            # t coefficient
    coef5 = q3 * q3                # constant
    # Moves all the coefficients onto one side of the equation to get
    # at^4 + bt^3 + ct^2 + dt + e
    # solve for possible values of t
    p = np.array([coef1 - coef_a, coef2 - coef_b, coef3 - coef_c, coef4 - coef_d, coef5 - coef_e])

    def fun(x):
        return p[0] * x**4 + p[1] * x**3 + p[2] * x**2 + p[3] * x + p[4]

    # could use np.roots, but I found this to be more numerically stable
    sol = optimize.root(fun, [0, 0], tol=0.002)
    r = sol.x
    uniques = np.unique(np.round(np.real(r[np.isreal(r)]), 4))
    final = []
    for r in uniques[uniques > 0]:
        if point_between(e + g * r, f + h * r, i + k * r, j + l * r, a + c * r, b + d * r):
            final.append(r)
    return np.array(final)

# Returns true if the point (px,py) is between the endpoints
# of the line segment whose endpoints lay at (ax,ay) and (bx,by)
def point_between(ax, ay, bx, by, px, py):
    # colinear already checked above, this checks between the other two.
    return ((min(ax, bx) <= px <= max(ax, bx) or abs(ax - bx) < 0.001)
            and (min(ay, by) <= py <= max(ay, by) or abs(ay - by) < 0.001))
An example (L1 and L2 are endpoints of line):
P = (0,0) with velocity (0, +1)
L1 = (-1,2) with velocity (0, -1)
L2 = (1,2) with velocity (0, -1)
The returned result would be t=1, because after 1 time step, P will be one unit higher, and both endpoints of the line will each be one unit lower, therefore, the point intersects the segment at t=1.
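A hypothetical driver for that example (the SimpleNamespace objects just stand in for whatever point class you use, carrying the x, y, dx, dy attributes the function expects):

from types import SimpleNamespace

P  = SimpleNamespace(x=0,  y=0, dx=0, dy=1)   # point moving up
L1 = SimpleNamespace(x=-1, y=2, dx=0, dy=-1)  # left endpoint moving down
L2 = SimpleNamespace(x=1,  y=2, dx=0, dy=-1)  # right endpoint moving down

print(colinear_points_collision(P, L1, L2))   # expect a root near t = 1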
Anyone know of a good python algorithm for converting HSV color to RGB (and vice versa) that doesn't depend on any external modules? I'm working on some animation generation code and want to support the HSV colorspace, but it's running on a Raspberry Pi and I'm trying to avoid any floating point.
This site takes you through the steps, including how to do it using integer division. Here is a Python port of the RGB-to-HSV function described there:
def RGB_2_HSV(RGB):
    ''' Converts an integer RGB tuple (value range from 0 to 255) to an HSV tuple '''
    # Unpack the tuple for readability
    R, G, B = RGB
    # Compute the H value by finding the maximum of the RGB values
    RGB_Max = max(RGB)
    RGB_Min = min(RGB)
    # Compute the value
    V = RGB_Max
    if V == 0:
        H = S = 0
        return (H, S, V)
    # Compute the saturation value
    S = 255 * (RGB_Max - RGB_Min) // V
    if S == 0:
        H = 0
        return (H, S, V)
    # Compute the Hue
    if RGB_Max == R:
        H = 0 + 43 * (G - B) // (RGB_Max - RGB_Min)
    elif RGB_Max == G:
        H = 85 + 43 * (B - R) // (RGB_Max - RGB_Min)
    else:  # RGB_Max == B
        H = 171 + 43 * (R - G) // (RGB_Max - RGB_Min)
    return (H, S, V)
Which gives correct results when compared to the colorsys functions
import colorsys
RGB = (127, 127, 127)
Converted_2_HSV = RGB_2_HSV(RGB)
Verify_RGB_2_HSV = colorsys.rgb_to_hsv(RGB[0], RGB[1], RGB[2])
print Converted_2_HSV
>>> (0, 0, 127)
print Verify_RGB_2_HSV # multiplied by 255 to bring it into the same scale
>>> (0.0, 0.0, 127.5)
And you can check that the output is still in fact an integer
type(Converted_2_HSV[0])
>>> <type 'int'>
Now for the reverse function. The original source can be found here, and here is the Python port.
def HSV_2_RGB(HSV):
    ''' Converts an integer HSV tuple (value range from 0 to 255) to an RGB tuple '''
    # Unpack the HSV tuple for readability
    H, S, V = HSV
    # Check if the color is Grayscale
    if S == 0:
        R = V
        G = V
        B = V
        return (R, G, B)
    # Make hue 0-5
    region = H // 43
    # Find remainder part, make it from 0-255
    remainder = (H - (region * 43)) * 6
    # Calculate temp vars, doing integer multiplication
    P = (V * (255 - S)) >> 8
    Q = (V * (255 - ((S * remainder) >> 8))) >> 8
    T = (V * (255 - ((S * (255 - remainder)) >> 8))) >> 8
    # Assign temp vars based on color cone region
    if region == 0:
        R = V
        G = T
        B = P
    elif region == 1:
        R = Q
        G = V
        B = P
    elif region == 2:
        R = P
        G = V
        B = T
    elif region == 3:
        R = P
        G = Q
        B = V
    elif region == 4:
        R = T
        G = P
        B = V
    else:
        R = V
        G = P
        B = Q
    return (R, G, B)
And we can verify the result in the same way as before
integer_HSV = (127, 127, 127)
Converted_2_RGB = HSV_2_RGB(integer_HSV)
Verify_HSV_2_RGB = colorsys.hsv_to_rgb(0.5, 0.5, 0.5)
print Converted_2_RGB
>>> (63, 127, 124)
print type(Converted_2_RGB[0])
>>> <type 'int'>
print Verify_HSV_2_RGB # multiplied these by 255 so they are on the same scale
>>> (63.75, 127.5, 127.5)
The integer arithmetic does introduce some errors, however depending on the application these might be ok.
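A quick way to see how large those errors get (my sketch): round-trip a grid of colors through both functions above and track the worst per-channel deviation.

worst = 0
for r in range(0, 256, 17):
    for g in range(0, 256, 17):
        for b in range(0, 256, 17):
            rr, gg, bb = HSV_2_RGB(RGB_2_HSV((r, g, b)))
            worst = max(worst, abs(rr - r), abs(gg - g), abs(bb - b))
print(worst)  # worst-case per-channel round-trip error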
So I am using a third-party library which is not very well documented. It has a method which takes a picture with the camera, and this is what it returns:
'E{\x7fM{\x7f;\x89\x89:\x89\x89=\x81\x87<\x81\x87O\x90\x7fJ\x90\x7fB\x87\x80I\x87\x80<{\x81={\x81A\x81\x82A\x81\x82E\x81\x81:\x81\x818\x80\x81?\x80\x81?\x8c\x85C\x8c\x85Dw\x84Kw\x84K\x81}H\x81}S\x82|R\x82|N\x88xS\x88xP\x87|P\x87|H\x83}H\x83}J\x83|F\x83|S{\x80P{\x80G~zH~zDx\x7fDx\x7fI\x7f\x80M\x7f\x80I\x82yK\x82yH\x83\x80H\x83\x80K\x84\x80L\x84\x80K\x82|H\x82|G\x83\x83G\x83\x83M\x81\x80M\x81\x80K\x83~F\x83~H\x81~L\x81~N\x85|J\x85|B\x84\x82I\x84\x82K\x84\x7fJ\x84\x7fG\x83\x80F\x83\x80B~\x81G~\x81E}~G}~D}\x81B}\x81I|\x84I|\x84I\x82\x7fG\x82\x7fG\x80~E\x80~Iy\x81Jy\x81H|\x82M|\x82L\x81\x82I\x81\x82Gx\x82Hx\x82Ez\x7fGz\x7fL|\x81N|\x81G\x82\x80K\x82\x80L\x81}P\x81}J\x82\x7fH\x82\x7fCz|Dz|K}\x7fH}\x7fDs|Es|L\x83\x81I\x83\x81HxzHxzJ{\x83F{\x83G\x84\x81F\x84\x81I\x88\x85G\x88\x85Cu\x83#u\x83H}\x83D}\x83<u\x80;u\x80C\x88{C\x88{A\x7f\x82E\x7f\x82D\x84\x81C\x84\x81A}\x87A}\x87>|\x7fA|\x7fA}\x82;}\x82D\x83\x80?\x83\x80#\x80\x7fB\x80\x7fB\x80\x85A\x80\x85#u\x88>u\x888~\x848~\x84?w|=w|9|\x7f9|\x7f:\x84\x81;\x84\x81:~\x83;~\x836u\x87>u\x879{\x8d:{\x8d;\x7f\x86;\x7f\x86;y\x834y\x836\x82\x8c;\x82\x8c<y\x8b5y\x8bI~~M~~>\x88\x84:\x88\x84A\x81\x889\x81\x88B~\x81E~\x81A{\x81#{\x81O\x80\x83S\x80\x83\\\x85xf\x85xQ\x90\x80L\x90\x80G\x81\x7fK\x81\x7f#y\x83Dy\x83E~\x88E~\x88F\x81\x82M\x81\x82P\x82\x80O\x82\x80S\x85\x7fT\x85\x7fT\x83~U\x83~T\x88\x7fR\x88\x7fPz\x7fTz\x7fQ\x7f}P\x7f}Q}\x81U}\x81R\x7f\x7fO\x7f\x7fP\x83\x83N\x83\x83M\x85\x80R\x85\x80L\x87\x82O\x87\x82N\x82{N\x82{V|\x84P|\x84Rz\x83Qz\x83M\x89\x84R\x89\x84P\x8c\x85N\x8c\x85L\x80\x81I\x80\x81M|~N|~L}\x81J}\x81S\x82\x7fL\x82\x7fM}\x84I}\x84K~\x80L~\x80Lz\x80Iz\x80Hy\x80Ly\x80K~\x7fJ~\x7fJx\x82Ox\x82J\x7f\x81M\x7f\x81I\x80~J\x80~Q\x81\x82O\x81\x82M\x84\x7fH\x84\x7fO~\x80R~\x80I\x80\x84J\x80\x84J\x7f\x82L\x7f\x82N\x85\x85U\x85\x85R\x83\x87O\x83\x87U\x82\x80P\x82\x80N|\x85K|\x85O}~O}~J\x7f\x81K\x7f\x81M}\x86N}\x86I|\x82I|\x82Ix\x84Gx\x84My\x88Jy\x88J{\x7fH{\x7fF}{F}{G\x7f\x82K\x7f\x82E}\x7fE}\x7fBz\x80Dz\x80I}\x80J}\x80Dw\x81Dw\x81G\x82\x84H\x82\x84Fz\x85Cz\x85>\x85\x86;\x85\x86H\x8a\x84J\x8a\x84Cx\x80Cx\x80B\x80\x85B\x80\x85#|\x83B|\x83?{\x81#{\x81H{\x84#{\x84?y\x84Ay\x84A\x84\x85=\x84\x85;}\x81=}\x81=\x84\x86#\x84\x86:}\x85;}\x85=}\x83=}\x838\x81\x86=\x81\x86:\x81\x82>\x81\x82=w\x83?w\x83Ot\x81St\x81P\x86\x80L\x86\x80B\x83\x7f?\x83\x7f=\x82\x82>\x82\x82N{\x86H{\x86A\x82\x85K\x82\x85B\x85\x7fB\x85\x7fB\x8b\x85A\x8b\x85B\x83\x85#\x83\x85;\x86\x89?\x86\x89>\x86\x84A\x86\x848v\x81?v\x81I\x81\x81M\x81\x81R\x87\x7fU\x87\x7fT\x88~U\x88~R\x83~S\x83~N{yO{yR\x86\x80P\x86\x80T\x87\x7fQ\x87\x7fQ\x89\x81Q\x89\x81R\x88\x85S\x88\x85O\x80\x80N\x80\x80J\x83\x7fJ\x83\x7fN\x86\x7fM\x86\x7fL\x83\x88J\x83\x88M\x81\x82L\x81\x82O\x84\x82R\x84\x82R{\x80O{\x80K~}O~}R\x7f\x83N\x7f\x83N\x81\x86N\x81\x86Ny\x7fMy\x7fN\x84\x82N\x84\x82Ly\x82Ry\x82O\x81\x82M\x81\x82K|\x83Q|\x83O\x81\x82M\x81\x82J|\x80N|\x80I}\x84D}\x84K\x8a\x83M\x8a\x83M\x85}P\x85}R\x85\x83N\x85\x83Kz|Hz|H~}H~}Nm~Rm~N\x83~I\x83~O\x81\x80O\x81\x80J\x7f\x80K\x7f\x80J\x83\x80K\x83\x80Ix\x84Kx\x84L\x81\x84L\x81\x84J}\x83J}\x83K{\x7fK{\x7fGz}Cz}Ex\x83Hx\x83Jz{Jz{M\x7f\x82N\x7f\x82K}\x83F}\x83Ey\x7fEy\x7fGs\x7fHs\x7fG}\x83F}\x83Fv\x80Ev\x80Fw\x85Gw\x85G\x83\x84J\x83\x84B|\x85D|\x85#\x80\x80#\x80\x80D\x80\x82C\x80\x82B\x84\x86C\x84\x86A\x80\x81#\x80\x81Cu\x87Bu\x87I{\x83C{\x83C\x82\x82A\x82\x82#|\x85<|\x85#{\x88#{\x88D\x81\x89A\x81\x89;z\x857z\x85?\x7f\x84>\x7f\x84#\x81\x80A\x81\x80<\x83\x86;\x83\x86=}\x83:}\x83F\x7f|L\x7f|J\x8d\x85O\x8d\x85F\x8d\x85D\x8d\x85;}\x84B}\x84A\x81~D\x81~>\x80\x85A\x80\x85B{|={|?\x82\x82#\x82\x82;\x81\x
827\x81\x82=w\x85=w\x85Gw\x82Iw\x82I}\x83K}\x83K\x8e\x80I\x8e\x80Q\x8d\x7fS\x8d\x7fR\x87\x7fU\x87\x7fQ\x88\x81Q\x88\x81N\x84\x83Q\x84\x83N\x85\x80O\x85\x80M\x88\x82P\x88\x82U\x81\x80P\x81\x80Tz\x7fQz\x7fOy\x7fOy\x7fN\x80\x82N\x80\x82Q\x85\x81R\x85\x81N\x87~L\x87~K\x87\x80M\x87\x80P\x8b\x85L\x8b\x85Q\x7f\x7fN\x7f\x7fQ\x7f\x81M\x7f\x81O\x84\x84Q\x84\x84Q\x80\x82Q\x80\x82I\x7f\x84J\x7f\x84Hn\x80On\x80I\x87\x87I\x87\x87Q\x7f\x80M\x7f\x80N\x83~L\x83~O\x81\x81O\x81\x81N\x7f\x80I\x7f\x80K\x82\x80N\x82\x80O\x80\x84Q\x80\x84O~\x83K~\x83Kt\x84Ot\x84P~\x85M~\x85L\x7f\x82K\x7f\x82Pz\x82Pz\x82J{\x81E{\x81L|\x81M|\x81J\x81\x81J\x81\x81K|\x83M|\x83K\x82\x80J\x82\x80I|\x82H|\x82P~\x89O~\x89Hx|Kx|Lw\x83Fw\x83N~\x87R~\x87P\x84\x80M\x84\x80>v|?v|E~\x81D~\x81Gx\x80Kx\x80I{\x7fD{\x7fB}\x7fD}\x7fG~\x84H~\x84J\x85\x80I\x85\x80H\x82\x80F\x82\x80=}\x80?}\x80D\x81\x82F\x81\x82C\x81\x84F\x81\x84Dt\x80Bt\x80B\x80\x81C\x80\x81A}\x82=}\x82C{\x86#{\x86Bv\x84Gv\x84?\x80\x7f=\x80\x7f?\x81\x89A\x81\x895q\x855q\x85#x\x82Ex\x82>|\x7f#|\x7f:~\x82>~\x82#\x84\x86#\x84\x867\x7f\x837\x7f\x83O~}T~}P\x8a\x83R\x8a\x83R\x8f\x7fL\x8f\x7f6x\x817x\x81?\x89\x83#\x89\x83H\x80\x80G\x80\x80I\x81\x83J\x81\x83H\x85\x85J\x85\x85J\x85\x82F\x85\x82M~\x80L~\x80Os\x80Ts\x80S~\x80R~\x80K\x86\x80N\x86\x80L\x8d\x81M\x8d\x81M\x87\x7fP\x87\x7fS\x84\x82Q\x84\x82M\x81{J\x81{S\x84}T\x84}T\x88\x82S\x88\x82U~\x7fT~\x7fO\x80}P\x80}Q\x85~Q\x85~P\x88\x82N\x88\x82M}\x80J}\x80M\x80\x81J\x80\x81I\x80\x82O\x80\x82S\x86\x83O\x86\x83T\x80\x7fR\x80\x7fL\x82\x7fK\x82\x7fT\x85\x80N\x85\x80L\x7f\x81O\x7f\x81P\x88\x80M\x88\x80Mw\x85Pw\x85O\x86\x87J\x86\x87Ov\x81Nv\x81M\x80\x84L\x80\x84Ox\x81Mx\x81M}\x80O}\x80P{\x83M{\x83M\x80\x82K\x80\x82H~\x80L~\x80Nz\x83Oz\x83M\x82}K\x82}O~\x83R~\x83R|\x85Q|\x85Nz\x7fKz\x7fQx~Rx~L}\x85K}\x85O\x82\x81N\x82\x81Q}\x81Q}\x81L~{M~{P\x7f~L\x7f~I\x80\x82H\x80\x82J\x7f\x83N\x7f\x83F}\x89J}\x89J\x7f\x83I\x7f\x83J\x81\x86I\x81\x86Et\x86Gt\x86K\x7f\x83J\x7f\x83I\x82\x84G\x82\x84Gz\x84Dz\x84Ky\x80Ey\x80G\x80\x80F\x80\x80G\x82\x85E\x82\x85>}\x81B}\x81Cv\x81Fv\x81=\x80\x84?\x80\x84C\x81\x81A\x81\x81C\x80\x82C\x80\x82#r\x82<r\x82>\x82\x83#\x82\x83C{\x82#{\x828w\x7f9w\x7f>x\x87<x\x87<|\x81>|\x81A{\x86#{\x86D\x80\x7fB\x80\x7f<v\x7f<v\x7f=\x83\x82>\x83\x82>}\x83<}\x83N}\x82R}\x82N\x8e\x81N\x8e\x81O\x8b|O\x8b|K\x87\x83K\x87\x83L\x82\x84M\x82\x84Q\x8d}R\x8d}N\x85}O\x85}M\x90{O\x90{U\x8d\x81Q\x8d\x81P\x89\x80O\x89\x80L\x7f~N\x7f~R\x88\x80S\x88\x80U\x8f\x84S\x8f\x84L\x81\x7fM\x81\x7fR\x82~S\x82~T\x84\x83V\x84\x83T\x83}R\x83}O\x7fzL\x7fzK\x86\x80P\x86\x80N\x7f\x83O\x7f\x83Q}\x81P}\x81S{}O{}Q\x86\x84P\x86\x84Q{}O{}J\x7f\x7fO\x7f\x7fL\x81\x81N\x81\x81Q\x80\x80K\x80\x80J\x81\x7fK\x81\x7fO}\x80N}\x80Q\x86\x82Q\x86\x82H}~N}~O{yO{yEr\x81Gr\x81S\x7f~O\x7f~O}\x80I}\x80S\x85\x86T\x85\x86Jy\x80Ly\x80P\x80\x84Q\x80\x84M\x82\x84M\x82\x84O\x80{R\x80{L\x7f}O\x7f}M~\x80I~\x80M|\x81K|\x81Ew~Fw~M\x82\x80L\x82\x80J|\x83O|\x83K{|L{|Ez\x87Jz\x87O{\x7fL{\x7fI}\x81L}\x81My\x80Py\x80N\x84\x81N\x84\x81M{\x88J{\x88Lz\x85Nz\x85?\x88\x80A\x88\x80J|\x86G|\x86Fx~Ex~G\x85~E\x85~E\x83\x8bH\x83\x8bD\x80\x82E\x80\x82H~\x84E~\x84Dz\x84Bz\x84Bx\x85Fx\x85E~\x84G~\x84=w\x82>w\x82E\x86\x7fE\x86\x7f>y\x81?y\x81E}\x86C}\x86#}\x87;}\x87F}\x81F}\x81<\x80\x7f;\x80\x7fA\x7f\x86A\x7f\x86Cy\x84Dy\x84?s\x82?s\x82;|\x85=|\x85>\x85\x83<\x85\x83A|\x87#|\x878\x84\x827\x84\x822\x8b\x879\x8b\x87:z\x862z\x86M|\x7fQ|\x7fP\x92|S\x92|W\x88~R\x88~I\x8a\x80H\x8a\x80L\x86}O\x86}R\x8b~U\x8b~R\x8a\x7fS\x8a\x7fP\x91}U\x91}V\x87~V\x87~T\x86\x7fQ\x86\x7fQ\x83}P\x83}N\x82zS\x82zR\x8b\x80S\x8b\x80U\x89{T\x89{X\x83{X\x83{L{\x
80K{\x80T\x85\x7fS\x85\x7fR\x89\x81R\x89\x81Q\x82zQ\x82zQ\x7fzR\x7fzTu~Pu~U\x87\x81P\x87\x81R\x8b~P\x8b~M\x87\x86N\x87\x86Kx~Ox~N\x8d\x83P\x8d\x83N\x87|Q\x87|W\x85\x7fT\x85\x7fR\x7f\x82Q\x7f\x82N\x86\x80L\x86\x80L\x86\x80K\x86\x80M\x80~F\x80~N\x7f\x7fU\x7f\x7fO\x80\x81L\x80\x81L\x7f}K\x7f}L}}R}}R\x84\x7fQ\x84\x7fO\x80\x80O\x80\x80Ly\x7fQy\x7fQ\x85\x80N\x85\x80Rw\x87Qw\x87P\x87\x81Q\x87\x81N~\x82N~\x82J|yL|yL\x82\x84O\x82\x84M|\x84O|\x84M\x80\x81M\x80\x81Kx\x82Mx\x82Lx\x81Lx\x81L~\x82P~\x82N{~P{~Lw\x81Ow\x81J|}F|}Ls\x82Ls\x82D\x81\x85D\x81\x85Gt\x80Et\x80Kz\x80Iz\x80D\x80\x81F\x80\x81E\x80\x85F\x80\x85Iz\x82Fz\x82G|\x86F|\x86Fy\x82Gy\x82Hr\x80Fr\x80B\x80\x80#\x80\x80C\x7f\x82B\x7f\x82Fx\x80Hx\x80Cx\x85Hx\x85B}\x80E}\x80;w\x83<w\x83#|\x82#|\x82B{~B{~#\x83\x81?\x83\x81Bw\x7f?w\x7fBz\x88?z\x886x\x827x\x82<|\x81=|\x814\x81\x871\x81\x87Hv\x85;v\x85?\x87\x84N\x87\x848z\x82<z\x82F\x7f}N\x7f}Q\x8awR\x8awO\x8f|N\x8f|U\x84}T\x84}N{\x84H{\x84S\x8a~S\x8a~O\x8c|R\x8c|M\x84\x80S\x84\x80P\x82wS\x82wP\x89}M\x89}P\x89zS\x89zM\x84\x7fR\x84\x7fL\x86\x82N\x86\x82T\x80~U\x80~J\x89\x81K\x89\x81O\x7fzS\x7fzOy\x80Ny\x80R\x84\x7fP\x84\x7fQ|yM|yN\x80{M\x80{M}\x7fN}\x7fR\x88\x80Q\x88\x80J|\x7fN|\x7fS}\x81Q}\x81M\x84}M\x84}P\x8c\x80R\x8c\x80Q\x8a\x80O\x8a\x80R\x82\x7fN\x82\x7fQ\x83\x7fJ\x83\x7fU\x82\x84U\x82\x84M\x7f\x83Q\x7f\x83N\x7f\x81S\x7f\x81P\x83\x82L\x83\x82N\x81}M\x81}I\x82\x7fF\x82\x7fQ}\x82U}\x82N\x81\x7fL\x81\x7fN\x86\x81P\x86\x81M~\x7fJ~\x7fQ\x83}P\x83}O\x85\x7fL\x85\x7fG\x82{H\x82{P\x7f~N\x7f~K\x81~J\x81~F\x85\x80J\x85\x80O\x7f\x85Q\x7f\x85Py\x7fMy\x7fM~\x7fM~\x7fG\x81\x84K\x81\x84J|\x80J|\x80I\x82\x81Q\x82\x81P\x7f\x83L\x7f\x83H\x81\x81H\x81\x81I{\x86H{\x86I\x80\x7fJ\x80\x7fCz{Dz{Jx~Fx~H\x83\x83H\x83\x83?~~D~~F\x86\x81D\x86\x81D\x85\x80H\x85\x80D\x81\x82B\x81\x82B|\x87C|\x87Ex\x80Ex\x80?w\x88Aw\x88E\x84\x85H\x84\x85E~\x81I~\x81A}\x85A}\x85D\x81~A\x81~=\x84\x82?\x84\x82=\x7f\x80?\x7f\x80Ay\x82Ay\x82D}\x82?}\x82=z\x829z\x82:v\x819v\x81Xy}Ry}#y\x81My\x81Gv\x84#v\x84Gw\x85Ew\x859\x84\x87:\x84\x87Py\x80Ry\x80Q\x8b~N\x8b~L\x86yQ\x86yV\x88\x80S\x88\x80Q\x8b\x81S\x8b\x81O\x86\x80K\x86\x80U\x86{V\x86{O\x93}O\x93}Q\x84{Q\x84{T\x87\x80S\x87\x80R\x87\x82R\x87\x82V\x84{W\x84{O\x89xP\x89xU\x86xV\x86xQ\x87~R\x87~N\x87~P\x87~M\x81\x80T\x81\x80U|\x80P|\x80Z{\x80U{\x80K}\x81M}\x81T\x87\x83M\x87\x83Q\x82xP\x82xM\x84\x84R\x84\x84R|\x81M|\x81L\x80}K\x80}O\x84~O\x84~R\x89}O\x89}Q\x85\x80S\x85\x80T\x84\x80U\x84\x80P\x82\x83M\x82\x83M\x7fzO\x7fzS\x80\x81N\x80\x81W\x81\x87R\x81\x87O\x82\x7fO\x82\x7fJ\x86~J\x86~L\x83\x81Q\x83\x81Rw\x82Ww\x82M|\x81K|\x81Q\x87\x84Q\x87\x84Q\x86\x83O\x86\x83P\x85\x7fO\x85\x7fN\x7f\x80O\x7f\x80S~\x83S~\x83N\x85\x7fK\x85\x7fK\x82\x86M\x82\x86Ku\x7fPu\x7fJx~Lx~P}\x82R}\x82I}\x7fL}\x7fL}\x80K}\x80L\x84\x81K\x84\x81Rx\x89Qx\x89Mx\x80Jx\x80L\x7f\x85H\x7f\x85M\x83\x85M\x83\x85K\x83\x82H\x83\x82F\x80\x80H\x80\x80M\x81\x86I\x81\x86J\x80\x87K\x80\x87H\x85yI\x85yDx\x8aCx\x8aB}~E}~K\x7f~I\x7f~J\x80\x80I\x80\x80B~\x7fC~\x7fJ\x83\x80I\x83\x80H~\x81J~\x81#|\x82=|\x82?y\x80?y\x80E\x87\x7fE\x87\x7fGt\x80Ht\x80Cx\x81Dx\x81C\x8b\x85D\x8b\x856z\x829z\x827u\x876u\x87>y\x876y\x87#y\x88Cy\x881\x84\x86:\x84\x86Gy\x88;y\x88Y{\x83_{\x83J\x81\x84R\x81\x84P\x8e\x80Q\x8e\x80R\x88~R\x88~R\x8a\x7fT\x8a\x7fS\x8e|Q\x8e|N\x83\x81M\x83\x81T\x89{T\x89{T\x89\x80Q\x89\x80T\x86\x7fS\x86\x7fO\x8c|R\x8c|T\x8e\x81U\x8e\x81M\x88\x80Q\x88\x80S\x89\x81S\x89\x81Q\x83\x81R\x83\x81U\x86xN\x86xM\x86|Q\x86|T\x84\x80U\x84\x80O}}Q}}P\x84\x7fR\x84\x7fP|\x7fP|\x7fL\x86}O\x86}S}|S}|T\x8a\x82V\x8a\x82Q}~R}~R\x87\x7fP\x87\x7fO|\x81R|\x81M\x88~S\x88~Ry\x7fR
y\x7fP\x85\x80P\x85\x80N\x87\x81O\x87\x81R\x80\x81T\x80\x81P\x83}N\x83}R}\x7fM}\x7fQ~~P~~Q\x81\x83N\x81\x83S~\x80P~\x80Ns|Qs|R\x81\x85S\x81\x85T\x85\x81N\x85\x81P{}O{}O|\x7fO|\x7fT\x82\x81S\x82\x81Q\x82\x86Q\x82\x86M\x7f\x82J\x7f\x82O\x81\x86O\x81\x86O|}M|}Is\x81Os\x81Kx\x85Gx\x85NrzKrzK{\x7fL{\x7fG}\x81G}\x81K\x89\x80J\x89\x80J|\x80L|\x80I}\x86K}\x86K{~L{~G\x80\x87K\x80\x87Hw\x82Hw\x82E\x82~E\x82~C\x84}E\x84}G}\x80L}\x80H|\x83H|\x83K}\x88O}\x88I{\x85G{\x85L\x84\x82K\x84\x82G\x80\x80I\x80\x80F\x7f\x82E\x7f\x82C\x80\x81E\x80\x81D{\x7fA{\x7fCz\x83Bz\x83C\x7f\x86F\x7f\x86D~\x88F~\x88A~\x83#~\x83>\x80\x80;\x80\x80#\x86\x85#\x86\x85?{\x81Z{\x81Gk\x838k\x83m~~a~~F\x82}T\x82}G\x80~F\x80~X|\x80N|\x80F\x7f~R\x7f~Q\x84\x80R\x84\x80U\x8b\x80S\x8b\x80P\x8a\x80Q\x8a\x80I\x8byL\x8byV\x8avM\x8avN\x89\x7fR\x89\x7fL}~P}~V\x81\x83O\x81\x83M\x88\x80Q\x88\x80U\x8a\x7fU\x8a\x7fO\x86\x7fM\x86\x7fS\x88{U\x88{R\x7f\x81Q\x7f\x81R\x88~P\x88~N\x83{J\x83{T\x80\x7fS\x80\x7fR\x84\x83Q\x84\x83N\x83\x81P\x83\x81Qy\x7fSy\x7fK\x80}L\x80}P\x83\x81P\x83\x81P\x83}R\x83}T\x85\x85V\x85\x85P\x84\x80R\x84\x80J\x86\x7fN\x86\x7fT}}O}}X\x86\x80U\x86\x80W\x80\x80S\x80\x80S}\x84R}\x84Q\x82\x84T\x82\x84R\x83\x80Q\x83\x80S\x80}S\x80}U\x80\x84U\x80\x84U|\x81Q|\x81S\x80{U\x80{O{\x7fN{\x7fPy\x80Sy\x80R\x82\x7fR\x82\x7fQ{\x85O{\x85S}\x81O}\x81T~\x86T~\x86Q\x80\x87T\x80\x87Q\x84\x84M\x84\x84L~\x80L~\x80Qz\x80Mz\x80Lv\x85Mv\x85L\x81\x82M\x81\x82M\x81~M\x81~H|\x85L|\x85M{\x81M{\x81Hy\x81My\x81Iw\x80Kw\x80K\x7f}G\x7f}F\x80\x83G\x80\x83N{\x80K{\x80My\x82Ky\x82I}\x87K}\x87K\x84\x83E\x84\x83A\x80\x7fB\x80\x7fK\x81\x85L\x81\x85H\x82\x85H\x82\x85E{\x80H{\x80Fz~Hz~Jt\x82Gt\x82I\x81\x82J\x81\x82D\x80\x87B\x80\x87Ev\x82Dv\x82I\x80\x81H\x80\x81F\x82\x8cI\x82\x8cB\x82\x80D\x82\x80D\x82\x87B\x82\x87Aw\x83#w\x83:z}7z}>{\x80H{\x805y\x895y\x89Aw\x856w\x85Qz\x85Uz\x85<\x82\x84L\x82\x84Tp\x89Fp\x89E\x80\x82P\x80\x82L\x84{M\x84{V\x8a\x85S\x8a\x85V\x87\x83X\x87\x83N\x8b}Q\x8b}V\x91\x81R\x91\x81J\x87\x7fM\x87\x7fR\x85yP\x85yT\x89}S\x89}P\x8a~O\x8a~U~}W~}Q\x86\x7fQ\x86\x7fV\x88yT\x88yL\x87zL\x87zS~xQ~xZ~~W~~Mx\x81Nx\x81O\x85\x82U\x85\x82L\x84}L\x84}S\x82}V\x82}Q\x83zP\x83zU\x7f\x7fT\x7f\x7fT\x89\x85S\x89\x85U\x7f\x80S\x7f\x80I|}L|}H\x84}F\x84}N\x82\x82O\x82\x82P\x83\x84O\x83\x84N\x89\x7fN\x89\x7fR\x84\x81R\x84\x81P\x82~K\x82~Pr|Kr|Lw\x80Ow\x80N\x82\x83Q\x82\x83Nx\x7fNx\x7fFy\x7fNy\x7fX|\x84P|\x84P\x7f\x7fO\x7f\x7fM\x85\x82Q\x85\x82U\x80\x80Q\x80\x80Py\x7fNy\x7fO\x82\x7fR\x82\x7fR\x84\x84S\x84\x84Q}\x83S}\x83N}\x80I}\x80Q\x82\x80N\x82\x80I\x82\x82I\x82\x82K~\x82N~\x82O\x81\x84L\x81\x84N\x80\x80L\x80\x80R\x82\x83P\x82\x83Q|\x81T|\x81K}\x7fM}\x7fN\x7f\x80I\x7f\x80K\x82\x85K\x82\x85Jy\x83Jy\x83K\x82\x82G\x82\x82I|\x81J|\x81H\x80\x85L\x80\x85N~\x82N~\x82L\x82\x86I\x82\x86B\x81\x84F\x81\x84Hz~Ez~F\x80~H\x80~H~\x80E~\x80E\x82\x80I\x82\x80E~\x7fG~\x7fE{\x81E{\x81I\x82~K\x82~E}\x85H}\x85=\x82}#\x82}?y~?y~<x\x7f=x\x7fCx\x84Ax\x84w{zn{zHz~az~Hvz?vzUz\x7fKz\x7fGv\x86Pv\x86E\x85\x87C\x85\x87H\x81\x83M\x81\x83O\x86\x85Q\x86\x85M\x82\x81L\x82\x81O\x7f\x84N\x7f\x84S\x85\x80P\x85\x80U\x85\x80T\x85\x80P\x84\x7fO\x84\x7fN\x80|L\x80|U\x86\x7fS\x86\x7fT\x82\x84Q\x82\x84M\x87{M\x87{N\x87uL\x87uL\x88\x7fL\x88\x7fM\x87\x80P\x87\x80T\x86\x80R\x86\x80N\x85\x7fS\x85\x7fH\x80\x82I\x80\x82S|~V|~Qu}Pu}S\x87}T\x87}U\x82\x80R\x82\x80M\x88\x86S\x88\x86Q\x7f\x86O\x7f\x86Qx\x81Ux\x81T{}S{}Q\x89\x83N\x89\x83O\x83}Q\x83}T~\x82O~\x82N\x80\x81N\x80\x81T\x80}V\x80}S~\x81Q~\x81M\x81\x80O\x81\x80Ry\x7fOy\x7fPs\x81Qs\x81P\x7f\x80P\x7f\x80N\x81~R\x81~Q\x86\x82P\x86\x82Q\x84\x81O\x84\x81R\x84\x82S\x84\x82P{\x
84U{\x84O{\x7fO{\x7fS\x82\x82T\x82\x82T~\x82U~\x82P\x81\x85O\x81\x85R\x83\x80O\x83\x80V\x80~R\x80~T~\x83N~\x83J\x82\x81M\x82\x81K\x83\x85M\x83\x85O{\x81R{\x81Nz\x80Mz\x80O\x80\x81O\x80\x81L\x81|K\x81|M|\x86L|\x86I\x87\x84J\x87\x84I\x83\x83L\x83\x83H{\x82G{\x82Lq~Oq~Hx\x82Kx\x82E\x7f\x82J\x7f\x82N{\x85M{\x85J|\x7fL|\x7fCv\x84Bv\x84K\x7f\x86K\x7f\x86G\x7f~K\x7f~I}\x85K}\x85B\x7f\x80C\x7f\x80Cz\x80Bz\x80?...
That's not the whole string; it is much longer. What I know is that it is a 160x120 image, it uses the YUV colorspace, and it has 3 layers.
The documentation for the library I'm using does not provide any example of how to decode this string into an image, so I need some help with it. It seems that the string contains information about pixels, but I do not understand the format of the string.
I have found this C++ function to convert YUV to RGB, but I don't know how to use it on the string I have. Any ideas?
void yuvToRgb(byte *y, byte *u, byte *v, byte *r, byte *g, byte *b) {
    int c = (*y) - 16;
    int d = (*u) - 128;
    int e = (*v) - 128;
    int R = (298 * c) + (409 * e) + (128);
    int G = (298 * c) - (100 * d) - (208 * e) + 128;
    int B = (298 * c) + (516 * d) + (128);
    R >>= 8;
    G >>= 8;
    B >>= 8;
    // Change the values
    (*r) = clip(R);
    (*g) = clip(G);
    (*b) = clip(B);
}
The data looks to me like 4:4:4 YUV with the data samples interleaved rather than planar. Converting that to English, the bytes decode as
Y1 U1 V1 Y2 U2 V2 Y3 U3 V3 ...
so the Y, U and V values of the first pixel, then of the second pixel, and so on.
I'm guessing at that because of the good correlation between every third value. This should make it pretty straightforward to convert to an RGB triple using the code you have.
Once you have the RGB triples, it's likely that they will be in a simple scan, so knowing that it's 160x120 is very useful (i.e. the first 160 RGB values are the top line, the next 160 the 2nd line, and so on).
My completely untested translation of the C++ code to Python (2.6+) would be something like this:
def clip(v):
    # Clip to 0-255
    v = max(v, 0)
    v = min(v, 255)
    return v

def yuvToRgb(y, u, v):
    c = y - 16
    d = u - 128
    e = v - 128
    R = (298 * c) + (409 * e) + 128
    G = (298 * c) - (100 * d) - (208 * e) + 128
    B = (298 * c) + (516 * d) + 128
    R >>= 8
    G >>= 8
    B >>= 8
    return (clip(R), clip(G), clip(B))

b = bytearray('\x84K\x7f\x86K\x7f\x86G\x7f~K\x7f~I}\x85K}\x85')  # etc...
RGB = []
for i in xrange(0, len(b), 3):
    # i already steps by 3, so the triple lives at i, i+1, i+2
    RGB.append(yuvToRgb(b[i], b[i + 1], b[i + 2]))
I hope that's useful to you.
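If it does work, assembling those triples into a viewable image is short with PIL (a sketch assuming a row-major 160x120 scan as described above):

from PIL import Image

img = Image.new("RGB", (160, 120))
img.putdata(RGB)  # the list of (r, g, b) tuples built above
img.save("frame.png")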
An alternative method would be just to use the Python Imaging Library. I'm not too familiar with it myself, but if you go in assuming it's 160x120 interleaved 4:4:4 YUV then it might be quite easy.
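Untested, but something along these lines might do it: PIL calls this mode YCbCr, and Image.frombytes can read interleaved 3-bytes-per-pixel data directly. Note that PIL uses the full-range JPEG-style conversion matrix, which differs slightly from the video-range constants in the C++ code, so the colors may not match exactly.

from PIL import Image

raw = b"\x80" * (160 * 120 * 3)  # stand-in; substitute the camera's byte string here
img = Image.frombytes("YCbCr", (160, 120), raw).convert("RGB")
img.save("frame.png")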