Python string text to set of points - python

What is the easiest way to obtain a 'key' point set from some string text in Python? Please see the example. Key points are highlighted with blue and yellow.
I need to build a 3D figure based on this set later.

For a 3D model you probably need more than just a few points; think about the letter "S". I created some code to extract the coordinates generically. You can play around with the font size and the picture size to get more or fewer points. The arial.ttf was downloaded from https://github.com/JotJunior/PHP-Boleto-ZF2/blob/master/public/assets/fonts/arial.ttf
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
import numpy as np

print(np.__version__)

def getPixelColor(pixel, x, y):
    try:
        return pixel[x, y]
    except IndexError:
        return (255, 255, 255)

def hasNeighbourColor(color, pixels, x, y):
    # check the 8 surrounding pixels for the given color
    if getPixelColor(pixels, x-1, y-1) == color or getPixelColor(pixels, x, y-1) == color or getPixelColor(pixels, x+1, y-1) == color \
            or getPixelColor(pixels, x-1, y) == color or getPixelColor(pixels, x+1, y) == color \
            or getPixelColor(pixels, x-1, y+1) == color or getPixelColor(pixels, x, y+1) == color or getPixelColor(pixels, x+1, y+1) == color:
        return True
    return False

def createMapOfLetter(letter):
    print("LETTER %s:" % letter)
    print("---------")
    img = Image.new('RGB', (230, 230), "white")
    d = ImageDraw.Draw(img)
    font = ImageFont.truetype("arial.ttf", 300)
    d.text((15, -50), letter, fill=(0, 0, 0), font=font)
    img.save('letter-%s.png' % letter, 'png')
    pixels = img.load()
    ans = []
    # mark black pixels that touch the white background (the outline) in red
    for x in range(230):
        for y in range(230):
            if pixels[x, y] == (0, 0, 0):
                if hasNeighbourColor((255, 255, 255), pixels, x, y) and not hasNeighbourColor((255, 0, 0), pixels, x, y):
                    pixels[x, y] = (255, 0, 0)
                    ans.append([x, y])
    # blank out everything that is not part of the outline
    for x in range(230):
        for y in range(230):
            if pixels[x, y] != (255, 0, 0):
                pixels[x, y] = (255, 255, 255)
    img.save('letter_map-%s.png' % letter, 'png')
    print(ans)

createMapOfLetter('A')
The script creates letter-[letter].png, which looks like this (left: A, right: S).
It then builds a list of coordinates. Plotted as a picture, the coordinates look like this; the script automatically saves them as letter_map-[letter].png (left: A, right: S).
The output of the coordinates for A looks like this:
[[15, 221], [17, 215], [17, 221], [19, 221], [21, 221], [22, 202], [23, 221], [25, 194], [25, 221], [27, 221],
[29, 221], [30, 181], [31, 221], [33, 173], [33, 221], [35, 221], [37, 221], [38, 160], [39, 221], [41, 152],
[41, 221], [43, 221], [45, 218], [46, 139], [49, 131], [52, 198], [53, 195], [54, 118], [54, 192], [57, 110],
[61, 172], [62, 97], [62, 169], [63, 166], [65, 89], [67, 156], [69, 156], [70, 76], [71, 156], [73, 68],
[73, 156], [75, 156], [76, 134], [77, 129], [77, 156], [78, 55], [78, 134], [79, 156], [80, 134], [81, 47],
[81, 118], [81, 156], [82, 134], [83, 156], [84, 134], [85, 107], [85, 156], [86, 34], [86, 134], [87, 156],
[88, 134], [89, 26], [89, 96], [89, 156], [90, 134], [91, 156], [92, 134], [93, 85], [93, 156], [94, 13], [94, 134],
[95, 156], [96, 134], [97, 7], [97, 74], [97, 156], [98, 71], [98, 134], [99, 7], [99, 156], [100, 134], [101, 7],
[101, 156], [102, 134], [103, 7], [103, 56], [103, 156], [104, 134], [105, 7], [105, 156], [106, 46], [106, 134],
[107, 7], [107, 156], [108, 134], [109, 7], [109, 35], [109, 156], [110, 31], [110, 134], [111, 7], [111, 156],
[112, 134], [113, 7], [113, 156], [114, 37], [114, 134], [115, 7], [115, 40], [115, 156], [116, 43], [116, 134],
[117, 7], [117, 46], [117, 156], [118, 49], [118, 134], [119, 7], [119, 156], [120, 134], [121, 7], [121, 156],
[122, 134], [123, 7], [123, 63], [123, 156], [124, 66], [124, 134], [125, 7], [125, 156], [126, 134], [127, 74],
[127, 156], [128, 134], [129, 156], [130, 134], [131, 85], [131, 156], [132, 134], [133, 156], [134, 134], [135, 96],
[135, 156], [136, 134], [137, 32], [137, 156], [138, 104], [138, 134], [139, 37], [139, 156], [140, 134], [141, 42],
[141, 156], [142, 115], [142, 134], [143, 47], [143, 156], [144, 134], [145, 52], [145, 123], [145, 156], [146, 134],
[147, 57], [147, 156], [148, 134], [149, 62], [149, 156], [151, 67], [151, 156], [153, 72], [153, 156], [155, 77],
[155, 156], [157, 82], [157, 156], [159, 87], [160, 161], [164, 172], [168, 183], [171, 191], [175, 202], [178, 210],
[180, 139], [182, 144], [182, 221], [184, 149], [184, 221], [186, 154], [186, 221], [188, 159], [188, 221], [190, 164],
[190, 221], [192, 169], [192, 221], [194, 174], [194, 221], [196, 179], [196, 221], [198, 184], [198, 221], [200, 189],
[200, 221], [202, 194], [202, 221], [204, 199], [204, 221], [206, 221], [208, 221], [210, 221], [212, 221]]
The output of the coordinates for S looks like this:
[[30, 162], [31, 167], [35, 179], [38, 52], [39, 187], [41, 81], [43, 193], [47, 198],
[49, 28], [52, 203], [57, 101], [57, 161], [58, 19], [61, 172], [62, 174], [64, 15],
[66, 52], [66, 68], [67, 49], [67, 181], [68, 73], [68, 108], [69, 45], [70, 76], [70, 109],
[71, 42], [71, 185], [74, 10], [75, 81], [77, 36], [78, 190], [78, 218], [80, 34], [85, 220], [86, 31],
[89, 30], [89, 116], [89, 221], [90, 88], [91, 5], [91, 196], [93, 89], [97, 4], [98, 28], [103, 120],
[107, 93], [107, 121], [111, 94], [111, 122], [127, 28], [128, 4], [130, 127], [134, 223], [135, 100],
[136, 30], [139, 6], [139, 31], [140, 130], [140, 222], [143, 131], [146, 8], [148, 133], [154, 136],
[156, 44], [157, 189], [157, 217], [159, 49], [161, 141], [162, 15], [162, 109], [163, 61], [163, 143],
[164, 110], [166, 111], [167, 149], [167, 212], [168, 19], [169, 171], [177, 118], [177, 205], [182, 200],
[188, 129], [189, 49], [190, 132], [190, 189], [195, 178], [197, 150]]
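Since the question mentions building a 3D figure from this point set later, here is a minimal follow-up sketch (an addition to the answer above, assuming createMapOfLetter is modified to end with return ans): it simply repeats the 2D outline at several z levels to get a basic 3D point cloud.
import numpy as np

def extrude_outline(outline_xy, depth=20.0, steps=5):
    # stack the 2D outline at evenly spaced z levels -> array of shape (N*steps, 3)
    pts = np.asarray(outline_xy, dtype=float)
    layers = [np.column_stack([pts[:, 0], pts[:, 1], np.full(len(pts), z)])
              for z in np.linspace(0.0, depth, steps)]
    return np.vstack(layers)

outline = createMapOfLetter('A')      # assumes the function is changed to `return ans`
points_3d = extrude_outline(outline)
print(points_3d.shape)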

plt.imshow() returns TypeError when run with certain images, but not with others

I'm writing an image optimisation algorithm (I know it's horribly inefficient, it's just my first attempt), and here's what I currently have. The directory of the project looks like this:
image_encoding.py takes an image and encodes it using the following logic:
if the pixel is new, it encodes its RGB value
else, it encodes the index of a pixel with the same color
If you need more insight, here's my code for image_encoding.py (it writes the encoded image into image_code.txt):
# imports
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# reading the image and displaying it
image = mpimg.imread('./100x150.jpg')
size = [image.shape[0], image.shape[1]]
print(f'size of the image: {size}')
plt.imshow(image)
plt.show()

# flattening the image for further processing
new_image = []
for imgline in image:
    for pixel in imgline:
        new_image.append(list(pixel))
image = new_image

# encoding the image
print('started encoding')
image_coding = []
# for a simple 2x2 image of a black square, image_coding will look like this:
# [ [ 255, 255, 255 ], 0, 0, 0 ]
i = 0
for pixel in image:
    i += 1
    print(f'{str(i / len(image) * 100)[:5]}% done', end='\r')
    if pixel not in image_coding:
        image_coding.append(pixel)
    else:
        image_coding.append(image_coding.index(pixel))

# write the image coding into a file
print('writing image coding into image_code.txt')
f = open("image_code.txt", "w")
f.write(str(image_coding))
f.close()
f = open("image_code.txt", "a")
f.write(f'\n{size}')
f.close()
print('image successfully encoded into image_code.txt')
Then, the image_code.txt file will look like this:
[[67, 58, 41], [68, 59, 42], [70, 61, 44], [71, 62, 45], [72, 63, 46], 4, 4, 3, [75, 66, 49], 8, 8, 8, [73, 64, 47], 3, 2, [67, 60, 41], [66, 59, 40], [73, 68, 48], [82, 76, 60], [95, 89, 77], [113, 106, 100], [131, 123, 121], [137, 128, 133], [134, 124, 132], [125, 115, 126], [118, 108, 119], [105, 95, 104], [88, 78, 86], [77, 68, 73], [76, 67, 68], [85, 76, 77], [93, 88, 85], [86, 82, 79], [83, 82, 77], [77, 74, 69], [66, 63, 54], [61, 59, 47], [64, 62, 47], [66, 63, 48], [65, 62, 45], [78, 73, 54], [75, 70, 51], 41, [72, 67, 48], [70, 62, 49], [78, 70, 57], [82, 74, 63], [73, 65, 52], [76, 68, 55], [77, 70, 54], [77, 69, 56], 48, 48, [80, 72, 59], [86, 78, 67], [92, 84, 73], [89, 80, 71], [86, 77, 68], [81, 72, 63], [76, 67, 58], [71, 62, 55], [69, 60, 53], [67, 58, 51], [67, 59, 48], [68, 56, 42], [67, 56, 38], [65, 54, 36], [64, 53, 35], [66, 55, 37], [70, 59, 41], [75, 64, 46], [78, 67, 49], [79, 68, 50], 71, 71, [76, 65, 47], 70, [74, 63, 45], [73, 62, 44], 78, 3, 4, 12, 12, 2, 1, [66, 57, 40], [65, 56, 39], [62, 53, 36], [64, 55, 38], 89, 88, [59, 50, 33], [58, 49, 32], 93, 93, [57, 50, 31], 96, 96, [56, 49, 30], 1, [69, 60, 43], 3, 12, [74, 65, 48], 104, 104, 12, 104, 104, 104, 104, 12, 3, 101, [67, 60, 42], 16, [73, 68, 49], [83, 77, 63], [96, 89, 79], [115, 107, 104], [134, 125, 126], [143, 133, 141], [142, 132, 141], [140, 129, 143], [134, 123, 137], [121, 111, 122], [103, 93, 102], [90, 80, 88], [87, 78, 81], [93, 84, 87], [99, 93, 93], [95, 91, 90], [93, 92, 90], [89, 85, 82], [79, 76, 69], [73, 70, 61], [73, 71, 59], [73, 69, 57], [69, 66, 51], [78, 72, 56], [76, 70, 54], 140, [75, 69, 53], [71, 63, 50], 48, [79, 71, 60], 44, 48, 50, 45, 50, 48, [79, 71, 58], [84, 76, 65], [89, 81, 70], [90, 81, 72], [88, 79, 70], [84, 75, 66], [80, 71, 64], [76, 67, 60], [73, 64, 57], 60, [71, 61, 52], [72, 60, 46], 69, 65, 66, 66, [68, 57, 39], [71, 60, 42], 77, 70, 70, 77, 78, [72, 61, 43], 170, 69, [69, 58, 40], 3, 12, 104, 8, 4, 101, 87, [63, 54, 37], 88, 89, 86, 87, 88, 92, 93, 93, 96, 96, 96, 99, 101, 3, 12, 8, [76, 67, 50], [77, 68, 51], 204, 204, 8, 204, 204, 8, 104, 4, 2, [68, 61, 43], [64, 57, 41], [69, 63, 51], [79, 72, 62], [92, 85, 79], [111, 102, 103], [132, 123, 128], [146, 136, 147], [152, 141, 155], [158, 147, 164], [153, 142, 159], [140, 129, 145], [123, 112, 126], [107, 97, 108], [99, 89, 97], [100, 90, 98], [103, 97, 101], [103, 98, 102], [106, 101, 105], [104, 100, 101], [96, 92, 91], [92, 87, 84], [89, 84, 78], [84, 80, 71], [77, 73, 62], [81, 75, 63], [79, 73, 61], [82, 74, 61], 153, 144, 47, [76, 66, 54], [69, 59, 47], [74, 64, 52], [75, 65, 55], 249, 249, 249, [79, 69, 59], [85, 75, 66], [90, 80, 71], [93, 82, 76], [92, 81, 75], [91, 80, 74], [88, 77, 73], [84, 73, 69], [80, 69, 65], [77, 66, 62], [75, 65, 56], [76, 64, 50], 78, 179, 68, 67, 68, 169, 69, 176, 170, 69, 179, 169, 65, 68, 68, 4, 8, [78, 69, 52], [80, 71, 54], [79, 70, 53], 104, 0, 88, [60, 51, 34], 187, 0, 101, 87, [61, 52, 35], 92, 288, [58, 51, 32], 296, 96, 96, 2, 4, 104, 204, 282, 284, 284, 284, 205, 282, 282, 282, 204, 104, 4, [70, 63, 47], [63, 55, 42], [65, 58, 48], [70, 63, 55], [80, 72, 69], [98, 89, 92], [121, 111, 119], [142, 132, 143], [155, 144, 160], [167, 156, 173], [163, 152, 169], [154, 143, 160], [137, 126, 142], [119, 109, 120], [107, 97, 106], [103, 93, 101], [104, 94, 102], [106, 99, 106], [112, 107, 113], [115, 110, 116], [113, 108, 112], [111, 105, 107], [107, 101, 101], [99, 94, 91], [91, 86, 82], [90, 83, 77], [84, 77, 69], 158, 146, [69, 61, 48], 44, 248, [70, 60, 48], [70, 
60, 50], 348, [72, 59, 50], 348, [73, 60, 51], [77, 67, 57], [88, 75, 67], [93, 83, 74], [97, 84, 78], [96, 85, 79], [97, 83, 80], [94, 83, 79], [93, 79, 76], [87, 76, 72], [85, 71, 68], [83, 70, 62], [79, 67, 53], 75, 176, 169, 68, 68, 169, 69, 170, 170, 69, 179, 65, 68, [251, 214, 185], [253, 217, 191], [255, 222, 196], [255, 224, 200], [255, 224, 199], [255, 222, 197], [251, 219, 196], [250, 218, 195], [255, 224, 203], 2857, [255, 227, 206], [255, 228, 207], [255, 228, 209], [254, 229, 209], [254, 227, 208], 2857, [247, 212, 182], [255, 226, 190], [255, 223, 187], [239, 205, 168], [243, 211, 173], [255, 234, 198], [245, 219, 186], [197, 172, 142], [59, 34, 12], 2433, 2468, [44, 24, 17], [62, 43, 37], [40, 22, 18], [37, 19, 17], [62, 49, 43], 823, 49, 2786, 2883, 2884, [87, 79, 68], 2985, [85, 76, 67], 158, [89, 80, 73], [95, 86, 79], [101, 92, 87], [104, 95, 90], [104, 94, 92], [102, 92, 90], [100, 91, 86], [87, 80, 70], [85, 79, 67], 240, 2798, 1714, [84, 78, 56], 1714, 1603, 1605, 1609, [70, 64, 42], 1609, 3006, 1609, 1708, 1605, [77, 71, 49], 1703, 3001, 1613, [89, 78, 58], [88, 77, 57], [76, 63, 46], 1971, [38, 21, 11], 2136, [43, 26, 18], [42, 23, 16], [53, 32, 27], [49, 26, 20], [26, 4, 0], 2532, [66, 40, 23], [134, 109, 89], [178, 154, 130], [197, 166, 137], [202, 157, 118], [216, 163, 121], [216, 164, 124], [216, 167, 127], [224, 176, 138], 2442, [227, 183, 148], [235, 191, 156], [239, 196, 162], [243, 202, 170], [249, 208, 176], [251, 211, 176], [251, 208, 174], [250, 207, 172], [254, 211, 176], [255, 216, 181], [251, 215, 183], [251, 216, 188], [253, 218, 190], 2851, [255, 222, 195], 3052, [253, 219, 194], [249, 217, 192], [253, 221, 198], [253, 223, 199], [254, 223, 202], [252, 224, 202], [252, 224, 203], [250, 223, 202], [250, 221, 203], [251, 220, 199], [241, 204, 177], [255, 227, 194], [255, 231, 194], [249, 214, 174], [239, 206, 163], [242, 210, 169], [245, 217, 177], [246, 220, 185], [156, 132, 104], [70, 48, 27], 2433, [35, 15, 8], [43, 24, 20], [26, 8, 8], [31, 12, 14], [53, 38, 35], [64, 54, 44], 47, 46, 46, 46, 2987, 57, [85, 76, 69], 2794, [87, 78, 73], [93, 84, 79], [99, 89, 87], 2993, [106, 96, 95], 3093, [106, 96, 94], [93, 86, 76], [89, 83, 71], [84, 78, 66], 241, 2601, 1602, 2602, 1803, 1690, 2193, 2189, 1907, 1990, 1606, 1690, 1297, 1715, 1296, 2580, 1694, [97, 89, 68], [79, 68, 48], [49, 38, 20], [28, 15, 0], [26, 12, 0], [39, 22, 12], [46, 29, 19], [45, 27, 17], [42, 20, 9], [43, 19, 7], [33, 7, 0], [55, 28, 9], [129, 98, 77], [184, 154, 128], [196, 165, 137], [201, 165, 133], [202, 154, 116], [217, 166, 123], [217, 168, 127], [214, 166, 126], [223, 177, 141], [232, 188, 153], [237, 194, 160], [243, 200, 166], [242, 201, 169], [243, 204, 173], [246, 207, 176], [249, 210, 179], [251, 210, 178], 3043, [250, 210, 175], [250, 209, 177], [250, 213, 184], [248, 211, 184], [247, 210, 184], [249, 213, 187], [252, 218, 193], [254, 220, 195], 3054, [251, 216, 194], [248, 216, 193], 3156, [249, 217, 196], [248, 217, 196], [248, 215, 196], [246, 213, 194], [244, 211, 192], [245, 210, 190], [252, 215, 188], [253, 214, 183], [246, 208, 172], [250, 212, 173], [255, 219, 175], [241, 207, 162], [235, 202, 159], [252, 220, 181], [255, 229, 195], [136, 109, 82], [48, 21, 2], [33, 9, 0], [28, 8, 1], [19, 1, 0], [33, 17, 17], [56, 41, 36], [65, 53, 41], [78, 66, 52], 2177, 2420, 146, 2317, [82, 73, 66], 159, 159, [84, 75, 68], 441, 2893, 2994, 3095, [108, 98, 96], [108, 99, 94], [95, 87, 76], [91, 83, 70], 2884, 153, 154, 242, 2515, 50, 2694, 2688, 1623, 47, [76, 68, 57], 48, 
2886, 2884, 2886, 2278, 1621, 2182, [100, 90, 78], [50, 41, 26], [37, 27, 15], [45, 33, 21], [39, 27, 15], 1940, 1448, [30, 13, 0], 2264, [44, 20, 0], [76, 49, 20], [132, 101, 72], [179, 144, 114], [199, 163, 131], [208, 169, 138], [218, 175, 143], [217, 171, 137], [221, 176, 137], 3136, [222, 176, 140], [220, 176, 141], [224, 181, 146], [234, 191, 157], 3139, [235, 194, 162], [236, 195, 163], [236, 197, 166], [237, 198, 167], [240, 201, 170], [244, 205, 172], [247, 208, 175], [248, 209, 180], [249, 209, 184], [250, 209, 189], [251, 210, 190], [250, 212, 191], [251, 215, 193], [252, 217, 195], 2954, [252, 220, 199], [250, 218, 197], [251, 219, 198], 3257, [253, 218, 198], [252, 215, 196], [249, 211, 192], [246, 208, 189], [244, 206, 185], [244, 208, 182], [245, 210, 180], [246, 210, 178], [245, 205, 170], [241, 199, 161], [239, 196, 154], [242, 197, 155], [243, 200, 158], [255, 213, 175], [233, 195, 159], [144, 110, 82], [49, 23, 0], [28, 7, 0], [40, 24, 9], [42, 32, 22], [50, 38, 26], 749, [108, 91, 71], [111, 95, 79], [89, 77, 61], 1922, 2415, [78, 71, 61], [80, 73, 63], [77, 68, 59], 58, [87, 77, 67], [91, 81, 71], [94, 84, 75], [97, 87, 78], [102, 91, 85], [106, 96, 87], 2918, [98, 84, 71], [93, 79, 66], [87, 73, 60], 2317, 2317, 2317, [79, 70, 61], 3303, [78, 69, 60], 3288, 59, [75, 66, 57], 59, 158, 56, [91, 82, 73], [101, 92, 83], [105, 96, 87], [97, 88, 79], [57, 47, 37], [37, 27, 17], [44, 34, 24], [52, 42, 32], [46, 34, 22], [34, 22, 8], [30, 14, 0], [41, 25, 2], [55, 35, 10], [77, 54, 23], [121, 92, 58], [168, 135, 100], [199, 160, 127], [207, 167, 132], [211, 166, 135], [215, 170, 137], [216, 171, 138], 3236, [223, 179, 144], 3334, [222, 179, 144], [225, 182, 147], 3238, [242, 199, 165], 3240, 3240, [235, 196, 165], 3243, 3244, 3141, [245, 209, 177], [246, 209, 182], [247, 206, 184], [248, 207, 189], [249, 208, 190], 3261, [249, 214, 194], 3158, 3256, [250, 219, 198], 3063, 3255, 3255, [254, 219, 199], [254, 216, 197], [252, 214, 195], [253, 212, 194], 3251, [245, 211, 186], [243, 212, 184], [245, 210, 182], 3141, [241, 197, 162], [239, 194, 155], [242, 193, 153], [242, 195, 153], [246, 201, 160], [254, 212, 174], [216, 180, 146], [127, 98, 68], [56, 35, 8], [49, 32, 12], [62, 51, 33], [63, 50, 33], [97, 79, 57], [126, 105, 84], [127, 110, 90], [98, 85, 68], 1921, 1623, [77, 70, 60], 3287, 3303, 46, [89, 76, 67], 1317, 3291, 355, 3293, 1319, [97, 83, 70], [97, 81, 66], [92, 76, 61], [88, 72, 57], [77, 68, 61], [78, 69, 62], [81, 72, 65], 2794, 2794, 3186, 3402, 159, 3400, 3400, 3087, [93, 84, 77], [101, 92, 85], 724, [108, 99, 92], 157, 3317, [31, 21, 11], [36, 26, 16], [39, 29, 19], [48, 36, 24], [48, 36, 22], [45, 29, 13], [62, 46, 23], [86, 66, 41], [120, 97, 66], [165, 136, 104], [196, 163, 128], [209, 170, 137], [208, 167, 135], 3330, [213, 168, 135], 3332, [221, 177, 142], [226, 182, 147], 3038, [226, 183, 149], [228, 185, 151], 3238, 3040, [234, 193, 161], 3440, 3342, 3243, 3244, [244, 205, 174], [245, 208, 179], [247, 210, 183], [245, 207, 184], [245, 209, 187], [247, 211, 189], [248, 213, 191], 3156, [248, 218, 194], [248, 220, 196], [249, 221, 197], [250, 222, 198], 3455, [250, 220, 196], 2954, [253, 218, 196], [254, 218, 196], [255, 218, 197], [254, 219, 197], [247, 216, 195], [247, 217, 193], [245, 213, 188], [242, 207, 179], [239, 198, 166], [236, 193, 158], [235, 189, 153], [235, 190, 151], [243, 199, 160], [253, 213, 177], [252, 216, 182], [183, 152, 121], [78, 52, 25], [42, 22, 0], [63, 46, 26], [72, 55, 35], [111, 91, 67], [133, 113, 89], [133, 116, 96], [101, 
88, 71], 248, 1623, 3286, [78, 71, 63], 3186, 2987, 2819, 2819, 834, [89, 78, 72], 258, [96, 83, 75], [95, 81, 68], 1973, [90, 77, 61], [87, 74, 58], 3400, [79, 70, 63], 2794, 2793, [87, 78, 71], 3087, 2794, 3402, 3402, 3402, 2793, 3411, [105, 96, 89], 725, 3414, 159, [35, 25, 16], 3419, 3419, [32, 22, 12], [45, 32, 23], [55, 43, 29], [59, 43, 27], 1052, [114, 94, 69], [146, 123, 92], [180, 151, 119], [195, 162, 129], [200, 161, 128], [204, 163, 131], [213, 168, 137], [216, 171, 140], [217, 172, 139], [222, 178, 143], [228, 184, 149], [231, 187, 152], [231, 188, 154], 3536, 3238, 3138, 3440, 3440, 3342, 3243, [241, 202, 171], [245, 206, 175], [246, 209, 180], 3149, [250, 214, 190], [249, 214, 192], 3155, 2955, [251, 221, 197], [251, 223, 199], [251, 225, 200], [252, 226, 201], [248, 222, 197], [247, 221, 196], [247, 219, 195], [249, 219, 195], 2954, 3056, [255, 223, 201], [255, 226, 205], 2958, [252, 225, 204], 3063, 3466, [240, 205, 175], [235, 196, 163], [231, 191, 156], [228, 188, 152], [243, 203, 167], [241, 203, 167], [249, 213, 179], [224, 191, 160], [147, 120, 91], [92, 66, 41], [91, 69, 45], [111, 89, 65], [112, 90, 66], [123, 103, 79], [121, 104, 84], [97, 84, 67], 246, 3303, [82, 75, 67], [75, 68, 60], [88, 79, 72], [90, 81, 74], [95, 82, 76], [94, 81, 75], [90, 79, 75], [89, 78, 74], [91, 80, 78], 256, 1230, [95, 82, 66], 1037, 3498, [79, 70, 65], 732, [85, 76, 71], [88, 79, 74], 3603, [86, 77, 72], [84, 75, 70], 732, 3089, 3089, 3603, 441, [102, 93, 88], [118, 109, 104], 629, 60, [32, 22, 13], [51, 41, 31], [60, 50, 41], 1824, [43, 30, 21], 2721, [63, 47, 32], [94, 77, 57], [125, 105, 80], [148, 124, 96], [172, 143, 111], [182, 149, 116], [192, 153, 120], [205, 164, 132], [217, 172, 143], [220, 175, 144], [220, 175, 142], [224, 180, 145], [230, 185, 152], [234, 189, 156], 3238, 3238, [235, 192, 160], [236, 193, 161], [234, 193, 163], [235, 194, 164], 3242, [238, 199, 168], [242, 203, 174], [247, 208, 179], [249, 212, 183], [250, 215, 187], 2953, 3057, [254, 224, 200], [253, 225, 201], [254, 226, 202], [254, 228, 203], 3653, [254, 230, 204], [249, 225, 199], [249, 223, 198], 3657, 3456, 2863, [255, 227, 203], [255, 229, 206], [255, 231, 210], [255, 235, 216], [255, 233, 216], 2960, [250, 222, 200], [246, 214, 189], [239, 205, 177], [233, 198, 168], [229, 193, 161], [230, 194, 160], [236, 200, 166], [240, 205, 173], [246, 213, 180], [227, 196, 167], [168, 139, 109], [121, 91, 63], [116, 90, 63], [108, 84, 58], [111, 91, 66], [115, 98, 78], 3483, [80, 70, 58], 2317, [85, 78, 70], [77, 70, 64], 630, [96, 87, 82], [101, 87, 84], [100, 86, 83], [96, 85, 83], [95, 83, 83], [97, 85, 87], [99, 88, 86], [99, 86, 77], [97, 85, 71], 1321, 1428, [81, 72, 67], 3186, 3602, 3504, 3089, 3504, 3602, 3189, [91, 82, 77], 3411, 3708, 3504, [95, 86, 81], 724, 2893, [60, 51, 44], [47, 37, 28], [55, 45, 35], [58, 48, 39], [40, 30, 21], [37, 24, 15], 2721, 1350, [97, 80, 60], [118, 98, 74], [138, 114, 86], [161, 132, 100], [177, 144, 111], [195, 156, 125], [210, 169, 137], [221, 176, 147], [222, 177, 146], [224, 179, 146], 3038, [231, 186, 153], [235, 190, 157], [236, 193, 159], 3138, [237, 194, 162], 3738, 3641, [236, 195, 165], 3243, 3244, [245, 206, 177], [249, 210, 181], 2948, 3050, 2952, [255, 225, 201], 3749, 3651, 3651, 3555, 3555, 3555, 3554, 3554, 3555, 3652, 2755, [255, 231, 207], [255, 233, 209], [255, 233, 212], [255, 235, 218], 3665, [255, 230, 211], 2958, [251, 223, 201], [247, 217, 191], [241, 210, 182], [236, 205, 176], [221, 188, 157], [238, 205, 174], [232, 199, 168], [234, 201, 170], 
[249, 216, 185], [214, 181, 150], [143, 110, 79], [102, 73, 43], [103, 79, 51], [103, 83, 58], [118, 101, 81], 964, [87, 77, 65], 3288, 341, [82, 75, 69], [96, 86, 84], 3091, [104, 90, 89], [105, 91, 91], [102, 90, 90], [103, 91, 93], [105, 93, 97], [108, 96, 98], [106, 93, 87], 1419, [100, 87, 78], 1318, 3501, 2317, 3186, 158, 2793, 157, 2989, 56, 3589, [96, 87, 78], [96, 87, 80], 57, 2989, [103, 94, 85], 3504, [48, 39, 30], [52, 42, 33], [42, 32, 23], [43, 32, 26], [33, 23, 14], [33, 20, 11], 2777, [76, 60, 45], [93, 76, 56], [107, 87, 63], [128, 104, 76], [155, 126, 96], 3727, [197, 158, 127], [210, 169, 139], [222, 177, 148], [227, 182, 151], [228, 183, 150], [229, 184, 151], [232, 187, 154], 3735, 3639, [238, 195, 163], 2456, 2456, 2457, [238, 197, 167], [239, 200, 171], 3644, [247, 207, 181], [252, 212, 186], [254, 217, 190], [255, 220, 192], 2953, [253, 221, 196], [252, 220, 195], [250, 220, 194], [249, 219, 193], [245, 218, 191], [244, 217, 190], 3854, [248, 221, 194], [249, 222, 195], [251, 224, 197], 2656, 2660, [255, 228, 203], 3861, 2758, [255, 225, 207], [253, 224, 208], [254, 225, 207], [255, 227, 209], [255, 230, 209], [255, 228, 206], 3553, [247, 220, 193], [235, 205, 177], [239, 210, 180], [232, 201, 172], 3774, [250, 217, 186], 3265, [189, 154, 124], [124, 93, 64], [99, 73, 46], [91, 71, 46], [114, 98, 75], [126, 113, 96], 2079, 2317, [84, 77, 71], [85, 77, 74], [94, 84, 83], [97, 87, 86], [104, 90, 90], [107, 92, 95], [106, 94, 98], [108, 95, 102], [112, 99, 108], [116, 103, 110], [116, 105, 103], [114, 103, 99], [111, 100, 96], [108, 97, 93], 2693, 2693, 146, 46, 2790, 155, 55, 1684, [88, 80, 69], 729, [99, 91, 80], 2985, 54, 729, 2515, [40, 32, 21], [27, 17, 8], [25, 15, 6], [46, 35, 29], [45, 34, 28], 1744, 3279, 1349, [87, 70, 52], [102, 82, 58], [123, 99, 71], [152, 123, 93], [176, 143, 112], [193, 154, 123], [206, 165, 135], 3830, [232, 187, 158], [231, 186, 155], 3734, 3834, 3635, 3639, [239, 196, 164], [239, 195, 166], 3938, 3841, [239, 198, 168], [240, 201, 172], [243, 204, 175], [248, 208, 182], [253, 213, 187], [255, 219, 192], [255, 221, 195], [253, 217, 193], [251, 217, 192], [249, 215, 190], [247, 213, 188], 3364, [241, 209, 184], [242, 208, 183], [239, 207, 182], [246, 212, 187], 3668, 3152, 2854, 2953, [255, 223, 198], 2953, [255, 220, 198], [250, 215, 196], [248, 215, 198], [251, 220, 200], [255, 226, 206], 3763, [255, 236, 213], [255, 233, 210], [255, 229, 204], [254, 228, 201], [235, 208, 179], [234, 204, 176], [242, 211, 182], [248, 214, 186], [249, 214, 184], [206, 169, 140], [130, 96, 68], [95, 69, 42], [79, 59, 34], [103, 87, 64], [128, 115, 96], [107, 97, 85], 2889, [87, 80, 74], [83, 75, 72], [91, 81, 80], [95, 85, 84], [103, 88, 91], 3891, 3893, [111, 98, 105], [116, 103, 113], [120, 107, 116], [127, 115, 119], [125, 113, 113], [122, 110, 112], [120, 108, 108], 50, 48, 48, 50, 153, 2883, 730, 1820, [88, 80, 67], [99, 91, 78], 2278, [84, 76, 63], [95, 87, 74], [96, 88, 75], [67, 59, 46], [34, 26, 13], [29, 19, 10], [39, 28, 22], [40, 29, 23], 4017, [47, 34, 26], [51, 38, 29], [60, 44, 31], [73, 56, 38], [96, 76, 52], [118, 94, 68], 2261, [163, 130, 99], [191, 152, 121], [219, 178, 148], [231, 185, 159], [225, 180, 151], [228, 183, 152], [230, 185, 154], [234, 189, 158], [235, 190, 159], [234, 190, 161], [235, 191, 162], 2456, [240, 196, 167], [239, 198, 170], [241, 200, 172], [243, 203, 177], [250, 210, 184], [255, 216, 190], 2850, [255, 218, 192], [252, 215, 189], [249, 211, 188], [250, 212, 189], 4048, 3448, [240, 202, 179], [237, 199, 176], 
[239, 198, 176], [238, 200, 177], [245, 204, 182], 3448, [250, 209, 187], [248, 210, 187], [247, 209, 186], 4060, 4049, [252, 214, 191], [247, 209, 188], 3263, [243, 206, 187], [247, 215, 194], 2958, [255, 237, 215], [255, 242, 219], [255, 242, 216], 4071, [255, 236, 209], 3871, 3974, [234, 200, 173], 3050, [236, 199, 172], [158, 124, 96], [96, 70, 43], [68, 48, 23], [78, 62, 39], [121, 108, 89], 826, 3305, 219, 3887, [88, 78, 77], [93, 83, 82], [102, 87, 90], [105, 90, 93], [106, 93, 100], [113, 100, 107], [125, 112, 122], [135, 122, 132], [138, 127, 135], 637, [148, 137, 145], 636, 2085, 2085, 49, 49, 2277, 2502, 2378, [87, 80, 64], 2378, 2280, 4107, 2184, [92, 85, 69], [89, 82, 66], [61, 54, 38], 4015, 4016, [38, 27, 21], 4018, 4018, [48, 35, 29], [49, 36, 27], [55, 39, 26], [65, 48, 30], [98, 77, 56], [117, 93, 67], [141, 111, 83], [164, 130, 102], [192, 153, 124], [217, 176, 146], [229, 183, 157], [226, 180, 154], 3830, [223, 178, 147], [226, 181, 150], 3932, 4037, 3938, [243, 199, 170], [244, 200, 171], [238, 197, 169], [240, 199, 171], [245, 205, 179], [251, 211, 185], [255, 215, 189], 4044, [253, 216, 190], 2849, [255, 214, 192], [254, 212, 190], [250, 208, 186], [244, 202, 180], [240, 195, 174], [236, 191, 170], [235, 190, 169], 4154, [239, 194, 173], [241, 196, 175], [243, 198, 177], 4158, [240, 198, 176], 4160, [243, 201, 179], [246, 204, 182], [244, 199, 176], [244, 202, 178], [246, 205, 183], [247, 211, 187], 3950, [248, 218, 192], [245, 219, 194], [243, 219, 193], [236, 212, 186], [246, 220, 195], [254, 227, 200], 3851, [233, 201, 176], [236, 202, 175], [232, 196, 170], [187, 153, 126], [105, 79, 52], [70, 50, 25], [67, 51, 28], [105, 92, 75], [117, 107, 95], 157, [95, 88, 82], [86, 78, 75], [90, 80, 79], 3889, [106, 92, 92], [109, 94, 97], [108, 96, 100], [115, 102, 109], [127, 114, 123], [137, 124, 134], [145, 133, 145], [154, 144, 155], [161, 149, 161], [157, 147, 158], 2303, 2303, 2277, 2277, 2303, 2501, 2184, 2784, 2184, [86, 79, 63], 2502, 2502, [88, 81, 65], 2085, [52, 45, 29], [35, 27, 14], [34, 24, 15], [42, 31, 25], [43, 32, 28], [41, 30, 24], [45, 32, 26], [44, 31, 23], 2674, [66, 49, 31], [96, 75, 54], [112, 88, 62], [138, 108, 80], [166, 132, 104], [195, 156, 127], [214, 173, 145], [225, 179, 155], [225, 179, 153], [227, 182, 153]]
[150, 100]
As you can see, it has both integer and list values. Then, I pass the file into my image_decoding.py script. Here's how it looks:
# imports
import matplotlib.pyplot as plt

# decoding
f = open("image_code.txt", "r")
Lines = f.readlines()
size = Lines[1].strip('][').split(', ')
image_coding = Lines[0][:-2].strip('][').split(', ')
size[0], size[1] = int(size[0]), int(size[1])
f.close()

print('started reshaping the list')
for i in range(len(image_coding)):
    try:
        value = image_coding[i]
        if value[0] == '[':
            image_coding[i] = [int(image_coding[i][1:]), int(
                image_coding[i + 1]), int(image_coding[i + 2][:-1])]
        if i != 0 and type(image_coding[i - 1]) != list and type(image_coding[i]) != list:
            try:
                image_coding[i] = int(image_coding[i])
            except ValueError:
                pass
    except IndexError:
        break

counter = 1
for j in image_coding:
    if type(j) != int and j[-1] == ']':
        image_coding.remove(j)
    print(
        f'Stage 1: {counter} / {len(image_coding)} ({str(counter / len(image_coding) * 100)[:7]}%) done', end='\r')
    counter += 1

counter = 1
for l in image_coding:
    if type(l) == str:
        image_coding.remove(l)
    print(
        f'Stage 2: {counter} / {len(image_coding)} ({str(counter / len(image_coding) * 100)[:7]}%) done', end='\r')
    counter += 1

print('started decoding')
i = 0
image = []
for pixel in image_coding:
    i += 1
    print(f'Decoding: {str(i / len(image_coding) * 100)[:7]}% done', end='\r')
    if type(pixel) == list:
        image.append(pixel)
    else:
        image.append(image_coding[pixel])

# unflattening the image to display it correctly
print('\nstarted unflattening')
new_image = []
for i in range(size[0]):
    imageline = []
    for j in range(size[1]):
        imageline.append(image[i*size[1] + j])
    new_image.append(imageline)
image = new_image

plt.imshow(image)
plt.show()
When running the script on the 50x100.jpg image (which looks like this):
The code works flawlessly. However, when I try to run it with the 100x100.jpg image, 100x150.jpg image or 300x200.jpg image, which look like this:
image_decoding.py returns the following TypeError:
Traceback (most recent call last):
  File "c:\Users\pc\Desktop\Programming\imageCoding\image_decoding.py", line 82, in <module>
    plt.imshow(image)
  File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\matplotlib\_api\deprecation.py", line 454, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\matplotlib\pyplot.py", line 2611, in imshow
    __ret = gca().imshow(
  File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\matplotlib\_api\deprecation.py", line 454, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\matplotlib\__init__.py", line 1423, in inner
    return func(ax, *map(sanitize_sequence, args), **kwargs)
  File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\matplotlib\axes\_axes.py", line 5572, in imshow
    im.set_data(X)
  File "C:\Users\pc\AppData\Local\Programs\Python\Python310\lib\site-packages\matplotlib\image.py", line 701, in set_data
    raise TypeError("Image data of dtype {} cannot be converted to "
TypeError: Image data of dtype object cannot be converted to float
P.S. There might be some really stupid bug in the code (e.g. a missing bracket); I'm still quite new to this :)
It's quite a lot of code you have there... I hope my answer is clear without reposting everything :)
Your bug is in the file reading and parsing (recreating the original data-types etc.), so I have the following suggestion for you.
Python comes with a package called pickle that you can use to store python objects and read them back into memory.
I'd suggest the following changes to your code.
In image_encoding.py replace
f = open("image_code.txt", "w")
f.write(str(image_coding))
f.close()
f = open("image_code.txt", "a")
f.write(f'\n{size}')
f.close()
with
import pickle # put that at the top of your file
with open("image_code.txt", "wb") as file:
pickle.dump({"image_coding": image_coding, "size": size}, file)
And then in image_decoding.py instead of
f = open("image_code.txt", "r")
Lines = f.readlines()
size = Lines[1].strip('][').split(', ')
image_coding = Lines[0][:-2].strip('][').split(', ')
size[0], size[1] = int(size[0]), int(size[1])
f.close()
print('started reshaping the list')
for i in range(len(image_coding)):
    try:
        value = image_coding[i]
        if value[0] == '[':
            image_coding[i] = [int(image_coding[i][1:]), int(
                image_coding[i + 1]), int(image_coding[i + 2][:-1])]
        if i != 0 and type(image_coding[i - 1]) != list and type(image_coding[i]) != list:
            try:
                image_coding[i] = int(image_coding[i])
            except ValueError:
                pass
    except IndexError:
        break
you can use:
import pickle

with open("image_code.txt", "rb") as file:
    data = pickle.load(file)

size = data.get("size")
image_coding = data.get("image_coding")
and have your original variables back without checking for "[" and casting back to integers etc.
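As a side note, if you would rather keep the existing text format, the file already contains valid Python literals, so a small sketch like this (using ast.literal_eval) would also rebuild the nested list and the size without any manual "[" checks:
import ast

with open("image_code.txt", "r") as file:
    lines = file.readlines()

image_coding = ast.literal_eval(lines[0])
size = ast.literal_eval(lines[1])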
Hope that helps. If you have any questions, don't hesitate to ask.

How to calculate which Voronoi cells are in specific area in Python?

I have some points which represent the heads of pedestrians in each frame of an experiment. I need to calculate which Voronoi cells are in a specific area, the measurement square:
x_range = (-0.4, 0.4)
y_range = (0.5, 1.3)
So I've adapted a sample that generates Voronoi cells from points, and I've added the measurement area (blue lines) and walls (black); here is the result for frame 0:
And here is part of the code (adapted from the sample):
entries_for_frame = get_entries_at_frame(entries, frame)
points = points_from_entries(entries_for_frame)
vor = scipy.spatial.Voronoi(points)
scipy.spatial.voronoi_plot_2d(vor)
plt.show()
As far as I know, in order to calculate which cells are inside the measurement area I need to check which cell edges cross the measurement square or lie inside it.
So according to the documentation of scipy.spatial.Voronoi, the interesting attribute is vertices, which returns those orange vertices. I also need the edges between vertices inside the measurement area; according to the documentation the relevant attribute is ridge_vertices, but unfortunately it returns something I can't make sense of:
[[0, 19], [0, 2], [1, 17], [1, 3], [2, 3], [17, 19], [-1, 22], [-1, 15], [15, 16], [16, 21], [21, 22], [-1, 0], [2, 23], [22, 23], [28, 32], [28, 29], [29, 30], [30, 31], [31, 32], [12, 13], [12, 28], [13, 25], [25, 29], [-1, 24], [-1, 31], [24, 30], [-1, 26], [26, 27], [27, 32], [-1, 33], [19, 20], [20, 34], [33, 34], [35, 36], [-1, 35], [36, 37], [-1, 37], [-1, 4], [4, 5], [5, 35], [6, 37], [6, 7], [7, 36], [38, 39], [38, 40], [39, 41], [40, 41], [-1, 40], [-1, 8], [8, 38], [-1, 9], [9, 10], [10, 41], [10, 43], [39, 42], [42, 43], [52, 53], [52, 57], [53, 54], [54, 55], [55, 56], [56, 57], [13, 52], [25, 57], [48, 49], [48, 54], [49, 55], [9, 50], [24, 56], [49, 50], [17, 59], [18, 61], [18, 20], [59, 61], [11, 46], [11, 60], [18, 47], [46, 47], [60, 61], [58, 63], [58, 60], [59, 62], [62, 63], [26, 64], [27, 65], [64, 65], [21, 67], [23, 68], [67, 68], [42, 45], [43, 69], [44, 45], [44, 72], [69, 72], [50, 70], [69, 70], [48, 71], [70, 71], [4, 76], [5, 75], [75, 76], [33, 77], [76, 77], [34, 78], [77, 78], [47, 79], [78, 79], [80, 82], [80, 81], [81, 83], [82, 84], [83, 84], [14, 53], [14, 80], [71, 82], [72, 84], [14, 51], [51, 87], [81, 85], [85, 87], [88, 90], [88, 89], [89, 93], [90, 91], [91, 92], [92, 93], [44, 88], [83, 89], [85, 86], [86, 93], [11, 91], [58, 92], [94, 95], [94, 97], [95, 96], [96, 98], [97, 99], [98, 99], [12, 94], [51, 95], [65, 97], [101, 104], [101, 102], [102, 103], [103, 105], [104, 105], [15, 101], [16, 104], [64, 102], [99, 103], [66, 67], [66, 105], [1, 106], [3, 107], [106, 107], [68, 108], [107, 108], [8, 73], [45, 109], [73, 110], [109, 110], [111, 115], [111, 113], [112, 113], [112, 114], [114, 115], [46, 74], [74, 111], [79, 113], [75, 112], [7, 114], [116, 117], [116, 118], [117, 120], [118, 119], [119, 121], [120, 121], [96, 118], [98, 100], [100, 116], [87, 119], [86, 121], [63, 120], [122, 127], [122, 123], [123, 124], [124, 125], [125, 126], [126, 127], [100, 127], [117, 122], [62, 123], [106, 124], [108, 125], [66, 126], [128, 129], [128, 130], [129, 132], [130, 131], [131, 133], [132, 134], [133, 134], [90, 128], [109, 129], [74, 130], [110, 132], [115, 131], [6, 133], [73, 134]]
I don't see in the documentation how to interpret the returned numbers, and the tutorials don't explain how to solve my problem either. So my question is: how do I calculate which Voronoi cells are inside the measurement area with at least a single point?
I believe that your best bet is to use some kind of multiple polygon intersection algorithm using the cell vertices to describe the polygons.
You can whittle down the number of polygons by discarding those whose rightmost vertex is left of the blue rectangle, those whose leftmost vertex is to the right, and so on for up and down. This leaves you with the yellow polygons only.
You can also quickly deal with (in this case, mark as "intersecting" rather than discard) all those cells whose center or any vertex lies inside the rectangle. This check is also very quick.
In this example this is enough to locate all cells.
In other cases (for example, in the figure below, if the bottom-left yellow cell were shifted slightly upwards) you will have cells that have all vertices and the Delaunay center outside the rectangle, and yet one edge crosses it, so there is an intersection. To recognize those, you can exploit the fact that a rectangle is a convex figure, and check whether, among the cells you've left, there is one that contains at least one of the rectangle's corners. This is a slightly more complex check ("whether a point lies inside a convex polygon"), but not too complex since the cell is also convex and can be trivially decomposed into triangles.
The pseudo algorithm would be:
for all Voronoi cells:
    get list of vertices.
    are they all left/below/above/right of the rectangle?
        YES: this cell does not intersect. Continue.
    for all the vertices plus the cell center:
        is this point inside the rectangle?
            YES: we have intersection. Report this cell and continue.
    decompose the cell in a list of triangles with vertex in the
        Delaunay center, taking ordered vertex pairs.
    for each triangle:
        for each vertex of the rectangle:
            is the vertex inside the triangle?
                YES: we have intersection. Report and continue.
    this cell does not intersect the rectangle.
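For reference, the numbers in ridge_vertices are simply pairs of indices into vor.vertices (with -1 standing for a vertex at infinity). As a rough illustration of the overall idea (not a drop-in for your code: it uses shapely's ready-made polygon intersection test instead of the triangle decomposition above, and points, x_range and y_range are assumed from the question), the bounded cells can be checked like this:
import numpy as np
from scipy.spatial import Voronoi
from shapely.geometry import Polygon, box

x_range = (-0.4, 0.4)
y_range = (0.5, 1.3)
rect = box(x_range[0], y_range[0], x_range[1], y_range[1])

points = np.random.rand(30, 2) * 4 - 2   # stand-in for points_from_entries(entries_for_frame)
vor = Voronoi(points)

cells_in_area = []
for point_idx, region_idx in enumerate(vor.point_region):
    region = vor.regions[region_idx]
    if len(region) == 0 or -1 in region:
        continue  # unbounded cell: would need to be clipped against the walls first
    cell = Polygon(vor.vertices[region])
    if cell.intersects(rect):
        cells_in_area.append(point_idx)

print("cells intersecting the measurement area:", cells_in_area)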

Color calibration with color checker using Root-Polynomial Regression not giving correct results

For a quantification project, I am in need of colour corrected images which produce the same result over and over again irrespective of lighting conditions.
Every image includes an X-Rite ColorChecker, whose colors are known in matrix format:
Reference=[[170, 189, 103],[46, 163, 224],[161, 133, 8],[52, 52, 52],[177, 128, 133],[64, 188, 157],[149, 86, 187],[85, 85, 85],[67, 108, 87],[108, 60, 94],[31, 199, 231],[121, 122, 122], [157, 122, 98],[99, 90, 193],[60, 54, 175],[160, 160, 160],[130, 150, 194],[166, 91, 80],[70, 148, 70],[200, 200, 200],[68, 82, 115],[44, 126, 214],[150, 61, 56],[242, 243, 243]]
For every image I calculate the same kind of matrix for the color card present in it; as an example:
Actual_colors=[[114, 184, 137], [2, 151, 237], [118, 131, 55], [12, 25, 41], [111, 113, 177], [33, 178, 188], [88, 78, 227], [36, 64, 85], [30, 99, 110], [45, 36, 116], [6, 169, 222], [53, 104, 138], [98, 114, 123], [48, 72, 229], [29, 39, 211], [85, 149, 184], [66, 136, 233], [110, 79, 90], [41, 142, 91], [110, 180, 214], [7, 55, 137], [0, 111, 238], [82, 44, 48], [139, 206, 242]]
Then I calibrate the entire image using a color correction matrix derived from the coefficients of the input and output matrices:
for im in calibrated_img:
    im[:] = colour.colour_correction(im[:], Actual_colors, Reference, "Finlayson 2015")
The results are as follows:
Where the top image represents the input and the down image the output.
Lighting plays a key role in the final result of the color correction, but the first two images on the left should generate the same output. Once the images become too dark, white is somehow converted to red. I am not able to understand why.
I have tried to apply a gamma correction before processing with no success.
The other two models, Cheung 2004 and Vandermonde, gave worse results, as did partial least squares. The images are corrected fairly well for the yellow radiating lamps, but the final result is not a clean white; instead they have a bluish haze over the image. White should be white. What can I do to further improve these results?
Edit 23-08-2020:
Based on @Kel Solaar's comments, I have made changes to my script to include the steps he mentioned, as follows:
#Convert image from int to float
Float_image = skimage.img_as_float(img)
#Normalise image to have pixel values from 0 to 1
Normalised_image = (Float_image - np.min(Float_image))/np.ptp(Float_image)
#Decode the image with the sRGB EOTF
Decoded_img = colour.models.eotf_sRGB(Normalised_image)
#Perform Finlayson 2015 colour correction on the linear data
for im in Decoded_img:
    im[:] = colour.colour_correction(im[:], Image_list, Reference, "Finlayson 2015")
#Encode the image back to sRGB
Encoded_img = colour.models.eotf_inverse_sRGB(Decoded_img)
#Denormalise the image to fit 255 pixel values
Denormalised_image = Encoded_img*255
#Convert floats back to integers
Integer_image = Denormalised_image.astype(int)
This greatly improved image quality as can be seen below:
However, lighting/color differences between corrected images are unfortunately still present.
Raw images can be found here, but do note that they are upside down.
Measured values of color cards in images:
IMG_4244.JPG
[[180, 251, 208], [62, 235, 255], [204, 216, 126], [30, 62, 97], [189, 194, 255], [86, 250, 255], [168, 151, 255], [68, 127, 167], [52, 173, 193], [111, 87, 211], [70, 244, 255], [116, 185, 228], [182, 199, 212], [102, 145, 254], [70, 102, 255], [153, 225, 255], [134, 214, 255], [200, 156, 169], [87, 224, 170], [186, 245, 255], [44, 126, 235], [45, 197, 254], [166, 101, 110], [224, 255, 252]]
IMG_4243.JPG
[[140, 219, 168], [24, 187, 255], [148, 166, 73], [17, 31, 53], [141, 146, 215], [42, 211, 219], [115, 101, 255], [33, 78, 111], [24, 118, 137], [63, 46, 151], [31, 203, 255], [67, 131, 172], [128, 147, 155], [61, 98, 255], [42, 59, 252], [111, 181, 221], [88, 168, 255], [139, 101, 113], [47, 176, 117], [139, 211, 253], [19, 78, 178], [12, 146, 254], [110, 60, 64], [164, 232, 255]]
IMG_4241.JPG
[[66, 129, 87], [0, 90, 195], [65, 73, 26], [9, 13, 18], [60, 64, 117], [20, 127, 135], [51, 38, 176], [15, 27, 39], [14, 51, 55], [21, 15, 62], [1, 112, 180], [29, 63, 87], [54, 67, 69], [20, 33, 179], [10, 12, 154], [38, 92, 123], [26, 81, 178], [58, 44, 46], [23, 86, 54], [67, 127, 173], [5, 26, 77], [2, 64, 194], [43, 22, 25], [84, 161, 207]]
IMG_4246.JPG
[[43, 87, 56], [2, 56, 141], [38, 40, 20], [3, 5, 6], [31, 31, 71], [17, 85, 90], [19, 13, 108], [7, 13, 20], [4, 24, 29], [8, 7, 33], [1, 68, 123], [14, 28, 46], [28, 34, 41], [6, 11, 113], [0, 1, 91], [27, 53, 83], [11, 44, 123], [32, 21, 23], [11, 46, 26], [32, 77, 115], [2, 12, 42], [0, 29, 128], [20, 9, 11], [49, 111, 152]]
The actual colors of the color card (the reference) are given at the top of this post and are in the same order as the values given for the images.
Edit 30-08-2020: I have applied @nicdall's comments:
#Remove color chips which are outside of RGB range
New_reference = []
New_Actual_colors = []
for L, K in zip(Actual_colors, range(len(Actual_colors))):
    if any(m in L for m in [0, 255]):
        print(L, "value outside of range")
    else:
        New_reference.append(Reference[K])
        New_Actual_colors.append(Actual_colors[K])
In addition to this, I realized I was using a single pixel from the color card, so I started to take 15 pixels per color chip and averaged them to make sure it is a good balance. The code is too long to post here completely but something in this direction (don't judge my bad coding here):
for i in Chip_list:
    R = round(sum([rotated_img[globals()[i][1], globals()[i][0]][0],
                   rotated_img[globals()[i][1]+5, globals()[i][0]][0],
                   rotated_img[globals()[i][1]+10, globals()[i][0]][0],
                   rotated_img[globals()[i][1], globals()[i][0]+5][0],
                   rotated_img[globals()[i][1], globals()[i][0]+10][0],
                   rotated_img[globals()[i][1]+5, globals()[i][0]+5][0],
                   rotated_img[globals()[i][1]+10, globals()[i][0]+10][0]]) / 7)  # 7 = number of pixels summed up
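For reference, a more compact way to do the same averaging (only a sketch; it assumes you know each chip's top-left pixel coordinates) is to slice a small patch with NumPy and take its per-channel mean:
import numpy as np

def average_chip(img, x, y, size=15):
    # mean R, G, B over a size-by-size patch whose top-left corner is at (x, y)
    patch = np.asarray(img[y:y + size, x:x + size], dtype=float)
    return patch.reshape(-1, patch.shape[-1]).mean(axis=0)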
The result was disappointing, as the correction seemed to have gotten worse, but it is shown below:
New_reference = [[170, 189, 103], [161, 133, 8], [52, 52, 52], [177, 128, 133], [64, 188, 157], [85, 85, 85], [67, 108, 87], [108, 60, 94], [121, 122, 122], [157, 122, 98], [60, 54, 175], [160, 160, 160], [166, 91, 80], [70, 148, 70], [200, 200, 200], [68, 82, 115], [44, 126, 214], [150, 61, 56]]
#For Image: IMG_4243.JPG:
New_Actual_colors= [[139, 218, 168], [151, 166, 74], [16, 31, 52], [140, 146, 215], [44, 212, 220], [35, 78, 111], [25, 120, 137], [63, 47, 150], [68, 132, 173], [128, 147, 156], [40, 59, 250], [110, 182, 222], [141, 102, 115], [48, 176, 118], [140, 211, 253], [18, 77, 178], [12, 146, 254], [108, 59, 62]]
#The following values were omitted in IMG_4243:
[23, 187, 255] value outside of range
[115, 102, 255] value outside of range
[30, 203, 255] value outside of range
[61, 98, 255] value outside of range
[88, 168, 255] value outside of range
[163, 233, 255] value outside of range
I have started to approach the core of the problem, but I am not a mathematician; the correction itself seems to be the problem.
This is the color correction matrix for IMG4243.jpg generated and utilized by the colour package:
CCM=colour.characterisation.colour_correction_matrix_Finlayson2015(New_Actual_colors, New_reference, degree=1 ,root_polynomial_expansion=True)
print(CCM)
[[ 1.10079803 -0.03754644 0.18525637]
[ 0.01519612 0.79700086 0.07502735]
[-0.11301282 -0.05022718 0.78838144]]
Based on what I understand from the colour package code, New_Actual_colors is converted with the CCM as follows:
Converted_colors=np.reshape(np.transpose(np.dot(CCM, np.transpose(New_Actual_colors))), shape)
When we compare Converted_colors with New_reference, we can see that the correction goes a long way, but differences are still present (the end goal is to convert New_Actual_colors with the color correction matrix (CCM) into Converted_colors that exactly match New_reference):
print("New_reference =",New_reference)
print("Converted_colors =",Converted_colors)
New_reference = [[170, 189, 103],[161, 133, 8],[52, 52, 52],[177, 128, 133],[64, 188, 157],[85, 85, 85],[67, 108, 87],[108, 60, 94],[121, 122, 122],[157, 122, 98],[60, 54, 175],[160, 160, 160],[166, 91, 80],[70, 148, 70],[200, 200, 200],[68, 82, 115],[44, 126, 214],[150, 61, 56]]
Converted_colors = [[176, 188, 106],[174, 140, 33],[26, 29, 38],[188, 135, 146],[81, 186, 158],[56, 71, 80],[48, 106, 99],[95, 50, 109],[102, 119, 122],[164, 131, 101],[88, 66, 190],[155, 163, 153],[173, 92, 70],[68, 150, 79],[193, 189, 173],[50, 75, 134],[55, 136, 192],[128, 53, 34]]
When subtracted, the differences become clear; the question is how to overcome them:
list(np.array(New_reference) - np.array(Converted_colors))
[array([-6, 1, -3]),
array([-13, -7, -25]),
array([26, 23, 14]),
array([-11, -7, -13]),
array([-17, 2, -1]),
array([29, 14, 5]),
array([ 19, 2, -12]),
array([ 13, 10, -15]),
array([19, 3, 0]),
array([-7, -9, -3]),
array([-28, -12, -15]),
array([ 5, -3, 7]),
array([-7, -1, 10]),
array([ 2, -2, -9]),
array([ 7, 11, 27]),
array([ 18, 7, -19]),
array([-11, -10, 22]),
array([22, 8, 22])]
Here are a few recommendations:
As stated in my comment above, we had an implementation issue with the Root-Polynomial variant from Finlayson (2015), which should be fixed in the develop branch.
You are passing integer and encoded values to the colour.colour_correction definition. I would strongly recommend that you (a rough sketch of this pipeline is given below):
Convert the datasets to a floating-point representation.
Scale them from range [0, 255] to range [0, 1].
Decode them with the sRGB EOTF.
Perform the colour correction on that linear data.
Encode back and scale back to the integer representation.
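A rough sketch of that pipeline, reusing the Reference and Actual_colors matrices from your post (the scaling of the reference values and the file name are my assumptions, not a fixed recipe):
import numpy as np
import colour
import skimage
import skimage.io

img = skimage.img_as_float(skimage.io.imread("IMG_4243.JPG"))        # float image in [0, 1]
reference = colour.models.eotf_sRGB(np.asarray(Reference) / 255)     # decode chart values to linear
measured = colour.models.eotf_sRGB(np.asarray(Actual_colors) / 255)

linear = colour.models.eotf_sRGB(img)                                # decode the image to linear
# applied to the whole image here; if your colour-science version complains
# about the shape, loop over the rows as in your edit above
corrected = colour.colour_correction(linear, measured, reference, "Finlayson 2015")

encoded = colour.models.eotf_inverse_sRGB(np.clip(corrected, 0, 1))  # re-encode to sRGB
out = (encoded * 255).astype(np.uint8)                               # back to 8-bit integers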
Your images seem to be an exposure wedge; ideally, you would compute a single matrix for the appropriate reference exposure, normalise the other images' exposure to it, and apply the matrix to them.
An additional recommendation, more on the physical side of the problem: I see that some of the RGB values in the high- and low-exposure images are outside the unsaturated range of the camera (0 and 255 values). This means that some information on the actual measured color is lost at the time of image capture, because some of the calibration patches are either over- or under-exposed. This is a known problem in RGB colorimetry, and it is actually mentioned in (Finlayson, 2015): "an additional assumption is that both v and kv are in the unsaturated range of the camera".
If possible, try to have a look at the histogram while you take the images so that all pixels have a value in the unsaturated range ([1, 254] at most).
Otherwise, if taking new images is out of the question, you can try ignoring the saturated patches (which have either 0 or 255 in any of the R, G or B values) in the calibration process (make sure that you ignore the patches both in the image and in the reference). This could improve your calibration for the overall image, as you do not make your model fit saturated values.

How to convert pixels stored in a list into an image with python?

pix = [
[[90, 94, 6], [126, 108, 24], [180, 116, 42], [166, 116, 46], [72, 94, 31]],
[[101, 96, 14], [190, 165, 84], [202, 134, 63], [170, 115, 50], [40, 50, 0]],
[[145, 125, 53], [150, 112, 40], [148, 73, 6], [156, 90, 31], [25, 11, 1]],
[[133, 124, 57], [165, 142, 75], [195, 142, 77], [169, 120, 62], [82, 74, 28]],
[[73, 105, 40], [56, 77, 10], [138, 135, 67], [97, 95, 34], [45, 69, 21]],
]
I have a bunch of pixels stored in the list above and now I want to convert it to an image. How can I turn that list into an image? Thank you.
Using PIL, you can create an image from an array:
from PIL import Image
import numpy as np
img = Image.fromarray(np.array(pix).astype(np.uint8))
Now, you may look at the image:
img.show()
The good thing is that, from now on, you can benefit from all of PIL's toolbox for image processing (resize, thumbnail, filters, ...).
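For instance (purely illustrative; the 250x250 size and file name are arbitrary), upscaling with nearest-neighbour resampling keeps the 5x5 pixel blocks visible:
big = img.resize((250, 250), Image.NEAREST)   # nearest-neighbour keeps hard pixel edges
big.save('pix_big.png')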
Here's how to do it using OpenCV. OpenCV works directly with NumPy arrays, so you can simply convert the list into a <class 'numpy.ndarray'> (note that OpenCV assumes BGR channel order, so the saved or displayed colours will be channel-swapped relative to RGB data unless you convert with cv2.cvtColor).
Result:
import numpy as np
import cv2
pix = [
[[90, 94, 6], [126, 108, 24], [180, 116, 42], [166, 116, 46], [72, 94, 31]],
[[101, 96, 14], [190, 165, 84], [202, 134, 63], [170, 115, 50], [40, 50, 0]],
[[145, 125, 53], [150, 112, 40], [148, 73, 6], [156, 90, 31], [25, 11, 1]],
[[133, 124, 57], [165, 142, 75], [195, 142, 77], [169, 120, 62], [82, 74, 28]],
[[73, 105, 40], [56, 77, 10], [138, 135, 67], [97, 95, 34], [45, 69, 21]],
]
# Convert to ndarray
img = np.array(pix).astype(np.uint8)
# Save image
cv2.imwrite('img.png', img)
# Display image
cv2.imshow('img', img)
cv2.waitKey()
The answer above transforms your list into a PIL Image. If you just want to see the image, you can do this:
import matplotlib.pyplot as plt
plt.imshow(pix)
plt.show()

Find maximum value of minimum elements in tuple

If I have a list
[[209, 34], [50, 170], [197, 32], [75, 156], [176, 51], [54, 141], [205, 19], [35, 173]]
How would I go about finding the sublist with the maximum minimum element?
i.e., in the case above, it would be index 3, [75, 156], because its minimum value is greater than the minimum value of every other sublist.
It should be as simple as:
max(list_of_iterables, key=min)
i.e.:
>>> lst = [[209, 34], [50, 170], [197, 32], [75, 156], [176, 51], [54, 141], [205, 19], [35, 173]]
>>> max(lst, key=min)
[75, 156]
The max (and min) functions work by walking through an iterable and comparing each element in the iterable picking out the biggest (or smallest for min) element. The catch is that the thing compared is the result of the key function applied to the individual element. By default, the key function is just the identity function -- but it can be anything you want. In this case, my key function is min which picks out the minimum value of the sub-list. We then compare the sublists based on their minimum value and pick out the max which is exactly what your question asked for.
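If you also want the index of that sub-list (the question mentions index 3), one option is to run the same key through enumerate:
>>> idx, best = max(enumerate(lst), key=lambda pair: min(pair[1]))
>>> idx, best
(3, [75, 156])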
You can use the sorted function.
>>> lst = [[209, 34], [50, 170], [197, 32], [75, 156], [176, 51], [54, 141], [205, 19], [35, 173]]
>>> sorted(lst, key=min, reverse=True)
[[75, 156],
[54, 141],
[176, 51],
[50, 170],
[35, 173],
[209, 34],
[197, 32],
[205, 19]]
key=min means it will use the min function as the key when sorting the list.
Then you can find the index of the value with index method. Like:
>>> lst.index([75, 156])
3
