I have a 4D array of arrays where, for example, a[0] looks like this:
array([[[135, 105, 95],
[109, 78, 60],
[101, 78, 54],
...,
[ 32, 21, 22],
[ 32, 21, 23],
[ 35, 28, 31]],
[[144, 119, 107],
[117, 87, 68],
[115, 94, 74],
...,
[ 32, 21, 22],
[ 33, 22, 24],
[ 33, 22, 26]],
[[145, 127, 113],
[140, 116, 102],
[128, 104, 87],
...,
[ 29, 22, 20],
[ 28, 21, 19],
[ 33, 23, 20]],
...,
[[105, 70, 62],
[109, 81, 75],
[142, 123, 117],
...,
[ 52, 41, 39],
[ 62, 49, 47],
[ 52, 38, 33]],
[[ 90, 55, 50],
[ 96, 67, 65],
[133, 111, 108],
...,
[ 45, 37, 34],
[ 48, 36, 32],
[ 48, 37, 30]],
[[129, 111, 106],
[124, 103, 101],
[116, 94, 90],
...,
[ 50, 40, 35],
[ 53, 39, 35],
[ 48, 37, 32]]], dtype=uint8)
Every array in the 4D array represents an image (its pixels). I want to calculate the kurtosis for every array in the 4D array using a loop. Could someone please help me with this?
Thanks in advance for your help.
Without a runnable example to test on, you could try something similar to this:
from scipy.stats import kurtosis

k = []
for elem in a:
    k.append(kurtosis(elem))
This will output an array for every image, because kurtosis() is computed along axis 0 by default. If you want a single number per image, set axis=None when calling kurtosis().
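For example, a minimal sketch of that single-value-per-image variant (assuming a is the 4D uint8 array from the question):

from scipy.stats import kurtosis

k = []
for img in a:
    k.append(kurtosis(img, axis=None))  # one value per image, over all pixels and channels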
Why is there a difference between the pixel values if I open an image with skimage.io.imread(image) and tf.image.decode_image(image)?
For example:
import numpy as np
import skimage.io

original_img = skimage.io.imread("test.jpg")
print(original_img.shape)
print(np.amax(original_img))
print(original_img)
The output is:
(110, 150, 3)
255
array([[[ 29, 65, 117],
[ 45, 43, 90],
[ 78, 39, 68],
...,
[ 30, 46, 95],
[ 30, 43, 96],
[ 31, 44, 97]],
[[ 41, 54, 89],
[ 95, 89, 123],
[ 57, 39, 65],
...,
[ 32, 46, 91],
[ 32, 46, 95],
[ 32, 45, 97]],
[[ 62, 49, 69],
[ 84, 76, 97],
[ 68, 70, 95],
...,
[ 18, 30, 70],
[ 35, 47, 95],
[ 34, 47, 99]],
...,
[[136, 124, 22],
[144, 136, 53],
[134, 123, 44],
...,
[ 16, 74, 16],
[ 39, 89, 52],
[ 53, 108, 69]],
[[161, 125, 5],
[149, 129, 42],
[129, 116, 48],
...,
[ 67, 119, 73],
[ 39, 80, 48],
[ 33, 69, 41]],
[[196, 127, 6],
[160, 111, 32],
[141, 108, 55],
...,
[ 26, 56, 32],
[ 8, 29, 10],
[ 12, 24, 12]]], dtype=uint8)
And if I open the same image with Tensorflow:
import tensorflow as tf
original_img = tf.image.decode_image(tf.io.read_file("test.jpg"))
print(np.amax(original_img))
print(original_img)
The output is:
255
<tf.Tensor: shape=(110, 150, 3), dtype=uint8, numpy=
array([[[ 44, 57, 101],
[ 40, 42, 80],
[ 65, 41, 65],
...,
[ 25, 42, 88],
[ 33, 49, 100],
[ 25, 41, 92]],
[[ 47, 53, 89],
[ 96, 95, 127],
[ 60, 44, 70],
...,
[ 29, 43, 88],
[ 40, 54, 103],
[ 19, 35, 84]],
[[ 59, 54, 74],
[ 72, 69, 90],
[ 70, 70, 96],
...,
[ 23, 35, 77],
[ 16, 29, 74],
[ 50, 64, 111]],
...,
[[145, 116, 24],
[161, 131, 43],
[141, 113, 30],
...,
[ 19, 67, 19],
[ 49, 95, 58],
[ 53, 97, 64]],
[[164, 119, 16],
[166, 123, 28],
[143, 108, 27],
...,
[ 73, 119, 80],
[ 29, 68, 37],
[ 39, 75, 47]],
[[182, 128, 20],
[160, 112, 14],
[149, 112, 32],
...,
[ 11, 57, 21],
[ 7, 44, 13],
[ 0, 14, 0]]], dtype=uint8)>
I have also noticed that if I open an image with TensorFlow, make some changes to it, save it to disk and open it again with tf.image.decode_image(image), the pixel values are again different, but this time not by as much.
This is due to the algorithm used for JPEG decompression. By default, a system-specific method is used, and tf.image.decode_image() does not provide any way to change it.
tf.image.decode_jpeg() has the dct_method argument, which can be used to change the decompression method. Currently there are two valid values that can be set: INTEGER_FAST and INTEGER_ACCURATE.
If you open the image in the following way, you should get the same output as with skimage.io.imread(image):
original_img = tf.image.decode_jpeg(tf.io.read_file("test.jpg"), dct_method="INTEGER_ACCURATE")
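As a quick sanity check (a minimal sketch, assuming TensorFlow 2.x eager execution and the same test.jpg as above), you can compare the two decoders directly; with the accurate IDCT they should agree, or at worst differ by a rounding step depending on the libjpeg build each library links against:

import numpy as np
import skimage.io
import tensorflow as tf

skimage_img = skimage.io.imread("test.jpg")
tf_img = tf.image.decode_jpeg(tf.io.read_file("test.jpg"), dct_method="INTEGER_ACCURATE").numpy()

print(np.array_equal(skimage_img, tf_img))                              # expected True
print(np.max(np.abs(skimage_img.astype(int) - tf_img.astype(int))))     # maximum per-pixel difference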
For a quantification project, I need colour-corrected images that produce the same result over and over again, irrespective of lighting conditions.
Every image includes an X-Rite color checker whose colors are known in matrix format:
Reference=[[170, 189, 103],[46, 163, 224],[161, 133, 8],[52, 52, 52],[177, 128, 133],[64, 188, 157],[149, 86, 187],[85, 85, 85],[67, 108, 87],[108, 60, 94],[31, 199, 231],[121, 122, 122], [157, 122, 98],[99, 90, 193],[60, 54, 175],[160, 160, 160],[130, 150, 194],[166, 91, 80],[70, 148, 70],[200, 200, 200],[68, 82, 115],[44, 126, 214],[150, 61, 56],[242, 243, 243]]
For every image, I determine the corresponding matrix for the color card present in it, for example:
Actual_colors=[[114, 184, 137], [2, 151, 237], [118, 131, 55], [12, 25, 41], [111, 113, 177], [33, 178, 188], [88, 78, 227], [36, 64, 85], [30, 99, 110], [45, 36, 116], [6, 169, 222], [53, 104, 138], [98, 114, 123], [48, 72, 229], [29, 39, 211], [85, 149, 184], [66, 136, 233], [110, 79, 90], [41, 142, 91], [110, 180, 214], [7, 55, 137], [0, 111, 238], [82, 44, 48], [139, 206, 242]]
Then I calibrate the entire image using a color correction matrix derived from the coefficients of the input and output matrices:
for im in calibrated_img:
    im[:] = colour.colour_correction(im[:], Actual_colors, Reference, "Finlayson 2015")
The results are as follows (top image: input; bottom image: output).
Lighting plays a key role in the final result of the color correction, but the first two images on the left should generate the same output. Once the images become too dark, white is somehow converted to red, and I am not able to understand why.
I have tried to apply a gamma correction before processing with no success.
The other two models, Cheung 2004 and Vandermonde, gave worse results, as did partial least squares. The images are fairly well corrected for the yellow radiating lamps, but the final result is not clean white; instead they have a bluish haze over the image. White should be white. What can I do to further improve these results?
Edit 23-08-2020:
Based on @Kel Solaar's comments, I have made changes to my script to include the steps he mentioned, as follows:
# Convert image from int to float
Float_image = skimage.img_as_float(img)

# Normalise image to have pixel values from 0 to 1
Normalised_image = (Float_image - np.min(Float_image)) / np.ptp(Float_image)

# Decode the image with the sRGB EOTF
Decoded_img = colour.models.eotf_sRGB(Normalised_image)

# Perform Finlayson 2015 color correction on the linear data
for im in Decoded_img:
    im[:] = colour.colour_correction(im[:], Image_list, Reference, "Finlayson 2015")

# Encode the image back to sRGB
Encoded_img = colour.models.eotf_inverse_sRGB(Decoded_img)

# Denormalise the image to fit 255 pixel values
Denormalised_image = Encoded_img * 255

# Convert floats back to integers
Integer_image = Denormalised_image.astype(int)
This greatly improved image quality as can be seen below:
However, lighting/color differences between corrected images are unfortunately still present.
Raw images can be found here, but do note that they are upside down.
Measured values of color cards in images:
IMG_4244.JPG
[[180, 251, 208], [62, 235, 255], [204, 216, 126], [30, 62, 97], [189, 194, 255], [86, 250, 255], [168, 151, 255], [68, 127, 167], [52, 173, 193], [111, 87, 211], [70, 244, 255], [116, 185, 228], [182, 199, 212], [102, 145, 254], [70, 102, 255], [153, 225, 255], [134, 214, 255], [200, 156, 169], [87, 224, 170], [186, 245, 255], [44, 126, 235], [45, 197, 254], [166, 101, 110], [224, 255, 252]]
IMG_4243.JPG
[[140, 219, 168], [24, 187, 255], [148, 166, 73], [17, 31, 53], [141, 146, 215], [42, 211, 219], [115, 101, 255], [33, 78, 111], [24, 118, 137], [63, 46, 151], [31, 203, 255], [67, 131, 172], [128, 147, 155], [61, 98, 255], [42, 59, 252], [111, 181, 221], [88, 168, 255], [139, 101, 113], [47, 176, 117], [139, 211, 253], [19, 78, 178], [12, 146, 254], [110, 60, 64], [164, 232, 255]]
IMG_4241.JPG
[[66, 129, 87], [0, 90, 195], [65, 73, 26], [9, 13, 18], [60, 64, 117], [20, 127, 135], [51, 38, 176], [15, 27, 39], [14, 51, 55], [21, 15, 62], [1, 112, 180], [29, 63, 87], [54, 67, 69], [20, 33, 179], [10, 12, 154], [38, 92, 123], [26, 81, 178], [58, 44, 46], [23, 86, 54], [67, 127, 173], [5, 26, 77], [2, 64, 194], [43, 22, 25], [84, 161, 207]]
IMG_4246.JPG
[[43, 87, 56], [2, 56, 141], [38, 40, 20], [3, 5, 6], [31, 31, 71], [17, 85, 90], [19, 13, 108], [7, 13, 20], [4, 24, 29], [8, 7, 33], [1, 68, 123], [14, 28, 46], [28, 34, 41], [6, 11, 113], [0, 1, 91], [27, 53, 83], [11, 44, 123], [32, 21, 23], [11, 46, 26], [32, 77, 115], [2, 12, 42], [0, 29, 128], [20, 9, 11], [49, 111, 152]]
Actual colors of the color card (the reference) are given at the top of this post and are in the same order as the values given for the images.
Edit 30-08-2020: I have applied @nicdall's comments:
# Remove color chips which are outside of the RGB range
New_reference = []
New_Actual_colors = []
for L, K in zip(Actual_colors, range(len(Actual_colors))):
    if any(m in L for m in [0, 255]):
        print(L, "value outside of range")
    else:
        New_reference.append(Reference[K])
        New_Actual_colors.append(Actual_colors[K])
In addition to this, I realized I was using a single pixel from each color chip, so I started taking 15 pixels per chip and averaging them to make sure the sample is well balanced. The code is too long to post here completely, but it goes in this direction (don't judge my bad coding here):
for i in Chip_list:
    R = round(sum([rotated_img[globals()[i][1], globals()[i][0]][0],
                   rotated_img[globals()[i][1] + 5, globals()[i][0]][0],
                   rotated_img[globals()[i][1] + 10, globals()[i][0]][0],
                   rotated_img[globals()[i][1], globals()[i][0] + 5][0],
                   rotated_img[globals()[i][1], globals()[i][0] + 10][0],
                   rotated_img[globals()[i][1] + 5, globals()[i][0] + 5][0],
                   rotated_img[globals()[i][1] + 10, globals()[i][0] + 10][0]])
              / 7)  # divided by the number of pixels summed up (7 in this snippet)
The result was disappointing, as the correction seemed to have gotten worse, but it is shown below:
New_reference = [[170, 189, 103], [161, 133, 8], [52, 52, 52], [177, 128, 133], [64, 188, 157], [85, 85, 85], [67, 108, 87], [108, 60, 94], [121, 122, 122], [157, 122, 98], [60, 54, 175], [160, 160, 160], [166, 91, 80], [70, 148, 70], [200, 200, 200], [68, 82, 115], [44, 126, 214], [150, 61, 56]]
#For Image: IMG_4243.JPG:
New_Actual_colors= [[139, 218, 168], [151, 166, 74], [16, 31, 52], [140, 146, 215], [44, 212, 220], [35, 78, 111], [25, 120, 137], [63, 47, 150], [68, 132, 173], [128, 147, 156], [40, 59, 250], [110, 182, 222], [141, 102, 115], [48, 176, 118], [140, 211, 253], [18, 77, 178], [12, 146, 254], [108, 59, 62]]
#The following values were omitted in IMG_4243:
[23, 187, 255] value outside of range
[115, 102, 255] value outside of range
[30, 203, 255] value outside of range
[61, 98, 255] value outside of range
[88, 168, 255] value outside of range
[163, 233, 255] value outside of range
I have started to approach the core of the problem, but I am not a mathematician; however, the correction itself seems to be the problem.
This is the color correction matrix for IMG_4243.JPG generated and utilized by the colour package:
CCM = colour.characterisation.colour_correction_matrix_Finlayson2015(New_Actual_colors, New_reference, degree=1, root_polynomial_expansion=True)
print(CCM)
[[ 1.10079803 -0.03754644 0.18525637]
[ 0.01519612 0.79700086 0.07502735]
[-0.11301282 -0.05022718 0.78838144]]
Based on what I understand from the colour package code, New_Actual_colors is converted with the CCM as follows:
Converted_colors = np.reshape(np.transpose(np.dot(CCM, np.transpose(New_Actual_colors))), shape)  # shape: the original shape of New_Actual_colors
When we compare Converted_colors with New_reference, we can see that the correction goes a long way, but differences are still present (the end goal is to convert New_Actual_colors with the color correction matrix (CCM) into Converted_colors, which should exactly match New_reference):
print("New_reference =",New_reference)
print("Converted_colors =",Converted_colors)
New_reference = [[170, 189, 103],[161, 133, 8],[52, 52, 52],[177, 128, 133],[64, 188, 157],[85, 85, 85],[67, 108, 87],[108, 60, 94],[121, 122, 122],[157, 122, 98],[60, 54, 175],[160, 160, 160],[166, 91, 80],[70, 148, 70],[200, 200, 200],[68, 82, 115],[44, 126, 214],[150, 61, 56]]
Converted_colors = [[176, 188, 106],[174, 140, 33],[26, 29, 38],[188, 135, 146],[81, 186, 158],[56, 71, 80],[48, 106, 99],[95, 50, 109],[102, 119, 122],[164, 131, 101],[88, 66, 190],[155, 163, 153],[173, 92, 70],[68, 150, 79],[193, 189, 173],[50, 75, 134],[55, 136, 192],[128, 53, 34]]
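As a sanity check that the matrix is applied as expected (with degree=1 the root-polynomial expansion reduces to a plain 3x3 linear transform), multiplying the first measured patch by the CCM printed above reproduces the first entry of Converted_colors; a minimal sketch using the values from this post:

import numpy as np

CCM = np.array([[ 1.10079803, -0.03754644,  0.18525637],
                [ 0.01519612,  0.79700086,  0.07502735],
                [-0.11301282, -0.05022718,  0.78838144]])

measured = np.array([139, 218, 168])            # first patch of New_Actual_colors
print(np.round(CCM @ measured).astype(int))     # [176 188 106], first entry of Converted_colors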
When subtracted, the differences become clear, and the question is how to overcome these differences:
list(np.array(New_reference) - np.array(Converted_colors))
[array([-6, 1, -3]),
array([-13, -7, -25]),
array([26, 23, 14]),
array([-11, -7, -13]),
array([-17, 2, -1]),
array([29, 14, 5]),
array([ 19, 2, -12]),
array([ 13, 10, -15]),
array([19, 3, 0]),
array([-7, -9, -3]),
array([-28, -12, -15]),
array([ 5, -3, 7]),
array([-7, -1, 10]),
array([ 2, -2, -9]),
array([ 7, 11, 27]),
array([ 18, 7, -19]),
array([-11, -10, 22]),
array([22, 8, 22])]
Here are a few recommendations:
As stated in my comment above, we had an implementation issue with the Root-Polynomial variant from Finlayson (2015), which should be fixed in the develop branch.
You are passing integer and encoded values to the colour.colour_correction definition. I would strongly recommend that you:
Convert the datasets to floating-point representation.
Scale it from range [0, 255] to range [0, 1].
Decode it with the sRGB EOTF.
Perform the colour correction onto that linear data.
Encode back and scale back to integer representation (see the sketch after this list).
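A minimal sketch of those steps (assuming img is a uint8 sRGB image and measured / reference are the colour-checker swatch lists; these names are illustrative, not from the original post) could look like this:

import numpy as np
import colour

# Floating-point representation, scaled from [0, 255] to [0, 1]
img_float = img.astype(np.float64) / 255
measured_float = np.asarray(measured, dtype=np.float64) / 255
reference_float = np.asarray(reference, dtype=np.float64) / 255

# Decode with the sRGB EOTF to obtain linear data
img_linear = colour.models.eotf_sRGB(img_float)
measured_linear = colour.models.eotf_sRGB(measured_float)
reference_linear = colour.models.eotf_sRGB(reference_float)

# Colour correction on the linear data
corrected = colour.colour_correction(img_linear, measured_linear, reference_linear, method="Finlayson 2015")

# Encode back to sRGB and scale back to integer representation
out = colour.models.eotf_inverse_sRGB(np.clip(corrected, 0, 1))
out = (out * 255).astype(np.uint8)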
Your images seem to be an exposure wedge; ideally, you would compute a single matrix for the appropriate reference exposure, normalise the other images' exposure to it, and apply the matrix to them.
An additional recommendation, more on the physical side of the problem: I see that some of the RGB values in the high- and low-exposure images are outside of the unsaturated range of the camera (0 and 255 values). This means that some information on the actual measured color is lost at the time of image capture, because some of the calibration patches are either over- or under-exposed. This is a known problem in RGB colorimetry, and it is actually mentioned in Finlayson (2015): "an additional assumption is that both v and kv are in the unsaturated range of the camera".
If possible, try to have a look at the histogram while you take the images so that all pixels have a value in the unsaturated range ([1, 254] at most).
Otherwise, if taking new images is out of the question, you can try ignoring the saturated patches (those that have either 0 or 255 in any of the R, G or B values) in the calibration process (make sure that you ignore the patches both in the image and in the reference). This could improve your calibration for the overall image, as you do not make your model fit saturated values.
I do not understand why the two cases below give different results.
I want to reorder over axis 1 (0-based counting), and only take the elements with index 0 on axis 3.
(Changing (1,2,0) to [1,2,0] or np.array((1,2,0)) makes no difference.)
>> w # (3,3,5,5) input
array([[[[206, 172, 9, 2, 43],
[232, 101, 85, 251, 150],
[247, 99, 6, 88, 100],
[250, 124, 244, 2, 73],
[ 49, 23, 227, 3, 125]],
[[110, 162, 246, 123, 110],
[ 67, 197, 87, 230, 29],
[110, 51, 79, 136, 155],
[ 86, 62, 121, 18, 113],
[ 59, 197, 149, 112, 172]],
[[198, 231, 137, 2, 238],
[ 47, 97, 94, 102, 206],
[ 1, 232, 189, 173, 75],
[207, 171, 40, 23, 102],
[243, 232, 13, 109, 26]]],
[[[114, 218, 50, 173, 95],
[ 92, 29, 170, 247, 42],
[ 75, 251, 65, 246, 231],
[151, 210, 79, 27, 175],
[105, 55, 224, 79, 4]],
[[172, 230, 0, 115, 38],
[ 10, 165, 169, 230, 163],
[159, 142, 15, 134, 124],
[ 91, 161, 19, 103, 214],
[102, 168, 181, 20, 75]],
[[ 78, 65, 245, 29, 155],
[ 40, 108, 198, 180, 231],
[202, 47, 60, 156, 183],
[210, 74, 18, 113, 148],
[231, 177, 240, 15, 200]]],
[[[ 28, 40, 169, 249, 218],
[ 96, 205, 3, 38, 106],
[229, 129, 78, 113, 13],
[243, 170, 186, 35, 74],
[111, 224, 132, 184, 23]],
[[ 21, 181, 126, 5, 42],
[135, 93, 133, 166, 111],
[ 85, 85, 31, 220, 124],
[ 61, 5, 94, 216, 135],
[ 4, 225, 204, 128, 115]],
[[ 63, 23, 122, 146, 140],
[245, 139, 76, 173, 12],
[ 31, 195, 239, 188, 254],
[253, 231, 187, 22, 15],
[ 59, 40, 61, 185, 216]]]], dtype=uint16)
>> w[:,(1,2,0),:,0:1] # case 1 without squeeze
array([[[[110],
[ 67],
[110],
[ 86],
[ 59]],
[[198],
[ 47],
[ 1],
[207],
[243]],
[[206],
[232],
[247],
[250],
[ 49]]],
[[[172],
[ 10],
[159],
[ 91],
[102]],
[[ 78],
[ 40],
[202],
[210],
[231]],
[[114],
[ 92],
[ 75],
[151],
[105]]],
[[[ 21],
[135],
[ 85],
[ 61],
[ 4]],
[[ 63],
[245],
[ 31],
[253],
[ 59]],
[[ 28],
[ 96],
[229],
[243],
[111]]]], dtype=uint16)
>> w[:,(1,2,0),:,0:1].squeeze() # case 1 with squeeze, for readability and comparability
array([[[110, 67, 110, 86, 59],
[198, 47, 1, 207, 243],
[206, 232, 247, 250, 49]],
[[172, 10, 159, 91, 102],
[ 78, 40, 202, 210, 231],
[114, 92, 75, 151, 105]],
[[ 21, 135, 85, 61, 4],
[ 63, 245, 31, 253, 59],
[ 28, 96, 229, 243, 111]]], dtype=uint16)
>> w[:,(1,2,0),:,0] # case 2: 0 index instead of length-1 slice 0:1
array([[[110, 67, 110, 86, 59],
[172, 10, 159, 91, 102],
[ 21, 135, 85, 61, 4]],
[[198, 47, 1, 207, 243],
[ 78, 40, 202, 210, 231],
[ 63, 245, 31, 253, 59]],
[[206, 232, 247, 250, 49],
[114, 92, 75, 151, 105],
[ 28, 96, 229, 243, 111]]], dtype=uint16)
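For reference, the same asymmetry can be reproduced on a much smaller array, where the differing dimensions make it easier to see. Per NumPy's documented advanced-indexing rules, the integer 0 in case 2 counts as an advanced index; since it is separated from (1,2,0) by a slice, the dimension produced by the advanced indices is moved to the front of the result, whereas the single advanced index in the 0:1 case stays in place (a minimal illustrative sketch, not from the original question):

import numpy as np

w = np.arange(48).reshape(2, 3, 4, 2)

a = w[:, (1, 2, 0), :, 0:1]   # one advanced index -> shape (2, 3, 4, 1), axis order kept
b = w[:, (1, 2, 0), :, 0]     # two advanced indices separated by a slice -> shape (3, 2, 4)

print(a.shape, b.shape)
print(np.array_equal(a.squeeze(), b.transpose(1, 0, 2)))   # True: same data, first two axes swapped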
I converted an image to a numpy array and it returned a 3D array instead of a 2D one (width and height).
My code is:
import PIL
from PIL import Image
import numpy as np
samp_jpg = "imgs_subset/w_1.jpg"
samp_img = Image.open(samp_jpg)
print samp_img.size
(3072, 2048)
I = np.asarray(samp_img)
I.shape
(2048, 3072, 3)
The 3D matrix looks like:
array([[[ 58, 95, 114],
[ 54, 91, 110],
[ 52, 89, 108],
...,
[ 48, 84, 106],
[ 50, 85, 105],
[ 51, 86, 106]],
[[ 63, 100, 119],
[ 61, 97, 119],
[ 59, 95, 117],
...,
[ 48, 84, 106],
[ 50, 85, 105],
[ 51, 86, 106]],
[[ 66, 102, 124],
[ 66, 102, 124],
[ 65, 101, 125],
...,
[ 48, 84, 106],
[ 50, 85, 105],
[ 51, 86, 106]],
...,
[[ 69, 106, 135],
[ 66, 103, 132],
[ 61, 98, 127],
...,
[ 49, 85, 111],
[ 51, 87, 113],
[ 53, 89, 115]],
[[ 59, 98, 127],
[ 57, 96, 125],
[ 56, 95, 124],
...,
[ 51, 85, 113],
[ 52, 86, 114],
[ 53, 87, 115]],
[[ 63, 102, 131],
[ 62, 101, 130],
[ 60, 101, 129],
...,
[ 53, 86, 117],
[ 52, 85, 116],
[ 51, 84, 115]]], dtype=uint8)
I'm wondering what the 3rd dimension means. It is an array of length 3 (each line in the output above).
Red, green and blue channels, naturally.
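If a 2D (height x width) array is what you were after, a minimal sketch (reusing the path from the question) could pick one channel or convert to grayscale first:

from PIL import Image
import numpy as np

samp_img = Image.open("imgs_subset/w_1.jpg")
I = np.asarray(samp_img)

red = I[:, :, 0]                            # the red channel only, shape (2048, 3072)
gray = np.asarray(samp_img.convert("L"))    # single-channel grayscale, shape (2048, 3072)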