I have a 4-D array where each element is itself a 3-D array; for example, a[0] looks like this:
array([[[135, 105, 95],
[109, 78, 60],
[101, 78, 54],
...,
[ 32, 21, 22],
[ 32, 21, 23],
[ 35, 28, 31]],
[[144, 119, 107],
[117, 87, 68],
[115, 94, 74],
...,
[ 32, 21, 22],
[ 33, 22, 24],
[ 33, 22, 26]],
[[145, 127, 113],
[140, 116, 102],
[128, 104, 87],
...,
[ 29, 22, 20],
[ 28, 21, 19],
[ 33, 23, 20]],
...,
[[105, 70, 62],
[109, 81, 75],
[142, 123, 117],
...,
[ 52, 41, 39],
[ 62, 49, 47],
[ 52, 38, 33]],
[[ 90, 55, 50],
[ 96, 67, 65],
[133, 111, 108],
...,
[ 45, 37, 34],
[ 48, 36, 32],
[ 48, 37, 30]],
[[129, 111, 106],
[124, 103, 101],
[116, 94, 90],
...,
[ 50, 40, 35],
[ 53, 39, 35],
[ 48, 37, 32]]], dtype=uint8)
Every array in the 4-D array represents an image (its pixels). I want to calculate the kurtosis for every array in the 4-D array using a loop. Could someone please help me with this?
Thanks in advance for your help.
Without having an example, you could try something similar to this:
from scipy.stats import kurtosis
k = []
for elem in a:
    k.append(kurtosis(elem))
This will output an array per image (by default, kurtosis() computes along axis 0). If you want a single number per image instead, set axis=None when calling kurtosis() so the image is flattened first.
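A minimal sketch of both variants (assuming `a` is a 4-D uint8 stack of shape `(n_images, height, width, channels)`; the random array here is a hypothetical stand-in for the data in the question):

```python
import numpy as np
from scipy.stats import kurtosis

# Stand-in for the 4-D image stack: 4 images of 8x8 RGB pixels
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(4, 8, 8, 3), dtype=np.uint8)

# axis=None flattens each image, so each call returns a single scalar
k = [kurtosis(img, axis=None) for img in a]

print(len(k))  # 4: one kurtosis value per image
```

With the default `axis=0`, `kurtosis(img)` would instead return a `(8, 3)` array for each image here, one value per column and channel.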
Why is there a difference between the pixel values if I open an image with skimage.io.imread(image) and tf.image.decode_image(image)?
For example:
import skimage.io
original_img = skimage.io.imread("test.jpg")
print(original_img.shape)
print(np.amax(original_img))
print(original_img)
The output is:
(110, 150, 3)
255
array([[[ 29, 65, 117],
[ 45, 43, 90],
[ 78, 39, 68],
...,
[ 30, 46, 95],
[ 30, 43, 96],
[ 31, 44, 97]],
[[ 41, 54, 89],
[ 95, 89, 123],
[ 57, 39, 65],
...,
[ 32, 46, 91],
[ 32, 46, 95],
[ 32, 45, 97]],
[[ 62, 49, 69],
[ 84, 76, 97],
[ 68, 70, 95],
...,
[ 18, 30, 70],
[ 35, 47, 95],
[ 34, 47, 99]],
...,
[[136, 124, 22],
[144, 136, 53],
[134, 123, 44],
...,
[ 16, 74, 16],
[ 39, 89, 52],
[ 53, 108, 69]],
[[161, 125, 5],
[149, 129, 42],
[129, 116, 48],
...,
[ 67, 119, 73],
[ 39, 80, 48],
[ 33, 69, 41]],
[[196, 127, 6],
[160, 111, 32],
[141, 108, 55],
...,
[ 26, 56, 32],
[ 8, 29, 10],
[ 12, 24, 12]]], dtype=uint8)
And if I open the same image with Tensorflow:
import tensorflow as tf
original_img = tf.image.decode_image(tf.io.read_file("test.jpg"))
print(np.amax(original_img))
print(original_img)
The output is:
255
<tf.Tensor: shape=(110, 150, 3), dtype=uint8, numpy=
array([[[ 44, 57, 101],
[ 40, 42, 80],
[ 65, 41, 65],
...,
[ 25, 42, 88],
[ 33, 49, 100],
[ 25, 41, 92]],
[[ 47, 53, 89],
[ 96, 95, 127],
[ 60, 44, 70],
...,
[ 29, 43, 88],
[ 40, 54, 103],
[ 19, 35, 84]],
[[ 59, 54, 74],
[ 72, 69, 90],
[ 70, 70, 96],
...,
[ 23, 35, 77],
[ 16, 29, 74],
[ 50, 64, 111]],
...,
[[145, 116, 24],
[161, 131, 43],
[141, 113, 30],
...,
[ 19, 67, 19],
[ 49, 95, 58],
[ 53, 97, 64]],
[[164, 119, 16],
[166, 123, 28],
[143, 108, 27],
...,
[ 73, 119, 80],
[ 29, 68, 37],
[ 39, 75, 47]],
[[182, 128, 20],
[160, 112, 14],
[149, 112, 32],
...,
[ 11, 57, 21],
[ 7, 44, 13],
[ 0, 14, 0]]], dtype=uint8)>
I have also noticed that if I open an image with TensorFlow, make some changes to it, save it to disk, and open it again with tf.image.decode_image(image), the pixel values are different again, but this time not by as much.
This is due to the algorithm used for JPEG decompression. By default, a system-specific method is used, and tf.image.decode_image() does not provide any way to change it.
tf.image.decode_jpeg(), however, has a dct_method argument that selects the decompression method. Currently there are two valid values: INTEGER_FAST and INTEGER_ACCURATE.
If you open the image in the following way you should have the same output as with skimage.io.imread(image):
original_img = tf.image.decode_jpeg(tf.io.read_file("test.jpg"), dct_method="INTEGER_ACCURATE")
import numpy as np
from skimage import io
import matplotlib.pyplot as plt
jeju = io.imread('jeju.jpg')
jeju.shape
> (960,1280,3)
jeju
> array([[[171, 222, 251],
[172, 223, 252],
[172, 223, 252],
...,
[124, 189, 255],
[121, 189, 254],
[120, 188, 253]],
[[173, 224, 253],
[173, 224, 253],
[173, 224, 253],
...,
[124, 189, 255],
[122, 190, 255],
[121, 189, 254]],
[[174, 225, 254],
[174, 225, 254],
[175, 226, 255],
...,
[125, 190, 255],
[122, 190, 255],
[122, 190, 255]],
...,
[[ 66, 93, 26],
[ 89, 114, 46],
[ 49, 72, 2],
...,
[ 2, 29, 0],
[ 34, 59, 17],
[ 40, 63, 21]],
[[ 44, 71, 4],
[ 23, 50, 0],
[ 29, 52, 0],
...,
[ 40, 67, 22],
[ 0, 19, 0],
[ 16, 41, 0]],
[[ 29, 58, 0],
[ 44, 71, 2],
[ 84, 110, 37],
...,
[ 17, 44, 1],
[ 33, 60, 17],
[ 18, 43, 1]]], dtype=uint8)
plt.imshow(jeju)
plt.imshow(jeju[:,:,0])
jeju[:,:,0]
> array([[171, 172, 172, ..., 124, 121, 120],
[173, 173, 173, ..., 124, 122, 121],
[174, 174, 175, ..., 125, 122, 122],
...,
[ 66, 89, 49, ..., 2, 34, 40],
[ 44, 23, 29, ..., 40, 0, 16],
[ 29, 44, 84, ..., 17, 33, 18]], dtype=uint8)
---------------------------------------------
As above, I read a picture from a directory and indexed it to make the picture red.
Because (960, 1280, 3) from jeju.shape is (height, width, rgb), I thought that if I used [:,:,0], the 0 meant red (i.e. r=0, g=1, b=2).
But the result was not a red picture but a picture full of green and blue.
Why did this happen? What does [:,:,0] really mean?
You are right that it selects the red channel. However, jeju[:,:,0] is a 2-D array, and for a 2-D array the official documentation of imshow states: "The values are mapped to colors using normalization and a colormap." So you see the default colormap, not shades of red.
If you want to plot your red channel only, you can do this:
red_image = np.zeros(np.shape(jeju))
red_image[:, :, 0] = jeju[:, :, 0]
plt.imshow(red_image.astype('uint8'))
I do not understand why the two cases below give different results.
I want to reorder over axis 1 (0-based counting), and only take the elements with index 0 on axis 3.
(Changing (1,2,0) to [1,2,0] or np.array((1,2,0)) makes no difference.)
>> w # (3,3,5,5) input
array([[[[206, 172, 9, 2, 43],
[232, 101, 85, 251, 150],
[247, 99, 6, 88, 100],
[250, 124, 244, 2, 73],
[ 49, 23, 227, 3, 125]],
[[110, 162, 246, 123, 110],
[ 67, 197, 87, 230, 29],
[110, 51, 79, 136, 155],
[ 86, 62, 121, 18, 113],
[ 59, 197, 149, 112, 172]],
[[198, 231, 137, 2, 238],
[ 47, 97, 94, 102, 206],
[ 1, 232, 189, 173, 75],
[207, 171, 40, 23, 102],
[243, 232, 13, 109, 26]]],
[[[114, 218, 50, 173, 95],
[ 92, 29, 170, 247, 42],
[ 75, 251, 65, 246, 231],
[151, 210, 79, 27, 175],
[105, 55, 224, 79, 4]],
[[172, 230, 0, 115, 38],
[ 10, 165, 169, 230, 163],
[159, 142, 15, 134, 124],
[ 91, 161, 19, 103, 214],
[102, 168, 181, 20, 75]],
[[ 78, 65, 245, 29, 155],
[ 40, 108, 198, 180, 231],
[202, 47, 60, 156, 183],
[210, 74, 18, 113, 148],
[231, 177, 240, 15, 200]]],
[[[ 28, 40, 169, 249, 218],
[ 96, 205, 3, 38, 106],
[229, 129, 78, 113, 13],
[243, 170, 186, 35, 74],
[111, 224, 132, 184, 23]],
[[ 21, 181, 126, 5, 42],
[135, 93, 133, 166, 111],
[ 85, 85, 31, 220, 124],
[ 61, 5, 94, 216, 135],
[ 4, 225, 204, 128, 115]],
[[ 63, 23, 122, 146, 140],
[245, 139, 76, 173, 12],
[ 31, 195, 239, 188, 254],
[253, 231, 187, 22, 15],
[ 59, 40, 61, 185, 216]]]], dtype=uint16)
>> w[:,(1,2,0),:,0:1] # case 1 without squeeze
array([[[[110],
[ 67],
[110],
[ 86],
[ 59]],
[[198],
[ 47],
[ 1],
[207],
[243]],
[[206],
[232],
[247],
[250],
[ 49]]],
[[[172],
[ 10],
[159],
[ 91],
[102]],
[[ 78],
[ 40],
[202],
[210],
[231]],
[[114],
[ 92],
[ 75],
[151],
[105]]],
[[[ 21],
[135],
[ 85],
[ 61],
[ 4]],
[[ 63],
[245],
[ 31],
[253],
[ 59]],
[[ 28],
[ 96],
[229],
[243],
[111]]]], dtype=uint16)
>> w[:,(1,2,0),:,0:1].squeeze() # case 1 with squeeze, for readability and comparability
array([[[110, 67, 110, 86, 59],
[198, 47, 1, 207, 243],
[206, 232, 247, 250, 49]],
[[172, 10, 159, 91, 102],
[ 78, 40, 202, 210, 231],
[114, 92, 75, 151, 105]],
[[ 21, 135, 85, 61, 4],
[ 63, 245, 31, 253, 59],
[ 28, 96, 229, 243, 111]]], dtype=uint16)
>> w[:,(1,2,0),:,0] # case 2: 0 index instead of length-1 slice 0:1
array([[[110, 67, 110, 86, 59],
[172, 10, 159, 91, 102],
[ 21, 135, 85, 61, 4]],
[[198, 47, 1, 207, 243],
[ 78, 40, 202, 210, 231],
[ 63, 245, 31, 253, 59]],
[[206, 232, 247, 250, 49],
[114, 92, 75, 151, 105],
[ 28, 96, 229, 243, 111]]], dtype=uint16)
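What you are seeing is NumPy's documented rule for mixing advanced and basic indexing. In case 1, `(1,2,0)` is the only advanced index (both `:` and `0:1` are slices), so its result axis stays in place. In case 2, the scalar `0` is a second advanced index, and because the two advanced indices are separated by the slice `:`, NumPy broadcasts them together and moves the resulting axis to the front, which is why case 2 looks transposed relative to case 1. A small sketch with a synthetic array of the same shape:

```python
import numpy as np

# Synthetic stand-in with the same (3, 3, 5, 5) shape as w in the question
w = np.arange(3 * 3 * 5 * 5).reshape(3, 3, 5, 5)

# Case 1: one advanced index, (1, 2, 0); 0:1 is a slice, so the advanced
# result axis stays in its original position -> shape (3, 3, 5, 1)
case1 = w[:, (1, 2, 0), :, 0:1]

# Case 2: two advanced indices, (1, 2, 0) and the scalar 0, separated by
# the slice ':' -> their broadcast axis is moved to the front -> (3, 3, 5)
case2 = w[:, (1, 2, 0), :, 0]

print(case1.shape)  # (3, 3, 5, 1)
print(case2.shape)  # (3, 3, 5)

# Case 2 is case 1 squeezed with its first two axes swapped
print(np.array_equal(case2, case1.squeeze().transpose(1, 0, 2)))  # True
```

So `0:1` keeps the ordering you expected, while the bare `0` triggers the "advanced indices separated by a slice" rule and reorders the result axes.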
I have been experimenting with a Python script that scales images by 2 times, and it is working fine, but the problem is how to store the resulting image on my disk so I can compare the results before and after.
import cv2
import numpy as np
img = cv2.imread('input.jpg')
res = cv2.resize(img,None,fx=2, fy=2, interpolation = cv2.INTER_CUBIC)
The result is stored in the res variable, but it should be saved as a new image. How?
My desired output is result.jpg.
This is what I got when I printed res:
>>> res
array([[[ 39, 43, 44],
[ 40, 44, 44],
[ 41, 45, 46],
...,
[ 54, 52, 52],
[ 52, 50, 50],
[ 51, 49, 49]],
[[ 38, 42, 44],
[ 39, 43, 44],
[ 41, 45, 46],
...,
[ 55, 53, 53],
[ 54, 52, 52],
[ 53, 51, 51]],
[[ 37, 40, 43],
[ 38, 41, 44],
[ 40, 43, 46],
...,
[ 58, 56, 55],
[ 56, 54, 54],
[ 56, 53, 53]],
...,
[[ 52, 135, 94],
[ 54, 137, 95],
[ 59, 141, 99],
...,
[ 66, 139, 101],
[ 62, 135, 96],
[ 60, 133, 94]],
[[ 47, 131, 89],
[ 49, 133, 91],
[ 55, 138, 96],
...,
[ 56, 129, 91],
[ 54, 127, 89],
[ 54, 127, 88]],
[[ 44, 128, 86],
[ 47, 130, 88],
[ 53, 136, 94],
...,
[ 50, 123, 85],
[ 50, 123, 85],
[ 50, 123, 85]]], dtype=uint8)
You can use the imwrite function.
You can find the description of this function in the OpenCV documentation.