When I look at the imread function in skimage.io, the documentation doesn't say what calculation is used when as_grey=True is set. Is there a way to find out what calculation is going on behind the scenes?
Link to lib:
(http://scikit-image.org/docs/dev/api/skimage.io.html#skimage.io.imread)
Text from link above:
as_grey : bool
If True, convert color images to grey-scale (64-bit floats). Images that are already in grey-scale format are not converted.
Example:
RGB - [108 123 128]
When I use convert('L'), it gets converted to 119, and that is in line with the formula in this post: How can I convert an RGB image into grayscale in Python?
But when I use imread(img, as_grey=True), it gives me a value of 0.47126667, which is lower than what I get by dividing 119 by the maximum pixel value in that image to bring the values onto a 0-1 scale.
If you want to view the results, here's sample code:
from __future__ import division
from PIL import Image
from skimage.io import imread
import numpy as np

image_open = Image.open(image).convert('L')
np_image_open = np.array(image_open)
print (np_image_open[:10,0])
print (np_image_open[:10,0]/np.max(np_image_open))
image_open = imread(image, as_grey = True)
print (image_open[:10,0])
image_open = imread(image)
print (image_open[:10,0])
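For reference, the difference comes from the two libraries using different luma weights: PIL's convert('L') uses the ITU-R 601-2 transform, while skimage's rgb2gray (which as_grey=True relies on) uses the ITU-R BT.709 weights on an image rescaled to 0-1. A small sketch reproducing both numbers for the example pixel:
r, g, b = 108, 123, 128

# PIL convert('L'): L = R * 299/1000 + G * 587/1000 + B * 114/1000 (ITU-R 601-2)
print((r * 299 + g * 587 + b * 114) // 1000)           # 119

# skimage rgb2gray: Y = 0.2125 R + 0.7154 G + 0.0721 B, on values scaled to [0, 1]
print((0.2125 * r + 0.7154 * g + 0.0721 * b) / 255)    # ~0.47126667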
I am experimenting with PIL and trying to analyze the image I attached. Since my goal is to eventually be able to recognize it via neural networks, I expect all pixels to have different intensities and therefore different values ranging from 0 to 255. I am not sure why, but every single pixel of this image reads as 255. How so? What exactly am I doing wrong?
import numpy as np
from PIL import Image

img = Image.open(r'key_1.jpg')
print(img.format)
print(img.size)
print(img.mode)
img  # displays the image (in a notebook)
img_sequence = img.getdata()
img_array = np.array(img_sequence)
print(img_array)  # all pixels = 255
Actually, they are not. You are only seeing the first and last few rows of the array, where the pixels are white, which is why the printout shows 255. If you display the same array as an image with PIL/OpenCV, it will render properly.
However, you can see all the values of the NumPy array using this method:
img_sequence = img.getdata()
img_array = np.array(img_sequence)
np.set_printoptions(threshold=np.inf)
print(img_array)
[Sample screenshot of a random part of the output for the same image.]
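If printing the full array is unwieldy, a quicker sanity check on the same img_array is to look at the value range and counts directly:
print(img_array.min(), img_array.max())                    # min/max show whether it is really all 255
print(np.count_nonzero(img_array == 255), img_array.size)  # how many pixels actually equal 255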
I want to create a jpg image of size 343 by 389 (height by width) with the row index as the pixel value. For example, the whole topmost row of pixels should have the value 1, the next row of pixels the value 2, and so on down to the last row with the value 343. I then want to export that image in jpg format. How can I do this, either in Python or in MATLAB?
In MATLAB
A solution in MATLAB using the meshgrid() function may work. An important step is to cast the double array Image to an unsigned 8-bit integer array with uint8 before exporting it as a .jpg using the imwrite() function. Note that uint8 saturates: rows 256 through 343 all come out as 255 in the 8-bit image.
[~,Image] = meshgrid((1:389),(1:343));
imwrite(uint8(Image),"Depth.jpg");
I did it like this:
from PIL import Image
import numpy as np

a = np.empty(shape=(343, 389), dtype=int)
for i in range(343):
    for j in range(389):
        a[i, j] = i  # row index as the pixel value
# 8-bit grayscale only holds 0-255, so clip and cast before handing the array to PIL
im = Image.fromarray(np.clip(a, 0, 255).astype(np.uint8), 'L')
im.save('depth.jpg')
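The same image can also be built without explicit Python loops; here is a sketch using NumPy broadcasting (with the same caveat that values above 255 get clipped by the 8-bit format):
from PIL import Image
import numpy as np

# each row takes its 1-based row index (1..343) as its pixel value
rows = np.arange(1, 344, dtype=np.int32).reshape(-1, 1)
a = np.broadcast_to(rows, (343, 389))
im = Image.fromarray(np.clip(a, 0, 255).astype(np.uint8), 'L')
im.save('depth.jpg')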
I get an image stored as an object from a camera that looks like this (here reduced to make it understandable):
image = np.array([['#49312E', '#4A3327', '#493228', '#472F2A'],
                  ['#452C29', '#49312E', '#4B3427', '#49312A'],
                  ['#473026', '#472F2C', '#48302B', '#4C342B']])
Is it possible to 'import' it as an image in OpenCV? I tried looking at the documentation of cv2.imdecode but could not get it to work. I could preprocess this array into another format, but I am not sure what would 'fit' OpenCV.
Thank you for your help
This is a very succinct and Pythonic (NumPy-based) way to convert your matrix of hexadecimal values into an RGB matrix that OpenCV can read.
import cv2
import numpy as np

image = np.array([['#49312E', '#4A3327', '#493228', '#472F2A'],
                  ['#452C29', '#49312E', '#4B3427', '#49312A'],
                  ['#473026', '#472F2C', '#48302B', '#4C342B']])

def to_rgb(v):
    # '#RRGGBB' -> [R, G, B] integers
    return np.array([int(v[1:3], 16), int(v[3:5], 16), int(v[5:7], 16)])

image_cv = np.array([to_rgb(h) for h in image.flatten()]).reshape(3, 4, 3).astype(np.uint8)
# OpenCV stores channels as BGR, so reverse the last axis before writing
cv2.imwrite('result.png', image_cv[:, :, ::-1])
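Equivalently, OpenCV can reorder the channels itself rather than slicing; a short follow-up assuming the image_cv array built above:
image_bgr = cv2.cvtColor(image_cv, cv2.COLOR_RGB2BGR)  # image_cv is the (3, 4, 3) uint8 RGB array from above
cv2.imwrite('result.png', image_bgr)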
OpenCV works with 8-bit BGR images by default, which is to say you need to give the values of Blue, Green and Red on a scale of 0-255. I have shared code below to convert your array to such an image.
Initially, I count the number of rows to find the height in terms of pixels. Then I count the number of items in a row to find the width.
Then I create an empty array of the given dimensions using np.zeros.
I then go to each cell and convert the hex code to its RGB equivalent: for a colour #RRGGBB, R = int(RR, 16), G = int(GG, 16) and B = int(BB, 16), where int(..., 16) parses each two-character hexadecimal string as an integer.
#!/usr/bin/env python3
import numpy as np
import cv2

# Your image
image = np.array([['#49312E', '#4A3327', '#493228', '#472F2A'],
                  ['#452C29', '#49312E', '#4B3427', '#49312A'],
                  ['#473026', '#472F2C', '#48302B', '#4C342B']])

# Derive the image height (rows) and width (columns)
height = len(image)
width = len(image[0])

# Create numpy array of BGR triplets
im = np.zeros((height, width, 3), dtype=np.uint8)

for row in range(height):
    for col in range(width):
        hex_code = image[row, col][1:]   # strip the leading '#'
        R = int(hex_code[0:2], 16)
        G = int(hex_code[2:4], 16)
        B = int(hex_code[4:6], 16)
        im[row, col] = (B, G, R)         # OpenCV channel order is BGR

# Save to disk
cv2.imwrite('result.png', im)
With the newest version of SciPy, the bytescale() function has been removed, so I tried an alternative with scikit-image. While bytescale() (SciPy) converts a uint16 image (.tif) to a correctly scaled uint8 image, util.img_as_ubyte() (skimage) returns an image in which the highest grey value is 8 and the lowest is 0, instead of 255 and 0.
I need a uint8 image for further image processing (Otsu thresholding and Canny edge detection), and everything works perfectly with bytescale from SciPy, but as soon as I use the scikit-image version, everything gets messy.
The code snippet is as follows:
...
import numpy as np
from scipy.misc import bytescale
from skimage import io, util

def convertToByteImg(imagePath):
    image = io.imread(imagePath)
    img1 = util.img_as_ubyte(image)
    img2 = bytescale(image)
...
bytescale always expands the range of your input image to the maximum range of uint8 (0-255). This is sometimes what you want, but sometimes not, because you lose information about the relative ranges of different images in your dataset. img_as_ubyte, by contrast, rescales the full 16-bit range [0, 65535] to [0, 255]; since your output maxes out at 8, your input image must have a range of approximately [0, 2048] (2048 / 65535 × 255 ≈ 8).
Assuming you want to expand your input image to the full range, you need skimage.exposure.rescale_intensity:
from skimage import io, util, exposure
# narrow range 16-bit
image = io.imread(image_path)
# full range 16-bit
image_rescaled = exposure.rescale_intensity(image)
# full range 8-bit
image_uint8 = util.img_as_ubyte(image_rescaled)
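If you still need a drop-in stand-in for the removed bytescale, here is a minimal sketch of its default behaviour (stretching the data's own min/max to 0-255; simplified, without the cmin/cmax/high/low options of the original):
import numpy as np

def bytescale_like(img):
    # Linearly map [img.min(), img.max()] to [0, 255] and cast to uint8,
    # approximating scipy.misc.bytescale's no-argument behaviour
    img = np.asarray(img, dtype=np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:
        return np.zeros(img.shape, dtype=np.uint8)
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)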
I am using SimpleITK to process MRI images in .mha format, which I subsequently convert into a NumPy array. I am able to visualize the images using matplotlib. However, if I perform any preprocessing, or if I multiply the image by its binary mask, all I get is a blank image. Is there something I am missing? My simplified code is shown below.
import SimpleITK as sitk
import numpy as np
from matplotlib import pyplot as plt
input_image = sitk.ReadImage('MRI.mha')
input_array = sitk.GetArrayFromImage(input_image)
plt.imshow(input_array[0,:,:],cmap = 'gray') # I get an image for this. No preprocessing has been performed.
plt.show()
# However, if I replace input_array after preprocessing, I get a black square.
I think this has something to do with the range of the data, but I am not able to pinpoint where. The image visualized before preprocessing has a maximum value of 744. After preprocessing, this drops down to 4, and that is when problems crop up. Any pointers to where I might be going wrong?
You should check your image pixel type before any processing. The MRI volume you are testing on has a sitkInt32 (signed 32-bit integer) pixel type, so there is a high chance that your processing (e.g. your division operations) turns the pixel values into zeros, leaving you with black images.
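As a minimal illustration of how integer arithmetic can zero out an image (a toy example, not the poster's actual preprocessing):
import numpy as np

a = np.array([744, 4, 1], dtype=np.int32)
a //= a.max()  # in-place integer division floors every value below the max to 0
print(a)       # [1 0 0]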
You can either cast your image to float using SimpleITK:
input_image = sitk.ReadImage('MRI.mha')
print(input_image.GetPixelIDTypeAsString())
input_image = sitk.Cast(input_image,sitk.sitkFloat32)
input_array = sitk.GetArrayFromImage(input_image)
or change your numpy array data type before processing:
input_image = sitk.ReadImage('MRI.mha')
input_array = sitk.GetArrayFromImage(input_image)
input_array = input_array.astype(np.float32)
Read more about pixel types in the SimpleITK Image Basics notebook.