Change image interleave to BIL - python

I am working with AVIRIS Classic data, which has an interleave of BIP (Band Interleaved by Pixel). I want to convert the interleave to BIL (Band Interleaved by Line). In the image processing language IDL you can do this using the function CONVERT_DOIT, but that requires proprietary software. Are there any Python libraries with a function to carry out this task?

I am completely unfamiliar with AVIRIS data and its processing, so there may be much simpler or better methods of accessing it of which I am unaware. However, I found and downloaded a smallish sample from the linked website.
Reading the .hdr file (which, fortunately, is plain ASCII), I was able to work out that the data are signed 16-bit integers, band-interleaved-by-pixel, with 224 bands, 735 samples per line and 2017 lines. So I can load the image and process it with NumPy as follows:
import numpy as np
from PIL import Image
# Define datafile parameters
channels, samples, lines = 224, 735, 2017
# Load the raw BIP file and reshape to (lines, samples, channels)
im = np.fromfile('f090710t01p00r11rdn_b_ort_img', dtype=np.int16).reshape(lines, samples, channels)
The data are signed integers in the range -32768..+32767, so if we add 32768 the data will be in the range 0..65535, and multiplying by 255/65535 should give a viewable, though not radiometrically correct, image to prove we have read the file correctly:
# That's kind of all - just do crude scaling to show we have read it correctly
a = (im.astype(np.float32) + 32768.0) * 255.0 / 65535.0
Now select band 0, and save (using PIL, but we could use OpenCV or tifffile):
Image.fromarray(a[:,:,0].astype(np.uint8)).save('result.png')
Presumably you can now arrange the data however you like with Numpy as we have read it successfully. So, say you want line 7 of band 4, you could use:
a[7,:,4]
Or line 0, all bands:
a[0,:,:]
If you wanted to make a false colour composite from 3 of the 224 bands, you can use np.dstack() like this - my bands were chosen at random:
FalseColour = np.dstack((a[...,37], a[...,164], a[...,200])).astype(np.uint8)
Image.fromarray(FalseColour).save('result.png')
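Coming back to the original question of changing the interleave: once the cube has been read as above, the axis order can be changed with NumPy and written back out. This is only a minimal sketch with plain NumPy (the output filename is an example, and the accompanying .hdr file would still need its interleave field changed to bil by hand):
import numpy as np
channels, samples, lines = 224, 735, 2017
# BIP on disk -> shape (lines, samples, channels)
bip = np.fromfile('f090710t01p00r11rdn_b_ort_img', dtype=np.int16).reshape(lines, samples, channels)
# BIL stores, for each line, all samples of band 0, then band 1, ...
# i.e. shape (lines, channels, samples) -- so swap the last two axes
bil = np.ascontiguousarray(np.transpose(bip, (0, 2, 1)))
# Write the reordered cube back to disk as raw BIL
bil.tofile('f090710t01p00r11rdn_b_ort_img_bil')
For files too large to hold in memory, the same reordering could be done line by line, or via np.memmap.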
Keywords: Python, AVIRIS, hyperspectral image processing, hyper-spectral, band-interleaved by pixel, by line, planar, interlace.

Related

How do I create image from binary data BSQ?

I've got a problem. I'm trying to create an image from binary data which I got from a hyperspectral camera. The file I have is in BSQ uint16 format. From the documentation I found out that the images contained in the file (.dat) have a resolution of 1024x1024 and there are 24 images in total. The goal is to form a kind of "cube" which I want to use in the future to create a multi-layered orthomosaic.
I would also like to add that I am completely new to Python, but I try to keep up to date with everything I need. I hope that everything I have written is clear and understandable.
At first I tried to use the NumPy library to create a 3D array, but ended up with an arrangement of random pixels.
from PIL import Image
import numpy as np
file = open('Sequence 1_000021.dat', 'rb')
myarray = np.fromfile(file, dtype=np.uint16)
print('Size of new array', ":", len(myarray))
con_array = np.reshape(myarray, (24, 1024, 1024), 'C')
naPIL = Image.fromarray(con_array[1, :, :])
naPIL.save('naPIL.tiff')
The result: an image of seemingly random pixels (screenshot omitted).
Example of the image I want to achieve (thumbnail omitted).
As suspected, it's just byte order. I get a sensible-looking image when running the following code in a Jupyter notebook:
import numpy as np
from PIL import Image
# open as big-endian, convert to native order, then reshape as appropriate
raw = np.fromfile(
'./Sequence 1_000021.dat', dtype='>u2'
).astype('uint16').reshape((24, 1024, 1024))
# display inline
Image.fromarray(raw[1,:,:])
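If the goal is to keep the whole 24-band cube together for later mosaicking, one option (just a suggestion, using the tifffile package) is to write it out as a multi-page TIFF, one page per band:
import numpy as np
import tifffile
# Read the big-endian BSQ data and reshape to (bands, rows, cols)
cube = np.fromfile('./Sequence 1_000021.dat', dtype='>u2').astype('uint16').reshape((24, 1024, 1024))
# Write all 24 bands to a single multi-page TIFF (one page per band)
tifffile.imwrite('Sequence 1_000021.tiff', cube)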

How can I visualize a large file read in numpy memmap format?

I am trying to read CZI format images, but because they need a lot of memory I tried reading them into a memmap file.
Here is the code I used:
import czifile as czi
fileName = "Zimt3.czi"
# read file to binary
file = czi.CziFile(fileName)
imageArr = file.asarray(out="/media/my drive/Temp/temp.bin")
Now imageArr is a memmap variable with dimensions of (9, 3, 29584, 68084, 1). These are high-resolution microscopic images from a Carl Zeiss device.
Here is a screenshot with more specifications.
I think this means that imageArr contains 9 images with dimensions of (29584, 68084, 3),
but I can't extract that kind of NumPy array to visualize as an image.
Can you please help me convert the (9, 3, 29584, 68084, 1) memmap into (29584, 68084, 3) images?
It looks like a very large file. If you just want to visualize it, you can use the slideio Python package (http://slideio.com). It makes use of internal image pyramids: you can read part of the image at high resolution, or the whole image at low resolution. The code below rescales the image so that the width of the delivered raster will be 500 pixels (the height is computed to keep the aspect ratio).
import slideio
import matplotlib.pyplot as plt
# open_slide takes the file path and the driver name ("CZI" for Zeiss files)
slide = slideio.open_slide("Zimt3.czi", "CZI")
scene = slide.get_scene(0)
block = scene.read_block(size=(500, 0))
plt.imshow(block)
Be aware that matplotlib can only display images with 1 or 3 channels, while a CZI file can have an arbitrary number of channels. In that case you have to select which channels you want to display:
block = scene.read_block(size=(500,0), channel_indices=[0,2,5])
Another problem with visualization arises if your file is a 3D or 4D image. In this case slideio returns a 3D or 4D numpy array, which matplotlib cannot display. You will need to look for a specific visualization package, or select a z-slice and/or time-frame:
block = scene.read_block(size=(500,0), channel_indices=[0,2,5], slices=(0,1), frames=(1,2))
For more details see the package documentation.

From Raw binary image data to PNG in Python

After searching for a few hours, I ended up on this link. A little background information follows.
I'm capturing live frames of a running embedded device via a hardware debugger. The captured frames are stored as raw binary files, without headers or any format. After looking at the above link and gaining an admittedly perfunctory understanding of NumPy and Matplotlib, I was able to convert the raw binary data to an image successfully. This is important because I'm not sure a link to the raw binary file would help anyone.
I use this code:
import matplotlib.pyplot as plt
import numpy as np
iFile = "FramebufferL0_0.bin"   # Layer-A
shape = (430, 430)              # height and width of the image
dtype = np.dtype('<u2')         # unsigned 16-bit, little-endian
oFile = "FramebufferL0_0.png"
fid = open(iFile, 'rb')
data = np.fromfile(fid, dtype)
image = data.reshape(shape)
plt.imshow(image, cmap="gray")
plt.savefig(oFile)
plt.show()
Now, the image I'm showing is black and white because the color map is gray-scale (right?). The actual captured frame is NOT black and white. That is, the image I see on my embedded device is "colorful".
My question is: how can I calculate the actual color of each pixel from the raw binary file? Is there a way I can get the actual color map of the image from the raw binary? I looked into this example and I'm sure that, if I'm able to calculate the R, G and B channels (and alpha too), I'll be able to recreate the exact image. Example code would be of much help.
An RGBA image has 4 channels, one for each color and one for the alpha value. The binary file seems to have a single channel, as you don't report an error when performing the data.reshape(shape) operation (the shape for the corresponding RGBA image would be (430, 430, 4)).
I see two potential reasons:
The image actually does have colour information, but when you are grabbing the data you are only grabbing one of the four channels.
The image is actually a gray-scale image, but the embedded device shows a pseudocolour image, creating the illusion of colour information. Without knowing which colourmap is being used, it is hard to help you, other than to point you towards matplotlib.pyplot.colormaps(), which lists all the colour maps already available in matplotlib.
Could you a) explain the exact source / type of imaging modality, and b) show a photo of the output of the embedded device?
PS: Also, at least in my hands, the pasted binary file seems to have a size of 122629 bytes, which is incongruent with an image shape of (430, 430) at 2 bytes per pixel (369,800 bytes).
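A third possibility (purely a guess on my part): the 16-bit values may be packed colour rather than a single intensity channel. Embedded framebuffers are commonly RGB565 (5 bits red, 6 bits green, 5 bits blue), and if that is the layout here, the channels can be unpacked with bit masks. A sketch under that assumption, using the same file and shape as above:
import numpy as np
import matplotlib.pyplot as plt
pixels = np.fromfile('FramebufferL0_0.bin', dtype='<u2').reshape((430, 430))
# Unpack RGB565: rrrrrggg gggbbbbb
r = ((pixels >> 11) & 0x1F).astype(np.uint8) << 3  # 5 bits -> 8 bits
g = ((pixels >> 5) & 0x3F).astype(np.uint8) << 2   # 6 bits -> 8 bits
b = (pixels & 0x1F).astype(np.uint8) << 3          # 5 bits -> 8 bits
plt.imshow(np.dstack((r, g, b)))
plt.show()
If the colours come out looking swapped, the layout may be BGR565 or some other packing, in which case the masks and shifts need to be adjusted.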

how to open a large bin file ( ~ 8 GB image stream) in opencv python?

Background
The binary file contains successive raw output from a camera sensor in the form of a Bayer pattern, i.e. the data is a sequence of blocks of the form shown below, where each block is one image in the image stream:
[(bayer width) * (bayer height) * sizeof(short)]
Objective
To read information from a specific block of data and store it as an array for processing. I was digging through the OpenCV documentation and am totally lost on how to proceed. I apologize for the novice question, but any suggestions?
Assuming you can read the binary file (as a whole), I would try to use NumPy to read it into a numpy.array. You can use numpy.frombuffer (numpy.fromstring is deprecated for binary data) and, depending on the system the file was written on (little or big endian), use >i2 or <i2 as your data type (you can find the list of data types here).
Also note that > means big-endian and < means little-endian (more on that here).
You can set an offset and specify the length in order to read a certain block.
import numpy as np
with open('datafile.bin', 'rb') as f:
    dataBytes = f.read()
# Slice out one block and interpret it as big-endian 16-bit integers
data = np.frombuffer(dataBytes[blockStartIndex:blockEndIndex], dtype='>i2')
In case you cannot read the file as a whole, I would use mmap (requires a little knowledge of C) in order to break it down to multiple files and then use the method above.
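Rather than dropping down to mmap directly, NumPy's own np.memmap may be enough here: it exposes the whole 8 GB file as an array without loading it, and individual blocks can then be sliced out on demand. A minimal sketch, with the width, height and filename as placeholders:
import os
import numpy as np
bayer_width, bayer_height = 1920, 1080  # placeholder sensor dimensions
frame_bytes = 2 * bayer_width * bayer_height
num_frames = os.path.getsize('datafile.bin') // frame_bytes
# Map the whole file without reading it into memory
frames = np.memmap('datafile.bin', dtype='>i2', mode='r', shape=(num_frames, bayer_height, bayer_width))
# Only this slice is actually read from disk
block = np.array(frames[42])  # copy frame 42 into RAM for processing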
OP here. Following @lsxliron's suggestion, I looked into using NumPy to achieve my goal, and this is what I ended up doing:
import numpy as np
# Data read length (bayer_width and bayer_height are the sensor dimensions)
length = bayer_width * bayer_height
# In terms of bytes: a short is 2 bytes
step = 2 * length
# Open the file
img = open("filename", "rb")
# Seek to the i-th block we are interested in
img.seek(i * step)
# Read the block as a NumPy array
Bayer = np.fromfile(img, dtype=np.uint16, count=length)
Bayer now holds the Bayer pattern values in the form of a NumPy array. Success!
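Since the question mentions OpenCV: once a block has been reshaped to 2-D it can also be demosaiced into a colour image with cv2.cvtColor. The exact COLOR_Bayer* constant depends on the sensor's pattern, so the one below is only an assumption:
import cv2
import numpy as np
bayer_width, bayer_height = 1920, 1080       # placeholder sensor dimensions
length = bayer_width * bayer_height
img = open("filename", "rb")
img.seek(0)                                  # first block; use i * 2 * length for block i
Bayer = np.fromfile(img, dtype=np.uint16, count=length)
# Demosaic -- the Bayer ordering (BG/GB/RG/GR) depends on the sensor
rgb = cv2.cvtColor(Bayer.reshape(bayer_height, bayer_width), cv2.COLOR_BayerBG2BGR)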

Read .img medical image without header in python

I have a radiograph .img file without the header file. However, the researchers who published the file have given this information about it:
High resolution (2048 × 2048 matrix size, 0.175mm pixel size)
Wide density range (12-bit, 4096 gray scale)
Universal image format (no header, big-endian raw data)
I am trying to open the file using Python but am unable to do so. Could someone suggest a method to read this image file?
I found some radiograph images, like yours, by downloading the JSRT database. I have tested the following code on the first image of this database: JPCLN001.IMG.
import matplotlib.pyplot as plt
import numpy as np
# Parameters.
input_filename = "JPCLN001.IMG"
shape = (2048, 2048) # matrix size
dtype = np.dtype('>u2') # big-endian unsigned integer (16bit)
output_filename = "JPCLN001.PNG"
# Reading.
fid = open(input_filename, 'rb')
data = np.fromfile(fid, dtype)
image = data.reshape(shape)
# Display.
plt.imshow(image, cmap = "gray")
plt.savefig(output_filename)
plt.show()
It produces an output file JPCLN001.PNG (preview image omitted).
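One caveat: plt.savefig writes a rendered 8-bit figure (axes included) rather than the raw 12-bit data. If you want to keep the full bit depth, one option (an extra suggestion, using the tifffile package) is to write the array out directly:
import numpy as np
import tifffile
data = np.fromfile('JPCLN001.IMG', dtype='>u2').reshape((2048, 2048))
# Preserve the full dynamic range instead of an 8-bit screenshot of the figure
tifffile.imwrite('JPCLN001_16bit.tif', data.astype('uint16'))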
I hope I have answered your question.
Happy coding!
Just in case anybody else is looking at these images and wants to convert them in batches, or outside of Python and without needing any programming knowledge: you can convert them readily at the command line with ImageMagick (which is installed on most Linux distros anyway, and is available for OS X and Windows) like this:
convert -size 2048x2048 -depth 16 -endian MSB -normalize gray:JPCLN130.IMG -compress lzw result.tif
which makes them into compressed 16-bit TIF files that can be viewed in any application. They also then take up half the space on disk without loss of quality since I specified LZW compression.
Likewise, if you want 16-bit PNG files, you can use:
convert -size 2048x2048 -depth 16 -endian MSB -normalize gray:JPCLN130.IMG result.png
