Reading MONO 16-bit images using PyCapture2 - python

I am using the CMLN-13S2M-CS camera from PointGrey. This camera has a MONO 16-bit pixel format. Using the PyCapture2 wrapper from PointGrey, I am unable to retrieve the image the camera is recording.
I have the following code:
import sys
import numpy
import PyCapture2
## Connect camera
bus = PyCapture2.BusManager()
c = PyCapture2.Camera()
c.connect(bus.getCameraFromIndex(0))
## Configure camera format7 settings
fmt7imgSet = PyCapture2.Format7ImageSettings(0, 0, 0, 1296, 964, PyCapture2.PIXEL_FORMAT.MONO16)
fmt7pktInf, isValid = c.validateFormat7Settings(fmt7imgSet)
c.setFormat7ConfigurationPacket(fmt7pktInf.recommendedBytesPerPacket, fmt7imgSet)
## Start capture and retrieve buffer
c.startCapture()
im = c.retrieveBuffer()
print(im.getData().shape)
print(numpy.max(im.getData()))
The print statements return (2498688,) and 240. The length is exactly 2 x (964 x 1296). How should I reshape this? Also, the maximum value when the sensor is saturated is 255, which is odd because that corresponds to the MONO8 pixel format. What am I doing wrong?

Here's a quick demo that shows how to convert a 1D array of uint8 to a 2D array of uint16. The key function we need here is view.
import numpy as np
# Make 24 bytes of fake data
raw = np.arange(24, dtype=np.uint8)
#Convert
out = raw.view(np.uint16).reshape(3, 4)
print(out)
print(out.dtype)
Output:
[[ 256  770 1284 1798]
 [2312 2826 3340 3854]
 [4368 4882 5396 5910]]
uint16
Thanks to Andras Deak for his assistance!
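Applied to the buffer from the question, the same trick gives the image directly. A minimal sketch, assuming getData() returns a flat uint8 numpy array of length 2 x 964 x 1296 (which the shape output suggests):
import numpy as np
raw = im.getData()  # flat uint8 array, 2 bytes per pixel
img16 = raw.view(np.uint16).reshape(964, 1296)  # (rows, columns)
print(img16.shape, img16.dtype)  # (964, 1296) uint16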
If the resulting image doesn't look correct, you may need to swap the byte ordering of the 16 bit integers. You can read about byte ordering in Numpy here.
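For example, you can reinterpret the buffer with the opposite byte order; which order is correct depends on the camera, so treat this as a sketch to experiment with:
# Option 1: byte-swap after viewing as native-order uint16
img16 = raw.view(np.uint16).byteswap().reshape(964, 1296)
# Option 2: view the buffer directly as big-endian uint16
img16 = raw.view('>u2').reshape(964, 1296)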
And if that still doesn't look correct, then the data may be organized as two planes, with one plane for the low-order bits of a pixel and the other plane for the high-order bits. That's also easy to deal with, but hopefully it won't come to that. ;)
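For completeness, here is a sketch of handling that hypothetical planar layout, assuming the low-order plane comes first (the order might be reversed):
n = 964 * 1296
low = raw[:n].astype(np.uint16)   # low-order bytes, one plane
high = raw[n:].astype(np.uint16)  # high-order bytes, second plane
img16 = ((high << 8) | low).reshape(964, 1296)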

Maximum allowed dimension exceeded

I am attempting to make a painting based on the mass of the universe with pi and the gravitational constant of Earth at sea level converted to binary. I've done the math and I have the right dimensions, and it should only need less than a megabyte of RAM, but I'm running into a "maximum allowed dimension exceeded" ValueError.
Here is the code:
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
boshi = 123456789098765432135790864234579086542098765432135321 # universal mass
genesis = boshi ** 31467 # padding
artifice = np.binary_repr(genesis) # formatting
A = int(artifice)
D = np.array(A).reshape(A, (1348, 4117))
plt.imsave('hello_world.png', D, cmap=cm.gray) # save image
I keep running into the error at D = np.array..., and maybe my reshape is too big, but it's only a little bigger than 4K. It seems like this should be no problem for GPU-enhanced Colab, and it doesn't run on my home machine either, failing with the same error. Would this be fixed with more RAM?
Making it Work
The problem is that artifice = np.binary_repr(genesis) creates a string. The string consists of 1348 * 4117 = 5549716 digits, all of them zeros and ones. If you convert the string to a python integer, A = int(artifice), you will (A) wait a very long time, and (B) get a non-iterable object. The array you create with np.array(A) will have a single element.
The good news is that you can bypass the time-consuming step entirely using the fact that the string artifice is already an iterable:
D = np.array(list(artifice), dtype=np.uint8).reshape(1348, 4117)
The step list(artifice) will take a couple of seconds since it has to split up the string, but everything else should be quite fast.
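If even list(artifice) is too slow, a variant that stays entirely in NumPy is worth a try. This is a sketch; it relies on the string containing only the ASCII characters '0' and '1', whose encoded byte values are 48 and 49:
# Encode to bytes, then subtract ord('0') to get an array of 0s and 1s
D = (np.frombuffer(artifice.encode('ascii'), dtype=np.uint8) - ord('0')).reshape(1348, 4117)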
Plotting is easy from there with plt.imsave('hello_world.png', D, cmap=cm.gray).
Colormaps
You can easily change the color map to coolwarm or whatever you want when you save the image. Keep in mind that your image is binary, so only two of the values will actually matter:
plt.imsave('hello_world2.png', D, cmap=cm.coolwarm)
Exploration
You have an opportunity here to add plenty of color to your image. Normally, a PNG is 8-bit. For example, instead of converting genesis to bits, you can take the bytes from it to construct an image. You can also take nibbles (half-bytes) to construct an indexed image with 16 colors. With a little padding, you can even make sure that you have a multiple of three data points, and create a full color RGB image in any number of ways. I will not go into the more complex options, but I would like to explore making a simple image from the bytes.
5549716 bits is 693715 = 5 * 11 * 12613 bytes (with four leading zero bits). This is a very nasty factorization leading to an image size of 55x12613, so let's remove that upper nibble: while 693716's factorization is just as bad as 693715's, 693714 factors very nicely into 597 * 1162.
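You can verify these counts directly (genesis as defined in the question):
from math import ceil
print(genesis.bit_length())            # 5549716 bits (= 1348 * 4117)
print(ceil(genesis.bit_length() / 8))  # 693715 bytes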
You can convert your integer to an array of bytes using its own to_bytes method:
from math import ceil
byte_genesis = genesis.to_bytes(ceil(genesis.bit_length() / 8), 'big')
The reason that I use the built-in ceil rather than np.ceil is that it returns an integer rather than a float.
Converting the huge integer is very fast because the bytes object has direct access to the data of the integer: even if it makes a copy, it does virtually no processing. It may even share the buffer since both bytes and int are nominally immutable. Similarly, you can create a numpy array from the bytes as just a view to the same memory location using np.frombuffer:
img = np.frombuffer(byte_genesis, dtype=np.uint8)[1:].reshape(597, 1162)
The [1:] is necessary to chop off the leading nibble, since byte_genesis must be large enough to hold the entirety of genesis. You could also chop off on the bytes side:
img = np.frombuffer(byte_genesis[1:], dtype=np.uint8).reshape(597, 1162)
The results are identical. Here is what the picture looks like:
plt.imsave('hello_world3.png', img, cmap=cm.viridis)
The result is too large to upload (because it's not a binary image), but here is a randomly selected sample:
I am not sure if this is aesthetically what you are looking for, but hopefully this provides you with a place to start looking at how to convert very large numbers into data buffers.
More Options, Because this is Interesting
I wanted to look at using nibbles rather than bytes here, since that would allow you to have 16 colors per pixel, and twice as many pixels. You can get a 1162x1194 image starting from
temp = np.frombuffer(byte_genesis, dtype=np.uint8)[1:]
Here is one way to unpack the nibbles:
img = np.empty((1162, 1194), dtype=np.uint8)
img.ravel()[::2] = np.bitwise_and(temp >> 4, 0x0F)
img.ravel()[1::2] = np.bitwise_and(temp, 0x0F)
With a colormap like jet, you get:
plt.imsave('hello_world4.png', img, cmap=cm.jet)
Another option (going in the opposite direction, in a manner of speaking) is not to use colormaps at all. Instead, you can divide your space by a factor of three and generate your own colors in RGB space. Luckily, one of the prime factors of 693714 is 3. You can therefore have a 398x581 image (693714 == 3 * 398 * 581). How you interpret the data is, even more than usual, up to you.
Side Note Before I Continue
With the black-and-white binary image, you could control the color, size and orientation of the image. With 8-bit data, you could control how the bits were sampled (8 or fewer, as in the 4-bit example), the endianness of your interpretation, the color map, and the image size. With full color, you can treat each triple as a separate color, treat the entire dataset as three consecutive color planes, or even do something like apply a Bayer filter to the array. All in addition to the other options like size, ordering, number of bits per sample, etc.
The following will show the color triples and three color planes options for now.
Full Color Images
To treat each set of 3 consecutive bytes as an RGB triple, you can do something like this:
img = temp.reshape(398, 581, 3)
plt.imsave('hello_world5.png', img)
Notice that there is no colormap in this case.
Interpreting the data as three color planes requires an extra step because plt.imsave expects the last dimension to have size 3. np.rollaxis is a good tool for this:
img = np.rollaxis(temp.reshape(3, 398, 581), 0, 3)
plt.imsave('hello_world6.png', img)
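On NumPy 1.11 and later, np.moveaxis expresses the same reshuffle a little more readably:
img = np.moveaxis(temp.reshape(3, 398, 581), 0, -1)  # shape (398, 581, 3)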
I could not reproduce your problem, because the line A = int(artifice) took forever. I replaced it with a for loop that casts each digit on its own. The code then worked and produced the desired image.
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
boshi = 123456789098765432135790864234579086542098765432135321
genesis = boshi ** 31467
artifice = np.binary_repr(genesis)
D = np.zeros((1348, 4117), dtype=int)
for i, digit in enumerate(artifice):
    D.flat[i] = int(digit)
plt.imsave('hello_world.png', D, cmap=cm.gray)

How to convert (or scale) a FITS image with Astropy

Using the Astropy library, I created a FITS image which is made by interpolation from 2 actual FITS images (they are scaled as "int16", the right format for the software I use: Maxim DL).
But this image is scaled as float64 and not int16, and most astronomical processing software can't read it (except FITS Liberator).
Do you have an idea how to proceed? Can we convert a FITS image just by changing the "BITPIX" in the header?
I tried the following (based on this method: Why is an image containing integer data being converted unexpectedly to floats?):
from astropy.io import fits
hdu1=fits.open('mypicture.fit')
image=hdu1[0]
print(image.header['BITPIX']) # it gives : -64
image.scale('int16')
data=image.data
data.dtype
print(image.header['BITPIX']) # it gives : 16
hdu1.close()
However, when I check the newly modified scale of "mypicture.fit", it still displays -64! No change was saved or applied!
If I understand your problem correctly, this should work.
from astropy.io import fits
import numpy as np
# create dummy fits file
a = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]], dtype=np.float64)
hdu = fits.PrimaryHDU()
hdu.data = a
# looking at the header object confirms BITPIX = -64
hdu.header
# change data type
hdu.data = np.int16(hdu.data)
# look again to confirm BITPIX = 16
hdu.header
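The remaining step, and the one the original question was missing, is writing the result back to disk; a minimal sketch (the output filename is illustrative):
# Persist the converted HDU so the BITPIX = 16 header survives
hdu.writeto('mypicture_int16.fit', overwrite=True)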

Python image analysis: reading a multidimensional TIFF file from confocal microscopy

I have a TIFF image file from a confocal microscope which I can open in ImageJ, but which I would like to get into Python.
The format of the TIFF is as follows:
There are 30 stacks in the Z dimension. Each Z layer has three channels from different fluorescent markers. Each channel has a depth of 8 bits. The image dimensions are 1024x1024.
I can, in principle, read the file with skimage (which I plan to use to further analyse the data) using the tifffile plugin. However, what I get is not quite what I expect.
from skimage import io
merged = io.imread("merge.tif", plugin="tifffile")
merged.shape
# (30, 3, 3, 1024, 1024)
# (zslice, RGB?, channel?, height, width)
merged.dtype
# dtype('uint16')
What confused me initially was the fact that I get two axes of length 3. I think this is because tifffile treats each channel as a separate RGB image, but I can work around this by subsetting or using skimage.color.rgb2grey on the individual channels. What concerns me more is that the file is imported as a 16-bit image. I can convert it back using skimage.img_as_ubyte, but afterwards the histogram no longer matches the one I see in ImageJ.
I am not fixated on using skimage to import the file, but I would like to get the image into a numpy array eventually to use skimage's functionality on it.
I've encountered the same issue working on .tif files. I recommend using the bioformats Python package.
import javabridge
import bioformats
javabridge.start_vm(class_path=bioformats.JARS)
path_to_data = '/path/to/data/file_name.tif'
# get XML metadata of complete file
xml_string = bioformats.get_omexml_metadata(path_to_data)
ome = bioformats.OMEXML(xml_string) # be sure everything is ascii
print(ome.image_count)
Depending on the data, one file can hold multiple images. Each image can be accessed as follows:
# read some metadata
iome = ome.image(0)  # e.g. first image
print(iome.get_Name())
print(iome.get_ID())
# get pixel metadata
print(iome.Pixels.get_DimensionOrder())
print(iome.Pixels.get_PixelType())
print(iome.Pixels.get_SizeX())
print(iome.Pixels.get_SizeY())
print(iome.Pixels.get_SizeZ())
print(iome.Pixels.get_SizeT())
print(iome.Pixels.get_SizeC())
print(iome.Pixels.DimensionOrder)
Loading the raw data of image 0 into a numpy array is done like this:
import numpy as np

reader = bioformats.ImageReader(path_to_data)
raw_data = []
for z in range(iome.Pixels.get_SizeZ()):
    # returns a 512 x 512 x SizeC array (SizeC = number of channels)
    raw_image = reader.read(z=z, series=0, rescale=False)
    raw_data.append(raw_image)
raw_data = np.array(raw_data)  # SizeZ x 512 x 512 x SizeC array
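One caveat: javabridge keeps the JVM running, so shut it down once you are done reading:
javabridge.kill_vm()  # stop the JVM started above; it cannot be restarted in the same process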
Hope this helps processing .tif files, Cheers!
I am not sure the 'hyperstack to stack' function is what you want. Hyperstacks are simply multidimensional images; they can be 4D or 5D (width, height, slices, channels (e.g. 3 for RGB) and time frames). In ImageJ you have a slider for each dimension in a hyperstack.
Stacks are just stacked 2D images that are somehow related, and you have only one slider; in the simplest case it represents the z-slices in a 3D data set.
The 'hyperstack to stack' function stacks all dimensions in your hyperstack. So if you have a hyperstack with 3 channels, 4 slices and 5 time frames (3 sliders) you will get a stack of 3x4x5 = 60 images (one slider). Basically the same thing as you mentioned above with sliding through the focal planes on a per-channel basis. You can go the other way around using 'stack to hyperstack' and make a hyperstack by defining which slices from your stack represent which dimension. In the example file I mentioned above just select order xyzct, 3 channels and 7 time points.
So if your tiff file has 2 sliders, it seems to be a 4D hyperstack with height, width, 30 slices and 3 channels. 'Hyperstack to stack' would stack all dimensions on top of each other, so you would get 3x30 = 90 slices.
However, according to the skimage tiff reader, your tiff file seems to be some kind of 5D hyperstack: width, height (1024x1024), 30 z-slices, 3 channels (RGB) and another dimension with 3 entries (e.g. time frames).
In order to find out what is wrong, I would suggest comparing the two dimensions with 3 entries of the array you get from skimage. Find out which one of them represents the RGB channels and what the other one is. You can, for example, use pyqtgraph's image function:
import pyqtgraph as pg
from skimage import io
merged = io.imread("merge.tif", plugin="tifffile")
#pg.image takes the dimensions in the following order: z-slider,x,y,RGB channel
#if merged.shape = (30, 3, 3, 1024, 1024), you have to compare the 1st and 2nd dimension
pg.image(merged[:,0,:,:,:].transpose(0, 2, 3, 1))
pg.image(merged[:,1,:,:,:].transpose(0, 2, 3, 1))
pg.image(merged[:,2,:,:,:].transpose(0, 2, 3, 1))
pg.image(merged[:,:,0,:,:].transpose(0, 2, 3, 1))
pg.image(merged[:,:,1,:,:].transpose(0, 2, 3, 1))
pg.image(merged[:,:,2,:,:].transpose(0, 2, 3, 1))

Numpy image slicing returning black patches/ wrong values

The end goal is to take an image and slice it up into samples that I save. The problem is that my slices are randomly returning black/incorrect patches. Below is a small sample program.
import scipy.ndimage as ndimage
import scipy.misc as misc
import numpy as np
image32 = misc.imread("work0.png")
patches = np.zeros((36, 8, 8))
for i in range(4):
    for j in range(4):
        patches[i*4 + j] = image32[i:i+8, j:j+8]
        misc.imsave("{0}{1}.png".format(i, j), patches[i*4 + j])
An example of my image would be:
The 8x8 patch at (0,0) yields:
Two things:
You are initializing your patch matrix with the wrong data type. By default, numpy makes the patches matrix np.float64, and if you save with that, you won't get the results you expect. Specifically, if you consult Mr. F's answer, some scaling is performed on floating-point images: the minimum and maximum values of the image get mapped to black and white respectively, so if a patch is completely uniform, its minimum and maximum are equal and it gets visualized as black. The best fix is to respect the original image's data type, namely setting the dtype of your patches matrix to np.uint8.
Judging from your for loop indexing, you want to extract out 8 x 8 patches that are non-overlapping. This means that if you have a 32 x 32 image with 8 x 8 patches, you have 16 patches in total arranged in a 4 x 4 grid.
Therefore, you need to change the patches statement so that it has 16 in the first dimension, not 36. In addition, you'll have to adjust the way you're indexing into your image to extract out the 8 x 8 patches because right now, the patches are overlapping. Specifically, you want to make the image patch indexing go from 8*i to 8*(i+1) for the rows and 8*j to 8*(j+1) for the columns. If you substitute sample values of i and j yourself, you'll see that we get unique 8 x 8 patches for each grid in your image.
With both of the above things I noted, the modified code should be:
import scipy.ndimage as ndimage
import scipy.misc as misc
import numpy as np
image32 = misc.imread('work0.png')
patches = np.zeros((16,8,8), dtype=np.uint8) # Change
for i in range(4):
    for j in range(4):
        patches[i*4 + j] = image32[8*i:8*(i+1), 8*j:8*(j+1)]  # Change
        misc.imsave("{0}{1}.png".format(i, j), patches[i*4 + j])
When I do this and take a look at the output images, I get what I expect.
To be absolutely sure, let's plot the segments using matplotlib. You've conveniently saved all of the patches in patches so it shouldn't be a problem showing what we need. However, I'll place some code in comments so that you can read in the images that were saved from disk with your above code so you can verify that it still works, regardless of looking at patches or the images on disk:
import matplotlib.pyplot as plt
plt.figure()
for i in range(4):
    for j in range(4):
        plt.subplot(4, 4, 4*i + j + 1)
        img = patches[4*i + j]
        # or you can do this:
        # img = misc.imread('{0}{1}.png'.format(i, j))
        img = np.dstack([img, img, img])
        plt.imshow(img)
plt.show()
The weird thing about matplotlib.pyplot.imshow is that if you have an image that is single channel (such as your case) that has the same intensity all around, it gets visualized to black no matter what the colour map is, much like what we experienced with imsave. Therefore, I had to artificially make this a RGB image but with all of the channels to be the same so this gets visualized as grayscale before we show the image.
We get the expected 4 x 4 grid of patches.
According to this answer the issue is that imsave normalizes the data so that the computed minimum is defined as black (and, if there is a distinct maximum, that is defined as white).
This led me to go digging as to why the suggested use of uint8 did work to create the desired output. As it turns out, in the source there is a function called bytescale that gets called internally.
Actually, imsave itself is a very thin wrapper around toimage followed by save (from the image object). Inside of toimage if mode is None (which it is by default), that's when bytescale gets invoked.
It turns out that bytescale has an if statement that checks for the uint8 data type, and if the data is in that format, it returns the data unaltered. But if not, then the data is scaled according to a max and min transformation (where 0 and 255 are the default low and high pixel values to compare to).
This is the full snippet of code linked above:
if data.dtype == uint8:
    return data
if high < low:
    raise ValueError("`high` should be larger than `low`.")
if cmin is None:
    cmin = data.min()
if cmax is None:
    cmax = data.max()
cscale = cmax - cmin
if cscale < 0:
    raise ValueError("`cmax` should be larger than `cmin`.")
elif cscale == 0:
    cscale = 1
scale = float(high - low) / cscale
bytedata = (data * 1.0 - cmin) * scale + 0.4999
bytedata[bytedata > high] = high
bytedata[bytedata < 0] = 0
return cast[uint8](bytedata) + cast[uint8](low)
For the blocks of your data that are all 255, cscale will be 0, which will be checked for and changed to 1. Then the line
bytedata = (data * 1.0 - cmin) * scale + 0.4999
will result in the whole image block having the float value 0.4999, which is then set to 0 in the next chunk of code (when cast from float to uint8), for example:
In [102]: np.cast[np.uint8](0.4999)
Out[102]: array(0, dtype=uint8)
You can see in the body of bytescale that there are only two possible ways to return: either your data is type uint8 and it's returned as-is, or else it goes through this kind of silly scaling process. So in the end, it is indeed correct, and good practice, to be using uint8 for the pieces of your code that specifically load from or save to an image format via these functions.
So this cascade of stuff is why you were getting all zeros in the outputted image file and why the other suggestion of using dtype=np.uint8 actually helps you. It's not because you need to avoid floating point data for images, just because of this bizarre convention to check and scale data on the part of imsave.

How to reduce an image size in image processing (scipy/numpy/python)

Hello, I have an image (1024 x 1024) and I used the "fromfile" command in numpy to put every pixel of that image into a matrix.
How can I reduce the size of the image (e.g. to 512 x 512) by modifying that matrix a?
a = numpy.fromfile( - path - , 'uint8').reshape((1024, 1024))
I have no idea how to modify the matrix a to reduce the size of the image. So if somebody has any idea, please share your knowledge and I will appreciate it. Thanks
EDIT:
When I looked at the result, I found that the reader read the image into a "matrix", so I changed "array" to "matrix" above.
Jose told me I can take only the even columns and even rows and put them into a new matrix. That will reduce the image to half its size. What command in scipy/numpy do I need to use to do that?
Thanks
If you want to resize to a specific resolution, use scipy.misc.imresize:
import scipy.misc
i_width = 640
i_height = 480
scipy.misc.imresize(original_image, (i_height, i_width))
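Note that scipy.misc.imresize was deprecated and later removed (in SciPy 1.3). It was a thin wrapper around Pillow, so an equivalent sketch for current environments, using the same hypothetical original_image and target size as above, is:
import numpy as np
from PIL import Image

# PIL's resize takes (width, height), unlike the (height, width) tuple above
resized = np.array(Image.fromarray(original_image).resize((i_width, i_height)))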
Use the zoom function from scipy:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.zoom.html#scipy.ndimage.zoom
from scipy.ndimage.interpolation import zoom
a = np.ones((1024, 1024))
small_a = zoom(a, 0.5)
I think the easiest way is to take only some columns and some rows of the image, making a subsample of the array. Take, for example, only the even rows and even columns, put them in a new array, and you will have a half-size image.
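In numpy that subsampling is a single slicing expression:
import numpy as np

a = np.ones((1024, 1024), dtype=np.uint8)
small_a = a[::2, ::2]  # keep every second row and column -> shape (512, 512)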