I have a collection of rather large tiff files (typical resolution is 30k x 30k to 50k x 50k). These have some interesting properties: 1. they contain several different resolutions of the same image, and 2. the data seems to be compressed as JPEG tiles (512x512). They also seem to be BigTIFFs.
I'd like to read specific regions of interest (a more or less random access pattern), and I'd like it to be as fast as possible (roughly as fast as decompressing the needed tiles, plus a little overhead). Decompressing the whole file, or even one of the resolution layers, is not practical. I'm using Python.
I tried opening with skimage.io.MultiImage('test.tiff')[level], and this works (produces correct data). It does decompress each resolution layer completely, although not the whole file; so it is okay at the low resolutions but not really useful at the highest ones. There didn't seem to be any way to select a region of interest in skimage.io, or in any of the libraries it uses (imread, PIL, ...).
I also tried OpenSlide, using img.read_region((x, y), level, (width, height)). This library seems made exactly for this type of data, and it is very fast, but unfortunately it produces incorrect data for some regions. Until the bug is fixed upstream, I can't use it.
Lastly, using very recent versions of tifffile==2020.6.3 and imagecodecs==2020.5.30 (older versions don't work - I think at least 2018.10.18 is needed), I could list the tiles, using code modified from the tifffile documentation:
import tifffile

with tifffile.TiffFile('test.tiff') as tif:
    fh = tif.filehandle
    for page in tif.pages:
        # JPEG-compressed tiles share quantization/Huffman tables,
        # stored once per page in the JPEGTables tag.
        jpegtables = page.tags.get('JPEGTables', None)
        if jpegtables is not None:
            jpegtables = jpegtables.value
        for index, (offset, bytecount) in enumerate(
            zip(page.dataoffsets, page.databytecounts)
        ):
            fh.seek(offset)
            data = fh.read(bytecount)
            tile, indices, shape = page.decode(data, index, jpegtables)
            print(tile.shape, indices, shape)
It seems the page.decode() call actually decompresses each tile (tile is a numpy array with pixel data). It is not obvious how to get a tile's index and location without decompressing it, and I'm also not sure how fast this would be. It also leaves the selection of a region of interest, and the reassembly of tiles, as an exercise for the user.
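To make that exercise concrete, here is a rough sketch of my own (not from the tifffile documentation) of the index arithmetic for finding which tiles intersect a region of interest; each yielded index could then go through the seek/read/decode loop above, and the decoded tile pasted into an output array at (row * th - y0, col * tw - x0), clipped to the ROI bounds:

def tiles_for_roi(page, y0, x0, h, w):
    # Tiles are stored row-major, so the flat tile index is
    # row * tiles_per_row + col.
    th, tw = page.tilelength, page.tilewidth
    tiles_per_row = (page.imagewidth + tw - 1) // tw
    for row in range(y0 // th, (y0 + h - 1) // th + 1):
        for col in range(x0 // tw, (x0 + w - 1) // tw + 1):
            yield row * tiles_per_row + col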
How do I efficiently read regions of interest out of files like this? Does someone have example code to do that with tifffile? Or, is there another library that would do the trick?
Related
I am trying to build an algorithm to detect some objects and track them over time. My input data is a multi-stack tif file, which I read as a numpy array. I apply a U-Net model to create a binary mask and then identify the coordinates of the individual objects using scipy.
Up to here everything kind of works, but I just cannot get my head around the tracking. I have a dictionary where keys are the frame numbers and values are lists of tuples. Each tuple contains the coordinates of one object.
Now I have to link the objects together, which on paper seems pretty simple. I was hoping there was a function or a package to do so (ideally something similar to TrackMate or M2track in ImageJ), but I cannot find anything like that. I am considering writing my own nearest-neighbor tool, but I'd like to know whether there is a less painful way (and I would also like to be able to use more advanced metrics).
The other option I considered is using cv2, but this would require converting the data into a format cv2 likes, which would significantly slow down the code. In addition, I would like to keep the data as close as possible to the original input, so no cv2 for me.
I solved it using trackpy.
http://soft-matter.github.io/trackpy/v0.5.0/
trackpy properly reads multi-stack tiff files (OpenCV can't).
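For reference, a minimal sketch of the linking step (assuming a dict of frame -> list of (x, y) tuples as described in the question; search_range and memory are placeholder values to tune):

import pandas as pd
import trackpy as tp

# detections: {frame_number: [(x, y), ...]} as described above
detections = {0: [(10.0, 12.0), (55.0, 40.0)],
              1: [(11.0, 12.5), (54.0, 41.0)]}

df = pd.DataFrame([{'frame': f, 'x': x, 'y': y}
                   for f, coords in detections.items()
                   for x, y in coords])

# search_range is the maximum displacement per frame; memory lets an
# object disappear for a few frames and still keep its track ID.
linked = tp.link(df, search_range=5, memory=3)
print(linked)  # same rows, plus a 'particle' column with track IDs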
I'm interested in migrating from Psychtoolbox to Shady for my stimulus presentation. I looked through the online docs, but it is not very clear to me how to replicate in Shady what I'm currently doing in Matlab.
What I do is actually very simple. For each trial,
I load from disk a single image (I do luminance linearization off-line), which contains all the frames I plan to display in that trial. The stimulus is 1000x1000 px and I present 25 frames, hence the image is 5000x5000 px. I only use BW images, so I have a single 8-bit value per pixel.
I transfer the entire image from the CPU to the GPU.
At some point (externally controlled) I copy the first frame to the video buffer and present it.
At some other point (externally controlled) I trigger the presentation of the remaining 24 frames (copying the relevant part of the image to the video buffer for each video frame, and then calling flip()).
The external control happens by having another machine communicate with the stimulus presentation code over TCP/IP. After the control PC sends a command to the presentation PC and this is executed, the presentation PC needs to send back an acknowledgement message to the control PC. I need to send three ACK messages, one when the first frame appears on screen, one when the 2nd frame appears on screen, and one when the 25th frame appears on screen (this way the control PC can easily verify if a frame has been dropped).
In Matlab I do this by calling the blocking method flip() to present a frame; when it returns, I send the ACK to the control PC.
That's it. How would I do that in shady? Is there an example that I should look at?
The places to look for this information are the docstrings of Shady.Stimulus and Shady.Stimulus.LoadTexture, as well as the included example script animated-textures.py.
Like most things Python, there are multiple ways to do what you want. Here's how I would do it:
import Shady

w = Shady.World()
s = w.Stimulus( [frame00, frame01, frame02, ...], multipage=True )
where each frameNN is a 1000x1000-pixel numpy array (either floating-point or uint8).
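If your frames currently arrive concatenated in a single 5000x5000 array, here is a quick numpy sketch of my own for producing that list (it assumes the frames are laid out row by row; adjust the order if yours differs):

import numpy as np

big = np.zeros((5000, 5000), dtype=np.uint8)  # stand-in for your loaded 5000x5000 image
frames = [big[r:r + 1000, c:c + 1000]
          for r in range(0, 5000, 1000)
          for c in range(0, 5000, 1000)]
s = w.Stimulus(frames, multipage=True)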
Alternatively you can ask Shady to load directly from disk:
s = w.Stimulus('trial01/*.png', multipage=True)
where directory trial01 contains twenty-five 1000x1000-pixel image files, named (say) 00.png through 24.png so that they get sorted correctly. Or you could supply an explicit list of filenames.
Either way, whether you loaded from memory or from disk, the frames are all transferred to the graphics card in that call. You can then (time-critically) switch between them with:
s.page = 0 # or any number up to 24 in your case
Note that, because we used the multipage option, we're using the "page" animation mechanism (one OpenGL texture per frame) instead of the default "frame" mechanism (one 1000x25000 OpenGL texture); the latter would exceed the maximum allowable dimensions for a single texture on many graphics cards. The distinction between these mechanisms is discussed in the docstring for the Shady.Stimulus class as well as in the aforementioned interactive demo:
python -m Shady demo animated-textures
To prepare the next trial, you might use .LoadPages() (new in Shady version 1.8.7). This loops through the existing "pages", loading new textures into the previously-used graphics-card texture buffers, and adds further pages as necessary:
s.LoadPages('trial02/*.png')
Now, you mention that your established workflow is to concatenate the frames into a single 5000x5000-pixel image. My solutions above assume that you have done the work of cutting it up again into 1000x1000-pixel frames, presumably using numpy calls such as the slicing sketched above (it sounds like you might be doing the equivalent in Matlab at the moment). If you're going to keep saving as 5000x5000, the best way of staying in control of things might indeed be to maintain your own code for cutting it up. But it's worth mentioning that you could take the entirely different strategy of transferring it all in one go:
s = w.Stimulus('trial01_5000x5000.png', size=1000)
This loads the entire pre-prepared 5000x5000 image from disk (or again from memory, if you want to pass a 5000x5000 numpy array instead of a filename) into a single texture in the graphics card's memory. However, because of the size specification, the Stimulus will only show the lower-left 1000x1000-pixel portion of the array. You can then switch "frames" by shifting the carrier relative to the envelope. For example, if you were to say:
s.carrierTranslation = [-1000, -2000]
then you would be looking at the frame located one "column" across and two "rows" up in your 5x5 array.
As a final note, remember that you could take advantage of Shady's on-the-fly gamma-correction and dithering: they're happening anyway unless you explicitly disable them, though of course they have no physical effect if you leave the stimulus .gamma at 1.0 and use integer pixel values. So you could generate your stimuli as separate 1000x1000 arrays, each containing unlinearized floating-point values in the range [0.0, 1.0], and let Shady worry about everything beyond that.
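For example (a one-line illustration; 2.2 is just a placeholder, use your monitor's measured value):

s.gamma = 2.2  # Shady then linearizes on the fly instead of you doing it off-line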
I saved the image to the clipboard, and when I read the image information from the clipboard and saved it locally, the image quality changed. How can I save it to maintain the original high quality?
from PIL import ImageGrab
im = ImageGrab.grabclipboard()
im.save('somefile.png','PNG')
I tried adding the parameter quality=95 in im.save(), but it didn't work. The original image is 131K, and the saved image is 112K.
The size of the file is not directly related to the quality of the image. It also depends on how efficiently the encoder does its job. As it is PNG, the process is lossless, so you don't need to worry - the quality is retained.
Note that the quality parameter has a different meaning when saving JPEG files versus PNG files:
With JPEG files, if you specify a lower quality you are effectively allowing the encoder to discard more information and give up image quality in return for a smaller file size.
With PNG, encoding and decoding are lossless. The quality parameter is a hint to the encoder about how much time to spend compressing the file (always losslessly) and about the types of filtering/encoding that may suit best. It is more akin to gzip's --best or --fast options.
Further information about the PNG format is on Wikipedia.
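To see the difference in practice, here is a small Pillow sketch (compress_level is Pillow's PNG knob; the filenames are placeholders):

from PIL import Image

im = Image.open('somefile.png')

# JPEG: quality trades image fidelity for file size (lossy).
im.convert('RGB').save('out.jpg', quality=75)

# PNG: compress_level (0-9) only trades encoding time for file size;
# the pixels come back identical either way.
im.save('out.png', compress_level=9)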
Without analysing the content of the two images it is impossible to say why the sizes differ - there could be many reasons:
One encoder may have noticed that the image contains fewer than 256 colours and so decided to use a palette, whereas the other may not have done. That could make the image sizes differ by a factor of 3, yet the quality would be identical.
One encoder may use a larger buffer and spend longer looking for repeating patterns in the image. For a simplistic example, imagine the image was 32,000 pixels wide and each line was the same as the one above. If one encoder uses an 8kB buffer, it can never spot that the image just repeats over and over down the page so it has to encode every single line in full, whereas an encoder with a 64kB buffer might just be able to use 1 byte per line and use the PNG filtering to say "same as line above".
One encoder might decide, on grounds of simplicity of code or for lack of code space, to always encode the data in a 16-bit version even if it could use just 8 bits.
One encoder might decide it is always going to store an alpha layer even if it is opaque, because that may make the code/data cleaner and simpler.
One encoder may always elect to do no filtering, whilst the other has the code required to do sub, up, average or Paeth filtering.
One encoder may not have enough memory to hold the entire image, so it may have to use a simplistic approach to be assured that it can handle whatever turns up later in the image stream.
I just made these examples up - don't take them as gospel - I am just trying to illustrate some possibilities.
To reproduce an exact copy of a file from the clipboard, the clipboard must contain a byte-for-byte copy of the original. This does not happen when the content comes from the "Copy" function in a program.
In theory a program could be created to do that by setting a blob-type object with a copy of the original file, but that would be highly inefficient and defeat the purpose of the clipboard.
Some points:
- When you copy into the clipboard using the file manager, the clipboard will have a reference to the original file (not the entire file, which can potentially be much larger than RAM)
- Most programs will set the clipboard contents to some "useful version" of the displayed or selected data. This is very much subject to interpretation by the creator of the program.
- Parsing the clipboard content when reading an image is again subject to the whims of the library used to process the data and pack it back into an image format.
Generally if you want to copy a file exactly you will be better off just copying the original file.
Having said that: Evaluate the purpose of the copy-paste process and decide whether the data you get from the clipboard is "good enough" for the intended purpose. This obviously depends on what you want to use it for.
I'm writing some image processing routines for a micro-controller that supports MicroPython. The bad news is that it only has 0.5 MB of RAM. This means that if I want to work with relatively big images/matrices like 256x256, I need to treat them as collections of smaller matrices (e.g. 32x32) and perform the operation on those. Leaving aside the question of reconstructing the final output of the original (256x256) matrix from its (32x32) submatrices, I'd like to focus on how to do the loading/saving from/to disk (an SD card in this case) of these smaller matrices out of a big image.
Given that intro, here is my question: assuming I have a 256x256 image on disk that I'd like to apply some operation to (e.g. convolution), what's the most convenient way of storing that image so it's easy to load it in 32x32 patches? I've seen there is a MicroPython implementation of the pickle module; is this a good idea for my problem?
Sorry, but your question contains the answer: if you need to work with 32x32 tiles, the best format is one that stores your big image as a sequence of tiles (and not, e.g., as one big 256x256 image, though reading tiles out of that is not rocket science either and should be fairly trivial to code in MicroPython; pre-cut 32x32 tiles would of course be more efficient).
You don't describe the exact format of your images, but I wouldn't use the pickle module for this. Instead, store the images as raw bytes and load them into array.array() objects, using the in-place .readinto() method, as sketched below.
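A minimal sketch of that approach, assuming 8-bit grayscale tiles stored back-to-back in a raw file on the SD card ('image.raw' and the layout are my assumptions):

import array

TILE_W = TILE_H = 32
TILE_BYTES = TILE_W * TILE_H  # 1024 bytes per 8-bit tile

buf = array.array('B', bytes(TILE_BYTES))  # preallocated, reused for every tile

def load_tile(f, tile_index, buf):
    # Seek to the tile's offset and fill the existing buffer in place,
    # avoiding a fresh 1 KB allocation on every load.
    f.seek(tile_index * TILE_BYTES)
    f.readinto(buf)

with open('image.raw', 'rb') as f:
    load_tile(f, 0, buf)  # load the first 32x32 tile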
I want to write Python code that reads a .jpg picture, alters some of its RGB components, and saves it again, without changing the picture size.
I tried loading the picture using OpenCV and PyGame; however, when I tried a simple load/save using three different functions, the resulting images were larger than the initial image. This is the code I used:
>>> import cv, pygame # Importing OpenCV & PyGame libraries.
>>> image_opencv = cv.LoadImage('lena.jpg')
>>> image_opencv_matrix = cv.LoadImageM('lena.jpg')
>>> image_pygame = pygame.image.load('lena.jpg')
>>> cv.SaveImage('lena_opencv.jpg', image_opencv)
>>> cv.SaveImage('lena_opencv_matrix.jpg', image_opencv_matrix)
>>> pygame.image.save(image_pygame, 'lena_pygame.jpg')
The original size was 48.3K, and the resulting sizes are 75.5K, 75.5K, and 49.9K.
Am I missing something that changes the original picture size, even though all I did was load and save?
And is there a better library to use than OpenCV or PyGame?
JPEG is a lossy image format. When you open and save one, you’re encoding the entire image again. You can adjust the quality settings to approximate the original file size, but you’re going to lose some image quality regardless. There’s no general way to know what the original quality setting was, but if the file size is important, you could guess until you get it close.
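For example, with Pillow (one library worth considering) you might re-save while experimenting with the quality value; 85 here is just a starting guess:

from PIL import Image

im = Image.open('lena.jpg')
# ... alter some pixels here ...
im.save('lena_out.jpg', quality=85)  # adjust until the size is close to 48.3K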
The size of a JPEG output depends on 3 things:
The dimensions of the original image. In your case these are the same for all 3 examples.
The color complexity within the image. An image with a lot of detail will be bigger than one that is totally blank.
The quality setting used in the encoder. In your case you used the defaults, which appear to be higher for OpenCV than for PyGame. A higher quality setting generates a file that's closer to the original (less lossy) but larger.
Because of the lossy nature of JPEG some of this is slightly unpredictable. You can save an image with a particular quality setting, open that new image and save it again at the exact same quality setting, and it will probably be slightly different in size because of the changes introduced when you saved it the first time.
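A quick way to see this for yourself (a throwaway sketch using Pillow; the file names are placeholders):

from PIL import Image
import os

Image.open('lena.jpg').save('gen1.jpg', quality=85)
Image.open('gen1.jpg').save('gen2.jpg', quality=85)
# Same quality setting both times, yet the sizes usually differ slightly:
print(os.path.getsize('gen1.jpg'), os.path.getsize('gen2.jpg'))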