Load portions of matrix into RAM - python

I'm writing some image processing routines for a micro-controller that supports MicroPython. The bad news is that it only has 0.5 MB of RAM. This means that if I want to work with relatively big images/matrices like 256x256, I need to treat them as collections of smaller matrices (e.g. 32x32) and perform the operation on those. Leaving aside the question of reconstructing the final output of the original (256x256) matrix from its (32x32) submatrices, I'd like to focus on how to load/save these smaller matrices from/to disk (an SD card in this case) given a big image.
Given that intro, here is my question: Assuming I have a 256x256 image on disk that I'd like to apply some operation to (e.g. convolution), what's the most convenient way of storing that image so it's easy to load it as 32x32 image patches? I've seen there is a MicroPython implementation of the pickle module; is it a good fit for my problem?

Your question contains the answer: if you need to work with 32x32 tiles, the best format is one that stores your big image as a sequence of tiles, not as one contiguous 256x256 image. (Reading tiles out of the contiguous layout is not rocket science either and should be fairly trivial to code in MicroPython, but the tile-sequence layout is more efficient, because each tile becomes a single contiguous read.)
You don't describe the exact format of your images, but I wouldn't use the pickle module for this. Store the images as raw bytes and load them into array.array() objects, using the in-place .readinto() method so that no temporary buffers are allocated.
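For illustration, here is a minimal sketch of that approach, assuming 8-bit grayscale pixels and a file that simply stores the 64 tiles of the 256x256 image back to back (the filename and layout are placeholders):
import array

TILE_W = TILE_H = 32
TILE_BYTES = TILE_W * TILE_H  # one byte per pixel

# One reusable buffer, so the 0.5 MB of RAM never gets fragmented
tile = array.array('B', bytes(TILE_BYTES))

def load_tile(f, index, buf=tile):
    # Tiles are stored consecutively, so tile N starts at byte N * TILE_BYTES
    f.seek(index * TILE_BYTES)
    f.readinto(buf)
    return buf

with open('image_tiled.bin', 'rb') as f:
    t = load_tile(f, 0)  # first 32x32 tile, read in place
Writing results back is symmetric: seek to index * TILE_BYTES and call f.write(buf) on a file opened in 'r+b' mode.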

Related

How to efficiently read part of a huge tiff image?

I have a collection of rather large tiff files (typical resolution is 30k x 30k to 50k x 50k). These have some interesting properties: 1. they contain several different resolutions of the same image, and 2. the data seems to be compressed as JPEG tiles (512x512). They also seem to be BigTIFFs.
I'd like to read specific regions of interest (more or less random access pattern) and I'd like it to be as fast as possible (~about as fast as decompressing the needed tiles plus a little bit of overhead). Decompressing the whole file, or even one of the resolution layers, is not practical. I'm using python.
I tried opening with skimage.io.MultiImage('test.tiff')[level], and this works (produces correct data). It does decompress each resolution layer completely, although not the whole file; so it is okay at the low resolutions but not really useful at the highest ones. There didn't seem to be any way to select a region of interest in skimage.io, or in any of the libraries it uses (imread, PIL, ...).
I also tried OpenSlide, using img.read_region((x,y), level, (width, height)). This library seems made exactly for this type of data, and is very fast, but unfortunately it produces incorrect data for some regions. Until the bug is fixed upstream, I can't use it.
Lastly, using a very recent version of tifffile=2020.6.3 and imagecodecs=2020.5.30 (older versions don't work - I think at least 2018.10.18 is needed), I could list the tiles, using code modified from the tifffile documentation:
import tifffile

with tifffile.TiffFile('test.tiff') as tif:
    fh = tif.filehandle
    for page in tif.pages:
        jpegtables = page.tags.get('JPEGTables', None)
        if jpegtables is not None:
            jpegtables = jpegtables.value
        for index, (offset, bytecount) in enumerate(
            zip(page.dataoffsets, page.databytecounts)
        ):
            fh.seek(offset)
            data = fh.read(bytecount)
            tile, indices, shape = page.decode(data, index, jpegtables)
            print(tile.shape, indices, shape)
It seems the page.decode() call actually decompresses each tile (tile is a numpy array with pixel data). It is not obvious how to get just the tile indices without decompressing, and I'm also not sure how fast this would be. This leaves any selection of a region of interest and reassembly of tiles as an exercise for the user.
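For what it's worth, here is a rough sketch of what that selection and reassembly might look like, built on the loop above. read_roi and its assumptions about the decoded tile layout are mine, not part of the tifffile API, so treat it as a starting point rather than a tested solution:
import numpy as np

def read_roi(page, fh, jpegtables, x0, y0, w, h):
    # Decode only the tiles that intersect the w-by-h region at (x0, y0)
    tw, th = page.tilewidth, page.tilelength
    tiles_across = (page.imagewidth + tw - 1) // tw
    out = np.zeros((h, w, page.samplesperpixel), dtype=page.dtype)
    for ty in range(y0 // th, (y0 + h - 1) // th + 1):
        for tx in range(x0 // tw, (x0 + w - 1) // tw + 1):
            index = ty * tiles_across + tx
            fh.seek(page.dataoffsets[index])
            data = fh.read(page.databytecounts[index])
            tile, _, _ = page.decode(data, index, jpegtables)
            tile = tile.reshape(th, tw, -1)  # assumed layout of the decoded tile
            # Intersection of this tile with the ROI, in image coordinates
            ix0, iy0 = max(x0, tx * tw), max(y0, ty * th)
            ix1, iy1 = min(x0 + w, (tx + 1) * tw), min(y0 + h, (ty + 1) * th)
            out[iy0 - y0:iy1 - y0, ix0 - x0:ix1 - x0] = \
                tile[iy0 - ty * th:iy1 - ty * th, ix0 - tx * tw:ix1 - tx * tw]
    return out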
How do I efficiently read regions of interest out of files like this? Does someone have example code to do that with tifffile? Or, is there another library that would do the trick?

Presenting parts of a pre-prepared image array in Shady

I'm interested in migrating from Psychtoolbox to Shady for my stimulus presentation. I looked through the online docs, but it is not clear to me how to replicate in Shady what I'm currently doing in Matlab.
What I do is actually very simple. For each trial,
I load from disk a single image (I do luminance linearization off-line), which contains all the frames I plan to display in that trial (the stimulus is 1000x1000 px and I present 25 frames, hence the image is 5000x5000 px; I only use BW images, so I have a single int8 value per pixel).
I transfer the entire image from the CPU to the GPU
At some point (externally controlled) I copy the first frame to the video buffer and present it
At some other point (externally controlled) I trigger the presentation of the remaining 24 frames (copying the relevant part of the image to the video buffer for each video frame, and then calling flip()).
The external control happens by having another machine communicate with the stimulus presentation code over TCP/IP. After the control PC sends a command to the presentation PC and this is executed, the presentation PC needs to send back an acknowledgement message to the control PC. I need to send three ACK messages, one when the first frame appears on screen, one when the 2nd frame appears on screen, and one when the 25th frame appears on screen (this way the control PC can easily verify if a frame has been dropped).
In Matlab I do this by calling the blocking method flip() to present a frame; when it returns, I send the ACK to the control PC.
That's it. How would I do that in Shady? Is there an example that I should look at?
The places to look for this information are the docstrings of Shady.Stimulus and Shady.Stimulus.LoadTexture, as well as the included example script animated-textures.py.
Like most things Python, there are multiple ways to do what you want. Here's how I would do it:
w = Shady.World()
s = w.Stimulus( [frame00, frame01, frame02, ...], multipage=True )
where each frameNN is a 1000x1000-pixel numpy array (either floating-point or uint8).
Alternatively you can ask Shady to load directly from disk:
s = w.Stimulus('trial01/*.png', multipage=True)
where directory trial01 contains twenty-five 1000x1000-pixel image files, named (say) 00.png through 24.png so that they get sorted correctly. Or you could supply an explicit list of filenames.
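For example, an explicit list avoids relying on sort order entirely (the filename pattern here is just the one assumed above):
s = w.Stimulus(['trial01/%02d.png' % i for i in range(25)], multipage=True)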
Either way, whether you loaded from memory or from disk, the frames are all transferred to the graphics card in that call. You can then (time-critically) switch between them with:
s.page = 0 # or any number up to 24 in your case
Note that, due to our use of the multipage option, we're using the "page" animation mechanism (create one OpenGL texture per frame) instead of the default "frame" mechanism (create one 1000x25000 OpenGL texture) because the latter would exceed the maximum allowable dimensions for a single texture on many graphics cards. The distinction between these mechanisms is discussed in the docstring for the Shady.Stimulus class as well as in the aforementioned interactive demo:
python -m Shady demo animated-textures
To prepare the next trial, you might use .LoadPages() (new in Shady version 1.8.7). This loops through the existing "pages", loads new textures into the previously used graphics-card texture buffers, and adds further pages as necessary:
s.LoadPages('trial02/*.png')
Now, you mention that your established workflow is to concatenate the frames as a single 5000x5000-pixel image. My solutions above assume that you have done the work of cutting it up again into 1000x1000-pixel frames, presumably using numpy calls (sounds like you might be doing the equivalent in Matlab at the moment). If you're going to keep saving as 5000x5000, the best way of staying in control of things might indeed be to maintain your own code for cutting it up. But it's worth mentioning that you could take the entirely different strategy of transferring it all in one go:
s = w.Stimulus('trial01_5000x5000.png', size=1000)
This loads the entire pre-prepared 5000x5000 image from disk (or again from memory, if you want to pass a 5000x5000 numpy array instead of a filename) into a single texture in the graphics card's memory. However, because of the size specification, the Stimulus will only show the lower-left 1000x1000-pixel portion of the array. You can then switch "frames" by shifting the carrier relative to the envelope. For example, if you were to say:
s.carrierTranslation = [-1000, -2000]
then you would be looking at the frame located one "column" across and two "rows" up in your 5x5 array.
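To generalize, a small helper along these lines would map frame numbers to translations. This is my own sketch, not part of Shady, and it assumes frame 0 sits in the lower-left corner as described above:
def frame_translation(n, frame_px=1000, cols=5):
    # Frame n lives at column n % cols, row n // cols of the 5x5 mosaic
    col, row = n % cols, n // cols
    return [-col * frame_px, -row * frame_px]

s.carrierTranslation = frame_translation(11)  # column 1, row 2, i.e. [-1000, -2000]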
As a final note, remember that you could take advantage of Shady's on-the-fly gamma-correction and dithering; they're happening anyway unless you explicitly disable them, though of course they have no physical effect if you leave the stimulus .gamma at 1.0 and use integer pixel values. So you could generate your stimuli as separate 1000x1000 arrays, each containing unlinearized floating-point values in the range [0.0, 1.0], and let Shady worry about everything beyond that.

How to improve the quality of function ‘image.save()’?

I copied an image to the clipboard, and when I read the image data from the clipboard and saved it locally, the image quality changed. How can I save it so as to maintain the original high quality?
from PIL import ImageGrab
im = ImageGrab.grabclipboard()
im.save('somefile.png','PNG')
I tried adding the parameter 'quality=95' in im.save(), but it didn't work. The original file is 131K, and the saved file is 112K.
The size of the file is not directly related to the quality of the image. It also depends on how efficiently the encoder does its job. As it is PNG, the process is lossless, so you don't need to worry - the quality is retained.
Note that the quality parameter has a different meaning when saving JPEG files versus PNG files:
With JPEG files, if you specify a lower quality you are effectively allowing the encoder to discard more information and give up image quality in return for a smaller file size.
With PNG, encoding and decoding are lossless. The quality setting is a hint to the encoder about how much time to spend compressing the file (always losslessly) and about the types of filtering/encoding that may suit it best. It is more akin to gzip's --best or --fast options.
Further information about PNG format is here on Wikipedia.
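In Pillow specifically, the PNG writer exposes this effort setting as compress_level (a 0-9 zlib level, default 6) rather than quality, so if you want the smallest file at the cost of encoding time:
from PIL import ImageGrab

im = ImageGrab.grabclipboard()
im.save('somefile.png', 'PNG', compress_level=9)  # slowest, smallest, still lossless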
Without analysing the content of the two images it is impossible to say why the sizes differ - there could be many reasons:
One encoder may have noticed that the image contains fewer than 256 colours and decided to use a palette, whereas the other may not have done. That could make the image sizes differ by a factor of 3, yet the quality would be identical.
One encoder may use a larger buffer and spend longer looking for repeating patterns in the image. For a simplistic example, imagine the image was 32,000 pixels wide and each line was the same as the one above. If one encoder uses an 8kB buffer, it can never spot that the image just repeats over and over down the page so it has to encode every single line in full, whereas an encoder with a 64kB buffer might just be able to use 1 byte per line and use the PNG filtering to say "same as line above".
One encoder might decide, on grounds of simplicity of code or for lack of code space, to always encode the data in a 16-bit version even if it could use just 8 bits.
One encoder might decide it is always going to store an alpha layer even if it is opaque, because that may make the code/data cleaner and simpler.
One encoder may always elect to do no filtering, whilst the other has the code required to do sub, up, average or Paeth filtering.
One encoder may not have enough memory to hold the entire image, so it may have to use a simplistic approach to be assured that it can handle whatever turns up later in the image stream.
I just made these examples up - don't take them as gospel - I am just trying to illustrate some possibilities.
To reproduce an exact copy of a file from the clipboard, the clipboard must contain a byte-for-byte copy of the original. This does not happen when the content comes from the "Copy" function in a program.
In theory a program could be created to do that by setting a blob-type object with a copy of the original file, but that would be highly inefficient and defeat the purpose of the clipboard.
Some points:
- When you copy into the clipboard using the file manager, the clipboard will hold a reference to the original file (not the entire file, which could potentially be much larger than RAM).
- Most programs will set the clipboard contents to some "useful version" of the displayed or selected data. This is very much subject to interpretation by the creator of the program.
- Parsing the clipboard content when reading an image is again subject to the whims of the library used to process the data and pack it back into an image format.
Generally if you want to copy a file exactly you will be better off just copying the original file.
Having said that: Evaluate the purpose of the copy-paste process and decide whether the data you get from the clipboard is "good enough" for the intended purpose. This obviously depends on what you want to use it for.

Fast access of Large Image Dataset

I have multiple large image datasets, each containing north of 500 images at 1080p or 4K. I need to process two consecutive images at a time (so, pairwise), and the processing needs to be done on individual pixels.
I have tried Python (PIL), but loading the images into memory and then accessing the pixel values takes a lot of time.
What is the fastest way to load these images into memory and run some algorithms on them?
PS: A solution in any language would do.
Thanks

How fast is the Python Imaging Library's (PIL) ImageDraw module, for instance as compared to OpenGL?

I have some .png images and I want to be able to quickly:
(a) Load a .png from a file.
(b) Draw some simple lines on top of the .png.
(c) Get the contents (bytes) of the resulting image to return as the result of an http request.
It sounds like PIL is a good candidate for doing this with relatively little code. However, I'm trying to understand how efficient it is, especially when I have, say, thousands of lines to draw in step (b). The alternative is using PyOpenGL, but before getting into that I wanted to understand if PIL was already fast enough.
I was going to ask if PIL used OpenGL under the covers. But that might be the wrong question, because my understanding is that to get the real speed benefit from PyOpenGL I'd want to submit my line vertexes as NumPy arrays. So presumably even if PIL uses OpenGL, I'm going to lose a lot of that benefit when I make an individual PIL call to draw each of my lines?
Anybody have concrete data for speed of PIL when drawing lots of primitives?
"Draw some simple lines on top of the .png" is not a computationally intensive task.
This doesn't seem to be a good candidate for the GPU, since GPUs are better suited to more complex tasks. Keep in mind that the image is initially loaded into RAM, so it's your job to send that data to GPU memory and then retrieve it back. That round trip costs a few milliseconds, depending on the size of the image, which could be better spent on CPU processing.
Your application would only benefit from the GPU if it had high arithmetic intensity.
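For reference, the PIL route for steps (a)-(c) is only a few lines. A rough sketch, where input.png and the lines list stand in for your actual data:
import io
from PIL import Image, ImageDraw

lines = [(0, 0, 100, 100), (0, 100, 100, 0)]  # placeholder line segments

im = Image.open('input.png').convert('RGB')  # (a) load the PNG
draw = ImageDraw.Draw(im)
for x0, y0, x1, y1 in lines:                 # (b) draw each line
    draw.line((x0, y0, x1, y1), fill=(255, 0, 0))

buf = io.BytesIO()                           # (c) PNG bytes for the HTTP response
im.save(buf, format='PNG')
png_bytes = buf.getvalue()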
