I have thousands of grayscale tiles of 256 x 256 pixels with dtype np.uint8, and I want to combine them into a single pyramidal BigTIFF image as fast as possible.
My current approach is to create a numpy array with the size of the final image, into which I paste all the tiles (this only takes a few seconds). For saving, I have looked into multiple approaches.
1) Tifffile, using the imsave function, which turned out to be very slow; I would estimate at least 10 minutes for a file that would end up at around 700 MB.
2) pyvips, by converting the massive numpy image to a pyvips image using pyvips.Image.new_from_memory, and then saving it using this:
vips_img.tiffsave(filename, tile=True, compression='lzw', bigtiff=True, pyramid=True, Q=80)
Constructing the vips_img takes ~42 seconds and saving it to disk takes another ~30, but this is all done using a single thread. I am wondering whether there is any way to do this more time-efficiently, either with a different method or by leveraging multithreading. High-speed storage is available, so things could potentially be saved in a different format first, or the work could be handed off to a different programming language if needed.
Just brainstorming: all the tiles come from an already existing BigTIFF image and have been put through a preprocessing pipeline, and now need to be saved again. I'm wondering if there could be a way to copy the original file and replace data in it efficiently.
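For the in-place idea, tifffile's memmap function might be worth a look, though this is an assumption on my part: it maps the pixel data of an existing TIFF as a writable numpy array, but only if the data is stored uncompressed and contiguously, so it would not apply to a compressed, tiled pyramid as-is. A minimal sketch (the filename is hypothetical):

import numpy as np
import tifffile

# only works for uncompressed, contiguously stored TIFFs
image = tifffile.memmap('source.tif')
image[0:256, 0:256] = np.zeros((256, 256), np.uint8)  # overwrite one tile region
image.flush()  # push the changes back to disk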
Edit with more information:
The dimensions of the image are roughly 55k by 45k, but I would like to use this code for larger images too, up to 150k by 150k, for example.
For the image of 55k by 45k and tiles of 256 by 256, we're talking about ~53k tiles. These tiles don't all contain information I'm interested in, so in the end I might keep around 50% of the tiles; the remainder of the image can be black. Saving the processed image in the same format seems the most convenient approach to me, as I would like to display it as an overlay.
Edit with intermediate solution:
Earlier I mentioned that creating a pyvips image from a numpy array took 40 seconds. The cause was that my input was a transposed numpy array. The transpose operation itself is very fast, but I suspect the underlying memory layout remained as before, which caused a lot of cache misses when reading from it in transposed form.
So currently the following line takes 30 seconds (to write a 200MB file)
vips_img.tiffsave(filename, tile=True, compression='lzw', bigtiff=True, pyramid=True, Q=80)
It would be nice if this could be faster, but it seems reasonable.
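For reference, a minimal sketch of the fix, assuming tiles_t is the transposed source array: one explicit copy into C-contiguous layout lets pyvips read the buffer sequentially.

import numpy as np

# tiles_t is a strided view after .T; ascontiguousarray copies it once
# into row-major order, avoiding cache misses during the save
contiguous = np.ascontiguousarray(tiles_t)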
Code Example
In my case, only ~15% of the tiles are interesting and will be preprocessed, but they are scattered all over the image. I would still like to save the result in a gigapixel format, as that allows me to use openslide to retrieve parts of the image through its convenient library. In the example I just generated ~15% random data to simulate the ratio of black to information, and the performance of the example is similar to the actual implementation, where the data is more scattered over the image.
import time

import numpy as np
import pyvips

def numpy2vips(a):
    # map numpy dtype names to pyvips band formats
    dtype_to_format = {
        'uint8': 'uchar',
        'int8': 'char',
        'uint16': 'ushort',
        'int16': 'short',
        'uint32': 'uint',
        'int32': 'int',
        'float32': 'float',
        'float64': 'double',
        'complex64': 'complex',
        'complex128': 'dpcomplex',
    }
    height, width, bands = a.shape
    linear = a.reshape(width * height * bands)
    # wrap the numpy buffer without copying the pixel data
    vi = pyvips.Image.new_from_memory(linear.data, width, height, bands,
                                      dtype_to_format[str(a.dtype)])
    return vi
left = np.random.randint(0, 256, (7500, 45000), np.uint8)
right = np.zeros((50000, 45000), np.uint8)
img = np.vstack((left, right))
vips_img = numpy2vips(np.expand_dims(img, axis=2))
start = time.time()
vips_img.tiffsave("t1", tile=True, compression='deflate', bigtiff=True, pyramid=True)
print("pyramid deflate took: ", time.time() - start)
start = time.time()
vips_img.tiffsave("t2", tile=True, compression='lzw', bigtiff=True, pyramid=True)
print("pyramid lzw took: ", time.time() - start)
start = time.time()
vips_img.tiffsave("t3", tile=True, compression='jpeg', bigtiff=True, pyramid=True)
print("pyramid jpg took: ", time.time() - start)
start = time.time()
vips_img.dzsave("t4", tile_size=256, depth='one', overlap=0, suffix='.jpg[Q=75]')
print("dzi took: ", time.time() - start)
Output
pyramid deflate took: 32.69183301925659
pyramid lzw took: 32.10764741897583
pyramid jpg took: 59.79427194595337
I did not wait for dzsave to finish, as it was taking more than a couple of minutes.
I tried your test program on my laptop (Ubuntu 19.10) and I see:
pyramid deflate took: 35.757954359054565
pyramid lzw took: 42.69455623626709
pyramid jpg took: 26.614688634872437
dzi took: 44.16632699966431
I'd guess you are not using libjpeg-turbo, the SIMD fork of libjpeg. Unfortunately it's very difficult to install on macOS, due to brew being stuck on the non-SIMD version, but it should be easy on your deployment system: just install the libjpeg-turbo package instead of libjpeg (they are binary-compatible).
There are various similar projects for zlib that should speed up deflate compression dramatically.
Related
I have a program that processes huge RGB images in the range of 30000x30000 px.
To load I use Pillow, which works well.
Then I process it with NumPy and then I need to save it lossless as tiff.
However, whether I'm using Pillow or OpenCV, this takes very long compared to the runtime of all the other stuff. I think this is because of the image compression. Without compression, the saving does not take long at all but my files are >2 GB.
I found the module tifffile, but it takes just as long as OpenCV, unless I missed a parameter.
Is there a module that can compress faster? The ones I tried only use one CPU core.
It also seems that it's faster on an Intel machine (i7-9700K, 16 GB) than on my PC (AMD Ryzen 5600X, 32 GB)?
Here is the code I used to test:
from PIL import Image
import cv2
import tifffile
import numpy as np
import time
arr = np.random.default_rng().integers(0, 255, size=(30000,30000,3), endpoint=True, dtype=np.uint8)
st = time.time()
Image.fromarray(arr).save("test_pil.tiff", compression="tiff_adobe_deflate")
print(f"Pil took {time.time()-st} s")
st = time.time()
cv2.imwrite("test_cv2.tiff", arr, params=(cv2.IMWRITE_TIFF_COMPRESSION, 32946))
print(f"Opencv took {time.time()-st} s")
st = time.time()
tifffile.imwrite("test_tifff.tiff", arr, compression="zlib", compressionargs={'level':5}, predictor=True, tile=(64,64))
print(f"Tifffile took {time.time()-st} s")
I know these also use different compression algorithms, but I haven't found matching parameters. This feature is generally very poorly documented.
Result (intel):
Pil took 32.01173210144043 s
Opencv took 60.46461296081543 s
Tifffile took 59.410102128982544 s
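For what it's worth, one more thing to try (an assumption based on tifffile's documented API, not something benchmarked here): tifffile's imwrite accepts a maxworkers argument that compresses tile segments in multiple threads, which targets exactly the single-core bottleneck described above.

import numpy as np
import tifffile

arr = np.random.default_rng().integers(0, 255, size=(30000, 30000, 3),
                                       endpoint=True, dtype=np.uint8)
# maxworkers spreads the per-tile zlib compression over several threads
tifffile.imwrite("test_tifff_mt.tiff", arr, compression="zlib",
                 compressionargs={'level': 5}, predictor=True,
                 tile=(64, 64), maxworkers=8)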
I am experimenting with a 3-dimensional zarr-array, stored on disk:
Name: /data
Type: zarr.core.Array
Data type: int16
Shape: (102174, 1100, 900)
Chunk shape: (12, 220, 180)
Order: C
Read-only: True
Compressor: Blosc(cname='zstd', clevel=3, shuffle=BITSHUFFLE, blocksize=0)
Store type: zarr.storage.DirectoryStore
No. bytes: 202304520000 (188.4G)
No. bytes stored: 12224487305 (11.4G)
Storage ratio: 16.5
Chunks initialized: 212875/212875
As I understand it, zarr arrays can also reside in memory, compressed, as if they were on disk. So I thought: why not try to load the entire thing into RAM on a machine with 32 GB of memory? Compressed, the dataset would require approximately 50% of RAM; uncompressed, it would require about 6 times more RAM than available.
Preparation:
import os
import zarr
from numcodecs import Blosc
import tqdm
zpath = '...' # path to zarr data folder
disk_array = zarr.open(zpath, mode = 'r')['data']
c = Blosc(cname='zstd', clevel=3, shuffle=Blosc.BITSHUFFLE)
memory_array = zarr.zeros(
    disk_array.shape, chunks=disk_array.chunks,
    dtype=disk_array.dtype, compressor=c
)
The following experiment fails almost immediately with an out of memory error:
memory_array[:, :, :] = disk_array[:, :, :]
As I understand it, disk_array[:, :, :] will try to create an uncompressed, full-size numpy array, which will obviously fail.
Second attempt, which works but is agonizingly slow:
chunk_lines = disk_array.chunks[0]
chunk_number = disk_array.shape[0] // disk_array.chunks[0]
chunk_remain = disk_array.shape[0] % disk_array.chunks[0] # unhandled ...
for chunk in tqdm.trange(chunk_number):
    chunk_slice = slice(chunk * chunk_lines, (chunk + 1) * chunk_lines)
    memory_array[chunk_slice, :, :] = disk_array[chunk_slice, :, :]
Here, I am trying to read a certain number of chunks at a time and put them into my in-memory array. It works, but it is about 6 to 7 times slower than writing the thing to disk in the first place. EDIT: yes, it's still slow, but the factor of 6 to 7 turned out to be due to a disk issue.
What's an intelligent and fast way of achieving this? I'd guess, besides not using the right approach, my chunks might also be too small - but I am not sure.
EDIT: Shape, chunk size and compression are supposed to be identical for the on-disk array and the in-memory array. It should therefore be possible to eliminate the decompress-compress procedure in my example above.
I found zarr.convenience.copy but it is marked as an experimental feature, subject to further change.
Related issue on GitHub
You could conceivably try fsspec.implementations.memory.MemoryFileSystem, which has a .get_mapper() method with which you can make the kind of mapping object expected by zarr.
However, this is really just a dict of path: io.BytesIO, which you could make yourself if you want.
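A minimal sketch of that idea (the shape, chunking, and store path are made up for illustration):

import fsspec
import zarr

# back a zarr array with fsspec's in-memory filesystem
store = fsspec.filesystem('memory').get_mapper('data.zarr')
arr = zarr.zeros((1024, 1024), chunks=(256, 256), dtype='int16', store=store)
arr[:] = 42  # chunks now live as compressed in-memory buffers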
There are a couple of ways one might solve this issue today.
1) Use LRUStoreCache to cache (some) compressed data in memory.
2) Coerce your underlying store into a dict and use that as your store.
The first option might be appropriate if you only want some frequently used data in memory. Of course, how much you cache is something you can configure, so it could be the whole array; data is only pulled in on demand, which may be useful for you.
The second option creates a new in-memory copy of the array by pulling all of the compressed data from disk. The one downside is that if you intend to write back to disk, you will need to do so manually, but it is not too difficult. The update method is pretty handy for facilitating this copying of data between different stores.
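A sketch of both options, reusing zpath from the question (the cache size is an arbitrary assumption):

import zarr
from zarr import LRUStoreCache
from zarr.storage import DirectoryStore

store = DirectoryStore(zpath)

# Option 1: an LRU cache holding up to 16 GB of compressed chunks,
# pulled into memory on first access
cached = zarr.open(LRUStoreCache(store, max_size=16 * 2**30), mode='r')['data']

# Option 2: copy every compressed chunk from disk into a plain dict
memory_store = {}
memory_store.update(store)  # update copies between stores
in_memory = zarr.open(memory_store, mode='r')['data']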
I am trying to load a data set of 1,000,000 images into memory. As standard numpy arrays (uint8), all images combined fill around 100 GB of RAM, but I need to get this down to < 50 GB while still being able to quickly read the images back into numpy (that's the whole point of keeping everything in memory). Lossless compression like blosc only reduces file size by around 10%, so I went to JPEG compression. Minimum example:
import io
import numpy as np
from PIL import Image
numpy_array = (255 * np.random.rand(256, 256, 3)).astype(np.uint8)
image = Image.fromarray(numpy_array)
output = io.BytesIO()
image.save(output, format='JPEG')
At runtime I am reading the images with:
[np.array(Image.open(output)) for _ in range(1000)]
JPEG compression is very effective (< 10 GB), but the time it takes to read 1000 images back into numpy array is around 2.3 seconds, which seriously hurts the performance of my experiments. I am searching for suggestions that give a better trade-off between compression and read-speed.
I am still not certain I understand what you are trying to do, but I created some dummy images and did some tests as follows. I'll show how I did that in case other folks feel like trying other methods and want a data set.
First, I created 1,000 images using GNU Parallel and ImageMagick like this:
parallel convert -depth 8 -size 256x256 xc:red +noise random -fill white -gravity center -pointsize 72 -annotate 0 "{}" -alpha off s_{}.png ::: {0..999}
That gives me 1,000 images called s_0.png through s_999.png: red noise tiles with the image number annotated in white in the centre.
Then I did what I think you are trying to do - though it is hard to tell from your code:
#!/usr/local/bin/python3
import io
import time
import numpy as np
from PIL import Image

# Create BytesIO object
output = io.BytesIO()

# Load all 1,000 images and write them into the BytesIO object as JPEGs
for i in range(1000):
    name = "s_{}.png".format(i)
    print("Opening image: {}".format(name))
    im = Image.open(name)
    im.save(output, format='JPEG', quality=50)
    nbytes = output.getbuffer().nbytes
    print("BytesIO size: {}".format(nbytes))

# Read back images from BytesIO into a list
start = time.perf_counter()
l = [np.array(Image.open(output)) for _ in range(1000)]
diff = time.perf_counter() - start
print("Time: {}".format(diff))
And that takes 2.4 seconds to read all 1,000 images from the BytesIO object and turn them into numpy arrays.
Then, I palettised the images by reducing them to 256 colours (which I agree is lossy, just like your method) and saved a list of palettised image objects, which I can readily convert back to numpy arrays later by simply calling:
np.array(ImageList[i].convert('RGB'))
Storing the data as a palettised image saves 66% of the space because you only store one byte of palette index per pixel rather than 3 bytes of RGB, so it is better than the 50% compression you seek.
#!/usr/local/bin/python3
import time
import numpy as np
from PIL import Image

# Empty list of images
ImageList = []

# Load all 1,000 images
for i in range(1000):
    name = "s_{}.png".format(i)
    print("Opening image: {}".format(name))
    im = Image.open(name)
    # Add palettised image to list
    ImageList.append(im.quantize(colors=256, method=2))

# Read back images into numpy arrays
start = time.perf_counter()
l = [np.array(ImageList[i].convert('RGB')) for i in range(1000)]
diff = time.perf_counter() - start
print("Time: {}".format(diff))

# Quick test
# Image.fromarray(l[999]).save("result.png")
That now takes 0.2s instead of 2.4s - let's hope the loss of colour accuracy is acceptable to your unstated application :-)
First some background
I am trying to write my own set of tools for video analysis, mainly for detecting render errors like flashing frames and possibly some other stuff in the future.
The (obvious) goal is to write a script that is faster and more accurate than me watching the file in real time.
Using OpenCV, I have something that looks like this:
import cv2

vid = cv2.VideoCapture("Video/OpenCV_Testfile.mov", cv2.CAP_FFMPEG)
width = 1024
height = 576
# the frame count property is returned as a float
length = int(vid.get(cv2.CAP_PROP_FRAME_COUNT))

for f in range(length):
    blue_values = []
    vid.set(cv2.CAP_PROP_POS_FRAMES, f)
    is_read, frame = vid.read()
    if is_read:
        for row in range(height):
            for col in range(width):
                blue_values.append(frame[row][col][0])
        print(blue_values)

vid.release()
This just prints out a list of all the blue values of every frame, just for simplicity (my actual script compares a few values across each frame and only saves the frame number when all are equal).
Although this works, it is not a very fast operation: there are nested loops, but most importantly, the read() method has to be called for every frame, which is rather slow.
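As a side note, the per-pixel loops can be replaced with a single numpy slice over the decoded frame, which removes most of the Python-level overhead (a sketch only; frame is the BGR array returned by read()):

# channel 0 of an OpenCV BGR frame is blue
blue_values = frame[:, :, 0].flatten().tolist()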
I tried to use multiprocessing but basically ended up having the same crashes as described here:
how to get frames from video in parallel using cv2 & multiprocessing in python
I have a 20 s long 1024x576@25fps test file, which performs as follows:
mov, ProRes: 15s
mp4, h.264: 30s (too slow)
My machine is capable of playing back h.264 in 1920x1080@50fps with mplayer (which uses ffmpeg to decode), so I should be able to get more out of this. Which leads me to
my Question
How can I decode a video and simply dump all pixel values into a list for further (possibly multithreaded) operations? Speed is really all that matters. Note: I'm not fixated on OpenCV. Whatever works best.
Thanks!
I have a small problem using the video creation capability of OpenCV.
For the same images, I get a weird output depending on the output size I want.
Here is an example of the results I can get.
http://www.youtube.com/watch?v=1wm8VjyfdyA&feature=youtu.be
I tried with several different sets of images, and on different computers.
It seems to run fine on Windows, but I have problems with the OpenCV that ships in the Ubuntu packages (currently 2.3.1-7).
As the problem is not reproducible on my Windows machine, I guess it was either fixed in 2.4 or is specific to Linux.
Here is a (Python) test code that highlights the problem:
import os
import cv

in_dir = "../data/inputs/sample-test"
out = "output.avi"

# loading images, create Guys and store it into guys
frameSize = (652, 498)
#frameSize = (453, 325)
fourcc = cv.CV_FOURCC('F', 'M', 'P', '4')
my_video = cv.CreateVideoWriter(out,
                                fourcc,
                                15,
                                frameSize,
                                1)
for root, _, files in os.walk(in_dir):
    for a_file in files:
        guy_source = os.path.join(in_dir, a_file)
        print guy_source
        image = cv.LoadImage(guy_source)
        small_im = cv.CreateImage(frameSize,
                                  image.depth,
                                  image.nChannels)
        cv.Resize(image, small_im, cv.CV_INTER_LINEAR)
        cv.WriteFrame(my_video, small_im)
print "Finished !"
print "Finished !"
My concern is that, depending on the output size, the video is either fine or broken (652x498 is OK, for example).
The behaviour is the same whatever codec I use.
If not a fix, I'd like some more information about the reason for this bug.
As I want to ship for Ubuntu, I'd better use their packaging system and keep 2.3 for some time.
So I would like to know how I can wisely work around the problem by choosing educated sizes.
Any information is welcome.
Thx!
This is a common problem in video coding. As you can see, the image is shifted a small amount to the left on each row.
As you may know, the image is saved as a long row of chars: BGRBGRBGR....
It is also defined by its width and height, and by its step: the distance, in bytes, between two consecutive rows. A naive assumption is that the step is 3 (channels) * width. But in addition, for memory-alignment reasons, the image rows are padded with some extra bytes, in order to make the step value a multiple of 4 (usually) or 16. The reason is that hardware codec acceleration works with aligned data: 32-bit architectures read 32 bits at once, and for SIMD processing, aligned data is loaded faster.
So the image will be represented as
BGRBGR00
BGRBGR00
Now, if a codec does not know about this padding, it will assume each row is exactly 3 * width bytes long and will interpret the data as follows:
BGRBGR
00BGRB
0000BG // note the extra padding
To make sure you do not experience this issue, you should select the image width in such a way that the step value (channels * width) is a multiple of four. All of the standard resolutions have this property, and this is one of the reasons they were chosen:
640x480
1024x768
etc
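To illustrate, a small helper (padded_step is a hypothetical name) that rounds a row length up to the alignment boundary shows why the 652-wide frame is safe while the 453-wide one is not:

def padded_step(width, channels=3, align=4):
    # round the raw row length (channels * width bytes) up to a multiple of align
    raw = width * channels
    return (raw + align - 1) // align * align

print(padded_step(652))  # 1956 -> already a multiple of 4, no padding
print(padded_step(453))  # 1360 -> raw length 1359 gets one padding byte per row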