Is it possible to use the Python wrappers for GDCM to decode the image data in a DICOM file?
I have a byte array / string with the bytes of the image data in a DICOM file (i.e. the contents of tag 7fe0,0010 ("Pixel Data")) and I want to decode that image data to raw RGB or greyscale.
I am thinking of something along the lines of this but working with just the image data and not a path to the actual DICOM file itself.
Have a look at the GDCM examples; one of them shows how to take a compressed DICOM file as input and convert it to an uncompressed one. See the code online here:
Decompress Image
If you are a big fan of NumPy, checkout:
This module adds support for converting a gdcm.Image to a numpy array.
This is a fairly low-level example, which shows how to retrieve the actual raw buffer of the image.
A much nicer way to handle transfer syntax conversion is the gdcm.ImageChangeTransferSyntax class (which also allows decompression of the icon):
gdcm::ImageChangeTransferSyntax Class Reference
If you do not mind reading a little C++, you can trivially convert the following code from C++ to Python:
Compress Image
Note that this example actually compresses the image rather than decompressing it.
Finally, if you really only have access to the data values contained in the Pixel Data attribute, then you should have a look at the following (the C# syntax should be close to the Python syntax):
Decompress JPEG File
This example shows how one can decompress a JPEG Lossless, Nonhierarchical, First-Order Prediction file. The JPEG data is read from a file, but it should also work in your case where the data is already in memory.
I am trying to read image.jpg (RGB) into an array in Python without any additional modules, but it doesn't work:
pic = open('image.jpg')
array = []
with open(p, 'rb') as inf:
    jpgdata = inf.read()
values = jpgdata.split()
array = array.append(values[:][:])
print(array)
Can anyone tell me how to read an image's 3 bands (RGB) in Python without using an external module?
A JPEG image is not just a series of pixels, unlike some other formats like BMP.
In order to get the pixel data from a JPEG image you need to decompress it, which involves reading its header data, then rebuilding the data from 8x8px blocks which contain information regarding the brightness and color (YCbCr).
You need to:
Build the Huffman trees and decode the blocks
Invert the discrete cosine transform with the given parameters
Convert the YCbCr values back to RGB
Place each block into its corresponding location in the image
Building a simple decoder from scratch is certainly possible, but it's not going to be done in a few lines.
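Even without writing a full decoder, the first step (reading the header data) is doable with the standard library alone. This sketch walks the JPEG marker segments and pulls the image dimensions out of the SOF frame header; it assumes a well-formed stream with an SOF marker present:

```python
import struct

def jpeg_size(data):
    """Scan JPEG marker segments and return (width, height) from the SOF header."""
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    i = 2
    while i + 4 <= len(data):
        assert data[i] == 0xFF, "expected a marker byte"
        marker = data[i + 1]
        i += 2
        # Standalone markers (SOI, TEM, RSTn) carry no length field.
        if marker in (0xD8, 0x01) or 0xD0 <= marker <= 0xD7:
            continue
        (length,) = struct.unpack_from(">H", data, i)
        # SOF0..SOF15 (excluding DHT/JPG/DAC) hold precision, height, width.
        if 0xC0 <= marker <= 0xCF and marker not in (0xC4, 0xC8, 0xCC):
            height, width = struct.unpack_from(">HH", data, i + 3)
            return width, height
        i += length  # the length field includes its own two bytes
    return None
```

The entropy-coded data after the SOS marker is where the Huffman and DCT steps above come in; this sketch only covers the header-parsing step.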
I'm working on an image transmission project in which my JPEG image must be transferred via LoRa, so there are a lot of limitations.
I'm transferring the image in small chunks, but they are still too large, and I can't shrink them by dividing the image further because the time to send each chunk is significant.
So I'm looking for a way to compress the data of these small chunks, but I haven't found anything in Python that lets me do this with an image loaded with Pillow.
Note that I don't want to resize the image, just to compress its data.
I'm looking for suggestions on how to do this.
I should say that I'm willing to drop Pillow if necessary.
One strange effect I can't explain is that I never get a chunk smaller than about 600 bytes; I need something closer to 300 bytes.
Modern image formats such as PNG and JPEG are already compressed, and my general recommendation is to take Brendan Long's advice: use those formats and exploit all the work that has been put into them.
That said, if you want to compress the contents of any arbitrary file in Python, here's a very simple example:
import zlib

with open("MyImage.jpg", "rb") as in_file:
    compressed = zlib.compress(in_file.read(), 9)

with open("MyCompressedFile", "wb") as out_file:
    out_file.write(compressed)
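Be aware, though, that this only helps for data with redundancy. JPEG payloads are already entropy-coded and look statistically random, so zlib gains almost nothing on them, which a quick self-contained demonstration shows:

```python
import os
import zlib

random_ish = os.urandom(10_000)   # stands in for already-compressed JPEG data
repetitive = b"pixel" * 2_000     # highly redundant data of the same size

# zlib can't shrink (pseudo)random input, but crushes repetitive input.
print(len(zlib.compress(random_ish, 9)))   # roughly 10,000 -- no real gain
print(len(zlib.compress(repetitive, 9)))   # a few dozen bytes
```

That's likely why your chunks refuse to get smaller: the JPEG data simply has no redundancy left to squeeze out.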
I'm an experienced Python programmer with plenty of image manipulation and computer vision experience. I'm very familiar with all of the standard tools like PIL, Pillow, opencv, numpy, and scikit-image.
How would I go about reading an image into a Python data format like a nested list, bytearray, or similar, if I only had the standard library to work with?
I realize that different image formats have different specifications. My question is how I would even begin to build a function that reads any given format.
NOTE Python 2.6 had a jpeg module in the standard library that has since been deprecated. Let's not discuss that since it is unsupported.
If you're asking how to implement these formats "from scratch" (since the standard libraries don't do this), then a good starting point would be the format specification.
For PNG, this is https://www.w3.org/TR/2003/REC-PNG-20031110/. It defines the makeup of a PNG stream, consisting of the signature (eight bytes, 8950 4e47 0d0a 1a0a, which identifies the file as a PNG image) and a number of data chunks that contain meta data, palette information and the image itself. (It's certainly a substantial project to take on, if you really don't want to use the existing libraries, but not overly so.)
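As a taste of what the spec describes, validating the signature and reading the dimensions from the IHDR chunk (which the spec requires to come first) takes only a few lines of standard-library Python:

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"  # the eight-byte signature from the spec

def png_size(data):
    """Validate the PNG signature and read width/height from the IHDR chunk."""
    assert data[:8] == PNG_SIGNATURE, "not a PNG stream"
    # Each chunk is: 4-byte length, 4-byte type, data, 4-byte CRC.
    length, ctype = struct.unpack_from(">I4s", data, 8)
    assert ctype == b"IHDR" and length == 13, "malformed first chunk"
    width, height = struct.unpack_from(">II", data, 16)
    return width, height
```

A full decoder would go on to collect the IDAT chunks, zlib-inflate them, and undo the per-scanline filters, but the chunk-walking scaffolding stays this simple.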
For BMP, it's a bit easier since the file already contains the uncompressed pixel data and you only need to know how to find the size and offset; some of the format definition is on Wikipedia (https://en.wikipedia.org/wiki/BMP_file_format) and here: http://www.digicamsoft.com/bmp/bmp.html
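To illustrate, here's a minimal standard-library reader for the common uncompressed 24-bit BITMAPINFOHEADER case (no palette, no compression; other BMP variants need more work):

```python
import struct

def read_bmp24(data):
    """Return rows of (r, g, b) tuples, top-to-bottom, from a 24-bit BMP."""
    assert data[:2] == b"BM", "not a BMP file"
    (offset,) = struct.unpack_from("<I", data, 10)   # where pixel data starts
    width, height = struct.unpack_from("<ii", data, 18)
    (bpp,) = struct.unpack_from("<H", data, 28)
    assert bpp == 24, "only uncompressed 24-bit BMPs handled here"
    row_size = (width * 3 + 3) & ~3                  # rows are padded to 4 bytes
    rows = []
    for y in range(abs(height)):
        base = offset + y * row_size
        row = []
        for x in range(width):
            b, g, r = data[base + 3 * x: base + 3 * x + 3]  # stored as BGR
            row.append((r, g, b))
        rows.append(row)
    if height > 0:        # positive height means the rows are stored bottom-up
        rows.reverse()
    return rows
```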
JPG is much trickier. The file doesn't store pixels directly, but rather quantized DCT (discrete cosine transform) coefficients that must be transformed back into the pixel map you see on the screen. To read this format, you'll need to implement that inverse transformation.
I know that ghostscript can convert PDF to TIFF and even has Python bindings, but I'm wondering whether there is a way to avoid writing the resulting TIFF to disk (-sOutputFile=/path/to/file.tiff). Rather, I want to keep the resulting TIFF in memory and use it as a PIL image.
Basically, no, not using the standard devices. That's because the strip size of the TIFF file (and potentially other entries in the TIFF header) can't be written until after the compressed bitmap has been created because the sizes aren't known. So you need to be able to seek back to the beginning and update the header.
Now you could modify the standard TIFF output devices so that they maintain the output in memory, rather than writing to disk, but that's not how they currently work.
I'm looking to be able to read in pixel values as captured in a raw NEF image, process the data for noise removal, and then save the new values back into the raw image format maintaining all the metadata for later use. I've seen dcraw can read in raw format and output the Bayer pattern data as a tiff or other image but I can't save it back to my NEF. I've also been attempting to read in and save the image with simple python file open or numpy memmap but have no clue how to handle the binary data. Any help would be appreciated. Thanks!