I'm trying to save a numpy array as a PNG, but the output is always blurry or the image is very small. I've looked around SO and various other sources, but I haven't been able to figure this one out.
(1) Using the following technique I get an image of the appropriate size (800x600px) but the matrix is blurry:
import matplotlib.pyplot as plt
plt.imshow(matrix)
plt.savefig(filename)
(2) Using another technique, the image is very small (60x60px) but the matrix is not blurry:
import matplotlib
matplotlib.image.imsave('name.png', array)
(3) After running my python program to produce a numpy array A, if I then run the command
matshow(A)
and then I hit the save icon to save the image, the result is an image of a larger size (800x600px) and the matrix is not blurry.
Using technique (3) wouldn't be an issue, but the thing is I have a lot of numpy arrays that I need to save as images, so it would be very time consuming to use technique (3) for each array.
Note: I can't post the images because I don't have enough reputation, so here is a link to them instead: https://www.dropbox.com/sh/mohtpm97xyy8ahi/AACgIpY5ECBDIbcnNRS9kd8La?dl=0
If anyone has had success producing clearer images of a larger size, like those in technique 3, any insight would be much appreciated.
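For anyone in the same spot, here is a minimal sketch (my own assumption, not verified against the linked images) that combines the two techniques: the figure canvas fixes the pixel size, and interpolation='nearest' removes the smoothing that causes the blur:

import matplotlib.pyplot as plt

# figsize (inches) * dpi gives the pixel size: 8x6 in at 100 dpi -> 800x600 px
fig = plt.figure(figsize=(8, 6), dpi=100)
ax = fig.add_axes([0, 0, 1, 1])  # axes filling the whole canvas, no margins
ax.set_axis_off()
ax.imshow(matrix, interpolation='nearest', aspect='auto')  # 'nearest' keeps hard pixel edges
fig.savefig('name.png', dpi=100)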
Related
I am working on GAN networks, and I have a numpy array of images with shape (26600, 256, 256, 3). Is there any method that can resize the images inside that numpy array, so that the output is a numpy array of shape (26600, 64, 64, 3)?
P.S. Don't take the exact sizes I provided into consideration; I just want to know the method without resizing the images manually. Thank you.
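One common approach (a sketch, assuming uint8 RGB images; cv2.resize handles one image at a time, so the batch is looped):

import numpy as np
import cv2

def resize_batch(images, size=(64, 64)):
    # images: (N, H, W, C) -> returns (N, size[1], size[0], C)
    out = np.empty((images.shape[0], size[1], size[0], images.shape[3]), dtype=images.dtype)
    for i, img in enumerate(images):
        # cv2.resize takes dsize as (width, height); INTER_AREA suits downscaling
        out[i] = cv2.resize(img, dsize=size, interpolation=cv2.INTER_AREA)
    return out

small = resize_batch(images)  # images: the hypothetical (26600, 256, 256, 3) array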
I have some 3D NIfTI datasets of brain MRI scans (FLAIR, T1, T2, ...).
The FLAIR scans, for example, are 144x512x512 with a voxel size of 1.1 x 0.5 x 0.5, and I want 2D slices from the axial, coronal, and sagittal views, which I use as input for my CNN.
What I want to do:
Read in the .nii files with nibabel, save them as numpy arrays, and store the axial, coronal, and sagittal slices as 2D PNGs.
What I tried:
- the med2image Python library
- my own Python script using nibabel, numpy, and image
PROBLEM: The axial and coronal pictures are somehow stretched in one direction. Sagittal works out like it should.
I tried to debug the Python script and used matplotlib to show the array that I get after
image = nibabel.load(inputfile)
image_array=image.get_fdata()
by using for example:
plt.imshow(image_array[:,:, 250])
plt.show()
and found out that the data is already stretched at that point.
I figured out how to get the desired output with
header = image.header
sX=header['pixdim'][1]
sY=header['pixdim'][2]
sZ=header['pixdim'][3]
plt.imshow(image_array[:,:, 250],aspect=sX/sZ)
But how can I apply something like "aspect" when saving my image? Or is there a possibility to load the .nii file with parameters like that in the first place, so that I have data I can work with?
It looks like the pixel dimensions are not taken into account when nibabel loads the .nii image, and unfortunately I haven't found a way around this.
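One option (a sketch, not the original answer's approach, reusing image_array, sX, and sZ from the snippets above) is to resample the slice itself using the header spacings before writing the PNG, e.g. with scipy.ndimage.zoom, so the saved pixels come out square:

import numpy as np
import imageio
from scipy import ndimage

slice2d = image_array[:, :, 250]
# stretch axis 0 by the same ratio used for imshow's aspect above
resampled = ndimage.zoom(slice2d, zoom=(sX / sZ, 1.0), order=1)
# rescale to uint8 before writing (assumes float intensities)
lo, hi = resampled.min(), resampled.max()
imageio.imwrite('slice_250.png', np.uint8(255 * (resampled - lo) / (hi - lo + 1e-8)))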
I found out that it doesn't make a difference for training my ML model whether the pictures are stretched or not, since I also do this in data augmentation.
Opening the NIfTI volumes in Slicer or MRIcroGL showed them as expected, since these programs take the header into account. And the predictions were perfectly fine (even though the pictures looked "stretched" when saved slice-wise).
Still, it annoyed me to look at stretched pictures, so I implemented some resizing with cv2:
import os
import cv2
import numpy

def saveSlice(img, fname, path):
    # img is expected to be a 2D array scaled to [0, 1]
    img = numpy.uint8(img * 255)
    fout = os.path.join(path, f'{fname}.png')
    # IMAGE_WIDTH and IMAGE_HEIGHT are module-level constants
    img = cv2.resize(img, dsize=(IMAGE_WIDTH, IMAGE_HEIGHT), interpolation=cv2.INTER_LINEAR)
    cv2.imwrite(fout, img)
    print(f'[+] Slice saved: {fout}', end='\r')
The results are really good and it works pretty well for me.
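For reference, a hypothetical driver loop over the axial slices, reusing saveSlice from above (assumes the volume is first normalized to [0, 1], which the *255 mapping expects):

vol = image_array / image_array.max()  # scale to [0, 1]; assumes a nonzero max
os.makedirs('./slices_axial', exist_ok=True)
for k in range(vol.shape[2]):
    saveSlice(vol[:, :, k], f'axial_{k:03d}', './slices_axial')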
I'm training a neural net using simulated images, and one of the things that happens in real life is low quality JPEG compression. It fuzzes up sharp edges in a particular way. Does anyone have an efficient way to simulate these effects? By that I mean create a corrupted version of a clean input. The images are grayscale, stored as numpy arrays.
Thanks to the answers in the comments, here is a solution which saves the image as a JPEG and reads it back in, all in memory, using standard Python libraries.
import io
import imageio

# im is a 2D numpy array; q is the JPEG quality, 0-100
def jpegBlur(im, q):
    buf = io.BytesIO()
    imageio.imwrite(buf, im, format='jpg', quality=q)
    s = buf.getbuffer()
    return imageio.imread(s, format='jpg')
In my function I also pre- and post-scaled the image to convert from float64 to uint8 and back again, but this is the basic idea.
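That pre-/post-scaling might look like this (a sketch, reusing jpegBlur above and assuming the clean images are float64 in [0, 1]):

import numpy as np

def jpegBlurFloat(im, q):
    u8 = np.uint8(np.clip(im, 0.0, 1.0) * 255)         # float64 [0, 1] -> uint8
    return jpegBlur(u8, q).astype(np.float64) / 255.0  # back to float64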
I have CSV files that I need to feed to a Deep-Learning network. Currently my CSV files are of size 360*480, but the network restricts them to be of size 224*224. I am using Python and Keras for the deep-learning part. So how can I resize the matrices?
I was thinking that since the aspect ratio is 3:4, if I resize them to 224:(224*4/3) = 224:299 and then crop the width of the matrix to 224, it could serve the purpose. But I cannot find a suitable function to do that. Please suggest.
I think you're looking for cv.resize() if you're using images.
If not, try numpy.ndarray.resize()
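For what it's worth, a sketch of the cv2 route for the asker's 360x480 case (the path and dtype are assumptions on my part):

import numpy as np
import cv2

mat = np.loadtxt('./path/mat.csv', delimiter=',').astype(np.float32)  # shape (360, 480)
# dsize is (width, height): scale 360x480 -> 224x299, then center-crop the width
resized = cv2.resize(mat, dsize=(299, 224), interpolation=cv2.INTER_LINEAR)
start = (resized.shape[1] - 224) // 2
cropped = resized[:, start:start + 224]  # final shape (224, 224)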
Image processing
If you want to do nontrivial alterations to the data as images (i.e. interpolating between pixel values, assuming that they represent photographs), then you might want to use proper image-processing libraries. You'd need to treat them not as raw matrices (CSVs of numbers) but convert them to RGB images, do the transformations you desire, and convert them back to a numpy matrix.
OpenCV (https://docs.opencv.org/3.4/da/d6e/tutorial_py_geometric_transformations.html)
or Pillow (https://pillow.readthedocs.io/en/3.1.x/reference/Image.html) might be useful to do that.
I found a short and simple way to solve this using the Python Imaging Library/Pillow.
import csv
import numpy as np
from PIL import Image

matrix = np.array(list(csv.reader(open('./path/mat.csv', "r"), delimiter=","))).astype("uint8")  # read csv
imgObj = Image.fromarray(matrix)            # convert matrix to Image object
resized_imgObj = imgObj.resize((224, 224))  # resize Image object
imgObj.show()
resized_imgObj.show()
resized_matrix = np.asarray(resized_imgObj)  # convert Image object back to matrix
The numpy module also has a resize function, but it is not as useful as the aforementioned approach.
When I tried it, the resized matrix had lost all the intricacies and aesthetic aspects of the original. This is probably because numpy.ndarray.resize doesn't interpolate; missing entries are filled with zeros.
So, for this case, Image.resize() is more useful.
You could also convert the CSV file to a list, truncate the list, convert it to a numpy array, and then use np.reshape, as in the sketch below.
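A sketch of that truncate-and-reshape idea, reusing matrix from the snippet above (note it crops trailing values rather than interpolating):

flat = matrix.flatten()[:224 * 224]   # keep only the first 224*224 values
small = np.reshape(flat, (224, 224))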
I would like to know why I should or shouldn't use a matplotlib image over a PIL image. I know that matplotlib uses PIL to load any image that is not a PNG, but what is the advantage in having it in a numpy array over the PIL backend representation?
The PIL API includes functions for performing a wide range of image manipulations. But of course, it does not provide a function for every conceivable operation. Numpy is useful when you have a mathematical operation to perform on the image which is not built into the PIL API. PIL has a way of altering pixels one by one, but because of its reliance on Python loops it can be a very slow way to manipulate a large image (or many images).
Numpy math is relatively fast and it has an expressive syntax, which can make coding new image manipulations easier. Moreover, scipy has many additional image-manipulating functions that can be applied to numpy arrays.
Here are a few examples:
- emboss
- converting rgb to hsv
- replacing a color (see the sketch below)
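For instance, a sketch of the last one, replacing a color via numpy boolean masking (the file name is made up):

import numpy as np
from PIL import Image

img = np.asarray(Image.open('photo.png').convert('RGB')).copy()
mask = np.all(img == (255, 0, 0), axis=-1)  # pixels that are pure red
img[mask] = (0, 255, 0)                     # turn them green
Image.fromarray(img).save('photo_recolored.png')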