I have a microscopy array and I want to plot it.
The shape is:
(1, 2208, 2752, 3)
And I'm trying to plot it with the following code:
from PIL import Image
im = Image.fromarray(image_array)
im.show()
And I get this error:
Traceback (most recent call last):
File "/Users/x/anaconda3/envs/x/lib/python3.6/site-packages/PIL/Image.py", line 2515, in fromarray
mode, rawmode = _fromarray_typemap[typekey]
KeyError: ((1, 1, 2752, 3), '|u1')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/x/Desktop/x/x/test.py", line 21, in <module>
im = Image.fromarray(image_arrays)
File "/Users/x/x/x/x/lib/python3.6/site-packages/PIL/Image.py", line 2517, in fromarray
raise TypeError("Cannot handle this data type")
TypeError: Cannot handle this data type
If I resize the array to (2000, 2000, 3) it works, but with the 1 in the first dimension I have no idea how to make this work. The file type is .czi and it's a normal image.
You need an x by y by (r, g, b) matrix to display an image. You seem to have a fourth dimension in your array, so I'm guessing whatever routine you are using to create it is actually producing an array of images.
Since you only have one image, you can simply remove the first axis with image_array = numpy.squeeze(image_array, axis=0), which returns the single image array of shape (2208, 2752, 3). Alternatively, you can index the first image directly: im = Image.fromarray(image_array[0]).
import numpy
from PIL import Image

# Drop the leading singleton axis: (1, 2208, 2752, 3) -> (2208, 2752, 3)
image_array = numpy.squeeze(image_array, axis=0)
im = Image.fromarray(image_array)
im.show()
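The indexing variant mentioned above works just as well; a minimal sketch, assuming image_array is still the original (1, 2208, 2752, 3) uint8 array:

from PIL import Image

# Take the first (and only) image from the stack; shape becomes (2208, 2752, 3)
im = Image.fromarray(image_array[0])
im.show()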
Related
I'm building an image classification model, but first I need to read the images used for training. I have 105,392 images, so I read the local images I have downloaded with cv2.IMREAD_GRAYSCALE and resized each image to 100x100:
train_images = []
# loop over the training files listed in train_data
for idx in range(len(train_data)):
    img_array = cv2.imread(os.path.join(path, str(train_data.filename[idx])), cv2.IMREAD_GRAYSCALE)
    new_array = cv2.resize(img_array, (img_size, img_size))
    train_images.append(new_array)
Then I add the image data to a list, convert it into a NumPy array, and try to flatten it to one dimension so I can save it into a text file for easier processing:
ndarray = np.array(train_images)
ndarray2 = np.ndarray(ndarray)
flattenarray = ndarray2.flatten(order='C')
flattenarray.tofile('train_images_bitmap.txt')
but I'm getting this error:
Traceback (most recent call last):
File "comp2.py", line 36, in <module>
ndarray2 = np.ndarray(ndarray)
ValueError: maximum supported dimension for an ndarray is 32, found 105392
Any help is appreciated!
Your array ndarray already has a shape of (105392, 100, 100); the error comes from np.ndarray(ndarray), because np.ndarray interprets its first argument as a shape tuple and therefore tries to create a 105,392-dimensional array. Drop that call and flatten the array directly:
flatten_array = ndarray.reshape(1, 105392*100*100)
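A minimal sketch of the fixed save step, assuming ndarray is the (105392, 100, 100) array built from train_images above (the np.ndarray(...) call is removed entirely):

import numpy as np

ndarray = np.array(train_images)           # shape (105392, 100, 100)
flattenarray = ndarray.flatten(order='C')  # 1-D array of length 105392*100*100
flattenarray.tofile('train_images_bitmap.txt')

# To recover the images later, reload and reshape:
# restored = np.fromfile('train_images_bitmap.txt', dtype=ndarray.dtype).reshape(105392, 100, 100)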
I'm trying to apply a flood_fill method to a certain image. Unfortunately, even though it works on an example image, it doesn't work on mine, which is already binarized.
The code that works:
from skimage import data, filters
from skimage.segmentation import flood, flood_fill
import cv2 as cv
cameraman = data.camera()
flooded = flood_fill(cameraman, (200, 100), 255, tolerance=10)
cv.imshow("aaa",flooded)
cv.waitKey()
And the code that does not:
from skimage import data, filters
from skimage.segmentation import flood, flood_fill
import cv2 as cv
import numpy as np
img = cv.imread("Tubka_binar.png")
flooded = flood_fill(img, (200, 100), 100, tolerance = 10)
cv.imshow("aaa",flooded)
cv.waitKey()
And the errors I get:
Traceback (most recent call last):
File "C:/Users/User/Documents/PW/MAGISTERSKIE/__PRACA/Python/Grubość Tuby.py", line 8, in <module>
flooded = flood_fill(img, (200, 100), 100, tolerance = 10)
File "C:\Users\User\Desktop\PROJEKT_PYTHONOWY\venv\lib\site-packages\skimage\morphology\_flood_fill.py", line 104, in flood_fill
tolerance=tolerance)
File "C:\Users\User\Desktop\PROJEKT_PYTHONOWY\venv\lib\site-packages\skimage\morphology\_flood_fill.py", line 235, in flood
working_image.shape, order=order)
File "<__array_function__ internals>", line 6, in ravel_multi_index
ValueError: parameter multi_index must be a sequence of length 3
Process finished with exit code 1
The image variables in both cases seem to be of the same type. The image I read in the second case is a binarized photo that takes only two values: 0 and 255.
What is causing this?
Best regards
It looks to me like your second image is not actually grayscale but rather loaded as a 3-channel image: cv.imread returns a 3-channel (BGR) array by default, even when the PNG on disk is grayscale. If you print img.shape, I bet it'll be something like (512, 512, 3). You can fix this by keeping only one channel when reading:
img = cv.imread("Tubka_binar.png")[..., 0]
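Alternatively, you can ask OpenCV for a single-channel image up front; a small sketch assuming the same Tubka_binar.png and seed point as in the question:

import cv2 as cv
from skimage.segmentation import flood_fill

# IMREAD_GRAYSCALE yields a 2-D array, so the (row, col) seed point matches its dimensionality
img = cv.imread("Tubka_binar.png", cv.IMREAD_GRAYSCALE)
flooded = flood_fill(img, (200, 100), 100, tolerance=10)

cv.imshow("aaa", flooded)
cv.waitKey()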
I am trying to save a grayscale image of shape (256, 256, 1) and show it in the output.
im = data.astype(np.uint8)
print im.shape
im = np.transpose(im, (2,1,0))
print im.shape
im.show()
However, I am getting the following error:
(256, 256, 1)
Traceback (most recent call last):
File "lmdb_reader.py", line 37, in <module>
plt.imshow(im)
File "/home/se/anaconda2/envs/caffeenv/lib/python2.7/site-packages/matplotlib/pyplot.py", line 3029, in imshow
**kwargs)
File "/home/se/anaconda2/envs/caffeenv/lib/python2.7/site-packages/matplotlib/__init__.py", line 1819, in inner
return func(ax, *args, **kwargs)
File "/home/se/anaconda2/envs/caffeenv/lib/python2.7/site-packages/matplotlib/axes/_axes.py", line 4922, in imshow
im.set_data(X)
File "/home/se/anaconda2/envs/caffeenv/lib/python2.7/site-packages/matplotlib/image.py", line 453, in set_data
raise TypeError("Invalid dimensions for image data")
TypeError: Invalid dimensions for image data
Note that im.show() does not exist for a NumPy array; the traceback shows plt.imshow, so it is probably just a typo in the question.
The real problem is the following:
Matplotlib's pyplot.imshow can plot images of dimension (N, M) (grayscale) or (N, M, 3) (RGB color). Your image is (N, M, 1); we therefore need to get rid of the last dimension.
import matplotlib.pyplot as plt
import numpy as np
#create data of shape (256,256,1)
data = np.random.rand(256,256,1)*255
im = data.astype(np.uint8)
print im.shape # prints (256L, 256L, 1L)
# (256,256,1) cannot be plotted, therefore
# we need to get rid of the last dimension:
im = im[:,:,0]
print im.shape # (256L, 256L)
# now the image can be plotted
plt.imshow(im, cmap="gray")
plt.show()
You need to remove the singleton dimension and specify a grayscale colormap when calling matplotlib.pyplot.imshow().
By default the function expects an RGB image when you pass a 3D array.
Example:
im = np.squeeze(im)
plt.imshow(im,cmap='gray')
plt.show()
I'm trying to zoom in an image.
import numpy as np
from scipy import misc
from scipy.ndimage.interpolation import zoom
from PIL import Image

# clipped_zoom comes from the answer linked below
zoom_factor = 0.05  # zoom to 5% of the original image
img = Image.open(filename)
image_array = misc.fromimage(img)
zoomed_img = clipped_zoom(image_array, zoom_factor)
misc.imsave('output.png', zoomed_img)
Clipped Zoom Reference:
Scipy rotate and zoom an image without changing its dimensions
This doesn't work and throws this error:
ValueError: could not broadcast input array from shape
Any help or suggestions on this? Is there a way to zoom an image given a zoom factor, and what's the problem?
Traceback:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1443, in _execute
result = method(*self.path_args, **self.path_kwargs)
File "title_apis_proxy.py", line 798, in get
image, msg = resize_image(image_local_file, aspect_ratio, image_url, scheme, radius, sigma)
File "title_apis_proxy.py", line 722, in resize_image
z = clipped_zoom(face, 0.5, order=0)
File "title_apis_proxy.py", line 745, in clipped_zoom
out[top:top+zh, left:left+zw] = zoom(img, zoom_factor, **kwargs)
ValueError: could not broadcast input array from shape (963,1291,2) into shape (963,1291,3)
The clipped_zoom function you're using from my previous answer was written for single-channel images only.
At the moment it's applying the same zoom factor to the "color" dimension as well as to the width and height dimensions of your input array. The ValueError occurs because the out array is initialized with the same number of channels as the input, but the result of zoom has fewer channels, since the zoom factor was applied to the channel axis too.
To make it work for multichannel images you could either pass each color channel separately to clipped_zoom and concatenate the results, or you could pass a tuple rather than a scalar as the zoom_factor argument to scipy.ndimage.zoom.
I've updated my previous answer using the latter approach, so that it will now work for multichannel images as well as monochrome.
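For illustration, a minimal sketch of the tuple form with scipy.ndimage.zoom, which leaves the channel axis untouched (the cropping/padding that clipped_zoom adds is omitted here):

import numpy as np
from scipy.ndimage import zoom

img = np.random.rand(963, 1291, 3)  # stand-in for the RGB input
zoom_factor = 0.5

# Zoom height and width only; a factor of 1 leaves the 3 color channels alone
zoomed = zoom(img, (zoom_factor, zoom_factor, 1), order=0)
print(zoomed.shape)  # roughly (482, 646, 3)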
I've tried to resize an image with scipy and everything seems to work fine until I try to save the image. When I try to save it I get the error you can see in the title. The full traceback is available below.
import numpy as np
import scipy.misc
from PIL import Image
image_path = "img0.jpg"
def load_image(img_path):
    img = Image.open(img_path)
    img.load()
    data = np.asarray(img, dtype="int32")
    return data

def save_image(npdata, outfilename):
    img = Image.fromarray(np.asarray(np.clip(npdata, 0, 255), dtype="uint8"), "L")
    img.save(outfilename)
array_image = load_image(image_path)
array_resized_image = scipy.misc.imresize(array_image, (320, 240), interp='nearest', mode=None)
save_image(array_resized_image, "i1.jpg")
Full traceback of the error:
Traceback (most recent call last):
File "D:/Python/Playground/resize image with scipy.py", line 26, in <module>
save_image(array_resized_image, "i1.jpg")
File "D:/Python/Playground/resize image with scipy.py", line 16, in save_image
img = Image.fromarray(np.asarray(np.clip(npdata, 0, 255), dtype="uint8"), "L")
File "C:\Anaconda2\lib\site-packages\PIL\Image.py", line 2154, in fromarray
raise ValueError("Too many dimensions: %d > %d." % (ndim, ndmax))
ValueError: Too many dimensions: 3 > 2.
Don't you need to convert it to a two-dimensional array before calling fromarray(..., 'L')?
You can do that using a scipy function or, more quickly, by multiplying the RGB channels by the standard luminance weights, like this:
npdata = (npdata[:,:,:3] * [0.2989, 0.5870, 0.1140]).sum(axis=2)
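A small sketch of how that conversion could slot into the save_image helper from the question (the weights are the usual ITU-R BT.601 luma coefficients):

import numpy as np
from PIL import Image

def save_image(npdata, outfilename):
    # Collapse RGB to a single luminance channel before building an 'L' image
    if npdata.ndim == 3:
        npdata = (npdata[:, :, :3] * [0.2989, 0.5870, 0.1140]).sum(axis=2)
    img = Image.fromarray(np.asarray(np.clip(npdata, 0, 255), dtype="uint8"), "L")
    img.save(outfilename)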
array_resized_image has a shape of (320, 240, 3): it is three-dimensional because the red, green and blue components are stored along the last axis. You can use scipy.misc.imread and scipy.misc.imsave for easier file loading and saving, so your example boils down to this:
import scipy.misc
image_path = "img0.jpg"
array_image = scipy.misc.imread(image_path)
array_resized_image = scipy.misc.imresize(array_image, (320, 240), interp='nearest', mode=None)
scipy.misc.imsave("i1.jpg", array_resized_image)
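Note that scipy.misc.imread, scipy.misc.imresize and scipy.misc.imsave were deprecated and later removed from SciPy, so on newer installations a Pillow-only version of the same pipeline is the safer route; a rough sketch:

from PIL import Image

img = Image.open("img0.jpg")
# PIL's resize takes (width, height), whereas the scipy call above used (rows, cols)
resized = img.resize((240, 320), Image.NEAREST)
resized.save("i1.jpg")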