I'm trying to zoom in on an image.
import numpy as np
from scipy import misc
from scipy.ndimage.interpolation import zoom
from PIL import Image

zoom_factor = 0.05  # 5% of the original image
img = Image.open(filename)  # filename is defined elsewhere
image_array = misc.fromimage(img)
zoomed_img = clipped_zoom(image_array, zoom_factor)  # clipped_zoom from the reference below
misc.imsave('output.png', zoomed_img)
Clipped Zoom Reference:
Scipy rotate and zoom an image without changing its dimensions
This doesn't work and throws this error:
ValueError: could not broadcast input array from shape
Is there a way to zoom an image given a zoom factor? And what's the problem? Any help or suggestions would be appreciated.
Traceback:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1443, in _execute
result = method(*self.path_args, **self.path_kwargs)
File "title_apis_proxy.py", line 798, in get
image, msg = resize_image(image_local_file, aspect_ratio, image_url, scheme, radius, sigma)
File "title_apis_proxy.py", line 722, in resize_image
z = clipped_zoom(face, 0.5, order=0)
File "title_apis_proxy.py", line 745, in clipped_zoom
out[top:top+zh, left:left+zw] = zoom(img, zoom_factor, **kwargs)
ValueError: could not broadcast input array from shape (963,1291,2) into shape (963,1291,3)
The clipped_zoom function you're using from my previous answer was written for single-channel images only.
At the moment it's applying the same zoom factor to the "color" dimension as well as to the width and height dimensions of your input array. The ValueError occurs because the out array is initialized with the same number of channels as the input, but the result of zoom has fewer channels because of the zoom factor.
To make it work for multichannel images you could either pass each color channel separately to clipped_zoom and concatenate the results, or you could pass a tuple rather than a scalar as the zoom_factor argument to scipy.ndimage.zoom.
I've updated my previous answer using the latter approach, so that it will now work for multichannel images as well as monochrome.
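Standalone, the tuple approach looks roughly like this (a minimal sketch, not the updated clipped_zoom itself; the random array is just a stand-in for a real RGB image):
import numpy as np
from scipy.ndimage import zoom

img = np.random.rand(100, 100, 3)  # stand-in for a real RGB image
zoom_factor = 0.5
# zoom the two spatial axes, leave the channel axis untouched (factor 1)
zoomed = zoom(img, (zoom_factor, zoom_factor, 1))
print(zoomed.shape)  # (50, 50, 3): channels preserved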
This is the image, and my aim is to detect only the middle nuclei of the cell. Here is my code for detecting the nuclei shape (similar to a circle) from the image below:
import cv2
import numpy as np

planets = cv2.imread('52.BMP')
gray_img = cv2.cvtColor(planets, cv2.COLOR_BGR2GRAY)
img = cv2.medianBlur(gray_img, 5)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 120,
                           param1=100, param2=30, minRadius=0, maxRadius=0)
circles = circles.astype(float)
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    cv2.circle(planets, (i[0], i[1]), i[2], (0, 255, 0), 6)
    cv2.circle(planets, (i[0], i[1]), 2, (0, 0, 255), 3)
cv2.imshow("HoughCircles", planets)
cv2.waitKey()
cv2.destroyAllWindows()
I'm getting this error every time:
AttributeError: 'NoneType' object has no attribute 'rint'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\ABHISHEK\PycharmProjects\cervical_project1\hsv.py", line 33, in <module>
circles = np.uint64(np.around(circles))
File "<__array_function__ internals>", line 180, in around
File "C:\python310\lib\site-packages\numpy\core\fromnumeric.py", line 3348, in around
return _wrapfunc(a, 'round', decimals=decimals, out=out)
File "C:\python310\lib\site-packages\numpy\core\fromnumeric.py", line 54, in _wrapfunc
return _wrapit(obj, method, *args, **kwds)
File "C:\python310\lib\site-packages\numpy\core\fromnumeric.py", line 43, in _wrapit
result = getattr(asarray(obj), method)(*args, **kwds)
TypeError: loop of ufunc does not support argument 0 of type NoneType which has no callable rint method
Is there any other way to detect only the nuclei of the cell?
For large images, Hough-transform-based searches can become slow.
Since you have a contrast difference between cell and nucleus, you could transform your image into a grayscale image (maybe even using only one color channel, if the contrast is better in a single one) and then apply blob detection, as sketched below. That also allows for filtering the nuclei according to certain shapes or sizes.
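A minimal sketch of that idea (the file name '52.BMP' is taken from the question; all thresholds here are illustrative and will need tuning for your images):
import cv2

img = cv2.imread('52.BMP')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 100           # reject tiny speckles
params.filterByCircularity = True
params.minCircularity = 0.6    # keep roughly round blobs (nuclei)

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(gray)  # detects dark blobs by default

out = cv2.drawKeypoints(img, keypoints, None, (0, 255, 0),
                        cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow('blobs', out)
cv2.waitKey()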
Edit:
Your submitted traceback says that the error comes from line 33 of hsv.py. As Christoph Rackwitz already stated, the provided code is missing this part or the error message doesn't correspond to your code.
Anyway, in your line
circles = cv2.HoughCircles(img,cv2.HOUGH_GRADIENT,1,120,param1=100,param2=30,minRadius=0,maxRadius=0)
you are setting minRadius and maxRadius to 0. Probably because of this, no circle is found; cv2.HoughCircles then returns None, np.around tries to act on None, and that leads to your final error message. A guard for that case is sketched below.
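A hypothetical guard for the None case (the radius bounds here are illustrative placeholders, not tuned values):
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 120,
                           param1=100, param2=30, minRadius=10, maxRadius=80)
if circles is None:
    print("No circles found - relax the parameters")
else:
    circles = np.uint16(np.around(circles))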
Binarization of the green component gives a usable result. Use contouring, and filter out the unwanted blobs; you could rely on the isoperimetric ratio, as in the sketch below.
Unfortunately, this method will be very sensitive to the color mixture.
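A rough sketch of that pipeline, assuming OpenCV 4.x (Otsu picks the threshold automatically; the ratio and area cutoffs are illustrative, not tuned):
import cv2
import numpy as np

img = cv2.imread('52.BMP')
green = img[:, :, 1]  # green channel (OpenCV loads BGR)
_, binary = cv2.threshold(green, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# OpenCV 4.x returns (contours, hierarchy); 3.x returns three values
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    if perimeter == 0:
        continue
    # isoperimetric ratio: 1.0 for a perfect circle, lower for irregular shapes
    ratio = 4 * np.pi * area / perimeter ** 2
    if ratio > 0.7 and area > 100:
        cv2.drawContours(img, [c], -1, (0, 255, 0), 2)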
I have a microscopy image array and I want to plot it.
The shape is:
(1, 2208, 2752, 3)
And I'm trying to plot it with the following code:
from PIL import Image
im = Image.fromarray(image_array)
im.show()
And get this error:
Traceback (most recent call last):
File "/Users/x/anaconda3/envs/x/lib/python3.6/site-packages/PIL/Image.py", line 2515, in fromarray
mode, rawmode = _fromarray_typemap[typekey]
KeyError: ((1, 1, 2752, 3), '|u1')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/x/Desktop/x/x/test.py", line 21, in <module>
im = Image.fromarray(image_arrays)
File "/Users/x/x/x/x/lib/python3.6/site-packages/PIL/Image.py", line 2517, in fromarray
raise TypeError("Cannot handle this data type")
TypeError: Cannot handle this data type
If I resize the array to (2000, 2000, 3) this works, but with the 1 in the first dimension I have no idea how to make this work. The file type is .czi, and it's a normal image.
You need an x by y by (r, g, b) matrix to display an image. You seem to have a fourth dimension on your matrix, so I'm guessing whatever routine you are using to create the array is actually creating an array of images.
Since you only have one image, you can just remove the first axis using image_array = numpy.squeeze(image_array, axis=0). This returns just the one image array, with shape (2208, 2752, 3). Alternatively, you can do im = Image.fromarray(image_array[0]).
import numpy
from PIL import Image

image_array = numpy.squeeze(image_array, axis=0)
im = Image.fromarray(image_array)
im.show()
I am trying to save an image that is in RGB pixel form. For whatever reason, when I call matplotlib.pyplot.imshow, the image is displayed properly.
However, when I call matplotlib.pyplot.imsave, I get an error that the array has to be in float. When I decided to change all the values to float, the image is completely modified. It makes no sense to me.
def display_image(pixels, name=""):
    if name:
        plt.imsave(name, array)
    plt.imshow(array)

pixels = im.getdata()
pixelMap = im.load()
npImage = np.array(pixels).reshape(671, 450, 3)
display_image(npImage)  ## great!

# things that I tried
display_image(npImage, "image.jpg")  ## error, must be floats.

# changes the images
pixels = list(pixels)  ## pixels var originally imagecore class
for index in range(len(pixels)):
    pixels[index] = (float(pixels[index][0]), float(pixels[index][1]), float(pixels[index][2]))
display_image(npImage, "monaLisa.jpg")  ## works but incorrect image
Before I give an answer to your actual question, a few words regarding your comment and why you might have gotten a downvote on your question: your code is not a minimal, reproducible example, and it's also not working as is.
You're missing all the necessary imports:
from matplotlib import pyplot as plt
import numpy as np
from PIL import Image
One of your method parameters is wrong:
def display_image(pixels, name=""):  # <-- I assume, pixels should be array here.
    if name:
        plt.imsave(name, array)
    plt.imshow(array)
    plt.show()  # <-- Without this, there's no actual output.
What's im? Most likely, that's some PIL image. Also, you hard-coded some values without providing the corresponding image.
im = Image.open('someImage.jpg') # <-- Need to load some image at all.
pixels = im.getdata()
pixelMap = im.load()
npImage = np.array(pixels).reshape(671, 450, 3) # <-- That only works for images with the specific size
display_image(npImage) ## great!
In the last test case, you're not updating npImage:
pixels = list(pixels)  ## pixels var originally imagecore class
for index in range(len(pixels)):
    pixels[index] = (float(pixels[index][0]), float(pixels[index][1]), float(pixels[index][2]))
# <-- You updated pixels, but not npImage
display_image(npImage, "monaLisa.jpg")
Only after fixing all these issues does one come to the actual error you're talking about in your question!
Now, to your error:
Traceback (most recent call last):
File "[...].py", line 18, in <module>
display_image(npImage, "image.jpg") ## error, must be floats.
[...]
ValueError: Image RGB array must be uint8 or floating point; found int32
Your npImage is of type int32, but plt.imsave expects uint8 (values 0 ... 255) or float (values 0.0 ... 1.0) for writing.
So, let's enforce uint8:
npImage = np.array(pixels, dtype=np.uint8).reshape(..., 3)
After adding the necessary update of npImage as mentioned above
pixels = list(pixels)  ## pixels var originally imagecore class
for index in range(len(pixels)):
    pixels[index] = (float(pixels[index][0]), float(pixels[index][1]), float(pixels[index][2]))
npImage = np.array(pixels).reshape(..., 3)  # <-- This line
display_image(npImage, "monaLisa.jpg")  ## works but incorrect image
the following error occurs:
Traceback (most recent call last):
File "[...].py", line 25, in <module>
display_image(npImage, "monaLisa.jpg") ## works but incorrect image
[...]
ValueError: Floating point image RGB values must be in the 0..1 range.
You haven't scaled your float values to 0.0 ... 1.0. So, we just divide by 255:
npImage = np.array(pixels).reshape(..., 3) / 255
I couldn't reproduce your "completely modified image" (due to possibly missing code fragments), but most likely this was also due to the unscaled float values, and you possibly saw clipping. (Was the image almost white?)
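For completeness, a consolidated sketch with the fixes above applied ('someImage.jpg' is the same placeholder as before; the reshape is derived from the image size instead of being hard-coded):
from matplotlib import pyplot as plt
import numpy as np
from PIL import Image

def display_image(array, name=""):
    if name:
        plt.imsave(name, array)
    plt.imshow(array)
    plt.show()

im = Image.open('someImage.jpg')
pixels = im.getdata()
# PIL's im.size is (width, height); numpy wants (height, width, channels)
npImage = np.array(pixels, dtype=np.uint8).reshape(im.size[1], im.size[0], 3)
display_image(npImage, "output.jpg")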
Hope that helps!
I've been working on building a machine learning algorithm to recognize images, starting by creating my own h5 database. I've been following this tutorial, and it's been useful, but I keep running into one major error: when using OpenCV in the image-processing section of the code, the program is unable to save the processed image because it keeps flipping the height and width of my images. When I run it, I get the following error:
Traceback (most recent call last):
File "array+and+label+data.py", line 79, in <module>
hdf5_file["train_img"][i, ...] = img[None]
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "/Users/USER/miniconda2/lib/python2.7/site-packages/h5py/_hl/dataset.py", line 631, in __setitem__
for fspace in selection.broadcast(mshape):
File "/Users/USER/miniconda2/lib/python2.7/site-packages/h5py/_hl/selections.py", line 299, in broadcast
raise TypeError("Can't broadcast %s -> %s" % (target_shape, count))
TypeError: Can't broadcast (1, 240, 320, 3) -> (1, 320, 240, 3)
My images are supposed to all be sized to 320 by 240, but you can see that this is being flipped somehow. Researching around has shown me that this is because OpenCV and NumPy use different conventions for height and width, but I'm not sure how to reconcile this issue within this code without patching my installation of OpenCV. Any ideas on how I can fix this? I'm a relative newbie to Python and all its libraries (though I know Java well)!
Thank you in advance!
Edit: adding more code for context, which is very similar to what's in the tutorial under the "Load images and save them" code example.
The size of my arrays:
train_shape = (len(train_addrs), 320, 240, 3)
val_shape = (len(val_addrs), 320, 240, 3)
test_shape = (len(test_addrs), 320, 240, 3)
The code that loops over the image addresses and resizes them:
# Loop over training image addresses
for i in range(len(train_addrs)):
    # print how many images are saved every 1000 images
    if i % 1000 == 0 and i > 1:
        print('Train data: {}/{}'.format(i, len(train_addrs)))
    # read an image and resize to (320, 240)
    # cv2 loads images as BGR; convert to RGB
    addr = train_addrs[i]
    img = cv2.imread(addr)
    img = cv2.resize(img, (320, 240), interpolation=cv2.INTER_CUBIC)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    # save the image and calculate the mean so far
    hdf5_file["train_img"][i, ...] = img[None]
    mean += img / float(len(train_labels))
Researching around has shown me that this is because OpenCV and NumPy use different conventions for height and width
Not exactly. The only thing that is tricky about images is that 2D arrays/matrices are indexed with (row, col), which is opposite from the normal Cartesian coordinates (x, y) we might use for images. Because of this, when you specify points in OpenCV functions, it sometimes wants them in (x, y) coordinates; similarly, it wants the dimensions of the image specified in (w, h) instead of (h, w) like an array would be made. This is the case inside OpenCV's resize() function. You're passing it (h, w), but it actually wants (w, h). From the docs for resize():
dsize – output image size; if it equals zero, it is computed as:
dsize = Size(round(fx*src.cols), round(fy*src.rows))
Either dsize or both fx and fy must be non-zero.
So you can see here that the number of columns is the first dimension (the width) and the number of rows is the second (the height).
The simple fix is just to swap your (h, w) to (w, h) inside the resize() function:
img = cv2.resize(img, (240, 320), interpolation=cv2.INTER_CUBIC)
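As a quick sanity check of the convention (a synthetic array, so this runs without an image file):
import cv2
import numpy as np

img = np.zeros((480, 640, 3), dtype=np.uint8)  # 480 rows (h), 640 cols (w)
resized = cv2.resize(img, (320, 240))          # dsize is (w, h)
print(resized.shape)                           # (240, 320, 3): numpy reports (h, w, channels)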
I am trying to save a grayscale image of shape (256, 256, 1) and show it in the output.
im = data.astype(np.uint8)
print im.shape
im = np.transpose(im, (2,1,0))
print im.shape
im.show()
However, I am getting the following error:
(256, 256, 1)
Traceback (most recent call last):
File "lmdb_reader.py", line 37, in <module>
plt.imshow(im)
File "/home/se/anaconda2/envs/caffeenv/lib/python2.7/site-packages/matplotlib/pyplot.py", line 3029, in imshow
**kwargs)
File "/home/se/anaconda2/envs/caffeenv/lib/python2.7/site-packages/matplotlib/__init__.py", line 1819, in inner
return func(ax, *args, **kwargs)
File "/home/se/anaconda2/envs/caffeenv/lib/python2.7/site-packages/matplotlib/axes/_axes.py", line 4922, in imshow
im.set_data(X)
File "/home/se/anaconda2/envs/caffeenv/lib/python2.7/site-packages/matplotlib/image.py", line 453, in set_data
raise TypeError("Invalid dimensions for image data")
TypeError: Invalid dimensions for image data
Note that im.show() does not exist, but it might just be a typo in the question.
The real problem is the following:
Matplotlib's pyplot.imshow can plot images of dimension (N, M) (grayscale) or (N, M, 3) (RGB color). Your image is (N, M, 1); we therefore need to get rid of the last dimension.
import matplotlib.pyplot as plt
import numpy as np
#create data of shape (256,256,1)
data = np.random.rand(256,256,1)*255
im = data.astype(np.uint8)
print im.shape # prints (256L, 256L, 1L)
# (256,256,1) cannot be plotted, therefore
# we need to get rid of the last dimension:
im = im[:,:,0]
print im.shape # (256L, 256L)
# now the image can be plotted
plt.imshow(im, cmap="gray")
plt.show()
You need to specify the colormap in the imshow call, before calling matplotlib.pyplot.show().
By default, the function expects an RGB image when you pass a 3D array.
Example:
im = np.squeeze(im)  # (256, 256, 1) -> (256, 256)
plt.imshow(im,cmap='gray')
plt.show()