I want to save an image without any channel axis, so the array would have only two dimensions. Is there a way to do this in matplotlib?
I have already tried using
matplotlib.pyplot.imsave('img.png', image, cmap='gray')
but when I read it using
matplotlib.pyplot.imread('img.png')
the array has three dimensions, so I'm confused. I understand that imread may not be the right tool here, but what can I use instead?
If you have opencv installed, you can try:
cv2.imread('1.png', cv2.IMREAD_GRAYSCALE)
You can also try PIL:
from PIL import Image
Image.fromarray(array).convert('L').save('img.png')
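A quick round-trip check of the PIL approach (a minimal sketch, assuming a hypothetical 2-D float image in [0, 1]):

```python
import numpy as np
from PIL import Image

image = np.random.rand(4, 5)  # hypothetical 2-D float image in [0, 1]
# scale to 8-bit and save as a single-channel ('L' mode) PNG
Image.fromarray((image * 255).astype(np.uint8), mode='L').save('img.png')

loaded = np.asarray(Image.open('img.png'))
print(loaded.shape)  # (4, 5) -- two dimensions, no channel axis
```

Reading the file back through PIL rather than matplotlib.pyplot.imread is what keeps the array two-dimensional.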
I didn't see this one anywhere online, but it works (thanks to my teacher!):
skimage.io.imsave('1.png', np.around(image*255).astype(np.uint8))
To use this, you need scikit-image installed:
pip3 install scikit-image
Thanks to Cris Luengo for pointing out in the comments that:
"From the docs, it looks like matplotlib only saves RGB or RGBA format
image files. You’ll have to use a different package to save a
gray-scale image. OpenCV as suggested below is just one option. There
are many. Try PIL."
Give him an upvote if you see his comment!
I am new to Pillow and I would like to learn how to use it.
I would like to seek your help and expertise: could I use Pillow to find the connected components of an image?
For example, if I have an image such as the following
May I ask if I could use Pillow to give me the shapes and positions of the two components in my example? They are a square and a circle, with the circle inside the square.
Thank you very much,
Mi
Using Pillow you can find the edges in the given image.
from PIL import Image
from PIL import ImageFilter
image = Image.open("c1LDc.png")
image = image.convert('RGB')
imageWithEdges = image.filter(ImageFilter.FIND_EDGES)
image.show()
imageWithEdges.show()
Output:
You can't use Pillow for object detection, as answered here
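For the actual connected-component labelling, a different library is needed; scipy.ndimage can do it directly on a NumPy array (a minimal sketch, using a hypothetical binary image rather than the poster's file):

```python
import numpy as np
from scipy import ndimage

# hypothetical binary image with two separate blobs
img = np.zeros((10, 10), dtype=np.uint8)
img[1:4, 1:4] = 1   # first component
img[6:9, 6:9] = 1   # second component

labels, n = ndimage.label(img)            # label connected components
slices = ndimage.find_objects(labels)     # bounding box of each component
print(n)  # 2
```

Each entry in `slices` is a tuple of slice objects giving the component's position, which covers the "shapes and positions" part of the question.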
I'm trying to use Pillow (PIL fork) to convert an image to greyscale and apply the pixel luminosity of the original image over it. I'm new to both Pillow and Python and am having difficulty with this. I tried using histogram data, the point function, etc., but most simply result in an increased contrast that I would get from calling ImageEnhance.Contrast(). Is there something simple that I'm missing or would this also require something like Numpy?
Thank you for any information you can provide.
I want to read a pgm image in Python. I use cv2.imread('a.pgm'), but it returns the wrong result. In Matlab, imread gives the right result, a single-channel 16-bit image, but cv2.imread in Python returns a 3-channel image and the pixel values are also wrong.
Why does this happen?
How should I read 16-bit pgm images in Python?
And which libraries should I use?
Thanks in advance.
I got it:
cv2.imread('a.pgm', -1)
works. The -1 flag is cv2.IMREAD_UNCHANGED, which tells OpenCV to return the image as-is instead of converting it to an 8-bit, 3-channel image.
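PIL can also read 16-bit PGM files directly. A minimal sketch, writing a tiny hypothetical binary PGM (P5) first so the example is self-contained (PGM stores 16-bit samples big-endian):

```python
import numpy as np
from PIL import Image

# write a tiny 16-bit binary PGM (P5) for illustration
data = np.array([[0, 1000], [30000, 65535]], dtype=np.uint16)
with open('a.pgm', 'wb') as f:
    f.write(b'P5\n2 2\n65535\n')          # magic, width height, maxval
    f.write(data.astype('>u2').tobytes())  # big-endian 16-bit samples

img = np.asarray(Image.open('a.pgm'))
print(img.shape)  # (2, 2) -- single channel, full 16-bit range preserved
```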
You can also use the skimage.io library:
from skimage.io import imread
image = imread("a.pgm")
How can I transform Image 1 to Image 2 using matplotlib.pyplot or another library in Python?
Image 1:
Image 2:
(This image turned out to be confidential, so I removed it; I can't delete the post. Sorry for the inconvenience.)
Any help is appreciated.
Have a look at the Python Imaging Library (PIL), especially the ImageFilter module.
But tools like ImageMagick or one of the built-in filters in GIMP might be more suitable for experimenting.
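Since the target image is gone, here is only a generic sketch of applying a filter from the ImageFilter module, using a hypothetical greyscale test image (a Gaussian blur is just one plausible choice):

```python
import numpy as np
from PIL import Image, ImageFilter

# hypothetical noisy greyscale image, purely for illustration
arr = (np.random.rand(32, 32) * 255).astype(np.uint8)
img = Image.fromarray(arr, mode='L')

# smooth it; other ImageFilter options include BLUR, MedianFilter, etc.
smoothed = img.filter(ImageFilter.GaussianBlur(radius=2))
print(smoothed.size)  # (32, 32) -- same size, smoothed content
```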
Is this experimental data that someone else already filtered from image 1 to image 2? I wonder whether you have the point spread function along with the raw image 1.
I am having to do a lot of vision-related work in Python lately, and I am facing a lot of difficulty switching between formats. When I read an image using Mahotas, I cannot seem to get it into cv2, even though both use numpy.ndarray. SimpleCV can take OpenCV images easily, but getting a SimpleCV image out for legacy cv or mahotas seems to be quite a task.
Some format-conversion syntax would be really appreciated. For example, if I open a greyscale image using mahotas, it is treated as being in floating-point colour space by default, as far as I can tell. Even when I cast the type to numpy.uint8, cv2 does not seem to recognise it as an array. I do not know how to solve this problem. I am not having much luck with colour images either. I am using Python 2.7 32-bit on Ubuntu Oneiric Ocelot.
Thanks in advance!
I have never used mahotas, but I'm currently working on SimpleCV. I have just sent a pull request making SimpleCV's numpy arrays compatible with cv2.
So, basically,
Image.getNumpy() -> numpy.ndarray for cv2
Image.getBitmap() -> cv2.cv.iplimage
Image.getMatrix() -> cv2.cv.cvmat
To convert cv2 numpy array to SimpleCV Image object,
Image(cv2_image) -> SimpleCV.ImageClass.Image
With only experience in cv2 and SimpleCV, to convert from SimpleCV to cv2:
cv2_image = simplecv_image.getNumpyCv2()
To convert from cv2 to SimpleCV:
simplecv_image = Image(cv2_image.transpose(1, 0, 2)[:, :, ::-1])
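The transpose-and-reverse step can be checked with plain NumPy, SimpleCV aside. transpose(1, 0, 2) swaps the row and column axes, and [:, :, ::-1] reverses the channel order (BGR to RGB):

```python
import numpy as np

# hypothetical (h, w, c) array standing in for a cv2 image
cv2_image = np.arange(60).reshape(4, 5, 3)

converted = cv2_image.transpose(1, 0, 2)[:, :, ::-1]
print(converted.shape)  # (5, 4, 3) -- axes swapped, channels reversed
```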