Why does plt.imshow flip coordinates compared to plt.scatter?

The coordinate axis directions seem to be inverted between array indices and plots. Does someone have a satisfactory answer as to why matplotlib likes to display images inverted?
Code that demonstrates my question:
import numpy as np
import matplotlib.pyplot as plt
img = np.zeros((10, 10))
img[2, 4] = 1
plt.imshow(img)
plt.scatter(2, 4)
The output shows the white pixel and the scatter dot at positions mirrored across the image diagonal.
This effect has caused a lot of confusion for me (and others I have talked to), and I was wondering if there is a sensible explanation for it. To me, it seems that matplotlib interprets the first axis of a numpy array as the y-coordinate and the second as x. Since this does not match the mathematical right-handed x-y convention, it is confusing.
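A minimal sketch of the reconciliation, given that imshow() places img[row, col] at x = col, y = row: pass (column, row) to scatter() so both calls mark the same pixel.
import numpy as np
import matplotlib.pyplot as plt

img = np.zeros((10, 10))
img[2, 4] = 1          # row 2, column 4

plt.imshow(img)
plt.scatter(4, 2)      # scatter takes (x, y) = (column, row)
plt.show()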

Related

Scipy/Numpy: Apply cmap to actual data

I am trying to segment some DICOM images, and was trying to see if it is possible to apply a cmap to the actual numpy arrays.
The left image is my goal, the right is what I currently have.
I am able to get the left image by applying imshow(image, cmap='nipy_spectral'),
but that doesn't change the actual numpy array on the right.
How would I go about applying cmap='nipy_spectral' so that it actually transforms the numpy array on the right?
Thanks
The colormap functions will accept greyscale and return RGBA, which I believe is what you're after.
from matplotlib import pyplot as plt
from skimage.data import coins

coins().shape                          # (303, 384)
rgb = plt.cm.nipy_spectral(coins())
rgb.shape                              # (303, 384, 4) -- now an RGBA array
In case anybody else is looking, I found my answer here.
You can simply apply a cmap to the numpy array as follows:
colormap = plt.cm.gray
colormapped_image = colormap(image)
But as stated in the link, you have to normalize the image to the range [0, 1] beforehand.
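A minimal sketch of the full pipeline, assuming image is a single-channel array (the shape and dtype here are made up for illustration):
import numpy as np
import matplotlib.pyplot as plt

# hypothetical 16-bit grayscale image
image = np.random.randint(0, 65536, (303, 384)).astype(np.uint16)

# normalize to floats in [0, 1] before applying the colormap
normalized = (image - image.min()) / (image.max() - image.min())

# the colormap returns an (H, W, 4) float RGBA array
colormapped_image = plt.cm.nipy_spectral(normalized)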

Converting numpy array to picture

So I have got a string of characters and I am representing it by a number between 1-5 in a numpy array. Now I want to convert it to a pictorial form by first repeating the string of numbers downwards so the picture becomes broad enough to be visible (since single string will give a thin line of picture). My main problem is how do I convert the array of numbers to a picture?
This would be a minimal working example to visualize with matplotlib:
import numpy as np
import matplotlib.pyplot as plt

# generate a length-256 vector of values 1-5
img = np.random.randint(1, 6, 256)
# add a row axis so imshow sees a (1, 256) image
img = np.expand_dims(img, 1).T
# force the aspect ratio so the single row is visible
plt.imshow(img, aspect=100)
# or, alternatively, use aspect='auto'
plt.show()
You can force the aspect ratio of the plotted figure by setting the aspect option of imshow().
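If you prefer to literally repeat the row downwards, as described in the question, a sketch using np.tile:
import numpy as np
import matplotlib.pyplot as plt

img = np.random.randint(1, 6, 256)   # stand-in for the encoded string
# stack 50 copies of the row so the picture is broad enough to see
broad = np.tile(img, (50, 1))
plt.imshow(broad)
plt.show()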

How to fill a region with half-transparent color in Matplotlib?

Many papers on image segmentation provide examples where each segment is covered with half-transparent color mask:
If I have an image and a mask, can I achieve the same result in Matplotlib?
EDIT:
The mask in my case is defined as an array with the same width and height as an image filled with numbers from 0 to (num_segments+1), where 0 means "don't apply any color" and other numbers mean "cover this pixel with some distinct color". Yet, if another representation of a mask is more suitable, I can try to convert to it.
Here are a couple of complications I found in this task, so that it doesn't sound too trivial:
Colored regions are not regular shapes like lines, squares or circles, so functions like plot(..., 'o'), fill() or fill_between() don't work here. They are not even contours (or at least I don't see how to apply them here).
Modifying the alpha channel isn't the most common thing in plots, so it is rarely mentioned in Matplotlib's docs.
This can surely be done. The implementation depends on what your mask looks like. Here is an example:
import matplotlib.pyplot as plt
import numpy as np

image = plt.imread("https://i.stack.imgur.com/9qe6z.png")

# build a mask with the same height and width as the image
ar = np.zeros((image.shape[0], image.shape[1]))
ar[100:300, 50:150] = 1.0
# NaN values are not drawn, so this part of the overlay stays fully transparent
ar[:, 322:] = np.nan

fig, ax = plt.subplots()
ax.imshow(image)
ax.imshow(ar, alpha=0.5, cmap="RdBu")
plt.show()
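For the multi-segment mask described in the edit (0 means "no color", 1..N are segment ids), a sketch using a masked array so the zero pixels stay uncolored; the mask and image here are made up for illustration:
import numpy as np
import matplotlib.pyplot as plt

# hypothetical segmentation mask: 0 = no color, 1..N = segment ids
mask = np.zeros((200, 300), dtype=int)
mask[40:120, 60:140] = 1
mask[100:180, 180:260] = 2

# hide the zero pixels so the underlying image shows through
masked = np.ma.masked_where(mask == 0, mask)

image = np.random.rand(200, 300)   # stand-in for the real image
fig, ax = plt.subplots()
ax.imshow(image, cmap="gray")
ax.imshow(masked, alpha=0.5, cmap="tab10", interpolation="none")
plt.show()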

Matplotlib: imshow() and contourf() coherent representation?

I'm trying to draw 2-d contours of a 2d matrix. I am curious to know whether it is normal that imshow() and contour()/contourf() of the matplotlib package work differently with respect to the origin of the matrix coordinates. It appears that imshow() treats the origin of coordinates as flipped with respect to contour().
I illustrate this with a quite simple example:
import numpy as np
from matplotlib import pyplot as plt
a = np.diag([1.0, 2, 3])
plt.imshow(a)
produces a nice picture with colors along the main diagonal (from the upper-left corner to the lower-right corner), but if instead I execute
plt.contour(a)
the figure is a set of contours aligned along the perpendicular diagonal (from the lower-left corner to the upper-right corner).
If I flip the array with numpy's flipud() function
plt.contour(np.flipud(a))
then both contour() and imshow() coincide.
Maybe it is a stupid question; I apologize for that!
Thanks anyway
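For what it's worth, the same reconciliation can be had without flipping the data: imshow() defaults to origin='upper' (row 0 at the top), while contour() uses a lower-left origin, so passing origin='lower' to imshow() aligns the two. A sketch:
import numpy as np
from matplotlib import pyplot as plt

a = np.diag([1.0, 2, 3])
# origin='lower' puts row 0 at the bottom, matching contour()
plt.imshow(a, origin='lower')
plt.contour(a)
plt.show()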

matplotlib imshow plots differently if using colormap or RGB array

I am having the following problem: I am saving 16-bit tiff images with a microscope and I need to analyze them. I want to do that with numpy and matplotlib, but when I want to do something as simple as plotting the image in green (I will later need to superpose other images), it fails.
Here is an example when I try to plot the image either as a RGB array, or with the default jet colormap.
import numpy as np
import matplotlib.pyplot as plt
import cv2

imageName = 'image.tif'
# image as luminance; -1 reads the file unchanged, keeping the 16-bit depth
img1 = cv2.imread(imageName, -1)
# image as RGB array, with the data in the green channel
shape = (img1.shape[0], img1.shape[1], 3)
img2 = np.zeros(shape, dtype='uint16')
img2[..., 1] += img1

fig = plt.figure(figsize=(20, 8))
ax1 = fig.add_subplot(1, 2, 1)
ax2 = fig.add_subplot(1, 2, 2)
im1 = ax1.imshow(img1, interpolation='none')
im2 = ax2.imshow(img2, interpolation='none')
fig.show()
Which to me yields the following figure:
I am sorry if the question is too basic, but I have no idea why the right plot shows these artifacts. I would like to get, with the green scale, something like how the left figure looks (ImageJ also yields something similar to the left plot).
Thank you very much for your collaboration.
I find the right plot much more artistic...
matplotlib is rather complicated when it comes to interpreting images. It goes roughly as follows:
if the image is an NxM array of any type, it is interpreted through the colormap (autoscaled, if not indicated otherwise). (In principle, if the array is a float array scaled to 0..1, it should be interpreted as a grayscale image. This is what the documentation says, but in practice this does not happen.)
if the image is an NxMx3 float array, the values are interpreted as RGB components between 0..1. If the values are outside of this range, they are taken with positive modulo 1, i.e. 1.2 -> 0.2, -1.7 -> 0.3, etc.
if the image is an NxMx3 uint8 array, it is interpreted as a standard image (0..255 components)
if the image is NxMx4, the interpretation is as above, but the fourth component is the opacity (alpha)
So, if you give matplotlib an NxMx3 array of integers other than uint8 or float, the results are not defined. However, by looking at the source code, the odd behaviour can be understood:
if A.dtype != np.uint8:
    A = (255*A).astype(np.uint8)
where A is the image array. So, if you give it uint16 values 0, 1, 2, 3, 4..., you get 0, 255, 254, 253, ... Yes, it will look very odd. (IMHO, the interpretation could be a bit more intuitive, but this is how it is done.)
In this case the easiest solution is to divide the array by 65535., and then the image should be as expected. Also, if your original image is truly linear, then you'll need to make the reverse gamma correction:
img1_corr = (img1 / 65535.)**(1/2.2)
Otherwise your middle tones will be too dark.
I approached this by normalising the image by the maximum value of the given datatype, which, as DrV said, is 65535 for uint16. The helper function would look something like:
import numpy as np

def normalise_bits(img):
    bits = 1.0  # catch-all
    try:
        # integer dtype, e.g. np.uint16
        bits = np.iinfo(img.dtype).max
    except ValueError:
        # float dtype, e.g. np.float32
        bits = np.finfo(img.dtype).max
    return (img / bits).astype(float)
Then the image can be handled by matplotlib as floats in [0.0, 1.0].
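Applied to the question's arrays, a usage sketch (assuming img2 is the uint16 RGB array built in the question):
img2_float = normalise_bits(img2)            # float RGB in [0, 1]
ax2.imshow(img2_float, interpolation='none')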
