how to take a grey scale numpy image and tint it red - python

I have a 2D grey scale image, loaded using imread.
I want to colourise it.
What's the best way to use numpy/skimage/Python to achieve this?

It will depend a bit on the exact format of your input. But the basic procedure should be as simple as:
>>> import numpy as np
>>> from skimage import data, io
>>> # an example grey scale image
>>> grey = data.coins()
>>> # a helper for convenient channel (RGB) picking
>>> RGB = np.array((*"RGB",))
>>> # the actual coloring can be written as an outer product
>>> red = np.multiply.outer(grey, RGB == 'R')
>>> # save for posterity
>>> io.imsave('red.png', red)
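If your input happens to be a float image in [0, 1] rather than uint8 (an assumption; data.coins() above is already uint8), a sketch of the same outer-product trick with an explicit conversion before saving could look like this:
import numpy as np
from skimage import data, io, img_as_ubyte

# pretend the input arrived as a float image in [0, 1]
grey = data.coins() / 255.0
RGB = np.array((*"RGB",))
# keep only the red channel via the same outer product
red = np.multiply.outer(grey, (RGB == 'R').astype(float))
# convert back to uint8 so imsave writes a normal 8-bit PNG
io.imsave('red_float.png', img_as_ubyte(red))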

If this is a single-channel image, you could convert it to a "redscale" image by doing something like this:
zero_channel = np.zeros_like(greyscale_array)
redscale = np.stack([greyscale_array, zero_channel, zero_channel], axis=2)
Without knowing the exact shape of your array it's difficult to give a definitive answer, though.
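As a rough sketch that covers both cases (the helper name and the averaging of RGB input are my assumptions, not part of the answer above):
import numpy as np

def to_redscale(img):
    # hypothetical helper: accept either a 2D greyscale array or an (M, N, 3) RGB array
    if img.ndim == 3:
        # collapse RGB input to a single grey channel by averaging (an assumption)
        img = img.mean(axis=2).astype(img.dtype)
    zero_channel = np.zeros_like(img)
    # put the grey values in the red channel and zeros in green and blue
    return np.stack([img, zero_channel, zero_channel], axis=2)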

import matplotlib.pyplot as plt
import numpy as np
from skimage import color
from skimage import img_as_float
from PIL import Image

# open the image and make sure it is a single-channel greyscale array
jpgfile = Image.open("pp.jpg").convert("L")
grayscale_image = img_as_float(np.asarray(jpgfile))

# replicate the grey values across three channels, then scale each channel
image = color.gray2rgb(grayscale_image)
red_multiplier = [1, 0, 0]

fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(8, 4),
                               sharex=True, sharey=True)
ax1.imshow(grayscale_image, cmap='gray')
ax2.imshow(red_multiplier * image)
plt.show()

Related

How to retrieve the raw figure data from matplotlib?

I am using matplotlib to generate matrices I can train on. I need to get to the raw figure data.
Saving and reading the .png works fine, but it makes my code run about 10x longer. Another Stack Overflow question asked something similar and the solution was to grab the canvas, but that logic generated a numpy error for me. Here is my MWE:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.transforms import IdentityTransform
px = 1/plt.rcParams['figure.dpi'] # pixel in inches
fig, ax = plt.subplots(figsize=(384*px, 128*px))
i = 756
plt.text(70, 95, "value {:04d}".format(i), color="black", fontsize=30, transform=IdentityTransform())
plt.axis('off')
plt.savefig("xrtv.png") # I dont want to do this ...
rtv = plt.imread("xrtv.png") # or this, but I want access to what imread returns.
gray = lambda rgb: np.dot(rgb[..., :3], [0.299, 0.587, 0.114])
gray = gray(rtv)
Disabling interactive rendering was a good hint. Consider writing to an in-memory buffer rather than playing with canvas strings. Here is a full example:
import numpy as np
import io
import matplotlib.pyplot as plt
from PIL import Image

# turn off interactive mode and dump the rendered figure to an in-memory buffer
plt.ioff()
fig, ax = plt.subplots()
ax.plot(np.sin(np.arange(100)))
buf = io.BytesIO()
fig.savefig(buf, format='rgba')  # raw RGBA bytes, no PNG encode/decode round trip

# reconstruct the image array from the buffer;
# fig.bbox.bounds is (x0, y0, width, height), so [-1] is height and [-2] is width
shape = (int(fig.bbox.bounds[-1]), int(fig.bbox.bounds[-2]), -1)
img_array = np.frombuffer(buf.getvalue(), dtype=np.uint8).reshape(shape)
Image.fromarray(img_array)
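If all you need is the pixel array and not a file-format round trip, another sketch (assuming the non-interactive Agg backend, which exposes its RGBA buffer directly) would be:
import numpy as np
import matplotlib
matplotlib.use("Agg")  # assumption: the Agg backend is available
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(np.sin(np.arange(100)))
fig.canvas.draw()  # render the figure in memory
img_array = np.asarray(fig.canvas.buffer_rgba())  # (height, width, 4) uint8 array
# optional: collapse to greyscale with the usual luma weights
gray = np.dot(img_array[..., :3] / 255.0, [0.299, 0.587, 0.114])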

How to extract rgb values of this colorbar image in python?

Image
I want to make a colormap from the colorbar in the attached image. So far I have tried the code given below, but I didn't get the result I was looking for.
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
import numpy as np
img = plt.imread('Climat.png')
colors_from_img = img[:, 0, :]
my_cmap = LinearSegmentedColormap.from_list('my_cmap', colors_from_img, N=651)
y = np.random.random_sample((100, 100))
plt.imshow(y, cmap=my_cmap); plt.colorbar()
Looking for your suggestions. Thank you in advance.
bicarlsen has given you the correct direction. Restrict the points from which you extract the colors to the colored rectangles:
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
import numpy as np
img = plt.imread('Climat.png')
colors_from_img = img[80::88, 30, :]
my_cmap = LinearSegmentedColormap.from_list('my_cmap', colors_from_img[::-1], N=len(colors_from_img))
y = np.random.random_sample((100, 100))
plt.imshow(y, cmap=my_cmap)
plt.colorbar()
plt.show()
Sample output:
P.S.: Initially, I thought a more general approach with
colors_from_img = np.unique(img[:, 30, :], axis=0)
was possible but as the input image is rasterized, all kinds of mixed colors are present where the black lines separate colored rectangles.
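If you still want something automatic, one workaround (a sketch only; the run length of 20 rows is an arbitrary assumption) is to walk down the sampled column and keep a colour only when it persists over a run of pixels, which skips the anti-aliased transitions while preserving the top-to-bottom order:
import numpy as np
import matplotlib.pyplot as plt

img = plt.imread('Climat.png')
col = img[:, 30, :3]  # sampled column, dropping any alpha channel
run_len = 20          # assumed minimum height of a solid colour block
colors = []
for i in range(len(col) - run_len):
    block = col[i:i + run_len]
    # keep the colour only if the whole run is uniform and it differs from the last one kept
    if np.all(block == block[0]) and (not colors or not np.array_equal(colors[-1], block[0])):
        colors.append(block[0])
colors_from_img = np.array(colors)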

Python: how could this image be properly segmented?

I would like to segment (isolate) the rod-like structures shown in this image:
The best I've managed to do is this
# Imports the libraries.
from skimage import io, filters
import matplotlib.pyplot as plt
import numpy as np
# Imports the image as a numpy array.
img = io.imread('C:/Users/lopez/Desktop/Test electron/test.tif')
# Thresholds the images using a local threshold.
thresh = filters.threshold_local(img,301,offset=0)
binary_local = img > thresh # Thresholds the image
binary_local = np.invert(binary_local) # inverts the thresholded image (True becomes False and vice versa).
# Shows the image.
plt.figure(figsize=(10,10))
plt.imshow(binary_local,cmap='Greys')
plt.axis('off')
plt.show()
Which produces this result
However, as you can see from the segmented image, I haven't managed to isolate the rods. What should be black background is filled with interconnected structures. Is there a way to neatly isolate the rod-like structures from all other elements in the image?
The original image can be downloaded from this website
https://dropoff.nbi.ac.uk/pickup.php
Claim ID: qMNrDHnfEn4nPwB8
Claim Passcode: UkwcYoYfXUfeDto8
Here is my attempt using a Meijering filter. The Meijering filter relies on symmetry when it looks for tubular structures and hence the regions where rods overlap (breaking the symmetry of the tubular shape) are not that well recovered, as can be seen in the overlay below.
Also, there is some random crap that I have trouble getting rid of digitally, but maybe you can clean your prep a bit more before imaging.
#!/usr/bin/env python
import numpy as np
import matplotlib.pyplot as plt
from skimage.io import imread
from skimage.transform import rescale
from skimage.restoration import denoise_nl_means
from skimage.filters import meijering
from skimage.measure import label
from skimage.color import label2rgb
def remove_small_objects(binary_mask, size_threshold):
    label_image = label(binary_mask)
    object_sizes = np.bincount(label_image.ravel())
    labels2keep, = np.where(object_sizes > size_threshold)
    labels2keep = labels2keep[1:]  # remove the first label, which corresponds to the background
    clean = np.in1d(label_image.ravel(), labels2keep).reshape(label_image.shape)
    return clean

if __name__ == '__main__':
    raw = imread('test.tif')
    raw = raw.astype(float)  # cast to float so the normalisation below also works for integer input
    raw -= raw.min()
    raw /= raw.max()

    # running everything on the large image took too long for my patience
    raw = rescale(raw, 0.25, anti_aliasing=True)

    # smooth image while preserving edges
    smoothed = denoise_nl_means(raw, h=0.05, fast_mode=True)

    # filter for tubular shapes
    sigmas = range(1, 5)
    filtered = meijering(smoothed, sigmas=sigmas, black_ridges=False)

    # the Meijering filter always evaluates to high values at the image frame;
    # we hence set the filtered image to zero at those locations
    frame = np.ones_like(filtered, dtype=bool)
    d = 2 * np.max(sigmas) + 1  # this is the theoretical minimum ...
    d += 2  # ... but doesn't seem to be enough so we increase d
    frame[d:-d, d:-d] = False
    filtered[frame] = np.min(filtered)

    thresholded = filtered > np.percentile(filtered, 80)
    cleaned = remove_small_objects(thresholded, 200)

    overlay = raw.copy()
    overlay[np.invert(cleaned)] = overlay[np.invert(cleaned)] * 2/3

    fig, axes = plt.subplots(2, 3, sharex=True, sharey=True)
    axes = axes.ravel()
    axes[0].imshow(raw, cmap='gray')
    axes[1].imshow(smoothed, cmap='gray')
    axes[2].imshow(filtered, cmap='gray')
    axes[3].imshow(thresholded, cmap='gray')
    axes[4].imshow(cleaned, cmap='gray')
    axes[5].imshow(overlay, cmap='gray')
    for ax in axes:
        ax.axis('off')

    fig, ax = plt.subplots()
    ax.imshow(overlay, cmap='gray')
    ax.axis('off')
    plt.show()
If this code makes it into a paper, I want an acknowledgement and a copy of the paper. ;-)

Matplotlib Colormap showing Incorrect Color

I need to make a colormap with 256 colors from red to white and display the red channel in Python, but the result looks wrong and I don't understand why. This is my code:
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# How to create an array filled with zeros
img = np.zeros([256,256])
colormap1 = np.zeros([256,1])
#image:
for i in range(256):
    img[:,i] = i  # on all columns I have the same value

# to go from red to white we'll have: [1,0,0], ..., [1,0.5,0.5], ..., [1,1,1]
for i in range(128):
    colormap1[i,1] = i/127

# display the thing:
colormap1 = mpl.colors.ListedColormap(colormap1)
plt.figure(), plt.imshow(img, cmap = colormap1)
You can do it like this:
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# How to create an array filled with zeros
img = np.zeros([256,256])
colormap = np.zeros([256,3])
#image:
for i in range(256):
    img[:,i] = i  # on all columns I have the same value

# color map:
for i in range(256):
    colormap[i,0] = 1
    colormap[i,1] = (i+1)/256
    colormap[i,2] = (i+1)/256

# display the thing:
colormap = mpl.colors.ListedColormap(colormap)
plt.figure(), plt.imshow(img, cmap = colormap)
This is almost the same as what you did in "Colormap it's not composed of correct color".
You just need to write the second part of your ramp (from red to white) and do it in 256 steps instead of 128.
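For reference, a loop-free sketch of the same red-to-white ramp (using np.linspace; the variable names are mine, not from the answer above):
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

ramp = np.linspace(0, 1, 256)                          # green and blue go from 0 to 1
colors = np.column_stack([np.ones(256), ramp, ramp])   # red stays at 1 throughout
red_to_white = mpl.colors.ListedColormap(colors)

img = np.tile(np.arange(256), (256, 1))                # same gradient image as in the question
plt.figure(), plt.imshow(img, cmap=red_to_white)
plt.colorbar()
plt.show()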

Matplotlib : What is the function of cmap in imshow?

I'm trying to learn opencv using python and came across this code below:
import cv2
import numpy as np
from matplotlib import pyplot as plt
BLUE = [255,0,0]
img1 = cv2.imread('opencv_logo.png')
replicate = cv2.copyMakeBorder(img1,10,10,10,10,cv2.BORDER_REPLICATE)
reflect = cv2.copyMakeBorder(img1,10,10,10,10,cv2.BORDER_REFLECT)
reflect101 = cv2.copyMakeBorder(img1,10,10,10,10,cv2.BORDER_REFLECT_101)
wrap = cv2.copyMakeBorder(img1,10,10,10,10,cv2.BORDER_WRAP)
constant= cv2.copyMakeBorder(img1,10,10,10,10,cv2.BORDER_CONSTANT,value=BLUE)
plt.subplot(231),plt.imshow(img1,'gray'),plt.title('ORIGINAL')
plt.subplot(232),plt.imshow(replicate,'gray'),plt.title('REPLICATE')
plt.subplot(233),plt.imshow(reflect,'gray'),plt.title('REFLECT')
plt.subplot(234),plt.imshow(reflect101,'gray'),plt.title('REFLECT_101')
plt.subplot(235),plt.imshow(wrap,'gray'),plt.title('WRAP')
plt.subplot(236),plt.imshow(constant,'gray'),plt.title('CONSTANT')
plt.show()
source : http://docs.opencv.org/master/doc/py_tutorials/py_core/py_basic_ops/py_basic_ops.html#exercises
What does plt.imshow(img1, 'gray') do? I tried searching Google and all I could understand was that the 'gray' argument was a colormap. But my image (the pic is there on the site, see the link) is not displayed in grayscale. I tried removing the second argument, so the code became plt.imshow(img1). It executes, and the image remains the same as before. So what does the second argument 'gray' do? Can someone explain all this to me? Any help appreciated. Thanks.
PS. I'm totally new to Matplotlib
When img1 has shape (M,N,3) or (M,N,4), the values in img1 are interpreted as RGB or RGBA values. In this case the cmap is ignored. Per the help(plt.imshow) docstring:
cmap : ~matplotlib.colors.Colormap, optional, default: None
If None, default to rc image.cmap value. cmap is ignored when
X has RGB(A) information
However, if img were an array of shape (M,N), then the cmap controls the colormap used to display the values.
import numpy as np
import matplotlib.pyplot as plt
import mpl_toolkits.axes_grid1 as axes_grid1
np.random.seed(1)
data = np.random.randn(10, 10)
fig = plt.figure()
grid = axes_grid1.AxesGrid(
    fig, 111, nrows_ncols=(1, 2), axes_pad=0.5, cbar_location="right",
    cbar_mode="each", cbar_size="15%", cbar_pad="5%",)
im0 = grid[0].imshow(data, cmap='gray', interpolation='nearest')
grid.cbar_axes[0].colorbar(im0)
im1 = grid[1].imshow(data, cmap='jet', interpolation='nearest')
grid.cbar_axes[1].colorbar(im1)
plt.savefig('/tmp/test.png', bbox_inches='tight', pad_inches=0.0, dpi=200,)
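To see the "cmap is ignored" case from the quoted docstring directly, here is a minimal sketch (random RGB data rather than the OpenCV logo from the question):
import numpy as np
import matplotlib.pyplot as plt

rgb = np.random.rand(10, 10, 3)      # (M, N, 3) array: interpreted as RGB values
fig, (ax1, ax2) = plt.subplots(ncols=2)
ax1.imshow(rgb)                      # no cmap given
ax2.imshow(rgb, cmap='gray')         # cmap is silently ignored for RGB(A) input
plt.show()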
