I read the images using imread and then I would like to compute the average image. How can I add (and divide) images using matplotlib?
I'm searching for something like imadd in MATLAB.
Code:
img1 = matplotlib.image.imread("path")
img2 = matplotlib.image.imread("path1")
img3 = matplotlib.image.imread("path2")
Thanks
You can use the normal sum operations:
img4 = img1 + img2 + img3
This, however, is not exactly the same as imadd in MATLAB. Matplotlib works with RGB values from 0 to 1, so the sum in some pixels will produce values greater than 1 (which is valid for a float array; the same would not be true if the data type were uint8). So, perform the following operation to guarantee that your data comes out correct:
import numpy as np

img1 = matplotlib.image.imread("path1")
img2 = matplotlib.image.imread("path2")
# Clip the sum back into the valid [0, 1] range
img3 = np.clip(img1 + img2, 0, 1)
Notice that all images must have the same size.
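To get the average image the question actually asks for, here is a minimal sketch (the paths are the placeholders from the question; for PNG files matplotlib already returns floats in [0, 1], so the mean needs no clipping):

import matplotlib.image as mpimg
import matplotlib.pyplot as plt

img1 = mpimg.imread("path")
img2 = mpimg.imread("path1")
img3 = mpimg.imread("path2")

# Element-wise mean stays within [0, 1] for float images
avg = (img1 + img2 + img3) / 3

plt.imshow(avg)
plt.show()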
matplotlib.image is probably what you are looking for. You'll also need numpy if you want to manipulate the images further, because they are basically just arrays with the image's height and width (e.g. 1080 x 1920) plus a third axis of 3 or 4 channels (RGB or RGBA).
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
img1 = mpimg.imread("foo.png")
img2 = mpimg.imread("bar.png")
Now you are set up for image manipulation. If your images have the same format and size (e.g. both RGB; check with img1.shape and img2.shape), you can do:
img3 = (img1 + img2) / 2
plt.imshow(img3)
plt.show()
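One caveat worth adding (my assumption, not part of the original answer): matplotlib returns float arrays in [0, 1] for PNGs, but uint8 for formats like JPEG, and uint8 sums wrap around before the division. Casting to float first avoids that:

import numpy as np

# Average in float to avoid uint8 wrap-around, then restore the input dtype
img3 = ((img1.astype(np.float64) + img2.astype(np.float64)) / 2).astype(img1.dtype)
plt.imshow(img3)
plt.show()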
Related
I am working with 3D CT images and trying to remove the lines from the bed.
A slice from the original Image:
Following is my code to generate the mask:
import numpy as np
from scipy import ndimage
from skimage import morphology

segmentation = morphology.dilation(image_norm, np.ones((1, 1, 1)))
labels, label_nb = ndimage.label(segmentation)
label_count = np.bincount(labels.ravel().astype(int))
# Exclude the background label from the count
label_count[0] = 0
mask = labels == label_count.argmax()
mask = morphology.dilation(mask, np.ones((40, 40, 40)))
mask = ndimage.morphology.binary_fill_holes(mask)
mask = morphology.dilation(mask, np.ones((1, 1, 1)))
This results in the following image:
As you can see, in the above image the CT scan is distorted as well.
If I change: mask = morphology.dilation(mask, np.ones((40, 40, 40))) to mask = morphology.dilation(mask, np.ones((100, 100, 100))), the resulting image is as follows:
How can I remove only the two lines under the image without changing the image area? Any help is appreciated.
You've probably found another solution by now. Regardless, I've seen similar CT processing questions on SO, and figured it would be helpful to demonstrate a Scikit-Image solution. Here's the end result.
Here's the code to produce the above images.
from skimage import io, filters, color, morphology
import matplotlib.pyplot as plt
import numpy as np
image = color.rgba2rgb(
    io.imread("ctimage.png")[9:-23, 32:-9]
)
gray = color.rgb2gray(image)
tgray = gray > filters.threshold_otsu(gray)
keep_mask = morphology.remove_small_objects(tgray,min_size=463)
keep_mask = morphology.remove_small_holes(keep_mask)
maskedimg = np.einsum('ijk,ij->ijk',image,keep_mask)
fig, axes = plt.subplots(ncols=3)
image_list = [image, keep_mask, maskedimg]
title_list = ["Original", "Mask", "Image w/mask"]
for i, ax in enumerate(axes):
    ax.imshow(image_list[i])
    ax.set_title(title_list[i])
    ax.axis("off")
fig.tight_layout()
Notes on code
image = color.rgba2rgb(
    io.imread("ctimage.png")[9:-23, 32:-9]
)
gray = color.rgb2gray(image)
The image was saved as RGBA when I downloaded it from SO. It needs to be in grayscale for use in the threshold function.
Your image might already be in grayscale.
Also, the downloaded image showed axis markings. That's why I've trimmed the image.
maskedimg = np.einsum('ijk,ij->ijk',image,keep_mask)
I wanted to apply keep_mask to every channel of the RGB image. The mask is a 2D array, and the image is a 3D array. I referenced this previous question in order to apply the mask to the image.
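As an aside, the same masking can be written with plain NumPy broadcasting, which some may find easier to read than einsum; this should be equivalent:

# Add a trailing axis so the 2D mask broadcasts across the 3 color channels
maskedimg = image * keep_mask[:, :, np.newaxis]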
My code:
import cv2
import numpy as np
imgL = cv2.imread('Blender_Suzanne1.jpg')
img1 = cv2.cvtColor(imgL, cv2.COLOR_BGR2GRAY)
imgR = cv2.imread('Blender_Suzanne2.jpg')
img2 = cv2.cvtColor(imgR, cv2.COLOR_BGR2GRAY)
stereo = cv2.StereoBM_create(numDisparities = 16, blockSize = 17)
disparity = stereo.compute(img2, img1)
cv2.imshow('DepthMap', disparity)
cv2.waitKey()
cv2.destroyAllWindows()
When I run it, I see a window which is all grey. I think something is wrong.
I used this code from the OpenCV docs website.
Can anyone help?
PS: At first I had an error which did not allow the output window to pop up, so I added the two lines defining img1 and img2 to my code.
You can display the resulting disparity using cv2.imshow() as well, after you normalize it:
norm_image = cv2.normalize(disparity, None, alpha = 0, beta = 1, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
cv2.imshow('norm_image', norm_image)
Notice the change of data type after normalizing the image: prior to normalization, disparity was of type int16; after normalization it is float32 (as requested via the dtype argument of cv2.normalize()).
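If you would rather keep an integer image for display, a variant of the same call (an assumption on my part, not from the original answer) rescales to 0-255 and outputs uint8:

norm_uint8 = cv2.normalize(disparity, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
cv2.imshow('norm_uint8', norm_uint8)
cv2.waitKey()
cv2.destroyAllWindows()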
Instead of using cv2.imshow, use matplotlib for visualization, as per the documentation. You can also convert the image to grayscale in the same line that reads it, by passing 0 as the second argument to cv2.imread:
import cv2
from matplotlib import pyplot as plt
imgL = cv2.imread('Blender_Suzanne1.jpg',0)
imgR = cv2.imread('Blender_Suzanne2.jpg',0)
stereo = cv2.StereoBM_create(numDisparities = 16, blockSize = 17)
disparity = stereo.compute(imgL, imgR)
plt.imshow(disparity,'gray')
plt.show()
I'm trying to access a DICOM file's RGB pixel array with unknown compression (maybe none). Extracting grayscale pixel arrays works completely fine.
However, using
import dicom
import numpy as np
data_set = dicom.read_file(path)
pixel_array = data_set.pixel_array
size_of_array = pixel_array.shape
if len(size_of_array) == 3:
    chanR = pixel_array[0][0:size_of_array[1], 0:size_of_array[2]]
    chanG = pixel_array[1][0:size_of_array[1], 0:size_of_array[2]]
    chanB = pixel_array[2][0:size_of_array[1], 0:size_of_array[2]]
    output_array = (0.299 * chanR) + (0.587 * chanG) + (0.114 * chanB)
with the goal of converting it to a common grayscale array. Unfortunately, the resulting output_array does not contain correct pixel data: the values are not mis-scaled, but they are spatially scrambled. Where is the issue?
It is not an RGB pixel array, and the better approach is to convert it to a grayscale image.
The way to get the CT image is to read the pixel_array attribute of the CT DICOM file.
The elements of a CT DICOM file's pixel_array are all of type uint16, but many Python tools, such as OpenCV and some AI frameworks, are not compatible with that type.
After getting pixel_array (the CT image) from the CT DICOM file, you usually need to convert it into a grayscale image so that you can process it with the many image-processing tools available in Python.
The following code is a working example to convert pixel_array into gray image.
import matplotlib.pyplot as plt
import os
import pydicom
import numpy as np
# Above code imports the dependent libraries of this code
# Read a CT DICOM file here with the pydicom library
ct_filepath = r"<YOUR_CT_DICOM_FILEPATH>"
ct_dicom = pydicom.read_file(ct_filepath)
img = ct_dicom.pixel_array
# Now, img is the pixel_array; it is the input of our demo code
# Convert pixel_array (img) to a gray image (img_2d_scaled)
## Step 1. Convert to float to avoid overflow or underflow losses.
img_2d = img.astype(float)
## Step 2. Rescale the grey values to the range 0-255.
img_2d_scaled = (np.maximum(img_2d, 0) / img_2d.max()) * 255.0
## Step 3. Convert to uint8.
img_2d_scaled = np.uint8(img_2d_scaled)
# Show information of input and output in above code
## (1) Show information of original CT image
print(img.dtype)
print(img.shape)
print(img)
## (2) Show information of gray image of it
print(img_2d_scaled.dtype)
print(img_2d_scaled.shape)
print(img_2d_scaled)
## (3) Show the scaled gray image by matplotlib
plt.imshow(img_2d_scaled, cmap='gray', vmin=0, vmax=255)
plt.show()
And the following is the result of what I print out.
You probably worked around this by now, but I think pydicom doesn't interpret planar configuration correctly.
You need to do this first:
img = data_set.pixel_array
img = img.reshape([img.shape[1], img.shape[2], 3])
From here on, your image will have shape (rows, cols, 3), with the color channels in the last axis.
As said by @Daniel, since you have PlanarConfiguration == 1, you have to rearrange your colors into columns through np.reshape and then convert to grayscale, for example using OpenCV:
import pydicom as dicom
import numpy as np
import cv2 as cv
data_set = dicom.read_file(path)
pixel_array = data_set.pixel_array
## converting to shape (m,n,3)
pixel_array_rgb = pixel_array.reshape((pixel_array.shape[1], pixel_array.shape[2], 3))
## converting to grayscale
pixel_array_gs = cv.cvtColor(pixel_array_rgb, cv.COLOR_RGB2GRAY)
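If you would rather skip the OpenCV dependency, here is a minimal NumPy sketch of the same grayscale conversion, assuming the reshaped (m, n, 3) pixel_array_rgb from above:

# Standard luminance weights applied across the channel axis
pixel_array_gs = (0.299 * pixel_array_rgb[:, :, 0]
                  + 0.587 * pixel_array_rgb[:, :, 1]
                  + 0.114 * pixel_array_rgb[:, :, 2])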
Error: Assertion failed (0 < cn && cn <= CV_CN_MAX) in merge
I get this error in the merge function:
cv2.merge(channels,img2)
If the arguments are swapped, as shown here:
cv2.merge(img2,channels)
it does not raise an error, but the histograms are the same before and after equalization. What can I do about this piece of code?
Code:
import cv2,cv
import cv2.cv as cv
import numpy as np
from matplotlib import pyplot as plt
capture = cv.CaptureFromCAM(0)
img = cv.QueryFrame(capture)
img_size = cv.GetSize(img)
width,height = img_size
size = width,height,3
channels = np.zeros(size , np.uint8)
while (1):
    img = cv.QueryFrame(capture)
    img = np.asarray(img[:,:])
    cv2.imshow("original", img)
    hist = cv2.calcHist([img], [2], None, [256], [0, 256])
    # convert img to YCR_CB
    img2 = cv2.cvtColor(img, cv2.COLOR_BGR2YCR_CB)
    # split image to Y, CR, CB
    cv2.split(img2, channels)
    # histogram equalization to Y-MATRIX
    cv2.equalizeHist(channels[0], channels[0])
    # merge this matrix to reconstruct our colored image
    cv2.merge(channels, img2)
    # convert this output image to rgb
    rgb = cv2.cvtColor(img2, cv2.COLOR_YCR_CB2BGR)
    hist2 = cv2.calcHist([rgb], [2], None, [256], [0, 256])
    plt.plot(hist)
    plt.plot(hist2)
    plt.show()
Instead of using split and merge, take advantage of numpy slicing.
img2[:, :, 0] = cv2.equalizeHist(img2[:, :, 0])
# or run a small loop over each channel
You got the split() wrong here: it returns the channels.
Since you don't catch the return values, your channels are never initialized.
>>> import cv2
>>> help(cv2.split)
Help on built-in function split in module cv2:
split(...)
split(m[, mv]) -> mv
so it should look like:
channels = cv2.split(img2)
And please avoid the old cv API; stick with cv2 consistently (use cv2.VideoCapture, not cv.CaptureFromCAM).
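Putting that advice together, here is a minimal sketch of the whole loop written purely against the cv2 API (camera index 0 and the q-to-quit key are my assumptions):

import cv2

capture = cv2.VideoCapture(0)  # camera index 0 assumed

while True:
    ret, img = capture.read()
    if not ret:
        break
    cv2.imshow("original", img)

    # Equalize the luma channel in YCrCb space, then convert back to BGR
    img2 = cv2.cvtColor(img, cv2.COLOR_BGR2YCR_CB)
    channels = list(cv2.split(img2))
    channels[0] = cv2.equalizeHist(channels[0])
    img2 = cv2.merge(channels)
    equalized = cv2.cvtColor(img2, cv2.COLOR_YCR_CB2BGR)
    cv2.imshow("equalized", equalized)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

capture.release()
cv2.destroyAllWindows()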
I have an image and I want to extract a region from it. I have coordinates of left upper corner and right lower corner of this region. In gray scale I do it like this:
I = cv2.imread("lena.png")
I = cv2.cvtColor(I, cv2.COLOR_RGB2GRAY)
region = I[248:280,245:288]
tools.show_1_image_pylab(region)
I can't figure out how to do it in color. I thought of extracting each channel R, G, B, slicing this region from each of the channels, and merging them back together, but there has to be a shorter way.
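For the record, NumPy slicing already works on a color image: a row/column slice keeps all channels, so no split/merge is needed. A minimal sketch:

I = cv2.imread("lena.png")      # BGR color image
region = I[248:280, 245:288]    # same slice, all 3 channels kept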
There is a slight difference in pixel ordering in OpenCV and Matplotlib.
OpenCV follows BGR order, while matplotlib follows RGB order.
So when you display an image loaded in OpenCV using pylab functions, you need to convert it into RGB mode (see the NB below for a simpler conversion). The method below demonstrates it:
import cv2
import numpy as np
import matplotlib.pyplot as plt
img = cv2.imread('messi4.jpg')
b,g,r = cv2.split(img)
img2 = cv2.merge([r,g,b])
plt.subplot(121); plt.imshow(img)    # expect distorted color
plt.subplot(122); plt.imshow(img2)   # expect true color
plt.show()
cv2.imshow('bgr image', img)    # expect true color
cv2.imshow('rgb image', img2)   # expect distorted color
cv2.waitKey(0)
cv2.destroyAllWindows()
NB: Please check @Amro's comment below for a better method of conversion between BGR and RGB: img2 = img[:,:,::-1]. Very simple.
Run this code and see the difference in the results yourself. Below is what I got:
Using Matplotlib:
Using OpenCV:
Two more options not mentioned yet:
img[..., ::-1] # same as the mentioned img[:, :, ::-1] but slightly shorter
and the versatile
cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
The best way to do this is to use:
img2 = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
This will convert the BGR img array to an RGB img2 array. Now you can use the img2 array with matplotlib's imshow() function.
Reference: cvtColor
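For completeness, a short usage sketch of that conversion end to end (reusing the messi4.jpg example from above):

import cv2
from matplotlib import pyplot as plt

img = cv2.imread('messi4.jpg')               # OpenCV loads in BGR order
img2 = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # convert to RGB for matplotlib
plt.imshow(img2)
plt.show()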