I am working with 3D CT images and trying to remove the lines from the bed.
A slice from the original Image:
Following is my code to generate the mask:
import numpy as np
from scipy import ndimage
from skimage import morphology

segmentation = morphology.dilation(image_norm, np.ones((1, 1, 1)))
labels, label_nb = ndimage.label(segmentation)
# Count the voxels in each label and ignore the background (label 0)
label_count = np.bincount(labels.ravel().astype(int))
label_count[0] = 0
# Keep only the largest connected component
mask = labels == label_count.argmax()
mask = morphology.dilation(mask, np.ones((40, 40, 40)))
mask = ndimage.binary_fill_holes(mask)
mask = morphology.dilation(mask, np.ones((1, 1, 1)))
This results in the following image:
As you can see, in the above image the CT scan is distorted as well.
If I change: mask = morphology.dilation(mask, np.ones((40, 40, 40))) to mask = morphology.dilation(mask, np.ones((100, 100, 100))), the resulting image is as follows:
How can I remove only the two lines under the image without changing the image area? Any help is appreciated.
You've probably found another solution by now. Regardless, I've seen similar CT processing questions on SO, and figured it would be helpful to demonstrate a Scikit-Image solution. Here's the end result.
Here's the code to produce the above images.
from skimage import io, filters, color, morphology
import matplotlib.pyplot as plt
import numpy as np
image = color.rgba2rgb(
io.imread("ctimage.png")[9:-23,32:-9]
)
gray = color.rgb2gray(image)
tgray = gray > filters.threshold_otsu(gray)
keep_mask = morphology.remove_small_objects(tgray,min_size=463)
keep_mask = morphology.remove_small_holes(keep_mask)
maskedimg = np.einsum('ijk,ij->ijk',image,keep_mask)
fig, axes = plt.subplots(ncols=3)
image_list = [image, keep_mask, maskedimg]
title_list = ["Original", "Mask", "Image w/mask"]
for i, ax in enumerate(axes):
    ax.imshow(image_list[i])
    ax.set_title(title_list[i])
    ax.axis("off")
fig.tight_layout()
Notes on code
image = color.rgba2rgb(
io.imread("ctimage.png")[9:-23,32:-9]
)
gray = color.rgb2gray(image)
The image was saved as RGBA when I downloaded it from SO, and it needs to be in grayscale for use in the threshold function. Your image might already be in grayscale.
Also, the downloaded image showed axis markings, which is why I've trimmed it.
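If your file loads without an alpha channel, or already as a single channel, the conversions above would fail or be unnecessary. A minimal guard, assuming a scikit-image style array, might look like:
import numpy as np
from skimage import io, color

raw = io.imread("ctimage.png")
if raw.ndim == 2:                 # already grayscale
    gray = raw
elif raw.shape[-1] == 4:          # RGBA: drop the alpha channel first
    gray = color.rgb2gray(color.rgba2rgb(raw))
else:                             # plain RGB
    gray = color.rgb2gray(raw)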
maskedimg = np.einsum('ijk,ij->ijk',image,keep_mask)
I wanted to apply keep_mask to every channel of the RGB image. The mask is a 2D array, and the image is a 3D array. I referenced this previous question in order to apply the mask to the image.
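As an aside, plain NumPy broadcasting does the same thing and may read more simply; this is an equivalent sketch, not what I originally used:
# Equivalent to the einsum call: broadcast the 2D mask across the colour axis
maskedimg = image * keep_mask[:, :, np.newaxis]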
I'm trying to use Tesseract-OCR to get the readings on the images below, but I'm having issues getting consistent results with the spotted background. I have the below configuration in my pytesseract:
CONFIG = f"—psm 6 -c tessedit_char_whitelist=01234567890ABCDEFGHIJKLMNOPQRSTUVWXYZÅÄabcdefghijklmnopqrstuvwxyzåäö.,-"
I have also tried the below image pre-processing, with some good, but still not perfect, results:
blur = cv2.blur(img,(4,4))
(T, threshInv) = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
What I want is to consistently be able to identify the numbers and the decimal separator. What image pre-processing could help in getting consistent results on images like the ones below?
That was a challenge, but I think I have an interesting approach: pattern matching.
If you zoom in, you realize that the pattern in the back has only 4 possible dots: a single full pixel, a double full pixel, and a double pixel with a medium left or right. So what I did was grab these 4 patterns from the image with 17.160.000,00 and got to work. You could save these to load again; I just grabbed them on the fly.
import cv2
import numpy as np

img = cv2.imread('C:/Users/***/17.jpg', cv2.IMREAD_GRAYSCALE)
pattern_1 = img[2:5,1:5]
pattern_2 = img[6:9,5:9]
pattern_3 = img[6:9,11:15]
pattern_4 = img[9:12,22:26]
# just to show it carries over to other pics ;)
img = cv2.imread('C:/Users/****/6.jpg', cv2.IMREAD_GRAYSCALE)
Actual Pattern Matching
Next we match all the patterns and threshold to find all occurrences; I used 0.7, but you can play around with it a little. These patterns take off some pixels on the side and only match a single pixel on the left, so we pad twice (once with an extra pixel) to hit both offsets for the first 3 patterns. The last one is the single pixel, so it doesn't need it.
res_1 = cv2.matchTemplate(img,pattern_1,cv2.TM_CCOEFF_NORMED )
thresh_1 = cv2.threshold(res_1,0.7,1,cv2.THRESH_BINARY)[1].astype(np.uint8)
pat_thresh_1 = np.pad(thresh_1,((1,1),(1,2)),'constant')
pat_thresh_15 = np.pad(thresh_1,((1,1),(2,1)), 'constant')
res_2 = cv2.matchTemplate(img,pattern_2,cv2.TM_CCOEFF_NORMED )
thresh_2 = cv2.threshold(res_2,0.7,1,cv2.THRESH_BINARY)[1].astype(np.uint8)
pat_thresh_2 = np.pad(thresh_2,((1,1),(1,2)),'constant')
pat_thresh_25 = np.pad(thresh_2,((1,1),(2,1)), 'constant')
res_3 = cv2.matchTemplate(img,pattern_3,cv2.TM_CCOEFF_NORMED )
thresh_3 = cv2.threshold(res_3,0.7,1,cv2.THRESH_BINARY)[1].astype(np.uint8)
pat_thresh_3 = np.pad(thresh_3,((1,1),(1,2)),'constant')
pat_thresh_35 = np.pad(thresh_3,((1,1),(2,1)), 'constant')
res_4 = cv2.matchTemplate(img,pattern_4,cv2.TM_CCOEFF_NORMED )
thresh_4 = cv2.threshold(res_4,0.7,1,cv2.THRESH_BINARY)[1].astype(np.uint8)
pat_thresh_4 = np.pad(thresh_4,((1,1),(1,2)),'constant')
Editing the Image
Now the only thing left to do is remove all the matches from the image. Since we have a mostly white background, we just set them to 255 to blend in.
img[pat_thresh_1==1] = 255
img[pat_thresh_15==1] = 255
img[pat_thresh_2==1] = 255
img[pat_thresh_25==1] = 255
img[pat_thresh_3==1] = 255
img[pat_thresh_35==1] = 255
img[pat_thresh_4==1] = 255
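If you'd rather avoid the repetition, the match / threshold / pad steps can be collected into one loop; this is just a sketch compacting the code above (all matching is finished before any erasing, so earlier edits can't disturb later matches):
patterns = [pattern_1, pattern_2, pattern_3, pattern_4]
masks = []
for i, pat in enumerate(patterns):
    res = cv2.matchTemplate(img, pat, cv2.TM_CCOEFF_NORMED)
    thresh = cv2.threshold(res, 0.7, 1, cv2.THRESH_BINARY)[1].astype(np.uint8)
    masks.append(np.pad(thresh, ((1, 1), (1, 2)), 'constant'))
    if i < 3:  # the first three patterns also need the shifted padding
        masks.append(np.pad(thresh, ((1, 1), (2, 1)), 'constant'))
for m in masks:
    img[m == 1] = 255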
Output
Edit:
Take a look at Abstract's answer as well for refining this output and Tesseract fine-tuning.
You may find a solution using a slightly more complex approach by filtering in the frequency domain instead of the spatial domain. The thresholds might require some tweaking depending on how tesseract performs with the output images.
Implementation:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('C:\\Test\\number.jpg', cv2.IMREAD_GRAYSCALE)
# Perform 2D FFT
f = np.fft.fft2(img)
fshift = np.fft.fftshift(f)
magnitude_spectrum = 20*np.log(np.abs(fshift))
# Squash all of the frequency magnitudes above a threshold
fshift[magnitude_spectrum > 195] = 0
# Inverse FFT back into the real-spatial-domain
f_ishift = np.fft.ifftshift(fshift)
img_back = np.fft.ifft2(f_ishift)
img_back = np.real(img_back)
img_back = cv2.normalize(img_back, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
# Use the inverted FFT image to keep only the black values below a threshold
out_img = np.where(img_back < 100, 0, 255).astype(np.uint8)
plt.subplot(131),plt.imshow(img, cmap = 'gray')
plt.title('Input Image'), plt.xticks([]), plt.yticks([])
plt.subplot(132),plt.imshow(img_back, cmap = 'gray')
plt.title('Reversed FFT'), plt.xticks([]), plt.yticks([])
plt.subplot(133),plt.imshow(out_img, cmap = 'gray')
plt.title('Output'), plt.xticks([]), plt.yticks([])
plt.show()
Output:
Median Blur Implementation:
import cv2
import numpy as np
img = cv2.imread('C:\\Test\\number.jpg', cv2.IMREAD_GRAYSCALE)
blur = cv2.medianBlur(img, 3)
# Crush near-black speckle to pure black
blur[blur < 20] = 0
cv2.imshow("Test", blur)
cv2.waitKey()
Output:
Final Edit:
So using Eumel's solution and combining it with this bit of code at the bottom yields a 100% successful result:
img[pat_thresh_1==1] = 255
img[pat_thresh_15==1] = 255
img[pat_thresh_2==1] = 255
img[pat_thresh_25==1] = 255
img[pat_thresh_3==1] = 255
img[pat_thresh_35==1] = 255
img[pat_thresh_4==1] = 255
# Eumel's code above this line
import pytesseract
from PIL import Image

img = cv2.erode(img, np.ones((3, 3)))
cv2.imwrite("out.png", img)
cv2.imshow("Test", img)
print(pytesseract.image_to_string(Image.open("out.png"), lang='eng',
                                  config='--psm 10 --oem 3 -c tessedit_char_whitelist=0123456789.,'))
Output Image Examples:
Whitelisting the Tesseract characters also appears to help quite a bit in preventing false identifications.
I am currently working on the below and am struggling to understand the best approach.
I've searched a lot but was not able to find answers that would match what I am trying to do.
The problem:
Relocating an object (e.g. a shoe) within the existing image (white background) to a certain location (e.g. move it up)
Inserting and positioning the object (e.g. a shoe) at a user-specified location within a new background (still white), with a user-specified new height / width
How far I got:
I've managed to identify the object within the picture using CV2, got the outer contours, added a little padding, and cropped the object (see below). I am happy with cropping it that way, as all my images have a one-coloured background and I will keep the background in the same colour.
Where I am stuck:
My cropped object and the old / new background do not share the same shape, hence I am not able to overlay / concatenate / merge them.
Given both images are stored as np arrays, I assume the answer will be to somehow place the shoe crop np.array within the background np.array; however, I have no clue how to do this (a minimal sketch of that idea follows below).
Maybe there is an easier / different way to do this?
Would be very grateful to hear from anyone who can lead me into the right direction.
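For reference, placing one array inside another is plain slice assignment; a minimal sketch, where y_off / x_off are hypothetical offsets for wherever the crop should land:
import numpy as np

new_height, new_width = 1200, 1200
# White canvas the size of the new background
background = 255 * np.ones((new_height, new_width, 3), dtype=np.uint8)
h, w = crop_img.shape[:2]
y_off, x_off = 100, 250  # hypothetical target position of the crop's top-left corner
background[y_off:y_off + h, x_off:x_off + w] = crop_img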
Code
#importing dependencies
import os
import numpy as np
import cv2
from matplotlib import pyplot as plt
# Config
path = '/Users/..../Shoes/'
img_list = os.listdir(path)
img_path = path + img_list[0]
#Outline
color = (0,255,0)
thickness = 3
padding = 10
# load the image and convert to RGB
image = cv2.imread(img_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
# create a binary thresholded image
_, binary = cv2.threshold(gray, 225, 255, cv2.THRESH_BINARY_INV)
# find the contours from the thresholded image
contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Identifying outer contours
x_axis = []
y_axis = []
for i in range(len(contours)):
    for y in range(len(contours[i])):
        x_axis.append(contours[i][y][0][0])
        y_axis.append(contours[i][y][0][1])
min_x = min(x_axis) - padding
min_y = min(y_axis) - padding
max_x = max(x_axis) + padding
max_y = max(y_axis) + padding
# Defining start and endpoint of outline Rectangle based on identified outer corners + Padding
start_point = (min_x, min_y)
end_point = (max_x, max_y)
image_outline = cv2.rectangle(image, start_point, end_point, color, thickness)
plt.imshow(image_outline)
plt.show()
#Crop Image
crop_img = image[min_y:max_y, min_x:max_x]
print(crop_img.shape)
plt.imshow(crop_img)
plt.show()
I think I got to the solution; this centers the image for any given new background height / width.
Still interested in quicker / cleaner ways - one is sketched after the code below.
#Define the new height and width you want to have
new_height = 1200
new_width = 1200
#Check current height and width of the cropped image
crop_height = crop_img.shape[0]
crop_width = crop_img.shape[1]
#Calculate how much you need to add to the sides and top - basically half of the remaining height / width ... currently not working correctly for odd numbers
add_sides = int((new_width - crop_width)/2)
add_top_and_btm = int((new_height - crop_height)/2)
# Adding background to the sides
bg_sides = np.zeros(shape=[crop_height, add_sides, 3], dtype=np.uint8)
bg_sides2 = 255 * np.ones(shape=[crop_height, add_sides, 3], dtype=np.uint8)
new_crop_img = np.insert(crop_img, [1], bg_sides2, axis=1)
new_crop_img = np.insert(new_crop_img, [-1], bg_sides2, axis=1)
# Then adding Background to top and bottom
bg_top_and_btm = np.zeros(shape=[add_top_and_btm, new_width, 3],
dtype=np.uint8)
bg_top_and_btm2 = 255 * np.ones(shape=[add_top_and_btm, new_width, 3],
dtype=np.uint8)
new_crop_img = np.insert(new_crop_img, [1], bg_top_and_btm2, axis=0)
new_crop_img = np.insert(new_crop_img, [-1], bg_top_and_btm2, axis=0)
plt.imshow(new_crop_img)
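One cleaner route, which also handles the odd-number remainders correctly, is cv2.copyMakeBorder, which pads an image on all four sides in a single call; a sketch using the same variables:
import cv2

pad_y = new_height - crop_height
pad_x = new_width - crop_width
# Split the padding, giving the extra pixel to the bottom / right when odd
new_crop_img = cv2.copyMakeBorder(crop_img,
                                  pad_y // 2, pad_y - pad_y // 2,
                                  pad_x // 2, pad_x - pad_x // 2,
                                  cv2.BORDER_CONSTANT, value=(255, 255, 255))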
Below I have attached two images. I want the first image to be cropped in a heart shape according to the mask image (2nd image).
I searched for solutions but was not able to find a simple and easy way to do this. Kindly help me with the solution.
2 images:
Image to be cropped:
Mask image:
Let's start by loading the temple image from sklearn:
import matplotlib.pyplot as plt
from sklearn.datasets import load_sample_images

dataset = load_sample_images()
temple = dataset.images[0]
plt.imshow(temple)
Since we need to use the second image as a mask, we must do a binary thresholding operation. This will create a black-and-white mask image, which we can then use to mask the former image.
import cv2

heart = cv2.imread(r'path_to_im\heart.jpg', cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(heart, thresh=180, maxval=255, type=cv2.THRESH_BINARY)
We can now trim the image so its dimensions are compatible with the temple image:
temple_x, temple_y, _ = temple.shape
heart_x, heart_y = mask.shape
x_heart = min(temple_x, heart_x)
x_half_heart = mask.shape[0]//2
heart_mask = mask[x_half_heart-x_heart//2 : x_half_heart+x_heart//2+1, :temple_y]
plt.imshow(heart_mask, cmap='Greys_r')
Now we have to slice the image that we want to mask, to fit the dimensions of the actual mask. Another approach would have been to resize the mask, which is doable, but we'd then end up with a distorted heart image. To apply the mask, we have cv2.bitwise_and:
temple_width_half = temple.shape[1]//2
temple_to_mask = temple[:,temple_width_half-x_half_heart:temple_width_half+x_half_heart]
masked = cv2.bitwise_and(temple_to_mask,temple_to_mask,mask = heart_mask)
plt.imshow(masked)
If you want to instead make the masked (black) region transparent:
tmp = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
_,alpha = cv2.threshold(tmp,0,255,cv2.THRESH_BINARY)
b, g, r = cv2.split(masked)
rgba = [b,g,r, alpha]
masked_tr = cv2.merge(rgba,4)
plt.axis('off')
plt.imshow(masked_tr)
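Note the transparency only survives in a format with an alpha channel; saving with OpenCV would be, for instance:
# PNG preserves the alpha channel; JPEG would silently drop it
cv2.imwrite('masked_transparent.png', masked_tr)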
Since I am on a remote server, cv2.imshow doesn't work for me, so I imported plt.
This code does what you are looking for:
import cv2
import matplotlib.pyplot as plt
img_org = cv2.imread('~/temple.jpg')
img_mask = cv2.imread('~/heart.jpg')
##Resizing images
img_org = cv2.resize(img_org, (400,400), interpolation = cv2.INTER_AREA)
img_mask = cv2.resize(img_mask, (400,400), interpolation = cv2.INTER_AREA)
# Black out every pixel of the original wherever the mask is black
for h in range(img_mask.shape[0]):
    for w in range(img_mask.shape[1]):
        if img_mask[h][w][0] == 0:
            for i in range(3):
                img_org[h][w][i] = 0
plt.imshow(img_org)
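The same masking can be done without Python-level loops through boolean indexing, which is much faster on large images; an equivalent sketch:
# Zero out all three channels wherever the mask's first channel is black
img_org[img_mask[:, :, 0] == 0] = 0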
I need help with image segmentation. I have an MRI image of a brain with a tumor. I need to remove the cranium (skull) from the MRI and then segment only the tumor object. How could I do that in Python with image processing? I have tried making contours, but I don't know how to find and remove the largest contour and get only the brain without the skull.
Thanks a lot.
import numpy as np
from sklearn.cluster import KMeans
from skimage import morphology

def get_brain(img):
    row_size = img.shape[0]
    col_size = img.shape[1]
    # Standardise the intensities
    mean = np.mean(img)
    std = np.std(img)
    img = img - mean
    img = img / std
    # Use the central region to estimate typical tissue intensities
    middle = img[int(col_size / 5):int(col_size / 5 * 4), int(row_size / 5):int(row_size / 5 * 4)]
    mean = np.mean(middle)
    max = np.max(img)
    min = np.min(img)
    img[img == max] = mean
    img[img == min] = mean
    # Cluster the central intensities into two groups and threshold between them
    kmeans = KMeans(n_clusters=2).fit(np.reshape(middle, [np.prod(middle.shape), 1]))
    centers = sorted(kmeans.cluster_centers_.flatten())
    threshold = np.mean(centers)
    thresh_img = np.where(img < threshold, 1.0, 0.0)  # threshold the image
    # Clean the mask up with morphology
    eroded = morphology.erosion(thresh_img, np.ones([3, 3]))
    dilation = morphology.dilation(eroded, np.ones([5, 5]))
    return dilation
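A hypothetical call, assuming a grayscale MRI slice loaded with OpenCV (the filename is a placeholder):
import cv2

img = cv2.imread('mri_slice.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)
brain_mask = get_brain(img)
brain_only = img * brain_mask  # keep only the pixels inside the mask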
These images are similar to the ones I'm looking at:
Thanks for the answers.
Preliminaries
Some preliminary code:
%matplotlib inline
import numpy as np
import cv2
from matplotlib import pyplot as plt
from skimage.morphology import extrema
from skimage.morphology import watershed as skwater
def ShowImage(title, img, ctype):
    plt.figure(figsize=(10, 10))
    if ctype == 'bgr':
        b, g, r = cv2.split(img)        # get b, g, r
        rgb_img = cv2.merge([r, g, b])  # switch it to rgb
        plt.imshow(rgb_img)
    elif ctype == 'hsv':
        rgb = cv2.cvtColor(img, cv2.COLOR_HSV2RGB)
        plt.imshow(rgb)
    elif ctype == 'gray':
        plt.imshow(img, cmap='gray')
    elif ctype == 'rgb':
        plt.imshow(img)
    else:
        raise Exception("Unknown colour type")
    plt.axis('off')
    plt.title(title)
    plt.show()
For reference, here's one of the brain+skulls you linked to:
#Read in image
img = cv2.imread('brain.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ShowImage('Brain with Skull',gray,'gray')
Extracting a Mask
If the pixels in the image can be classified into two different intensity classes, that is, if they have a bimodal histogram, then Otsu's method can be used to threshold them into a binary mask. Let's check that assumption.
#Make a histogram of the intensities in the grayscale image
plt.hist(gray.ravel(),256)
plt.show()
Okay, the data is nicely bimodal. Let's apply the threshold and see how we do.
#Threshold the image to binary using Otsu's method
ret, thresh = cv2.threshold(gray,0,255,cv2.THRESH_OTSU)
ShowImage('Applying Otsu',thresh,'gray')
Things are easier to see if we overlay our mask onto the original image
colormask = np.zeros(img.shape, dtype=np.uint8)
colormask[thresh!=0] = np.array((0,0,255))
blended = cv2.addWeighted(img,0.7,colormask,0.1,0)
ShowImage('Blended', blended, 'bgr')
Extracting the Brain
The overlap of the brain (shown in red) with the mask is so good that we'll stop tuning right here. To extract the brain, let's find the connected components and keep the largest one, which will be the brain.
ret, markers = cv2.connectedComponents(thresh)
#Get the area taken by each component. Ignore label 0 since this is the background.
marker_area = [np.sum(markers==m) for m in range(1, np.max(markers)+1)]
#Get label of largest component by area
largest_component = np.argmax(marker_area)+1 #Add 1 since we dropped zero above
#Get pixels which correspond to the brain
brain_mask = markers==largest_component
brain_out = img.copy()
#In a copy of the original image, clear those pixels that don't correspond to the brain
brain_out[brain_mask==False] = (0,0,0)
ShowImage('Connected Components',brain_out,'rgb')
Considering the Second Brain
Running this again with your second image produces a mask with many holes:
We can close many of these holes using a closing transformation:
brain_mask = np.uint8(brain_mask)
kernel = np.ones((8,8),np.uint8)
closing = cv2.morphologyEx(brain_mask, cv2.MORPH_CLOSE, kernel)
ShowImage('Closing', closing, 'gray')
We can now extract the brain:
brain_out = img.copy()
#In a copy of the original image, clear those pixels that don't correspond to the brain
brain_out[closing==False] = (0,0,0)
ShowImage('Connected Components',brain_out,'rgb')
If you need to cite this for some reason:
Richard Barnes. (2018). Using Otsu's method for skull-brain segmentation (v1.0.1). Zenodo. https://doi.org/10.5281/zenodo.6042312
Have you perhaps tried to run python skull_stripping.py?
You can modify the parameters, but it normally works well.
There are some new studies using deep learning for skull stripping, which I found interesting:
https://github.com/mateuszbuda/brain-segmentation/tree/master/skull-stripping
# -*- coding: utf-8 -*-
"""
Created on Wed Jul 28 17:10:56 2021
@author: K Somasundaram, ka.somasundaram@gmail.com
"""
import numpy as npy
from skimage.filters import threshold_otsu
from skimage import measure
# import image reading module image from matplotlib
import matplotlib.image as img
#import image ploting module pyplot from matplotlib
import matplotlib.pyplot as plt
inim=img.imread('015.bmp')
#Find the dimension of the input image
dimn=inim.shape
print('dim=',dimn)
plt.figure(1)
plt.imshow(inim)
#-----------------------------------------------
# Find a threshold for the image using Otsu method in filters
th=threshold_otsu(inim)
print('Threshold = ',th)
# Binarize using threshold th
binim1=inim>th
plt.figure(2)
plt.imshow(binim1)
#--------------------------------------------------
# Erode the binary image with a structuring element
from skimage.morphology import disk
import skimage.morphology as morph
#Erode it with a disk of radius 3
eroded_image=morph.erosion(binim1,disk(3))
plt.figure(3)
plt.imshow(eroded_image)
#---------------------------------------------
#------------------------------------------------
# Label the binary image
labelimg=measure.label(eroded_image,background=0)
plt.figure(4)
plt.imshow(labelimg)
#--------------------------------------------------
# Find the area of each connected region
prop=measure.regionprops(labelimg)
# Find the number of objects in the image
ncount=len(prop)
print ( 'Number of regions=',ncount)
#-----------------------------------------------------
# Find the LCC (largest connected component) index
argmax=0
maxarea=0
#Find the largest connected region
for i in range(ncount):
    if prop[i].area > maxarea:
        maxarea = prop[i].area
        argmax = i
print('max area=',maxarea,'arg max=',argmax)
print('values=',[region.area for region in prop])
# Take only the largest connected region
# Generate a mask of the size of the input image with all zeros
bmask=npy.zeros(inim.shape,dtype=npy.uint8)
# Set to 1 all pixels belonging to the LCC
bmask[labelimg == (argmax+1)] =1
plt.figure(5)
plt.imshow(bmask)
#------------------------------------------------
#Dilate the isolated region to recover the pixels lost in erosion
dilated_mask=morph.dilation(bmask,disk(6))
plt.figure(6)
plt.imshow(dilated_mask)
#---------------------------------------
# Extract the brain using the brain mask
brain=inim*dilated_mask
plt.figure(7)
plt.imshow(brain)
Input Image