Measuring the Feret diameter of multiple particles per TIFF image - python

I am looking to measure the minimum and maximum diameter of multiple particles (groups) in a TIFF image.
This is my current code:
from PIL import Image
import numpy as np
from skimage import measure
import os
import pandas as pd
import warnings; warnings.filterwarnings(action='once')
dirname1 = path  # folder containing the TIFF images
final1 = []
for fname in os.listdir(dirname1):
    im = Image.open(os.path.join(dirname1, fname))
    imarray = np.array(im)
    final1.append(imarray)
final1 = np.asarray(final1)
groups, group_count = measure.label(final1 == 0, return_num = True, connectivity = 2)
print('Groups: \n', groups)
print(f'Number of particles: {group_count}')
df1 = (pd.DataFrame(dict(zip(['Particle #', 'Size [pixel #]'],
                             np.unique(groups, return_counts=True))))
       .loc[lambda d: d['Particle #'].ne(0)])
df1.index -= 1
props = measure.regionprops_table(groups, properties = ['label', 'equivalent_diameter'])
df1_new = pd.DataFrame(props)
My TIFF image looks like this: [image example] (I normally work with multiple TIFF images.)
In my code, I have used skimage to calculate the equivalent diameter. However, I need the min/max Feret diameter in the df1 DataFrame as well.
Thank you.

If you change the last two lines in your code to:
props = measure.regionprops_table(groups[0], properties=['label', 'equivalent_diameter', 'axis_major_length', 'axis_minor_length'])
props['equiv_dia'] = props.pop('equivalent_diameter')
props['min_feret'] = props.pop('axis_minor_length')
props['max_feret'] = props.pop('axis_major_length')
df1_new = pd.DataFrame(props)
print(df1_new)
For the example image, you will get the following output:
label equiv_dia min_feret max_feret
0 1 2.580762 0.000000 3.651484
1 2 10.802272 3.651484 26.226566
2 3 3.059832 2.000000 3.651484
3 4 3.578801 3.096570 4.195087
4 5 5.497605 3.651484 8.032033
The changes to your code solve the following issues:
an error message caused by bad array shapes (groups is a 3-D stack of label images, one layer per TIFF, so the 2-D slice groups[0] must be passed to regionprops_table)
the missing columns for the Feret values in the DataFrame
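As an aside, the renaming done with pop above can equally be done on the finished DataFrame with pandas' rename; a small equivalent sketch:
df1_new = pd.DataFrame(props).rename(columns={
    'equivalent_diameter': 'equiv_dia',
    'axis_minor_length': 'min_feret',
    'axis_major_length': 'max_feret',
})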
The measure.regionprops_table() method comes with 'axis_major_length' and 'axis_minor_length' options for the calculated properties, which seem to be roughly equivalent to the max/min Feret values.
If you are not satisfied with the values these properties provide (I'm not), I suggest deciding which other tool you want to use for this calculation to obtain values satisfying your needs. It may be worth asking another question about calculating Feret values.
As I created the tag feret-values, you can use it along with python in a new question about the differences in min/max Feret values as calculated by different Python scripts/modules.
I have myself checked out 'RegionProperties' object has no attribute 'feret_diameter_max', but the Feret property in scikit-image throws an error for me.
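For completeness: newer scikit-image releases (0.18 and later) do expose the maximum Feret diameter directly as a region property; there is still no built-in minimum Feret. A minimal sketch, assuming such a version is installed and reusing groups from the question:
from skimage import measure
import pandas as pd

# 'feret_diameter_max' requires scikit-image >= 0.18;
# a minimum Feret property is not provided out of the box
props = measure.regionprops_table(
    groups[0],
    properties=['label', 'equivalent_diameter', 'feret_diameter_max'],
)
df1_new = pd.DataFrame(props)
print(df1_new)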

Related

Best path in Dynamic Time Warping

I’ve been using different Python DTW packages to find the shift between two images. Suppose we have image1 and image2 with the same shape. I apply DTW to each row of the two, i.e.:
for i in range(image1.shape[0]):
    alignment = dtw(image2[i, :], image1[i, :])
    shift = alignment.index1 - alignment.index2
    all_shifts.append(shift)
I need all_shifts to have the same shape as both images. However, in all the Python DTW packages, the lengths of alignment.index1 and alignment.index2 are larger than the lengths of the input series s1 and s2, since there are duplicates. I have tried to select the unique indices by simple criteria. For example, in a synthetic test, I selected those indices that have the maximum or minimum shifts, but the results are noisy. In the real case, I don't know the shifts between the two images in advance, so finding unique indices is not straightforward.
Could anyone advise me how I can find the best unique indices of the alignment, so that I could have a shift with the same length as s1 and s2?
Thank you,
I have tried the following code:
import numpy as np
import pandas as pd
from dtw import *

def warp_function(image2, image1):
    warp_arr = np.zeros(image1.shape)
    warped_img = np.zeros(image2.shape)
    for i in range(image1.shape[0]):
        print(f'[Warping function]: trace {i} out of {image1.shape[0]}')
        reference = np.zeros(image1.shape[1])
        query = np.zeros(image2.shape[1])
        reference[:] = image1[i, :]
        query[:] = image2[i, :]
        alignment = dtw(query, reference)
        shift = alignment.index2 - alignment.index1
        warp_dict = {'shift': shift, 'index1': alignment.index1, 'index2': alignment.index2}
        df = pd.DataFrame(warp_dict)
        # df2 = df.drop_duplicates(subset=['index2'], keep='last')
        df2 = find_uniques(df)
        warp_arr[i, :] = df2['shift'].to_numpy()
        warp_index = df2['index1'].to_numpy()
        warped_img[i, :] = query[warp_index]
    return warp_arr, warped_img
find_uniques is a function that keeps the minimum or maximum shift for the duplicated indices.
I've been trying to find the shift between the two images using dtw. When I apply the shift, image2 and warped_img are expected to be the same. However, they are not, which shows that find_uniques doesn't pick the best unique indices (one possible variant is sketched below).
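For reference, here is one hypothetical way find_uniques could be written; this is only a sketch of the deduplication idea (keep, per reference index, the alignment pair with the smallest absolute shift), not the asker's actual implementation:
import pandas as pd

def find_uniques(df):
    # hypothetical variant: one row per reference sample (index2);
    # among duplicates, keep the pair with the smallest absolute shift
    df = df.assign(abs_shift=df['shift'].abs())
    df = (df.sort_values(['index2', 'abs_shift'])
            .drop_duplicates(subset=['index2'], keep='first'))
    return df.sort_values('index2').drop(columns='abs_shift').reset_index(drop=True)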

Filter regions using skimage regionprops and create a mask with filtered components

I have generated a mask through Otsu's Method and now I have to eliminate certain closed regions from it that do not satisfy area or eccentricity conditions.
I label the mask with skimage.measure.label function and then apply skimage.measure.regionprops to it to get information about the area and eccentricity of the regions. The code is the following:
lab_img = label(mask)
regions = regionprops(lab_img)
for region in regions:
    if (condition):
        # Remove this region
        # final_mask = mask (but without the discarded regions)
Does anyone know an efficient way to do this?
Thank you!
Here's one way to get what you want. The critical function is map_array, which lets you remap the values in an array based on input and output values, like with a Python dictionary. So you could create a table of properties using regionprops_table, then use NumPy to filter those columns quickly, and finally, remap using the labels column. Here's an example:
from skimage import measure, util
...
lab_image = measure.label(mask)

# table is a dictionary mapping column names to data columns
# (NumPy arrays)
table = measure.regionprops_table(
    lab_image,
    properties=('label', 'area', 'eccentricity'),
)
condition = (table['area'] > 10) & (table['eccentricity'] < 0.5)

# zero out labels not meeting condition
input_labels = table['label']
output_labels = input_labels * condition
filtered_lab_image = util.map_array(
    lab_image, input_labels, output_labels
)
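If you need the result as a boolean mask again (the final_mask from the question), you can simply threshold the filtered label image; a one-line follow-up:
# True wherever a region that met the condition remains
final_mask = filtered_lab_image > 0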

How to extract subimages from an image?

What are the ways to count and extract all subimages given a master image?
Sample 1
Input: [image]
Output should be 8 subimages.
Sample 2
Input: [image]
Output should have 6 subimages.
Note: these image samples are taken from the internet. Images can be of random dimensions.
Is there a way to draw lines of separation in these images and then split based on those details? E.g.: [annotated example image]
I don't think there'll be a general solution to properly extract all single figures from arbitrary tables of figures (as shown in the two examples), at least not using "simple" image-processing techniques.
For "perfect" tables with a constant grid layout and constant colour space between single figures (as shown in the two examples), the following approach might be an idea:
Calculate the mean standard deviation in x and y direction, and threshold using some custom parameter. The mean standard deviation within the constant colour spaces should be near zero. A custom parameter will be needed here, since there'll be artifacts, e.g. from JPG compression, whose effects might be more or less severe.
Do some binary closing on the mean standard deviations using custom parameters. There might be small constant colour spaces around captions or similar, cf. the second example. Again, custom parameters will be needed here.
From the resulting binary "signal", we can extract the start and stop positions for each subimage, and thus the subimage itself, by slicing from the original image. Attention: this only works if the tables show a constant grid layout!
Here's some code for the described approach:
import cv2
import numpy as np
from skimage.morphology import binary_closing

def extract_from_table(image, std_thr, kernel_x, kernel_y):

    # Threshold on mean standard deviation in x and y direction
    std_x = np.mean(np.std(image, axis=1), axis=1) > std_thr
    std_y = np.mean(np.std(image, axis=0), axis=1) > std_thr

    # Binary closing to close small whitespaces, e.g. around captions
    std_xx = binary_closing(std_x, np.ones(kernel_x))
    std_yy = binary_closing(std_y, np.ones(kernel_y))

    # Find start and stop positions of each subimage
    start_y = np.where(np.diff(np.int8(std_xx)) == 1)[0]
    stop_y = np.where(np.diff(np.int8(std_xx)) == -1)[0]
    start_x = np.where(np.diff(np.int8(std_yy)) == 1)[0]
    stop_x = np.where(np.diff(np.int8(std_yy)) == -1)[0]

    # Extract subimages
    return [image[y1:y2, x1:x2, :]
            for y1, y2 in zip(start_y, stop_y)
            for x1, x2 in zip(start_x, stop_x)]
for file in ['image1.jpg', 'image2.png']:
    img = cv2.imread(file)
    cv2.imshow('image', img)
    subimages = extract_from_table(img, 5, 21, 11)
    print('{} subimages found.'.format(len(subimages)))
    for i in subimages:
        cv2.imshow('subimage', i)
        cv2.waitKey(0)
The print output is:
8 subimages found.
6 subimages found.
Also, each subimage is shown for visualization purposes.
For both images, the same parameters were suitable, but that's just a coincidence here!
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
NumPy: 1.20.1
OpenCV: 4.5.1
scikit-image: 0.18.1
----------------------------------------
I could only extract the sub-images using a simple array-slicing technique. I am not sure if this is what you are looking for, but if one knows the number of table rows and columns, I think you can extract the sub-images.
import cv2

image = cv2.imread('table.jpg')
p = 2  # number of rows
q = 4  # number of columns
height, width, channels = image.shape  # shape is (rows, cols, channels)
patch_height = height // p
patch_width = width // q
x = 0
for i in range(0, p * patch_height, patch_height):
    for j in range(0, q * patch_width, patch_width):
        crop = image[i:i + patch_height, j:j + patch_width]
        cv2.imwrite("image_{0}.jpg".format(x), crop)
        x += 1
        # cv2.imshow('crop', crop)
        # cv2.waitKey(0)
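A variation on the same idea, in case the image dimensions are not exact multiples of the grid: NumPy's array_split tolerates uneven sizes. A short sketch (the 2x4 grid is just the example from above):
import cv2
import numpy as np

image = cv2.imread('table.jpg')
# split into 2 rows x 4 columns; array_split handles sizes
# that do not divide evenly
for r, row in enumerate(np.array_split(image, 2, axis=0)):
    for c, patch in enumerate(np.array_split(row, 4, axis=1)):
        cv2.imwrite("image_{0}_{1}.jpg".format(r, c), patch)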

How do I filter by area or eccentricity using skimage.measure.regionprops on a binary image in Python

I have a binary image of a road surface and I am trying to isolate the pothole only. Using skimage.measure.regionprops and skimage.measure.label I can produce a table of properties for different labels within the image.
How do I then filter using those values? - for instance using area or axis length or eccentricity to turn off certain labels.
Input, labelled image, and properties table: [images]
using python 3
I would use pandas together with skimage.measure.regionprops_table to get what you want:
import numpy as np
import pandas as pd
import imageio as iio
from skimage.measure import regionprops_table, label

image = np.asarray(iio.imread('path/to/image.png'))
labeled = label(image > 0)  # ensure input is binary
data = regionprops_table(
    labeled,
    properties=('label', 'eccentricity'),
)
table = pd.DataFrame(data)
table_sorted_by_ecc = table.sort_values(
    by='eccentricity', ascending=False
)
# print e.g. the 10 most eccentric labels
print(table_sorted_by_ecc.iloc[:10])
If you then want to e.g. produce the label image with only the most eccentric label, you can do:
eccentric_label = table['label'].iloc[np.argmax(table['eccentricity'])]
labeled_ecc = np.where(labeled == eccentric_label, eccentric_label, 0)
You can also do more sophisticated things, e.g. make a label image with only labels above a certain eccentricity. Below, we use NumPy elementwise multiplication to produce an array that is the original label if that label has high eccentricity, or 0 otherwise. We then use the skimage.util.map_array function to map the original labels to either themselves or 0, again, depending on the eccentricity.
from skimage.util import map_array

ecc_threshold = 0.3
eccentric_labels = table['label'] * (table['eccentricity'] > ecc_threshold)
new_labels = map_array(
    labeled,
    np.asarray(table['label']),
    np.asarray(eccentric_labels),
)
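If a plain binary mask of the surviving regions is what you ultimately need, the remapped label image can simply be thresholded; a one-line follow-up:
# boolean mask of regions above the eccentricity threshold
high_ecc_mask = new_labels > 0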

Search for all templates using scikit-image

I am trying to follow the tutorial from scikit-image regarding Template Matching (check it here).
Using just this example, I would like to find all matching coins (maxima) in the image, not only the one which gave the highest score. I was thinking about using:
from scipy.signal import argrelextrema
maxima = argrelextrema(result, np.greater)
but the problem is that it also finds very small local maxima, which are just noise. Is there any way to screen the NumPy array and find the strongest maxima? Thanks!
To find all the coins, the documentation suggests "...you should use a proper peak-finding function." The easiest of these is probably peak_local_max (as suggested in the comments), which is also from skimage and has a manual page here. Using some reasonable numbers for its arguments gets the peaks out of the response image.
The second comment about the peaks being displaced is also discussed in the documentation
"Note that the peaks in the output of match_template correspond to the origin (i.e. top-left corner) of the template."
One can manually correct for this (by translating the peaks by the side lengths of the template), or you can set the pad_input bool to True (source), which as a by-product means that the peaks in the response function line up with the center of the template at the point of maximal overlap.
Combining these two bits into a script we get something like:
import numpy as np
import matplotlib.pyplot as plt
from skimage import data
from skimage.feature import match_template
from skimage.feature import peak_local_max  # new import!

image = data.coins()
coin = image[170:220, 75:130]

result = match_template(image, coin, pad_input=True)  # added the pad_input bool
peaks = peak_local_max(result, min_distance=10, threshold_rel=0.5)  # find our peaks

# produce a plot equivalent to the one in the docs
plt.imshow(result)
# highlight matched regions (plural)
plt.plot(peaks[:, 1], peaks[:, 0], 'o', markeredgecolor='r', markerfacecolor='none', markersize=10)
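Because pad_input=True lines the peaks up with the template centres, the same coordinates can also be drawn directly on the original image; a small follow-up sketch:
# the peak coordinates line up with the coins themselves
plt.figure()
plt.imshow(image, cmap='gray')
plt.plot(peaks[:, 1], peaks[:, 0], 'o', markeredgecolor='r', markerfacecolor='none', markersize=10)
plt.show()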
I have been digging and found a solution, but unfortunately I am not sure I understand exactly what is done in the script. I have slightly modified a script found here:
import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage

neighborhood_size = 20  # size of the neighbourhood, in pixels
threshold = 0.01  # minimum difference between local max and min to count as a peak

# the maximum/minimum filters live in scipy.ndimage
data_max = ndimage.maximum_filter(result, neighborhood_size)
maxima = (result == data_max)
data_min = ndimage.minimum_filter(result, neighborhood_size)
diff = ((data_max - data_min) > threshold)
maxima[diff == 0] = 0

x_image, y_image = [], []
temp_size = coin.shape[0]

labeled, num_objects = ndimage.label(maxima)
slices = ndimage.find_objects(labeled)
x, y = [], []
for dy, dx in slices:
    x_center = (dx.start + dx.stop - 1) / 2
    x.append(x_center)
    y_center = (dy.start + dy.stop - 1) / 2
    y.append(y_center)

fig, (raw, found) = plt.subplots(1, 2)
raw.imshow(image, cmap=plt.cm.gray)
raw.set_axis_off()
found.imshow(result)
found.autoscale(False)
found.set_axis_off()
plt.plot(x, y, 'ro')
plt.show()
and produces this result: [output plot]
I also realized that the coordinates of the found peaks are shifted in comparison to the raw image. I think the difference comes from the template size. I will update when I find out more.
EDIT: with a slight code modification I was also able to find the locations on the input image:
for dy, dx in slices:
    # shift the centres by half the template size to map them onto the raw image
    x_image_center = (dx.start + dx.stop - 1 + temp_size) / 2
    x_image.append(x_image_center)
    y_image_center = (dy.start + dy.stop - 1 + temp_size) / 2
    y_image.append(y_image_center)
