Segmentation of 3D mesh and feature extraction - python

I have five pictures of wound models taken from different angles; links to the pictures are provided here.
I have used SfM to compute a mesh; a picture of the mesh is presented below. I would like to extract only the features associated with the wound region and compute its volume and depth accordingly.
To approach this, I have used U-Net segmentation to generate a 2D mask of the wound from the 2D pictures; an example of a 2D mask generated with U-Net is shown below.
I would like to know how I can map this mask onto the 3D mesh and extract the specific region of the mesh that corresponds to the wound while removing the other regions.
Any other ideas on how to segment the 3D mesh and extract a specific region of interest are greatly appreciated. Since I don't have other wound models, I cannot apply supervised learning using a 3D U-Net.

Convert the image and the mesh to NumPy arrays and follow these steps:
Duplicate the 2D mesh array and stack it so it has three channels, matching the image. For example, like this:
import numpy as np
from skimage.transform import resize

mesh = np.array(mesh)
mesh = resize(mesh, (HEIGHT, WIDTH))      # resize the 2D array to the image size
mesh3D = np.stack([mesh] * 3, axis=-1)    # shape (HEIGHT, WIDTH, 3)
Convert the pixel values of the mesh array to binary (0, 1): set the part where the wound is present to 1 and the rest to 0.
Multiply the mesh array with the image.
The parts of the image where the mesh value is 1 remain as they are, and the parts where it is 0 are set to 0 (see the sketch below).
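A minimal sketch of the binarise-and-multiply steps, assuming mesh3D from the snippet above, image as the photograph resized to the same (HEIGHT, WIDTH, 3) shape, and an assumed threshold of 0.5:
import numpy as np

# Binarise: values above the assumed 0.5 threshold are treated as wound.
binary_mask = (mesh3D > 0.5).astype(image.dtype)

# Element-wise multiplication keeps the image where the mask is 1
# and zeroes out everything else.
wound_only = image * binary_mask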

Related

Density or area based heatmap or contourmap

I have one image, given here, and I have the centroid and area of every small and big defect present in it. For example, I have three lists x, y and area, where x and y are the coordinates of the centroids of the defects (every yellow object counts as a defect) in the image, and area is the area of each defect computed from its contour. I want to show a density map or heatmap on this image where it is clearly shown that a defect with a larger area has a higher peak than a defect with a smaller area. How can I do this in Python? For reference I have attached one more image from a paper, given here; based on the KDE and weighted KDE of the image, it is clearly shown where the bigger defect (big yellow circle) has more area.
So you are trying to draw a heatmap superimposed on an image, to represent what you are calling the "defects" in the image (it's not clear from your explanation what those are; maybe deviations from a reference image?). This sounds like it would be very confusing for a viewer to interpret, having to mentally separate the heatmap pixels from the pixels of the image itself. Much better would be to create a new blank image with the same dimensions as the original, then plot points in it whose centers (x, y) represent the locations in the original image and whose radius/color represents area.
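A minimal sketch of that idea, using matplotlib and hypothetical x, y and area lists (the names mirror the question; the values here are made up):
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical defect centroids and areas standing in for the question's lists.
x = np.array([50, 120, 200])
y = np.array([80, 40, 150])
area = np.array([10, 60, 300])

h, w = 256, 256            # same dimensions as the original image
blank = np.zeros((h, w))   # new blank image

fig, ax = plt.subplots()
ax.imshow(blank, cmap='gray')
# Marker size and color both scale with defect area, so bigger defects stand out more.
sc = ax.scatter(x, y, s=area, c=area, cmap='hot')
fig.colorbar(sc, ax=ax, label='defect area')
plt.show()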

Project 3D mesh on 2d image using camera intrinsic matrix

I've been trying to use the HOnnotate dataset to extract perspective correct hand and object masks as shown in the images of Task-3 of the Hands-2019 challenge.
The dataset comes with the following annotations, provided in pickled files under the meta folder for each sequence. The pickle files in the training data contain a dictionary with the following keys:
objTrans: A 3x1 vector representing object translation
objRot: A 3x1 vector representing object rotation in axis-angle representation
handPose: A 48x1 vector representing the 3D rotation of the 16 hand joints including the root joint in axis-angle representation. The ordering of the joints follows the MANO model convention (see joint_order.png) and can be directly fed to the MANO model.
handTrans: A 3x1 vector representing the hand translation
handBeta: A 10x1 vector representing the MANO hand shape parameters
handJoints3D: A 21x3 matrix representing the 21 3D hand joint locations
objCorners3D: A 8x3 matrix representing the 3D bounding box corners of the object
objCorners3DRest: A 8x3 matrix representing the 3D bounding box corners of the object before applying the transformation
objName: Name of the object as given in YCB dataset
objLabel: Object label as given in YCB dataset
camMat: Intrinsic camera parameters
handVertContact: A 778D boolean vector, each element of which represents whether the corresponding MANO vertex is in contact with the object. A MANO vertex is in contact if its distance to the object surface is <4mm
handVertDist: A 778D float vector representing the distance of MANO vertices to the object surface.
handVertIntersec: A 778D boolean vector specifying if the MANO vertices are inside the object surface.
handVertObjSurfProj: A 778x3 matrix representing the projection of MANO vertices on the object surface.
It also comes with a visualization script (https://github.com/shreyashampali/ho3d) that can render the annotations as 3D meshes (using Open3D) or as 2D projections of the object corners and hand points (using Matplotlib):
What I am trying to do is project the visualization created by Open3D back to the original image.
So far I have not been able to do this. What I have been able to do is get the point cloud from the 3D mesh and apply the camera intrinsics to it to make it perspective correct. Now the question is how to create a mask out of the point cloud for both the hand and the object, like the one from the Open3D rendering.
# The code looks as follows.
import numpy as np
import cv2
import open3d

# "mesh" is an Open3D triangle mesh, i.e. "open3d.geometry.TriangleMesh()"
pcd = open3d.geometry.PointCloud()
pcd.points = mesh.vertices
pcd.colors = mesh.vertex_colors
pcd.normals = mesh.vertex_normals
pts3D = np.asarray(pcd.points)
# The hand/object lie along the negative z-axis, so flip y and z before projecting with OpenCV.
cord_change_mat = np.array([[1., 0., 0.], [0., -1., 0.], [0., 0., -1.]], dtype=np.float32)
pts3D = pts3D.dot(cord_change_mat.T)
# "anno['camMat']" is the 3x3 camera intrinsic matrix; zero rotation, translation and distortion.
img_points, _ = cv2.projectPoints(pts3D, np.zeros(3), np.zeros(3), anno['camMat'], np.zeros(4, dtype='float32'))
# Draw the perspective-correct point cloud back onto the image.
for point in img_points:
    p1, p2 = int(point[0][0]), int(point[0][1])
    img[p2, p1] = (255, 255, 255)
Basically, I'm trying to get this segmentation mask out:
PS. Sorry if this doesn't make much sense; I'm very new to 3D meshes, point clouds and their projections, and I don't know all the correct technical terms for them yet. Leave a comment with a question and I will try to explain as far as I can.
Turns out there is an easy way to do this task using Open3D and the camera intrinsic values. Basically we instruct Open3D to render the image from the POV of the camera.
import numpy as np
import cv2
import open3d
import open3d.visualization.rendering as rendering
# Create a renderer with a set image width and height
render = rendering.OffscreenRenderer(img_width, img_height)
# setup camera intrinsic values
pinhole = open3d.camera.PinholeCameraIntrinsic(img_width, img_height, fx, fy, cx, cy)
# Pick a background colour of the rendered image, I set it as black (default is light gray)
render.scene.set_background([0.0, 0.0, 0.0, 1.0]) # RGBA
# now create your mesh
mesh = open3d.geometry.TriangleMesh()
mesh.paint_uniform_color([1.0, 0.0, 0.0]) # set Red color for mesh
# define further mesh properties, shape, vertices etc (omitted here)
# Define a simple unlit Material.
# (The base color does not replace the mesh's own colors.)
mtl = rendering.Material()  # newer Open3D releases call this rendering.MaterialRecord()
mtl.base_color = [1.0, 1.0, 1.0, 1.0] # RGBA
mtl.shader = "defaultUnlit"
# add mesh to the scene
render.scene.add_geometry("MyMeshModel", mesh, mtl)
# render the scene with respect to the camera: camMat is the 3x3 intrinsic matrix,
# followed by the near and far clipping planes and the image width/height
render.scene.camera.set_projection(camMat, 0.1, 1.0, 640, 480)
img_o3d = render.render_to_image()
# we can now save the rendered image right at this point
open3d.io.write_image("output.png", img_o3d, 9)
# Optionally, we can convert the image to OpenCV format and play around.
# For my use case I mapped it onto the original image to check quality of
# segmentations and to create masks.
# (Note: OpenCV expects the color in BGR format, so swap red and blue.)
img_cv2 = cv2.cvtColor(np.array(img_o3d), cv2.COLOR_RGBA2BGR)
cv2.imwrite("cv_output.png", img_cv2)
This answer borrows a lot from this answer

Detect surfaces from a binary numpy array (image)

Assume that I have a binary NumPy array (0 or 1 / True or False) that comes from a .jpg image (a 2D array, from a grayscale image). I did some processing to get the edges of the image, based on color change.
Now, for every surface/body in this array, I need to get its center.
Here the original image:
Here the processed one:
Now I need to get the center of each surface enclosed by these lines (i.e. indices that more or less point to the center of each surface).
In the case you are interested, you can find the file (.npy) here:
https://gofile.io/d/K8U3ZK
Thanks a lot!
I found a solution that works: scipy.ndimage.label assigns a unique integer to each label/area. To validate the results I simply plot the output array:
import matplotlib.pyplot as plt
from scipy.ndimage import label

labeled_array, no_feats = label(my_binary_flower)
plt.imshow(labeled_array)
plt.show()
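Since the original question asks for the centers of each surface, a follow-up sketch using scipy.ndimage.center_of_mass on the labelled array from above (assuming my_binary_flower and labeled_array as defined there):
from scipy.ndimage import center_of_mass

# One (row, col) centroid per labelled region; labels run from 1 to no_feats.
centers = center_of_mass(my_binary_flower, labeled_array, range(1, no_feats + 1))
print(centers)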

Overplot SunPy HEK polygon mask on custom numpy array instead of Sunpy Map

I'm working on sunspot detection and I'm trying to build ground truth masks using sunpy.net.hek client to download solar events from the knowledge base.
I followed this tutorial.
My problem is that I'm not able to get the polygon pixel coordinates after the rotation. That is:
ch_boundary = SkyCoord([(float(v[0]), float(v[1])) * u.arcsec for v in p3],
                       obstime=ch_date,
                       frame=frames.Helioprojective)
rotated_ch_boundary = solar_rotate_coordinate(ch_boundary, aia_map.date)
Here p3 holds the original coordinates of the event (they have to be rotated because your picture might not have the same timing as the event on HEK). rotated_ch_boundary is an Astropy SkyCoord, but I cannot figure out how to get the pixel coordinates relative to the image from it.
Then in the tutorial it just plots the coordinates using Sunpy Map and matplotlib:
aia_map.plot(axes=ax)
ax.plot_coord(rotated_ch_boundary, color='c')
I cannot do that, because I want to draw the polygon (filled) on a NumPy array and save it.
I also tried to build a custom Sunpy map and use the same function to plot:
import sunpy.map
from sunpy.net.helioviewer import HelioviewerClient

hv = HelioviewerClient()
filepath = hv.download_jp2('2017/07/10 10:00:00', observatory='SDO',
                           instrument='HMI', detector='HMI', measurement='continuum')
hmi = sunpy.map.Map(filepath)
# QUERY AND ROTATION CODE HERE...
hmi.plot(axes=ax)
ax.plot_coord(rotated_ch_boundary, color='c')
but it doesn't even show the polygon on the plot; I don't know if it's because of the different resolution or something else.
Do you have any idea on how I can plot the polygon on a custom image and save it in order to use it later?
My purpose is to create a black image with a white polygon highlighted. The polygon should be in the exact same position as the sunspot in the corresponding image, let's say an SDO HMI intensitygram of the same day I downloaded from helioviewer.
Solution:
aia_map.world_to_pixel(rotated_ch_boundary)
or
rotated_ch_boundary.to_pixel(aia_map.wcs)
Thanks to fraserwatson for this post
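To go from those pixel coordinates to the black image with a white filled polygon described in the question, one possible sketch rasterises the polygon with skimage.draw.polygon (assuming aia_map and rotated_ch_boundary from above; using scikit-image here is an assumption, cv2.fillPoly would work just as well):
import numpy as np
from skimage.draw import polygon

# to_pixel returns the (x, y) pixel coordinates of the polygon vertices.
px, py = rotated_ch_boundary.to_pixel(aia_map.wcs)

# Black image with the event polygon filled in white.
mask = np.zeros(aia_map.data.shape, dtype=np.uint8)
rr, cc = polygon(py, px, shape=mask.shape)   # rows correspond to y, columns to x
mask[rr, cc] = 255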

Python - Get coordinates of important value of 2D array

I would like to determine an angle from an image (2D array).
I can get the coordinates of the point whose intensity is maximum with unravel_index and argmax, but I would like to know how to get another point whose intensity is high in order to calculate my angle.
I have to automate this because I have a great number of images to post-process.
So for the first coordinate, I can do this:
import numpy as np
from numpy import unravel_index
t = unravel_index(eyy.argmax(), eyy.shape)
And I need another coordinate in order to calculate my angle:
t2 = ....
theta = np.arctan2(t[0]-t2[0],t[1]-t2[1])
What you could try is to look into the Hough Transform (Wikipedia - Hough Transform). The Hough Transform is a tool developed for finding lines and their orientation in images.
There is a Python implementation of the Hough Transform over at Rosetta Code.
I'm not sure if the lines in your data are distinct enough for the Hough Transform to yield good results but I hope it helps.
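If you want to try it without writing the transform yourself, scikit-image ships one. A minimal sketch, assuming eyy is your 2D array and that thresholding it at its mean roughly isolates the bright line (that threshold is an assumption):
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

# Assumed binarisation of the 2D array "eyy" from the question.
binary = eyy > eyy.mean()

h, angles, dists = hough_line(binary)
_, best_angles, _ = hough_line_peaks(h, angles, dists)
theta = best_angles[0]   # orientation (in radians) of the strongest detected line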
You can put your array in a masked array, find the pixel with the maximum intensity, then mask it, then find the next pixel with the maximum intensity.
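A minimal sketch of that masked-array approach, reusing the question's eyy array and arctan2 formula:
import numpy as np

masked = np.ma.masked_array(eyy, mask=np.zeros(eyy.shape, dtype=bool))

# First maximum.
t = np.unravel_index(masked.argmax(), masked.shape)

# Mask it out, then take the next-brightest pixel.
masked.mask[t] = True
t2 = np.unravel_index(masked.argmax(), masked.shape)

theta = np.arctan2(t[0] - t2[0], t[1] - t2[1])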
