Change the origin of a 3D mesh vtk - python

I'm working with 3D meshes in vtk. I want to apply the scale transformation filter, but it scales the mesh with respect to the mesh's origin. I would like to apply the same transformation using a different fixed point as the reference.
The idea is to change the origin of the mesh to that point and then apply the filter. Is this possible in vtk or with other libraries? Is there another method that works better?
I attach here two images that explain the problem.
Thank you in advance!
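A common way to do this in VTK, if I understand the goal correctly, is to compose the scaling with translations so it happens about your chosen fixed point instead of the origin. Here is a minimal sketch; the file name, the fixed point and the scale factors are placeholders, and it assumes an STL mesh read with vtkSTLReader:

```python
import vtk

# Placeholders: fixed point to scale about, and scale factors.
px, py, pz = 10.0, 5.0, 0.0
sx, sy, sz = 2.0, 2.0, 2.0

reader = vtk.vtkSTLReader()          # swap in whatever reader matches your mesh
reader.SetFileName("mesh.stl")       # hypothetical file name

transform = vtk.vtkTransform()
# In the default PreMultiply mode these calls compose so that, applied to a
# point, the mesh is first shifted so the fixed point sits at the origin,
# then scaled, then shifted back.
transform.Translate(px, py, pz)
transform.Scale(sx, sy, sz)
transform.Translate(-px, -py, -pz)

tf = vtk.vtkTransformPolyDataFilter()
tf.SetInputConnection(reader.GetOutputPort())
tf.SetTransform(transform)
tf.Update()

scaled_mesh = tf.GetOutput()         # vtkPolyData scaled about (px, py, pz)
```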

Related

Triangle mesh generation of 3D objects using Python

Hey everyone, I'm pretty new to graphics programming and I want some guidance or working samples. I want to make 3D objects, but for starters I want to make a 3D cube as a triangle mesh from given coordinates via Python, and export the mesh to an .obj file so I can load it and view it in MeshLab.
I would really love some guidance on this, thanks.
If you are comfortable with MeshLab, you might find pymeshlab useful:
https://pymeshlab.readthedocs.io
Here is a simple tutorial on how to create a mesh from just arrays of coordinates and face indices:
https://pymeshlab.readthedocs.io/en/latest/tutorials/import_mesh_from_arrays.html
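Following that tutorial, a cube built from vertex and face arrays could look roughly like this; the arrays and the output file name are just illustrative:

```python
import numpy as np
import pymeshlab

# Unit cube: 8 vertices and 12 triangles (two per face).
verts = np.array([
    [0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
    [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1],
], dtype=np.float64)

faces = np.array([
    [0, 2, 1], [0, 3, 2],   # bottom
    [4, 5, 6], [4, 6, 7],   # top
    [0, 1, 5], [0, 5, 4],   # front
    [1, 2, 6], [1, 6, 5],   # right
    [2, 3, 7], [2, 7, 6],   # back
    [3, 0, 4], [3, 4, 7],   # left
], dtype=np.int32)

mesh = pymeshlab.Mesh(vertex_matrix=verts, face_matrix=faces)

ms = pymeshlab.MeshSet()
ms.add_mesh(mesh, "cube")
ms.save_current_mesh("cube.obj")   # open this file in MeshLab to inspect it
```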

How to analyse/calculate circumference of human body parts from point cloud or 3d objects?

I am using Win10, Python and C#. I want to calculate the circumference of human body parts (belly, biceps, etc.) using a point cloud or 3D scans such as .stl, .obj, .ply. I can get the point cloud of a human body with a Kinect v2, and I have scanned 3D human bodies in .stl, .obj and .ply formats.
I need some ideas and information about this. I don't know how to analyse the data I have or how to calculate what I want.
Here I found an example of what I am trying to do, but it doesn't need to be as robust as that; it's for a school homework. Maybe you can give me some ideas about how to achieve my goal. Thank you for your help.
https://www.youtube.com/watch?time_continue=48&v=jOvaZGloNRo
I get the 3D scanned object with the Kinect v2 and use PCL to convert it into a point cloud.
I don't know about using PCL with Python or C#. In general you are looking at the following steps:
Filtering the points to the interested region
Segmenting the shape
Extracting the parameters
If you're interested in only Python, then OpenCV might be the best option. You can also develop the core logic in C++ and wrap it for Python or C#. C++ also has some nice UI libraries (Qt, nanogui). Please see the following details for achieving the objective with PCL.
Filtering
CropBox or PassThrough can be used for this. It will give results similar to those shown in the image, assuming the frame has been chosen properly. If not, the point cloud can easily be transformed first.
Segmenting the shape
Assuming you want an average circumference, you might need to experiment with the Circle 2D, Circle 3D and Cylinder models. More details regarding usage and the API are here. The method chosen can be a simple SAC (Sample Consensus) approach like RANSAC (Random SAC), or a more advanced method like LMEDS (Least Median of Squares) or MLESAC (Max Likelihood Estimation SAC).
Extracting the parameters
All models have a radius field, which can be used to find the circumference using the standard formula (2*pi*r).
Disclaimer: note that the fitted shape is a circle, not an ellipse, and the cylinders are right cylinders. So if the measured object (arm or bicep) is not circular, the computed value might not be close to the ground truth in extreme cases.
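PCL's Python bindings can be awkward to set up, so as a rough illustration of the same three steps (pass-through filtering, shape fitting, parameter extraction) in plain NumPy, here is a sketch that slices the cloud at a given height and fits a circle by least squares instead of RANSAC; the array layout, slice height and file name are assumptions:

```python
import numpy as np

def circumference_at_height(points, z_center, z_thickness=0.02):
    """Rough circumference estimate from an (N, 3) point cloud."""
    # 1. Filtering (PassThrough-style): keep a thin slab around z_center.
    mask = np.abs(points[:, 2] - z_center) < z_thickness / 2.0
    xy = points[mask, :2]
    if len(xy) < 3:
        raise ValueError("not enough points in the slice")

    # 2. Shape fitting (a simple algebraic circle fit instead of RANSAC):
    #    solve x^2 + y^2 = 2*a*x + 2*b*y + c, so that r^2 = c + a^2 + b^2.
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)

    # 3. Parameter extraction: circumference of the fitted circle.
    return 2.0 * np.pi * r

# Hypothetical usage: waist slice at z = 1.0 m of a scan stored as "x y z" rows.
# points = np.loadtxt("body_scan.xyz")
# print(circumference_at_height(points, z_center=1.0))
```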

How to generate bounding box data from heatmap data of an image?

I have a group of images and some separate heatmap data which (imperfectly) explains where subject of the image is. The heatmap data is in a numpy array with shape (224,224,3). I would like to generate bounding box data from this heatmap data.
The heatmaps are not always perfect, so I'm wondering if anyone can think of an intelligent way to do this.
Here are some examples of what happens when I apply the heatmap data to the image:
I found a solution to this in MATLAB, but I have no idea how to read that code! I am a Python programmer, unfortunately.
https://github.com/metalbubble/CAM/tree/master/bboxgenerator
Anyone have any ideas about how to approach something like this?
I am not quite sure what the heatmap data of your project looks like exactly, but it seems to me that you could use something like Selective Search. You could also have a look at this interesting paper. Maybe you can use this approach on your dataset.
I'm attempting a similar method for automating the creation of bounding boxes (since, let's face it, creating boxes manually takes a long time).
This other Stack Overflow post covers a similar idea:
EDIT: (I originally put a link to the current post 🤦 - here is the post I was actually referring to)
Generating bounding boxes from heatmap data
The problem I recognize here is that heatmaps can be fragmented and a bit arbitrary. The solution that comes to mind initially is setting a threshold on the heatmap: in the case of the example heatmap images, the bounding box would cover all regions that are yellow/orange/red rather than green/blue.
It depends on how many bounding boxes you need. You can set a threshold and have multiple bounding boxes for each of the highly activated regions, or try connecting the regions (by a morphological operation maybe) and calculate a single bounding box for connected activated pixels.
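To make the thresholding idea from both answers concrete, here is a small sketch using SciPy; the relative threshold, the minimum area and the closing kernel are arbitrary values you would tune to your data:

```python
import numpy as np
from scipy import ndimage

def heatmap_to_boxes(heatmap, rel_threshold=0.5, min_area=50):
    """Turn a heatmap into a list of bounding boxes (x, y, w, h)."""
    # Collapse channels if the heatmap is (H, W, 3).
    if heatmap.ndim == 3:
        heatmap = heatmap.mean(axis=2)

    # Threshold relative to the strongest activation.
    mask = heatmap >= rel_threshold * heatmap.max()

    # Morphological closing to connect fragmented blobs.
    mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))

    # One box per connected region that is large enough.
    labels, _ = ndimage.label(mask)
    boxes = []
    for sl in ndimage.find_objects(labels):
        ys, xs = sl
        h, w = ys.stop - ys.start, xs.stop - xs.start
        if h * w >= min_area:
            boxes.append((xs.start, ys.start, w, h))
    return boxes

# Hypothetical usage with a (224, 224, 3) heatmap array called `heat`:
# boxes = heatmap_to_boxes(heat, rel_threshold=0.5)
```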

image alignment with set points - python

I've been looking for a way to align at least two images in Python.
There are some very nice scripts out there, but I'd like a script that accepts the coordinates of a point in one image and uses these coordinates to align the second (or further) images.
I've been reading about phase correlation algorithms, but I'd appreciate some help on how to use the coordinates of a user-defined point.
Thank you!
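One way to use user-defined corresponding points, as a sketch rather than a full answer, is to estimate a similarity transform from the clicked points with scikit-image and warp the second image into the first image's frame; the file names and point coordinates below are placeholders:

```python
import numpy as np
from skimage import io, transform

# Corresponding points in (x, y) = (column, row) order; placeholders for
# whatever points the user clicked in each image.
pts_fixed  = np.array([[30, 40], [200, 45], [120, 210]], dtype=float)
pts_moving = np.array([[35, 50], [205, 60], [118, 225]], dtype=float)

fixed  = io.imread("fixed.png", as_gray=True)    # reference image
moving = io.imread("moving.png", as_gray=True)   # image to align

# Estimate a similarity transform (rotation + uniform scale + translation)
# that maps fixed-image coordinates to moving-image coordinates.
tform = transform.estimate_transform("similarity", pts_fixed, pts_moving)

# warp() wants a map from output (fixed frame) to input (moving image),
# which is exactly what tform provides here.
aligned = transform.warp(moving, inverse_map=tform,
                         output_shape=fixed.shape)

io.imsave("aligned.png", (aligned * 255).astype(np.uint8))
```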

Binary images into polygons

How do I transform a binary image containing a single mask (whose values are one) into a polygon in Python? My goal is to calculate the inner angles of this mask and the orientation of the contour lines. I assume I have to transform the mask into a polygon before I can use other libraries that do these calculations for me. I'd rather not use OpenCV for this transformation, since I have faced problems installing it in a Windows 64/Spyder environment. Thanks for any help!
While you can surely write your own code, I suggest having a look at libraries like AutoTrace or potrace. They should already do most of the work. Just run them via the command line and read the resulting vector output.
If you want to do it yourself, try to find the rough outline and then apply an algorithm to smooth the outline.
Related:
Simplified (or smooth) polygons that contain the original detailed polygon
How to intelligently degrade or smooth GIS data (simplifying polygons)?
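If installing AutoTrace/potrace is a hassle, scikit-image can do the trace-and-simplify step without OpenCV; here is a minimal sketch, assuming the mask is a 2D NumPy array of zeros and ones (the tolerance value is arbitrary):

```python
import numpy as np
from skimage import measure

def mask_to_polygon(mask, tolerance=1.0):
    """Convert a binary mask (values 0/1) into a simplified polygon."""
    # Trace the outline of the mask at the 0.5 level.
    contours = measure.find_contours(mask.astype(float), 0.5)
    if not contours:
        raise ValueError("no contour found in the mask")

    # Keep the longest contour, i.e. the outer boundary of the single mask.
    outline = max(contours, key=len)            # (row, col) coordinates

    # Reduce the dense outline to a coarser polygon whose vertices can be
    # used to compute inner angles and edge orientations.
    return measure.approximate_polygon(outline, tolerance=tolerance)

# Hypothetical usage with a mask loaded as a NumPy array:
# poly = mask_to_polygon(mask, tolerance=2.0)
# Edge orientations, e.g. the angle of each polygon edge in degrees:
# d = np.diff(poly, axis=0)
# angles = np.degrees(np.arctan2(d[:, 0], d[:, 1]))
```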
