How to draw polygons using an Abaqus Python script

I have the (X, Y) coordinates of several polygons, as shown in the attached pores image. How can I import them into Abaqus, and how can I create a surface that subtracts the internal shapes (polygon 1, polygon 2, ...) from the external shape, the rectangular frame? In other words: rectangle - poly1 - poly2 - ...

Try recording a macro: manually create the model using Abaqus CAE. Assuming you're doing a 2-D model, your geometry will probably be:
1. A solid rectangle.
2a. A planar cut on the rectangle, created through a sketch.
2b. Alternatively, you can use the "Create wire" feature to input your shape coordinates (right-click to read the coordinates from a text file), with the merge scheme "Separate wire". Then use the "Partition" tool to partition the holes, and "Geometry Edit" to remove the undesired faces.
Anyway, the macro recorder will help you prototype the code you need to generate. Once you have that, it's a matter of typical Python file open/read operations to read the input coordinates and of developing the program that generates the model and mesh. Or you can excruciatingly build your model manually using the method outlined above. There are a lot of ways to skin this cat.
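If you go the scripted route, the sketch below shows roughly what the recorded macro can be turned into. It is only a rough, untested outline: the model name, file names and frame dimensions are placeholders, the coordinate files are assumed to hold one "x y" pair per line, and it relies on closed inner loops in the base sketch becoming holes when the planar shell is created (instead of separate cut features). Use the macro recorder output to confirm the exact calls for your Abaqus version.
from abaqus import *
from abaqusConstants import *

def read_polygon(path):
    # Read one polygon as a list of (x, y) tuples from a whitespace-separated text file.
    with open(path) as f:
        return [tuple(float(v) for v in line.split()) for line in f if line.strip()]

model = mdb.models['Model-1']
sketch = model.ConstrainedSketch(name='porous-plate', sheetSize=200.0)

# Outer frame (placeholder dimensions).
sketch.rectangle(point1=(0.0, 0.0), point2=(100.0, 50.0))

# Draw each pore polygon as a closed inner loop; inner loops become holes
# when the planar shell is created below.
for poly_file in ('poly1.txt', 'poly2.txt'):
    pts = read_polygon(poly_file)
    for a, b in zip(pts, pts[1:] + pts[:1]):
        sketch.Line(point1=a, point2=b)

part = model.Part(name='plate', dimensionality=TWO_D_PLANAR, type=DEFORMABLE_BODY)
part.BaseShell(sketch=sketch)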

Related

Load Mesh from a glb file using trimesh

I am new to 3D geometry. I am trying to extract a mesh from a glb file using python library trimesh. I couldn't figure out the proper way of doing that.
My requirement is that I need a 3D mesh (as an adjacency matrix) of an object so that I can apply non-Euclidean convolutional operators to it.
Any suggestions on what I should be looking at?
You can use trimesh.load to load your glTF file. Note that the return type depends on the file type of your model file. For glTF files it will return an instance of trimesh.Scene. The scene has all sorts of attributes, like the camera and lights, but also geometries, because glTF files can contain more than just model data. Each geometry is an instance of trimesh.Trimesh, which is the base class for geometries and has an edges_sparse property that represents the adjacency matrix of the model.
To put it all together:
import trimesh

scene = trimesh.load("some_file.glb")
geometries = list(scene.geometry.values())
geometry = geometries[0]
adjacency_matrix = geometry.edges_sparse
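Since the return type of trimesh.load depends on the file type, a small guard (a sketch, not needed for .glb files) makes the snippet work for single-mesh formats as well:
loaded = trimesh.load("some_file.glb")
# glTF/GLB files load as a Scene; some other formats load as a single Trimesh.
if isinstance(loaded, trimesh.Scene):
    meshes = list(loaded.geometry.values())
else:
    meshes = [loaded]
# Each entry is a scipy sparse matrix; .toarray() gives a dense adjacency matrix if needed.
adjacency_matrices = [m.edges_sparse for m in meshes]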
It's a bit tedious to figure this out using the documentation. I tend to look at the source code or turn on the debugger in my IDE:
trimesh.Scene: https://github.com/mikedh/trimesh/blob/master/trimesh/scene/scene.py
trimesh.Trimesh: https://github.com/mikedh/trimesh/blob/master/trimesh/base.py

How do I label the point cloud if I have the 3D boxes of the objects annotated?

I am trying to annotate my point cloud data. I found a number of tools, but could only access the demo version of the 3D point cloud tool by Supervisely. Once the annotation is complete by drawing 3D boxes around the objects, the output annotation is a JSON file containing the class, and the global position, orientation and dimensions of each box. How do I use this file to assign labels to the points inside these 3D boxes? I want the output in either a .pcd or a .bin file.
The output of the JSON file is as follows:
[{"id":36698,"name":"vel_1558647824006782.pcd","annotations":[{"className":"Car","geometryType":"cuboid","classId":957,"geometry":{"position":{"x":9.539855967959713,"y":18.342023271012913,"z":0.43944128482454614},"rotation":{"x":0,"y":0,"z":0},"dimensions":{"x":5.691547052392309,"y":1.6625674002633986,"z":1.757779283656416}}},{"className":"ground","geometryType":"cuboid","classId":958,"geometry":{"position":{"x":28.890481890779242,"y":8.463823613489927,"z":-1.0314986175132965},"rotation":{"x":0,"y":0,"z":0},"dimensions":{"x":96.34273328620523,"y":18.714553504372063,"z":1.0544185995045456}}}]}]
I thought of using PCL's CropBox filter, but is there any other way around it? It would also help if someone could point me to other point cloud annotation tools that might best help me solve the problem.
I was able to write a C++ program that reads the JSON file and uses PCL's CropBox filter to solve the problem. The methodology is as follows:
1) Read the JSON file using nlohmann/json:
#include <nlohmann/json.hpp>  // also requires <fstream>
using json = nlohmann::json;
std::ifstream ifs("somepath.json");
json j = json::parse(ifs);
2) Extract the "position" (centroid of the cuboid), the orientation ("rotation"), the "dimensions" of the cuboid and the "className" for each box. The code below shows a way to extract the position data into a std::vector (i iterates over the boxes):
std::vector<float> position{
    j[0]["annotations"][i]["geometry"]["position"]["x"].get<float>(),
    j[0]["annotations"][i]["geometry"]["position"]["y"].get<float>(),
    j[0]["annotations"][i]["geometry"]["position"]["z"].get<float>()};
3) Get the maximum and minimum (x, y, z) coordinates of the vertices of the box; these go as input to the CropBox filter. (Note: the min/max point does not have to coincide with a single vertex: Xmin is the minimum x over all 8 vertices, Ymin is the minimum y over all 8 vertices, and so on.)
4) Use the CropBox filter from PCL. This gives you the indices of all the points inside the given box. Examples can be found here and here.
5) Depending upon the class of the box, assign different colors to the points at those indices of the point cloud.
6) Save the point cloud.
This is a generalized way of labeling every point of a point cloud from an input JSON file that contains the position, dimensions and orientation of the 3D boxes.
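For prototyping, the same point-in-box test can also be done without PCL. The following is a rough numpy sketch rather than the C++/CropBox route above: it assumes a KITTI-style .bin cloud (float32 x, y, z, intensity), zero rotation as in the JSON above, and a placeholder class-to-label mapping and file names.
import json
import numpy as np

# Placeholder input path; x, y, z, intensity per point.
points = np.fromfile("vel_1558647824006782.bin", dtype=np.float32).reshape(-1, 4)[:, :3]
labels = np.zeros(len(points), dtype=np.int32)  # 0 = unlabeled

with open("somepath.json") as f:
    annotations = json.load(f)[0]["annotations"]

class_ids = {"Car": 1, "ground": 2}  # placeholder label mapping

for box in annotations:
    g = box["geometry"]
    center = np.array([g["position"][k] for k in "xyz"])
    dims = np.array([g["dimensions"][k] for k in "xyz"])
    # Axis-aligned test: valid here because rotation is zero; otherwise rotate
    # the points into the box frame first.
    lo, hi = center - dims / 2.0, center + dims / 2.0
    inside = np.all((points >= lo) & (points <= hi), axis=1)
    labels[inside] = class_ids.get(box["className"], 0)

# Write x, y, z, label as float32 to a .bin file (placeholder output path).
np.concatenate([points, labels[:, None].astype(np.float32)], axis=1).tofile("labeled.bin")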

How to analyse/calculate the circumference of human body parts from a point cloud or 3D objects?

I am using Windows 10, Python and C#. I want to calculate the circumference of human body parts (belly, biceps, etc.) from a point cloud or 3D scans such as .stl, .obj and .ply files. I can currently get a point cloud of the human body with a Kinect v2, and I also have the scanned 3D body in .stl, .obj and .ply formats.
I need some ideas and information on how to analyse the data I have and how to calculate what I want.
Here is an example of what I am trying to do. It doesn't need to be as stable as that; it's for a school assignment. Maybe you can give me some ideas about how to achieve my goal. Thank you for your help.
https://www.youtube.com/watch?time_continue=48&v=jOvaZGloNRo
I get the 3D scanned object with the Kinect v2 and use PCL to convert it into a point cloud.
I don't know about using PCL with Python or C#. In general you are looking at the following steps:
Filtering the points to the interested region
Segmenting the shape
Extracting the parameters
If you're only interested in Python, then OpenCV might be the best option. You can also develop the core logic in C++ and wrap it for Python or C#; C++ also has some nice UI libraries (Qt, nanogui). The following details how to achieve the objective with PCL.
Filtering
CropBox or PassThrough can be used for this. It will give results similar to those shown in the image, assuming the frame has been chosen properly; if not, the point cloud can easily be transformed first.
Segmenting the shape
Assuming you want an average circumference, you might need to experiment with the Circle 2D, Circle 3D and Cylinder models. More details regarding usage and the API are here. The fitting method can be a simple SAC (Sample Consensus) method like RANSAC (Random SAC), or a more advanced one like LMedS (Least Median of Squares) or MLESAC (Maximum Likelihood Estimation SAC).
Extracting the parameters
All of these models have a radius field, which gives the circumference via the standard formula 2*pi*r.
Disclaimer: note that these models assume a circular cross-section (not an ellipse) and a right cylinder, so if the measured part (arm or bicep) is far from circular, the computed value may not be close to the ground truth in extreme cases.
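As a rough illustration of the filter-fit-measure pipeline without PCL, here is a numpy sketch. Assumptions: the cloud is an N x 3 array in metres, the limb is roughly aligned with the z axis, and a simple algebraic least-squares circle fit stands in for the SAC models mentioned above; the file name and slice thickness are placeholders.
import numpy as np

def fit_circle_2d(xy):
    # Algebraic least-squares circle fit: solve a*x + b*y + c = x^2 + y^2,
    # then center = (a/2, b/2) and radius = sqrt(c + center.center).
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    coeffs = np.linalg.lstsq(A, b, rcond=None)[0]
    center = coeffs[:2] / 2.0
    radius = np.sqrt(coeffs[2] + center @ center)
    return center, radius

points = np.loadtxt("arm.xyz")                       # placeholder: N x 3 cloud of one limb
z0 = points[:, 2].mean()                             # slice height (placeholder choice)
band = points[np.abs(points[:, 2] - z0) < 0.005]     # 5 mm slice, PassThrough-style filter
_, r = fit_circle_2d(band[:, :2])
print("circumference ~", 2.0 * np.pi * r)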

Given position and orientation, how do I transform coordinates from one frame to another? (Using a Baxter robot, ROS and Python)

What I essentially want to do is transform a set of coordinates from one frame to another. My camera is mounted on my robot's hand (whose position and orientation I know), and I'm viewing a certain object and reading its coordinates in the camera's frame.
How do I convert those coordinates to my base frame? I know I can first undo the orientation using the inverse rotation matrix and then apply some kind of translation, but how do I obtain that matrix? And once the orientation is corrected, how do I do the translation?
Note: I think this is better suited as a comment but I lack the reputation points.
Assuming both your frames are available in ROS, i.e. if you run (as specified here):
rosrun tf tf_echo /source_frame /target_frame
you should see the translation and rotation between both frames. Then you could use lookupTransform to obtain this information inside your code (see: TF tutorial).
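A minimal rospy sketch of that approach (the frame names and coordinates are placeholders; on a Baxter the source frame would be your hand camera's frame):
import rospy
import tf
from geometry_msgs.msg import PointStamped

rospy.init_node("camera_to_base")
listener = tf.TransformListener()

# Point as measured in the camera frame (placeholder frame name and coordinates).
pt = PointStamped()
pt.header.frame_id = "camera_frame"
pt.header.stamp = rospy.Time(0)  # time 0 means "latest available transform"
pt.point.x, pt.point.y, pt.point.z = 0.1, 0.0, 0.5

# Wait until the transform is available, then express the point in the base frame.
listener.waitForTransform("base", "camera_frame", rospy.Time(0), rospy.Duration(4.0))
pt_in_base = listener.transformPoint("base", pt)
print(pt_in_base.point)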
Hope this helps.

Extracting data from a BVH file and applying it to my 3D model?

I have been trying to get motion data from a BVH file and apply it to my 3D models. I studied the BVH file format, parsed its data, and wrote the data to a text file; let's give that text file the extension .motion. Everything is fine, but a problem arises when I try to apply the .motion data to my 3D model's joints.
The rest pose of the BVH file differs from my model's: my model's rest pose is a T-pose, but the BVH file's is a standing pose, so the whole animation sequence applied to my 3D model also differs in its rotation.
I have also verified that the rotation data is applied to the correct joints.
So everything is fine except the rest pose. Is there a way to overcome this rest-pose difference? Can I use matrix multiplication to solve this problem?
Answering my own question: I wrote a Python script that writes the global rotation of the bones for each frame, then applied those rotations to the other armature (with a different rest pose), and now the animation can be copied easily, independent of the rest pose.
pose.bones[bone_index].matrix.decompose()[1]  # global rotation of the bone as a Quaternion
If we apply the global rotations of one armature to another, the rest poses of the two armatures do not need to be the same. But if we apply the local (relative) rotations of the bones to the other armature, then both armatures must have the same rest pose.
Steps:
1) Import the BVH file in Blender.
2) Run a Python script that writes the global rotation (as a quaternion) of each bone per frame (see the sketch below).
3) Apply the quaternions directly to your 3D model's bones.
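A minimal Blender sketch of step 2; the armature object name and output path are placeholders:
import bpy

arm = bpy.data.objects["Armature"]   # armature imported from the BVH file (placeholder name)
scene = bpy.context.scene

with open("/tmp/out.motion", "w") as out:
    for frame in range(scene.frame_start, scene.frame_end + 1):
        scene.frame_set(frame)
        for bone in arm.pose.bones:
            # bone.matrix is the pose matrix in armature space; decompose()
            # returns (location, rotation quaternion, scale).
            quat = bone.matrix.decompose()[1]
            out.write("%d %s %.6f %.6f %.6f %.6f\n" % (frame, bone.name, quat.w, quat.x, quat.y, quat.z))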
