I have been trying to extract motion data from a BVH file and apply it to my 3D models. I studied the BVH file format, parsed its data, and wrote the data to a text file; let's give this text file a new extension (.motion). Everything works, but a problem arises when I try to apply the .motion data to my 3D model's joints.
The rest poses differ: my model's rest pose is a T-pose, but the BVH file's rest pose is a standing pose. As a result, the whole animation sequence applied to my 3D model also differs in its rotations.
I have also verified that the rotation data is applied to the correct joints.
So everything is fine except the rest pose. Is there any way to overcome this rest-pose difference? Can I use matrix multiplication to solve this problem?
Answering my own question: I wrote a Python script that writes the global rotation of each bone for every frame, then applied those rotations to the other armature (with a different rest pose). Now the animation can easily be copied regardless of the rest pose.
pose.bones[boneIndex].matrix.decompose()[1]  # global rotation of the bone as a Quaternion
If we apply the global rotations of one armature's bones to another armature, the two rest poses do not need to match. But if we apply the local (relative) rotations of the bones to the other armature, then both armatures must have the same rest pose.
Steps:
1) Import the BVH file into Blender.
2) Run a Python script that writes the global rotation (as a quaternion) of each bone for every frame.
3) Apply those quaternions directly to your 3D model's bones.
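A minimal sketch of such a script, assuming Blender 2.8+, an armature object named "Armature", and a JSON-style .motion output file (all of these are assumptions):

import bpy
import json

arm = bpy.data.objects["Armature"]        # the BVH-imported armature (assumed name)
scene = bpy.context.scene

rotations = {}
for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)
    rotations[frame] = {}
    for pbone in arm.pose.bones:
        # pbone.matrix is in armature space; decompose() returns (location, rotation, scale)
        # (premultiply by arm.matrix_world first if you need true world-space rotations)
        rot = pbone.matrix.decompose()[1]
        rotations[frame][pbone.name] = [rot.w, rot.x, rot.y, rot.z]

with open(bpy.path.abspath("//global_rotations.motion"), "w") as f:
    json.dump(rotations, f, indent=2)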
I have a time-series point cloud which I wish to animate in OpenLayers. The caveat is that the data is not georeferenced (i.e. my x and y run from 0-10 km and 0-5 km respectively), and the user will need to define the starting point (the lon/lat where x = y = 0) as well as an angle (heading) that pivots around that starting point.
Thus far my solution has been to:
1) Rasterise the point cloud frame by frame into a folder of PNG files (using matplotlib's savefig).
2) Rotate each PNG by the provided angle.
3) Expand/contract the bounds based on the new dimensions (this step and step #2 are done using PIL's rotate).
4) Mathematically calculate the new lon/lat of the point where x = y = 0 in the new image (geopy).
5) Create a REST API so that I can serve each image frame as an image source in OpenLayers (Flask).
Here are my questions:
Is it possible to rotate an image layer in OpenLayers? (This would remove the need for me to rotate it server-side.)
Step #4 seems rather tedious, as I need to calculate the shift in x and y and add the difference to the original lon/lat to get a new starting point. Is there a library or function that is normally used for this? (A rough sketch of my current geopy calculation follows below.)
Otherwise, is there a better way to achieve what I want? (Did I miss a simpler solution?)
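For reference, the step #4 calculation I do now is roughly the following geopy sketch (the start point, shift distances, and the bearing convention for the rotated axes are illustrative assumptions):

from geopy.distance import geodesic

origin = (1.3521, 103.8198)     # hypothetical lat/lon of the point where x = y = 0
heading = 30.0                  # user-supplied angle, assumed to be degrees clockwise from north
dx_km, dy_km = 1.2, 0.8         # illustrative shift of the image origin after rotation, in km

# move along the rotated y-axis (heading), then along the rotated x-axis (heading + 90)
p = geodesic(kilometers=dy_km).destination(origin, bearing=heading)
p = geodesic(kilometers=dx_km).destination(p, bearing=(heading + 90) % 360)
new_origin = (p.latitude, p.longitude)   # new lat/lon of the image origin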
Some other methods I have tried:
Creating an animated GIF instead. (I did not like this solution, since it adds complexity if the user needs to pause or jump to a specific time.)
Rasterising into a folder of GeoTIFF images instead of PNGs. (GeoTIFFs are significantly larger than PNGs and achieve essentially the same functionality, with the added step of having to set up a WMS server or GeoServer.)
You can try to use a GeoImage layer to place an image on a map with a rotation.
See https://viglino.github.io/ol-ext/examples/layer/map.geoimage.html
I am currently doing a computer vision task and got stuck on the following problem. I need ground-truth values for my sequence. I have a nice sequence in which the camera moves through my scene and captures RGB frames. Now I need a corresponding frame for every RGB frame, but instead of RGB values it should store distances. I know that you can get the total depth (the Euclidean distance from the camera to the real-world object) by connecting the 'Depth' output of the Render Layers node to a File Output node in the Compositing workspace, using the EXR file format. But I just need the Z-component of the distance to the camera, and I don't want to convert it afterwards with the camera parameters (I already did that, but I need a cleaner workflow).
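For reference, the compositor setup described above can be scripted roughly like this (it produces the ray distance I mentioned, not the camera-space Z I actually need; the output path is an assumption):

import bpy

scene = bpy.context.scene
scene.view_layers[0].use_pass_z = True          # make sure the depth pass is rendered
scene.use_nodes = True
tree = scene.node_tree

rl = tree.nodes.new("CompositorNodeRLayers")
out = tree.nodes.new("CompositorNodeOutputFile")
out.base_path = "//depth/"                      # hypothetical output folder next to the .blend
out.format.file_format = 'OPEN_EXR'
out.format.color_depth = '32'

tree.links.new(rl.outputs['Depth'], out.inputs[0])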
I stumbled upon the function bpy_extras.view3d_utils.region_2d_to_location_3d, but could find almost nothing about how to use it properly; I don't know what I should pass as input.
Does anyone have a solution, or has anyone already used this function and can explain how to use it in the default Blender setup (just a Cube, Camera, and Light), and whether it does what I expect (giving me x, y, z so that I can strip the unnecessary components)?
I already tried the world_to_camera_view function, but it only works on object data such as vertices, not on whole surfaces, so it cannot produce a dense map.
I also know that you can render the scene with a Cycles material that stores x, y, z in the RGB channels, but that only gives world coordinates, and you have to swap the materials after rendering the real sequence.
I really just need, for every pixel of each frame, the Z-distance to the camera.
I would be really grateful if someone could help me, because I've been trying to do this for days now.
I am using Windows 10, Python, and C#. I want to calculate the circumference of human body parts (belly, biceps, etc.) using a point cloud or 3D scans such as .stl, .obj, or .ply. I can currently capture a point cloud of the human body with a Kinect v2, and I have 3D-scanned human bodies in .stl, .obj, and .ply formats.
I need some ideas and information about this; I don't know how to analyse the data I have or how to calculate what I want.
Here is an example of what I am trying to do, but mine doesn't need to be as polished as that; it's for a school assignment. Maybe you can give me some ideas on how to achieve my goal. Thank you for your help.
https://www.youtube.com/watch?time_continue=48&v=jOvaZGloNRo
I capture the 3D-scanned object with the Kinect v2 and use PCL to convert it into a point cloud.
I don't know about using PCL from Python or C#. In general, you are looking at the following steps:
Filtering the points down to the region of interest
Segmenting the shape
Extracting the parameters
If you're only interested in Python, then OpenCV might be the best option. You can also develop the core logic in C++ and wrap it for Python or C#. C++ also has some nice UI libraries (Qt, nanogui). Please see the following details for achieving the objective with PCL.
Filtering
CropBox or PassThrough can be used for this. It will give results similar to those shown in the image, assuming the reference frame has been chosen properly; if not, the point cloud can easily be transformed.
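As a rough illustration of the same idea outside PCL (plain numpy; the file name, axis, and band limits are assumptions):

import numpy as np

points = np.loadtxt("body.xyz")            # hypothetical N x 3 array of x, y, z points in metres
z_min, z_max = 1.0, 1.1                    # illustrative band around the belly height
band = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]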
Segmenting the shape
Assuming you want an average circumference, you might need to experiment with the Circle2D, Circle3D, and Cylinder models. More details regarding usage and the API are here. The method can be a simple SAC (sample consensus) approach like RANSAC (random SAC), or a more advanced method such as LMedS (least median of squares) or MLESAC (maximum likelihood estimation SAC).
Extracting the parameters
All of these models have a radius field, which can be used to find the circumference with the standard formula (2*pi*r).
Disclaimer: note that these models assume the cross-section is a circle, not an ellipse, and that the cylinders are right cylinders. So if the measured body part (arm or bicep) is not circular, the computed value might not be close to the ground truth in extreme cases.
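If PCL turns out to be hard to call from Python, a simplified alternative to the SAC pipeline above is a plain least-squares circle fit on the filtered slice, followed by 2*pi*r (a numpy sketch, with the same circular-cross-section caveat):

import numpy as np

def circle_circumference(xy):
    # algebraic (Kasa) least-squares circle fit on an N x 2 slice of points
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)     # centre (cx, cy), radius r
    return 2 * np.pi * r

# e.g. on the x, y columns of the band kept in the filtering step:
# print(circle_circumference(band[:, :2]))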
I have the (X, Y) coordinates of polygons, as shown in the attached pores image. How can I import them into Abaqus, and how can I create a surface that subtracts the internal shapes (polygon1, poly2, ...) from the external shape (the frame, a rectangle)? That is, rectangle - poly1 - poly2 - ...
Try recording a macro. Manually create the model using Abaqus CAE. I assume you're building a 2D model, so your geometry will probably be:
1. A solid rectangle.
2a. A planar cut on the rectangle, created through a sketch.
2b. Alternatively, you can use the "create wire" feature to input your shape coordinates (right-click to read coordinates from a text file), with the merge scheme "Separate wire". Then use the "Partition" tool to partition the holes, and "Geometry Edit" to remove the undesired faces.
Anyway, the macro recorder will help you prototype the code you need to generate. Once you have that, you will need typical Python file open/read operations to read the input coordinates, and then you can develop the program that generates the model and mesh. Or you can excruciatingly build your model manually using the method outlined above. There are a lot of ways to skin this chicken.
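A minimal scripting sketch of the base-shell route, pieced together along the lines of what the macro recorder typically produces (the dimensions, polygon, and part name are made up, and I'm assuming the inner closed loops of the base-shell sketch come out as holes; compare against your own recorded macro):

from abaqus import *
from abaqusConstants import *

model = mdb.models['Model-1']

frame = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0), (0.0, 5.0)]      # outer rectangle (illustrative)
polygons = [[(2.0, 2.0), (3.0, 2.5), (2.5, 3.5)]]               # hypothetical internal polygons

s = model.ConstrainedSketch(name='pores', sheetSize=20.0)
s.rectangle(point1=frame[0], point2=frame[2])
for loop in polygons:
    pts = loop + [loop[0]]                                      # close each polygon
    for p1, p2 in zip(pts[:-1], pts[1:]):
        s.Line(point1=p1, point2=p2)

part = model.Part(name='PorousPlate', dimensionality=TWO_D_PLANAR, type=DEFORMABLE_BODY)
part.BaseShell(sketch=s)                                        # inner loops should become holes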
Essentially, what I want to do is transform a set of coordinates from one frame to another. My camera is mounted on my robot's hand (whose position and orientation I know), I'm viewing a certain object, and I'm reading its coordinates in the camera's frame.
How do I convert those coordinates to my base frame? I know I can first undo the orientation using the inverse rotation matrix and then apply some kind of translation matrix, but how do I obtain that matrix? And once the orientation is corrected, how do I do the translation?
Note: I think this would be better suited as a comment, but I lack the reputation points.
Assuming both your frames are available in ROS, i.e. if you run (as specified here):
rosrun tf tf_echo /source_frame /target_frame
you should see the translation and rotation between both frames. Then you could use lookupTransform to obtain this information inside your code (see: TF tutorial).
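For example, a minimal rospy sketch that transforms a point read in the camera frame into the base frame (the frame names are assumptions; adapt them to your TF tree):

import rospy
import tf
from geometry_msgs.msg import PointStamped

rospy.init_node('camera_to_base')
listener = tf.TransformListener()
listener.waitForTransform('base_link', 'camera_frame', rospy.Time(0), rospy.Duration(4.0))

p = PointStamped()
p.header.frame_id = 'camera_frame'
p.header.stamp = rospy.Time(0)
p.point.x, p.point.y, p.point.z = 0.1, 0.2, 0.5     # object coordinates seen by the camera

p_base = listener.transformPoint('base_link', p)    # the same point expressed in the base frame
print(p_base.point)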
Hope this helps.