[I have tried many solutions posted here and there, yet I cannot make this work.]
I have a Python script on the Blender side exporting object rotations like this:
import math
import mathutils

m = obj.matrix_world.to_euler('XYZ')
rot = mathutils.Vector((math.degrees(m.x), math.degrees(m.y), math.degrees(m.z)))
On the Unreal side, still in Python, I set up rotators for the imported objects:
def createRotator(actor_rotation):
    # axis-angle rotators around the unit X, Y and Z axes (angles in degrees)
    rotatorX = unreal.MathLibrary.rotator_from_axis_and_angle(unreal.Vector(1.0, 0.0, 0.0), actor_rotation.x)
    rotatorY = unreal.MathLibrary.rotator_from_axis_and_angle(unreal.Vector(0.0, 1.0, 0.0), actor_rotation.y)
    rotatorZ = unreal.MathLibrary.rotator_from_axis_and_angle(unreal.Vector(0.0, 0.0, 1.0), actor_rotation.z)
    temp = unreal.MathLibrary.compose_rotators(rotatorX, rotatorY)
    rotator = unreal.MathLibrary.compose_rotators(temp, rotatorZ)
    return rotator
Yet the objects are not placed correctly (in terms of rotation) in Unreal.
I tried:
adding a minus sign in front of different rotation members in Blender
flipping the order in which the rotators are composed in Unreal
trying y = z and z = -y
trying z = -z or y = -y
(I could go on, since I have been busting my head on this for a whole week.)
I have been through so many posts; every answer is different, and none of them works.
Can someone more clever than I am end my misery and tell me how to solve this?
thanks
[edit:]
I spawn the actor like this:
pos = unreal.Vector(float(row["posx"]),float(row["posy"]),float(row["posz"]))
rot = unreal.Vector(float(row["rotx"]),float(row["roty"]),float(row["rotz"]))
rotator = createRotator(rot)
staticMeshActor = unreal.EditorLevelLibrary.spawn_actor_from_object(staticMesh, pos, rotator)
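For reference, one way to reason about the conversion: Blender uses a right-handed Z-up frame while Unreal uses a left-handed Z-up frame, and conjugating a rotation by the Y-flip reflection diag(1, -1, 1) converts between the two; for an XYZ Euler this works out to negating the X and Z angles. The sketch below applies that on the Blender side and builds the rotator in a single call on the Unreal side. This is only a sketch of the math, not a tested pipeline; in particular, the mapping of the converted x/y/z onto Unreal's roll/pitch/yaw depends on the rotator's composition order, so treat that mapping as an assumption to verify.

import math

def blender_to_unreal_euler(obj):
    # Conjugating by S = diag(1, -1, 1) flips handedness; for an XYZ Euler
    # this is equivalent to negating the X and Z rotation angles.
    e = obj.matrix_world.to_euler('XYZ')
    return (-math.degrees(e.x), math.degrees(e.y), -math.degrees(e.z))

and on the Unreal side, instead of composing three per-axis rotators:

rot = unreal.Vector(float(row["rotx"]), float(row["roty"]), float(row["rotz"]))
rotator = unreal.Rotator(roll=rot.x, pitch=rot.y, yaw=rot.z)  # x/y/z -> roll/pitch/yaw is the assumption to check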
I'm getting very confused trying to set up my simulation correctly in PyDrake. What I want is an actuated robot (with, e.g., an InverseDynamicsController on it) together with an object in the scene that the robot will manipulate. However, I'm struggling to sort out how to create and use the MultibodyPlant, SceneGraph, Context, and Simulator combination correctly.
Here is roughly what I've tried to do:
import numpy as np
from pydrake.all import (AddMultibodyPlantSceneGraph, DiagramBuilder,
                         FindResourceOrThrow, InverseDynamicsController,
                         MultibodyPlant, Parser, Simulator)

builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=1e-4)
parser = Parser(plant, scene_graph)

# Add my robot
robot = parser.AddModelFromFile(robot_urdf)
robot_base = plant.GetFrameByName('robot_base')
plant.WeldFrames(plant.world_frame(), robot_base)

# Add my object
parser.AddModelFromFile(FindResourceOrThrow("drake/my_object.urdf"))
plant.Finalize()

# Add my controller
Kp = np.full(6, 100)
Ki = 2 * np.sqrt(Kp)
Kd = np.full(6, 1)
controller = builder.AddSystem(InverseDynamicsController(plant, Kp, Ki, Kd, False))
controller.set_name("sim_controller")
builder.Connect(plant.get_state_output_port(robot),
                controller.get_input_port_estimated_state())
builder.Connect(controller.get_output_port_control(),
                plant.get_actuation_input_port())

# Get the diagram, simulator, and contexts
diagram = builder.Build()
simulator = Simulator(diagram)
context = simulator.get_mutable_context()
plant_context = plant.GetMyContextFromRoot(context)
However, this has some undesirable qualities. First, as soon as I've added the object, I get this error:
Failure at systems/controllers/inverse_dynamics_controller.cc:32 in SetUp(): condition 'num_positions == dim' failed.
Second, with the object added, the object pose becomes part of my InverseKinematics problem, and when I do SetPositions with plant_context I have to set both my arm joints AND the pose of the object, when I feel like I should only be setting the robot's joint positions with SetPositions.
I realize I've done something wrong with this setup, and I'm just wondering: what is the correct way to get an instance of Simulator that I can run simulations with, with both an actuated robot and a manipulable object? Am I supposed to create multiple plants? Multiple contexts? Who shares what with whom?
I'd really appreciate some advice on this, or a pointer to an example. Drake is great, but I struggle to find minimal examples that do what I want.
Yes, you can add a separate MultibodyPlant for control. See https://github.com/RobotLocomotion/drake/blob/master/examples/planar_gripper/planar_gripper_simulation.cc for an example. The setup is similar to yours, though it's in C++. You can try mimicking the way the diagram is wired up there.
When you do have two plants, you want to call SetPositions on the simulation plant (not the control plant). You can set only the robot positions by using ModelInstanceIndex.
# Add my robot
robot = parser.AddModelFromFile(robot_urdf)
...
plant.SetPositions(plant_context, robot, robot_positions)
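In Python, a minimal sketch of that two-plant wiring might look like the following, reusing robot_urdf, the gains, builder, plant, and robot from the question (and assuming MultibodyPlant, Parser, and InverseDynamicsController are imported from pydrake.all):

# Control plant: contains ONLY the robot, so its num_positions matches the
# 6-dimensional gains that the controller expects.
control_plant = MultibodyPlant(time_step=1e-4)
Parser(control_plant).AddModelFromFile(robot_urdf)
control_plant.WeldFrames(control_plant.world_frame(),
                         control_plant.GetFrameByName('robot_base'))
control_plant.Finalize()

controller = builder.AddSystem(
    InverseDynamicsController(control_plant, Kp, Ki, Kd, False))

# The simulation plant (robot AND object) still provides the state and
# receives the actuation, restricted to the robot's model instance.
builder.Connect(plant.get_state_output_port(robot),
                controller.get_input_port_estimated_state())
builder.Connect(controller.get_output_port_control(),
                plant.get_actuation_input_port(robot))

This way the controller only ever sees the robot's state, while the object still lives (and is simulated) in the main plant.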
I'm quite a newbie with networkx, and it seems I'm having RAM issues when running a function that merges two graphs. The function adds up the weights of edges that are common to both graphs.
I have a list of 8-9 graphs, each containing about 1000-2000 nodes, which I merge in this loop:
FinalGraph = nx.Graph()
while len(graphs_list) != 0:
    FinalGraph = merge_graphs(FinalGraph, graphs_list.pop())
using this function
def merge_graphs(graph1, graph2):
    edges1 = graph1.edges
    edges2 = graph2.edges
    diff1 = edges1 - edges2
    if diff1 == edges1:
        # no common edges: a plain compose is enough
        return nx.compose(graph1, graph2)
    else:
        common_edges = list(edges1 - diff1)
        for edge in common_edges:
            graph1[edge[0]][edge[1]]['weight'] += graph2[edge[0]][edge[1]]['weight']
        return nx.compose(graph2, graph1)
When running my script, my computer always freezes when it reaches this loop. Am I creating some kind of bad reference cycle? Am I missing something more efficient in the networkx docs that could help me avoid this function for my purpose?
Thanks for reading; I hope I'm making sense.
There seems to be a lot of extra work here caused by checking whether the conditions allow you to use compose, and that may be contributing to the trouble. It might work better to just iterate through the edges and nodes of each graph. The following looks like a more direct way to do it (and doesn't require creating as many intermediate variables, which might be contributing to the memory issues):
final_graph = nx.Graph()
for graph in graphs_list:
    final_graph.add_nodes_from(graph.nodes())
    # data='weight' yields the scalar weight rather than the attribute dict
    for u, v, w in graph.edges(data='weight'):
        if final_graph.has_edge(u, v):
            final_graph[u][v]['weight'] += w
        else:
            final_graph.add_edge(u, v, weight=w)
I'm quite a noob in Python.
I can write a simple script to assign a cluster to the vertices of the selected objects.
Like this:
import maya.cmds as cmds

activeSelection = cmds.ls(selection=True)
for i in activeSelection:
    cmds.polyListComponentConversion(i, ff=True, tv=True, internal=True)
    cmds.cluster(i, rel=True)
But it turned out I need to assign a cluster to the vertices of each individual polygon shell of the object. I've spent a few hours searching, trying different scripts, and trying to modify them, but nothing seems to really work.
Would you guys be so kind as to give a hint?
Thank you,
Anton
If you don't want to separate and then re-combine your mesh (to keep clean history, or because you're restricted from modifying the geo in any way except deformers for some reason...), you could use this to "separate" your shells out non-destructively:
import maya.cmds as mc

geom = mc.ls(selection=True)[0]  # the mesh to operate on

shells = []
face_count = mc.polyEvaluate(geom, f=True)
faces = set(range(face_count))
for face in range(face_count):
    if face in faces:
        # grow the current face out to its whole shell
        shell_indices = mc.polySelect(geom, q=True, extendToShell=face)
        shell_faces = ['%s.f[%d]' % (geom, i) for i in shell_indices]
        shells.append(mc.polyListComponentConversion(shell_faces, toVertex=True))
        faces -= set(shell_indices)
    elif not faces:
        break
This will give you a list where each item is the list of vertex components for one shell. All that's left to do is cluster each item of the shells list, as in the sketch below.
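A minimal sketch of that last step, continuing from the snippet above (one relative cluster per shell):

for shell_verts in shells:
    mc.cluster(shell_verts, rel=True)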
I'd try using cmds.polySeparate() to split the mesh into shells, then cluster the pieces and re-assemble them into a combined mesh. Something like this:
sel = cmds.ls(sl=True)
pieces = cmds.listRelatives(cmds.polySeparate(sel), c=True)
clusters = [cmds.cluster(p, rel=True) for p in pieces]
cmds.polyUnite(pieces)
Depending on the application you might not really need the clusters, since polySeparate will give you one transform per shell and you'll be able to animate the original shell transforms directly while keeping the combined mesh.
I am basically building a 3D scatter plot using primitive UV spheres and am running into memory issues when attempting to create more than a couple hundred points at one time. My laptop is limited to a 2.1 GHz processor, but I wanted to know if there is a better way to write this:
import bpy
import random

count = 0
while count < 5:
    bpy.ops.mesh.primitive_uv_sphere_add(
        size=.3,
        location=(random.randint(-9, 9), random.randint(-9, 9),
                  random.randint(-9, 9)),
        rotation=(0, 0, 0))
    count += 1
I realize that with such a simple script any performance increase is likely negligible but wanted to give it a shot anyway.
Some possible suggestions:
I would pre-calculate the x, y, z values, store them in mathutils Vectors, and add them to a dict to be iterated over.
Duplication should have a smaller memory footprint than instantiating new objects:
bpy.ops.object.duplicate_move(OBJECT_OT_duplicate={"linked": False}, TRANSFORM_OT_translate={"value": transform})
Edit:
Doing further research, it appears that each time a bpy.ops.* operator is called, the scene is redrawn. One user documented an exponential increase in the time taken to generate UV spheres this way.
CoDEmanX provided the following code snippet to another user.
import bpy

bpy.ops.object.select_all(action='DESELECT')
bpy.ops.mesh.primitive_uv_sphere_add()
sphere = bpy.context.object

for i in range(-1000, 1000, 2):
    ob = sphere.copy()
    ob.location.y = i
    #ob.data = sphere.data.copy() # uncomment this, if you want full copies and no linked duplicates
    bpy.context.scene.objects.link(ob)

bpy.context.scene.update()
Then it is just a case of adapting the code to set each object's location:
obj.location = location_dict[i]
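Putting both suggestions together, a rough sketch for the scatter-plot case might look like this (the point count and coordinate ranges are placeholders; scene.objects.link/scene.update follow the pre-2.8 API used in the snippet above):

import random
import bpy
from mathutils import Vector

# Pre-calculate the locations, as suggested above.
location_dict = {i: Vector((random.uniform(-9, 9),
                            random.uniform(-9, 9),
                            random.uniform(-9, 9)))
                 for i in range(500)}

# One template sphere, then cheap linked duplicates instead of bpy.ops calls.
bpy.ops.mesh.primitive_uv_sphere_add(size=0.3)
template = bpy.context.object
for i, location in location_dict.items():
    ob = template.copy()
    ob.location = location
    bpy.context.scene.objects.link(ob)
bpy.context.scene.update()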
Edit (Original post below):
So I have come up with the following code. I can export the mesh, bone structure, and animations, and I can animate a simple skeleton. But for some reason, if I animate more than one bone, something goes wrong and the arm moves on the wrong axis.
My cpp code is here: http://kyuu.co.uk/so/main.cpp
My python export code is here: http://kyuu.co.uk/so/test.py
Could someone please tell me what I am doing wrong? I think it might be something to do with the bone roll in Blender; I have seen many posts about that.
Thanks.
(Original post:)
I have been working on this problem for a while now and still cannot figure out what I am missing, so I am hoping someone kind will help me :3
Right, I have something like the following code in my application:
class bone {
bone * child;
Eigen::Matrix4f local_bind_pose; // this i read from a file
Eigen::Matrix4f final_skinning_matrix; // i then transform each vertex by this matrix
// temporary variables
Eigen::Matrix4f inv_bind_pose;
Eigen::Matrix4f world_bind_pose;
}
I.e. a simple bone hierarchy.
I believe I can work out the inv_bind_pose with:
world_bind_pose = bone.parent.world_bind_pose * local_bind_pose
inv_bind_pose = world_bind_pose.inverse()
I know that the bind_pose must be relative to the parent bone.
I know that blender is z = up and I am using y = up.
But I cannot get this information exported from blender. I am using version 2.56.3.
Would the rotation part of the matrix be bone.matrix_local? Would the translation part be bone.tail() - bone.head()?
What about bone roll? It seems that it does affect the result.
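For what it's worth, a sketch of how the local bind pose can be read out of Blender: bone.matrix_local is the rest-pose matrix in armature space and already bakes in the bone roll, so taking it relative to the parent avoids reconstructing the rotation from head/tail/roll by hand. (@ is the matrix product in current Blender Python; 2.5x-era builds used * instead.)

import bpy

armature = bpy.context.object.data  # assumes the active object is an armature
for bone in armature.bones:
    if bone.parent:
        # rest pose relative to the parent, roll included
        local_bind_pose = bone.parent.matrix_local.inverted() @ bone.matrix_local
    else:
        local_bind_pose = bone.matrix_local.copy()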
Some references:
http://www.gamedev.net/topic/571044-skeletal-animation-bonespace
http://blenderartists.org/forum/showthread.php?209221-calculate-bone-location-rotation-from-fcurve-animation-data
http://www.blender.org/development/release-logs/blender-240/how-armatures-work
http://code.google.com/p/gamekit/source/browse/trunk/Engine/Loaders/Blender/gkSkeletonLoader.cpp?spec=svn482&r=482
Thank you so much!
We use Blender bones extensively. I think you might find this snippet useful.
import gzip
import struct
import bpy

groups = [x.name for x in bpy.context.object.vertex_groups]
rig = bpy.context.object.parent

buf = bytearray()
buf.extend(struct.pack('ii', len(groups), 60))
for i in range(60):
    bpy.context.scene.frame_set(i)
    for name in groups:
        # rest-pose (bind) matrix, inverted to build the skinning matrix
        base = rig.pose.bones[name].bone.matrix_local.inverted()
        # current pose relative to the bind pose
        mat = rig.pose.bones[name].matrix @ base
        x, y, z = mat.to_translation()
        rw, rx, ry, rz = mat.to_quaternion()
        buf.extend(struct.pack('3f4x4f', x, y, z, rx, ry, rz, rw))

open('output.rig.gz', 'wb').write(gzip.compress(buf))
This exports the Blender bones in a layout ready for the GPU. The script can be run in Object Mode. I hope it helps.