I'm interested in comparing the quaternions of an object presented in the real-world (with ArUco marker on top of it) and its simulated version in Unity3D.
To do this, I generated different scenes in Unity with the object in different locations. I stored its position and orientation relative to the camera in a CSV file, where the quaternion looks something like this (one example):
[-0.492555320262909 -0.00628990028053522 0.00224017538130283 0.870255589485168]
In ArUco, after using estimatePoseSingleMarkers I got a compact version of Angle-Axis, and I converted it to Quaternion using the following function:
import math
import numpy as np

def find_quat(rvecs):
    a = np.array(rvecs[0][0])
    theta = math.sqrt(a[0]**2 + a[1]**2 + a[2]**2)
    b = a / theta
    qx = b[0] * math.sin(theta / 2)
    qy = -b[1] * math.sin(theta / 2)  # left-handed vs right-handed
    qz = b[2] * math.sin(theta / 2)
    qw = math.cos(theta / 2)
    print(qx, qy, qz, qw)
where rvecs is the rotation vector returned by estimatePoseSingleMarkers.
However, after doing this I'm still getting very different results. Example for the same scene:
[0.9464098048208864 -0.02661258975275046 -0.009733748408866453 0.321722715311581] << aruco result
[-0.492555320262909 -0.00628990028053522 0.00224017538130283 0.870255589485168] << Unity's result
Sample input to find_quat:
[[[ 2.4849011 0.04546755 -0.030406 ]]]
which is the output of the estimatePoseSingleMarkers function.
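For reference, the axis-angle to quaternion step (before any handedness flip) can be cross-checked against SciPy; a minimal sketch, assuming SciPy is installed:

from scipy.spatial.transform import Rotation as R

rvec = [2.4849011, 0.04546755, -0.030406]  # sample rvec from above
q = R.from_rotvec(rvec).as_quat()          # (qx, qy, qz, qw) in OpenCV's right-handed frame
print(q)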
Unity's Quaternion is found as follows:
GameObject.Find("Cube").transform.localRotation;
Am I missing something?
For anyone coming here trying to find an answer:
My problem was that the marker was on top of the cube (so rotated by -90°), which made converting the orientation directly impossible.
Change your pivot point in Unity and rotate it by -90°. Then convert with
(x, y, z, w) = (-x, y, -z, w)
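A minimal sketch of that last conversion step, assuming the quaternion comes from find_quat above in (x, y, z, w) order (the exact sign flips depend on how your marker and Unity axes are set up):

def opencv_to_unity(qx, qy, qz, qw):
    # apply the sign flips from the answer above: (x, y, z, w) -> (-x, y, -z, w)
    return (-qx, qy, -qz, qw)

print(opencv_to_unity(0.9464, -0.0266, -0.0097, 0.3217))  # values from the example scene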
I am trying to convert my visual image to polar form using VisPy, something similar to the images below.
Original image, generated using the following VisPy code:
scene.visuals.Image(self.img_data, interpolation=interpolation,
                    parent=self.viewbox.scene, cmap=self.cmap,
                    method='subdivide', clim=(-65, 40))
Required Polar Image:
I did try to implement the transform using the PolarTransform example from VisPy, but couldn't succeed.
Can anyone please guide me on how to do a PolarTransform of the above image using VisPy?
Thanks
Reply to #djhoese:
Image generated by Vispy before PolarTransform
Image generated by Vispy after PolarTransform
Code for PolarTransform:
self.img.transform = PolarTransform()
ImageVisual and PolarTransform do not play well together without some nudging.
VisPy's ImageVisual has two drawing methods, subdivide and impostor. I'll concentrate on subdivide here; this won't work with impostor.
First, create the ImageVisual like this:
from vispy import visuals

img = visuals.ImageVisual(image,
                          grid=(1, N),
                          method='subdivide')
For N use a reasonably high number (e.g. 360). Playing with that number, you'll immediately see how the polar resolution is affected.
Next, you need to set up a specific transform chain:
import numpy as np
from vispy.visuals.transforms import STTransform, PolarTransform

transform = (
    # move to final location and scale to your liking
    STTransform(scale=(scx, scy), translate=(xoff, yoff))

    # 0
    # just plain simple polar transform
    * PolarTransform()

    # 1
    # pre-scale image to work with polar transform
    # PolarTransform does not work without this
    # scale vertex coordinates to 2*pi
    * STTransform(scale=(2 * np.pi / img.size[0], 1.0))

    # 2
    # origin switch via translate.y, fix translate.x
    * STTransform(translate=(img.size[0] * (ori0 % 2) * 0.5,
                             -img.size[1] * (ori0 % 2)))

    # 3
    # location change via translate.x
    * STTransform(translate=(img.size[0] * (-loc0 - 0.25), 0.0))

    # 4
    # direction switch via inverting scale.x
    * STTransform(scale=(-dir0, 1.0))
)

# set transform
img.transform = transform
dir0 - direction, CW/CCW (takes values -1/1, respectively)
loc0 - location of zero (value between 0 and 2 * np.pi, counter-clockwise)
ori0 - side which will be transformed to the center of the polar image (takes values 0 or 1 for top or bottom)
The bottom four STTransforms can surely be simplified. They are split apart to show the different changes and how they have to be applied.
An example will be added to the VisPy examples section later.
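For concreteness, one possible set of parameter values (these are illustrative assumptions, not part of the answer):

# counter-clockwise, zero angle at the default location, top of the image mapped to the center
dir0 = 1
loc0 = 0.0
ori0 = 0

# place and size the polar plot on the canvas
scx, scy = 1.0, 1.0
xoff, yoff = 300.0, 300.0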
I'm trying to mirror the CMU motion capture dataset (.bvh format) along the world YZ plane with Python code.
I already parsed the files and converted the Euler angle representation to quaternions.
I found some answers for the mirroring that negate the y and z components:
(qx, qy, qz, qw) -> (qx, -qy, -qz, qw)
However, this does not seem to work for all joints in a skeletal animation.
I checked that the mirroring above works for a single object rotation in the Unity3D engine.
The steps I used for mirroring are as follows:
1. exchange left-joint local rotations and right-joint local rotations
2. negate qy and qz for all joint rotations
3. negate x of root trajectory
def mirror_sequence(sequence):
    mirrored_rotations = sequence[:, 1:, :]
    mirrored_trajectory = np.expand_dims(sequence[:, 0, :], axis=1)
    temp = mirrored_rotations

    # Flip left/right joints
    mirrored_rotations[:, joints_left] = temp[:, joints_right]
    mirrored_rotations[:, joints_right] = temp[:, joints_left]

    mirrored_rotations[:, :, [1, 2]] *= -1
    mirrored_trajectory[:, :, 0] *= -1

    mirrored_sequence = np.concatenate((mirrored_trajectory, mirrored_rotations), axis=1)
    return mirrored_sequence
My goal is to make an animation in which the pelvis trajectory is mirrored along the world YZ plane and the left/right joint animations are swapped.
Thank you for your help!
The answer was so simple...
temp = mirrored_rotations
I used to code in C# and thought that working with temp would not change the values in mirrored_rotations...
temp = mirrored_rotations.copy() works well.
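For completeness, a minimal sketch of the corrected function (assuming sequence has shape (frames, joints, 4) with the root trajectory at index 0, and that joints_left / joints_right are the index lists used above):

import numpy as np

def mirror_sequence(sequence, joints_left, joints_right):
    mirrored_rotations = sequence[:, 1:, :].copy()
    mirrored_trajectory = np.expand_dims(sequence[:, 0, :], axis=1).copy()
    temp = mirrored_rotations.copy()  # a real copy, not another view

    # Flip left/right joints
    mirrored_rotations[:, joints_left] = temp[:, joints_right]
    mirrored_rotations[:, joints_right] = temp[:, joints_left]

    # Negate qy and qz, and mirror the root trajectory along x
    mirrored_rotations[:, :, [1, 2]] *= -1
    mirrored_trajectory[:, :, 0] *= -1

    return np.concatenate((mirrored_trajectory, mirrored_rotations), axis=1)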
Perhaps this is a bit overkill, but I had this issue recently and it was non-trivial for me to solve, even using the above method. For others looking at this, a great mocap library that can do this is PyMo.
With this library you can mirror over a particular axis (in this case, X) as well as do other fun things:
from sklearn.pipeline import Pipeline  # PyMo's preprocessing steps are sklearn-style transformers
from pymo.parsers import BVHParser
from pymo.writers import BVHWriter
from pymo.preprocessing import *

p = BVHParser()
data_all = []
for f in bvh_files:
    data_all.append(p.parse(f))

data_pipe = Pipeline([
    ('dwnsampl', DownSampler(tgt_fps=fps, keep_all=False)),
    ('root', RootTransformer('hip_centric')),
    ('mir', Mirror(axis='X', append=False)),  # <-- the relevant line
    ('jtsel', JointSelector(['Spine', 'Spine1', 'Spine2', 'Spine3', 'Neck', 'Neck1', 'Head',
                             'RightShoulder', 'RightArm', 'RightForeArm', 'RightHand',
                             'LeftShoulder', 'LeftArm', 'LeftForeArm', 'LeftHand'],
                            include_root=True)),
    ('exp', MocapParameterizer('expmap')),
    ('cnst', ConstantsRemover()),
    ('np', Numpyfier())
])

out_data = data_pipe.fit_transform(data_all)

# and then to write your transformed files out:
inv_data = data_pipe.inverse_transform(out_data)
writer = BVHWriter()
for i in range(0, out_data.shape[0]):
    with open(bvh_files[i], "w") as f:
        writer.write(inv_data[i], f, framerate=fps)
Be aware that the library changes often, but even as it changes, the bare bones for doing many useful transformations on BVH data are there and are very solid.
I'm trying, for a computer vision project, to determine the projection transformation occurring in a football image. I detect the vanishing points, get 2 point matches, and calculate the projection from model field points to image points based on cross ratios. This works really well for almost all points, but for points which lie behind the camera the projection goes completely wrong. Do you know why, and how I can fix this?
It's based on the article "Fast 2D model-to-image registration using vanishing points for sports video analysis", and I use the projection function given on page 3. I tried calculating the result using different methods, too (namely based on intersections), but the result is the same:
There should be a bottom field line, but it is projected way out to the right.
I also tried using decimal to see if it was a negative overflow error, but that wouldn't have made much sense to me, since the same result showed up when testing on Wolfram Alpha.
import numpy as np
# note: 'linecalc' is a helper module from the question (line equations and intersections), not a standard library

def Projection(vanpointH, vanpointV, pointmatch2, pointmatch1):
    """
    :param vanpointH:
    :param vanpointV:
    :param pointmatch1:
    :param pointmatch2:
    :returns function that takes a single modelpoint as input:
    """
    X1 = pointmatch1[1]
    point1field = pointmatch1[0]
    X2 = pointmatch2[1]
    point2field = pointmatch2[0]

    point1VP = linecalc.calcLineEquation([[point1field[0], point1field[1], vanpointH[0], vanpointH[1], 1]])
    point1VP2 = linecalc.calcLineEquation([[point1field[0], point1field[1], vanpointV[0], vanpointV[1], 1]])
    point2VP = linecalc.calcLineEquation([[point2field[0], point2field[1], vanpointV[0], vanpointV[1], 1]])
    point2VP2 = linecalc.calcLineEquation([[point2field[0], point2field[1], vanpointH[0], vanpointH[1], 1]])

    inters = linecalc.calcIntersections([point1VP, point2VP])[0]
    inters2 = linecalc.calcIntersections([point1VP2, point2VP2])[0]

    def lambdaFcnX(X, inters):
        # Gives the position of the point to be projected, according to the matching,
        # on the line connecting point1 and vanpointH, based only on the cross ratio
        # being the same as in the model field
        return (((X[0] - X1[0]) * (inters[1] - point1field[1])) / ((X2[0] - X1[0]) * (inters[1] - vanpointH[1])))

    def lambdaFcnX2(X, inters):
        # Gives the position of the point to be projected, according to the matching,
        # on the line connecting point2 and vanpointH, based only on the cross ratio
        # being the same as in the model field
        return (((X[0] - X1[0]) * (point2field[1] - inters[1])) / ((X2[0] - X1[0]) * (point2field[1] - vanpointH[1])))

    def lambdaFcnY(X, v1, v2):
        # return (((X[1] - X1[1]) * (np.subtract(v2, v1))) / ((X2[1] - X1[1]) * (np.subtract(v2, vanpointV))))
        return (((X[1] - X1[1]) * (v2[0] - v1[0])) / ((X2[1] - X1[1]) * (v2[0] - vanpointV[0])))

    def projection(Point):
        lambdaPointx = lambdaFcnX(Point, inters)
        lambdaPointx2 = lambdaFcnX2(Point, inters2)
        v1 = (np.multiply(-(lambdaPointx / (1 - lambdaPointx)), vanpointH)
              + np.multiply((1 / (1 - lambdaPointx)), point1field))
        v2 = (np.multiply(-(lambdaPointx2 / (1 - lambdaPointx2)), vanpointH)
              + np.multiply((1 / (1 - lambdaPointx2)), inters2))
        lambdaPointy = lambdaFcnY(Point, v1, v2)
        point = np.multiply(-(lambdaPointy / (1 - lambdaPointy)), vanpointV) + np.multiply((1 / (1 - lambdaPointy)), v1)
        return point

    return projection
match1 = ((650,390,1),(2478,615,1))
match2 = ((740,795,1),(2114,1284,1))
vanpoint1 = [-2.07526585e+03, -5.07454315e+02, 1.00000000e+00]
vanpoint2 = [ 5.53599881e+03, -2.08240612e+02, 1.00000000e+00]
model = Projection(vanpoint2,vanpoint1,match2,match1)
model((110,1597))
Suppose the vanishing points are
vanpoint1 = [-2.07526585e+03, -5.07454315e+02, 1.00000000e+00]
vanpoint2 = [ 5.53599881e+03, -2.08240612e+02, 1.00000000e+00]
and two matches are:
match1 = ((650,390,1),(2478,615,1))
match2 = ((740,795,1),(2114,1284,1))
These work for almost all points, as seen in the picture. The left bottom point, however, is completely off and gets image coordinates
[ 4.36108177e+04, -1.13418258e+04]. This happens going down from (312,1597); for (312,1597) the result is [-2.34989787e+08, 6.87155603e+07], which is where it's supposed to be.
Why does it shift all the way out there? It would perhaps make sense if I calculated the camera matrix and the point was behind the camera. But since what I do is actually similar to homography estimation (a 2D mapping), I cannot make geometric sense of this. However, my knowledge of this is definitely limited.
Edit: does this perhaps have to do with the topology of the projective plane and the fact that it's non-orientable (wraps around)? My knowledge of topology is not what it should be...
Okay, figured it out. This might not make too much sense to others, but it does for me (and in case anyone ever has the same problem...).
Geometrically, I realized the following when using an equivalent approach, where v1 and v2 are calculated based on the different vanishing points and I project based on the intersection of the lines connecting points with the vanishing points. At some point these lines become parallel, and after that the intersection actually lies completely on the other side. And that makes sense; it just took me a while to realize it does.
In the code above the same thing happens: the last cross ratio, called lambdaPointy, goes to 1 and then above it. It was just easiest to visualize using the intersections.
I also know how to solve it now; this is just in case anyone else tries such code.
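As a hedged illustration of the failure mode (not the author's actual fix), one could flag model points whose interpolation parameter reaches 1, which is where the projected point flips to the other side of the vanishing point:

def crosses_vanishing_point(lambda_value, eps=1e-9):
    # When the cross ratio reaches 1, the (1 - lambda) denominators in the
    # projection blow up, and beyond 1 the projected point flips sides.
    return lambda_value >= 1.0 - eps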
Sorry for such a specific question, guys; I think only people with knowledge of Maya will answer. In Maya I have cubes of different sizes, and I need to find with Python which face of a cube is pointing down along the Y axis (the pivot is in the center). Any tips will be appreciated.
Thanks a lot :)
import re
from maya import cmds
from pymel.core.datatypes import Vector, Matrix, Point

obj = 'pCube1'

# Get the world transformation matrix of the object
obj_matrix = Matrix(cmds.xform(obj, query=True, worldSpace=True, matrix=True))

# Iterate through all faces
for face in cmds.ls(obj + '.f[*]', flatten=True):
    # Get face normal in object space
    face_normals_text = cmds.polyInfo(face, faceNormals=True)[0]
    # Convert to a list of floats
    face_normals = [float(digit) for digit in re.findall(r'-?\d*\.\d*', face_normals_text)]
    # Create a Vector object and multiply with matrix to get world space
    v = Vector(face_normals) * obj_matrix
    # Check if vector faces downwards
    if max(abs(v[0]), abs(v[1]), abs(v[2])) == -v[1]:
        print(face, v)
If you just need a quick solution without vector math, PyMEL, or the API, you can use cmds.polySelectConstraint to find the faces aligned with a normal. All you need to do is select all the faces, then use the constraint to get only the ones pointing the right way. This will select all the faces in a mesh that point along a given axis:
import maya.cmds as cmds

def select_faces_by_axis(mesh, axis=(0, 1, 0), tolerance=45):
    cmds.select(mesh + ".f[*]")
    cmds.polySelectConstraint(mode=3, type=8, orient=2, orientaxis=axis, orientbound=(0, tolerance))
    cmds.polySelectConstraint(dis=True)  # remember to turn constraint off!
The axis is the X, Y, Z direction you want, and tolerance is the slop in degrees you'll tolerate. To get the downward faces (Maya is Y-up) you'd do
select_faces_by_axis('your_mesh_here', (0, -1, 0))
or
select_faces_by_axis('your_mesh_here', (0, -1, 0), 1)
# this would get faces only within 1 degree of downward
This method has the advantage of operating mostly in Maya's C++ code, so it will be faster than Python-based methods that loop over all the faces in a mesh.
With PyMEL the code can be a bit more compact. Selecting the faces pointing downwards:
import pymel.core as pm

n = pm.PyNode("pCubeShape1")
s = []
for f in n.faces:
    if f.getNormal(space='world')[1] < 0.0:
        s.append(f)
pm.select(s)
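If the goal is the single face that points most directly down (rather than every face with any downward component), here is a minimal PyMEL sketch under the same assumptions (Y-up scene, world-space normals):

import pymel.core as pm

def most_downward_face(shape_name='pCubeShape1'):
    node = pm.PyNode(shape_name)
    # the face whose world-space normal has the most negative Y component
    return min(node.faces, key=lambda f: f.getNormal(space='world')[1])

print(most_downward_face())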
I want to generate a surface which should look like a hemisphere. What I have done so far is to read an already existing BEM mesh and show the scalar values on it. But now I have to show the scalar values on a hemisphere instead of the BEM mesh, and I don't know how to generate a triangular mesh that looks like a hemisphere.
This hemisphere needs to contain a set of N points (x, y, z) [using mlab.triangular_mesh], and at each vertex I need to represent N data values (floats), either directly or through a colormap (e.g. blue for the lowest value to red for the highest). data is an array of 2562 float values; it could be randomly generated, as it comes from other code. The points also come from other code and have shape (2562, 3), but that shape is not a hemisphere.
This was the program I used for viewing using the BEM surface
fname = data_path + '/subjects/sample/bem/sample-5120-5120-5120-bem-sol.fif'
surfaces = mne.read_bem_surfaces(fname, add_geom=True)
print("Number of surfaces : %d" % len(surfaces))

head_col = (0.95, 0.83, 0.83)  # light pink
colors = [head_col]

try:
    from enthought.mayavi import mlab
except ImportError:
    from mayavi import mlab

mlab.figure(size=(600, 600), bgcolor=(0, 0, 0))
for c, surf in zip(colors, surfaces):
    points = surf['rr']
    faces = surf['tris']
    s = data
    mlab.triangular_mesh(points[:, 0], points[:, 1], points[:, 2], faces,
                         color=c, opacity=1, scalars=s[:, 0])

# mesh = mlab.triangular_mesh(x, y, z, triangles, representation='wireframe', opacity=0)
# point_data = mesh.mlab_source.dataset.point_data
# point_data.scalars = t
# point_data.scalars.name = 'Point data'
# mesh2 = mlab.pipeline.set_active_attribute(mesh, point_scalars='Point data')
As others have pointed out, your question is not very clear and does not include an easily reproducible example; your example would take considerable work for us to reproduce, and you have not described the steps you have taken very clearly.
What you are trying to do is easy. Scalars can be defined for each vertex (i.e., each VTK point):
surf = mlab.triangular_mesh(x,y,z,triangles)
surf.mlab_source.scalars = t
And you need to set a flag to get them to appear, which I think might be your problem:
surf.actor.mapper.scalar_visibility=True
Here is some code to generate a half-sphere. It produces a VTK polydata. I'm not 100% sure if the mayavi source is the same source type as triangular_mesh but I think it is.
import numpy as np
from mayavi import mlab

res = 250.  # desired resolution (number of samples on the sphere)
phi, theta = np.mgrid[0:np.pi:np.pi / res, 0:np.pi:np.pi / res]
x = np.cos(theta) * np.sin(phi)
y = np.sin(theta) * np.sin(phi)
z = np.cos(phi)
mlab.mesh(x, y, z, color=(1, 1, 1))
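To tie this back to the question, here is a minimal sketch (my own assumption, not part of the answer) that colours the same half-sphere with per-vertex scalars instead of a flat colour; the data array is a random placeholder:

import numpy as np
from mayavi import mlab

res = 250.
phi, theta = np.mgrid[0:np.pi:np.pi / res, 0:np.pi:np.pi / res]
x = np.cos(theta) * np.sin(phi)
y = np.sin(theta) * np.sin(phi)
z = np.cos(phi)

data = np.random.rand(*x.shape)  # placeholder: one scalar per vertex
surf = mlab.mesh(x, y, z, scalars=data, colormap='blue-red')
mlab.show()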