I'm trying to create a script for mirroring transforms across the yz plane in Maya.
I was able to set up a node network that gets the desired results. I took a node at the origin with sx set to -1 and a source node from the left side (lf_grp for this test), and fed their worldMatrix attrs into a multMatrix node. Then I passed the output (multMatrix.matrixSum) through a decomposeMatrix and into my destination node.
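For reference, the network described above can be rebuilt with a few lines of cmds - a sketch, where mirror_loc and the destination rt_grp are placeholder names, and decomposeMatrix may need the matrixNodes plugin loaded:
import maya.cmds as mc
mc.loadPlugin('matrixNodes', quiet=True)  # decomposeMatrix lives here in older Maya versions
mir_loc = mc.spaceLocator(name='mirror_loc')[0]  # node at the origin
mc.setAttr(mir_loc + '.scaleX', -1)  # mirror across the YZ plane
mult = mc.createNode('multMatrix')
dcmp = mc.createNode('decomposeMatrix')
mc.connectAttr('lf_grp.worldMatrix[0]', mult + '.matrixIn[0]')
mc.connectAttr(mir_loc + '.worldMatrix[0]', mult + '.matrixIn[1]')
mc.connectAttr(mult + '.matrixSum', dcmp + '.inputMatrix')
mc.connectAttr(dcmp + '.outputTranslate', 'rt_grp.translate')
mc.connectAttr(dcmp + '.outputRotate', 'rt_grp.rotate')
mc.connectAttr(dcmp + '.outputScale', 'rt_grp.scale')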
I'd really prefer not to create a bunch of nodes to do my mirroring - running a create/connect/disconnect/delete cycle every time is slow and painful... I'd rather just "math the crap out of it" in my script, but I can't seem to figure out how to actually multiply my two matrices...
Oh, I'm using the MTransformationMatrix since it handles a few things for you that the MMatrix does not - like rotation order (at least from what I've read...)
Thank you for any help you can give!
import maya.cmds as mc
import maya.OpenMaya as om
src_xfm = 'lf_grp'
mir_matrix_vals = [-1.0, -0.0, -0.0, 0.0,
                    0.0,  1.0,  0.0, 0.0,
                    0.0,  0.0,  1.0, 0.0,
                    0.0,  0.0,  0.0, 1.0]
# get src xfm matrix
#
selList = om.MSelectionList()
selList.add(src_xfm)
mDagPath = om.MDagPath()
selList.getDagPath(0, mDagPath)
src_xfmFn = om.MFnTransform(mDagPath)
src_matrix = src_xfmFn.transformation()
# construct mir xfm matrix
#
tmp_matrix = om.MMatrix()
om.MScriptUtil().createMatrixFromList(mir_matrix_vals, tmp_matrix)
mir_matrix = om.MTransformationMatrix(tmp_matrix)
# multiply matrices to get mirrored matrix
#
dst_matrix = src_matrix * mir_matrix # HOW DO YOU DO THIS????
Here's how to do it using the OpenMaya API version 2.0.
Nowadays this is the preferred method for doing Python API work - among other things it's a lot less wordy and avoids MScriptUtil, which is prone to crashes if used incorrectly. It's also faster for most things.
This is the plain matrix multiplication:
from maya.api.OpenMaya import MMatrix
mat1 = MMatrix([0.707107, 0, -0.707107, 0, 0.5, 0.707107, 0.5, 0, 0.5, -0.707107, 0.5, 0, 0, 0, 0, 1])
mat2 = MMatrix([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 100, 200, 300, 1])
print(mat1 * mat2)
# (((0.707107, 0, -0.707107, 0), (0.5, 0.707107, 0.5, 0), (0.5, -0.707107, 0.5, 0), (100, 200, 300, 1)))
You can't directly multiply an MTransformationMatrix -- that class isn't a linear-algebra matrix, it's an accessor for the position, rotation, scale, shear and pivot data of a matrix. You use it if you want to get around doing all of the concatenation math yourself on a transform node, like setting its rotation without changing its scale.
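For example, here's a minimal sketch of that accessor role (api 2.0 names, an identity transform built from scratch just for illustration) - setting the rotation leaves the scale untouched:
import math
from maya.api.OpenMaya import MTransformationMatrix, MEulerRotation, MSpace
xform = MTransformationMatrix()
xform.setScale([2.0, 2.0, 2.0], MSpace.kTransform)
xform.setRotation(MEulerRotation(0.0, math.radians(90), 0.0))
print(xform.scale(MSpace.kTransform))  # still [2.0, 2.0, 2.0]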
You can get the underlying matrix from an MTransformationMatrix with its asMatrix() function. To apply a matrix to an object:
from maya.api.OpenMaya import MTransformationMatrix, MGlobal, MFnTransform
sel = MGlobal.getActiveSelectionList()            # current selection
dagpath = sel.getDagPath(0)                       # DAG path of the first node
transform_node = MFnTransform(dagpath)            # function set for the transform
xfm = transform_node.transformation().asMatrix()  # the underlying MMatrix
new_matrix = mat1 * xfm                           # plain matrix math
new_trans = MTransformationMatrix(new_matrix)
transform_node.setTransformation(new_trans)
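And tying it back to the original mirroring question, a hedged sketch (lf_grp and rt_grp are the placeholder names from the question; multiplying on the right by a scaleX = -1 matrix mirrors across the YZ plane, matching the multMatrix node order):
from maya.api.OpenMaya import MMatrix, MTransformationMatrix, MSelectionList, MFnTransform
mirror = MMatrix([-1, 0, 0, 0,
                   0, 1, 0, 0,
                   0, 0, 1, 0,
                   0, 0, 0, 1])
sel = MSelectionList()
sel.add('lf_grp')
sel.add('rt_grp')
src = MFnTransform(sel.getDagPath(0))
dst = MFnTransform(sel.getDagPath(1))
mirrored = src.transformation().asMatrix() * mirror  # worldMatrix * mirror
dst.setTransformation(MTransformationMatrix(mirrored))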
As a homework task, I was asked to create an RGB spectrum image using just numpy functions.
This is my current code:
zero = np.dstack([
    np.linspace(0.0, 1.0, self.resolution),
    np.linspace(0.0, 0.0, self.resolution),
    np.linspace(1.0, 0.0, self.resolution)
])
spectrum = np.tile(zero, (self.resolution, 1, 1))
What this produces is a gradient from red to blue. Now, what is left is to linspace the green value into the third dimension.
Does anyone here have some tips on how to do that?
Edit: Let me re-phrase - how can I avoid this loop with numpy?
spectrum = np.tile(zero, (self.resolution, 1, 1))
for i in range(self.resolution):
    spectrum[i, :, 1] = green[i]
Your last for loop is equivalent to:
spectrum[:, :, 1] = np.linspace(0.0, 1.0, resolution)[:, None]
Edit: after playing with your spectrum, this also does the job:
res = np.linspace(0.0, 1.0, resolution)
s = np.meshgrid(res, res)
spectrum = np.stack([s[0], s[1], 1 - s[0]], axis=-1)
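For completeness, a self-contained version without the class (assuming resolution is just an int; it prints the shape so you can sanity-check the result):
import numpy as np
resolution = 4
res = np.linspace(0.0, 1.0, resolution)
x, y = np.meshgrid(res, res)
spectrum = np.stack([x, y, 1 - x], axis=-1)
print(spectrum.shape)  # (4, 4, 3): red varies per column, green per row, blue = 1 - red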
I'm starting to learn PyOpenGL, and I'm following this tutorial. At one point the instructor creates a single array from which he extracts the information to construct a triangle: vertices and their colors (I added the numpy line here):
#-----------|-Vertices pos--|---Colors----|-----------
vertices = [-0.5, -0.5, 0.0, 1.0, 0.0, 0.0,
             0.5, -0.5, 0.0, 0.0, 1.0, 0.0,
             0.0,  0.5, 0.0, 0.0, 0.0, 1.0]
vertices = np.array(vertices, dtype=np.float32)
The information of this array is passed to glVertexPointer() and glColorPointer() in the display function:
def display():
    glClear(GL_COLOR_BUFFER_BIT)
    glEnableClientState(GL_VERTEX_ARRAY)
    glEnableClientState(GL_COLOR_ARRAY)
    glVertexPointer(3, GL_FLOAT, 24, vertices)
    glColorPointer(3, GL_FLOAT, 24, vertices + 3)
    glDrawArrays(GL_TRIANGLES, 0, 3)
    glDisableClientState(GL_VERTEX_ARRAY)
    glDisableClientState(GL_COLOR_ARRAY)
    glutSwapBuffers()
My problem is with the last argument of those functions. In the tutorial (since he is using C++) he can write vertices + 3 to tell the program to start reading from the third position of the array; I cannot do this in Python.
Can someone guide me on how I can define this pointer, or how I can extract the information from my array?
Note: I'm aware that I can split the vertex and color information into different arrays, but I want to know if it is possible to do it using one array.
EDIT - Adding the complete code:
from OpenGL.GL import *
from OpenGL.GLUT import *
import numpy as np
import ctypes
#-----------|-Vertices pos--|---Colors----|-----------
vertices = [-0.5, -0.5, 0.0, 1.0, 0.0, 0.0,
             0.5, -0.5, 0.0, 0.0, 1.0, 0.0,
             0.0,  0.5, 0.0, 0.0, 0.0, 1.0]
vertices = np.array(vertices, dtype=np.float32)
buffer_offset = ctypes.c_void_p
float_size = ctypes.sizeof(ctypes.c_float)
#-----------------------------------------------------
def display():
    glClear(GL_COLOR_BUFFER_BIT)
    glEnableClientState(GL_VERTEX_ARRAY)
    glEnableClientState(GL_COLOR_ARRAY)
    glVertexPointer(3, GL_FLOAT, 24, buffer_offset(vertices.ctypes.data))
    glColorPointer(3, GL_FLOAT, 24, buffer_offset(vertices.ctypes.data + float_size * 3))
    glDrawArrays(GL_TRIANGLES, 0, 3)
    glDisableClientState(GL_VERTEX_ARRAY)
    glDisableClientState(GL_COLOR_ARRAY)
    glutSwapBuffers()

def reshape(w, h):
    glViewport(0, 0, w, h)

def initOpenGL():
    glClearColor(0, 0, 0, 1)
#-----------------------------------------------------
glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH)
glutInitWindowSize(500,500)
glutCreateWindow(b'Test')
glutDisplayFunc(display)
glutIdleFunc(display)
glutReshapeFunc(reshape)
initOpenGL()
glutMainLoop()
This depends on whether you have a vertex buffer bound or not. The last parameter, pointer, in gl*Pointer() is either:
No vertex buffer bound: the address of the vertices.
Vertex buffer bound: a byte offset relative to the start of the buffer.
You can utilize ctypes for this.
import ctypes
buffer_offset = ctypes.c_void_p
float_size = ctypes.sizeof(ctypes.c_float)
Assuming you have a vertex buffer bound, then you'd simply do:
glVertexPointer(3, GL_FLOAT, 24, buffer_offset(0))
glColorPointer(3, GL_FLOAT, 24, buffer_offset(float_size * 3))
If you are just using that array and nothing else, then I would assume you could just get the address and offset it accordingly.
glVertexPointer(3, GL_FLOAT, 24, buffer_offset(vertices.ctypes.data))
glColorPointer(3, GL_FLOAT, 24, buffer_offset(vertices.ctypes.data + float_size * 3))
But I have to admit, I've never had the need for this in Python, so I can't confirm it.
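One way to avoid hard-coding the byte sizes, if it helps (same unconfirmed assumptions as above, just deriving the offsets from the numpy array itself):
stride = 6 * vertices.itemsize  # 6 float32 values per vertex = 24 bytes
glVertexPointer(3, GL_FLOAT, stride, buffer_offset(vertices.ctypes.data))
glColorPointer(3, GL_FLOAT, stride, buffer_offset(vertices.ctypes.data + 3 * vertices.itemsize))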
I tried to use the contrib metrics for the first time and didn't manage to make them work.
Here are the metrics I tried to use, and how they were implemented:
y_pred_labels = y[:, 1]
y_true_labels = tf.cast(y_[:, 1], tf.int32)

with tf.name_scope('auc'):
    auc_score, update_op_auc = tf.contrib.metrics.streaming_auc(
        predictions=y_pred_labels,
        labels=y_true_labels
    )
    tf.summary.scalar('auc', auc_score)

with tf.name_scope('accuracy_contrib'):
    accuracy_contrib, update_op_acc = tf.contrib.metrics.streaming_accuracy(
        predictions=y_pred_labels,
        labels=y_true_labels
    )
    tf.summary.scalar('accuracy_contrib', accuracy_contrib)

with tf.name_scope('error_contrib'):
    error_contrib, update_op_error = tf.contrib.metrics.streaming_mean_absolute_error(
        predictions=y_pred_labels,
        labels=y_[:, 1]  # needs to use float32 and not int32
    )
    tf.summary.scalar('error_contrib', error_contrib)
This code executes fine, and during execution I obtain the following:
########################################
Accuracy at step 1000: 0.633333 # This is computed by another op, not displayed above
Accuracy Contrib at step 1000: (0.0, 0.0)
AUC Score at step 1000: (0.0, 0.0)
Error Contrib at step 1000: (0.0, 0.0)
########################################
Here is the format of the input data:
y_pred_labels = [0.1, 0.5, 0.6, 0.8, 0.9, 0.1, ...] #Represent a binary probability
y_true_labels = [1, 0, 1, 1, 1, 0, 0, ...] # Represent the true class {0 or 1}
y_[:, 1] = [1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, ...] # Same as y_true_labels, formatted as float32
I think I've understood from the official documentation that this is normal behavior under certain conditions... However, I can't manage to obtain the values of my metrics.
Secondly, I have noticed that two of the metrics are called streaming_accuracy and streaming_auc; how do they behave differently from a "non-streaming" accuracy or AUC metric? And is there any way to make them "non-streaming" if necessary?
I encountered the same problem just now, and found out:
You need to run the update ops, such as sess.run(update_op_auc), in addition to the metric value ops, such as sess.run(auc_score).
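A minimal sketch of that pattern (TF 1.x graph mode; batches and make_feed are placeholders for your own input pipeline). Note that the streaming metrics keep their running totals in local variables, so those need initializing too:
init = tf.group(tf.global_variables_initializer(),
                tf.local_variables_initializer())  # streaming metrics use local variables

with tf.Session() as sess:
    sess.run(init)
    for batch in batches:  # hypothetical input loop
        # each call accumulates the running totals behind the metrics
        sess.run([update_op_auc, update_op_acc, update_op_error],
                 feed_dict=make_feed(batch))  # make_feed is a placeholder
    # the value ops only return the accumulated result after the updates have run
    print(sess.run([auc_score, accuracy_contrib, error_contrib]))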
What is the cleanest way to get the view direction relative to your scene in vispy?
view.scene.transform contains a whole chain of transforms:
In [88]: view.scene.transform
Out[88]:
<ChainTransform [<STTransform scale=[ 960. -540. 1. 1.] translate=[ 960. 540. 0. 0.] at 0x139757309901840>,
MatrixTransform(matrix=[[26.44507, 0.0, 0.0, 0.0],
[0.0, 47.013458, 0.0, 0.0],
[0.0, 0.0, -1e-06, 0.0],
[-0.0, -0.0, -0.0, 1.0]] at 0x7f1bc8d526d0),
<Inverse of '<ChainTransform [MatrixTransform(matrix=[[0.64390097845776273, -0.18562251042644023, -0.74225050593726238, 0.0],\n [0.74851597030808681, 0.35377196489238, 0.56086472437650059, 0.0],\n [0.15847830177938896, -0.91672770247177038, 0.36673552784799862, 0.0],\n [0.002241050448888897, 0.013296952664039196, 0.015024409939918581, 1.0]] at 0x7f1bc8c81710)] at 0x7f1bc8cb7e90>'>] at 0x7f1bc8e75490>
I could write something to parse lists of transforms of various types and compose them, and extract the view direction from the composed transform, but I suspect I'm swimming upstream.
Vispy transformations have map and imap functions you can use to map coordinates between scene and screen coordinates in either direction. I used them on points and threw in a lot of assertions to be safe; there are probably simpler implementations. I tested this for orthographic projection. I think it will work for perspective projections too, as long as the center of projection is in the middle of the screen.
def get_view_direction_in_scene_coordinates(view):
    import numpy
    tform = view.scene.transform
    w, h = view.canvas.size
    screen_center = numpy.array([w / 2, h / 2, 0, 1])  # in homogeneous screen coordinates
    d1 = numpy.array([0, 0, 1, 0])  # in homogeneous screen coordinates
    point_in_front_of_screen_center = screen_center + d1  # in homogeneous screen coordinates
    p1 = tform.imap(point_in_front_of_screen_center)  # in homogeneous scene coordinates
    p0 = tform.imap(screen_center)  # in homogeneous scene coordinates
    assert abs(p1[3] - 1.0) < 1e-5  # normalization necessary before subtraction
    assert abs(p0[3] - 1.0) < 1e-5
    d2 = p1 - p0  # in homogeneous scene coordinates
    assert abs(d2[3]) < 1e-5
    d3 = d2[0:3]  # in 3D scene coordinates
    d4 = d3 / numpy.linalg.norm(d3)
    return d4
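Hypothetical usage, assuming a SceneCanvas with a turntable camera (untested beyond the orthographic case described above):
import vispy.scene

canvas = vispy.scene.SceneCanvas(keys='interactive', show=True)
view = canvas.central_widget.add_view()
view.camera = 'turntable'
print(get_view_direction_in_scene_coordinates(view))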
I'm trying to work through the beginning of the OpenGL redbook for version 2.1 and translate what I learn to the PyOpenGL binding while using Qt for the windowing framework. For some reason though, I can't seem to get my call to glDrawElements() to actually draw anything to the screen. Here are the relevant functions I have so far.
def initializeGL(self):
    self.qglClearColor(QtGui.QColor(0, 0, 150))
    self.initGeometry()
    GL.glEnable(GL.GL_DEPTH_TEST)
    self.buffers = GL.glGenBuffers(2)

def paintGL(self):
    GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT)
    GL.glLoadIdentity()
    GL.glTranslate(0.0, 0.0, -50.0)
    GL.glScale(20.0, 20.0, 20.0)
    GL.glRotate(self.yRotDeg, 0.2, 1.0, 0.3)
    GL.glTranslate(-0.5, -0.5, -0.5)
    VERTICES = 0
    INDICES = 1
    GL.glBindBuffer(GL.GL_ARRAY_BUFFER, self.buffers[VERTICES])
    GL.glBufferData(GL.GL_ARRAY_BUFFER, len(self.cubeVtxArray), self.cubeVtxArray, GL.GL_STATIC_DRAW)
    offset = ctypes.c_void_p(0)
    GL.glVertexPointer(3, GL.GL_FLOAT, 0, offset)
    #GL.glVertexPointerf(self.cubeVtxArray)
    GL.glEnableClientState(GL.GL_VERTEX_ARRAY)
    GL.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, self.buffers[INDICES])
    GL.glBufferData(GL.GL_ELEMENT_ARRAY_BUFFER, len(self.cubeIdxArray), self.cubeIdxArray, GL.GL_STATIC_DRAW)
    GL.glDrawElements(GL.GL_QUADS, 24, GL.GL_UNSIGNED_BYTE, offset)
    #GL.glDrawArrays(GL.GL_QUADS, 0, 24)
def initGeometry(self):
    self.cubeVtxArray = np.array(
        [[0.0, 0.0, 0.0],
         [1.0, 0.0, 0.0],
         [1.0, 1.0, 0.0],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0],
         [1.0, 0.0, 1.0],
         [1.0, 1.0, 1.0],
         [0.0, 1.0, 1.0]], dtype=np.float32)
    self.cubeIdxArray = np.array([
        0, 1, 2, 3,
        3, 2, 6, 7,
        1, 0, 4, 5,
        2, 1, 5, 6,
        0, 3, 7, 4,
        7, 6, 5, 4], dtype=np.uint8)
When I run the program, it does clear the screen to the correct color, but the cube isn't drawn. Interestingly, if I try to render using the glDrawArrays() function, it does render (although it doesn't look like a cube, since it isn't using the indices). What might be going wrong here?
EDIT:
Here are a couple of videos of the results of glDrawElements() and glDrawArrays().
EDIT2:
My problem (as user1118321 pointed out) was that I was passing an array length as the second parameter to glBufferData(), where I should have been passing a size in bytes. The solution for Python is:
from OpenGL.arrays.arraydatatype import ArrayDatatype
Use ArrayDatatype.arrayByteCount(self.cubeVtxArray) as the second parameter to glBufferData() (and similarly for any other buffers).
EDIT 3:
I'd actually like to make one more edit to this, since I just ended up with another related problem from my calls to glBufferData(). I naively thought that I should also be able to use sys.getsizeof() in the same way as ArrayDatatype.arrayByteCount(). This is not the case, though, if your buffer data is a numpy array, as I ended up using: sys.getsizeof() returns the wrong size and will inadvertently chop your array a bit. Goodbye three days of my life....
One thing that looks wrong to me is that you're sending the array length as the second argument to glBufferData(). You probably need to send the number of bytes of the data as that argument, so it would be something like:
len(self.cubeVtxArray) * numBytesPerElement
where numBytesPerElement would be 4 bytes per float times 3 floats per vertex = 12 bytes.
In Python, you can get the number of bytes in an array by doing the following:
from OpenGL.arrays.arraydatatype import ArrayDatatype
Use ArrayDatatype.arrayByteCount(self.cubeVtxArray) as the second parameter to glBufferData() (and similarly for any other buffers).
And you'll need to do the same thing for self.cubeIdxArray, though the numBytesPerElement will be 1 in that case.
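Putting that together, a sketch of what the corrected upload calls in paintGL() would look like (byte counts instead of element counts):
from OpenGL.arrays.arraydatatype import ArrayDatatype

GL.glBufferData(GL.GL_ARRAY_BUFFER,
                ArrayDatatype.arrayByteCount(self.cubeVtxArray),
                self.cubeVtxArray, GL.GL_STATIC_DRAW)
GL.glBufferData(GL.GL_ELEMENT_ARRAY_BUFFER,
                ArrayDatatype.arrayByteCount(self.cubeIdxArray),
                self.cubeIdxArray, GL.GL_STATIC_DRAW)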