I'm trying to import an animation from a file into Maya, but I'm getting odd results in the interpolation between keys:
https://i.imgur.com/cP27Yai.mp4
It was weird, because the keyframes looked right at first, until I looked at the Graph Editor.
At first I thought this was gimbal lock, so I used the Euler Filter, but it didn't solve it. Sometimes the difference between one key and the next is 180, which is why the keys look fine when just watching the animation, but the interpolation between them performs a 180° rotation.
So if I go key by key, subtract 180 from the key's value, and then flip its sign (to positive or negative depending on the case), I can make it work with a bit of tweaking.
However, this is too much work, especially for biped animations; it could take me forever.
Is this a common issue, or something that has happened to anyone else before? Is there any way to fix it? Maybe it's the way I'm applying the Euler angles, since they were originally quaternions, but I did not find a way to apply the quaternions directly:
# Take a rotation from the quaternion array and convert it to Euler angles (radians).
arot = anim.AnimRot[index].normal().asEulerRotation()
# The .rotate attribute expects degrees, so convert each component when setting it.
cmds.setAttr(obj + ".rotate", math.degrees(arot.x), math.degrees(arot.y), math.degrees(arot.z))
cmds.setKeyframe(obj, time=anim.TotalKeys[i])
Is there any way to fix this from the editor or from the script? Anything that fixes it would do me a great favor for importing this biped animation. I believe it's due to the Euler conversion, but I found no way to apply a quaternion directly to a bone through the Maya API.
If the rotations already are quaternions, you might want to simply set the anim curves to quaternion interpolation using something like
cmds.rotationInterpolation( 'pSphere2.rotateX', 'pSphere2.rotateY', 'pSphere2.rotateZ', c="quaternionSquad")
To be safe, I'd set one key, then apply rotationInterpolation to convert the keys to quaternions, then step through applying your original quaternions. Since you're already in API land, you can build an MTransformationMatrix and use its setRotationComponents method to set the quaternion values, so you never convert to Eulers at all.
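If you do stay with Eulers, the manual "subtract 180 and flip the sign" fix from the question can also be automated. For an XYZ rotation, (x, y, z) and (x + 180, 180 − y, z + 180) describe the same orientation, so a filter can pick, at each key, whichever equivalent representation (including ±360 shifts) lies closest to the previous key. A minimal Maya-independent sketch in plain Python; the function names are illustrative:

```python
def flip(euler):
    """The alternative Euler triple describing the same orientation."""
    x, y, z = euler
    return (x + 180.0, 180.0 - y, z + 180.0)

def nearest_mod360(value, ref):
    """Shift value by whole turns so it lands as close to ref as possible."""
    return value + 360.0 * round((ref - value) / 360.0)

def euler_filter(keys):
    """For each key, pick the representation (raw or flipped, plus
    +/-360 shifts per axis) closest to the previous filtered key."""
    out = [keys[0]]
    for key in keys[1:]:
        prev = out[-1]
        best = None
        for cand in (key, flip(key)):
            adj = tuple(nearest_mod360(c, p) for c, p in zip(cand, prev))
            dist = sum(abs(a - p) for a, p in zip(adj, prev))
            if best is None or dist < best[0]:
                best = (dist, adj)
        out.append(best[1])
    return out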
I'm trying to build a simulation that will take place in a 1000x1000x1000 space. For each point in space, I need to be able to encode 2 or 3 properties.
I also need to be able to do some basic operations on the space, such as: given a point, finding the properties of that point's 26 adjacent 3D neighbors.
All points will lie on vertices in the 1000x1000x1000 space (i.e. every point is discrete).
I wrote up a version in Python using NumPy and it is much too slow. I've also looked for libraries that would speed it up but couldn't find anything.
Does anyone know of a python library that would provide useful helper functions and would be appropriate for a simulation of this size?
Using NumPy together with the Numba Python compiler for the more intricate algorithms can take you a long way.
Also, I think you are referring to a "stencil" algorithm, and Numba has specific stencil functionality that could help you.
But start with a smaller grid during development!
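As a starting point before reaching for Numba, the 26-neighbour lookup itself vectorises cleanly in plain NumPy. A minimal sketch, assuming the properties live in a 4-D array (three grid dimensions plus one properties axis) and the queried point is interior; the sizes and names are illustrative:

```python
import numpy as np

# Hypothetical small grid for illustration; scale N up to 1000 in practice.
N, P = 8, 3
space = np.zeros((N, N, N, P), dtype=np.float32)  # P properties per point

# The 26 offsets to adjacent cells (the centre itself is excluded).
offsets = np.array([(dx, dy, dz)
                    for dx in (-1, 0, 1)
                    for dy in (-1, 0, 1)
                    for dz in (-1, 0, 1)
                    if (dx, dy, dz) != (0, 0, 0)])

def neighbour_properties(point):
    """Return the (26, P) array of properties of an interior point's neighbours."""
    coords = np.asarray(point) + offsets                    # shape (26, 3)
    return space[coords[:, 0], coords[:, 1], coords[:, 2]]  # fancy indexing
```

For boundary points you would either pad the array by one cell or clip the coordinates first; with padding, the same function works everywhere.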
After some attempts, I managed to get a fairly accurate disparity map of the scene I am filming with my stereo camera, using OpenCV's StereoSGBM function after rectifying and calibrating the camera and computing the disparity. I have also applied a Weighted Least Squares (WLS) filter to the final result, which gave me something much more homogeneous and nice-looking:
However, there is still a depth map "flickering" that needs to be fixed, meaning that static objects change their depth grey value from frame to frame, making the information unreliable. I've read it is a common problem but have not found a way to solve it.
The depth map recalculates depths for each frame, while something temporally consistent is needed. Any idea how to solve this?
I don't know if you found a solution for this, but I'm experiencing a similar problem. What I've understood so far is that this "flickering" is mainly caused by the normalization of the depth values.
In my case I noticed that when there are blobs, the upper and lower depth values span a large range, which leads to a different normalization from frame to frame. How do you normalize the depth map? This can be relevant!
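To illustrate the normalization point: if each frame is scaled by its own min/max, a single new near or far blob shifts every grey value in the frame. Normalizing against a fixed depth range (and optionally running-averaging across frames) keeps static objects stable. A hedged sketch with NumPy; the range values are assumptions you would pick for your scene:

```python
import numpy as np

def normalize_fixed(depth, d_min, d_max):
    """Map depths to 0-255 grey values using a FIXED range rather than
    each frame's own min/max, so a static object keeps the same grey
    value from frame to frame."""
    d = np.clip(depth.astype(np.float64), d_min, d_max)
    return ((d - d_min) / (d_max - d_min) * 255.0).astype(np.uint8)

def temporal_smooth(prev, current, alpha=0.8):
    """Optional exponential running average over frames to damp residual flicker."""
    return alpha * prev + (1.0 - alpha) * current
```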
Another thing that I suggest you to investigate are the parameters of the stereo algorithm. For StereoSGBM you have a lot of parameters to play with, try using a different combination of them.
P.S. = If you found a solution for this, I would be more than happy to know how you figured it out, if you can share the solution I will appreciate it. Mine are just some ideas and starting point that are in my opinion the major causes.
I'm having some trouble understanding seam insertion for image enlarging with seam carving. AFAIK, to enlarge an image by k pixels you remove k seams, recording their coordinates, and then use them to replay the process backwards, i.e. re-add the deleted seams but duplicated, applying some kind of average with neighbouring pixels (I'm not concerned with this, since it should be the easy part). My confusion is about the correctness of the recorded coordinates: they are local to the image from which each seam was removed, so 'restoring' the first seam invalidates the recorded coordinates of every other seam. Am I supposed to correct these coordinates by checking whether every pixel coordinate of each seam still to be added comes after the previously added seams? This seems rather cumbersome and highly inefficient, given that I've read inserting seams should be trivial once seam removal works (it does).
I'm not sure if I'm communicating my doubt properly. Let me know if that's not the case, although I tried to be as clear as possible.
As pointed out in the comments, you have to fix the indices when inserting anyway, even if you can avoid "fixing" them during the removal part.
You can find a full implementation of seam carving and seam insertion in python here.
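The index fixing is mechanical rather than expensive. Each recorded seam is local to the already-shrunk image it was removed from, so when replaying the seams in removal order, every later seam's columns must shift right by 2 wherever they fall at or after an earlier seam (one for the restored pixel, one for its duplicate; this is the convention used in common seam-carving implementations). A small sketch, with each seam stored as one column index per row:

```python
def insertion_columns(removed_seams):
    """Convert seam columns recorded during removal (each local to the
    image it was removed from) into columns valid in the enlarged image.
    removed_seams: list of seams, each a list of column indices per row,
    given in the order the seams were removed."""
    seams = [list(s) for s in removed_seams]
    result = []
    for i, seam in enumerate(seams):
        result.append(list(seam))
        # Later seams were recorded after this one was gone, so shift
        # their columns right where they fall at/after this seam; +2
        # accounts for the restored pixel plus its inserted duplicate.
        for later in seams[i + 1:]:
            for r in range(len(later)):
                if later[r] >= seam[r]:
                    later[r] += 2
    return result
```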
I'm trying to make a (sort-of) clone of Asteroids in Python using Pyglet. I figured I'd try to get a little fancy and implement the separating axis theorem to do collision. I got it to work, but the problem is that it's miserably slow. I test collision between bullets that the player shoots and the asteroids on the screen in a double for-loop, which I believe is quadratic time, but the frame rate drops from about 60 to 30 fps by the time there's about 6 asteroids and 6 bullets on the screen, which seems incredibly slow, even for a non-optimized way of detecting collision.
So I ran a profiler to determine where, exactly, in the code the program is getting hung up. It seems to be hung up in the method where I transform shape vertices into world space (I define the shapes around the origin and use OpenGL code to transform to world space for drawing, which I believe is the right way to do it). I grab the transformation matrix from OpenGL, turn it into a NumPy array, and then multiply each vertex by this matrix to get the transformed vertices. It's worth noting that I do this every collision check: I used to use XNA, and when I implemented the SAT in that (I made an asteroids clone there, too), the vertices were also defined around the origin and then you had to transform them using a world matrix.
Is it best to store the vertices around (0, 0) and transform them on each call, or just to store the transformed vertices? I feel like the algorithm shouldn't be THIS slow, so I'm willing to bet I screwed up implementing something. If I were better at profiling (I'm pretty unfamiliar with it) I might be able to get a more complete picture, but I was hoping you might have some ideas.
Here's a direct link to the file with the Shape class in it, where all the collision logic happens: shape.py. The specific method that the profiler seemed to mark as the bottleneck was __get_transformed_verts. Obviously you can get to the entire repo from there too, but just be aware that there's still a good deal not commented.
As Nico suggests in the comments, a quick way to get a good speed-up is to check simpler geometry first. For an Asteroids clone I'd guess a circle is a good fit (or a sphere for 3D). If the circles (sized just large enough to cover your actual shapes) don't overlap, there is no need to run the more expensive geometry test.
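The broad-phase circle test is only a few operations per pair; comparing squared distances avoids the square root entirely. A minimal sketch (names are illustrative):

```python
def circles_overlap(center_a, radius_a, center_b, radius_b):
    """Cheap broad-phase check: compare the squared centre distance
    with the squared sum of radii so no sqrt is needed."""
    dx = center_a[0] - center_b[0]
    dy = center_a[1] - center_b[1]
    r = radius_a + radius_b
    return dx * dx + dy * dy <= r * r
```

Only when this returns True do you fall through to the full SAT test.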
If you have many objects, you will probably want to avoid doing n*n tests every frame. Take a look at space partitioning structures/algorithms. The simplest scheme with a lot of moving objects in 2D would be a grid. Then you only need to test objects belonging to the same - or neighbouring - grid cells for collision.
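A uniform grid can be rebuilt from scratch every frame and still be cheap: hash each object into the cell containing its centre, then only test pairs from the same or adjacent cells. A hedged sketch (it assumes cell_size is at least as large as your biggest object, so neighbours only need to be checked one cell away; the names are illustrative):

```python
from collections import defaultdict

def build_grid(objects, cell_size):
    """Hash each object into the grid cell containing its centre.
    objects: iterable of (id, (x, y)) pairs."""
    grid = defaultdict(list)
    for obj_id, (x, y) in objects:
        grid[(int(x // cell_size), int(y // cell_size))].append(obj_id)
    return grid

def candidate_pairs(grid):
    """Yield each pair of objects from the same or neighbouring cells once."""
    seen = set()
    for (cx, cy), ids in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for other in grid.get((cx + dx, cy + dy), []):
                    for obj_id in ids:
                        if obj_id != other:
                            pair = tuple(sorted((obj_id, other)))
                            if pair not in seen:
                                seen.add(pair)
                                yield pair
```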
Another thing I noticed: you generate the transformed vertices every time you test for collision. It would be quicker to generate them at most once per timestep (frame), and only for objects whose bounding circles actually overlap and therefore need the full SAT test.
I have a selected mesh (creatively called 'selected') which I want to work with in its local space so I use the following:
tmesh = NMesh.GetRawFromObject(selected.name)
tmesh.transform(selected.matrix)
I then do a bunch of calculations on its vertices and remove them by index, which works great.
What doesn't work so great is when I try to add them back later from a list of vert.co objects. They end up positioned correctly relative to each other, but nowhere near the original vertices.
toAdd = []
for v in tmesh.verts:
    toAdd.append(v.co)
mesh = selected.getData(mesh=1)
mesh.verts.extend(toAdd)
I realized this is because the object is in a different space. So I think what I need to do is transform the new vertices back into the original space... but my understanding of what exactly is going on here is fuzzy at best. How do I "undo" the tmesh.transform(selected.matrix) command?
A more thorough explanation of what I'm doing can be found here: http://www.kellbot.com/2010/11/lego-plans-now-with-better-rendering/
I'm not familiar with Blender specifically, but after a quick peek at the documentation, in principle you ought to be able to use the invert() Matrix method to obtain the inverse transformation of selected.matrix (perhaps after first making a copy() of it) and apply that to the new vertices.
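Here's the idea written out with NumPy instead of Blender's Matrix type (the matrix below is a made-up example, not your object's actual matrix): apply the matrix to go into world space, and apply its inverse to get back to local space.

```python
import numpy as np

# A hypothetical object matrix: rotate 90 degrees about Z, then translate.
M = np.array([[0.0, -1.0, 0.0, 2.0],
              [1.0,  0.0, 0.0, 3.0],
              [0.0,  0.0, 1.0, 0.0],
              [0.0,  0.0, 0.0, 1.0]])

def apply(matrix, points):
    """Apply a 4x4 homogeneous matrix to an (n, 3) array of points."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ matrix.T)[:, :3]

local = np.array([[1.0, 0.0, 0.0]])
world = apply(M, local)                 # into world space
back = apply(np.linalg.inv(M), world)   # inverse transform recovers local
```

In Blender terms: transform the new vertex coordinates by the inverse of selected.matrix before extending the mesh, and that "undoes" the earlier tmesh.transform(selected.matrix).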