Extend/wrap an object ad-hoc with more functionality - python

The following question addresses a problem I often encounter. Basically, there are solutions like the adapter pattern, but I find them a bit unsatisfying.
Suppose I have a class Polygon which implements an - uhm - polygon, with quite a bit of functionality. Many of these Polygons live in my program, some as lonely variables, some in collection structures.
Now, there's a function that needs an argument type that is basically a Polygon, but with some additional features. Let's say, a Polygon that can return some metrics: its volume, center of gravity, and angular mass. Plus, the function also needs the methods of the original Polygon.
A first idea is:
class Polygon:
    # defines my polygon

class PolygonWithMetrics(Polygon):
    # - extends Polygon with the metrics
    # - takes a Polygon as argument upon construction
    # - would need to delegate many functions to Polygon

def functionUsingPolygonWithMetrics(p):
    # uses functions of Polygon and PolygonWithMetrics

# driving code:
p = Polygon(some args here)
... more code ...
p_with_metrics = PolygonWithMetrics(p)  # Bummer - problem here...
functionUsingPolygonWithMetrics(p_with_metrics)
The problem: it would require delegating many, many functions from PolygonWithMetrics to the original Polygon.
A second idea is:
class Polygon:
    # defines my polygon

class PolygonMetrics:
    # takes a Polygon and provides metrics methods on it

def functionUsingPolygonWithMetrics(p):
    # uses functions of Polygon and PolygonMetrics

# driving code:
p = Polygon(some args here)
... more code ...
p_with_metrics = PolygonMetrics(p)
functionUsingPolygonWithMetrics(p, p_with_metrics)  # Bummer - problem here...
This idea passes the original Polygon as an argument, plus a second object that provides the metrics functions. The problem is that I would need to change the signature of functionUsingPolygonWithMetrics.
What I really need is a way to extend an existing object ad hoc with some more functionality, without the problems of ideas 1 and 2.
I could imagine an idea roughly like this, where the job is mostly done by PolygonWithMetrics:
class Polygon:
    # defines my polygon

class PolygonWithMetrics(maybe inherits something):
    # - takes a Polygon and provides metrics methods on it
    # - upon construction, it will take a polygon
    # - will expose the full functionality of Polygon automatically

def functionUsingPolygonWithMetrics(p):
    # uses functions of Polygon and PolygonWithMetrics

# driving code:
p = Polygon(some args here)
... more code ...
p_with_metrics = PolygonWithMetrics(p)
functionUsingPolygonWithMetrics(p)
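For illustration only, here is a minimal sketch of how such automatic delegation could look in Python, using __getattr__ to forward unknown attribute lookups to the wrapped Polygon; the concrete methods shown are hypothetical placeholders, not part of the question:

class Polygon:
    def __init__(self, points):
        self.points = points

    def num_vertices(self):
        return len(self.points)

class PolygonWithMetrics:
    def __init__(self, polygon):
        self._polygon = polygon

    def __getattr__(self, name):
        # called only when normal lookup fails, so methods defined here
        # take precedence; everything else is forwarded to the Polygon
        return getattr(self._polygon, name)

    def center_of_gravity(self):
        # hypothetical metric computed from the wrapped polygon
        xs = [p[0] for p in self._polygon.points]
        ys = [p[1] for p in self._polygon.points]
        n = len(self._polygon.points)
        return (sum(xs) / n, sum(ys) / n)

p = Polygon([(0, 0), (10, 0), (5, 5)])
pm = PolygonWithMetrics(p)
print(pm.num_vertices())       # delegated to Polygon
print(pm.center_of_gravity())  # provided by the wrapper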
Three questions arise:
Does this pattern have sort of a name?
Is it a good idea, or should I resort to some more advisable techniques?
How to do it in Python?


How to decompose a matrix in Python for Maya?

I'm trying to write a script that can transfer translate and rotate values from child to parent and vice versa without having to resort to parent constraints. I've spent the last few days investigating this using matrix information, but I'm a beginner with Python and matrices.
So far, I've been able to find the matrices I want and apply the correct calculations to them, but I can't convert those matrices back to regular Maya values. Essentially, I'm trying to replicate whatever it is the "decomposeMatrix" node does in Maya.
After a lot of research and testing, I'm pretty sure the MTransformationMatrix class from OpenMaya is what I need, but I can't make it work: I don't know how to write/use it, what parameters it takes, etc.
If I am on the right track, what do I need to finish the work? If I'm tackling something bigger that requires a lot more coding than this, I'd be interested to understand more on that too.
Here's the code:
import maya.cmds as mc
import maya.OpenMaya as OpenMaya

def myM():
    tlm = MMatrix(cmds.getAttr('TRAJ.matrix'))
    pwm = MMatrix(cmds.getAttr('TRAJ.worldMatrix'))
    pim = MMatrix(cmds.getAttr('TRAJ.parentInverseMatrix'))
    prod = tlm * pwm * pim
    return prod
    tMat = OpenMaya.MTransformationMatrix(prod)
    print tMat

myM()
Edit
I misused return on the code above. Re-testing the code today (and also when implementing Klaudikus' suggestions) I get the following error:
Error: TypeError: file /home/Maya_2020_DI/build/RelWithDebInfo/runTime/lib/python2.7/site-packages/maya/OpenMaya.py line 7945: in method 'new_MMatrix', argument 1 of type 'float const [4][4]' #
I'm assuming you understand what the script is doing, but I'll recap just in case:
1. Getting the local matrix values of TRAJ and creating a new MMatrix object from those values.
2. Getting the world matrix values of TRAJ and creating a new MMatrix object from those values.
3. Getting the parent inverse matrix of TRAJ and creating a new MMatrix object from those values.
4. Multiplying all three of the above matrices and storing the resulting MMatrix in prod.
5. Returning prod.
As stated by halfer, the rest of the function is ignored because return will exit the function. Here's something that should work.
# here you typed:
# import maya.cmds as mc
# but further below used the cmds.getAttr...
# I decided to change the import statement instead
import maya.cmds as cmds
# MMatrix is a class within OpenMaya
# when using it, you must reference the module first, then the class/object, like so:
# my_matrix = OpenMaya.MMatrix()
import maya.OpenMaya as OpenMaya

def myM():
    tlm = OpenMaya.MMatrix(cmds.getAttr('TRAJ.matrix'))
    pwm = OpenMaya.MMatrix(cmds.getAttr('TRAJ.worldMatrix'))
    pim = OpenMaya.MMatrix(cmds.getAttr('TRAJ.parentInverseMatrix'))
    prod = tlm * pwm * pim
    # check the documentation at:
    # https://help.autodesk.com/view/MAYAUL/2020/ENU/?guid=__py_ref_class_open_maya_1_1_m_transformation_matrix_html
    # here I'm "creating" an MTransformationMatrix with the MMatrix stored in prod
    tMat = OpenMaya.MTransformationMatrix(prod)
    # this is how you get the data from this object
    # translation is an MVector object
    translation = tMat.translation(OpenMaya.MSpace.kObject)
    # eulerRotation is an MEulerRotation object
    eulerRotation = tMat.rotation(asQuaternion=False)
    # typically you'll return these values
    return translation, eulerRotation

TRAJ_trans, TRAJ_rot = myM()
But this is not really useful. First of all, the function myM should really take arguments so you can reuse it and get the translation and rotation of any object in the scene, not just TRAJ. Second, the math you are performing doesn't make sense to me. I'm not a mathematician, but here you're multiplying the local matrix by the world matrix by the parent's inverse local matrix. I honestly wouldn't be able to tell you what this does, but as far as I know, it's most likely useless. I won't go into detail on matrices, as it's a beast of a subject and you seem to be on that path anyway, but in a nutshell:
obj_local_matrix = obj_world_matrix * inverse_parent_world_matrix
obj_world_matrix = obj_local_matrix * parent_world_matrix
Another way of writing this is:
obj_relative_to_other_matrix = obj_world_matrix * inverse_other_world_matrix
obj_world_matrix = obj_relative_to_other_matrix * other_world_matrix
This is essentially the math that dictates the relationship between parented nodes. However, you can get the transform values of one node relative to any other node, irrespective of its parenting, as long as you have their world matrices. Note that order is important here, as matrix multiplication is not commutative: A * B != B * A.
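As a quick hedged check of those identities in a Maya scene (the node names 'child' and 'parent' are hypothetical, and maya.api.OpenMaya is the API 2.0 module also used in the last snippet below):

import maya.cmds as cmds
from maya.api.OpenMaya import MMatrix

# hypothetical scene: a transform 'child' parented under 'parent'
local = MMatrix(cmds.getAttr('child.matrix'))
world = MMatrix(cmds.getAttr('child.worldMatrix'))
parent_world = MMatrix(cmds.getAttr('parent.worldMatrix'))

# obj_world_matrix == obj_local_matrix * parent_world_matrix
print(world.isEquivalent(local * parent_world))            # expected: True
# obj_local_matrix == obj_world_matrix * inverse_parent_world_matrix
print(local.isEquivalent(world * parent_world.inverse()))  # expected: True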
What I gather from your question is that you're trying to get the relative transforms of an object, to that of another object in a scene. The following function returns the position, rotation, and scale of an object relative to that of another object.
def relative_trs(obj_name, other_obj_name):
    """
    Returns the position, rotation and scale of one object relative to another.
    :param obj_name:
    :param other_obj_name:
    :return:
    """
    obj_world_matrix = OpenMaya.MMatrix(cmds.getAttr('{}.worldMatrix'.format(obj_name)))
    other_obj_world_matrix = OpenMaya.MMatrix(cmds.getAttr('{}.worldMatrix'.format(other_obj_name)))
    # multiplying the world matrix of one object by the inverse of the world matrix
    # of another gives you the matrix of the first object relative to the other
    matrix_product = obj_world_matrix * other_obj_world_matrix.inverse()
    trans_matrix = OpenMaya.MTransformationMatrix(matrix_product)
    translation = trans_matrix.translation(OpenMaya.MSpace.kObject)
    euler_rotation = trans_matrix.rotation(asQuaternion=False)
    scale = trans_matrix.scale(OpenMaya.MSpace.kObject)
    return list(translation), list(euler_rotation), list(scale)
You can test the function by creating two hierarchies, and run the function between a leaf node of one hierarchy and any node in the other hierarchy. Store the results. Then re-parent the leaf node object to the one in the other hierarchy and check its local coordinates. They should match your stored results. This is essentially how a parent constraint works.
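A hedged sketch of that test (node names are hypothetical):

# hypothetical: 'leaf' is a leaf node of one hierarchy, 'target' a node in another
stored_t, stored_r, stored_s = relative_trs('leaf', 'target')
cmds.parent('leaf', 'target')
# after re-parenting, the local transform values should match the stored results
print(cmds.getAttr('leaf.translate')[0])  # compare with stored_t
print(cmds.getAttr('leaf.rotate')[0])     # compare with stored_r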
Thanks to Klaudikus and a bunch of perseverance, I managed to make it work. It may look a bit wonky, but that's good enough for me! It does need some user-friendly optimisation of course (that's my next step), but the core idea works. It potentially makes exchanging object locations between child and parent way faster than snapping or parent constraining.
import maya.cmds as cmds
from maya.api.OpenMaya import MMatrix
import pymel.core as pm

def myM():
    tlmList = cmds.getAttr('TRAJ.matrix')
    tlm = MMatrix(tlmList)
    pwmList = cmds.getAttr('PARENT1.worldMatrix')
    pwm = MMatrix(pwmList)
    pimList = cmds.getAttr('PARENT0.worldInverseMatrix')
    pim = MMatrix(pimList)
    prod = tlm * pwm * pim
    # pass all 16 entries of the resulting matrix to xform
    pm.xform('PARENT1', m=[prod[i] for i in range(16)])

myM()

Best practices for unit testing in Python - multiple functions apply to same object

I have a bunch of functions which all apply to the same kind of object, for example a NumPy array which represents an n-dimensional box:
import numpy as np

# 3-D box parameterized as:
# box[0] = 3-D min coordinate
# box[1] = 3-D max coordinate
box = np.array([
    [1, 3, 0],
    [4, 5, 7]
])
Now I have a whole bunch of functions that I want to run on lists of boxes, e.g. volume, intersection, smallest_containing_box, etc. In my mind, here is the way I was hoping to set this up:
# list of test functions:
test_funcs = [volume, intersection, smallest_containing_box, ...]

# manually create a bunch of inputs and outputs
test_set_1 = (
    input = [boxA, boxB, ...],  # where each of these is an np.array
    output = [
        [volA, volB, ...],        # floats I calculated manually
        intersection,             # np.array representing the correct intersection
        smallest_containing_box,  # etc.
    ]
)

# create a bunch of these, e.g. test_set_2, test_set_3, etc., and bundle them in a list:
test_sets = [test_set_1, ...]

# now run the set of tests over each of these:
test_results = [[assertEqual(test(t.input), t.output) for test in test_funcs]
                for t in test_sets]
The reason I want to structure it this way is so that I can create multiple sets of (input, answer) pairs and just run all the tests over each. Unless I'm missing something, the structure of unittest doesn't seem to work well with this approach. Instead, it seems like it wants me to create an individual TestCase object for each pair of function and input, i.e.
class TestCase1(unittest.TestCase):
    def setUp(self):
        self.input = [...]
        self.volume = [volA, volB, ...]
        self.intersection = ...
        # etc.

    def test_volume(self):
        self.assertEqual(volume(self.input), self.volume)

    def test_intersection(self):
        self.assertEqual(intersection(self.input), self.intersection)

    # etc.

# Repeat this for every test case!?
This seems like a crazy amount of boilerplate. Am I missing something?
Let me try to describe how I understand your approach: you have implemented a number of different functions that have a similarity, namely that they operate on the same types of input data. In your tests you try to make use of that similarity: you create some input data and pass that input data to all of your functions.
This test-data centric approach is unusual. The typical unit-testing approach is code-centric. The reason is, that one primary goal of unit-testing is to find bugs in the code. Different functions have (obviously) different code, and therefore the types of bugs may be different. Thus, test data is typically carefully designed to identify certain kinds of bugs in the respective code. Test design methods are approaches that methodically design test cases such that ideally all likely bugs would be detected.
I am sceptical that with your test-data-centric approach you will be equally successful in finding the bugs in your different functions: for the volume function there may be overflow scenarios (and also underflow scenarios) that don't apply to intersection or smallest_containing_box. In contrast, there will have to be empty intersections, one-point intersections, etc. Thus it seems that each of the functions probably needs specifically designed test scenarios.
Regarding the boilerplate code that seems to be the consequence of code-centric unit-testing: there are several ways to limit it. You would, agreed, have different test methods for different functions under test. But then you could use parameterized tests to avoid further code duplication (see the sketch below). And, for the case that you still see an advantage in using (at least sometimes) common test data for the different functions: you can use factory functions that create the test data and can be called from the different test cases. For example, you could have a factory function make_unit_cube to be used from different tests.
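As an illustration, a minimal sketch of a parameterized test using unittest's subTest; the volume implementation and the box data are hypothetical stand-ins:

import unittest
import numpy as np

def volume(box):
    # hypothetical implementation: product of the box's side lengths
    return float(np.prod(box[1] - box[0]))

class TestVolume(unittest.TestCase):
    def test_volume(self):
        cases = [
            (np.array([[0, 0, 0], [1, 1, 1]]), 1.0),
            (np.array([[1, 3, 0], [4, 5, 7]]), 42.0),
        ]
        for box, expected in cases:
            # each subTest is reported separately on failure
            with self.subTest(box=box.tolist()):
                self.assertEqual(volume(box), expected)

if __name__ == "__main__":
    unittest.main()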
Try unittest.TestSuite(). That gives you an object to which you can add test cases. In your case, create the suite, then loop over your lists, creating instances of TestCase which all have only a single test method. Pass the test data to the constructor and save it in properties there, instead of in setUp().
The unittest runner will detect the suite if you create it in a method called suite(), and will run all of the tests.
Note: assign a name to each TestCase instance, or finding out which one failed will be very hard.
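A rough, self-contained sketch of that suggestion (the volume function and the test data are hypothetical):

import unittest

def volume(box):
    # hypothetical function under test
    (x0, y0, z0), (x1, y1, z1) = box
    return (x1 - x0) * (y1 - y0) * (z1 - z0)

def make_case(func, test_input, expected, name):
    """Build a single-test TestCase that carries its data via the constructor."""
    class _Case(unittest.TestCase):
        def runTest(self):
            self.assertEqual(func(test_input), expected)
    _Case.__name__ = name  # name each case so failures are traceable
    return _Case()

def suite():
    cases = [
        (volume, [(0, 0, 0), (1, 1, 1)], 1),
        (volume, [(1, 3, 0), (4, 5, 7)], 42),
    ]
    s = unittest.TestSuite()
    for i, (func, test_input, expected) in enumerate(cases):
        s.addTest(make_case(func, test_input, expected, 'case_%d' % i))
    return s

if __name__ == '__main__':
    unittest.TextTestRunner().run(suite())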

Generating random numbers for a probability density function in Python

I'm currently working on a project relating to Brownian motion, and trying to simulate some of it using Python (a language I'm admittedly very new at). Currently, my goal is to generate random numbers following a given probability density function. I've been trying to use the scipy library for it.
My current code looks like this:
>>> import scipy.stats as st
>>> class my_pdf(st.rv_continuous):
...     def _pdf(self, x, y):
...         return (1/math.sqrt(4*t*D*math.pi))*(math.exp(-((x^2)/(4*D*t))))*(1/math.sqrt(4*t*D*math.pi))*(math.exp(-((y^2)/(4*D*t))))
...
>>> def get_brown(a, b):
...     D, t = a, b
...     return my_pdf()
...
>>> get_brown(1, 1)
<__main__.my_pdf object at 0x000000A66400A320>
All attempts at launching the get_brown function end up giving me these hexadecimals (always starting at 0x000000A66400A with only the last three digits changing, no matter what parameters I give for D and t). I'm not sure how to interpret that. All I want is to get random numbers following the given PDF; what do these hexadecimals mean?
The result you see is the memory address of the object you have created. Now you might ask: which object? Your function get_brown(int, int) calls return my_pdf(), which creates an object of the class my_pdf and returns it. If you want to access the _pdf function of your class and calculate the value of the pdf, you can use this code:
get_brown(1,1)._pdf(x, y)
On the object you have just created you can also use all the methods of the scipy.stats.rv_continuous class, which are listed in the SciPy documentation.
For your situation you could also discard your current code and just use the normal distribution included in scipy as Brownian motion is mainly a Normal random process.
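For example, a minimal sketch of that shortcut, assuming the 2-D Brownian displacement PDF above, whose coordinates are independent normals with variance 2*D*t:

import numpy as np
from scipy.stats import norm

D, t = 1.0, 1.0
sigma = np.sqrt(2 * D * t)  # standard deviation of each coordinate
# draw 1000 (x, y) displacement samples
xy = norm.rvs(loc=0, scale=sigma, size=(1000, 2))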
As noted, this is a memory location. Your function get_brown gets an instance of the my_pdf class, but doesn't evaluate the method inside that class.
What you probably want to do is call the _pdf method on that instance, rather than return the class itself.
def get_brown(a, b):
    D, t = a, b  # what is D,t for?
    return my_pdf()._pdf(a, b)
I expect that the code you've posted is a simplification of what you're really doing, but functions don't need to be inside classes - so the _pdf function could live on its own. Alternatively, you don't need the get_brown function - just instantiate the my_pdf class and call the calculation method.

How to use igraph python's metamagic class?

The Python interface of igraph has a module called metamagic, serving the purpose of collecting graphical parameters for plotting. I am writing a module using igraph, and I had almost started to write my own wrapper functions for this purpose when I found metamagic in the documentation. But after searching and trying, it's still not clear to me how to use these classes. If I define an AttributeCollectorBase class for edges, like this:
class VisEdge(igraph.drawing.metamagic.AttributeCollectorBase):
    width = 0.002
    color = "#CCCCCC44"
Then, is there an easy way to pass all these parameters to the igraph.plot() function? Or can I only do it one by one, like this: plot(graph, edge_color=VisEdge(graph.es).color)?
And what if I would like to use not constant parameters, but values calculated by a custom function? For example, vertex_size proportional to degree. The func parameter of the AttributeSpecification class is supposed to do this, isn't it? But I haven't seen any example of how to use it. If I define an AttributeSpecification instance, like this:
ds = igraph.drawing.metamagic.AttributeSpecification(name="vertex_size", alt_name="size", default=2, func='degree')
how do I then pass it to an AttributeCollector, and finally to plot()?
(To put things in context: I am the author of the Python interface of igraph).
I'm not sure whether the metamagic package is the right tool for you. The only purpose of the AttributeCollectorBase class is to allow the vertex and edge drawers in igraph (see the igraph.drawing.vertex and igraph.drawing.edge packages) to define what vertex and edge attributes they are able to treat as visual properties in a nice and concise manner (without me having to type too much). So, for instance, if you take a look at the DefaultVertexDrawer class in igraph.drawing.vertex, you can see that I construct a VisualVertexBuilder class by deriving it from AttributeCollectorBase as follows:
class VisualVertexBuilder(AttributeCollectorBase):
    """Collects some visual properties of a vertex for drawing"""
    _kwds_prefix = "vertex_"
    color = ("red", self.palette.get)
    frame_color = ("black", self.palette.get)
    frame_width = 1.0
    ...
Later on, when the DefaultVertexDrawer is being used in DefaultGraphDrawer, I simply construct a VisualVertexBuilder as follows:
vertex_builder = vertex_drawer.VisualVertexBuilder(graph.vs, kwds)
where graph.vs is the vertex sequence of the graph (so the vertex builder can get access to the vertex attributes) and kwds is the set of keyword arguments passed to plot(). The vertex_builder variable then allows me to retrieve the calculated, effective visual properties of vertex i by writing something like vertex_builder[i].color; here, it is the responsibility of the VisualVertexBuilder to determine the effective color by looking at the vertex and checking its color attribute as well as looking at the keyword arguments and checking whether it contains vertex_color.
The bottom line is that the AttributeCollectorBase class is likely to be useful to you only if you are implementing a custom graph, vertex or edge drawer and you want to specify which vertex attributes you wish to treat as visual properties. If you only want to plot a graph and derive the visual properties of that particular graph from some other data, then AttributeCollectorBase is of no use to you. For instance, if you want the size of the vertex be proportional to the degree, the preferred way to do it is either this:
sizes = rescale(graph.degree(), out_range=(0, 10))
plot(graph, vertex_size=sizes)
or this:
graph.vs["size"] = rescale(graph.degree(), out_range=(0, 10))
plot(g)
If you have many visual properties, the best way is probably to collect them into a dictionary first and then pass that dictionary to plot(); e.g.:
visual_props = dict(
    vertex_size=rescale(graph.degree(), out_range=(0, 10)),
    edge_width=rescale(graph.es["weight"], out_range=(0, 5), scale=log10)
)
plot(graph, **visual_props)
Take a look at the documentation of the rescale function for more details. If you want to map some vertex property into the color of the vertex, you can still use rescale to map the property into the range 0-255, then round them to the nearest integer and use a palette when plotting:
palette = palettes["red-yellow-green"]
colors = [round(x) for x in rescale(graph.degree(), out_range=(0, len(palette) - 1))]
plot(graph, vertex_color=colors, palette=palette)

OOP Programming style decision

I just have a quick question about an OOP design decision I've been having difficulty making. The premise is that I'm making a set of very simple geometric classes, such as vertex and angle and vector objects, but one of the classes, the line class to be specific, is a little different. It's basically just a collection of methods that I use one time only; I never actually save a line object for later use or recollection of data anywhere else in the program. An example usage to demonstrate my point would be this:
class Line:
    def __init__(self, vertex1, vertex2):
        self.start = vertex1
        self.end = vertex2

    def to_the_left(self, vertex):
        """Check to see if a vertex is to the left of the line segment."""
        # code stuff

data = Line(Vertex(0, 0), Vertex(10, 0)).to_the_left(Vertex(5, 5))
I only ever instantiate Line(Vertex(0, 0), Vertex(10, 0)) once to retrieve the data. So I was thinking that I might as well just have a bunch of functions available instead of packing it all into a class, but then I was skeptical about doing that since there are a ton of methods that would have to be converted to functions.
Another thing I was thinking of doing was to make a Line class and then convert all it's methods into normal functions like so:
#continuing from the code above
def to_the_left(line_start, line_end, vertex):
return Line(line_start, line_end).to_the_left(vertex)
data = to_the_left(Vertex(0, 0), Vertex(10, 0), Vertex(5, 5))
Which method do you think I should use?
I would opt for using an object, as you might need to do multiple operations on a Line.
For example, you might compute the length, check if something is to the left, and some other operation. You might need to pass the Line around, who knows.
One thing you might want to consider is, instead of using Line and Vertex, to use a Vector which acts as both. If your vertex has x, y you can make a Vector that has x, y, w.
In this scheme w=1 for vertices (points) and w=0 for directions (such as the direction of a Line) - it would simplify a lot of code.
Look up homogeneous coordinates to learn more; a small sketch follows.
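A minimal sketch of that idea, with assumed field names (not from the question):

from dataclasses import dataclass

@dataclass
class Vector:
    x: float
    y: float
    w: float  # w=1: a point (vertex); w=0: a direction

    def __sub__(self, other):
        # point - point yields a direction (w becomes 0)
        return Vector(self.x - other.x, self.y - other.y, self.w - other.w)

def to_the_left(start, end, vertex):
    """Positive 2-D cross product means the vertex lies left of start->end."""
    d = end - start       # direction of the segment, w == 0
    v = vertex - start    # offset of the queried vertex, w == 0
    return d.x * v.y - d.y * v.x > 0

print(to_the_left(Vector(0, 0, 1), Vector(10, 0, 1), Vector(5, 5, 1)))  # True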
