In Maya, I want to make an OpenMaya MSelectionList (API version 2.0) with more than one item in it... I've only been able to fill it with the add method, like so:
import maya.api.OpenMaya as om
selList = om.MSelectionList()
selList.add('node1')
selList.add('node2')
selList.add('node3')
That's ok for populating it with just a few items, but it's tedious if you have more... I'm wondering if there's a way to do something more like this:
import maya.api.OpenMaya as om
selList = om.MSelectionList(['node1', 'node2', 'node3'])
I could write my own function to create an empty MSelectionList, loop through a list, add each item, and return it; I'm just wondering if I have completely overlooked something obvious. From what I can tell from the docs, you can only create an empty MSelectionList, or create one by passing in another MSelectionList (to basically duplicate it).
If this can't really be done inherently in the class, does anyone have an idea why it was implemented this way?
The MSelectionList is ultimately a wrapper around a C++ list of object pointers (the Maya API is unusual in that it uses different function sets to work on different aspects of an object, rather than the more familiar use of a classic inheritance tree).
Implementing variadic functions in C++ is not trivial (especially not back in the 90s, when the Maya API was designed). I suspect nobody felt it was worth the time for what's essentially syntactic sugar. So it's either:
sl = om.MSelectionList()
for node in nodelist:
    sl.add(node)
or
sl = om.MSelectionList()
[sl.add(node) for node in nodelist]
although I would not recommend the shorter version, which produces a list of meaningless Nones as a side effect.
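If you do want the one-call construction from the question, wrapping that loop in a tiny helper is enough. A minimal sketch (the make_selection_list name is my own, not part of the API):

import maya.api.OpenMaya as om

def make_selection_list(names):
    # Build an MSelectionList from any iterable of node names.
    sel = om.MSelectionList()
    for name in names:
        sel.add(name)
    return sel

selList = make_selection_list(['node1', 'node2', 'node3'])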
Why is everything in Python an object? According to what I read, everything in Python, including functions, is an object. It's not the same in other languages. So what prompted this shift of approach, to treat everything, even functions, as objects?
The power of everything being an object is that you can define behavior for each object. For example, a function being an object gives you an easy way to access the docs of the function for introspection:
print( function.__doc__ )
The alternative would be to provide a library of functions that take a function and return its interesting properties:
import function_lib
print( function_lib.get_doc( function ) )
Making int, str, etc. classes means that you can extend those provided types in interesting ways for your problem domain.
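For instance, a minimal sketch of extending str for a problem domain (the Card class is my own illustration, not a standard type):

class Card(str):
    # A string that also knows its rank and suit, e.g. Card('As').
    @property
    def rank(self):
        return self[0]

    @property
    def suit(self):
        return self[1]

print(Card('As').rank)  # A
print(Card('As').suit)  # s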
In my opinion, 'everything is an object' is great in Python. In this language, you don't react to what objects you have to handle, but to how they can interact. A function is just an object that you can __call__; a list is just an object that you can __iter__. Why should we divide data into non-overlapping groups? An object can behave like a function when we call it, but also like an array when we index it.
This means that you don't think of your "function" as "I want an array of integers and I return the sum of it" but more as "I will try to iterate over the thing that someone gave me and try to add the items together; if something goes wrong, I will tell the caller by raising an error, and they will have to modify their behavior".
The most interesting example is __add__. When you try something like Object1 + Object2, Python will ask (nicely ^^) Object1 to add itself to Object2 (Object1.__add__(Object2)). There are two scenarios here: either Object1 knows how to add itself to Object2 and everything is fine, or it returns NotImplemented and Python will ask Object2 to __radd__ itself to Object1. Just with this mechanism, you can teach your objects to add themselves to any other object, you can manage commutativity, ...
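A minimal sketch of that fallback mechanism, using two made-up unit classes of my own:

class Meters(object):
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        # Only knows how to add another Meters; otherwise defer to the
        # right-hand operand by returning NotImplemented.
        if isinstance(other, Meters):
            return Meters(self.value + other.value)
        return NotImplemented

class Feet(object):
    def __init__(self, value):
        self.value = value

    def __radd__(self, other):
        # Called when the left operand's __add__ returned NotImplemented.
        if isinstance(other, Meters):
            return Meters(other.value + self.value * 0.3048)
        return NotImplemented

print((Meters(1) + Feet(3)).value)  # Meters.__add__ defers, Feet.__radd__ answers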
why is everything in Python an object?
Python (unlike many other languages) is a truly object-oriented language (aka OOP).
When everything is an object, it becomes easier to search, manipulate, or access things (but everything comes at the cost of speed).
what prompted this shift of approach, to treat everything, even functions, as objects?
"Necessity is the mother of invention"
USAGE CONTEXT ADDED AT END
I often want to operate on an abstract object like a list. e.g.
def list_ish(thing):
    for i in xrange(0, len(thing)):
        print thing[i]
Now this is appropriate if thing is a list, but it will fail if thing is a dict, for example. What is the pythonic way to ask "do you behave like a list?"
NOTE:
hasattr(thing, '__getitem__') and not hasattr(thing, 'keys')
this will work for all cases I can think of, but I don't like defining a duck type negatively, as I expect there could be cases that it does not catch.
Really, what I want is to ask:
"hey, do you operate on integer indices in the way I expect a list to do?" e.g.
thing[i], thing[4:7] = [...], etc.
NOTE: I do not want to simply execute my operations inside of a large try/except, since they are destructive. It is not cool to try and fail here...
USAGE CONTEXT
-- A "point-lists" is a list-like-thing that contains dict-like-things as its elements.
-- A "matrix" is a list-like-thing that contains list-like-things
-- I have a library of functions that operate on point-lists and also, in an analogous way, on matrix-like things.
-- For example, from the user's point of view, destructive operations like the "spreadsheet-like" "column-slice" operation can operate on both matrix objects and point-list objects in an analogous way -- the resulting thing is like the original one, but only has the specified columns.
-- Since this particular operation is destructive, it would not be cool to proceed as if an object were a matrix, only to find out partway through the operation that it was really a point-list or none-of-the-above.
-- I want my 'is_matrix' and 'is_point_list' tests to be performant, since they sometimes occur inside inner loops. So I would be satisfied with a test which only investigated element zero for example.
-- I would prefer tests that do not involve construction of temporary objects, just to determine an object's type, but maybe that is not the python way.
In general I find the whole duck typing thing to be kinda messy and fraught with bugs and slowness, but maybe I don't yet think like a true Pythonista.
happy to drink more kool-aid...
One thing you can do, that should work quickly on a normal list and fail on a normal dict, is taking a zero-length slice from the front:
try:
    thing[:0]
except TypeError:
    pass  # probably not list-like
else:
    pass  # probably list-like
The slice fails on dicts because slices are not hashable.
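A quick way to see that behavior for yourself:

# Dicts reject slice keys because slice objects are not hashable;
# lists handle the same slice fine.
try:
    {}[:0]
except TypeError:
    print("dict rejected the slice")
print([][:0])  # prints [] -- the empty slice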
However, str and unicode also pass this test, and you mention that you are doing destructive edits. That means you probably also want to check for __delitem__ and __setitem__:
def supports_slices_and_editing(thing):
    if hasattr(thing, '__setitem__') and hasattr(thing, '__delitem__'):
        try:
            thing[:0]
            return True
        except TypeError:
            pass
    return False
I suggest you organize the requirements you have for your input, and the range of possible inputs you want your function to handle, more explicitly than you have so far in your question. If you really just wanted to handle lists and dicts, you'd be using isinstance, right? Maybe what your method does could only ever delete items, or only ever replace items, so you don't need to check for the other capability. Document these requirements for future reference.
When dealing with built-in types, you can use the Abstract Base Classes. In your case, you may want to test against collections.Sequence or collections.MutableSequence:
if isinstance(your_thing, collections.Sequence):
    # access your_thing as a list
This is supported in all Python versions after (and including) 2.6.
If you are using your own classes to build your_thing, I'd recommend that you inherit from these abstract base classes as well (directly or indirectly). This way, you can ensure that the sequence interface is implemented correctly, and avoid all the typing mess.
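As a minimal sketch of that approach (the PointList wrapper is my own illustration; note that from Python 3.3 these ABCs live in collections.abc):

import collections

class PointList(collections.MutableSequence):
    # List-like container; passes isinstance(..., collections.Sequence).
    def __init__(self, points=None):
        self._points = list(points or [])

    # The five abstract methods; MutableSequence fills in the rest
    # (append, extend, pop, remove, __contains__, ...).
    def __getitem__(self, index):
        return self._points[index]

    def __setitem__(self, index, value):
        self._points[index] = value

    def __delitem__(self, index):
        del self._points[index]

    def __len__(self):
        return len(self._points)

    def insert(self, index, value):
        self._points.insert(index, value)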
And for third-party libraries, there's no simple way to check for a sequence interface if the third-party classes didn't inherit from the built-in types or abstract classes. In this case you'll have to check for every interface that you're going to use, and only those you use. For example, your list_ish function uses __len__ and __getitem__, so only check whether these two methods exist. A wrong behavior of __getitem__ (e.g. on a dict) should raise an exception.
Perhaps there is no ideal pythonic answer here, so I am proposing a 'hack' solution, but I don't know enough about the class structure of Python to know if I am getting this right:
def is_list_like(thing):
    return hasattr(thing, '__setslice__')

def is_dict_like(thing):
    return hasattr(thing, 'keys')
My goals here are to simply have performant tests that will:
(1) never call a dict-thing or a string-like-thing a list
(2) return the right answer for built-in Python types
(3) return the right answer if someone implements a "full" set of core methods for a list/dict
(4) is fast (ideally does not allocate objects during the test)
EDIT: Incorporated ideas from #DanGetz
This is a question about a clean, pythonic way to juggle some different instance methods.
I have a class that operates a little differently depending on certain inputs. The differences didn't seem big enough to justify producing entirely new classes. I have to interface the class with one of several data "providers". I thought I was being smart when I introduced a dictionary:
self.interface_tools = {'TYPE_A': { ... various ..., 'data_supplier': self.current_data},
                        'TYPE_B': { ... various ..., 'data_supplier': self.predicted_data}}
Then, as part of the class initialization, I have an input "source_name" and I do ...
# ... various ....
self.data_supplier = self.interface_tools[source_name]['data_supplier']
self.current_data and self.predicted_data need the same input parameter, so when it comes time to call the method, I don't have to distinguish them. I can just call
new_data = self.data_supplier(param1)
But now I need to interface with a new data source -- call it "TYPE_C" -- and it needs more input parameters. There are ways to do this, but nothing I can think of is very clean. For instance, I could just add the new parameters to the old data_suppliers and never use them, so then the call would look like
new_data = self.data_supplier(param1,param2,param3)
But I don't like that. I could add an if block
if self.data_source != 'TYPE_C':
    new_data = self.data_supplier(param1)
else:
    new_data = self.data_c_supplier(param1, param2, param3)
but avoiding if blocks like this was exactly what I was trying to do in the first place with that dictionary I came up with.
So the upshot is: I have a few "data_supplier" routines. Now that my project has expanded, they have different input lists. But I want my class to be able to treat them all the same to the extent possible. Any ideas? Thanks.
Sounds like your functions could be making use of variable-length argument lists.
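For instance, a minimal sketch using *args and made-up supplier names along the lines of the question's:

class DataInterface(object):
    def __init__(self, source_name):
        suppliers = {'TYPE_A': self.current_data,
                     'TYPE_C': self.data_c_supplier}
        self.data_supplier = suppliers[source_name]

    def current_data(self, param1):
        return ('current', param1)

    def data_c_supplier(self, param1, param2, param3):
        return ('type_c', param1, param2, param3)

    def fetch(self, *args):
        # Forward whatever the caller has; each supplier declares
        # only the parameters it actually uses.
        return self.data_supplier(*args)

print(DataInterface('TYPE_A').fetch(1))        # ('current', 1)
print(DataInterface('TYPE_C').fetch(1, 2, 3))  # ('type_c', 1, 2, 3)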
That said, you could also just make subclasses. They're fairly cheap to make, and would solve your problem here. This is pretty much the case they were designed for.
You could make all your data_suppliers accept a single argument and make it a dictionary, a list, or even a namedtuple.
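A minimal sketch of the single-argument idea with a namedtuple (the Request name and the supplier bodies are my own invention):

import collections

# A hypothetical request object bundling every parameter any supplier might need.
Request = collections.namedtuple('Request', ['param1', 'param2', 'param3'])

def current_data(request):
    # TYPE_A supplier: only needs param1, ignores the rest.
    return ('current', request.param1)

def data_c_supplier(request):
    # TYPE_C supplier: uses all three fields.
    return ('type_c', request.param1, request.param2, request.param3)

# Every supplier now has the same one-argument signature.
print(current_data(Request(param1=1, param2=None, param3=None)))
print(data_c_supplier(Request(param1=1, param2=2, param3=3)))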
I was writing some Python 3.2 code and this question came to me:
I've got these variables:
# a list of xml.dom nodes (this is just an example!)
child_nodes = [node1, node2, node3]
# I want to add every item in child_nodes into this node (also an xml.dom node)
parent = xml_document.createElement('TheParentNode')
This is exactly what I want to do:
for node in child_nodes:
    if node is not None:
        parent.appendChild(node)
I wrote it in one line like this:
[parent.appendChild(c) for c in child_nodes if c is not None]
I'm not going to use the list result at all, I only need the appendChild to do its work.
I'm not a very experienced Python programmer, so I wonder which one is better.
I like the single-line solution, but I would like to know from experienced Python programmers:
which one is better, in terms of code beauty/maintainability and performance/memory use?
The former is preferable in this situation.
The latter is called a list comprehension and creates a new list object full of the results of each call to parent.appendChild(c), and then discards it.
However, if you want to make a list based on this kind of iteration, then you should certainly employ a list comprehension.
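For instance, collecting the non-None children into a list you actually keep:

kept_children = [c for c in child_nodes if c is not None]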
The question of code beauty/maintainability is a tricky one. It's really up to you and whoever you work with to decide.
For a long time I was uncomfortable with list comprehensions and so on and preferred writing it the first way because it was easier for me to read. Some people I work with, however, prefer the second method.
I'm coding a poker hand evaluator as my first programming project. I've made it through three classes, each of which accomplishes its narrowly-defined task very well:
HandRange = a string-like object (e.g. "AA"). getHands() returns a list of tuples for each specific hand within the string:
[(Ad,Ac),(Ad,Ah),(Ad,As),(Ac,Ah),(Ac,As),(Ah,As)]
Translation = a dictionary that maps the return list from getHands to values that are useful for a given evaluator (yes, this can probably be refactored into another class).
{'As':52, 'Ad':51, ...}
Evaluator = takes a list from HandRange (as translated by Translator), enumerates all possible hand matchups and provides win % for each.
My question: what should my "domain" class for using all these classes look like, given that I may want to connect to it via either a shell UI or a GUI? Right now, it looks like an assembly line process:
user_input = HandRange()
x = Translation.translateList(user_input)
y = Evaluator.getEquities(x)
This smells funny in that it feels like it's procedural when I ought to be using OO.
In a more general way: if I've spent so much time ensuring that my classes are well defined, narrowly focused, orthogonal, whatever ... how do I actually manage work flow in my program when I need to use all of them in a row?
Thanks,
Mike
Don't make a fetish of object orientation -- Python supports multiple paradigms, after all! Think of your user-defined types, AKA classes, as building blocks that gradually give you a "language" that's closer to your domain rather than to general purpose language / library primitives.
At some point you'll want to code "verbs" (actions) that use your building blocks to perform something (under command from whatever interface you'll supply -- command line, RPC, web, GUI, ...) -- and those may be module-level functions as well as methods within some encompassing class. You'll surely want a class if you need multiple instances, and most likely also if the actions involve updating "state" (instance variables of a class being much nicer than globals) or if inheritance and/or polymorphism come into play; but there is no a priori reason to prefer classes to functions otherwise.
If you find yourself writing static methods, yearning for a singleton (or Borg) design pattern, or writing a class with no state (just methods) -- these are all "code smells" that should prompt you to check whether you really need a class for that subset of your code, or whether you may be overcomplicating things and should use a module with functions for that part of your code. (Sometimes after due consideration you'll unearth some different reason for preferring a class, and that's all right too, but the point is, don't just pick a class over a module with functions "by reflex", without critically thinking about it!)
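As a minimal sketch of the module-with-functions route, reusing the question's class names (assuming HandRange can be constructed from the range string; the evaluate_range name is my own):

# poker_verbs.py -- a module-level "verb" wiring the building blocks together
def evaluate_range(range_string):
    # e.g. evaluate_range('AA') -> win percentages for every AA matchup
    hands = HandRange(range_string).getHands()
    translated = Translation.translateList(hands)
    return Evaluator.getEquities(translated)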
You could create a Poker class that ties these all together and initialize all of that stuff in the __init__() method:
class Poker(object):
    def __init__(self, user_input=None):
        # Avoid a default argument that is evaluated once at definition time.
        self.user_input = user_input if user_input is not None else HandRange()
        self.translation = Translation.translateList(self.user_input)
        self.evaluator = Evaluator.getEquities(self.translation)
        # and so on...

p = Poker()
# etc, etc...