I'm working on the Python API for our physics simulation package (Mechanica, https://mechanica.readthedocs.io), and I'm mulling over whether to use properties or methods.
Python has an established convention that the objects in a dictionary are accessible via an items() method, e.g.
[i for i in d.items()]
I'm trying to adhere to this established convention for our objects, but it's sometimes awkward. For example, in our simulator we have:
C.items() # get all the members of this type
n = a.neighbors() # get all the neighbors of an object
c = find_some_cluster() # some function to find a cluster
c.items() # get all the items in this list
b = m.bind(pot, a, b) # create a bond between two objects
b.energy() # gets the current energy of the bond
b.half_life # gets / sets the half life
b.delete() # deletes the bond
b.items()[0], b.items()[1] # gets the pair of objects that this bond acts on.
b.dissociation_energy # bond breaking threshold
To access one of the objects that a bond acts between, you currently have to call
b.items()[0]
I think that's awkward, and perhaps items would be better as a property. I just don't know, though, because making it a property goes against some of the established Python conventions. Python itself is pretty inconsistent here: some things are stand-alone functions, e.g. len(a) for the length of a list, but most other things are methods on objects.
On our bond object, some things are methods, like energy(), while others are properties, like half_life. I set these up this way because half_life is a stateful attribute of the object, whereas energy() is computed. I'm not sure this makes the most sense to the end user, though.
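For illustration, here's a minimal sketch of that split; the class body is hypothetical, not Mechanica's actual implementation:
class Bond(object):
    """Hypothetical sketch only -- not the real Mechanica Bond class."""

    def __init__(self, half_life):
        # half_life is plain stored state, so it reads and writes like data
        self.half_life = half_life

    def energy(self):
        # energy is recomputed from the current simulation state each time,
        # so it is exposed as a method
        return self._compute_energy()

    def _compute_energy(self):
        return 0.0  # placeholder for the real calculation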
What do you think items should be: a method or a property? Is there a good rule for deciding when something should be a method versus a property?
Related
I am working on Python code that creates Compound objects (as in chemical compounds), which are composed of Bond and Element objects. The Element objects are created with some inputs about them (name, symbol, atomic number, atomic mass, etc.). I want to populate an array with Element objects that are unique, so I can modify one and leave the rest unchanged, but they should all carry the information related to a 'Hydrogen' element.
This question Python creating multiple instances for a single object/class leads me to believe that I should create subclasses of Element, i.e. a Hydrogen object and a Carbon object, etc.
Is this doable without creating sub-classes, and if so how?
Design your object model based on making the concepts make sense, not based on what seems easiest to implement.
If, in your application, hydrogen atoms are a different type of thing than oxygen atoms, then you want to have a Hydrogen class and an Oxygen class, both probably subclasses of an Element class.*
If, on the other hand, there's nothing special about hydrogen or oxygen (e.g., if you don't want to distinguish between, say, oxygen and sulfur, since they both have the same valence), then you don't want subclasses.
Either way, you can create multiple instances. It's just a matter of whether you do it like this:
atoms = [Hydrogen(), Hydrogen(), Oxygen(), Oxygen()]
… or this:
atoms = [Element(1), Element(1), Element(-2), Element(-2)]
If your instances take a lot of arguments, and you want a lot of instances with the same arguments, repeating yourself like this can be a bad thing. But you can use a loop, either an explicit for statement or a comprehension, to make it better:
for _ in range(50):
    atoms.append(Element(group=16, valence=2, number=16, weight=32.066))
… or:
atoms.extend(Element(group=16, valence=2, number=16, weight=32.066)
             for _ in range(50))
* Of course you may even want further subclasses, e.g., to distinguish Oxygen-16, Oxygen-17, Oxygen-18, or maybe even different mixtures, like the 99.762% Oxygen-16 with small amounts of -18 and tiny bits of the others that's standard in Earth's atmosphere, vs. the different mixture that was common millions of years ago…
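For concreteness, here's one way the subclass version might look, using the attributes mentioned in the question (names and values are purely illustrative):
class Element(object):
    def __init__(self, name, symbol, number, mass):
        self.name = name
        self.symbol = symbol
        self.number = number
        self.mass = mass

class Hydrogen(Element):
    def __init__(self):
        Element.__init__(self, 'Hydrogen', 'H', 1, 1.008)

class Carbon(Element):
    def __init__(self):
        Element.__init__(self, 'Carbon', 'C', 6, 12.011)

# Each call creates a distinct instance carrying the same element data,
# so modifying one leaves the others untouched:
atoms = [Hydrogen() for _ in range(3)] + [Carbon()]
atoms[0].mass = 2.014  # e.g. mark one hydrogen as deuterium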
I've made two classes called House and Window. I then made a list containing four Houses. Each instance of House has a list of Windows. I'm trying to iterate over the windows in each house and print its ID. However, I seem to get some odd results :S I'd greatly appreciate any help.
#!/usr/bin/env python

# Minimal house class
class House:
    ID = ""
    window_list = []

# Minimal window class
class Window:
    ID = ""

# List of houses
house_list = []

# Number of windows to build into each of the four houses
windows_per_house = [1, 3, 2, 1]

# Build the houses
for new_house in range(0, len(windows_per_house)):
    # Append the new house to the house list
    house_list.append(House())
    # Give the new house an ID
    house_list[new_house].ID = str(new_house)
    # For each new house build some windows
    for new_window in range(0, windows_per_house[new_house]):
        # Append window to house's window list
        house_list[new_house].window_list.append(Window())
        # Give the window an ID
        house_list[new_house].window_list[new_window].ID = str(new_window)

# Iterate through the windows of each house, printing house and window IDs.
for house in house_list:
    print "House: " + house.ID
    for window in house.window_list:
        print " Window: " + window.ID
####################
# Desired output:
#
# House: 0
# Window: 0
# House: 1
# Window: 0
# Window: 1
# Window: 2
# House: 2
# Window: 0
# Window: 1
# House: 3
# Window: 0
####################
Currently you are using class attributes instead of instance attributes. Try changing your class definitions to the following:
class House:
    def __init__(self):
        self.ID = ""
        self.window_list = []

class Window:
    def __init__(self):
        self.ID = ""
The way your code is now, all instances of House share the same window_list.
Here's the updated code.
# Minimal house class
class House:
    def __init__(self, id):
        self.ID = id
        self.window_list = []

# Minimal window class
class Window:
    ID = ""

# List of houses
house_list = []

# Number of windows to build into each of the four houses
windows_per_house = [1, 3, 2, 1]

# Build the houses
for new_house in range(len(windows_per_house)):
    # Append the new house to the house list
    house_list.append(House(str(new_house)))
    # For each new house build some windows
    for new_window in range(windows_per_house[new_house]):
        # Append window to house's window list
        house_list[new_house].window_list.append(Window())
        # Give the window an ID
        house_list[new_house].window_list[new_window].ID = str(new_window)

# Iterate through the windows of each house, printing house and window IDs.
for house in house_list:
    print "House: " + house.ID
    for window in house.window_list:
        print " Window: " + window.ID
The actual problem is that window_list is defined on the class and holds a mutable object, so all the instances end up sharing the same list. By moving window_list into __init__, each instance gets its own.
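To see the sharing directly, here's a tiny self-contained demonstration:
class House:               # the original (broken) version
    window_list = []       # class attribute: one list shared by every instance

a = House()
b = House()
a.window_list.append("window from a")
print(b.window_list)                   # ['window from a'] -- b sees it too
print(a.window_list is b.window_list)  # True: it's the very same list object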
C++, Java, C# etc. have this really strange behaviour regarding instance variables, whereby data (members, or fields, depending on which culture you belong to) that's described within a class {} block belongs to instances, while functions (well, methods, but C++ programmers seem to hate that term and say "member functions" instead) described within the same block belong to the class itself. Strange, and confusing, when you actually think about it.
A lot of people don't think about it; they just accept it and move on. But it actually causes confusion for a lot of beginners, who assume that everything within the block belongs to the instances. This leads to bizarre (to experienced programmers) questions and concerns about the per-instance overhead of these methods, and trouble wrapping their heads around the whole "vtable" implementation concept. (Of course, it's mostly the teachers' collective fault for failing to explain that vtables are just one implementation, and for failing to make clear distinctions between classes and instances in the first place.)
Python doesn't have this confusion. Since in Python, functions (including methods) are objects, it would be bizarrely inconsistent for the compiler to make a distinction like that. So, what happens in Python is what you should intuitively expect: everything within the class indented block belongs to the class itself. And, yes, Python classes are themselves objects as well (which gives a place to put those class attributes), and you don't have to jump through standard library hoops to use them reflectively. (The absence of manifest typing is quite liberating here.)
So how, I hear you protest, do we actually add any data to the instances? Well, by default, Python doesn't restrict you from adding anything to any instance. It doesn't even require you to make different instances of the same class contain the same attributes. And it certainly doesn't pre-allocate a single block of memory to contain all the object's attributes. (It would only be able to contain references, anyway, given that Python is a pure reference-semantics language, with no C# style value types or Java style primitives.)
But obviously, it's a good idea to do things that way, so the usual convention is "add all the data at the time that the instance is constructed, and then don't add any more (or delete any) attributes".
"When it's constructed"? Python doesn't really have constructors in the C++/Java/C# sense, because this absence of "reserved space" means there's no real benefit to considering "initialization" as a separate task from ordinary assignment - except of course the benefit of initialization being something that automatically happens to a new object.
So, in Python, our closest equivalent is the magic __init__ method that is automatically called upon newly-created instances of the class. (There is another magic method called __new__, which behaves more like a constructor, in the sense that it's responsible for the actual creation of the object. However, in nearly every case we just want to delegate to the base object __new__, which calls some built-in logic to basically give us a little pointer-ball that can serve as an object, and point it to a class definition. So there's no real point in worrying about __new__ in almost every case. It's really more analogous to overloading the operator new for a class in C++.) In the body of this method (there are no C++-style initialization lists, because there is no pre-reserved data to initialize), we set initial values for attributes (and possibly do other work), based on the parameters we're given.
Now, if we want to be a little bit neater about things, or efficiency is a real concern, there is another trick up our sleeves: we can use the magic __slots__ attribute on the class to specify the names of the attributes that its instances are allowed to have. This is a list of strings, nothing fancy. However, this still doesn't pre-initialize anything; an instance doesn't have an attribute until you assign it. It just prevents you from adding attributes with other names. You can even still delete attributes from an object whose class specifies __slots__. All that happens is that the instances are given a different internal structure, to optimize memory usage and attribute lookup.
Using __slots__ requires that we derive from the built-in object type, which we should do anyway (it isn't strictly required in Python 2.x, where old-style classes remain only for backwards compatibility).
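A quick self-contained demonstration of the behaviour just described:
class Point(object):
    __slots__ = ['x', 'y']  # instances may only have 'x' and 'y'

p = Point()
p.x = 1       # fine: 'x' is listed in __slots__
del p.x       # deleting is still allowed

try:
    p.z = 3   # 'z' is not listed ...
except AttributeError as e:
    print(e)  # ... so the assignment raises AttributeError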
Ok, so now we can make the code work. But how do we make it right for Python?
First off, just as with any other language, constantly commenting to explain already-self-explanatory things is a bad idea. It distracts the user, and doesn't really help you as a learner of the language, either. You're supposed to know what a class definition looks like, and if you need a comment to tell you that a class definition is a class definition, then reading the code comments isn't the kind of help you need.
With this whole "duck typing" thing, it's poor form to include data type names in variable (or attribute) names. You're probably protesting, "but how am I supposed to keep track of the type otherwise, without the manifest type declaration"? Don't. The code that uses your list of windows doesn't care that your list of windows is a list of windows. It just cares that it can iterate over the list of windows, and thus obtain values that can be used in certain ways that are associated with windows. That's how duck typing works: stop thinking about what the object is, and worry about what it can do.
You'll notice in the code below that I put the string conversion into the House and Window constructors themselves. This serves as a primitive form of type-checking, and also makes sure that we can't forget to do the conversion. If someone tries to create a House with an ID that can't even be converted to a string, it will raise an exception. Easier to ask for forgiveness than permission, after all. (Note that you actually have to go out of your way a bit in Python to create a value that can't be converted to a string.)
As for the actual iteration... in Python, we iterate by actually iterating over the objects in a container. Java and C# have this concept as well, and you can get at it with the C++ standard library too (although a lot of people don't bother). We don't iterate over indices, because it's a useless and distracting indirection. We don't need to number our "windows_per_house" values in order to use them; we just need to look at each value in turn.
How about the ID numbers, I hear you ask? Simple. Python provides us with a function called enumerate, which gives us (index, element) pairs from an input sequence of elements. It's clean, it lets us be explicit about our need for the index to solve the problem (and the purpose of the index), and it's a built-in that doesn't need to be interpreted like the rest of the Python code, so it doesn't incur much overhead. (It's also lazy, so it doesn't build the whole list of pairs in memory up front.)
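For instance, applied to the window counts from the question:
windows_per_house = [1, 3, 2, 1]
list(enumerate(windows_per_house))  # [(0, 1), (1, 3), (2, 2), (3, 1)]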
But even then, iterating to create each house, and then manually appending each one to an initially-empty list, is too low-level. Python knows how to construct a list of values; we don't need to tell it how. (And as a bonus, we typically get better performance by letting it do that part itself, since the actual looping logic can now be done internally, in native C.) We instead describe what we want in the list, with a list comprehension. We don't have to walk through the steps of "take each window-count in turn, make the corresponding house, and add it to the list", because we can say "a list of houses with the corresponding window-count for each window-count in this input list" directly. That's arguably clunkier in English, but much cleaner in a programming language like Python, because you can skip a bunch of the little words, and you don't have to expend effort to describe the initial list, or the act of appending the finished houses to the list. You don't describe the process at all, just the result. Made-to-order.
Finally, as a general programming concept, it makes sense, whenever possible, to delay the construction of an object until we have everything ready that's needed for that object's existence. "Two-phase construction" is ugly. So we make the windows for a house first, and then the house (using those windows). With list comprehensions, this is simple: we just nest the list comprehensions.
class House(object):
    __slots__ = ['ID', 'windows']

    def __init__(self, id, windows):
        self.ID = str(id)
        self.windows = windows

class Window(object):
    __slots__ = ['ID']

    def __init__(self, id):
        self.ID = str(id)

windows_per_house = [1, 3, 2, 1]

# Build the houses.
houses = [
    House(house_id, [Window(window_id) for window_id in range(window_count)])
    for house_id, window_count in enumerate(windows_per_house)
]

# See how elegant the list comprehensions are?
# If you didn't quite follow the logic there, please try **not**
# to imagine the implicitly-defined process as you trace through it.
# (Pink elephants, I know, I know.) Just understand what is described.

# And now we can iterate and print just as before.
for house in houses:
    print "House: " + house.ID
    for window in house.windows:
        print " Window: " + window.ID
You're assigning the IDs and window_lists to the class and not to the instances.
You want something like
class House():
    def __init__(self, ID):
        self.ID = ID
        self.window_list = []
etc.
Then, you can do house_list.append(House(str(new_house))) and so on.
Of what use is id() in real-world programming? I have always thought this function is there just for academic purposes. Where would I actually use it in programming?
I have been programming applications in Python for some time now, but I have never encountered any "need" for using id(). Could someone throw some light on its real world usage?
It can be used for creating a dictionary of metadata about objects:
For example:
someobj = int(1)
somemetadata = "The type is an int"
data = {id(someobj):somemetadata}
Now if I encounter this object somewhere else, I can check whether metadata exists for it in O(1) time (instead of looping over candidates and comparing with is).
I use id() frequently when writing temporary files to disk. It's a very lightweight way of getting a pseudo-random number.
Let's say that during data processing I come up with some intermediate results that I want to save off for later use. I simply create a file name using the pertinent object's id.
fileName = "temp_results_" + str(id(self))
Although there are many other ways of creating unique file names, this is my favorite. In CPython, the id is the memory address of the object, so as long as the objects are alive at the same time, I'm guaranteed never to have a naming collision. That's all for the cost of one address lookup. The other methods I'm aware of for getting a unique string are much more involved.
A concrete example would be a word-processing application where each open document is an object. I could periodically save progress to disk with multiple files open using this naming convention.
Anywhere where one might conceivably need id() one can use either is or a weakref instead. So, no need for it in real-world code.
The only time I've found id() useful outside of debugging or answering questions on comp.lang.python is with a WeakValueDictionary, that is, a dictionary which holds weak references to its values and drops a key when the last reference to that value disappears.
Sometimes you want to be able to access a group (or all) of the live instances of a class without extending the lifetime of those instances, and in that case a weak mapping with id(instance) as the key and the instance as the value can be useful.
However, I don't think I've had to do this very often, and if I had to do it again today then I'd probably just use a WeakSet (but I'm pretty sure that didn't exist last time I wanted this).
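A sketch of that pattern (Widget is just a stand-in class; in CPython the entry is dropped as soon as the last strong reference goes away):
import weakref

class Widget(object):
    pass

live_widgets = weakref.WeakValueDictionary()

w = Widget()
live_widgets[id(w)] = w   # track the instance without keeping it alive
print(len(live_widgets))  # 1

del w                     # last strong reference gone ...
print(len(live_widgets))  # ... 0: the entry disappeared on its own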
In one program I used it to compute the intersection of lists of non-hashable items, like:
def intersection(*lists):
    id_row_row = {}  # id(row): row
    key_id_row = {}  # key: set(id(row))
    for key, rows in enumerate(lists):
        key_id_row[key] = set()
        for row in rows:
            id_row_row[id(row)] = row
            key_id_row[key].add(id(row))

    from operator import and_

    def intersect(sets):
        if len(sets) > 0:
            return reduce(and_, sets)
        else:
            return set()

    seq = [id_row_row[id_row] for id_row in intersect(key_id_row.values())]
    return seq
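For example, with lists of (unhashable) dicts, only the dict object that appears in all of the lists survives (Python 2, since the code relies on the built-in reduce):
a, b, c = {'x': 1}, {'y': 2}, {'z': 3}
print(intersection([a, b, c], [b, c], [c]))  # [{'z': 3}] -- only c is in all three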
I'm coding a poker hand evaluator as my first programming project. I've made it through three classes, each of which accomplishes its narrowly-defined task very well:
HandRange = a string-like object (e.g. "AA"). getHands() returns a list of tuples for each specific hand within the string:
[(Ad,Ac),(Ad,Ah),(Ad,As),(Ac,Ah),(Ac,As),(Ah,As)]
Translation = a dictionary that maps the return list from getHands to values that are useful for a given evaluator (yes, this can probably be refactored into another class).
{'As':52, 'Ad':51, ...}
Evaluator = takes a list from HandRange (as translated by Translation), enumerates all possible hand matchups and provides a win % for each.
My question: what should my "domain" class for using all these classes look like, given that I may want to connect to it via either a shell UI or a GUI? Right now, it looks like an assembly line process:
user_input = HandRange()
x = Translation.translateList(user_input)
y = Evaluator.getEquities(x)
This smells funny in that it feels like it's procedural when I ought to be using OO.
In a more general way: if I've spent so much time ensuring that my classes are well defined, narrowly focused, orthogonal, whatever ... how do I actually manage work flow in my program when I need to use all of them in a row?
Thanks,
Mike
Don't make a fetish of object orientation -- Python supports multiple paradigms, after all! Think of your user-defined types, AKA classes, as building blocks that gradually give you a "language" that's closer to your domain rather than to general purpose language / library primitives.
At some point you'll want to code "verbs" (actions) that use your building blocks to perform something (under command from whatever interface you'll supply -- command line, RPC, web, GUI, ...) -- and those may be module-level functions as well as methods within some encompassing class. You'll surely want a class if you need multiple instances, and most likely also if the actions involve updating "state" (instance variables of a class being much nicer than globals) or if inheritance and/or polymorphism come into play; but there is no a priori reason to prefer classes to functions otherwise.
If you find yourself writing static methods, yearning for a singleton (or Borg) design pattern, or writing a class with no state (just methods), these are all "code smells" that should prompt you to check whether you really need a class for that subset of your code, or whether you may be overcomplicating things and should use a module with functions for that part of your code. (Sometimes after due consideration you'll unearth some different reason for preferring a class, and that's all right too, but the point is, don't just pick a class over a module with functions "by reflex", without thinking critically about it!)
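For instance, here's a sketch of the function-based alternative, assuming the question's HandRange, Translation and Evaluator are importable as-is:
# Nothing needs to persist between calls, so a plain module-level
# function is enough to tie the pipeline together:
def evaluate_range(hand_range):
    translated = Translation.translateList(hand_range)
    return Evaluator.getEquities(translated)

equities = evaluate_range(HandRange())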
You could create a Poker class that ties these all together and initialize all of that stuff in the __init__() method:
class Poker(object):
    def __init__(self, user_input=HandRange()):
        self.user_input = user_input
        self.translation = Translation.translateList(user_input)
        self.evaluator = Evaluator.getEquities(self.translation)
        # and so on...

p = Poker()
# etc, etc...
I'm doing a Simulated Annealing algorithm to optimise a given allocation of students and projects.
This is language-agnostic pseudocode from Wikipedia:
s ← s0; e ← E(s)                              // Initial state, energy.
sbest ← s; ebest ← e                          // Initial "best" solution
k ← 0                                         // Energy evaluation count.
while k < kmax and e > emax                   // While time left & not good enough:
    snew ← neighbour(s)                       // Pick some neighbour.
    enew ← E(snew)                            // Compute its energy.
    if enew < ebest then                      // Is this a new best?
        sbest ← snew; ebest ← enew            // Save 'new neighbour' to 'best found'.
    if P(e, enew, temp(k/kmax)) > random() then   // Should we move to it?
        s ← snew; e ← enew                    // Yes, change state.
    k ← k + 1                                 // One more evaluation done
return sbest                                  // Return the best solution found.
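For reference, a near-literal Python transcription of that pseudocode (a sketch only: E, neighbour, P and temp are whatever energy, neighbour-picking, acceptance-probability and temperature-schedule functions you plug in):
import random

def simulated_annealing(s0, kmax, emax, E, neighbour, P, temp):
    s, e = s0, E(s0)                   # Initial state, energy.
    sbest, ebest = s, e                # Initial "best" solution.
    k = 0                              # Energy evaluation count.
    while k < kmax and e > emax:       # While time left & not good enough:
        snew = neighbour(s)            # Pick some neighbour.
        enew = E(snew)                 # Compute its energy.
        if enew < ebest:               # Is this a new best?
            sbest, ebest = snew, enew  # Save it.
        if P(e, enew, temp(float(k) / kmax)) > random.random():  # Move to it?
            s, e = snew, enew          # Yes, change state.
        k += 1                         # One more evaluation done.
    return sbest                       # Return the best solution found.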
The following is an adaptation of the technique. My supervisor said the idea is fine in theory.
First I pick up some allocation (i.e. an entire dictionary of students and their allocated projects, including the ranks for the projects) from the entire set of randomised allocations, copy it and pass it to my function. Let's call this allocation aOld (it is a dictionary). aOld has a weight associated with it, called wOld. The weighting is described below.
The function does the following:
Let this allocation, aOld, be the best_node
From all the students, pick a random number of students and put them in a list
Strip (DEALLOCATE) them of their projects, and reflect the changes for projects (the allocated parameter is now False) and lecturers (free up slots if one or more of their projects are no longer allocated)
Randomise that list
Try assigning (REALLOCATE) everyone in that list projects again
Calculate the weight (add up ranks, rank 1 = 1, rank 2 = 2... and no project rank = 101)
For this new allocation aNew, if the weight wNew is smaller than the allocation weight wOld I picked up at the beginning, then this is the best_node (as defined by the Simulated Annealing algorithm above). Apply the algorithm to aNew and continue.
If wOld < wNew, then apply the algorithm to aOld again and continue.
The allocations/data-points are expressed as "nodes" such that a node = (weight, allocation_dict, projects_dict, lecturers_dict)
Right now, I can only perform this algorithm once, but I'll need to run it a number of times, N (denoted by kmax in the Wikipedia snippet), and make sure I always have with me the previous node and the best_node.
So that I don't modify my original dictionaries (which I might want to reset to), I've done a shallow copy of the dictionaries. From what I've read in the docs, it seems that this only copies the references, and since my dictionaries contain objects, changing the copied dictionary ends up changing the objects anyway. So I tried to use copy.deepcopy(). These dictionaries refer to objects that have been mapped with SQLAlchemy.
Questions:
I've been given some solutions to the problems faced but due to my über green-ness with using Python, they all sound rather cryptic to me.
Deepcopy isn't playing nicely with SQLAlchemy. I've been told that deepcopy on ORM objects probably has issues that prevent it from working as you'd expect. Apparently I'd be better off "building copy constructors, i.e. def copy(self): return FooBar(....)". Can someone please explain what that means? (A sketch of the idea appears after these questions.)
I checked and found out that deepcopy has issues because SQLAlchemy places extra information on your objects, i.e. an _sa_instance_state attribute, that I wouldn't want in the copy but is necessary for the object to have. I've been told: "There are ways to manually blow away the old _sa_instance_state and put a new one on the object, but the most straightforward is to make a new object with __init__() and set up the attributes that are significant, instead of doing a full deep copy." What exactly does that mean? Do I create a new, unmapped class similar to the old, mapped one?
An alternate solution is that I'd have to "implement __deepcopy__() on your objects and ensure that a new _sa_instance_state is set up, there are functions in sqlalchemy.orm.attributes which can help with that." Once again this is beyond me so could someone kindly explain what it means?
A more general question: given the above information are there any suggestions on how I can maintain the information/state for the best_node (which must always persist through my while loop) and the previous_node, if my actual objects (referenced by the dictionaries, therefore the nodes) are changing due to the deallocation/reallocation taking place? That is, without using copy?
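To make the "copy constructor" advice concrete, here's a hedged sketch with a hypothetical mapped Allocation class (the real models in the project will of course look different):
from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Allocation(Base):
    """Hypothetical mapped class, purely for illustration."""
    __tablename__ = 'allocation'
    id = Column(Integer, primary_key=True)
    student_id = Column(Integer)
    project_id = Column(Integer)
    rank = Column(Integer)

    def copy(self):
        # The "copy constructor" idea: build a brand-new, unsaved instance
        # from the attributes that matter, instead of deep-copying the mapped
        # object (and its _sa_instance_state) wholesale.
        return Allocation(student_id=self.student_id,
                          project_id=self.project_id,
                          rank=self.rank)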
I have another possible solution: use transactions. This probably still isn't the best solution but implementing it should be faster.
Firstly create your session like this:
# transactional session
Session = sessionmaker(transactional=True)
sess = Session()
That way it will be transactional. The way transactions work is that sess.commit() will make your changes permanent while sess.rollback() will revert them.
In the case of simulated annealing you want to commit when you find a new best solution. At any later point, you can invoke rollback() to revert the status back to that position.
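As a sketch of that loop (compute_weight and try_reallocation are hypothetical stand-ins for the rank-summing weight function and the deallocate/reallocate step described in the question):
def anneal_with_transactions(sess, allocation, kmax, compute_weight, try_reallocation):
    best_weight = compute_weight(allocation)
    for k in range(kmax):
        try_reallocation(allocation)        # mutate the mapped objects in place
        new_weight = compute_weight(allocation)
        if new_weight < best_weight:
            sess.commit()                   # keep this state as the new best
            best_weight = new_weight
        else:
            sess.rollback()                 # revert to the last committed best
    return best_weight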
You don't want to copy SQLAlchemy objects like that. You could implement your own methods to make the copies easily enough, but that is probably not what you want. You don't want copies of students and projects in your database, do you? So don't copy that data.
So you have a dictionary which holds your allocations. During the process you should never modify the SQLAlchemy objects. All information that can be modified should be stored in those dictionaries. If you need to modify the objects to take that into account, copy the data back at the end.