I just have a quick question about an OOP design decision I've been having difficulty making. The premise is that I'm making a set of very simple geometric classes, such as vertex, angle, and vector objects, but one of the classes, the line class to be specific, is a little different. It's basically just a collection of methods that I use one time only; I never actually save a line object for later use or to recall its data anywhere else in the program. An example usage to demonstrate my point would be this:
class Line:
    def __init__(self, vertex1, vertex2):
        self.start = vertex1
        self.end = vertex2

    def to_the_left(self, vertex):
        """Check to see if a vertex is to the left of the line segment."""
        # code stuff

data = Line(Vertex(0, 0), Vertex(10, 0)).to_the_left(Vertex(5, 5))
I only ever instantiate Line(Vertex(0, 0), Vertex(10, 0)) once to retrieve the data. So I was thinking that I might as well just have a bunch of functions available instead of packing it all into a class, but then I was skeptical about doing that since there are a ton of methods that would have to be converted to functions.
Another thing I was thinking of doing was to make a Line class and then convert all its methods into normal functions, like so:
# continuing from the code above
def to_the_left(line_start, line_end, vertex):
    return Line(line_start, line_end).to_the_left(vertex)

data = to_the_left(Vertex(0, 0), Vertex(10, 0), Vertex(5, 5))
Which method do you think I should use?
I would opt for using an object, as you might need to do multiple operations on a Line. For example, you might compute its length, check whether a vertex is to its left, and some other operation. You might also need to pass the Line around; who knows.
One thing you might want to consider is, instead of using Line and Vertex, to use a Vector that acts as both. If your vertex has x, y, you can make a Vector that has x, y, w.
In this scheme w=1 for vertices and w=0 for lines - it would simplify a lot of code.
Look up homogeneous coordinates to learn more.
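For illustration, here is a minimal sketch of such a combined Vector type (the class and the helper function below are placeholders I made up, not from any particular library):

class Vector:
    """A homogeneous 2D vector: w=1 for points (vertices), w=0 for directions."""
    def __init__(self, x, y, w=1):
        self.x, self.y, self.w = x, y, w

    def __sub__(self, other):
        # point - point yields a direction (w drops from 1 to 0)
        return Vector(self.x - other.x, self.y - other.y, self.w - other.w)

    def cross_z(self, other):
        # z-component of the 2D cross product; its sign encodes left/right
        return self.x * other.y - self.y * other.x

def to_the_left(start, end, vertex):
    # positive cross product => `vertex` lies to the left of start -> end
    return (end - start).cross_z(vertex - start) > 0

data = to_the_left(Vector(0, 0), Vector(10, 0), Vector(5, 5))  # True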
The following question addresses a problem I often encounter. Basically, there are solutions like the adapter pattern, but I find them a bit unsatisfying.
Suppose I have a class Polygon which implements an - uhm - polygon with quite some functionality. Many of those Polygons live in my program, some as lonely variables, some in collection structures.
Now, there's a function that needs an argument type that is basically a Polygon, but with some additional features. Let's say, a Polygon that can return some metrics: its volume, center of gravity, and angular mass. Plus, the function also needs the methods of the original Polygon.
A first idea is:
class Polygon:
    # defines my polygon

class PolygonWithMetrics(Polygon):
    # - extends polygon with the metrics
    # - takes Polygon as argument upon construction
    # - would need to delegate many functions to Polygon

def functionUsingPolygonWithMetrics(p):
    # use functions of Polygon and PolygonWithMetrics

# driving code:
p = Polygon(some args here)
... more code ...
p_with_metrics = PolygonWithMetrics(p)  # Bummer - problem here...
functionUsingPolygonWithMetrics(p_with_metrics)
The problem: it would require delegating many, many functions from PolygonWithMetrics to the original Polygon.
A second idea is:
class Polygon:
    # defines my polygon

class PolygonMetrics:
    # takes a polygon and provides metrics methods on it

def functionUsingPolygonWithMetrics(p):
    # use functions of Polygon and PolygonMetrics

# driving code:
p = Polygon(some args here)
... more code ...
p_with_metrics = PolygonMetrics(p)
functionUsingPolygonWithMetrics(p, p_with_metrics)  # Bummer - problem here...
This idea takes the original Polygon as an argument, plus a second object that provides the metrics functions. The problem is that I would need to change the signature of functionUsingPolygonWithMetrics.
What I would really need is an idea of how to extend an existing object ad hoc with some more functionality, without the problems given in ideas 1 and 2.
I could imagine an idea roughly like this, where the job is mostly done by PolygonWithMetrics:
class Polygon:
    # defines my polygon

class PolygonWithMetrics(maybe inherits something):
    # - takes a Polygon and provides metrics methods on it
    # - upon construction, it will take a polygon
    # - will expose the full functionality of Polygon automatically

def functionUsingPolygonWithMetrics(p):
    # use functions of Polygon and PolygonWithMetrics

# driving code:
p = Polygon(some args here)
... more code ...
p_with_metrics = PolygonWithMetrics(p)
functionUsingPolygonWithMetrics(p)
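The closest I can come up with for the "expose automatically" part is delegation via __getattr__ (just a sketch of my intent; the method names are made up, and I don't know whether this is the right way):

class Polygon:
    def area(self):
        pass  # stands in for one of Polygon's many existing methods

class PolygonWithMetrics(object):
    def __init__(self, polygon):
        self._polygon = polygon

    def center_of_gravity(self):
        pass  # new metrics functionality, computed from self._polygon

    def __getattr__(self, name):
        # anything not defined here is looked up on the wrapped Polygon,
        # so the full Polygon interface is exposed automatically
        return getattr(self._polygon, name)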
Three questions arise:
Does this pattern have sort of a name?
Is it a good idea, or should I resort to some better-advised techniques?
How to do it in Python?
I am relatively new to OOP, and definitely still learning. I would like to know what is the best practice when dealing with two classes such as this:
(I have reverse engineered the statistics engine of an old computer game, but I guess the subject does not matter to my question)
class Step(object):
    def __init__(self):
        self.input = 'G'
        ...more attributes...

    def reset_input(self):
        ''' Reset input to None. '''
        self.input = None
        print '* Input reset.'

    ... more methods ...
Then I have the Player class, which is the main object to control (at least in my design):
import copy

class Player(object):
    ''' Represents a player. Accepts initial stats.'''
    def __init__(self, step=250, off=13, dng=50000, dist=128, d_inc=113):
        self.route = []
        self.step = Step(step=step, off=off, dng=dng, dist=dist, d_inc=d_inc)
        self.original = copy.copy(self.step)
As you can see, Player contains a Step object, which represents the next Step.
I have found that I sometimes want to access a method from that Step class.
In this case, is it better to add a wrapper to Player, such as:
(If I want to access reset_input()):
class Player(object):
    ...
    def reset_input(self):
        self.step.reset_input()
Then for Player to reset the input value:
p = Player()
p.reset_input()
Or would it be better practice to access the reset_input() directly with:
p = Player()
p.step.reset_input()
It seems adding the wrapper is just duplicating code. It's also annoying as I need access to quite a few of Step's methods.
So, when using composition (I think that is the correct term), is it good practice to directly access the 'inner' object's methods?
I believe you should apply an additional layer of abstraction in OOP if:
you foresee yourself updating the code later; and
the code will be used in multiple places.
In this case, let's say you go with this method:
def reset_input(self):
    self.step.reset_input()
and then you call it in multiple places in your code. Later on, you decide that you want to do action x() before all your calls to reset_input, pass in optional parameter y to reset_input, and do action z() after that. Then it's trivial to update the method as follows:
def reset_input(self):
    self.x()
    self.step.reset_input(self.y)
    self.z()
And the code will be changed everywhere with just a few keystrokes. Imagine the nightmare you'd have on your hands if you had to update all the calls in multiple places because you weren't using a wrapper function.
You should apply a wrapper if you actually foresee yourself using the wrapper to apply changes to your code. This will make your code easier to maintain. As stated in the comments, this concept is known as encapsulation; it allows you to use an interface that hides implementation details, so that you can easily update the implementation at any time and it will change the code universally in a very simple way.
It's always a tradeoff. Look at the Law of Demeter. It describes your situation, along with the pros and cons of the different solutions.
The Python interface of igraph has a module called metamagic, serving the purpose of collecting graphical parameters for plotting. I am writing a module using igraph, and I had almost started to write my own wrapper functions for this purpose when I found metamagic in the documentation. But after searching and experimenting, it is still not clear to me how to use these classes. If I define an AttributeCollectorBase class for edges, like this:
class VisEdge(igraph.drawing.metamagic.AttributeCollectorBase):
    width = 0.002
    color = "#CCCCCC44"
Then, is there an easy way to pass all these parameters to the igraph.plot() function? Or can I only do it one by one, like this: plot(graph, edge_color=VisEdge(graph.es).color)?
And what if I would like to use not constant parameters, but values calculated by a custom function? For example, vertex_size proportional to degree. The func parameter of the AttributeSpecification class is supposed to do this, isn't it? But I haven't seen any example of how to use it. If I define an AttributeSpecification instance, like this:
ds = igraph.drawing.metamagic.AttributeSpecification(name="vertex_size", alt_name="size", default=2, func='degree')
Afterwards, how do I pass it to an AttributeCollector, and finally to plot()?
(To put things in context: I am the author of the Python interface of igraph).
I'm not sure whether the metamagic package is the right tool for you. The only purpose of the AttributeCollectorBase class is to allow the vertex and edge drawers in igraph (see the igraph.drawing.vertex and igraph.drawing.edge packages) to define what vertex and edge attributes they are able to treat as visual properties in a nice and concise manner (without me having to type too much). So, for instance, if you take a look at the DefaultVertexDrawer class in igraph.drawing.vertex, you can see that I construct a VisualVertexBuilder class by deriving it from AttributeCollectorBase as follows:
class VisualVertexBuilder(AttributeCollectorBase):
    """Collects some visual properties of a vertex for drawing"""
    # (in the igraph source this class is defined inside a method of the
    # drawer, so "self" below refers to the drawer instance)
    _kwds_prefix = "vertex_"
    color = ("red", self.palette.get)
    frame_color = ("black", self.palette.get)
    frame_width = 1.0
    ...
Later on, when the DefaultVertexDrawer is being used in DefaultGraphDrawer, I simply construct a VisualVertexBuilder as follows:
vertex_builder = vertex_drawer.VisualVertexBuilder(graph.vs, kwds)
where graph.vs is the vertex sequence of the graph (so the vertex builder can get access to the vertex attributes) and kwds is the set of keyword arguments passed to plot(). The vertex_builder variable then allows me to retrieve the calculated, effective visual properties of vertex i by writing something like vertex_builder[i].color; here, it is the responsibility of the VisualVertexBuilder to determine the effective color by looking at the vertex and checking its color attribute as well as looking at the keyword arguments and checking whether it contains vertex_color.
The bottom line is that the AttributeCollectorBase class is likely to be useful to you only if you are implementing a custom graph, vertex or edge drawer and you want to specify which vertex attributes you wish to treat as visual properties. If you only want to plot a graph and derive the visual properties of that particular graph from some other data, then AttributeCollectorBase is of no use to you. For instance, if you want the size of the vertex to be proportional to the degree, the preferred way to do it is either this:
sizes = rescale(graph.degree(), out_range=(0, 10))
plot(graph, vertex_size=sizes)
or this:
graph.vs["size"] = rescale(graph.degree(), out_range=(0, 10))
plot(graph)
If you have many visual properties, the best way is probably to collect them into a dictionary first and then pass that dictionary to plot(); e.g.:
visual_props = dict(
    vertex_size=rescale(graph.degree(), out_range=(0, 10)),
    edge_width=rescale(graph.es["weight"], out_range=(0, 5), scale=log10)
)
plot(graph, **visual_props)
Take a look at the documentation of the rescale function for more details. If you want to map some vertex property into the color of the vertex, you can still use rescale to map the property into the range 0-255, then round them to the nearest integer and use a palette when plotting:
palette = palettes["red-yellow-green"]
colors = [round(x) for x in rescale(graph.degree(), out_range=(0, len(palette) - 1))]
plot(graph, vertex_color=colors, palette=palette)
Here is the problem:
1) Suppose that I have some measured data (like 1M samples read from my electronics) and I need to process them with a processing chain.
2) This processing chain consists of different operations, which can be swapped or omitted and can have different parameters. A typical example would be to take the data, first pass it through a lookup table, then do an exponential fit, then multiply by some calibration factors.
3) Now, as I do not know which algorithm is the best, I'd like to evaluate the best possible implementation at each stage (for example, the LUTs can be produced in 5 ways and I want to see which one is the best).
4) I'd like to daisy-chain those functions such that I would construct a 'class' containing the top-level algorithm and owning (i.e. pointing to) a child class containing the lower-level algorithm.
I was thinking of using a doubly-linked list and generating a sequence like:
myCaptureClass.addDataTreatment(pmCalibrationFactor(opt, pmExponentialFit(opt, pmLUT(opt))))
where myCaptureClass is the class responsible for data-taking; after the data has been taken, it should also trigger the top-level data processing module (pm). This processing would first go deep into the bottom child (the LUT), treat the data there, then the middle (the exponential fit), then the top (the calibration factors), and return the data to the capture class, which would return it to the requestor.
Now this has several issues:
1) Everywhere on the net it is said that in Python one should not use doubly-linked lists.
2) This seems to me highly inefficient, because the data vectors are huge; hence I would prefer a solution using a generator function, but I'm not sure how to provide the 'plugin-like' mechanism.
Could someone give me a hint on how to solve this in a 'plugin style' with generators, so that I do not have to process the whole vector of X megabytes of data at once but can process it 'on request', as is natural when using a generator function?
thanks a lot
david
An addendum to the problem:
It seems that I did not express myself exactly. Hence: the data are generated by an external HW card plugged into a VME crate. They are 'fetched' in a single block transfer into a Python tuple, which is stored in myCaptureClass.
The set of operations to be applied is in fact applied to stream data, represented by this tuple. Even the exponential fit is a stream operation (it is a set of variable-state filters applied to each sample).
The parameter 'opt' I mistakenly showed was meant to express that each of those data processing classes comes with some configuration data, which modifies the behaviour of the method used to operate on the data.
The goal is to introduce into myCaptureClass a daisy-chained class (rather than a function) which, when the user asks for the data, is used to process the 'raw' data into its final form.
In order to save memory resources, I thought it might be a good idea to use a generator function to provide the data.
From this perspective it seems that the closest match to what I want to do is shown in bukzor's code. I'd prefer to have a class implementation instead of a function, but I guess that is just a cosmetic matter of implementing the call operator in the particular class that realizes the data operation...
This is how I imagine you would do this. I expect this is incomplete, since I don't fully understand your problem statement. Please let me know what I've done wrong :)
class ProcessingPipeline(object):
    def __init__(self, *functions, **kwargs):
        self.functions = functions
        self.data = kwargs.get('data')

    def __call__(self, data):
        return ProcessingPipeline(*self.functions, data=data)

    def __iter__(self):
        data = self.data
        for func in self.functions:
            data = func(data)
        return iter(data)  # wrap in iter() so this works even with no functions
# a few (very simple) operators, of different kinds
class Multiplier(object):
    def __init__(self, by):
        self.by = by

    def __call__(self, data):
        for x in data:
            yield x * self.by

def add(data, y):
    for x in data:
        yield x + y

from functools import partial

by2 = Multiplier(by=2)
sub1 = partial(add, y=-1)
square = lambda data: (x * x for x in data)
pp = ProcessingPipeline(square, sub1, by2)
print list(pp(range(10)))
print list(pp(range(-3, 4)))
Output:
$ python how-to-implement-daisychaining-of-pluggable-function-in-python.py
[-2, 0, 6, 16, 30, 48, 70, 96, 126, 160]
[16, 6, 0, -2, 0, 6, 16]
Get the functional module from PyPI. It has a compose function to compose two callables; with that, you can chain functions together.
Both that module and functools provide a partial function for partial application.
You can use the composed functions in a generator expression just like any other.
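If you'd rather not pull in a dependency, a hand-rolled compose is only a few lines (a minimal sketch; the square and add helpers are borrowed from the answer above, and the expected output is shown in a comment):

from functools import partial

def compose(f, g):
    # compose(f, g)(x) == f(g(x))
    return lambda x: f(g(x))

def add(data, y):
    for x in data:
        yield x + y

square = lambda data: (x * x for x in data)

# square first, then subtract 1
chain = compose(partial(add, y=-1), square)
print list(chain(range(5)))  # [-1, 0, 3, 8, 15]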
Not knowing exactly what you want, I feel like I should point out that you can put whatever you want inside a list comprehension:
l = [myCaptureClass.addDataTreatment(
         pmCalibrationFactor(opt, pmExponentialFit(opt, pmLUT(opt))))
     for opt in data]
will create a new list of data that has been passed through the composed functions.
Or you could create a generator expression to loop over; this won't construct a whole new list, it will just create an iterator. I don't think there's any advantage to doing things this way as opposed to just processing the data in the body of the loop, but it's kind of interesting to look at:
d = (myCaptureClass.addDataTreatment(
         pmCalibrationFactor(opt, pmExponentialFit(opt, pmLUT(opt))))
     for opt in data)

for thing in d:
    # do something
    pass
Or is opt the data?
I'm trying to write a basic drawing widget using the Tkinter library.
The very basic code I am using for now is:
from Tkinter import *
master = Tk()
w = Canvas(master, width=1200, height=800)
w_centre = 600
h_centre = 400
w.pack()
w.create_oval(w_centre-50, h_centre-50, w_centre+50, h_centre+50)
mainloop()
What I actually want to do is start with 3 variables: x, y (the centre of the circle) and size. From there, I can use simple maths to work out the (x0, y0, x1, y1) set required to make the circle (http://docs.huihoo.com/tkinter/tkinter-reference-a-gui-for-python/create_oval.html).
I want to do this programmatically, feeding in the size as a value from a dataset, and x, y as dependent values (if I need 1 circle I would use x1,y1; if I need two circles they would be x2,y2 & x3,y3, etc.). The purpose is to try and build a basic visualiser for a dataset I have. I figure I can write an array of the x,y coords that I can look up as required, and since the size value will be pulled from a list, it would be better to write a function that takes the size, looks up the x,y as required, and feeds the create_oval call the appropriate values.
I know I need to call the create_oval function with the x0,y0,x1,y1 values, and I wonder if there is a way I could call another function that would make these values for me each time, by handing it the x,y (centre of circle) and size (radius) values, and have it give me back the relevant x0,y0,x1,y1 values.
As this is a reusable piece of maths, I think I need to make a class, but I can't find a tutorial that helps me understand how to define the class function and then call it every time I need it.
I appreciate I've probably not worded this very well, I'm trying to learn rudimentary python on my own (with no CS background) so please forgive me if I've named something wrong, or missed something important.
Could someone throw me a hint or a pointer towards a decent resource?
Python allows you to return any kind of object from a function; in particular, you can return the tuple (x0, y0, x1, y1) that you need for create_oval:
def enclosing_box(x, y, radius):
    """Given the coordinates of the circle center and its radius,
    return the top-left and bottom-right coordinates of the enclosing box."""
    return (x - radius, y - radius, x + radius, y + radius)
Then you can use the *args syntax to call a function with a set of arguments taken from a sequence (a list, a tuple, etc.). You can use it to call create_oval this way:
coords = enclosing_box(x, y, radius)
w.create_oval(*coords)
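Putting it together with the canvas code from your question, drawing several circles from a dataset might look like this (the centres and sizes lists are made-up example data):

# hypothetical example data: one (x, y) centre per circle, plus its radius
centres = [(300, 200), (600, 400), (900, 600)]
sizes = [30, 50, 80]

for (x, y), radius in zip(centres, sizes):
    w.create_oval(*enclosing_box(x, y, radius))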