How to define toString method? - python

I have almost finished my assignment; the only thing left is to define the toString method shown here.
import math

class RegularPolygon:
    def __init__(self, n=1, l=1):
        self.__n = n
        self.__l = l

    def set_n(self, n):
        self.__n = n

    def get_n(self):
        return self.__n

    def addSides(self, x):
        self.__n = self.__n + x

    def setLength(self, l):
        self.__l = l

    def getLength(self):
        return self.__l

    def setPerimeter(self):
        return (self.__n * self.__l)

    def getArea(self):
        return (self.__l ** 2 / 4 * math.tan(math.radians(180 / self.__n)))

    def toString(self):
        return

x = 3
demo_object = RegularPolygon(3, 1)
print(demo_object.get_n(), demo_object.getLength())
demo_object.addSides(x)
print(demo_object.get_n(), demo_object.getLength())
print(demo_object.getArea())
print(demo_object.setPerimeter())
Basically, what the toString method does is return a string that includes the values of the internal variables. I also need help with the getArea portion.
Assignment instructions

The assignment says
... printing a string representation of a RegularPolygon object.
So I would expect you get to choose a suitable "representation". You could go for something like this:
return f'{self.__n} sided regular polygon of side length {self.__l}'
or, as suggested by @Roy Cohen:
return f'{self.__class__.__name__}({self.__n}, {self.__l})'
However, as @Klaus D. wrote in the comments, Python is not Java, and as such has its own standards and magic methods to use instead.
I would recommend reading this answer for an explanation of the differences between the two built-in string-representation magic methods: __repr__ and __str__. Once you implement them, they are called automatically whenever you use print() or similar, instead of you having to call .toString() every time.
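A minimal sketch of both methods applied to the question's class (the exact wording of the strings is just one choice):

class RegularPolygon:
    def __init__(self, n=1, l=1):
        self.__n = n
        self.__l = l

    def __str__(self):
        # friendly text; used by print() and str()
        return f'{self.__n} sided regular polygon of side length {self.__l}'

    def __repr__(self):
        # unambiguous text; used by repr() and the interactive prompt
        return f'{self.__class__.__name__}({self.__n}, {self.__l})'

print(RegularPolygon(3, 1))   # 3 sided regular polygon of side length 1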
Now to address the getters and setters. Typically in Python you avoid these and use properties instead. See this answer for more information, but to summarise: you either access an object's attributes directly, or you use the @property decorator to turn a method into a property.
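As a hedged sketch of the property approach on this class (the public attribute names here are illustrative, not required by the assignment):

class RegularPolygon:
    def __init__(self, n=1, l=1):
        self.n = n            # plain attributes; no get_n()/set_n() needed
        self.length = l

    @property
    def perimeter(self):
        # computed on access: polygon.perimeter, no parentheses
        return self.n * self.length

polygon = RegularPolygon(3, 1)
polygon.n += 3                # direct attribute access replaces addSides()
print(polygon.perimeter)      # 6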
Edit
Your area formula likely has an order-of-operations error. Make sure you are explicit about which operation you're performing first:
return self.__l ** 2 / (4 * math.tan(math.radians(180/self.__n)) )
This may be correct :)
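For reference, the textbook formula for the area of a regular n-gon of side length l is A = n*l**2 / (4*tan(pi/n)), so the line above may still be missing a factor of the number of sides. A hedged sketch of a fully corrected method:

import math

class RegularPolygon:
    def __init__(self, n=1, l=1):
        self.__n = n
        self.__l = l

    def getArea(self):
        # A = n * l**2 / (4 * tan(pi / n)); note math.radians(180 / n) == math.pi / n
        return (self.__n * self.__l ** 2) / (4 * math.tan(math.pi / self.__n))

print(RegularPolygon(4, 2).getArea())   # a 2x2 square -> ~4.0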

Related

Python method calls in constructor and variable naming conventions inside a class

I am processing some data in Python, and I defined a class for a sub-type of that data. You can find a very simplified version of the class definition below.
import numpy as np

class MyDataClass(object):
    def __init__(self, input1, input2, input3):
        """
        input1 and input2 are a 1D-array
        input3 is a 2D-array
        """
        self._x_value = None      # int
        self._y_value = None      # int
        self.data_array_1 = None  # 2D array
        self.data_array_2 = None  # 1D array
        self.set_data(input1, input2, input3)

    def set_data(self, input1, input2, input3):
        self._x_value, self._y_value = self.get_x_and_y_value(input1, input2)
        self.data_array_1 = self.get_data_array_1(input1)
        self.data_array_2 = self.get_data_array_2(input3)

    @staticmethod
    def get_x_and_y_value(input1, input2):
        # do some stuff
        return x_value, y_value

    def get_data_array_1(self, input1):
        # do some stuff
        return input1[self._x_value:self._y_value + 1]

    def get_data_array_2(self, input3):
        q = self.data_array_1 - input3[self._x_value:self._y_value + 1, :]
        return np.linalg.norm(q, axis=1)
I'm trying to follow the 'Zen of Python' and thereby write beautiful code. I'm quite sceptical about whether the class definition above is good practice. While thinking about alternatives I came up with the following questions, on which I would kindly like your opinions and suggestions.
Does it make sense to define "get" and "set" methods?
IMHO, as the resulting data will be used several times (in several plots and computation routines), it is more convenient to create and store them once. Hence, I calculate the data arrays once in the constructor.
I do not deal with huge amounts of data, so processing takes no more than a second; however, I cannot estimate the potential implications for RAM if someone were to use the same procedure on huge data.
Should I move the function get_x_and_y_value() out of the class scope and convert the static method into a plain function?
As the method is only called inside the class definition, it is better to keep it as a static method. If I were to define it as a function, should I put all the lines relevant to this class inside one script and create a module of it?
The argument names of the function get_x_and_y_value() are the same as those of the __init__ method. Should I change them?
Keeping them the same eases refactoring, but could confuse other readers.
In Python, you do not need getter and setter functions; use properties instead. This is why you can access attributes directly in Python, unlike languages such as Java where you must use getters and setters to protect your attributes.
Consider the following example of a Circle class. Because we can use the @property decorator, we don't need the getter and setter functions that other languages require. This is the Pythonic answer.
This should address all of your questions.
class Circle(object):
    def __init__(self, radius):
        self.radius = radius
        self.x = 0
        self.y = 0

    @property
    def diameter(self):
        return self.radius * 2

    @diameter.setter
    def diameter(self, value):
        self.radius = value / 2

    @property
    def xy(self):
        return (self.x, self.y)

    @xy.setter
    def xy(self, xy_pair):
        self.x, self.y = xy_pair
>>> c = Circle(radius=10)
>>> c.radius
10
>>> c.diameter
20
>>> c.diameter = 10
>>> c.radius
5.0
>>> c.xy
(0, 0)
>>> c.xy = (10, 20)
>>> c.x
10
>>> c.y
20
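Applied to the question's class, a rough sketch (the helper is renamed compute_x_and_y here, and its body is elided as in the original) could compute everything once in __init__, store plain attributes, and demote the static method to a module-level function:

import numpy as np

def compute_x_and_y(input1, input2):
    # module-level helper instead of a @staticmethod (question 2)
    # do some stuff
    ...

class MyDataClass(object):
    def __init__(self, input1, input2, input3):
        # computed once, stored as plain attributes; callers simply read
        # obj.data_array_1 -- no getters or setters (question 1)
        self._x_value, self._y_value = compute_x_and_y(input1, input2)
        self.data_array_1 = input1[self._x_value:self._y_value + 1]
        q = self.data_array_1 - input3[self._x_value:self._y_value + 1, :]
        self.data_array_2 = np.linalg.norm(q, axis=1)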

Setting parameters with decorators vs nested functions

I need to call a multi-parameter function many times while all but one parameter is fixed. I was thinking of using decorators:
# V1 - with @decorator
def dec_adder(num):
    def wrap(fun):
        def wrapped_fun(n1):
            return fun(n1, second_num=num)
        return wrapped_fun
    return wrap

@dec_adder(2)
def adder(first_num, second_num):
    return first_num + second_num

print(adder(5))  # 7
But this seems confusing, since it appears to call adder, a 2-parameter function, with only one argument.
Another approach is to use a nested function definition that uses local variables from the parent function:
# V2 - without @decorator
def add_wrapper(num):
    def wrapped_adder(num_2):
        return num + num_2
    return wrapped_adder

adder = add_wrapper(2)
print(adder(5))  # 7
But I hesitate to use this approach since in my actual implementation the wrapped function is very complex. My instinct is that it should have a stand-alone definition.
Forgive me if this ventures into the realm of opinion, but is either approach considered better design and/or more Pythonic? Is there some other approach I should consider?
functools.partial should work nicely in this case:
from functools import partial

def adder(n1, n2):
    return n1 + n2

adder_2 = partial(adder, 2)
adder_2(5)
Its docstring:
partial(func, *args, **keywords) - new function with partial application
of the given arguments and keywords.
-- so, you can set keyword arguments as well.
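For instance, you can fix the second parameter by keyword instead of the first by position; a self-contained sketch repeating the adder above:

from functools import partial

def adder(n1, n2):
    return n1 + n2

add_two = partial(adder, n2=2)   # n2 is fixed by keyword; n1 stays free
print(add_two(5))                # 7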
PS
Sadly, the built-in sum does not suit this case: it sums over an iterable (in fact, sum(iterable[, start]) -> value), so partial(sum, 2) does not work.
Another possible solution: you can use functools and a parametrized decorator:
from functools import wraps

def decorator(num):
    def decor(f):
        @wraps(f)
        def wrapper(n, *args, **kwargs):
            return f(n + num, *args, **kwargs)
        return wrapper
    return decor

@decorator(num=2)  # number to add to the parameter
def test(n, *args, **kwargs):
    print(n)

test(10)  # base amount - prints 12

SICP "streams as signals" in Python

I have found some nice examples (here, here) of implementing SICP-like streams in Python. But I am still not sure how to handle an example like the integral found in SICP 3.5.3 "Streams as signals."
The Scheme code found there is
(define (integral integrand initial-value dt)
  (define int
    (cons-stream initial-value
                 (add-streams (scale-stream integrand dt)
                              int)))
  int)
What is tricky about this one is that the returned stream int is defined in terms of itself (i.e., the stream int is used in the definition of the stream int).
I believe Python could have something similarly expressive and succinct... but not sure how. So my question is, what is an analogous stream-y construct in Python? (What I mean by a stream is the subject of 3.5 in SICP, but briefly, a construct (like a Python generator) that returns successive elements of a sequence of indefinite length, and can be combined and processed with operations such as add-streams and scale-stream that respect streams' lazy character.)
There are two ways to read your question. The first is simply: How do you use Stream constructs, perhaps the ones from your second link, but with a recursive definition? That can be done, though it is a little clumsy in Python.
In Python you can represent looped data structures but not directly. You can't write:
l = [l]
but you can write:
l = [None]
l[0] = l
Similarly you can't write:
def integral(integrand, initial_value, dt):
    int_rec = cons_stream(initial_value,
                          add_streams(scale_stream(integrand, dt),
                                      int_rec))
    return int_rec
but you can write:
def integral(integrand, initial_value, dt):
    placeholder = Stream(initial_value, lambda: None)
    int_rec = cons_stream(initial_value,
                          add_streams(scale_stream(integrand, dt),
                                      placeholder))
    placeholder._compute_rest = lambda: int_rec.rest
    return int_rec
Note that we need to clumsily pre-compute the first element of placeholder and then fix up the recursion only for the rest of the stream (the placeholder delegates to int_rec.rest so that it stays in step with the real stream). But this does all work, alongside appropriate definitions of all the rest of the code; I'll stick it all at the bottom of this answer.
However, the second part of your question seems to be asking how to do this naturally in Python. You ask for an "analogous stream-y construct in Python". Clearly the answer to that is exactly the generator. The generator naturally provides the lazy evaluation of the stream concept. It differs by not being naturally expressed recursively but then Python does not support that as well as Scheme, as we will see.
In other words, the strict stream concept can be expressed in Python (as in the link and above) but the idiomatic way to do it is to use generators.
It is more or less possible to replicate the Scheme example by a kind of direct mechanical transformation of stream to generator (but avoiding the built-in int):
def integral_rec(integrand, initial_value, dt):
    def int_rec():
        for x in cons_stream(initial_value,
                             add_streams(scale_stream(integrand, dt),
                                         int_rec())):
            yield x
    for x in int_rec():
        yield x

def cons_stream(a, b):
    yield a
    for x in b:
        yield x

def add_streams(a, b):
    while True:
        try:
            yield next(a) + next(b)
        except StopIteration:
            # stop cleanly when either stream runs out (PEP 479:
            # StopIteration must not leak out of a generator)
            return

def scale_stream(a, b):
    for x in a:
        yield x * b
The only tricky thing here is to realise that you need to eagerly call the recursive use of int_rec as an argument to add_streams. Calling it doesn't start it yielding values; it just creates the generator, ready to yield them lazily when needed. Note also that this assumes the integrand can be iterated more than once (a list, say, rather than a one-shot generator), because every recursive int_rec() call builds a fresh scale_stream over it.
This works nicely for small integrands, though it's not very Pythonic. The Scheme version works by optimising the tail recursion; the Python version will exceed the maximum stack depth if your integrand is too long. So this is not really appropriate in Python.
A direct and natural pythonic version would look something like this, I think:
def integral(integrand, initial_value, dt):
    value = initial_value
    yield value
    for x in integrand:
        value += dt * x
        yield value
This works efficiently and correctly treats the integrand lazily as a "stream". However, it uses iteration rather than recursion to unpack the integrand iterable, which is more the Python way.
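A quick sanity check of this version (the integrand values are just an illustration):

# integral of [1..5] with initial value 8 and dt = 0.5
print(list(integral([1, 2, 3, 4, 5], 8, 0.5)))
# [8, 8.5, 9.5, 11.0, 13.0, 15.5]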
In moving to natural Python I have also removed the stream combination functions - for example, replaced add_streams with +=. But we could still use them if we wanted a sort of halfway house version:
def accum(initial_value, a):
    value = initial_value
    yield value
    for x in a:
        value += x
        yield value

def integral_hybrid(integrand, initial_value, dt):
    for x in accum(initial_value, scale_stream(integrand, dt)):
        yield x
This hybrid version uses the stream combinations from the Scheme and avoids only the tail recursion. This is still Pythonic, and Python includes various other nice ways to work with iterables in the itertools module. They all "respect streams' lazy character", as you ask.
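For example, itertools.accumulate (whose initial keyword arrived in Python 3.8) can express the same integral without a hand-written loop; a minimal sketch:

from itertools import accumulate

def integral_acc(integrand, initial_value, dt):
    # running sum of the scaled integrand, seeded with the initial value
    return accumulate((dt * x for x in integrand), initial=initial_value)

print(list(integral_acc([1, 2, 3, 4, 5], 8, 0.5)))
# [8, 8.5, 9.5, 11.0, 13.0, 15.5]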
Finally here is all the code for the first recursive stream example, much of it taken from the Berkeley reference:
class Stream(object):
    """A lazily computed recursive list."""
    def __init__(self, first, compute_rest, empty=False):
        self.first = first
        self._compute_rest = compute_rest
        self.empty = empty
        self._rest = None
        self._computed = False

    @property
    def rest(self):
        """Return the rest of the stream, computing it if necessary."""
        assert not self.empty, 'Empty streams have no rest.'
        if not self._computed:
            self._rest = self._compute_rest()
            self._computed = True
        return self._rest

    def __repr__(self):
        if self.empty:
            return '<empty stream>'
        return 'Stream({0}, <compute_rest>)'.format(repr(self.first))

Stream.empty = Stream(None, None, True)

def cons_stream(a, b):
    return Stream(a, lambda: b)

def add_streams(a, b):
    if a.empty or b.empty:
        return Stream.empty
    def compute_rest():
        return add_streams(a.rest, b.rest)
    return Stream(a.first + b.first, compute_rest)

def scale_stream(a, scale):
    if a.empty:
        return Stream.empty
    def compute_rest():
        return scale_stream(a.rest, scale)
    return Stream(a.first * scale, compute_rest)

def make_integer_stream(first=1):
    def compute_rest():
        return make_integer_stream(first + 1)
    return Stream(first, compute_rest)

def truncate_stream(s, k):
    if s.empty or k == 0:
        return Stream.empty
    def compute_rest():
        return truncate_stream(s.rest, k - 1)
    return Stream(s.first, compute_rest)

def stream_to_list(s):
    r = []
    while not s.empty:
        r.append(s.first)
        s = s.rest
    return r

def integral(integrand, initial_value, dt):
    placeholder = Stream(initial_value, lambda: None)
    int_rec = cons_stream(initial_value,
                          add_streams(scale_stream(integrand, dt),
                                      placeholder))
    placeholder._compute_rest = lambda: int_rec.rest
    return int_rec

a = truncate_stream(make_integer_stream(), 5)
print(stream_to_list(integral(a, 8, .5)))
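With the placeholder delegating to int_rec.rest as above, this prints [8, 8.5, 9.5, 11.0, 13.0, 15.5], in line with the generator versions.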

Using super to make a pipeline?

I was thinking about how to use super to make a pipeline in Python. I have a series of transformations I must apply to a stream, and I thought a good way to do it was something along the lines of:
class MyBase(object):
    def transformData(self, x):
        return x

class FirstStage(MyBase):
    def transformData(self, x):
        y = super(FirstStage, self).transformData(x)
        return self.__transformation(y)

    def __transformation(self, x):
        return x * x

class SecondStage(FirstStage):
    def transformData(self, x):
        y = super(SecondStage, self).transformData(x)
        return self.__transformation(y)

    def __transformation(self, x):
        return x + 1
It works as I intended, but there's a potential for repetition. If I have N stages, I'll have N identical transformData methods where the only thing that changes is the name of the current class.
Is there a way to remove this boilerplate? I tried a few things but the results only proved to me that I hadn't understood perfectly how super worked.
What I wanted was to define only the method __transformation and naturally inherit a transformData method that would go up in MRO, call that class' transformData method and then call the current class' __transformation on the result. Is it possible or do I have to define a new identical transformData for each child class?
I agree that this is a poor way of implementing a pipeline; that can be done with much simpler (and clearer) schemes. I thought of this as the smallest modification I could make to an existing model to get a pipeline out of the existing classes without changing the code too much. I agree it's not the best way to do it; it would be a trick, and tricks should be avoided. I also thought of it as a way of better understanding how super works.
Buuuut. Out of curiosity... is it possible to do it in the above scheme without the transformData repetition? This is a genuine doubt. Is there a trick to inherit transformData in a way that the super call in it is changed to be called on the current class?
It would be a tremendously unclear, unreadable, smart-ass trickery. I know. But is it possible?
I don't think using inheritance for a pipeline is the right way to go.
Instead, consider something like this -- here with "simple" examples and a parametrized one (a class using the __call__ magic method, though returning a closure would do too, or even "JITing" one by way of eval).
def two_power(x):
    return x * x

def add_one(x):
    return x + 1

class CustomTransform(object):
    def __init__(self, multiplier):
        self.multiplier = multiplier

    def __call__(self, value):
        return value * self.multiplier

def transform(data, pipeline):
    for datum in data:
        for transform in pipeline:
            datum = transform(datum)
        yield datum

pipe = (two_power, two_power, add_one, CustomTransform(1.25))
print(list(transform([1, 2, 4, 8], pipe)))
would output
[2.5, 21.25, 321.25, 5121.25]
The problem is that using inheritance here is rather weird in terms of OOP. And do you really need to define the whole chain of transformations when defining classes?
But it's better to forget OOP here, the task is not for OOP. Just define functions for transformations:
def get_pipeline(*functions):
    def pipeline(x):
        for f in functions:
            x = f(x)
        return x
    return pipeline

p = get_pipeline(lambda x: x * 2, lambda x: x + 1)
print(p(5))
An even shorter version is here:
from functools import reduce  # reduce is no longer a builtin in Python 3

def get_pipeline(*fs):
    return lambda v: reduce(lambda x, f: f(x), fs, v)

p = get_pipeline(lambda x: x * 2, lambda x: x + 1)
print(p(5))
And here is an OOP solution. It is rather clumsy compared to the previous one:
class Transform(object):
    def __init__(self, prev=None):
        self.prev_transform = prev

    def transformation(self, x):
        raise Exception("Not implemented")

    def transformData(self, x):
        if self.prev_transform:
            x = self.prev_transform.transformData(x)
        return self.transformation(x)

class TransformAdd1(Transform):
    def transformation(self, x):
        return x + 1

class TransformMul2(Transform):
    def transformation(self, x):
        return x * 2

t = TransformAdd1(TransformMul2())
print(t.transformData(1))  # 1 * 2 + 1
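As for the literal "is it possible?" question: one deliberately smart-ass sketch (exactly the kind of trickery warned about above, so treat it as a curiosity rather than a recommendation) defines transformData once in the base class and walks the MRO, applying each class's name-mangled __transformation from base to derived:

class MyBase(object):
    def transformData(self, x):
        # apply each class's private __transformation, base classes first
        for cls in reversed(type(self).__mro__):
            f = cls.__dict__.get('_{}__transformation'.format(cls.__name__))
            if f is not None:
                x = f(self, x)
        return x

class FirstStage(MyBase):
    def __transformation(self, x):
        return x * x

class SecondStage(FirstStage):
    def __transformation(self, x):
        return x + 1

print(SecondStage().transformData(3))  # (3 * 3) + 1 = 10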

Returning an lvalue from a function in python

[Sorry, I'm new to Python. Although it seems to be a very basic question, I did my share of due diligence before asking this audience, trying to avoid really stupid questions.]
I'm trying to figure out the correct idiom for returning an l-value from a function. Assume I have a container of 64 objects, and I want to be able to return a reference to these objects.
class ChessBoard:
    def __init__(self):
        self.squares = [None for x in range(64)]

    def square(self, row, col):
        return self.squares[row*8 + col]   # <---- I'd like this to be an l-value
Then, from outside the class I want to:
board = ChessBoard()
board.square(0,0) = Piece( Shapes.ROOK, Colors.White ) <-- I'm getting an error here
board.square(0,1) = Piece( Shapes.BISHOP, Colors.White )
... etc.
So, I would like the function square to return an l-value (something like a reference in C++), but I can't find anything resembling a reference or a pointer in the language. If I stored a list containing one Piece in each square, I could presumably do something like board.square(0,0)[0] = Piece, but it seems crazy (or maybe not; as I said, I'm new to the language).
How would you approach this data structure?
In Python, everything is a reference. The only problem is that None is immutable, so you can't use the returned reference to change the value.
You also can't override the assignment operator, so you won't get this particular kind of behaviour. However, a good and very flexible solution would be to override the __setitem__ and __getitem__ methods to implement the subscription operator ([]) for the class:
class ChessBoard(object):
    def __init__(self):
        self.squares = [None] * 64

    def __setitem__(self, key, value):
        row, col = key
        self.squares[row*8 + col] = value

    def __getitem__(self, key):
        row, col = key
        return self.squares[row*8 + col]
Usage:
>>> c = ChessBoard()
>>> c[1,2] = 5
>>> c[1,2]
5
You can try something like this, at the cost of having to put bogus [:] indexers around:
class Board:
    def __init__(self):
        self.squares = [None for x in range(64)]

    def square(self, row, col):
        squares = self.squares
        class Prox:
            def __getitem__(self, i):
                return squares[row*8 + col]
            def __setitem__(self, i, v):
                squares[row*8 + col] = v
        return Prox()
Then you can do
b=Board()
b.square(2,3)[:]=Piece('Knight')
if b.square(x,y)[:] == Piece('King') ...
And so on. It doesn't actually matter what you put in the []s, it just has to be something.
(Got the idea from the Proxies Perl6 uses to do this)
As Niklas points out, you can't return an l-value.
However, in addition to overriding subscription, you can also use properties (an application of descriptors: http://docs.python.org/howto/descriptor.html) to create an object attribute which, when read from or assigned to, runs code.
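A small sketch of that idea (the Square class and its piece attribute are illustrative names, not from the question):

class Square(object):
    def __init__(self):
        self._piece = None

    @property
    def piece(self):
        return self._piece        # runs on every read: square.piece

    @piece.setter
    def piece(self, value):
        self._piece = value       # runs on every assignment: square.piece = ...

sq = Square()
sq.piece = 'white rook'
print(sq.piece)                   # white rook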
(Not answering your question in the title, but your "How would you approach this data structure?" question:) A more pythonic solution for your data structure would be using a list of lists:
# define a function that generates an empty chess board
make_chess_board = lambda: [[None for x in range(8)] for y in range(8)]

# grab an instance
b = make_chess_board()

# play the game!
b[0][0] = Piece(Shapes.ROOK, Colors.White)
b[0][1] = Piece(Shapes.BISHOP, Colors.White)

# Or use tuples:
b[0][0] = (Shapes.ROOK, Colors.White)
b[0][1] = (Shapes.BISHOP, Colors.White)
