Python NameError: name is not defined - method not found

I'm working from a book, very much newbie stuff, and the code below is from the book and defines a simple class. But for some reason, the author has decided to put a "helper method" called check_index outside the class. I cannot for the life of me figure out why he would do this, as the method seems integral to the operation of the class. He writes:
The index checking is taken care of by a utility function I’ve written
for the purpose, check_index.
I have tried putting it inside the class (the code below is as it is in the book), but the runtime refuses to find the method - it falls over with
NameError: name 'check_index' is not defined
My questions are, why did the author put this "helper method" outside the class, and why does the code not work when I move the method inside the class:
class ArithmeticSequence:
    def __init__(self, start=0, step=1):
        self.start = start              # Store the start value
        self.step = step                # Store the step value
        self.changed = {}               # No items have been modified

    def __getitem__(self, key):
        check_index(key)
        try: return self.changed[key]   # Modified?
        except KeyError:                # otherwise ...
            return self.start + key * self.step  # ... calculate the value

    def __setitem__(self, key, value):
        check_index(key)
        self.changed[key] = value       # Store the changed value

def check_index(key):
    if not isinstance(key, int): raise TypeError
    if key < 0: raise IndexError
When I move the method inside the class, I just slot it in with the other methods. But it is not found by the runtime. Why?
class ArithmeticSequence:
    def __init__(self, start=0, step=1):
        self.start = start              # Store the start value
        self.step = step                # Store the step value
        self.changed = {}               # No items have been modified

    def check_index(key):
        if not isinstance(key, int): raise TypeError
        if key < 0: raise IndexError

    def __getitem__(self, key):
        check_index(key)
        try: return self.changed[key]   # Modified?
        except KeyError:                # otherwise ...
            return self.start + key * self.step  # ... calculate the value

    def __setitem__(self, key, value):
        check_index(key)
        self.changed[key] = value       # Store the changed value

You need to use self
Ex:
class ArithmeticSequence:
    def __init__(self, start=0, step=1):
        self.start = start              # Store the start value
        self.step = step                # Store the step value
        self.changed = {}               # No items have been modified

    def check_index(self, key):
        if not isinstance(key, int): raise TypeError
        if key < 0: raise IndexError

    def __getitem__(self, key):
        self.check_index(key)
        try: return self.changed[key]   # Modified?
        except KeyError:                # otherwise ...
            return self.start + key * self.step  # ... calculate the value

    def __setitem__(self, key, value):
        self.check_index(key)
        self.changed[key] = value       # Store the changed value
And call the function through self, e.g. self.check_index(key).

Your def check_index(key) still defines a method of ArithmeticSequence, regardless of what you call the first argument, which means you have to call it like a regular instance method (self.check_index()), and if you want to pass it an argument you have to add it after self. If you want to define a method on the class itself, you can use @staticmethod or @classmethod:
class Foo:
    @staticmethod
    def bar(key):
        return key

    @classmethod
    def baz(cls, key):
        return key

    def quux(self):
        print(Foo.bar("abcd"), Foo.baz("abcd"))

Foo().quux()

Note that, as written, ArithmeticSequence is not a new-style class in Python 2, since it does not inherit from object.
You have 2 options:
Add self to check_index. In the class, you will use it as self.check_index(key). You will need to instantiate an ArithmeticSequence object.
Add @staticmethod before check_index. You will use it as ArithmeticSequence.check_index(key).
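For reference, a minimal sketch of the @staticmethod option, reusing the class from the question:
def check via a staticmethod:
class ArithmeticSequence:
    def __init__(self, start=0, step=1):
        self.start = start
        self.step = step
        self.changed = {}

    @staticmethod
    def check_index(key):
        # No self parameter: the check needs no instance state
        if not isinstance(key, int): raise TypeError
        if key < 0: raise IndexError

    def __getitem__(self, key):
        self.check_index(key)   # or ArithmeticSequence.check_index(key)
        try: return self.changed[key]
        except KeyError:
            return self.start + key * self.step

    def __setitem__(self, key, value):
        self.check_index(key)
        self.changed[key] = value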

Related

Python validating an attempt to append to a list attribute

Just learning about properties and setters in python, and seems fair enough when we have a mutable attribute. But what happens when I want to validate a .append() on a list for example? In the below, I can validate the setting of the attribute and it works as expected. But I can bypass its effect by simply appending to get more cookies onto the tray...
class CookieTray:
    def __init__(self):
        self.cookies = []

    @property
    def cookies(self):
        return self._cookies

    @cookies.setter
    def cookies(self, cookies):
        if len(cookies) > 8:
            raise ValueError("Too many cookies in the tray!")
        self._cookies = cookies

if __name__ == '__main__':
    tray = CookieTray()
    print("Cookies: ", tray.cookies)
    try:
        tray.cookies = [1,1,0,0,0,1,1,0,1] # too many
    except Exception as e:
        print(e)

    tray.cookies = [1,0,1,0,1,0]
    print(tray.cookies)
    tray.cookies.append(0)
    tray.cookies.append(0)
    tray.cookies.append(1) # too many, but can still append
    print(tray.cookies)
Silly example, but I hope it illustrates my question. Should I just be avoiding the setter and making a "setter" method, like add_cookie(self, cookie_type) and then do my validation in there?
The setter only applies when assigning to the attribute. As you've seen, mutating the attribute bypasses this.
To apply the validation to the object being mutated we can use a custom type. Here's an example which just wraps a normal list:
import collections.abc

class SizedList(collections.abc.MutableSequence):
    def __init__(self, maxlen):
        self.maxlen = maxlen
        self._list = []

    def check_length(self):
        if len(self._list) >= self.maxlen:
            raise OverflowError("Max length exceeded")

    def __setitem__(self, i, v):
        self.check_length()
        self._list[i] = v

    def insert(self, i, v):
        self.check_length()
        self._list.insert(i, v)

    def __getitem__(self, i): return self._list[i]
    def __delitem__(self, i): del self._list[i]
    def __len__(self): return len(self._list)
    def __repr__(self): return f"{self._list!r}"
When overriding container types collections.abc can be useful - we can see the abstract methods that must be implemented: __getitem__, __setitem__, __delitem__, __len__, and insert in this case. All of them are delegated to the list object that's being wrapped, with the two that add items having the added length check.
The repr isn't needed, but makes it easier to check the contents - once again just delegating to the wrapped list.
With this you can simply replace the self.cookies = [] line with self.cookies = SizedList(8).
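For illustration, a quick sketch of how that wrapping behaves, assuming __init__ is changed as described:
tray = CookieTray()
tray.cookies.extend([1, 0, 1, 0, 1, 0])   # extend() comes from MutableSequence
tray.cookies.append(0)
tray.cookies.append(0)                    # 8 cookies: at the limit
try:
    tray.cookies.append(1)                # 9th cookie
except OverflowError as e:
    print(e)                              # Max length exceeded
print(tray.cookies)                       # [1, 0, 1, 0, 1, 0, 0, 0]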
You would need to create a ValidatingList class that overrides list.append, something like this:
class ValidatingList(list):
    def append(self, value):
        if len(self) >= 8:
            raise ValueError("Too many items in the list")
        else:
            super().append(value)
You then convert your cookies to a ValidatingList:
class CookieTray:
    def __init__(self):
        self.cookies = []

    @property
    def cookies(self):
        return self._cookies

    @cookies.setter
    def cookies(self, cookies):
        if len(cookies) > 8:
            raise ValueError("Too many cookies in the tray!")
        self._cookies = ValidatingList(cookies)
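For illustration, with that change the append path is validated too (a quick sketch):
tray = CookieTray()
tray.cookies = [1, 0, 1, 0, 1, 0]   # the setter wraps this in a ValidatingList
tray.cookies.append(0)
tray.cookies.append(0)              # 8 items: still allowed
try:
    tray.cookies.append(1)          # 9th item is rejected by ValidatingList.append
except ValueError as e:
    print(e)                        # Too many items in the list
print(tray.cookies)                 # [1, 0, 1, 0, 1, 0, 0, 0]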

How to decorate a class and use descriptors to access properties?

I am trying to begin ;) to understand how to properly work with decorators and descriptors in Python 3. I came up with an idea that I'm trying to figure out how to code.
I want to be able to create a class A decorated with a certain "function" B or "class" B that allows me to create an instance of A, after declaring properties on A to be a component of a certain type and assigning values in A's __init__ magic method. For instance:
componentized is a certain "function B" or "class B" that allows me to declare a class Vector. I declare x and y to be a component(float) like this:
@componentized
class Vector:
    x = component(float)
    y = component(float)

    def __init__(self, x, y):
        self.x = x
        self.y = y
What I have in mind is to be able to do this:
v = Vector(1, 2)
v.x           # returns 1
But the main goal is that I want to do this for every property marked as component(float):
v.xy          # returns the tuple (1, 2)
v.xy = (3, 4) # assigns 3 to x and 4 to y
My idea is to create a decorator @componentized that overrides the __getattr__ and __setattr__ magic methods. Sort of like this:
def componentized(cls):
    class Wrapper(object):
        def __init__(self, *args):
            self.wrapped = cls(*args)
        def __getattr__(self, name):
            print("Getting :", name)
            if(len(name) == 1):
                return getattr(self.wrapped, name)
            t = []
            for x in name:
                t.append(getattr(self.wrapped, x))
            return tuple(t)
    return Wrapper  # the decorator has to return the wrapper class

@componentized
class Vector(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
And it kind of worked, but I don't think I quite understood what happened, because when I tried to do an assignment and override the __setattr__ magic method, it got invoked even when I was instantiating the class: two times in the following example:
vector = Vector(1,2)
vector.x = 1
How could I achieve that sort of behavior?
Thanks in advance! If more info is needed don't hesitate to ask!
EDIT:
Following @Diego's answer I managed to do this:
def componentized(cls):
    class wrappedClass(object):
        def __init__(self, *args, **kwargs):
            t = cls(*args, **kwargs)
            self.wrappedInstance = t

        def __getattr__(self, item):
            if(len(item) == 1):
                return self.wrappedInstance.__getattribute__(item)
            else:
                return tuple(self.wrappedInstance.__getattribute__(char) for char in item)

        def __setattr__(self, attributeName, value):
            if isinstance(value, tuple):
                for char, val in zip(attributeName, value):
                    self.wrappedInstance.__setattr__(char, val)
            elif isinstance(value, int):  # EMPHASIS HERE
                for char in attributeName:
                    self.wrappedInstance.__setattr__(char, value)
            else:
                object.__setattr__(self, attributeName, value)

    return wrappedClass
And having a class Vector like this:
@componentized
class Vector:
    def __init__(self, x, y):
        self.x = x
        self.y = y
It kind of behaves like I wanted, but I still have no idea how to achieve:
x = component(float)
y = component(float)
inside the Vector class, to somehow register x and y as being of type float, so that at the #EMPHASIS HERE line (where I hardcoded a specific type) I can check whether the value someone is assigning to x and/or y on an instance of Vector matches the type I defined it with:
x = component(float)
So I tried this (a component (descriptor) class):
class component(object):
    def __init__(self, t, initval=None):
        self.val = initval
        self.type = t

    def __get__(self, obj, objtype):
        return self.val

    def __set__(self, obj, val):
        self.val = val
to use component like a descriptor, but I couldn't manage a workaround to handle the type. I tried using an array to hold val and type, but then I didn't know how to access it from the decorator's __setattr__ method.
Can you point me into the right direction?
PS: Hope you guys understand what I am trying to do and lend me a hand with it. Thanks in advance
Solution
Well, using @Diego's answer (which I will be accepting) and some workarounds for my personal needs, I managed to do this:
Decorator (componentized)
def componentized(cls):
    class wrappedClass(object):
        def __init__(self, *args):
            self.wrappedInstance = cls(*args)

        def __getattr__(self, name):
            # Check if we only request a single-char named value
            # and return the value using getattr() on the wrappedInstance.
            # If not, then we return a tuple built from every wrappedInstance attribute
            if(len(name) == 1):
                return getattr(self.wrappedInstance, name)
            else:
                return tuple(getattr(self.wrappedInstance, char) for char in name)

        def __setattr__(self, attributeName, value):
            try:
                # We check if there is no instance created in the wrappedClass __dict__,
                # meaning we are initializing the class
                if len(self.__dict__) == 0:
                    self.__dict__[attributeName] = value
                elif isinstance(value, tuple):  # We get a tuple assignment
                    self.__checkMultipleAssign(attributeName)
                    for char, val in zip(attributeName, value):
                        setattr(self.wrappedInstance, char, val)
                else:
                    # We get a single value assigned to every component
                    self.__checkMultipleAssign(attributeName)
                    for char in attributeName:
                        setattr(self.wrappedInstance, char, value)
            except Exception as e:
                print(e)

        def __checkMultipleAssign(self, attributeName):
            # With this we avoid assigning multiple values to the same property, like:
            # instance.xx = (2,3)  => Exception
            for i in range(0, len(attributeName)):
                for j in range(i+1, len(attributeName)):
                    if attributeName[i] == attributeName[j]:
                        raise Exception("Multiple component assignment not allowed")

    return wrappedClass
component (descriptor class)
class component(object):
    def __init__(self, t):
        self.type = t        # We store the type
        self.value = None    # We set an initial value of None

    def __get__(self, obj, objtype):
        return self.value    # Return the value

    def __set__(self, obj, value):
        try:
            # We check whether the type of the component differs from the
            # assigned value's type and raise an exception if so
            if self.type != type(value):
                raise Exception("Type \"{}\" does not match \"{}\".\n\t--Assignment never happened".format(type(value), self.type))
        except Exception as e:
            print(e)
        else:
            # If the types match we set the value
            self.value = value
(The code comments are self-explanatory.)
With this design I can achieve what I wanted (explained above).
Thank you all for your help.
I think there is an easier way to achieve the behavior: overloading the __getattr__ and __setattr__ functions.
Getting vector.xy:
class Vector:
    ...
    def __getattr__(self, item):
        return tuple(object.__getattribute__(self, char) for char in item)
The __getattr__ function is called only when the "normal" ways of accessing an attribute fail, as stated in the Python documentation.
So, when Python doesn't find vector.xy, the __getattr__ method is called and we return a tuple of every value (i.e. x and y).
We use object.__getattribute__ to avoid infinite recursion.
Setting vector.abc:
def __setattr__(self, key, value):
    if isinstance(value, tuple) and len(key) == len(value):
        for char, val in zip(key, value):
            object.__setattr__(self, char, val)
    else:
        object.__setattr__(self, key, value)
Unlike __getattr__, the __setattr__ method is always called, so we set each value separately only when the name we want to set has the same length as the tuple of values.
>>> vector = Vector(4, 2)
>>> vector.x
4
>>> vector.xy
(4, 2)
>>> vector.xyz = 1, 2, 3
>>> vector.xyxyxyzzz
(1, 2, 1, 2, 1, 2, 3, 3, 3)
The only drawback is that if you really want to assign a tuple (suppose you have an attribute called size):
vector.size = (1, 2, 3, 4)
then s, i, z and e will be assigned separately, and that's obviously not what you want!
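One possible guard against that (my own sketch, not part of the original answer) is to split the assignment only when every character already names an existing attribute of the instance, so real attribute names like size pass through untouched:
def __setattr__(self, key, value):
    # Treat the name as a bundle of components only if every character is
    # already an attribute (e.g. 'x' and 'y'); new components must therefore
    # be created one at a time before bundled assignment works.
    bundle = (len(key) > 1
              and isinstance(value, tuple)
              and len(key) == len(value)
              and all(char in self.__dict__ for char in key))
    if bundle:
        for char, val in zip(key, value):
            object.__setattr__(self, char, val)
    else:
        object.__setattr__(self, key, value)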
FWIW, I've done something similar by abusing __slots__. I created an Abstract Base Class which read the subclass's slots and then used that for pickling (with __getstate__ and __setstate__). You could do something similar with __getattr__/__setattr__, but you will still need to muck about with the class's actual attrs vs the ones you want to use as get/set properties.
Previous answer:
Why not just use the @property decorator? See the third example in the docs. You would apply it by first changing the attr names to something different and private (like _x) and then using the actual name x as the property.
class Vector(object):
    def __init__(self, x, y):
        self._x = x
        self._y = y

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        self._x = value

    @property
    def xy(self):
        return (self._x, self._y)    # returns a tuple

    @xy.setter
    def xy(self, value):
        self._x, self._y = value     # splits out `value` to _x and _y
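For illustration, usage of this property-based Vector might look like:
v = Vector(1, 2)
print(v.x)       # 1
print(v.xy)      # (1, 2)
v.xy = (3, 4)    # the xy setter unpacks the tuple into _x and _y
print(v.xy)      # (3, 4)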
And if you want this to happen with every attr automatically, then you will need to use a metaclass, as @kasramvd commented. If you don't have many such different classes where you want to do this, or many properties, it may not be worth the effort.

pythonic way to declare multiple class instance properties as numbers?

I have a class with several properties, each of which has to be a number. After repeating the same code over and over again, I think there must be a more pythonic way to declare multiple class instance properties as numbers.
Right now I set each property value to None and raise a TypeError if the value is set to a non-number type. I'd prefer to set the property type to a number when the property is initialized.
Thanks!
Example:
import numbers

class classWithNumbers(object):
    def __init__(self):
        self._numProp1 = None
        self._numProp2 = None

    @property
    def numProp1(self):
        return self._numProp1

    @numProp1.setter
    def numProp1(self, value):
        if not isinstance(value, numbers.Number):  # repeated test for number
            raise TypeError("Must be number")
        self._numProp1 = value

    @property
    def numProp2(self):
        return self._numProp2

    @numProp2.setter
    def numProp2(self, value):
        if not isinstance(value, numbers.Number):
            raise TypeError("Must be number")
        self._numProp2 = value
Also, I actually have this wrapped into a method that is repeated at each property setter:
def isNumber(value):
    if not isinstance(value, numbers.Number):
        raise TypeError("Must be number")
If every property of this class should be a number, you can implement a custom __setattr__ method:
import numbers

class ClassWithNumbers(object):
    def __init__(self):
        self.num_prop1 = 0
        self.num_prop2 = 0

    def __setattr__(self, name, value):
        if not isinstance(value, numbers.Number):
            raise TypeError("Must be number")
        super(ClassWithNumbers, self).__setattr__(name, value)
From documentation: __setattr__ (is) called when an attribute assignment is attempted. This is called instead of the normal mechanism (i.e. store the value in the instance dictionary). name is the attribute name, value is the value to be assigned to it.
A more general approach would be to not allow the type of a once-assigned attribute to change:
class ClassWithUnchangeableTypes(object):
    def __init__(self):
        self.num_prop1 = 0
        self.num_prop2 = 0

    def __setattr__(self, name, value):
        if hasattr(self, name):  # this means that we assigned value in the past
            previous_value_type = type(getattr(self, name))
            if not isinstance(value, previous_value_type):
                raise TypeError("Must be {}".format(previous_value_type))
        super(ClassWithUnchangeableTypes, self).__setattr__(name, value)
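For illustration, a quick sketch of how the first class behaves:
c = ClassWithNumbers()
c.num_prop1 = 3.5        # fine: float is a numbers.Number
c.num_prop2 = 7          # fine: int is a numbers.Number
try:
    c.num_prop1 = "ten"  # rejected by __setattr__
except TypeError as e:
    print(e)             # Must be number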
Speaking of pythonic, from pep8:
Class names should normally use the CapWords convention.
Use the function naming rules: lowercase with words separated by underscores as necessary to improve readability.
A fairly modern (Python 3.5+) and pythonic way is to use type hints (note that a plain annotation documents the expected type but is not enforced at runtime):
@property
def numProp1(self):
    return self._numProp1

@numProp1.setter
def numProp1(self, value: int):
    self._numProp1 = value
A more compatible way is to try to convert to int, which will then throw an exception for you if that fails. It might also have unwanted behaviour like accepting floats:
@property
def numProp1(self):
    return self._numProp1

@numProp1.setter
def numProp1(self, value):
    self._numProp1 = int(value)
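For instance, assuming this setter is plugged into the question's classWithNumbers, the conversion behaves roughly like this (a sketch):
obj = classWithNumbers()
obj.numProp1 = "42"    # accepted: int("42") == 42
obj.numProp1 = 3.9     # accepted, but truncated: int(3.9) == 3
obj.numProp1 = "abc"   # raises ValueError (not TypeError)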
That said, there's nothing really wrong with your implementation in general.
If you do not want to explicitly declare getters and setters, you could check their type when used, not when assigned.
The most Pythonic way is probably to call the int constructor and let it throw an exception:
class ClassWithNumbers(object):
    def __init__(self, v1, v2):
        self.numprop1 = int(v1)
        self.numprop2 = int(v2)
If the numprops are part of your interface then creating @property accessors would be appropriate. You can also implement your own descriptor:
class Number(object):
    def __init__(self, val=0):
        self.__set__(self, val)

    def __get__(self, obj, cls=None):
        return self.val

    def __set__(self, obj, val):
        try:
            self.val = int(val)
        except ValueError as e:
            raise TypeError(str(e))

class ClassWithNumbers(object):
    numprop1 = Number(42)
    numprop2 = Number(-1)
usage:
c = ClassWithNumbers()
print c.numprop1
c.numprop1 += 1
print c.numprop1
c.numprop1 = 'hello'

How to inherit and extend a list object in Python?

I am interested in using the python list object, but with slightly altered functionality. In particular, I would like the list to be 1-indexed instead of 0-indexed. E.g.:
>> mylist = MyList()
>> mylist.extend([1,2,3,4,5])
>> print mylist[1]
output should be: 1
But when I changed the __getitem__() and __setitem__() methods to do this, I was getting a RuntimeError: maximum recursion depth exceeded error. I tinkered around with these methods a lot but this is basically what I had in there:
class MyList(list):
    def __getitem__(self, key):
        return self[key-1]

    def __setitem__(self, key, item):
        self[key-1] = item
I guess the problem is that self[key-1] is itself calling the same method it's defining. If so, how do I make it use the list() method instead of the MyList() method? I tried using super[key-1] instead of self[key-1] but that resulted in the complaint TypeError: 'type' object is unsubscriptable
Any ideas? Also if you could point me at a good tutorial for this that'd be great!
Thanks!
Use the super() function to call the method of the base class, or invoke the method directly:
class MyList(list):
    def __getitem__(self, key):
        return list.__getitem__(self, key-1)
or
class MyList(list):
    def __getitem__(self, key):
        return super(MyList, self).__getitem__(key-1)
However, this will not change the behavior of other list methods. For example, index remains unchanged, which can lead to unexpected results:
numbers = MyList()
numbers.append("one")
numbers.append("two")
print numbers.index('one')
>>> 1
print numbers[numbers.index('one')]
>>> 'two'
Instead, subclass integer using the same method to define all numbers to be minus one from what you set them to. Voila.
Sorry, I had to. It's like the joke about Microsoft defining dark as the standard.
You can avoid violating the Liskov Substitution principle by creating a class that inherits from collections.abc.MutableSequence, which is an abstract class. It would look something like this:
import collections.abc

def indexing_decorator(func):
    def decorated(self, index, *args):
        if index == 0:
            raise IndexError('Indices start from 1')
        elif index > 0:
            index -= 1
        return func(self, index, *args)
    return decorated

class MyList(collections.abc.MutableSequence):
    def __init__(self):
        self._inner_list = list()

    def __len__(self):
        return len(self._inner_list)

    @indexing_decorator
    def __delitem__(self, index):
        self._inner_list.__delitem__(index)

    @indexing_decorator
    def insert(self, index, value):
        self._inner_list.insert(index, value)

    @indexing_decorator
    def __setitem__(self, index, value):
        self._inner_list.__setitem__(index, value)

    @indexing_decorator
    def __getitem__(self, index):
        return self._inner_list.__getitem__(index)

    def append(self, value):
        self.insert(len(self) + 1, value)
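For illustration, a usage sketch of this 1-indexed list:
mylist = MyList()
mylist.extend([1, 2, 3, 4, 5])   # extend() is provided by MutableSequence
print(mylist[1])                 # 1  (indices start at 1)
print(mylist[5])                 # 5
mylist[1] = 10                   # replaces the first element
print(mylist[1])                 # 10
print(len(mylist))               # 5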
class ListExt(list):
    def extendX(self, l):
        if l:
            self.extend(l)

Why does my class not have a 'keys' function?

class a(object):
    w = 'www'
    def __init__(self):
        for i in self.keys():
            print i
    def __iter__(self):
        for k in self.keys():
            yield k

a()  # why is there an error here?
Thanks.
Edit: The following class also doesn't extend any class;
why can it use keys?
class DictMixin:
    # Mixin defining all dictionary methods for classes that already have
    # a minimum dictionary interface including getitem, setitem, delitem,
    # and keys. Without knowledge of the subclass constructor, the mixin
    # does not define __init__() or copy(). In addition to the four base
    # methods, progressively more efficiency comes with defining
    # __contains__(), __iter__(), and iteritems().

    # second level definitions support higher levels
    def __iter__(self):
        for k in self.keys():
            yield k
    def has_key(self, key):
        try:
            value = self[key]
        except KeyError:
            return False
        return True
    def __contains__(self, key):
        return self.has_key(key)

    # third level takes advantage of second level definitions
    def iteritems(self):
        for k in self:
            yield (k, self[k])
    def iterkeys(self):
        return self.__iter__()

    # fourth level uses definitions from lower levels
    def itervalues(self):
        for _, v in self.iteritems():
            yield v
    def values(self):
        return [v for _, v in self.iteritems()]
    def items(self):
        return list(self.iteritems())
    def clear(self):
        for key in self.keys():
            del self[key]
    def setdefault(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            self[key] = default
        return default
    def pop(self, key, *args):
        if len(args) > 1:
            raise TypeError, "pop expected at most 2 arguments, got "\
                              + repr(1 + len(args))
        try:
            value = self[key]
        except KeyError:
            if args:
                return args[0]
            raise
        del self[key]
        return value
    def popitem(self):
        try:
            k, v = self.iteritems().next()
        except StopIteration:
            raise KeyError, 'container is empty'
        del self[k]
        return (k, v)
    def update(self, other=None, **kwargs):
        # Make progressively weaker assumptions about "other"
        if other is None:
            pass
        elif hasattr(other, 'iteritems'):  # iteritems saves memory and lookups
            for k, v in other.iteritems():
                self[k] = v
        elif hasattr(other, 'keys'):
            for k in other.keys():
                self[k] = other[k]
        else:
            for k, v in other:
                self[k] = v
        if kwargs:
            self.update(kwargs)
    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default
    def __repr__(self):
        return repr(dict(self.iteritems()))
    def __cmp__(self, other):
        if other is None:
            return 1
        if isinstance(other, DictMixin):
            other = dict(other.iteritems())
        return cmp(dict(self.iteritems()), other)
    def __len__(self):
        return len(self.keys())
Why would you expect it to have keys? You didn't define such a method in your class. Did you intend to inherit from a dictionary?
To do that, declare class a(dict).
Or maybe you meant a.__dict__.keys()?
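For instance, a quick sketch of the dict-inheriting version (works the same way in Python 2 and 3):
class a(dict):
    w = 'www'
    def __init__(self):
        super(a, self).__init__()
        self['w'] = self.w      # keys() is now inherited from dict
        for i in self.keys():
            print(i)            # prints: w

a()                             # no error this time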
As for the large snippet you've posted in the update, read the comment above the class again:
# Mixin defining all dictionary methods for classes that already have
# a minimum dictionary interface including getitem, setitem, delitem,
# and keys
Note that "already have ... keys" part.
The DictMixin class comes from the UserDict module, which says:
class UserDict.DictMixin
Mixin defining all dictionary methods for classes that already have a minimum dictionary interface including __getitem__(), __setitem__(), __delitem__(), and keys().
This mixin should be used as a superclass. Adding each of the above methods adds progressively more functionality. For instance, defining all but __delitem__() will preclude only pop() and popitem() from the full interface.
In addition to the four base methods, progressively more efficiency comes with defining __contains__(), __iter__(), and iteritems().
Since the mixin has no knowledge of the subclass constructor, it does not define __init__() or copy().
Starting with Python version 2.6, it is recommended to use collections.MutableMapping instead of DictMixin.
Note the recommendation in the last part - use collections.MutableMapping instead.
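For completeness, a minimal sketch of that recommendation (written against Python 3's collections.abc; in Python 2.6+ the class lives at collections.MutableMapping):
import collections.abc

class MyMapping(collections.abc.MutableMapping):
    # A dict-backed mapping; the mixin supplies keys(), items(), values(),
    # get(), pop(), update() and friends from these five methods.
    def __init__(self, *args, **kwargs):
        self._data = dict(*args, **kwargs)
    def __getitem__(self, key):
        return self._data[key]
    def __setitem__(self, key, value):
        self._data[key] = value
    def __delitem__(self, key):
        del self._data[key]
    def __iter__(self):
        return iter(self._data)
    def __len__(self):
        return len(self._data)

m = MyMapping(w='www')
print(list(m.keys()))   # ['w']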
To iterate over attributes of an object:
class A(object):
    def __init__(self):
        self.myinstatt1 = 'one'
        self.myinstatt2 = 'two'
    def mymethod(self):
        pass

a = A()
for attr, value in a.__dict__.iteritems():
    print attr, value
