Prevent other classes' methods from calling my constructor - python

How do I make a python "constructor" "private", so that the objects of its class can only be created by calling static methods? I know there are no C++/Java like private methods in Python, but I'm looking for another way to prevent others from calling my constructor (or other method).
I have something like:
class Response(object):
    @staticmethod
    def from_xml(source):
        ret = Response()
        # parse xml into ret
        return ret

    @staticmethod
    def from_json(source):
        # parse json
        pass
and would like the following behavior:
r = Response() # should fail
r = Response.from_json(source) # should be allowed
The reason for using static methods is that I always forget what arguments my constructors take - say JSON or an already parsed object. Even then, I sometimes forget about the static methods and call the constructor directly (not to mention other people using my code). Documenting this contract won't help with my forgetfulness. I'd rather enforce it with an assertion.
And contrary to some of the commenters, I don't think this is unpythonic - "explicit is better than implicit", and "there should be only one way to do it".
How can I get a gentle reminder when I'm doing it wrong? I'd prefer a solution where I don't have to change the static methods, just a decorator or a single line drop-in for the constructor would be great. A la:
class Response(object):
    def __init__(self):
        assert not called_from_outside()

I think this is what you're looking for - but it's kind of unpythonic as far as I'm concerned.
class Foo(object):
    def __init__(self):
        raise NotImplementedError()

    def __new__(cls):
        bare_instance = object.__new__(cls)
        # you may want to have some common initialisation code here
        return bare_instance

    @classmethod
    def from_whatever(cls, arg):
        instance = cls.__new__(cls)
        instance.arg = arg
        return instance
Given your example (from_json and from_xml), I assume you're retrieving attribute values from either a json or xml source. In this case, the pythonic solution would be to have a normal initializer and call it from your alternate constructors, i.e.:
class Foo(object):
    def __init__(self, arg):
        self.arg = arg

    @classmethod
    def from_json(cls, source):
        arg = get_arg_value_from_json_source(source)
        return cls(arg)

    @classmethod
    def from_xml(cls, source):
        arg = get_arg_value_from_xml_source(source)
        return cls(arg)
Oh and yes, about the first example: it will prevent your class from being instantiated in the usual way (calling the class), but client code will still be able to call Foo.__new__(Foo), so it's really a waste of time. Also, it will make unit testing harder if you cannot instantiate your class in the most ordinary way... and quite a few of us will hate you for this.
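To make that caveat concrete, here is a minimal sketch (reusing the class shape from the first example) showing the bypass:

```python
class Foo(object):
    def __init__(self):
        raise NotImplementedError()

# Calling the class fails as intended...
try:
    Foo()
except NotImplementedError:
    pass

# ...but __init__ can be skipped entirely by calling __new__ directly:
bare = Foo.__new__(Foo)
print(isinstance(bare, Foo))  # True
```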

I'd recommend turning the factory methods into module-level factory functions, then hiding the class itself from users of your module.
def one_constructor(source):
    return _Response(...)

def another_constructor(source):
    return _Response(...)

class _Response(object):
    ...
You can see this approach used in modules like re, where match objects are only constructed through functions like match and search, and the documentation doesn't actually name the match object type. (At least, the 3.4 documentation doesn't. The 2.7 documentation incorrectly refers to re.MatchObject, which doesn't exist.) The match object type also resists direct construction:
>>> type(re.match('',''))()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: cannot create '_sre.SRE_Match' instances
but unfortunately, the way it does so relies upon the C API, so it's not available to ordinary Python code.

Good discussion in the comments.
For the minimal use case you describe,
class Response(object):
    def __init__(self, construct_info=None):
        if construct_info is None:
            raise ValueError("must create instance using from_xml or from_json")
        # etc

    @staticmethod
    def from_xml(source):
        info = {}  # parse info into here
        return Response(info)

    @staticmethod
    def from_json(source):
        info = {}  # parse info into here
        return Response(info)
It can be gotten around by a user who passes in a hand-constructed info, but at that point they'll have to read the code anyway and the static method will provide the path of least resistance. You can't stop them, but you can gently discourage them. It's Python, after all.

This might be achievable through metaclasses, but is heavily discouraged in Python. Python is not Java. There is no first-class notion of public vs private in Python; the idea is that users of the language are "consenting adults" and can use methods however they like. Generally, functions that are intended to be "private" (as in not part of the API) are denoted by a single leading underscore; however, this is mostly just convention and there's nothing stopping a user from using these functions.
In your case, the Pythonic thing to do would be to default the constructor to one of the available from_foo methods, or even to create a "smart constructor" that can find the appropriate parser for most cases. Or, add an optional keyword arg to the __init__ method that determines which parser to use.
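As a sketch of that "smart constructor" idea (the dispatch rules and attribute names here are made up for illustration):

```python
import json

class Response(object):
    def __init__(self, source):
        if isinstance(source, dict):            # already-parsed data
            self.data = source
        elif source.lstrip().startswith("<"):   # looks like XML
            self.data = {"raw_xml": source}     # stand-in for real XML parsing
        else:                                   # assume JSON text
            self.data = json.loads(source)
```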

An alternative API (and one I've seen far more in Python APIs) if you want to keep it explicit for the user would be to use keyword arguments:
class Foo(object):
    def __init__(self, *, xml_source=None, json_source=None):
        if xml_source and json_source:
            raise ValueError("Only one source can be given.")
        elif xml_source:
            from_xml(xml_source)    # parsing helpers assumed, not shown
        elif json_source:
            from_json(json_source)
        else:
            raise ValueError("One source must be given.")
Here using 3.x's * to signify keyword-only arguments, which helps enforce the explicit API. In 2.x this is recreatable with kwargs.
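A rough 2.x-compatible recreation of that keyword-only signature using **kwargs might look like this (a sketch; the parsing is elided):

```python
class Foo(object):
    def __init__(self, **kwargs):
        # Pop the recognised keywords; anything left over is an error,
        # which emulates 3.x's keyword-only enforcement.
        xml_source = kwargs.pop("xml_source", None)
        json_source = kwargs.pop("json_source", None)
        if kwargs:
            raise TypeError("unexpected arguments: %r" % sorted(kwargs))
        if xml_source and json_source:
            raise ValueError("Only one source can be given.")
        elif xml_source:
            self.source = xml_source
        elif json_source:
            self.source = json_source
        else:
            raise ValueError("One source must be given.")
```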
Naturally, this doesn't scale well to lots of arguments or options, but there are definitely cases where this style makes sense. (I'd argue bruno desthuilliers probably has it right for this case, from what we know, but I'll leave this here as an option for others).

The following is similar to what I ended up doing. It is a bit more general than what was asked in the question.
I made a function called guard_call that checks whether the current method is being called from a method of a certain class.
This has multiple uses. For example, I used the Command Pattern to implement undo and redo, and used this to ensure that my objects were only ever modified by command objects, and not random other code (which would make undo impossible).
In this concrete case, I place a guard in the constructor ensuring only Response methods can call it:
class Response(object):
    def __init__(self):
        guard_call([Response])
        pass

    @staticmethod
    def from_xml(source):
        ret = Response()
        # parse xml into ret
        return ret
For this specific case, you could probably make this a decorator and remove the argument, but I didn't do that here.
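A self-contained sketch of that decorator idea (simpler than the guard_call code below, and with hypothetical names): check whether the caller's `self` is an instance of an allowed class.

```python
import inspect

def guarded_by(*allowed_classes):
    """Allow calls only when the calling frame's `self` is an allowed instance."""
    def decorate(func):
        def wrapper(*args, **kwargs):
            caller = inspect.stack()[1].frame.f_locals.get("self")
            if not isinstance(caller, allowed_classes):
                raise AssertionError("call not made from %r" % (allowed_classes,))
            return func(*args, **kwargs)
        return wrapper
    return decorate

class Account(object):
    def _set_balance(self, amount):
        self.balance = amount

    def deposit(self, amount):
        self._set_balance(amount)   # allowed: called from an Account method

# Applied after the class exists, so the guard can name the class itself:
Account._set_balance = guarded_by(Account)(Account._set_balance)
```

Like the code below, this is slow (it uses inspect) and easily fooled, so treat it as a debugging aid rather than an access control mechanism.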
Here is the rest of the code. It's been a long time since I tested it, and I can't guarantee that it works in all edge cases, so beware. It is also still Python 2. Another caveat is that it is slow, because it uses inspect. So don't use it in tight loops or when speed is an issue, but it might be useful when correctness is more important than speed.
Some day I might clean this up and release it as a library - I have a couple more of these functions, including one that asserts you are running on a particular thread. You may sneer at the hackishness (it is hacky), but I did find this technique useful for smoking out some hard-to-find bugs, and for ensuring my code still behaves during refactorings, for example.
from __future__ import print_function

import inspect

# http://stackoverflow.com/a/2220759/143091
def get_class_from_frame(fr):
    args, _, _, value_dict = inspect.getargvalues(fr)
    # we check whether the first parameter of the frame's function is
    # named 'self'
    if len(args) and args[0] == 'self':
        # in that case, 'self' will be referenced in value_dict
        instance = value_dict.get('self', None)
        if instance:
            # return its class
            return getattr(instance, '__class__', None)
    # return None otherwise
    return None

def guard_call(allowed_classes, level=1):
    stack_info = inspect.stack()[level + 1]
    frame = stack_info[0]
    method = stack_info[3]
    calling_class = get_class_from_frame(frame)
    # print("calling class:", calling_class)
    if calling_class:
        for klass in allowed_classes:
            if issubclass(calling_class, klass):
                return
    allowed_str = ", ".join(klass.__name__ for klass in allowed_classes)
    filename = stack_info[1]
    line = stack_info[2]
    stack_info_2 = inspect.stack()[level]
    protected_method = stack_info_2[3]
    protected_frame = stack_info_2[0]
    protected_class = get_class_from_frame(protected_frame)
    if calling_class:
        origin = "%s:%s" % (calling_class.__name__, method)
    else:
        origin = method
    print()
    print("In %s, line %d:" % (filename, line))
    print("Warning, call to %s:%s was not made from %s, but from %s!" %
          (protected_class.__name__, protected_method, allowed_str, origin))
    assert False

r = Response()              # should fail
r = Response.from_json("...")  # should be allowed

Related

How can i make a method available only from within the class

Good evening, I need some advice; Googling, I couldn't find a proper direction.
I need to make a method available only from within the class (i.e. to other methods or functions). If it is called from the program as a method of an object of the class, I want:
the method to be invisible/not available to IntelliSense
if I'm stubborn and call it anyway, it must raise an error.
Attaching a screenshot to make it clearer.
Any advice is appreciated, thank you.
Screenshot of the problem
There are no private methods in Python. Common usage dictates preceding a method that's only supposed to be used internally with one or two underscores, depending on the case. See here: What is the meaning of single and double underscore before an object name?
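For completeness: the double-underscore variant also triggers name mangling, which makes accidental outside calls harder (it's obfuscation, not real privacy):

```python
class Widget(object):
    def __internal(self):
        return "secret"

    def public(self):
        return self.__internal()   # inside the class, mangling is transparent

w = Widget()
print(w.public())           # works
# w.__internal()            # AttributeError: no attribute '__internal'
# w._Widget__internal()     # still callable: mangling, not privacy
```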
As others have mentioned, there are no private methods in Python. I also don't know how to make it invisible to IntelliSense (probably there is some setting), but what you could theoretically do is this:
import re

def make_private(func):
    def inner(*args, **kwargs):
        name = func.__name__
        pattern = re.compile(fr'(.*)\.{name}')
        with open(__file__) as file:
            for line in file:
                lst = pattern.findall(line)
                if (lst and not line.strip().startswith('#')
                        and not all(g.strip() == 'self' for g in lst)):
                    raise Exception()
        return func(*args, **kwargs)
    return inner

class MyClass:
    @make_private
    def some_method(self):
        pass

    def some_other_method(self):
        self.some_method()

m = MyClass()
# m.some_method()
m.some_other_method()
make_private is a decorator: when you call the function it decorates, it first reads the entire file line by line and checks whether the method is anywhere called without being prefixed with self.. If so, the call is considered to come from outside the class and an Exception is raised (probably add some message to it, though).
Issues could start once you have multiple files and this wouldn't entirely prevent someone from calling it if they really wanted for example if they did it like this:
self = MyClass()
self.some_method()
But mostly this would raise an exception.
OK, solved: to hide the method from the IDE's IntelliSense I added the double underscore (works fine with PyCharm, not with VS Code), then I used the accessify module to prevent forced execution via myobj._myclass__somemethod():
from accessify import private

class myclass:
    @private
    def __somemethod(self):
        ...

python - log the request's journey

I want to log all methods a single request has visited once at the end of the request for debugging purposes.
I'm ok with starting with just one class at first:
here is my desired output example:
logging full trace once
'__init__': ->
    'init_method_1' ->
        'init_method_1_1'
    'init_method_2'
'main_function': ->
    'first_main_function': ->
        'condition_method_3'
        'condition_method_5'
here is my partial attempt:
import types

class DecoMeta(type):
    def __new__(cls, name, bases, attrs):
        for attr_name, attr_value in attrs.items():
            if isinstance(attr_value, types.FunctionType):
                attrs[attr_name] = cls.deco(attr_value)
        return super(DecoMeta, cls).__new__(cls, name, bases, attrs)

    @classmethod
    def deco(cls, func):
        def wrapper(*args, **kwargs):
            name = func.__name__
            stacktrace_full.setdefault(name, [])
            sorted_functions = stacktrace_full[name]
            if len(sorted_functions) > 0:
                stacktrace_full[name].append(name)
            result = func(*args, **kwargs)
            print("after", func.__name__)
            return result
        return wrapper

class MyKlass(metaclass=DecoMeta):
Approaches
I think there are two different approaches worth considering for this problem:
"Simple" logging metaclass, or
Beefier metaclass to store call stacks
If you only need the method calls to be printed as they are made, and you don’t care about saving an actual record of the method call stack, then the first approach should do the trick.
I’m not certain which approach you’re looking for (if you had anything specific in mind), but if you know you need to store the method call stack, in addition to printing invocations, you might want to skip ahead to the second approach.
Note: All code hereafter assumes the presence of the following imports:
from types import FunctionType
1. Simple Logging Metaclass
This approach is far easier, and it doesn’t require too much extra work on top of your first attempt (depending on special circumstances we want to account for). However, as already mentioned, this metaclass is solely concerned with logging. If you definitely need to save a method call stack structure, consider skipping ahead to the second approach.
Changes to DecoMeta.__new__
With this approach, your DecoMeta.__new__ method remains mostly unchanged. The most notable change made in the code below is the addition of the “_in_progress_calls” list to namespace. DecoMeta.deco’s wrapper function will use this attribute to keep track of how many methods have been invoked, but not ended. With that information, it can appropriately indent the printed method names.
Also note the inclusion of staticmethod to the namespace attributes we want to decorate via DecoMeta.deco. However, you may not need this functionality. On the other hand, you may want to consider going further by accounting for classmethod and others, as well.
One other change you’ll notice is the creation of the cls variable, which is modified directly before being returned. However, your existing loop through the namespace, followed by both the creation and return of the class object should still do the trick here.
Changes to DecoMeta.deco
We set in_progress_calls to the current instance’s _in_progress_calls to be used later in wrapper
Next, we make a small modification to your first attempt to handle staticmethod — something you may or may not want, as mentioned earlier
In the “Log” section, we need to calculate pad for the following line, in which we print the name of the called method. After printing, we add the current method name to in_progress_calls, informing other methods of the in-progress method
In the “Invoke Method” section, we (optionally) handle staticmethod again.
Aside from this minor change, we make one small but significant change by adding the self argument to our func call. Without this, the normal methods of the class using DecoMeta would start complaining about not being given the positional self argument, which is kind of a big deal, since func.__call__ is a method-wrapper and needs the instance to which our method is bound.
The final change to your first attempt is to remove the last in_progress_calls value, since we have officially invoked the method and are returning result
Shut Up, and Show Me the Code
class DecoMeta(type):
    def __new__(mcs, name, bases, namespace):
        namespace["_in_progress_calls"] = []
        cls = super().__new__(mcs, name, bases, namespace)
        for attr_name, attr_value in namespace.items():
            if isinstance(attr_value, (FunctionType, staticmethod)):
                setattr(cls, attr_name, mcs.deco(attr_value))
        return cls

    @classmethod
    def deco(mcs, func):
        def wrapper(self, *args, **kwargs):
            in_progress_calls = getattr(self, "_in_progress_calls")
            try:
                name = func.__name__
            except AttributeError:  # Resolve `staticmethod` names
                name = func.__func__.__name__

            #################### Log ####################
            pad = " " * (len(in_progress_calls) * 3)
            print(f"{pad}`{name}`")
            in_progress_calls.append(name)

            #################### Invoke Method ####################
            try:
                result = func(self, *args, **kwargs)
            except TypeError:  # Properly invoke `staticmethod`-typed `func`
                result = func.__func__(*args, **kwargs)
            in_progress_calls.pop(-1)
            return result
        return wrapper
What Does It Do?
Here’s some code for a dummy class that I tried to model after your desired example output:
Setup
Don't pay too much attention to this block. It's just a silly class whose methods call other methods
class MyKlass(metaclass=DecoMeta):
    def __init__(self):
        self.i_1()
        self.i_2()

    #################### Init Methods ####################
    def i_1(self):
        self.i_1_1()

    def i_1_1(self): ...

    def i_2(self): ...

    #################### Main Methods ####################
    def main(self, x):
        self.m_1(x)

    def m_1(self, x):
        if x == 0:
            self.c_1()
            self.c_2()
            self.c_4()
        elif x == 1:
            self.c_3()
            self.c_5()

    #################### Condition Methods ####################
    def c_1(self): ...
    def c_2(self): ...
    def c_3(self): ...
    def c_4(self): ...
    def c_5(self): ...
Run
my_k = MyKlass()
my_k.main(1)
my_k.main(0)
Console Output
`__init__`
   `i_1`
      `i_1_1`
   `i_2`
`main`
   `m_1`
      `c_3`
      `c_5`
`main`
   `m_1`
      `c_1`
      `c_2`
      `c_4`
2. Beefy Metaclass to Store Call Stacks
Because I’m unsure whether you actually want this, and your question seems more focused on the metaclass part of the problem, rather than the call stack storage structure, I’ll focus on how to beef up the above metaclass to handle the required operations. Then, I’ll just make a few notes on the many ways you could store the call stack and “stub” out those parts of the code with a simple placeholder structure.
The obvious thing we need is a persistent call stack structure to extend the reach of the ephemeral _in_progress_calls attribute. So we can start by adding the following uncommented line to the top of DecoMeta.__new__:
namespace["full_stack"] = dict()
# namespace["_in_progress_calls"] = []
# cls = super().__new__(mcs, name, bases, namespace)
# ...
Unfortunately, the obviousness stops there, and things get tricky fairly quickly if you want to trace anything beyond very simple method call stacks.
Regarding how we need to save our call stack, there are a few things that might limit our options:
We can’t use a simple dict, with method names as keys, because in the resulting arbitrarily-complex call stack, it’s entirely possible that method X could call method Y multiple times
We can’t assume that every call to method X will invoke the same methods, as your example with “conditional” methods indicates. This means that we can’t say that any invocation of X will yield call stack Y, and neatly save that information somewhere
We need to limit the persistence of our new full_stack attribute, since we declare it on a class-wide basis in DecoMeta.__new__. If we don’t, then all instances of MyKlass will share the same full_stack, swiftly undermining its usefulness
Because the first two are highly dependent on your preferences/requirements and because I think your question is more concerned with the problem’s metaclass aspect, rather than call stack structure, I’ll start by addressing the third point.
To ensure each instance gets its own full_stack, we can add a new DecoMeta.__call__ method, which gets called whenever we make an instance of MyKlass (or anything using DecoMeta as a metaclass). Just drop the following into DecoMeta:
def __call__(cls, *args, **kwargs):
    setattr(cls, "full_stack", dict())
    return super().__call__(*args, **kwargs)
The last piece is to figure out how you want to structure full_stack and add the code to update it to the DecoMeta.deco.wrapper function.
A deeply-nested list of strings, naming the methods invoked in order, together with the methods invoked by those methods, and so on... should get the job done and sidestep the first two problems mentioned above, but that sounds messy, so I’ll let you decide if you actually need it.
As an example, we can make full_stack a dict with keys of Tuple[str], and values of List[str]. Be warned that this will quietly fail under both of the aforementioned problem conditions; however, it does serve to illustrate the updates that would be necessary to DecoMeta.deco.wrapper should you decide to go further.
Only two lines need to be added:
First, immediately below the signature of DecoMeta.deco.wrapper, add the following uncommented line:
full_stack = getattr(self, "full_stack")
# in_progress_calls = getattr(self, "_in_progress_calls")
# ...
Second, in the “Log” section, right after the print call, add the following uncommented line:
# print(f"{pad}`{name}`")
full_stack.setdefault(tuple(in_progress_calls), []).append(name)
# in_progress_calls.append(name)
# ...
TL;DR
If I am correct in interpreting your question as asking for a metaclass that really does just log method calls, then the first approach (outlined above under the “Simple Logging Metaclass” heading) should work great. However, if you also need to save a full record of all method calls, you can start by following the suggestions under the “Beefy Metaclass to Store Call Stacks” heading.
Please let me know if you have any other questions or clarifications. I hope this was useful!

Customize how a Python object is processed as a function argument?

A Python class's __call__ method lets us specify how a class member should behave as a function. Can we do the "opposite", i.e. specify how a class member should behave as an argument to an arbitrary other function?
As a simple example, suppose I have a WrappedList class that wraps lists, and when I call a function f on a member of this class, I want f to be mapped over the wrapped list. For instance:
x = WrappedList([1, 2, 3])
print(x + 1) # should be WrappedList([2, 3, 4])
d = {1: "a", 2: "b", 3:"c"}
print(d[x]) # should be WrappedList(["a", "b", "c"])
Calling the hypothetical __call__ analogue I'm looking for __arg__, we could imagine something like this:
class WrappedList(object):
    def __init__(self, to_wrap):
        self.wrapped = to_wrap

    def __arg__(self, func):
        return WrappedList(map(func, self.wrapped))
Now, I know that (1) __arg__ doesn't exist in this form, and (2) it's easy to get the behavior in this simple example without any tricks. But is there a way to approximate the behavior I'm looking for in the general case?
You can't do this in general.*
You can do something equivalent for most of the builtin operators (like your + example), and a handful of builtin functions (like abs). They're implemented by calling special methods on the operands, as described in the reference docs.
Of course that means writing a whole bunch of special methods for each of your types—but it wouldn't be too hard to write a base class (or decorator or metaclass, if that doesn't fit your design) that implements all those special methods in one place, by calling the subclass's __arg__ and then doing the default thing:
class ArgyBase:
    def __add__(self, other):
        return self.__arg__() + other

    def __radd__(self, other):
        return other + self.__arg__()

    # ... and so on
And if you want to extend that to a whole suite of functions that you create yourself, you can give them all similar special-method protocols similar to the builtin ones, and expand your base class to cover them. Or you can just short-circuit that and use the __arg__ protocol directly in those functions. To avoid lots of repetition, I'd use a decorator for that.
import functools

def argify(func):
    def _arg(arg):
        try:
            return arg.__arg__()
        except AttributeError:
            return arg

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        args = map(_arg, args)
        kwargs = {kw: _arg(arg) for kw, arg in kwargs.items()}
        return func(*args, **kwargs)
    return wrapper

@argify
def spam(a, b):
    return a + 2 * b
And if you really want to, you can go around wrapping other people's functions:
sin = argify(math.sin)
… or even monkeypatching their modules:
requests.get = argify(requests.get)
… or monkeypatching a whole module dynamically a la early versions of gevent, but I'm not going to even show that, because at this point we're getting into don't-do-this-for-multiple-reasons territory.
You mentioned in a comment that you'd like to do this to a bunch of someone else's functions without having to specify them in advance, if possible. Does that mean every function that ever gets constructed in any module you import? Well, you can even do that if you're willing to create an import hook, but that seems like an even worse idea. Explaining how to write an import hook and either AST-patch each function creation node or insert wrappers around the bytecode or the like is way too much to get into here, but if your research abilities exceed your common sense, you can figure it out. :)
As a side note, if I were doing this, I wouldn't call the method __arg__, I'd call it either arg or _arg. Besides being reserved for future use by the language, the dunder-method style implies things that aren't true here (special-method lookup instead of a normal call, you can search for it in the docs, etc.).
* There are languages where you can, such as C++, where a combination of implicit casting and typed variables instead of typed values means you can get a method called on your objects just by giving them an odd type with an implicit conversion operator to the expected type.

Class invariants in Python

Class invariants definitely can be useful in coding, as they give instant feedback when a clear programming error has been detected, and they also improve code readability by being explicit about what arguments and return values can be. I'm sure this applies to Python too.
However, testing of arguments generally doesn't seem to be the "Pythonic" way to do things, as it goes against the duck-typing idiom.
My questions are:
What is Pythonic way to use assertions in code?
For example, if I had following function:
def do_something(name, path, client):
    assert isinstance(name, str)
    assert path.endswith('/')
    assert hasattr(client, "connect")
More generally, when are there too many assertions?
I'd be happy to hear your opinions on this!
Short Answer:
Are assertions Pythonic?
Depends how you use them. Generally, no. Making generalized, flexible code is the most Pythonic thing to do, but when you need to check invariants:
Use type hinting to help your IDE perform type inference so you can avoid potential pitfalls.
Make robust unit tests.
Prefer try/except clauses that raise more specific exceptions.
Turn attributes into properties so you can control their getters and setters.
Use assert statements only for debug purposes.
Refer to this Stack Overflow discussion for more info on best practices.
Long Answer
You're right. It's not considered Pythonic to have strict class invariants, but there is a built-in way to designate the preferred types of parameters and returns called type hinting, as defined in PEP 484:
[Type hinting] aims to provide a standard syntax for type annotations, opening up Python code to easier static analysis and refactoring, potential runtime type checking, and (perhaps, in some contexts) code generation utilizing type information.
The format is this:
def greeting(name: str) -> str:
    return 'Hello ' + name
The typing library provides even further functionality. However, there's a huge caveat...
While these annotations are available at runtime through the usual __annotations__ attribute, no type checking happens at runtime. Instead, the proposal assumes the existence of a separate off-line type checker which users can run over their source code voluntarily. Essentially, such a type checker acts as a very powerful linter.
Whoops. Well, you could use an external tool while testing to check when invariance is broken, but that doesn't really answer your question.
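If you do want runtime enforcement anyway, here is a minimal sketch of a checker built on __annotations__ (this decorator is illustrative only; it is not part of PEP 484 or the typing library, and it only handles plain classes as hints):

```python
import functools
import inspect

def enforce_hints(func):
    """Validate call arguments against the function's plain-class annotations."""
    sig = inspect.signature(func)
    hints = func.__annotations__

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            # Only check simple class hints; skip typing generics etc.
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError("%s must be %s, got %r"
                                % (name, expected.__name__, value))
        return func(*args, **kwargs)
    return wrapper

@enforce_hints
def greeting(name: str) -> str:
    return 'Hello ' + name
```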
Properties and try/except
The best way to handle an error is to make sure it never happens in the first place. The second best way is to have a plan when it does. Take, for example, a class like this:
class Dog(object):
    """Canis lupus familiaris."""

    name = str()
    """The name you call it."""

    def __init__(self, name: str):
        """What're you gonna name him?"""
        self.name = name

    def speak(self, repeat=0):
        """Make dog bark. Can optionally be repeated."""
        print("{dog} stares at you blankly.".format(dog=self.name))
        for i in range(repeat):
            print("{dog} says: 'Woof!'".format(dog=self.name))
If you want your dog's name to be an invariant, this won't actually prevent self.name from being overwritten. It also doesn't prevent parameters that could crash speak(). However, if you make self.name a property...
class Dog(object):
    """Canis lupus familiaris."""

    _name = str()
    """The name on the microchip."""

    def __init__(self, name: str):
        """What're you gonna name him?"""
        if not name or not name.isalpha():
            raise ValueError("Name must exist and be pronounceable.")
        self._name = name

    def speak(self, repeat=0):
        """Make dog bark. Can optionally be repeated."""
        try:
            print("{dog} stares at you blankly".format(dog=self.name))
            if repeat < 0:
                raise ValueError("Cannot negatively bark.")
            for i in range(repeat):
                print("{dog} says: 'Woof!'".format(dog=self.name))
        except (ValueError, TypeError) as e:
            raise RuntimeError("Dog unable to speak.") from e

    @property
    def name(self):
        """Gets name."""
        return self._name
Since our property doesn't have a setter, self.name is essentially invariant; that value can't change unless someone knows about the underlying self._name. Furthermore, since we've added try/except clauses to process the specific errors we're expecting, we've provided a more concise control flow for our program.
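A minimal check of that read-only behaviour (class trimmed down for illustration):

```python
class Dog(object):
    def __init__(self, name):
        self._name = name

    @property
    def name(self):
        return self._name

rex = Dog("Rex")
assert rex.name == "Rex"
try:
    rex.name = "Fido"       # no setter defined
except AttributeError:
    print("can't rebind name")
```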
So When Do You Use Assertions?
There might not be a 100% "Pythonic" way to perform assertions since you should be doing those in your unit tests. However, if it's critical at runtime for data to be invariant, assert statements can be used to pinpoint possible trouble spots, as explained in the Python wiki:
Assertions are particularly useful in Python because of Python's powerful and flexible dynamic typing system. In the same example, we might want to make sure that ids are always numeric: this will protect against internal bugs, and also against the likely case of somebody getting confused and calling by_name when they meant by_id.
For example:
from types import *

class MyDB:
    ...
    def add(self, id, name):
        assert type(id) is IntType, "id is not an integer: %r" % id
        assert type(name) is StringType, "name is not a string: %r" % name
Note that the "types" module is explicitly "safe for import *"; everything it exports ends in "Type".
That takes care of data type checking. For classes, you use isinstance(), as you did in your example:
You can also do this for classes, but the syntax is a little different:
class PrintQueueList:
    ...
    def add(self, new_queue):
        assert new_queue not in self._list, \
            "%r is already in %r" % (self, new_queue)
        assert isinstance(new_queue, PrintQueue), \
            "%r is not a print queue" % new_queue
I realize that's not the exact way our function works but you get the idea: we want to protect against being called incorrectly. You can also see how printing the string representation of the objects involved in the error will help with debugging.
For proper form, attach a message to your assertions as in the examples above (e.g. assert <statement>, "<message>"); the message is automatically attached to the resulting AssertionError to assist you with debugging. It can also give some insight, in a consumer's bug report, into why the program is crashing.
Checking isinstance() should not be overused: if it quacks like a duck, there's perhaps no need to enquire too deeply into whether it really is. Sometimes it can be useful to pass values that were not anticipated by the original programmer.
Places to consider putting assertions:
checking parameter types, classes, or values
checking data structure invariants
checking "can't happen" situations (duplicates in a list, contradictory state variables.)
after calling a function, to make sure that its return is reasonable
Assertions can be beneficial if they're properly used, but you shouldn't become dependent on them for data that doesn't need to be explicitly invariant. You might need to refactor your code if you want it to be more Pythonic.
Please have a look at the icontract library. We developed it to bring design-by-contract into Python with informative error messages. Here is an example of a class invariant:
>>> @icontract.inv(lambda self: self.x > 0)
... class SomeClass:
...     def __init__(self) -> None:
...         self.x = 100
...
...     def some_method(self) -> None:
...         self.x = -1
...
...     def __repr__(self) -> str:
...         return "some instance"
...
>>> some_instance = SomeClass()
>>> some_instance.some_method()
Traceback (most recent call last):
  ...
icontract.ViolationError: self.x > 0:
self was some instance
self.x was -1

Using the self-parameter in python objects

I've got a question about defining functions and the self-parameter in python.
There is following code.
class Dictionaries(object):
    __CSVDescription = ["ID", "States", "FilterTime", "Reaction", "DTC", "ActiveDischarge"]

    def __makeDict(Lst):
        return dict(zip(Lst, range(len(Lst))))

    def getDict(self):
        return self.__makeDict(self.__CSVDescription)

    CSVDescription = __makeDict(__CSVDescription)

x = Dictionaries()
print x.CSVDescription
print x.getDict()
x.CSVDescription works fine. But print x.getDict() returns an error.
TypeError: __makeDict() takes exactly 1 argument (2 given)
I can add the self-parameter to the __makeDict() method, but then print x.CSVDescription wouldn't work.
How do I use the self-parameter correctly?
In Python, the self parameter is implicitly passed to instance methods, unless the method is decorated with @staticmethod.
In this case, __makeDict doesn't need a reference to the object itself, so it can be made a static method so you can omit the self:
@staticmethod
def __makeDict(Lst):  # ...

def getDict(self):
    return self.__makeDict(self.__CSVDescription)
A solution using #staticmethod won't work here because calling the method from the class body itself doesn't invoke the descriptor protocol (this would also be a problem for normal methods if they were descriptors - but that isn't the case until after the class definition has been compiled). There are four major options here - but most of them could be seen as some level of code obfuscation, and would really need a comment to answer the question "why not just use a staticmethod?".
The first is, as @Marcus suggests, to always call the method from the class, not from an instance. That is, every time you would do self.__makeDict, do self.__class__.__makeDict instead. This will look strange, because it is a strange thing to do - in Python, you almost never need to call a method as Class.method, and the only time you do (in code written before super became available), using self.__class__ would be wrong.
In similar vein, but the other way around, you could make it a staticmethod and invoke the descriptor protocol manually in the class body - do: __makeDict.__get__(None, Dictionaries)(__lst).
Or, you could detect what context it's being called from by getting fancy with optional arguments:
def __makeDict(self, Lst=None):
    if Lst is None:
        Lst = self
    ...
But, by far the best way is to realise you're working in Python and not Java - put it outside the class.
def _makeDict(Lst):
    ...

class Dictionaries(object):
    __CSVDescription = ["ID", "States", "FilterTime", "Reaction", "DTC", "ActiveDischarge"]

    def getDict(self):
        return _makeDict(self.__CSVDescription)

    CSVDescription = _makeDict(__CSVDescription)
