I am a Java programmer who is new to Python, and I am having trouble understanding the following code from the pymodbus repo on GitHub. Where is the function defined?
self.execute(request)
The reason I am confused is that, AFAIK, self refers to variables and functions of the current class, including inherited ones. There is no such function defined in the ModBusClientMixIn class, nor does the class inherit from any other class. So where is it coming from?
There is an execute function defined in the ReadCoilsRequest class, but why would you need self to invoke that? Also, where is context (a variable in the execute function's argument list) coming from?
I would really appreciate it if someone could help me understand the syntax.
It's a mixin which is used on classes which do define an execute method, e.g.:
class ModbusClientProtocol(protocol.Protocol, ModbusClientMixin):
A mixin adds methods to other classes and is not supposed to be used by itself.
If you wanted to type-annotate it properly, it would have to be something like:
from abc import ABC, abstractmethod

class Executable(ABC):
    @abstractmethod
    def execute(self):
        pass

class ModBusClientMixin:
    def read_coils(self: Executable, address, count=1, **kwargs):
        #          ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑
        # Expects self to conform to the Executable interface,
        # i.e. to be used in a class that implements execute().
        self.execute()
Since Python relies heavily on duck typing and type annotations are a relatively recent addition, such annotations are often omitted. Instead they are replaced by verbose documentation, or it is simply expected that developers recognise the purpose of mixins, or the mixin is such an internal implementation detail that it has never been explicitly documented.
This is a special case. You are right that execute has to be defined somewhere.
But in this case, execute is implemented by a child class that derives from ModBusClientMixIn.
You would get an error if you created an instance of ModBusClientMixIn directly and called one of its methods, because it does not implement execute.
Look at the implementations of ModbusClientProtocol or BaseModbusClient for example, they both have an execute method.
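To make the lookup concrete, here is a stripped-down, hypothetical sketch (the class and request names are invented, not the real pymodbus API): self.execute resolves at call time against the concrete class that mixes the mixin in.

class ReadRequest:
    """Stand-in for a request object such as ReadCoilsRequest."""
    def __init__(self, address, count):
        self.address = address
        self.count = count

class ClientMixin:
    """Only adds convenience methods; never meant to be instantiated on its own."""
    def read_coils(self, address, count=1):
        # 'self' is an instance of whatever class mixed this in,
        # so attribute lookup finds that class's execute().
        return self.execute(ReadRequest(address, count))

class SyncClient(ClientMixin):
    """Concrete class that supplies the execute() the mixin relies on."""
    def execute(self, request):
        return "read %d coil(s) at %d" % (request.count, request.address)

print(SyncClient().read_coils(0x10, 4))   # works
# ClientMixin().read_coils(0x10, 4)       # AttributeError: no execute()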
Forgive me for my ignorance, but does anyone know of any languages that strictly enforce the condition I've given in the title? For example, using Python syntax, we can extend a class with a new method like this:
class A:
    pass

class B(A):
    def foo(self):
        pass
But is there a language that needs an additional keyword, let's say new, to specify that this method is unique to the child class and is not an override of the methods of its parent class/es? For example:
class A:
    pass

class B(A):
    def new foo(self):   # hypothetical syntax
        pass
I am asking this because, when I am working on a project that uses multiple inheritance such as class B(A, C, D) and I see a method defined in B, I need to check whether that method overrides something from one of the parent classes or is B's own, and I find that extremely tedious.
The closest I can think of is the @Override annotation in Java, which can be applied to a method declaration so that the compiler checks that it overrides an inherited method (or implements an interface method).
When used in conjunction with a linter which checks that all method overrides are annotated with @Override, your IDE will give you a linter warning when you omit the annotation. IntelliJ IDEA and SonarSource both have linter rules for this, for example.
So long as you are strict about obeying the linter warning, then it's "strict" in that sense, but of course linter warnings don't actually prevent your code from being compiled or executed. Nonetheless, I don't know of a closer example from a real programming language. Unfortunately Java doesn't have multiple inheritance so it's not directly applicable to your problem.
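Python itself has no such keyword, but if the tedium of checking is the main pain point, a small introspection helper can at least report which methods defined directly on a class shadow something from its bases. This is only a sketch (the helper name is made up, nothing standard):

def classify_methods(cls):
    """Split the callables defined directly on cls into overrides vs. new methods."""
    overrides, new = [], []
    for name, value in vars(cls).items():   # only attributes defined on cls itself
        if not callable(value):
            continue
        inherited = any(name in vars(base) for base in cls.__mro__[1:])
        (overrides if inherited else new).append(name)
    return overrides, new

class A:
    def foo(self):
        pass

class B(A):
    def foo(self):   # overrides A.foo
        pass
    def bar(self):   # new method unique to B
        pass

print(classify_methods(B))   # (['foo'], ['bar'])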
I have a specific use-case for typings which I find hard to implement.
I have the concept of "Service" classes on my python codebase, which are classes with a handful of functions I want to "expose" so only they will be available when using an API. The implementation of the Service is like this:
class MyService(BaseService):
    def normal_function(self):
        pass

    @exposed
    def exposed_function(self):
        pass
What's happening behind the @exposed decorator is that it adds a unique property to the wrapped method, which lets whoever consumes the class know which functions are "exposed" and which are not.
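For context, a minimal version of the decorator could look roughly like this (a simplified sketch, not my exact code):

import functools

def exposed(func):
    """Mark a method as part of the public service API."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    wrapper.__exposed__ = True   # the "unique property" that consumers check for
    return wrapper

# A consumer can then collect the exposed methods, e.g.:
# [name for name, m in vars(MyService).items() if getattr(m, "__exposed__", False)]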
I wish to make a type smart enough to understand that only the "exposed" functions are available.
Any ideas?
You could use name mangling:
class MyService:
    def __normal_function(self):
        pass

    def exposed_function(self):
        pass

my_service = MyService()
my_service.exposed_function()             # this works, the user can call the exposed function
my_service.__normal_function()            # error: MyService instance has no attribute '__normal_function'
my_service._MyService__normal_function()  # __normal_function can only be called via its "mangled" name
In this case, the name of the normal function - __normal_function - will be textually replaced with _MyService__normal_function, so that the user will not be able to call the function using its "original" name.
Note that the normal function can still be called outside of the class, since private variables and methods don't exist in Python, but name mangling is probably the closest you can get to implementing private-like behavior.
I've been reading lots of previous SO discussions of factory functions, etc. and still don't know what the best (Pythonic) approach is to this particular situation. I'll admit up front that I am imposing a somewhat artificial constraint on the problem in that I want my solution to work without modifying the module I am trying to extend: I could make modifications to it, but let's assume that it must remain as-is because I'm trying to understand best practice in this situation.
I'm working with the http://pypi.python.org/pypi/icalendar module, which handles parsing from and serializing to the Icalendar spec (hereafter ical). It parses the text into a hierarchy of dictionary-like "component" objects, where every "component" is an instance of a trivial derived class implementing the different valid ical types (VCALENDAR, VEVENT, etc.) and they are all spit out by a recursive factory from the common parent class:
class Component(...):
    @classmethod
    def from_ical(cls, ...):
        ...
I have created a 'CalendarFile' class that extends the ical 'Calendar' class, including in it a generator function of its own:
class CalendarFile(Calendar):
    @classmethod
    def from_file(cls, ics):
which opens a file (ics) and passes it on:
instance = cls.from_ical(f.read())
It initializes and modifies some other things in instance and then returns it. The problem is that instance ends up being a Calendar object instead of a CalendarFile object, in spite of cls being CalendarFile. Short of going into the factory function of the ical module and fiddling around in there, is there any way to essentially "recast" that object as a 'CalendarFile'?
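(The kind of "recast" I have in mind would be something like reassigning __class__ on the returned object, though I don't know whether that is a sane thing to do; the following is just a sketch of the idea, not code from the module:)

from icalendar import Calendar

class CalendarFile(Calendar):
    @classmethod
    def from_file(cls, ics):
        with open(ics) as f:
            instance = cls.from_ical(f.read())   # comes back as a plain Calendar
        instance.__class__ = cls                 # "recast" it as a CalendarFile
        # ... further initialisation of instance ...
        return instance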
The alternatives (again without modifying the original module) that I have considered are:
- give the CalendarFile class a has-a relationship to Calendar (each instance creates its own internal Calendar object), but that seems methodically stilted.
- fiddle with the returned object to give it the methods it needs (I know there's a term for creating a customized object like that but it escapes me).
- make the additional methods into functions and just have them work with instances of Calendar.
- or perhaps the answer is that I shouldn't be trying to subclass from the module in the first place, and this type of code belongs in the module itself.
Again, I'm trying to understand what the "best" approach is and also learn if I'm missing any alternatives. Thanks.
Normally, I would expect an alternative constructor defined as a classmethod to simply call the class's standard constructor, transforming the arguments that it receives into valid arguments to the standard constructor.
>>> class Toy(object):
...     def __init__(self, x):
...         self.x = abs(x)
...     def __repr__(self):
...         return 'Toy({})'.format(self.x)
...     @classmethod
...     def from_string(cls, s):
...         return cls(int(s))
...
>>> Toy.from_string('5')
Toy(5)
In most cases, I would strongly recommend something like this approach; this is the gold standard for alternative constructors.
But this is a special case.
I've now looked over the source, and I think the best way to add a new class is to edit the module directly; otherwise, scrap inheritance and take option one (your "has-a" option). The different classes are all slightly differentiated versions of the same container class -- they shouldn't really even be separate classes. But if you want to add a new class in the idiom of the code as it is written, you have to add a new class to the module itself. Furthermore, from_ical is deceptively named; it's not really a constructor at all. I think it should be a standalone function. It builds a whole tree of components linked together, and the code that builds the individual components is buried in a chain of calls to various factory functions that also should be standalone functions but aren't. IMO much of that code ought to live in __init__, where it would be useful to you for subclassing, but it doesn't.
Indeed, none of the subclasses of Component even add any methods. By adding methods to your subclass of Calendar, you're completely disregarding the actual idiom of the code. I don't like its idiom very much but by disregarding that idiom, you're making it even worse. If you don't want to modify the original module, then forget about inheritance here and give your object a has-a relationship to Calendar objects. Don't modify __class__; establish your own OO structure that follows standard OO practices.
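A rough sketch of the has-a version (the wrapper's method names here are invented for illustration) would be something like:

from icalendar import Calendar

class CalendarFile(object):
    """Wraps a Calendar instead of subclassing it."""
    def __init__(self, calendar):
        self.calendar = calendar

    @classmethod
    def from_file(cls, path):
        with open(path) as f:
            return cls(Calendar.from_ical(f.read()))

    def events(self):
        # delegate to the wrapped Calendar for anything it already does
        return self.calendar.walk('VEVENT')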
I don't even know how to explain this, so here is the code I'm trying.
from couchdb.schema import Document, TextField

class Base(Document):
    type = TextField(default=self.__name__)
    # self doesn't work; how do I get a reference to Base?

class User(Base):
    pass
    # User.type should end up as TextField(default="User")
The reason I'm even trying this is that I'm working on creating a base class for an ORM I'm using. I want to avoid defining the table name for every model I have. Also, knowing what the limits of Python are will help me avoid wasting time trying impossible things.
The class object does not (yet) exist while the class body is executing, so there is no way for code in the class body to get a reference to it (just as, more generally, there is no way for any code to get a reference to any object that does not exist). User.__name__, however, already does what you're specifically looking for, so I don't think you need any workaround (such as metaclasses or class decorators) for your specific use case.
Edit: for the edited question, where you don't just need the name as a string, a class decorator is the simplest way to work around the problem (in Python 2.6 or later):
def maketype(cls):
    cls.type = TextField(default=cls.__name__)
    return cls
and put @maketype in front of each class you want to decorate that way. In Python 2.5 or earlier, you need instead to say maketype(Base) after each relevant class statement.
If you want this functionality to get inherited, then you have to define a custom metaclass that performs the same functionality in its __init__ or __new__ methods. Personally, I would recommend against defining custom metaclasses unless they're really indispensable -- instead, I'd stick with the simpler decorator approach.
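If you do go the metaclass route, the idea would be roughly the following. This is a self-contained sketch with a stand-in TextField, written with Python 3 class syntax (in Python 2 you would set __metaclass__ = AutoType inside the class body instead):

class TextField(object):
    """Stand-in for couchdb.schema.TextField, just so the sketch runs on its own."""
    def __init__(self, default=None):
        self.default = default

class AutoType(type):
    def __init__(cls, name, bases, namespace):
        super(AutoType, cls).__init__(name, bases, namespace)
        # Runs for every class created with this metaclass, including
        # subclasses, so the attribute is set for all of them automatically.
        cls.type = TextField(default=name)

class Base(metaclass=AutoType):
    pass

class User(Base):
    pass

print(User.type.default)   # "User"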
You may want to check out the other question, "Python super class reflection".
In your case, User.__base__ will return the base class Base. If it doesn't work, make sure you are using new-style classes, i.e. classes that (directly or indirectly) inherit from object.
I was wondering if anyone knew of a particular reason (other than a purely stylistic one) why the following languages use these syntaxes to instantiate a class?
Python:
class MyClass:
    def __init__(self):
        pass

x = MyClass()
Ruby:
class AnotherClass
  def initialize()
  end
end

x = AnotherClass.new()
I can't understand why the syntax used for the constructor and the syntax used to actually get an instance of the class are so different. Sure, I know it doesn't really make a difference, but, for example, in Ruby what's wrong with making the constructor new()?
When you are creating an object of a class, you are doing more than just initializing it. You are allocating the memory for it, then initializing it, then returning it.
Note also that in Ruby, new() is a class method, while initialize() is an instance method. If you simply overrode new(), you would have to create the object first, then operate on that object, and return it, rather than the simpler initialize() where you can just refer to self, as the object has already been created for you by the built-in new() (or in Ruby, leave self off as it's implied).
In Objective-C, you can actually see what's going on a little more clearly (but more verbosely) because you need to do the allocation and initialization separately, since Objective-C can't pass argument lists from the allocation method to the initialization one:
[[MyClass alloc] initWithFoo: 1 bar: 2];
Actually, in Python the constructor is __new__(), while __init__() is the instance initializer.
__new__() is a static method; it is called first, and it receives the class as its first parameter (usually named cls or klass). It creates the object instance, which is then passed to __init__() as the first parameter (usually named self), along with all the rest of __new__'s parameters.
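A small illustration of the order of the two calls (just a toy example):

class Point(object):
    def __new__(cls, x, y):
        print('__new__ creates an instance of ' + cls.__name__)
        return super(Point, cls).__new__(cls)

    def __init__(self, x, y):
        print('__init__ initialises it')
        self.x, self.y = x, y

p = Point(1, 2)   # prints the __new__ line first, then the __init__ line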
This is useful because in Python, a constructor is just another function. For example, I've done this several times:
def ClassThatShouldntBeDirectlyInstantiated():
    return _classThatShouldntBeDirectlyInstantiated()

class _classThatShouldntBeDirectlyInstantiated(object):
    ...
Of course, that's a contrived example, but you get the idea. Essentially, most people that use your class will probably think of ClassThatShouldntBeDirectlyInstantiated as your class, and there's no need to let them think otherwise. Doing things this way, all you have to do is document the factory function as the class it instantiates and not confuse anyone using the class.
In a language like C# or Java, I sometimes find it annoying to make classes like this because it can be difficult to determine whether you should use the constructor or some factory function. I'm not sure if this is also the case in Ruby though.