Having this class:
class A(frozenset):
    def __init__(self, *args):
        frozenset.__init__(self, *args)
Executing A(range(2)) results in the following error:
Traceback (most recent call last):
File "<pyshell#65>", line 1, in <module>
A(range(2))
File "<pyshell#60>", line 3, in __init__
frozenset.__init__(self, *args)
TypeError: object.__init__() takes no parameters
Meanwhile, frozenset(range(2)) works, and if I inherit A from set instead, A(range(2)) also works.
If I pass 0 or more than 1 argument to A's constructor, it works as it should (with 0 parameters it creates an empty set; with 2 or more it raises TypeError: A expected at most 1 arguments, got 2).
You actually need to override the __new__ method, not __init__ (__init__ receives an instance already created and returned by __new__), when subclassing frozenset to build a new frozenset from an iterable passed as an argument:
class A(frozenset):
    def __new__(cls, *args):
        self = super().__new__(cls, *args)
        return self
print(A(range(2)))
print(A(range(2)).__class__.__bases__)
Sample output:
A({0, 1})
(<class 'frozenset'>,)
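As a runnable sketch of why __new__ is the right hook here (only the assertions are added for illustration):

```python
# frozenset is immutable: its elements must be supplied to __new__,
# because by the time __init__ runs the set's contents are already fixed.
class A(frozenset):
    def __new__(cls, iterable=()):
        return super().__new__(cls, iterable)

a = A(range(2))
assert a == frozenset({0, 1})   # the contents came through __new__
assert type(a) is A             # still an instance of the subclass
```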
I found this weird behaviour where I don't know if I am the problem or if this is a python / dataclass / callable bug.
Here is a minimal working example
from dataclasses import dataclass
from typing import Callable
import numpy as np
def my_dummy_callable(my_array, my_bool):
    return 1.0

@dataclass()
class MyDataClassDummy:
    my_data: int = 1
    my_callable: Callable[[np.ndarray, bool], float] = my_dummy_callable

    def __init__(self):
        print("I initialized my Class!")

    @classmethod
    def my_factory_with_callable_setting(cls):
        my_dummy = MyDataClassDummy()
        my_dummy.my_callable = my_dummy_callable
        return my_dummy

    @classmethod
    def my_factory_without_callable_setting(cls):
        my_dummy = MyDataClassDummy()
        return my_dummy

    def do_something(self):
        print("This is my data", self.my_data)
        print("This is the name of my callable", str(self.my_callable))
        return self.my_callable(np.empty(shape=(42, 42)), True) + self.my_data

@dataclass()
class MySecondDataClassDummy:
    my_data: int = 4
    my_callable: Callable[[np.ndarray, bool], float] = my_dummy_callable

    @classmethod
    def my_factory(cls):
        my_dummy = MySecondDataClassDummy()
        return my_dummy

    def do_something(self):
        print("This is my data", self.my_data)
        print("This is the name of my callable", str(self.my_callable))
        return self.my_callable(np.empty(shape=(42, 42)), True) - self.my_data
if __name__ == '__main__':
    # this works
    my_first_dummy = MyDataClassDummy.my_factory_with_callable_setting()
    my_first_dummy.do_something()
    # this also works
    my_second_dummy = MySecondDataClassDummy.my_factory()
    my_second_dummy.do_something()
    # this does not work
    my_other_dummy = MyDataClassDummy.my_factory_without_callable_setting()
    my_other_dummy.do_something()
case 1: initialize with a factory that uses my own __init__ and then sets the callable explicitly after initialization (although there is a default value) - works
case 2: initialize with a factory, without writing __init__() myself - works
case 3: initialize with a factory that uses my own __init__ but does not set the callable explicitly after initialization (that is what default values are for, isn't it?!) - doesn't work and throws this error:
Traceback (most recent call last):
File "my_path/dataclass_dummy.py", line 63, in <module>
my_other_dummy.do_something()
File "my_path/dataclass_dummy.py", line 33, in do_something
return self.my_callable(np.empty(shape=(42, 42)), True) + self.my_data
TypeError: my_dummy_callable() takes 2 positional arguments but 3 were given
So now I am wondering, what I am doing wrong in the third case.
I am using Python 3.8 and numpy 1.20.2
The @dataclass decorator by default supplies an __init__() method to a class. This method turns type-annotated class variables into attributes of instances of the class. This mechanism is used in the case of the class MySecondDataClassDummy. In effect, every instance of this class has an attribute my_callable. Since this attribute is a function, you can call it as you do in Case 2, and everything works.
The class MyDataClassDummy has its own __init__() method, which overrides the __init__() provided by @dataclass. Instances of this class are then initialized more or less as they would be without the @dataclass decorator. In particular, class variables that are functions become bound methods of class instances. As a result, my_callable becomes such a bound method, and when in Case 3 you execute
self.my_callable(np.empty(shape=(42, 42)), True)
then self is used as the first argument of my_callable. Since this function takes only two arguments, it generates an error.
The same problem does not occur in Case 1, because there you assign to my_dummy.my_callable, making it an instance attribute of my_dummy whose value is a plain function. After this assignment it is not a bound method anymore.
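The bound-method behaviour described above can be checked directly with a minimal sketch, independent of dataclasses (the class name PlainClass is illustrative):

```python
import types

def my_dummy_callable(my_array, my_bool):
    return 1.0

class PlainClass:
    # A function stored as a class attribute is a descriptor:
    # looking it up on an instance produces a bound method.
    my_callable = my_dummy_callable

p = PlainClass()
assert isinstance(p.my_callable, types.MethodType)   # bound: self is injected

# Assigning on the instance (as in Case 1) stores a plain function instead.
p.my_callable = my_dummy_callable
assert not isinstance(p.my_callable, types.MethodType)
assert p.my_callable('fake-array', True) == 1.0
```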
I have a class 'A', which has subclasses 'B', 'C', and 'D'.
'A' serves only the purpose of categorization and inheritance, so I don't want user to create the instance of 'A'.
However, I want to create the instances of 'B', 'C', and 'D' as usual.
First, I blocked the generation of the instance of 'A' by overriding A.__new__.
Then, I overrode B.__new__ again, just like below.
class A(object):
    def __new__(cls, *args, **kwargs):
        raise AssertionError('A cannot be generated directly.')

class B(A):
    def __new__(cls, *args, **kwargs):
        return super(cls, B).__new__(cls, *args, **kwargs)
However, with this code, creating a B instance raises the same AssertionError.
I understand that since A.__new__ is disabled, super(cls, B).__new__ (which is exactly the same as A.__new__) is disabled as well.
Is there any way to generate the instance of subclass, without invoking the __new__ of its superclass?
Edited - Resolved
Instead of invoking A.__new__, I invoked the __new__ method of higher superclass (which is 'object'). This solution circumvented the problem.
The new code is something like this:
class A(object):
    def __new__(cls, *args, **kwargs):
        raise AssertionError('A cannot be generated directly.')

class B(A):
    def __new__(cls, *args, **kwargs):
        return object.__new__(cls)
Note that it is return object.__new__(cls), not return object.__new__(cls, *args, **kwargs). This is because object.__new__ does not accept any arguments besides cls, so no extra arguments must be passed.
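The resolved code runs as expected; here is a runnable version with only the assertions added:

```python
class A:
    def __new__(cls, *args, **kwargs):
        raise AssertionError('A cannot be generated directly.')

class B(A):
    def __new__(cls, *args, **kwargs):
        # object.__new__ accepts only cls, so no extra arguments are forwarded
        return object.__new__(cls)

b = B()
assert isinstance(b, B) and isinstance(b, A)

try:
    A()
    raise RuntimeError('A() should have failed')
except AssertionError:
    pass  # direct instantiation of A is blocked
```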
You're explicitly invoking A.__new__ in the B.__new__ method. This version works fine:
class A:
    def __new__(cls, *args, **kwargs):
        raise AssertionError()

class B(A):
    def __new__(cls, *args, **kwargs):
        pass
Result:
>>> x = A()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in __new__
AssertionError
>>> y = B()
>>> y
>>>
Note that B.__new__ is now pretty much useless: I don't actually get a B instance (y is None, which is why the REPL prints nothing). I'm assuming you have an actual reason to be modifying __new__, in which case you'd put your replacement code where I have pass.
If you don't have a good reason to modify __new__, don't. You almost always want __init__ instead.
>>> class A1:
... def __init__(self):
... raise ValueError()
...
>>> class B1(A1):
... def __init__(self):
... pass
...
>>> A1()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in __init__
ValueError
>>> B1()
<__main__.B1 object at 0x7fa97b3f26d8>
Note how I get an actual default-constructed B1 object now.
As a general rule, you'll only want to modify __new__ if you're working with immutable types (like str and tuple) or building new metaclasses. You can see more details on what each method is responsible for in the docs.
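A common variant of this pattern (a sketch; Base and Concrete are illustrative names) checks cls inside __new__, so subclasses need no override at all:

```python
class Base:
    def __new__(cls, *args, **kwargs):
        # Refuse direct instantiation of Base, but let subclasses through
        if cls is Base:
            raise TypeError('Base cannot be instantiated directly')
        return super().__new__(cls)

class Concrete(Base):
    pass

c = Concrete()          # works: cls is Concrete, not Base
assert isinstance(c, Base)

try:
    Base()
except TypeError:
    pass  # blocked as intended
```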
I have four distinct classes: a main base/parent class, two main classes that inherit from this parent class, and another class that inherits from both of these main classes. If I define a method with the same name as a parent-class method but a different number of arguments, I get a TypeError.
# Example
class Parent(object):
    def check(self, arg):
        tmp = {
            'one': False,
            'two': False
        }
        try:
            if 'one' in arg:
                tmp['one'] = True
            if 'two' in arg:
                tmp['two'] = True
        except TypeError:
            pass
        return tmp

class Child(Parent):
    def check(self, arg):
        return Parent.check(self, arg)['one']

    def method(self, arg):
        if self.check(arg):
            print 'One!'

class ChildTwo(Parent):
    def check(self, arg):
        return Parent.check(self, arg)['two']

    def method(self, arg):
        if self.check(arg):
            print 'Two!'

class ChildThree(Child, ChildTwo):
    def check(self, arg, arg2):
        print arg2
        return Child.check(self, arg)

    def method(self, arg):
        if self.check(arg, 'test'):
            print 'One!'
        ChildTwo.method(self, arg)

test = ChildThree()
test = test.method('one and two')
runfile('untitled6.py', wdir='./Documents')
test
One!
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\py\AppData\Local\Continuum\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile
execfile(filename, namespace)
File "C:\Users\py\AppData\Local\Continuum\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 74, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "untitled6.py", line 49, in <module>
test = test.method('one and two')
File "untitled6.py", line 46, in method
ChildTwo.method(self, arg)
File "untitled6.py", line 34, in method
if self.check(arg):
TypeError: check() takes exactly 3 arguments (2 given)
However, when I remove the second argument from the 'check' method in 'ChildThree', it seems to work fine:
class ChildThree(Child, ChildTwo):
    def check(self, arg):
        return Child.check(self, arg)

    def method(self, arg):
        if self.check(arg):
            print 'One!'
        ChildTwo.method(self, arg)
runfile('untitled6.py', wdir='./Documents')
One!
Two!
I am fairly new to classes/inheritance, so I am not sure why an extra argument causes a TypeError even though it calls the parent class method with a single argument.
Consider this line:
ChildTwo.method(self, arg)
You passed in self explicitly. self here is a reference to a ChildThree instance. Later, in the body of ChildTwo.method:
if self.check(arg):
It's the same self we're talking about here; self is still a reference on your ChildThree instance.
It looks like you expected self to do something magical, but it doesn't - it's just a plain old name. For it to refer to a ChildTwo instance it would have had to be called like a bound method. Compare and contrast:
my_child_two.method(arg) <-- "self" gets passed implicitly by descriptor protocol
ChildTwo.method(self, arg) <-- "self" is just whatever it is
This type of inheritance is called "the diamond problem". It is a topic in itself, so I'll explain with a simpler case:
class C1(object):
    def check(self, arg):
        return 1

    def method(self, arg):
        return self.check(arg)

class C2(C1):
    def check(self, arg1, arg2):  # this overrides C1.check!
        return arg2 + C1.check(self, arg1)

c2 = C2()
c2.method(55)  # fails
C2.check overrides C1.check on all C2 instances. Therefore, when self.check(arg) is called from method, it calls C2.check for instances of C2. That call fails because C2.check expects a second argument (arg2) that method does not supply.
How to resolve that? When overriding methods, do not change their signature (number and type of received arguments and type of return value), or you'll get in trouble.
[more advanced] You could have more freedom with functions which take *args and **kwargs.
Besides that, I see that ChildThree.check calls Child.check, which calls Parent.check, but no one calls ChildTwo.check. That cannot be right.
You should either call the method on all base classes (and risk calling the Parent implementation twice, which may even be right here), or use super().
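The signature advice can be sketched on the answer's simplified C1/C2 example (the return values here are illustrative): accepting optional extra positional arguments keeps the call shape compatible.

```python
class C1(object):
    def check(self, arg, *extra):
        return 1

    def method(self, arg):
        return self.check(arg)

class C2(C1):
    # Same call shape as C1.check: the extra positional args are optional,
    # so C1.method's call self.check(arg) still works on C2 instances.
    def check(self, arg, *extra):
        return 10 + C1.check(self, arg)

c2 = C2()
assert c2.method(55) == 11           # no TypeError this time
assert c2.check(55, 'ignored') == 11
```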
I encountered surprising behaviour of the side_effect parameter in patch.object: the function replacing the original does not receive self.
class Animal():
    def __init__(self):
        self.noise = 'Woof'

    def make_noise(self):
        return self.noise

def loud(self):
    return self.noise.upper() + '!!'
from unittest.mock import patch

dog = Animal()
dog.make_noise()

with patch.object(Animal, 'make_noise', side_effect=loud):
    dog.make_noise()
This gives:
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/lustre/home/production/Applications/python/python-3.4.4/lib/python3.4/unittest/mock.py", line 902, in __call__
return _mock_self._mock_call(*args, **kwargs)
File "/lustre/home/production/Applications/python/python-3.4.4/lib/python3.4/unittest/mock.py", line 968, in _mock_call
ret_val = effect(*args, **kwargs)
TypeError: loud() missing 1 required positional argument: 'self'
If I change the loud function to
def loud(*args, **kwargs):
    print(args)
    print(kwargs)
I get the following output:
()
{}
Is there a way to replace a function from an object and still receive self?
self is only supplied for bound methods (because functions are descriptors). A Mock object is not such a method, and the side_effect function is not bound, so self is indeed not going to be passed in.
If you must have access to the instance in a side_effect function, you'll have to patch the function on the class with an actual function:
with patch.object(Animal, 'make_noise', new=loud):
Now make_noise is replaced by the loud function on the Animal class, so it'll be bound:
>>> with patch.object(Animal, 'make_noise', new=loud):
... dog.make_noise()
...
'WOOF!!'
The alternative is to set autospec=True, at which point mock will use a real function to mock out make_noise():
>>> with patch.object(Animal, 'make_noise', autospec=True, side_effect=loud):
... dog.make_noise()
...
'WOOF!!'
Also see the Mocking Unbound Methods section in the mock getting started section.
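Putting the new= approach together as a runnable whole (only the assertions are added):

```python
from unittest.mock import patch

class Animal:
    def __init__(self):
        self.noise = 'Woof'

    def make_noise(self):
        return self.noise

def loud(self):
    return self.noise.upper() + '!!'

dog = Animal()
assert dog.make_noise() == 'Woof'

# new= replaces the attribute on the class, so the plain function
# becomes a bound method and receives the instance as self.
with patch.object(Animal, 'make_noise', new=loud):
    assert dog.make_noise() == 'WOOF!!'

# Outside the context manager the original method is restored.
assert dog.make_noise() == 'Woof'
```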
class _GhostLink(object):
    toGhost = lambda filename: False

class _Mod_AllowGhosting_All(_GhostLink):
    def _loop(self):
        # ...
        if self.__class__.toGhost(fileName) != oldGhost: ...
produces:
Traceback (most recent call last):
File "bash\basher\mod_links.py", line 592, in Execute
changed = self._loop()
File "bash\basher\mod_links.py", line 587, in _loop
if self.__class__.toGhost(fileName) != oldGhost:
TypeError: unbound method <lambda>() must be called with _Mod_AllowGhosting_All instance as first argument (got Path instance instead)
while passing an instance as in if self.toGhost(fileName) != ... results in:
Traceback (most recent call last):
File "bash\basher\mod_links.py", line 592, in Execute
changed = self._loop()
File "bash\basher\mod_links.py", line 587, in _loop
if self.toGhost(fileName) != oldGhost:
TypeError: <lambda>() takes exactly 1 argument (2 given)
How come toGhost behaves as an instance method?
EDIT: I know the difference between class methods, static methods, etc. - this is a syntactic question.
Looks like you want a static method:

class _GhostLink(object):
    toGhost = staticmethod(lambda filename: False)

or:

class _GhostLink(object):
    @staticmethod
    def toGhost(filename):
        return False
The reason this happens is fundamentally that lambda and def do the same thing, except that def also assigns a variable. That is, both constructs produce a function.
The binding of a function (whether from lambda or def) into an instance method happens because functions are also descriptors; remember, in every single case:
foo = lambda (...): (...)
is identical to:
def foo(...):
    return (...)
so when you say:
class _GhostLink(object):
    toGhost = lambda filename: False

It's the same as if you had said:

class _GhostLink(object):
    def toGhost(filename):
        return False
So the moral of the story is that you should probably never use lambda as the right side of an assignment; it's not "better" or even different from using def. All it does is confuse.