Propagation of NaN through calculations - python

Normally, NaN (not a number) propagates through calculations, so I don't need to check for NaN in each step. This works almost always, but apparently there are exceptions. For example:
>>> nan = float('nan')
>>> pow(nan, 0)
1.0
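For contrast, ordinary arithmetic does propagate the NaN:
>>> nan + 1
nan
>>> nan * 0
nan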
I found the following comment on this:
The propagation of quiet NaNs through arithmetic operations allows
errors to be detected at the end of a sequence of operations without
extensive testing during intermediate stages. However, note that
depending on the language and the function, NaNs can silently be
removed in expressions that would give a constant result for all other
floating-point values e.g. NaN^0, which may be defined as 1, so in
general a later test for a set INVALID flag is needed to detect all
cases where NaNs are introduced.
To satisfy those wishing a more strict interpretation of how the power
function should act, the 2008 standard defines two additional power
functions; pown(x, n) where the exponent must be an integer, and
powr(x, y) which returns a NaN whenever a parameter is a NaN or the
exponentiation would give an indeterminate form.
Is there a way to check the INVALID flag mentioned above through Python? Alternatively, is there any other approach to catch cases where NaN does not propagate?
Motivation: I decided to use NaN for missing data. In my application, missing inputs should result in a missing result. It works great, with the exception I described.

I realise that a month has passed since this was asked, but I've come across a similar problem (i.e. pow(float('nan'), 1) throws an exception in some Python implementations, e.g. Jython 2.52b2), and I found the above answers weren't quite what I was looking for.
Using a MissingData type as suggested by 6502 seems like the way to go, but I needed a concrete example. I tried Ethan Furman's NullType class but found that this didn't work with any arithmetic operations, as it doesn't coerce data types (see below), and I also didn't like that it explicitly named each arithmetic function that was overridden.
Starting with Ethan's example and tweaking code I found here, I arrived at the class below. Although the class is heavily commented you can see that it actually only has a handful of lines of functional code in it.
The key points are:
1. Use __coerce__() to return two NoData objects for mixed-type (e.g. NoData + float) arithmetic operations, and two strings for string-based (e.g. concatenation) operations.
2. Use __getattr__() to return a callable NoData() object for all other attribute/method access.
3. Use __call__() to implement all other methods of the NoData() object: by returning a NoData() object.
Here are some examples of its use.
>>> nd = NoData()
>>> nd + 5
NoData()
>>> pow(nd, 1)
NoData()
>>> math.pow(NoData(), 1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: nb_float should return float object
>>> nd > 5
NoData()
>>> if nd > 5:
...     print "Yes"
... else:
...     print "No"
...
No
>>> "The answer is " + nd
'The answer is NoData()'
>>> "The answer is %f" % (nd)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: float argument required, not instance
>>> "The answer is %s" % (nd)
'The answer is '
>>> nd.f = 5
>>> nd.f
NoData()
>>> nd.f()
NoData()
I noticed that using pow with NoData() invokes the ** operator and hence works with NoData, but math.pow does not, as it first tries to convert the NoData() object to a float. I'm happy using the non-math pow; hopefully 6502 et al. were using math.pow when they had problems with pow in their comments above.
The other issue I can't think of a way of solving is use with the %f format operator: no methods of NoData are called in that case, the operator simply fails if you don't provide a float. Anyway, here's the class itself.
class NoData():
    """NoData object - any interaction returns NoData()"""
    def __str__(self):
        # I want '' returned as it represents no data in my output (e.g. csv) files
        return ''
    def __unicode__(self):
        return ''
    def __repr__(self):
        return 'NoData()'
    def __coerce__(self, other_object):
        if isinstance(other_object, str) or isinstance(other_object, unicode):
            # Return string objects when coerced with another string object.
            # This ensures that e.g. concatenation operations produce strings.
            return repr(self), other_object
        else:
            # Otherwise return two NoData objects - these will then be passed to the
            # appropriate operator method for NoData, which should then return a
            # NoData object
            return self, self
    def __nonzero__(self):
        # __nonzero__ is called whenever a NoData object is tested for truth,
        # e.g. "if nd:". As all operations involving NoData return NoData, this is
        # reached whenever a NoData object propagates into a branch test.
        return False
    def __hash__(self):
        # Prevent NoData() from being used as a key for a dict or used in a set
        raise TypeError("Unhashable type: " + repr(self))
    def __setattr__(self, name, value):
        # Overridden to prevent any attributes from being created on NoData
        # when e.g. "NoData().f = x" is called
        return None
    def __call__(self, *args, **kwargs):
        # If a NoData object is called (i.e. used as a method), return a NoData object
        return self
    def __getattr__(self, name):
        # For all other attribute or method accesses, return a NoData object.
        # Remember that the NoData object can be called (__call__), so if a method
        # is called, a NoData object is first returned and then called. This works
        # for operators, so e.g. NoData() + 5 will:
        #  - call NoData().__coerce__, which returns a (NoData, NoData) tuple
        #  - call __getattr__, which returns a NoData object
        #  - call the returned NoData object with args (self, NoData)
        #  - this call (i.e. __call__) returns a NoData object
        # For attribute accesses NoData will be returned, and that's it.
        # print name  # (uncomment for debugging, i.e. to see which attribute was
        #             # accessed or which method was called)
        return self

Why use NaN, which already has another meaning, instead of an instance of a MissingData class that you define yourself?
Defining operations on MissingData instances to get propagation should be easy...
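For instance, a minimal sketch of the idea (the class name and the set of overridden operators here are illustrative, not prescribed by the answer):
class MissingData(object):
    """Propagates itself through arithmetic -- unlike NaN with pow(x, 0)."""
    def _propagate(self, *args, **kwargs):
        return self
    __add__ = __radd__ = __sub__ = __rsub__ = _propagate
    __mul__ = __rmul__ = __div__ = __rdiv__ = _propagate
    __pow__ = __rpow__ = __neg__ = __abs__ = _propagate
    def __repr__(self):
        return 'MissingData()'
Used like this:
>>> MissingData() + 1.5
MissingData()
>>> pow(MissingData(), 0)
MissingData()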

If it's just pow() giving you headaches, you can easily redefine it to return NaN under whatever circumstances you like.
def pow(x, y):
    return x ** y if x == x else float("NaN")
If NaN can be used as an exponent you'd also want to check for that; this raises a ValueError exception except when the base is 1 (apparently on the theory that 1 to any power, even one that's not a number, is 1).
(And of course pow() actually takes three operands, the third optional, which omission I'll leave as an exercise...)
Unfortunately the ** operator has the same behavior, and there's no way to redefine that for built-in numeric types. A possibility to catch this is to write a subclass of float that implements __pow__() and __rpow__() and use that class for your NaN values.
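A minimal sketch of that idea (the class name is hypothetical; only the power hooks are overridden):
class NaN(float):
    """A float NaN whose ** always propagates, even for NaN ** 0."""
    def __new__(cls):
        return float.__new__(cls, float('nan'))
    def __pow__(self, other, mod=None):
        return self   # NaN ** anything -> NaN, including NaN ** 0
    def __rpow__(self, other):
        return self   # anything ** NaN -> NaN
>>> NaN() ** 0
nan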
Python doesn't seem to provide access to any flags set by calculations; even if it did, it's something you'd have to check after each individual operation.
In fact, on further consideration, I think the best solution might be to simply use an instance of a dummy class for missing values. Python will choke on any operation you try to do with these values, raising an exception, and you can catch the exception and return a default value or whatever. There's no reason to proceed with the rest of the calculation if a needed value is missing, so an exception should be fine.
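A sketch of that approach (all names here are illustrative):
class Missing(object):
    pass   # no numeric methods, so any arithmetic involving it raises TypeError

def calculate(x, y):
    try:
        return x * y + 1.0
    except TypeError:   # a Missing value was involved somewhere
        return Missing()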

To answer your question: No, there is no way to check the flags using normal floats. You can use the Decimal class, however, which provides much more control... but is a bit slower.
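For example, here is a minimal sketch of checking the decimal module's sticky InvalidOperation flag after a calculation (the trap is disabled so invalid operations return quiet NaNs instead of raising):
from decimal import Decimal, getcontext, InvalidOperation

ctx = getcontext()
ctx.traps[InvalidOperation] = False   # return quiet NaNs instead of raising
ctx.flags[InvalidOperation] = False   # clear the sticky flag first

result = Decimal('Infinity') - Decimal('Infinity')   # an invalid operation
print(ctx.flags[InvalidOperation])   # True: the flag recorded it
print(result)                        # NaN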
Your other option is to use an EmptyData or Null class, such as this one:
import sys

class NullType(object):
    "Null object -- any interaction returns Null"
    def _null(self, *args, **kwargs):
        return self
    __eq__ = __ne__ = __ge__ = __gt__ = __le__ = __lt__ = _null
    __add__ = __iadd__ = __radd__ = _null
    __sub__ = __isub__ = __rsub__ = _null
    __mul__ = __imul__ = __rmul__ = _null
    __div__ = __idiv__ = __rdiv__ = _null
    __mod__ = __imod__ = __rmod__ = _null
    __pow__ = __ipow__ = __rpow__ = _null
    __and__ = __iand__ = __rand__ = _null
    __xor__ = __ixor__ = __rxor__ = _null
    __or__ = __ior__ = __ror__ = _null
    __truediv__ = __itruediv__ = __rtruediv__ = _null
    __floordiv__ = __ifloordiv__ = __rfloordiv__ = _null
    __lshift__ = __ilshift__ = __rlshift__ = _null
    __rshift__ = __irshift__ = __rrshift__ = _null
    __neg__ = __pos__ = __abs__ = __invert__ = _null
    __call__ = __getattr__ = _null
    def __divmod__(self, other):
        return self, self
    __rdivmod__ = __divmod__
    if sys.version_info[:2] >= (2, 6):
        __hash__ = None
    else:
        def __hash__(yo):
            raise TypeError("unhashable type: 'Null'")
    def __new__(cls):
        return cls.null
    def __nonzero__(yo):
        return False
    def __repr__(yo):
        return '<null>'
    def __setattr__(yo, name, value):
        return None
    def __setitem__(yo, index, value):
        return None
    def __str__(yo):
        return ''

NullType.null = object.__new__(NullType)
Null = NullType()
You may want to change the __repr__ and __str__ methods. Also, be aware that Null cannot be used as a dictionary key, nor stored in a set.
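Usage looks like this (Python 2, matching the __nonzero__-based class above):
>>> Null + 5
<null>
>>> Null.some_attribute.some_method()
<null>
>>> bool(Null)
False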

Related

Python's standard hashing algorithm [duplicate]

Following on from this question, I'm interested to know when a Python object's hash is computed:
1. at an instance's __init__ time,
2. the first time __hash__() is called,
3. every time __hash__() is called, or
4. any other opportunity I might be missing?
May this vary depending on the type of the object?
Why does hash(-1) == -2 whilst other integers are equal to their hash?
The hash is generally computed each time it's used, as you can quite easily check yourself (see below).
Of course, any particular object is free to cache its hash. For example, CPython strings do this, but tuples don't (see e.g. this rejected bug report for reasons).
The hash value -1 signals an error in CPython. This is because C doesn't have exceptions, so it needs to use the return value. When a Python object's __hash__ returns -1, CPython will actually silently change it to -2.
See for yourself:
class HashTest(object):
    def __hash__(self):
        print('Yes! __hash__ was called!')
        return -1
hash_test = HashTest()
# All of these will print out 'Yes! __hash__ was called!':
print('__hash__ call #1')
hash_test.__hash__()
print('__hash__ call #2')
hash_test.__hash__()
print('hash call #1')
hash(hash_test)
print('hash call #2')
hash(hash_test)
print('Dict creation')
dct = {hash_test: 0}
print('Dict get')
dct[hash_test]
print('Dict set')
dct[hash_test] = 0
print('__hash__ return value:')
print(hash_test.__hash__()) # prints -1
print('Actual hash value:')
print(hash(hash_test)) # prints -2
From here:
The hash value -1 is reserved (it’s used to flag errors in the C implementation).
If the hash algorithm generates this value, we simply use -2 instead.
Since an integer's hash is the integer itself, hash(-1) is simply changed to -2 right away.
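You can see this with the integers themselves:
>>> hash(-1)
-2
>>> hash(-2)
-2
>>> (-1).__hash__()
-2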
It is easy to see that option #3 holds for user defined objects. This allows the hash to vary if you mutate the object, but if you ever use the object as a dictionary key you must be sure to prevent the hash ever changing.
>>> class C:
...     def __hash__(self):
...         print("__hash__ called")
...         return id(self)
>>> inst = C()
>>> hash(inst)
__hash__ called
43795408
>>> hash(inst)
__hash__ called
43795408
>>> d = { inst: 42 }
__hash__ called
>>> d[inst]
__hash__ called
Strings use option #2: they calculate the hash value once and cache the result. This is safe because strings are immutable, so the hash can never change; but if you subclass str, the result might not be immutable, so the __hash__ method will be called again every time. Tuples are usually thought of as immutable, so you might think the hash could be cached, but in fact a tuple's hash depends on the hashes of its contents, and those might include mutable values.
For #max who doesn't believe that subclasses of str can modify the hash:
>>> class C(str):
...     def __init__(self, s):
...         self._n = 1
...     def __hash__(self):
...         return str.__hash__(self) + self._n
>>> x = C('hello')
>>> hash(x)
-717693723
>>> x._n = 2
>>> hash(x)
-717693722

Override all binary operators (or other way to respect physics dimensions) in python?

I'm building classes that inherit from float to respect a dimension in some chemical calculations, e.g.:
class volume(float):
    def __init__(self, value):
        float._init_(value)
Now my goal is:
to raise an error whenever + or - is used between a normal float and an instance of volume
return an instance of volume, whenever + or - is used between two instances of volume
return an instance of volume, whenever * is used (from both sides) and / is used (from left)
raise an error when / is used from the right
return a float whenever two instances of volume are divided.
Right now, I'm going to override all four of these operators, from left and right (e.g. __add__ and __radd__):
err = 'Only objects of type volume can be added or subtracted together'
def __add__(self, volume2):
    if isinstance(volume2, volume):
        return volume(float(self) + volume2)
    else:
        raise TypeError(err)
def __radd__(self, volume2):
    if isinstance(volume2, volume):
        return volume(float(self) + volume2)
    else:
        raise TypeError(err)
Is there any easier way to access all of them, or at least an expression to include both left and right uses of the operator?
It seems that this question is primarily about avoiding code duplication. Regarding the multiply and divide cases, you have slightly different functionality and may have to write separate methods explicitly, but for the addition and subtraction related methods, the following technique would work. It is essentially monkey-patching the class, and it is okay to do this, although you should not attempt similarly to monkey-patch instances in Python 3.
I have called the class Volume with capital V in accordance with convention.
class Volume(float):
    pass

def _make_wrapper(name):
    def wrapper(self, other):
        if not isinstance(other, Volume):
            raise ValueError
        return Volume(getattr(float, name)(self, other))
    setattr(Volume, name, wrapper)

for _method in ('__add__', '__radd__',
                '__sub__', '__rsub__',):
    _make_wrapper(_method)
The equivalent explicit method in these cases looks like the following, so adapt as required for the multiply / divide cases, but note the explicit use of float.__add__(self, other) rather than self + other as the question suggests that you intended to use (where the question mentions volume(self+volume2)), which would lead to infinite recursion.
def __add__(self, other):
    if not isinstance(other, Volume):
        raise ValueError
    return Volume(float.__add__(self, other))
Regarding the __init__, I have now removed it above, because if all it does is call float.__init__, then it is not necessary to define it at all (let it simply inherit __init__ from the base class). If you want to have an __init__ method in order to initialise something else, then yes you will also need to include the explicit call to float.__init__ as you do in the question (although note the double underscores -- in the question you are trying to call float._init_).
A metaclass is a way to control how classes are constructed.
You can use a metaclass to overload all the math operators like this:
err = 'Only objects of type volume can be added or subtracted together'

class OverLoadMeta(type):
    def __new__(meta, name, bases, dct):
        # build one wrapper per method name so that e.g. __sub__ really subtracts
        def make_op(method_name):
            def op(self, volume2):
                if isinstance(volume2, Volume):
                    return Volume(getattr(float, method_name)(self, volume2))
                else:
                    raise TypeError(err)
            return op
        # you can overload whatever methods you want here
        for method in ('__add__', '__radd__', '__sub__'):
            dct[method] = make_op(method)
        return super(OverLoadMeta, meta).__new__(meta, name, bases, dct)

class Volume(float, metaclass=OverLoadMeta):
    ""

# you can use it like this:
a = Volume(1)
b = Volume(2)
c = a + b
print(c.__class__)
# class will be <class '__main__.Volume'>
a + 1
# raises TypeError: Only objects of type volume can be added or subtracted together

__add__ to support addition of different types?

This would be very easy to solve had Python been a static programming language that supported overloading. I am making a class called Complex, which is a representation of complex numbers (I know Python has its own, but I want to make one myself), where a is the real part and b is the imaginary part (Complex(a, b)). It should support adding Complex instances together (Complex(2, 4) + Complex(4, 5) = Complex(6, 9)), as well as adding an integer (Complex(2, 3) + 4 = Complex(6, 3)). However, due to the nature of Python...
def __add__(self, other):
...I have to choose which type the method will support, because Python won't recognize types at compile time, and it doesn't support overloading of functions. What is the best solution? Do I have to write an if statement based on the datatype of the other parameter?
What you could do is check whether the thing is an instance of Complex, and if not, turn it into one, like so:
def __add__(self, other):
    if isinstance(other, Complex):
        # do the addition
        return Complex(self.a + other.a, self.b + other.b)
    else:
        return self + Complex(other, 0)
That of course does not eliminate type checking, but it reuses whatever you are doing in __init__ (which is probably checking if input is int or float).
If you do not currently do type checking in __init__, it is probably a good idea to add it; this approach looks reasonable, except for the built-in complex type.
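For example, a hypothetical __init__ with that check (field names a and b as in the question):
class Complex(object):
    def __init__(self, a, b=0):
        # reject anything that isn't a plain real number
        if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
            raise TypeError("Complex parts must be real numbers")
        self.a = a
        self.b = b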
There is not necessarily a best solution. In this particular case, though:
def __add__(self, other):
    c = make_complex(other)
    return Complex(self.real + c.real, self.imag + c.imag)
is probably the way to go (though I'm making lots of assumptions about your Complex class here). If other is already Complex, the make_complex function returns it. If not, it tries its best to convert (e.g., to turn a real-only into a complex pair by constructing a complex with a zero imaginary part). If that fails, it raises some suitable exception.
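A sketch of such a make_complex, following the behaviour described above (the exact conversion rules are an assumption):
def make_complex(value):
    if isinstance(value, Complex):
        return value                 # already Complex: hand it back unchanged
    if isinstance(value, (int, float)):
        return Complex(value, 0)     # real-only: zero imaginary part
    raise TypeError("cannot convert %r to Complex" % (value,))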
This make_complex is also suitable in the constructor for Complex, so that you can replace parts of:
e = Complex(2.718, 0) # e (well, not very exactly)
i = Complex(0, 1) # sqrt(-1)
pi = Complex(3.14, 0) # pi
# you know what to do next
with:
e = Complex(2.718)
pi = make_complex(3.14)
for instance. (You can just use the Complex constructor to do all the work, using isinstance() to check the argument types as appropriate.)
Note that since complex addition is commutative you may wish to implement __radd__ as well.
Use isinstance to check whether it's the same type; if not, assume it's any type of number:
def __add__(self, other):
    # it's the same class
    if isinstance(other, Complex):
        # and you should return the same class: if anyone extends your class
        # (SomeClass(Complex)), you should return SomeClass, not a Complex object
        return self.__class__(self.a + other.a, self.b + other.b)
    # assuming it's any type of number
    try:
        return self.__class__(self.a + other, self.b)
    except TypeError:
        # change the error message
        raise TypeError("unsupported operand type(s) for +: '%s' and '%s'"
                        % (self.__class__, other.__class__))
Mixed-type operations
The numbers module in Python can be used to implement your own number classes. Among other things it allows you to correctly implement mixed-type operations using __add__ and __radd__.
Example
import numbers

class Complex:
    def __add__(self, other):
        if isinstance(other, Complex):
            ...
        elif isinstance(other, numbers.Real):
            ...
        else:
            raise TypeError
    def __radd__(self, other):
        return self + other
Implementing new number types
If you want to implement a number class which works along with Python built-in number types, you can implement your own Complex class by subclassing the abstract base class numbers.Complex.
This abstract base class will enforce the implementation of the needed methods __abs__, __add__, __complex__, __eq__, __mul__, __neg__, __pos__, __pow__, __radd__, __rmul__, __rpow__, __rtruediv__, __truediv__, conjugate, imag and real.
What's the problem here?
You can always check the type of a Python object:
if type(other) != type(self):
    raise TypeError  # raise some error
# do addition
return

How to set up a class that has all the methods of, and functions like, a built-in such as float, but holds onto extra data?

I am working with 2 data sets on the order of ~ 100,000 values. These 2 data sets are simply lists. Each item in the list is a small class.
class Datum(object):
    def __init__(self, value, dtype, source, index1=None, index2=None):
        self.value = value
        self.dtype = dtype
        self.source = source
        self.index1 = index1
        self.index2 = index2
For each datum in one list, there is a matching datum in the other list that has the same dtype, source, index1, and index2, which I use to sort the two data sets such that they align. I then do various work with the matching data points' values, which are always floats.
Currently, if I want to determine the relative values of the floats in one data set, I do something like this.
minimum = min([x.value for x in data])
for datum in data:
    datum.value -= minimum
However, it would be nice to have my custom class inherit from float, and be able to act like this.
minimum = min(data)
data = [x - minimum for x in data]
I tried the following.
class Datum(float):
    def __new__(cls, value, dtype, source, index1=None, index2=None):
        new = float.__new__(cls, value)
        new.dtype = dtype
        new.source = source
        new.index1 = index1
        new.index2 = index2
        return new
However, doing
data = [x - minimum for x in data]
removes all of the extra attributes (dtype, source, index1, index2).
How should I set up a class that functions like a float, but holds onto the extra data that I instantiate it with?
UPDATE: I do many types of mathematical operations beyond subtraction, so rewriting all of the methods that work with a float would be very troublesome, and frankly I'm not sure I could rewrite them properly.
I suggest subclassing float and using a couple decorators to "capture" the float output from any method (except for __new__ of course) and returning a Datum object instead of a float object.
First we write the method decorator (which really isn't being used as a decorator below, it's just a function that modifies the output of another function, AKA a wrapper function):
def mydecorator(f, cls):
    # f is the method being modified, cls is its class (in this case, Datum)
    def func_wrapper(*args, **kwargs):
        # *args and **kwargs are all the arguments that were passed to f
        newvalue = f(*args, **kwargs)
        # newvalue now contains the output float would normally produce
        ## Now get the cls instance provided as part of args (we need one
        ## if we're going to reattach instance information later):
        try:
            self = args[0]
            ## Now check to make sure newvalue is an instance of some numerical
            ## type, but NOT a bool or a cls type (which might lead to recursion).
            ## Ints are included so things like modulo and round will work right
            if (isinstance(newvalue, float) or isinstance(newvalue, int)) and not isinstance(newvalue, bool) and type(newvalue) != cls:
                ## If newvalue is a float or int, make a new cls instance using
                ## newvalue for value and the previous self instance information
                ## (args[0]) for the other fields
                return cls(newvalue, self.dtype, self.source, self.index1, self.index2)
        # IndexError is raised if no args were provided; AttributeError is raised
        # if self isn't a cls instance
        except (IndexError, AttributeError):
            pass
        ## If newvalue isn't numerical, or we don't have a self, just return what
        ## float would normally return
        return newvalue
    # the function has now been modified and we return the modified version
    # to be used instead of the original version, f
    return func_wrapper
The first decorator only applies to a method to which it is attached. But we want it to decorate all (actually, almost all) the methods inherited from float (well, those that appear in the float's __dict__, anyway). This second decorator will apply our first decorator to all of the methods in the float subclass except for those listed as exceptions (see this answer):
def for_all_methods_in_float(decorator, *exceptions):
    def decorate(cls):
        for attr in float.__dict__:
            if callable(getattr(float, attr)) and not attr in exceptions:
                setattr(cls, attr, decorator(getattr(float, attr), cls))
        return cls
    return decorate
Now we write the subclass much the same as you had before, but decorated, and excluding __new__ from decoration (I guess we could also exclude __init__ but __init__ doesn't return anything, anyway):
@for_all_methods_in_float(mydecorator, '__new__')
class Datum(float):
    def __new__(klass, value, dtype="dtype", source="source", index1="index1", index2="index2"):
        return super(Datum, klass).__new__(klass, value)
    def __init__(self, value, dtype="dtype", source="source", index1="index1", index2="index2"):
        self.value = value
        self.dtype = dtype
        self.source = source
        self.index1 = index1
        self.index2 = index2
        super(Datum, self).__init__()
Here are our testing procedures; iteration seems to work correctly:
d1 = Datum(1.5)
d2 = Datum(3.2)
d3 = d1+d2
assert d3.source == 'source'
L=[d1,d2,d3]
d4=max(L)
assert d4.source == 'source'
L = [i for i in L]
assert L[0].source == 'source'
assert type(L[0]) == Datum
minimum = min(L)
assert [x - minimum for x in L][0].source == 'source'
Notes:
I am using Python 3. Not certain if that will make a difference for you.
This approach effectively overrides EVERY method of float other than the exceptions, even the ones for which the result isn't modified. There may be side effects to this (subclassing a built-in and then overriding all of its methods), e.g. a performance hit or something; I really don't know.
This will also decorate nested classes.
This same approach could also be implemented using a metaclass.
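For completeness, a sketch of that metaclass variant (hypothetical; it reuses the mydecorator function from above and assumes Python 3):
class FloatWrapMeta(type):
    """Wrap every float method with mydecorator at class-creation time."""
    def __new__(meta, name, bases, dct):
        cls = super().__new__(meta, name, bases, dct)
        for attr in float.__dict__:
            # skip __new__ so Datum's own constructor survives
            if callable(getattr(float, attr)) and attr != '__new__':
                setattr(cls, attr, mydecorator(getattr(float, attr), cls))
        return cls

# used as: class Datum(float, metaclass=FloatWrapMeta): ...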
The problem is that when you do:
x - minimum
in terms of types you are doing either:
datum - float, or datum - integer
Either way, Python doesn't know how to do either of those, so it looks at the parent classes of the arguments if it can. Since datum is a type of float, it can easily use float, and the calculation ends up being
float - float
which will obviously result in a float: Python has no way of knowing how to construct your datum object unless you tell it.
To solve this you either need to implement the mathematical operators so that Python knows how to do datum - float, or come up with a different design.
Assuming that dtype, source, index1 and index2 need to stay the same after a calculation, then as an example your class needs:
def __sub__(self, other):
    return datum(self.value - other, self.dtype, self.source, self.index1, self.index2)
this should work - not tested
and this will now allow you to do this
d = datum(23.0, dtype="float", source="me", index1=1)
e = d - 16
print e.value, e.dtype, e.source, e.index1, e.index2
which should result in :
7.0 float me 1 None

Does python coerce types when doing operator overloading?

I have the following code:
a = str('5')
b = int(5)
a == b
# False
But if I make a subclass of int, and reimplement __cmp__:
class A(int):
    def __cmp__(self, other):
        return super(A, self).__cmp__(other)
a = str('5')
b = A(5)
a == b
# TypeError: A.__cmp__(x,y) requires y to be a 'A', not a 'str'
Why are these two different? Is the python runtime catching the TypeError thrown by int.__cmp__(), and interpreting that as a False value? Can someone point me to the bit in the 2.x cpython source that shows how this is working?
The documentation isn't completely explicit on this point, but see here:
If both are numbers, they are converted to a common type. Otherwise, objects of different types always compare unequal, and are ordered consistently but arbitrarily. You can control comparison behavior of objects of non-built-in types by defining a __cmp__ method or rich comparison methods like __gt__, described in section Special method names.
This (particularly the implicit contrast between "objects of different types" and "objects of non-built-in types") suggests that the normal process of actually calling comparison methods is skipped for built-in types: if you try to compare objects of two different (and non-numeric) built-in types, it just short-circuits to an automatic False.
A comparison decision tree for a == b looks something like:
- Python calls a.__cmp__(b)
  - a checks that b is an appropriate type
  - if b is an appropriate type, return -1, 0, or +1
  - if b is not, return NotImplemented
- if -1, 0, or +1 was returned, Python is done; otherwise
- if NotImplemented was returned, Python tries b.__cmp__(a)
  - b checks that a is an appropriate type
  - if a is an appropriate type, return -1, 0, or +1
  - if a is not, return NotImplemented
- if -1, 0, or +1 was returned, Python is done; otherwise
- if NotImplemented was returned again, the answer is False
Not an exact answer, but hopefully it helps.
If I understood your problem right, you need something like:
>>> class A(int):
...     def __cmp__(self, other):
...         return super(A, self).__cmp__(A(other)) # <--- A(other) instead of other
...
>>> a = str('5')
>>> b = A(5)
>>> a == b
True
Updated
Regarding the 2.x CPython source, you can find the reason for this result in typeobject.c, in the function wrap_cmpfunc, which checks two things: that the given compare function is a func, and that other is a subtype of self.
if (Py_TYPE(other)->tp_compare != func &&
    !PyType_IsSubtype(Py_TYPE(other), Py_TYPE(self))) {
    // ....
}
