Implementing complex number comparison in Python?

I know that comparison operators can't be defined for complex numbers in general. That is why Python raises a TypeError when you try to use the out-of-the-box complex comparison. I understand why this is the case (please don't go off topic trying to explain why two complex numbers can't be compared).
That said, in this particular case I would like to implement complex number comparison based on magnitudes. In other words, for complex values z1 and z2, z1 > z2 if and only if abs(z1) > abs(z2), where abs() computes the complex magnitude, as numpy.abs() does.
I have come up with a solution (at least I think I have) as follows:
import numpy as np
class CustomComplex(complex):
    def __lt__(self, other):
        return np.abs(self) < np.abs(other)

    def __le__(self, other):
        return np.abs(self) <= np.abs(other)

    def __eq__(self, other):
        return np.abs(self) == np.abs(other)

    def __ne__(self, other):
        return np.abs(self) != np.abs(other)

    def __gt__(self, other):
        return np.abs(self) > np.abs(other)

    def __ge__(self, other):
        return np.abs(self) >= np.abs(other)

complex = CustomComplex
This seems to work, but I have a few questions:
Is this the way to go or is there a better alternative?
I would like my package to transparently work with the built-in complex data type as well as numpy.complex. How can this be done elegantly, without code duplication?

I'm afraid I'm going to be off topic (yes, I fully read your post :-) ). OK, Python does allow you to compare complex numbers that way, because you can define each operator separately, but I strongly advise you not to redefine __eq__ the way you did: you are saying that 1 == -1!
IMHO that is where the problem lies, and it will spring up in your face at some point (or in the face of anyone who uses your package): when using equalities and inequalities, ordinary mortals (and most Python code) make simple assumptions such as -1 != 1, and that (a <= b) and (b <= a) together imply a == b. For purely mathematical reasons, you cannot have both assumptions hold at the same time.
Another classic assumption is that a <= b is equivalent to -b <= -a. But with your preorder, a <= b is equivalent to -a <= -b!
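Both problems are easy to see with the CustomComplex class from the question (a quick interactive check):

>>> CustomComplex(1) == CustomComplex(-1)
True
>>> a, b = CustomComplex(3+4j), CustomComplex(5)
>>> a <= b and b <= a    # both inequalities hold...
True
>>> a == b               # ...so == must call these equal, although 3+4j != 5
True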
That being said, I'll try to answer your two questions:
1: IMHO it is a harmful way (as discussed above), but I have no better alternative...
2: I think a mixin could be an elegant way to limit code duplication.
Code example (based on your own code, but not extensively tested):
import numpy as np
class ComplexOrder(object):
    def __lt__(self, other):
        return np.absolute(self) < np.absolute(other)

    # ... keep what you want (including or not eq and ne)

    def __ge__(self, other):
        return np.absolute(self) >= np.absolute(other)

class OrderedComplex(ComplexOrder, complex):
    def __init__(self, real, imag=0):
        complex.__init__(self, real, imag)

class NPOrderedComplex64(ComplexOrder, np.complex64):
    def __init__(self, real=0):
        np.complex64.__init__(self, real)
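A quick usage sketch of the mixin classes above (lightly tested, per the caveat):

>>> a = OrderedComplex(3, 4)    # magnitude 5
>>> b = OrderedComplex(5)       # magnitude 5
>>> a < b
False
>>> a >= b
True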

I'll forgo all the reasons why this may be a bad idea, as per your request.
Is this the way to go or is there a better alternative?
No need to go with numpy here: the built-in abs accepts complex numbers and is much faster*. There's also a convenient total_ordering decorator in functools that works well for such simple comparisons, if you want to reduce code (though it may be slower):
from functools import total_ordering

@total_ordering
class CustomComplex(complex):
    def __eq__(self, other):
        return abs(self) == abs(other)

    def __lt__(self, other):
        return abs(self) < abs(other)
(That's all the code you need.)
I would like my package to transparently work with the built-in complex data type as well as numpy.complex. How can this be done elegantly, without code duplication?
It automatically works when the right argument is a normal complex (or any) number:
>>> CustomComplex(1+7j) < 2+8j
True
But that's the best you can do, if you want to use the operators < etc. and not functions. The complex type doesn't allow you to set __lt__ and the TypeError is hardcoded.
If you want to do such comparisons on normal complex numbers, you must define and use your own comparison functions instead of the normal operators. Or just use abs(a) < abs(b) which is clear and not terribly verbose.
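For example, a minimal pair of standalone helpers (the names are just illustrative) that work on plain complex values:

def mag_lt(z1, z2):
    # magnitude-based "less than"; works on anything abs() accepts
    return abs(z1) < abs(z2)

def mag_eq(z1, z2):
    # magnitude-based equality
    return abs(z1) == abs(z2)

>>> mag_lt(1+7j, 2+8j)
True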
* Timing built-in abs vs. numpy.abs:
>>> timeit.timeit('abs(7+6j)')
0.10257387161254883
>>> timeit.timeit('np.abs(7+6j)', 'import numpy as np')
1.6638610363006592

Related

functools total_ordering doesn't appear to do anything with inherited class

I am trying to sort a list of strings using a special comparison. I am trying to use functools.total_ordering, but I'm not sure whether it's filling in the undefined comparisons correctly.
The two I define (> and ==) work as expected, but < does not. In particular, I print all three and I get that a > b and a < b. How is this possible? I would think that total_ordering would simply define < as not > and not ==. The result of my < test is what you would get with regular str comparison, leading me to believe that total_ordering isn't doing anything.
Perhaps the problem is that I am inheriting str, which already has __lt__ implemented? If so, is there a fix to this issue?
from functools import total_ordering

@total_ordering
class SortableStr(str):
    def __gt__(self, other):
        return self+other > other+self

    # Is this necessary? Or will it default to the inherited method?
    def __eq__(self, other):
        return str(self) == str(other)

def main():
    a = SortableStr("99")
    b = SortableStr("994")
    print(a > b)
    print(a == b)
    print(a < b)

if __name__ == "__main__":
    main()
OUTPUT:
True
False
True
You're right that the built-in str comparison operators are interfering with your code. From the docs:
Given a class defining one or more rich comparison ordering methods, this class decorator supplies the rest.
So it only supplies the ones not already defined. In your case, the fact that some of them are defined in a parent class is undetectable to total_ordering.
Now, we can dig deeper into the source code and find the exact check:
roots = {op for op in _convert if getattr(cls, op, None) is not getattr(object, op, None)}
So it checks whether the methods are identical to the ones defined on the root class, object. We can make that happen:
@total_ordering
class SortableStr(str):
    __lt__ = object.__lt__
    __le__ = object.__le__
    __ge__ = object.__ge__

    def __gt__(self, other):
        return self+other > other+self

    def __eq__(self, other):
        return str(self) == str(other)
Now total_ordering will see that __lt__, __le__, and __ge__ are equal to the "original" object values and overwrite them, as desired.
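With those three lines added, running the question's main() now gives a consistent result (quick check):

True
False
False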
This all being said, I would argue that this is a poor use of inheritance. You're violating Liskov substitution at the very least, in that mixed comparisons between str and SortableStr are going to, to put it lightly, produce counterintuitive results.
My more general recommendation is to favor composition over inheritance and, rather than defining a thing that "is a" specialized string, consider defining a type that "contains" a string and has specialized behavior.
@total_ordering
class SortableStr:
    def __init__(self, value):
        self.value = value

    def __gt__(self, other):
        return self.value + other.value > other.value + self.value

    def __eq__(self, other):
        return self.value == other.value
There, no magic required. Now SortableStr("99") is a valid object that is not a string but exhibits the behavior you want.
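A quick check that the wrapper behaves as intended (assuming the class above):

>>> a, b = SortableStr("99"), SortableStr("994")
>>> a > b
True
>>> a < b    # now derived from __gt__ by total_ordering
False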
Not sure if this is correct, but glancing at the documentation of functools.total_ordering, this stands out to me:
Given a class defining one or more rich comparison ordering methods,
this class decorator supplies the rest.
Emphasis mine. Your class inherits __lt__ from str, so total_ordering does not re-implement it, since it isn't missing. That's my best guess.

How to improve my implementation of an arithmetic neutral element (identity) so that it works with numpy.around?

Context
So for a project I was writing some Monte Carlo code, and I wanted to support both 2D and 3D coordinates. The obvious solutions were to implement either 2D and 3D versions of certain functions, or some 'if' checking in the algorithm itself. I wanted a more elegant solution where I could just work with one algorithm capable of handling both situations. The idea I came up with was to work with some kind of neutral element I for the optional third coordinate (the z-direction). Its usage is as follows: if no explicit z value is provided, it defaults to I. In either case the z value is used in the calculations, but it has no effect when z = I.
For example: let a be any number; then a + I = I + a = a, and likewise a * I = I * a = a.
For addition and multiplication these are I = 0 and I = 1, respectively. It is immediately clear that no single numerical value for I will work for both, for example with a cost function of the form xyz + (x + y + z)^2.
Implementation
Luckily, when programming we are not constrained by mathematics to implement something like that, and here's my attempt:
class NeutralElement:
    def __add__(self, other):
        return other
    def __sub__(self, other):
        return other
    def __mul__(self, other):
        return other
    def __truediv__(self, other):
        return other
    def __pow__(self, other):
        return self
    def __radd__(self, other):
        return other
    def __rsub__(self, other):
        return other
    def __rmul__(self, other):
        return other
    def __rtruediv__(self, other):
        return other
    def __bool__(self):
        return True
    def __getitem__(self, index):
        return self
    def __str__(self):
        return 'I'
    def __repr__(self):
        return 'I'
Usage is simple: n = NeutralElement(), and you can then use n in whatever expression you want. The above implementation works well in the sense that the Monte Carlo algorithm finishes without problems and manages to give meaningful results. I can even construct a Numpy array of it and use that. It's not compatible with everything, though: calling np.around on such an array gives an error about '_rint' not being implemented. I did manage to get it to work with round() and ndarray.round(). (Not reflected in the code above.)
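For reference, the round() support mentioned above could look something like this extra method (my guess, since it is not shown):

    def __round__(self, ndigits=None):
        # rounding a neutral element leaves it unchanged
        return self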
Question
My question now is whether there's anything I can do to improve compatibility with other functions, or to improve the implementation itself. Better alternative solutions to the problem described above are also welcome.
After some feedback, I've narrowed the scope of my question: I would like to modify the above class so that it returns a Numpy array of strings ['I', 'I', ..., 'I'] when numpy.around is used on a Numpy array of NeutralElements.
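One possible direction, assuming numpy's usual fallback for object-dtype arrays (ufuncs such as np.rint call a same-named method on each element), would be to add a rint method to the class:

    def rint(self):
        # called element-wise when np.rint / np.around hits an object array;
        # the neutral element rounds to itself
        return self

Since __repr__ returns 'I', np.around on an object array of NeutralElements would then come back as array([I, I, ..., I], dtype=object), which displays essentially as the desired ['I', 'I', ..., 'I'].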

overloading less than in python

I've been fiddling with overloading operators in Python, and I've come across a question.
I have a class whose values I want to use in comparisons.
class comparison:
    def __init__(self, x):
        self.x = x
    ...
    def __lt__(self, other):
        return self.x < other
This overloads the less-than operator. I put conditions on other, such as what type it must be.
An example would be
x = comparison(2)
x < 1  # --> False
x < 3  # --> True
My question is: how can I put a check on the first (left-hand) part of the comparison?
I'm trying to restrict the left operand to something specific.
An example would be:
7 < x  # --> I don't want the left operand to be an int
To do this, you can override the __gt__ method.
class comparison:
    ...
    def __gt__(self, other):
        ...

7 < comparison(2) will then be transformed into the call comparison(2).__gt__(7), which you can override.
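A sketch of how that check might look (the exact policy is up to you; here an int on the left is rejected):

class comparison:
    def __init__(self, x):
        self.x = x

    def __lt__(self, other):
        return self.x < other

    def __gt__(self, other):
        # 7 < comparison(2) ends up here, because int.__lt__ returns
        # NotImplemented for this type and Python tries the reflected call
        if isinstance(other, int):
            raise TypeError("int is not allowed on the left of <")
        return self.x > other

Note that x > 7 will then raise as well, since Python cannot distinguish the direct call from the reflected one.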

How to implement __eq__ in shapely (python)

I have a question regarding shapely and the use of the == operator. There is a method to test equality of geometric objects, .equals(), but == does not work.
Point((0, 2)).equals(Point((0, 2)))
returns True.
However:
Point((0, 2)) == Point((0, 2))
returns False
I would like to be able to use the == operator to check whether a point is already present in a list. One use case could be:
if point not in list_of_points:
    list_of_points.append(point)
As far as I understand, this does not work because == returns False. I know there is an alternative to in using the any() function, but I would prefer the in keyword:
if not any(point.equals(p) for p in list_of_points):
    list_of_points.append(point)
Would it be a large effort to implement __eq__ in shapely/geometry/base.py?
What do you think of this naive implementation of __eq__?
class BaseGeometry(object):
    def __eq__(self, other):
        return self.equals(other)

or

class BaseGeometry(object):
    def __eq__(self, other):
        return bool(self.impl['equals'](self, other))
One side effect of implementing __eq__ is that a Point can no longer be used as a dictionary key, because defining __eq__ without __hash__ makes the class unhashable. If you want that feature back, you can add this:
    def __hash__(self):
        return hash(id(self))
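If you'd rather not modify shapely itself, the same idea can be tried in a subclass (a sketch only; whether geometry types can be subclassed this way depends on your shapely version):

from shapely.geometry import Point

class ComparablePoint(Point):
    def __eq__(self, other):
        # delegate to shapely's geometric equality test
        return self.equals(other)

    def __hash__(self):
        # defining __eq__ disables inherited hashing, so restore it
        return hash(id(self))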

Generic programming: Log FFT OR high-precision convolution (python)

I have a slightly unusual problem, but I am trying to avoid re-coding FFT.
In general, I want to know this: if I have an algorithm implemented for type float, but it would work wherever a certain set of operations is defined (e.g. complex numbers, which also define +, *, ...), what is the best way to use that algorithm on another type that supports those operations? In practice this is tricky, because numeric algorithms are generally written for speed, not generality.
Specifically:
I am working with values with a very high dynamic range, and so I would like to store them in log space (mostly to avoid underflow).
What I'd like is the log of the FFT of some series:
x = [1, 2, 3, 4, 5]
fft_x = [log(x_val) for x_val in fft(x)]
Even this will result in significant underflow. What I'd like is to store log values, and use + in place of * and logaddexp in place of +, etc.
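The effect of the log representation is easy to see with numpy (a small demonstration, not part of the question's code):

import numpy as np

a, b = 1e-300, 2e-300
print(a * b)                    # 0.0 -- the product underflows
la, lb = np.log(a), np.log(b)
print(la + lb)                  # log(a*b) stays finite (about -1380.9)
print(np.logaddexp(la, lb))     # log(a+b), computed safely in log space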
My thought of how to do this was to implement a simple LogFloat class that defines these primitive operations (but operates in log space). Then I could simply run the FFT code by letting it use my logged values.
import math
from math import log
from numpy import logaddexp, sign

# note: log_sub (log-space subtraction) is assumed to be defined elsewhere

class LogFloat:
    def __init__(self, sign, log_val):
        assert float(sign) in (-1, 1)
        self.sign = int(sign)
        self.log_val = log_val

    @staticmethod
    def from_float(fval):
        return LogFloat(sign(fval), log(abs(fval)))

    def __imul__(self, lf):
        self.sign *= lf.sign
        self.log_val += lf.log_val
        return self

    def __idiv__(self, lf):
        self.sign *= lf.sign
        self.log_val -= lf.log_val
        return self

    def __iadd__(self, lf):
        if self.sign == lf.sign:
            self.log_val = logaddexp(self.log_val, lf.log_val)
        else:
            # subtract the smaller magnitude from the larger
            if self.log_val > lf.log_val:
                self.log_val = log_sub(self.log_val, lf.log_val)
            else:
                self.log_val = log_sub(lf.log_val, self.log_val)
                self.sign *= -1
        return self

    def __isub__(self, lf):
        self.__iadd__(LogFloat(-1 * lf.sign, lf.log_val))
        return self

    def __pow__(self, lf):
        # note: there may be a way to do this without exponentiating
        # if the exponent is 0, always return 1
        if lf.log_val == -float('inf'):
            return LogFloat.from_float(1.0)
        lf_value = lf.sign * math.exp(lf.log_val)
        if self.sign == -1:
            # note: in this case, lf_value must be an integer
            return LogFloat(self.sign**int(lf_value), self.log_val * lf_value)
        return LogFloat(self.sign, self.log_val * lf_value)

    def __mul__(self, lf):
        temp = LogFloat(self.sign, self.log_val)
        temp *= lf
        return temp

    def __div__(self, lf):
        temp = LogFloat(self.sign, self.log_val)
        temp /= lf
        return temp

    def __add__(self, lf):
        temp = LogFloat(self.sign, self.log_val)
        temp += lf
        return temp

    def __sub__(self, lf):
        temp = LogFloat(self.sign, self.log_val)
        temp -= lf
        return temp

    def __str__(self):
        result = str(self.sign * math.exp(self.log_val)) + '('
        if self.sign == -1:
            result += '-'
        result += 'e^' + str(self.log_val) + ')'
        return result

    def __neg__(self):
        return LogFloat(-self.sign, self.log_val)

    def __radd__(self, val):
        # for sum()
        if val == 0:
            return self
        return self + val
Then the idea would be to construct a list of LogFloats and use it in the FFT:
x_log_float = [LogFloat.from_float(x_val) for x_val in x]
fft_x_log_float = fft(x_log_float)
This can definitely be done if I re-implement FFT (simply using LogFloat wherever I would use float before), but I thought I would ask for advice. This is a fairly recurring problem: I have a stock algorithm that I want to operate in log space (and it only uses a handful of operations like '+', '-', '*', '/', etc.).
This reminds me of writing generic functions with templates, so that the return values, parameters, etc. are constructed from the same type. For example, if you can do an FFT of floats, you should easily be able to do one on complex values (by simply using a class that provides the necessary operations for complex values).
As it currently stands, it looks like all FFT implementations are written for bleeding-edge speed, and so aren't very general. As of now, it seems I'd have to re-implement FFT for generic types...
The reason I'm doing this is because I want very high-precision convolutions (and the N^2 runtime is extremely slow).
Any advice would be greatly appreciated.
*Note, I might need to implement trigonometric functions for LogFloat, and that would be fine.
EDIT:
This does work, because LogFloat is a commutative ring (and it doesn't require implementing trigonometric functions for LogFloat). The simplest way was to re-implement FFT, but @J.F.Sebastian also pointed out a way of using Python's generic convolution, which avoids coding the FFT (which, again, was quite easy using either a DSP textbook or the Wikipedia pseudocode).
I confess I didn't entirely keep up with the math in your question. However, it sounds like what you're really wondering is how to deal with extremely small and large (in absolute value) numbers without hitting underflow and overflow. Unless I misunderstand you, I think this is similar to the problem I have working with units of money, not losing pennies on billion-dollar transactions due to rounding. If that's the case, my solution has been Python's built-in decimal module. The documentation is good for both Python 2 and Python 3. The short version is that decimal math is an arbitrary-precision floating- and fixed-point type, and the Python module conforms to the IBM/IEEE standards for decimal math. In Python 3.3 (currently in alpha form, but I've been using it with no problems at all), the module has been rewritten in C for up to a 100x speed-up (in my quick tests).
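A quick illustration of what the decimal module buys you (standard library; the precision is set per context):

from decimal import Decimal, getcontext

getcontext().prec = 50                 # 50 significant digits
print(1.0 + 1e-40 == 1.0)              # True: float addition loses the small term
print(Decimal(1) + Decimal('1e-40'))   # 1.0000000000000000000000000000000000000001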
You could scale your time-domain samples by a large number s to avoid underflow: if
F{f(t)} = X(jw)
then
F{s·f(s·t)} = X(jw/s).
Now, using the convolution theorem, you can work out how to scale your final result to remove the effect of the scaling factor.
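As a minimal numpy sketch of one reading of this idea, using plain amplitude scaling (conv(s*a, s*b) = s**2 * conv(a, b), so the factor s**2 is divided back out at the end; s must be chosen to suit your data):

import numpy as np

def scaled_fft_convolve(a, b, s=1e100):
    # scale both inputs up so intermediate magnitudes stay clear of underflow
    n = len(a) + len(b) - 1    # length of the full linear convolution
    A = np.fft.fft(np.asarray(a) * s, n)
    B = np.fft.fft(np.asarray(b) * s, n)
    # undo the s**2 the scaling introduced into the convolution
    return np.fft.ifft(A * B).real / s**2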
