It looks like a Z3 expression has a hash() method but not __hash__(). Is there a reason for not using __hash__()? That would make expressions hashable.
There is no reason for not calling it __hash__(). I called it hash() because I'm new to Python. I will add __hash__() in the next release (Z3 4.2).
EDIT: as pointed out in the comments, we also need __eq__ or __cmp__ to be able to use a Z3 object as a key in a Python dictionary. Unfortunately, the __eq__ method (defined at ExprRef) is used to build Z3 expressions. That is, if a and b are referencing Z3 expressions, then a == b returns the Z3 expression object representing the expression a = b. This "feature" is convenient for writing Z3 examples in Python, but it has a nasty side effect: the Python dictionary class will assume that all Z3 expressions are equal to each other. Actually, it is even worse, since the Python dictionary only invokes the method __eq__ for objects that have the same hash code. Thus, if we define __hash__(), we may have the illusion that it is safe to use Z3 expression objects as keys in Python dictionaries. For this reason, I will not include __hash__() in the class AstRef.
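To see the side effect concretely: == on Z3 expressions builds a new expression instead of returning a Python Boolean, while the eq method performs structural comparison (a quick illustration using the z3py API):

from z3 import *
x, y = Ints('x y')
print(x == y)    # a Z3 expression object representing x == y, not a Python bool
print(x.eq(y))   # structural equality check: False
print(x.eq(x))   # True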
Users that want to use Z3 expressions as keys in dictionaries may use the following trick:
from z3 import *

class AstRefKey:
    def __init__(self, n):
        self.n = n
    def __hash__(self):
        return self.n.hash()
    def __eq__(self, other):
        return self.n.eq(other.n)

def askey(n):
    assert isinstance(n, AstRef)
    return AstRefKey(n)

x = Int('x')
y = Int('y')
d = {}
d[askey(x)] = 10
d[askey(y)] = 20
d[askey(x + y)] = 30
print(d[askey(x)])
print(d[askey(y)])
print(d[askey(x + y)])
n = simplify(x + 1 + y - 1)
print(d[askey(n)])
I have almost finished my assignment, and the only thing left is to define the toString method shown here.
import math

class RegularPolygon:
    def __init__(self, n=1, l=1):
        self.__n = n
        self.__l = l

    def set_n(self, n):
        self.__n = n

    def get_n(self):
        return self.__n

    def addSides(self, x):
        self.__n = self.__n + x

    def setLength(self, l):
        self.__l = l

    def getLength(self):
        return self.__l

    def setPerimeter(self):
        return (self.__n * self.__l)

    def getArea(self):
        return (self.__l ** 2 / 4 * math.tan(math.radians(180/self.__n)))

    def toString(self):
        return

x = 3
demo_object = RegularPolygon(3, 1)
print(demo_object.get_n(), demo_object.getLength())
demo_object.addSides(x)
print(demo_object.get_n(), demo_object.getLength())
print(demo_object.getArea())
print(demo_object.setPerimeter())
Basically, what toString does is return a string that includes the values of the internal variables. I also need help with the getArea portion.
Assignment instructions
The assignment says
... printing a string representation of a RegularPolygon object.
So I would expect you get to choose a suitable "representation". You could go for something like this:
return f'{self.__n} sided regular polygon of side length {self.__l}'
or as suggested by @Roy Cohen:
return f'{self.__class__.__name__}({self.__n}, {self.__l})'
However, as @Klaus D. wrote in the comments, Python is not Java, and as such has its own standards and magic methods to use instead.
I would recommend reading this answer for an explanation of the differences between the two built-in string-representation magic methods: __repr__ and __str__. If you implement these methods, they will be called automatically whenever you use print() or something similar, instead of you calling .toString() every time.
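As a minimal sketch of what that could look like for this class (using plain attributes for brevity):

import math

class RegularPolygon:
    def __init__(self, n=1, l=1):
        self.n = n
        self.l = l

    def __repr__(self):
        # Unambiguous, developer-facing representation
        return f'{self.__class__.__name__}({self.n}, {self.l})'

    def __str__(self):
        # Human-readable representation
        return f'{self.n} sided regular polygon of side length {self.l}'

p = RegularPolygon(3, 1)
print(p)         # calls __str__
print(repr(p))   # calls __repr__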
Now to address the getters and setters. Typically in Python you avoid these and prefer properties instead. See this answer for more information, but to summarise: you either access an object's attributes directly, or use the @property decorator to turn a method into a property.
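For instance, a sketch of replacing get_n/set_n with a property (the underscore attribute name is just convention):

class RegularPolygon:
    def __init__(self, n=1):
        self._n = n

    @property
    def n(self):
        return self._n

    @n.setter
    def n(self, value):
        self._n = value

p = RegularPolygon(3)
p.n += 3      # replaces addSides(3) / set_n(...)
print(p.n)    # 6, replaces get_n()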
Edit
Your area formula likely has an order-of-operations error. Make sure you are explicit about which operation is performed first:

return self.__l ** 2 / (4 * math.tan(math.radians(180 / self.__n)))
This may be correct :)
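One caveat worth checking against your assignment: the usual closed form for the area of a regular polygon with n sides of length l also has a factor of n, A = n * l^2 / (4 * tan(pi/n)). A sketch under that assumption:

import math

def regular_polygon_area(n, l):
    # A = n * l^2 / (4 * tan(pi / n))
    return n * l ** 2 / (4 * math.tan(math.pi / n))

print(regular_polygon_area(3, 1))  # ~0.433, equilateral triangle with side 1
print(regular_polygon_area(4, 1))  # 1.0, unit square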
Can I somehow refer to a method without using the lambda keyword?
Say we have following example code:
class AbstractDummy:
    def size(self):
        raise NotImplementedError

class Dummy1(AbstractDummy):
    def size(self):
        return 10

class Dummy2(AbstractDummy):
    def size(self):
        return 20
If I have my example objects:
dummies1 = [Dummy1(), Dummy1(), Dummy1()]
dummies2 = [Dummy2(), Dummy2()]
Then if I want to map them, I can do it with an extracted function to save some characters:
f = lambda x: x.size()
map(f, dummies1)
map(f, dummies2)
Question here: can I somehow avoid this temporary f and/or lambda keyword?
To make a small comparison, in Java it would be possible to refer to AbstractDummy::size, so the invocation would look a bit like print(map(AbstractDummy::size, dummies1)).
The operator module provides methodcaller for this.
from operator import methodcaller
f = methodcaller('size')
results1 = [f(x) for x in dummies1]
results2 = [f(x) for x in dummies2]
though [x.size() for x in ...] is simpler, as in C_Z_'s answer. methodcaller is useful when you need a function as a function argument, for example
# Sort some_list_of_objects on return value of each object's `a` method.
sorted_list = sorted(some_list_of_objects, key=methodcaller('a'))
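Applied to the question's map calls, this keeps everything expression-only:

results1 = list(map(methodcaller('size'), dummies1))  # [10, 10, 10]
results2 = list(map(methodcaller('size'), dummies2))  # [20, 20]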
In this case you would probably want to use a list comprehension
[x.size() for x in dummies1]
[x.size() for x in dummies2]
if object in lst:
    # do something
As far as I can tell, when you execute this statement, Python internally checks == between object and every element in lst, which invokes the __eq__ methods of the objects involved. This means two distinct objects can compare "equal", which is usually desired when all of their attributes are the same.
However, is there a Pythonic way to achieve a predicate like in where the underlying check is is, i.e. where we actually check whether the two references point to the same object?
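To make the distinction concrete (a small sketch):

a = [1, 2]
b = [1, 2]        # distinct object with equal contents
print(a == b)     # True  -- value equality via __eq__
print(a is b)     # False -- different objects
print(b in [a])   # True  -- `in` uses ==, so the distinct object is found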
List membership in Python is dictated by the __contains__ dunder method. You can override it with a custom implementation if you want to keep the normal in syntax:
class my_list(list):
    def __contains__(self, x):
        for y in self:
            if x is y:
                return True
        return False
>>> 4 in my_list([4, [3, 2, 1]])
True
>>> [3, 2, 1] in my_list([4, [3, 2, 1]])  # the lists are == equal, but they are different objects
False
Otherwise, I'd suggest kaya3's answer of using a generator check.
Use the any function:
if any(x is object for x in lst):
    # ...
If you want to specifically use is, then just use filter:

filtered_list = filter(lambda n: n is object, lst)
Given:
class T:
    def __hash__(self):
        return 1234

t1 = T()
t2 = T()
my_set = { t1 }
I would expect the following to print True:
print(t2 in my_set)
Isn't this supposed to print True because t1 and t2 have the same hash value? How can I make the in operator of the set use the given hash function?
You need to define an __eq__ method, because only instances that are identical (a is b) or equal (a == b), besides having the same hash, will be recognized as equal by set and dict:
class T:
    def __hash__(self):
        return 1234
    def __eq__(self, other):
        return True

t1 = T()
t2 = T()
my_set = { t1 }
print(t2 in my_set)  # True
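In practice you would compare real state rather than returning True unconditionally; a sketch (the Point class and its attributes are just for illustration):

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)
    def __hash__(self):
        # Hash exactly the state used in __eq__
        return hash((self.x, self.y))

print(Point(1, 2) in {Point(1, 2)})  # True: equal state, consistent hash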
The data model on __hash__ (and the same documentation page for Python 2) explains this:
__hash__
Called by built-in function hash() and for operations on members of hashed collections including set, frozenset, and dict. __hash__() should return an integer. The only required property is that objects which compare equal have the same hash value; it is advised to mix together the hash values of the components of the object that also play a part in comparison of objects by packing them into a tuple and hashing the tuple.
If a class does not define an __eq__() method it should not define a __hash__() operation either; if it defines __eq__() but not __hash__(), its instances will not be usable as items in hashable collections. If a class defines mutable objects and implements an __eq__() method, it should not implement __hash__(), since the implementation of hashable collections requires that a key’s hash value is immutable (if the object’s hash value changes, it will be in the wrong hash bucket).
User-defined classes have __eq__() and __hash__() methods by default; with them, all objects compare unequal (except with themselves) and x.__hash__() returns an appropriate value such that x == y implies both that x is y and hash(x) == hash(y).
(Emphasis mine)
Note: In Python 2 you can also implement a __cmp__ method instead of __eq__.
In pseudocode, the logic for set.__contains__() when called by x in s is roughly:

h = hash(x)                           # This uses your class's __hash__()
i = h % table_size                    # This logic is internal to the hash table
if table[i] is empty: return False    # Nothing found in the set
if table[i] is x: return True         # Identity implies equality
if hash(table[i]) != h: return False  # Hash mismatch implies inequality
return table[i] == x                  # This needs __eq__() in your class
How can I check if an object is orderable/sortable in Python?
I'm trying to implement basic type checking for the __init__ method of my binary tree class, and I want to be able to check if the value of the node is orderable, and throw an error if it isn't. It's similar to checking for hashability in the implementation of a hashtable.
I'm trying to accomplish something similar to Haskell's (Ord a) => etc. qualifiers. Is there a similar check in Python?
If you want to know if an object is sortable, you must check if it implements the necessary methods of comparison.
In Python 2.X there were two different ways to implement those methods:
__cmp__ method (the equivalent of compareTo in Java, for example)
__cmp__(self, other): returns >0, 0 or <0 depending on whether self is greater than, equal to, or less than other
rich comparison methods
__lt__, __gt__, __eq__, __le__, __ge__, __ne__
The sort() functions call these methods to make the necessary comparisons between instances (actually sort only needs __lt__ or __gt__, but it is recommended to implement all of them).
In Python 3.X the __cmp__ was removed in favor of the rich comparison methods as having more than one way to do the same thing is really against Python's "laws".
So, you basically need a function that check if these methods are implemented by a class:
# Python 2.X
def is_sortable(obj):
return hasattr(obj, "__cmp__") or \
hasattr(obj, "__lt__") or \
hasattr(obj, "__gt__")
# Python 3.X
def is_sortable(obj):
cls = obj.__class__
return cls.__lt__ != object.__lt__ or \
cls.__gt__ != object.__gt__
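A quick check with the Python 3 version (a sketch; a bare object only has the default comparison slots, so it reports as unsortable):

print(is_sortable(3))         # True  -- int defines __lt__/__gt__
print(is_sortable("abc"))     # True  -- str defines the rich comparisons
print(is_sortable(object()))  # False -- only the object defaults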
Different functions are needed for Python 2 and 3 because a lot of other things also change about unbound methods, method-wrappers and other internal things in Python 3.
Read these links if you want a better understanding of sortable objects in Python:
http://python3porting.com/problems.html#unorderable-types-cmp-and-cmp
http://docs.python.org/2/howto/sorting.html#the-old-way-using-the-cmp-parameter
PS: this was a complete re-edit of my first answer, but it was needed as I investigated the problem better and had a cleaner idea about it :)
While the explanations in answers already here address runtime type inspection, here's how the static types are annotated by typeshed. They start by defining a collection of comparison Protocols, e.g.
class SupportsDunderLT(Protocol):
    def __lt__(self, __other: Any) -> bool: ...
which are then collected into rich comparison sum types, such as
SupportsRichComparison = Union[SupportsDunderLT, SupportsDunderGT]
SupportsRichComparisonT = TypeVar("SupportsRichComparisonT", bound=SupportsRichComparison)
then finally these are used to type e.g. the key functions of list.sort:
@overload
def sort(self: list[SupportsRichComparisonT], *, key: None = ..., reverse: bool = ...) -> None: ...
@overload
def sort(self, *, key: Callable[[_T], SupportsRichComparison], reverse: bool = ...) -> None: ...
and sorted:
@overload
def sorted(
    __iterable: Iterable[SupportsRichComparisonT], *, key: None = ..., reverse: bool = ...
) -> list[SupportsRichComparisonT]: ...
@overload
def sorted(__iterable: Iterable[_T], *, key: Callable[[_T], SupportsRichComparison], reverse: bool = ...) -> list[_T]: ...
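One practical consequence of these stubs: a static checker such as mypy rejects sorting element types that define no ordering, for example complex (a sketch; the same call also raises TypeError at runtime):

xs: list[complex] = [1 + 2j, 3 + 4j]
# Flagged by the type checker: complex does not satisfy SupportsRichComparison
xs.sort()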
Regrettably it is not enough to check that your object implements __lt__.
NumPy uses the < operator to return an array of Booleans, which has no truth value. SQLAlchemy uses it to return a query filter, which again has no truth value.
Ordinary sets use it to check for a subset relationship, so that:
>>> set1 = {1, 2}
>>> set2 = {2, 3}
>>> set1 == set2
False
>>> set1 < set2
False
>>> set1 > set2
False
The best partial solution I could think of (starting from a single object of unknown type) is this, but with rich comparisons it seems to be officially impossible to determine orderability:
if hasattr(x, '__lt__'):
    try:
        isOrderable = (((x == x) is True) and ((x > x) is False)
                       and not isinstance(x, (set, frozenset)))
    except Exception:
        isOrderable = False
else:
    isOrderable = False
Edited
As far as I know, in Python 2 all lists are sortable, so if you want to know whether a list is "sortable", the answer is yes, no matter what elements it has. (In Python 3 this is no longer true: comparing unrelated types raises TypeError.)
class C:
    def __init__(self):
        self.a = 5
        self.b = "asd"

c = C()
d = True
list1 = ["abc", "aad", c, 1, "b", 2, d]
list1.sort()
print list1
# [<__main__.C instance at 0x0000000002B7DF08>, 1, True, 2, 'aad', 'abc', 'b']
You could determine what types you consider "sortable" and implement a method to verify if all elements in the list are "sortable", something like this:
def isSortable(list1):
    types = [int, float, str]
    res = True
    for e in list1:
        res = res and (type(e) in types)
    return res

print isSortable([1, 2, 3.0, "asd", [1, 2, 3]])
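The same check reads a little more directly with all() and isinstance (a sketch; note isinstance also accepts subclasses, unlike the type(e) in types test above):

def is_sortable_list(lst):
    # Only element types we consider mutually comparable
    allowed = (int, float, str)
    return all(isinstance(e, allowed) for e in lst)

print(is_sortable_list([1, 2, 3.0]))         # True
print(is_sortable_list([1, 2, [3], "asd"]))  # False: contains a list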