Differentiate False and 0 - Python

Let's say I have a list with different values, like this:
[1,2,3,'b', None, False, True, 7.0]
I want to iterate over it and check that every element is not in a list of some forbidden values. For example, this list is [0, 0.0].
When I check if False in [0, 0.0] I get True. I understand that Python casts False to 0 here, but how can I avoid that and make the check come out right, so that False is not counted as being in [0, 0.0]?

To tell the difference between False and 0 you may use is to compare them. False is a singleton value and always refers to the same object. To compare all the items in a list to make sure they are not False, try:
all(x is not False for x in a_list)
BTW, Python doesn't cast anything here: Booleans are a subclass of integers, and False is literally equal to 0, no conversion required.
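Applied to the original question, here is a minimal sketch (the names values, forbidden and allowed are mine, not from the post) that drops the forbidden numbers while keeping False:

values = [1, 2, 3, 'b', None, False, True, 7.0]
forbidden = [0, 0.0]

# Keep bools unconditionally; everything else must not be a forbidden value.
allowed = [x for x in values if isinstance(x, bool) or x not in forbidden]
print(allowed)  # [1, 2, 3, 'b', None, False, True, 7.0]

print([x for x in [0, 0.0, False] if isinstance(x, bool) or x not in forbidden])
# [False] -- 0 and 0.0 are dropped, False survives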

You would want to use is instead of == when comparing.
y = 0
print y == False # True
print y is False # False
x = False
print x == False # True
print x is False # True

Found a weird corner case on differentiating between 0 and False today. If the initial list contains the numpy version of False (numpy.bool_(False)), the is comparisons don't work, because numpy.bool_(False) is not False.
These arise all the time in comparisons that use numpy types. For example:
>>> type(numpy.array(50)<0)
<class 'numpy.bool_'>
The easiest way would be to compare using the numpy.bool_ type: (np.array(50)<0) is (np.False_). But doing that requires a numpy dependency. The solution I came up with was to do a string comparison (working as of numpy 1.18.1):
str(numpy.bool_(False)) == str(False)
So when dealing with a list, à la @kindall's answer above, it would be:
all(str(x) != str(False) for x in a_list)
Note that this test also has a problem with the string 'False'. To avoid that, you can exclude cases where the string representation is equal to the value itself (this also dodges a numpy string array). Here are some test outputs:
>>> foo = False
>>> str(foo) != foo and str(foo) == str(False)
True
>>> foo = numpy.bool_(False)
>>> str(foo) != foo and str(foo) == str(False)
True
>>> foo = 0
>>> str(foo) != foo and str(foo) == str(False)
False
>>> foo = 'False'
>>> str(foo) != foo and str(foo) == str(False)
False
>>> foo = numpy.array('False')
>>> str(foo) != foo and str(foo) == str(False)
array(False)
I am not really an expert programmer, so there may be some limitations I've missed, or a big reason not to do this, but it allowed me to differentiate 0 and False without needing to add a numpy dependency.
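If a numpy dependency is acceptable after all, a plain type check is a simpler alternative to the string comparison. This is a hedged sketch, not part of the original answer; note that isinstance(x, bool) is False for numpy.bool_, which is why both types are listed:

import numpy as np

def is_false_like(x):
    # True only for Python False or a numpy False, never for 0, 0.0, or 'False'.
    return isinstance(x, (bool, np.bool_)) and not x

print(is_false_like(False))            # True
print(is_false_like(np.bool_(False)))  # True
print(is_false_like(0))                # False
print(is_false_like(0.0))              # False
print(is_false_like('False'))          # False

# Analogue of the list check from the first answer:
# all(not is_false_like(x) for x in a_list)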

Related

Why is +False not False in Python?

I was checking some of CPython's tests and in this file I saw a test case which was strange to me:
def test_math(self):
    ...
    self.assertIsNot(+False, False)
At first I thought it was a typo and that it should be self.assertIs(+False, False), but when I tried it in the Python console the result was False:
>>> +False is False
<stdin>:1: SyntaxWarning: "is" with a literal. Did you mean "=="?
False
>>>
>>> id(False)
140078839501184
>>> id(+False)
140078839621760
Why does + make it a different object?
Comments suggest that +False is 0, so maybe the better question is: why is that?
Because a bool is a type of int:
>>> isinstance(False, int)
True
>>> False == 0
True
a bool is accepted by functions that take ints as inputs (including all the standard operators), and those functions will generally return ints:
>>> True + False
1
>>> True * 2
2
>>> True ** False
1
or sometimes floats:
>>> True / True
1.0
Specifically, putting + in front of a number is a "unary plus", the opposite of a "unary minus" which returns the negative of its operand:
>>> +True
1
>>> -True
-1
>>> +False
0
>>> -False
0
Although this bool/int behavior catches most people off guard the first time they find it, it allows for some useful shortcuts; for example, you can sum a bunch of bools to find the number of True values:
>>> sum([True, True, False, False, True])
3
>>> sum(s.startswith("a") for s in ("apple", "banana", "pear", "avocado"))
2
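Another consequence of bool being an int (my addition, not from the answer above): True and False can be used directly as sequence indices, since they are just 1 and 0.

parity = ["even", "odd"][7 % 2 == 1]
print(parity)  # odd

labels = ["absent", "present"]
print(labels["x" in "xyz"])  # present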
When you apply arithmetic operations to boolean values, they are turned into ints.
In [10]: type(+False)
Out[10]: int
In [11]: type(False)
Out[11]: bool
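A small illustration, not from either answer above: the bitwise operators &, | and ^ are overridden on bool to return bool when both operands are bools, whereas the arithmetic operators (including unary +) always fall back to int.

print(type(True | False))   # <class 'bool'>
print(type(True & False))   # <class 'bool'>
print(type(True ^ False))   # <class 'bool'>
print(type(True + False))   # <class 'int'>
print(type(+False))         # <class 'int'>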

Booleans interpreted from strings, unexpected behavior

Can anyone explain this behaviour in Python (2.7 and 3)?
>>> a = "Monday" and "tuesday"
>>> a
'tuesday' # I expected this to be True
>>> a == True
False # I expected this to be True
>>> a is True
False # I expected this to be True
>>> a = "Monday" or "tuesday"
>>> a
'Monday' # I expected this to be True
>>> a == True
False # I expected this to be True
>>> a is True
False # I expected this to be True
I would expect that because I am using the logical operators and and or, the statements would be evaluated as a = bool("Monday") and bool("tuesday").
So what is happening here?
As explained here, using and / or on strings yields the following result:
a or b returns a if a is truthy, else it returns b.
a and b returns b if a is truthy, else it returns a.
This behavior is called short-circuit evaluation, and it applies to both and and or, as can be seen here.
This explains why a is 'tuesday' in the first case and 'Monday' in the second.
As for checking a == True or a is True: using logical operators on strings yields one of the operands (as explained above), which is not the same as bool("some_string").
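A minimal sketch of the difference between what the question expected and what Python does: wrapping the result (or each operand) in bool() gives the True/False value the poster was looking for.

a = "Monday" and "tuesday"
print(a)                                    # tuesday -- and returns an operand
print(bool(a))                              # True
print(bool("Monday") and bool("tuesday"))   # True
print(bool("Monday" or "tuesday"))          # True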

How does a chain of `and` operators work in Python?

In my program I encountered this:
>>> True and True and (3 or True)
3
>>> True and True and ('asd' or True)
'asd'
while I expected to get some boolean value depending on the result in brackets. If I try expressions like (0 or True) or ('' or True), Python returns True, which is clear because 0 and '' are equivalent to False in boolean contexts.
Why doesn't Python return a boolean value by converting 3 and 'asd' into True?
From https://docs.python.org/3/library/stdtypes.html:
Important exception: the Boolean operations or and and always return
one of their operands
The behavior can be most easily seen with:
>>> 3 and True
True
>>> True and 3
3
If you need to eliminate this behavior, wrap it in a bool:
>>> bool(True and 3)
True
See this question
As Reut Sharabani answered, this behavior allows useful things like:
>>> my_list = []
>>> print (my_list or "no values")
no values
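A few extra examples of my own, spelling out the rule: and returns the first falsy operand or the last operand, while or returns the first truthy operand or the last operand.

print(0 and 3)        # 0  -- first falsy operand
print(2 and 3)        # 3  -- all truthy, so the last operand
print(0 or '' or 5)   # 5  -- first truthy operand
print(0 or '')        #    -- nothing truthy, so the last operand ('')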

Python - numeric class with "don't care" values

I am trying to create a simple data type to be used as a dtype for a Numpy array, and on which I can perform element wise addition, subtraction, and comparison. The type should take on (at least) three values representing true, false, and "don't care" (DC). The latter is equal to both true and false and behaves like zero in addition and subtraction:
>>> MyDtype(True) == MyDtype(DC) == MyDtype(True) # note reflection
True
>>> MyDtype(False) == MyDtype(DC) == MyDtype(False) # ditto
True
>>> MyDtype(True) == MyDtype(False)
False
>>> MyDtype(True) - MyDtype(DC) == MyDtype(True)
True
>>> MyDtype(DC) + MyDtype(False) == MyDtype(False)
True
I am totally stumped on how to get these semantics in a sane fashion.
You can use magic methods to control arithmetic operations on objects of your class. You can even control reflected operations, if the object on the left-hand side does not implement the respective non-reflected operation.
Comprehensive documentation of magic methods can be found here (the link refers to the arithmetic operator section, which is followed by the section on reflected arithmetic operations):
http://www.rafekettler.com/magicmethods.html#numeric
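As a hedged illustration of that suggestion, here is a minimal sketch using __eq__, __add__ and __sub__. MyDtype and DC are the names from the question; the sentinel-object approach and everything else is my own guess, and it ignores the numpy-dtype integration entirely:

DC = object()  # sentinel meaning "don't care"

class MyDtype:
    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        if not isinstance(other, MyDtype):
            return NotImplemented
        # DC compares equal to everything.
        if self.value is DC or other.value is DC:
            return True
        return self.value == other.value

    def __add__(self, other):
        # DC behaves like zero in addition.
        if self.value is DC:
            return MyDtype(other.value)
        if other.value is DC:
            return MyDtype(self.value)
        return MyDtype(self.value + other.value)

    def __sub__(self, other):
        # DC behaves like zero in subtraction.
        if other.value is DC:
            return MyDtype(self.value)
        if self.value is DC:
            return MyDtype(-other.value)
        return MyDtype(self.value - other.value)

print(MyDtype(True) == MyDtype(DC) == MyDtype(True))   # True
print(MyDtype(True) == MyDtype(False))                 # False
print(MyDtype(True) - MyDtype(DC) == MyDtype(True))    # True
print(MyDtype(DC) + MyDtype(False) == MyDtype(False))  # True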
I had the same problem and wrote a class whose objects are dontcare symbols. It is not exactly what you asked for, since it does not wrap values, but it should be easy to adapt to your needs.
You can get it here:
https://github.com/keepitfree/nicerpython
>>> from symbols import dontcare
>>> True == dontcare == True
True
>>> False == dontcare == False
True
>>> True == False
False
>>> True - dontcare == True
True
>>> dontcare + False == False
True

Why is a list containing objects False when value tested?

I would expect an empty list to value-test as False, but I'm a bit confused about why a list containing an object also reports as False when value-tested, as in the following example:
>>> weapon = []
>>> weapon == True
False
>>> weapon.append("sword")
>>> weapon == True
False
>>> weapon
['sword']
If weapon = [] is False, why would weapon = ['sword'] also be False? According to docs http://docs.python.org/release/2.4.4/lib/truth.html, it should be True. What am I missing in my understanding of this?
You should do a check like:
In [1]: w = []
In [2]: if w:
   ...:     print True
   ...: else:
   ...:     print False
   ...:
False
When you do:
w = []
if w:
    print "Truthy"
else:
    print "Falsy"
the key thing to note is that whatever you are testing in the if clause is coerced to a boolean. To make it explicit:
w = []
if bool(w):
    print "Truthy"
else:
    print "Falsy"
To compare apples to apples, you don't want to compare ["sword"] to True; instead, compare bool(["sword"]) to True:
bool(["sword"]) == True
# True
You need to use bool() if you want to compare it directly:
>>> weapon = []
>>> bool(weapon) == True
False
>>> weapon.append("sword")
>>> bool(weapon) == True
True
When you test a condition using if or while, the conversion to bool is done implicitly:
>>> if weapon == True: # weapon isn't equal to True
...     print "True"
...
>>> if weapon:
...     print "True"
...
True
From that article, note that even though things are considered to have a "true" truth value, they are not necessarily == True. For example:
["hi"] == True
// False
if ["hi"]:
print("hello")
// prints hello
The documentation says "Any object can be tested for truth value" not that [] == False or ['whatever'] == True. You should test objects as specified in the documentation "for use in an if or while condition or as operand of the Boolean operation".
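To make the distinction the last two answers draw a bit more concrete, here is a small illustration of my own: bool() is what if and while use, and it is independent of equality with True.

for obj in ([], ['sword'], 0, 1, 2, "False"):
    print(repr(obj), bool(obj), obj == True)
# []        False  False
# ['sword'] True   False
# 0         False  False
# 1         True   True
# 2         True   False
# 'False'   True   False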
