Why is +False not False in Python?

I was checking some of CPython's tests and in this file I saw a test case which was strange to me:
def test_math(self):
    ...
    self.assertIsNot(+False, False)
At first I thought it was a typo and that it should be self.assertIs(+False, False), but when I tried it in the Python console the result was False:
>>> +False is False
<stdin>:1: SyntaxWarning: "is" with a literal. Did you mean "=="?
False
>>>
>>> id(False)
140078839501184
>>> id(+False)
140078839621760
Why does + make it a different object?
Comments suggest that +False evaluates to 0, so maybe the better question is: why is that the case?

Because a bool is a type of int:
>>> isinstance(False, int)
True
>>> False == 0
True
a bool is accepted by functions that take ints as inputs (including all the standard operators), and those functions will generally return ints:
>>> True + False
1
>>> True * 2
2
>>> True ** False
1
or sometimes floats:
>>> True / True
1.0
Specifically, putting + in front of a number is a "unary plus", the opposite of a "unary minus" which returns the negative of its operand:
>>> +True
1
>>> -True
-1
>>> +False
0
>>> -False
0
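Under the hood, bool does not define a unary plus of its own, so +False dispatches to the method inherited from int, and the result is a plain int 0, a different object from the False singleton. A quick check, purely illustrative:
>>> (False).__pos__()          # unary + dispatches to the int implementation
0
>>> type(+False)
<class 'int'>
>>> '__pos__' in vars(bool)    # bool defines no __pos__ of its own
False
>>> '__pos__' in vars(int)
True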
Although this bool/int behavior catches most people off guard the first time they find it, it allows for some useful shortcuts; for example, you can sum a bunch of bools to find the number of True values:
>>> sum([True, True, False, False, True])
3
>>> sum(s.startswith("a") for s in ("apple", "banana", "pear", "avocado"))
2

When you apply arithmetic operators to boolean values, they behave as ints and the result is an int:
In [10]: type(+False)
Out[10]: int
In [11]: type(False)
Out[11]: bool
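If you want the singleton back after arithmetic, calling bool() on 0 or 1 returns the canonical False or True object, so identity holds again; a small illustration:
In [12]: bool(+False) is False
Out[12]: True
In [13]: bool(True + 0) is True
Out[13]: True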

Related

Differentiate False and 0

Let's say I have a list with different values, like this:
[1,2,3,'b', None, False, True, 7.0]
I want to iterate over it and check that no element is in a list of forbidden values, for example [0, 0.0].
When I check whether False in [0, 0.0], I get True. I understand that Python treats False as 0 here, but how can I avoid that and make the check treat False as not being in [0, 0.0]?
To tell the difference between False and 0 you may use is to compare them. False is a singleton value and always refers to the same object. To compare all the items in a list to make sure they are not False, try:
all(x is not False for x in a_list)
BTW, Python doesn't cast anything here: Booleans are a subclass of integers, and False is literally equal to 0, no conversion required.
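If the goal is exactly the membership check from the question (reject 0 and 0.0 but let False through), another option is to exclude bools explicitly before the in test. A small sketch, with variable names chosen just for illustration:
forbidden = [0, 0.0]
values = [1, 2, 3, 'b', None, False, True, 7.0]
# False == 0, so a plain "in" test would wrongly flag it; filter out bools first
bad = [x for x in values if not isinstance(x, bool) and x in forbidden]
print(bad)  # [] -- False passes, while a literal 0 or 0.0 would be caught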
You would want to use is instead of == when comparing.
y = 0
print(y == False)  # True
print(y is False)  # False
x = False
print(x == False)  # True
print(x is False)  # True
Found a weird corner case on differentiating between 0 and False today. If the initial list contains the numpy version of False (numpy.bool_(False)), the is comparisons don't work, because numpy.bool_(False) is not False.
These arise all the time in comparisons that use numpy types. For example:
>>> type(numpy.array(50)<0)
<class 'numpy.bool_'>
The easiest way would be to compare using the numpy.bool_ type: (np.array(50)<0) is (np.False_). But doing that requires a numpy dependency. The solution I came up with was to do a string comparison (working as of numpy 1.18.1):
str(numpy.bool_(False)) == str(False)
So when dealing with a list, following kindall's answer above, it would be:
all(str(x) != str(False) for x in a_list)
Note that this test also has a problem with the string 'False'. To avoid that, you can exclude cases where the value is equal to its own string representation (this also handles a numpy string array). Here are some test outputs:
>>> foo = False
>>> str(foo) != foo and str(foo) == str(False)
True
>>> foo = numpy.bool_(False)
>>> str(foo) != foo and str(foo) == str(False)
True
>>> foo = 0
>>> str(foo) != foo and str(foo) == str(False)
False
>>> foo = 'False'
>>> str(foo) != foo and str(foo) == str(False)
False
>>> foo = numpy.array('False')
>>> str(foo) != foo and str(foo) == str(False)
array(False)
I am not really an expert programmer, so there may be some limitations I've still missed, or a big reason not to do this, but it allowed me to differentiate 0 and False without needing to resort to a numpy dependency.
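If a numpy dependency turns out to be acceptable after all, a type-based check may be more robust than comparing string representations. A sketch, assuming numpy is importable and that only bool and numpy.bool_ need to be recognized:
import numpy as np

def is_false_like(x):
    # True only for the boolean False, whether the built-in or numpy's scalar
    return isinstance(x, (bool, np.bool_)) and not x

print(is_false_like(False))             # True
print(is_false_like(np.bool_(False)))   # True
print(is_false_like(0))                 # False: an int, not a bool
print(is_false_like('False'))           # False: a string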

How does a chain of `and` operators work in Python?

In my program I encountered this:
>>> True and True and (3 or True)
3
>>> True and True and ('asd' or True)
'asd'
while I expected to get a boolean value depending on the result in brackets. If I try expressions like (0 or True) or ('' or True), Python returns True, which is clear because 0 and '' are equivalent to False in boolean contexts.
Why doesn't python return boolean value by converting 3 and 'asd' into True?
From https://docs.python.org/3/library/stdtypes.html:
Important exception: the Boolean operations or and and always return
one of their operands
The behavior can be most easily seen with:
>>> 3 and True
True
>>> True and 3
3
If you need to eliminate this behavior, wrap it in a bool:
>>> bool(True and 3)
True
See this question
As Reut Sharabani answered, this behavior allows useful things like:
>>> my_list = []
>>> print(my_list or "no values")
no values
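The same operand-returning behavior makes and handy as a guard: a falsy left side short-circuits and is returned as-is, otherwise the right side is evaluated and returned. A small illustration (the dict is just an example value):
>>> user = {}
>>> user and user["name"]      # empty dict short-circuits, so no KeyError
{}
>>> user = {"name": "Ada"}
>>> user and user["name"]
'Ada'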

Python Bool and int comparison and indexing on list with boolean values

Indexing a list with boolean values works fine, even though an index is supposed to be an integer.
Following is what I tried in console:
>>> l = [1,2,3,4,5,6]
>>>
>>> l[False]
1
>>> l[True]
2
>>> l[False + True]
2
>>> l[False + 2*True]
3
>>>
>>> l['0']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: list indices must be integers, not str
>>> type(True)
<type 'bool'>
When I tried l['0'], it raised an error saying that list indices must be integers, which is obvious.
But even though the type of True and False is bool, indexing the list works fine; they are apparently treated as ints and the operation succeeds.
Please explain what is going on internally.
I am posting a question for the first time, so please forgive any mistakes.
What's going on is that booleans actually are integers. True is 1 and False is 0. Bool is a subtype of int.
>>> isinstance(True, int)
True
>>> issubclass(bool, int)
True
So it's not converting them to integers, it's just using them as integers.
(Bools are ints for historical reasons. Before a bool type existed in Python, people used the integer 0 to mean false and 1 to mean true. So when they added a bool type, they made the boolean values integers in order to maintain backward compatibility with old code that used these integer values. See for instance http://www.peterbe.com/plog/bool-is-int .)
>>> help(True)
Help on bool object:
class bool(int)
| bool(x) -> bool
|
| Returns True when the argument x is true, False otherwise.
| The builtins True and False are the only two instances of the class bool.
| The class bool is a subclass of the class int, and cannot be subclassed.
Python used to lack booleans; we just used integers, 0 for False and any other integer for True. When booleans were added to the language, the values False and True were made to behave as the integers 0 and 1 to preserve backwards compatibility with old code. Internally, bool is a subclass of int.
In other words, the following expressions are True:
>>> False == 0
True
>>> True == 1
True
>>> isinstance(True, int)
True
>>> issubclass(bool, int)
True
and as you found out:
>>> True * 3
3
This doesn't extend to strings however.
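For instance, equality with strings does not hold, even for strings that look boolean or numeric:
>>> "1" == True
False
>>> "True" == True
False
>>> int("1") == True
True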
...Booleans are a subtype of plain integers.
Source.
As you can see, False is 0 and True is 1.
The Python documentation does not state directly that all non-zero integers evaluate to True when tested in an if statement, while only zero evaluates to False. You can prove it to yourself with the following code in Python:
for test_integer in range(-2, 3):
    if not test_integer:
        print('{} evaluates to False in Python.'.format(test_integer))
    else:
        print('{} evaluates to True in Python.'.format(test_integer))
-2 evaluates to True in Python.
-1 evaluates to True in Python.
0 evaluates to False in Python.
1 evaluates to True in Python.
2 evaluates to True in Python.
Try it for as far on either side of zero as you want; this code only shows for -2, -1, 0, 1, and 2 inclusive.
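The same check can also be written more compactly with bool(), which maps zero to False and any non-zero integer to True:
>>> [bool(i) for i in range(-2, 3)]
[True, True, False, True, True]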

Python max() and min() values for bool

In the Python interpreter,
min(True, False) == False
max(True, False) == True
Is this assured by design?
True is equal to 1 and False is 0
It seems, at least in CPython, bool subclasses int. Therefore, you can do:
>>> abs(False)
0
>>> abs(True)
1
and:
>>> False < True
True
>>> True > False
True
I guess max and min work on the comparison operator:
>>> cmp(False, True)
-1
>>> cmp(True, False)
1
>>> cmp(False, False)
0
>>> cmp(True, True)
0
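Note that cmp() only exists in Python 2; in Python 3 the same ordering information can be recovered from the comparison operators themselves. A purely illustrative equivalent:
>>> (True > False) - (True < False)    # plays the role of cmp(True, False)
1
>>> (False > True) - (False < True)
-1
>>> (True > True) - (True < True)
0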
In python 2.x, this is not guaranteed, as you can overwrite True and False:
>>> False = 23
>>> max(True, False)
23
But if you do not assign to True or False, it is guaranteed by language design that booleans subclass int with the values 0 and 1, yes. (And in Python 3, True and False are keywords, so you cannot do the above at all.)
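In Python 3 the reassignment fails at compile time, which is why the ordering is safe there. The exact error message varies between versions; on a recent CPython it looks like this:
>>> False = 23
  File "<stdin>", line 1
SyntaxError: cannot assign to False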
According to the doc,
In numeric contexts (for example when used as the argument to an arithmetic operator), they behave like the integers 0 and 1, respectively.
So, yes, it is assured.

Adding the number 1 to a set has no effect

I cannot add the integer number 1 to an existing set. In an interactive shell, this is what I am doing:
>>> st = {'a', True, 'Vanilla'}
>>> st
{'a', True, 'Vanilla'}
>>> st.add(1)
>>> st
{'a', True, 'Vanilla'} # Here's the problem; there's no 1, but anything else works
>>> st.add(2)
>>> st
{'a', True, 'Vanilla', 2}
This question was posted two months ago, but I believe it was misunderstood.
I am using Python 3.2.3.
>>> 1 == True
True
I believe your problem is that 1 and True are the same value, so 1 is "already in the set".
>>> st
{'a', True, 'Vanilla'}
>>> 1 in st
True
In mathematical operations True is itself treated as 1:
>>> 5 + True
6
>>> True * 2
2
>>> 3. / (True + True)
1.5
Though True is a bool and 1 is an int:
>>> type(True)
<class 'bool'>
>>> type(1)
<class 'int'>
Because 1 in st returns True, I think you shouldn't have any problems with it. It is a very strange result, though. If you're interested in further reading, @Lattyware points to PEP 285, which explains this issue in depth.
I believe, though I'm not certain, that because hash(1) == hash(True) and also 1 == True, they are considered the same element by the set. I don't believe that should be the case, as 1 is True evaluates to False, but I believe it explains why you can't add it.
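Both conditions do in fact hold, and dictionaries behave the same way, keeping the first key seen and the last value assigned:
>>> hash(1) == hash(True)
True
>>> 1 == True
True
>>> {1: 'int', True: 'bool'}
{1: 'bool'}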
1 is equivalent to True, as 1 == True returns True. As a result, the insertion of 1 is rejected, since a set cannot have duplicates.
Here are some links if anyone is interested in further study.
Is it Pythonic to use bools as ints?
https://stackoverflow.com/a/2764099/1355722
You have to use a list if you want to keep items with the same hash. If you're absolutely sure your set needs to be able to contain both True and 1.0, I'm pretty sure you'll have to define your own custom class, probably as a thin wrapper around dict.
As in many languages, Python's set type is just a thin wrapper around dict where we're only interested in the keys.
Ex:
st = {'a', True, 'Vanilla'}
list_st = []
for i in st:
    list_st.append(i)
list_st.append(1)
for i in list_st:
    print(f'The hash of {i} is {hash(i)}')
produces
The hash of True is 1
The hash of Vanilla is -6149594130004184476
The hash of a is 8287428602346617974
The hash of 1 is 1
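If you really do need True and 1 (or 1.0) to coexist, here is a minimal sketch of the "thin wrapper around dict" idea mentioned above, keying each element on its type as well as its value. The class name and methods are illustrative, not a standard API:
class TypedSet:
    """Set-like container that treats True, 1 and 1.0 as distinct members."""
    def __init__(self, items=()):
        self._d = {}
        for item in items:
            self.add(item)

    def add(self, item):
        # (bool, True) and (int, 1) compare unequal, so they stay separate keys
        self._d[(type(item), item)] = item

    def __contains__(self, item):
        return (type(item), item) in self._d

    def __iter__(self):
        return iter(self._d.values())

st = TypedSet(['a', True, 'Vanilla'])
st.add(1)
print(sorted(map(str, st)))   # ['1', 'True', 'Vanilla', 'a'] -- both 1 and True present
print(1 in st, True in st)    # True True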
