How can I closely achieve ?: from C++/C# in Python?

In C# I could easily write the following:
string stringValue = string.IsNullOrEmpty( otherString ) ? defaultString : otherString;
Is there a quick way of doing the same thing in Python or am I stuck with an 'if' statement?

In Python 2.5, there is
A if C else B
which behaves a lot like ?: in C. However, it's frowned upon for two reasons: readability, and the fact that there's usually a simpler way to approach the problem. For instance, in your case:
stringValue = otherString or defaultString
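Spelled out as a quick sketch (otherString and defaultString stand in for the strings from the question):
otherString = ""
defaultString = "default"
# Conditional expression (Python 2.5+), the closest analogue of the C# ternary:
stringValue = otherString if otherString else defaultString
# The shorter idiom: works because empty strings and None are both falsy
stringValue = otherString or defaultString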

#Dan
if otherString:
    stringValue = otherString
else:
    stringValue = defaultString
This type of code is longer and more expressive, but also more readable
Well yes, it's longer. Not so sure about “more expressive” and “more readable”. At the very least, your claim is disputable. I would even go as far as saying it's downright wrong, for two reasons.
First, your code emphasizes the decision-making (rather extremely). On the other hand, the conditional operator emphasizes something else, namely the value (resp. the assignment of said value). And this is exactly what the writer of this code wants. The decision-making is really rather a by-product of the code. The important part here is the assignment operation. Your code hides this assignment in a lot of syntactic noise: the branching.
Your code is less expressive because it shifts the emphasis from the important part.
Even then your code would probably trump some obscure ASCII art like ?:. An inline-if would be preferable. Personally, I don't like the variant introduced with Python 2.5 because it's backwards. I would prefer something that reads in the same flow (direction) as the C ternary operator but uses words instead of ASCII characters:
C = if cond then A else B
This wins hands down.
C and C# unfortunately don't have such an expressive statement. But (and this is the second argument), the ternary conditional operator of C languages is so long established that it has become an idiom in itself. The ternary operator is as much part of the language as the “conventional” if statement. Because it's an idiom, anybody who knows the language immediately reads this code right. Furthermore, it's an extremely short, concise way of expressing these semantics. In fact, it's the shortest imaginable way. It's extremely expressive because it doesn't obscure the essence with needless noise.
Finally, Jeff Atwood has written the perfect conclusion to this: The best code is no code at all.

It's never a bad thing to write readable, expressive code.
if otherString:
    stringValue = otherString
else:
    stringValue = defaultString
This type of code is longer and more expressive, but also more readable and less likely to get tripped over or mis-edited down the road. Don't be afraid to write expressively - readable code should be a goal, not a byproduct.

There are a few duplicates of this question, e.g.
Does Python have a ternary conditional operator?
What's the best way to replace the ternary operator in Python?
In essence, in a general setting pre-2.5 code should use this:
(condExp and [thenExp] or [elseExp])[0]
(given condExp, thenExp and elseExp are arbitrary expressions), as it avoids wrong results if thenExp evaluates to boolean False, while maintaining short-circuit evaluation.
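For instance, a small sketch with made-up values where thenExp is falsy:
condExp, thenExp, elseExp = True, 0, 99
print((condExp and [thenExp] or [elseExp])[0])   # prints 0: correct even though thenExp is falsy
print(condExp and thenExp or elseExp)            # prints 99: the naive and-or form goes wrong here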

By the way, j0rd4n, you don't (please don't!) write code like this in C#. Apart from the fact that the IsDefaultOrNull is actually called IsNullOrEmpty, this is pure code bloat. C# offers the coalesce operator for situations like these:
string stringValue = otherString ?? defaultString;
It's true that this only works if otherString is null (rather than empty) but if this can be ensured beforehand (and often it can) it makes the code much more readable.

I also discovered that just using the "or" operator does pretty well. For instance:
finalString = get_override() or defaultString
If get_override() returns "" or None, it will always use defaultString.

Chapter 4 of diveintopython.net has the answer. It's called the and-or trick in Python.

You can take advantage of the fact that logical expressions return their value, and not just true or false status. For example, you can always use:
result = question and firstanswer or secondanswer
With the caveat that it doesn't work like the ternary operator if firstanswer is false. This is because question is evaluated first; if it's true, firstanswer is returned, unless firstanswer is itself falsy, in which case secondanswer is returned instead, so this usage fails to act like the ternary operator. If you know the values, however, there is usually no problem. An example would be:
result = choice == 7 and "Seven" or "Another Choice"
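A quick sketch of that caveat, with made-up values where firstanswer is falsy:
question = True
firstanswer = ""            # falsy on purpose
secondanswer = "fallback"
result = question and firstanswer or secondanswer
print(result)               # prints "fallback", not "": the and-or trick breaks here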

If you used Ruby, you could write
stringValue = otherString.blank? ? defaultString : otherString;
The blank? method (added by Rails' ActiveSupport) means nil or empty.
Come over to the dark side...

Related

Is there a benefit to using a boolean over say, an int with an 'if' statement

As a beginner coder, I wrote a program, something along the lines of:
ArbitraryVariable = 1
--Code--
--Code--
--Code--
if ArbitraryVariable == 1:
    --Code--
    --Code--
    --Code--
I would set ArbitraryVariable to 1 if I wanted it to do that thing, and anything else (usually 0) if I didn't, which I guess accomplishes a de facto true/false logic.
My friend who actually knows what he's doing informs me this would've been better accomplished with a boolean as it's true/false, which makes sense, but I am curious if there is actually any specific benefit to doing this, other than a very slight increase in optimisation and it seemingly just being the boolean's 'thing'?
I wrote this in python, but feel free to answer across languages.
This is a great question.
Booleans are used for values that only need to represent true or false. Honestly, it's mostly a matter of the coding standards most programmers follow. As you mentioned, there is a slight performance increase over using an integer to denote true or false.
It also has the benefit of being easier to read and understand: you don't need a comparison operator in Python if you use a boolean value. You can just do this:
Boolean = True
if Boolean:
    print("True")
and if it's false you can just use:
Boolean = False
if not Boolean:
    print("False")
Overall, which one you use should depend on what the if statement is doing. If the question is something like "Is the light on?", true and false would be an appropriate answer. That's what I've found in my experience anyway.
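For instance, a minimal sketch rewriting the original flag pattern with a boolean (hypothetical names):
light_is_on = True                 # instead of ArbitraryVariable = 1
if light_is_on:                    # no need to compare against 1
    print("The light is on")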

Why is the last number returned when putting "and" in python? [duplicate]

First, the code:
>>> False or 'hello'
'hello'
This surprising behavior lets you check if x is not None and check the value of x in one line:
>>> x = 10 if randint(0,2) == 1 else None
>>> (x or 0) > 0
# depend on x value...
Explanation: or functions like this:
if x is false, then y, else x
No language that I know lets you do this. So, why does Python?
It sounds like you're combining two issues into one.
First, there's the issue of short-circuiting. Marcin's answer addresses this issue perfectly, so I won't try to do any better.
Second, there's or and and returning the last-evaluated value, rather than converting it to bool. There are arguments to be made both ways, and you can find many languages on either side of the divide.
Returning the last-evaluated value allows the functionCall(x) or defaultValue shortcut, avoids a possibly wasteful conversion (why convert an int 2 into a bool 1 if the only thing you're going to do with it is check whether it's non-zero?), and is generally easier to explain. So, for various combinations of these reasons, languages like C, Lisp, Javascript, Lua, Perl, Ruby, and VB all do things this way, and so does Python.
Always returning a boolean value from an operator helps to catch some errors (especially in languages where the logical operators and the bitwise operators are easy to confuse), and it allows you to design a language where boolean checks are strictly-typed checks for true instead of just checks for nonzero, it makes the type of the operator easier to write out, and it avoids having to deal with conversion for cases where the two operands are different types (see the ?: operator in C-family languages). So, for various combinations of these reasons, languages like C++, Fortran, Smalltalk, and Haskell all do things this way.
In your question (if I understand it correctly), you're using this feature to be able to write something like:
if (x or 0) < 1:
When x could easily be None. This particular use case isn't very useful, both because the more-explicit x if x else 0 (in Python 2.5 and later) is just as easy to write and probably easier to understand (at least Guido thinks so), but also because None < 1 is the same as 0 < 1 anyway (at least in Python 2.x, so you've always got at least one of the two options)… But there are similar examples where it is useful. Compare these two:
return launchMissiles() or -1
return launchMissiles() if launchMissiles() else -1
The second one will waste a lot of missiles blowing up your enemies in Antarctica twice instead of once.
If you're curious why Python does it this way:
Back in the 1.x days, there was no bool type. You've got falsy values like None, 0, [], (), "", etc., and everything else is true, so who needs explicit False and True? Returning 1 from or would have been silly, because 1 is no more true than [1, 2, 3] or "dsfsdf". By the time bool was added (gradually over two 2.x versions, IIRC), the current logic was already solidly embedded in the language, and changing would have broken a lot of code.
So, why didn't they change it in 3.0? Many Python users, including BDFL Guido, would suggest that you shouldn't use or in this case (at the very least because it's a violation of "TOOWTDI"); you should instead store the result of the expression in a variable, e.g.:
missiles = launchMissiles()
return missiles if missiles else -1
And in fact, Guido has stated that he'd like to ban launchMissiles() or -1, and that's part of the reason he eventually accepted the ternary if-else expression that he'd rejected many times before. But many others disagree, and Guido is a benevolent DFL. Also, making or work the way you'd expect everywhere else, while refusing to do what you want (but Guido doesn't want you to want) here, would actually be pretty complicated.
So, Python will probably always be on the same side as C, Perl, and Lisp here, instead of the same side as Java, Smalltalk, and Haskell.
No language that i know lets you do this. So, why Python do?
Then you don't know many languages. I can't think of one language that I do know that does not exhibit this "shortcircuiting" behaviour.
It does it because it is useful to say:
a = b or K
such that a becomes b if b is not None (or otherwise falsy), and otherwise it gets the default value K.
Actually a number of languages do. See Wikipedia about Short-Circuit Evaluation
For the reason why short-circuit evaluation exists, wikipedia writes:
If both expressions used as conditions are simple boolean variables, it can be actually faster to evaluate both conditions used in boolean operation at once, as it always requires a single calculation cycle, as opposed to one or two cycles used in short-circuit evaluation (depending on the value of the first).
This behavior is not surprising, and it's quite straightforward if you consider Python has the following features regarding or, and and not logical operators:
Short-circuit evaluation: it only evaluates operands up to where it needs to.
Non-coercing result: the result is one of the operands, not coerced to bool.
And, additionally:
The Truth Value of an object is False only for None, False, 0, "", [], {}. Everything else has a truth value of True (this is a simplification; the correct definition is in the official docs)
Combine those features, and it leads to:
or : if the first operand evaluates as True, short-circuit there and return it. Or return the 2nd operand.
and: if the first operand evaluates as False, short-circuit there and return it. Or return the 2nd operand.
It's easier to understand if you generalize to a chain of operations:
>>> a or b or c or d
>>> a and b and c and d
Here is the "rule of thumb" I've memorized to help me easily predict the result:
or : returns the first "truthy" operand it finds, or the last one.
and: returns the first "falsy" operand it finds, or the last one.
As for your question, on why python behaves like that, well... I think because it has some very neat uses, and it's quite intuitive to understand. A common use is a series of fallback choices, the first "found" (ie, non-falsy) is used. Think about this silly example:
drink = getColdBeer() or pickNiceWine() or random.anySoda or "meh, water :/"
Or this real-world scenario:
username = cmdlineargs.username or configFile['username'] or DEFAULT_USERNAME
Which is much more concise and elegant than the alternative.
As many other answers have pointed out, Python is not alone, and many other languages have the same behavior, for both short-circuiting (I believe most current languages do) and non-coercion.
"No language that i know lets you do this. So, why Python do?" You seem to assume that all languages should be the same. Wouldn't you expect innovation in programming languages to produce unique features that people value?
You've just pointed out why it's useful, so why wouldn't Python do it? Perhaps you should ask why other languages don't.
You can take advantage of the special features of the Python or operator out of Boolean contexts. The rule of thumb is still that the result of your Boolean expressions is the first true operand or the last in the line.
Notice that the logical operators (or included) are evaluated before the assignment operator =, so you can assign the result of a Boolean expression to a variable in the same way you do with a common expression:
>>> a = 1
>>> b = 2
>>> var1 = a or b
>>> var1
1
>>> a = None
>>> b = 2
>>> var2 = a or b
>>> var2
2
>>> a = []
>>> b = {}
>>> var3 = a or b
>>> var3
{}
Here, the or operator works as expected, returning the first true operand, or the last operand if both evaluate to false.

Why does python not support ++i or i++ [duplicate]

Why are there no ++ and -- operators in Python?
It's not because it doesn't make sense; it makes perfect sense to define "x++" as "x += 1, evaluating to the previous binding of x".
If you want to know the original reason, you'll have to either wade through old Python mailing lists or ask somebody who was there (eg. Guido), but it's easy enough to justify after the fact:
Simple increment and decrement aren't needed as much as in other languages. You don't write things like for(int i = 0; i < 10; ++i) in Python very often; instead you do things like for i in range(0, 10).
Since it's not needed nearly as often, there's much less reason to give it its own special syntax; when you do need to increment, += is usually just fine.
It's not a decision of whether it makes sense, or whether it can be done--it does, and it can. It's a question of whether the benefit is worth adding to the core syntax of the language. Remember, this is four operators--postinc, postdec, preinc, predec, and each of these would need to have its own class overloads; they all need to be specified, and tested; it would add opcodes to the language (implying a larger, and therefore slower, VM engine); every class that supports a logical increment would need to implement them (on top of += and -=).
This is all redundant with += and -=, so it would become a net loss.
The original answer I wrote below repeats a myth from the folklore of computing, debunked by Dennis Ritchie as "historically impossible", as noted in the letters to the editors of Communications of the ACM, July 2012, doi:10.1145/2209249.2209251.
The C increment/decrement operators were invented at a time when the C compiler wasn't very smart, and the authors wanted to be able to state the direct intent that a machine-language operator should be used, which saved a handful of cycles for a compiler that might otherwise do a
load memory
load 1
add
store memory
instead of
inc memory
and the PDP-11 even supported "autoincrement" and "autoincrement deferred" instructions corresponding to *++p and *p++, respectively. See section 5.3 of the manual if horribly curious.
As compilers are smart enough to handle the high-level optimization tricks built into the syntax of C, they are just a syntactic convenience now.
Python doesn't have tricks to convey intentions to the assembler because it doesn't use one.
I always assumed it had to do with this line of the zen of python:
There should be one — and preferably only one — obvious way to do it.
x++ and x+=1 do the exact same thing, so there is no reason to have both.
Of course, we could say "Guido just decided that way", but I think the question is really about the reasons for that decision. I think there are several reasons:
It mixes together statements and expressions, which is not good practice. See http://norvig.com/python-iaq.html
It generally encourages people to write less readable code
Extra complexity in the language implementation, which is unnecessary in Python, as already mentioned
Because, in Python, integers are immutable (int's += actually returns a different object; see the short check just below).
Also, with ++/-- you need to worry about pre- versus post- increment/decrement, and it takes only one more keystroke to write x+=1. In other words, it avoids potential confusion at the expense of very little gain.
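A quick interactive check of the immutability point mentioned above:
>>> x = 1000
>>> id_before = id(x)
>>> x += 1                 # rebinds x to a new int object; 1000 itself is not mutated
>>> id(x) == id_before
False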
Clarity!
Python is a lot about clarity and no programmer is likely to correctly guess the meaning of --a unless s/he's learned a language having that construct.
Python is also a lot about avoiding constructs that invite mistakes and the ++ operators are known to be rich sources of defects.
These two reasons are enough not to have those operators in Python.
The decision that Python uses indentation to mark blocks, rather than syntactical means such as some form of begin/end bracketing or mandatory end marking, is based largely on the same considerations.
For illustration, have a look at the discussion around introducing a conditional operator (in C: cond ? resultif : resultelse) into Python in 2005.
Read at least the first message and the decision message of that discussion (which had several precursors on the same topic previously).
Trivia:
The PEP frequently mentioned therein is the "Python Enhancement Proposal" PEP 308. LC means list comprehension, GE means generator expression (and don't worry if those confuse you, they are none of the few complicated spots of Python).
My understanding of why Python does not have a ++ operator is the following: when you write a=b=c=1 in Python, you get three variables (labels) pointing at the same object (whose value is 1). You can verify this by using the id function, which will return an object's memory address:
In [19]: id(a)
Out[19]: 34019256
In [20]: id(b)
Out[20]: 34019256
In [21]: id(c)
Out[21]: 34019256
All three variables (labels) point to the same object. Now increment one of the variables and see how it affects the memory addresses:
In [22]: a = a + 1
In [23]: id(a)
Out[23]: 34019232
In [24]: id(b)
Out[24]: 34019256
In [25]: id(c)
Out[25]: 34019256
You can see that variable a now points to a different object than variables b and c. Because you've used a = a + 1, this is explicitly clear; in other words, you assign a completely different object to the label a. If you could write a++, it would suggest that you did not assign a new object to the label a but rather incremented the old one. All this stuff is, IMHO, meant to minimize confusion. For a better understanding, see how Python variables work:
In Python, why can a function modify some arguments as perceived by the caller, but not others?
Is Python call-by-value or call-by-reference? Neither.
Does Python pass by value, or by reference?
Is Python pass-by-reference or pass-by-value?
Python: How do I pass a variable by reference?
Understanding Python variables and Memory Management
Emulating pass-by-value behaviour in python
Python functions call by reference
Code Like a Pythonista: Idiomatic Python
It was just designed that way. Increment and decrement operators are just shortcuts for x = x + 1. Python has typically adopted a design strategy which reduces the number of alternative means of performing an operation. Augmented assignment is the closest thing to increment/decrement operators in Python, and they weren't even added until Python 2.0.
I'm very new to Python, but I suspect the reason is the emphasis the language places on the distinction between mutable and immutable objects. Now, I know that x++ can easily be interpreted as x = x + 1, but it LOOKS like you're incrementing in-place an object which could be immutable.
Just my guess/feeling/hunch.
To complete already good answers on that page:
Let's suppose we decided to add a prefix increment (++i); that would clash with the unary + and - operators.
Today, prefixing with ++ or -- does nothing, because it applies the unary plus operator twice (which does nothing) or the unary minus operator twice (which cancels itself out):
>>> i=12
>>> ++i
12
>>> --i
12
So that would potentially break that logic.
Now, if one needs it in list comprehensions or lambdas, from Python 3.8 onward it's possible with the new := assignment expression operator (PEP 572).
Pre-incrementing a and assigning it to b:
>>> a = 1
>>> b = (a:=a+1)
>>> b
2
>>> a
2
Post-incrementing just needs to compensate for the premature add by subtracting 1:
>>> a = 1
>>> b = (a:=a+1)-1
>>> b
1
>>> a
2
I believe it stems from the Python creed that "explicit is better than implicit".
First, Python is only indirectly influenced by C; it is heavily influenced by ABC, which apparently does not have these operators, so it should not be any great surprise not to find them in Python either.
Secondly, as others have said, increment and decrement are supported by += and -= already.
Third, full support for a ++ and -- operator set usually includes supporting both the prefix and postfix versions of them. In C and C++, this can lead to all kinds of "lovely" constructs that seem (to me) to be against the spirit of simplicity and straight-forwardness that Python embraces.
For example, while the C statement while(*t++ = *s++); may seem simple and elegant to an experienced programmer, to someone learning it, it is anything but simple. Throw in a mixture of prefix and postfix increments and decrements, and even many pros will have to stop and think a bit.
The ++ class of operators are expressions with side effects. This is something generally not found in Python.
For the same reason an assignment is not an expression in Python, thus preventing the common if (a = f(...)) { /* using a here */ } idiom.
Lastly, I suspect that these operators are not very consistent with Python's reference semantics. Remember, Python does not have variables (or pointers) with the semantics known from C/C++.
As I understood it, it's so you won't think the value in memory has changed.
In C, when you do x++, the value of x in memory changes.
But in Python all numbers are immutable, so the address that x pointed to still holds x, not x+1. If you could write x++, you might think that x itself changed; what really happens is that the name x is rebound to a location in memory where x+1 is stored (or that object is created if it doesn't already exist).
Other answers have described why it's not needed for iterators, but sometimes it is useful to increment a variable inline while assigning, and you can achieve the same effect using tuples and multiple assignment:
b = ++a becomes:
a,b = (a+1,)*2
and b = a++ becomes:
a,b = a+1, a
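A quick interactive check of both forms:
>>> a = 5
>>> a, b = (a + 1,) * 2    # b = ++a: both names get the incremented value
>>> a, b
(6, 6)
>>> a, b = a + 1, a        # b = a++: b gets the old value, a the incremented one
>>> a, b
(7, 6)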
Python 3.8 introduces the := assignment expression operator, allowing us to achieve foo(++a) with
foo(a:=a+1)
foo(a++) is still elusive though.
Maybe a better question would be to ask why these operators exist in C. K&R calls the increment and decrement operators 'unusual' (Section 2.8, page 46). The Introduction calls them 'more concise and often more efficient'. I suspect that the fact that these operations always come up in pointer manipulation also played a part in their introduction.
In Python it has been probably decided that it made no sense to try to optimise increments (in fact I just did a test in C, and it seems that the gcc-generated assembly uses addl instead of incl in both cases) and there is no pointer arithmetic; so it would have been just One More Way to Do It and we know Python loathes that.
This may be because #GlennMaynard is looking at the matter as in comparison with other languages, but in Python, you do things the python way. It's not a 'why' question. It's there and you can do things to the same effect with x+=. In The Zen of Python, it is given: "there should only be one way to solve a problem." Multiple choices are great in art (freedom of expression) but lousy in engineering.
I think this relates to the concepts of mutability and immutability of objects. 2, 3, 4, 5 are immutable in Python: 2 has a fixed id for the lifetime of the Python process.
x++ would essentially mean an in-place increment like C. In C, x++ performs in-place increments. So with x = 3, x++ would increment the 3 in memory to 4, unlike Python, where 3 would still exist in memory.
Thus in Python, you don't need to recreate a value in memory. This may lead to performance optimizations.
This is a hunch based answer.
I know this is an old thread, but the most common use case for ++i is not covered, that being manually indexing sets when there are no provided indices. This situation is why python provides enumerate()
Example : In any given language, when you use a construct like foreach to iterate over a set - for the sake of the example we'll even say it's an unordered set and you need a unique index for everything to tell them apart, say
i = 0
stuff = {'a': 'b', 'c': 'd', 'e': 'f'}
uniquestuff = {}
for key, val in stuff.items():
    uniquestuff[key] = '{0}{1}'.format(val, i)
    i += 1
In cases like this, python provides an enumerate method, e.g.
for i, (key, val) in enumerate(stuff.items()):
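For completeness, here is that loop with the same body as the manual-counter version, reusing the stuff dict from the snippet above:
uniquestuff = {}
for i, (key, val) in enumerate(stuff.items()):
    uniquestuff[key] = '{0}{1}'.format(val, i)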
In addition to the other excellent answers here, ++ and -- are also notorious for undefined behavior. For example, what happens in this code?
foo[bar] = bar++;
It's so innocent-looking, but it's wrong C (and C++), because you don't know whether the first bar will have been incremented or not. One compiler might do it one way, another might do it another way, and a third might make demons fly out of your nose. All would be perfectly conformant with the C and C++ standards.
(EDIT: C++17 has changed the behavior of the given code so that it is defined; it will be equivalent to foo[bar+1] = bar; ++bar; — which nonetheless might not be what the programmer is expecting.)
Undefined behavior is seen as a necessary evil in C and C++, but in Python, it's just evil, and avoided as much as possible.

Use of OR as branch control in FP

I undertook an interview last week in which I learnt a few things about Python I didn't know about (or rather didn't realise how they could be used). First up, and the subject of this question, is the use of or for the purposes of branch control.
So, for example, if we run:
def f():
    # do something. I'd use ... but that's actually a python object.
    pass
def g():
    # something else.
    pass
f() or g()
Then if f() evaluates to some true condition, that value is returned; if not, g() is evaluated and whatever value it produces is returned, whether true or false. This gives us the ability to implement an if statement using the or keyword.
We can also use and, such that f() and g() will return the value of g() if f() is true, and the value of f() if f() is false.
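A small sketch, with hypothetical functions, of the short-circuit behaviour described above:
def f():
    print("f ran")
    return True
def g():
    print("g ran")
    return "g's value"
print(f() or g())    # prints "f ran" then True; g() is never called
print(f() and g())   # prints "f ran", "g ran", then g's value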
I am told that this (the use of or for branch control) is a common thing in languages such as lisp (hence the lisp tag). I'm currently following SICP learning Scheme, so I can see that (or (f x) (g x)) would return the value of (g x) assuming (f x) is #f.
I'm confused as to whether there is any advantage of this technique. It clearly achieves branch control but to me the built in keywords seem more self-explanatory.
I'm also confused as to whether or not this is "functional"? My understanding of pure functional programming is that you use constructs like this (an example from my recent erlang experiments):
makeeven(N, 1) -> N + 1;
makeeven(N, 0) -> N.
makeeven(N) -> makeeven(N, N rem 2).
Or a better, more complicated example using template meta-programming in C++ (discovered via cpp-next.com). My thought process is that one aspect of functional programming boils down to the use of piecewise-defined functions in code for branch control (and, if you can manage it, tail recursion).
So, my questions:
Is this "functional"? It appears that way and my interviewers said they had backgrounds in functional programming, but it didn't match what I thought was functional. I see no reason why you couldn't have a logical operator as part of a function - it seems to lend itself nicely to the concept of higher order functions. I just hadn't thought that the use of logical operators was how functional programmers achieved branch control. Right? Wrong? I can see that circuits use logic gates for branch control so I guess this is a similar (related) concept?
Is there some advantage to using this technique? Is it just language conciseness/a syntax issue, or are there implications in terms of building an interpreter to using this construct?
Are there any use cases for this technique? Or is it not used very often? Is it used at all? As a self-taught guy I'd never seen it before although that in itself isn't necessarily surprising.
I apologise for jumping over so many languages; I'm simply trying to tie together my understanding across them. Feel free to answer in any language mentioned. I also apologise if I've misunderstood any definitions or am missing something vital here, I've never formally studied computer science.
Your interviewers must have had a "functional background" way back. It used to be common to write
(or (some-condition) (some-side-effect))
but in CL and in Scheme implementations that support it, it is much better written with unless. The same goes for and vs when.
So, to be more concrete -- it's not more functional (and in fact the common use of these things was for one-sided conditionals, which are not functional to begin with); there is no advantage (which becomes very obvious in these languages when you know that things are implemented as macros anyway -- for example, most or and and implementations expand to an if); and any possible use cases should use when and unless if you have them in your implementation, otherwise it's better to define them as macros than to not use them.
Oh, and you could use a combination of them instead of a two sided if, but that would be obfuscatingly ugly.
I'm not aware of any issues with the way this code will execute, but it is confusing to read for the uninitiated. In fact, this kind of syntax is like a Python anti-pattern: you can do it, but it is in no way Pythonic.
condition and true_branch or false_branch works in all languages that have short-circuiting logical operators. On the other hand, it's not really a good idea to use it in a language where arbitrary values have a truth value.
For example, here the condition is true but the "true branch" value is 0, so the result is wrong:
zero = (1==1) and 0 or 1  # (1==1) -> True
zero = (True and 0) or 1  # (True and X) -> X
zero = 0 or 1             # 0 is False in most languages
zero = False or 1
zero = 1                  # not the 0 that was intended
As Eli said; also, performing control flow purely with logical operators tends to be taught in introductory FP classes -- more as a mind exercise, really, not something that you necessarily want to use IRL. It's always good to be able to translate any control operator down to if.
Now, the big difference between FPs and other languages is that, in more functional languages, if is actually an expression, not a statement. An if block always has a value! The C family of languages has a macro version of this -- the test? consequent : alternative construct -- but it gets really unreadable if you nest more expressions.
Prior to Python 2.5, if you wanted to have a control-flow expression in Python you had to use logical operators. In Python 2.5, though, there is an FP-like if-expression syntax, so you can do something like this:
(42 if True else 7) + 35
See PEP 308
You only mention the case where there are exactly 2 expressions to evaluate. What happens if there are 5?
;; returns first true value, evaluating only as many as needed
(or (f x) (g x) (h x) (i x) (j x))
Would you nest if-statements? I'm not sure how I'd do this in Python. It's almost like this:
any(c(x) for c in [f, g, h, i, j])
except Python's any throws away the value and just returns True. (There might be a way to do it with itertools.dropwhile, but it seems a little awkward to me. Or maybe I'm just missing the obvious way.)
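For what it's worth, one way to keep the value in Python is next() over a generator; a self-contained sketch with hypothetical callables f through j:
def f(x): return None
def g(x): return 0
def h(x): return "h's result"
def i(x): return "never reached"
def j(x): return "never reached either"
x = 42
first = next((v for v in (c(x) for c in [f, g, h, i, j]) if v), None)
print(first)         # "h's result"; i and j are never called, mirroring Lisp's or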
(As an aside: I find that Lisp's builtins don't quite correspond to what their names are in other languages, which can be confusing. Lisp's IF is like C's ternary operator ?: or Python's conditional expressions, for example, not their if-statements. Likewise, Lisp's OR is in some ways more like (but not exactly like) Python's any() than Python's or, which only takes 2 expressions. Since the normal IF returns a value already, there's no point in having a separate kind of "if" that can't be used like this, or a separate kind of "or" that only takes two values. It's already as flexible as the less common variant in other languages.)
I happen to be writing code like this right now, coincidentally, where some of the functions are "go ask some server for an answer", and I want to stop as soon as I get a positive response. I'd never use OR where I really want to say IF, but I'd rather say:
(setq did-we-pass (or (try-this x)
(try-that x)
(try-some-other-thing x)
(heck-maybe-this-will-work x)))
than make a big tree of IFs. Does that qualify as "flow control" or "functional"? I guess it depends on your definitions.
It may be considered "functional" in the sense of style of programming that is/was preferred in functional language. There is nothing functional in it otherwise.
It's just syntax.
It may be sometimes more readable to use or, for example:
def foo(bar=None):
    bar = bar or []
    ...
    return bar
def baz(elems):
    print "You have %s elements." % (len(elems) or "no")
You could use bar if bar else [], but it's quite elaborate.
