Increment operator not working in Python code [duplicate]

How do I use pre-increment/decrement operators (++, --), just like in C++?
Why does ++count run, but not change the value of the variable?

++ is not an operator. It is two + operators. The + operator is the identity operator, which does nothing. (Clarification: the + and - unary operators only work on numbers, but I presume that you wouldn't expect a hypothetical ++ operator to work on strings.)
++count
Parses as
+(+count)
Which translates to
count
You have to use the slightly longer += operator to do what you want to do:
count += 1
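If you want to see that parse for yourself, the ast module makes it visible (a small illustrative sketch added here, not part of the original answer):
import ast

# "++count" parses as two nested unary + operations applied to the name count.
tree = ast.parse("++count", mode="eval")
print(ast.dump(tree.body))
# roughly: UnaryOp(op=UAdd(), operand=UnaryOp(op=UAdd(), operand=Name(id='count')))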
I suspect the ++ and -- operators were left out for consistency and simplicity. I don't know the exact argument Guido van Rossum gave for the decision, but I can imagine a few arguments:
Simpler parsing. Technically, parsing ++count is ambiguous, as it could be +, +, count (two unary + operators) just as easily as it could be ++, count (one unary ++ operator). It's not a significant syntactic ambiguity, but it does exist.
Simpler language. ++ is nothing more than a synonym for += 1. It was a shorthand invented because C compilers were stupid and didn't know how to optimize a += 1 into the inc instruction most computers have. In this day of optimizing compilers and bytecode interpreted languages, adding operators to a language to allow programmers to optimize their code is usually frowned upon, especially in a language like Python that is designed to be consistent and readable.
Confusing side-effects. One common newbie error in languages with ++ operators is mixing up the differences (both in precedence and in return value) between the pre- and post-increment/decrement operators, and Python likes to eliminate language "gotchas". The precedence issues of pre-/post-increment in C are pretty hairy, and incredibly easy to mess up.

Python does not have pre and post increment operators.
In Python, integers are immutable. That is, you can't change them. This is because the same integer object can be shared by several names. Try this:
>>> b = 5
>>> a = 5
>>> id(a)
162334512
>>> id(b)
162334512
>>> a is b
True
a and b above are actually the same object. If you incremented a, you would also increment b. That's not what you want. So you have to reassign. Like this:
b = b + 1
Many C programmers who used Python wanted an increment operator, but that operator would look like it incremented the object, while it actually reassigns it. Therefore the -= and += operators were added, to be shorter than b = b + 1 while being clearer and more flexible than b++, so most people will increment with:
b += 1
Which will reassign b to b+1. That is not an increment operator, because it does not increment b, it reassigns it.
In short: Python behaves differently here, because it is not C, and is not a low level wrapper around machine code, but a high-level dynamic language, where increments don't make sense, and also are not as necessary as in C, where you use them every time you have a loop, for example.

While the other answers are correct insofar as they show what a mere + usually does (namely, leave the number as it is, if it is one), they are incomplete insofar as they don't explain what happens.
To be exact, +x evaluates to x.__pos__() and ++x to x.__pos__().__pos__().
I could imagine a VERY weird class structure (Children, don't do this at home!) like this:
class ValueKeeper(object):
    def __init__(self, value): self.value = value
    def __str__(self): return str(self.value)

class A(ValueKeeper):
    def __pos__(self):
        print 'called A.__pos__'
        return B(self.value - 3)

class B(ValueKeeper):
    def __pos__(self):
        print 'called B.__pos__'
        return A(self.value + 19)

x = A(430)
print x, type(x)
print +x, type(+x)
print ++x, type(++x)
print +++x, type(+++x)

TL;DR
Python does not have unary increment/decrement operators (--/++). Instead, to increment a value, use
a += 1
More detail and gotchas
But be careful here. If you're coming from C, even this is different in Python. Python doesn't have "variables" in the sense that C does; instead, Python uses names and objects, and in Python ints are immutable.
So let's say you do
a = 1
What this means in python is: create an object of type int having value 1 and bind the name a to it. The object is an instance of int having value 1, and the name a refers to it. The name a and the object to which it refers are distinct.
Now let's say you do
a += 1
Since ints are immutable, what happens here is as follows:
look up the object that a refers to (it is an int with id 0x559239eeb380)
look up the value of object 0x559239eeb380 (it is 1)
add 1 to that value (1 + 1 = 2)
create a new int object with value 2 (it has object id 0x559239eeb3a0)
rebind the name a to this new object
Now a refers to object 0x559239eeb3a0 and the original object (0x559239eeb380) is no longer referred to by the name a. If there aren't any other names referring to the original object, it will be garbage collected later.
Give it a try yourself:
a = 1
print(hex(id(a)))
a += 1
print(hex(id(a)))

In Python 3.8+ you can do:
(a:=a+1)     # same as ++a (increment, then return the new value)
(a:=a+1)-1   # same as a++ (increment, but evaluate to the old value) (rarely useful)
You can do a lot of things with this.
>>> a = 0
>>> while (a:=a+1) < 5:
...     print(a)
1
2
3
4
Or if you want to write something with more sophisticated syntax (the goal is not optimization):
>>> del a
>>> while (a := (a if 'a' in locals() else 0) + 1) < 5:
...     print(a)
1
2
3
4
This works without raising an error even if a doesn't exist yet: the inner expression falls back to 0, and the walrus assignment then sets a to 1 on the first pass.

Python does not have these operators, but if you really need them you can write a function having the same functionality.
def PreIncrement(name, local={}):
    # Equivalent to ++name
    if name in local:
        local[name] += 1
        return local[name]
    globals()[name] += 1
    return globals()[name]

def PostIncrement(name, local={}):
    # Equivalent to name++
    if name in local:
        local[name] += 1
        return local[name] - 1
    globals()[name] += 1
    return globals()[name] - 1
Usage:
x = 1
y = PreIncrement('x') #y and x are both 2
a = 1
b = PostIncrement('a') #b is 1 and a is 2
Inside a function you have to pass locals() as the second argument if you want to change a local variable; otherwise it will try to change the global one.
x = 1
def test():
    x = 10
    y = PreIncrement('x')            # y will be 2, local x will be still 10 and global x will be changed to 2
    z = PreIncrement('x', locals())  # z will be 11, local x will be 11 and global x will be unaltered
test()
Also with these functions you can do:
x = 1
print(PreIncrement('x')) #print(x+=1) is illegal!
But in my opinion the following approach is much clearer:
x = 1
x+=1
print(x)
Decrement operators:
def PreDecrement(name, local={}):
    # Equivalent to --name
    if name in local:
        local[name] -= 1
        return local[name]
    globals()[name] -= 1
    return globals()[name]

def PostDecrement(name, local={}):
    # Equivalent to name--
    if name in local:
        local[name] -= 1
        return local[name] + 1
    globals()[name] -= 1
    return globals()[name] + 1
I used these functions in my module translating JavaScript to Python.

In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby. (Wikipedia)
So by introducing such operators, you would break the expression/statement split.
For the same reason you can't write
if x = 0:
y = 1
as you can in some other languages where such distinction is not preserved.
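To make the expression/statement split concrete, here is a small sketch of mine (not from the original answer): plain assignment is a statement and is rejected wherever a value is required, while the assignment expression (:=) added in Python 3.8 was deliberately designed as an expression.
# if x = 0:           # SyntaxError: assignment is a statement, not an expression
#     y = 1

x = 0
if (x := x + 1) == 1:  # Python 3.8+: := is an expression, so this is allowed
    y = 1
print(x, y)            # 1 1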

Yeah, I missed ++ and -- functionality as well. A few million lines of c code engrained that kind of thinking in my old head, and rather than fight it... Here's a class I cobbled up that implements:
pre- and post-increment, pre- and post-decrement, addition,
subtraction, multiplication, division, results assignable
as integer, printable, settable.
Here 'tis:
class counter(object):
    def __init__(self, v=0):
        self.set(v)
    def preinc(self):
        self.v += 1
        return self.v
    def predec(self):
        self.v -= 1
        return self.v
    def postinc(self):
        self.v += 1
        return self.v - 1
    def postdec(self):
        self.v -= 1
        return self.v + 1
    def __add__(self, addend):
        return self.v + addend
    def __sub__(self, subtrahend):
        return self.v - subtrahend
    def __mul__(self, multiplier):
        return self.v * multiplier
    def __div__(self, divisor):
        return self.v / divisor
    def __getitem__(self):
        return self.v
    def __str__(self):
        return str(self.v)
    def set(self, v):
        if type(v) != int:
            v = 0
        self.v = v
You might use it like this:
c = counter() # defaults to zero
for listItem in myList:                  # imaginary task
    doSomething(c.postinc(), listItem)   # passes c, but becomes c+1
...already having c, you could do this...
c.set(11)
while c.predec() > 0:
    print c
....or just...
d = counter(11)
while d.predec() > 0:
    print d
...and for (re-)assignment into integer...
c = counter(100)
d = c + 223 # assignment as integer
c = c + 223 # re-assignment as integer
print type(c),c # <type 'int'> 323
...while this will maintain c as type counter:
c = counter(100)
c.set(c + 223)
print type(c),c # <class '__main__.counter'> 323
EDIT:
And then there's this bit of unexpected (and thoroughly unwanted) behavior,
c = counter(42)
s = '%s: %d' % ('Expecting 42',c) # but getting non-numeric exception
print s
...because inside that tuple, __getitem__() isn't what gets used; instead, a reference to the object is passed to the formatting function. Sigh. So:
c = counter(42)
s = '%s: %d' % ('Expecting 42',c.v) # and getting 42.
print s
...or, more verbosely and explicitly, what we actually wanted to happen, although the verbosity argues against this form (just use c.v instead)...
c = counter(42)
s = '%s: %d' % ('Expecting 42',c.__getitem__()) # and getting 42.
print s
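One possible extension, not part of the original answer: giving the class an __int__ method lets you convert explicitly with int(c), which sidesteps the formatting surprise above. A hedged sketch (the subclass name counter2 is made up here):
class counter2(counter):       # hypothetical subclass, not in the original answer
    def __int__(self):         # explicit conversion hook
        return self.v

c = counter2(42)
s = '%s: %d' % ('Expecting 42', int(c))   # int(c) hands %d a plain 42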

There are no post/pre increment/decrement operators in Python as there are in languages like C.
We can see ++ or -- as multiple signs getting multiplied, just as in maths: (-1) * (-1) = (+1).
E.g.
---count
Parses as
-(-(-count))
Which translates to
-(+count)
Because, multiplication of - sign with - sign is +
And finally,
-count
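A quick interactive check of the above (a sketch I added, not from the original answer); the value of count itself never changes:
>>> count = 5
>>> ---count
-5
>>> --count
5
>>> count
5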

A straightforward workaround
c = 0
c = (lambda c_plusplus: c_plusplus + 1)(c)
print(c)
1
No more typing
c = c + 1
Also, you could just write
c++
and finish all your code and then do search/replace for "c++", replace with "c=c+1". Just make sure regular expression search is off.

Extending Henry's answer, I experimentally implemented a syntax sugar library realizing a++: hdytto.
The usage is simple. After installing from PyPI, place sitecustomize.py:
from hdytto import register_hdytto
register_hdytto()
in your project directory. Then, make main.py:
# coding: hdytto
a = 5
print(a++)
print(++a)
b = 10 - --a
print(b--)
and run it by PYTHONPATH=. python main.py. The output will be
5
7
4
hdytto replaces a++ as ((a:=a+1)-1) when decoding the script file, so it works.

Related


What are the lesser-known but useful features of the Python programming language?
Try to limit answers to Python core.
One feature per answer.
Give an example and short description of the feature, not just a link to documentation.
Label the feature using a title as the first line.
Quick links to answers:
Argument Unpacking
Braces
Chaining Comparison Operators
Decorators
Default Argument Gotchas / Dangers of Mutable Default arguments
Descriptors
Dictionary default .get value
Docstring Tests
Ellipsis Slicing Syntax
Enumeration
For/else
Function as iter() argument
Generator expressions
import this
In Place Value Swapping
List stepping
__missing__ items
Multi-line Regex
Named string formatting
Nested list/generator comprehensions
New types at runtime
.pth files
ROT13 Encoding
Regex Debugging
Sending to Generators
Tab Completion in Interactive Interpreter
Ternary Expression
try/except/else
Unpacking+print() function
with statement
Chaining comparison operators:
>>> x = 5
>>> 1 < x < 10
True
>>> 10 < x < 20
False
>>> x < 10 < x*10 < 100
True
>>> 10 > x <= 9
True
>>> 5 == x > 4
True
In case you're thinking it's doing 1 < x, which comes out as True, and then comparing True < 10, which is also True, then no, that's really not what happens (see the last example). It really translates into 1 < x and x < 10, and x < 10 and 10 < x*10 and x*10 < 100, but with less typing, and each term is only evaluated once.
Get the Python regex parse tree to debug your regex.
Regular expressions are a great feature of Python, but debugging them can be a pain, and it's all too easy to get a regex wrong.
Fortunately, Python can print the regex parse tree, by passing the undocumented, experimental, hidden flag re.DEBUG (actually, 128) to re.compile.
>>> re.compile("^\[font(?:=(?P<size>[-+][0-9]{1,2}))?\](.*?)[/font]",
re.DEBUG)
at at_beginning
literal 91
literal 102
literal 111
literal 110
literal 116
max_repeat 0 1
  subpattern None
    literal 61
    subpattern 1
      in
        literal 45
        literal 43
      max_repeat 1 2
        in
          range (48, 57)
literal 93
subpattern 2
  min_repeat 0 65535
    any None
in
  literal 47
  literal 102
  literal 111
  literal 110
  literal 116
Once you understand the syntax, you can spot your errors. There we can see that I forgot to escape the [] in [/font].
Of course you can combine it with whatever flags you want, like commented regexes:
>>> re.compile("""
^ # start of a line
\[font # the font tag
(?:=(?P<size> # optional [font=+size]
[-+][0-9]{1,2} # size specification
))?
\] # end of tag
(.*?) # text between the tags
\[/font\] # end of the tag
""", re.DEBUG|re.VERBOSE|re.DOTALL)
enumerate
Wrap an iterable with enumerate and it will yield the item along with its index.
For example:
>>> a = ['a', 'b', 'c', 'd', 'e']
>>> for index, item in enumerate(a): print index, item
...
0 a
1 b
2 c
3 d
4 e
>>>
References:
Python tutorial—looping techniques
Python docs—built-in functions—enumerate
PEP 279
Creating generators objects
If you write
x=(n for n in foo if bar(n))
you can get out the generator and assign it to x. Now it means you can do
for n in x:
The advantage of this is that you don't need intermediate storage, which you would need if you did
x = [n for n in foo if bar(n)]
In some cases this can lead to significant speed up.
You can append if clauses and additional for clauses to the end of the generator, basically replicating nested for loops with filters (a small sketch with an if clause follows the example below):
>>> n = ((a,b) for a in range(0,2) for b in range(4,6))
>>> for i in n:
... print i
(0, 4)
(0, 5)
(1, 4)
(1, 5)
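And a small filtering sketch of the if clauses mentioned above (my own example, not the original answer's):
>>> pairs = ((a, b) for a in range(3) for b in range(3) if a != b)
>>> list(pairs)
[(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)]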
iter() can take a callable argument
For instance:
def seek_next_line(f):
    for c in iter(lambda: f.read(1), '\n'):
        pass
The iter(callable, until_value) function repeatedly calls callable and yields its result until until_value is returned.
Be careful with mutable default arguments
>>> def foo(x=[]):
... x.append(1)
... print x
...
>>> foo()
[1]
>>> foo()
[1, 1]
>>> foo()
[1, 1, 1]
Instead, you should use a sentinel value denoting "not given" and replace with the mutable you'd like as default:
>>> def foo(x=None):
... if x is None:
... x = []
... x.append(1)
... print x
>>> foo()
[1]
>>> foo()
[1]
Sending values into generator functions. For example having this function:
def mygen():
    """Yield 5 until something else is passed back via send()"""
    a = 5
    while True:
        f = (yield a)   # yield a and possibly get f in return
        if f is not None:
            a = f       # store the new value
You can:
>>> g = mygen()
>>> g.next()
5
>>> g.next()
5
>>> g.send(7) #we send this back to the generator
7
>>> g.next() #now it will yield 7 until we send something else
7
If you don't like using whitespace to denote scopes, you can use the C-style {} by issuing:
from __future__ import braces
The step argument in slice operators. For example:
a = [1,2,3,4,5]
>>> a[::2] # iterate over the whole list in 2-increments
[1,3,5]
The special case x[::-1] is a useful idiom for 'x reversed'.
>>> a[::-1]
[5,4,3,2,1]
Decorators
Decorators allow you to wrap a function or method in another function that can add functionality, modify arguments or results, etc. You write decorators one line above the function definition, beginning with an "at" sign (@).
Example shows a print_args decorator that prints the decorated function's arguments before calling it:
>>> def print_args(function):
>>> def wrapper(*args, **kwargs):
>>> print 'Arguments:', args, kwargs
>>> return function(*args, **kwargs)
>>> return wrapper
>>> @print_args
>>> def write(text):
>>> print text
>>> write('foo')
Arguments: ('foo',) {}
foo
The for...else syntax (see http://docs.python.org/ref/for.html )
for i in foo:
    if i == 0:
        break
else:
    print("i was never 0")
The "else" block will be normally executed at the end of the for loop, unless the break is called.
The above code could be emulated as follows:
found = False
for i in foo:
    if i == 0:
        found = True
        break
if not found:
    print("i was never 0")
From 2.5 onwards dicts have a special method __missing__ that is invoked for missing items:
>>> class MyDict(dict):
... def __missing__(self, key):
... self[key] = rv = []
... return rv
...
>>> m = MyDict()
>>> m["foo"].append(1)
>>> m["foo"].append(2)
>>> dict(m)
{'foo': [1, 2]}
There is also a dict subclass in collections called defaultdict that does pretty much the same but calls a function without arguments for not existing items:
>>> from collections import defaultdict
>>> m = defaultdict(list)
>>> m["foo"].append(1)
>>> m["foo"].append(2)
>>> dict(m)
{'foo': [1, 2]}
I recommend converting such dicts to regular dicts before passing them to functions that don't expect such subclasses. A lot of code uses d[a_key] and catches KeyErrors to check if an item exists which would add a new item to the dict.
In-place value swapping
>>> a = 10
>>> b = 5
>>> a, b
(10, 5)
>>> a, b = b, a
>>> a, b
(5, 10)
The right-hand side of the assignment is an expression that creates a new tuple. The left-hand side of the assignment immediately unpacks that (unreferenced) tuple to the names a and b.
After the assignment, the new tuple is unreferenced and marked for garbage collection, and the values bound to a and b have been swapped.
As noted in the Python tutorial section on data structures,
Note that multiple assignment is really just a combination of tuple packing and sequence unpacking.
Readable regular expressions
In Python you can split a regular expression over multiple lines, name your matches and insert comments.
Example verbose syntax (from Dive into Python):
>>> pattern = """
... ^ # beginning of string
... M{0,4} # thousands - 0 to 4 M's
... (CM|CD|D?C{0,3}) # hundreds - 900 (CM), 400 (CD), 0-300 (0 to 3 C's),
... # or 500-800 (D, followed by 0 to 3 C's)
... (XC|XL|L?X{0,3}) # tens - 90 (XC), 40 (XL), 0-30 (0 to 3 X's),
... # or 50-80 (L, followed by 0 to 3 X's)
... (IX|IV|V?I{0,3}) # ones - 9 (IX), 4 (IV), 0-3 (0 to 3 I's),
... # or 5-8 (V, followed by 0 to 3 I's)
... $ # end of string
... """
>>> re.search(pattern, 'M', re.VERBOSE)
Example naming matches (from Regular Expression HOWTO)
>>> p = re.compile(r'(?P<word>\b\w+\b)')
>>> m = p.search( '(((( Lots of punctuation )))' )
>>> m.group('word')
'Lots'
You can also verbosely write a regex without using re.VERBOSE thanks to string literal concatenation.
>>> pattern = (
... "^" # beginning of string
... "M{0,4}" # thousands - 0 to 4 M's
... "(CM|CD|D?C{0,3})" # hundreds - 900 (CM), 400 (CD), 0-300 (0 to 3 C's),
... # or 500-800 (D, followed by 0 to 3 C's)
... "(XC|XL|L?X{0,3})" # tens - 90 (XC), 40 (XL), 0-30 (0 to 3 X's),
... # or 50-80 (L, followed by 0 to 3 X's)
... "(IX|IV|V?I{0,3})" # ones - 9 (IX), 4 (IV), 0-3 (0 to 3 I's),
... # or 5-8 (V, followed by 0 to 3 I's)
... "$" # end of string
... )
>>> print pattern
"^M{0,4}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})$"
Function argument unpacking
You can unpack a list or a dictionary as function arguments using * and **.
For example:
def draw_point(x, y):
    # do some magic
    pass
point_foo = (3, 4)
point_bar = {'y': 3, 'x': 2}
draw_point(*point_foo)
draw_point(**point_bar)
Very useful shortcut since lists, tuples and dicts are widely used as containers.
ROT13 is a valid encoding for source code, when you use the right coding declaration at the top of the code file:
#!/usr/bin/env python
# -*- coding: rot13 -*-
cevag "Uryyb fgnpxbiresybj!".rapbqr("rot13")
Creating new types in a fully dynamic manner
>>> NewType = type("NewType", (object,), {"x": "hello"})
>>> n = NewType()
>>> n.x
"hello"
which is exactly the same as
>>> class NewType(object):
>>> x = "hello"
>>> n = NewType()
>>> n.x
"hello"
Probably not the most useful thing, but nice to know.
Edit: Fixed name of new type, should be NewType to be the exact same thing as with class statement.
Edit: Adjusted the title to more accurately describe the feature.
Context managers and the "with" Statement
Introduced in PEP 343, a context manager is an object that acts as a run-time context for a suite of statements.
Since the feature makes use of new keywords, it is introduced gradually: it is available in Python 2.5 via the __future__ directive. Python 2.6 and above (including Python 3) has it available by default.
I have used the "with" statement a lot because I think it's a very useful construct, here is a quick demo:
from __future__ import with_statement

with open('foo.txt', 'w') as f:
    f.write('hello!')
What's happening here behind the scenes, is that the "with" statement calls the special __enter__ and __exit__ methods on the file object. Exception details are also passed to __exit__ if any exception was raised from the with statement body, allowing for exception handling to happen there.
What this does for you in this particular case is that it guarantees that the file is closed when execution falls out of scope of the with suite, regardless if that occurs normally or whether an exception was thrown. It is basically a way of abstracting away common exception-handling code.
Other common use cases for this include locking with threads and database transactions.
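For instance, a lock is itself a context manager, so the acquire/release pairing gets the same guarantee (a short sketch of mine, not from the original answer):
import threading

lock = threading.Lock()
counter = 0

def bump():
    global counter
    with lock:          # __enter__ acquires the lock, __exit__ releases it
        counter += 1    # even if this line raised, the lock would still be released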
Dictionaries have a get() method
Dictionaries have a 'get()' method. If you do d['key'] and key isn't there, you get an exception. If you do d.get('key'), you get back None if 'key' isn't there. You can add a second argument to get that item back instead of None, eg: d.get('key', 0).
It's great for things like adding up numbers:
sum[value] = sum.get(value, 0) + 1
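For example, a tiny word-count sketch (added here for illustration):
counts = {}
for word in "the quick brown fox jumps over the lazy fox".split():
    counts[word] = counts.get(word, 0) + 1
print(counts['fox'])   # 2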
Descriptors
They're the magic behind a whole bunch of core Python features.
When you use dotted access to look up a member (eg, x.y), Python first looks for the member in the instance dictionary. If it's not found, it looks for it in the class dictionary. If it finds it in the class dictionary, and the object implements the descriptor protocol, instead of just returning it, Python executes it. A descriptor is any class that implements the __get__, __set__, or __delete__ methods.
Here's how you'd implement your own (read-only) version of property using descriptors:
class Property(object):
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, type):
        if obj is None:
            return self
        return self.fget(obj)
and you'd use it just like the built-in property():
class MyClass(object):
    @Property
    def foo(self):
        return "Foo!"
Descriptors are used in Python to implement properties, bound methods, static methods, class methods and slots, amongst other things. Understanding them makes it easy to see why a lot of things that previously looked like Python 'quirks' are the way they are.
Raymond Hettinger has an excellent tutorial that does a much better job of describing them than I do.
Conditional Assignment
x = 3 if (y == 1) else 2
It does exactly what it sounds like: "assign 3 to x if y is 1, otherwise assign 2 to x". Note that the parens are not necessary, but I like them for readability. You can also chain it if you have something more complicated:
x = 3 if (y == 1) else 2 if (y == -1) else 1
Though at a certain point, it goes a little too far.
Note that you can use if ... else in any expression. For example:
(func1 if y == 1 else func2)(arg1, arg2)
Here func1 will be called if y is 1 and func2, otherwise. In both cases the corresponding function will be called with arguments arg1 and arg2.
Analogously, the following is also valid:
x = (class1 if y == 1 else class2)(arg1, arg2)
where class1 and class2 are two classes.
Doctest: documentation and unit-testing at the same time.
Example extracted from the Python documentation:
def factorial(n):
    """Return the factorial of n, an exact integer >= 0.

    If the result is small enough to fit in an int, return an int.
    Else return a long.

    >>> [factorial(n) for n in range(6)]
    [1, 1, 2, 6, 24, 120]
    >>> factorial(-1)
    Traceback (most recent call last):
        ...
    ValueError: n must be >= 0

    Factorials of floats are OK, but the float must be an exact integer:
    """
    import math
    if not n >= 0:
        raise ValueError("n must be >= 0")
    if math.floor(n) != n:
        raise ValueError("n must be exact integer")
    if n+1 == n:  # catch a value like 1e300
        raise OverflowError("n too large")
    result = 1
    factor = 2
    while factor <= n:
        result *= factor
        factor += 1
    return result

def _test():
    import doctest
    doctest.testmod()

if __name__ == "__main__":
    _test()
Named formatting
% -formatting takes a dictionary (also applies %i/%s etc. validation).
>>> print "The %(foo)s is %(bar)i." % {'foo': 'answer', 'bar':42}
The answer is 42.
>>> foo, bar = 'question', 123
>>> print "The %(foo)s is %(bar)i." % locals()
The question is 123.
And since locals() is also a dictionary, you can simply pass that as a dict and have %-substitutions from your local variables. I think this is frowned upon, but it simplifies things.
New Style Formatting
>>> print("The {foo} is {bar}".format(foo='answer', bar=42))
To add more python modules (especially 3rd party ones), most people seem to use PYTHONPATH environment variables or they add symlinks or directories in their site-packages directories. Another way is to use *.pth files. Here's the official python doc's explanation:
"The most convenient way [to modify python's search path] is to add a path configuration file to a directory that's already on Python's path, usually to the .../site-packages/ directory. Path configuration files have an extension of .pth, and each line must contain a single path that will be appended to sys.path. (Because the new paths are appended to sys.path, modules in the added directories will not override standard modules. This means you can't use this mechanism for installing fixed versions of standard modules.)"
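As a concrete sketch (the filename and paths below are made up for illustration), a .pth file is plain text with one directory per line; lines starting with # are skipped:
# extra_paths.pth, placed in site-packages
/home/me/my_python_libs
/opt/thirdparty/lib/python
After restarting Python, those directories show up at the end of sys.path (check with: import sys; print(sys.path)).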
Exception else clause:
try:
    put_4000000000_volts_through_it(parrot)
except Voom:
    print "'E's pining!"
else:
    print "This parrot is no more!"
finally:
    end_sketch()
The use of the else clause is better than adding additional code to the try clause because it avoids accidentally catching an exception that wasn’t raised by the code being protected by the try ... except statement.
See http://docs.python.org/tut/node10.html
Re-raising exceptions:
# Python 2 syntax
try:
    some_operation()
except SomeError, e:
    if is_fatal(e):
        raise
    handle_nonfatal(e)

# Python 3 syntax
try:
    some_operation()
except SomeError as e:
    if is_fatal(e):
        raise
    handle_nonfatal(e)
The 'raise' statement with no arguments inside an error handler tells Python to re-raise the exception with the original traceback intact, allowing you to say "oh, sorry, sorry, I didn't mean to catch that, sorry, sorry."
If you wish to print, store or fiddle with the original traceback, you can get it with sys.exc_info(), and printing it like Python would is done with the 'traceback' module.
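A short sketch of that (mine, not the original answer's), using only standard traceback calls:
import sys
import traceback

try:
    1 / 0
except ZeroDivisionError:
    exc_type, exc_value, tb = sys.exc_info()   # the original exception info
    traceback.print_exc()                      # print it the way Python would
    text = ''.join(traceback.format_exception(exc_type, exc_value, tb))
    # 'text' now holds the same traceback as a string, for logging etc.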
Main messages :)
import this
# btw look at this module's source :)
Deciphered:
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than right now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
Interactive Interpreter Tab Completion
try:
    import readline
except ImportError:
    print "Unable to load readline module."
else:
    import rlcompleter
    readline.parse_and_bind("tab: complete")
>>> class myclass:
... def function(self):
... print "my function"
...
>>> class_instance = myclass()
>>> class_instance.<TAB>
class_instance.__class__ class_instance.__module__
class_instance.__doc__ class_instance.function
>>> class_instance.f<TAB>unction()
You will also have to set a PYTHONSTARTUP environment variable.
Nested list comprehensions and generator expressions:
[(i,j) for i in range(3) for j in range(i) ]
((i,j) for i in range(4) for j in range(i) )
These can replace huge chunks of nested-loop code.
Operator overloading for the set builtin:
>>> a = set([1,2,3,4])
>>> b = set([3,4,5,6])
>>> a | b # Union
{1, 2, 3, 4, 5, 6}
>>> a & b # Intersection
{3, 4}
>>> a < b # Subset
False
>>> a - b # Difference
{1, 2}
>>> a ^ b # Symmetric Difference
{1, 2, 5, 6}
More detail from the standard library reference: Set Types

Is it possible to return a value while still modifying it?

Well, honestly, there's no easy way for me to phrase the title of this question in layman's terms; basically, is there a way to modify a value within a function's return statement while also returning it on the same line?
Example - custom implementation of an iterable class
I’d like to replace this:
def __next__(self):
    if self.count <= 0:
        raise StopIteration
    r = self.count
    self.count -= 1
    return r
With this:
def __next__(self):
    if self.count <= 0:
        raise StopIteration
    return self.count -= 1
Honestly, I know this may seem frivolous (which it may be), but I'm only asking because I'm a fan of one-liners, and this boils down to making even a process as simple as this more logically readable; plus, depending on the implementation, it would remove the need to hold the value r in memory (I know, I know, removing r has no significant gain, but hey, I'm only asking if this is possible).
I know I've only given one example, but this happens to be the only case I can think of where something like this would be needed. Python is a wonderful language full of many special things, like += being a "wrapper" of __iadd__. My question is: am I missing something, or is this possible? And why must it be used as a standalone statement rather than in conjunction with a return statement, given that it doesn't return the altered value?
It's because -= is an (augmented assignment) statement, not an expression: it modifies the variable, and trying to use it anywhere a value is expected (like in a return, or inside a larger expression) raises a SyntaxError.
Demo:
>>> a=3
>>> a+(a+=1)
SyntaxError: invalid syntax
>>> # also to show that it does modify the variable:
>>> a=3
>>> a+=1
>>> a
4
>>>
Update:
You can do a two-liner:
def f(a):
    if a <= 0: raise StopIteration
    a -= 1; return a
But something like
return foobar -= 1
or
>>> a = 3
>>> b = (a += 1)
  File "<stdin>", line 1
    b = (a += 1)
          ^
SyntaxError: invalid syntax
is not possible in Python.
Although the first solution needs to store one more variable (or do one more operation), to cite the Zen of Python: Readability counts.
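A later aside, not part of the original answers: since Python 3.8 an assignment expression (:=) can decrement and evaluate a plain variable in one expression, though it only accepts bare names, so it still won't help with an attribute like self.count.
count = 5
remaining = (count := count - 1)   # count is now 4 and the expression's value is 4
print(count, remaining)            # 4 4

# Only bare names are allowed as := targets; on an attribute it's a syntax error:
# return (self.count := self.count - 1)   # SyntaxError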

Function that returns an accumulator in Python

I am reading Hackers and Painters and am confused by a problem mentioned by the author to illustrate the power of different programming languages.
The problem is:
We want to write a function that generates accumulators—a function that takes a number n, and returns a function that takes another number i and returns n incremented by i. (That’s incremented by, not plus. An accumulator has to accumulate.)
The author mentions several solutions with different programming languages. For example, Common Lisp:
(defun foo (n)
(lambda (i) (incf n i)))
and JavaScript:
function foo(n) { return function (i) { return n += i } }
However, when it comes to Python, the following codes do not work:
def foo(n):
    s = n
    def bar(i):
        s += i
        return s
    return bar
f = foo(0)
f(1) # UnboundLocalError: local variable 's' referenced before assignment
A simple modification will make it work:
def foo(n):
    s = [n]
    def bar(i):
        s[0] += i
        return s[0]
    return bar
I am new to Python. Why doesn't the first solution work while the second one does? The author mentions lexical variables, but I still don't get it.
s += i is just sugar for s = s + i.*
This means you assign a new value to the variable s (instead of mutating it in place). When you assign to a variable, Python assumes it is local to the function. However, before assigning it needs to evaluate s + i, but s is local and still unassigned -> Error.
In the second case s[0] += i you never assign to s directly, but only ever access an item from s. So Python can clearly see that it is not a local variable and goes looking for it in the outer scope.
Finally, a nicer alternative (in Python 3) is to explicitly tell it that s is not a local variable:
def foo(n):
    s = n
    def bar(i):
        nonlocal s
        s += i
        return s
    return bar
(There is actually no need for s - you could simply use n instead inside bar.)
*The situation is slightly more complex, but the important issue is that computation and assignment are performed in two separate steps.
An infinite generator is one implementation. You can call __next__ on a generator instance to extract successive results iteratively.
def incrementer(n, i):
    while True:
        n += i
        yield n
g = incrementer(2, 5)
print(g.__next__()) # 7
print(g.__next__()) # 12
print(g.__next__()) # 17
If you need a flexible incrementer, one possibility is an object-oriented approach:
class Inc(object):
    def __init__(self, n=0):
        self.n = n
    def incrementer(self, i):
        self.n += i
        return self.n
g = Inc(2)
g.incrementer(5) # 7
g.incrementer(3) # 10
g.incrementer(7) # 17
In Python, arguments are passed by object reference, and what matters is whether the object is mutable. When you pass an int and rebind it inside the function (s += i rebinds the name), the change is not reflected in the caller, because ints are immutable and rebinding only affects the local name.
But when you use a list, the changes you make to the list's contents inside the function happen in place on the same list object, so they are visible outside the function as well.
And this is the reason the second option works and the first one doesn't.
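A compact demonstration of that difference (a sketch added here, not part of the original answer):
def rebind(n):
    n += 1          # rebinds the local name n only; the caller's int is untouched

def mutate(lst):
    lst[0] += 1     # mutates the shared list object in place

x = 1
rebind(x)
print(x)            # 1

y = [1]
mutate(y)
print(y[0])         # 2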

Overload python ternary operator

Is it possible to overload the ternary operator in python? Basically what I want is something like:
class A(object):
    def __ternary__(self, a, c):
        return a + c
a = A()
print "asdf" if a else "fdsa" # prints "asdffdsa"
I'm trying to implement a symbolic package and basically want something that can do things like:
sym = Symbol("s")
result = 1 if sym < 3 else 10
print result.evaluate(sym=2) # prints 1
print result.evaluate(sym=4) # prints 10
Edit: Let me put out a bit more complex example to show how this could be layered upon.
sym = Symbol("s")
result = 1 if sym < 3 else 10
...
something_else = (result+1)*3.5
...
my_other_thing = sqrt(something_else)
print my_other_thing.evaluate(sym=2) # prints sqrt(7) or rather the decimal equivalent
The point is, I don't need to just be able to late evaluate the one ternary operator, I need to take the result and do other symbolic stuff with that before finally evaluating. Furthermore, my code can do partial evaluations where I give it a few bindings and it returns another symbolic expression if it can't evaluate the full expression.
My backup plan is just to directly use the ternary class taking 3 expressions objects that I would need to make anyway. I was just trying to hide the generation of this class with an operator overload. Basically:
a = TernaryOperator(a,b,c)
# vs
b = a if b else c
look at the sympy module; it already does this
for simple comparison, write A.__eq__ and A.__lt__ methods and use the total_ordering class decorator; this should be sufficient for comparing two As or an A and a constant
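For reference, a hedged sketch of the sympy route mentioned above (check the exact API against current sympy); Piecewise plays the role of the delayed ternary:
import sympy

s = sympy.Symbol('s')
result = sympy.Piecewise((1, s < 3), (10, True))   # 1 if s < 3 else 10, kept symbolic

print(result.subs(s, 2))   # 1
print(result.subs(s, 4))   # 10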
write it as a lambda,
result = lambda sym: 1 if sym < 3 else 10
print(result(2)) # => 1
print(result(4)) # => 10
Overload the comparison operators instead (something you probably needed to do anyway):
class A(object):
    def __lt__(self, other):
        return self.value() < other.value()  # Replace with your own implementation of <
Then, use lambda functions to achieve the delayed evaluation you desire:
sym = Symbol("s")
result = lambda s: 1 if s < 3 else 10
sym.set(2)
print result(sym) # prints 1
sym.set(4)
print result(sym) # prints 10
(I don't think you can overload the assignment operator, as it doesn't actually perform an operation on any object, but rather on a variable.)

Python functions within lists

So today in computer science I asked about using a function as a variable. For example, I can create a function such as returnMe(i) and make an array that will be used to call it, like h = [help, returnMe], and then I can say h[1]("Bob") and it would call returnMe("Bob"). Sorry, I was a little excited about this. My question is: is there a way of calling something like h.append(def function) and defining a function that only exists in the array?
EDIT:
Here is some code that I wrote with this!
So I just finished an awesome FizzBuzz with this solution. Thank you so much again! Here's that code as an example:
funct = []
s = ""

def newFunct(str, num):
    return (lambda x: str if (x % num == 0) else "")

funct.append(newFunct("Fizz", 3))
funct.append(newFunct("Buzz", 5))

for x in range(1, 101):
    for oper in funct:
        s += oper(x)
    s += ":" + str(x) + "\n"

print s
You can create anonymous functions using the lambda keyword.
def func(x, keyword='bar'):
    return (x, keyword)
is roughly equivalent to:
func = lambda x,keyword='bar':(x,keyword)
So, if you want to create a list with functions in it:
my_list = [lambda x:x**2,lambda x:x**3]
print my_list[0](2) #4
print my_list[1](2) #8
Not really in Python. As mgilson shows, you can do this with trivial functions, but they can only contain expressions, not statements, so are very limited (you can't assign to a variable, for example).
This is of course supported in other languages: in Javascript, for example, creating substantial anonymous functions and passing them around is a very idiomatic thing to do.
You can create the functions in the original scope, assign them to the array and then delete them from their original scope. Thus, you can indeed call them from the array but not as a local variable. I am not sure if this meets your requirements.
#! /usr/bin/python3.2
def a (x): print (x * 2)
def b (x): print (x ** 2)
l = [a, b]
del a
del b
l [0] (3) #works
l [1] (3) #works
a (3) #fails epicly
You can create a list of lambda functions to increment by every number from 0 to 9 like so:
increment = [(lambda arg: (lambda x: arg + x))(i) for i in range(10)]
increment[0](1) #returns 1
increment[9](10) #returns 19
Side Note:
I think it's also important to note that this (function pointers, not lambdas) is somewhat like how Python holds methods in most classes, except instead of a list, it's a dictionary with function names pointing to the functions. In many, but not all, cases instance.func(args) is roughly equivalent to instance.__dict__['func'](args) when the function is stored on the instance, or type(instance).__dict__['func'](instance, args) when it is a method defined on the class.
