In Java, we can write a for loop like this:
for (int i = 0; i < 10; i += 1) {
}
Yet this is not possible in Python; for can only be used in the "foreach" sense, looping over each element of an iterable.
Why were the control structures of the C-style for loop left out of Python?
The Zen of Python says “There should be one-- and preferably only one --obvious way to do it.” Python already has this way to do it:
for i in xrange(10):
    pass
If you want to make all the parts explicit, you can do that too:
for i in xrange(0, 10, 1):
    pass
Since it already has an “obvious” way to do it, adding another way would be un-zen. (Would that be “nez”?)
(Note: In Python 3, use range instead of xrange.)
A C-style for loop has more flexibility, but ultimately you can write an equivalent loop with Python's while (or C's while for that matter), which touches not only on the “one obvious way” principle, but also “Simple is better than complex” amongst others. This is all a matter of taste, of course—in this case, Guido van Rossum's taste.
Since this is one of those questions that goes back to the mists of Python 0.x (in fact, most likely to its predecessor ABC), the only person who can really answer is Guido, and then only if he remembers. You could try asking him to write a post on his History of Python blog.
However, there are some good reasons for this.
First, having only a single form of for instead of two different forms makes the language smaller—simpler to learn and read for humans, and simpler to parse for computers.
Not having the "foreach"-style for loop leads to many off-by-one errors, and makes a lot of simple code much more verbose.
Not having the C-style for loop makes some already-very-complicated for loops slightly more complicated by forcing them to be while loops. It doesn't affect simple loops like yours, which can be written as "foreach" loops, like for i in range(10):—which is briefer, and more readable, and harder to get wrong.*
When looked at that way, the choice is obvious.
* In fact, even complex loops can be turned into foreach loops, via iterators. And sometimes that's a good idea. For example, is for (int i = 2; i < 1000000; i = next_prime(i)) really better than for i in takewhile(lambda i: i < 1000000, generate_primes()):? There's a reason that C++ has gradually added features to make the latter possible, to avoid the mistakes you can easily make with the former…
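To make that concrete, here is a minimal sketch of the iterator version; generate_primes is a hypothetical helper (any prime generator would do), and the naive trial-division implementation is only for illustration:
from itertools import takewhile, count

def generate_primes():
    """Hypothetical helper: yield primes indefinitely (naive trial division)."""
    for n in count(2):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n

# Loop over all primes below one million, "foreach"-style.
for i in takewhile(lambda i: i < 1000000, generate_primes()):
    pass  # do something with each prime i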
Generally, it is better for performance to iterate over the elements directly. You can get the same result with:
for i in xrange(0, 20):
    pass
And you should definitely read the Zen of Python:
http://www.python.org/dev/peps/pep-0020/
Because it's just syntactic sugar for a while loop
i = 0
while i < 10:
    # stuff
    i += 1
Put up your hand if you've never had an off-by-one error in a C-style-for loop. Thought so.
Consider:
somedict = {'foo': [4], 'mer': [2, 9, 0]}
key = 'bar'
value = 5
We could safely append to a list stored in a dictionary in either of the following ways:
Being cautious, we always check whether the list exists before appending to it.
if not somedict.has_key(key):
    somedict[key] = []
somedict[key].append(value)
Being direct, we simply clean up if there is an exception.
try:
    somedict[key].append(value)
except KeyError:
    somedict[key] = [value]
In both cases, the result could be:
{'foo':[4], 'mer':[2, 9, 0], 'bar':[5]}
To restate my question: In simple instances like this, is it better (in terms of style, efficiency, & philosophy) to be cautious or direct?
What you'll find is that your option 1 "being cautious" is often remarkably slow. Also, it's subject to obscure errors because the test you tried to write to "avoid" the exception is incorrect.
What you'll find is that your option 2 "being direct" is often much faster. It's also more likely to be correct, and it's easier for people to read.
Why? Internally, Python often implements things like "contains" or "has_key" as an exception test.
def has_key(self, some_key):
    try:
        self[some_key]
    except KeyError:
        return False
    return True
Since this is typically how a has_key type of method is implemented, there's no reason for your code to waste time doing this check in addition to what Python will already do.
More fundamentally, there's a correctness issue. Many attempts to prevent or avoid an exception are incomplete or incorrect.
For example, trying to establish whether a string is potentially a floating-point number is fraught with numerous exceptions and special cases. About the only way to do it correctly is this:
try:
    x = float(some_string)
except ValueError:
    pass  # not a floating-point value
Just do the algorithm without worrying about "preventing" or "avoiding" exceptions.
In the general case, EAFP ("easier to ask forgiveness than permission") is preferred in Python. Of course the rule of thumb "exceptions should be for exceptional cases" still holds (if you expect an exception to occur frequently, you probably should "look before you leap"), i.e. it depends. Efficiency-wise, it shouldn't make too much of a difference in most cases; when it does, consider that try blocks without exceptions are cheap, whereas conditions are always checked.
Note that in some cases, including this example, neither is necessary (at least you don't have to do it yourself explicitly); here, you should just use collections.defaultdict.
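For instance, a minimal sketch of the same append with defaultdict (this is the standard-library class; no explicit check is needed):
from collections import defaultdict

somedict = defaultdict(list)   # a missing key is created as an empty list on first access
somedict['mer'].append(2)
somedict['bar'].append(5)      # no KeyError and no explicit check needed
# somedict == {'mer': [2], 'bar': [5]}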
You don't need a strong, compelling reason to use exceptions--they're not very expensive in Python. Here are some possible reasons to prefer one or the other, for your particular example:
The exception version requires a simpler API. Any container that supports item lookup and assignment (__getitem__ and __setitem__) will work. The non-exception version additionally requires that has_key be implemented.
The exception version may be slightly faster if the key usually exists, since it only requires a single dict lookup. The has_key version requires at least two--one for has_key and one for the actual lookup.
The non-exception version has a more consistent code path: it always puts the value in the array in the same place. By comparison, the exception version has a separate code path for each case.
Unless performance is particularly important (in which case you'd be benchmarking and profiling), none of these are very strong reasons; just use whichever seems more natural.
try is fast enough, except (if it happens) may not be. If the average length of those lists is going to be 1.1, use the check-first method. If it's going to be in the thousands, use try/except. If you are really worried, benchmark the alternatives.
Ensure that you are benchmarking the best alternatives. d.has_key(k) is a slow old has-been; you don't need the attribute lookup and the function call. Use k in d instead. Also use else to save a wasted append on the first trip:
Instead of:
if not somedict.has_key(key):
    somedict[key] = []
somedict[key].append(value)
do this:
if key in somedict:
    somedict[key].append(value)
else:
    somedict[key] = [value]
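If you do want to benchmark the two approaches as suggested above, a minimal timeit sketch along these lines works (the key distribution is an arbitrary assumption for illustration):
import timeit

setup = "keys = [i % 200 for i in range(1000)]  # 200 first-time misses, 800 hits"

lbyl = """
d = {}
for k in keys:
    if k in d:
        d[k].append(k)
    else:
        d[k] = [k]
"""

eafp = """
d = {}
for k in keys:
    try:
        d[k].append(k)
    except KeyError:
        d[k] = [k]
"""

print timeit.timeit(lbyl, setup, number=1000)
print timeit.timeit(eafp, setup, number=1000)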
You can use setdefault for this specific case:
somedict.setdefault(key, []).append(value)
See here: http://docs.python.org/library/stdtypes.html#mapping-types-dict
It depends. For example, if the key is a parameter of a function that will be used by another programmer, I would use the second approach, because I can't control the input and the exception information is actually useful to a programmer. But if it's just a process inside a function and the key is, say, input from a database, the first approach is better; if something goes wrong there, showing the exception information may not be helpful at all. Use the exception approach if you want to do something with the exception information.
EAFP is a good habit to get into for Python.
One reason is that it avoids a race condition if someone wants to use your code in a multithreaded app.
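As a minimal sketch of that race (the key and thread count are made up; whether it actually bites depends on where the interpreter switches threads):
import threading

somedict = {}

def add(key, value):
    # LBYL: another thread can create or replace somedict[key] between
    # the check and the assignment, so a freshly appended value can be lost.
    if key not in somedict:
        somedict[key] = []
    somedict[key].append(value)

threads = [threading.Thread(target=add, args=('k', i)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With unlucky scheduling, len(somedict['k']) can end up below 100.
# somedict.setdefault('k', []).append(value) does the get-or-create in one
# call, which sidesteps this particular window.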
I try to make my code fool-proof, but I've noticed that it takes a lot of time to type things out and it takes more time to read the code.
Instead of:
class TextServer(object):
    def __init__(self, text_values):
        self.text_values = text_values
        # <more code>
    # <more methods>
I tend to write this:
class TextServer(object):
    def __init__(self, text_values):
        for text_value in text_values:
            assert isinstance(text_value, basestring), u'All text_values should be str or unicode.'
            assert 2 <= len(text_value), u'All text_values should be at least two characters long.'
        self.__text_values = frozenset(text_values)  # <They shouldn't change.>
        # <more code>

    @property
    def text_values(self):
        # <'text_values' shouldn't be replaced.>
        return self.__text_values

    # <more methods>
Is my python coding style too paranoid? Or is there a way to improve readability while keeping it fool-proof?
Note 1: I've added the comments between < and > just for clarification.
Note 2: The main fool I try to prevent to abuse my code is my future self.
Here's some good advice on Python idioms from this page:
Catch errors rather than avoiding them to avoid cluttering your code with special cases. This idiom is called EAFP ('easier to ask forgiveness than permission'), as opposed to LBYL ('look before you leap'). This often makes the code more readable. For example:
Worse:
# check whether int conversion will raise an error
if not isinstance(s, str) or not s.isdigit():
    return None
elif len(s) > 10:  # too many digits for int conversion
    return None
else:
    return int(s)
Better:
try:
    return int(s)
except (TypeError, ValueError, OverflowError):  # int conversion failed
    return None
(Note that in this case, the second version is much better, since it correctly handles leading + and -, and also values between 2 and 10 billion (for 32-bit machines). Don't clutter your code by anticipating all the possible failures: just try it and use appropriate exception handling.)
"Is my python coding style too paranoid? Or is there a way to improve readability while keeping it fool-proof?"
Who's the fool you're protecting yourself from?
You? Are you worried that you didn't remember the API you wrote?
A peer? Are you worried that someone in the next cubicle will actively make an effort to pass the wrong things through an API? You can talk to them to resolve this problem. It saves a lot of code if you provide documentation.
A total sociopath who will download your code, refuse to read the API documentation, and then call all the methods with improper arguments? What possible help can you provide to them?
The "fool-proof" coding isn't really very helpful, since all of these scenarios are more easily addressed another way.
If you're fool-proofing against yourself, perhaps that's not really sensible.
If you're fool-proofing for a co-worker or peer, you should -- perhaps -- talk to them and make sure they understand the API docs.
If you're fool-proofing against some hypothetical sociopathic programmer who's out to subvert the API, there's nothing you can do. It's Python. They have the source. Why would they go through the effort to misuse the API when they can just edit the source to break things?
It's unusual in Python to use a private instance attribute, and then expose it through a property as you have. Just use self.text_values.
Your code is too paranoid (especially when you want to protect only against yourself).
In Python circles, LBYL is generally (but not always) frowned upon. But there's also the (often unstated) assumption that one has (good) unit tests.
Me personally? I think readability is of paramount importance. I mean, if you yourself think it's hard to read, what will others think? And less readable code is also more likely to harbor bugs. Not to mention making working on it harder and more time-consuming (you have to dig to find what the code actually does in all that LBYLing).
If you try to make your code totally foolproof, someone will invent a better fool. Seriously, a good rule of thumb is to guard against likely errors, but don't clutter your code trying to think of every conceivable way a caller could break you.
Instead of spending time on asserts and private variables, I prefer to spend time on documentation and test cases. I prefer to read documentation, and when I have to read code, I prefer to read tests. This becomes more true as the code grows. At the same time, tests give you fool-proof code and useful use cases.
I base the need for error checking code on the consequences of the errors that it's checking for. If crap data gets into my system, how long will it be before I discover it, how hard will it be to determine that the problem was crap data, and how difficult will it be to fix? For cases like the one you've posted, the answers are generally "not long," "not hard," and "not difficult."
But data that's going to be persisted somewhere and then used as the input to a complicated algorithm in six weeks? I'll check the hell out of that.
I don't think this is specific to python. I'm a firm believer in Design by Contract: Ideally all functions should have clear pre- and post-conditions; unfortunately most languages (Eiffel being the canonical exception) don't provide particularly convenient ways to achieve this which contributes to the apparent conflict between clarity and correctness.
As a practical matter, one approach is to write a 'checkValues' method so as to avoid cluttering __init__. You can even compress it to:
def __init__(self, text_values):
    self.text_values = checkValues(text_values)

def checkValues(text_values):
    for text_value in text_values:
        assert isinstance(text_value, basestring), u'All text_values should be str or unicode.'
        assert 2 <= len(text_value), u'All text_values should be at least two characters long.'
    return frozenset(text_values)
Another approach would be using a folding text editor that can hide/show the pre-conditions with the aid of some commenting conventions; this would also be useful for auto-generating documentation.
Take all the energy you're putting into argument checking and channel it instead into writing clear, concise doc-strings.
Unless you're writing code for nuclear reactors; in which case I would appreciate you doing both.
Another way to think is: you only need to catch errors that you can correct. If you're checking the input just for aborting with an AssertionError, you're better off just allowing the code to raise the appropriate exception later so you can debug correctly.
This line in particular is pretty bad since it stops duck-typing:
assert isinstance(text_value, basestring), u'All text_values should be str or unicode.'
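For example (a sketch; UserString is the Python 2 standard-library wrapper class): an object that behaves like a string but is not a str/unicode subclass gets rejected by that assert, even though the rest of the class would handle it fine:
from UserString import UserString  # Python 2 standard library

value = UserString("hello")
print isinstance(value, basestring)  # False, so the assert rejects it...
print len(value) >= 2                # True: ...even though it quacks like a string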
In an answer (by S.Lott) to a question about Python's try...else statement:
Actually, even on an if-statement, the else: can be abused in truly terrible ways creating bugs that are very hard to find. [...]
Think twice about else:. It is generally a problem. Avoid it except in an if-statement and even then consider documenting the else-condition to make it explicit.
Is this a widely held opinion? Is else considered harmful?
Of course you can write confusing code with it but that's true of any other language construct. Even Python's for...else seems to me a very handy thing to have (less so for try...else).
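For reference, a minimal for...else sketch (the values are made up); the else block runs only when the loop finishes without hitting break:
values = [4, 8, 15]
for n in values:
    if n % 2:
        print "found an odd value:", n
        break
else:
    print "no odd value found"  # runs only if the loop never breaks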
S.Lott has obviously seen some bad code out there. Haven't we all? I do not consider else harmful, though I've seen it used to write bad code. In those cases, all the surrounding code has been bad as well, so why blame poor else?
No it is not harmful, it is necessary.
There should always be a catch-all statement. All switches should have a default. All pattern matching in an ML language should have a default.
The argument that it is impossible to reason what is true after a series of if statements is a fact of life. The computer is the biggest finite state machine out there, and it is silly to enumerate every single possibility in every situation.
If you are really afraid that unknown errors go unnoticed in else statements, is it really that hard to raise an exception there?
Saying that else is considered harmful is a bit like saying that variables or classes are harmful. Heck, it's even like saying that goto is harmful. Sure, things can be misused. But at some point, you just have to trust programmers to be adults and be smart enough not to.
What it comes down to is this: if you're willing to not use something because an answer on SO or a blog post or even a famous paper by Dijkstra told you not to, you need to consider if programming is the right profession for you.
I wouldn't say it is harmful, but there are times when the else statement can get you into trouble. For instance, if you need to do some processing based on an input value and there are only two valid input values. Only checking for one could introduce a bug.
eg:
The only valid inputs are 1 and 2:
if (input == 1)
{
    // do processing
    ...
}
else
{
    // do processing
    ...
}
In this case, using the else would allow all values other than 1 to be processed when it should only be for values 1 and 2.
To me, the whole concept of certain popular language constructs being inherently bad is just plain wrong. Even goto has its place. I've seen very readable, maintainable code by the likes of Walter Bright and Linus Torvalds that uses it. It's much better to just teach programmers that readability counts and to use common sense than to arbitrarily declare certain constructs "harmful".
If you write:
if foo:
    # ...
elif bar:
    # ...
# ...
then the reader may be left wondering: what if neither foo nor bar is true? Perhaps you know, from your understanding of the code, that either foo or bar must be true. I would prefer to see:
if foo:
    # ...
else:
    # at this point, we know that bar is true.
    # ...
# ...
or:
if foo:
    # ...
else:
    assert bar
    # ...
# ...
This makes it clear to the reader how you expect control to flow, without requiring the reader to have intimate knowledge of where foo and bar come from.
(in the original case, you could still write a comment explaining what is happening, but I think I would then wonder: "Why not just use an else: clause?")
I think the point is not that you shouldn't use else:; rather, that an else: clause can allow you to write unclear code and you should try to recognise when this happens and add a little comment to help out any readers.
Which is true about most things in programming languages, really :-)
Au contraire... In my opinion, there MUST be an else for every if. Granted, you can do stupid things, but you can abuse any construct if you try hard enough. You know the saying "a real programmer can write FORTRAN in any language".
What I do lots of time is to write the else part as a comment, describing why there's nothing to be done.
Else is most useful when documenting assumptions about the code. It ensures that you have thought through both sides of an if statement.
Always using an else clause with each if statement is even a recommended practice in "Code Complete".
The rationale behind including the else statement (of try...else) in Python in the first place was to only catch the exceptions you really want to. Normally when you have a try...except block, there's some code that might raise an exception, and then there's some more code that should only run if the previous code was successful. Without an else block, you'd have to put all that code in the try block:
try:
    something_that_might_raise_error()
    do_this_only_if_that_was_ok()
except ValueError:
    pass  # whatever
The issue is, what if do_this_only_if_that_was_ok() raises a ValueError? It would get caught by the except statement, when you might not have wanted it to. That's the purpose of the else block:
try:
    something_that_might_raise_error()
except ValueError:
    pass  # whatever
else:
    do_this_only_if_that_was_ok()
I guess it's a matter of opinion to some extent, but I personally think this is a great idea, even though I use it very rarely. When I do use it, it just feels very appropriate (and besides, I think it helps clarify the code flow a bit)
It seems to me that, for any language and any flow-control statement that has a default scenario or side effect, that scenario needs the same level of consideration. The logic of an if, switch, or while is only as good as its condition: if (x), while (x), or for (...). The statement itself is not harmful; the logic in its condition can be.
Therefore, as developers it is our responsibility to code with the wide scope of the else in mind. Too many developers treat it as "if not the above", when in fact it can defy all common sense, because the only logic in it is the negation of the preceding logic, which is often incomplete (an algorithm design error in itself).
I don't consider "else" any more harmful than off-by-ones in a for() loop or bad memory management. It's all about the algorithms. If your automaton is complete in its scope and possible branches, and all are concrete and understood, then there is no danger. The danger is misuse of the logic behind the expressions by people not realizing the impact of wide-scope logic. Computers are stupid; they do what they are told by their operator (in theory).
I do consider try and catch to be dangerous, because it can delegate handling to an unknown quantity of code. Branching above the raise may contain a bug, highlighted by the raise itself. This can be non-obvious. It is like turning a sequential set of instructions into a tree or graph of error handling, where each component is dependent on the branches in the parent. Odd. Mind you, I love C.
There is a so-called "dangling else" problem, encountered in C-family languages, as follows:
if (a==4)
    if (b==2)
        printf("here!");
else
    printf("which one");
This innocent code can be understood in two ways:
if (a==4)
    if (b==2)
        printf("here!");
    else
        printf("which one");
or
if (a==4)
    if (b==2)
        printf("here!");
else
    printf("which one");
The problem is that the "else" is "dangling": one can be confused about which if owns it. Of course the compiler will not make this confusion, but mortals can.
Thanks to Python's significant indentation, we cannot have a dangling else problem in Python, since we have to write either
if a==4:
    if b==2:
        print "here!"
    else:
        print "which one"
or
if a==4:
    if b==2:
        print "here!"
else:
    print "which one"
so that the human eye catches it. And, nope, I do not think "else" is harmful; it is as harmful as "if".
In the example posited as being hard to reason about, the conditions can be written explicitly, but the else is still necessary.
E.g.
if a < 10:
    # condition stated explicitly
elif a > 10 and b < 10:
    # condition confusing but at least explicit
else:
    # Exactly what is true here?
    # Can be hard to reason out what condition is true
Can be written
if a < 10:
    # condition stated explicitly
elif a > 10 and b < 10:
    # condition confusing but at least explicit
elif a > 10 and b >= 10:
    # else condition
else:
    # Handle edge case with error?
I think the point with respect to try...except...else is that it is an easy mistake to use it to create inconsistent state rather than fix it. It is not that it should be avoided at all costs, but it can be counter-productive.
Consider:
try:
    file = open('somefile', 'r')
except IOError:
    logger.error("File not found!")
else:
    # Some file operations
    file.close()
# Some code that no longer explicitly references 'file'
It would be real nice to say that the above block prevented code from trying to access a file that didn't exist, or a directory for which the user has no permissions, and to say that everything is encapsulated because it is within a try...except...else block. But in reality, a lot of code in the above form really should look like this:
try:
    file = open('somefile', 'r')
except IOError:
    logger.error("File not found!")
    return False
# Some file operations
file.close()
# Some code that no longer explicitly references 'file'
You are often fooling yourself by saying that because file is no longer referenced in scope, it's okay to go on coding after the block, but in many cases something will come up where it just isn't okay. Or maybe a variable will later be created within the else block that isn't created in the except block.
This is how I would differentiate the if...else from try...except...else. In both cases, one must make the blocks parallel in most cases (variables and state set in one ought to be set in the other) but in the latter, coders often don't, likely because it's impossible or irrelevant. In such cases, it often will make a whole lot more sense to return to the caller than to try and keep working around what you think you will have in the best case scenario.
I would be interested in knowing what the StackOverflow community thinks are the important language features (idioms) of Python. Features that would define a programmer as Pythonic.
Python (pythonic) idiom - "code expression" that is natural or characteristic to the language Python.
Plus, Which idioms should all Python programmers learn early on?
Thanks in advance
Related:
Code Like a Pythonista: Idiomatic Python
Python: Am I missing something?
Python is a language that can be described as:
"rules you can fit in the
palm of your hand with a huge bag of
hooks".
Nearly everything in python follows the same simple standards. Everything is accessible, changeable, and tweakable. There are very few language level elements.
Take, for example, the len(data) built-in function. It works by simply checking for a data.__len__() method, then calling it and returning the value. That way, len() can work on any object that implements a __len__() method.
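A minimal sketch of that hook (the class name is made up):
class Playlist(object):
    def __init__(self, tracks):
        self.tracks = list(tracks)

    def __len__(self):
        # len(playlist) ends up calling this method
        return len(self.tracks)

print len(Playlist(['a', 'b', 'c']))  # 3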
Start by learning about the types and basic syntax:
Dynamic Strongly Typed Languages
bool, int, float, string, list, tuple, dict, set
statements, indenting, "everything is an object"
basic function definitions
Then move on to learning about how python works:
imports and modules (really simple)
the python path (sys.path)
the dir() function
__builtins__
Once you have an understanding of how to fit pieces together, go back and cover some of the more advanced language features:
iterators
overrides like __len__ (there are tons of these)
list comprehensions and generators
classes and objects (again, really simple once you know a couple rules)
python inheritance rules
And once you have a comfort level with these items (with a focus on what makes them pythonic), look at more specific items:
Threading in python (note the Global Interpreter Lock)
context managers
database access
file IO
sockets
etc...
And never forget The Zen of Python (by Tim Peters)
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
This page covers all the major python idioms: http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html
An important idiom in Python is docstrings.
Every object has a __doc__ attribute that can be used to get help on that object. You can set the __doc__ attribute on modules, classes, methods, and functions like this:
# this is m.py
""" module docstring """

class c:
    """class docstring"""
    def m(self):
        """method docstring"""
        pass

def f(a):
    """function f docstring"""
    return
Now, when you type help(m), help(m.f) etc. it will print the docstring as a help message.
Because it's just part of normal object introspection, this can be used by documentation-generating systems like epydoc, or used for testing purposes by unittest.
It can also be put to more unconventional (i.e. non-idiomatic) uses such as grammars in Dparser.
Where it gets even more interesting to me is that, even though __doc__ is a read-only attribute on most objects, you can use them anywhere like this:
x = 5
""" pseudo docstring for x """
and documentation tools like epydoc can pick them up and format them properly (as opposed to a normal comment, which stays inside the code formatting).
Decorators get my vote. Where else can you write something like:
def trace(num_args=0):
    def wrapper(func):
        def new_f(*a, **k):
            print_args = ''
            if num_args > 0:
                print_args = ','.join(str(x) for x in a[:num_args])
            print('entering %s(%s)' % (func.__name__, print_args))
            rc = func(*a, **k)
            if rc is not None:
                print('exiting %s(%s)=%s' % (func.__name__, print_args, str(rc)))
            else:
                print('exiting %s(%s)' % (func.__name__, print_args))
            return rc
        return new_f
    return wrapper

@trace(1)
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n-1)

factorial(5)
and get output like:
entering factorial(5)
entering factorial(4)
entering factorial(3)
entering factorial(2)
entering factorial(1)
entering factorial(0)
exiting factorial(0)=1
exiting factorial(1)=1
exiting factorial(2)=2
exiting factorial(3)=6
exiting factorial(4)=24
exiting factorial(5)=120
Everything connected to list usage.
Comprehensions, generators, etc.
Personally, I really like Python syntax defining code blocks by using indentation, and not by the words "BEGIN" and "END" (as in Microsoft's Basic and Visual Basic - I don't like these) or by using left- and right-braces (as in C, C++, Java, Perl - I like these).
This really surprised me because, although indentation has always been very important to me, I didn't make too much "noise" about it; I lived with it, and it is considered a skill to be able to read other people's "spaghetti" code. Furthermore, I never heard another programmer suggest making indentation a part of a language. Until Python! I only wish I had realized this idea first.
To me, it is as if Python's syntax forces you to write good, readable code.
Okay, I'll get off my soap-box. ;-)
From a more advanced viewpoint, understanding how dictionaries are used internally by Python. Classes, functions, modules, references are all just properties on a dictionary. Once this is understood, it's easy to understand how to monkey patch and use the powerful __getattr__, __setattr__, and __call__ methods.
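A minimal sketch of both ideas (the class and attribute names are made up): instance attributes live in __dict__, so you can patch them at runtime, and __getattr__ is consulted only when normal lookup fails:
class Config(object):
    def __init__(self):
        self.host = 'localhost'

    def __getattr__(self, name):
        # called only when the attribute isn't found in __dict__ or the class
        return 'default-%s' % name

c = Config()
print c.__dict__   # {'host': 'localhost'}
print c.port       # 'default-port', served by __getattr__
c.port = 8080      # monkey patch: just a new entry in c.__dict__
print c.port       # 8080; __getattr__ is no longer consulted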
Here's one that can help. What's the difference between:
[ foo(x) for x in range(0, 5) ][0]
and
( foo(x) for x in range(0, 5) ).next()
answer:
in the second example, foo is called only once. This may be important if foo has a side effect, or if the iterable being used to construct the list is large.
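A quick way to see the difference (foo here is a stand-in that just reports when it is called):
def foo(x):
    print "foo called with", x
    return x * 2

[foo(x) for x in range(0, 5)][0]      # foo runs five times; the whole list is built
(foo(x) for x in range(0, 5)).next()  # foo runs once; only the first value is computed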
Two things that struck me as especially Pythonic were dynamic typing and the various flavors of lists used in Python, particularly tuples.
Python's list obsession could be said to be LISP-y, but it's got its own unique flavor. A line like:
return HandEvaluator.StraightFlush, (PokerCard.longFaces[index + 4],
PokerCard.longSuits[flushSuit]), []
or even
return False, False, False
just looks like Python and nothing else. (Technically, you'd see the latter in Lua as well, but Lua is pretty Pythonic in general.)
Using string substitutions:
name = "Joe"
age = 12
print "My name is %s, I am %s" % (name, age)
When I'm not programming in python, that simple use is what I miss most.
Another thing you cannot start early enough is probably testing. Here especially doctests are a great way of testing your code by explaining it at the same time.
doctests are simple text files containing an interactive interpreter session plus text, like this:
Let's instantiate our class::
>>> a = Something(text="yes")
>>> a.text
'yes'
Now call this method and check the results::
>>> a.canify()
>>> a.text
'yes, I can'
If e.g. a.text returns something different the test will fail.
doctests can live inside docstrings or in standalone text files, and are executed by using the doctest module. Of course the better-known unit tests are also available.
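A minimal sketch of running them (the file name is made up): doctest.testfile handles standalone text files, doctest.testmod handles docstrings in the current module:
import doctest

# run the examples in a standalone text file
doctest.testfile('something.txt')

# run the examples embedded in this module's docstrings
if __name__ == '__main__':
    doctest.testmod()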
I think that online tutorials and books only talk about doing things, not about doing things in the best way. Along with the Python syntax, I think that speed is important in some cases.
Python provides a way to benchmark functions; actually, two!
One way is to use the profile module, like so:
import profile

def foo(x, y, z):
    return x**y % z  # Just an example.

profile.run('foo(5, 6, 3)')
Another way to do this is to use the timeit module, like this:
import timeit

def foo(x, y, z):
    return x**y % z  # Can also be 'pow(x, y, z)', which is way faster.

timeit.timeit('foo(5, 6, 3)', 'from __main__ import foo', number=100)
# timeit.timeit(testcode, setupcode, number=number_of_iterations)