Assert an integer is within range - python

I am writing some unit tests in Python that test whether I receive an integer. However, sometimes this integer can be off by 1 or 2, and I don't really care. Essentially I want to be able to assert that the received integer is within a certain range, something like:
self.assertBetween(998, 1000, my_integer)
Is there an accepted way of doing this? Or will I have to do something like this:
self.assertTrue(998 <= my_integer)
self.assertTrue(my_integer <= 1000)
EDIT
The answers so far suggest:
self.assertTrue(998 <= my_integer <= 1000)
Is there any benefit of this over my example with 2 asserts?

You can use a "chained comparison":
self.assertTrue(998 <= my_integer <= 1000)

unittest's TestCase has a method you can use for this: assertAlmostEqual.
self.assertAlmostEqual(myinteger, 999, delta=1)
# is equivalent to
self.assertTrue(998 <= myinteger <= 1000)
# ... but gives better error messages.
The optional parameter delta specifies the allowed distance from the value you're testing.
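For instance, here is a minimal sketch of a test case using the delta form (the test class name is a placeholder, not from the question):
import unittest

class RangeTest(unittest.TestCase):
    def test_integer_is_roughly_999(self):
        my_integer = 998  # value under test (placeholder)
        # Passes for 998, 999 and 1000; for anything else it fails with a
        # message that shows both values and the allowed delta.
        self.assertAlmostEqual(my_integer, 999, delta=1)

if __name__ == '__main__':
    unittest.main()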

I don't think it's a good idea to use assertTrue with a comparison inside:
that way you lose any information in the FAIL message:
AssertionError: False is not true
which is not helpful at all. You're basically back to a "raw" assert and you lose a lot of the benefits of unittest's assert methods.
I would recommend either:
Creating your own custom assert
in which you can print a more meaningful message. For example:
import unittest

class BetweenAssertMixin(object):
    def assertBetween(self, x, lo, hi):
        if not (lo <= x <= hi):
            raise AssertionError('%r is not between %r and %r' % (x, lo, hi))

class Test1(unittest.TestCase, BetweenAssertMixin):
    def test_between(self):
        self.assertBetween(999, 998, 1000)

    def test_too_low(self):
        self.assertBetween(997, 998, 1000)

    def test_too_high(self):
        self.assertBetween(1001, 998, 1000)

if __name__ == '__main__':
    unittest.main()
then you'll have the following output (shortened):
======================================================================
FAIL: test_too_high (__main__.Test1)
----------------------------------------------------------------------
Traceback (most recent call last):
File "example.py", line 19, in test_too_high
self.assertBetween(1001, 998, 1000)
File "example.py", line 8, in assertBetween
raise AssertionError('%r is not between %r and %r' % (x, lo, hi))
AssertionError: 1001 is not between 998 and 1000
======================================================================
FAIL: test_too_low (__main__.Test1)
----------------------------------------------------------------------
Traceback (most recent call last):
File "example.py", line 16, in test_too_low
self.assertBetween(997, 998, 1000)
File "example.py", line 8, in assertBetween
raise AssertionError('%r is not between %r and %r' % (x, lo, hi))
AssertionError: 997 is not between 998 and 1000
----------------------------------------------------------------------
Or use assertLessEqual and assertGreaterEqual
if you don't want a custom assert (which does add another traceback record and several lines of code):
...

    def test_no_custom_assert(self):
        my_integer = 100
        self.assertGreaterEqual(my_integer, 998)
        self.assertLessEqual(my_integer, 1000)

...
which is a bit longer than assertTrue(998 <= my_integer <= 1000) (though it may be shorter overall than adding a custom assert that is only used once), but you'll still get nice fail messages, and without an additional traceback record:
======================================================================
FAIL: test_no_custom_assert (__main__.Test1)
----------------------------------------------------------------------
Traceback (most recent call last):
File "example.py", line 23, in test_no_custom_assert
self.assertGreaterEqual(my_integer, 998)
AssertionError: 100 not greater than or equal to 998


Related

Handle the error when the limits do not exist in sympy

I have been having a problem with SymPy: when computing a limit whose answer should be "does not exist", it returns an infinite value instead. The code section is this:
x = Symbol('x')
a = Limit((5-x)/(x-2), x, 2, "+").doit()
print(a)
oo
b = Limit((5-x)/(x-2), x, 2, "-").doit()
print(b)
-oo
c = Limit((5-x)/(x-2), x, 2).doit()
print(c)
oo
The problem, as far as I have investigated and checked, is that it should report that the limit does not exist rather than return an infinite value.
By default, Limit((5-x)/(x-2), x, 2) means the one-sided limit from the right, so c will always be the same as a in your example.
In the current development version on GitHub, one can pass "+-" as the direction, which will compute both limits and raise ValueError if they are not the same:
>>> limit((5-x) / (x-2), x, 2, "+-")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/sympy/sympy/series/limits.py", line 66, in limit
% (llim, rlim))
ValueError: The limit does not exist since left hand limit = -oo and right hand limit = oo
But you probably just want a conditional like
if a != b:
    print("One-sided limits are different")
elif a.is_infinite:
    print("Infinite limit")
# and so on
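Putting that together, here is a minimal sketch for released SymPy versions, computing both one-sided limits explicitly and comparing them (the variable names are mine):
from sympy import Symbol, limit

x = Symbol('x')
expr = (5 - x) / (x - 2)

right = limit(expr, x, 2, '+')   # oo
left = limit(expr, x, 2, '-')    # -oo

if right != left:
    print("The limit does not exist (the one-sided limits differ)")
elif right.is_infinite:
    print("Infinite limit:", right)
else:
    print("Limit is", right)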

Raising an exception while using numba

Following up from here, I keep getting overflows. So I'm trying to raise an exception so that I know exactly what's going wrong where.
I've got something like this:
import math
import numpy as np
from numba import jit

@jit
def train_function(X, y, H):
    np.seterr(over="raise", under="raise", invalid="raise")
    # do some stuff, start a double loop, and then do:
    try:
        z[i,j] = math.exp(-beta[j,i])
    except OverflowError:
        print "Calculation failed! z[i,j] = math.exp(-beta[j,i]), j: " + str(j) + ", i: " + str(i) + ", b: " + str(beta[j,i]) + ", omb: " + str(oneminusbeta[j,i])
        raise

class MyClass(object):
    # init and other methods
    def train(self, X, y, H):
        train_function(X, y, H)
But I get this error:
Traceback (most recent call last):
File "C:\work_asaaki\code\gbc_classifier_train_7.py", line 55, in <module>
gentlebooster.train(X_train, y_train, boosting_rounds)
File "C:\work_asaaki\code\gentleboost_c_class_jit_v7_nolimit.py", line 297, in train
self.g_per_round, self.g = train_function(X, y, H)
File "C:\Anaconda\lib\site-packages\numba\dispatcher.py", line 152, in _compile_for_args
return self.jit(sig)
File "C:\Anaconda\lib\site-packages\numba\dispatcher.py", line 143, in jit
return self.compile(sig, **kws)
File "C:\Anaconda\lib\site-packages\numba\dispatcher.py", line 131, in compile
flags=flags, locals=locs)
File "C:\Anaconda\lib\site-packages\numba\compiler.py", line 103, in compile_extra
bc = bytecode.ByteCode(func=func)
File "C:\Anaconda\lib\site-packages\numba\bytecode.py", line 305, in __init__
table = utils.SortedMap(ByteCodeIter(code))
File "C:\Anaconda\lib\site-packages\numba\utils.py", line 70, in __init__
for i, (k, v) in enumerate(sorted(seq)):
File "C:\Anaconda\lib\site-packages\numba\bytecode.py", line 219, in next
raise NotImplementedError(ts % tv)
NotImplementedError: offset=742 opcode=0x79 opname=SETUP_EXCEPT
Can't I raise exception while I'm using numba? I'm using Anaconda 2.0.1 with Numba 0.13.x and Numpy 1.8.x on a 64-bit machine.
http://numba.pydata.org/numba-doc/dev/reference/pysupported.html
2.6.1.1. Constructs
Numba strives to support as much of the Python language as possible, but some language features are not available inside Numba-compiled functions. The following Python language features are not currently supported:
Class definition
Exception handling (try .. except, try .. finally)
Context management (the with statement)
The raise statement is supported in several forms:
raise (to re-raise the current exception)
raise SomeException
raise SomeException(<arguments>)
so that leaves us here:
z[i,j] = math.exp(-beta[j,i])
exp of anything below about -1000 is really, really small and will evaluate to zero without overflowing:
math.exp(-1000000000) "works" and is probably not your issue (although it returns 0.0, it's not "really" zero).
So what would cause this to fail? Well, we know:
print(math.exp(100))
>>>
2.6881171418161356e+43
is already silly big; anything much more than that will probably overflow.
Sure enough:
print(math.exp(1000))
>>>
OverflowError: math range error
I don't have a citation, but I think the effective range is roughly -700 to 700: from the perspective of double floats, below that exp effectively evaluates to 0, and above it you get an overflow.
To handle that, we window the function:
n = beta
if n > 100:
    n = 100
z = math.exp(n)
But that won't work as written if beta is an array rather than a scalar, since math.exp(n) only accepts scalars; you can use numpy.exp(n) and numpy.clip() to do the windowing instead:
b = numpy.array(-beta[j,i])
n = numpy.clip(b, a_min=None, a_max=100)
z = numpy.exp(n)
Or, to raise the overflow exception yourself:
b = numpy.array(-beta[j,i])
n = numpy.clip(b, a_min=None, a_max=100)
if b != n:
    print (j, i, -beta[j,i])
    raise OverflowError
else:
    z = numpy.exp(n)
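Since try/except isn't available inside a jitted function, here is a minimal sketch of the windowing idea as a self-contained Numba function (illustrative only, not the poster's actual training code; the clamp at 100 mirrors the snippets above):
import math
import numpy as np
from numba import jit

@jit
def exp_of_neg_beta(beta):
    # Computes z[i, j] = exp(-beta[j, i]) while clamping the exponent,
    # so math.exp never overflows (doubles overflow somewhere above ~709).
    n_rows, n_cols = beta.shape
    z = np.empty((n_cols, n_rows))
    for i in range(n_cols):
        for j in range(n_rows):
            e = -beta[j, i]
            if e > 100.0:   # window the exponent instead of using try/except
                e = 100.0
            z[i, j] = math.exp(e)
    return z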

Best way to develop a programme without using reload

I see from other discussions that reload is considered an unnecessary operation and a bad way to develop programmes. People say to use doctest and unittest. I must be missing something. To develop a programme I write it in a module, load the module into an eclipse console, experiment in the console by running little samples of code, then I spot an error in the module or decide to change something. Surely the quickest thing to do is save the module, reload it and carry on working in the console.
Is there a better way?
Basically instead of trying things out in the console, try things out by writing tests. It's almost the same amount of typing, but repeatable (you can check if things still work after you make changes) and it works as a rudimentary form of documentation.
Here's a contrived example of how you could use doctest and test-driven development, in comparison with testing by entering code at the console, with a recursive factorial function as the example:
Console:
First attempt:
def factorial(x)
    pass
Console:
>>> factorial(7)
SyntaxError: invalid syntax
Second attempt:
def factorial(x):
    return x * (x - 1)
Console:
>>> factorial(7)
42
Third attempt:
def factorial(x):
    return x * factorial(x-1)
Console:
>>> factorial(7)
RuntimeError: maximum recursion depth exceeded # StackOverflow! (basically)
Fourth attempt:
def factorial(x):
    if x == 0:
        return 1
    else:
        return x * factorial(x-1)
Console:
>>> factorial(5)
120
In the end, we get the right answer, but at each stage we have to go back to the console and write the same thing. This is alright for a short program like this, but for a program with many functions, and more complex interactions and possibilities to be tested it would take a very long time. Programming is about automating repetitive tasks, and testing is a repetitive task, so it makes sense to automate it. And in Python, tools to do just this are provided for you.
Behold - the doctest module: (example taken from the docs)
First attempt:
def factorial(n):
    """Return the factorial of n, an exact integer >= 0.

    >>> [factorial(n) for n in range(6)]
    [1, 1, 2, 6, 24, 120]
    >>> factorial(30)
    265252859812191058636308480000000
    >>> factorial(-1)
    Traceback (most recent call last):
        ...
    ValueError: n must be >= 0

    Factorials of floats are OK, but the float must be an exact integer:
    >>> factorial(30.1)
    Traceback (most recent call last):
        ...
    ValueError: n must be exact integer
    >>> factorial(30.0)
    265252859812191058636308480000000

    It must also not be ridiculously large:
    >>> factorial(1e100)
    Traceback (most recent call last):
        ...
    OverflowError: n too large
    """

if __name__ == "__main__":
    import doctest
    doctest.testmod(verbose=True)
First test (just run the program):
Trying:
[factorial(n) for n in range(6)]
Expecting:
[1, 1, 2, 6, 24, 120]
ok
Trying:
factorial(30)
Expecting:
265252859812191058636308480000000
ok
Trying:
factorial(-1)
Expecting:
Traceback (most recent call last):
...
ValueError: n must be >= 0
ok
Trying:
factorial(30.1)
Expecting:
Traceqwrqaeqrback (most recent call last):
...
ValueError: n must be exact integer
**********************************************************************
File "C:/Python33/doctestex.py", line 14, in __main__.factorial
Failed example:
factorial(30.1)
Exception raised:
Traceback (most recent call last):
File "C:\Python33\lib\doctest.py", line 1287, in __run
compileflags, 1), test.globs)
File "<doctest __main__.factorial[3]>", line 1, in <module>
factorial(30.1)
File "C:/Python33/doctestex.py", line 32, in factorial
raise ValueError("n must be exact integer")
ValueError: n must be exact integer
Trying:
factorial(30.0)
Expecting:
265252859812191058636308480000000
ok
Trying:
factorial(1e100)
Expecting:
Traceback (most recent call last):
...
OverflowError: n too large
ok
1 items had no tests:
__main__
**********************************************************************
1 items had failures:
1 of 6 in __main__.factorial
6 tests in 2 items.
5 passed and 1 failed.
***Test Failed*** 1 failures.
>>> ================================ RESTART ================================
>>>
Trying:
[factorial(n) for n in range(6)]
Expecting:
[1, 1, 2, 6, 24, 120]
**********************************************************************
File "C:/Python33/doctestex.py", line 4, in __main__.factorial
Failed example:
[factorial(n) for n in range(6)]
Expected:
[1, 1, 2, 6, 24, 120]
Got:
[None, None, None, None, None, None]
Trying:
factorial(30)
Expecting:
265252859812191058636308480000000
**********************************************************************
File "C:/Python33/doctestex.py", line 6, in __main__.factorial
Failed example:
factorial(30)
Expected:
265252859812191058636308480000000
Got nothing
Trying:
factorial(-1)
Expecting:
Traceback (most recent call last):
...
ValueError: n must be >= 0
**********************************************************************
File "C:/Python33/doctestex.py", line 8, in __main__.factorial
Failed example:
factorial(-1)
Expected:
Traceback (most recent call last):
...
ValueError: n must be >= 0
Got nothing
Trying:
factorial(30.1)
Expecting:
Traceback (most recent call last):
...
ValueError: n must be exact integer
**********************************************************************
File "C:/Python33/doctestex.py", line 14, in __main__.factorial
Failed example:
factorial(30.1)
Expected:
Traceback (most recent call last):
...
ValueError: n must be exact integer
Got nothing
Trying:
factorial(30.0)
Expecting:
265252859812191058636308480000000
**********************************************************************
File "C:/Python33/doctestex.py", line 18, in __main__.factorial
Failed example:
factorial(30.0)
Expected:
265252859812191058636308480000000
Got nothing
Trying:
factorial(1e100)
Expecting:
Traceback (most recent call last):
...
OverflowError: n too large
**********************************************************************
File "C:/Python33/doctestex.py", line 22, in __main__.factorial
Failed example:
factorial(1e100)
Expected:
Traceback (most recent call last):
...
OverflowError: n too large
Got nothing
1 items had no tests:
__main__
**********************************************************************
1 items had failures:
6 of 6 in __main__.factorial
6 tests in 2 items.
0 passed and 6 failed.
***Test Failed*** 6 failures.
Then when we finish the program (and at all stages between) we just press one button and that does all our testing for us. If you were working with a team on a bigger project, you could, for instance, write the test data to file and use the failures to see where you need to focus on.
Trying:
[factorial(n) for n in range(6)]
Expecting:
[1, 1, 2, 6, 24, 120]
ok
Trying:
factorial(30)
Expecting:
265252859812191058636308480000000
ok
Trying:
factorial(-1)
Expecting:
Traceback (most recent call last):
...
ValueError: n must be >= 0
ok
Trying:
factorial(30.1)
Expecting:
Traceback (most recent call last):
...
ValueError: n must be exact integer
ok
Trying:
factorial(30.0)
Expecting:
265252859812191058636308480000000
ok
Trying:
factorial(1e100)
Expecting:
Traceback (most recent call last):
...
OverflowError: n too large
ok
1 items had no tests:
__main__
1 items passed all tests:
6 tests in __main__.factorial
6 tests in 2 items.
6 passed and 0 failed.
Test passed.
And (provided you added all the test cases to the docstring) you can now ship or integrate the program, and it will have far fewer bugs than it would have if you had only tested for the things you thought mattered each time you changed something, rather than for everything you thought could possibly matter at the start of the development process.
Not to mention you now have the basis for your documentation! Running the program in the console, and then entering help(factorial) will now give you this:
Help on function factorial in module __main__:
factorial(n)
Return the factorial of n, an exact integer >= 0.
>>> [factorial(n) for n in range(6)]
[1, 1, 2, 6, 24, 120]
>>> factorial(30)
265252859812191058636308480000000
>>> factorial(-1)
Traceback (most recent call last):
...
ValueError: n must be >= 0
Factorials of floats are OK, but the float must be an exact integer:
>>> factorial(30.1)
Traceback (most recent call last):
...
ValueError: n must be exact integer
>>> factorial(30.0)
265252859812191058636308480000000
It must also not be ridiculously large:
>>> factorial(1e100)
Traceback (most recent call last):
...
OverflowError: n too large
You can then use a number of tools (pydoc is one in the standard library) to turn the docstrings into formatted HTML helpfiles.
Of course, this is only one of many testing tools you can use with Python. Others include the more powerful unittest module, and the less powerful technique of adding assert statements into your code.
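For comparison, the same checks could be written with unittest. This is a minimal sketch (the class name TestFactorial is mine), assuming the finished factorial above is saved as doctestex.py:
import unittest
from doctestex import factorial  # assumes the module above is doctestex.py

class TestFactorial(unittest.TestCase):
    def test_small_values(self):
        self.assertEqual([factorial(n) for n in range(6)],
                         [1, 1, 2, 6, 24, 120])

    def test_large_value(self):
        self.assertEqual(factorial(30), 265252859812191058636308480000000)

    def test_negative_raises(self):
        with self.assertRaises(ValueError):
            factorial(-1)

    def test_non_integral_float_raises(self):
        with self.assertRaises(ValueError):
            factorial(30.1)

    def test_huge_value_raises(self):
        with self.assertRaises(OverflowError):
            factorial(1e100)

if __name__ == '__main__':
    unittest.main()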

What is the real world use or significance of sphinx doctest?

What is the significance of doctest in Sphinx? Can someone help me understand its use with a simple example?
Sphinx's doctest is for testing the documentation itself. In other words, it allows for the automatic verification of the documentation's sample code. While it might also verify whether the Python code works as expected, Sphinx is unnecessary for that purpose alone (you could more easily use the standard library's doctest module).
So, a real-world scenario (one I find myself in on a regular basis) goes something like this: a new feature is nearing completion, so I write some documentation to introduce the new feature. The new docs contain one or more code samples. Before publishing the documentation, I run make doctest in my Sphinx documentation directory to verify that the code samples I've written for the audience will actually work.
I haven't used it myself, but my understanding is that it extends the functionality of the standard doctest module. For example, it adds testsetup and testcleanup directives in which you can put your set-up and tear-down logic, making it possible for Sphinx to exclude that code from the rendered documentation.
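For illustration, here is a minimal reStructuredText sketch of those directives (the group name and the imported module are made up; it assumes 'sphinx.ext.doctest' is listed in the extensions of conf.py):
.. testsetup:: demo

   from mymodule import factorial   # hypothetical module under test

.. doctest:: demo

   >>> factorial(5)
   120

.. testcleanup:: demo

   pass  # release anything created in testsetup

Running make doctest executes these blocks, while the testsetup and testcleanup bodies are left out of the rendered documentation.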
Here is a simple example (from the doctest module):
"""
This is the "example" module.
The example module supplies one function, factorial(). For example,
>>> factorial(5)
120
"""
def factorial(n):
"""Return the factorial of n, an exact integer >= 0.
If the result is small enough to fit in an int, return an int.
Else return a long.
>>> [factorial(n) for n in range(6)]
[1, 1, 2, 6, 24, 120]
>>> [factorial(long(n)) for n in range(6)]
[1, 1, 2, 6, 24, 120]
>>> factorial(30)
265252859812191058636308480000000L
>>> factorial(30L)
265252859812191058636308480000000L
>>> factorial(-1)
Traceback (most recent call last):
...
ValueError: n must be >= 0
Factorials of floats are OK, but the float must be an exact integer:
>>> factorial(30.1)
Traceback (most recent call last):
...
ValueError: n must be exact integer
>>> factorial(30.0)
265252859812191058636308480000000L
It must also not be ridiculously large:
>>> factorial(1e100)
Traceback (most recent call last):
...
OverflowError: n too large
"""
import math
if not n >= 0:
raise ValueError("n must be >= 0")
if math.floor(n) != n:
raise ValueError("n must be exact integer")
if n+1 == n: # catch a value like 1e300
raise OverflowError("n too large")
result = 1
factor = 2
while factor <= n:
result *= factor
factor += 1
return result
if __name__ == "__main__":
import doctest
doctest.testmod()

How come there is a TypeError in my Python code?

from celery.task import Task

class Decayer(Task):
    def calc_decay_value(self, x):
        y = (1.0/(2^x))
        return y

    def calc_decay_time(self, x):
        y = 2^x
        return y

    def run(self, d, **kwargs):
        # do stuff.
        return 0
>>> decayer = tasks.Decayer(r)
Traceback (most recent call last):
File "scanDecay.py", line 31, in <module>
decayer = tasks.Decayer(r)
TypeError: object.__new__() takes no parameters
Two errors:
1) Your class doesn't define an __init__ method that accepts an argument. Either add one, or use this instead:
decayer = tasks.Decayer()
2) You are trying to raise a number to a power with ^, but ^ is bitwise xor and cannot be used on floats. Use ** instead of ^:
y = 2 ** x
The problem seems to be the decayer = tasks.Decayer(r) call: tasks.Decayer is not designed to take an argument, because Task does not define an __init__ method that can accept one.
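Putting the two answers together, here is a minimal sketch that keeps the argument-free constructor and fixes the operator (passing r into run is my assumption about where it belongs):
from celery.task import Task

class Decayer(Task):
    def calc_decay_value(self, x):
        return 1.0 / (2 ** x)   # ** is exponentiation; ^ is bitwise xor

    def calc_decay_time(self, x):
        return 2 ** x

    def run(self, r, **kwargs):
        # do stuff with r.
        return 0

decayer = Decayer()        # no argument, since there is no custom __init__
# decayer.run(r)           # or decayer.delay(r) to send it through Celery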
