Say I have 2 variables a and b where it is given that b > a, how then can I enforce this relative constraint on the hypothesis strategies?
from hypothesis import given, strategies as st

@given(st.integers(), st.integers())
def test_subtraction(a, b):
    # only evaluates to true if b > a
    # hence I'd like to enforce this constraint on the strategy
    assert abs(b - a) == -(a - b)
(I've no idea how to do this in Python, but it's a common enough problem, so I hope you can use these F# FsCheck examples instead. The ideas are universal.)
Filtering
Most property-based frameworks come with an ability to filter values based on a predicate. In FsCheck it's the ==> operator. In QuickCheck the equivalent is called suchThat.
Using the ==> operator in FsCheck, you can write the property like this:
[<Property>]
let property_using_filtering (a : int) (b : int) =
    b > a ==> lazy
        Assert.Equal (abs (b - a), -(a - b))
(It's possible to write the test in a more terse and idiomatic style, but since I'm assuming that you may not be familiar with F#, I chose to be more explicit than usual.)
Notice that the predicate b > a precedes the filtering operator ==>. This means that the rest of the code to the right of, and below, the operator only runs when the predicate is true.
The framework is still going to generate entirely random values, so (assuming a uniform random distribution) it'll be throwing half of the generated values away.
Thus, to generate 100 (the default) valid test cases, it'll have to generate on average 200 test cases (i.e. 400 integers). Generating 400 integers instead of 200 integers probably isn't a big deal, but in general, this kind of filtering can be prohibitively wasteful.
Therefore, it's always useful to be aware of alternatives.
Seed and diff
When faced with this sort of problem, it usually helps to take an alternative look at how to generate values. How do you generate two values where one is strictly greater than the other?
You can generate a random value (the seed), which in this case will also serve as the first value itself. Then a second value will indicate the difference between the two.
Some property-based frameworks come with features where you can tell it to generate strictly positive numbers. FsCheck comes with those features, but assuming for the moment that not all frameworks can do this, you can still use an unconstrained random value.
In that case, the difference, being any random number, may be negative, zero, or positive. We can take the absolute value of the number and then add one to ensure that it's strictly greater than zero. If you add that to the first number, you're guaranteed to get a number greater than the first one:
[<Property>]
let property_using_seed_and_diff (seed : int) (diff : int) =
    let a = seed
    let b = a + 1 + abs diff
    Assert.Equal (abs (b - a), -(a - b))
Here, (somewhat redundantly) we set a = seed and then b = a + 1 + abs diff according to the above description.
(I only included the redundant seed function parameter to illustrate the general idea. Sometimes, you need one or more values calculated from a seed, but not the seed itself. In the present case, however, the value and seed coincide.)
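Translated back into the Hypothesis example from the question, the same idea might look like this (a sketch of my own; variable names follow the F# version above):

from hypothesis import given, strategies as st

@given(st.integers(), st.integers())
def test_subtraction(seed, diff):
    a = seed
    b = a + 1 + abs(diff)  # b is guaranteed to be strictly greater than a
    assert abs(b - a) == -(a - b)

Hypothesis can also generate the difference directly with st.integers(min_value=1), which avoids the abs-plus-one adjustment.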
In addition to the filtering and seed-plus-diff approaches shown above, the "fix-it-up" approach can be useful: try to generate something valid, and just patch the values if the constraint isn't satisfied. In this case, that might look like:
@given(st.integers(), st.integers())
def test_subtraction(a, b):
    a, b = sorted([a, b])
    ...
The advantage here is that the minimal failing example tends to look a bit more natural, and might have a nicer distribution than a seed-and-diff (or "constructive") approach. It also combines well with the other approaches, especially if you're defining your own strategy with @st.composite, as sketched below.
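For example, a composite strategy along these lines (my own sketch, not from the original answer) bakes the constraint into generation:

from hypothesis import given, strategies as st

@st.composite
def ordered_pairs(draw):
    # draw a base value, then a strictly positive offset, so b > a always holds
    a = draw(st.integers())
    b = a + draw(st.integers(min_value=1))
    return a, b

@given(ordered_pairs())
def test_subtraction(pair):
    a, b = pair
    assert abs(b - a) == -(a - b)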
You can add these constraints using assume in Hypothesis:
from hypothesis import assume, given, strategies as st
@given(st.integers(), st.integers())
def test_subtraction(a, b):
    assume(b > a)
    assert abs(b - a) == -(a - b)
See: https://hypothesis.readthedocs.io/en/latest/details.html#making-assumptions for more details
It's well known that comparing floats for equality is a little fiddly due to rounding and precision issues.
For example: Comparing Floating Point Numbers, 2012 Edition
What is the recommended way to deal with this in Python?
Is there a standard library function for this somewhere?
Python 3.5 adds the math.isclose and cmath.isclose functions as described in PEP 485.
If you're using an earlier version of Python, the equivalent function is given in the documentation.
def isclose(a, b, rel_tol=1e-09, abs_tol=0.0):
    return abs(a-b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)
rel_tol is a relative tolerance: it is multiplied by the greater of the magnitudes of the two arguments, so as the values get larger, so does the allowed difference between them while still considering them equal.
abs_tol is an absolute tolerance that is applied as-is in all cases. If the difference is less than either of those tolerances, the values are considered equal.
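For example, with math.isclose (or the equivalent function above); the values here are only illustrative:

import math

# relative tolerance: the allowed difference scales with the magnitudes
print(math.isclose(1e9, 1e9 + 0.5))            # True: 0.5 is tiny relative to 1e9
print(math.isclose(1.0, 1.0001))               # False: relative difference is 1e-4

# absolute tolerance: needed when comparing against values near zero
print(math.isclose(0.0, 1e-12))                # False: rel_tol alone never matches zero
print(math.isclose(0.0, 1e-12, abs_tol=1e-9))  # True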
Something as simple as the following may be good enough:
return abs(f1 - f2) <= allowed_error
I would agree that Gareth's answer is probably most appropriate as a lightweight function/solution.
But I thought it would be helpful to note that if you are using NumPy or are considering it, there is a packaged function for this.
numpy.isclose(a, b, rtol=1e-05, atol=1e-08, equal_nan=False)
A little disclaimer though: installing NumPy can be a non-trivial experience depending on your platform.
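For completeness, a quick usage sketch (the arrays are chosen only for illustration):

import numpy as np

a = np.array([1.0, 1e9, 0.0])
b = np.array([1.0 + 1e-9, 1e9 + 0.5, 1e-7])

print(np.isclose(a, b))   # element-wise: [ True  True False]
print(np.allclose(a, b))  # one bool for the whole array: False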
Use Python's decimal module, which provides the Decimal class.
From the comments:
It is worth noting that if you're doing math-heavy work and you don't absolutely need the precision from decimal, this can really bog things down. Floats are way, way faster to deal with, but imprecise. Decimals are extremely precise but slow.
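A minimal sketch of the difference this makes (my own example):

from decimal import Decimal

# binary floats pick up rounding error
print(0.1 + 0.2 == 0.3)                                   # False

# Decimals built from strings compare exactly
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True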
The common wisdom that floating-point numbers cannot be compared for equality is inaccurate. Floating-point numbers are no different from integers: If you evaluate "a == b", you will get true if they are identical numbers and false otherwise (with the understanding that two NaNs are of course not identical numbers).
The actual problem is this: If I have done some calculations and am not sure the two numbers I have to compare are exactly correct, then what? This problem is the same for floating-point as it is for integers. If you evaluate the integer expression "7/3*3", it will not compare equal to "7*3/3".
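In Python terms the integer analogy looks like this (floor division is used because / would produce a float in Python 3):

print(7 // 3 * 3)                # 6: the division truncates first and loses the remainder
print(7 * 3 // 3)                # 7: multiplying first keeps the value exact
print(7 // 3 * 3 == 7 * 3 // 3)  # False, although the mathematically exact results are equal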
So suppose we asked "How do I compare integers for equality?" in such a situation. There is no single answer; what you should do depends on the specific situation, notably what sort of errors you have and what you want to achieve.
Here are some possible choices.
If you want to get a "true" result if the mathematically exact numbers would be equal, then you might try to use the properties of the calculations you perform to prove that you get the same errors in the two numbers. If that is feasible, and you compare two numbers that result from expressions that would give equal numbers if computed exactly, then you will get "true" from the comparison.

Another approach is that you might analyze the properties of the calculations and prove that the error never exceeds a certain amount, perhaps an absolute amount or an amount relative to one of the inputs or one of the outputs. In that case, you can ask whether the two calculated numbers differ by at most that amount, and return "true" if they are within the interval.

If you cannot prove an error bound, you might guess and hope for the best. One way of guessing is to evaluate many random samples and see what sort of distribution you get in the results.
Of course, since we only set the requirement that you get "true" if the mathematically exact results are equal, we left open the possibility that you get "true" even if they are unequal. (In fact, we can satisfy the requirement by always returning "true". This makes the calculation simple but is generally undesirable, so I will discuss improving the situation below.)
If you want to get a "false" result if the mathematically exact numbers would be unequal, you need to prove that your evaluation of the numbers yields different numbers if the mathematically exact numbers would be unequal. This may be impossible for practical purposes in many common situations. So let us consider an alternative.
A useful requirement might be that we get a "false" result if the mathematically exact numbers differ by more than a certain amount. For example, perhaps we are going to calculate where a ball thrown in a computer game traveled, and we want to know whether it struck a bat. In this case, we certainly want to get "true" if the ball strikes the bat, and we want to get "false" if the ball is far from the bat, and we can accept an incorrect "true" answer if the ball in a mathematically exact simulation missed the bat but is within a millimeter of hitting the bat. In that case, we need to prove (or guess/estimate) that our calculation of the ball's position and the bat's position have a combined error of at most one millimeter (for all positions of interest). This would allow us to always return "false" if the ball and bat are more than a millimeter apart, to return "true" if they touch, and to return "true" if they are close enough to be acceptable.
So, how you decide what to return when comparing floating-point numbers depends very much on your specific situation.
As to how you go about proving error bounds for calculations, that can be a complicated subject. Any floating-point implementation using the IEEE 754 standard in round-to-nearest mode returns the floating-point number nearest to the exact result for any basic operation (notably multiplication, division, addition, subtraction, square root). (In case of tie, round so the low bit is even.) (Be particularly careful about square root and division; your language implementation might use methods that do not conform to IEEE 754 for those.) Because of this requirement, we know the error in a single result is at most 1/2 of the value of the least significant bit. (If it were more, the rounding would have gone to a different number that is within 1/2 the value.)
Going on from there gets substantially more complicated; the next step is performing an operation where one of the inputs already has some error. For simple expressions, these errors can be followed through the calculations to reach a bound on the final error. In practice, this is only done in a few situations, such as working on a high-quality mathematics library. And, of course, you need precise control over exactly which operations are performed. High-level languages often give the compiler a lot of slack, so you might not know in which order operations are performed.
There is much more that could be (and is) written about this topic, but I have to stop there. In summary, the answer is: There is no library routine for this comparison because there is no single solution that fits most needs that is worth putting into a library routine. (If comparing with a relative or absolute error interval suffices for you, you can do it simply without a library routine.)
math.isclose() has been added to Python 3.5 for that (source code). Here is a port of it to Python 2. Its difference from Mark Ransom's one-liner is that it can handle "inf" and "-inf" properly.
import math

def isclose(a, b, rel_tol=1e-09, abs_tol=0.0):
    '''
    Python 2 implementation of Python 3.5 math.isclose()
    https://github.com/python/cpython/blob/v3.5.10/Modules/mathmodule.c#L1993
    '''
    # sanity check on the inputs
    if rel_tol < 0 or abs_tol < 0:
        raise ValueError("tolerances must be non-negative")

    # short circuit exact equality -- needed to catch two infinities of
    # the same sign. And perhaps speeds things up a bit sometimes.
    if a == b:
        return True

    # This catches the case of two infinities of opposite sign, or
    # one infinity and one finite number. Two infinities of opposite
    # sign would otherwise have an infinite relative tolerance.
    # Two infinities of the same sign are caught by the equality check
    # above.
    if math.isinf(a) or math.isinf(b):
        return False

    # now do the regular computation
    # this is essentially the "weak" test from the Boost library
    diff = math.fabs(b - a)
    result = (((diff <= math.fabs(rel_tol * b)) or
               (diff <= math.fabs(rel_tol * a))) or
              (diff <= abs_tol))
    return result
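A few quick checks of the infinity handling described in the comments (assuming the isclose defined above is in scope):

print(isclose(float('inf'), float('inf')))   # True: caught by the a == b shortcut
print(isclose(float('-inf'), float('inf')))  # False: opposite signs
print(isclose(float('inf'), 1e308))          # False: one infinite, one finite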
I'm not aware of anything in the Python standard library (or elsewhere) that implements Dawson's AlmostEqual2sComplement function. If that's the sort of behaviour you want, you'll have to implement it yourself. (In which case, rather than using Dawson's clever bitwise hacks you'd probably do better to use more conventional tests of the form if abs(a-b) <= eps1*(abs(a)+abs(b)) + eps2 or similar. To get Dawson-like behaviour you might say something like if abs(a-b) <= eps*max(EPS,abs(a),abs(b)) for some small fixed EPS; this isn't exactly the same as Dawson, but it's similar in spirit.)
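For reference, a direct transcription of those two suggested tests (the function names and default tolerances are mine and would need tuning):

def almost_equal(a, b, eps1=1e-9, eps2=1e-12):
    # combined relative-and-absolute test from the text above
    return abs(a - b) <= eps1 * (abs(a) + abs(b)) + eps2

def almost_equal_dawson_like(a, b, eps=1e-9, EPS=1e-12):
    # "Dawson-like" variant: relative to the larger magnitude, with a small floor EPS
    return abs(a - b) <= eps * max(EPS, abs(a), abs(b))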
If you want to use it in testing/TDD context, I'd say this is a standard way:
from nose.tools import assert_almost_equals
assert_almost_equals(x, y, places=7) # The default is 7
In terms of absolute error, you can just check
if abs(a - b) <= error:
    print("Almost equal")
Some information on why floats act weird in Python:
Python 3 Tutorial 03 - if-else, logical operators and top beginner mistakes
You can also use math.isclose for relative errors.
This is useful for the case where you want to make sure two numbers are the same 'up to precision', and there isn't any need to specify the tolerance:
Find minimum precision of the two numbers
Round both of them to minimum precision and compare
def isclose(a, b):
    astr = str(a)
    aprec = len(astr.split('.')[1]) if '.' in astr else 0
    bstr = str(b)
    bprec = len(bstr.split('.')[1]) if '.' in bstr else 0
    prec = min(aprec, bprec)
    return round(a, prec) == round(b, prec)
As written, it only works for numbers without the 'e' in their string representation (meaning 0.9999999999995e-4 < number <= 0.9999999999995e11)
Example:
>>> isclose(10.0, 10.049)
True
>>> isclose(10.0, 10.05)
False
For some of the cases where you can affect the source number representation, you can represent them as fractions instead of floats, using integer numerator and denominator. That way you can have exact comparisons.
See Fraction from fractions module for details.
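A minimal sketch of the idea:

from fractions import Fraction

# float arithmetic accumulates rounding error
print(0.1 + 0.1 + 0.1 == 0.3)                  # False

# the same calculation with Fractions is exact
print(Fraction(1, 10) * 3 == Fraction(3, 10))  # True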
I liked Sesquipedal's suggestion, but with a modification (without it, the special case where both values are 0 returns False). In my case I was on Python 2.7 and just used a simple function:
if f1 == 0 and f2 == 0:
    return True
else:
    return abs(f1 - f2) < tol * max(abs(f1), abs(f2))
If you want to do it in a testing or TDD context using the pytest package, here's how:
import pytest

PRECISION = 1e-3

def assert_almost_equal():
    obtained_value = 99.99
    expected_value = 100.00
    assert obtained_value == pytest.approx(expected_value, PRECISION)
I found the following comparison helpful:
str(f1) == str(f2)
To compare up to a given decimal without atol/rtol:
def almost_equal(a, b, decimal=6):
    return '{0:.{1}f}'.format(a, decimal) == '{0:.{1}f}'.format(b, decimal)

print(almost_equal(0.0, 0.0001, decimal=5))  # False
print(almost_equal(0.0, 0.0001, decimal=3))  # True
This may be a bit of an ugly hack, but it works pretty well when you don't need more than the default float precision (about 11 decimals).
The round_to function uses the format method from the built-in str class to round the float to a string with the number of decimals needed, building the format expression as a string and evaluating it with the eval built-in. The is_close function then just applies a simple conditional to the two rounded strings.
def round_to(float_num, prec):
    return eval("'{:." + str(int(prec)) + "f}'.format(" + str(float_num) + ")")

def is_close(float_a, float_b, prec):
    if round_to(float_a, prec) == round_to(float_b, prec):
        return True
    return False
>>> a = 10.0
>>> b = 10.0001
>>> print(is_close(a, b, prec=3))
True
>>> print(is_close(a, b, prec=4))
False
Update:
As suggested by @stepehjfox, a cleaner way to build a round_to function avoiding eval is to use nested formatting:
def round_to(float_num, prec):
    return '{:.{precision}f}'.format(float_num, precision=prec)
Following the same idea, the code can be even simpler using the great new f-strings (Python 3.6+):
def round_to(float_num, prec):
    return f'{float_num:.{prec}f}'
So, we could even wrap it up all in one simple and clean 'is_close' function:
def is_close(a, b, prec):
    return f'{a:.{prec}f}' == f'{b:.{prec}f}'
If you want to compare floats, the options above are great, but in my case I ended up using an Enum, since my use case only accepted a few valid floats.
from enum import Enum

class HolidayMultipliers(Enum):
    EMPLOYED_LESS_THAN_YEAR = 2.0
    EMPLOYED_MORE_THAN_YEAR = 2.5
Then running:
testable_value = 2.0
HolidayMultipliers(testable_value)
If the float is valid, it's fine, but otherwise it will just throw a ValueError.
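For example, a small validation wrapper might look like this (validate_multiplier is a hypothetical helper, assuming the HolidayMultipliers enum above):

def validate_multiplier(value):
    try:
        return HolidayMultipliers(value)
    except ValueError:
        raise ValueError(f"{value!r} is not a valid holiday multiplier")

print(validate_multiplier(2.5))  # HolidayMultipliers.EMPLOYED_MORE_THAN_YEAR
validate_multiplier(2.4)         # raises ValueError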
Using == is a simple, good way if you don't care about the tolerance precisely.
# Python 3.8.5
>>> 1.0000000000001 == 1
False
>>> 1.0000000000000001 == 1
True
But watch out for 0:
>>> 0 == 0.00000000000000000000000000000000000000000001
False
Zero only compares equal to exactly zero.
Use math.isclose if you want to control the tolerance.
The default a == b is roughly equivalent to math.isclose(a, b, rel_tol=1e-16, abs_tol=0).
If you still want to use == with a self-defined tolerance:
>>> import math
>>> class MyFloat(float):
...     def __eq__(self, another):
...         return math.isclose(self, another, rel_tol=0, abs_tol=0.001)
...
>>> a = MyFloat(0)
>>> a
0.0
>>> a == 0.001
True
So far, I haven't found anywhere to configure this globally for float. Besides, mock also doesn't work for float.__eq__.
I want to be able to access the sign bit of a number in python. I can do something like n >> 31 in C since int is represented as 32 bits.
I can't make use of the conditional operator, or of > and <.
In Python 3, integers don't have a fixed size and aren't represented using the internal CPU representation (which allows handling very large numbers without trouble).
So the best way is
signbit = 1 if n < 0 else 0
or
signbit = int(n < 0)
EDIT: if you cannot use < or > (which is ludicrous, but so be it) you could use the fact that a - b will be positive if a is greater than b, so you could check whether
abs(a - b) == a - b
which doesn't use < or > (at least not in the text; internally abs uses a comparison, but you can trust me on that).
I would argue that in Python there is not really a concept of a sign bit. As far as the programmer is concerned, an int is just a type with certain behavior. You don't get access to the low-level representation. Even bin(-3) returns a "negative" binary representation: '-0b11'
However, there are ways to get the sign, or the bigger of two integers, without comparisons. The following approach abuses floating-point math to avoid comparisons.
def sign(a):
    try:
        return (1 - int(a / (a**2)**0.5)) // 2
    except ZeroDivisionError:
        return 0

def return_bigger(a, b):
    s = sign(b - a)
    return a * s + b * (1 - s)

assert sign(-33) == 1
assert sign(33) == 0
assert return_bigger(10, 15) == 15
assert return_bigger(25, 3) == 25
assert return_bigger(42, 42) == 42
(a**2)**0.5 could be replaced with abs but I bet internally this is implemented with a comparison.
The try/except is not needed if you don't care about 0 or equal integers (or there may be another horrible math workaround).
Finally, I'd like to point out that I have absolutely no idea why on earth anybody would want to implement something like that, except for the hell of it.
Conceptually, the bit representation of a negative integer is padded with an infinite number of 1 bits to the left (just like a non-negative number is regarded as padded with an infinite number of 0 bits). The operation n >> 31 does work (given that n is in the range of signed 32-bit numbers) in the sense that it places the sign bit (or if you prefer, one of the left-padding bits) in the lowest bit position. You just need to get rid of the rest of the left-padding bits, which you can do with a bitwise and operation like this:
n >> 31 & 1
Or you can make use of the fact that all one bits is how −1 is represented, and simply negate the result:
-(n >> 31)
Alternatively, you can cut off all but the lowest 32 1 bits before you do the shift, like this:
(n & 0xffffffff) >> 31
This is all under the assumption that you are working with numbers that fit into a signed 32-bit int. If n might need a 64-bit representation, shift by 63 places instead of 31, and if it's just 16-bit numbers, shifting by 15 places is enough. (If you use the (n & 0xffffffff) >> 31 variant, adjust the number of fs accordingly.)
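A quick sanity check of these variants (assuming the values fit into the signed 32-bit range):

for n in (5, -5, 0, -1):
    print(n, n >> 31 & 1, -(n >> 31), (n & 0xffffffff) >> 31)
# 5 0 0 0
# -5 1 1 1
# 0 0 0 0
# -1 1 1 1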
On machine code level, and-ing/negating and shifting is potentially much more efficient than using comparison. The former is just a couple of machine instructions, while the latter would usually boil down to a branch. Branching doesn't just take more machine instructions, it also has a bad influence on the pipelining and out-of-order execution of modern CPUs. But Python execution takes place in a higher-level layer than machine code execution, and therefore it's difficult to say anything about the performance impact in Python: it may depend on the context – as it would in machine code – and may therefore also be difficult to test generally. (Caveat: I don't know much about how low-level execution happens in CPython, or in Python in general. For someone who does, this might not be so difficult to say.)
If you don't know how big n is (in Python, an integer is not required to fit into any specific number of bits), you can use bit_length() to find out. This will work for integers of any size:
-(n >> n.bit_length())
The bit_length() operation might boil down to a single machine instruction, or it might actually need a loop to find the result, depending on the implementation and the underlying machine architecture. In the latter case, this should be noticeably more costly than using a constant shift amount.
Final remark: in C, n >> 31 is actually not guaranteed to work as you assume, because the C standard leaves it implementation-defined whether >> on a negative signed value does a logical right shift (like you assume) or an arithmetic right shift (like Python does). In some languages, these are different operators, which makes it clear what you get. In Java, for instance, logical shift right is >>> and arithmetic shift right is >>.
How about this?
def is_negative(num, places):
    return not long(num * 10**places) & 0xFFFFFFFF == long(num * 10**places)
Not efficient, but not using < or > definitely restricts you to weirdness. Note that zero will evaluate as positive.
I need a reversible hash function (obviously the input will be much smaller in size than the output) that maps the input to the output in a random-looking way. Basically, I want a way to transform a number like "123" to a larger number like "9874362483910978", but not in a way that will preserve comparisons, so it must not be always true that, if x1 > x2, f(x1) > f(x2) (but neither must it be always false).
The use case for this is that I need to find a way to transform small numbers into larger, random-looking ones. They don't actually need to be random (in fact, they need to be deterministic, so the same input always maps to the same output), but they do need to look random (at least when base64encoded into strings, so shifting by Z bits won't work as similar numbers will have similar MSBs).
Also, easy (fast) calculation and reversal is a plus, but not required.
I don't know if I'm being clear, or if such an algorithm exists, but I'd appreciate any and all help!
None of the answers provided seemed particularly useful, given the question. I had the same problem, needing a simple, reversible hash for non-security purposes, and decided to go with bit relocation. It's simple, it's fast, and it doesn't require knowing anything about boolean maths or crypto algorithms or anything else that requires actual thinking.
The simplest would probably be to just move half the bits left, and the other half right:
def hash(n):
    return ((0x0000FFFF & n) << 16) + ((0xFFFF0000 & n) >> 16)
This is reversible, in that hash(hash(n)) == n, and has non-sequential pairs {n,m}, n < m, where hash(m) < hash(n).
And to get a much less sequential looking implementation, you might also want to consider an interlace reordering from [msb,z,...,a,lsb] to [msb,lsb,z,a,...] or [lsb,msb,a,z,...] or any other relocation you feel gives an appropriately non-sequential sequence for the numbers you deal with, or even add a XOR on top for peak desequential'ing.
(The above function is safe for numbers that fit in 32 bits, larger numbers are guaranteed to cause collisions and would need some more bit mask coverage to prevent problems. That said, 32 bits is usually enough for any non-security uid).
Also have a look at the multiplicative inverse answer given by Andy Hayden, below.
Another simple solution is to use multiplicative inverses (see Eric Lippert's blog):
we showed how you can take any two coprime positive integers x and m and compute a third positive integer y with the property that (x * y) % m == 1, and therefore that (x * z * y) % m == z % m for any positive integer z. That is, there always exists a “multiplicative inverse”, that “undoes” the results of multiplying by x modulo m.
We take a large number e.g. 4000000000 and a large co-prime number e.g. 387420489:
def rhash(n):
    return n * 387420489 % 4000000000
>>> rhash(12)
649045868
We first calculate the multiplicative inverse with modinv which turns out to be 3513180409:
>>> 3513180409 * 387420489 % 4000000000
1
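(On Python 3.8 or later, the built-in pow can compute that modular inverse directly, so a separate modinv helper isn't strictly needed:)

>>> pow(387420489, -1, 4000000000)
3513180409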
Now, we can define the inverse:
def un_rhash(h):
    return h * 3513180409 % 4000000000
>>> un_rhash(649045868) # un_rhash(rhash(12))
12
Note: This answer is fast to compute and works for numbers up to 4000000000, if you need to handle larger numbers choose a sufficiently large number (and another co-prime).
You may want to do this with hexadecimal (to pack the int):
def rhash(n):
    return "%08x" % (n * 387420489 % 4000000000)
>>> rhash(12)
'26afa76c'
def un_rhash(h):
    return int(h, 16) * 3513180409 % 4000000000
>>> un_rhash('26afa76c') # un_rhash(rhash(12))
12
If you choose a relatively large co-prime then this will seem random, be non-sequential and also be quick to calculate.
What you are asking for is encryption. A block cipher in its basic mode of operation, ECB, reversibly maps an input block onto an output block of the same size. The input and output blocks can be interpreted as numbers.
For example, AES is a 128 bit block cipher, so it maps an input 128 bit number onto an output 128 bit number. If 128 bits is good enough for your purposes, then you can simply pad your input number out to 128 bits, transform that single block with AES, then format the output as a 128 bit number.
If 128 bits is too large, you could use a 64 bit block cipher, like 3DES, IDEA or Blowfish.
ECB mode is considered weak, but its weakness is the constraint that you have postulated as a requirement (namely, that the mapping be "deterministic"). This is a weakness, because once an attacker has observed that 123 maps to 9874362483910978, from then on whenever she sees the latter number, she knows the plaintext was 123. An attacker can perform frequency analysis and/or build up a dictionary of known plaintext/ciphertext pairs.
Basically, you are looking for two-way encryption, and one that probably uses a salt.
You have a number of choices:
TripleDES
AES
Here is an example:" Simple insecure two-way "obfuscation" for C#
What language are you looking at? If .NET then look at the encryption namespace for some ideas.
Why not just XOR with a nice long number?
Easy. Fast. Reversible.
Or, if this doesn't need to be terribly secure, you could convert from base 10 to some smaller base (like base 8 or base 4, depending on how long you want the numbers to be).
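A minimal sketch of the XOR idea (the key below is arbitrary and only for illustration):

KEY = 0x5DEECE66D  # any fixed constant works

def obfuscate(n):
    return n ^ KEY

def deobfuscate(x):
    return x ^ KEY  # XOR with the same key is its own inverse

assert deobfuscate(obfuscate(123)) == 123

Note that XOR only scrambles the bits the key covers, so inputs much larger than the key will still look sequential in their high bits.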
So in Ruby there is a trick to specify infinity:
1.0/0
=> Infinity
I believe in Python you can do something like this
float('inf')
These are just examples, though; I'm sure most languages have infinity in some capacity. When would you actually use this construct in the real world? Why would using it in a range be better than just using a boolean expression? For instance
(0..1.0/0).include?(number) == (number >= 0) # True for all values of number
=> true
To summarize, what I'm looking for is a real world reason to use Infinity.
EDIT: I'm looking for real world code. It's all well and good to say this is when you "could" use it; when have people actually used it?
Dijkstra's algorithm typically assigns infinity as the initial distance estimate for each node in a graph. This doesn't have to be "infinity", just some arbitrarily large constant, but in Java I typically use Double.POSITIVE_INFINITY. I assume Ruby could be used similarly.
Off the top of my head, it can be useful as an initial value when searching for a minimum value.
For example:
min = float('inf')
for x in somelist:
    if x < min:
        min = x
Which I prefer to setting min initially to the first value of somelist
Of course, in Python, you should just use the min() built-in function in most cases.
There seems to be an implied "Why does this functionality even exist?" in your question. And the reason is that Ruby and Python are just giving access to the full range of values that one can specify in floating point form as specified by IEEE.
This page seems to describe it well:
http://steve.hollasch.net/cgindex/coding/ieeefloat.html
As a result, you can also have NaN (Not-a-number) values and -0.0, while you may not immediately have real-world uses for those either.
In some physics calculations you can normalize irregularities (i.e., infinite numbers) of the same order with each other, canceling them both and allowing an approximate result to come through.
When you deal with limits, calculations like (infinity / infinity) approaching a finite number can come up. It's useful for the language to have the ability to override the regular divide-by-zero error.
Use Infinity and -Infinity when implementing a mathematical algorithm calls for it.
In Ruby, Infinity and -Infinity have nice comparative properties so that -Infinity < x < Infinity for any real number x. For example, Math.log(0) returns -Infinity, extending to 0 the property that x > y implies that Math.log(x) > Math.log(y). Also, Infinity * x is Infinity if x > 0, -Infinity if x < 0, and 'NaN' (not a number; that is, undefined) if x is 0.
For example, I use the following bit of code in part of the calculation of some log likelihood ratios. I explicitly reference -Infinity to define a value even if k is 0 or n AND x is 0 or 1.
Infinity = 1.0/0.0

def Similarity.log_l(k, n, x)
  unless x == 0 or x == 1
    k * Math.log(x.to_f) + (n - k) * Math.log(1.0 - x)
  else
    -Infinity
  end
end
Alpha-beta pruning
I use it to specify the mass and inertia of a static object in physics simulations. Static objects are essentially unaffected by gravity and other simulation forces.
In Ruby, infinity can be used to implement lazy lists. Say I want N numbers starting at 200 which get successively larger by 100 units each time:
Inf = 1.0 / 0.0
(200..Inf).step(100).take(N)
More info here: http://banisterfiend.wordpress.com/2009/10/02/wtf-infinite-ranges-in-ruby/
I've used it for cases where you want to define ranges of allowed values / preferences. For example, in 37signals apps you have a limit on the number of projects:
Infinity = 1 / 0.0
FREE = 0..1
BASIC = 0..5
PREMIUM = 0..Infinity
then you can do checks like
if PREMIUM.include? current_user.projects.count
  # do something
end
I used it for representing camera focus distance and to my surprise in Python:
>>> float("inf") is float("inf")
False
>>> float("inf") == float("inf")
True
I wonder why that is. (Presumably is compares object identity, and each call to float("inf") creates a new float object, while == compares values.)
I've used it in the minimax algorithm. When I'm generating new moves, if the min player wins on that node then the value of the node is -∞. Conversely, if the max player wins then the value of that node is +∞.
Also, if you're generating nodes/game states and then trying out several heuristics, you can set all the node values to -∞/+∞, whichever makes sense, and then when you're running a heuristic it's easy to update the node value:
node_val = -float('inf')
node_val = max(heuristic1(node), node_val)
node_val = max(heuristic2(node), node_val)
node_val = max(heuristic3(node), node_val)
I've used it in a DSL similar to Rails' has_one and has_many:
has 0..1 :author
has 0..INFINITY :tags
This makes it easy to express concepts like Kleene star and plus in your DSL.
I use it when I have a Range object where one or both ends need to be open
I've used symbolic values for positive and negative infinity in dealing with range comparisons to eliminate corner cases that would otherwise require special handling:
Given two ranges A=[a,b) and C=[c,d) do they intersect, is one greater than the other, or does one contain the other?
A > C iff a >= d
A < C iff b <= c
etc...
If you have values for positive and negative infinity that respectively compare greater than and less than all other values, you don't need to do any special handling for open-ended ranges. Since floats and doubles already implement these values, you might as well use them instead of trying to find the largest/smallest values on your platform. With integers, it's more difficult to use "infinity" since it's not supported by hardware.
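A sketch of that in Python, using floats for the open ends (the interval representation here is my own):

INF = float('inf')

def intersects(a, b, c, d):
    # half-open intervals [a, b) and [c, d); an open end is just +/-INF
    return a < d and c < b

print(intersects(0, 10, 5, INF))    # True: [0, 10) overlaps [5, inf)
print(intersects(-INF, 0, 0, INF))  # False: [-inf, 0) and [0, inf) only touch at 0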
I ran across this because I'm looking for an "infinite" value to set for a maximum, if a given value doesn't exist, in an attempt to create a binary tree. (Because I'm selecting based on a range of values, and not just a single value, I quickly realized that even a hash won't work in my situation.)
Since I expect all numbers involved to be positive, the minimum is easy: 0. Since I don't know what to expect for a maximum, though, I would like the upper bound to be Infinity of some sort. This way, I won't have to figure out what "maximum" I should compare things to.
Since this is a project I'm working on at work, it's technically a "real world problem". It may be kind of rare, but like a lot of abstractions, it's convenient when you need it!
Also, to those who say that this (and other examples) are contrived, I would point out that all abstractions are somewhat contrived; that doesn't mean they aren't useful when you need them.
When working in a problem domain where trig is used (especially tangent), infinity is an answer that can come up. Trig ends up being used heavily in graphics applications, games, and geospatial applications, plus the obvious math applications.
I'm sure there are other ways to do this, but you could use Infinity to check for reasonable inputs in a String-to-Float conversion. In Java, at least, the Float.isNaN() static method will return false for numbers with infinite magnitude, indicating they are valid numbers, even though your program might want to classify them as invalid. Checking against the Float.POSITIVE_INFINITY and Float.NEGATIVE_INFINITY constants solves that problem. For example:
// Some sample values to test our code with
String stringValues[] = {
    "-999999999999999999999999999999999999999999999",
    "12345",
    "999999999999999999999999999999999999999999999"
};

// Loop through each string representation
for (String stringValue : stringValues) {
    // Convert the string representation to a Float representation
    Float floatValue = Float.parseFloat(stringValue);
    System.out.println("String representation: " + stringValue);
    System.out.println("Result of isNaN: " + floatValue.isNaN());

    // Check the result for positive infinity, negative infinity, and
    // "normal" float numbers (within the defined range for Float values).
    if (floatValue == Float.POSITIVE_INFINITY) {
        System.out.println("That number is too big.");
    } else if (floatValue == Float.NEGATIVE_INFINITY) {
        System.out.println("That number is too small.");
    } else {
        System.out.println("That number is jussssst right.");
    }
}
Sample Output:
String representation: -999999999999999999999999999999999999999999999
Result of isNaN: false
That number is too small.
String representation: 12345
Result of isNaN: false
That number is jussssst right.
String representation: 999999999999999999999999999999999999999999999
Result of isNaN: false
That number is too big.
It is used quite extensively in graphics. For example, any pixel in a 3D image that is not part of an actual object is marked as infinitely far away. So that it can later be replaced with a background image.
I'm using a network library where you can specify the maximum number of reconnection attempts. Since I want mine to reconnect forever:
my_connection = ConnectionLibrary(max_connection_attempts = float('inf'))
In my opinion, it's more clear than the typical "set to -1 to retry forever" style, since it's literally saying "retry until the number of connection attempts is greater than infinity".
Some programmers use Infinity or NaNs to show a variable has never been initialized or assigned in the program.
If you want to find the largest number from user input, but the input might include very large negatives: if I enter -13543124321.431 it still works out as the largest number, since it's bigger than -inf.
initial_value = float('-inf')
while True:
    try:
        x = input('gimmee a number or type the word, stop ')
    except KeyboardInterrupt:
        print("we done - by yo command")
        break
    if x == "stop":
        print("we done")
        break
    try:
        x = float(x)
    except ValueError:
        print('not a number')
        continue
    if x > initial_value:
        initial_value = x
print("The largest number is: " + str(initial_value))
You can use:
import decimal
decimal.Decimal("Infinity")
or:
from decimal import *
Decimal("Infinity")
For sorting
I've seen it used as a sort value, to say "always sort these items to the bottom".
To specify a non-existent maximum
If you're dealing with numbers, nil represents an unknown quantity, and should be preferred to 0 for that case. Similarly, Infinity represents an unbounded quantity, and should be preferred to (arbitrarily_large_number) in that case.

I think it can make the code cleaner. For example, I'm using Float::INFINITY in a Ruby gem for exactly that: the user can specify a maximum string length for a message, or they can specify :all. In that case, I represent the maximum length as Float::INFINITY, so that later when I check "is this message longer than the maximum length?" the answer will always be false, without needing a special case.
I think it can make the code cleaner. For example, I'm using Float::INFINITY in a Ruby gem for exactly that: the user can specify a maximum string length for a message, or they can specify :all. In that case, I represent the maximum length as Float::INFINITY, so that later when I check "is this message longer than the maximum length?" the answer will always be false, without needing a special case.