Subclassing float to force fixed-point printing precision in Python

[Python 3.1]
I'm following up on this answer:
class prettyfloat(float):
    def __repr__(self):
        return "%0.2f" % self
I know I need to keep track of my float literals (i.e., replace 3.0 with prettyfloat(3.0), etc.), and that's fine.
But whenever I do any calculations, prettyfloat objects get converted into float.
What's the easiest way to fix it?
EDIT:
I need exactly two decimal digits; and I need it across the whole code, including where I print a dictionary with float values inside. That makes any formatting functions hard to use.
I can't use Decimal global setting, since I want computations to be at full precision (just printing at 2 decimal points).
@Glenn Maynard: I agree I shouldn't override __repr__; if anything, it would be just __str__. But it's a moot point because of the following point.
@Glenn Maynard and @singularity: I won't subclass float, since I agree it would look very ugly in the end.
I will stop trying to be clever, and just call a function everywhere a float is being printed. Though I am really sad that I can't override __str__ in the builtin class float.
Thank you!

I had a look at the answer you followed up on, and I think you're confusing data and its representation.
@Robert Rossney suggested subclassing float so you could map() an iterable of standard, non-adulterated floats into prettyfloats for display purposes:
# Perform all our computations using standard floats.
results = compute_huge_numbers(42)
# Switch to prettyfloats for printing; list() is needed in Python 3,
# where map() returns a lazy iterator.
print(list(map(prettyfloat, results)))
In other words, you were not supposed to (and you shouldn't) use prettyfloat as a replacement for float everywhere in your code.
Of course, inheriting from float to solve that problem is overkill, since it's a representation problem and not a data problem. A simple function would be enough:
def prettyfloat(number):
    return "%0.2f" % number  # Works the same.
Now, if it's not about representation after all, and what you actually want to achieve is fixed-point computations limited to two decimal places everywhere in your code, that's another story entirely.

That's because prettyfloat (op) prettyfloat doesn't return a prettyfloat.
example:
>>> prettyfloat(0.6)
0.60 # type prettyfloat
>>> prettyfloat(0.6) + prettyfloat(4.4)
5.0 # type float
The solution, if you don't want to cast every operation result manually to prettyfloat and you still want to use prettyfloat, is to override all the operators.
Example with the __add__ operator (which is ugly):
class prettyfloat(float):
    def __repr__(self):
        return "%0.2f" % self
    def __add__(self, other):
        return prettyfloat(float(self) + other)

>>> prettyfloat(0.6) + prettyfloat(4.4)
5.00
By doing this, I think you will also have to change the name from prettyfloat to uglyfloat :). Hope this helps.
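If you do go down the subclassing road, writing every operator by hand gets tedious. A sketch (not from the original answer) that generates the overrides in a loop, assuming you only need the common arithmetic operators:

```python
# Sketch: generate prettyfloat's operator overrides in a loop instead of
# writing each one by hand. Results that come back as plain floats are
# coerced back to prettyfloat; anything else (e.g. NotImplemented) is
# passed through untouched.
class prettyfloat(float):
    def __repr__(self):
        return "%0.2f" % self

def _wrap(name):
    def method(self, *args):
        result = getattr(float, name)(self, *args)
        return prettyfloat(result) if isinstance(result, float) else result
    return method

for _name in ("__add__", "__radd__", "__sub__", "__rsub__",
              "__mul__", "__rmul__", "__truediv__", "__rtruediv__",
              "__pow__", "__rpow__", "__neg__", "__abs__"):
    setattr(prettyfloat, _name, _wrap(_name))

print(repr(prettyfloat(0.6) + prettyfloat(4.4)))  # 5.00
```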

Use decimal. This is what it's for.
>>> import decimal
>>> decimal.getcontext().prec = 2
>>> one = decimal.Decimal("1.0")
>>> three = decimal.Decimal("3.0")
>>> one / three
Decimal('0.33')
...unless you actually want to work with full-precision floats everywhere in your code but print them rounded to two decimal places. In that case, you need to rewrite your printing logic.
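For that last case, a small recursive helper (the name round_repr is illustrative, not from the original answers) can format floats to two decimals even inside dicts and lists, which is what the asker needed:

```python
def round_repr(obj, places=2):
    """Return a display string with every float fixed to `places` decimals."""
    if isinstance(obj, float):
        return "%.*f" % (places, obj)
    if isinstance(obj, dict):
        body = ", ".join(
            "%s: %s" % (round_repr(k, places), round_repr(v, places))
            for k, v in obj.items()
        )
        return "{" + body + "}"
    if isinstance(obj, (list, tuple)):
        body = ", ".join(round_repr(x, places) for x in obj)
        return ("[" + body + "]") if isinstance(obj, list) else ("(" + body + ")")
    return repr(obj)

print(round_repr({"pi": 3.14159, "vals": [1.0, 2]}))  # {'pi': 3.14, 'vals': [1.00, 2]}
```

The underlying values stay at full precision; only the display string is rounded.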

Related

What is the 'f' in front of a few math functions in Python?

I am learning some basic modules in Python and came across the math module.
I noticed an 'f' in front of a few functions, like fabs, fmod, frexp, fsum, etc.
What is this 'f' in these functions?
It's the floating-point-returning version of some functions that may otherwise return an integer. Example:
>>> abs(50)
50
>>> from math import *
>>> fabs(50)
50.0
>>>
Since the return type is different, you cannot have only one function.
Note: As dawg mentioned, it could check the input type and return the same type, but that may not be what you want; everyone would end up forcing the type to float or int to make sure.
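For some of these functions the 'f' also hints at different behavior, not just a different return type. For example, math.fsum tracks intermediate partial sums to avoid the rounding error that the built-in sum accumulates:

```python
import math

values = [0.1] * 10  # ten tenths should be exactly 1.0

print(sum(values))        # plain sum accumulates binary rounding error
print(math.fsum(values))  # fsum compensates and returns exactly 1.0
```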

Counterintuitive behaviour of int() in python

It's clearly stated in the docs that int(number) is a truncating type conversion:
>>> int(1.23)
1
and int(string) returns an int if and only if the string is an integer literal:
>>> int('1.23')
ValueError: invalid literal for int() with base 10: '1.23'
>>> int('1')
1
Is there any special reason for that? I find it counterintuitive that the function truncates in one case, but not the other.
There is no special reason. Python is simply applying its general principle of not performing implicit conversions, which are well-known causes of problems, particularly for newcomers, in languages such as Perl and Javascript.
int(some_string) is an explicit request to convert a string to integer format; the rules for this conversion specify that the string must contain a valid integer literal representation. int(float) is an explicit request to convert a float to an integer; the rules for this conversion specify that the float's fractional portion will be truncated.
In order for int("3.1459") to return 3 the interpreter would have to implicitly convert the string to a float. Since Python doesn't support implicit conversions, it chooses to raise an exception instead.
This is almost certainly a case of applying three of the principles from the Zen of Python:
Explicit is better than implicit.
[...] practicality beats purity
Errors should never pass silently
Some percentage of the time, someone doing int('1.23') is calling the wrong conversion for their use case, and wants something like float or decimal.Decimal instead. In these cases, it's clearly better for them to get an immediate error that they can fix, rather than silently giving the wrong value.
In the case that you do want to truncate that to an int, it is trivial to explicitly do so by passing it through float first, and then calling one of int, round, trunc, floor or ceil as appropriate. This also makes your code more self-documenting, guarding against a later modification "correcting" a hypothetical silently-truncating int call to float by making it clear that the rounded value is what you want.
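The explicit two-step conversions make the rounding choice visible, and each choice differs on negative input:

```python
import math

# Explicitly parse the string as a float, then pick a rounding behavior.
print(int(float("-2.7")))    # truncates toward zero: -2
print(math.trunc(-2.7))      # also truncates toward zero: -2
print(math.floor(-2.7))      # rounds down: -3
print(math.ceil(-2.7))       # rounds up: -2
print(round(-2.7))           # rounds to nearest: -3
```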
Sometimes a thought experiment can be useful.
Behavior A: int('1.23') fails with an error. This is the existing behavior.
Behavior B: int('1.23') produces 1 without error. This is what you're proposing.
With behavior A, it's straightforward and trivial to get the effect of behavior B: use int(float('1.23')) instead.
On the other hand, with behavior B, getting the effect of behavior A is significantly more complicated:
def parse_pure_int(s):
    if "." in s:
        raise ValueError("invalid literal for integer with base 10: " + s)
    return int(s)
(and even with the code above, I don't have complete confidence that there isn't some corner case that it mishandles.)
Behavior A therefore is more expressive than behavior B.
Another thing to consider: '1.23' is a string representation of a floating-point value. Converting '1.23' to an integer conceptually involves two conversions (string to float to integer), but int(1.23) and int('1') each involve only one conversion.
Edit:
And indeed, there are corner cases that the above code would not handle: '1e-2' and '1E-2' are both string representations of floating-point values too.
In simple words - they're not the same function.
int(number) behaves as 'truncate, i.e. knock off the fractional portion and return as int' (which matches flooring only for non-negative values).
int(string) behaves as 'this text describes an integer; convert it and return as int'.
They are 2 different functions with the same name that return an integer but they are different functions.
'int' is short and easy to remember, and its meaning applied to each type is intuitive to most programmers, which is why they chose it.
There's no implication that they provide the same or combined functionality; they simply share a name and return the same type. They could as easily be called 'floorDecimalAsInt' and 'convertStringToInt', but they went for 'int' because it's easy to remember, (99%) intuitive, and confusion would rarely occur.
Parsing text that includes a decimal point, such as "4.5", as an integer would throw an error in the majority of programming languages, and the majority of programmers would expect it to, since the text does not represent an integer and implies the caller is providing erroneous data.

How do I ONLY round a number/float down in Python?

I will have a randomly generated number, e.g. 12.75, 1.999999999, or 2.65.
I want to always round this number down to the nearest whole integer, so 2.65 would be rounded to 2.
Sorry for asking but I couldn't find the answer after numerous searches, thanks :)
You can use either int(), math.trunc(), or math.floor(). They will all do what you want for positive numbers:
>>> import math
>>> math.floor(12.6) # returns 12.0 in Python 2
12
>>> int(12.6)
12
>>> math.trunc(12.6)
12
However, note that they behave differently with negative numbers: int and math.trunc round toward zero, whereas math.floor always rounds downwards:
>>> import math
>>> math.floor(-12.6) # returns -13.0 in Python 2
-13
>>> int(-12.6)
-12
>>> math.trunc(-12.6)
-12
Note that math.floor and math.ceil used to return floats in Python 2.
Also note that int and math.trunc will both (at first glance) appear to do the same thing, though their exact semantics differ. In short: int is for general/type conversion and math.trunc is specifically for numeric types (and will help make your intent more clear).
Use int if you don't really care about the difference, if you want to convert strings, or if you don't want to import a library. Use trunc if you want to be absolutely unambiguous about what you mean or if you want to ensure your code works correctly for non-builtin types.
More info below:
Math.floor() in Python 2 vs Python 3
Note that math.floor (and math.ceil) were changed slightly from Python 2 to Python 3 -- in Python 2, both functions return a float instead of an int. This was changed in Python 3 so that both methods return an int (more specifically, they call the __floor__ or __ceil__ method on whatever object they were given). So, if you're using Python 2, or would like your code to maintain compatibility between the two versions, it is generally safe to do int(math.floor(...)).
For more information about why this change was made + about the potential pitfalls of doing int(math.floor(...)) in Python 2, see
Why do Python's math.ceil() and math.floor() operations return floats instead of integers?
int vs math.trunc()
At first glance, the int() and math.trunc() methods will appear to be identical. The primary differences are:
int(...)
The int function will accept floats, strings, and ints.
Running int(param) will call the param.__int__() method in order to perform the conversion (and then will try calling __trunc__ if __int__ is undefined)
The __int__ magic method was not always unambiguously defined -- for some period of time, it turned out that the exact semantics and rules of how __int__ should work were largely left up to the implementing class.
The int function is meant to be used when you want to convert a general object into an int. It's a type conversion method. For example, you can convert strings to ints by doing int("42") (or do things like change of base: int("AF", 16) -> 175).
math.trunc(...)
math.trunc will only accept numeric types (ints, floats, etc.)
Running math.trunc(param) will call the param.__trunc__() method in order to perform the conversion
The exact behavior and semantics of the __trunc__ magic method was precisely defined in PEP 3141 (and more specifically in the Changes to operations and __magic__ methods section).
The math.trunc function is meant to be used when you want to take an existing real number and specifically truncate and remove its decimals to produce an integral type. This means that unlike int, math.trunc is a purely numeric operation.
All that said, it turns out all of Python's built-in types will behave exactly the same whether you use int or trunc. This means that if all you're doing is using regular ints, floats, fractions, and decimals, you're free to use either int or trunc.
However, if you want to be very precise about what exactly your intent is (ie if you want to make it absolutely clear whether you're flooring or truncating), or if you're working with custom numeric types that have different implementations for __int__ and __trunc__, then it would probably be best to use math.trunc.
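A toy illustration of such a custom type (entirely hypothetical, just to show the two hooks) where __int__ and __trunc__ deliberately disagree:

```python
import math

class Interval:
    """Toy numeric type whose conversion and truncation semantics differ."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __int__(self):
        # "Convert" means: take the midpoint, truncated toward zero.
        return math.trunc((self.lo + self.hi) / 2)

    def __trunc__(self):
        # "Truncate" means: take the integer part of the lower bound.
        return math.trunc(self.lo)

iv = Interval(1.5, 4.5)
print(int(iv))         # calls __int__: 3
print(math.trunc(iv))  # calls __trunc__: 1
```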
You can also find more information and debate about this topic on Python's developer mailing list.
You can do this easily with a built-in Python operator: just floor-divide by 1, using two forward slashes.
>>> print 12.75//1
12.0
>>> print 1.999999999//1
1.0
>>> print 2.65//1
2.0
No need to import any module like math.
Python truncates by default if you simply cast to an integer:
>>> x = 2.65
>>> int(x)
2
I'm not sure whether you want math.floor, math.trunc, or int, but... it's almost certainly one of those functions, and you can probably read the docs and decide more easily than you can explain enough for us to decide for you.
Obviously, Michael0x2a's answer is what you should do. But, you can always get a bit creative.
int(str(12.75).split('.')[0])
If you're only looking for the integer part, I think the best option would be the math.trunc() function.
import math
math.trunc(123.456)
You can also use int()
int(123.456)
The difference between these two functions is that int() also handles numeric string conversion, while trunc() only deals with numeric values:
int('123')
# 123
trunc(), on the other hand, will throw an exception:
math.trunc('123')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-62-f9aa08f6d314> in <module>()
----> 1 math.trunc('123')
TypeError: type str doesn't define __trunc__ method
If you know you're only dealing with numeric data, you should consider using trunc(), since it's faster than int():
timeit.timeit("math.trunc(123.456)", setup="import math", number=10_000)
# 0.0011689490056596696
timeit.timeit("int(123.456)", number=10_000)
# 0.0014109049952821806

Python, How to extend Decimal class to add helpful methods

I would like to extend the Decimal class to add some helpful methods to it, specially for handling money.
The problem occurs when I do this:
from decimal import Decimal

class NewDecimal(Decimal):
    def new_str(self):
        return "${}".format(self)

d1 = NewDecimal(1)
print d1.new_str()  # prints '$1'
d2 = NewDecimal(2)
d3 = NewDecimal(3)
d5 = d2 + d3
print d5.new_str()  # exception happens here
It throws an exception:
AttributeError: 'Decimal' object has no attribute 'new_str'
This is because of the way Decimal does arithmetic, it always returns a new Decimal object, by literally calling Decimal(new value) at the end of the computation.
Does anyone know a workaround for this, other than completely reimplementing all the arithmetic?
You probably don't actually want to do this just to have an extra method for printing Decimal objects in an alternate way. A top-level function or monkeypatched method is a whole lot simpler, and cleaner. Or, alternatively, a Money class that has a Decimal member that it delegates arithmetic to.
But what you want is doable.
To make NewDecimal(1) + NewDecimal(2) return NewDecimal(3), you can just override __add__:
def __add__(self, rhs):
    return NewDecimal(super().__add__(rhs))
And of course you'll want to override __iadd__ as well. And don't forget __mul__ and all the other numeric special methods.
But that still won't help for Decimal(2) + NewDecimal(3). To make that work, you need to define NewDecimal.__radd__. You also need to ensure that NewDecimal.__radd__ will get called instead of Decimal.__add__, but when you're using inheritance, that's easy, because Python has a rule specifically to make this easy:
Note: If the right operand’s type is a subclass of the left operand’s type and that subclass provides the reflected method for the operation, this method will be called before the left operand’s non-reflected method. This behavior allows subclasses to override their ancestors’ operations.
You may want to read the section Implementing the arithmetic operations in the numbers module docs, and the implementation of fractions.Fraction (which was intended to serve as sample code for creating new numeric types, which is why the docs link directly to the source). Your life is easier than Fraction's because you can effectively fall back to Decimal for every operation and then convert (since NewDecimal doesn't have any different numeric behavior from Decimal), but it's worth seeing all the issues, and understanding which ones are and aren't relevant and why.
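Since NewDecimal falls back to Decimal for every operation and then converts, the overrides can be generated in a loop rather than written out by hand. A sketch under that assumption (only the listed operators are covered):

```python
from decimal import Decimal

class NewDecimal(Decimal):
    def new_str(self):
        return "${}".format(self)

def _coerce(name):
    def method(self, *args):
        # Delegate to Decimal, then wrap Decimal results back into NewDecimal.
        result = getattr(Decimal, name)(self, *args)
        return NewDecimal(result) if isinstance(result, Decimal) else result
    return method

for _name in ("__add__", "__radd__", "__sub__", "__rsub__",
              "__mul__", "__rmul__", "__truediv__", "__rtruediv__",
              "__neg__", "__abs__"):
    setattr(NewDecimal, _name, _coerce(_name))

total = NewDecimal(2) + NewDecimal(3)
print(total.new_str())  # $5
```

Because NewDecimal defines the reflected methods too, even mixed expressions like Decimal(2) + NewDecimal(3) come back as NewDecimal, thanks to the subclass rule quoted above.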
The quick way to get what you want would be like this:
from decimal import Decimal

class NewDecimal(Decimal):
    def __str__(self):
        return "${}".format(self)
    def __add__(self, b):
        return NewDecimal(Decimal.__add__(self, b))

d1 = NewDecimal(1)
print d1  # prints '$1'
d2 = NewDecimal(2)
d3 = NewDecimal(3)
d5 = d2 + d3
print d5  # prints '$5'

What class to use for money representation?

What class should I use for representation of money to avoid most rounding errors?
Should I use Decimal, or a simple built-in number?
Is there any existing Money class with support for currency conversion that I could use?
Any pitfalls that I should avoid?
Never use a floating-point number to represent money. Floating-point numbers cannot represent decimal fractions accurately. You would end up with a nightmare of compound rounding errors, and be unable to reliably convert between currencies. See Martin Fowler's short essay on the subject.
If you decide to write your own class, I recommend basing it on the decimal data type.
I don't think python-money is a good option, because it hasn't been maintained for quite some time, its source code contains some strange and useless code, and currency exchange is simply broken.
Try py-moneyed. It's an improvement over python-money.
Just use decimal.
http://code.google.com/p/python-money/
"Primitives for working with money and currencies in Python" - the title is self explanatory :)
You might be interested in QuantLib for working with finance.
It has built in classes for handling currency types and claims 4 years of active development.
You could have a look at this library: python-money. Since I have no experience with it, I cannot comment on its usefulness.
A 'trick' you could employ to handle currency as integers:
Multiply by 100 when storing and divide by 100 when displaying (e.g. $100.25 -> 10025) to keep a representation in 'cents'.
Simple, light-weight, yet extensible idea:
class Money():
def __init__(self, value):
# internally use Decimal or cents as long
self._cents = long(0)
# Now parse 'value' as needed e.g. locale-specific user-entered string, cents, Money, etc.
# Decimal helps in conversion
def as_my_app_specific_protocol(self):
# some application-specific representation
def __str__(self):
# user-friendly form, locale specific if needed
# rich comparison and basic arithmetics
def __lt__(self, other):
return self._cents < Money(other)._cents
def __add__(self, other):
return Money(self._cents + Money(other)._cents)
You can:
Implement only what you need in your application.
Extend it as you grow.
Change internal representation and implementation as needed.
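The cents-based trick can be made concrete with a minimal runnable version (class and method names are illustrative, and the parsing handles only simple non-negative "dollars.cents" strings):

```python
class Money:
    """Store currency as an integer number of cents to avoid float rounding."""
    def __init__(self, cents):
        self._cents = int(cents)

    @classmethod
    def from_string(cls, text):
        # Parse e.g. "100.25" -> 10025 cents using only integer math.
        # Assumes a non-negative "dollars.cents" string.
        dollars, _, cents = text.partition(".")
        return cls(int(dollars) * 100 + int(cents.ljust(2, "0")[:2]))

    def __add__(self, other):
        return Money(self._cents + other._cents)

    def __lt__(self, other):
        return self._cents < other._cents

    def __str__(self):
        return "$%d.%02d" % (self._cents // 100, self._cents % 100)

print(Money.from_string("100.25") + Money.from_string("0.50"))  # $100.75
```

Unlike floats, adding cents never loses a fraction of a cent, and the display logic lives in one place.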
