What is the 'f' in front of some math functions in Python?

I am learning some basic modules in Python and came across the math module.
I noticed an 'f' in front of a few functions, like fabs, fmod, frexp, fsum, etc.
May I know what this 'f' means in these functions?

It's the floating-point-returning version of some functions that may otherwise return an integer. Example:
>>> abs(50)
50
>>> from math import *
>>> fabs(50)
50.0
>>>
Since the return type is different, you cannot have just one function.
Note: As dawg mentioned, it could check the input type and return the same type, but that may not be what you want; everyone would end up forcing the type to float or int to make sure.
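To illustrate with two of the functions from the question; note that for fsum the 'f' signals more than just the return type, since it also uses a float-accurate summation algorithm:

```python
import math

# math.fabs always returns a float, even for an int argument:
print(abs(-5))         # 5
print(math.fabs(-5))   # 5.0

# math.fsum also tracks partial sums to avoid rounding error:
print(sum([0.1] * 10))        # 0.9999999999999999
print(math.fsum([0.1] * 10))  # 1.0
```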

Related

Question on Python treatment of numpy.int32 vs int

In coding up a simple Fibonacci script, I found some 'odd' behaviour in how Python treats numpy.int32 vs how it treats regular int numbers.
Can anyone help me understand what causes this behaviour?
Using the Fibonacci code as follows, leveraging caching to significantly speed things up:
from functools import lru_cache
import numpy as np
@lru_cache(maxsize=None)
def fibo(n):
    if n <= 1:
        return n
    else:
        return fibo(n-1) + fibo(n-2)
If I define a Numpy array of numbers to calculate over (with np.arange), it all works well until n = 47, then things start going haywire. If, on the other hand, I use a regular python list, then the values are all correctly calculated
You should be able to see the difference with the following:
fibo(np.int32(47)), fibo(47)
Which should return (at least it does for me):
(-1323752223, 2971215073)
Obviously, something very wrong has occurred with the calculations against the numpy.int32 input. Now, I can get around the issue by simply inserting an 'n = int(n)' line in the fibo function before anything else is evaluated, but I don't understand why this is necessary.
I've also tried np.int(47) instead of np.int32(47), and found that the former works just fine. However, using np.arange to create the array seems to default to np.int32 data type.
I've tried removing the caching (I wouldn't recommend you try - it takes around 2 hours to calculate to n = 47) - and I get the same behaviour, so that is not the cause.
Can anyone shed some insight into this for me?
Thanks
Python's "integers have unlimited precision". This was built into the language so that new users have "one less thing to learn".
Though maybe not in your case, or for anyone using NumPy. That library is designed to make computations as fast as possible. It therefore uses data types that are well supported by the CPU architecture, such as 32-bit and 64-bit integers that neatly fit into a CPU register and have an invariable memory footprint.
But then we're back to dealing with overflow problems like in any other programming language. NumPy does warn about that though:
>>> print(fibo(np.int32(47)))
fib.py:9: RuntimeWarning: overflow encountered in long_scalars
return fibo(n-1)+fibo(n-2)
-1323752223
Here we are using a signed 32-bit integer. The largest positive number it can hold is 2^31 - 1 = 2147483647. But the 47th Fibonacci number is even larger than that: it's 2971215073, as you calculated. In that case, the 32-bit integer overflows and we end up with -1323752223, which is its two's complement:
>>> 2971215073 + 1323752223 == 2**32
True
It worked with np.int because that's just an alias of the built-in int, so it returns a Python integer (note that this alias was deprecated in NumPy 1.20 and later removed):
>>> np.int is int
True
For more on this, see: What is the difference between native int type and the numpy.int types?
Also note that np.arange for integer arguments returns an integer array of type np.int_ (with a trailing underscore, unlike np.int). That data type is platform-dependent and maps to 32-bit integers on Windows, but 64-bit on Linux.
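A small sketch of the wrap-around, using np.iinfo to query the limits of the fixed-width type:

```python
import numpy as np

info = np.iinfo(np.int32)
print(info.max)  # 2147483647, i.e. 2**31 - 1

# Arithmetic between in-range int32 scalars silently wraps on
# overflow (NumPy emits a RuntimeWarning but returns a result):
wrapped = np.int32(info.max) + np.int32(1)
print(wrapped)  # -2147483648
```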

Cython returns 0 for expression that should evaluate to 0.5?

For some reason, Cython is returning 0 on a math expression that should evaluate to 0.5:
print(2 ** (-1)) # prints 0
Oddly enough, mix variables in and it'll work as expected:
i = 1
print(2 ** (-i)) # prints 0.5
Vanilla CPython returns 0.5 for both cases. I'm compiling for 37m-x86_64-linux-gnu, and language_level is set to 3.
What is this witchcraft?
It's because it's using C ints rather than Python integers so it matches C behaviour rather than Python behaviour. I'm relatively sure this used to be documented as a limitation somewhere but I can't find it now. If you want to report it as a bug then go to https://github.com/cython/cython/issues, but I suspect this is a deliberate trade-off of speed for compatibility.
The code gets translated to
__Pyx_pow_long(2, -1L)
where __Pyx_pow_long is a function of type static CYTHON_INLINE long __Pyx_pow_long(long b, long e).
The easiest way to fix it is to change one/both of the numbers to be a floating point number
print(2. ** (-1))
As a general comment on the design choice: people from the C world generally expect int operator int to return an int, and this option will be fastest. Python tried to do this in the past with the Python 2 division behaviour (though inconsistently: power always returned a floating-point number).
Cython generally tries to follow Python behaviour. However, a lot of people are using it for speed so they also try to fall back to quick, C-like operations especially when people specify types (since those people want speed). I think what's happened here is that it's been able to infer the types automatically, and so defaulted to C behaviour. I suspect ideally it should distinguish between specified types and types that it's inferred. However, it's also probably too late to start changing that.
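For comparison, plain CPython evaluates both forms identically; int ** negative-int always yields a float, whether the exponent is a literal or a variable:

```python
i = 1
print(2 ** (-1))  # 0.5
print(2 ** (-i))  # 0.5, no difference in CPython
```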
It looks like Cython is incorrectly inferring the final data type as int rather than float when only numbers are involved
The following code works as expected:
print(2.0 ** (-1))
See this link for a related discussion: https://groups.google.com/forum/#!topic/cython-users/goVpote2ScY

Differents way to define a function in sagemath

I would like to know why these two "programs" produce different output
f(x)=x^2
f(90).mod(7)
and
def f(x):
    return x^2
f(90).mod(7)
Thanks
Great question! Let's take a deeper look at the functions in question.
f(x)=x^2
def g(x):
    return x^2
print type(g(90))
print type(f(90))
This yields
<type 'sage.rings.integer.Integer'>
<type 'sage.symbolic.expression.Expression'>
So what you are seeing is the difference between a symbolic function defined with the f(x) notation and a Python function using the def keyword. In Sage, the former has access to a lot of stuff (e.g. calculus) that plain old Sage integers won't have.
What I would recommend in this case, just for what you need, is
sage: a = f(90)
sage: ZZ(a).mod(7)
1
or actually the possibly more robust
sage: mod(a,7)
1
Longer explanation.
For symbolic stuff, mod isn't what you think. In fact, I'm not sure it will do anything (see the documentation for mod to see how to use it for polynomial modular work over ideals, though). Here's the code (accessible with x.mod??, documentation accessible with x.mod?):
from sage.rings.ideal import is_Ideal
if not is_Ideal(I) or not I.ring() is self._parent:
    I = self._parent.ideal(I)
    # raise TypeError, "I = %s must be an ideal in %s" % (I, self.parent())
return I.reduce(self)
And it turns out that for generic rings (like the symbolic 'ring'), nothing happens in that last step:
return f
This is why we need to, one way or another, ask it to be an integer again. See Trac 27401.

How do I ONLY round a number/float down in Python?

I will have a random number generated, e.g. 12.75 or 1.999999999 or 2.65.
I want to always round this number down to the nearest whole integer, so 2.65 would be rounded to 2.
Sorry for asking, but I couldn't find the answer after numerous searches. Thanks :)
You can use either int(), math.trunc(), or math.floor(). They will all do what you want for positive numbers:
>>> import math
>>> math.floor(12.6) # returns 12.0 in Python 2
12
>>> int(12.6)
12
>>> math.trunc(12.6)
12
However, note that they behave differently with negative numbers: int and math.trunc will go to 0, whereas math.floor always floors downwards:
>>> import math
>>> math.floor(-12.6) # returns -13.0 in Python 2
-13
>>> int(-12.6)
-12
>>> math.trunc(-12.6)
-12
Note that math.floor and math.ceil used to return floats in Python 2.
Also note that int and math.trunc will both (at first glance) appear to do the same thing, though their exact semantics differ. In short: int is for general/type conversion and math.trunc is specifically for numeric types (and will help make your intent more clear).
Use int if you don't really care about the difference, if you want to convert strings, or if you don't want to import a library. Use trunc if you want to be absolutely unambiguous about what you mean or if you want to ensure your code works correctly for non-builtin types.
More info below:
Math.floor() in Python 2 vs Python 3
Note that math.floor (and math.ceil) changed slightly from Python 2 to Python 3: in Python 2, both functions return a float instead of an int. This was changed in Python 3 so that both methods return an int (more specifically, they now delegate to an object's __floor__ and __ceil__ methods, per PEP 3141). So, if you're using Python 2, or would like your code to maintain compatibility between the two versions, it is generally safe to do int(math.floor(...)).
For more information about why this change was made + about the potential pitfalls of doing int(math.floor(...)) in Python 2, see
Why do Python's math.ceil() and math.floor() operations return floats instead of integers?
int vs math.trunc()
At first glance, the int() and math.trunc() methods will appear to be identical. The primary differences are:
int(...)
The int function will accept floats, strings, and ints.
Running int(param) will call the param.__int__() method in order to perform the conversion (and then will try calling __trunc__ if __int__ is undefined)
The __int__ magic method was not always unambiguously defined -- for some period of time, it turned out that the exact semantics and rules of how __int__ should work were largely left up to the implementing class.
The int function is meant to be used when you want to convert a general object into an int. It's a type conversion method. For example, you can convert strings to ints by doing int("42") (or do things like change of base: int("AF", 16) -> 175).
math.trunc(...)
math.trunc will only accept numeric types (ints, floats, etc.)
Running math.trunc(param) will call the param.__trunc__() method in order to perform the conversion
The exact behavior and semantics of the __trunc__ magic method was precisely defined in PEP 3141 (and more specifically in the Changes to operations and __magic__ methods section).
The math.trunc function is meant to be used when you want to take an existing real number and specifically truncate and remove its decimals to produce an integral type. This means that unlike int, math.trunc is a purely numeric operation.
All that said, it turns out all of Python's built-in types will behave exactly the same whether you use int or trunc. This means that if all you're doing is using regular ints, floats, fractions, and decimals, you're free to use either int or trunc.
However, if you want to be very precise about what exactly your intent is (ie if you want to make it absolutely clear whether you're flooring or truncating), or if you're working with custom numeric types that have different implementations for __int__ and __trunc__, then it would probably be best to use math.trunc.
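A minimal sketch of such a custom type (the Measurement class is hypothetical) where int and math.trunc genuinely diverge:

```python
import math

class Measurement:
    """Hypothetical numeric type whose __int__ and __trunc__ differ."""
    def __init__(self, value):
        self.value = value
    def __int__(self):
        return round(self.value)   # conversion: round to nearest
    def __trunc__(self):
        return int(self.value)     # truncation: drop the decimals

m = Measurement(2.7)
print(int(m))         # 3, dispatches to __int__
print(math.trunc(m))  # 2, dispatches to __trunc__
```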
You can also find more information and debate about this topic on Python's developer mailing list.
You can do this easily with a built-in Python operator: just use two forward slashes and divide by 1.
>>> 12.75 // 1
12.0
>>> 1.999999999 // 1
1.0
>>> 2.65 // 1
2.0
No need to import any module like math, etc.
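One caveat with this trick: // floors toward negative infinity and keeps the float type, so it is not a drop-in replacement for int() when negatives are possible:

```python
print(12.75 // 1)   # 12.0 (still a float)
print(-2.65 // 1)   # -3.0 (floors down)
print(int(-2.65))   # -2   (truncates toward zero)
```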
Python truncates by default if you simply type-cast to an integer:
>>> x = 2.65
>>> int(x)
2
I'm not sure whether you want math.floor, math.trunc, or int, but... it's almost certainly one of those functions, and you can probably read the docs and decide more easily than you can explain enough for us to decide for you.
Obviously, Michael0x2a's answer is what you should do. But, you can always get a bit creative.
int(str(12.75).split('.')[0])
If you're only looking for the integer part, I think the best option would be to use the math.trunc() function.
import math
math.trunc(123.456)
You can also use int()
int(123.456)
The difference between these two functions is that the int() function also handles string-to-numeric conversion, whereas trunc() only deals with numeric values.
int('123')
# 123
Whereas the trunc() function will throw an exception:
math.trunc('123')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-62-f9aa08f6d314> in <module>()
----> 1 math.trunc('123')
TypeError: type str doesn't define __trunc__ method
If you know that you're only dealing with numeric data, you could consider using the trunc() function, since it's faster than int():
timeit.timeit("math.trunc(123.456)", setup="import math", number=10_000)
# 0.0011689490056596696
timeit.timeit("int(123.456)", number=10_000)
# 0.0014109049952821806

subclassing float to force fixed point printing precision in python

[Python 3.1]
I'm following up on this answer:
class prettyfloat(float):
    def __repr__(self):
        return "%0.2f" % self
I know I need to keep track of my float literals (i.e., replace 3.0 with prettyfloat(3.0), etc.), and that's fine.
But whenever I do any calculations, prettyfloat objects get converted into float.
What's the easiest way to fix it?
EDIT:
I need exactly two decimal digits; and I need it across the whole code, including where I print a dictionary with float values inside. That makes any formatting functions hard to use.
I can't use Decimal global setting, since I want computations to be at full precision (just printing at 2 decimal points).
@Glenn Maynard: I agree I shouldn't override __repr__; if anything, it would be just __str__. But it's a moot point because of the following point.
@Glenn Maynard and @singularity: I won't subclass float, since I agree it will look very ugly in the end.
I will stop trying to be clever, and just call a function everywhere a float is being printed. Though I am really sad that I can't override __str__ in the builtin class float.
Thank you!
I had a look at the answer you followed up on, and I think you're confusing data and its representation.
@Robert Rossney suggested subclassing float so you could map() an iterable of standard, non-adulterated floats into prettyfloats for display purposes:
# Perform all our computations using standard floats.
results = compute_huge_numbers(42)
# Switch to prettyfloats for printing.
print(list(map(prettyfloat, results)))
In other words, you were not supposed to (and you shouldn't) use prettyfloat as a replacement for float everywhere in your code.
Of course, inheriting from float to solve that problem is overkill, since it's a representation problem and not a data problem. A simple function would be enough:
def prettyfloat(number):
    return "%0.2f" % number  # Works the same.
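For the dictionary case raised in the question, the same function can be applied at print time (the data values here are made up for illustration):

```python
def prettyfloat(number):
    return "%0.2f" % number

data = {"pi": 3.14159, "e": 2.71828}
print({k: prettyfloat(v) for k, v in data.items()})
# {'pi': '3.14', 'e': '2.72'}
```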
Now, if it's not about representation after all, and what you actually want to achieve is fixed-point computations limited to two decimal places everywhere in your code, that's another story entirely.
That's because prettyfloat (op) prettyfloat doesn't return a prettyfloat.
Example:
>>> prettyfloat(0.6)
0.60 # type prettyfloat
>>> prettyfloat(0.6) + prettyfloat(4.4)
5.0 # type float
The solution, if you don't want to manually cast every operation result to prettyfloat and you still want to use prettyfloat, is to override all the operators.
Example with the __add__ operator (which is ugly):
class prettyfloat(float):
    def __repr__(self):
        return "%0.2f" % self

    def __add__(self, other):
        return prettyfloat(float(self) + other)
>>> prettyfloat(0.6) + prettyfloat(4.4)
5.00
By doing this, I think you will also have to change the name from prettyfloat to uglyfloat :) Hope this helps!
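One more wrinkle with that approach: without a reflected method, plain_float + prettyfloat still falls back to a plain float. A sketch with __radd__ added:

```python
class prettyfloat(float):
    def __repr__(self):
        return "%0.2f" % self

    def __add__(self, other):
        return prettyfloat(float(self) + other)

    __radd__ = __add__  # handles plain_float + prettyfloat too

print(repr(0.5 + prettyfloat(4.5)))  # 5.00
```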
Use decimal. This is what it's for.
>>> import decimal
>>> decimal.getcontext().prec = 2
>>> one = decimal.Decimal("1.0")
>>> three = decimal.Decimal("3.0")
>>> one / three
Decimal('0.33')
...unless you actually want to work with full-precision floats everywhere in your code but print them rounded to two decimal places. In that case, you need to rewrite your printing logic.
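A minimal sketch of that printing logic: keep full precision internally and format with two decimals only at output (the example values are made up):

```python
# Compute at full float precision, round only when printing:
values = {"third": 1 / 3, "pi": 3.14159}
for name, v in values.items():
    print("%s: %0.2f" % (name, v))
# third: 0.33
# pi: 3.14
```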
