How to store a number with certain precision? - python

What I want to do is calculate the square root of a number N and then store this square root with P correct decimal values. For example:
Let N = 2 and P = 100. I want to store the square root to P correct decimal places, like storing
1.41421356237309504880... out to 100 digits.
EDIT - Initially I asked this question specifically about C++, but as vsoftco explained, it is almost impossible to do this in C++ without Boost, so I have tagged python too and kept the c++ tag in the hope of a correct C++ answer.

C++ uses floating-point types (such as float, double, etc.) to store floating-point values and perform floating-point arithmetic. The size of these types (which directly influences their precision) is implementation-defined (usually you won't get more than 128 bits of precision). Also, precision is a relative term: as your numbers grow, you have less and less of it.
So, to answer your question: it is impossible to store a number with arbitrary precision using standard C++ types. You need a multi-precision library for that, e.g. Boost.Multiprecision.
Example code using Boost.Multiprecision:
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iostream>

int main()
{
    using namespace boost::multiprecision;
    typedef number<cpp_dec_float<100> > cpp_dec_float_100; // 100 decimal digits of precision
    cpp_dec_float_100 N = 2;
    cpp_dec_float_100 result = sqrt(N); // calls boost::multiprecision::sqrt
    std::cout << result.str() << std::endl; // display the result as a string
}
If you use Python, you can make use of the decimal module:
from decimal import Decimal, getcontext

getcontext().prec = 100  # 100 significant digits (so 99 decimal places for sqrt(2))
result = Decimal(2).sqrt()
print("Decimal.sqrt(2): {0}".format(result))

Well, there is a proposed extension to C++ that defines a decimal type family, described in http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2009/n2849.pdf. There is some support for it in modern GCC versions, but it is still not complete, even in 4.9.2.
Maybe it will be feasible for you.

Related

Python to C Floating Point Imprecision

I'm having some problems with precision when importing floats from CSV files created in Python into a program written in C. The following code is an example of what happens in my program: Python writes a float32 value to a file (in this case 9.8431373e+00), I read it into a string and use strtof to convert it back to float32, but the result differs in the last decimal place.
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char *a;
    char x[10] = "9.8431373";
    printf("%s\n", x);
    float f = strtof(x, &a);
    printf("%.7e\n", f);
}
Output:
9.8431373
9.8431377e+00
Now, correct me if I am wrong, but a float in C has 32 bits, which gives about 7 decimal digits of precision around wherever its floating point is. So there shouldn't be any precision errors if I'm not declaring a number bigger than the float allows.
If I did indeed declare a number more precise than C's float allows, then how has Python accepted "9.8431373e+00" as a float32 without correcting it? Do Python and C have different standards for 32-bit floats?
Python interprets the literal as a double by default. You would see the same issue in Python if you packed it to single precision and unpacked it again:
>>> import struct
>>> a = struct.pack('<f', 9.8431373)
>>> b, = struct.unpack('<f', a)
>>> b
9.843137741088867
Fundamentally, the decimal fraction 9.8431373 does not exist in binary floating-point, either single (float) or double precision. As a 32-bit float the closest you can get is a binary number that's equivalent to about 9.84313774, and as a double the closest you can get is a number corresponding to about 9.8431373000000004.
It's up to some combination of you and your programming language how these numbers get displayed back to you in decimal. Sometimes they're rounded, sometimes they're truncated. I don't know that much about Python, but I know that its rules are at least a little bit different from C's, so I'm not surprised you're seeing last digits that are off by 1.
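If you want to see those exact stored values from Python, Decimal can print the full decimal expansion of any binary float. A small sketch (the expansions in the comments are approximate, not copied output):

import struct
from decimal import Decimal

# Exact value of the nearest double to the literal 9.8431373
print(Decimal(9.8431373))   # about 9.8431372999999996...

# Exact value of the nearest single-precision float
single = struct.unpack('<f', struct.pack('<f', 9.8431373))[0]
print(Decimal(single))      # about 9.84313774..., matching the answer above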

Python issues with math.floor: int too large to convert to float

I'd like to calculate ⌊2^(1918)*π⌋+124476 in Python, but I get this error when I do it using the following code:
import math

b = math.floor((2**1918) * math.pi) + 124476
print(b)
OverflowError: int too large to convert to float
How can I get this to work? In the end I'd just like to have it all as hexadecimal (if that helps with answering my question), but I was actually only trying to get it as an integer first :)
The right solution really depends on how precise the result is required to be. Since 2^1918 is already too large for both standard integer and floating-point types, it is not possible to get away with direct calculation without losing all the precision below ~10^300.
In order to compute the desired result, you should use arbitrary-precision calculation techniques. You can implement the algorithms yourself or use one of the available libraries.
Assuming you are looking for the integer part of your expression, it will take about 600 decimal places to store the result precisely. Here is how you can get it using mpmath:
from mpmath import mp
mp.dps = 600
print(mp.floor(mp.power(2, 1918)*mp.pi + 124476))
74590163000744215664571428206261183464882552592869067139382222056552715349763159120841569799756029042920968184704590129494078052978962320087944021101746026347535981717869532122259590055984951049094749636380324830154777203301864744802934173941573749720376124683717094961945258961821638084501989870923589746845121992752663157772293235786930128078740743810989039879507242078364008020576647135087519356182872146031915081433053440716531771499444683048837650335204793844725968402892045220358076481772902929784589843471786500160230209071224266538164123696273477863853813807997663357545.0
Next, all you have to do is to convert it to hex representation (or extract hex from its internal binary form), which is a matter for another subject :)
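That said, the conversion is simple here: an integer-valued mpf converts to a Python int exactly (mpmath stores numbers as mantissa * 2**exponent), after which hex does the rest. A hedged sketch, assuming mp.dps = 600 is enough working precision to hold all ~1919 bits (it is, at roughly 2000 bits):

from mpmath import mp

mp.dps = 600  # roughly 2000 bits of working precision
n = int(mp.floor(mp.power(2, 1918) * mp.pi + 124476))  # exact int conversion
print(hex(n))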
The basic problem is what the message says. Python integers can be arbitrarily large, larger even than the range of a float. 2**1918 contains 578 decimal digits and is way bigger than the biggest float your IEEE 754 hardware can represent. So the call just fails.
You could try looking at the mpmath module. It is designed for floating point arithmetic outside the bounds of what ordinary hardware can handle.
I think the problem can be solved without resorting to high-precision arithmetic. floor(n.something + m) where m and n are integers is equal to floor(n.something) + m. So in this case you are looking for floor(2**1918 * pi) plus an integer (namely 124476). floor(2**whatever * pi) is just the first whatever + 2 bits of pi. So just look up the first 1920 bits of pi, add the bits for 124476, and output as hex digits.
A spigot algorithm can generate digits of pi without using arbitrary precision. A quick web search seems to find some Python implementations for generating digits in base 10. I didn't see anything about base 2, but Plouffe's formula generates base 16 digits if I am not mistaken.
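For the base-16 route, here is a minimal sketch of the Bailey-Borwein-Plouffe series evaluated with exact fractions (bbp_partial_sum is a hypothetical helper; a real spigot would extract a single hex digit without summing the whole series):

from fractions import Fraction

def bbp_partial_sum(terms):
    # Each BBP term contributes roughly one more correct hex digit of pi.
    s = Fraction(0)
    for k in range(terms):
        s += Fraction(1, 16 ** k) * (Fraction(4, 8 * k + 1)
                                     - Fraction(2, 8 * k + 4)
                                     - Fraction(1, 8 * k + 5)
                                     - Fraction(1, 8 * k + 6))
    return s

# pi in hex starts 3.243F6A88...
pi_approx = bbp_partial_sum(12)
print(hex(pi_approx.numerator * 16 ** 8 // pi_approx.denominator))  # 0x3243f6a88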
The problem is that (2**1918) * math.pi attempts to convert the integer to 64-bit floating point, whose range is not large enough. You can convert math.pi to a fraction to use arbitrary precision.
>>> import math, fractions
>>> math.floor((2**1918) * fractions.Fraction(math.pi) + 124476)
74590163000744212756918704280961225881025315246628098737524697383138220762542289800871336766911957454080350508173317171375032226685669280397906783245438534131599390699781017605377332298669863169044574050694427882869191541933796848577277592163846082732344724959222075452985644173036076895843129191378853006780204194590286508603564336292806628948212533561286572102730834409985441874735976583720122784469168008083076285020654725577288682595262788418426186598550864392013191287665258445673204426746083965447956681216069719524525240073122409298640817341016286940008045020172328756796
Note that arbitrary precision applies to the calculation; math.pi itself is defined only with 64-bit floating-point precision. Use an external library, such as mpmath, if you need the exact value.
To convert this to a hexadecimal string, use hex or a string format:
>>> hex(math.floor((2**1918) * fractions.Fraction(math.pi) + 124476))
'0xc90fdaa22168c0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001e63c'
>>> '%x' % math.floor((2**1918) * fractions.Fraction(math.pi) + 124476)
'c90fdaa22168c0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001e63c'
>>> f'{math.floor((2**1918) * fractions.Fraction(math.pi) + 124476):X}'
'C90FDAA22168C0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001E63C'
For string formats, x produces lower-case hex whereas X produces upper-case.

What is a float, have I got the right idea?

I was programming a calculator and I came across float. I was told that if I used floats, my calculator would be able to handle calculations involving numbers like 16.4 and 4.5, not just whole numbers.
Now, I sort of got thinking and wondered if someone could just verify I'm on the right tracks.
I understand an int is a whole number, and a double is basically two numbers with a dot in the middle, i.e. a decimal? And now comes the tricky one, the float data type.
I just need somebody to verify I'm on the right track. I think a float is a data type that can be either a whole number (this is what I'm not sure about) or a double/decimal.
I feel like float is the "safe" data type, for when you're not sure and want to accept both whole and decimal numbers. Am I right? Is it for accepting both whole numbers and decimals?
For anyone who struggles to understand, here's the code for my calculator; it might help.
while True:
    calculation = input("Calculation: ")
    if "+" in calculation:
        print(float(calculation[0]) + float(calculation[2]))
    elif "-" in calculation:
        print(float(calculation[0]) - float(calculation[2]))
    elif "*" in calculation:
        print(float(calculation[0]) * float(calculation[2]))
    elif "/" in calculation:
        print(float(calculation[0]) / float(calculation[2]))
A float is a floating-point number (more or less your "basically 2 numbers with a dot in the middle").
The term double is short for "double precision floating-point number": a similar kind of number but typically using more bits to store it, allowing for more precision.
In Python, the type float is used to refer to all floating-point numbers, regardless of precision.
You mention Python's float and also double. These are exactly the same thing, because what Python calls float (pedantically, in most implementations of Python) is what everyone else calls double. And what C and C++ call float does not exist in Python.
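If you want to confirm that on your own interpreter, sys.float_info exposes the parameters of the underlying C double. A quick sketch (the values in the comments assume the usual IEEE 754 binary64):

import sys

print(sys.float_info.mant_dig)  # 53 mantissa bits: C's double, not its float
print(sys.float_info.dig)       # 15 reliably representable decimal digits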

Print python float as single precision float

When printing a floating point variable in Python 3 like this:
str(1318516946165810000000000.123123123)
The output is:
1.31851694616581e+24
Is there a simple way in the standard library (not NumPy) to print the same thing with only 32-bit float precision? (Or, more generally, with any precision?)
Be aware that precision != places, as in Decimal.
EDIT
The result should be a string like str produces, but with a limited precision, for example the 32-bit representation of the above float:
1.31851e+24
I may have misunderstood, but is using format with a suitable precision modifier what you are asking for?
>>> "{0:6g}".format(1.31851694616581e24)
'1.31852e+24'
Change the 6 to control the number of significant figures
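If the goal is genuinely "what the value looks like after rounding to a 32-bit float", rather than just fewer digits, the standard-library struct module can do the rounding first. A sketch under that assumption (to_float32_str is a hypothetical helper):

import struct

def to_float32_str(x, sig=6):
    # Round-trip through IEEE 754 single precision, then format with
    # the requested number of significant figures.
    x32 = struct.unpack('<f', struct.pack('<f', x))[0]
    return f"{x32:.{sig}g}"

print(to_float32_str(1318516946165810000000000.123123123))  # '1.31852e+24'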

How do languages such as Python overcome C's Integral data limits?

While doing some random experimentation with a factorial program in C, Python, and Scheme, I came across this fact:
In C, using the 'unsigned long long' data type, the largest factorial I can print is that of 65, which is '9223372036854775808', i.e. 19 digits, as specified here.
In Python, I can find the factorial of a number as large as 999, which consists of far more than 19 digits.
How does CPython achieve this? Does it use a data type like 'octaword'?
I might be missing some fundamental facts here, so I would appreciate some insights and/or references to read. Thanks!
UPDATE: Thank you all for the explanation. Does that mean CPython is using the GNU Multiple Precision (GMP) library, or some other similar library?
UPDATE 2: I was looking for Python's 'bignum' implementation in the sources. Where exactly is it? It's here: http://svn.python.org/view/python/trunk/Objects/longobject.c?view=markup. Thanks Baishampayan.
It's called Arbitrary Precision Arithmetic. There's more here: http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic
Looking at the Python source code, it seems the long type (at least in pre-Python 3 code) is defined in longintrepr.h like this -
/* Long integer representation.
   The absolute value of a number is equal to
       SUM(for i=0 through abs(ob_size)-1) ob_digit[i] * 2**(SHIFT*i)
   Negative numbers are represented with ob_size < 0;
   zero is represented by ob_size == 0.
   In a normalized number, ob_digit[abs(ob_size)-1] (the most significant
   digit) is never zero. Also, in all cases, for all valid i,
       0 <= ob_digit[i] <= MASK.
   The allocation function takes care of allocating extra memory
   so that ob_digit[0] ... ob_digit[abs(ob_size)-1] are actually available.
   CAUTION: Generic code manipulating subtypes of PyVarObject has to
   be aware that longs abuse ob_size's sign bit.
*/
struct _longobject {
    PyObject_VAR_HEAD
    digit ob_digit[1];
};
The actual usable interface of the long type is then defined in longobject.h by creating a new type PyLongObject like this -
typedef struct _longobject PyLongObject;
And so on.
There is more stuff happening inside longobject.c, you can take a look at those for more details.
Data types such as int in C are directly mapped (more or less) to the data types supported by the processor, so the limits on C's int are essentially the limits imposed by the processor hardware.
But one can implement one's own int data type entirely in software, for example using an array of digits as the underlying representation. Maybe like this:
class MyInt {
    private int[] digits;

    public MyInt(int noOfDigits) {
        digits = new int[noOfDigits];
    }
}
Once you do that, you can use this class to store integers containing as many digits as you want, as long as you don't run out of memory.
Perhaps Python does something like this inside its virtual machine. You may want to read this article on Arbitrary-Precision Arithmetic to get the details.
Not an octaword. CPython implements a bignum structure to store arbitrary-precision numbers.
Python assigns to long integers (all ints in Python 3) just as much space as they need -- an array of "digits" (base being a power of 2) allocated as needed.
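You can watch that allocation from Python itself: sys.getsizeof reports the object growing as more internal digits are needed. A quick sketch (exact byte counts vary across CPython versions and platforms):

import sys

for bits in (1, 30, 60, 1918):
    n = 2 ** bits
    print(bits, n.bit_length(), sys.getsizeof(n))
# each additional internal digit adds a few bytes to the object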
