I am looking to do some basic math in python. However, I am dealing with numbers such as
0.0000000001 and -0.00000000001
Are there any variable types that can hold 10 decimal places, for both negative and positive numbers?
If not, I could multiply by 100000000000 to make it a whole number; in that case, what is the best type to hold numbers between -100000000000 and 100000000000?
Thanks :)
You probably want the decimal module:
from decimal import Decimal
x = Decimal('0.0000000001')
y = Decimal('-0.00000000001')
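Decimal arithmetic is exact for values like these, so nothing gets lost to binary floating-point rounding (a quick illustrative check, not part of the answer above):
print(x + y)  # 9E-11
print(x - y)  # 1.1E-10
Plain Python ints are also unlimited precision, so the multiply-by-100000000000 workaround would work too, but Decimal saves you the scaling step.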
Why does the large number give me an integer (or at least nothing after the decimal point), while the smaller number gives me a bunch of decimal places? Is the way I set the precision or declare the variables wrong?
import math
from mpmath import *
mp.prec=1000
x = 5431526412865007456
print mpf((x)/6)
ACTUAL OUTPUT: 905254402144167909.0
WANTED OUTPUT: 905254402144167909.3333333333333333333333(…)
x = 5431526413
print mpf((x)/6.)
OUTPUT: 905254402.16666662693023681640625
Try using mpf(x)/6 or mpf(x)/6.0. The reason your code didn't work is that it did the division using Python's normal rules, then converted the result to an arbitrary-precision number, whereas this converts x first so the division itself is done using arbitrary-precision math.
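For instance, a minimal corrected version (using mp.dps, which sets the precision in decimal digits, rather than mp.prec in bits):
from mpmath import mp, mpf
mp.dps = 50  # 50 decimal digits of working precision
x = 5431526412865007456
print(mpf(x) / 6)  # 905254402144167909.3333333333333333333333333333333...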
How's it going?
I need to generate a random number with a large number of decimal places to use in an advanced calculation.
I've tried to use this code:
round(random.uniform(min_time, max_time), 1)
But it doesn't work for more than 15 decimal places.
If I use, for example:
round(random.uniform(0, 0.5), 100)
it returns 0.586422176354875, but I need code that returns a number with 100 decimals.
Can you help me?
Thanks!
100 decimals
The first problem is how to create a number with 100 decimals at all.
This won't do:
>>> 1.23456789012345678901234567890
1.2345678901234567
Those are floating point numbers, whose precision falls far short of 100 decimals.
Luckily, in Python, there is the decimal built-in module which can help:
>>> from decimal import Decimal
>>> Decimal('1.2345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901')
Decimal('1.2345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901')
Decimal can have any precision you need and it won't introduce floating point errors, but it will be much slower.
random
Now you just have to create a string with 100 decimals and give it to Decimal.
This will create one random digit:
random.choice('0123456789')
This will create 100 random digits and concatenate them:
''.join(random.choice('0123456789') for i in range(100))
Now just create a Decimal:
Decimal('0.' + ''.join(random.choice('0123456789') for i in range(100)))
This creates a number between 0 and 1. Multiply it or divide to get a different range.
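Putting it together, a rough sketch (the helper name and the precision setting are my additions, not part of the snippets above):
import random
from decimal import Decimal, getcontext

getcontext().prec = 110  # keep more significant digits than the 100 we generate

def random_decimal(low, high, places=100):
    # hypothetical helper combining the snippets above
    digits = ''.join(random.choice('0123456789') for _ in range(places))
    fraction = Decimal('0.' + digits)  # value in [0, 1)
    return Decimal(low) + (Decimal(high) - Decimal(low)) * fraction

print(random_decimal('0', '0.5'))  # e.g. 0.2932...(100 random digits scaled into [0, 0.5))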
What I wanted to do:
Paradox: Suppose Peter Parker were running to catch a bus. To reach it, he’d first need to get halfway there. Before that, he’d need to get a quarter of the way there……before a quarter, an eighth; before an eighth, a 16th; and so on. Since the distance can be halved infinitely, he’d be trying to complete an infinite number of tasks… WHICH WOULD BE LOGICALLY IMPOSSIBLE!
I tried to resolve this paradox using Python.
I have some questions:
How can I get a number with no limit on its decimal places? Python seems to limit the number of decimals (to about 12, I think). How do I make that number infinite?
Apparently there is no way to make float decimals infinite; the closest I could get was using this:
from decimal import Decimal
Is this the correct way of asking the user for an input in numbers?
Code modified
from decimal import Decimal

def infinite_loop():
    x = 0
    number = Decimal(raw_input())
    while x != number:
        x = x + number
        number = number / 2
        print x

infinite_loop()
What you ask is impossible. There are no "infinite precision" floating point values in the real world of finite computing systems. If there were, a single floating point value could consume all of the system's resources. pi * d? Oops! pi has infinitely many digits. There goes the system!
What you can do, however, is get arbitrary precision decimal values. They're still finite, but you can choose how much precision you want (and are willing to pay for). E.g.:
>>> from decimal import Decimal
>>> x = Decimal('1.' + '0' * 200)
>>> x
Decimal('1.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000')
Now you have 200 digits of precision. Not enough? Go 400. 800. However many you like. As long as that's a finite, practical value.
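A small illustration of that, using the decimal context (which is what governs the precision of arithmetic, not just of literals):
from decimal import Decimal, getcontext

getcontext().prec = 200            # 200 significant digits for arithmetic
print(Decimal(1) / Decimal(3))     # 0.333... (200 threes)

getcontext().prec = 400            # need more? just raise it
print(Decimal(1) / Decimal(3))     # 0.333... (400 threes)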
If you want "infinite" precision (i.e. a decimal number that can be extended as far as you have memory for), either use Python's built-in decimal module or, for heavier computation, mpmath:
import mpmath as mp
mp.mp.dps = 100
print mp.sqrt(mp.mpf(2))
>> 1.414213562373095048801688724209698078569671875376948073176679737990732478462107038850387534327641573
I would like to generate uniformly distributed random numbers between 0 and 0.5, but truncated to 2 decimal places.
Without the truncation, I know this is done by:
import numpy as np
rs = np.random.RandomState(123456)
set = rs.uniform(size=(50,1))*0.5
Could anyone suggest how to generate random numbers to 2 d.p. only? Thanks!
A float cannot be truncated (or rounded) to 2 decimal digits, because there are many values with 2 decimal digits that just cannot be represented exactly as an IEEE double.
If you really want what you say you want, you need to use a type with exact precision, like Decimal.
Of course there are downsides to doing that—the most obvious one for numpy users being that you will have to use dtype=object, with all of the compactness and performance implications.
But it's the only way to actually do what you asked for.
Most likely, what you actually want to do is either Joran Beasley's answer (leave them untruncated, and just round at print-out time) or something similar to Lauritz V. Thaulow's answer (get the closest approximation you can, then use explicit epsilon checks everywhere).
Alternatively, you can do implicitly fixed-point arithmetic, as David Heffernan suggests in a comment: Generate random integers between 0 and 50, keep them as integers within numpy, and just format them as fixed point decimals and/or convert to Decimal when necessary (e.g., for printing results). This gives you all of the advantages of Decimal without the costs… although it does open an obvious window to create new bugs by forgetting to shift 2 places somewhere.
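A rough sketch of that implicitly fixed-point idea (the variable names and the 0–50 range are my additions, chosen to match the question):
import numpy as np
from decimal import Decimal

rs = np.random.RandomState(123456)

# store hundredths as plain integers: 0..50 stands for 0.00..0.50
hundredths = rs.randint(0, 51, size=(50, 1))

# shift the decimal point only when you need exact two-digit values,
# e.g. for printing or exact comparisons
as_decimal = [Decimal(int(v)) / 100 for v in hundredths.flat]
print(as_decimal[:5])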
Floats are never actually truncated to 2 decimal places ... however, their string representation may be:
import numpy as np
rs = np.random.RandomState(123456)
set = rs.uniform(size=(50,1))*0.5
print ["%0.2d"%val for val in set]
How about this?
np.random.randint(0, 50, size=(50,1)).astype("float") / 100
That is, create random integers between 0 and 49 (the upper bound of randint is exclusive), and divide by 100.
EDIT:
As made clear in the comments, this will not give you exact two-digit decimals to work with, due to the nature of float representations in memory. It may look like you have the exact float 0.1 in your array, but it definitely isn't exactly 0.1; it is only the closest value a 64-bit float can represent, which is very, very close.
You can postpone this problem by just keeping the numbers as integers, and remember that they're to be divided by 100 when you use them.
hundreds = np.random.randint(0, 50, size=(50, 1))
Then at least the roundoff won't happen until at the last minute (or maybe not at all, if the numerator of the equation is a multiple of the denominator).
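As a quick illustration of the point above about 0.1 not being exact (this is standard float behaviour, not specific to NumPy):
>>> from decimal import Decimal
>>> Decimal(0.1)   # the exact binary value stored for the float 0.1
Decimal('0.1000000000000000055511151231257827021181583404541015625')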
I managed to find another alternative:
import numpy as np
rs = np.random.RandomState(123456)
set = rs.uniform(size=(50,2))
for i in range(50):
    for j in range(2):
        set[i, j] = round(set[i, j], 2)
How do I get a random decimal.Decimal instance? It appears that the random module only returns floats which are a pita to convert to Decimals.
What's "a random decimal"? Decimals have arbitrary precision, so generating a number with as much randomness as you can hold in a Decimal would take the entire memory of your machine to store.
You have to know how many decimal digits of precision you want in your random number, at which point it's easy to just grab a random integer and divide it. For example, if you want two digits above the point and two digits in the fraction (see randrange here):
decimal.Decimal(random.randrange(10000))/100
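For example, as a self-contained sketch (the helper and its parameters are my own, for illustration):
import decimal
import random

def rand_decimal(int_digits, frac_digits):
    # draw an integer with int_digits + frac_digits digits, then shift the point left by frac_digits
    return decimal.Decimal(random.randrange(10 ** (int_digits + frac_digits))) / (10 ** frac_digits)

print(rand_decimal(2, 2))  # e.g. Decimal('52.34')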
From the standard library reference:
To create a Decimal from a float, first convert it to a string. This serves as an explicit reminder of the details of the conversion (including representation error).
>>> import random, decimal
>>> decimal.Decimal(str(random.random()))
Decimal('0.467474014342')
Is this what you mean? It doesn't seem like a pita to me. You can scale it into whatever range and precision you want.
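For instance, to scale or round it (reusing the example value above; outputs will differ per run):
import random, decimal
x = decimal.Decimal(str(random.random()))      # e.g. Decimal('0.467474014342')
print(x * 100)                                 # scaled into [0, 100): 46.747401434200
print(x.quantize(decimal.Decimal('0.001')))    # rounded to 3 decimal places: 0.467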
If you know how many digits you want before and after the decimal point, you can use:
>>> import decimal
>>> import random
>>> def gen_random_decimal(i,d):
...     return decimal.Decimal('%d.%d' % (random.randint(0, i), random.randint(0, d)))
...
>>> gen_random_decimal(9999,999999) #4 digits before, 6 after
Decimal('4262.786648')
>>> gen_random_decimal(9999,999999)
Decimal('8623.79391')
>>> gen_random_decimal(9999,999999)
Decimal('7706.492775')
>>> gen_random_decimal(99999999999,999999999999) #11 digits before, 12 after
Decimal('35018421976.794013996282')
>>>
The random module has more to offer than "only returning floats", but anyway:
from random import random
from decimal import Decimal
randdecimal = lambda: Decimal("%f" % random())
Or did I miss something obvious in your question?
decimal.Decimal(random.random() * MAX_VAL).quantize(decimal.Decimal('.01'))
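For example (MAX_VAL is whatever upper bound you need; 100 here is just an assumed value for illustration):
import decimal
import random

MAX_VAL = 100  # assumed upper bound, for illustration
d = decimal.Decimal(random.random() * MAX_VAL).quantize(decimal.Decimal('.01'))
print(d)       # e.g. Decimal('37.52')
Note that passing a float to Decimal captures its exact binary value; the quantize call then rounds that to 2 decimal places.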
Yet another way to make a random decimal.
import random
round(random.randint(1, 1000) * random.random(), 2)
In this example:
random.randint() generates a random integer in the specified range (both endpoints inclusive),
random.random() generates a random floating point number in the range [0.0, 1.0),
Finally, the round() function rounds the product of those two values (something long like 254.71921934351644) to the specified number of digits after the decimal point (in our case we'd get 254.72).
import random
y = float(input("Enter the value of y for the range of the random number: "))
x = round(y * random.random(), 2)  # round to 2 decimal places
print(x)