I'm trying to create a program that finds the nth digit of the square root of 2 using the decimal module.

√2 = 1.414213(5)6237309504880168872420969807856967187537694…

If the user requests the 8th digit, the program generates 8 digits of √2 (1.4142135) and prints the last digit (5):

nth_digit_of_sqrt_of_2 = 8  # I want to find the 8th digit of √2
expected_sqrt_of_2 = "14142135"  # first 8 digits of √2 (no decimal point)
expected_answer = 5  # the last digit
But here is what actually happens:

from decimal import Decimal, getcontext

getcontext().prec = nth_digit_of_sqrt_of_2  # set precision to 8 digits
decimal_sqrt_of_2 = Decimal('2').sqrt()
decimal_sqrt_of_2 = str(decimal_sqrt_of_2).replace('.', '')  # convert to string and remove decimal point
print(decimal_sqrt_of_2)
# actual_sqrt_of_2 = 14142136
# actual_answer = 6
I tried using ROUND_DOWN and ROUND_FLOOR, but they don't seem to work either.
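For context on why ROUND_DOWN and ROUND_FLOOR had no visible effect: per the decimal arithmetic specification, `Decimal.sqrt()` always rounds its result with ROUND_HALF_EVEN, regardless of the context's rounding mode. A minimal demonstration:

```python
from decimal import Decimal, getcontext, ROUND_DOWN

getcontext().prec = 8
getcontext().rounding = ROUND_DOWN
# sqrt() is specified to round half-even regardless of the context
# rounding mode, so the 8th significant digit still rounds up
print(Decimal(2).sqrt())  # 1.4142136
```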
You can try this:
from decimal import Decimal, getcontext
nth_digit_of_sqrt_of_2 = 8
getcontext().prec = nth_digit_of_sqrt_of_2 + 1 # set precision to n+1 digits
decimal_sqrt_of_2 = Decimal('2').sqrt()
decimal_sqrt_of_2 = str(decimal_sqrt_of_2).replace('.', '') # convert to string and remove decimal point
print(int(str(decimal_sqrt_of_2)[nth_digit_of_sqrt_of_2 - 1]))
You can get the digits by using:
def digit(n):
    from decimal import Decimal
    return str(Decimal('2').sqrt()).replace('.', '')[n-1]
digit(8)
#'5'
digit(7)
#'3'
digit(9)
#'6'
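Note that digit() above silently depends on the default context precision of 28 significant digits, so it only works for roughly the first 28 digits. A sketch that raises the precision per call (with a guard digit, as in the first answer):

```python
from decimal import Decimal, localcontext

def digit(n):
    # the default context precision is 28; raise it so n can go beyond
    # that, plus a guard digit so half-even rounding can't disturb digit n
    with localcontext() as ctx:
        ctx.prec = n + 2
        return str(Decimal(2).sqrt()).replace('.', '')[n - 1]

digit(8)   # '5'
digit(50)  # works, unlike the fixed-precision version
```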
Edit: If you want more digits, you can write your own function.

def sqrut(x, digits):
    # integer Newton's method: computes floor(sqrt(x * 10**(2*digits)))
    # entirely in exact integer arithmetic, so no float precision limits
    n = x * (10 ** (2 * digits))
    cur = 10 ** ((len(str(n)) + 1) // 2)  # starting guess, guaranteed >= isqrt(n)
    nxt = (cur + n // cur) >> 1
    while nxt < cur:  # strictly decreases until it reaches isqrt(n)
        cur = nxt
        nxt = (cur + n // cur) >> 1
    return str(cur)
Suppose you want 1000 digits of the square root of 2; you can get them with:
print(sqrut(2, 1000))
'14142135623730950488016887242096980785696718753769480731766797379907324784621070388503875343276415727350138462309122970249248360558507372126441214970999358314132226659275055927557999505011527820605714701095599716059702745345968620147285174186408891986095523292304843087143214508397626036279952514079896872533965463318088296406206152583523950547457502877599617298355752203375318570113543746034084988471603868999706990048150305440277903164542478230684929369186215805784631115966687130130156185689872372352885092648612494977154218334204285686060146824720771435854874155657069677653720226485447015858801620758474922657226002085584466521458398893944370926591800311388246468157082630100594858704003186480342194897278290641045072636881313739855256117322040245091227700226941127573627280495738108967504018369868368450725799364729060762996941380475654823728997180326802474420629269124859052181004459842150591120249441341728531478105803603371077309182869314710171111683916581726889419758716582152128229518488472'
Now, if you want the 8th digit:

print(sqrut(2, 1000)[8-1])
# '5'

The 9th digit:

print(sqrut(2, 1000)[9-1])
# '6'

And in general, the nth digit:

print(sqrut(2, 1000)[n-1])
Related
Say I have an arbitrary float x = 123.123456 and want to remove the last n decimal digits from it, i.e. n = 1 gives x = 123.12345, n = 2 gives x = 123.1234, and so on. How can this be achieved in Python?
This will do the trick you are asking for, but be mindful of the issues with floating-point numbers.
# let's cut 2 digits
n = 2
# naively we can do this
f = 123.123456
short_f = float(str(f)[:-n])
# but watch out for floating point error
f = 1.2 - 1.0 # f should be 0.2 but is actually 0.19999999999999996
short_f = float(str(f)[:-n]) # so this gives 0.199999999999999
This sounds like an XY problem, maybe you are looking for round or string formatting.
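If the goal really is truncation rather than rounding, a sketch using Decimal.quantize with ROUND_DOWN avoids slicing the string representation (the `keep` parameter here, meaning how many decimal places to retain, is my assumption about the intent):

```python
from decimal import Decimal, ROUND_DOWN

def cut_decimals(x, keep):
    # truncate (never round) x to `keep` decimal places
    q = Decimal(1).scaleb(-keep)  # e.g. Decimal('0.00001') for keep=5
    return float(Decimal(str(x)).quantize(q, rounding=ROUND_DOWN))

cut_decimals(123.123456, 5)  # 123.12345
cut_decimals(123.123456, 4)  # 123.1234
```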
Problem: take a number, for example 37 (binary 100101). Count the binary 1s, form a binary number of that many 1s (111), and print its decimal value (7).
num = bin(int(input()))
st = str(num)
count = 0
for i in st:
    if i == "1":
        count += 1
del st
vt = ""
for i in range(count):
    vt = vt + "1"
vt = int(vt)
print(vt)
I am a newbie and stuck here.
I wouldn't recommend your approach, but to show where you went wrong:
num = bin(int(input()))
st = str(num)
count = 0
for i in st:
    if i == "1":
        count += 1
del st
# start the string representation of the binary value correctly
vt = "0b"
for i in range(count):
    vt = vt + "1"
# tell the `int()` function that it should consider the string as a binary number (base 2)
vt = int(vt, 2)
print(vt)
Note that the code below does the exact same thing as yours, but a bit more concisely:
ones = bin(int(input())).count('1')
vt = int('0b' + '1' * ones, 2)
print(vt)
It uses the standard method count() on the string to get the number of ones in ones and it uses Python's ability to repeat a string a number of times using the multiplication operator *.
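Since a binary number made of k ones is simply 2**k - 1, the string building can also be skipped entirely; a sketch:

```python
def ones_value(n):
    # a run of k ones in binary equals 2**k - 1 (e.g. 0b111 == 7)
    return 2 ** bin(n).count('1') - 1

ones_value(37)  # 37 == 0b100101 has three 1s -> 0b111 == 7
```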
Try this once you have the required binary:

def binaryToDecimal(binary):
    decimal, i = 0, 0
    while binary != 0:
        dec = binary % 10
        decimal = decimal + dec * pow(2, i)
        binary = binary // 10
        i += 1
    print(decimal)
In one line:
print(int(format(int(input()), 'b').count('1') * '1', 2))
Let's break it down, inside out:
format(int(input()), 'b')
This built-in function takes an integer number from the input, and returns a formatted string according to the Format Specification Mini-Language. In this case, the argument 'b' gives us a binary format.
Then, we have
.count('1')
This str method returns the total number of occurrences of '1' in the string returned by the format function.
In Python, you can multiply a string times a number to get the same string repeatedly concatenated n times:
x = 'a' * 3
print(x) # prints 'aaa'
Thus, if we take the number returned by the count method and multiply it by the string '1', we get a string that contains only ones, with exactly as many ones as our original input has in its binary form. Now we can interpret this string as a base-2 number, like this:
int(number_string, 2)
So, we have
int(format(int(input()), 'b').count('1') * '1', 2)
Finally, let's print the whole thing:
print(int(format(int(input()), 'b').count('1') * '1', 2))
I am having a problem generating a random float with the digit counts I want.

def float_generator(size=6, chars=6):
    return random.uniform(size, chars)

and I tried:

target.write(str(float_generator(7,3)))

The above returns a random float with one integer digit, like 1.2222 or 1.333333. I want 7 integer digits and 3 decimal places, as suggested by the arguments (7, 3).

I tried rounding as below, but it only rounds the decimal places:

target.write(str(format(float_generator(7,3),'.3f')))

How do I achieve 12345432.123 instead of 1.222343223? Any help would be appreciated.
You could use decimal and random to return a Decimal object if you want fine-tuned control of decimal places:
import random, decimal
digits = list('0123456789')
def float_generator(size=6, chars=6):
    base_str = ''.join(random.choice(digits) for i in range(size))
    dec_str = ''.join(random.choice(digits) for i in range(chars))
    return decimal.Decimal(base_str + '.' + dec_str)
>>> float_generator(7,3)
Decimal('4768087.977')
Decimal objects can be fed to float:
>>> float(decimal.Decimal('4768087.977'))
4768087.977
If you always want a float, you can replace the last line of the function by:
return float(decimal.Decimal(base_str + '.' + dec_str))
Here is a solution for you:

import random

def float_generator(size=6, chars=6):
    return round(random.uniform(10**(size-1), 10**size), chars)
call it:
float_generator(7,3)
output example:
2296871.988
or better:
float_generator = lambda size, chars: round(random.uniform(10**(size-1),10**(size)), chars)
call it the same way.
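As a sanity check of the size/chars contract (a sketch, assuming `size` counts integer digits and `chars` decimal places, as in the answer above):

```python
import random

def float_generator(size=7, chars=3):
    # `size` digits before the point, at most `chars` after it
    return round(random.uniform(10 ** (size - 1), 10 ** size), chars)

# every generated value stays within the 7-integer-digit range
for _ in range(1000):
    x = float_generator(7, 3)
    assert 10 ** 6 <= x <= 10 ** 7
```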
def pi():
    prompt = ">>> "
    print "\nWARNING: Pi may take some time to be calculated and may not always be correct beyond 100 digits."
    print "\nShow Pi to what digit?"
    n = raw_input(prompt)
    from decimal import Decimal, localcontext
    with localcontext() as ctx:
        ctx.prec = 10000
        pi = Decimal(0)
        for k in range(350):
            pi += (Decimal(4)/(Decimal(8)*k+1) - Decimal(2)/(Decimal(8)*k+4) - Decimal(1)/(Decimal(8)*k+5) - Decimal(1)/(Decimal(8)*k+6)) / Decimal(16)**k
        print pi[:int(n)]

pi()
Traceback (most recent call last):
File "/Users/patrickcook/Documents/Pi", line 13, in <module>
pi()
File "/Users/patrickcook/Documents/Pi", line 12, in pi
print pi[:int(n)]
TypeError: 'Decimal' object has no attribute '__getitem__'
If you'd like a faster pi algorithm, try this one. I've never used the Decimal module before; I normally use mpmath for arbitrary precision calculations, which comes with lots of functions, and built-in "constants" for pi and e. But I guess Decimal is handy because it's a standard module.
''' The Salamin / Brent / Gauss Arithmetic-Geometric Mean pi formula.

Let A[0] = 1, B[0] = 1/Sqrt(2)
Then iterate from 1 to 'n'.
A[n] = (A[n-1] + B[n-1])/2
B[n] = Sqrt(A[n-1]*B[n-1])
C[n] = (A[n-1]-B[n-1])/2
PI[n] = 4A[n+1]^2 / (1-(Sum (for j=1 to n; 2^(j+1))*C[j]^2))

See http://stackoverflow.com/q/26477866/4014959

Written by PM 2Ring 2008.10.19
Converted to use Decimal 2014.10.21
Converted to Python 3 2018.07.17
'''

import sys
from decimal import Decimal as D, getcontext, ROUND_DOWN

def AGM_pi(m):
    a, b = D(1), D(2).sqrt() / 2
    s, k = D(0), D(4)
    for i in range(m):
        c = (a - b) / 2
        a, b = (a + b) / 2, (a * b).sqrt()
        s += k * c * c
        #In case we want to see intermediate results
        #if False:
        #    pi = 4 * a * a / (1 - s)
        #    print("%2d:\n%s\n" % (i, pi))
        k *= 2
    return 4 * a * a / (1 - s)

def main():
    prec = int(sys.argv[1]) if len(sys.argv) > 1 else 50
    #Add 1 for the digit before the decimal point,
    #plus a few more to compensate for rounding errors.
    #delta == 7 handles the Feynman point, which has six 9s followed by an 8
    delta = 3
    prec += 1 + delta
    ctx = getcontext()
    ctx.prec = prec
    #The precision of the AGM value doubles on every loop
    pi = AGM_pi(prec.bit_length())
    #Round down so all printed digits are (usually) correct
    ctx.rounding = ROUND_DOWN
    ctx.prec -= delta
    print("pi ~=\n%s" % +pi)

if __name__ == '__main__':
    main()
You are trying to treat pi as an array, when it is a Decimal. I think you are looking for quantize: https://docs.python.org/2/library/decimal.html
I got bored with how long the process was taking (that 350-iteration loop is a killer), but the answer seems plain. A Decimal object is not subscriptable the way you have it.
You probably want to turn it into a string first and then process that to get the digits:
print str(pi)[:int(n)+1] # ignore decimal point in digit count.
You should also keep in mind that this truncates the value rather than rounding it. For example, with PI starting out as:
3.141592653589
(about as much as I can remember off the top of my head), truncating the string at five significant digits will give you 3.1415 rather than the more correct 3.1416.
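To round instead of truncate, Decimal.quantize can target a fixed number of decimal places; a minimal sketch using the same value:

```python
from decimal import Decimal, ROUND_HALF_UP

pi = Decimal('3.141592653589')
# rounding to four decimal places gives 3.1416,
# where string truncation gave 3.1415
print(pi.quantize(Decimal('0.0001'), rounding=ROUND_HALF_UP))  # 3.1416
```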
A Decimal object can't be sliced to get the individual digits. However a string can, so convert it to a string first.
print str(pi)[:int(n)]
You may need to adjust n for the decimal point and desired digit range.
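The difference is easy to demonstrate; a minimal sketch:

```python
from decimal import Decimal

pi = Decimal('3.141592653589')
# pi[:5] raises a TypeError: Decimal objects don't support
# indexing or slicing, but the string form does
print(str(pi)[:6])  # '3.1415' (the slice includes the decimal point)
```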
Here is the example which is bothering me:
>>> x = decimal.Decimal('0.0001')
>>> print x.normalize()
0.0001
>>> print x.normalize().to_eng_string()
0.0001
Is there a way to have engineering notation for representing mili (10e-3) and micro (10e-6)?
Here's a function that does things explicitly, and also has support for using SI suffixes for the exponent:

import math

def eng_string(x, format='%s', si=False):
    '''
    Returns float/int value <x> formatted in a simplified engineering format -
    using an exponent that is a multiple of 3.

    format: printf-style string used to format the value before the exponent.

    si: if true, use SI suffix for exponent, e.g. k instead of e3, n instead of
    e-9 etc.

    E.g. with format='%.2f':
        1.23e-08 => 12.30e-9
        123 => 123.00
        1230.0 => 1.23e3
        -1230000.0 => -1.23e6

    and with si=True:
        1230.0 => 1.23k
        -1230000.0 => -1.23M
    '''
    sign = ''
    if x < 0:
        x = -x
        sign = '-'
    exp = int(math.floor(math.log10(x)))
    exp3 = exp - (exp % 3)
    x3 = x / (10 ** exp3)

    if si and -24 <= exp3 <= 24 and exp3 != 0:
        exp3_text = 'yzafpnum kMGTPEZY'[(exp3 + 24) // 3]
    elif exp3 == 0:
        exp3_text = ''
    else:
        exp3_text = 'e%s' % exp3

    return ('%s' + format + '%s') % (sign, x3, exp3_text)
EDIT:
Matplotlib implemented the engineering formatter, so one option is to directly use Matplotlibs formatter, e.g.:
import matplotlib as mpl
formatter = mpl.ticker.EngFormatter()
formatter(10000)
result: '10 k'
Original answer:
Based on Julian Smith's excellent answer (and this answer), I changed the function to improve on the following points:
Python3 compatible (integer division)
Compatible for 0 input
Rounding to significant number of digits, by default 3, no trailing zeros printed
so here's the updated function:
import math
def eng_string(x, sig_figs=3, si=True):
    """
    Returns float/int value <x> formatted in a simplified engineering format -
    using an exponent that is a multiple of 3.

    sig_figs: number of significant figures

    si: if true, use SI suffix for exponent, e.g. k instead of e3, n instead of
    e-9 etc.
    """
    x = float(x)
    sign = ''
    if x < 0:
        x = -x
        sign = '-'
    if x == 0:
        exp = 0
        exp3 = 0
        x3 = 0
    else:
        exp = int(math.floor(math.log10(x)))
        exp3 = exp - (exp % 3)
        x3 = x / (10 ** exp3)
        x3 = round(x3, -int(math.floor(math.log10(x3)) - (sig_figs - 1)))
        if x3 == int(x3):  # prevent from displaying .0
            x3 = int(x3)

    if si and -24 <= exp3 <= 24 and exp3 != 0:
        exp3_text = 'yzafpnum kMGTPEZY'[exp3 // 3 + 8]
    elif exp3 == 0:
        exp3_text = ''
    else:
        exp3_text = 'e%s' % exp3

    return '%s%s%s' % (sign, x3, exp3_text)
The decimal module is following the Decimal Arithmetic Specification, which states:
This is outdated - see below
to-scientific-string – conversion to numeric string
[...]
The coefficient is first converted to a string in base ten using the characters 0 through 9 with no leading zeros (except if its value is zero, in which case a single 0 character is used).
Next, the adjusted exponent is calculated; this is the exponent, plus the number of characters in the converted coefficient, less one. That is, exponent+(clength-1), where clength is the length of the coefficient in decimal digits.
If the exponent is less than or equal to zero and the adjusted exponent is greater than or equal to -6, the number will be converted
to a character form without using exponential notation.
[...]
to-engineering-string – conversion to numeric string
This operation converts a number to a string, using engineering
notation if an exponent is needed.
The conversion exactly follows the rules for conversion to scientific
numeric string except in the case of finite numbers where exponential
notation is used. In this case, the converted exponent is adjusted to be a multiple of three (engineering notation) by positioning the decimal point with one, two, or three characters preceding it (that is, the part before the decimal point will range from 1 through 999).
This may require the addition of either one or two trailing zeros.
If after the adjustment the decimal point would not be followed by a digit then it is not added. If the final exponent is zero then no indicator letter and exponent is suffixed.
Examples:
For each abstract representation [sign, coefficient, exponent] on the left, the resulting string is shown on the right.
Representation      String
[0,123,1]           "1.23E+3"
[0,123,3]           "123E+3"
[0,123,-10]         "12.3E-9"
[1,123,-12]         "-123E-12"
[0,7,-7]            "700E-9"
[0,7,1]             "70"
Or, in other words:
>>> for n in (10 ** e for e in range(-1, -8, -1)):
... d = Decimal(str(n))
... print d.to_eng_string()
...
0.1
0.01
0.001
0.0001
0.00001
0.000001
100E-9
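The spec's example rows above can also be checked directly, since Decimal accepts the same (sign, digits, exponent) triples as a tuple:

```python
from decimal import Decimal

# each [sign, coefficient, exponent] row from the spec's table,
# built via Decimal's tuple constructor
assert Decimal((0, (1, 2, 3), 1)).to_eng_string() == '1.23E+3'
assert Decimal((0, (1, 2, 3), 3)).to_eng_string() == '123E+3'
assert Decimal((0, (1, 2, 3), -10)).to_eng_string() == '12.3E-9'
assert Decimal((1, (1, 2, 3), -12)).to_eng_string() == '-123E-12'
assert Decimal((0, (7,), -7)).to_eng_string() == '700E-9'
assert Decimal((0, (7,), 1)).to_eng_string() == '70'
```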
I realize that this is an old thread, but it does come near the top of a search for python engineering notation and it seems prudent to have this information located here.
I am an engineer who likes the "engineering 101" engineering units. I don't even like designations such as 0.1uF, I want that to read 100nF. I played with the Decimal class and didn't really like its behavior over the range of possible values, so I rolled a package called engineering_notation that is pip-installable.
pip install engineering_notation
From within Python:
>>> from engineering_notation import EngNumber
>>> EngNumber('1000000')
1M
>>> EngNumber(1000000)
1M
>>> EngNumber(1000000.0)
1M
>>> EngNumber('0.1u')
100n
>>> EngNumber('1000m')
1
This package also supports comparisons and other simple numerical operations.
https://github.com/slightlynybbled/engineering_notation
The «full» quote shows what is wrong!
The decimal module is indeed following the proprietary (IBM) Decimal Arithmetic Specification.
Quoting this IBM specification in its entirety clearly shows what is wrong with decimal.to_eng_string() (emphasis added):
to-engineering-string – conversion to numeric string
This operation converts a number to a string, using engineering
notation if an exponent is needed.
The conversion exactly follows the rules for conversion to scientific
numeric string except in the case of finite numbers where exponential
notation is used. In this case, the converted exponent is adjusted to be a multiple of three (engineering notation) by positioning the decimal point with one, two, or three characters preceding it (that is, the part before the decimal point will range from 1 through 999). This may require the addition of either one or two trailing zeros.
If after the adjustment the decimal point would not be followed by a digit then it is not added. If the final exponent is zero then no indicator letter and exponent is suffixed.
This proprietary IBM specification actually admits to not applying the engineering notation for numbers with an infinite decimal representation, for which ordinary scientific notation is used instead! This is obviously incorrect behaviour for which a Python bug report was opened.
Solution
from math import floor, log10

def powerise10(x):
    """Returns x as a*10**b with 0 <= a < 10"""
    if x == 0:
        return 0, 0
    Neg = x < 0
    if Neg:
        x = -x
    a = 1.0 * x / 10**(floor(log10(x)))
    b = int(floor(log10(x)))
    if Neg:
        a = -a
    return a, b

def eng(x):
    """Return a string representing x in an engineer friendly notation"""
    a, b = powerise10(x)
    if -3 < b < 3:
        return "%.4g" % x
    a = a * 10**(b % 3)
    b = b - b % 3
    return "%.4gE%s" % (a, b)
Source: https://code.activestate.com/recipes/578238-engineering-notation/
Test result
>>> eng(0.0001)
'100E-6'
Like the answers above, but a bit more compact:
from math import log10, floor

def eng_format(x, precision=3):
    """Returns string in engineering format, i.e. 100.1e-3"""
    x = float(x)  # inplace copy
    if x == 0:
        a, b = 0, 0
    else:
        sgn = 1.0 if x > 0 else -1.0
        x = abs(x)
        a = sgn * x / 10**(floor(log10(x)))
        b = int(floor(log10(x)))

    if -3 < b < 3:
        return ("%." + str(precision) + "g") % x
    else:
        a = a * 10**(b % 3)
        b = b - b % 3
        return ("%." + str(precision) + "gE%s") % (a, b)
Trial:
In [10]: eng_format(-1.2345e-4,precision=5)
Out[10]: '-123.45E-6'