Computing pi by using the definite integral of a circle - Python

I am new to coding, and my assignment requires that I approximate pi by using the definite integral for the area of a disk (circle) with radius 1. I have created the following code in Python. It gives me the correct answer; however, pi is truncated to six digits. Is there a way for me to expand it to 7 digits, per the assignment parameters? Thanks so much!
Code:
GlowScript 2.7 VPython
x = -1
dx = 0.00001
A = 0
while x < 1:
    A = A + sqrt(1 - x**2) * dx   # accumulate the area of each thin rectangle
    x = x + dx
tpi = 2 * A                       # the half-disk area is pi/2, so double it
print(tpi)

I think you're using Python version 2.7? Try formatting the float when you print the value:
print('%.7f' % tpi)
This will print your value as a float rounded to 7 decimal places. To read more about that, have a look at String Formatting Operations.
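As a standalone illustration in plain Python (outside GlowScript; the value below is just a stand-in for the result of the loop), the format code rounds rather than truncates:

tpi = 3.14159265358979       # stand-in for the value computed by the integration loop
print('%.7f' % tpi)          # 3.1415927 - seven digits after the decimal point
print('{:.7f}'.format(tpi))  # same result with str.format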

What you see in the console does not always match the value of a variable. The console might cut off some digits (a logger usually does this in order to save space on the screen), or the Python interpreter might round. To combat this, you can use the following code to extend the number of printed digits:
# coding: utf-8
import math

x = -1
dx = 0.00001
A = 0
while x < 1:
    A = A + math.sqrt(1 - x**2) * dx
    x = x + dx
tpi = 2 * A
print("{0:.30g}".format(tpi))  # this will give you 30 digits, including the digits before the "."

Related

Having issues making math calculations in python, I am receiving the wrong answer in the end

Beginner here. I've been following the 100 days of code course on Udemy and I have been trying to figure out the Tip Calculator project.
https://gyazo.com/285baf25f0c803fc893faa32d23d9fd1
I am receiving the wrong tip per person. For example, if I make the total bill amount 40.53, the tip percent 15, and make it a 3-way split, it gives me 15.33, while doing the same calculation online with another tool gives 15.54. Any tips for a beginner?
Your issue is that you are converting those numbers to integers. Integers don't have decimal places, and int() always rounds down (truncates), so you need to use floats to keep full precision.
Without seeing your code (as BeRT2me has requested), it sounds like the variable you are assigning your intermediate total to is being stored as an integer, or is being truncated/rounded down by the return type of a function you are calling.
The likely fix is to change it to an appropriate decimal/float type.
e.g. Using your values, 40.53 * 1.15 = 46.6095
46.6095 / 3 = 15.5365 (or 15.54 rounded to 2 decimal places)
46 / 3 = 15.33 (rounded to 2 decimal places)
If you are using that code example with total_final_bill, its type would implicitly be an integer, since you are adding integer values.
You are performing an implicit rounding operation on tip_amount and total_bill by using the int() function. These should not be integers, as they are decimal/float values.
So each time you use int() - other than for int(tip_percent) and int(split_question), which really are integer values in your formulas - you are discarding the decimals.
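Since the original code is only visible in the screenshot, here is a minimal sketch of the corrected arithmetic; the variable names (total_bill, tip_percent, split) are assumptions, not the poster's actual names:

# Hypothetical reconstruction with the values from the question; in the real
# project these would come from input() calls.
total_bill = 40.53       # keep money amounts as floats, don't wrap them in int()
tip_percent = 15         # genuinely an integer
split = 3                # genuinely an integer

total_with_tip = total_bill * (1 + tip_percent / 100.0)  # ~46.6095
per_person = total_with_tip / split                      # ~15.5365
print("Each person pays: {:.2f}".format(per_person))     # 15.54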

Simplify an irrational decimal to its simple fraction equivalent using Python

Here is my problem:
I am writing a program to solve a statistical problem from a timed coding challenge using Python 2.7.
I'm not allowed to use many external packages (however, I can use fractions). To finish my problem, I need to convert an irrational decimal number to its fraction equivalent.
Example Input:
0.6428571428571428 [i.e. 9/14]
Problem:
I want to output 9/14 in this instance but if I do something like:
print(Fraction(0.6428571428571428))
It will print some ungodly long fraction that can't be reduced.
Is there a way to reduce 0.6428571428571428 to 9/14 without forcing Fraction to round closest to 14 (since I need to use it for a lot of different fractions)?
Another Example:
.33333333333 (i.e. 1/3)
Current Output:
print(Fraction(.333333333333333)) # Outputs 6004799503160061/18014398509481984
If you know roughly how large your denominators get, you can use limit_denominator. See the docs for this.
Here's what you'd get setting 100000 as your maximum denominator:
from fractions import Fraction
print(Fraction(.333333333333333).limit_denominator(max_denominator=100000))
# 1/3
print(Fraction(0.6428571428571428).limit_denominator(max_denominator=100000))
# 9/14
We're giving it plenty of freedom with 100000 as the upper limit, but it still finds the result we are looking for. You can adjust that number to suit your needs.
For these cases, I continued to get the same results up to 10**14 and started getting different results at 10**15. As Olivier Melançon points out, that is because we have 15 significant digits in our input, and when using max_denominator the error bound is 1/(2 * max_denominator).
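A small sketch of that observation (the exact threshold where the result changes can vary with the input):

from fractions import Fraction

x = 0.6428571428571428  # intended to be 9/14

# The recovered fraction stays at 9/14 over a wide range of limits and only
# changes once the allowed denominator is large enough for the float's own
# representation error to become "visible" to limit_denominator.
for limit in (100, 10**5, 10**14, 10**15):
    print(limit, Fraction(x).limit_denominator(limit))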

Arithmetic error: Python is incorrectly dividing variables

I'm getting something that doesn't seem to be making a lot of sense. I was practicing my coding by making a little program that would give me the probability of getting certain cards within a certain timeframe of a card game. In order to calculate the chances, I needed to create a method that would perform division and report the chances as a fraction and as a decimal. So I designed this:
from fractions import Fraction

def time_odds(card_count, turns, deck_size=60):
    chance_of_occurence = float(card_count) / float(deck_size)
    opening_hand_odds = 7 * chance_of_occurence
    turn_odds = (7 + turns) * chance_of_occurence
    print("Chance of it being in the opening hand: %s or %s" % (opening_hand_odds, Fraction(opening_hand_odds)))
    print("Chance of it being acquired by turn %s : %s or %s" % (turns, turn_odds, Fraction(turn_odds)))
and then I used it like so:
time_odds(3,5)
but for whatever reason I got this as the answer:
"Chance of it being in the opening hand: 0.35000000000000003 or
6305039478318695/18014398509481984"
"Chance of it being acquired by turn 5 : 0.6000000000000001 or
1351079888211149/2251799813685248"
so it's like, almost right, except the decimal is just slightly off, giving like a 0.0000000000003 difference or a 0.000000000000000000001 difference.
Python doesn't do this when I just make it do division like this:
print (7*3/60)
This gives me just 0.35, which is correct. The only difference that I can observe is that I get the slightly incorrect values when I am dividing with variables rather than just numbers.
I've looked around a little for an answer, and most incorrect division problems have to do with integer division (or I think it can be called floor division) , but I didn't manage to find anything addressing this.
I've had a similar problem with Python doing this when I was dividing really big numbers. What's going on?
Why is this so? What can I do to correct it?
The extra digits you're seeing are floating point precision errors. As you do more and more operations with floating point numbers, the errors have a chance of compounding.
The reason you don't see them when you try to replicate the computation by hand is that your replication performs the operations in a different order. If you compute 7 * 3 / 60, the multiplication happens first (with no error), and the division introduces a small enough error that Python's float type hides it for you in its repr (because 0.35 unambiguously refers to the same float value as the computation). If you do 7 * (3 / 60), the division happens first (introducing error) and then the multiplication increases the size of the error to the point that it can't be hidden (because 0.35000000000000003 is a different float value than 0.35).
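A quick way to see this difference directly (a small sketch; assumes Python 3, where / is true division):

print(7 * 3 / 60)    # 0.35 - multiply first (exact), then one division whose error repr hides
print(7 * (3 / 60))  # 0.35000000000000003 - divide first; the error grows and shows up
print('%.3f' % (7 * (3 / 60)))  # 0.350 - explicit formatting hides the noise again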
To avoid printing out the extra digits that are probably error, you may want to explicitly specify a precision to use when turning your numbers into strings. For instance, rather than using the %s format code (which calls str on the value), you could use %.3f, which will round off your number after three decimal places.
There's another issue with your Fractions. You're creating the Fraction directly from the floating point value, which already has the error calculated in. That's why you're seeing the fraction print out with a very large numerator and denominator (it's exactly representing the same number as the inaccurate float). If you instead pass integer numerator and denominator values to the Fraction constructor, it will take care of simplifying the fraction for you without any floating point inaccuracy:
print("Chance of it being in the opening hand: %.3f or %s"
% (opening_hand_odds, Fraction(7*card_count, deck_size)))
This should print out the numbers as 0.350 and 7/20. You can of course choose whatever number of decimal places you want.
Completely separate from the floating point errors, the calculation isn't actually getting the probability right. The formula you're using may be a good enough one for doing in your head while playing a game, but it's not completely accurate. If you're using a computer to crunch the numbers for you, you might as well get it right.
The probability of drawing at least one of N specific cards from a deck of size M after D draws is:
1 - (comb(M-N, D) / comb(M, D))
Where comb is the binomial coefficient or "combination" function (often spoken as "N choose R" and written "nCr" in mathematics). Python doesn't have an implementation of that function in the standard library, but there are a lot of add-on modules you may already have installed that provide one, or you can pretty easily write your own. See this earlier question for more specifics.
For your example parameters, the correct odds are '5397/17110' or 0.315.
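As a side note, Python 3.8+ does ship a binomial coefficient as math.comb; here is a hedged sketch of the formula using it (on older versions, scipy.special.comb or a hand-written comb would do):

from math import comb  # Python 3.8+

def draw_odds(card_count, draws, deck_size=60):
    """Probability of drawing at least one of card_count copies in draws cards."""
    return 1 - comb(deck_size - card_count, draws) / comb(deck_size, draws)

print(draw_odds(3, 7))      # opening hand: ~0.3154, i.e. 5397/17110
print(draw_odds(3, 7 + 5))  # odds by turn 5 (12 cards seen)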

Unexpected number of decimal places and Syntactical Query

I'm trying to find the intersection between the curves $y = x^2 + 3x + 2$ and $y = x^2 + 2x + 1$. For this, I have written the following Python program:
from numpy import *
import numpy as np

for x in np.arange(-100, 100, 0.0001):
    y_1 = x**2 + 3*x + 2
    y_2 = x**2 + 2*x + 1
    if round(y_1, 5) == round(y_2, 5):
        print x
print 'end'
The console displays:
-0.999999996714
end
I have three questions.
1) Why must I include y_1=x**2+3*x+2 and y_2=x**2+2*x+1 in the for statement? Why can I not simply include them after the line from numpy import *?
2) Why is the output to 12 decimal places when I have specified the step in np.arange to be 4 decimal places?
3) Why is -1.0000 not outputted?
Please go easy on me, I'm just starting to use python and thought I would try and solve some simultaneous equations with it.
Thanks,
Jack
1) Because the y_1 and y_2 lines are computing specific values, not defining functions. Plain Python does not have a built-in concept of symbolic equations. (Although you can implement symbolic equations in various ways.)
2) Because binary floating-point, as used in Python, cannot exactly represent 0.0001 (base 10). Therefore, the step is rounded, so your steps are not exactly ten-thousandths. The Python print statement does not round, absent specific instructions to do so, so you get exactly the value the system is using, even though that's not quite the value you asked for (see the sketch below).
3) Same reason: since the steps are not exactly ten-thousandths, the point at which the functions are close enough to test as equal under rounding is not exactly at -1.
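A short sketch of point 2 (runs under Python 2 or 3): converting the float 0.0001 to a Decimal exposes the exact binary value actually being used as the step, and summing many copies of it shows the drift.

from decimal import Decimal

# 0.0001 is stored as the nearest binary fraction, not exactly 1/10000,
# so repeatedly stepping by it drifts away from "nice" decimal grid points.
print(Decimal(0.0001))               # the exact stored value: a long decimal near, but not equal to, 0.0001
print(repr(sum([0.0001] * 10000)))   # not exactly 1.0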
1) First you have (probably) redundant import statements:
from numpy import *
import numpy as np
The first statement imports everything listed in the package's __all__ variable; the second imports the numpy package and aliases it as np. The normal convention is import numpy as np, so I would delete your first line and keep the second.
Now, to answer your question more directly: you need to include your equations inside the for loop because x takes on the value of each element of the array produced by np.arange as the loop runs.
2 and 3) The value is being handled as a float in your equations. The rounding error is inherent to how Python (and most programming languages) represent fractional values in binary. See more here.

Numpy too weak to calculate a precise mean value

This question is very similar to this post - but not exactly
I have some data in a .csv file. The data has precision to the 4th digit (#.####).
Calculating the mean in Excel or SAS gives a result with precision to the 5th digit (#.#####), but using numpy gives:
import numpy as np
data = np.recfromcsv(path2file, delimiter=';', names=['measurements'], dtype=np.float64)
rawD = data['measurements']
print np.average(rawD)
gives a number like this
#.#####999999999994
Clearly something is wrong..
using
from math import fsum
print fsum(rawD.ravel())/rawD.size
gives
#.#####
Is there anything in the np.average that I set wrong _______?
BONUS info:
I'm only working with 200 data points in the array
UPDATE
I thought I should make my case more clear.
I have numbers like 4.2730 in my csv (giving a 4 decimal precision - even though the 4th always is zero [not part of the subject so don't mind that])
Calculating an average/mean by numpy gives me this
4.2516499999999994
Which gives a print by
>>>print "%.4f" % np.average(rawD)
4.2516
Doing the same thing in Excel or SAS gives me this:
4.2517
which I actually believe to be the true average value, because it works out to 4.25165.
This code also illustrates it:
answer = 0
for number in rawD:
    answer += int(number*1000)
print answer/2
425165
So how do I tell np.average() to calculate this value ___?
I'm a bit surprised that numpy did this to me... I thought I only needed to worry if I was dealing with 16-digit numbers. I didn't expect a round-off at the 4th decimal place to be influenced by this.
I know I could use
fsum(rawD.ravel())/rawD.size
But I also have other things (like std) I want to calculate with the same precision
UPDATE 2
I thought I could make a temp solution by
>>>print "%.4f" % np.float64("%.5f" % np.mean(rawD))
4.2416
Which did not solve the case. Then I tried
>>>print "%.4f" % float("4.24165")
4.2416
AHA! There is a bug in the formatter: Issue 5118
To be honest, I don't care if Python stores 4.24165 as 4.241649999... It's still a round-off error - NO MATTER WHAT.
If the interpreter can figure out how to display the number
>>>print float("4.24165")
4.24165
then the formatter should as well, and deal with that number when rounding.
It still doesn't change the fact that I have a round-off problem (now both with the formatter and numpy).
In case you need some numbers to help me out then I have made this modified .csv file:
Download it from here
(I'm aware that this file does not have the number of digits I explained earlier and that the average gives ..9988 at the end instead of ..9994 - it's modified)
Guess my question boils down to: how do I get a string output like the one Excel gives me if I use =AVERAGE(),
and have it round off correctly if I choose to show only 4 digits?
I know that this might seem strange for some.. But I have my reasons for wanting to reproduce the behavior of Excel.
Any help would be appreciated, thank you.
To get exact decimal numbers, you need to use decimal arithmetic instead of binary. Python provides the decimal module for this.
If you want to continue to use numpy for the calculations and simply round the result, you can still do this with decimal. You do it in two steps, rounding to a large number of digits to eliminate the accumulated error, then rounding to the desired precision. The quantize method is used for rounding.
from decimal import Decimal,ROUND_HALF_UP
ten_places = Decimal('0.0000000001')
four_places = Decimal('0.0001')
mean = 4.2516499999999994
print Decimal(mean).quantize(ten_places).quantize(four_places, rounding=ROUND_HALF_UP)
4.2517
The result value of average is a double. When you print out a double, by default all of its digits are printed. What you see here is the result of limited binary precision, which is not a problem of numpy, but a general computing problem. When you care about the presentation of your float value, use "%.4f" % avg_val. There is also a package for rational numbers, to avoid representing fractions as real numbers, but I guess that's not what you're looking for.
For your second snippet, summing all the values by hand and then dividing, I suppose you're using Python 2.7 and all your values are integers. In that case you would have integer division, which truncates everything after the decimal point, resulting in another integer value.
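A minimal illustration of that pitfall, assuming Python 2 semantics (in Python 3, / always performs true division):

# Python 2: dividing two integers performs floor division.
print 7 / 2          # 3   - the fractional part is silently discarded
print 7 / 2.0        # 3.5 - making either operand a float gives true division
print float(7) / 2   # 3.5 - equivalent fix
# Adding "from __future__ import division" at the top of the file (or moving
# to Python 3) makes / always perform true division.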
