I am looking for an efficient way to determine the greatest common divisor of two floats with python. The routine should have the following layout
gcd(a, b, rtol=1e-05, atol=1e-08)
    """
    Returns the greatest common divisor of a and b

    Parameters
    ----------
    a, b : float
        two floats for gcd
    rtol, atol : float, optional
        relative and absolute tolerance

    Returns
    -------
    gcd : float
        Greatest common divisor such that for x in [a, b]:
        np.mod(x, gcd) < rtol*x + atol
    """
Example: gcd of rational and irrational number
The gcd(1., np.pi, rtol=0, atol=1e-5) should return (roughly) 1e-5, as
In [1]: np.mod(np.pi,1e-5)
Out[1]: 2.6535897928590063e-06
In [2]: np.mod(1.,1e-5)
Out[2]: 9.9999999999181978e-06
I would prefer to use a library implementation and not to write it myself.
The fractions.gcd function does not seem appropriate to me here, as I do not want to work with fractions and it (obviously) does not have the tolerance parameters.
Seems like you could just modify the code of fractions.gcd to include the tolerances:
def float_gcd(a, b, rtol=1e-05, atol=1e-08):
    t = min(abs(a), abs(b))
    while abs(b) > rtol * t + atol:
        a, b = b, a % b
    return a
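For a quick sanity check against the example from the question (my own snippet, not part of the original answer):
import numpy as np

g = float_gcd(1.0, np.pi, rtol=0, atol=1e-5)
print(g)                                 # an approximate common divisor
print(np.mod(1.0, g), np.mod(np.pi, g))  # each residual should be within atol of 0 or of g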
The following code may be useful. The function to call is float_gcd(a, b).
from math import gcd, log10, pow

def float_scale(x):
    max_digits = 14
    int_part = int(abs(x))
    magnitude = 1 if int_part == 0 else int(log10(int_part)) + 1
    if magnitude >= max_digits:
        return 0
    frac_part = abs(x) - int_part
    multiplier = 10 ** (max_digits - magnitude)
    frac_digits = multiplier + int(multiplier * frac_part + 0.5)
    while frac_digits % 10 == 0:
        frac_digits /= 10
    return int(log10(frac_digits))

def float_gcd(a, b):
    sc = float_scale(a)
    sc_b = float_scale(b)
    sc = sc_b if sc_b > sc else sc
    fac = pow(10, sc)
    a = int(round(a * fac))
    b = int(round(b * fac))
    return round(gcd(a, b) / fac, sc)
A part of the code is taken from here: Determine precision and scale of particular number in Python
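For example (my own illustrative calls; the comments show the values expected from the scaling logic above):
print(float_gcd(0.5, 0.3))    # 0.1  -- both inputs are integer multiples of 0.1
print(float_gcd(1.5, 2.25))   # 0.75 -- scaled to integers 150 and 225, gcd 75, rescaled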
I have this Python code for finding the intersections of the function f(x)=x**2+x-2 with g(x)=6-x:
import math

# brute force: step X over a range and report the points where the
# Y values of both functions match
def funcs():
    for X in range(-100, 100):
        funcA = (X**2) + X - 2
        funcB = 6 - X
        if funcA == funcB:
            print("##INTERSECTION FOUND!!")
            print(f"({X},{funcA})")
            print(f"({X},{funcB})")
        else:
            pass

funcs()
But my problem is that the script only works with that specific math function; if I change the function even a little bit, the code won't work.
The code just checks where the Y values of f(x) and g(x) match, and does the same for the other point.
Here is the output:
##INTERSECTION FOUND!!
(-4,10)
(-4,10)
##INTERSECTION FOUND!!
(2,4)
(2,4)
In general, this is a root finding problem.
Define h(x) = f(x) - g(x).
The intersection point x implies f(x)=g(x) or h(x)=0.
For root-finding problems, there are many methods, for example the bisection method and Newton's method.
Here is a numerical example with bisection method.
def f(x):
    return x ** 2 + x - 2

def g(x):
    return 6 - x

def h(x):
    return f(x) - g(x)

def bisection(a, b):
    eps = 10 ** -10
    ha = h(a)
    hb = h(b)
    if ha * hb > 0:
        raise ValueError("Bad input")
    for i in range(1000):  # fix iteration number
        ha = h(a)
        midpoint = (a + b) / 2
        hm = h(midpoint)
        if abs(hm) < eps:
            return midpoint
        if hm * ha >= 0:
            a = midpoint
        else:
            b = midpoint
    raise RuntimeError("Out of iterations")

if __name__ == '__main__':
    print(bisection(0, 100))
    print(bisection(-100, 0))
Output:
1.999999999998181
-3.999999999996362
Why are the numbers so close but not exact? Because the problem is solved numerically. Other answers that use the sympy package solve the problem symbolically, which gives the exact answer, but they only work with simple problems.
Why [0, 100] and [-100, 0]? Because I sketched the graph and knew there was a root within each interval. In practice, the bisection method requires an interval [a, b] such that h(a) * h(b) < 0. For the big interval [-100, 100] we have h(-100) * h(100) > 0, so the bisection method does not work on it directly. The big interval is therefore partitioned so that some of the sub-intervals [a, b] satisfy h(a) * h(b) < 0, say [-100, 0] and [0, 100].
Why abs(hm) < eps? This tests whether hm is close to 0. On a computer, we consider two floating-point numbers equal if the absolute value of their difference abs(hm) is smaller than a threshold eps. eps is usually 10 ** -10 to 10 ** -15 because Python floats carry roughly 15 significant decimal digits.
Newton's method will give you one of the roots, depending on the initial point.
For further study, search root finding problem or numerical root finding.
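As a rough sketch of the Newton's method mentioned above (this reuses h(x) from the bisection example; the derivative is approximated with a central difference, which is my simplification rather than part of the code above):
def newton(x0, eps=1e-10, max_iter=100):
    # Newton iteration on h(x) = f(x) - g(x)
    x = x0
    for _ in range(max_iter):
        hx = h(x)
        if abs(hx) < eps:
            return x
        dh = (h(x + 1e-7) - h(x - 1e-7)) / 2e-7  # numerical derivative
        x = x - hx / dh
    raise RuntimeError("Out of iterations")

print(newton(100))    # converges to (roughly) 2
print(newton(-100))   # converges to (roughly) -4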
As you want the intersection, you are looking for a solution of f(x) - g(x) = 0. So you can use fsolve from scipy to find the root of f(x) - g(x):
from scipy.optimize import fsolve

def func(X):
    funcA = (X ** 2) + X - 2
    funcB = 6 - X
    return funcA - funcB

x = fsolve(func, 0)
print(x)
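Note that fsolve returns the root it converges to from the given starting point, so to get both intersections you can call it with two different starting points (my choice of -10 is arbitrary, anything near the second root works):
print(fsolve(func, 0))     # -> [ 2.]
print(fsolve(func, -10))   # -> [-4.]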
You could employ sympy, Python's symbolic math library:
from sympy import symbols, Eq, solve
X = symbols('X', real=True)
funcA = (X ** 2) + X - 2
funcB = 6 - X
sol = solve(Eq(funcA, funcB))
print(sol) # --> [-4, 2]
To obtain the corresponding values for funcA and funcB
for s in sol:
    print(f'X={s} funcA({s})={funcA.subs(X, s)} funcB({s})={funcB.subs(X, s)}')
# X=-4 funcA(-4)=10 funcB(-4)=10
# X=2 funcA(2)=4 funcB(2)=4
For some functions, the result may still be symbolic, as that is the most exact form. .evalf() obtains a numeric approximation. For example:
funcA = X ** 2 + X - 2
funcB = -2 * X ** 2 + X + 7
sol = solve(Eq(funcA, funcB))
for s in sol:
    print(f'X={s} funcA(X)={funcA.subs(X, s)} funcB(X)={funcB.subs(X, s)}')
    print(f'X={s.evalf()} funcA(X)={funcA.subs(X, s).evalf()} funcB(X)={funcB.subs(X, s).evalf()}')
Output:
X=-sqrt(3) funcA(X)=1 - sqrt(3) funcB(X)=1 - sqrt(3)
X=-1.73205080756888 funcA(X)=-0.732050807568877 funcB(X)=-0.732050807568877
X=sqrt(3) funcA(X)=1 + sqrt(3) funcB(X)=1 + sqrt(3)
X=1.73205080756888 funcA(X)=2.73205080756888 funcB(X)=2.73205080756888
Good day!
I am using python v3.8 and I want to calculate the value range (minimum and maximum) of a given formula with (also given) bounds e.g.:
formula1: a * sqrt(b/c)
formula2: a^2 * b/1000 + 3 * (a+b)
formula3: (1/(2 * PI * (a * 1000) * (b * 1000))) * 10^12
..with a=[0,5], b=[10,20], c=[30,40]
I am not too familiar with scipy, numpy, sympy... and I wonder if there is an "easy" way to just calculate the formula with different values, write the results into an array and get the min/max from that? The problem I have with "writing into an array and getting min/max" is that some bounds are as wide as [-100000, 100000] with float values, which would generate far too many values.
I do not need the information with which values the minimum/maximum was reached, but only which min/max values can be reached.
Try SymPy solvers, they have solveset that can do the following:
>>> solveset(Eq(x**2, 1), x)
{-1, 1}
Symbolic:
Maximum and minimum points are called critical points. To find a critical point you need to take the partial derivative with respect to every variable (f'_a = df/da, f'_b = df/db, f'_c = df/dc) and solve the system of equations where all of them are equal to 0. We can do this with sympy.
import sympy as sp

a = sp.Symbol('a', real=True)
b = sp.Symbol('b', real=True)
c = sp.Symbol('c', real=True)

functions = [
    a * sp.sqrt(b / c),
    a**2 * b/1000 + 3*(a+b),
    10**6 / (2 * sp.pi * a * b),
]

for f in functions:
    f_a, f_b, f_c = f.diff(a), f.diff(b), f.diff(c)
    print(f"f(a, b, c) = {f}")
    print(f"f'_a(a, b, c) = {f_a} = 0")
    print(f"f'_b(a, b, c) = {f_b} = 0")
    print(f"f'_c(a, b, c) = {f_c} = 0")
    print("Critical points:", sp.solve([f_a, f_b, f_c], a, b, c))
    print()
As you can see if you execute this code, there is no critical point, so there is no absolute maximum or minimum value for any of these functions in the real domain (there are two critical points for the second equation in the imaginary domain).
Numeric approach:
Using numpy and pandas we can create a matrix with all the possible combinations and then apply each of our functions. The maximum and minimum values are computed per column; they are not related across columns. As expected, the max and min values for the a, b and c columns are the lower and upper bounds of the ranges.
import pandas as pd
from pandas.core.reshape.util import cartesian_product  # pandas-internal helper; may change between versions
import numpy as np

# Number of values per range
N = 21

# Functions
functions = [
    lambda row: row['a'] * np.sqrt(row['b'] / row['c']),
    lambda row: row['a']**2 * row['b']/1000 + 3*(row['a']+row['b']),
    lambda row: 10**6 / (2 * np.pi * row['a'] * row['b']),
]

# Lower and upper bounds
a_lower, a_upper = 0, 5
b_lower, b_upper = 10, 20
c_lower, c_upper = 30, 40

def min_max(col):
    return pd.Series(index=['min', 'max'], data=[col.min(), col.max()])

values = [
    np.linspace(a_lower, a_upper, N),
    np.linspace(b_lower, b_upper, N),
    np.linspace(c_lower, c_upper, N),
]

df = pd.DataFrame(cartesian_product(values), index=['a', 'b', 'c']).T

for i, f in enumerate(functions):
    df[f'f_{i + 1}'] = df.apply(f, axis=1)

print(df.apply(min_max))
Output:
a b c f_1 f_2 f_3
min 0.0 10.0 30.0 0.000000 30.0 1591.549431
max 5.0 20.0 40.0 4.082483 75.5 inf
N = 101 gives exactly the same output (it takes a while to process, as it has to evaluate the formulas 101^3 > 1M times).
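Since cartesian_product is pandas-internal (as noted in the comment above), an equivalent grid can be built with itertools.product; the rest of the code (the apply loop and min_max) is unchanged. A sketch using the same names as above:
import itertools

df = pd.DataFrame(
    list(itertools.product(*values)),
    columns=['a', 'b', 'c'],
)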
I'm trying to approximate an empirical cumulative distribution function (the ECDF I want to approximate) with a smooth function (with fewer than 5 parameters), such as the generalized logistic function.
However, using scipy.optimize.curve_fit, the fitting operation gives really bad approximations or it doesn't work at all (depending on the initial values). The variable series represents my data stored as a pandas.Series.
import numpy as np
from scipy.optimize import curve_fit

def fit_ecdf(x):
    x = np.sort(x)
    def result(v):
        return np.searchsorted(x, v, side='right') / x.size
    return result

ecdf = fit_ecdf(series)

def genlogistic(x, B, M, Q, v):
    return 1 / (1 + Q * np.exp(-B * (x - M))) ** (1 / v)

params = curve_fit(genlogistic, xdata=series, ydata=ecdf(series), p0=(0.1, 10.0, 0.1, 0.1))[0]
Should I use another type of function for the fit?
Are there any code mistakes?
UPDATE - 1
As asked, I link to a csv containing the data.
UPDATE - 2
After a lot of searching and trial and error, I found this function:
f(x; a, b, c) = 1 - 1 / (1 + (x / b) ** a) ** c
with a = 4.61320000, b = 2.94570952, c = 0.5886922
which fits a lot better than the other one. The only problem is the little step that the ECDF shows near x=1. How can I modify f to improve the quality of the fit? I was thinking of adding some sort of function that is "relevant" only in that region. Here are the graphical results of the fit, where the solid blue line is the ECDF and the dotted line represents the (x, f(x)) points.
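For reference, the curve from UPDATE - 2 written out as a Python function (the default parameter values are the fitted ones quoted above):
import numpy as np

def f(x, a=4.61320000, b=2.94570952, c=0.5886922):
    # 1 - 1 / (1 + (x / b) ** a) ** c
    return 1 - 1 / (1 + (x / b) ** a) ** c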
I found out how to deal with that little step near x=1. As expressed in the question, adding some sort of function that is significant only in that interval was the game changer.
The "step" ends at about (1.7, 0.04) so I needed a sort of function that flattens for x > 1.7 and has y = 0.04 as asymptote. The natural choice (just to stay on point) was to take a function like f(x) = 1/exp(x).
Thanks to JamesPhillips, I also picked the proper data for the regression (no duplicate values = no overweighted points).
Python Code
import numpy as np
from scipy.optimize import curve_fit

def fit_ecdf(x):
    x = np.sort(x)
    def result(v):
        return np.searchsorted(x, v, side='right') / x.size
    return result

ecdf = fit_ecdf(series)
unique_series = series.unique().tolist()

def cdf_interpolation(x, a, b, c, d):
    f_1 = 0.95 + (0 - 0.95) / (1 + (x / b) ** a) ** c + 0.05
    f_2 = (0 - 0.05) / (np.exp(d * x))
    return f_1 + f_2

params = curve_fit(cdf_interpolation,
                   xdata=unique_series,
                   ydata=ecdf(unique_series),
                   p0=(6.0, 3.0, 0.4, 1.0))[0]
Parameters
a = 6.03256462
b = 2.89418871
c = 0.42997956
d = 1.06864006
Graphical results
I got an OK fit for a 5-parameter logistic equation (see image and code) using unique values; I'm not sure if the low-end curve is sufficient for your needs, please check.
import numpy as np

def Sigmoidal_FiveParameterLogistic_model(x_in):  # from zunzun.com
    # coefficients
    a = 9.9220221252324947E-01
    b = -3.1572339989462903E+00
    c = 2.2303376075685142E+00
    d = 2.6271495036080207E-02
    f = 3.4399008905318986E+00
    return d + (a - d) / np.power(1.0 + np.power(x_in / c, b), f)
I am trying to implement periodic Gaussian in C, MATLAB or Python.
What is the correct way to evaluate the periodic Gaussian function, i.e. the Gaussian summed over all integer shifts k*L of its centre?
I am currently evaluating a truncated version of that sum, to avoid the summation from minus to plus infinity:
Thanks in advance.
Well, you shouldn't have to evaluate the infinite sum, because once you get to (x - kL) >> 2*sigma, you'll reach the limits of floating-point precision.
So you should be able to start by finding the minimum of x - kL (i.e., just set x = x mod L and k=0, legit to do because this is an infinite sum) and then adding the terms at k = +/- 1, +/- 2, ... until you reach floating point limits. Here's some example MATLAB code that illustrates the idea - I just whipped this up so I can't promise it's bug-free, but it does seem to exhibit some of the basic expected behavior.
function [result] = Periodic_Gaussian(x, L, sigma)
    gaussian = @(y) 1/(2*pi*sigma)*exp(-y.^2 ./ 4 ./ sigma^2);

    x = mod(x, L);
    oldresult = NaN;
    newresult = gaussian(x);
    k = 1;
    while any(newresult ~= oldresult)
        oldresult = newresult;
        newresult = oldresult + gaussian(x-k*L) + gaussian(x+k*L);
        k = k+1;
    end
    result = newresult;
end
Hope this is helpful!
EDIT: Missed a factor of 4 in the denominator of the argument to the exponential, and updated the code to take a vector of x if desired.
function [result] = PeriodicGaussian(x, L, sigma)
    gaussian = @(y, sigma) 1/(2*pi*sigma)*exp(-y.^2 ./ 2 ./ sigma^2);

    x0 = mod(x, L);
    x1 = mod(x, -1 * L);
    result = gaussian(x0, sigma) + gaussian(x1, sigma);
    correctionIdx = (x0 == 0 & x1 == 0);
    result(correctionIdx) = 0.5 * result(correctionIdx);
end
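For a Python version, a direct NumPy translation of the updated MATLAB code might look like this (a sketch with the same normalisation as the MATLAB snippet, keeping only the two nearest periodic images):
import numpy as np

def periodic_gaussian(x, L, sigma):
    gaussian = lambda y: 1.0 / (2 * np.pi * sigma) * np.exp(-y**2 / (2 * sigma**2))
    x = np.asarray(x, dtype=float)
    x0 = np.mod(x, L)
    x1 = np.mod(x, -L)
    result = gaussian(x0) + gaussian(x1)
    # where x is an exact multiple of L the two images coincide, so halve to avoid double counting
    on_grid = (x0 == 0) & (x1 == 0)
    return np.where(on_grid, 0.5 * result, result)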
I need to calculate binomial confidence intervals for large set of data within a script of python. Do you know any function or library of python that can do this?
Ideally I would like to have a function like this http://statpages.org/confint.html implemented on python.
Thanks for your time.
Just noting because it hasn't been posted elsewhere here that statsmodels.stats.proportion.proportion_confint lets you get a binomial confidence interval with a variety of methods. It only does symmetric intervals, though.
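For example (my illustration of the call; the 'beta' method is the Clopper-Pearson exact interval, and 'wilson' is also available):
from statsmodels.stats.proportion import proportion_confint

proportion_confint(13, 100, alpha=0.05, method='beta')    # Clopper-Pearson exact
proportion_confint(13, 100, alpha=0.05, method='wilson')  # Wilson score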
I would say that R (or another stats package) would probably serve you better if you have the option. That said, if you only need the binomial confidence interval you probably don't need an entire library. Here's the function in my most naive translation from javascript.
def binP(N, p, x1, x2):
    p = float(p)
    q = p/(1-p)
    k = 0.0
    v = 1.0
    s = 0.0
    tot = 0.0

    while(k<=N):
        tot += v
        if(k >= x1 and k <= x2):
            s += v
        if(tot > 10**30):
            s = s/10**30
            tot = tot/10**30
            v = v/10**30
        k += 1
        v = v*q*(N+1-k)/k
    return s/tot
def calcBin(vx, vN, vCL = 95):
    '''
    Calculate the exact confidence interval for a binomial proportion

    Usage:
    >>> calcBin(13,100)
    (0.07107391357421874, 0.21204372406005856)
    >>> calcBin(4,7)
    (0.18405151367187494, 0.9010086059570312)
    '''
    vx = float(vx)
    vN = float(vN)
    # Set the confidence bounds
    vTU = (100 - float(vCL))/2
    vTL = vTU

    vP = vx/vN
    if(vx==0):
        dl = 0.0
    else:
        v = vP/2
        vsL = 0
        vsH = vP
        p = vTL/100
        while((vsH-vsL) > 10**-5):
            if(binP(vN, v, vx, vN) > p):
                vsH = v
                v = (vsL+v)/2
            else:
                vsL = v
                v = (v+vsH)/2
        dl = v

    if(vx==vN):
        ul = 1.0
    else:
        v = (1+vP)/2
        vsL = vP
        vsH = 1
        p = vTU/100
        while((vsH-vsL) > 10**-5):
            if(binP(vN, v, 0, vx) < p):
                vsH = v
                v = (vsL+v)/2
            else:
                vsL = v
                v = (v+vsH)/2
        ul = v

    return (dl, ul)
While the scipy.stats module has a method .interval() to compute the equal-tails confidence interval, it lacks a similar method to compute the highest density interval. Here is a rough way to do it using methods found in scipy and numpy.
This solution also assumes you want to use a Beta distribution as a prior. The hyper-parameters a and b are set to 1, so that the default prior is a uniform distribution between 0 and 1.
import numpy
from scipy.stats import beta
from scipy.stats import norm
def binomial_hpdr(n, N, pct, a=1, b=1, n_pbins=1e3):
    """
    Function computes the posterior mode along with the upper and lower bounds of the
    **Highest Posterior Density Region**.

    Parameters
    ----------
    n: number of successes
    N: sample size
    pct: the size of the confidence interval (between 0 and 1)
    a: the alpha hyper-parameter for the Beta distribution used as a prior (Default=1)
    b: the beta hyper-parameter for the Beta distribution used as a prior (Default=1)
    n_pbins: the number of bins to segment the p_range into (Default=1e3)

    Returns
    -------
    A tuple that contains the mode as well as the lower and upper bounds of the interval
    (mode, lower, upper)
    """
    # fixed random variable object for posterior Beta distribution
    rv = beta(n+a, N-n+b)
    # determine the mode and standard deviation of the posterior
    stdev = rv.stats('v')**0.5
    mode = (n+a-1.)/(N+a+b-2.)
    # compute the number of sigma that corresponds to this confidence
    # this is used to set the rough range of possible success probabilities
    n_sigma = numpy.ceil(norm.ppf( (1+pct)/2. ))+1
    # set the min and max values for success probability
    max_p = mode + n_sigma * stdev
    if max_p > 1:
        max_p = 1.
    min_p = mode - n_sigma * stdev
    if min_p < 0:
        min_p = 0.
    # make the range of success probabilities
    p_range = numpy.linspace(min_p, max_p, int(n_pbins)+1)
    # construct the probability mass function over the given range
    if mode > 0.5:
        sf = rv.sf(p_range)
        pmf = sf[:-1] - sf[1:]
    else:
        cdf = rv.cdf(p_range)
        pmf = cdf[1:] - cdf[:-1]
    # find the upper and lower bounds of the interval
    sorted_idxs = numpy.argsort( pmf )[::-1]
    cumsum = numpy.cumsum( numpy.sort(pmf)[::-1] )
    j = numpy.argmin( numpy.abs(cumsum - pct) )
    upper = p_range[ (sorted_idxs[:j+1]).max()+1 ]
    lower = p_range[ (sorted_idxs[:j+1]).min() ]

    return (mode, lower, upper)
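A usage sketch (my own call; with the default uniform prior the mode is simply n/N):
mode, lower, upper = binomial_hpdr(13, 100, 0.95)
print(mode, lower, upper)   # mode == 0.13; lower/upper bound the 95% HPD region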
Just been trying this myself. If it helps here's my solution, which takes two lines of code and seems to give equivalent results to that JS page. This is the frequentist one-sided interval, I'm calling the input argument the MLE (maximum likelihood estimate) of the binomial parameter theta. I.e. mle = number of successes/number of trials. I find the upper bound of the one sided interval. The alpha value used here is therefore double the one in the JS page for the upper limit.
from scipy.stats import binom
from scipy.optimize import bisect

def binomial_ci( mle, N, alpha=0.05 ):
    """
    One sided confidence interval for a binomial test.

    If after N trials we obtain mle as the proportion of those
    trials that resulted in success, find c such that

    P(k/N < mle; theta = c) = alpha

    where k/N is the proportion of successes in the set of trials,
    and theta is the success probability for each trial.
    """
    to_minimise = lambda c: binom.cdf(mle*N, N, c) - alpha
    return bisect(to_minimise, 0, 1)
To find the two-sided interval, call the function twice, with alpha/2 and 1 - alpha/2 as the alpha argument.
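For example, a 95% two-sided interval for 13 successes in 100 trials (my illustration of that call pattern):
mle = 13 / 100.0
lower = binomial_ci(mle, 100, alpha=1 - 0.05/2)   # lower bound
upper = binomial_ci(mle, 100, alpha=0.05/2)       # upper bound
print(lower, upper)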
The following gives the exact (Clopper-Pearson) interval for the binomial distribution in a simple way:
def binomial_ci(x, n, alpha=0.05):
    # x is number of successes, n is number of trials
    from scipy import stats
    if x==0:
        c1 = 0
    else:
        c1 = stats.beta.interval(1-alpha, x, n-x+1)[0]
    if x==n:
        c2 = 1
    else:
        c2 = stats.beta.interval(1-alpha, x+1, n-x)[1]
    return c1, c2
You may check the code by e.g.:
p1,p2 = binomial_ci(2,7)
from scipy import stats
assert abs(stats.binom.cdf(1,7,p1)-.975)<1E-5
assert abs(stats.binom.cdf(2,7,p2)-.025)<1E-5
assert abs(binomial_ci(0,7, alpha=.1)[0])<1E-5
assert abs((1-binomial_ci(0,7, alpha=.1)[1])**7-0.05)<1E-5
assert abs(binomial_ci(7,7, alpha=.1)[1]-1)<1E-5
assert abs((binomial_ci(7,7, alpha=.1)[0])**7-0.05)<1E-5
I used the relation between the binomial proportion confidence interval and the regularized incomplete beta function, as described here:
https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval#Clopper%E2%80%93Pearson_interval
I needed to do this as well. I was using R and wanted to learn a way to work it out for myself. I would not say it is strictly pythonic.
The docstring explains most of it. It assumes you have scipy installed.
def exact_CI(x, N, alpha=0.95):
    """
    Calculate the exact confidence interval of a proportion
    where there is a wide range in the sample size or the proportion.

    This method avoids the assumption that data are normally distributed. The sample size
    and proportion are described by a beta distribution.

    Parameters
    ----------
    x: the number of cases from which the proportion is calculated as a positive integer.
    N: the sample size as a positive integer.
    alpha : set at 0.95 for 95% confidence intervals.

    Returns
    -------
    The proportion with the lower and upper confidence intervals as a dict.
    """
    from scipy.stats import beta
    x = float(x)
    N = float(N)
    p = round((x/N)*100, 2)

    intervals = [round(i, 4)*100 for i in beta.interval(alpha, x, N-x+1)]
    intervals.insert(0, p)

    result = {'Proportion': intervals[0], 'Lower CI': intervals[1], 'Upper CI': intervals[2]}
    return result
A numpy/scipy-free way of computing the same thing using the Wilson score and an approximation to the normal cumulative distribution function:
import math
def binconf(p, n, c=0.95):
    '''
    Calculate binomial confidence interval based on the number of positive and
    negative events observed.

    Parameters
    ----------
    p: int
      number of positive events observed
    n: int
      number of negative events observed
    c : optional, [0,1]
      confidence percentage. e.g. 0.95 means 95% confident the probability of
      success lies between the 2 returned values

    Returns
    -------
    theta_low  : float
      lower bound on confidence interval
    theta_high : float
      upper bound on confidence interval
    '''
    p, n = float(p), float(n)
    N = p + n

    if N == 0.0:
        return (0.0, 1.0)

    p = p / N
    z = normcdfi(1 - 0.5 * (1-c))

    a1 = 1.0 / (1.0 + z * z / N)
    a2 = p + z * z / (2 * N)
    a3 = z * math.sqrt(p * (1-p) / N + z * z / (4 * N * N))

    return (a1 * (a2 - a3), a1 * (a2 + a3))
def erfi(x):
    """Approximation to inverse error function"""
    a = 0.147  # MAGIC!!!
    a1 = math.log(1 - x * x)
    a2 = (
        2.0 / (math.pi * a)
        + a1 / 2.0
    )
    return (
        sign(x) *
        math.sqrt( math.sqrt(a2 * a2 - a1 / a) - a2 )
    )

def sign(x):
    if x < 0: return -1
    if x == 0: return 0
    if x > 0: return 1

def normcdfi(p, mu=0.0, sigma2=1.0):
    """Inverse CDF of normal distribution"""
    if mu == 0.0 and sigma2 == 1.0:
        return math.sqrt(2) * erfi(2 * p - 1)
    else:
        return mu + math.sqrt(sigma2) * normcdfi(p)
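For example (my own call; note that binconf takes positive and negative counts, not successes and trials):
print(binconf(13, 87))   # Wilson interval for 13 successes out of 100, roughly (0.08, 0.21)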
Astropy provides such a function (although installing and importing astropy may be a bit excessive):
astropy.stats.binom_conf_interval
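For example (the same call that is quoted further down when comparing with scipy's binomtest):
from astropy.stats import binom_conf_interval

binom_conf_interval(4, 7, 0.95, interval='wilson')
# -> array([0.25045836, 0.84178014])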
I am not an expert on statistics, but binomtest is built into SciPy and produces the same results as the accepted answer:
from scipy.stats import binomtest
binomtest(13, 100).proportion_ci()
Out[11]: ConfidenceInterval(low=0.07107304618545972, high=0.21204067708744978)
binomtest(4, 7).proportion_ci()
Out[25]: ConfidenceInterval(low=0.18405156764007, high=0.9010117215575631)
It uses the Clopper-Pearson exact method by default, which matches Curt's accepted answer; for comparison, that answer gives these values:
Usage:
>>> calcBin(13,100)
(0.07107391357421874, 0.21204372406005856)
>>> calcBin(4,7)
(0.18405151367187494, 0.9010086059570312)
It also has options for Wilson's method, with or without continuity correction, which matches TheBamf's astropy answer:
binomtest(4, 7).proportion_ci(method='wilson')
Out[32]: ConfidenceInterval(low=0.2504583645276572, high=0.8417801447485302)
binom_conf_interval(4, 7, 0.95, interval='wilson')
Out[33]: array([0.25045836, 0.84178014])
This also matches R's binom.test and statsmodels.stats.proportion.proportion_confint, according to cxrodgers' comment:
For 30 successes in 60 trials, both R's binom.test and statsmodels.stats.proportion.proportion_confint give (.37, .63) using Clopper-Pearson.
binomtest(30, 60).proportion_ci(method='exact')
Out[34]: ConfidenceInterval(low=0.3680620319424367, high=0.6319379680575633)