I'm studying an article where there are two functions:
$e(x) = \frac{e_0}{8}\left[(2x^3 + x)\sqrt{1 + x^2} - \sinh^{-1}(x)\right]$
$p(x) = \frac{e_0}{24}\left[(2x^3 - 3x)\sqrt{1 + x^2} + 3\sinh^{-1}(x)\right]$
The article asks me to find $e(p)$ numerically starting from these two expressions. It suggests using a "root-finding routine", but I have no idea how to implement the code. Can someone please help me? If possible, I need a fairly general numerical algorithm (I'm writing in Python). I tried the pynverse library, but it is not sufficiently accurate.
I solved my problem with the following code (it uses bisection root finding):
import numpy as np
def bisection(func, a, b, N):
    # Classic bisection: requires a sign change of func on [a, b].
    if func(a) * func(b) >= 0:
        # No sign change, so no bracketed root.
        return None
    a_n = a
    b_n = b
    for n in range(1, N + 1):
        m_n = (a_n + b_n) / 2
        f_m_n = func(m_n)
        if func(a_n) * f_m_n < 0:
            b_n = m_n
        elif func(b_n) * f_m_n < 0:
            a_n = m_n
        elif f_m_n == 0:
            # Found an exact solution.
            return m_n
        else:
            print("No solutions with bisection. Returning None.")
            return None
    return (a_n + b_n) / 2

def eden(pressure):
    # e(x) and p(x) with e_0 = 1; invert p numerically, then evaluate e.
    e = lambda x: (1/8) * ((2*x**3 + x) * (1 + x**2)**(1/2) - np.arcsinh(x))
    # prefactor e_0/24 with e_0 = 1, consistent with e(x) and the article's formula
    p = lambda x: (1/24) * ((2*x**3 - 3*x) * (1 + x**2)**(1/2) + 3*np.arcsinh(x)) - pressure
    x_p = bisection(p, -1e3, 1e3, 1000)
    return e(x_p)
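For a quick sanity check, you can call eden on a sample pressure; the value below is an arbitrary illustration, not one from the article:

sample_pressure = 0.01  # arbitrary test value
print(eden(sample_pressure))  # energy density corresponding to that pressure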
Given two sequences A and B of the same length: one is strictly increasing, the other is strictly decreasing.
It is required to find an index i such that the absolute value of the difference between A[i] and B[i] is minimal. If there are several such indices, the answer is the smallest of them. The input sequences are standard Python arrays; it is guaranteed that they have the same length. Efficiency requirement: the asymptotic complexity must be at most a power of the logarithm of the length of the input sequences (i.e. polylogarithmic).
I have implemented the index lookup using the golden-section method, but I am bothered by its use of floating-point arithmetic. Is it possible to improve this algorithm so as to avoid it, or can you come up with a more concise solution?
import random
import math
def peak(A, B):
    def f(x):
        return abs(A[x] - B[x])
    phi_inv = 1 / ((math.sqrt(5) + 1) / 2)
    def cal_x1(left, right):
        return right - (round((right - left) * phi_inv))
    def cal_x2(left, right):
        return left + (round((right - left) * phi_inv))
    left, right = 0, len(A) - 1
    x1, x2 = cal_x1(left, right), cal_x2(left, right)
    while x1 < x2:
        if f(x1) > f(x2):
            left = x1
            x1 = x2
            x2 = cal_x1(x1, right)
        else:
            right = x2
            x2 = x1
            x1 = cal_x2(left, x2)
    if x1 > 1 and f(x1 - 2) <= f(x1 - 1): return x1 - 2
    if x1 + 2 < len(A) and f(x1 + 2) < f(x1 + 1): return x1 + 2
    if x1 > 0 and f(x1 - 1) <= f(x1): return x1 - 1
    if x1 + 1 < len(A) and f(x1 + 1) < f(x1): return x1 + 1
    return x1
# value check
def make_arr(inv):
    x = set()
    while len(x) != 1000:
        x.add(random.randint(-10000, 10000))
    x = sorted(list(x), reverse=inv)
    return x

x = make_arr(0)
y = make_arr(1)
needle = 1000000
c = 0
for i in range(1000):
    if abs(x[i] - y[i]) < needle:
        c = i
        needle = abs(x[i] - y[i])
print(c)
print(peak(x, y))
Approach
The poster asks about alternative, simpler solutions to the posted code.
The problem is a variant of LeetCode Problem 852, where the goal is to find the peak index in a mountain array. We convert it to a peak problem, rather than a min problem, by computing the negative of the absolute difference. Our approach is to modify this Python solution to the LeetCode problem.
Code
def binary_search(x, y):
    ''' Mod of https://walkccc.me/LeetCode/problems/0852/ to use a function '''
    def f(m):
        ' Absolute value of the difference at index m of the two arrays '
        return -abs(x[m] - y[m])  # Negated so we are looking for a peak
    # Find the peak using binary search
    l = 0
    r = len(x) - 1
    while l < r:
        m = (l + r) // 2
        if f(m) < f(m + 1):  # check if increasing
            l = m + 1
        else:
            r = m  # was decreasing
    return l
Test
import random

def linear_search(A, B):
    ' Linear search method '
    values = [abs(ai - bi) for ai, bi in zip(A, B)]
    return values.index(min(values))  # linear search

def make_arr(inv):
    random.seed(10)  # added so we can repeat with the same data
    x = set()
    while len(x) != 1000:
        x.add(random.randint(-10000, 10000))
    x = sorted(list(x), reverse=inv)
    return x
# Create data
x = make_arr(0)
y = make_arr(1)
# Run search methods
print(f'Linear Search Solution {linear_search(x, y)}')
print(f'Golden Section Search Solution {peak(x, y)}') # posted code
print(f'Binary Search Solution {binary_search(x, y)}')
Output
Linear Search Solution 499
Golden Section Search Solution 499
Binary Search Solution 499
I have been writing code for a module I am making for my Discord bot. I have been trying not to use any modules, since importing them has been giving me trouble, so I thought I should write the code for sine and cosine myself.
The problem is that I don't really know how to implement them. I couldn't find them anywhere on the net, as everywhere I only saw the use of the math module, which I don't want to use.
I don't know how to work with them, so I would like some help.
Thank you! :)
Using a Taylor expansion you can get an approximation up to the desired precision.
http://hyperphysics.phy-astr.gsu.edu/hbase/tayser.html
def pow(base, exponent):
    return base ** exponent

def faktorial(n):
    value = float(1)
    for i in range(1, n + 1):
        value = value * i
    return value

def cos(x):
    x = x * 3.141592653589793 / 180  # degrees to radians
    value = 1
    sign = -1
    n = 200  # precision (number of Taylor terms)
    i = 2
    while i < n:
        value = value + (pow(x, i) / faktorial(i) * sign)
        i = i + 2
        sign = sign * -1
    return value

def sin(x):
    x = x * 3.141592653589793 / 180  # degrees to radians
    value = x
    sign = -1
    n = 200  # precision (number of Taylor terms)
    i = 3
    while i < n:
        value = value + (pow(x, i) / faktorial(i) * sign)
        i = i + 2
        sign = sign * -1
    return value
pi = 3.1415926535897932384626433832795028841971  # value of the constant pi

def f(n):  # factorial function
    if n == 1 or n == 0:
        return 1
    else:
        return n * f(n - 1)

def deg(x):
    rad = x * pi / 180
    return rad

def sin(x):  # Taylor expansion of sin(x)
    k = 0
    sinx = 0
    sign = 1
    if x < 0:
        x, sign = -x, -1  # sin(-x) = -sin(x)
    while x >= pi:
        x -= pi
        sign = -sign      # sin(x + pi) = -sin(x)
    if pi > x > pi / 2:
        x = pi - x        # sin(pi - x) = sin(x)
    while k < 15:
        sinx += (-1)**k * x**(2*k + 1) / f(2*k + 1)
        k += 1
    return sign * sinx

def cos(x):
    cosx = sin(pi / 2 - x)
    return cosx
I have improved the code. It now gives accurate results up to 14 decimal places. Instead of writing out the full Taylor expansion, I used a while loop, which acts here as the summation sign in maths. I also shortened the code inside cos(x): instead of writing out another Taylor expansion, I used the identity that converts sin x to cos x, which reduces the amount of calculation. I also made a small change so that you can calculate sin x of huge numbers too, with the same accuracy.
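As a minimal sanity check, something like the following should show close agreement (math is imported here only for comparison; the implementation above does not use it):

import math  # used only to verify the results

for d in (0, 30, 90, 150, 250, 1000):
    x = deg(d)
    print(d, sin(x), math.sin(x), cos(x), math.cos(x))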
So I'm trying to make a program that solves various normal-distribution questions in pure Python (no modules other than math), to 4 decimal places only, for A Levels, and there is a problem that occurs in the call get_z_less_than_a_equal(0.75). Apparently, without the assert statement in the except clause, the variables all get messed up and change. The error I'm catching is the recursion error. Anyway, if there is an easier and more efficient way to do things, it'd be appreciated.
import math
mean = 0
standard_dev = 1
percentage_points = {0.5000: 0.0000, 0.4000: 0.2533, 0.3000: 0.5244, 0.2000: 0.8416, 0.1000: 1.2816, 0.0500: 1.6440, 0.0250: 1.9600, 0.0100: 2.3263, 0.0050: 2.5758, 0.0010: 3.0902, 0.0005: 3.2905}
def get_z_less_than(x):
    """
    P(Z < x)
    """
    return round(0.5 * (1 + math.erf((x - mean) / math.sqrt(2 * standard_dev**2))), 4)

def get_z_greater_than(x):
    """
    P(Z > x)
    """
    return round(1 - get_z_less_than(x), 4)

def get_z_in_range(lower_bound, upper_bound):
    """
    P(lower_bound < Z < upper_bound)
    """
    return round(get_z_less_than(upper_bound) - get_z_less_than(lower_bound), 4)

def get_z_less_than_a_equal(x):
    """
    P(Z < a) = x
    acquires a, given x
    """
    # first trial: brute forcing
    for i in range(401):
        a = i / 100
        p = get_z_less_than(a)
        if x == p:
            return a
        elif p > x:
            break
    # second trial: using symmetry
    try:
        res = -get_z_less_than_a_equal(1 - x)
    except:
        # third trial: using estimation
        assert a, "error"
        prev = get_z_less_than(a - 0.01)
        p = get_z_less_than(a)
        if abs(x - prev) > abs(x - p):
            res = a
        else:
            res = a - 0.01
    return res

def get_z_greater_than_a_equal(x):
    """
    P(Z > a) = x
    """
    if x in percentage_points:
        return percentage_points[x]
    else:
        return get_z_less_than_a_equal(1 - x)
print(get_z_in_range(-1.20, 1.40))
print(get_z_less_than_a_equal(0.7517))
print(get_z_greater_than_a_equal(0.1000))
print(get_z_greater_than_a_equal(0.0322))
print(get_z_less_than_a_equal(0.1075))
print(get_z_less_than_a_equal(0.75))
Since Python 3.8, the statistics module in the standard library has a NormalDist class, so we could use that to implement our functions "with pure Python", or at least for testing:
import math
from statistics import NormalDist

normal_dist = NormalDist(mu=0, sigma=1)
for i in range(-2000, 2000):
    test_val = i / 1000
    assert get_z_less_than(test_val) == round(normal_dist.cdf(test_val), 4)
This doesn't throw an error, so that part probably works fine.
Your get_z_less_than_a_equal seems to be the equivalent of NormalDist.inv_cdf.
There are very efficient ways to compute it accurately using the inverse of the error function (see Wikipedia and this Python implementation), but we don't have that in the standard library.
Since you only care about the first few digits and get_z_less_than is monotonic, we can use a simple bisection method to find our solution.
Newton's method would be much faster, and not too hard to implement since we know that the derivative of the cdf is just the pdf, but it is still probably more complex than what we need; a rough sketch appears at the end of this answer.
def get_z_less_than_a_equal(x):
    """
    P(Z < a) = x
    acquires a, given x
    """
    if x <= 0.0 or x >= 1.0:
        raise ValueError("x must be >0.0 and <1.0")
    min_res, max_res = -10, 10
    while max_res - min_res > 1e-7:
        mid = (max_res + min_res) / 2
        if get_z_less_than(mid) < x:
            min_res = mid
        else:
            max_res = mid
    return round((max_res + min_res) / 2, 4)
Let's test this:
for i in range(1, 2000):
    test_val = i / 2000
    left_val = get_z_less_than_a_equal(test_val)
    right_val = round(normal_dist.inv_cdf(test_val), 4)
    assert left_val == right_val, f"{left_val} != {right_val}"
# AssertionError: -3.3201 != -3.2905
We see that we are losing some precision. That's because the error introduced by get_z_less_than (which rounds to 4 digits) gets propagated and amplified when we use it to estimate its inverse (see the Wikipedia article on propagation of uncertainty for details).
So let's add a "digits" parameter to get_z_less_than and change our functions slightly:
def get_z_less_than(x, digits=4):
    """
    P(Z < x)
    """
    res = 0.5 * (1 + math.erf((x - mean) / math.sqrt(2 * standard_dev ** 2)))
    return round(res, digits)

def get_z_less_than_a_equal(x, digits=4):
    """
    P(Z < a) = x
    acquires a, given x
    """
    if x <= 0.0 or x >= 1.0:
        raise ValueError("x must be >0.0 and <1.0")
    min_res, max_res = -10, 10
    while max_res - min_res > 10 ** -(digits * 2):
        mid = (max_res + min_res) / 2
        if get_z_less_than(mid, digits * 2) < x:
            min_res = mid
        else:
            max_res = mid
    return round((max_res + min_res) / 2, digits)
And now we can run the same test again and see that it passes.
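As an aside, here is a minimal sketch of the Newton's-method variant mentioned above. It iterates a = a - (cdf(a) - x) / pdf(a); the pdf helper and the function name are my own additions, not part of the code above:

import math

def pdf(a):
    # standard normal density: the derivative of the cdf
    return math.exp(-a * a / 2) / math.sqrt(2 * math.pi)

def inv_cdf_newton(x, tol=1e-10):
    # solve cdf(a) = x by Newton's method, starting at the mean
    a = 0.0
    for _ in range(100):
        err = 0.5 * (1 + math.erf(a / math.sqrt(2))) - x  # cdf(a) - x
        if abs(err) < tol:
            break
        a -= err / pdf(a)  # Newton step: the derivative of the cdf is pdf(a)
    return a

round(inv_cdf_newton(0.75), 4) should then agree with the bisection-based version.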
I'm trying to implement Miller's algorithm for the Weil pairing, but I have a problem.
I used the book "An Introduction to Mathematical Cryptography" by Hoffstein, Pipher and Silverman and tried my implementation on the example given in the book.
I obtain the same result, so the computations should be good, but I have a problem choosing a random point S. In the book, it is stated that we must choose a point S not in {O, P, −Q, P − Q}.
But with another curve (its equation was given as an image in the original post), I try to compute e(P, Q) with P = (1, 5) and Q = (12, 2) of order 9, and there is always an error at some point because the result of Miller's function is 0/0. Here −Q = (12, 11) and P − Q = (9, 6), so I tried all the other points, and it does not work with points of the same order, which is normal. (9, 6) and (9, 7) are the only ones that have a different order, namely 3. As (9, 6) cannot be chosen, I took (9, 7), but I still get some zeros that create problems.
Can someone tell me what I did wrong or what I misunderstood?
Thank you.
Here is the code:
def g(p, q, r, curve):
    modulo = curve.p
    if p.x is None and p.y is None:
        return (r.x - q.x) % modulo
    if q.x is None and q.y is None:
        return (r.x - p.x) % modulo
    if p != q and p.x == q.x:
        return (r.x - p.x) % modulo
    l = slope(p, q, curve)
    num = (r.y - p.y - l * (r.x - p.x)) % modulo
    denom = (r.x + p.x + q.x - l**2) % modulo
    return (num * mult_inverse(denom, modulo)) % modulo

def miller_algorithm(p, q, m, curve):
    f = 1
    t = copy(p)
    n = list(bin(m)[2:])
    for i in range(1, len(n)):
        f = (f**2) * g(t, t, q, curve)
        t = t.double_and_add(2)
        if n[i] == '1':
            f = f * g(t, p, q, curve)
            t = t.add(p)
    return f % curve.p

def weil_pairing(p, q, curve):
    if p == q or (p.x is None and p.y is None) or (q.x is None and q.y is None):
        return 1
    m = q.order()
    s = curve.random_point()
    while s == p or s == q.neg() or s == (p.sub(q)) or s.order() == m:
        s = curve.random_point()
    fpnum = miller_algorithm(p, q.add(s), m, curve)
    fpdenom = miller_algorithm(p, s, m, curve)
    fqnum = miller_algorithm(q, p.sub(s), m, curve)
    fqdenom = miller_algorithm(q, s.neg(), m, curve)
    modulo = curve.p
    num = (fpnum * mult_inverse(fpdenom, modulo)) % modulo
    denom = (fqnum * mult_inverse(fqdenom, modulo)) % modulo
    return (num * mult_inverse(denom, modulo)) % modulo
I can't tell you what you are doing incorrectly. But the Python-based program SageMath has an implementation of Miller's algorithm and of all the pairings: Weil, Tate, and Ate. If you install it, you can find the source code for the Miller and Weil functions in the file ell_point.py. The algorithm there is more general, and so more complex, than your implementation, but you should be able to translate it to get what you want. Or you can just import SageMath and use its implementations.
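For example, a minimal sketch using Sage's built-in pairing could look like this (run inside a Sage session; the curve, points, and order below are placeholders from a small textbook-style example, so substitute your own parameters):

E = EllipticCurve(GF(631), [30, 34])   # y^2 = x^3 + 30*x + 34 over F_631
P = E(36, 60)
Q = E(121, 387)
n = 5                                  # assumed common order of P and Q
print(P.weil_pairing(Q, n))            # Sage chooses the auxiliary point itself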
I have to implement the Z algorithm and use it to search a target text for a specific pattern. I've implemented what I thought was the correct algorithm and a search function using it, but it's really slow. For the naive implementation of string search I consistently got times lower than 1.5 seconds, and for the Z string search I consistently got times over 3 seconds (for my biggest test case), so I have to be doing something wrong. The results seem to be correct, or were at least for the few test cases we were given. The code for the functions mentioned in my rant is below:
import sys
import time
# z algorithm a.k.a. the fundamental preprocessing algorithm
def z(P, start=1, max_box_size=sys.maxsize):
    n = len(P)
    boxes = [0] * n
    l = -1
    r = -1
    for k in range(start, n):
        if k > r:
            i = 0
            while k + i < n and P[i] == P[k + i] and i < max_box_size:
                i += 1
            boxes[k] = i
            if i:
                l = k
                r = k + i - 1
        else:
            kp = k - l
            Z_kp = boxes[kp]
            if Z_kp < r - k + 1:
                boxes[k] = Z_kp
            else:
                i = r + 1
                while i < n and P[i] == P[i - k] and i - k < max_box_size:
                    i += 1
                boxes[k] = i - k
                l = k
                r = i - 1
    return boxes

# a simple string search
def naive_string_search(P, T):
    m = len(T)
    n = len(P)
    indices = []
    for i in range(m - n + 1):
        if P == T[i: i + n]:
            indices.append(i)
    return indices

# string search using the z algorithm.
# The pattern you're searching for is simply prepended to the target text
# and then the z algorithm is run on that concatenation
def z_string_search(P, T):
    PT = P + T
    n = len(P)
    boxes = z(PT, start=n, max_box_size=n)
    return list(map(lambda x: x[0] - n, filter(lambda x: x[1] >= n, enumerate(boxes))))
Your implementation of the z-function def z(..) is algorithmically and asymptotically fine.
It has O(m + n) time complexity in the worst case, while the implementation of naive string search has O(m*n) time complexity in the worst case, so I think the problem is in your test cases.
For example, if we take this test case:
T = ['a'] * 1000000
P = ['a'] * 1000
we get the following for the z-function:
real 0m0.650s
user 0m0.606s
sys 0m0.036s
and for naive string matching:
real 0m8.235s
user 0m8.071s
sys 0m0.085s
PS: You should understand that there are a lot of test cases where naive string matching works in linear time too, for example:
T = ['a'] * 1000000
P = ['a'] * 1000000
Thus the worst case for naive string matching is when the function has to apply the pattern and check it again and again. But in this case it will do only one such check, because of the lengths of the inputs (the pattern cannot be applied starting from index 1, so the search won't continue).
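If you want to reproduce such a comparison from inside Python rather than with the shell's time command, a small harness along these lines should work (timings will of course vary by machine):

import time

T = ['a'] * 1000000
P = ['a'] * 1000

start = time.perf_counter()
z_string_search(P, T)
print('z search:', time.perf_counter() - start, 's')

start = time.perf_counter()
naive_string_search(P, T)
print('naive search:', time.perf_counter() - start, 's')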