I need help coding a program that will use the Riemann definition (left AND right rules) to calculate the integral of f(x)=sin(x) from a=0 to b=2*pi. I can do this by hand for days, but I have zero idea how to code it with python.
Did you take a look at this code: http://statmath.org/calculate_area.pdf
# Calculate the area under a curve
#
# Example Function y = x^2
#
# This program integrates the function from x1 to x2
# x2 must be greater than x1, otherwise the program will warn that the calculated area will be negative.
#
x1 = float(input('x1='))
x2 = float(input('x2='))
if x1 > x2:
    print('The calculated area will be negative')

# Compute delta_x for the integration interval
delta_x = (x2 - x1) / 1000
j = abs((x2 - x1) / delta_x)
i = int(j)
print('i =', i)

# initialize
n = 0
A = 0.0
x = x1

# Begin Numerical Integration
while n < i:
    delta_A = x**2 * delta_x   # rectangle with height taken at the left endpoint (left rule)
    x = x + delta_x
    A = A + delta_A
    n = n + 1

print('Area Under the Curve =', A)
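That example uses the left rule on y = x^2. For the sin(x) case from the question, here is a minimal sketch (my own adaptation, not from the linked PDF) that computes both the left and the right Riemann sums:

import math

def riemann_sums(f, a, b, n):
    """Return (left_sum, right_sum) Riemann approximations of the integral of f on [a, b] using n subintervals."""
    dx = (b - a) / n
    left = sum(f(a + k * dx) for k in range(n)) * dx          # heights at left endpoints
    right = sum(f(a + (k + 1) * dx) for k in range(n)) * dx   # heights at right endpoints
    return left, right

left, right = riemann_sums(math.sin, 0.0, 2 * math.pi, 1000)
print('left  rule:', left)
print('right rule:', right)

Both sums approach 0 as n grows, since the exact integral of sin(x) over a full period is 0.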
From my experience, looking at the equations on Wikipedia has helped me with translating them into Python. Here are a few Wikipedia pages:
Riemann definition
Fundamental theorem of calculus
Numerical integration
Also, the math module of Python will help you with this:
Python Math
After checking these out, look at some examples of other mathematical equations implemented in Python to get a feel for translating the math into code.
THIS PART IS JUST BACKGROUND IF YOU NEED IT
I am developing a numerical solver for the Second-Order Kuramoto Model. The functions I use to find the derivatives of theta and omega are given below.
import numpy as np

# n-dimensional change in theta
def d_theta(omega):
    return omega

# n-dimensional change in omega
def d_omega(K, A, P, alpha, mask, n):
    def layer1(theta, omega):
        T = theta[:, None] - theta
        A[mask] = K[mask] * np.sin(T[mask])
        return -alpha*omega + P - A.sum(1)
    return layer1
These functions return vectors.
QUESTION 1
I know how to use odeint for two dimensions, (y, t). For my research I want to use a built-in Python solver that works for higher dimensions.
QUESTION 2
I do not necessarily want to stop after a predetermined amount of time. I have other stopping conditions in the code below that will indicate whether the system of equations converges to the steady state. How do I incorporate these into a built-in Python solver?
WHAT I CURRENTLY HAVE
This is the code I am currently using to solve the system. I just implemented RK4 with constant time stepping in a loop.
# This function randomly samples initial values in the domain and returns whether the solution converged
# Inputs:
#   f         change in theta (d_theta)
#   g         change in omega (d_omega)
#   tol_ss    tolerance on the norm of omega for the steady-state check
#   tol_step  when the step-to-step change is lower than this tolerance, the solution is said to converge
#   h         size of the time step
#   max_iter  maximum number of steps Runge-Kutta will perform before giving up
#   max_laps  maximum number of laps the solution can do before giving up
#   fixed_t   vector of fixed points of theta
#   fixed_o   vector of fixed points of omega
#   n         number of dimensions
#   theta     initial theta vector
#   omega     initial omega vector
# Outputs:
#   converges true if the nodes restabilize, false otherwise
def kuramoto_rk4_wss(f, g, tol_ss, tol_step, h, max_iter, max_laps, fixed_o, fixed_t, n):
    def layer1(theta, omega):
        lap = np.zeros(n, dtype=int)
        converges = False
        i = 0
        tau = 2 * np.pi
        while i < max_iter:   # perform RK4 with constant time step
            p_omega = omega
            p_theta = theta
            T1 = h*f(omega)
            O1 = h*g(theta, omega)
            T2 = h*f(omega + O1/2)
            O2 = h*g(theta + T1/2, omega + O1/2)
            T3 = h*f(omega + O2/2)
            O3 = h*g(theta + T2/2, omega + O2/2)
            T4 = h*f(omega + O3)
            O4 = h*g(theta + T3, omega + O3)
            theta = theta + (T1 + 2*T2 + 2*T3 + T4)/6   # take theta time step
            mask2 = np.array(np.where(np.logical_or(theta > tau, theta < 0)))   # find which nodes left [0, 2pi]
            lap[mask2] = lap[mask2] + 1                 # increment the lap counter
            theta[mask2] = np.mod(theta[mask2], tau)    # take the modulus
            omega = omega + (O1 + 2*O2 + 2*O3 + O4)/6   # take omega time step
            if max_laps in lap:          # if any generator rotates this many times it probably won't converge
                break
            elif np.any(omega > 12):     # if any of the generators is rotating this fast, it probably won't converge
                break
            elif (np.linalg.norm(omega) < tol_ss and            # assert the nodes are sufficiently close to the equilibrium
                  np.linalg.norm(omega - p_omega) < tol_step and  # assert change in omega is small
                  np.linalg.norm(theta - p_theta) < tol_step):    # assert change in theta is small
                converges = True
                break
            i = i + 1
        return converges
    return layer1
Thanks for your help!
You can wrap your existing functions into a function accepted by odeint (with the option tfirst=True) and by solve_ivp as follows:
from scipy.integrate import odeint, solve_ivp

def odesys(t, u):
    theta, omega = u[:n], u[n:]            # or: theta, omega = u.reshape(2, -1)
    return [*f(omega), *g(theta, omega)]   # or: np.concatenate([f(omega), g(theta, omega)])

u0 = [*theta0, *omega0]
t = np.linspace(t0, tf, timesteps + 1)

u = odeint(odesys, u0, t, tfirst=True)
# or
res = solve_ivp(odesys, [t0, tf], u0, t_eval=t)
The scipy methods pass numpy arrays and convert the return value into one as well, so you do not have to worry about the container type inside the ODE function. The variant in the comments uses explicit numpy functions.
While solve_ivp does have event handling, using it for a systematic collection of events is rather cumbersome. It would be easier to advance some fixed step, do the normalization and termination detection, and then repeat this.
If you want to later increase efficiency somewhat, use directly the stepper classes behind solve_ivp.
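To illustrate the advance-check-repeat loop suggested above, here is a minimal sketch built on the odesys wrapper (the chunk length, time limit and tolerance are placeholder values, not from the question):

import numpy as np
from scipy.integrate import solve_ivp

def run_to_steady_state(odesys, u0, n, chunk=1.0, max_time=200.0, tol_step=1e-6):
    """Advance the system in fixed chunks, normalize theta, and stop once omega stops changing."""
    t, u = 0.0, np.asarray(u0, dtype=float)
    while t < max_time:
        sol = solve_ivp(odesys, [t, t + chunk], u)        # advance one chunk
        u_new = sol.y[:, -1]
        u_new[:n] = np.mod(u_new[:n], 2 * np.pi)          # wrap theta back into [0, 2*pi)
        if np.linalg.norm(u_new[n:] - u[n:]) < tol_step:  # change in omega is small -> converged
            return True, u_new
        t, u = t + chunk, u_new
    return False, u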
I am trying to find a point 'p2' on a curve, and it is 'd' away from point 'p1'.
The curve is a quadratic, ax^2 + bx + c = y.
Point p1 is on the curve, let us say (p1x, p1y).
Point p2 is also on the curve, but we only know its distance 'along the curve' from p1, which is 'd'. The distance along the curve is obtained by integrating sqrt(1 + (2*a*x + b)^2) dx; here, that integral from p1x to p2x is expected to equal the given value d. p2x is unknown.
I have been using a loop to find the point.
from scipy import integrate

def integral(a, b, c, p1x, distance_between_p1_and_p2):
    arc = lambda t: (1 + (2*a*t + b)**2)**(1/2)   # integrand of the arc-length formula
    best_i = 0
    p2x = 0
    for points_on_curve in range(int(p1x*1000), int((p1x + 0.15)*1000), 1):
        i, err = integrate.quad(arc, p1x, points_on_curve/1000)
        if abs(i - distance_between_p1_and_p2) < abs(best_i - distance_between_p1_and_p2):
            best_i = i
            p2x = points_on_curve/1000
    return p2x   # points_on_curve/1000 is already the absolute x-coordinate
The problem here is that it takes so long, because it starts from p1x, slightly increases the value, calculates the length from p1 to the candidate p2, and checks whether it is closer to the target distance_between_p1_and_p2 than the previous candidate.
Would there be a better way of programming this?
I have been working on it and I found two solutions to this.
First, I used sympy.geometry.curve
import sympy as sp
from sympy.geometry.curve import Curve

x = sp.Symbol('x')
a = sp.Symbol('a')
b = sp.Symbol('b')
c = sp.Symbol('c')
start = sp.Symbol('start')
end = sp.Symbol('end')

print('length')
print(Curve((a*x**2 + b*x + c, x), (x, start, end)).length)
I get this as an output.
(end + b/(2*a))*sqrt(4*a**2*(end + b/(2*a))**2 + 1)/2 - (start + b/(2*a))*sqrt(4*a**2*(start + b/(2*a))**2 + 1)/2 + asinh(2*a*(end + b/(2*a)))/(4*a) - asinh(2*a*(start + b/(2*a)))/(4*a)
Here, I can use the equation.
from sympy import solve, sqrt, asinh, nsolve
end = sp.S('end')
a = -1
b = 0
c = 4
w3 = 1
length = 2
eq = sp.Eq((end + b/(2*a))*sqrt(4*a**2*(end + b/(2*a))**2 + 1)/2 - (w3 + b/(2*a))*sqrt(4*a**2*(w3 + b/(2*a))**2 + 1)/2 + asinh(2*a*(end + b/(2*a)))/(4*a) - asinh(2*a*(w3 + b/(2*a)))/(4*a),length)
I found two ways to solve the equation.
Use nsolve. This gives only one answer even if there are two. For example, if there are two answers (a+sqrt(b), a-sqrt(b)), I guess it gives only the one closer to the starting guess expected_value_to_start_search_answer.
print(sp.nsolve(eq, expected_value_to_start_search_answer))
Use solve. This gives all possible answers, but it is slower than the first option.
sol = solve(eq,end)
print(sol)
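A purely numeric alternative to the symbolic route above (my own sketch, not part of the original post) is to treat the problem as root finding: define F(x2) as the arc-length integral from p1x to x2 minus d, and let scipy.optimize.brentq locate its zero. The search_width below is a placeholder; the bracket [p1x, p1x + search_width] must contain the answer for brentq to work.

from scipy import integrate, optimize

def find_p2x(a, b, p1x, d, search_width=1.0):
    """Find p2x such that the arc length along y = a*x^2 + b*x + c from p1x to p2x equals d."""
    integrand = lambda t: (1 + (2*a*t + b)**2) ** 0.5
    F = lambda x2: integrate.quad(integrand, p1x, x2)[0] - d   # signed mismatch in arc length
    return optimize.brentq(F, p1x, p1x + search_width)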
Your target points (x, y) sit on the parabolic curve as well as on a circle around p1; in other words, they fulfill both equations
a x^2 + b x + c = y
(x - p1x)^2 + (y - p1y)^2 = r^2
You can simply eliminate y by inserting the left-hand side of the first equation into the second, and solve the resulting quartic equation for x (for example numerically).
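For illustration, here is a minimal sketch of that elimination (my own code; note it treats d as the straight-line distance r rather than the arc length from the question), using numpy.roots on the expanded quartic:

import numpy as np

def intersect_parabola_circle(a, b, c, p1x, p1y, r):
    """Real x-coordinates where y = a*x^2 + b*x + c meets the circle of radius r around (p1x, p1y)."""
    k = c - p1y
    # expand (x - p1x)^2 + (a*x^2 + b*x + k)^2 - r^2 = 0 as a quartic in x
    coeffs = [a**2,
              2*a*b,
              b**2 + 2*a*k + 1,
              2*b*k - 2*p1x,
              k**2 + p1x**2 - r**2]
    roots = np.roots(coeffs)
    return [x.real for x in roots if abs(x.imag) < 1e-9]   # keep only (numerically) real solutions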
Could somebody please explain how to do a gradient descent problem WITHOUT the context of the cost function? I have seen countless tutorials that explain gradient descent using the cost function, but I really don't understand how it works in a more general sense.
I am given a 3D function:
z = 3*((1-xx)**2) * np.exp(-(xx**2) - (yy+1)**2) \
    - 10*(xx/5 - xx**3 - yy**5) * np.exp(-xx**2 - yy**2) - (1/3)*np.exp(-(xx+1)**2 - yy**2)
And I am asked to:
Code a simple gradient algorithm. Set the parameters as follows:
learning rate = step size: 0.1
Max number of iterations: 20
Stopping criterion: 0.0001 (Your iterations should stop when your gradient is smaller than the threshold)
Then start your algorithm at
(x0 = 0.5, y0 = -0.5)
(x0 = -0.3, y0 = -0.3)
I have seen this piece of code floating around wherever gradient descent is talked about:
def update_weights(m, b, X, Y, learning_rate):
    m_deriv = 0
    b_deriv = 0
    N = len(X)
    for i in range(N):
        # Calculate partial derivatives
        # -2x(y - (mx + b))
        m_deriv += -2*X[i] * (Y[i] - (m*X[i] + b))
        # -2(y - (mx + b))
        b_deriv += -2*(Y[i] - (m*X[i] + b))
    # We subtract because the derivatives point in direction of steepest ascent
    m -= (m_deriv / float(N)) * learning_rate
    b -= (b_deriv / float(N)) * learning_rate
    return m, b
But I don't understand how to use it for my problem. How does my function fit in there? What do I adjust instead of m and b? I'm very very confused.
Thank you.
Gradient descent is an optimization algorithm for finding the minimum of a function.
Very simplified view
Let's start with a 1D function y = f(x).
Start at an arbitrary value of x and look at the gradient (slope) of f(x) there.
If f is decreasing at x (negative slope), the minimum lies further to the right, so we have to increase x.
If f is increasing at x (positive slope), the minimum lies to the left, so we have to decrease x.
We get the slope by taking the derivative of the function. The derivative is negative where f is decreasing and positive where f is increasing.
So we can start at some arbitrary value of x and slowly move toward the minimum using the derivative at that value of x. How slowly we move is determined by the learning rate or step size, so we have the update rule
x = x - df_dx*lr
We can see that if f is decreasing, the derivative df_dx is negative, so the update increases x and we move further to the right. On the other hand, if f is increasing, df_dx is positive, which decreases x, so we move to the left.
We continue this either for some fixed number of iterations or until the derivative is very small.
Multivariate function z = f(x,y)
The same logic as above applies, except now we take partial derivatives instead of the ordinary derivative.
Update rule is
x = x - dpf_dx*lr
y = y - dpf_dy*lr
Where dpf_dx is the partial derivative of f with respect to x
The above algorithm is called the gradient descent algorithm. In machine learning, f(x,y) is a cost/loss function whose minimum we are interested in.
Example
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.axes3d import Axes3D
from pylab import meshgrid
from scipy.optimize import fmin

def z_func(a):
    x, y = a
    return (x - 1)**2 + (y - 2)**2

x = np.arange(-3.0, 3.0, 0.1)
y = np.arange(-3.0, 3.0, 0.1)
X, Y = meshgrid(x, y)    # grid of points
Z = z_func((X, Y))       # evaluation of the function on the grid

fig = plt.figure()
ax = fig.add_subplot(projection='3d')   # fig.gca(projection='3d') no longer works in recent matplotlib
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, linewidth=0, antialiased=False)
plt.show()
The min of z_func is at (1,2). This can be verified using the fmin function of scipy
fmin(z_func,np.array([10,10]))
Now let's write our own gradient descent algorithm to find the min of z_func.
def gradient_descent(x, y, lr):
    while True:
        d_x = 2*(x - 1)   # partial derivative of z_func with respect to x
        d_y = 2*(y - 2)   # partial derivative of z_func with respect to y
        x -= d_x*lr
        y -= d_y*lr
        if abs(d_x) < 0.0001 and abs(d_y) < 0.0001:   # stop when the gradient is small
            break
    return x, y

print(gradient_descent(10, 10, 0.1))
We are starting at arbitrary values x=10 and y=10 with a learning rate of 0.1. The above code converges to approximately (1.000033672997724, 2.0000299315535326), which is correct.
So if you have a continuously differentiable convex function, all you have to do to find its optimum (the minimum, for a convex function) is find the partial derivatives with respect to each variable and apply the update rule above. Repeat the steps until the gradients are small, which means we have reached the minimum of a convex function.
If the function is not convex, we might get stuck in a local optimum.
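Applying the same recipe to the function from the question: its analytic partial derivatives are messy, so here is a minimal sketch (my own; the helper names are mine and the gradient is approximated with central finite differences) using the parameters from the assignment: learning rate 0.1, at most 20 iterations, stop when the gradient norm drops below 0.0001.

import numpy as np

def z(x, y):
    return (3*(1 - x)**2 * np.exp(-x**2 - (y + 1)**2)
            - 10*(x/5 - x**3 - y**5) * np.exp(-x**2 - y**2)
            - (1/3)*np.exp(-(x + 1)**2 - y**2))

def grad(x, y, eps=1e-6):
    # central finite differences approximate the partial derivatives
    dz_dx = (z(x + eps, y) - z(x - eps, y)) / (2*eps)
    dz_dy = (z(x, y + eps) - z(x, y - eps)) / (2*eps)
    return np.array([dz_dx, dz_dy])

def gradient_descent_numeric(x0, y0, lr=0.1, max_iter=20, tol=1e-4):
    p = np.array([x0, y0], dtype=float)
    for _ in range(max_iter):
        g = grad(*p)
        if np.linalg.norm(g) < tol:   # stopping criterion on the gradient
            break
        p -= lr * g                   # the update rule from above
    return p

print(gradient_descent_numeric(0.5, -0.5))
print(gradient_descent_numeric(-0.3, -0.3))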
I am trying to find the minimum of a natural cubic spline. I have written the following code to find the natural cubic spline. (I have been given test data and have confirmed this method is correct.) Now I can not figure out how to find the minimum of this function.
This is the data
xdata = np.linspace(0.25, 2, 8)
ydata = 10**(-12) * np.array([1,2,1,2,3,1,1,2])
This is the function
import numpy as np
from numpy.linalg import inv
from scipy.optimize import fmin_slsqp
from scipy.optimize import minimize

def phi(x, xd, yd):
    n = len(xd)
    h = np.array(xd[1:n] - xd[0:n-1])
    f = np.divide(yd[1:n] - yd[0:(n-1)], h)
    q = [0]*(n-2)
    for i in range(n-2):
        q[i] = 3*(f[i+1] - f[i])
    A = np.zeros(((n-2), (n-2)))
    # define A for j = 0
    A[0,0] = 2*(h[0] + h[1])
    A[0,1] = h[1]
    # define A for j = n-2
    A[-1,-2] = h[-2]
    A[-1,-1] = 2*(h[-2] + h[-1])
    # define A for j in the middle
    for j in range(1, (n-3)):
        A[j,j-1] = h[j]
        A[j,j] = 2*(h[j] + h[j+1])
        A[j,j+1] = h[j+1]
    Ainv = inv(A)
    B = Ainv.dot(q)
    b = n*[0]
    b[1:(n-1)] = B
    # now we find a, b, c and d
    a = [0]*(n-1)
    c = [0]*(n-1)
    d = [0]*(n-1)
    s = [0]*(n-1)
    for r in range(n-1):
        a[r] = 1/(3*h[r]) * (b[r+1] - b[r])
        c[r] = f[r] - h[r]*((2*b[r] + b[r+1])/3)
        d[r] = yd[r]
    # solution 1 start
    for m in range(n-1):
        if xd[m] <= x <= xd[m+1]:
            s = a[m]*(x - xd[m])**3 + b[m]*(x - xd[m])**2 + c[m]*(x - xd[m]) + d[m]
    return s
    # solution 1 end
I want to find the minimum on the domain of my xdata, so plain fmin didn't work, as you cannot define bounds there. I tried both fmin_slsqp and minimize. They are not compatible with the phi function I wrote, so I rewrote phi(x, xd, yd) and added an extra variable such that phi becomes phi(x, xd, yd, m). Here m indicates in which piece of the spline we are evaluating the solution (from x_m to x_(m+1)). In the code we replaced #solution 1 with the following:
# solution 2 start
return(a[m]*(x - xd[m])**3 + b[m]*(x-xd[m])**2 + c[m]*(x-xd[m]) + d[m])
# solution 2 end
To find the minimum on a subdomain x_m to x_(m+1) we use the following code (here m=0, so x ranges from 0.25 to 0.5, and the initial guess is 0.3):
fmin_slsqp(phi, x0 = 0.3, bounds=([(0.25,0.5)]), args=(xdata, ydata, 0))
What I would then do (I know it's crude) is iterate this with a for loop to find the minimum on all subdomains and then take the overall minimum. However, the function fmin_slsqp constantly returns the initial guess as the minimum. So there is something wrong, which I do not know how to fix. If you could help me this would be greatly appreciated. Thanks for reading this far.
When I plot your function phi and the data you feed in, I see that its range is of the order of 1e-12. However, fmin_slsqp is unable to handle that level of precision and fails to find any change in your objective.
The solution I propose is scaling the return value of your objective up by that order of magnitude, like so:
return(s*1e12)
Then you get good results.
>>> sol = fmin_slsqp(phi, x0=0.3, bounds=([(0.25, 0.5)]), args=(xdata, ydata))
>>> print(sol)
Optimization terminated successfully. (Exit mode 0)
Current function value: 1.0
Iterations: 2
Function evaluations: 6
Gradient evaluations: 2
[ 0.25]
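If you prefer not to edit phi itself, an equivalent route (my own sketch, not from the answer above; it assumes the original phi without the m argument and the xdata/ydata from the question) is to do the scaling in a small wrapper, loop over all spline pieces, and keep the overall best:

from scipy.optimize import fmin_slsqp

scaled_phi = lambda x, xd, yd: phi(x, xd, yd) * 1e12   # rescale so the optimizer sees O(1) values

candidates = []
for m in range(len(xdata) - 1):                         # one bounded search per spline piece
    lo, hi = xdata[m], xdata[m + 1]
    x_opt = fmin_slsqp(scaled_phi, x0=(lo + hi) / 2, bounds=[(lo, hi)],
                       args=(xdata, ydata), iprint=0)[0]
    candidates.append((phi(x_opt, xdata, ydata), x_opt))

best_value, best_x = min(candidates)                    # overall minimum over the whole domain
print(best_x, best_value)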
Given a mean and standard-deviation defining a normal distribution, how would you calculate the following probabilities in pure-Python (i.e. no Numpy/Scipy or other packages not in the standard library)?
The probability of a random variable r where r < x or r <= x.
The probability of a random variable r where r > x or r >= x.
The probability of a random variable r where x > r > y.
I've found some libraries, like Pgnumerics, that provide functions for calculating these, but the underlying math is unclear to me.
Edit: To show this isn't homework, posted below is my working code for Python <= 2.6, though I'm not sure whether it handles the boundary conditions correctly.
from math import *
import unittest

def erfcc(x):
    """
    Complementary error function.
    """
    z = abs(x)
    t = 1. / (1. + 0.5*z)
    r = t * exp(-z*z-1.26551223+t*(1.00002368+t*(.37409196+
        t*(.09678418+t*(-.18628806+t*(.27886807+
        t*(-1.13520398+t*(1.48851587+t*(-.82215223+
        t*.17087277)))))))))
    if x >= 0.:
        return r
    else:
        return 2. - r

def normcdf(x, mu, sigma):
    t = x - mu
    y = 0.5 * erfcc(-t / (sigma * sqrt(2.0)))
    if y > 1.0:
        y = 1.0
    return y

def normpdf(x, mu, sigma):
    u = (x - mu) / abs(sigma)
    y = (1 / (sqrt(2*pi) * abs(sigma))) * exp(-u*u/2)
    return y

def normdist(x, mu, sigma, f):
    if f:
        y = normcdf(x, mu, sigma)
    else:
        y = normpdf(x, mu, sigma)
    return y

def normrange(x1, x2, mu, sigma, f=True):
    """
    Calculates probability of random variable falling between two points.
    """
    p1 = normdist(x1, mu, sigma, f)
    p2 = normdist(x2, mu, sigma, f)
    return abs(p1 - p2)
All these are very similar: If you can compute #1 using a function cdf(x), then the solution to #2 is simply 1 - cdf(x), and for #3 it's cdf(x) - cdf(y).
Since Python includes the (Gauss) error function built in since version 2.7, you can do this by calculating the cdf of the normal distribution using the equation from the article you linked to:
import math
print(0.5 * (1 + math.erf((x - mean)/math.sqrt(2 * standard_dev**2))))
where mean is the mean and standard_dev is the standard deviation.
Some notes since what you asked seemed relatively straightforward given the information in the article:
The CDF of a random variable (say X) is the probability that X lies between -infinity and some limit, say x (lower case). For continuous distributions, the CDF is the integral of the pdf. The cdf is exactly what you described in #1: you want a normally distributed RV to be between -infinity and x (i.e. <= x).
< and <= (as well as > and >=) are the same for continuous random variables, because the probability that the rv equals any single point is 0. So whether or not x itself is included doesn't actually matter when calculating probabilities for continuous distributions.
Probabilities sum to 1: if X is not < x then it is >= x, so if you have cdf(x), then 1 - cdf(x) is the probability that the random variable X >= x. Since >= is equivalent to > for continuous random variables, this is also the probability that X > x.
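Putting it together, here is a minimal pure-Python sketch of the three probabilities you asked for (the helper names are mine), assuming Python >= 2.7 so math.erf is available:

import math

def cdf(x, mean, standard_dev):
    """P(X <= x) for X ~ Normal(mean, standard_dev)."""
    return 0.5 * (1 + math.erf((x - mean) / (standard_dev * math.sqrt(2))))

def prob_less(x, mean, sd):               # 1) P(X < x) == P(X <= x)
    return cdf(x, mean, sd)

def prob_greater(x, mean, sd):            # 2) P(X > x) == P(X >= x) == 1 - cdf(x)
    return 1.0 - cdf(x, mean, sd)

def prob_between(y, x, mean, sd):         # 3) P(y < X < x) == cdf(x) - cdf(y), for y < x
    return cdf(x, mean, sd) - cdf(y, mean, sd)

print(prob_less(1.0, 0.0, 1.0))           # about 0.8413 for the standard normal
print(prob_greater(1.0, 0.0, 1.0))        # about 0.1587
print(prob_between(-1.0, 1.0, 0.0, 1.0))  # about 0.6827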