I am trying to compute an integral whose limits should be spaced at logarithmic intervals; in other words, if "a" and "b" are the lower and upper limits of integration, then the points would be xi = a*(b/a)^(i/N) for N intervals. So I wrote the code below to sum up the trapezoids in Python, and I gave it a simple formula like f(x) = x^2 (since my actual formula is very complicated), but it doesn't give me any result. I wanted to know if I am on the right track or not. Here is the code:
import numpy as np
import math
a = 2
b = 4
N = 100
def integrate(f, a, b, N):
for i in range(1,N):
h = a*((b/a)**(i/float(N)))*(((a/b)**(1/float(N)))-1) # our intervals
xi = a*((b/a)**(i/float(N)))
xii = a*((b/a)**(i+1)/float(N))
s = ((1/2.0)*(f(a)+f(a*(b/a)**(1/float(N)))))*(a*(((b/a)**(1/float(N)))-1)) #the area of the first trapezoid
s = s +((f(xi)+ f(xii))*(1/2.0))*h
return s
def F(x):
return x**2
print integrate (F, a, b, N)
Your arithmetic is off somewhere. For instance, this line is missing a pair of parentheses:
xii = a*((b/a)**(i+1)/float(N))
Should be
xii = a*((b/a)**((i+1)/float(N)))
Since you're having trouble debugging this, I suggest a few basic steps:
Use liberal print statements to track intermediate results
Use intermediate variables to help with that tracking and to avoid repeated computations. The fewer nested parentheses you have to type, the lower the chance of error.
See this lovely debug blog for help.
First of all, your indenting is screwed up. You need to indent your for-loop, and unindent after defining integrate. Second, if xi is supposed to be the left end of your interval and xii the right end, then you should start with i=0. Third, you can use one variable to define others. So if h is supposed to be the length of the interval, you can define xi and xii first, then just put h=xii-xi, and so on.
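Putting those fixes together (plus the missing parentheses from the other answer), a corrected version might look roughly like this. It is only a sketch; it keeps your log-spaced nodes xi = a*(b/a)**(i/N) and simply sums the trapezoid areas:

def integrate(f, a, b, N):
    s = 0.0
    for i in range(N):                             # one trapezoid per interval, i = 0 .. N-1
        xi = a * (b / a) ** (i / float(N))         # left end of the interval
        xii = a * (b / a) ** ((i + 1) / float(N))  # right end of the interval
        h = xii - xi                               # interval length
        s += 0.5 * (f(xi) + f(xii)) * h            # trapezoid area
    return s

def F(x):
    return x ** 2

# pass floats so b/a is not integer-divided on Python 2
print(integrate(F, 2.0, 4.0, 100))   # ~18.667; the exact integral of x**2 from 2 to 4 is 56/3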
I'm trying to solve the equation for T(p,q) as shown more clearly in the attached image.
Where:
p = 0.60
q = 0.45
M - coefficient matrix with 3 rows and 6 columns
I defined the matrix inside five functions (one per coefficient column) so that I could call them later in the while loop. However, the loop doesn't cycle through the various values of i.
How can I get the loop to work or is there another/better way I can solve the following equation?
(FYI this is approx my third day ever working with Python and coding)
import numpy as np

def M1(i):
    M = np.array([[1,3,0,4,0,0],[3,3,1,4,0,0],[4,4,0,3,1,0]])
    return M[i-1,0]

def M2(i):
    M = np.array([[1,3,0,4,0,0],[3,3,1,4,0,0],[4,4,0,3,1,0]])
    return M[i-1,1]

def M3(i):
    M = np.array([[1,3,0,4,0,0],[3,3,1,4,0,0],[4,4,0,3,1,0]])
    return M[i-1,2]

def M4(i):
    M = np.array([[1,3,0,4,0,0],[3,3,1,4,0,0],[4,4,0,3,1,0]])
    return M[i-1,3]

def M5(i):
    M = np.array([[1,3,0,4,0,0],[3,3,1,4,0,0],[4,4,0,3,1,0]])
    return M[i-1,4]

def T(p,q):
    sum_i = 0
    i = 1
    while i <= 5:
        sum_i = sum_i + ((M1(i)*p**M2(i))*((1-p)**M3(i))*(q**M4(i))*((1-q)**M5(i)))
        i = i + 1
        return sum_i

print(T(0.6,0.45))
"""I printed the below equation (using a single value for i) to test if the above loop is working and since I get the same answer as the loop, I can see that the loop is not cycling through the various values of i as expected"""
i=1
p=0.6
q=0.45
print(((M1(i)*p**M2(i))*((1-p)**M3(i))*(q**M4(i))*((1-q)**M5(i))))
The return is placed inside the while loop, so the function returns during the first iteration; you need to change the code a bit:
while i <= 5:
    sum_i = sum_i + ((M1(i)*p**M2(i))*((1-p)**M3(i))*(q**M4(i))*((1-q)**M5(i)))
    i = i + 1
return sum_i
The real power with numpy is to look at computations like these and try to understand what is repeatable and what structure they have. Once you find operations that are similar or parallel, try to set up your numpy function calls so that they can be done element-wise in parallel with one call.
For instance, inside the typical element of the sum, there are four things being explicitly raised to a power (a fifth if you count M(i, 1)^1). You can perform all four of these exponentiations in parallel with one function call if you arrange your arrays smartly:
M = np.array([[1,3,0,4,0,0],[3,3,1,4,0,0],[4,4,0,3,1,0]])
ps_and_qs = np.array([[p, (1-p), q, (1-q)]])
a = np.power(ps_and_qs, M[:,1:5])
Now a will be populated with a 3 x 4 matrix with all of your exponentiations.
Now the next step is how to reduce these. There are several reduction functions that are built into numpy that are efficiently implemented with vectorized loops where possible. They can speed up your code quite a bit. In particular, there is both a product reduction as well as a sum reduction. With your equation, we need to first multiply across the rows to get one number per row and then sum across the remaining column like this:
b = M[:, 0] * np.prod(a, axis=1)
c = np.sum(b, axis=0)
c should now be a scalar equal to T evaluated at (p,q). That is a lot to take in for a third day, but something to consider if you continue to use numpy for numerical analysis on bigger projects that need better performance.
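Putting the pieces together, here is a self-contained sketch (my own wrapper around the expressions above; it sums over the three rows of M, matching the reduction just described):

import numpy as np

M = np.array([[1, 3, 0, 4, 0, 0],
              [3, 3, 1, 4, 0, 0],
              [4, 4, 0, 3, 1, 0]])

def T(p, q):
    ps_and_qs = np.array([p, 1 - p, q, 1 - q])
    a = np.power(ps_and_qs, M[:, 1:5])     # all exponentiations at once, shape (3, 4)
    terms = M[:, 0] * np.prod(a, axis=1)   # multiply across each row and scale by the first column
    return np.sum(terms)                   # sum the per-row terms

print(T(0.6, 0.45))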
tl;dr: I have a numpy boundary/initial value problem and want to see if I'm approaching it the right way. I'm fairly new to numpy, and I'm presenting a simplified version of the problem.
I have 2 functions a and b defined for integer values of t and x, which I'm trying to calculate for positive x and t (say up to N). I want to figure out the best way to do this with numpy.
I have boundary values at t=0 and x=0. a(t,x) depends only on a(t-1,x-1) and b(t-1,x-1), while b(t,x) depends on lots of values of a with smaller t, x. This is what makes it 'simple'. We have
a=1 for t=0 and for x=0.
b=0.1 for t=0 and b=1 for x=0. At x=t=0, we get b=0.1.
In the interior, a(t,x) = a(t-1,x-1) - b(t-1,x-1).
Now the hard part. b(t,x) = a(t-1,x-1) S(t, t-1) + a(t-2,x-2) S(t,t-2) + ...
where S(t,y) is a sum equal to f(a(t-1,1)) + f(a(t-1,2)) + ... + f(a(t-1,y)) for some function f (If you need something specific, you could assume it's just a + a**2).
So my plan is to do this basically as:
initialize values
loop over t:
    update a
    loop over y:
        define the S(t,y)  # each step is vectorizable I think
    loop over x:
        set b to equal the dot product between vector of S and slice of a.
My question: Is this a reasonable approach - can I cut out any of those loops, or should I take a different tack entirely?
Bonus question: Any likely errors for a numpy newb to make coding this?
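To make the plan concrete, here is a rough sketch of what I have in mind, taking f(a) = a + a**2 and a small grid; how the edge terms of the b update should be handled is my best guess:

import numpy as np

N = 8                       # small grid just for illustration
a = np.ones((N, N))         # a = 1 on t = 0 and on x = 0; interior gets overwritten
b = np.empty((N, N))
b[0, :] = 0.1               # b = 0.1 on t = 0 (including t = x = 0)
b[1:, 0] = 1.0              # b = 1 on x = 0

def f(u):
    return u + u**2

for t in range(1, N):
    # a(t, x) = a(t-1, x-1) - b(t-1, x-1), vectorized over x
    a[t, 1:] = a[t - 1, :-1] - b[t - 1, :-1]
    # S(t, y) = f(a(t-1, 1)) + ... + f(a(t-1, y)); cumulative sum, S_cum[y-1] == S(t, y)
    S_cum = np.cumsum(f(a[t - 1, 1:]))
    for x in range(1, N):
        # b(t, x) = sum over k of a(t-k, x-k) * S(t, t-k), for k = 1 .. min(t-1, x)
        kmax = min(t - 1, x)
        if kmax >= 1:
            k = np.arange(1, kmax + 1)
            b[t, x] = np.dot(a[t - k, x - k], S_cum[t - k - 1])
        else:
            b[t, x] = 0.0   # empty sum; my guess for the edge case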
I am new to StackOverflow, and I am extremely new to Python.
My problem is this: I need to write a double sum, as follows.
The motivation is that this is the angular correction to the gravitational potential used for the geoid.
I am having difficulty writing the sums. And please, before you say "Go to such-and-such a resource," or get impatient with me, this is the first time I have ever done coding/programming/whatever this is.
Is this a good place to use a "for" loop?
I have data for the two indices (n,m) and for the coefficients c_{nm} and s_{nm} in a .txt file. Each of those items is a column. When I say usecols, do I number them 0 through 3, or 1 through 4?
The equation referred to above is:
\begin{equation}
V(r, \phi, \lambda) = \sum_{n=2}^{360}\left(\frac{a}{r}\right)^{n}\sum_{m=0}^{n}\left[c_{nm}\cos(m\lambda) + s_{nm}\sin(m\lambda)\right]\sqrt{\frac{(n-m)!}{(n+m)!}(2n + 1)(2 - \delta_{m0})}\, P_{nm}(\sin\lambda)
\end{equation}
(2) Yes, a "for" loop is fine. As @jpmc26 notes, a generator expression is a good alternative to a "for" loop. IMO, you'll want to use numpy if efficiency is important to you.
(3) As @askewchan notes, "usecols" refers to an argument of genfromtxt; as specified in that documentation, column indexes start at 0, so you'll want to use 0 to 3.
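For example, assuming the four columns in the .txt file really are n, m, c_nm, s_nm in that order (the file name and layout here are just placeholders), reading them could look like:

import numpy as np

# assumed layout: columns 0..3 hold n, m, c_nm, s_nm
data = np.genfromtxt("coefficients.txt", usecols=(0, 1, 2, 3))
n_idx = data[:, 0].astype(int)
m_idx = data[:, 1].astype(int)

c = np.zeros((361, 361))            # indexed as c[n, m], with n up to 360
s = np.zeros((361, 361))
c[n_idx, m_idx] = data[:, 2]
s[n_idx, m_idx] = data[:, 3]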
A naive implementation might be okay since the larger factorial is in the denominator, but I wouldn't be surprised if you run into numerical issues. Here's something to get you started. Note that you'll need to define P() and a. I don't understand how "0 through 3" relates to c and s since their indexes range much further. I'm going to assume that each (and delta) has its own file of values.
import math
import numpy

c = numpy.genfromtxt("the_c_file.txt")
s = numpy.genfromtxt("the_s_file.txt")
delta = numpy.genfromtxt("the_delta_file.txt")

def V(r, phi, lam):
    ret = 0
    for n in xrange(2, 361):
        for m in xrange(0, n + 1):
            inner = c[n,m]*math.cos(m*lam) + s[n,m]*math.sin(m*lam)
            # compute (n-m)!/(n+m)! via lgamma, to avoid huge integers and
            # Python 2 integer division truncating the ratio to 0
            ratio = math.exp(math.lgamma(n - m + 1) - math.lgamma(n + m + 1))
            inner *= math.sqrt(ratio*(2*n+1)*(2-delta[m,0]))
            inner *= P(n, m, math.sin(lam))
            ret += math.pow(a/r, n) * inner
    return ret
Make sure to write unittests to check the math. Note that "lambda" is a reserved word.
I have a convolution integral of the type

res(t) = \int_0^t J(t - \tau)\, \frac{dF(\tau)}{d\tau}\, d\tau
To solve this integral numerically, I would like to use numpy.convolve(). Now, as you can see in the online help, the convolution is formally done from -infinity to +infinity meaning that the arrays are moved along each other completely for evaluation - which is not what I need. I obviously need to be sure to pick the correct part of the convolution - can you confirm that this is the right way to do it or alternatively tell me how to do it right and (maybe even more important) why?
res = np.convolve(J_t, dF, mode="full")[:len(dF)]
J_t is an analytical function that I can evaluate at as many points as I need; dF contains derivatives of measurement data. For this attempt I chose len(J_t) = len(dF) because, from my understanding, I do not need more.
Thank you for your thoughts, as always, I appreciate your help!
Background information (for those who might be interested)
These type of integrals can be used to evaluate viscoelastic behaviour of bodies (or the response of an electric circuit during change of voltage, if you feel more familiar on this topic). For viscoelasticity, J(t) is the creep compliance function and F(t) can be the deviatoric strains over time, then this integral would yield the deviatoric stresses.
If you now e.g. have a J(t) of the form:
J_t = lambda p, t: p[0] + p[1]*N.exp(-t/p[2])
with p = [J_elastic, J_viscous, tau] this would be the "famous" standard linear solid. The integral limits are the start of the measurement t_0 = 0 and the moment of interest, t.
To check that this is right, I have chosen the following two functions:
a(t) = t
b(t) = t**2
It is easy to do the math and find that their "convolution", as defined in your case, takes on the values:
c(t) = t**4 / 12
So let's try them out:
>>> delta = 0.001
>>> t = np.arange(1000) * delta
>>> a = t
>>> b = t**2
>>> c = np.convolve(a, b) * delta
>>> d = t**4 / 12
>>> plt.plot(np.arange(len(c)) * delta, c)
[<matplotlib.lines.Line2D object at 0x00000000025C37B8>]
>>> plt.plot(t[::50], d[::50], 'o')
[<matplotlib.lines.Line2D object at 0x000000000637AB38>]
>>> plt.show()
So by doing the above, if both your a and b have n elements, you get the right convolution values in the first n elements of c.
Not sure if the following explanation will make any sense, but here it goes... Think of convolution as mirroring one of the functions along the y-axis, then sliding it along the x-axis and computing the integral of the product at each point. Outside of their area of definition, numpy treats the arrays as if they were padded with zeros. So you are effectively setting an integration interval from 0 to t: the first function is zero below zero, and the second is zero above t, since it was originally zero below zero but has been mirrored and moved t to the right.
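As a quick non-graphical check of the same example, one can compare the first len(t) samples against the analytic result directly (this just repeats the session above without the plots):

import numpy as np

delta = 0.001
t = np.arange(1000) * delta
a = t
b = t**2
c = np.convolve(a, b) * delta        # full convolution, length 2*len(t) - 1
d = t**4 / 12                        # analytic result on the same grid

print(np.max(np.abs(c[:len(t)] - d)))   # tiny discretization error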
I was tackling this same problem and solved it using a highly inefficient but functionally correct algorithm:
import numpy as np
from scipy.integrate import simps

def Jfunk(inz, t):
    c0 = inz[0]
    c1 = inz[1]
    c2 = inz[2]
    J = c0 - c1*np.exp(-t/c2)
    return J

def SLS_funk(inz, t, dl_dt):
    boltz_int = np.empty(shape=(0,))
    for i, v in enumerate(t, start=1):
        t_int = t[0:i]                    # times up to the current moment
        Jarg = v - t[0:i]                 # t - tau
        J_int = Jfunk(inz, Jarg)
        dl_dt_int = dl_dt[0:i]
        inter_grand = np.multiply(J_int, dl_dt_int)
        boltz_int = np.append(boltz_int, simps(inter_grand, x=t_int))
    return boltz_int
Thanks to this question and its answers, I was able to implement a much better solution based on the numpy convolution function suggested above. In case the OP was curious I did a time comparison of the two methods.
For an SLS (three parameter J function) with 20,000 time points:
Using Numpy convolution: ~0.1 seconds
Using Brute Force method: ~7.2 seconds
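For reference, the faster version is essentially the slicing from the question applied to this problem. A minimal sketch (my own addition, assuming t is uniformly spaced and starts at 0, and reusing Jfunk and the dl_dt array from above):

import numpy as np

def SLS_convolve(inz, t, dl_dt):
    # Same inputs as SLS_funk above; t must be uniformly spaced and start at 0.
    dt = t[1] - t[0]
    J_t = Jfunk(inz, t)              # creep compliance evaluated on the same grid
    return np.convolve(J_t, dl_dt, mode="full")[:len(t)] * dt

The result should differ from the Simpson-rule version only by the quadrature rule (a rectangle rule here), which is usually negligible on dense time grids.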
If it helps to get a feeling for the alignment, try convolving a pair of impulses. With matplotlib (using ipython --pylab):
In [1]: a = numpy.zeros(20)
In [2]: b = numpy.zeros(20)
In [3]: a[0] = 1
In [4]: b[0] = 1
In [5]: c = numpy.convolve(a, b, mode='full')
In [6]: plot(c)
You can see from the resultant plot that the first sample in c corresponds to the first position of overlap. In this case, only the first samples of a and b overlap. All the rest are floating in undefined space. numpy.convolve effectively replaces this undefined space with zeros, which you can see if you set a second non-zero value:
In [9]: b[1] = 1
In [10]: plot(numpy.convolve(a, b, mode='full'))
In this case, the first value of the plot is 1, as before (showing that the second value of b is not contributing at all).
I have been struggling with a similar question for the past 2 days.
The OP may have moved on, but I am still presenting my analysis here.
The following two sources helped me:
Discussion on stackoverflow
These notes
I will consider time-series data defined on the same, uniformly spaced time grid, starting from time 0 with spacing dt.
Let the two series be A and B.
Their (continuous) convolution is

(A * B)(t) = \int_{-\infty}^{+\infty} A(\tau)\, B(t - \tau)\, d\tau

Substituting \tau \to i\,dt and t \to k\,dt in the above equation, we get (up to a factor of dt) what np.convolve(A,B) returns:

np.convolve(A, B)[k] = \sum_{i=-\infty}^{+\infty} A[i]\, B[k - i]

What you want is

\int_{0}^{t} A(\tau)\, B(t - \tau)\, d\tau

Again making the same substitution, we get

\sum_{i=0}^{k} A[i]\, B[k - i]

which is the same as above, because A[i] is extrapolated to zero for negative i, and B[k - i] is zero for i > k.

If you look at the notes cited above, you can figure out that np.convolve(A,B)[0] corresponds to time 0 for our time series; the next value in the list corresponds to time dt, and so on. Therefore, the correct answer for k = 0, 1, ..., M-1,

\sum_{i=0}^{k} A[i]\, B[k - i],

is equal to np.convolve(A,B)[0:M], where M = len(A) = len(B). Multiplying by dt then approximates the integral itself, as in the other answer.
Here keep in mind that M*dt \approx T, where T is the last element of the time array.
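In code, under the assumptions above (uniform spacing dt, both series of length M), this is simply the following; the two series here are arbitrary placeholders:

import numpy as np

dt = 0.01
t = dt * np.arange(1000)             # M = 1000 samples starting at 0
A = np.sin(t)                        # placeholder data
B = np.exp(-t)
M = len(A)

res = dt * np.convolve(A, B)[:M]     # approximates the integral of A(tau) B(t - tau) from 0 to t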
Disclaimer: I am not a programmer, mathematician or an engineer. I had to use convolution somewhere and have derived these conclusions from my own struggle with the problem. I will be happy to cite any book which has this analysis if someone can point it out.
I am working on a homework problem for which I am supposed to make a function that interpolates sin(x) for n+1 interpolation points and compares the interpolation to the actual values of sin at those points. The problem statement asks for a function Lagrangian(x,points) that accomplishes this, although my current attempt does not use 'x' and 'points' in the loops, so I think I will have to try again (especially since my code doesn't work as is!).
However, why can't I access the items in the x_n array with an index, like x_n[k]? Additionally, is there a way to only access the 'x' values in the points array and loop over those for L_x? Finally, I think my 'error' definition is wrong, since it should also be an array of values. Is it necessary to make another for loop to compare each value in the 'error' array to 'max_error'?
This is my code right now (we are executing in a GUI our professor made, so I think some of the commands are unique to that, such as messages.write()):
def problem_6_run(problem_6_n, problem_6_m, plot, messages, **kwargs):
    n = problem_6_n.value
    m = problem_6_m.value
    messages.write('\n=== PROBLEM 6 ==========================\n')
    x_n = np.linspace(0,2*math.pi,n+1)
    y_n = np.sin(x_n)
    points = np.column_stack((x_n,y_n))
    i = 0
    k = 1
    L_x = 1.0
    def Lagrange(x, points):
        for i in n+1:
            for k in n+1:
                return L_x = (x- x_n[k] / x_n[i] - x_n[k])
        return Lagrange = y_n[i] * L_x
    error = np.sin(x) - Lagrange
    max_error = 0
    if error > max_error:
        max_error = error
    print.messages('Maximum error = &g' % max_error)
    plot.draw_lines(n+1,np.sin(x))
    plot.draw_points(m,Lagrange)
    plots.draw_points(m,error)
Edited:
Yes, the different things ThiefMaster mentioned are part of my (non CS) professor's environment; and yes, voithos, I'm using numpy and at this point have definitely had more practice with Matlab than Python (I guess that's obvious!). n and m are values entered by the user in the GUI; n+1 is the number of interpolation points and m is the number of points you plot against later.
Pseudocode:
Given n and m
Generate x_n a list of n evenly spaced points from 0 to 2*pi
Generate y_n a corresponding list of points for sin(x_n)
Define points, a 2D array consisting of these ordered pairs
Define Lagrange, a function of x and points
    for each value in the range n+1 (this is where I would like to use points but don't know how to access those values appropriately)
        evaluate y_n * (x - x_n[later index] / x_n[earlier index] - x_n[later index])
Calculate max error
Calculate error interpolation Lagrange - sin(x)
plot sin(x); plot Lagrange; plot error
Does that make sense?
Some suggestions:
You can access items in x_n via x_n[k] (to answer your question).
Your loops for i in n+1: and for k in n+1: won't run at all: you can't iterate over a plain integer, so Python raises a TypeError. You need to use for i in range(n+1) (or xrange) to get the whole list of values [0,1,2,...,n].
in error = np.sin(x) - Lagrange: You haven't defined x anywhere, so this will probably result in an error. Did you mean for this to be within the Lagrange function? Also, you're subtracting a function (Lagrange) from a number np.sin(x), which isn't going to end well.
When you use the return statement in your def Lagrange you are exiting your function. So your loop will never loop more than once, because you're returning out of the function. I think you might actually want to store those values instead of returning them (see the sketch below).
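To illustrate the range() and accumulate-instead-of-return points only, here is a bare-bones sketch of the loop structure (deliberately not a finished solution to the homework):

def lagrange_eval(x, x_n, y_n):
    # Evaluate the Lagrange interpolant through the points (x_n, y_n) at a single x.
    total = 0.0
    for i in range(len(x_n)):            # range(...) gives 0, 1, ..., n
        L_i = 1.0
        for k in range(len(x_n)):
            if k != i:
                L_i *= (x - x_n[k]) / (x_n[i] - x_n[k])
        total += y_n[i] * L_i            # accumulate instead of returning early
    return total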
Can you write some pseudocode to show what you'd like to do? e.g.:
Given a set of points `xs` and "interpolated" points `ys`:
    For each point (x,y) in (xs,ys):
        Calculate `sin(x)`
        Calculate `sin(x)-y` being the difference between the function and y
        .... etc etc
This will make the actual code easier for you to write, and easier for us to help you with (especially if you intellectually understand what you're trying to do, and the only problem is with converting that into python).
So: try to fix up some of these points in your code, try to write some pseudocode to say what you want to do, and we'll keep helping you :)