Numerical integration loop in Python

I would like to write a program that solves the definite integral of 2*c*x, with limits between 0 and 1, in a loop that considers a different value of the constant c per iteration.
I would then like each solution of the integral to be output into a new array.
How do I best write this program in Python?
from scipy import integrate
integrate.quad
is acceptable here. My major struggle is structuring the program.
Here is an old attempt (that failed)
# import c
fn = 'cooltemp.dat'
c = loadtxt(fn,unpack=True,usecols=[1])
I = []
for n in range(len(c)):
    # equation
    eqn = 2*x*c[n]
    # integrate
    result, error = integrate.quad(lambda x: eqn, 0, 1)
    I.append(result)
I = array(I)

For instance, to compute the given integral for c in [0, 9]:
[scipy.integrate.quadrature(lambda x: 2 * c * x, 0, 1)[0] for c in range(10)]
This uses a list comprehension and a lambda function.
Alternatively, you could define the function which returns the integral from a given c as a ufunc (thanks to vectorize). This is perhaps more in the spirit of numpy.
>>> import scipy.integrate
>>> import numpy as np
>>> func = lambda c: scipy.integrate.quadrature(lambda x: 2 * c * x, 0, 1)[0]
>>> ndfunc = np.vectorize(func)
>>> ndfunc(np.arange(10))
array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])

You're really close.
fn = 'cooltemp.dat'
c_values = loadtxt(fn,unpack=True,usecols=[1])
I = []
for c in c_values:  # can iterate over numpy arrays directly; no need for range(len(...))
    # equation
    #eqn = 2*x*c[n]  # This doesn't work: x is not defined yet.
    # integrate
    result, error = integrate.quad(lambda x: 2*c*x, 0, 1)
    I.append(result)
I = array(I)
I think you're a little confused about how lambda works.
my_func = lambda x: 2*x
is the same thing as:
def my_func(x):
    return 2*x
If you still don't like lambda, you can do this:
def f(x, c):
    return 2*x*c

#...snip...

integral, error = integrate.quad(f, 0, 1, args=(c,))
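Putting it together, a minimal sketch of the whole program (reusing cooltemp.dat and the f above; loadtxt and array come from numpy):

from numpy import loadtxt, array
from scipy import integrate

def f(x, c):
    return 2*x*c

c_values = loadtxt('cooltemp.dat', unpack=True, usecols=[1])

results = []
for c in c_values:
    integral, error = integrate.quad(f, 0, 1, args=(c,))
    results.append(integral)
results = array(results)  # the integral of 2*c*x from 0 to 1 is c, so this should equal c_values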

from scipy import integrate

constants = [1, 2, 3]
integrals = []  # alternatively {}

def f(x, c):
    return 2*x*c

for c in constants:
    integral, error = integrate.quad(lambda x: f(x, c), 0., 1.)
    integrals.append(integral)  # alternatively integrals[c] = integral
This will output a list of integrals, just like Nicolas's answer, for whatever list of constants you supply.

Related

Finding the derivative of a function using scipy.misc.derivative

I'm having a problem: the function and its derivative should have the same value.
The function is y = e^x, so its derivative should be the same: y' = e^x.
But when I do it with scipy:
from scipy.misc import derivative
from math import *
def f(x):
    return exp(x)

def df(x):
    return derivative(f, x)
print(f(1))
print(df(1))
it prints different values:
f(1) = 2.718...
df(1) = 3.194...
so it seems e has a different value.
Can anyone explain that and how to fix it?
As pointed out by @SevC_10 in his answer, you are missing the dx parameter.
I like to showcase the use of sympy for differentiation operations; I find it much easier in many cases.
import sympy
import numpy as np
x = sympy.Symbol('x')
f = sympy.exp(x) # my function e^x
df = f.diff() # y' of the function = e^x
f_lambda = sympy.lambdify(x, f, 'numpy')
df_lambda = sympy.lambdify(x, df, 'numpy') # use lambdify on the derivative df
print(f_lambda(np.ones(5)))
# array([2.71828183, 2.71828183, 2.71828183, 2.71828183, 2.71828183])
print(df_lambda(np.ones(5)))
# array([2.71828183, 2.71828183, 2.71828183, 2.71828183, 2.71828183])
print(f_lambda(np.zeros(5)))
# array([1., 1., 1., 1., 1.])
print(df_lambda(np.zeros(5)))
# array([1., 1., 1., 1., 1.])
print(f_lambda(np.array([0, 1, 2, 3, 4])))
# array([ 1. , 2.71828183, 7.3890561 , 20.08553692, 54.59815003])
print(df_lambda(np.array([0, 1, 2, 3, 4])))
# array([ 1. , 2.71828183, 7.3890561 , 20.08553692, 54.59815003])
The derivative function has other arguments. From the help(derivative):
Parameters
----------
func : function
Input function.
x0 : float
The point at which the nth derivative is found.
dx : float, optional
Spacing.
n : int, optional
Order of the derivative. Default is 1.
args : tuple, optional
Arguments
order : int, optional
Number of points to use, must be odd.
As you can see, you didn't specify the dx parameter, so the approximate derivative is computed over a large interval, which causes a large truncation error. From the documentation, the default value is 1.0 (https://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.derivative.html).
Simply reduce the spacing interval. A minimal sketch of the fix, reusing the f from the question; with dx=1e-3 I get:
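def df(x):
    return derivative(f, x, dx=1e-3)  # central difference over a much smaller spacing

print(f(1))
print(df(1))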
2.718281828459045
2.718282281505724

Constrained Optimization Problem : Python

I am sure there must be a simple solution that keeps evading me.
I have a function
f = a*x + b*y + c*z
and a constraint
l*x + m*y + n*z = B
I need to find the (x, y, z) that maximizes f subject to the constraint.
I also need
x, y, z >= 0
I remember having seen a solution like this.
This example uses
a,b,c=2,4,10 and l,m,n=1,2,4 and B=5
Ideally, this should give me x=1, y=0, z=1, such that f=12.
import numpy as np
from scipy.optimize import minimize

def objective(x, sign=-1.0):
    x1 = x[0]
    x2 = x[1]
    x3 = x[2]
    return sign*((2*x1) + (4*x2) + (10*x3))

def constraint1(x, sign=1.0):
    return sign*(1*x[0] + 2*x[1] + 4*x[2] - 5)

x0 = [0, 0, 0]
b1 = (0, None)
b2 = (0, None)
b3 = (0, None)
bnds = (b1, b2, b3)
con1 = {'type': 'ineq', 'fun': constraint1}
cons = [con1]
sol = minimize(objective, x0, method='SLSQP', bounds=bnds, constraints=cons)
print(sol)
This is generating a bizarre solution. What am I missing?
The problem as you originally stated it, without integer constraints, can be solved simply and efficiently by linprog:
import scipy.optimize

# maximize 2x + 4y + 10z  ->  minimize the negated objective
c = [-2, -4, -10]
A_eq = [[1, 2, 4]]
b_eq = [5]
# bounds are for non-negative values by default
res = scipy.optimize.linprog(c, A_eq=A_eq, b_eq=b_eq)
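With these inputs, res.x should come out as [0, 0, 1.25] with res.fun == -12.5 (i.e. f = 12.5), matching the SLSQP solution shown below.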
I would recommend against using more general purpose solvers to solve narrow problems like this as you will often encounter worse performance and sometimes unexpected results.
You need to change your constraint to an 'equality constraint'. Also, your problem didn't specify that integer answers were required, so there is a better non-integer answer to this knapsack problem. (I don't have much experience with scipy.optimize, and I'm not sure whether it can handle integer LP problems.)
In [13]: con1 = {'type': 'eq', 'fun': constraint1}
In [14]: cons = [con1,]
In [15]: sol = minimize (objective,x0,method='SLSQP',bounds=bnds,constraints=cons)
In [16]: print(sol)
     fun: -12.5
     jac: array([ -2.,  -4., -10.])
 message: 'Optimization terminated successfully.'
    nfev: 10
     nit: 2
    njev: 2
  status: 0
 success: True
       x: array([0.  , 0.  , 1.25])
Like Jeff said, scipy.optimize doesn't handle integer programming problems.
You can try using PuLP instead for integer optimization problems:
from pulp import *
prob = LpProblem("F Problem", LpMaximize)
# a,b,c=2,4,10 and l,m,n=1,2,4 and B=5
a,b,c=2,4,10
l,m,n=1,2,4
B=5
# x,y,z>=0
x = LpVariable("x",0,None,LpInteger)
y = LpVariable("y",0,None,LpInteger)
z = LpVariable("z",0,None,LpInteger)
# f=ax+by+c*z
prob += a*x + b*y + c*z, "Objective Function f"
# lx+my+n*z=B
prob += l*x + m*y + n*z == B, "Constraint B"
# solve
prob.solve()
print("Status:", LpStatus[prob.status])
for v in prob.variables():
    print(v.name, "=", v.varValue)
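For these coefficients the solver should report Status: Optimal with x = 1, y = 0, z = 1, i.e. f = 12, the integer optimum stated in the question.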

Python Optimization: Using vector technique to find power of each matrix in an numpy array

A 3D numpy array A contains a series (in this example, I am choosing 3) of 2D numpy arrays D of shape 2 x 2. The D matrix is as follows:
D = np.array([[1,2],[3,4]])
A is initialized and assigned as below:
idx = np.arange(3)
A = np.zeros((3,2,2))
A[idx,:,:] = D  # This gives A = [[[1,2],[3,4]], [[1,2],[3,4]], [[1,2],[3,4]]]
                # In mathematical notation: A = {D, D, D}
Now, essentially what I require after the execution of the code is:
Mathematically, A = {D^0, D^1, D^2} = {D0, D1, D2}
where D0 = [[1,0],[0,1]], D1 = [[1,2],[3,4]], D2=[[7,10],[15,22]]
Is it possible to apply power to each matrix element in A without using a for-loop? I would be doing larger matrices with more in the series.
I had defined, n = np.array([0,1,2]) # corresponding to powers 0, 1 and 2 and tried
Result = np.power(A,n) but I do not get the desired output.
Is there an efficient way to do it?
Full code:
import numpy as np

D = np.array([[1,2],[3,4]])
idx = np.arange(3)
A = np.zeros((3,2,2))
A[idx,:,:] = D  # This gives A = [[[1,2],[3,4]], [[1,2],[3,4]], [[1,2],[3,4]]]
                # In mathematical notation: A = {D, D, D}
n = np.array([0,1,2])
Result = np.power(A, n)  # ------> Not the desired output.
A cumulative product exists in numpy, but not for matrices. Therefore, you need to make your own 'matcumprod' function. You could use np.dot for this, but np.matmul (or @) is specialized for matrix multiplication.
Since you state your powers always go from 0 to some_power, I suggest the following function:
def matcumprod(D, upto):
    Res = np.empty((upto, *D.shape), dtype=D.dtype)
    Res[0, :, :] = np.eye(D.shape[0])
    for i in range(1, upto):
        Res[i, :, :] = Res[i-1, :, :] @ D
    return Res
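For example, with the D from the question, matcumprod(D, 3) should return the stack {D^0, D^1, D^2} = {[[1,0],[0,1]], [[1,2],[3,4]], [[7,10],[15,22]]}.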
By the way, a loop oftentimes outperforms a built-in numpy function if the latter uses a lot of memory, so don't fret over it if your powers stay within bounds...
Alright, I spent a lot of time on this problem but could not find a vectorized solution of the kind you'd like. So I would like to first propose a basic solution, and then perhaps an optimization if you require finding consecutive powers.
The function you're looking for is called numpy.linalg.matrix_power
import numpy as np

D = np.array([[1,2],[3,4]])
idx = np.arange(3)
A = np.zeros((3,2,2))
A[idx,:,:] = D  # A = {D, D, D}, as in the question

n = np.array([0,1,2])
result = np.array([np.linalg.matrix_power(D, i) for i in n])
#Output:
array([[[ 1,  0],
        [ 0,  1]],

       [[ 1,  2],
        [ 3,  4]],

       [[ 7, 10],
        [15, 22]]])
However, if you notice, you end up calculating multiple powers for the same base matrix. We could instead utilize the intermediate results and go from there, using numpy.linalg.multi_dot
def all_powers_arr_of_matrix(A):
    result = np.zeros(A.shape)
    result[0] = np.linalg.matrix_power(A[0], 0)
    for i in range(1, A.shape[0]):
        result[i] = np.linalg.multi_dot([result[i - 1], A[i]])
    return result
result = all_powers_arr_of_matrix(A)
#Output:
array([[[ 1.,  0.],
        [ 0.,  1.]],

       [[ 1.,  2.],
        [ 3.,  4.]],

       [[ 7., 10.],
        [15., 22.]]])
Also, we can avoid creating the matrix A entirely, saving some time.
def all_powers_matrix(D, *rangeargs):  # end exclusive
    '''Expects a 2D matrix.
    Use as all_powers_matrix(D, end) or
    all_powers_matrix(D, start, end)
    '''
    if len(rangeargs) == 1:
        start, end = 0, rangeargs[0]
    elif len(rangeargs) == 2:
        start, end = rangeargs
    else:
        print("incorrect args")
        return None
    result = np.zeros((end - start, *D.shape))
    result[0] = np.linalg.matrix_power(D, start)  # D, not A: we never build A
    for i in range(1, end - start):
        result[i] = np.linalg.multi_dot([result[i - 1], D])
    return result
result = all_powers_matrix(D, 3)
#Output:
array([[[ 1.,  0.],
        [ 0.,  1.]],

       [[ 1.,  2.],
        [ 3.,  4.]],

       [[ 7., 10.],
        [15., 22.]]])
Note that you'd need to add error handling if you decide to use these functions as-is.
To calculate powers of the matrix D, one way is to find its eigenvalues and right eigenvectors with np.linalg.eig, raise the diagonal matrix of eigenvalues to the desired powers (which is easy), and then, after some manipulation, use two np.einsum calls to reconstruct A. (This assumes D is diagonalizable.)
# get eigenvalues and eigenvectors
eigval, eigvect = np.linalg.eig(D)

# to check how it works, you can do:
print(np.dot(eigvect*eigval, np.linalg.inv(eigvect)))
#[[1. 2.]
# [3. 4.]]
# so you get D back

# use np.power.outer with n on the eigenvalues to get all the powers you want
arrp = np.power.outer(eigval, n).T

# apply_along_axis to create the diagonal matrices along the last axis
diagp = np.apply_along_axis(np.diag, axis=-1, arr=arrp)

# finally use two np.einsum calls with the right subscripts to get what you want
A = np.einsum('lij,jk -> lik',
              np.einsum('ij,kjl -> kil', eigvect, diagp), np.linalg.inv(eigvect)).round()
print (A)
print (A.shape)
#[[[ 1.  0.]
#  [-0.  1.]]
#
# [[ 1.  2.]
#  [ 3.  4.]]
#
# [[ 7. 10.]
#  [15. 22.]]]
#
#(3, 2, 2)
I don't have a full solution, but there are some things I wanted to mention which are a bit too long for the comments.
You might first look into addition-chain exponentiation if you are computing big powers of big matrices. This is basically asking how many matrix multiplications are required to compute A^k for a given k. For instance A^5 = A(A^2)^2, so you need only three matrix multiplies: A^2, then (A^2)^2, then A(A^2)^2. This might be the simplest way to gain some efficiency, but you will probably still have to use explicit loops.
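For illustration, a minimal square-and-multiply sketch (the simplest addition-chain strategy; np.linalg.matrix_power uses a similar binary decomposition internally):

import numpy as np

def matpow_binary(A, k):
    # square-and-multiply: O(log k) matrix multiplications instead of k-1
    result = np.eye(A.shape[0], dtype=A.dtype)
    base = A.copy()
    while k:
        if k & 1:           # this bit of k contributes the current square
            result = result @ base
        base = base @ base  # square for the next bit
        k >>= 1
    return result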
Your question is also related to the problem of computing Ax, A^2x, ... , A^kx for a given A and x. This is an active area of research right now (search "matrix powers kernel"), since computing such a sequence efficiently is useful for parallel/communication avoiding Krylov subspace methods. If you're looking for a very efficient solution to your problem it might be worth looking into some of the results about this.

Sum a 3x3 array onto another matrix at a given point, wrapping at the boundaries

Suppose I have this 2D array A:
[[0,0,0,0],
[0,0,0,0],
[0,0,0,0],
[0,0,0,4]]
and I want to sum B:
[[1,2,3]
[4,5,6]
[7,8,9]]
centered on A[0][0], so the result would be:
array_sum(A,B,0,0) =
[[5,6,0,4],
 [8,9,0,7],
 [0,0,0,0],
 [2,3,0,5]]
I was thinking that I should make a function that checks whether it's on a boundary and then adjusts the indices accordingly:
def array_sum(A, B, i, j):
    ...
    if i == 0 and j == 0:
        A[-1][-1] = A[-1][-1] + B[0][0]
        ...
    else:
        A[i-1][j-1] = A[i-1][j-1] + B[0][0]
        A[i][j] = A[i][j] + B[1][1]
        A[i+1][j+1] = A[i+1][j+1] + B[2][2]
        ...
but I don't know if there is a better way of doing that. I was reading about broadcasting, or maybe using convolve for that, but I'm not sure whether there is a better way to do it.
Assuming B.shape is all odd numbers, you can use np.indices, manipulate them to point where you want, and use np.add.at
def array_sum(A, B, loc=(0, 0)):
    A_ = A.copy()
    ix = np.indices(B.shape)
    new_loc = np.array(loc) - np.array(B.shape) // 2
    new_ix = np.mod(ix + new_loc[:, None, None],
                    np.array(A.shape)[:, None, None])
    np.add.at(A_, tuple(new_ix), B)
    return A_
Testing:
array_sum(A, B)
Out:
array([[ 5.,  6.,  0.,  4.],
       [ 8.,  9.,  0.,  7.],
       [ 0.,  0.,  0.,  0.],
       [ 2.,  3.,  0.,  5.]])
As a rule of thumb, slice indexing is faster (~2x) than fancy indexing. This appears to be true even for the small example in the OP. Downside: the code is slightly more complicated.
import numpy as np
from numpy import s_ as _
from itertools import product, starmap
def wrapsl1d(N, n, c):
    # check in 1D whether a patch of size n centered at c in a vector
    # of length N fits or has to be wrapped around;
    # return appropriate slice objects for both vector and patch
    assert n <= N
    l = (c - n//2) % N
    h = l + n
    # return a list of pairs (index into A, index into patch):
    # 2 pairs if we wrap around, otherwise 1 pair
    return [_[l:h, :]] if h <= N else [_[l:, :N-l], _[:h-N, n+N-h:]]

def use_slices(A, patch, center=(0, 0)):
    slAptch = product(*map(wrapsl1d, A.shape, patch.shape, center))
    # the product now has elements [(idx0A, idx0ptch), (idx1A, idx1ptch)];
    # transpose them:
    slAptch = starmap(zip, slAptch)
    out = A.copy()
    for sa, sp in slAptch:
        out[sa] += patch[sp]
    return out
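Usage matches the fancy-indexing version above: with the A and B from the question, use_slices(A, B) should produce the same array as array_sum(A, B).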

Multiplying Block Matrices in Numpy

Hi everyone, I am a Python newbie.
I have to implement lasso L1 regression for a class assignment. This involves solving a quadratic program involving block matrices:
minimize x^T * H * x + f^T * x
where x > 0
Here H is a 2 x 2 block matrix, each element being a k x k matrix, and x and f are 2 x 1 vectors, each element being a k-dimensional vector.
I was thinking of using ndarrays, such that:
np.shape(H) = (2, 2, k, k)
np.shape(x) = (2, k)
But I figured out that np.dot(x, H) doesn't work here.
Is there an easy way to solve this problem? Thanks in advance.
First of all, I am convinced that converting to plain (2k x 2k) matrices would lead to more efficient computations. That said, if you keep your 2k x 2k matrix as a 2 x 2 block structure, then you operate in a tensor product of vector spaces, and have to use tensordot instead of dot.
Let's give it a try, with k=5 for example:
>>> import numpy as np
>>> k = 5
Define our matrix a and vector x:
>>> a = np.arange(1.*2*2*k*k).reshape(2,2,k,k)
>>> x = np.arange(1.*2*k).reshape(2,k)
>>> x
array([[ 0.,  1.,  2.,  3.,  4.],
       [ 5.,  6.,  7.,  8.,  9.]])
Now we can multiply our tensors. Be sure to choose the right axes; I didn't test the following formula explicitly, and there might be an error:
>>> result = np.tensordot(a,x,([1,3],[0,1]))
>>> result
array([[  985.,  1210.,  1435.,  1660.,  1885.],
       [ 3235.,  3460.,  3685.,  3910.,  4135.]])
>>> np.shape(result)
(2, 5)
np.einsum gives good control over which axes are summed.
np.einsum('ijkl,jk', H, x)
is one possible (generalized) dot product; it leaves shape (2, k), the first and last dimensions of H.
np.einsum('ijkl,jl', H, x)
is another. You need to be explicit about which dimensions of x go with which of H.
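For instance, reusing the a and x from the previous answer (called H here), the second contraction reproduces the tensordot result; a quick cross-check:

import numpy as np

k = 5
H = np.arange(1.*2*2*k*k).reshape(2, 2, k, k)
x = np.arange(1.*2*k).reshape(2, k)

via_einsum = np.einsum('ijkl,jl', H, x)
via_tensordot = np.tensordot(H, x, ([1, 3], [0, 1]))
print(np.allclose(via_einsum, via_tensordot))  # True: same contraction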
