Heuristic algorithm in subarrays - python

ORIGINAL PROBLEM:
Given a set A = {a1, ..., an} and the matrix D of distances between the elements of A, we want to select the subset A* ⊂ A of cardinality p with minimum diameter δ(A*), where δ(A*) = max{d(a, a') : a, a' ∈ A*}.
Write Python code that heuristically solves the particular case n = 8, p = 4.
WHAT I HAVE UNDERSTOOD:
Given an m×n matrix (in this case 8×8), I am trying to find, with a heuristic algorithm, the max value of each 4×4 sub-array and store these values in a final matrix.
For example:
Given the 8×8 matrix C of Euclidean distances:
What is the max value of each possible 4×4 sub-array?
and then store this max value in the final m×n matrix.
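Under that interpretation (the max of every 4×4 window of an 8×8 matrix), the window maxima can be computed directly with two loops. This is only an illustrative sketch; the helper name window_max and the output layout are my own choices, not from the question:

import numpy as np

def window_max(C, k=4):
    # max of every k x k sub-array of a square matrix C
    n = C.shape[0]
    out = np.empty((n - k + 1, n - k + 1))
    for i in range(n - k + 1):
        for j in range(n - k + 1):
            out[i, j] = C[i:i + k, j:j + k].max()
    return out

# For an 8x8 distance matrix and k=4 this yields a 5x5 matrix of window maxima.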
I have tried the code below, but it only returns one max value for the whole matrix.
# Python 3 Program to find the maximum
# value in a matrix which contain
# intersecting concentric submatrix
MAXN = 64

# Return the maximum value in intersecting
# concentric submatrix.
def maxValue(n, m, x, y, a):
    c = [[0 for x in range(MAXN)]
            for y in range(MAXN)]

    # For each center of concentric sub-matrix.
    for i in range(m):
        # for each row
        for p in range(n):
            # for each column
            for q in range(n):
                # finding x distance.
                dx = abs(p - x[i])
                # finding y distance.
                dy = abs(q - y[i])
                # maximum of x distance and y distance
                d = max(dx, dy)
                # assigning the value.
                c[p][q] += max(0, a[i] - d)

    # Finding the maximum value in
    # the formed matrix.
    res = 0
    for i in range(n):
        for j in range(n):
            res = max(res, c[i][j])
    return res

# Driver Code
if __name__ == "__main__":
    n = 10
    m = 2
    x = [3, 7]
    y = [3, 7]
    a = [4, 3]
    print(maxValue(n, m, x, y, a))
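Coming back to the ORIGINAL PROBLEM (choose p = 4 of the n = 8 elements so that the diameter of the chosen subset is minimal), a simple greedy heuristic is to try each element as an anchor and repeatedly add the element that increases the diameter the least, keeping the best subset found. This is only a sketch under the assumption that D is the symmetric 8×8 distance matrix; the function name and the random test data are illustrative, not from the question:

import itertools
import numpy as np

def min_diameter_subset(D, p):
    """Greedy heuristic: grow a subset of size p from each anchor point,
    always adding the candidate whose worst distance to the subset is smallest;
    return the best subset found over all anchors."""
    n = D.shape[0]
    best_subset, best_diam = None, np.inf
    for start in range(n):
        subset = [start]
        while len(subset) < p:
            cand = min(
                (j for j in range(n) if j not in subset),
                key=lambda j: max(D[j, k] for k in subset),
            )
            subset.append(cand)
        diam = max(D[i, j] for i, j in itertools.combinations(subset, 2))
        if diam < best_diam:
            best_subset, best_diam = subset, diam
    return best_subset, best_diam

# Example with a random symmetric 8x8 Euclidean distance matrix (illustrative only)
rng = np.random.default_rng(0)
pts = rng.random((8, 2))
D = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
print(min_diameter_subset(D, 4))

Note that for n = 8 and p = 4 there are only C(8, 4) = 70 candidate subsets, so you could also enumerate them all with itertools.combinations and take the exact optimum instead of a heuristic.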

Related

How to find all the sub-matrices of an N*N matrix?

Let's say we have a matrix A,
A = [[1, 2],
     [3, 4]]
and I want to find all the submatrices, i.e.,
1, 2, 3, 4, (1,2), (3,4), (1,3), (2,4), (1,2,3,4)
using basic for loops. I tried this, but it doesn't give correct results.
def mtx(arr):
    n = len(arr)
    for i in range(n, 1, -1):
        off_cnt = n - i + 1
        for j in range(off_cnt):
            for k in range(off_cnt):
                for p in range(i):
                    for q in range(i):
                        print(arr[p+j][q+k])
                print('--------')
# print(mtx(a))
You need to generate all sizes of submatrices of the N*N matrix, say of size H*W, where H and W each range from 1 to N (inclusive) — that is your first pair of for loops. Then you need to create all submatrices of that size from all possible starting coordinates: a submatrix may start at position x, y, where x ranges from 0 to N - W (inclusive) and y from 0 to N - H (inclusive); anything beyond that won't fit. Then just fill the submatrix and do whatever you want with it (print it?), as shown in the code below:
def print_submatrices(matrix):
    # all possible submatrices heights
    for height in range(1, len(matrix) + 1):
        # all possible submatrices width
        for width in range(1, len(matrix[0]) + 1):
            # create empty submatrix of given size
            template = list()
            for i in range(height):
                template.append([None] * width)
            # fill submatrix
            for y in range(len(matrix) - height + 1):        # every possible start on y axis
                for x in range(len(matrix[0]) - width + 1):  # every possible start on x axis
                    # fill submatrix of given size starting at y, x coords
                    for i in range(y, y + height):
                        for j in range(x, x + width):
                            template[i-y][j-x] = matrix[i][j]
                    # when the matrix is filled, print it
                    print(template)
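For the 2×2 example above, a quick usage sketch (illustrative only):

A = [[1, 2],
     [3, 4]]

print_submatrices(A)
# prints, among others: [[1]], [[2]], [[1, 2]], [[1], [3]], and [[1, 2], [3, 4]]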

Translating from MATLAB: new problems (complex Ginzburg-Landau equation)

Thank you for all of your constructive criticism on my last post. I have made some changes, but alas my code is still not working and I can't figure out why. When I run this version, I get a runtime warning about invalid values encountered in matmul.
My code is given as
from __future__ import division
import numpy as np
from scipy.linalg import eig
from scipy.linalg import toeplitz

def poldif(*arg):
    """
    Calculate differentiation matrices on arbitrary nodes.

    Returns the differentiation matrices D1, D2, .. DM corresponding to the
    M-th derivative of the function f at arbitrarily specified nodes. The
    differentiation matrices can be computed with unit weights or
    with specified weights.

    Parameters
    ----------
    x : ndarray
        vector of N distinct nodes
    M : int
        maximum order of the derivative, 0 < M <= N - 1

    OR (when computing with specified weights)

    x : ndarray
        vector of N distinct nodes
    alpha : ndarray
        vector of weight values alpha(x), evaluated at x = x_j.
    B : int
        matrix of size M x N, where M is the highest derivative required.
        It should contain the quantities B[l,j] = beta_{l,j} =
        l-th derivative of log(alpha(x)), evaluated at x = x_j.

    Returns
    -------
    DM : ndarray
        M x N x N array of differentiation matrices

    Notes
    -----
    This function returns M differentiation matrices corresponding to the
    1st, 2nd, ... M-th derivatives on arbitrary nodes specified in the array
    x. The nodes must be distinct but are, otherwise, arbitrary. The
    matrices are constructed by differentiating the N-th order Lagrange
    interpolating polynomial that passes through the specified points.

    The M-th derivative of the grid function f is obtained by the matrix-
    vector multiplication

    .. math::

    f^{(m)}_i = D^{(m)}_{ij}f_j

    This function is based on code by Rex Fuzzle
    https://github.com/RexFuzzle/Python-Library

    References
    ----------
    ..[1] B. Fornberg, Generation of Finite Difference Formulas on Arbitrarily
    Spaced Grids, Mathematics of Computation 51, no. 184 (1988): 699-706.

    ..[2] J. A. C. Weideman and S. C. Reddy, A MATLAB Differentiation Matrix
    Suite, ACM Transactions on Mathematical Software, 26, (2000): 465-519
    """
    if len(arg) > 3:
        raise Exception('number of arguments is either two OR three')

    if len(arg) == 2:
        # unit weight function : arguments are nodes and derivative order
        x, M = arg[0], arg[1]
        N = np.size(x)
        # assert M < N, "Derivative order cannot be larger or equal to number of points"
        if M >= N:
            raise Exception("Derivative order cannot be larger or equal to number of points")
        alpha = np.ones(N)
        B = np.zeros((M, N))

    elif len(arg) == 3:
        # specified weight function : arguments are nodes, weights and B matrix
        x, alpha, B = arg[0], arg[1], arg[2]
        N = np.size(x)
        M = B.shape[0]

    I = np.eye(N)                         # identity matrix
    L = np.logical_or(I, np.zeros(N))     # logical identity matrix
    XX = np.transpose(np.array([x, ] * N))
    DX = XX - np.transpose(XX)            # DX contains entries x(k)-x(j)
    DX[L] = np.ones(N)                    # put 1's on the main diagonal
    c = alpha * np.prod(DX, 1)            # quantities c(j)
    C = np.transpose(np.array([c, ] * N))
    C = C / np.transpose(C)               # matrix with entries c(k)/c(j)
    Z = 1 / DX                            # Z contains entries 1/(x(k)-x(j))
    Z[L] = 0                              # eye(N)*ZZ; with zeros on the diagonal
    X = np.transpose(np.copy(Z))          # X is same as Z', but with ...
    Xnew = X

    for i in range(0, N):
        Xnew[i:N - 1, i] = X[i + 1:N, i]

    X = Xnew[0:N - 1, :]                  # ... diagonal entries removed
    Y = np.ones([N - 1, N])               # initialize Y and D matrices
    D = np.eye(N)                         # Y is matrix of cumulative sums

    DM = np.empty((M, N, N))              # differentiation matrices

    for ell in range(1, M + 1):
        Y = np.cumsum(np.vstack((B[ell - 1, :], ell * (Y[0:N - 1, :]) * X)), 0)  # diagonals
        D = ell * Z * (C * np.transpose(np.tile(np.diag(D), (N, 1))) - D)        # off-diagonals
        D[L] = Y[N - 1, :]
        DM[ell - 1, :, :] = D

    return DM
def herdif(N, M, b=1):
    """
    Calculate differentiation matrices using Hermite collocation.

    Returns the differentiation matrices D1, D2, .. DM corresponding to the
    M-th derivative of the function f, at the N Hermite nodes scaled by b.

    Parameters
    ----------
    N : int
        number of grid points
    M : int
        maximum order of the derivative, 0 < M < N
    b : float, optional
        scale parameter, real and positive

    Returns
    -------
    x : ndarray
        N x 1 array of Hermite nodes which are zeros of the N-th degree
        Hermite polynomial, scaled by b
    DM : ndarray
        M x N x N array of differentiation matrices

    Notes
    -----
    This function returns M differentiation matrices corresponding to the
    1st, 2nd, ... M-th derivatives on a Hermite grid of N points. The
    matrices are constructed by differentiating N-th order Hermite
    interpolants.

    The M-th derivative of the grid function f is obtained by the matrix-
    vector multiplication

    .. math::

    f^{(m)}_i = D^{(m)}_{ij}f_j

    References
    ----------
    ..[1] B. Fornberg, Generation of Finite Difference Formulas on Arbitrarily
    Spaced Grids, Mathematics of Computation 51, no. 184 (1988): 699-706.

    ..[2] J. A. C. Weideman and S. C. Reddy, A MATLAB Differentiation Matrix
    Suite, ACM Transactions on Mathematical Software, 26, (2000): 465-519

    ..[3] R. Baltensperger and M. R. Trummer, Spectral Differencing With A
    Twist, SIAM Journal on Scientific Computing 24, (2002): 1465-1487
    """
    if M >= N - 1:
        raise Exception('number of nodes must be greater than M - 1')
    if M <= 0:
        raise Exception('derivative order must be at least 1')

    x = herroots(N)             # compute Hermite nodes
    alpha = np.exp(-x * x / 2)  # compute Hermite weights

    beta = np.zeros([M + 1, N])
    # construct beta(l,j) = d^l/dx^l (alpha(x)/alpha'(x))|x=x_j recursively
    beta[0, :] = np.ones(N)
    beta[1, :] = -x

    for ell in range(2, M + 1):
        beta[ell, :] = -x * beta[ell - 1, :] - (ell - 1) * beta[ell - 2, :]

    # remove initialising row from beta
    beta = np.delete(beta, 0, 0)

    # compute differentiation matrix (b=1)
    DM = poldif(x, alpha, beta)
    # scale nodes by the factor b
    x = x / b

    # scale the matrix by the factor b
    for ell in range(M):
        DM[ell, :, :] = (b ** (ell + 1)) * DM[ell, :, :]

    return x, DM
def herroots(N):
    """
    Compute roots of the Hermite polynomial of degree N

    Parameters
    ----------
    N : int
        degree of the Hermite polynomial

    Returns
    -------
    x : ndarray
        N x 1 array of Hermite roots
    """
    # Jacobi matrix
    d = np.sqrt(np.arange(1, N))
    J = np.diag(d, 1) + np.diag(d, -1)

    # compute eigenvalues
    mu = eig(J)[0]

    # return sorted, normalised eigenvalues
    # real part only since all roots must be real.
    return np.real(np.sort(mu) / np.sqrt(2))
a = 1 - 1j
b = 2 + 0.2j
c1 = 0.34
c2 = 0.005

alpha1 = (4 * c2 / a) ** 0.25
alpha2 = b / 2 * a

Nx = 220

# Hermite differentiation matrices
[x, D] = herdif(Nx, 2, np.real(alpha1))
D1 = D[0, :]
D2 = D[1, :]

# integration weights
diff = np.diff(x)
# print(len(diff))
p = np.concatenate([np.zeros(1), diff])
q = np.concatenate([diff, np.zeros(1)])
w = (p + q) / 2
Q = np.diag(w)

# discretised operator
const = c1 * np.diag(np.ones(len(x))) - c2 * (np.diag(x) * np.diag(x))
# print(const)
A = a * D2 - b * D1 + const

##### Timestepping
tmax = 200
tmin = 0
dt = 1
n = (tmax - tmin) / dt

tvec = np.linspace(0, tmax, n, endpoint=True)
# print(len(tvec))
q = np.zeros((Nx, len(tvec)), dtype=complex)
f = np.zeros((Nx, len(tvec)), dtype=complex)
q0 = np.ones(Nx) * 10 ** 4
q[:, 0] = q0
# print(q[:,0])
# print(q0)

# qnew - qold = dt*A*qold + dt*N(qold,qold,qold)
# qnew - qold = dt*A*qnew - dt*N(qold,qold,qold)
# therefore qnew - qold = 0.5*dt*A*qold + 0.5*dt*A*qnew + dt*N(qold,qold,qold)
# rearranging gives qnew*(1 - 0.5*A*dt) = (1 + 0.5*A*dt)*qold + dt*N(qold,qold,qold)
from numpy.linalg import inv
inverted = inv(np.eye(Nx) - 0.5 * A * dt)
forqold = (np.eye(Nx) + 0.5 * A * dt)
firstterm = np.matmul(inverted, forqold)

for t in range(0, len(tvec) - 1):
    nl = abs(np.square(q[:, t])) * q[:, t]
    q[:, t + 1] = np.matmul(firstterm, q[:, t]) - dt * np.matmul(inverted, nl)
where the Hermite differentiation matrix routines can be found online and are in a different file. This code blows up after five iterations, which I cannot understand, as I don't see how it differs from the MATLAB version found here: https://www.bagherigroup.com/research/open-source-codes/
I would really appreciate any help.
Error in:
q[:,t+1] = inverted*forgold*np.array(q[:,t]) + inverted*dt*np.array(nl)
q[:, t+1] indexes a 2d array (probably not a np.matrix, which is more MATLAB-like). This indexing reduces the number of dimensions by 1, hence the (220,) shape in the error message.
The error message says the RHS is (220,220). That shape probably comes from inverted and forgold. np.array(q[:,t]) is 1d. Multiplying a (220,220) by a (220,) is OK, but you can't put that square array into a 1d slot.
Both uses of np.array in the error line are superfluous. Their arguments are already ndarray.
As for the loop, it may be necessary. It looks like q[:,t+1] is a function of q[:,t], a serial, rather than parallel, operation. Those are harder to render as 'vectorized' (unless you can use cumsum-like operations).
Note that in numpy, * is elementwise multiplication, the .* of MATLAB. np.dot and @ are used for matrix multiplication.
q[:,t+1] = inverted @ q[:,t]
would work.
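As a quick standalone illustration of that distinction (not taken from the question's code):

import numpy as np

A = np.array([[1., 2.], [3., 4.]])
v = np.array([1., 1.])

print(A * v)         # elementwise with broadcasting: [[1. 2.] [3. 4.]]
print(A @ v)         # matrix-vector product: [3. 7.]
print(np.dot(A, v))  # same as A @ v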

Finding the Inverse of a Matrix using Cramer's Rule

I have created a function determinant which outputs the determinant of a 3x3 matrix. I also need to create a function to invert that matrix; however, the code doesn't seem to work and I can't figure out why.
import numpy as np

M = np.array([
    [4., 3., 9.],
    [2., 1., 8.],
    [10., 7., 5.]
])

def inverse(M):
    '''
    This function finds the inverse of a matrix using Cramer's rule.
    Input: Matrix - M
    Output: The inverse of the Matrix - M.
    '''
    d = determinant(M)  # Simply returns the determinant of the matrix M.
    counter = 1
    array = []
    for line in M:  # This for loop simply creates a co-factor of Matrix M and puts it in a list.
        y = []
        for item in line:
            if counter % 2 == 0:
                x = -item
            else:
                x = item
            counter += 1
            y.append(x)
        array.append(y)
    cf = np.matrix(array)          # Translating the list into a matrix.
    adj = np.matrix.transpose(cf)  # Transposing the matrix.
    inv = (1/d) * adj
    return inv
OUTPUT:
via inverse(M):
[[ 0.0952381 -0.04761905 0.23809524],
[-0.07142857 0.02380952 -0.16666667],
[ 0.21428571 -0.19047619 0.11904762]]
via built-in numpy inverse function:
[[-1.21428571 1.14285714 0.35714286]
[ 1.66666667 -1.66666667 -0.33333333]
[ 0.0952381 0.04761905 -0.04761905]]
As you can see, some of the numbers match, and I'm just not sure why the answer isn't exact, as I thought I was applying the formula correctly.
Your co-factor matrix calculation isn't correct.
def inverse(M):
    d = np.linalg.det(M)
    cf_mat = []
    for i in range(M.shape[0]):
        for j in range(M.shape[1]):
            # for each position we need to calculate the determinant
            # of the submatrix without the current row and column,
            # and multiply it by the position coefficient
            coef = (-1) ** (i + j)
            new_mat = []
            for i1 in range(M.shape[0]):
                for j1 in range(M.shape[1]):
                    if i1 != i and j1 != j:
                        new_mat.append(M[i1, j1])
            new_mat = np.array(new_mat).reshape(
                (M.shape[0] - 1, M.shape[1] - 1))
            new_mat_det = np.linalg.det(new_mat)
            cf_mat.append(new_mat_det * coef)
    cf_mat = np.array(cf_mat).reshape(M.shape)
    adj = np.matrix.transpose(cf_mat)
    inv = (1 / d) * adj
    return inv
This code isn't very efficient, but it shows how the co-factor matrix should be calculated. More information and the exact formula can be found on the Wikipedia page on the adjugate matrix.
Output matrix:
[[-1.21428571 1.14285714 0.35714286]
[ 1.66666667 -1.66666667 -0.33333333]
[ 0.0952381 0.04761905 -0.04761905]]
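A quick sanity check, assuming M is the 3x3 array from the question and inverse is the corrected function above:

inv_M = inverse(M)
print(np.allclose(inv_M, np.linalg.inv(M)))  # expect True
print(np.allclose(inv_M @ M, np.eye(3)))     # A^-1 A should be the identity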

Euclidean distance for two multidimensional arrays

I would like to find the Euclidean distance between two numpy.ndarrays.
import numpy as np

lower_boundary = 0
upper_boundary = 1
n = 4            # dimension
sample_size = 3

np.random.seed(9001)  # set the seed to yield reproducible results

X2 = np.random.uniform(low=lower_boundary, high=upper_boundary, size=(sample_size, n))
Y2 = np.random.uniform(low=lower_boundary, high=upper_boundary, size=(sample_size, n))

print('X2: ', X2)
print('Y2: ', Y2)
How can I implement this calculation from scratch, using np.sum and np.sqrt, instead of importing euclidean_distances from sklearn.metrics.pairwise?
Thanks to all.
Let the Euclidean distance be D(i,j). Then D(i,j) corresponds to the pairwise distance between row i in X2 and row j in Y2. In this case, the distance matrix will be 3 by 3.
final_sum = np.zeros([sample_size, sample_size])
for row_inX in range(0, sample_size):
    for row_inY in range(0, sample_size):
        final_sum[row_inX][row_inY] = np.sqrt(np.sum((X2[row_inX] - Y2[row_inY])**2))
print(final_sum)
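The same 3x3 distance matrix can also be computed without explicit loops via broadcasting, still using only np.sum and np.sqrt (a sketch using the X2/Y2 arrays defined above):

# (3, 1, 4) - (1, 3, 4) broadcasts to (3, 3, 4); sum the squares over the last axis
dist = np.sqrt(np.sum((X2[:, None, :] - Y2[None, :, :]) ** 2, axis=-1))
print(dist)  # dist[i, j] == ||X2[i] - Y2[j]||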

Solving CVXPY Matrix Optimization Linear Programming

I'm trying to solve for the ideal matrix X in the following linear program setup:
X = N by T matrix which is our variable. For simplicity, let's set N to 4 and T to 3.
X_column_sum = 1 by T matrix where each column value is the sum of all values of the corresponding column in X
R = N by 1 matrix with randomly determined values
G = constant (let's set to 100 for simplicity)
d = 1 by T matrix whose values lie in the range [0, G-1]
P = 1 by T matrix equal to X_column_sum + d
C = X dotted with P
I want to minimize the sum of the entries of C, while preserving the following constraints:
all values in X have to be >= 0
the sum of all values in each row of X has to be at least equal to the corresponding value in R
I tried the following code using cvxpy in python, but to no avail:
from cvxpy import *

X = Variable(N, T)
G = 100
d = np.random.randn(1, T)
d *= G - 1

X_column_sum = cumsum(X, axis=0)
P = cost_matrix_cars + d

R = matrix([[10]] * N)  # all set to 10 for testing

objective = Minimize(sum_entries(X @ P))  # think this is good
constraints = [0 <= X, cumsum(X, axis=0) >= R]

prob = Problem(objective, constraints)
print("Optimal value", prob.solve())
print("Optimal X is", X.value)  # A numpy matrix.
