I am trying to write a function that returns an np.array of size nx x ny containing a centered Gaussian distribution with mean mu and standard deviation sigma. It works in principle, as shown below, but the problem is that the result is not completely symmetric. This is not a problem for larger nx x ny, but for smaller ones it is obvious that something is not quite right in my implementation ...
For:
create2dGaussian(1, 1, 5, 5)
It outputs:
[[ 0. 0.2 0.3 0.1 0. ]
[ 0.2 0.9 1. 0.5 0. ]
[ 0.3 1. 1. 0.6 0. ]
[ 0.1 0.5 0.6 0.2 0. ]
[ 0. 0. 0. 0. 0. ]]
... which is not symmetric. For larger nx and ny a 3d plot looks perfectly fine/smooth but why are the detailed numerics not correct and how can I fix it?
import numpy as np

def create2dGaussian(mu, sigma, nx, ny):
    x, y = np.meshgrid(np.linspace(-nx/2, +nx/2+1, nx), np.linspace(-ny/2, +ny/2+1, ny))
    d = np.sqrt(x*x + y*y)
    g = np.exp(-((d - mu)**2 / (2.0 * sigma**2)))

    np.set_printoptions(precision=1, suppress=True)
    print(g.shape)
    print(g)

    return g
----- EDIT -----
While the solution described below fixes the problem mentioned in the headline (the non-symmetric distribution), this code also has some other issues, which are discussed here.
NumPy's linspace is inclusive of both edges by default, unlike range, so you don't need to add one to the right side. I'd also recommend only dividing by floats, just to be safe:
x, y = np.meshgrid(np.linspace(-nx/2.0, +nx/2.0,nx), np.linspace(-ny/2.0, +ny/2.0,ny))
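Putting that together, a minimal sketch of the corrected function (same logic as the question's code, only the linspace endpoints changed) now gives a symmetric result:

import numpy as np

def create2dGaussian(mu, sigma, nx, ny):
    # linspace includes both endpoints, so the grid is symmetric about 0
    x, y = np.meshgrid(np.linspace(-nx/2.0, +nx/2.0, nx), np.linspace(-ny/2.0, +ny/2.0, ny))
    d = np.sqrt(x*x + y*y)
    g = np.exp(-((d - mu)**2 / (2.0 * sigma**2)))
    return g

np.set_printoptions(precision=1, suppress=True)
print(create2dGaussian(1, 1, 5, 5))  # symmetric in both axes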
Say we have N color (RGB) images of size 100x100 stored in A[N][100][100][3].
So:
Channel 0 = R
Channel 1 = G
Channel 2 = B
What is the most efficient way of building some other channels using numpy? For example, let's define:
Channel 3 = R + G * 0.5
Channel 4 = If B > 128 Then 1 Else 0
Channel 5 = If R == 100 Then 1 Else 0
Channel 6 = If (R + G) > B Then 1 Else 0
In other words, we would like to get A[N][100][100][7] with the extra 4 channels built using the above rules for each pixel.
It seems there is no general method to vectorize such operations in NumPy, but I think there should be one for the simple case here. Moreover, what would be the fastest method when N is large (>10000)?
There are comparatively straightforward ways, for example:
rgb = np.random.random((1,2,2,3))
r,g,b = np.transpose(rgb, (3,0,1,2))
np.r_["-1, 4, 0", rgb, r+g*0.5, b>128, r==100, (r+g)>b]
# array([[[[ 0.64715017, 0.45204962, 0.28497451, 0.87317498, 0. , 0. , 1. ],
# [ 0.51238478, 0.62095329, 0.9339249 , 0.82286142, 0. , 0. , 1. ]],
# [[ 0.29647208, 0.81635033, 0.76079918, 0.70464724, 0. , 0. , 1. ],
# [ 0.3307639 , 0.1878836 , 0.04642399, 0.4247057 , 0. , 0. , 1. ]]]])
The r_ concatenation operator is a bit cryptic: if a string of three comma-separated integers is passed as the first argument, they specify the concatenation axis, the number of dimensions to promote the operands to, and the axis along which to align the unpadded dimensions.
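A minimal illustration of the directive used above (the shapes are arbitrary): a 3-D operand is promoted to 4-D by appending a size-1 axis, then everything is concatenated along the last axis:

import numpy as np

# "-1, 4, 0": concatenate along axis -1, promote operands to 4 dimensions,
# align existing axes starting at position 0 (size-1 axes are appended at the end)
out = np.r_["-1, 4, 0", np.zeros((1, 2, 2, 3)), np.ones((1, 2, 2))]
print(out.shape)  # (1, 2, 2, 4)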
One could probably save a bit of peak memory by preallocating and computing the intermediates sequentially.
Speed-wise I don't see any obvious improvements over the above.
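A minimal sketch of that preallocation idea, assuming A has shape (N, 100, 100, 3) as in the question (the small N and the dtype here are just for illustration):

import numpy as np

N = 4  # small N just for the sketch
A = np.random.randint(0, 256, (N, 100, 100, 3)).astype(np.float64)

out = np.empty(A.shape[:-1] + (7,), dtype=A.dtype)
out[..., :3] = A                            # channels 0-2: R, G, B
r, g, b = A[..., 0], A[..., 1], A[..., 2]   # views, no copies
out[..., 3] = r + g * 0.5                   # channel 3
out[..., 4] = b > 128                       # channel 4 (booleans become 0.0/1.0)
out[..., 5] = r == 100                      # channel 5
out[..., 6] = (r + g) > b                   # channel 6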
It has been a while since I have done this, so I am a bit rusty, but the problem is:
max c^T x
s.t. Ax <= b
And I have my A matrix of constraints, which is 1448x1359:
[[ 1. 1. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 1. 1. 1.]]
Then I have my binding b (1448x1):
[ 1. 1. 7. ..., 2. 1. 2.]
And my objective function to be maximised which is a vector of ones (1359,1).
Now in other packages my maximised objective function is 841, however using linprog:
res = linprog(c=OBJ_N, A_ub=A, b_ub=b, options={"disp": True})
It optimised successfully to -0.0, so I wonder if I'm using the right command in Python and have my constraints the right way around?
Edit: OK, that makes sense; it was trying to minimise. I have rewritten it now (swapped c and b and transposed A to minimise instead).
# (max c^T x s.t. Ax <= b) = (min b^T y s.t. A^T y = c, y >= 0)
# (i): minimise number of shops no bounds
ID = np.ones(len(w[0]))
print(ID)
print(ID.shape) #1359
At = A.transpose()
need_divest = (A.dot(ID)) - 1
print(need_divest)
print(need_divest.shape) #1448
res = linprog(c=need_divest, A_eq=At, b_eq=ID, options={"disp": True})
print(res)
However, I get "message: 'Optimization failed. Unable to find a feasible starting point.'"
I guess you are probably minimizing instead of maximizing your objective function.
Try this, inserting a - in front of your objective function coefficients:
res = linprog(c=-OBJ_N, A_ub=A, b_ub=b, options={"disp": True})
Your result should then be -841.
This works simply because:
min(f(x)) = -max(-f(x))
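So after flipping the sign of c, flip the sign of the reported optimum to recover the maximum. A minimal sketch reusing OBJ_N, A, and b from the question (res.fun is the minimised value that linprog reports):

res = linprog(c=-OBJ_N, A_ub=A, b_ub=b, options={"disp": True})
print(-res.fun)  # recovers the maximised objective, 841 in your case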
Currently I'm trying to solve the generalized eigenvalue problem in NumPy for two symmetric matrices, and I've been running into massive trouble: I expect all eigenvalues to be positive, but eigh returns several very large numbers that are not all positive, while eig returns the correct, expected values (but is, of course, very, very slow).
In this case, note that K is symmetric as expected from its construction (here is the code in question):
# Calculate K matrix (<i|pHp|j> in the LGL-nodes basis)
for i in range(Ne):
    idx_s, idx_e = i*(Np-1), i*(Np-1)+Np
    K[idx_s:idx_e, idx_s:idx_e] += dmat.T.dot(diag(w*peq[idx_s:idx_e])).dot(dmat)

# Re-make matrix for efficient vector products
K = sparse.csr_matrix(K)

# Make matrix for <i|p|j> in the LGL basis as efficient diagonal sparse matrix
S = sparse.diags(peq*w_d, 0)

# Solve the generalized eigenvalue problem: Kc = lSc for hermitian matrices K and S
lQ, Q = linalg.eigh(K.todense(), S.todense())
_lQ, _Q = linalg.eig(K.todense(), S.todense())

lQ.sort()
_lQ.sort()

if not allclose(lQ, _lQ):
    print('Literally why')
    print(lQ)
    print(_lQ)
    return
For testing, dmat is defined as
array([[ -896. , 1212.00631086, -484.43454844, 275.06612251,
-179.85209531, 124.26620323, -83.05199285, 32. ],
[ -205.43460499, 0. , 290.78944413, -135.17191772,
82.83085126, -55.64467829, 36.70818656, -14.07728095],
[ 50.7185076 , -179.61445086, 0. , 184.03311398,
-87.85829324, 54.08144362, -34.37053351, 13.01021241],
[ -23.81762789, 69.05246008, -152.20398294, 0. ,
152.89115899, -72.66291308, 42.31407046, -15.57316561],
[ 15.57316561, -42.31407046, 72.66291308, -152.89115899,
0. , 152.20398294, -69.05246008, 23.81762789],
[ -13.01021241, 34.37053351, -54.08144362, 87.85829324,
-184.03311398, 0. , 179.61445086, -50.7185076 ],
[ 14.07728095, -36.70818656, 55.64467829, -82.83085126,
135.17191772, -290.78944413, 0. , 205.43460499],
[ -32. , 83.05199285, -124.26620323, 179.85209531,
-275.06612251, 484.43454844, -1212.00631086, 896. ]])
And all of w, w_d, and peq are essentially arbitrary positive-valued arrays. w_d and w are of the same order (~1e-1), and peq ranges over ~1e-10 to 1e1.
Some of the output I'm getting is
Literally why
[ -6.25540943e+07 -4.82660391e+07 -2.62629052e+07 ..., 1.07960873e+10
1.07967334e+10 4.26007915e+10]
[ -5.25462340e-12+0.j 4.62614812e-01+0.j 1.23357898e+00+0.j ...,
2.17613917e+06+0.j 1.07967334e+10+0.j 4.26007915e+10+0.j]
EDIT:
Here's a self-contained version of the code for easier debugging
import numpy as np
from math import *
from scipy import sparse, linalg
# Variable declarations and such (pre-computed)
Ne, Np = 256, 8
N = Ne*Np - Ne + 1
domain_size = 4/Ne
x = np.array([-0.015625 , -0.01362094, -0.00924532, -0.0032703 , 0.0032703 ,
0.00924532, 0.01362094, 0.015625 ])
w = np.array([ 0.00055804, 0.00329225, 0.00533004, 0.00644467, 0.00644467,
0.00533004, 0.00329225, 0.00055804])
dmat = np.array([[ -896. , 1212.00631086, -484.43454844, 275.06612251,
-179.85209531, 124.26620323, -83.05199285, 32. ],
[ -205.43460499, 0. , 290.78944413, -135.17191772,
82.83085126, -55.64467829, 36.70818656, -14.07728095],
[ 50.7185076 , -179.61445086, 0. , 184.03311398,
-87.85829324, 54.08144362, -34.37053351, 13.01021241],
[ -23.81762789, 69.05246008, -152.20398294, 0. ,
152.89115899, -72.66291308, 42.31407046, -15.57316561],
[ 15.57316561, -42.31407046, 72.66291308, -152.89115899,
0. , 152.20398294, -69.05246008, 23.81762789],
[ -13.01021241, 34.37053351, -54.08144362, 87.85829324,
-184.03311398, 0. , 179.61445086, -50.7185076 ],
[ 14.07728095, -36.70818656, 55.64467829, -82.83085126,
135.17191772, -290.78944413, 0. , 205.43460499],
[ -32. , 83.05199285, -124.26620323, 179.85209531,
-275.06612251, 484.43454844, -1212.00631086, 896. ]])
# More declarations
x_d = np.zeros(N)
w_d = np.zeros(N)
dmat_d = np.zeros((N, N))
for i in range(Ne):
    x_d[i*(Np-1):i*(Np-1)+Np] = x + i*domain_size
    w_d[i*(Np-1):i*(Np-1)+Np] += w
    dmat_d[i*(Np-1):i*(Np-1)+Np, i*(Np-1):i*(Np-1)+Np] += dmat
peq = (np.cos((x_d-2)*pi/4))**2
# Normalization
peq = peq/np.sum(w_d*peq)
p0 = np.maximum(peq, 1e-10)
p0 /= np.sum(p0*w_d)
# Make efficient matrix that can be built
K = sparse.lil_matrix((N, N))
# Calculate K matrix (<i|pHp|j> in the LGL-nodes basis)
for i in range(Ne):
    idx_s, idx_e = i*(Np-1), i*(Np-1)+Np
    K[idx_s:idx_e, idx_s:idx_e] += dmat.T.dot(np.diag(w*p0[idx_s:idx_e])).dot(dmat)
# Re-make matrix for efficient vector products
K = sparse.csr_matrix(K)
# Make matrix for <i|p|j> in the LGL basis as efficient diagonal sparse matrix
S = sparse.diags(p0*w_d, 0)
# Solve the generalized eigenvalue problem: Kc = lSc for hermitian matrices K and S
lQ, Q = linalg.eigh(K.todense(), S.todense())
_lQ, _Q = linalg.eig(K.todense(), S.todense())
lQ.sort()
_lQ.sort()
if not np.allclose(lQ, _lQ):
    print('Literally why')
    print(lQ)
    print(_lQ)
EDIT2: This is really odd. Running all of the NumPy/SciPy tests on my machine, I receive no errors. But even running the simple test (with large enough matrices) as
import numpy as np
from scipy import linalg
M = np.random.random((1000,1000))
M += M.T
np.allclose(sorted(linalg.eigh(M)[0]), sorted(linalg.eig(M)[0]))
fails on my machine. Running the same test with a 50x50 matrix does work, though, even after rebuilding the SciPy/NumPy stack and passing all unit tests.
EDIT3: Actually, this seems to fail everywhere, after testing it on a cluster computer. I'm not sure why.
The test above fails due to the in-place behaviour of += together with the fact that .T is a view rather than a copy: M += M.T overwrites entries of M while the transposed view still reads from them, so the result is not actually symmetric, and eigh (which trusts the symmetry and reads only one triangle) disagrees with eig.
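A minimal sketch of the repaired test, assuming the goal is just a genuinely symmetric random matrix (note that eig returns complex eigenvalues even for symmetric input, so the real parts are compared):

import numpy as np
from scipy import linalg

M = np.random.random((1000, 1000))
M = M + M.T  # out-of-place sum: M.T is read in full before M is overwritten
print(np.allclose(sorted(linalg.eigh(M)[0]), sorted(linalg.eig(M)[0].real)))  # True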
EDIT: Paul has solved this one below. Thanks!
I'm trying to resample (upscale) a 3x3 matrix to 5x5, filling in the intermediate points with either interpolate.interp2d or interpolate.RectBivariateSpline (or whatever works).
If there's a simple, existing function to do this, I'd like to use it, but I haven't found it yet. For example, a function that would work like:
# upscale 2x2 to 4x4
matrixSmall = ([[-1,8],[3,5]])
matrixBig = matrixSmall.resample(4,4,cubic)
So, if I start with a 3x3 matrix / array:
0,-2,0
-2,11,-2
0,-2,0
I want to compute a new 5x5 matrix ("I" meaning interpolated value):
0, I[1,0], -2, I[3,0], 0
I[0,1], I[1,1], I[2,1], I[3,1], I[4,1]
-2, I[1,2], 11, I[3,2], -2
I[0,3], I[1,3], I[2,3], I[3,3], I[4,3]
0, I[1,4], -2, I[3,4], 0
I've been searching and reading up and trying various different test code, but I haven't quite figured out the correct syntax for what I'm trying to do. I'm also not sure if I need to be using meshgrid, mgrid or linspace in certain lines.
EDIT: Fixed and working, thanks to Paul
import numpy, scipy
from scipy import interpolate
kernelIn = numpy.array([[0,-2,0],
[-2,11,-2],
[0,-2,0]])
inKSize = len(kernelIn)
outKSize = 5
kernelOut = numpy.zeros((outKSize,outKSize),numpy.uint8)
x = numpy.array([0,1,2])
y = numpy.array([0,1,2])
z = kernelIn
xx = numpy.linspace(x.min(),x.max(),outKSize)
yy = numpy.linspace(y.min(),y.max(),outKSize)
newKernel = interpolate.RectBivariateSpline(x,y,z, kx=2,ky=2)
kernelOut = newKernel(xx,yy)
print(kernelOut)
Only two small problems:
1) Your xx,yy extends beyond the bounds of x,y (you could extrapolate, but I'm guessing you don't want to).
2) Your sample size is too small for the default kx and ky of 3. Lower them to 2 and get a quadratic fit instead of cubic.
import numpy, scipy
from scipy import interpolate
kernelIn = numpy.array([
[0,-2,0],
[-2,11,-2],
[0,-2,0]])
inKSize = len(kernelIn)
outKSize = 5
kernelOut = numpy.zeros((outKSize,outKSize),numpy.uint8)
x = numpy.array([0,1,2])
y = numpy.array([0,1,2])
z = kernelIn
xx = numpy.linspace(x.min(),x.max(),outKSize)
yy = numpy.linspace(y.min(),y.max(),outKSize)
newKernel = interpolate.RectBivariateSpline(x,y,z, kx=2,ky=2)
kernelOut = newKernel(xx,yy)
print(kernelOut)
##[[ 0. -1.5 -2. -1.5 0. ]
## [ -1.5 5.4375 7.75 5.4375 -1.5 ]
## [ -2. 7.75 11. 7.75 -2. ]
## [ -1.5 5.4375 7.75 5.4375 -1.5 ]
## [ 0. -1.5 -2. -1.5 0. ]]
If you are using scipy already, I think scipy.ndimage.interpolation.zoom can do what you need:
import numpy
import scipy.ndimage
a = numpy.array([[0.,-2.,0.], [-2.,11.,-2.], [0.,-2.,0.]])
out = numpy.round(scipy.ndimage.interpolation.zoom(input=a, zoom=5./3, order=2), 1)
print(out)
#[[ 0. -1. -2. -1. 0. ]
# [ -1. 1.8 4.5 1.8 -1. ]
# [ -2. 4.5 11. 4.5 -2. ]
# [ -1. 1.8 4.5 1.8 -1. ]
# [ 0. -1. -2. -1. 0. ]]
Here the "zoom factor" is 5./3 because we are going from a 3x3 array to a 5x5 array. If you read the docs, it says that you can also specify the zoom factor independently for the two axes, which means you can upscale non-square matrices as well. By default, it uses third order spline interpolation, which I am not sure is best.
I tried it on some images and it works nicely.