Coarsen xarray DataArray with weighted mean - python

Let's say I have some DataArray:
import numpy as np
import xarray as xr

da = xr.DataArray(
    data=np.random.random((25, 25)),
    dims=["x", "y"],
    coords=dict(
        x=np.arange(25),
        y=np.arange(25),
    ),
)
I want to downsample this array to 5x5 chunks. I can do this with the coarsen function:
da_coarse = da.coarsen(x=5,y=5).mean()
As I understand it, this is basically splitting the DataArray into 5x5 "chunks" and averaging each chunk into one value. However, what I want to do is take a weighted average of this 5x5 group, so the center points are weighted more heavily in the final mean than the edge points.
I can create a gaussian kernel with weights like this:
import scipy.stats as st

def gkern(kernlen=21, sig=3):
    """Returns a 2D Gaussian kernel."""
    x = np.linspace(-(kernlen/2)/sig, (kernlen/2)/sig, kernlen+1)
    kern1d = np.diff(st.norm.cdf(x))
    kern2d = np.outer(kern1d, kern1d)
    return kern2d / kern2d.sum()

window = gkern(5)
Where window is now a 5x5 array with the desired weights for each point. However, I am unsure how to implement this window/kernel when doing the averaging in the coarsen function.
What is the best way to do this?

One way to do this is through DataArrayCoarsen.construct, which allows you to more easily operate on individual windows at a time:
weights = xr.DataArray(gkern(5), dims=["x_window", "y_window"])

windowed_da = da.coarsen(x=5, y=5).construct(
    x=("x_coarse", "x_window"),
    y=("y_coarse", "y_window"),
)

coarsened = (weights * windowed_da).sum(["x_window", "y_window"]) / weights.sum()
windowed_da is the original DataArray, but reshaped into individual windows of the size specified in the coarsen step.
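As a quick sanity check (a sketch, assuming the 25x25 da from the question), the constructed array and the weighted result should have the expected dimensions:

print(windowed_da.sizes)  # x_coarse: 5, x_window: 5, y_coarse: 5, y_window: 5
print(coarsened.sizes)    # x_coarse: 5, y_coarse: 5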

Related

Faster method for creating spatially correlated noise?

In my current project, I am interested in calculating spatially correlated noise for a large model grid. The noise should be strongly correlated over short distances, and uncorrelated over large distances. My current approach uses multivariate Gaussians with a covariance matrix specifying the correlation between all cells.
Unfortunately, this approach is extremely slow for large grids. Do you have a recommendation of how one might generate spatially correlated noise more efficiently? (It doesn't have to be Gaussian)
import numpy as np
import scipy.stats
import scipy.spatial.distance
import matplotlib.pyplot as plt

# Create a 50-by-50 grid; my actual grid will be a LOT larger
X, Y = np.meshgrid(np.arange(50), np.arange(50))

# Create a vector of cells
XY = np.column_stack((X.flatten(), Y.flatten()))

# Calculate a matrix of distances between the cells
dist = scipy.spatial.distance.pdist(XY)
dist = scipy.spatial.distance.squareform(dist)

# Convert the distance matrix into a covariance matrix
correlation_scale = 50
cov = np.exp(-dist**2 / (2 * correlation_scale))  # this will do as a covariance matrix

# Sample some noise !slow!
noise = scipy.stats.multivariate_normal.rvs(
    mean=np.zeros(50**2),
    cov=cov)

# Plot the result
plt.contourf(X, Y, noise.reshape((50, 50)))
Faster approach:

1. Generate spatially uncorrelated noise.
2. Blur with a Gaussian filter kernel to make the noise spatially correlated.

Since the filter kernel is rather large, it is a good idea to use a convolution method based on the Fast Fourier Transform.
import numpy as np
import scipy.signal
import matplotlib.pyplot as plt
# Compute filter kernel with radius correlation_scale (can probably be a bit smaller)
correlation_scale = 50
x = np.arange(-correlation_scale, correlation_scale)
y = np.arange(-correlation_scale, correlation_scale)
X, Y = np.meshgrid(x, y)
dist = np.sqrt(X*X + Y*Y)
filter_kernel = np.exp(-dist**2/(2*correlation_scale))
# Generate n-by-n grid of spatially correlated noise
n = 50
noise = np.random.randn(n, n)
noise = scipy.signal.fftconvolve(noise, filter_kernel, mode='same')
plt.contourf(np.arange(n), np.arange(n), noise)
plt.savefig("fast.png")
[Image: sample output of this method]
[Image: sample output of the slow method from the question]
[Image: image size vs. running time]
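Note that the convolution changes the scale of the noise. If the amplitude matters for your application, a minimal sketch of rescaling the blurred noise back to zero mean and unit standard deviation (assuming noise from the snippet above):

noise = (noise - noise.mean()) / noise.std()  # rescale to zero mean, unit std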

Select values from one array using values of another array in numpy

In numpy, I have a 3D array. Along the 0 axis, it stores multiple 2D planes. I need to get the gradient of each of these planes, select the median gradient magnitude at each point across these planes, and hence isolate the corresponding x and y gradient components. But I'm having difficulty carrying this out properly.
So far, to get the gradient and median, I have:
img_l = ...  # my 3D array of 2D planes

grad = np.gradient(img_l, axis=[1, 2])  # gradient of each plane; a list with 2 elements
mag_grad = np.sqrt(grad[0]**2 + grad[1]**2)  # magnitude of the gradient in each case
med = np.median(mag_grad, axis=0)  # median value at each point across the planes
Then to select the correct x & y components of the gradient, I use:
pos = (mag_grad == med).argmax(axis=0)  # index of the first occurrence of the median value along axis=0
G = np.stack([np.zeros(med.shape), np.zeros(med.shape)], axis=0)  # will hold the y and x median components of the gradient, respectively
for i in range(med.shape[0]):
    for j in range(med.shape[1]):
        # manually select the median y and x components of the gradient and save them to G
        G[0, i, j], G[1, i, j] = grad[0][pos[i, j], i, j], grad[1][pos[i, j], i, j]
I believe the second code block works correctly, but it is very inelegant: because I couldn't find a way to do this in NumPy, I had to use Python loops, which add a large amount of overhead. In addition, since this kind of operation must come up frequently, I suspect NumPy has a built-in way to do it.
How can I implement this code more effectively and elegantly?
Using itertools to build the index arrays can make this more efficient/elegant, since it avoids the explicit double loop:

import itertools

idxs = np.array(list(itertools.product(range(med.shape[0]), range(med.shape[1]))))
rows, cols = idxs[:, 0], idxs[:, 1]
G[0, rows, cols] = grad[0][pos[rows, cols], rows, cols]
G[1, rows, cols] = grad[1][pos[rows, cols], rows, cols]
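If you want to avoid building the index list entirely, here is a fully vectorized sketch using np.take_along_axis (assuming grad is the two-element list returned by np.gradient and pos is computed as above); it replaces both the G construction and the loop:

grad_arr = np.stack(grad)            # shape (2, n_planes, H, W)
sel = pos[None, None, :, :]          # shape (1, 1, H, W); broadcasts over the first axis
G = np.take_along_axis(grad_arr, sel, axis=1)[:, 0]  # shape (2, H, W)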

coarse graining a graph (networkx)

I am trying to coarse-grain a large network into a smaller network using predefined node labels, say:

large_network = np.random.rand(100, 100)
labels = [1, 1, 1, 1,
          5, 5, 5, 5, 5, 5, 5, 5,
          0, 0, 0, 0, 0, ...]  # length 100

For example, we have 10 regions, each containing a few nodes.
The labels are something like the membership list in the network community detection algorithms in networkx, which tells which community each node belongs to, but here I am defining it manually. I then need to calculate a new, reduced adjacency matrix, say [10x10], where the average weight of the edges between regions A and B, w_{AB} = mean(edges(A, B)), determines the weight of the edge between those two regions.
One way is to loop over the edges of each node and, if the two endpoints of an edge lie in the membership lists of two regions, add the edge to the weighted sum.
Am I doing this right?
Is there a better, more straightforward method?
You could use coo_matrix from scipy.sparse to do the job for you. The nice thing is that this approach can readily be extended to sparse network representations.
import numpy as np
from scipy.sparse import coo_matrix

# set parameters
N = 100  # number of nodes
M = 10   # number of types / regions

# initialise random network and random node labels
weights = np.random.rand(N, N)  # a.k.a. "large_network"
labels = np.random.randint(0, M, size=N)

# sum the weights by connection type; coo_matrix sums duplicate entries
indices = np.tile(labels, (N, 1))  # create an N x N matrix of labels
numerator = coo_matrix((weights.ravel(), (indices.ravel(), indices.transpose().ravel())), shape=(M, M)).todense()

# count the number of weights by connection type
adjacency = (weights > 0.).astype(int)  # np.int is deprecated; plain int works
denominator = coo_matrix((adjacency.ravel(), (indices.ravel(), indices.transpose().ravel())), shape=(M, M)).todense()

# normalise the sum of weights by the counts
small_network = numerator / denominator
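To convince yourself that the trick works, you can compare the result against a naive double loop (a sketch; note that with the index order above, entry [a, b] averages the weights in rows labelled b and columns labelled a, which only matters if the network is asymmetric):

expected = np.zeros((M, M))
for a in range(M):
    for b in range(M):
        expected[a, b] = weights[np.ix_(labels == b, labels == a)].mean()
np.testing.assert_allclose(np.asarray(small_network), expected)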

Is this the correct way of whitening an image in python?

I am trying to zero-center and whiten the CIFAR-10 dataset, but the result I get looks like random noise!
The CIFAR-10 dataset contains 60,000 color images of size 32x32; the training set contains 50,000 images and the test set contains 10,000.
The following snippets of code show the process I used to whiten the dataset:
# zero-center
mean = np.mean(data_train, axis=(0, 2, 3))
for i in range(data_train.shape[0]):
    for j in range(data_train.shape[1]):
        data_train[i, j, :, :] -= mean[j]

first_dim = data_train.shape[0]   # 50,000
second_dim = data_train.shape[1] * data_train.shape[2] * data_train.shape[3]  # 3*32*32
shape = (first_dim, second_dim)   # (50000, 3072)

# compute the covariance matrix
cov = np.dot(data_train.reshape(shape).T, data_train.reshape(shape)) / data_train.shape[0]

# compute the SVD factorization of the data covariance matrix
U, S, V = np.linalg.svd(cov)
print 'cov.shape = ', cov.shape
print U.shape, S.shape, V.shape

Xrot = np.dot(data_train.reshape(shape), U)  # decorrelate the data
Xwhite = Xrot / np.sqrt(S + 1e-5)
print Xwhite.shape

data_whitened = Xwhite.reshape(-1, 32, 32, 3)
print data_whitened.shape
outputs:
cov.shape = (3072L, 3072L)
(3072L, 3072L) (3072L,) (3072L, 3072L)
(50000L, 3072L)
(50000L, 32L, 32L, 3L)
(32L, 32L, 3L)
and trying to show the resulting image :
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.misc import imshow
print data_whitened[0].shape
fig = plt.figure()
plt.subplot(221)
plt.imshow(data_whitened[0])
plt.subplot(222)
plt.imshow(data_whitened[100])
plt.show()
By the way, data_train[0].shape is (3,32,32),
but if I reshape the whitened image according to that, I get
TypeError: Invalid dimensions for image data
Could this be a visualization issue only? If so, how can I make sure that's the case?
Update :
Thanks to @AndrasDeak, I fixed the visualization code this way, but the output still looks random:
data_whitened = Xwhite.reshape(-1,3,32,32).transpose(0,2,3,1)
print data_whitened.shape
fig = plt.figure()
plt.subplot(221)
plt.imshow(data_whitened[0])
Update 2:
This is what I get when I run some of the commands given below:
As can be seen, toimage can show the image just fine, but trying to reshape it messes up the image.
# output is of shape (N, 3, 32, 32)
X = X.reshape((-1,3,32,32))
# output is of shape (N, 32, 32, 3)
X = X.transpose(0,2,3,1)
# put data back into a design matrix (N, 3072)
X = X.reshape(-1, 3072)
plt.imshow(X[6].reshape(32,32,3))
plt.show()
For some weird reason, this is what I got at first, but then after several tries it changed to the previous image.
Let's walk through this. As you point out, CIFAR contains images which are stored in a matrix; each image is a row, and each row has 3072 columns of uint8 numbers (0-255). Images are 32x32 pixels and pixels are RGB (three channel colour).
# https://www.cs.toronto.edu/~kriz/cifar.html
# wget https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
# tar xf cifar-10-python.tar.gz
import numpy as np
import cPickle

with open('cifar-10-batches-py/data_batch_1') as input_file:
    X = cPickle.load(input_file)

X = X['data']  # shape is (N, 3072)
It turns out that the columns are ordered a bit funny: all the red pixel values come first, then all the green pixels, then all the blue pixels. This makes it tricky to have a look at the images. This:
import matplotlib.pyplot as plt
plt.imshow(X[6].reshape(32,32,3))
plt.show()
gives this:
So, just for ease of viewing, let's shuffle the dimensions of our matrix around with reshape and transpose:
# output is of shape (N, 3, 32, 32)
X = X.reshape((-1,3,32,32))
# output is of shape (N, 32, 32, 3)
X = X.transpose(0,2,3,1)
# put data back into a design matrix (N, 3072)
X = X.reshape(-1, 3072)
Now:
plt.imshow(X[6].reshape(32,32,3))
plt.show()
gives:
OK, on to ZCA whitening. We're frequently reminded that it's super important to zero-center the data before whitening it, and at this point I have an observation about the code you included. From what I can tell, computer vision treats colour channels as just another feature dimension; there's nothing special about the separate RGB values in an image, just as there's nothing special about the separate pixel values. They're all just numeric features. So, whereas you compute the average pixel value while respecting colour channels (i.e., your mean is a tuple of r, g, b values), we'll just compute the average image value. Note that X is a big matrix with N rows and 3072 columns. We'll treat every column as being "the same kind of thing" as every other column.
# zero-centre the data (this calculates the mean separately across
# pixels and colour channels)
X = X - X.mean(axis=0)
At this point, let's also do Global Contrast Normalization, which is quite often applied to image data. I'll use the L2 norm, which makes every image have vector magnitude 1:
X = X / np.sqrt((X ** 2).sum(axis=1))[:,None]
One could easily use something else, like the standard deviation (X = X / np.std(X, axis=0)) or min-max scaling to some interval like [-1,1].
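For instance, a minimal sketch of the min-max variant (assuming the design matrix X from above), scaling each image row to [-1, 1]:

mn = X.min(axis=1, keepdims=True)
mx = X.max(axis=1, keepdims=True)
X_scaled = 2 * (X - mn) / (mx - mn) - 1  # each row now spans [-1, 1]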
Nearly there. At this point, we haven't greatly modified our data, since we've just shifted and scaled it (a linear transform). To display it, we need to get image data back into the range [0,1], so let's use a helper function:
def show(i):
    i = i.reshape((32, 32, 3))
    m, M = i.min(), i.max()
    plt.imshow((i - m) / (M - m))
    plt.show()

show(X[6])
The peacock looks slightly brighter here, but that's just because we've stretched its pixel values to fill the interval [0,1]:
ZCA whitening:
# compute the covariance of the image data
cov = np.cov(X, rowvar=True)   # cov is (N, N)

# singular value decomposition
U, S, V = np.linalg.svd(cov)   # U is (N, N), S is (N,)

# build the ZCA matrix
epsilon = 1e-5
zca_matrix = np.dot(U, np.dot(np.diag(1.0/np.sqrt(S + epsilon)), U.T))

# transform the image data; zca_matrix is (N, N)
zca = np.dot(zca_matrix, X)    # zca is (N, 3072)
Taking a look (show(zca[6])):
Now the peacock definitely looks different. You can see that the ZCA has rotated the image through colour space, so it looks like a picture on an old TV with the Tone setting out of whack. Still recognisable, though.
Presumably because of the epsilon value I used, the covariance of my transformed data isn't exactly the identity, but it's fairly close:
>>> (np.cov(zca, rowvar=True).argmax(axis=1) == np.arange(zca.shape[0])).all()
True
Update 29 January
I'm not entirely sure how to sort out the issues you're having; your trouble seems to lie in the shape of your raw data at the moment, so I would advise you to sort that out first before you try to move on to zero-centring and ZCA.
On the one hand, the first plot of the four plots in your update looks good, suggesting that you've loaded up the CIFAR data in the correct way. The second plot is produced by toimage, I think, which will automagically figure out which dimension has the colour data, which is a nice trick. On the other hand, the stuff that comes after that looks weird, so it seems something is going wrong somewhere. I confess I can't quite follow the state of your script, because I suspect you're working interactively (notebook), retrying things when they don't work (more on this in a second), and that you're using code that you haven't shown in your question. In particular, I'm not sure how you're loading the CIFAR data; your screenshot shows output from some print statements (Reading training data..., etc.), and then when you copy train_data into X and print the shape of X, the data has already been reshaped into (N, 3, 32, 32). Like I say, Update plot 1 would tend to suggest that the reshape has happened correctly. From plots 3 and 4, I think you're getting mixed up about matrix dimensions somewhere, so I'm not sure how you're doing the reshape and transpose.
Note that it's important to be careful with the reshape and transpose, for the following reason. The X = X.reshape(...) and X = X.transpose(...) code is modifying the matrix in place. If you do this multiple times (like by accident in the jupyter notebook), you will shuffle the axes of your matrix over and over, and plotting the data will start to look really weird. This image shows the progression, as we iterate the reshape and transpose operations:
This progression does not cycle back, or at least, it doesn't cycle quickly. Because of periodic regularities in the data (like the 32-pixel row structure of the images), you tend to get banding in these improperly reshape-transposed images. I'm wondering if that's what's going on in the third of your four plots in your update, which looks a lot less random than the images in the original version of your question.
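You can see the effect in miniature with a toy sketch (not the CIFAR data): apply the reshape/transpose pass twice to a small block, and the element order gets scrambled the second time:

A = np.arange(12).reshape(1, 3, 2, 2)               # toy (N, C, H, W) block
B = A.transpose(0, 2, 3, 1).reshape(1, -1)          # the intended single pass
C = B.reshape(1, 3, 2, 2).transpose(0, 2, 3, 1).reshape(1, -1)  # accidental second pass
print np.array_equal(B, C)                          # False: the axes are shuffled again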
The fourth plot of your update is a colour negative of the peacock. I'm not sure how you're getting that, but I can reproduce your output with:
plt.imshow(255 - X[6].reshape(32,32,3))
plt.show()
which gives:
One way you could get this is if you were using my show helper function and mixed up m and M, like this:

def show(i):
    i = i.reshape((32, 32, 3))
    m, M = i.min(), i.max()
    plt.imshow((i - M) / (m - M))  # this will produce a negative image
    plt.show()
I had the same issue: the resulting projected values were off. A float image is supposed to have values in [0, 1.0] for each channel, so a small helper to rescale for display is useful:

def toimage(data):
    min_ = np.min(data)
    max_ = np.max(data)
    return (data - min_) / (max_ - min_)
NOTICE: use this function only for visualization!
However, notice how the "decorrelation" or "whitening" matrix is computed by @wildwilhelm:
zca_matrix = np.dot(U, np.dot(np.diag(1.0/np.sqrt(S + epsilon)), U.T))
This is because U, the matrix of eigenvectors of the correlation matrix, comes from the SVD: with SVD(X) = U, S, V, U is the eigenbasis of X·Xᵀ, not of X itself (https://en.wikipedia.org/wiki/Singular-value_decomposition).
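Equivalently (a sketch, assuming the symmetric covariance matrix cov from above), the ZCA matrix can be built from an eigendecomposition, which makes the eigenbasis explicit:

# for a symmetric positive semi-definite matrix, np.linalg.eigh yields the
# same basis as np.linalg.svd(cov), up to ordering and signs
eigvals, eigvecs = np.linalg.eigh(cov)
epsilon = 1e-5
zca_matrix = np.dot(eigvecs, np.dot(np.diag(1.0 / np.sqrt(eigvals + epsilon)), eigvecs.T))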
As a final note, I would rather consider the pixels as the statistical units and the RGB channels as their modalities, instead of the images as statistical units and the pixels as modalities.
I've tried this on the CIFAR-10 database and it works quite nicely.
[Image example: top image has its RGB values "whitened"; bottom is the original]
[Image example 2: training and loss curves without the ZCA transform]
[Image example 3: training and loss curves with the ZCA transform]
If you want to linearly scale the image to have zero mean and unit norm, you can do the same image whitening as TensorFlow's tf.image.per_image_standardization. According to the documentation, you need to use the following formula to normalize each image independently:

(image - image_mean) / max(image_stddev, 1.0/sqrt(image_num_elements))

Keep in mind that the mean and the standard deviation should be computed over all values in the image, so we don't need to specify the axis/axes along which they are computed.
The way to implement this without TensorFlow is by using numpy as follows:
import math
import numpy as np
from PIL import Image
# open image
image = Image.open("your_image.jpg")
image = np.array(image)
# standardize image
mean = image.mean()
stddev = image.std()
adjusted_stddev = max(stddev, 1.0/math.sqrt(image.size))
standardized_image = (image - mean) / adjusted_stddev
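Since the standardized image has zero mean, it will contain negative values, so rescale it to [0, 1] before displaying it (a minimal sketch, in the spirit of the toimage helper above):

import matplotlib.pyplot as plt
# rescale purely for visualization
disp = (standardized_image - standardized_image.min()) / (standardized_image.max() - standardized_image.min())
plt.imshow(disp)
plt.show()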

numpy.interp & masked arrays

I am using a numpy masked array to perform some image processing. The mask is in place to handle NoData pixels which surround the image (a necessary border, as these are map-projected images with the origin in a NoData pixel).
Using the following code block, I am able to perform a gaussian stretch on an image.
def gaussian_stretch(input_array, array_mean, array_standard_deviation, number_of_bins, n):
    shape = input_array.shape
    input_array = input_array.flatten()

    # define a gaussian distribution, get binned GDF histogram
    array_standard_deviation *= n
    gdf = numpy.random.normal(array_mean, array_standard_deviation, 10000)
    hist, bins = numpy.histogram(gdf, number_of_bins, density=True)  # normed is deprecated
    cdf = hist.cumsum()
    cdf = 256 * cdf / cdf[-1]

    # interpolate and reshape
    input_array = numpy.interp(input_array, bins[:-1], cdf)
    input_array = input_array.reshape(shape)
    return input_array
If the image does not contain a NoData border, the stretch works as expected. On an image with a mask, the mask is ignored. Is this expected behavior? Any ideas on how to process only the unmasked data?
I have tried using input_array.compressed(), but this returns a 1D array of only the unmasked values. numpy.interp then fails, as expected, because of the size disparity between the arrays.
Finally, I understand that using numpy.random.normal will not always return a perfect Gaussian distribution, and I will add some margin-of-error constraints once the rest of the algorithm is functioning.
You can get the mask of input_array first and apply it to the result array, and use scipy.stats.norm to calculate the CDF of the normal distribution directly; alternatively, you can use scipy.special.erf() with the CDF formula of the normal distribution:
import scipy.stats as stats

def gaussian_stretch2(input_array, array_mean, array_standard_deviation, n):
    mask = input_array.mask
    dist = stats.norm(array_mean, array_standard_deviation * n)  # renamed from n to avoid shadowing the argument
    return numpy.ma.array(dist.cdf(input_array), mask=mask)
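A quick usage sketch (with a hypothetical masked test image, masking values below 10 as NoData):

import numpy.ma as ma

img = ma.masked_less(numpy.random.rand(25, 25) * 100, 10)  # hypothetical masked image
stretched = gaussian_stretch2(img, img.mean(), img.std(), 2)
print(stretched.mask.sum())  # the NoData mask survives the stretch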
