Lookup table in xarray? - python

So I want to merge two datasets: one is a single-band raster dataset that came from rioxarray.open_rasterio(), the other a lookup table with an index dim 'mukey'. The raster's cell values correspond to 'mukey' index values in the lookup table. The desired result is a dataset with x and y coords identical to the raster dataset, with variables 'n' and 'K' whose values are populated by joining on 'mukey'. If you are familiar with ArcGIS, this is analogous to joining a table to a raster by attribute.
xr.merge() and assign() don't quite seem to perform this operation, and working around it by converting to pandas or numpy hits memory issues on my 32 GB machine. Does xarray provide any support for this simple use case? Thanks,
import numpy as np
import pandas as pd
import xarray as xr

# 1-band, 4000 x 3000 raster of integer 'mukey' codes
data = np.abs(np.random.randn(12000000)).astype(np.int32).reshape(1, 4000, 3000)
raster = xr.DataArray(data, dims=['band', 'y', 'x'],
                      coords=[[1], np.arange(4000), np.arange(3000)])
raster = raster.to_dataset(name='mukey')
raster

# Lookup table keyed by 'mukey', with variables 'n' and 'K'
lookup = pd.DataFrame({'mukey': list(range(10)),
                       'n': np.random.randn(10),
                       'K': np.random.randn(10) * 2}).set_index('mukey').to_xarray()
lookup

You're looking for the advanced indexing with DataArrays feature of xarray.
You can provide a DataArray as a keyword argument to DataArray.sel or Dataset.sel - this will reshape the indexed array along the dimensions of the indexing array, based on the values of the indexing array. I think this is exactly what you're looking for in a "lookup table".
In your case:
lookup.sel(mukey=raster.mukey)
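A minimal, self-contained sketch of that lookup (array sizes shrunk so it runs instantly; otherwise the same names as in the question):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Small raster of integer 'mukey' codes in the range 0-9
data = np.random.randint(0, 10, size=(1, 4, 3)).astype(np.int32)
raster = xr.DataArray(
    data, dims=["band", "y", "x"],
    coords=[[1], np.arange(4), np.arange(3)],
).to_dataset(name="mukey")

# Lookup table indexed by 'mukey'
lookup = pd.DataFrame({
    "mukey": list(range(10)),
    "n": np.random.randn(10),
    "K": np.random.randn(10) * 2,
}).set_index("mukey").to_xarray()

# Vectorized indexing: the result is reshaped to the raster's band/y/x dims
result = lookup.sel(mukey=raster.mukey)
print(result.n.dims)  # ('band', 'y', 'x')
```

Because the indexer is a DataArray, the result inherits the raster's dims and coords, so 'n' and 'K' become rasters populated cell-by-cell from the table.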

Related

How do I append new data to the DataArray along existing dimension in-place?

I have data grouped into a 3-D DataArray named da with dimensions 'time', 'indicators', and 'coins', using Dask as the backend.
I need to select the data for a particular indicator, calculate a new indicator based on it, and append this newly calculated indicator to da along the indicators dimension under the new indicator's name (let's call it daily_return). In somewhat simplistic terms of a 2-D analogy, I need to perform something like calculating a new pandas DataFrame column based on its other columns, but in 3-D.
So far I've tried apply_ufunc() with both drop=False (then I retrieve a scalar indicators coordinate on the resulting DataArray) and drop=True (respectively, indicators is dropped), following the corresponding tutorial:
dr_func = lambda today, yesterday: today / yesterday - 1  # Assuming for simplicity that yesterday != 0
today_da = da.sel(indicators="price_daily_close_usd", drop=False)  # or drop=True
yesterday_da = da.shift(time=1).sel(indicators="price_daily_close_usd", drop=False)  # or drop=True
dr = xr.apply_ufunc(
    dr_func,
    today_da,
    yesterday_da,
    dask="allowed",
    dask_gufunc_kwargs={"allow_rechunk": True},
    vectorize=True,
)
Obviously, in case of drop=True I cannot concat da and dr DataArrays, since indicators are not present among dr's coordinates.
In turn, with drop=False I've managed to concat these DataArrays along indicators; however, the resulting indicators coord then contains two identically named values, specifically "price_daily_close_usd":
...while the second of them should be renamed to "daily_return".
I've also tried to extract the needed data from dr through .sel(), but failed due to the absence of an index along the indicators dimension (as far as I understand, it's not possible to set an index in this case, since this dimension is scalar):
dr.sel(indicators="price_daily_close_usd") # Would result in KeyError: "no index found for coordinate 'indicators'"
Moreover, the solution above is not done in-place - i.e. it creates a new combined DataArray instance instead of modifying da, whereas modifying da would be highly preferable.
How can I append new data to da along existing dimension, desirably in-place?
Loading all the data directly into RAM would hardly be possible due to its huge volumes, that's why Dask is being used.
I'm also not wedded to the DataArray data structure, and it would be no problem to switch to a Dataset if it has more suitable methods for solving my problem.
Xarray does not support appending in-place. Any change to the shape of your array will need to produce a new array.
If you want to work with a single array and know the size of the final array, you could generate an empty array and assign values based on coordinate labels.
I need to perform something like calculating a new pandas DataFrame column based on its other columns, but in 3-D.
Xarray's Dataset is a better analog to the pandas DataFrame. The Dataset is a dict-like container storing ND arrays (DataArrays), just like the DataFrame is a dict-like container storing 1D arrays (Series).
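Not an official recipe, but one way to sketch the concat route on toy in-memory data (in the real case the arrays would stay Dask-backed): compute the new indicator with drop=True, re-attach the indicators dimension under the new label via expand_dims, then concat.

```python
import numpy as np
import xarray as xr

# Toy 3-D array: time x indicators x coins (random positive "prices")
da = xr.DataArray(
    np.random.rand(5, 1, 2) + 1.0,
    dims=["time", "indicators", "coins"],
    coords={"indicators": ["price_daily_close_usd"], "coins": ["BTC", "ETH"]},
)

today = da.sel(indicators="price_daily_close_usd", drop=True)
yesterday = da.shift(time=1).sel(indicators="price_daily_close_usd", drop=True)

# Re-attach the indicators dimension with the new label, then concat
dr = (today / yesterday - 1).expand_dims(indicators=["daily_return"])
dr = dr.transpose("time", "indicators", "coins")
da2 = xr.concat([da, dr], dim="indicators")  # a new array, not in-place
print(list(da2.indicators.values))  # ['price_daily_close_usd', 'daily_return']
```

This sidesteps both the duplicate-name problem (the label is set before concat) and the missing-index problem (the concatenated indicators dim carries a proper coordinate).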

Can't convert FITS data (FITS_rec type) to multidimensional numpy array

I am attempting to read in a FITS file and convert the data into a multidimensional numpy array (so I can easily index the data).
The FITS data is structured like:
FITS_rec([(time, [rate, rate, rate, rate], [ERROR, ERROR, ERROR, ERROR], TOTCOUNTS, fraxexp.)])
That is one 'row' (i.e. data[0], where data = hdul[1].data); in my case the number of 'rate' (or 'ERROR') entries varies between FITS files.
I wish to make this data into a numpy array, but when I do arr = np.asarray(data), I get a 1D object array out which I cannot index easily - i.e. arr[:][0] is just equal to data[0]. I have also tried np.split with no benefit.
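FITS_rec is a subclass of NumPy's record array, so one approach worth sketching (here with a plain structured array standing in for hdul[1].data) is to pull a named field out as an ordinary ndarray; this works cleanly when the field has a fixed length per row:

```python
import numpy as np

# Stand-in for hdul[1].data: a structured array with a 4-element 'rate' field
data = np.zeros(3, dtype=[("time", "f8"), ("rate", "f8", (4,)), ("totcounts", "i4")])
data["time"] = [1.0, 2.0, 3.0]
data["rate"] = np.arange(12).reshape(3, 4)

# Field access by name yields an ordinary ndarray you can index normally
rates = np.asarray(data["rate"])  # shape (3, 4)
print(rates[:, 0])
```

For variable-length array columns the field comes back as an object array of ragged rows, which cannot become a single rectangular ndarray without padding.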

Combining DataArrays in an xarray Dataset

Is there a nicer way of summing over all the DataArrays in an xarray Dataset than
sum(d for d in ds.data_vars.values())
This works, but seems a bit clunky. Is there an equivalent to summing over pandas DataFrame columns?
Note the ds.sum() method applies to each of the DataArrays - but I want to combine the DataArrays.
I assume you want to sum each data variable as well, e.g., sum(d.sum() for d in ds.data_vars.values()). In a future version of xarray (not yet in v0.10) this will be more succinct: you will be able to write sum(d.sum() for d in ds.values()).
Another option is to convert the Dataset into a single DataArray and sum it at once, e.g., ds.to_array().sum(). This will be less efficient if you have data variables with different dimensions.
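For example, on a toy Dataset both forms give the same element-wise result:

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({
    "a": ("x", np.array([1.0, 2.0])),
    "b": ("x", np.array([10.0, 20.0])),
})

# Element-wise sum across data variables
combined = sum(d for d in ds.data_vars.values())

# Same result via a single stacked DataArray
stacked = ds.to_array().sum("variable")
print(combined.values)  # [11. 22.]
```

to_array() stacks the variables along a new "variable" dimension, so reducing over that dimension combines them in one step.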

Converting 2D numpy array to 3D array without looping

I have a 2D array of shape (t*40,6) which I want to convert into a 3D array of shape (t,40,5) for the LSTM's input data layer. The description of how the conversion is desired is shown in the figure below. Here, F1..5 are the 5 input features, T1...40 are the time steps for LSTM and C1...t are the various training examples. Basically, for each unique "Ct", I want a "T X F" 2D array, and concatenate all along the 3rd dimension. I do not mind losing the value of "Ct" as long as each Ct is in a different dimension.
I have the following code to do this by looping over each unique Ct, and appending the "T X F" 2D arrays in 3rd dimension.
import pandas as pd

# load 2d data
data = pd.read_csv('LSTMTrainingData.csv')
trainX = []
# loop over each unique ct and append the 2D subset in the 3rd dimension
for index, ct in enumerate(data.ct.unique()):
    trainX.append(data[data['ct'] == ct].iloc[:, 1:])
However, there are over 1,800,000 such Ct's so this makes it quite slow to loop over each unique Ct. Looking for suggestions on doing this operation faster.
EDIT:
data_3d = array.reshape(t,40,6)
trainX = data_3d[:,:,1:]
This is the solution for the original question posted.
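A quick sanity check of that reshape on synthetic data (column 0 playing the role of Ct):

```python
import numpy as np

t = 3
# 2D data: t blocks of 40 rows, 6 columns; column 0 holds the Ct id
arr = np.hstack([np.repeat(np.arange(t), 40)[:, None].astype(float),
                 np.random.rand(t * 40, 5)])

data_3d = arr.reshape(t, 40, 6)   # each Ct block becomes one slice
trainX = data_3d[:, :, 1:]        # drop the Ct column
print(trainX.shape)  # (3, 40, 5)
```

This only works because the rows are already sorted by Ct with exactly 40 rows per Ct; reshape does no grouping of its own.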
Updating the question with an additional problem: T1...40 has at most 40 time steps, but a given Ct may have fewer than 40. The remaining slots of the 40 can be np.nan.
Since not all Ct have the same length, you have no choice but to rebuild a new block.
But data[data['ct'] == ct] can be O(n²), so it's a bad way to do it.
Here is a solution using Panel. cumcount renumbers the lines within each Ct:
import pandas as pd
from numpy.random import randint

t = 5
CFt = randint(0, t, (40 * t, 6)).astype(float)  # 2D data
df = pd.DataFrame(CFt)
df2 = df.set_index([df[0], df.groupby(0).cumcount()]).sort_index()
df3 = df2.to_panel()
This automatically fills missing data with NaN. But it warns:
DeprecationWarning:
Panel is deprecated and will be removed in a future version.
The recommended way to represent these types of 3-dimensional data are with a MultiIndex on a DataFrame, via the Panel.to_frame() method
So perhaps working with df2 is the recommended way to manage your data.
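Panel has since been removed from pandas entirely, so here is a sketch of the same idea using only the MultiIndex: reindex each Ct group onto a full (Ct, step) grid so short groups are padded with NaN, then reshape. Sizes are made small and one group is deliberately shortened for illustration.

```python
import numpy as np
import pandas as pd

t, T = 3, 4  # 3 Ct groups, up to 4 steps each (stand-ins for 1,800,000 and 40)
df = pd.DataFrame(np.random.rand(t * T, 6))
df[0] = np.repeat(np.arange(t), T)  # column 0 is the Ct id
df = df.iloc[:-1]                   # drop one row so the last group is short

step = df.groupby(0).cumcount()     # renumber lines within each Ct
df2 = df.set_index([df[0], step]).sort_index()

# Reindex onto the full (Ct, step) grid: missing rows become NaN, then reshape
full = pd.MultiIndex.from_product([np.arange(t), np.arange(T)])
arr3d = df2.reindex(full).to_numpy().reshape(t, T, 6)
print(arr3d.shape)                    # (3, 4, 6)
print(np.isnan(arr3d[-1, -1]).all())  # True: the short group was padded
```

This keeps the O(n log n) groupby/sort cost of the Panel answer while avoiding the deprecated API.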

calculating means of many matrices in numpy

I have many csv files which each contain roughly identical matrices. Each matrix is 11 columns by either 5 or 6 rows. The columns are variables and the rows are test conditions. Some of the matrices do not contain data for the last test condition, which is why there are 5 rows in some matrices and six rows in other matrices.
My application is in Python 2.6 using numpy and scipy.
My question is this:
How can I most efficiently create a summary matrix that contains the means of each cell across all of the identical matrices?
The summary matrix would have the same structure as all of the other matrices, except that the value in each cell in the summary matrix would be the mean of the values stored in the identical cell across all of the other matrices. If one matrix does not contain data for the last test condition, I want to make sure that its contents are not treated as zeros when the averaging is done. In other words, I want the means of all the non-zero values.
Can anyone show me a brief, flexible way of organizing this code so that it does everything I want with as little code as possible, while remaining flexible enough to re-use later with other data structures?
I know how to pull all the csv files in and how to write output. I just don't know the most efficient way to structure flow of data in the script, including whether to use python arrays or numpy arrays, and how to structure the operations, etc.
I have tried coding this in a number of different ways, but they all seem to be rather code intensive and inflexible if I later want to use this code for other data structures.
You could use masked arrays. Say N is the number of csv files. You can store all your data in a masked array A, of shape (N,11,6).
import numpy as np

A = np.ma.zeros((N, 11, 6))
A.mask = np.zeros_like(A)  # fills the mask with zeros: nothing is masked
A.mask = (A.data == 0)     # another way of masking: mask all data equal to zero
A.mask[0, 0, 0] = True     # mask a value
A[1, 2, 3] = 12.           # fill a value: like a usual array
Then the mean values along the first axis, taking masked values into account, are given by:
np.mean(A, axis=0)  # the returned shape is (11, 6)
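A runnable sketch with two small matrices, one of which is missing its last row (shapes shrunk from 11 x 6 for readability):

```python
import numpy as np

N, rows, cols = 2, 4, 3  # two matrices of 4 rows x 3 columns
A = np.ma.zeros((N, rows, cols))

A[0] = np.arange(12).reshape(rows, cols)        # a complete matrix
A[1, :3] = np.arange(9).reshape(3, cols) + 1.0  # a matrix missing its last row
A.mask = np.zeros(A.shape, dtype=bool)
A.mask[1, 3] = True                             # mask the absent test condition

means = A.mean(axis=0)  # masked cells are excluded from the average
print(means[3])         # last row comes only from the first matrix
```

Masked cells simply drop out of the average, so the missing test condition never counts as zeros.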
