I have a ROOT file that I open with 2000 entries and a variable number of subentries, and each column is a different variable. Let's say I am only interested in 5 of those. I want to put them in an array with np.shape(array) = (2000, 250, 5). The 250 is plenty to contain all subentries per entry.
The ROOT file is converted into a dictionary by uproot: DATA = {variablename: [array of entries [array of subentries]]}.
I create an array np.zeros((2000, 250, 5)) and fill it with the data I want, but it takes about 500 ms, and I need a solution that scales, as I aim for 1 million entries later on. I found multiple solutions, but my lowest was about 300 ms:
lim_i=len(N_DATA["nTrack"])
i=0
INPUT_ARRAY=np.zeros((lim_i,500,5))
for l in range(len(INPUT_ARRAY)):
    while i < lim_i:
        EVENT=np.zeros((500,5))
        k=0
        lim_k=len(TRACK_DATA["Track_pt"][i])
        while k<lim_k:
            EVENT[k][0]=TRACK_DATA["Track_pt"][i][k]
            EVENT[k][1]=TRACK_DATA["Track_phi"][i][k]
            EVENT[k][2]=TRACK_DATA["Track_eta"][i][k]
            EVENT[k][3]=TRACK_DATA["Track_dxy"][i][k]
            EVENT[k][4]=TRACK_DATA["Track_charge"][i][k]
            k+=1
        INPUT_ARRAY[i]=EVENT
        i+=1
INPUT_ARRAY
Taking note of Karl Knechtel's second comment, "You should avoid explicitly iterating over Numpy arrays yourself (there is practically guaranteed to be a built-in Numpy thing that just does what you want, and probably much faster than native Python can)," there is a way to do this with array-at-a-time programming, but not in NumPy. The reason Uproot returns Awkward Arrays is that you need a way to deal with variable-length data efficiently.
I don't have your file, but I'll start with a similar one:
>>> import uproot4
>>> import skhep_testdata
>>> events = uproot4.open(skhep_testdata.data_path("uproot-HZZ.root"))["events"]
The branches that start with "Muon_" in this file have the same variable-length structure as in your tracks. (The C++ typename is a dynamically sized array, interpreted in Python "as jagged.")
>>> events.show(filter_name="Muon_*")
name | typename | interpretation
---------------------+--------------------------+-------------------------------
Muon_Px | float[] | AsJagged(AsDtype('>f4'))
Muon_Py | float[] | AsJagged(AsDtype('>f4'))
Muon_Pz | float[] | AsJagged(AsDtype('>f4'))
Muon_E | float[] | AsJagged(AsDtype('>f4'))
Muon_Charge | int32_t[] | AsJagged(AsDtype('>i4'))
Muon_Iso | float[] | AsJagged(AsDtype('>f4'))
If you just ask for these arrays, you get them as an Awkward Array.
>>> muons = events.arrays(filter_name="Muon_*")
>>> muons
<Array [{Muon_Px: [-52.9, 37.7, ... 0]}] type='2421 * {"Muon_Px": var * float32,...'>
To put them to better use, let's import Awkward Array and start by asking for its type.
>>> import awkward1 as ak
>>> ak.type(muons)
2421 * {"Muon_Px": var * float32, "Muon_Py": var * float32, "Muon_Pz": var * float32, "Muon_E": var * float32, "Muon_Charge": var * int32, "Muon_Iso": var * float32}
What does this mean? It means you have 2421 records with fields named "Muon_Px", etc., that each contain variable-length lists of float32 or int32, depending on the field. We can look at one of them by converting it to Python lists and dicts.
>>> muons[0].tolist()
{'Muon_Px': [-52.89945602416992, 37.7377815246582],
'Muon_Py': [-11.654671669006348, 0.6934735774993896],
'Muon_Pz': [-8.16079330444336, -11.307581901550293],
'Muon_E': [54.77949905395508, 39.401695251464844],
'Muon_Charge': [1, -1],
'Muon_Iso': [4.200153350830078, 2.1510612964630127]}
(You could have made these lists of records, rather than records of lists, by passing how="zip" to TTree.arrays or using ak.unzip and ak.zip in Awkward Array, but that's tangential to the padding that you want to do.)
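(A minimal sketch of that zipped alternative, using the same events object as above; the exact nesting of the result depends on the library version, so treat it as illustrative:)
>>> muons_zipped = events.arrays(filter_name="Muon_*", how="zip")
Each entry of muons_zipped would then be one event whose "Muon" field is a variable-length list of per-muon records, rather than one list per branch.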
The problem is that the lists have different lengths. NumPy doesn't have any functions that will help us here because it deals entirely in rectilinear arrays. Therefore, we need a function that's specific to Awkward Array, ak.num.
>>> ak.num(muons)
<Array [{Muon_Px: 2, ... Muon_Iso: 1}] type='2421 * {"Muon_Px": int64, "Muon_Py"...'>
This is telling us the number of elements in each list, per field. For clarity, look at the first one:
>>> ak.num(muons)[0].tolist()
{'Muon_Px': 2, 'Muon_Py': 2, 'Muon_Pz': 2, 'Muon_E': 2, 'Muon_Charge': 2, 'Muon_Iso': 2}
You want to turn these irregular lists into regular lists that all have the same size. That's called "padding." Again, there's a function for that, but we first need to get the maximum number of elements, so that we know how much to pad it by.
>>> ak.max(ak.num(muons))
4
So let's make them all length 4.
>>> ak.pad_none(muons, ak.max(ak.num(muons)))
<Array [{Muon_Px: [-52.9, 37.7, ... None]}] type='2421 * {"Muon_Px": var * ?floa...'>
Again, let's look at the first one to understand what we have.
>>> ak.pad_none(muons, ak.max(ak.num(muons)))[0].tolist()
{'Muon_Px': [-52.89945602416992, 37.7377815246582, None, None],
'Muon_Py': [-11.654671669006348, 0.6934735774993896, None, None],
'Muon_Pz': [-8.16079330444336, -11.307581901550293, None, None],
'Muon_E': [54.77949905395508, 39.401695251464844, None, None],
'Muon_Charge': [1, -1, None, None],
'Muon_Iso': [4.200153350830078, 2.1510612964630127, None, None]}
You wanted to pad them with zeros, not None, so we convert the missing values into zeros.
>>> ak.fill_none(ak.pad_none(muons, ak.max(ak.num(muons))), 0)[0].tolist()
{'Muon_Px': [-52.89945602416992, 37.7377815246582, 0.0, 0.0],
'Muon_Py': [-11.654671669006348, 0.6934735774993896, 0.0, 0.0],
'Muon_Pz': [-8.16079330444336, -11.307581901550293, 0.0, 0.0],
'Muon_E': [54.77949905395508, 39.401695251464844, 0.0, 0.0],
'Muon_Charge': [1, -1, 0, 0],
'Muon_Iso': [4.200153350830078, 2.1510612964630127, 0.0, 0.0]}
Finally, NumPy doesn't have records (other than the structured array, which also implies that the columns are contiguous in memory; Awkward Array's "records" are abstract). So let's unzip what we have into six separate arrays.
>>> arrays = ak.unzip(ak.fill_none(ak.pad_none(muons, ak.max(ak.num(muons))), 0))
>>> arrays
(<Array [[-52.9, 37.7, 0, 0, ... 23.9, 0, 0, 0]] type='2421 * var * float64'>,
<Array [[-11.7, 0.693, 0, 0, ... 0, 0, 0]] type='2421 * var * float64'>,
<Array [[-8.16, -11.3, 0, 0, ... 0, 0, 0]] type='2421 * var * float64'>,
<Array [[54.8, 39.4, 0, 0], ... 69.6, 0, 0, 0]] type='2421 * var * float64'>,
<Array [[1, -1, 0, 0], ... [-1, 0, 0, 0]] type='2421 * var * int64'>,
<Array [[4.2, 2.15, 0, 0], ... [0, 0, 0, 0]] type='2421 * var * float64'>)
Note that this one line does everything, starting from the initial data-pull from Uproot (muons). I'm not going to profile it now, but you'll find that this one line is considerably faster than explicit looping.
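(If you want to verify the speed claim yourself, a rough sketch with timeit; actual numbers will vary by machine and file:)
>>> import timeit
>>> timeit.timeit(
...     lambda: ak.unzip(ak.fill_none(ak.pad_none(muons, ak.max(ak.num(muons))), 0)),
...     number=100)   # total seconds for 100 runs; divide by 100 for per-call time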
Now what we have is semantically equivalent to six NumPy arrays, so we'll just cast them as NumPy. (Attempts to do so with irregular data would fail. You have to explicitly pad the data.)
>>> numpy_arrays = [ak.to_numpy(x) for x in arrays]
>>> numpy_arrays
[array([[-52.89945602, 37.73778152, 0. , 0. ],
[ -0.81645936, 0. , 0. , 0. ],
[ 48.98783112, 0.82756668, 0. , 0. ],
...,
[-29.75678635, 0. , 0. , 0. ],
[ 1.14186978, 0. , 0. , 0. ],
[ 23.9132061 , 0. , 0. , 0. ]]),
array([[-11.65467167, 0.69347358, 0. , 0. ],
[-24.40425873, 0. , 0. , 0. ],
[-21.72313881, 29.8005085 , 0. , 0. ],
...,
[-15.30385876, 0. , 0. , 0. ],
[ 63.60956955, 0. , 0. , 0. ],
[-35.66507721, 0. , 0. , 0. ]]),
array([[ -8.1607933 , -11.3075819 , 0. , 0. ],
[ 20.19996834, 0. , 0. , 0. ],
[ 11.16828537, 36.96519089, 0. , 0. ],
...,
[-52.66374969, 0. , 0. , 0. ],
[162.17631531, 0. , 0. , 0. ],
[ 54.71943665, 0. , 0. , 0. ]]),
array([[ 54.77949905, 39.40169525, 0. , 0. ],
[ 31.69044495, 0. , 0. , 0. ],
[ 54.73978806, 47.48885727, 0. , 0. ],
...,
[ 62.39516068, 0. , 0. , 0. ],
[174.20863342, 0. , 0. , 0. ],
[ 69.55621338, 0. , 0. , 0. ]]),
array([[ 1, -1, 0, 0],
[ 1, 0, 0, 0],
[ 1, -1, 0, 0],
...,
[-1, 0, 0, 0],
[-1, 0, 0, 0],
[-1, 0, 0, 0]]),
array([[4.20015335, 2.1510613 , 0. , 0. ],
[2.18804741, 0. , 0. , 0. ],
[1.41282165, 3.38350415, 0. , 0. ],
...,
[3.76294518, 0. , 0. , 0. ],
[0.55081069, 0. , 0. , 0. ],
[0. , 0. , 0. , 0. ]])]
And now NumPy's dstack is appropriate. (This is making them contiguous in memory, so you could use NumPy's structured arrays if you want to; a sketch of that follows the output below. I would find that easier for keeping track of which index means which variable, but that's up to you. Actually, Xarray is particularly good at tracking metadata of rectilinear arrays.)
>>> import numpy as np
>>> np.dstack(numpy_arrays)
array([[[-52.89945602, -11.65467167, -8.1607933 , 54.77949905,
1. , 4.20015335],
[ 37.73778152, 0.69347358, -11.3075819 , 39.40169525,
-1. , 2.1510613 ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ]],
[[ -0.81645936, -24.40425873, 20.19996834, 31.69044495,
1. , 2.18804741],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ]],
[[ 48.98783112, -21.72313881, 11.16828537, 54.73978806,
1. , 1.41282165],
[ 0.82756668, 29.8005085 , 36.96519089, 47.48885727,
-1. , 3.38350415],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ]],
...,
[[-29.75678635, -15.30385876, -52.66374969, 62.39516068,
-1. , 3.76294518],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ]],
[[ 1.14186978, 63.60956955, 162.17631531, 174.20863342,
-1. , 0.55081069],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ]],
[[ 23.9132061 , -35.66507721, 54.71943665, 69.55621338,
-1. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ]]])
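Since structured arrays were suggested above, here is a minimal sketch of that layout (the field names are just the branch names; this is illustrative, not the only choice):
>>> stacked = np.dstack(numpy_arrays)
>>> fields = ["Muon_Px", "Muon_Py", "Muon_Pz", "Muon_E", "Muon_Charge", "Muon_Iso"]
>>> structured = np.zeros(stacked.shape[:2], dtype=[(f, stacked.dtype) for f in fields])
>>> for j, f in enumerate(fields):
...     structured[f] = stacked[:, :, j]
...
>>> structured["Muon_Px"][0]   # the padded Px values of the first event, by name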
Observation 1: we can assign directly to the appropriate sub-arrays of INPUT_ARRAY[i], instead of creating EVENT as a proxy for INPUT_ARRAY[i] and then copying that in. (I will also set your variable names in lowercase, to follow normal conventions.)
lim_i = len(n_data["nTrack"])
i = 0
input_array = np.zeros((lim_i,500,5))
for l in range(len(input_array)):
    while i < lim_i:
        k = 0
        lim_k = len(track_data["Track_pt"][i])
        while k < lim_k:
            input_array[i][k][0] = track_data["Track_pt"][i][k]
            input_array[i][k][1] = track_data["Track_phi"][i][k]
            input_array[i][k][2] = track_data["Track_eta"][i][k]
            input_array[i][k][3] = track_data["Track_dxy"][i][k]
            input_array[i][k][4] = track_data["Track_charge"][i][k]
            k += 1
        i += 1
Observation 2: the assignments we make in the innermost loop have the same basic structure. It would be nice if we could take the various entries of the TRACK_DATA dict (which are 2-dimensional data) and stack them together. Numpy has a convenient (and efficient) built-in for stacking 2-dimensional data along the third dimension: np.dstack. Having prepared that 3-dimensional array, we can just copy in from it mechanically:
track_array = np.dstack((
    track_data['Track_pt'],
    track_data['Track_phi'],
    track_data['Track_eta'],
    track_data['Track_dxy'],
    track_data['Track_charge']
))
lim_i = len(n_data["nTrack"])
i = 0
input_array = np.zeros((lim_i,500,5))
for l in range(len(input_array)):
    while i < lim_i:
        k = 0
        lim_k = len(track_data["Track_pt"][i])
        while k < lim_k:
            input_array[i][k][0] = track_array[i][k][0]
            input_array[i][k][1] = track_array[i][k][1]
            input_array[i][k][2] = track_array[i][k][2]
            input_array[i][k][3] = track_array[i][k][3]
            input_array[i][k][4] = track_array[i][k][4]
            k += 1
        i += 1
Observation 3: but now, the purpose of our innermost loop is simply to copy an entire chunk of track_array along the last dimension. We could just do that directly:
track_array = np.dstack((
    track_data['Track_pt'],
    track_data['Track_phi'],
    track_data['Track_eta'],
    track_data['Track_dxy'],
    track_data['Track_charge']
))
lim_i = len(n_data["nTrack"])
i = 0
input_array = np.zeros((lim_i,500,5))
for l in range(len(input_array)):
    while i < lim_i:
        k = 0
        lim_k = len(track_data["Track_pt"][i])
        while k < lim_k:
            input_array[i][k] = track_array[i][k]
            k += 1
        i += 1
Observation 4: But actually, the same reasoning applies to the other two dimensions of the array. Clearly, our intent is to copy the entire array produced from the dstack; and that is already a new array, so we could just use it directly.
input_array = np.dstack((
    track_data['Track_pt'],
    track_data['Track_phi'],
    track_data['Track_eta'],
    track_data['Track_dxy'],
    track_data['Track_charge']
))
I have the following program:
import numpy as np

arr = np.random.randn(3, 4)
print(arr)
regArr = (arr > 0.8)
print(regArr)
print(arr[regArr].reshape(arr.shape))
output:
[[ 0.37182134 1.4807685 0.11094223 0.34548185]
[ 0.14857641 -0.9159358 -0.37933393 -0.73946522]
[ 1.01842304 -0.06714827 -1.22557205 0.45600827]]
I am looking for an output shaped like arr in which values greater than 0.8 are kept and all other values are zero. I tried boolean masking as shown above, but I am not able to solve this. Kindly help.
I'm not entirely sure what exactly you want to achieve, but this is what I did to filter:
arr = np.random.randn(3,4)
array([[-0.04790508, -0.71700005, 0.23204224, -0.36354634],
[ 0.48578236, 0.57983561, 0.79647091, -1.04972601],
[ 1.15067885, 0.98622772, -0.7004639 , -1.28243462]])
arr[arr < 0.8] = 0
array([[0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. ],
[1.15067885, 0.98622772, 0. , 0. ]])
Thanks to user3053452, I have added one more solution, in which the original data is not changed:
arr = np.random.randn(3,4)
array([[ 0.4297907 , 0.38100702, 0.30358291, -0.71137138],
[ 1.15180635, -1.21251676, 0.04333404, 1.81045931],
[ 0.17521058, -1.55604971, 1.1607159 , 0.23133528]])
new_arr = np.where(arr < 0.8, 0, arr)
array([[0. , 0. , 0. , 0. ],
[1.15180635, 0. , 0. , 1.81045931],
[0. , 0. , 1.1607159 , 0. ]])
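Another way to keep the original intact, as a quick sketch, is to mask a copy:
new_arr = arr.copy()
new_arr[new_arr < 0.8] = 0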
I have a numpy array:
arr=np.array([0,1,0,0.5])
I need to form a new array from it as follows: every zero element is repeated thrice, and every non-zero element gets 2 preceding zeroes followed by the number itself. In short, every element expands to three values: a zero becomes three zeros, and a non-zero becomes two zeros and then the number. It works out as follows:
0   -> 0, 0, 0     [for index 0]
1   -> 0, 0, 1     [for index 1]
0   -> 0, 0, 0     [for index 2, which again has a zero]
0.5 -> 0, 0, 0.5   [for index 3]
The final output should be:
new_arr = [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0.5]
np.repeat() repeats all the array elements n times, but I don't want exactly that. How should this be done? Thanks for the help.
A quick reshape followed by a call to np.pad will do it:
np.pad(arr.reshape(-1, 1), ((0, 0), (2, 0)), 'constant')
Output:
array([[ 0. , 0. , 0. ],
[ 0. , 0. , 1. ],
[ 0. , 0. , 0. ],
[ 0. , 0. , 0.5]])
You'll want to flatten it back again. That's simply done by calling .reshape(-1, ).
>>> np.pad(arr.reshape(-1, 1), ((0, 0), (2, 0)), 'constant').reshape(-1, )
array([ 0. , 0. , 0. , 0. , 0. , 1. , 0. , 0. , 0. , 0. , 0. ,
0.5])
A variant on the pad idea is to concatenate a 2d array of zeros:
In [477]: arr=np.array([0,1,0,0.5])
In [478]: np.column_stack([np.zeros((len(arr),2)),arr])
Out[478]:
array([[ 0. , 0. , 0. ],
[ 0. , 0. , 1. ],
[ 0. , 0. , 0. ],
[ 0. , 0. , 0.5]])
In [479]: _.ravel()
Out[479]:
array([ 0. , 0. , 0. , 0. , 0. , 1. , 0. , 0. , 0. , 0. , 0. ,
0.5])
or padding in the other direction:
In [481]: np.vstack([np.zeros((2,len(arr))),arr])
Out[481]:
array([[ 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. ],
[ 0. , 1. , 0. , 0.5]])
In [482]: _.T.ravel()
Out[482]:
array([ 0. , 0. , 0. , 0. , 0. , 1. , 0. , 0. , 0. , 0. , 0. ,
0.5])
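A further equivalent sketch: preallocate the flat result and fill every third slot directly.
out = np.zeros(3 * len(arr))
out[2::3] = arr
# -> array([ 0. ,  0. ,  0. ,  0. ,  0. ,  1. ,  0. ,  0. ,  0. ,  0. ,  0. ,  0.5])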
I have been trying to divide a Python scipy sparse matrix by the vector of its row sums. Here is my code:
sparse_mat = bsr_matrix((l_data, (l_row, l_col)), dtype=float)
sparse_mat = sparse_mat / (sparse_mat.sum(axis = 1)[:,None])
However, it throws an error no matter how I try it:
sparse_mat = sparse_mat / (sparse_mat.sum(axis = 1)[:,None])
File "/usr/lib/python2.7/dist-packages/scipy/sparse/base.py", line 381, in __div__
return self.__truediv__(other)
File "/usr/lib/python2.7/dist-packages/scipy/sparse/compressed.py", line 427, in __truediv__
raise NotImplementedError
NotImplementedError
Anyone with an idea of where I am going wrong?
You can circumvent the problem by creating a sparse diagonal matrix from the reciprocals of your row sums and then multiplying it with your matrix. In the product the diagonal matrix goes left and your matrix goes right.
Example:
>>> a
array([[0, 9, 0, 0, 1, 0],
[2, 0, 5, 0, 0, 9],
[0, 2, 0, 0, 0, 0],
[2, 0, 0, 0, 0, 0],
[0, 9, 5, 3, 0, 7],
[1, 0, 0, 8, 9, 0]])
>>> b = sparse.bsr_matrix(a)
>>>
>>> c = sparse.diags(1/b.sum(axis=1).A.ravel())
>>> # on older scipy versions the offsets parameter (default 0)
... # is a required argument, thus
... # c = sparse.diags(1/b.sum(axis=1).A.ravel(), 0)
...
>>> a/a.sum(axis=1, keepdims=True)
array([[ 0. , 0.9 , 0. , 0. , 0.1 , 0. ],
[ 0.125 , 0. , 0.3125 , 0. , 0. , 0.5625 ],
[ 0. , 1. , 0. , 0. , 0. , 0. ],
[ 1. , 0. , 0. , 0. , 0. , 0. ],
[ 0. , 0.375 , 0.20833333, 0.125 , 0. , 0.29166667],
[ 0.05555556, 0. , 0. , 0.44444444, 0.5 , 0. ]])
>>> (c @ b).todense()  # on Python < 3.5 replace c @ b with c.dot(b)
matrix([[ 0. , 0.9 , 0. , 0. , 0.1 , 0. ],
[ 0.125 , 0. , 0.3125 , 0. , 0. , 0.5625 ],
[ 0. , 1. , 0. , 0. , 0. , 0. ],
[ 1. , 0. , 0. , 0. , 0. , 0. ],
[ 0. , 0.375 , 0.20833333, 0.125 , 0. , 0.29166667],
[ 0.05555556, 0. , 0. , 0.44444444, 0.5 , 0. ]])
Something funny is going on. I have no problem performing the element division. I wonder if it's a Py2 issue. I'm using Py3.
In [1022]: A=sparse.bsr_matrix([[2,4],[1,2]])
In [1023]: A
Out[1023]:
<2x2 sparse matrix of type '<class 'numpy.int32'>'
with 4 stored elements (blocksize = 2x2) in Block Sparse Row format>
In [1024]: A.A
Out[1024]:
array([[2, 4],
[1, 2]], dtype=int32)
In [1025]: A.sum(axis=1)
Out[1025]:
matrix([[6],
[3]], dtype=int32)
In [1026]: A/A.sum(axis=1)
Out[1026]:
matrix([[ 0.33333333, 0.66666667],
[ 0.33333333, 0.66666667]])
or to try the other example:
In [1027]: b=sparse.bsr_matrix([[0, 9, 0, 0, 1, 0],
...: [2, 0, 5, 0, 0, 9],
...: [0, 2, 0, 0, 0, 0],
...: [2, 0, 0, 0, 0, 0],
...: [0, 9, 5, 3, 0, 7],
...: [1, 0, 0, 8, 9, 0]])
In [1028]: b
Out[1028]:
<6x6 sparse matrix of type '<class 'numpy.int32'>'
with 14 stored elements (blocksize = 1x1) in Block Sparse Row format>
In [1029]: b.sum(axis=1)
Out[1029]:
matrix([[10],
[16],
[ 2],
[ 2],
[24],
[18]], dtype=int32)
In [1030]: b/b.sum(axis=1)
Out[1030]:
matrix([[ 0. , 0.9 , 0. , 0. , 0.1 , 0. ],
[ 0.125 , 0. , 0.3125 , 0. , 0. , 0.5625 ],
....
[ 0.05555556, 0. , 0. , 0.44444444, 0.5 , 0. ]])
The result of this sparse/dense division is also dense, whereas c*b (where c is the sparse diagonal) is sparse.
In [1039]: c*b
Out[1039]:
<6x6 sparse matrix of type '<class 'numpy.float64'>'
with 14 stored elements in Compressed Sparse Row format>
The sparse sum is a dense matrix. It is 2d, so there's no need to expand its dimensions. In fact, if I try that I get an error:
In [1031]: A/(A.sum(axis=1)[:,None])
....
ValueError: shape too large to be a matrix.
Per this message, to keep the matrix sparse, you access the data values and use the (nonzero) indices:
sums = np.asarray(A.sum(axis=1)).squeeze() # this is dense
A.data /= sums[A.nonzero()[0]]
To divide by the mean of the nonzero entries per row instead of the sum, one can:
nnz = A.getnnz(axis=1) # this is also dense
means = sums / nnz
A.data /= means[A.nonzero()[0]]
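A quick sanity check (a sketch, assuming only the first in-place division by the row sums was applied to A):
import numpy as np
print(np.asarray(A.sum(axis=1)).squeeze())   # rows that had nonzero entries should now sum to ~1.0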
When using scipy.sparse.spdiags or scipy.sparse.diags I have noticed what I consider to be a bug in the routines, e.g.
scipy.sparse.spdiags([1.1,1.2,1.3],1,4,4).toarray()
returns
array([[ 0. , 1.2, 0. , 0. ],
[ 0. , 0. , 1.3, 0. ],
[ 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. ]])
That is, for positive diagonals it drops the first k data values. One might argue that there is some grand programming reason for this and that I just need to pad with zeros. OK, annoying as that may be, one can use scipy.sparse.diags, which gives the correct result. However, this routine has a bug that can't be worked around:
scipy.sparse.diags([1.1,1.2],0,(4,2)).toarray()
gives
array([[ 1.1, 0. ],
[ 0. , 1.2],
[ 0. , 0. ],
[ 0. , 0. ]])
nice, and
scipy.sparse.diags([1.1,1.2],-2,(4,2)).toarray()
gives
array([[ 0. , 0. ],
[ 0. , 0. ],
[ 1.1, 0. ],
[ 0. , 1.2]])
but
scipy.sparse.diags([1.1,1.2],-1,(4,2)).toarray()
gives an error saying ValueError: Diagonal length (index 0: 2 at offset -1) does not agree with matrix size (4, 2). Obviously the answer is
array([[ 0. , 0. ],
[ 1.1, 0. ],
[ 0. , 1.2],
[ 0. , 0. ]])
and for extra random behaviour we have
scipy.sparse.diags([1.1],-1,(4,2)).toarray()
giving
array([[ 0. , 0. ],
[ 1.1, 0. ],
[ 0. , 1.1],
[ 0. , 0. ]])
Anyone know if there is a function for constructing diagonal sparse matrices that actually works?
Executive summary: spdiags works correctly, even if the matrix input isn't the most intuitive. diags has a bug that affects some offsets in rectangular matrices. There is a bug fix on scipy github.
The example for spdiags is:
>>> data = array([[1,2,3,4],[1,2,3,4],[1,2,3,4]])
>>> diags = array([0,-1,2])
>>> spdiags(data, diags, 4, 4).todense()
matrix([[1, 0, 3, 0],
[1, 2, 0, 4],
[0, 2, 3, 0],
[0, 0, 3, 4]])
Note that the 3rd column of data always appears in the 3rd column of the sparse matrix. The other columns also line up, but they are omitted where they 'fall off the edge'.
The input to this function is a matrix, while the input to diags is a ragged list. The diagonals of the sparse matrix all have different numbers of values, so the specification has to accommodate this in one way or another: spdiags does so by ignoring some values, diags by taking a list input.
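(As a sketch of that padding for the questioner's first example: with offset +1, the value in column j of the data row lands in column j of the matrix, so prepend a dummy value to shift [1.1, 1.2, 1.3] into columns 1 through 3.)
>>> sparse.spdiags([[0, 1.1, 1.2, 1.3]], 1, 4, 4).toarray()
array([[ 0. ,  1.1,  0. ,  0. ],
       [ 0. ,  0. ,  1.2,  0. ],
       [ 0. ,  0. ,  0. ,  1.3],
       [ 0. ,  0. ,  0. ,  0. ]])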
The sparse.diags([1.1,1.2],-1,(4,2)) error is puzzling.
The spdiags equivalent does work:
In [421]: sparse.spdiags([[1.1,1.2]],-1,4,2).A
Out[421]:
array([[ 0. , 0. ],
[ 1.1, 0. ],
[ 0. , 1.2],
[ 0. , 0. ]])
The error is raised in this block of code:
for j, diagonal in enumerate(diagonals):
    offset = offsets[j]
    k = max(0, offset)
    length = min(m + offset, n - offset)
    if length <= 0:
        raise ValueError("Offset %d (index %d) out of bounds" % (offset, j))
    try:
        data_arr[j, k:k+length] = diagonal
    except ValueError:
        if len(diagonal) != length and len(diagonal) != 1:
            raise ValueError(
                "Diagonal length (index %d: %d at offset %d) does not "
                "agree with matrix size (%d, %d)." % (
                    j, len(diagonal), offset, m, n))
        raise
The actual matrix constructor in diags is:
dia_matrix((data_arr, offsets), shape=(m, n))
This is the same constructor that spdiags uses, but without any manipulation.
In [434]: M = sparse.dia_matrix(([[1.1,1.2]],-1),shape=(4,2))
In [435]: M.A
Out[435]:
array([[ 0. ,  0. ],
       [ 1.1,  0. ],
       [ 0. ,  1.2],
       [ 0. ,  0. ]])
In dia format, the inputs are stored exactly as given (in the spdiags case, complete with that matrix's extra values):
In [436]: M.data
Out[436]: array([[ 1.1, 1.2]])
In [437]: M.offsets
Out[437]: array([-1], dtype=int32)
As @user2357112 points out, length = min(m + offset, n - offset) is wrong, producing 3 in the test case. Changing it to length = min(m + k, n - k) makes all cases for this (4,2) matrix work. But it fails with the transpose: diags([1.1,1.2], 1, (2, 4)).
The correction, as of Oct 5, for this issue is:
https://github.com/pv/scipy-work/commit/529cbde47121c8ed87f74fa6445c05d71353eb6c
length = min(m + offset, n - offset, min(m,n))
With this fix, diags([1.1,1.2], 1, (2, 4)) works.
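Until a release includes that fix, one workaround sketch is to bypass diags and call the dia_matrix constructor directly, as demonstrated above:
import numpy as np
from scipy import sparse

# Place [1.1, 1.2] on the -1 diagonal of a (4, 2) matrix directly,
# sidestepping the length check in sparse.diags.
M = sparse.dia_matrix((np.array([[1.1, 1.2]]), np.array([-1])), shape=(4, 2))
print(M.toarray())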