Related
I have a ROOT file that I open with 2000 entries and a variable number of subentries, where each column is a different variable. Let's say I am only interested in 5 of those. I want to put them in an array with np.shape(array) = (2000, 250, 5). The 250 is plenty to contain all subentries per entry.
The ROOT file is converted into a dictionary by Uproot: DATA = {variablename: [array of entries [array of subentries]]}.
I create an array with np.zeros((2000, 250, 5)) and fill it with the data I want, but it takes about 500 ms, and I need a solution that scales, as I aim for 1 million entries later on. I found multiple solutions, but the best I managed was about 300 ms:
lim_i = len(N_DATA["nTrack"])
i = 0
INPUT_ARRAY = np.zeros((lim_i, 500, 5))
for l in range(len(INPUT_ARRAY)):
    while i < lim_i:
        EVENT = np.zeros((500, 5))
        k = 0
        lim_k = len(TRACK_DATA["Track_pt"][i])
        while k < lim_k:
            EVENT[k][0] = TRACK_DATA["Track_pt"][i][k]
            EVENT[k][1] = TRACK_DATA["Track_phi"][i][k]
            EVENT[k][2] = TRACK_DATA["Track_eta"][i][k]
            EVENT[k][3] = TRACK_DATA["Track_dxy"][i][k]
            EVENT[k][4] = TRACK_DATA["Track_charge"][i][k]
            k += 1
        INPUT_ARRAY[i] = EVENT
        i += 1
INPUT_ARRAY
Taking note of Karl Knechtel's second comment, "You should avoid explicitly iterating over Numpy arrays yourself (there is practically guaranteed to be a built-in Numpy thing that just does what you want, and probably much faster than native Python can)," there is a way to do this with array-at-a-time programming, but not in NumPy. The reason Uproot returns Awkward Arrays is that you need a way to deal with variable-length data efficiently.
I don't have your file, but I'll start with a similar one:
>>> import uproot4
>>> import skhep_testdata
>>> events = uproot4.open(skhep_testdata.data_path("uproot-HZZ.root"))["events"]
The branches that start with "Muon_" in this file have the same variable-length structure as in your tracks. (The C++ typename is a dynamically sized array, interpreted in Python "as jagged.")
>>> events.show(filter_name="Muon_*")
name | typename | interpretation
---------------------+--------------------------+-------------------------------
Muon_Px | float[] | AsJagged(AsDtype('>f4'))
Muon_Py | float[] | AsJagged(AsDtype('>f4'))
Muon_Pz | float[] | AsJagged(AsDtype('>f4'))
Muon_E | float[] | AsJagged(AsDtype('>f4'))
Muon_Charge | int32_t[] | AsJagged(AsDtype('>i4'))
Muon_Iso | float[] | AsJagged(AsDtype('>f4'))
If you just ask for these arrays, you get them as an Awkward Array.
>>> muons = events.arrays(filter_name="Muon_*")
>>> muons
<Array [{Muon_Px: [-52.9, 37.7, ... 0]}] type='2421 * {"Muon_Px": var * float32,...'>
To put them to better use, let's import Awkward Array and start by asking for its type.
>>> import awkward1 as ak
>>> ak.type(muons)
2421 * {"Muon_Px": var * float32, "Muon_Py": var * float32, "Muon_Pz": var * float32, "Muon_E": var * float32, "Muon_Charge": var * int32, "Muon_Iso": var * float32}
What does this mean? It means you have 2421 records with fields named "Muon_Px", etc., that each contain variable-length lists of float32 or int32, depending on the field. We can look at one of them by converting it to Python lists and dicts.
>>> muons[0].tolist()
{'Muon_Px': [-52.89945602416992, 37.7377815246582],
'Muon_Py': [-11.654671669006348, 0.6934735774993896],
'Muon_Pz': [-8.16079330444336, -11.307581901550293],
'Muon_E': [54.77949905395508, 39.401695251464844],
'Muon_Charge': [1, -1],
'Muon_Iso': [4.200153350830078, 2.1510612964630127]}
(You could have made these lists of records, rather than records of lists, by passing how="zip" to TTree.arrays or using ak.unzip and ak.zip in Awkward Array, but that's tangential to the padding that you want to do.)
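As a minimal sketch of that zipped alternative (the grouping into records relies on Uproot's common-prefix detection, so the field names may differ for other files):
>>> muons_zipped = events.arrays(filter_name="Muon_*", how="zip")
This gives one list of muon records per event (lists of records), rather than one record of lists per event.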
The problem is that the lists have different lengths. NumPy doesn't have any functions that will help us here because it deals entirely in rectilinear arrays. Therefore, we need a function that's specific to Awkward Array, ak.num.
>>> ak.num(muons)
<Array [{Muon_Px: 2, ... Muon_Iso: 1}] type='2421 * {"Muon_Px": int64, "Muon_Py"...'>
This is telling us the number of elements in each list, per field. For clarity, look at the first one:
>>> ak.num(muons)[0].tolist()
{'Muon_Px': 2, 'Muon_Py': 2, 'Muon_Pz': 2, 'Muon_E': 2, 'Muon_Charge': 2, 'Muon_Iso': 2}
You want to turn these irregular lists into regular lists that all have the same size. That's called "padding." Again, there's a function for that, but we first need to get the maximum number of elements, so that we know how much to pad it by.
>>> ak.max(ak.num(muons))
4
So let's make them all length 4.
>>> ak.pad_none(muons, ak.max(ak.num(muons)))
<Array [{Muon_Px: [-52.9, 37.7, ... None]}] type='2421 * {"Muon_Px": var * ?floa...'>
Again, let's look at the first one to understand what we have.
>>> ak.pad_none(muons, ak.max(ak.num(muons)))[0].tolist()
{'Muon_Px': [-52.89945602416992, 37.7377815246582, None, None],
'Muon_Py': [-11.654671669006348, 0.6934735774993896, None, None],
'Muon_Pz': [-8.16079330444336, -11.307581901550293, None, None],
'Muon_E': [54.77949905395508, 39.401695251464844, None, None],
'Muon_Charge': [1, -1, None, None],
'Muon_Iso': [4.200153350830078, 2.1510612964630127, None, None]}
You wanted to pad them with zeros, not None, so we convert the missing values into zeros.
>>> ak.fill_none(ak.pad_none(muons, ak.max(ak.num(muons))), 0)[0].tolist()
{'Muon_Px': [-52.89945602416992, 37.7377815246582, 0.0, 0.0],
'Muon_Py': [-11.654671669006348, 0.6934735774993896, 0.0, 0.0],
'Muon_Pz': [-8.16079330444336, -11.307581901550293, 0.0, 0.0],
'Muon_E': [54.77949905395508, 39.401695251464844, 0.0, 0.0],
'Muon_Charge': [1, -1, 0, 0],
'Muon_Iso': [4.200153350830078, 2.1510612964630127, 0.0, 0.0]}
Finally, NumPy doesn't have records (other than structured arrays, which also imply that the fields of each record are adjacent in memory; Awkward Array's "records" are abstract). So let's unzip what we have into six separate arrays.
>>> arrays = ak.unzip(ak.fill_none(ak.pad_none(muons, ak.max(ak.num(muons))), 0))
>>> arrays
(<Array [[-52.9, 37.7, 0, 0, ... 23.9, 0, 0, 0]] type='2421 * var * float64'>,
<Array [[-11.7, 0.693, 0, 0, ... 0, 0, 0]] type='2421 * var * float64'>,
<Array [[-8.16, -11.3, 0, 0, ... 0, 0, 0]] type='2421 * var * float64'>,
<Array [[54.8, 39.4, 0, 0], ... 69.6, 0, 0, 0]] type='2421 * var * float64'>,
<Array [[1, -1, 0, 0], ... [-1, 0, 0, 0]] type='2421 * var * int64'>,
<Array [[4.2, 2.15, 0, 0], ... [0, 0, 0, 0]] type='2421 * var * float64'>)
Note that this one line does everything from the initial data-pull from Uproot (muons). I'm not going to profile it now, but you'll find that this one line is considerably faster than explicit looping.
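If you want to check the speed claim on your own data, a quick sketch with timeit (numbers vary by machine and file size):
>>> import timeit
>>> timeit.timeit(
...     lambda: ak.unzip(ak.fill_none(ak.pad_none(muons, ak.max(ak.num(muons))), 0)),
...     number=100,
... )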
Now what we have is semantically equivalent to six NumPy arrays, so we'll just cast them as NumPy. (Attempts to do so with irregular data would fail. You have to explicitly pad the data.)
>>> numpy_arrays = [ak.to_numpy(x) for x in arrays]
>>> numpy_arrays
[array([[-52.89945602, 37.73778152, 0. , 0. ],
[ -0.81645936, 0. , 0. , 0. ],
[ 48.98783112, 0.82756668, 0. , 0. ],
...,
[-29.75678635, 0. , 0. , 0. ],
[ 1.14186978, 0. , 0. , 0. ],
[ 23.9132061 , 0. , 0. , 0. ]]),
array([[-11.65467167, 0.69347358, 0. , 0. ],
[-24.40425873, 0. , 0. , 0. ],
[-21.72313881, 29.8005085 , 0. , 0. ],
...,
[-15.30385876, 0. , 0. , 0. ],
[ 63.60956955, 0. , 0. , 0. ],
[-35.66507721, 0. , 0. , 0. ]]),
array([[ -8.1607933 , -11.3075819 , 0. , 0. ],
[ 20.19996834, 0. , 0. , 0. ],
[ 11.16828537, 36.96519089, 0. , 0. ],
...,
[-52.66374969, 0. , 0. , 0. ],
[162.17631531, 0. , 0. , 0. ],
[ 54.71943665, 0. , 0. , 0. ]]),
array([[ 54.77949905, 39.40169525, 0. , 0. ],
[ 31.69044495, 0. , 0. , 0. ],
[ 54.73978806, 47.48885727, 0. , 0. ],
...,
[ 62.39516068, 0. , 0. , 0. ],
[174.20863342, 0. , 0. , 0. ],
[ 69.55621338, 0. , 0. , 0. ]]),
array([[ 1, -1, 0, 0],
[ 1, 0, 0, 0],
[ 1, -1, 0, 0],
...,
[-1, 0, 0, 0],
[-1, 0, 0, 0],
[-1, 0, 0, 0]]),
array([[4.20015335, 2.1510613 , 0. , 0. ],
[2.18804741, 0. , 0. , 0. ],
[1.41282165, 3.38350415, 0. , 0. ],
...,
[3.76294518, 0. , 0. , 0. ],
[0.55081069, 0. , 0. , 0. ],
[0. , 0. , 0. , 0. ]])]
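As an aside, the parenthetical above is easy to verify: converting a still-jagged field fails. A minimal check (the exact error type and message depend on your Awkward version):
>>> ak.to_numpy(muons["Muon_Px"])   # still jagged: this raises an error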
And now NumPy's dstack is appropriate. (This is making them contiguous in memory, so you could use NumPy's structured arrays if you want to. I would find that easier for keeping track of which index means which variable, but that's up to you. Actually, Xarray is particularly good at tracking metadata of rectilinear arrays.)
>>> import numpy as np
>>> np.dstack(numpy_arrays)
array([[[-52.89945602, -11.65467167, -8.1607933 , 54.77949905,
1. , 4.20015335],
[ 37.73778152, 0.69347358, -11.3075819 , 39.40169525,
-1. , 2.1510613 ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ]],
[[ -0.81645936, -24.40425873, 20.19996834, 31.69044495,
1. , 2.18804741],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ]],
[[ 48.98783112, -21.72313881, 11.16828537, 54.73978806,
1. , 1.41282165],
[ 0.82756668, 29.8005085 , 36.96519089, 47.48885727,
-1. , 3.38350415],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ]],
...,
[[-29.75678635, -15.30385876, -52.66374969, 62.39516068,
-1. , 3.76294518],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ]],
[[ 1.14186978, 63.60956955, 162.17631531, 174.20863342,
-1. , 0.55081069],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ]],
[[ 23.9132061 , -35.66507721, 54.71943665, 69.55621338,
-1. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ],
[ 0. , 0. , 0. , 0. ,
0. , 0. ]]])
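As a sketch of the structured-array alternative mentioned above (the field names here are my own shorthand, not anything read from the file):
>>> dtype = [("Px", "f8"), ("Py", "f8"), ("Pz", "f8"),
...          ("E", "f8"), ("Charge", "i8"), ("Iso", "f8")]
>>> structured = np.zeros(numpy_arrays[0].shape, dtype=dtype)
>>> for name, arr in zip(structured.dtype.names, numpy_arrays):
...     structured[name] = arr
...
>>> structured["Px"][0]   # same values as numpy_arrays[0][0], indexed by name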
Observation 1: we can assign directly to the appropriate sub-arrays of INPUT_ARRAY[i], instead of creating EVENT as a proxy for INPUT_ARRAY[i] and then copying that in. (I will also set your variable names in lowercase, to follow normal conventions.)
lim_i = len(n_data["nTrack"])
i = 0
input_array = np.zeros((lim_i, 500, 5))
for l in range(len(input_array)):
    while i < lim_i:
        k = 0
        lim_k = len(track_data["Track_pt"][i])
        while k < lim_k:
            input_array[i][k][0] = track_data["Track_pt"][i][k]
            input_array[i][k][1] = track_data["Track_phi"][i][k]
            input_array[i][k][2] = track_data["Track_eta"][i][k]
            input_array[i][k][3] = track_data["Track_dxy"][i][k]
            input_array[i][k][4] = track_data["Track_charge"][i][k]
            k += 1
        i += 1
Observation 2: the assignments we make in the innermost loop have the same basic structure. It would be nice if we could take the various entries of the track_data dict (which are 2-dimensional data) and stack them together. Numpy has a convenient (and efficient) built-in for stacking 2-dimensional data along the third dimension: np.dstack. Having prepared that 3-dimensional array, we can just copy in from it mechanically:
track_array = np.dstack((
    track_data['Track_pt'],
    track_data['Track_phi'],
    track_data['Track_eta'],
    track_data['Track_dxy'],
    track_data['Track_charge']
))
lim_i = len(n_data["nTrack"])
i = 0
input_array = np.zeros((lim_i, 500, 5))
for l in range(len(input_array)):
    while i < lim_i:
        k = 0
        lim_k = len(track_data["Track_pt"][i])
        while k < lim_k:
            input_array[i][k][0] = track_array[i][k][0]
            input_array[i][k][1] = track_array[i][k][1]
            input_array[i][k][2] = track_array[i][k][2]
            input_array[i][k][3] = track_array[i][k][3]
            input_array[i][k][4] = track_array[i][k][4]
            k += 1
        i += 1
Observation 3: but now the purpose of our innermost loop is simply to copy an entire chunk of track_array along the last dimension. We could just do that directly:
track_array = np.dstack((
    track_data['Track_pt'],
    track_data['Track_phi'],
    track_data['Track_eta'],
    track_data['Track_dxy'],
    track_data['Track_charge']
))
lim_i = len(n_data["nTrack"])
i = 0
input_array = np.zeros((lim_i, 500, 5))
for l in range(len(input_array)):
    while i < lim_i:
        k = 0
        lim_k = len(track_data["Track_pt"][i])
        while k < lim_k:
            input_array[i][k] = track_array[i][k]
            k += 1
        i += 1
Observation 4: But actually, the same reasoning applies to the other two dimensions of the array. Clearly, our intent is to copy the entire array produced from the dstack; and that is already a new array, so we could just use it directly.
input_array = np.dstack((
    track_data['Track_pt'],
    track_data['Track_phi'],
    track_data['Track_eta'],
    track_data['Track_dxy'],
    track_data['Track_charge']
))
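If you prefer an explicit axis, np.stack with axis=-1 produces the same (entries, subentries, 5) array from 2-D inputs; a sketch, equivalent to the dstack above:
input_array = np.stack((
    track_data['Track_pt'],
    track_data['Track_phi'],
    track_data['Track_eta'],
    track_data['Track_dxy'],
    track_data['Track_charge']
), axis=-1)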
I have a numpy array as follows:
array([0. , 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. , 0. ,
0.00791667, 0. , 0. , 0. , 0. ,
0. , 0.06837452, 0.09166667, 0.00370881, 0. ,
0. , 0.00489809, 0. , 0. , 0. ,
0. , 0. , 0.23888889, 0. , 0.05927778,
0.12138889, 0. , 0. , 0. , 0.36069444,
0.31711111, 0.16333333, 0.15005556, 0.01 , 0.005 ,
0.14357413, 0. , 0.15722222, 0.29494444, 0.3245 ,
0.31276639, 0.095 , 0.04750292, 0.09127039, 0. ,
0.06847222, 0.17 , 0.18039233, 0.21567804, 0.15913079,
0.4579781 , 0. , 0.2459 , 0.14886556, 0.08447222,
0. , 0.13722222, 0.28336984, 0.0725 , 0.077355 ,
0.45166391, 0. , 0.24892933, 0.25360062, 0. ,
0.12923041, 0.16145892, 0.48771795, 0.38527778, 0.29432968,
0.31983305, 1.07573089, 0.30611111, 0. , 0.0216475 ,
0. , 0.62268056, 0.16829156, 0.46239719, 0.6415958 ,
0.02138889, 0.76457155, 0.05711551, 0.35050949, 0.34856278,
0.15686164, 0.23158889, 0.16593262, 0.34961111, 0.21247575,
0.14116667, 0.19414785, 0.09166667, 0.93376627, 0.12772222,
0.00366667, 0.10297222, 0.173 , 0.0381225 , 0.22441667,
0.46686111, 0.18761111, 0.56037889, 0.47566111])
From this array, I need to calculate the area under the curve for each sub-array that starts at a 0, rises above 0, and ends with the 0 that follows a non-zero number. Obviously the array lengths will vary. It may also occur that two of these sub-arrays share a 0 value (the last 0 of the first array will be the first 0 of the second array).
The expected first two arrays should be:
[0. , 0.00791667, 0. ]
[0. , 0.06837452, 0.09166667, 0.00370881, 0. ]
I've tried splitting Python lists based on an element being equal to 0, but haven't found anything useful. What can I do?
See the code below - I think this is the most efficient you'll be able to do.
First, split the array using the indices of all of the zeroes. Where multiple zeroes are together, this produces several [ 0. ] arrays, so filter those out (based on length, as all arrays must necessarily begin with a zero) to produce C. Finally, since they all begin with zero, but none end with zero, append a zero to each array.
import numpy as np
# <Your array here>
A = np.array(...)
# Split into arrays based on zeroes
B = np.split(A, np.where(A == 0)[0])
# Filter out arrays of length 1
# (just a zero, caused by multiple zeroes together)
f = np.vectorize(lambda a: len(a) > 1)
C = np.extract(f(B), B)
# Append a zero to each array
g = np.vectorize(lambda a: np.append(a, 0), otypes=[object])
D = g(C)
# Output result
for array in D:
    print(array)
This gives the following output:
[ 0. 0.00791667 0. ]
[ 0. 0.06837452 0.09166667 0.00370881 0. ]
[ 0. 0.00489809 0. ]
[ 0. 0.23888889 0. ]
[ 0. 0.05927778 0.12138889 0. ]
[ 0. 0.36069444 0.31711111 0.16333333 0.15005556 0.01 0.005
0.14357413 0. ]
[ 0. 0.15722222 0.29494444 0.3245 0.31276639 0.095
0.04750292 0.09127039 0. ]
[ 0. 0.06847222 0.17 0.18039233 0.21567804 0.15913079
0.4579781 0. ]
[ 0. 0.2459 0.14886556 0.08447222 0. ]
[ 0. 0.13722222 0.28336984 0.0725 0.077355 0.45166391
0. ]
[ 0. 0.24892933 0.25360062 0. ]
[ 0. 0.12923041 0.16145892 0.48771795 0.38527778 0.29432968
0.31983305 1.07573089 0.30611111 0. ]
[ 0. 0.0216475 0. ]
[ 0. 0.62268056 0.16829156 0.46239719 0.6415958 0.02138889
0.76457155 0.05711551 0.35050949 0.34856278 0.15686164 0.23158889
0.16593262 0.34961111 0.21247575 0.14116667 0.19414785 0.09166667
0.93376627 0.12772222 0.00366667 0.10297222 0.173 0.0381225
0.22441667 0.46686111 0.18761111 0.56037889 0.47566111 0. ]
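From here, the areas the question actually asks for are one step away with np.trapz; a minimal sketch, assuming unit spacing between samples (pass dx= or x= otherwise):
# Area under each sub-array, using the trapezoidal rule
areas = [np.trapz(sub) for sub in D]
print(areas)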
I have a 2-d NumPy array that looks like this:
array([[0. , 0. , 0.2, 0.2],
[0.3, 0. , 0.3, 0. ]])
I'd like to modify it so that each row consists of all 0's, except for the first non-zero entry. If it's all 0s to start with, we don't change anything.
I could do this:
example = np.array([[0,0, 0.2, 0.2], [0.3, 0, 0.3, 0]])
my_copy = np.zeros_like(example)
for i, row in enumerate(example):
for j, elem in enumerate(row):
if elem > 0:
my_copy[i, j] = elem
break
But that's ugly and not vectorized. Any suggestions for how to vectorize this?
Thanks!
Here's a vectorised solution. The trick is to calculate your first non-zero entries via bool conversion and argmax. (This also handles all-zero rows gracefully: argmax returns 0 for them, and copying A[i, 0] just writes a 0.)
import numpy as np
A = np.array([[0. , 0. , 0.2, 0.2],
[0.3, 0. , 0.3, 0. ],
[0. , 0. , 0. , 0. ]])
res = np.zeros(A.shape)
idx = np.arange(res.shape[0])
args = A.astype(bool).argmax(1)
res[idx, args] = A[idx, args]
print(res)
array([[ 0. , 0. , 0.2, 0. ],
[ 0.3, 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. ]])
Simply
e = np.zeros(example.shape)
rows = np.arange(example.shape[0])
cols = np.argmax(example != 0, 1)
e[rows, cols] = example[rows, cols]
Setup
x = np.array([[0. , 0. , 0.2, 0.2],
[0.3, 0. , 0.3, 0. ],
[0. , 0. , 0. , 0. ]])
Using logical_and with np.eye:
m = (x!=0).argmax(1)
x[~np.logical_and(x, np.eye(x.shape[1])[m])] = 0
Output:
array([[0. , 0. , 0.2, 0. ],
[0.3, 0. , 0. , 0. ],
[0. , 0. , 0. , 0. ]])
Using this method will be slightly slower than the other two suggested.
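A rough way to check that claim yourself; a sketch (the first two answers use essentially the same fancy-indexing technique, so one function stands in for both; timings depend on array shape and machine):
import timeit
import numpy as np

# Random test matrix with plenty of zeros (shape is arbitrary)
x = np.random.rand(1000, 100)
x[x < 0.5] = 0

def fancy_indexing(a):
    # first two answers: argmax + fancy indexing
    res = np.zeros(a.shape)
    idx = np.arange(a.shape[0])
    args = a.astype(bool).argmax(1)
    res[idx, args] = a[idx, args]
    return res

def eye_mask(a):
    # this answer: logical_and with np.eye
    a = a.copy()
    m = (a != 0).argmax(1)
    a[~np.logical_and(a, np.eye(a.shape[1])[m])] = 0
    return a

print(timeit.timeit(lambda: fancy_indexing(x), number=100))
print(timeit.timeit(lambda: eye_mask(x), number=100))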
I have a numpy array:
arr=np.array([0,1,0,0.5])
I need to form a new array from it such that every zero element is repeated thrice and every non-zero element gets 2 preceding zeroes followed by the number itself. In short, every element expands to three: a zero becomes 0, 0, 0, and a non-zero value becomes 0, 0, value. For this array:
[0, 1, 0, 0.5] ->
0, 0, 0   [for index 0]
0, 0, 1   [for index 1]
0, 0, 0   [for index 2, which again has a zero]
0, 0, 0.5 [for index 3]
final output should be:
new_arr=[0,0,0,0,0,1,0,0,0,0,0,0.5]
np.repeat() repeats all the array elements n times, but I don't want exactly that. How should this be done? Thanks for the help.
A quick reshape followed by a call to np.pad will do it:
np.pad(arr.reshape(-1, 1), ((0, 0), (2, 0)), 'constant')
Output:
array([[ 0. , 0. , 0. ],
[ 0. , 0. , 1. ],
[ 0. , 0. , 0. ],
[ 0. , 0. , 0.5]])
You'll want to flatten it back again. That's simply done by calling .reshape(-1, ).
>>> np.pad(arr.reshape(-1, 1), ((0, 0), (2, 0)), 'constant').reshape(-1, )
array([ 0. , 0. , 0. , 0. , 0. , 1. , 0. , 0. , 0. , 0. , 0. ,
0.5])
A variant on the pad idea is to concatenate a 2d array of zeros
In [477]: arr=np.array([0,1,0,0.5])
In [478]: np.column_stack([np.zeros((len(arr),2)),arr])
Out[478]:
array([[ 0. , 0. , 0. ],
[ 0. , 0. , 1. ],
[ 0. , 0. , 0. ],
[ 0. , 0. , 0.5]])
In [479]: _.ravel()
Out[479]:
array([ 0. , 0. , 0. , 0. , 0. , 1. , 0. , 0. , 0. , 0. , 0. ,
0.5])
or padding in the other direction:
In [481]: np.vstack([np.zeros((2,len(arr))),arr])
Out[481]:
array([[ 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. ],
[ 0. , 1. , 0. , 0.5]])
In [482]: _.T.ravel()
Out[482]:
array([ 0. , 0. , 0. , 0. , 0. , 1. , 0. , 0. , 0. , 0. , 0. ,
0.5])
I've implemented a matrix factorization model, say R = U*V, and now I would like to train and test this model.
To this end, given a sparse matrix R (zero for missing values), I want to first hide some non-zero elements during training and use these non-zero elements as a test set later.
How can I randomly select some non-zero elements from a numpy.ndarray? Besides, I need to remember the row and column positions of the selected elements so I can use them in testing.
for example:
In [2]: import numpy as np
In [4]: mtr = np.random.rand(10,10)
In [5]: mtr
Out[5]:
array([[ 0.92685787, 0.95496193, 0.76878455, 0.12304856, 0.13804963,
0.30867502, 0.60245974, 0.00797898, 0.1060602 , 0.98277982],
[ 0.88879888, 0.40209901, 0.35274404, 0.73097713, 0.56238248,
0.380625 , 0.16432029, 0.5383006 , 0.0678564 , 0.42875591],
[ 0.42343761, 0.31957986, 0.5991212 , 0.04898903, 0.2908878 ,
0.13160296, 0.26938537, 0.91442668, 0.72827097, 0.4511198 ],
[ 0.63979934, 0.33421621, 0.09218392, 0.71520048, 0.57100522,
0.37205284, 0.59726293, 0.58224992, 0.58690505, 0.4791199 ],
[ 0.35219557, 0.34954002, 0.93837312, 0.2745864 , 0.89569075,
0.81244084, 0.09661341, 0.80673646, 0.83756759, 0.7948081 ],
[ 0.09173706, 0.86250006, 0.22121994, 0.21097563, 0.55090202,
0.80954817, 0.97159981, 0.95888693, 0.43151554, 0.2265607 ],
[ 0.00723128, 0.95690539, 0.94214806, 0.01721733, 0.12552314,
0.65977765, 0.20845669, 0.44663729, 0.98392716, 0.36258081],
[ 0.65994805, 0.47697842, 0.35449045, 0.73937445, 0.68578224,
0.44278095, 0.86743906, 0.5126411 , 0.75683392, 0.73354572],
[ 0.4814301 , 0.92410622, 0.85267402, 0.44856078, 0.03887269,
0.48868498, 0.83618382, 0.49404473, 0.37328248, 0.18134919],
[ 0.63999748, 0.48718656, 0.54826717, 0.1001681 , 0.1940816 ,
0.3937014 , 0.48768013, 0.70610649, 0.03213063, 0.88371607]])
In [6]: mtr = np.where(mtr>0.5, 0, mtr)
In [8]: mtr
Out[8]:
array([[ 0. , 0. , 0. , 0.12304856, 0.13804963,
0.30867502, 0. , 0.00797898, 0.1060602 , 0. ],
[ 0. , 0.40209901, 0.35274404, 0. , 0. ,
0.380625 , 0.16432029, 0. , 0.0678564 , 0.42875591],
[ 0.42343761, 0.31957986, 0. , 0.04898903, 0.2908878 ,
0.13160296, 0.26938537, 0. , 0. , 0.4511198 ],
[ 0. , 0.33421621, 0.09218392, 0. , 0. ,
0.37205284, 0. , 0. , 0. , 0.4791199 ],
[ 0.35219557, 0.34954002, 0. , 0.2745864 , 0. ,
0. , 0.09661341, 0. , 0. , 0. ],
[ 0.09173706, 0. , 0.22121994, 0.21097563, 0. ,
0. , 0. , 0. , 0.43151554, 0.2265607 ],
[ 0.00723128, 0. , 0. , 0.01721733, 0.12552314,
0. , 0.20845669, 0.44663729, 0. , 0.36258081],
[ 0. , 0.47697842, 0.35449045, 0. , 0. ,
0.44278095, 0. , 0. , 0. , 0. ],
[ 0.4814301 , 0. , 0. , 0.44856078, 0.03887269,
0.48868498, 0. , 0.49404473, 0.37328248, 0.18134919],
[ 0. , 0.48718656, 0. , 0.1001681 , 0.1940816 ,
0.3937014 , 0.48768013, 0. , 0.03213063, 0. ]])
Given such sparse ndarray, how can I select 20% of the non-zero elements and remember their position?
We'll use numpy.random.choice. First, we get arrays of the (i,j) indices where the data is nonzero:
i,j = np.nonzero(x)
Then we'll select 20% of these:
ix = np.random.choice(len(i), int(np.floor(0.2 * len(i))), replace=False)
Here ix is a list of random, unique indices, 20% the length of i and j (the length of i and j is the number of nonzero entries). To recover the indices, we do i[ix] and j[ix], so we can then select 20% of the nonzero entries of x by writing:
print(x[i[ix], j[ix]])
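To finish the train/test setup the question describes, a minimal sketch (the names test_rows, test_values, and x_train are illustrative):
# Keep the positions and values of the held-out entries (the test set)...
test_rows, test_cols = i[ix], j[ix]
test_values = x[test_rows, test_cols].copy()

# ...and hide them in the training copy by zeroing them out.
x_train = x.copy()
x_train[test_rows, test_cols] = 0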