Delete first column of a numpy array - python

I have the following np.array():
[[55.3 1. 2. 2. 2. 2. ]
[55.5 1. 2. 0. 2. 2. ]
[54.9 2. 2. 2. 2. 2. ]
[47.9 2. 2. 2. 0. 0. ]
[57. 1. 2. 2. 0. 2. ]
[56.6 1. 2. 2. 2. 2. ]
[54.7 1. 2. 2. 2. nan]
[51.4 2. 2. 2. 2. 2. ]
[55.3 2. 2. 2. 2. nan]]
And I would like to get the following one:
[[1. 2. 2. 2. 2. ]
[1. 2. 0. 2. 2. ]
[2. 2. 2. 2. 2. ]
[2. 2. 2. 0. 0. ]
[1. 2. 2. 0. 2. ]
[1. 2. 2. 2. 2. ]
[1. 2. 2. 2. nan]
[2. 2. 2. 2. 2. ]
[2. 2. 2. 2. nan]]
I did try:
MyArray[1:]  # but this deletes the first row, not the first column
np.delete(MyArray, 0, 1)  # where I don't understand the output
[[ 2. 2. 2. 2. 2.]
[ 1. 2. 2. 2. 2.]
[ 1. 2. 0. 2. 2.]
[ 2. 2. 2. 2. 2.]
[ 2. 2. 2. 0. 0.]
[ 1. 2. 2. 0. 2.]
[ 1. 2. 2. 2. 2.]
[ 1. 2. 2. 2. nan]
[ 2. 2. 2. 2. 2.]
[ 2. 2. 2. 2. nan]]

You made a bit of a mistake using np.delete.
The np.delete arguments are np.delete(array, indices, axis), where indices may be a single integer or a list of indexes to be deleted. The snippet below gives the output you want:
arr = np.delete(arr, [0], 1)
Note that np.delete returns a new array rather than modifying its input in place, so the result must be assigned back.

You could try: new_array = [i[1:] for i in MyArray] (note that this builds a Python list of row slices rather than an ndarray).

Try MyArray[:, 1:]
I think you can get rid of column 0 with this.

It should be straightforward with
new_array = MyArray[:, 1:]
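A small self-contained check of both approaches (note that the slice is a view of MyArray, while np.delete returns a new array):

import numpy as np

MyArray = np.array([[55.3, 1., 2.],
                    [55.5, 1., 2.]])

print(MyArray[:, 1:])            # drop column 0 by slicing (a view)
print(np.delete(MyArray, 0, 1))  # same values, returned as a new array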

Related

Merging NumPy arrays

I am looking to merge NumPy array elements in a list into a single NumPy array. How can I do this?
This is how the list containing arrays is structured and the code I tried:
import numpy as np
baked_quad_vertices = []
A = (1,2,3,4,5,
1,2,3,4,5,
1,2,3,4,5,
1,2,3,4,5)
A = np.array(A, dtype=np.float32)
B = (1,2,3,4,5,
1,2,3,4,5,
1,2,3,4,5,
1,2,3,4,5)
B = np.array(B, dtype=np.float32)
baked_quad_vertices.append(A)
baked_quad_vertices.append(B)
Z = baked_quad_vertices
Z = np.vstack(Z)
print(Z)
I get:
[[1. 2. 3. 4. 5. 1. 2. 3. 4. 5. 1. 2. 3. 4. 5. 1. 2. 3. 4. 5.]
[1. 2. 3. 4. 5. 1. 2. 3. 4. 5. 1. 2. 3. 4. 5. 1. 2. 3. 4. 5.]]
I want:
[1. 2. 3. 4. 5. 1. 2. 3. 4. 5. 1. 2. 3. 4. 5. 1. 2. 3. 4. 5.
1. 2. 3. 4. 5. 1. 2. 3. 4. 5. 1. 2. 3. 4. 5. 1. 2. 3. 4. 5.]
Optimally I'd want:
[1. 2. 3. 4. 5. 1. 2. 3. 4. 5. 1. 2. 3. 4. 5. 1. 2. 3. 4. 5.
1. 2. 3. 4. 5. 1. 2. 3. 4. 5. 1. 2. 3. 4. 5. 1. 2. 3. 4. 5., dtype=np.float32]
To get the result you want, try using np.hstack instead of np.vstack.
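For instance, a minimal self-contained sketch (with stand-in arrays shaped like A and B above):

import numpy as np

A = np.arange(20, dtype=np.float32)  # stand-in for the question's A
B = np.arange(20, dtype=np.float32)  # stand-in for the question's B
baked_quad_vertices = [A, B]

# np.hstack joins the 1-D arrays end to end into one flat array,
# and the float32 dtype is preserved:
Z = np.hstack(baked_quad_vertices)
print(Z.shape, Z.dtype)  # (40,) float32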
Editor's note: original answer below is referring to Revision 1 of the question:
This looks wrong, as each numpy array is still separated; what does the ... mean?
In fact, when you print an array it looks just like that. np.vstack returns an array, so you should already have one. Try printing:
print(type(baked_quad_vertices[chunk_count]))

NxN matrix in python with non-duplicate integers (in range [0:N-1]) in both rows AND columns

In Python, how can I create an N x N matrix or 2D array such that:
[A] each row has non-duplicate integers from 0 to N-1,
and [B] each column has non-duplicate integers from 0 to N-1?
Example :
[[1 0 2]
[2 1 0]
[0 2 1]]
So I had a bit of a tinker with this question; the following code seems to work:
import numpy as np

N = 10
row = np.arange(N)
result = np.zeros((N, N))
for i in row:
    result[i] = np.roll(row, i)
print(result)
output:
[[0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
[9. 0. 1. 2. 3. 4. 5. 6. 7. 8.]
[8. 9. 0. 1. 2. 3. 4. 5. 6. 7.]
[7. 8. 9. 0. 1. 2. 3. 4. 5. 6.]
[6. 7. 8. 9. 0. 1. 2. 3. 4. 5.]
[5. 6. 7. 8. 9. 0. 1. 2. 3. 4.]
[4. 5. 6. 7. 8. 9. 0. 1. 2. 3.]
[3. 4. 5. 6. 7. 8. 9. 0. 1. 2.]
[2. 3. 4. 5. 6. 7. 8. 9. 0. 1.]
[1. 2. 3. 4. 5. 6. 7. 8. 9. 0.]]
Ask away if you have any questions.
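For what it's worth, the same pattern can be built without the loop. A sketch using broadcasting: entry (i, j) is (j - i) mod N, which matches np.roll(row, i) row by row and keeps an integer dtype:

import numpy as np

N = 10
# (j - i) mod N puts each of 0..N-1 exactly once in every row and column,
# i.e. a cyclic Latin square.
result = (np.arange(N) - np.arange(N)[:, None]) % N
print(result)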

Convert for loop into numpy array

for timeprojection in range(100):
    for term in range(8):
        zerocouponbondprice[timeprojection, term] = zerocouponbondprice[timeprojection - 1, term - 1] * cashflow[timeprojection, term]
How can I convert something like this into numpy array form, so that I can remove the two for loops and increase the speed? (Assume timeprojection and term are dynamic numbers.)
You can construct the numpy array from a nested list comprehension:
import numpy as np
zerocouponbondprice = np.array([[k * l for k,l in zip(i,j)] for i,j in zip(zerocouponbondprice, cashflow[1:])])
If I get the question right, you can replace the two loops / ranges by using appropriate indexing. A simplified example:
import numpy as np
# these would be your input arrays zerocouponbondprice and cashflow:
arr0, arr1 = np.ones((10,10)), np.ones((10,10))
# these would be your ranges:
idx0, idx1 = 3, 9
# now you can do the calculation as simple as
arr0[idx0:idx1, idx0:idx1] = arr0[idx0-1:idx1-1, idx0-1:idx1-1] + arr1[idx0:idx1, idx0:idx1]
print(arr0)
[[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 2. 2. 2. 2. 2. 2. 1.]
[1. 1. 1. 2. 2. 2. 2. 2. 2. 1.]
[1. 1. 1. 2. 2. 2. 2. 2. 2. 1.]
[1. 1. 1. 2. 2. 2. 2. 2. 2. 1.]
[1. 1. 1. 2. 2. 2. 2. 2. 2. 1.]
[1. 1. 1. 2. 2. 2. 2. 2. 2. 1.]
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]
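One caveat on the original recurrence: each row of zerocouponbondprice depends on the previous row of the result itself, so the loop over time is genuinely sequential; only the inner loop over term can be vectorized outright. A minimal sketch of that middle ground (assuming the recurrence starts at the second row and second column rather than wrapping around with negative indices):

import numpy as np

T, K = 100, 8  # stand-ins for the dynamic sizes
zerocouponbondprice = np.ones((T, K))
cashflow = np.ones((T, K))

for t in range(1, T):
    # one vectorized slice assignment replaces the inner loop over term
    zerocouponbondprice[t, 1:] = zerocouponbondprice[t - 1, :-1] * cashflow[t, 1:]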

NumPy: impute mean of the two nearest rows for all NaN

I have a NumPy array with missing values. I want to impute the mean of the nearest values vertically.
import numpy as np
arr = np.random.randint(0, 10, (10, 4)).astype(float)
arr[2, 0] = np.nan
arr[4, 3] = np.nan
arr[0, 2] = np.nan
print(arr)
[[ 5. 7. nan 4.] # should be 4
[ 2. 6. 4. 9.]
[nan 2. 5. 5.] # should be 4.5
[ 7. 0. 3. 8.]
[ 6. 4. 3. nan] # should be 4
[ 8. 1. 2. 0.]
[ 0. 0. 1. 1.]
[ 1. 2. 6. 6.]
[ 8. 1. 9. 7.]
[ 3. 5. 8. 8.]]
If you are open to using Pandas, pd.DataFrame.interpolate is easy to use. Set limit_direction if "interpolating" values at ends of array:
df = pd.DataFrame(arr).interpolate(limit_direction='both')
df.to_numpy() # back to a numpy array if needed (if using v0.24.0 or above)
Output:
array([[5. , 7. , 4. , 4. ],
[2. , 6. , 4. , 9. ],
[4.5, 2. , 5. , 5. ],
[7. , 0. , 3. , 8. ],
[6. , 4. , 3. , 4. ],
[8. , 1. , 2. , 0. ],
[0. , 0. , 1. , 1. ],
[1. , 2. , 6. , 6. ],
[8. , 1. , 9. , 7. ],
[3. , 5. , 8. , 8. ]])
import numpy as np
arr = np.random.randint(0, 10, (10, 4)).astype(float)
arr[2, 0] = np.nan
arr[4, 3] = np.nan
arr[0, 2] = np.nan
print(arr)
[[ 5. 7. nan 4.]
[ 2. 6. 4. 9.]
[nan 2. 5. 5.]
[ 7. 0. 3. 8.]
[ 6. 4. 3. nan]
[ 8. 1. 2. 0.]
[ 0. 0. 1. 1.]
[ 1. 2. 6. 6.]
[ 8. 1. 9. 7.]
[ 3. 5. 8. 8.]]
for x, y in np.argwhere(np.isnan(arr)):
    # take the rows just above and below, clamped to the array bounds
    sample = arr[max(x - 1, 0):min(x + 2, arr.shape[0]), y]
    arr[x, y] = np.mean(sample[~np.isnan(sample)])
print(arr)
[[5. 7. 4. 4. ] # 3rd value here is mean(4)
[2. 6. 4. 9. ]
[4.5 2. 5. 5. ] # first value here is mean(2, 7)
[7. 0. 3. 8. ]
[6. 4. 3. 4. ] # 4th value here is mean(8, 0)
[8. 1. 2. 0. ]
[0. 0. 1. 1. ]
[1. 2. 6. 6. ]
[8. 1. 9. 7. ]
[3. 5. 8. 8. ]]
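A loop-free alternative sketch, starting again from the arr with NaNs defined above: pad with a NaN row on each side, average the rows above and below with np.nanmean, and fill only the NaN slots. This assumes no two vertically adjacent NaNs in the same column (an all-NaN neighborhood would produce a warning and leave NaN):

import numpy as np

# Edge rows see one real neighbor plus a padding NaN, which nanmean ignores.
padded = np.pad(arr, ((1, 1), (0, 0)), constant_values=np.nan)
neighbor_mean = np.nanmean(np.stack([padded[:-2], padded[2:]]), axis=0)
mask = np.isnan(arr)
arr[mask] = neighbor_mean[mask]
print(arr)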

Create a permutation with same autocorrelation

My question is similar to this one, but with the difference that I need an array of zeros and ones as output. I have an original time series of zeros and ones with high autocorrelation (i.e., the ones are clustered). For some significance testing I need to create random arrays with the same number of zeros and ones, i.e. permutations of the original array. However, the autocorrelation should also stay the same as/similar to the original, so a simple np.random.permutation does not help me.
Since I'm doing multiple realizations I would need a solution which is as fast as possible. Any help is much appreciated.
According to the question to which you refer, you would like to permute x such that
np.corrcoef(x[0: len(x) - 1], x[1: ])[0][1]
doesn't change.
Say the sequence x is composed of
z1 o1 z2 o2 z3 o3 ... zk ok,
where each zi is a sequence of 0s, and each oi is a sequence of 1s. (There are four cases, depending on whether the sequence starts with 0s or 1s, and whether it ends with 0s or 1s, but they're all the same in principle).
Suppose p and q are each permutations of {1, ..., k}, and consider the sequence
zp[1] oq[1] zp[2] oq[2] zp[3] oq[3] ... zp[k] oq[k],
that is, each of the run-length sub-sequences of 0s and 1s have been permuted internally.
For example, suppose the original sequence is
0, 0, 0, 1, 1, 0, 1.
Then
0, 0, 0, 1, 0, 1, 1,
is such a permutation, as well as
0, 1, 1, 0, 0, 0, 1,
and
0, 1, 0, 0, 0, 1, 1.
Performing this permutation will not change the correlation:
within each run, the differences are the same
the boundaries between the runs are the same as before
Therefore, this gives a way to generate permutations which do not affect the correlation. (Also, see at the end another far simpler and more efficient way which can work in many common cases.)
We start with the function preprocess, which takes the sequence, and returns a tuple starts_with_zero, zeros, ones, indicating, respectively,
whether x began with 0
The 0 runs
The 1 runs
In code, this is
import numpy as np
import itertools
def preprocess(x):
    def find_runs(x, val):
        matches = np.concatenate(([0], np.equal(x, val).view(np.int8), [0]))
        absdiff = np.abs(np.diff(matches))
        ranges = np.where(absdiff == 1)[0].reshape(-1, 2)
        return ranges[:, 1] - ranges[:, 0]

    starts_with_zero = x[0] == 0
    run_lengths_0 = find_runs(x, 0)
    run_lengths_1 = find_runs(x, 1)
    zeros = [np.zeros(l) for l in run_lengths_0]
    ones = [np.ones(l) for l in run_lengths_1]
    return starts_with_zero, zeros, ones
(This function borrows from an answer to this question.)
To use this function, you could do, e.g.,
x = (np.random.uniform(size=10000) > 0.2).astype(int)
starts_with_zero, zeros, ones = preprocess(x)
Now we write a function to permute internally the 0 and 1 runs, and concatenate the results:
def get_next_permutation(starts_with_zero, zeros, ones):
    np.random.shuffle(zeros)
    np.random.shuffle(ones)
    if starts_with_zero:
        all_ = itertools.zip_longest(zeros, ones, fillvalue=np.array([]))
    else:
        all_ = itertools.zip_longest(ones, zeros, fillvalue=np.array([]))
    all_ = [e for p in all_ for e in p]
    x_tag = np.concatenate(all_)
    return x_tag
To generate another permutation (with same correlation), you would use
x_tag = get_next_permutation(starts_with_zero, zeros, ones)
To generate many permutations, you could do:
starts_with_zero, zeros, ones = preprocess(x)
for i in range(<number of permutations needed>):
    x_tag = get_next_permutation(starts_with_zero, zeros, ones)
Example
Suppose we run
x = (np.random.uniform(size=10000) > 0.2).astype(int)
print(np.corrcoef(x[0: len(x) - 1], x[1:])[0][1])
starts_with_zero, zeros, ones = preprocess(x)
for i in range(10):
    x_tag = get_next_permutation(starts_with_zero, zeros, ones)
    print(x_tag[:50])
    print(np.corrcoef(x_tag[0: len(x_tag) - 1], x_tag[1:])[0][1])
Then we get:
0.00674330566615
[ 1. 1. 1. 1. 1. 0. 0. 1. 1. 1. 1. 1. 1. 1. 1. 0. 1. 0.
1. 1. 0. 1. 1. 1. 1. 0. 1. 1. 0. 0. 1. 0. 1. 1. 1. 1.
0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
0.00674330566615
[ 1. 0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 0. 1. 1. 0. 1. 1. 1. 1. 1. 1. 0. 0. 1. 0.
1. 1. 1. 1. 0. 0. 0. 1. 1. 1. 1. 1. 1. 1.]
0.00674330566615
[ 1. 1. 1. 1. 1. 0. 0. 1. 1. 1. 0. 0. 0. 0. 1. 0. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 1. 1. 0. 1. 1.
1. 1. 1. 1. 1. 1. 0. 1. 0. 0. 1. 1. 1. 0.]
0.00674330566615
[ 1. 1. 1. 1. 0. 1. 0. 1. 1. 1. 1. 1. 1. 1. 0. 1. 1. 0.
1. 1. 1. 1. 1. 0. 0. 1. 1. 1. 1. 0. 1. 1. 1. 1. 1. 1.
1. 1. 1. 0. 1. 1. 1. 1. 1. 1. 1. 0. 0. 1.]
0.00674330566615
[ 1. 1. 1. 1. 0. 0. 0. 0. 1. 1. 0. 1. 1. 0. 0. 1. 0. 1.
1. 1. 0. 1. 0. 1. 1. 0. 1. 1. 1. 1. 1. 1. 1. 0. 0. 1.
0. 1. 1. 1. 1. 1. 1. 0. 1. 0. 1. 1. 1. 1.]
0.00674330566615
[ 1. 1. 0. 1. 1. 1. 0. 0. 1. 1. 0. 1. 1. 0. 0. 1. 1. 0.
1. 1. 1. 0. 1. 1. 1. 1. 0. 0. 0. 1. 1. 1. 1. 1. 1. 1.
0. 1. 1. 1. 1. 0. 1. 1. 0. 1. 0. 0. 1. 1.]
0.00674330566615
[ 1. 1. 0. 0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 0. 1. 1. 1. 0. 1. 1. 1. 1. 1.
1. 1. 0. 1. 0. 1. 1. 0. 1. 0. 1. 1. 1. 1.]
0.00674330566615
[ 1. 1. 1. 1. 1. 1. 1. 0. 1. 1. 0. 1. 1. 0. 1. 0. 1. 1.
1. 1. 1. 0. 1. 0. 1. 1. 0. 1. 1. 1. 0. 1. 1. 1. 1. 0.
0. 1. 1. 1. 0. 1. 1. 0. 1. 1. 0. 1. 1. 1.]
0.00674330566615
[ 1. 1. 1. 0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 1. 1. 1. 0. 1. 1. 1.
0. 1. 1. 1. 1. 1. 1. 0. 1. 1. 0. 1. 1. 1.]
0.00674330566615
[ 1. 1. 0. 1. 1. 1. 1. 0. 1. 1. 1. 1. 1. 1. 0. 1. 0. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 0. 1. 0. 1. 0. 1. 1. 1. 1. 1. 1. 0.]
Note that there is a much simpler solution if
your sequence is of length n,
some number m has m << n, and
m! is much larger than the number of permutations you need.
In this case, simply divide your sequence into m (approximately) equal parts, and permute them randomly. As noted before, only the m - 1 boundaries change in a way that potentially affects the correlations. Since m << n, this is negligible.
For some numbers, say you have a sequence with 10000 elements. It is known that 20! = 2432902008176640000, which is far more permutations than you probably need. By dividing your sequence into 20 parts and permuting them, you're affecting at most 19 boundaries out of 10000 positions, which might be small enough. For these sizes, this is the method I'd use.
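A sketch of that chunking shortcut (chunk_permute is a hypothetical helper name; np.array_split handles the "approximately equal parts"):

import numpy as np

def chunk_permute(x, m, rng=None):
    # Split x into m nearly equal chunks and concatenate them in random order.
    rng = np.random.default_rng() if rng is None else rng
    parts = np.array_split(x, m)
    order = rng.permutation(m)
    return np.concatenate([parts[i] for i in order])

x = (np.random.uniform(size=10000) > 0.2).astype(int)
x_tag = chunk_permute(x, 20)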
