Removing NaN rows from a three-dimensional array - Python

How can I remove the NaN rows from the array below using indices (since I will need to remove the same rows from a different array)?
array([[[nan,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.]],

       [[ 0.,  0.,  0.,  0.],
        [ 0., nan,  0.,  0.],
        [ 0.,  0.,  0.,  0.]]])
I can select the rows to be removed using the command
a[np.isnan(a).any(axis=2)]
But applying what I would normally use on a 2D array does not produce the desired result; the 3D structure is lost.
a[~np.isnan(a).any(axis=2)]
array([[0., 0., 0., 0.],
       [0., 0., 0., 0.],
       [0., 0., 0., 0.],
       [0., 0., 0., 0.]])
How can I remove the rows I want using the indices obtained from my first command?

You need to reshape:
a[~np.isnan(a).any(axis=2)].reshape(a.shape[0], -1, a.shape[2])
But be aware that the number of NaN rows in each 2D subarray must be the same to get a valid new 3D array.
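For example, a minimal sketch reproducing the array above (each 2D subarray contains exactly one NaN row, so the reshape is valid); the same boolean mask can then be reused to drop the matching rows from the other array:

import numpy as np

a = np.array([[[np.nan, 0., 0., 0.],
               [0., 0., 0., 0.],
               [0., 0., 0., 0.]],
              [[0., 0., 0., 0.],
               [0., np.nan, 0., 0.],
               [0., 0., 0., 0.]]])

keep = ~np.isnan(a).any(axis=2)                  # boolean mask of the rows to keep
b = a[keep].reshape(a.shape[0], -1, a.shape[2])
print(b.shape)                                   # (2, 2, 4)

# Reuse the same mask on a second array of the same shape:
other = np.zeros_like(a)
other_clean = other[keep].reshape(a.shape[0], -1, a.shape[2])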

Related

Creating many state vectors and saving them in a file

I want to create m matrices, each an n x 1 numpy array. Each matrix should have only two nonzero entries, in two consecutive rows, with all other rows 0: the first matrix should have 1s in rows 0 and 1 with the rest 0, and similarly the last matrix should have 1s in rows n-2 and n-1 with the rest 0. So from one matrix to the next, the nonzero elements shift down by two rows. Finally, I would like them stored in a dictionary or in a file.
What would be a neat way to do this?
Is this what you're looking for?
In [2]: num_rows = 10  # should be divisible by 2

In [3]: np.repeat(np.eye(num_rows // 2), 2, axis=0)
Out[3]:
array([[1., 0., 0., 0., 0.],
       [1., 0., 0., 0., 0.],
       [0., 1., 0., 0., 0.],
       [0., 1., 0., 0., 0.],
       [0., 0., 1., 0., 0.],
       [0., 0., 1., 0., 0.],
       [0., 0., 0., 1., 0.],
       [0., 0., 0., 1., 0.],
       [0., 0., 0., 0., 1.],
       [0., 0., 0., 0., 1.]])
In terms of storage in a file, you can use np.save and np.load.
Note that the default data type for np.eye is float64. If you expect your values to stay small when you begin integrating (or whatever you're planning to do with your state vectors), I'd recommend setting the data type appropriately (e.g. np.uint8 for non-negative integers below 256).
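A minimal sketch putting the pieces together (the filename is just a placeholder):

import numpy as np

num_rows = 10  # assumed even, as above
vectors = np.repeat(np.eye(num_rows // 2, dtype=np.uint8), 2, axis=0)

np.save('state_vectors.npy', vectors)  # writes the whole (10, 5) array to disk
loaded = np.load('state_vectors.npy')

first = loaded[:, [0]]  # the first n x 1 state vector: 1s in rows 0 and 1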

Add homogeneous coordinate (x0=1) to images in numpy

I have 7 images of size 29x29, and I want to add one homogeneous coordinate (augment them with the feature x0=1) to all 7 images, but I am not sure how to do it.
My original image dimensions are
images.shape
# (7, 29, 29)
What I have tried is zipping with np.ones(), but that ends up making a separate array for the first feature, resulting in shape (7, 2):
np.array([list(a) for a in zip(np.ones([7,1]), images_all[:,:])]).shape
# (7, 2)
#
# [[array([1.]),
#   array([[0., 0., 0., ..., 0., 0., 0.],
#          [0., 0., 0., ..., 0., 0., 0.],
#          ...
As you can see, it adds the 1 as a separate array instead of prepending it as the first element of each image.
Also, I tried to loop through the images and insert a 1 at the first element, but that makes the dimension 30 and raises an error:
for i in range(len(images)):
    images[i][0] = np.insert(images[i][0], 0, 1., axis=0)

ValueError: could not broadcast input array from shape (30) into shape (29)
First create a larger array of ones, then reshape the original array and copy it into the larger one:
padded_images = np.ones((7, 29*29 + 1))           # column 0 stays 1: the x0 feature
padded_images[:, 1:] = images.reshape(7, 29*29)   # flattened images fill the rest
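An equivalent sketch using np.concatenate, with a zero array standing in for the real images:

import numpy as np

images = np.zeros((7, 29, 29))          # stand-in with the shapes from the question
flat = images.reshape(len(images), -1)  # (7, 841)
augmented = np.concatenate([np.ones((len(images), 1)), flat], axis=1)
print(augmented.shape)                  # (7, 842); column 0 is the x0=1 feature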

Indexing numpy matrix

So let's say I have a (4, 10) array initialized to zeros, and an input array of the form [2, 7, 0, 3]. The input array should modify the zero matrix to look like this:
[[0,0,1,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,1,0,0],
[1,0,0,0,0,0,0,0,0,0],
[0,0,0,1,0,0,0,0,0,0]]
I know I can do that by looping through the input target and indexing the matrix with something like matrix[i][input_target[i]], but I tried to do it without a loop, with something like
matrix[:, input_target] = 1
but that sets the selected columns to 1 in every row. Apparently the way to do it is
matrix[range(input_target.shape[0]), input_target] = 1
The question is: why does this work, and not the version using the colon?
Thanks!
You only wish to update one column for each row. Therefore, with advanced indexing you must explicitly provide those row identifiers:
A = np.zeros((4, 10))
A[np.arange(A.shape[0]), [2, 7, 0, 3]] = 1
Result:
array([[0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
       [1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]])
Using a colon for the row indexer will tell NumPy to update all rows for the specified columns:
A[:, [2, 7, 0, 3]] = 1
array([[1., 0., 1., 1., 0., 0., 0., 1., 0., 0.],
       [1., 0., 1., 1., 0., 0., 0., 1., 0., 0.],
       [1., 0., 1., 1., 0., 0., 0., 1., 0., 0.],
       [1., 0., 1., 1., 0., 0., 0., 1., 0., 0.]])
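As a side note (not part of the original answer), the same one-hot rows can also be built by fancy-indexing an identity matrix:

import numpy as np

input_target = np.array([2, 7, 0, 3])
matrix = np.eye(10)[input_target]  # row i is the one-hot encoding of input_target[i]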

Vectorize Sequences explanation

Studying Deep Learning with Python, I can't comprehend the following simple piece of code, which encodes integer sequences into a binary matrix.
def vectorize_sequences(sequences, dimension=10000):
    # Create an all-zero matrix of shape (len(sequences), dimension)
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.  # set specific indices of results[i] to 1s
    return results
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
x_train = vectorize_sequences(train_data)
And the output of x_train is something like
x_train[0]
array([0., 1., 1., ..., 0., 0., 0.])
Can someone shed some light on the existence of the 0.'s in the x_train array, when only 1.'s are assigned on each iteration over i? I mean, shouldn't they all be 1's?
The script transforms your dataset into a binary vector space model. Let's dissect things one by one.
First, if we examine the train_data content, we see that each review is represented as a sequence of word ids, where each word id corresponds to one specific word:
print(train_data[0]) # print the first review
[1, 14, 22, 16, 43, 530, 973, ..., 5345, 19, 178, 32]
Now, this would be very difficult to feed to the network. The lengths of the reviews vary, and fractional values between any two integers have no meaning (e.g. what would an output of 43.5 mean?).
So what we can do is create a single long vector, the size of the entire dictionary (dimension=10000 in your example). We then associate each element/index of this vector with one word/word_id, so the word with word id 14 is now represented by the 14th element of this vector.
Each element will be either 0 (the word is not present in the review) or 1 (the word is present). We can even treat this as a probability, which gives meaning to values between 0 and 1. Furthermore, every review is now represented by this very long (sparse) vector, which has a constant length for every review.
So on a smaller scale if:
word word_id
I -> 0
you -> 1
he -> 2
be -> 3
eat -> 4
happy -> 5
sad -> 6
banana -> 7
a -> 8
the sentences would then be processed in the following way:
I be happy -> [0,3,5] -> [1,0,0,1,0,1,0,0,0]
I eat a banana. -> [0,4,8,7] -> [1,0,0,0,1,0,0,1,1]
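As a quick check, here is the same loop as in vectorize_sequences run on this toy vocabulary (dimension 9, the two sentences from above):

import numpy as np

sentences = [[0, 3, 5], [0, 4, 8, 7]]  # "I be happy", "I eat a banana"
results = np.zeros((len(sentences), 9))
for i, sentence in enumerate(sentences):
    results[i, sentence] = 1.          # advanced indexing: sets several columns at once
print(results[0])                      # [1. 0. 0. 1. 0. 1. 0. 0. 0.]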
Now, I highlighted the word sparse. That means there will be a lot more zeros than ones. We can take advantage of that: instead of checking every word for whether it is contained in a review, we check the substantially smaller list of only those words that DO appear in our review.
Therefore, we can make things easy for ourselves and create a reviews × vocabulary matrix of zeros right away with np.zeros((len(sequences), dimension)), and then just go through the words in each review and flip the indicator to 1.0 at the position corresponding to each word:
results[review_id][word_id] = 1.0
So instead of doing 25000 x 10000 = 250,000,000 operations, we only do one per word: 5,967,841 of them. That's just ~2.4% of the original number of operations.
The for loop here does not process the whole matrix. As you can see, it enumerates the elements of the sequence, so it loops over only one dimension.
Let's take a simple example:
t = np.array([1,2,3,4,5,6,7,8,9])
r = np.zeros((len(t), 10))
Output
array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
Then we modify elements in the same way you did:
for i, s in enumerate(t):
    r[i, s] = 1.
array([[0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 1.]])
You can see that the for loop modified only len(t) elements, those with index [i, s] (in this case (0, 1), (1, 2), (2, 3), and so on).
import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

Calculate the area of two separate geometries in Python

I have been stumped on this problem for a while now and was wondering if anyone would be able to help. Let's say I have a binary image, as shown below, and I would like to count the black elements (zeros). The problem is that I want to know the number of elements associated with the 'background' and with the 'trapezoid' in the middle individually, so the output should be two values. What would be the easiest way to approach this? I have been trying to do it without using a mask, but is that even possible? I have the numpy and scipy libraries, if that helps.
You can use two functions from scipy.ndimage.measurements: label and find_objects.
First you invert the array, because the label function considers zero to be the background:
inverted = 1 - binary_image_array
Then you call label to find the different regions:
labeled_array, num_features = scipy.ndimage.measurements.label(inverted)
So, for this particular array, where you already know there are exactly two black blobs, you have the two regions in labeled_array.
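To then get the two element counts the question asks for, here is a minimal sketch with a toy stand-in image (the original image is not reproduced here):

import numpy as np
from scipy import ndimage

# Toy binary image: white (1) everywhere, with two separate black (0) regions
binary_image_array = np.ones((10, 10), dtype=int)
binary_image_array[0, :] = 0      # black region 1: the top edge (10 pixels)
binary_image_array[4:7, 4:7] = 0  # black region 2: a block in the middle (9 pixels)

labeled_array, num_features = ndimage.label(1 - binary_image_array)

# np.bincount counts pixels per label; index 0 is the white background
counts = np.bincount(labeled_array.ravel())[1:]
print(num_features, counts)  # 2 [10  9]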
Obviously, the scipy approach is a good answer.
I was thinking that you might be able to work with numpy.cumsum and numpy.diff to find an enclosed area.
The cumulative sum will be zero while you are in the black area, then increase by one for every pixel in the white area, be stable again while you traverse the enclosed area, then start increasing again, etc.
The second order difference then finds places where the jumps occur, and you are left with a "classified" map. No guarantee that this generalizes, just an idea.
import numpy

a = numpy.zeros((10, 10))
a[3:7, 3:7] = 1  # white square
a[4:6, 4:6] = 0  # black area enclosed inside it

y = numpy.cumsum(a, axis=0)
x = numpy.cumsum(a, axis=1)
yy = numpy.diff(y, n=2, axis=0)  # second-order difference down each column
xx = numpy.diff(x, n=2, axis=1)  # second-order difference along each row
numpy.dot(xx, yy)
array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 2., 2., 2., 2., 0., 0., 0.],
       [0., 0., 0., 2., 4., 4., 2., 0., 0., 0.],
       [0., 0., 0., 2., 4., 4., 2., 0., 0., 0.],
       [0., 0., 0., 2., 2., 2., 2., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
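Reading the counts off this toy result (again, just an idea with no claim of generality): the enclosed black hole shows up as the 4s, so its area could be counted with

result = numpy.dot(xx, yy)
numpy.sum(result == 4)  # 4, the number of pixels in the 2x2 enclosed hole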
