How to properly use numpy in keras - python

The question is:
In the Keras tutorial it uses an input x_train = np.random.random((100, 100, 100, 3)). That should mean there are 100 images, each of size [100, 100, 3], right?
So I thought that x_train[0][0] should represent the first channel of the first image (which should be [100, 100]), but x_train[0][0] in fact has a size of [100, 3]... so I'm confused: how can Keras take this [100, 100, 100, 3] numpy array as a set of images? Please help me out, thanks in advance.
Another question is:
How can I construct an input like this? Because when I do np.array([[100,100],[100,100]]), it becomes an array of [2, 100, 100].

Here is an explanation of how you can access your images.
X is a four-dimensional tensor. In mathematics, tensors are a generalization of vectors and matrices to higher-dimensional arrays.
Assuming the "channels last" data format:
1st axis = number of images
2nd axis = number of rows in a single image
3rd axis = number of columns in a single row
4th axis = number of channels of a certain pixel
Now you can access an image, row, column, and channel using indexing as follows (see the sketch below):
x[0] represents the first image
x[0][0] represents the first row of the first image
x[0][0][0] represents the first column of the first row of the first image
x[0][0][0][0] represents the red channel of the first column of the first row of the first image
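A quick sketch to confirm those shapes, using the same random x_train from the question. Note that a whole channel of one image is a slice along the last axis, not x_train[0][0]:

import numpy as np

x_train = np.random.random((100, 100, 100, 3))   # 100 images, 100x100 pixels, 3 channels

print(x_train[0].shape)        # (100, 100, 3) -> first image
print(x_train[0][0].shape)     # (100, 3)      -> first row of the first image
print(x_train[0][0][0].shape)  # (3,)          -> first pixel of that row
print(x_train[0][0][0][0])     # a scalar      -> red value of that pixel

red_channel = x_train[0][:, :, 0]   # shape (100, 100): the full red channel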

Related

How to convert an x^2-long list to a list of shape (x, x)

I'm in Python. I have a list of length 784 (which I extracted from a 28x28 image) and now I want to arrange it so I can use it in TensorFlow with a trained model. The problem is that the NN needs a 28x28-shaped array, but my current array has shape (784,).
I tried to use some for loops for this, but I had no luck in successfully creating a system to carry this out. Please help.
I figured out that I need to use this structure:
for i in range(res):
    for a in range(0, res):
        mnistFormat.append(grascalePix[a])  # mnistFormat is an initially empty list
                                            # and grascalePix has my 784 grayscale pixels
but I can't figure out what should go in the range functions of the for loops to make this possible.
For example, let's say I have a sample 4x4 image's pixel list:
grayscalePix = [255,255,255,255,255,100,83,200,255,50,60,255,30,1,46,255]
This is a row-by-row representation, which means the first 4 elements are the first row.
I want to arrange them into a list of shape (4, 4):
mnistFormat = [[255,255,255,255],[255,100,83,200],[255,50,60,255],[30,1,46,255]]
Just keep in mind this is a sample; the real data is 784 elements long, and I don't have much experience in numpy.
numpy might help you there very easily; note that len() needs the list as its argument, and np.sqrt returns a float, so it must be cast to int before being used as a shape:
mnistFormat = np.array(grascalePix).reshape((int(np.sqrt(len(grascalePix))), -1))
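Applied to the 4x4 sample from the question (variable names taken from there):

import numpy as np

grayscalePix = [255,255,255,255,255,100,83,200,255,50,60,255,30,1,46,255]

side = int(np.sqrt(len(grayscalePix)))              # 4 here, 28 for the real 784-long data
mnistFormat = np.array(grayscalePix).reshape((side, side))
print(mnistFormat[0])                               # [255 255 255 255], the first row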

Efficiently filter 3D matrix in numpy with variable 2D masks

I have a 3D numpy array points of dimensions [10000x3000x128], where the first dimension is the number of frames, the second dimension is the number of points in each frame, and the third dimension is a 128-element feature vector associated with each point. What I want to do is efficiently filter the points in each frame using a boolean 2D mask of dimensions [10000x3000], and for each of the selected points also take the related 128-dim feature vector. Moreover, the output still needs to be a 3D array, not a merged 2D array, and I'd like to avoid any for loop.
Currently what I'm doing is:
# example of points
points = np.random.random((10000, 3000, 128))
# fg, bg = 2D boolean np.arrays of shape (10000, 3000)
# init empty lists
fg_points, bg_points = [], []
for i in range(points.shape[0]):
    fg_mask_tmp, bg_mask_tmp = fg[i], bg[i]
    fg_points.append(points[i, fg_mask_tmp, :])
    bg_points.append(points[i, bg_mask_tmp, :])
fg_features, bg_features = np.array(fg_points), np.array(bg_points)
But this is quite a naive solution that can surely be improved in a more numpy-like way.
In addition, I also tried other solutions, such as:
fg_features = points[fg,:]
But this solution does not preserve the dimensions of the array, merging the first two dimensions, since the number of filtered points in each frame can vary.
Another solution I tried was to expand the 2D masks along a new last dimension of size 128, but without any success.
Does anyone know a possible efficient solution?
Thank you in advance for any help!
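One possible direction, sketched here under the assumption that zero-padding each frame up to the largest per-frame count is acceptable (the variable counts per frame make padding unavoidable if the result must stay rectangular):

import numpy as np

# small hypothetical example: 4 frames, 5 points, 8 features
points = np.random.random((4, 5, 8))
fg = np.random.random((4, 5)) > 0.5           # boolean mask per frame

counts = fg.sum(axis=1)                       # selected points per frame
padded = np.zeros((points.shape[0], counts.max(), points.shape[2]))

# valid[i, j] is True for the first counts[i] slots of frame i
valid = np.arange(counts.max()) < counts[:, None]

# points[fg] has shape (total_selected, 8) in frame order, so it fills
# the valid slots frame by frame without an explicit Python loop
padded[valid] = points[fg]

The same pattern applied to bg gives the background features.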

How to stack matrices in a single column table

I am trying to store 20 automatically generated matrices in a single column matrix, so this last matrix would be a 1x20 matrix.
For this I am using numpy and vstack, but it doesn't work; I keep getting the following error:
ValueError: all the input arrays must have same number of dimensions
even though all the matrices that I'm trying to stack together have the same dimensions (881 x 882).
So I'd like to know what is wrong about this, or whether there is any other way to stack all the matrices such that if one of them is needed I can access it easily.
Try changing the dimensions with the expand_dims and squeeze functions:
y = np.expand_dims(x, axis=0)  # shape (20,) becomes (1, 20)
y = np.squeeze(y, axis=0)      # shape (1, 20) becomes (20,)
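If the goal is simply to keep the 20 generated matrices together and pull any one of them back out by index, a minimal sketch with np.stack (which, unlike vstack, adds a new leading axis) may be simpler; the sizes are taken from the question:

import numpy as np

matrices = [np.random.random((881, 882)) for _ in range(20)]   # hypothetical generated matrices

stacked = np.stack(matrices, axis=0)   # shape (20, 881, 882)
fifth = stacked[4]                     # one matrix back out: shape (881, 882)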

How important are the rows vs columns in PCA?

So I have a dataset of pictures, where each column is a vector that can be reshaped into a 32x32 picture. The specific dimensions of my dataset are 1024 x 20000, meaning 20000 image samples.
Now, when I look at various ways of doing PCA without using the built-in functions from something like scikit-learn, people sometimes take the mean over one axis and subtract it from the original matrix before forming the covariance matrix, i.e. the following:
A = ...  # numpy array of shape (1024, 20000)
mean_rows = A.mean(0)
new_A = A - mean_rows
Other times people take the mean over the other axis and subtract that from the original matrix:
A = ...  # numpy array of shape (1024, 20000)
mean_cols = A.mean(1, keepdims=True)
new_A = A - mean_cols
Now my question is: when are you supposed to do which? Say I have a dataset like my example; which of the methods would I use?
I've looked at a variety of websites, such as https://machinelearningmastery.com/calculate-principal-component-analysis-scratch-python/ and
http://sebastianraschka.com/Articles/2014_pca_step_by_step.html
I think you're talking about centering the dataset so it has zero mean. You should average over the axis that indexes the observations.
In your example, you have 20,000 observations with 1,024 dimensions each, and your matrix lays out each observation as a column, so the mean you want is the average over the 20,000 columns (the "mean image"), which is then subtracted from every column.
In code that would be:
A = A - A.mean(axis=1, keepdims=True)
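A short sketch of the full centering step, together with the covariance matrix PCA is built on (shapes taken from the question; the random A is a stand-in for the real data):

import numpy as np

A = np.random.random((1024, 20000))                  # 20000 image vectors, one per column

mean_image = A.mean(axis=1, keepdims=True)           # shape (1024, 1): the average column
A_centered = A - mean_image                          # broadcasts across all 20000 columns
cov = A_centered @ A_centered.T / (A.shape[1] - 1)   # (1024, 1024) covariance matrix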

Reshape array in numpy

I have a numpy array of size 5000x32x32x3. The number 5000 is the number of images and each image is 32x32 in width and height and has 3 color channels.
Now I would like to create a numpy array of shape 5000x3x32x32 in a way that the data is preserved.
What I mean by preserving data is :
There should be 5000 data points in the resulting array
The 2nd dimension (of size 3) of the array correctly determines the color channel, i.e. all elements with index 0 along the 2nd dimension belong to the red channel, index 1 to the green channel, and index 2 to the blue channel.
Simply reshaping via np.reshape(data, (5000, 3, 32, 32)) would not work, as it would not preserve the channels but just reinterpret the data in the desired shape.
I think you are looking for a permutation of the axes; numpy.transpose can get this job done:
data = np.transpose(data, (0, 3, 1, 2))
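A quick sanity check that the channel data is preserved, using the shapes from the question with random stand-in data:

import numpy as np

data = np.random.random((5000, 32, 32, 3))
moved = np.transpose(data, (0, 3, 1, 2))   # shape (5000, 3, 32, 32)

# channel 2 (blue) of image 0 is unchanged, just relocated to axis 1
assert np.array_equal(moved[0, 2], data[0, :, :, 2])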
