Transform 1-D numpy array into 3D RGB array - python

What is the best way to transform a 1D array that contains RGB data into a 3D RGB array?
If the array were in this order, it would be easy (a single reshape):
RGB RGB RGB RGB...
However, my array is in the form
RRRR...GGGG....BBBB
or sometimes even,
GGGG....RRRR....BBBB (the result should still be RGB, not GRB)
I could of course derive some Python way to achieve this. I even tried a numpy solution; it works, but it is obviously a bad solution. I wonder what the best way is, maybe a built-in numpy function?
My solution:
for i in range(len(video_string) // 921600 - 1):  # Consecutive frames iterated over.
    frame = video_string[921600 * i: 921600 * (i + 1)]  # One frame (480 * 640 * 3 bytes).
    array = numpy.frombuffer(frame, dtype=numpy.uint8)  # Numpy array from one frame (fromstring is deprecated).
    r = array[:307200].reshape(480, 640)
    g = array[307200:614400].reshape(480, 640)
    b = array[614400:].reshape(480, 640)
    rgb = numpy.dstack((r, g, b))  # Stack the channels along a 3rd dimension, in RGB order.
Don't let the for loop confuse you; I just have frames concatenated into one string, like a video, which is not part of the question.
What did not help me: in that question, the r, g, b values are already 2D arrays, so it does not cover my situation.
Edit1: Desired array shape is 640 x 480 x 3

Reshape to 2D, transpose and then reshape back to 3D for RRRR...GGGG....BBBB form -
a1D.reshape(3,-1).T.reshape(height,-1,3) # assuming height is given
Or use reshape with Fortran order and then swap axes -
a1D.reshape(-1,height,3,order='F').swapaxes(0,1)
Sample run -
In [146]: np.random.seed(0)
In [147]: a = np.random.randint(11,99,(4,2,3)) # original rgb image
In [148]: a1D = np.ravel([a[...,0].ravel(), a[...,1].ravel(), a[...,2].ravel()])
In [149]: height = 4
In [150]: np.allclose(a, a1D.reshape(3,-1).T.reshape(height,-1,3))
Out[150]: True
In [151]: np.allclose(a, a1D.reshape(-1,height,3,order='F').swapaxes(0,1))
Out[151]: True
For the GGGG....RRRR....BBBB form, simply append: [...,[1,0,2]] to reorder the channels back to RGB, as in the sketch below.
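For instance, a minimal check of that case, rebuilding the flat G-R-B ordering from the same sample image as above:
import numpy as np

np.random.seed(0)
a = np.random.randint(11, 99, (4, 2, 3))  # original RGB image
# flatten in G, R, B channel order to simulate GGGG....RRRR....BBBB
a1D = np.ravel([a[..., 1].ravel(), a[..., 0].ravel(), a[..., 2].ravel()])
height = 4
out = a1D.reshape(3, -1).T.reshape(height, -1, 3)[..., [1, 0, 2]]
print(np.allclose(a, out))  # True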

Related

Subtract Mean from Multidimensional Numpy-Array

I'm currently learning about broadcasting in Numpy, and in the book I'm reading (Python for Data Analysis by Wes McKinney), the author mentions the following example to "demean" a two-dimensional array:
import numpy as np
arr = np.random.randn(4, 3)
print(arr.mean(0))
demeaned = arr - arr.mean(0)
print(demeaned)
print(demeaned.mean(0))
This effectively causes the array demeaned to have a mean of 0 along axis 0.
I had the idea to apply this to an image-like, three-dimensional array:
import numpy as np
arr = np.random.randint(0, 256, (400,400,3))
demeaned = arr - arr.mean(2)
Which of course failed, because according to the broadcasting rule, the trailing dimensions have to match, and that's not the case here:
print(arr.shape) # (400, 400, 3)
print(arr.mean(2).shape) # (400, 400)
Now, I have mostly gotten it to work by subtracting the mean from every single index in the third dimension of the array:
means = arr.mean(2)
demeaned = np.ones(arr.shape)
for i in range(3):
    demeaned[..., i] = arr[..., i] - means
print(demeaned.mean(0))
At this point, the returned values are very close to zero, and I think that's a precision error. Am I actually right with this thought, or is there another caveat that I missed?
Also, this doesn't seem to be the cleanest, most 'numpy' way to achieve what I wanted. Is there a function or a principle that I can use to improve the code?
As of numpy version 1.7.0, np.mean, and several other functions, accept a tuple in their axis parameter. This means that you can perform the operation on the planes of the image all at once:
m = arr.mean(axis=(0, 1))
This mean will have shape (3,), with one element for each plane of the image.
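Subtracting that per-plane mean then broadcasts cleanly against the last axis:
demeaned = arr - m  # (400, 400, 3) - (3,) broadcasts over the trailing dimension
print(demeaned.mean(axis=(0, 1)))  # all three values are ~0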
If you want to subtract the means of each pixel individually, you have to remember that broadcasting aligns shape tuples on the right edge. That means that you need to insert an extra dimension:
n = arr.mean(axis=2)
n = n.reshape(*n.shape, 1)
Or
n = arr.mean(axis=2)[..., None]
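A shorter equivalent is to pass keepdims=True, which keeps the reduced axis as a length-1 dimension so the subtraction broadcasts directly:
n = arr.mean(axis=2, keepdims=True)  # shape (400, 400, 1)
demeaned = arr - n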
Try np.apply_along_axis().
np.apply_along_axis(lambda x: x - np.mean(x), 2, arr)
Output: you get an array of the same shape, where each cell is demeaned along the axis you choose (the second parameter; here it is 2). Note that apply_along_axis runs a Python-level loop, so it is slower than the broadcasting approaches above.

Advanced Indexing in 3 Dimensional Numpy ndarray In Python

I have an ndarray of shape (68, 64, 64) called 'prediction'. These dimensions correspond to image_number, height, width. For each image, I have a tuple of length two containing coordinates that correspond to a particular location in each 64x64 image, for example (12, 45). I can stack these coordinates into another Numpy ndarray of shape (68,2) called 'locations'.
How can I construct a slice object or the necessary advanced-indexing indices to access these locations without using a loop? Looking for help on the syntax; using pure Numpy without loops is the goal.
Working loop structure
import numpy as np
# example code with just ones... The real arrays have 'real' data.
prediction = np.ones((68, 64, 64), dtype='float32')
locations = np.ones((68, 2), dtype='uint32')
selected_location_values = np.empty(prediction.shape[0], dtype='float32')
for index, (image, coordinates) in enumerate(zip(prediction, locations)):
    selected_location_values[index] = image[tuple(coordinates)]  # tuple() so this indexes one element, not rows
Desired approach
selected_location_values = np.empty(prediction.shape[0], dtype='float32')
correct_indexing = some_function_here(locations)  # ?????
selected_location_values = prediction[correct_indexing]
A straightforward indexing should work:
img = np.arange(locations.shape[0])  # which image each location belongs to
r = locations[:, 0]
c = locations[:, 1]
selected_location_values = prediction[img, r, c]
Fancy indexing works by selecting elements of the indexed array that correspond to the shape of the broadcasted indices. In this case, the indices are quite straightforward. You just need the range to tell you what image each location corresponds to.
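As a quick sanity check, here is a minimal sketch (with random stand-in data) comparing the fancy-indexing result against the loop:
import numpy as np

prediction = np.random.rand(68, 64, 64).astype('float32')
locations = np.random.randint(0, 64, size=(68, 2))

img = np.arange(prediction.shape[0])
vectorized = prediction[img, locations[:, 0], locations[:, 1]]

looped = np.array([image[r, c] for image, (r, c) in zip(prediction, locations)])
print(np.allclose(vectorized, looped))  # True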

How to resize images when they have been converted to a numpy array

Suppose we only have images as .npy files. Is it possible to resize the images without converting them back to image files? (I'm looking for a way that runs fast.)
For more info: I have the images, but I don't want to use the image files in the code, because my dataset is too large and running on the image files is slow. On the other hand, I'm not sure which size is best for my images, so I'm looking for a way to first convert the images to .npy files, save them, and then preprocess the .npy files, for example by resizing the images.
Try PIL, maybe it's fast enough for you.
import numpy as np
from PIL import Image

arr = np.load('img.npy')
img = Image.fromarray(arr)
img = img.resize(size=(100, 100))  # resize() returns a new image; it does not modify img in place
Note that you have to compute the aspect ratio if you want to keep it. Or you can use Image.thumbnail(), which can take an antialias filter.
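To get a NumPy array back out (e.g., to save it as .npy again), np.asarray converts the resized PIL image; a small sketch, with 'img_resized.npy' as a hypothetical output path:
resized_arr = np.asarray(img)  # back to a numpy array, now 100x100
np.save('img_resized.npy', resized_arr)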
There's also scikit-image, which works on NumPy arrays directly:
import skimage.transform as st
st.resize(arr, (100, 100))
I guess the other option is OpenCV.
If you are only dealing with numpy arrays, I think slicing would be enough.
Say the shape of the loaded numpy array is (m, n) (one channel), and the target shape is (a, b). Then the stride can be (s1, s2) = (m // a, n // b).
So the original array can be sliced by
new_array = old_array[::s1, ::s2]
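For example, a rough sketch of downsampling a (400, 400) array to (100, 100) this way:
import numpy as np

old_array = np.arange(400 * 400).reshape(400, 400)
m, n = old_array.shape
a, b = 100, 100
s1, s2 = m // a, n // b   # stride (4, 4)
new_array = old_array[::s1, ::s2]
print(new_array.shape)    # (100, 100)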
EDIT
Scaling an array up is also quite straightforward if you use masks for advanced indexing. For example, say the shape of the original array is (m, n) and the target shape is (a, b). Then, as an example:
a, b = 300, 200
m, n = 3, 4
original = np.linspace(1, 12, 12).reshape(3, 4)
(s1, s2) = (a // m, b // n)  # the scale factors
# the two index masks: each source row/column index repeated s1/s2 times
mask_x = np.concatenate([np.ones(s1) * ind for ind in range(m)])
mask_y = np.concatenate([np.ones(s2) * ind for ind in range(n)])
# make sure the residuals are taken into account
if len(mask_x) < a: mask_x = np.concatenate([mask_x, np.ones(a - len(mask_x)) * (m - 1)])
if len(mask_y) < b: mask_y = np.concatenate([mask_y, np.ones(b - len(mask_y)) * (n - 1)])
mask_x = mask_x.astype(int)
mask_y = mask_y.astype(int)
canvas = original[mask_x, :]
canvas = canvas[:, mask_y]
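When the target shape is an exact multiple of the original (so there are no residuals, as in this example), np.repeat expresses the same nearest-neighbour upscaling more compactly:
upscaled = np.repeat(np.repeat(original, s1, axis=0), s2, axis=1)
print(upscaled.shape)  # (300, 200), identical to canvas above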

How to merge arrays of different dimensions in Python?

I am analyzing some image datasets using Keras, and I am stuck because I have images of two different dimensions. features has 14637 images of shape (10,10,3) and features2 has shape (10,10,100).
Is there any way I can merge/concatenate these two datasets?
If features and features2 contain the features of the same batch of images, that is, features[i] corresponds to the same image as features2[i] for each i, then it makes sense to group the features into a single array using the numpy function concatenate():
newArray = np.concatenate((features, features2), axis=3)
where 3 is the axis along which the arrays will be concatenated. In this case, you'll end up with a new array of shape (14637, 10, 10, 103).
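For instance, with stand-in arrays of the stated shapes:
import numpy as np

features = np.zeros((14637, 10, 10, 3))
features2 = np.zeros((14637, 10, 10, 100))
newArray = np.concatenate((features, features2), axis=3)
print(newArray.shape)  # (14637, 10, 10, 103)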
However, if they refer to completely different batches of images and you would like to merge them on the first axis, so that the 14637 images of features2 are placed after the first 14637 images, then there is no way you can end up with a single array, since numpy arrays are structured as matrices, not as lists of arbitrary objects.
For instance, if you try to execute:
>>> a = np.array([[0, 1, 2]])  # shape = (1, 3)
>>> b = np.array([[0, 1]])     # shape = (1, 2)
>>> c = np.concatenate((a, b), axis=0)
Then, you'll get:
ValueError: all the input array dimensions except for the concatenation axis must match exactly
since you are concatenating along axis = 0 but axis 1's dimensions differ.
If dealing with numpy arrays, you should be able to use the concatenate function and specify the axis along which the data should be merged. Basically: np.concatenate((array_a, array_b), axis=2), where axis=2 is the channel axis of a single (10, 10, c) image (use axis=3 for the batched (N, 10, 10, c) arrays).
I think it would be better if you use a class.
class your_class:
    array_1 = []
    array_2 = []

final_array = []
for x in range(len(your_previous_one_array)):
    temp_class = your_class()  # instantiate, so each element holds its own data
    temp_class.array_1 = your_previous_one_array[x]
    temp_class.array_2 = your_previous_two_array[x]
    final_array.append(temp_class)

Python: extract a 2D array from a 3D array

I have a 3D numpy array of shape (1L, 420L, 580L); the 2nd and 3rd dimensions form a grayscale image that I want to display using OpenCV. How do I pull the 2D array out of the 3D array?
I created a short routine to do this, but I bet there is a better way.
# helper function to remove the 1st dimension
def pull_image(in_array):
    rows = in_array.shape[1]  # vertical
    cols = in_array.shape[2]  # horizontal
    out_array = np.zeros((rows, cols), np.uint8)  # new array to hold the image data
    for r in xrange(rows):
        for c in xrange(cols):
            out_array[r, c] = in_array[0, r, c]
    return out_array
If you always only have the first dimension == 1, then you could simply reshape the array...
if in_array.shape[0] == 1:
    return in_array.reshape(in_array.shape[1:])
otherwise, you can simply index the first plane...
else:
    return in_array[0, :, :]
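Equivalently, np.squeeze removes the length-1 axis in one call (it raises an error if that axis is not length 1, which makes the assumption explicit); a minimal sketch with a stand-in array:
import numpy as np

in_array = np.zeros((1, 420, 580), dtype=np.uint8)  # stand-in for the real data
img2d = np.squeeze(in_array, axis=0)  # (1, 420, 580) -> (420, 580)
print(img2d.shape)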
