I have an array img with shape 64x128x512x3 that is concatenated from three images of shape 64x128x512. I want to compute the mean of each image individually, given the array img. Hence, I wrote the code below:
import numpy as np
img_means = np.mean(img, axis=(0, 1, 2))
Is it correct? My expected result is that img_means[0,:,:,:] is the mean of the first image, img_means[1,:,:,:] the mean of the second image, and img_means[2,:,:,:] of the third image.
Yes, it is correct, but note that img_means is just an array of three numbers (each one the mean of the corresponding image), so you would index it as img_means[0], img_means[1], and img_means[2] rather than img_means[0,:,:,:].
If that line is not working for you in Python 3.x, do it like this. First generate the data:
import numpy as np
img = np.arange(64 * 128 * 512 * 3).reshape(64, 128, 512, 3)
And this is what you want:
img_means = [img[:, :, :, i].mean() for i in range(img.shape[3])]
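As a quick sanity check, here is a minimal sketch showing that both approaches give the same three numbers:
import numpy as np
img = np.arange(64 * 128 * 512 * 3).reshape(64, 128, 512, 3)
# one mean per image: both results have shape (3,)
means_a = np.mean(img, axis=(0, 1, 2))
means_b = np.array([img[:, :, :, i].mean() for i in range(img.shape[3])])
print(np.allclose(means_a, means_b))  # True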
I'm trying to change an image into a matrix. The image is loaded using OpenCV-Python and successfully converted to a NumPy matrix. Now I need each pixel's squared value, but it does not give the correct result. I checked three methods to get the square of a NumPy matrix: np.square(mat), mat*mat, and np.power(mat, 2). All of them work when I use an example array like numpy.array([[1,2],[3,4]]).
But, as shown below, they do not give the correct values when I use the NumPy matrix converted from the OpenCV image. For example, the B value at pixel [0, 0] is 80 (example below), and the squared value is 0 for all three methods. What is the problem? Of course, the value I want is 80*80 = 1600.
The dtype of the array OpenCV gives you is numpy.uint8, so the maximum value is 255 and arithmetic wraps around modulo 256. You can cast the array to numpy.int32 (or int64) and the values will be right, as follows:
B = B.astype(np.int32)
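As a minimal demonstration of the wraparound and the fix (80*80 = 6400, and 6400 mod 256 = 0, which is exactly why the squares came out as 0):
import numpy as np
B = np.array([[80]], dtype=np.uint8)  # OpenCV loads images as uint8
print(np.square(B))                   # [[0]] -- 6400 wraps around modulo 256
B = B.astype(np.int32)                # widen the dtype before squaring
print(np.square(B))                   # [[6400]]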
I have an array of variable length filled with 2D coordinate points (coming from a point cloud) which are distributed around (0,0), and I want to convert them into a 2D matrix (= grayscale image).
# have
array = [(1.0,1.1),(0.0,0.0),...]
# want
matrix = [[0,100,...],[255,255,...],...]
How would I achieve this using Python and NumPy?
Looks like matplotlib.pyplot.hist2d is what you are looking for.
It basically bins your data into 2-dimensional bins (with a size of your choice).
The documentation is here, and a working example is given below.
import numpy as np
import matplotlib.pyplot as plt
data = [np.random.randn(1000), np.random.randn(1000)]
plt.scatter(data[0], data[1])
Then you can call hist2d on your data, for instance like this:
plt.hist2d(data[0], data[1], bins=20)
Note that the arguments of hist2d are two 1-dimensional arrays, so you will have to do a bit of reshaping of your data before feeding it to hist2d.
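For instance, if the points are stored as a list of (x, y) tuples as in the question, a minimal sketch of that reshaping (with made-up example points) could look like this:
import numpy as np
import matplotlib.pyplot as plt
points = [(1.0, 1.1), (0.0, 0.0), (0.5, -0.3)]  # example data in the question's format
xy = np.asarray(points)  # shape (N, 2)
plt.hist2d(xy[:, 0], xy[:, 1], bins=20)
plt.show()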
A quick solution using only NumPy, without the need for matplotlib and therefore without plots:
import numpy as np
# given an (N, 2) array "array" and a desired image shape "[x, y]"
# (np.histogram2d returns the counts plus the two bin-edge arrays)
matrix, xedges, yedges = np.histogram2d(array[:, 0], array[:, 1], bins=[x, y])
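If you then want the 0..255 grayscale values from the question rather than raw counts, one possible scaling (an assumption about the desired output) is:
import numpy as np
array = np.array([(1.0, 1.1), (0.0, 0.0), (0.2, -0.4)])  # example points
x, y = 8, 8                                              # desired image shape
counts, _, _ = np.histogram2d(array[:, 0], array[:, 1], bins=[x, y])
matrix = (255 * counts / counts.max()).astype(np.uint8)  # scale counts to 0..255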
The specific problem I am trying to solve is: I have a binary image (a binary map) that I want to generate a heatmap (density map) for. My idea is to get the 2D array of this image; let's say it is 12x12:
a = np.random.randint(2, size=(12, 12))
Then I index and process it with a fixed-size submatrix (say 3x3); for every submatrix, a pixel-percentage value is calculated (nonzero pixels / total pixels):
submatrix = a[0:3, 0:3]
pixel_density = np.count_nonzero(submatrix) / submatrix.size
In the end, all the percentage values make up a new 2D array (a smaller, 4x4 density array) that represents the density estimate of the original image. The lower resolution is fine because the data it will be compared to has a lower resolution as well. I am not sure how to do this with NumPy, especially the indexing part. Also, if there is a better way to generate such a heatmap, please let me know as well.
Thank you!
Maybe a 2-D convolution? Basically this will sweep the b matrix (which is just ones, below) across the a matrix, so it will do the summation you were looking for. This link has a visual demo of convolution near the bottom.
import numpy as np
from scipy import signal
a = np.random.randint(2, size=(12, 12))
b = np.ones((4, 4))
signal.convolve2d(a, b, mode='valid') / b.sum()
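Note that 'valid' convolution slides one pixel at a time, so a 4x4 kernel on a 12x12 input yields a 9x9 map. If you specifically want the non-overlapping 3x3 blocks from the question (giving the 4x4 density array), here is a reshape-based sketch, assuming the image size is a multiple of the block size:
import numpy as np
a = np.random.randint(2, size=(12, 12))  # binary image
k = 3                                    # block size from the question
# group into non-overlapping k x k blocks, then take each block's mean;
# for a 0/1 image the mean is exactly the nonzero-pixel fraction
density = a.reshape(12 // k, k, 12 // k, k).mean(axis=(1, 3))
print(density.shape)  # (4, 4)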
Suppose I have an image of shape 512x512. I need to add 8 more rows and 8 more columns to make its shape 520x520, so that the image can be divided into segments of shape 40x40.
Using this code:
import numpy as np
from skimage.util.shape import view_as_blocks
import gdal
ds = gdal.Open(r'D:\512.tiff')  # raw string so the backslash is not treated as an escape
band = ds.GetRasterBand(1)
arr1 = band.ReadAsArray()
arr1.shape
>>(512,512)
arr2 = np.zeros((8,512), dtype='float32')
arr3=np.vstack((arr1,arr2))
arr4=np.zeros((520,8), dtype='float32')
arr=np.hstack((arr3,arr4))
arr.shape
>>(520,520)
# Now I can use this command to divide the image into segments, each of shape 40x40:
img= view_as_blocks(arr, block_shape=(40,40))
My problem is this: I always want to divide the image into segments of shape 40x40, but my input image will not always have the same size (512x512); it could be 512x516, 529x517, or anything else. So the code should read the shape of the input image and automatically add however many rows and columns are needed to make the image divisible into segments of shape 40x40.
You can use the ceiling function for this:
import math
import numpy as np
new_width = int(math.ceil(float(arr1.shape[1]) / segment_width) * segment_width)
new_height = int(math.ceil(float(arr1.shape[0]) / segment_height) * segment_height)
new_arr1 = np.zeros((new_height, new_width), dtype=arr1.dtype)
new_arr1[:arr1.shape[0], :arr1.shape[1]] = arr1
The float conversion is just there to force float division; it is not necessary in Python 3.
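Putting it together, a minimal sketch (assuming segment_width = segment_height = 40, with a zeros array standing in for the GDAL raster):
import math
import numpy as np
from skimage.util.shape import view_as_blocks
segment_height = segment_width = 40
arr1 = np.zeros((529, 517), dtype='float32')  # stand-in for the image array
new_width = int(math.ceil(arr1.shape[1] / segment_width) * segment_width)
new_height = int(math.ceil(arr1.shape[0] / segment_height) * segment_height)
new_arr1 = np.zeros((new_height, new_width), dtype=arr1.dtype)
new_arr1[:arr1.shape[0], :arr1.shape[1]] = arr1
img = view_as_blocks(new_arr1, block_shape=(segment_height, segment_width))
print(img.shape)  # (14, 13, 40, 40) for a 529x517 input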
I have an image as a NumPy array of shape (224, 224, 4). Each pixel has 4 channels: r, g, b, alpha. I need to extract the (r, g, b) values of every pixel whose alpha channel is 255.
I thought I would first delete all elements of the array whose alpha value is < 255, and then extract only the first 3 values (r, g, b) of the remaining elements, but doing this with plain Python loops is very slow. Is there a fast way to do it using NumPy operations?
Something similar to this? https://stackoverflow.com/a/21017621/4747268
This should work: arr[arr[:, :, 3] == 255][:, :3]. The boolean mask flattens the first two axes, so the result is an (N, 4) array, from which you keep the first three columns.
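A quick check of that one-liner on synthetic data (the shapes are the main thing to verify):
import numpy as np
arr = np.random.randint(0, 256, size=(224, 224, 4), dtype=np.uint8)
rgb = arr[arr[:, :, 3] == 255][:, :3]
print(rgb.shape)  # (N, 3): one row per pixel whose alpha is 255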
Something like this?
import numpy as np
x = np.random.random((255, 255, 4))
y = np.where(x[:, :, 3] > 0.5)  # indices of pixels that pass the alpha test
res = x[y][:, 0:3]
where you have to adapt > 0.5 to your needs (e.g. == 255). The result will be a matrix with all the selected pixels stacked vertically.