I'm trying to convert an image to a matrix. The image is loaded with OpenCV-Python and successfully converted to a NumPy array. Now I need a matrix of each pixel's squared value, but it does not give the correct result. I checked three ways to square a NumPy array: np.square(mat), mat*mat, and np.power(mat, 2). All of them work correctly on an example array like numpy.array([[1,2],[3,4]]).
But, as shown below, they do not give the correct values when I use the NumPy array converted from the OpenCV image.
For example, the B value at pixel [0, 0] is 80, and the squared value is 0 for all three methods. What's the problem? Of course, the value I want is 80*80 = 6400.
Because OpenCV gives you a NumPy array of dtype uint8, the maximum value is 255 and the arithmetic wraps around modulo 256.
You can change the type to int32 (or int64) and the values will be right,
as follows:
B = B.astype(np.int32)
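To see the wrap-around concretely, here is a minimal sketch with a single made-up pixel value (not taken from your actual image):
import numpy as np

# uint8 arithmetic wraps around modulo 256
B = np.array([[80]], dtype=np.uint8)   # same dtype OpenCV uses for image data
print(np.square(B))                    # [[0]] because 80*80 = 6400 and 6400 % 256 == 0

B32 = B.astype(np.int32)               # promote the dtype before squaring
print(np.square(B32))                  # [[6400]]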
I have a labelled array obtained by using scipy measure.label on a binary 2-dimensional array. For argument's sake it might look like this:
[
[1,1,0,0,2],
[1,1,1,0,2],
[1,0,0,0,0],
[0,0,0,3,3]
]
I want to get the indices of each group of labels. So in this case:
[
[(0,0),(0,1),(1,0),(1,1),(1,2),(2,0)],
[(0,4),(1,4)],
[(3,3),(3,4)]
]
I can do this using built-in Python like so (n and m are the dimensions of the array):
import itertools
import numpy as np

_dict = {}
for coords in itertools.product(range(n), range(m)):
    _dict.setdefault(labelled_array[coords], []).append(coords)
blobs = [np.array(item) for item in _dict.values()]
This is very slow (about 10 times slower than the initial labelling of the binary array using measure.label!)
Scipy also has a function find_objects:
from scipy import ndimage
objs = ndimage.find_objects(labelled_array)
From what I can gather, though, this returns the bounding box for each group (object). I don't want the bounding box; I want the exact coordinates of each value in the group.
I have also tried using np.where for each label value. This is very slow.
It also seems to me that what I'm trying to do here is something like the minesweeper algorithm, so I suspect there must be an efficient solution using numpy or scipy.
Is there an efficient way to obtain these coordinates?
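As a rough sketch of one possible vectorized approach (the helper name group_coordinates is made up for illustration): sort every labelled coordinate by its label and split the sorted list at the label boundaries, which avoids the per-pixel Python loop.
import numpy as np

def group_coordinates(labelled_array):
    coords = np.argwhere(labelled_array > 0)             # (N, 2) coordinates of labelled pixels
    labels = labelled_array[coords[:, 0], coords[:, 1]]  # label of each coordinate
    order = np.argsort(labels, kind="stable")            # bring equal labels together
    coords, labels = coords[order], labels[order]
    splits = np.flatnonzero(np.diff(labels)) + 1         # indices where the label changes
    return np.split(coords, splits)                      # one coordinate array per label
For the example array above this returns the three coordinate groups listed in the question, in label order.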
The specific problem I am trying to solve is:
I have a binary image (a binary map) that I want to generate a heatmap (density map) for. My idea is to get the 2D array of this image, let's say it is 12x12:
import numpy as np

a = np.random.randint(20, size=(12, 12))
Then I index and process it with a fixed-size submatrix (let's say 3x3); for every submatrix, a pixel percentage value is calculated (nonzero pixels / total pixels):
submatrix = a[0:3, 0:3]
pixel_density = np.count_nonzero(submatrix) / submatrix.size
In the end, all the percentage values will make up a new 2D array (a smaller, 4x4 density array) that represents the density estimate of the original image. Lower resolution is fine, because the data it will be compared to has a lower resolution as well.
I am not sure how to do that with numpy, especially the indexing part. If there is a better way to generate a heatmap like this, please let me know as well.
Thank you!
Maybe a 2-D convolution? Basically this will sweep matrix a with matrix b, which below is just all 1s, so it will do the summation you were looking for. This link has a visual demo of convolution near the bottom.
import numpy as np
from scipy import signal
a = np.random.randint(2, size=(12, 12))
b = np.ones((4,4))
signal.convolve2d(a, b, 'valid') / b.sum()
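If non-overlapping blocks are what is wanted instead, so that the 12x12 input collapses to the 4x4 density array described in the question, a reshape-based sketch (assuming the image size is an exact multiple of the block size) could look like this:
import numpy as np

a = np.random.randint(2, size=(12, 12))
block = 3
# split into non-overlapping block x block tiles, then take the fraction of
# nonzero pixels in each tile
tiles = a.reshape(12 // block, block, 12 // block, block)
density = (tiles != 0).mean(axis=(1, 3))
print(density.shape)   # (4, 4)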
I'm importing grayscale images that are stored in RGBA (4-channel) format, using scikit-image.
from skimage import io
import matplotlib.pyplot as plt

example = io.imread("example.png", as_gray=True)
print(example.shape)
print(example)
plt.imshow(example)
I was expecting to get an array with values in the range 0-255. However, I found in the docs that the above method returns an array of 64-bit floating-point values.
Does this mean the values are already normalized (X / 255)? Or do I need to be aware of something else? Thanks in advance.
Min-max feature scaling (a.k.a. min-max normalization or unity-based normalization) is a technique that brings all values in a set into the range [0, 1] (or an arbitrary range [a, b]).
The mathematical definition of min-max normalization is:
X' = (X - X_min) / (X_max - X_min)
and, to scale into an arbitrary range [a, b]:
X' = a + (X - X_min) * (b - a) / (X_max - X_min)
Notice that calling np.max(example) will result in a value less than or equal to 1.0.
Notice that calling np.min(example) will return a value greater than or equal to 0.0.
Yes, the features have been normalized: in the equation above, X_min = 0 and X_max = 255 (the limits of the original 8-bit range), so every pixel value has effectively been divided by 255.
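If the original 0-255 scale is needed afterwards, a minimal sketch (assuming the source image really was 8-bit, as in the question) is to undo the scaling and cast back:
import numpy as np
from skimage import io

example = io.imread("example.png", as_gray=True)      # float64 values in [0, 1]
restored = (example * 255).round().astype(np.uint8)   # back to the 0-255 range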
I have an array img with shape 64x128x512x3 that is concatenated from three images of shape 64x128x512. I want to compute the mean of each image individually, given the array img. Hence, I wrote the code below:
import numpy as np
img_means = np.mean(img, axis=(0, 1, 2))
Is it correct? My expected result is that img_means[0,:,:,:] is the mean of the first image, img_means[1,:,:,:] the mean of the second image, and img_means[2,:,:,:] the mean of the third image.
Yes, it is correct, but note that img_means is just an array of three numbers (each one is the mean of the corresponding image).
If np.mean with a tuple of axes gives you trouble, do it like this.
First generate some example data:
import numpy as np
img = np.arange(64*128*512*3).reshape(64, 128, 512, 3)
And this is what you want:
img_means = [img[:, :, :, i].mean() for i in range(img.shape[3])]
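For what it is worth, a minimal check (using the example data above) that the tuple-axis form produces the same three per-image means:
import numpy as np

img = np.arange(64*128*512*3).reshape(64, 128, 512, 3)
per_channel = [img[:, :, :, i].mean() for i in range(img.shape[3])]
tuple_axis = np.mean(img, axis=(0, 1, 2))    # reduce the first three axes only
print(np.allclose(per_channel, tuple_axis))  # True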
I have an image's numpy array of shape (224,224,4). Each pixel has 4 channels: r, g, b, alpha. I need to extract the (r,g,b) values for each pixel where its alpha channel is 255.
I thought I would first delete all elements in the array where the alpha value is < 255, and then extract only the first 3 values (r, g, b) of the remaining elements, but doing this with plain Python loops is very slow. Is there a fast way to do it using numpy operations?
Something similar to this? https://stackoverflow.com/a/21017621/4747268
This should work: arr[arr[:,:,3]==255][:,:3]
something like this?
import numpy as np
x = np.random.random((255,255,4))
y = np.where(x[:,:,3] >0.5)
res = x[y][:,0:3]
where you have to adapt > 0.5 to your needs (e.g. == 255). The result will be a matrix with all the selected pixels stacked vertically.
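A minimal sketch of the same idea for an 8-bit RGBA image (the random data here is only a stand-in for the real array):
import numpy as np

img = np.random.randint(0, 256, size=(224, 224, 4), dtype=np.uint8)
mask = img[:, :, 3] == 255     # True where the pixel is fully opaque
rgb = img[mask][:, :3]         # shape (N, 3): the r, g, b values of those pixels
print(rgb.shape)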