I have a NumPy array storing color data as floats ranging from 0-1 that are rough divisions of 255. The array has a shape of (4096, 3). How do I multiply every number in the NumPy array by 255 and then round it to the nearest whole integer?
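A minimal sketch of one way to do this (the example data is a made-up stand-in for the real color array):

```python
import numpy as np

# Hypothetical stand-in for the (4096, 3) color array with values in [0, 1].
rng = np.random.default_rng(0)
colors = rng.random((4096, 3))

# Scale to the 0-255 range and round to the nearest whole integer.
# np.rint rounds halves to the nearest even value; uint8 holds 0-255 exactly.
rgb = np.rint(colors * 255).astype(np.uint8)
```

If you prefer conventional half-up rounding instead of np.rint's half-to-even behavior, `(colors * 255 + 0.5).astype(np.uint8)` gives that for non-negative inputs.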
I have a NumPy array of size (size_x, size_y) holding different values. These values are a Gaussian random field, and both dimensions are given.
I also have a NumPy array of size (nr_points, 2) holding two-dimensional coordinates; nr_points, the number of xy-coordinates in this array, is also given.
The sizes (size_x, size_y) differ from the bounding box of the points in the second array.
How do I efficiently scale and map the values of the first array to the points?
Here is a graphical sketch of the desired task.
Normalize the coordinate values to the index range of the field array, which will generally produce fractional (non-integer) coordinates.
scale = (field_array_size - 1) / (coord_max - coord_min)   # computed per axis
scaled_coords = coordinates * scale
normed_coords = scaled_coords - scaled_coords.min(axis=0)
coordinate x values should be scaled to the field array x dimension size
coordinate y values should be scaled to the field array y dimension size
You can only index the field array with integers so you have two choices:
round the new coordinates to zero decimal places, convert them to ints, then use them as indices
interpolate the field array values using the new coordinates
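A minimal numpy sketch of the rounding route; the field, the coordinates, and their value ranges are all made-up assumptions:

```python
import numpy as np

# Hypothetical field and points; sizes and value ranges are made up.
rng = np.random.default_rng(0)
size_x, size_y = 50, 40
field = rng.normal(size=(size_x, size_y))            # the Gaussian random field
coordinates = rng.uniform(-3.0, 7.0, size=(100, 2))  # (nr_points, 2) xy-coords

# Normalize each axis to the index range of the matching field dimension.
coord_min = coordinates.min(axis=0)
coord_max = coordinates.max(axis=0)
scale = (np.array(field.shape) - 1) / (coord_max - coord_min)
normed_coords = (coordinates - coord_min) * scale    # fractional indices

# Choice 1: round to the nearest integer index, then look up the values.
idx = np.rint(normed_coords).astype(int)
values = field[idx[:, 0], idx[:, 1]]
```

For choice 2, scipy.ndimage.map_coordinates can interpolate field at the fractional normed_coords instead of rounding them away.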
I have a very large binary array which I compress using
arr1 = np.random.randint(0,2,(100, 100))
bitArray = np.packbits(arr1)
How can I then multiply another numpy integer array by this packed array,
arr2 = np.random.randint(0,10,(100,100))
result = MULTIPLY(arr2,bitArray)
treating the values as standard ones and zeros such that the result would be the same as
np.dot(arr2,arr1)
without ever converting the bitarray out of the packed format?
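For reference, here is the round trip the question is trying to avoid: unpacking the bits back to a (100, 100) array of ones and zeros reproduces np.dot(arr2, arr1) exactly, which pins down the target result any packed-format trick would have to match:

```python
import numpy as np

# Hypothetical reproduction of the setup above, with a fixed seed.
rng = np.random.default_rng(0)
arr1 = rng.integers(0, 2, (100, 100)).astype(np.uint8)
bitArray = np.packbits(arr1)          # 1250 bytes for the 10000 bits
arr2 = rng.integers(0, 10, (100, 100))

# Unpack back to ones and zeros (the step the question wants to avoid).
unpacked = np.unpackbits(bitArray)[:arr1.size].reshape(arr1.shape)
result = arr2 @ unpacked
```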
I am working on a vision algorithm with OpenCV in Python. One of the components of it requires comparing points in color-space, where the x and y components are not integers. Our list of points is stored as ndarray with dtype = float64, and our numbers range from -10 to 10 give or take.
Part of our algorithm involves running a convex hull on some of the points in this space, but cv2.convexHull() requires an ndarray with dtype = int.
Given the narrow range of the values we are comparing, simple truncation causes us to lose ~60 bits of information. Is there any way to have numpy directly interpret the float array as an int array? Since the scale has no significance, I would like all 64 bits to be considered.
Is there any defined way to separate the exponent from the mantissa in a numpy float, without doing bitwise extraction for every element?
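One possible approach (a sketch, not necessarily what the algorithm needs): ndarray.view reinterprets the raw float64 bytes as int64 without copying or converting, and np.frexp splits floats into mantissa and exponent arrays without per-element bit extraction:

```python
import numpy as np

pts = np.array([-9.5, 0.25, 3.0, 10.0])   # hypothetical float64 points

# Reinterpret the raw 64-bit patterns as signed integers (no value conversion).
as_ints = pts.view(np.int64)

# Separate mantissa and exponent: pts == mantissa * 2**exponent,
# with |mantissa| in [0.5, 1) for nonzero values.
mantissa, exponent = np.frexp(pts)
```

Note that the bit-pattern reinterpretation only preserves numeric ordering for non-negative floats, which may matter if the ints feed into a geometric routine like a convex hull.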
"Part of our algorithm involves running a convex hull on some of the points in this space, but cv2.convexHull() requires an ndarray with dtype = int."
cv2.convexHull() also accepts NumPy arrays with float32 values.
Try cv2.convexHull(numpy.array(a, dtype='float32')), where a is a list of shape n*2 (n = number of points).
My numpy array (name: data) has the following shape: (10, 3, 256, 256).
It has 10 images, each with 3 color channels (RGB) and an image size of 256x256 pixels.
I want to compute the mean pixel value for each color channel of all 10 images. If I use the numpy function np.mean(data), I receive the mean over all pixel values. Using np.mean(data, axis=1) returns a numpy array of shape (10, 256, 256).
If I understand your question correctly, you want an array containing the mean value of each channel for each of the ten images (i.e. an array of shape (10, 3)). (Let me know in the comments if this is incorrect and I can edit this answer.)
If you are using numpy 1.7 or newer, you can pass multiple axes to np.mean as a tuple:
mean_values = data.mean(axis=(2,3))
Otherwise you will have to flatten the array first to get it into the correct shape.
mean_values = data.reshape((data.shape[0], data.shape[1], data.shape[2]*data.shape[3])).mean(axis=2)
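A quick check that the two routes agree, using random data as a stand-in for the real image stack:

```python
import numpy as np

# Hypothetical stand-in for the (10, 3, 256, 256) image stack.
rng = np.random.default_rng(0)
data = rng.random((10, 3, 256, 256))

# Per-image, per-channel mean over the two pixel axes (numpy >= 1.7).
mean_values = data.mean(axis=(2, 3))                    # shape (10, 3)

# Equivalent flatten-then-mean fallback for older numpy versions.
flat = data.reshape(data.shape[0], data.shape[1], -1)   # (10, 3, 65536)
mean_values_old = flat.mean(axis=2)
```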
I have been struggling to change a 2D numpy array into a 2D numpy matrix. I know that I can use numpy.asmatrix(x) to convert an array x into a matrix; however, the resulting matrix does not have the size I want. For example, I want a numpy.matrix of size (2, 10). It is easier for me to build each row of the matrix from separate numpy arrays, so I used numpy.append to combine two such arrays into a matrix. However, when I use numpy.asmatrix to turn this 2D array into a 2D matrix, the size is not what I expect (my desired matrix should have a size of 2*10, but when I convert the arrays to a matrix, the size is 1*2). Does anybody know how I can get the asmatrix result to my desired size?
Code (a and b are two numpy.matrix objects, each of size 1*10):
import random
import numpy
m=10
c=sorted(random.sample(range(m),2))
n1=numpy.array([a[0:c[0]],b[c[0]:c[1]],a[c[1]:]])
n2=numpy.array([b[0:c[0]],a[c[0]:c[1]],b[c[1]:]])
n3=numpy.append(n1,n2)
n3=numpy.asmatrix(n3)
n1 and n2 are each arrays of shape 3, and n3 is a matrix of shape 6. I want n3 to be a matrix of size 2*10.
Thanks
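A minimal sketch of one way to get the desired (2, 10) result, with 1-D stand-ins for a and b (the values are hypothetical). Note that slicing a (1, 10) matrix with a[0:c[0]] selects rows, not elements, which is why the pieces do not combine as expected; flattening to 1-D arrays first and stacking the rows avoids that:

```python
import random
import numpy as np

# Hypothetical 1-D stand-ins for the 1*10 matrices a and b.
a = np.arange(10)
b = np.arange(10, 20)

m = 10
c = sorted(random.sample(range(m), 2))

# Build each row by concatenating element slices, then stack into (2, 10).
n1 = np.concatenate([a[0:c[0]], b[c[0]:c[1]], a[c[1]:]])
n2 = np.concatenate([b[0:c[0]], a[c[0]:c[1]], b[c[1]:]])
n3 = np.asmatrix(np.vstack([n1, n2]))   # matrix of shape (2, 10)
```

With real (1, 10) matrices, `np.asarray(a).ravel()` would produce the 1-D views this sketch starts from.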