Is there a floodFill function for Python/OpenCV that takes a list of seeds and starts changing the color of their neighbours? I know that SimpleCV has a function like that (SimpleCV floodFill). OpenCV's documentation lists two floodFill variants, one that uses a mask and one that doesn't, but I haven't been able to use the OpenCV floodFill function without a mask and with a list of seeds. Any help?
This is what I'm trying to do so far:
import numpy as np
import cv2 as cv

A = np.array([[0,1,1,0],[0,0,0,0],[1,1,1,1],[1,1,1,1]], np.uint8)
mask = np.ones((A.shape[0]+2, A.shape[1]+2), np.uint8)  # mask must be 2 pixels larger than A
mask[1:-1, 1:-1] = 0
cv.floodFill(A, mask, (3,0), 0, 0, 0, flags=4 | cv.FLOODFILL_MASK_ONLY)
print(mask)
returned mask:
[[1 1 1 1 1 1]
[1 1 0 0 1 1]
[1 1 1 1 1 1]
[1 0 0 0 0 1]
[1 0 0 0 0 1]
[1 1 1 1 1 1]]
Expected mask:
[[1 1 1 1 1 1]
[1 0 0 0 0 1]
[1 0 0 0 0 1]
[1 1 1 1 1 1]
[1 1 1 1 1 1]
[1 1 1 1 1 1]]
Original Image:
[[0 1 1 0]
[0 0 0 0]
[1 1 1 1]
[1 1 1 1]]
If you look closely at the documentation, that's one of the purposes of the mask. You can call the function (the second version) multiple times, each time with a different seed, and at the end the mask will contain all the areas that have been floodfilled. If a new seed belongs to an area that has already been filled, the call returns immediately.
Use the FLOODFILL_MASK_ONLY flag, and at the end paint your input image with the desired fill color using setTo() with the mask (you'll have to use a subimage of the mask, removing the first and last row and column). Note that your floodfill might produce different results depending on the order in which you process your seed points if you set loDiff or upDiff to something other than the default value of zero.
Also take a look at this.
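For illustration, here is a pure-NumPy sketch of what looping cv.floodFill over a seed list with FLOODFILL_MASK_ONLY accumulates. flood_fill_multi is a made-up helper, not an OpenCV API: each seed fills its 4-connected region of equal-valued pixels, and a seed landing in an already-filled area is skipped, mimicking the early return described above.

```python
import numpy as np
from collections import deque

def flood_fill_multi(img, seeds, new_val):
    """BFS flood fill from several seeds (4-connectivity).

    Hypothetical helper mimicking repeated cv.floodFill calls.
    Seeds are (x, y) like OpenCV: x = column, y = row.
    Returns the filled image and a mask of every filled pixel.
    """
    img = img.copy()
    mask = np.zeros(img.shape, dtype=np.uint8)
    h, w = img.shape
    for x, y in seeds:
        if mask[y, x]:            # seed lies in an already-filled area
            continue
        target = img[y, x]
        q = deque([(y, x)])
        while q:
            r, c = q.popleft()
            if r < 0 or r >= h or c < 0 or c >= w:
                continue
            if mask[r, c] or img[r, c] != target:
                continue
            mask[r, c] = 1
            img[r, c] = new_val
            q.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return img, mask

A = np.array([[0,1,1,0],[0,0,0,0],[1,1,1,1],[1,1,1,1]], np.uint8)
filled, mask = flood_fill_multi(A, [(3, 0)], 9)
print(mask)
# [[1 0 0 1]
#  [1 1 1 1]
#  [0 0 0 0]
#  [0 0 0 0]]
```

Seed (3, 0) is pixel A[0, 3] = 0, so the mask marks the 4-connected region of zeros reachable from there.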
I have a numpy 2D array (named lda_fit) of probabilities, where I want to replace each probability with 0 or 1 based on the max value in each row.
array([[0.06478282, 0.80609092, 0.06511851, 0.06400775],
[0.50386571, 0.02621445, 0.44400621, 0.02591363],
[0.259538 , 0.04266385, 0.65470484, 0.04309331],
...,
[0.01415491, 0.01527508, 0.22211579, 0.74845422],
[0.01419367, 0.01537099, 0.01521318, 0.95522216],
[0.25 , 0.25 , 0.25 , 0.25 ]])
So after all this, the first row should look like [0,1,0,0], the second like [1,0,0,0], and so on. I have tried the following, and it works, but only for a fixed threshold (0.5):
np.where(lda_fit < 0.5,0,1)
But as the largest value in a row might not be greater than 0.5, I want a separate threshold for each row. Unfortunately, the following gives me the max of the whole array:
np.where(lda_fit < np.max(lda_fit),0,1)
You can use np.max, specifying the axis:
(lda_fit.max(1,keepdims=True)==lda_fit)+0
Note: if a row contains more than one occurrence of the max, this returns 1 for all of them. For an alternative, see the second method below.
Output for the example input in the question:
[[0 1 0 0]
[1 0 0 0]
[0 0 1 0]
[0 0 0 1]
[0 0 0 1]
[1 1 1 1]]
If a row has multiple max values and you want only the first one to be 1 and the rest to be 0, you can use argmax:
(lda_fit.argmax(axis=1)[:,None] == range(lda_fit.shape[1]))+0
or equally:
lda_fit_max = np.zeros(lda_fit.shape, dtype=int)
lda_fit_max[np.arange(len(lda_fit)),lda_fit.argmax(axis=1)]=1
output:
[[0 1 0 0]
[1 0 0 0]
[0 0 1 0]
[0 0 0 1]
[0 0 0 1]
[1 0 0 0]]
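To make the tie-breaking difference concrete, here is a small self-contained check of both variants (the array below is made up for illustration): the comparison form marks every occurrence of the row max, while the argmax form marks only the first.

```python
import numpy as np

lda_fit = np.array([[0.1, 0.7, 0.2],
                    [0.5, 0.5, 0.0]])  # second row has a tied max

# Comparison with the row-wise max: every occurrence of the max gets a 1.
all_max = (lda_fit.max(1, keepdims=True) == lda_fit) + 0

# argmax: only the first max per row gets a 1.
first_max = (lda_fit.argmax(axis=1)[:, None] == np.arange(lda_fit.shape[1])) + 0

print(all_max)    # [[0 1 0]
                  #  [1 1 0]]
print(first_max)  # [[0 1 0]
                  #  [1 0 0]]
```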
Numpy Three Four Five Dimensional Array in Python
Input 1: 3
Output 1:
[[0 1 0]
[1 1 1]
[0 1 0]]
Input 2: 5
Output 2:
[[0 0 1 0 0]
[0 0 1 0 0]
[1 1 1 1 1]
[0 0 1 0 0]
[0 0 1 0 0]]
Notice that the 1s in the arrays make a shape like +.
My logic is shown below:
a = np.zeros((n,n), dtype='int')
a[-3,:] = 1
a[:,-3] = 1
print(a)
This logic only works for the 5×5 array, not the 3×3 one.
Can someone assist me in getting the expected output for both the 3×3 and 5×5 arrays using np.zeros and integer division (//)?
Floor division rounds toward negative infinity, so -n//2 is -2 when n=3 and -3 when n=5; as a negative index, that is always the middle row/column of an odd-sized axis. So that's the solution to your question:
import numpy as np

def create_plus_matrix(n):
    a = np.zeros((n,n), dtype='int')
    a[-n//2,:] = 1
    a[:,-n//2] = 1
    return a
So, let's try it out:
>>> print(create_plus_matrix(3))
[[0 1 0]
[1 1 1]
[0 1 0]]
>>> print(create_plus_matrix(5))
[[0 0 1 0 0]
[0 0 1 0 0]
[1 1 1 1 1]
[0 0 1 0 0]
[0 0 1 0 0]]
Do this:
import numpy as np

def plus(size):
    a = np.zeros([size, size], dtype=int)
    a[size // 2] = np.ones(size)   # middle row
    for i in a:
        i[size // 2] = 1           # middle column
    return a

print(plus(3))  # 3 is the size
# Output:
[[0 1 0]
[1 1 1]
[0 1 0]]
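Equivalently, with positive indexing, n//2 is the middle index of an n-length axis when n is odd, so the negative-index trick isn't needed; a quick sketch (plus_matrix is just an illustrative name):

```python
import numpy as np

def plus_matrix(n):
    # n // 2 is the middle index of an n-length axis when n is odd.
    a = np.zeros((n, n), dtype=int)
    a[n // 2, :] = 1   # middle row
    a[:, n // 2] = 1   # middle column
    return a

print(plus_matrix(5))
# [[0 0 1 0 0]
#  [0 0 1 0 0]
#  [1 1 1 1 1]
#  [0 0 1 0 0]
#  [0 0 1 0 0]]
```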
So I have a gray (2D) image of type np.array with a lot of zeros and objects inside of it. Each object is defined by its pixels having the same value, e.g. 1.23e15.
I now want to label the image, i.e. I want to rescale all pixels of a certain value (e.g. the 200 pixels with the value 1.23e15 above) to one integer number.
Apart from the background which is zero, I want each region to be set to one of the values in range(1,nbr_of_regions_in_img+1).
How can I do this time-efficiently (I have hundreds of thousands of images), without the obvious looping solution?
SciPy has an extensive library for image manipulation and analysis. The function you are looking for is probably scipy.ndimage.label:
import scipy.ndimage
import numpy as np
pix = np.array([[0,0,1,1,0,0],
[0,0,1,1,1,0],
[1,1,0,0,1,0],
[0,1,0,0,0,0]])
mask_obj, n_obj = scipy.ndimage.label(pix)
The output gives you both, a labelled mask with a different number for each identified object and the number of identified objects.
>>> print(n_obj)
2
>>> print(mask_obj)
[[0 0 1 1 0 0]
 [0 0 1 1 1 0]
 [2 2 0 0 1 0]
 [0 2 0 0 0 0]]
You can also define what should count as a neighbouring cell with the structure parameter:
s = np.asarray([[1,1,1],
[1,1,1],
[1,1,1]])
mask_obj, n_obj = scipy.ndimage.label(pix, structure = s)
>>> print(n_obj)
1
>>> print(mask_obj)
[[0 0 1 1 0 0]
 [0 0 1 1 1 0]
 [1 1 0 0 1 0]
 [0 1 0 0 0 0]]
Difficulties will arise if different objects touch each other, i.e. if they are not separated by a zero value.
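Note that scipy.ndimage.label groups pixels by connectivity. If, as the question states, objects are instead defined purely by sharing the same pixel value, a NumPy-only alternative is np.unique with return_inverse. A sketch (the example values are made up, and it assumes all values are non-negative so the background 0 sorts first and gets label 0):

```python
import numpy as np

img = np.array([[0.0,     1.23e15, 1.23e15],
                [4.56e15, 0.0,     1.23e15],
                [4.56e15, 0.0,     0.0]])

# np.unique returns the sorted unique values; return_inverse maps every
# pixel to the index of its value. 0 sorts first, so the background gets
# label 0 and the object values get labels 1..n.
vals, labels = np.unique(img, return_inverse=True)
labels = labels.reshape(img.shape)  # the inverse may come back flattened

print(labels)
# [[0 1 1]
#  [2 0 1]
#  [2 0 0]]
```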
I have a 3D numpy array and want to change a particular element based on a conditional test of another element. (The application is to change the 'alpha' of an RGBA image array to play with transparency in a 3D pyqtgraph image, so it should ideally be pretty fast.)
a = np.ones((2,3,5), dtype=int)  # create a ones array
a[0,0,0] = 3 #change a few values
a[0,2,0] = 3
a[1,2,0] = 3
print(a)
>>>[[[3 1 1 1 1]
[1 1 1 1 1]
[3 1 1 1 1]]
[[1 1 1 1 1]
[1 1 1 1 1]
[3 1 1 1 1]]]
Now I want to conditionally test the first element along the last axis and then change the last element based on the result:
if a[:,:,0] > 1:     # this does not work - illustration only - if first element > 1
    a[:,:,-1] = 255  # then change the last element along the same axis to 255
print(a) #this is my idealized result
>>>[[[3 1 1 1 255]
[1 1 1 1 1]
[3 1 1 1 255]]
[[1 1 1 1 1]
[1 1 1 1 1]
[3 1 1 1 255]]]
This seems to do the trick:
mask = a[:,:,0] > 1
a[:,:,4][mask] = 255
So the indexing just needed to be a little different and then it's just standard practice of applying a mask.
edit
@Ophion showed this is much better written as:
a[mask, -1] = 255
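As a runnable version of the accepted approach on a small RGBA-like toy array:

```python
import numpy as np

a = np.ones((2, 3, 5), dtype=int)
a[0, 0, 0] = 3
a[0, 2, 0] = 3
a[1, 2, 0] = 3

# Boolean mask over the first two axes: True wherever the first
# element along the last axis is > 1.
mask = a[:, :, 0] > 1

# Set the last element along the last axis for those same positions.
a[mask, -1] = 255

print(a[0])
# [[  3   1   1   1 255]
#  [  1   1   1   1   1]
#  [  3   1   1   1 255]]
```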
It's past midnight, and maybe someone has an idea how to tackle a problem of mine. I want to count the number of adjacent cells (meaning the number of array fields with other values, e.g. zeroes, in the vicinity of the array values), summed over all valid values!
Example:
import numpy
from scipy import ndimage

s = ndimage.generate_binary_structure(2,2)  # structure can vary
a = numpy.zeros((6,6), dtype=int)           # example array
a[2:4, 2:4] = 1; a[2,4] = 1                 # with example value structure
print(a)
[[0 0 0 0 0 0]
[0 0 0 0 0 0]
[0 0 1 1 1 0]
[0 0 1 1 0 0]
[0 0 0 0 0 0]
[0 0 0 0 0 0]]
# The value at position [2,4] is surrounded by 6 zeros, while the one at
# position [2,2] has 5 zeros in the vicinity if 's' is the assumed binary structure.
# Total sum of surrounding zeroes is therefore sum(5+4+6+4+5) == 24
How can I count the number of zeroes in this way if the structure of my values varies?
I somehow believe I must make use of SciPy's binary_dilation function, which can enlarge the value structure, but simple counting of overlaps can't lead me to the correct sum, or can it?
print(ndimage.binary_dilation(a,s).astype(a.dtype))
[[0 0 0 0 0 0]
[0 1 1 1 1 1]
[0 1 1 1 1 1]
[0 1 1 1 1 1]
[0 1 1 1 1 0]
[0 0 0 0 0 0]]
Use a convolution to count neighbours:
import numpy
import scipy.signal
a = numpy.zeros((6,6), dtype=int)  # example array
a[2:4, 2:4] = 1; a[2,4] = 1        # with example value structure
b = 1 - a
c = scipy.signal.convolve2d(b, numpy.ones((3,3)), mode='same')
print(numpy.sum(c * a))
b = 1-a allows us to count each zero while ignoring the ones.
We convolve with a 3x3 all-ones kernel, which sets each element to the sum of it and its 8 neighbouring values (other kernels are possible, such as the + kernel for only orthogonally adjacent values). With these summed values, we mask off the zeros in the original input (since we don't care about their neighbours), and sum over the whole array.
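The same count can be reproduced without SciPy by shifting and summing (a NumPy-only sketch; count_adjacent_zeros is a made-up helper name): pad the zero-indicator array with a border, add up its eight shifted copies to get the per-cell zero-neighbour count, and sum that over the nonzero cells.

```python
import numpy as np

def count_adjacent_zeros(a):
    # 1 where `a` is zero, 0 elsewhere; pad so the shifts stay in bounds
    # (the zero padding counts out-of-border cells as "not a zero").
    b = np.pad((a == 0).astype(int), 1)
    h, w = a.shape
    # Sum of the 8 shifted copies = number of zero neighbours per cell.
    neigh = sum(b[1+dr:1+dr+h, 1+dc:1+dc+w]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0))
    # Only the nonzero cells of `a` contribute to the total.
    return int(neigh[a != 0].sum())

a = np.zeros((6, 6), dtype=int)
a[2:4, 2:4] = 1
a[2, 4] = 1
print(count_adjacent_zeros(a))  # 24
```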
I think you already got it: after dilation, the number of 1s is 19; minus the 5 of the starting shape, you have 14, which is the number of distinct zeros surrounding your shape. Your total of 24 counts some zeros more than once.