"filling" areas based on mask from other image - python

I have the following problem with a minimal example:
I have two images of the same size (see below): one shows different ellipses, and the other shows regions of interest within some of those ellipses. What I want is to color the ellipses that contain those regions red and the ones that don't blue, or to use some other labeling that separates the two types of ellipses.
For this I have already tried using connectedComponents, with partial success:
import cv2 as cv
import numpy as np
example = cv.imread('example.png', cv.IMREAD_GRAYSCALE)
roi = cv.imread('roi.png', cv.IMREAD_GRAYSCALE)
_, comp = cv.connectedComponents(example)
# get list of labels where the roi is present
redmarks = np.unique(comp[roi != 0])
# components with labels mentioned above are red otherwise blue
redmask = np.isin(comp, redmarks)
bluemask = np.invert(redmask) & comp > 0
result = cv.cvtColor(example, cv.COLOR_GRAY2RGB)
result[redmask] = [0, 0, 255]
result[bluemask] = [255, 0, 0]
cv.imshow("Test", result)
k = cv.waitKey(0)
Now there are three questions:
For the bluemask, two ellipses are missed for some reason. The two parts of the logic - the inverted redmask and not being the black background label 0 - individually include both ellipses, so both should be present. Am I missing something?
Is there potentially a better and faster way to achieve what I want? There may be a built-in method that does what I'm trying to do that I haven't encountered yet, as I'm fairly new to OpenCV.
In the end I also want to assess whether a particle is "for reals" red or not, e.g. if a roi is very small compared to the whole associated ellipse, it shall not be labeled red. I haven't thought about this much yet, as I want the basic thing to work properly first, and I already suspect my implementation would be slow as hell. So if you have any ideas regarding this, I would be very happy.
Thanks!
example.png:
roi.png:
Overlapped:
Current result:
Edit:
Alright, I have a solution for 3.:
import cv2 as cv
import numpy as np
if __name__ == '__main__':
    example = cv.imread('example.png', cv.IMREAD_GRAYSCALE)
    roi = cv.imread('roi.png', cv.IMREAD_GRAYSCALE)
    result = np.zeros((example.shape[0], example.shape[1], 3), dtype=np.uint8)
    count, comp = cv.connectedComponents(example)
    # labels run from 0 (background) to count-1
    for c in range(1, count):
        # mask of the c-th connected component
        mask = np.zeros(example.shape, np.uint8)
        mask[comp == c] = 1
        # average of roi in this mask
        mn, _, _, _ = cv.mean(roi, mask=mask)
        # everything over 20% shared area red, else blue
        pr = 20
        if mn > 255*pr/100:
            result[comp == c] = [0, 0, 255]
        else:
            result[comp == c] = [255, 0, 0]
    cv.imshow("Test", result)
    k = cv.waitKey(0)
BUT, as expected, the problem of 2. is pretty big for this solution. While for this example it is as fast as the other code that ignores area, it is much slower for examples with larger resolutions and more components (can't/won't share this example).
Is there a way to optimize the code inside the for-loop?
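One direction that might help (an untested sketch, not from the original post): compute all per-component means in a single pass with np.bincount instead of building a mask and calling cv.mean once per label:

import cv2 as cv
import numpy as np

example = cv.imread('example.png', cv.IMREAD_GRAYSCALE)
roi = cv.imread('roi.png', cv.IMREAD_GRAYSCALE)
count, comp = cv.connectedComponents(example)

# per-label sums of roi values and per-label pixel counts, one pass each
sums = np.bincount(comp.ravel(), weights=roi.ravel(), minlength=count)
sizes = np.bincount(comp.ravel(), minlength=count)
means = sums / np.maximum(sizes, 1)

# same 20% threshold as the loop version; label 0 is the background
pr = 20
red = means > 255*pr/100
red[0] = False

result = np.zeros((example.shape[0], example.shape[1], 3), dtype=np.uint8)
result[red[comp]] = [0, 0, 255]
result[(comp > 0) & ~red[comp]] = [255, 0, 0]
cv.imshow("Test", result)
k = cv.waitKey(0)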

Related

Edge Detection minimum line length?

I'm trying to filter out short lines from my canny edge detection. Here's what I'm currently using as well as a brief explanation:
I start out by taking a single channel of the image and running CV2's Canny edge detection. Following that, I scan through each pixel and check whether any of the pixels around it are white (True, 255). If so, I add it to a group of true pixels, check every pixel around it, and keep looping until there are no white/True pixels left. I then replace all the pixels in the group with black/False if the group count is less than a designated threshold (in this case, 100 pixels).
While this works (as shown below) it's awfully slow. I'm wondering if there's a faster, easier way to do this.
import cv2
img = cv2.imread("edtest.jpg")
img_r = img.copy()
img_r[:, :, 0] = 0
img_r[:, :, 1] = 0
img_r = cv2.GaussianBlur(img_r, (3, 3), 0)
basic_edge = cv2.Canny(img_r, 240, 250)
culled_edge = basic_edge.copy()
min_threshold = 100
for x in range(len(culled_edge)):
    print(x)
    for y in range(len(culled_edge[x])):
        test_pixels = [(x, y)]
        true_pixels = [(x, y)]
        while len(test_pixels) != 0:
            xorigin = test_pixels[0][0]
            yorigin = test_pixels[0][1]
            if 0 < xorigin < len(culled_edge) - 1 and 0 < yorigin < len(culled_edge[0]) - 1:
                for testx in range(3):
                    for testy in range(3):
                        if culled_edge[xorigin-1+testx][yorigin-1+testy] == 255 and (xorigin-1+testx, yorigin-1+testy) not in true_pixels:
                            test_pixels.append((xorigin-1+testx, yorigin-1+testy))
                            true_pixels.append((xorigin-1+testx, yorigin-1+testy))
            test_pixels.pop(0)
        if 1 < len(true_pixels) < min_threshold:
            for i in range(len(true_pixels)):
                culled_edge[true_pixels[i][0]][true_pixels[i][1]] = 0
cv2.imshow("basic_edge", basic_edge)
cv2.imshow("culled_edge", culled_edge)
cv2.waitKey(0)
Source Image:
Canny Detection and Filtered (Ideal) Results:
The operation you are applying is called an area opening. I don't think there is an implementation in OpenCV, but you can find one in either scikit-image (skimage.morphology.area_opening) or DIPlib (dip.BinaryAreaOpening).
For example with DIPlib (disclosure: I'm an author) you'd amend your code as follows:
import diplib as dip
# ...
basic_edge = cv2.Canny(img_r, 240, 250)
min_threshold = 100
culled_edge = dip.BinaryAreaOpening(basic_edge > 0, min_threshold)
The output, culled_edge, is now a dip.Image object, which is compatible with NumPy arrays and you should be able to use it as such in many situations. If there's an issue, then you can cast it back to a NumPy array with culled_edge = np.array(culled_edge).
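If you would rather stay within OpenCV, a possible alternative (a sketch building on the variables from the question's code, not tested on the original images) is to label the edge fragments with cv2.connectedComponentsWithStats and keep only those with a sufficient area:

import cv2
import numpy as np

# basic_edge and min_threshold as computed in the question's code
num, labels, stats, _ = cv2.connectedComponentsWithStats(basic_edge, connectivity=8)

# keep fragments of at least min_threshold pixels; label 0 is the background
keep = stats[:, cv2.CC_STAT_AREA] >= min_threshold
keep[0] = False
culled_edge = np.where(keep[labels], 255, 0).astype(np.uint8)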

How To Get The Pixel Count Of A Segmented Area in an Image I used Vgg16 for Segmentation

I am new to deep learning but have succeeded in semantic segmentation of the image. I am trying to get the pixel count of each class in the label. As an example, in the image I want to get the pixel count of the carpet, or the chandelier, or the light stand. How do I go about it? Thanks, any suggestions will help.
Edit: In what format are the regions returned? Do you have only the final image, or are the regions given as contours? If you have them as contours (lists of coordinates), you can apply cv2.contourArea directly on that structure.
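A minimal sketch of that case (the contour points here are made up for illustration):

import cv2
import numpy as np

# hypothetical contour: corner points of one rectangular region
contour = np.array([[10, 10], [100, 10], [100, 80], [10, 80]], dtype=np.int32)
area = cv2.contourArea(contour)  # pixel area enclosed by the contour
print(area)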
If you can receive/sample the regions one by one in an image (but do not have the contours), you can sequentially paint each of the colors/classes into a clear image, either convert it to grayscale or directly paint it in grayscale or binary, or binarize with a threshold; then numberPixels = len(cv2.findNonZero(bwImage)). cv2.findContours and cv2.contourArea should do the same.
Instead of rendering each class in a separate image, if your program receives only the final segmentation and not per-class contours, you can filter/mask the regions by color ranges on that image. I built that and it seemed to do the job, 14861 pixels for the pink carpet:
import cv2
import numpy as np
# rgb 229, 0, 178 - the pink carpet color in RGB (sampled with IrfanView)
# b, g, r = 178, 0, 229 - cv2 uses BGR channel order
class_color = [178, 0, 229]
multiclassImage = cv2.imread("segmented.png")
cv2.imshow("MULTI", multiclassImage)
filteredImage = multiclassImage.copy()
low = np.array(class_color)
mask = cv2.inRange(filteredImage, low, low)
filteredImage[mask == 0] = [0, 0, 0]
filteredImage[mask != 0] = [255,255,255]
cv2.imshow("FILTER", filteredImage)
# numberPixelsFancier = len(cv2.findNonZero(filteredImage[...,0]))
# That also works and returns 14861 - without conversion, taking one color channel
bwImage = cv2.cvtColor(filteredImage, cv2.COLOR_BGR2GRAY)
cv2.imshow("BW", bwImage)
numberPixels = len(cv2.findNonZero(bwImage))
print(numberPixels)
cv2.waitKey(0)
If you don't have the values of the colors given and/or can't control them, you can use numpy.unique() (https://numpy.org/doc/stable/reference/generated/numpy.unique.html); it will return the unique colors, which can then be applied in the algorithm above.
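For example, something along these lines (a sketch reusing the file name from the code above) would list every color present together with its pixel count:

import cv2
import numpy as np

multiclassImage = cv2.imread("segmented.png")
# flatten to one BGR triple per pixel and count each unique color
colors, counts = np.unique(multiclassImage.reshape(-1, 3), axis=0, return_counts=True)
for color, count in zip(colors, counts):
    print(color, count)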
Edit 2: BTW, another way to compute or verify such counts is by calculating histograms. This is with IrfanView on the black-and-white image:

Remove background of the image using opencv Python

I have two images, one with only the background and the other with the background + a detectable object (in my case a car). Below are the images.
I am trying to remove the background so that I only have the car in the resulting image. Following is the code with which I am trying to get the desired results:
import numpy as np
import cv2
original_image = cv2.imread('IMG1.jpg', cv2.IMREAD_COLOR)
gray_original = cv2.cvtColor(original_image, cv2.COLOR_BGR2GRAY)
background_image = cv2.imread('IMG2.jpg', cv2.IMREAD_COLOR)
gray_background = cv2.cvtColor(background_image, cv2.COLOR_BGR2GRAY)
foreground = np.absolute(gray_original - gray_background)
foreground[foreground > 0] = 255
cv2.imshow('Original Image', foreground)
cv2.waitKey(0)
The resulting image by subtracting the two images is
Here is the problem: the expected resulting image should be the car only.
Also, if you take a close look at the two images, you'll see that they are not exactly the same; the camera moved a little, so the background was disturbed a little. My question is: with these two images, how can I subtract the background? I do not want to use the grabCut or backgroundSubtractorMOG algorithms right now, because I do not yet know what's going on inside those algorithms.
What I am trying to do is to get the following resulting image
Also, if possible, please guide me toward a general way of doing this, not only for this specific case, i.e. where I have a background in one image and background+object in the second image. What would be the best possible way of doing this? Sorry for such a long question.
I solved your problem using OpenCV's watershed algorithm. You can find the theory and examples of watershed here.
First I selected several points (markers) to dictate where the object I want to keep is, and where the background is. This step is manual and can vary a lot from image to image. It also requires some repetition until you get the desired result. I suggest using a tool to get the pixel coordinates.
Then I created an empty integer array of zeros with the size of the car image, and assigned some values (1: background, [255, 192, 128, 64]: car parts) to the pixels at the marker positions.
NOTE: When I downloaded your image I had to crop it to get the one with the car. After cropping, the image has size of 400x601. This may not be what the size of the image you have, so the markers will be off.
Afterwards I used the watershed algorithm. The 1st input is your image and 2nd input is the marker image (zero everywhere except at marker positions). The result is shown in the image below.
I set all pixels with a value greater than 1 to 255 (the car), and the rest (background) to zero. Then I dilated the obtained image with a 3x3 kernel to avoid losing information on the outline of the car. Finally, I used the dilated image as a mask for the original image with the cv2.bitwise_and() function; the result is shown in the following image:
Here is my code:
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Load the image
img = cv2.imread("/path/to/image.png", 3)
# Create a blank image of zeros (same dimension as img)
# It should be grayscale (1 color channel)
marker = np.zeros_like(img[:,:,0]).astype(np.int32)
# This step is manual. The goal is to find the points
# which create the result we want. I suggest using a
# tool to get the pixel coordinates.
# Dictate the background and set the markers to 1
marker[204][95] = 1
marker[240][137] = 1
marker[245][444] = 1
marker[260][427] = 1
marker[257][378] = 1
marker[217][466] = 1
# Dictate the area of interest
# I used different values for each part of the car (for visibility)
marker[235][370] = 255 # car body
marker[135][294] = 64 # rooftop
marker[190][454] = 64 # rear light
marker[167][458] = 64 # rear wing
marker[205][103] = 128 # front bumper
# rear bumper
marker[225][456] = 128
marker[224][461] = 128
marker[216][461] = 128
# front wheel
marker[225][189] = 192
marker[240][147] = 192
# rear wheel
marker[258][409] = 192
marker[257][391] = 192
marker[254][421] = 192
# Now we have set the markers, we use the watershed
# algorithm to generate a marked image
marked = cv2.watershed(img, marker)
# Plot this one. If it does what we want, proceed;
# otherwise edit your markers and repeat
plt.imshow(marked, cmap='gray')
plt.show()
# Make the background black, and what we want to keep white
marked[marked == 1] = 0
marked[marked > 1] = 255
# Use a kernel to dilate the image, to not lose any detail on the outline
# I used a kernel of 3x3 pixels
kernel = np.ones((3,3),np.uint8)
dilation = cv2.dilate(marked.astype(np.float32), kernel, iterations = 1)
# Plot again to check whether the dilation is according to our needs
# If not, repeat by using a smaller/bigger kernel, or more/less iterations
plt.imshow(dilation, cmap='gray')
plt.show()
# Now apply the mask we created on the initial image
final_img = cv2.bitwise_and(img, img, mask=dilation.astype(np.uint8))
# cv2.imread reads the image as BGR, but matplotlib uses RGB
# BGR to RGB so we can plot the image with accurate colors
b, g, r = cv2.split(final_img)
final_img = cv2.merge([r, g, b])
# Plot the final result
plt.imshow(final_img)
plt.show()
If you have a lot of images you will probably need to create a tool to annotate the markers graphically, or even an algorithm to find markers automatically.
The problem is that you're subtracting arrays of unsigned 8-bit integers. This operation wraps around on underflow.
To demonstrate:
>>> import numpy as np
>>> a = np.array([[10,10]],dtype=np.uint8)
>>> b = np.array([[11,11]],dtype=np.uint8)
>>> a - b
array([[255, 255]], dtype=uint8)
Since you're using OpenCV, the simplest way to achieve your goal is to use cv2.absdiff().
>>> cv2.absdiff(a,b)
array([[1, 1]], dtype=uint8)
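Applied to the code from the question, that might look like the following (the threshold of 30 is an arbitrary example value to suppress small camera-shift differences):

import cv2

gray_original = cv2.cvtColor(cv2.imread('IMG1.jpg'), cv2.COLOR_BGR2GRAY)
gray_background = cv2.cvtColor(cv2.imread('IMG2.jpg'), cv2.COLOR_BGR2GRAY)

diff = cv2.absdiff(gray_original, gray_background)
# differences below the (example) threshold are treated as noise
_, foreground = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
cv2.imshow('Foreground', foreground)
cv2.waitKey(0)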
I recommend using OpenCV's grabcut algorithm. You first draw a few lines on the foreground and background, and keep doing this until your foreground is sufficiently separated from the background. It is covered here: https://docs.opencv.org/trunk/d8/d83/tutorial_py_grabcut.html
as well as in this video: https://www.youtube.com/watch?v=kAwxLTDDAwU
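A minimal rectangle-initialized sketch along the lines of that tutorial (the file name and rectangle coordinates are placeholder values you would adapt to your image):

import cv2
import numpy as np

img = cv2.imread('IMG1.jpg')
mask = np.zeros(img.shape[:2], np.uint8)
bgdModel = np.zeros((1, 65), np.float64)
fgdModel = np.zeros((1, 65), np.float64)

rect = (50, 50, 450, 290)  # placeholder box around the object
cv2.grabCut(img, mask, rect, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_RECT)

# keep pixels marked as (probable) foreground
mask2 = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
result = cv2.bitwise_and(img, img, mask=mask2)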

How do I increase the contrast of an image in Python OpenCV

I am new to Python OpenCV. I have read some documents and answers here but I am unable to figure out what the following code means:
if self.array_alpha is None:
    self.array_alpha = np.array([1.25])
    self.array_beta = np.array([-100.0])

# add a beta value to every pixel
cv2.add(new_img, self.array_beta, new_img)

# multiply every pixel value by alpha
cv2.multiply(new_img, self.array_alpha, new_img)
I have come to know that basically every pixel can be transformed as X = aY + b, where a and b are scalars. I have understood this much. However, I did not understand the code and how to increase contrast with it.
Till now, I have managed to simply read the image using img = cv2.imread('image.jpg',0)
Thanks for your help
I would like to suggest a method using the LAB color space.
LAB color space expresses color variations across three channels. One channel for brightness and two channels for color:
L-channel: representing lightness in the image
a-channel: representing change in color between red and green
b-channel: representing change in color between yellow and blue
In the following I perform adaptive histogram equalization on the L-channel and convert the resulting image back to BGR color space. This enhances the brightness while also limiting contrast sensitivity. I have done the following using OpenCV 3.0.0 and python:
Code:
import cv2
import numpy as np
img = cv2.imread('flower.jpg', 1)
# converting to LAB color space
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l_channel, a, b = cv2.split(lab)
# Applying CLAHE to L-channel
# feel free to try different values for the limit and grid size:
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
cl = clahe.apply(l_channel)
# merge the CLAHE enhanced L-channel with the a and b channel
limg = cv2.merge((cl,a,b))
# Converting image from LAB color model to BGR color space
enhanced_img = cv2.cvtColor(limg, cv2.COLOR_LAB2BGR)
# Stacking the original image with the enhanced image
result = np.hstack((img, enhanced_img))
cv2.imshow('Result', result)
Result:
The enhanced image is on the right
You can run the code as it is.
To know what CLAHE (Contrast Limited Adaptive Histogram Equalization) is about, refer to this Wikipedia page.
For Python, I haven't found an OpenCV function that provides contrast. As others have suggested, there are some techniques to automatically increase contrast using a very simple formula.
In the official OpenCV docs, it is suggested that this equation can be used to apply both contrast and brightness at the same time:
new_img = alpha*old_img + beta
where alpha corresponds to contrast and beta to brightness. The different cases:
alpha = 1, beta = 0 --> no change
0 < alpha < 1 --> lower contrast
alpha > 1 --> higher contrast
-127 < beta < +127 --> good range for brightness values
In C/C++, you can implement this equation using cv::Mat::convertTo, but we don't have access to that part of the library from Python. To do it in Python, I would recommend using the cv::addWeighted function, because it is quick and automatically forces the output into the range 0 to 255 (e.g. for a 24-bit color image, 8 bits per channel). You could also use convertScaleAbs, as suggested by @nathancy.
import cv2

img = cv2.imread('input.png')
contrast = 1.25   # example value: acts as the gain (alpha)
brightness = -20  # example value: acts as the bias (beta)

# call addWeighted function. use beta = 0 to effectively only operate on one image
out = cv2.addWeighted(img, contrast, img, 0, brightness)
The above formula and code are quick to write and will change brightness and contrast. But they yield results significantly different from those of photo editing programs. The rest of this answer yields a result that reproduces the behavior of the GIMP, and also of LibreOffice brightness and contrast. It's more lines of code, but it gives a nice result.
Contrast
In the GIMP, contrast levels go from -127 to +127. I adapted the formulas from here to fit that range:
f = 131*(contrast + 127)/(127*(131 - contrast))
new_image = f*(old_image - 127) + 127 = f*(old_image) + 127*(1 - f)
For example, contrast = 64 gives f = 131*191/(127*67) ≈ 2.94, so pixel values are stretched by roughly a factor of three around the midpoint 127.
To figure out brightness, I figured out the relationship between brightness and levels and used information in this levels post to arrive at a solution.
# pseudocode
if brightness > 0:
    shadow = brightness
    highlight = 255
else:
    shadow = 0
    highlight = 255 + brightness
new_img = ((highlight - shadow)/255)*old_img + shadow
brightness and contrast in Python and OpenCV
Putting it all together, using the reference "mandrill" image from USC SIPI:
import cv2
import numpy as np
# Open a typical 24 bit color image. For this kind of image there are
# 8 bits (0 to 255) per color channel
img = cv2.imread('mandrill.png') # mandrill reference image from USC SIPI
s = 128
img = cv2.resize(img, (s,s), 0, 0, cv2.INTER_AREA)
def apply_brightness_contrast(input_img, brightness = 0, contrast = 0):
    if brightness != 0:
        if brightness > 0:
            shadow = brightness
            highlight = 255
        else:
            shadow = 0
            highlight = 255 + brightness
        alpha_b = (highlight - shadow)/255
        gamma_b = shadow
        buf = cv2.addWeighted(input_img, alpha_b, input_img, 0, gamma_b)
    else:
        buf = input_img.copy()
    if contrast != 0:
        f = 131*(contrast + 127)/(127*(131-contrast))
        alpha_c = f
        gamma_c = 127*(1-f)
        buf = cv2.addWeighted(buf, alpha_c, buf, 0, gamma_c)
    return buf
font = cv2.FONT_HERSHEY_SIMPLEX
fcolor = (0,0,0)
blist = [0, -127, 127, 0, 0, 64] # list of brightness values
clist = [0, 0, 0, -64, 64, 64] # list of contrast values
out = np.zeros((s*2, s*3, 3), dtype = np.uint8)
for i, b in enumerate(blist):
    c = clist[i]
    print('b, c: ', b, ', ', c)
    row = s*int(i/3)
    col = s*(i%3)
    print('row, col: ', row, ', ', col)
    out[row:row+s, col:col+s] = apply_brightness_contrast(img, b, c)
    msg = 'b %d' % b
    cv2.putText(out, msg, (col, row+s-22), font, .7, fcolor, 1, cv2.LINE_AA)
    msg = 'c %d' % c
    cv2.putText(out, msg, (col, row+s-4), font, .7, fcolor, 1, cv2.LINE_AA)
cv2.putText(out, 'OpenCV',(260,30), font, 1.0, fcolor,2,cv2.LINE_AA)
cv2.imwrite('out.png', out)
I manually processed the images in the GIMP and added text tags in Python/OpenCV:
Note: @UtkarshBhardwaj has suggested that Python 2.x users must cast the contrast correction calculation to float in order to get a floating-point result, like so:
...
if contrast != 0:
    f = float(131*(contrast + 127))/(127*(131-contrast))
...
Contrast and brightness can be adjusted using alpha (α) and beta (β), respectively. These variables are often called the gain and bias parameters. The expression can be written as new_img = alpha*old_img + beta.
OpenCV already implements this as cv2.convertScaleAbs(); just provide user-defined alpha and beta values:
import cv2
image = cv2.imread('1.jpg')
alpha = 1.5 # Contrast control (1.0-3.0)
beta = 0 # Brightness control (0-100)
adjusted = cv2.convertScaleAbs(image, alpha=alpha, beta=beta)
cv2.imshow('original', image)
cv2.imshow('adjusted', adjusted)
cv2.waitKey()
Before -> After
Note: For automatic brightness/contrast adjustment take a look at automatic contrast and brightness adjustment of a color photo
There are quite a few answers here, ranging from simple to complex. I want to add another on the simpler side that seems a little more practical for actual contrast and brightness adjustments.
import cv2

def adjust_contrast_brightness(img, contrast: float = 1.0, brightness: int = 0):
    """
    Adjusts contrast and brightness of an uint8 image.
    contrast: (0.0, inf) with 1.0 leaving the contrast as is
    brightness: [-255, 255] with 0 leaving the brightness as is
    """
    brightness += int(round(255*(1 - contrast)/2))
    return cv2.addWeighted(img, contrast, img, 0, brightness)
We do the a*x+b adjustment through the addWeighted() function. However, to change the contrast without also modifying the brightness, the data needs to be zero centered. That's not the case with OpenCVs default uint8 datatype. So we also need to adjust the brightness according to how the distribution is shifted.
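A quick usage sketch (the file name and parameter values are just examples):

img = cv2.imread('input.png')
out = adjust_contrast_brightness(img, contrast=1.3, brightness=10)
cv2.imwrite('adjusted.png', out)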
The best explanation for X = aY + b (in fact, f(x) = ax + b) is provided at https://math.stackexchange.com/a/906280/357701
A simpler one, which adjusts contrast by just shifting lightness/luma/brightness, is below:
import cv2
img = cv2.imread('test.jpg')
cv2.imshow('test', img)
cv2.waitKey(1000)
imghsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
imghsv[:,:,2] = [[max(pixel - 25, 0) if pixel < 190 else min(pixel + 25, 255) for pixel in row] for row in imghsv[:,:,2]]
cv2.imshow('contrast', cv2.cvtColor(imghsv, cv2.COLOR_HSV2BGR))
cv2.waitKey(1000)
raw_input()  # Python 2; use input() in Python 3
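As a side note, the per-pixel list comprehension above is slow on large images; a vectorized equivalent of the same 25/190 rule (my rewrite, not part of the original answer) could be:

import numpy as np

v = imghsv[:, :, 2].astype(np.int16)  # widen so the -25 step cannot wrap around
v = np.where(v < 190, np.maximum(v - 25, 0), np.minimum(v + 25, 255))
imghsv[:, :, 2] = v.astype(np.uint8)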
img = cv2.imread("/x2.jpeg")
image = cv2.resize(img, (1800, 1800))
alpha=1.5
beta=20
new_image=cv2.addWeighted(image,alpha,np.zeros(image.shape, image.dtype),0,beta)
cv2.imshow("new",new_image)
cv2.waitKey(0)
cv2.destroyAllWindows()

Image Analysis: Finding proteins in an image

I am attempting to write a program that will automatically locate a protein in an image, this will ultimately be used to differentiate between two proteins of different heights that are present.
The white area on top of the background is a membrane in which the proteins sit and the white blobs that are present are the proteins. The proteins have two lobes hence they appear in pairs (actually one protein).
I have been writing a script in Fiji (Jython) to try to locate the proteins so we can work out their height from the local background. So far this involves applying adaptive histogram equalisation and then subtracting the background with a rolling ball of radius 10 pixels. After that I apply a kernel of sorts, 10 pixels by 10 pixels, which works out the average of the 5 centre pixels and divides it by the average of the pixels on the 4 edges of the kernel to get a ratio. If the ratio is above a certain value, then it is a candidate.
The output I got was this image, which, apart from some wrapping and sensitivity (ratio = 2.0) issues, seems to be OK. My questions are:
Is this a reasonable approach or is there an obviously better way of doing this?
Can you suggest a way on from here? I am a little stuck now and not really sure how to proceed.
code if necessary: http://pastebin.com/D45LNJCu
Thanks!
Sam
How about starting off a bit more simply, using the Harris-point approach and detecting local maxima? E.g.:
import numpy as np
from PIL import Image
from scipy import ndimage
import matplotlib.pyplot as plt
roi = 2.5
peak_threshold = 120
im = Image.open('Q766c.png')
image = np.array(im.convert('L'))  # work on a grayscale NumPy array so the masking below works
size = int(2 * roi + 1)
image_max = ndimage.maximum_filter(image, size=size, mode='constant')
mask = (image == image_max)
image *= mask
# Remove the image borders
image[:size] = 0
image[-size:] = 0
image[:, :size] = 0
image[:, -size:] = 0
# Find peaks
image_t = (image > peak_threshold) * 1
# get coordinates of peaks
f = np.transpose(image_t.nonzero())
# Show
img = plt.imshow(np.asarray(im))
plt.plot(f[:, 1], f[:, 0], 'o', markeredgewidth=0.45, markeredgecolor='b', markerfacecolor='None')
plt.axis('off')
plt.savefig('local_max.png', format='png', bbox_inches='tight')
plt.show()
Which gives this:
ImageJ "Find maxima" does also similar.
Here is the Jython code
from ij import ImagePlus, IJ, Prefs
from ij.plugin import RGBStackMerge
from ij.process import ImageProcessor, ImageConverter
from ij.plugin.filter import Binary, MaximumFinder
from jarray import array
# define background is black (0)
Prefs.blackBackground = True
# find maxima
#imp = IJ.getImage()
imp = ImagePlus('http://i.stack.imgur.com/Q766c.png')
ImageConverter(imp).convertToGray8()
ip = imp.getProcessor()
segip = MaximumFinder().findMaxima( ip, 10, 200, MaximumFinder.SINGLE_POINTS , False, False)
# display detection result
binner = Binary()
binner.setup("dilate", None)
binner.run(segip)
segimp = ImagePlus("seg", segip)
mergeimp = RGBStackMerge.mergeChannels(array([segimp, imp, None, None, None, None, None], ImagePlus), True)
mergeimp.show()
EDIT: Updated the code to allow processing a PNG image (RGB) and to load the image directly from this thread. See comments for more details.
