This is the image, and my aim is to detect only the nuclei in the middle of the cells.
Here is my code for detecting the nucleus shape (roughly circular) in the image below:
import cv2
import numpy as np
planets = cv2.imread('52.BMP')
gray_img = cv2.cvtColor(planets, cv2.COLOR_BGR2GRAY)
img = cv2.medianBlur(gray_img, 5)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 120, param1=100, param2=30, minRadius=0, maxRadius=0)
circles = circles.astype(float)
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    cv2.circle(planets, (i[0], i[1]), i[2], (0, 255, 0), 6)
    cv2.circle(planets, (i[0], i[1]), 2, (0, 0, 255), 3)
cv2.imshow("HoughCircles", planets)
cv2.waitKey()
cv2.destroyAllWindows()
I'm getting this error every time:
AttributeError: 'NoneType' object has no attribute 'rint'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\ABHISHEK\PycharmProjects\cervical_project1\hsv.py", line 33, in <module>
circles = np.uint64(np.around(circles))
File "<__array_function__ internals>", line 180, in around
File "C:\python310\lib\site-packages\numpy\core\fromnumeric.py", line 3348, in around
return _wrapfunc(a, 'round', decimals=decimals, out=out)
File "C:\python310\lib\site-packages\numpy\core\fromnumeric.py", line 54, in _wrapfunc
return _wrapit(obj, method, *args, **kwds)
File "C:\python310\lib\site-packages\numpy\core\fromnumeric.py", line 43, in _wrapit
result = getattr(asarray(obj), method)(*args, **kwds)
TypeError: loop of ufunc does not support argument 0 of type NoneType which has no callable rint method
Is there any other way to detect only the nuclei of cell?
For large images, Hough-transform-based searches can become slow.
Since you have a contrast difference between the cell and the nucleus, you could convert your image to a grayscale image (maybe even using only one color channel, if the contrast is better in a single one) and then apply blob detection. That also allows you to filter the nuclei by shape or size.
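Here is a minimal sketch of that idea using cv2.SimpleBlobDetector; all the parameter values (the area and circularity limits) are assumptions you would need to tune for your image:
import cv2
gray_img = cv2.imread('52.BMP', cv2.IMREAD_GRAYSCALE)
# configure the detector: keep only reasonably large, roundish blobs
# (by default SimpleBlobDetector looks for dark blobs on a lighter background)
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 100          # assumed lower bound on nucleus area, in pixels
params.maxArea = 10000        # assumed upper bound
params.filterByCircularity = True
params.minCircularity = 0.6   # 1.0 would be a perfect circle
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(gray_img)
# draw the detections with circles matching the estimated blob sizes
out = cv2.drawKeypoints(cv2.cvtColor(gray_img, cv2.COLOR_GRAY2BGR), keypoints, None,
                        (0, 255, 0), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow('blobs', out)
cv2.waitKey()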
Edit:
Your submitted traceback says that the error comes from line 33 of hsv.py. As Christoph Rackwitz already stated, either the provided code is missing that part or the error message doesn't correspond to your code.
Anyway, in your line
circles = cv2.HoughCircles(img,cv2.HOUGH_GRADIENT,1,120,param1=100,param2=30,minRadius=0,maxRadius=0)
you are setting minRadius and maxRadius to 0. Probably no circle is found because of this, so cv2.HoughCircles returns None, and np.around then tries to act on None, which leads to your final error message.
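A minimal guard for that case, reusing the variables from your code; the radius bounds (10 and 80) are made-up values you should adapt to the size of your nuclei:
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 120,
                           param1=100, param2=30, minRadius=10, maxRadius=80)
if circles is not None:
    # only round and draw when HoughCircles actually found something
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        cv2.circle(planets, (i[0], i[1]), i[2], (0, 255, 0), 6)
else:
    print('no circles found - try adjusting the radius bounds or param2')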
Binarization of the green component gives a usable result. Use contouring, and filter out the unwanted blobs. (You could rely on the isoperimetric ratio.)
Unfortunately, this method will be very sensitive to the color mixture.
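A sketch of that pipeline; the isoperimetric ratio is 4*pi*area/perimeter^2, which equals 1 for a perfect circle, and the area and roundness cut-offs below are assumptions to tune. Note that cv2.findContours returns two values in OpenCV 4.x but three in 3.x:
import cv2
import numpy as np
img = cv2.imread('52.BMP')
green = img[:, :, 1]  # green channel (OpenCV stores pixels as BGR)
# Otsu picks the threshold automatically; INV makes dark nuclei white
_, binary = cv2.threshold(green, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    area = cv2.contourArea(cnt)
    perimeter = cv2.arcLength(cnt, True)
    if perimeter == 0:
        continue
    roundness = 4 * np.pi * area / perimeter ** 2  # isoperimetric ratio
    if area > 200 and roundness > 0.7:             # assumed cut-offs
        cv2.drawContours(img, [cnt], -1, (0, 255, 0), 2)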
I am trying to apply cv2.createBackgroundSubtractorMOG() to this Image:
to eliminate all background brightness and only leave the two bright objects in the middle for further analysis. Is this the right approach for this task? If not, how would I do that?
import cv2
img = cv2.imread('image.png')
sharp_img = cv2.createBackgroundSubtractorMOG().apply(img)
Output:
Traceback (most recent call last):
File "/home/artur/Desktop/test.py", line 4, in <module>
sharp_img = cv2.createBackgroundSubtractorMOG().apply(img)
AttributeError: module 'cv2.cv2' has no attribute 'createBackgroundSubtractorMOG'
Edit:
MOG does not seem to work.
Code:
import cv2
img = cv2.imread('image.png')
sharp_img = cv2.bgsegm.createBackgroundSubtractorMOG().apply(img)
cv2.imwrite('image2.png', sharp_img)
Output:
Traceback (most recent call last):
File "/home/artur/Desktop/test.py", line 4, in <module>
sharp_img = cv2.bgsegm.createBackgroundSubtractorMOG().apply(img)
AttributeError: module 'cv2.cv2' has no attribute 'bgsegm'
MOG2 seems to work, but gives no satisfying result:
Code:
import cv2
img = cv2.imread('image.png')
sharp_img = cv2.createBackgroundSubtractorMOG2().apply(img)
cv2.imwrite('image2.png', sharp_img)
Output Image:
I tried to play around with the args of the MOG2 method from the docs, but with no change.
From the docs, try this:
sharp_img = cv.bgsegm.createBackgroundSubtractorMOG().apply(img)
or
sharp_img = cv2.createBackgroundSubtractorMOG2().apply(img)
Note that cv2.bgsegm comes from the OpenCV contrib modules (the opencv-contrib-python package); the AttributeError you saw is what happens when only plain opencv-python is installed.
import cv2
img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
max_val, min_val = img.max(), img.min()
print(max_val, min_val)  # helps in choosing thresholding values
# good starting point: give the threshold value as half of the image's max value
_, threshold_img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
cv2.imshow('threshold', threshold_img)
cv2.waitKey()
This approach is a good starting point in your case, as you have two bright peaks that you want to separate from the noise. Once you have identified the required threshold limits, you should be able to isolate the two spots from the background noise. You can further use cv2.erode and cv2.dilate if needed to remove remaining noise, as in the snippet below.
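A minimal morphology sketch along those lines; the 3x3 kernel and single iterations are assumptions to adjust:
import numpy as np
kernel = np.ones((3, 3), np.uint8)
cleaned = cv2.erode(threshold_img, kernel, iterations=1)   # removes small specks
cleaned = cv2.dilate(cleaned, kernel, iterations=1)        # grows the spots back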
When I run my PIL code, I get this error:
from PIL import Image,ImageDraw, ImageColor, ImageChops
# Load images
im1 = Image.open('im1.png')
im2 = Image.open('im2.png')
# Flood fill white edges of image 2 with black
seed = (0, 0)
black = ImageColor.getrgb("black")
ImageDraw.floodfill(im2, seed, black, thresh=127)
# Now select the lighter pixel of image1 and image2 at each pixel location
result = ImageChops.lighter(im1, im2)
result.save('result.png')
The error occurs in my image processing:
Traceback (most recent call last):
File "C:\Users\Martin
Ma\Desktop\test\36\light_3_global\close_open\gray\main.py", line 96, in <module>
ImageDraw.floodfill(im2, seed, black, thresh=127)
File "E:\python\lib\site-packages\PIL\ImageDraw.py", line 346, in floodfill
if _color_diff(value, background) <= thresh:
File "E:\python\lib\site-packages\PIL\ImageDraw.py", line 386, in _color_diff
return abs(rgb1[0]-rgb2[0]) + abs(rgb1[1]-rgb2[1]) + abs(rgb1[2]-rgb2[2])
TypeError: 'int' object is not subscriptable
How can I solve it? Thanks a lot!
You have changed image type without thinking about the consequences. JPEG and PNG are fundamentally different beasts, and you need to be aware of that:
JPEG images are lossily saved, so your data will not generally be read back with the same values you wrote. This seems to shock everyone: people threshold an image so that all values above 127 go white and the others go black, giving a true binary image; they then save it as JPEG and are amazed that, on reloading, the image has 78 colours despite having been thresholded.
JPEG images have all sorts of artefacts: chunky blocks of noise that will mess up your processing, especially if you look at saturation.
PNG images are often palettised where each pixel stores an index into a 256-colour palette, rather than an RGB triplet. Most operations will fail on palettised images because you are comparing an index with an RGB colour triplet.
PNG images are often greyscale - so there is only one channel and comparisons with RGB triplets will fail because the number of channels differs.
So, in answer to your question, I suspect your PNG image is palettised (especially likely when it only has 2 colours). You therefore need to convert it to RGB or maybe Luminance mode on opening:
im1 = Image.open('im1.png').convert('RGB')
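If you want to check what you actually loaded first, you can print the image mode before converting (a small sketch; 'P' means palettised, 'L' means greyscale):
from PIL import Image
im2 = Image.open('im2.png')
print(im2.mode)           # 'P' = palettised, 'L' = greyscale, 'RGB' = truecolour
im2 = im2.convert('RGB')  # floodfill can now compare RGB triplets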
I'm trying to zoom in on an image.
import numpy as np
from scipy import misc
from scipy.ndimage.interpolation import zoom
from PIL import Image
zoom_factor = 0.05  # 5% of the original image
img = Image.open(filename)
image_array = misc.fromimage(img)
zoomed_img = clipped_zoom(image_array, zoom_factor)
misc.imsave('output.png', zoomed_img)
Clipped Zoom Reference:
Scipy rotate and zoom an image without changing its dimensions
This doesn't work and throws this error:
ValueError: could not broadcast input array from shape
Any help or suggestions on this? Is there a way to zoom an image given a zoom factor, and what's the problem here?
Traceback:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1443, in _execute
result = method(*self.path_args, **self.path_kwargs)
File "title_apis_proxy.py", line 798, in get
image, msg = resize_image(image_local_file, aspect_ratio, image_url, scheme, radius, sigma)
File "title_apis_proxy.py", line 722, in resize_image
z = clipped_zoom(face, 0.5, order=0)
File "title_apis_proxy.py", line 745, in clipped_zoom
out[top:top+zh, left:left+zw] = zoom(img, zoom_factor, **kwargs)
ValueError: could not broadcast input array from shape (963,1291,2) into shape (963,1291,3)
The clipped_zoom function you're using from my previous answer was written for single-channel images only.
At the moment it's applying the same zoom factor to the "color" dimension as well as to the width and height dimensions of your input array. The ValueError occurs because the out array is initialized with the same number of channels as the input, but the result of zoom has fewer channels because of the zoom factor.
To make it work for multichannel images you could either pass each color channel separately to clipped_zoom and concatenate the results, or you could pass a tuple rather than a scalar as the zoom_factor argument to scipy.ndimage.zoom.
I've updated my previous answer using the latter approach, so that it will now work for multichannel images as well as monochrome.
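The tuple-based fix looks roughly like this (a sketch with array dimensions borrowed from your traceback):
import numpy as np
from scipy.ndimage.interpolation import zoom
img = np.random.rand(963, 1291, 3)  # stand-in for an RGB image
# zoom height and width by 0.5 but leave the channel axis untouched
zoomed = zoom(img, (0.5, 0.5, 1), order=0)
print(zoomed.shape)  # approximately (482, 646, 3)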
I am trying to write a fairly simple demonstration of some of the capabilities of a vision system for a robot. The program I am writing is supposed to find the largest contour in a thresholded image, then track the path of the center of the largest contour's bounding rectangle for the past 100 frames. However, when I call cv2.boundingRect(bigCont), I see TypeError: points is not a numpy array, neither a scalar. I am using Python 2.7.9, Anaconda 2.2.0 (64-bit), and OpenCV 2.4.9.1, on Win7 SP1 64-bit. I have already looked at this thread and this one; however, both of those seemed to involve assigning the result of cv2.findContours to a single variable, when the function in fact returns two values. My code already unpacks cv2.findContours into two separate variables, so I don't think that's the problem. The section of code that isolates the center of the largest contour is:
# find the largest contour in the thresholded image
conts, _ = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
bigCont = []
bigContSize = 0
for cont in conts:
    contSize = cv2.contourArea(cont)
    if contSize > bigContSize:
        bigContSize = contSize
        bigCont = cont
# find the center of the largest contour's bounding rectangle
x, y, w, h = cv2.boundingRect(bigCont)
centerX = x + (w / 2)
centerY = y + (h / 2)
contCenter = (centerX, centerY)
And the full traceback of the error is:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Anaconda\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 682, in runfile
execfile(filename, namespace)
File "C:\Anaconda\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 71, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "D:/newVisionDemo.py", line 73, in <module>
showHist(orig)
File "D:/newVisionDemo.py", line 51, in showHist
x,y,w,h = cv2.boundingRect(bigCont)
TypeError: points is not a numpy array, neither a scalar
As a secondary question, this was written based on a file we wrote earlier for the actual robot code. The API seems to indicate that cv2.boundingRect now returns only a single value, yet doesn't quite explain what that value represents. If someone could explain how to use the current implementation of cv2.boundingRect, that would be much appreciated.
Also, please feel free to let me know if you need to see more of the original code.
UPDATE: On a suggestion from another member of the team, I tried changing my call to cv2.boundingRect to cv2.boundingRect(np.array(bigCont)). This raised a different traceback, which I have included below:
OpenCV Error: Assertion failed (points.checkVector(2) >= 0 && (points.depth() == CV_32F || points.depth() == CV_32S)) in cv::boundingRect, file ..\..\..\modules\imgproc\src\contours.cpp, line 1895
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Anaconda\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 682, in runfile
execfile(filename, namespace)
File "C:\Anaconda\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 71, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "D:/newVisionDemo.py", line 68, in <module>
showHist(orig)
File "D:/newVisionDemo.py", line 48, in showHist
x,y,w,h = cv2.boundingRect(np.array(bigCont))
cv2.error: ..\..\..\modules\imgproc\src\contours.cpp:1895: error: (-215) points.checkVector(2) >= 0 && (points.depth() == CV_32F || points.depth() == CV_32S) in function cv::boundingRect
Any help would be much appreciated. Thanks in advance.
After playing around with the threshold values, I determined that the error was being thrown because there were no contours: the binary image was completely black. I solved this by only attempting to find the center if bigContSize > 0, and setting contCenter to (0, 0) otherwise (since it will be used later). That problem solved, I now turn to the next issue plaguing this code...
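For reference, a sketch of that guard (variable names follow the code in the question):
# only compute a bounding rectangle when at least one contour was found
if bigContSize > 0:
    x, y, w, h = cv2.boundingRect(bigCont)
    contCenter = (x + (w / 2), y + (h / 2))
else:
    contCenter = (0, 0)  # fallback value for the later tracking code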
When cropped_image = image.crop(cords) works properly, it returns an Image object that works with SciPy's asarray:
bitmap <PIL.Image.Image image mode=RGBA size=1600x1200 at 0xAC9CFEC>
#SCIPY'S ASARRAY WORKS PROPERLY!
pic!! [[[ 16 18 31 255]
[ 16 18 31 255]
[ 16 18 31 255]
...,
But now I get a PIL.Image._ImageCrop object, which fails:
bitmap <PIL.Image._ImageCrop image mode=RGBA size=1600x80 at 0x99635AC>
#SCIPY'S ASARRAY FAILS WITHOUT WARNING
pic!! <PIL.Image._ImageCrop image mode=RGBA size=1600x80 at 0x99635AC>
Traceback (most recent call last):
File "/root/dev/spectrum/final/image_handler.py", line 216, in on_left_down
self._sample_callback()
File "/root/dev/spectrum/final/image_handler.py", line 237, in _sample_callback
self.__callback_function( sample )
File "/root/dev/spectrum/final/plot_handler.py", line 117, in __init__
self.InitUI()
File "/root/dev/spectrum/final/plot_handler.py", line 163, in InitUI
self.canvas_panel.draw(self.__crop_section)
File "/root/dev/spectrum/final/plot_handler.py", line 78, in draw
pic_avg = pic.mean(axis=2)
ValueError: axis(=2) out of bounds
Why is this problem happening?
It is a silent failure that came out of two circumstances:
First, the crop method was supplied non-integer coordinates.
Second, the crop operation happens to be lazy: cropping only takes place when the .load() method is called (I'm unclear on this, please edit this if you know better!).
Hence the crop operation did not happen, and it gave no obvious signal. Giving it valid coordinates solved the problem.
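A minimal sketch of the fix: round the crop box to integers and force the lazy crop to run before handing the result to NumPy/SciPy. The box values below are just examples matching the 1600x80 crop from the question:
import numpy as np
from PIL import Image
image = Image.open('im1.png')
cords = tuple(int(c) for c in (0.0, 1120.0, 1600.0, 1200.0))  # floats rounded to ints
cropped_image = image.crop(cords)
cropped_image.load()              # force the lazy crop to execute
pic = np.asarray(cropped_image)   # now a proper (height, width, channels) array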