I am running blob detection on a camera image of circular objects, using OpenCV 2.4.9. I run a number of standard filters on the image (blur, adaptive threshold, skeletonization using skimage routines, and dilation of the skeleton), and would like to identify blobs (contiguous black areas) in the result. SimpleBlobDetector exists for exactly this, but however I set its parameters, it does not do what I want.
This is the original image:
and this is the processed version, with keypoints from the detector drawn in yellow:
The keypoints don't seem to respect area constraints, and also don't appear where I would expect them.
The script is below; is there something obviously wrong, or do you have any other suggestions?
import numpy as np
import cv2
import skimage, skimage.morphology

# preprocessing: grayscale, denoise, adaptive threshold
img0=cv2.imread('capture_b_07.cropped.png')
img1=cv2.cvtColor(img0,cv2.COLOR_BGR2GRAY)
img2=cv2.medianBlur(img1,5)
img3=cv2.bilateralFilter(img2,9,75,75)
img4=cv2.adaptiveThreshold(img3,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY_INV,21,0)

# skeletonize the thresholded image, then thicken the skeleton slightly
img5=skimage.img_as_ubyte(skimage.morphology.skeletonize(skimage.img_as_bool(img4)))
img6=cv2.dilate(img5,cv2.getStructuringElement(cv2.MORPH_RECT,(3,3)),iterations=1)

# blob detection: look for dark, roughly circular blobs of 500-5000 px^2
pp=cv2.SimpleBlobDetector_Params()
pp.filterByColor=True
pp.blobColor=0
pp.filterByArea=True
pp.minArea=500
pp.maxArea=5000
pp.filterByCircularity=True
pp.minCircularity=.4
pp.maxCircularity=1.

# OpenCV 2.4 constructor (3.x would use cv2.SimpleBlobDetector_create)
det=cv2.SimpleBlobDetector(pp)
keypts=det.detect(img6)

# draw the detected keypoints and save the result
img7=cv2.drawKeypoints(img6,keypts,np.array([]),(0,255,255),cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite('capture_b_07.blobs.png',img7)
A similar pipeline in ImageJ (Analyze Particles, circularity 0.5-1.0, area 500-5000 px^2), which I am trying to reproduce with OpenCV, gives something like this:
You can get a result similar to that of ImageJ by using watershed.
I inverted your img6, labeled it, and then used it as the marker for OpenCV's watershed. Then I enlarged the watershed-segmented boundary lines using a morphological filter and found connected components using OpenCV's findContours. Below are the results, followed by a rough sketch of the steps. Not sure if this is what you want. I'm not posting my code, as I quickly tried this out with a combination of Python and C++, so it's a bit messy.
watershed segmentation
connected components
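Since the original code isn't posted, here is a minimal Python sketch of the steps described above; the scipy.ndimage.label call and all parameter values are my assumptions, not the answerer's actual code.

import numpy as np
import cv2
from scipy import ndimage

# img6 is the dilated skeleton from the question's script.
# Invert it so the enclosed regions become foreground.
inv = cv2.bitwise_not(img6)

# Label each connected region; the labels act as watershed markers
# (skeleton pixels stay 0, i.e. "unknown").
markers, nlabels = ndimage.label(inv)
markers = markers.astype(np.int32)

# cv2.watershed expects a 3-channel image; boundary pixels get label -1
color = cv2.cvtColor(img6, cv2.COLOR_GRAY2BGR)
cv2.watershed(color, markers)

# thicken the boundary lines with a morphological dilation
boundaries = np.uint8(markers == -1) * 255
boundaries = cv2.dilate(boundaries, cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))

# connected components = contours of the regions between the boundaries
# (OpenCV 2.4 returns two values here; 3.x returns three)
contours, hierarchy = cv2.findContours(cv2.bitwise_not(boundaries), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)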
I was able to use SimpleBlobDetector and produce a somewhat acceptable result. My first step was to try to even out the brightness across your image, as the lower left was brighter than the upper right. Errors of omission still outnumber errors of commission, so further refinement is possible.
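The answer doesn't say how the brightness was evened out; one common approach, shown as an assumed sketch below (the 51-pixel kernel size is a guess to tune), is to subtract a heavily blurred copy of the image, which approximates the slowly varying illumination.

import cv2

# estimate the slowly varying illumination with a large Gaussian blur,
# then subtract it (biased back to mid-gray) to flatten the brightness
gray = cv2.cvtColor(cv2.imread('capture_b_07.cropped.png'), cv2.COLOR_BGR2GRAY)
illumination = cv2.GaussianBlur(gray, (51, 51), 0)
evened = cv2.addWeighted(gray, 1.0, illumination, -1.0, 128)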
I need to remove the noise from this image. My problem is that I need a clean contour, without all the extra lines, as in this image.
Do you have any suggestions on how to do that in Python?
Looking at your example images, I suppose you are looking for an image-processing algorithm that finds the edges in your image (in your case, the border lines of the ground plan).
Have a look at the Canny edge detection algorithm, which might be well suited for this task. A tutorial with an example implementation in Python can be found here.
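A minimal sketch of what that might look like; the filename and both thresholds (100, 200) are placeholder values to tune:

import cv2

# smooth first so weak noise edges don't survive, then run Canny
img = cv2.imread('plan.png', 0)            # 0 = load as grayscale; placeholder name
blurred = cv2.GaussianBlur(img, (5, 5), 0)
edges = cv2.Canny(blurred, 100, 200)
cv2.imwrite('plan_edges.png', edges)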
I am working on a project where I have to find the background of a given gray-scale image.
I did some research on the internet and found several algorithms that use the OpenCV library (such as the following: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_video/py_bg_subtraction/py_bg_subtraction.html#py-background-subtraction).
This kind of approach doesn't work for me.
The image I want to process is:
As you can see, it is in gray-scale, with a "gray static" background. I would love to see only the nucleus of the cell (the images will improve in resolution and quality over time; this is a pretty raw one).
I tried subtracting the 2D magnitude FFT of the background from the main image, but the result is not good:
What I am asking is: what kind of process do you suggest for eliminating the background?
Have you already tried the watershed algorithm? I saw in a paper that it has been used and refined for cell image segmentation.
Background subtraction won't work for your images because your background is not consistent, and the image's SNR is too low.
So you have two options:
1) Use a deep learning method (such as U-Net), if you have enough data.
2) Apply a bilateral filter, then a method such as active contours, GLCM texture features, or k-means clustering (a sketch of the k-means variant follows).
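A minimal sketch of the second option with k-means (OpenCV 3.x-style call); the filename, K=3, and the filter parameters are assumptions to tune:

import numpy as np
import cv2

# edge-preserving smoothing, then cluster pixel intensities with k-means
img = cv2.imread('cell.png', 0)                      # placeholder filename
smooth = cv2.bilateralFilter(img, 9, 75, 75)

samples = smooth.reshape(-1, 1).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(samples, 3, None, criteria, 5, cv2.KMEANS_PP_CENTERS)

# keep the brightest cluster as a candidate nucleus mask
mask = (labels.reshape(img.shape) == np.argmax(centers)).astype(np.uint8) * 255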
I don't know why OpenCV can't find the lines in this image using HoughLines.
The result is as follows:
But if I use another picture from the internet,
voilà. Any ideas?
It seems that Hough detects the line extremities; a gradient is probably computed during the process.
A solution would be to:
Binarize the image.
Compute the skeleton of the black regions.
Prune the skeleton.
Apply the Hough transform.
Doing that, each line is reduced to one pixel in width (a sketch follows below).
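A rough Python sketch of those steps (skipping the pruning); the filename and Hough threshold are placeholders:

import numpy as np
import cv2
from skimage.morphology import skeletonize

# binarize with Otsu (lines become white foreground), then thin to 1 px
img = cv2.imread('lines.png', 0)                     # placeholder filename
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
skeleton = skeletonize(binary > 0).astype(np.uint8) * 255

# run the Hough transform on the one-pixel-wide skeleton
lines = cv2.HoughLines(skeleton, 1, np.pi / 180, 100)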
[EDIT] Here is an example:
The test image binarized
The skeleton (simple thinning).
The result (I skipped the pruning).
Is it possible to use OpenCV for better detection of posters (see the example image)? I have tried the following approach (sketched after the list):
create a mask of higher-intensity pixels (higher V value),
apply erosion and dilation to remove noise and smooth the mask,
findContours and draw bounding boxes.
The result of this approach is only good if there are lights behind the posters (the posters glow). However, my goal is to detect posters even when they are not among the highest intensities. Please, can anyone guide me on this?
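For concreteness, here is a minimal sketch of the approach described above; the filename and the threshold of 200 are assumptions:

import cv2

# mask the bright pixels in the V channel, clean up, then box the contours
img = cv2.imread('wall.jpg')                         # placeholder filename
v = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 2]
_, mask = cv2.threshold(v, 200, 255, cv2.THRESH_BINARY)

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
mask = cv2.dilate(cv2.erode(mask, kernel), kernel)

# OpenCV 2.4/4.x return two values here; 3.x returns three
contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)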
My first thought was neural nets, and OpenCV has an implementation:
http://docs.opencv.org/2.4/modules/ml/doc/neural_networks.html
They call them 'multi-layer perceptrons'.
Other machine learning examples in OpenCV are here:
http://bytefish.de/blog/machine_learning_opencv/
If you know what the posters you want to detect look like, you could search for keypoints and match them by their descriptors.
See the example Features2D + Homography to find a known object in the documentation for code.
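The linked example uses SURF with a FLANN matcher in C++; below is an assumed Python variant using ORB (which avoids the non-free module), with placeholder filenames:

import cv2

# detect keypoints and descriptors in the template and the scene
poster = cv2.imread('poster_template.png', 0)        # placeholder filenames
scene = cv2.imread('wall.jpg', 0)

orb = cv2.ORB_create()                               # cv2.ORB() on OpenCV 2.4
kp1, des1 = orb.detectAndCompute(poster, None)
kp2, des2 = orb.detectAndCompute(scene, None)

# brute-force Hamming matcher; best (lowest-distance) matches come first
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)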
Hi, I want to use the Python Imaging Library to crop images to a specific size for a website. I have a problem: these images are meant to show people's faces, so I need to crop automatically based on them.
I know face detection is a difficult problem, so I'm thinking of using the face.com API http://developers.face.com/tools/#faces/detect, which is fine for what I want to do.
I'm just a little stuck on how I would use this data to crop to a region containing the majority of the faces.
Can anybody help?
Joe
There is a library for Python with a concept of smart cropping that, among other options, can use face detection to crop more intelligently.
It uses OpenCV under the hood, but you are isolated from it.
https://github.com/globocom/thumbor
If you have some rectangle that you want to excise from an image, here's what I might try first (sketched below):
(optional) If the image is large, do a rough square crop centered on the face, with dimensions sqrt(2) larger than the longer edge (if rectangular). Worst case (a 45° rotation), it will still capture everything important.
Rotate based on the face orientation, something like rough_crop.rotate(math.degrees(math.atan(ydiff/xdiff))) (trig is fun).
Do a final crop. If you did the initial crop, the face should be centered; otherwise you'll have to transform (rotate) all your old coordinates to the new image (more trig!).
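A minimal PIL sketch of those steps; the filename and eye coordinates are hypothetical stand-ins for what a face-detection API would return:

import math
from PIL import Image

img = Image.open('portrait.jpg')                     # placeholder filename
(lx, ly), (rx, ry) = (230, 310), (370, 305)          # hypothetical eye positions

# rough square crop centered on the face, sqrt(2) larger than the face width,
# so even a worst-case 45-degree rotation keeps everything important
cx, cy = (lx + rx) / 2.0, (ly + ry) / 2.0
half = int(math.sqrt(2) * (rx - lx))
rough = img.crop((int(cx - half), int(cy - half), int(cx + half), int(cy + half)))

# rotate so the eye line becomes horizontal
# (the sign may need flipping depending on the coordinate convention)
angle = math.degrees(math.atan2(ry - ly, rx - lx))
straight = rough.rotate(angle)

# final centered crop around the (now centered) face
w, h = straight.size
final = straight.crop((w // 2 - half // 2, h // 2 - half // 2,
                       w // 2 + half // 2, h // 2 + half // 2))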