I am trying to do background subtraction on static images of faces (extract the face), but the only OpenCV tutorials I can find are for video.
I understand that background is easier to identify from a video, but are there any approaches for photos?
Haar cascades are widely used for that purpose. They are good at detecting human faces, but are hardly usable for classifying whose face it is.
Here is a nice reference: https://docs.opencv.org/3.4.1/d7/d8b/tutorial_py_face_detection.html
I want to ask for some advice about the procedure that I should implement for image segmentation working with opencv in python.
I have this kind of image, and my purpose is to detect the white fiber, like here:
Does anyone have a suggestion for the image-processing steps I should take?
Since the object's color is different from the background, I found this guide helpful. The concept is the following:
1. apply RGB filters to your image,
2. grab contours using OpenCV, then
3. apply some handcrafted conditions to them so as to fit your desired output, and finally
4. produce the box.
If all of your images share the same color patterns, this should work. If not, it will prove noisy.
I'm trying to learn computer vision and more specifically open-cv in python.
I want to make a program that would track my barbell in a video and show me its path. (I know apps like this exist, but I want to make it myself.) I tried using Canny edge detection and the HoughCircles function, but I seem to get everything but a good result.
I have been using this code to find the edges of my image:
import cv2 as cv

gray = cv.cvtColor(src=img, code=cv.COLOR_BGR2GRAY)
blur = cv.blur(gray, (2, 2))
canny = cv.Canny(blur, 60, 60)
And then this code to find the circle:
circles = cv.HoughCircles(canny, cv.HOUGH_GRADIENT, dp=2, minDist=1000, circles=None,maxRadius=50)
This is the result:
Result
left = original image with detected circle // right = canny image
Is this the right way to go or should I use another method?
Training a YOLO model to detect the barbell will work better than anything you have tried with OpenCV so far. You need at least 500 images, and those can be found on the internet easily. This tutorial is a kick-start tutorial on YOLO. Give it a try.
If you tweak the parameters of HoughCircles it may recognize the barbell [EDIT: but only with more preprocessing such as gamma correction and blurring, so better not]. However, OpenCV has many algorithms for this kind of object tracking; only a region of the image has to be specified first (if that's OK).
In your case the object is always visible and is not changing much, so I guess many of the available algorithms would work fine.
OpenCV has a built-in function for selection:
initBB = cv2.selectROI("Frame", frame, fromCenter=False, showCrosshair=True)
See this tutorial for tracking: https://www.pyimagesearch.com/2018/07/30/opencv-object-tracking/
The author's summary of the suggested tracker is:
CSRT Tracker: Discriminative Correlation Filter (with Channel and Spatial Reliability). Tends to be more accurate than KCF but slightly slower. (minimum OpenCV 3.4.2)
Use CSRT when you need higher object tracking accuracy and can tolerate slower FPS throughput
I guess accuracy is what you want, if it is for offline usage.
Can you share a sample video?
What's your problem exactly? Why do you track the barbell? Do you need semantic segmentation or ordinary detection? These are important questions. Canny is a very basic approach; it needs a very stable background to work. That's why deep learning is used to handle that kind of problem. If we talk about deep learning, you can use Mask R-CNN, YOLOv4, etc.; there are many available solutions out there.
I was asked to recognize a logo in an image using OpenCV. The lecturer told me that I don't have to do logo detection, only logo recognition. I am using OpenCV in C++. What is the easiest way to do it?
PS: I'm a newbie in computer vision.
It largely depends on your kind of images.
If your logo occupies, say, 90% of the image, you don't need detection, since you can probably get by with color histograms.
If the logo is small compared to the image, you should "find" the logo, in order to focus your comparison on that and not on the background clutter.
Could there be multiple logos in the same image?
Is the logo always fully visible?
Is the logo rigid, or could it be deformed? (Think, for example, of a logo on a shirt or a small bottle.)
Assuming that you have a single complete rigid logo to find, the simplest thing to try is template matching.
A more accurate approach is to match descriptors.
You can also see a related topic on SO here
Other, more robust approaches would require building constellations of keypoints on your reference logo and matching those constellations on the target image. See here and here for an example.
Last, but not least, have fun on Google!
I agree with #Miki: you need to do template matching. My recommendation is to use the sum of squared differences and only a rigid transformation; you can find a lot of information here. The last link is one of the best books I've read: it is simple to understand and covers most of the equations step by step.
I am trying to crop out features from a photo using OpenCV and haven't quite been able to find anything that helps. I have photos of metal panels, and I am trying to crop out the rivets to create a dataset that focuses on just the rivets. I have been able to detect and match features using ORB, but I am unsure how to then crop out those features. Ideally, each photo should give me multiple cropped images of rivets. Does anyone have experience with something like this?
Since your rivets look alike, you can use template matching with OpenCV (which is nicely described here).
If your template is skewed, rotated, etc. in the photo, you can use feature homography
For cropping the part of the image, you can look at this previously answered question.
I am trying to detect the objects in an image that look similar to a reference image. Here is how I'm trying to accomplish it:
Here is the sample Image:
and here is the image with SURF keypoints:
The rectangle is drawn based on a clustering method such as hierarchical clustering.
The main problem is that it doesn't detect the objects individually; it detects everything as one object.
Is there a way to separate these keypoints, so as to detect each vehicle separately?
Is this a good way to detect objects, or is there a better way? Please suggest one.
SURF keypoints are useful for detecting similar images, or images of the same place taken from different perspectives. For detecting each object individually, however, you can use Haar classifiers, which are also part of the OpenCV library.
Here is another great tutorial regarding object detection using OpenCV.