I am implementing a motion-based object tracking program that uses background subtraction, a Kalman filter, and the Hungarian algorithm. Everything works fine except for occlusions. When two objects get close enough to each other, the background subtraction recognizes them as one of the two objects; after they split, the program recognizes both objects correctly again. I am looking for a solution/algorithm that will detect occlusion like the one shown in point c) in the example below.
I would appreciate any references or code examples concerning the occlusion detection problem when using background subtraction.
Object detection using a machine learning algorithm should reliably distinguish between these objects, even with significant occlusion. You haven't shared anything about your environment, so I don't know what constraints you have, but here is how I would tackle your problem with an ML approach.
import cv2
from sort import *  # SORT tracker (see link below for the repo)

tracker = Sort()  # create an instance of the tracker
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Not sure what your environment is, but get your objects'
    # bounding boxes from whatever detector you use (pseudo code):
    detected_objects = detector.detect(frame)
    # SORT expects an N x 5 array of [x1, y1, x2, y2, score] and returns
    # the boxes with a tracking ID appended to each row
    detected_objects_with_ids = tracker.update(detected_objects)
    ...
The example above uses this Kalman filter and Hungarian algorithm implementation (SORT), which can track multiple objects in real time.
Again, not sure about your environment, but you can find pre-built object detection models on the TensorFlow site.
I'm trying to learn computer vision, and more specifically OpenCV in Python.
I want to make a program that tracks my barbell in a video and shows me its path. (I know apps like this exist, but I want to make it myself.) I tried using Canny edge detection and the HoughCircles function, but I seem to get everything but a good result.
I have been using this code to find the edges of my image:
import cv2 as cv

gray = cv.cvtColor(src=img, code=cv.COLOR_BGR2GRAY)
blur = cv.blur(gray, (2, 2))
canny = cv.Canny(blur, 60, 60)
And then this code to find the circle:
circles = cv.HoughCircles(canny, cv.HOUGH_GRADIENT, dp=2, minDist=1000, circles=None, maxRadius=50)
This is the result:
[Result image: left = original image with the detected circle; right = Canny image]
Is this the right way to go or should I use another method?
Training a YOLO model to detect the barbell will work better than anything you have tried with OpenCV so far. You need at least 500 images, which can be found on the internet easily. This tutorial is a good kick-start for YOLO. Give it a try.
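Once you have trained weights, inference is straightforward through OpenCV's dnn module (available in recent OpenCV 4.x versions). A minimal sketch; the config, weights, and image file names are hypothetical placeholders for your own trained model:

import cv2

# Hypothetical file names for a custom-trained Darknet YOLO model
net = cv2.dnn.readNetFromDarknet('barbell-yolov4.cfg', 'barbell-yolov4.weights')
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

img = cv2.imread('frame.jpg')  # hypothetical input frame
class_ids, scores, boxes = model.detect(img, confThreshold=0.5, nmsThreshold=0.4)
for (x, y, w, h) in boxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)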
If you tweak the parameters of HoughCircles it may recognize the barbell [EDIT: but only with more preprocessing such as gamma correction and blurring, so better not]. However, OpenCV has many algorithms for this kind of object tracking; only a region of the image has to be specified first (if that's acceptable).
In your case the object is always visible and does not change much, so I guess many of the available algorithms would work fine.
OpenCV has a built-in function for selecting that region:
initBB = cv2.selectROI("Frame", frame, fromCenter=False, showCrosshair=True)
See this tutorial for tracking: https://www.pyimagesearch.com/2018/07/30/opencv-object-tracking/
The author's summary of the suggested trackers is:
CSRT Tracker: Discriminative Correlation Filter (with Channel and Spatial Reliability). Tends to be more accurate than KCF but slightly slower. (minimum OpenCV 3.4.2)
Use CSRT when you need higher object tracking accuracy and can tolerate slower FPS throughput
I guess accuracy is what you want if this is for offline use.
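A minimal sketch of how the ROI selection and the CSRT tracker fit together; this assumes opencv-contrib-python is installed (on some OpenCV 4 builds the constructor lives under cv2.legacy instead), and the video file name is a placeholder:

import cv2

cap = cv2.VideoCapture('barbell.mp4')  # hypothetical video file
ret, frame = cap.read()

# Let the user draw a box around the barbell plate in the first frame
initBB = cv2.selectROI("Frame", frame, fromCenter=False, showCrosshair=True)
tracker = cv2.TrackerCSRT_create()  # cv2.legacy.TrackerCSRT_create() on some builds
tracker.init(frame, initBB)

path = []  # center points of the tracked box, i.e. the bar path
while True:
    ret, frame = cap.read()
    if not ret:
        break
    ok, box = tracker.update(frame)
    if ok:
        x, y, w, h = [int(v) for v in box]
        path.append((x + w // 2, y + h // 2))
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    for p in path:
        cv2.circle(frame, p, 2, (0, 0, 255), -1)  # draw the accumulated path
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break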
Can you share a sample video?
What's your problem exactly? Why do you track the barbell? Do you need semantic segmentation or normal detection? These are important questions. Canny is a very basic approach; it needs a very stable background to work. That's why deep learning is used to handle this kind of problem. If we are talking about deep learning, you can use Mask R-CNN, YOLOv4, etc.; there are many solutions available out there.
I'm trying to build a Python program to count the number of people crossing the road in two directions. The video file is something like this
Now for the detection phase I'm using BackgroundSubtractorMOG() to detect the people. The problem is that I want to identify each object separately and track its movement across consecutive frames.
I'm thinking of using MeanShift for that purpose, but I don't understand how to hand an object over to the tracking phase, or how to initialize the tracking window. As it is, I end up detecting the objects independently in each frame.
I want to know how to detect whether an object has already been detected in a previous frame.
Please provide some of your code here for reference.
Also, instead of pure object detection, try object tracking with the detection algorithm re-run at some interval. This might solve your issue of matching previously detected objects.
Some of the tracking algorithms available in OpenCV are BOOSTING, MIL, KCF, and TLD.
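A rough sketch of that pattern, assuming a hypothetical detect() function that returns bounding boxes (KCF is used here just as an example; any of the trackers above would fit, and on some OpenCV 4 builds the constructor is cv2.legacy.TrackerKCF_create):

import cv2

DETECT_EVERY = 30  # re-run the detector every 30 frames (arbitrary interval)
cap = cv2.VideoCapture('crossing.mp4')  # hypothetical video file
trackers = []
frame_idx = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    if frame_idx % DETECT_EVERY == 0:
        # Re-detect and re-initialize one tracker per detected person
        trackers = []
        for (x, y, w, h) in detect(frame):  # hypothetical detector
            t = cv2.TrackerKCF_create()
            t.init(frame, (x, y, w, h))
            trackers.append(t)
    else:
        # Between detections, just update the existing trackers
        for t in trackers:
            ok, box = t.update(frame)
            if ok:
                x, y, w, h = [int(v) for v in box]
                cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    frame_idx += 1

Note that naively replacing the trackers at every detection step throws away identities; to keep a stable ID per person, you would match the new detections to the existing tracks (e.g. by bounding-box overlap) rather than discarding them.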
I am trying to detect a vehicle in an image (actually in a sequence of frames from a video). I am new to OpenCV and Python and work under Windows 7.
Is there a way to get the horizontal and vertical edges of an image and then sum the resulting images up into respective vectors?
Is there Python code or a function available for this?
I looked at this and this but could not figure out how to do it.
You may use the following image for illustration.
EDIT
I was inspired by the idea presented in the following paper (sorry if you do not have access).
Betke, M., Haritaoglu, E., & Davis, L. S. (2000). Real-time multiple vehicle detection and tracking from a moving vehicle. Machine Vision and Applications, 12, 69-83.
I would take a look at the squares example for OpenCV, posted here. It uses Canny and then does a contour find to return the sides of each square. You should be able to modify this code to get the horizontal and vertical lines you are looking for. Here is a link to the documentation for the Python call of Canny; it is rather helpful for all-around edge detection. In about an hour I will be home and can give you a working example of what you are after.
Do some reading on Sobel filters.
http://en.wikipedia.org/wiki/Sobel_operator
You can basically get vertical and horizontal gradients at each pixel.
Here is the OpenCV function for it.
http://docs.opencv.org/modules/imgproc/doc/filtering.html?highlight=sobel#sobel
Once you have these filtered images, you can collect statistics column- and row-wise, decide whether a location is an edge, and get its position.
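A minimal sketch of that idea, assuming a hypothetical input file name:

import cv2
import numpy as np

img = cv2.imread('vehicle.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# dx=1 responds to vertical edges, dy=1 to horizontal edges
grad_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
grad_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

# Sum the absolute responses into one vector per axis (projection profiles)
vertical_profile = np.abs(grad_x).sum(axis=0)    # one value per column
horizontal_profile = np.abs(grad_y).sum(axis=1)  # one value per row

# Peaks in these profiles mark columns/rows with strong edges,
# e.g. the sides, roof, and bumper of a vehicle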
Typically, purely geometrical approaches to object detection are not hugely successful, as the appearance model you assume can quite easily be violated by occlusion, noise, or orientation changes.
Machine learning approaches typically work much better in my opinion and would probably provide a more robust solution to your problem. Since you appear to be working with OpenCV, you could take a look at cascade classifiers, for which OpenCV provides both Haar wavelet and local binary pattern feature-based variants.
The link I have provided is to a tutorial with very complete steps explaining how to create a classifier with several prewritten utilities. Basically, you create a directory with 'positive' images of cars and a directory with 'negative' images of typical backgrounds. The opencv_createsamples utility can be used to create training images warped to simulate different orientations and average intensities from a small set of images. You then use the opencv_traincascade utility, setting a few command-line parameters to select different training options, and it outputs a trained classifier for you.
Detection can be performed using either the C++ or the Python interface with this trained classifier.
For instance, using Python you can load the classifier and perform detection on an image, getting back a list of bounding rectangles, using:
import cv2

image = cv2.imread('path/to/image')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # the classifier expects grayscale
cc = cv2.CascadeClassifier('path/to/classifierfile')
objs = cc.detectMultiScale(gray)
I am trying to detect the objects in an image which look similar to a reference image. Here is how I'm trying to accomplish it:
Here is the sample image:
and here is the image with SURF keypoints:
The rectangle is drawn based on a clustering method such as hierarchical clustering.
The main problem is that in this case it doesn't detect the objects individually; it detects everything as one object.
Is there a way to separate these keypoints so as to detect each vehicle separately?
Is this a good way to detect objects, or is there a better way? Please suggest.
SURF keypoints are useful for matching similar images, or images of the same place taken from different perspectives, but they do not separate individual object instances by themselves. For object detection you can use Haar cascade classifiers instead, which are also part of the OpenCV library.
Here is another great tutorial regarding object detection using OpenCV.
I have a camera that will be stationary, pointed at an indoor area. People will walk past the camera, within about 5 meters of it. Using OpenCV, I want to detect individuals walking past - my ideal return is an array of detected individuals, with bounding rectangles.
I've looked at several of the built-in samples:
None of the Python samples really apply
The C blob tracking sample looks promising, but doesn't accept live video, which makes testing difficult. It's also the most complicated of the samples, which makes extracting the relevant knowledge and converting it to the Python API problematic.
The C 'motempl' sample also looks promising, in that it calculates a silhouette from subsequent video frames. Presumably I could then use that to find strongly connected components and extract individual blobs and their bounding boxes - but I'm still left trying to figure out a way to identify blobs found in subsequent frames as the same blob.
Is anyone able to provide guidance or samples for doing this - preferably in Python?
The latest SVN version of OpenCV contains an (undocumented) implementation of HOG-based pedestrian detection. It even comes with a pre-trained detector and a Python wrapper. The basic usage is as follows:
from cv import *

storage = CreateMemStorage(0)
img = LoadImage(file)  # or read from camera
found = list(HOGDetectMultiScale(img, storage, win_stride=(8, 8),
                                 padding=(32, 32), scale=1.05, group_threshold=2))
So instead of tracking, you might just run the detector in each frame and use its output directly.
See src/cvaux/cvhog.cpp for the implementation and samples/python/peopledetect.py for a more complete Python example (both in the OpenCV sources).
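For what it's worth, the old cv module shown above has since been superseded; in modern OpenCV the same pre-trained people detector is exposed through cv2.HOGDescriptor. A minimal sketch of the equivalent usage:

import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # live camera, as in the question
while True:
    ret, frame = cap.read()
    if not ret:
        break
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                          padding=(32, 32), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('people', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break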
Nick,
What you are looking for is not people detection, but motion detection. If you tell us a lot more about what you are trying to solve/do, we can answer better.
Anyway, there are many ways to do motion detection, depending on what you are going to do with the results. The simplest would be frame differencing followed by thresholding, while a more complex one could be proper background modeling -> foreground subtraction -> morphological ops -> connected component analysis, followed by blob analysis if required (that fuller pipeline is sketched below). Download the OpenCV code and look in the samples directory; you might see what you are looking for. Also, there is an O'Reilly book on OpenCV.
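A minimal sketch of that pipeline, assuming OpenCV 4 (where findContours returns two values) and a webcam source:

import cv2

backsub = cv2.createBackgroundSubtractorMOG2()  # background modeling
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    fg = backsub.apply(frame)  # foreground mask
    # Drop shadow pixels (marked as 127 by MOG2) and binarize
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)  # morphological ops
    # Connected component / blob analysis via contours
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:  # arbitrary minimum blob area
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow('motion', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break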
Hope this helps,
Nand
This is clearly a non-trivial task. You'll have to look into scientific publications for inspiration (Google Scholar is your friend here). Here's a paper about human detection and tracking: Human tracking by fast mean shift mode seeking
This is similar to a project we did as part of a Computer Vision course, and I can tell you right now that it is a hard problem to get right.
You could use foreground/background segmentation, find all blobs, and then decide that each one is a person. The problem is that this will not work very well, since people tend to walk together and pass each other, so a blob might very well consist of two persons, and you will then see that blob splitting and merging as they walk along.
You will need some method of discriminating between multiple persons in one blob. This is not a problem I expect anyone to be able to answer in a single SO post.
My advice is to dive into the available research and see if you can find anything there. The problem is not unsolvable, considering that products exist which do this: Autoliv has a product that detects pedestrians using an IR camera on a car, and I have seen other products that count customers entering and exiting stores.