Tracking Multiple Objects Using OpenCV in Python

I'm trying to build a python program to count the number of people crossing the road in 2 directions. The video file is something like this
For the detection phase I'm using BackgroundSubtractorMOG() to detect the people. Now I want to identify each object separately and track its movement across consecutive frames.
I'm thinking of using MeanShift for that, but I don't understand how to hand an object over to the tracking phase, or how to initialize the tracking window. As it stands, I end up detecting the objects independently in each frame.
I want to know how to tell whether an object has already been detected in a previous frame.

Please provide some of your code for reference.
Instead of plain object detection, try object tracking, with the detection algorithm re-run after some interval. This might solve your issue of re-identifying previously detected objects.
Some of the available tracking algorithms are Boosting, MIL, KCF, and TLD.
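As a sketch of that detect-every-N-frames pattern (pure Python; the stub class stands in for a real tracker such as cv2.TrackerKCF, and both the stub and the fake detector are placeholders, not OpenCV API):

```python
DETECT_INTERVAL = 10  # re-run the (expensive) detector every 10 frames

class StubTracker:
    """Placeholder for a real tracker such as cv2.TrackerKCF."""
    def __init__(self, box):
        self.box = box
    def update(self, frame):
        return self.box  # a real tracker would shift the box here

def run(frames, detect):
    trackers = []
    for i, frame in enumerate(frames):
        if i % DETECT_INTERVAL == 0:
            # Re-initialize trackers from fresh detections
            trackers = [StubTracker(b) for b in detect(frame)]
        yield [t.update(frame) for t in trackers]

# Toy usage: "frames" are dummies, the "detector" returns a fixed box
boxes_per_frame = list(run(range(25), lambda f: [(10, 10, 40, 80)]))
print(len(boxes_per_frame))  # 25
```

Between detector runs the trackers carry the object identities forward, which is what lets you say "this is the same person as last frame".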

Related

Occlusion handling in object tracking

I am implementing a motion-based object tracking program that uses background subtraction, a Kalman filter, and the Hungarian algorithm. Everything is working fine except for occlusions. When two objects are close enough to each other, background subtraction recognizes them as one of the two objects; after they split, the program recognizes both objects correctly again. I am looking for a solution/algorithm that will detect occlusion like the one shown in point c) in the example below.
I would appreciate any references or code examples referring to the occlusion detection problem when using background subtraction.
Object detection using a machine learning algorithm should reliably distinguish between these objects, even with significant occlusion. You haven't shared anything about your environment so I don't know what kind of constraints you have, but using an ML approach, here is how I would tackle your problem.
import cv2
from sort import Sort  # SORT tracker, see link below for the repo

tracker = Sort()  # create an instance of the tracker
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Not sure what your environment is, but get your objects'
    # bounding boxes here ("detector" is pseudocode for whatever you use)
    detected_objects = detector.detect(frame)
    # Get tracking IDs for the objects and their bounding boxes
    detected_objects_with_ids = tracker.update(detected_objects)
    ...
The above example uses SORT, which combines a Kalman filter with the Hungarian algorithm and can track multiple objects in real time.
Again, not sure about your environment, but you could find pre-built object detection algorithms on the Tensorflow site.
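To illustrate the assignment step that the Hungarian algorithm inside SORT performs, here is a brute-force optimal matching over a small cost matrix (pure Python; only practical for a handful of objects since it tries every permutation, whereas the real Hungarian method runs in O(n^3)):

```python
from itertools import permutations

def assign(cost):
    """Return the detection index for each track, minimizing total cost."""
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[t][perm[t]] for t in range(n))
        if total < best:
            best, best_perm = total, perm
    return list(best_perm)

# 3 tracks x 3 detections; cost = 1 - IoU, so lower is a better match
cost = [
    [0.1, 0.9, 0.8],
    [0.8, 0.2, 0.9],
    [0.9, 0.8, 0.3],
]
print(assign(cost))  # [0, 1, 2]
```

Each track gets matched to exactly one detection; unmatched rows/columns are how trackers decide an object has appeared or disappeared.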

What is the best method of detecting a collision between these two objects in opencv (python)?

I am required to prevent an extruder from touching the base of a stage, and I chose to use OpenCV in Python to detect a collision between the two objects. After researching past posts and reading up on OpenCV's processing tools, I tried a few methods. The first was edge detection, which proved lacking for position detection. Next I tried using color to isolate the needle and the stage, place rectangles around the objects, and then prevent the rectangles from touching; this is proving to be a challenge because the background and the needle are pretty much the same color. The last method would use box-box collision, but my guess is that if I am having issues with the second method, this one will also prove difficult.
I am thinking about trying deep learning with OpenCV, taking a bunch of photos and training the program, but I am not sure how that would play out since I'll be working from a video feed. Can anyone give me any tips? Any algorithms that would be helpful here? I see that the needle is clearly defined by its edges, so how can I use that to my advantage? Any help is appreciated.
Photo of needle and stage:
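For the box-box collision option: once both bounding rectangles are known, the overlap test itself is only a few lines of pure Python, using the (x, y, w, h) convention of cv2.boundingRect. The margin parameter is an assumption added here so the check can trigger before actual contact:

```python
def boxes_touch(a, b, margin=0):
    """Axis-aligned overlap test for (x, y, w, h) boxes.
    margin > 0 makes the test fire before the boxes actually touch."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return (ax < bx + bw + margin and bx < ax + aw + margin and
            ay < by + bh + margin and by < ay + ah + margin)

print(boxes_touch((0, 0, 10, 10), (15, 0, 10, 10)))            # False
print(boxes_touch((0, 0, 10, 10), (15, 0, 10, 10), margin=6))  # True
```

The hard part remains getting reliable rectangles for the needle and stage; the collision check itself is cheap enough to run every frame.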

Dlib - Object tracking over several shots

I am trying to find the start + end time of a person appears in a video.
My current approach is to find the person using face detection and then track his face using dlib's object tracking (if the person turns around in the video, I can't tell that he is still there using face recognition alone, so I need both detection and tracking techniques).
The problem is that the object tracker keeps following an object even after a camera shot cut or a scene change.
So I tried to re-initialize the tracker at every shot, but it's not so easy to detect the shots: even with very high sensitivity, ffmpeg and http://mklab.iti.gr/project/video-shot-segm don't return all of the shot cuts.
So it turns out that I need to compare the object rectangle of the previous frame with the rectangle detected in the current frame.
Any idea of a function that can give me a "similarity score" between two rectangles in two frames?
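One common similarity score for this is intersection over union (IoU): close to 1 for nearly identical rectangles, and near 0 across a shot cut. A pure-Python version for (left, top, right, bottom) rectangles (dlib rectangles expose left/top/right/bottom accessors):

```python
def iou(a, b):
    """Intersection over union of (left, top, right, bottom) rectangles."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # 1.0
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```

If the IoU between consecutive frames drops below some threshold (0.3 is a value you would tune, not a standard), treat it as a cut and re-initialize the tracker from a fresh detection.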

How to track and count multiple cars in a video using contours?

Steps I have followed:
Background subtraction with preprocessing.
Contour detection.
With these two steps, I am able to draw contours around all moving cars in the video. But how do I track the contours to count the number of cars in the video?
I searched around a bit and there seem to be different techniques, like the Kalman filter, Lucas-Kanade, and optical flow, but I don't know which one fits my use case. I am using OpenCV 3 with Python.
Admittedly this is a general question, but I am going to give a point of view (I had the same problem, though with point clouds; even if that differs from what you asked, I hope it gives you an idea of how to proceed).
Most of the time, once your contours are detected, tracking moving objects in the scene involves three main steps:
Feature Matching:
This step is about detecting features of your object in frame N and matching them to features of objects in frame N+1. OpenCV offers standard algorithms and descriptors for the detection part (SURF, SIFT, ORB...) as well as for the feature matching part.
Kalman Filter
The Kalman filter is used to get an initial prediction, generally by applying a constant-velocity model to your objects. For each appearance point of the track, a correspondence search is executed; if the average distance is above a specified threshold, feature matching is applied to get a better initial estimate.
To do that, you need to model your problem in a way that can be solved by a Kalman filter.
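The constant-velocity model can be sketched for a single coordinate (pure Python; q and r are made-up tuning values, and real multi-object trackers run this in matrix form over a state such as [x, y, vx, vy]):

```python
def kalman_1d(measurements, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter for a scalar position.
    q (process noise) and r (measurement noise) are tuning values."""
    x, v = measurements[0], 0.0       # state: position and velocity
    p = [[1.0, 0.0], [0.0, 1.0]]      # state covariance
    out = []
    for z in measurements[1:]:
        # Predict: position advances by one velocity step (dt = 1 frame)
        x += v
        p = [[p[0][0] + p[0][1] + p[1][0] + p[1][1] + q, p[0][1] + p[1][1]],
             [p[1][0] + p[1][1], p[1][1] + q]]
        # Update with the measurement z
        y = z - x                     # innovation
        s = p[0][0] + r               # innovation variance
        k0, k1 = p[0][0] / s, p[1][0] / s
        x, v = x + k0 * y, v + k1 * y
        p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
             [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        out.append(x)
    return out

# Perfect constant-velocity measurements: estimates converge to the track
estimates = kalman_1d(list(range(11)))
```

The predicted position is what you compare against next frame's contour centroids; the filter also lets you coast through a few frames where detection fails.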
Dynamic Mapping
After the motion estimation, the appearance of each track is updated. In contrast to standard mapping techniques, dynamic mapping tries to accumulate appearance details of both static and dynamic objects, thus refining your motion estimation and tracking process.
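Tying these steps back to the counting question, here is a minimal nearest-centroid matcher that assigns persistent IDs (pure Python; a simplification of the matching step above, and the distance threshold is an assumed value you would tune per video):

```python
def track_centroids(frames, max_dist=30.0):
    """frames: list of lists of (x, y) contour centroids per frame.
    Returns the total number of distinct tracks (cars) seen."""
    next_id, tracks = 0, {}  # id -> last known centroid
    for centroids in frames:
        assigned = {}
        for c in centroids:
            # Match to the nearest unclaimed track within max_dist
            best, best_d = None, max_dist
            for tid, prev in tracks.items():
                d = ((c[0] - prev[0]) ** 2 + (c[1] - prev[1]) ** 2) ** 0.5
                if d < best_d and tid not in assigned:
                    best, best_d = tid, d
            if best is None:          # no match: a new car enters
                best = next_id
                next_id += 1
            assigned[best] = c
        tracks = assigned             # drop tracks that vanished this frame
    return next_id

frames = [
    [(0, 0)],              # car A appears
    [(10, 0)],             # car A moves right
    [(20, 0), (100, 50)],  # car B appears far away
    [(30, 0), (110, 50)],
]
print(track_centroids(frames))  # 2
```

Counting then reduces to counting how many IDs were ever created; the Kalman prediction replaces "last known centroid" in a more robust version.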
There are a lot of papers out there; you may want to take a further look at these:
Robust Visual Tracking and Vehicle Classification via Sparse Representation
Motion Estimation from Range Images in Dynamic Outdoor Scenes
Multiple Objects Tracking using CAMshift Algorithm in OpenCV
Hope it helps!

How can I detect and track people using OpenCV?

I have a camera that will be stationary, pointed at an indoors area. People will walk past the camera, within about 5 meters of it. Using OpenCV, I want to detect individuals walking past - my ideal return is an array of detected individuals, with bounding rectangles.
I've looked at several of the built-in samples:
None of the Python samples really apply
The C blob tracking sample looks promising, but doesn't accept live video, which makes testing difficult. It's also the most complicated of the samples, making extracting the relevant knowledge and converting it to the Python API problematic.
The C 'motempl' sample also looks promising, in that it calculates a silhouette from subsequent video frames. Presumably I could then use that to find strongly connected components and extract individual blobs and their bounding boxes - but I'm still left trying to figure out a way to identify blobs found in subsequent frames as the same blob.
Is anyone able to provide guidance or samples for doing this - preferably in Python?
The latest SVN version of OpenCV contains an (undocumented) implementation of HOG-based pedestrian detection. It even comes with a pre-trained detector and a python wrapper. The basic usage is as follows:
from cv import *
storage = CreateMemStorage(0)
img = LoadImage(file)  # or read from camera
found = list(HOGDetectMultiScale(img, storage, win_stride=(8, 8),
                                 padding=(32, 32), scale=1.05,
                                 group_threshold=2))
So instead of tracking, you might just run the detector in each frame and use its output directly.
See src/cvaux/cvhog.cpp for the implementation and samples/python/peopledetect.py for a more complete python example (both in the OpenCV sources).
Nick,
What you are looking for is not people detection, but motion detection. If you tell us a lot more about what you are trying to solve/do, we can answer better.
Anyway, there are many ways to do motion detection depending on what you are going to do with the results. The simplest is frame differencing followed by thresholding, while a more complex one is proper background modeling -> foreground subtraction -> morphological operations -> connected component analysis, followed by blob analysis if required. Download the OpenCV code and look in the samples directory; you might find what you are looking for. There is also an O'Reilly book on OpenCV.
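The differencing-plus-thresholding option is only a few lines even without OpenCV (toy grayscale frames as nested lists here; in practice you would use cv2.absdiff and cv2.threshold on real frames):

```python
def motion_mask(prev, curr, thresh=25):
    """Binary motion mask: 1 where |curr - prev| exceeds thresh."""
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 200, 10],   # one bright "moving" pixel
        [10, 10, 12]]    # small change stays below the threshold
mask = motion_mask(prev, curr)
print(mask)  # [[0, 1, 0], [0, 0, 0]]
```

The connected components of that mask are your motion blobs; everything after that (morphology, blob analysis) is cleanup and interpretation.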
Hope this helps,
Nand
This is clearly a non-trivial task. You'll have to look into scientific publications for inspiration (Google Scholar is your friend here). Here's a paper about human detection and tracking: Human tracking by fast mean shift mode seeking
This is similar to a project we did as part of a Computer Vision course, and I can tell you right now that it is a hard problem to get right.
You could use foreground/background segmentation, find all blobs, and then decide that each one is a person. The problem is that this will not work very well, since people tend to walk together and pass one another, so a blob may well consist of two persons; you will then see that blob split and merge as they walk along.
You will need some method of discriminating between multiple persons in one blob. This is not a problem I expect anyone to be able to answer in a single SO post.
My advice is to dive into the available research and see if you can find anything there. The problem is not unsolvable, considering that products which do this exist: Autoliv has a product that detects pedestrians using an IR camera on a car, and I have seen other products that count customers entering and exiting stores.