I'm writing a program to detect objects using OpenCV DNN with a pre-trained model such as SSD MobileNet or YOLOv3.
I want to connect several cameras to my program and run object detection on all of them.
I don't know what the best approach to this problem is.
Can I have, for example, 20 threads that grab frames from different cameras and each in turn hand its frame to a single OpenCV DNN object for detection (using a queue to store the frames to analyse)?
OR
Can I instantiate 20 detection objects, one per camera, and run detection whenever a frame is available? That way I could potentially have 20 detections running at the same time.
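A minimal sketch of the first option (camera threads feeding one shared detector through a queue), assuming an SSD MobileNet Caffe model; the model file names, camera sources and queue size below are placeholders:

import queue
import threading
import cv2

frame_queue = queue.Queue(maxsize=100)  # shared buffer of (camera_id, frame)

def camera_reader(cam_id, source):
    # Grab frames from one camera and push them into the shared queue.
    cap = cv2.VideoCapture(source)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        try:
            frame_queue.put((cam_id, frame), timeout=1)
        except queue.Full:
            pass  # drop frames if the detector cannot keep up
    cap.release()

def detection_worker(net):
    # Single consumer: runs the DNN on frames from all cameras in turn.
    while True:
        cam_id, frame = frame_queue.get()
        blob = cv2.dnn.blobFromImage(frame, 0.007843, (300, 300), 127.5)
        net.setInput(blob)
        detections = net.forward()
        # ... post-process the detections for this camera id here ...

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",    # placeholder files
                               "MobileNetSSD_deploy.caffemodel")
threading.Thread(target=detection_worker, args=(net,), daemon=True).start()
for cam_id, source in enumerate(range(20)):  # e.g. 20 device indices or RTSP URLs
    threading.Thread(target=camera_reader, args=(cam_id, source), daemon=True).start()
# keep the main thread alive in a real program

With one shared net the throughput is bounded by one inference at a time; the second option (one detector per camera) only pays off if the machine has the CPU/GPU capacity to actually run many inferences in parallel.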
Related
I am new to object detection using a USB webcam.
I have a USB webcam capable of recording FHD at 30 fps. I've connected this camera to a Linux machine, on a USB 3.0 port, to capture video.
The ffmpeg command line is used to capture a minute-long video at 15 fps, 640x720, with a 5M bitrate.
A simple OpenCV-based Python program reads this video file frame by frame using cap.read(). However, I've noticed that when there is a moving object (e.g. a human) in the frame, it becomes very blurry. (Here is a link to an example.) I am wondering if this is normal or whether some adjustments are missing.
I am asking this question because I would like to run an object detection algorithm (SSD + MobileNet v2) on the video that I am capturing. But for many of the frames, if the object is moving, object detection fails to spot it. (Yes, of course there isn't a perfect detection algorithm for all video analytics, and there are various reasons why detection can fail.)
Could you give me pointers on removing the blurriness from these video frames?
1) Is it because the video recording resolution is too low?
2) Is it because the Python program is reading at a different frame rate (approximately 13~14 fps)?
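For what it's worth, motion blur is created at capture time by the sensor's exposure, so the speed at which Python reads the file cannot add blur; still, a small sketch like the one below (the file name is a placeholder) lets you compare the frame rate stored in the file with how fast cap.read() actually delivers frames, which addresses question 2:

import time
import cv2

cap = cv2.VideoCapture("capture.mp4")  # placeholder path to the ffmpeg recording
print("FPS reported by the container:", cap.get(cv2.CAP_PROP_FPS))

frames, start = 0, time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
print("Frames read:", frames)
print("Effective read rate: %.1f fps" % (frames / (time.time() - start)))
cap.release()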
I want to make a drone that can detect objects from above. I found examples
of background subtraction, but it detects things and then treats the new image as the background. I want the drone to reach its waypoint and check whether something new is detected.
The drone will fly by itself and the image processing will be done with OpenCV on a Raspberry Pi. How do I write the code in Python for this? I can code in Python. Please tell me what I should follow.
Thanks in advance.
Background subtraction doesn't work on drones, and a stabilized camera doesn't help. You need to estimate a homography matrix between frames with subpixel accuracy and build a custom background subtraction algorithm on top of that alignment. That kind of workload is not a job for a Raspberry Pi and Python.
If you know something about the objects in advance, try neural networks for detection. MobileNet v3 can run on a Raspberry Pi.
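As a rough sketch of the frame-alignment idea above (only the core of a custom background subtractor, not a complete algorithm): estimate a homography between consecutive frames from ORB feature matches, warp the previous frame onto the current one, and diff them so camera motion is mostly cancelled and moving objects stand out:

import cv2
import numpy as np

def aligned_diff(prev_gray, curr_gray):
    # Match ORB features between the two frames.
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Estimate the homography and align the previous frame to the current one.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = curr_gray.shape
    warped_prev = cv2.warpPerspective(prev_gray, H, (w, h))
    return cv2.absdiff(curr_gray, warped_prev)  # moving objects stand out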
For training you can use these datasets:
http://aiskyeye.com/
https://gdo152.llnl.gov/cowc/
http://cvgl.stanford.edu/projects/uav_data/
https://github.com/gjy3035/Awesome-Crowd-Counting#datasets
I tried writing the code but was not able to do both detections simultaneously.
You could train Haar cascades for the shape, but they will be limited by orientations etc. TensorFlow can be used as well. However, detection by both color and shape will be a bigger problem.
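If you go the cascade route, running a trained cascade with OpenCV looks roughly like the sketch below; the cascade XML and image file names are placeholders for whatever you train and capture yourself:

import cv2

cascade = cv2.CascadeClassifier("my_shape_cascade.xml")  # hypothetical trained cascade
frame = cv2.imread("frame.jpg")                           # placeholder test image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in hits:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)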
I'm working on a real-time object detector with tensorflow and opencv.
I've used different SSD and Faster-RCNN based frozen inference graphs and they almost never fail.
The video stream comes from an IR camera fixed to a wall, with a background that almost never changes. There are some misdetections at particular hours of the day (e.g. when the light changes in the afternoon) that occur in the background area or on small objects too close to the camera.
So, to fix these little mistakes, I wanted to fine-tune the model with images of the same background.
Since the background is always the same, how do I approach retraining the model with 1000 misdetection pictures that are all almost identical?
In the case of variations in the background lighting, it might be possible to use background subtraction
https://docs.opencv.org/3.4.1/d1/dc5/tutorial_background_subtraction.html
while dynamically updating it, as shown here:
https://www.pyimagesearch.com/2015/06/01/home-surveillance-and-motion-detection-with-the-raspberry-pi-python-and-opencv/
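A minimal sketch combining the two links, assuming the IR stream can be opened with cv2.VideoCapture (the source string is a placeholder): OpenCV's MOG2 subtractor keeps updating its background model, so gradual lighting changes are absorbed instead of being flagged as foreground, and only regions with real motion need to be handed to the SSD/Faster-RCNN detector:

import cv2

cap = cv2.VideoCapture("ir_camera_stream")  # placeholder source (file, device or RTSP URL)
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # background model is updated every frame
    # Only pass regions with enough foreground pixels to the detector,
    # so static-background misdetections are filtered out.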
Thank you.
I am working on a face detection and recognition app in Python using TensorFlow and OpenCV. The overall flow is as follows:
while True:
    # 1) Read one frame using OpenCV: ret, frame = video_capture.read()
    # 2) Detect faces in the current frame using TensorFlow (using mxnet_mtcnn_face_detection)
    # 3) For each detected face in the current frame, run the FaceNet algorithm (TensorFlow) and compare with my database to find the name of the detected face/person
    # 4) Display a box around each face with the name of the person using OpenCV
Now, my issue is that the overhead (runtime) of face detection and recognition is very high, so sometimes the output video looks more like slow motion! I tried tracking methods (e.g., MIL, KCF), but then I cannot detect new faces coming into the frame! Is there any approach to speed this up, at least to skip the face recognition step for faces already recognized in previous frames?
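One common pattern matching what is asked here, sketched below under assumptions (hypothetical detect_faces() and recognize() helpers standing in for the MTCNN and FaceNet steps, and one OpenCV KCF tracker per face): run the expensive detection and recognition only every N-th frame, keep the last recognized name attached to each tracker, and only update the trackers on the frames in between. New faces are still picked up, at worst N frames late.

import cv2

DETECT_EVERY = 10  # run detection + recognition only every 10th frame
trackers = []      # list of (tracker, name) pairs

video_capture = cv2.VideoCapture(0)
frame_idx = 0
while True:
    ok, frame = video_capture.read()
    if not ok:
        break
    if frame_idx % DETECT_EVERY == 0:
        trackers = []
        for box in detect_faces(frame):       # hypothetical MTCNN wrapper
            name = recognize(frame, box)      # hypothetical FaceNet wrapper
            # Requires opencv-contrib; in some builds this is cv2.legacy.TrackerKCF_create().
            tracker = cv2.TrackerKCF_create()
            tracker.init(frame, tuple(box))   # box as (x, y, w, h)
            trackers.append((tracker, name))
    else:
        for tracker, name in trackers:
            ok, box = tracker.update(frame)   # cheap per-frame update, no recognition
            if ok:
                x, y, w, h = map(int, box)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
                cv2.putText(frame, name, (x, y - 5),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    frame_idx += 1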