I would like to take pictures using a USB webcam. When I use OpenCV's VideoCapture, it gives me frames from the video stream. In most cases, still images cover more area than the video does, so I am looking for a way to take pictures with the webcam that cover more of the available camera FOV.
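One thing worth trying (a sketch, not a guaranteed fix): many webcams crop the sensor in their default video mode, and requesting the camera's full still-image resolution through VideoCapture sometimes switches it to a wider-FOV mode. The resolution values below are placeholders; check your camera's photo resolution.

import cv2

cap = cv2.VideoCapture(0)
# Request the sensor's full still-image resolution (placeholder values);
# some cameras switch to a less-cropped, wider-FOV mode as a result.
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 4096)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 3072)

ok, frame = cap.read()
if ok:
    cv2.imwrite("still.png", frame)  # save losslessly as PNG
cap.release()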
I'm using the DeepFace library for face recognition and detection.
I was wondering if some formats (PNG, JPEG, etc.) give better results than others.
Is there a preferred image format for face recognition and face detection generally, and specifically in this library?
DeepFace wraps several face recognition frameworks, so the answer to your question depends on the case. However, none of the basic FR frameworks work on the original input images; they first convert them to grayscale, downsize them, turn them into numpy arrays, and so on, usually using OpenCV and PIL for that. So my opinion is that the image file format does not matter. Image file size and colour depth do matter.
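As a quick illustration (a sketch of the general point, not of DeepFace's actual internals): once a file is decoded, the container format is gone and you are left with the same kind of numpy array either way. The file names below are placeholders.

import cv2

img_png = cv2.imread("face.png")  # hypothetical file names
img_jpg = cv2.imread("face.jpg")
# Both decode to plain numpy arrays of the same shape and dtype;
# the format only affected size on disk and JPEG's lossy artifacts.
print(img_png.shape, img_png.dtype)
print(img_jpg.shape, img_jpg.dtype)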
This answer is based on an answer from 'Bhargav'.
In Python, images are processed as bitmaps using the color depth of the graphics system. Converting a PNG image to a bitmap is really fast (roughly 20x) compared to JPEGs.
In my case, I had the image and needed to save it before proceeding, so I saved it as PNG so I wouldn't lose quality (JPEG is lossy).
DeepFace currently accepts only two image input formats: PNG and JPEG. There is no way to use other image formats directly with its functions. If you want to use another input format, you first need to convert it to PNG or JPEG, which may cost you extra execution time.
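If you do have images in another format, a conversion pass like the following works (a sketch using Pillow; file names are placeholders):

from PIL import Image

# Convert an unsupported format (e.g. BMP) to PNG before calling DeepFace.
img = Image.open("input.bmp")
img.convert("RGB").save("input.png", "PNG")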
If you want to improve face recognition and face detection with the DeepFace library, use some preprocessing filters.
Some of the filters you can try for better results (see the ultimate guide):
Grayscale conversions
Face straightening
Face cropping (DeepFace does this automatically while processing, so there is no need to do it yourself)
Image resizing
Normalization
Image enhancement with PIL, like sharpening
Image equalization
Some basic filtering is already done by DeepFace. If your results are not accurate, meaning the filtering done by DeepFace is not sufficient, you will have to try each filter, trial and error, until you get good results.
Sharpening and grayscale conversion are the first methods to try; a minimal sketch of both is below.
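This sketch combines the two first-try filters plus equalization, assuming an input file named "face.jpg" (a placeholder):

import cv2
from PIL import Image, ImageFilter

img = cv2.imread("face.jpg")

# Grayscale conversion, then histogram equalization (single-channel input).
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
equalized = cv2.equalizeHist(gray)
cv2.imwrite("face_equalized.jpg", equalized)

# Sharpening with PIL.
sharpened = Image.open("face.jpg").filter(ImageFilter.SHARPEN)
sharpened.save("face_sharpened.jpg")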
I've been using OpenCV and MoviePy to get images out of a video (1 image per second), and once extracted, I analyze each image with pytesseract. The part where the script extracts the images takes quite a bit of time. Is it possible, or is there a function I've overlooked in MoviePy or OpenCV, that allows video frames to be analyzed without having to create image files first? This could tremendously speed up the process.
Current steps:
Scan the video passed as an argument and extract 1 frame per second
From each of those images, perform analysis on a specific area
Desired:
Perform analysis on a specific area of the video itself at 1 fps.
If this function exists, please inform me. Otherwise, would there be a workaround for this? Suggestions?
Thanks!!
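For what it's worth, pytesseract accepts in-memory images (numpy arrays), so one workaround is to read frames with cv2.VideoCapture, crop the area, and pass it straight to pytesseract without writing files. A sketch, with the file name and ROI coordinates as placeholders:

import cv2
import pytesseract

cap = cv2.VideoCapture("input.mp4")    # placeholder file name
fps = cap.get(cv2.CAP_PROP_FPS) or 25  # fall back if FPS is unreported
step = int(round(fps))                 # ~1 frame per second

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % step == 0:
        roi = frame[100:200, 300:600]  # placeholder analysis area
        print(frame_idx, pytesseract.image_to_string(roi))
    frame_idx += 1
cap.release()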
I am using ROS to control a drone for real-time image processing applications. I have calibrated the camera using the cameracalibrator.py node in ROS. When I use the image_proc node to compare raw and rectified images, I don't get what I want. Although the image is rectified, the border of the image gets distorted in the opposite direction, as in the image below:
As a result, the rectified image is still useless for me.
So this time I calibrated my camera using OpenCV, so that I can get a region of interest (ROI) in the image after the undistortion operation; with that, the rectified image becomes perfect for me. However, I need ROS to do this while streaming the rectified image through image_proc. Is there any way to do that?
You can directly use the image_proc/crop_decimate nodelet.
You can configure it using dynamic_reconfigure to set up ROI or interpolation.
However, since these are software operations, interpolation methods should be handled with care in a real-time application like yours (though the fastest nearest-neighbour method is the standard anyway).
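For example, the ROI could be set from Python through the dynamic_reconfigure client. This is a sketch: the node name and the parameter names (x_offset, y_offset, width, height) are assumptions about the crop_decimate configuration, so verify them first with rosrun rqt_reconfigure rqt_reconfigure.

import rospy
from dynamic_reconfigure.client import Client

rospy.init_node("roi_setter")

# Node name and parameter names are assumptions; verify with rqt_reconfigure.
client = Client("camera/crop_decimate", timeout=5)
client.update_configuration({
    "x_offset": 40,   # ROI position and size in pixels (placeholders)
    "y_offset": 30,
    "width": 560,
    "height": 420,
})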
I successfully processed a video and had the algorithm detect faces, but now I am attempting to detect faces in real time by capturing images from the screen (such as when I'm playing games, etc.). This is the bit of code I used to process a captured video:
capture = cv2.VideoCapture('source_video.avi')
How can I change this to capture images from the screen in real-time? Please give me some code examples if possible.
Don't use OpenCV for this. Better to use PIL's ImageGrab:
from PIL import ImageGrab
ImageGrab.grab().save("screen_capture.jpg", "JPEG")
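If you then want to run the OpenCV face detector on those grabs, a common pattern is to convert each grab to a numpy array in a loop instead of saving JPEGs. A sketch, assuming the standard Haar cascade file is available locally:

import cv2
import numpy as np
from PIL import ImageGrab

# Assumes OpenCV's stock cascade file sits next to the script.
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

while True:
    # Grab the screen and convert PIL's RGB layout to OpenCV's BGR.
    screen = np.array(ImageGrab.grab())
    frame = cv2.cvtColor(screen, cv2.COLOR_RGB2BGR)

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("screen", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cv2.destroyAllWindows()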
I'm using OpenCV 2.4.5 with Python 2.7 to track people in video surveillance. At the beginning I used .avi and .mpeg videos to test my code; now I want to use an HCV-M100C camera. I am using a simple difference between frames (an initial frame compared with each subsequent frame) to identify objects in movement. It works very well with the .avi and .mpeg videos I have, but when I use the camera the results are bad, because a lot of noise and stains appear in the video. I thought the problem was my camera, but I made an .avi video with the same camera, tested that video with my code, and it works fine.
Now I'm using cv2.BackgroundSubtractorMOG, but the problem is still there.
So I think I need to do some pre-processing when I use the camera.
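One pre-processing step worth trying (a sketch; kernel sizes and the threshold are placeholders to tune): blur each frame before differencing so per-pixel sensor noise does not show up as motion.

import cv2

def preprocess(frame):
    # Smooth the frame before differencing to suppress sensor noise.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)               # kills salt-and-pepper specks
    return cv2.GaussianBlur(gray, (21, 21), 0)   # smooths remaining grain

cap = cv2.VideoCapture(0)
ok, first = cap.read()
base = preprocess(first)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    diff = cv2.absdiff(base, preprocess(frame))
    mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
    cv2.imshow("motion", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()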
Just for completeness:
Solution concept:
Possibly you could stream the camera's video with something like ffmpeg, which can transcode as well, and then use OpenCV to read the network stream. It might be easier to stream with VLC instead.
Solution detail:
VLC Stream code (Shell):
vlc "http://192.168.180.60:82/videostream.cgi?user=admin&pwd=" --sout "#transcode{vcodec=mp2v,vb=800,scale=1,acodec=mpga,ab=128,channels=2,samplerate=??44100}:duplicate{dst=rtp{sdp=rtsp://:8554/output.mpeg},dst=display}" --sout-keep
OpenCV Code (Python):
import cv2

cap = cv2.VideoCapture("rtsp://:8554/output.mpeg")
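From there, frames can be read as with any other capture source (a minimal sketch):

while cap.isOpened():
    ok, frame = cap.read()   # pull the next frame off the network stream
    if not ok:
        break
    cv2.imshow("stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()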