Relaxed implementation to detect QR codes within confidence interval in Python

I am working on detecting a QR code attached to someone from a video of that person walking away from me. I have tried a few different implementations using OpenCV and pyzbar but they are not very accurate and will detect the QR code in about 50% of frames.
I have found that cropping the image helps with detection, but since the code is attached to a person, the cropping logic would have to be much more complex/robust than I would like.
I only really need to decode/read the QR code in a few frames of the video. Detecting its location is more important than reading it, and detecting whether it exists at all is the most important.
I can't fully understand how the OpenCV QRCodeDetector().detect() implementation works, but I am trying to find a method (it does not have to be OpenCV or pyzbar) that reports a QR code within some confidence interval. I am not sure what threshold the implementations I have tried so far use, but sometimes the QR code does not appear to change position/orientation between frames, yet it is detected in one frame and missed in the next, so I am looking for some way to relax those requirements.
This is an example of a frame from the video. This shows the relative size of the QR code to the entire frame that needs to be searched.
This is my current setup:
import cv2
import numpy as np

video = cv2.VideoCapture(input_video)
qrCodeDetector = cv2.QRCodeDetector()

while True:
    ret, frame = video.read()
    if not ret:
        break
    # detect() returns (found, points); points has shape (1, 4, 2) when a code is found
    points = qrCodeDetector.detect(frame)[1]
    if points is not None:
        points = points[0]
        # get center of all corner points
        center = tuple(int(v) for v in np.mean(points, axis=0))
        # draw circle around QR code
        cv2.circle(frame, center, 50, color=(255, 0, 0), thickness=2)
    cv2.imshow('frame', frame)
    cv2.waitKey(1)

video.release()
cv2.destroyAllWindows()
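Since the QR code barely moves between frames, one way to relax the per-frame requirement is to keep the last detected center and search only a crop around it, falling back to a full-frame search when nothing is found. A rough sketch of that idea (the margin size and the detect_qr helper are placeholders of my own, not an established API):

import cv2
import numpy as np

qr = cv2.QRCodeDetector()
margin = 200  # half-size of the search window around the last center, in pixels (tune this)

def detect_qr(frame, last_center):
    # Try a crop around the previous detection first
    if last_center is not None:
        h, w = frame.shape[:2]
        cx, cy = last_center
        x0, y0 = max(cx - margin, 0), max(cy - margin, 0)
        x1, y1 = min(cx + margin, w), min(cy + margin, h)
        found, points = qr.detect(frame[y0:y1, x0:x1])
        if found and points is not None:
            # shift the corners back into full-frame coordinates
            return points[0] + np.array([x0, y0])
    # Fall back to searching the whole frame
    found, points = qr.detect(frame)
    return points[0] if found and points is not None else None

The caller would update last_center whenever detection succeeds, e.g. last_center = tuple(int(v) for v in corners.mean(axis=0)).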

For these cases, you should probably rely on a machine/deep learning based solution. As far as I have experimented, pyzbar works pretty well for the decoding but has a lower detection rate. For cases like the one you propose, I tend to use QReader. It implements a detection + decode pipeline backed by a YoloV7 QR detection model, and uses pyzbar, sweetened with some image preprocessing fallbacks, on the decoding side.
from qreader import QReader
import cv2

# Create a QReader instance
qreader = QReader()

video = cv2.VideoCapture(input_video)

while True:
    ret, frame = video.read()
    if not ret:
        break
    # Cast the frame to RGB, which is what QReader expects
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # Use the detect_and_decode function to get the bounding boxes and decoded QR data
    found_qrs = qreader.detect_and_decode(image=frame, return_bboxes=True)
    # Process the detections as you like
    for (x1, y1, x2, y2), decoded_text in found_qrs:
        # Draw on your image
        pass

video.release()

Related

Recording the real time face expression detection

I have coded the face expression detection using Jupyter notebook, detecting seven expressions of the face (Anger, Sad, Disgust, Happy, ...) and tried the real-time detection using the camera of my laptop. Now I want to record those expressions detected by the model in the real-time detection and create a figure of the detected expressions over time. First of all, is it possible to do so? If not, what other options do I have? For example, can I record the video taken by the camera and later detect the expressions from the video and make a figure from all the expressions detected over time? Thank you all for helping me!
You could do something like this:
from tensorflow import keras
import numpy as np
import cv2

all_labels = ["Anger", "Sad", "Disgust", "Happy"]

# load the trained model, or train a model
model = keras.models.load_model('path/to/location')

# Open the camera
cap = cv2.VideoCapture(0)
# Or similarly open a saved video
# cap = cv2.VideoCapture('path/to/video')

# Check if the camera was opened correctly
if not cap.isOpened():
    print("Could not open video device")

# Fetch one frame at a time from your camera in real-time or from the video
i = 0
while True:
    # frame is a numpy array that you can predict on
    ret, frame = cap.read()
    if not ret:
        break
    # Obtain the prediction (you may have to reshape frame according to your model)
    prediction = model(frame, training=False)
    # obtain a label from the prediction, depending on your label list
    label = all_labels[int(np.argmax(prediction))]
    # save the frame in a different folder depending on the predicted label
    if label in all_labels:
        cv2.imwrite('{}/frame_{}.jpg'.format(label, i), frame)
    i = i + 1
    # Wait for a user input to quit the application
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
I made an answer to a similar but not identical problem; maybe you can draw inspiration from that. Also, this is a great tutorial for capturing live video with OpenCV.
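To get the figure of detected expressions over time that the question asks for, one option is to also record each frame's predicted label together with a timestamp inside the capture loop and plot it afterwards. A minimal sketch with matplotlib, using dummy data where the per-frame lists would normally be filled in the loop:

import matplotlib.pyplot as plt

all_labels = ["Anger", "Sad", "Disgust", "Happy"]

# Collected inside the capture loop, e.g.:
#   timestamps.append(i / fps)                # or time.time() - start_time
#   detected.append(all_labels.index(label))
timestamps = [0.0, 0.5, 1.0, 1.5, 2.0]  # dummy data for illustration
detected = [3, 3, 1, 1, 0]

# Step plot of the label index over time, with labels on the y axis
plt.step(timestamps, detected, where="post")
plt.yticks(range(len(all_labels)), all_labels)
plt.xlabel("Time (s)")
plt.ylabel("Detected expression")
plt.title("Detected expressions over time")
plt.show()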

Python Opencv high resolution bug

I'm using opencv as part of a beam profiler software. For this I have a high resolution camera (5496x3672, Daheng Imaging MER-2000-19U3M). I'm now using a basic program to show the captured frames. The program works fine for a normal webcam, but when I connect my high resolution camera (through USB 3.0) it becomes buggy. Most of the frame is black and at the top there are three small instances of the recording (screenshot here). On the other hand, the camera's own software displays the image properly, so I assume there must be a problem in how opencv accesses the camera. Here is the code used to display the image:
import cv2

cap = cv2.VideoCapture(0)
# Properties 3 and 4 are CAP_PROP_FRAME_WIDTH and CAP_PROP_FRAME_HEIGHT
cap.set(3, 5496)
cap.set(4, 3672)

while True:
    ret, frame = cap.read()
    frame2 = cv2.resize(frame, (1280, 720))
    cv2.imshow('frame', frame2)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Open CV to Capture Unique Objects from a Video

I was doing frame slicing with the OpenCV library in Python, and I am successfully able to create frames from the video being tested.
I am doing it on a CCTV camera installed at a parking entry gateway where the video runs 24x7, and at times a car stands still for a good number of minutes, leading to many consecutive frames of the same vehicle.
My question is how can I create a frame only when a new vehicle enters the parking lot?
Stack Overflow is for code-related queries. I suggest you try some code and share your results and your problems before posting anything here. That being said, you can start with object detection tutorials like this and then do tracking with SORT. Many pre-trained models that include the car class are available, so you won't even need to train a new model.
Do you need to detect license plates, etc.? Or just notice if something happens? For the latter, you could use a very simple approach: take an average of, say, the frames of the last 30 seconds and subtract that from the current frame. If the mean absolute value of the delta image is above a threshold, that could be the change you are looking for.
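A rough sketch of that running-average idea (the video source, the averaging weight alpha and the change threshold are placeholders to tune):

import cv2
import numpy as np

cap = cv2.VideoCapture('parking_feed.mp4')  # hypothetical video source
avg = None
alpha = 0.01            # weight of each new frame in the running average (tune this)
change_threshold = 15   # mean absolute difference that counts as "something changed" (tune this)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype("float")
    if avg is None:
        avg = gray.copy()
        continue
    # Update the running average of recent frames
    cv2.accumulateWeighted(gray, avg, alpha)
    # Mean absolute difference between the current frame and the average
    delta = cv2.absdiff(gray, avg)
    if delta.mean() > change_threshold:
        print("Significant change detected - possibly a new vehicle")

cap.release()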
You could do some simpler motion detection with opencv, it's nicely explained in https://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/
So if you have a picture of the background as reference, you can compare each new image to the background and only save the image if it's different enough from the background (hopefully only when a car entered). Then make this the new background and reset for a new car when the new images again start looking like the original background.
Hopefully I stated my idea clear enough and that link provides enough information to implement it. If not just ask for a clarification!
First, you need a specific cascade XML to detect only cars; you can get it from here. I have developed code to uniquely identify and count the cars that are visible to the CCTV you are using. It depends a lot on the frame rate and the detection quality, so you can control the frame rate and also the total count variable.
import cv2

cascade_src = 'cars.xml'
cap = cv2.VideoCapture('rtsp_of_ur_cctv')
car_cascade = cv2.CascadeClassifier(cascade_src)

prev_count = 0
total_count = 0

while True:
    ret, img = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cars = car_cascade.detectMultiScale(gray, 1.1, 1)
    if len(cars) > prev_count:
        difference = len(cars) - prev_count
        total_count = total_count + difference
        # here you can save the unique new entry and possibly avoid the recursive ones
        print(total_count)
    for (x, y, w, h) in cars:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
    prev_count = len(cars)
    cv2.imshow('video', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

How do I prevent false flags in Background subtraction using OpenCV?

So, I am developing a hobby project where a camera is mounted on a bot, powered by Raspberry Pi. The bot moves around a room and does some processing based on the camera's response. I apologize, if this is not the right place to ask.
Problem:-
The camera attached to the bot will perform background subtraction continuously. The bot will be moving simultaneously. In case the background algorithm detects an object in front of the bot, it will stop the bot and do further processing with respect to the object. The working assumption here is that the ground is of only one color and uniform to a great extent.
The algorithm works great under very controlled lighting conditions. The problem arises when there are slight lighting changes or when the ground has small patches/potholes/unevenness in it. These scenarios generate false flags and as a result my bot stops. I want to know if there is any way to prevent these false flags with the help of any modifications to the following code?
import os
import time
from time import sleep
import cv2
import numpy as np
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (512, 512)
camera.awb_mode = "fluorescent"
camera.iso = 800
camera.contrast = 25
camera.brightness = 64
camera.sharpness = 100
rawCapture = PiRGBArray(camera, size=(512, 512))

first_time = 0    # This flag is to capture the first frame as background image.
frame_buffer = 0  # This flag is to change the background image every 30 frames.
counter = 0

camera.start_preview()
sleep(1)

def imageSubtract(img):
    # Use the V channel of the LUV color space for comparison
    luv = cv2.cvtColor(img, cv2.COLOR_BGR2LUV)
    l, u, v = cv2.split(luv)
    return v

for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    # Here the first 10 frames are being rejected.
    if first_time == 0:
        rawCapture.truncate(0)
        if frame_buffer < 10:
            print("Frame rejected -", str(frame_buffer))
            frame_buffer += 1
            continue
        os.system("clear")
        refImg = frame.array
        refThresh = imageSubtract(refImg)
        first_time = 1
        frame_buffer = 0

    frame_buffer += 1
    cv2.imshow("Background", refImg)
    image = frame.array
    cv2.imshow("Foreground", image)
    key = cv2.waitKey(1)
    rawCapture.truncate(0)

    newThresh = imageSubtract(image)
    diff = cv2.absdiff(refThresh, newThresh)  # Here the background image is subtracted from the foreground
    kernel = np.ones((5, 5), np.uint8)
    diff = cv2.morphologyEx(diff, cv2.MORPH_OPEN, kernel)
    diff = cv2.dilate(diff, kernel, iterations=2)
    _, thresholded = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, contours, _ = cv2.findContours(thresholded, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    try:
        c = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(thresholded, (x, y), (x + w, y + h), (125, 125, 125), 2)
        if cv2.contourArea(c) > 300 and len(contours) <= 3:
            if counter == 0:
                print("Going to sleep for 0.1 second")  # allowing the device to move ahead for 0.1 sec before processing the object
                time.sleep(0.1)
                counter = 1
                continue
            else:
                print("Object found !")
        cv2.imshow("Threshold", thresholded)
        if frame_buffer % 30 == 0:
            frame_buffer = 0
            refImg = image
            refThresh = imageSubtract(refImg)
            os.system("clear")
            print("Reference image changed")
    except Exception as e:
        print(e)
        pass
NOTE :- The above algorithm uses the continuous capture mode of the PiCamera. The first 10 frames are rejected because I have noticed that the PiCamera takes some time to adjust its colors once it starts up. Another thing is that the background image is replaced every 30 frames because I wanted the background image to remain as close as possible to the foreground image: since the room is quite big, there are going to be some local changes in the light/color of the ground between one corner of the room and the other, hence I felt the need to update the background image every 30 frames. The object needs to have an area greater than 300 for it to be detected. I have also added a delay of 0.1 sec (time.sleep(0.1)) after the object has been detected, because I wanted the object to enter the frame completely and be right in the middle of the frame before the device stops.
Some solutions that I had in mind was :-
I thought of attaching a few IR sensors at the base of the device. In case an object is detected (real or false flag), the bot would check the output from the IR sensors to see whether they pick up an object as well. For shadows and patches the IR response would be NULL, so the bot would continue to move forward.
I thought of calculating the height of the detected object. If the height were above a certain threshold, the presence of an object could be confirmed; otherwise it is a false flag. But the camera is going to be facing down, which means the image is taken from the top, so I don't think there is any way to ascertain the height of the object from its top-down image.
Please suggest any alternative solutions. I want to make the above algorithm as perfect as possible because the entire working of the device depends upon the accuracy of the background subtraction algorithm.
Thank you for your help in advance !
EDIT 1-
Background Image - Back
Foreground Image - Front
Background subtraction works well when your camera is not moving and objects are moving in front of the camera. Your case is just the opposite and it will easily detect movement everywhere.
If your background is uniform, any edge detection filter may work better than your background subtraction.
Another alternative is using thresholding (see https://docs.opencv.org/3.4.0/d7/d4d/tutorial_py_thresholding.html). I would suggest an adaptive threshold (Gaussian or mean) that will work better if the edges are darker than the center. Try different sizes of the neighbourhood and different values for the constant C.
Then you can erode/dilate as you already do and filter by size or position as you suggest. Adding more sensors to your robot is a good idea as well.
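For reference, the adaptive threshold mentioned above comes down to a couple of lines; the block size and constant C below are only starting points to tune, and the image path is a placeholder:

import cv2

frame = cv2.imread("frame.png")  # any single frame from the bot's camera (placeholder path)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Adaptive Gaussian threshold: each pixel is compared against a Gaussian-weighted
# mean of its neighbourhood, minus the constant C.
thresh = cv2.adaptiveThreshold(gray, 255,
                               cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV,
                               blockSize=21,  # neighbourhood size, must be odd (tune this)
                               C=5)           # constant subtracted from the mean (tune this)

cv2.imshow("adaptive", thresh)
cv2.waitKey(0)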

Suggestion In Improving the code of webcam

I have written a basic code which captures an image from webcam using OpenCV & Python 2.7.
The code is as follows:
import numpy
import cv2
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cv2.imshow('image',frame)
cap.release()
cv2.waitKey(0)
cv2.destroyAllWindows()
This code gives the correct output, but my camera takes a few seconds to focus, so I get a black or dim image as output instead of a bright, properly focused image.
How can I solve this problem in a more mature way?
You need an "auto capture" algorithm. Auto-capture algorithms vary depending on your use case. For example, if you need to take a shot of a document that you want to OCR later, you have to check how OCRable the text is before taking the image. In the general case, there is something called reference-less image quality assessment that will help you rate how good an image is; then, if it is good enough, take the shot. However, implementing it is not an easy task.
If you need something fast and easy, just compute the sharpness of the image and use it to decide whether or not to take the photo. See this: http://answers.opencv.org/question/5395/how-to-calculate-blurriness-and-sharpness-of-a-given-image/
Another option could be using a face detector if you are taking photos of humans. OpenCV has a cascade classifier with a pre-trained model for human faces. Just try to detect a face, and when it is detected, take the shot.
You may also combine the last two approaches in a hybrid mode. In other words, detect the face, make sure it is sharp enough, and then take the photo.
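A common, cheap sharpness measure along the lines of that linked answer is the variance of the Laplacian; a minimal sketch, where the threshold is something you would have to tune for your camera:

import cv2

def is_sharp(frame, threshold=100.0):
    # Variance of the Laplacian: higher means more edges, i.e. a sharper image (threshold is an assumption)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() > threshold

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
# Keep grabbing frames until one is sharp enough, then save it
while ret and not is_sharp(frame):
    ret, frame = cap.read()
if ret:
    cv2.imwrite("capture.jpg", frame)
cap.release()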
You could wait till the video capturing has been initialized by modifying the code as:
import cv2

cv2.namedWindow("output")
cap = cv2.VideoCapture(0)

if cap.isOpened():  # Getting the first frame
    ret, frame = cap.read()
else:
    ret = False

while ret:
    cv2.imshow("output", frame)
    ret, frame = cap.read()
    key = cv2.waitKey(20)
    if key == 27:  # exit on Escape key
        break

cv2.destroyWindow("output")
