Issues with multi-trackers and OpenCV versions - Python

I'm trying to implement OpenCV's multi-tracker as explained in this tutorial (https://www.learnopencv.com/multitracker-multiple-object-tracking-using-opencv-c-python/), but I'm running into issues with my OpenCV version.
As suggested in other answers on SO, the single- and multi-tracker constructors require opencv-contrib; however, once I installed opencv-contrib in place of opencv, I could no longer use the VideoCapture function. I'm in a deadlock, since I need both functions, but opencv-contrib and opencv apparently cannot coexist in the same interpreter.
Has anyone faced the same issue and can help me with this?
import sys
import cv2
from random import randint

trackerTypes = ['BOOSTING', 'MIL', 'KCF', 'TLD', 'MEDIANFLOW', 'GOTURN', 'MOSSE', 'CSRT']

def createTrackerByName(trackerType):
    # Create a tracker based on tracker name
    if trackerType == trackerTypes[0]:
        tracker = cv2.TrackerBoosting_create()
    elif trackerType == trackerTypes[1]:
        tracker = cv2.TrackerMIL_create()
    elif trackerType == trackerTypes[2]:
        tracker = cv2.TrackerKCF_create()
    elif trackerType == trackerTypes[3]:
        tracker = cv2.TrackerTLD_create()
    elif trackerType == trackerTypes[4]:
        tracker = cv2.TrackerMedianFlow_create()
    elif trackerType == trackerTypes[5]:
        tracker = cv2.TrackerGOTURN_create()
    elif trackerType == trackerTypes[6]:
        tracker = cv2.TrackerMOSSE_create()
    elif trackerType == trackerTypes[7]:
        tracker = cv2.TrackerCSRT_create()
    else:
        tracker = None
        print('Incorrect tracker name')
        print('Available trackers are:')
        for t in trackerTypes:
            print(t)
    return tracker

def main():
    # Set video to load
    videoPath = "../../data/videos/case-1-2D.avi"
    # Create a video capture object to read videos
    cap = cv2.VideoCapture(videoPath)
    # Read first frame
    success, frame = cap.read()
    # Quit if unable to read the video file
    if not success:
        print('Failed to read video')
        sys.exit(1)
    ## Select boxes
    bboxes = []
    colors = []
    while True:
        bbox = cv2.selectROI('MultiTracker', frame)
        bboxes.append(bbox)
        colors.append((randint(0, 255), randint(0, 255), randint(0, 255)))
        print("Press q to quit selecting boxes and start tracking")
        print("Press any other key to select next object")
        k = cv2.waitKey(0) & 0xFF
        if (k == 113):  # q is pressed
            break
    print('Selected bounding boxes {}'.format(bboxes))
    # Specify the tracker type
    trackerType = "CSRT"
    # Create MultiTracker object
    multiTracker = cv2.MultiTracker_create()
    # Initialize MultiTracker
    for bbox in bboxes:
        multiTracker.add(createTrackerByName(trackerType), frame, bbox)
    # Process video and track objects
    while cap.isOpened():
        success, frame = cap.read()
        if not success:
            break
        # Get updated location of objects in subsequent frames
        success, boxes = multiTracker.update(frame)
        # Draw tracked objects
        for i, newbox in enumerate(boxes):
            p1 = (int(newbox[0]), int(newbox[1]))
            p2 = (int(newbox[0] + newbox[2]), int(newbox[1] + newbox[3]))
            cv2.rectangle(frame, p1, p2, colors[i], 2, 1)
        # Show frame
        cv2.imshow('MultiTracker', frame)
        # Quit on ESC button
        if cv2.waitKey(1) & 0xFF == 27:  # Esc pressed
            break

if __name__ == '__main__':
    main()
If I use the opencv-python package I get the error:
AttributeError: module 'cv2.cv2' has no attribute 'MultiTracker_create'
If I use the opencv-contrib-python package I get the error:
AttributeError: module 'cv2' has no attribute 'VideoCapture'
If it can help, I'm working with PyCharm on macOS, using a virtualenv for my Python interpreter.
I have already tried installing opencv-contrib alone, installing opencv alone, keeping both opencv-contrib and opencv in my interpreter, and downgrading opencv to an earlier version that should support the functions I need (OpenCV 3.4).
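For what it's worth, both opencv-python and opencv-contrib-python install the same cv2 module, so a leftover copy of one can shadow the other. A minimal sanity check for the active interpreter (an assumption about the environment, not part of the original question) is:

import cv2

print(cv2.__version__)
print(hasattr(cv2, "VideoCapture"))         # core API: True in any working build
print(hasattr(cv2, "MultiTracker_create"))  # contrib API in OpenCV 3.x (moved under cv2.legacy in newer 4.x builds)

If either check fails, uninstalling both packages and then installing opencv-contrib-python alone usually resolves the conflict, since the contrib builds include the core modules as well.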

Related

OpenCV Python kernel crashes when clicking the trackbar

I am trying to create a simple trackbar to test some edge detection, and I've been following the official OpenCV tutorial from here in Python. When I run the code, the window is created and I can see the sliders, but when I click a slider, the kernel crashes.
I tried this with some other code examples from the internet and it crashed then as well. Basically, the moment I click in the region of the slider, the kernel crashes. I know that one thing you seem to need to do on Macs is increase the waitKey time, and I did do that.
How can I make this work?
import cv2 as cv

# Constants and counter assumed from the full script (values illustrative, not shown in the original snippet)
RATIO = 3
MAX_LOW_THRESHOLD = 100
WINDOW_NAME = "Edge Map"
TITLE_TRACK_BAR = "Min Threshold:"
COUNTER = 0

def canny_threshold(low_val, src, src_gray):
    low_threshold = low_val
    img_blur = cv.blur(src_gray, (3, 3))
    detected_edges = cv.Canny(img_blur, low_threshold, low_threshold * RATIO)
    mask = detected_edges != 0
    dst = src * (mask[:, :, None].astype(src.dtype))
    return dst

src = cv.imread("magpie_house.jpeg")
src_gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)

def nothing(x):
    pass

cv.namedWindow(WINDOW_NAME)
cv.startWindowThread()
cv.createTrackbar(TITLE_TRACK_BAR, WINDOW_NAME, 0, MAX_LOW_THRESHOLD, nothing)
cv.setTrackbarPos(TITLE_TRACK_BAR, WINDOW_NAME, 50)
while True:
    COUNTER += 1
    if COUNTER >= 700:
        break
    low_val = cv.getTrackbarPos(TITLE_TRACK_BAR, WINDOW_NAME)
    dst = canny_threshold(low_val, src, src_gray)
    cv.imshow(WINDOW_NAME, dst)
    if cv.waitKey(10) & 0xFF == ord("q"):
        break
cv.waitKey(1)
cv.destroyAllWindows()
It appears that there is a known issue on Macs regarding OpenCV trackbars.
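If the trackbar itself is what triggers the crash, one way to keep testing (a sketch that sidesteps the trackbar entirely, reusing the question's canny_threshold logic and test image) is to drive the threshold from the keyboard instead:

import cv2 as cv

RATIO = 3
low_val = 50
src = cv.imread("magpie_house.jpeg")
src_gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)

def canny_threshold(low_val, src, src_gray):
    # Same logic as in the question
    img_blur = cv.blur(src_gray, (3, 3))
    detected_edges = cv.Canny(img_blur, low_val, low_val * RATIO)
    mask = detected_edges != 0
    return src * (mask[:, :, None].astype(src.dtype))

while True:
    cv.imshow("Edge Map", canny_threshold(low_val, src, src_gray))
    key = cv.waitKey(30) & 0xFF
    if key == ord("+"):
        low_val = min(low_val + 5, 100)  # raise the low threshold
    elif key == ord("-"):
        low_val = max(low_val - 5, 0)    # lower it
    elif key == ord("q"):
        break
cv.destroyAllWindows()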

Using a Python queue with an external class - not in the same file

I have a question regarding threading and queueing in Python. I have a class that recognises faces and another script that is supposed to retrieve these recognised faces, and I'm unable to retrieve the recognised faces once recognition starts.
I posted another question that relates to this one, but here it's more about the actual user recognition part (this just in case someone stumbles over my other question and thinks this may be a duplicate).
So as said, I have a class that uses imutils and face_recognition to do just that - face recognition. My issue with the class is that, once it's started, it does recognise faces perfectly, but it's not possible from an outside class (from within another script) to call any other method, for example to retrieve a dict with the currently identified faces. I think this is probably because, once the actual recognition is called, calls to other methods within this class don't go through because of threading? I attached the complete code for reference below.
Here is the code that is supposed to start the recogniser class and retrieve the results from within another script:
class recognizerAsync(Thread):
    def __init__(self):
        super(recognizerAsync, self).__init__()
        print("initiating recognizer class from recognizerAsync")
        if (use_user == True):
            #myRecognizer = recognizer(consoleLog, debugMsg, run_from_flask)
            self.myRecognizer = the_recognizer.recognizerAsync(False, True, True)
            #face_thread = recognizerAsync(consoleLog, debugMsg, False)

    def run(self):
        print("starting up recognizer from callRecognizerThread")
        self.myRecognizer.run()
        if (use_user == True):
            self.myRecognizer.start()
        while (True):
            if (not q.full()):
                #fd = random.randint(1,10)
                fd = self.myRecognizer.returnRecognizedFaces()
                print(f"PRODUCER: putting into queue {fd}")
                q.put(fd)
            #else:
            #    print("ERROR :: Queue q is full")
            time.sleep(10)
And I start this like so at the very end:
mirror = GUI(window)
mirror.setupGUI()
window.after(1000, mirror.updateNews)
face_thread = recognizerAsync()
face_thread.start()
window.mainloop()
My question is: how would I need to change either the recogniser class or the recognizerAsync class in the other script, so that while the method faceRecognizer() is running indefinitely, one can still call other methods - specifically returnRecognizedFaces()?
Thank you very much, folks.
#!/usr/bin/env python3
# Import the necessary packages for face detection
from imutils.video import VideoStream
from imutils.video import FPS
import face_recognition
import imutils
import pickle
import time
import base64
import json
from threading import Thread

class recognizerAsync(Thread):
    # How long in ms before a person detection is considered a new event
    graceTimeBeforeNewRecognition = 6000  # 60000
    # The time in ms when a person is detected
    timePersonDetected = 0
    # Create an empty faces dictionary to compare later with repeated encounters
    faces_detected_dict = {"nil": 0}
    # Determine faces from encodings.pickle file model created from train_model.py
    encodingsP = ""  # "encodings.pickle"
    # Load the known faces and embeddings along with OpenCV's Haar
    # cascade for face detection - I do this in the class initialization
    #data = pickle.loads(open(encodingsP, "rb").read())
    data = ''
    # Print dictionary of recognized faces on the console?
    print_faces = False
    debug_messages = False
    called_from_flask = True  # this changes the path to the encodings file

    def __init__(self, print_val=False, debug_val=False, called_from_flask=True):
        super(recognizerAsync, self).__init__()
        self.print_faces = print_val
        self.debug_messages = debug_val
        if (called_from_flask == False):
            encodingsP = "encodings.pickle"
        else:
            encodingsP = "Recognizer/encodings.pickle"
        # Load the known faces and embeddings along with OpenCV's Haar
        # cascade for face detection
        self.data = pickle.loads(open(encodingsP, "rb").read())
        if (self.debug_messages == True):
            print("Faces class initialized")

    def run(self):
        self.faceRecognizer()

    def returnRecognizedFaces(self):
        if (self.debug_messages == True):
            print("from returnRecognizedFaces: returning: " + str({k: self.faces_detected_dict[k] for k in self.faces_detected_dict if k != 'nil'}))
            # print(f"from returnRecognizedFaces: returning: {self.faces_detected_dict}")
        return ({k: self.faces_detected_dict[k] for k in self.faces_detected_dict if k != 'nil'})

    def faceRecognizer(self):
        try:
            # Initialize the video stream and allow the camera sensor to warm up
            # Set src to the following:
            # src = 0 : for the built-in single webcam, could be your laptop webcam
            # src = 2 : I had to set it to 2 in order to use the USB webcam attached to my laptop
            #vs = VideoStream(src=2, framerate=10).start()
            vs = VideoStream(src=0, framerate=10).start()
            #vs = VideoStream(usePiCamera=True).start()
            time.sleep(2.0)
            # Start the FPS counter
            fps = FPS().start()
            if (self.debug_messages == True):
                print("starting face detection - press Ctrl C to stop")
            # Loop over frames from the video file stream
            while (True):
                # Grab the frame from the threaded video stream and resize it
                # to 500px (to speed up processing)
                frame = vs.read()
                try:
                    frame = imutils.resize(frame, width=500)
                except:
                    # Error: (h, w) = image.shape[:2]
                    # AttributeError: 'NoneType' object has no attribute 'shape'
                    break
                # Detect the face boxes
                boxes = face_recognition.face_locations(frame)
                # Compute the facial embeddings for each face bounding box
                encodings = face_recognition.face_encodings(frame, boxes)
                names = []
                # Loop over the facial embeddings
                for encoding in encodings:
                    # Attempt to match each face in the input image to our known encodings
                    matches = face_recognition.compare_faces(self.data["encodings"], encoding)
                    name = "unknown"  # if face is not recognized, then print Unknown
                    timePersonDetected = time.time() * 1000.0
                    # Check to see if we have found a match
                    if (True in matches):
                        # Find the indexes of all matched faces, then initialize a
                        # dictionary to count the total number of times each face
                        # was matched
                        matchedIdxs = [i for (i, b) in enumerate(matches) if b]
                        counts = {}
                        # Loop over the matched indexes and maintain a count for
                        # each recognized face
                        for i in matchedIdxs:
                            name = self.data["names"][i]
                            counts[name] = counts.get(name, 0) + 1
                        # Determine the recognized face with the largest number
                        # of votes (note: in the event of an unlikely tie, Python
                        # will select the first entry in the dictionary)
                        name = max(counts, key=counts.get)
                        # If someone in your dataset is identified, print their name on the screen and provide them through REST
                        if (max(self.faces_detected_dict, key=self.faces_detected_dict.get) != name or timePersonDetected > self.faces_detected_dict[name] + self.graceTimeBeforeNewRecognition):
                            # Put the face in the dictionary with time detected so we can provide this info
                            # in the REST endpoint for others - this is not really used internally,
                            # except for the timePersonDetected time comparison above
                            self.faces_detected_dict[name] = timePersonDetected
                    # Update the list of names
                    names.append(name)
                    # Exemplary way of cleaning up the dict and removing the nil entry - kept here for reference:
                    #new_dict = ({k: self.faces_detected_dict[k] for k in self.faces_detected_dict if k != 'nil'})
                    self.last_recognized_face = name
                    if (self.print_faces == True):
                        print(self.last_recognized_face)
                # Clean up the dictionary
                new_dict = {}
                for k, v in list(self.faces_detected_dict.items()):
                    if (v + self.graceTimeBeforeNewRecognition) < (time.time() * 1000.0) and str(k) != 'nil':
                        if (self.debug_messages == True):
                            print('entry ' + str(k) + " dropped due to age")
                    else:
                        new_dict[k] = v
                self.faces_detected_dict = new_dict
                if (self.debug_messages == True):
                    print(f"faces dict: {self.faces_detected_dict}")
                # Update the FPS counter
                fps.update()
                time.sleep(1)
        except KeyboardInterrupt:
            if (self.debug_messages == True):
                print("Ctrl-C received - cleaning up and exiting")
            pass
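Since the shared state here is just faces_detected_dict, one common fix (a minimal sketch with hypothetical names, not the poster's code) is to guard the dict with a threading.Lock and hand outside callers a copy, so the recognition loop and other threads never touch the same object at the same time:

import threading
import time

class SharedFaces:
    # Dict shared between a recognition loop and outside callers
    def __init__(self):
        self._lock = threading.Lock()
        self._faces = {}

    def record(self, name):
        # Called from inside the recognition loop for each identified face
        with self._lock:
            self._faces[name] = time.time() * 1000.0

    def snapshot(self):
        # Called from any other thread; returns a copy, never the live dict
        with self._lock:
            return dict(self._faces)

In this pattern, faceRecognizer() would call record() for each match, and returnRecognizedFaces() would simply return snapshot(); no caller ever blocks on the recognition loop.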

'NoneType' object has no attribute 'size' - how to do face detection using MTCNN?

I'm trying to build real-time face recognition with a neural network (FaceNet) using PyTorch, with face detection done by MTCNN.
I've tried this for detecting faces in real time (from a webcam), but it doesn't work.
I read frames and then pass them through the MTCNN detector:
import cv2

capture = cv2.VideoCapture(0)
while(True):
    ret, frame = capture.read()
    frames_tracked = []
    print('\rTracking frame: {}'.format(i + 1), end='')
    boxes, _ = mtcnn.detect(frame)
    frame_draw = frame.copy()
    draw = ImageDraw.Draw(frame_draw)
    for box in boxes:
        draw.rectangle(box.tolist(), outline=(255, 0, 0), width=6)
    frames_tracked.append(frame_draw.resize((640, 360), Image.BILINEAR))
    d = display.display(frames_tracked[0], display_id=True)
    i = 1
    try:
        while True:
            d.update(frames_tracked[i % len(frames_tracked)])
            i += 1
    except KeyboardInterrupt:
        pass
    if cv2.waitKey('q') == 27:
        break
capture.release()
cv2.destroyAllWindows()
but it raises this error (the entire traceback is at http://dpaste.com/0HR58RQ):
AttributeError: 'NoneType' object has no attribute 'size'
Is there a solution to this problem? What causes this error? Thanks for your advice.
Let's take a look at that error again.
AttributeError: 'NoneType' object has no attribute 'size'
So, somewhere in your code, you (or mtcnn) are trying to access the size attribute of a None variable. You are passing frame to mtcnn with the following call:
boxes, _ = mtcnn.detect(frame)
This is exactly where you see that error, because you are passing a None variable to mtcnn. You can guard against it before calling the method; note that is None is the right test here, since == on a NumPy array does an element-wise comparison:
ret, frame = capture.read()
if frame is None:
    continue
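Put together, the guarded read might look like this (a sketch; the mtcnn detector is assumed to be initialized as in the question):

import cv2

capture = cv2.VideoCapture(0)
while True:
    ret, frame = capture.read()
    if not ret or frame is None:
        continue  # camera delivered no frame; try again
    boxes, _ = mtcnn.detect(frame)  # mtcnn set up elsewhere, as in the question
    if cv2.waitKey(1) & 0xFF == ord('q'):  # waitKey takes milliseconds, not a string
        break
capture.release()
cv2.destroyAllWindows()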

TypeError: mat data type = 17 is not supported, show IR data with realsense d435

I'm currently working with a D435 and I want to display IR images (both left and right, but for the moment let's just focus on one). Here is my code:
import pyrealsense2 as rs
import numpy as np
import cv2

# We want the points object to be persistent so we can display the
# last cloud when a frame drops
points = rs.points()

# Create a pipeline
pipeline = rs.pipeline()

# Create a config and configure the pipeline to stream
config = rs.config()
config.enable_stream(rs.stream.infrared, 1, 1280, 720, rs.format.y8, 30)

# Start streaming
profile = pipeline.start(config)

# Streaming loop
try:
    while True:
        # Get frameset of color and depth
        frames = pipeline.wait_for_frames()
        ir1_frame = frames.get_infrared_frame(1)  # Left IR camera; it allows 1, 2 or no input
        image = np.asanyarray(ir1_frame)
        cv2.namedWindow('IR Example', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('IR Example', image)
        key = cv2.waitKey(1)
        # Press Esc or 'q' to close the image window
        if key & 0xFF == ord('q') or key == 27:
            cv2.destroyAllWindows()
            break
finally:
    pipeline.stop()
Everything works fine until the line:
cv2.imshow('IR Example', image)
where I get the error:
TypeError: mat data type = 17 is not supported
I found this link:
TypeError: src data type = 17 is not supported
but I still can't figure out how to show my image.
Does anyone have some ideas? Please share; I'm a newbie with OpenCV.
For reference:
image.shape = ()
image.dtype = dtype('O')
Cheers
You need to call get_data() to get the image from the frame. np.asanyarray(ir1_frame) wraps the frame object itself in a 0-dimensional object array - hence shape () and dtype('O'), which is the unsupported mat data type 17 - while get_data() exposes the actual pixel buffer:
image = np.asanyarray(ir1_frame.get_data())
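In context, only the conversion line changes from the question's loop; with a y8 stream at 1280x720 the result should be a grayscale array that imshow can render:

# inside the streaming loop
ir1_frame = frames.get_infrared_frame(1)
image = np.asanyarray(ir1_frame.get_data())  # raw image buffer, not the frame object
print(image.shape, image.dtype)              # expected: (720, 1280) uint8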

Accessing a USB camera with Python/OpenCV on a BeagleBone, but the display shows a black screen

I am facing an issue while accessing a USB camera using a BeagleBone Black Wireless.
At first the error was a "select timeout" exception, which was resolved by this post.
Now I am facing a black screen in the output.
Here is the testing code I am using.
from cv2 import *

# Initialize the camera
cam = VideoCapture(0)  # 0 -> index of camera
print "Cam capture"
cam.set(3, 320)  # 3 = CAP_PROP_FRAME_WIDTH
cam.set(4, 240)  # 4 = CAP_PROP_FRAME_HEIGHT
print "Cam set"
s, img = cam.read()
print "Cam read"
if s:  # frame captured without any errors
    namedWindow("cam-test", CV_WINDOW_AUTOSIZE)
    imshow("cam-test", img)
    while True:
        key = waitKey(30)
        if key == ord('q'):
            destroyWindow("cam-test")
I have already checked that video0 exists in the /dev directory.
The issue is that you need to call cam.read() and imshow() inside the while loop.
What you're doing now is reading just the first frame and then showing it, and your while loop isn't doing anything. When the camera boots, the first frame is just a blank screen, which is what you see.
The code should be more like:
while True:
    s, img = cam.read()
    imshow("cam-test", img)
    key = waitKey(30)
    if key == ord('q'):
        destroyWindow("cam-test")
        break
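A slightly more defensive variant (a sketch) also checks the read flag, so a dropped frame on the board doesn't hand imshow an empty image:

while True:
    s, img = cam.read()
    if not s:
        continue  # dropped frame; skip instead of showing an empty image
    imshow("cam-test", img)
    if waitKey(30) == ord('q'):
        destroyWindow("cam-test")
        break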
