How can I capture the detected box image with YOLOv4? - python

I want to capture the boxes recognized during YOLOv4 webcam detection, so I used this code:
import cv2
import detect as dt
from darknet import Darknet
from PIL import Image

vidcap = cv2.VideoCapture(0)
success, image = vidcap.read()
count = 0

m = Darknet('darknet/data/yolo-obj.cfg')
m.load_weights('darknet/backup/yolo-obj_30000.weights')
use_cuda = 1
m.cuda()

while success:
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    im_pil = Image.fromarray(image)
    im_pil = im_pil.resize((m.width, m.height))
    boxes = dt.do_detect(m, im_pil, 0.5, 0.4, use_cuda)
    result = open('Desktop/captureyolobox/capture%04d.jpg' % (count), 'w')
    for i in range(len(boxes)):
        result.write(boxes[i])
    count = count + 1
    success, image = vidcap.read()
    result.close()
I've encountered this problem. I searched the web for a solution but couldn't find one. Can you help me?
Traceback (most recent call last):
  File "yoloshort.py", line 2, in <module>
    import detect as dt
ImportError: No module named detect

Do you mean detect_image in darknet.py? You can check darknet.py to see whether it has the function you want.
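If the goal is to save the detected regions themselves, here is a minimal sketch (assuming do_detect returns normalized [x_center, y_center, w, h, ...] boxes, as in the common pytorch-yolo ports of darknet.py; check your own darknet.py for the exact format). It crops each box out of the frame and writes it with cv2.imwrite instead of writing box tuples to a .jpg file:

import cv2

def save_crops(frame, boxes, count):
    # Assumes boxes hold normalized [x_center, y_center, w, h, ...] entries.
    fh, fw = frame.shape[:2]
    for i, box in enumerate(boxes):
        cx, cy, w, h = box[:4]
        # Convert normalized center/size to pixel corners, clamped to the frame
        x1 = max(int((cx - w / 2) * fw), 0)
        y1 = max(int((cy - h / 2) * fh), 0)
        x2 = min(int((cx + w / 2) * fw), fw)
        y2 = min(int((cy + h / 2) * fh), fh)
        crop = frame[y1:y2, x1:x2]
        if crop.size:
            cv2.imwrite('capture%04d_%d.jpg' % (count, i), crop)

Inside the while loop, save_crops(image, boxes, count) would replace the result.write(...) block; it is best called on the frame before the BGR-to-RGB conversion, since cv2.imwrite expects BGR.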


face tracking error: (-215:Assertion failed) !empty() in function 'cv::CascadeClassifier::detectMultiScale'

I keep getting this error. I've read that it means the images are not available. I've copied the path and used the proper directory, and I've cloned haarcascade_frontalface_default.xml directly from the OpenCV GitHub. Not sure what I messed up here.
I'm following this online tutorial for face tracking: https://www.youtube.com/watch?v=LmEcyQnfpDA&t=7600s
Any help appreciated!
Here are the code and errors:
Code:
import cv2
import numpy as np

def findFace(img):
    faceCascade = cv2.CascadeClassifier('Resources/haarcascade_frontalface_default.xml')
    imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(imgGray, 1.1, 8)
    myFaceListC = []
    myFaceListArea = []
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

cap = cv2.VideoCapture(0)
while True:
    _, img = cap.read()
    findFace(img)
    cv2.imshow("Output", img)
    cv2.waitKey(1)
Errors:
Traceback (most recent call last):
  File "C:/Users/zachd/PycharmProjects/droneproject2/venv/Face Tracking.py", line 20, in <module>
    findFace(img)
  File "C:/Users/zachd/PycharmProjects/droneproject2/venv/Face Tracking.py", line 7, in findFace
    faces = faceCascade.detectMultiScale(imgGray,1.1,8)
cv2.error: OpenCV(4.5.1) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-oduouqig\opencv\modules\objdetect\src\cascadedetect.cpp:1689: error: (-215:Assertion failed) !empty() in function 'cv::CascadeClassifier::detectMultiScale'
[ WARN:1] global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-oduouqig\opencv\modules\videoio\src\cap_msmf.cpp (434) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback
To avoid jittery or jumpy rectangles, tune the detector. In line 7, I changed the second and third parameters (scaleFactor and minNeighbors):
import cv2
import numpy as np

def findFace(img):
    faceCascade = cv2.CascadeClassifier('Resources/haarcascade_frontalface_default.xml')
    imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(imgGray, 1.3, 5)
    myFaceListC = []
    myFaceListArea = []
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

cap = cv2.VideoCapture(0)
while True:
    _, img = cap.read()
    findFace(img)
    cv2.imshow("Output", img)
    cv2.waitKey(1)
Your error:
(-215:Assertion failed) !empty() in function 'cv::CascadeClassifier::detectMultiScale'
OpenCV uses the empty() method of the CascadeClassifier, in each detect() call, to test whether the cascade was loaded properly. It does not refer to the image data passed to the detect() call.
Right after instantiation, check:
# faceCascade = cv2.CascadeClassifier('Resources/haarcascade_frontalface_default.xml')
assert not faceCascade.empty(), "it didn't find the XML file"
If that fails, you need to learn about relative paths and the Current Working Directory. Print os.getcwd() and consider whether that is where you expected your program to run.
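For example, to see the directory the script is actually running from:

import os

# The relative path 'Resources/...' is resolved against this directory
print(os.getcwd())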
There are a few ways to fix the issue. The recommended way is:
faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
but that assumes the files are where OpenCV installed them. If you put those files elsewhere, you are responsible for finding them.
Many people have had this error message. Always use the search function; you should have found this previous question, which has some good answers.
Messing with the parameters of the detect call won't do anything at all to fix your issue.
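Putting those pieces together, a minimal sketch of the loading step (using the bundled cascade path plus the early check; the rest of the question's script is unchanged):

import cv2

# Use the cascade files shipped with the opencv-python package
faceCascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

# detectMultiScale asserts on empty(); fail early with a clear message instead
assert not faceCascade.empty(), "cascade XML failed to load"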

How to extract thermal frame from FLIR video export file?

I have an IR camera video file. I want to extract this video into n frames. I followed the normal OpenCV method to extract frames from a video, like below:
import cv2

vidcap = cv2.VideoCapture('3.mp4')
success, image = vidcap.read()
count = 0
while success:
    cv2.imwrite("frame%d.jpg" % count, image)  # save frame as JPEG file
    success, image = vidcap.read()
    print('Read a new frame: ', success)
    count += 1
It extracts the frames as normal images instead of thermal images.
I then found the code below:
import flirimageextractor
from matplotlib import cm
from glob import glob

flir = flirimageextractor.FlirImageExtractor(palettes=[cm.jet, cm.bwr, cm.gist_ncar])
for file_ in glob("images/*.jpg"):
    flir.process_image(file_)
    flir.save_images()
    flir.plot()
but it throws KeyError: 'RawThermalImageType'.
Full stack trace:
Traceback (most recent call last):
  File "thermal_camera.py", line 8, in <module>
    flir.process_image(file_)
  File "/usr/local/lib/python3.5/dist-packages/flirimageextractor/flirimageextractor.py", line 101, in process_image
    if self.get_image_type().upper().strip() == "TIFF":
  File "/usr/local/lib/python3.5/dist-packages/flirimageextractor/flirimageextractor.py", line 133, in get_image_type
    return json.loads(meta_json.decode())[0]["RawThermalImageType"]
KeyError: 'RawThermalImageType'
But the above code works well for sample thermal images, which means I am not extracting the frames from the video properly.
How do I extract frames from a FLIR video without losing the thermal (raw) information?
Just export the movie from the FLIR Research software as a .wmv file and then it will work just fine:
import cv2

vidcap = cv2.VideoCapture('3.wmv')
success, image = vidcap.read()
count = 0
while success:
    cv2.imwrite("frame%d.jpg" % count, image)  # save frame as JPEG file
    success, image = vidcap.read()
    print('Read a new frame: ', success)
    count += 1

Unable to load image into window using cv2.imshow

I have been attempting to produce an OCR tool following this tutorial on YouTube, using the following script:
import os
import sys
import cv2
import numpy as np

input_f = 'letter.data'
img_resize_factor = 12
start, end = 6, -1
height, width = 16, 8

with open(input_f, 'r') as f:
    for line in f.readlines():
        data = np.array([255*float(x) for x in line.split('\t')[start:end]])
        img = np.reshape(data, (height, width))
        img_scaled = cv2.resize(img, None, fx=img_resize_factor, fy=img_resize_factor)
        print(line)
        cv2.imshow('img', img_scaled)
        c = cv2.waitKey()
        if c == 27:
            break
The code falls over when attempting to use cv2.imshow('img', img_scaled): the window appears but is unresponsive, and the image is never loaded into it.
I am using the most up-to-date version of OpenCV and running this in Visual Studio, and I have had to add "python.linting.pylintArgs": ["--extension-pkg-whitelist=cv2"] to the user settings.
The error I get is:
Exception has occurred: cv2.error
OpenCV(4.0.0) c:\projects\opencv-python\opencv\modules\imgproc\src\color.hpp:261:
error: (-2:Unspecified error) in function '__cdecl cv::CvtHelper,struct cv::Set<3,4,-1>,struct cv::Set<0,2,5>,2>::CvtHelper(const class cv::_InputArray &,const class cv::_OutputArray &,int)'
> Unsupported depth of input image:
>     'VDepth::contains(depth)'
> where
>     'depth' is 6 (CV_64F)
File "C:\Users\aofarrell\Desktop\Python\NeuralNetworks\SimpleOCR.py", line 23, in <module>
    break
Everything in your script is wrong.
Solution:
1) If you are opening a file, just open it, read the data, and leave the with statement.
2) The error you are experiencing is due to the wrong shape and depth: the script hands cv2.imshow a 64-bit float (CV_64F) array, which it cannot display.
I opened the file and extracted the images:
import os
import sys
import cv2
import numpy as np

input_f = 'letter.data'
start, end = 6, -1

def file_opener(input_f):
    with open(input_f, 'r') as fl:
        for line in fl.readlines():
            yield np.array([255*float(x) for x in line.split('\t')[start:end]])

iterator = file_opener(input_f)
images = np.array([row for row in iterator]).reshape(52152, 16, 8)  # array with 52152 images

for image_index in range(images.shape[0]):
    IMAGE = cv2.resize(images[image_index, :], (0, 0), fx=5, fy=5)
    cv2.imshow('image {}/{}'.format(image_index, images.shape[0]), IMAGE)
    cv2.waitKey(0)
cv2.destroyAllWindows()
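As a side note on the original traceback: the complaint is about depth 6 (CV_64F), i.e. a 64-bit float array, which OpenCV's display path rejects here. A minimal fix, reusing img_scaled from the question's script, is to cast to 8-bit before calling cv2.imshow:

import numpy as np

# The values were already scaled to 0..255, but as float64; imshow wants uint8
img_scaled_u8 = np.clip(img_scaled, 0, 255).astype(np.uint8)
cv2.imshow('img', img_scaled_u8)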

AttributeError: 'module' object has no attribute 'io' in caffe

I am trying to write a gender recognition program; below is the code.
import caffe
import os
import numpy as np
import sys
import cv2
import time

# Models root folder
models_path = "./models"

# Loading the mean image
mean_filename = os.path.join(models_path, './mean.binaryproto')
proto_data = open(mean_filename, "rb").read()
a = caffe.io.caffe_pb2.BlobProto.FromString(proto_data)
mean_image = caffe.io.blobproto_to_array(a)[0]

# Loading the gender network
gender_net_pretrained = os.path.join(models_path, './gender_net.caffemodel')
gender_net_model_file = os.path.join(models_path, './deploy_gender.prototxt')
gender_net = caffe.Classifier(gender_net_model_file, gender_net_pretrained)

# Reshaping mean input image
mean_image = np.transpose(mean_image, (2, 1, 0))

# Gender labels
gender_list = ['Male', 'Female']

# cv2 Haar face detector
face_cascade = cv2.CascadeClassifier(os.path.join(models_path, 'haarcascade_frontalface_default.xml'))

# Getting prediction from live camera
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if ret is True:
        start_time = time.time()
        frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        rects = face_cascade.detectMultiScale(frame_gray, 1.3, 5)
        # Finding the largest face
        if len(rects) >= 1:
            rect_area = [rects[i][2]*rects[i][3] for i in xrange(len(rects))]
            rect = rects[np.argmax(rect_area)]
            x, y, w, h = rect
            cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
            roi_color = frame[y:y+h, x:x+w]
            # Resizing the face image
            crop = cv2.resize(roi_color, (256, 256))
            # Subtraction from mean file
            #input_image = crop - mean_image
            input_image = rect
            # Getting the prediction
            start_prediction = time.time()
            prediction = gender_net.predict([input_image])
            gender = gender_list[prediction[0].argmax()]
            print("Time taken by DeepNet model: {}").format(time.time()-start_prediction)
            print prediction, gender
            cv2.putText(frame, gender, (x, y), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
            print("Total Time taken to process: {}").format(time.time()-start_time)
        # Showing output
        cv2.imshow("Gender Detection", frame)
        cv2.waitKey(1)

# Delete objects
cap.release()
cv2.destroyAllWindows()
When I run it, I get this error:
a = caffe.io.caffe_pb2.BlobProto.FromString(proto_data)
AttributeError: 'module' object has no attribute 'io'
How can I solve it? I am using the cnn_gender_age_prediction model. I want to make a real-time gender recognition program using Python and the cnn_gender_age model.
io is a module in the caffe package. When you type import caffe, Python does not automatically import every module in the caffe package, including io. There are two solutions.
First one: import caffe.io manually:
import caffe
import caffe.io
Second one: update to the latest caffe version, in which you should find this line in __init__.py under the python/caffe directory:
from . import io
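Applied to the loading code from the question, the first fix would look like this (paths taken from the question):

import os

import caffe
import caffe.io  # explicit submodule import, needed on older caffe versions

models_path = "./models"
mean_filename = os.path.join(models_path, 'mean.binaryproto')

# Parse the binaryproto mean file into a numpy array
proto_data = open(mean_filename, "rb").read()
blob = caffe.io.caffe_pb2.BlobProto.FromString(proto_data)
mean_image = caffe.io.blobproto_to_array(blob)[0]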

OpenCv pytesseract for OCR

How to use opencv and pytesseract to extract text from image?
import cv2
import pytesseract
from PIL import Image
import numpy as np
from matplotlib import pyplot as plt
img = Image.open('test.jpg').convert('L')
img.show()
img.save('test','png')
img = cv2.imread('test.png',0)
edges = cv2.Canny(img,100,200)
#contour = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
#print pytesseract.image_to_string(Image.open(edges))
print pytesseract.image_to_string(edges)
But this gives an error:
Traceback (most recent call last):
  File "open.py", line 14, in <module>
    print pytesseract.image_to_string(edges)
  File "/home/sroy8091/.local/lib/python2.7/site-packages/pytesseract/pytesseract.py", line 143, in image_to_string
    if len(image.split()) == 4:
AttributeError: 'NoneType' object has no attribute 'split'
If you want to do some pre-processing with OpenCV (like the edge detection you did) and later extract the text, you can use this:
# All the imports and other stuff go here
img = cv2.imread('test.png',0)
edges = cv2.Canny(img,100,200)
img_new = Image.fromarray(edges)
text = pytesseract.image_to_string(img_new, lang='eng')
print (text)
You cannot use OpenCV objects directly with pytesseract methods.
Try:
from PIL import Image
from pytesseract import *
image_file = 'test.png'
print(pytesseract.image_to_string(Image.open(image_file)))
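Note that recent pytesseract releases (roughly 0.3 and later, if memory serves) also accept NumPy arrays directly and convert them via PIL internally, so under that assumption the explicit conversion can be skipped:

import cv2
import pytesseract

img = cv2.imread('test.png', 0)
edges = cv2.Canny(img, 100, 200)

# Newer pytesseract versions wrap NumPy arrays in a PIL Image internally
print(pytesseract.image_to_string(edges, lang='eng'))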
