Webcam based object Scanner in C# 9/10 .NET 6 [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 1 year ago.
I'm trying to port this Python code from theDataFox card_scanner_app to a C# WPF application.
The Python program uses a webcam to recognize an object (a trading card) and takes a snapshot of it when the space bar is pressed.
As far as I know, the code hashes the snapshot and compares it against a database of 64,000 images which are already hashed.
The code then takes the image ID, given in the image filename, and generates the output from the database with all the information about this specific object.
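For reference, the hash-and-compare step works conceptually like the following minimal sketch (assuming a perceptual hash via the imagehash package; the actual app's database layout and helper names differ):
import imagehash
from PIL import Image

def load_reference_hashes(paths):
    # Pre-compute a perceptual hash for every known card image.
    return {p: imagehash.phash(Image.open(p)) for p in paths}

def best_match(snapshot_path, reference_hashes):
    # A smaller Hamming distance between hashes means a closer visual match.
    snap_hash = imagehash.phash(Image.open(snapshot_path))
    return min(reference_hashes.items(), key=lambda kv: snap_hash - kv[1])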
import cv2
import pygame
import imutils
import numpy as np
import pygame.camera
from pygame.locals import KEYDOWN, K_q, K_s, K_SPACE
from . import util
import argparse
import glob

DEBUG = False

# calibrate this for camera position
MIN_CARD_AREA = 250000.0 / 3

# calibrate these
THRESHOLD = (100, 255)
FILTER = (11, 17, 17)
ROTATION = 0


# noinspection PyUnresolvedReferences
def scan(img):
    # preprocess image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.bilateralFilter(gray, FILTER[0], FILTER[1], FILTER[2])
    ret, gray = cv2.threshold(gray, THRESHOLD[0], THRESHOLD[1], cv2.THRESH_BINARY)
    edges = imutils.auto_canny(gray)

    # extract contours
    cnts, _ = cv2.findContours(edges.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    cnts = [c for c in cnts if cv2.contourArea(c) >= MIN_CARD_AREA]

    card, c = None, None
    if cnts:
        # get largest contour
        c = sorted(cnts, key=cv2.contourArea, reverse=True)[0]

        # approximate the contour
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.05 * peri, True)
        pts = np.float32(approx)

        x, y, w, h = cv2.boundingRect(c)

        # Find center point of card by taking x and y average of the four corners.
        # average = np.sum(pts, axis=0)/len(pts)
        # cent_x = int(average[0][0])
        # cent_y = int(average[0][1])
        # center = [cent_x, cent_y]

        # Warp card into 200x300 flattened image using perspective transform
        card = util.flattener(img, pts, w, h)
        card = util.cv2_to_pil(card).rotate(ROTATION)

    return card, c, gray, edges


# noinspection PyUnresolvedReferences
def detect(on_detect):
    dim = (800, 600)

    pygame.init()
    pygame.camera.init()
    cams = pygame.camera.list_cameras()
    # print(cams)
    display = pygame.display.set_mode(dim, 0)
    cam = pygame.camera.Camera(cams[-1], dim)
    cam.start()

    capture = True
    while capture:
        img = cam.get_image()
        img = pygame.transform.scale(img, dim)
        img = util.pygame_to_cv2(img)

        card, c, gray, edges = scan(img)

        # q to quit
        for event in pygame.event.get():
            if event.type == KEYDOWN:
                if event.key == K_q:
                    capture = False
                elif event.key == K_s or event.key == K_SPACE:
                    if card is None:
                        print('nothing found')
                    else:
                        on_detect(card)

        # display
        if c is not None:
            cv2.drawContours(img, [c], -1, (0, 0, 255), 3)
        img = util.cv2_to_pygame(img)
        display.blit(img, (0, 0))

        if card is not None:
            card_img = util.pil_to_pygame(card)
            display.blit(card_img, (0, 0))

        if DEBUG:
            for layer in [gray, edges]:
                layer = cv2.cvtColor(layer, cv2.COLOR_GRAY2RGB)
                layer = util.cv2_to_pygame(layer)
                layer.set_alpha(100)
                display.blit(layer, (0, 0))

        pygame.display.flip()

    cam.stop()
    pygame.quit()
Now I'm trying to find a solution that gives me similar webcam access in C# (Visual Studio).
As I'm still a beginner, I have no idea how to do this.
I tried several NuGet packages and got nothing working; I guess my skills aren't good enough yet.
Therefore I now have to ask you: do you have any ideas or solutions that could accomplish the given task?
I have already ported most of the program and it's working fine.
This is my full code.

There is no built-in webcam API directly in C#; the closest you can get is the UWP video capture API. When I tried this, it did not work very well with my webcam, but that might just be me.
There are lots of image processing libraries that have webcam support built in. This includes OpenCV/EmguCV, but also, for example, AForge.
I have used a "versatile webcam library" that is a wrapper around the native webcam APIs, and it seems to work well.
More or less all of these libraries have good documentation with nice examples of how to get images; many also have sample applications that demonstrate how to use the library, and this should be plenty to get you started. So "I tried and it didn't work" is not a good explanation. What have you tried? Why did it not work?

Related

How to simulate mouse clicks and dragging using co-ordinates from opencv object tracking (Python)

I'm working on a project which uses OpenCV to detect a blue LED and obtain its x and y coordinates. So far I have everything working, however I can't seem to find any successful way of using the coordinates to move the cursor in the same way you can with a physical mouse.
I have tried using the Python mouse module and pynput, but they both have the same issue, which is that the "press" feature is very inconsistent in how it works.
What I want is for the LED to always be detected as a single click unless it is held, in which case it should drag.
The problem is that dragging only works in some windows like File Explorer and doesn't work in VS Code or Chrome. Also, I can't draw smooth lines using the press function, as it only draws straight lines.
The only way I can think of doing something like this would be to draw small straight lines at regular intervals in order to form a smooth line, but I'm unsure how something like this would be done (a rough sketch of what I mean follows the next paragraph).
Maybe there is a module that already does this, but I can't seem to find anything on the subject. Most questions asked here are about automating mouse events, but that's not what I'm after.
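To make that idea concrete, something like the following untested sketch is what I have in mind (using the mouse module; the step count is an arbitrary guess):
import mouse

def drag_smoothly(start, end, steps=25):
    # Interpolate between the two points and move in small increments,
    # so the OS sees a continuous drag instead of one straight jump.
    x0, y0 = start
    x1, y1 = end
    mouse.move(x0, y0, absolute=True)
    mouse.press("left")
    for i in range(1, steps + 1):
        t = i / steps
        mouse.move(x0 + (x1 - x0) * t, y0 + (y1 - y0) * t, absolute=True)
    mouse.release("left")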
The code I have so far is as follows:
from cv2 import warpPerspective
from LScalibrate import warpImage
import cv2
from ast import literal_eval
import numpy as np
import mouse


def start(root, pointsstr, maskparamsmalformed, width, height):
    points = literal_eval(pointsstr)
    maskparamsstr = ''.join([letter for letter in maskparamsmalformed if letter not in ("array()")])
    maskparams = literal_eval(maskparamsstr)
    lower, upper = np.array(maskparams[0]), np.array(maskparams[1])
    root.withdraw()
    cap = cv2.VideoCapture(0)
    cap.set(15, 3)  # may have to change the 2nd arg. Only supported for some cameras. Testing with droidcam therefore cannot use this myself
    mat = warpImage(cap, points)
    while True:
        check, frame = cap.read()
        if not check:
            break
        frame = warpPerspective(frame, mat, (1000, 1000))
        hsvimg = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        maskedimg = cv2.inRange(hsvimg, lower, upper)
        image = cv2.bitwise_and(frame, frame, mask=maskedimg)
        contours, rel = cv2.findContours(maskedimg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        pts = None
        contourpts = []
        if len(contours) != 0:
            for contour in contours:
                if cv2.contourArea(contour) > 10:
                    x, y, w, h = cv2.boundingRect(contour)
                    x = (x + (x + w)) // 2
                    y = (y + (y + h)) // 2
                    pts = (x, y)
                    contourpts.append(pts)
        check = set(contourpts)
        if len(check) > 1:
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            gray = cv2.GaussianBlur(gray, (5, 5), 0)
            minv, maxv, minl, maxl = cv2.minMaxLoc(gray)
            pts = maxl
        if pts is not None:
            controlCursor(pts, width, height)
            cv2.circle(frame, pts, 3, (0, 0, 255), -1)
        else:
            mouse.release("left")
        cv2.imshow("win", frame)
        if cv2.waitKey(1) & 0xFF == 27:
            break
    cap.release()
    cv2.destroyAllWindows()
    root.deiconify()


def minContour(contours):
    return sorted(contours, key=cv2.contourArea, reverse=False)[0]


def controlCursor(pos, w, h):
    print(pos)
    print(w, h)
    x = (pos[0] / 1000) * w
    y = (pos[1] / 1000) * h
    print(x, y)
    mouse.move(x, y, True)
    mouse.press("left")
I have included a video showing how it's currently working and all the program code if needed:
Video: https://youtu.be/Q9tOIyy_tsE
Github (all code): https://github.com/ImaadNisar/Lightscreen-Touchscreen-Detection
Thanks!

Camera blockage detection that is mounted on a car [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 1 year ago.
I'm developing an algorithm that detects camera blockage using Python OpenCV.
As I'm not good at computer vision yet, I'm not sure whether the algorithm I came up with is appropriate.
Can anybody take a look at the code I made?
import cv2
import numpy as np


def detectCameraBlockage_absdiff():
    cnt = 0
    cap = cv2.VideoCapture(0)
    # print('width: ', cap.get(3))
    # print('height: ', cap.get(4))
    IMAGE_WIDTH = cap.get(3)
    IMAGE_HEIGHT = cap.get(4)
    background = np.zeros((480, 640))
    while True:
        if cnt == 0:
            cnt += 1
            pass
        else:
            ret, frame = cap.read()
            # frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            if ret:
                cv2.imshow('if you want to quit press the key [q]', frame)
                # cv2.imshow('if you want to quit press the key [q]', frame[:, :, 0])
                mean_frame = frame.mean(axis=2)
                # cv2.imshow('mean_frame', mean_frame)
                diff = cv2.absdiff(background, mean_frame)
                # print(type(diff))
                cv2.imshow('diff', diff)
                _, diff = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
                # print(type(_), type(diff), sep='\n')  # float, ndarray
                if cnt % 10 == 0:
                    background = mean_frame
                cnt += 1
                print('cnt: ', cnt)
                if cv2.waitKey(1) == ord('q'):
                    break
            else:
                print('camera failed.')
                break
This code doesn't work for me.
I can't figure out how I should deal with the pixel values to get the difference between an image with no blockage and one with a blockage.
An idea would be to measure the similarity between a reference image and the current image. The reference image would be the image without the blockage.
If your camera is static, a simple way to implement it is to use SSIM (https://docs.opencv.org/4.5.2/d5/dc4/tutorial_video_input_psnr_ssim.html).
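A minimal sketch of that idea, assuming scikit-image is available and both frames are grayscale images of the same size (the threshold value is a placeholder that would need tuning):
from skimage.metrics import structural_similarity

def is_blocked(reference_gray, current_gray, threshold=0.5):
    # SSIM close to 1.0 means the current frame still looks like the
    # unobstructed reference; a low score suggests the lens is blocked.
    score = structural_similarity(reference_gray, current_gray)
    return score < threshold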

OpenCv Project Guidance for tracking humans

I am currently making a program with OpenCV that detects 2 colours. My next step is to leave a "translucent" path of where both of these colours have moved. The idea is that every time they cross over their trail, it gets a shade darker.
Here is my current code:
# required libraries
import cv2
import numpy as np


# main function
def main():
    # returns vid from camera -- cameras are indexed (0 is the front camera, 1 is rear)
    cap = cv2.VideoCapture(0)

    # cap is opened if pc is receiving cam data
    if cap.isOpened():
        ret, frame = cap.read()
    else:
        ret = False

    while ret:
        ret, frame = cap.read()

        # setting color range
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

        # BLUE color range
        blue_low = np.array([100, 50, 50])
        blue_high = np.array([140, 255, 255])

        # GREEN color range
        green_low = np.array([40, 40, 40])
        green_high = np.array([80, 255, 255])

        # creating masks
        blue_mask = cv2.inRange(hsv, blue_low, blue_high)
        green_mask = cv2.inRange(hsv, green_low, green_high)

        # combination of masks
        blue_green_mask = cv2.bitwise_xor(blue_mask, green_mask)
        blue_green_mask_colored = cv2.bitwise_and(blue_mask, green_mask, mask=blue_green_mask)

        # create the masked version (shows the background black and the specified color the color coming through cam)
        output = cv2.bitwise_and(frame, frame, mask=blue_green_mask)

        # create/open windows
        cv2.imshow("image mask", blue_green_mask)
        cv2.imshow("orig webcam feed", frame)
        cv2.imshow("color tracking", output)

        # if q is pressed the project breaks
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    # once broken the program will run remaining instructions (closing windows and stopping cam)
    cv2.destroyAllWindows()
    cap.release()


if __name__ == "__main__":
    main()
My question now is: how would I add the trail of where both colours have gone? I have also read that I will run into a problem when the trail is implemented, as insignificant objects may be detected as one of the colours and leave an unwanted trail, meaning I will need to find a way to only trail the largest object of each specified colour.
EDIT:
For further clarification:
I am using 2 black highlighters (one with a blue cap and one with a green cap).
With regard to the trail, I am referring to something similar to this:
trail clarification
This guy did an okay job of explaining it, but I was still very confused, which is why I came to Stack Overflow for help.
As for the trails, I would like them to be 'translucent' and not solid like in the picture above, so that if an object crosses over its path again, that section of the path becomes a shade darker (a rough sketch of what I mean is below).
Hope this helped :)
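To make the darkening-on-crossover behaviour concrete, this is roughly the kind of accumulation I have in mind (an untested sketch; the per-pass opacity step is a placeholder, and trail would be a float32 zero array created once before the while loop):
import numpy as np

def blend_trail(frame, mask, trail, step=0.05):
    # Add a little opacity wherever the combined mask is active, so pixels the
    # objects keep crossing become progressively darker, then darken the frame
    # by that per-pixel opacity (i.e. blend towards black).
    trail += (mask > 0).astype(np.float32) * step
    np.clip(trail, 0.0, 1.0, out=trail)
    alpha = trail[..., None]  # HxWx1 opacity
    return (frame.astype(np.float32) * (1.0 - alpha)).astype(np.uint8)
Calling this each iteration with blue_green_mask and showing the returned image would leave the translucent, gradually darkening path.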

Face Detection with less cpu load cv2 [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 2 years ago.
For a university project I am programming face mask recognition. To detect faces, I use cv2.CascadeClassifier('face_detector.xml'). As I noticed, this program takes up way too much CPU, resulting in a heavily stuttering video stream frame rate.
I am running the code on a MacBook Air with a 1.6 GHz dual-core Intel Core i5.
Can someone explain what I can change to make it smoother? Or maybe recommend another face detection method?
Here is my code:
import numpy as np
import os
import tensorflow as tf
import cv2
from matplotlib.pyplot import gray

# Disable tensorflow compilation warnings
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import cv2

# Load the cascade
face_cascade = cv2.CascadeClassifier('face_detector.xml')

# To capture video from webcam.
cap = cv2.VideoCapture(0)
# To use a video file as input
# cap = cv2.VideoCapture('filename.mp4')

model = tf.keras.models.load_model('checkpoint19.ckpt')

i = 0
while True:
    # Read the frame
    _, img = cap.read()

    # Detect the faces
    faces = face_cascade.detectMultiScale(img, 1.3, 4)

    # save each frame as image with PNG format
    image = cv2.imwrite('database/{index}.png'.format(index=i), img)
    i += 1

    # cut out the fragment in the box of the image
    # Draw the rectangle around each face
    for (x, y, w, h) in faces:
        crop_img = img[y:y + h, x:x + w]
        resizedImg = cv2.resize(crop_img, (224, 224))
        gray = cv2.cvtColor(resizedImg, cv2.COLOR_BGR2GRAY)
        imgArrNew = gray.reshape(1, 224, 224, 1)
        prediction = model.predict(imgArrNew)
        print(prediction)
        label = np.argmax(prediction)
        print(label)

    # font
    font = cv2.FONT_HERSHEY_SIMPLEX

    # org
    for (x, y, w, h) in faces:
        org = (x, y + h + 30)

        # fontScale
        fontScale = 1

        # Blue color in BGR
        color = (255, 0, 0)

        # Line thickness of 2 px
        thickness = 2

        # output the predicted label/sign on the live-stream frame
        if label == 0:
            color = (0, 0, 225)
            label_out = "Mask off"
        if label == 1:
            color = (50, 205, 50)
            label_out = "Mask on"
        if label == 2:
            color = (0, 255, 225)
            label_out = "incorrect Mask"

        cv2.rectangle(img, (x, y), (x + w, y + h), color, 2)
        image1 = cv2.putText(img, label_out, org, font,
                             fontScale, color, thickness, cv2.LINE_AA)

    # Display
    cv2.imshow('Face_Regonition', img)

    # Stop if escape key is pressed
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

# Release the VideoCapture object
cap.release()
The Haar cascade classifier is slow. Doing detection on every single frame is hard for low-end computing devices.
The easiest way is to use a lower-resolution image or a lower FPS, but it will look cheap.
A better way is to use a detection-and-tracking framework where detection happens at a 1 Hz interval in a separate thread and tracking happens at 30 Hz; the human eye can't tell the difference.
For face detection you can choose any method, such as Haar, HOG or a CNN, and put it in a new thread. In the main tracking thread (which can run in real time), update the model, predict the bounding box and display it.
You can look at the tracking options here; I suggest a KCF-based method as it is fast and reliable.
https://www.pyimagesearch.com/2018/07/30/opencv-object-tracking/
Just pass the detection box rect as the input rect for the tracker, then it should work directly.
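A minimal sketch of handing a Haar detection box to a KCF tracker (assuming opencv-contrib is installed; on newer builds the constructor lives under cv2.legacy, and the cascade file name is taken from the question):
import cv2

cap = cv2.VideoCapture(0)
face_cascade = cv2.CascadeClassifier('face_detector.xml')

tracker = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if tracker is None:
        # Expensive detection runs only until a face is found (in practice,
        # re-run it on a timer or in a separate thread).
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 4)
        if len(faces) > 0:
            tracker = cv2.TrackerKCF_create()  # cv2.legacy.TrackerKCF_create() on newer OpenCV
            tracker.init(frame, tuple(int(v) for v in faces[0]))
    else:
        # Cheap per-frame tracking update.
        ok, box = tracker.update(frame)
        if ok:
            x, y, w, h = [int(v) for v in box]
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        else:
            tracker = None  # lost the face, fall back to detection
    cv2.imshow('tracking', frame)
    if cv2.waitKey(1) & 0xFF == 27:
        break
cap.release()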

opencv/python : motion detect weird thresholding

I am trying to make a motion detection program using my webcam, but I'm getting these weird results when thresholding the difference of frames:
When I'm moving (seems okay, I guess):
![enter image description here][1]
When I'm not moving:
![enter image description here][2]
What can this be? I have already run a couple of programs that use exactly the same algorithm, and their thresholding works fine.
Here's my code:
import cv2
import random
import numpy as np

# Create windows to show the captured images
cv2.namedWindow("window_a", cv2.CV_WINDOW_AUTOSIZE)
cv2.namedWindow("window_b", cv2.CV_WINDOW_AUTOSIZE)

# Structuring element
es = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 4))

## Webcam Settings
capture = cv2.VideoCapture(0)

# dimensions
frameWidth = capture.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH)
frameHeight = capture.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT)

while True:
    # Capture a frame
    flag, frame = capture.read()
    current = cv2.blur(frame, (5, 5))
    difference = cv2.absdiff(current, previous)  # difference is taken of the current frame and the previous frame
    frame2 = cv2.cvtColor(difference, cv2.cv.CV_RGB2GRAY)
    retval, thresh = cv2.threshold(frame2, 10, 0xff, cv2.THRESH_OTSU)
    dilated1 = cv2.dilate(thresh, es)
    dilated2 = cv2.dilate(dilated1, es)
    dilated3 = cv2.dilate(dilated2, es)
    dilated4 = cv2.dilate(dilated3, es)
    cv2.imshow('window_a', dilated4)
    cv2.imshow('window_b', frame)
    previous = current
    key = cv2.waitKey(10)  # 20
    if key == 27:  # exit on ESC
        cv2.destroyAllWindows()
        break
Thanks in advance!
[1]: http://i.stack.imgur.com/hslOs.png
[2]: http://i.stack.imgur.com/7fB95.png
The first thing you need is a previous = cv2.blur(frame, (5,5)) to prime your previous sample, after a frame grab and before your while loop.
This will make the code you posted work, but will not solve your problem.
I think the issue you are having is due to the type of thresholding algorithm you are using. Try a binary threshold, cv2.THRESH_BINARY, instead of Otsu's. It seemed to solve the problem for me.
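Put together, the two changes would look roughly like this (a sketch against the posted code, which uses the old cv2.cv constants):
import cv2

capture = cv2.VideoCapture(0)

# Prime the previous sample once, before the loop.
flag, frame = capture.read()
previous = cv2.blur(frame, (5, 5))

while True:
    flag, frame = capture.read()
    current = cv2.blur(frame, (5, 5))
    difference = cv2.absdiff(current, previous)
    frame2 = cv2.cvtColor(difference, cv2.cv.CV_RGB2GRAY)
    # Plain binary threshold instead of Otsu's.
    retval, thresh = cv2.threshold(frame2, 10, 0xff, cv2.THRESH_BINARY)
    cv2.imshow('window_a', thresh)
    previous = current
    if cv2.waitKey(10) == 27:  # exit on ESC
        break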
