I have been searching through the OpenCV docs, but I was unable to find an answer to my problem.
I want to remove all OpenCV-drawn shapes (lines, rectangles, etc.) from the image.
Here is an example, produced with the following generic code:
import numpy as np
import cv2
# Create a black image
img = np.zeros((512,512,3), np.uint8)
# Draw a diagonal blue line with thickness of 5 px
img = cv2.line(img,(0,0),(511,511),(255,0,0),5)
while True:
    cv2.imshow('generic frame', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
I want the output to look like the following image.
The background needs to stay the same, without the OpenCV drawings.
With the sample code below, I am using basic dlib face detection.
I was initially drawing a bounding box around the detected face, but now I want to display only the detected region (i.e. the face): img[top:bottom, left:right, :]
import sys
import dlib
import cv2
detector = dlib.get_frontal_face_detector()
cam = cv2.VideoCapture(1)
color_green = (0,255,0)
line_width = 3
while True:
    ret_val, img = cam.read()
    rgb_image = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    dets = detector(rgb_image)
    #for det in dets:
    #    cv2.rectangle(img, (det.left(), det.top()), (det.right(), det.bottom()), color_green, line_width)
    if dets:
        det = dets[0]
        top, bottom, left, right = det.top(), det.bottom(), det.left(), det.right()
        new_img = img[top:bottom, left:right, :]
        cv2.imshow('my webcam', new_img)
    if cv2.waitKey(1) == 27:
        break  # esc to quit
cam.release()
cv2.destroyAllWindows()
The issue I am facing is that it successfully shows me what is within the (x, y, w, h) region, but the image keeps resizing depending on how close I am to the camera.
What I did was the following:
I got the coordinates of the detection: img[top:bottom, left:right, :]
I then resized the image to 480×480: focus_face = cv2.resize(img, (480, 480))
And then passed the image to imshow.
So the issue I'm having is that if I resize the array (img), it does not seem to follow the detected face, but focuses on the centre of the screen, especially the further I move back. If I'm at the centre of the screen it shows my whole face; if I'm at the sides, it only shows part of my face.
I did my best to explain this, but if you have any questions please let me know.
Best.
I do "Background Subtraction" on a video stream. Then I want to check whether there are white dots inside the interior of a specified polygon.
I thought about using https://docs.opencv.org/2.4/doc/tutorials/imgproc/shapedescriptors/point_polygon_test/point_polygon_test.html, but I don't know how to do it, because the white points only exist after applying the filter. The original stream also contains white points, which I don't want to count.
import numpy as np
import cv2
import time
cap = cv2.VideoCapture()
cap.open("rtsp://LOGINNAME:PASSWORD@192.168.178.42:554")
#cap.open("C:\\Users\\001\\Desktop\\cam-20191025-220508-220530.mp4")
fgbg = cv2.bgsegm.createBackgroundSubtractorMOG()
while True:
    ret, frame = cap.read()
    fgmask = fgbg.apply(frame)
    polygonInnenAutoErkennen_cnt = np.array([(24, 719), (714, 414), (1005, 429), (1084, 719)])
    cv2.drawContours(fgmask, [polygonInnenAutoErkennen_cnt], -1, (255, 128, 60))
    # How can I check here?
    cv2.imshow('frame', fgmask)
    k = cv2.waitKey(30) & 0xff
    if k == 27:  # exit on ESC
        break
cap.release()
cv2.destroyAllWindows()
The simplest way is to use a mask image. Plot your polygon on a binary image, and use it as a mask for your white dots. You can do a per-pixel multiplication or a logical AND.
Here is my solution for getting a binary image:
import cv2
import numpy as np
img = cv2.imread('crop.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray,(5,5),0)
ok,thresh = cv2.threshold(blur,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
cv2.imshow('image',img)
cv2.imshow('threshold',thresh)
k = cv2.waitKey(0) & 0xff
if k == ord('q'):
    cv2.destroyAllWindows()
Below is the result I get. How can I remove the background from the hand?
original image
threshold image
You can use color detection to get a mask for the hand region. If you want to do background subtraction on video, that can be achieved by storing the background and subtracting the upcoming frames from it.
import cv2
cap = cv2.VideoCapture(1)
j = 0
while True:
    ret, frame = cap.read()
    if j == 0:
        bg = frame.copy().astype("float")
    if j < 30:
        # Average the first 30 frames into a background model.
        cv2.accumulateWeighted(frame, bg, 0.5)
        j = j + 1
    diff = cv2.absdiff(frame, bg.astype("uint8"))
    diff = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    thre, diff = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    cv2.imshow("j", diff)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
I am trying out OpenCV's ROI function. With this I am trying to crop out a section of an image that I load. After that I am trying to save the image as well as show it. Showing it is not much of a problem, but saving it is. The image is being stored as a big black rectangle instead of the actual cropped image. Here is my code:
import cv2
import numpy as np
from skimage.transform import rescale, resize
if __name__ == '__main__':
    # Read image
    im = cv2.imread("/Path/to/Image.jpg")
    img = resize(im, (400, 400), mode='reflect')
    # Select ROI
    r = cv2.selectROI(img)
    # Crop image
    imCrop = img[int(r[1]):int(r[1]+r[3]), int(r[0]):int(r[0]+r[2])]
    # Save first, then display cropped image
    cv2.imwrite("../../Desktop/Image.jpg", imCrop)  # This is where there seems to be a problem
    cv2.imshow("im", imCrop)
    cv2.waitKey(0)
Can someone please help?
cv2.selectROI returns the (x,y,w,h) values of a rectangle, similar to cv2.boundingRect(). My guess is that the black image comes from the skimage resize step: skimage's resize returns a float image with values in [0, 1], and cv2.imwrite does not rescale floats before saving, so every pixel is written as (near) black. So stick to OpenCV throughout: unpack the (x,y,w,h) coordinates directly and use Numpy slicing to extract the ROI. Here's a minimal working example to extract and save a ROI:
Input image -> Program to extract ROI -> Saved ROI
Code
import cv2
image = cv2.imread('1.jpg')
(x,y,w,h) = cv2.selectROI(image)
ROI = image[y:y+h, x:x+w]
cv2.imshow("ROI", ROI)
cv2.imwrite("ROI.png", ROI)
cv2.waitKey()
I am having some difficulty combining matplotlib and OpenCV. I have the following code, which shows a video stream and pauses when the space bar is hit. I then want to use matplotlib to draw some annotations on top of the paused image (the user can then hit space to unpause and repeat the process). However, whenever I add any matplotlib functions, it opens a separate figure window; I would ideally like to have my annotations directly on the image. I also looked at the available drawing functions in OpenCV, and there is not enough functionality there for me (I am looking to fit a complicated curve to the paused image). Any help would be greatly appreciated!
import numpy as np
import cv2
from matplotlib import pyplot as plt
cap = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, im = cap.read()
    # Our operations on the frame come here
    im_gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    im_g = cv2.GaussianBlur(im_gray, (5, 5), 0)
    (thresh, im_bw) = cv2.threshold(im_g, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    cv2.imshow('Window', im)
    k = cv2.waitKey(1)
    if k > 0:
        print(k)
    if k == 32:
        ##########
        # Plot on top of image using matplotlib
        ##########
        cv2.waitKey(0)
        continue
    elif k == 27:
        break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()