I want to display 3 videos at the same time, like this:
The way I am opening one video in fullscreen right now is as follows:
import numpy as np
import cv2

cap = cv2.VideoCapture(r'C:\Users\NachoM\Videos\VTS_01_1.mp4')  # raw string so the backslashes in the Windows path are not treated as escapes

cv2.namedWindow("window", cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty("window", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('window', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Is there any other parameter I have to pass to OpenCV to position the windows like in the picture above?
Thank you
One way of doing this is using:
cv2.moveWindow("WindowName", x, y)
More details (assuming OpenCV 2.4 is used) can be found at http://docs.opencv.org/2.4/modules/highgui/doc/user_interface.html#movewindow
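For example, to show three videos side by side you can create one named window per video, move each window to its own x offset with cv2.moveWindow, and display one frame from each capture per loop iteration. Below is a minimal sketch of that idea; the three file paths and the 640x360 window size are made-up placeholders, not values from the question:

import cv2

# Hypothetical paths -- replace them with your own three videos.
paths = [r'C:\videos\left.mp4', r'C:\videos\middle.mp4', r'C:\videos\right.mp4']
caps = [cv2.VideoCapture(p) for p in paths]

# One resizable window per video, each moved to its own horizontal position.
for i in range(len(caps)):
    name = f'video{i}'
    cv2.namedWindow(name, cv2.WINDOW_NORMAL)
    cv2.resizeWindow(name, 640, 360)
    cv2.moveWindow(name, 640 * i, 0)   # x = 0, 640, 1280; y = 0

while True:
    frames = [cap.read() for cap in caps]
    if not all(ret for ret, _ in frames):
        break  # stop as soon as any of the videos ends or fails to read
    for i, (_, frame) in enumerate(frames):
        cv2.imshow(f'video{i}', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

for cap in caps:
    cap.release()
cv2.destroyAllWindows()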
My problem is that OpenCV runs and shows my video not at its original size (OpenCV seems to shrink the video a little bit), which affects my region of interest.
How can OpenCV show the exact original size of my video? Below is my code, and the difference between the video's original size and what OpenCV displays.
Code:
import cv2

cap = cv2.VideoCapture("video_link")

while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("frame", frame)
    key = cv2.waitKey(1)
    if key == 27:  # Esc
        break

cap.release()
cv2.destroyAllWindows()
(Screenshots of the original size and of the OpenCV video output size were attached.)
Try this; manually enter the height and width of the camera feed. (Edit: the DirectShow backend works only on Windows; if you have something else, use a different solution.)
import cv2

video = "videolink"
height = 1080  # for some reason, anything else works for my HD camera, for example 1079
width = 1920

cap = cv2.VideoCapture(video, cv2.CAP_DSHOW)  # DirectShow backend, Windows only
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)

while cap.isOpened():
    success, image = cap.read()
    if not success:
        print("Ignoring empty camera frame.")
        # If loading a video, use 'break' instead of 'continue'.
        continue
    cv2.imshow('CameraFeed', image)
    if cv2.waitKey(5) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
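If the goal is just to confirm whether the video is really decoded at its original resolution, a further check (a suggestion beyond the answer above, using the same placeholder "video_link") is to compare what the backend reports with the shape of a decoded frame; creating the window with cv2.WINDOW_AUTOSIZE also keeps the window exactly the size of the image:

import cv2

cap = cv2.VideoCapture("video_link")

# Ask the backend what resolution it thinks the video has.
print("reported size:",
      int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
      int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))

ret, frame = cap.read()
if ret:
    # frame.shape is (height, width, channels) of what was actually decoded.
    print("decoded size:", frame.shape[1], frame.shape[0])
    # WINDOW_AUTOSIZE (the default) sizes the window to the image itself.
    cv2.namedWindow("frame", cv2.WINDOW_AUTOSIZE)
    cv2.imshow("frame", frame)
    cv2.waitKey(0)

cap.release()
cv2.destroyAllWindows()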
import cv2

cap = cv2.VideoCapture(1)
count = 0
while(True):
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('Frame', gray)
    ret,thresh=cv2.threshold(gray,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
    cv2.imwrite('capture.png', thresh
Hello, I am using the above code to see a live view in a Python program. I would like to show only a part of the live view, but I couldn't manage that.
Could anyone comment on how I could do that?
(Screenshot of the uncropped view was attached.) I would like to see only the red-squared area in the live view.
Are you trying to access your front webcam?
Try
cap = cv2.VideoCapture(0) instead of cap = cv2.VideoCapture(1)
cv2.imwrite('capture.png', thresh) instead of cv2.imwrite('capture.png', thresh
Also make sure to add:
if cv2.waitKey(1) & 0xFF == ord('q'):
    break
cap.release()
cv2.destroyAllWindows()
after cv2.imwrite() or your screen will freeze up.
(I'm also not sure what your ret,thresh=cv2.threshold(gray,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU) line is trying to do; it may not be doing what you expect.)
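As for showing only the red-squared area, one common approach (not covered in the answer above) is to crop each frame with NumPy slicing before displaying it. A minimal sketch; the crop coordinates below are made-up placeholders you would replace with your own region:

import cv2

cap = cv2.VideoCapture(0)

# Hypothetical region of interest: rows y1:y2 and columns x1:x2, in pixels.
y1, y2 = 100, 400
x1, x2 = 200, 600

while True:
    ret, frame = cap.read()
    if not ret:
        break
    roi = frame[y1:y2, x1:x2]  # NumPy slicing: [rows, columns]
    cv2.imshow('Cropped live view', roi)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()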
In the following snippet of code, how would I set the size of the capture window? I want to take a 256x256 pixel picture.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
while(True):
    ret, frame = cap.read()
    cv2.imshow('img1', frame)
    if cv2.waitKey(1) & 0xFF == ord('y'):
        cv2.imwrite('imag.png', frame)
        cv2.destroyAllWindows()
        break
cap.release()
You can alternatively use OpenCV's resize function, or slice your frame if you are only interested in a certain region of the captured image.
You can set the capture resolution like so:
video_capture = cv2.VideoCapture(0)
video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, 256)
video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 256)
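Note that many webcams only honour a fixed set of resolutions, so the set() calls above can be silently ignored; in that case you can fall back to resizing or cropping the captured frame yourself, as mentioned earlier. A rough sketch (the 256x256 target and the output file names are just placeholders echoing the question):

import cv2

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cap.release()

if ret:
    # Option 1: scale the whole frame down to 256x256.
    small = cv2.resize(frame, (256, 256))

    # Option 2: crop a 256x256 region (here the top-left corner) with slicing.
    crop = frame[0:256, 0:256]

    cv2.imwrite('resized.png', small)
    cv2.imwrite('cropped.png', crop)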
I want my code to capture a single image from the webcam and process it further, e.g. later detecting colours, Sobel edges and a lot more.
In short, I want to do image acquisition.
To use your webcam, you can use VideoCapture:
import cv2

cap = cv2.VideoCapture(0)  # use 0 if you only have a front-facing camera
ret, frame = cap.read()    # read one frame
print(frame.shape)
cap.release()              # release the VideoCapture object
You start the webcam, read one image and immediately release it. The frame is the image and you can preprocess it however you want. You can view the image using imshow:
cv2.imshow('image', frame)
if cv2.waitKey(0) & 0xff == ord('q'):  # press q to exit
    cv2.destroyAllWindows()
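Since the question mentions colour detection and Sobel edges as the follow-up processing, here is a rough illustration of what that could look like on the captured frame. This is only a sketch under my own assumptions (the HSV range and output file names are placeholders), not part of the answer above:

import cv2

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cap.release()

if ret:
    # Sobel edges: x and y gradients on the grayscale image, combined into a magnitude.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sobel_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    sobel_y = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    edges = cv2.convertScaleAbs(cv2.magnitude(sobel_x, sobel_y))

    # Simple colour detection: mask a hue range in HSV (this range is roughly "blue").
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (100, 50, 50), (130, 255, 255))

    cv2.imwrite('edges.png', edges)
    cv2.imwrite('colour_mask.png', mask)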
import cv2

cap = cv2.VideoCapture(0)  # usually 0 if you have only one camera; with multiple cameras it may be 0, 1, 2, ...
ret, frame = cap.read()    # ret is a True/False status telling you whether reading a frame from the webcam succeeded; frame is a NumPy array

# If you want to loop and read continuously:
ret = True
while ret:
    ret, frame = cap.read()
    if frame is None:
        continue  # skip this iteration; the loop itself ends once ret becomes False

If this is the answer you wanted, then it has been asked multiple times. Make sure you have tried searching for the answer before asking.
import cv2

cam = cv2.VideoCapture(0)
image = cam.read()[1]  # read() returns (ret, frame); take the frame
cv2.imshow("image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
I am following this tutorial http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html#display-video and trying to capture video from the camera. The code runs without errors, but it doesn't show anything except a black screen in the frame window. I don't know why this is happening.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
while(True):
    ret, frame = cap.read()
    if cap.isOpened() == 0:
        cap.open(0)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
I have read in some posts that OpenCV might not support my camera, so I don't know what exactly is happening.
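One way to narrow this down (a suggestion of mine, not something from the post above) is to check what the capture object actually reports before concluding the camera is unsupported: whether it opened at all, whether frames are being returned, and whether the returned frames are genuinely black:

import cv2

cap = cv2.VideoCapture(0)
print("opened:", cap.isOpened())

ret, frame = cap.read()
print("read ok:", ret)

if ret:
    # If the frame is truly black, its maximum pixel value will be 0 (or close to it).
    print("frame shape:", frame.shape, "max pixel value:", frame.max())

cap.release()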