How do I split my 800x480 5-inch screen into 2 parts - python

I am building a stand-alone VR headset using a Raspberry Pi 3 Model B. I am having trouble splitting the screen into two halves, the way phone-based VR apps do. I am still learning Python, so I don't have much of an idea how to do this.
In the code below I have tried to solve this problem, but when I run it on Raspbian I get an error saying that ImageGrab works only on Windows and macOS. I also tried the pyscreenshot module; it works fairly well on my PC screen, but when I connect the 5-inch screen, a black window opens and I see nothing.
import numpy as np
from PIL import ImageGrab
import cv2
import time
while(True):
    screen = np.array(ImageGrab.grab(bbox=(920, 420, 1320, 900)))
    frame = cv2.cvtColor(screen, cv2.COLOR_BGR2RGB)
    frame = cv2.resize(frame, (0, 0), None, 1, .83)
    numpy_horizontal = np.hstack((frame, frame))
    #cv2.namedWindow("window", cv2.WND_PROP_FULLSCREEN)
    #cv2.setWindowProperty("window", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
    cv2.imshow('window', numpy_horizontal)
    if cv2.waitKey(25) & 0xFF == ord('q'):
        cv2.destroyAllWindows()
        break

Your problem is not splitting the screen, but displaying an image on it, so you need a library that can do that. In your example you are using OpenCV's HighGUI windows. That is usually a bad choice and only useful for simple debugging; you need a proper GUI or rendering library.
Here you have a gazillion of options. If you are into games, I would look into moderngl and moderngl-window for hardware-accelerated rendering. Another option is a full GUI toolkit such as PySide2 (Qt for Python), and as far as I have seen, Raspberry Pi now supports it.
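The splitting itself is just array manipulation and is independent of how the frame is grabbed or displayed. A minimal sketch of the side-by-side duplication step for an 800x480 panel, using a hypothetical `make_stereo_frame` helper and plain NumPy nearest-neighbour indexing instead of `cv2.resize` so it runs without OpenCV:

```python
import numpy as np

def make_stereo_frame(frame, screen_w=800, screen_h=480):
    """Duplicate one source frame into a side-by-side stereo image.

    Each eye gets half the screen width. The resize is done with
    nearest-neighbour index maps; cv2.resize would do this better,
    but this keeps the example dependency-free.
    """
    eye_w = screen_w // 2
    h, w = frame.shape[:2]
    rows = np.arange(screen_h) * h // screen_h   # source row for each output row
    cols = np.arange(eye_w) * w // eye_w          # source column for each output column
    eye = frame[rows[:, None], cols[None, :]]     # one eye's view, (screen_h, eye_w, 3)
    return np.hstack((eye, eye))                  # left and right eye side by side

src = np.zeros((480, 400, 3), dtype=np.uint8)     # stand-in for a grabbed frame
stereo = make_stereo_frame(src)
print(stereo.shape)  # (480, 800, 3)
```

The resulting array can then be handed to whatever display library you settle on; only the final "show this 800x480 image fullscreen" step is platform-specific.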

Related

Holding down a key freezes a video pipeline in OpenCV

I have a program, which gets video from industrial camera. I use OpenCV in order to display video and do some image analysing. I have several circles displayed on an image and there's a possibility to move them with keyboard. And when I press down a keyboard button and hold it, image acquisition freezes, but moving of a circle still happens (you can see this by unpressing the keyboard button and the circle moved).
Here is a simplified example:
import numpy as np
import cv2

WINDOW_NAME = 'program'
cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL)
cv2.setWindowProperty(WINDOW_NAME, cv2.WND_PROP_FULLSCREEN,
                      cv2.WINDOW_GUI_EXPANDED)
while True:
    # Simplified image acquisition.
    frame = (np.random.rand(1080, 1920, 3) * 255).astype(np.uint8)
    cv2.imshow(WINDOW_NAME, frame)
    pressed_key_code = cv2.waitKey(1)
    if pressed_key_code == ord('q'):
        cv2.destroyAllWindows()
        break
    # Simplified action on key press.
    elif pressed_key_code == ord('p'):
        print('moving action')
In this example, if one presses and holds p, the image acquisition stops, but one can see the prints still happening.
I tried different key-waiting functions: cv2.pollKey(), cv2.waitKeyEx(). I used different waiting times. After researching the problem, I also found suggestions to call several cv2.waitKey() functions in a row.
I expect key actions not to freeze image acquisition. What can I do?
OpenCV version: 4.7.0
Platform: Windows 10.0.17763 AMD64
HighGUI backend: WIN32UI
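One common workaround (not from the question itself, just a sketch of the pattern) is to move image acquisition to a worker thread, so the main thread's HighGUI key handling cannot stall it. Here the camera is replaced by a plain counter to keep the example self-contained and hardware-free; in real code the worker would call `camera.read()` and the main loop would `imshow` whatever frame is newest:

```python
import threading
import queue
import time

def acquire(frames, stop):
    """Producer thread: keeps grabbing frames no matter what the UI loop does."""
    n = 0
    while not stop.is_set():
        n += 1  # stand-in for: ret, frame = camera.read()
        try:
            frames.put_nowait(n)
        except queue.Full:
            # Drop the oldest frame so the UI always sees a recent one.
            try:
                frames.get_nowait()
            except queue.Empty:
                pass
            frames.put_nowait(n)
        time.sleep(0.001)

frames = queue.Queue(maxsize=2)
stop = threading.Event()
worker = threading.Thread(target=acquire, args=(frames, stop), daemon=True)
worker.start()

# The main thread plays the role of the UI loop: even while it is stuck
# handling a key-repeat burst (simulated here by the sleep), the worker
# keeps producing frames in the background.
time.sleep(0.05)
latest = frames.get()
stop.set()
worker.join()
print(latest > 1)  # frames kept arriving while the "UI" was blocked
```

The design point is the bounded queue with a drop-oldest policy: display latency stays low even when the consumer falls behind, which is exactly what happens during a key-repeat burst.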

OpenCV imshow() function shows a single colored window instead of an image

I've been trying to just take a picture with OpenCV, but imshow() shows a single solid-colored window instead of the image.
It doesn't raise any error.
import cv2
from time import sleep
camera = cv2.VideoCapture(0)
ret, frame = camera.read()
sleep(1)
cv2.imshow("frame", frame)
cv2.waitKey(0)
cv2.imwrite("image.jpg", frame)
cv2.destroyAllWindows()
Here is the window created while running this code
My cam is fully working with other programs, including Python programs, but I can't make this work.
I've tried changing the camera port, restarting PyCharm, and even restarting my computer and camera, but that didn't help.
Saving the image with imwrite() doesn't work either.
image saved with imwrite()
Help?
EDIT:
It probably was my camera: I made a program that takes pictures every 5 seconds, and after the second picture it worked normally.
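The edit above matches a well-known webcam quirk: the first frames after opening a capture device are often dark or empty while auto-exposure settles. A common pattern is to discard a few warm-up frames before keeping one. Sketched here with a stand-in camera class (`FakeCamera` and `read_after_warmup` are hypothetical names) so it runs without hardware; with a real device you would pass a `cv2.VideoCapture` object instead:

```python
class FakeCamera:
    """Stand-in for cv2.VideoCapture: early reads are dark while exposure settles."""
    def __init__(self, warmup=5):
        self.count = 0
        self.warmup = warmup

    def read(self):
        self.count += 1
        frame = "dark" if self.count <= self.warmup else "good"
        return True, frame  # mimics the (ret, frame) pair from VideoCapture.read()

def read_after_warmup(camera, discard=10):
    """Throw away the first few frames, then return one usable frame."""
    for _ in range(discard):
        camera.read()
    return camera.read()

cam = FakeCamera()
ok, frame = read_after_warmup(cam)
print(ok, frame)  # True good
```

Ten discarded frames is an arbitrary but typical number; some cameras need more, and a short `time.sleep` after opening the device can help as well.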

OpenCV only showing a black image when cv.imshow is used

Hi, whenever I use the cv2.imshow function, all I get is a black rectangle like this:
My code is here, I am using python:
from cv2 import cv2
image = cv2.imread("scren.png")
cv2.imshow("image2", image)
I have tried using different file types as well as restarting my computer. Does anyone know how to fix this? Thanks.
I'll put the answer from @fmw42 here, as it solved it for me too and I had been looking for it for hours:
Add cv2.waitKey(0) after cv2.imshow(...).
I don't know why it works, but it solved it for me.
According to the documentation, the imshow function requires a subsequent call to waitKey, or else the image will not show. I believe this behaviour has to do with how HighGUI works. So to display an image in Python using OpenCV, you will always have to do something like this:
import cv2
image = cv2.imread(r"path\to\image")
cv2.imshow("image", image)
cv2.waitKey(0)
Note about imshow
This function should be followed by a call to cv::waitKey or cv::pollKey to perform GUI housekeeping tasks that are necessary to actually show the given image and make the window respond to mouse and keyboard events. Otherwise, it won't display the image and the window might lock up. For example, waitKey(0) will display the window infinitely until any keypress (it is suitable for image display). waitKey(25) will display a frame and wait approximately 25 ms for a key press (suitable for displaying a video frame-by-frame). To remove the window, use cv::destroyWindow.

Take an input stream from the desktop in OpenCV

I am using Python 3.5.1 and OpenCV 3.0.0.
I am working on a python program that can play games, so it needs to 'see' what is going on, on the screen. How can this be achieved?
import numpy as np
import cv2
cap = cv2.VideoCapture(0)
while(True):
#work with frames here
Is there an int a such that cv2.VideoCapture(a) will take the desktop screen as video input? I tried a rather clumsy approach: I captured the screen repeatedly using:
import os
os.system('screencapture test.jpg')
Then I opened test.jpg with cv2.imread. This was very slow. Searching online, I found this question, Screen Capture with OpenCV and Python-2.7, which does the same thing more efficiently. But the fact remains that it captures individual screenshots and processes them one by one, not a true video stream. I also found How to capture the desktop in OpenCV (i.e. turn a bitmap into a Mat)?, which I think is close to what I am trying to do, but it is in C++; if someone can help me convert it to Python, I will highly appreciate it.
The main thing is that the program will be doing something like MarI/O, so speed is a concern. Any help is appreciated; go easy on me, I am (relatively) new to OpenCV.
Thanks.
Just an update on this question in case anyone wants a solution.
Taking a screenshot can be achieved with the pyautogui module:
import pyautogui
import matplotlib.pyplot as plt
image = pyautogui.screenshot()
plt.imshow(image)
If you want to read it as a stream,
while(True):
    image = pyautogui.screenshot()
    # further processing
    if finished:
        break
According to the documentation,
On a 1920 x 1080 screen, the screenshot() function takes roughly 100 milliseconds
So this solution can be used if your application does not demand a high frame rate.
Taking screenshots in a separate thread sounds like a good solution.
You could also use a virtual webcam, but that is a heavyweight solution.
Or you could capture the desktop directly using ffmpeg: https://trac.ffmpeg.org/wiki/Capture/Desktop
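Whichever capture method you pick, it is worth measuring the frame rate it actually delivers before building a game-playing loop on top of it. A small stdlib-only sketch (`measure_fps` is an illustrative name; the `time.sleep(0.01)` lambda stands in for a ~10 ms screenshot call such as `pyautogui.screenshot`):

```python
import time

def measure_fps(grab, seconds=0.2):
    """Call grab() repeatedly for a fixed duration and report frames per second."""
    frames = 0
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        grab()
        frames += 1
    return frames / (time.perf_counter() - start)

# Stand-in for a real capture call that takes about 10 ms per frame.
fps = measure_fps(lambda: time.sleep(0.01))
print(f"{fps:.0f} frames/s")
```

Swapping the lambda for your real capture function gives a quick, honest number to compare pyautogui, mss, or an ffmpeg pipe on your own machine.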

Image display and synchronised face capture in python

I am trying to write a program for image display and face capture. I am somewhat new to python and OpenCV.
Please find the details below:
I am running Python 2.7.5 (win32) on Windows XP.
The OpenCV (cv2) version is 3.0.0.
For the program,
Images from a predefined folder need to be displayed for a fixed time of 500 milliseconds each, in random sequence.
The gap between the images should be covered by a black screen, shown for a random interval between 1000 and 1500 milliseconds.
Face capture of the viewer needs to be done via webcam once per image, midway through its display, i.e. at the 250 millisecond mark. The captured faces should be stored in a newly created folder each time the program is run.
I have written the code below, but I am not getting the sequence right, with synchronised face capture via Haar cascade integration (perhaps required).
I also read somewhere that a 'camera index' could be involved, possibly with the value zero assigned to it. What exactly is its role?
Please assist in this. Thanks in advance.
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('C:\\Sourceimagepath.jpg', 1)
cv2.startWindowThread()
cv2.namedWindow("Demo")
cv2.imshow("Demo", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
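The timing requirements above (500 ms per image, capture at the 250 ms mark, random 1000-1500 ms black gaps, random order) can be separated from the OpenCV display code by first building an explicit schedule. A sketch, with `build_schedule` as a hypothetical helper; each step would then be executed with `cv2.imshow` plus a `cv2.waitKey` of the stated duration:

```python
import random

def build_schedule(image_paths, seed=None):
    """Build the presentation timeline: a black gap before each image,
    then the image for 500 ms with a face capture at the 250 ms offset."""
    rng = random.Random(seed)   # seedable so a run can be reproduced
    order = list(image_paths)
    rng.shuffle(order)          # random presentation sequence
    steps = []
    for path in order:
        steps.append(("black", rng.randint(1000, 1500)))   # gap duration in ms
        steps.append(("image", path, 500, 250))            # duration, capture offset
    return steps

schedule = build_schedule(["a.jpg", "b.jpg", "c.jpg"], seed=1)
print(len(schedule))  # 6: one black gap before each of the 3 images
```

As for the camera index mentioned in the question: it is simply the integer passed to cv2.VideoCapture(0), where 0 selects the first attached webcam (1 the second, and so on); the face-capture step would read a frame from that capture object when the 250 ms offset is reached.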
