This question is related to this one:
how to display a full screen images with python2.7 and opencv2.4
I want to display a black image full screen; I have even created a black image with the same resolution as my screen.
But I get a little white stripe at the top and on the left of the screen.
I don't know if it is a problem with my screen not being aligned or with my code. I have tried it on 2 displays and the white stripe shows up on both.
So if you run the code below, do you get a fully black image?
import numpy as np
import cv2
if __name__ == "__main__":
    img = cv2.imread('nero.jpg')
    cv2.namedWindow("test", cv2.WND_PROP_FULLSCREEN)
    cv2.setWindowProperty("test", cv2.WND_PROP_FULLSCREEN, cv2.cv.CV_WINDOW_FULLSCREEN)
    cv2.imshow("test", img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
EDIT:
This method is not working for me. Do you know another way, or another library, to display a full screen image?
EDIT 2: Still unsolved. I am starting to think that it is an OpenCV bug.
I have the same problem: there is a white stripe of 1 pixel on the left and on the top side of the window. I tested it with multiple monitors, on OpenCV version 3.4.2.
But there is a workaround which works perfectly fine in my case (see also https://gist.github.com/goraj/a2916da98806e30423d27671cfee21b6). Here's the code:
import cv2
import win32api
import win32gui
cv2.namedWindow("fullScreen", cv2.WINDOW_FREERATIO)
cv2.setWindowProperty("fullScreen",cv2.WND_PROP_FULLSCREEN,cv2.WINDOW_FULLSCREEN)
hwndMain = win32gui.FindWindow(None, "fullScreen")
rgb = win32gui.CreateSolidBrush(win32api.RGB(0, 0, 0))
GCLP_HBRBACKGROUND = -10
win32api.SetClassLong(hwndMain, GCLP_HBRBACKGROUND, rgb)
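After that, showing the image in that same window should give a completely black border. A minimal usage sketch continuing from the snippet above (the 1920x1080 frame size is just an assumption, substitute your own resolution):
import numpy as np

img = np.zeros((1080, 1920, 3), dtype=np.uint8)  # assumed screen resolution

cv2.imshow("fullScreen", img)
cv2.waitKey(0)
cv2.destroyAllWindows()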
Yes I do.
import numpy as np
import cv2

img = np.zeros((900, 1600))  # my aspect ratio is 16:9
cv2.namedWindow("test", cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty("test", cv2.WND_PROP_FULLSCREEN, cv2.cv.CV_WINDOW_FULLSCREEN)
cv2.imshow("test", img)
cv2.waitKey(0)
gives me a fully black screen. Are you sure you are using the right aspect ratio?
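If you are not sure what resolution to use, you can query the screen and build the black image to match. A quick sketch, assuming Python 3 with a recent OpenCV (on Python 2.7 / OpenCV 2.4 the module is Tkinter and the fullscreen constant is cv2.cv.CV_WINDOW_FULLSCREEN):
import tkinter as tk
import numpy as np
import cv2

# Ask the windowing system for the primary screen resolution
root = tk.Tk()
root.withdraw()
w, h = root.winfo_screenwidth(), root.winfo_screenheight()
root.destroy()

img = np.zeros((h, w), dtype=np.uint8)  # black image exactly the size of the screen

cv2.namedWindow("test", cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty("test", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
cv2.imshow("test", img)
cv2.waitKey(0)
cv2.destroyAllWindows()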
Related
I am trying to crop an image into a circular form (which works) and then paste it onto a white background.
from PIL import Image,ImageFont,ImageDraw, ImageOps, ImageFilter
from io import BytesIO
import numpy as np
pfp = Image.open(avatar)

# cropping to circle
img = pfp.convert("RGB")
npImage = np.array(img)
h, w = img.size

alpha = Image.new('L', img.size, 0)
draw = ImageDraw.Draw(alpha)
draw.pieslice([0, 0, h, w], 0, 360, fill=255)

npAlpha = np.array(alpha)
npImage = np.dstack((npImage, npAlpha))
Image.fromarray(npImage).save('result.png')

background = Image.open('white2.png')
background.paste(Image.open('result.png'), (200, 200, h, w))
background.save('combined.png')
Here's what the cropped image looks like (it looks like it has a white background, but that part is actually transparent):
Cropped Image
But then when I paste it to the white background it changes to a square:
Pasted Image
Here is the original image I am working with:
Image
What you're doing is setting the alpha of any pixel outside that circle to 0, so when you render it, it's gone, but that pixel data is still there. That's not a problem, but it's important to know.
Problem
Your "white2.png" image does not have an alpha channel. Even if it's a PNG file, you have to add an alpha channel using your image editing tool. You can print("BGN:", background.getbands()), to see the channels it has. You'll see it says 'R','G','B', but no 'A'.
Solution 1
Replace your paste line with:
background.paste(pfp, (200, 200), alpha)
Here, we use the loaded avatar as-is, and the third argument is a mask which PIL uses to mask the image before pasting.
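Putting it together, a minimal sketch of the corrected flow (file names and the (200, 200) offset are the ones from your question, and avatar is your original input):
from PIL import Image, ImageDraw

pfp = Image.open(avatar).convert("RGB")  # avatar is the question's input image
w, h = pfp.size

# Build a circular greyscale mask the same size as the avatar
alpha = Image.new('L', pfp.size, 0)
draw = ImageDraw.Draw(alpha)
draw.pieslice([0, 0, w, h], 0, 360, fill=255)

# Paste with the mask, so everything outside the circle stays untouched
background = Image.open('white2.png')
background.paste(pfp, (200, 200), alpha)
background.save('combined.png')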
Solution 2
Give your white background image an alpha channel.
MS Paint doesn't do this. You have to use something else.
For GIMP, you simply right-click on the layer and click Add Alpha-channel.
Oh, and something worth noting.
Documentation for Paste.
See alpha_composite() if you want to combine images with respect to their alpha channels.
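A rough sketch of that route (both inputs must be RGBA and the same size, so the circle image is first placed on a transparent canvas; file names are the ones from the question):
from PIL import Image

background = Image.open('white2.png').convert('RGBA')
foreground = Image.open('result.png').convert('RGBA')

# alpha_composite needs two images of identical size, so put the
# foreground on a transparent canvas matching the background first
canvas = Image.new('RGBA', background.size, (0, 0, 0, 0))
canvas.paste(foreground, (200, 200))

combined = Image.alpha_composite(background, canvas)
combined.save('combined.png')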
Probably an unusual question, but I am currently looking for a way to display image files with PIL more slowly.
Ideally so that you can see how the image builds up, pixel by pixel, from left to right.
Does anyone have an idea how to implement something like this?
It is a purely optical thing, so it is not essential.
Here an example:
from PIL import Image
im = Image.open("sample-image.png")
im.show()
Is there a way to "slow down" im.show()?
AFAIK, you cannot do this directly with PIL's Image.show(). It actually saves your image as a file to /var/tmp/XXX and then passes that file to your OS's standard image viewer to display on the screen, and there is no further interaction with the viewer process after that. So, if you draw in another pixel, the viewer will not be aware, and if you call Image.show() again, it will save a new copy of your image and invoke another viewer, which gives you a second window rather than updating the first!
There are several possibilities to get around it:
use OpenCV's cv2.imshow() which does allow updates
use tkinter to display the changing image
create an animated GIF and start a new process to display that
I chose the first, using OpenCV, as the path of least resistance:
#!/usr/bin/env python3
import cv2
import numpy as np
from PIL import Image

# Open image
im = Image.open('paddington.png')

# Make BGR Numpy version for OpenCV
BGR = np.array(im)[:, :, ::-1]
h, w = BGR.shape[:2]

# Make empty image to fill in slowly and display
d = np.zeros_like(BGR)

# Use a pixel counter "n" to avoid drawing and waiting for every single pixel
n = 0
for y in range(h):
    for x in range(w):
        d[y, x] = BGR[y, x]
        if n % 400 == 0:
            cv2.imshow("SlowLoader", d)
            cv2.waitKey(1)
        n += 1

# Wait for one final keypress to exit
cv2.waitKey(0)
Increase the 400 near the end to update the screen after a greater number of pixels (i.e. run faster), or decrease it to update after fewer pixels, meaning you will see them appear more slowly.
As I cannot share a movie on StackOverflow, I made an animated GIF to show how that looks:
I decided to try and do it with tkinter as well. I am no expert on tkinter but the following works just the same as the code above. If anyone knows tkinter better, please feel free to point out my inadequacies - I am happy to learn! Thank you.
#!/usr/bin/env python3
import numpy as np
from tkinter import *
from PIL import Image, ImageTk

# Create Tkinter Window and Label
root = Tk()
video = Label(root)
video.pack()

# Open image
im = Image.open('paddington.png')

# Make Numpy version for simpler pixel access
RGB = np.array(im)
h, w = RGB.shape[:2]

# Make empty image to fill in slowly and display
d = np.zeros_like(RGB)

# Use a pixel counter "n" to avoid drawing and waiting for every single pixel
n = 0
for y in range(h):
    for x in range(w):
        d[y, x] = RGB[y, x]
        if n % 400 == 0:
            # Convert the partial image for Tkinter
            img = Image.fromarray(d)
            imgtk = ImageTk.PhotoImage(image=img)
            # Set the image on the label
            video.config(image=imgtk)
            # Update the window
            root.update()
        n += 1
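For completeness, the third option (an animated GIF) could look roughly like this sketch: it snapshots the partially drawn image every so often and saves the frames with Pillow. The 5000-pixel step and the 40 ms frame duration are arbitrary choices here:
#!/usr/bin/env python3
import numpy as np
from PIL import Image

im = Image.open('paddington.png').convert('RGB')
RGB = np.array(im)
h, w = RGB.shape[:2]

d = np.zeros_like(RGB)
frames = []
n = 0
for y in range(h):
    for x in range(w):
        d[y, x] = RGB[y, x]
        if n % 5000 == 0:
            frames.append(Image.fromarray(d.copy()))  # snapshot the partial image
        n += 1
frames.append(Image.fromarray(d))  # final, fully drawn frame

# Save as an animated GIF that loops forever
frames[0].save('slow.gif', save_all=True, append_images=frames[1:], duration=40, loop=0)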
So I'm trying to make a program that moves my mouse cursor and clicks every black pixel found on the screen.
I got to the point where I can capture the screen, see it in a window and even let the mouse click on the black pixels. Without the clicking part, I can see the window change in real time, but if I add the clicking part it stops refreshing.
import numpy as np
import pyautogui as py
from PIL import ImageGrab
import cv2 as cv
while True:
    # Record location of the program
    screen_size = [1293, 171, 1647, 769]
    screen = np.array(ImageGrab.grab(bbox=screen_size))
    cv.imshow("window", cv.cvtColor(screen, cv.COLOR_BGRA2GRAY))

    # Quit
    if cv.waitKey(25) & 0xFF == ord("q"):
        cv.destroyAllWindows()
        break

    for y in range(len(screen)):
        for x in range(len(screen[y])):
            if np.any(screen[y][x]) == 0:
                py.click(x + 1293, y + 171)
I would like the screen to refresh, so that if it looks at, say, a video of black dots, it can see them and click on all of them. Right now it is just stuck at the starting image and keeps clicking the starting dots even when they aren't visible anymore.
This is a video of the problem:
https://www.youtube.com/watch?v=QIrEnCgxe6E&feature=youtu.be
You can see here how it follows the black lines perfectly, but the window OpenCV creates doesn't change, and when I draw over some of the black parts, it still clicks over them.
This is the window I see and how it converts the colors
The screen doesn't refresh/update because the program is still working through the double for-loop that clicks all the black pixels. If you want to use this for real-time video, that is obviously far too slow. You could look into findContours to click once on every black area instead; this will be much faster and gives the behavior that I think you want.
Below is some sample code that shows how findContours works. Here is additional info / examples.
Result:
Code:
import cv2

# load image
img = cv2.imread("image.png")

# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# create a mask that holds only black values (below 10)
ret, thresh1 = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY_INV)

# find contours in mask (findContours returns 3 values in OpenCV 3 and 2 in
# OpenCV 4, so take the last two either way)
contours, hierarchy = cv2.findContours(thresh1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:]

# draw outline of each contour on the image
for cnt in contours:
    cv2.drawContours(img, [cnt], 0, (255, 0, 0), 2)

# show image
cv2.imshow("img", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
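To actually click once per black area, you could take the centre of each contour with cv2.moments and hand it to pyautogui. A rough sketch, reusing the contours from above (the 1293/171 offsets are the capture origin from your code):
import cv2
import pyautogui as py

for cnt in contours:
    M = cv2.moments(cnt)
    if M["m00"] == 0:
        continue  # skip degenerate, zero-area contours
    cx = int(M["m10"] / M["m00"])
    cy = int(M["m01"] / M["m00"])
    # translate from capture coordinates back to screen coordinates
    py.click(cx + 1293, cy + 171)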
I've found documentation regarding C++, but not much for Python.
The basic code to display in python is:
import numpy as np
import cv2
# Load a color image in grayscale
img = cv2.imread('messi.jpg',0)
cv2.imshow('image',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
That shows the image below. But how do I turn this
into something that looks like this?
I also want to keep the size. I've read some people saying to go "full screen". The only way I could think of that might work is to go full screen but then resize it? Not sure if that's a solution either (I'm also trying to find out how to do that... I'm brand new to OpenCV).
import cv2

cap2 = cv2.VideoCapture(0)
cap2.set(3, 320)   # frame width
cap2.set(4, 200)   # frame height

ret2, image2 = cap2.read()
cv2.imshow('frame2', image2)
cv2.namedWindow('frame2', cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty('frame2', cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
I have found a trick: just put
cv2.namedWindow('frame2',cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty('frame2', cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
below
cv2.imshow('frame2',image2)
What we are actually doing here is playing the full video at a smaller size; that's why there is no title bar and no borders.
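In a continuous capture loop that would look roughly like this sketch (pressing q to quit, and applying the window properties only once after the first imshow, are my own assumptions):
import cv2

cap2 = cv2.VideoCapture(0)
cap2.set(3, 320)   # frame width
cap2.set(4, 200)   # frame height

first_frame = True
while True:
    ret2, image2 = cap2.read()
    if not ret2:
        break
    cv2.imshow('frame2', image2)
    if first_frame:
        # set the properties only after imshow has created the window
        cv2.namedWindow('frame2', cv2.WND_PROP_FULLSCREEN)
        cv2.setWindowProperty('frame2', cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
        first_frame = False
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap2.release()
cv2.destroyAllWindows()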
Did a little more looking around:
Using these flags is how to do it with the QT backend: CV_GUI_NORMAL or CV_GUI_EXPANDED. CV_GUI_NORMAL is the old way to draw the window without the statusbar and toolbar, whereas CV_GUI_EXPANDED is a new enhanced GUI.
Unfortunately, cv2.namedWindow('image', flags=cv2.CV_GUI_EXPANDED) does not work, even though I'm pretty sure I have the QT backend (actually I'm positive I do).
After looking up help(cv2), I found similar flags WINDOW_GUI_EXPANDED and WINDOW_GUI_NORMAL. So use those.
import cv2

img = cv2.imread('messi.jpg', 0)

# Removes toolbar and status bar
cv2.namedWindow('image', flags=cv2.WINDOW_GUI_NORMAL)
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
But still having trouble trying to remove the title bar.
This is my issue:
import Image
im = Image.open("1.png")
im.show()
print im.mode
im.convert("RGBA").save("2.png")
Well, with my image you can see the difference.
My question is: how do I convert it properly?
Image:
Result:
NOTE: The original image has a semi-transparent glow, the result has a solid green "glow"
This issue was reported here:
https://bitbucket.org/effbot/pil-2009-raclette/issue/8/corrupting-images-in-palette-mode
In March 2012, a comment says it's now fixed in development version of PIL. The most recent released version is 1.1.7, so the fix won't be available until 1.2 comes out. PIL updates very slowly, so don't expect this to come out soon.
Unfortunately your PNG image is a type that PIL doesn't handle very well - a paletted image with an alpha channel. When you open the image, the alpha is thrown away and there's no way to get it back.
This is different from the usual palette transparency where one index of the palette is used to denote fully transparent pixels.
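If you want to check which kind of transparency your PNG actually uses, a quick sketch like this can help (in Pillow, a plain int in info['transparency'] is the single transparent index, while bytes means per-index alpha, the case old PIL mishandled; old PIL 1.1.7 may report it slightly differently):
from PIL import Image

im = Image.open("1.png")
print(im.mode)                            # 'P' for a paletted image
print(type(im.info.get('transparency')))  # int -> single transparent index, bytes -> per-index alpha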
You could use scipy.misc.imread:
import scipy.misc
from PIL import Image

img = scipy.misc.imread(filename, mode='RGBA')
img = Image.fromarray(img)
Your problem is that you do not provide any info about what PIL should use as the source of the alpha channel.
PIL will not add transparency to your image on its own.
What part of your image do you want to be transparent?
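If you already have (or can build) a greyscale mask describing the transparency you want, you can attach it yourself. A minimal sketch, where alpha_mask.png is a made-up file name (0 = fully transparent, 255 = fully opaque):
from PIL import Image

im = Image.open("1.png").convert("RGB")
mask = Image.open("alpha_mask.png").convert("L")  # hypothetical mask image

im.putalpha(mask)      # attach the mask as the alpha channel
im.save("2.png")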