A Kivy pixel measurement problem caused by Apple's Retina display - python

Problem
How can Kivy be made to use the same pixel measurement for the Window and the Screen on devices with a Retina display?
Description
The pixel measurements for Window and Screen are inconsistent on Apple's Retina display: the width and height of a Window are always twice the configured values. It seems the Window uses 1 logical pixel as 1 unit, while the Screen and its children use 1 physical pixel as 1 unit.
Apple calls this mode HiDPI. In simpler terms, one logical pixel corresponds to four physical pixels (a 2x scale factor in each dimension), so a window 500 logical pixels wide covers 1000 physical pixels (see Wikipedia).
Example Code
The following code creates a (500, 500) Window and a Screen that is supposed to have the same size. Inside this Screen, a Button in the top-right corner and a 500-by-500 Rectangle are created. The Button prints the Screen size and other information when clicked.
main.py
from kivy.app import App
from kivy.uix.screenmanager import Screen
from kivy.config import Config
from kivy.metrics import Metrics

Config.set("graphics", "width", 500)
Config.set("graphics", "height", 500)

dpi_info = Metrics.dpi
pixel_density_info = Metrics.density
print("Before app starts, dpi is ", dpi_info)
print("Before app starts, pixel density is ", pixel_density_info)

class AppScreen(Screen):

    @staticmethod
    def show_metrics_data():
        print("when app is running, dpi is ", dpi_info)
        print("when app is running, pixel density is ", pixel_density_info)

class MainApp(App):

    def build(self):
        app_screen = AppScreen()
        return app_screen

if __name__ == '__main__':
    app = MainApp()
    app.run()
main.kv
<AppScreen>:
    Button:
        on_press:
            print(root.size)
            root.show_metrics_data()
Results
When the Button, which shows the screen size, is pressed, it prints [1000, 1000] instead of [500, 500].
A 500-by-500 Rectangle has only half the width and height of the Window, which is also configured to be 500-by-500.
Before the app runs and while it is running, the same values are returned for dpi and pixel density: 192.0 and 2.0 respectively in both cases.
Edit:
When I connect my Retina-display device to a monitor that does not have a Retina display and set that monitor as the main screen, everything works as expected when running the code: the Screen, Window and Rectangle are all 500-by-500, and the Rectangle fills the whole Screen instead of just a part of it.
Edit:
Interestingly enough, if I put the above result back onto the Retina display, it suddenly looks like this: (screenshot omitted)
Possible reasons
Currently, the only explanation I can think of is that the Window's size is created in logical pixels and the Screen adopts that size; however, when widgets are created, canvas drawing instructions are executed, or Screen size values are retrieved, the original size information is somehow converted to physical-pixel-based values.
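If that is what is happening, one workaround is to convert explicitly between the two spaces using Metrics.density (the 2.0 factor reported above). A minimal sketch, assuming the density value is stable at runtime:
from kivy.metrics import Metrics

density = Metrics.density  # 2.0 on this Retina display

def to_logical(physical_px):
    # a reported Screen size of 1000 maps back to the configured 500
    return physical_px / density

def to_physical(logical_px):
    return logical_px * density

print(to_logical(1000))   # 500.0
print(to_physical(500))   # 1000.0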

Related

How to get x, y coordinates of touch screen upon touch using Python kivy?

I want to know the x, y coordinates of a touch on a touch-screen Windows/Linux laptop, for which the kivy package would come in handy.
Since Kivy is an application builder, can its functionality be extended to get touch coordinates for the entire screen rather than just the widget application it hosts? I don't want it to show the black widget window and capture touch coordinates from there; it should fetch touch coordinates from the entire existing Windows/Linux screen.
Or is there any other Python package or code that can fetch the x, y coordinates of a touch from the touch screen?
The following code gets touch coordinates, but only inside the widget window. I want touch coordinates from the entire screen, and the widget window shouldn't appear.
from kivy.app import App
from kivy.uix.widget import Widget
from pynput.mouse import Button, Controller
from kivy.config import Config
from kivy.core.window import Window

Window.fullscreen = True

class TouchInput(Widget):

    def on_touch_down(self, touch):
        print(f"X {touch.x} Y {touch.y}")

class SimpleKivy4(App):

    def build(self):
        return TouchInput()

if __name__ == "__main__":
    SimpleKivy4().run()
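Since the code already imports pynput, one option is to drop Kivy for this task and listen for clicks globally; pynput reports absolute screen coordinates and opens no window. A minimal sketch (note this captures mouse events, or touches the OS translates to mouse events, not raw multi-touch):
from pynput import mouse

def on_click(x, y, button, pressed):
    if pressed:
        # x, y are absolute screen coordinates, not widget-relative
        print(f"X {x} Y {y}")

with mouse.Listener(on_click=on_click) as listener:
    listener.join()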

Python canvas bigger size than screen

I'm trying to figure out how to set the size of my tkinter canvas to be bigger than my screen actually is. My screen is 1920x1280; if I set any higher numbers in the following code, the size never goes above that (and I want to, because of a huge drawing there).
Code:
from tkinter import *

class Draw:

    def __init__(self, min, max):
        self.min = min
        self.max = max

    def draw(self):
        master = Tk()
        w = Canvas(master, width=2500, height=2500)
        #...
I also tried master.geometry("2500x2500") but that didn't work either.
You can't make windows larger than the physical screen. However, if your goal is to create a large drawing, you can do that without making the canvas physically large. The canvas widget is just a viewport into a much larger virtual drawing area.
For example, you can create a canvas that is only 400x400 pixels, yet draw an image that is 4000x4000. You define the size of the virtual drawing area by setting the scrollregion attribute to whatever size you want (up to a limit, which I think is around 64000x64000).
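A minimal sketch of that idea, with illustrative sizes: a 400x400 canvas whose scrollregion is 4000x4000, plus scrollbars to pan the viewport:
from tkinter import *

master = Tk()
canvas = Canvas(master, width=400, height=400,
                scrollregion=(0, 0, 4000, 4000))
hbar = Scrollbar(master, orient=HORIZONTAL, command=canvas.xview)
vbar = Scrollbar(master, orient=VERTICAL, command=canvas.yview)
canvas.configure(xscrollcommand=hbar.set, yscrollcommand=vbar.set)
hbar.pack(side=BOTTOM, fill=X)
vbar.pack(side=RIGHT, fill=Y)
canvas.pack(side=LEFT, expand=True, fill=BOTH)
# Draw far outside the visible 400x400 viewport; scroll to reach it.
canvas.create_oval(3500, 3500, 3900, 3900, fill="red")
master.mainloop()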
This happens because of your screen constraint: you cannot run a 2500px * 2500px window on a 1920px * 1280px screen, whereas a 1920px * 1280px window would work.
The requested size (2500px * 2500px) is too big for your monitor: the window tries to use 6,250,000 pixels on your 2,457,600-pixel screen!
So you would need a higher-resolution screen to display it at full size.

Kivy: what is the proper method for animating images with canvas?

I don't fully understand how to use the canvas correctly for images with animations.
See the attached snippet, where I load an animated icon into an Image and do both of the following:
(1) add_widget the Image
(2) create a Rectangle canvas instruction with texture = the Image's texture
The Image animates; the Rectangle's texture does not.
I have read through all of the Kivy manual, including Image and Canvas, and I get the idea that Image is a nice high-level class with all of this image-animation handling, while Canvas is more of a raw, low-level drawing surface.
So here is my question: what is the Kivy-correct architecture for handling animations on a Canvas? I looked at Animation, but that seems aimed at matrix-like animations such as translation, scaling and rotation.
Here is what I am doing now:
I have a game with a large map window and a bunch of game UX in helper windows.
In the game UX helper windows I use the usual Kivy layouts and generally Images, so those icons animate nicely.
However, in the game map I am using the canvas, drawing all of my game objects with this paradigm:
r = Rectangle(texture=some_Image.texture)
map.canvas.add(r)
When the world needs to be redrawn:
1) map.canvas.clear()
2) draw all of the stuff in their new positions and states
(To be faster, I should track just the dirty objects and locations and redraw only those, but to be honest I am getting fantastic fps even with this nuclear-level clear on every draw.)
This is of course a lot faster and lighter-weight than creating and destroying hundreds of widget instances; that is what the map canvas is for, right?
But the problem is that my icons with animations in a zip file are not animating.
Q: Am I thinking of canvas wrong? Should I instead be adding an Image for each of my game objects (and take advantage of all the animated-image support)?
from kivy.uix.relativelayout import RelativeLayout
from kivy.uix.image import Image
from kivy.app import App
from kivy.graphics import Rectangle

class MainApp(App):

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.root = RelativeLayout()
        # use any zip file of an animated image
        self.animated_icon = Image(source='factory_icon.zip')
        # If I add an Image, the icon animates
        self.root.add_widget(self.animated_icon)
        # If I add the Image's texture on to a Rectangle instruction, no animation
        r = Rectangle(texture=self.animated_icon.texture, size=(100, 100), pos=(100, 100))
        self.root.canvas.add(r)

    def build(self):
        return self.root

if __name__ == '__main__':
    MainApp().run()
The Image.texture property changes over time: the Image internally schedules methods to update it as the animation advances. This change doesn't propagate to your Rectangle because you created it with the texture value captured at one particular point in time, between updates. Consider this example (I use a .gif file for the animation, but the principle should be the same):
from kivy.uix.relativelayout import RelativeLayout
from kivy.uix.image import Image
from kivy.app import App
from kivy.graphics import Rectangle

class MainApp(App):

    def __init__(self, **kwargs):
        super(MainApp, self).__init__(**kwargs)
        self.root = RelativeLayout()
        animated_icon = Image(source='test.gif')
        animated_icon.bind(texture=self.update_texture)
        self.r = Rectangle(texture=animated_icon.texture, size=(500, 255), pos=(100, 100))
        self.root.canvas.add(self.r)

    def update_texture(self, instance, value):
        self.r.texture = value

    def build(self):
        return self.root

if __name__ == '__main__':
    MainApp().run()
Here I bind my own update_texture method to the Image's texture property, so every time it changes I can update the Rectangle accordingly.
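The same pattern should scale to the game-map case in the question: keep one hidden Image per animated icon type and fan its texture out to every Rectangle drawn with that icon. A sketch under that assumption, reusing the imports above (note the example above never calls add_widget on its Image either, so the animation appears to run without the Image being in the widget tree; map_canvas and factory_icon.zip are placeholders from the question):
class FactoryIcons:

    def __init__(self, map_canvas):
        self.map_canvas = map_canvas
        self.rects = []
        # never added to the widget tree; it only supplies textures
        self.icon = Image(source='factory_icon.zip')
        self.icon.bind(texture=self.update_textures)

    def add_at(self, pos):
        r = Rectangle(texture=self.icon.texture, size=(100, 100), pos=pos)
        self.map_canvas.add(r)
        self.rects.append(r)

    def update_textures(self, instance, value):
        # one callback updates every rectangle sharing this animation
        for r in self.rects:
            r.texture = value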

Animate texture in Kivy using blit_buffer and pos parameter

I have some image data that I'm blitting to a texture and displaying in Kivy. The image data has the same width as the widget but a greater height. After the texture is created, I want to animate its y position for a scrolling effect.
Previously, I had been blitting the entire buffer to the texture, and animating the position of the widget itself. However, the buffer data is occasionally larger than the maximum supported texture size of my GPU, and nothing displays on the screen.
I figured a better approach would be to only blit the section of the source buffer I need, and leave the texture the same size as the widget.
After looking at the Kivy documentation, I discovered there's a pos parameter. I figured I could use an animated property for the document's y position in the call to blit_buffer, but when I tried it, nothing displayed again.
I switched the pos parameter to (0, 100) to check whether the parameter did what I expected, and it still didn't display. Switching it to (0, 0) works as expected.
Am I using blit_buffer() correctly? What's the best way to blit only a portion of an image data buffer?
EDIT: I recreated this issue with a standalone script:
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.core.image import Image
from kivy.graphics.texture import Texture
from kivy.graphics import Rectangle
import array

class MyWidget(Widget):

    def __init__(self, **kwargs):
        super(MyWidget, self).__init__(**kwargs)
        texture = Texture.create(size=(512, 512))
        buf = [int(x * 255 / (512*512*3)) for x in range(512*512*3)]
        buf = array.array('B', buf).tobytes()
        texture.blit_buffer(
            buf,
            colorfmt='rgb',
            bufferfmt='ubyte',
            pos=(0, 100)
        )
        with self.canvas:
            Rectangle(texture=texture, pos=self.pos, size=(512, 512))

class MyApp(App):

    def build(self):
        return MyWidget()

if __name__ == '__main__':
    MyApp().run()
It seems the issue was that the pos argument of Texture.blit_buffer() specifies the destination coordinates of the image data, rather than the source coordinates.
I ended up solving my problem by using PIL to crop the image data, then creating a tiling renderer that contains multiple textures, one per tile, to which the cropped regions are blitted.
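A sketch of the cropping half of that approach with Pillow (the blank image here stands in for the real buffer; src_y would be the animated scroll offset):
from kivy.graphics.texture import Texture
from PIL import Image as PILImage

full_img = PILImage.new('RGB', (512, 2048))  # stand-in for the tall source image
src_y = 100                                  # scroll offset into the source

# crop a widget-sized window out of the source: (left, upper, right, lower)
crop = full_img.crop((0, src_y, 512, src_y + 512))

texture = Texture.create(size=(512, 512))
texture.blit_buffer(crop.tobytes(), colorfmt='rgb', bufferfmt='ubyte')
texture.flip_vertical()  # PIL rows are top-down, GL textures bottom-up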

PyQt4 - QWidget save as image

I'm trying to use Python to programmatically save a QWidget in PyQt4 as an image (any format would be fine: PNG, PDF, JPEG, GIF, etc.)
I thought this would be very straightforward, but I actually wasn't able to find anything on the web about it. Can someone point me in the right direction?
To be clear, I'm trying to do this:
gui = <SOME QMainWindow>
gui.show()  # this displays the gui. it's useful, but what I need is to save the image
gui.save("image.png")  # How do I do this?
You can do this using the QPixmap.grabWindow() method.
From the docs:
Grabs the contents of the window window and makes a pixmap out of it.
Returns the pixmap.
The arguments (x, y) specify the offset in the window, whereas (w, h)
specify the width and height of the area to be copied.
If w is negative, the function copies everything to the right border
of the window. If h is negative, the function copies everything to the
bottom of the window.
Note that grabWindow() grabs pixels from the screen, not from the
window. If there is another window partially or entirely over the one
you grab, you get pixels from the overlying window, too.
Note also that the mouse cursor is generally not grabbed.
The reason we use a window identifier and not a QWidget is to enable
grabbing of windows that are not part of the application, window
system frames, and so on.
Warning: Grabbing an area outside the screen is not safe in general.
This depends on the underlying window system.
Warning: X11 only: If window is not the same depth as the root window
and another window partially or entirely obscures the one you grab,
you will not get pixels from the overlying window. The contents of the
obscured areas in the pixmap are undefined and uninitialized.
Sample code:
import sys
from PyQt4.QtGui import *

filename = 'Screenshot.jpg'

app = QApplication(sys.argv)
widget = QWidget()
widget.setLayout(QVBoxLayout())
label = QLabel()
widget.layout().addWidget(label)

def take_screenshot():
    p = QPixmap.grabWindow(widget.winId())
    p.save(filename, 'jpg')

widget.layout().addWidget(QPushButton('take screenshot', clicked=take_screenshot))
widget.show()
app.exec_()
This will produce a small window containing the button. When you click the button, it will create a file named Screenshot.jpg in the current directory; notice that the window frame is missing from the grabbed image.
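If grabbing screen pixels is a problem (overlapping windows can bleed into the capture, as the docs warn), PyQt4 also provides QPixmap.grabWidget(), which renders the widget itself rather than reading the screen. A minimal sketch:
import sys
from PyQt4.QtGui import QApplication, QLabel, QPixmap

app = QApplication(sys.argv)
label = QLabel('hello')
label.show()
# renders the widget directly, so overlapping windows cannot bleed in
QPixmap.grabWidget(label).save('widget.png', 'png')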
