I have some image data that I'm blitting to a texture and displaying in Kivy. The image data has the same width as the widget but a greater height. After the texture is created, I want to animate its y position for a scrolling effect.
Previously, I had been blitting the entire buffer to the texture and animating the position of the widget itself. However, the buffer data occasionally exceeds my GPU's maximum supported texture size, and when that happens nothing displays on the screen.
I figured a better approach would be to only blit the section of the source buffer I need, and leave the texture the same size as the widget.
After looking at the documentation for Kivy, I discovered there's a pos parameter. I figured I could use an animated property for the y position in the call to blit_buffer, but when I tried it, nothing displayed again.
I switched the pos parameter to (0, 100) to see if my usage of the parameter did what I expected, and it still didn't display. Switching it to (0, 0) works as expected.
Am I using blit_buffer() correctly? What's the best way to blit only a portion of an image data buffer?
EDIT: I recreated this issue with a standalone script:
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.core.image import Image
from kivy.graphics.texture import Texture
from kivy.graphics import Rectangle
import array


class MyWidget(Widget):
    def __init__(self, **kwargs):
        super(MyWidget, self).__init__(**kwargs)
        texture = Texture.create(size=(512, 512))
        buf = [int(x * 255 / (512 * 512 * 3)) for x in range(512 * 512 * 3)]
        buf = array.array('B', buf).tobytes()  # tostring() was removed in Python 3.9
        texture.blit_buffer(
            buf,
            colorfmt='rgb',
            bufferfmt='ubyte',
            pos=(0, 100)
        )
        with self.canvas:
            Rectangle(texture=texture, pos=self.pos, size=(512, 512))


class MyApp(App):
    def build(self):
        return MyWidget()


if __name__ == '__main__':
    MyApp().run()
It seems the issue was that the pos argument of Texture.blit_buffer() specifies the destination coordinates of the image data, rather than the source coordinates.
I ended up solving my problem by using PIL to crop the image data, then creating a tiling renderer that blits each tile into its own texture.
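For reference, here is a minimal sketch of the crop-then-blit idea (assuming a PIL image source_img that is taller than the widget; the names are illustrative, not the original code):

from kivy.graphics.texture import Texture

def make_scrolled_texture(source_img, widget_size, y_offset):
    # source_img is e.g. PIL.Image.open(...); widget_size is (width, height)
    w, h = widget_size
    # Crop only the strip that should be visible (source coordinates)
    strip = source_img.convert('RGB').crop((0, y_offset, w, y_offset + h))
    # The texture stays the size of the widget, never the full buffer
    texture = Texture.create(size=(w, h), colorfmt='rgb')
    texture.blit_buffer(strip.tobytes(), colorfmt='rgb', bufferfmt='ubyte')
    texture.flip_vertical()  # PIL data is top-down, Kivy textures are bottom-up
    return texture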
I'm making a 3D FPS in Ursina, and I'd like to display my arm skin with the weapon as an image, not an actual 3D model. Does anyone know how to do this? I tried Animation from the documentation, but this loads the image as an object in my scene.
What I could do is define a quad with the player as parent, plus positional arguments, so that it follows me and I see it in the right place, but even this wouldn't work because the texture argument doesn't accept gifs.
So, does anyone know how to do that?
You can load an animated gif with Animation(), which creates an entity. Since it's part of the interface, you'll want to attach it to the UI:
from ursina import *
app = Ursina()
gif = 'animation.gif'
a = Animation(gif, parent=camera.ui)
a.scale /= 5 # adjust the size to your needs
a.position = (0.5, -0.5) # lower right of the screen
app.run()
You will need the imageio Python package installed to load gifs.
OLD ANSWER
The Entity you use for the gun has to be anchored to the interface using its parent parameter. Here's an example for a Minecraft-style hand (just a block really):
class Hand(Entity):
    def __init__(self):
        super().__init__(
            parent=camera.ui,
            model='cube',
            position=Vec2(0.5, -0.3),
            scale=(0.2, 0.2, 0.5),
            rotation=(150, -30, 0)
        )
The important part is parent=camera.ui
Sadly, I cannot seem to find a way to play gifs. I know @Jan Wilamowski updated his answer, but in my project that does NOT work. You CAN show static images on this screen, though, by making a quad entity with an image texture, as shown:
class yourimage(Entity):
    def __init__(self):
        super().__init__(
            parent=camera.ui,
            model='quad',
            # size, position, and rotate your image here
            texture='yourimage')

yourimage()
Hello, I am new to programming and I am making a tkinter project to learn.
In my project, because I didn't like the tkinter widgets, I've decided to only use the Canvas widget and draw my whole program there.
To do that I am making my own widgets that will be drawn in the canvas.
What I want is a Widget with the following structure:
class Rectangle:
    def __init__(self, canvas, coords, img):
        self.canvas = canvas
        self.img = img

    def _redraw(self, coords):
        self.img.redraw(coords)
I want the Widget to store an image_object of some sort. The function _redraw will give the coordinates of the new size like so (x0, y0, x1, y1), and I want the img object to redraw itself to fit the new coordinates.
I was wondering about the best way to accomplish this (in terms of space and speed); ideally I would like to work with SVG files. Any suggestions?
import tkinter as tk

class Image:
    def __init__(self, canvas, filePath):
        self.canvas = canvas
        self.img = tk.PhotoImage(file=filePath)

    def _redraw(self, x, y):
        return self.canvas.create_image(x, y, image=self.img)

    def setScale(self, scale):
        if scale >= 1:
            self.img = self.img.zoom(int(scale))
        else:
            self.img = self.img.subsample(int(1 / scale))
I've used it with PNGs; it's untested with SVGs.
With PNGs and similar formats it's best to resize the images externally before loading, but the scale function works fine if that's inconvenient.
There are more options you can pass to create_image(), like anchor; see the documentation.
Also, there are several ways to move the image. One is to undraw and redraw it elsewhere (you need to keep the id returned from create_image() so that you can undraw it). The other is to use the coords() function; see the documentation, and also see Change coords of line in python tkinter canvas.
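For example, here is a minimal sketch of the coords() approach (the file name and positions are made up for illustration):

import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=300, height=200)
canvas.pack()

photo = tk.PhotoImage(file='yourimage.png')  # hypothetical file
image_id = canvas.create_image(50, 50, image=photo, anchor='nw')

# Move the existing canvas item instead of deleting and recreating it
canvas.coords(image_id, 120, 80)

root.mainloop()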
Problem
How to allow Kivy to use the same pixel measurement to create Window and Screen on devices with Retina display?
Description
The pixel measurements for Window and Screen are not consistent on Apple's Retina display. The width and height of a Window are always twice the input values. It seems Window uses 1 logical pixel as 1 unit, while Screen and its children use 1 physical pixel as 1 unit.
Apple calls this HiDPI mode. In simpler terms, one logical pixel maps to four physical pixels by default (see Wikipedia).
Example Code
The following code creates a (500, 500)-size Window and a Screen with supposedly the same size. Inside this Screen, a Button on the top right corner and a 500-by-500 Rectangle are created. The Button shows the Screen size and other information when clicked.
main.py
from kivy.app import App
from kivy.uix.screenmanager import Screen
from kivy.config import Config
from kivy.metrics import Metrics

Config.set("graphics", "width", 500)
Config.set("graphics", "height", 500)

dpi_info = Metrics.dpi
pixel_density_info = Metrics.density
print("Before app starts, dpi is ", dpi_info)
print("Before app starts, pixel density is ", pixel_density_info)


class AppScreen(Screen):
    @staticmethod
    def show_metrics_data():
        print("when app is running, dpi is ", dpi_info)
        print("when app is running, pixel density is ", pixel_density_info)


class MainApp(App):
    def build(self):
        app_screen = AppScreen()
        return app_screen


if __name__ == '__main__':
    app = MainApp()
    app.run()
main.kv
<AppScreen>:
    Button:
        on_press:
            print(root.size)
            root.show_metrics_data()
Results
When the Button, which shows the screen size, is pressed, it prints [1000, 1000] instead of [500, 500].
A 500-by-500 Rectangle has only half the width and height of the Window, which is also configured to be 500-by-500.
Before the app runs and while it is running, the same values for dpi and pixel density are returned: dpi is 192.0 and pixel density is 2.0 in both cases.
edit:
When I connect my Retina-display device to a monitor that does not have a Retina display and set that monitor as the main screen, everything is as expected when running the code. The sizes of the Screen, Window and Rectangle are all 500-by-500, and the Rectangle fills the whole Screen instead of just a part of it.
edit:
Interestingly enough, if I move the above result back onto the Retina display, it suddenly changes again (screenshot omitted).
Possible reasons
Currently, the only explanation I can think of is that the Window's size is created based on logical pixels and the Screen adopts that size. However, when creating widgets, executing canvas drawing instructions, or retrieving Screen size values, the original size information is somehow converted into physical-pixel-based information.
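As a rough illustration of that logical/physical split (assuming the factor is exactly Metrics.density, which the printed value of 2.0 suggests), the two units could be converted explicitly:

from kivy.metrics import Metrics

def logical_to_physical(px):
    # e.g. 500 logical pixels -> 1000 physical pixels when density == 2.0
    return px * Metrics.density

def physical_to_logical(px):
    # e.g. 1000 physical pixels -> 500 logical pixels when density == 2.0
    return px / Metrics.density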
I don't fully understand how to use canvas correctly for images with animations.
See the attached snippet, where I load an animated icon into an Image and do both:
(1) add_widget the Image
(2) create a Rectangle canvas instruction with a texture = Image's texture
The Image animates
The Rectangle texture does not
I have read through the Kivy manual, including Image and Canvas, and I get the idea that Image is a nice high-level class with all of this image animation handling, while Canvas is more of a raw low-level drawing canvas.
So here is my question: what is the Kivy-correct architecture for handling animations on a Canvas? I looked at Animation, but that seems intended for more matrix-like animations such as translation, scaling and rotation.
Here is what I am doing now:
I have a game with a large map window and a bunch of game UX in helper windows.
For the game UX helper windows I use the usual Kivy layouts and generally Images, so my icons animate nicely.
However in the game map, I am using canvas:
Drawing all of my game objects using this paradigm:
r=Rectangle(texture=some_Image.texture)
map.canvas.add(r)
When the world needs to be re-drawn:
1) map.canvas.clear()
2) draw all of the stuff in their new positions and states
(to be faster, I should just track the dirty objects and locations and just draw those, but to be honest I am getting fantastic fps even with this nuclear-level clear on each draw)
This is of course a lot faster and lighter weight than creating and destroying hundreds of widget instances - which is what the map canvas is for, right?
But the problem is that my icons with animations in a zip file are not animating.
Q: Am I thinking of canvas wrong? Should I instead add an Image for each of my game objects (and take advantage of all the animated image support)?
from kivy.uix.relativelayout import RelativeLayout
from kivy.uix.image import Image
from kivy.app import App
from kivy.graphics import Rectangle


class MainApp(App):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.root = RelativeLayout()
        # use any zip file of an animated image
        self.animated_icon = Image(source='factory_icon.zip')
        # If I add an Image, the icon animates
        self.root.add_widget(self.animated_icon)
        # If I add the Image's texture on to a Rectangle instruction, no animation
        r = Rectangle(texture=self.animated_icon.texture, size=(100, 100), pos=(100, 100))
        self.root.canvas.add(r)

    def build(self):
        return self.root


if __name__ == '__main__':
    MainApp().run()
The Image.texture property changes over time; the Image internally schedules methods to update it as the animation runs. This change doesn't propagate to your rectangle because you created it with the texture value captured at one particular point in time, between updates. Consider this example (I use a .gif file for the animation, but the principle should be the same):
from kivy.uix.relativelayout import RelativeLayout
from kivy.uix.image import Image
from kivy.app import App
from kivy.graphics import Rectangle


class MainApp(App):
    def __init__(self, **kwargs):
        super(MainApp, self).__init__(**kwargs)
        self.root = RelativeLayout()
        animated_icon = Image(source='test.gif')
        animated_icon.bind(texture=self.update_texture)
        self.r = Rectangle(texture=animated_icon.texture, size=(500, 255), pos=(100, 100))
        self.root.canvas.add(self.r)

    def update_texture(self, instance, value):
        self.r.texture = value

    def build(self):
        return self.root


if __name__ == '__main__':
    MainApp().run()
Here I bind my own update_texture method to the image's texture property, so every time it changes I can update the rectangle accordingly.
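Since the original question involves many map objects sharing one animated icon, here is a hedged sketch of the same idea extended to several rectangles (class and attribute names are illustrative, not from the answer above):

from kivy.uix.relativelayout import RelativeLayout
from kivy.uix.image import Image
from kivy.graphics import Rectangle


class AnimatedLayer(RelativeLayout):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # One shared animated source for all map icons of this type
        self.icon = Image(source='factory_icon.zip')
        self.rects = []
        self.icon.bind(texture=self.update_textures)

    def add_game_object(self, pos):
        # Each game object gets its own Rectangle but reuses the shared texture
        r = Rectangle(texture=self.icon.texture, size=(100, 100), pos=pos)
        self.rects.append(r)
        self.canvas.add(r)

    def update_textures(self, instance, value):
        # Swap every rectangle to the new animation frame
        for r in self.rects:
            r.texture = value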
I've got a small PyGI project which uses a Cairo image surface, which I then scale with a surface pattern and render on a Gtk.DrawingArea.
I'd like to write the scaled version to a PNG file. I've tried writing from the original surface with Surface.write_to_png(), but it only writes at the original (i.e. non-scaled) size, so I'm stuck there.
Then I thought I could perhaps fetch the rendered image from the Gtk.DrawingArea and write that to disk, but I haven't found out how to do that in PyGI (this seems to be only possible in GTK+ 2 - save gtk.DrawingArea to file). So I'm trying to figure out how I can write my scaled image to disk.
Here's the code that creates the surface, scales it up and renders it:
def on_drawingarea1_draw(self, widget, ctx, data=None):
    # 'widget' is a Gtk.DrawingArea
    # 'ctx' is the Cairo context
    text = self.ui.entry1.get_text()
    if text == '':
        return
    # Get the data and encode it into the image
    version, size, im = qrencode.encode(text)
    im = im.convert('RGBA')  # Cairo expects RGB
    # Create a pixel array from the PIL image
    bytearr = array.array('B', im.tobytes())  # tostring() is gone in newer Pillow
    height, width = im.size
    # Convert the PIL image to a Cairo surface
    self.surface = cairo.ImageSurface.create_for_data(bytearr,
                                                      cairo.FORMAT_ARGB32,
                                                      width, height,
                                                      width * 4)
    # Scale the image
    imgpat = cairo.SurfacePattern(self.surface)
    scaler = cairo.Matrix()
    scaler.scale(1.0 / self.scale_factor, 1.0 / self.scale_factor)
    imgpat.set_matrix(scaler)
    ctx.set_source(imgpat)
    # Render the image
    ctx.paint()
And here's the code to write the surface to a PNG file:
def on_toolbuttonSave_clicked(self, widget, data=None):
    if not self.surface:
        return
    # The following two lines did not seem to work
    # ctx = cairo.Context(self.surface)
    # ctx.scale(self.scale_factor, self.scale_factor)
    self.surface.write_to_png('/tmp/test.png')
So writing the surface creates a non-scaled image, and there is no write method on cairo.SurfacePattern either.
My last resort is to fetch the scaled image as rendered in the gtk.DrawingArea, put it in a GtkPixbuf.Pixbuf or in a new surface, and then write that to disk. The pixbuf approach seemed to work in GTK+ 2, but not in GTK+ 3.
So does anyone know how I can write the scaled image to disk?
Ok, I found a way:
Since Gtk.DrawingArea is a Gtk.Widget, I could call its get_window() method to obtain the underlying Gdk.Window, use the Gdk.pixbuf_get_from_window() function to get the contents of the drawing area into a GdkPixbuf.Pixbuf, and then use the GdkPixbuf.Pixbuf.savev() function to write the pixbuf as an image to disk.
def drawing_area_write(self):
    # drawingarea1 is a Gtk.DrawingArea
    window = self.ui.drawingarea1.get_window()
    # Some code to get the coordinates for the image, which is centered
    # in the drawing area. You can ignore it for the purpose of this example
    src_x, src_y = self.get_centered_coordinates(self.ui.drawingarea1,
                                                 self.surface)
    image_height = self.surface.get_height() * self.scale_factor
    image_width = self.surface.get_width() * self.scale_factor
    # Fetch what we rendered on the drawing area into a pixbuf
    pixbuf = Gdk.pixbuf_get_from_window(window, src_x, src_y,
                                        image_width, image_height)
    # Write the pixbuf as a PNG image to disk
    pixbuf.savev('/tmp/testimage.png', 'png', [], [])
While this works, it'd still be nice to see if someone could confirm this is the right way or to see if there is any other alternative.
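One possible alternative, not from the original answers but a sketch assuming the same self.surface and self.scale_factor as above, is to paint the scaled pattern onto a new offscreen ImageSurface and write that, avoiding the window read-back entirely:

import cairo

def write_scaled_png(self, path):
    width = int(self.surface.get_width() * self.scale_factor)
    height = int(self.surface.get_height() * self.scale_factor)
    # Offscreen surface at the scaled size
    scaled = cairo.ImageSurface(cairo.FORMAT_ARGB32, width, height)
    ctx = cairo.Context(scaled)
    # Reuse the same pattern/matrix trick as in the draw handler
    pattern = cairo.SurfacePattern(self.surface)
    matrix = cairo.Matrix()
    matrix.scale(1.0 / self.scale_factor, 1.0 / self.scale_factor)
    pattern.set_matrix(matrix)
    ctx.set_source(pattern)
    ctx.paint()
    scaled.write_to_png(path)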
I found another approach, using the Cairo context passed to the handler of draw events, but it resulted in capturing a region of the parent window that was larger than the DrawingArea.
What worked for me was to use the PixBuf as you have shown, but first calling the queue_draw() method for the DrawingArea, to force a full rendering, and waiting for the event to be processed (easy enough, I already had a draw handler). Otherwise, the resulting images can be partially undrawn.
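A rough sketch of that queue_draw-then-capture sequence (assuming a Gtk.DrawingArea called drawingarea; the event-pumping loop is one way to make sure the pending draw has run before grabbing the pixels):

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, Gdk

def capture_drawing_area(drawingarea, path):
    drawingarea.queue_draw()        # request a full re-render
    while Gtk.events_pending():     # let the queued draw event be processed
        Gtk.main_iteration()
    window = drawingarea.get_window()
    width = drawingarea.get_allocated_width()
    height = drawingarea.get_allocated_height()
    pixbuf = Gdk.pixbuf_get_from_window(window, 0, 0, width, height)
    pixbuf.savev(path, 'png', [], [])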