Kivy Interpolation on Canvas - python

I am new to Kivy and Python. I am trying to display a texture (256x256) pixelated through my canvas, but it comes out blurred, and from some googling I think it is being interpolated. I want it to show the pixels with sharp edges, not blurred. It also blends in color from the surrounding tiles when I use texture.get_region(...) to select one tile (64x64) and display that, so I get a border from the other tiles in my new texture. I guess I am just really bad at reading Kivy's documentation.
texture = Image('template.png').texture  # getting the texture
topright = texture.get_region(64, 64, 64, 64)  # selecting a specific tile
with self.canvas:
    Rectangle(texture=texture, pos=self.pos, size=(512, 512))

Set
texture.mag_filter = 'nearest'
texture.min_filter = 'nearest'
You might only need one of these (I don't remember), but you can test, and see the docs at http://kivy.org/docs/api-kivy.graphics.texture.html
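Putting that together with the snippet from the question, a minimal sketch might look like this (the filter is set on the parent texture, which the region shares, so nearest-neighbour sampling applies to both; the imports and the choice to draw the region rather than the full texture are my assumption):

from kivy.core.image import Image
from kivy.graphics import Rectangle

texture = Image('template.png').texture
# ask OpenGL for nearest-neighbour sampling instead of linear interpolation
texture.mag_filter = 'nearest'
texture.min_filter = 'nearest'
topright = texture.get_region(64, 64, 64, 64)  # select one 64x64 tile
with self.canvas:
    Rectangle(texture=topright, pos=self.pos, size=(512, 512))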

Related

3D Model not loading with colors in Ursina (python 3)

I'm trying to load low-poly FPS arms I made in Blender. I gave them colors using a material that I set a color on, and it appears nicely in Blender. But when I load them into Ursina, they are totally white, no color! I suspect the problem is the way I applied the color, which may be an issue for some reason or another.
Here is how I imported the file:
class Hand(Entity):
    def __init__(self):
        super().__init__(model = 'ARMS.obj',
                         scale = (0.1, 0.1, 0.1),
                         rotation = (0, -20, 0),
                         color = color.white,
                         position = (0, 2, 0))
(Sorry for the minimal reproducible example; it would be hard to provide one without posting my whole code.)
So yeah, it shows with the color I set in the color parameter (white, to avoid affecting the colors I applied in Blender... which don't show). Does someone know how to fix that, please? By this I mean: does someone have experience loading 3D models into Ursina or an equivalent, and know what I did wrong? I did as well as I could, following this: https://blender.stackexchange.com/questions/75872/not-showing-colors-in-material-mode. I will attach an image of the properties of the material I assigned in Blender.
I'm really wondering how I can get my colors/future textures to load into Ursina!
OBJ models can't be loaded with their colours in Ursina. What you have to do is:
1. At the top of Blender, click on the Texture Paint tab.
2. On the top left side of the screen you will see a button called Image.
3. Click it, and if you already have an image loaded (which it looks like you do), click Save and save it into your project folder.
4. In the Hand class, include the texture parameter, like this:
class Hand(Entity):
    def __init__(self):
        super().__init__(
            model = 'ARMS.obj',
            scale = (0.1, 0.1, 0.1),
            rotation = (0, -20, 0),
            color = color.white,
            position = (0, 2, 0),
            texture = "ARMS.png"
        )
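For completeness, a minimal usage sketch (assuming ARMS.obj and ARMS.png sit next to the script):

from ursina import *

app = Ursina()
hand = Hand()  # the Hand class defined above, now textured with ARMS.png
app.run()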

Pyglet blit a texture region onto a texture

I would like to create a single texture from multiple smaller texture regions.
What I'm trying is:
import pyglet
import pyglet.gl as gl
large_texture = pyglet.image.Texture(width=800,
                                     height=600,
                                     target=gl.GL_TEXTURE_2D,
                                     id=0)
texture = pyglet.resource.texture("assets/my_image.png")
region = texture.get_region(0, 0, 32, 32)
region_2 = texture.get_region(0, 32, 32, 32)
large_texture.blit_into(region.get_image_data(), x=0, y=0, z=0)
large_texture.blit_into(region_2.get_image_data(), x=0, y=32, z=0)
I was expecting that this would place the two texture regions onto the large_texture, but instead, I get this error:
pyglet.gl.lib.GLException: b'invalid value'
Is there something obvious that I am missing, or what (if any) is the correct way to achieve this?
You aren't actually creating a texture right now.
large_texture = pyglet.image.Texture(width=800,
                                     height=600,
                                     target=gl.GL_TEXTURE_2D,
                                     id=0)
is just declaring a Python object (that constructor is normally used internally by Pyglet); it doesn't allocate an actual OpenGL texture.
The actual texture creation is done in the create method: large_texture = pyglet.image.Texture.create(width=800, height=600)
Once you do that, it should work.
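In other words, the question's snippet should work once the texture is allocated via create; a sketch (assuming assets/my_image.png is at least 64 pixels tall so both regions exist):

import pyglet

# Texture.create allocates a real OpenGL texture of the requested size
large_texture = pyglet.image.Texture.create(width=800, height=600)

texture = pyglet.resource.texture("assets/my_image.png")
region = texture.get_region(0, 0, 32, 32)
region_2 = texture.get_region(0, 32, 32, 32)

# copy each region's pixel data into the large texture at the given offsets
large_texture.blit_into(region.get_image_data(), x=0, y=0, z=0)
large_texture.blit_into(region_2.get_image_data(), x=0, y=32, z=0)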

Python Pillow text vertical align

I'm trying to position a single symbol in the top-left corner of a given bounding box.
draw = ImageDraw.Draw(img)
font = ImageFont.truetype('LiberationSans-Regular.ttf', 150)
draw.text((x0, y0), "€", "green", font=font)
But when I place the text, for example at (0, 0) of the box, it appears with some padding at the top, and the size of that padding seems to depend on the font size.
Is there a way to calculate the size of this padding, and then move the text up by that exact number of pixels?
Basically, the top pixel of the given symbol must be at y0 of the bounding box, regardless of font and font size.
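One way to measure that padding (a sketch, assuming a Pillow version where FreeTypeFont.getbbox is available) is to ask the font for the glyph's bounding box relative to the text origin and subtract that offset before drawing:

from PIL import Image, ImageDraw, ImageFont

img = Image.new("RGB", (400, 400), "white")
draw = ImageDraw.Draw(img)
font = ImageFont.truetype('LiberationSans-Regular.ttf', 150)

symbol = "€"
left, top, right, bottom = font.getbbox(symbol)  # offsets of the inked area from the text origin
x0, y0 = 0, 0
# shift the draw origin up (and left) so the glyph's top-left lands exactly at (x0, y0)
draw.text((x0 - left, y0 - top), symbol, "green", font=font)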

Anti-Aliasing Images in a panel

I'm setting the shape of a wxPanel, but there are a lot of jagged edges. Is there a way to smooth these out?
Inside of the wx.Frame, I am setting the shape from a black and white .png image
mask = wx.Image('Resources/Images/Window/window_alpha_map.png')
mask.ConvertAlphaToMask()
shape = mask.ConvertToBitmap()
shape.SetMask(wx.Mask(shape, wx.BLACK))
self.SetShape(wx.RegionFromBitmap(shape))
Inside of the wxPanel, I am then setting the image in eraseBackground
def onEraseBackground(self, event):
    dc = event.GetDC()
    if not dc:
        dc = wx.ClientDC(self)
        rect = self.GetUpdateRegion().GetBox()
        dc.SetClippingRect(rect)
    dc.Clear()
    background = wx.Image('Resources/Images/Window/window_background.png')
    bmp = background.ConvertToBitmap()
    dc.DrawBitmap(bmp, 0, 0)
Here are some examples of what I am talking about: http://clfu.se/PZn3 http://clfu.se/uE4
Is there a way to smooth these out in wxPython, or even a trick in Photoshop that I am missing?
Any help is appreciated.
You would probably be better off just drawing the image using a GraphicsContext, as that supports anti-aliasing:
http://wxpython.org/Phoenix/docs/html/GraphicsContext.html
wxPython also allows drawing via Cairo, which also supports anti-aliasing:
http://wiki.wxpython.org/UsingCairoWithWxPython
Finally, you could also take a look at FloatCanvas:
Best canvas for drawing in wxPython?
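For the GraphicsContext route, a minimal sketch of a paint handler (assuming the same background image as in the question, and that the handler is bound with self.Bind(wx.EVT_PAINT, self.onPaint)):

def onPaint(self, event):
    dc = wx.PaintDC(self)
    gc = wx.GraphicsContext.Create(dc)
    if gc:
        bmp = wx.Bitmap('Resources/Images/Window/window_background.png')
        # GraphicsContext renders through the platform's anti-aliased backend (GDI+/Cairo/CoreGraphics)
        gc.DrawBitmap(bmp, 0, 0, bmp.GetWidth(), bmp.GetHeight())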

Pyglet: How to change resolution when you go fullscreen?

I'm using Pyglet and I have a little program that includes an object moving over a background. Both of them are represented by images (png and jpg).
I've created a non-fullscreen window with size 800x600 and it works fine, but when I toggle to fullscreen... the background and the object keep the same size as before and the rest of the screen is filled with black (empty color).
What I want to do is to "scale" the images or change the resolution when I toggle fullscreen mode.
I've read the documentation, but I can't find the answer to this.
I know that with Pygame, this problem solves itself automatically (if you change the window size, everything rescales automatically)... but how do you do this with pyglet?
This is my relevant code:
import pyglet
WIDTH = 800
HEIGHT = 600
working_dir = '/where/i/have/my/images/'
window = pyglet.window.Window(WIDTH, HEIGHT)
background = pyglet.image.load(working_dir + 'background.jpg')
flying_thing = pyglet.image.load(working_dir + 'flying_thing.png')
@window.event
def on_draw():
    window.clear()
    background.blit(0, 0)
    flying_thing.blit(WIDTH // 2, HEIGHT // 2)

@window.event
def on_key_press(symbol, modifiers):
    if symbol == pyglet.window.key.SPACE:
        window.set_fullscreen(not window.fullscreen)
pyglet.app.run()
You can try this code by changing working_dir, background.jpg and flying_thing.png to a directory of yours and two images in it.
I haven't tried it, but according to the pyglet docs, blit supports width and height. Its signature is
blit(self, x, y, z=0, width=None, height=None)
Have you tried using
background.blit(0, 0, width=window.width, height=window.height)
instead? (I'm not sure whether window.width changes on fullscreen; let's see...)
This answer can also be relevant to your question: https://stackoverflow.com/a/11183462/931303.
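Applied to the question's on_draw, that might look like this (assuming blit on a loaded image accepts width/height, as the signature quoted above suggests):

@window.event
def on_draw():
    window.clear()
    # stretch the background to whatever size the window currently has
    background.blit(0, 0, width=window.width, height=window.height)
    flying_thing.blit(window.width // 2, window.height // 2)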
