How to create PNG templates with Python

How would I go about creating PNG templates that I could pass data or information to so that it would show in the image? For clarification, I'm thinking of something similar to how GitHub README Stats works, but with PNGs instead of SVGs, or how Discord's widget images work (e.g. https://discordapp.com/api/guilds/guildID/widget.png?style=banner1).
If there isn't a library for this kind of thing, what would it take to make one? (I need a time sink, so I'm pretty keen on making something, even if it only fits my needs.)

You can use PIL (Pillow):
from PIL import Image, ImageDraw, ImageFont  # Import the PIL modules we need

class myTemplate():  # Your template
    def __init__(self, name, description, image):
        self.name = name                # Save the name input on the instance
        self.description = description  # Save the description input on the instance
        self.image = image              # Save the image path on the instance

    def draw(self):
        """
        Draw Function
        ------------------
        Draws the template
        """
        img = Image.open(r'C:\foo\...\template.png', 'r').convert('RGB')  # Open the template image
        if self.image != '':
            pasted = Image.open(self.image).convert("RGBA")  # Open the selected image
            pasted = pasted.resize((278, int(pasted.size[1] * (278 / pasted.size[0]))))  # Resize to fit the black area's width
            pasted = pasted.crop((0, 0, 278, 322))  # Crop the height
            img.paste(pasted, (31, 141))  # Paste the image into the template
        imgdraw = ImageDraw.Draw(img)  # Create a drawing context
        font = ImageFont.truetype("C:/Windows/Fonts/Calibril.ttf", 48)  # Load the font
        imgdraw.text((515, 152), self.name, (0, 0, 0), font=font)  # Draw the name
        imgdraw.text((654, 231), self.description, (0, 0, 0), font=font)  # Draw the description
        img.save(r'C:\foo\...\out.png')  # Save the output

amaztemp = myTemplate('Hello, world!', 'Hi there', r'C:\foo\...\images.jfif')
amaztemp.draw()
Explanation
PIL (Pillow) is an image manipulation library; it lets you edit images from Python, a bit like GIMP, but it is more limited.
In this code we declare a class called myTemplate that acts as our template. Inside the class there are two methods: __init__ initializes the instance and takes a name, a description, and an image path, and draw does the drawing.
In draw(), the program opens the template and checks whether an image was selected; if so, it resizes and crops the selected image and pastes it into the template.
After that, the name and description are drawn, and the program saves the file.
You can customize the class, the coordinates, and the font for your needs.
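To serve such an image on the fly, the way the Discord widget URL in the question does, you would render into an in-memory buffer instead of a file and return it from a web endpoint. Below is only a minimal sketch, assuming Flask; the route, the query parameter, and the blank Image.new() stand-in for the template are placeholders, not part of the code above.
import io
from flask import Flask, request, send_file
from PIL import Image, ImageDraw

app = Flask(__name__)

@app.route('/card.png')  # hypothetical endpoint
def card():
    name = request.args.get('name', 'Anonymous')   # data passed in via the URL
    img = Image.new('RGB', (600, 200), 'white')    # stand-in for opening your template PNG
    ImageDraw.Draw(img).text((20, 20), name, fill=(0, 0, 0))  # draw the requested text
    buf = io.BytesIO()
    img.save(buf, format='PNG')                    # render to memory instead of disk
    buf.seek(0)
    return send_file(buf, mimetype='image/png')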
Image references: template.png (the blank template), images.jfif (the pasted image), out.png (the result).

Show image on screen with ursina

I'm making a 3D FPS in Ursina, and I'd like to have my arm skin with the weapon as an image, not an actual 3D model. Does anyone know how to do this? I tried with Animate from the documentation, but this loads an image as an object in my scene.
What I could do is define a quad with the player as parent and positional arguments, so that it follows me and I see it in the right place, but even this wouldn't work, as the texture argument doesn't accept gifs.
So, does anyone know how to do that?
You can load an animated gif with Animation(), which creates an entity. Since it's part of the interface, you'll want to attach it to the UI:
from ursina import *
app = Ursina()
gif = 'animation.gif'
a = Animation(gif, parent=camera.ui)
a.scale /= 5 # adjust the size to your needs
a.position = (0.5, -0.5) # lower right of the screen
app.run()
You will need the imageio Python package installed to load gifs.
OLD ANSWER
The Entity you use for the gun has to be anchored to the interface using its parent parameter. Here's an example for a Minecraft-style hand (just a block really):
class Hand(Entity):
    def __init__(self):
        super().__init__(
            parent=camera.ui,
            model='cube',
            position=Vec2(0.5, -0.3),
            scale=(0.2, 0.2, 0.5),
            rotation=(150, -30, 0)
        )
The important part is parent=camera.ui.
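For completeness, a minimal usage sketch (assuming the Hand class above and nothing else in the scene):
from ursina import *

app = Ursina()
hand = Hand()  # anchored to the UI layer via parent=camera.ui
app.run()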
Sadly, I cannot seem to find a way to play gifs. I know @Jan Wilamowski updated his answer, but in my project that does NOT work. You CAN show static images on the screen, though, by making a quad entity with an image texture, as shown:
class yourimage(Entity):
    def __init__(self):
        super().__init__(
            parent=camera.ui,
            model='quad',
            # size, position, and rotate your image here
            texture='yourimage')

yourimage()

How to resize ImageDraw object?

I am following these instructions: https://www.geeksforgeeks.org/python-pil-imagedraw-draw-line/ .
from PIL import Image, ImageDraw
w, h = 220, 190
shape = [(40, 40), (w - 10, h - 10)]
# creating new Image object
img = Image.new("RGB", (w, h))
# create line image
img1 = ImageDraw.Draw(img)
img1.line(shape, fill="red", width=0)
img.show()
I then tried to add this line immediately before img.show():
img1 = img1.resize((1024, 1024), Image.BOX)
Which is what I usually do to resize Image objects (I know, if this worked it would distort the image since it's a square, but I don't care about that right now).
When I run the code I get the AttributeError: 'ImageDraw' object has no attribute 'resize'.
So, either there is a different method to resize ImageDraw objects or I need to convert the ImageDraw object back into an Image object. In both cases I couldn't find a solution, can you help me out?
Using
img1 = img1.resize((1024, 1024), Image.BOX)
you're trying to call a resize method on an ImageDraw object, and the error message tells you that ImageDraw objects don't have such a method.
Let's have a look at the different modules, classes, and objects involved:
Pillow has an Image module providing an Image class that "represents an image object". Instances of that class, i.e. Image objects, have a resize method.
There is also an ImageDraw module that "provides simple 2D graphics for Image objects". From the documentation on ImageDraw.Draw:
Creates an object that can be used to draw in the given image.
Note that the image will be modified in place.
The first sentence tells you that the created ImageDraw object is linked to your actual Image object and that any draw operation is performed on that image (the Image object). The second sentence tells you that any modification is applied immediately. There's no need to explicitly "update" the Image object, or to somehow "convert" the ImageDraw object (back) to some Image object. (It's also simply not possible.)
So, fixing your code is very easy now. Simply call resize on your actual Image object img, and not on your ImageDraw object img1:
img = img.resize((1024, 1024), Image.BOX)
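Putting it together, here is the original snippet with the resize applied to the Image object (a sketch; note that recent Pillow releases spell the filter Image.Resampling.BOX):
from PIL import Image, ImageDraw

w, h = 220, 190
shape = [(40, 40), (w - 10, h - 10)]

img = Image.new("RGB", (w, h))
draw = ImageDraw.Draw(img)             # draws directly into img
draw.line(shape, fill="red", width=0)

img = img.resize((1024, 1024), Image.BOX)  # resize the Image, not the ImageDraw object
img.show()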

Python - OpenCV - imread - Displaying Image

I am currently working on reading an image and displaying it in a window. I have successfully done this, but upon displaying the image, the window only lets me see a portion of the full image. I tried saving the image after loading it, and it saved the entire image, so I am fairly certain that it is reading the entire image.
imgFile = cv.imread('1.jpg')
cv.imshow('dst_rt', imgFile)
cv.waitKey(0)
cv.destroyAllWindows()
Looks like the image is too big and the window simply doesn't fit the screen.
Create the window with the cv2.WINDOW_NORMAL flag; that makes it resizable. Then you can resize it to fit your screen like this:
from __future__ import division
import cv2
img = cv2.imread('1.jpg')
screen_res = 1280, 720
scale_width = screen_res[0] / img.shape[1]
scale_height = screen_res[1] / img.shape[0]
scale = min(scale_width, scale_height)
window_width = int(img.shape[1] * scale)
window_height = int(img.shape[0] * scale)
cv2.namedWindow('dst_rt', cv2.WINDOW_NORMAL)
cv2.resizeWindow('dst_rt', window_width, window_height)
cv2.imshow('dst_rt', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
According to the OpenCV documentation, the CV_WINDOW_KEEPRATIO flag should do the same, yet it doesn't, and its value is not even exposed in the Python module.
This can help you (the C++ form from the documentation):
namedWindow("Display window", CV_WINDOW_AUTOSIZE); // Create a window for display.
imshow("Display window", image);                   // Show our image inside it.
In OpenCV, whenever you try to display an image larger than your display resolution, you get a cropped display by default.
To view the image in a window of your choice, OpenCV encourages you to use a named window. Please refer to the namedWindow documentation:
The function namedWindow creates a window that can be used as a placeholder for images and trackbars. Created windows are referred to by their names.
cv2.namedWindow(winname, flags=cv2.WINDOW_AUTOSIZE)
Each window is tied to its image by the name argument, so make sure to use the same name, e.g.:
import cv2

frame = cv2.imread('1.jpg')
cv2.namedWindow("Display 1", cv2.WINDOW_NORMAL)  # WINDOW_NORMAL lets resizeWindow take effect
cv2.resizeWindow("Display 1", 300, 300)
cv2.imshow("Display 1", frame)
cv2.waitKey(0)
cv2.destroyAllWindows()

Tkinter create_image not displaying

I'm working on an app to display a grid of images, much like the main screen of iPhoto or other similar programs. To do so, I set up a Canvas, and iterate through a list of filenames, creating a PhotoImage from each, and displaying them on the canvas:
self.canvas = Canvas(self.bottomFrame, width=700, height=470, bg="Red")
self.canvas.pack()
for i, filename in enumerate(image_list):
    photo_image = PhotoImage(filename)
    self.canvas.create_image(100*(round(i/4)+1), 100*(i+1), image=photo_image)
    self.labelList.append(photo_image)
The labelList is an attribute of the Application class, and the image_list is populated with filenames of .gif photos. When I run the app, however, no images display. I know the canvas is there, because a red rectangle shows up, but there are no images on it.
What am I missing here? I've scrolled through endless pages of discussion looking for answers and haven't found any that work.
photo_image = PhotoImage(filename)
should be
photo_image = PhotoImage(file=filename)
Otherwise, you just set the image's name, since the __init__ function of PhotoImage looks like this:
__init__(self, name=None, cnf={}, master=None, **kw)
Also note that PhotoImage can only handle GIF and PGM/PPM files. If you want other file types, you have to use PIL, for example as sketched below.
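A minimal sketch of the PIL route, assuming Pillow is installed and a hypothetical example.png on disk:
from tkinter import Tk, Canvas
from PIL import Image, ImageTk  # Pillow handles PNG, JPEG, and more

root = Tk()
canvas = Canvas(root, width=700, height=470, bg="Red")
canvas.pack()

photo = ImageTk.PhotoImage(Image.open("example.png"))  # hypothetical filename
canvas.create_image(100, 100, image=photo)
canvas.image = photo  # keep a reference so the image isn't garbage-collected

root.mainloop()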

Save the contents of a Gtk.DrawingArea or Cairo pattern to an image on disk

I've got a small PyGI project which uses a Cairo image surface, which I then scale with a surface pattern and render on a Gtk.DrawingArea.
I'd like to write the scaled version to a PNG file. I've tried to write from the original surface with Surface.write_to_png(), but it only writes the original (i.e. non-scaled) size, so I'm stuck there.
Then I thought I could perhaps fetch the rendered image from the Gtk.DrawingArea and write that to disk, but I haven't found out how to do that in PyGI (this seems to be only possible in GTK+ 2 - save gtk.DrawingArea to file). So I'm trying to figure out how I can write my scaled image to disk.
Here's the code that creates the surface, scales it up and renders it:
def on_drawingarea1_draw(self, widget, ctx, data=None):
    # 'widget' is a Gtk.DrawingArea
    # 'ctx' is the Cairo context
    text = self.ui.entry1.get_text()
    if text == '':
        return
    # Get the data and encode it into the image
    version, size, im = qrencode.encode(text)
    im = im.convert('RGBA')  # Cairo expects RGB
    # Create a pixel array from the PIL image
    bytearr = array.array('B', im.tostring())
    height, width = im.size
    # Convert the PIL image to a Cairo surface
    self.surface = cairo.ImageSurface.create_for_data(bytearr,
                                                      cairo.FORMAT_ARGB32,
                                                      width, height,
                                                      width * 4)
    # Scale the image
    imgpat = cairo.SurfacePattern(self.surface)
    scaler = cairo.Matrix()
    scaler.scale(1.0/self.scale_factor, 1.0/self.scale_factor)
    imgpat.set_matrix(scaler)
    ctx.set_source(imgpat)
    # Render the image
    ctx.paint()
And here's the code to write the surface to a PNG file:
def on_toolbuttonSave_clicked(self, widget, data=None):
    if not self.surface:
        return
    # The following two lines did not seem to work
    # ctx = cairo.Context(self.surface)
    # ctx.scale(self.scale_factor, self.scale_factor)
    self.surface.write_to_png('/tmp/test.png')
So writing the surface produces a non-scaled image, and there is no write method on the cairo.SurfacePattern either.
My last resort is to fetch the scaled image as rendered in the gtk.DrawingArea, put it in a GtkPixbuf.Pixbuf or in a new surface, and then write that to disk. The pixbuf approach seemed to work in GTK+ 2, but not in GTK+ 3.
So does anyone know how I can write the scaled image to disk?
Ok, I found a way:
Remembering that a Gtk.DrawingArea renders into an underlying Gdk.Window, I could use the Gdk.pixbuf_get_from_window() function to get the contents of the drawing area into a GdkPixbuf.Pixbuf, and then use the GdkPixbuf.Pixbuf.savev() function to write the pixbuf as an image on disk.
def drawing_area_write(self):
    # drawingarea1 is a Gtk.DrawingArea
    window = self.ui.drawingarea1.get_window()
    # Some code to get the coordinates for the image, which is centered
    # in the drawing area. You can ignore it for the purpose of this example.
    src_x, src_y = self.get_centered_coordinates(self.ui.drawingarea1,
                                                 self.surface)
    image_height = self.surface.get_height() * self.scale_factor
    image_width = self.surface.get_width() * self.scale_factor
    # Fetch what we rendered on the drawing area into a pixbuf
    pixbuf = Gdk.pixbuf_get_from_window(window, src_x, src_y,
                                        image_width, image_height)
    # Write the pixbuf as a PNG image to disk
    pixbuf.savev('/tmp/testimage.png', 'png', [], [])
While this works, it'd still be nice if someone could confirm this is the right way, or point out an alternative.
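One alternative that avoids reading pixels back from the window would be to paint the original surface into a second, larger Cairo image surface with a scaled context and write that one out. This is only a sketch, assuming pycairo and the self.surface / self.scale_factor attributes used above:
def write_scaled_surface(self, path):
    width = int(self.surface.get_width() * self.scale_factor)
    height = int(self.surface.get_height() * self.scale_factor)
    scaled = cairo.ImageSurface(cairo.FORMAT_ARGB32, width, height)
    ctx = cairo.Context(scaled)
    ctx.scale(self.scale_factor, self.scale_factor)  # scale the target context
    ctx.set_source_surface(self.surface, 0, 0)       # paint the original surface into it
    ctx.paint()
    scaled.write_to_png(path)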
I found another approach, using the Cairo context passed to the handler of draw events, but it resulted in capturing a region of the parent window that was larger than the DrawingArea.
What worked for me was to use the Pixbuf as you have shown, but first calling the queue_draw() method on the DrawingArea to force a full rendering, and waiting for the event to be processed (easy enough, I already had a draw handler). Otherwise, the resulting images can be partially undrawn.
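A minimal sketch of that idea, assuming GTK+ 3 via PyGI (the function and variable names are illustrative):
from gi.repository import Gtk, Gdk

def save_drawing_area(drawing_area, path):
    drawing_area.queue_draw()          # request a full redraw
    while Gtk.events_pending():        # let the pending draw event actually run
        Gtk.main_iteration()
    window = drawing_area.get_window()
    width = drawing_area.get_allocated_width()
    height = drawing_area.get_allocated_height()
    pixbuf = Gdk.pixbuf_get_from_window(window, 0, 0, width, height)
    pixbuf.savev(path, 'png', [], [])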
