I want to create a (grayscale) image from a matrix, draw some lines into it and save it to a file.
In PIL it looks like this:
from PIL import Image, ImageDraw

im = Image.new("RGB", (len(matrix), len(matrix[0])))
for x in range(0, len(matrix)):
    for y in range(0, len(matrix[0])):
        cl = int(matrix[x][y] * 255.0 / float(max_value))
        im.putpixel((x, y), (cl, cl, cl))

draw = ImageDraw.Draw(im)
draw.polygon((off_x, off_y, off_x + a, off_y, off_x + x, off_y + y), outline="#FF0000")
im.save("pix.png")
Sadly, PIL's lines look ugly since it does not support anti-aliasing.
Pyglet is full of different surfaces, textures and images, and I don't really know where to start. So what would a simple way to do this look like in pyglet? Or is there an easier way altogether?
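As an aside, the per-pixel loop above can also be written with numpy; a minimal sketch, assuming matrix and max_value are the same variables used above:

import numpy as np
from PIL import Image

# Scale the matrix to 0..255 and build the grayscale image in one step.
arr = np.asarray(matrix, dtype=float) * 255.0 / max_value
im = Image.fromarray(arr.T.astype(np.uint8), mode="L")  # transpose so matrix[x][y] lands at pixel (x, y)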
I would like to split an image into triangle shaped tiles (equilateral) . I have tried to generate the coordinates of a triangle using the function from https://alexwlchan.net/2016/10/tiling-the-plane-with-pillow/.
My code:
#import opencv
import math
import numpy as np
from PIL import Image

image_path = "/content/newspaper-icon-in-transparent-style-news-on-vector-25591681.jpg"

# Create triangles
# https://alexwlchan.net/2016/10/tiling-the-plane-with-pillow/
# A horizontal offset is added to ensure that images line up
# https://stackoverflow.com/questions/22588074/polygon-crop-clip-using-python-pil
def generate_coordinates_for_unit_triangles(image_width, image_height):
    image_width = 50
    image_height = 50
    h = math.sin(math.pi / 3)
    for x in range(image_width):
        for y in range(int(image_height / h)):
            first_c, second_c, third_c = (x, y * h), (x + 1, y * h), (x + 0.5, (y + 1) * h)
            first_sc, second_sc, third_sc = (x + 1, y * h), (x + 1.5, (y + 1) * h), (x + 0.5, (y + 1) * h)
    return first_c, second_c, third_c, first_sc, second_sc, third_sc
    #return [(x, y * h), (x+1, y * h), (x+0.5, (y+1) * h)], [(x+1, y * h), (x+1.5, (y+1) * h), (x+0.5, (y+1) * h)]

## Generates the two triangles' coordinates
first_c, second_c, third_c, first_sc, second_sc, third_sc = generate_coordinates_for_unit_triangles(50, 50)

# Convert image into numpy array
image_read = Image.open(image_path)
image_to_numpy = np.asarray(image_read)
shape_of_array = image_to_numpy.shape
print(shape_of_array)

mask_image = [first_c, second_c, third_c, first_sc, second_sc, third_sc]
I realized that this may not give my desired output.
The expected input and output is included below:
[Expected input and output][1]
Any guidance on how to approach the problem would be appreciated.
[1]: https://i.stack.imgur.com/vr7rV.jpg
I'm posting this as an answer because it's long, but it's not literally an answer. I'm hoping this will lead you to the next step in your design process.
Here are the design decisions you face. It's clear from your code that you can generate a list of triangle coordinates. Good; what next? You probably know the bounding box of your triangles (largest w and h) in advance, so you can certainly create a set of images that contain your triangles, masked off with a black background or an alpha=0 background. You could just copy the bounding rectangle to an image, then create a mask using the triangle as a path, and set the alpha to 0 outside of the mask. OpenCV should be able to do that.
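A minimal sketch of that masking step using PIL rather than OpenCV (the helper name crop_triangle and its arguments are placeholders, not from the question's code):

from PIL import Image, ImageDraw

def crop_triangle(img, triangle):
    # triangle: three (x, y) points in image coordinates
    xs = [p[0] for p in triangle]
    ys = [p[1] for p in triangle]
    box = (int(min(xs)), int(min(ys)), int(max(xs)) + 1, int(max(ys)) + 1)
    tile = img.crop(box).convert("RGBA")
    # Alpha mask: opaque inside the triangle, alpha=0 outside.
    mask = Image.new("L", tile.size, 0)
    shifted = [(x - box[0], y - box[1]) for x, y in triangle]
    ImageDraw.Draw(mask).polygon(shifted, fill=255)
    tile.putalpha(mask)
    return tile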
But after you have those, what then? You talked about matching the edges. That's complicated. I suppose you could extract a vector of pixels from the three edges of each triangle, and then do some kind of approximate comparison.
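One rough way to sketch that comparison, assuming the image is already a numpy array and using mean absolute difference as the "approximate comparison" (the helper names, sample count and tolerance are all placeholders):

import numpy as np

def edge_profile(img_array, p0, p1, samples=64):
    # Sample pixel values along the segment from p0 to p1.
    t = np.linspace(0.0, 1.0, samples)
    xs = np.clip((p0[0] + t * (p1[0] - p0[0])).astype(int), 0, img_array.shape[1] - 1)
    ys = np.clip((p0[1] + t * (p1[1] - p0[1])).astype(int), 0, img_array.shape[0] - 1)
    return img_array[ys, xs].astype(float)

def edges_match(profile_a, profile_b, tolerance=10.0):
    # Mean absolute difference; one profile may need to be reversed before comparing.
    return np.mean(np.abs(profile_a - profile_b)) < tolerance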
If you do find matches that allow you to stitch together a composite, it is possible (assuming you have alpha=0 in the backgrounds) to blit all of these triangles back into some larger image, kind of like quilting. OpenCV can do block copies with alpha blending.
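If you stay in PIL, that alpha-respecting blit can be sketched like this (the sizes, positions and colours below are placeholder values):

from PIL import Image, ImageDraw

# A tiny stand-in tile: a red triangle on a transparent background.
tile = Image.new("RGBA", (100, 100), (0, 0, 0, 0))
ImageDraw.Draw(tile).polygon([(0, 0), (99, 0), (50, 86)], fill=(255, 0, 0, 255))

# Paste onto a larger canvas; passing the tile as its own mask respects its alpha channel.
canvas = Image.new("RGBA", (400, 300), (0, 0, 0, 0))
canvas.paste(tile, (150, 100), mask=tile)
canvas.save("quilt.png")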
So, in the end, I think your problem is achievable, but it's going to be a lot of work, and probably more than we can offer here.
Hi, could you please give me some tips on how to cut 4 triangles out of an image and mix them together so that they look like this:
I want to use Python to achieve this.
Starting with this image (dog.jpg):
You could do something like this:
#!/usr/bin/env python3
from PIL import Image, ImageDraw
# Open image, generate a copy and rotate copy 180 degrees
im1 = Image.open('dog.jpg')
im2 = im1.copy().rotate(180)
# DEBUG: im2.save('im2.png')
# Get dimensions of image
w, h = im1.size
# Create an alpha layer same size as our image filled with black
alpha = Image.new('L',(w,h))
# Draw 2 white (i.e. transparent) triangles on the alpha layer
draw = ImageDraw.Draw(alpha)
draw.polygon([(0, 0), (w, 0), (w/2, h/2)], fill = (255))
draw.polygon([(0, h), (w, h), (w/2, h/2)], fill = (255))
# DEBUG: alpha.save('alpha.png')
# Composite rotated image over initial image while respecting alpha
result = Image.composite(im1, im2, alpha)
result.save('result.png')
The intermediate steps (commented with # DEBUG: in the code) look like this:
im2.png
and alpha.png:
I have a very similar question to this: Resize rectangular image to square, keeping ratio and fill background with black, but I would like to resize to a nonsquare image and center the image either horizontally or vertically if needed.
Here are some examples of desired outputs. I made this image entirely with Paint, so the images might not actually be perfectly centered, but centering is what I'd like to achieve:
I tried the following code that I edited from the question linked:
from PIL import Image

def fix_size(fn, desired_w=256, desired_h=256, fill_color=(0, 0, 0, 255)):
    """Edited from https://stackoverflow.com/questions/44231209/resize-rectangular-image-to-square-keeping-ratio-and-fill-background-with-black"""
    im = Image.open(fn)
    x, y = im.size
    #size = max(min_size, x, y)
    w = max(desired_w, x)
    h = max(desired_h, y)
    new_im = Image.new('RGBA', (w, h), fill_color)
    new_im.paste(im, ((w - x) // 2, (h - y) // 2))
    return new_im.resize((desired_w, desired_h))
That doesn't work, however, as it still stretches some images into square-shaped ones (at least image b in the example). As for big images, it seems to rotate them instead!
The problem lies in your incorrect calculation of the image size:
w = max(desired_w, x)
h = max(desired_h, y)
You're simply taking the maximum of each dimension independently, without taking the aspect ratio of the image into account. Imagine your input is a square 1000x1000 image. You would end up creating a black 1000x1000 image, pasting the original image over it, and then resizing it to 244x138. To get the correct result, you would have to create a 1768x1000 image instead of a 1000x1000 image.
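To see where 1768 comes from, work through that example with the corrected logic below and a 244x138 target: desired_ratio = 244 / 138 ≈ 1.768, so w = max(244, 1000) = 1000 and h = int(1000 / 1.768) = 565; since 565 < y = 1000, h is bumped to 1000 and w becomes int(1000 * 1.768) = 1768.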
Here's the updated code that takes the aspect ratio into account:
def fix_size(fn, desired_w=256, desired_h=256, fill_color=(0, 0, 0, 255)):
    """Edited from https://stackoverflow.com/questions/44231209/resize-rectangular-image-to-square-keeping-ratio-and-fill-background-with-black"""
    im = Image.open(fn)
    x, y = im.size
    ratio = x / y
    desired_ratio = desired_w / desired_h
    w = max(desired_w, x)
    h = int(w / desired_ratio)
    if h < y:
        h = y
        w = int(h * desired_ratio)
    new_im = Image.new('RGBA', (w, h), fill_color)
    new_im.paste(im, ((w - x) // 2, (h - y) // 2))
    return new_im.resize((desired_w, desired_h))
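A minimal call, with a placeholder filename and the 244x138 target used in the example above:

padded = fix_size('input_photo.png', desired_w=244, desired_h=138)
padded.save('output_padded.png')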
I'm creating some images with the Python Imaging Library (PIL). Now, like we zoom into a map at a particular location, I want to similarly zoom into my image at a specified point. Note that this is different from resizing the image; I want the size to remain the same. I couldn't find any built-in method in the documentation that does this. Is anyone aware of a method that might achieve this? I'd ideally like to do this without other dependencies like OpenCV.
I think you mean this:
from PIL import Image

def zoom_at(img, x, y, zoom):
    w, h = img.size
    zoom2 = zoom * 2
    img = img.crop((x - w / zoom2, y - h / zoom2,
                    x + w / zoom2, y + h / zoom2))
    return img.resize((w, h), Image.LANCZOS)
This will crop the image around the point you zoom into and then upscale the result to the original size.
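For example (the filename and zoom factor are placeholders), zooming 2x around the image centre:

from PIL import Image

img = Image.open('photo.jpg')
w, h = img.size
zoomed = zoom_at(img, w / 2, h / 2, 2)
zoomed.save('photo_zoomed.jpg')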
I'm working on a program in which I need to separate spritesheets, or in other words, separate an image into contiguous regions of color.
I've never done any image processing before, so I'm wondering how I would go about this. What would I do after I test for pixel color? What's the best way to determine which pixel goes with each sprite?
All the input images have uniform backgrounds, and an alpha channel different from that of the background counts as color. The order of the output images needs to be left-right, up-down. My project is written in PySide, so I'm hoping to use it for this task too, but I could import more libraries if necessary.
Thanks for your replies!
P.S.:
I'm not sure if the PySide tag is appropriate or not, since I'm using PySide, but the question doesn't involve the GUI aspects of it. If a mod feels it doesn't belong, feel free to remove it.
For example, I have a spritesheet that looks like this:
I want to separate it into these:
That sounds like something that should be implemented in anything that deals with sprites, but here we will implement our own sprite splitter.
The first thing we need here is to extract the individual objects. In this situation, it is only a matter of deciding whether a pixel is a background one or not. If we assume the point at origin is a background pixel, then we are done:
from PIL import Image

def sprite_mask(img, bg_point=(0, 0)):
    width, height = img.size
    im = img.load()
    bg = im[bg_point]
    mask_img = Image.new('L', img.size)
    mask = mask_img.load()
    for x in range(width):
        for y in range(height):
            if im[x, y] != bg:
                mask[x, y] = 255
    return mask_img, bg
If you save the mask image created above and open it, here is what you would see on it (I added a rectangle inside your empty window):
With the image above, the next thing we need is to fill its holes if we want to join sprites that are inside others (like the rectangle added, see figure above). This is another simple rule: if a point cannot be reached from the point at [0, 0], then it is a hole and it must be filled. All that is left then is separating each sprite into an individual image. This is done by connected component labeling. For each component we get its axis-aligned bounding box in order to define the dimensions of the piece, and then we copy from the original image the points that belong to the given component. To keep it short, the following code uses scipy for these tasks:
import sys
import numpy
from scipy.ndimage import label, morphology
from PIL import Image

def split_sprite(img, mask, bg, join_interior=True, basename='sprite_%d.png'):
    im = img.load()
    m = numpy.array(mask, dtype=numpy.uint8)
    if join_interior:
        m = morphology.binary_fill_holes(m)
    lbl, ncc = label(m, numpy.ones((3, 3)))
    for i in range(1, ncc + 1):
        px, py = numpy.nonzero(lbl == i)
        xmin, xmax, ymin, ymax = px.min(), px.max(), py.min(), py.max()
        sprite = Image.new(img.mode, (ymax - ymin + 1, xmax - xmin + 1), bg)
        sp = sprite.load()
        for x, y in zip(px, py):
            x, y = int(x), int(y)
            sp[y - int(ymin), x - int(xmin)] = im[y, x]
        name = basename % i
        sprite.save(name)
        print("Wrote %s" % name)

sprite = Image.open(sys.argv[1])
mask, bg = sprite_mask(sprite)
split_sprite(sprite, mask, bg)
Now you have all the pieces (sprite_1.png, sprite_2.png, ..., sprite_8.png) exactly as you included in the question.