How to slice an image with different dimensions in Python?

I have a 1024x1024 image and I want to slice it into boxes of different, randomly selected sizes; for example, 2 pieces of 512x512, 8 pieces of 16x16, etc. Box positions are not important, and I want to use every pixel only once. Below is my code, but when I run it, a lot of pictures are created and the same regions are used more than once. How can I make sure each pixel is used only once? The picture below shows what I want.
from PIL import Image
import random

infile = 'Da Vinci.jpg'
chopsize = [512, 256, 128, 64, 32]

img = Image.open(infile)
width, height = img.size

a = random.choice(chopsize)
for x0 in range(0, width):
    for y0 in range(0, height):
        box = (x0, y0,
               x0+random.choice(chopsize) if x0+random.choice(chopsize) < width else width - 1,
               y0+random.choice(chopsize) if y0+random.choice(chopsize) < height else height - 1)
        print('%s %s' % (infile, box))
        img.crop(box).save('%s.x%01d.y%01d.jpg' % (infile.replace('.jpg',''), x0, y0))
        a = random.choice(chopsize)
That is what I want:

This is a fun problem! How do you randomly tile an area with your boxes while making sure none of the boxes overlap?
You have a couple of issues:
As you've written your code so far, you are going to have boxes that spill over the border of your image. I don't know if you care about this; in your example picture the boxes fit perfectly into the space. If you do care, you are going to have to figure that part out.
(although this code makes me think you have thought about it and don't care)
x0+random.choice(chopsize) if x0+random.choice(chopsize) < width else width - 1
The other issue, which is what your question is really about, is that you don't keep a record of which pixels you have already visited. There are a few different ways you could do this.
One might be something like:
import numpy as np
import random

# width, height and chopsize come from the question's code above
filled_pixels = np.zeros((width, height))

x = 0
while x < width:
    y = 0
    if filled_pixels[x, y] == 1:
        x += 32  # 32 is the minimum dimension of a square
    while y < height:
        chop = random.choice(chopsize)
        if filled_pixels[x, y] == 1:
            y += 1  # step down one pixel until we are past the already-filled area
        else:
            filled_pixels[x:x+chop, y:y+chop] = 1
            # do your stuff with making the boxes
            y += chop
You basically raster through your image making boxes, checking at each step that you aren't starting a box on a pixel that already belongs to a box (as recorded in filled_pixels).
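A more complete sketch along the same lines (my own illustration, not the original answer's code; it assumes the image dimensions and all chop sizes are multiples of 32, as in the question, so every pixel ends up in exactly one box):

from PIL import Image
import numpy as np
import random

infile = 'Da Vinci.jpg'               # same input file as in the question
chopsizes = [512, 256, 128, 64, 32]
step = min(chopsizes)

img = Image.open(infile)
width, height = img.size              # assumed to be multiples of 32, e.g. 1024x1024
filled = np.zeros((width, height), dtype=bool)

n = 0
for x0 in range(0, width, step):
    for y0 in range(0, height, step):
        if filled[x0, y0]:
            continue                  # already covered by an earlier box
        # try the sizes in a random order and keep the first one that fits
        for chop in random.sample(chopsizes, len(chopsizes)):
            if (x0 + chop <= width and y0 + chop <= height
                    and not filled[x0:x0 + chop, y0:y0 + chop].any()):
                break
        filled[x0:x0 + chop, y0:y0 + chop] = True
        img.crop((x0, y0, x0 + chop, y0 + chop)).save(
            '%s.x%04d.y%04d.jpg' % (infile.replace('.jpg', ''), x0, y0))
        n += 1
print('%d non-overlapping pieces written' % n)

Because every box starts on a multiple of 32 and has a side that is a multiple of 32, the 32x32 fallback always fits, so the scan covers each pixel exactly once.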

Related

How to split an image into multiple images based on white borders between them

I need to split an image into multiple images, based on the white borders between them.
for example:
output:
Using Python, I don't know how to start this task.
Here is a solution for the "easy" case where we know the grid configuration. I provide this solution even though I doubt this is what you were asked to do.
In your example image of the cat, if we are given the grid configuration, 2x2, we can do:
from PIL import Image

def subdivide(file, nx, ny):
    im = Image.open(file)
    wid, hgt = im.size   # Size of input image
    w = int(wid/nx)      # Width of each subimage
    h = int(hgt/ny)      # Height of each subimage
    for i in range(nx):
        x1 = i*w         # Horizontal extent...
        x2 = x1+w        # of subimage
        for j in range(ny):
            y1 = j*h     # Vertical extent...
            y2 = y1+h    # of subimage
            subim = im.crop((x1, y1, x2, y2))
            subim.save(f'{i}x{j}.png')

subdivide("cat.png", 2, 2)
The above will create these images:
My previous answer depended on knowing the grid configuration of the input image. This solution does not.
The main challenge is to detect where the borders are and, thus, where the rectangles that contain the images are located.
To detect the borders, we'll look for (vertical and horizontal) image lines where all pixels are "white". Since the borders in the image are not really pure white, we'll use a value less than 255 as the whiteness threshold (WHITE_THRESH in the code.)
The gist of the algorithm is in the following lines of code:
whitespace = [np.all(gray[:,i] > WHITE_THRESH) for i in range(gray.shape[1])]
Here "whitespace" is a list of Booleans that looks like
TTTTTFFFFF...FFFFFFFFTTTTTTTFFFFFFFF...FFFFTTTTT
where "T" indicates the corresponding horizontal location is part of the border (white).
We are interested in the x-locations where there are transitions between T and F. The call to the function slices(whitespace) returns a list of tuples of indices
[(x1, x2), (x1, x2), ...]
where each (x1, x2) pair indicates the xmin and xmax location of images in the x-axis direction.
The slices function finds the "edges" where there are transitions between True and False using the exclusive-or operator and then returns the locations of the transitions as a list of tuples (pairs of indices).
Similar code is used to detect the vertical location of borders and images.
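To see what slices does, here is a toy run of the same idea on a short, hand-made boolean list (the values are invented purely for illustration):

whitespace = [True, True, False, False, False, True, True, False, False, True, True]
# edges marks the positions where the value changes (element 0 is compared with the
# last element, which is harmless when the list starts and ends with border columns)
edges = [whitespace[i-1] ^ whitespace[i] for i in range(len(whitespace))]
indices = [i for i, v in enumerate(edges) if v]   # [2, 5, 7, 9]
pairs = [(indices[i], indices[i+1]) for i in range(0, len(indices), 2)]
print(pairs)   # [(2, 5), (7, 9)] -> two images spanning columns 2..4 and 7..8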
The complete runnable code below takes as input the OP's image "cat.png" and:
Extracts the sub-images into 4 PNG files "fragment-0-0.png", "fragment-0-1.png", "fragment-1-0.png" and "fragment-1-1.png".
Creates a (borderless) version of the original image by pasting together the above fragments.
The runnable code and resulting images follow. The program runs in about 0.25 seconds.
from PIL import Image
import numpy as np

def slices(lst):
    """ Finds the indices where lst changes value and returns them in pairs
        lst is a list of booleans
    """
    edges = [lst[i-1] ^ lst[i] for i in range(len(lst))]
    indices = [i for i, v in enumerate(edges) if v]
    pairs = [(indices[i], indices[i+1]) for i in range(0, len(indices), 2)]
    return pairs

def extract(xx_locs, yy_locs, image, prefix="image"):
    """ Locate and save the subimages """
    data = np.asarray(image)
    for i in range(len(xx_locs)):
        x1, x2 = xx_locs[i]
        for j in range(len(yy_locs)):
            y1, y2 = yy_locs[j]
            arr = data[y1:y2, x1:x2, :]
            Image.fromarray(arr).save(f'{prefix}-{i}-{j}.png')

def assemble(xx_locs, yy_locs, prefix="image", result='composite'):
    """ Paste the subimages into a single image and save """
    wid = sum([p[1]-p[0] for p in xx_locs])
    hgt = sum([p[1]-p[0] for p in yy_locs])
    dst = Image.new('RGB', (wid, hgt))
    x = y = 0
    for i in range(len(xx_locs)):
        for j in range(len(yy_locs)):
            img = Image.open(f'{prefix}-{i}-{j}.png')
            dst.paste(img, (x, y))
            y += img.height
        x += img.width
        y = 0
    dst.save(f'{result}.png')

WHITE_THRESH = 110  # The original image borders are not actually white

image_file = 'cat.png'
image = Image.open(image_file)

# To detect the (almost) white borders, we make a grayscale version of the image
gray = np.asarray(image.convert('L'))

# Detect location of images along the x axis
whitespace = [np.all(gray[:, i] > WHITE_THRESH) for i in range(gray.shape[1])]
xx_locs = slices(whitespace)

# Detect location of images along the y axis
whitespace = [np.all(gray[i, :] > WHITE_THRESH) for i in range(gray.shape[0])]
yy_locs = slices(whitespace)

extract(xx_locs, yy_locs, image, prefix='fragment')
assemble(xx_locs, yy_locs, prefix='fragment', result='composite')
Individual fragments:
The composite image:

Why is my tiled image shifted with pasting into Pillow?

I am writing a program where I chop up an image into many sub-tiles, process the tiles, then stitch them together. I am stuck at the stitching part. When I run my code, after the first row the tiles each shift one space over. I am working with 1000x1000 tiles and the image size can be variable. I also get this ugly horizontal padding that I can't figure out how to get rid of.
Here is a google drive link to the images:
https://drive.google.com/drive/folders/1HqRl29YlWUrsYoZP88TAztJe9uwgP5PS?usp=sharing
Clarification based on the comments
I take the original black and white image and crop it into 1000px x 1000px black and white tiles. These tiles are then re-colored to replace the white with a color corresponding to a density heatmap. The recolored tiles are then saved into that folder. The picture I included is one of the colored-in tiles that I am trying to piece back together. When pieced together, the result should be the same shape as the black and white image, but a multi-colored version.
from PIL import Image
import os

stitched_image = Image.new('RGB', (large_image.width, large_image.height))
image_list = os.listdir('recolored_tiles')
current_tile = 0
for i in range(0, large_image.height, 1000):
    for j in range(0, large_image.width, 1000):
        p = Image.open(f'recolored_tiles/{image_list[current_tile]}')
        stitched_image.paste(p, (j, i), 0)
        current_tile += 1
stitched_image.save('test.png')
I am attaching the original image that I process in tiles and the current state of the output image:
An example of the tiles found in the folder recolored_tiles:
First off, the code below will create the correct image:
from PIL import Image
import os

stitched_image = Image.new('RGB', (original_image_width, original_image_height))
image_list = os.listdir('recolored_tiles')
current_tile = 0
for y in range(0, original_image_height - 1, 894):
    for x in range(0, original_image_width - 1, 1008):
        tile_image = Image.open(f'recolored_tiles/{image_list[current_tile]}')
        print("x: {0} y: {1}".format(x, y))
        stitched_image.paste(tile_image, (x, y), 0)
        current_tile += 1
stitched_image.save('test.png')
Explanation
First off, you should notice that your tiles aren't 1000x1000. They are all 1008x984, because 18145x16074 can't be divided evenly into 1000x1000 tiles.
Therefore you will have to put the correct tile width and height into your for loops:
for y in range(0, 16074, INSERT CORRECT RECOLORED_TILE HEIGHT HERE):
    for x in range(0, 18145, INSERT CORRECT RECOLORED_TILE WIDTH HERE):
Secondly, because of how Python's range works, it does not include the stop value. For example:
for i in range(0, 5):
    print(i)
The output for that would be:
0
1
2
3
4
Therefore the width and height of the original image have to be reduced by 1, because otherwise range thinks there is one more tile along each direction than actually exists.
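As a side note (a sketch of my own, not part of the original answer): instead of hard-coding the tile dimensions, you could read them from the first tile, and also sort the directory listing, since os.listdir returns files in arbitrary order. Here original_image_width and original_image_height stand for the original image's size, as in the code above, and the sketch assumes all tiles share the same size and that the sorted filenames come out in the intended order:

from PIL import Image
import os

tile_names = sorted(os.listdir('recolored_tiles'))     # listdir order is arbitrary, so sort
with Image.open(os.path.join('recolored_tiles', tile_names[0])) as first_tile:
    tile_w, tile_h = first_tile.size                    # e.g. 1008 x 984 in this case

stitched_image = Image.new('RGB', (original_image_width, original_image_height))
current_tile = 0
for y in range(0, original_image_height - 1, tile_h):
    for x in range(0, original_image_width - 1, tile_w):
        with Image.open(os.path.join('recolored_tiles', tile_names[current_tile])) as tile:
            stitched_image.paste(tile, (x, y))
        current_tile += 1
stitched_image.save('test.png')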
Hope this works and what a cool project you're working on :)

Find minimal number of rectangles in the image

I have binary images where rectangles are placed randomly and I want to get the positions and sizes of those rectangles.
If possible I want the minimal number of rectangles necessary to exactly recreate the image.
On the left is my original image, and on the right the image I get after applying scipy's find_objects()
(as suggested for this question).
import numpy as np
import scipy.ndimage

# image = scipy.ndimage.zoom(image, 9, order=0)
labels, n = scipy.ndimage.measurements.label(image, np.ones((3, 3)))
bboxes = scipy.ndimage.measurements.find_objects(labels)

img_new = np.zeros_like(image)
for bb in bboxes:
    img_new[bb[0], bb[1]] = 1
This works fine if the rectangles are far apart, but if they overlap and build more complex structures this algorithm just gives me the largest bounding box (upsampling the image made no difference). I have the feeling that there should already exist a scipy or opencv method which does this.
I would be glad to know if somebody has an idea on how to tackle this problem or even better knows of an existing solution.
As a result I want a list of rectangles (i.e. lower-left corner : upper-right corner) in the image. The condition is that when I redraw those filled rectangles I want to get exactly the same image as before. If possible, the number of rectangles should be minimal.
Here is the code for generating sample images (and a more complex example, original vs. scipy):
import numpy as np

def random_rectangle_image(grid_size, n_obstacles, rectangle_limits):
    n_dim = 2
    rect_pos = np.random.randint(low=0, high=grid_size-rectangle_limits[0]+1,
                                 size=(n_obstacles, n_dim))
    rect_size = np.random.randint(low=rectangle_limits[0],
                                  high=rectangle_limits[1]+1,
                                  size=(n_obstacles, n_dim))

    # Crop rectangle size if it goes over the boundaries of the world
    diff = rect_pos + rect_size
    ex = np.where(diff > grid_size, True, False)
    rect_size[ex] -= (diff - grid_size)[ex].astype(int)

    img = np.zeros((grid_size,)*n_dim, dtype=bool)
    for i in range(n_obstacles):
        p_i = np.array(rect_pos[i])
        ps_i = p_i + np.array(rect_size[i])
        img[tuple(map(slice, p_i, ps_i))] = True
    return img

img = random_rectangle_image(grid_size=64, n_obstacles=30,
                             rectangle_limits=[4, 10])
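For reference, the "exactly recreate the image" condition can be checked with a few lines (a sketch; the (row, col, height, width) convention for the rectangles is my own choice, not something given in the question):

import numpy as np

def recreates_exactly(img, rects):
    """True if drawing the filled rectangles reproduces img exactly.

    img   : 2D boolean array, as produced by random_rectangle_image
    rects : iterable of (row, col, height, width) tuples (convention chosen here)
    """
    redrawn = np.zeros_like(img)
    for r, c, h, w in rects:
        redrawn[r:r + h, c:c + w] = True
    return bool(np.array_equal(redrawn, img))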
Here is something to get you started: a naïve algorithm that walks your image and creates rectangles as large as possible. As it is now, it only marks the rectangles but does not report back coordinates or counts. This is to visualize the algorithm alone.
It does not need any external libraries except for PIL, to load and access the left side image when saved as a PNG. I'm assuming a border of 15 pixels all around can be ignored.
from PIL import Image

def fill_rect(pixels, xp, yp, w, h):
    # fill the rectangle red and draw an orange outline around it
    for y in range(h):
        for x in range(w):
            pixels[xp+x, yp+y] = (255, 0, 0, 255)
    for y in range(h):
        pixels[xp, yp+y] = (255, 192, 0, 255)
        pixels[xp+w-1, yp+y] = (255, 192, 0, 255)
    for x in range(w):
        pixels[xp+x, yp] = (255, 192, 0, 255)
        pixels[xp+x, yp+h-1] = (255, 192, 0, 255)

def find_rect(pixels, x, y, maxx, maxy):
    # assume we're at the top left
    # get max horizontal span
    width = 0
    height = 1
    while x+width < maxx and pixels[x+width, y] == (0, 0, 0, 255):
        width += 1
    # now walk down while the row below is still entirely black
    while y+height < maxy:
        for w in range(x, x+width, 1):
            if pixels[w, y+height] != (0, 0, 0, 255):
                break
        if pixels[w, y+height] != (0, 0, 0, 255):
            break
        height += 1
    # fill rectangle
    fill_rect(pixels, x, y, width, height)

image = Image.open('A.png')
pixels = image.load()
width, height = image.size
print(width, height)

for y in range(16, height-15, 1):
    for x in range(16, width-15, 1):
        if pixels[x, y] == (0, 0, 0, 255):
            find_rect(pixels, x, y, width, height)

image.show()
From the output
you can observe that the detection algorithm can be improved: for example, the "obvious" two top-left rectangles are split up into 3. Similarly, the larger structure in the center also contains one rectangle more than absolutely needed.
Possible improvements are either to adjust the find_rect routine to locate a best fit¹, or store the coordinates and use math (beyond my ken) to find which rectangles may be joined.
¹ A further idea on this. Currently all found rectangles are immediately filled with the "found" color. You could try to detect obviously multiple rectangles, and then, after marking the first, the other rectangle(s) to check may then either be black or red. Off the cuff I'd say you'd need to try different scan orders (top-to-bottom or reverse, left-to-right or reverse) to actually find the minimally needed number of rectangles in any combination.
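If you also want the coordinates and the count, rather than only the visualization, a small variant of find_rect (a sketch of my own, not part of the answer above) could return each rectangle so the caller can collect them:

def find_rect_coords(pixels, x, y, maxx, maxy):
    # Same scanning logic as find_rect above, but also returns (x, y, width, height).
    width = 0
    while x + width < maxx and pixels[x + width, y] == (0, 0, 0, 255):
        width += 1
    height = 1
    while y + height < maxy:
        if any(pixels[w, y + height] != (0, 0, 0, 255) for w in range(x, x + width)):
            break
        height += 1
    fill_rect(pixels, x, y, width, height)   # mark it so the scan does not find it again
    return (x, y, width, height)

# in the main scan:
# rectangles = []
# ...
#         if pixels[x, y] == (0, 0, 0, 255):
#             rectangles.append(find_rect_coords(pixels, x, y, width, height))
# print(len(rectangles), 'rectangles found:', rectangles)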

PySide: Separating a spritesheet / Separating an image into contiguous regions of color

I'm working on a program in which I need to separate spritesheets, or in other words, separate an image into contiguous regions of color.
I've never done any image processing before, so I'm wondering how I would go about this. What would I do after I test for pixel color? What's the best way to determine which pixel goes with each sprite?
All the input images have uniform backgrounds, and an alpha channel different from that of the background counts as color. The order of the output images needs to be left-right, up-down. My project is written in PySide, so I'm hoping to use it for this task too, but I could import more libraries if necessary.
Thanks for your replies!
P.S.:
I'm not sure if the PySide tag is appropriate or not, since I'm using PySide, but the question doesn't involve the GUI aspects of it. If a mod feels it doesn't belong, feel free to remove it.
For example, I have a spritesheet that looks like this:
I want to separate it into these:
That sounds like something that should be built into anything that deals with sprites, but here we will implement our own sprite splitter.
The first thing we need here is to extract the individual objects. In this situation, it is only a matter of deciding whether a pixel is a background one or not. If we assume the point at origin is a background pixel, then we are done:
from PIL import Image

def sprite_mask(img, bg_point=(0, 0)):
    width, height = img.size
    im = img.load()
    bg = im[bg_point]
    mask_img = Image.new('L', img.size)
    mask = mask_img.load()
    for x in range(width):
        for y in range(height):
            if im[x, y] != bg:
                mask[x, y] = 255
    return mask_img, bg
If you save the mask image created above and open it, here is what you would see on it (I added a rectangle inside your empty window):
With the image above, the next thing we need is to fill its holes if we want to join sprites that are inside others (like the rectangle added, see the figure above). This is another simple rule: if a point cannot be reached from the point at [0, 0], then it is a hole and it must be filled. All that is left then is separating each sprite into an individual image. This is done by connected component labeling. For each component we get its axis-aligned bounding box in order to define the dimensions of the piece, and then we copy from the original image the points that belong to the given component. To keep it short, the following code uses scipy for these tasks:
import sys

import numpy
from scipy.ndimage import label, morphology

def split_sprite(img, mask, bg, join_interior=True, basename='sprite_%d.png'):
    im = img.load()
    m = numpy.array(mask, dtype=numpy.uint8)
    if join_interior:
        m = morphology.binary_fill_holes(m)
    lbl, ncc = label(m, numpy.ones((3, 3)))
    for i in range(1, ncc + 1):
        px, py = numpy.nonzero(lbl == i)
        xmin, xmax, ymin, ymax = px.min(), px.max(), py.min(), py.max()
        sprite = Image.new(img.mode, (ymax - ymin + 1, xmax - xmin + 1), bg)
        sp = sprite.load()
        for x, y in zip(px, py):
            x, y = int(x), int(y)
            sp[y - int(ymin), x - int(xmin)] = im[y, x]
        name = basename % i
        sprite.save(name)
        print("Wrote %s" % name)

sprite = Image.open(sys.argv[1])
mask, bg = sprite_mask(sprite)
split_sprite(sprite, mask, bg)
Now you have all the pieces (sprite_1.png, sprite_2.png, ..., sprite_8.png) exactly as you included in the question.
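One thing the question asks for that the code above does not guarantee is the output order (left-right, up-down); the label numbers produced by scipy are not necessarily in that reading order. A minimal sketch of my own (assuming the sprites are roughly aligned in rows, which the question does not state) is to sort the components by the top-left corner of their bounding boxes before saving:

from scipy import ndimage

def reading_order(lbl, ncc):
    """Return the labels 1..ncc sorted top-to-bottom, then left-to-right.

    lbl and ncc are the outputs of scipy.ndimage.label as used in split_sprite.
    Assumes the sprites are roughly aligned in rows (my assumption, not the OP's).
    """
    boxes = ndimage.find_objects(lbl)   # one (row_slice, col_slice) pair per label
    return sorted(range(1, ncc + 1),
                  key=lambda i: (boxes[i - 1][0].start, boxes[i - 1][1].start))

split_sprite could then iterate over reading_order(lbl, ncc) instead of range(1, ncc + 1) and number the output files accordingly.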

Trim scanned images with PIL?

What would be the approach to trim an image that's been input using a scanner and therefore has a large white/black area?
The entropy solution seems problematic and computationally intensive. Why not edge detect?
I just wrote this Python code to solve this same problem for myself. My background was dirty white-ish, so the criterion I used was darkness and color. I simplified this by just taking the smallest of the R, G or B values for each pixel, so that black or saturated red both stand out the same. I also used the average of the darkest few pixels (the obviousness setting below) for each row or column. Then I started at each edge and worked my way in until I crossed a threshold.
Here is my code:
import numpy as np
from PIL import Image

# these values set how sensitive the bounding box detection is
threshold = 200   # the average of the darkest values must be _below_ this to count (0 is darkest, 255 is lightest)
obviousness = 50  # how many of the darkest pixels to include (1 would mean a single dark pixel triggers it)

def find_line(vals):
    # implement edge detection once, use many times
    for i, tmp in enumerate(vals):
        tmp.sort()
        average = float(sum(tmp[:obviousness])) / len(tmp[:obviousness])
        if average <= threshold:
            return i
    return i  # i is left over from failed threshold finding, it is the bounds

def getbox(img):
    # get the bounding box of the interesting part of a PIL image object
    # this is done by getting the darkest of the R, G or B values of each pixel
    # and finding where the edge gets dark/colored enough
    # returns a tuple of (left, upper, right, lower)
    width, height = img.size  # for making a 2d array
    retval = [0, 0, width, height]  # values will be disposed of, but this is a black image's box
    pixels = list(img.getdata())
    vals = []  # store the value of the darkest color
    for pixel in pixels:
        vals.append(min(pixel))  # the darkest of the R, G or B values
    # make 2d array
    vals = np.array([vals[i * width:(i + 1) * width] for i in range(height)])
    # start with upper bounds
    forupper = vals.copy()
    retval[1] = find_line(forupper)
    # next, do lower bounds
    forlower = vals.copy()
    forlower = np.flipud(forlower)
    retval[3] = height - find_line(forlower)
    # left edge, same as before but rotate the data so the left edge is the top edge
    forleft = vals.copy()
    forleft = np.swapaxes(forleft, 0, 1)
    retval[0] = find_line(forleft)
    # and the right edge is the bottom edge of the rotated array
    forright = vals.copy()
    forright = np.swapaxes(forright, 0, 1)
    forright = np.flipud(forright)
    retval[2] = width - find_line(forright)
    if retval[0] >= retval[2] or retval[1] >= retval[3]:
        print("error, bounding box is not legit")
        return None
    return tuple(retval)

if __name__ == '__main__':':
    image = Image.open('cat.jpg')
    box = getbox(image)
    print("result is: ", box)
    result = image.crop(box)
    result.show()
For starters, here is a similar question, here is a related question, and here is another related question.
Here is just one idea; there are certainly other approaches. I would select an arbitrary crop edge and then measure the entropy* on either side of the line, then proceed to re-select the crop line (probably using something like a bisection method) until the entropy of the cropped-out portion falls below a defined threshold. I think you may need to resort to a brute-force root-finding method, as you will not have a good indication of when you have cropped too little. Then repeat for the remaining 3 edges.
*I recall discovering that the entropy method in the referenced website was not completely accurate, but I could not find my notes (I'm sure it was in a SO post, however.)
Edit:
Other criteria for the "emptiness" of an image portion (other than entropy) might be contrast ratio or contrast ratio on an edge-detect result.
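For reference, here is one way to measure the entropy of a candidate strip with PIL and numpy (a sketch of the idea above only; the crop-line search and the threshold are left out, and the function name and box convention are my own):

import numpy as np
from PIL import Image

def strip_entropy(img, box):
    """Shannon entropy (in bits) of the grayscale values inside box.

    box is a (left, upper, right, lower) tuple, as used by Image.crop.
    """
    gray = img.crop(box).convert('L')
    hist = np.bincount(np.asarray(gray).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())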
