MemoryError in my crop and stack program - Python

I recently started learning programming and have finally written my first program. This program reads in all the images in a folder and crops them to a certain size. The cropped images are then stacked vertically to produce a new image.
I have been running test images through my program and it appeared to work fine, but I had only been using 10 images at a time. When I finally tried the program for its intended purpose, which requires over 500 images, it crashes with a MemoryError.
Since I just started, I wrote whatever I thought would work, whether or not it was the most efficient method. If anyone has time, are there any blatant things in my code that consume way too many resources?
from PIL import Image
from natsort import natsorted
import glob
# Convert coordinate list into variables
print("Type in the coordinates for the upper left (x1,y1) and bottom right (x2,y2) points")
coordinates = list(map(int, input("Separate values with a space (x1 y1 x2 y2): ").strip().split()))[:4]
x1, y1, x2, y2 = coordinates
print("Generating image...")
# creating lists to hold the initial files and the edited files
image_list = []
cropped_images = []
# Accessing all the files using the glob module
# The function natsorted sorts values the way Windows does
# Opens all the files with the variable img in folder and adds them one by one to a list named image_list
for filename in natsorted(glob.glob("C:\\Users\\Alex\\Desktop\\One Dimensional Reaction Diffusion Program\\data\\*.tiff")):
    img = Image.open(filename)
    image_list.append(img)

# Cropping function
# For each image in image_list, crop it to the selected_region from the previous user input
# and rotate the cropped_region if it is taller than it is wide
selected_region = (x1, y1, x2, y2)
for image in image_list:
    cropped_region = image.crop(selected_region)  # unsure if this is correct
    # If the cropped area is vertical, rotate it into a horizontal position
    if (y2 - y1) > (x2 - x1):
        rotated_image = cropped_region.rotate(90, expand=1)  # expand=1 ensures the image isn't cut off while rotating
    else:
        rotated_image = cropped_region  # Does nothing if the image is already horizontal
    cropped_images.append(rotated_image)  # append each rotated_image to the list cropped_images
# Size of each individual image
widths, heights = zip(*(i.size for i in cropped_images))
# Final image dimensions
max_width = max(widths)
total_height = sum(heights)
# Create a new colored image (RGB)
new_img = Image.new('RGB', (max_width, total_height))
# Stacking function
# To offset horizontally instead, change the final image dimensions to sum(widths) and max(heights)
# and change the paste call to new_img.paste(img, (x_offset, 0))
y_offset = 0  # Initial position is 0
for img in cropped_images:  # Not sure if reusing img here conflicts with the loop above
    new_img.paste(img, (0, y_offset))  # (x, y) indicates which direction to offset in
    y_offset += img.size[1]  # Move the offset down by the height of the image just pasted
new_img.save("C:\\Users\\Alex\\Desktop\\One Dimensional Reaction Diffusion Program\\output\\stacked.tiff", quality=95)

Related

How to split an image into multiple images based on white borders between them

I need to split an image into multiple images, based on the white borders between them.
For example:
Output:
I want to do this using Python, but I don't know how to start this task.
Here is a solution for the "easy" case where we know the grid configuration. I provide this solution even though I doubt this is what you were asked to do.
In your example image of the cat, if we are given the grid configuration, 2x2, we can do:
from PIL import Image

def subdivide(file, nx, ny):
    im = Image.open(file)
    wid, hgt = im.size      # Size of input image
    w = int(wid/nx)         # Width of each subimage
    h = int(hgt/ny)         # Height of each subimage
    for i in range(nx):
        x1 = i*w            # Horizontal extent...
        x2 = x1 + w         # of subimage
        for j in range(ny):
            y1 = j*h        # Vertical extent...
            y2 = y1 + h     # of subimage
            subim = im.crop((x1, y1, x2, y2))
            subim.save(f'{i}x{j}.png')

subdivide("cat.png", 2, 2)
The above will create these images:
My previous answer depended on knowing the grid configuration of the input image. This solution does not.
The main challenge is to detect where the borders are and, thus, where the rectangles that contain the images are located.
To detect the borders, we'll look for (vertical and horizontal) image lines where all pixels are "white". Since the borders in the image are not really pure white, we'll use a value less than 255 as the whiteness threshold (WHITE_THRESH in the code.)
The gist of the algorithm is in the following lines of code:
whitespace = [np.all(gray[:,i] > WHITE_THRESH) for i in range(gray.shape[1])]
Here "whitespace" is a list of Booleans that looks like
TTTTTFFFFF...FFFFFFFFTTTTTTTFFFFFFFF...FFFFTTTTT
where "T" indicates the corresponding horizontal location is part of the border (white).
We are interested in the x-locations where there are transitions between T and F. The call to the function slices(whitespace) returns a list of tuples of indices
[(x1, x2), (x1, x2), ...]
where each (x1, x2) pair indicates the xmin and xmax location of images in the x-axis direction.
The slices function finds the "edges" where there are transitions between True and False using the exclusive-or operator and then returns the locations of the transitions as a list of tuples (pairs of indices).
Similar code is used to detect the vertical location of borders and images.
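As a small illustration of how slices() behaves (the whiteness profile below is made up purely for this example), a short boolean list yields one (start, end) pair per run of non-white locations:
whitespace = [True, True, False, False, False, True, False, False, True]
# Transitions fall at indices 2, 5, 6 and 8, so slices(whitespace) would return
# [(2, 5), (6, 8)]: one (start, end) pair for each run of non-white columns.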
The complete runnable code below takes as input the OP's image "cat.png" and:
Extracts the sub-images into 4 PNG files "fragment-0-0.png", "fragment-0-1.png", "fragment-1-0.png" and "fragment-1-1.png".
Creates a (borderless) version of the original image by pasting together the above fragments.
The runnable code and resulting images follow. The program runs in about 0.25 seconds.
from PIL import Image
import numpy as np

def slices(lst):
    """ Finds the indices where lst changes value and returns them in pairs
        lst is a list of booleans
    """
    edges = [lst[i-1] ^ lst[i] for i in range(len(lst))]
    indices = [i for i, v in enumerate(edges) if v]
    pairs = [(indices[i], indices[i+1]) for i in range(0, len(indices), 2)]
    return pairs

def extract(xx_locs, yy_locs, image, prefix="image"):
    """ Locate and save the subimages """
    data = np.asarray(image)
    for i in range(len(xx_locs)):
        x1, x2 = xx_locs[i]
        for j in range(len(yy_locs)):
            y1, y2 = yy_locs[j]
            arr = data[y1:y2, x1:x2, :]
            Image.fromarray(arr).save(f'{prefix}-{i}-{j}.png')

def assemble(xx_locs, yy_locs, prefix="image", result='composite'):
    """ Paste the subimages into a single image and save """
    wid = sum([p[1]-p[0] for p in xx_locs])
    hgt = sum([p[1]-p[0] for p in yy_locs])
    dst = Image.new('RGB', (wid, hgt))
    x = y = 0
    for i in range(len(xx_locs)):
        for j in range(len(yy_locs)):
            img = Image.open(f'{prefix}-{i}-{j}.png')
            dst.paste(img, (x, y))
            y += img.height
        x += img.width
        y = 0
    dst.save(f'{result}.png')

WHITE_THRESH = 110  # The original image borders are not actually white

image_file = 'cat.png'
image = Image.open(image_file)

# To detect the (almost) white borders, we make a grayscale version of the image
gray = np.asarray(image.convert('L'))

# Detect location of images along the x axis
whitespace = [np.all(gray[:, i] > WHITE_THRESH) for i in range(gray.shape[1])]
xx_locs = slices(whitespace)

# Detect location of images along the y axis
whitespace = [np.all(gray[i, :] > WHITE_THRESH) for i in range(gray.shape[0])]
yy_locs = slices(whitespace)

extract(xx_locs, yy_locs, image, prefix='fragment')
assemble(xx_locs, yy_locs, prefix='fragment', result='composite')
Individual fragments:
The composite image:

How to select a specific horizontal line of pixels in an image to analyze on Python?

I'm a beginner in Python and have started trying to write a program to analyze a spray image and plot a "gray scale value" to see the spray pattern.
So far I have this code:
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
filepath = "flat2.jpg"
img = Image.open(filepath).convert('L')
WIDTH, HEIGHT = img.size
pix = img.load()
data = np.asarray(img.getdata())
data = data.reshape((HEIGHT,WIDTH))
fig,ax = plt.subplots()
reduced_data = data.mean(axis=0)
ax.plot(reduced_data)
plt.show()
However, this code analyzes the entire image, and I need just a specific line, like line 329. As a workaround I tried cropping the image too, but was unsuccessful.
I'm trying to write something like the "Plot Profile" tool in ImageJ.
Obviously, I only put this code together with help from some users here.
Flat fan spray image.
The line and ImageJ plot profile.
Since after several hours nobody has supplied a correct, efficient solution, here is the crop workaround:
from PIL import Image

filepath = 'flat2.jpg'
img = Image.open(filepath).convert('L')
WIDTH, HEIGHT = img.size

LINE = 329  # <---- put your desired line number here
img.crop((0, LINE, WIDTH, LINE + 1)).save('crop.png')

img = Image.open('crop.png')
# do stuff
# WIDTH, HEIGHT = img.size
# pix = img.load()
# etc
It should work, but it's not exactly a correct and efficient solution. I believe it should be done via numpy etc.
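For reference, a numpy-based version of the same idea might look like this (a minimal sketch, reusing the file name and line number from the question):
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt

img = Image.open('flat2.jpg').convert('L')
data = np.asarray(img)   # array of shape (HEIGHT, WIDTH)
LINE = 329               # desired row
row = data[LINE, :]      # grayscale values along that horizontal line

plt.plot(row)
plt.show()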
As for the crop() function, it's a very simple thing. It just takes a rectangular area (box) from a given image. The four numbers inside the brackets (x1, y1, x2, y2) are a tuple of coordinates of this box: top-left corner (x1, y1), bottom-right corner (x2, y2). The only possible trouble is if any of these coordinates fall outside the image size.
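For example (the box values here are made up purely for illustration):
from PIL import Image

img = Image.open('flat2.jpg')
box = (10, 20, 110, 70)   # top-left (10, 20), bottom-right (110, 70)
region = img.crop(box)    # region.size is (100, 50)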

How to make a shape larger or smaller without changing the resolution of the image using OpenCV or PIL in Python

I would like to be able to make a certain shape in either a PIL image or an OpenCV image 3 times larger or smaller, without changing the resolution of the image or changing the shape of the shape I want to enlarge. I have tried using OpenCV's dilation method, but that is not its intended use, plus it changed the shape of the image. For example:
Thanks.
Here's a way of doing it:
find the interesting shape, i.e. non-white ROI area
extract it
scale it up by a factor
clear the original image to white
paste the scaled ROI back into image with same centre
#!/usr/bin/env python3
import cv2
import numpy as np

if __name__ == "__main__":
    # Open image
    orig = cv2.imread('image.png', cv2.IMREAD_COLOR)

    # Get extent of interesting part, i.e. non-white part
    y, x, _ = np.nonzero(~orig)
    y0, y1 = np.min(y), np.max(y)   # top and bottom rows
    x0, x1 = np.min(x), np.max(x)   # left and right cols
    h, w = y1-y0, x1-x0             # height and width
    ROI = orig[y0:y1, x0:x1]        # extract ROI
    cv2.imwrite('ROI.png', ROI)     # DEBUG only

    # Upscale ROI
    factor = 3
    scaledROI = cv2.resize(ROI, (w*factor, h*factor), interpolation=cv2.INTER_NEAREST)
    newH, newW = scaledROI.shape[:2]

    # Clear original image to white
    orig[:] = [255, 255, 255]

    # Get centre of original shape, and position of top-left of ROI in output image
    cx, cy = (x0 + x1)//2, (y0 + y1)//2
    top = cy - newH//2
    left = cx - newW//2

    # Paste in rescaled ROI
    orig[top:top+newH, left:left+newW] = scaledROI
    cv2.imwrite('result.png', orig)
That transforms this:
to this:
Puts me in mind of a pantograph:

combining two images, multiplying RGB values to vignette an image Jython/Python

http://i.stack.imgur.com/AAtUD.jpg
http://i.stack.imgur.com/eouLY.jpg
images to use for code.
The end result I am trying to achieve is to combine the vignette picture and the CGI picture. Because the vignette image's RGB values are darker towards the edges, I need to multiply the original image's corresponding pixels by those smaller numbers towards the edges, which should give the picture a darker frame around its edges.
Here's the code so far:
def addVignette(inputPic, vignette):
    # create empty canvas to combine images correctly
    canvas = makeEmptyPicture(getWidth(inputPic), getHeight(inputPic))
    for x in range(0, getWidth(inputPic)):
        for y in range(0, getHeight(inputPic)):
            px = getPixel(canvas, x, y)
            inputPx = getPixel(inputPic, x, y)
            vignettePx = getPixel(vignette, x, y)
            # make a new color from these values
            newColour = getNewColorValues(vignettePx, inputPx)
            # then assign this new color to the current pixel of the input image
            setColor(px, newColour)
    explore(canvas)

def getNewColourValues(inputPx, vignettePx):
    inputRed = getRed(inputPx)
    vignetteRed = getRed(vignettePx)
    inputGreen = getGreen(inputPx)
    vignetteGreen = getGreen(vignettePx)
    inputBlue = getBlue(inputPx)
    vignetteBlue = getBlue(vignettePx)
    newRGB = setColor(inputPx, inputRed, inputGreen, inputBlue)*(vignettePx, vignetteRed, vignetteGreen, vignetteBlue)
    newColour = makeColor(newRGB)
    return newColour

def newPicture(newColour):
    folder = pickAFolder()
    filename = requestString("enter file name: ")
    path = folder + filename + ".jpg"
    writePictureTo(inputPic, path)
When testing, use the vignette_profile image first, then the CGI image. Also, saving the image doesn't work even though I've been trying to get it to work. Any help will be appreciated.
Saving the image
Let me start with saving the image. From the code you posted, you never actually call the newPicture() function, which is why it's not saving the image. I also noticed that the newPicture() function doesn't take a reference to the new image.
Please see my solution to this below. I have changed the function name from newPicture() to saveNewImage().
Adding the Vignette
Please see the comments for the code block denoted by ******* in the getNewColorValues() function.
You need to run the main() function for this script to work.
# Main function.
# *** THIS FUNCTION NEEDS TO BE CALLED IN THE CONSOLE ***
# i.e. >>> main()
def main():
    # Choose the files you wish to use
    inputFile = pickAFile()
    vignetteFile = pickAFile()

    # Turn both files into picture objects
    inputPic = makePicture(inputFile)
    vignette = makePicture(vignetteFile)

    # addVignette() combines the input picture and vignette together
    # and returns the result as a new picture object
    newImage = addVignette(inputPic, vignette)

    # saveNewImage() stores the new image as a file
    saveNewImage(newImage)

# main() calls this function to add the input picture and vignette together
def addVignette(inputPic, vignette):
    # Create empty canvas
    canvas = makeEmptyPicture(getWidth(inputPic), getHeight(inputPic))

    # Iterate through all the pixels of the input image. x and y are
    # used as the current coordinates of the pixel
    for x in range(0, getWidth(inputPic)):
        for y in range(0, getHeight(inputPic)):
            # Get the current pixels of inputPic and vignette
            inputPixel = getPixel(inputPic, x, y)
            vignettePixel = getPixel(vignette, x, y)

            # getNewColorValues() makes a new color from those values
            newColor = getNewColorValues(inputPixel, vignettePixel)

            # Assign this new color to the current pixel of the canvas
            px = getPixel(canvas, x, y)
            setColor(px, newColor)

    # Show the result of combining the input picture with the vignette
    explore(canvas)

    # Return the new image to the main() function
    return canvas

# Called from the addVignette() function to add the color values from
# the input picture and vignette together. It returns a new color
# object
def getNewColorValues(inputPixel, vignettePixel):
    # Get the individual colour values
    inputRed = getRed(inputPixel)
    vignetteRed = getRed(vignettePixel)
    inputGreen = getGreen(inputPixel)
    vignetteGreen = getGreen(vignettePixel)
    inputBlue = getBlue(inputPixel)
    vignetteBlue = getBlue(vignettePixel)

    # ***********************************************************
    # Most important part. This determines whether the pixel is darkened
    # and by how much. The darker the vignette pixel, the larger
    # (255 - vignetteRed) becomes, so more is subtracted from the input
    # colour; the lighter the vignette pixel, the less is taken away
    # from the input pixel.
    newR = inputRed - (255 - vignetteRed)
    newG = inputGreen - (255 - vignetteGreen)
    newB = inputBlue - (255 - vignetteBlue)
    # ***********************************************************

    newC = makeColor(newR, newG, newB)
    return newC

# Called from the main() function in order to save the new image
def saveNewImage(newImage):
    folder = pickAFolder()
    filename = requestString("Please enter file name: ")
    path = folder + filename + ".jpg"
    writePictureTo(newImage, path)
You can also try doing this in OpenCV. Single-pixel manipulation and file I/O are pretty straightforward.
img = cv2.imread('test.jpg')
pixel = img[10,10]
I've never had any issues with file I/O in OpenCV. Chances are it's a permission error or excess white space.
cv2.imwrite('messigray.png',img)
You can also do some easy image previewing, which in this case would let you experiment with the output a little more.
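A minimal previewing sketch might look like this (assuming a desktop environment with a display; the file and window names are arbitrary):
import cv2

img = cv2.imread('test.jpg')
cv2.imshow('preview', img)   # open a window showing the image
cv2.waitKey(0)               # wait for any key press
cv2.destroyAllWindows()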

In Python, Python Image Library 1.1.6, how can I expand the canvas without resizing?

I am probably looking for the wrong thing in the handbook, but I am looking to take an image object and expand it without resizing (stretching/squishing) the original image.
Toy example: imagine a blue rectangle, 200 x 100, then I perform some operation and I have a new image object, 400 x 300, consisting of a white background upon which a 200 x 100 blue rectangle rests. Bonus if I can control in which direction this expands, or the new background color, etc.
Essentially, I have an image to which I will be adding iteratively, and I do not know what size it will be at the outset.
I suppose it would be possible for me to grab the original object, make a new, slightly larger object, paste the original on there, draw a little more, then repeat. It seems like it might be computationally expensive. However, I thought there would be a function for this, as I assume it is a common operation. Perhaps I assumed wrong.
The ImageOps.expand function will expand the image, but it adds the same amount of pixels in each direction.
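For illustration, ImageOps.expand adds a uniform border on all four sides (the border width and fill colour here are made up):
from PIL import Image, ImageOps

img = Image.open('original.png')
padded = ImageOps.expand(img, border=50, fill='white')   # 50 px added on every side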
The best way is simply to make a new image and paste:
newImage = Image.new(mode, (newWidth,newHeight))
newImage.paste(srcImage, (x1,y1,x1+oldWidth,y1+oldHeight))
If performance is an issue, make your original image bigger than needed and crop it after the drawing is done.
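A rough sketch of that idea (the sizes here are illustrative, not from the answer):
from PIL import Image

canvas = Image.new('RGB', (4000, 4000), 'white')   # allocate more space than you expect to need
# ... paste or draw onto canvas, keeping track of the extent actually used ...
used_width, used_height = 1300, 700                # hypothetical final extent
final = canvas.crop((0, 0, used_width, used_height))
final.save('result.png')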
Based on interjay's answer:
#!/usr/bin/env python
from PIL import Image
import math

def resize_canvas(old_image_path="314.jpg", new_image_path="save.jpg",
                  canvas_width=500, canvas_height=500):
    """
    Resize the canvas of old_image_path.

    Store the new image in new_image_path. Center the image on the new canvas.

    Parameters
    ----------
    old_image_path : str
    new_image_path : str
    canvas_width : int
    canvas_height : int
    """
    im = Image.open(old_image_path)
    old_width, old_height = im.size

    # Center the image
    x1 = int(math.floor((canvas_width - old_width) / 2))
    y1 = int(math.floor((canvas_height - old_height) / 2))

    mode = im.mode
    if len(mode) == 1:  # L, 1
        new_background = (255)
    if len(mode) == 3:  # RGB
        new_background = (255, 255, 255)
    if len(mode) == 4:  # RGBA, CMYK
        new_background = (255, 255, 255, 255)

    newImage = Image.new(mode, (canvas_width, canvas_height), new_background)
    newImage.paste(im, (x1, y1, x1 + old_width, y1 + old_height))
    newImage.save(new_image_path)

resize_canvas()
You might consider a rather different approach to your image... build it out of tiles of a fixed size. That way, as you need to expand, you just add new image tiles. When you have completed all of your computation, you can determine the final size of the image, create a blank image of that size, and paste the tiles into it. That should reduce the amount of copying you're looking at for completing the task.
(You'd likely want to encapsulate such a tiled image into an object that hid the tiling aspects from the other layers of code, of course.)
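A minimal sketch of that tiling idea, with illustrative names and a fixed tile size (none of this is from the original answer):
from PIL import Image

TILE = 256
tiles = {}   # maps (col, row) -> tile image, created on demand as the drawing grows

def get_tile(col, row):
    # Create a blank white tile the first time a grid position is touched
    if (col, row) not in tiles:
        tiles[(col, row)] = Image.new('RGB', (TILE, TILE), 'white')
    return tiles[(col, row)]

def assemble():
    # Once drawing is finished, size the final image from the tiles used and paste them in
    cols = max(c for c, r in tiles) + 1
    rows = max(r for c, r in tiles) + 1
    final = Image.new('RGB', (cols * TILE, rows * TILE), 'white')
    for (col, row), tile in tiles.items():
        final.paste(tile, (col * TILE, row * TILE))
    return final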
This code will enlarge a smaller image, preserving aspect ratio, then center it on a standard-sized canvas. It also preserves transparency, or defaults to a gray background.
Tested with P mode PNG files.
For testing, the loop ends with a debug final.show() and a break. Remove those lines and the hash on final.save(...) to loop over all images and save them.
The canvas ratio could be parameterized for more flexibility, but it served my purpose.
"""
Resize ... and reconfigures. images in a specified directory
Use case: Images of varying size, need to be enlarged to exaxtly 1200 x 1200
"""
import os
import glob
from PIL import Image
# Source directory plus Glob file reference (Windows)
source_path = os.path.join('C:', os.sep, 'path', 'to', 'source', '*.png')
# List of UNC Image File paths
images = glob.glob(source_path)
# Destination directory of modified image (Windows)
destination_path = os.path.join('C:', os.sep, 'path', 'to', 'destination')
for image in images:
original = Image.open(image)
# Retain original attributes (ancillary chunks)
info = original.info
# Retain original mode
mode = original.mode
# Retain original palette
if original.palette is not None:
palette = original.palette.getdata()[1]
else:
palette = False
# Match original aspect ratio
dimensions = original.getbbox()
# Identify destination image background color
if 'transparency' in info.keys():
background = original.info['transparency']
else:
# Image does not have transparency set
print(image)
background = (64)
# Get base filename and extension for destination
filename, extension = os.path.basename(image).split('.')
# Calculate matched aspect ratio
if dimensions[2] > dimensions[3]:
width = int(1200)
modifier = width / dimensions[2]
length = int(dimensions[3] * modifier)
elif dimensions[3] > dimensions[2]:
length = int(1200)
modifier = length / dimensions[3]
width = int(dimensions[2] * modifier)
else:
width, length = (1200, 1200)
size = (width, length)
# Set desired final image size
canvas = (1200, 1200)
# Calculate center position
position = (
int((1200 - width)/2),
int((1200 - length)/2),
int((1200 - width)/2) + width,
int((1200 - length)/2) + length
)
# Enlarge original image proportionally
resized = original.resize(size, Image.LANCZOS)
# Then create sized canvas
final = Image.new(mode, canvas, background)
# Replicate original properties
final.info = info
# Replicate original palatte
if palette:
final.putpalette(palette)
# Cemter paste resized image to final canvas
final.paste(resized, position)
# Save final image to destination directory
final.show()
#final.save("{}\\{}.{}".format(destination_path, filename, extension))
break
