How to add a stamp to a PDF with Python (pypdf)

I am trying to add a stamp to a PDF and have done so like this:
import PyPDF2 as pypdf
reader = pypdf.PdfReader("/home/littlejiver/loadprocesser/images/PPB - DIFRTACC.pdf")
image_page = reader.pages[0]
writer = pypdf.PdfWriter()
reader = pypdf.PdfReader("/home/littlejiver/loadprocesser/Bills/Bison/INV-5163148A.PDF")
content_page = reader.pages[0]
mediabox = content_page.mediabox
content_page.merge_page(image_page)
content_page.mediabox = mediabox
writer.add_page(content_page)
with open("/home/littlejiver/loadprocesser/Bills/Bison/INV-5163148A.PDF", "wb") as fp:
    writer.write(fp)
However, I am trying to place the stamp at a specific location using coordinates.
How would I do this?

You can use the RectangleObject class (in PyPDF2.generic) to specify the position and size of the stamp on the PDF page. RectangleObject takes four values: lower-left x, lower-left y, upper-right x, and upper-right y. These represent the lower-left and upper-right corners of a rectangle, in points; 1 point is equal to 1/72 inch.
Here's an updated version of your code:
import PyPDF2 as pypdf
reader = pypdf.PdfReader("/home/littlejiver/loadprocesser/images/PPB - DIFRTACC.pdf")
image_page = reader.pages[0]
writer = pypdf.PdfWriter()
reader = pypdf.PdfReader("/home/littlejiver/loadprocesser/Bills/Bison/INV-5163148A.PDF")
content_page = reader.pages[0]
mediabox = content_page.mediabox
# Specify the position and size of the stamp
left = 100 # x coordinate of the lower left corner in points
bottom = 100 # y coordinate of the lower left corner in points
right = 200 # x coordinate of the upper right corner in points
top = 200 # y coordinate of the upper right corner in points
rect = pypdf.generic.RectangleObject([left, bottom, right, top])
image_page.artbox = rect  # note: lowercase 'artbox' in PyPDF2 3.x / pypdf
content_page.merge_page(image_page)
content_page.mediabox = mediabox
writer.add_page(content_page)
with open("/home/littlejiver/loadprocesser/Bills/Bison/INV-5163148A.PDF", "wb") as fp:
    writer.write(fp)
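Be aware that setting a box attribute on the stamp page may not actually move the merged content: merge_page draws the overlay at its original coordinates, and the art box is only metadata. On recent pypdf/PyPDF2 (3.x) releases, the supported way to place an overlay at given coordinates is a transformation. A minimal sketch under that assumption, reusing the paths above but writing to an illustrative output file:
import PyPDF2 as pypdf
reader = pypdf.PdfReader("/home/littlejiver/loadprocesser/images/PPB - DIFRTACC.pdf")
image_page = reader.pages[0]
reader = pypdf.PdfReader("/home/littlejiver/loadprocesser/Bills/Bison/INV-5163148A.PDF")
content_page = reader.pages[0]
writer = pypdf.PdfWriter()
# Shift the stamp 100 points right and 100 points up, then merge it onto the page
op = pypdf.Transformation().translate(tx=100, ty=100)
content_page.merge_transformed_page(image_page, op)
writer.add_page(content_page)
with open("INV-5163148A-stamped.PDF", "wb") as fp:  # illustrative output name
    writer.write(fp)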


Slice image in dynamic number of squares (grid) and save corner coordinates of those squares in list

I'm reading in an image with the Pillow library in Python. I want to "slice" it into squares, and save the corner coordinates of each of the squares in a list. For example in the image below, I would like to save the corner coordinates for square 15. (Top left corner is 0,0)
The first thing I do after reading in the image is calculate the modulus of the height and width in pixels by the number of slices, and crop the image so that each square gets the same, integer number of pixels.
from PIL import Image, ImageDraw, ImageFont
fileName = 'eyeImg_86.png'
img = Image.open(fileName)
vertical_slices = 8
horizontal_slices = 4
height = img.height
width = img.width
new_height = height - (height % horizontal_slices)
new_width = width - (width % vertical_slices)
img = img.crop((0, 0, new_width, new_height))
Then I calculate the size in pixels of each vertical and horizontal step.
horizontal_step = int(new_width / vertical_slices)
vertical_step = int(new_height / horizontal_slices)
Then I loop over the ranges from 0 to the total number of vertical and horizontal slices and append the points to a nested list (each inner list is a row):
points = []
for i in range(horizontal_slices+1):
    row = []
    for j in range(vertical_slices+1):
        row.append((horizontal_step*j, vertical_step*i))
    points.append(row)
Here's where I'm struggling to draw, and to calculate what I need, inside each of these squares. This is what happens if I try to loop over all those points and draw them on the image:
with Image.open(fileName) as im:
    im = im.convert(mode='RGB')
    draw = ImageDraw.Draw(im)
    for i in range(horizontal_slices+1):
        if i < horizontal_slices:
            for j in range(vertical_slices+1):
                if j < vertical_slices:
                    draw.line([points[i][j], points[i+1][j-1]], fill=9999999)
Is there an easy way that I can dynamically give it the rows and columns and save each of the square coordinates to a list of tuples for example?
I'd like to both be able to draw them on top of the original image, and also calculate the number of black pixels inside each of the squares.
EDIT: To add some clarification, since the number of rows and columns of the grid is arbitrary, it will likely not be made of squares but rectangles. Furthermore, the numbering of these rectangles should be done row-wise from left to right, like reading.
Thank you
There were (from my understanding) inconsistencies in your use of "horizontal/vertical". I also removed the points list, since you can easily convert a rectangle number to its upper-left corner coordinates (see the function at the end), and I draw the grid directly by drawing all horizontal lines and all vertical lines.
from PIL import Image, ImageDraw, ImageFont
fileName = 'test.png'
img = Image.open(fileName)
vertical_slices = 8
horizontal_slices = 6
height = img.height
width = img.width
new_height = height - (height % vertical_slices)
new_width = width - (width % horizontal_slices)
img = img.crop((0, 0, new_width, new_height))
horizontal_step = int(new_width / horizontal_slices)
vertical_step = int(new_height / vertical_slices)
# drawing the grid
img = img.convert(mode='RGB')
pix = img.load()
draw = ImageDraw.Draw(img)
for i in range(horizontal_slices+1):
    draw.line([(i*horizontal_step,0), (i*horizontal_step,new_height)], fill=9999999)
for j in range(vertical_slices+1):
    draw.line([(0,j*vertical_step), (new_width,j*vertical_step)], fill=9999999)
# with rectangles being numbered from 1 (upper left) to v_slices*h_slices (lower right) in reading order
def num_to_ul_corner_coords(num):
    i = (num-1) % horizontal_slices
    j = (num-1) // horizontal_slices
    return (i*horizontal_step, j*vertical_step)
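For example, with the 6-wide grid above, rectangle 15 in reading order sits in the third column of the third row:
print(num_to_ul_corner_coords(15))  # (2*horizontal_step, 2*vertical_step)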
This should do what you want, provided your picture is pure black and white:
def count_black_pixels(num):
    cnt = 0
    x, y = num_to_ul_corner_coords(num)
    for i in range(horizontal_step):
        for j in range(vertical_step):
            if pix[x+i, y+j] == (0,0,0):
                cnt += 1
    perc = round(cnt/(horizontal_step*vertical_step)*100, 2)
    return cnt, perc
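If speed matters, the same count can be done with NumPy slicing instead of a per-pixel loop; a minimal sketch under the same setup (the arr name is illustrative):
import numpy as np
arr = np.array(img)  # RGB array of the cropped image; rows are y, columns are x
def count_black_pixels_np(num):
    x, y = num_to_ul_corner_coords(num)
    tile = arr[y:y+vertical_step, x:x+horizontal_step]
    cnt = int(np.all(tile == 0, axis=-1).sum())  # pixels whose R, G and B are all 0
    perc = round(cnt/(horizontal_step*vertical_step)*100, 2)
    return cnt, perc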

Python - OpenCV connectedComponents - user input + selecting feature/label

I have two images, let's call them image 1 and image 2. I can use the following to select a random feature in image 1 and display it:
import random
from pathlib import Path
import cv2
import numpy as np
import matplotlib.pyplot as plt

segall_prev = cv2.imread(str(Path(seg_seg_output_path, prev_filename)), 0)
segall_prev = np.uint8(segall_prev)
ret, thresh = cv2.threshold(segall_prev, 127, 255, 0)
numLabels, labelImage, stats, centroids = cv2.connectedComponentsWithStats(thresh, 8)
# Pick random prev-seg feature
random_prev_seg = random.randint(0, np.amax(labelImage))
i = np.unique(labelImage)[random_prev_seg]
pixels = np.argwhere(labelImage == random_prev_seg)
labelMask = np.zeros(thresh.shape, dtype="uint8")
labelMask[labelImage == i] = 1
numPixels = cv2.countNonZero(labelMask)
# Display chosen feature from prev_seg image only
fig_segall_prev = plt.figure()
fig_segall_prev_ = plt.imshow(labelMask)
plt.title(prev_filename + ' prev_seg')
Which will display an image such as:
So the idea is that that is the feature from the previous frame (image 1), and then the user will select where that feature is in the next frame (image 2) - basically tracing the object across frames.
# Display seg image and allow click
seg_ret, seg_thresh = cv2.threshold(segall, 127, 255, 0)
seg_numLabels, seg_labelImage, seg_stats, seg_centroids = cv2.connectedComponentsWithStats(seg_thresh, 8)
mutable_object = {}
def onclick(event):
    # Capture click pixel location
    X_coordinate = int(event.xdata)
    Y_coordinate = int(event.ydata)
    mutable_object['click'] = X_coordinate
    print('x= ' + str(X_coordinate))
    print('y= ' + str(Y_coordinate))
    # Compare captured location with feature locations
    x = np.where(seg_labelImage == 1)
    #print(x[0])
    if X_coordinate in x[0] and Y_coordinate in x[1]:
        print('yes')
fig_segall = plt.figure()
cid = fig_segall.canvas.mpl_connect('button_press_event', onclick)
fig_segall_ = plt.imshow(seg_labelImage)
plt.title(filename + ' seg')
When image 2 is shown it is as follows:
So my question is, how do I go about capturing the location where the user clicks in the second image and then check whether those x-y coordinates correspond to where a feature is as found by cv2.connectedComponentsWithStats? I then need to save just that feature as a separate image and then use the location of the selected feature for use in the corresponding colour image.
If anyone can help, or suggest ways to improve the code - because it is very messy as I've been trying to figure this out for hours now... then that would be much appreciated. Thanks!
The solution (thanks to asdf):
seg_numLabels, seg_labelImage, seg_stats, seg_centroids = cv2.connectedComponentsWithStats(seg_thresh, 8)
selected_features = {}
def onclick(event):
    # Capture click pixel location
    X_coordinate = int(event.xdata)
    Y_coordinate = int(event.ydata)
    print(f'{X_coordinate=}, {Y_coordinate=}')
    obj_at_click = seg_labelImage[Y_coordinate, X_coordinate]  # component label at x/y, 0 is background, above are components
    if obj_at_click != 0:  # not background
        # clicked on a feature: boolean mask of only the selected feature
        selected_feature = seg_labelImage == obj_at_click
        selected_features[filename] = selected_feature
        print(f'Saved feature number {obj_at_click}')
    else:
        print('Background clicked')
# Display seg image
fig_segall = plt.figure()
fig_segall.canvas.mpl_connect('button_press_event', onclick)
plt.imshow(segall)
plt.title(filename + ' seg')
cv2.connectedComponentsWithStats labels all connected components of a binary image with numbers from 1 to n_components (0 is background). So you just need to check if the x and y coordinates of the mouse click lie within a segmented component. The following code should demonstrate how to get a binary mask of the component that was clicked:
selected_features = {}
img_name = 'example1'
def onclick(event):
    # Capture click pixel location
    X_coordinate = int(event.xdata)
    Y_coordinate = int(event.ydata)
    print(f'{X_coordinate=}, {Y_coordinate=}')
    obj_at_click = seg_labelImage[Y_coordinate, X_coordinate]  # component label at x/y, 0 is background, above are components
    if obj_at_click != 0:  # not background
        # clicked on a feature
        selected_feature = seg_labelImage == obj_at_click  # get only the selected feature
        selected_features[img_name] = selected_feature
        print(f'Saved feature number {obj_at_click}')
    else:
        print('Background clicked')
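To then save just the clicked feature as a separate image (one of the goals in the question), the boolean mask can be written out directly; a minimal sketch, assuming selected_feature from the handler above and an illustrative output path:
feature_img = selected_feature.astype(np.uint8) * 255  # white feature on black background
cv2.imwrite('selected_feature.png', feature_img)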
I can't test the code and I've never worked with matplotlib events, so it could be that the returned click coordinates need to be offset or scaled to get the correct image coordinates. You might want to display the image at its original size and without tick labels etc. This Stack Overflow answer might help you with this.
Let me know if I didn't get your question right!

Make objects of an image the closest to each other

I don't have much experience with PIL, and I've got these images edited from a stack of microscopy cell images; each one is a mask of size 30x30. I've been struggling to put these cells on a black background as close as possible to each other without overlapping.
My code is the following:
def spread_circles(circles, rad, iterations, step):
    radsqr = rad**2
    for i in range(iterations):
        for ix, c in enumerate(circles):
            vecs = c - circles
            dists = np.sum(vecs**2, axis=1)
            if len(dists) > 0:
                push = (vecs[dists<radsqr, :].T * dists[dists<radsqr]).T
                push = np.sum(push, axis=0)
                pushmag = np.sum(push*push)**0.5
                if pushmag > 0:
                    push = push/pushmag*step
                    circles[ix] += push
    return circles
def gen_image(sample, n_iter, height=850, width=850, max_shape=30, num_circles=150):
    circles = np.random.uniform(low=max_shape, high=height-max_shape, size=(num_circles,2))
    circles = spread_circles(circles, max_shape, n_iter, 1).astype(int)
    img = Image.new(mode='F', size=(height,width), color=0).convert('RGBA')
    final1 = Image.new("RGBA", size=(height,width))
    final1.paste(img, (0,0), img)
    for n, c in enumerate(circles):
        foreground = sample[n]
        final1.paste(foreground, (c[0],c[1]), foreground)
    return final1
But it's hard to avoid overlapping if I do few iterations, and if I increase them the cells end up too sparse, like this:
What I want is something similar to what's inside the red circles that I drew:
I need them as close as they can get, almost like tiles. How can I do that?
I have started thinking about this and have got a couple of strategies implemented. Anyone else fancying some fun is more than welcome to borrow, steal, appropriate or hack any chunks of my code that they can use! I'll probably play some more tomorrow.
#!/usr/bin/env python3
from PIL import Image, ImageOps
import numpy as np
from glob import glob
import math
def checkCoverage(im):
    """Determines percentage of image that is cells rather than background"""
    N = np.count_nonzero(im)
    return N * 100 / im.size
def loadImages():
    """Load all cell images in current directory into list of trimmed Numpy arrays"""
    images = []
    for filename in glob('*.png'):
        # Open and convert to greyscale
        im = Image.open(filename).convert('L')
        # Trim to bounding box
        im = im.crop(im.getbbox())
        images.append(np.array(im))
    return images
def Strategy1():
    """Get largest image and pad all images to that size - at least it will tessellate perfectly"""
    images = loadImages()
    N = len(images)
    # Find height of tallest image and width of widest image
    maxh = max(im.shape[0] for im in images)
    maxw = max(im.shape[1] for im in images)
    # Determine how many images we will pack across and down the output image - could be improved
    Nx = int(math.sqrt(N))+1
    Ny = int(N/Nx)+1
    print(f'Padding {N} images each to height:{maxh} x width:{maxw}')
    # Create output image
    res = Image.new('L', (Nx*maxw, Ny*maxh), color=0)
    # Pack all images from list onto regular grid
    x, y = 0, 0
    for im in images:
        this = Image.fromarray(im)
        h, w = im.shape
        # Pack this image into top-left of its grid-cell, unless
        # a) in first row, in which case pack to bottom
        # b) in first col, in which case pack to right
        thisx = x*maxw
        thisy = y*maxh
        if y==0:
            thisy += maxh - h
        if x==0:
            thisx += maxw - w
        res.paste(this, (thisx, thisy))
        x += 1
        if x==Nx:
            x = 0
            y += 1
    # Trim extraneous black edges
    res = res.crop(res.getbbox())
    # Save as JPEG so we don't find it as a PNG in next strategy
    res.save('strategy1.jpg')
    cov = checkCoverage(np.array(res))
    print(f'Strategy1 coverage: {cov}')
def Strategy2():
    """Rotate all images to portrait (tall rather than wide) and order by height so we tend to stack equal height images side-by-side"""
    tmp = loadImages()
    # Recreate list with all images in portrait format, i.e. tall
    portrait = []
    for im in tmp:
        if im.shape[0] >= im.shape[1]:
            # Already portrait, add as-is
            portrait.append(im)
        else:
            # Landscape, so rotate
            portrait.append(np.rot90(im))
    images = sorted(portrait, key=lambda x: x.shape[0], reverse=True)
    N = len(images)
    maxh, maxw = 31, 31
    # Determine how many images we will pack across and down the output image
    Nx = int(math.sqrt(N))+1
    Ny = int(N/Nx)+1
    print(f'Packing images by height')
    # Create output image
    resw, resh = Nx*maxw, Ny*maxh
    res = Image.new('L', (resw, resh), color=0)
    # Pack all from list
    xpos, ypos = 0, 0
    # Pack first row L->R, second row R->L and alternate
    packToRight = True
    for im in images:
        thish, thisw = im.shape  # Numpy shape is (rows, cols), i.e. (height, width)
        this = Image.fromarray(im)
        if packToRight:
            if xpos+thisw < resw:
                # If it fits to the right, pack it there
                res.paste(this, (xpos, ypos))
                xpos += thisw
            else:
                # Else start a new row, pack at right end and continue packing to left
                packToRight = False
                res.paste(this, (resw-thisw, ypos))
                ypos = res.getbbox()[3]
        else:
            if xpos>thisw:
                # If it fits to the left, pack it there
                res.paste(this, (xpos-thisw, ypos))
                xpos -= thisw
            else:
                # Else start a new row, pack at left end and continue packing to right
                ypos = res.getbbox()[3]
                packToRight = True
                res.paste(this, (0, ypos))
    # Trim any black edges
    res = res.crop(res.getbbox())
    # Save as JPEG so we don't find it as a PNG in next strategy
    res.save('strategy2.jpg')
    cov = checkCoverage(np.array(res))
    print(f'Strategy2 coverage: {cov}')
Strategy1()
Strategy2()
Strategy1 gives this at 42% coverage:
Strategy2 gives this at 64% coverage:

Calculate pixel values from latitude/longitude coordinates (using matplotlib Basemap)

I need to convert map coordinates into pixels (in order to make a clickable map in html).
Here is a sample map (made using the Basemap package from matplotlib). I have put some labels on it and attempted to calculate the midpoints of the labels in pixels:
#!/usr/bin/env python
# -*- coding: utf-8 -*-

## Step 0: some points to plot
names = [u"Reykjavík", u"Höfn", u"Húsavík"]
lats = [64.133333, 64.25, 66.05]
lons = [-21.933333, -15.216667, -17.316667]

## Step 1: draw a map using matplotlib/Basemap
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

M = Basemap(projection='merc', resolution='c',
            llcrnrlat=63, urcrnrlat=67,
            llcrnrlon=-24, urcrnrlon=-13)
x, y = M(lons, lats)  # transform coordinates according to projection
boxes = []
for xa, ya, name in zip(x, y, names):
    box = plt.text(xa, ya, name,
                   bbox=dict(facecolor='white', alpha=0.5))
    boxes.append(box)
M.bluemarble()  # a bit fuzzy at this resolution...
plt.savefig('test.png', bbox_inches="tight", pad_inches=0.01)

# Step 2: get the coordinates of the textboxes in pixels and calculate the
# midpoints
F = plt.gcf()  # get current figure
R = F.canvas.get_renderer()
midpoints = []
for box in boxes:
    bb = box.get_window_extent(renderer=R)
    midpoints.append((int((bb.p0[0] + bb.p1[0]) / 2),
                      int((bb.p0[1] + bb.p1[1]) / 2)))
These calculated points are approximately in the correct relative positions to each other, but do not coincide with the true points. The following code snippet should put a red dot on the midpoint of each label:
# Step 3: use PIL to draw dots on top of the labels
from PIL import Image, ImageDraw

im = Image.open("test.png")
draw = ImageDraw.Draw(im)
for x, y in midpoints:
    y = im.size[1] - y  # PIL counts rows from top not bottom
    draw.ellipse((x-5, y-5, x+5, y+5), fill="#ff0000")
im.save("test.png", "PNG")
Red dots should be in the middle of the labels.
I guess that the error comes in where I extract the coordinates of the text boxes (in Step #2). Any help much appreciated.
Notes
Perhaps the solution is something along the lines of this answer?
Two things are happening to cause your pixel positions to be off.
1. The dpi used to calculate the text positions is different from the dpi used to save the figure.
2. The bbox_inches option in the savefig call eliminates a lot of white space. You don't take this into account when drawing your circles with PIL (or when checking where someone clicked). You also add padding in that savefig call, which you may need to account for if it is large (as I show in my example below); it probably won't matter if you keep using 0.01.
To fix the first issue, force the figure and the savefig call to use the same dpi.
To fix the second issue, record the pixel position of the (0,0) point of the axes (in Axes units), and shift your text positions accordingly.
Here's a slightly modified version of your code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-

## Step 0: some points to plot
names = [u"Reykjavík", u"Höfn", u"Húsavík"]
lats = [64.133333, 64.25, 66.05]
lons = [-21.933333, -15.216667, -17.316667]

## Step 1: draw a map using matplotlib/Basemap
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

# predefined dpi
FIGDPI = 80
# set dpi of figure, so that all calculations use this value
plt.gcf().set_dpi(FIGDPI)

M = Basemap(projection='merc', resolution='c',
            llcrnrlat=63, urcrnrlat=67,
            llcrnrlon=-24, urcrnrlon=-13)
x, y = M(lons, lats)  # transform coordinates according to projection
boxes = []
for xa, ya, name in zip(x, y, names):
    box = plt.text(xa, ya, name,
                   bbox=dict(facecolor='white', alpha=0.5))
    boxes.append(box)
M.bluemarble()  # a bit fuzzy at this resolution...

# predefine padding in inches
PADDING = 2
# force dpi to same value you used in your calculations
plt.savefig('test.png', bbox_inches="tight", pad_inches=PADDING, dpi=FIGDPI)

# document shift due to loss of white space and added padding
origin = plt.gca().transAxes.transform((0,0))
padding = [FIGDPI*PADDING, FIGDPI*PADDING]
Step #2 is unchanged
Step #3 takes account of the origin
# Step 3: use PIL to draw dots on top of the labels
from PIL import Image, ImageDraw

im = Image.open("test.png")
draw = ImageDraw.Draw(im)
for x, y in midpoints:
    # deal with shift
    x = x - origin[0] + padding[0]
    y = y - origin[1] + padding[1]
    y = im.size[1] - y  # PIL counts rows from top not bottom
    draw.ellipse((x-5, y-5, x+5, y+5), fill="#ff0000")
im.save("test.png", "PNG")
This results in:
Notice that I used an exaggerated PADDING value to test that everything still works, and a value of 0.01 would produce your original figure.
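Once the corrected midpoints line up with the saved image, building the clickable HTML map (the stated goal) is a matter of emitting area tags; a minimal sketch that applies the same shift as Step 3, with the map name and link targets purely illustrative:
# Build an HTML image map from the corrected midpoints (hrefs are placeholders)
areas = []
for (x, y), name in zip(midpoints, names):
    x = x - origin[0] + padding[0]
    y = y - origin[1] + padding[1]
    y = im.size[1] - y  # HTML image maps count rows from the top, like PIL
    areas.append(u'<area shape="circle" coords="{},{},10" href="#{}" alt="{}">'.format(int(x), int(y), name, name))
html = (u'<img src="test.png" usemap="#iceland">\n<map name="iceland">\n'
        + u'\n'.join(areas) + u'\n</map>')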

Cutting one image into multiple images using the Python Image Library

I need to cut this image into three parts using PIL and pick the middle part.
How do I do it?
http://thedilbertstore.com/images/periodic_content/dilbert/dt110507dhct.jpg
Say you have a really long picture like this.
And now you want to slice it up into smaller vertical bits, because it is so long.
Here is a Python script that will do that. This was useful to me for preparing very long images for LaTeX docs.
from __future__ import division
import Image
import math
import os

def long_slice(image_path, out_name, outdir, slice_size):
    """slice an image into parts slice_size tall"""
    img = Image.open(image_path)
    width, height = img.size
    upper = 0
    left = 0
    slices = int(math.ceil(height/slice_size))
    count = 1
    for slice in range(slices):
        # if we are at the end, set the lower bound to be the bottom of the image
        if count == slices:
            lower = height
        else:
            lower = int(count * slice_size)
        # set the bounding box! The important bit
        bbox = (left, upper, width, lower)
        working_slice = img.crop(bbox)
        upper += slice_size
        # save the slice
        working_slice.save(os.path.join(outdir, "slice_" + out_name + "_" + str(count) + ".png"))
        count += 1

if __name__ == '__main__':
    # slice_size is the max height of the slices in pixels
    long_slice("longcat.jpg", "longcat", os.getcwd(), 300)
This is the output:
I wanted to up-vote Gourneau's solution, but lack sufficient reputation. However, I figured I would post the code that I developed as a result of his answer, in case it might be helpful to somebody else. I also added the ability to iterate through a file structure and to choose an image width.
import Image
import os

# Set the root directory
rootdir = 'path/to/your/file/directory'

def long_slice(image_path, out_name, outdir, sliceHeight, sliceWidth):
    img = Image.open(image_path)  # Load image
    imageWidth, imageHeight = img.size  # Get image dimensions
    left = 0  # Set the left-most edge
    upper = 0  # Set the top-most edge
    while (left < imageWidth):
        while (upper < imageHeight):
            # If the bottom and right of the cropping box overruns the image.
            if (upper + sliceHeight > imageHeight and
                    left + sliceWidth > imageWidth):
                bbox = (left, upper, imageWidth, imageHeight)
            # If the right of the cropping box overruns the image
            elif (left + sliceWidth > imageWidth):
                bbox = (left, upper, imageWidth, upper + sliceHeight)
            # If the bottom of the cropping box overruns the image
            elif (upper + sliceHeight > imageHeight):
                bbox = (left, upper, left + sliceWidth, imageHeight)
            # If the entire cropping box is inside the image,
            # proceed normally.
            else:
                bbox = (left, upper, left + sliceWidth, upper + sliceHeight)
            working_slice = img.crop(bbox)  # Crop image based on created bounds
            # Save your new cropped image.
            working_slice.save(os.path.join(outdir, 'slice_' + out_name +
                                            '_' + str(upper) + '_' + str(left) + '.jpg'))
            upper += sliceHeight  # Increment the vertical position
        left += sliceWidth  # Increment the horizontal position
        upper = 0

if __name__ == '__main__':
    # Iterate through all the files in a set of directories.
    for subdir, dirs, files in os.walk(rootdir):
        for file in files:
            long_slice(subdir + '/' + file, 'longcat', subdir, 128, 128)
For this particular image you would do:
import Image
i = Image.open('dt110507dhct.jpg')
frame2 = i.crop((275, 0, 528, 250))
frame2.save('dt110507dhct_frame2.jpg')
If the boxes are not known beforehand, I would run a simple edge-finding filter over the image (in both the x and y directions) to find the boundaries of the boxes.
A simple approach would be (a short sketch follows the list):
1. Run a horizontal edge filter over the image. You now have an image where each pixel describes the change in intensity to the left and right of that pixel, i.e. it will "find" vertical lines.
2. For each column in the horizontal-edge image, take the average absolute magnitude of its rows. In the resulting 1 x WIDTH array you will find the vertical lines at the positions of highest value. Since the lines are more than one pixel wide, you might have to be a bit clever here.
3. Do the same for the other axis to find the horizontal lines.
You could do some preprocessing by first extracting only pixels that are black (or near black) if you believe that the borders of the boxes will always be black. But I doubt it'd be necessary, since the above method should be very stable.
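A minimal sketch of steps 1 and 2, assuming a greyscale NumPy array (the file name and the choice of keeping the 10 strongest columns are illustrative):
import numpy as np
from PIL import Image

arr = np.array(Image.open('dt110507dhct.jpg').convert('L'), dtype=float)
# Step 1: horizontal edge filter - intensity change between each pixel and its right neighbour
edges = np.abs(np.diff(arr, axis=1))
# Step 2: average absolute magnitude per column; peaks mark candidate vertical lines
profile = edges.mean(axis=0)
candidate_columns = np.sort(np.argsort(profile)[-10:])
print(candidate_columns)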
Look at the crop() method of PIL
http://effbot.org/imagingbook/image.htm
(This requires knowledge of the bounding box of the image; assuming the image has the same dimensions every day, you should be able to determine the bounding box once and reuse it every time.)
1. Load the image
2. Get its size
3. Use the crop method
4. Save the middle image
A sketch of these steps follows.
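A minimal sketch of those four steps, assuming the strip is three equal-width panels:
import Image  # classic PIL; on Pillow use: from PIL import Image

img = Image.open('dt110507dhct.jpg')
width, height = img.size
# Crop the middle third of the strip
middle = img.crop((width // 3, 0, 2 * width // 3, height))
middle.save('dt110507dhct_middle.jpg')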
