I have used PIL. I need to add a border to the image, with a given width on all 4 sides of the image. So far I have this, which only gives a 1-pixel border:
#back_color_width
for x in range(w):
    for y in range(h):
        if x == 0 or y == 0 or x == w-1 or y == h-1:
            pixels[x, y] = back_color
I would recommend using PIL's built-in ImageOps.expand() function, which allows you to add a border of any colour and width to an image.
So, starting with this:
#!/usr/bin/env python3
from PIL import Image, ImageOps
# Open image
im = Image.open('start.png')
# Add border and save
bordered = ImageOps.expand(im, border=10, fill=(0,0,0))
bordered.save('result.png')
If you want different sized borders on the top/bottom than on the left/right, give two widths (left/right first, then top/bottom):
bordered = ImageOps.expand(im, border=(10,50), fill=(0,0,0))
If you want different sized borders on all four sides, give 4 widths in the order (left, top, right, bottom):
bordered = ImageOps.expand(im, border=(10,40,80,120), fill=(0,0,0))
This is what you need to change to make the border any number of px wide:
for x in range(w):
    for y in range(h):
        if (x < border_width
                or y < border_width
                or x > w - border_width - 1
                or y > h - border_width - 1):
            pixels[x, y] = (0, 0, 0)
The code under the #other 3 boxes and #primary box comments doesn't make boxes, but instead 3 points and 1 point respectively.
You are really close! You just need to change the first if statement. Right now you do have a border, but the border is 1 pixel wide on all sides. Maybe change it to:
if x < back_color_width or y < back_color_width or x > w - back_color_width - 1 or y > h - back_color_width - 1:
    pixels[x, y] = back_color
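For reference, here is a minimal end-to-end sketch of this in-place approach; the filenames, the border width and the assumption of an RGB input are mine, not part of the original question:
#!/usr/bin/env python3
from PIL import Image

im = Image.open('start.png')      # assumed input filename, assumed RGB image
pixels = im.load()                # pixel access object
w, h = im.size
back_color = (0, 0, 0)            # border colour (black, assumed)
back_color_width = 10             # border width in pixels (assumed)

# overwrite every pixel that lies within back_color_width of any edge
for x in range(w):
    for y in range(h):
        if (x < back_color_width or y < back_color_width
                or x > w - back_color_width - 1
                or y > h - back_color_width - 1):
            pixels[x, y] = back_color

im.save('result.png')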
If I understood what you meant, I think the solution is the following:
import numpy as np

def create_border(img, width, color=np.array([0, 0, 0])):
    # color must be a np.array
    img_shape = img.shape
    upper_border = np.full((width, img_shape[1], 3), color)  # for a 3-channel image
    side_border = np.full((img_shape[0] + 2*width, width, 3), color)
    bordered = np.concatenate([upper_border, img, upper_border])             # stack above and below
    bordered = np.concatenate([side_border, bordered, side_border], axis=1)  # stack left and right
    return bordered
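A possible way to use it (the filenames and the uint8 cast are my own assumptions; the cast is needed because the default colour array makes the result an integer array rather than uint8):
import numpy as np
from PIL import Image

img = np.array(Image.open('start.png').convert('RGB'))         # assumed input file
bordered = create_border(img, width=10)                        # 10-pixel black border
Image.fromarray(bordered.astype(np.uint8)).save('result.png')  # assumed output file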
I see a lot of questions asking how to save a 2D array to an image, and most of the answers are about saving the array as a grayscale image. But I'm trying to figure out a way to save an image that actually displays the value in each cell of the array. Is there a way to save an image that displays the values of an array in Python?
I had a quick try at this. You can play around with colours and sizes.
#!/usr/local/bin/python3
from PIL import Image, ImageFont, ImageDraw
import numpy as np
# Variables that can be edited
w, h = 8, 5 # width and height of Numpy array
cs = 100 # cell side length in pixels
# Make random array but repeatable
np.random.seed(39)
arr = np.random.randint(-1, 33, (h, w), int)
# Generate a piece of canvas and draw text on it
canvas = Image.new('RGB', (w*cs,h*cs), color='magenta')
# Get a drawing context
draw = ImageDraw.Draw(canvas)
monospace = ImageFont.truetype("/Library/Fonts/Andale Mono.ttf", 40)
# Now write numbers onto canvas at appropriate points
for r in range(h):
    draw.line([(0, r*cs), (w*cs, r*cs)], fill='white', width=1)  # horizontal gridline
    for c in range(w):
        draw.line([(c*cs, 0), (c*cs, h*cs)], fill='white', width=1)  # vertical gridline
        cx = cs // 2 + (c * cs)  # centre of cell in x-direction
        cy = cs // 2 + (r * cs)  # centre of cell in y-direction
        draw.text((cx, cy), f'{arr[r,c]}', anchor='mm', fill='white', font=monospace)
# Save
canvas.save('result.png')
I want to perform image translation by a certain amount (shift the image vertically and horizontally).
The problem is that when I paste the cropped image back on the canvas, I just get back a white blank box.
Can anyone spot the issue here?
Many thanks
img_shape = image.shape
# translate image
# percentage of the dimension of the image to translate
translate_factor_x = random.uniform(*translate)
translate_factor_y = random.uniform(*translate)
# initialize a black image the same size as the image
canvas = np.zeros(img_shape)
# get the top-left corner coordinates of the shifted image
corner_x = int(translate_factor_x*img_shape[1])
corner_y = int(translate_factor_y*img_shape[0])
# determine which part of the image will be pasted
mask = image[max(-corner_y, 0):min(img_shape[0], -corner_y + img_shape[0]),
             max(-corner_x, 0):min(img_shape[1], -corner_x + img_shape[1]),
             :]
# determine which part of the canvas the image will be pasted on
target_coords = [max(0, corner_y),
                 max(corner_x, 0),
                 min(img_shape[0], corner_y + img_shape[0]),
                 min(img_shape[1], corner_x + img_shape[1])]
# paste image on selected part of the canvas
canvas[target_coords[0]:target_coords[2], target_coords[1]:target_coords[3],:] = mask
transformed_img = canvas
plt.imshow(transformed_img)
This is what I get:
For image translation, you can make use of the somewhat obscure numpy.roll function. In this example I'm going to use a white canvas so it is easier to visualize.
image = np.full_like(original_image, 255)
height, width = image.shape[:-1]
shift = 100
# shift image
rolled = np.roll(image, shift, axis=[0, 1])
# black out shifted parts
rolled = cv2.rectangle(rolled, (0, 0), (width, shift), 0, -1)
rolled = cv2.rectangle(rolled, (0, 0), (shift, height), 0, -1)
If you want to flip the image so the black part is on the other side, you can use both np.fliplr and np.flipud.
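A small sketch of that idea, continuing from the rolled array above (the variable name flipped is mine):
import numpy as np

# mirror the rolled result so the blacked-out strips end up on the
# opposite (bottom/right) edges instead of the top/left ones
flipped = np.flipud(np.fliplr(rolled))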
Result:
Here is a simple solution that translates an image by tx and ty pixels using only array indexing; it does not roll over and it handles negative values as well:
tx, ty = 8, 5  # translation along the first (row) and second (column) axis, in pixels
M, N = image.shape  # M rows, N columns
image_translated = np.zeros_like(image)
image_translated[max(tx, 0):M + min(tx, 0), max(ty, 0):N + min(ty, 0)] = \
    image[-min(tx, 0):M - max(tx, 0), -min(ty, 0):N - max(ty, 0)]
Example:
(Note that for simplicity it does not handle cases where tx > M or ty > N).
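A quick, self-contained way to try it out (the small test array and the shift values are my own):
import numpy as np

image = np.arange(20).reshape(4, 5)   # tiny test "image": 4 rows, 5 columns
tx, ty = 1, -2                        # shift down by 1 row, left by 2 columns
M, N = image.shape
image_translated = np.zeros_like(image)
image_translated[max(tx, 0):M + min(tx, 0), max(ty, 0):N + min(ty, 0)] = \
    image[-min(tx, 0):M - max(tx, 0), -min(ty, 0):N - max(ty, 0)]
print(image_translated)   # the original content appears shifted, with zeros filling the gap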
Trying to find a way to efficiently shift image pixels in python. Ultimately want to display the results on screen after each shift. For example, pixel at 1920,1080 becomes pixel at 1920,1079 and 1920,1080 is filled with a new value, at the end pixel at 0,0 is discarded and filled with pixel from 0,1. Looking to achieve 30fps or greater on a slow processor like the raspi 3.
My intuition is to move everything to a 1D array, shift everything "left", add the new pixel at the end and turn back into an image.
Crude drawing of the "Z" shifting effect I'm looking for
Here is my naive implementation of just the pixel shifting, which is too slow:
from PIL import Image

im = Image.new("RGBA", (1920, 1080), "black")  # create blank image
pix = im.load()  # load pixel access object

# iterate over height and width, pulling each pixel from the next one in raster ("Z") order
for y in xrange(1080):
    for x in xrange(1919):
        pix[x, y] = pix[x+1, y]  # shift x within the row
    if y < 1079:
        pix[1919, y] = pix[0, y+1]  # last pixel of the row takes the first pixel of the next row
pix[1919, 1079] = (255, 255, 255)  # set new pixel white
I thought of a couple of ways of doing this but can only get 40fps on a desktop Mac at the moment.
Here's the first method which uses Numpy roll() at its heart in order to roll the top-left pixel down to the bottom right corner and then overwrite it with white.
#!/usr/bin/env python3
import cv2
import numpy as np
from PIL import Image
w, h = 640, 480
im = Image.new("L", (w, h), 127)
# Just do 1,000 shifts so it can be timed
for frame in range(1000):
    # Ravel image out to row vector, roll it left, and reshape
    im = np.roll(np.ravel(im), -1).reshape(h, w)
    # Make bottom right pixel white
    im[-1, -1] = 255
    cv2.imshow("Slidey Pushy", im)
    cv2.waitKey(1)
I then thought of adding in all the new pixels right at the beginning and then just changing the offset into the image on each iteration, but surprisingly (to me at least) this was no faster.
#!/usr/bin/env python3
import cv2
import numpy as np
from PIL import Image
w, h = 640, 480
im = Image.new("L", (w, h), 127)
# Allocate space for and append 10 rows of white pixels up front
im = np.concatenate((im,np.ones((10*h,w),dtype=np.uint8)*255), axis=0)
im = np.ravel(im)
# Slide in 1,000 white pixels so it can be timed
for offset in range(1000):
    frame = im[offset:offset + w*h].reshape(h, w)
    cv2.imshow("Slidey Pushy 2", frame)
    cv2.waitKey(1)
I don't think you are going to get anywhere close to 30fps on a Raspberry Pi, but I would look at numpy and pandas for this.
If I have an image like the one below, how can I add a border all around the image, such that the overall height and width of the final image increase but the height and width of the original image stay as-is in the middle?
The following code adds a constant border of size 10 pixels to all four sides of your original image.
For the colour, I have assumed that you want to use the average grey value of the background, which I have calculated from the mean value of the bottom two lines of your image. Sorry, it is somewhat hard coded, but it shows the general how-to and can be adapted to your needs.
If you set the bordersize values for bottom and right to 0, you get a border on only the top and left sides.
Other values for borderType are possible, such as cv2.BORDER_DEFAULT, cv2.BORDER_REPLICATE and cv2.BORDER_WRAP.
For more details cf: http://docs.opencv.org/trunk/d3/df2/tutorial_py_basic_ops.html#gsc.tab=0
import numpy as np
import cv2
im = cv2.imread('image.jpg')
row, col = im.shape[:2]
bottom = im[row-2:row, 0:col]
mean = cv2.mean(bottom)[0]
bordersize = 10
border = cv2.copyMakeBorder(
    im,
    top=bordersize,
    bottom=bordersize,
    left=bordersize,
    right=bordersize,
    borderType=cv2.BORDER_CONSTANT,
    value=[mean, mean, mean]
)
cv2.imshow('image', im)
cv2.imshow('bottom', bottom)
cv2.imshow('border', border)
cv2.waitKey(0)
cv2.destroyAllWindows()
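A possible variation (my own sketch, reusing im and bordersize from the snippet above) swaps the constant border for one of the other border types mentioned, e.g. cv2.BORDER_REPLICATE, which repeats the outermost rows and columns instead of filling with a colour:
replicated = cv2.copyMakeBorder(
    im,
    top=bordersize,
    bottom=bordersize,
    left=bordersize,
    right=bordersize,
    borderType=cv2.BORDER_REPLICATE  # no value needed for a replicated border
)
cv2.imshow('replicated', replicated)
cv2.waitKey(0)
cv2.destroyAllWindows()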
Answer in one line
outputImage = cv2.copyMakeBorder(
    inputImage,
    topBorderWidth,
    bottomBorderWidth,
    leftBorderWidth,
    rightBorderWidth,
    cv2.BORDER_CONSTANT,
    value=borderColor  # border colour, e.g. [255, 255, 255]
)
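For example, a concrete call might look like this (the filenames, widths and colour are my own assumptions, not from the answer above):
import cv2

inputImage = cv2.imread('input.png')   # hypothetical input file
outputImage = cv2.copyMakeBorder(
    inputImage,
    20, 20, 20, 20,                    # top, bottom, left, right widths in pixels
    cv2.BORDER_CONSTANT,
    value=[0, 0, 255]                  # border colour in BGR, here red
)
cv2.imwrite('output.png', outputImage)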
Try This:
import cv2
import numpy as np
img = cv2.imread("img_src.jpg")
h, w = img.shape[0:2]
base_size = h + 20, w + 20, 3
# make a 3 channel image for base which is slightly larger than target img
base = np.zeros(base_size, dtype=np.uint8)
cv2.rectangle(base, (0, 0), (w + 20, h + 20), (255, 255, 255), 30)  # really thick white rectangle
base[10:h+10, 10:w+10] = img  # this works
Add a border using OpenCV
import cv2
white = [255,255,255]
img1 = cv2.imread('input.png')
constant = cv2.copyMakeBorder(img1, 20, 20, 20, 20, cv2.BORDER_CONSTANT, value=white)
cv2.imwrite('output.png',constant)
I have this image with two people in it. It is a binary image that only contains black and white pixels.
First I want to loop over all the pixels and find the white pixels in the image.
Then I want to find the [x, y] coordinates of one particular white pixel.
After that I want to use that particular [x, y], which belongs to a white pixel in the image.
Using that [x, y] coordinate I want to convert the neighbouring black pixels into white pixels, not the whole image though.
I wanted to post the image here but unfortunately I can't. I hope my question is understandable now. In the image below you can see the edges.
Say, for example, the edge of the nose: I find it with a loop using [x, y] and then turn all neighbouring black pixels into white pixels.
This is the binary image
The operation described is called dilation, from Mathematical Morphology. You can either use, for example, scipy.ndimage.binary_dilation or implement your own.
Here are two ways to do it (one is a trivial implementation), and you can check that the resulting images are identical:
import sys
import numpy
from PIL import Image
from scipy import ndimage
img = Image.open(sys.argv[1]).convert('L') # Input is supposed to be binary.
width, height = img.size
img = img.point(lambda x: 255 if x > 40 else 0) # "Ignore" the JPEG artifacts.
# Dilation
im = numpy.array(img)
im = ndimage.binary_dilation(im, structure=((0, 1, 0), (1, 1, 1), (0, 1, 0)))
im = im.view(numpy.uint8) * 255
Image.fromarray(im).save(sys.argv[2])
# "Other operation"
im = numpy.array(img)
white_pixels = numpy.dstack(numpy.nonzero(im != 0))[0]
for y, x in white_pixels:
    for dy, dx in ((-1, 0), (0, -1), (0, 1), (1, 0)):
        py, px = dy + y, dx + x
        if py >= 0 and px >= 0 and py < height and px < width:
            im[py, px] = 255
Image.fromarray(im).save(sys.argv[3])
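Assuming the script is saved as, say, dilate.py (a name I am making up), it takes the input image and the two output filenames on the command line, e.g. python dilate.py input.png dilated.png dilated_manual.png; the filenames here are just placeholders.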