I see a lot of questions asking how to save a 2D array to an image, and most of the answers are about saving the image as a grayscale image. But I'm trying to figure out a way to save an image that actually displays the value in each cell of the array. Is there a way to save such an image in Python?
I had a quick try at this. You can play around with colours and sizes.
#!/usr/local/bin/python3
from PIL import Image, ImageFont, ImageDraw
import numpy as np
# Variables that can be edited
w, h = 8, 5 # width and height of Numpy array
cs = 100 # cell side length in pixels
# Make random array but repeatable
np.random.seed(39)
arr = np.random.randint(-1, 33, (h,w), dtype=int)
# Generate a piece of canvas and draw text on it
canvas = Image.new('RGB', (w*cs,h*cs), color='magenta')
# Get a drawing context
draw = ImageDraw.Draw(canvas)
monospace = ImageFont.truetype("/Library/Fonts/Andale Mono.ttf", 40)
# Now write numbers onto canvas at appropriate points
for r in range(h):
    draw.line([(0,r*cs),(w*cs,r*cs)], fill='white', width=1)      # horizontal gridline
    for c in range(w):
        draw.line([(c*cs,0),(c*cs,h*cs)], fill='white', width=1)  # vertical gridline
        cx = cs // 2 + (c * cs)                                    # centre of cell in x-direction
        cy = cs // 2 + (r * cs)                                    # centre of cell in y-direction
        draw.text((cx, cy), f'{arr[r,c]}', anchor='mm', fill='white', font=monospace)
# Save
canvas.save('result.png')
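Note that the font path above is macOS-specific. If Andale Mono isn't available on your system, one hedged option is to replace that line with a fallback to Pillow's built-in bitmap font (you lose the 40 px size, so treat this as a sketch):
# Fallback sketch: try the TrueType font, otherwise use Pillow's small default bitmap font
try:
    monospace = ImageFont.truetype("/Library/Fonts/Andale Mono.ttf", 40)
except OSError:
    monospace = ImageFont.load_default()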
I want to perform image translation by a certain amount (shift the image vertically and horizontally).
The problem is that when I paste the cropped image back onto the canvas, I just get back a blank white box.
Can anyone spot the issue here?
Many thanks
img_shape = image.shape
# translate image
# percentage of the dimension of the image to translate
translate_factor_x = random.uniform(*translate)
translate_factor_y = random.uniform(*translate)
# initialize a black image the same size as the image
canvas = np.zeros(img_shape)
# get the top-left corner coordinates of the shifted image
corner_x = int(translate_factor_x*img_shape[1])
corner_y = int(translate_factor_y*img_shape[0])
# determine which part of the image will be pasted
mask = image[max(-corner_y, 0):min(img_shape[0], -corner_y + img_shape[0]),
             max(-corner_x, 0):min(img_shape[1], -corner_x + img_shape[1]),
             :]
# determine which part of the canvas the image will be pasted on
target_coords = [max(0, corner_y),
                 max(0, corner_x),
                 min(img_shape[0], corner_y + img_shape[0]),
                 min(img_shape[1], corner_x + img_shape[1])]
# paste image on selected part of the canvas
canvas[target_coords[0]:target_coords[2], target_coords[1]:target_coords[3],:] = mask
transformed_img = canvas
plt.imshow(transformed_img)
This is what I get:
For image translation, you can make use of the somewhat obscure numpy.roll function. In this example I'm going to use a white canvas so it is easier to visualize.
import numpy as np
import cv2

image = np.full_like(original_image, 255)   # white canvas with the same shape as the original
height, width = image.shape[:-1]
shift = 100
# shift image
rolled = np.roll(image, shift, axis=[0, 1])
# black out shifted parts
rolled = cv2.rectangle(rolled, (0, 0), (width, shift), 0, -1)
rolled = cv2.rectangle(rolled, (0, 0), (shift, height), 0, -1)
If you want to flip the image so the black part is on the other side, you can use both np.fliplr and np.flipud.
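For instance, a minimal sketch of that flip (assuming the rolled array from above):
# Sketch: mirror the rolled image left-right and top-bottom so the black strips move to the opposite sides
flipped = np.flipud(np.fliplr(rolled))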
Result:
Here is a simple solution that translates an image by tx and ty pixels using only array indexing; it does not roll over, and it handles negative values as well:
tx, ty = 8, 5        # translation along the x (columns) and y (rows) axes, in pixels
N, M = image.shape   # N rows (y direction), M columns (x direction)
image_translated = np.zeros_like(image)
image_translated[max(ty,0):N+min(ty,0), max(tx,0):M+min(tx,0)] = \
    image[-min(ty,0):N-max(ty,0), -min(tx,0):M-max(tx,0)]
Example:
(Note that for simplicity it does not handle cases where tx > M or ty > N).
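For reference, a minimal usage sketch with an assumed toy array (not from the original answer):
import numpy as np

image = np.arange(24).reshape(4, 6)   # assumed 4x6 toy image
tx, ty = 2, 1                         # shift 2 px along x (columns), 1 px along y (rows)
N, M = image.shape
image_translated = np.zeros_like(image)
image_translated[max(ty,0):N+min(ty,0), max(tx,0):M+min(tx,0)] = \
    image[-min(ty,0):N-max(ty,0), -min(tx,0):M-max(tx,0)]
print(image_translated)   # first row and first two columns are zero, the rest is shifted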
I would like to be able to make a certain shape in either a PIL image or an OpenCV image 3 times larger or smaller without changing the resolution of the image or distorting the shape I want to enlarge. I have tried using OpenCV's dilation method, but that is not its intended use, plus it changed the shape of the image. For example:
Thanks.
Here's a way of doing it:
find the interesting shape, i.e. non-white ROI area
extract it
scale it up by a factor
clear the original image to white
paste the scaled ROI back into image with same centre
#!/usr/bin/env python3
import cv2
import numpy as np
if __name__ == "__main__":
    # Open image
    orig = cv2.imread('image.png', cv2.IMREAD_COLOR)

    # Get extent of interesting part, i.e. non-white part
    y, x, _ = np.nonzero(~orig)
    y0, y1 = np.min(y), np.max(y)   # top and bottom rows
    x0, x1 = np.min(x), np.max(x)   # left and right cols
    h, w = y1-y0, x1-x0             # height and width
    ROI = orig[y0:y1, x0:x1]        # extract ROI
    cv2.imwrite('ROI.png', ROI)     # DEBUG only

    # Upscale ROI
    factor = 3
    scaledROI = cv2.resize(ROI, (w*factor, h*factor), interpolation=cv2.INTER_NEAREST)
    newH, newW = scaledROI.shape[:2]

    # Clear original image to white
    orig[:] = [255,255,255]

    # Get centre of original shape, and position of top-left of ROI in output image
    cx, cy = (x0 + x1)//2, (y0 + y1)//2
    top = cy - newH//2
    left = cx - newW//2

    # Paste in rescaled ROI
    orig[top:top+newH, left:left+newW] = scaledROI
    cv2.imwrite('result.png', orig)
That transforms this:
to this:
Puts me in mind of a pantograph:
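To shrink the shape instead, the same approach should work with a fractional factor; a quick sketch (not from the original answer, assuming the ROI, w and h from the code above):
# Sketch: scale the ROI down to a third of its size instead of enlarging it
scaledROI = cv2.resize(ROI, (max(w//3, 1), max(h//3, 1)), interpolation=cv2.INTER_NEAREST)
newH, newW = scaledROI.shape[:2]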
I am trying to create an occupancy grid map by exporting a higher-resolution image of the map at a very low resolution.
In its most basic form, an occupancy grid is a 2-dimensional binary array. The values stored in the array denote free (0) or occupied (1). Each value corresponds to a discrete location on the physical map (the following image depicts an area).
As seen in the above image, each array location is a cell of the physical world.
I have a 5 meter x 5 meter world, which is discretized into cells of 5 cm x 5 cm. The world is thus 100 x 100 cells, corresponding to the 5 m x 5 m physical world.
The obstacles are randomly generated circular disks at location (x, y) with a random radius r, as follows:
I need to convert this (above) image into an array of size 100x100. That means evaluating whether each cell is actually within the region of an obstacle or free.
To speed things up, I have found the following workaround:
Create a matplotlib figure populated with obstacles with figsize=(5,5), save the image with dpi=20 in BMP format, and finally import the BMP image as a numpy array. Alas, matplotlib does not support BMP. If I save the image as JPEG using plt.savefig('map.jpg', dpi=20, quality=100) or in other formats, then the cell boundaries become blurred and bleed into other cells, as shown in this image:
So my question: how do I save a scaled-down image from matplotlib that preserves the cell sharpness of the image (akin to BMP)?
Nice hack. However, I would rather compute the boolean mask corresponding to your discretized circles explicitly. One simple way to get such a boolean map is by using the contains_points method of matplotlib artists such as a Circle patch.
#!/usr/bin/env python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle
world_width = 100 # x
world_height = 100 # y
minimum_radius = 1
maximum_radius = 10
total_circles = 5
# create circle patches
x = np.random.randint(0, world_width, size=total_circles)
y = np.random.randint(0, world_height, size=total_circles)
r = minimum_radius + (maximum_radius - minimum_radius) * np.random.rand(total_circles)
circles = [Circle((xx,yy), radius=rr) for xx, yy, rr in zip(x, y, r)]
# for each circle, create a boolean mask where each cell element is True
# if its center is within that circle and False otherwise
X, Y = np.meshgrid(np.arange(world_width) + 0.5, np.arange(world_height) + 0.5)
masks = np.zeros((total_circles, world_width, world_height), dtype=bool)
for ii, circle in enumerate(circles):
    masks[ii] = circle.contains_points(np.c_[X.ravel(), Y.ravel()]).reshape(world_width, world_height)
combined_mask = np.sum(masks, axis=0)
plt.imshow(combined_mask, cmap='gray_r')
plt.show()
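If you also want the grid as a plain 0/1 array rather than a plot, a minimal follow-on sketch using the combined_mask above would be:
# Sketch: collapse the summed circle masks into a binary occupancy grid (1 = occupied, 0 = free)
occupancy_grid = (combined_mask > 0).astype(int)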
If I have understood correctly, I think this can be done quite simply with PIL, specifically with the Image.resize function. For example, does this do what you asked:
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image, ImageDraw
# Make a dummy image with some black circles on a white background
image = Image.new('RGBA', (1000, 1000), color="white")
draw = ImageDraw.Draw(image)
draw.ellipse((20, 20, 180, 180), fill = 'black', outline ='black')
draw.ellipse((500, 500, 600, 600), fill = 'black', outline ='black')
draw.ellipse((100, 800, 250, 950), fill = 'black', outline ='black')
draw.ellipse((750, 300, 800, 350), fill = 'black', outline ='black')
image.save('circles_full_res.png')
# Resize the image with nearest neighbour interpolation to preserve grid sharpness
image_lo = image.resize((100,100), resample=Image.NEAREST)
image_lo.save("circles_low_res.png")
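To turn the low-resolution image into the 100x100 binary array you described, a hedged follow-up sketch (assuming black pixels mark obstacles) could be:
# Sketch: convert to greyscale and threshold into a 0/1 occupancy array
occupancy = (np.array(image_lo.convert('L')) < 128).astype(int)   # 1 = occupied (dark), 0 = free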
I have used PIL
#back_color_width
for x in range(w):
    for y in range(h):
        if x==0 or y==0 or x==w-1 or y==h-1:
            pixels[x,y] = back_color
I need to add a border to the image with a given width on all 4 sides of the image.
I would recommend using PIL's built-in ImageOps.expand() function, which allows you to add a border of any colour and width to an image.
So, starting with this:
#!/usr/bin/env python3
from PIL import Image, ImageOps
# Open image
im = Image.open('start.png')
# Add border and save
bordered = ImageOps.expand(im, border=10, fill=(0,0,0))
bordered.save('result.png')
If you want different-sized borders on the top/bottom than on the left/right, give two widths:
bordered = ImageOps.expand(im, border=(10,50), fill=(0,0,0))
If you want different sized borders on all sides, give 4 widths:
bordered = ImageOps.expand(im, border=(10,40,80,120), fill=(0,0,0))
Keywords: PIL, Pillow, ImageOps, Python, border, bordering, border outside, add border, expand, pad, extent, image, image processing.
This is what you need to change to make the border any number of px wide:
for x in range(w):
    for y in range(h):
        if (x < border_width
                or y < border_width
                or x > w - border_width - 1
                or y > h - border_width - 1):
            pixels[x,y] = (0,0,0)
The code under #other 3 boxes and #primary box doesn't make boxes, but instead 3 points and 1 point respectively.
You are really close! You just need to change the first if statement. Right now you do have a border, but the border is 1 pixel wide on all sides. Maybe change to
if x < back_color_width or y < back_color_width or x >= w - back_color_width or y >= h - back_color_width:
    pixels[x,y] = back_color
If I understood what you meant, I think the solution is the following:
import numpy as np
def create_border(img, width, color=np.array([0,0,0])):
    # color must be a np.array
    img_shape = img.shape
    upper_border = np.full((width, img_shape[1], 3), color)             # for a 3-channel image
    side_border = np.full((img_shape[0] + 2*width, width, 3), color)
    bordered = np.concatenate([upper_border, img, upper_border])
    bordered = np.concatenate([side_border, bordered, side_border], axis=1)
    return bordered
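A minimal usage sketch with an assumed dummy image (not part of the original answer):
import numpy as np

img = np.full((100, 150, 3), 255, dtype=np.uint8)   # assumed dummy white 100x150 image
bordered = create_border(img, 5)                    # 5 px black border on all sides
print(bordered.shape)                               # (110, 160, 3)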
I am drawing text onto a numpy array image in Python (using a custom font). Currently I am converting the image to PIL, drawing the text and then converting back to a numpy array.
import numpy as np
import cv2
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
char_image = np.zeros((200, 300, 3), np.uint8)
# convert to pillow image
pillowImage = Image.fromarray(char_image)
draw = ImageDraw.Draw(pillowImage)
# add chars to image
font = ImageFont.truetype("arial.ttf", 32)
draw.text((50, 50), 'ABC', (255, 255, 255), font=font)
# convert back to numpy array
char_image = np.array(pillowImage, np.uint8)
# show image on screen
cv2.imshow('myImage', char_image)
cv2.waitKey(0)
Is there any way to draw the text at a given angle, i.e. 33 degrees?
Rotating the image once the text has been drawn is not an option.
You can use PIL to draw rotated text. I suggest drawing the text onto a blank image, rotating that image, and then pasting the rotated image into the main image. Something like:
Code:
def draw_rotated_text(image, angle, xy, text, fill, *args, **kwargs):
    """ Draw text at an angle into an image, takes the same arguments
        as Image.text() except for:

    :param image: Image to write text into
    :param angle: Angle to write text at
    """
    # get the size of our image
    width, height = image.size
    max_dim = max(width, height)

    # build a transparency mask large enough to hold the text
    mask_size = (max_dim * 2, max_dim * 2)
    mask = Image.new('L', mask_size, 0)

    # add text to mask
    draw = ImageDraw.Draw(mask)
    draw.text((max_dim, max_dim), text, 255, *args, **kwargs)

    if angle % 90 == 0:
        # rotate by multiple of 90 deg is easier
        rotated_mask = mask.rotate(angle)
    else:
        # rotate an enlarged mask to minimize jaggies
        bigger_mask = mask.resize((max_dim*8, max_dim*8),
                                  resample=Image.BICUBIC)
        rotated_mask = bigger_mask.rotate(angle).resize(
            mask_size, resample=Image.LANCZOS)

    # crop the mask to match image
    mask_xy = (max_dim - xy[0], max_dim - xy[1])
    b_box = mask_xy + (mask_xy[0] + width, mask_xy[1] + height)
    mask = rotated_mask.crop(b_box)

    # paste the appropriate color, with the text transparency mask
    color_image = Image.new('RGBA', image.size, fill)
    image.paste(color_image, mask)
How does it work:
Create a transparency mask.
Draw the text onto the mask.
Rotate the mask, and crop to proper size.
Paste the desired color into the image, using the rotated transparency mask containing the text.
Test Code:
import numpy as np
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
char_image = np.zeros((100, 150, 3), np.uint8)
# convert to pillow image
pillowImage = Image.fromarray(char_image)
# draw the text
font = ImageFont.truetype("arial.ttf", 32)
draw_rotated_text(pillowImage, 35, (50, 50), 'ABC', (128, 255, 128), font=font)
pillowImage.show()
Results:
Using matplotlib: first visualize the array and draw on it, then get the raw data back from the figure.
Pro: both tools are quite high level and give you control over many details of the process. ax.annotate() offers flexibility in where and how to draw and in setting font properties, and plt.matshow() lets you control aspects of the array visualization.
import matplotlib.pyplot as plt
import numpy as np
# make Data array to draw in
M = np.zeros((500,500))
dpi = 300.0
# create a frameless mpl figure
fig, axes = plt.subplots(figsize=(M.shape[0]/dpi,M.shape[1]/dpi),dpi=dpi)
axes.axis('off')
fig.subplots_adjust(bottom=0,top=1.0,left=0,right=1)
axes.matshow(M,cmap='gray')
# set custom font
import matplotlib.font_manager as fm
ttf_fname = '/usr/share/fonts/truetype/ubuntu-font-family/Ubuntu-B.ttf'
prop = fm.FontProperties(fname=ttf_fname)
# annotate something
axes.annotate('ABC',xy=(250,250),rotation=45,fontproperties=prop,color='white')
# get fig image data and read it back to numpy array
fig.canvas.draw()
w, h = fig.canvas.get_width_height()
Imvals = np.frombuffer(fig.canvas.tostring_rgb(), dtype='uint8')
ImArray = Imvals.reshape((h, w, 3))   # rows (height) first, then columns (width)
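On newer Matplotlib versions tostring_rgb may be deprecated; an assumed-equivalent sketch using the RGBA buffer is:
# Sketch: newer Matplotlib exposes an RGBA buffer instead of tostring_rgb
Imvals = np.frombuffer(fig.canvas.buffer_rgba(), dtype='uint8')
ImArray = Imvals.reshape((h, w, 4))[..., :3]   # drop the alpha channel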