I'm creating some images with the Python Imaging Library (PIL). Now, just as we zoom into a map at a particular location, I want to zoom into my image at a specified point. Note that this is different from resizing the image; I want the size to remain the same. I couldn't find any built-in method in the documentation that does this. Is anyone aware of a method that might achieve this? I'd ideally like to do this without other dependencies like OpenCV.
I think you mean this:
from PIL import Image

def zoom_at(img, x, y, zoom):
    w, h = img.size
    zoom2 = zoom * 2
    # crop a (w/zoom) x (h/zoom) box centered on (x, y)
    img = img.crop((x - w / zoom2, y - h / zoom2,
                    x + w / zoom2, y + h / zoom2))
    # then scale the crop back up to the original size
    return img.resize((w, h), Image.LANCZOS)
This crops the image around the point you zoom into, then upscales the result back to the original size.
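For example, a quick usage sketch (the file name and focal point are made up):

img = Image.open("photo.jpg")
zoomed = zoom_at(img, 300, 200, 2)  # 2x zoom centered on (300, 200)
zoomed.save("photo_zoomed.jpg")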
I would like to split an image into equilateral, triangle-shaped tiles. I have tried to generate the coordinates of a triangle using the function from https://alexwlchan.net/2016/10/tiling-the-plane-with-pillow/.
My code:
import math
import numpy as np
from PIL import Image

image_path = "/content/newspaper-icon-in-transparent-style-news-on-vector-25591681.jpg"

# Create triangles
# https://alexwlchan.net/2016/10/tiling-the-plane-with-pillow/
# A horizontal offset is added to ensure that images line up
# https://stackoverflow.com/questions/22588074/polygon-crop-clip-using-python-pil
def generate_coordinates_for_unit_triangles(image_width, image_height):
    image_width = 50
    image_height = 50
    h = math.sin(math.pi / 3)
    for x in range(image_width):
        for y in range(int(image_height / h)):
            first_c, second_c, third_c = (x, y * h), (x + 1, y * h), (x + 0.5, (y + 1) * h)
            first_sc, second_sc, third_sc = (x + 1, y * h), (x + 1.5, (y + 1) * h), (x + 0.5, (y + 1) * h)
    return first_c, second_c, third_c, first_sc, second_sc, third_sc
    # return [(x, y * h), (x+1, y * h), (x+0.5, (y+1) * h)], [(x+1, y * h), (x+1.5, (y+1) * h), (x+0.5, (y+1) * h)]

# Generate the two triangles' coordinates
first_c, second_c, third_c, first_sc, second_sc, third_sc = generate_coordinates_for_unit_triangles(50, 50)

# Convert image into numpy array
image_read = Image.open(image_path)
image_to_numpy = np.asarray(image_read)
shape_of_array = image_to_numpy.shape
print(shape_of_array)
mask_image = [first_c, second_c, third_c, first_sc, second_sc, third_sc]
I realize that this may not give my desired output.
The expected input and output is included below:
[Expected input and output][1]
Any guidance on how to approach the problem would be appreciated.
[1]: https://i.stack.imgur.com/vr7rV.jpg
I'm posting this as an answer because it's long, but it's not literally an answer. I'm hoping this will lead you to the next step in your design process.
Here are the design decisions you face. It's clear from your code that you can generate a list of triangle coordinates. Good; what next? You probably know the bounding box of your triangles (largest w and h) in advance, so you can certainly create a set of images that contain your triangles, masked off with a black background or an alpha=0 background. You could copy the bounding rectangle to an image, create a mask using the triangle as a path, and set the alpha to 0 outside of the mask, as sketched below. OpenCV should be able to do that.
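Since the rest of your code uses PIL, here is a minimal Pillow sketch of that masking step (the function name is made up):

from PIL import Image, ImageDraw

def crop_triangle(img, triangle):
    # triangle is a sequence of three (x, y) pixel coordinates
    xs = [p[0] for p in triangle]
    ys = [p[1] for p in triangle]
    box = (int(min(xs)), int(min(ys)), int(max(xs)) + 1, int(max(ys)) + 1)
    tile = img.crop(box).convert("RGBA")
    # build an alpha mask: 255 inside the triangle, 0 outside
    mask = Image.new("L", tile.size, 0)
    shifted = [(px - box[0], py - box[1]) for px, py in triangle]
    ImageDraw.Draw(mask).polygon(shifted, fill=255)
    tile.putalpha(mask)
    return tile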
But after you have those, what then? You talked about matching the edges. That's complicated. I suppose you could extract a vector of pixels from the three edges of each triangle, and then do some kind of approximate comparison.
If you do find matches that allow you to stitch together a composite, it is possible (assuming you have alpha=0 in the backgrounds) to blit all of these triangles back into some larger image, rather like quilting. OpenCV can do a block copy with alpha blending.
So, in the end, I think your problem is achievable, but it's going to be a lot of work, and probably more than we can offer here.
I want to create distortion effects like spiral, stretch, fisheye, and wedge, and other effects like underwater and snow, like this website, using the cv2 library in Python.
I figured out fisheye distortion.
In OpenCV version 3.0 and above it is possible to perform it using cv2.fisheye.undistortImage(). I have the code in Python if you need it.
This is what I got for the following input image:
Input Image:
Distorted image:
The function accepts a matrix which, upon modification, yields different distortions of the image.
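For reference, a minimal sketch of that call (the camera matrix and distortion coefficients here are made up; real values would come from calibration):

import cv2
import numpy as np

img = cv2.imread("input.jpg")  # hypothetical file name
h, w = img.shape[:2]

K = np.array([[w, 0, w / 2],
              [0, w, h / 2],
              [0, 0, 1]], dtype=np.float64)  # camera matrix
D = np.array([0.5, 0.2, 0.0, 0.0], dtype=np.float64)  # fisheye coefficients

out = cv2.fisheye.undistortImage(img, K, D, Knew=K)
cv2.imwrite("fisheye_out.jpg", out)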
UPDATE
In order to add a snowfall effect, you can add some noise, such as Poisson noise.
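A minimal sketch of that idea (the file names and noise parameters are made up):

import cv2
import numpy as np

img = cv2.imread("input.jpg")
rng = np.random.default_rng(0)

# sparse Poisson speckles act as flakes; a small blur softens them
snow = rng.poisson(lam=0.02, size=img.shape[:2]).astype(np.float32)
snow = cv2.GaussianBlur(snow, (3, 3), 0)
snowy = np.clip(img.astype(np.float32) + snow[..., None] * 255, 0, 255)
cv2.imwrite("snowy.jpg", snowy.astype(np.uint8))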
Here is a replacement block to map out a fisheye in the middle of the image. Please look elsewhere for the details of the math. Use this in place of the two for loops in the previous code.
As stated in the first half of my answer (see previous answer), the purpose of this block is to create 2 maps that work together to remap the source image into the destination image.
To create the two maps, this block sweeps through two for loops over the dimensions of the image. Values are calculated for the x and y maps (flex_x and flex_y). It starts by assigning each to simply x and y for a 1-to-1 replacement map. Then, if the radius (r) is between 0 and 1, the tangential slide for the fisheye is applied and new flex_x and flex_y values are mapped.
Please see my other answer for more details.
# create simple maps with a modified assignment
# outside the bulge is normal, inside is modified
# this is where the magic is assembled
for y in range(h):
    ny = ((2 * y - 250) / (h - 250)) - 1  # play with the 250's to move the y
    ny2 = ny * ny
    for x in range(w):
        nx = ((2 * x - 50) / (w - 50)) - 1  # play with the 50's to move the x
        nx2 = nx * nx
        r = math.sqrt(nx2 + ny2)
        flex_x[y, x] = x
        flex_y[y, x] = y
        if 0 < r < 1:
            nr1 = 1 - r ** 2
            nr2 = math.sqrt(nr1)
            nr = (r + (1.0 - nr2)) / 2.0
            theta = math.atan2(ny, nx)
            nxn = nr * math.cos(theta)
            nyn = nr * math.sin(theta)
            flex_x[y, x] = ((nxn + 1) * w) / 2.0
            flex_y[y, x] = ((nyn + 1) * h) / 2.0
Here is half of an answer. The cv2.remap function uses maps to choose a pixel from the source for each pixel in the destination. alkasm's answer to How do I use OpenCV's remap function? does a great job of defining the process, but glosses over the usefulness of those maps. If you can get creative with the maps, you can make any effect you want. Here is what I came up with.
The program starts by loading the image and resizing it. This is a convenience for a smaller screen. Then the empty maps are created.
The maps need to be the same dimensions as the image that is being processed, but with a depth of 1. If the resized original is 633 x 400 x 3, the maps both need to be 633 x 400.
When the remapping is done, cv2.remap will use the value at each coordinate in the maps to determine which pixel in the original to use in the destination: for each (x, y) in the destination, dst(x, y) = src(map1(x, y), map2(x, y)), where map1 holds the source x coordinates and map2 the source y coordinates.
The simplest mapping would be if, for every (x, y), map1(x, y) = x and map2(x, y) = y; this creates a 1-to-1 map, and the destination would match the source. In this example, a small offset is added to each value. The cosine function in the offset creates both positive and negative shifts, which produce waves in the final image.
Note that creating the maps is slow, but cv2.remap itself is fast. Once you have created the maps, cv2.remap is fast enough to be applied to frames of video.
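As an aside, if map creation ever becomes the bottleneck, the two per-pixel loops in the code below can be vectorized with numpy (a sketch, assuming h and w are as in the code below):

ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
flex_x = xs + np.cos(xs / 15) * 15
flex_y = ys + np.cos(ys / 30) * 25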
import numpy as np
import cv2
import math

# read in image and resize down to a width of 400
# load your image file here
image = cv2.imread("20191114_154534.jpg")
r = 400.0 / image.shape[1]
dim = (400, int(image.shape[0] * r))

# perform the resizing of the image
resized = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)

# grab the dimensions of the image and calculate the center
# of the image (center not needed at this time)
(h, w, c) = resized.shape
center = (w // 2, h // 2)

# set up the x and y maps as float32
flex_x = np.zeros((h, w), np.float32)
flex_y = np.zeros((h, w), np.float32)

# create simple maps with a modified assignment
# the math modifier creates ripples: increase the divisor for fewer waves,
# increase the multiplier for greater movement
# this is where the magic is assembled
for y in range(h):
    for x in range(w):
        flex_x[y, x] = x + math.cos(x / 15) * 15
        flex_y[y, x] = y + math.cos(y / 30) * 25

# do the remap: this is where the magic happens
dst = cv2.remap(resized, flex_x, flex_y, cv2.INTER_LINEAR)

# show the results and wait for a key
cv2.imshow("Resized", resized)
cv2.imshow("Flexed", dst)
cv2.waitKey(0)
cv2.destroyAllWindows()
I have made a small program that reads an image, transforms the perspective, and then redraws the image. Currently I rewrite each pixel to the output manually, but this way a lot of points are lost and the result is an image that is very faint (the larger the transformation, the fainter the image). This is my code:
U, V = np.meshgrid(range(img_array.shape[1]), range(img_array.shape[0]))
UV = np.vstack((U.flatten(), V.flatten())).T
UV_warped = cv2.perspectiveTransform(np.array([UV]).astype(np.float32), H)
UV_warped = UV_warped[0]
UV_warped = UV_warped.astype(int)
x_translation = min(UV_warped[:, 0])
y_translation = min(UV_warped[:, 1])
new_width = np.amax(UV_warped[:, 0]) - np.amin(UV_warped[:, 0])
new_height = np.amax(UV_warped[:, 1]) - np.amin(UV_warped[:, 1])
UV_warped[:, 0] = UV_warped[:, 0] - int(x_translation)
UV_warped[:, 1] = UV_warped[:, 1] - int(y_translation)

# create box for image
new_img = np.ones((new_height + 1, new_width + 1)) * 255  # 0 = black, 255 = white background

for uv_pix, UV_warped_pix in zip(UV, UV_warped):
    x_orig = uv_pix[0]  # x in the original
    y_orig = uv_pix[1]  # y in the original
    color = img_array[y_orig, x_orig]
    x_new = UV_warped_pix[0]  # new x
    y_new = UV_warped_pix[1]  # new y
    new_img[y_new, x_new] = np.array(color)

img = Image.fromarray(np.uint8(new_img))
img.save("test.jpg")
Is there a way to do this differently (with interpolation, maybe?) so I won't lose so many pixels and the image is not so faint?
You are looking for the function warpPerspective (as already mentioned in the answer to your previous question, OpenCV perspective transform in python).
You can use this function like this (although I'm not familiar with Python):

dst_img = cv2.warpPerspective(src_img, H_from_src_to_dst, dst_size)
EDIT: You can refer to this OpenCV tutorial. It uses affine transformations, but similar OpenCV functions exist for perspective transformations.
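For completeness, a minimal end-to-end sketch (the corner points and file names are made up):

import cv2
import numpy as np

img = cv2.imread("input.jpg")

# four corners in the source and where they should land in the output
src_pts = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])
dst_pts = np.float32([[50, 0], [600, 30], [640, 480], [0, 450]])

H = cv2.getPerspectiveTransform(src_pts, dst_pts)
warped = cv2.warpPerspective(img, H, (640, 480))  # (width, height) of the output
cv2.imwrite("warped.jpg", warped)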
I want to create a (grayscale) image from a matrix, draw some lines into it and save it to a file.
In PIL it looks like this:
from PIL import Image, ImageDraw

im = Image.new("RGB", (len(matrix), len(matrix[0])))
for x in range(0, len(matrix)):
    for y in range(0, len(matrix[0])):
        cl = int(matrix[x][y] * 255.0 / float(max_value))
        im.putpixel((x, y), (cl, cl, cl))
draw = ImageDraw.Draw(im)
draw.polygon((off_x, off_y, off_x + a, off_y, off_x + x, off_y + y), outline="#FF0000")
im.save("pix.png")
Sadly, PIL's lines look ugly since it does not support anti-aliasing.
Pyglet is full of different surfaces, textures, and images, and I don't really know where to start. So what would a simple way to do this in pyglet look like? Or might there be an easier way?
I have a list of RGB triplets, and I'd like to plot them in such a way that they form something like a spectrum.
I've converted them to HSV, which people seem to recommend.
from PIL import Image, ImageDraw
import colorsys

def make_rainbow_rgb(colors, width, height):
    """colors is an array of RGB tuples, with values between 0 and 255"""
    img = Image.new("RGBA", (width, height))
    canvas = ImageDraw.Draw(img)

    def hsl(x):
        to_float = lambda x: x / 255.0
        (r, g, b) = map(to_float, x)
        h, s, l = colorsys.rgb_to_hsv(r, g, b)
        h = h if 0 < h else 1  # 0 -> 1
        return h, s, l

    rainbow = sorted(colors, key=hsl)
    dx = width / float(len(colors))
    x = 0
    y = height / 2.0
    for rgb in rainbow:
        canvas.line((x, y, x + dx, y), width=height, fill=rgb)
        x += dx
    img.show()
However, the result doesn't look very much like a nice rainbow-y spectrum. I suspect I need to either convert to a different color space or handle the HSL triplet differently.
Does anyone know what I need to do to make this data look roughly like a rainbow?
Update:
I was playing around with Hilbert curves and revisited this problem. Sorting the RGB values (same colors in both images) by their position along a Hilbert curve yields an interesting (if still not entirely satisfying) result:
You're trying to convert a three-dimensional space into a one-dimensional space. There's no guarantee that you can make a pleasing rainbow out of it, as Oli says.
What you can do is "bucket" the colors into a few different categories based on saturation and value/lightness, and then sort within the categories, to get several independent gradients. For example, high-saturation colors first for the classic rainbow, then mid-saturation high-value colors (pastels), then low-saturation (grays).
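A rough sketch of that bucketing idea, reusing the colors list from the question (the thresholds are arbitrary):

import colorsys

def bucket_key(rgb):
    # sort key: bucket by saturation/value first, then hue within each bucket
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if s > 0.6:       # vivid colors: the classic rainbow
        bucket = 0
    elif v > 0.7:     # pastels: lower saturation, high value
        bucket = 1
    else:             # dull colors and grays
        bucket = 2
    return (bucket, h)

rainbow = sorted(colors, key=bucket_key)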
Alternately, if all you care about is the rainbow, convert to HSL, then slam the saturation to 1.0 and the lightness to 0.5, convert back to RGB, and render that instead of the original color.
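A minimal sketch of that with colorsys (the function name is made up; note that colorsys orders the HLS tuple as hue, lightness, saturation):

import colorsys

def rainbowify(rgb):
    # keep only the hue of an (r, g, b) tuple with values 0-255
    r, g, b = (c / 255.0 for c in rgb)
    h, _, _ = colorsys.rgb_to_hls(r, g, b)
    r2, g2, b2 = colorsys.hls_to_rgb(h, 0.5, 1.0)  # lightness 0.5, saturation 1.0
    return tuple(int(c * 255) for c in (r2, g2, b2))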
Presumably you are sorting by hue (i.e., H)? That will give a nice result if S and L (or V) are constant, but if they vary independently, you will get a bit of a mess!
An interesting method for reducing dimensionality of color spaces uses the space-filling Hilbert curve. Two relevant articles are:
Color Space Dimension Reduction - overview of several methods for reducing dimensionality of color data
Portrait of the Hilbert Curve - detailed article about Hilbert curves and application to color-space dimensionality reduction
They both consider 3d -> 2d reduction, but the intermediate step of mapping to the 1d curve could be a solution to your problem.
Here are some rainbows I made recently; you can modify the idea to do what you want.
from PIL import Image, ImageDraw  # pip install pillow
import numpy as np
from matplotlib import pyplot as plt

strip_h, strip_w = 100, 720
strip = 255 * np.ones((strip_h, strip_w, 3), dtype='uint8')
image_val = Image.fromarray(strip)
image_sat = Image.fromarray(strip)
draw0 = ImageDraw.Draw(image_val)
draw1 = ImageDraw.Draw(image_sat)

for y in range(strip_h):
    for x in range(strip_w):
        draw0.point([x, y], fill='hsl(%d,%d%%,%d%%)' % (x % 360, y, 50))
        draw1.point([x, y], fill='hsl(%d,%d%%,%d%%)' % (x % 360, 100, y))

plt.subplot(2, 1, 1)
plt.imshow(image_val)
plt.subplot(2, 1, 2)
plt.imshow(image_sat)
plt.show()
This seems incorrect:

canvas.line((x, y, x + dx, y), width=height, fill=rgb)

Try this instead:

canvas.rectangle([(x, y), (x + dx, y + height)], fill=rgb)