Hi, could you please give me any tips on how to cut 4 triangles out of an image and mix them together so that they look like this:
I want to use Python to achieve this.
Starting with this image (dog.jpg):
You could do something like this:
#!/usr/bin/env python3
from PIL import Image, ImageDraw
# Open image, generate a copy and rotate copy 180 degrees
im1 = Image.open('dog.jpg')
im2 = im1.copy().rotate(180)
# DEBUG: im2.save('im2.png')
# Get dimensions of image
w, h = im1.size
# Create an alpha layer same size as our image filled with black
alpha = Image.new('L', (w, h))
# Draw 2 white triangles on the alpha layer; Image.composite() takes im1
# where the layer is white and im2 where it is black
draw = ImageDraw.Draw(alpha)
draw.polygon([(0, 0), (w, 0), (w/2, h/2)], fill=255)
draw.polygon([(0, h), (w, h), (w/2, h/2)], fill=255)
# DEBUG: alpha.save('alpha.png')
# Composite im1 (where alpha is white) with im2 (where alpha is black)
result = Image.composite(im1, im2, alpha)
result.save('result.png')
The intermediate steps (marked with # DEBUG: in the code) look like this:
im2.png
and alpha.png:
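If you want the four triangles to come from more than two sources, the same mask-and-composite trick can simply be applied twice. A minimal sketch, assuming the same dog.jpg (the choice of mirrored/flipped copies is illustrative):
#!/usr/bin/env python3
from PIL import Image, ImageDraw, ImageOps

im1 = Image.open('dog.jpg')       # keeps the top and bottom triangles
im2 = ImageOps.mirror(im1)        # supplies the left triangle
im3 = ImageOps.flip(im1)          # supplies the right triangle
w, h = im1.size

# Mask selecting only the left triangle
left_mask = Image.new('L', (w, h))
ImageDraw.Draw(left_mask).polygon([(0, 0), (w/2, h/2), (0, h)], fill=255)

# Mask selecting only the right triangle
right_mask = Image.new('L', (w, h))
ImageDraw.Draw(right_mask).polygon([(w, 0), (w/2, h/2), (w, h)], fill=255)

# Start from im1, then swap in the left and right triangles
result = Image.composite(im2, im1, left_mask)
result = Image.composite(im3, result, right_mask)
result.save('result4.png')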
I have a grayscale image and I want to create an alpha layer based on a range of pixel values. I want to know how I can create a fall-off function to generate such an image.
The original image is the following:
I can use Color Range in Photoshop to select the shadows with a fuzziness of 20%:
And the resultant alpha channel is the following:
With a fuzziness of 100%:
How can I generate such alpha channels in python with PIL?
I thought that maybe a subtraction would work, but it does not generate the same kind of falloff.
The code to generate the image with NumPy and PIL:
from PIL import Image
import numpy as np
arr = np.arange(0, 256, 0.1).astype(np.uint8)
arr = np.reshape(arr, (arr.shape[0], 1))
arr = np.repeat(arr, 500, axis=1)
img = Image.fromarray(arr.T)
I tried to create a fall-off function based on the distance from the target pixel value, but it does not have the same gradient. Maybe there is a different way?
def gauss_falloff(distance, c=0.2, alpha=255):
    new_value = alpha * np.exp(-1 * (distance ** 2) / (c ** 2))
    new_value = new_value.clip(0, 255)
    return new_value.astype(np.uint8)

test = arr.T / 255             # normalized pixel values, 0..1
test = np.abs(test - pixel)    # 'pixel' is the target value (0..1) to fall off from
test = gauss_falloff(test, c=0.2, alpha=255)
test = Image.fromarray(test)
With my code:
Here's how you could do that:
from PIL import Image, ImageDraw
# Create a new image with a transparent background
width, height = 300, 300
image = Image.new('RGBA', (width, height), (255, 255, 255, 0))
# Create a drawing context for the image
draw = ImageDraw.Draw(image)
# Set the starting and ending colors for the gradient
start_color = (255, 0, 0)
end_color = (0, 0, 255)
# Draw a gradient line with the specified color range
for x in range(width):
    color = tuple(int(start_color[i] + (end_color[i] - start_color[i]) * x / width)
                  for i in range(3))
    draw.line((x, 0, x, height), fill=color)
# Save the image
image.save('gradient.png')
This code creates a new image with a transparent background and a drawing context for that image. Then it draws a gradient line on the image with the specified color range. Finally, it saves the image as a PNG file.
Note: The original Python Imaging Library (PIL) is no longer maintained and has been superseded by Pillow, a fork of PIL. If you are using Pillow, the code above works unchanged, because Pillow is still imported under the PIL name.
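Coming back to the original question of deriving an alpha channel from a range of pixel values: here is a minimal NumPy sketch, assuming a simple linear falloff around a target value as a stand-in for Photoshop's fuzziness (the target and fuzz values are illustrative):
from PIL import Image
import numpy as np

def range_alpha(gray, target, fuzz):
    # alpha is 255 at the target value and falls off linearly to 0 over +/- fuzz
    v = gray.astype(np.float32)
    a = 255.0 * (1.0 - np.abs(v - target) / fuzz)
    return Image.fromarray(a.clip(0, 255).astype(np.uint8))

# rebuild the grayscale gradient from the question
gray = np.repeat(np.arange(0, 256, 0.1).astype(np.uint8)[:, None], 500, axis=1).T

range_alpha(gray, target=0, fuzz=51).save('alpha_20.png')    # ~20% of the 0..255 range
range_alpha(gray, target=0, fuzz=255).save('alpha_100.png')  # 100% fuzziness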
I am trying to detect blurred images. Thanks to this post, I managed to create a script using Fast Fourier Transform and so far it worked quite well. But for some photos, I am not able to get correct results.
When the background is almost the same color as the objects in the foreground, I think my script is not able to give a good result.
Do you have any leads on how to correct this?
import cv2
import imutils
from PIL import Image as pilImg
from IPython.display import display
import numpy as np
from matplotlib import pyplot as plt
def detect_blur_fft(image, size=60, thresh=17, vis=False):
    """
    Detects blur by measuring the high-frequency content of the image's FFT
    :param image: the image to detect blur in
    :param size: half-size of the square zeroed out around the FFT centre, defaults to 60 (optional)
    :param thresh: the lower this value, the more blur is acceptable, defaults to 17 (optional)
    :param vis: whether or not to display a visualization of the magnitude spectrum, defaults to False
    (optional)
    """
    # grab the dimensions of the image and use the dimensions to
    # derive the center (x, y)-coordinates
    (h, w) = image.shape
    (cX, cY) = (int(w / 2.0), int(h / 2.0))
    # compute the FFT to find the frequency transform, then shift
    # the zero frequency component (i.e., DC component located at
    # the top-left corner) to the center where it will be easier
    # to analyze
    fft = np.fft.fft2(image)
    fftShift = np.fft.fftshift(fft)
    # check to see if we are visualizing our output
    if vis:
        # compute the magnitude spectrum of the transform
        magnitude = 20 * np.log(np.abs(fftShift))
        # display the original input image
        (fig, ax) = plt.subplots(1, 2)
        ax[0].imshow(image, cmap="gray")
        ax[0].set_title("Input")
        ax[0].set_xticks([])
        ax[0].set_yticks([])
        # display the magnitude image
        ax[1].imshow(magnitude, cmap="gray")
        ax[1].set_title("Magnitude Spectrum")
        ax[1].set_xticks([])
        ax[1].set_yticks([])
        # show our plots
        plt.show()
    # zero-out the center of the FFT shift (i.e., remove low
    # frequencies), apply the inverse shift such that the DC
    # component once again becomes the top-left, and then apply
    # the inverse FFT
    fftShift[cY - size:cY + size, cX - size:cX + size] = 0
    fftShift = np.fft.ifftshift(fftShift)
    recon = np.fft.ifft2(fftShift)
    # compute the magnitude spectrum of the reconstructed image,
    # then compute the mean of the magnitude values
    magnitude = 20 * np.log(np.abs(recon))
    mean = np.mean(magnitude)
    # the image will be considered "blurry" if the mean value of the
    # magnitudes is less than the threshold value
    return (mean, mean <= thresh)
pathImg = "path to the image"
image = cv2.imread(pathImg)
# Resizing the image to 500 pixels in width.
image = imutils.resize(image, width=500)
# Converting the image to gray scale.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Using the FFT to detect blur.
(mean, blurry) = detect_blur_fft(gray, size=60)
image = np.dstack([gray] * 3)
# Set the annotation color: red if the image is blurry, green otherwise
color = (0, 0, 255) if blurry else (0, 255, 0)
text = "Blurry ({:.4f})" if blurry else "Not Blurry ({:.4f})"
text = text.format(mean)
# Adding text to the image.
cv2.putText(image, text, (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.7, color, 2)
print("[INFO] {}".format(text))
# show the output image
display(pilImg.fromarray(image))
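One possible lead for the failure cases where foreground and background have similar colors (an assumption, not a guaranteed fix): normalize local contrast before the FFT check, e.g. with CLAHE, so that faint edges contribute more high-frequency energy. A minimal sketch reusing the gray image and detect_blur_fft from above; the thresh value may need retuning after equalization:
import cv2

# hypothetical preprocessing step: boost local contrast so weak edges
# show up in the magnitude spectrum
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
gray_eq = clahe.apply(gray)
(mean, blurry) = detect_blur_fft(gray_eq, size=60)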
So I have started a program that takes two images: one is the model image and the other is an image with a change. I want it to detect the differences and show them to me by circling them. I have run into an issue finding the difference coordinates, as my circle keeps ending up in the middle of the image.
This is the code I have:
import cv2 as cv
import numpy as np
from PIL import Image, ImageChops
#Ideal image and the main image
img2= cv.imread("ideal.jpg")
img1 = cv.imread("Actual.jpg")
#Verifies whether there is a difference in the image for the if statement
diff = cv.subtract(img2, img1)
results = not np.any(diff)
#Tells the user whether there is a difference between the model image and the given image
if results:
    print("The images are the same!")
else:
    print("The images are different")
#This produces the image showing the difference to circle
img_1=Image.open("Actual.jpg")
img_2=Image.open("ideal.jpg")
diff=ImageChops.difference(img_1,img_2)
diff.save("Differance.jpg")
#Reads the image just saved
Differance = cv.imread("Differance.jpg", 0)
#Resize the images to make them smaller
img1s = cv.resize(img1, (0, 0), fx=0.5, fy=0.5)
Differance = cv.resize(Differance, (0, 0), fx=0.5, fy=0.5)
# Find anything not black, i.e. the difference
nz = cv.findNonZero(Differance)
# Find top, bottom, left and right edges of the difference
a = nz[:,0,0].min()
b = nz[:,0,0].max()
c = nz[:,0,1].min()
d = nz[:,0,1].max()
# Average top and bottom edges, left and right edges, to give centre
c0 = (a+b)/2
c1 = (c+d)/2
#The Center Coords
c3 = (int(c0),int(c1))
#Values for the code below so it doesn't look messy
radius = 50
color = (0, 0, 255)
thickness = 2
#This places a circle around the center of the difference
Finished = cv.circle(img1s, c3, radius, color, thickness)
#Saves the final image with the circle around it
cv.imwrite("Final.jpg", Finished)
And the images attached: image 1, image 2.
This code currently takes both images and blacks out the background, leaving only the difference within the image. The program is then meant to take the location of the difference and place a circle around its center on the main image (the one with the difference in it).
Your main problem is the JPG format, which changes pixel values to compress the image better, and this creates small differences across the whole area. If you display diff or Differance then you should see many gray pixels.
I hope you can see the pixels below the ball.
If you use PNG for the original image (without the ball), and later use that image to create the image with the ball and also save it as PNG, then the code will work correctly.
My version, without PIL.
Press any key to close the window with the image.
import cv2 as cv
import numpy as np
# load images
img1 = cv.imread("img1.png")
img2 = cv.imread("img2.png")
# calculate difference
diff = cv.subtract(img1, img2) # other order `(img2, img1)` gives worse result
# saves difference
cv.imwrite("difference.png", diff)
# show difference - press any key to close
cv.imshow('diff', diff)
cv.waitKey(0)
cv.destroyWindow('diff')
if not np.any(diff):
    print("The images are the same!")
else:
    print("The images are different")
# resize images to make them smaller
#img1_resized = cv.resize(img1, (0, 0), fx=0.5, fy=0.5)
#diff_resized = cv.resize(diff, (0, 0), fx=0.5, fy=0.5)
img1_resized = img1
diff_resized = diff
# convert to grayscale (without saving and loading again)
diff_resized = cv.cvtColor(diff_resized, cv.COLOR_BGR2GRAY)
# find anything not black in the difference
non_zero = cv.findNonZero(diff_resized)
#print(non_zero)
# find top, bottom, left and right edges of the difference
x_min = non_zero[:,0,0].min()
x_max = non_zero[:,0,0].max()
y_min = non_zero[:,0,1].min()
y_max = non_zero[:,0,1].max()
print('x:', x_min, x_max)
print('y:', y_min, y_max)
sizes = [x_max-x_min+1, y_max-y_min+1]
print('width :', sizes[0])
print('height:', sizes[1])
# center
center_x = (x_min + x_max) // 2
center_y = (y_min + y_max) // 2
center = (center_x, center_y)
print('center:', center)
# radius
radius = max(sizes) // 2
print('radius:', radius)
color = (0, 0, 255)
thickness = 2
# draw circle around the center of the difference
finished = cv.circle(img1_resized, center, radius, color, thickness)
# saves final image with circle
#cv.imwrite("final.png", finished)
# show final image - press any key to close
cv.imshow('finished', finished)
cv.waitKey(0)
cv.destroyWindow('finished')
img1.png
img2.png
difference.png
final.png
EDIT:
If you work with JPG, then you can try to reduce the noise:
diff = cv.subtract(img1, img2)
diff_gray = cv.cvtColor(diff, cv.COLOR_BGR2GRAY)
diff_gray[diff_gray < 50] = 0
For different images you may need different values instead of 50.
You may also try thresholding:
(_, diff_gray) = cv.threshold(diff_gray, 50, 255, cv.THRESH_TOZERO)
You may also need other functions, like blur(), erode(), or dilate().
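For example, here is a sketch of one possible cleanup chain (the threshold and kernel sizes are illustrative, not tuned values):
import cv2 as cv
import numpy as np

diff_gray = cv.cvtColor(diff, cv.COLOR_BGR2GRAY)
diff_gray = cv.GaussianBlur(diff_gray, (5, 5), 0)                    # smooth JPG noise
(_, diff_gray) = cv.threshold(diff_gray, 50, 255, cv.THRESH_TOZERO)  # drop faint pixels
kernel = np.ones((3, 3), np.uint8)
diff_gray = cv.erode(diff_gray, kernel)    # remove isolated specks
diff_gray = cv.dilate(diff_gray, kernel)   # restore the surviving blobs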
You do not need PIL:
take the Differance image
threshold it
use findContours to find regions
if contours are found, draw them:
# threshold the Differance image and find contours (assumes OpenCV 4.x,
# where findContours returns two values); the threshold value 30 is illustrative
(_, thresh) = cv2.threshold(Differance, 30, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
out_image = img1s.copy()
for cnt in contours:
    out_image = cv2.drawContours(out_image, [cnt], 0, (255, 0, 0), -1)
    (x, y), radius = cv2.minEnclosingCircle(cnt)
    center = (int(x), int(y))
    radius = int(radius)
    out_image = cv2.circle(out_image, center, radius, (0, 255, 0), 2)
I want to perform image translation by a certain amount (shift the image vertically and horizontally).
The problem is that when I paste the cropped image back on the canvas, I just get back a white blank box.
Can anyone spot the issue here?
Many thanks
img_shape = image.shape
# translate image
# percentage of the dimension of the image to translate
translate_factor_x = random.uniform(*translate)
translate_factor_y = random.uniform(*translate)
# initialize a black image the same size as the image
canvas = np.zeros(img_shape)
# get the top-left corner coordinates of the shifted image
corner_x = int(translate_factor_x*img_shape[1])
corner_y = int(translate_factor_y*img_shape[0])
# determine which part of the image will be pasted
mask = image[max(-corner_y, 0):min(img_shape[0], -corner_y + img_shape[0]),
             max(-corner_x, 0):min(img_shape[1], -corner_x + img_shape[1]),
             :]
# determine which part of the canvas the image will be pasted on
target_coords = [max(0, corner_y),
                 max(corner_x, 0),
                 min(img_shape[0], corner_y + img_shape[0]),
                 min(img_shape[1], corner_x + img_shape[1])]
# paste image on selected part of the canvas
canvas[target_coords[0]:target_coords[2], target_coords[1]:target_coords[3],:] = mask
transformed_img = canvas
plt.imshow(transformed_img)
This is what I get:
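One likely culprit, judging from the symptom (this is an inference, not from the original thread): np.zeros defaults to float64, and matplotlib's imshow renders float images on a 0..1 scale, so the uint8 pixel values pasted into the float canvas all clip to white. A minimal fix sketch:
# keep the canvas in the same dtype as the source image so that
# plt.imshow scales it as 0..255 rather than 0..1
canvas = np.zeros(img_shape, dtype=image.dtype)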
For image translation, you can make use of the somewhat obscure numpy.roll function. In this example I'm going to use a white canvas so it is easier to visualize.
image = np.full_like(original_image, 255)
height, width = image.shape[:-1]
shift = 100
# shift image
rolled = np.roll(image, shift, axis=[0, 1])
# black out shifted parts
rolled = cv2.rectangle(rolled, (0, 0), (width, shift), 0, -1)
rolled = cv2.rectangle(rolled, (0, 0), (shift, height), 0, -1)
If you want to flip the image so the black part is on the other side, you can use both np.fliplr and np.flipud.
Result:
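For a negative shift the wrapped strips appear on the opposite edges, so the blacked-out rectangles have to move with them. A sketch assuming the same image, width and height variables:
shift = -100
rolled = np.roll(image, shift, axis=[0, 1])
# black out the wrapped strips at the bottom and right edges
rolled = cv2.rectangle(rolled, (0, height + shift), (width, height), 0, -1)
rolled = cv2.rectangle(rolled, (width + shift, 0), (width, height), 0, -1)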
Here is a simple solution that translates an image by tx and ty pixels using only array indexing; it does not roll over, and it handles negative values as well (note that the row axis is indexed by ty and the column axis by tx):
tx, ty = 8, 5 # translation on x and y axis, in pixels
N, M = image.shape # N rows (height), M columns (width)
image_translated = np.zeros_like(image)
image_translated[max(ty,0):N+min(ty,0), max(tx,0):M+min(tx,0)] = image[-min(ty,0):N-max(ty,0), -min(tx,0):M-max(tx,0)]
Example:
(Note that for simplicity it does not handle cases where tx > M or ty > N).
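A quick sanity check of the slicing on a tiny, hypothetical 3x3 array:
import numpy as np

image = np.arange(9, dtype=np.uint8).reshape(3, 3)
tx, ty = 1, 0  # shift one pixel to the right
N, M = image.shape
out = np.zeros_like(image)
out[max(ty,0):N+min(ty,0), max(tx,0):M+min(tx,0)] = image[-min(ty,0):N-max(ty,0), -min(tx,0):M-max(tx,0)]
print(out)
# [[0 0 1]
#  [0 3 4]
#  [0 6 7]]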
I would like to be able to make a certain shape in either a PIL image or an OpenCV image 3 times larger or smaller, without changing the resolution of the image or distorting the shape I want to enlarge. I have tried using OpenCV's dilation method, but that is not its intended use, and it changed the shape of the image. For example:
Thanks.
Here's a way of doing it:
find the interesting shape, i.e. non-white ROI area
extract it
scale it up by a factor
clear the original image to white
paste the scaled ROI back into image with same centre
#!/usr/bin/env python3
import cv2
import numpy as np
if __name__ == "__main__":
    # Open image
    orig = cv2.imread('image.png', cv2.IMREAD_COLOR)

    # Get extent of interesting part, i.e. non-white part
    y, x, _ = np.nonzero(~orig)
    y0, y1 = np.min(y), np.max(y)   # top and bottom rows
    x0, x1 = np.min(x), np.max(x)   # left and right cols
    h, w = y1-y0, x1-x0             # height and width
    ROI = orig[y0:y1, x0:x1]        # extract ROI
    cv2.imwrite('ROI.png', ROI)     # DEBUG only

    # Upscale ROI
    factor = 3
    scaledROI = cv2.resize(ROI, (w*factor, h*factor), interpolation=cv2.INTER_NEAREST)
    newH, newW = scaledROI.shape[:2]

    # Clear original image to white
    orig[:] = [255, 255, 255]

    # Get centre of original shape, and position of top-left of ROI in output image
    cx, cy = (x0 + x1)//2, (y0 + y1)//2
    top = cy - newH//2
    left = cx - newW//2

    # Paste in rescaled ROI
    orig[top:top+newH, left:left+newW] = scaledROI
    cv2.imwrite('result.png', orig)
That transforms this:
to this:
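One caveat: with a large factor the scaled ROI can extend past the canvas, and the paste will then fail with a shape mismatch. A minimal guard (my own addition, not part of the answer above) is to clamp the paste position:
# hypothetical guard: clamp the paste position so the scaled ROI stays on the canvas
top = max(0, min(top, orig.shape[0] - newH))
left = max(0, min(left, orig.shape[1] - newW))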
Puts me in mind of a pantograph: