Here's what I'm trying to do:
I have an image.
I want to take a circular region in the image, and have it appear as normal.
The rest of the image should appear darker.
This way, it will be as if the circular region is "highlighted".
I would much appreciate feedback on how to do it in Python.
Manually, in Gimp, I would create a new layer filled with a gray darker than middle gray. I would then create a circular region on that layer and make it middle gray. Then I would set the layer's blending mode to soft light. Essentially, anything that is middle gray on the top layer shows through without modification, and anything darker than middle gray shows up darker.
(Ideally, I'd also blur out the top layer so that the transition isn't abrupt).
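To make the effect concrete, here is roughly what I have in mind, sketched with numpy/scipy using the "pegtop" soft-light formula (the file name, center, radius, gray level, and blur sigma are all placeholders):
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

img = np.asarray(Image.open('input.jpg'), dtype=np.float64) / 255.0
h, w = img.shape[:2]

# Top "layer": darker-than-middle gray everywhere, middle gray inside the circle
overlay = np.full((h, w), 0.3)
yy, xx = np.ogrid[:h, :w]
overlay[(yy - 600) ** 2 + (xx - 600) ** 2 <= 400 ** 2] = 0.5

# Blur the layer so the transition isn't abrupt
overlay = gaussian_filter(overlay, sigma=20)

# Soft light (pegtop variant): b = 0.5 leaves the base untouched, b < 0.5 darkens
b = overlay[..., None]
out = (1 - 2 * b) * img ** 2 + 2 * b * img

Image.fromarray((out * 255).astype(np.uint8)).save('highlighted.jpg')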
How can I do this algorithmically in Python? I've considered using the Pillow library, but it doesn't have these kinds of blend modes. I also considered the Blit library, but I couldn't import it (I'm not sure it's maintained any more). I'm open to scikit-image as well. I just need pointers on the library and some relevant functions.
If there's no suitable library, I'm open to calling command line tools (e.g. imagemagick) from within the Python code.
Thanks!
You can also do it using the Python Imaging Library (PIL/Pillow). The method below works, but could be optimized, since it still contains a double for loop.
from PIL import Image
import math


def spotlight(img: Image.Image, center: tuple, radius: int) -> Image.Image:
    width, height = img.size
    # Semi-transparent black overlay: darkens everything it covers
    overlay_color = (0, 0, 0, 128)
    img_overlay = Image.new(size=img.size, color=overlay_color, mode='RGBA')
    for x in range(width):
        for y in range(height):
            # Inside the circle, make the overlay fully transparent
            dx = x - center[0]
            dy = y - center[1]
            distance = math.sqrt(dx * dx + dy * dy)
            if distance < radius:
                img_overlay.putpixel((x, y), (0, 0, 0, 0))
    img.paste(img_overlay, None, mask=img_overlay)
    return img


if __name__ == '__main__':
    orig_file_name = 'amsterdam_1900x1500'
    img = Image.open('{}.jpg'.format(orig_file_name))
    spotlight_img = spotlight(img, (475, 900), 400)
    spotlight_img.save('spotlight_{}.jpg'.format(orig_file_name))
Before:
After:
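If the double loop turns out to be too slow, the overlay's alpha channel can be built in one shot with numpy instead (a sketch, assuming numpy is available; it mirrors the function above):
import numpy as np
from PIL import Image

def spotlight_fast(img: Image.Image, center: tuple, radius: int) -> Image.Image:
    width, height = img.size
    # Alpha is 128 (darken) outside the circle and 0 (transparent) inside
    xx, yy = np.meshgrid(np.arange(width), np.arange(height))
    outside = (xx - center[0]) ** 2 + (yy - center[1]) ** 2 >= radius ** 2
    overlay = np.zeros((height, width, 4), dtype=np.uint8)
    overlay[..., 3] = np.where(outside, 128, 0).astype(np.uint8)
    img_overlay = Image.fromarray(overlay, mode='RGBA')
    img.paste(img_overlay, None, mask=img_overlay)
    return img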
I finally did it with ImageMagick, using Python to calculate the various coordinates, etc.
This command will create the desired circle (radius 400, centered at (600, 600)):
convert -size 1024x1024 xc:none -stroke black -fill steelblue -strokewidth 1 -draw "translate 600,600 circle 0,0 400,0" drawn.png
This command will then convert it to B/W to get a rudimentary mask:
convert drawn.png -alpha extract mask.png
This command will blur the mask (radius 100, sigma 16):
convert -channel RGBA -blur 100x16 mask.png mask2.png
The above three commands give me the mask I need.
This command will darken the whole image (without the mask):
convert image.jpg -level 0%,130%,0.7 dark.jpg
And this command will put all 3 images together (original image, darkened image, and mask):
composite image.jpg dark.jpg mask2.png out.jpg
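If you want to drive the same commands from Python, subprocess works (a sketch; the geometry values are the ones from above and the file names are placeholders):
import subprocess

def run(cmd):
    subprocess.run(cmd, shell=True, check=True)

cx, cy, r = 600, 600, 400  # circle centre and radius, computed in Python
run(f'convert -size 1024x1024 xc:none -stroke black -fill steelblue -strokewidth 1 '
    f'-draw "translate {cx},{cy} circle 0,0 {r},0" drawn.png')
run('convert drawn.png -alpha extract mask.png')
run('convert -channel RGBA -blur 100x16 mask.png mask2.png')
run('convert image.jpg -level 0%,130%,0.7 dark.jpg')
run('composite image.jpg dark.jpg mask2.png out.jpg')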
I have 2 images:
1- a white circle with a black stroke
2- a black circle with a black stroke
I want to compare both images and identify that both contain the same circle, just with a different fill.
I should only use Python & Pillow.
I have already tried several methods, like edge detection, but whenever I try to recreate the picture for edge detection, the new image appears empty.
from PIL import Image, ImageDraw
import numpy as np
from math import sqrt

# Load image:
input_image = Image.open("input.png")
input_pixels = input_image.load()
width, height = input_image.width, input_image.height

# Create output image
output_image = Image.new("RGB", input_image.size)
draw = ImageDraw.Draw(output_image)

# Convert to grayscale
intensity = np.zeros((width, height))
for x in range(width):
    for y in range(height):
        intensity[x, y] = sum(input_pixels[x, y]) / 3

# Compute convolution between intensity and kernels
for x in range(1, input_image.width - 1):
    for y in range(1, input_image.height - 1):
        magx = intensity[x + 1, y] - intensity[x - 1, y]
        magy = intensity[x, y + 1] - intensity[x, y - 1]

        # Draw in black and white the magnitude
        color = int(sqrt(magx**2 + magy**2))
        draw.point((x, y), (color, color, color))

output_image.save("edge.png")
Expected result: both pictures greyscaled, with only the circle edges marked in white.
Actual result: an empty black image (as if it couldn't see the edges).
Well, if all you want is edge detection in an image, then you can try using the Sobel operator or its equivalents.
from PIL import Image, ImageFilter
image = Image.open(r"Circle.png").convert("RGB")
image = image.filter(ImageFilter.FIND_EDGES)
image.save(r"ED_Circle.png")
The above code takes an input image and converts it into RGB mode (certain images are in P mode, which doesn't allow edge detection, hence the conversion to RGB). It then finds the edges in it via image.filter(ImageFilter.FIND_EDGES).
Sample Input Image (Black border with black circle):-
Output after processing through python program:-
Sample Image 2 (white circle with black border):-
Output after processing through python program:-
In the above samples, both input images were the same size and the circles in them had the same dimensions; the only difference between the two was that one had a white circle inside a black border, and the other had a black circle inside a black border.
Since the circles had the same dimensions, passing them through the edge detection process gave the same results.
NOTE:-
In the question, you wanted the circle edges in white and the rest of the image in greyscale, which isn't the best choice for edge detection. White and black are inverses of each other, so edges are easily identified when the image's sample space consists of just these two colors. Even then, if you want grey instead of black, you can simply change each black pixel of the image to a grey one, or whatever meets your needs.
The results of the edge detection above are the same because the size of the border is negligible. If the border were wider (a stroke), then running the process on a white circle with a black border would produce more than one white outline. You can get around that problem by making the program ignore the inner edges and only take the outermost ones into account.
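To then check with plain Pillow that the two processed images match, ImageChops can compare them (a sketch; the second file name is an assumption):
from PIL import Image, ImageChops, ImageFilter

edges_a = Image.open(r"Circle.png").convert("RGB").filter(ImageFilter.FIND_EDGES)
edges_b = Image.open(r"Circle2.png").convert("RGB").filter(ImageFilter.FIND_EDGES)

# getbbox() returns None when the difference image is entirely black,
# i.e. the two edge images are pixel-identical
if ImageChops.difference(edges_a, edges_b).getbbox() is None:
    print("Same circle outline")
else:
    print("Outlines differ")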
I want to analyse a specific part of an image. As an example, I'd like to focus on the bottom-right 200x200 section and count all the black pixels. So far I have:
im1 = Image.open(path)
rgb_im1 = im1.convert('RGB')
for pixel in rgb_im1.getdata():
Whilst you could do this with cropping and a pair of for loops, that is really slow and not ideal.
I would suggest you use Numpy as it is very commonly available, very powerful and very fast.
Here's a 400x300 black rectangle with a 1-pixel red border:
#!/usr/bin/env python3
import numpy as np
from PIL import Image
# Open the image and make into Numpy array
im = Image.open('image.png')
ni = np.array(im)
# Declare an ROI - Region of Interest as the bottom-right 200x200 pixels
# This is called "Numpy slicing" and is near-instantaneous https://www.tutorialspoint.com/numpy/numpy_indexing_and_slicing.htm
ROI = ni[-200:,-200:]
# Calculate total area of ROI and subtract non-zero pixels to get number of zero pixels
# Numpy.count_nonzero() is highly optimised and extremely fast
black = 200*200 - np.count_nonzero(ROI)
print(f'Black pixel total: {black}')
Sample Output
Black pixel total: 39601
Yes, you can make it shorter, for example:
h, w = 200, 200
ni = np.array(Image.open('image.png'))
black = h*w - np.count_nonzero(ni[-h:,-w:])
If you want to debug it, you can take the ROI and make it into a PIL Image which you can then display. So just use this line anywhere after you make the ROI:
# Display image to check
Image.fromarray(ROI).show()
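One caveat: if the image loads as 3-channel RGB, count_nonzero() counts channel values rather than pixels. It happens to work above because each red border pixel has exactly one nonzero channel, but for a strict per-pixel test you can collapse the channels first:
# Count pixels whose channels are all zero, i.e. pure black
black = np.count_nonzero(np.all(ROI == 0, axis=-1))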
You can try cropping the image to the specific part that you want:-
img = Image.open(r"Image_location")
x,y = img.size
img = img.crop((x-200, y-200, x, y))
The above code takes an input image and crops it to its bottom-right 200x200 pixels (make sure the image dimensions are more than 200x200, otherwise an error will occur).
Original Image:-
Image after Cropping:-
You can then use this cropped image to count the number of black pixels, where it depends on your use case what you consider a BLACK pixel (a discrete value like (0, 0, 0), or a range/threshold like (0-15, 0-15, 0-15)).
P.S.:- The final image will always have dimensions of 200x200 pixels.
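For example, a plain-Pillow count with a threshold might look like this (the threshold of 15 is an assumption; tune it to what you consider black):
from PIL import Image

img = Image.open(r"Image_location").convert("RGB")
x, y = img.size
cropped = img.crop((x - 200, y - 200, x, y))

threshold = 15  # channel values at or below this count as black (an assumption)
black = sum(1 for r, g, b in cropped.getdata()
            if r <= threshold and g <= threshold and b <= threshold)
print(black)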
from PIL import Image

img = Image.open("ImageName.jpg")
# The crop box is (left, upper, right, lower); a, b, c, d are placeholders for your coordinates
crop_area = (a, b, c, d)
cropped_img = img.crop(crop_area)
I want to rotate a black and white image. I am trying to use the rotate function as follows:
image.rotate(angle, fillcolor=255)
I am required to use older versions of Python and Pillow, which do not support the 'fillcolor' argument. I cannot upgrade to the newer versions due to certain restrictions, and I cannot use any external libraries.
Is there another way to fill the area outside the rotated image with white color using Pillow?
The rotated image has black in the area outside the rotated part. I want to fill that area with white.
Original: Original image
Rotated: Rotated image
You can try interpolating the original image with a white image of the same size via Image.composite() to get rid of the black bars/borders.
from PIL import Image

orig = Image.open(r"Image_Path")
img = orig.convert("RGBA")  # an alpha channel is required for compositing
angle = 30
img = img.rotate(angle)  # the uncovered corners become fully transparent
new_img = Image.new('RGBA', img.size, 'white')  # white canvas of the same size
# Use the rotated image's own alpha channel as the compositing mask
Alpha_Image = Image.composite(img, new_img, img)
Alpha_Image = Alpha_Image.convert(orig.mode)  # back to the original mode
Alpha_Image.show()
The above code takes in an image and converts it into RGBA mode (an alpha channel is required for this process), then rotates it by 30 degrees. After that it creates an empty RGBA image of the same dimensions as the original, with every pixel set to 255 in each channel (i.e. pure white in RGB, and full opacity in the context of alpha/transparency). It then interpolates the rotated image with this white one, using the rotated image's own transparency mask. This yields the desired image, with the black bars/edges replaced by white. Finally, we convert the image back to the original color mode.
ORIGINAL IMAGE:-
IMAGE AFTER ROTATING 30 DEGREES:-
An awkward option that has always worked for me, seeing as with my tools I always get a light gray "border" around the rotated image that interferes with filling:
1. Add a border to the non-rotated image and use the fill color for that border. The bordering operation is lossless, and filling will be exact (and easy).
2. Rotate the bordered image. The seam will now also be correct (but not exact unless you rotate by 45° or 90°).
3. Calculate the size of the rotated border using trigonometry. The result will not be exact (e.g. "131.12 pixels"). Usually you can do this in reverse: start with an exact border on the rotated image and calculate the border you need to add, then adjust the border width so that the non-rotated border is exact. Example: with a rotated border of 170 pixels you get a non-rotated border of 140.3394 pixels, so you use a 510 pixel rotated border, resulting in the need to add a 421.018 pixel non-rotated border. That is close enough to 421 pixels to be acceptable.
4. Remove the rotated border (a code sketch of these steps follows below).
This also helps avoid some artefacts near the cut parts of the image that fall off the rotated image. The drawback is that you end up with a more massive rotation, with higher memory expenditure and computation time, especially if you use larger borders to increase precision.
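A rough Pillow sketch of those steps, using the border widths from the example above (the angle and file names are illustrative; fill=255 assumes a greyscale image, and both ImageOps.expand and rotate's expand flag predate the fillcolor argument):
from PIL import Image, ImageOps

im = Image.open("input.png").convert("L")

# 1. Add the fill-colored border before rotating; filling here is exact and lossless
bordered = ImageOps.expand(im, border=421, fill=255)

# 2. Rotate the bordered image
rotated = bordered.rotate(30, expand=True)

# 3./4. Remove the (larger) rotated border calculated via trigonometry
crop = 510
w, h = rotated.size
result = rotated.crop((crop, crop, w - crop, h - crop))
result.save("rotated.png")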
Edit: As no external libraries are allowed, I would suggest cropping the rectangle you want and pasting it onto the original image. This can be done with magic numbers (the rectangle's coordinates). It works for me (you might need to tweak it a little):
from PIL import Image

im = Image.open("mFul4.png")
rotated = im.rotate(105)
# Magic numbers: the box (left, upper, right, lower) to crop and paste back
box = (55, 65, 200, 210)
d = rotated.crop(box=box)
im.paste(d, box=box)
im.save("ex.bmp")
And the output:
Edit 2: This is the ugliest way, but it works. You might need to tweak the magic numbers a bit to make it more precise; I was working on your given image, so I couldn't tell when I was overdoing it. It produces the same output.
from PIL import Image

im = Image.open("mFul4.png")
angle = 105
cos = 0.240959049  # -cos(angle)
d = im.rotate(angle)
pix = d.load()
tri_x = 120
for i in range(4):  # 4 triangles
    for j in range(tri_x, -1, -1):
        for k in range(int((tri_x - j) * cos) + 1, -1, -1):
            x, y = (j, k) if i < 1 else (d.size[0] - j - 1, d.size[1] - k - 1)
            if i in [2, 3]:
                y, x = (d.size[0] - j - 2, k) if i < 3 else (j, d.size[1] - k)
            pix[x, y] = (255, 255, 255, 255)
d.show()
I need to alpha-blend 2 images that are not the same size. I've managed to get them to composite by resizing to the same size, so I've got part of the logic:
import cv2 as cv

def combine_two_color_images_composited(foreground_image, background_image):
    foreground = cv.resize(foreground_image, (400, 400), interpolation=cv.INTER_CUBIC).copy()
    background = cv.resize(background_image, (400, 400), interpolation=cv.INTER_CUBIC).copy()
    alpha = 0.5
    # do composite of foreground onto the background
    cv.addWeighted(foreground, alpha, background, 1 - alpha, 0, background)
    cv.imshow('composited image', background)
    cv.waitKey(10000)
I'm wondering if I need to make a mask that is the same size as the larger image and then use that with my first image. If so, I don't know how to do masking in OpenCV yet... this is only a tiny portion of my project, so it's not something I've been able to spend much time researching.
I have searched all over, but the code I'm finding does things like 'adding' the images together (side by side).
To combine the two images you can make use of numpy slicing to select the portion of the background image where you want to blend the foreground, then insert the newly blended portion into your background again.
import cv2 as cv

def combine_two_color_images(image1, image2):
    foreground, background = image1.copy(), image2.copy()
    foreground_height = foreground.shape[0]
    foreground_width = foreground.shape[1]
    alpha = 0.5
    # do composite on the upper-left corner of the background image
    blended_portion = cv.addWeighted(foreground,
                                     alpha,
                                     background[:foreground_height, :foreground_width, :],
                                     1 - alpha,
                                     0,
                                     background)
    background[:foreground_height, :foreground_width, :] = blended_portion
    cv.imshow('composited image', background)
    cv.waitKey(10000)
edit:
To place the foreground at a specified location, you use numpy indexing as before. Numpy indexing is very powerful, and you will find it useful on many occasions. I linked the documentation above; it is really worth taking a look at.
def combine_two_color_images_with_anchor(image1, image2, anchor_y, anchor_x):
    foreground, background = image1.copy(), image2.copy()
    # Check that the foreground fits within the background at the given anchor and raise an error if not
    background_height = background.shape[0]
    background_width = background.shape[1]
    foreground_height = foreground.shape[0]
    foreground_width = foreground.shape[1]
    if foreground_height + anchor_y > background_height or foreground_width + anchor_x > background_width:
        raise ValueError("The foreground image exceeds the background boundaries at this location")
    alpha = 0.5
    # do composite at the specified location
    start_y = anchor_y
    start_x = anchor_x
    end_y = anchor_y + foreground_height
    end_x = anchor_x + foreground_width
    blended_portion = cv.addWeighted(foreground,
                                     alpha,
                                     background[start_y:end_y, start_x:end_x, :],
                                     1 - alpha,
                                     0,
                                     background)
    background[start_y:end_y, start_x:end_x, :] = blended_portion
    cv.imshow('composited image', background)
    cv.waitKey(10000)
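For example, to blend the foreground in with its top-left corner 100 pixels down and 50 pixels across (the file names are placeholders):
import cv2 as cv

foreground = cv.imread('foreground.png')
background = cv.imread('background.png')
combine_two_color_images_with_anchor(foreground, background, 100, 50)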
I have this image with two people in it. It is a binary image that contains only black and white pixels.
First I want to loop over all the pixels and find the white pixels in the image.
Then I want to find the [x, y] of one particular white pixel.
After that I want to use that particular [x, y] of the white pixel in the image.
Using that [x, y] coordinate, I want to convert the neighbouring black pixels into white pixels (not the whole image, though).
I wanted to post the image here but unfortunately I can't. I hope my question is understandable now. In the image below you can see the edges.
Say, for example, the edge of the nose: I find it with a loop using [x, y], and then turn all the neighbouring black pixels into white pixels.
This is the binary image
The operation described is called dilation, from mathematical morphology. You can either use, for example, scipy.ndimage.binary_dilation, or implement your own.
Here are two ways to do it (one is a trivial implementation), and you can check that the resulting images are identical:
import sys
import numpy
from PIL import Image
from scipy import ndimage

img = Image.open(sys.argv[1]).convert('L')  # Input is supposed to be binary.
width, height = img.size
img = img.point(lambda x: 255 if x > 40 else 0)  # "Ignore" the JPEG artifacts.

# Dilation with a 4-connected structuring element
im = numpy.array(img)
im = ndimage.binary_dilation(im, structure=((0, 1, 0), (1, 1, 1), (0, 1, 0)))
im = im.view(numpy.uint8) * 255
Image.fromarray(im).save(sys.argv[2])

# "Other operation": manual dilation over the 4-connected neighbourhood
im = numpy.array(img)
white_pixels = numpy.dstack(numpy.nonzero(im != 0))[0]
for y, x in white_pixels:
    for dy, dx in ((-1, 0), (0, -1), (0, 1), (1, 0)):
        py, px = dy + y, dx + x
        if py >= 0 and px >= 0 and py < height and px < width:
            im[py, px] = 255
Image.fromarray(im).save(sys.argv[3])
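The script takes the input path and the two output paths on the command line, so you would run it as, say, python dilate.py input.png dilated.png manual.png (the script name is illustrative).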