How to analyze only a part of an image? - python

I want to analyse a specific part of an image. As an example, I'd like to focus on the bottom-right 200x200 section and count all the black pixels. So far I have:
from PIL import Image

im1 = Image.open(path)
rgb_im1 = im1.convert('RGB')
for pixel in rgb_im1.getdata():
    pass  # stuck here: how do I restrict this to the bottom-right 200x200?

Whilst you could do this with cropping and a pair of for loops, that is really slow and not ideal.
I would suggest you use Numpy as it is very commonly available, very powerful and very fast.
Here's a 400x300 black rectangle with a 1-pixel red border:
#!/usr/bin/env python3
import numpy as np
from PIL import Image
# Open the image and make into Numpy array
im = Image.open('image.png')
ni = np.array(im)
# Declare an ROI - Region of Interest as the bottom-right 200x200 pixels
# This is called "Numpy slicing" and is near-instantaneous https://www.tutorialspoint.com/numpy/numpy_indexing_and_slicing.htm
ROI = ni[-200:,-200:]
# Calculate total area of ROI and subtract non-zero pixels to get number of zero pixels
# Numpy.count_nonzero() is highly optimised and extremely fast
black = 200*200 - np.count_nonzero(ROI)
print(f'Black pixel total: {black}')
Sample Output
Black pixel total: 39601
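Note that on a 3-channel RGB array this subtracts zero elements rather than zero pixels; it happens to work here because every non-black pixel (the red border) has exactly one nonzero channel. A more general variant (a sketch, assuming an RGB input where a black pixel is zero in all three channels):
# Count pixels whose R, G and B are all zero
black = np.count_nonzero((ROI == 0).all(axis=-1))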
Yes, you can make it shorter, for example:
h, w = 200, 200
im = np.array(Image.open('image.png'))
black = h*w - np.count_nonzero(im[-h:, -w:])
If you want to debug it, you can take the ROI and make it into a PIL Image which you can then display. So just use this line anywhere after you make the ROI:
# Display image to check
Image.fromarray(ROI).show()

You can try cropping the Image to the specific part that you want:-
img = Image.open(r"Image_location")
x,y = img.size
img = img.crop((x-200, y-200, x, y))
The above code takes an input image and crops it to its bottom-right 200x200 pixels. (Make sure the image dimensions are more than 200x200, otherwise an error will occur.)
Original Image:-
Image after Cropping:-
You can then use this cropped image to count the number of black pixels. What counts as a BLACK pixel depends on your use case: a discrete value like (0, 0, 0), or a range/threshold such as (0-15, 0-15, 0-15).
P.S.:- The final Image will always have a dimension of 200x200 pixels.
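For example, a rough sketch of the threshold approach (the cut-off of 15 and the file path are assumptions, and the image is assumed to be at least 200x200):
from PIL import Image

img = Image.open(r"Image_location").convert('RGB')
x, y = img.size
img = img.crop((x-200, y-200, x, y))

threshold = 15  # treat anything at or below this in every channel as black
black = sum(1 for r, g, b in img.getdata()
            if r <= threshold and g <= threshold and b <= threshold)
print(black)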

from PIL import Image

img = Image.open("ImageName.jpg")
# crop box is (left, upper, right, lower) in pixel coordinates
crop_area = (a, b, c, d)
cropped_img = img.crop(crop_area)

Related

How to make my code identify the difference between 2 circles (one filled with white and one with black) using Python and Pillow?

I have 2 images:
1- White circle with black stroke
2- Black circle with black stroke
I want to compare both images and identify that both have the same circle, but with different filling.
I should only use Python & Pillow.
I have already tried several methods like edge detection, but whenever I try to transform the picture for edge detection, the new image appears empty.
from PIL import Image, ImageDraw
import numpy as np
from math import sqrt

# Load image:
input_image = Image.open("input.png")
input_pixels = input_image.load()
width, height = input_image.width, input_image.height

# Create output image
output_image = Image.new("RGB", input_image.size)
draw = ImageDraw.Draw(output_image)

# Convert to grayscale
intensity = np.zeros((width, height))
for x in range(width):
    for y in range(height):
        intensity[x, y] = sum(input_pixels[x, y]) / 3

# Compute convolution between intensity and kernels
for x in range(1, input_image.width - 1):
    for y in range(1, input_image.height - 1):
        magx = intensity[x + 1, y] - intensity[x - 1, y]
        magy = intensity[x, y + 1] - intensity[x, y - 1]

        # Draw in black and white the magnitude
        color = int(sqrt(magx**2 + magy**2))
        draw.point((x, y), (color, color, color))

output_image.save("edge.png")
Expected result: both pictures greyscaled, with only the circle edges marked in white.
Actual result: an empty black image (as if it couldn't see the edges).
Well, if all you want is edge detection in an image, then you can try using the Sobel operator or its equivalents.
from PIL import Image, ImageFilter
image = Image.open(r"Circle.png").convert("RGB")
image = image.filter(ImageFilter.FIND_EDGES)
image.save(r"ED_Circle.png")
The above code takes in an input image and converts it into RGB mode (certain images have P mode, which doesn't allow edge detection, so we convert to RGB). It then finds the edges in it via image.filter(ImageFilter.FIND_EDGES).
Sample Input Image (Black border with black circle):-
Output after processing through python program:-
Sample Image 2 (white circle with black border):-
Output after processing through python program:-
In the above samples, both input images were of the same size, and the circles in them were also of the same dimensions; the only difference between the two was that one had a white circle inside a black border and the other had a black circle inside a black border. Since the circles were of the same dimensions, passing them through the edge detection process gave the same results.
NOTE:-
In the question, you wanted circle edges in white and the rest in greyscale, which isn't the best choice for edge detection. White and black are inverses of each other, so edges are easily identified if the sample space of the image consists of these two colors. Even then, if you want greyscale instead of black, you can simply change each black pixel of the image to a grey one, or whatever meets your needs.
The results of the above edge detection are the same because the size of the border is negligible. If the border is wider (a stroke), then when the process is run on a white circle with a black border, edge detection will create more than one white border. You can get around that problem by making the program ignore the inner edges and only take into account the outermost ones.
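If, beyond the edges, you also need to tell the two fills apart using only Pillow, one hedged option (the file names and the threshold of 50 are assumptions) is to compare mean brightness, since the black-filled circle pulls the mean down sharply:
from PIL import Image, ImageStat

img1 = Image.open("white_circle.png").convert("L")
img2 = Image.open("black_circle.png").convert("L")

# Mean brightness per image; a black-filled circle scores much lower
mean1 = ImageStat.Stat(img1).mean[0]
mean2 = ImageStat.Stat(img2).mean[0]

if abs(mean1 - mean2) > 50:
    print("Same outline, different filling")
else:
    print("Similar filling")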

Is there another way to fill the area outside a rotated image with white color? 'fillcolor' does not work with older versions of Python

I want to rotate a black and white image. I am trying to use the rotate function as follows:
image.rotate(angle, fillcolor=255)
I am required to use older versions of Python and Pillow, and they do not support the 'fillcolor' argument. I cannot upgrade to the newer versions due to certain restrictions and cannot use any external libraries.
Is there another way to fill the area outside the rotated image with white color using Pillow?
The rotated image has black in the area outside the rotated part. I want to fill it with white.
Original image:
Rotated image:
You can try compositing the rotated image with a white background image via Image.composite() to get rid of the black bars/borders.
from PIL import Image
img = Image.open(r"Image_Path").convert("RGBA")
angle = 30
img = img.rotate(angle)
new_img = Image.new('RGBA', img.size, 'white')
Alpha_Image = Image.composite(img, new_img, img)
Alpha_Image = Alpha_Image.convert(img.mode)
Alpha_Image.show()
The above code takes in an image, converts it into mode RGBA (alpha is required for this process), and then rotates it by 30 degrees. After that it creates an empty Image object of mode RGBA with the same dimensions as the original image, with each pixel defaulting to 255 in each channel (i.e. pure white for RGB, and full opacity in the context of alpha/transparency). It then composites the rotated image with this empty one, using the transparency mask of the rotated image. This results in the desired image, where the black bars/edges are replaced by white. In the end we convert the image back to its original mode.
ORIGINAL IMAGE:-
IMAGE AFTER ROTATING 30 DEGREES:-
An awkward option that has always worked for me, seeing as with my tools I always get a light gray "border" around the rotated image that interferes with filling (a code sketch follows these steps):
1. Add a border on the non-rotated image and use the fill color with that border. The bordering operation is lossless and filling will be exact (and easy).
2. Rotate the bordered image. The seam will now also be correct (but not exact unless you rotate by 45° or 90°).
3. Calculate the size of the rotated border using trigonometry. The result will not be exact (e.g. "131.12 pixels"). Usually you can do this in reverse: start with an exact border on the rotated image, calculate the border you need to add, and adjust the border width so that the non-rotated border is exact. Example: with a rotated border of 170 pixels you get a non-rotated border of 140.3394 pixels. So you use a 510-pixel rotated border, resulting in the need to add a 421.018-pixel non-rotated border, which is close enough to 421 pixels to be acceptable.
4. Remove the rotated border.
This also helps avoid some artefacts near the cut parts of the image that fall off the rotated image. It has the drawback that you end up with a more massive rotation, with higher memory expenditure and computation time, especially if you use larger borders to increase precision.
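A minimal sketch of those steps with Pillow (the pad and crop widths below are illustrative guesses, not the exact values from the trigonometry above, so expect to tweak them):
from PIL import Image, ImageOps

im = Image.open("input.png")
pad = 500   # non-rotated border width (step 1); derive it as in step 3
crop = 600  # rotated border width to remove (step 4); likewise from step 3

padded = ImageOps.expand(im, border=pad, fill="white")   # add the border
rotated = padded.rotate(30)                              # rotate the bordered image
w, h = rotated.size
result = rotated.crop((crop, crop, w - crop, h - crop))  # remove the rotated border
result.show()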
Edit: As no external libraries are allowed, I would suggest cropping the rectangle you want and pasting it onto the original image. This could be done with magic numbers (the rectangle's coordinates). It works for me (you might need to tweak it a little):
im = Image.open("mFul4.png")
rotated = im.rotate(105)
box = (55, 65,200,210)
d = rotated.crop(box=box)
im.paste(d, box=box)
im.save("ex.bmp" )
and the output
Edit 2: This is the ugliest way, but it works. You might need to tweak the magic numbers a bit to make it more precise; I was working on your given image, so I couldn't tell when I was overdoing it. It produces the same output.
from PIL import Image

im = Image.open("mFul4.png")
angle = 105
cos = 0.240959049  # -cos(angle)
d = im.rotate(angle)
pix = d.load()
tri_x = 120
for i in range(4):  # 4 triangles
    for j in range(tri_x, -1, -1):
        for k in range(int((tri_x - j) * cos) + 1, -1, -1):
            x, y = (j, k) if i < 1 else (d.size[0] - j - 1, d.size[1] - k - 1)
            if i in [2, 3]:
                y, x = (d.size[0] - j - 2, k) if i < 3 else (j, d.size[1] - k)
            pix[x, y] = (255, 255, 255, 255)
d.show()

How do I find the image (pixel?) value of the bottom right corner of an image with Python?

At the upper left-hand side of the image the "pixel" value is always going to be (0,0). But depending on the resolution of the camera, the size of the picture changes between images, so how do I find the maximum pixel length and width of an image using Python (OpenCV or numpy)?
Not what you asked, I realise, however, Pillow will do this for you.
>>> from PIL import Image
>>> im = Image.open(r"C:\Users\Bill\Pictures\Tilly.jpg")
>>> im.size
(2448, 2448)
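And since Pillow's size is (width, height) with zero-based coordinates, the value of the bottom-right pixel itself would be:
>>> w, h = im.size
>>> im.getpixel((w - 1, h - 1))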
The module that you used to read the image should usually give you a way to retrieve its dimensions. I noticed you have the opencv tag. If you are using OpenCV, you can do this:
import cv2
img = cv2.imread('img.png')
#shape will give you [height, width, channel]
h, w = img.shape[0:2]
#get the bottom right pixel
bottom_right = img[h-1, w-1]

Efficient way to shift image pixels in python

Trying to find a way to efficiently shift image pixels in Python. Ultimately I want to display the results on screen after each shift. For example, the pixel at 1920,1080 becomes the pixel at 1920,1079, and 1920,1080 is filled with a new value; at the end, the pixel at 0,0 is discarded and filled with the pixel from 0,1. Looking to achieve 30fps or greater on a slow processor like the Raspberry Pi 3.
My intuition is to move everything to a 1D array, shift everything "left", add the new pixel at the end and turn back into an image.
Crude drawing of the "Z" shifting effect I'm looking for
Here is my naive implementation of just the pixel shifting, which is too slow:
from PIL import Image

im = Image.new("RGBA", (1920, 1080), "black")  # create blank image
pix = im.load()  # load pixel access object

# iterate over width and height
for y in xrange(1080):
    for x in xrange(1919):
        pix[x, y] = pix[x + 1, y]  # shift x
for x in xrange(1920):
    for y in xrange(1079):
        pix[x, y] = pix[x, y + 1]  # shift y

pix[1919, 1079] = (255, 255, 255, 255)  # set new pixel white
I thought of a couple of ways of doing this, but can only get 40fps on a desktop Mac at the moment.
Here's the first method, which uses Numpy's roll() at its heart in order to roll the top-left pixel round to the bottom-right corner and then overwrite it with white.
#!/usr/bin/env python3
import cv2
import numpy as np
from PIL import Image

w, h = 640, 480
im = Image.new("L", (w, h), 127)

# Just do 1,000 shifts so it can be timed
for frame in range(1000):
    # Ravel image out to row vector, roll it left, and reshape
    im = np.roll(np.ravel(im), -1).reshape(h, w)
    # Make bottom right pixel white
    im[-1, -1] = 255
    cv2.imshow("Slidey Pushy", im)
    cv2.waitKey(1)
I then thought of adding in all the new pixels right at the beginning and then just changing the offset into the image on each iteration, but surprisingly (to me at least) this was no faster.
#!/usr/bin/env python3
import cv2
import numpy as np
from PIL import Image

w, h = 640, 480
im = Image.new("L", (w, h), 127)

# Allocate space for and append 10 screen-heights of white rows up front
im = np.concatenate((im, np.ones((10*h, w), dtype=np.uint8)*255), axis=0)
im = np.ravel(im)

# Slide in 1,000 white pixels so it can be timed
for frame in range(1000):
    view = im[frame:frame + w*h].reshape(h, w)
    cv2.imshow("Slidey Pushy 2", view)
    cv2.waitKey(1)
I don't think you are going to get anywhere close to 30fps on a Raspberry Pi, but I would look at numpy and pandas for this.

how to add border around an image in opencv python

If I have an image like the one below, how can I add a border all around the image such that the overall height and width of the final image increase, but the height and width of the original image stay as-is in the middle?
The following code adds a constant border of size 10 pixels to all four sides of your original image.
For the colour, I have assumed that you want to use the average gray value of the background, which I have calculated from the mean value of the bottom two lines of your image. Sorry, it's somewhat hard-coded, but it shows the general how-to and can be adapted to your needs.
If you leave bordersize values for bottom and right at 0, you even get a symmetric border.
Other values for BORDER_TYPE are possible, such as BORDER_DEFAULT, BORDER_REPLICATE, BORDER_WRAP.
For more details cf: http://docs.opencv.org/trunk/d3/df2/tutorial_py_basic_ops.html#gsc.tab=0
import numpy as np
import cv2

im = cv2.imread('image.jpg')
row, col = im.shape[:2]
bottom = im[row-2:row, 0:col]
mean = cv2.mean(bottom)[0]

bordersize = 10
border = cv2.copyMakeBorder(
    im,
    top=bordersize,
    bottom=bordersize,
    left=bordersize,
    right=bordersize,
    borderType=cv2.BORDER_CONSTANT,
    value=[mean, mean, mean]
)

cv2.imshow('image', im)
cv2.imshow('bottom', bottom)
cv2.imshow('border', border)
cv2.waitKey(0)
cv2.destroyAllWindows()
Answer in one line:
outputImage = cv2.copyMakeBorder(
    inputImage,
    topBorderWidth,
    bottomBorderWidth,
    leftBorderWidth,
    rightBorderWidth,
    cv2.BORDER_CONSTANT,
    value=borderColor  # e.g. [255, 255, 255]
)
Try this:
import cv2
import numpy as np

img = cv2.imread("img_src.jpg")
h, w = img.shape[0:2]
base_size = h+20, w+20, 3
# make a 3-channel base image slightly larger than the target img
base = np.zeros(base_size, dtype=np.uint8)
# really thick white rectangle
cv2.rectangle(base, (0, 0), (w+20, h+20), (255, 255, 255), 30)
# paste the original image into the middle
base[10:h+10, 10:w+10] = img
Add a border using OpenCV:
import cv2

white = [255, 255, 255]
img1 = cv2.imread('input.png')
constant = cv2.copyMakeBorder(img1, 20, 20, 20, 20, cv2.BORDER_CONSTANT, value=white)
cv2.imwrite('output.png', constant)
