How to crop an elliptical region from an image using Python

I have code for rectangular cropping. Honestly, I'm a beginner at Python; I saw this code on a site.
I'm using the PIL library:
from PIL import Image
im = Image.open("lenna.png")
crop_rectangle = (50, 50, 200, 200)
cropped_im = im.crop(crop_rectangle)
cropped_im.show()
Please help me crop an elliptical or circular region from an image.
Thank you in advance.

Cropping an image to an elliptical or circular region will produce the same result as cropping to a rectangle, if their extents are the same. I am assuming that you also want to mask the image as well as crop it?
To do this, create a blank mask PIL Image with the same extent as the original, then use PIL.ImageDraw.Draw to draw the ellipse onto it. The mask image will then have binary pixel values, where 1 represents masked. Then simply set all values in the original image to a masked value (e.g. np.nan) where the mask pixel values equal 1 (e.g. original_image[mask == 1] = np.nan).
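Here is a minimal sketch of that approach, assuming the same lenna.png input and reusing the question's (50, 50, 200, 200) box as the ellipse's bounding box. It keeps the ellipse interior and blacks out everything else (np.nan only works on float arrays, so black is used as the masked value here):
from PIL import Image, ImageDraw
import numpy as np
im = Image.open("lenna.png").convert("RGB")
# Blank (all-zero) mask with the same extent as the original
mask = Image.new("L", im.size, 0)
draw = ImageDraw.Draw(mask)
# Draw a filled ellipse inside the bounding box (left, top, right, bottom)
draw.ellipse((50, 50, 200, 200), fill=1)
# Black out everything outside the ellipse, then crop to the bounding box
arr = np.array(im)
arr[np.array(mask) == 0] = 0
cropped_im = Image.fromarray(arr).crop((50, 50, 200, 200))
cropped_im.show()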

Related

Keep track of reference pixel in PIL image while doing transformations

I want to keep track of a point/pixel for reference in a PIL image while I do a (perspective) transformation and cut off the transparent borders.
from PIL import Image
# load image
img = Image.open("img.png")
# do some perspective transformation
img = img.transform(new_size, Image.PERSPECTIVE, mapping_coeffs)
# cut the borders
img = img.crop(img.getbbox())
For the cropping I could keep track of a position by subtracting the size of the padding. But how can I do this for a perspective transformation, or even multiple transformations in a row?
For others with the same question, I made a black image with only the reference pixel in white using NumPy and transformed it in the same way as my image.
from PIL import Image
import numpy as np
# get black img with the same size
refArray = np.zeros(PILimg.size)
# make the reference pixel white
refArray[xRef, yRef] = 1e8
# to PIL image object
refImg = Image.fromarray(refArray.T)
Do the same transformations with the reference image, and then find the position of the maximum value in the transformed reference image:
ref = np.array(refImg).T
xRef, yRef = np.unravel_index(np.argmax(ref), ref.shape)
Edit: for some transformations the pixel disappears; this is solved by using a small square of pixels (5x5) instead of a single pixel, as sketched below.
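A minimal sketch of that fix, assuming the reference point lies far enough from the border for the square to fit:
# Mark a 5x5 square around the reference pixel instead of a single pixel,
# so that resampling during the transformation cannot wipe it out entirely
refArray[xRef-2:xRef+3, yRef-2:yRef+3] = 1e8
After the transformation, np.argmax still returns the brightest surviving pixel, which lies at (or immediately next to) the transformed reference point.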

How to analyze only a part of an image?

I want to analyse a specific part of an image. As an example, I'd like to focus on the bottom-right 200x200 section and count all the black pixels. So far I have:
im1 = Image.open(path)
rgb_im1 = im1.convert('RGB')
for pixel in rgb_im1.getdata():
Whilst you could do this with cropping and a pair of for loops, that is really slow and not ideal.
I would suggest you use Numpy as it is very commonly available, very powerful and very fast.
Here's a 400x300 black rectangle with a 1-pixel red border:
#!/usr/bin/env python3
import numpy as np
from PIL import Image
# Open the image and make into Numpy array
im = Image.open('image.png')
ni = np.array(im)
# Declare an ROI - Region of Interest as the bottom-right 200x200 pixels
# This is called "Numpy slicing" and is near-instantaneous https://www.tutorialspoint.com/numpy/numpy_indexing_and_slicing.htm
ROI = ni[-200:,-200:]
# Calculate total area of ROI and subtract non-zero pixels to get number of zero pixels
# Numpy.count_nonzero() is highly optimised and extremely fast
black = 200*200 - np.count_nonzero(ROI)
print(f'Black pixel total: {black}')
Sample Output
Black pixel total: 39601
Yes, you can make it shorter, for example:
h, w = 200, 200
ni = np.array(Image.open('image.png'))
black = h*w - np.count_nonzero(ni[-h:,-w:])
If you want to debug it, you can take the ROI and make it into a PIL Image which you can then display. So just use this line anywhere after you make the ROI:
# Display image to check
Image.fromarray(ROI).show()
You can try cropping the image to the specific part that you want:
img = Image.open(r"Image_location")
x,y = img.size
img = img.crop((x-200, y-200, x, y))
The above code takes an input image and crops it to its bottom-right 200x200 pixels. (Make sure the image dimensions are more than 200x200; otherwise the crop box will extend past the image edge and PIL will pad the missing area with black rather than raise an error.)
Original image:
Image after cropping:
You can then use this cropped image to count the number of black pixels, where it depends on your use case what you consider a BLACK pixel (a discrete value like (0, 0, 0) or a range/threshold like (0-15, 0-15, 0-15)).
P.S.: the final image will always have dimensions of 200x200 pixels.
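A minimal sketch of that counting step, treating "black" as a threshold rather than an exact value; the threshold of 15 is an illustrative choice, not part of the original answer:
import numpy as np
from PIL import Image
img = Image.open(r"Image_location").convert('RGB')
x, y = img.size
cropped = img.crop((x-200, y-200, x, y))
# A pixel counts as black if all three channels fall below the threshold
arr = np.array(cropped)
black = np.count_nonzero(np.all(arr < 15, axis=2))
print(f'Black pixel total: {black}')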
from PIL import Image
img = Image.open("ImageName.jpg")
# The crop box is a 4-tuple: (left, upper, right, lower)
crop_area = (a, b, c, d)
cropped_img = img.crop(crop_area)

Normalizing images in OpenCV produces black image?

I wrote the following code to normalize an image using NORM_L1 in OpenCV, but the output image was just black. How do I solve this?
import cv2
import numpy as np
from PIL import Image
img = cv2.imread('img7.jpg')
gray_image = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
a = np.asarray(gray_image)
dst = np.zeros(shape=(5,2))
b=cv2.normalize(a,dst,0,255,cv2.NORM_L1)
im = Image.fromarray(b)
im.save("img50.jpg")
cv2.waitKey(0)
cv2.destroyAllWindows()
If you want to change the range to [0, 1], make sure the output data type is float.
image = cv2.imread("lenacolor512.tiff", cv2.IMREAD_COLOR) # uint8 image
norm_image = cv2.normalize(image, None, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)
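If you then want to save that [0, 1] float result with cv2.imwrite, it needs to be scaled back to an 8-bit range first, since most image formats expect 8-bit data; a small sketch continuing the snippet above (assumes import numpy as np):
# cv2.imwrite expects 8-bit data, so scale the [0, 1] floats back up
norm_8bit = (norm_image * 255).astype(np.uint8)
cv2.imwrite('normalized.png', norm_8bit)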
The other answers normalize an image based on the entire image. But if your image has a predominant color (such as black), it will mask out the features that you're trying to enhance, since they will not be as pronounced. To get around this limitation, we can normalize the image based on a subsection region of interest (ROI). Essentially, we normalize based on the section of the image that we want to enhance, instead of treating each pixel with the same weight. Take for instance this earth image:
Input image -> Normalization based on entire image
If we want to enhance the clouds by normalizing based on the entire image, the result will not be very sharp and will be oversaturated due to the black background; the features to enhance are lost. So to obtain a better result, we can crop a ROI, normalize based on the ROI, and then apply the normalization back onto the original image. Say we crop the ROI highlighted in green:
This gives us this ROI
The idea is to calculate the mean and standard deviation of the ROI and then clip the frame based on the lower and upper range. In addition, we could use an offset to dynamically adjust the clip intensity. From here we normalize the original image to this new range. Here's the result:
Before -> After
Code
import cv2
import numpy as np
# Load image as grayscale and crop ROI
image = cv2.imread('1.png', 0)
x, y, w, h = 364, 633, 791, 273
ROI = image[y:y+h, x:x+w]
# Calculate mean and STD
mean, STD = cv2.meanStdDev(ROI)
# Clip frame to lower and upper STD
offset = 0.2
clipped = np.clip(image, mean - offset*STD, mean + offset*STD).astype(np.uint8)
# Normalize to range
result = cv2.normalize(clipped, clipped, 0, 255, norm_type=cv2.NORM_MINMAX)
cv2.imshow('image', image)
cv2.imshow('ROI', ROI)
cv2.imshow('result', result)
cv2.waitKey()
The difference between normalizing based on the entire image vs a specific section of the ROI can be visualized by applying a heatmap to the result. Notice the difference on how the clouds are defined.
Input image -> heatmap
Normalized on entire image -> heatmap
Normalized on ROI -> heatmap
Heatmap code
import matplotlib.pyplot as plt
import numpy as np
import cv2
image = cv2.imread('result.png', 0)
colormap = plt.get_cmap('inferno')
# Scale the [0, 1] colormap output to the full uint16 range (65535, not 65536, to avoid overflow)
heatmap = (colormap(image) * (2**16 - 1)).astype(np.uint16)[:,:,:3]
heatmap = cv2.cvtColor(heatmap, cv2.COLOR_RGB2BGR)
cv2.imshow('image', image)
cv2.imshow('heatmap', heatmap)
cv2.waitKey()
Note: the ROI bounding box coordinates were obtained using "How to get ROI Bounding Box Coordinates without Guess & Check", and the heatmap code is from "How to convert a grayscale image to heatmap image with Python OpenCV".
When you normalize a matrix using NORM_L1, you divide every pixel value by the sum of the absolute values of all the pixels in the image.
As a result, all pixel values become much less than 1, and when written back into an 8-bit image they round down to zero, giving a black image. Try NORM_MINMAX instead of NORM_L1.
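A minimal sketch of that fix applied to the question's code, keeping the original file names; NORM_MINMAX stretches the grayscale intensities to the full [0, 255] range instead of dividing by the L1 norm:
import cv2
img = cv2.imread('img7.jpg')
gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Stretch intensities to [0, 255]; passing None lets OpenCV allocate the output
b = cv2.normalize(gray_image, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite('img50.jpg', b)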

opencv python copy mask region (black or white pixels) onto a BGR image region

In OpenCV python, say we read an image with cv2.imread and get a BGR numpy array. We next generate a mask with the cv2.inRange command. The mask has the same width/height and each mask pixel is either black or white.
I want to copy a region from the mask (taken as an image of black and white pixels) onto a region of the color image.
How do I do that? This does not work
img[10:20,10:20] = mask[10:20,10:20]
Must I convert the mask to a BGR image first? If so, how?
Edit: I do not want to apply the whole mask to the image as in apply mask to color image. Another way to say what I want: see the mask as a black-and-white image. I want to copy a region of that image (as a set of black or white pixels) onto another image. The resulting image will be a color image, except for one smaller rectangular region that contains only black or white pixels. The result will be similar to if, in Photoshop, I copy a rectangular area of a black/white image and paste that rectangle onto an area of a color image.
(I'm new to OpenCV)
If you try to do it with a single channel (grayscale) mask directly, the shapes of the array slices will not be the same, and the operation will fail.
>>> img[10:20,10:20] = mask[10:20,10:20]
ValueError: could not broadcast input array from shape (10,10) into shape (10,10,3)
You have to convert the mask to BGR, which will make it 3 channels, like the original image.
>>> bgr_mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
>>> img[10:20,10:20] = bgr_mask[10:20,10:20]

Replacing a segmented part of an image with its unsegmented part

I am trying to replace a segmented part of an image with its unsegmented part with OpenCV in Python. The pictures will make you understand what I mean.
The following picture is the first one, before segmentation :
This is the picture after segmentation :
This is the third picture, after doing what I'm talking about :
How can I do this? Thanks in advance for your help!
This is actually pretty easy. All you have to do is take your picture after segmentation and multiply it by a mask where any pixel in the original mask that is 0 becomes 1, and anything else becomes 0.
This will essentially blacken all of the pixels except those where the inverted mask is 1. By multiplying each of the pixels in your image by this mask, you would effectively produce what you have shown in the figure, but with a black background. All you would have to do now is figure out which locations in your mask are white and set the corresponding locations in your output image to white. In other words:
import cv2
# Load in your original image
originalImg = cv2.imread('Inu8B.jpg',0)
# Load in your mask
mask = cv2.imread('2XAwj.jpg', 0)
# Get rid of quantization artifacts
mask[mask < 128] = 0
mask[mask >= 128] = 1
# Create output image
outputImg = originalImg * (mask == 0)
outputImg[mask == 1] = 255
# Display image
cv2.imshow('Output Image', outputImg)
cv2.waitKey(0)
cv2.destroyAllWindows()
Take note that I downloaded the images from your post and loaded them from my computer. Also, your mask has some quantization artifacts due to JPEG, and so I thresholded at intensity 128 to ensure that your image consists of either 0s or 1s.
This is the output I get:
Hope this helps!
Basically, you have a segmentation mask and an image. All you need to do is copy the pixels in the image corresponding to the pixels in the label mask. Generally, the mask dimensions and the image dimensions are the same (if not, you need to resize your mask to the image dimensions). Also, the segmentation pixels corresponding to a particular mask have the same integer value (1, 2, 3, etc., with background pixels having a value of 0). So, find out which pixel coordinates have a value corresponding to the mask value, and use those coordinates to look up the intensity values in the image. If you know how to access a pixel coordinate, read an image in the programming environment you are using and follow the aforementioned procedure, and you should be able to do it.
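A minimal sketch of that procedure with NumPy boolean indexing, assuming a label mask in which the region of interest has the integer value 1; the file names are placeholders:
import cv2
import numpy as np
image = cv2.imread('original.png')       # hypothetical original image
labels = cv2.imread('labels.png', 0)     # hypothetical integer label mask
# Resize the mask if its dimensions differ from the image's
if labels.shape[:2] != image.shape[:2]:
    labels = cv2.resize(labels, (image.shape[1], image.shape[0]),
                        interpolation=cv2.INTER_NEAREST)
# Copy only the image pixels whose label matches the mask value of interest
output = np.zeros_like(image)
output[labels == 1] = image[labels == 1]
cv2.imshow('output', output)
cv2.waitKey(0)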
