If I have an image like below, how can I add a border all around the image so that the overall height and width of the final image increase, while the height and width of the original image stay as-is in the middle?
The following code adds a constant border of size 10 pixels to all four sides of your original image.
For the colour, I have assumed that you want to use the average gray value of the background, which I have calculated from the mean value of the bottom two rows of your image. This is somewhat hard-coded, but it shows the general approach and can be adapted to your needs.
If you use different bordersize values for the individual sides (for example, leaving bottom and right at 0), you get an asymmetric border.
Other values for borderType are possible, such as cv2.BORDER_DEFAULT, cv2.BORDER_REPLICATE, and cv2.BORDER_WRAP.
For more details see: http://docs.opencv.org/trunk/d3/df2/tutorial_py_basic_ops.html#gsc.tab=0
import numpy as np
import cv2

im = cv2.imread('image.jpg')
row, col = im.shape[:2]

# Estimate the background colour from the bottom two rows of the image
bottom = im[row-2:row, 0:col]
mean = cv2.mean(bottom)[0]

bordersize = 10
border = cv2.copyMakeBorder(
    im,
    top=bordersize,
    bottom=bordersize,
    left=bordersize,
    right=bordersize,
    borderType=cv2.BORDER_CONSTANT,
    value=[mean, mean, mean]
)

cv2.imshow('image', im)
cv2.imshow('bottom', bottom)
cv2.imshow('border', border)
cv2.waitKey(0)
cv2.destroyAllWindows()
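To see the other border types mentioned above in action, a minimal sketch reusing the im loaded in the snippet above:

replicated = cv2.copyMakeBorder(im, 10, 10, 10, 10, cv2.BORDER_REPLICATE)  # edge rows/columns repeated
wrapped = cv2.copyMakeBorder(im, 10, 10, 10, 10, cv2.BORDER_WRAP)          # opposite edge tiled in
reflected = cv2.copyMakeBorder(im, 10, 10, 10, 10, cv2.BORDER_REFLECT)     # mirrored at the edge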
Answer in one line:
outputImage = cv2.copyMakeBorder(
    inputImage,
    topBorderWidth,
    bottomBorderWidth,
    leftBorderWidth,
    rightBorderWidth,
    cv2.BORDER_CONSTANT,
    value=borderColor  # e.g. [255, 255, 255] for a white border
)
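For example, to add a 10-pixel black border on all sides (file names hypothetical):

import cv2

img = cv2.imread('input.jpg')
bordered = cv2.copyMakeBorder(img, 10, 10, 10, 10, cv2.BORDER_CONSTANT, value=[0, 0, 0])
cv2.imwrite('bordered.jpg', bordered)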
Try this:
import cv2
import numpy as np

img = cv2.imread("img_src.jpg")
h, w = img.shape[0:2]

# Make a 3-channel base image slightly larger than the source image
base_size = h+20, w+20, 3
base = np.zeros(base_size, dtype=np.uint8)

# Draw a really thick white rectangle around the edge to form the border
cv2.rectangle(base, (0, 0), (w+20, h+20), (255, 255, 255), 30)

# Paste the original image into the centre of the base
base[10:h+10, 10:w+10] = img
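To view or save the result, something like:

cv2.imshow('bordered', base)  # or cv2.imwrite('bordered.jpg', base)
cv2.waitKey(0)
cv2.destroyAllWindows()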
Add a border using OpenCV:
import cv2

white = [255, 255, 255]
img1 = cv2.imread('input.png')
# 20-pixel constant white border on all four sides
constant = cv2.copyMakeBorder(img1, 20, 20, 20, 20, cv2.BORDER_CONSTANT, value=white)
cv2.imwrite('output.png', constant)
I want to analyse a specific part of an image. As an example, I'd like to focus on the bottom-right 200x200 section and count all the black pixels. So far I have:
im1 = Image.open(path)
rgb_im1 = im1.convert('RGB')
for pixel in rgb_im1.getdata():
Whilst you could do this with cropping and a pair of for loops, that is really slow and not ideal.
I would suggest you use Numpy as it is very commonly available, very powerful and very fast.
Here's a 400x300 black rectangle with a 1-pixel red border:
#!/usr/bin/env python3
import numpy as np
from PIL import Image
# Open the image and make into Numpy array
im = Image.open('image.png')
ni = np.array(im)
# Declare an ROI - Region of Interest as the bottom-right 200x200 pixels
# This is called "Numpy slicing" and is near-instantaneous https://www.tutorialspoint.com/numpy/numpy_indexing_and_slicing.htm
ROI = ni[-200:,-200:]
# Calculate total area of ROI and subtract non-zero pixels to get number of zero pixels
# np.count_nonzero() is highly optimised and extremely fast
# (note: this counts array elements, so it assumes a single-channel image;
#  for an RGB array, count whole pixels instead, e.g.
#  200*200 - np.count_nonzero(ROI.any(axis=-1)))
black = 200*200 - np.count_nonzero(ROI)
print(f'Black pixel total: {black}')
Sample Output
Black pixel total: 39601
Yes, you can make it shorter, for example:
h, w = 200, 200
im = np.array(Image.open('image.png'))
black = h*w - np.count_nonzero(im[-h:, -w:])
If you want to debug it, you can take the ROI and make it into a PIL Image which you can then display. So just use this line anywhere after you make the ROI:
# Display image to check
Image.fromarray(ROI).show()
You can try cropping the image to the specific part that you want:
from PIL import Image

img = Image.open(r"Image_location")
x, y = img.size
# The box is (left, upper, right, lower): here, the bottom-right 200x200 pixels
img = img.crop((x-200, y-200, x, y))
The above code takes an input image and crops it to its bottom-right 200x200 pixels (make sure the image dimensions are larger than 200x200, otherwise an error will occur).
Original image:
Image after cropping:
You can then use this cropped image to count the number of black pixels, where it depends on your use case what you consider a black pixel: a discrete value like (0, 0, 0) or a range/threshold like (0-15, 0-15, 0-15).
P.S.: the final image will always have dimensions of 200x200 pixels.
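For instance, a minimal sketch of both interpretations, assuming the crop from above has been converted to an RGB numpy array:

import numpy as np

# `img` is assumed to be the 200x200 crop from the snippet above
arr = np.array(img.convert('RGB'))
exact_black = np.count_nonzero((arr == 0).all(axis=-1))   # exactly (0, 0, 0)
near_black = np.count_nonzero((arr <= 15).all(axis=-1))   # all channels in 0-15
print(exact_black, near_black)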
from PIL import Image

img = Image.open("ImageName.jpg")
# a, b, c, d are the (left, upper, right, lower) pixel coordinates of the crop box
crop_area = (a, b, c, d)
cropped_img = img.crop(crop_area)
I have this image of an eye where I want to get the center of the pupil:
Original Image
I applied adaptive threshold as well as laplacian to the image using this code:
import cv2
import numpy as np
from matplotlib import pyplot as plt

# Use raw strings so the backslashes in Windows paths are not treated as escapes
img = cv2.imread(r'C:\Users\User\Documents\module4\input\left.jpg', 0)
image = cv2.medianBlur(img, 5)
th = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                           cv2.THRESH_BINARY, 11, 2)
laplacian = cv2.Laplacian(th, cv2.CV_64F)
cv2.imshow('output', laplacian)
cv2.imwrite(r'C:\Users\User\Documents\module4\output\output.jpg', laplacian)
cv2.waitKey(0)
cv2.destroyAllWindows()
and the resulting image looks like this: Resulting image by applying adaptive threshold
I want to draw a circle around the smaller inner circle and get its center. I've tried using contours and circular hough transform but it does not correctly detect any circles in the image.
Here is my code for Circular Hough Transform:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread(r'C:\Users\User\Documents\module4\output\output.jpg', 0)
circles = cv2.HoughCircles(img,cv2.HOUGH_GRADIENT,1,20,param1=50,param2=30,minRadius=0,maxRadius=0)
circles = np.uint16(np.around(circles))
for i in circles[0,:]:
# draw the outer circle
cv2.circle(img,(i[0],i[1]),i[2],(255,255,0),2)
# draw the center of the circle
cv2.circle(img,(i[0],i[1]),2,(255,0,255),3)
cv2.imshow('detected circles',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
And here is the code for applying contour:
import cv2
import numpy as np
img = cv2.imread(r'C:\Users\User\Documents\module4\output\output.jpg', 0)
_, contours, hierarchy = cv2.findContours(img, 1, 2)  # 1 = cv2.RETR_LIST, 2 = cv2.CHAIN_APPROX_SIMPLE
cnt = contours[0]
(x,y),radius = cv2.minEnclosingCircle(cnt)
center = (int(x),int(y))
radius = int(radius)
img = cv2.circle(img,center,radius,(0,255,255),2)
cv2.imshow('contour', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
The resulting image of this code looks exactly like the image where I applied the adaptive threshold. I would really appreciate it if anyone could help me solve my problem; I've been stuck on this for a while now. Also, if any of you can suggest a better way to detect the center of the pupil besides this method, I would really appreciate it.
Try applying edge detection instead of thresholding after filtering the original image, and then apply the Hough circle transform.
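A minimal sketch of that pipeline; the file name and all parameter values below are placeholders that will need tuning for a real eye image:

import cv2

img = cv2.imread('left.jpg', 0)       # hypothetical file name
blurred = cv2.medianBlur(img, 5)
edges = cv2.Canny(blurred, 50, 150)   # edge detection instead of thresholding
# HoughCircles also runs its own Canny internally (param1 is its upper
# threshold), so passing the blurred grayscale image directly works too
circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=50, param2=30, minRadius=5, maxRadius=50)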
My thought would be to use the Hough transform like you're doing, but another method might be template matching. Assuming you know the approximate radius of the pupil in the image, you can try to build a template.
import numpy as np
import matplotlib.pyplot as plt
# import the submodules explicitly; `import skimage` alone does not expose them
import skimage.io
import skimage.draw
import skimage.feature

img = skimage.io.imread('Wjioe.jpg')
#just use grayscale, but you could make separate template for each r,g,b channel
img = np.mean(img, axis=2)
(M,N) = img.shape
mm = M-20
nn = N-20
template = np.zeros([mm,nn])
## Create template ##
# darkest inner circle (pupil)
# (note: newer scikit-image versions replace skimage.draw.circle with skimage.draw.disk)
(rr,cc) = skimage.draw.circle(mm/2,nn/2,4.5, shape=template.shape)
template[rr,cc]=-2
#iris (circle surrounding pupil)
(rr,cc) = skimage.draw.circle(mm/2,nn/2,8, shape=template.shape)
template[rr,cc] = -1
#Optional - pupil reflective spot (if centered)
(rr,cc) = skimage.draw.circle(mm/2,nn/2,1.5, shape=template.shape)
template[rr,cc] = 1
plt.imshow(template)
normccf = skimage.feature.match_template(img, template,pad_input=True)
#center pixel
(i,j) = np.unravel_index( np.argmax(normccf), normccf.shape)
plt.imshow(img)
plt.plot(j,i,'r*')
plt.show()
You're defining a 3-channel color for a grayscale image. Based on my test, it will only read the first value in that tuple. Because the first value in your other colors (in the middle code block) is 255, it draws a full white circle, and because the first value in your last color (in your last code block) is 0, it draws a full black circle, which you can't see.
Just change your color values to a single-channel color (an int between 0 and 255) and you'll be fine.
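For example, reusing the img and i variables from the Hough-transform snippet above, single-channel colours would be:

cv2.circle(img, (i[0], i[1]), i[2], 255, 2)  # white outer circle
cv2.circle(img, (i[0], i[1]), 2, 128, 3)     # gray center marker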
I have the following test code in Python to read, threshold and display an image:
import cv2
import numpy as np
from matplotlib import pyplot as plt
# read image
img = cv2.imread('slice-309.png',0)
ret,thresh = cv2.threshold(img,0,230, cv2.THRESH_BINARY)
height, width = img.shape
print "height and width : ",height, width
size = img.size
print "size of the image in number of pixels", size
# plot the binary image
imgplot = plt.imshow(img, 'gray')
plt.show()
I would like to count the number of pixels within the image that have a certain label, for instance black.
How can I do that? I looked at the OpenCV tutorials but did not find any help :-(
Thanks!
For black pixels, you get the total number of pixels (rows*cols) and then subtract the result you get from cv2.countNonZero(mat) from that total.
For other values, you can create a mask using cv2.inRange() to return a binary mask showing all the locations of the color/label/value you want and then use cv2.countNonZero to count how many of them there are.
UPDATE (Per Miki's comment):
When trying to find the count of elements with a particular value, Python allows you to skip the cv2.inRange() call and just do:
cv2.countNonZero(img == scalar_value)
import cv2

image = cv2.imread("pathtoimg", 0)  # 0 loads the image as grayscale
count = cv2.countNonZero(image)
print(count)
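Putting the pieces above together, a minimal sketch (the path and the 100-150 value range are placeholders):

import cv2

img = cv2.imread("pathtoimg", 0)
total = img.shape[0] * img.shape[1]
black = total - cv2.countNonZero(img)   # pixels equal to 0
mask = cv2.inRange(img, 100, 150)       # mask of pixels with values in 100-150
label_count = cv2.countNonZero(mask)
print(black, label_count)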
I am trying to combine three images. The image I want on the bottom is a 700x900 image of all black pixels. On top of that, I want to paste an image that is 400x400, with an offset of (100, 200). On top of that, I want to paste an image border that is 700x900. The border image has alpha=0 on its inside and alpha=0 around its outside, because it does not have straight edges. When I run the code pasted below, I encounter two problems:
1) Everywhere the border image's alpha channel is 0, the alpha has been set to 255, and white shows instead of the black background and the image I am putting the border around.
2) The border image's quality has been significantly reduced and it looks very different from how it should.
Also: part of the border image will cover part of the image I am putting the border around, so I can't just switch the pasting order.
Thanks in advance for any help.
#!/usr/bin/python -tt
from PIL import ImageTk, Image
old_im2 = Image.open('backgroundImage1.jpg') # size = 400x400
old_im = Image.open('topImage.png') # size = 700x900
new_size = (700,900)
new_im = Image.new("RGBA", new_size) # makes the black image
new_im.paste(old_im2, (100, 200))
new_im.paste(old_im,(0,0))
new_im.show()
new_im.save('final.jpg')
I think you have a misconception about images: the border image does have pixels everywhere. It's not possible for it to be "missing" pixels. It is possible to have an image with an alpha channel, which is a channel like the R, G, and B channels, but which indicates transparency.
Try this:
1. Make sure that topImage.png has a transparency channel, and that the pixels that you want to be "missing" are transparent (i.e. have an alpha value of 0). You can double-check this way:
print(old_im.mode) # This should print "RGBA" if it has an alpha channel.
2. Create new_im in "RGBA" mode:
new_im = Image.new("RGBA", new_size) # makes the black image
# Note the "A" --------^
3. Try this paste statement instead:
new_im.paste(old_im, (0, 0), mask=old_im) # Using old_im as the mask tells paste() to use old_im's alpha channel to combine the two images.
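Putting those steps together, a sketch of the corrected script, assuming topImage.png really has an alpha channel (note the save as PNG, since JPEG cannot store alpha):

from PIL import Image

old_im2 = Image.open('backgroundImage1.jpg')          # 400x400
old_im = Image.open('topImage.png').convert('RGBA')   # 700x900 border image
new_im = Image.new("RGBA", (700, 900))                # black, fully transparent base
new_im.paste(old_im2, (100, 200))
new_im.paste(old_im, (0, 0), mask=old_im)             # respect the border's alpha
new_im.save('final.png')                              # PNG keeps the alpha; JPEG would not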
I'm trying to blur a picture in Jython. What I have does run, but it does not return a blurred picture. I'm at a loss as to what is wrong with it.
FINAL (WORKING) CODE EDITED IN BELOW. THANKS FOR THE HELP, GUYS!
def main():
    pic = makePicture( pickAFile() )
    show( pic )
    blurAmount = 10
    show( makeBlurredPicture(pic, blurAmount) )

def makeBlurredPicture(pic, blurAmount):
    w = getWidth(pic)
    h = getHeight(pic)
    blurPic = makeEmptyPicture( w-blurAmount, h )
    for px in getPixels(blurPic):
        x = getX(px)
        y = getY(px)
        # Make sure we are not too close to the edge
        if (x+blurAmount < w):
            rTotal = 0
            gTotal = 0
            bTotal = 0
            # Add up the rgb values of the blurAmount pixels to the right of (x, y)
            for i in range(0, blurAmount):
                origpx = getPixel(pic, x+i, y)
                rTotal = rTotal + getRed(origpx)
                gTotal = gTotal + getGreen(origpx)
                bTotal = bTotal + getBlue(origpx)
            rAverage = (rTotal / blurAmount)
            gAverage = (gTotal / blurAmount)
            bAverage = (bTotal / blurAmount)
            setRed(px, rAverage)
            setGreen(px, gAverage)
            setBlue(px, bAverage)
    return blurPic
The pseudo-code was as follows:

makeBlurredPicture(picture, blur_amount)
    get width and height of picture and make an empty picture with the dimensions
    (w-blur_amount, h); call this blurPic
    for loop, looping through all the pixels (in blurPic)
        get and save x and y locations of the pixel
        # make sure you are not too close to the edge: (x+blur) is less than width
        initialize rTotal, gTotal, and bTotal to 0
        # add up the rgb values for all the pixels in the blur
        for loop that loops (blur_amount) times
            rTotal = rTotal + the red value of the picture (input argument) at location (x+loop number, y), then same for green and blue
        find the average of red, green, blue values: this is just rTotal/blur_amount (same for green and blue)
        set the red value of the blurPic pixel to rAverage (same for green and blue)
    return blurPic
The problem is that you are overwriting the outer loop variable px, which is the pixel in the blurred image, with a pixel value from the original image.
So just replace your inner loop with:
for i in range(0, blurAmount):
    origPx = getPixel(pic, x+i, y)
    rTotal = rTotal + getRed(origPx)
    gTotal = gTotal + getGreen(origPx)
    bTotal = bTotal + getBlue(origPx)
In order to show the blurred picture, change the last line in your main to:
show( makeBlurredPicture(pic,blurAmount) )
Here is the simple way to do it with PIL:
from PIL import Image, ImageFilter

def filterBlur(im, ext):
    im1 = im.filter(ImageFilter.BLUR)
    im1.save("BLUR" + ext)

im = Image.open("image.jpg")   # placeholder file name
filterBlur(im, ".jpg")
For a complete reference to the Image Library, see: http://www.riisen.dk/dop/pil.html
from PIL import ImageFilter

def blur_image(image, radius):
    # Gaussian-blur the image, then paste the blurred copy back over the original
    blur = image.filter(ImageFilter.GaussianBlur(radius))
    image.paste(blur, (0, 0))
    return image
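Hypothetical usage (file names are placeholders):

from PIL import Image

img = Image.open("photo.jpg")
blurred = blur_image(img, radius=5)
blurred.save("photo_blurred.jpg")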