What's wrong with this Python image blur function?

EDIT: Thanks to Howard, I've corrected the code here and it seems to be working now.
EDIT2: I've updated the code to include a vertical blur as originally intended. Resulting sample output with various settings: Blur comparison images.jpg
Another reference for blur operations (Java): Blurring for Beginners
original post:
I'm trying to learn about basic image processing and duplicate this simple blur method (the second function, BlurHorizontal, under "Reusing results") in Python. I know there are already blur functions in PIL, but I want to try out the basic pixel operations myself.
This function should take a source image, then average RGB pixel values based on a certain radius and write the processed image to a new file. My problem is that I'm getting a lot of pixels with completely wrong averaged values (for example, bright green lines instead of red in certain areas).
With a blur radius of 2, the averaging method adds up the RGB values for the 5 pixels centered on the input pixel. It uses a "sliding window" to keep a running total, subtracting the outgoing pixel (left side) and adding the new incoming pixel (right side of window). Blur method explained here
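Here's a minimal 1-D sketch of that sliding-window average (an illustration only, separate from the full code below): seed the window total once for the first element, then update it incrementally instead of re-summing for every element.

def box_blur_1d(values, radius):
    """Sliding-window box average; out-of-range neighbours are skipped,
    matching the edge handling in the full code below."""
    n = len(values)
    window = 2 * radius + 1
    out = [0.0] * n
    # seed the running total for index 0
    total = sum(values[i] for i in range(-radius, radius + 1) if 0 <= i < n)
    out[0] = total / window
    for i in range(1, n):
        if i - radius - 1 >= 0:    # subtract the outgoing element
            total -= values[i - radius - 1]
        if i + radius < n:         # add the incoming element
            total += values[i + radius]
        out[i] = total / window
    return out

print(box_blur_1d([10, 10, 100, 10, 10], 2))   # [24.0, 26.0, 28.0, 26.0, 24.0]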
Sample: Blur test image output.jpg
Any ideas where I've gone wrong? I'm not sure why some parts of the image blur cleanly while other areas are filled with colors completely unrelated to the surrounding areas.
Thanks for your help.
FIXED WORKING Code (Thanks Howard)
import Image, numpy, ImageFilter

img = Image.open('testimage.jpg')
imgArr = numpy.asarray(img)  # readonly

# blur radius in pixels
radius = 2
# blur window length in pixels
windowLen = radius*2+1
# columns (x) image width in pixels
imgWidth = imgArr.shape[1]
# rows (y) image height in pixels
imgHeight = imgArr.shape[0]

# simple box/window blur
def doblur(imgArr):
    # create arrays for processed image based on input image dimensions
    imgB = numpy.zeros((imgHeight, imgWidth, 3), numpy.uint8)
    imgC = numpy.zeros((imgHeight, imgWidth, 3), numpy.uint8)

    # blur horizontal row by row
    for ro in range(imgHeight):
        # RGB color values
        totalR = 0
        totalG = 0
        totalB = 0

        # calculate blurred value of first pixel in each row
        for rads in range(-radius, radius+1):
            if rads >= 0 and rads <= imgWidth-1:
                totalR += imgArr[ro, rads][0]/windowLen
                totalG += imgArr[ro, rads][1]/windowLen
                totalB += imgArr[ro, rads][2]/windowLen
        imgB[ro, 0] = [totalR, totalG, totalB]

        # calculate blurred value of the rest of the row based on
        # unweighted average of surrounding pixels within blur radius
        # using sliding window totals (add incoming, subtract outgoing pixels)
        for co in range(1, imgWidth):
            if (co-radius-1) >= 0:
                totalR -= imgArr[ro, co-radius-1][0]/windowLen
                totalG -= imgArr[ro, co-radius-1][1]/windowLen
                totalB -= imgArr[ro, co-radius-1][2]/windowLen
            if (co+radius) <= imgWidth-1:
                totalR += imgArr[ro, co+radius][0]/windowLen
                totalG += imgArr[ro, co+radius][1]/windowLen
                totalB += imgArr[ro, co+radius][2]/windowLen
            # put average color value into imgB pixel
            imgB[ro, co] = [totalR, totalG, totalB]

    # blur vertical column by column
    for co in range(imgWidth):
        totalR = 0
        totalG = 0
        totalB = 0
        for rads in range(-radius, radius+1):
            if rads >= 0 and rads <= imgHeight-1:
                totalR += imgB[rads, co][0]/windowLen
                totalG += imgB[rads, co][1]/windowLen
                totalB += imgB[rads, co][2]/windowLen
        imgC[0, co] = [totalR, totalG, totalB]

        for ro in range(1, imgHeight):
            if (ro-radius-1) >= 0:
                totalR -= imgB[ro-radius-1, co][0]/windowLen
                totalG -= imgB[ro-radius-1, co][1]/windowLen
                totalB -= imgB[ro-radius-1, co][2]/windowLen
            if (ro+radius) <= imgHeight-1:
                totalR += imgB[ro+radius, co][0]/windowLen
                totalG += imgB[ro+radius, co][1]/windowLen
                totalB += imgB[ro+radius, co][2]/windowLen
            imgC[ro, co] = [totalR, totalG, totalB]

    return imgC

# number of times to run blur operation
blurPasses = 3
# temporary image array for multiple passes
imgTmp = imgArr
for k in range(blurPasses):
    imgTmp = doblur(imgTmp)
    print "pass #", k, "done."

imgOut = Image.fromarray(numpy.uint8(imgTmp))
imgOut.save('testimage-processed.png', 'PNG')

I suppose you have an issue with the line
for rads in range(-radius, radius):
which runs to radius-1 only (range excludes the last value). Add one to the second range argument.
Update: There is another small issue in the line
if (co-radius-1) > 0:
which should be
if (co-radius-1) >= 0:
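You can verify the off-by-one directly in the interpreter (with radius = 2):

>>> radius = 2
>>> list(range(-radius, radius))      # stops at radius-1: window misses its right edge
[-2, -1, 0, 1]
>>> list(range(-radius, radius + 1))  # full 5-pixel window
[-2, -1, 0, 1, 2]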

I modified/refactored your code just a bit, and thought I'd share. I needed something to do a custom blur that would: 1) work on a data array, and 2) only wrap horizontally and not vertically. As the TODO notes, I'm thinking of further refactoring so it can do partial pixel blends (e.g. 0.5). Hope this helps someone:
def blur_image(image_data, blur_horizontal=True, blur_vertical=True, height=256, width=256, radius=1):
    #TODO: Modify to support partial pixel blending

    # blur window length in pixels
    blur_window = radius*2+1
    # work on a copy, so the reads below never see already-blurred pixels
    out_image_data = list(image_data)

    # blur horizontal row by row, and wrap around edges
    if blur_horizontal:
        for row in range(height):
            for column in range(0, width):
                total_red = 0
                total_green = 0
                total_blue = 0
                for rads in range(-radius, radius+1):
                    pixel = (row*width) + ((column+rads) % width)
                    total_red += image_data[pixel][0]/blur_window
                    total_green += image_data[pixel][1]/blur_window
                    total_blue += image_data[pixel][2]/blur_window
                out_image_data[row*width + column] = (total_red, total_green, total_blue, 255)
        image_data = out_image_data
        out_image_data = list(image_data)

    # blur vertical, but no wrapping
    if blur_vertical:
        for column in range(width):
            for row in range(0, height):
                total_red = 0
                total_green = 0
                total_blue = 0
                # count only the neighbours that fall inside the image, so
                # edge pixels are divided by the right window size
                blur_window = 0
                for rads in range(-radius, radius+1):
                    if 0 <= row+rads < height:
                        blur_window += 1
                for rads in range(-radius, radius+1):
                    row_mod = row+rads
                    if 0 <= row_mod < height:
                        pixel = (row_mod*width) + column
                        total_red += image_data[pixel][0]/blur_window
                        total_green += image_data[pixel][1]/blur_window
                        total_blue += image_data[pixel][2]/blur_window
                out_image_data[row*width + column] = (total_red, total_green, total_blue, 255)
        image_data = out_image_data

    return image_data
You can use it when you've already got an image that's in an array of RGBA pixels, then run:
image_data = blur_image(image_data, height=height, width=width, radius=2)
im = Image.new('RGB', (width, height))
im.putdata(image_data)
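To write the result to disk as well (the filename here is just an example):

im.save('blurred.png', 'PNG')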

Related

Black pixels when saving MRI DICOM after applying LUT

I'm trying to apply a LUT to MR images using the following code. The original DICOM pixel intensities range from 0 to 4334, and the pixel intensities after applying the LUT function range from 0 to 254. I need to preserve the metadata for the resulting image. When I plot the result it displays correctly, but when I save the image it is all black pixels. I know I need to change some metadata tags, but I'm new to DICOM image processing and couldn't figure out what is causing the problem.
import numpy as np
import pydicom

def make_lut(dcm_data, width, center, p_i):
    """
    LUT: look-up table
    VOI: volume of interest
    """
    slope = 1.0
    intercept = 0.0
    min_px = int(np.amin(dcm_data))
    max_px = int(np.amax(dcm_data))
    lut = [0] * (max_px + 1)
    invert = False
    if p_i == "MONOCHROME1":
        invert = True
    else:
        center = (max_px - min_px) - center
    for px_value in range(min_px, max_px):
        lut_value = px_value * slope + intercept
        voi_value = (((lut_value - center) / width + 0.5) * 255.0)
        clamped_value = min(max(voi_value, 0), 255)
        if invert:
            lut[px_value] = round(255 - clamped_value)
        else:
            lut[px_value] = round(clamped_value)
    return lut

def apply_lut(pixels_in, lut):
    pixels = pixels_in.flatten()
    pixels_out = [0] * len(pixels)
    for i in range(0, len(pixels)):
        pixel = pixels[i]
        if pixel > 0:
            pixels_out[i] = int(lut[pixel])
    return np.reshape(pixels_out, (pixels_in.shape[0], pixels_in.shape[1]))

# apply the function
idx = 30
ds = pydicom.dcmread(dcm_files[idx])
raw_pixels = ds.pixel_array
if ds.WindowWidth != '' and ds.WindowCenter != '':
    window_width = ds.WindowWidth
    window_center = ds.WindowCenter
    lut = make_lut(raw_pixels, window_width, window_center, ds.PhotometricInterpretation)
    dcm_default_windowing = apply_lut(raw_pixels, lut)

# save the result
ds.PixelData = dcm_default_windowing.tobytes()
ds.save_as("test_luted.dcm")
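No answer was recorded in this thread, but one plausible cause (an assumption on my part, not a confirmed fix) is a dtype/metadata mismatch: apply_lut returns a plain-int NumPy array (typically int64), so tobytes() produces far more bytes per pixel than the dataset's BitsAllocated/BitsStored tags describe. A minimal sketch of casting the pixels and updating the describing tags before saving:

import numpy as np

# Sketch; assumes `ds` and `dcm_default_windowing` from the code above.
out = dcm_default_windowing.astype(np.uint8)  # LUT output is 0-255, fits in 8 bits

ds.PixelData = out.tobytes()
ds.BitsAllocated = 8        # original MR data was most likely 16-bit
ds.BitsStored = 8
ds.HighBit = 7
ds.PixelRepresentation = 0  # unsigned integers
ds.WindowCenter = 128       # describe the new 0-255 value range for viewers
ds.WindowWidth = 256
ds.save_as("test_luted.dcm")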

Slice image in dynamic number of squares (grid) and save corner coordinates of those squares in list

I'm reading in an image with the Pillow library in Python. I want to "slice" it into squares, and save the corner coordinates of each of the squares in a list. For example in the image below, I would like to save the corner coordinates for square 15. (Top left corner is 0,0)
The first thing I do after reading in the image is calculate the remainder of the height and width in pixels divided by the number of slices, and crop the image so each square gets the same integer number of pixels.
from PIL import Image, ImageDraw, ImageFont
fileName = 'eyeImg_86.png'
img = Image.open(fileName)
vertical_slices = 8
horizontal_slices = 4
height = img.height
width = img.width
new_height = height - (height % horizontal_slices)
new_width = width - (width % vertical_slices)
img = img.crop((0, 0, new_width, new_height))
Then I calculate the size in pixels of each vertical and horizontal step.
horizontal_step = int(new_width / vertical_slices)
vertical_step = int(new_height / horizontal_slices)
Then I loop from 0 to the total number of vertical and horizontal slices and append to a nested list (each inner list is a row).
points = []
for i in range(horizontal_slices+1):
    row = []
    for j in range(vertical_slices+1):
        row.append((horizontal_step*j, vertical_step*i))
    points.append(row)
Here's where I'm struggling to draw, and to calculate what I need inside each of these squares. This is how I tried to loop over all those points and draw them on the image:
with Image.open(fileName) as im:
    im = im.convert(mode='RGB')
    draw = ImageDraw.Draw(im)
    for i in range(horizontal_slices+1):
        if i < horizontal_slices:
            for j in range(vertical_slices+1):
                if j < vertical_slices:
                    draw.line([points[i][j], points[i+1][j-1]], fill=9999999)
Is there an easy way to dynamically give it the number of rows and columns, and save each square's corner coordinates to a list of tuples, for example?
I'd like both to draw them on top of the original image and to calculate the number of black pixels inside each of the squares.
EDIT: To add some clarification, since the number of rows and columns of the grid is arbitrary, it will likely not be made of squares but rectangles. Furthermore, the numbering of these rectangles should be done row-wise from left to right, like reading.
Thank you
There were (from my understanding) inconsistencies in your use of "horizontal/vertical". I also removed the points list, since you can easily convert a rectangle number to its upper-left corner coordinates (see the function at the end), and I draw the grid directly by drawing all horizontal lines and all vertical lines.
from PIL import Image, ImageDraw, ImageFont

fileName = 'test.png'
img = Image.open(fileName)
vertical_slices = 8
horizontal_slices = 6
height = img.height
width = img.width
new_height = height - (height % vertical_slices)
new_width = width - (width % horizontal_slices)
img = img.crop((0, 0, new_width, new_height))
horizontal_step = int(new_width / horizontal_slices)
vertical_step = int(new_height / vertical_slices)

# drawing the grid
img = img.convert(mode='RGB')
pix = img.load()
draw = ImageDraw.Draw(img)
for i in range(horizontal_slices+1):
    draw.line([(i*horizontal_step, 0), (i*horizontal_step, new_height)], fill=9999999)
for j in range(vertical_slices+1):
    draw.line([(0, j*vertical_step), (new_width, j*vertical_step)], fill=9999999)

# rectangles are numbered from 1 (upper left) to v_slices*h_slices (lower right) in reading order
def num_to_ul_corner_coords(num):
    i = (num-1) % horizontal_slices
    j = (num-1) // horizontal_slices
    return (i*horizontal_step, j*vertical_step)
This should do what you want, provided your picture is pure black and white:
def count_black_pixels(num):
    cnt = 0
    x, y = num_to_ul_corner_coords(num)
    for i in range(horizontal_step):
        for j in range(vertical_step):
            if pix[x+i, y+j] == (0, 0, 0):
                cnt += 1
    perc = round(cnt/(horizontal_step*vertical_step)*100, 2)
    return cnt, perc
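A usage sketch (rectangle 15 is just an example number, counting row-wise from 1 as above):

x, y = num_to_ul_corner_coords(15)   # upper-left corner of rectangle 15
cnt, perc = count_black_pixels(15)   # black pixels inside it: count and percentage
print(f"rectangle 15 starts at {(x, y)}: {cnt} black pixels ({perc}%)")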

Make objects of an image as close to each other as possible

I don't have much experience with PIL. I've got these images, extracted from a stack of microscopy images of cells; each cell is in a 30x30 mask image. I've been struggling to place these cells on a black background as close as possible to each other without overlapping.
My code is the following:
import numpy as np
from PIL import Image

def spread_circles(circles, rad, iterations, step):
    radsqr = rad**2
    for i in range(iterations):
        for ix, c in enumerate(circles):
            vecs = c - circles
            dists = np.sum((vecs)**2, axis=1)
            if len(dists) > 0:
                push = (vecs[dists < radsqr, :].T * dists[dists < radsqr]).T
                push = np.sum(push, axis=0)
                pushmag = np.sum(push*push)**0.5
                if pushmag > 0:
                    push = push/pushmag*step
                    circles[ix] += push
    return circles

def gen_image(sample, n_iter, height=850, width=850, max_shape=30, num_circles=150):
    circles = np.random.uniform(low=max_shape, high=height-max_shape, size=(num_circles, 2))
    circles = spread_circles(circles, max_shape, n_iter, 1).astype(int)
    img = Image.new(mode='F', size=(height, width), color=0).convert('RGBA')
    final1 = Image.new("RGBA", size=(height, width))
    final1.paste(img, (0, 0), img)
    for n, c in enumerate(circles):
        foreground = sample[n]
        final1.paste(foreground, (c[0], c[1]), foreground)
    return final1
But it's hard to avoid overlapping if I do few iterations, and if I increase them the cells end up too sparse, like this:
What I want is something similar to what's inside the red circles that I drew:
I need them as close together as they can get, almost like tiles. How can I do that?
I have started thinking about this and have got a couple of strategies implemented. Anyone else fancying some fun is more than welcome to borrow, steal, appropriate or hack any chunks of my code that they can use! I'll probably play some more tomorrow.
#!/usr/bin/env python3

from PIL import Image, ImageOps
import numpy as np
from glob import glob
import math

def checkCoverage(im):
    """Determines percentage of image that is cells rather than background"""
    N = np.count_nonzero(im)
    return N * 100 / im.size

def loadImages():
    """Load all cell images in current directory into list of trimmed Numpy arrays"""
    images = []
    for filename in glob('*.png'):
        # Open and convert to greyscale
        im = Image.open(filename).convert('L')
        # Trim to bounding box
        im = im.crop(im.getbbox())
        images.append(np.array(im))
    return images

def Strategy1():
    """Get largest image and pad all images to that size - at least it will tesselate perfectly"""
    images = loadImages()
    N = len(images)
    # Find height of tallest image and width of widest image
    maxh = max(im.shape[0] for im in images)
    maxw = max(im.shape[1] for im in images)
    # Determine how many images we will pack across and down the output image - could be improved
    Nx = int(math.sqrt(N))+1
    Ny = int(N/Nx)+1
    print(f'Padding {N} images each to height:{maxh} x width:{maxw}')
    # Create output image
    res = Image.new('L', (Nx*maxw, Ny*maxh), color=0)
    # Pack all images from list onto regular grid
    x, y = 0, 0
    for im in images:
        this = Image.fromarray(im)
        h, w = im.shape
        # Pack this image into top-left of its grid-cell, unless
        # a) in first row, in which case pack to bottom
        # b) in first col, in which case pack to right
        thisx = x*maxw
        thisy = y*maxh
        if y == 0:
            thisy += maxh - h
        if x == 0:
            thisx += maxw - w
        res.paste(this, (thisx, thisy))
        x += 1
        if x == Nx:
            x = 0
            y += 1
    # Trim extraneous black edges
    res = res.crop(res.getbbox())
    # Save as JPEG so we don't find it as a PNG in next strategy
    res.save('strategy1.jpg')
    cov = checkCoverage(np.array(res))
    print(f'Strategy1 coverage: {cov}')

def Strategy2():
    """Rotate all images to portrait (tall rather than wide) and order by height so we tend to stack equal height images side-by-side"""
    tmp = loadImages()
    # Recreate list with all images in portrait format, i.e. tall
    portrait = []
    for im in tmp:
        if im.shape[0] >= im.shape[1]:
            # Already portrait, add as-is
            portrait.append(im)
        else:
            # Landscape, so rotate
            portrait.append(np.rot90(im))
    images = sorted(portrait, key=lambda x: x.shape[0], reverse=True)
    N = len(images)
    maxh, maxw = 31, 31
    # Determine how many images we will pack across and down the output image
    Nx = int(math.sqrt(N))+1
    Ny = int(N/Nx)+1
    print(f'Packing images by height')
    # Create output image
    resw, resh = Nx*maxw, Ny*maxh
    res = Image.new('L', (resw, resh), color=0)
    # Pack all from list
    xpos, ypos = 0, 0
    # Pack first row L->R, second row R->L and alternate
    packToRight = True
    for im in images:
        thish, thisw = im.shape   # numpy shape is (rows, cols) = (height, width)
        this = Image.fromarray(im)
        if packToRight:
            if xpos+thisw < resw:
                # If it fits to the right, pack it there
                res.paste(this, (xpos, ypos))
                xpos += thisw
            else:
                # Else start a new row, pack at right end and continue packing to left
                packToRight = False
                res.paste(this, (resw-thisw, ypos))
                ypos = res.getbbox()[3]
        else:
            if xpos > thisw:
                # If it fits to the left, pack it there
                res.paste(this, (xpos-thisw, ypos))
                xpos -= thisw
            else:
                # Else start a new row, pack at left end and continue packing to right
                ypos = res.getbbox()[3]
                packToRight = True
                res.paste(this, (0, ypos))
    # Trim any black edges
    res = res.crop(res.getbbox())
    # Save as JPEG so we don't find it as a PNG in next strategy
    res.save('strategy2.jpg')
    cov = checkCoverage(np.array(res))
    print(f'Strategy2 coverage: {cov}')

Strategy1()
Strategy2()
Strategy1 gives this at 42% coverage:
Strategy2 gives this at 64% coverage:

Scale and crop an image in python PIL without exceeding the image dimensions

I am cropping an image using python PIL. Say my image is as such:
This is the simple code snippet I use for cropping:
from PIL import Image
im = Image.open(image)
cropped_image = im.crop((topLeft_x, topLeft_y, bottomRight_x, bottomRight_y))
cropped_image.save("Out.jpg")
The result of this is:
I want to scale this cropped image outward by, say, 20%, keeping the aspect ratio (proportionate width and height) the same, to get something like this without exceeding the image dimensions.
How should I scale out the crop so that the aspect ratio is maintained while not exceeding the image boundary/dimensions?
You should calculate the center of your crop and work from there.
As an example:
crop_width = right - left
crop_height = bottom - top
crop_center_x = int(left + crop_width/2)
crop_center_y = int(top + crop_height/2)
In this way you will obtain the (x,y) point which corresponds to the center of your crop w.r.t your original image.
You will then know that the maximum width for your crop is the minimum between the center value and the outer bound of the original image minus the center itself. As a concrete starting point:
im = Image.open("brad.jpg")
l = 200
t = 200
r = 300
b = 300
cropped = im.crop((l, t, r, b))
Which gives you:
If you want to "enlarge" it to the maximum starting from the same center, then you will have:
max_width = min(crop_center_x, im.size[0]-crop_center_x)
max_height = min(crop_center_y, im.size[1]-crop_center_y)
new_l = crop_center_x - max_width
new_t = crop_center_y - max_height
new_r = crop_center_x + max_width
new_b = crop_center_y + max_height
new_crop = im.crop((new_l, new_t, new_r, new_b))
which gives as a result, having the same center:
Edit
If you want to keep the aspect ratio you should retrieve it (the ratio) before and apply the crop only if the resulting size would still fit the original image. As an example, if you want to enlarge it by 20%:
ratio = crop_height/crop_width
scale = 20/100
new_width = int(crop_width + (crop_width*scale))

# Here we are using the previously calculated value for max_width to
# determine if the new one would be too large.
# Note that the width that we calculated here (new_width) refers to both
# sides of the crop, while the max_width calculated previously refers to
# one side only; same for height. Sorry for the confusion.
if max_width < new_width/2:
    new_width = int(2*max_width)
new_height = int(new_width*ratio)

# Do the same for the height, update width if necessary
if max_height < new_height/2:
    new_height = int(2*max_height)
    new_width = int(new_height/ratio)

adjusted_scale = (new_width - crop_width)/crop_width
if adjusted_scale != scale:
    print("Scale adjusted to: {:.2f}".format(adjusted_scale))

new_l = int(crop_center_x - new_width/2)
new_r = int(crop_center_x + new_width/2)
new_t = int(crop_center_y - new_height/2)
new_b = int(crop_center_y + new_height/2)
Once you have the width and height values the process to get the crop is the same as above.
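Putting it together, the final crop uses the same call pattern as before (the output filename is just an example):

final_crop = im.crop((new_l, new_t, new_r, new_b))
final_crop.save("out_scaled.jpg")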

Is this Python code buggy or not?

I'm testing the example with an all-black picture, but it doesn't print "mostly black". I wonder why; please help me.
from PIL import Image

im = Image.open('im.gif')
pixels = im.getdata()   # get the pixels as a flattened sequence
black_thresh = 50
nblack = 0
for pixel in pixels:
    if pixel < black_thresh:
        nblack += 1
n = len(pixels)
if (nblack / float(n)) > 0.5:
    print("mostly black")
If the image is in a multi-band mode such as RGB, each pixel is returned as a tuple containing the color components, not a single value. You can take the average of the components and compare it against the threshold like this:
for pixel in pixels:
    if sum(pixel) / len(pixel) < black_thresh:
        nblack += 1
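An alternative sketch: a GIF is usually palette-mode, so getdata() yields palette indices rather than brightness values; converting to greyscale first makes every pixel a single 0-255 value and sidesteps both cases ('im.gif' is the same example file as above).

from PIL import Image

im = Image.open('im.gif').convert('L')  # greyscale: one value per pixel
pixels = list(im.getdata())
black_thresh = 50
nblack = sum(1 for p in pixels if p < black_thresh)
if nblack / float(len(pixels)) > 0.5:
    print("mostly black")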
