How to implement 3D bilinear interpolation using numpy? - python

I have arrived at this bilinear interpolation code (added here), but I would like to extend it to 3D, meaning make it work with an RGB image (3 dimensions instead of only 2).
If you have any suggestions on how I can do that I would love to hear them.
This was the one-dimensional linear interpolation:
import math

def linear1D_resize(in_array, size):
    """
    `in_array` is the input array.
    `size` is the desired size.
    """
    ratio = (len(in_array) - 1) / (size - 1)
    out_array = []
    for i in range(size):
        low = math.floor(ratio * i)
        high = math.ceil(ratio * i)
        weight = ratio * i - low
        a = in_array[low]
        b = in_array[high]
        out_array.append(a * (1 - weight) + b * weight)
    return out_array
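As a quick check of what this helper does (an illustrative run of mine, not part of the original post), upsampling a 2-element array to 5 samples gives evenly spaced values:

print(linear1D_resize([0, 10], 5))  # -> [0.0, 2.5, 5.0, 7.5, 10.0]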
And this is the 2D version:
import math
import numpy as np

def bilinear_resize(image, height, width):
    """
    `image` is a 2-D numpy array.
    `height` and `width` are the desired spatial dimensions of the new 2-D array.
    """
    img_height, img_width = image.shape[:2]
    resized = np.empty([height, width])
    x_ratio = float(img_width - 1) / (width - 1) if width > 1 else 0
    y_ratio = float(img_height - 1) / (height - 1) if height > 1 else 0
    for i in range(height):
        for j in range(width):
            x_l, y_l = math.floor(x_ratio * j), math.floor(y_ratio * i)
            x_h, y_h = math.ceil(x_ratio * j), math.ceil(y_ratio * i)
            x_weight = (x_ratio * j) - x_l
            y_weight = (y_ratio * i) - y_l
            a = image[y_l, x_l]
            b = image[y_l, x_h]
            c = image[y_h, x_l]
            d = image[y_h, x_h]
            pixel = (a * (1 - x_weight) * (1 - y_weight)
                     + b * x_weight * (1 - y_weight)
                     + c * y_weight * (1 - x_weight)
                     + d * x_weight * y_weight)
            resized[i][j] = pixel  # pixel is the scalar value computed by the interpolation
    return resized

Check out some of the scipy.ndimage interpolation functions. They will do what you're looking for and are 'using numpy'.
They are also very functional, fast, and have been tested many times.
Richard
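For reference, here is a minimal sketch of that approach for an RGB image, using scipy.ndimage.zoom with order=1 (linear interpolation); the image shape and target size are example values of mine:

import numpy as np
from scipy import ndimage

image = np.random.rand(64, 64, 3)  # example RGB image
new_h, new_w = 128, 128
# One zoom factor per axis; a factor of 1 on the channel axis means each
# color channel is interpolated independently, exactly like running the
# 2-D routine once per channel.
resized = ndimage.zoom(image, (new_h / image.shape[0], new_w / image.shape[1], 1), order=1)
print(resized.shape)  # (128, 128, 3)

Note that the hand-rolled bilinear_resize above is nearly channel-agnostic already: if resized is allocated as np.empty([height, width, image.shape[2]]), the four corner reads a, b, c, d become length-3 vectors and the same weighted sum works per channel.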

Related

Perlin noise generator isn't working, doesn't look smooth

I watched some tutorials and tried to create a Perlin noise generator in Python.
It takes in a tuple for the number of vectors in the x and y directions and a scale for the distance in pixels between them, then calculates the dot product between each pixel and each of the 4 vectors surrounding it. It then interpolates them bilinearly to get the pixel's value.
Here's the code:
from PIL import Image
import numpy as np

scale = 16
size = np.array([8, 8])
vectors = []
for i in range(size[0]):
    for j in range(size[1]):
        rand = np.random.rand() * 2 * np.pi
        vectors.append(np.array([np.cos(rand), np.sin(rand)]))

interpolated_map = np.zeros(size * scale)

def interpolate(x1, x2, w):
    t = (w % scale) / scale
    return (x2 - x1) * t + x1

def dot_product(a, b):
    return a[0] * b[0] + a[1] * b[1]

for i in range(size[1] * scale):
    for j in range(size[0] * scale):
        dot_products = []
        for m in range(4):
            corner_vector_x = round(i / scale) + (m % 2)
            corner_vector_y = round(j / scale) + int(m / 2)
            x = i - corner_vector_x * scale
            y = j - corner_vector_y * scale
            if corner_vector_x >= size[0]:
                corner_vector_x = 0
            if corner_vector_y >= size[1]:
                corner_vector_y = 0
            corner_vector = vectors[corner_vector_x + corner_vector_y * size[0]]
            distance_vector = np.array([x, y])
            dot_products.append(dot_product(corner_vector, distance_vector))
        x1 = interpolate(dot_products[0], dot_products[1], i)
        x2 = interpolate(dot_products[2], dot_products[3], i)
        interpolated_map[i][j] = (interpolate(x1, x2, j) / 2 + 1) * 255

img = Image.fromarray(interpolated_map)
img.show()
I'm getting this image:
but I should be getting this:
I don't know what's going wrong. I've tried watching multiple different tutorials and reading a bunch of different articles, but the result is always the same.
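For comparison, here is a minimal sketch of the blending step as it is usually written for Perlin noise: the cell origin comes from flooring (not rounding) the pixel position, the fractional offsets inside the cell drive a fade curve, and the four corner dot products are blended with two horizontal lerps and one vertical lerp. The function names are mine, not from the post:

def fade(t):
    # Perlin's improved easing curve: 6t^5 - 15t^4 + 10t^3.
    return t * t * t * (t * (t * 6 - 15) + 10)

def lerp(a, b, t):
    return a + t * (b - a)

def blend_cell(d00, d10, d01, d11, fx, fy):
    # d00..d11 are the dot products at the four corners of one grid cell;
    # fx, fy are the fractional coordinates inside the cell, in [0, 1).
    u, v = fade(fx), fade(fy)
    top = lerp(d00, d10, u)
    bottom = lerp(d01, d11, u)
    return lerp(top, bottom, v)

One thing that stands out against the code above: round(i / scale) should be math.floor(i / scale) (or i // scale); with rounding, half of each cell pairs the (w % scale) fraction with the wrong set of corner vectors.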

Buffer dtype cannot be buffer in numba

I'm trying to convert a bilateral filter I wrote to run on my GPU via numba, but I can't seem to get it to work! I'm getting the error
TypeError: Buffer dtype cannot be buffer, have dtype: array(float64, 2d, A)
from the following code.
import math
import cmath
import cv2
import numpy as np
from numba import vectorize, float64

@vectorize([(float64[:, :], float64[:, :])], target='cuda')
def apply_filter(img, filteredImage):
    imh, imw = img.shape[:2]
    hd = int((diameter - 1) / 2)
    for h in range(hd, imh - hd):
        for w in range(hd, imw - hd):
            Wp = 0
            filteredPixel = 0
            radius = diameter // 2
            for x in range(0, diameter):
                for y in range(0, diameter):
                    currentX = w - (radius - x)
                    currentY = h - (radius - y)
                    intensityDifference = img[currentX][currentY] - img[w][h]
                    intensity = (1.0 / (2 * math.pi * (sIntensity ** 2))) * math.exp(-(intensityDifference ** 2) / (2 * sIntensity ** 2))
                    foo = (currentX - w) ** 2 + (currentY - h) ** 2
                    distance = cmath.sqrt(foo)
                    smoothing = (1.0 / (2 * math.pi * (sSpace ** 2))) * math.exp(-(distance.real ** 2) / (2 * sSpace ** 2))
                    weight = intensity * smoothing
                    filteredPixel += img[currentX][currentY] * weight
                    Wp += weight
            filteredImage[h][w] = int(round(filteredPixel / Wp))

if __name__ == "__main__":
    src = cv2.imread("messy2.png", cv2.IMREAD_GRAYSCALE)
    src = src.astype(np.float64)
    filtered_image_own = np.zeros(src.shape)
    apply_filter(src, filtered_image_own)
    filtered_image_own = filtered_image_own.astype(np.uint8)
    cv2.imwrite("filtered_image4.png", filtered_image_own)
I've looked around and haven't found anything useful, except that this error might occur because a list is passed in. But both of my arguments are 2D arrays, and the signature should be correct for that. Why am I getting this error?
To pass in arrays or take array outputs, it's better to use guvectorize().
Check it out in the Numba docs or this blog for a detailed account of its usage.
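Roughly, the change would look like this (a sketch only, keeping the same function body; the layout string '(m,n)->(m,n)' tells numba that each call consumes a whole 2-D array and fills a matching 2-D output, rather than treating the arguments as scalars the way vectorize() does):

from numba import guvectorize, float64

@guvectorize([(float64[:, :], float64[:, :])], '(m,n)->(m,n)', target='cuda')
def apply_filter(img, filteredImage):
    ...  # same body as in the question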

How can I speed up the compositing raycasting function?

I'm currently working on a volume rendering project in Python, where I use a compositing ray casting function to produce an image from a 3D volume of voxels. The function (shown below) works correctly, but has a very long runtime. Do you have tips on how to make this function faster? The code is Python 3.6.8 and uses various numpy arrays.
def render_compositing(self, view_matrix: np.ndarray, volume: Volume, image_size: int, image: np.ndarray):
    # Clear the image
    self.clear_image()
    # U, V, View vectors. See documentation in parent's class
    u_vector = view_matrix[0:3]
    v_vector = view_matrix[4:7]
    view_vector = view_matrix[8:11]
    # Center of the image. Image is squared
    image_center = image_size / 2
    # Center of the volume (3-dimensional)
    volume_center = [volume.dim_x / 2, volume.dim_y / 2, volume.dim_z / 2]
    # Define a step size to make the loop faster
    step = 2 if self.interactive_mode else 1
    for i in range(0, image_size, step):
        for j in range(0, image_size, step):
            sum_color = TFColor(0, 0, 0, 0)
            for k in range(0, image_size, step):
                # Get the voxel coordinate X
                voxel_coordinate_x = u_vector[0] * (i - image_center) + v_vector[0] * (j - image_center) + \
                                     view_vector[0] * (k - image_center) + volume_center[0]
                # Get the voxel coordinate Y
                voxel_coordinate_y = u_vector[1] * (i - image_center) + v_vector[1] * (j - image_center) + \
                                     view_vector[1] * (k - image_center) + volume_center[1]
                # Get the voxel coordinate Z
                voxel_coordinate_z = u_vector[2] * (i - image_center) + v_vector[2] * (j - image_center) + \
                                     view_vector[2] * (k - image_center) + volume_center[2]
                color = self.tfunc.get_color(
                    get_voxel(volume, voxel_coordinate_x, voxel_coordinate_y, voxel_coordinate_z))
                sum_color.r = color.a * color.r + (1 - color.a) * sum_color.r
                sum_color.g = color.a * color.g + (1 - color.a) * sum_color.g
                sum_color.b = color.a * color.b + (1 - color.a) * sum_color.b
                sum_color.a = color.a + (1 - color.a) * sum_color.a
            red = sum_color.r
            green = sum_color.g
            blue = sum_color.b
            alpha = sum_color.a
            # Compute the color value (0...255)
            red = math.floor(red * 255) if red < 255 else 255
            green = math.floor(green * 255) if green < 255 else 255
            blue = math.floor(blue * 255) if blue < 255 else 255
            alpha = math.floor(alpha * 255) if alpha < 255 else 255
            # Assign color to the pixel i, j
            image[(j * image_size + i) * 4] = red
            image[(j * image_size + i) * 4 + 1] = green
            image[(j * image_size + i) * 4 + 2] = blue
            image[(j * image_size + i) * 4 + 3] = alpha
I don't understand why you want to use Python for this code. Isn't a shader the better approach if you are concerned about speed?
Anyway, here are a few things that can be done in the current code.
Voxel coordinates can be calculated using numpy: make a 3-channel 2D image and compute the x, y, z coordinates for an entire slice (k) in a single shot, as sketched below.
The step above can be further optimized by storing an image of the x, y, z coordinates of the first slice (k = 0) and a constant view_direction * step increment (step_size). Every other slice can then simply be calculated as (XYZ at k = 0) + k * step_size.
Use early ray termination by thresholding the alpha value at 0.999 or 0.99. This does not look like much, but it gives a lot of speed gain.
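A minimal sketch of the first two suggestions, reusing the variable names from the question (illustrative, not drop-in code):

import numpy as np

k = 5  # example slice index

# Pixel index grids for one slice, centered on the image: shape (image_size, image_size)
ii, jj = np.meshgrid(np.arange(image_size) - image_center,
                     np.arange(image_size) - image_center, indexing='ij')

# x, y, z coordinates of the k = 0 slice in one shot, shape (image_size, image_size, 3);
# broadcasting replaces the per-pixel scalar arithmetic of the triple loop.
xyz0 = (ii[..., None] * u_vector
        + jj[..., None] * v_vector
        + (0 - image_center) * view_vector
        + np.asarray(volume_center))

# Each later slice differs only by a constant offset along the view direction.
step_size = view_vector            # per-k increment
xyz_k = xyz0 + k * step_size       # coordinates of slice k, same shape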

Image enhancement with python and OpenCV

I am trying to enhance my image by first converting from the RGB color space to the YUV color space and doing histogram equalization on the Y channel. However, the output image does not look good.
For histogram equalization, I use the method found on Wikipedia.
Here is the input image:
Here is the output image:
I really don't know where the problem is; can anyone help me or give me a hint?
Below is my code:
import cv2
import numpy as np

img = cv2.imread('/Users/simon/Documents/DIP/Homework_3/input4.bmp')
shape = img.shape
Y_origin_hist = [0] * 256
U_origin = [[0 for i in range(0, shape[1])] for j in range(0, shape[0])]
V_origin = [[0 for i in range(0, shape[1])] for j in range(0, shape[0])]
Y_hist = [0] * 256
# Read RGB value and calculate YUV value
for i in range(0, shape[0]):
    for j in range(0, shape[1]):
        px = img[i, j]
        y = int(0.299 * px[2] + 0.587 * px[1] + 0.114 * px[0])
        u = int(-0.169 * px[2] - 0.331 * px[1] + 0.5 * px[0]) + 128
        v = int(0.5 * px[2] - 0.419 * px[1] - 0.081 * px[0]) + 128
        Y_origin_hist[y] = Y_origin_hist[y] + 1
        U_origin[i][j] = u
        V_origin[i][j] = v
# Histogram equalization
for i in range(0, 256):
    Y_hist[i] = int(((sum(Y_origin_hist[0:i]) - min(Y_origin_hist) - 1) * 255) / ((shape[0] * shape[1]) - 1))
# Write back to RGB value
for i in range(0, shape[0]):
    for j in range(0, shape[1]):
        px = img[i, j]
        px[0] = int(Y_hist[px[0]] + 1.77216 * (U_origin[i][j] - 128) + 0.00099 * (V_origin[i][j] - 128))
        px[1] = int(Y_hist[px[1]] - 0.3437 * (U_origin[i][j] - 128) - 0.71417 * (V_origin[i][j] - 128))
        px[2] = int(Y_hist[px[2]] - 0.00093 * (U_origin[i][j] - 128) + 1.401687 * (V_origin[i][j] - 128))
cv2.imwrite('/Users/simon/Documents/DIP/Homework_3/output4.bmp', img)
For OpenCV in C++, the + and - operators are overloaded and automatically prevent overflow. However, this is not the case when using Python. For this reason, you should use cv2.add() and cv2.subtract() when doing the math, to get the same results you would get using C++.
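A quick illustration of the difference, using standard OpenCV/numpy behavior (the values are example inputs of mine):

import numpy as np
import cv2

a = np.uint8([250])
b = np.uint8([10])
print(a + b)          # [4]     numpy addition wraps around modulo 256
print(cv2.add(a, b))  # [[255]] OpenCV saturates at the dtype maximum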

Trouble implementing Perlin noise in python

I'm attempting to implement the algorithm for generating 2D Perlin noise described here, but I'm having some trouble doing it in Python (which I am relatively new to).
I was expecting the final noise values ('z' in the linked example) to be somewhere between 0.0 and 1.0, but that's not what I'm getting. My code is below; I'd really appreciate any input.
Thanks!
perlin.py:
import math
import numpy
import random
import vector as vctr
from PIL import Image

def dot(v1, v2):
    """
    Returns the dot product of the two input vectors.
    Args:
        v1 - First vector
        v2 - Second vector
    Return:
        Resulting dot product
    """
    return (v1.x * v2.x) + (v1.y * v2.y)

def fade(t):
    """
    Fade using 3t^2 - 2t^3.
    Args:
        t - Value to fade.
    Return:
        Faded value.
    """
    return (3 * (t ** 2)) - (2 * (t ** 3))

def lerp(minVal, maxVal, term):
    """
    Linearly interpolate between minVal and maxVal by term.
    """
    return (maxVal - minVal) * term + minVal

def generateImage(noises, file="perlin.png"):
    """
    Generates an image on disc of the resulting noise values.
    Args:
        noises (list) - 2d list of noise values
        file (str) - location of file to write to
    """
    pixels = numpy.zeros((height, width, 3), dtype=numpy.uint8)
    for x in range(0, width):
        for y in range(0, height):
            rgb = 255 * noises[x][y]
            pixels[x, y] = [rgb, rgb, rgb]
    # Print pixels as image
    img = Image.fromarray(pixels, 'RGB')
    img.save(file)

# Define the noise region
width = 300
height = 300
# Column ordered array of generated gradient vectors
g = numpy.zeros((width + 1, height + 1)).tolist()
# List of final noise values
z = numpy.zeros((width, height)).tolist()

# Fill list with randomly directed unit vectors (one for each grid point)
for x in range(0, width + 1):
    for y in range(0, height + 1):
        randX = random.uniform(-1.0, 1.0)
        randY = random.uniform(-1.0, 1.0)
        length = math.sqrt(randX**2 + randY**2)
        g[x][y] = vctr.vector(randX / length, randY / length)

# For each cell in the sampling space (i.e. each pixel)
for x in range(0, width):
    for y in range(0, height):
        # Generate random point (p) within and relative to current cell
        pX = random.uniform(0.0, 1.0)
        pY = random.uniform(0.0, 1.0)
        # Get the gradient vectors for each cell corner
        g_tl = g[x][y]
        g_tr = g[x + 1][y]
        g_bl = g[x][y + 1]
        g_br = g[x + 1][y + 1]
        # Vectors from each cell corner to the generated point
        # X axis is positive going right, Y is positive going down
        tl = vctr.vector(pX, pY)
        tr = vctr.vector(pX - 1, pY)
        bl = vctr.vector(pX, pY - 1)
        br = vctr.vector(pX - 1, pY - 1)
        # Dot product these vectors to get gradient values
        u = dot(tl, g_tl)
        v = dot(tr, g_tr)
        s = dot(bl, g_bl)
        t = dot(br, g_br)
        # Interpolate the gradient values
        sX = fade(pX)
        sY = fade(pY)
        a = s + (sX * (t - s))
        b = u + (sX * (v - u))
        value = a + (sY * (a - b))
        if (value < 0.0) or (value > 1.0):
            print("VALUE IS OUT OF BOUNDS??? " + str(value))
        z[x][y] = value

generateImage(z)
print("Completed Perlin noise generation!")
vector.py:
class vector:
    def __init__(self, x, y):
        """
        Initialise a new vector in 2D space with the input X and Y values.
        x: X value of vector
        y: Y value of vector
        """
        self.x = x
        self.y = y
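One note on the expected range (my own observation, not from the post): with unit-length gradient vectors, 2D Perlin noise lies in roughly [-sqrt(2)/2, sqrt(2)/2], not [0, 1], so the out-of-bounds prints are expected and the raw values need an explicit remap before being written as pixels. Also, the final blend value = a + (sY * (a - b)) mixes a with itself; the usual form is a lerp between the two edge interpolants, value = b + sY * (a - b). A minimal remap sketch for the z values in perlin.py:

import numpy as np

# Assuming z holds raw noise values in roughly [-0.7071, 0.7071]:
z_arr = np.asarray(z)
z01 = (z_arr / (np.sqrt(2) / 2) + 1.0) / 2.0  # remap to [0, 1]
z01 = np.clip(z01, 0.0, 1.0)                  # guard against small excursions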
