I have two images, one with and the other without an alpha channel. Thus, images A and B have shapes of (x, y, 4) and (x, y, 3) respectively.
I want to merge both images into a single tensor using Python, where B is the background and A is the upper image. The final image must have a shape of (x, y, 3). I checked whether scikit-image or cv2 is capable of doing this, but I couldn't find any solution.
Here is alpha blending in Python:
import numpy as np
import cv2
alpha = 0.4
img1 = cv2.imread('Desert.jpg')
img2 = cv2.imread('Penguins.jpg')
#r,c,z = img1.shape
out_img = np.zeros(img1.shape,dtype=img1.dtype)
out_img[:,:,:] = (alpha * img1[:,:,:]) + ((1-alpha) * img2[:,:,:])
'''
# if you want to loop over the whole image instead
for y in range(r):
    for x in range(c):
        out_img[y,x,0] = (alpha * img1[y,x,0]) + ((1-alpha) * img2[y,x,0])
        out_img[y,x,1] = (alpha * img1[y,x,1]) + ((1-alpha) * img2[y,x,1])
        out_img[y,x,2] = (alpha * img1[y,x,2]) + ((1-alpha) * img2[y,x,2])
'''
cv2.imshow('Output',out_img)
cv2.waitKey(0)
The above solution works; however, I have a more efficient one:
alpha = A[:,:,3]  # if A is uint8, scale this to [0, 1] first, e.g. alpha = A[:,:,3] / 255.0
A1 = A[:,:,:3]
C = np.multiply(A1, alpha.reshape(x,y,1)) + np.multiply(B, 1-alpha.reshape(x,y,1))
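For completeness, here is a minimal, self-contained sketch of that compositing, assuming A is an 8-bit RGBA image (so the alpha channel has to be scaled to [0, 1] first); the file names are placeholders:

import numpy as np
import cv2

A = cv2.imread('foreground.png', cv2.IMREAD_UNCHANGED)  # shape (x, y, 4), uint8
B = cv2.imread('background.png')                        # shape (x, y, 3), uint8

alpha = A[:, :, 3].astype(np.float32) / 255.0  # normalize alpha to [0, 1]
alpha = alpha[:, :, None]                      # shape (x, y, 1) so it broadcasts over RGB

C = (A[:, :, :3] * alpha + B * (1 - alpha)).astype(np.uint8)
cv2.imwrite('merged.png', C)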
I'm trying to draw a spiral square in Python using OpenCV and NumPy.
I know that I can do it via turtle and there are so many examples on the internet but I need to do it as I described in the title.
So I drew a chessboard via Python OpenCV.
This is the code for it:
import cv2
import numpy as np
mySize = 256
myOffset = 16
mySquare = 32
myNumberY = mySize // mySquare
myNumberX = mySize // mySquare
myColor = 255
img = np.ones((mySize + 2 * myOffset, mySize + 2 * myOffset), dtype = np.uint8) * 127
for y in range(myNumberY) :
    for x in range(myNumberX) :
        myColor = 0 if (x + y) % 2 == 0 else 255
        print(y, x, myColor)
        for ix in range(mySquare) :
            for iy in range(mySquare) :
                img[myOffset + y * mySquare + iy][myOffset + x * mySquare + ix] = myColor
cv2.imshow('my image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
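As a side note, the innermost ix/iy loops can be replaced by a single NumPy slice assignment per square, which is the same mechanism the next snippet uses. A sketch of just the rewritten loop, with the same variables as above:

for y in range(myNumberY) :
    for x in range(myNumberX) :
        myColor = 0 if (x + y) % 2 == 0 else 255
        # assign the whole mySquare x mySquare block at once
        img[myOffset + y * mySquare : myOffset + (y + 1) * mySquare,
            myOffset + x * mySquare : myOffset + (x + 1) * mySquare] = myColor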
Next, I have nested squares, one inside the other.
And this is the code for it:
import cv2, numpy
mySize, myOffset, mySquare, myColor = 256, 16, 16, 0
img = numpy.ones((mySize + 2 * myOffset, mySize + 2 * myOffset), dtype = numpy.uint8) * 127
for item in range(mySize // mySquare // 2) :
    myTempOffsetStart = myOffset + item * mySquare
    myTempOffsetFinish = myOffset + mySize - item * mySquare
    myColor = 0 if myColor == 255 else 255
    img[myTempOffsetStart : myTempOffsetFinish, myTempOffsetStart : myTempOffsetFinish] = myColor
cv2.imshow('my image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
You can change the size of the drawn pattern (the line spacing) by changing the mySquare value.
What is important for me is to draw square spirals like the one in the following picture.
If you look at the picture, you see that the pattern is actually pretty simple. You have to go right, up, left, down and repeat a few times. Also, the distance you have to go starts at 3 and goes up by two every other step, so the moves are right 3, up 3, left 5, down 5, right 7, up 7, and so on.
After having done that, I calculate how much space I need to draw it and shift the points so the coordinates are not negative. Here is my code:
import cv2
import matplotlib.pyplot as plt
import itertools
import numpy as np
repeates = 10
directions = [(1,0),(0,1),(-1,0),(0,-1)]
moves = [(3 + 2 * (i // 2)) * np.array(d)
         for i, d in enumerate(itertools.chain(*itertools.repeat(directions, repeates)))]
points = (np.array([0,0]),*itertools.accumulate(moves))
coordinates = np.array(points).reshape(-1)
r1,r2 = coordinates.min(), coordinates.max()
n = r2-r1+1
img = np.zeros((n,n))
for p, q in zip(points[0:-1], points[1:]):
    cv2.line(img, tuple(p - r1), tuple(q - r1), (1, 1, 1))
plt.imshow(img, cmap='gray')
And it looks like this:
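(If you want to save the result with OpenCV rather than display it with matplotlib, note that the array holds values in [0, 1], so scale it to 8-bit first, e.g. cv2.imwrite('spiral.png', (img * 255).astype(np.uint8)).)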
I want to apply a pinch/bulge filter on an image using Python OpenCV. The result should be some kind of this example:
https://pixijs.io/pixi-filters/tools/screenshots/dist/bulge-pinch.gif
I've read the following Stack Overflow post that should contain the correct formula for the filter: Formulas for Barrel/Pincushion distortion
But I'm struggling to implement this in Python OpenCV.
I've read about maps to apply filter on an image: Distortion effect using OpenCv-python
As far as I understand it, the code could look something like this:
import numpy as np
import cv2 as cv
f_img = 'example.jpg'
im_cv = cv.imread(f_img)
# grab the dimensions of the image
(h, w, _) = im_cv.shape
# set up the x and y maps as float32
flex_x = np.zeros((h, w), np.float32)
flex_y = np.zeros((h, w), np.float32)
# create map with the barrel pincushion distortion formula
for y in range(h):
    for x in range(w):
        flex_x[y, x] = APPLY FORMULA TO X
        flex_y[y, x] = APPLY FORMULA TO Y
# do the remap; this is where the magic happens
dst = cv.remap(im_cv, flex_x, flex_y, cv.INTER_LINEAR)
cv.imshow('src', im_cv)
cv.imshow('dst', dst)
cv.waitKey(0)
cv.destroyAllWindows()
Is this the correct way to achieve the distortion presented in the example image? Any help regarding useful resources, or preferably examples, is much appreciated.
After familiarizing myself with the ImageMagick source code, I've found a way to apply the formula for distortion. With the help of the OpenCV remap function, this is a way to distort an image:
import math

import numpy as np
import cv2 as cv

f_img = 'example.jpg'
im_cv = cv.imread(f_img)

# grab the dimensions of the image
(h, w, _) = im_cv.shape

# distortion parameters (example values: distort within a circle centered
# on the image; positive amount pinches, negative amount bulges)
center_x = w / 2
center_y = h / 2
radius = min(w, h) / 2
scale_x = 1.0
scale_y = 1.0
amount = 0.5

# set up the x and y maps as float32
flex_x = np.zeros((h, w), np.float32)
flex_y = np.zeros((h, w), np.float32)

# create map with the barrel pincushion distortion formula
for y in range(h):
    delta_y = scale_y * (y - center_y)
    for x in range(w):
        # determine if pixel is within an ellipse
        delta_x = scale_x * (x - center_x)
        distance = delta_x * delta_x + delta_y * delta_y
        if distance >= (radius * radius):
            flex_x[y, x] = x
            flex_y[y, x] = y
        else:
            factor = 1.0
            if distance > 0.0:
                factor = math.pow(math.sin(math.pi * math.sqrt(distance) / radius / 2), -amount)
            flex_x[y, x] = factor * delta_x / scale_x + center_x
            flex_y[y, x] = factor * delta_y / scale_y + center_y

# do the remap; this is where the magic happens
dst = cv.remap(im_cv, flex_x, flex_y, cv.INTER_LINEAR)

cv.imshow('src', im_cv)
cv.imshow('dst', dst)
cv.waitKey(0)
cv.destroyAllWindows()
This has the same effect as using the convert -implode function from ImageMagick.
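For comparison, if the ImageMagick command-line tools are installed, the equivalent call could be scripted from Python roughly like this (a sketch; the file names are placeholders):

import subprocess

# positive amounts pinch (implode), negative amounts bulge (explode)
subprocess.run(['convert', 'example.jpg', '-implode', '0.5', 'example_implode.jpg'], check=True)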
You can do that using the implode and explode options in Python Wand, which uses ImageMagick.
Input:
from wand.image import Image
import numpy as np
import cv2
with Image(filename='zelda1.jpg') as img:
    img.virtual_pixel = 'black'
    img.implode(0.5)
    img.save(filename='zelda1_implode.jpg')
    # convert to opencv/numpy array format
    img_implode_opencv = np.array(img)
    img_implode_opencv = cv2.cvtColor(img_implode_opencv, cv2.COLOR_RGB2BGR)

with Image(filename='zelda1.jpg') as img:
    img.virtual_pixel = 'black'
    img.implode(-0.5)
    img.save(filename='zelda1_explode.jpg')
    # convert to opencv/numpy array format
    img_explode_opencv = np.array(img)
    img_explode_opencv = cv2.cvtColor(img_explode_opencv, cv2.COLOR_RGB2BGR)
# display result with opencv
cv2.imshow("IMPLODE", img_implode_opencv)
cv2.imshow("EXPLODE", img_explode_opencv)
cv2.waitKey(0)
Implode:
Explode:
I would like to apply a simple algebraic operation to the RGB values of an image that I have loaded via PIL. My current version works, but is slow:
from PIL import Image
import numpy as np
file_name = '1'
im = Image.open('data/' + file_name + '.jpg').convert('RGB')
pixels = np.array(im)
s = pixels.shape
p = pixels.reshape((s[0] * s[1], s[2]))
def update(ratio=0.5):
    p2 = np.array([[min(rgb[0] + rgb[0] * ratio, 1), max(rgb[1] - rgb[1] * ratio, 0), rgb[2]] for rgb in p])
    img = Image.fromarray(np.uint8(p2.reshape(s)))
    img.save('result/' + file_name + '_test.png')
    return 0
update(0.5)
Does someone have a more efficient idea?
Make use of NumPy's vectorized operations to get rid of the loop.
I modified your original approach to compare performance between the following different solutions. Also, I added a PIL-only approach using ImageMath, in case you want to get rid of NumPy completely.
Furthermore, I assume there is/was a bug:
p2 = np.array([[min(rgb[0] + rgb[0] * ratio, 1), max(rgb[1] - rgb[1] * ratio, 0), rgb[2]] for rgb in p])
You actually do NOT convert to float, so the comparison value in the min call should be 255 instead of 1.
Here's what I've done:
import numpy as np
from PIL import Image, ImageMath
import time
# Modified, original implementation; fixed most likely wrong compare value in min (255 instead of 1)
def update_1(ratio=0.5):
    pixels = np.array(im)
    s = pixels.shape
    p = pixels.reshape((s[0] * s[1], s[2]))
    p2 = np.array([[min(rgb[0] + rgb[0] * ratio, 255), max(rgb[1] - rgb[1] * ratio, 0), rgb[2]] for rgb in p])
    img = Image.fromarray(np.uint8(p2.reshape(s)))
    img.save('result_update_1.png')
    return 0

# More efficient vectorized approach using NumPy
def update_2(ratio=0.5):
    pixels = np.array(im)
    pixels[:, :, 0] = np.minimum(pixels[:, :, 0] * (1 + ratio), 255)
    pixels[:, :, 1] = np.maximum(pixels[:, :, 1] * (1 - ratio), 0)
    img = Image.fromarray(pixels)
    img.save('result_update_2.png')
    return 0

# More efficient approach only using PIL
def update_3(ratio=0.5):
    (r, g, b) = im.split()
    r = ImageMath.eval('min(float(r) / 255 * (1 + ratio), 1) * 255', r=r, ratio=ratio).convert('L')
    g = ImageMath.eval('max(float(g) / 255 * (1 - ratio), 0) * 255', g=g, ratio=ratio).convert('L')
    Image.merge('RGB', (r, g, b)).save('result_update_3.png')
    return 0
im = Image.open('path/to/your/image.png')
t1 = time.perf_counter()
update_1(0.5)
print(time.perf_counter() - t1)
t1 = time.perf_counter()
update_2(0.5)
print(time.perf_counter() - t1)
t1 = time.perf_counter()
update_3(0.5)
print(time.perf_counter() - t1)
The performance on a [400, 400] RGB image on my machine:
1.723889293 s # your approach
0.055316339 s # vectorized NumPy approach
0.062502050 s # PIL only approach
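If you need to squeeze out a bit more for large uint8 images, a lookup-table variant should also work, since the operation only depends on each channel's old 8-bit value. A sketch (not part of the timings above):

import numpy as np
from PIL import Image

def update_lut(im, ratio=0.5):
    # one 256-entry lookup table per modified channel
    idx = np.arange(256, dtype=np.float64)
    lut_r = np.minimum(idx * (1 + ratio), 255).astype(np.uint8)
    lut_g = np.maximum(idx * (1 - ratio), 0).astype(np.uint8)
    pixels = np.array(im)
    pixels[:, :, 0] = lut_r[pixels[:, :, 0]]  # fancy indexing applies the LUT
    pixels[:, :, 1] = lut_g[pixels[:, :, 1]]
    return Image.fromarray(pixels)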
Hope that helps!
In order to better understand how image manipulation works, I've decided to create my own image rotation algorithm rather than using cv2.rotate(). However, I'm encountering a weird picture cropping and pixel misplacement issue.
I think it may have something to do with my padding, but there may be other errors.
import cv2
import math
import numpy as np
# Load & Show original image
img = cv2.imread('Lena.png', 0)
cv2.imshow('Original', img)
# Variable declarations
h = img.shape[0] # Also known as rows
w = img.shape[1] # Also known as columns
cX = h / 2 #Image Center X
cY = w / 2 #Image Center Y
theta = math.radians(100) #Change to adjust rotation angle
imgArray = np.array((img))
imgArray = np.pad(imgArray,pad_width=((100,100),(100,100)),mode='constant',constant_values=0)
#Add padding in an attempt to prevent image cropping
# loop pixel by pixel in image
for x in range(h + 1):
    for y in range(w + 1):
        try:
            TX = int((x-cX)*math.cos(theta)+(y-cY)*math.sin(theta)+cX) #Rotation formula
            TY = int(-(x-cX)*math.sin(theta)+(y-cY)*math.cos(theta)+cY) #Rotation formula
            imgArray[x,y] = img[TX,TY]
        except IndexError as error:
            print(error)
cv2.imshow('Rotated', imgArray)
cv2.waitKey(0)
Edit:
I think the misplaced image position may have something to do with the lack of a proper origin point; however, I cannot seem to find a functioning solution to that problem.
Though I didn't dive into the math part of the domain, based on the given information I think the rotation formula should work like this (rotating around the image center (cX, cY) by angle theta):
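x =  (TX - cX) * cos(theta) + (TY - cY) * sin(theta) + cX
y = -(TX - cX) * sin(theta) + (TY - cY) * cos(theta) + cY

where (TX, TY) is a pixel in the rotated image and (x, y) is the source pixel it samples.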
UPDATE:
As I promised, I dived a bit into the domain and got to the solution you can see below. The main trick is that I've swapped the source and destination indices in the loop: instead of pushing each source pixel to a rounded destination (which leaves holes), every destination pixel samples its rotated source position, so the rounding never causes a problem:
import cv2
import math
import numpy as np
# Load & Show original image
img = cv2.imread('/home/george/Downloads/lena.png', 0)
cv2.imshow('Original', img)
# Variable declarations
h = img.shape[0] # Also known as rows
w = img.shape[1] # Also known as columns
p = 120
h += 2 * p
w += 2 * p
cX = h / 2 #Image Center X
cY = w / 2 #Image Center Y
theta = math.radians(45) #Change to adjust rotation angle
imgArray = np.zeros_like(img)
#Add padding in an attempt to prevent image cropping
imgArray = np.pad(imgArray, pad_width=p, mode='constant', constant_values=0)
img = np.pad(img, pad_width=p, mode='constant', constant_values=0)
# loop pixel by pixel in the destination image
for TX in range(h):
    for TY in range(w):
        try:
            x = int( (TX - cX) * math.cos(theta) + (TY - cY) * math.sin(theta) + cX) #Rotation formula
            y = int(-(TX - cX) * math.sin(theta) + (TY - cY) * math.cos(theta) + cY) #Rotation formula
            imgArray[TX, TY] = img[x, y]
        except IndexError:
            pass
cv2.imshow('Rotated', imgArray)
cv2.waitKey(0)
exit()
Note: See usr2564301's comment too, if you want to dive deeper into the domain.
First off, I am relatively new to Python and its libraries.
The purpose of the following code is to convert an HDR image to RGBM as detailed in WebGL Insights, Chapter 16.
import argparse
import numpy
import imageio
import math
# Parse arguments
parser = argparse.ArgumentParser(description = 'Convert a HDR image to a 32bit RGBM image.')
parser.add_argument('file', metavar = 'FILE', type = str, help ='Image file to convert')
args = parser.parse_args()
# Load image
image = imageio.imread(args.file)
height = image.shape[0]
width = image.shape[1]
output = numpy.zeros((height, width, 4))
# Convert image
for i in numpy.ndindex(image.shape[:2]):
    rgb = image[i]
    rgba = numpy.zeros(4)
    rgba[0:3] = (1.0 / 7.0) * numpy.sqrt(rgb)
    rgba[3] = max(max(rgba[0], rgba[1]), rgba[2])
    rgba[3] = numpy.clip(rgba[3], 1.0 / 255.0, 1.0)
    rgba[3] = math.ceil(rgba[3] * 255.0) / 255.0
    output[i] = rgba
# Save image to png
imageio.imsave(args.file.split('.')[0] + '_rgbm.png', output)
The code works and produces correct results, but it does so very slowly. This is of course caused by iterating over each pixel separately within Python, which for larger images can take a long time (about 4:30 minutes for an image with a size of 3200x1600).
My question is: Is there a more efficient way to achieve what I'm after? I briefly looked into vectorization and broadcasting in numpy but haven't found a way to apply those to my problem yet.
Edit:
Thanks to Mateen Ulhaq, I found a solution:
# Convert image
rgb = (1.0 / 7.0) * numpy.sqrt(image)
alpha = numpy.amax(rgb, axis=2)
alpha = numpy.clip(alpha, 1.0 / 255.0, 1.0)
alpha = numpy.ceil(alpha * 255.0) / 255.0
alpha = numpy.reshape(alpha, (height, width, 1))
output = numpy.concatenate((rgb, alpha), axis=2)
This completes in only a few seconds.
The line
for i in numpy.ndindex(image.shape[:2]):
is just iterating over every pixel. It's probably faster to get rid of the loop and process every pixel in each line of code ("vectorized").
rgb = (1.0 / 7.0) * np.sqrt(image)
alpha = np.amax(rgb, axis=2)
alpha = np.clip(alpha, 1.0 / 255.0, 1.0)
alpha = np.ceil(alpha * 255.0) / 255.0
alpha = np.reshape(alpha, (height, width, 1))
output = np.concatenate((rgb, alpha), axis=2)
I think it's also a bit clearer.
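As a small footnote, np.max supports a keepdims argument, which would avoid the explicit reshape; an equivalent sketch:

rgb = (1.0 / 7.0) * np.sqrt(image)
alpha = np.clip(rgb.max(axis=2, keepdims=True), 1.0 / 255.0, 1.0)
alpha = np.ceil(alpha * 255.0) / 255.0
output = np.concatenate((rgb, alpha), axis=2)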