I am trying to blend one image into another with Python, so I can set each frame of the fading animation as the wallpaper.
I've looked on several forums and haven't found anyone trying to do the same thing. I've never done photo editing in Python, just game stuff, and what I'm doing is completely different.
I have two PNG pictures: a smiley and a green triangle.

Then you can do something like this with OpenCV:
import cv2
import numpy as np

image1 = cv2.imread('Face1.png')
image2 = cv2.imread('Nose1.png')
print(f"Size of image 1 and image 2: {image1.shape} and {image2.shape}")

# Resize the second image to match the first
image2_resized = cv2.resize(image2, (image1.shape[1], image1.shape[0]))
print(f"Sizes after resizing: {image1.shape} and {image2_resized.shape}")

# Blend the two images 50/50
blended_image = cv2.addWeighted(src1=image1, alpha=0.5, src2=image2_resized, beta=0.5, gamma=0)

cv2.imshow('Blended', blended_image)
cv2.waitKey(0)
Output:
I have this image (which was taken from Planet Lab's PlanetScope product (not sure if that matters)). The image file is a .tif file and has 4 channels: R, G, B, and NIR. Link to original image here: https://drive.google.com/open?id=1g9tZar41tU3E_y5oev1Gy77hGiKB1q7L
I'm just trying to do something basic: load the image into Pillow and show it with Matplotlib. But I think the fourth channel is being treated as an alpha channel, so the image looks a bit weird. I want to visualize only the R, G, B components. How do I do this?
I currently have something like this:
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
img = Image.open(image_file)
plt.imshow(img)
# Subsetting the array
img_arr = np.array(img)
# Plotting only the first 3 channels
plt.imshow(img_arr[:, :, 0:3])
# After asking PIL to drop NIR Channel
img_rgb = Image.open(image_file).convert('RGB')
plt.imshow(img_rgb)
In both cases, the image doesn't look like a true RGB rendering.
EDIT: I did img = Image.open(image_file).convert('RGB') and the image looks roughly the same?
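One likely culprit (an assumption, since I can't open the linked file): PlanetScope imagery is typically 16-bit, and Matplotlib expects uint8 or floats in 0–1, so even after dropping the NIR band the display can look washed out or wrong. A sketch that slices off the first three bands and percentile-stretches them for display, with a synthetic uint16 array standing in for the real file:

```python
import numpy as np

# Stand-in for np.array(Image.open(image_file)): a 4-band, 16-bit image
img_arr = np.random.randint(0, 10000, size=(64, 64, 4), dtype=np.uint16)

# Keep only R, G, B and drop the NIR band
rgb = img_arr[:, :, :3].astype(float)

# Percentile-stretch into 0..1 so Matplotlib displays it sensibly
lo, hi = np.percentile(rgb, (2, 98))
rgb_display = np.clip((rgb - lo) / (hi - lo), 0, 1)

# plt.imshow(rgb_display) would now show a plain RGB rendering
print(rgb_display.shape, rgb_display.min(), rgb_display.max())
```

If your real data is 8-bit after all, the stretch is harmless; if it's 16-bit, this is usually the difference between a weird-looking plot and a normal one.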
I have been messing around in Python to see if I could "mix" two pictures together. What I mean is that one image is semi-transparent, so you can see both pictures at once. If that still doesn't make sense, check out this link (only I would mix a picture with another picture, not a GIF):
https://cdn.discordapp.com/attachments/652564556211683363/662770085844221963/communism.gif
Here is my code:
from PIL import Image
im1 = Image.open('oip.jpg')
im2 = Image.open('star.jpg')
bg = Image.blend(im1, im2, 0)
bg.save('star_oip_paste.jpg', quality=95)
and I get the error:
line 6, in <module>
    bg = Image.blend(im1, im2, 0)
ValueError: images do not match
I'm not even sure if I'm using the right function for "mixing" two images together — so if I'm not, let me know.
There are several things going on here:
Your input images are both JPEGs, which don't support transparency, so you can only get a fixed blend across the whole image. I mean you can't see one image at one point and the other image at another; you will only see the same proportion of each image at every point. Is that what you want?
For example, if I take Paddington and Buckingham Palace and take 50% of each:
I get this:
If that's what you want, you need to resize the images to a common size and change this line:
bg = Image.blend(im1, im2, 0)
to
bg = Image.blend(im1, im2, 0.5) # blend half and half
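A minimal sketch of the whole fix, with solid-colour placeholder images standing in for oip.jpg and star.jpg (which presumably have different sizes, hence the error):

```python
from PIL import Image

# Placeholder images of different sizes, like the originals presumably were
im1 = Image.new('RGB', (400, 300), (255, 0, 0))
im2 = Image.new('RGB', (200, 200), (0, 0, 255))

# Image.blend requires identical size and mode, so resize im2 to match im1
im2 = im2.resize(im1.size)

bg = Image.blend(im1, im2, 0.5)  # blend half and half
print(bg.size, bg.getpixel((0, 0)))
```

With alpha=0 you get back im1 unchanged, which is why your original call looked like it did nothing even when the sizes happened to match.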
If you want to paste something with transparency, so it only shows up in certain places, you need to load the overlay from a GIF or PNG with transparency and use:
background.paste(overlay, box=None, mask=overlay)
Then you can do this - note you can see different amounts of the two images at each point:
So, as a concrete example of overlaying a transparent image onto an opaque background, and starting with Paddington (400x400) and this star (500x500):
#!/usr/bin/env python3
from PIL import Image
# Open background and foreground and ensure they are RGB (not palette)
bg = Image.open('paddington.png').convert('RGB')
fg = Image.open('star.png').convert('RGBA')
# Resize foreground down from 500x500 to 100x100
fg_resized = fg.resize((100, 100))
# Overlay foreground onto background at the top-right corner, using the transparency of the foreground as the mask
bg.paste(fg_resized, box=(300, 0), mask=fg_resized)
# Save result
bg.save('result.png')
If you want to grab an image from a website, use this:
from PIL import Image
import requests
from io import BytesIO
# Grab the star image from this answer
response = requests.get('https://i.stack.imgur.com/wKQCT.png')
# Make it into a PIL image
img = Image.open(BytesIO(response.content))
As an alternative, you could try OpenCV (depending on your desired output):
import cv2
# Read the images
foreground = cv2.imread("puppets.png")
background = cv2.imread("ocean.png")
alpha = cv2.imread("puppets_alpha.png")
# Convert uint8 to float
foreground = foreground.astype(float)
background = background.astype(float)
# Normalize the alpha mask to keep intensity between 0 and 1
alpha = alpha.astype(float)/255
# Multiply the foreground with the alpha matte
foreground = cv2.multiply(alpha, foreground)
# Multiply the background with ( 1 - alpha )
background = cv2.multiply(1.0 - alpha, background)
# Add the masked foreground and background.
outImage = cv2.add(foreground, background)
# Display image
cv2.imshow("outImg", outImage/255)
cv2.waitKey(0)
I'm using PIL to resize a JPG. I'm expecting the same image, resized, as output, but instead I get a correctly sized black box. The new image file is completely devoid of any information, just an empty file. Here is an excerpt from my script:
basewidth = 300
img = Image.open(path_to_image)
wpercent = (basewidth/float(img.size[0]))
hsize = int((float(img.size[1])*float(wpercent)))
img = img.resize((basewidth,hsize))
img.save(dir + "/the_image.jpg")
I've tried resizing with Image.LANCZOS as the second argument (it defaults to Image.NEAREST with one argument), but it didn't make a difference. I'm running Python 3 on Ubuntu 16.04. Any ideas on why the image file is empty?
I also encountered the same issue when trying to resize an image with transparent background. The "resize" works after I add a white background to the image.
Code to add a white background then resize the image:
from PIL import Image
im = Image.open("path/to/img")
if im.mode == 'RGBA':
    # Build a mask from the inverted alpha channel
    alpha = im.split()[3]
    bgmask = alpha.point(lambda x: 255 - x)
    im = im.convert('RGB')
    # Paste white wherever the image was transparent
    im.paste((255, 255, 255), None, bgmask)
im = im.resize((new_width, new_height), Image.ANTIALIAS)  # Image.ANTIALIAS is Image.LANCZOS in newer Pillow
ref:
Other's code for making thumbnail
Python: Image resizing: keep proportion - add white background
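An alternative way to flatten the transparency onto white, in case it helps: Image.alpha_composite does the masking in one call. A sketch using an in-memory image and buffer, since I don't have the original file (the sizes and the semi-transparent red fill are just placeholders):

```python
from io import BytesIO
from PIL import Image

# A semi-transparent RGBA image standing in for the original
im = Image.new('RGBA', (400, 300), (255, 0, 0, 128))

# Composite onto an opaque white background, then drop the alpha channel
white = Image.new('RGBA', im.size, (255, 255, 255, 255))
im = Image.alpha_composite(white, im).convert('RGB')

# Now resizing and saving as JPEG produces a non-empty image
im = im.resize((300, 225))
buf = BytesIO()
im.save(buf, format='JPEG', quality=95)
print(im.mode, im.size, len(buf.getvalue()) > 0)
```

The key point is the same as in the answer above: get to a plain RGB image before saving as JPEG, since JPEG can't carry the alpha channel.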
The simplest way to get to the bottom of this is to post your image! Failing that, we can check the various aspects of your image.
So, import Numpy and PIL, open your image and convert it to a Numpy ndarray, you can then inspect its characteristics:
import numpy as np
from PIL import Image
# Open image
img = Image.open('unhappy.jpg')
# Convert to Numpy Array
n = np.array(img)
Now you can print and inspect the following things:
n.shape # we are expecting something like (1580, 1725, 3)
n.dtype # we expect dtype('uint8')
n.max() # if there's white in the image, we expect 255
n.min() # if there's black in the image, we expect 0
n.mean() # we expect some value between 50-200 for most images
It's not difficult to use OpenCV to extract a single colour channel from an image.
But is it possible, starting from the three main BGR colours of an image, to view the image as just a combination of green and red, without blue, using a function directly?
I can achieve this by setting all the blue values to 0 and then viewing the image; the code is below:
import cv2
import numpy as np
gr = cv2.imread('imagetowork2.jpg')
gr[:, :, 0] = 0
cv2.namedWindow("random_image",cv2.WINDOW_NORMAL)
cv2.resizeWindow("random_image",590,332)
cv2.imshow("random_image",gr)
cv2.waitKey(0)
cv2.destroyAllWindows()
I had to resort to the code above because working with gr[:, :, 1:3] didn't work, and I don't understand why the cv2 functions work with all three channels but not with two.
This is the error you get if you try to show the image using gr[:, :, 1:3]:
error: OpenCV(3.4.5) C:\projects\opencv-python\opencv\modules\imgcodecs\src\utils.cpp:622: error: (-15:Bad number of channels) Source image must have 1, 3 or 4 channels in function 'cvConvertImage'
So my main question is: is there a built-in function in OpenCV, or any other library, that does this directly, or is setting a whole colour channel to 0 the only way?
So this is the image I am working on (you can actually use any kind of image):
And this is the output of what I performed (which I want to get using some built-in function, not by setting values to 0):
Is this at all possible?
I guess one thing you could do is hold the channels separately and combine what you want for viewing:
#!/usr/local/bin/python3
import numpy as np
import cv2
im = cv2.imread('sd71Z.jpg')
b,g,r = cv2.split(im)
zeros = np.zeros_like(b)
cv2.imshow("Zero blue", cv2.merge([zeros,g,r]))
cv2.imshow("Zero green", cv2.merge([b,zeros,r]))
cv2.imshow('Zero red', cv2.merge([b,g,zeros]))
cv2.waitKey()
I am using SimpleCV to stitch images. I made some changes to SimpleCV's GitHub code and eventually got the image transformed correctly. But the problem is that the color of the image changes after the transformation.
I have used these images http://imgur.com/a/lrGw4. The output of my code is: http://i.imgur.com/2J722h.jpg
This is my code:
from SimpleCV import *
import cv2
import cv
img1 = Image("s.jpg")
img2 = Image("t.jpg")
dst = Image((2000, 1600))
# Find the keypoints.
ofimg = img1.findKeypointMatch(img2)
# The homography matrix.
homo = ofimg[1]
eh = dst.getMatrix()
# transform the image.
x = Image(cv2.warpPerspective(np.array((img2.getMatrix())), homo,
(eh.rows, eh.cols+300), np.array(eh), cv.INTER_CUBIC))
# blit the img1 now on coordinate (0, 0).
x = x.blit(img1, alpha=0.4)
x.save("rishi1.jpg")
It seems you're using an old revision of SimpleCV. In the latest version, the way to get the homography matrix is:
ofimg[0].getHomography()
Edit:
It seems the color problem you're mentioning is due to a change of color space. So please change the line where you warp the image to:
x = Image(cv2.warpPerspective(np.array((img2.getMatrix())), homo,
(eh.rows, eh.cols+300), np.array(eh), cv.INTER_CUBIC), colorSpace=ColorSpace.RGB).toBGR()
I suspect what's happening is that the returned image after warping is in the BGR color space while SimpleCV by default uses the RGB color space. Please let me know how it goes.