I want to superimpose a given set of images of the same size (the AT&T facial images database). I have written the code to do so, which works as follows:
I have assigned the locations of the images (to start, I am considering only 4 images).
imstack is used to read one image (as a base image) over which the layover (superimposition) will take place.
A for loop runs through all the images and adds each one to the base image (imstack). The adding is done with the addWeighted() function, taking the current image (im) and the base image (imstack) as inputs, with both weights set to 0.5.
After the loop has run to completion (all the images are superimposed on the base image), I display the updated imstack as 'compiledimg' using imshow().
Further, I added the option to save the 'compiledimg' file by pressing 's'.
To fix the error, I tried resizing the image after every iteration so that the addWeighted() function always receives images of the same dimensions. First, imstack (before entering the for loop) is resized, to set a firm base with the required size, which I have taken as (97 rows, 113 columns).
I don't understand why the addWeighted() function is not working, because I am using the resize function to make sure the size stays the same after each iteration. Also, I tried superimposing just two of the images and it worked perfectly fine; however, it fails when I use addWeighted() on the third image.
Say I used addWeighted() on two images img1 and img2 and stored the result in img3. When I then call addWeighted() on img3 and img4, I get the error, even after using resize() on img3.
Note: the initial size of the images is (97 rows, 113 columns), hence I am trying to keep the same image size.
import cv2
import numpy as np
import os

fnames = ['~/Downloads/1.pgm', '~/Downloads/2.pgm', '~/Downloads/3.pgm']
# expanduser is needed here too, so that '~' resolves to the home directory
imstack = cv2.imread(os.path.expanduser('~/Downloads/4.pgm'))
imstack = cv2.resize(imstack, (97, 113))
for path in fnames:
    im = cv2.imread(os.path.expanduser(path))
    im = cv2.resize(im, (97, 113))
    imstack = cv2.addWeighted(imstack, 0.5, im, 0.5, 0)
    imstack = cv2.resize(imstack, (97, 113))

cv2.imshow('compiledimg', imstack)
k = cv2.waitKey(0) & 0xFF
if k == 27:
    cv2.destroyAllWindows()
elif k == ord('s'):
    cv2.imwrite('compiledimg.pgm', imstack)
    cv2.destroyAllWindows()
I created a function for cropping an eye image, because the outer part of the eye image is black and contains unnecessary pixels. But when I use this function while loading images in OpenCV, it allocates too much memory for large resolutions. I want to crop the image with OpenCV inside a TensorFlow ImageDataGenerator preprocessing function.
'crop_image' is my function and it works properly.
train is a path; train > Normal and Abnormal folders where the images are stored.
def preprocessing_fun(train):
    img = cv2.imread(train, cv2.IMREAD_COLOR)
    img = crop_image(img)
    if img is not None:
        image = cv2.resize(img, (512, 512))
        img = cv2.addWeighted(image, 4, cv2.GaussianBlur(image, (0, 0), 224/10), -4, 128)
    return img

img_datagen = ImageDataGenerator(rescale=1./255,
                                 preprocessing_function=preprocessing_fun)
training_gen = img_datagen.flow_from_directory(train, target_size=(224,224),
                                               color_mode='rgb', batch_size=32)
After using this, it finds the correct number of images but then shows an error:
SystemError: returned NULL without setting an error
How do I solve this problem?
Your cv2.imread call is not reading an image inside your function, because cv2.imread can only read one image at a time from a single file path.
For example, if train in your function is a list, then you can read each file with a for loop (for image in train:) or something like that.
I'm looking for a library that enables to "create pictures" (or even videos) with the following functions:
Accepting picture inputs
Resizing said inputs to fit given template / scheme
Positioning the pictures in pre-set up layers or coordinates
A rather schematic way to look at this:
where the red spots are supposed to represent e.g. text, picture (or if possible video) elements.
The end goal would be to give the .py script multiple input pictures and the .py creating a finished version like mentioned above.
I tried looking into Python PIL, but I wasn't able to find what I was looking for.
Yes, it is possible to do this with Python.
The library you are looking for is OpenCV (https://opencv.org/).
Some basic OpenCV Python tutorials: https://docs.opencv.org/master/d9/df8/tutorial_root.html
1) You can use the imread() function to read images from files.
2) You can use the resize() function to resize the images.
3) You can create an empty master numpy array matching the size and depth (color depth) of the black rectangle in the figure you have shown. Then resize your image and copy its contents into the empty array starting from the position you want.
Below is a sample code which does something close to what you might need; you can modify it to suit your actual needs. (Since your requirements are not fully clear, I have written the code so that it can at least guide you.)
import numpy as np
import cv2
import matplotlib.pyplot as plt

# You can store most of these values in another file and load them.
# You can modify this to set the dimensions of the background image.
BG_IMAGE_WIDTH = 100
BG_IMAGE_HEIGHT = 100
BG_IMAGE_COLOR_DEPTH = 3

# This will act as the black bounding box you have shown in your figure.
# You can also load another image instead of creating an empty background image.
empty_background_image = np.zeros(
    (BG_IMAGE_HEIGHT, BG_IMAGE_WIDTH, BG_IMAGE_COLOR_DEPTH),
    dtype=np.uint8  # np.int was removed in NumPy 1.24; uint8 matches image data
)

# Loading an image.
# This will be copied later into one of those red boxes you have shown.
IMAGE_PATH = "./image1.jpg"
foreground_image = cv2.imread(IMAGE_PATH)

# Setting the resize target and top-left position with respect to the bg image.
X_POS = 4
Y_POS = 10
RESIZE_TARGET_WIDTH = 30
RESIZE_TARGET_HEIGHT = 30

# Resizing
foreground_image = cv2.resize(
    src=foreground_image,
    dsize=(RESIZE_TARGET_WIDTH, RESIZE_TARGET_HEIGHT),
)

# Copying this into the background image
empty_background_image[
    Y_POS: Y_POS + RESIZE_TARGET_HEIGHT,
    X_POS: X_POS + RESIZE_TARGET_WIDTH
] = foreground_image

# OpenCV loads images as BGR; convert for matplotlib, which expects RGB.
plt.imshow(cv2.cvtColor(empty_background_image, cv2.COLOR_BGR2RGB))
plt.show()
I've got two images. The first one contains multiple items and shows true colors. When I removed most of the items, the webcam tried to auto-balance the image and yielded really false colors.
Is there a way (in code) to apply the color profile of the first (true-color) image to the second image?
(or point me to some keywords, I'm new to the field, thanks)
Attached them here for easy comparison
True color
Falsely-adjusted color
I am using a Logitech webcam, and I can't figure out how to turn off auto-balance in code (on Linux).
I use this method and it works very well:
#pip install color_transfer
import cv2
from color_transfer import color_transfer

# Load the two images
img1 = cv2.imread('image12.png')
img2 = cv2.imread('image1.png')

# Apply the color transfer
img2_transferred = color_transfer(img1, img2)
cv2.imshow("image", img2_transferred)
if cv2.waitKey(0) == ord("q"):  # ord, not chr: waitKey returns an int key code
    exit(0)
I have built a stitching program that is able to stitch any number of images; however, they have to be in a specific order, which is a problem for my image sets (I am working with 1452 images). My goal is to create an unordered stitcher. I am convinced that the issue occurs when I physically (well, virtually) stitch the images together.
Here is the code for my stitching (assume the keypoints found are accurate, as well as the homography):
def stitchMatches(self, image1, image2, homography):
    #gather x and y axis of images that will be stitched
    height1, width1 = image1.shape[0], image1.shape[1]
    height2, width2 = image2.shape[0], image2.shape[1]
    #create blank image that will be large enough to hold stitched image
    blank_image = np.zeros(((width1 + width2), (height1 + height2), 3), np.uint8)
    #stitch image two into the resulting image while using blank_image
    #to create a large enough frame for images
    result = cv2.warpPerspective((image1), homography, blank_image.shape[0:2])
    #numpy notation for slicing a matrix together allows you to see the image
    result[0:image2.shape[0], 0:image2.shape[1]] = image2
    return result
I believe the issue is with these two lines.
result = cv2.warpPerspective((image1),homography,blank_image.shape[0:2])
result[0:image2.shape[0], 0:image2.shape[1]] = image2
When I run the code on an ordered image set (image 1 is the north-east most), this is the result.
However, when I run the images in reverse order, I get this result.
To my understanding, this is because I have set the resulting image's shape to map to image 2, but I haven't been able to develop a workaround.
Here is a photo of multiple images showing the same issue of the images getting cropped out.
Any help is appreciated.
Recently I have started learning OpenCV and Python for image processing. I am facing problems writing a function.
I was given a task as follows:
Write a function in python to open a color image and convert the image into grayscale.
You are required to write a function color_grayscale(filename,g) which takes two arguments:
a. filename: a color image (Test color image is in folder “Task1_Practice/test_images”. Pick first image to perform the experiment.)
b. g: an integer
Output of program should be a grayscale image if g = 1 and a color image otherwise.
The code I wrote is as follows:
import cv2

def color_grayscale(filename, g):
    filename = cv2.imread("a15.jpg")
    " Enter Value of g:"
    if g == 1:
        gray = cv2.cvtColor(filename, cv2.COLOR_BGR2GRAY)
        img = cv2.imshow("gray", gray)
    else:
        img = cv2.imshow("original", filename)
    return(img)

color_grayscale("a15.jpg", 1)
The code, when run, gives no output whatsoever.
cv2.imshow should be followed by the waitKey function, which displays the image for the specified number of milliseconds; otherwise, the image won't be displayed. For example, waitKey(0) displays the window until any keypress (suitable for still images), while waitKey(25) displays a frame for 25 ms, after which the display is automatically closed. (If you put it in a loop to read videos, it will display the video frame by frame.)
Just add cv2.waitKey(0) before you return img, and it will display the grayscale image.