How to stitch these two images with some Python code using OpenCV?

I have two images
Image 1:
Image 2:
I tried various approaches but got errors each time. I also tried this approach. So, can these two images be stitched? If so, how can I do it in Python 3?

What error are you getting? I tested with your images, and it indeed produces an error because the OpenCV Stitcher cannot find overlapping features between the two images. Try other images with at least 25% overlap between them, and use the simpler stitching code below.
import cv2

img1 = cv2.imread("image1.jpg")
img2 = cv2.imread("image2.jpg")
images = (img1, img2)

# Note: cv2.createStitcher was renamed to cv2.Stitcher_create in OpenCV 4.x
stitcher = cv2.createStitcher(True)
status, result = stitcher.stitch(images)

if status == 0:  # cv2.Stitcher_OK
    cv2.imshow('result', result)
    k = cv2.waitKey(0) & 0xff  # press ESC to exit
    if k == 27:
        cv2.destroyAllWindows()
Try using the images below instead, and the result will look like this:
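If you want to check in advance whether two images share enough overlap, here is a rough sketch (an illustration, not part of the original answer) that counts ORB feature matches; very few matches usually means the Stitcher will fail:

import cv2

# Rough overlap check between the two input images
img1 = cv2.imread("image1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("image2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

# Only a handful of matches usually means stitching will not work
print(len(matches), "feature matches found")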

Related

How can I read multiple images in the script below?

cv2_imshow((predict[0].masks.masks[0].numpy() * 255).astype("uint8"))
In this script I can read one image, but how can I read multiple images from predict[]?
Is predict a list of objects? If so, does this help?
import cv2

# Loop over every result object in predict
for mask in predict:
    # Show the first mask of the current result, scaled to 0-255
    cv2.imshow('Mask', (mask.masks.masks[0].numpy() * 255).astype("uint8"))
    cv2.waitKey(0)

# Close all windows when done
cv2.destroyAllWindows()
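If each result can hold several instance masks, an extension of the same idea would be to iterate the inner tensor too (the attribute layout here is assumed from the question, not verified against the library):

import cv2

for result in predict:
    # One window per instance mask; attribute names assumed from the question
    for m in result.masks.masks:
        cv2.imshow('Mask', (m.numpy() * 255).astype("uint8"))
        cv2.waitKey(0)

cv2.destroyAllWindows()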

Compare two images using one of them as reference

I need to compare two images and get the RGB differences, using one of them as the reference.
I'm using this code:
from PIL import Image, ImageChops

img1 = Image.open("sagitale1pos.png")
img2 = Image.open("sagitale1pre.png")

diff = ImageChops.difference(img1, img2)
if diff.getbbox():
    diff.show()
but it returns all the differences between the images, and I want to see only the changes in image 2.
Thanks for the help.
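One possible approach (a sketch, not an answer from the original thread): threshold the difference image and use it as a mask, so only the changed pixels of image 2 are kept:

from PIL import Image, ImageChops

img1 = Image.open("sagitale1pos.png")
img2 = Image.open("sagitale1pre.png")

# Mask of pixels that differ between the two images
diff = ImageChops.difference(img1, img2).convert("L")
mask = diff.point(lambda p: 255 if p > 0 else 0)

# Keep only the changed regions of image 2; everything else is black
changes = Image.composite(img2, Image.new(img2.mode, img2.size), mask)
changes.show()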

How to apply color balance from one image to another

I've got two images. The first one contains multiple items and shows the true colors. When I removed most of the items, the webcam tried to auto-balance the image and produced really false colors.
Is there a way (in code) to apply the color profile of the first (true-color) image to the second image?
(Or point me to some keywords; I'm new to the field. Thanks.)
I've attached them here for easy comparison:
True color
Falsely-adjusted color
I'm using a Logitech webcam, and I can't figure out how to turn off auto-balance in code (on Linux).
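On the auto-balance point: with a V4L2 backend, OpenCV can sometimes disable auto white balance directly. A minimal sketch, assuming the camera driver honors the control (support varies by driver):

import cv2

cap = cv2.VideoCapture(0)
# Try to disable automatic white balance; whether this works depends on the driver
cap.set(cv2.CAP_PROP_AUTO_WB, 0)

# Alternatively, from a shell (control names vary by kernel/driver version):
#   v4l2-ctl -d /dev/video0 --set-ctrl=white_balance_temperature_auto=0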
I use this method and it works very well:
# pip install color_transfer
import cv2
from color_transfer import color_transfer

# Load the two images (img1 is the color reference)
img1 = cv2.imread('image12.png')
img2 = cv2.imread('image1.png')

# Apply the color transfer
img2_transferred = color_transfer(img1, img2)

cv2.imshow("image", img2_transferred)
if cv2.waitKey(0) == ord("q"):  # press 'q' to quit
    exit(0)
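For context on why the argument order matters: the color_transfer package implements the statistical color transfer of Reinhard et al., matching the mean and standard deviation of the two images in LAB color space, so the first argument supplies the color statistics that get imposed on the second.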

OpenCV Error: The operation is neither 'array op array'

I want to superimpose a given set of images of the same size (the AT&T facial images database). I have written the code to do so, which works as follows:
1. I assigned the locations of the images (to start, I am considering only 4 images).
2. imstack is used to read one image (as a base image), onto which the layover (superimposition) takes place.
3. A for loop runs through all the images and adds each to the base image (imstack). The adding is done with the addWeighted() function, blending the current image (im) and the base image (imstack) with alpha values of 0.5 each.
4. After the loop has run to completion (all the images are superimposed on the base image), I display the updated imstack as 'compiledimg' using imshow().
5. I also added the option to save the 'compiledimg' file by pressing 's'.
To fix the error, I tried resizing the image after every iteration so that addWeighted() always receives images of the same dimensions. First, imstack (before entering the for loop) is resized to set a firm base with the required size, which I have taken as (97 rows, 113 columns).
I don't understand why addWeighted() is failing, because I use the resize function to keep the size the same after each iteration. I also tried superimposing just two of the images, and that worked perfectly fine; it fails when I use addWeighted() on the third image.
Say I used addWeighted() on two images, img1 and img2, and stored the result in img3. When I then use addWeighted() on img3 and img4, I get the error, even after calling resize on img3.
Note: the initial size of the images is (97 rows, 113 columns), so I am trying to keep the image size constant.
import cv2
import numpy as np
import os

fnames = ['~/Downloads/1.pgm', '~/Downloads/2.pgm', '~/Downloads/3.pgm']

# Read the base image; imread does not expand '~', so expanduser is needed here too
imstack = cv2.imread(os.path.expanduser('~/Downloads/4.pgm'))
imstack = cv2.resize(imstack, (97, 113))  # cv2.resize takes (width, height)

for path in fnames:
    im = cv2.imread(os.path.expanduser(path))
    im = cv2.resize(im, (97, 113))
    # Blend the current image into the running stack
    imstack = cv2.addWeighted(imstack, 0.5, im, 0.5, 0)
    imstack = cv2.resize(imstack, (97, 113))

cv2.imshow('compiledimg', imstack)
k = cv2.waitKey(0) & 0xFF
if k == 27:
    cv2.destroyAllWindows()
elif k == ord('s'):
    cv2.imwrite('compiledimg.pgm', imstack)
    cv2.destroyAllWindows()
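As an aside: chaining addWeighted() with 0.5/0.5 weights does not average the images equally; each earlier image ends up contributing exponentially less to the final blend. If the goal is an equal-weight average, one sketch (assuming the same file paths, and reading grayscale since PGM is a grayscale format) is to accumulate in float and divide:

import cv2
import numpy as np
import os

paths = ['~/Downloads/1.pgm', '~/Downloads/2.pgm',
         '~/Downloads/3.pgm', '~/Downloads/4.pgm']

# Accumulate in float to avoid uint8 overflow, then divide by the image count
acc = np.zeros((113, 97), dtype=np.float64)  # cv2.resize((97, 113)) yields 113x97
for p in paths:
    im = cv2.imread(os.path.expanduser(p), cv2.IMREAD_GRAYSCALE)
    acc += cv2.resize(im, (97, 113)).astype(np.float64)

mean = (acc / len(paths)).astype(np.uint8)
cv2.imwrite('compiledimg.pgm', mean)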

Extract foreground from individual frames using opencv for python

The problem
I'm working with a camera that posts a snapshot to the web every 5 seconds or so. The camera is monitoring a line of people. I'd like my script to be able to tell me how long the line of people is.
What I've tried
At first, I thought I could do this using BackgroundSubtractorMOG, but it just produces a black image. Here's my code for that, modified to use an image instead of a video capture:
import numpy as np
import cv2
frame = cv2.imread('sample.jpg')
fgbg = cv2.BackgroundSubtractorMOG()
fgmask = fgbg.apply(frame)
cv2.imshow('frame', fgmask)
cv2.waitKey()
Next, I looked at foreground extraction on an image, but this is interactive and doesn't suit my use case of needing the script to tell me how long the line of people is.
I also tried to use peopledetect.py, but since the image of the line is from an elevated position, that script doesn't detect any people.
I'm brand new to OpenCV, so any help is greatly appreciated. I can supply any additional details upon request.
Note:
I'm not so much looking for someone to solve the overall problem, as I am just trying to figure out a way to separate out the people from the background. However, I am open to approaching the problem a different way if you think you have a better solution.
EDIT: Here's a sample image as requested:
I figured it out! @QED helped me get there. Basically, you can't do this with just one image. You need at least two frames to compare, so the algorithm can tell what's different (foreground) and what's the same (background). So I took two frames and looped through them to "train" the algorithm. Here's my code:
import cv2

fgbg = cv2.BackgroundSubtractorMOG()  # OpenCV 2.4 API

# Feed the two consecutive frames so the model can learn what stays the same
for i in range(1, 3):
    name = 'img{}.jpg'.format(i)
    print(name)
    frame = cv2.imread(name)
    fgmask = fgbg.apply(frame)
    cv2.imshow('frame', fgmask)
    cv2.waitKey(30)

# Press ESC to close
if cv2.waitKey(0) & 0xff == 27:
    cv2.destroyAllWindows()
And here's the result from 2 consecutive images!
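For anyone on OpenCV 3 or 4, a side note: BackgroundSubtractorMOG moved to the opencv-contrib bgsegm module, and the built-in MOG2 variant is usually the easier drop-in. A minimal sketch of the same two-frame idea:

import cv2

# MOG2 ships with the main OpenCV package; plain MOG needs opencv-contrib (cv2.bgsegm)
fgbg = cv2.createBackgroundSubtractorMOG2()

for name in ('img1.jpg', 'img2.jpg'):
    frame = cv2.imread(name)
    fgmask = fgbg.apply(frame)

cv2.imshow('frame', fgmask)
cv2.waitKey(0)
cv2.destroyAllWindows()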
