I have a bunch of images that have different exposures and I want to stitch them together:
OpenCV has a Stitcher example, but it relies on matching features between the images, which requires them to overlap with each other.
My images do not overlap, BUT they are contiguous (as if a panorama had been partitioned into multiple tiles), and I want to join them together and fix their exposures.
How can I do this in OpenCV Python? At the very least the Compensator class needs:
compensator.feed(corners=corners, images=images_warped, masks=masks_warped)
images_warped could just be the normal input images and masks_warped could be arrays of ones, but I don't know how the corners value should be given manually.
A simple example would suffice. Thanks.
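For reference, a minimal, untested sketch of how the corners might be supplied manually, assuming equal-sized tiles laid out left to right and the cv2.detail API as used in OpenCV's stitching_detailed.py sample (exact class and constant names can differ between OpenCV versions):

import cv2
import numpy as np

# Assumed: three same-sized tiles, 1.png, 2.png, 3.png, laid out left to right.
images = [cv2.imread(f) for f in ('1.png', '2.png', '3.png')]
h, w = images[0].shape[:2]

# corners = top-left (x, y) of each tile in the final panorama's coordinate frame
corners = [(i * w, 0) for i in range(len(images))]

# masks mark the valid pixels of each tile; here every pixel is valid (255)
masks = [np.full((h, w), 255, dtype=np.uint8) for _ in images]

compensator = cv2.detail.ExposureCompensator_createDefault(
    cv2.detail.ExposureCompensator_GAIN)
compensator.feed(corners=corners, images=images, masks=masks)

# apply() returns (and, depending on the version, also modifies in place)
# the exposure-corrected tile
for i, (img, mask) in enumerate(zip(images, masks)):
    images[i] = compensator.apply(i, corners[i], img, mask)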
Individual images
Assuming each of your images has the same dimensions, and since OpenCV images are NumPy arrays, we can use np.concatenate, np.hstack, or cv2.hconcat.
import cv2
import numpy as np
# Load images
image1 = cv2.imread('1.png')
image2 = cv2.imread('2.png')
image3 = cv2.imread('3.png')
# Method #1
hstack = np.hstack((image1, image2, image3))
# Method #2
concatenate = np.concatenate((image1, image2, image3), axis=1)
# Method #3
hconcat = cv2.hconcat([image1, image2, image3])
cv2.imshow('hstack', hstack)
cv2.imshow('concatenate', concatenate)
cv2.imshow('hconcat', hconcat)
cv2.waitKey()
The result should be the same for all three methods.
I would create an overlapping section by duplicating onto each picture a small strip of the adjacent picture (see the sketch below).
Then it is just a bit of work to run the result through the normal stitching process.
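A minimal sketch of that idea (my own helper, not a standard OpenCV call), assuming same-height tiles ordered left to right and an overlap width of, say, 50 px:

import numpy as np

def add_overlap(tiles, overlap=50):
    """Duplicate a thin strip of each neighbour so adjacent tiles share pixels."""
    padded = []
    for i, tile in enumerate(tiles):
        left = tiles[i - 1][:, -overlap:] if i > 0 else None
        right = tiles[i + 1][:, :overlap] if i < len(tiles) - 1 else None
        parts = [p for p in (left, tile, right) if p is not None]
        # each padded tile now shares a duplicated strip with its neighbours,
        # so a feature-based stitcher has something to match on
        padded.append(np.hstack(parts))
    return padded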
Related
I have an image that I want to cut up into multiple images to feed into OCR.
image example
I only want the messages with the white bubbles and exclude anything with the grey bubbles. I can't figure out how to make a loop to separate each white bubble.
import numpy as np
from PIL import Image, ImageFilter

# Load the image and denoise it slightly before looking for white pixels
img = Image.open('test1.png').convert('RGB')
img = img.filter(ImageFilter.MedianFilter(3))
na = np.array(img)
orig = na.copy()

# Coordinates of every pure-white pixel
whiteY, whiteX = np.where(np.all(na == [255, 255, 255], axis=2))

# Bounding box around all of the white pixels
top, bottom = whiteY.min(), whiteY.max()
left, right = whiteX.min(), whiteX.max()
You could try using the opencv threshold function, followed by the findContours function. This will, if you threshold the image correctly, give you the 'borders' of the bubbles above. Using that, you could then crop out each text bubble.
Here's a simple example of contours being used:
https://www.geeksforgeeks.org/find-and-draw-contours-using-opencv-python/
Otherwise, if you'd like to understand better how the OpenCV functions I mentioned (or those used in the article above) work, have a look at the OpenCV documentation.
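For example, here is a rough sketch of that approach; the threshold value and the size filter are guesses that would need tuning to your screenshot:

import cv2

img = cv2.imread('test1.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Keep only (almost) pure-white pixels; tune 240 to your image.
_, thresh = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)

# Outer border of each white region (OpenCV 4.x returns two values,
# OpenCV 3.x returns three).
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Crop each bubble out so it can be sent to OCR separately.
for i, cnt in enumerate(contours):
    x, y, w, h = cv2.boundingRect(cnt)
    if w > 50 and h > 20:  # skip tiny specks of white
        bubble = img[y:y + h, x:x + w]
        cv2.imwrite(f'bubble_{i}.png', bubble)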
I have been trying to stack two images.
The end result will be used as the input to my convolutional neural network.
I tried np.dstack, and I also tried PIL's Image.blend, but I cannot seem to arrive at my desired result.
If anyone has any other ideas I can use, it would be greatly appreciated.
This could help you out.
from PIL import Image

image1 = Image.open("img1.jpg")
image2 = Image.open("img2.jpg")

# Resize both images to the same dimensions
image1 = image1.resize((224, 224))
image2 = image2.resize((224, 224))
image1_size = image1.size

# Paste them side by side on a canvas twice as wide
new_image = Image.new('RGB', (2 * image1_size[0], image1_size[1]), (250, 250, 250))
new_image.paste(image1, (0, 0))
new_image.paste(image2, (image1_size[0], 0))
new_image.save("merged_image.jpg")
Resize them so that they are the same size, and then use np.stack with axis=3 if you are using multi-channel images (else, use axis=2).
Or are you trying to combine them into one image? If so, how? Masking, adding, subtracting?
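A minimal sketch of both options, assuming two colour images 'img1.jpg' and 'img2.jpg' that should become one multi-channel input for the network:

import cv2
import numpy as np

img1 = cv2.resize(cv2.imread('img1.jpg'), (224, 224))
img2 = cv2.resize(cv2.imread('img2.jpg'), (224, 224))

stacked = np.stack((img1, img2), axis=3)   # shape (224, 224, 3, 2)
merged = np.dstack((img1, img2))           # shape (224, 224, 6)
print(stacked.shape, merged.shape)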
I am using scikit-image's SSIM to calculate how similar two pictures are, and it is working fine with one exception. When there are a lot of white pixels (let's say it's a pure white background with a very simple black outlined shape), it will say the images are very similar even when the actual shapes are in fact very different.
I tried looking for other questions about this but couldn't find one that accurately answered my question.
Some code:
from skimage.measure import compare_ssim
import numpy as np
import cv2

# With SSIM, compares image A to image B, and returns the result.
def compare_images(imageA, imageB):
    return compare_ssim(imageA, imageB)

# Loads an image with a given filepath with imread.
def load_images(filepath):
    picture = cv2.imread(filepath)
    # Convert the images to grayscale
    return cv2.cvtColor(picture, cv2.COLOR_BGR2GRAY)

# compare the images
original = load_images("images/images.png")
contrast = load_images("images/download.png")
result = compare_images(original, contrast)
print(result)
Mind you, I am just a Python novice. Any help would be welcome.
I currently have built a stitching program that is able to stitch any number of images, however they have to be in a specific order, which is a problem for my image sets (I am working with 1452 images). My goal is to create an unordered stitcher program. I am convinced that the issue occurs when I am physically (well, virtually) stitching the images to one another.
Here is the code for my stitching (assume the keypoints found, as well as the homography, are accurate):
def stitchMatches(self, image1, image2, homography):
    # gather x and y axis of images that will be stitched
    height1, width1 = image1.shape[0], image1.shape[1]
    height2, width2 = image2.shape[0], image2.shape[1]
    # create blank image that will be large enough to hold stitched image
    blank_image = np.zeros(((width1 + width2), (height1 + height2), 3), np.uint8)
    # stitch image two into the resulting image while using blank_image
    # to create a large enough frame for images
    result = cv2.warpPerspective((image1), homography, blank_image.shape[0:2])
    # numpy notation for slicing a matrix together allows you to see the image
    result[0:image2.shape[0], 0:image2.shape[1]] = image2
    return result
I believe the issue is with these two lines.
result = cv2.warpPerspective((image1),homography,blank_image.shape[0:2])
result[0:image2.shape[0], 0:image2.shape[1]] = image2
When I run the code on an ordered image set (image 1 is the north-east-most), this is the result.
However when I run the images in the reverse order I get this result.
To my understanding, this is because I have set the resulting image shape to map to image 2, but I haven't been able to develop a workaround.
Here is a photo of multiple images showing the same issue of the images getting cropped out.
Any help is appreciated.
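For reference, one common workaround (not taken from the code above, just a sketch of the usual approach) is to size the output canvas from the projected corners of both images and compose the homography with a translation, so that nothing is cropped at negative coordinates:

import cv2
import numpy as np

def stitch_with_offset(image1, image2, homography):
    h1, w1 = image1.shape[:2]
    h2, w2 = image2.shape[:2]

    # Project the corners of image1 through the homography to find where it lands.
    corners1 = np.float32([[0, 0], [w1, 0], [w1, h1], [0, h1]]).reshape(-1, 1, 2)
    warped_corners1 = cv2.perspectiveTransform(corners1, homography)

    # image2 stays at the origin; take the bounding box of everything.
    corners2 = np.float32([[0, 0], [w2, 0], [w2, h2], [0, h2]]).reshape(-1, 1, 2)
    all_corners = np.concatenate((warped_corners1, corners2), axis=0)
    x_min, y_min = np.floor(all_corners.min(axis=0).ravel()).astype(int)
    x_max, y_max = np.ceil(all_corners.max(axis=0).ravel()).astype(int)

    # Translation that shifts negative coordinates into view.
    offset = np.array([[1, 0, -x_min],
                       [0, 1, -y_min],
                       [0, 0, 1]], dtype=np.float64)

    # Warp image1 into a canvas big enough for both images.
    result = cv2.warpPerspective(image1, offset.dot(homography),
                                 (x_max - x_min, y_max - y_min))

    # Paste image2 at its translated position instead of always at [0, 0].
    result[-y_min:h2 - y_min, -x_min:w2 - x_min] = image2
    return result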
I want to crop an image by selecting a subregion of it to turn into a new image file.
Based on what I see in the PIL docs, I am trying to decide between 2 approaches.
For both approaches:
# assume I already have a PIL-opened image called original_image
# assume I have a crop_box tuple for the area to crop, e.g. (0, 0, 100, 100)
Approaches
Cropping the original image directly, and then turning the returned crop area into a new file. It seems that I'd be giving up all the mode data / other metadata about the original image, which would then have to be re-loaded?
cropped_image = original_image.crop(crop_box)
# process cropped version as needed
Do an image.transform() in which I select a sub-region
# assume I have an output size of (100, 100)
cropped_image = original_image.transform(size, Image.EXTENT, crop_box)
# process cropped version as needed
Is one approach preferable to the other in terms of speed, data preservation, or another important factor that I'm missing?
The following is from the PIL.Image documentation under the transform function:
It is slightly slower than crop
Plus, crop is simpler and shows your intent. That's what I'd use.
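For completeness, a quick sketch (with a hypothetical input file) showing that crop returns a regular Image in the same mode, so nothing needs to be re-loaded:

from PIL import Image

# Hypothetical input; any RGB image works the same way.
original_image = Image.open('photo.jpg')
crop_box = (0, 0, 100, 100)

cropped_image = original_image.crop(crop_box)
print(original_image.mode, cropped_image.mode)  # the pixel mode is preserved

# Save the sub-region as its own file.
cropped_image.save('cropped.png')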