I have built a stitching program that can stitch any number of images, but they have to be in a specific order, which is a problem for my image sets (I am working with 1452 images). My goal is to create an unordered stitcher. I am convinced that the issue occurs when I am physically (well, virtually) stitching the images to one another.
Here is the code for my stitching (assume the keypoints found are accurate, as is the homography):
def stitchMatches(self, image1, image2, homography):
    # gather the dimensions of the images that will be stitched
    height1, width1 = image1.shape[0], image1.shape[1]
    height2, width2 = image2.shape[0], image2.shape[1]
    # create a blank image large enough to hold the stitched result
    blank_image = np.zeros(((width1 + width2), (height1 + height2), 3), np.uint8)
    # warp image one into the resulting image, using blank_image
    # to provide a large enough frame for both images
    result = cv2.warpPerspective(image1, homography, blank_image.shape[0:2])
    # paste image two into the result with numpy slice assignment
    result[0:image2.shape[0], 0:image2.shape[1]] = image2
    return result
I believe the issue is with these two lines.
result = cv2.warpPerspective(image1, homography, blank_image.shape[0:2])
result[0:image2.shape[0], 0:image2.shape[1]] = image2
When I run the code on an ordered image set (image 1 is the north-east-most image), this is the result.
However, when I run the images in the reverse order, I get this result.
To my understanding, this is because I have set the resulting image shape to map to image 2, but I haven't been able to develop a workaround.
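For reference, the kind of workaround I suspect is needed, pre-translating the homography so the warped image is not clipped at negative coordinates, would look roughly like this (an untested sketch using the variables from stitchMatches):

corners1 = np.float32([[0, 0], [width1, 0], [width1, height1], [0, height1]]).reshape(-1, 1, 2)
corners2 = np.float32([[0, 0], [width2, 0], [width2, height2], [0, height2]]).reshape(-1, 1, 2)
warped_corners = cv2.perspectiveTransform(corners1, homography)
all_corners = np.concatenate((warped_corners, corners2), axis=0)
x_min, y_min = np.int32(all_corners.min(axis=0).ravel() - 0.5)
x_max, y_max = np.int32(all_corners.max(axis=0).ravel() + 0.5)
# shift everything so the top-left corner of the panorama lands at (0, 0)
translation = np.array([[1, 0, -x_min], [0, 1, -y_min], [0, 0, 1]], dtype=np.float64)
result = cv2.warpPerspective(image1, translation @ homography, (int(x_max - x_min), int(y_max - y_min)))
result[-y_min:height2 - y_min, -x_min:width2 - x_min] = image2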
Here is a photo of multiple images showing the same issue of the images getting cropped out.
Any help is appreciated.
I am trying to create an image made up of coloured squares. I only need each square to be one pixel large, as each is just a single block of colour. However, when I use this code, the generated image is extremely blurry. Is there any way to make the borders sharp?
from PIL import Image, ImageColor

def fancycolnw2(seq, m):
    data = numbwall(seq, m)
    for i in range(len(data)):
        for j in range(len(data[i])):
            if data[i][j] == ' ':
                data[i][j] = -1
    # create the image, one pixel per coloured square
    im = Image.new('RGBA', (len(data[0]), len(data)))
    for i in range(len(data) - 1):
        for j in range(len(data[i]) - 1):
            if data[i][j] == -1:
                im.putpixel((j, i), ImageColor.getcolor('black', 'RGBA'))
            if data[i][j] == 0:
                im.putpixel((j, i), ImageColor.getcolor('red', 'RGBA'))
            if data[i][j] == 1:
                im.putpixel((j, i), ImageColor.getcolor('blue', 'RGBA'))
            if data[i][j] == 2:
                im.putpixel((j, i), ImageColor.getcolor('grey', 'RGBA'))
    im.show()
    im.save('simplePixel.png')  # or any image format
The result I get looks like this:
It is the correct image; I just wish the boundaries between pixels were sharp. Any help would be greatly appreciated!
The image is perfectly sharp, but rather small. I suspect that you are "zooming in" to view it clearer, and that whatever program you are zooming with is filtering the image, because with most images this looks better. You need to find a viewing program that uses "nearest neighbour" resampling when zooming in, or generate a larger image to start with, for example by setting a 4-by-4 pixel block rather than individual pixels.
(Also, the code says "# or any image format". Don't use JPEG for this, as the lossy compression will likely wreck your image.)
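If you generate the larger image up front, a minimal sketch of the scale-up (assuming im is the image built in the question, and a factor of 4):

# enlarge by an integer factor with nearest-neighbour resampling,
# so every logical square becomes a sharp 4-by-4 block of pixels
scale = 4
big = im.resize((im.width * scale, im.height * scale), Image.NEAREST)
big.save('simplePixelBig.png')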
I am working with TIFF files that represent the readings of detectors in electron microscopy, and I know how this particular image should look, but I'm unsure how to get that result from the raw data in question. The TIFF files have several pages corresponding to the frames on which data was taken, but each individual frame looks more or less like white noise, so there must be some other way to massage the data into what it's meant to be. I've tried reading each frame into a numpy array and taking the sum over all frames to produce a new image; that almost worked for some of the images in question, though not all. Preferably, I'd like to produce a numpy array representing the new image that looks as it is meant to.
The actual TIFF image itself is too large to attach here, so I'll link to where it can be downloaded on the EMPIAR database. It is /data/ds1_tifs/20180309_Vn_ribosome_0001.tif on this page, the first image listed under ds1_tifs. You'll want to unselect everything else and download this image alone, since the full dataset is obviously absurdly large. The result image should look like this.
My posts on cryo-em discussion boards haven't gained much traction, so any help would be appreciated.
"produce a numpy array representing the new image that looks as it is meant to.": The PNG image looks like a thumbnail image, a somewhat arbitrary preview of the data in the TIFF file obtained by binning and scaling:
import tifffile
from matplotlib import pyplot

# read all pages/frames into one (frames, height, width) array
im = tifffile.imread('20180309_Vn_ribosome_0001.tif', maxworkers=6)

# sum over all frames and over binsize-by-binsize pixel blocks
binsize = 25
height = im.shape[1] // binsize
width = im.shape[2] // binsize
im = im[:, :height * binsize, :width * binsize]
im = im.reshape(im.shape[0], height, binsize, width, binsize)
im = im.sum((0, 2, 4), dtype='uint32')

pyplot.imshow(im, cmap='gray')
pyplot.show()
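To save that preview as a regular 8-bit image (the file name is just an example), one way is to rescale the summed array, e.g. with Pillow:

from PIL import Image

# stretch the summed counts to the full 0-255 range and save as PNG
im8 = ((im - im.min()) * (255.0 / (im.max() - im.min()))).astype('uint8')
Image.fromarray(im8).save('preview.png')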
I'm looking for a library that enables me to "create pictures" (or even videos) with the following functions:
Accepting picture inputs
Resizing said inputs to fit given template / scheme
Positioning the pictures in pre-set up layers or coordinates
A rather schematic way to look at this, where the red spots are supposed to represent e.g. text, picture (or, if possible, video) elements.
The end goal would be to give the .py script multiple input pictures and have it create a finished version like the one described above.
I tried looking into Python PIL, but I wasn't able to find what I was looking for.
Yes, it is possible to do this with Python.
The library you are looking for is OpenCV (https://opencv.org/).
Some basic OpenCV Python tutorials: https://docs.opencv.org/master/d9/df8/tutorial_root.html
1) You can use the imread() function to read images from files.
2) You can use the resize() function to resize the images.
3) You can create an empty master numpy array matching the size and depth (colour depth) of the black rectangle in the figure you have shown, resize your image, and copy its contents into the empty array starting from the position you want.
Below is sample code that does something close to what you might need; you can modify it to suit your actual needs. (Since your requirements are not entirely clear, it is written so that it can at least guide you.)
import numpy as np
import cv2
import matplotlib.pyplot as plt
# You can store most of these values in another file and load them.
# You can modify this to set the dimensions of the background image.
BG_IMAGE_WIDTH = 100
BG_IMAGE_HEIGHT = 100
BG_IMAGE_COLOR_DEPTH = 3
# This will act as the black bounding box you have shown in your figure.
# You can also load another image instead of creating empty background image.
empty_background_image = np.zeros(
    (BG_IMAGE_HEIGHT, BG_IMAGE_WIDTH, BG_IMAGE_COLOR_DEPTH),
    dtype=np.uint8  # np.int is deprecated; images are 8-bit per channel
)
# Loading an image.
# This will be copied later into one of those red boxes you have shown.
IMAGE_PATH = "./image1.jpg"
foreground_image = cv2.imread(IMAGE_PATH)
# Setting the resize target and top left position with respect to bg image.
X_POS = 4
Y_POS = 10
RESIZE_TARGET_WIDTH = 30
RESIZE_TARGET_HEIGHT = 30
# Resizing
foreground_image = cv2.resize(
    src=foreground_image,
    dsize=(RESIZE_TARGET_WIDTH, RESIZE_TARGET_HEIGHT),
)
# Copying this into background image
empty_background_image[
    Y_POS: Y_POS + RESIZE_TARGET_HEIGHT,
    X_POS: X_POS + RESIZE_TARGET_WIDTH
] = foreground_image
plt.imshow(empty_background_image)
plt.show()
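One caveat: cv2.imread returns images in BGR channel order, while matplotlib expects RGB, so the colours may look swapped on screen. Converting before display fixes that:

plt.imshow(cv2.cvtColor(empty_background_image, cv2.COLOR_BGR2RGB))
plt.show()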
I have a bunch of images that have different exposures and I want to stitch them together:
OpenCV has a Stitcher example but it relies on matching features between the images and they should overlap with each other.
My images are not overlapping, BUT they are connected to each other contiguously (as if I had partitioned a panorama into multiple splits), and I want to connect them together and fix their exposures.
How can I do this in OpenCV Python? At the very least the Compensator class needs:
compensator.feed(corners=corners, images=images_warped, masks=masks_warped)
images_warped could just be the normal input images and masks_warped could be a 2D array of ones, but I don't know how the corners value should be given manually.
A simple example would suffice. Thanks.
Individual images
Assuming each of your images has the same dimensions, and since OpenCV images are Numpy arrays, we can use np.concatenate, np.hstack, or cv2.hconcat.
import cv2
import numpy as np
# Load images
image1 = cv2.imread('1.png')
image2 = cv2.imread('2.png')
image3 = cv2.imread('3.png')
# Method #1
hstack = np.hstack((image1, image2, image3))
# Method #2
concatenate = np.concatenate((image1, image2, image3), axis=1)
# Method #3
hconcat = cv2.hconcat([image1, image2, image3])
cv2.imshow('hstack', hstack)
cv2.imshow('concatenate', concatenate)
cv2.imshow('hconcat', hconcat)
cv2.waitKey()
The result should be the same with all three methods.
I would create an overlapping section by duplicating, on each picture, a small strip of the adjacent picture.
Then it is a bit of work to process the result as a normal stitching job.
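A minimal sketch of that idea (the overlap width and file names are assumptions, and the images are assumed to share the same height), duplicating a strip of the left image onto the right one so a feature-based stitcher has something to match:

import cv2
import numpy as np

left = cv2.imread('1.png')
right = cv2.imread('2.png')
overlap = 32  # width in pixels of the duplicated strip; tune as needed

# prepend the last `overlap` columns of the left image to the right image
right_padded = np.hstack((left[:, -overlap:], right))
cv2.imwrite('2_padded.png', right_padded)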
I want to crop an image by selecting a subregion of it to turn into a new image file.
Based on what I see in the PIL docs, I am trying to decide between two approaches.
For both approaches:
# assume I already have a PIL-opened image called original_image
# assume I have a crop_box tuple for the area to crop, e.g. (0, 0, 100, 100)
Approaches
Cropping the original image directly, and then turning the returned crop area into a new file. It seems that I'd be giving up all the mode data / other metadata about the original image, which would then have to be re-loaded?
cropped_image = original_image.crop(crop_box)
# process cropped version as needed
Do an image.transform() in which I select a sub-region
# assume I have an output size of (100, 100)
cropped_image = original_image.transform(size, Image.EXTENT, crop_box)
# process cropped version as needed
Is one approach preferable to the other in terms of speed, data preservation, or another important factor that I'm missing?
The following is from the PIL.Image documentation, under the transform function:
It is slightly slower than crop
Plus, crop is simpler and shows your intent. That's what I'd use.
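For what it's worth, a quick check (a sketch, assuming original_image is already open as in the question) suggests you don't give up the mode by cropping:

cropped_image = original_image.crop(crop_box)
print(cropped_image.mode == original_image.mode)  # True: crop keeps the original mode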