cv2.createStitcher() not enough keypoints? - python

I am working on a project to have multiple cameras, each taking an image and then the images will be stitched together. Currently I am trying to use the cv2.createStitcher().stitch(images) function. Below is the code that I use:
import cv2

imageFiles = ['imageCapture1_0.png', 'imageCapture2_0.png']
images = []
for filename in imageFiles:
    img = cv2.imread(filename)
    images.append(img)

cv2.ocl.setUseOpenCL(False)
stitcher = cv2.createStitcher()
status, result = stitcher.stitch(images)
cv2.imwrite('result.png', result)
The inputs are two overlapping photos: a left image and a right image.
However, the result output becomes None (the debugger shows a NoneType object of the builtins module). From what I have googled, the cause is that there are not enough matching keypoints to stitch the images together. If so, is there a way to stitch the images even with fewer keypoints? Is there a parameter I can set? I read through the documentation with no luck trying to find a solution. Thank you in advance.

The image stitching operation status, result = stitcher.stitch(images) returns two values: a status indicator and the resulting stitched image. You can check the value of status to determine whether or not the stitching operation succeeded. From the docs it can be one of four values:
OK = 0: Image stitching was successful.
ERR_NEED_MORE_IMGS = 1: There were not enough keypoints detected in your input images to construct the panorama. You will need more input images.
ERR_HOMOGRAPHY_EST_FAIL = 2: This error occurs when the RANSAC homography estimation fails. Similarly, you may need more input images or the images provided do not have enough distinguishing features for keypoints to be accurately matched.
ERR_CAMERA_PARAMS_ADJUST_FAIL = 3: Usually related to failing to properly estimate camera features from the input images.
For your situation, you can either add more input images so that enough keypoints are detected, or look into writing your own stitching implementation where you control the matching parameters. Checking the status code explicitly, as sketched below, tells you which failure you are hitting.
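If you want to branch on these cases explicitly, here is a minimal sketch using the status constants exposed by the Python bindings (note that OpenCV 4.x renames the factory to cv2.Stitcher_create; the fallback below is an assumption about your build):

import cv2

# load the two images from the question
images = [cv2.imread(f) for f in ['imageCapture1_0.png', 'imageCapture2_0.png']]

# OpenCV 3.x exposes createStitcher; 4.x exposes Stitcher_create
make_stitcher = getattr(cv2, 'createStitcher', None) or cv2.Stitcher_create
stitcher = make_stitcher()
status, result = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite('result.png', result)
elif status == cv2.Stitcher_ERR_NEED_MORE_IMGS:
    print('not enough keypoints; add more overlapping images')
elif status == cv2.Stitcher_ERR_HOMOGRAPHY_EST_FAIL:
    print('homography estimation failed; increase the overlap between shots')
else:  # cv2.Stitcher_ERR_CAMERA_PARAMS_ADJUST_FAIL
    print('camera parameter estimation failed')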

I copied and ran your code, and it works fine.
Left Image
Right Image
Result
What I think is that the function could not find enough matching points in your pictures. Trying this code with another set of pictures may help.

Related

ORB's detectAndCompute returns None for a slightly different image

I have some kind of alignment task to do. In the process, I need to extract descriptors and keypoints.
I'm using the following simple code for 2 images that are almost identical, with the same shape:
orb = cv2.ORB_create(maxFeatures)  # maxFeatures caps the number of keypoints
(kpsA, descsA) = orb.detectAndCompute(image, None)
(kpsB, descsB) = orb.detectAndCompute(template, None)
ORB fails with the image on the left but works fine with the right one.
The returned (kpsA, descsA) are fine, but len(kpsB) == 0 and descsB == None, and I can't find the reason for that.
As mentioned in the comments, ORB fails to detect any features in the left image and probably finds only a few features in the right image.
Instead, consider doing image alignment/registration using a method that is not feature-based. Have a look at dense optical flow algorithms such as cv::optflow::DenseRLOFOpticalFlow.
With that being said, your task looks challenging; even humans would have difficulty solving it well. Good luck.
It's been a year since you asked the question, but I want to suggest you try this. A plausible reason you see None for some images is that the default threshold values in the function are too high. Just play with the fastThreshold and edgeThreshold params and see if that works for you.
A good sanity check is to set
orb = cv2.ORB_create(fastThreshold=0, edgeThreshold=0)
and see what happens.
Next, you can choose whether to ignore these images or to try a smaller threshold.
The full explanation of the function params is in the OpenCV docs. A quick keypoint-count comparison is sketched below.
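As a quick illustration of that sanity check, a hedged sketch ('image.png' stands in for one of your problematic images) comparing keypoint counts at the default and zeroed thresholds:

import cv2

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)

default_orb = cv2.ORB_create()
relaxed_orb = cv2.ORB_create(fastThreshold=0, edgeThreshold=0)

kps_default = default_orb.detect(img, None)
kps_relaxed = relaxed_orb.detect(img, None)
print(len(kps_default), 'keypoints at default thresholds')
print(len(kps_relaxed), 'keypoints with thresholds set to 0')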

Find frames in a video that match an image

I am trying to find frames that match an image using OpenCV. I also want to find the time at which the image is found. The video is a masked video. The code so far:
from skimage import color
import cv2

# (methods of a Qt widget class in the original post)
def occurence_counter(self):
    # load the reference image, shrink it to a 10x10 thumbnail
    img = cv2.imread('ref_img.jpg', cv2.IMREAD_COLOR)
    img = cv2.resize(img, (10, 10))
    # convert to b&w
    img = color.rgb2gray(img)
    similarities = []
    result = self.parse_video(img, str(self.lineEdit.text()).strip(), 1, False)
    print(result)

def parse_video(self, image, video, n_matches, break_point=False,
                verbose=False):
    similarities = [{'frame': 0, 'similarity': 0}]
    frame_count = 0
    cap = cv2.VideoCapture(video)
    while cap.isOpened():
        ret, frame = cap.read()
        if frame is None:
            break
        # increment frame counter
        frame_count += 1
        # resize current video frame to the same thumbnail size
        small_frame = cv2.resize(frame, (10, 10))
        # convert to greyscale
        small_frame_bw = color.rgb2gray(small_frame)
Finding the same frame is not an easy problem, and there are many possible solutions. I will describe the possibilities here in a very general way.
Template Matching
Template matching is an algorithm that calculates the similarity of corresponding pixels in two images. So if you are looking for a very similar image (without rotation, translation, or large intensity changes), it is not a bad choice. It is not very fast on whole images; it is typically used to find the same fragment in several images, or a smaller image inside a bigger one, rather than to check the similarity of two whole images.
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_template_matching/py_template_matching.html
For whole images it is easier to simply subtract the images than to use template matching; it is much faster, under the assumption that the images really are similar to each other. A rough sketch of this follows below.
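A rough sketch of the subtraction idea (the file names are placeholders, and both images must have the same size):

import cv2
import numpy as np

ref = cv2.imread('ref_img.jpg', cv2.IMREAD_GRAYSCALE)
frame = cv2.imread('frame.jpg', cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(ref, frame)  # per-pixel absolute difference
score = np.mean(diff)           # 0.0 means the images are identical
print('mean abs diff:', score)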
Histogram Comparison
You can use histogram comparison. It is the fastest way, but it is not accurate: grass and apples are both green, yet dissimilar to each other. It is usually better to use the HSV color space when comparing colors; see the sketch after the link below.
https://docs.opencv.org/3.4.1/d8/dc8/tutorial_histogram_comparison.html
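A hedged sketch of histogram comparison in HSV (the file names and bin counts are illustrative choices, not part of the question):

import cv2

img1 = cv2.imread('frame1.jpg')
img2 = cv2.imread('frame2.jpg')

hsv1 = cv2.cvtColor(img1, cv2.COLOR_BGR2HSV)
hsv2 = cv2.cvtColor(img2, cv2.COLOR_BGR2HSV)

# 2D histograms over hue (0-179) and saturation (0-255)
hist1 = cv2.calcHist([hsv1], [0, 1], None, [50, 60], [0, 180, 0, 256])
hist2 = cv2.calcHist([hsv2], [0, 1], None, [50, 60], [0, 180, 0, 256])
cv2.normalize(hist1, hist1)
cv2.normalize(hist2, hist2)

# correlation metric: 1.0 means identical histograms
print(cv2.compareHist(hist1, hist2, cv2.HISTCMP_CORREL))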
Feature matching
These algorithms search for similar characteristic points (features) in the images. There are many algorithms for finding features; good ones should be insensitive to scale changes, rotation, etc., though this depends on the feature extraction algorithm. A minimal matching sketch follows after the link below.
https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_feature2d/py_features_meaning/py_features_meaning.html#features-meaning
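A minimal ORB plus brute-force matching sketch (the file names and the distance cutoff are illustrative):

import cv2

img1 = cv2.imread('frame1.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('frame2.jpg', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kps1, des1 = orb.detectAndCompute(img1, None)
kps2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
good = [m for m in matches if m.distance < 40]  # arbitrary cutoff
print(len(good), 'good matches')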
Other algorithms
Other options are PSNR and SSIM. I have never used them, but they are used to calculate the similarity between an original and a blurred image, or the similarity of a whole video sequence.
https://docs.opencv.org/3.4.2/d5/dc4/tutorial_video_input_psnr_ssim.html
You can also try comparing hashes of images. It is a very interesting algorithm (to me), but it is not well documented; a small sketch follows after the link below.
https://www.pyimagesearch.com/2017/11/27/image-hashing-opencv-python/
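For illustration, a small difference-hash (dHash) sketch along the lines of that article (the hash size and file names are assumptions); two images are likely similar when the Hamming distance of their hashes is small:

import cv2

def dhash(image, hash_size=8):
    # resize to (hash_size + 1) x hash_size, then compare adjacent columns
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, (hash_size + 1, hash_size))
    diff = resized[:, 1:] > resized[:, :-1]
    return sum(2 ** i for i, v in enumerate(diff.flatten()) if v)

h1 = dhash(cv2.imread('frame1.jpg'))
h2 = dhash(cv2.imread('frame2.jpg'))
print(bin(h1 ^ h2).count('1'))  # Hamming distance between the hashes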
Feature matching is the most commonly used algorithm for this type of task, because feature matching algorithms can detect similar fragments of images even when the images are taken from a different angle, in different conditions, or only partially overlap. Structure-from-motion algorithms often rely on feature matching. https://hub.packtpub.com/exploring-structure-motion-using-opencv/
The solution to the problem always depends on the data we have. So there is no one answer.
If I am not mistaken, what you want to do is called template matching; you can find the OpenCV tutorial for that feature here. This thread might also be useful for you, especially @Sam's answer, which beyond template matching also describes histogram comparison and feature matching.

Extract tiles from tiled TIFF and store in numpy array

My overall goal is to crop several regions from an input mirax (.mrxs) slide image to JPEG output files.
Here is what one of these images looks like:
Note that the darker grey area is part of the image, and the regions I ultimately wish to extract in JPEG format are the 3 black square regions.
Now, for the specifics:
I'm able to extract the color channels from the mirax image into 3 separate TIFF files using vips on the command line:
vips extract_band INPUT.mrxs OUTPUT.tiff[tile,compression=jpeg] C --n 1
Where C corresponds to the channel number (0-2), and each output file is about 250 MB in size.
The next job is to somehow recognize and extract the regions of interest from the images, so I turned to several python imaging libraries, and this is where I encountered difficulties.
When I try to load any of the TIFFs using OpenCV using:
i = cv2.imread('/home/user/input_img.tiff',cv2.IMREAD_ANYDEPTH)
I get this error: (-211) The total matrix size does not fit to "size_t" type in function setSize
I managed to get a little more traction with Pillow, by doing:
from PIL import Image
tiff = Image.open('/home/user/input_img.tiff')
print(len(tiff.tile))
print(tiff.tile[0])
print(tiff.info)
which outputs:
636633
('jpeg', (0, 0, 128, 128), 8, ('L', ''))
{'compression': 'jpeg', 'dpi': (25.4, 25.4)}
However, beyond loading the image, I can't seem to perform any useful operations. For example, tiff.tostring() results in a MemoryError (I do this in an attempt to convert the PIL object to a numpy array), and I'm not sure this operation is even valid given the existence of tiles.
From my limited understanding, these TIFFs store the image data in 'tiles' (of which the above image contains 636633) in a JPEG-compressed format.
It's not clear to me, however, how one would extract these tiles for use as regular JPEG images, or even whether the sequence of steps I outlined above is a useful way of accomplishing the overall goal of extracting the ROIs from the mirax image.
If I'm on the right track, then some guidance would be appreciated, or, if there's another way to accomplish my goal using vips/openslide without python I would be interested in hearing ideas. Additionally, more information about how I could deal with or understand the TIFF files I described would also be helpful.
The ideal situations would include:
1) Some kind of autocropping feature in vips/openslide which can generate JPEGs from either the TIFFs or the original mirax image, along the lines of what the following command does, but without generating tens of thousands of images:
vips dzsave CMU-1.mrxs[autocrop] pyramid
2) Being able to extract tiles from the TIFFs and store the data corresponding to an image region in a numpy array, in order to detect the 3 ROIs using OpenCV or another method.
I would use the vips Python binding; it's very like PIL but can handle these huge images. Try something like:
import sys

from gi.repository import Vips

# open the slide, pull out one region, and save it
# (left, top, width, height are your region coordinates in pixels)
slide = Vips.Image.new_from_file(sys.argv[1])
tile = slide.extract_area(left, top, width, height)
tile.write_to_file(sys.argv[2])
You can also extract areas on the command-line, of course:
$ vips extract_area INPUT.mrxs OUTPUT.tiff left top width height
Though that will be a little slower than a loop in Python. You can use crop as a synonym for extract_area.
openslide attaches a lot of metadata to the image describing the layout and position of the various subimages. Try:
$ vipsheader -a myslide.mrxs
And have a look through the output. You might be able to calculate the position of your subimages from that. I would also ask on the openslide mailing list, they are very expert and very helpful.
One more thing you could try: get a low-res overview, corner-detect on that, then extract the tiles from the high-res image. To get a low-res version of your slide, try:
$ vips copy myslide.mrxs[level=7] overview.tif
Level 7 is downsampled by 2 ** 7, so 128x. A rough region-finding sketch based on such an overview follows below.
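Building on that overview, a hedged sketch of finding the dark regions with OpenCV and scaling their coordinates back to full resolution (the threshold value and file name are assumptions, and cv2.findContours uses the OpenCV 4.x two-value return here):

import cv2

overview = cv2.imread('overview.tif', cv2.IMREAD_GRAYSCALE)

# dark regions become white in the mask (threshold value is a guess; tune it)
_, mask = cv2.threshold(overview, 50, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

scale = 2 ** 7  # the overview was taken at level=7
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    # these are the arguments you would pass to extract_area on the full slide
    print(x * scale, y * scale, w * scale, h * scale)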

How to check whether an input image matches a cropped face picture?

I would like to check whether a given image matches a cropped face image. I tried to crop the face from the image using OpenCV Python. Now I want to check if the input image matches the cropped face or not. What methods can I use with OpenCV to achieve this?
For sufficiently small (and not scientifically accurate) purposes, you could use OpenCV's template matching.
Feature extraction and matching may give you more accurate results in many cases. A face detector comes as part of OpenCV; face recognition, however, is a much larger problem altogether. A small template-matching sketch follows below.
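A small sketch of the template-matching option (the file names and the 0.8 cutoff are placeholders, not a definitive recipe):

import cv2

image = cv2.imread('input.jpg', cv2.IMREAD_GRAYSCALE)
face = cv2.imread('cropped_face.jpg', cv2.IMREAD_GRAYSCALE)

# slide the face template over the image and record the best score
result = cv2.matchTemplate(image, face, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

print('match' if max_val > 0.8 else 'no match', max_val, max_loc)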

cv2 CascadeClassifier parameters

Can someone give me an example of a fully configured classifier? I'm talking about the parameters; I just don't understand this example:
cv2.CascadeClassifier.detectMultiScale(image, rejectLevels, levelWeights[, scaleFactor[, minNeighbors[, flags[, minSize[, maxSize[, outputRejectLevels]]]]]]) → objects
I am detecting my face, but I need to set its min and max size. To do that, it seems you have to set rejectLevels, levelWeights, etc.
I'm using the cv2 module.
For this problem, you first have to create a collection file with bounding boxes for your positive images, together with a list of negative images. Then you create OpenCV samples in order to train your cascade. Once you have finished that, you can simply use the following code to detect your face samples.
#load detection file
cascade = cv2.CascadeClassifier("cascade.xml")
# detect objects, return as list
rects = cascade.detectMultiScale(img)
Then you can iterate over your rects list. As for the size question, see the sketch below.
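To answer the size question directly: minSize and maxSize are ordinary keyword arguments of detectMultiScale, so you do not need rejectLevels or levelWeights at all. A hedged sketch (the file names and parameter values are illustrative):

import cv2

cascade = cv2.CascadeClassifier('cascade.xml')
img = cv2.imread('face.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

rects = cascade.detectMultiScale(
    gray,
    scaleFactor=1.1,     # image pyramid step between scales
    minNeighbors=5,      # higher value means fewer, more confident detections
    minSize=(30, 30),    # ignore detections smaller than this
    maxSize=(300, 300),  # ignore detections larger than this
)
for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)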
Please have a look at this reference:
