Stitching low-feature images together - Python

Is there a way to stitch low-feature images together vertically? I have tried OpenCV's ORB, SIFT and SURF; however, if the images have no features, or only a few, stitching fails.
I want these images put together:
Please let me know if there is a way to stitch them together, or whether blending would work.

Feature matching is usually based on contours inside the images, such as corners or distinctive polygons, and there are no such contours in either of your images. Once OpenCV cannot find contours, and therefore features, it cannot check whether features correspond and assemble the two images.
If you do have some features despite the image content, try lowering the matching threshold. That allows the algorithm to match features that are not exactly the same, as in your two different pictures.
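As a concrete example, here is a minimal sketch of a looser ratio test with ORB; the file names are placeholders, and the usual Lowe ratio of 0.7 is relaxed towards 0.9 so that weaker, less distinctive matches are kept:

```python
import cv2

# Hypothetical file names for the two images to be stitched.
img1 = cv2.imread("top.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("bottom.png", cv2.IMREAD_GRAYSCALE)

# ORB with a high keypoint budget, so even weak corners are kept.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# detectAndCompute returns None descriptors when nothing is found.
if des1 is None or des2 is None:
    raise SystemExit("no features detected at all; matching cannot work")

# knnMatch plus Lowe's ratio test: raising the ratio from the usual
# 0.7 towards 0.9 is the "lower the matching threshold" idea above.
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in (p for p in matches if len(p) == 2)
        if m.distance < 0.9 * n.distance]
print(f"{len(good)} matches kept")
```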

Related

Detection of pills using OpenCV in python

I would like to know if there is a clever way to segment individual pills using an edge detector (e.g. Canny) in the following image (without some kind of CNN or other ML approach):
So far I have tried the Canny detector on a filtered image (box/Gaussian, with k=(3,3) or k=(5,5)) on different channels of several color spaces, e.g. GRAY, HSV, LAB. Unfortunately, I have not been able to find perfect edges in several different experiments. If I could find the edges, segmentation would already be quite simple, because I could compare the different colors of the pills and their sizes.
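For reference, a minimal sketch of that pipeline; the file name, kernel size, and Canny thresholds are placeholders to tune, and the morphological closing at the end is an extra step (not mentioned above) that sometimes helps close broken edge rings:

```python
import cv2

# Hypothetical input file; thresholds below are starting points.
img = cv2.imread("pills.jpg")

# Canny on the L channel of LAB after a 5x5 Gaussian blur,
# as described in the question.
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l_channel = lab[:, :, 0]
blurred = cv2.GaussianBlur(l_channel, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)

# Close small gaps so pill contours are more likely to be complete.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
cv2.imwrite("edges.png", closed)
```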
I thought it could be done another way: apply a mask to the image for each pill and then find the edges on the filtered image. However, this method seems primitive, and it would be difficult to distinguish two-colored pills.
Best I can get so far:

Why does OpenCV Stitcher Class fail for image sub-sections?

I am trying to stitch panoramas using the OpenCV Stitcher Class in Python. The input data consists of partly overlapping images, but sometimes also images showing a sub-section of another image, e.g.:
Whenever there are images showing a sub-section, the stitching either fails ("Need more images" or "Camera parameters adjusting failed.") or leaves out the image showing the sub-section. This also applies to images that overlap almost completely (>~90%).
There are sufficient features and matches found, and the homography is estimated correctly; see this output from the cv.drawMatches() function:
I have tried different feature detection methods (ORB, AKAZE, SIFT, SURF) as well as tuning various other parameters like bundle adjustment, warp type and confidence (conf_thresh).
It is essential for my application, that all images are included in the panorama, even when they show an area already covered by previous images.
What am I doing wrong? Are there any other methods for generating image stitchings using opencv-python?
Are there any other methods for generating image stitchings using opencv-python?
The stitching package uses opencv-python and offers a lot of insight into the stitching process; its tutorial walks through each step.
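A minimal sketch of using that package is below. The file names are placeholders, and the keyword names follow the package's documented settings; check the tutorial for the version you have installed:

```python
# pip install stitching
import cv2
from stitching import Stitcher

# Lowering confidence_threshold makes the stitcher keep images it
# would otherwise drop, which matters for near-complete overlaps.
stitcher = Stitcher(detector="sift", confidence_threshold=0.2)
panorama = stitcher.stitch(["left.jpg", "right.jpg", "subsection.jpg"])
cv2.imwrite("panorama.png", panorama)
```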

Crop features detected in OpenCV

I am trying to crop out features from a photo using OpenCV and haven't quite been able to find anything that helps do so. I have photos from which I am trying to crop out rivets on metal panels, to create a dataset of images that focus on just the rivets. I have been able to use feature detection and matching with ORB, but I am unsure how to then crop out those features. Ideally, each photo should give me multiple cropped-out images of rivets. Does anyone have experience with anything like this?
For locating the rivets with OpenCV, you can use template matching (which is nicely described here).
If your template is skewed, rotated, etc. in the photo, you can use feature homography
For cropping the part of the image, you can look at this previously answered question.
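A minimal sketch combining these ideas, assuming one cropped rivet is used as the template; the file names and the 0.8 score threshold are placeholders, and overlapping hits around the same rivet would still need non-maximum suppression:

```python
import cv2
import numpy as np

# Hypothetical file names: a panel photo and one cropped rivet
# used as the template.
img = cv2.imread("panel.jpg", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("rivet_template.jpg", cv2.IMREAD_GRAYSCALE)
h, w = template.shape

# Normalized cross-correlation; every location scoring above the
# threshold is treated as a rivet.
result = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(result >= 0.8)

# Crop each match with plain NumPy slicing and save it.
for i, (x, y) in enumerate(zip(xs, ys)):
    crop = img[y:y + h, x:x + w]
    cv2.imwrite(f"rivet_{i}.png", crop)
```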

OpenCV how to replace cut out object with background

I have two images, one image which contains a box and one without. There is a small vertical disparity between the two pictures since the camera was not at the same spot and was translated a bit. I want to cut out the box and replace the hole with the information from the other picture.
I want to achieve something like this (a slide from a computer vision course)
I thought about using the cv2.createBackgroundSubtractorMOG2() method, but it does not seem to work with only 2 pictures.
Simply subtracting one picture from the other does not work either, because of the disparity.
The course suggests using RANSAC to compute the most likely relationship between the two pictures and subtract the area that changed a lot. But how do I actually fill in the holes?
Many thanks in advance!!
If you plan to use only a pair of images (or only a few images), image stitching methods are better than background subtraction.
The steps are:
Calculate homography between the two images.
Warp the second image so it overlaps the first.
Replace the region containing the human (or the box, in your case) with pixels from the warped image.
This link shows a basic example of image stitching. You will need extra work if both images have humans in different places, but otherwise it should not be hard to tweak this code.
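Putting those steps together, here is a minimal sketch assuming SIFT features and a hand-drawn rectangle standing in for the object mask; the file names and mask coordinates are hypothetical:

```python
import cv2
import numpy as np

# Hypothetical file names: the first image contains the box, the
# second is the slightly shifted shot without it.
img1 = cv2.imread("with_box.jpg")
img2 = cv2.imread("without_box.jpg")

# 1. Match SIFT features and estimate the homography with RANSAC.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
good = [m for m, n in (p for p in matches if len(p) == 2)
        if m.distance < 0.7 * n.distance]
src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 2. Warp the empty-background image into the first image's frame.
h, w = img1.shape[:2]
warped = cv2.warpPerspective(img2, H, (w, h))

# 3. Replace the object region with warped background pixels. The
#    rectangle below is a hypothetical mask; in practice you would
#    draw or detect the box region.
mask = np.zeros((h, w), dtype=np.uint8)
cv2.rectangle(mask, (100, 100), (300, 300), 255, -1)
img1[mask == 255] = warped[mask == 255]
cv2.imwrite("result.png", img1)
```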
For background subtraction issues, you can try this library: https://github.com/andrewssobral/bgslibrary. There are Python wrappers for this tool.

How to align multiple camera images using opencv

Imagine someone taking a burst shot with a camera: they will have multiple images, but since no tripod or stand was used, the images will be slightly different.
How can I align them so that they overlay neatly, and crop out the edges?
I have searched a lot, but most of the solutions either perform a 3D reconstruction or use MATLAB.
e.g. https://github.com/royshil/SfM-Toy-Library
Since I'm very new to OpenCV, I would prefer an easy-to-implement solution.
I have generated many datasets by manually rotating and cropping images in MS Paint, but any link to corresponding datasets (slightly rotated and translated images) would also be helpful.
EDIT: I found a solution here
http://www.codeproject.com/Articles/24809/Image-Alignment-Algorithms
which gives close approximations to rotation and translation vectors.
How can I do better than this?
It depends on what you mean by "better" (accuracy, speed, low memory requirements, etc.). One classic approach is to align each frame #i (with i ≥ 2) with the first frame, as follows:
Local feature detection, for instance via SIFT or SURF (link)
Descriptor extraction (link)
Descriptor matching (link)
Alignment estimation via perspective transformation (link)
Transform image #i to match image 1 using the estimated transformation (link)
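A minimal sketch of those five steps is below, using SIFT (SURF lives in opencv-contrib and is not in the default opencv-python build); the file names are placeholders:

```python
import cv2
import numpy as np

def align_to_first(first, frame):
    # 1./2. Detect local features and extract descriptors.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(first, None)
    kp2, des2 = sift.detectAndCompute(frame, None)

    # 3. Match descriptors, keeping distinctive matches only.
    matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
    good = [m for m, n in (p for p in matches if len(p) == 2)
            if m.distance < 0.7 * n.distance]

    # 4. Estimate a perspective transformation with RANSAC.
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # 5. Transform the frame into the first image's coordinates.
    h, w = first.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))

# Align every burst frame to the first one; crop afterwards to the
# common overlap if needed.
first = cv2.imread("burst_0.jpg")
aligned = [align_to_first(first, cv2.imread(f"burst_{i}.jpg"))
           for i in range(1, 5)]
```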
