I'm trying to stitch together screenshots. The screenshots overlap so that I can correctly locate them in relation to each other.
I've tried stitching them with OpenCV via cv2.Stitcher. This is extensively documented for panoramic photos, but less well covered on Stack Overflow for scans. I tried the code below, but it only works for some subsets of the photos; some consecutive pairs return an error.
import cv2
print(cv2.__version__)  # 4.6.0
# mode=0 (PANORAMA) for photos, mode=1 (SCANS) for scans
stitcher = cv2.Stitcher.create(mode=cv2.Stitcher_SCANS)
imgs = []
for i in range(1, 30):
    imgs.append(cv2.imread(f"pics/{i}.png"))
status, pano = stitcher.stitch(imgs)
if status == cv2.Stitcher_OK:
    cv2.imwrite("stitched.png", pano)
Sometimes some images are simply dropped; other times images are attached diagonally and resized.
Related
I am trying to stitch panoramas using the OpenCV Stitcher Class in Python. The input data consists of partly overlapping images, but sometimes also images showing a sub section of another image, e.g.:
Whenever there are images showing a sub section, the stitching either fails ("Need more images" or "Camera parameters adjusting failed.") or leaves out the image showing the subsection. This also applies to images that are overlapping almost completely (>~90%).
Sufficient features and matches are found, and the homography gets estimated correctly; see this output from the cv2.drawMatches() function:
I have tried different feature detection methods (ORB, AKAZE, SIFT, SURF) as well as tuning various other parameters like bundle adjustment, warp type and confidence (conf_thresh).
It is essential for my application, that all images are included in the panorama, even when they show an area already covered by previous images.
What am I doing wrong? Are there any other methods for generating image stitchings using opencv-python?
"Are there any other methods for generating image stitchings using opencv-python?"
The stitching package builds on opencv-python and, together with its tutorial, offers a lot of insight into the stitching process.
I am using Python 3.5 and OpenCV 3.4.1.
I have a set of 19 images that I need to stitch. They are blurry, and the stitching module is unable to stitch them. I have read this post, but am wondering if I can still find a way to stitch these images. I would appreciate specific suggestions and solutions.
I have tried changing match_conf as recommended by this post. How would I edit it, since the source code states that it is a flag? I tried using the lines below to change match_conf to 0.1, but it does not work and I get the error below.
stitcher = cv2.createStitcher(False)
stitcher.setFeaturesMatcher(detail=BestOf2NearestMatcher(False, 0.1))
result = np.empty(shape=[2048, 2048])
ret, result = stitcher.stitch(imgs, result)
'cv2.Stitcher' object has no attribute 'setFeaturesMatcher'
Check this post out.
Possibly dig into the pipeline and change the OpenCV C++ code.
"This is the full pipeline of the OpenCV stitching code. You can see that there are a lot of parameters you can change to make your code give a good stitching result. Also, I would suggest using a small image (640 x 480) for the feature detection step. Using small images is better than using very large images."
Technically you should be able to change the parameter from Python, but hopefully somebody else knows how to do that.
I am working on a project with multiple cameras, each taking an image; the images are then stitched together. Currently I am trying to use the cv2.createStitcher().stitch(images) function. Below is the code I use:
import cv2
imageFiles = ['imageCapture1_0.png', 'imageCapture2_0.png']
images = []
for filename in imageFiles:
    img = cv2.imread(filename)
    images.append(img)
cv2.ocl.setUseOpenCL(False)
stitcher = cv2.createStitcher()
status, result = stitcher.stitch(images)
cv2.imwrite('result.png', result)
The image input is:
left image:
right image:
However, the result output is None (a NoneType object). From what I have googled, the cause is that there are not enough matching keypoints to stitch the images together. If so, is there a way to stitch the images even with fewer keypoints? Is there a way to set the parameters? I read through the documentation with no luck trying to find a solution. Thank you in advance.
The image stitching operation status, result = stitcher.stitch(images) returns two values: a status indicator and the resulting stitched image. You can check the value of status to determine whether the image stitching operation was a success. From the docs, it can be one of four values:
OK = 0: Image stitching was successful.
ERR_NEED_MORE_IMGS = 1: There were not enough keypoints detected in your input images to construct the panorama. You will need more input images.
ERR_HOMOGRAPHY_EST_FAIL = 2: This error occurs when the RANSAC homography estimation fails. Similarly, you may need more input images or the images provided do not have enough distinguishing features for keypoints to be accurately matched.
ERR_CAMERA_PARAMS_ADJUST_FAIL = 3: Usually related to failing to properly estimate camera features from the input images.
For your situation, you can either add more input images so there are enough keypoints detected, or you can look into writing your own implementation.
I copied and ran your code. It works fine.
Left Image
Right Image
Result
I think the function could not find enough matching points in your pictures. Trying this code on another set of pictures may help.
So I have quite an interesting image segmentation problem. Here, I have scraped Instagram photos which are stacked vertically.
see image here(too long to post): https://imgur.com/a/gPr2J
What I am trying to do is quite simple: I just want to extract each post image from the screenshot and save it to some directory. I have tried various ways to make this work, like cropping by pixel color at a certain height, but none of it works perfectly.
Is there any method that would quickly segment this image? Python, by the way.
I think you should start with segmenting each post out. Use the gaps between each post (which are always uniform) to segment each post out.
Then approach capturing the image inside the post - breaking this down into 2 different problems will make your algorithm simpler in my opinion.
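The gap-based idea above can be sketched with numpy alone: rows whose pixels are nearly uniform are gap rows, and each run of non-uniform rows is a post. The function name and tolerance are illustrative:

```python
import numpy as np

def split_posts(img, tol=2):
    """Split a vertically stacked screenshot into posts, treating rows
    whose pixel values are nearly uniform as the gaps between posts."""
    # per-row spread of pixel values; a small spread means a uniform gap row
    row_spread = np.ptp(img.reshape(img.shape[0], -1), axis=1)
    is_content = row_spread > tol
    posts, start = [], None
    for y, content in enumerate(is_content):
        if content and start is None:
            start = y                      # a post begins
        elif not content and start is not None:
            posts.append(img[start:y])     # a post ends at a gap row
            start = None
    if start is not None:
        posts.append(img[start:])
    return posts
```

Since Instagram's gaps are uniform in color, this avoids any feature detection for the first stage, leaving only the simpler "find the photo inside one post" problem.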
I have a few ideas, not entirely sure how will they work for you, but thought they might give you some leads to try out:
1) All these Instagram images seem to have a "heart"-shaped icon just below the image you want to extract. Maybe detecting the heart shape is a good idea? Once you have found the "heart", you can look for the image just above it. Since it is a UI, my hope is that all the images you want to extract will be a fixed number of pixels above the "heart". Moreover, they should also have the same height and width, I think.
2) Another possible idea is to find the edges in the image. Again, the images you want to extract seem to have a strong edge with respect to their background (but so do text and other UI elements). However, these edges should ideally have the largest area (which is also mostly fixed) enclosed between them. So, after finding the edges, you can use the findContours function in OpenCV and then filter out the contours which have an area greater than a threshold. Have you tried something like this?
I want to sift through a collection of video files looking for a certain logo, and then record the 10-15 seconds leading up to it. I can recognize the logo by checking a certain pixel color.
How would you do it? Is there software or a python package that allows me to extract those chunks of files and write those into a new video?
What I have done so far:
I have found a library that can convert a video into a series of BMPs. It's called pyMedia: http://pymedia.org/tut/src/dump_video.py.html and the reverse: http://pymedia.org/tut/src/make_video.py.html
That's pretty neat. However, it only works with Python 2.3, not with Python 3.
It seems that:
d = e.encode( yuvFrame )
fw.write( d )
writes a BMP file. So how do I look for a certain colored pixel or logo in a BMP file and put it all together? That's what I can't get working somehow. Maybe someone can help me with this.
Edit:
Let me show you what I have done so far:
from PIL import Image

im = Image.open("bride.bmp")
# left/top/width/height describe where the logo appears; adjust to your layout
left = 0
top = 461
width = 10
height = 10
box = (left, top, left + width, top + height)
croppy = im.crop(box)
# mycolor is the logo's color, defined elsewhere;
# getcolors() returns (count, color) pairs, so compare against the colors
if mycolor in [color for _, color in croppy.getcolors()]:
    print("Logo found")
My logo has a certain color, so this looks for the pixel and prints "Logo found" if the pixel color is found. I didn't really want to train a classifier for that.
Using perhaps OpenCV or another package: essentially, you want to train a classifier to identify your logo and then feed it the bitmaps from your video. When it identifies the logo, you trigger the code which captures the previous 15 s of video.
This is a very detailed answer about how one might do this in Python: General approach to developing an image classification algorithm for Dilbert cartoons
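The "capture the previous 10-15 seconds" part can be handled with a rolling buffer of decoded frames. This sketch uses only the standard library; the names, the frame rate, and the detector callback are illustrative stand-ins for the asker's pixel-color check:

```python
from collections import deque

FPS = 25
BUFFER_SECONDS = 15

# Keep only the most recent frames; older ones fall off automatically.
buffer = deque(maxlen=FPS * BUFFER_SECONDS)

def process_frame(frame, logo_found, clips):
    """Buffer every frame; when the logo is detected, snapshot the
    buffered frames (the preceding <= 15 s) as a clip to re-encode."""
    buffer.append(frame)
    if logo_found(frame):
        clips.append(list(buffer))
```

Feeding each decoded frame through this loop means that at detection time the deque already holds exactly the lead-up footage, which can then be written out with whatever encoder is available.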