I'm trying to estimate the transform for some images and stitch them using stitcher.estimateTransform() and stitcher.composePanorama() in Python. After estimating the transform, composePanorama gives the error below:
pano is not a numpy array, neither a scalar
I tried to convert the NumPy array into a Mat object using fromarray(left), but that function only exists in the old cv module, not in cv2. So how can I convert a NumPy array to a Mat in cv2? I can't find any examples of using composePanorama with the Python bindings. Any solution for this error, or an example of using stitcher.estimateTransform() with the OpenCV-Python bindings, would be appreciated.
Note: Although the Stitcher class in the OpenCV-Python bindings is not complete (because of the auto-generated bindings), help(cv2.createStitcher()) shows that it contains composePanorama() and estimateTransform().
Note: I can use stitcher.stitch() without any problems, but stitcher.stitch() does not help me, because I'm trying to avoid recalculating the transform on every iteration of the main loop.
My simple code:
import cv2

# Grab one frame from each camera.
leftStream = cv2.VideoCapture(0)
rightStream = cv2.VideoCapture(1)
left = leftStream.read()[1]
right = rightStream.read()[1]

st = cv2.createStitcher(False)
st.estimateTransform([left, right])
st.composePanorama([left, right])
To use stitcher.estimateTransform() and stitcher.composePanorama() you will need to:
Download the OpenCV source from https://github.com/opencv/opencv
Navigate to opencv-master/modules/stitching/include/opencv2/stitching.hpp
Add CV_WRAP in front of any methods you want to be able to call from Python. In this case those are estimateTransform and composePanorama.
Then build the Python module:
cd ~/opencv
mkdir build
cd build
cmake ../
make
sudo make install
Then move the module into your virtual environment from wherever it was installed. In my case that was /usr/local/lib/python3.7/site-packages/cv2.
See https://www.pyimagesearch.com/2018/08/17/install-opencv-4-on-macos/ and https://docs.opencv.org/4.1.0/da/d49/tutorial_py_bindings_basics.html and https://docs.opencv.org/4.1.1/da/df6/tutorial_py_table_of_contents_setup.html for more info.
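Assuming the rebuild exposed both methods, and that the overload of composePanorama taking an image list is the one you marked with CV_WRAP, the call pattern from Python should look roughly like the sketch below (the exact return signature is produced by the binding generator, so treat this as an approximation; the frame names are placeholders):
import cv2

left = cv2.imread("left.jpg")     # placeholder input frames
right = cv2.imread("right.jpg")

st = cv2.createStitcher(False)

# Estimate the transform once, up front.
status = st.estimateTransform([left, right])

# Reuse the estimated transform for each new pair of frames; output
# arrays become return values in the generated bindings.
status, pano = st.composePanorama([left, right])

if status == cv2.Stitcher_OK:
    cv2.imwrite("pano.jpg", pano)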
I have the same problem. From what I can see, composePanorama has two overloads.
CV_WRAP Status composePanorama(OutputArray pano);
Status composePanorama(InputArrayOfArrays images, OutputArray pano);
It's the second overload that we need, as pano is an output parameter, which in Python is given as a return value. Unfortunately, the second overload is not marked with CV_WRAP, which would make it available to the Python bindings. So the only solutions I can see are:
Use an alternative stitching implementation (a rough homography-based sketch follows after this list)
Go through the C++ code of the missing composePanorama implementation and reimplement it yourself in Python
Register an issue on the OpenCV GitHub and wait for an update
Build OpenCV yourself from source and mark the function with CV_WRAP (I'm not sure it is actually as simple as that)
Work in C++ instead of Python
That said, I'll be very happy if someone else can post an answer showing how to achieve this in Python without going through the complex tasks above.
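For the first two options, a rough homography-based substitute (not the Stitcher pipeline itself, and only reasonable when a single planar homography fits your scene; all names here are placeholders) is to estimate the transform once with feature matching and then reuse it for every frame pair in the main loop:
import cv2
import numpy as np

def estimate_homography(left, right):
    # Estimate a homography mapping the right frame onto the left frame.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(left, None)
    kp2, des2 = orb.detectAndCompute(right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def compose(left, right, H):
    # Warp the right frame with the precomputed homography and paste the
    # left frame over it (no seam blending, just a sketch).
    h, w = left.shape[:2]
    pano = cv2.warpPerspective(right, H, (w * 2, h))
    pano[0:h, 0:w] = left
    return pano

# Estimate once, then reuse H for every new pair of frames in the loop:
# H = estimate_homography(left, right)
# pano = compose(left, right, H)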
Related
I am loading a very large image (60,000 x 80,000 pixels) and am exceeding the maximum number of pixels I can load:
cv2.error: OpenCV(4.2.0) /Users/travis/build/skvark/opencv-python/opencv/modules/imgcodecs/src/loadsave.cpp:75:
error: (-215:Assertion failed) pixels <= CV_IO_MAX_IMAGE_PIXELS in function 'validateInputImageSize'
From what I have found, this is referring to the limitation imposed on line 65. Ideally I'd change that to deal with at least 5-gigapixel images, e.g.:
#define CV_IO_MAX_IMAGE_PIXELS (1<<33)
I have seen some workarounds for this (OpenCV image size limit), but those don't seem to address the real problem, which is an arbitrary definition (I'm working on a high-performance server with 700 GB of RAM, so compute is not an issue).
My issue is that I have no idea where this file is. The error points me towards a "travis" directory which doesn't exist locally for me, and in my local environment the C++ files aren't available.
Any idea where to look for the C++ source?
You have to modify the OpenCV source files and then compile it yourself.
EDIT: You can also set an environment variable (the name OpenCV reads is OPENCV_IO_MAX_IMAGE_PIXELS, not the CV_ macro name):
export OPENCV_IO_MAX_IMAGE_PIXELS=1099511627776
For my problem I should have specified that it was a .tif file (note: most very large images will be in this format anyway). In that case, a very easy way to load it into a NumPy array (so it can then be used with OpenCV) is the tifffile package:
pip install tifffile
This will install it in your Python environment.
import tifffile as tifi
img = tifi.imread("VeryLargeFile.tif")
From here you can use it as you would with any numpy array and it is fully compatible with OpenCV etc.
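As a small follow-on sketch (the 16-bit check is an assumption; large scans are often uint16), here is how the tifffile array drops straight into ordinary cv2/NumPy calls:
import cv2
import numpy as np
import tifffile as tifi

img = tifi.imread("VeryLargeFile.tif")    # plain numpy array

# 16-bit data is common for large scans; scale to 8-bit for routines
# that expect uint8 input.
if img.dtype == np.uint16:
    img = (img / 256).astype(np.uint8)

# Any cv2 call that accepts a numpy array now works; for a quick preview,
# plain numpy slicing keeps things cheap on a 60,000 x 80,000 image.
preview = img[::10, ::10]
cv2.imwrite("preview.png", preview)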
Adding the following to your program should fix the issue in Python OpenCV; note that the variable is set before cv2 is imported.
import os
# Raise the limit to 2**40 pixels before cv2 is imported.
os.environ["OPENCV_IO_MAX_IMAGE_PIXELS"] = str(pow(2, 40))
import cv2
I have seven star-field images taken with a CCD. Their file extension is FIT. I'm trying to align them with Python, but I'm confused: this is my very first attempt at aligning images. I found a few modules related to the alignment of FITS images, but they seem very confusing to me. I need some help.
The APLpy module (https://aplpy.github.io/) does what you need to do.
However, it might not be the most straightforward thing to use for a first-timer.
What I would recommend is PyRAF, a Python wrapper for the IRAF data reduction software developed by NOAO (the National Optical Astronomy Observatory) in the 80s/90s to deal with CCD data reduction.
You can get PyRAF by typing pip install pyraf. Once you have PyRAF, I would recommend following Josh Wallawender's IRAF tutorial; skip to Section V ("Basic Reduction Steps for Imaging Data"). Keep in mind you are using PyRAF, so any IRAF-specific things (Sections I-IV) don't necessarily apply to you. PyRAF is a much easier system to use.
The specific PyRAF tasks you need are imalign and imcombine. You'll also need to give a file with the rough shifts between each image (the help file for imalign is a fantastic resource, by the way; you can access it via epar imalign and clicking the "Help" button when the GUI pops up).
I hope this gives you a starting point. There are other ways to do image combining in python, but astropy is kind of finicky for first-time users.
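If you want to stay in pure Python, a minimal sketch using astropy to read the FIT files plus the astroalign package (an extra dependency, installed with pip install astroalign; file names below are placeholders) would look roughly like this:
import astroalign as aa
import numpy as np
from astropy.io import fits

# Placeholder file names; the first frame serves as the reference.
files = [f"frame{i}.fit" for i in range(1, 8)]
reference = fits.getdata(files[0]).astype(float)

aligned = [reference]
for name in files[1:]:
    image = fits.getdata(name).astype(float)
    # astroalign detects star-like sources and solves for the transform.
    registered, footprint = aa.register(image, reference)
    aligned.append(registered)

# Simple median combine of the aligned frames.
stack = np.median(np.stack(aligned), axis=0)
fits.writeto("combined.fit", stack.astype(np.float32), overwrite=True)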
I am trying to implement an image warp with the ThinPlateSplineShapeTransformer in OpenCV using Python. I am following a C++ example posted in the OpenCV forum (link), but I am encountering various problems due to the differences in the OpenCV Python API.
As in the linked example, I am working with a single image onto which I will define a small number of source points and the corresponding target points. The end result should be a warped copy of the image. The code so far is as follows:
import cv2
import numpy as np

tps = cv2.createThinPlateSplineShapeTransformer()
sourceshape = np.array([[200,10],[400,10]], np.int32)
targetshape = np.array([[250,10],[450,30]], np.int32)
matches = list()
matches.append(cv2.DMatch(1,1,0))
matches.append(cv2.DMatch(2,2,0))
tps.estimateTransformation(sourceshape, targetshape, matches)
But I am getting an error in the estimateTransformation method:
cv2.error: D:\Build\OpenCV\opencv-3.1.0\modules\shape\src\tps_trans.cpp:193: error: (-215)
(pts1.channels()==2) && (pts1.cols>0) && (pts2.channels()==2) && (pts2.cols>0) in function cv::ThinPlateSplineShapeTransformerImpl::estimateTransformation
I can understand that something is incorrect in the data structures I passed to estimateTransformation, and I'm guessing it has to do with the channels, since the rows and columns seem to be correct. But I don't know how I can satisfy the assertion (pts1.channels()==2), since the parameter is an array of points that I am creating, not an array generated from an image load.
I'd be grateful for any pointers to a TPS implementation for image transformation with Python, or indeed any help resolving this particular issue. I've tried to find the Python documentation for the ThinPlateSplineShapeTransformer class, but it has proved impossible: all I've found are the C++ docs, and the only thing I have to go on is the output of the help() function. Apologies if I am missing something obvious.
I had the same problem; simple reshaping solved it. This is late, but someone might find it useful. Here are the lines for reshaping sourceshape and targetshape:
sourceshape=sourceshape.reshape(1,-1,2)
targetshape=targetshape.reshape(1,-1,2)
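Putting it together, a minimal end-to-end sketch (assuming an already loaded image img and four correspondences; the point coordinates are made up, and the argument order follows the commonly cited working example, where estimating target-to-source lets warpImage produce the warped image):
import cv2
import numpy as np

# img = cv2.imread("some_image.png")   # assumed to be loaded already

tps = cv2.createThinPlateSplineShapeTransformer()

sourceshape = np.array([[200, 10], [400, 10], [200, 200], [400, 200]], np.float32)
targetshape = np.array([[250, 10], [450, 30], [200, 210], [410, 220]], np.float32)

# The shape module expects point sets with two channels: shape (1, N, 2).
sourceshape = sourceshape.reshape(1, -1, 2)
targetshape = targetshape.reshape(1, -1, 2)

# One DMatch per correspondence, with zero-based indices.
matches = [cv2.DMatch(i, i, 0) for i in range(sourceshape.shape[1])]

tps.estimateTransformation(targetshape, sourceshape, matches)
warped = tps.warpImage(img)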
Can you check the number of matching points? In your code there are only two matching points, which makes it difficult to interpolate. Maybe you can increase it to four matching points; then it will work.
sourceshape = np.array([[200,10],[400,10]], np.int32)  # add more points here
targetshape = np.array([[250,10],[450,30]], np.int32)  # add more points here
matches = list()
matches.append(cv2.DMatch(1,1,0))
matches.append(cv2.DMatch(2,2,0))
# add more matches here
Here is the effect I am trying to achieve: a user submits an image, then a Python script cycles through each JPEG/PNG in the current working directory looking for a similar image.
Close to how Google image search works (when you submit your image and it returns similar ones). Should I use PIL or OpenCV?
Preferably using Python 3.4, by the way, but Python 2.7 is fine.
Wilson
I mean, why not use both? It's trivial to convert PIL images into OpenCV images and vice-versa, and both have niche functions that can make your life easier. Pair them up with sklearn and numpy, and you're cooking with gas.
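The round trip is only a couple of lines, something like this sketch (the main gotcha is that PIL uses RGB channel order while OpenCV uses BGR; the file name is a placeholder):
import cv2
import numpy as np
from PIL import Image

# PIL -> OpenCV: take a numpy view, then swap RGB to BGR.
pil_img = Image.open("query.jpg").convert("RGB")
cv_img = cv2.cvtColor(np.array(pil_img), cv2.COLOR_RGB2BGR)

# OpenCV -> PIL: swap back to RGB and wrap in an Image.
back_to_pil = Image.fromarray(cv2.cvtColor(cv_img, cv2.COLOR_BGR2RGB))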
I created the undouble library in Python, which seems a match for your issue.
It uses hash functions to detect (near-)identical images in, for example, a directory. It works in a multi-step process: pre-processing the images (grayscaling, normalizing, and scaling), computing the image hash, and grouping images based on a threshold value.
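A sketch of typical usage, going by the package's documented import/compute/group flow (treat the exact parameters as assumptions and check the undouble docs for your version):
from undouble import Undouble

# Initialize with an image-hash method (phash is one of the supported options).
model = Undouble(method='phash', hash_size=8)

# Point it at a directory of images.
model.import_data(r'./path_to_images/')

# Compute the hashes and group (near-)identical images.
model.compute_hash()
model.group(threshold=0)

# Show the groups that were found.
model.plot()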
I'm working with OpenCV for the first time and I'm a little bit confused with data types. I am working with Python.
I see that I can store an image as:
NumPy array
CvMat
IplImage
The problem I'm having is that different parts of OpenCV seem to require different types, and I keep having to convert back and forth, which is very confusing; I'm certain it can't have been designed this way on purpose. I'm also a bit confused about when something should be in the cv module vs cv2.cv:
import cv
import cv2.cv
Can someone explain the logic? It would really help.
Thanks!
cv (or cv2.cv)
is the old OpenCV Python API, based on IplImage and CvMat.
You should not use it anymore; it's being phased out and won't be available in the next version.
cv2 is the new Python API. It uses NumPy arrays for almost everything instead,
so it's easy to combine with SciPy, Matplotlib, and the like (and, by the way, it is much closer to the current C++ API).
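So with cv2 there is usually nothing to convert: imread hands you a NumPy array, and the same object flows straight into NumPy, SciPy, or Matplotlib. A small sketch (file name is a placeholder):
import cv2
import numpy as np

img = cv2.imread("example.png")           # plain numpy ndarray in BGR order
print(type(img), img.shape, img.dtype)    # e.g. <class 'numpy.ndarray'> (h, w, 3) uint8

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)

# Ordinary numpy operations work on the same array.
bright_pixels = np.count_nonzero(gray > 200)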