How to publish/stream the ROI of rectified images using image_proc? - python

I am using ROS to control a drone for real-time image processing applications. I have calibrated the camera using the cameracalibrator.py node in ROS. When I use the image_proc node to compare raw and rectified images, I don't get what I want. Although the image is rectified, the border of the image is distorted in the opposite direction, as in the image below:
As a result, the rectified image is still unusable for me.
So this time I calibrated my camera using OpenCV, which lets me get the region of interest (ROI) of the image after the undistortion operation. With that, the rectified image becomes perfect for my purposes. However, I need ROS to do this while streaming the rectified image through image_proc. Is there any way to do that?

You can directly use the image_proc/crop_decimate nodelet.
You can configure it using dynamic_reconfigure to set up ROI or interpolation.
However, since these are software operations, choose the interpolation method with care in a real-time application (the fastest option, nearest-neighbor, is the default anyway).
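A minimal launch-file sketch for loading the nodelet is below. The topic remappings and ROI values are assumptions for illustration; adapt them to your camera namespace. The same parameters (x_offset, y_offset, width, height) can be changed at runtime through dynamic_reconfigure.

```xml
<launch>
  <!-- nodelet manager to host crop_decimate -->
  <node pkg="nodelet" type="nodelet" name="manager" args="manager"/>

  <node pkg="nodelet" type="nodelet" name="crop"
        args="load image_proc/crop_decimate manager">
    <!-- topic names are assumptions; remap to your rectified stream -->
    <remap from="camera/image_raw"   to="/my_camera/image_rect"/>
    <remap from="camera/camera_info" to="/my_camera/camera_info"/>
    <!-- ROI in pixels; values here are placeholders -->
    <param name="x_offset" type="int" value="40"/>
    <param name="y_offset" type="int" value="30"/>
    <param name="width"    type="int" value="560"/>
    <param name="height"   type="int" value="420"/>
  </node>
</launch>
```

The cropped stream is then published under the nodelet's output namespace (camera_out by default).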

Related

Why does OpenCV Stitcher Class fail for image sub sections?

I am trying to stitch panoramas using the OpenCV Stitcher Class in Python. The input data consists of partly overlapping images, but sometimes also images showing a sub section of another image, e.g.:
Whenever there are images showing a sub section, the stitching either fails ("Need more images" or "Camera parameters adjusting failed.") or leaves out the image showing the subsection. This also applies to images that are overlapping almost completely (>~90%).
There are sufficient features and matches found. The homography gets estimated correctly; see this output from the cv.drawMatches() function:
I have tried different feature detection methods (ORB, AKAZE, SIFT, SURF) as well as tuning various other parameters like bundle adjustment, warp type and confidence (conf_thresh).
It is essential for my application that all images are included in the panorama, even when they show an area already covered by previous images.
What am I doing wrong? Are there any other methods for generating image stitchings using opencv-python?
> Are there any other methods for generating image stitchings using opencv-python?

The stitching package builds on opencv-python and offers detailed insight into the stitching process through its tutorial.

Aruco marker detection with 360 camera?

Recently I have been playing with the 360fly HD camera and wondering whether ArUco markers can be detected in real time. The first thing that came to my mind is to convert the fisheye image into a perspective image first and then perform the detection on the perspective image (I am going to try it and will update my results here later).
Converting a fisheye image into a panoramic, spherical or perspective projection
Hugin HowTo: Convert 360 Image to Cropped Flat Panoramic Image
I am not an expert in this field. Has anyone done this before? Can this be achieved by calibrating the camera differently, e.g. by correcting the camera matrix and distortion coefficients?
If I am heading in the wrong direction, please let me know.
I was able to get a better understanding during the process.
First, I want to say that a 360 (fisheye, spherical, whatever you call it) image is NOT distorted. I was tricked by my intuition into thinking the image was distorted based on how it looks. No, it is not distorted; please see the linked article for more information.
Next, I tried both 360fly cameras and neither works. Every time I tried to access the camera with OpenCV, it automatically powered off and switched to storage mode. I guess the 360fly dev team purposely implemented this switching to prevent "hacking" of their products. But I've seen people successfully hack the 360fly, so it's definitely workable.
At last, I was able to detect ArUco markers with the Ricoh Theta V (the Theta S should also work). It's very developer friendly and I got it running on my first attempt. You just have to select the right camera and let the code run. The only problem is the range, which is as expected (about 6 ft), and the Ricoh camera is kind of expensive ($499).
click here to view successful detection

Remove motion blur with real time performance on camera input

I have drawn a simple pattern of geometrical shapes on a paper and placed it on an object as a marker. I am able to detect and analyze the pattern successfully. However, when the object moves a little faster, motion blur is introduced, which can be rotational or linear. The detected regions then overlap: e.g. a strip of arrows moving in the direction of the arrows is detected as a single line once motion blur sets in. Therefore I need to fix this somehow, so I can detect the individual arrows and analyze them.
Below are images of markers with and without motion blur.
Is there any python module or open source implementation that can be used to solve it?
Motion can be in any direction at any speed, so the PSF is not known, and it is required for the Wiener and Lucy-Richardson methods.
Also, this is a real-time tracking problem, so I need something that executes fast.
P.S. I'm using Python 2.7 and OpenCV 3
This problem can be solved by limiting the exposure time of your camera. In OpenCV you can do this with:
cap.set(cv2.CAP_PROP_EXPOSURE, 40)
or with the v4l2-ctl command-line utility.
The first step is to check whether your camera responds to OpenCV capture properties at all, for example by reading or setting
CAP_PROP_FRAME_WIDTH
CAP_PROP_FRAME_HEIGHT
to verify that property access works.
The second step is to set CAP_PROP_EXPOSURE:
cap.set(cv2.CAP_PROP_EXPOSURE, 40)
The value can be adjusted as needed to avoid motion blur.

How to align multiple camera images using opencv

Imagine someone taking a burst shot with a camera: he will have multiple images, but since no tripod or stand was used, the images will be slightly different.
How can I align them so that they overlay neatly, and crop out the edges?
I have searched a lot, but most of the solutions either do a 3D reconstruction or use MATLAB,
e.g. https://github.com/royshil/SfM-Toy-Library
Since I'm very new to OpenCV, I would prefer an easy-to-implement solution.
I have generated many datasets by manually rotating and cropping images in MS Paint, but any link to corresponding datasets (slightly rotated and translated images) would also be helpful.
EDIT: I found a solution here:
http://www.codeproject.com/Articles/24809/Image-Alignment-Algorithms
which gives close approximations to rotation and translation vectors.
How can I do better than this?
It depends on what you mean by "better" (accuracy, speed, low memory requirements, etc.). One classic approach is to align each frame #i (with i ≥ 2) to the first frame, as follows:
Local feature detection, for instance via SIFT or SURF (link)
Descriptor extraction (link)
Descriptor matching (link)
Alignment estimation via perspective transformation (link)
Transform image #i to match image 1 using the estimated transformation (link)

How can I use PIL to crop a select area based on face detection?

Hi, I want to use the Python Imaging Library to crop images to a specific size for a website. I have a problem: these images are meant to show people's faces, so I need to crop automatically based on them.
I know face detection is a difficult concept so I'm thinking of using the face.com API http://developers.face.com/tools/#faces/detect which is fine for what I want to do.
I'm just a little stuck on how I would use this data to crop a select area based on the majority of faces.
Can anybody help?
Joe
There is a library for Python that has a concept of smart cropping and, among other options, can use face detection to do smarter cropping.
It uses OpenCV under the hood, but you are isolated from it.
https://github.com/globocom/thumbor
If you have some rectangle that you want to excise from an image, here's what I might try first:
(optional) If the image is large, do a rough square crop centered on the face, with sides sqrt(2) times the longer edge of the face rectangle. Worst case (a 45° rotation), it will still capture everything important.
Rotate based on the face orientation (something like rough_crop.rotate(math.degrees(math.atan(ydiff/xdiff))); trig is fun)
Do a final crop. If you did the initial crop, the face should be centered, otherwise you'll have to transform (rotate) all your old coordinates to the new image (more trig!).
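The steps above can be sketched with PIL as follows. The box and eye-coordinate formats are assumptions about what your face detector returns (face.com-style APIs give a bounding box plus eye positions); the function and parameter names are hypothetical.

```python
import math
from PIL import Image

def crop_face(img, box, eyes=None, out_size=(200, 200)):
    """Rotate so the eyes are level, then crop around `box`.

    `box` is (left, top, right, bottom) from the face detector;
    `eyes` is ((x_left, y_left), (x_right, y_right)), both assumptions
    about the detector's output format.
    """
    if eyes is not None:
        (xl, yl), (xr, yr) = eyes
        angle = math.degrees(math.atan2(yr - yl, xr - xl))
        # Rotate about the face centre so `box` stays meaningful.
        cx = (box[0] + box[2]) / 2
        cy = (box[1] + box[3]) / 2
        img = img.rotate(angle, center=(cx, cy))
    return img.crop(box).resize(out_size)
```

Using atan2 instead of atan(ydiff/xdiff) avoids a division-by-zero when the eyes are vertically aligned, and rotating about the face centre means the detector's box can be reused for the final crop without re-transforming coordinates.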
