Aruco marker pose estimation on curved surface - python

I assume the pose estimate from aruco markers is only valid when they are affixed to a flat surface. What about attaching them to a slightly curved surface? Is there any way to correct for the surface curvature in the resulting pose?

Yes, you should be able to get the pose estimate for a curved surface using an Aruco board, though it may be physically difficult to construct and measure. Aruco boards do not need to be planar; they can describe any arrangement of markers in 3D space. So the following steps should work:
1. Attach markers to your curved surface (which may be a challenge if the surface is not developable).
2. Calculate, or directly measure, the 3D positions of the physical markers' corners in your preferred Cartesian coordinate system.
3. Define an Aruco board using the markers' 3D corner positions and IDs.
4. Tune the Aruco detection parameters (at least adaptive threshold and polygonal approximation) to give robust marker detection in the presence of curvature in the marker edges and localised lighting variations due to the curved surface.
5. Once marker detection is reliable, use estimatePoseBoard() to get the pose estimate of the board, and hence the curved surface, in the same Cartesian coordinate system. estimatePoseBoard() finds a best-fit solution for the pose while considering all visible markers simultaneously.
Note: I haven't actually tried this.
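For concreteness, here is a minimal sketch of those steps, assuming the pre-4.7 cv2.aruco API (Board_create, DetectorParameters_create; in OpenCV 4.7+ the equivalents are cv2.aruco.Board and cv2.aruco.ArucoDetector). All corner coordinates, file names and parameter values below are placeholders to be replaced with your own measurements:

```python
import cv2
import numpy as np

# Hypothetical: one (4, 3) float32 array of corner positions per marker,
# corners ordered clockwise from the top-left, measured on the curved
# surface in your chosen units (here metres).
obj_points = [
    np.array([[0.00, 0.00, 0.000],
              [0.02, 0.00, 0.002],
              [0.02, 0.02, 0.002],
              [0.00, 0.02, 0.000]], dtype=np.float32),
    # ... one entry per marker on the surface
]
ids = np.array([[0]], dtype=np.int32)  # marker IDs, same order as obj_points

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
board = cv2.aruco.Board_create(obj_points, dictionary, ids)

# Loosen detection to tolerate curved marker edges; these values are
# guesses to be tuned for your setup.
params = cv2.aruco.DetectorParameters_create()
params.polygonalApproxAccuracyRate = 0.05
params.adaptiveThreshWinSizeMax = 45

image = cv2.imread("frame.png")               # hypothetical input frame
camera_matrix = np.load("camera_matrix.npy")  # from prior calibration
dist_coeffs = np.load("dist_coeffs.npy")

corners, found_ids, _ = cv2.aruco.detectMarkers(image, dictionary,
                                                parameters=params)
rvec = np.zeros((3, 1))
tvec = np.zeros((3, 1))
# n_used is the number of markers that contributed to the pose fit.
n_used, rvec, tvec = cv2.aruco.estimatePoseBoard(corners, found_ids, board,
                                                 camera_matrix, dist_coeffs,
                                                 rvec, tvec)
```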

Get a transparent colormap in 3D with Python

I have been looking for a Python library that would allow me, from a set of 3D points with an RGB color associated with each point, to get a transparent surface (i.e., one with some degree of transparency). The idea would be to be able to display (and manipulate/rotate) things similar to the image below:
The atoms and bonds are inside a 3D surface that is smooth and constructed from a series of points, each with an RGB color.
I could get a rough Poisson reconstruction running with Mayavi, but the colors appeared very pixelated and I couldn't find a way to make the surface transparent. I could obtain many of the features I wanted with Open3D (I actually place these objects inside crystal structures, so I need to represent bonds, atoms, crystal edges, axes and so on), but here again I couldn't find a Poisson reconstruction algorithm to recreate the smooth surface from the points, nor any functionality to make a surface transparent. Any suggestion would be appreciated.
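For what it's worth, recent Open3D releases do include Poisson surface reconstruction (TriangleMesh.create_from_point_cloud_poisson). Below is a minimal sketch, using synthetic sphere data as a stand-in for your points, that reconstructs the mesh and renders it with per-face colour and transparency via matplotlib:

```python
import numpy as np
import open3d as o3d
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

# Demo data: random points on a sphere, coloured by position.
rng = np.random.default_rng(0)
points = rng.normal(size=(2000, 3))
points /= np.linalg.norm(points, axis=1, keepdims=True)
colors = (points + 1) / 2

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.colors = o3d.utility.Vector3dVector(colors)
pcd.normals = o3d.utility.Vector3dVector(points)  # outward normals for a sphere

# Poisson reconstruction; Open3D carries the point colours through
# to mesh.vertex_colors.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=6)
verts = np.asarray(mesh.vertices)
tris = np.asarray(mesh.triangles)
vcols = np.asarray(mesh.vertex_colors)

# Render with matplotlib, which supports per-face colour plus transparency.
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
poly = Poly3DCollection(verts[tris], alpha=0.3)   # the transparency knob
poly.set_facecolor(vcols[tris].mean(axis=1))      # average vertex colours per face
ax.add_collection3d(poly)
ax.auto_scale_xyz(verts[:, 0], verts[:, 1], verts[:, 2])
plt.show()
```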

pyopengl How to color concave polygons

I have graphics like this. When filling the shape with color, the concave places always get filled flat; I would like the fill to follow the outline of the figure.
Don't draw a polygon. Draw many triangles, which combine to make the polygon.
Of course, you can generate the triangles in any way you like, such as with pencil and paper and a ruler and typing in the coordinates.
For this polygon, you might find GL_TRIANGLE_FAN useful, since not much change is needed: add one new vertex at the beginning of the vertices, in the middle of the polygon, so that every other vertex can see the one you added. OpenGL will generate triangles radiating outwards from the central point to all the edge vertices. This is convenient for polygons like yours which are "mostly convex". The triangles will look like this:
If the polygon wasn't "mostly convex", you would most likely need to use GL_TRIANGLES which allows you to split it up however you like.
Another option is to draw an alpha-textured quad with alpha test turned on. The GPU will draw a square and ignore the pixels in the corners where the alpha is 0. Since your polygon is "almost a square", the wasted work of calculating and discarding those pixels could be less than the extra work of processing many more triangles; there's no way to perfectly predict that. With this approach the corner shape would be pixel-based instead of triangle-based, so it would get blocky if you zoomed in.
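As a minimal sketch of the triangle-fan approach, assuming PyOpenGL with a legacy (immediate-mode) context already set up, and with hypothetical vertex data:

```python
from OpenGL.GL import (GL_TRIANGLE_FAN, glBegin, glColor3f, glEnd, glVertex2f)

def draw_mostly_convex_polygon(outline, center, color=(0.8, 0.2, 0.2)):
    """Fill `outline` (ordered (x, y) boundary vertices) as a triangle fan.

    `center` must be a point inside the polygon that can "see" every
    boundary vertex -- this is the extra vertex added at the start.
    """
    glColor3f(*color)
    glBegin(GL_TRIANGLE_FAN)
    glVertex2f(*center)            # fan centre
    for x, y in outline:
        glVertex2f(x, y)
    glVertex2f(*outline[0])        # repeat first boundary vertex to close the fan
    glEnd()
```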

The best fit triangle in cv2

TLDR: I am looking to fit a triangle to a boomerang-shaped object in order to detect its "head", potentially using python's opencv.
I have a collection of boomerang-shaped objects (see image below), whose size and internal angles vary. Additionally, sometimes the "boomerangs" (unlike a real boomerang) can be asymmetric with one leg longer than the other, and can have defects and holes along their legs.
I can accurately extract the contours of these shapes, and now am trying to detect the direction the boomerang is facing (defined as the direction of the "pointy" edge, the one marked by brown dots in the image below).
My plan so far was to use opencv's convex defects method to detect the internal angle, and from there detect the direction. However, my "boomerangs" are not perfect - they sometimes have holes and defects along their legs that confuse the convex defect algorithm.
My question is: is there a way to find the best-fit triangle (much like the best fit ellipse) that would fit the boomerang?
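For what it's worth, OpenCV does ship cv2.minEnclosingTriangle(), which computes the minimum-area triangle enclosing a point set, much as cv2.fitEllipse() does for ellipses. A minimal sketch applying it to an extracted contour (the file name and threshold are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread("boomerang.png")                    # hypothetical input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
contour = max(contours, key=cv2.contourArea)         # the boomerang outline

# Minimum-area triangle enclosing the contour (cf. cv2.fitEllipse).
area, triangle = cv2.minEnclosingTriangle(contour)
triangle = triangle.reshape(3, 2)

# Draw the fitted triangle over the shape.
for i in range(3):
    p1 = tuple(map(int, triangle[i]))
    p2 = tuple(map(int, triangle[(i + 1) % 3]))
    cv2.line(img, p1, p2, (0, 255, 0), 2)
```

One heuristic for the "head" would then be the triangle vertex that lies closest to the contour itself, since the pointy elbow tends to hug its enclosing corner more tightly than the leg tips do.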

Fundamental understanding of tvecs rvecs in OpenCV-ArUco

I want to use ArUco to find the "space coordinates" of a marker.
I have problems understanding the tvecs and rvecs. I have figured out that the tvecs are the translation and the rvecs the rotation. But how are they oriented, in what order are they written in the code, and how do I interpret their orientation?
I have a camera (a laptop webcam, drawn just to illustrate its orientation) at position X,Y,Z; the camera's orientation can be described by angle a around X, angle b around Y, and angle c around Z (angles in radians).
So if my camera is stationary, I would take different pictures of the ChArUco boards and give the camera calibration algorithm the tvecs_camerapos (Z,Y,X) and the rvecs_camerapos (c,b,a). I get back the cameraMatrix, distCoeffs and tvecs_cameracalib, rvecs_cameracalib. t/rvecs_camerapos and t/rvecs_cameracalib are different, which I find weird.
Is this naming/ordering of t/rvecs correct?
Should I use camerapos or cameracalib for pose estimation if the camera does not move?
I think t/rvecs_cameracalib is negligible because I am only interested in the intrinsic parameters of the camera calibration algorithm.
Now I want to find the X,Y,Z position of the marker: I use aruco.estimatePoseSingleMarkers with t/rvecs_camerapos and retrieve t/rvecs_markerpos. The tvecs_markerpos don't match my expected values.
Do I need a transformation of t/rvecs_markerpos to find X,Y,Z of the Marker?
Where is my misconception?
OpenCV routines that deal with cameras and camera calibration (including AruCo) use a pinhole camera model. The world origin is defined as the centre of projection of the camera model (where all light rays entering the camera converge), the Z axis is defined as the optical axis of the camera model, and the X and Y axes form an orthogonal system with Z. +Z is in front of the camera, +X is to the right, and +Y is down. All AruCo coordinates are defined in this coordinate system. That explains why your "camera" tvecs and rvecs change: they do not define your camera's position in some world coordinate system, but rather the markers' positions relative to your camera.
You don't really need to know how the camera calibration algorithm works, other than that it will give you a camera matrix and some lens distortion parameters, which you use as input to other AruCo and OpenCV routines.
Once you have calibration data, you can use AruCo to identify markers and return their positions and orientations in the 3D coordinate system defined by your camera, with correct compensation for the distortion of your camera lens. This is adequate to do, for example, augmented reality using OpenGL on top of the video feed from your camera.
The tvec of a marker is the translation (x,y,z) of the marker from the origin; the distance unit is whatever unit you used to define your printed calibration chart (i.e., if you described your calibration chart to OpenCV using mm, then the distance unit in your tvecs is mm).
The rvec of a marker is a 3D rotation vector which defines both an axis of rotation and the rotation angle about that axis, and gives the marker's orientation. It can be converted to a 3x3 rotation matrix using the Rodrigues function (cv::Rodrigues()). It is either the rotation which transforms the marker's local axes onto the world (camera) axes, or the inverse -- I can't remember, but you can easily check.
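As a minimal sketch of that conversion (values hypothetical), using OpenCV's convention that rvec/tvec map marker-frame points into the camera frame:

```python
import cv2
import numpy as np

# rvec, tvec: pose of one marker as returned by the ArUco pose routines
# (hypothetical values here).
rvec = np.array([0.1, -0.2, 0.05])
tvec = np.array([0.0, 0.05, 0.50])

R, _ = cv2.Rodrigues(rvec)            # rvec -> 3x3 rotation matrix

# A point p_m in the marker's local frame maps into the camera frame as
#   p_c = R @ p_m + tvec
# so the marker's origin sits at tvec in camera coordinates:
marker_origin_cam = R @ np.zeros(3) + tvec

# The inverse transform gives the camera's position in the marker frame:
cam_pos_marker = -R.T @ tvec
```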
In my understanding, the camera coordinate system is the reference frame of the 3D world. The rvec and tvec are the transformations used to get the position of any other 3D point (in the world reference frame) with respect to the camera coordinate system, so together they are the extrinsic parameters [R|t]. The intrinsic parameters are generally derived from calibration. Now, if you want to project any other 3D point given in the world reference frame onto the image plane, you first need to transform that point into the camera coordinate system and then project it onto the image to get a correct perspective:
Point in image plane: s (u, v, 1)^T = [intrinsic] [extrinsic] [3D point, 1]^T
The reference coordinate system is the camera; rvec and tvec together give the 6-DOF pose of the marker with respect to the camera.
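A minimal sketch of that projection using cv2.projectPoints, which applies [R|t] and then the intrinsics in one call (all numbers below are hypothetical):

```python
import cv2
import numpy as np

# Hypothetical inputs: intrinsics from calibration, extrinsics from
# a marker pose estimate.
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)            # assume no lens distortion
rvec = np.array([0.1, -0.2, 0.05])
tvec = np.array([0.0, 0.05, 0.50])

# A 3D point given in the marker (world) frame; projectPoints applies
# [R|t] first, then the intrinsics -- exactly the equation above.
point_3d = np.array([[0.02, 0.0, 0.0]])
img_pts, _ = cv2.projectPoints(point_3d, rvec, tvec,
                               camera_matrix, dist_coeffs)
u, v = img_pts.ravel()               # pixel coordinates of the projection
```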

Draw a curve joining a set of points in opencv python

I have a set of points extracted from an image. I need to join these points to form a smooth curve. After drawing the curve on the image, I need to find the tangent to the curve and represent it on the image. I looked at cv2.approxPolyDP, but it already requires a curve as input.
You can build a polyline if the order of the points is defined. Then it is possible to simplify this polyline with the Douglas-Peucker algorithm (if the number of points is too large). Then you can construct some kind of spline interpolation to create a smooth curve.
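A minimal sketch of the spline approach using SciPy's splprep/splev (the points and canvas below are made-up demo data); the spline's derivative also answers the tangent part of the question:

```python
import cv2
import numpy as np
from scipy.interpolate import splprep, splev

# pts: the extracted points, assumed ordered along the curve.
pts = np.array([[10, 200], [60, 120], [140, 90], [230, 110], [300, 180]],
               dtype=float)

# Fit a smoothing B-spline through the points.
tck, _ = splprep([pts[:, 0], pts[:, 1]], s=2.0)
u = np.linspace(0.0, 1.0, 200)
x, y = splev(u, tck)

img = np.zeros((256, 320, 3), dtype=np.uint8)    # blank demo canvas
curve = np.stack([x, y], axis=1).astype(np.int32)
cv2.polylines(img, [curve], isClosed=False, color=(0, 255, 0), thickness=2)

# The spline's first derivative gives the tangent direction at any u.
dx, dy = splev(0.5, tck, der=1)                  # tangent at the midpoint
angle = np.degrees(np.arctan2(dy, dx))
```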
If your question is related to the points being extracted in random order, the tool you need is probably the so-called 2D alpha-shape. It is a generalization of the convex hull and will let you trace the "outline" of your set of points, and from there perform interpolation.
