I am trying to rotate a VTK camera around its focal point, the aim being to 'orbit' the model.
I'm using the camera.SetPosition(x, y, z) call to set the camera location, and I know I can call it again at each update period in my render window.
The focal point is at (0, 0, 0), and a bounding-box query gives me my initial camera (x, y, z) location. The distance from the focal point (0, 0, 0) to the camera location (x, y, z) is the radius of the sphere.
In my head, this is essentially moving the camera in steps around the point (0, 0, 0), and I am presuming there is a maths function I could feed my current camera point to work out my next camera location.
This should result in the model appearing to spin in space. My camera view is offset from all the x, y, z planes, making it a 3D problem rather than a 2D one. However, I do want my camera to remain at the same distance from the model (the focal point).
What I am trying to achieve is like this: take a pencil (my model is long and narrow). Hold it in your fingertips at arm's length, tip pointing to the ceiling. Tilt the pencil by ~30 degrees. This is the camera start position. Rotate the pencil body in your fingers, maintaining the tilt angle and the distance from your eye.
This post looks helpful: 'Plotting a point on the edge of a sphere'. However, it assumes you know the radius to get to the initial x, y location.
Could someone point me towards the maths I need to do this? My maths is horribly rusty.
It seems what you want is to rotate a vector about an axis. This is most easily done with a rotation matrix.
So, if your desired axis of rotation is tilted 30 degrees from the z axis in the zx plane, the axis of rotation is (sin(pi/6), 0, cos(pi/6)). Increment the rotation angle, plug it into the rotation matrix to get a matrix R, and the new camera position vector will be R*(x, y, z)'.
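For concreteness, a minimal numpy sketch of that rotation (Rodrigues' formula; the starting position and step angle below are placeholder values):

import numpy as np

def rotation_matrix(axis, angle):
    # Rodrigues' formula: matrix for a rotation of `angle` radians
    # about the unit vector `axis`.
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    kx, ky, kz = axis
    K = np.array([[0.0, -kz,  ky],
                  [ kz, 0.0, -kx],
                  [-ky,  kx, 0.0]])
    return (np.eye(3) * np.cos(angle)
            + np.sin(angle) * K
            + (1.0 - np.cos(angle)) * np.outer(axis, axis))

axis = (np.sin(np.pi / 6), 0.0, np.cos(np.pi / 6))  # tilted 30 degrees from z
pos = np.array([10.0, 0.0, 5.0])                    # placeholder camera position
step = np.radians(2.0)                              # orbit increment per frame
pos = rotation_matrix(axis, step) @ pos
# then e.g. camera.SetPosition(*pos) at each update, focal point at (0, 0, 0)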
Start off with the points (±1, 0, 0), (0, ±1, 0), (0, 0, ±1). These form two pyramids (an octahedron) with all six points on the unit sphere.
Now you can take the midpoint of each triangle and project it out so that it lies on the unit sphere too. This gives you 3 new triangles for each triangle, and you can repeat the process.
The alternative to the triangle midpoint is to take the midpoints of each side and join them up. That gives 3 new points that can be projected out to the unit sphere, and 4 triangles for each subdivision.
Repeat as many times as you need.
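A short numpy sketch of the edge-midpoint variant (starting from the octahedron and pushing each new point out to radius 1):

import numpy as np

def subdivide(triangles):
    # Split each triangle into 4 via its edge midpoints, each midpoint
    # projected onto the unit sphere.
    out = []
    for a, b, c in triangles:
        ab = (a + b) / np.linalg.norm(a + b)
        bc = (b + c) / np.linalg.norm(b + c)
        ca = (c + a) / np.linalg.norm(c + a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

# The two pyramids (an octahedron): one triangle per octant.
X, Y, Z = np.eye(3)
tris = [(sx * X, sy * Y, sz * Z)
        for sx in (1, -1) for sy in (1, -1) for sz in (1, -1)]
for _ in range(3):  # each round multiplies the triangle count by 4
    tris = subdivide(tris)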
I have an image from which I want to extract the pixels of a specific part. This part is a quarter circle, and I want to obtain the pixels of that region.
I have the coordinates of the center and of the points where the lines meet the circle. How can I extract one quarter and ignore the other parts?
The disk has the equation (X-Xc)²+(Y-Yc)²≤R².
The half-planes meeting at the center have equations c(X-Xc) + s(Y-Yc) ≥ 0, where c and s are the cosine and sine of the angle of the corresponding bounding ray.
Hence, scan the image (or just the bounding box of the circle) and consider the pixels (X, Y) such that the three constraints are satisfied.
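For a quarter specifically, the two bounding rays are 90 degrees apart, and the scan can be vectorized; a sketch with made-up center, radius, angles, and image size:

import numpy as np

Xc, Yc, R = 120.0, 100.0, 50.0        # hypothetical center and radius
a0 = np.radians(30.0)                 # angle of the first bounding ray
a1 = a0 + np.pi / 2                   # the second ray, 90 degrees later

H, W = 240, 320                       # image size (example)
Y, X = np.mgrid[0:H, 0:W]
dx, dy = X - Xc, Y - Yc

inside_disk = dx**2 + dy**2 <= R**2
# One half-plane per ray: c*(X-Xc) + s*(Y-Yc) >= 0.
half0 = np.cos(a0) * dx + np.sin(a0) * dy >= 0
half1 = np.cos(a1) * dx + np.sin(a1) * dy >= 0

mask = inside_disk & half0 & half1    # True exactly on the quarter disk
# pixels = image[mask], assuming `image` is a numpy array of the same shape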
I have a camera with a width × height resolution and these camera parameters:
[[position_x, position_y, position_z],
[focal_point_x, focal_point_y, focal_point_z],
[view_up_x, view_up_y, view_up_z]]
Say I am given a point on the camera image plane with coordinates (x, y), where (0, 0) is the top-left corner and (width-1, height-1) the bottom-right corner.
How do I get the world coordinates of my point (x, y) from such camera and image data?
I'll try to share some highlights that I found in an old book (reference at the end of this answer).
First of all, you can't map your (x, y) image coordinate directly to a world z coordinate, because it's a one-to-many problem: one pixel corresponds to every point along a line in z. However, there are many studies and algorithms for estimating this z coordinate, some from a single perspective image and others from two (stereo vision).
But if you already have the world Z coordinate, you can compute X and Y with the book's perspective-projection formula,
where:
(x,y) = image coordinates of some point
(X0,Y0,Z0) = camera world coordinates
(X,Y,Z) = world coordinates of some point
α = angle between x and X axis
θ = angle between z and Z axis
λ = focal length
I'm not an expert in robotics, but maybe this can clarify your path. The figures and symbols come from the book below (page 313).
Reference book: Fu, K. S.; Gonzalez, R. C.; Lee, C. S. G. Robotics: Control, Sensing, Vision and Intelligence. McGraw-Hill, 1987.
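As a sketch of the same idea with a plain pinhole model rather than the book's exact formula (the vertical field of view fov_y_deg is an assumed parameter, and the camera is described by the position/focal point/view-up triple from the question):

import numpy as np

def pixel_to_world(px, py, width, height, fov_y_deg,
                   position, focal_point, view_up, Z_known):
    # Back-project pixel (px, py) onto the plane Z = Z_known.
    position = np.asarray(position, dtype=float)
    forward = np.asarray(focal_point, dtype=float) - position
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, np.asarray(view_up, dtype=float))
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)

    # Pixel -> normalized device coordinates; (0, 0) is the top-left corner.
    ndc_x = (px + 0.5) / width * 2.0 - 1.0
    ndc_y = 1.0 - (py + 0.5) / height * 2.0

    # Direction of the viewing ray through the pixel.
    tan_half = np.tan(np.radians(fov_y_deg) / 2.0)
    direction = (forward
                 + right * ndc_x * (width / height) * tan_half
                 + up * ndc_y * tan_half)

    # Intersect the ray with the plane Z = Z_known.
    if abs(direction[2]) < 1e-12:
        return None                     # ray parallel to the plane
    t = (Z_known - position[2]) / direction[2]
    return position + t * direction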
This problem is in 3D space.
There is a rectangle, defined by 4 vertices. We rotate it around one of its sides.
There is a triangle, defined by 3 vertices.
Over a full 360-degree rotation, will the rectangle ever intersect or touch the triangle?
If so, what is the angle of rotation at which intersection first occurs? And what is the point of this first intersection?
After thinking about this for a while, it seems like there are 3 main cases:
triangle vertex touches rectangle surface
triangle surface touches rectangle vertex
triangle edge touches rectangle edge
And there are 2 unlikely cases where the two are perpendicular when they intersect:
rectangle edge hits triangle surface
rectangle surface hits triangle edge
However, identifying these cases hasn't really gotten me closer to a solution. I'm hoping someone can point me in the right direction. I want to solve it fast for a small number of rectangles against a large number of triangles.
Context: the larger problem I'm trying to solve is I want to wrap a rectangle around a closed polygonal mesh. I wish to do this step by step by rotating the rectangle until it intersects, then rotating the remaining rectangle around the intersection point, etc.
When you rotate a rectangle around one of its sides, you get a cylinder. Intersect each of the triangle's edges with the cylinder. The positions of the intersection points give you the rotation angles. Since this doesn't catch the case where the triangle is completely contained within the cylinder, also test whether each vertex's distance to the cylinder's axis is smaller than the cylinder's radius.
Say your rectangle has the vertices A to D. You want to rotate around the side AB. The radius of your cylinder is then r = |AD|.
First, transform the coordinates so that the rectangle is placed with the side that you want to rotate about along the z axis and the adjacent side along the x axis.
A′ = {M} · A = {0, 0, 0}
B′ = {M} · B = {0, 0, |AB|}
D′ = {M} · D = {r, 0, 0}
Apply the same transformation {M} to the vertices of the triangle.
Now find the intersections of all three sides of the triangle with the cylinder. Because the cylinder is aligned to the z axis, the problem separates into two subproblems: (1) find any intersections with the top and bottom faces at z == 0 and z == |AB|; (2) find the intersections with the "coat" (lateral surface) of the cylinder, which reduces to intersecting a line with a circle in the xy plane.
You can then calculate the rotation angles from the x and y coordinates of these points as atan2(y, x).
If you need the coordinates of the intersection points in the original coordinates, don't forget to undo the transformation.
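A sketch of subproblem (2) in the transformed frame, for one triangle edge running from P′ to Q′ (the quadratic comes from substituting the segment into x² + y² = r²):

import numpy as np

def coat_intersection_angles(P, Q, r, h):
    # Intersections of the segment P + t*(Q - P), t in [0, 1], with the
    # cylinder coat x^2 + y^2 = r^2, 0 <= z <= h; returns atan2 angles.
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    d = Q - P
    a = d[0]**2 + d[1]**2
    b = 2.0 * (P[0] * d[0] + P[1] * d[1])
    c = P[0]**2 + P[1]**2 - r**2
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return []                       # edge parallel to the axis, or a miss
    angles = []
    for t in sorted(((-b - np.sqrt(disc)) / (2 * a),
                     (-b + np.sqrt(disc)) / (2 * a))):
        hit = P + t * d
        if 0.0 <= t <= 1.0 and 0.0 <= hit[2] <= h:
            angles.append(np.arctan2(hit[1], hit[0]))
    return angles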
Can anyone please explain if it is possible, and if so how, to work with cv2.getPerspectiveTransform()?
I have 3D information about my image: I know the lengths of a and b and also the different heights of c, d, e, f and g. I made the heights different to get more 3D information, but if it isn't needed, that would be preferable.
Ultimately I need to know where the pink dot really is in the rectangle after applying the transform to the [x, y] position I get from the camera feed.
If you denote by C,D,E,F the positions of the four corners of the black polygon in the original image (each of them is a 2D point), and C',D',E',F' the positions of the corresponding points in your target image (probably (0,0), (a, 0), (a, b), (0, b)), M = cv2.getPerspectiveTransform({C,D,E,F}, {C',D',E',F'}) is the perspective transformation from one polygon to the other.
Given the position G of the vertical projection of g onto the black polygon in the original image, you can compute its position in the target image as cv2.transform(G, M). This will return a point (x,y,z), where the last coordinate z is a normalizing term. This z is zero when your point would be "at infinity" in the target image. If z is not zero, the point you are looking for is (x/z, y/z).
If z is zero, your point is at infinity, in the direction of the support of vector (x, y) (think of the case where G would be at the intersection of the supporting lines of two opposite sides of the black polygon in the source image).
If you know that the heights of c,d,e,f,g are equal, these points are also coplanar, and the exact same method applies to c,d,e,f,g instead of C,D,E,F,G.
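A usage sketch of the above (all pixel coordinates and dimensions below are made up):

import cv2
import numpy as np

# Hypothetical corner positions of the black polygon in the source image.
src = np.float32([[412, 285], [880, 302], [845, 650], [370, 622]])  # C, D, E, F
a_len, b_len = 400, 300                      # example rectangle dimensions
dst = np.float32([[0, 0], [a_len, 0], [a_len, b_len], [0, b_len]])

M = cv2.getPerspectiveTransform(src, dst)

G = np.float32([[[500, 400]]])               # hypothetical projection of g
x, y, z = cv2.transform(G, M)[0, 0]          # unnormalized homogeneous result
if z != 0:
    print(x / z, y / z)                      # the point inside the a-by-b rectangle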
I have been working on extracting a time series from shapes, based on the distances to the center of mass, clockwise starting from angle 0 to 360.
My implementation, which arranges contour points by their angle to the [1, 0] vector, might be good for some shapes but is not useful for shapes that have much articulation. Consider the following code:
import os
import numpy as np
from PIL import Image, ImageFilter as ifilter

im = Image.open(os.path.join(path, filename)).convert('L')
im = im.filter(ifilter.MedianFilter)             # denoise before edge extraction
contim = np.array(im.filter(ifilter.CONTOUR))    # contour filter, then to a numpy array
contim = contim[1:-1, 1:-1]                      # the image borders are extracted as contours too; crop them
contpts = np.where(contim == 0)                  # contour pixels come out black
contpts = np.vstack((contpts[0], contpts[1]))    # just need to arrange these points clockwise with respect to the center of mass of the shape
Can anyone direct me to how I can extract that feature from any shape, starting from a point and following the contour to collect all the distances to the center of mass of the shape?
For more information about the feature, please view this paper: "LB_Keogh Supports Exact Indexing of Shapes under Rotation Invariance with Arbitrary Representations and Distance Measures"
If I understood, there's a geometrical figure in a discretized plane, represented as a matrix. If an entry is 1, you're inside the figure; if it's 0, you're outside. He wants to determine the distance between the edge of the figure and the center of the figure for all points on the edge. He parametrized it with a polar coordinate system: the center of the figure is the origin, and he wants the distance to the border as a function of the angle. This is what he calls his "time series".
Is this correct?
If yes, couldn't you just:
1. determine the center of mass,
2. reposition the origin to match the center of mass.
3. start angle at 0
4. r = 0
5. for each angle in [0,1,...,360]
1. If you're inside the figure, increase r until you reach the border.
2. If you're outside the figure, decrease r until you reach the border.
3. When you reach the border, d(angle) = r
If the figure has a more or less continuous border, this will follow the contour.
Would this work?
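A rough sketch of that loop in numpy, assuming a binary mask with 1 inside the figure and the center of mass lying inside the shape:

import numpy as np

def radial_series(mask):
    # mask: 2D numpy array, 1 inside the figure, 0 outside.
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()            # center of mass
    rmax = int(np.hypot(*mask.shape))
    series = []
    for deg in range(360):
        t = np.radians(deg)
        r = 0
        while r < rmax:                      # increase r until we leave the figure
            y = int(round(cy + r * np.sin(t)))
            x = int(round(cx + r * np.cos(t)))
            if not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]) \
                    or mask[y, x] == 0:
                break
            r += 1
        series.append(r - 1)                 # d(angle) = last radius still inside
    return series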