I have the following situation:
One point located on the Earth's surface with 3D coordinates (X, Y, Z), and one camera inside an airplane that has taken a picture of the surface. For the camera, I also have the 3D coordinates (X, Y, Z) for the exact moment the image was taken.
For this scenario I need to calculate the light reflection angle between the point on the Earth's surface and the camera inside the airplane.
I would like suggestions or ideas on how to calculate this angle. I know that a possible solution will use analytical geometry.
I have calculated the sun incidence angle at the point on the surface using the PVLIB library, but I can't find a function in pvlib to determine the light reflection angle.
Thanks for your help!
I suppose that you used the sun elevation and azimuth angles to calculate the sun incidence vector with a formula such as (assuming azimuth is measured as [N=0 / E=90 / S=180 / W=270]):
Vx_s = sin(sun_azim) * cos(sun_elev)
Vy_s = cos(sun_azim) * cos(sun_elev)
Vz_s = sin(sun_elev)
Considering a light reflection on a flat surface (horizontal, with its normal vector pointing to the zenith), the vector of the reflected light (the forward, specular ray, not considering scattering/dispersion, i.e. a mirror-like surface) will be:
Vx_r = sin(sun_azim + 180) * cos(sun_elev)
Vy_r = cos(sun_azim + 180) * cos(sun_elev)
Vz_r = sin(sun_elev)
The vector from the surface point to the airplane camera is:
Vx_p = X_plane - X_surface
Vy_p = Y_plane - Y_surface
Vz_p = Z_plane - Z_surface
Then the angle between the reflected ray and the airplane camera is (taking into account that the surface-to-plane vector is not a unit vector in this example):
alpha = arccos( (Vx_p*Vx_r + Vy_p*Vy_r + Vz_p*Vz_r) / sqrt(Vx_p**2 + Vy_p**2 + Vz_p**2) )
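For reference, a minimal numpy sketch of the whole computation, assuming the surface and camera coordinates are expressed in the same local east/north/up frame as the azimuth convention above (the function and variable names are just placeholders):

import numpy as np

def reflection_view_angle(sun_azim_deg, sun_elev_deg, surface_xyz, camera_xyz):
    az = np.radians(sun_azim_deg)
    el = np.radians(sun_elev_deg)
    # Specular reflection off a horizontal surface: azimuth flipped by 180 deg,
    # elevation preserved (this is already a unit vector)
    v_r = np.array([np.sin(az + np.pi) * np.cos(el),
                    np.cos(az + np.pi) * np.cos(el),
                    np.sin(el)])
    # Vector from the surface point to the camera (not unit length)
    v_p = np.asarray(camera_xyz, dtype=float) - np.asarray(surface_xyz, dtype=float)
    cos_alpha = v_r.dot(v_p) / np.linalg.norm(v_p)
    return np.degrees(np.arccos(np.clip(cos_alpha, -1.0, 1.0)))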
I would like to plot a 2D plane of a set area (e.g. a 10x10 square or a circle of diameter 5) in 3D space using plotly so I can include it in my dash app. I would also like to be able to rotate and position the plane in 3D space using a point on the plane (in the centre of the plane) and a normal vector to the plane (i.e. a line joining a point in space and the point on the plane).
I have tried using np.meshgrid to create an xy grid and then used the linear-algebra equation of a plane to calculate the z points. I then used figure.add_surface(x, y, z) to plot the plane.
To get the meshgrid, I used the point in the centre of the plane and added a 'spread' to its x and y coordinates in the positive and negative directions.
import numpy as np

# Point in space
x_iso = 10
y_iso = 10
z_iso = 10

# Point in the centre of the required plane
x_pt = 50
y_pt = 50
z_pt = 50

# Components of the plane's normal vector (point in space -> plane centre)
delta_x = x_pt - x_iso
delta_y = y_pt - y_iso
delta_z = z_pt - z_iso

# Spread for the xy mesh
spread = 10

# Creating the x, y mesh
xx, yy = np.meshgrid(np.linspace(x_pt - spread, x_pt + spread),
                     np.linspace(y_pt - spread, y_pt + spread))

# Corresponding z of the xy mesh from the point-normal equation of the plane
zz = (delta_x*x_pt + delta_y*y_pt + delta_z*z_pt - delta_x*xx - delta_y*yy) / delta_z

# CT_figure is an existing plotly figure
CT_figure.add_surface(x=xx, y=yy, z=zz, showscale=False, showlegend=False)
This method works and plots a plane in 3D. The issue is that the area of the plane depends on the xy mesh and on the orientation of the plane relative to the xy plane.
https://imgur.com/aGlcMfv
For a plane almost parallel to the z axis, even for a small spread in x or y the plane stretches out a lot in the z direction because of its angle. Also, if the plane is parallel to an axis, one of delta_x, delta_y, delta_z is zero, and in the case of delta_z the division blows up.
I need to be able to set the area and shape of the plane and then rotate it in space using the point in the middle of the plane and an arbitrary point in space, which defines a normal vector to the plane when joined to the point on the plane.
EDIT:
I have come to another solution, which is to generate a plane of the required shape in the xy plane and break it down into points (x, y, z), where z is just a height off the xy plane.
Then I perform a point rotation by the required angles for all points on that plane. It seems to work fine; I'm currently cleaning up the code and will then try to implement it for its intended use. I will post an update once finished. Still open to other suggestions!
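For reference, here is a minimal numpy-only sketch of that approach (the function name and parameters are illustrative): it builds a square patch in the xy plane, rotates it so that its normal (0, 0, 1) aligns with the normal defined by the two points (via the cross-product form of Rodrigues' rotation formula), and translates it to the centre point.

import numpy as np

def plane_patch(center, iso_point, spread=10, n=50):
    # Normal vector from the point in space to the plane centre
    normal = np.asarray(center, dtype=float) - np.asarray(iso_point, dtype=float)
    normal /= np.linalg.norm(normal)

    # Flat square patch in the xy plane, centred at the origin
    u = np.linspace(-spread, spread, n)
    xx, yy = np.meshgrid(u, u)
    pts = np.stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)], axis=1)

    # Rotation that maps (0, 0, 1) onto the desired normal
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(z, normal)
    c = np.dot(z, normal)
    if np.linalg.norm(v) < 1e-12:          # already aligned (or exactly opposite)
        R = np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    else:
        vx = np.array([[0, -v[2], v[1]],
                       [v[2], 0, -v[0]],
                       [-v[1], v[0], 0]])
        R = np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))

    rotated = pts @ R.T + np.asarray(center, dtype=float)
    shape = xx.shape
    return (rotated[:, 0].reshape(shape),
            rotated[:, 1].reshape(shape),
            rotated[:, 2].reshape(shape))

The returned arrays can be fed straight to add_surface(), e.g. xx, yy, zz = plane_patch((x_pt, y_pt, z_pt), (x_iso, y_iso, z_iso)) followed by CT_figure.add_surface(x=xx, y=yy, z=zz). This keeps the patch's size fixed regardless of its orientation and avoids the division by delta_z.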
Goal
I need to retrieve the position and attitude angles of a camera (using OpenCV / Python).
Definitions
Attitude angles are defined by:
Yaw being the general orientation of the camera when it lies on a horizontal plane: toward north = 0°, toward east = 90°, south = 180°, west = 270°, etc.
Pitch being the "nose" orientation of the camera: 0° = looking horizontally at a point on the horizon, -90° = looking down vertically, +90° = looking up vertically, 45° = looking up at an angle of 45° from the horizon, etc.
Roll being whether the camera is tilted left or right when held in your hands (so it is always looking at a point on the horizon while this angle varies): +45° = tilted 45° clockwise when you hold the camera, so +90° (and -90°) would be the angle needed for a portrait picture, for example, etc.
World reference frame:
My world reference frame is oriented so:
Toward east = +X
Toward north = +Y
Up toward the sky = +Z
My world object points are given in that reference frame.
Camera reference frame:
According to the doc, the camera reference frame is oriented with +X to the right, +Y downward and +Z forward along the optical axis.
What to achieve
Now, from cv2.solvePnP() over a bunch of image points and their corresponding world coordinates, I have computed both rvec and tvec.
But, according to the doc: http://docs.opencv.org/trunk/d9/d0c/group__calib3d.html#ga549c2075fac14829ff4a58bc931c033d , they are:
rvec – Output rotation vector (see Rodrigues()) that, together with tvec, brings points from the model coordinate system to the camera coordinate system.
tvec – Output translation vector.
So these vectors are given to go to the camera reference frame.
I need to do the exact inverse operation, thus retrieving the camera position and attitude relative to world coordinates.
Camera position:
So I have computed the rotation matrix from rvec with Rodrigues():
rmat = cv2.Rodrigues(rvec)[0]
And if I'm right here, the camera position expressed in the world coordinate system is given by:
camera_position = -np.matrix(rmat).T * np.matrix(tvec)
(src: Camera position in world coordinate from cv::solvePnP )
This seems to work fairly well.
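For reference, the same computation with plain numpy arrays instead of np.matrix, assuming rvec and tvec come straight from solvePnP:

import cv2
import numpy as np

rmat, _ = cv2.Rodrigues(rvec)                # 3x3 rotation (world -> camera)
camera_position = (-rmat.T @ tvec).ravel()   # camera centre in world coordinates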
Camera attitude (yaw, pitch and roll):
But how do I retrieve the corresponding attitude angles (yaw, pitch and roll as described above) from the point of view of the camera (as if it were in your hands, basically)?
I have tried implementing this: http://planning.cs.uiuc.edu/node102.html#eqn:yprmat in a function:
def rotation_matrix_to_attitude_angles(R):
    import math
    import numpy as np
    cos_beta = math.sqrt(R[2,1] * R[2,1] + R[2,2] * R[2,2])
    validity = cos_beta < 1e-6
    if not validity:
        alpha = math.atan2(R[1,0], R[0,0])    # yaw   [z]
        beta  = math.atan2(-R[2,0], cos_beta) # pitch [y]
        gamma = math.atan2(R[2,1], R[2,2])    # roll  [x]
    else:
        alpha = math.atan2(R[1,0], R[0,0])    # yaw   [z]
        beta  = math.atan2(-R[2,0], cos_beta) # pitch [y]
        gamma = 0                             # roll  [x]
    return np.array([alpha, beta, gamma])
But the results are not consistent with what I expect. For example, I get a roll angle of about -90°, but the camera is horizontal, so it should be around 0°.
The pitch angle is around 0°, so it seems correctly determined, but I don't really understand why it is around 0°: the Z-axis of the camera reference frame is horizontal, so it has already been tilted by 90° from the vertical axis of the world reference frame. I would have expected a value of -90° or +270° here. Anyway.
And yaw seems good, mostly.
Question
Did I miss something with the roll angle?
The order of Euler rotations (pitch, yaw, roll) is important.
According to the x-convention, the 3-1-3 extrinsic Euler angles φ, θ and ψ (around the z-axis first, then the x-axis, and then the z-axis again) can be obtained as follows:
sx = math.sqrt((R[2,0])**2 + (R[2,1])**2)
tolerance = 1e-6
if sx > tolerance:  # no singularity
    alpha = math.atan2(R[2,0], R[2,1])
    beta  = math.atan2(sx, R[2,2])
    gamma = -math.atan2(R[0,2], R[1,2])
else:
    alpha = 0
    beta  = math.atan2(sx, R[2,2])
    gamma = 0
But this is not a unique solution. For example, for the ZYX order:
sy = math.sqrt((R[0,0])**2 + (R[1,0])**2)
tolerance = 1e-6
if sy > tolerance:  # no singularity
    alpha = math.atan2(R[1,0], R[0,0])
    beta  = math.atan2(-R[2,0], sy)
    gamma = math.atan2(R[2,1], R[2,2])
else:
    alpha = 0
    beta  = math.atan2(-R[2,0], sy)
    gamma = math.atan2(-R[1,2], R[1,1])
I think your transformation is missing a rotation. If I interpret your question correctly, you are asking for the inverse of (rotation by $\hat{R}$ followed by translation by $\vec{T}$):
$\{\hat{R}|\vec{T}\}\,\vec{r} = \hat{R}\vec{r} + \vec{T}$
The inverse should return the identity:
$\{\hat{R}|\vec{T}\}^{-1}\{\hat{R}|\vec{T}\} = \{\hat{1}|\vec{0}\}$
Working this through yields:
$\{\hat{R}|\vec{T}\}^{-1} = \{\hat{R}^{-1}\,|\,-\hat{R}^{-1}\vec{T}\}$
As far as I can tell, you were using the $-\hat{R}^{-1}\vec{T}$ part of the answer (undoing the translation) but leaving out the inverse rotation $\hat{R}^{-1}$.
To summarize:
Rotation + translation: $\{\hat{R}|\vec{T}\}\,\vec{r} = \hat{R}\vec{r} + \vec{T}$
Inverse of (rotation + translation): $\{\hat{R}|\vec{T}\}^{-1}\,\vec{r} = \hat{R}^{-1}\vec{r} - \hat{R}^{-1}\vec{T}$
In non-LaTeX form: R^-1 * r - R^-1 * T is the inverse of R * r + T.
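As a small numpy sketch of that inverse (assuming R is cv2.Rodrigues(rvec)[0] and T is tvec from solvePnP, i.e. the world-to-camera transform):

import numpy as np

def invert_rigid_transform(R, T):
    # Return (R_inv, T_inv) such that R_inv @ (R @ r + T) + T_inv == r
    R_inv = R.T              # the inverse of a rotation matrix is its transpose
    T_inv = -R.T @ T
    return R_inv, T_inv

Applying the inverse to the origin of the camera frame gives the camera position in world coordinates, i.e. T_inv == -R.T @ T, which matches the camera_position formula used above.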
In this link: http://planning.cs.uiuc.edu/node102.html#eqn:yprmat they assume a different coordinate system for the object than the one of your camera.
They define:
Roll - rotation around x (in your case it is around z)
Pitch - rotation around y (in your case it is around x)
Yaw - rotation around z (in your case it is around y)
To get the right conversion, you need to recompose the full rotation matrix from the three angles with the axes remapped accordingly. For the reverse conversion you then get:
cos_beta = math.sqrt(R[0,2] * R[0,2] + R[2,2] * R[2,2])
alpha = math.atan2(R[0,2], R[2,2])      # yaw
beta  = math.atan2(-R[1,2], cos_beta)   # pitch
gamma = math.atan2(R[1,0], R[1,1])      # roll
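Wrapped up as a drop-in replacement for the earlier function (the function name is just illustrative):

import math
import numpy as np

def rotation_matrix_to_attitude_angles_cam(R):
    # Yaw, pitch, roll with the axes remapped as described above
    cos_beta = math.sqrt(R[0, 2]**2 + R[2, 2]**2)
    alpha = math.atan2(R[0, 2], R[2, 2])     # yaw
    beta  = math.atan2(-R[1, 2], cos_beta)   # pitch
    gamma = math.atan2(R[1, 0], R[1, 1])     # roll
    return np.array([alpha, beta, gamma])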
I am using a stereo system, so I am trying to get the world coordinates of some points by triangulation.
My cameras are at an angle; the Z-axis direction (the depth direction) is not normal to my surface. That is why, when I observe a flat surface, I do not get a constant depth but a "linear" variation, correct? And I want the depth measured along the baseline direction... How can I re-project?
A piece of my code with my projection matrices and the triangulation call:
# C1 and C2 are the camera matrices (left and right)
# R_0 and T_0 are the transformation between the cameras
# Coord1 and Coord2 are the corresponding coordinates in the left and right images
P1 = np.dot(C1, np.hstack((np.identity(3), np.zeros((3, 1)))))
P2 = np.dot(C2, np.hstack((R_0, T_0)))

for i in range(Coord1.shape[0]):
    z = cv2.triangulatePoints(P1, P2, Coord1[i,], Coord2[i,])
-------- EDIT LATER -----------
Thanks scribbleink, I tried to apply your proposal. But I think I have a mistake, because it doesn't work well, as you can see below. The point cloud seems to be warped and curved towards the edges of the image.
import cv2
import numpy as np
from numpy import linalg

U, S, Vt = linalg.svd(F)
V = Vt.T

# Right epipole
U[:,2]/U[2,2]

# The expected X direction, with C1 the camera matrix and C1[0,0] the focal length
vecteurX = np.array([(U[:,2]/U[2,2])[0], (U[:,2]/U[2,2])[1], C1[0,0]])
vecteurX_unit = vecteurX/np.sqrt(vecteurX[0]**2 + vecteurX[1]**2 + vecteurX[2]**2)

# The expected Y axis:
height = 2048
vecteurY = np.array([0, height - 1, 0])
vecteurY_unit = vecteurY/np.sqrt(vecteurY[0]**2 + vecteurY[1]**2 + vecteurY[2]**2)

# The expected Z direction:
vecteurZ = np.cross(vecteurX, vecteurY)
vecteurZ_unit = vecteurZ/np.sqrt(vecteurZ[0]**2 + vecteurZ[1]**2 + vecteurZ[2]**2)

# Normal of the optical Z axis (the current Z direction)
Zopitcal = np.array([0,0,1])

cos_theta = np.arccos(np.dot(vecteurZ_unit, Zopitcal)/np.sqrt(vecteurZ_unit[0]**2 + vecteurZ_unit[1]**2 + vecteurZ_unit[2]**2)*np.sqrt(Zopitcal[0]**2 + Zopitcal[1]**2 + Zopitcal[2]**2))
sin_theta = (np.cross(vecteurZ_unit, Zopitcal))[1]

# Definition of the Rodrigues vector and use of cv2.Rodrigues to get the rotation matrix
v1 = Zopitcal
v2 = vecteurZ_unit
v_rodrigues = v1*cos_theta + (np.cross(v2,v1))*sin_theta + v2*(np.cross(v2,v1))*(1. - cos_theta)
R = cv2.Rodrigues(v_rodrigues)[0]
Your expected Z direction is arbitrary with respect to the reconstruction method. In general, you have a rotation matrix that rotates the left camera away from your desired orientation. You can easily build that matrix, R. Then all you need to do is multiply your reconstructed points by the transpose of R.
To add to fireant's response, here is one candidate solution, assuming that the expected X direction coincides with the line joining the centres of projection of the two cameras.
1. Compute the focal lengths f_1 and f_2 (via pinhole model calibration).
2. Solve for the location of camera 2's epipole in camera 1's frame. For this, you can use either the Fundamental matrix (F) or the Essential matrix (E) of the stereo camera pair. Specifically, the left and right epipoles lie in the nullspace of F, so you can use Singular Value Decomposition. For a solid theoretical reference, see Hartley and Zisserman, second edition, Table 9.1 "Summary of fundamental matrix properties" on page 246 (freely available PDF of the chapter).
3. The centre of projection of camera 1, i.e. (0, 0, 0), and the location of the right epipole, i.e. (e_x, e_y, f_1), together define a ray that aligns with the line joining the camera centres. This can be used as the expected X direction. Call this vector v_x.
4. Assume that the expected Y axis faces downward in the image plane, i.e. from (0, 0, f_1) to (0, height - 1, f_1), where f_1 is the focal length. Call this vector v_y.
5. The expected Z direction is now the cross product of vectors v_x and v_y.
6. Using the expected Z direction along with the optical axis (Z-axis) of camera 1, you can then compute a rotation matrix from two 3D vectors using, say, the method listed in this other stackoverflow post. A rough sketch of these steps is given below.
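Here is a rough numpy sketch of steps 2 to 6 (the names are illustrative; it assumes the OpenCV convention x2^T F x1 = 0, so the epipole seen by camera 1 is the right null vector of F, and that points is an (N, 3) array of triangulated points in camera 1's frame):

import numpy as np

def align_points_with_baseline(F, C1, points, height):
    # Epipole of camera 2 in camera 1's image: the right null vector of F,
    # i.e. the last row of Vt in the SVD of F
    U, S, Vt = np.linalg.svd(F)
    e = Vt[-1] / Vt[-1, 2]

    f1 = C1[0, 0]                               # focal length of camera 1
    v_x = np.array([e[0], e[1], f1])            # ray towards camera 2's centre
    v_y = np.array([0.0, height - 1.0, 0.0])    # downward direction in the image plane
    v_z = np.cross(v_x, v_y)                    # expected Z direction

    # Orthonormal basis of the desired frame, expressed in camera 1's frame
    v_x = v_x / np.linalg.norm(v_x)
    v_z = v_z / np.linalg.norm(v_z)
    v_y = np.cross(v_z, v_x)                    # re-orthogonalise Y

    R = np.vstack([v_x, v_y, v_z])              # rows = new axes in the old frame
    return points @ R.T                         # point coordinates in the new frame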
Practical note:
In my practical experience, getting the planar object to align exactly with the stereo baseline is unlikely without considerable effort. Some amount of plane fitting and an additional rotation would be required.
One-time effort:
If you only need to do this once, e.g. for a one-time calibration, simply make this estimation process real-time, then rotate your stereo camera pair until the depth-map variance is minimized. Then lock your camera positions and pray nobody bumps into them later.
Repeatability:
If you need to keep aligning your estimated depth maps to truly arbitrary Z-axes that change for every new frame captured, then you should consider investing time in the plane-estimation method and making it more robust.
I'm struggling to work out how to move point A to point B in an arc motion in 3D. The duration of the movement doesn't matter so much. I have found a load of Wikipedia pages on it but am having no luck understanding them, as it's been a long time since I was in college. Any code examples would be really useful for me to understand. Thank you, I would really appreciate your help. Here is an image that sort of shows what I am looking to achieve; although the image only represents the points in 2D, I am looking for a 3D solution.
Assuming your problem statement is:
Given points a and b, trace the circular path along the plane which lies tangent to the up vector:
And that you have the appropriate vector algebra libraries:
def interp(a, b, up, t):
    """0 <= t <= 1"""
    # find center and radius vector
    center = (a + b) / 2
    radius = a - center
    # split path into upwards and downwards section
    omega = math.acos(radius.dot(up))  # angle between center-a and center-top
    t_top = omega / math.pi            # time taken to reach the top
    # redefine 0 as A, 1 as the top, and B as whatever remains linear
    t = t / t_top
    # slerp, with t intentionally > 1
    sin = math.sin
    return (
        center +
        sin((1 - t) * omega) / sin(omega) * radius +
        sin(t * omega) / sin(omega) * up
    )
It doesn't matter if it's 2D or 3D.
You take the position of each dot and find the center between them.
The distance between the center and each dot is the radius.
After that, give the object a moving direction and tell it to always stay at a distance of radius from the center. With a moving vector you can give it any direction you want.
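A minimal numpy sketch of that idea (the normal argument is an extra assumption here: it picks the plane in which the arc bulges):

import numpy as np

def arc_point(a, b, normal, t):
    # Point on the semicircular arc from a to b, for 0 <= t <= 1
    a, b, normal = (np.asarray(v, dtype=float) for v in (a, b, normal))
    center = (a + b) / 2.0
    u = a - center                       # radius vector pointing at a
    radius = np.linalg.norm(u)
    # Second in-plane axis, perpendicular to u and to the arc plane's normal
    w = np.cross(normal, u)
    w = w / np.linalg.norm(w) * radius
    theta = np.pi * t                    # 0 -> at a, pi -> at b
    return center + np.cos(theta) * u + np.sin(theta) * w

Sweeping t from 0 to 1 in small steps yields positions along the arc; the same function works in 2D if you embed the points in 3D with z = 0.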
How can I make a smooth circular orbit at a constant speed using pygame?
How would I calculate x, y on a circle?
Rotating about the 2d point center with the given radius and speed.
The parameter t is the time in units of seconds.
import math

def circular_orbit(center, radius, speed, t):
    theta = math.fmod(t * speed, math.pi * 2)
    c = math.cos(theta)
    s = math.sin(theta)
    return center[0] + radius * c, center[1] + radius * s
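A minimal usage sketch in a pygame loop, reusing circular_orbit from above (the window size, colours and speed are arbitrary choices):

import pygame

pygame.init()
screen = pygame.display.set_mode((400, 400))
clock = pygame.time.Clock()
t = 0.0
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    t += clock.tick(60) / 1000.0          # elapsed time in seconds
    x, y = circular_orbit((200, 200), 100, 2.0, t)
    screen.fill((0, 0, 0))
    pygame.draw.circle(screen, (255, 255, 255), (round(x), round(y)), 5)
    pygame.display.flip()
pygame.quit()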
Try using polar coordinates. It's natural :)
If you don't calculate enough frames to make your orbit look smooth, calculate 3-4 intermediate points of orbit to draw shorter line segments, without calculating the game state at these points. Make this radius-dependent. This helps proper collision detection, too.
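For example, a small helper along those lines (the name is hypothetical; it reuses circular_orbit from the answer above) that draws a few short segments between two frame times instead of one long line:

import pygame

def draw_orbit_segment(surface, color, center, radius, speed, t_prev, t_now, steps=4):
    # Sample a few intermediate orbit positions between the two frame times
    points = [circular_orbit(center, radius, speed,
                             t_prev + (t_now - t_prev) * i / steps)
              for i in range(steps + 1)]
    pygame.draw.lines(surface, color, False, points)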