Theory behind Wolfenstein-style 3D rendering - python
I'm currently working on a project about 3D rendering, and I'm trying to make a simple program that can display a basic 3D room (static shading, no player movement, only rotation) with pygame
So far I've worked through the theory:
Start with a list of coordinates for the X and Z of each "Node"
Nodes are kept in an order that forms a closed loop, so that each consecutive pair of nodes forms the two ends of a wall
The height of each wall is determined when it is rendered, relative to its distance from the camera
Walls are rendered using painter's algorithm, so closer objects are drawn on top of further ones
For shading, "fake contrast" brightens/darkens each wall based on the gradient between its two nodes
While it seems simple enough, the process of translating the 3D coordinates into 2D points on the screen is proving difficult for me to understand.
Googling this topic has so far only yielded these equations:
screenX = (worldX/worldZ)
screenY = (worldY/worldZ)
These seem flawed to me, as you would get a divide-by-zero error if any Z coordinate is 0.
So if anyone could help explain this, I'd be really grateful.
Well, the
screenX = (worldX/worldZ)
screenY = (worldY/worldZ)
is not the whole story: it is just the perspective division by z, and it is not what DOOM- or Wolfenstein-style techniques use.
In Doom there is only a single viewing angle (you can turn left/right but cannot look up/down; you can only duck or jump, which is not the same). So we need to know the player position and direction px,py,pz,pangle. The z is needed only if you also want to implement z-axis movement/looking...
If you are looking along a straight line (red), all objects that cross that line in 3D are projected to a single x coordinate on the player's screen. So if we are looking in some direction (red), any object/point crossing/touching this red line will be placed at the center of the screen (in the x axis). Whatever is left of it will be rendered on the left, and similarly whatever is on the right will be rendered on the right...
With perspective we need to define how large a viewing angle (FOVx) we have...
This limits our view, so any point touching the green line will be projected on the edge of the view (in the x axis). From this we can compute the screen x coordinate sx of any point (x,y,z) directly:
// angle of point relative to player direction
sx = point_ang - pangle;
if (sx<-M_PI) sx+=2.0*M_PI;
if (sx>+M_PI) sx-=2.0*M_PI;
// scale to pixels
sx = screen_size_x/2 + sx*screen_size_x/FOVx;
where screen_size_x is the resolution of our view area and point_ang is the angle of point x,y,z relative to the origin px,py,pz. You can compute it like this:
point_ang = atan2(y-py,x-px)
but if you are truly doing DOOM-style ray-casting then you already have this angle.
Now we need to compute the screen y coordinate sy, which depends on the distance from the player and the wall size. We can exploit triangle similarity.
so:
sy = screen_size_y/2 (+/-) wall_height*focal_length/distance
where focal_length is the distance at which a wall with 100% height covers exactly the whole screen in the y axis. As you can see, we are dividing by distance, which might be zero. That state must be avoided, so make sure your rays are evaluated at the next cell when you are standing directly on a cell boundary. We also need to select the focal length so that a square wall is projected as a square.
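Before the full engine code below, here is a minimal Python sketch of just these two formulas put together (the 60-degree FOV, screen size and function name are illustrative assumptions, not from any particular engine):

import math

SCREEN_W, SCREEN_H = 640, 480
FOV_X = math.radians(60)                         # horizontal field of view
FOCAL = (SCREEN_W/2)/math.tan(FOV_X/2)           # [px] chosen so squares stay roughly square

def world_to_screen(x, y, z, px, py, pz, pangle):
    # angle of the point relative to the player's facing direction
    a = math.atan2(y - py, x - px) - pangle
    a = (a + math.pi) % (2.0*math.pi) - math.pi  # wrap to [-pi,+pi]
    if abs(a) >= 0.5*math.pi:
        return None                              # behind the player
    # screen x: map [-FOVx/2,+FOVx/2] onto [0,SCREEN_W]
    sx = SCREEN_W/2 + a*SCREEN_W/FOV_X
    # perpendicular distance (the cos(a) factor avoids fish-eye, see below)
    dist = math.hypot(x - px, y - py)*math.cos(a)
    if dist < 1e-6:
        return None                              # avoid the divide by zero
    # screen y by triangle similarity: the offset shrinks with distance
    sy = SCREEN_H/2 + (pz - z)*FOCAL/dist
    return sx, sy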
Here is a piece of code from my Doom engine (putting it all together):
double divide(double x,double y)
{
if ((y>=-1e-30)&&(y<=+1e-30)) return 0.0;
return x/y;
}
bool Doom3D::cell2screen(int &sx,int &sy,double x,double y,double z)
{
double a,l;
// x,y relative to player
x-=plrx;
y-=plry;
// convert z from [cell] to units
z*=_Doom3D_cell_size;
// angle -> sx
a=atan2(y,x)-plra;
if (a<-pi) a+=pi2;
if (a>+pi) a-=pi2;
sx=double(sxs2)*(1.0+(2.0*a/view_ang));
// perpendicular distance -> sy
l=sqrt((x*x)+(y*y))*cos(a);
sy=sys2+divide((double(plrz+_Doom3D_cell_size)-z-z)*wall,l);
// in front of player?
return (fabs(a)<=0.5*pi);
}
where:
_Doom3D_cell_size=100; // [units] cell cube size
view_ang=60.0*deg; // FOVx
focus=0.25; // [cells] view focal length (uncorrected)
wall=double(sxs)*(1.25+(0.288*a)+(2.04*a*a))*focus/double(_Doom3D_cell_size); // [px] projected wall size ratio: size = height*wall/distance
sxs,sys = screen resolution
sxs2,sys2 = screen half resolution
pi=M_PI, pi2=2.0*M_PI
Do not forget to use perpendicular distances (multiplied by cos(a) as I did), otherwise a serious fish-eye effect will occur; a short sketch of that correction follows the link below. For more info see:
Ray Casting with different height size
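To see the fish-eye correction in isolation, here is a minimal Python sketch (dx,dy is the vector from the player to the point and a its angle relative to the view direction, as above; the function name is illustrative):

import math

def perpendicular_distance(dx, dy, a):
    # the raw Euclidean distance bends straight walls into curves near the
    # screen edges; projecting onto the view direction (cos(a)) keeps them straight
    return math.hypot(dx, dy)*math.cos(a)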
Related
how to find change point of ball trajectory points after collision
I first track the path of the ball in the image. The scatter points in the image are the x and y coordinates of the ball. I want to know the point where the ball hits the ground and the stick: in D1 the ball hits the ground, and in D2 the ball hits the stick. In both cases the ball changes direction and angle. How can I find the point where the angle and direction change?
I wrote this code to find the angle between two points, but it does not give the correct output:
v1_theta = math.atan2(y1, x1)
v2_theta = math.atan2(y2, x2)
degree = (v2_theta - v1_theta) * (180.0 / math.pi)
where x1, y1 is the previous position of the ball and x2, y2 is the current position.
The curve to the right of D1 suggests that you are modelling 2D motion with gravity. If we assume that all of the time intervals are the same, then it looks as if the ball hits the stick first, then the ground. So we divide the trajectory into three parts: the path before the stick, the path between the stick and the ground, and the path after bouncing off the ground.
If we disregard the transitions (i.e. the bounces), the only force is gravity. (It is worth checking for air resistance, but if there is any, it appears to be in the noise and negligible.) Take the second derivative to measure gravity, then fit parabolae to those three paths. Solve for the intersections. (At each intersection you will have two velocities, so if you want to find the angle between them, you can use atan.)
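As a rough Python sketch of that idea (the synthetic data and the split indices i1, i2 are placeholders; in practice you would use your tracked positions and detect the segment boundaries):

import numpy as np

# synthetic stand-in for the tracked ball heights, sampled at equal intervals
t = np.arange(60, dtype=float)
y = np.where(t < 25, -0.5*(t - 10)**2 + 200,
    np.where(t < 45, -0.5*(t - 32)**2 + 150, -0.5*(t - 50)**2 + 100))

# split indices between the three segments (placeholders)
i1, i2 = 25, 45
segs = [(t[:i1], y[:i1]), (t[i1:i2], y[i1:i2]), (t[i2:], y[i2:])]

# fit a parabola y(t) = a*t^2 + b*t + c to each segment (gravity => degree 2)
fits = [np.polyfit(ts, ys, 2) for ts, ys in segs]

def intersection_times(p1, p2):
    # the bounce is where consecutive parabolae meet: solve p1(t) - p2(t) = 0
    roots = np.roots(np.polysub(p1, p2))
    return roots[np.isreal(roots)].real

print("first bounce near t =", intersection_times(fits[0], fits[1]))
print("second bounce near t =", intersection_times(fits[1], fits[2]))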
Taking the coordinates of an object and creating a formula to drag an arrow
I am using OpenCV to triangulate the position of an object, and am trying to create some kind of formula that takes the coordinates I obtain and produces the position to drag a pull arrow to, casting a fishing rod. I tried using polynomial regression to a very high degree, but it is still inaccurate, because the regression cannot map an (x,y) input to an (x,y) output, just an x input to an x output etc. I have attached screenshots below for clarity, alongside the formulas I obtained from the regression. Any help/ideas/suggestions would be appreciated, thanks.
Edit: The xy coordinates are organized from the landing position to the position the arrow was pulled to for the bobber to land there. This is because the fishing blob is the input, and the arrow pull end location comes from the blob location. I am using OpenCV to obtain the x,y coordinates, which I believe are just in the x,y coordinate system of the 2D screen. The avatar position is locked, and the button to cast the rod is located at an absolute position of (957,748). The camera position is locked, with no rotation or movement. I believe that the angle the rod is cast at is likely a 1:1 opposite of where it is pulled to, e.g. if the rod was pulled to 225 degrees it would cast at 45 degrees. I am not 100% sure, but I think that the strength is linear; I used linear regression partly because I was not sure about this. There is no altitude difference/slope/wind that affects the cast. The only factor affecting the landing position is where the arrow is dragged to. The arrow will not drag past the 180/360 degree position sideways (relative to the cast button) and will simply lock the cast angle in the x direction if it is held there.
The x-y data was collected with a simple program that moves the mouse to the same position (957,748) and drags the arrow with different strengths/positions to cast the rod, to build some kind of line of best fit for a general casting formula. The triang_x and triang_y functions included below are what the x and y coordinates were run through, respectively, to triangulate the ending drag coordinate for the arrow. This does not work very well, because matching x-to-x and y-to-y does not account for both x and y in each formula.
Left column is the fishing spot coordinates; right column is where the arrow is dragged to to hit that spot:
(1133,359) to (890,890) (858,334) to (886,900) (755,579) to (1012,811) (1013,255) to (933,934) (1166,469) to (885,855) (1344,654) to (855,794) (804,260) to (1024,939) (1288,287) to (822,918) (624,422) to (1075,869) (981,460) to (949,851) (944,203) to (963,957) (829,367) to (1005,887) (1129,259) to (885,932) (773,219) to (1036,949) (1052,314) to (919,908) (958,662) to (955,782) (1448,361) to (775,906) (1566,492) to (751,837) (1275,703) to (859,764) (1210,280) to (852,926) (668,513) to (1050,836) (830,243) to (1011,939) (688,654) to (1022,792) (635,437) to (1072,864) (911,252) to (976,935) (1499,542) to (785,825) (793,452) to (1017,860) (1309,354) to (824,891) (1383,522) to (817,838) (1262,712) to (867,758) (927,225) to (980,983) (644,360) to (1097,919) (1307,648) to (862,798) (1321,296) to (812,913) (798,212) to (1026,952) (1315,460) to (836,854) (700,597) to (1028,809) (868,573) to (981,811) (1561,497) to (758,838) (1172,588) to (896,816)
(Screenshot: shows the bot actions taken within the function and how the formula is used.)
coeffs_x = np.float64([ -7.9517089428836911e+005, 4.1678460255861210e+003, -7.5075555590709371e+000, 4.2001528427460097e-003, 2.3767929866943760e-006, -4.7841176483548307e-009, 6.1781765539212100e-012, -5.2769581174002655e-015, -4.3548777375857698e-019, 2.5342561455214514e-021, -1.4853535063513160e-024, 1.5268121610772846e-027, -2.9667978919426497e-031, -9.5670287721717018e-035, -2.0270490020866057e-037, -2.8248895597371365e-040, -4.6436110892973750e-044, 6.7719507722602512e-047, 7.1944028726480678e-050, 1.2976299392064562e-052, 7.3188205383162127e-056, -6.3972284918241943e-059, -4.1991571617797430e-062, 2.5577340340980386e-066, -4.3382682133956009e-068, 1.5534384486024757e-071, 5.1736875087411699e-075, 7.8137258396620031e-078, 2.6423817496804479e-081, 2.5418438527686641e-084, -2.8489136942892384e-087, -2.3969101111450846e-091, -3.3499890707855620e-094, -1.4462592756075361e-096, 6.8375394909274851e-100, -2.4083095685910846e-103, 7.0453288171977301e-106, -2.8342463921987051e-109 ])
triang_x = np.polynomial.Polynomial(coeffs_x)

coeffs_y = np.float64([ 2.6215449742035207e+005, -5.7778572049616614e+003, 5.1995066291482431e+001, -2.3696608508824663e-001, 5.2377319234985116e-004, -2.5063316505492962e-007, -9.2022083686040928e-010, 3.8639053124052189e-013, 2.7895763914453325e-015, 7.3703786336356152e-019, -1.3411964395287408e-020, 1.5532055573746500e-023, -6.9719956967963252e-027, 1.9573598517734802e-029, -3.3847482160483597e-032, -5.5368209294319872e-035, 7.1463648457003723e-038, 4.6713369979545088e-040, -7.5070219026265008e-043, -4.5089676791698693e-047, -3.2970870269153785e-049, 1.6283636917056585e-051, -1.4312555782661719e-054, 7.8463441723355399e-058, 1.9439588820918080e-060, 2.1292310369635749e-063, -1.4191866473449773e-065, -2.1353539347524828e-070, 2.5876946863828411e-071, -1.6182477348921458e-074 ])
triang_y = np.polynomial.Polynomial(coeffs_y)
First you need to clarify a few things:
the xy data: is it the position of the object you want to hit, or the position you hit when using specific input data (which is missing in that case)? In what coordinate system?
what position is your avatar at? how is the view defined? is it fully 3D with 6DOF, or just fixed (no rotation or movement) relative to the avatar?
what is the physics/logic of your rod casting? is it angle (one or two) plus strength? Is the strength linear to distance?
Does throwing account for the altitude difference between avatar and target? does ground elevation (slope) play a role?
Are there any other factors like wind, type of rod etc.?
You shared the xy data, but what do you want to correlate it against or build a formula for? As posted it does not make sense; you obviously forgot to add something, like the conditions each position was taken under.
I would solve this by (no further details until you clarify the stuff above):
transform the targets' xy to a player-relative coordinate system aligned to the ground
compute the azimuth angle (geometrically): a simple atan2(y,x) will do, but you need to take your coordinate system notation into account.
compute the elevation angle and strength (geometrically): simple ballistic physics should apply; however, it depends on the physics of the game or whatever you are writing this for.
adjust for additional stuff you know about: for example, wind can slightly change your angle and strength.
In case you have real physics and data you can do #3 and #4 at the same time. See similar: C++ intersection time of 2 bullets
[Edit1] putting your data into your image
OK, your coordinates obviously do not match your screenshot, as the image was scaled after being taken. After some intuition I rescaled it and drew the data into the image in C++ so it matches again; here is the result. I converted your Cartesian points:
int ava_x=957,ava_y=748; // avatar
int data[]= // target(x1,y1) , drag(x0,y0)
    {
    1133,359,890,890, 858,334,886,900, 755,579,1012,811, 1013,255,933,934,
    1166,469,885,855, 1344,654,855,794, 804,260,1024,939, 1288,287,822,918,
    624,422,1075,869, 981,460,949,851, 944,203,963,957, 829,367,1005,887,
    1129,259,885,932, 773,219,1036,949, 1052,314,919,908, 958,662,955,782,
    1448,361,775,906, 1566,492,751,837, 1275,703,859,764, 1210,280,852,926,
    668,513,1050,836, 830,243,1011,939, 688,654,1022,792, 635,437,1072,864,
    911,252,976,935, 1499,542,785,825, 793,452,1017,860, 1309,354,824,891,
    1383,522,817,838, 1262,712,867,758, 927,225,980,983, 644,360,1097,919,
    1307,648,862,798, 1321,296,812,913, 798,212,1026,952, 1315,460,836,854,
    700,597,1028,809, 868,573,981,811, 1561,497,758,838, 1172,588,896,816,
    };
into polar coordinates relative to ava_x,ava_y, using atan2 and the 2D distance formula, and simply printed the angular difference +180deg and the ratio between line sizes (that is the yellow text at the left of the screenshot): first the ordinal number, then the angle difference [deg], then the ratio between line lengths. As you can see the angle difference is within +/-10.6deg and the length ratio is in <2.5,3.6>, probably because of inaccuracy in the OpenCV findings and some randomness in the fishing rod casting from the game logic itself. As you can see, polar coordinates are best for this.
For starters you could simply do this:
// wanted target in polar (obtained by CV)
x = target_x-ava_x;
y = target_y-ava_y;
a = atan2(y,x);
l = sqrt((x*x)+(y*y));
// aiming drag in polar
a += 3.1415926535897932384626433832795; // += 180 deg
l /= 3.0; // "avg" ratio between line sizes
// aiming drag in cartesian
aim_x = ava_x + l*cos(a);
aim_y = ava_y + l*sin(a);
You can optimize it to:
aim_x = ava_x - ((target_x-ava_x)/3);
aim_y = ava_y - ((target_y-ava_y)/3);
Now to improve precision you could measure the dependency of the line ratio on the line size (it might not be linear), and the angular difference might also be bigger for bigger lines... Also note that the second cast (ordinal 2) is probably a bug (wrongly detected x,y by CV): if you render the two lines you will see they do not match, so you should throw that sample away from the dataset. Also note that I code in C++, so my goniometrics use radians (Python's math module also uses radians), and the equations might need some additional tweaking for your coordinate system (negate y?).
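A direct Python translation of that starting point might look like this (a minimal sketch: ava_x,ava_y and the /3 ratio come from the data above, the function name is illustrative):

import math

AVA_X, AVA_Y = 957, 748  # avatar / cast button position

def aim_for(target_x, target_y):
    # target in polar coordinates relative to the avatar
    dx, dy = target_x - AVA_X, target_y - AVA_Y
    a = math.atan2(dy, dx) + math.pi   # drag direction is opposite the cast
    l = math.hypot(dx, dy) / 3.0       # rough average length ratio from the data
    # back to cartesian: where to drag the arrow to
    return AVA_X + l*math.cos(a), AVA_Y + l*math.sin(a)

# example: first sample from the dataset, target (1133,359)
print(aim_for(1133, 359))  # lands near the recorded drag point (890,890)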
Failed to display all the spheres in perspective projection of 3D animation
I have generated an optic flow animation with spheres (circles) that move towards the viewer in a 3D coordinate space. For some reason, although I define 8 spheres, it never displays all of them when I run the code; sometimes it displays 1, sometimes 4 (like in the gif). It ends up being a random number of spheres from 1 to 8. My code is available on Github.
In a perspective projection the viewing volume is a frustum. So probably the spheres are clipped (not in the frustum) at the sides of the frustum, especially when they are close to the near plane. Note that most of the stars "leave" the window at its borders when they come closer to the camera (except the ones that leave the frustum through the near plane).
Set the initial z coordinate of the spheres to its maximum (the far plane), for debugging:
for sphere in spheres:
    sphere.position.xy = np.random.uniform(-25, 25, size=2)
    #sphere.position.z = np.random.uniform(0.0, -50.0)
    sphere.position.z = 50
If you don't "see" all the stars even then, the range for the x and y coordinates ([-25, 25]) is too large. To compensate for the initial clipping you can scale the x and y components by the distance:
for sphere in spheres:
    sphere.position.xy = np.random.uniform(-25, 25, size=2)
    z = np.random.uniform(0.0, -50.0)
    sphere.position.z = z
    sphere.position.xy[0] *= z/-50
    sphere.position.xy[1] *= z/-50
finding all points around a point in a cube
Let's say I have a list of points P0, P1, P2, P3 with coordinates X,Y,Z. Then I have a list of points with coordinates X1, Y1, Z1. I had to find all points within a certain radius around P0: I did this with the Python scipy library, using a k-d tree and the query_ball_point function. But now I'd like to find all points inside a cube (really a rectangular box). The points (P0 and P1) are not centered in the box: the (Z) height of the box is Z+4, the left side of P0 in Y is +2 and the right side is +1, and to get X we need to calculate the distance between P0 and P1... Any ideas? I have good programming knowledge but my math and geometry skills are lacking.
All you need to do is check the distance conditions for every point in relation to your rectangle, in all dimensions x,y,z. Let's say you have the center of the rectangle at coordinates cx,cy,cz, and you know that the distance from the X side is dX, from the Y side is dY and from the Z side is dZ. Then you can loop:
for point in all_points:
    px, py, pz = point           # coordinates of the point being examined
    if abs(cx - px) < dX:        # distance between center and point along x
        if abs(cy - py) < dY:    # ...along y
            if abs(cz - pz) < dZ:  # ...along z; dX/dY/dZ are center-to-side distances
                print('point is inside the so-called cube')
NOTE: this example is for a box whose center is in the middle. Since your center is not really in the middle, I advise you to find the actual center and then apply the example above. If you can't calculate the center of your box, you can't solve this problem anyway, so you had better find the center.
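Since you already use scipy/numpy, the same test can be vectorized (a sketch; the point array, center and half-sizes are placeholder assumptions):

import numpy as np

# placeholder data and box parameters
all_points = np.random.uniform(-10.0, 10.0, size=(1000, 3))
center = np.array([1.0, 2.0, 0.5])      # cx, cy, cz
half_size = np.array([3.0, 2.0, 4.0])   # dX, dY, dZ

# a point is inside iff it is within the half-size of the center on every axis
inside = np.all(np.abs(all_points - center) < half_size, axis=1)
points_inside = all_points[inside]
print(len(points_inside), "points inside the box")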
How to use python to move the mouse in circles
I'm trying to write a script in Python to automatically force the movement of the mouse pointer without the user's input (it quits through the keyboard). Experimenting with PyAutoGUI, PyUserInput and ctypes, I've figured out ways to move the pointer at constant speed instead of having it teleport across the screen (I need the user to be able to see the path it makes). However, I need it to be able to perform curves, and particularly circles, and I haven't found a way to do so with the aforementioned libraries. Does anybody know of a way to code them into making the mouse describe circles across the screen at constant speed, instead of just straight lines? Thank you beforehand for any input or help you may provide.
This is my attempt at making a circle at the center of the screen with radius R. Also note that if I don't pass the parameter duration, the mouse pointer moves to the next coordinates instantly. So for a circle divided into 360 parts you can set the pace using a modulus:
import pyautogui
import math

# radius
R = 400
# measure the screen size
(x, y) = pyautogui.size()
# locate the center of the screen
(X, Y) = pyautogui.position(x/2, y/2)
# offset by the radius
pyautogui.moveTo(X+R, Y)
for i in range(360):
    # set the pace with a modulus: only move every 6th degree
    if i % 6 == 0:
        pyautogui.moveTo(X+R*math.cos(math.radians(i)), Y+R*math.sin(math.radians(i)))
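If you need genuinely constant speed along the circle (as the question asks), a variant is to give every small step the same duration. A minimal sketch, with the step count and timing as assumptions to tune:

import math
import pyautogui

R = 400              # radius in pixels
STEPS = 60           # straight segments approximating the circle
STEP_DURATION = 0.1  # equal time per step => constant speed
# note: pyautogui treats durations below pyautogui.MINIMUM_DURATION (0.1 s by
# default) as instant moves, so keep STEP_DURATION at or above that threshold

w, h = pyautogui.size()
cx, cy = w/2, h/2

pyautogui.moveTo(cx + R, cy)  # start on the circle
for i in range(1, STEPS + 1):
    ang = 2*math.pi*i/STEPS
    pyautogui.moveTo(cx + R*math.cos(ang), cy + R*math.sin(ang),
                     duration=STEP_DURATION)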
There is a way to do this using sin and cos. (I haven't been able to test this code yet; it might not work.)
import math
import pyautogui

def circle(radius=5, accuracy=360, xpos=0, ypos=0, speed=5):
    angle = 360/accuracy
    cur_angle = 0
    x, y = [], []
    sped = speed/accuracy  # seconds per segment
    for i in range(accuracy):
        x.append(xpos + radius*math.sin(math.radians(cur_angle)))
        y.append(ypos + radius*math.cos(math.radians(cur_angle)))
        cur_angle += angle
    for i in range(len(x)):
        pyautogui.moveTo(x[i], y[i], duration=sped)
You put this near the top of your script, and call it like this:
circle(radius, accuracy, xpos, ypos, speed)
radius controls the width of the circle.
accuracy controls how many equidistant points the circle is broken up into: setting accuracy to 4 will put 4 invisible points along the circle for the mouse to travel to, which will make a square, not a circle; 5 makes a pentagon, 6 a hexagon, etc. The bigger the radius, the bigger you will want the accuracy.
xpos and ypos control the x and y position of where the circle is centered.
speed controls how many seconds you want it to take to draw the circle.
Hope this helps :)
Would you mind elaborating on what you mean by 'curves'?