Move joystick in exact pixels - Python

I am working on a project where I want to move a virtual joystick (vJoy) an exact number of pixels. However, I am unable to find a method of converting pixels into the joystick's axis values (the axis range is -32768 to 32767). An example might help explain:
Let's say I would like to move 50 pixels on the x axis in a given time of 100 milliseconds. I would therefore have to find the exact axis value between -32768 and 32767 that would move the object 50 pixels in that time.
As I require a great deal of accuracy in the movement, I am stumped on a method that would accomplish this. Any help is appreciated.
Thanks

Figure out the total range the physical joystick axis can traverse. You've given min and max values of -32768 to 32767, for a total range of 65535.
Figure out the total range of pixels you want the virtual joystick (vJoy) to be able to move. Let's say it starts out at pixel position 0, and its min and max allowed values are -100 and 100. This gives us a total range of 200.
Now, figure out how many axis units the physical joystick should travel per pixel of virtual joystick movement. We get that by dividing: 65535 / 200 (physical joystick range / virtual joystick range), which gives us 327.675. In hypothetical terms this means "for every pixel the virtual joystick travels, the physical joystick travels 327.675 units". Let's assign this value to the variable "axis_per_pixel".
Finally, if we know that the pixel position has changed by some amount "a" (50 in your example), we can determine how much the physical joystick value needs to change by multiplying: a * axis_per_pixel. In our example, this is 50 * 327.675 = 16383.75. So, when the virtual joystick position increases by 50 pixels, the physical joystick value should increase by 16383.75 to match.
I'll give a code example. You tagged this as Python, but I'm going to answer in JavaScript because my Python is still a bit rough. Hopefully it's clear enough.
function getPositionIncrease(
    minPixelValue, // lowest allowed pixel position, e.g. -100
    maxPixelValue, // highest allowed pixel position, e.g. 100
    minAxisValue,  // lowest allowed axis value, e.g. -32768
    maxAxisValue,  // highest allowed axis value, e.g. 32767
    pixelDelta,    // what amount the pixel position has changed by
) {
    const distPixel = maxPixelValue - minPixelValue;
    const distAxis = maxAxisValue - minAxisValue;
    const axisPerPixel = distAxis / distPixel;
    const axisDelta = pixelDelta * axisPerPixel; // how much the physical axis needs to change to match
    return axisDelta;
}
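Since the question is tagged Python, here is a rough Python translation of the function above (a minimal sketch following the same arithmetic; the function name is mine):

def get_position_increase(min_pixel_value, max_pixel_value,
                          min_axis_value, max_axis_value,
                          pixel_delta):
    # total ranges of both scales
    dist_pixel = max_pixel_value - min_pixel_value
    dist_axis = max_axis_value - min_axis_value
    # axis units per pixel of movement
    axis_per_pixel = dist_axis / dist_pixel
    # how much the physical axis needs to change to match
    return pixel_delta * axis_per_pixel

# e.g. get_position_increase(-100, 100, -32768, 32767, 50) -> 16383.75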

Related

Taking the coordinates of an object and creating a formula to drag an arrow

I am using OpenCV to triangulate the position of an object, and am trying to create a formula that takes the coordinates I obtain and produces the drag position of a pull arrow, for casting a fishing rod. I tried using polynomial regression to a very high degree, but it is still inaccurate because the regression cannot take an (x,y) input to an (x,y) output, only an x input to an x output, etc. I have attached screenshots below for clarity, alongside the formulas I obtained from the regression. Any help/ideas/suggestions would be appreciated, thanks.
Edit:
The xy coordinates are organized as pairs: the landing position, then the position the arrow was pulled to for the bobber to land there. This is because the fishing blob is the input, and the arrow pull end location is derived from the blob location. I am using OpenCV to obtain the x,y coordinates, which I believe are just coordinates on the 2D screen.
The avatar position is locked, and the button to cast the rod is located at an absolute position of (957,748).
The camera position is locked with no rotation or movement.
I believe that the angle the rod is cast at is likely a 1:1 opposite of where it is pulled to. For example, if the rod was pulled to 225 degrees, it would cast at 45 degrees. I am not 100% sure, but I think the strength is linear; I used linear regression partially because I was not sure about this. There is no altitude difference/slope/wind that affects the cast. The only factor affecting the landing position is where the arrow is dragged to. The arrow will not drag past the 180/360 degree position sideways (relative to the cast button) and will simply lock the cast angle in the x direction if it is held there.
The x-y data was collected with a simple program that moves the mouse to the same position (957,748) and drags the arrow to cast the rod with different drag strengths/positions, to create some kind of line of best fit for a general casting formula. The included triang_x and triang_y functions are what the x and y coordinates were run through, respectively, to triangulate the ending drag coordinate for the arrow. This does not work very well because matching x-to-x and y-to-y doesn't account for both x and y data in each formula, just x-to-x, etc.
Left column is the fishing spot coordinates; right column is where the arrow is dragged to hit that spot.
(1133,359) to (890,890)
(858,334) to (886, 900)
(755,579) to (1012,811)
(1013,255) to (933,934)
(1166,469) to (885,855)
(1344,654) to (855,794)
(804,260) to (1024,939)
(1288,287) to (822,918)
(624,422) to (1075,869)
(981,460) to (949,851)
(944,203) to (963,957)
(829,367) to (1005,887)
(1129,259) to (885,932)
(773,219) to (1036,949)
(1052,314) to (919,908)
(958,662) to (955,782)
(1448,361) to (775,906)
(1566,492) to (751,837)
(1275,703) to (859,764)
(1210,280) to (852,926)
(668,513) to (1050,836)
(830,243) to (1011,939)
(688,654) to (1022,792)
(635,437) to (1072,864)
(911,252) to (976,935)
(1499,542) to (785,825)
(793,452) to (1017,860)
(1309,354) to (824,891)
(1383,522) to (817,838)
(1262,712) to (867,758)
(927,225) to (980,983)
(644,360) to (1097,919)
(1307,648) to (862,798)
(1321,296) to (812,913)
(798,212) to (1026,952)
(1315,460) to (836,854)
(700,597) to (1028,809)
(868,573) to (981,811)
(1561,497) to (758,838)
(1172,588) to (896,816)
The following shows the bot actions taken within the function and how the formula is used:
coeffs_x = np.float64([
-7.9517089428836911e+005,
4.1678460255861210e+003,
-7.5075555590709371e+000,
4.2001528427460097e-003,
2.3767929866943760e-006,
-4.7841176483548307e-009,
6.1781765539212100e-012,
-5.2769581174002655e-015,
-4.3548777375857698e-019,
2.5342561455214514e-021,
-1.4853535063513160e-024,
1.5268121610772846e-027,
-2.9667978919426497e-031,
-9.5670287721717018e-035,
-2.0270490020866057e-037,
-2.8248895597371365e-040,
-4.6436110892973750e-044,
6.7719507722602512e-047,
7.1944028726480678e-050,
1.2976299392064562e-052,
7.3188205383162127e-056,
-6.3972284918241943e-059,
-4.1991571617797430e-062,
2.5577340340980386e-066,
-4.3382682133956009e-068,
1.5534384486024757e-071,
5.1736875087411699e-075,
7.8137258396620031e-078,
2.6423817496804479e-081,
2.5418438527686641e-084,
-2.8489136942892384e-087,
-2.3969101111450846e-091,
-3.3499890707855620e-094,
-1.4462592756075361e-096,
6.8375394909274851e-100,
-2.4083095685910846e-103,
7.0453288171977301e-106,
-2.8342463921987051e-109
])
triang_x = np.polynomial.Polynomial(coeffs_x)
coeffs_y = np.float64([
2.6215449742035207e+005,
-5.7778572049616614e+003,
5.1995066291482431e+001,
-2.3696608508824663e-001,
5.2377319234985116e-004,
-2.5063316505492962e-007,
-9.2022083686040928e-010,
3.8639053124052189e-013,
2.7895763914453325e-015,
7.3703786336356152e-019,
-1.3411964395287408e-020,
1.5532055573746500e-023,
-6.9719956967963252e-027,
1.9573598517734802e-029,
-3.3847482160483597e-032,
-5.5368209294319872e-035,
7.1463648457003723e-038,
4.6713369979545088e-040,
-7.5070219026265008e-043,
-4.5089676791698693e-047,
-3.2970870269153785e-049,
1.6283636917056585e-051,
-1.4312555782661719e-054,
7.8463441723355399e-058,
1.9439588820918080e-060,
2.1292310369635749e-063,
-1.4191866473449773e-065,
-2.1353539347524828e-070,
2.5876946863828411e-071,
-1.6182477348921458e-074
])
triang_y = np.polynomial.Polynomial(coeffs_y)
First you need to clarify a few things:
the xy data
Is it the position of the object you want to hit, or the position you hit when using specific input data (which is missing in that case)? In what coordinate system?
what position is your avatar at?
how is the view defined?
is it fully 3D with 6DOF, or just fixed (no rotation or movement) relative to the avatar?
what is the physics/logic of your rod casting?
is it an angle (one or two) and a strength? Is the strength linear with distance? Does the throw account for the altitude difference between avatar and target? Does ground elevation (slope) play a role? Are there any other factors like wind, type of rod, etc.?
You shared the xy data, but against what do you want to correlate it or build a formula? As posted it does not make sense; you obviously forgot to add something, such as the conditions under which each position was taken.
I would solve this as follows (no further details until you clarify the points above):
transform the target's xy into a player-relative coordinate system aligned to the ground
compute the azimuth angle (geometrically)
a simple atan2(y,x) will do, but you need to take your coordinate system conventions into account.
compute the elevation angle and strength (geometrically)
simple ballistic physics should apply; however, it depends on the physics used by the game or whatever you are writing this for.
adjust for additional factors
For example, wind can slightly change your angle and strength.
If you have real physics and data, you can do #3 and #4 at the same time. See this similar question:
C++ intersection time of 2 bullets
[Edit1] putting your data into your image
OK, your coordinates obviously do not match your screenshot, as the image was scaled. After some intuition I rescaled it and drew into the image in C++ so it matches again; here is the result:
I converted your Cartesian points:
int ava_x=957,ava_y=748; // avatar
int data[]= // target(x1,y1) , drag(x0,y0)
{
1133,359,890,890,
858,334,886, 900,
755,579,1012,811,
1013,255,933,934,
1166,469,885,855,
1344,654,855,794,
804,260,1024,939,
1288,287,822,918,
624,422,1075,869,
981,460,949,851,
944,203,963,957,
829,367,1005,887,
1129,259,885,932,
773,219,1036,949,
1052,314,919,908,
958,662,955,782,
1448,361,775,906,
1566,492,751,837,
1275,703,859,764,
1210,280,852,926,
668,513,1050,836,
830,243,1011,939,
688,654,1022,792,
635,437,1072,864,
911,252,976,935,
1499,542,785,825,
793,452,1017,860,
1309,354,824,891,
1383,522,817,838,
1262,712,867,758,
927,225,980,983,
644,360,1097,919,
1307,648,862,798,
1321,296,812,913,
798,212,1026,952,
1315,460,836,854,
700,597,1028,809,
868,573,981,811,
1561,497,758,838,
1172,588,896,816,
};
Into polar coordinates relative to ava_x,ava_y, using atan2 and the 2D distance formula, and simply printed the angular difference +180 deg and the ratio between line lengths (that is the yellow text at the left of the screenshot): first the ordinal number, then the angle difference [deg], and then the ratio between line lengths...
As you can see, the angle difference is +/-10.6 deg and the length ratio is in <2.5, 3.6>, probably because of inaccuracy in the OpenCV detections and some randomness in the fishing rod casts from the game logic itself.
As you can see, polar coordinates are best for this. For starters you could simply do this:
// wanted target in polar (obtained by CV)
x = target_x-ava_x;
y = target_y-ava_y;
a = atan2(y,x);
l = sqrt((x*x)+(y*y));
// aiming drag in polar
a += 3.1415926535897932384626433832795; // +=180 deg
l /= 3.0; // "avg" ratio between line sizes
// aiming drag in cartesian
aim_x = ava_x + l*cos(a);
aim_y = ava_y + l*sin(a);
You can optimize it to:
aim_x = ava_x - ((target_x-ava_x)/3);
aim_y = ava_y - ((target_y-ava_y)/3);
Now, to improve precision, you could measure the dependency between line ratio and line size (it might not be linear); the angular difference might also be bigger for longer lines...
Also note that the second cast (ordinal 2) is probably a bug (wrongly detected x,y by CV); if you render the two lines you will see they do not match, so you should not account for it and should remove it from the dataset.
Also note that I code in C++, so my goniometric functions use radians (not sure if that is true for Python; if not, you need to convert to degrees). The equations might also need some additional tweaking for your coordinate system (negate y?).
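Since the question's code is Python (numpy), here is a rough Python equivalent of the simple polar aiming above; the helper name and the ratio parameter are mine, and the fixed ratio of 3.0 is just the "avg" from the measurements (Python's math.atan2 also uses radians):

import math

ava_x, ava_y = 957, 748  # avatar / cast button position

def aim_for_target(target_x, target_y, ratio=3.0):
    # wanted target in polar (obtained by CV)
    x = target_x - ava_x
    y = target_y - ava_y
    a = math.atan2(y, x)
    l = math.hypot(x, y)
    # aiming drag in polar: opposite direction, scaled-down length
    a += math.pi   # += 180 deg
    l /= ratio     # "avg" ratio between line lengths
    # aiming drag in cartesian
    return ava_x + l * math.cos(a), ava_y + l * math.sin(a)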

How to convert USB Joystick Gamepad data output to angle

I'm currently working on a home automation project with a Raspberry Pi 3 and Python. The goal is to control motor servos from a Thrustmaster T16000M USB joystick, but I'm stuck on converting the joystick values to angles in degrees.
For example, the X axis goes from 0 to 16383, but I would like to convert this to go from 0° to 180°. My problem is the mapping between the angle values in degrees and the joystick values. I have the impression that I have to use lists with a "cursor" that determines, for example: 0° = 0 for the joystick, 90° = 8192, 180° = 16383.
I can understand the logic, but the implementation is harder because I don't know if it's the right method, or even if it's possible.
I also looked at vectors, matrices, etc., but after a day of working on it I'm still confused about how to do it.
If I don't misunderstand your question, this should be very easy.
x_max = 16383
output_max = 180
# Assume x is an arbitrary value between 0 and x_max
output = x / x_max * output_max  # under Python 2, use float(x) to avoid integer division
That's it. Output will be a mapping of your x value between 0 and 16383 to a scale from 0 to 180.
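If the input range does not start at zero, or you later need other output ranges, a general linear remapping helper is handy. A minimal sketch (the name map_range is mine, not from the answer):

def map_range(value, in_min, in_max, out_min, out_max):
    """Linearly map value from [in_min, in_max] to [out_min, out_max]."""
    return out_min + (float(value) - in_min) * (out_max - out_min) / (in_max - in_min)

# e.g. a signed axis (-32768..32767) to a servo angle (0..180):
# angle = map_range(joystick_x, -32768, 32767, 0, 180)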

Theory behind Wolfenstein-style 3D rendering

I'm currently working on a project about 3D rendering, and I'm trying to make a simplistic program that can display a simple 3D room (static shading, no player movement, only rotation) with pygame.
So far I've worked through the theory:
Start with a list of coordinates for the X and Z of each "Node"
Nodes are kept in an order which forms a closed loop, so that a pair of nodes will form either side of a wall
The height of the wall is determined when it is rendered, being relative to distance from the camera
Walls are rendered using painter's algorithm, so closer objects are drawn on top of further ones
For shading, "fake contrast" is used, which brightens/darkens walls based on the gradient between their two nodes
While it seems simple enough, the process of translating the 3D coordinates into 2D points on the screen is proving difficult for me to understand.
Googling this topic has so far only yielded these equations:
screenX = (worldX/worldZ)
screenY = (worldY/worldZ)
These seem flawed to me, as you would get a divide-by-zero error if any z coordinate is 0.
So if anyone could help explain this, I'd be really grateful.
Well, the
screenX = (worldX/worldZ)
screenY = (worldY/worldZ)
is not the whole story; that is just the perspective division by z, and it is not how Doom- or Wolfenstein-style techniques work.
In Doom there is only a single viewing angle (you can turn left/right but cannot look up/down; you can only duck or jump, which is not the same). So we need to know our player position and direction px,py,pz,pangle. The z is needed only if you also want to implement z-axis movement/looking...
If you are looking along a straight line (red), all the objects that cross that line in 3D are projected to a single x coordinate on the player's screen...
So if we are looking in some direction (red), any object/point crossing/touching this red line will be placed at the center of the screen (in the x axis). What is to the left of it will be rendered on the left, and similarly what is on the right will be rendered on the right...
For perspective we need to define how large a viewing angle we have...
This limits our view, so any point touching the green line will be projected at the edge of the view (in the x axis). From this we can compute the screen x coordinate sx of any point (x,y,z) directly:
// angle of point relative to player direction
sx = point_ang - pangle;
if (sx<-M_PI) sx+=2.0*M_PI;
if (sx>+M_PI) sx-=2.0*M_PI;
// scale to pixels
sx = screen_size_x/2 + sx*screen_size_x/FOVx;
where screen_size_x is the resolution of our view area and point_ang is the angle of the point (x,y,z) relative to the origin px,py,pz. You can compute it like this:
point_ang = atan2(y-py,x-px)
but if you are truly doing Doom-style ray casting then you already have this angle.
Now we need to compute the screen y coordinate sy, which depends on the distance from the player and the wall size. We can exploit triangle similarity.
so:
sy = screen_size_y/2 (+/-) wall_height*focal_length/distance
where focal_length is the distance at which a wall with 100% height covers exactly the whole screen in the y axis. As you can see, we are dividing by distance, which might be zero. Such a state must be avoided, so you need to make sure your rays are evaluated at the next cell if you are standing directly on a cell boundary. We also need to select the focal length so that a square wall is projected as a square.
Here is a piece of code from my Doom engine (putting it all together):
double divide(double x,double y)
{
    if ((y>=-1e-30)&&(y<=+1e-30)) return 0.0;
    return x/y;
}
bool Doom3D::cell2screen(int &sx,int &sy,double x,double y,double z)
{
    double a,l;
    // x,y relative to player
    x-=plrx;
    y-=plry;
    // convert z from [cell] to units
    z*=_Doom3D_cell_size;
    // angle -> sx
    a=atan2(y,x)-plra;
    if (a<-pi) a+=pi2;
    if (a>+pi) a-=pi2;
    sx=double(sxs2)*(1.0+(2.0*a/view_ang));
    // perpendicular distance -> sy
    l=sqrt((x*x)+(y*y))*cos(a);
    sy=sys2+divide((double(plrz+_Doom3D_cell_size)-z-z)*wall,l);
    // in front of player?
    return (fabs(a)<=0.5*pi);
}
where:
_Doom3D_cell_size=100; // [units] cell cube size
view_ang=60.0*deg; // FOVx
focus=0.25; // [cells] view focal length (uncorrected)
wall=double(sxs)*(1.25+(0.288*a)+(2.04*a*a))*focus/double(_Doom3D_cell_size); // [px] projected wall size ratio size = height*wall/distance
sxs,sys = screen resolution
sxs2,sys2 = screen half resolution
pi=M_PI, pi2=2.0*M_PI
Do not forget to use perpendicular distances (multiplied by cos(a) as I did), otherwise a serious fish-eye effect will occur. For more info see:
Ray Casting with different height size
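Since the question targets pygame, here is a minimal Python sketch of the same projection. It simplifies the wall-size correction to a single wall_scale constant, and all names are illustrative, not from the engine above:

import math

def cell2screen(px, py, pz, pangle, fov_x, w, h, x, y, z, wall_scale):
    # angle of the point relative to the view direction -> screen x
    a = math.atan2(y - py, x - px) - pangle
    a = (a + math.pi) % (2.0 * math.pi) - math.pi  # wrap to [-pi, +pi]
    sx = w / 2 + a * w / fov_x
    # perpendicular distance (the cos(a) factor avoids fish-eye) -> screen y
    l = math.hypot(x - px, y - py) * math.cos(a)
    if abs(l) < 1e-9:
        return None  # point (almost) at the camera plane, skip it
    sy = h / 2 + (pz - z) * wall_scale / l
    # only meaningful when the point is in front of the player
    return (int(sx), int(sy)) if abs(a) <= 0.5 * math.pi else None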

Creating a foolproof graphing calculator using Python - Python 2.7

I am trying to create a foolproof graphing calculator using Python and pygame.
I created a graphing calculator that works for most functions. It takes a user's infix expression as a string and converts it to postfix for easier calculation. I then loop through, passing x values into the postfix expression to get y values for graphing with pygame.
The first problem I ran into was performing impossible calculations (like dividing by zero, the square root of -1, or 0 ^ a non-positive number). If something like this happened, I would output None and that pixel wouldn't be added to the list of points to be graphed.
* I have shown all the different attempts I have made at this, to help you understand where I am coming from. If you would like to see only my most current code and method, jump down to where it says "Current".
Method 1
My first method was: after I acquired all my pixel values, I would paint them using the pygame aalines function. This worked, except when there were missing points in between actual points, because it would just draw the line straight across the gap. (1/x would not work, but something like 0^x would.)
This is what 1/x looks like using the aalines method
Method 1.1
My next idea was to split the line into two lines every time a None was returned. This worked for 1/x, but I quickly realized it would only work if one of the passed-in x values landed exactly on a y value of None. 1/x might work, but 1/(x+0.0001) wouldn't.
Method 2
My next method was to convert each pixel x value into the corresponding x point value in the window (for example, (0,0) on the graphing window would actually be pixel (249,249) on a 500x500 program window). I would then calculate every y value from the x values I had just created. This works for any line that doesn't have a slope > 1 or < -1.
This is what 1/x would look like using this method.
Current
My most current method is supposed to be an advanced, working version of method 2.
It's kind of hard to explain. Basically, for each pixel column on the display window I take the x values just to the left and just to the right of it. I then plug those two values into the expression to get two y values. Finally, I loop through each y value in that column and check whether the current value lies between the two y values calculated earlier.
size is a list of size two that is the dimensions of the program window.
xWin is a list of size two that holds the x Min and x Max of the graphing window.
yWin is a list of size two that holds the y Min and y Max of the graphing window.
pixelToPoint is a function that takes a scalar pixel value (just x or just y) and converts it to its corresponding value on the graphing window.
pixels = []
for x in range(size[0]):
    leftX = pixelToPoint(x, size[0]+1, xWin, False)
    rightX = pixelToPoint(x+1, size[0]+1, xWin, False)
    leftY = calcPostfix(postfix, leftX)
    rightY = calcPostfix(postfix, rightX)
    for y in range(size[1]):
        if leftY is not None and rightY is not None:
            yPoint = pixelToPoint(y, size[1], yWin, True)
            if (rightY <= yPoint <= leftY) or (rightY >= yPoint >= leftY):
                pixels.append((x, y))

for p in pixels:
    screen.fill(BLACK, (p, (1, 1)))
This fixed the problem from method 2 of the pixels not being connected into a continuous line. However, it didn't fix the problem from method 1: when graphing 1/x, it looked exactly the same as the aalines method.
-------------------------------------------------------------------------------------------------------------------------------
I am stuck and can't think of a solution. The only fix I can think of is using a whole bunch of x values, but that seems really inefficient. Also, I am trying to make my program as resizable and customizable as possible, so everything must be variable-driven, and I am not sure how to calculate how many x values are needed for a given program window size and graph window size.
I'm not sure if I am on the right track or if there is a completely different method for doing this, but I want my graphing calculator to be able to graph any function (just like my actual graphing calculator).
Edit 1
I just tried using as many x values as there are pixels (500x500 display window calculates 250,000 y values).
It worked for every function I've tried, but it is really slow: it takes about 4 seconds to calculate (fluctuating depending on the equation). I've looked around online and have found graphing calculators that graph almost instantaneously, but I can't figure out how they do it.
This online graphing calculator is extremely fast and effective. There must be some algorithm other than using a bunch of x values that can achieve what I want, because that site is doing it.
The problem is that to know whether you can reasonably draw a line between two points, you have to know whether the function is continuous in that interval.
That is a hard problem in general. What you could do is use the following heuristic: if the slope of the line has changed too much from the previous one, guess that there is a discontinuity in the interval and don't draw a line (see the sketch after this answer).
Another solution builds on your method 2.
After drawing the points that correspond to every value on the x axis, try to draw, for every adjacent pair (x1, x2), the y values within (y1 = f(x1), y2 = f(x2)) that can be reached by some x within (x1, x2).
This can be done by searching for a fitting x by dichotomy (bisection) or via a Newton-style search heuristic.
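A minimal Python sketch of the slope-change heuristic; the relative threshold slope_jump is an arbitrary tuning value, not from the answer:

def split_on_jumps(points, slope_jump=10.0):
    """Split a polyline into segments wherever the slope between
    consecutive points changes too abruptly (a guessed discontinuity).
    points: non-empty list of (x, y) with strictly increasing x."""
    segments = [[points[0]]]
    prev_slope = None
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        slope = (y1 - y0) / (x1 - x0)
        if prev_slope is not None and abs(slope - prev_slope) > slope_jump * (1.0 + abs(prev_slope)):
            segments.append([])  # likely discontinuity: start a new segment
        segments[-1].append((x1, y1))
        prev_slope = slope
    return segments

Each returned segment can then be passed to pygame.draw.aalines separately, so no line is drawn across a suspected discontinuity.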

Selecting best range of values from histogram curve

Scenario:
I am trying to track two differently colored objects. At the beginning, the user is prompted to hold the first colored object (say, a red one) at a particular position in front of the camera (marked on screen by a rectangle) and press any key; my program then takes that portion of the frame (ROI) and analyzes its color to determine what color to track. The same goes for the second object. Then, as usual, I use the cv.inRange function in the HSV color plane and track the object.
What is done:
I took the ROI of the object to be tracked, converted it to HSV, and checked the hue histogram. I got two cases, as below:
(Here there is only one major central peak. But in some cases I get two such peaks: one bigger peak with a pixel cluster around it, and a second peak, smaller than the first but of significant size, with a small cluster around it as well. I don't have a sample image of that now, but it looks roughly like the sketch below (created in Paint).)
Question:
How can I get the best range of hue values from these histograms?
By best range I mean a range containing maybe 80-90% of the pixels in the ROI.
Or is there any better method than this to track differently colored objects?
If I understand right, the only thing you need here is to find a maximum in a graph, where the maximum is not necessarily the highest peak but the area with the largest density.
Here's a very simple, not too scientific, but fast O(n) approach: run the histogram through a low-pass filter, e.g. a moving average. The length of your average can be, let's say, 20. In that case the 10th value of your new modified histogram would be:
mh10 = (h1 + h2 + ... + h20) / 20
where h1, h2... are values from your histogram. The next value:
mh11 = (h2 + h3 + ... + h21) / 20
which can be calculated much more easily using the previously calculated mh10, by dropping its first component and adding a new one at the end:
mh11 = mh10 - h1/20 + h21/20
Your only problem is how you handle numbers at the edge of your histogram. You could shrink your moving average's length to the length available, or you could add values before and after what you already have. But either way, you couldn't handle peaks right at the edge.
And finally, when you have this modified histogram, just take its maximum. This works because every value in your histogram now includes not only itself but its neighbors as well.
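A minimal numpy sketch of this moving-average smoothing (the window length of 20 is the example value from above; with mode='same', the edges are only partially covered, matching the caveat mentioned earlier):

import numpy as np

def densest_hue(hist, window=20):
    """Smooth a histogram with a running average and return the index
    of the densest region (not necessarily the highest raw peak)."""
    hist = np.asarray(hist, dtype=float)
    kernel = np.ones(window) / window
    smoothed = np.convolve(hist, kernel, mode='same')
    return int(np.argmax(smoothed))

The hue range to track could then be grown outward from this index until the desired 80-90% of the ROI pixels are covered.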
A more sophisticated approach is to weight your average, for example with a Gaussian curve. But that no longer admits the rolling-sum trick; it would be O(k*n), where k is the length of your average, which is also the length of the Gaussian.
