Convert trilateration coordinates to a position on a picture or SVG - Python

I'm working on a project to track a Bluetooth device using three Bluetooth readers in a room. This already works fine and I have some data.
I hope I have done my math correctly, so I can calculate my position using trilateration. It works fine on paper and in a quick Python script.
I used the following tips:
Trilateration C# How to get back into "normal" coordinates?
Trilateration example in java
and finally
https://math.stackexchange.com/questions/100448/finding-location-of-a-point-on-2d-plane-given-the-distances-to-three-other-know
Since I know the coordinates of my three receivers in "the real world", as well as the distances, I'm asking myself how to transform this information onto my 2D picture (or SVG).
For instance, how do I convert my three distances of 3 m, 5 m and 6 m to a picture of 600x800 pixels? How do I place the readers on the picture? Any suggestions or real-world hints? What happens if I zoom in or out of the picture? How do I find the coordinates of my position marker on the picture, derived from the real data?
Thanks

You're essentially asking how to draw a map of a small area.
Take the corners of the 600x800 image and decide where they should land in the real world. Ideally they should form a rectangle of the same 3:4 shape, so that the conversion factor from real-world distance to pixels is the same horizontally and vertically. After that it's just linear interpolation.
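For instance, here is a minimal sketch of that interpolation in Python; the room extents below are assumptions chosen to match the picture's 3:4 shape, not values from the question:

```python
# A minimal sketch of a world-to-pixel mapping by linear interpolation.
# The room size is an assumption chosen to match the 600x800 picture's
# 3:4 aspect ratio, so one scale factor serves both axes.

IMG_W, IMG_H = 600, 800        # picture size in pixels
ROOM_W, ROOM_H = 3.0, 4.0      # mapped real-world area in metres

def world_to_pixel(x_m, y_m):
    """Map a real-world point (metres) to pixel coordinates."""
    px = x_m / ROOM_W * IMG_W
    py = y_m / ROOM_H * IMG_H  # note: image y usually grows downwards
    return px, py

# Example: a receiver at (1.5 m, 2.0 m) lands at pixel (300.0, 400.0);
# the trilaterated marker position is converted the same way.
print(world_to_pixel(1.5, 2.0))
```

Zooming in or out only changes IMG_W and IMG_H (or adds a scale factor); keep the positions in metres and recompute the pixel coordinates from them rather than scaling pixels.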

Related

Get position of a camera on a 2d indoor map, from a QR code on the ceiling?

I want to create a visual 2D top-down map of my house, and as I walk around the house I want to film the ceiling, where a number of QR codes have been placed. As I walk around, the map will be updated, displaying the current position and orientation of the camera.
I believe this can be done using some geometry that takes into account things like the difference between the QR code's size in pixels in the image and its actual known size in cm, the distance from the camera to the ceiling, the distance and angle from the camera frame's center to the center of the QR code, and the orientation of the filmed QR code relative to its actual orientation on the ceiling.
The problem is I have no idea how to put these together into a solution. Can this be done with geometry, or is there another approach?
Secondly, I discovered an OpenCV implementation of Perspective-n-Point that I think might be perfect, especially since it takes yaw, pitch and roll into account, but I can't see how the values it returns can be turned into a position on a map.
Thanks
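For what it's worth, here is a minimal sketch of turning OpenCV's solvePnP output into a map position; every coordinate and the camera intrinsics below are invented placeholders, not values from the question:

```python
# A minimal sketch: recover the camera's world position from one detected
# QR code. All numbers below are illustrative assumptions.
import cv2
import numpy as np

# World coordinates of the QR code's corners (metres; z = ceiling height)
# and the matching pixel corners detected in the current frame.
object_points = np.array([[0.0, 0.0, 2.5],
                          [0.2, 0.0, 2.5],
                          [0.2, 0.2, 2.5],
                          [0.0, 0.2, 2.5]], dtype=np.float32)
image_points = np.array([[310, 220], [420, 215],
                         [425, 330], [315, 335]], dtype=np.float32)
camera_matrix = np.array([[800, 0, 320],
                          [0, 800, 240],
                          [0, 0, 1]], dtype=np.float32)  # assumed intrinsics
dist_coeffs = np.zeros(5)                                # assume no distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)

# solvePnP returns the world-to-camera transform; inverting it gives the
# camera's position in world coordinates, i.e. your spot on the map.
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec
print(camera_position.ravel())   # (x, y, z); x and y go on the 2D map
```

The camera's heading on the map can be derived from R.T as well, under whatever axis convention your map uses.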

Edge detection of an image and saving cells of a grid

[picture example]
I have recently started learning Python with the Spyder IDE, and I'm a bit lost, so I'm asking for advice.
The thing is that I need to program an algorithm that, given an arbitrary image representing a board with black spots on it (in the picture I uploaded it is a 4x5 board), recognizes the edges properly and draws an AxB grid on it. I also need to save each cell separately so I can work with them.
I know that OpenCV handles images, and I have even tried auto_canny, but I don't really know how to solve this problem. Can anybody give me some pointers, please?
As I understand from your question, you need as output the grid dimensions of the matrix in your picture (e.g. 4x3) and each cell as a separate image.
This is the way I would approach this problem:
Use canny + corner detection to get the intersection of the lines
With the coordinates of the corners you can form your regions of interest, crop each individually and save it as a new image
For the grid you can check the X's and the Y's of the coordinates. For example, you will have something like ((50, 30), (50, 35), (50, 40)), and from this you can tell that there are 3 points sharing x = 50, i.e. 3 points along one axis. I would encourage you to allow an error margin, as the points might not all land on exactly the same coordinate, but they should not differ by much; see the sketch below.
Good luck!
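A rough sketch of those steps; the file name, corner count and the 10-pixel tolerance are assumptions that will need tuning on a real image:

```python
# A rough sketch: Canny + corner detection, then cluster the corner
# coordinates per axis and crop the cells. Parameters are assumptions.
import cv2
import numpy as np

img = cv2.imread("board.png")                 # assumed input file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)              # step 1: edge map

# Corner detection on the edge map to find the grid-line intersections.
corners = cv2.goodFeaturesToTrack(edges, maxCorners=100,
                                  qualityLevel=0.1, minDistance=20)
points = corners.reshape(-1, 2)

def cluster_1d(values, tol=10):
    """Group coordinates closer than `tol` pixels (the error margin
    mentioned above) and return one averaged value per group."""
    values = np.sort(values)
    groups = [[values[0]]]
    for v in values[1:]:
        if v - groups[-1][-1] < tol:
            groups[-1].append(v)
        else:
            groups.append([v])
    return [int(np.mean(g)) for g in groups]

xs = cluster_1d(points[:, 0])                 # distinct vertical lines
ys = cluster_1d(points[:, 1])                 # distinct horizontal lines
print(f"grid is {len(xs) - 1} x {len(ys) - 1}")

# Step 2: crop the region between neighbouring lines and save each cell.
for i in range(len(ys) - 1):
    for j in range(len(xs) - 1):
        cell = img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
        cv2.imwrite(f"cell_{i}_{j}.png", cell)
```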

Normalise face landmark data using python

I am currently learning Python and playing around with TensorFlow.
I have a bunch of images for which I have obtained the landmarks (pixel points) of a person's facial features, such as ears and eyes. In addition, the detector also provides me with a box (4 coordinates) in which the face exists.
My goal is to normalise the data from all the different images into a standard-sized rectangle/square and calculate the positions of the landmarks relative to the normalised size.
Is there an API that allows me to do this already or should I get cracking and calculate the points myself?
Thanks in advance.
Actually, I think I have figured it out; it's pretty simple maths. Here is what I am going to do:
Take every point and subtract the first box point's values - this gives me the points as if the box started at [0, 0]
Scale every point by the ratio of the normalised size to the box size
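In code, those two steps could look like the sketch below; the (x1, y1, x2, y2) box format and the 256x256 target size are assumptions for illustration:

```python
# A minimal sketch of the two steps above; the box format and target
# size are illustrative assumptions.
import numpy as np

def normalise_landmarks(landmarks, box, target_size=(256, 256)):
    """Map pixel landmarks into a fixed-size rectangle.

    landmarks: (N, 2) array of (x, y) pixel points
    box:       (x1, y1, x2, y2) face bounding box in pixels
    """
    x1, y1, x2, y2 = box
    pts = np.asarray(landmarks, dtype=np.float64)
    pts = pts - [x1, y1]                # step 1: box origin moves to (0, 0)
    scale = np.array(target_size) / [x2 - x1, y2 - y1]
    return pts * scale                  # step 2: apply the size ratio

# Example: two landmarks in a 100x200 box, normalised to 256x256.
print(normalise_landmarks([[60, 120], [110, 180]], (50, 100, 150, 300)))
```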

Get 3D coordinates from two 2D frames

I couldn't find a proper answer to my problem on the web, so I'll ask it here. Let's say we're given two 2D photos of the same place taken from slightly different angles. I've chosen a set of points (edge detection) and found the correspondences between them (which point is which on the other photo). Now I need to somehow find the world coordinates of these points in 3D.
For the last 5 hours I've read a lot about it, but I still can't understand what steps I should follow. I've tried to estimate the motion of the camera using the function recoverPose applied to an essential matrix and the two sets of points from each frame. I can't understand what it gives me once I know the rotation and translation matrices (that recoverPose returned). What should I do in order to achieve my goal?
I also know the calibration matrix of my camera (I use KITTI dataset). I've read opencv documentation but still don't understand.
It's monocular vision.
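For reference, the usual next step after recoverPose is to build the two projection matrices and triangulate; a minimal sketch, assuming pts1 and pts2 are matched (N, 2) float32 arrays and K is the calibration matrix from the KITTI files:

```python
# A minimal sketch of two-view triangulation; with a single camera the
# translation t (and therefore the 3D points) is only known up to scale.
import cv2
import numpy as np

def triangulate(pts1, pts2, K):
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

    # Projection matrices: camera 1 at the origin, camera 2 posed by (R, t).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])

    # triangulatePoints expects 2xN arrays and returns homogeneous 4xN.
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T   # (N, 3) points in camera-1 coordinates
```

In other words, the R and t that recoverPose returns are exactly the pose of the second camera relative to the first; plugging them into the second projection matrix is what lets you triangulate world points.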

OpenCV [Python] - Perspective Warp an image with more than 4 points

I'm trying to make a simple scanner program, which takes in an image of a piece of paper and creates a binary image based on it. An example of what I am trying to do is below:
However, as you can see, the program uses only the 4 corners of the paper to create the image. This means that the program doesn't take the curvature of the paper into account.
Is there any way to "warp" the image with more than four points? By this I mean: find the bounding rectangle, and if the contour line is outside the rectangle, shrink that row of pixels, and if the contour line is inside, extend it?
I feel like this should exist in some shape or form, but if it doesn't, it may be time to delve into the depths of OpenCV :D
Thanks!
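One possibility is OpenCV's thin-plate-spline shape transformer (in opencv-contrib-python), which warps an image so that an arbitrary number of matched control points coincide; a minimal sketch, where every coordinate below is an invented placeholder rather than real contour data:

```python
# A minimal sketch using the thin-plate-spline shape transformer
# (opencv-contrib-python). All point coordinates are invented placeholders;
# in practice you would sample them along the detected paper contour.
import cv2
import numpy as np

img = cv2.imread("page.jpg")                  # assumed input image

# Matched control points: where each contour sample currently sits (source)
# and where it should end up on the flattened page (target).
source = np.array([[10, 20], [300, 15], [305, 400],
                   [12, 395], [150, 8], [155, 405]], dtype=np.float32)
target = np.array([[0, 0], [300, 0], [300, 400],
                   [0, 400], [150, 0], [150, 400]], dtype=np.float32)

tps = cv2.createThinPlateSplineShapeTransformer()
matches = [cv2.DMatch(i, i, 0) for i in range(len(source))]
# Note: the (target, source) argument order looks inverted, but it matches
# how warpImage applies the transform; verify the direction on your data.
tps.estimateTransformation(target.reshape(1, -1, 2),
                           source.reshape(1, -1, 2), matches)
flattened = tps.warpImage(img)
cv2.imwrite("flattened.jpg", flattened)
```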
