How to plot 2-D navigation data points on a floor map using Matlab/Python?

I want to show the tracking result of my indoor localization algorithm with respect to the ground-truth reference path on the floor map. The floor plan and the walking route representing the ground truth are as follows:
Here, the red line is the ground-truth route. The horizontal direction of the image is the x-axis and is compressed (the original x-axis extent is much larger) to save space. The vertical direction is the y-axis and is drawn to scale with the coordinates.
I want to draw the localization estimation points (2-D) on it. I tried to do this using Origin and got the following image.
As seen in the figure above, the result does not match the floor plan precisely (using a log scale on the y-axis can compress the y-axis, but it does not yield a complete solution in my case).
To summarize:
What I have: (a) A set of 2-D coordinate points from each localization algorithm (I'm comparing my method with two other methods, so there are 3 sets of 2-D coordinate points) and (b) a floor plan image.
What I want: To plot the sets of 2-D coordinate points on the floor plan image.
If anyone could share sample Matlab/Python code to plot the 2-D coordinates, I'd highly appreciate it.
Thank you.

To plot on top of an image, you have to provide the necessary scaling information. This can be achieved using the image function, passing x, y and C: https://de.mathworks.com/help/matlab/ref/image.html?s_tid=doc_ta
I don't know how your floor plan is scaled, but the resulting code should be something like:
image(x, y, C) % x and y provide the scaling information, C is the image
hold on
plot(...) % code you already have
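For the Python side, matplotlib's imshow accepts an extent argument that plays the same role as x and y above. A minimal sketch, assuming the floor plan is stored in floorplan.png and spans 0–60 m in x and 0–15 m in y (made-up values; substitute the real dimensions of your floor plan):

import matplotlib.pyplot as plt
import matplotlib.image as mpimg

img = mpimg.imread('floorplan.png')  # hypothetical filename
# extent = (left, right, bottom, top) maps image pixels to world coordinates
plt.imshow(img, extent=(0, 60, 0, 15))
# stand-in estimates; in practice, one (x, y) series per localization algorithm
xs, ys = [2.0, 10.5, 25.0, 48.0], [3.0, 3.2, 7.5, 12.0]
plt.plot(xs, ys, 'r.-', label='estimated positions')
plt.legend()
plt.show()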

Related

Python library for rotation and translation on a seesaw-like object

I'd like to do calculations on the 3D positions of both ends of a rigid object (see the spots where the children usually sit in the image below). The geometrical situation of the rigid object corresponds to a seesaw. Rotation has to be possible on three axes and can be represented by a ball bearing, which initially is located at the middle of the rod.
The input to the desired function should consist of three rotations performed at the position of the ball bearing, three translations along the bearing and the initial 3D positions of both ends of the object.
The output needs to be the calculated new 3D positions of both ends.
Does anyone know a python library that does provide functionalities regarding this issue?
I've just found out that Open3D has implemented exactly what I was looking for. As it works with point clouds, all that needs to be done is to create two points in 3D space, then define a rotation matrix and the center (= ball bearing in this case). The function "rotate" then calculates the altered positions of the rotated points. This also works for translation.
# Rotation
pcd.rotate(r, center=(0, 0, 0))
# Translation
pcd.translate(t)
With r = rotation matrix (3x3) and t = translation vector (3x1).
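A minimal end-to-end sketch of the above, assuming the two ends sit at (-1, 0, 0) and (1, 0, 0) with the ball bearing at the origin (the rotation angle and translation are made-up example values):

import numpy as np
import open3d as o3d

# the two ends of the seesaw as a two-point cloud
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]))

# rotation about the ball bearing (here: 30 degrees about the z-axis)
r = o3d.geometry.get_rotation_matrix_from_xyz((0, 0, np.pi / 6))
pcd.rotate(r, center=(0, 0, 0))

# translation of the bearing
t = np.array([0.5, 0.0, 0.0])
pcd.translate(t)

print(np.asarray(pcd.points))  # new 3D positions of both ends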

How to consistently number contours in an image time series?

I have automated the task of measuring plant area over time to extrapolate growth rate using an image time-series and the following two methods: (1) Python + ArcGIS, and (2) Python + OpenCV.
In the first method, ArcGIS allows me to create a vector grid on the image. Each cell of the grid contains a single plant, so I number each cell starting from top-left to bottom-right. After creating a binary image in which plant pixels == 1 and everything else == 0, I apply Zonal Statistics to find my plant area. In this way the plant numbers stay consistent because I use the same grid over all the images in the time series, but it requires manual intervention.
In the second method, I use OpenCV to find plants via contours. The numbering of each contour is done automatically based on its centroid coordinates and bounding box dimensions. Currently I have them sorted 'top-to-bottom', but it obviously isn't as perfect a sort as the manually-made grid. In addition, plant #1 may not stay plant #1 in the second or third image because each plant grows and moves over the course of the experiment, and new plants emerge and change the total number of contours (images are taken every hour for up to several weeks). Therefore, I cannot compare plant #1 in the first image and plant #1 in subsequent images because they may not even be the same plant.
How can I consistently number the same plant through the entire time-series using the second method? I considered associating centroids in subsequent images to (x,y) coordinates in the previous image that were the most similar (once the data is in tabular form), but this would fail to provide an updated numbered contour image.
The solution to this problem lay in automatic circle detection via the OpenCV Hough Transform function (cv2.HoughCircles()), finding the resulting Hough circle centroids and then overlaying them on the original RGB image to create a reference key. As I did not have an image without any plants in it at all, I adapted the method so it found the correct number of origins, but the result would be better in an image with no plants.
I converted the resulting CSV files for the Hough circles reference image (columns: OID, X, Y) and plant contours (columns: CID, X, Y, Area etc.) to GeoPandas GeoDataFrames and used SciPy's cKDTree to combine them through a nearest-neighbour algorithm.
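A minimal sketch of that matching step with plain SciPy (the coordinates are made-up; in practice they come from the two CSV files described above):

import numpy as np
from scipy.spatial import cKDTree

ref = np.array([[10.0, 12.0], [40.0, 11.0], [70.0, 13.0]])  # Hough-circle origins (the key)
obs = np.array([[41.2, 10.5], [9.4, 12.3], [69.8, 14.1]])   # contour centroids, current frame

tree = cKDTree(ref)
dist, idx = tree.query(obs)  # nearest reference origin for each contour
# idx[i] is the stable plant number of contour i, dist[i] the match distance
print(idx, dist)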
Special thanks to JHuw's answer in https://gis.stackexchange.com/questions/222315/geopandas-find-nearest-point-in-other-dataframe as Shapely's nearest_points function did not work for me.

Calculate 3D Plane that Rests on a 3D Surface

I have about 300,000 points defining my 3D surface. I would like to know, if I dropped an infinitely stiff sheet onto my 3D surface, what the equation of that plane would be. I know I need to find the 3 points the sheet would rest on, as that defines a plane, but I'm not sure how to find my 3 points out of the ~300,000. You can assume this 3D surface is very bumpy and that this sheet will most likely lie on 3 "hills".
Edit: Some more background knowledge. This is point cloud data for a scan of a 3D surface which is nearly flat. What I would like to know is how this object would rest if I flipped it over and put it on a completely flat surface. I realize that this surface may be able to rest on the table in various different ways depending on the density and thickness of the object but you can assume the number of ways is finite and I would like to know all of the different ways just in case.
Edit: After looking at some point cloud libraries, I'm thinking of computing the curvature using a kd-tree (using SciPy) and only looking at regions with negative curvature; there should be 3+ such regions, so some combinatorics plus iteration should give the correct 3 points for the plane(s).
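As an alternative to the curvature idea, offered only as a sketch: every plane a rigid sheet can touch without intersecting the surface is a supporting plane of the point cloud, so the upward-facing facets of the convex hull (all other points lie below them) enumerate the candidate resting planes directly:

import numpy as np
from scipy.spatial import ConvexHull

pts = np.random.rand(300000, 3)  # stand-in for the scanned surface points
hull = ConvexHull(pts)

# each row of hull.equations is [nx, ny, nz, d] with an outward normal: n.x + d = 0
upward = hull.equations[:, 2] > 0  # facets whose outward normal points up (+z)
for nx, ny, nz, d in hull.equations[upward]:
    print(f"candidate plane: {nx:.3f}x + {ny:.3f}y + {nz:.3f}z + {d:.3f} = 0")

Which of those candidates are stable still depends on the density and thickness considerations mentioned in the question.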

Determining the pattern orientation of a spatiotemporal image

How can I obtain the average direction of the pattern shown in the figure below? It is the direction of the red arrow relative to the yellow (horizontal) line. Any ideas for an approach? I couldn't figure out a way to approach it. This is a spatio-temporal image created from a video. Thank you.
Here is my original image:
The simplest approach would be to compute the gradient vector (x derivative and y derivative) and find its direction at each pixel (atan2(y, x)). The average orientation is what you want, not the average direction (opposite directions would cancel out). So reduce the angles modulo pi, then average across the image.
The best way to compute image gradients is through the Gaussian gradients.
The structure tensor is the more robust way of accomplishing this. In short, it computes local averages of the gradient vector to reduce the effect of noise. It does this by computing the outer product of the gradient vector with itself, which produces a symmetric matrix. The individual components of this matrix can then be locally averaged (i.e. apply a smoothing filter). This is similar to computing the angle of the vectors, doubling the angles to make vectors in opposite directions equal, then averaging them.
Note that you can apply either of these solutions in 3D (you can think of the video data as 2D + time = 3D). That way, you compute both the speed and the direction of motion within the 2D frame, rather than just the speed along the direction in which you extracted the 2D image you show in the question. The image gradient and the concept of the structure tensor easily extend to 3D. This 3D approach is similar to the approach by Lucas-Kanade for optical flow. If you follow that link to Wikipedia, you'll see it uses the structure tensor in 2D, and adds gradients along the time dimension.
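A sketch of the structure tensor recipe with SciPy (the two sigma values are arbitrary choices: the first sets the gradient scale, the second the local-averaging scale):

import numpy as np
from scipy import ndimage

img = np.random.rand(256, 256)  # stand-in for the spatio-temporal image

# Gaussian gradients
gx = ndimage.gaussian_filter(img, sigma=1.0, order=(0, 1))  # derivative along x (columns)
gy = ndimage.gaussian_filter(img, sigma=1.0, order=(1, 0))  # derivative along y (rows)

# locally averaged structure tensor components
Jxx = ndimage.gaussian_filter(gx * gx, sigma=5.0)
Jxy = ndimage.gaussian_filter(gx * gy, sigma=5.0)
Jyy = ndimage.gaussian_filter(gy * gy, sigma=5.0)

# dominant gradient orientation per pixel, in (-pi/2, pi/2]; the pattern's
# stripes run perpendicular to this angle
theta = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)

# one global estimate: average the tensor over the whole image first
theta_global = 0.5 * np.arctan2(2 * Jxy.mean(), (Jxx - Jyy).mean())
print(np.degrees(theta_global))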
It might be useful to try a Fourier transform.
In your case you should get two vertical lines in the middle of the transformed image, corresponding to the information encountered when traveling vertically through the image.
On the other hand, there shouldn't be a horizontal line, since when traveling horizontally through the image there is little information (little change).
For example, you can use this online site to play with Fourier transforms:
https://www.ejectamenta.com/Fourifier-fullscreen/
It might sound like the problem remains the same but in fact it is much easier now.
The 2D pattern is converted into dominant lines which are quite easy to find in the transformed image.
For example, you can search for the strongest pixels in the transformed image and simply determine whether they are more likely to form a horizontal or a vertical line, or determine the angle of the dominant line. Then rotate by 90 degrees.
For example see this image of wood grain and the resulting transformed image:
And don't worry about the two lines: the image is symmetric, so just ignore three quarters of it and look in one quadrant.
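A sketch of finding the dominant line's angle in the spectrum with numpy (stand-in data; in practice you may want to mask a small neighbourhood around the centre rather than just the single DC pixel):

import numpy as np

img = np.random.rand(256, 256)  # stand-in for the spatio-temporal image

mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
cy, cx = mag.shape[0] // 2, mag.shape[1] // 2
mag[cy, cx] = 0  # suppress the DC component

# strongest frequency component and its angle from the centre
i, j = np.unravel_index(np.argmax(mag), mag.shape)
angle = np.degrees(np.arctan2(i - cy, j - cx))
# the spatial pattern is perpendicular to the spectral line: rotate by 90 degrees
print(angle)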
I recommend giving the Hough transform a go; it is available in OpenCV. The Hough transform maps lines to angles and might be useful in your case.
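A sketch with OpenCV (the filename is hypothetical, and the Canny and accumulator thresholds are arbitrary values that will need tuning):

import cv2
import numpy as np

img = cv2.imread('pattern.png', cv2.IMREAD_GRAYSCALE)  # hypothetical filename
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)
if lines is not None:
    thetas = lines[:, 0, 1]  # angle of each detected line, in radians
    print(np.degrees(np.median(thetas)))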

How to draw coastlines over a custom map without resampling

I would like to display a satellite image (preferably using python, but other solutions are welcome). It consists of a floating-point parameter P, with dimension NxM, and each pixel is geolocated by latitude and longitude fields (each of size NxM). So I would like to:
(1) create an image of parameter P with an associated color scale. The image should not be resampled, so it should have dimension NxM
(2) display coastlines over this image
Currently, I can do (1) using PIL. I can also use the basemap library to display an image and the coastlines, but I don't know how to do it without reprojection, staying in the image's native projection with size NxM.
Edit: the parameter P does not contain any information about the coastline. Only the location (lat, lon) of the pixels should be used to overlay the coastline. The coordinates for the coastline can be obtained from gshhs for example. gshhs is actually used in the basemap library.
If all you're trying to do is enhance the boundaries between land and water, it might be good to use a high-pass filter.
For instance, start out with Lena:
and apply a highpass filter:
then overlay the highpass on top of the original:
(more details and examples can be found here).
You can find filters in scipy here.
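A sketch of the low-pass/high-pass split with SciPy (sigma is an arbitrary choice controlling how wide the enhanced boundaries are):

import numpy as np
from scipy import ndimage

img = np.random.rand(256, 256)                  # stand-in for the parameter image P
low = ndimage.gaussian_filter(img, sigma=3.0)   # low-pass
high = img - low                                # high-pass: what the low-pass removed
sharpened = img + high                          # overlay the high-pass on the original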
For those in the community still looking for an answer to this question, the method which I am currently implementing (for v. similar purposes - I'm trying to test the geolocation of satellite data) requires a landmask.
There are landmask datasets available all over the place online, each with different rules and characteristics. I am working with netCDF4 data in python and my landmask is a gridded .nc dataset in which ocean elements are valued as 1 and land elements are valued as 0.
Iterating through my satellite data I multiply each latitude and longitude value by the number of elements per degree in the landmask. In my case there are 120 elements per degree in lat/lon, so
lon_inds = (lons*120).astype(int)
lat_inds = (lats*120).astype(int)
A more general way of writing this would be to replace 120 with
len(lons) / 360
len(lats) / 180
respectively (where lons and lats here refer to the landmask's coordinate axes). Both of these operations run nearly instantaneously on numpy arrays (which is what the python netCDF4 module returns).
Now I create a mask of my own: it must have the same dimensions as the data array (for those not intimately acquainted with satellites, the data, lats and lons arrays will all have identical dimensions):
my_mask = np.zeros(data.shape, dtype=int)
Now all we need to do is replace values in the mask where there is a coastline. This is done by iterating through the lat_inds and lon_inds arrays, looking up the landmask value at
landmask[lon_inds[i,j],lat_inds[i,j]]
and changing the value of
mask[i,j]
to 1 if any of the neighbors
landmask[lon_inds[i,j]-1,lat_inds[i,j]]
landmask[lon_inds[i,j]+1,lat_inds[i,j]]
landmask[lon_inds[i,j],lat_inds[i,j]-1]
landmask[lon_inds[i,j],lat_inds[i,j]+1]
are not equal to 0 (a smoother coastline could be generated by also including the diagonal neighboring cells, but this should not be necessary, as your landmask dataset should hopefully have sharper spatial resolution than your satellite data).
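Putting the pieces together, a sketch of the whole loop, reading the description above as: a pixel lies on the coastline when its landmask cell is land (0) and at least one 4-neighbour is ocean (data, lats, lons and landmask are assumed to be loaded as described, with lon/lat values mapping to valid non-negative landmask indices):

import numpy as np

# assumed loaded as described: data, lats, lons (NxM satellite arrays) and
# landmask (gridded mask, ocean == 1, land == 0, 120 cells per degree)
lon_inds = (lons * 120).astype(int)
lat_inds = (lats * 120).astype(int)

my_mask = np.zeros(data.shape, dtype=int)
for i in range(data.shape[0]):
    for j in range(data.shape[1]):
        x, y = lon_inds[i, j], lat_inds[i, j]
        # coastline: this cell is land but at least one 4-neighbour is ocean
        if landmask[x, y] == 0 and (landmask[x - 1, y] or landmask[x + 1, y] or
                                    landmask[x, y - 1] or landmask[x, y + 1]):
            my_mask[i, j] = 1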
