Plotting gridded data using KML - python

We are beginning a project to visualize the results of a finite volume (FV) calculation in Google Earth. The FV data is essentially 2D (lat/long) data consisting of a Cartesian array of values (sea surface height, for example). Each value should be mapped to a color from some colormap and then displayed as a single mesh cell in a gridded array suitable for Google Earth. The Cartesian array could be 100x100 or larger.
My question is: do we construct polygons for each mesh cell C_{ij} in the array, assigning a color corresponding to the q_{ij} value for that mesh cell? This would seem to create a huge KML file if the coordinates of the four corners of every mesh cell must be described explicitly (e.g. 10,000 polygons for a 100x100 array).
Or are there KML tools we could use that would allow us to specify, for example, the lower and upper coordinates of the array, a generic mesh cell size (e.g. dX, dY values), and the array of q data (or, equivalently, colors) that should be used to fill the "patch"?
Alternatively, we could create an image file containing, for example, a rendered image of our data array (created by some other means), and then reference it from the KML file.
Our aim is to use PyKML for this project.
Any suggestions would be very helpful.

After much digging around, I think I now have a better understanding of what Google Earth can and cannot do (or is not designed to do). It seems that Google Earth is not designed as a visualization tool for numerical data. This does not mean it cannot be done, but rather that one must create the image files elsewhere and then overlay them onto Google Earth. For example, this link provides instructions for visualizing the output from a fire modeling code:
http://www.openwfm.org/wiki/Visualization_in_Google_Earth
The instructions there show how pseudocolor plots can be used, in at least one special case, to visualize output in Google Earth.
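To make the overlay approach concrete, here is a minimal sketch, assuming a 2D NumPy array q and known lat/long bounds (the array, bounds, colormap and file names are all placeholders): matplotlib renders the array to a colormapped PNG, and pykml's element factory wraps it in a KML GroundOverlay.

import numpy as np
import matplotlib.pyplot as plt
from lxml import etree
from pykml.factory import KML_ElementMaker as KML

# Render the data array to a colormapped PNG (imsave writes just the pixels,
# with no axes or frame).
q = np.random.rand(100, 100)   # placeholder for the FV solution array
plt.imsave('overlay.png', q, cmap='jet')

# Pin the image to the array's lat/long bounding box with a GroundOverlay.
doc = KML.kml(
    KML.GroundOverlay(
        KML.name('FV solution'),
        KML.Icon(KML.href('overlay.png')),
        KML.LatLonBox(
            KML.north('42.0'), KML.south('41.0'),   # placeholder bounds
            KML.east('-70.0'), KML.west('-71.0'),
        ),
    )
)
with open('overlay.kml', 'wb') as f:
    f.write(etree.tostring(doc, pretty_print=True))

Google Earth stretches the PNG across the LatLonBox, so the KML stays tiny no matter how large the data array is.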

Related

How do I label the pointcloud if I have the 3D boxes of the objects annotated?

I am trying to annotate my point cloud data. I found a number of tools, but could only access the demo version of the 3D Point Cloud tool by Supervisely. Once the annotation is complete (by drawing 3D boxes around the objects), the output is a JSON file containing the class and the global position, orientation and dimensions of each box. How do I use this file to assign labels to the points inside these 3D boxes? I want the output in either .pcd or .bin format.
The JSON output is as follows:
[{"id":36698,"name":"vel_1558647824006782.pcd","annotations":[{"className":"Car","geometryType":"cuboid","classId":957,"geometry":{"position":{"x":9.539855967959713,"y":18.342023271012913,"z":0.43944128482454614},"rotation":{"x":0,"y":0,"z":0},"dimensions":{"x":5.691547052392309,"y":1.6625674002633986,"z":1.757779283656416}}},{"className":"ground","geometryType":"cuboid","classId":958,"geometry":{"position":{"x":28.890481890779242,"y":8.463823613489927,"z":-1.0314986175132965},"rotation":{"x":0,"y":0,"z":0},"dimensions":{"x":96.34273328620523,"y":18.714553504372063,"z":1.0544185995045456}}}]}]
I thought of using a CropBox filter from PCL, but is there any other way around it? It would also help if someone could point me to other point cloud annotation tools that might be better suited to the problem.
I was able to write a C++ program to read the JSON file and use PCL's CropBox filter to solve the problem. The methodology is as follows:
1) Read the JSON file using nlohmann/json:
#include <fstream>
#include <nlohmann/json.hpp>
using json = nlohmann::json;

std::ifstream ifs("somepath.json");
json j = json::parse(ifs);
2) Extract the "position" (centroid of the cuboid), the orientation of the cuboid, the "dimensions" of the cuboid, and the "className" for each box. The code below shows a way to extract the position data into a std::vector (i iterates over the boxes):
std::vector<float> position {
    j[0]["annotations"][i]["geometry"]["position"]["x"].get<float>(),
    j[0]["annotations"][i]["geometry"]["position"]["y"].get<float>(),
    j[0]["annotations"][i]["geometry"]["position"]["z"].get<float>()};
3) Get the max and min (x,y,z) coordinates of the vertices of the box; these go as input to the CropBox filter. (Note: the min/max point does not have to coincide with a single vertex: Xmin is the minimum x over all 8 vertices, Ymin is the minimum y over all 8 vertices, and so on.)
4) Use the CropBox filter from PCL. This allows you to get the indices of all the points inside the given box. Examples can be found here and here.
5) Depending upon the class of the box, assign different colors to the points at those indices of the point cloud.
6) Save the point cloud.
This is a generalized way of labeling every point of a point cloud from an input JSON file containing the position, dimensions and orientation of the 3D boxes; a Python sketch of the same pipeline is given below.
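If you would rather stay in Python, the same pipeline can be sketched with Open3D in place of PCL's CropBox. This is a rough sketch, not the C++ program above: the output file name and the per-class colors are made up, and the rotation angles are assumed to be in radians.

import json
import numpy as np
import open3d as o3d

with open("somepath.json") as f:
    boxes = json.load(f)[0]["annotations"]

pcd = o3d.io.read_point_cloud("vel_1558647824006782.pcd")
colors = np.zeros((len(pcd.points), 3))            # unlabeled points stay black
palette = {"Car": (1.0, 0.0, 0.0), "ground": (0.0, 1.0, 0.0)}  # made-up colors

for box in boxes:
    g = box["geometry"]
    center = [g["position"][k] for k in ("x", "y", "z")]
    extent = [g["dimensions"][k] for k in ("x", "y", "z")]
    R = o3d.geometry.get_rotation_matrix_from_xyz(
        [g["rotation"][k] for k in ("x", "y", "z")])
    obb = o3d.geometry.OrientedBoundingBox(center, R, extent)
    # indices of all points that fall inside this oriented box
    idx = obb.get_point_indices_within_bounding_box(pcd.points)
    colors[idx] = palette.get(box["className"], (0.5, 0.5, 0.5))

pcd.colors = o3d.utility.Vector3dVector(colors)
o3d.io.write_point_cloud("labeled.pcd", pcd)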

How to draw paths with Bezier curves in OpenCV (Python) created with Paper.js

I have many paths associated with images, drawn with a Paper.js application, which produces a JSON object of the format:
[[[x,y], [handle1_x, handle1_y], [handle2_x, handle2_y]], ...].
Thus, for each point on a polygon ("path"), I have x, y and the handles that control the curvature ("bezier").
How can I turn these paths into contours in Python, so that I can draw them over an image (to get a mask)?
A contour in OpenCV is an array of points, std::vector<cv::Point>.
So I guess the easiest way would be to output the list of all points along the curve into your JSON file (using Paper.js path.getPointAt(offset), with offset running from 0 to path.length, to "get all points/pixels" of the curve) and then build a std::vector<cv::Point> in OpenCV.
This would create bigger JSON files, though (the list of curve pixels is longer than the list of curve points and handles), but unless you have a really huge amount of data it should not be a problem. If it is a problem, you could also use something like OpenCV-Beziers (I have never used it) to compute the list of points on the OpenCV side.
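If you want to avoid inflating the JSON, you can also evaluate the cubic Beziers in Python yourself. Below is a minimal sketch, assuming each entry is [point, handleIn, handleOut] with handles stored relative to their anchor (which is how Paper.js serializes segments) and a closed path; the file name and mask size are placeholders.

import json
import numpy as np
import cv2

def cubic_bezier(p0, p1, p2, p3, n=20):
    # sample n points along one cubic Bezier segment
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t)**3 * p0 + 3 * (1 - t)**2 * t * p1
            + 3 * (1 - t) * t**2 * p2 + t**3 * p3)

def path_to_contour(segments, n=20):
    segs = [np.asarray(s, dtype=float) for s in segments]
    pts = []
    for a, b in zip(segs, segs[1:] + segs[:1]):  # wrap around to close the path
        p0, p3 = a[0], b[0]        # anchors
        p1 = a[0] + a[2]           # anchor + handleOut
        p2 = b[0] + b[1]           # next anchor + handleIn
        pts.append(cubic_bezier(p0, p1, p2, p3, n))
    return np.vstack(pts).astype(np.int32)

with open("paths.json") as f:                    # placeholder file name
    segments = json.load(f)

mask = np.zeros((512, 512), dtype=np.uint8)      # placeholder image size
cv2.fillPoly(mask, [path_to_contour(segments)], 255)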

How to flatten 3D object surface into 2D array?

I've got 3D objects which are represented as numpy arrays.
How can I unfold the "surface" of such objects to get a 2D map of values (I don't care about inner values)?
It's similar to unwrapping a globe's surface, but the shape varies from case to case.
This is essentially a UV-unwrapping problem. Each triangle on the model is a flat surface that can be mapped to a 2D plane, so the most naive solution, without any assumed structure, would be to:
for triangle in mesh:
    # project the triangle onto the plane defined by its normal
    # to avoid stretching
This solution is not ideal, as it places all of the UVs on top of each other. The next step would be to spread the triangles out to fill a certain space; this is the layout stage, which defines how the vertices are laid out in 2D space. Usually it is ideal to fit the UVs within a unit square, as this allows for easy UV mapping from a single image.
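A minimal NumPy sketch of this naive per-triangle projection (no mesh library assumed): build an orthonormal basis (u, v) in the triangle's plane and express each vertex in that basis, which preserves lengths and angles within the triangle.

import numpy as np

def project_triangle(tri):
    # tri: (3, 3) array of vertices; returns (3, 2) in-plane coordinates
    a, b, c = tri
    n = np.cross(b - a, c - a)              # face normal
    n = n / np.linalg.norm(n)
    u = (b - a) / np.linalg.norm(b - a)     # first in-plane axis
    v = np.cross(n, u)                      # second in-plane axis (unit length)
    return np.array([[(p - a) @ u, (p - a) @ v] for p in tri])

Laying out the resulting 2D triangles without overlap is then the separate layout stage described above.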
Option 2:
Surround the object with a known 2D-mapped shape and project each triangle onto that shape based on its normal. This provides a mechanism for unwrapping the UVs in a structured manner. An example would be to project onto a surrounding cube.
Option 3:
Consult academic papers and open-source libraries/tools like Blender:
https://wiki.blender.org/index.php/Doc:2.4/Manual/Textures/Mapping/UV/Unwrapping
Blender uses methods like those described above to unwrap arbitrary geometry. There are other methods to accomplish this, as described on the Blender unwrap page. The nice thing about Blender is that you can consult the source code for the implementation of the UV unwrap methods.
Hope this is helpful.

Given position and rotation, how do I translate coordinates from one frame to another? (using Baxter robot, ROS and Python)

What I want to do, essentially, is transform a set of coordinates from one frame to another. My camera is mounted on my robot's hand (whose position and orientation I know), it is viewing a certain object, and I am reading coordinates in the camera's frame.
How do I convert those coordinates to my base frame? I know that I can first reverse the orientation using the inverse rotation matrix and then apply some kind of translation matrix, but how do I obtain that matrix? And once the orientation is corrected, how do I do the translation?
Note: I think this is better suited as a comment, but I lack the reputation points.
Assuming both your frames are available in ROS, i.e. if you run (as specified here):
rosrun tf tf_echo /source_frame /target_frame
you should see the translation and rotation between the two frames. Then you could use lookupTransform to obtain this information inside your code (see the TF tutorial).
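A minimal Python sketch of that lookup inside a node (the frame names /base and /camera are placeholders for your actual frames):

import rospy
import tf

rospy.init_node('frame_converter')
listener = tf.TransformListener()
listener.waitForTransform('/base', '/camera', rospy.Time(0), rospy.Duration(4.0))
# trans is (x, y, z) and rot is a quaternion (x, y, z, w)
(trans, rot) = listener.lookupTransform('/base', '/camera', rospy.Time(0))
# A PointStamped observed in the camera frame can also be transformed
# directly: listener.transformPoint('/base', point_in_camera)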
Hope this helps.

How to draw coastlines over a custom map without resampling

I would like to display a satellite image (preferably using Python, but other solutions are welcome). It consists of a floating-point parameter P with dimensions NxM, and each pixel is geolocated by latitude and longitude fields (each of size NxM). So I would like to:
(1) create an image of parameter P with an associated color scale. The image should not be resampled, so it should have dimension NxM
(2) display coastlines over this image
Currently, I can do (1) using PIL. I can also use the basemap library to display an image and the coastlines, but I don't know how to do it without reprojection, i.e. staying in the image's native projection with size NxM.
Edit: the parameter P does not contain any information about the coastline; only the locations (lat, lon) of the pixels should be used to overlay the coastline. The coastline coordinates can be obtained from GSHHS, for example (GSHHS is actually what the basemap library uses).
If all you're trying to do is enhance the boundaries between land and water, it might be good to use a high-pass filter.
For instance, start out with a test image (the classic Lena), apply a high-pass filter, and then overlay the high-pass result on top of the original (more details and examples can be found here).
You can find filters in scipy here.
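As a sketch of that idea in scipy: subtracting a Gaussian blur (a low-pass) from the original acts as a high-pass filter; the sigma here is arbitrary.

import numpy as np
from scipy import ndimage

def highpass(img, sigma=3):
    img = img.astype(float)   # avoid uint8 wrap-around on subtraction
    return img - ndimage.gaussian_filter(img, sigma)

# overlay the extracted edges back onto the original to sharpen boundaries
# enhanced = np.clip(img + highpass(img), 0, 255)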
For those in the community still looking for an answer to this question: the method I am currently implementing (for very similar purposes; I'm trying to test the geolocation of satellite data) requires a landmask.
There are landmask datasets available all over the place online, each with different rules and characteristics. I am working with netCDF4 data in Python, and my landmask is a gridded .nc dataset in which ocean elements are valued 1 and land elements are valued 0.
Iterating through my satellite data, I multiply each latitude and longitude value by the number of elements per degree in the landmask. In my case there are 120 elements per degree in lat/lon, so
lon_inds = (lons*120).astype(int)
lat_inds = (lats*120).astype(int)
A more general way of writing this would be to substitute len(lons)/360 and len(lats)/180, respectively, for the hard-coded 120. Both of these index computations can be done nearly instantaneously using numpy arrays (which is what the Python netCDF4 module returns).
Now I create a mask of my own: it must have the same dimensions as the data array (for those not intimately acquainted with satellites, the data, lats and lons arrays will all have identical dimensions):
my_mask = np.zeros(data.shape, dtype=int)
Now all we need to do is replace values in the mask where there is a coastline. This is done by iterating through the lat_inds and lon_inds arrays, looking up the value in the landmask of
landmask[lon_inds[i,j],lat_inds[i,j]]
and changing the value of
mask[i,j]
to 1 if any of the neighbors
landmask[lon_inds[i,j]-1,lat_inds[i,j]]
landmask[lon_inds[i,j]+1,lat_inds[i,j]]
landmask[lon_inds[i,j],lat_inds[i,j]-1]
landmask[lon_inds[i,j],lat_inds[i,j]+1]
are not equal to 0. (Of course, a smoother coastline can be generated by also adding in the diagonal neighboring cells, but this should not be necessary, since you should ideally be using a landmask dataset with finer spatial resolution than your satellite data.)
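Putting the steps above together, here is a minimal sketch. The array names follow the answer, the landmask is assumed to be indexed [lon, lat] as written above, and edge indices are not bounds-checked. Note that a check on the center pixel is added: the answer states only the neighbor condition, but without requiring the pixel itself to be land (0), every open-water pixel would be marked as well.

import numpy as np

lon_inds = (lons * 120).astype(int)
lat_inds = (lats * 120).astype(int)

my_mask = np.zeros(data.shape, dtype=int)
for i in range(data.shape[0]):
    for j in range(data.shape[1]):
        lo, la = lon_inds[i, j], lat_inds[i, j]
        neighbors = (landmask[lo - 1, la], landmask[lo + 1, la],
                     landmask[lo, la - 1], landmask[lo, la + 1])
        # a coastline pixel: land (0) in the landmask with at least one
        # water (nonzero) neighbor
        if landmask[lo, la] == 0 and any(n != 0 for n in neighbors):
            my_mask[i, j] = 1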
