I'm currently trying to map a set of density coordinates onto a normalized reference. For example, say I have the following densities for 2 systems with the following reference:
System 1
|X|Y|Z|ESP|
|-|-|-|---|
|0|0|0|0.1|
|0|0|1|0.2|
|0|0|2|0.3|
|1|0|0|0.2|
|2|0|0|0.3|
System 2
|X|Y|Z|ESP|
|-|-|-|---|
|0|0|0|0.5|
|0|0|1|0.10|
|0|0|2|0.20|
|1|0|0|0.10|
|2|0|0|0.20|
|0|1|0|0.30|
|0|2|0|0.50|
Normalized Reference Map
|X|Y|Z|
|-|-|-|
|0|0|0|
|0|0|1|
|0|0|2|
|0|0|3|
|1|0|0|
|2|0|0|
|3|0|0|
|0|1|0|
|0|2|0|
|0|3|0|
I want to map the values of the independent systems onto the reference. Reference points that don't appear in a given system can be assigned the average of the surrounding ESP values. I've been trying various SciPy interpolation methods such as map_coordinates and the routines in scipy.interpolate, but I'm having no luck getting everything to work correctly. I was hoping to see if anyone had any thoughts here.
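If it helps, here is a rough sketch of one way to do this without the interpolation machinery at all, using scipy.spatial.cKDTree: exact matches keep their measured ESP, and reference points missing from a system get the average of their k nearest measured values. The function name map_to_reference and the choice k=3 are mine, not from the question:

```python
import numpy as np
from scipy.spatial import cKDTree

def map_to_reference(sys_xyz, sys_esp, ref_xyz, k=3):
    """Map one system's ESP values onto the shared reference grid."""
    sys_xyz = np.asarray(sys_xyz, dtype=float)
    sys_esp = np.asarray(sys_esp, dtype=float)
    ref_xyz = np.asarray(ref_xyz, dtype=float)

    tree = cKDTree(sys_xyz)
    dist, idx = tree.query(ref_xyz, k=min(k, len(sys_xyz)))
    if dist.ndim == 1:                          # k == 1 edge case
        dist, idx = dist[:, None], idx[:, None]

    mapped = sys_esp[idx].mean(axis=1)          # average of the surrounding values
    exact = dist[:, 0] == 0.0                   # reference point present in the system
    mapped[exact] = sys_esp[idx[exact, 0]]      # keep the measured value there
    return mapped

# System 1 and the reference map from the question
sys1_xyz = [(0, 0, 0), (0, 0, 1), (0, 0, 2), (1, 0, 0), (2, 0, 0)]
sys1_esp = [0.1, 0.2, 0.3, 0.2, 0.3]
ref_xyz = [(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 0, 3), (1, 0, 0),
           (2, 0, 0), (3, 0, 0), (0, 1, 0), (0, 2, 0), (0, 3, 0)]

print(map_to_reference(sys1_xyz, sys1_esp, ref_xyz))
```

A k-nearest average matches the "average of the surrounding ESP values" rule more directly than map_coordinates, which expects data on a regular grid.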
I have a dataset of georeferenced Flickr posts (ca. 35k, picture below) and an unrelated dataset of georeferenced polygons (ca. 40k, picture below); both are currently pandas DataFrames. The polygons do not cover the entire area where Flickr posts are possible. I am having trouble understanding how to sort the many points into the many polygons (or check whether they are close). In the end I want a map with the points from the Flickr data placed in polygons and colored by an attribute (Tag). I am trying to do this in Python. Do you have any ideas or recommendations?
(Figures omitted: point DataFrame and polygon DataFrame.)
Since you don't have any sample data to load and play with, my answer will be descriptive, outlining some possible strategies for approaching the problem you are trying to solve.
I assume that these polygons are probably addresses, and that you essentially want to place each geolocated Flickr post into the nearest best-matching polygon.
First of all, you need to identify or acquire information on the precision of those Flickr geolocations: how far off could they be, given the numerous sources of error (the reasons behind those errors are not your concern, but the amount of error is)? This gives you a circle of confusion (2D) or, more likely, a sphere of confusion (3D). Why 3D? You might have a Flickr post taken from some elevation in a high-rise apartment, so (x: latitude, y: longitude, z: altitude) may all need to be considered. You have to study the data and any other information available to you to decide which is the better option here (2D or 3D space of confusion).
Once you have figured out the type of N-D space of confusion, you will need a distance metric (typically just the distance between two points) and a characteristic error radius -- call the latter sigma. To be on the safe side, find all the addresses (geopolygons) within a radius of 1 sigma, and additionally within 2 sigma -- these are your candidate target addresses. For each of these addresses, compute the distances from the Flickr geolocation to the polygon's centroid and to the four corners of its rectangular outer bounding box.
You will then want to rank these addresses for each Flickr geolocation, based on the distances to all five points. You will need a way of distinguishing a Flickr point that is far from a big building's center (the distance to the centroid could be much larger than the distance to the corners) but close to its edges from one near a different property with a smaller footprint.
For each Flickr point you would thus have multiple candidate polygons with different probabilities (convert the distance-based scores into probabilities).
Thus, if you choose any Flickr location, you should be able to show the top-k geopolygons it could belong to, with probabilities.
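If you want a concrete starting point before building the full centroid/corner ranking, here is a hedged sketch using geopandas (version 0.10 or later for sjoin_nearest). The DataFrame and column names (flickr_df, polygon_df, lon, lat, geometry, Tag) are assumptions -- adjust them to your data:

```python
import geopandas as gpd

# Build GeoDataFrames from the two pandas DataFrames.
points_gdf = gpd.GeoDataFrame(
    flickr_df,
    geometry=gpd.points_from_xy(flickr_df["lon"], flickr_df["lat"]),
    crs="EPSG:4326",
)
polygons_gdf = gpd.GeoDataFrame(polygon_df, geometry="geometry", crs="EPSG:4326")

# Re-project to a metric CRS so sigma can be expressed in metres
# (EPSG:3857 is a placeholder; pick one suited to your region).
points_m = points_gdf.to_crs(epsg=3857)
polygons_m = polygons_gdf.to_crs(epsg=3857)

sigma = 50  # metres -- your estimate of the geolocation error

matched = gpd.sjoin_nearest(
    points_m,
    polygons_m[["Tag", "geometry"]],
    how="left",
    max_distance=2 * sigma,   # only consider polygons within 2 sigma
    distance_col="dist_m",
)

# Points with no polygon within 2 sigma keep NaN in "Tag"/"dist_m"; the rest
# carry the nearest polygon's Tag, ready for colouring the final map.
matched.plot(column="Tag", markersize=2, legend=True)
```

sjoin_nearest only gives the single nearest polygon; the distance column it returns is what you would feed into the ranking/probability step described above.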
For visualization, I would suggest you use HoloViews with Datashader, since that should cope with the sheer number of points in your data. Also, please take a look at leafmap (or geemap).
References
holoviews: https://holoviews.org/
datashader: https://datashader.org/
leafmap: https://leafmap.org/
geemap: https://geemap.org/
I have a .obj and .ply file of a 3D model.
What I want to do is read this 3D model file and see if a list of 3D coordinates are either inside or outside the 3D model space.
For example, if the 3D model is a sphere with radius 1, (0,0,0) would be inside (True) and (2,0,0) would be outside (False). Of course the 3D model I'm using is not as simple as a sphere.
I would like to add some of the methods I considered using.
Since I am using Python, I thought of using PyMesh, as its intersection feature looked promising. However, the list of coordinates I have is not a mesh file but just vectors, so that didn't seem to be the appropriate function to use.
I also found this method using ray casting. However, how to do this with PyMesh, or any other Python tool, is something I need advice on.
Cast a ray from the 3D point along the X axis and count how many intersections with the object's outer surface you find.
Depending on whether the number of intersections is even or odd, you can tell whether your point is outside or inside.
You may want to repeat this along the Y and Z axes to improve the result (sometimes your ray is coincident with planar faces and the intersection count is not reliable).
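A rough sketch of that even/odd test using trimesh (my choice of library -- the answer above doesn't name one). For each query point it fires a ray along +X and counts the crossings:

```python
import numpy as np
import trimesh

mesh = trimesh.load("model.obj", force="mesh")   # placeholder path to your file

def inside_by_ray_parity(mesh, points, direction=(1.0, 0.0, 0.0)):
    points = np.asarray(points, dtype=float)
    inside = np.zeros(len(points), dtype=bool)
    for i, origin in enumerate(points):
        locations, _, _ = mesh.ray.intersects_location(
            ray_origins=[origin], ray_directions=[direction]
        )
        # Odd number of crossings -> inside.  As the answer notes, repeat along
        # Y and Z if the ray grazes faces and the count looks unreliable.
        inside[i] = len(locations) % 2 == 1
    return inside

print(inside_by_ray_parity(mesh, [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]))
```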
Converting my comment into an answer for future readers.
You can use a convex-hull library to check whether a point is inside the hull. Most libraries use a signed distance function to determine whether the point is inside. trimesh is one of the libraries that implements this feature.
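A minimal sketch of that check with trimesh, assuming the model loads as a single watertight mesh (the file path is a placeholder):

```python
import trimesh
from trimesh.proximity import signed_distance

mesh = trimesh.load("model.ply", force="mesh")    # placeholder path to your file

points = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]

# Boolean inside/outside test (trimesh uses ray casting internally).
print(mesh.contains(points))

# Signed distance to the surface: positive inside, negative outside.
print(signed_distance(mesh, points))
```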
Hello,
In my 2D software I have two inputs available:
an array of XY points
[(x,y),(1,1),(2,2),(2,3),(-1,3),...]
and another array representing the handles of the closed 2D Bézier curve:
[((x,y),(x,y),(x,y)),
((-1,-1),(1,1),(1,2)),
((1,1),(2,2),(2,3)),
...]
How can I check whether a point is inside or outside the given curve using Python, preferably with NumPy?
I don't know the theory of Bézier curves, so if your second list of points is a compressed way of representing a Bézier curve, first sample some points of the curve with the precision you want.
Once you have n points of your curve, you can apply a simple point-in-polygon (PIP) algorithm: https://en.wikipedia.org/wiki/Point_in_polygon
I can explain in detail later if you want to know how to do it programmatically.
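Here is a hedged sketch of that sample-then-PIP idea. It assumes (a guess on my part) that each entry of the second list is (incoming handle, anchor, outgoing handle) of a closed cubic Bézier spline -- adapt the control-point bookkeeping to your software's convention. matplotlib's Path does the point-in-polygon test on the sampled outline:

```python
import numpy as np
from matplotlib.path import Path

def cubic_bezier(p0, p1, p2, p3, n=30):
    """Sample n points of one cubic Bezier segment."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def polygon_from_handles(handles, n=30):
    """Flatten the closed spline into a polygon by sampling every segment."""
    pts = []
    for i in range(len(handles)):
        _, anchor, out_handle = handles[i]
        in_handle, next_anchor, _ = handles[(i + 1) % len(handles)]
        pts.append(cubic_bezier(anchor, out_handle, in_handle, next_anchor, n))
    return np.vstack(pts)

# Assumed layout: (incoming handle, anchor, outgoing handle) per row.
handles = [((-1.0, -1.0), (1.0, 1.0), (1.0, 2.0)),
           ((1.0, 1.0), (2.0, 2.0), (2.0, 3.0)),
           ((0.0, 4.0), (-2.0, 2.0), (-2.0, 0.0))]

outline = Path(polygon_from_handles(handles), closed=True)
queries = np.array([(0.0, 1.5), (5.0, 5.0)])
print(outline.contains_points(queries))    # [ True False] for this outline
```

Increase n if the outline needs to hug the curve more tightly near your query points.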
I can't write exact code here because I would need the entire program to understand it properly, but I can offer two approaches.
The harder way is to approximate each Bézier curve by a polyline. Then, according to the wiki, you can use two techniques:
Ray casting algorithm: in short, you cast a ray from the query point out through the entire polygon and count how many edges it crosses; an odd count means the point is inside, an even count means it is outside.
Winding number algorithm: compute how many times the polygon winds around the point; if the winding number is non-zero, the point lies inside the polygon (a small NumPy sketch follows below).
The big drawback of this approach is that the accuracy depends on how closely the polyline approximates the curve.
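For reference, a minimal NumPy sketch of the winding-number test on such a polyline approximation (the square here just stands in for your sampled curve):

```python
import numpy as np

def winding_number(polygon, point):
    """Winding number of a closed polygon (N x 2 array) around a single point."""
    v = np.asarray(polygon, dtype=float) - np.asarray(point, dtype=float)
    v_next = np.roll(v, -1, axis=0)
    # Signed angle swept by each edge as seen from the point, summed over the polygon.
    cross = v[:, 0] * v_next[:, 1] - v[:, 1] * v_next[:, 0]
    dot = (v * v_next).sum(axis=1)
    return int(round(np.arctan2(cross, dot).sum() / (2.0 * np.pi)))

square = np.array([(0, 0), (4, 0), (4, 4), (0, 4)], dtype=float)
print(winding_number(square, (2, 2)))   # 1 -> inside
print(winding_number(square, (5, 5)))   # 0 -> outside
```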
The second way is to use a bitmap: render the area enclosed by the curve in one colour onto a blank image, then check the colour of the pixel under each of your points. This method is more accurate and the fastest, because you can use the GPU for the rendering.
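A quick CPU-only sketch of that bitmap idea with Pillow (a GPU render would be faster, as noted above); the square again stands in for your sampled curve:

```python
import numpy as np
from PIL import Image, ImageDraw

def rasterize(polygon, resolution=1024):
    """Render the polygon filled in white on a black image; return image + transform."""
    polygon = np.asarray(polygon, dtype=float)
    lo = polygon.min(axis=0)
    scale = (resolution - 1) / (polygon.max(axis=0) - lo).max()
    img = Image.new("L", (resolution, resolution), 0)
    ImageDraw.Draw(img).polygon([tuple((p - lo) * scale) for p in polygon], fill=255)
    return img, lo, scale

def inside_bitmap(img, lo, scale, point):
    x, y = (np.asarray(point, dtype=float) - lo) * scale
    if not (0 <= x < img.width and 0 <= y < img.height):
        return False                      # outside the rendered area entirely
    return img.getpixel((int(x), int(y))) > 0

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
img, lo, scale = rasterize(square)
print(inside_bitmap(img, lo, scale, (2, 2)))   # True
print(inside_bitmap(img, lo, scale, (5, 5)))   # False
```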
And some links related to the first approach:
https://pomax.github.io/bezierinfo/#intersections
http://web.mit.edu/hyperbook/Patrikalakis-Maekawa-Cho/node80.html
I have about 300,000 points defining my 3D surface. I would like to know, if I dropped an infinitely stiff sheet onto my 3D surface, what the equation of that plane would be. I know I need to find the 3 points the sheet would rest on, since they define a plane, but I'm not sure how to find those 3 points out of the ~300,000. You can assume this 3D surface is very bumpy and that the sheet will most likely rest on 3 "hills".
Edit: Some more background. This is point-cloud data from a scan of a 3D surface which is nearly flat. What I would like to know is how this object would rest if I flipped it over and put it on a completely flat surface. I realize the object may be able to rest on the table in several different ways depending on its density and thickness, but you can assume the number of ways is finite, and I would like to know all of them just in case.
Edit: After looking at some point-cloud libraries, I'm thinking of computing the curvature using a k-d tree (with SciPy) and only looking at regions of negative curvature; there should be 3+ such regions, so some combinatorics plus iteration should give the correct 3 points for the plane(s).
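For what it's worth, here is a hedged sketch of a different route from the curvature idea: any plane the rigid sheet can rest on is a supporting plane of the point cloud, so the candidates are the upward-facing facets of the convex hull (random points stand in for the real scan):

```python
import numpy as np
from scipy.spatial import ConvexHull

points = np.random.rand(300_000, 3)      # stand-in for the scanned surface points

hull = ConvexHull(points)
# Each row of hull.equations is [a, b, c, d] with outward normal (a, b, c) and
# a*x + b*y + c*z + d = 0 on the facet; facets with c > 0 face upward (+z).
upward = hull.equations[:, 2] > 0

candidates = [(eq, points[simplex])      # plane coefficients + the 3 contact points
              for eq, simplex in zip(hull.equations[upward], hull.simplices[upward])]

print(f"{len(candidates)} candidate resting planes out of {len(hull.simplices)} hull facets")
```

Which facet the sheet actually settles on still depends on the sheet's size and where its weight acts, but this narrows ~300,000 points down to a small set of 3-point candidates.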
I have a 3D object which is not hollow, so there are many 3D points. How would you determine which of these points (especially for a very curvaceous surface) are on the surface? I understand how to extract the points, but I need either a function somewhat like libraryUNK.surfacePoint..., which I don't know of,
or, better, an understanding of what is considered to be a surface point, for which I haven't (yet) been able to come up with a proper definition either.
I know I can do a triangulation to get the surface, but I don't see what to do next: I would be left with a set of triangles, some of which are on the surface and some of which are not, and again I have no definition of what counts as being on the surface and what does not.
It seems to me that you want to compute the convex hull of your 3D point cloud.
It's not an easy problem, but there are plenty of solutions (and algorithms) for it. One of the efficient ones is Quickhull, and there is a ConvexHull function in scipy.spatial.
Here are the details, with an example (2D, but it works in any dimension):
http://scipy.github.io/devdocs/generated/scipy.spatial.ConvexHull.html
This function uses the Qhull library:
http://www.qhull.org/
There are plenty of resources on the Qhull page. There is also a Wikipedia page (again, this is not the only method for computing convex hulls; you may want to try others):
https://en.wikipedia.org/wiki/Quickhull
Edit: after re-reading the question, it seems your volume may not be convex. Unfortunately, if it isn't, there is no way to tell whether a point is inside the volume or on the surface without additional information about the volume or the point cloud.
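For the convex case, a minimal sketch of pulling the surface points straight out of scipy.spatial.ConvexHull (random points stand in for the real object; as the edit above notes, this only captures the convex outline):

```python
import numpy as np
from scipy.spatial import ConvexHull

points = np.random.rand(10_000, 3)       # stand-in for the filled 3D object

hull = ConvexHull(points)
surface_idx = hull.vertices              # indices of the points lying on the hull
surface_points = points[surface_idx]

print(f"{len(surface_points)} of {len(points)} points are on the convex hull surface")
```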