I have a database containing body scan results in point format.
For example:
point1=(x,y,z)
point2=(x2,y2,z2)
...
I want to convert these points into a body shape.
I also want to do some processing on these points, for example calculating the neck diameter and related measurements.
Any suggestions (modules, tutorials, etc.)?
You need a basic textbook on computational geometry. See this question, for example: https://stackoverflow.com/questions/3308266/computational-geometry
Maybe somebody knows something, since I am not able to find anything that makes sense to me.
I have a dataset of positions (lon, lat) and I want to snap them to the nearest road and calculate the distance between them.
So far I have discovered OSM; however, I can't find a working example of how to use the API from Python.
If any of you could help, I would be thankful for every little detail.
I will try to figure it out myself in the meantime and publish the answer if successful (I couldn't find any similar question, so maybe it will help someone in the future).
Welcome! OSM is a wonderful resource, but it is essentially a raw dataset that you have to download and process yourself. There are a number of ways to do this; if you need a relatively small extract of the data (as opposed to the full planet file), the Overpass API is the place to look. Overpass turbo (docs) is a useful tool to help with this API.
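For instance, here is a minimal sketch of pulling road geometry from the public Overpass endpoint with requests (the bounding box and the highway filter are placeholders to adapt to your area):

import requests

# Overpass QL: all ways tagged as highways inside a small bounding box
# given as (south, west, north, east); adjust to your area of interest.
query = """
[out:json];
way["highway"](50.74,7.08,50.75,7.10);
out geom;
"""
response = requests.get("https://overpass-api.de/api/interpreter",
                        params={"data": query})
response.raise_for_status()
roads = response.json()["elements"]
print("fetched", len(roads), "road segments")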
Once you have the road network data you need, you can use a library like Shapely to snap your points to the road network geometry, and then either calculate the distance between them (if you need "as the crow flies" distance), or split the road geometry by the snapped points and calculate the length of the line. If you need real-world distance that takes the curvature of the earth into consideration (as opposed to the distance as it appears on a projected map), you can use something like Geopy.
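As a rough illustration of the Shapely part (a sketch with made-up coordinates in a projected CRS, so lengths are in the units of that CRS):

from shapely.geometry import LineString, Point

# A toy road and two GPS fixes; in practice the road geometry comes from OSM.
road = LineString([(0, 0), (100, 0), (200, 50)])
p1, p2 = Point(30, 12), Point(160, 20)

# project() gives the distance along the road to the nearest location on it,
# interpolate() turns that distance back into the snapped point.
d1, d2 = road.project(p1), road.project(p2)
snapped1, snapped2 = road.interpolate(d1), road.interpolate(d2)

along_road = abs(d2 - d1)                  # distance following the road
crow_flies = snapped1.distance(snapped2)   # straight-line distance
print(along_road, crow_flies)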
You may also want to look into the Map Matching API from Mapbox (full disclosure, I work there), which takes a set of coordinates, snaps them to the road network, and returns the snapped geometry as well as information about the route, including distance.
You might use the KDTree from sklearn for this. You fill an array with the coordinates of candidate roads (I downloaded these from OpenStreetMap). Then use KDTree to build a tree from this array. Finally, use KDTree.query(your_point, k=1) to get the nearest point in the tree (which is the nearest node of the road coordinates). Since searching the tree is very fast (essentially log(N) for N points in the tree), you can query lots of points.
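A small sketch of that idea (the node coordinates below are dummies standing in for data extracted from OpenStreetMap; note that querying raw lon/lat pairs measures distance in degrees, so project first if you need metres):

import numpy as np
from sklearn.neighbors import KDTree

road_nodes = np.array([[13.400, 52.520],
                       [13.410, 52.522],
                       [13.425, 52.530]])
tree = KDTree(road_nodes)

gps_fixes = np.array([[13.405, 52.521],
                      [13.420, 52.528]])
dist, idx = tree.query(gps_fixes, k=1)     # nearest road node per fix
for point, i, d in zip(gps_fixes, idx[:, 0], dist[:, 0]):
    print(point, "->", road_nodes[i], "distance (degrees):", d)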
How are you doing?
I have been working on this for quite a while now, but I am not able to proceed any further.
I was given .nc files with Cloud Base Altitudes (CBA, measurements that indicate the height of clouds over a particular area).
For now I am not even able to extract data for a single coordinate. The data covers the area shown in this picture, from the top left to the bottom right:
Here is a screenshot of the data I am working with:
Sorry that I cannot provide more data than that, but I don't want to get into trouble.
The cba value is the information I want to extract. So let's say I have a set of coordinates (longitude/latitude) for an area I am interested in; how would I be able to get the corresponding cba values?
I don't need a specific solution, but a direction so I can continue on my own.
For additional information, this is how .cba and .cba.values look:
I have worked with data like this before, but not this nested, so I have absolutely no idea how to even approach it.
Is there a module/package in Python that would allow me to handle this easily, so that when I put in coordinates for an area it directly gives me the corresponding cba values?
I hope that is enough information to go on and that you can help me; I would really appreciate it.
Regards
The package you are using (xarray) is already perfectly suited for this task.
dObj is an xarray.Dataset object, each contained variable (such as dObj['cba']) is an xarray.DataArray object.
You can access the data at a given set of coordinates like this:
lat = ...
lon = ...
dObj['cba'].sel(y=lat, x=lon, method='nearest')
method='nearest' is needed as your coordinate array is unlikely to contain the requested floating point values exactly.
If you instead want to select a range of latitudes and longitudes you can do that as follows:
lat_min, lat_max = (...)
lon_min, lon_max = (...)
dObj['cba'].sel(y=slice(lat_min, lat_max), x=slice(lon_min, lon_max))
For more information about dealing with xarray objects, have a look at the very comprehensive documentation: http://xarray.pydata.org/en/stable/index.html
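Putting it together, a minimal end-to-end sketch (the filename, the variable name 'cba', and the dimension names x/y are assumptions based on your screenshots; adjust them to your file):

import xarray as xr

dObj = xr.open_dataset("cloud_base.nc")    # hypothetical filename

# Single coordinate: nearest grid cell to the requested lat/lon.
point_value = dObj["cba"].sel(y=52.5, x=13.4, method="nearest")

# Area of interest: the slice bounds must follow the ordering
# (ascending or descending) of the coordinate arrays in the file.
area = dObj["cba"].sel(y=slice(52.0, 53.0), x=slice(13.0, 14.0))

print(float(point_value), area.shape)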
I'm using GeoDjango to find all points within a Polygon, but it seems to be using the bounding box (the NW, NE, SE, SW corners) to find points, so it brings back results that are outside of the actual shape.
polygon = Polygon(((18.3825363358424, -33.97219070578159), ...))
Order.objects.filter(location__within=polygon)
I would like the query to bring back points inside the shape, not just inside its bounding box.
I.e. if * were my shape, I'm getting points in #, which suggests that it's querying on the bounds instead of the actual shape:
**************
**************
******########
******########
******########
Please tell me if I'm doing something wrong.
OK, it looks like this is a MySQL limitation:
https://docs.djangoproject.com/en/2.2/ref/contrib/gis/db-api/#mysql-spatial-limitations
In other words, while spatial lookups such as contains are available in GeoDjango when using MySQL, the results returned are really equivalent to what would be returned when using bbcontains on a different spatial backend.
:( Is there any other way of solving this problem?
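One possible workaround (a sketch of the general idea, not from the original thread): let MySQL do the cheap bounding-box filter and then refine the candidates in Python, since the GEOS polygon object can test exact containment.

# location__within only checks the bounding box on MySQL, so treat the
# queryset as a candidate set and do the exact test in Python.
candidates = Order.objects.filter(location__within=polygon)
inside = [o for o in candidates if polygon.contains(o.location)]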
I am using Windows 10, Python, and C#. I want to calculate the circumference of human body parts (belly, biceps, etc.) using a point cloud or 3D scans like .stl, .obj, .ply. Right now I can get the point cloud of a human body with a Kinect v2. I have a point cloud of a human body and scanned 3D human bodies in .stl, .obj, and .ply formats.
I need some ideas and information about this. I don't know how to analyse the data I have or how to calculate what I want.
Here I found an example of what I am trying to do, but it doesn't need to be perfectly stable like that; it's for a school assignment. Maybe you can give me some ideas about how to achieve my goal. Thank you for your help.
https://www.youtube.com/watch?time_continue=48&v=jOvaZGloNRo
I get the 3D scanned object with Kinect v2 and use PCL to convert it into a point cloud.
I don't know about using PCL with Python or C#. In general you are looking at the following steps:
Filtering the points to the region of interest
Segmenting the shape
Extracting the parameters
If you're interested only in Python, then OpenCV might be the best option. You can also develop the core logic in C++ and wrap it for Python or C#. C++ also has some nice UI libraries (Qt, nanogui). Please see the following details for achieving the objective with PCL.
Filtering
CropBox or PassThrough can be used for this. It will give results similar to those shown in the image, assuming the frame has been chosen properly. If not, the point cloud can easily be transformed.
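As a rough illustration of what a pass-through filter does (a library-free numpy sketch; PCL's PassThrough performs the equivalent operation on its own point cloud type):

import numpy as np

points = np.random.rand(10000, 3)          # placeholder point cloud (x, y, z)

# Keep only points whose z coordinate falls inside the band of interest,
# e.g. a thin slice of the torso around the belly.
z_min, z_max = 0.4, 0.5
band = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
print(band.shape)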
Segmenting the shape
Assuming you want an average circumference, you might need to experiment with the Circle 2D, Circle 3D, and Cylinder models. More details regarding usage and the API are here. The method chosen can be a simple SAC (Sample Consensus) method like RANSAC (Random SAC), or an advanced method like LMedS (Least Median of Squares) or MLESAC (Maximum Likelihood Estimation SAC).
Extracting the parameters
All models have a radius field, which can be used to find the circumference with the standard formula (2*pi*r).
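To get a feel for the last two steps without PCL, here is a plain least-squares circle fit on one horizontal slice (a sketch standing in for the SAC model fitting; PCL's robust estimators would handle outliers much better):

import numpy as np

# xy: the (x, y) coordinates of the points in one horizontal band,
# e.g. the output of the pass-through step above with z dropped.
# Synthetic data is used here as a stand-in.
t = np.linspace(0, 2 * np.pi, 200)
xy = np.column_stack([0.15 * np.cos(t) + 0.02, 0.15 * np.sin(t) - 0.01])

# Algebraic circle fit: x^2 + y^2 + D*x + E*y + F = 0,
# solved for D, E, F by linear least squares.
A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(xy))])
b = -(xy[:, 0] ** 2 + xy[:, 1] ** 2)
(D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)

center = (-D / 2, -E / 2)
radius = np.sqrt(D ** 2 / 4 + E ** 2 / 4 - F)
circumference = 2 * np.pi * radius
print(center, radius, circumference)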
Disclaimer: Please note that the fitted shape is a circle, not an ellipse, and the cylinders are right circular cylinders. So if the measured object (arm or bicep) is not circular, the computed value might not be close to the ground truth in extreme cases.
As part of an assignment I'm trying to visualize the flow of a viscous liquid through a mixer. The data provided for this is a VTK StructuredPoints file containing both scalar and vector data. The scalar data tells us whether a given point is part of the mixer's structure, and the vector data contains the information about the flow of the liquid.
I've visualized the flow using streamlines created with the vtkStreamTracer class. When the output is passed on to the vtkPolyDataMapper, it seems to default to using the scalar part of the data to color the lines.
I've been trying to find a way to change this, but I haven't been able to. I want to use the magnitude of the flow vector to determine the color of the lines, so that the color represents the velocity of the liquid.
Any hints? I've included the relevant part of the visualization code below. Thanks in advance!
seeds = vtk.vtkPointSource()
seeds.SetRadius(15)
seeds.SetCenter(0,30.5,30.5)
seeds.SetNumberOfPoints(100)
integrator = vtk.vtkRungeKutta45()
streamer = vtk.vtkStreamTracer()
streamer.SetInputConnection(reader.GetOutputPort())
streamer.SetSourceConnection(seeds.GetOutputPort())
streamer.SetIntegrationDirectionToBoth()
streamer.SetIntegrator(integrator)
streamer.SetMaximumPropagation(500)
streamer.SetComputeVorticity(True)
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(streamer.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)
Check this website: http://www.uppmax.uu.se/docs/w/index.php/Streamlines
It has an example of streamlines that might be useful for you. It uses a color transfer function so that the color represents the velocity of the wind in that case, similar to what you are trying to achieve.
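As a starting point, one way to color the streamlines by vector magnitude with a lookup table (a sketch, not tested against your data; it assumes the reader's output has an active vector array and replaces the mapper setup in your snippet):

reader.Update()
vectors = reader.GetOutput().GetPointData().GetVectors()

lut = vtk.vtkLookupTable()
lut.SetVectorModeToMagnitude()               # map |v| to color, not a single component

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(streamer.GetOutputPort())
mapper.SetLookupTable(lut)
mapper.SetScalarModeToUsePointFieldData()
mapper.SelectColorArray(vectors.GetName())
mapper.SetScalarRange(vectors.GetRange(-1))  # range of the vector magnitudes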
Hope this helps!