I'm trying to build a Model that selects points (stores) from a point shapefile that fall within a polygon shapefile (block groups) and then assigns the name attribute (Block Group ID) of each polygon to a new field in the points shapefile. Since I have a lot of points and a lot of polygons, I have made the process iterative: the model cycles through the full list of polygons and, for each polygon, finds the points located within it and writes that polygon's name to the new field.
So far I have been able to select a polygon and select the points within that polygon. I'm trying to find a way to write the name of that polygon to the new field in the points associated with it.
This is a screenshot of what I have so far:
http://i67.tinypic.com/qqv0y9.jpg
An alternative approach to consider is doing a Spatial Join.
Spatial Join (Analysis)
http://resources.arcgis.com/en/help/main/10.2/index.html#//00080000000q000000
You could also perform a Join based on Location
http://resources.arcgis.com/en/help/main/10.2/index.html#//005s0000002n000000
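For reference, a minimal arcpy sketch of the Spatial Join route; the workspace, layer and output names (stores, block_groups, stores_with_bg) are placeholders, not your actual data:

import arcpy

arcpy.env.workspace = r"C:\data\stores.gdb"   # placeholder workspace

# Each store point receives the attributes (including the block group ID)
# of the polygon it falls within, so no iteration over polygons is needed.
arcpy.SpatialJoin_analysis(
    target_features="stores",            # the points
    join_features="block_groups",        # the polygons
    out_feature_class="stores_with_bg",
    join_operation="JOIN_ONE_TO_ONE",
    join_type="KEEP_ALL",
    match_option="WITHIN",
)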
I have a list of coordinate points that are already clustered. Each point is available to me as a row in a csv file, with one of the fields being the "zone id": the ID of the cluster to which a point belongs. I was wondering if there is a way, given the latitude, longitude and zone ID of each point, to draw polygons similar to Voronoi cells, such that:
each cluster is entirely contained within a polygon
each polygon contains points belonging to only one cluster
the union of the polygons is a contiguous polygon that contains all the points. No holes: the polygons must border each other except at the outer edges. A fun extension would be to supply the "holes" (water bodies, for example) as part of the input.
I realise the problem is very abstract and could be very resource intensive, but I am curious to hear of any approaches. I am open to solutions using a variety or combination of tools, such as GIS software, Python, R, etc. I am also open to implementations that would be integrated into the clustering process.
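One possible sketch in Python, assuming the clusters are spatially separable (otherwise the dissolved zone polygons can overlap): build a Voronoi diagram of all the points with shapely, clip it to a study area, and dissolve the cells by zone ID. The file and column names (points.csv, lat, lon, zone_id) are illustrative.

import csv
from collections import defaultdict
from shapely.geometry import Point, MultiPoint, box
from shapely.ops import voronoi_diagram, unary_union

points, zones = [], []
with open("points.csv") as f:                    # illustrative input file
    for row in csv.DictReader(f):
        points.append(Point(float(row["lon"]), float(row["lat"])))
        zones.append(row["zone_id"])

# A crude outer boundary; a coastline or city limit could be used instead,
# and water bodies could later be subtracted as "holes".
study_area = box(*MultiPoint(points).buffer(0.01).bounds)
cells = voronoi_diagram(MultiPoint(points), envelope=study_area)

# Match each Voronoi cell to the zone of the point it contains,
# then dissolve the cells of each zone into a single polygon.
zone_cells = defaultdict(list)
for cell in cells.geoms:
    clipped = cell.intersection(study_area)
    for pt, zid in zip(points, zones):
        if clipped.contains(pt):
            zone_cells[zid].append(clipped)
            break

zone_polygons = {zid: unary_union(cs) for zid, cs in zone_cells.items()}

Because neighbouring Voronoi cells share edges, the resulting zone polygons tile the study area without gaps.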
Currently, I have a GeometryField, which holds a Polygon, which is a GEOSGeometry. I print the coordinates of the polygon, and they seem fine, right where I specified. Then I save the instance of the model and serialize it with the GeoFeatureModelSerializer, only to find that my polygon's coordinates have been changed to something very small and close to the equator.
This is the GEOSGeometry initially held in the GeometryField and stored in the database:
POLYGON ((-79.94751781225206 40.44287206073545,
-79.94751781225206 40.44385187931003,
-79.94502872228624 40.44385187931003,
-79.94502872228624 40.44287206073545,
-79.94751781225206 40.44287206073545))
This is the same polygon after it is serialized with the GeoFeatureModelSerializer and returned:
[[-0.000718176362453, 0.000363293553554],
[-0.000718176362453, 0.000363316438548],
[-0.000718135112337, 0.000363316438548],
[-0.000718135112337, 0.000363293553554],
[-0.000718176362453, 0.000363293553554]]
I have no idea what could be causing this.
Thanks a lot in advance.
This was resolved by specifying the SRID. According to the Django docs:
Choosing an appropriate SRID for your model is an important decision that the developer should consider carefully. The SRID is an integer specifier that corresponds to the projection system that will be used to interpret the data in the spatial database. (https://docs.djangoproject.com/en/2.0/ref/contrib/gis/model-api/)
I was performing operations on polygons with a particular SRID and returning a polygon with a different SRID. I simply had to 'cast' the polygon I was returning to the SRID I wanted, with GEOSGeometry(polygon, srid=some_value). Basically, the polygon I was returning was being interpreted in a projection I didn't want.
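A minimal sketch of that cast, with illustrative coordinates; the key point is attaching srid=4326 (or whatever your model uses) before saving:

from django.contrib.gis.geos import GEOSGeometry, Polygon

# Polygon built from lon/lat coordinates but with no SRID attached.
poly = Polygon(((-79.9475, 40.4429), (-79.9475, 40.4439),
                (-79.9450, 40.4439), (-79.9450, 40.4429),
                (-79.9475, 40.4429)))

# 'Cast' the geometry to the SRID the model expects, so the serializer
# does not reinterpret the coordinates under a different projection.
poly_wgs84 = GEOSGeometry(poly.wkt, srid=4326)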
For my research, I need to divide the geographical area of a city (e.g. Chicago or New York) using a grid. Later, I have data points consisting of GPS longitude and latitude locations that I want to associate with their corresponding cells in the grid.
The simplest way to do this is to divide the space into square cells of the same size. However, this leads to cells with very few points in sparsely populated (rural) areas and cells with a high number of points in the city centre. For a fairer relation between the number of points and the cell size, an adaptive grid that creates cells whose size depends on data density would be a better option.
I came across this paper, which utilises a k-d tree to partition the space and retrieve the cells from the nodes. However, I cannot find any implementation (in Python) that does that. Many of the implementations out there only index data points in the tree to perform nearest-neighbour search, but they do not provide code to extract the rectangles that the k-d tree generates.
For example, given the following image:
My resulting grid will contain 5 cells (node1 to node5) where each cell contains the associated data points.
Any ideas on how to do that?
Does anyone know of an implementation?
Many thanks,
David
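In case it helps, a minimal sketch (not an existing library) of a 2-D k-d partition that returns the leaf rectangles; max_points controls when a cell stops splitting, and the bounding-box values in the usage line are illustrative:

import numpy as np

def kd_cells(points, bbox, max_points=50, depth=0):
    """Recursively split bbox = (xmin, ymin, xmax, ymax) at the median of the
    points inside it, alternating axes; return (rectangle, points) leaves."""
    if len(points) <= max_points:
        return [(bbox, points)]
    axis = depth % 2                                  # 0 = split on x, 1 = on y
    median = np.median(points[:, axis])
    left_mask = points[:, axis] <= median
    if left_mask.all() or not left_mask.any():        # degenerate split; stop
        return [(bbox, points)]
    xmin, ymin, xmax, ymax = bbox
    if axis == 0:
        left_box, right_box = (xmin, ymin, median, ymax), (median, ymin, xmax, ymax)
    else:
        left_box, right_box = (xmin, ymin, xmax, median), (xmin, median, xmax, ymax)
    return (kd_cells(points[left_mask], left_box, max_points, depth + 1)
            + kd_cells(points[~left_mask], right_box, max_points, depth + 1))

# Usage: cells = kd_cells(np.array(lon_lat_pairs), bbox=(-88.0, 41.6, -87.5, 42.1))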
I have a set of points with their coordinates (latitudes, longitudes), and I also have a region (a box). All the points are inside the box.
Now I want to distribute the points into small cells (rectangles) based on point density. Basically, a cell containing many points should be subdivided into smaller ones, while cells with few points remain larger.
I have checked this question, which describes almost the same problem as mine, but I couldn't find a good answer. I think I should use a quadtree, but none of the implementations I found provide that.
For example, this library allows making a tree associated with a box, and then we can insert boxes as follows:
spindex = Index(bbox=(0, 0, 100, 100))
for item in items:
    spindex.insert(item, item.bbox)
But it doesn't allow inserting points. Moreover, I'll need to get the IDs (or names) of the cells and check whether a point belongs to a given cell.
Here is another lib I found. It does allow inserting points, but it doesn't give me the cell's ID (or name), so I cannot check whether a point belongs to a certain cell. Notice that the tree should do the decomposition automatically.
Could you please suggest a solution?
Thanks.
Finally, I ended up using Google's s2-geometry-library with a Python wrapper. In fact, each cell created by this library is not a rectangle (it's a projection), but it satisfies my need. The library already divides the earth's surface into cells at different levels (a quadtree). Given a point (lat, lng), I can easily get the corresponding cell at leaf level. From these leaf nodes, I go up and merge cells based on what I need (the number of points in a cell).
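For illustration, a minimal sketch using the s2sphere package (one Python implementation of the S2 cell model); the coordinates are made up:

import s2sphere

lat, lng = 41.8781, -87.6298                        # illustrative point
leaf = s2sphere.CellId.from_lat_lng(s2sphere.LatLng.from_degrees(lat, lng))

parent = leaf.parent(12)                            # coarser ancestor cell
print(leaf.level(), parent.to_token())              # token = stable cell name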
This tutorial explains everything in detail.
Here is my result:
This is my first question on stackoverflow, so go easy!
The high-level goal is this: find the intersection of apartments on Craigslist with locations that are under 35 minutes from my work.
I've used kimono labs' API builder to extract the addresses of apartments within my price range on Craigslist, and then converted those addresses to longitude/latitude coordinates.
I also have a list of pseudo-GeoJSON objects collectively describing the area within 35 minutes, generated from mapnificent.net; e.g.
{"type":"Feature",
"properties":{"radius":1153.8461538461538},
"geometry":{"type":"Point",
"coordinates":[-73.97367450000002,
40.7589832]
}
},
The first question is: how can I batch-convert these, in MongoDB or elsewhere, to a list of regular GeoJSON polygons (approximating the circular areas) that Mongo can understand?
The second question is: once I have a collection of regular polygons, how can I use all ~4,000 of these objects to build a 2dsphere index that I can intersect with my list of points from Craigslist?
I have basic familiarity with MongoDB, Python, Ruby, and Java, but am by no means an expert. Quick learner though -- a day ago I didn't even know what GeoJSON was or how to run MongoDB.
Thanks!
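Not a definitive answer, but a minimal Python/pymongo sketch under these assumptions: the Mapnificent output is a list of point-plus-radius features like the one above (saved as mapnificent.json), the Craigslist coordinates are [lon, lat] pairs in apartments.json, and each circle is approximated by a polygon so a 2dsphere index and $geoIntersects can do the filtering. All file, database and collection names are placeholders.

import json, math
from pymongo import MongoClient, GEOSPHERE

def circle_polygon(lon, lat, radius_m, n=32):
    """Approximate a circle of radius_m metres around (lon, lat)
    as a closed GeoJSON Polygon ring with n vertices."""
    coords = []
    for i in range(n + 1):                            # n + 1 closes the ring
        angle = 2 * math.pi * i / n
        dlat = (radius_m * math.cos(angle)) / 111320.0
        dlon = (radius_m * math.sin(angle)) / (111320.0 * math.cos(math.radians(lat)))
        coords.append([lon + dlon, lat + dlat])
    return {"type": "Polygon", "coordinates": [coords]}

client = MongoClient()
db = client.apartments_demo                           # placeholder database

# 1. Convert each point-plus-radius feature to a polygon and store it.
for feat in json.load(open("mapnificent.json")):
    lon, lat = feat["geometry"]["coordinates"]
    db.reachable.insert_one(
        {"geometry": circle_polygon(lon, lat, feat["properties"]["radius"])})

db.reachable.create_index([("geometry", GEOSPHERE)])  # the 2dsphere index

# 2. Keep only the apartments that fall inside some reachable polygon.
for lon, lat in json.load(open("apartments.json")):
    hit = db.reachable.find_one({"geometry": {"$geoIntersects": {
        "$geometry": {"type": "Point", "coordinates": [lon, lat]}}}})
    if hit:
        print("reachable:", lon, lat)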