Display a map on Streamlit by retrieving the data with an API - python

I would like to display a map on Streamlit by retrieving the data with an API.
I want to use the following result (it gives me the districts for a city in France):
https://public.opendatasoft.com/api/records/1.0/search/?dataset=georef-france-iris-millesime&q=Lille&sort=year&facet=year&facet=reg_name&facet=dep_name&facet=arrdep_name&facet=ze2020_name&facet=bv2012_name&facet=epci_name&facet=ept_name&facet=com_name&facet=com_arm_name&facet=iris_name&facet=iris_area_code&facet=iris_type&refine.year=2020&refine.com_name=Lille
I am trying to get this (a geojson):
POLYGON ((3.069402968274157 50.63987328751279, 3.069467907250858 50.63988940474122,...)
but I have this:
coordinates": [
[
[
3.061943586904849,
50.636758694822056
],
[
3.061342144816787,
50.63651758657737
],...]]
I've been searching but have no idea how to get the data into a form that can be recognized to create a map.
Do you have any advice on how to convert the result of the API into GeoJSON?
Thanks for your help!

Here is how I would generate your geojson polygons from your API results:
import json
from geojson import Polygon

# Load the content of the API response
file = open('data.json')
data = json.load(file)

# This array will contain your polygons for each district
polygons = []

# Iterate through the response records
for record in data["records"]:
    # This array will contain the coordinates to draw a polygon
    coordinates = []
    # Iterate through the coordinates of the record
    for coord in record["fields"]["geo_shape"]["coordinates"][0]:
        lon = coord[0]  # Longitude
        lat = coord[1]  # Latitude
        # /!\ Order of lon & lat might be wrong here
        coordinates.append((lon, lat))
    # Append a new Polygon object to the polygons array
    # (Note that there are outer brackets; I'm not sure if you can
    # store all polygons in a single Polygon object)
    polygons.append(Polygon([coordinates]))

print(polygons)
# Do something with your polygons here...
Your initial definition of a Polygon seems wrong to me; you should check this link: https://github.com/jazzband/geojson#polygon.
After looking around a bit, I think Streamlit might not be the best option to display your map as it does not seem to support drawing polygons (I might be wrong here). If that is the case, you should have a look at GeoPandas.
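If you do look at GeoPandas, here is a minimal sketch of loading the districts into a GeoDataFrame, which can then be plotted or exported. It assumes the API response was saved as data.json, that each record carries its geometry in fields.geo_shape as in the loop above, and that iris_name is the district name field suggested by the facets in the request URL:
import json

import geopandas as gpd
from shapely.geometry import shape

# Assumed file name: the raw JSON returned by the API call above
with open('data.json') as f:
    data = json.load(f)

# geo_shape is already a GeoJSON-like dict, so shapely can build each geometry directly
geometries = [shape(rec["fields"]["geo_shape"]) for rec in data["records"]]
# iris_name is an assumption based on the facets in the request URL
names = [rec["fields"].get("iris_name") for rec in data["records"]]

gdf = gpd.GeoDataFrame({"iris_name": names}, geometry=geometries, crs="EPSG:4326")
print(gdf.head())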

Related

Is there a way to convert a polygon shapefile into coordinates in Python?

I am trying to download satellite images from Sentinel-2 through the ESA Sentinel data hub.
The extent my code returns for the shapefile layer (which I use to set the query) is not in lat/long coordinates but rather strange numbers. I carefully followed the practical instructions with no luck.
Any advice or help on how to solve this issue would be much appreciated!
Below is the code:
from osgeo import ogr

# Get the shapefile layer's extent
driver = ogr.GetDriverByName("ESRI Shapefile")
ds = driver.Open(shapefile, 0)
lyr = ds.GetLayer()
extent = lyr.GetExtent()
print("Extent of the area of interest (shapefile):\n", extent)

# Get projection information from the shapefile to reproject the images to
outSpatialRef = lyr.GetSpatialRef().ExportToWkt()
ds = None  # close file
print("\nSpatial referencing information of the shapefile:\n", outSpatialRef)
Extent of the area of interest (shapefile):
(363337.9978, 406749.40699999966, 565178.6085999999, 633117.0013999995)
Spatial referencing information of the shapefile:
PROJCS["OSGB_1936_British_National_Grid",GEOGCS["GCS_OSGB 1936",DATUM["OSGB_1936",SPHEROID["Airy_1830",6377563.396,299.3249646]],PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433]],PROJECTION["Transverse_Mercator"],PARAMETER["false_easting",400000.0],PARAMETER["false_northing",-100000.0],PARAMETER["central_meridian",-2.0],PARAMETER["scale_factor",0.9996012717],PARAMETER["latitude_of_origin",49.0],UNIT["Meter",1.0]]
# Extent of our shapefile in the right format for the Data Hub API.
def bbox(extent):
    # Create a polygon from the extent tuple
    box = ogr.Geometry(ogr.wkbLinearRing)
    box.AddPoint(extent[0], extent[2])
    box.AddPoint(extent[1], extent[2])
    box.AddPoint(extent[1], extent[3])
    box.AddPoint(extent[0], extent[3])
    box.AddPoint(extent[0], extent[2])
    poly = ogr.Geometry(ogr.wkbPolygon)
    poly.AddGeometry(box)
    return poly

# Let's see what it does
print(extent)
print(bbox(extent))
(363337.9978, 406749.40699999966, 565178.6085999999, 633117.0013999995)
POLYGON ((363337.9978 565178.6086 0,406749.407 565178.6086 0,406749.407 633117.001399999 0,363337.9978 633117.001399999 0,363337.9978 565178.6086 0))
Turns out the coordinate system the shapefile is in is quite crucial, and it should be GCS_WGS_1984: the extent above is expressed in British National Grid metres, which is why the numbers look strange.
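For reference, here is a minimal sketch of reprojecting the extent polygon to WGS 84 with OGR/OSR before building the query. EPSG:27700 is assumed from the British National Grid WKT printed above, and with GDAL 3+ you may also have to check the axis order of the result:
from osgeo import ogr, osr

source = osr.SpatialReference()
source.ImportFromEPSG(27700)   # OSGB 1936 / British National Grid, per the WKT above
target = osr.SpatialReference()
target.ImportFromEPSG(4326)    # GCS_WGS_1984

transform = osr.CoordinateTransformation(source, target)

poly = bbox(extent)            # polygon built by the bbox() function above
poly.Transform(transform)      # reproject in place to lat/long
print(poly.ExportToWkt())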

How to convert Cell Data to Point Data in .vtu file

I am really new to the use of .vtu files and I need to extract the grid data and the data from some arrays in the solution, store them in two .npy files (one for the grid and one for the variables), and then go on with some post-processing.
While I was able to extract the points from the grid and the cell data from the arrays and convert them to numpy arrays, I don't get how to transform cell data to point data.
Here is my code:
import vtk
from vtk.util.numpy_support import vtk_to_numpy

reader = vtk.vtkXMLUnstructuredGridReader()
reader.SetFileName("myfile.vtu")
reader.Update()

# Get the coordinates of the nodes in the mesh
nodes_vtk_array = reader.GetOutput().GetPoints().GetData()
OH_vtk_array = reader.GetOutput().GetCellData().GetArray('OH mass frac.')

# Convert the node coordinates to a numpy array
nodes_numpy_array = vtk_to_numpy(nodes_vtk_array)
x, y, z = nodes_numpy_array[:, 0], nodes_numpy_array[:, 1], nodes_numpy_array[:, 2]

OH_numpy_array = vtk_to_numpy(OH_vtk_array)
OH = OH_numpy_array
I hope that someone can help me even if it is a very stupid question :)
Thanks a lot in advance!!
You can use the vtkCellDataToPointData filter (the Python API is the same).
Something like:
converter = vtk.vtkCellDataToPointData()
converter.ProcessAllArraysOn()
converter.SetInputConnection(reader.GetOutputPort())
converter.Update()
OH_vtk_array = converter.GetOutput().GetPointData().GetArray('OH mass frac.')
From vtkIOXMLPython.vtkXMLUnstructuredGridReader
reader = vtk.vtkXMLUnstructuredGridReader()
reader.SetFileName(filenameVTU)
reader.Update()
From vtkCommonCorePython.vtkPoints
points = reader.GetOutput().GetPoints()
coordinates = np.array(points.GetData())
From vtkCommonExecutionModelPython.vtkAlgorithmOutput
reader.GetOutputPort()
From vtkFiltersCorePython.vtkCellDataToPointData
converter = vtk.vtkCellDataToPointData()
Here, the SetInputConnection method requires a vtkAlgorithmOutput:
converter.SetInputConnection(reader.GetOutputPort())
converter.Update()
Finally from vtkCommonCorePython.vtkDoubleArray
pointArray = np.array(converter.GetOutput().GetPointData().GetArray('YOUR CELL DATA ARRAY NAME'))
Additionally, you can assert
assert pointArray.shape[0] == coordinates.shape[0]
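Putting the two answers together, here is a minimal end-to-end sketch that converts the cell array to point data and saves the grid and the variable separately. The file name and array name come from the question; the .npy output names are made up:
import numpy as np
import vtk
from vtk.util.numpy_support import vtk_to_numpy

# Read the unstructured grid
reader = vtk.vtkXMLUnstructuredGridReader()
reader.SetFileName("myfile.vtu")
reader.Update()

# Convert all cell arrays to point arrays
converter = vtk.vtkCellDataToPointData()
converter.ProcessAllArraysOn()
converter.SetInputConnection(reader.GetOutputPort())
converter.Update()

# Node coordinates and the interpolated field, both as numpy arrays
coordinates = vtk_to_numpy(reader.GetOutput().GetPoints().GetData())
oh = vtk_to_numpy(converter.GetOutput().GetPointData().GetArray('OH mass frac.'))
assert oh.shape[0] == coordinates.shape[0]

# Store the grid and the variable separately, as described in the question
np.save("grid.npy", coordinates)
np.save("oh_mass_frac.npy", oh)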

in GeoPandas, select (line string) data within a latitude longitude box defined by user

I have a GeoPandas dataframe consisting of a combination of LineStrings and MultiLineStrings. I would like to select those LineStrings and MultiLineStrings containing a point within a lat/lon box that I define myself (I don't have a geometry for the box). In other words, I have some mapped USGS fault traces and I would like to pick a square inset of those fault lines within a certain distance of some lat/lons. So far I've had some success unwrapping just the coordinates from the entire data frame and only saving the points that fall within a box of lat/lon, but then I no longer keep the original geometry or the information saved in the data frame, i.e. like this:
xvals = []
yvals = []
for flt in qfaults['geometry']:
    for coord in flt.coords:
        if (coord[1] >= centroid[1] - 1 and coord[1] <= centroid[1] + 1
                and coord[0] <= centroid[0] + 1 and coord[0] >= centroid[0] - 1):
            xvals.append(coord[0])
            yvals.append(coord[1])
Is there any intuition as to how to do this using the GeoPandas data frame? Thanks in advance.
GeoPandas has the .cx indexer, which does exactly this. See https://geopandas.readthedocs.io/en/latest/docs/user_guide/indexing.html
Syntax is gdf.cx[xmin:xmax, ymin:ymax]
world = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres'))
southern_world = world.cx[:, :0]
western_world = world.cx[:0, :]
western_europe = world.cx[1:10, 40:60]
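Applied to the question's data, here is a short sketch (qfaults and centroid are the names used in the question) that keeps the original geometries and attributes while selecting everything within ±1 degree of the centroid:
# Bounding box of +/- 1 degree around the centroid, as in the question's loop
xmin, xmax = centroid[0] - 1, centroid[0] + 1
ymin, ymax = centroid[1] - 1, centroid[1] + 1

# .cx keeps every row whose geometry intersects the box, with all columns intact
faults_inset = qfaults.cx[xmin:xmax, ymin:ymax]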

how to determine if a point is inside a polygon using geojson and shapely

I am hoping to create a region on a map and be able to automatically determine if points (coordinates) are inside that region. I'm using a geojson file of the entire US and coordinates for New York City for this example.
Geojson: https://github.com/johan/world.geo.json
I have read the shapely documentation and just can't figure out why my results are returning False. Any help would be much appreciated.
import json
from shapely.geometry import shape, GeometryCollection, Point

with open('USA.geo.json', 'r') as f:
    js = json.load(f)

point = Point(40.712776, -74.005974)

for feature in js['features']:
    polygon = shape(feature['geometry'])
    if polygon.contains(point):
        print('Found containing polygon:', feature)
I'm hoping to print the contained coordinates, but nothing is printed.
You need to swap the values of the Point() around:
point = Point(-74.005974, 40.712776)
The dataset you're using has the longitude first and the latitude second in its coordinates.
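For completeness, here is a minimal sketch of the corrected check; GeoJSON stores coordinates as (longitude, latitude), so the Point must be built in that order:
from shapely.geometry import shape, Point

point = Point(-74.005974, 40.712776)  # (longitude, latitude) for New York City

for feature in js['features']:
    polygon = shape(feature['geometry'])
    if polygon.contains(point):
        print('Found containing polygon:', feature)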

Polygon not showing up in Google Earth - simpleKML

Background: I'm trying to create a circular polygon and add it to a KML using simplekml.
The KML knows that there should be a polygon added, and it has the proper colour, width, and description, but whenever I zoom to the location it takes me to coordinates 0,0 and there is no polygon.
My code to create the polygon looks like:
pol = kml.newpolygon(name=pnt.name)
pol.description = ("A buffer for " + pnt.name)
pol.innerboundaryis = [newCoord]
pol.style.linestyle.color = simplekml.Color.green
pol.style.linestyle.width = 5
pol.style.polystyle.color = simplekml.Color.changealphaint(100, simplekml.Color.green)
where 'newCoord' is a 2D array with all of the lat/long information stored in it.
Because I thought the array might not format the data properly I tried to form a simple triangular polygon using the code:
pol1 = kml.newpolygon(name=pnt.name)
pol1.innerboundaryis = [(46.714,-75.6667),(44.60796,-74.502),(46.13910,-74.57411),(46.714,-75.6667)]
pol1.style.linestyle.color = simplekml.Color.green
pol1.style.linestyle.width = 5
pol1.style.polystyle.color = simplekml.Color.changealphaint(100, simplekml.Color.green)
but it has the same issue as the first.
I've tried forming the polygon with both .innerboundaryis() and .outerboundaryis() without success and I'm running out of ideas.
edit: I should add that I'm opening the kml file in Google Earth
There is almost no documentation on this issue online so I figured I would post the answer to my question for anyone who has this issue in the future.
This is the code that I used that got the polygon working.
newCoords = []
pol = kml.newpolygon(name=pnt.name)
pol.description = ("A buffer for " + pnt.name)
if pnt.name in bufferList:
    bufferRange = input('Enter the buffer range. ')
    for i in range(360):
        newCoords.append(( math to calculate Lat, math to calculate Long ))
        pol.outerboundaryis.coords.addcoordinates([newCoords[i]])
pol.style.linestyle.color = simplekml.Color.green
pol.style.linestyle.width = 5
pol.style.polystyle.color = simplekml.Color.changealphaint(100, simplekml.Color.green)
You need to put your coordinates into a list before adding them to the polygon's outer boundary with coords.addcoordinates(). Additionally, it must be a one-dimensional list, so each latitude and longitude pair must be stored together as a single tuple.
You can input floats directly with '.outerboundaryis()', example:
pol.outerboundaryis = [(18.333868,-34.038274), (18.370618,-34.034421),
(18.350616,-34.051677),(18.333868,-34.038274)]
But '.addcoordinates()' only accepts lists and integers.
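For the circular buffer itself, here is a hypothetical sketch of the coordinate math. circle_coords is a made-up helper, the centre and radius are example values, and simplekml expects (longitude, latitude) tuples:
import math
import simplekml

def circle_coords(lon, lat, radius_km, n=360):
    # Approximate a circle around (lon, lat) as n+1 (lon, lat) pairs (closed ring)
    coords = []
    for i in range(n + 1):
        angle = math.radians(i * 360.0 / n)
        dlat = (radius_km / 111.32) * math.cos(angle)
        dlon = (radius_km / (111.32 * math.cos(math.radians(lat)))) * math.sin(angle)
        coords.append((lon + dlon, lat + dlat))
    return coords

kml = simplekml.Kml()
pol = kml.newpolygon(name="buffer")
pol.outerboundaryis = circle_coords(-75.6667, 46.714, radius_km=5)  # example centre
pol.style.linestyle.color = simplekml.Color.green
pol.style.linestyle.width = 5
pol.style.polystyle.color = simplekml.Color.changealphaint(100, simplekml.Color.green)
kml.save("buffer.kml")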
