How to convert Cell Data to Point Data in a .vtu file - Python

I am really new to .vtu files. I need to extract the grid data and the data from some arrays in the solution, store them in two .npy files (one for the grid and one for the variables), and then go on with some post-processing.
While I was able to extract the points from the grid and the cell data from the arrays and convert them to NumPy arrays, I don't understand how to transform Cell Data into Point Data.
Here is my code:
import vtk
from vtk.util.numpy_support import vtk_to_numpy

reader = vtk.vtkXMLUnstructuredGridReader()
reader.SetFileName("myfile.vtu")
reader.Update()

# Get the coordinates of the nodes in the mesh
nodes_vtk_array = reader.GetOutput().GetPoints().GetData()
OH_vtk_array = reader.GetOutput().GetCellData().GetArray('OH mass frac.')

# Convert the node coordinates to NumPy
nodes_numpy_array = vtk_to_numpy(nodes_vtk_array)
x, y, z = nodes_numpy_array[:, 0], nodes_numpy_array[:, 1], nodes_numpy_array[:, 2]

# Convert the cell data array to NumPy
OH_numpy_array = vtk_to_numpy(OH_vtk_array)
OH = OH_numpy_array
I hope that someone can help me even if it is a very stupid question :)
Thanks a lot in advance!!

You can use the vtkCellDataToPointData filter (the Python API is the same as the C++ one).
Something like:
converter = vtk.vtkCellDataToPointData()
converter.ProcessAllArraysOn()
converter.SetInputConnection(reader.GetOutputPort())
converter.Update()
OH_vtk_array = converter.GetOutput().GetPointData().GetArray('OH mass frac.')
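From there you can convert the interpolated array to NumPy exactly as in the question; a minimal sketch, assuming the converter above has been run:
from vtk.util.numpy_support import vtk_to_numpy

OH = vtk_to_numpy(OH_vtk_array)
print(OH.shape)  # now one value per point instead of one per cell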

From vtkIOXMLPython.vtkXMLUnstructuredGridReader:
reader = vtk.vtkXMLUnstructuredGridReader()
reader.SetFileName(filenameVTU)
reader.Update()

From vtkCommonCorePython.vtkPoints:
points = reader.GetOutput().GetPoints()
coordinates = np.array(points.GetData())

From vtkCommonExecutionModelPython.vtkAlgorithmOutput:
reader.GetOutputPort()

From vtkFiltersCorePython.vtkCellDataToPointData:
converter = vtk.vtkCellDataToPointData()

Here, the SetInputConnection method requires a vtkAlgorithmOutput:
converter.SetInputConnection(reader.GetOutputPort())
converter.Update()

Finally, from vtkCommonCorePython.vtkDoubleArray:
pointArray = np.array(converter.GetOutput().GetPointData().GetArray('YOUR CELL DATA ARRAY NAME'))

Additionally, you can assert:
assert pointArray.shape[0] == coordinates.shape[0]
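To finish with the original goal of storing the grid and the variables in two .npy files, a minimal sketch building on the names above (the file names are just placeholders):
import numpy as np

np.save('grid.npy', coordinates)      # node coordinates from the reader
np.save('variables.npy', pointArray)  # interpolated cell data, one value per node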

Related

Display a map on Streamlit by retrieving the data with an API

I would like to display a map on Streamlit by retrieving the data with an API.
I want to use the following result (it gives me the districts for a city in France):
https://public.opendatasoft.com/api/records/1.0/search/?dataset=georef-france-iris-millesime&q=Lille&sort=year&facet=year&facet=reg_name&facet=dep_name&facet=arrdep_name&facet=ze2020_name&facet=bv2012_name&facet=epci_name&facet=ept_name&facet=com_name&facet=com_arm_name&facet=iris_name&facet=iris_area_code&facet=iris_type&refine.year=2020&refine.com_name=Lille
I am trying to get this (a geojson):
POLYGON ((3.069402968274157 50.63987328751279, 3.069467907250858 50.63988940474122, ...))
but I have this:
coordinates": [
[
[
3.061943586904849,
50.636758694822056
],
[
3.061342144816787,
50.63651758657737
],...]]
I have been looking around but have no idea how to get the data into a form that can be recognised to create a map.
Do you have any advice on how to convert the result of the API into GeoJSON?
Thanks for your help!
Here is how I would generate your geojson polygons from your API results:
import json
from geojson import Polygon

# Load the content of the API response
file = open('data.json')
data = json.load(file)

# This array will contain your polygons for each district
polygons = []

# Iterate through the response records
for record in data["records"]:
    # This array will contain coordinates to draw a polygon
    coordinates = []
    # Iterate through the coordinates of the record
    for coord in record["fields"]["geo_shape"]["coordinates"][0]:
        lon = coord[0]  # Longitude
        lat = coord[1]  # Latitude
        # /!\ Order of lon & lat might be wrong here
        coordinates.append((lon, lat))
    # Append a new Polygon object to the polygons array
    # (Note the outer brackets; I'm not sure if you can
    # store all polygons in a single Polygon object)
    polygons.append(Polygon([coordinates]))

print(polygons)
# Do something with your polygons here...
Your initial definition of a Polygon seems wrong to me; you should check this link: https://github.com/jazzband/geojson#polygon.
After looking around a bit, I think Streamlit might not be the best option for displaying your map, as it does not seem to support drawing polygons (I might be wrong here). If that is the case, you should have a look at GeoPandas.
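If you do go the GeoPandas route, here is a minimal sketch (assuming the polygons list built above, plus a recent geopandas and folium installed; explore() needs both):
import geopandas as gpd
from shapely.geometry import shape

# geojson.Polygon objects are GeoJSON-like mappings, so shape() can convert them
gdf = gpd.GeoDataFrame(geometry=[shape(p) for p in polygons], crs="EPSG:4326")
gdf.explore()  # interactive folium map in a notebook; gdf.plot() gives a static one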

Re-distributing 2D data with the maximum in the middle

Hey all, I have a set of seemingly random 2D data that I want to reorder. This is really for an image with specific values at each pixel, but the concept will be the same.
I have a large 2D array that looks very random, say:
import numpy as np

x = 100
y = 120
data = np.random.random((x, y))
and I want to redistribute the 2D matrix so that the maximum value is in the center and the values fall off around it, giving a sort of Gaussian fall-off from the center.
small example:
output = [[0.0, 0.5, 1.0, 1.0, 1.0, 0.5, 0.0],
          [0.0, 1.0, 1.0, 1.5, 1.0, 0.5, 0.0],
          [0.5, 1.0, 1.5, 2.0, 1.5, 1.0, 0.5],
          [0.0, 1.0, 1.0, 1.5, 1.0, 0.5, 0.0],
          [0.0, 0.5, 1.0, 1.0, 1.0, 0.5, 0.0]]
I know it won't really be a Gaussian, I'm just trying to give a visualization of what I would like. I was thinking of sorting the 2D array into a list from max to min and then using that to create a new 2D array, but I'm not sure how to distribute the values back down to fill the matrix the way I want.
Thank you very much!
If anyone looks at this in the future and needs help, here is some advice on how to do this effectively for a lot of data. The code is posted below.
import itertools
import numpy as np

def datasort(inputarray, spot_in_x, spot_in_y):
    # Get the data read
    center_of_y = spot_in_y
    center_of_x = spot_in_x
    M = len(inputarray[0])
    N = len(inputarray)
    l_list = list(itertools.chain(*inputarray))   # flattened data
    l_sorted = sorted(l_list, reverse=True)       # flattened data, sorted descending
    # Reorder
    to_reorder = list(np.arange(0, len(l_sorted), 1))
    x = np.linspace(-1, 1, M)
    y = np.linspace(-1, 1, N)
    centerx = int(M/2 - center_of_x) * 0.01
    centery = int(N/2 - center_of_y) * 0.01
    [X, Y] = np.meshgrid(x, y)
    R = np.sqrt((X + centerx)**2 + (Y + centery)**2)   # distance of each pixel from the (offset) center
    R_list = list(itertools.chain(*R))
    values = zip(R_list, to_reorder)
    sortedvalues = sorted(values)
    unzip = list(zip(*sortedvalues))
    unzip2 = unzip[1]
    l_reorder = zip(unzip2, l_sorted)
    l_reorder = sorted(l_reorder)
    l_unzip = list(zip(*l_reorder))
    l_unzip2 = l_unzip[1]
    sorted_list = np.reshape(l_unzip2, (N, M))
    return sorted_list
This code basically takes your data and reorders it into a sorted list, then zips that together with a list based on a circular distribution. Using the zip and sort commands, you can create whatever distribution of data you wish based on your distribution function; in my case it's a circle that can be offset.
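A short usage sketch of the function above (the array size and the center position are just example values):
import numpy as np

data = np.random.random((100, 120))
centered = datasort(data, spot_in_x=60, spot_in_y=50)
print(centered.shape)                                         # (100, 120)
print(np.unravel_index(np.argmax(centered), centered.shape))  # roughly (50, 60)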

How to determine if a point is inside a polygon using GeoJSON and Shapely

I am hoping to create a region on a map and be able to automatically determine if points (coordinates) are inside that region. I'm using a geojson file of the entire US and coordinates for New York City for this example.
Geojson: https://github.com/johan/world.geo.json
I have read the shapely documentation and just can't figure out why my results are returning False. Any help would be much appreciated.
import json
from shapely.geometry import shape, GeometryCollection, Point

with open('USA.geo.json', 'r') as f:
    js = json.load(f)

point = Point(40.712776, -74.005974)

for feature in js['features']:
    polygon = shape(feature['geometry'])
    if polygon.contains(point):
        print('Found containing polygon:', feature)
I'm hoping to print the contained coordinates, but nothing is printed.
You need to swap the values of the Point() around:
point = Point(-74.005974, 40.712776)
The dataset you're using has the longitude first and the latitude second in their coordinates.
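With that one change, the loop from the question finds the containing feature; a minimal sketch:
point = Point(-74.005974, 40.712776)  # New York City as (lon, lat)

for feature in js['features']:
    if shape(feature['geometry']).contains(point):
        print('Found containing polygon:', feature['properties'])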

Polygon not showing up in Google Earth - simpleKML

Background: I'm trying to create a circular polygon and add it to a KML file using simplekml.
The KML knows that there should be a polygon added, and it has the proper colour, width, and description, but whenever I zoom to the location it leads me to coordinates 0,0 and there is no polygon.
My code to create the polygon looks like:
pol = kml.newpolygon(name=pnt.name)
pol.description = ("A buffer for " + pnt.name)
pol.innerboundaryis = [newCoord]
pol.style.linestyle.color = simplekml.Color.green
pol.style.linestyle.width = 5
pol.style.polystyle.color = simplekml.Color.changealphaint(100, simplekml.Color.green)
where 'newCoord' is a 2D array with all of the lat/long information stored in it.
Because I thought the array might not format the data properly I tried to form a simple triangular polygon using the code:
pol1 = kml.newpolygon(name=pnt.name)
pol1.innerboundaryis = [(46.714,-75.6667),(44.60796,-74.502),(46.13910,-74.57411),(46.714,-75.6667)]
pol1.style.linestyle.color = simplekml.Color.green
pol1.style.linestyle.width = 5
pol1.style.polystyle.color = simplekml.Color.changealphaint(100, simplekml.Color.green)
but it has the same issue as the first.
I've tried forming the polygon with both .innerboundaryis() and .outerboundaryis() without success and I'm running out of ideas.
edit: I should add that I'm opening the kml file in Google Earth
There is almost no documentation on this issue online so I figured I would post the answer to my question for anyone who has this issue in the future.
This is the code that I used that got the polygon working.
newCoords = []
pol = kml.newpolygon(name=pnt.name)
pol.description = ("A buffer for " + pnt.name)
if pnt.name in bufferList:
    bufferRange = input('Enter the buffer range. ')
    for i in range(360):
        newCoords.append((math to calculate Lat, math to calculate Long))
        pol.outerboundaryis.coords.addcoordinates([newCoords[i]])
pol.style.linestyle.color = simplekml.Color.green
pol.style.linestyle.width = 5
pol.style.polystyle.color = simplekml.Color.changealphaint(100, simplekml.Color.green)
You need to put your coordinates into a list before adding them to the polygon's outer boundary with the coords.addcoordinates() function. Additionally, it must be a one-dimensional list, so the latitude and longitude of each point must be stored together in the same tuple.
You can input floats directly with '.outerboundaryis()', example:
pol.outerboundaryis = [(18.333868,-34.038274), (18.370618,-34.034421),
(18.350616,-34.051677),(18.333868,-34.038274)]
But '.addcoordinates()' only accepts lists and integers.
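For completeness, here is one way the circular buffer coordinates could be generated. This is only a rough flat-Earth approximation, not the poster's exact math; the 111.32 km-per-degree factor, the function name, and the example centre are assumptions:
import math
import simplekml

def circle_coords(center_lat, center_lon, radius_km, n_points=360):
    # Approximate circle on the Earth's surface around (center_lat, center_lon)
    coords = []
    for i in range(n_points + 1):  # the extra point closes the ring
        angle = math.radians(i * 360.0 / n_points)
        d_lat = (radius_km / 111.32) * math.cos(angle)
        d_lon = (radius_km / (111.32 * math.cos(math.radians(center_lat)))) * math.sin(angle)
        coords.append((center_lon + d_lon, center_lat + d_lat))  # KML wants (lon, lat)
    return coords

kml = simplekml.Kml()
pol = kml.newpolygon(name="buffer example")
pol.outerboundaryis = circle_coords(46.7, -75.6, radius_km=5)
kml.save("buffer.kml")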

How do I produce a table of converted coordinates from Equatorial to AltAz?

I have tried debugging my code and I've realised that it ultimately breaks down when I try to save my AltAz coordinates into a .csv file, because the result is not a NumPy array, it's a SkyCoord object. Could someone suggest a simple way of converting a large table of Equatorial coordinates to AltAz, or how I can get my code to save to file?
import numpy as np
import healpy as hp
import astropy.time
import astropy.coordinates
import astropy.units as u

# Get the time now
time = astropy.time.Time.now()
time.delta_ut1_utc = 0

# Geodetic coordinates of the observatory (example here: Munich)
observatory = astropy.coordinates.EarthLocation(
    lat=48.21*u.deg, lon=11.18*u.deg, height=532*u.m)

# Alt/az reference frame at the observatory, now
frame = astropy.coordinates.AltAz(obstime=time, location=observatory)

# Look up (celestial) spherical polar coordinates of the HEALPix grid
# (nside and npix = hp.nside2npix(nside) are defined earlier)
theta, phi = hp.pix2ang(nside, np.arange(npix))

# Convert to Equatorial coordinates
radecs = astropy.coordinates.SkyCoord(
    ra=phi*u.rad, dec=(0.5*np.pi - theta)*u.rad)

# Transform the grid to alt/az coordinates at the observatory, now
altaz = radecs.transform_to(frame)

# Transpose the array from rows to columns
altaz_trans = np.transpose(altaz)
np.savetxt('altaz.csv', altaz_trans, fmt='%s', delimiter=',')
You'll want to use the to_string() method on altaz. That will give you a list of strings, each entry of which has an altitude and an azimuth number (they are separated by a space, so you can .split() them or whatever). Then you can write it out with NumPy or your other library of choice.
Alternately, if you want to go straight to a file, you can create an astropy Table with columns 'alt' and 'az' that you set equal to altaz.alt and altaz.az respectively. Then you can .write(format='ascii') that table.
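A minimal sketch of the second suggestion, assuming the altaz object from the question:
from astropy.table import Table

table = Table([altaz.alt.deg, altaz.az.deg], names=('alt', 'az'))
table.write('altaz.csv', format='ascii.csv', overwrite=True)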
