Evaluating geoid heights using EGMs in Python

Are there any libraries in Python with functions that compute geoid heights using EGM84, EGM96 and EGM2008?
I know that GeographicLib has a GravityModel class (http://geographiclib.sourceforge.net/html/classGeographicLib_1_1GravityModel.html) that computes geoid heights using the three EGMs, but I don't know how to use it from Python (if that is even possible). How do you do that? Or, if it isn't usable from Python, are there other libraries that can be used?
Thank you.

There is some relevant work on this here:
https://github.com/mrJean1/PyGeodesy
Their usage instructions are not the most up to date, but here is a TL;DR:
First, download your geoid model of choice, like so:
wget https://sourceforge.net/projects/geographiclib/files/geoids-distrib/egm2008-5.tar.bz2
bzip2 -d egm2008-5.tar.bz2
tar -xvf egm2008-5.tar
Then use it in your Python script:
import pygeodesy
from pygeodesy.ellipsoidalKarney import LatLon
ginterpolator = pygeodesy.GeoidKarney("./geoids/egm2008-5.pgm")
# Make an example location
lat = 51.416422
lon = -116.217151
# Get the geoid height
single_position = LatLon(lat, lon)
h = ginterpolator(single_position)
print(h)
This will give you the deviation from the ellipsoid at that location in meters:
-11.973145778529625
This roughly matches up with what we find when we use an online calculator.
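If you need heights for many points, load the model once and reuse the interpolator. A minimal sketch building on the snippet above (the second coordinate pair is an arbitrary example):
import pygeodesy
from pygeodesy.ellipsoidalKarney import LatLon
# Load the geoid model once; each lookup is then just an interpolation
ginterpolator = pygeodesy.GeoidKarney("./geoids/egm2008-5.pgm")
points = [(51.416422, -116.217151), (27.988056, 86.925278)]
for lat, lon in points:
    h = ginterpolator(LatLon(lat, lon))
    print(lat, lon, h)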

Related

coordinate conversion script isn't giving me an accurate reading SVY21 to WGS84

I would like to convert my dataset of SVY21 coordinates into WGS84 coordinates.
I am currently using a script from a repo I found, but it yields inaccurate results with a discrepancy of up to 0.04 degrees, so the coordinates I convert end up in an entirely different geographical location within the same country.
Is there anyone who can help me with a script for converting a large dataset from SVY21 to WGS84?
E.g. I want to convert
38816.0396118, 34379.9602051
but instead I get
1.36728713070321, 103.890645888016
when I should be getting
1.327235496598071, 103.93042021823591
I would do it with one of those online converters, but my files are pretty big (they can go up to a few GB), so it's better to run a script on my local computer, using Python, C++, or any other alternative that works. Most online converters also have a limit on file size.
Here's an accurate converter: https://dominoc925-pages.appspot.com/webapp/calc_svy21/default.html but it doesn't accept my file size.
Appreciate the help :D Thanks~!
You probably have your coordinates the wrong way around. Consider the following:
import pyproj
xfm = pyproj.Transformer.from_crs('EPSG:3414', 'EPSG:4326')
x, y = 38816.0396118, 34379.9602051
print(xfm.transform(x, y))
# prints: (1.3673123058118237, 103.89064694097199)
print(xfm.transform(y, x))
# prints: (1.3271927478890677, 103.93050656742128)
Still about a ten-thousandth of a degree off, but I don't know how good pyproj's coordinate space definitions are in this case.
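Since the files run to gigabytes, note that pyproj transforms whole NumPy arrays at once, so you can stream the CSV in chunks rather than converting row by row. A minimal sketch, assuming a headerless two-column file with the values in the order shown in the question (file names and chunk size are placeholders):
import pandas as pd
import pyproj

xfm = pyproj.Transformer.from_crs('EPSG:3414', 'EPSG:4326')

with open('wgs84.csv', 'w') as out:
    # stream the large file in chunks; columns named after the x, y above
    for chunk in pd.read_csv('svy21.csv', header=None, names=['x', 'y'],
                             chunksize=1_000_000):
        # as shown above, the transform wants (y, x) for correct results
        lat, lon = xfm.transform(chunk['y'].values, chunk['x'].values)
        pd.DataFrame({'lat': lat, 'lon': lon}).to_csv(out, header=False, index=False)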

conversion newick to graphml using python

I would like to convert a tree from Newick to a format like GraphML that I can open with Cytoscape.
So, I have a file "small.newick" that contains:
((raccoon:1,bear:6):0.8,((sea_lion:11.9, seal:12):7,((monkey:100,cat:47):20, weasel:18):2):3,dog:25);
So far, I did it this way (Python 3.6.5, Anaconda):
from Bio import Phylo
import networkx
Tree = Phylo.read("small.newick", 'newick')
G = Phylo.to_networkx(Tree)
networkx.write_graphml(G, 'small.graphml')
There is a problem with the Clade names, which I can fix using this code:
from Bio import Phylo
import networkx
def clade_names_fix(tree):
    for idx, clade in enumerate(tree.find_clades()):
        if not clade.name:
            clade.name = str(idx)
Tree = Phylo.read("small.newick", 'newick')
clade_names_fix(Tree)
G = Phylo.to_networkx(Tree)
networkx.write_graphml(G, 'small.graphml')
This gives me something that seems nice enough:
My questions are:
Is that a good way to do it? It seems weird to me that the function does not take care of the internal node names.
If you replace a node name with a long enough string, it will be trimmed by Phylo.to_networkx(Tree). How can I avoid that?
Example: substituting "dog" with "test_tring_that_create_some_problem_later_on"
Looks like you got pretty far on this already. I can only suggest a few alternatives/extensions to your approach...
Unfortunately, I couldn't find a Cytoscape app that can read this format. I tried searching for PHYLIP, NEWICK and PHYLO. You might have more luck:
http://apps.cytoscape.org/
There is an old Cytoscape 2.x plugin that could read this format, but to run this you would need to install Cytoscape 2.8.3, import the network, then export as xGMML (or save as CYS) and then try to open in Cytoscape 3.7 in order to migrate back into the land of living code. Then again, if 2.8.3 does what you need for this particular case, then maybe you don't need to migrate:
http://apps.cytoscape.org/apps/phylotree
The best approach is programmatic, which you already explored. Finding an R or Python package that turns NEWICK into iGraph or GraphML is a solid strategy. Note that there are updated and slick Cytoscape libs in those languages as well, so you can do all label cleanup, layout, data visualization, analysis, export, etc all within the scripting environment:
https://bioconductor.org/packages/release/bioc/html/RCy3.html
https://py2cytoscape.readthedocs.io/en/latest/
After some research, I actually found a solution that works.
I decided to provide the link here for you, dear reader:
going to github
FYI for anyone coming across this now: I think the first issue mentioned here has since been solved in BioPython. Using the same data as above, the networkx graph that is built contains all the internal nodes of the tree as well as the terminal nodes.
import matplotlib.pyplot as plt
import networkx
from Bio import Phylo
Tree = Phylo.read("small.newick", 'newick')
G = Phylo.to_networkx(Tree)
networkx.draw_networkx(G)
plt.savefig("small_graph.png")
Specs:
Python 3.8.10,
Bio 1.78,
networkx 2.5
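If the end goal is still a GraphML file for Cytoscape rather than the matplotlib preview, GraphML node ids need to be strings. A minimal sketch, reusing the clade_names_fix helper from the question together with networkx's standard relabel_nodes:
import networkx
from Bio import Phylo

def clade_names_fix(tree):
    # give anonymous internal clades a name so every node has a usable label
    for idx, clade in enumerate(tree.find_clades()):
        if not clade.name:
            clade.name = str(idx)

Tree = Phylo.read("small.newick", 'newick')
clade_names_fix(Tree)
G = Phylo.to_networkx(Tree)
# map each Clade object to its (now guaranteed) name for GraphML output
G = networkx.relabel_nodes(G, {node: node.name for node in G.nodes})
networkx.write_graphml(G, 'small.graphml')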

mesh decimation in python

I have a high resolution triangular mesh with about 2 million triangles. I want to reduce the number of triangles and vertices to about ~10000 each, while preserving its general shape as much as possible.
I know this can be done in Matlab using reducepatch. Another alternative is the qslim package. There is also decimation functionality in VTK, which has a Python interface, so technically it is possible in Python as well. MeshLab is probably available in Python as well (?).
How can I do this kind of mesh decimation in Python? Examples would be greatly appreciated.
Here is a minimal Python prototype translated from its C++ equivalent VTK example (http://www.vtk.org/Wiki/VTK/Examples/Cxx/Meshes/Decimation), as MrPedru22 suggested.
from vtk import (vtkSphereSource, vtkPolyData, vtkDecimatePro)

def decimation():
    sphereS = vtkSphereSource()
    sphereS.Update()

    inputPoly = vtkPolyData()
    inputPoly.ShallowCopy(sphereS.GetOutput())

    print("Before decimation\n"
          "-----------------\n"
          "There are " + str(inputPoly.GetNumberOfPoints()) + " points.\n"
          "There are " + str(inputPoly.GetNumberOfPolys()) + " polygons.\n")

    decimate = vtkDecimatePro()
    decimate.SetInputData(inputPoly)
    decimate.SetTargetReduction(.10)
    decimate.Update()

    decimatedPoly = vtkPolyData()
    decimatedPoly.ShallowCopy(decimate.GetOutput())

    print("After decimation\n"
          "-----------------\n"
          "There are " + str(decimatedPoly.GetNumberOfPoints()) + " points.\n"
          "There are " + str(decimatedPoly.GetNumberOfPolys()) + " polygons.\n")

if __name__ == "__main__":
    decimation()
I would recommend using vtkQuadricDecimation; the quality of the output model is visually better than that of vtkDecimatePro (without careful tuning).
from vtk import vtkQuadricDecimation, vtkSTLWriter

decimate = vtkQuadricDecimation()
decimate.SetInputData(inputPoly)
decimate.SetTargetReduction(0.9)
decimate.Update()
One of the most important things is to use binary representation when saving the STL:
stlWriter = vtkSTLWriter()
stlWriter.SetFileName(filename)
stlWriter.SetFileTypeToBinary()
stlWriter.SetInputConnection(decimate.GetOutputPort())
stlWriter.Write()
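Note that the target reduction is the fraction of triangles to remove, so going from ~2 million triangles to ~10,000 means a value of about 0.995. A minimal end-to-end sketch (the STL file names are placeholders):
from vtk import vtkSTLReader, vtkSTLWriter, vtkQuadricDecimation

reader = vtkSTLReader()
reader.SetFileName("dense_mesh.stl")  # placeholder input

decimate = vtkQuadricDecimation()
decimate.SetInputConnection(reader.GetOutputPort())
decimate.SetTargetReduction(0.995)  # remove ~99.5% of the triangles
decimate.Update()

writer = vtkSTLWriter()
writer.SetFileName("sparse_mesh.stl")  # placeholder output
writer.SetFileTypeToBinary()
writer.SetInputConnection(decimate.GetOutputPort())
writer.Write()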
An elegant Python decimation tool using MeshLab (mainly the MeshLabXML library) can be found in Dr. Hussein Bakri's repository:
https://github.com/HusseinBakri/3DMeshBulkSimplification
I use it all the time. Have a look at the code.
Another option is the open-source library MeshLib, which can be called from both C++ and Python (where it is installed via pip).
The decimation code will look like this:
import meshlib.mrmeshpy as mr
# load high-resolution mesh:
mesh = mr.loadMesh(mr.Path("busto.stl"))
# decimate it with max possible deviation 0.5:
settings = mr.DecimateSettings()
settings.maxError = 0.5
result = mr.decimateMesh(mesh, settings)
print(result.facesDeleted)
# 708298
print(result.vertsDeleted)
# 354149
# save low-resolution mesh:
mr.saveMesh(mesh, mr.Path("simplified-busto.stl"))
Visually both meshes look as follows:

EXIF info in Python - libexif

I have been using pyexiv2 to read EXIF information from JPEG files in Python, and noticed that one tag in particular, ExposureTime, is not reported the same by exiv2 as by another EXIF library, libexif.
Any exiv2-based utility I've tried will simplify the ExposureTime tag to a "rational" such as 0/1, 0, or similar. libexif-based utilities (in particular, the tool "exif") will report a much more detailed "1/-21474836 sec." for the same tag in the same image.
Firstly, I'd like to understand what can account for this difference. I'm assuming that the latter of the two is correct.
Secondly, assuming the more detailed value reported by libexif is correct, I'd like to obtain this value in Python, which as far as I can see is not possible with any EXIF tool I have come across (pyexiv2, for example). Is there a tool or method I am not considering?
I have stumbled across one potential solution: using the libexif C library from Python with ctypes, as noted in a previously answered question, though I could not find examples of how to do this.
Any help is greatly appreciated. Thanks!
In case this helps, here are some hacks I recently did to set missing lens / F-number information, as I was using a manual lens, and to compute the actual absolute EV for automatic retrieval by later HDR processing tools (Luminance HDR). I commented out the "write" actions below for safety. It should be pretty much self-explanatory.
The top "files" section builds a list of files to work on in the current folder (here, all *.ARW Sony raw files). Adjust the pattern and path as needed.
#!/usr/bin/env python
import os
import math

# make a file list (take all *.ARW files in the current folder)
files = [f for f in os.listdir(".") if f.endswith(".ARW")]
files.sort()  # just to be nice

# have a dict of the tags to work with in particular
tags = {'Aperture': 10., 'Exposure Time ': 1./1250, 'Shutter Speed': 1./1250,
        'ISO': 200., 'Stops Above Base ISO': 0., 'Exposure Compensation': 0.}

# arbitrarily chosen base EV to get the final EV compensation numbers into a +/-10 range
EVref = math.log(math.pow(tags['Aperture'], 2.0) / tags['Shutter Speed'], 2.0) - 4
print('EVref=', EVref)

for f in files:
    print(f)
    meta = os.popen("exiftool " + f).readlines()
    for line in meta:
        fields = str(line).rstrip("\n").split(":")
        for t, x in tags.items():
            if str(fields[0]).strip(" ") == t:
                tags[t] = float(str(os.popen("calc -- " + fields[1]).readlines()).strip("[]'~\\t\\n"))
                print(t, tags[t], fields[1])
    ev = math.log(math.pow(tags['Aperture'], 2.0) / tags['Shutter Speed'], 2.0)
    EV = EVref - ev + tags['Stops Above Base ISO']
    print('EV=', EV)
    # uncomment/edit to update the EXIF in place:
    # os.system('exiftool -ExposureCompensation=' + str(EV) + ' ' + f)
    # os.system('exiftool -FNumber=10 ' + f)
    # os.system('exiftool -FocalLength=1000.0 ' + f)
    # os.system('exiftool -FocalLengthIn35mmFormat=1000.0 ' + f)
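If all you need is the raw numeric ExposureTime from Python, a lighter variant of the same exiftool approach is to ask for numeric output directly. A minimal sketch, assuming exiftool is on your PATH (the file name is a placeholder):
import subprocess

def exposure_time(path):
    # -n = numeric value (no "1/250 sec." pretty-printing), -s3 = value only
    out = subprocess.check_output(["exiftool", "-n", "-s3", "-ExposureTime", path])
    return float(out.strip())

print(exposure_time("photo.jpg"))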

Write Latitude and Longitude to Geotiff file

I am basically trying to achieve the opposite of this question.
I have a set of latitude and longitude coordinates (with values) in the WGS84 coordinate system that I would like to write to a GeoTIFF (or just add to a GDAL dataset) via the GDAL Python bindings.
For example, my starting data might be:
lat = np.array([45.345,56.267,23.425])
lon = np.array([134.689,128.774,111.956])
value = np.array([3.0,6.2,2.5])
How might one do this? Thanks!
Although it is not in your question, it appears you need to project the lat/long data from the WGS84 datum to a UTM projection. This can be done with GDAL's ogr2ogr command line using the options -s_srs EPSG:4326 -t_srs EPSG:???? (the target SRID). It can also be done internally with Python using the OGR module of GDAL. Here is an example of its use.
There are two independent ways to get a raster from point data. The first is to interpolate the values, so that they flood the region (or sometimes only the convex hull). There are many methods and tools for interpolating values in 2D. With GDAL, the command-line tool gdal_grid is useful for this purpose, although I don't think it is possible to use from Python. Probably the simplest would be to use scipy.interpolate. Once you have a 2D NumPy array, it is simple to create a raster file with GDAL/Python.
The second method is to burn the point locations into pixels of a raster. Unlike the first method, only the cells where the points fall get values; nothing is interpolated elsewhere in the raster. Rasterising or burning vectors into a raster can be done with the GDAL command-line tool gdal_rasterize. It can also be done internally with GDAL/Python; here is an example.
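As a concrete illustration of the first (interpolation) route, here is a minimal sketch that grids the question's three points with scipy.interpolate.griddata and writes the result with GDAL; the grid size and extent are arbitrary choices, and cells outside the points' convex hull come out as NaN:
import numpy as np
from scipy.interpolate import griddata
from osgeo import gdal, osr

lat = np.array([45.345, 56.267, 23.425])
lon = np.array([134.689, 128.774, 111.956])
value = np.array([3.0, 6.2, 2.5])

# regular lon/lat grid over the points' bounding box (100x100 is arbitrary)
nx, ny = 100, 100
grid_lon, grid_lat = np.meshgrid(np.linspace(lon.min(), lon.max(), nx),
                                 np.linspace(lat.max(), lat.min(), ny))
grid = griddata((lon, lat), value, (grid_lon, grid_lat), method='linear')

# write the array as a single-band GeoTIFF in EPSG:4326
ds = gdal.GetDriverByName('GTiff').Create('points.tif', nx, ny, 1, gdal.GDT_Float32)
xres = (lon.max() - lon.min()) / nx
yres = (lat.max() - lat.min()) / ny
ds.SetGeoTransform((lon.min(), xres, 0, lat.max(), 0, -yres))
srs = osr.SpatialReference()
srs.ImportFromEPSG(4326)
ds.SetProjection(srs.ExportToWkt())
ds.GetRasterBand(1).WriteArray(grid)
ds.FlushCache()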
It is possible to use gdal_grid from Python. I am using it.
All you need to do is construct the command as if you were using it from the command line and put it inside subprocess.call(com, shell=True). You need to import the subprocess module first.
This is actually how I am using it:
pcall = ("gdal_grid --config 'NUM_THREADS=ALL_CPUS GDAL_CACHEMAX=2000' "
         "-overwrite -a invdist:power=2.0:smoothing=2.0:radius1=360.0:radius2=360.0 "
         "-ot UInt16 -of GTiff -outsize %d %d -l %s -zfield 'Z' %s %s"
         % (npx, npy, lname, ptshapefile, interprasterfile))
subprocess.call(pcall, shell=True)
The NUM_THREADS option is available from GDAL 1.10 onwards.
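For what it's worth, newer GDAL bindings (2.1+) also expose this utility directly in Python as gdal.Grid, avoiding the shell round-trip. A minimal sketch with placeholder file, layer and field names (keyword names follow gdal.GridOptions):
from osgeo import gdal

gdal.Grid('interpolated.tif',      # placeholder output raster
          'points.shp',            # placeholder input point layer
          algorithm='invdist:power=2.0:smoothing=2.0:radius1=360.0:radius2=360.0',
          outputType=gdal.GDT_UInt16,
          width=512, height=512,   # arbitrary output size
          zfield='Z')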
