I found this regarding Point type in Postgres: http://www.postgresql.org/docs/current/interactive/datatype-geometric.html
Is there an SQLAlchemy equivalent of this?
I am storing values in this manner: (40.721959482, -73.878993913)
You can use geoalchemy2, which is an extension to SQLAlchemy and can be used with Flask-SQLAlchemy too.
from sqlalchemy import Column
from geoalchemy2 import Geometry
# and import others

class Shop(db.Model):
    # other fields
    coordinates = Column(Geometry('POINT'))
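A hedged usage sketch (not from the original answer): geoalchemy2's WKTElement lets you pass the coordinates as well-known text, where the ordering is (x y), i.e. longitude first; the session calls assume Flask-SQLAlchemy.

from geoalchemy2.elements import WKTElement

# hypothetical insert; note WKT's (lon lat) ordering
shop = Shop(coordinates=WKTElement('POINT(-73.878993913 40.721959482)', srid=4326))
db.session.add(shop)
db.session.commit()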
You can extend UserDefinedType to achieve what you want.
Here's an example I found that gets pretty close to what you want by subclassing UserDefinedType.
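For reference, a minimal sketch of that approach, assuming values come in as (lat, lon) tuples and that the driver returns the native Postgres point as a "(x,y)" string; the class name and parsing details are assumptions, not a tested implementation:

from sqlalchemy.types import UserDefinedType

class PGPoint(UserDefinedType):
    # maps Python (lat, lon) tuples to the native Postgres point type

    def get_col_spec(self):
        return "POINT"

    def bind_processor(self, dialect):
        def process(value):
            # tuple -> "(lat, lon)" literal on the way in
            if value is None:
                return None
            return "(%s, %s)" % (value[0], value[1])
        return process

    def result_processor(self, dialect, coltype):
        def process(value):
            # "(lat,lon)" string -> tuple on the way out
            if value is None:
                return None
            lat, lon = str(value).strip("()").split(",")
            return (float(lat), float(lon))
        return process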
Note that Mohammad Amin's answer is valid only if your point is intended to be a geographic point (with latitude and longitude constraints). It doesn't apply if you want to represent an arbitrary point on a plane. Also, in that case you would need to install the PostGIS extension, which I encourage if you are working with geographic points, as it provides a lot of utilities and extra functions.
I found out it's a slight modification of Mohammad's answer.
Yes, I needed to add a point column via geoalchemy2/SQLAlchemy:
from sqlalchemy import Column
from geoalchemy2 import Geometry
# and import others

class Shop(db.Model):
    # other fields
    coordinates = Column(Geometry('POINT'))
On my PostGIS/Postgres and pgloader side, I'm loading from a CSV that formats my latitude and longitude points as "(40.721959482, -73.878993913)". I needed to run a vim macro over all of my entries (there are a lot) to force that column to adhere to how points are created in PostGIS, so I turned the CSV column into point(40.721959482 -73.878993913). Then, upon table creation, I set the location column's datatype to geometry(point): location geometry(point).
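If you'd rather not run a vim macro, a hedged Python alternative doing the same rewrite; the file names and the 'location' column name are assumptions:

import csv

with open('shops.csv') as src, open('shops_fixed.csv', 'w', newline='') as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # "(lat, lon)" -> "point(lat lon)"
        lat, lon = row['location'].strip('()').split(',')
        row['location'] = 'point(%s %s)' % (lat.strip(), lon.strip())
        writer.writerow(row)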
I have a FITS file with a BinTableHDU that has many entries that have been converted from digital numbers to various units like volts and currents. I would like to turn off this conversion and access the original digital number values that were stored in the table.
This table makes use of TTYPE, TSCAL, TZERO, and TUNIT keys in the header to accomplish the conversions. So I could use these header keys to undo the conversion manually.
undo = (converted - tzero) / tscal
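A sketch of that manual route, reading the header cards directly; the column number (3) is an assumption for illustration:

from astropy.io import fits

with fits.open('data.fits') as hdus:
    hdr = hdus[1].header
    tscal = hdr.get('TSCAL3', 1.0)      # scaling for column 3, defaulting to 1
    tzero = hdr.get('TZERO3', 0.0)      # offset for column 3, defaulting to 0
    converted = hdus[1].data.field(2)   # 0-based index for column 3
    undo = (converted - tzero) / tscal  # back to the stored digital numbers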
Since the astropy.table.Table and astropy.table.QTable classes automatically interpret these fields, I can't help but think I'm overlooking a way to use astropy functions to undo the conversion. Is there an attribute or function in the astropy QTable or Table class that could undo the conversion automatically, or perhaps another, easier way?
EDIT: Thanks to the answer by Tom Aldcroft below.
In my situation a FITS binary table can be grabbed and put into a pandas.DataFrame either converted or unconverted using the following code:
import pandas as pd
from astropy.table import Table
from astropy.io import fits
# grab converted data in various scaled values and units volts / current
converted = Table(fits.getdata(file, 2)).to_pandas()
# grab unconverted data in its original raw form
unconverted = Table(fits.getdata(file, 2)._get_raw_data()).to_pandas()
You'll need to be using the direct astropy.io.fits interface instead of the high-level Table interface. From there something like this might work:
from astropy.io import fits

with fits.open('data.fits') as hdus:
    hdu1 = hdus[1].data
    raw = hdu1._get_raw_data()
See https://github.com/astropy/astropy/blob/645a6f96b86238ee28a7057a4a82adae14885414/astropy/io/fits/fitsrec.py#L1022
I'm reading out an Oracle DB with geospatial geometries, which I save in a pandas dataframe, say df, having a geometric object of format <cx_Oracle.Object MDSYS.SDO_GEOMETRY at 0x7f28 in a column named 'geometry'. Let's store it as:
g = df.geometry[0]
What I want to do:
Transform the data stored in g to present it on a folium map as a PolyLine, via shapely. I know that it consists of a bunch of points representing a line object.
What I can do:
I can read out the SDO_GTYPE, i.e. g.SDO_GTYPE gives 2002.
I can read out SDO_ORDINATES, but it won't show me the coordinates, saying: <cx_Oracle.Object MDSYS.SDO_ORDINATE_ARRAY at 0x7f287848e4f0>.
What I cannot do:
Transform geometric information with shapely and asShape:
from shapely.geometry import asShape
shape = asShape(g)
gives error: 'Context does not provide geo interface'.
Use Get_WKT() or any other such functions in SQL statements.
Colleagues are reading the data with a GIS tool, which is why I doubt that the data is corrupt. I would be happy about any suggestions regarding this issue.
Thanks a lot.
Don't have any experience with Oracle DB, but this SO question seems similar to yours.
This sample may be of help to you.
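If the geometry really is a 2D line (SDO_GTYPE 2002), one hedged way to get at the coordinates from cx_Oracle, assuming the ordinates are flat x/y pairs:

from shapely.geometry import LineString

# cx_Oracle collection objects expose aslist(); SDO_ORDINATES is a
# flat varray laid out as [x1, y1, x2, y2, ...] for a 2D geometry
ords = g.SDO_ORDINATES.aslist()
pairs = list(zip(ords[0::2], ords[1::2]))
line = LineString(pairs)  # shapely object, usable for folium's PolyLine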
I am importing Polygon shapes into a PostGIS database, using Python (GeoPandas, SQLAlchemy, GeoAlchemy2). I followed the instructions mentioned here.
I have a database with a table named maps_region with a column/field called geom.
I am able to get the Polygon field (named geom) to import into the PostGIS database table in text format (WKT, WKB, and WKB Hex), but, I am unable to successfully convert this text column into a proper Polygon format in the database.
I tried importing with the geom field in several different formats-- in Well-Known Text (WKT) format, WKB format, and WKB Hex format-- but could not convert to Polygon from any of the three formats.
For instance, I imported the shapes into the geom field in WKT format, and then converted them to WKB Hex format using the following command, which worked fine:
database=> UPDATE maps_region SET geom = ST_GeomFromText(geom, 4326);
UPDATE 28
However, when I then try to convert the geom field from a text format into a Polygon type, I get the following errors:
database=> ALTER TABLE maps_region ALTER COLUMN geom TYPE Geometry(POLYGON, 4326);
ERROR: Geometry type (MultiPolygon) does not match column type (Polygon)
database=> ALTER TABLE maps_region ALTER COLUMN geom TYPE Geometry(MULTIPOLYGON, 4326);
ERROR: Geometry type (Polygon) does not match column type (MultiPolygon)
I tried both ways: converting to Polygon, and converting to MultiPolygon-- and neither worked. Instead, the error messages were just reversed!
Any help would be greatly, greatly appreciated.
Thanks in advance!
I realized that the shapes were being registered in mixed formats: all but one were in Polygon format, while one was in MultiPolygon format -- see here. This sufficiently explains the issue and the invalid conversions.
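A hedged way to avoid the mixed-type problem at the source, assuming the shapes sit in a GeoPandas GeoDataFrame named gdf before import: promote every plain Polygon to a one-member MultiPolygon, so the column is uniform and the ALTER to Geometry(MULTIPOLYGON, 4326) succeeds.

from shapely.geometry import MultiPolygon, Polygon

# make the geometry column uniformly MultiPolygon before writing to PostGIS
gdf['geometry'] = gdf['geometry'].apply(
    lambda geom: MultiPolygon([geom]) if isinstance(geom, Polygon) else geom
)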
I have started using web2py for a web application and try to use SQLFORM.grid(...) to display a paginated listing of one of my db-table's data like in the following minimal example.
grid = SQLFORM.grid(query,
                    links=links,
                    fields=[db.example.date, db.example.foo, db.example.bar])
The db.example.date field contains a Python datetime.datetime object in UTC. At the moment it is displayed just plainly like that. However, I want to have more control about the actual output in a way that I can set the local timezone and modify the output string to have something like "2 hours ago".
As seen in another question[0] I can use the links to insert new columns. Unfortunately I can't seem to sort the rows by a field I have inserted in such way. Also, they are inserted on the right instead of actually replacing my first column. So that does not seem to be a solution.
To sum it up: How do I gain control about the way db.example.date is printed out in the end?
[0] Calculated Fields in web2py sqlgrid
You can achieve your goal when you define the table in your model. The represent parameter in the Field constructor that you used in define_table will be recognized by the SQLFORM.grid. For example, if you wanted to just print the date with the month name you could put the following in your model.
Field('a_date', type='date', represent=lambda x, row: x.strftime("%B %d, %Y")),
Your represent function could also convert to local time.
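For instance, a minimal sketch of such a conversion, assuming the stored datetimes are naive UTC and using a fixed offset that you would replace with your real timezone rule:

from datetime import timedelta, timezone

local_tz = timezone(timedelta(hours=-5))  # hypothetical fixed offset
db.example.date.represent = (
    lambda value, row: value.replace(tzinfo=timezone.utc)
                            .astimezone(local_tz)
                            .strftime('%B %d, %Y %H:%M')
)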
You need to use prettydate to turn the datetime into a humanized string, and call it in the represent parameter of your Field() descriptor. For example:
from gluon.tools import prettydate
db.example.date.represent = lambda v, r: prettydate(r.date)
That way, db.example.date will be displayed humanized anywhere it appears, including through SQLFORM.grid.
If you don't want the date always represented this way, as in David Nehme's answer, you can set db.table.field.represent in the controller just before creating the grid:
db.example.date.represent = lambda value, row: value.strftime("%B %d, %Y")
followed by:
grid = SQLFORM.grid(query, ...)
I use this often when I join tables. If a represent defined in the model file refers to row.field, it breaks on joins, because the reference must then be more specific: row.table.field.
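A hedged sketch of the join-safe form, with the table and field names assumed:

# on a join the row is nested per table, so qualify the field
db.example.date.represent = (
    lambda value, row: row.example.date.strftime('%B %d, %Y')
)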
I have a scientific model which I am running in Python which produces a lookup table as output. That is, it produces a many-dimensional 'table' where each dimension is a parameter in the model and the value in each cell is the output of the model.
My question is how best to store this lookup table in Python. I am running the model in a loop over every possible parameter combination (using the fantastic itertools.product function), but I can't work out how best to store the outputs.
It would seem sensible to simply store the output as a ndarray, but I'd really like to be able to access the outputs based on the parameter values not just indices. For example, rather than accessing the values as table[16][5][17][14] I'd prefer to access them somehow using variable names/values, for example:
table[solar_z=45, solar_a=170, type=17, reflectance=0.37]
or something similar to that. It'd be brilliant if I were able to iterate over the values and get their parameter values back - that is, being able to find out that table[16]... corresponds to the outputs for solar_z = 45.
Is there a sensible way to do this in Python?
Why don't you use a database? I have found MongoDB (and the official Python driver, Pymongo) to be a wonderful tool for scientific computing. Here are some advantages:
Easy to install - simply download the executables for your platform (2 minutes tops, seriously).
Schema-less data model
Blazing fast
Provides map/reduce functionality
Very good querying functionalities
So, you could store each entry as a MongoDB entry, for example:
{"_id":"run_unique_identifier",
"param1":"val1",
"param2":"val2" # etcetera
}
Then you could query the entries as you will:
import pymongo

# Connection was removed from newer pymongo; MongoClient is its replacement
data = pymongo.MongoClient("localhost", 27017)["mydb"]["mycollection"]
for entry in data.find():  # this will iterate over all results
    print(entry["param1"])  # do something with param1
Whether or not MongoDB/pymongo are the answer to your specific question, I don't know. However, you could really benefit from checking them out if you are into data-intensive scientific computing.
If you want to access the results by name, you could use a Python nested dictionary instead of an ndarray, and serialize it to a JSON text file using the json module.
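A minimal sketch of that idea, with hypothetical parameter values standing in for the model loop:

import json

table = {}
# table[solar_z][solar_a][reflectance] = model output (values are made up)
table.setdefault(45, {}).setdefault(170, {})[0.37] = 0.123

with open('lookup.json', 'w') as fh:
    json.dump(table, fh)  # note: json turns the numeric keys into strings

with open('lookup.json') as fh:
    restored = json.load(fh)
print(restored['45']['170']['0.37'])  # keys come back as strings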
One option is to use a numpy ndarray for the data (as you do now), and write a parser function to convert the query values into row/column indices.
For example:
solar_z_dict = {...}
solar_a_dict = {...}
...
def lookup(dataArray, solar_z, solar_a, type, reflectance):
    return dataArray[solar_z_dict[solar_z], solar_a_dict[solar_a], ...]
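A hypothetical call, assuming each dict maps a parameter value to its array index:

value = lookup(table, solar_z=45, solar_a=170, type=17, reflectance=0.37)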
You could also build the index expression as a string and eval it, if you want some of the fields to be given as "None" and translated to ":" (to return the full table along that variable's axis).
For example, rather than accessing the values as table[16][5][17][14]
I'd prefer to access them somehow using variable names/values
That's what numpy's dtypes are for:
import numpy as np
from sys import argv

# the original used plb (pylab) for loadtxt; numpy provides the same function
dt = [('L', 'float64'), ('T', 'float64'), ('NMSF', 'float64'), ('err', 'float64')]
data = np.loadtxt(argv[1], dtype=dt)
Now you can access the data columns by name, e.g. data['T'], data['L'], or data['NMSF'].
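For instance, a hypothetical peek at the loaded array:

print(data['T'])         # every value in the T column
print(data[0])           # the first row, as a record
print(data[0]['NMSF'])   # a single field of a single row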
More info on dtypes:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.html