The documentation for GeoAlchemy2 doesn't seem as fully featured as the previous version's.
I have a model:
class AddressCode(Base):
    __tablename__ = 'address_codes'
    id = Column(Integer, primary_key=True)
    code = Column(Unicode(34))
    geometry = Column(Geometry('POINT'))
I want to store lat/long data, which I tried to save to the above model as a string, for example:
"51.42553,-0.666085"
Which gives me the error:
"Parse error at position 9 within Geometry (the "," char")
Anyone able to shed some light on where I am going wrong here?
Also on the subject, how would I perform a query to say..
Show nearest 20 users:
class AddressCode(Base):
    __tablename__ = 'address_codes'
    id = Column(Integer, primary_key=True)
    name = Column(Unicode(34))
    geometry = Column(Geometry('POINT'))
Something like?
geom_var = "51.42553,-0.666085"
Session.query(User).filter(func.ST_DWithin, 20, geom_var).all()
In both GeoAlchemy and GeoAlchemy2 you need to specify geometries in the Well-Known Text (WKT) or Well-Known Binary (WKB) format. For a point the syntax is 'POINT(X Y)', thus 'POINT(-0.666085 51.42553)'; notice that the longitude comes first, then the latitude.
The shapely package contains useful functions for handling geometries outside relational databases, along with easy conversions between Python geometry classes and the WKT and WKB formats.
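As a minimal sketch of that conversion (assuming the AddressCode model above, a working session, and that shapely and geoalchemy2 are installed; the code value is just a placeholder):

from shapely.geometry import Point
from geoalchemy2.shape import from_shape

# Longitude first, then latitude.
point = Point(-0.666085, 51.42553)

# Either pass a WKT string directly...
addr = AddressCode(code=u'example', geometry='POINT(-0.666085 51.42553)')

# ...or convert the shapely geometry to a WKB element.
addr = AddressCode(code=u'example', geometry=from_shape(point))

session.add(addr)
session.commit()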
Here's how you do it:
The region table is defined as:
regionTable = Table('region', metadata,
    Column('region_id', Integer, Sequence('region_region_id_seq'), primary_key=True),
    Column('type_cd', String(30)),
    Column('region_nm', String(255)),
    Column('geo_loc', Geography)
)
And here's how to query it (give me all regions within 50 miles of my current location):
sqlstring = select([regionTable],
    func.ST_DWithin(
        regionTable.c.geo_loc,
        'POINT(-74.78886216922375 40.32829276931833)',
        1609 * 50))

result = connection.execute(sqlstring)
for row in result:
    print("region name:", row['region_nm'])
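For the "nearest 20" part of the question, here is a minimal ORM sketch (assuming the AddressCode model above, a session object, and that the SRID passed to ST_GeomFromText matches the one the geometry column was created with):

from sqlalchemy import func

# Reference point; longitude first, then latitude.
here = func.ST_GeomFromText('POINT(-0.666085 51.42553)', 4326)

nearest = (session.query(AddressCode)
           .order_by(func.ST_Distance(AddressCode.geometry, here))
           .limit(20)
           .all())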
I'm using SQLAlchemy 1.4 to build my database models (PostgreSQL).
I've established relationships between my models, which I traverse using the different SQLAlchemy capabilities. When doing so, the fields of the related models get aliases which don't work for me.
Here's an example of one of my models:
from datetime import datetime

from sqlalchemy import Column, DateTime, ForeignKey, Integer, func
from sqlalchemy.orm import relationship

class Process(declarative_model()):
    """Process database table class.

    Process model. It contains all the information about one process
    iteration. This is the process of capturing an image with all the
    provided cameras, preprocessing the images and making a prediction for
    them, as well as computing the results.
    """
    id: int = Column(Integer, primary_key=True, index=True, autoincrement=True)
    """Model primary key."""
    petition_id: int = Column(Integer, ForeignKey("petition.id", ondelete="CASCADE"))
    """Foreign key to the related petition."""
    petition: "Petition" = relationship("Petition", backref="processes", lazy="joined")
    """Related petition object."""
    camera_id: int = Column(Integer, ForeignKey("camera.id", ondelete="CASCADE"))
    """Foreign key to the related camera."""
    camera: "Camera" = relationship("Camera", backref="processes", lazy="joined")
    """Related camera object."""
    n: int = Column(Integer, comment="Iteration number for the given petition.")
    """Iteration number for the given petition."""
    image: "Image" = relationship(
        "Image", back_populates="process", uselist=False, lazy="joined"
    )
    """Related image object."""
    datetime_init: datetime = Column(DateTime(timezone=True), server_default=func.now())
    """Datetime when the process started."""
    datetime_end: datetime = Column(DateTime(timezone=True), nullable=True)
    """Datetime when the process finished if so."""
The model works perfectly and joins the data by default as expected, so far so good.
My problem comes when I make a query and I extract the results through query.all() or through pd.read_sql(query.statement, db).
Reading the documentation, I expected to get aliases for my fields like "{table_name}.{field}", but instead I'm getting ones like "{field}_{counter}". Here's an example of a query.statement for my model:
SELECT process.id, process.petition_id, process.camera_id, process.n, process.datetime_init, process.datetime_end, asset_quality_1.id AS id_2, asset_quality_1.code AS code_1, asset_quality_1.name AS name_1, asset_quality_1.active AS active_1, asset_quality_1.stock_quality_id, pit_door_1.id AS id_3, pit_door_1.code AS code_2, petition_1.id AS id_4, petition_1.user_id, petition_1.user_code, petition_1.load_code, petition_1.provider_code, petition_1.origin_code, petition_1.asset_quality_initial_id, petition_1.pit_door_id, petition_1.datetime_init AS datetime_init_1, petition_1.datetime_end AS datetime_end_1, mask_1.id AS id_5, mask_1.camera_id AS camera_id_1, mask_1.prefix_path, mask_1.position, mask_1.format, camera_1.id AS id_6, camera_1.code AS code_3, camera_1.pit_door_id AS pit_door_id_1, camera_1.position AS position_1, image_1.id AS id_7, image_1.prefix_path AS prefix_path_1, image_1.format AS format_1, image_1.process_id
FROM process LEFT OUTER JOIN petition AS petition_1 ON petition_1.id = process.petition_id LEFT OUTER JOIN asset_quality AS asset_quality_1 ON asset_quality_1.id = petition_1.asset_quality_initial_id LEFT OUTER JOIN stock_quality AS stock_quality_1 ON stock_quality_1.id = asset_quality_1.stock_quality_id LEFT OUTER JOIN pit_door AS pit_door_1 ON pit_door_1.id = petition_1.pit_door_id LEFT OUTER JOIN camera AS camera_1 ON camera_1.id = process.camera_id LEFT OUTER JOIN mask AS mask_1 ON camera_1.id = mask_1.camera_id LEFT OUTER JOIN image AS image_1 ON process.id = image_1.process_id
Does anybody know how I can change this behavior and make it alias the fields like "{table_name}_{field}"?
SQLAlchemy uses label styles to configure how columns are labelled in SQL statements. The default in 1.4.x is LABEL_STYLE_DISAMBIGUATE_ONLY, which will add a "counter" for columns with the same name in a query. LABEL_STYLE_TABLENAME_PLUS_COL is closer to what you want.
Default:
q = session.query(Table1, Table2).join(Table2)
q = q.set_label_style(LABEL_STYLE_DISAMBIGUATE_ONLY)
print(q)
gives
SELECT t1.id, t1.child_id, t2.id AS id_1
FROM t1 JOIN t2 ON t2.id = t1.child_id
whereas
q = session.query(Table1, Table2).join(Table2)
q = q.set_label_style(LABEL_STYLE_TABLENAME_PLUS_COL)
print(q)
generates
SELECT t1.id AS t1_id, t1.child_id AS t1_child_id, t2.id AS t2_id
FROM t1 JOIN t2 ON t2.id = t1.child_id
If you want to enforce a style for all ORM queries you could subclass Session:
class MySession(orm.Session):
    _label_style = LABEL_STYLE_TABLENAME_PLUS_COL
and use this class for your sessions, or pass it to a sessionmaker, if you use one:
Session = orm.sessionmaker(engine, class_=MySession)
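Applied to the Process model from the question, the result would look roughly like this (a sketch; session is assumed to be a SQLAlchemy session and db the engine or connection used in the question's pd.read_sql call):

import pandas as pd
from sqlalchemy import LABEL_STYLE_TABLENAME_PLUS_COL

query = session.query(Process).set_label_style(LABEL_STYLE_TABLENAME_PLUS_COL)
df = pd.read_sql(query.statement, db)
# Column names now include the table (or alias) name, e.g. process_id,
# process_petition_id, instead of id, petition_id, id_2, and so on.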
If you only need custom names for a few fields, you can also label individual columns in the query itself using the .label() method.
For example, to expose process.id and process.petition_id under custom names, you can use:
query = session.query(
    Process.id.label("process_id"),
    Process.petition_id.label("process_petition_id"),
)
With this, the fields are aliased to process_id and process_petition_id respectively.
I am writing code for a booking system backed by SQLite, and one of the CSV files has time as a variable. I need it as a time type because I do operations on the time, but it gives this error message:
SQLite Time type only accepts Python time objects as input.
How do I get around this?
My code is below.
class Flight(Base):
    __tablename__ = 'flights'
    id = Column(Integer, primary_key=True)
    planeid = Column(Integer, ForeignKey('planes.id'))
    leave = Column(Time)
    arrive = Column(Time)
    date = Column(Date)
    passengers = Column(Integer)
    destination = Column(String)
    bookings = relationship("Booking", back_populates='flights')
    plane = relationship("Plane", back_populates='flight')
...
if session.query(Flight).count() == 0:
    with open("flights.csv", "r") as flights_file:
        lines = flights_file.readlines()
        for line in lines:
            _, planeid, leave, arrive, date, passengers, destination = line.rstrip().split(",")
            new_flight = Flight(planeid=planeid,
                                leave=leave,
                                arrive=arrive,
                                date=date,
                                passengers=passengers,
                                destination=destination)
            objects_to_add.append(new_flight)
    session.add_all(objects_to_add)
    session.commit()
Are you importing Time from sqlalchemy? Please check this:
from sqlalchemy import (Column, Time)
arrive = Column(Time)
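If Time is imported correctly and the error persists, the values read from the CSV are still plain strings, so they need to be parsed into datetime.time and datetime.date objects before constructing the Flight. A minimal sketch, assuming the CSV stores times as HH:MM and dates as YYYY-MM-DD (the actual formats are not shown in the question):

from datetime import datetime

# Inside the loop over the CSV lines:
new_flight = Flight(planeid=int(planeid),
                    leave=datetime.strptime(leave, "%H:%M").time(),
                    arrive=datetime.strptime(arrive, "%H:%M").time(),
                    date=datetime.strptime(date, "%Y-%m-%d").date(),
                    passengers=int(passengers),
                    destination=destination)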
I'm working on an application which needs to be able to store a point in a PostGIS database. I'm using GeoAlchemy, and it seems to store an incorrect longitude.
I have this code to process a request to add an Event with Point location data.
json_data = request.get_json(force=True)
if "location" in json_data:
    json_location = json_data["location"]
    geojson_geom = geojson.loads(json.dumps(json_location))
    geom = from_shape(asShape(geojson_geom), srid=4326)
    json_data["location"] = geom
event = Event(**json_data)
try:
    session = Session()
    session.add(event)
    session.commit()
    session.refresh(event)
except IntegrityError as e:
    abort(409, error=e.args[0])
The model I use
class Event(Base):
    __tablename__ = 'events'
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    location = Column(Geography(geometry_type='POINT', srid=4326), nullable=False)
When I use this test data:
{
"name": "Test",
"location": {
"coordinates": [
47.65641,
-117.42733
],
"type": "Point"
}
}
Then str(geom) will equal 010100000039622d3e05d4474061a6ed5f595b5dc0, and if I use this converter I get POINT(47.65641 -117.42733), which is the correct location.
However when I look up the row in the database I see that 0101000020E610000039622D3E05D447403EB324404D494FC0 is stored in the location column, which is POINT(47.65641 -62.57267): a very different longitude.
As far as I know, I supply the correct data and format to GeoAlchemy2, and I would greatly appreciate it if someone could hint at what I am doing wrong here.
In PostGIS, a point's coordinates are expressed as POINT(longitude latitude), so in your input the value -117 is treated as the latitude, which is invalid in SRID 4326.
Try swapping the input coordinates in the test data.
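In other words, the GeoJSON coordinates array should be [longitude, latitude]; a corrected version of the test payload above would be:

{
    "name": "Test",
    "location": {
        "coordinates": [
            -117.42733,
            47.65641
        ],
        "type": "Point"
    }
}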
I'm starting to work with an existing database where attribute foo of table A is related to more than one other table, B.foo and C.foo. How do I form this relationship in PonyORM?
The database is organized like below.
from pony import orm

db = orm.Database()

class POI(db.Entity):
    '''Point of interest on a map'''
    name = orm.PrimaryKey(str)
    coordinateID = orm.Optional(('cartesian', 'polar'))  # doesn't work ofc

class cartesian(db.Entity):
    coordinateID = orm.Required(POI)
    x = orm.Required(float)
    y = orm.Required(float)

class polar(db.Entity):
    coordinateID = orm.Required(POI)
    r = orm.Required(float)
    phi = orm.Required(float)
Of course x,y from cartesian and r,phi from polar could be moved to POI, and in the database I work with, that's the same situation. But the tables are divided up between stakeholders (cartesian and polar in this example) and I don't get to change the schema anyway. I can't split coordinateID in the schema (but it would actually be nice to have different attributes of the python class).
It is not possible to relate one attribute to several entities in PonyORM, except when those entities inherit from the same base entity; then you can specify the base entity as the attribute type and use any of the inherited entities as the actual type.
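As a sketch of what that inheritance variant could look like if you did control the schema (an illustration only: Pony maps an entity hierarchy to a single table, so it would not match the existing separate cartesian and polar tables):

from pony import orm

db = orm.Database()

class POI(db.Entity):
    name = orm.PrimaryKey(str)
    coordinate = orm.Optional('Coordinate')

class Coordinate(db.Entity):
    # Common base entity; Cartesian and Polar share its table.
    poi = orm.Optional(POI)

class Cartesian(Coordinate):
    x = orm.Required(float)
    y = orm.Required(float)

class Polar(Coordinate):
    r = orm.Required(float)
    phi = orm.Required(float)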
If you use an existing schema that you can't change, you probably can't use inheritance and need to specify a raw id attribute instead of a relationship:
from pony import orm

db = orm.Database()

class POI(db.Entity):
    _table_ = "table_name"
    name = orm.PrimaryKey(str)
    coordinate_id = orm.Optional(int, column="coordinateID")

class Cartesian(db.Entity):
    _table_ = "cartesian"
    id = orm.PrimaryKey(int, column="coordinateID")
    x = orm.Required(float)
    y = orm.Required(float)

class Polar(db.Entity):
    _table_ = "polar"
    id = orm.PrimaryKey(int, column="coordinateID")
    r = orm.Required(float)
    phi = orm.Required(float)
And then you can perform queries like this:
left_join(poi for poi in POI
          for c in Cartesian
          for p in Polar
          if poi.coordinate_id == c.id
          and poi.coordinate_id == p.id
          and <some additional conditions>)
Note that all entities used in the same query should be from the same database. If the entities belong to two different databases, you cannot use them in the same query, and you need to issue separate queries:
with orm.db_session:
    poi = POI.get(name=some_name)
    coord = Cartesian.get(id=poi.coordinate_id)
    if coord is None:
        coord = Polar.get(id=poi.coordinate_id)
    <do something with poi and coord>
But in the case of SQLite, for example, you can attach one database to another to make them appear as a single database.
I'm a newcomer to SQLAlchemy ORM and I'm struggling to accomplish complex-ish queries on multiple tables - queries which I find relatively straightforward to do in Doctrine DQL.
I have data objects of Cities, which belong to Countries. Some Cities also have a County ID set, but not all. As well as the necessary primary and foreign keys, each record also has a text_string_id, which links to a TextStrings table which stores the name of the City/County/Country in different languages. The TextStrings MySQL table looks like this:
CREATE TABLE IF NOT EXISTS `text_strings` (
    `id` INT UNSIGNED NOT NULL,
    `language` VARCHAR(2) NOT NULL,
    `text_string` varchar(255) NOT NULL,
    PRIMARY KEY (`id`, `language`)
)
I want to construct a breadcrumb for each city, of the form:
country_en_name > city_en_name OR
country_en_name > county_en_name > city_en_name,
depending on whether or not a County attribute is set for this city. In Doctrine this would be relatively straightforward:
$query = Doctrine_Query::create()
    ->select('ci.id, CONCAT(cyts.text_string, \'> \', IF(cots.text_string is not null, CONCAT(cots.text_string, \'> \', \'\'), cits.text_string) as city_breadcrumb')
    ->from('City ci')
    ->leftJoin('ci.TextString cits')
    ->leftJoin('ci.Country cy')
    ->leftJoin('cy.TextString cyts')
    ->leftJoin('ci.County co')
    ->leftJoin('co.TextString cots')
    ->where('cits.language = ?', 'en')
    ->andWhere('cyts.language = ?', 'en')
    ->andWhere('(cots.language = ? OR cots.language is null)', 'en');
With SQLAlchemy ORM, I'm struggling to achieve the same thing. I believe I've set up the objects correctly, in a form like:
class City(Base):
    __tablename__ = "cities"
    id = Column(Integer, primary_key=True)
    country_id = Column(Integer, ForeignKey('countries.id'))
    text_string_id = Column(Integer, ForeignKey('text_strings.id'))
    county_id = Column(Integer, ForeignKey('counties.id'))
    text_strings = relation(TextString, backref=backref('cards', order_by=id))
    country = relation(Country, backref=backref('countries', order_by=id))
    county = relation(County, backref=backref('counties', order_by=id))
My problem is in the querying - I've tried various approaches to generating the breadcrumb but nothing seems to work. Some observations:
Perhaps using things like CONCAT and IF inline in the query is not very pythonic (is it even possible with the ORM?) - so I've tried performing these operations outside SQLAlchemy, in a Python loop of the records. However here I've struggled to access the individual fields - for example the model accessors don't seem to go n-levels deep, e.g. City.counties.text_strings.language doesn't exist.
I've also experimented with using tuples - the closest I've got to it working was by splitting it out into two queries:
# For cities without a county
for city, country in session.query(City, Country).\
        filter(Country.id == City.country_id).\
        filter(City.county_id == None).all():
    if city.text_strings.language == 'en':
        # etc

# For cities with a county
for city, county, country in session.query(City, County, Country).\
        filter(and_(City.county_id == County.id, City.country_id == Country.id)).all():
    if city.text_strings.language == 'en':
        # etc
I split it out into two queries because I couldn't figure out how to make the County join optional in just the one query. But this approach is of course terrible, and worse, the second query didn't work 100% - it wasn't joining all of the different city.text_strings for subsequent filtering.
So I'm stumped! Any help you can give me setting me on the right path for performing these sorts of complex-ish queries in SQLAlchemy ORM would be much appreciated.
The mappings for Country and County are not shown, but based on the Doctrine query I would assume they each have a text_strings attribute.
The relevant portion of SQLAlchemy documentation describing aliases with joins is at:
http://www.sqlalchemy.org/docs/orm/tutorial.html#using-aliases
generation of functions is at:
http://www.sqlalchemy.org/docs/core/tutorial.html#functions
cyts = aliased(TextString)
cits = aliased(TextString)
cots = aliased(TextString)
cy = aliased(Country)
co = aliased(County)

session.query(
    City.id,
    (
        cyts.text_string +
        '> ' +
        func.if_(cots.text_string != None, cots.text_string + '> ', cits.text_string)
    ).label('city_breadcrumb')
).\
    outerjoin((cits, City.text_strings)).\
    outerjoin((cy, City.country)).\
    outerjoin((cyts, cy.text_strings)).\
    outerjoin((co, City.county)).\
    outerjoin((cots, co.text_strings)).\
    filter(cits.language == 'en').\
    filter(cyts.language == 'en').\
    filter(or_(cots.language == 'en', cots.language == None))
though I would think it's a heck of a lot simpler to just say:
city.text_strings.text_string + " > " + city.country.text_strings.text_string + " > " + city.county.text_strings.text_string
If you put a descriptor on City, Country, and County:
class City(object):
    # ...

    @property
    def text_string(self):
        return self.text_strings.text_string
then you could say city.text_string.
Just for the record, here is the code I ended up using. Mike (zzzeek)'s answer stays as the correct and definitive answer because this is just an adaptation of his, which was the breakthrough for me.
cits = aliased(TextString)
cyts = aliased(TextString)
cots = aliased(TextString)

for (city_id, country_text, county_text, city_text) in \
        session.query(City.id, cyts.text_string, cots.text_string, cits.text_string).\
        outerjoin((cits, and_(cits.id == City.text_string_id, cits.language == 'en'))).\
        outerjoin((County, City.county)).\
        outerjoin((cots, and_(cots.id == County.text_string_id, cots.language == 'en'))).\
        outerjoin((Country, City.country)).\
        outerjoin((cyts, and_(cyts.id == Country.text_string_id, cyts.language == 'en'))):
    # Python to construct the breadcrumb, checking county_text for None-ness