Efficient way to build a MySQL update query in Python

I have a class variable called attributes which lists the instance variables I want to update in a database:
attributes = ['id', 'first_name', 'last_name', 'name', 'name_url',
              'email', 'password', 'password_salt', 'picture_id']
Each of the class attributes is updated upon instantiation.
I would like to loop through each of the attributes and build a MySQL update query in the form of:
UPDATE members SET id = self._id, first_name = self._first_name ...
Thanks.

class Ic(object):
    attributes = ['id', 'first_name', 'last_name', 'name', 'name_url',
                  'email', 'password', 'password_salt', 'picture_id']

    def __init__(self): ...
    # and other methods that set all the attributes on self

    def updater(self):
        sqlbase = 'UPDATE members SET %s WHERE whateveryouwanthere'
        setpieces = []
        values = []
        for atr in self.attributes:
            # '?' is the qmark paramstyle; MySQLdb uses 'format', so there
            # you would emit '%s' placeholders instead ('%s = %%s' % atr).
            setpieces.append('%s = ?' % atr)
            values.append(getattr(self, atr, None))
        return sqlbase % ', '.join(setpieces), values
The caller needs to build up an object of class Ic appropriately, then do
sql, values = theobj.updater()
and lastly call mycursor.execute(sql, values) on whatever DB API cursor it has to the database which needs to be updated (I have no idea about the WHERE conditions you want to use to identify the specific record to update, which is why I put a whateveryouwanthere placeholder there;-).
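For instance, a minimal usage sketch (hedged: it assumes a DB API connection conn whose paramstyle matches the '?' placeholders, e.g. sqlite3, and the WHERE condition on id is purely an assumption for illustration):

member = Ic()                    # assumes __init__ has populated the attributes
sql, values = member.updater()
sql = sql.replace('whateveryouwanthere', 'id = ?')   # hypothetical WHERE clause
values.append(member.id)
cur = conn.cursor()
cur.execute(sql, values)
conn.commit()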

First question: will all the variables in attributes be used? If so, the easiest way is probably to use the DB API's execute method.
Assuming your cursor is instantiated as csr:
sql = "UPDATE mytable SET phone=? WHERE username=?"
variables = ("a phone number", "a username")
csr.execute(sql, variables)
There are additional ways of doing it, such as using dictionaries, positional indicators, etc.; see the DB API FAQ for more details.
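For example, with a driver that accepts the pyformat style (MySQLdb and PyMySQL both do), the dictionary variant looks like this (a sketch; the table and column names are placeholders):

sql = "UPDATE mytable SET phone = %(phone)s WHERE username = %(username)s"
csr.execute(sql, {"phone": "a phone number", "username": "a username"})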

Related

Item update query produced by django is wrong

I am trying to update one item at a time using the Django ORM with TimescaleDB as my database.
I have a timescale hypertable defined by the following model:
class RecordTimeSeries(models.Model):
    # NOTE: We have removed the primary key (unique constraint) manually, since we don't want an id column
    timestamp = models.DateTimeField(primary_key=True)
    location = PointField(srid=settings.SRID, null=True)
    station = models.ForeignKey(Station, on_delete=models.CASCADE)
    # This is a ForeignKey and not a OneToOneField because of [this](https://stackoverflow.com/questions/61205063/error-cannot-create-a-unique-index-without-the-column-date-time-used-in-part)
    record = models.ForeignKey(Record, null=True, on_delete=models.CASCADE)
    temperature_celsius = models.FloatField(null=True)

    class Meta:
        unique_together = (
            "timestamp",
            "station",
            "record",
        )
When I update the item using save():
record_time_series = models.RecordTimeSeries.objects.get(
    record=record,
    timestamp=record.timestamp,
    station=record.station,
)
record_time_series.location = record.location
record_time_series.temperature_celsius = temperature_celsius
record_time_series.save()
I get the following error:
psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "5_69_db_recordtimeseries_timestamp_station_id_rec_0c66b9ab_uniq"
DETAIL: Key ("timestamp", station_id, record_id)=(2022-05-25 09:15:00+00, 2, 2) already exists.
and I see that the query that django used is the following:
{'sql': 'UPDATE "db_recordtimeseries" SET "location" = NULL, "station_id" = 2, "record_id" = 2, "temperature_celsius" = 26.0 WHERE "db_recordtimeseries"."timestamp" = \'2022-05-25T09:15:00\'::timestamp', 'time': '0.007'}
On the other hand the update is successful with update():
record_time_series = models.RecordTimeSeries.objects.filter(
    record=record,
    timestamp=record.timestamp,
    station=record.station,
)
record_time_series.update(
    location=record.location,
    temperature_celsius=temperature_celsius,
)
and the sql used by django is:
{'sql': 'UPDATE "db_recordtimeseries" SET "location" = NULL, "temperature_celsius" = 25.0 WHERE ("db_recordtimeseries"."record_id" = 2 AND "db_recordtimeseries"."station_id" = 2 AND "db_recordtimeseries"."timestamp" = \'2022-05-25T09:15:00\'::timestamp)', 'time': '0.012'}
Obviously, the first query is wrong because it does not have the correct parameters in the WHERE clause, but why doesn't django include those parameters, since timestamp is not a unique key, and how can this be fixed?
I think the error was caused by the foreign key:
Firstly:
Be aware that the update() method is converted directly to an SQL
statement. It is a bulk operation for direct updates. It doesn’t run
any save() methods on your models, or emit the pre_save or post_save
signals (which are a consequence of calling save()), or honor the
auto_now field option.
source
Secondly:
Analogous to ON DELETE there is also ON UPDATE which is invoked when a
referenced column is changed (updated). The possible actions are the
same. In this case, CASCADE means that the updated values of the
referenced column(s) should be copied into the referencing row(s).
source

Python SQLAlchemy ORM: Update row with instance of class

I'm trying to create a function for updating rows in a database, based on an instance of a class.
Basically I would like to do something like this:
def update_table(self, result):
    session = self.Session()
    session.query(result.__class__).filter_by(id=result.id).update(result)
    session.commit()
    session.close_all()
user = db.Model.User(
    id = 1,
    name = "foo"
)
# Store user to db
db.save(user)

updated_user = db.Model.User(
    id = 1,
    name = "bar"
)
# Update the name of the user with id=1
update_table(updated_user)
The problem is of course that the session query results in a
TypeError: 'User' object is not iterable
but in my mind, this should end up with an updated user with name="bar".
Is there way to create such a function using the SQLAlchemy ORM?
You don't need an extra update procedure ...
user = db.Model.User(
    id = 1,
    name = "foo"
)
# Store user to db
db.save(user)

new_user = session.query(User).filter(User.id == 1).first()
new_user.name = "bar"
session.commit()
I ended up with this solution:
# Update single entry
def update_table_by_id(self, entry):
    # Open connection to database
    session = self.Session()
    # Fetch entry from database
    db_result = session.query(entry.__class__).filter_by(id=entry.id)
    # Convert Models to dicts
    entry_dict = entry.as_dict()
    db_result_dict = db_result.first().as_dict()
    # Update database result with the passed-in entry. Skip if None
    for key in entry_dict:
        if entry_dict[key] is not None:
            db_result_dict[key] = entry_dict[key]
    # Update db and close connections
    db_result.update(db_result_dict)
    session.commit()
    session.close_all()
It allows me to send in arbitrary models, and they are all handled the same.
Suggestions and improvements are welcome!
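A more compact variant of the same idea (a sketch under the same assumptions: every model exposes as_dict() and an id column) is to set the attributes on the fetched instance and let the session's unit of work emit the UPDATE:

def update_table_by_id(self, entry):
    session = self.Session()
    # Fetch the row and update its mapped attributes in place
    db_obj = session.query(entry.__class__).filter_by(id=entry.id).one()
    for key, value in entry.as_dict().items():
        if value is not None:
            setattr(db_obj, key, value)   # changes are tracked by the session
    session.commit()
    session.close()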

How to create entities dynamically with PonyORM?

I would like to create DB Entities in Pony ORM via a factory method, to avoid code duplication for similar tables.
Here is my not fully working minimal example:
from pony.orm import *

def factory(db, tablename):
    class TableTemplate(db.Entity):
        _table_ = tablename
        first_name = Required(str)
        last_name = Required(str)
        composite_index(first_name, last_name)
    return TableTemplate

db = Database(provider='sqlite', filename=':memory:')
Table1 = factory(db, "TABLE_1")
# the following line produces the exception:
# pony.orm.core.ERDiagramError: Entity TableTemplate already exists
Table2 = factory(db, "TABLE_2")
db.generate_mapping(create_tables=True)

with db_session:
    Table1(first_name="foo", last_name="bar")
The exception could be circumvented by creating the class with a dynamic name using type, but this does not work well with composite_index...
Is there a good way to have a table factory with Pony ORM?
Here's my take on your class factory:
def factory(db, tablename):
    fields = {
        '_table_': tablename,        # note the trailing underscore Pony expects
        'first_name': Required(str),
        # rest of the fields
    }
    table_template = type(tablename.capitalize(), (db.Entity,), fields)
    return table_template
This creates a class named after the capitalized tablename and sets the field descriptors on it. I'm not sure about metaclasses, though.
UPDATE ON THE composite_index ISSUE
composite_index uses some pretty obscure features by calling this method:
def _define_index(func_name, attrs, is_unique=False):
    if len(attrs) < 2: throw(TypeError,
        '%s() must receive at least two attributes as arguments' % func_name)
    cls_dict = sys._getframe(2).f_locals
    indexes = cls_dict.setdefault('_indexes_', [])
    indexes.append(Index(*attrs, is_pk=False, is_unique=is_unique))
A little experimentation tells me you may be able to perform the same by adding the field yourself. That would make our factory's fields variable look like this:
fields = {
    '_table_': tablename,
    'first_name': Required(str),
    '_indexes_': [Index(('first_name', 'last_name'), is_pk=False, is_unique=False)]
    # rest of the fields
}
Give it a try and let me know.
UPDATE ON OP EXPERIMENT
The final code would be something like this:
from pony.orm import *
from pony.orm.core import Index

def factory(db, tablename):
    fields = {
        '_table_': tablename,
        'first_name': Required(str),
        'last_name': Required(str),
        # rest of the fields
    }
    fields['_indexes_'] = [Index(fields['first_name'], fields['last_name'],
                                 is_pk=False, is_unique=False)]
    table_template = type(tablename.capitalize(), (db.Entity,), fields)
    return table_template
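With that factory, the setup from the question should map without the ERDiagramError, since every call produces a class with a distinct name (a sketch mirroring the question's code):

db = Database(provider='sqlite', filename=':memory:')
Table1 = factory(db, "TABLE_1")
Table2 = factory(db, "TABLE_2")   # distinct class name, so no clash
db.generate_mapping(create_tables=True)

with db_session:
    Table1(first_name="foo", last_name="bar")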

SQLite triggers & datetime defaults in SQL DDL using Peewee in Python

I have a SQLite table defined like so:
create table if not exists KeyValuePair (
  key CHAR(255) primary key not null,
  val text not null,
  fup timestamp default current_timestamp not null, -- time of first upload
  lup timestamp default current_timestamp not null  -- time of last upload
);

create trigger if not exists entry_first_insert after insert
on KeyValuePair
begin
  update KeyValuePair set lup = current_timestamp where key = new.key;
end;

create trigger if not exists entry_last_updated after update of val
on KeyValuePair
begin
  update KeyValuePair set lup = current_timestamp where key = old.key;
end;
I'm trying to write a peewee.Model for this table in Python. This is what I have so far:
import datetime

import peewee as pw

db = pw.SqliteDatabase('dhm.db')

class BaseModel(pw.Model):
    class Meta:
        database = db

class KeyValuePair(BaseModel):
    key = pw.FixedCharField(primary_key=True, max_length=255)
    val = pw.TextField(null=False)
    fup = pw.DateTimeField(
        verbose_name='first_updated', null=False, default=datetime.datetime.now)
    lup = pw.DateTimeField(
        verbose_name='last_updated', null=False, default=datetime.datetime.now)

db.connect()
db.create_tables([KeyValuePair])
When I inspect the SQL produced by the last line I get:
CREATE TABLE "keyvaluepair" (
"key" CHAR(255) NOT NULL PRIMARY KEY,
"val" TEXT NOT NULL,
"fup" DATETIME NOT NULL,
"lup" DATETIME NOT NULL
);
So I have two questions at this point:
I've been unable to find a way to achieve the behavior of the entry_first_insert and entry_last_updated triggers. Does peewee support triggers? If not, is there a way to just create a table from a .sql file rather than the Model class definition?
Is there a way to make the default for fup and lup propagate to the SQL definitions?
I've figured out a proper answer to both questions. This solution actually enforces the desired triggers and default timestamps in the SQL DDL.
First we define a convenience class to wrap up the SQL for a trigger. There is a more proper way to do this with the peewee.Node objects, but I didn't have time to delve into all of that for this project. This Trigger class simply provides string formatting to output proper sql for trigger creation.
class Trigger(object):
    """Trigger template wrapper for use with peewee ORM."""

    _template = """
    {create} {name} {when} {trigger_op}
    on {tablename}
    begin
        {op} {tablename} {sql} where {pk} = {old_new}.{pk};
    end;
    """

    def __init__(self, table, name, when, trigger_op, op, sql, safe=True):
        self.create = 'create trigger' + (' if not exists' if safe else '')
        self.tablename = table._meta.name
        self.pk = table._meta.primary_key.name
        self.name = name
        self.when = when
        self.trigger_op = trigger_op
        self.op = op
        self.sql = sql
        self.old_new = 'new' if trigger_op.lower() == 'insert' else 'old'

    def __str__(self):
        return self._template.format(**self.__dict__)
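To make the template concrete: rendering one of the triggers defined for the KeyValuePair model further down would look roughly like this (hedged; it relies on peewee's _meta.name being the lowercased class name):

t = Trigger(KeyValuePair, 'kvp_first_insert', 'after', 'insert',
            'update', 'set lup = current_timestamp')
print(str(t))
# create trigger if not exists kvp_first_insert after insert
# on keyvaluepair
# begin
#     update keyvaluepair set lup = current_timestamp where key = new.key;
# end;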
Next we define a class TriggerTable that inherits from the BaseModel. This class overrides the default create_table to follow table creation with trigger creation. If any triggers fail to create, the whole create is rolled back.
class TriggerTable(BaseModel):
    """Table with triggers."""

    @classmethod
    def triggers(cls):
        """Return an iterable of `Trigger` objects to create upon table creation."""
        return tuple()

    @classmethod
    def new_trigger(cls, name, when, trigger_op, op, sql):
        """Create a new trigger for this class's table."""
        return Trigger(cls, name, when, trigger_op, op, sql)

    @classmethod
    def create_table(cls, fail_silently=False):
        """Create this table in the underlying database."""
        super(TriggerTable, cls).create_table(fail_silently)
        for trigger in cls.triggers():
            try:
                cls._meta.database.execute_sql(str(trigger))
            except:
                cls._meta.database.drop_table(cls, fail_silently)
                raise
The next step is to create a class BetterDateTimeField. This Field object overrides the default __ddl__ to append a "DEFAULT current_timestamp" string if the default instance variable is set to the datetime.datetime.now function. There are certainly better ways to do this, but this one captures the basic use case.
class BetterDateTimeField(pw.DateTimeField):
    """Propagate defaults to database layer."""

    def __ddl__(self, column_type):
        """Return a list of Node instances that defines the column."""
        ddl = super(BetterDateTimeField, self).__ddl__(column_type)
        if self.default == datetime.datetime.now:
            ddl.append(pw.SQL('DEFAULT current_timestamp'))
        return ddl
Finally, we define the new and improved KeyValuePair Model, incorporating our trigger and datetime field improvements. We conclude the Python code by creating the table.
class KeyValuePair(TriggerTable):
    """DurableHashMap entries are key-value pairs."""
    key = pw.FixedCharField(primary_key=True, max_length=255)
    val = pw.TextField(null=False)
    fup = BetterDateTimeField(
        verbose_name='first_updated', null=False, default=datetime.datetime.now)
    lup = BetterDateTimeField(
        verbose_name='last_updated', null=False, default=datetime.datetime.now)

    @classmethod
    def triggers(cls):
        return (
            cls.new_trigger(
                'kvp_first_insert', 'after', 'insert', 'update',
                'set lup = current_timestamp'),
            cls.new_trigger(
                'kvp_last_updated', 'after', 'update', 'update',
                'set lup = current_timestamp')
        )

KeyValuePair.create_table()
Now the schema is created properly:
sqlite> .schema keyvaluepair
CREATE TABLE "keyvaluepair" ("key" CHAR(255) NOT NULL PRIMARY KEY, "val" TEXT NOT NULL, "fup" DATETIME NOT NULL DEFAULT current_timestamp, "lup" DATETIME NOT NULL DEFAULT current_timestamp);
CREATE TRIGGER kvp_first_insert after insert
on keyvaluepair
begin
    update keyvaluepair set lup = current_timestamp where key = new.key;
end;
CREATE TRIGGER kvp_last_updated after update
on keyvaluepair
begin
    update keyvaluepair set lup = current_timestamp where key = old.key;
end;
sqlite> insert into keyvaluepair (key, val) values ('test', 'test-value');
sqlite> select * from keyvaluepair;
test|test-value|2015-12-07 21:58:05|2015-12-07 21:58:05
sqlite> update keyvaluepair set val = 'test-value-two' where key = 'test';
sqlite> select * from keyvaluepair;
test|test-value-two|2015-12-07 21:58:05|2015-12-07 21:58:22
You can override the save function of the model where you insert the timestamps. See TimeStampModel for an example.
I stumbled across exactly this issue a while ago, and spent some time coming up with an optimal design to support Triggers in PeeWee (inspired by the above answer). I am quite happy with how we ended up implementing it, and wanted to share this. At some point I will do a PR into Peewee for this.
Creating Triggers & TriggerListeners in PeeWee
Objective
This document describes how to do this in two parts:
1. How to add a Trigger to a model in the database.
2. How to create a ListenThread that will have a callback function that is notified each time the table is updated.
How-To Implementation
The beauty of this design is you only need one item: the TriggerModelMixin Model. Then it is easy to create listeners to subscribe/have callback methods.
The TriggerModelMixin can be copy-pasted as:
import logging
import select
import threading

from peewee import Model

logger = logging.getLogger(__name__)

class TriggerModelMixin(Model):
    """ PeeWee Model with support for triggers.

    This will create a trigger that on all table updates will send
    a NOTIFY to {tablename}_updates.

    Note that it will also take care of updating the triggers as
    appropriate/necessary.
    """
    _template = """
        CREATE OR REPLACE FUNCTION {function_name}()
        RETURNS trigger AS
        $BODY$
        BEGIN
            PERFORM pg_notify(
                CAST('{notify_channel_name}' AS text),
                row_to_json(NEW)::text);
            RETURN NEW;
        END;
        $BODY$
        LANGUAGE plpgsql VOLATILE
        COST 100;
        ALTER FUNCTION {function_name}() OWNER TO postgres;

        DROP TRIGGER IF EXISTS {trigger_name} ON "{tablename}";
        CREATE TRIGGER {trigger_name}
        AFTER INSERT OR UPDATE OR DELETE
        ON "{tablename}"
        {frequency}
        EXECUTE PROCEDURE {function_name}();
    """
    function_name_template = "{table_name}updatesfunction"
    trigger_name_template = "{table_name}updatestrigger"
    notify_channel_name_template = "{table_name}updates"
    frequency = "FOR EACH ROW"

    @classmethod
    def get_notify_channel(cls):
        table_name = cls._meta.table_name
        return cls.notify_channel_name_template.format(**{"table_name": table_name})

    @classmethod
    def create_table(cls, fail_silently=False):
        """ Create table and triggers """
        super(TriggerModelMixin, cls).create_table()
        table_name = cls._meta.table_name
        notify_channel = cls.get_notify_channel()
        function_name = cls.function_name_template.format(**{"table_name": table_name})
        trigger_name = cls.trigger_name_template.format(**{"table_name": table_name})
        trigger = cls._template.format(**{
            "function_name": function_name,
            "trigger_name": trigger_name,
            "notify_channel_name": notify_channel,
            "tablename": table_name,
            "frequency": cls.frequency
        })
        logger.info(f"Creating Triggers for {cls}")
        cls._meta.database.execute_sql(str(trigger))

    @classmethod
    def create_db_listener(cls):
        ''' Returns an object that will listen to the database notify channel
        and call a specified callback function if triggered.
        '''
        class Trigger_Listener:
            def __init__(self, db_model):
                self.db_model = db_model
                self.running = True
                self.test_mode = False
                self.channel_name = ""

            def stop(self):
                self.running = False

            def listen_and_call(self, f, *args, timeout: int = 5, sync=False):
                ''' Start listening and call the callback method `f` if a
                trigger notify is received.

                This has two styles: sync (blocking) and async (non-blocking).
                Note that `f` must have `record` as a keyword parameter - this
                will be the record that sent the notification.
                '''
                if sync:
                    return self.listen_and_call_sync(f, *args, timeout=timeout)
                else:
                    t = threading.Thread(
                        target=self.listen_and_call_sync,
                        args=(f, *args),
                        kwargs={'timeout': timeout}
                    )
                    t.start()

            def listen_and_call_sync(self, f, *args, timeout: int = 5):
                ''' Call callback function `f` when the channel is notified. '''
                self.channel_name = self.db_model.get_notify_channel()
                db = self.db_model._meta.database
                db.execute_sql(f"LISTEN {self.channel_name};")
                conn = db.connection()
                while self.running:
                    # select() returns three empty lists on timeout; anything
                    # else means the connection has data (a notification) waiting
                    if not select.select([conn], [], [], timeout) == ([], [], []):
                        # Wait for the bytes to become fully available in the buffer
                        conn.poll()
                        while conn.notifies:
                            record = conn.notifies.pop(0)
                            logger.info(f"Trigger received with record {record}")
                            f(*args, record=record)
                            if self.test_mode:
                                break

        return Trigger_Listener(cls)
Example Implementation:
db_listener = FPGExchangeOrder.create_db_listener()

def callback_method(record=None):
    # Callback method to handle the record.
    logger.info(f"DB update on record: {record}")
    # Handle the update here

db_listener.listen_and_call(callback_method)
How to use this
1. Add a Trigger to a model in the database
This is very easy. Just add the mixin TriggerModelMixin to the model that you want to add support to. This mixin handles the creation of the triggers and provides the listening machinery used to get notified when the triggers fire.
2. Create a ListenThread to have a Callback
We have two modes for the listener: async (non-blocking) and sync (blocking). By default it is non-blocking; pass sync=True if you want it to block.
To use it (in either case), create a callback method. Note that this callback method blocks while updates are received (records are processed serially), so do not put heavy load or I/O in this method. The only requirement is a keyword parameter named record - this is where the record from the database is passed in as a dictionary.
From this, just create the listener, then call listen_and_call.
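When you are done, call stop() on the listener; the sync loop checks the running flag on each pass, so it exits after the current select timeout window (5 seconds by default):

db_listener.stop()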

sqlalchemy: union query few columns from multiple tables with condition

I'm trying to adapt part of a MySQLdb application to sqlalchemy with the declarative base. I'm only beginning with sqlalchemy.
The legacy tables are defined something like:
student: id_number*, semester*, stateid, condition, ...
choice: id_number*, semester*, choice_id, school, program, ...
We have 3 tables for each of them (student_tmp, student_year, student_summer, choice_tmp, choice_year, choice_summer), so each pair (_tmp, _year, _summer) contains information for a specific moment.
select *
from `student_tmp`
inner join `choice_tmp` using (`id_number`, `semester`)
My problem is that the information important to me is the equivalent of the following select:
SELECT t.*
FROM (
    (
        SELECT st.*, ct.*
        FROM `student_tmp` AS st
        INNER JOIN `choice_tmp` AS ct USING (`id_number`, `semester`)
        WHERE (ct.`choice_id` = IF(RIGHT(ct.`semester`, 1) = '1', '3', '4'))
          AND (st.`condition` = 'A')
    ) UNION (
        SELECT sy.*, cy.*
        FROM `student_year` AS sy
        INNER JOIN `choice_year` AS cy USING (`id_number`, `semester`)
        WHERE (cy.`choice_id` = 4)
          AND (sy.`condition` = 'A')
    ) UNION (
        SELECT ss.*, cs.*
        FROM `student_summer` AS ss
        INNER JOIN `choice_summer` AS cs USING (`id_number`, `semester`)
        WHERE (cs.`choice_id` = 3)
          AND (ss.`condition` = 'A')
    )
) AS t
* is used to shorten the select; I'm actually only querying about 7 of the 50 available columns.
This information is used in many flavors... "Do I have new students? Do I still have all students from a given date? Which students are subscribed after the given date? etc..." The result of this select statement is to be saved in another database.
Would it be possible to achieve this with a single view-like class? The information is read-only, so I don't need to modify/create/delete. Or do I have to declare a class for each table (ending up with 6 classes) and remember to filter every time I query?
Thanks for pointers.
EDIT: I don't have modification access to the database (I cannot create a view). Both databases may not be on the same server (so I cannot create a view on my second DB).
My concern is to avoid the full table scan before filtering on condition and choice_id.
EDIT 2: I've set up declarative classes like this:
class BaseStudent(object):
    id_number = sqlalchemy.Column(sqlalchemy.String(7), primary_key=True)
    semester = sqlalchemy.Column(sqlalchemy.String(5), primary_key=True)
    unique_id_number = sqlalchemy.Column(sqlalchemy.String(7))
    stateid = sqlalchemy.Column(sqlalchemy.String(12))
    condition = sqlalchemy.Column(sqlalchemy.String(3))

class Student(BaseStudent, Base):
    __tablename__ = 'student'
    choices = orm.relationship('Choice', backref='student')

#class StudentYear(BaseStudent, Base):...
#class StudentSummer(BaseStudent, Base):...

class BaseChoice(object):
    id_number = sqlalchemy.Column(sqlalchemy.String(7), primary_key=True)
    semester = sqlalchemy.Column(sqlalchemy.String(5), primary_key=True)
    choice_id = sqlalchemy.Column(sqlalchemy.String(1))
    school = sqlalchemy.Column(sqlalchemy.String(2))
    program = sqlalchemy.Column(sqlalchemy.String(5))

class Choice(BaseChoice, Base):
    __tablename__ = 'choice'
    __table_args__ = (
        sqlalchemy.ForeignKeyConstraint(['id_number', 'semester',],
                                        [Student.id_number, Student.semester,]),
    )

#class ChoiceYear(BaseChoice, Base): ...
#class ChoiceSummer(BaseChoice, Base): ...
Now, the query that gives me correct SQL for one set of table is:
q = session.query(StudentYear, ChoiceYear) \
    .select_from(StudentYear) \
    .join(ChoiceYear) \
    .filter(StudentYear.condition == 'A') \
    .filter(ChoiceYear.choice_id == '4')
but it throws an exception...
"Could not locate column in row for column '%s'" % key)
sqlalchemy.exc.NoSuchColumnError: "Could not locate column in row for column '*'"
How do I use that query to create myself a class I can use?
If you can create this view on the database, then you simply map the view as if it were a table. See Reflecting Views.
# DB VIEW
CREATE VIEW my_view AS -- #todo: your select statements here

# SA
my_view = Table('my_view', metadata, autoload=True)

# define view object
class ViewObject(object):
    def __repr__(self):
        return "ViewObject %s" % str((self.id_number, self.semester,))

# map the view to the object
view_mapper = mapper(ViewObject, my_view)

# query the view
q = session.query(ViewObject)
for _ in q:
    print _
If you cannot create a VIEW on the database level, you could create a selectable and map the ViewObject to it. The code below should give you the idea:
student_tmp = Table('student_tmp', metadata, autoload=True)
choice_tmp = Table('choice_tmp', metadata, autoload=True)

# your SELECT part with the columns you need
qry = select([student_tmp.c.id_number, student_tmp.c.semester,
              student_tmp.c.stateid, choice_tmp.c.school])
# your INNER JOIN condition
qry = qry.where(student_tmp.c.id_number == choice_tmp.c.id_number).where(student_tmp.c.semester == choice_tmp.c.semester)
# other WHERE clauses
qry = qry.where(student_tmp.c.condition == 'A')
You can create 3 queries like this, then combine them with union_all and use the resulting query in the mapper:
view_mapper = mapper(ViewObject, my_combined_qry)
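A hedged sketch of the assembly, where qry_tmp, qry_year and qry_summer are assumed names for the three per-table selects built as above:

from sqlalchemy import union_all

# combine the three per-table selects into one selectable
my_combined_qry = union_all(qry_tmp, qry_year, qry_summer).alias('combined')
view_mapper = mapper(ViewObject, my_combined_qry)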
In both cases you have to ensure, though, that a primary key is properly defined on the view, and you might need to override the autoloaded view and specify the primary key explicitly (see the link above). Otherwise you will either receive an error or might not get proper results from the query.
Answer to EDIT-2:
qry = (session.query(StudentYear, ChoiceYear).
       select_from(StudentYear).
       join(ChoiceYear).
       filter(StudentYear.condition == 'A').
       filter(ChoiceYear.choice_id == '4')
       )
The result will be tuple pairs: (Student, Choice).
But if you want to create a new mapped class for the query, then you can create a selectable as the sample above:
student_tmp = StudentTmp.__table__
choice_tmp = ChoiceTmp.__table__
.... (see sample code above)
This is to show what I ended up doing; any comment is welcome.
class JoinedYear(Base):
    __table__ = sqlalchemy.select(
        [
            StudentYear.id_number,
            StudentYear.semester,
            StudentYear.stateid,
            ChoiceYear.school,
            ChoiceYear.program,
        ],
        from_obj=StudentYear.__table__.join(ChoiceYear.__table__),
    ) \
        .where(StudentYear.condition == 'A') \
        .where(ChoiceYear.choice_id == '4') \
        .alias('YearView')
and I will elaborate from there...
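Querying the mapped selectable then works like any other mapped class (a sketch; the attribute names follow the selected columns):

for row in session.query(JoinedYear).limit(10):
    print row.id_number, row.semester, row.school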
Thanks @van
