Flask-Restless: increment a column on a many-to-many table - Python

I am currently working with Flask, Flask-SQLAlchemy and Flask-Restless to provide API endpoints for an AngularJS front-end application. I have set up the necessary many-to-many relationships between tables thanks to another user.
My new issue is that I need to set the count column in my junction table, and also increment count by 1. I have been looking through the Flask-Restless docs for a solution, but I haven't seen any reference to accessing a junction-table column. I have also tried accessing it through include_columns without success. Here are the tables involved:
surveyOptions = db.Table('survey_options',
    db.Column('survey_id', db.Integer, db.ForeignKey('survey.id')),
    db.Column('option_id', db.Integer, db.ForeignKey('option.id')),
    db.Column('count', db.Integer)  # increment this value
)

class Survey(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    category = db.Column(db.String(50))
    question = db.Column(db.Text)
    startDate = db.Column(db.DateTime)
    endDate = db.Column(db.DateTime)
    options = db.relationship('Option', secondary=surveyOptions,
                              backref=db.backref('surveys', lazy='dynamic'), lazy='dynamic')

class Option(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    text = db.Column(db.Text)
Any guidance on how to increment the surveyOptions.count column would be greatly appreciated. I am more than willing to provide more code samples if you need any.
Edit
count will be incremented every time a user selects one of the options, to keep track of how many people prefer each option.
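One approach that is commonly suggested (a sketch and an assumption on my part, not something taken from the Flask-Restless docs) is to map the junction table as an association object instead of a plain db.Table, so the extra count column becomes an ordinary model attribute. The SurveyOption class and record_vote helper below are hypothetical names, and the existing secondary=surveyOptions relationship would be replaced by (or marked viewonly alongside) this model:

# Hypothetical sketch: the junction table as an association object so that
# `count` can be read and incremented directly.
class SurveyOption(db.Model):
    __tablename__ = 'survey_options'
    survey_id = db.Column(db.Integer, db.ForeignKey('survey.id'), primary_key=True)
    option_id = db.Column(db.Integer, db.ForeignKey('option.id'), primary_key=True)
    count = db.Column(db.Integer, default=0)

    survey = db.relationship('Survey', backref='option_links')
    option = db.relationship('Option', backref='survey_links')

def record_vote(survey_id, option_id):
    """Hypothetical helper: increment the counter for one survey/option pair."""
    link = SurveyOption.query.get((survey_id, option_id))
    if link is None:
        link = SurveyOption(survey_id=survey_id, option_id=option_id, count=0)
        db.session.add(link)
    link.count += 1
    db.session.commit()

Once the junction table is a mapped model it can also be exposed through Flask-Restless (for example manager.create_api(SurveyOption, methods=['GET', 'PATCH'])), although the increment itself still needs to happen server-side, e.g. in a preprocessor or a plain Flask view.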

Related

Flask-SQLAlchemy - get the last quote from a user's followed jobs

My app's models are structured as follows:
user_jobs = db.Table('user_jobs',
    db.Column('user_id', db.Integer, db.ForeignKey('user.id')),
    db.Column('job_id', db.Integer, db.ForeignKey('market.id'))
)

class User(UserMixin, db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(64), index=True, unique=True)
    # Other user model fields....
    jobs = db.relationship('Job', secondary=user_jobs, backref='users')

class Job(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    # Other related fields and relationships
    quotes = db.relationship('Quote', backref='job', lazy='dynamic')

class Quote(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    timestamp = db.Column(db.DateTime, index=True, default=datetime.utcnow)
    price = db.Column(db.Integer())
    # Other related fields
    job_id = db.Column(db.Integer, db.ForeignKey('job.id'))
This model allows users to follow multiple jobs, while jobs can have multiple followers (many-to-many). A job can have multiple Quotes (one-to-many).
In my Flask app, I am creating a dashboard that displays the user's followed jobs. For each followed job on the dashboard, I want to display the most recent Quote price and timestamp.
My current thinking is to create a method on the User model that returns a join of User - Job - Quote, ordered by timestamp descending with limit(1). However, I am stuck on how to do this.
class User(UserMixin, db.Model):
    # .....
    def get_followed_jobs(self):
        return ...
Any help would be greatly appreciated.
EDIT:
Given a list of users, and trying to find the latest quotes for the jobs that user 1 is following, the raw SQL appears to be:
SELECT *
FROM (
    SELECT
        job.id, job.job_name, latest_quote.timestamp,
        latest_quote.price, user_job.user_id
    FROM (
        SELECT job_id, max(timestamp) AS timestamp, price
        FROM quote
        GROUP BY job_id
    ) AS latest_quote
    JOIN job ON job.id = latest_quote.job_id
    JOIN user_job ON user_job.job_id = latest_quote.job_id
) AS aquery
WHERE user_id = 1;
Can this be made more efficient in SQL?
The answer linked below might be helpful for getting the required data in a many-to-many relationship.
SqlAlchemy and Flask, how to query many-to-many relationship
If you require the data in a serialisable format for a many-to-many relationship, which is your use case, I would suggest using nested schemas in marshmallow:
Flask Marshmallow/SqlAlchemy: Serializing many-to-many relationships
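Beyond the linked answers, here is a minimal sketch (my assumption of how it could look, using the models from this question and the user_jobs association table) of expressing the raw SQL above as a SQLAlchemy query:

from sqlalchemy import func

def get_followed_jobs(user_id):
    """Sketch: each job followed by user_id, paired with its most recent Quote."""
    # Latest quote timestamp per job (mirrors the inner GROUP BY job_id query).
    latest = (
        db.session.query(Quote.job_id, func.max(Quote.timestamp).label('ts'))
        .group_by(Quote.job_id)
        .subquery()
    )
    return (
        db.session.query(Job, Quote)
        .join(user_jobs, user_jobs.c.job_id == Job.id)
        .join(latest, latest.c.job_id == Job.id)
        .join(Quote, (Quote.job_id == Job.id) & (Quote.timestamp == latest.c.ts))
        .filter(user_jobs.c.user_id == user_id)
        .all()
    )

Joining back to Quote on the (job_id, max timestamp) pair avoids relying on the non-deterministic price column inside the grouped subquery of the raw SQL; as a method on User it would use self.id instead of the user_id argument.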

Querying two tables using SQLAlchemy and PostgreSQL

I need help improving my SQLAlchemy query. I'm using Python 3.7, SQLAlchemy 1.3.15 and PostgreSQL 9.4.3 as the database. I'm trying to return the count of appointments for a specific date and time slot. However, my appointments and appointment slots are stored in separate tables, so I'm having to query both models/tables to get the desired result. Here's what I have:
Appointments Model
The appointments table has a few columns, including a foreign key to the appointment slots table.
class Appointment(ResourceMixin, db.Model):
    __tablename__ = 'appointments'
    id = db.Column(db.Integer, primary_key=True)
    user_id = db.Column(db.Integer, db.ForeignKey('users.id', onupdate='CASCADE', ondelete='CASCADE'), index=True, nullable=True)
    slot_id = db.Column(db.Integer, db.ForeignKey('appointment_slots.id', onupdate='CASCADE', ondelete='CASCADE'), index=True, nullable=False)
    appointment_date = db.Column(db.DateTime(), nullable=False)
    appointment_type = db.Column(db.String(128), nullable=False, default='general')
Appointment Slots Table
The appointment slots table contains the time slots for the appointments. The model has a relationship back to the appointments table.
class AppointmentSlot(ResourceMixin, db.Model):
    __tablename__ = 'appointment_slots'
    id = db.Column(db.Integer, primary_key=True)
    # Relationships.
    appointments = db.relationship('Appointment', uselist=False,
                                   backref='appointments', lazy='joined', passive_deletes=True)
    start_time = db.Column(db.String(5), nullable=False, server_default='08:00')
    end_time = db.Column(db.String(5), nullable=False, server_default='17:00')
SQLAlchemy Query
Currently I'm running the following SQLAlchemy query to get the appointment count for a specific date and time slot:
appointment_count = db.session.query(func.count(Appointment.id)).join(AppointmentSlot)\
    .filter(and_(Appointment.appointment_date == date, AppointmentSlot.id == Appointment.id,
                 AppointmentSlot.start_time == time)).scalar()
The query above returns the correct result, which is a single count, but I'm worried that the query is not optimized. It currently returns in 380 ms, yet there are only 8 records in the appointments and appointment_slots tables. These tables will eventually hold hundreds of thousands of records, so I'm worried that even though the query works now, it will struggle as the record count grows.
How can I improve or optimize this query? I was looking at SQLAlchemy subqueries using the appointments relationship on the appointment_slots table, but was unable to get it to work and confirm the performance. I'm thinking there must be a better way to run this query, especially using the appointments relationship on the appointment_slots table, but I'm currently stumped. Any suggestions?
I was incorrect about the query load time; I was actually looking at the page load, which was 380 ms. I also changed some fields on the models, removing slot_id from the appointments model and adding an appointment_id foreign key to the appointment_slots model. The page load for the following query:
appointment_count = db.session.query(func.count(Appointment.id)).join(AppointmentSlot)\
    .filter(and_(Appointment.appointment_date == date,
                 AppointmentSlot.appointment_id == Appointment.id,
                 AppointmentSlot.start_time == time)).scalar()
ended up being 0.4637 ms.
However, I still tried to improve the query and was able to do so by using a SQLAlchemy subquery. The following subquery:
subquery = db.session.query(Appointment.id).filter(Appointment.appointment_date == date).subquery()
query = db.session.query(func.count(AppointmentSlot.id))\
    .filter(and_(AppointmentSlot.appointment_id.in_(subquery),
                 AppointmentSlot.start_time == time)).scalar()
returned a load time of 0.3700 ms, which shows much better performance than the join query.
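For comparison, here is a hedged sketch of the same count written with a correlated EXISTS instead of IN plus a subquery (assuming the revised schema with AppointmentSlot.appointment_id); with indexes on appointment_date, appointment_id and start_time either form should hold up, so it is worth measuring both against realistic data volumes:

from sqlalchemy import and_, exists, func

# Sketch: count the slots at `time` that have an appointment on `date`,
# using EXISTS rather than IN(subquery).
appointment_count = (
    db.session.query(func.count(AppointmentSlot.id))
    .filter(
        AppointmentSlot.start_time == time,
        exists().where(
            and_(
                Appointment.id == AppointmentSlot.appointment_id,
                Appointment.appointment_date == date,
            )
        ),
    )
    .scalar()
)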

Support duplicate entries in a M2M associative table

I am working on a SQLAlchemy app using Flask and flask-db and have been scratching my head over how to solve this question. My models look like this:
class event_schematics_map():
    event_schematics_table = db.Table(
        'event_schematics_table',
        db.Column('fk_schematic_id', db.Integer, db.ForeignKey('schematics.id')),
        db.Column('fk_event_id', db.Integer, db.ForeignKey('events.id'))
    )

class Events(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(120), index=True, unique=False)
    date = db.Column(db.String(120), unique=False)
    owner = db.Column(db.Integer, db.ForeignKey('user.id'))
    schematics = db.relationship('Schematics', secondary=event_schematics_map.event_schematics_table, backref='schematic')

class Schematics(db.Model):
    __tablename__ = 'schematics'
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    name = db.Column(db.VARCHAR(70), index=True)
    schematics_description = db.Column(db.String(1024), index=True)
    creator_id = db.Column(db.Integer, db.ForeignKey('user.id'))
Schematics are created separately and are on the many side of a one-to-many relationship with a user table (not shown above). The map table is used as the glue in the M2M relationship.
Currently I am adding new schematics to each event and updating the association table like so: events.schematics.append(SomeNewSchematic), which works fine until I attempt to attach multiple instances of the exact same Schematic, like this:
schem1 = Schematics(name='TheOnlySchematic')
schem2 = Schematics(name='TheOnlySchematic')
event.schematics.append(schem1)
event.schematics.append(schem2)
# etc.
in which case only one ends up being applied, as I think the entry is treated as a duplicate. I believe this may be solved by an additional field in the association table event_schematics_map, but I'm unsure if I am overlooking something simpler, or how to implement this.
Effectively, I want to support multiple entries of the exact same model.
I believe my problem is along the same lines as can I append twice the same object to an instrumentedlist in sqlalchemy - but I could not see a solution for this.
I'd really appreciate any pointers on how to solve this problem.
Thank you for your reply and for setting me straight here.
You are quite right, duplicate entries do not make sense.
I ended up solving this by using an association table, as discussed in other answers, to track the occurrence count of each schematic.
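For reference, here is a sketch of that association-object approach (EventSchematic and the backref names are assumptions): giving each link row its own primary key means the same Schematic can be attached to an Event any number of times, and a count/quantity column could be added in the same way.

# Hypothetical association object replacing the plain event_schematics_table.
class EventSchematic(db.Model):
    __tablename__ = 'event_schematics'
    id = db.Column(db.Integer, primary_key=True)  # own PK, so duplicate pairs are allowed
    event_id = db.Column(db.Integer, db.ForeignKey('events.id'), nullable=False)
    schematic_id = db.Column(db.Integer, db.ForeignKey('schematics.id'), nullable=False)

    event = db.relationship('Events', backref='schematic_links')
    schematic = db.relationship('Schematics', backref='event_links')

# Usage sketch: attach the same schematic to one event twice.
# db.session.add_all([
#     EventSchematic(event=some_event, schematic=the_only_schematic),
#     EventSchematic(event=some_event, schematic=the_only_schematic),
# ])
# db.session.commit()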

SQLAlchemy Core, retrieve the id(s) of the updated row(s)

I am trying to learn how to use SQLAlchemy Core properly, and currently I have this query:
up = Airport.__table__.update().where(Airport.__table__.c.iata_code == iata_code).values(city=city)
I am using it to update values in a table that has this structure:
class Airport(Base):
    __tablename__ = 'airports'
    id = Column(Integer, primary_key=True)
    iata_code = Column(String(64), index=True, nullable=False)
    city = Column(String(256), nullable=False)
The problem is that after executing the update I need the ids of the updated rows.
Is it possible to update the values and obtain the ids in only one query? I would like to avoid having to perform two queries for this operation.
The DBMS I am using is MySQL.
Disclaimer: This is for SQLAlchemy ORM, not Core
Get the object and update it; since the instance is already loaded, its id is available without an extra round trip to the database.
airport = db.session.query(Airport).filter(Airport.iata_code == iata_code).first()
airport.city = city
db.session.commit()
print(airport.id)
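If it has to stay in Core: MySQL does not support a RETURNING clause on UPDATE, so a single statement cannot hand back the affected ids. A minimal sketch of the usual two-step pattern (assuming engine is your Engine and iata_code/city are bound as before) selects the ids first and then updates them inside one transaction:

from sqlalchemy import select, update

airports = Airport.__table__

with engine.begin() as conn:  # `engine` is assumed to be your existing Engine
    # 1) Collect the ids of the rows that are about to be updated.
    ids = [row.id for row in conn.execute(
        select([airports.c.id]).where(airports.c.iata_code == iata_code)  # 1.x-style select
    )]
    # 2) Update exactly those rows.
    conn.execute(
        update(airports).where(airports.c.id.in_(ids)).values(city=city)
    )

print(ids)  # ids of the rows that were updated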

SQLAlchemy Join to retrieve data from multiple tables

I'm trying to retrieve data from multiple tables with SQLAlchemy using the .join() method.
When I run the query, I was expecting to get a single object back with all the data from the different tables joined, so that I could use a.area_name and so on, where area_name is a column on one of the joined tables. Below is the query I am running and the table layout; if anyone could offer insight into how to achieve the behaviour I'm aiming for, I would greatly appreciate it. I've been able to use the .join() method with this same syntax to match results and return them, and I figured it would return the extra data from the joined rows as well (perhaps I'm misunderstanding how the method works, or how to retrieve the information via the query object?).
If it helps with the troubleshooting, I'm using MySQL as the database.
query:
a = User.query.filter(User.user_id == 1).join(UserGroups,
    User.usergroup == UserGroups.group_id).join(Areas, User.area == Areas.area_id).first()
and the tables:
class User(db.Model):
    user_id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(20), unique=True)
    usergroup = db.Column(db.Integer, db.ForeignKey('user_group.group_id'), nullable=False)
    area = db.Column(db.Integer, db.ForeignKey('areas.area_id'), nullable=False)

class UserGroups(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    group_id = db.Column(db.Integer, nullable=False, unique=True)
    group_name = db.Column(db.String(64), nullable=False, unique=True)

class Areas(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    area_id = db.Column(db.Integer, nullable=False, unique=True)
    area_name = db.Column(db.String(64), nullable=False, unique=True)
So it seems that I need to use a different approach to the query, and that it returns a tuple of objects which I then need to parse.
What worked is:
a = db.session.query(User, UserGroups, Areas
    ).filter(User.user_id == 1
    ).join(UserGroups, User.usergroup == UserGroups.group_id
    ).join(Areas, User.area == Areas.area_id
    ).first()
The rest remains the same. This then returned a tuple that I could parse, where the data from User is a[0], from UserGroups is a[1], and from Areas is a[2]. I can then access the group_name column with a[1].group_name, etc.
Hopefully this helps someone else who's trying to work with this!
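One small addition (a sketch, assuming the exact query above): the row returned by db.session.query(User, UserGroups, Areas)....first() behaves like a named tuple, so the three entities can also be unpacked or addressed by class name instead of by index.

# Equivalent ways of reading the result row `a` from the query above.
user, group, area = a              # positional unpacking
print(a.User.name)                 # same object as a[0]
print(a.UserGroups.group_name)     # same object as a[1]
print(a.Areas.area_name)           # same object as a[2]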
Take a look at SQLAlchemy's relationship function:
http://docs.sqlalchemy.org/en/latest/orm/basic_relationships.html#one-to-many
You may want to add a new attribute to your User class like so:
group = db.relationship('UserGroups', back_populates='users')
This will automagically resolve the one-to-many relationship between User and UserGroups (assuming that a User can only be a member of one UserGroup at a time). You can then simply access the attributes of the UserGroup once you have queried a User (or set of Users) from your database:
a = User.query.filter(...).first()
print(a.group.group_name)
SQLAlchemy resolves the joins for you, you do not need to explicitly join the foreign tables when querying.
The reverse access is also possible: if you just query for a UserGroups object, you can access the corresponding members directly (via the back_populates keyword argument):
g = UserGroups.query.filter(...).first()
for u in g.users:
    print(u.name)
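To round this off, here is a minimal sketch (assuming Flask-SQLAlchemy and the columns from the question; location is a made-up attribute name chosen to avoid clashing with the existing area column, and backref is used instead of back_populates for brevity) of the User model with both relationships declared, so the group and area data are reachable without writing joins by hand:

class User(db.Model):
    user_id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(20), unique=True)
    usergroup = db.Column(db.Integer, db.ForeignKey('user_group.group_id'), nullable=False)
    area = db.Column(db.Integer, db.ForeignKey('areas.area_id'), nullable=False)

    # The foreign keys above let SQLAlchemy infer both join conditions.
    group = db.relationship('UserGroups', backref='users')
    location = db.relationship('Areas', backref='users')

# Usage sketch:
# a = User.query.filter_by(user_id=1).first()
# print(a.name, a.group.group_name, a.location.area_name)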
