I am using a Flask API as the REST endpoint for my Angular application. Currently I am testing the API. I tested my /users endpoint to make sure I got all the users.
# importing db, app, models, schema etc.
from flask import jsonify, request

@app.route('/users')
def get_users():
    # fetching from database
    users_objects = User.query.all()
    # transforming into JSON-serializable objects
    users_schema = UserSchema(many=True)
    result = users_schema.dump(users_objects)
    # serializing as JSON
    return jsonify(result.data)
That worked. However, now I am trying to get other data (which has more than 9000 objects) and it doesn't work when I try querying all of them. I first just grabbed the first item:
@app.route('/aggregated-measurements')
def get_aggregated_measurements():
    aggregated_measurements_objects = AggregatedMeasurement.query.first()
    # transforming into JSON-serializable objects
    aggregated_measurement_schema = AggregatedMeasurementSchema()
    result = aggregated_measurement_schema.dump(aggregated_measurements_objects)
    return jsonify(result.data)
That showed me the first AggregatedMeasurement. However, when I try to query all of them with aggregated_measurements_objects = AggregatedMeasurement.query.all(), nothing displays. I did the same thing in my Jupyter notebook and that displayed them. I then thought that maybe this was too much data, so I tried to limit the query like this: aggregated_measurements_objects = AggregatedMeasurement.query.all()[:5]. That works in the Jupyter notebook, but displays nothing when I hit the route.
I don't understand why, when I hit the /users endpoint, I can see all of them, but when I try to do the same for /aggregated-measurements I get nothing (even when I limit the query). I am using flask_sqlalchemy with an SQLite db.
**Update with model and schema**
from datetime import datetime
# ... import db
import pandas as pd
from marshmallow import Schema, fields

class AggregatedMeasurement(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    created = db.Column(db.DateTime, nullable=False, default=datetime.utcnow)
    time = db.Column(db.DateTime, nullable=False)
    speed = db.Column(db.Float, nullable=False)
    direction = db.Column(db.Float, nullable=False)
    # related fields
    point_id = db.Column(db.Integer, db.ForeignKey('point.id'), nullable=False)
    point = db.relationship('Point', backref=db.backref('aggregated_measurements', lazy=True))

class AggregatedMeasurementSchema(Schema):
    id = fields.Int(dump_only=True)
    time = fields.DateTime()
    speed = fields.Number()
    direction = fields.Number()
    point_id = fields.Number()
**Second update: found the error**
After verifying that it was indeed hitting the db (thank you @gbozee), I noticed that on the /aggregated-measurements route I created the schema for just one object. I forgot to include many=True like I did in users_schema. That is why only one point appeared and, when I tried more, nothing did. I was using marshmallow (an object serialization package).
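For reference, a minimal sketch of the corrected route, keeping the same marshmallow 2.x result.data style used above:

@app.route('/aggregated-measurements')
def get_aggregated_measurements():
    aggregated_measurements_objects = AggregatedMeasurement.query.all()
    # many=True so the schema serializes a list of objects, not a single one
    aggregated_measurements_schema = AggregatedMeasurementSchema(many=True)
    result = aggregated_measurements_schema.dump(aggregated_measurements_objects)
    return jsonify(result.data)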
Related
I am using Flask (Python) and SQLAlchemy to connect to a huge db where a lot of stats are saved. I need to create some useful insights from these stats, so I only need to read/get the data and never modify it.
The issue I have now is the following:
Before I can access a table, I need to replicate the table in my models file. For example, I see the table Login_Data in the DB, so I go into my models and recreate the exact same table.
class Login_Data(Base):
    __tablename__ = 'login_data'

    id = Column(Integer, primary_key=True)
    date = Column(Date, nullable=False)
    new_users = Column(Integer, nullable=True)

    def __init__(self, date=None, new_users=None):
        self.date = date
        self.new_users = new_users

    def get(self, id):
        if self.id == id:
            return self
        else:
            return None

    def __repr__(self):
        return '<%s(%r, %r, %r)>' % (self.__class__.__name__, self.id, self.date, self.new_users)
I do this because otherwise I can't query it using:
some_data = Login_Data.query.limit(10)
But this feels unnecessary; there must be a better way. What's the point in recreating the models if they are already defined? What should I use here:
some_data = [SOMETHING HERE SO I DONT NEED TO RECREATE THE TABLE].query.limit(10)
Simple question but I have not found a solution yet.
Thanks to Tryph for the right sources.
To access the data of an existing DB with SQLAlchemy you need to use automap. In your configuration file, where you load/declare your DB type, you need to use automap_base(). After that you can create your models and use the correct table names from the DB without specifying everything yourself:
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine
import stats_config
Base = automap_base()
engine = create_engine(stats_config.DB_URI, convert_unicode=True)
# reflect the tables
Base.prepare(engine, reflect=True)
# mapped classes are now created with names by default
# matching that of the table name.
LoginData = Base.classes.login_data
db_session = Session(engine)
After this is done, you can use all the SQLAlchemy functions you know on:
some_data = db_session.query(LoginData).limit(10)
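For instance, a small sketch reusing the mapped class and columns from above:

for row in db_session.query(LoginData).limit(10):
    # columns reflected from the login_data table are available as attributes
    print(row.date, row.new_users)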
You may be interested in reflection and automap.
Unfortunately, since I have never used either of those features, I am not able to tell you much more about them. I just know that they allow you to use the database schema without explicitly declaring it in Python.
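As a rough illustration of table reflection only (a sketch, assuming the same stats_config.DB_URI used in the answer above):

from sqlalchemy import MetaData, Table, create_engine, select

engine = create_engine(stats_config.DB_URI)
metadata = MetaData()

# load the column definitions of the existing table from the database
login_data = Table('login_data', metadata, autoload=True, autoload_with=engine)

with engine.connect() as conn:
    rows = conn.execute(select([login_data]).limit(10)).fetchall()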
I'm trying to use search capability in a Flask application. It seems to be saving to the database properly; however, the query isn't returning anything.
DATABASE MODEL:
app = Flask(__name__)
csrf = CsrfProtect(app)
csrf.init_app(app)
db = SQLAlchemy(app)

class ArticleQuery(BaseQuery, SearchQueryMixin):
    pass

class latest_movies_scraper(db.Model):
    query_class = ArticleQuery
    __tablename__ = 'latest_movies_scraper'

    id = db.Column(sa.Integer, primary_key=True)
    name = db.Column(db.Unicode(255))
    url = db.Column(db.Unicode(255))
    image_url = db.Column(db.Unicode(255))
    create = db.Column(db.DateTime, default=datetime.datetime.utcnow)
    search_vector = db.Column(TSVectorType('name'))
How I'm saving to the database:
check_if_exists = latest_movies_scraper.query.filter_by(name=dictionary['title']).first()

if check_if_exists:
    print check_if_exists.name
    print 'skipping this...'
else:
    insert_to_db = latest_movies_scraper(name=dictionary['title'], url=dictionary['href'], image_url=dictionary['featured_image'])
    db.session.add(insert_to_db)
    db.session.commit()
How I am using the search functionality:
name = latest_movies_scraper.query.search(u'Black Panther (2018)').limit(5).all()
name holds an empty list, but it should return the matching records instead.
My goal above is to query the name from the database. It doesn't return anything, even though the name Black Panther (2018) exists in my database.
So the search functionality isn't working as expected.
SQLAlchemy-Searchable doesn't index existing data. This has to be done manually by performing a synchronisation. For the table definition above the code below is sufficient:
from sqlalchemy_searchable import sync_trigger

def sync_fts():
    sync_trigger(db.engine, 'latest_movies_scraper', 'search_vector', ['name'])
This code would normally be part of the db management tools (Flask-Script, Click).
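For example, a rough sketch of exposing it as a Flask CLI (Click) command; the command name sync-fts and the app.cli registration are assumptions here, not part of the original setup:

@app.cli.command('sync-fts')
def sync_fts_command():
    # re-creates the search trigger and indexes rows that already exist
    sync_trigger(db.engine, 'latest_movies_scraper', 'search_vector', ['name'])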
Description
I have a Flask application with plain SQLAlchemy. The application is intended to be used internally in a company for easier saving of measurement data, with MySQL as the database.
On one page I have a table with all devices used for measurement and a form that is used to add, remove or modify measurement devices.
Problem
The problem is that when I enter a new device in the database, the page is automatically refreshed to fetch new data from the DB, and the new device is sometimes shown and sometimes not when I refresh the page. In other words, the added table row keeps appearing and disappearing even though the row is visible in the database. The same goes when I try to delete a device from the database: the row is sometimes shown and sometimes not when refreshing the page, even though it has been deleted from the DB.
The same problem appears for all examples similar to this one (adding, deleting and modifying data).
What I have tried
Below is the code for the table model:
class DvDevice(Base):
    __tablename__ = "dvdevice"

    id = Column("device_id", Integer, primary_key=True, autoincrement=True)
    name = Column("device_name", String(50), nullable=True)
    code = Column("device_code", String(10), nullable=True, unique=True)
    hw_ver = Column("hw_ver", String(10), nullable=True)
    fw_ver = Column("fw_ver", String(10), nullable=True)
    sw_ver = Column("sw_ver", String(10), nullable=True)
And here is the code that inserts/deletes data from the table:
# Insertion
device = DvDevice()
device.code = self.device_code
device.name = self.device_name
device.hw_ver = self.hw_ver
device.fw_ver = self.fw_ver
device.sw_ver = self.sw_ver

ses.add(device)
ses.commit()
ses.expire_all()  # Should this be here?

# Deletion
ses.query(DvDevice).filter_by(id=self.device_id).delete()
ses.commit()
ses.expire_all()  # Should this be here?
I have read in some posts on Stack Overflow that I should include the following decorated function in models.py:
@app.teardown_appcontext
def shutdown_session(exception=None):
    ses.expire_all()  # ses being the database session object
I tried this and it still doesn't work as it should. Should I put the decorated function somewhere else?
The second thing I tried is to put ses.expire_all() after all commits, and it still doesn't work.
What should I do to prevent this from happening?
Edit 1
from sqlalchemy import create_engine, update
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import NullPool
from config import MYSQLCONNECT
engine = create_engine(MYSQLCONNECT)
Session = sessionmaker(bind=engine)
session = Session()
I solved the problem with the use of the following function from http://docs.sqlalchemy.org/en/latest/orm/session_basics.html#when-do-i-construct-a-session-when-do-i-commit-it-and-when-do-i-close-it:
from contextlib import contextmanager

@contextmanager
def session_scope():
    """Provide a transactional scope around a series of operations."""
    session = Session()
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()

with session_scope() as session:
    ...  # code that uses session
The problem was that I created the session object at the beginning and then never closed it.
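For instance, the insertion code from above could be wrapped in the scope like this (a sketch; the commit and cleanup happen when the block exits):

with session_scope() as ses:
    device = DvDevice()
    device.code = self.device_code
    device.name = self.device_name
    device.hw_ver = self.hw_ver
    device.fw_ver = self.fw_ver
    device.sw_ver = self.sw_ver
    ses.add(device)
    # no explicit ses.commit() needed; session_scope commits and closes the session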
Using Python/Flask/SQLAlchemy/Heroku.
Want to store dictionaries of objects as properties of an object:
TO CLARIFY
class SoccerPlayer(db.Model):
    name = db.Column(db.String(80))
    goals_scored = db.Column(db.Integer())
How can I set name and goals_scored as one dictionary?
UPDATE: The user will input the name and goals_scored if that makes any difference.
Also, I am searching online for an appropriate answer, but as a noob, I haven't been able to understand/implement the stuff I find on Google for my Flask web app.
I would second the approach provided by Sean; following it you get a properly normalized DB schema and can more easily utilize the RDBMS to do the hard work for you. If, however, you insist on using a dictionary-like structure inside your DB, I'd suggest trying out the hstore data type, which allows you to store key/value pairs as a single value in Postgres. I'm not sure if the hstore extension is created by default in the Postgres DBs provided by Heroku; you can check that by typing the \dx command inside psql. If there are no lines with hstore in them, you can create it by typing CREATE EXTENSION hstore;.
Since hstore support in SQLAlchemy is available in version 0.8, which is not released yet (but hopefully will be in the coming weeks), you need to install it from its Mercurial repository:
pip install -e hg+https://bitbucket.org/sqlalchemy/sqlalchemy#egg=SQLAlchemy
Then define your model like this:
from sqlalchemy.dialects.postgresql import HSTORE
from sqlalchemy.ext.mutable import MutableDict

class SoccerPlayer(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), nullable=False, unique=True)
    stats = db.Column(MutableDict.as_mutable(HSTORE))

# Note that hstore only allows text for both keys and values (and None for
# values only).
p1 = SoccerPlayer(name='foo', stats={'goals_scored': '42'})
db.session.add(p1)
db.session.commit()
After that you can do the usual stuff in your queries:
from sqlalchemy import func, cast

q = db.session.query(
    SoccerPlayer.name,
    func.max(cast(SoccerPlayer.stats['goals_scored'], db.Integer))
).group_by(SoccerPlayer.name).first()
Check out the HSTORE docs for more examples.
If you are storing such information in a database I would recommend another approach:
class SoccerPlayer(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80))
    team_id = db.Column(db.Integer, db.ForeignKey('team.id'))
    stats = db.relationship("Stats", uselist=False, backref="player")

class Team(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80))
    players = db.relationship("SoccerPlayer")

class Stats(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    player_id = db.Column(db.Integer, db.ForeignKey('soccer_player.id'))
    goals_scored = db.Column(db.Integer)
    assists = db.Column(db.Integer)
    # Add more stats as you see fit
With this model setup you can do crazy things like this:
from sqlalchemy import and_
from sqlalchemy.sql import func

max_goals_by_team = db.session.query(
        Team.id,
        func.max(Stats.goals_scored).label("goals_scored")
    ).join(SoccerPlayer, Stats). \
    group_by(Team.id).subquery()

players = db.session.query(
        Team.name.label("Team Name"),
        SoccerPlayer.name.label("Player Name"),
        max_goals_by_team.c.goals_scored
    ).join(SoccerPlayer). \
    join(Stats). \
    join(max_goals_by_team,
         and_(SoccerPlayer.team_id == max_goals_by_team.c.id,
              Stats.goals_scored == max_goals_by_team.c.goals_scored))
thus making the database do the hard work of pulling out the players with the highest goals per team, rather than doing it all in Python.
Not even Django (a bigger Python web framework than Flask) supports this by default. But in Django you can install it; it's called a JSONField (https://github.com/bradjasper/django-jsonfield).
What I'm trying to tell you is that not all databases know how to store binaries, but they do know how to store strings, and the JSONField for Django is actually a string that contains the JSON dump of a dictionary.
So, in short, you can do this in Flask:
import simplejson

class SoccerPlayer(db.Model):
    _data = db.Column(db.String(1024))

    @property
    def data(self):
        return simplejson.loads(self._data)

    @data.setter
    def data(self, value):
        self._data = simplejson.dumps(value)
But beware, this way you can only assign the entire dictionary at once:
player = SoccerPlayer()
player.data = {'name': 'Popey'}
print player.data  # Will work as expected
# {'name': 'Popey'}

player.data['score'] = '3'
print player.data
# Will not show the score because the setter doesn't know how to input by key
# {'name': 'Popey'}
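One workaround (a sketch, keeping the example names from above) is to read the whole dictionary through the property, modify it, and assign it back so the setter serializes it again:

data = player.data        # read the full dict through the property
data['score'] = '3'
player.data = data        # reassign so the setter dumps it back into the column
print player.data
# {'name': 'Popey', 'score': '3'}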
I have recently decided to start using Pyramid (a Python web framework) for my projects from now on.
I have also decided to use SQLAlchemy, and I want to use raw MySQL (personal reasons) but still keep the ORM features.
The first part of the code in models.py reads:
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
Base = declarative_base()
Now, from here, how do I execute a CREATE TABLE query using raw MySQL?
The traditional SQLAlchemy way would be:
class Page(Base):
    __tablename__ = 'pages'

    id = Column(Integer, primary_key=True)
    name = Column(Text, unique=True)
    data = Column(Text)

    def __init__(self, name, data):
        self.name = name
        self.data = data
DBSession.execute('CREATE TABLE ....')
Have a look at sqlalchemy.text() for parametrized queries.
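For instance, a rough sketch of text() with the DBSession defined above; the raw column types are assumptions made up here just to mirror the Page model:

from sqlalchemy import text

DBSession.execute(text(
    "CREATE TABLE pages ("
    "  id INTEGER PRIMARY KEY AUTO_INCREMENT,"
    "  name TEXT,"
    "  data TEXT"
    ")"
))
# parametrized query using a bound parameter instead of string formatting
result = DBSession.execute(text("SELECT * FROM pages WHERE name = :name"), {'name': 'home'})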
My own biased suggestion would be to use http://pypi.python.org/pypi/khufu_sqlalchemy to set up the SQLAlchemy engine.
Then inside a Pyramid view you can do something like:
from khufu_sqlalchemy import dbsession
db = dbsession(request)
db.execute("select * from table where id=:id", {'id':7})
Inside views.py, if you are adding form elements, first create an instance of the model. In your snippet, do it as
pg = Page()
and add it with
DBSession.add(pg)
for all the form elements you want to add, e.g. name and data from your snippet.
The final code would be similar to:
name = request.params['name']
data = request.params['data']
pg = Page(name, data)
DBSession.add(pg)