I'm using the SQLAlchemy recipe here to magically JSON encode/decode a column from the DB in my model like:
class Thing(Base):
    __tablename__ = 'things'

    id = Column(Integer(), primary_key=True)
    data = Column(JSONEncodedDict)
I hit a snag when I wanted to create an extra "raw_data" field in my model to access the same underlying JSON data, but without encoding/decoding it:
raw_data = Column("data", VARCHAR)
SQLAlchemy seems to get confused by the name collision and leave one column un-mapped. Is there any way I can convince SQLAlchemy to actually map both attributes to the same column?
I would just define the raw_data column through SQLAlchemy and then use Python's property/setter mechanism to expose data transparently on top of it. I.e.:
import json

class Thing(Base):
    __tablename__ = 'things'

    id = Column(Integer(), primary_key=True)
    raw_data = Column(String())

    @property
    def data(self):
        # add some checking here too
        return json.loads(self.raw_data)

    @data.setter
    def data(self, value):
        # ditto
        self.raw_data = json.dumps(value)
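A quick usage sketch (assuming a configured session) to show the round trip:
thing = Thing()
thing.data = {'answer': 42}  # encoded to a JSON string in raw_data
session.add(thing)
session.commit()

print(thing.raw_data)  # '{"answer": 42}'
print(thing.data)      # {'answer': 42}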
I'm wondering what the proper way of retrieving data from the model is. Take the classes below as an example:
class A(db.Model):
    def get_attributes(self):
        return self.product_category.attributes

class Attribute(db.Model):
    attribute_id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    label = db.Column(db.String(255), nullable=False)
Let's say that calling get_attributes() returns three objects of the Attribute class.
In my route, I only want to receive the list of attribute labels. What I'm currently doing is looping through the objects and retrieving the label property like this:
labels = [i.label for i in obj.get_attributes()]
I don't think this is the proper way of doing it. Is there a better way to achieve this?
You should use a relationship between A and Attribute.
For example:
class A(db.Model):
    # ...
    # table primary key, columns etc.
    # ...
    attributes = db.relationship(
        "Attribute",
        backref="a",  # note: "as" is a reserved word in Python, so it can't be used as the backref name
        lazy=True,
    )

class Attribute(db.Model):
    # ...
    # table primary key, columns etc.
    # ...
    a_id = db.Column(db.Integer, db.ForeignKey('a.id'))
After that, you can access a given a's attributes with a.attributes. To add an attribute to "a", just call a.attributes.append(attribute).
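With the relationship in place, the label list can come straight from the related objects, or you can ask the database for only the labels. A sketch, assuming a is an instance of A and that A's primary key is id:
# build the labels from the relationship
labels = [attribute.label for attribute in a.attributes]

# or query just the label column, skipping full Attribute objects
labels = [row.label for row in db.session.query(Attribute.label).filter(Attribute.a_id == a.id)]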
How can I automatically truncate string values in a data model across many attributes, without explicitly defining a @validates method for each one?
My current code:
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import validates
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class MyModel(Base):
    __tablename__ = 'my_model'

    id = Column(Integer, primary_key=True, autoincrement=True)
    name = Column(String(40), nullable=False, unique=True)

    # I can "force" truncation in my model using "validates".
    # I'd prefer not to use this solution, though...
    @validates('name')
    def validate_code(self, key, value):
        max_len = getattr(self.__class__, key).prop.columns[0].type.length
        if value and len(value) > max_len:
            value = value[:max_len]
        return value
My concern is that my ORM will span many tables and fields, and there's a high risk of oversight if every string attribute has to be added to length validation by hand. In simpler words, I need a solution that will scale. Ideally, something in my session configuration that will automatically truncate strings that are too long...
You could create a customised String type that automatically truncates its value on insert.
import sqlalchemy.types as types

class LimitedLengthString(types.TypeDecorator):
    impl = types.String

    def process_bind_param(self, value, dialect):
        if value is None:
            return None
        return value[:self.impl.length]

    def copy(self, **kwargs):
        return LimitedLengthString(self.impl.length)

class MyModel(Base):
    __tablename__ = 'my_model'

    id = Column(Integer, primary_key=True, autoincrement=True)
    name = Column(LimitedLengthString(40), nullable=False, unique=True)
The extended type will still create VARCHAR(40) in the database, so it should be possible to replace String(40) with LimitedLengthString(40)* in your code without a database migration.
* You might want to choose a shorter name.
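A quick check of the behaviour (a sketch, assuming an engine and session set up against the model above): values longer than the declared length are cut at bind time, before they reach the database.
m = MyModel(name='x' * 100)  # 100 characters into a 40-character column
session.add(m)
session.commit()

print(len(session.query(MyModel).first().name))  # 40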
I want to generate some unique URLs from IDs, with the purpose of appending them to the end of some endpoints. The result will be something like .../32dwr4. I would like to insert these short URLs into the database on instantiation, based on the primary key id.
I do not know if there is some kind of 'flushing' for operating inside the model:
class Storm(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    _url = db.Column(db.String(200), index=True)

    ## Relationships

    @hybrid_property
    def url(self):
        return self._url

    @url.setter
    def url(self, plaintext):
        self._url = base64.b64encode(plaintext)

    def __init__(self, name, question, **kwargs):
        self.name = name
        self.question = question
        self.url = self.id  # <<------ is it possible to pass its id on the fly, convert it through the setter and store it?
If it is not possible, which approach do you recommend?
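A minimal sketch of the usual flush-first pattern, assuming a Flask-SQLAlchemy db.session and that the self.url = self.id line is removed from __init__ (the id does not exist yet at that point): the id is only assigned once the pending row is flushed, so add the object first, flush, and then derive the URL from the real id.
storm = Storm(name='sandy', question='...')
db.session.add(storm)
db.session.flush()  # emits the INSERT and populates storm.id, without committing

storm.url = str(storm.id).encode()  # b64encode expects bytes
db.session.commit()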
How can I define a column as a positive integer using Flask-SQLAlchemy?
I am hoping the answer would look something like this:
class City(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    population = db.Column(db.Integer, positive=True)

    def __init__(self, population):
        self.population = population
However, this class definition throws an error because SQLAlchemy does not know about a 'positive' argument.
I could raise an exception if an object is instantiated with a negative value for the population, but I don't know how to ensure that the population remains positive after an update.
Thanks for any help.
Unfortunately, on the Python side, SQLAlchemy does its best to stay out of the way; there's no 'special SQLAlchemy' way to express that the instance attribute must satisfy some constraint:
>>> class Foo(Base):
... __tablename__ = 'foo'
... id = Column(Integer, primary_key=True)
... bar = Column(Integer)
...
>>> f = Foo()
>>> f.bar = "not a number!"
>>> f.bar
'not a number!'
If you tried to commit this object, SQLAlchemy would complain because it doesn't know how to render the supplied Python value as SQL for the column type Integer.
If that's not what you're looking for, and you just want to make sure that bad data doesn't reach the database, then you need a check constraint.
from sqlalchemy import CheckConstraint

class Foo(Base):
    __tablename__ = 'foo'

    id = Column(Integer, primary_key=True)
    bar = Column(Integer)

    __table_args__ = (
        CheckConstraint(bar >= 0, name='check_bar_positive'),
        {})
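A brief demonstration of the constraint in action (a sketch, assuming a session bound to a backend that enforces CHECK constraints, such as PostgreSQL or SQLite):
from sqlalchemy.exc import IntegrityError

session.add(Foo(bar=-1))
try:
    session.commit()  # the CHECK constraint rejects the negative value
except IntegrityError:
    session.rollback()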
I know this is old, but for what it's worth, my approach was to use marshmallow (a de/serialization and data validation library) to validate input data.
Create the schema to your model as such:
from marshmallow import validate, fields, Schema
...

class CitySchema(Schema):
    population = fields.Integer(validate=validate.Range(min=0, max=<your max value>))
Then use your schema to serialize/deserialize the data when appropriate:
...
city_data = {...}  # your city's data (dict)
city_schema = CitySchema()
# marshmallow 2.x returns a (data, errors) pair; marshmallow 3 returns the data
# directly and raises ValidationError on bad input instead
deserialized_city, validation_errors = city_schema.load(city_data)  # validation done at deserialization
...
The advantage of using a de/serialization library is that you can enforce all your data integrity rules in one place.
I'm using SQLAlchemy, and many classes in my object model have the same two attributes: id (an integer primary key) and name (a string). I'm trying to avoid declaring them in every class, like so:
class C1(declarative_base()):
    id = Column(Integer, primary_key=True)
    name = Column(String)
    # ...

class C2(declarative_base()):
    id = Column(Integer, primary_key=True)
    name = Column(String)
    # ...
What's a good way to do that? I tried using metaclasses, but haven't gotten that to work yet.
You could factor out your common attributes into a mixin class, and multiply inherit it alongside declarative_base():
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

class IdNameMixin(object):
    id = Column(Integer, primary_key=True)
    name = Column(String)

class C1(declarative_base(), IdNameMixin):
    __tablename__ = 'C1'

class C2(declarative_base(), IdNameMixin):
    __tablename__ = 'C2'

print(C1.__dict__['id'] is C2.__dict__['id'])      # False
print(C1.__dict__['name'] is C2.__dict__['name'])  # False
EDIT: You might think this would result in C1 and C2 sharing the same Column objects, but as noted in the SQLAlchemy docs, Column objects are copied when originating from a mixin class. I've updated the code sample to demonstrate this behavior.
Could you also use the Column's copy method? This way, fields can be defined independently of tables, and those fields that are reused are just field.copy()-ed.
id = Column(Integer, primary_key=True)
name = Column(String)

class C1(declarative_base()):
    id = id.copy()
    name = name.copy()
    # ...

class C2(declarative_base()):
    id = id.copy()
    name = name.copy()
    # ...
I think I got it to work.
I created a metaclass that derives from DeclarativeMeta and made that the metaclass of C1 and C2. In that new metaclass (named AutoColumnsMeta below, for illustration), I simply said:
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import DeclarativeMeta

class AutoColumnsMeta(DeclarativeMeta):  # name is illustrative
    def __new__(mcs, name, bases, attrs):
        attrs['__tablename__'] = name.lower()
        attrs['id'] = Column(Integer, primary_key=True)
        attrs['name'] = Column(String)
        return super().__new__(mcs, name, bases, attrs)
And it seems to work fine.
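A sketch of how that wires up, using the illustrative AutoColumnsMeta name from above:
Base = declarative_base()

class C1(Base, metaclass=AutoColumnsMeta):
    pass  # receives __tablename__ = 'c1', id, and name from the metaclass

class C2(Base, metaclass=AutoColumnsMeta):
    pass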