Consider this simple table definition (using SQLAlchemy 0.5.6):
from sqlalchemy import *

db = create_engine('sqlite:///tutorial.db')
db.echo = False  # Try changing this to True and see what happens

metadata = MetaData(db)

user = Table('user', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('name', String(40)),
    Column('age', Integer),
    Column('password', String),
)

from sqlalchemy.ext.declarative import declarative_base

class User(declarative_base()):
    __tablename__ = 'user'
    user_id = Column('user_id', Integer, primary_key=True)
    name = Column('name', String(40))
I want to know the max length of the name column, e.g. from the user table and from the User declarative class:
print user.name.length
print User.name.length
I have tried User.name.type.length, but it throws an exception:
Traceback (most recent call last):
  File "del.py", line 25, in <module>
    print User.name.type.length
  File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.6-py2.5.egg/sqlalchemy/orm/attributes.py", line 135, in __getattr__
    key)
AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' object has an attribute 'type'
User.name.property.columns[0].type.length
Note that SQLAlchemy supports composite properties, which is why columns is a list. It has a single item for simple column properties.
This should work (tested on my machine):
print user.columns.name.type.length
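For completeness, both access paths can be checked in a quick, self-contained sketch (written against a recent SQLAlchemy release; the models below just mirror the question):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.orm import declarative_base

# Core Table, as in the question
metadata = MetaData()
user = Table(
    'user', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('name', String(40)),
)

# Declarative class, as in the question
Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    user_id = Column(Integer, primary_key=True)
    name = Column(String(40))

# Table: via the .columns (a.k.a. .c) collection
print(user.columns.name.type.length)              # 40

# Declarative: via the mapped property's columns list
print(User.name.property.columns[0].type.length)  # 40
```

In modern SQLAlchemy releases the instrumented attribute also proxies the column type, so `User.name.type.length` works directly as well (the AttributeError in the question is specific to the old 0.5 line).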
I was getting errors when fields were too big, so I wrote a generic function that trims any string down while accounting for words with spaces. It leaves words intact and trims the string down for insertion. I included my ORM model for reference.
class ProductIdentifierTypes(Base):
    __tablename__ = 'prod_id_type'
    id = Column(Integer, primary_key=True, autoincrement=True)
    name = Column(String(length=20))
    description = Column(String(length=100))
import logging

logger = logging.getLogger(__name__)

def trim_for_insert(field_obj, in_str: str) -> str:
    max_len = field_obj.property.columns[0].type.length
    if len(in_str) <= max_len:
        return in_str
    logger.debug(f'Trimming {field_obj} to {max_len} max length.')
    trim_str = in_str[:max_len - 1]
    # if a space appears early enough, cut on a word boundary instead
    if ' ' in trim_str[:int(max_len * 0.9)]:
        return ' '.join(trim_str.split(' ')[:-1])
    return trim_str
def foo_bar():
    from models.deals import ProductIdentifierTypes, ProductName

    _str = "Foo is a 42 year old big brown dog that all the kids call bar."
    print(_str)
    print(trim_for_insert(ProductIdentifierTypes.name, _str))

    _str = "Full circle from the tomb of the womb to the womb of the tomb we come, an ambiguous, enigmatical incursion into a world of solid matter that is soon to melt from us like the substance of a dream."
    print(_str)
    print(trim_for_insert(ProductIdentifierTypes.description, _str))
If you have access to the class:
TableClass.column_name.type.length
If you have access to an instance, you can reach the class through the __class__ attribute:
table_instance.__class__.column_name.type.length
So in your case:
# Via Instance
user.__class__.name.type.length
# Via Class
User.name.type.length
My use case is similar to @Gregg Williamson's.
However, I implemented it differently:
def __setattr__(self, attr, value):
    # look up the mapped attribute on the class, then its column type
    column = getattr(self.__class__, attr, None)
    col_type = getattr(column, "type", None)
    if length := getattr(col_type, "length", 0):
        value = value[:length]
    super().__setattr__(attr, value)
Question
I created a hybrid property composed of a string and a decimal formatted as a percentage, but I am getting a TypeError when using the hybrid expression. I've tried several variations on the f-string, including converting the value to float first, but I still get the error on the same line. What is the best way to do this string formatting and concatenation in the hybrid property expression?
I want to know why result_1 produces an error while result_2 works correctly.
Model
from decimal import Decimal as D

class SupplierDiscount(Base):
    __tablename__ = "tblSupplierDiscount"
    id = Column(Integer, primary_key=True)
    discount = Column(DECIMAL(5, 4), nullable=False)
    description = Column(String, nullable=False)

    @hybrid_property
    def disc_desc(self):
        return f'{self.description}: {self.discount * 100:.4f}%'

    @disc_desc.expression
    def disc_desc(cls):
        return f'{cls.description}: {cls.discount * 100:.4f}%'  # Error generated here
result_1 - Preferred method, but results in an error
result_1 = session.query(
    SupplierDiscount.id.label("SDId"),
    SupplierDiscount.disc_desc.label("SDDDesc")
).all()
print('Below is from result_1')
print(result_1)
for i in result_1:
    print(i.id, i.disc_desc)
Error produced in result_1
TypeError: unsupported format string passed to BinaryExpression.__format__
result_2 - This works, but it is not the preferred method
result_2 = session.query(SupplierDiscount).all()
print('Below is from result_2')
print(result_2)
for i in result_2:
    print(i.id, i.disc_desc)
Environment
SQLAlchemy==1.3.20
PostgreSQL 13
There is a distinction between class-level and instance-level access. The hybrid property is used at instance level and uses Python string formatting to get the desired result. As the expression is used for class-level access, it needs to be defined as a SQL expression to return the desired result. See the docs for reference.
Knowing this we can rewrite the code to the following to achieve the same result on class-level and instance-level:
from sqlalchemy import Column, Integer, create_engine, DECIMAL, String, func
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import Session

Base = declarative_base()

class Discount(Base):
    __tablename__ = 'Discount'
    id = Column(Integer, primary_key=True)
    description = Column(String)
    discount = Column(DECIMAL(5, 4))

    @hybrid_property
    def disc_desc(self):
        return f'{self.description}: {self.discount * 100:.4f}%'

    @disc_desc.expression
    def disc_desc(cls):
        return cls.description + ': ' + func.cast(cls.discount * 100, String) + '%'

engine = create_engine(dburl)
Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)

with Session(engine) as session:
    discount = Discount(description='test', discount=5.4)
    session.add(discount)
    session.commit()

    disc = session.query(Discount).first()
    print('Instance-level access')
    print(disc.disc_desc)
    print('')

    disc = session.query(Discount.disc_desc).first()
    print('Class-level access')
    print(disc.disc_desc)
This results in:
Instance-level access
test: 540.0000%
Class-level access
test: 540.0000%
Given a my_obj instance of MyType with a my_collection relationship to RelType, I have a validation method decorated with @validates('my_collection') that coerces appended dicts with a primary-key key/value pair into instances of RelType.
So, it works perfectly in this case:
my_obj.my_collection.append(RelType(rel_type_id=x))
And in this case, automatically coercing the values:
my_obj.my_collection.append({"rel_type_id": x})
However, the validation method is not called when the whole collection is replaced. It doesn't work for this case:
my_obj.my_collection = [{"rel_type_id": x}]
I get a TypeError: unhashable type: 'dict' because I'm trying to assign the dict directly, not an instance of RelType.
From the documentation, it looks like the only way to get it to work in that case is to have a custom collection class with a method tagged as @collection.converter, but besides the extra complication of using a custom collection, it looks like that would just duplicate code that's already in the validator.
Am I missing something? Is there a better/easier way?
UPDATE
Here's a minimal example reproducing the problem (SQLAlchemy 1.1.5):
from sqlalchemy import Column
from sqlalchemy import ForeignKey
from sqlalchemy import Integer
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm import validates

engine = create_engine('sqlite:///:memory:')
Session = sessionmaker(bind=engine)
Base = declarative_base()

class RelType(Base):
    __tablename__ = 'rel_type'
    rel_type_id = Column(Integer, primary_key=True)
    my_type_id = Column(Integer, ForeignKey('my_type.my_type_id'))

class MyType(Base):
    __tablename__ = 'my_type'
    my_type_id = Column(Integer, primary_key=True)
    my_collection = relationship('RelType')

    @validates('my_collection')
    def validate_my_collection(self, key, value):
        if value is not None and not isinstance(value, RelType):
            value = RelType(**value)
        return value

def main():
    Base.metadata.create_all(engine)
    obj = MyType(my_type_id=1)
    # this works
    obj.my_collection.append({'rel_type_id': 2})
    # but this immediately raises TypeError: unhashable type: 'dict'
    obj.my_collection = [{'rel_type_id': 1}]

if __name__ == '__main__':
    main()
This is the exception:
Traceback (most recent call last):
  File "sqlamin.py", line 57, in <module>
    main()
  File "sqlamin.py", line 51, in main
    obj.my_collection = [{'rel_type_id': 1}]
  File ".env/lib/python3.4/site-packages/sqlalchemy/orm/attributes.py", line 224, in __set__
    instance_dict(instance), value, None)
  File ".env/lib/python3.4/site-packages/sqlalchemy/orm/attributes.py", line 1081, in set
    new_values, old_collection, new_collection)
  File ".env/lib/python3.4/site-packages/sqlalchemy/orm/collections.py", line 748, in bulk_replace
    constants = existing_idset.intersection(values or ())
  File ".env/lib/python3.4/site-packages/sqlalchemy/util/_collections.py", line 612, in intersection
    result._members.update(self._working_set(members).intersection(other))
TypeError: unhashable type: 'dict'
Indeed, this is unexpected behavior, more of a design flaw than a bug. There's an issue open for a better fix in 1.2, but meanwhile the workaround is using a custom collection with the @collection.converter decorator:
class MyCollection(list):
    @collection.converter
    def convert(self, value):
        return [RelType(**v) if not isinstance(v, RelType) else v for v in value]
And use that with the relationship:
my_collection = relationship('RelType', collection_class=MyCollection)
Unfortunately the @collection.appender also doesn't work, for similar reasons, so you have to implement both the validator and the converter to catch both the append and replace cases.
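A minimal self-contained sketch of that workaround, reusing the MyType/RelType models from the question (the converter runs when the whole collection is replaced, before the ORM tries to hash the incoming members):

```python
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import declarative_base, relationship
from sqlalchemy.orm.collections import collection

Base = declarative_base()

class RelType(Base):
    __tablename__ = 'rel_type'
    rel_type_id = Column(Integer, primary_key=True)
    my_type_id = Column(Integer, ForeignKey('my_type.my_type_id'))

class MyCollection(list):
    # Invoked on bulk replacement (obj.my_collection = [...]),
    # before the new members are hashed.
    @collection.converter
    def convert(self, value):
        return [v if isinstance(v, RelType) else RelType(**v) for v in value]

class MyType(Base):
    __tablename__ = 'my_type'
    my_type_id = Column(Integer, primary_key=True)
    my_collection = relationship('RelType', collection_class=MyCollection)

obj = MyType(my_type_id=1)
obj.my_collection = [{'rel_type_id': 2}]  # coerced to RelType, no TypeError
```

Note that @collection.converter is deprecated in newer SQLAlchemy releases in favor of the bulk_replace event, but it is the documented hook in the 1.1/1.2 line the question targets.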
I'm using python-storm as my ORM. The many-to-many reference set is giving me headaches :(
These are the relevant objects:
class Author(object):
    __storm_table__ = "author"
    id = Int(primary=True)
    name = Unicode()
    institution_id = Int()
    institution = Reference(institution_id, Institution.id)

    def __init__(self, name):
        self.name = name

class Paper(object):
    __storm_table__ = "paper"
    id = Int(primary=True)
    name = Unicode()
    conference_id = Int()
    conference = Reference(conference_id, Conference.id)

    def __init__(self, name):
        self.name = name

class AuthorPapers(object):
    __storm_table__ = "authorpapers"
    __storm_primary__ = "author_id", "paper_id"
    author_id = Int()
    paper_id = Int()
The respective SQLite tables look like this:
store.execute("CREATE TABLE if not exists author (id INTEGER PRIMARY KEY, name VARCHAR, institution_id INTEGER, FOREIGN KEY (institution_id) REFERENCES institution(id))")
store.execute("CREATE TABLE if not exists paper (id INTEGER PRIMARY KEY, name VARCHAR, conference_id INTEGER, FOREIGN KEY (conference_id) REFERENCES conference(id))")
store.execute("CREATE TABLE if not exists authorpapers (author_id INTEGER, paper_id INTEGER, PRIMARY KEY (author_id, paper_id))")
Now say I have two authors who collaborated on a paper
a = Author(u"Steve Rogers")
b = Author(u"Captain America")
and a paper
p6 = Paper(u"Bunga Bunga")
So now I want to associate both authors with the paper using
Author.papers = ReferenceSet(Author.id, AuthorPapers.author_id, Paper.id, AuthorPapers.paper_id)
and doing this
a.papers.add(p6)
b.papers.add(p6)
This is, by the way, how the Storm tutorial says it is supposed to work... but I get:
  File "/usr/lib64/python2.7/site-packages/storm/references.py", line 376, in add
    self._relation2.link(remote, link, True)
  File "/usr/lib64/python2.7/site-packages/storm/references.py", line 624, in link
    pairs = zip(self._get_local_columns(local.__class__),
  File "/usr/lib64/python2.7/site-packages/storm/references.py", line 870, in _get_local_columns
    for prop in self.local_key)
  File "/usr/lib64/python2.7/site-packages/storm/references.py", line 870, in <genexpr>
    for prop in self.local_key)
  File "/usr/lib64/python2.7/site-packages/storm/properties.py", line 53, in __get__
    return self._get_column(cls)
  File "/usr/lib64/python2.7/site-packages/storm/properties.py", line 97, in _get_column
    attr = self._detect_attr_name(cls)
  File "/usr/lib64/python2.7/site-packages/storm/properties.py", line 82, in _detect_attr_name
    raise RuntimeError("Property used in an unknown class")
RuntimeError: Property used in an unknown class
And I'm not really able to make sense of this right now.
I'm not really familiar with Storm, but looking at the documentation example, it looks like this is just an issue with the order in which the arguments to ReferenceSet are passed. I tried this:
Author.papers = ReferenceSet(Author.id, AuthorPapers.author_id, AuthorPapers.paper_id, Paper.id)
instead of this:
Author.papers = ReferenceSet(Author.id, AuthorPapers.author_id, Paper.id, AuthorPapers.paper_id)
and no exception was raised.
This is a beginner-level question.
I have a catalog of mtypes:
mtype_id name
1 'mtype1'
2 'mtype2'
[etc]
and a catalog of Objects, which must have an associated mtype:
obj_id mtype_id name
1 1 'obj1'
2 1 'obj2'
3 2 'obj3'
[etc]
I am trying to do this in SQLAlchemy by creating the following schemas:
mtypes_table = Table('mtypes', metadata,
    Column('mtype_id', Integer, primary_key=True),
    Column('name', String(50), nullable=False, unique=True),
)

objs_table = Table('objects', metadata,
    Column('obj_id', Integer, primary_key=True),
    Column('mtype_id', None, ForeignKey('mtypes.mtype_id')),
    Column('name', String(50), nullable=False, unique=True),
)

mapper(MType, mtypes_table)
mapper(MyObject, objs_table,
    properties={'mtype': Relationship(MType, backref='objs', cascade="all, delete-orphan")}
)
When I try to add a simple element like:
mtype1 = MType('mtype1')
obj1 = MyObject('obj1')
obj1.mtype=mtype1
session.add(obj1)
I get the error:
AttributeError: 'NoneType' object has no attribute 'cascade_iterator'
Any ideas?
Have you tried:
Column('mtype_id', ForeignKey('mtypes.mtype_id')),
instead of:
Column('mtype_id', None, ForeignKey('mtypes.mtype_id')),
See also: https://docs.sqlalchemy.org/en/13/core/constraints.html
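The reason the shorter form works: when a Column is given a ForeignKey but no explicit type, SQLAlchemy copies the type from the referenced column once the target is known. A small sketch (table layout mirrors the question):

```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, String, Table

metadata = MetaData()

mtypes_table = Table('mtypes', metadata,
    Column('mtype_id', Integer, primary_key=True),
    Column('name', String(50), nullable=False, unique=True),
)

objs_table = Table('objects', metadata,
    Column('obj_id', Integer, primary_key=True),
    # no explicit type: inferred from mtypes.mtype_id
    Column('mtype_id', ForeignKey('mtypes.mtype_id')),
    Column('name', String(50), nullable=False, unique=True),
)

print(objs_table.c.mtype_id.type)
```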
I was able to run the code you have shown above so I guess the problem was removed when you simplified it for the purpose of this question. Is that correct?
You didn't show a traceback so only some general tips can be given.
In SQLAlchemy (at least in 0.5.8 and above) there are only two objects with "cascade_iterator" attribute: sqlalchemy.orm.mapper.Mapper and sqlalchemy.orm.interfaces.MapperProperty.
Since you didn't get sqlalchemy.orm.exc.UnmappedClassError exception (all mappers are right where they should be) my wild guess is that some internal sqlalchemy code gets None somewhere where it should get a MapperProperty instance instead.
Put something like this just before the session.add() call that causes the exception:
from sqlalchemy.orm import class_mapper
from sqlalchemy.orm.interfaces import MapperProperty

props = [p for p in class_mapper(MyObject).iterate_properties]
test = [isinstance(p, MapperProperty) for p in props]
invalid_prop = None
if False in test:
    invalid_prop = props[test.index(False)]
and then use your favourite method (print, python -m, pdb.set_trace(), ...) to check the value of invalid_prop. It's likely that for some reason it won't be None and there lies your culprit.
If type(invalid_prop) is a sqlalchemy.orm.properties.RelationshipProperty then you have introduced a bug in mapper configuration (for relation named invalid_prop.key). Otherwise it's hard to tell without more information.
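Here is a runnable version of that check against a hypothetical minimal mapping (class and relation names are invented for the sketch); on a correctly configured mapper, the invalid-property list comes out empty:

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import class_mapper, declarative_base, relationship
from sqlalchemy.orm.interfaces import MapperProperty

Base = declarative_base()

class MType(Base):
    __tablename__ = 'mtypes'
    mtype_id = Column(Integer, primary_key=True)
    name = Column(String(50), nullable=False, unique=True)

class MyObject(Base):
    __tablename__ = 'objects'
    obj_id = Column(Integer, primary_key=True)
    mtype_id = Column(ForeignKey('mtypes.mtype_id'))
    name = Column(String(50), nullable=False, unique=True)
    mtype = relationship(MType, backref='objs')

# iterate_properties configures the mappers on first use
props = list(class_mapper(MyObject).iterate_properties)
invalid = [p for p in props if not isinstance(p, MapperProperty)]
print(invalid)  # [] on a healthy mapper
```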
I have some problems with setting up the dictionary collection in Python's SQLAlchemy:
I am using declarative definition of tables. I have Item table in 1:N relation with Record table. I set up the relation using the following code:
_Base = declarative_base()

class Record(_Base):
    __tablename__ = 'records'
    item_id = Column(String(M_ITEM_ID), ForeignKey('items.id'))
    id = Column(String(M_RECORD_ID), primary_key=True)
    uri = Column(String(M_RECORD_URI))
    name = Column(String(M_RECORD_NAME))

class Item(_Base):
    __tablename__ = 'items'
    id = Column(String(M_ITEM_ID), primary_key=True)
    records = relation(Record, collection_class=column_mapped_collection(Record.name), backref='item')
Now I want to work with the Items and Records. Let's create some objects:
i1 = Item(id='id1')
r = Record(id='mujrecord')
And now I want to associate these objects using the following code:
i1.records['source_wav'] = r
but the Record r doesn't have its name attribute (the dictionary key) set. Is there any solution to ensure this automatically? (I know that setting the name during Record creation works, but it doesn't feel right to me.)
Many thanks
You want something like this:
from sqlalchemy.orm import validates

class Item(_Base):
    [...]

    @validates('records')
    def validate_record(self, key, record):
        assert record.name is not None, "Record fails validation, must have a name"
        return record
With this, you get the desired validation:
>>> i1 = Item(id='id1')
>>> r = Record(id='mujrecord')
>>> i1.records['source_wav'] = r
Traceback (most recent call last):
[...]
AssertionError: Record fails validation, must have a name
>>> r.name = 'foo'
>>> i1.records['source_wav'] = r
>>>
I can't comment yet, so I'm just going to write this as a separate answer:
from sqlalchemy.orm import validates

class Item(_Base):
    [...]

    @validates('records')
    def validate_record(self, key, record):
        record.name = key
        return record
This is basically a copy of Gunnlaugur's answer but abusing the validates decorator to do something more useful than exploding.
You have:
backref='item'
Is this a typo for
backref='name'
?