List of object attributes in pydantic model - python

I use FastAPI to create a web service.
I have the following SQLAlchemy models:

class User(Base):
    __tablename__ = 'user'
    account_name = Column(String, primary_key=True, index=True, unique=True)
    email = Column(String, unique=True, index=True, nullable=False)
    roles = relationship("UserRole", back_populates="users", lazy=False, uselist=True)

class UserRole(Base):
    __tablename__ = 'user_role'
    __table_args__ = (UniqueConstraint('role_name', 'user_name', name='user_role_uc'),)
    role_name = Column(String, ForeignKey('role.name'), primary_key=True)
    user_name = Column(String, ForeignKey('user.account_name'), primary_key=True)
    users = relationship("User", back_populates="roles")
The Pydantic schemas are below:

class UserRole(BaseModel):
    role_name: str

    class Config:
        orm_mode = True

class UserBase(BaseModel):
    account_name: str
    email: EmailStr
    roles: List[UserRole] = []

    class Config:
        orm_mode = True
What I have now is:

{
    "account_name": "Test.Test",
    "email": "Test.Test@test.com",
    "roles": [
        {
            "role_name": "all:admin"
        },
        {
            "role_name": "all:read"
        }
    ]
}
What I want to achieve is to get the user from the API in the following structure:

{
    "account_name": "Test.Test",
    "email": "Test.Test@test.com",
    "roles": [
        "all:admin",
        "all:read"
    ]
}
Is that possible? How should I change schemas to get this?

If you are okay with handling the "get user from the API" part of the problem by modifying the FastAPI path definition, see below: you can change the response model used by the path operation to produce the desired output format.
Example pydantic response model definition:

class UserResponse(BaseModel):
    account_name: str
    email: EmailStr
    roles: List[str]
Example sqlalchemy query + serialization function:

def get_user_response(user_id) -> UserResponse:
    user = User.query.get(user_id)
    user_roles = UserRole.query.filter_by(user_name=user_id).all()
    role_names = [r.role_name for r in user_roles]
    response = UserResponse(
        account_name=user.account_name,
        email=user.email,
        roles=role_names,
    )
    return response
Example fastapi path definition:

@app.get("/users/{user_id}", response_model=UserResponse)
async def read_item(user_id):
    return get_user_response(user_id)
Considerations:
I'm using a user_id for the user queries, but this can be replaced with whatever you end up using as the primary key for that table.
The UserResponse response model is very similar to UserBase (you could potentially subclass UserBase instead of BaseModel to avoid redefining account_name and email, with the tradeoff of having to override the class's Config).
There may be a way to override the serialization of the User SQLAlchemy model object that gets automatically serialized when you query it from the database, which would let you eliminate or reduce the code in the get_user_response() function in the example above.

As UserRole is a class, it is represented as an object (a dictionary). If you want to represent it as a list of strings, you'll have to transform the data (and change your Pydantic model's field declaration). There are several approaches to this, but the pydantic model documentation is a good place to start. Mind that the ORM model serves as the data representation layer and the pydantic model is the validation (and perhaps serialization) layer, so there are many places where you could hook in.
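One concrete way to do that transformation is a sketch like the following, assuming pydantic v1's `orm_mode`/`validator` API (`EmailStr` from the original is replaced with plain `str` here to keep the sketch free of the email-validator dependency): a pre-validator on `roles` that maps incoming UserRole ORM objects to their `role_name`.

```python
from typing import List
from pydantic import BaseModel, validator


class UserBase(BaseModel):
    account_name: str
    email: str  # EmailStr in the original; plain str avoids an extra dependency
    roles: List[str] = []

    class Config:
        orm_mode = True

    @validator("roles", pre=True)
    def flatten_roles(cls, v):
        # Incoming items are UserRole ORM objects; keep only their role_name.
        # Plain strings pass through unchanged.
        return [getattr(r, "role_name", r) for r in v]
```

With this, `UserBase.from_orm(user)` serializes roles as a flat list like `["all:admin", "all:read"]` directly, without a separate serialization function.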

Related

SQLModel Relationship that returns count of related objects

I'm working on a FastAPI application and using SQLModel with a Postgres backend. I have Post objects, each of which can be upvoted by Users. We represent this with a PostUpvote many-to-many relation between Users and Posts. So far, so boring.
from datetime import datetime
from typing import List, Optional
from pydantic import BaseModel
from sqlmodel import Field, Relationship, SQLModel
import uuid as uuid_pkg

def uuid_hex():
    return uuid_pkg.uuid4().hex

def PkIdField():
    return Field(
        default_factory=uuid_hex,
        primary_key=True,
        index=True,
        nullable=False,
    )

class PostBase(SQLModel):
    title: str
    description: str

class Post(PostBase, table=True):
    creator_id: str = Field(foreign_key="app_users.id")
    id: str = PkIdField()
    created_at: datetime = Field(default_factory=datetime.utcnow, nullable=False)
    creator: "User" = Relationship(back_populates="posts")
    upvotes: List["PostUpvote"] = Relationship(back_populates="post")

class UserBase(SQLModel):
    email: str

class User(UserBase, table=True):
    # "user" table is reserved by postgres
    __tablename__ = "app_users"
    id: str = PkIdField()
    posts: List["Post"] = Relationship(back_populates="creator")

class PostUpvote(SQLModel, table=True):
    post: Post = Relationship(back_populates="upvotes")
    post_id: str = Field(foreign_key="posts.id", primary_key=True)
    user_id: str = Field(foreign_key="app_users.id", primary_key=True)
As you can see, I've set up an upvotes relationship on my Post object, which will give me a list of all the upvotes for that post. But when I'm returning this to the frontend, I don't need or want a list of all the upvotes; I just need the count. Obviously, I can use len(post.upvotes) to get this, but that still requires us to fetch all the individual upvote objects for that post. So my question is: is there some way to add an upvote_count relationship to my Post object, like so:
class Post(PostBase, table=True):
    creator_id: str = Field(foreign_key="app_users.id")
    id: str = PkIdField()
    created_at: datetime = Field(default_factory=datetime.utcnow, nullable=False)
    creator: "User" = Relationship(back_populates="posts")
    upvotes: List["PostUpvote"] = Relationship(back_populates="post")
    upvote_count: int = Relationship(...)
Note that this is using SQLModel's Relationship feature (https://sqlmodel.tiangolo.com/tutorial/relationship-attributes/), not SQLAlchemy relations (though I am running SQLAlchemy under the hood).
If there's some way to provide a custom SQLAlchemy query to the SQLModel relationship, that would solve the problem neatly. But I've not been able to find anything in the SQLModel docs about how to do so. Is this even possible? Or should I just resign myself to doing the query manually?
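SQLModel does not document a hook for this, but since SQLAlchemy runs underneath, one direction worth exploring is SQLAlchemy's `column_property` holding a correlated `count(*)` scalar subquery, so the count is computed by the database on every Post load without fetching any upvote rows. The sketch below uses plain SQLAlchemy with hypothetical `Post`/`Upvote` models (not the SQLModel classes above); whether the same construct can be attached to a SQLModel mapper is untested here.

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, func, select
from sqlalchemy.orm import Session, column_property, declarative_base

Base = declarative_base()

class Upvote(Base):
    __tablename__ = "upvotes"
    post_id = Column(Integer, ForeignKey("posts.id"), primary_key=True)
    user_id = Column(Integer, primary_key=True)

class Post(Base):
    __tablename__ = "posts"
    id = Column(Integer, primary_key=True)
    title = Column(String)
    # Correlated scalar subquery: evaluated in SQL when a Post is loaded,
    # so no Upvote objects are pulled into the session.
    upvote_count = column_property(
        select(func.count(Upvote.user_id))
        .where(Upvote.post_id == id)
        .correlate_except(Upvote)
        .scalar_subquery()
    )

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(Post(id=1, title="hello"))
    session.add_all([Upvote(post_id=1, user_id=u) for u in (1, 2, 3)])
    session.commit()
    print(session.get(Post, 1).upvote_count)  # 3
```

This is the documented SQLAlchemy recipe for relationship counts; the tradeoff is that the subquery runs on every load of the parent object, which is usually cheap with an index on the foreign key.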

Is there a way to remove the primary key from the SQLAlchemy query results?

I am working on an application with FastAPI, Pydantic, and SQLAlchemy.
I want to return data matching a Pydantic schema like

class UserResponseBody(BaseModel):
    name: str
    age: int

The database model looks like

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True, index=True)
    name = Column(String, index=True)
    age = Column(Integer)
When I query the users in CRUD, the records also contain the primary key id,
which I don't want to expose to the user.
So far I am converting the query results to a dict and popping the primary key like

# https://stackoverflow.com/a/37350445/7919597
def object_as_dict(obj):
    return {c.key: getattr(obj, c.key)
            for c in inspect(obj).mapper.column_attrs}

query_result = db.query(models.User).first()
query_result_dict = object_as_dict(query_result)
query_result_dict.pop("id", None)
return UserResponseBody(**query_result_dict)
But that feels kind of hacky and I would like to ask, if someone knows a better solution to this.
You already have your response model defined; you just need to tell FastAPI that you want to use it, and that Pydantic should read values from object attributes (orm_mode) as well:
class UserResponseBody(BaseModel):
    name: str
    age: int

    class Config:
        orm_mode = True

@app.get('/users/first', response_model=UserResponseBody)
def get_first_user():
    return db.query(models.User).first()
Only the fields defined in your response_model will be included, which excludes id in your case. No need to do the conversion manually; FastAPI and Pydantic do what you want as long as you've told them what you want.

How to properly setup a One To Many bidirectional relationship using fastAPI, Pydantic and SQLAlchemy

I use full-stack-fastapi-postgresql, fastapi version 0.54.1 and pydantic version 1.4.
I have no idea how to setup pydantic, so it properly works with a many to one bidirectional relationship in SQLAlchemy. For some reason, my current implementation blows the stack with a maximum recursion error.
I am aware of the GitHub discussion "from_orm() should detect cycles when loading SQLAlchemy bi-directional relationships rather than blow stack". I believe it is closely related, but I could not make anything useful out of it so far.
Any help would be greatly appreciated.
Error message from fastAPI:

File "/usr/local/lib/python3.7/site-packages/fastapi/encoders.py", line 113, in jsonable_encoder
    sqlalchemy_safe=sqlalchemy_safe,
File "/usr/local/lib/python3.7/site-packages/fastapi/encoders.py", line 166, in jsonable_encoder
    sqlalchemy_safe=sqlalchemy_safe,
File "/usr/local/lib/python3.7/site-packages/fastapi/encoders.py", line 52, in jsonable_encoder
    if isinstance(obj, BaseModel):
File "/usr/local/lib/python3.7/abc.py", line 139, in __instancecheck__
    return _abc_instancecheck(cls, instance)
RecursionError: maximum recursion depth exceeded in comparison
The models:
app/models/company.py

from typing import TYPE_CHECKING
from sqlalchemy import Boolean, Column, Integer, String
from sqlalchemy.orm import relationship
from app.db.base_class import Base

if TYPE_CHECKING:
    from .user import User  # noqa: F401

class Company(Base):
    id = Column(Integer, primary_key=True, index=True)
    enabled = Column(Boolean(), default=True)
    logourl = Column(String, index=True)
    name = Column(String, index=True)
    users = relationship("User", back_populates="company")
app/models/user.py

from typing import TYPE_CHECKING
from sqlalchemy import Column, ForeignKey, Integer, String, Boolean, PrimaryKeyConstraint, UniqueConstraint, DateTime
from sqlalchemy.orm import relationship
from app.db.base_class import Base

if TYPE_CHECKING:
    from .job import Job  # noqa: F401
    from .company import Company  # noqa: F401

class User(Base):
    id = Column(Integer, primary_key=True, index=True)
    full_name = Column(String, index=True)
    email = Column(String, unique=True, index=True, nullable=False)
    hashed_password = Column(String, nullable=False)
    is_active = Column(Boolean(), default=True)
    is_superuser = Column(Boolean(), default=False)
    company_id = Column(Integer, ForeignKey("company.id"))
    company = relationship("Company", back_populates="users")
The schemas:
app/schemas/user.py

from typing import Optional, Any
from pydantic import BaseModel, EmailStr

# Shared properties
class UserBase(BaseModel):
    email: Optional[EmailStr] = None
    is_active: Optional[bool] = True
    is_superuser: bool = False
    full_name: Optional[str] = None
    company_id: Optional[int] = None
    company: Optional[Any] = None

# Properties to receive via API on creation
class UserCreate(UserBase):
    email: EmailStr
    password: str
    company_id: int

# Properties to receive via API on update
class UserUpdate(UserBase):
    password: Optional[str] = None
    company_id: Optional[int] = None

class UserInDBBase(UserBase):
    id: Optional[int] = None

    class Config:
        orm_mode = True

# Additional properties to return via API
class User(UserInDBBase):
    pass

# Additional properties stored in DB
class UserInDB(UserInDBBase):
    hashed_password: str
app/schemas/company.py

from typing import Optional, List
from pydantic import BaseModel
from .user import User

# Shared properties
class CompanyBase(BaseModel):
    enabled: Optional[bool] = None
    logourl: Optional[str] = None
    name: Optional[str] = None
    users: Optional[List[User]] = None

# Properties to receive on Company creation
class CompanyCreate(CompanyBase):
    name: str

# Properties to receive on Company update
class CompanyUpdate(CompanyBase):
    pass

# Properties shared by models stored in DB
class CompanyInDBBase(CompanyBase):
    id: int
    name: str

    class Config:
        orm_mode = True

# Properties to return to client
class Company(CompanyInDBBase):
    pass

# Properties stored in DB
class CompanyInDB(CompanyInDBBase):
    pass
You are trying to have the user schema contain the company, but you also want the company schema to contain the user. This means that when you retrieve a specific company, for example, its users will contain the same company, and that company will again contain all the users, hence the recursion issue.
You need to decide what data you want to show when a user or a company is retrieved; there is probably no use in retrieving a company's full data when all you're trying to do is retrieve a specific user.
Solution
It should be enough to remove company from the UserBase schema.
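If you do want each user response to carry some company data, a sketch of the same fix (the name CompanyBrief is hypothetical, and plain `str` stands in for `EmailStr`) is a slimmed-down company schema without a users field, referenced from the user schema, so the cycle can never form:

```python
from typing import Optional
from pydantic import BaseModel


class CompanyBrief(BaseModel):
    # No `users` field here, so serializing a user can never recurse
    # back into the user list.
    id: int
    name: str

    class Config:
        orm_mode = True


class UserBase(BaseModel):
    email: Optional[str] = None  # EmailStr in the original schemas
    full_name: Optional[str] = None
    company_id: Optional[int] = None
    company: Optional[CompanyBrief] = None

    class Config:
        orm_mode = True
```

The general rule: at least one side of a bidirectional relationship needs a "leaf" schema that does not point back, and which side that is depends on what each endpoint should expose.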

How to validate JSON object before feeding to SQLAlchemy object?

When I want to create or update an object in my database, I send JSON from the client. I use SQLAlchemy to operate data. A model already has all the needed information about types like:
class Project(db.Model):
    __tablename__ = 'project'
    creation_date = db.Column(db.DateTime, default=datetime.utcnow)
    title = db.Column(db.String(255), nullable=False)

    def __init__(self, **kwargs):
        super(Project, self).__init__(**kwargs)

So, I want to be able to feed raw JSON into the constructor and check that all the fields fit the declared types. I can do it with Marshmallow, but I found I must recreate the schema like this:

class ProjectSchema(db_schema.Schema):
    title = fields.Str(required=True)

    class Meta:
        fields = ('date', 'event', 'comment')

which is very annoying and unsafe, because all the information about the fields has to be duplicated. Is there any way to do it automatically, based on a single source?
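The thread gives no answer here, but one direction that keeps the model as the single source is SQLAlchemy's own inspection API: the column types already know their Python equivalents via `type.python_type`. The sketch below is a hypothetical helper (`validate_against_model`), not a library feature, and a real validator would need more than bare isinstance checks (string lengths, date parsing, nested objects); an `id` primary key is added so the standalone model is mappable.

```python
from datetime import datetime
from sqlalchemy import Column, DateTime, Integer, String, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Project(Base):
    __tablename__ = "project"
    id = Column(Integer, primary_key=True)  # added so this sketch is mappable
    creation_date = Column(DateTime, default=datetime.utcnow)
    title = Column(String(255), nullable=False)

def validate_against_model(model, payload):
    """Hypothetical helper: check a raw dict against the model's column types."""
    errors = {}
    for col in inspect(model).columns:
        if col.name in payload:
            # Column types expose their Python-side type (str, int, datetime, ...)
            if not isinstance(payload[col.name], col.type.python_type):
                errors[col.name] = f"expected {col.type.python_type.__name__}"
        elif not (col.nullable or col.primary_key or col.default is not None):
            errors[col.name] = "missing required field"
    return errors

print(validate_against_model(Project, {"title": 123}))  # {'title': 'expected str'}
```

For anything beyond this, generator libraries such as marshmallow-sqlalchemy build Marshmallow schemas directly from the model, which also avoids the duplication.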

Subclassing Class Inheritance Hierarchies from SQLAlchemy

I struggled writing this question. It gets into what may be a complicated and uncommon use case.
I have defined several ORM classes in one project which is responsible for maintaining a common database schema and core functionality. For example, let's say this is the model.email module.
from sqlalchemy import Column, Index, ForeignKey
from sqlalchemy import Boolean, Integer, Text
from . import Base

class CampaignDB(Base):
    """
    Note: this basic database mapper class is expected to be extended.
    When sub-classing, be mindful to override mappings to other extended classes.
    """
    __tablename__ = 'campaigns'
    audience_id = Column(Integer, ForeignKey("audiences.id"))
    active = Column(Boolean)
    name = Column(Text)
These ORM classes are imported into several other projects as a package. In some cases, these ORM classes are subclassed to provide additional functionality. For example, here the CampaignDB class is subclassed to provide support for sending email in a specific project context.
from model.email import CampaignDB

class Campaign(CampaignDB):
    """
    Provides sending capability to the email campaign ORM class.
    """
    def __init__(self, audience_id=None, active=None, name=None):
        self.audience_id = audience_id
        self.active = active
        self.name = name

    def send(self):
        print("send emails to the audience")
Now I would like to refactor the CampaignDB and subclassed Campaign classes to be polymorphic bases using SQLAlchemy's Class Inheritance Hierarchies. For example, I'd like to make CampaignDB a base class for EmailCampaignDB and PushCampaignDB. I would then like to extend EmailCampaignDB and PushCampaignDB separately, as, say, EmailCampaign and PushCampaign in the importing project. However, I would still like to be able to query for Campaign and be returned instances of EmailCampaign and PushCampaign.
I have made several attempts to solve this but run into problems. In particular, session.query(Campaign).all() returns no results because SQLAlchemy doesn't seem to consider it as a base class. The generated SQL has the following WHERE clause: WHERE email.campaigns.type IN (NULL)
Here's the gist of what I am attempting.
class CampaignDB(Base):
    """
    Note: this basic database mapper class is expected to be extended.
    When sub-classing, be mindful to override mappings to other extended classes.
    """
    __tablename__ = 'campaigns'
    audience_id = Column(Integer, ForeignKey("audiences.id"))
    active = Column(Boolean)
    name = Column(Text)
    type = Column(String(16))
    __mapper_args__ = {
        'polymorphic_on': type
    }

class EmailCampaignDB(CampaignDB):
    __mapper_args__ = {
        'polymorphic_identity': 'email'
    }

class PushCampaignDB(CampaignDB):
    __mapper_args__ = {
        'polymorphic_identity': 'push'
    }

    def send(self):
        print("send push notifications to the audience")

class Campaign(CampaignDB):
    pass

class EmailCampaign(EmailCampaignDB):
    def send(self):
        print("send emails to the audience")

class PushCampaign(PushCampaignDB):
    def send(self):
        print("send push notifications to the audience")
Is this possible? Is there a better way to achieve this in this "packaged ORM" context?
I managed to find a way to make this work by changing the way I think about the problem. I gave up on trying to create and query a Campaign subclass of CampaignDB. I also used the Declarative API which seems to facilitate subclassing EmailCampaignDB and PushCampaignDB.
The model.email module in the core project:
from sqlalchemy.ext.declarative import declared_attr
from sqlalchemy import Column, Index, ForeignKey
from sqlalchemy.orm import relationship
from sqlalchemy import Boolean, Integer, String, Text
from . import Base

class CampaignBaseDB(Base):
    """
    Note: this basic database mapper class is expected to be extended.
    When sub-classing, be mindful to override mappings to other extended classes.
    """
    __tablename__ = 'campaign_bases'

    @declared_attr
    def __mapper_args__(cls):
        return {
            'polymorphic_on': cls.type,
        }

    audience_id = Column(Integer, ForeignKey("audiences.id"))
    active = Column(Boolean)
    name = Column(Text)
    type = Column(String(16))

class EmailCampaignDB(CampaignBaseDB):
    @declared_attr
    def __mapper_args__(cls):
        return {
            'polymorphic_identity': 'email'
        }

class PushCampaignDB(CampaignBaseDB):
    @declared_attr
    def __mapper_args__(cls):
        return {
            'polymorphic_identity': 'push'
        }
Subclassing the campaign classes in the importing project:

from model.email import EmailCampaignDB, PushCampaignDB

class EmailCampaign(EmailCampaignDB):
    def send(self):
        print("send emails to the audience")

class PushCampaign(PushCampaignDB):
    def send(self):
        print("send push notifications to the audience")
Polymorphic query:

for campaign in db.query(CampaignBaseDB).all():
    campaign.send()
#> send emails to the audience
#> send push notifications to the audience
This approach does result in "SAWarning: Reassigning polymorphic association for identity", so I still feel like there would be a better way.
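For reference, a minimal self-contained single-table setup (plain SQLAlchemy, hypothetical class and table names) where querying the base does return subclass instances. The contrast with the question's failing attempt: every class queried through the hierarchy needs its own polymorphic_identity; a mapped subclass without one (like the original Campaign(CampaignDB)) produces the WHERE type IN (NULL) clause and matches nothing.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Campaign(Base):
    __tablename__ = "campaigns"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    type = Column(String(16))
    # The base declares the discriminator AND its own identity,
    # so querying Campaign matches every row.
    __mapper_args__ = {"polymorphic_on": type, "polymorphic_identity": "campaign"}

class EmailCampaign(Campaign):
    __mapper_args__ = {"polymorphic_identity": "email"}

    def send(self):
        return "send emails to the audience"

class PushCampaign(Campaign):
    __mapper_args__ = {"polymorphic_identity": "push"}

    def send(self):
        return "send push notifications to the audience"

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add_all([EmailCampaign(name="e"), PushCampaign(name="p")])
    session.commit()
    for campaign in session.query(Campaign).order_by(Campaign.id):
        print(campaign.send())
```

Here the subclass-with-behavior and the DB mapping are the same class, which is what sidesteps both the empty query and the reassigned-identity warning; the cost is giving up the separate Campaign wrapper layer, as the answer above concluded.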
