How to model POST body in a clean architecture - python

I'm asking myself a question about clean architecture.
Let's imagine a small API that allows us to create and get a user using that type of architecture. This app has two endpoints and stores the data in a database.
Let's say that we have a db model that looks like:
class User:
    id: int
    firstname: str
    lastname: str
First, the GET endpoint will use the GetUser use case and a User entity. This entity will look like:
class User:
    id: int
    firstname: str
    lastname: str
My question concerns the POST endpoint.
The data passed to this endpoint is only the fields firstname and lastname, obviously.
Do I have to create another entity like the one below?
class UserRequest:
    firstname: str
    lastname: str
I find this unsatisfying because such an entity does not make sense from a business point of view.
Nevertheless, it seems a bit wobbly to make a "composite" entity such as:
class User:
    id: Optional[int]
    firstname: str
    lastname: str
A third option is to use a class inside the use case file whose only purpose is to model the payload coming from the POST request, i.e.:
class UserRequest:
    firstname: str
    lastname: str

class CreateUserUseCase:
    def __init__(self):
        ...

    def execute(self, request: UserRequest):
        ...
So the question is: according to clean architecture principles, what is the best way to model data coming from a POST request that is not a business entity?
Thanks a lot for your help, and don't hesitate to ask questions if my examples are not clear enough.
Stef.

It would be helpful to view multiple endpoints (use-cases) in the context of the same entity as the lifecycle of that entity, for example:
Creating (POST) a new user 'xyz' (writing to database)
Mutating (POST/PUT/PATCH) user 'xyz' (writing to database)
Querying (GET) user 'xyz' (reading from database)
Each of the above actions should involve the same business entity User:
Creating: the User entity is constructed inside the use case (application layer) from the UserRequest DTO (you have actually demonstrated exactly that), then passed to a repository object for persistence.
Mutating: the User entity is retrieved from the database (repository object), modified (application layer), and finally passed back to the repository object for persistence.
Querying: the User entity is retrieved from the database (repository object), then passed back to the presentation layer and finally translated into a response DTO.
One of the principles in CA is to have DTOs inside the presentation layer that are mapped to/from input/output ports. The heart of CA is the domain entities, which are constructed either from input (a request DTO) or from the database.
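To make the creating flow concrete, here is a minimal Python sketch. The UserRepository port, its method names, and the in-process wiring are illustrative assumptions, not something prescribed by CA:

from dataclasses import dataclass
from typing import Protocol

@dataclass
class User:                      # business entity (domain layer)
    id: int
    firstname: str
    lastname: str

@dataclass
class UserRequest:               # input DTO crossing the boundary into the use case
    firstname: str
    lastname: str

class UserRepository(Protocol):  # output port, implemented by the persistence layer
    def next_id(self) -> int: ...
    def save(self, user: User) -> None: ...

class CreateUserUseCase:
    def __init__(self, repository: UserRepository) -> None:
        self._repository = repository

    def execute(self, request: UserRequest) -> User:
        # The entity is built inside the use case from the DTO,
        # then handed to the repository for persistence.
        user = User(id=self._repository.next_id(),
                    firstname=request.firstname,
                    lastname=request.lastname)
        self._repository.save(user)
        return user

The request DTO lives next to the use case, while the User entity stays the single business object shared by all three actions.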

Related

Pydantic schema logic

So, I'm building an API to interact with my personal wine label collection database.
From what I understand, a Pydantic model's purpose is to serve as a "verifier" of the schema that is sent to the API. So, my Pydantic schema for adding a label is the following:
from pydantic import BaseModel
from typing import Optional

class WineLabels(BaseModel):
    name: Optional[str]
    type: Optional[str]
    year: Optional[int]
    grapes: Optional[str]
    country: Optional[str]
    region: Optional[str]
    price: Optional[float]
    id: Optional[str]
None of the fields is updated automatically. This matches the SQLAlchemy model, since I want to add all the fields manually.
So my question is: let's say I want to create a call to search by ID and another one to search by name. I do not believe this schema should be applied there. Should I create another schema? Should I create one like this?
class SearchWineLabel(WineLabels):
    id: str
Should a schema be created for each purpose that cannot be fulfilled by an already existing schema?
Sorry, but I can't understand the logic behind it.
Thanks!!
If you want to search by id or name, I'm not sure you even need a schema - one or more GET parameters would usually be enough in those cases (and is usually better semantically).
In any case, the schema would be written for what the endpoint is expected to receive, not by reusing a general schema that happens to contain the field. Think of the schemas as the input/output definitions for given resources and endpoints.
You usually want different schemas for adding and updating (since adding will require certain fields to be present, while updating may allow a null or missing field in any location).
Pydantic schemas let you express these differences without writing extra code, and they will be reflected in your generated API docs under /docs.
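For example, separate create and update schemas might look like the sketch below. The class names and which fields are required are illustrative assumptions, not something the question prescribes:

from typing import Optional
from pydantic import BaseModel

class WineLabelCreate(BaseModel):
    # creating a label requires the core fields to be present
    name: str
    type: str
    year: int
    grapes: str
    country: str
    region: str
    price: float

class WineLabelUpdate(BaseModel):
    # an update may send any subset of fields, so everything is optional
    name: Optional[str] = None
    type: Optional[str] = None
    year: Optional[int] = None
    grapes: Optional[str] = None
    country: Optional[str] = None
    region: Optional[str] = None
    price: Optional[float] = None

Searching by id or name would then be a plain path or query parameter on a GET endpoint rather than another schema.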

"ObjectId' object is not iterable" error, while fetching data from MongoDB Atlas

Okay, so pardon me if I don't make much sense. I face this 'ObjectId' object is not iterable error whenever I run the collection.find() functions. Going through the answers here, I'm not sure where to start. I'm new to programming, please bear with me.
Every time I hit the route that is supposed to fetch data from MongoDB, I get ValueError: [TypeError("'ObjectId' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')].
Help
Exclude the "_id" from the output.
result = collection.find_one({'OpportunityID': oppid}, {'_id': 0})
I was having a similar problem myself. Not having seen your code, I am guessing the traceback similarly traces the error to FastAPI/Starlette not being able to process the "_id" field. What you will therefore need to do is change the "_id" field in the results from an ObjectId to a string and rename it to "id" (without the underscore) on return, to avoid issues with Pydantic.
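A minimal sketch of that manual fix (the serialize helper name is just an illustration; collection and oppid are assumed from the answer above):

def serialize(doc: dict) -> dict:
    # Convert MongoDB's ObjectId "_id" into a plain string "id" field
    # so FastAPI/Pydantic can encode the document as JSON.
    doc = dict(doc)
    doc["id"] = str(doc.pop("_id"))
    return doc

# usage inside a route handler:
# result = serialize(collection.find_one({'OpportunityID': oppid}))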
First of all, if we had some examples of your code, this would be much easier. I can only assume that you are not mapping your MongoDB collection data to your Pydantic BaseModel correctly.
Read this:
MongoDB stores data as BSON. FastAPI encodes and decodes data as JSON strings. BSON has support for additional non-JSON-native data types, including ObjectId which can't be directly encoded as JSON. Because of this, we convert ObjectIds to strings before storing them as the _id.
I want to draw attention to the id field on this model. MongoDB uses _id, but in Python, underscores at the start of attributes have special meaning. If you have an attribute on your model that starts with an underscore, pydantic—the data validation framework used by FastAPI—will assume that it is a private variable, meaning you will not be able to assign it a value! To get around this, we name the field id but give it an alias of _id. You also need to set allow_population_by_field_name to True in the model's Config class.
Here is a working example:
First create the BaseModel:
from bson import ObjectId
from pydantic import BaseModel, Field

class PyObjectId(ObjectId):
    """Custom type for reading MongoDB IDs."""

    @classmethod
    def __get_validators__(cls):
        yield cls.validate

    @classmethod
    def validate(cls, v):
        if not ObjectId.is_valid(v):
            raise ValueError("Invalid object_id")
        return ObjectId(v)

    @classmethod
    def __modify_schema__(cls, field_schema):
        field_schema.update(type="string")

class Student(BaseModel):
    id: PyObjectId = Field(default_factory=PyObjectId, alias="_id")
    first_name: str
    last_name: str

    class Config:
        allow_population_by_field_name = True
        arbitrary_types_allowed = True
        json_encoders = {ObjectId: str}
Now just unpack everything:
async def get_student(student_id) -> Student:
    data = await collection.find_one({'_id': student_id})
    if data is None:
        raise HTTPException(status_code=404, detail='Student not found.')
    student: Student = Student(**data)
    return student
Use the response model inside the app decorator. Here is a sample example.
from pydantic import BaseModel

class Todo(BaseModel):
    title: str
    details: str
main.py
from fastapi import HTTPException, status

@app.get("/{title}", response_model=Todo)
async def get_todo(title: str):
    response = await fetch_one_todo(title)
    if not response:
        raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail='not found')
    return response
Use db.collection.find({"_id": ObjectId("12348901384918")}), where db is the database, collection is the collection name, and the id string is wrapped in double quotes.
I was trying to iterate through all the documents, and what worked for me was this solution: https://github.com/tiangolo/fastapi/issues/1515#issuecomment-782835977
These lines just need to be added after the subclass of the ObjectId class. An example is given at the following link:
https://github.com/tiangolo/fastapi/issues/1515#issuecomment-782838556
I had this issue until I upgraded from MongoDB version 5.0.9 to version 6.0.0, so MongoDB made some changes on their end to handle this, if you have the ability to upgrade. I ran into this issue when creating a test server, and when I created a new test server on 6.0.0, the error was fixed.

App Engine Query Users

I have the User model in my datastore which contains some attributes:
I need to query all users filtering by the company attribute.
So, as I would normally do, I do this:
from webapp2_extras.appengine.auth.models import User
employees = User.query().filter(User.company == self.company_name).fetch()
This gives me:
AttributeError: type object 'User' has no attribute 'company'
And when I do:
employees = User.query().filter().fetch()
It gives me no error and shows the list with all the Users.
How do I query by field? Thanks
Your question is a bit misdirected. You ask how to query by field, which you are already doing with correct syntax. The problem, as Jeff O'Neill noted, is your User model does not have that company field, so your query-by-field attempt results in an error. (Here is some ndb documentation that you should definitely peruse and bookmark if you haven't already.) There are three ways to remedy your missing-field problem:
Subclass the User model, as Jeff shows in his answer. This is quick and simple, and may be the best solution for what you want.
Create your own User model, completely separate from the webapp2 one. This is probably overkill for what you want, just judging from your question, because you would have to write most of your own authentication code that the webapp2 user already handles for you.
Create a new model that contains extra user information, and has a key property containing the corresponding user's key. That would look like this:
class UserProfile(ndb.Expando):
    user_key = ndb.KeyProperty(kind='User', required=True)
    company = ndb.StringProperty()
    # other possibilities: profile pic? address? etc.
with queries like this:
from models.user_profile import UserProfile
from webapp2_extras.appengine.auth.models import User
from google.appengine.ext import ndb
# get the employee keys
employee_keys = UserProfile.query(UserProfile.company == company_name).fetch(keys_only=True)
# get the actual user objects
employees = ndb.get_multi(employee_keys)
What this solution does is it separates your User model that you use for authentication (webapp2's User) from the model that holds extra user information (UserProfile). If you want to store profile pictures or other relatively large amounts of data for each user, you may find this solution works best for you.
Note: you can put your filter criteria in the .query() parentheses to simplify things (I find I rarely use the .filter() method):
# long way
employees = User.query().filter(User.company == self.company_name).fetch()
# shorter way
employees = User.query(User.company == self.company_name).fetch()
You've imported a User class defined by webapp2. This User class does not have an attribute called company, so that is why you are getting the error from User.company.
You probably want to create your own User model by subclassing the one provided by webapp2:
from google.appengine.ext import ndb
from webapp2_extras.appengine.auth.models import User as Webapp2_User

class User(Webapp2_User):
    company = ndb.StringProperty()
Then your query should work.
One caveat, I've never used webapp2_extras.appengine.auth.models so I don't know what that is exactly.

django tastypie alter model fetching

I'm trying to find a way to make tastypie return results that are a little bit different from the default ones. For example, by default the API returns the following:
{
    "created_at": "2011-10-18T14:22:27",
    "email_address": "paul.mccartney@beatles.com",
    "first_name": "Paul",
    "id": 1,
    "is_active": true,
    "is_super_admin": true,
    "last_login": "2011-10-18T14:22:27",
    "last_name": "McCartney",
    "resource_uri": "/api/v1/user/1/",
    "updated_at": "2011-10-18T14:22:27",
    "username": "pmc"
}
And I would like to replace first_name and last_name with a full_name field containing "Paul McCartney". Is it possible to override model fields? If so, how do I do that?
It seems you have to use the dehydrate cycle. From the docs:
Tastypie uses a “dehydrate” cycle to prepare data for serialization, which is to say that it takes the raw, potentially complicated data model & turns it into a (generally simpler) processed data structure for client consumption. This usually means taking a complex data object & turning it into a dictionary of simple data types.
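As a rough sketch of what that could look like here, using a Tastypie ModelResource with a dehydrate override (the resource name, the excludes, and the model import path are assumptions based on the output above):

from tastypie.resources import ModelResource
from myapp.models import User  # assumed app/model location

class UserResource(ModelResource):
    class Meta:
        queryset = User.objects.all()
        resource_name = 'user'
        excludes = ['first_name', 'last_name']  # drop the raw fields from the output

    def dehydrate(self, bundle):
        # add the combined field just before serialization
        bundle.data['full_name'] = '%s %s' % (bundle.obj.first_name, bundle.obj.last_name)
        return bundle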
I hope this helps!

Is there a way to transparently perform validation on SQLAlchemy objects?

Is there a way to perform validation on an object after (or as) the properties are set but before the session is committed?
For instance, I have a domain model Device that has a mac property. I would like to ensure that the mac property contains a valid and sanitized mac value before it is added to or updated in the database.
It looks like the Pythonic approach is to do most things as properties (including SQLAlchemy). If I had coded this in PHP or Java, I would probably have opted to create getter/setter methods to protect the data and give me the flexibility to handle this in the domain model itself.
public function mac() { return $this->mac; }

public function setMac($mac) {
    return $this->mac = $this->sanitizeAndValidateMac($mac);
}

public function sanitizeAndValidateMac($mac) {
    if ( ! preg_match(self::$VALID_MAC_REGEX, $mac) ) {
        throw new InvalidMacException($mac);
    }
    return strtolower($mac);
}
What is a Pythonic way to handle this type of situation using SQLAlchemy?
(While I'm aware that validation should be handled elsewhere (e.g., in the web framework), I would like to figure out how to handle some of these domain-specific validation rules, as they are bound to come up frequently.)
UPDATE
I know that I could use property to do this under normal circumstances. The key part is that I am using SQLAlchemy with these classes. I do not understand exactly how SQLAlchemy is performing its magic but I suspect that creating and overriding these properties on my own could lead to unstable and/or unpredictable results.
You can add data validation inside your SQLAlchemy classes using the @validates() decorator.
From the docs - Simple Validators:
An attribute validator can raise an exception, halting the process of mutating the attribute’s value, or can change the given value into something different.
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import validates

class EmailAddress(Base):
    __tablename__ = 'address'
    id = Column(Integer, primary_key=True)
    email = Column(String)

    @validates('email')
    def validate_email(self, key, address):
        # you can use assertions, such as
        # assert '@' in address
        # or raise an exception:
        if '@' not in address:
            raise ValueError('Email address must contain an @ sign.')
        return address
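For illustration, a short usage sketch, assuming the EmailAddress model above is mapped as shown (the validator fires as soon as the attribute is set, before any flush or commit):

addr = EmailAddress()
try:
    addr.email = "not-an-email"       # raises ValueError from the validator
except ValueError as exc:
    print(exc)                        # Email address must contain an @ sign.
addr.email = "user@example.com"       # passes validation and is assigned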
Yes. This can be done nicely using a MapperExtension.
# uses SQLAlchemy hooks to run model-specific validators before update and insert
class ValidationExtension(sqlalchemy.orm.interfaces.MapperExtension):
    def before_update(self, mapper, connection, instance):
        """Not every instance here is actually updated in the db, see
        http://www.sqlalchemy.org/docs/reference/orm/interfaces.html?highlight=mapperextension#sqlalchemy.orm.interfaces.MapperExtension.before_update"""
        instance.validate()
        return sqlalchemy.orm.interfaces.MapperExtension.before_update(self, mapper, connection, instance)

    def before_insert(self, mapper, connection, instance):
        instance.validate()
        return sqlalchemy.orm.interfaces.MapperExtension.before_insert(self, mapper, connection, instance)

sqlalchemy.orm.mapper(model, table, extension=ValidationExtension(), **mapper_args)
You may want to check before_update reference because not every instance here is actually updated to the db.
"It looks like the Pythonic approach is to do most things as properties"
It varies, but that's close.
"If I had coded this in PHP or Java, I would probably have opted to create getter/setter methods..."
Good. That's Pythonic enough. Your getter and setter functions are bound up in a property; that's pretty good.
What's the question?
Are you asking how to spell property?
However, "transparent validation" -- if I read your example code correctly -- may not really be all that good an idea.
Your model and your validation should probably be kept separate. It's common to have multiple validations for a single model. For some users, fields are optional, fixed or not used; this leads to multiple validations.
You'll be happier following the Django design pattern of using a Form for validation, separate from the model.
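As a rough illustration of that separation, assuming a Django-style form (the regex is only a placeholder, not a complete MAC pattern):

import re
from django import forms

MAC_RE = re.compile(r'^([0-9a-f]{2}:){5}[0-9a-f]{2}$')  # placeholder pattern

class DeviceForm(forms.Form):
    mac = forms.CharField()

    def clean_mac(self):
        # sanitize then validate, keeping the rules out of the model itself
        mac = self.cleaned_data['mac'].lower()
        if not MAC_RE.match(mac):
            raise forms.ValidationError('Invalid MAC address: %s' % mac)
        return mac

The Device model stays free of input-handling rules, and different forms can validate the same model differently.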
