I'm trying to find a way to make tastypie return results that are a little bit different than the default ones. For example, by default the api returns the following:
{
created_at: "2011-10-18T14:22:27",
email_address: "paul.mccartney@beatles.com",
first_name: "Paul",
id: 1,
is_active: true,
is_super_admin: true,
last_login: "2011-10-18T14:22:27",
last_name: "McCartney",
resource_uri: "/api/v1/user/1/",
updated_at: "2011-10-18T14:22:27",
username: "pmc"
}
And I would like to replace first_name and last_name with full_name being Paul McCartney. Is that possible to override model fields? If so - how to do that?
It seems you have to use the dehydrate cycle. From the docs:
Tastypie uses a “dehydrate” cycle to prepare data for serialization, which is to say that it takes the raw, potentially complicated data model & turns it into a (generally simpler) processed data structure for client consumption. This usually means taking a complex data object & turning it into a dictionary of simple data types.
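A minimal sketch of what that could look like, assuming a ModelResource named UserResource for the user model shown above (the resource name and the excluded fields are illustrative):

from tastypie.resources import ModelResource

class UserResource(ModelResource):
    class Meta:
        queryset = User.objects.all()   # User is whatever model backs the resource
        resource_name = 'user'
        excludes = ['first_name', 'last_name']  # drop the raw name fields

    def dehydrate(self, bundle):
        # Add a computed full_name to the serialized output.
        bundle.data['full_name'] = '%s %s' % (bundle.obj.first_name,
                                              bundle.obj.last_name)
        return bundle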
I hope this helps!
So, I'm building an API to interact with my personal wine labels collections database.
For what I understand, a pydantic model purpose is to serve as a "verifier" of the schema that is sent to the API. So, my pydantic schema for adding a label is the following:
from pydantic import BaseModel
from typing import Optional
class WineLabels(BaseModel):
    name: Optional[str]
    type: Optional[str]
    year: Optional[int]
    grapes: Optional[str]
    country: Optional[str]
    region: Optional[str]
    price: Optional[float]
    id: Optional[str]
None of the fields is filled in automatically; this mirrors the SQLAlchemy model, since I want to set all the fields manually.
So my question is: let's say I want to create a call to search by ID and another one to search by name. I do not believe this schema should be applied there. Should I create another schema? Something like this?:
class SearchWineLabel(WineLabels):
    id: str
Should a schema be created for each purpose that cannot be fulfilled by an already existing schema?
Sorry, but I can't understand the logic behind it.
Thanks!!
If you want to search by id or name, I'm not sure you even need a schema - one or more GET parameters would usually be enough in those cases (and are usually better semantically).
In any case, the schema would be written for what the endpoint is expected to receive, not by using a general schema that contains the field in some other way. Think of the schemas as the input/output definitions for given resources and endpoints.
You usually want to have different schemas for adding and updating, since adding will require certain fields to be present, while updating may allow any field to be null or missing.
The Pydantic schemas let you express these differences declaratively, without extra code, and they will be reflected in your generated API docs under /docs.
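As a rough sketch of that separation, following the field names from the question (which fields are required is up to your domain):

from typing import Optional
from pydantic import BaseModel

class WineLabelCreate(BaseModel):
    # Required when adding a new label.
    name: str
    year: int
    # Optional extras.
    grapes: Optional[str] = None
    country: Optional[str] = None
    region: Optional[str] = None
    price: Optional[float] = None

class WineLabelUpdate(BaseModel):
    # Everything optional: an update may send any subset of fields.
    name: Optional[str] = None
    year: Optional[int] = None
    grapes: Optional[str] = None
    country: Optional[str] = None
    region: Optional[str] = None
    price: Optional[float] = None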
Is there a way to set foreign key relationship using the integer id of a model? This would be for optimization purposes.
For example, suppose I have an Employee model:
class Employee(models.Model):
    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)
    type = models.ForeignKey('EmployeeType')
and
class EmployeeType(models.Model):
    type = models.CharField(max_length=100)
I want the flexibility of having unlimited employee types, but in the deployed application there will likely be only a single type so I'm wondering if there is a way to hardcode the id and set the relationship this way. This way I can avoid a db call to get the EmployeeType object first.
Yep:
employee = Employee(first_name="Name", last_name="Name")
employee.type_id = 4
employee.save()
ForeignKey fields store their value in an attribute with _id at the end, which you can access directly to avoid visiting the database.
The _id version of a ForeignKey is a particularly useful aspect of Django, one that everyone should know and use from time to time when appropriate.
caveat: [ < Django 2.1 ]
@RuneKaagaard points out that employee.type is not accurate afterwards on Django versions before 2.1, even after calling employee.save() (it keeps its old value). Using it would of course defeat the purpose of the above optimisation, but I would prefer an accidental extra query to being incorrect. So be careful: only use this when you are finished working on your instance (e.g. employee).
Note: as @humcat points out below, the bug is fixed in Django 2.1.
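A small illustration of the caveat, with made-up ids (the behaviour described applies to Django < 2.1):

employee = Employee.objects.select_related('type').get(pk=1)
employee.type_id = 4      # no query; the FK column is set directly
employee.save()

# The cached related object is not invalidated on old Django versions,
# so this may still print the *old* EmployeeType:
print(employee.type)

# Re-fetching gives a consistent instance, at the cost of a query:
employee = Employee.objects.get(pk=1)
print(employee.type)      # now reflects type_id == 4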
An alternative that uses create to create the object and save it to the database in one line:
employee = Employee.objects.create(first_name='first', last_name='last', type_id=4)
I'm asking myself a question about clean architecture.
Let's imagine a small API that allows us to create and get a user using that type of architecture. This app has two endpoints and stores the data in a database.
Let's say that we have a db model that looks like:
class User:
    id: int
    firstname: str
    lastname: str
Firstly, the GET endpoint will use the usecase GetUser and use a User entity. This entity will look like:
class User:
    id: int
    firstname: str
    lastname: str
My question concerns the POST endpoint.
The data passed in this endpoint is only the fields firstname and lastname, obviously.
Do I have to create another entity like the one below?
class UserRequest:
    firstname: str
    lastname: str
I find this unsatisfying, because such an entity does not make sense from a business point of view.
Nevertheless, it also seems a bit wobbly to make a "composite" entity such as:
class User:
    id: Optional[int]
    firstname: str
    lastname: str
A third option is to use a class inside the use case file whose only purpose is to model the data coming from the POST request, i.e.:
class UserRequest:
    firstname: str
    lastname: str

class CreateUserUseCase:
    def __init__(self):
        ...

    def execute(self, request: UserRequest):
        ...
So the question is: According to clean architecture principles, What is the best way to model data coming from a POST request that is not a business entity?
Thanks a lot for your help, and don't hesitate to ask questions if my examples are not clear enough.
Stef.
It would be helpful to view multiple endpoints (use-cases) in the context of the same entity as the lifecycle of that entity, for example:
Creating (POST) a new user 'xyz' (writing to database)
Mutating (POST/PUT/PATCH) user 'xyz' (writing to database)
Querying (GET) user 'xyz' (reading from database)
Each of the above actions should involve the same business entity User:
Creating: the User entity is constructed inside the use case (application layer) from the UserRequest DTO (you have actually demonstrated exactly that), then passed to a repository object for persistence.
Mutating: the User entity is retrieved from the database (repository object), modified (application layer), and finally passed back to the repository object for persistence.
Querying: the User entity is retrieved from the database (repository object), passed back to the presentation layer, and finally translated into a response DTO.
One of the principles in CA is to have DTOs in the presentation layer mapped to/from input/output ports. The heart of CA is the domain entities, which are constructed either from input (the request DTO) or from the database.
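A rough sketch of the creation flow described above; all the names here are illustrative, not something prescribed by CA:

from dataclasses import dataclass

# Domain entity (business object).
@dataclass
class User:
    id: int
    firstname: str
    lastname: str

# Request DTO: lives at the boundary and models only what the POST sends.
@dataclass
class UserRequest:
    firstname: str
    lastname: str

class CreateUserUseCase:
    def __init__(self, repository):
        self.repository = repository  # abstraction over the database

    def execute(self, request: UserRequest) -> User:
        # The entity is constructed inside the use case, then persisted.
        user = User(id=self.repository.next_id(),   # next_id()/save() are hypothetical
                    firstname=request.firstname,
                    lastname=request.lastname)
        self.repository.save(user)
        return user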
Okay, so pardon me if I don't make much sense. I hit this 'ObjectId' object is not iterable error whenever I run the collection.find() functions. Going through the answers here, I'm not sure where to start. I'm new to programming, please bear with me.
Every time I hit the route which is supposed to fetch data from MongoDB, I get ValueError: [TypeError("'ObjectId' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')].
Help
Exclude the "_id" from the output.
result = collection.find_one({'OpportunityID': oppid}, {'_id': 0})
I was having a similar problem to this myself. Not having seen your code I am guessing the traceback similarly traces the error to FastAPI/Starlette not being able to process the "_id" field - what you will therefore need to do is change the "_id" field in the results from an ObjectId to a string type and rename the field to "id" (without the underscore) on return to avoid incurring issues with Pydantic.
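A minimal sketch of that conversion; serialize_doc is just a hypothetical helper name:

def serialize_doc(doc: dict) -> dict:
    # Rename MongoDB's "_id" to "id" and stringify the ObjectId
    # so FastAPI/Pydantic can encode the result as JSON.
    doc = dict(doc)
    doc["id"] = str(doc.pop("_id"))
    return doc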
First of all, if we had some examples of your code, this would be much easier. I can only assume that you are not mapping your MongoDb collection data to your Pydantic BaseModel correctly.
Read this:
MongoDB stores data as BSON. FastAPI encodes and decodes data as JSON strings. BSON has support for additional non-JSON-native data types, including ObjectId which can't be directly encoded as JSON. Because of this, we convert ObjectIds to strings before storing them as the _id.
I want to draw attention to the id field on this model. MongoDB uses _id, but in Python, underscores at the start of attributes have special meaning. If you have an attribute on your model that starts with an underscore, pydantic—the data validation framework used by FastAPI—will assume that it is a private variable, meaning you will not be able to assign it a value! To get around this, we name the field id but give it an alias of _id. You also need to set allow_population_by_field_name to True in the model's Config class.
Here is a working example:
First create the BaseModel:
from bson import ObjectId
from pydantic import BaseModel, Field

class PyObjectId(ObjectId):
    """Custom type for reading MongoDB IDs."""

    @classmethod
    def __get_validators__(cls):
        yield cls.validate

    @classmethod
    def validate(cls, v):
        if not ObjectId.is_valid(v):
            raise ValueError("Invalid object_id")
        return ObjectId(v)

    @classmethod
    def __modify_schema__(cls, field_schema):
        field_schema.update(type="string")


class Student(BaseModel):
    id: PyObjectId = Field(default_factory=PyObjectId, alias="_id")
    first_name: str
    last_name: str

    class Config:
        allow_population_by_field_name = True
        arbitrary_types_allowed = True
        json_encoders = {ObjectId: str}
Now just unpack everything:
async def get_student(student_id) -> Student:
    data = await collection.find_one({'_id': student_id})
    if data is None:
        raise HTTPException(status_code=404, detail='Student not found.')
    student: Student = Student(**data)
    return student
Use the response model inside the app decorator. Here is a sample example:
from pydantic import BaseModel
class Todo(BaseModel):
    title: str
    details: str
main.py
@app.get("/{title}", response_model=Todo)
async def get_todo(title: str):
    response = await fetch_one_todo(title)
    if not response:
        raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail='not found')
    return response
use db.collection.find({"_id": ObjectId("12348901384918")})
here db is the database, collection is the collection name, and the ObjectId string must be wrapped in double quotes.
I was trying to iterate through all the documents and what worked for me was this solution https://github.com/tiangolo/fastapi/issues/1515#issuecomment-782835977
Those lines just need to be added after the subclass of the ObjectId class. An example is given at the following link:
https://github.com/tiangolo/fastapi/issues/1515#issuecomment-782838556
I had this issue until I upgraded from MongoDB version 5.0.9 to version 6.0.0, so MongoDB made some changes on their end to handle this, if you have the ability to upgrade. I ran into this issue when creating a test server, and when I created a new test server on 6.0.0, the error went away.
I've used MongoEngine a lot lately. Apart from the MongoDB integration, I like the idea of defining the structures of entities explicitly. Field definitions make code easier to understand. Also, using those definitions, I can validate objects to catch potential bugs or serialize/deserialize them more accurately.
The problem with MongoEngine is that it is designed specifically to work with a storage engine. The same applies for Django and SQLAlchemy models, which also lack list and set types. My question is, then, is there an object schema/model library for Python that does automated object validation and serialization, but not object-relational mapping or any other fancy stuff?
Let me give an example.
class Wheel(Entity):
    radius = FloatField(1.0)

class Bicycle(Entity):
    front = EntityField(Wheel)
    back = EntityField(Wheel)

class Owner(Entity):
    name = StringField()
    bicycles = ListField(EntityField(Bicycle))

owner = Owner(name='Eser Aygün', bicycles=[])

bmx = Bicycle()
bmx.front = Wheel()
bmx.back = Wheel()

trek = Bicycle()
trek.front = Wheel(1.2)
trek.back = Wheel(1.2)

owner.bicycles.append(bmx)
owner.bicycles.append(trek)

owner.validate()  # checks the structure recursively
Given the structure, it is also easy to serialize and deserialize objects. For example, owner.jsonify() may return the dictionary
{
    'name': 'Eser Aygün',
    'bicycles': [{
        'front': {
            'radius': 1.0
        },
        'back': {
            'radius': 1.0
        }
    }, {
        'front': {
            'radius': 1.2
        },
        'back': {
            'radius': 1.2
        }
    }],
}
and you can easily convert it back calling owner.dejsonify(dic).
If anyone is still looking, as python-entities hasn't been updated for a while, there are some good libraries out there:
schematics - https://schematics.readthedocs.org/en/latest/
colander - http://docs.pylonsproject.org/projects/colander/en/latest/
voluptuous - https://pypi.python.org/pypi/voluptuous (more of a validation library)
Check out mongopersist, which uses mongo as a persistence layer for Python objects like ZODB. It does not perform schema validation, but it lets you move objects between Mongo and Python transparently.
For validation or other serialization/deserialization scenarios (e.g. forms), consider colander. Colander project description:
Colander is useful as a system for validating and deserializing data obtained via XML, JSON, an HTML form post or any other equally simple data serialization. It runs on Python 2.6, 2.7 and 3.2. Colander can be used to:
Define a data schema.
Deserialize a data structure composed of strings, mappings, and lists into an arbitrary Python structure after validating the data structure against a data schema.
Serialize an arbitrary Python structure to a data structure composed of strings, mappings, and lists.
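For the bicycle example in the question, a colander schema could look roughly like this (a sketch; the schema class names are just illustrative):

import colander

class WheelSchema(colander.MappingSchema):
    radius = colander.SchemaNode(colander.Float(), missing=1.0)

class BicycleSchema(colander.MappingSchema):
    front = WheelSchema()
    back = WheelSchema()

schema = BicycleSchema()
# deserialize() validates and converts the raw mapping into Python types.
appstruct = schema.deserialize({'front': {'radius': '1.2'}, 'back': {'radius': '1.2'}})
# appstruct == {'front': {'radius': 1.2}, 'back': {'radius': 1.2}}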
What you're describing can be achieved with remoteobjects, which contains a mechanism (called dataobject) to allow you to define the structure of an object such that it can be validated and it can be marshalled easily to and from JSON.
It also includes some functionality for building a REST client library that makes HTTP requests, but the use of this part is not required.
The main remoteobjects distribution does not come with specific StringField or IntegerField types, but it's easy enough to implement them. Here's an example BooleanField from a codebase I maintain that uses remoteobjects:
class BooleanField(dataobject.fields.Field):
    def encode(self, value):
        if value is not None and type(value) is not bool:
            raise TypeError("Requires boolean")
        return super(BooleanField, self).encode(value)
This can then be used in an object definition:
class ThingWithBoolean(dataobject.DataObject):
    my_boolean = BooleanField()
And then:
thing = ThingWithBoolean.from_dict({"my_boolean": True})
thing.my_boolean = "hello"
return json.dumps(thing.to_dict())  # will fail because my_boolean is not a boolean
As I said earlier in a comment, I've decided to invent my own wheel. I started implementing an open-source Python library, Entities, that just does what I wanted. You can check it out from https://github.com/eseraygun/python-entities/.
The library supports recursive and non-recursive collection types (list, set and dict), nested entities and reference fields. It can automatically validate, serialize, deserialize and generate hashable keys for entities of any complexity. (In fact, de/serialization feature is not complete yet.)
This is how you use it:
from entities import *

class Account(Entity):
    id = IntegerField(group=PRIMARY)      # this field is in primary key group
    iban = IntegerField(group=SECONDARY)  # this is in secondary key group
    balance = FloatField(default=0.0)

class Name(Entity):
    first_name = StringField(group=SECONDARY)
    last_name = StringField(group=SECONDARY)

class Customer(Entity):
    id = IntegerField(group=PRIMARY)
    name = EntityField(Name, group=SECONDARY)
    accounts = ListField(ReferenceField(Account), default=list)

# Create Account objects.
a_1 = Account(1, 111, 10.0)                  # __init__() recognizes positional arguments
a_2 = Account(id=2, iban=222, balance=20.0)  # as well as keyword arguments

# Generate hashable key using primary key.
print a_1.keyify()  # prints '(1,)'

# Generate hashable key using secondary key.
print a_2.keyify(SECONDARY)  # prints '(222,)'

# Create Customer object.
c = Customer(1, Name('eser', 'aygun'))

# Generate hashable key using primary key.
print c.keyify()  # prints '(1,)'

# Generate hashable key using secondary key.
print c.keyify(SECONDARY)  # prints "(('eser', 'aygun'),)"

# Try validating an invalid object.
c.accounts.append(123)
try:
    c.validate()  # fails
except ValidationError:
    print 'accounts list is only for Account objects'

# Try validating a valid object.
c.accounts = [a_1, a_2]
c.validate()  # succeeds