When I want to create or update an object in my database, I send JSON from the client. I use SQLAlchemy to work with the data. A model already has all the needed type information, like:
class Project(db.Model):
    __tablename__ = 'project'
    creation_date = db.Column(db.DateTime, default=datetime.utcnow)
    title = db.Column(db.String(255), nullable=False)

    def __init__(self, **kwargs):
        super(Project, self).__init__(**kwargs)
So I want to be able to feed raw JSON into the constructor and check that all the fields fit the declared types. I can do this with Marshmallow, but I found I have to recreate the schema like this:
class ProjectSchema(db_schema.Schema):
    title = fields.Str(required=True)

    class Meta:
        fields = ('date', 'event', 'comment')
which is very annoying and unsafe, because all the information about the fields has to be duplicated. Is there any way to do it automatically from a single source?
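For reference, the column metadata on the model is already enough to drive a basic check, so the model can stay the single source of truth. A minimal sketch with plain SQLAlchemy (no Flask; `validate_payload` is a hypothetical helper, not a library API):

```python
from datetime import datetime

from sqlalchemy import Column, DateTime, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Project(Base):
    __tablename__ = 'project'
    id = Column(Integer, primary_key=True)
    creation_date = Column(DateTime, default=datetime.utcnow)
    title = Column(String(255), nullable=False)

def validate_payload(model, data):
    """Check a JSON-like dict against the model's own column types."""
    errors = {}
    for col in model.__table__.columns:
        if col.name in data:
            # col.type.python_type gives str for String, datetime for DateTime, ...
            if not isinstance(data[col.name], col.type.python_type):
                errors[col.name] = f'expected {col.type.python_type.__name__}'
        elif not (col.nullable or col.default is not None or col.primary_key):
            errors[col.name] = 'missing required field'
    return errors
```

A type mismatch (e.g. an integer `title`) or a missing non-nullable field then shows up in the returned dict, with no field definitions duplicated outside the model.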
I'm using the marshmallow_sqlalchemy library to build objects from schemas and then add them to the database. My Marshmallow and SQLAlchemy classes are defined as follows:
from marshmallow_sqlalchemy import ModelSchema
from marshmallow import fields

class Person(db.Model):
    id = db.Column(db.Integer, autoincrement=True, primary_key=True, unique=True)
    first_name = db.Column(db.Text)
    last_name = db.Column(db.Text)
    address = db.Column(db.Text)

class PersonSchema(ModelSchema):
    first_name = fields.String(allow_none=False)
    last_name = fields.String(allow_none=False)
    address = fields.String(allow_none=True)

    class Meta:
        model = Person
        sqla_session = db.session
        fields = ('id', 'first_name', 'last_name', 'address')
I have two sessions: db.session and session_2. Objects created through PersonSchema are attached to db.session by default, but I need to use session_2 to add those objects to the database:
person_data = {'first_name': 'Bruno', 'last_name': 'Justin', 'Address': 'Street 34, DF'}
person = PersonSchema().load(person_data) # <Person (transient XXXXXXXXXXX)>
If I use db.session to add this object to the database, it works fine:
db.session.add(person)
db.session.commit()
But, when I want to use session_2 like this:
session_2.add(person)
session_2.commit()
I get this error:
sqlalchemy.exc.InvalidRequestError: Object '<Person at 0x7ff9193e27d0>' is already attached to session '2' (this is '4')
which is expected, since objects built with PersonSchema are attached to db.session by default.
My question is: is there a way to unbind person from db.session so I can use it with session_2, without changing the definition of PersonSchema? Or how can I override the default session when calling .load()? (I tried .load(person_data, session=session_2) and it didn't work.) What is the best way to avoid or fix this situation?
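For what it's worth, at the plain-SQLAlchemy level the usual escape hatch is Session.expunge, which detaches an instance so another session can adopt it. A minimal sketch under that assumption (standalone models rather than the Flask/marshmallow setup above):

```python
from sqlalchemy import Column, Integer, Text, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Person(Base):
    __tablename__ = 'person'
    id = Column(Integer, primary_key=True, autoincrement=True)
    first_name = Column(Text)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session_1, session_2 = Session(), Session()

person = Person(first_name='Bruno')
session_1.add(person)      # pending in session_1, like the schema's default session

session_1.expunge(person)  # detach from session_1, back to transient
session_2.add(person)      # now session_2 owns it
session_2.commit()
```

For an instance that is already persistent rather than merely pending, sqlalchemy.orm.make_transient is the blunter tool for the same job.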
I'm working on a project using Python (3), Django (1.11) and DRF, in which I have to filter data on the basis of a JSON object field saved as a JSONField in the db model.
Here's what I have tried:
# model.py
from django.db import models
import jsonfield

class MyModel(models.Model):
    id = models.CharField(primary_key=True, max_length=255)
    type = models.CharField(max_length=255)
    props = jsonfield.JSONField()
    repo = jsonfield.JSONField()
    created_at = models.DateTimeField()
# serializers.py
class MyModelSerializer(serializers.ModelSerializer):
    props = serializers.JSONField()
    repo = serializers.JSONField()

    class Meta:
        model = MyModel
        fields = "__all__"
# JSON object
{
    "id": 4633249595,
    "type": "PushEvent",
    "props": {
        "id": 4276597,
        "login": "iholloway",
        "avatar_url": "https://avatars.com/4276597"
    },
    "repo": {
        "id": 269910,
        "name": "iholloway/aperiam-consectetur",
        "url": "https://github.com/iholloway/aperiam-consectetur"
    },
    "created_at": "2016-04-18 00:13:31"
}
# views.py
class PropsEvents(generics.RetrieveAPIView):
    serializer_class = MyModelSerializer

    def get_object(self):
        props_id = self.request.parser_context['kwargs']['id']
        queryset = MyModel.objects.filter(props__id=props_id)
        obj = get_object_or_404(queryset)
        return obj
It should return the MyModel records by props ID: a GET request at /mymodel/props/<ID> should return the JSON array of all the MyModel objects with that props ID. If the requested props ID does not exist, the HTTP response code should be 404; otherwise, 200. The JSON array should be sorted in ascending order by MyModel ID.
When I send a request to this view, it returns an error:
> django.core.exceptions.FieldError: Unsupported lookup 'id' for JSONField or join on the field not permitted.
> [18/Feb/2019 10:37:39] "GET /events/actors/2790311/ HTTP/1.1" 500 16210
So, how can I filter the objects based on the id of props?
Help me, please!
Thanks in advance!
The feature you are looking for is possible; unfortunately it is not that straightforward. As far as I know it is not supported by the jsonfield package; instead you would have to use Postgres as your database backend and use its internal JSONField. I think you can choose one of the following:
switch to django.contrib.postgres.fields.JSONField and use Postgres as your db backend in all environments (which then supports such lookups)
make the data follow certain schema and change the JSONField to a separate model and table
use a hybrid storage solution with a dedicated solution for JSON documents
extract the fields you need to query against into your model, enabling the querying but keeping the unstructured data in the JSONField:
class MyModel(models.Model):
    id = models.CharField(primary_key=True, max_length=255)
    type = models.CharField(max_length=255)
    props = jsonfield.JSONField()
    props_id = models.IntegerField(null=True)
    repo = jsonfield.JSONField()
    repo_id = models.IntegerField(null=True)
    created_at = models.DateTimeField()
And then set the id values manually, or in the model's save():
def save(self, *args, **kwargs):
    self.repo_id = self.repo.get("id")
    self.props_id = self.props.get("id")
    return super().save(*args, **kwargs)
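The mirror-on-save idea is framework-agnostic; here is a plain-Python sketch of the same pattern (hypothetical names, no Django), just to show what the denormalization buys:

```python
import json

class Event:
    """Keeps the raw JSON blob, but mirrors the nested id into a flat attribute."""

    def __init__(self, props_json):
        self.props = json.loads(props_json)
        self.props_id = None

    def save(self):
        # the flat copy is what a real database could index and filter on
        self.props_id = self.props.get('id')
        return self

records = [
    Event('{"id": 4276597, "login": "iholloway"}').save(),
    Event('{"id": 99, "login": "someone-else"}').save(),
]
matches = [r for r in records if r.props_id == 4276597]
```

Filtering on the flat attribute sidesteps JSON lookups entirely, at the cost of keeping the copy in sync on every save.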
You should use
from django.contrib.postgres.fields import JSONField
instead of
import jsonfield
and after that I think everything should work correctly.
Looking at graphene_django, I see it has a bunch of resolvers picking up Django model fields and mapping them to graphene types.
I have a subclass of JSONField I'd also like to be picked up:
# models
class Recipe(models.Model):
    name = models.CharField(max_length=100)
    instructions = models.TextField()
    ingredients = models.ManyToManyField(
        Ingredient, related_name='recipes'
    )
    custom_field = JSONFieldSubclass(....)
# schema
class RecipeType(DjangoObjectType):
    class Meta:
        model = Recipe

    custom_field = ???
I know I could write a separate field and resolver pair for a Query, but I'd prefer it to be available as part of the schema for that model.
What I realize I could do:
class RecipeQuery:
    custom_field = graphene.JSONString(id=graphene.ID(required=True))

    def resolve_custom_field(self, info, **kwargs):
        id = kwargs.get('id')
        instance = get_item_by_id(id)
        return instance.custom_field.to_json()
But this means a separate round trip: first get the id, then get the custom_field for that item, right?
Is there a way I could have it seen as part of the RecipeType schema?
OK, I can get it working by using:
# schema
class RecipeType(DjangoObjectType):
    class Meta:
        model = Recipe

    custom_field = graphene.JSONString(
        resolver=lambda my_obj, resolve_obj: my_obj.custom_field.to_json()
    )
(the custom_field has a to_json method)
I figured it out without digging deeply into how graphene maps Django model field types to graphene types.
It's based on this:
https://docs.graphene-python.org/en/latest/types/objecttypes/#resolvers
Same function name, but parameterized differently.
It will be simplest to explain with a code example. In Python I can do the following to achieve model inheritance:
"""Image model"""
from sqlalchemy import Column, ForeignKey
from sqlalchemy.types import Integer, String, Text
from miasto_3d.model.meta import Base
class Image(Base):
__tablename__ = "image"
image_id = Column(Integer, primary_key=True)
path = Column(String(200))
def get_mime(self):
#function to get mime type from file
pass
"""WorkImage model"""
class WorkImage(Image, Base):
__tablename__ = "work_images"
image_id = Column(Integer, ForeignKey("image.image_id"), primary_key=True)
work_id = Column(Integer, ForeignKey("work.id"))
work = relation("Work", backref=backref('images',order_by='WorkImage.work_id'))
"""UserAvatar model"""
class UserAvatar(Image, Base):
__tablename__ = "user_avatars"
image_id = Column(Integer, ForeignKey("image.image_id"), primary_key=True)
user_id = Column(Integer, ForeignKey("user.id"))
user = relation("User", backref=backref('images',order_by='UserAvatar.user_id'))
How do I do similar things in Rails? Or maybe there is another, better way to do it?
I know paperclip, but I don't like its approach of using a shared table to store both photo and model data.
It looks like you're wanting either a polymorphic association or perhaps single table inheritance.
Since you don't define database fields in the model, you cannot inherit database schema in this way - all your fields will need to be specified per table in a migration. You probably should use paperclip, if only because reinventing the wheel is a pain. It works really well, and abstracts away from the actual database structure for you.
In Rails, rather than model inheritance, shared functionality tends to be implemented in modules, like so:
http://handyrailstips.com/tips/14-drying-up-your-ruby-code-with-modules
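A minimal sketch of that module approach in plain Ruby (hypothetical names, no Rails), with the shared get_mime-style behaviour mixed in rather than inherited:

```ruby
# Shared image behaviour lives in a module, not a parent class.
module ImageBehavior
  # Naive extension-based lookup, standing in for real MIME detection.
  def mime_type
    { '.png' => 'image/png', '.jpg' => 'image/jpeg' }
      .fetch(File.extname(path), 'application/octet-stream')
  end
end

class WorkImage
  include ImageBehavior
  attr_reader :path

  def initialize(path)
    @path = path
  end
end

class UserAvatar
  include ImageBehavior
  attr_reader :path

  def initialize(path)
    @path = path
  end
end
```

Each class keeps its own table and columns; only the behaviour is shared, which is the idiomatic Rails substitute for the SQLAlchemy-style base class.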
What is the best way to pass a SQLAlchemy query's results to the view?
I have a declaratively mapped table such as:
class Greeting(Base):
    __tablename__ = 'greetings'
    id = Column(Integer, primary_key=True)
    author = Column(String)
    content = Column(Text)
    date = Column(DateTime)

    def __init__(self, author, content, date=datetime.datetime.now()):
        self.author = author
        self.content = content
        self.date = date
Then I run a query with q = session.query(Greeting).order_by(Greeting.date), but when I try to simply return q, it throws a JSON serialization error. From what I understand, this is due to the date field. Is there any simple way to fix this?
Take a look at http://www.sqlalchemy.org/docs/core/serializer.html.
Serializer/Deserializer objects for usage with SQLAlchemy query structures, allowing “contextual” deserialization.
Any SQLAlchemy query structure, either based on sqlalchemy.sql.* or sqlalchemy.orm.* can be used. The mappers, Tables, Columns, Session etc. which are referenced by the structure are not persisted in serialized form, but are instead re-associated with the query structure when it is deserialized.
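That serializer module is for pickling query structures, though, not for producing JSON. For the error in the question, the usual fix is to turn each row into a plain dict and give json a default for datetime values; a stdlib-only sketch:

```python
import datetime
import json

def jsonable(value):
    """json.dumps fallback for types it doesn't know; here, ISO-format dates."""
    if isinstance(value, (datetime.datetime, datetime.date)):
        return value.isoformat()
    raise TypeError(f'not JSON serializable: {value!r}')

# stand-ins for rows produced by the Greeting query
rows = [
    {'author': 'alice', 'content': 'hi',
     'date': datetime.datetime(2011, 5, 1, 12, 30)},
]
payload = json.dumps(rows, default=jsonable)
```

With the ORM you would build such dicts from each Greeting instance (say, a small to_dict method) before dumping, instead of returning the query object itself.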