Unbind Marshmallow transient object from database session - python

I'm using the marshmallow_sqlalchemy library to build objects from schemas and then add them to the database. My Marshmallow and SQLAlchemy classes are defined as follows:
from marshmallow_sqlalchemy import ModelSchema
from marshmallow import fields

class Person(db.Model):
    id = db.Column(db.Integer, autoincrement=True, primary_key=True, unique=True)
    first_name = db.Column(db.Text)
    last_name = db.Column(db.Text)
    address = db.Column(db.Text)

class PersonSchema(ModelSchema):
    first_name = fields.String(allow_none=False)
    last_name = fields.String(allow_none=False)
    address = fields.String(allow_none=True)

    class Meta:
        model = Person
        sqla_session = db.session
        fields = ('id', 'first_name', 'last_name', 'address')
I have two sessions: db.session and session_2. Objects created through PersonSchema are attached to db.session by default, but I need to use session_2 to add those objects to the database:
person_data = {'first_name': 'Bruno', 'last_name': 'Justin', 'address': 'Street 34, DF'}
person = PersonSchema().load(person_data)  # <Person (transient XXXXXXXXXXX)>
If I use db.session to add this object to db it will work fine:
db.session.add(person)
db.session.commit()
But, when I want to use session_2 like this:
session_2.add(person)
session_2.commit()
I get this error:
sqlalchemy.exc.InvalidRequestError: Object '<Person at 0x7ff9193e27d0>' is already attached to session '2' (this is '4')
which is expected, as objects built with PersonSchema are attached to db.session by default.
My question here is: is there a way to unbind person from db.session so that it can be used with session_2, without changing the definition of PersonSchema? Or is there a way to override the default session when calling .load()? (I tried .load(person_data, session=session_2) and it didn't work.) What is the best way to avoid or fix this situation?
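For reference, a minimal sketch of what detaching looks like in plain SQLAlchemy; Session.expunge() is standard SQLAlchemy, but whether it fits cleanly into the Marshmallow workflow here is an assumption:

person = PersonSchema().load(person_data)  # attached to db.session by the schema

db.session.expunge(person)  # detach the pending object from the default session
session_2.add(person)
session_2.commit()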

Related

How to validate JSON object before feeding to SQLAlchemy object?

When I want to create or update an object in my database, I send JSON from the client. I use SQLAlchemy to work with the data. A model already has all the needed information about types, like:
class Project(db.Model):
    __tablename__ = 'project'

    creation_date = db.Column(db.DateTime, default=datetime.utcnow)
    title = db.Column(db.String(255), nullable=False)

    def __init__(self, **kwargs):
        super(Project, self).__init__(**kwargs)
So, I want to be able to feed raw JSON into the constructor and check that all the fields fit the types. I can do it with Marshmallow, but I found I must recreate the schema like this:
class ProjectSchema(db_schema.Schema):
    title = fields.Str(required=True)

    class Meta:
        fields = ('date', 'event', 'comment')
which is very annoying and not safe, because all the information about the fields ends up duplicated. Is there any way to do it automatically from a single source?
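For reference, a minimal sketch of the single-source approach using marshmallow-sqlalchemy's ModelSchema (the same library used in the question above); request_json is a stand-in for the incoming payload, and the exact return value of load() depends on the marshmallow version:

from marshmallow_sqlalchemy import ModelSchema

class ProjectSchema(ModelSchema):
    class Meta:
        model = Project          # field names and types are derived from the model
        sqla_session = db.session

project = ProjectSchema().load(request_json)  # validated Project instance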

Simple Many-to-Many issue in Flask-Admin

I'm adding Flask-Admin to an existing Flask app (using Python 3, and MySQL with SQLAlchemy), and I simply cannot figure out how to get a many-to-many relationship to render correctly. I've read a number of questions about this here, and it looks like I am following the right practices.
I have a Quotation table, a Subject table, and a QuotationSubject table, which has to be an actual class rather than an association table, but I don't care about the extra columns in the association table for this purpose; they're things like last_modified that I don't need to display or edit. The relationships seem to work in the rest of the application.
Trimming out the fields and definitions that don't matter here, I have:
class Quotation(db.Model):
    __tablename__ = 'quotation'

    id = db.Column(db.Integer, primary_key=True)
    word = db.Column(db.String(50))
    description = db.Column(db.Text)
    created = db.Column(db.TIMESTAMP, default=db.func.now())
    last_modified = db.Column(db.DateTime, server_default=db.func.now())

    subject = db.relationship("QuotationSubject", back_populates="quotation")

    def __str__(self):
        return self.word

class Subject(db.Model):
    __tablename__ = 'subject'

    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(50))
    created = db.Column(db.TIMESTAMP, default=db.func.now())
    last_modified = db.Column(db.DateTime, server_default=db.func.now())

    quotation = db.relationship("QuotationSubject", back_populates="subject")

    def __str__(self):
        return self.name

class QuotationSubject(db.Model):
    __tablename__ = 'quotation_subject'

    id = db.Column(db.Integer, primary_key=True)
    quotation_id = db.Column(db.Integer, db.ForeignKey('quotation.id'), default=0, nullable=False)
    subject_id = db.Column(db.Integer, db.ForeignKey('subject.id'), default=0, nullable=False)
    created = db.Column(db.TIMESTAMP, default=db.func.now())
    last_modified = db.Column(db.DateTime, server_default=db.func.now())

    quotation = db.relationship('Quotation', back_populates='subject', lazy='joined')
    subject = db.relationship('Subject', back_populates='quotation', lazy='joined')
In my admin.py, I simply have:
class QuotationModelView(ModelView):
    column_searchable_list = ['word', 'description']
    form_excluded_columns = ['created', 'last_modified']
    column_list = ('word', 'subject')

admin.add_view(QuotationModelView(Quotation, db.session))
And that's it.
In my list view, instead of seeing subject values, I get the associated entry in the QuotationSubject table, e.g.
test <QuotationSubject 1>, <QuotationSubject 17>, <QuotationSubject 18>
book <QuotationSubject 2>
Similarly, in my create view, instead of getting a list of a dozen or so subjects, I get an enormous list of everything from the QuotationSubject table.
I've looked at some of the inline_models stuff, suggested by some posts here, which also hasn't worked, but in any case there are other posts (e.g. Flask-Admin view with many to many relationship model) which suggest that what I'm doing should work. I'd be grateful if someone could point out what I'm doing wrong.
First of all, I fear there's something missing from your question because I don't see the Citation class defined. But that doesn't seem to be the problem.
The most classic example of many-to-many relationships in Flask is roles to users. Here is what a working role to user M2M relationship can look like:
class RolesUsers(Base):
    __tablename__ = 'roles_users'

    id = db.Column(db.Integer(), primary_key=True)
    user_id = db.Column(db.Integer(), db.ForeignKey('user.id'))
    role_id = db.Column(db.Integer(), db.ForeignKey('role.id'))

class Role(RoleMixin, Base):
    __tablename__ = 'role'

    id = db.Column(db.Integer(), primary_key=True)
    name = db.Column(db.String(80), unique=True)

    def __repr__(self):
        return self.name

class User(UserMixin, Base):
    __tablename__ = 'user'

    id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String(120), index=True, unique=True)
    roles = db.relationship('Role', secondary='roles_users',
                            backref=db.backref('users', lazy='dynamic'))
And in Flask-Admin:
from user.models import User, Role
admin.add_view(PVCUserView(User, db.session))
admin.add_view(PVCModelView(Role, db.session))
Note that the relationship is only declared once, with a backref so it's two-way. It looks like you're using back_populates for this, which I believe is equivalent.
For the case you're describing, it looks like your code declares relationships directly to the M2M table. Is this really what you want? You say that you don't need access to the extra columns in the QuotationSubject table for Flask-Admin. But do you need them elsewhere? It seems very odd to me to have a call to quotation or subject actually return an instance of QuotationSubject. I believe this is why Flask-Admin is listing all the QuotationSubject rows in the create view.
So my recommendation would be to try setting your relationships to point directly to the target model class while putting the M2M table as the secondary.
If you want to access the association model in other places (and if it really can't be an Association Proxy for some reason) then create a second relationship in each model class which explicitly points to it. You will then likely need to exclude that relationship in Flask-Admin using form_excluded_columns and column_exclude_list.
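A rough sketch of that recommendation, assuming the models from the question; the attribute name subjects and the backref are illustrative, not the original names:

class Quotation(db.Model):
    __tablename__ = 'quotation'

    id = db.Column(db.Integer, primary_key=True)
    word = db.Column(db.String(50))
    description = db.Column(db.Text)

    # Point the relationship at Subject itself and route it through the
    # association table, so Flask-Admin lists and edits Subject rows directly.
    subjects = db.relationship('Subject', secondary='quotation_subject',
                               backref=db.backref('quotations', lazy='dynamic'))

With that in place, column_list = ('word', 'subjects') in the ModelView should show subject names rather than QuotationSubject instances.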

Graphene_sqlalchemy and flask-sqlalchemy disagree on what constitutes a Valid SQLAlchemy Model?

Playing with Flask and Graphene, and I'm running into a problem. Consider the following.
The model project.models.site:
from project import db
from project.models import user
from datetime import datetime

class Site(db.Model):
    __tablename__ = 'sites'

    id = db.Column(db.Integer(), primary_key=True)
    owner_id = db.Column(db.Integer, db.ForeignKey('users.id'))
    name = db.Column(db.String(50))
    desc = db.Column(db.Text())
    location_lon = db.Column(db.String(50))
    location_lat = db.Column(db.String(50))
    creation_date = db.Column(db.DateTime(), default=datetime.utcnow())
    users = db.relationship(
        user,
        backref=db.backref('users',
                           uselist=True,
                           cascade='delete,all'))
The model schema (project.schemas.site_schema)
from graphene_sqlalchemy import SQLAlchemyObjectType
from project.models import site as site_model
import graphene

class SiteAttributes:
    owner_id = graphene.ID(description="Site owners user.id")
    name = graphene.String(description="Site Name")
    desc = graphene.String(description="Site description")
    location_lon = graphene.String(description="Site Longitude")
    location_lat = graphene.String(description="Site Latitude")
    creation_date = graphene.DateTime(description="Site Creation Date")

class Site(SQLAlchemyObjectType, SiteAttributes):
    """Site node."""

    class Meta:
        model = site_model
        interfaces = (graphene.relay.Node,)
and finally the main schema through which I plan to expose the GraphQL API (project.schemas.schema):
from graphene_sqlalchemy import SQLAlchemyConnectionField
import graphene
from project.schemas import site_schema, trade_schema, user_schema

class Query(graphene.ObjectType):
    """Query objects for GraphQL API."""

    node = graphene.relay.Node.Field()
    user = graphene.relay.Node.Field(user_schema.User)
    userList = SQLAlchemyConnectionField(user_schema.User)
    site = graphene.relay.Node.Field(site_schema.Site)
    siteList = SQLAlchemyConnectionField(site_schema.Site)
    trade = graphene.relay.Node.Field(trade_schema.Trade)
    tradeList = SQLAlchemyConnectionField(trade_schema.Trade)

schema = graphene.Schema(query=Query)
If I load the model as such when starting up, all is well: migrations happen and the application runs perfectly fine. But if I load the model through the schema, the application fails with the following message:
AssertionError: You need to pass a valid SQLAlchemy Model in Site.Meta, received "<module 'project.models.site' from '/vagrant/src/project/models/site.py'>".
I initialized SQLAlchemy with flask_sqlalchemy, which makes me wonder: is the model that is created not considered a valid SQLAlchemy model? Or am I making a basic error here that I'm just not seeing? I'm assuming it's the latter.
Based on the error message, it seems that project.models.site (imported in the second snippet with from project.models import site as site_model) is a Python module rather than a subclass of db.Model or similar. Did you perhaps mean to import Site (uppercase) instead of site?
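A minimal sketch of that fix, assuming the model class inside project/models/site.py is named Site (the alias only avoids clashing with the Graphene type):

from graphene_sqlalchemy import SQLAlchemyObjectType
import graphene

from project.models.site import Site as SiteModel  # import the class, not the module

class Site(SQLAlchemyObjectType):
    """Site node."""

    class Meta:
        model = SiteModel            # a db.Model subclass satisfies the assertion
        interfaces = (graphene.relay.Node,)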
So fixing packages to classes finally got me going in the right direction. It turns out that the issue was deeper than that, and the only way to get to it was by reading the hidden exceptions.
First I ensured that the actual model classes were loaded rather than the modules. Thank you so much for that one #jwodder.
In the end, https://github.com/graphql-python/graphene-sqlalchemy/issues/121 ended up pointing me in the right direction. By checking the actual exception messages I found my way to a solution.

Django DRF - What's the use of serializers?

I've been using Django for over 3 years now, but have never felt the need to use DRF. However, seeing the growing popularity of DRF, I thought of giving it a try.
Serialization is the concept I find most difficult. Consider, for example, that I want to save user details. The following are the user-related models:
class Users(models.Model):
    GENDER_CHOICES = (
        ('M', 'MALE'),
        ('F', 'FEMALE'),
        ('O', 'OTHERS'),
    )

    first_name = models.CharField(max_length=255, blank=True, null=True)
    middle_name = models.CharField(max_length=255, blank=True, null=True)
    last_name = models.CharField(max_length=255, blank=True, null=True)
    gender = models.CharField(choices=GENDER_CHOICES, max_length=1, blank=True, null=True)

class UserAddress(models.Model):
    ADDRESS_TYPE_CHOICES = (
        ('P', 'Permanent'),
        ('Cu', 'Current'),
        ('Co', 'Correspondence')
    )

    line1 = models.CharField(max_length=255)
    line2 = models.CharField(max_length=255, blank=True, null=True)
    pincode = models.IntegerField()
    address_type = models.CharField(choices=ADDRESS_TYPE_CHOICES, max_length=255)
    user_id = models.ForeignKey(Users, related_name='uaddress')

class UserPhone(models.Model):
    phone = models.CharField(max_length=10)
    user_id = models.ForeignKey(Users, related_name='uphone')

class UserPastProfession(models.Model):
    profession = models.CharField(max_length=10)  # BusinessMan, Software Engineer, Artist etc.
    user_id = models.ForeignKey(Users, related_name='uprofession')
I'm getting all the user details bundled in one POST endpoint.
{
    'first_name': 'first_name',
    'middle_name': 'middle_name',
    'last_name': 'last_name',
    'gender': 'gender',
    'address': [{
        'line1': 'line1',
        'line2': 'line2',
        'address_type': 'address_type',
    }],
    'phone': ['phone1', 'phone2'],
    'profession': ['BusinessMan', 'Software Engineer', 'Artist']
}
Without using DRF, I would have created a Users object first and then linked it with UserAddress, UserPhone and UserPastProfession objects.
How could the same be done using DRF? I mean validating, deserializing, and then saving the details. What would the serializers.py file look like?
If you want to make your life easy, you will surely use it.
Serializers allow complex data such as querysets and model instances to be converted to native Python datatypes that can then be easily rendered into JSON, XML or other content types. Serializers also provide deserialization, allowing parsed data to be converted back into complex types, after first validating the incoming data.
This gives you a generic way to control the output of your responses, as well as a ModelSerializer class which provides a useful shortcut for creating serializers that deal with model instances and querysets.
They save you from writing a lot of custom code. Let’s look at some examples.
Pretend we have an app that tracks a list of tasks that the user has to complete by a certain date. The Task model might look something like the following:
class Task(models.Model):
    title = models.CharField(max_length=255)
    due_date = models.DateField()
    completed = models.BooleanField(default=False)
When the user requests those tasks, our app returns a response in the form of a JSON-serialized string. What happens when we try to serialize our Django objects using the built-in json library?
import json
task = Task.objects.first()
json.dumps(task)
We get a TypeError. Task is not JSON serializable. To bypass this, we have to explicitly create a dictionary with each of the attributes from Task.
json.dumps({
    'title': task.title,
    'due_date': task.due_date.strftime('%Y-%m-%d'),
    'completed': task.completed
})
Deserializing a Python object from a JSON string or from request data is just as painful.
from datetime import datetime
title = request.data.get('title')
due_date = datetime.strptime(request.data.get('due_date'), '%Y-%m-%d').date()
completed = request.data.get('completed')
task = Task.objects.create(title=title, due_date=due_date, completed=completed)
Now, imagine having to follow these steps in multiple views if you have more than one API that needs to serialize (or deserialize) JSON data. Also, if your Django model changes, you have to track down and edit all of the custom serialization code.
Creating and using a serializer is easy:
from rest_framework import serializers

class TaskSerializer(serializers.ModelSerializer):
    def create(self, validated_data):
        return Task.objects.create(**validated_data)

    class Meta:
        model = Task
        fields = ('title', 'due_date', 'completed')

# Serialize a Python object to primitive data, ready for JSON rendering.
task_data = TaskSerializer(task).data

# Create a Python object from request data.
serializer = TaskSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
task = serializer.save()
If you update the Django model, you only have to update the serializer in one place and all of the code that depends on it works. You also get a lot of other goodies including (as you mentioned) data validation.
Hope that helps!
If I understood you correctly, my answer is:
It is not necessary to write exactly one serializer per model, or even one per method type (POST, GET, etc.). You can create as many serializers for your model as you need and set the fields you want to operate on. You can also set those different serializers as the serializer_class of your APIView class, per method.
I strongly recommend taking some time to look at the Django REST Framework tutorial.
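A small sketch of the per-method idea, assuming a generic view and two hypothetical serializer classes; get_serializer_class is a standard DRF hook:

from rest_framework import generics

class UserListCreateView(generics.ListCreateAPIView):
    queryset = Users.objects.all()

    def get_serializer_class(self):
        # Pick a different serializer depending on the HTTP method.
        if self.request.method == 'POST':
            return UserWriteSerializer   # hypothetical write serializer
        return UserReadSerializer       # hypothetical read serializer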
Below is how your serializers could look, but please go through the DRF serializer relations documentation:
from rest_framework.serializers import (
    ModelSerializer,
    PrimaryKeyRelatedField
)

class UserSerializer(ModelSerializer):
    """
    Serializer for the Users model. Please don't forget to import the model.
    """
    class Meta:
        model = Users
        fields = "__all__"

class UserPhoneSerializer(ModelSerializer):
    """
    Serializer for the UserPhone model.
    Pass the previously created user id within the POST; the serializer will validate it automatically.
    """
    user_id = PrimaryKeyRelatedField(queryset=Users.objects.all())

    class Meta:
        model = UserPhone
        fields = "__all__"

class UserAddressSerializer(ModelSerializer):
    """
    Serializer for the UserAddress model.
    Pass the previously created user id within the POST; the serializer will validate it automatically.
    """
    user_id = PrimaryKeyRelatedField(queryset=Users.objects.all())

    class Meta:
        model = UserAddress
        fields = "__all__"

class UserPastProfessionSerializer(ModelSerializer):
    """
    Serializer for the UserPastProfession model.
    Pass the previously created user id within the POST; the serializer will validate it automatically.
    """
    user_id = PrimaryKeyRelatedField(queryset=Users.objects.all())

    class Meta:
        model = UserPastProfession
        fields = "__all__"
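Since the question bundles everything into one POST payload, here is a minimal sketch of a writable nested serializer; only the address part is shown, the 'address' key and 'uaddress' source come from the question's payload and related_name, and phones/professions would follow the same pattern:

from rest_framework import serializers

class UserAddressSerializer(serializers.ModelSerializer):
    class Meta:
        model = UserAddress
        fields = ('line1', 'line2', 'pincode', 'address_type')

class UserSerializer(serializers.ModelSerializer):
    # 'address' mirrors the key in the question's payload; 'uaddress' is the
    # related_name on UserAddress.user_id.
    address = UserAddressSerializer(many=True, source='uaddress')

    class Meta:
        model = Users
        fields = ('first_name', 'middle_name', 'last_name', 'gender', 'address')

    def create(self, validated_data):
        addresses = validated_data.pop('uaddress', [])
        user = Users.objects.create(**validated_data)
        for addr in addresses:
            UserAddress.objects.create(user_id=user, **addr)
        return user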

Model is not defined

I'm using Flask and Peewee to create models for Users and Appointments, and each of them has a ForeignKeyField referencing the other. The problem is, if I define one above the other, Flask will give me x is not defined.
For example:
class User(Model):
    appointments = ForeignKeyField(Appointment, related_name='appointments')

class Appointment(Model):
    with_doctor = ForeignKeyField(User, related_name='doctor')
This raises a NameError ('Appointment' is not defined with the order shown, or 'User' is not defined if the classes are swapped). How can I fix this problem?
You generally don't define the foreign key on both sides of the relationship, especially when the relationship isn't a one-to-one.
You should also set the related_name to something that makes sense on the related model. user.appointments is probably a better name for accessing a User's Appointments than user.doctor.
class User(Model):
    pass

class Appointment(Model):
    with_doctor = ForeignKeyField(User, related_name='appointments')
You can define it as a string, as far as I know:
class User(Model):
    appointments = ForeignKeyField("Appointment", related_name='appointments')

class Appointment(Model):
    with_doctor = ForeignKeyField(User, related_name='doctor')
If the declaration comes before the class is created, you have the option to write it as a string. I'm not sure, but I think the binding happens at runtime.
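If the plain string form doesn't work in your Peewee version, a sketch of the forward reference using Peewee's DeferredForeignKey (assuming Peewee 3.x, where related_name became backref; database setup omitted):

from peewee import Model, DeferredForeignKey, ForeignKeyField

class User(Model):
    # Reference Appointment by name; resolved once the class is defined.
    appointment = DeferredForeignKey('Appointment', backref='users', null=True)

class Appointment(Model):
    with_doctor = ForeignKeyField(User, backref='appointments')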
Try some changes like this; here is an example for names and addresses:
class Person(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    addresses = db.relationship('Address', backref='person', lazy='dynamic')

class Address(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    person_id = db.Column(db.Integer, db.ForeignKey('person.id'))
As far as I know, you should use db.ForeignKey() for a relationship like this in Flask-SQLAlchemy; ForeignKeyField() comes from Peewee, not SQLAlchemy.
