I have a question regarding the Flask-RESTful extension. I just started using it and ran into a problem. I have Flask-SQLAlchemy entities connected by a many-to-one relation, and I want a RESTful endpoint to return the parent entity with all its children as JSON using a marshaller. In my case, Set contains many Parameters. I looked at the Flask-RESTful docs, but there was no explanation of how to handle this case.
It seems like I'm missing something obvious but cannot figure out any solution.
Here is my code:
# entities
class Set(db.Model):
    id = db.Column("id", db.Integer, db.Sequence("set_id_seq"), primary_key=True)
    title = db.Column("title", db.String(256))
    parameters = db.relationship("Parameters", backref="set", cascade="all")

class Parameters(db.Model):
    id = db.Column("id", db.Integer, db.Sequence("parameter_id_seq"), primary_key=True)
    flag = db.Column("flag", db.String(256))
    value = db.Column("value", db.String(256))
    set_id = db.Column("set_id", db.Integer, db.ForeignKey("set.id"))
# marshallers
from flask_restful import fields

parameter_marshaller = {
    "flag": fields.String,
    "value": fields.String
}

set_marshaller = {
    'id': fields.String,
    'title': fields.String,
    'parameters': fields.List(fields.Nested(parameter_marshaller))
}
# endpoint
class SetApi(Resource):
    @marshal_with(marshallers.set_marshaller)
    def get(self, set_id):
        entity = Set.query.get(set_id)
        return entity

restful_api = Api(app)
restful_api.add_resource(SetApi, "/api/set/<int:set_id>")
Now when I call /api/set/1 I get a server error:
TypeError: 'Set' object is unsubscriptable
So I need a way to correctly define set_marshaller so that the endpoint returns this JSON:
{
    "id": "1",
    "title": "any-title",
    "parameters": [
        {"flag": "any-flag", "value": "any-value" },
        {"flag": "any-flag", "value": "any-value" },
        .....
    ]
}
I appreciate any help.
I found the solution to this problem myself.
After playing around with Flask-RESTful I found out that I had made a few mistakes:
First, set_marshaller should look like this:
set_marshaller = {
    'id': fields.String,
    'title': fields.String,
    'parameters': fields.Nested(parameter_marshaller)
}
The Flask-RESTful marshaller can handle the case where a parameter is a list and marshals it to a JSON list.
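To illustrate that behaviour, here is a minimal, hypothetical sketch (plain dict data instead of my actual models) showing fields.Nested marshalling a list attribute:

from flask_restful import marshal, fields

parameter_marshaller = {"flag": fields.String, "value": fields.String}
set_marshaller = {
    "id": fields.String,
    "title": fields.String,
    "parameters": fields.Nested(parameter_marshaller)
}

data = {"id": 1, "title": "demo", "parameters": [{"flag": "f", "value": "v"}]}
result = marshal(data, set_marshaller)
# result["parameters"] is now a list of marshalled dicts: [{"flag": "f", "value": "v"}]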
Another problem was that the Set's parameters relationship is lazy-loaded, so when I tried to marshal a Set I got KeyError: 'parameters'. I need to explicitly load the parameters like this:
class SetApi(Resource):
    @marshal_with(marshallers.set_marshaller)
    def get(self, set_id):
        entity = Set.query.get(set_id)
        entity.parameters  # loads parameters from db
        return entity
Another option is to change the model relationship:
parameters = db.relationship("Parameters", backref="set", cascade="all", lazy="joined")
This is an addition to Zygimantas's answer:
I'm using Flask-RESTful, and this is a solution for loading the nested properties.
You can pass the result of a callable to the marshal decorator:
class OrgsController(Resource):
    @marshal_with(Organization.__json__())
    def get(self):
        return g.user.member.orgs
Then update the models to return the resource fields for their own entity. Nested entities will thus return the resource fields for their respective entities.
class Organization(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    ...

    @staticmethod
    def __json__(group=None):
        _json = {
            'id': fields.String,
            'login': fields.String,
            'description': fields.String,
            'avatar_url': fields.String,
            'paid': fields.Boolean,
        }
        if group == 'flat':
            return _json
        from app.models import Repository
        _json['repos'] = fields.Nested(Repository.__json__('flat'))
        return _json
class Repository(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    owner_id = db.Column(db.Integer, db.ForeignKey('organization.id'))
    owner = db.relationship('Organization', lazy='select', backref=db.backref('repos', lazy='select'), foreign_keys=[owner_id])
    ...

    @staticmethod
    def __json__(group=None):
        _json = {
            'id': fields.String,
            'name': fields.String,
            'updated_at': fields.DateTime(dt_format='iso8601'),
        }
        if group == 'flat':
            return _json
        from app.models import Organization
        _json['owner'] = fields.Nested(Organization.__json__('flat'))
        return _json
This gives the representation I'm looking for, while honoring lazy loading:
[
    {
        "avatar_url": "https://avatars.githubusercontent.com/u/18945?v=3",
        "description": "lorem ipsum.",
        "id": "1805",
        "login": "foobar",
        "paid": false,
        "repos": [
            {
                "id": "9813",
                "name": "barbaz",
                "updated_at": "2014-01-23T13:51:30"
            },
            {
                "id": "12860",
                "name": "bazbar",
                "updated_at": "2015-04-17T11:06:36"
            }
        ]
    }
]
I like:
1) how this approach allows me to define my resource fields per entity, and they are available to all my resource routes across the app.
2) how the group argument allows me to customise the representation however I desire. I only have 'flat' here, but any logic can be written and passed down to deeper nested objects.
3) that entities are only loaded as necessary.
Related
Setup: Postgres 13, Python 3.7, SQLAlchemy 1.4
My question is about creating classes dynamically rather than relying on the contents of models.py. I have a schema.json file with metadata for many tables. The number of columns, column names, and column constraints differ from table to table and are not known in advance.
The JSON gets parsed and its results are mapped to the ORM Postgres dialect (ex: {'column_name1': 'bigint'} becomes 'column_name1 = Column(BigInt)'). This creates a dictionary that contains a table name, column names, and column constraints. Since all the tables inherit from the augmented Base, they automatically receive a PK id field.
I then pass this dictionary to a create_class function that uses the data to create the tables dynamically and commit these new tables to the database.
The challenge is that when I run the code, the tables do get created, but only with a single column: the PK id they receive automatically. All the other columns are ignored.
I suspect I am creating this error in the way I am invoking the Session or Base, or in the way I am passing the column constraints. I'm not sure how to indicate to the ORM that I am passing in Column and Constraint objects.
I have tried changing things like:
the way the classes are created - passing in a Column object instead of a Column string
ex: constraint_dict[k] = f'= Column({v})' VS constraint_dict[k] = f'= {Column}({v})'
changing the way the column constraints are collected
calling Base and create in different ways. I try to show these variations in the commented-out lines in create_class below.
I can't work out which bits of the interaction are creating this error. Any help is much appreciated!
Here is the code:
example of schema.json
{
  "groupings": {
    "imaging": {
      "owner": { "type": "uuid", "required": true, "index": true },
      "tags": { "type": "text", "index": true },
      "filename": { "type": "text" }
    },
    "user": {
      "email": { "type": "text", "required": true, "unique": true },
      "name": { "type": "text" },
      "role": {
        "type": "text",
        "required": true,
        "values": [
          "admin",
          "customer"
        ],
        "index": true
      },
      "date_last_logged": { "type": "timestamptz" }
    }
  },
  "auths": {
    "boilerplate": {
      "owner": ["read", "update", "delete"],
      "org_account": [],
      "customer": ["create", "read", "update", "delete"]
    },
    "loggers": {
      "owner": [],
      "customer": []
    }
  }
}
base.py
from sqlalchemy import Column, create_engine, Integer, MetaData
from sqlalchemy.orm import declared_attr, declarative_base, scoped_session, sessionmaker

engine = create_engine('postgresql://user:pass@localhost:5432/dev', echo=True)
db_session = scoped_session(
    sessionmaker(
        bind=engine,
        autocommit=False,
        autoflush=False
    )
)

# Augment the base class by using the cls argument of the declarative_base() function so all classes derived
# from Base will have a table name derived from the class name and an id primary key column.
class Base:
    @declared_attr
    def __tablename__(cls):
        return cls.__name__.lower()

    id = Column(Integer, primary_key=True)

metadata_obj = MetaData(schema='collect')
Base = declarative_base(cls=Base, metadata=metadata_obj)
models.py
from base import Base
from sqlalchemy import Column, DateTime, Integer, Text
from sqlalchemy.dialects.postgresql import UUID
import uuid

class NumLimit(Base):
    org = Column(UUID(as_uuid=True), default=uuid.uuid4, unique=True)
    limits = Column(Integer)
    limits_rate = Column(Integer)
    rate_use = Column(Integer)

    def __init__(self, org, limits, limits_rate, rate_use):
        super().__init__()
        self.org = org
        self.limits = limits
        self.limits_rate = limits_rate
        self.rate_use = rate_use
create_tables.py (I know this one is messy! Just trying to show all the variations attempted...)
def convert_snake_to_camel(name):
    return ''.join(x.capitalize() or '_' for x in name.split('_'))

def create_class(table_data):
    constraint_dict = {'__tablename__': 'TableClass'}
    table_class_name = ''
    column_dict = {}
    for k, v in table_data.items():
        # Retrieve table, alter the case, store it for later use
        if 'table' in k:
            constraint_dict['__tablename__'] = v
            table_class_name += convert_snake_to_camel(v)
        # Retrieve the rest of the values which are the column names and constraints,
        # ex: 'org = Column(UUID(as_uuid=True), default=uuid.uuid4, unique=True)'
        else:
            constraint_dict[k] = f'= Column({v})'
            column_dict[k] = v

    # When type is called with 3 arguments it produces a new class object, so we use it here to create the table Class
    table_cls = type(table_class_name, (Base,), constraint_dict)

    # Call ORM's 'Table' on the Class
    # table_class = Table(table_cls)  # Error "TypeError: Table() takes at least two positional-only arguments 'name' and 'metadata'"
    # db_session.add(table_cls)  # Error "sqlalchemy.orm.exc.UnmappedInstanceError: Class 'sqlalchemy.orm.decl_api.DeclarativeMeta'
    #   is not mapped; was a class (__main__.Metadata) supplied where an instance was required?"
    # table_class = Table(
    #     table_class_name,
    #     Base.metadata,
    #     constraint_dict)  # Error "sqlalchemy.orm.exc.UnmappedInstanceError: Class 'sqlalchemy.orm.decl_api.DeclarativeMeta'
    #   is not mapped; was a class (__main__.Metadata) supplied where an instance was required?"
    # table_class = Table(
    #     table_class_name,
    #     Base.metadata,
    #     column_dict)
    # table_class.create(bind=engine, checkfirst=True)  # sqlalchemy.exc.ArgumentError: 'SchemaItem' object, such as a 'Column' or a 'Constraint' expected, got {'limits': 'Integer'}
    # table_class = Table(
    #     table_class_name,
    #     Base.metadata,
    #     **column_dict)  # TypeError: Additional arguments should be named <dialectname>_<argument>, got 'limits'
    # Base.metadata.create_all(bind=engine, checkfirst=True)
    # table_class.create(bind=engine, checkfirst=True)

    new_row_vals = table_cls(**column_dict)
    db_session.add(new_row_vals)  # sqlalchemy.exc.ArgumentError: 'SchemaItem' object, such as a 'Column' or a 'Constraint' expected, got {'limits': 'Integer'}
    db_session.commit()
    db_session.close()
I have created a self-contained example for you. It should give you the basic building blocks to build this yourself. It includes typemap, mapping type strings to SQLAlchemy types, and argumentmap, mapping non-SQLAlchemy arguments to their SQLAlchemy counterparts (required: True is nullable: False in SQLAlchemy).
This approach uses metadata to define the tables and then converts them into declarative mappings, as described in the SQLAlchemy docs under "Using a Hybrid Approach with __table__", combined with the Python type() function. The generated classes are then exported into the module scope's globals().
Not everything from your provided schema.json is supported, but this should give you a nice starting point.
from sqlalchemy import Column, Integer, Table, Text
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base

def convert_snake_to_camel(name):
    return "".join(part.capitalize() for part in name.split("_"))

data = {
    "groupings": {
        "imaging": {
            "id": {"type": "integer", "primary_key": True},
            "owner": {"type": "uuid", "required": True, "index": True},
            "tags": {"type": "text", "index": True},
            "filename": {"type": "text"},
        },
        "user": {
            "id": {"type": "integer", "primary_key": True},
            "email": {"type": "text", "required": True, "unique": True},
            "name": {"type": "text"},
            "role": {
                "type": "text",
                "required": True,
                "index": True,
            },
        },
    },
}

Base = declarative_base()

typemap = {
    "uuid": UUID,
    "text": Text,
    "integer": Integer,
}
argumentmap = {
    "required": lambda value: ("nullable", not value),
}

for tablename, columns in data["groupings"].items():
    column_definitions = []
    for colname, parameters in columns.items():
        type_ = typemap[parameters.pop("type")]
        params = {}
        for name, value in parameters.items():
            try:
                name, value = argumentmap[name](value)
            except KeyError:
                pass
            finally:
                params[name] = value
        column_definitions.append(Column(colname, type_(), **params))
    # Create table in metadata
    table = Table(tablename, Base.metadata, *column_definitions)
    classname = convert_snake_to_camel(tablename)
    # Dynamically create a python class with definition
    # class classname:
    #     __table__ = table
    class_ = type(classname, (Base,), {"__table__": table})
    # Add the class to the module namespace
    globals()[class_.__name__] = class_
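A hypothetical usage sketch of the generated classes (it assumes the Postgres engine from your question; the row values are made up):

from sqlalchemy import create_engine
from sqlalchemy.orm import Session

engine = create_engine("postgresql://user:pass@localhost:5432/dev")
Base.metadata.create_all(engine)  # emits CREATE TABLE for "imaging" and "user"

with Session(engine) as session:
    # User was placed into globals() by the loop above
    session.add(User(email="a@example.com", name="Ann", role="admin"))
    session.commit()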
I am setting up a GraphQL server with Python using Starlette and Graphene and ran into a problem I cannot find a solution for. The Graphene documentation does not go into detail regarding the Union type, which I am trying to implement.
I set up a minimal example based on the Graphene documentation which you can run to reproduce the problem:
import os

import uvicorn
from graphene import ObjectType, Field, List, String, Int, Union
from graphene import Schema
from starlette.applications import Starlette
from starlette.graphql import GraphQLApp
from starlette.routing import Route

mock_data = {
    "episode": 3,
    "characters": [
        {
            "type": "Droid",
            "name": "R2-D2",
            "primaryFunction": "Astromech"
        },
        {
            "type": "Human",
            "name": "Luke Skywalker",
            "homePlanet": "Tatooine"
        },
        {
            "type": "Starship",
            "name": "Millennium Falcon",
            "length": 35
        }
    ]
}

class Human(ObjectType):
    name = String()
    homePlanet = String()

class Droid(ObjectType):
    name = String()
    primary_function = String()

class Starship(ObjectType):
    name = String()
    length = Int()

class Characters(Union):
    class Meta:
        types = (Human, Droid, Starship)

class SearchResult(ObjectType):
    characters = List(Characters)
    episode = Int()

class RootQuery(ObjectType):
    result = Field(SearchResult)

    @staticmethod
    def resolve_result(_, info):
        return mock_data

graphql_app = GraphQLApp(schema=Schema(query=RootQuery))

routes = [
    Route("/graphql", graphql_app),
]

api = Starlette(routes=routes)

if __name__ == "__main__":
    uvicorn.run(api, host="127.0.0.1", port=int(os.environ.get("PORT", 8080)))
If you then go to http://localhost:8080/graphql and enter the following query:
query Humans {
  result {
    episode
    characters {
      ... on Human {
        name
      }
    }
  }
}
I get this error:
{
    "data": {
        "result": {
            "episode": 3,
            "characters": null
        }
    },
    "errors": [
        {
            "message": "Abstract type Characters must resolve to an Object type at runtime for field SearchResult.characters with value \"[{'type': 'Droid', 'name': 'R2-D2', 'primaryFunction': 'Astromech'}, {'type': 'Human', 'name': 'Luke Skywalker', 'homePlanet': 'Tatooine'}, {'type': 'Starship', 'name': 'Millennium Falcon', 'length': 35}]\", received \"None\".",
            "locations": [
                {
                    "line": 4,
                    "column": 5
                }
            ]
        }
    ]
}
which I am now stuck with. Maybe someone has done this already and can help out? How can I resolve this at runtime? I have already tried different approaches; for example, I changed the Character and RootQuery classes:
class Character(Union):
    class Meta:
        types = (Human, Droid, Starship)

    def __init__(self, data, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.data = data
        self.type = data.get("type")

    def resolve_type(self, info):
        if self.type == "Human":
            return Human
        if self.type == "Droid":
            return Droid
        if self.type == "Starship":
            return Starship

class RootQuery(ObjectType):
    result = Field(SearchResult)

    @staticmethod
    def resolve_result(_, info):
        return {**mock_data, "characters": [Character(character) for character in mock_data.get('characters')]}
resulting in
{
    "data": {
        "result": {
            "episode": 3,
            "characters": [
                {},
                {
                    "name": null
                },
                {}
            ]
        }
    }
}
Any ideas would be greatly appreciated!
jkimbo answered the question here:
class Character(Union):
    class Meta:
        types = (Human, Droid, Starship)

    @classmethod
    def resolve_type(cls, instance, info):
        if instance["type"] == "Human":
            return Human
        if instance["type"] == "Droid":
            return Droid
        if instance["type"] == "Starship":
            return Starship

class RootQuery(ObjectType):
    result = Field(SearchResult)

    def resolve_result(_, info):
        return mock_data
Note I'm just returning mock_data and I've updated the resolve_type method to switch based on the data. The Union type uses the same resolve_type method as Interface to figure out what type to resolve to at runtime: https://docs.graphene-python.org/en/latest/types/interfaces/#resolving-data-objects-to-types
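As a side note (a sketch of my own, not part of jkimbo's answer): if the resolver returns Graphene ObjectType instances instead of plain dicts, the union members can usually be resolved without a custom resolve_type, since Graphene infers the member type from the instance's class:

class RootQuery(ObjectType):
    result = Field(SearchResult)

    def resolve_result(_, info):
        # returning instances lets Graphene pick the concrete type for the union
        return {
            "episode": 3,
            "characters": [
                Droid(name="R2-D2", primary_function="Astromech"),
                Human(name="Luke Skywalker", homePlanet="Tatooine"),
                Starship(name="Millennium Falcon", length=35),
            ],
        }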
Spoiler alert: I posted my solution as an answer to this question
I am using flask-restplus to create an API. I have to provide the data in a specific structure, which I am having problems getting; see the example below.
What I need to get is this structure:
{
    "metadata": {
        "files": []
    },
    "result": {
        "data": [
            {
                "user_id": 1,
                "user_name": "user_1",
                "user_role": "editor"
            },
            {
                "user_id": 2,
                "user_name": "user_2",
                "user_role": "editor"
            },
            {
                "user_id": 3,
                "user_name": "user_3",
                "user_role": "curator"
            }
        ]
    }
}
But the problem is that I cannot manage to get the structure of "result": {"data": []} without making "data" a model itself.
What I tried to do so far (which did not work):
# define metadata model
metadata_model = api.model('MetadataModel', {
    "files": fields.List(fields.String(required=False, description='')),
})

# define user model
user_model = api.model('UserModel', {
    "user_id": fields.Integer(required=True, description=''),
    "user_name": fields.String(required=True, description=''),
    "user_role": fields.String(required=False, description='')
})

# here is where I have the problems
user_list_response = api.model('ListUserResponse', {
    'metadata': fields.Nested(metadata_model),
    'result': {"data": fields.List(fields.Nested(user_model))}
})
It complains that it cannot get the "schema" from "data" (because it is not a defined model), but I don't want it to be a new API model; I just want to add a key called "data". Any suggestions?
This I tried and it works, but it is not what I want (because I am missing the "data" key):
user_list_response = api.model('ListUserResponse', {
    'metadata': fields.Nested(metadata_model),
    'result': fields.List(fields.Nested(user_model))
})
I don't want data to be a model because the common structure of the api is the following:
{
    "metadata": {
        "files": []
    },
    "result": {
        "data": [
            <list of objects>  # here must be listed the single model
        ]
    }
}
Then <list of objects> can be users, addresses, jobs, whatever, so I want to make a "general structure" into which I can just inject the particular models (UserModel, AddressModel, JobModel, etc.) without creating a special data model for each one.
A possible approach is to use fields.Raw, which returns whatever serializable object you pass. Then you can define a second function which creates your result and uses marshal. marshal transforms your data according to a model and accepts an additional parameter called envelope. envelope surrounds your modeled data with the given key and does the trick.
from flask import Flask
from flask_restplus import Api, fields, Resource, marshal

app = Flask(__name__)
api = Api()
api.init_app(app)

metadata_model = api.model("metadata", {
    'file': fields.String()
})

user_model = api.model('UserModel', {
    "user_id": fields.Integer(required=True, description=''),
    "user_name": fields.String(required=True, description=''),
    "user_role": fields.String(required=False, description='')
})

response_model = api.model("Result", {
    'metadata': fields.List(fields.Nested(metadata_model)),
    'result': fields.Raw()
})

@api.route("/test")
class ApiView(Resource):
    @api.marshal_with(response_model)
    def get(self):
        data = {'metadata': {},
                'result': self.get_user()}
        return data

    def get_user(self):
        # Access database and get data
        user_data = [{'user_id': 1, 'user_name': 'John', 'user_role': 'editor'},
                     {'user_id': 2, 'user_name': 'Sue', 'user_role': 'curator'}]
        # The kwarg envelope does the trick
        return marshal(user_data, user_model, envelope='data')

app.run(host='0.0.0.0', debug=True)
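For reference, a minimal, hypothetical sketch of what the envelope keyword does on its own (field names made up):

from flask_restplus import fields, marshal

item_fields = {"user_id": fields.Integer, "user_name": fields.String}
wrapped = marshal([{"user_id": 1, "user_name": "John"}], item_fields, envelope="data")
# wrapped == {"data": [{"user_id": 1, "user_name": "John"}]}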
My workaround solution that solves all my problems:
I created a new list field class (mainly copied from fields.List), and then I just tuned the output format and the schema in order to get 'data' as the key:
# imports assumed (flask-restplus); adjust to your project layout
from flask_restplus import fields, marshal
from flask_restplus.fields import MarshallingError

class ListData(fields.Raw):
    '''
    Field for marshalling lists of other fields.
    See :ref:`list-field` for more information.
    :param cls_or_instance: The field type the list will contain.

    This is a modified version of the fields.List class in order to get 'data' as the key envelope.
    '''
    def __init__(self, cls_or_instance, **kwargs):
        self.min_items = kwargs.pop('min_items', None)
        self.max_items = kwargs.pop('max_items', None)
        self.unique = kwargs.pop('unique', None)
        super(ListData, self).__init__(**kwargs)
        error_msg = 'The type of the list elements must be a subclass of fields.Raw'
        if isinstance(cls_or_instance, type):
            if not issubclass(cls_or_instance, fields.Raw):
                raise MarshallingError(error_msg)
            self.container = cls_or_instance()
        else:
            if not isinstance(cls_or_instance, fields.Raw):
                raise MarshallingError(error_msg)
            self.container = cls_or_instance

    def format(self, value):
        if isinstance(value, set):
            value = list(value)
        is_nested = isinstance(self.container, fields.Nested) or type(self.container) is fields.Raw

        def is_attr(val):
            return self.container.attribute and hasattr(val, self.container.attribute)

        # Put 'data' as key before the list, and return the dict
        return {'data': [
            self.container.output(idx,
                                  val if (isinstance(val, dict) or is_attr(val)) and not is_nested else value)
            for idx, val in enumerate(value)
        ]}

    def output(self, key, data, ordered=False, **kwargs):
        value = fields.get_value(key if self.attribute is None else self.attribute, data)
        if fields.is_indexable_but_not_string(value) and not isinstance(value, dict):
            return self.format(value)
        if value is None:
            return self._v('default')
        return [marshal(value, self.container.nested)]

    def schema(self):
        schema = super(ListData, self).schema()
        schema.update(minItems=self._v('min_items'),
                      maxItems=self._v('max_items'),
                      uniqueItems=self._v('unique'))
        # Work around to get the documentation as I want it
        schema['type'] = 'object'
        schema['properties'] = {}
        schema['properties']['data'] = {}
        schema['properties']['data']['type'] = 'array'
        schema['properties']['data']['items'] = self.container.__schema__
        return schema
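A hypothetical usage sketch, plugging ListData into the response model from the question (model names taken from there):

user_list_response = api.model('ListUserResponse', {
    'metadata': fields.Nested(metadata_model),
    'result': ListData(fields.Nested(user_model))
})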
I have an Oracle stored procedure that returns a CLOB variable with information in JSON format. I capture that variable in Python and return it from a web service. The stored procedure output looks like this:
{"role":"Proof_Rol","identification":"31056235002761","class":"Proof_Clase","country":"ARGENTINA","stateOrProvince":"Santa Fe","city":"Rosario","locality":"Rosario","streetName":"Brown","streetNr":"2761","x":"5438468,710153","y":"6356634,962204"}
But at the output of the Python service it is shown as follows:
{"Atributos": "{\"role\":\"Proof_Rol\",\"identification\":\"31056235002761\",\"class\":\"Proof_Clase\",\"country\":\"ARGENTINA\",\"stateOrProvince\":\"Santa Fe\",\"city\":\"Rosario\",\"locality\":\"Rosario\",\"streetName\":\"Brown\",\"streetNr\":\"2761\",\"x\":\"5438468,710153\",\"y\":\"6356634,962204\"}"}
Does anyone know how to prevent the escape characters from appearing in that string at the output of the service?
Part of my Python Code is:
api = Api(APP, version='1.0', title='attributes API',
          description='Attibute Microservice\n'
                      'Conection DB:' + db_str + '\n'
                      'Max try:' + limite)

ns = api.namespace('attributes', description='Show descriptions of an object')

md_respuesta = api.model('attributes', {
    'Attribute': fields.String(required=True, description='Attribute List')
})

class listAtriClass:
    Attribute = None

@ns.route('/<string:elementId>')
@ns.response(200, 'Success')
@ns.response(404, 'Not found')
@ns.response(429, 'Too many request')
@ns.param('elementId', 'Id Element (ej:31056235002761)')
class attibuteClass(Resource):
    @ns.doc('attributes')
    @ns.marshal_with(md_respuesta)
    def post(self, elementId):
        try:
            cur = database.db.cursor()
            listOutput = cur.var(cx_Oracle.CLOB)
            e, l = cur.callproc('attributes.get_attributes', (elementId, listOutput))
        except Exception as e:
            database.init()
            if database.db is not None:
                log.err('Reconection OK')
                cur = database.db.cursor()
                listOutput = cur.var(cx_Oracle.CLOB)
                e, l = cur.callproc('attributes.get_attributes', (elementId, listOutput))
                print(listOutput)
            else:
                log.err('Conection Fails')
                listOutput = None

        result = listAtriClass()
        result.Attribute = listOutput.getvalue()
        print(result.Attribute)
        return result, 200
Attribute is defined to render as fields.String, but it should actually be defined to render as fields.Nested.
attribute_fields = {
    "role": fields.String,
    "identification": fields.String,
    "class": fields.String,
    # ...you get the idea.
}

md_respuesta = api.model('attributes', {
    'Attribute': fields.Nested(attribute_fields)
})
Update for flask-restplus
In flask-restplus, a nested field must also be a registered model.
attribute_fields = api.model('fields', {
    "role": fields.String,
    "identification": fields.String,
    "class": fields.String,
    # ...you get the idea.
})
Another way is to inline attribute_fields instead of registering a separate model for it.
md_respuesta = api.model('attributes', {
    'Attribute': {
        'role': fields.String,
        'identification': fields.String,
        'class': fields.String,
        # ...you get the idea.
    }
})
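One detail worth adding (my assumption based on the question's code, not part of the original answer): listOutput.getvalue() returns the CLOB as a plain string, so it presumably needs to be parsed into a dict before marshalling, otherwise the nested field has nothing to render and the JSON text ends up escaped inside a string:

import json

result = listAtriClass()
result.Attribute = json.loads(listOutput.getvalue())  # parse the CLOB's JSON text into a dict
return result, 200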
So I have this model in Flask-RESTPlus:
NS = Namespace('parent')

PARENT_MODEL = NS.model('parent', {
    'parent-id': fields.String(readOnly=True),
    'parent-name': fields.String(required=True)
})

CHILD_MODEL = NS.inherit('child', SUBSCRIPTION_MODEL, {
    'child-id': fields.String(required=True, readOnly=True),
    'child-name': fields.String(required=True),
    'child-some-property': fields.String(required=True)
})

CHILD_PROPERTY_MODEL = NS.inherit('child-other-property', RESOURCE_GROUP_MODEL, {
    'child-other-property': fields.Raw(required=False)
})
It doesn't work as expected; I get this output (and a similar structure in the Swagger docs):
[
    {
        "parent-id": "string",
        "parent-name": "string",
        "child-id": "string",
        "child-name": "string",
        "child-some-property": "string",
        "child-other-property": {}
    }
]
instead of something like this:
[
    {
        "parent-id": "string",
        "parent-name": "string", {
            "child-id": "string",
            "child-name": "string",
            "child-some-property": "string", {
                "child-other-property": {}
            }
        }
    }
]
I'm probably missing something simple, but can't understand what. This is what I'm consulting to figure out Models in Flask Restplus.
NS = Namespace('sample')

child_model = NS.model('child', {
    'childid': fields.String(required=True, readOnly=True),
    'childname': fields.String(required=True),
    'data': fields.String(required=True),
    'complexdata': fields.Raw(required=False)
})

parent_model = NS.model('parent', {
    'id': fields.String(readOnly=True),
    'name': fields.String(required=True),
    'childdata': fields.List(
        fields.Nested(child_model, required=True)
    )
})
This is what works for me. It appears that the Flask-RESTPlus GitHub is dead; no answer from the maintainers. This might help someone.
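For completeness, a hypothetical resource using this parent_model (the route and payload are made up):

@NS.route('/parent/<string:parent_id>')
class ParentResource(Resource):
    @NS.marshal_with(parent_model)
    def get(self, parent_id):
        # childdata is marshalled as a list of nested child_model objects
        return {
            'id': parent_id,
            'name': 'example parent',
            'childdata': [
                {'childid': '1', 'childname': 'child one', 'data': 'foo', 'complexdata': {'k': 'v'}}
            ]
        }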
This is how I declared the nested fields in a serializer.py file:
from flask_restplus import fields
from api.restplus import api

child2 = api.model('child2', {
    'child2name': fields.Url(description='child2 name'),
})

child1 = api.model('child1', {
    'child2': fields.Nested(child2)
})

parent = {
    'name': fields.String(description='name'),
    'location': fields.String(description='location details'),
}
parent["child1"] = fields.Nested(child1)

resource_resp = api.model('Response details', parent)
Usage in view.py; I am marshalling/generating the JSON with @api.marshal_with(resource_resp):
from flask import request, jsonify
from flask_restplus import Resource
from serializers import *

ns = api.namespace('apiName', description='API Description')

@ns.route('/route/<some_id>')
class ResourceClient(Resource):
    @ns.response(401, "Unauthorized")
    @ns.response(500, "Internal Server Error")
    @api.doc(params={'some_id': 'An ID'})
    @api.marshal_with(resource_resp)
    def get(self, some_id):
        """
        Do GET
        """
        # Logic
        return {"status": "success"}