How to load in arguments using marshmallow and flask_restful? - python

I have a flask_restful API set up but want users to pass numpy arrays when making a post call. I first tried using reqparse from flask_restful but it doesn't support data types of numpy.array. I am trying to use marshmallow (flask_marshmallow) but am having trouble understanding how the arguments get passed to the API.
Before I was able to do something like:
from flask_restful import reqparse
parser = reqparse.RequestParser()
parser.add_argument('url', type=str)
parser.add_argument('id', type=str, required=True)
But now I am not sure how to translate that into marshmallow. I have this set up:
from flask import Flask
from flask_marshmallow import Marshmallow
from marshmallow import fields

app = Flask(__name__)
ma = Marshmallow(app)

class MySchema(ma.Schema):
    id = fields.String(required=True)
    url = fields.String()
But how can I load in what a user passes in when making a call by using my newly defined schema?

I don't know flask-marshmallow and the docs only show how to use it to serialize responses.
I guess that to deserialize requests, you need to use webargs, developed and maintained by the marshmallow team.
It has marshmallow integration:
from marshmallow import Schema, fields
from webargs.flaskparser import use_kwargs

class UserSchema(Schema):
    id = fields.Int(dump_only=True)  # read-only (won't be parsed by webargs)
    username = fields.Str(required=True)
    password = fields.Str(load_only=True)  # write-only
    first_name = fields.Str(missing="")
    last_name = fields.Str(missing="")
    date_registered = fields.DateTime(dump_only=True)

@use_kwargs(UserSchema())
def profile_update(username, password, first_name, last_name):
    update_profile(username, password, first_name, last_name)
    # ...

Related

Is there a way to add mySQL database/schema name in flask sqlalchemy connection

I have a Flask-RESTful app connected to a MySQL database and I am using SQLAlchemy. We can connect to the MySQL server using the following:
app.config['SQLALCHEMY_DATABASE_URI'] = f"mysql+pymysql://root:password@127.0.0.1:3306"
I am working on a use case where the database name will be provided on a real-time basis through a GET request. Based on the database name provided, the app will connect to the respective database and perform the operations. For this purpose, I would like a way to tell the Flask app to talk to the provided database (the Flask app is already connected to the MySQL server). Currently, I am creating the connection again in the API class.
API: Calculate.py
from flask import session
from flask_restful import Resource, reqparse
from app import app

class Calculate(Resource):
    def get(self):
        parser = reqparse.RequestParser()
        parser.add_argument('schema', type=str, required=True, help='Provide schema name.')
        args = parser.parse_args()
        session['schema_name'] = args.get('schema')
        app.config['SQLALCHEMY_DATABASE_URI'] = f"mysql+pymysql://root:password@127.0.0.1:3306/{session['schema_name']}"
        from db_models.User import User
        ...
DB Model: User.py
from flask import session
from flask_sqlalchemy import SQLAlchemy
from app import app

db = SQLAlchemy(app)

class User(db.Model):
    __tablename__ = 'user'
    __table_args__ = {"schema": session['schema_name']}
    User_ID = db.Column(db.Integer, primary_key=True)
    Name = db.Column(db.String(50))

db.create_all()
The above thing works for me. But I would want to understand if there is an alternative to this or a better way of doing this.
Edit: The above code does not work. It references the first schema name that was provided even if I provide a new schema name in the same running instance of the app.
You can write the SQLAlchemy path like this:
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://root:password@localhost:3306/database name'
According to the docs, not all values can be updated after startup (see the first paragraph). For your use case you should use the SQLALCHEMY_BINDS variable, which is a dict, and create a model for each schema. Example:
Db Model
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = f"mysql+pymysql://root:password@127.0.0.1:3306/schema_name1"
app.config['SQLALCHEMY_BINDS'] = {
    'db1': app.config['SQLALCHEMY_DATABASE_URI'],  # default
    'db2': f"mysql+pymysql://root:password@127.0.0.1:3306/schema_name2"
}
db = SQLAlchemy(app)
then create a model for each schema
class UserModeldb1(db.Model):
    __tablename__ = 'user'
    __bind_key__ = 'db1'  # this parameter is set according to the database
    id = db.Column(db.Integer, primary_key=True)
    ...

class UserModeldb2(db.Model):
    __tablename__ = 'user'
    __bind_key__ = 'db2'
    id = db.Column(db.Integer, primary_key=True)
    ...
Finally, in your get method, add some logic to capture the schema name and use the matching model accordingly. You should also look at this question, it is really helpful: Configuring Flask-SQLAlchemy to use multiple databases with Flask-Restless
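The "some logic to capture the schema" step can be sketched as a plain lookup table keyed by bind name. The classes below are stand-ins for the two db.Model subclasses above, so the sketch is self-contained:

```python
# Stand-ins for the bound models; in the real app these are the
# db.Model subclasses with __bind_key__ = 'db1' / 'db2'.
class UserModeldb1:
    pass

class UserModeldb2:
    pass

MODELS_BY_BIND = {'db1': UserModeldb1, 'db2': UserModeldb2}

def model_for_schema(schema_name):
    """Pick the model class matching the schema name sent in the request."""
    try:
        return MODELS_BY_BIND[schema_name]
    except KeyError:
        raise ValueError(f"unknown schema: {schema_name!r}")
```

In the resource's get method you would call model_for_schema(args['schema']) and run your queries against the returned class; unknown names fail fast instead of silently querying the wrong database.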

Getting Error: ValueError: not enough values to unpack (expected 2, got 0) when building REST API's with Flask

Receiving this error when executing a simple GET request via Postman. The following is the API call; it is failing on author_schema.dump(fetched).
I am following the examples in Building REST APIs with Flask - the book is from 2019, so I assume I am encountering a possible issue with dependencies.
The following are the versions that I currently have:
flask_marshmallow -> 0.14.0
flask_sqlalchemy -> 2.5.1
flask -> 2.0.1
marshmallow_sqlalchemy -> 0.26.1
marshmallow -> 3.13.0
I have condensed the code into a single file to share:
from flask import Flask, request, jsonify, make_response
from flask_marshmallow.sqla import SQLAlchemyAutoSchema
from flask_sqlalchemy import SQLAlchemy
from marshmallow import Schema, fields, ValidationError

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///author_book_publisher.db"
db = SQLAlchemy(app)

class Author(db.Model):
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    first_name = db.Column(db.String(20))
    last_name = db.Column(db.String(20))
    created = db.Column(db.DateTime, server_default=db.func.now())

class AuthorSchema(SQLAlchemyAutoSchema):
    class Meta(SQLAlchemyAutoSchema.Meta):
        model = Author
    id = fields.Int(dump_only=True)
    first_name = fields.Str(required=True)
    last_name = fields.Str(required=True)
    created = fields.Str(dump_only=True)

@app.route('/api/authors', methods=['GET'])
def get_author_list():
    fetched = Author.query.all()
    author_schema = AuthorSchema(many=True, only=('first_name', 'last_name', 'id'))
    authors, error = author_schema.dump(fetched)  # fails here
    return {"authors": authors}
    # return response_with(resp.SUCCESS_200, value={"authors": authors})
It seems that dump() was changed to return a single value and raise an exception on errors instead of returning a 2-tuple.
from marshmallow.exceptions import ValidationError
# ...

try:
    authors = author_schema.dump(fetched)
except ValidationError as error:
    # handle error
    pass

return {"authors": authors}
Changed in version 3.0.0b7: This method returns the serialized data
rather than a (data, errors) duple. A ValidationError is raised if obj
is invalid.
https://marshmallow.readthedocs.io/en/latest/api_reference.html#marshmallow.Schema.dump

How to import a Model from Controller in Flask api

I'm kind of new to API creation and I'm trying to make one in Flask from scratch. I have an issue making the model. Here is the code.
main.py :
from flask import Flask
from flask_restful import Api, Resource, reqparse  # reqparse may be unnecessary (?)
from controllers.attribute_controller import Attribute
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
api = Api(app)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///database.db'
db = SQLAlchemy(app)

Attribute()
api.add_resource(Attribute, "/attribute/<int:attribute_id>")

if __name__ == "__main__":
    app.run(debug=True)
attribute_controller.py
from flask_restful import Api, Resource, reqparse
from models.attribute_model import AttibuteModel

attribute_put_args = reqparse.RequestParser()
attribute_put_args.add_argument("name", type=str, help="Name is required", required=True)

attributes = {}

class Attribute(Resource):
    def get(self, attribute_id):
        return attributes[attribute_id]

    def put(self, attribute_id):
        args = attribute_put_args.parse_args()
        attributes[attribute_id] = args
        return attributes[attribute_id], 201
attribute_model.py
from main import db

class AttibuteModel(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(100), nullable=False)

    def __repr__(self):
        return f"Attribute(name={self.name})"
test.py
import requests
BASE = "http://127.0.0.1:5000/"
response = requests.put(BASE + "attribute/1", {"name": "red"})
print(response.json())
I got this error:
I know why I got the error, but I don't know any other way to access the model in my controllers.
I need the attribute_model in my attribute_controller to change it, but I don't know how to solve the error. I've tried to follow the instructions here:
How to avoid circular imports in a Flask app with Flask SQLAlchemy models?
But I didn't understand them at all, so I don't know how to continue. Thanks for your time.
Your problem is a circular import.
In attribute_controller.py you're importing AttibuteModel (missing an 'r' there by the way).
In attribute_model.py you're importing db from main.
In main.py you're importing Attribute from attribute_controller.py (which imports AttibuteModel which imports db) on line 3, before db has been created on line 11.
Move the import statement to after db initialisation:
db = SQLAlchemy(app)
from controllers.attribute_controller import Attribute
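An alternative that removes the cycle entirely, sketched under the assumption that you can add one new module: give db its own extensions.py and bind it to the app with init_app, so neither main nor the models import each other:

```python
# extensions.py -- owns the extension object, imports nothing from the app
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()

# main.py
from flask import Flask
from extensions import db

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///database.db'
db.init_app(app)  # bind the extension to this app instance

# models/attribute_model.py
from extensions import db  # no import of main, so no cycle
```

This layout is only a sketch of the common extensions-module pattern; the import-after-initialisation fix above works too, it is just more fragile if the module grows.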

Simple request parsing without reqparse.RequestParser()

flask_restful.reqparse has been deprecated (https://flask-restful.readthedocs.io/en/latest/reqparse.html):
The whole request parser part of Flask-RESTful is slated for removal and will be replaced by documentation on how to integrate with other packages that do the input/output stuff better (such as marshmallow). This means that it will be maintained until 2.0 but consider it deprecated. Don’t worry, if you have code using that now and wish to continue doing so, it’s not going to go away any time too soon.
I've looked briefly at Marshmallow and still a bit confused about how to use it if I wanted to replace reqparse.RequestParser(). What would we write instead of something like the following:
from flask import Flask, request, Response
from flask_restful import reqparse

@app.route('/', methods=['GET'])
def my_api() -> Response:
    parser = reqparse.RequestParser()
    parser.add_argument('username', type=str, required=True)
    args = parser.parse_args()
    return {'message': 'cool'}, 200
(after half an hour of reading some more documentation…)
RequestParser looks at the MultiDict request.values by default (apparently query parameters, then form body parameters according to https://stackoverflow.com/a/16664376/5139284). So then we just need to validate the data in request.values somehow.
Here's a snippet of some relevant code from Marshmallow. It seems a good deal more involved than reqparse: first you create a schema class, then instantiate it, then have it load the request JSON. I'd rather not have to write a separate class for each API endpoint. Is there something more lightweight similar to reqparse, where you can write all the types of the argument validation information within the function defining your endpoint?
from flask import Flask, request, Response
from flask_restful import reqparse
from marshmallow import (
    Schema,
    fields,
    validate,
    pre_load,
    post_dump,
    post_load,
    ValidationError,
)

class UserSchema(Schema):
    id = fields.Int(dump_only=True)
    email = fields.Str(
        required=True, validate=validate.Email(error="Not a valid email address")
    )
    password = fields.Str(
        required=True, validate=[validate.Length(min=6, max=36)], load_only=True
    )
    joined_on = fields.DateTime(dump_only=True)

user_schema = UserSchema()

@app.route("/register", methods=["POST"])
def register():
    json_input = request.get_json()
    try:
        data = user_schema.load(json_input)
    except ValidationError as err:
        return {"errors": err.messages}, 422
    # etc.
If your endpoints share any commonalities in schema, you can use fields.Nested() to nest definitions within each Marshmallow class, which may save on code writing for each endpoint. Docs are here.
For example, for operations that update a resource called 'User', you would likely need a standardised subset of user information to conduct the operation, such as user_id, user_login_status, user_authorisation_level etc. These can be created once and nested in new classes for more specific user operations, for example updating a user's account:
class UserData(Schema):
    user_id = fields.Int(required=True)
    user_login_status = fields.Boolean(required=True)
    user_authentication_level = fields.Int(required=True)
    # etc ....

class UserAccountUpdate(Schema):
    created_date = fields.DateTime(required=True)
    user_data = fields.Nested(UserData)
    # account update fields...

Graphene resolver for an object that has no model

I'm trying to write a resolver that returns an object created by a function. It gets the data from memcached, so there is no actual model I can tie it to.
I think my main issue is I can't figure out what type to use and how to set it up. I'm using this in conjunction with Django, but I don't think it's a django issue (afaict). Here's my code so far:
class TextLogErrorGraph(DjangoObjectType):
    def bug_suggestions_resolver(root, args, context, info):
        from treeherder.model import error_summary
        return error_summary.bug_suggestions_line(root)

    bug_suggestions = graphene.Field(TypeForAnObjectHere, resolver=bug_suggestions_resolver)
Notice I don't know what type or field to use. Can someone help me? :)
GraphQL is designed to be backend agnostic, and Graphene is built to support various Python backends like Django and SQLAlchemy. To integrate your custom backend, simply define your types using Graphene's type system and roll your own resolvers.
import graphene
import time

class TextLogEntry(graphene.ObjectType):
    log_id = graphene.Int()
    text = graphene.String()
    timestamp = graphene.Float()
    level = graphene.String()

def textlog_resolver(root, args, context, info):
    log_id = args.get('log_id')  # 123
    # fetch object...
    return TextLogEntry(
        log_id=log_id,
        text='Hello World',
        timestamp=time.time(),
        level='debug'
    )

class Query(graphene.ObjectType):
    textlog_entry = graphene.Field(
        TextLogEntry,
        log_id=graphene.Argument(graphene.Int, required=True),
        resolver=textlog_resolver
    )

schema = graphene.Schema(
    query=Query
)
