I am new to the Falcon framework for Python, and I have a question about the usage of Falcon's middleware classes. Is it wise to handle custom routing and request authentication in middleware, or should this be done only at the routing layer?
**main.py**
import falcon
import falcon_jsonify
import root
from waitress import serve

if __name__ == "__main__":
    app = falcon.API(
        middleware=[falcon_jsonify.Middleware(help_messages=True),
                    root.customRequestParser()]
    )
    serve(app, host="0.0.0.0", port=5555)
**root.py** (where I plan to write the custom routes)
import json
import falcon


class Home(object):
    @classmethod
    def getResponse(cls):
        return {"someValue": "someOtherValue"}


def process_request_path(path):
    path = path.lstrip("/").split("/")
    return path


class customRequestParser(object):
    def process_request(self, req, resp):
        print(process_request_path(req.path))
I have also seen examples using app = falcon.API(router=CustomRouter()), and there is a page on routing in the official Falcon documentation: http://falcon.readthedocs.io/en/stable/api/routing.html
Please let me know if there are any references that I can look through.
To quote the Falcon Community FAQ:

> **How do I authenticate requests?**
>
> Hooks and middleware components can be used together to authenticate and authorize requests. For example, a middleware component could be used to parse incoming credentials and place the results in req.context. Downstream components or hooks could then use this information to authorize the request, taking into account the user's role and the requested resource.
Falcon's hooks are decorators applied either to a particular responder method (e.g. on_get) or to an entire resource class. They're great for validating incoming requests, so, as the FAQ says, authentication could be done at this point.
Here's an (untested) example I knocked up:
import falcon
import falcon_jsonify


class AuthParsingMiddleware(object):
    def process_request(self, req, resp):
        req.context['GodMode'] = req.headers.get('Auth-Token') == 'GodToken'
        # Might also need process_resource & process_response


def validate_god_mode(req, resp, resource, params):
    if not req.context['GodMode']:
        raise falcon.HTTPBadRequest('Not authorized', 'You are not god')


class GodLikeResource(object):
    @falcon.before(validate_god_mode)
    def on_get(self, req, resp):
        resp.body = 'You have god mode; I prostrate myself'


app = falcon.API(
    middleware=[falcon_jsonify.Middleware(help_messages=True),
                AuthParsingMiddleware()]
)
app.add_route('/godlikeresource', GodLikeResource())
Or better...
There is a falcon-auth package.
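For example, a minimal sketch using falcon-auth's token backend (the token value and user dict here are placeholders, and the exact API may differ between versions of the package, so check its docs):

import falcon
from falcon_auth import FalconAuthMiddleware, TokenAuthBackend

# maps a token to a user object; returning None rejects the request
def user_loader(token):
    return {'username': 'god'} if token == 'GodToken' else None

auth_backend = TokenAuthBackend(user_loader)
auth_middleware = FalconAuthMiddleware(auth_backend,
                                       exempt_routes=['/health'])
app = falcon.API(middleware=[auth_middleware])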
Related
I'm currently serving up files using a static route like so:
application.add_static_route('/artifacts/', '/artifacts/')
How can I add a function that is called before every GET to this route and any route below it? I'd like to send some data to our matomo (analytics) server when any user tries to grab an artifact from that route.
You can add middleware to process every request before routing. The drawback is that this would apply to all incoming requests, so you might need to recheck req.path first:
class AnalyticsMiddleware:
    def process_request(self, req, resp):
        if req.path.startswith('/artifacts/'):
            print(f'Do something with {req.uri}...')

application = falcon.App(middleware=[AnalyticsMiddleware(), ...])
Alternatively, you could subclass StaticRoute and add it as a sink:
import falcon
import falcon.routing.static


class MyStaticRoute(falcon.routing.static.StaticRoute):
    def __call__(self, req, resp):
        print(f'Do something with {req.uri}...')
        super().__call__(req, resp)

# ...

static = MyStaticRoute('/artifacts/', '/artifacts/')
application.add_sink(static, '/artifacts/')
However, the latter approach is not officially documented, so it could theoretically break in a future release without notice. Use this only if the middleware approach doesn't cut it for your use case for some reason.
Suppose I have a FastAPI project that contains 100+ API endpoints. How can I list all APIs/paths?
To get all possible URL patterns, we need access to the defined URL routes, which are an attribute of the running app instance.
We can do that in at least two ways:

1. Using the FastAPI app instance: handy when you have access to the FastAPI instance.
2. Using the Request instance: handy when you have access to the incoming requests, but not to the FastAPI instance.
Complete Example
from fastapi import FastAPI, Request

app = FastAPI()


@app.get(path="/", name="API Foo")
def foo():
    return {"message": "this is API Foo"}


@app.post(path="/bar", name="API Bar")
def bar():
    return {"message": "this is API Bar"}


# Using the FastAPI instance
@app.get("/url-list")
def get_all_urls():
    url_list = [{"path": route.path, "name": route.name} for route in app.routes]
    return url_list


# Using the Request instance
@app.get("/url-list-from-request")
def get_all_urls_from_request(request: Request):
    url_list = [
        {"path": route.path, "name": route.name} for route in request.app.routes
    ]
    return url_list
I tried to submit an edit to the original answer, but it wouldn't let me.
Another use case: suppose you are not in the main app file and don't have access to app in the namespace. In that case, the Starlette documentation says we also have access to the app instance from the request, as request.app. This is useful, for example, when the main file only holds the app instance and all the endpoints live in separate routers.
**main.py**
from fastapi import FastAPI
# then let's import all the various routers we have
# please note that api is the name of our package
from api.routers import router_1, router_2, router_3, utils
app = FastAPI()
app.include_router(router_1)
app.include_router(router_2)
app.include_router(router_3)
app.include_router(utils)
I have my list_endpoints endpoint in the utils router. To be able to list all of the app routes, I would do the following:
**utils.py**
from fastapi import APIRouter, Request

router = APIRouter(
    prefix="/utils",
    tags=["utilities"]
)


@router.get('/list_endpoints/')
def list_endpoints(request: Request):
    url_list = [
        {'path': route.path, 'name': route.name}
        for route in request.app.routes
    ]
    return url_list
Note that rather than using app.routes, I used request.app.routes, and I have access to all of them. If you now access /utils/list_endpoints, you will get all your routes.
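For illustration, the JSON returned would look something like the following (the exact entries depend on your routers; note that FastAPI also includes its built-in routes, such as /openapi.json and /docs, in app.routes):

[
    {"path": "/openapi.json", "name": "openapi"},
    {"path": "/docs", "name": "swagger_ui_html"},
    {"path": "/utils/list_endpoints/", "name": "list_endpoints"}
]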
The accepted answer works great when you have just one app, but unfortunately, in our project we have a number of submounted ones, which makes things a bit trickier in terms of traversal.
app.mount("/admin", admin_app)
...
We also have a lot of routes and their unqualified names can be the same even within one app, let alone different ones. So I wanted to get an overview of all the routes and their matching functions.
That's how I approached it, hope it will be helpful to others. :)
Great framework, but I really miss django-extensions, which had that covered; too bad there's nothing of the sort in FastAPI land. Please correct me if I'm wrong!
from __future__ import annotations

from typing import Iterable

from app.main import app
from fastapi import FastAPI
from starlette.routing import Mount


def gen_routes(app: FastAPI | Mount) -> Iterable[tuple[str, str]]:
    for route in app.routes:
        if isinstance(route, Mount):
            yield from (
                (f"{route.path}{path}", name) for path, name in gen_routes(route)
            )
        else:
            yield (
                route.path,
                "{}.{}".format(route.endpoint.__module__, route.endpoint.__qualname__),
            )


def list_routes(app: FastAPI) -> None:
    import tabulate

    routes = sorted(set(gen_routes(app)))  # also readable enough
    print(tabulate.tabulate(routes, headers=["path", "full name"]))


if __name__ == "__main__":
    list_routes(app)
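Run directly, this prints a table along these lines (the paths and qualified names here are made up for illustration):

path                 full name
-------------------  --------------------------
/admin/login         app.admin.views.login
/users/{user_id}     app.routers.users.get_user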
I have an issue trying to setup CORS for my REST API.
I'm currently using the Flask-RESTPlus package. Here's what my endpoints look like:
@api_ns.route('/some/endpoint')
@api_ns.response(code=400, description='Bad Request.')
class AEndpointResource(Resource):
    @api_ns.param(**api_req_fields.POST_DOC)
    @api_ns.expect(POST_REQUIRED_BODY)
    @api_ns.marshal_with(code=201,
                         fields=my_api_models.MyEndpointResponse.get_serializer(),
                         description=my_api_models.MyEndpointResponse.description)
    def post(self) -> Tuple[my_api_models.MyEndpointResponse, int]:
        """
        The post body
        """
        # Some logic here
        return response, 200
If I write a small JavaScript snippet and launch it in a browser, I get an error because there are no CORS headers. I can see that Flask-RESTPlus is already handling the OPTIONS request without me telling it anything. (This makes sense according to this link, which mentions that since Flask 0.6, OPTIONS requests are handled automatically.)
My problem is that even when I try to decorate my endpoint using :
from flask_restplus import cors  # <--- Adding this import
...

class AnEndpointResource(Resource):
    ...
    @my_other_decorators
    ...
    @cors.crossdomain(origin='*')  # <--- Adding this new decorator on my endpoint
    def post(self) -> Tuple[my_api_models.MyEndpointResponse, int]:
        ...
Nothing changes, and I still get the same result as before: an HTTP 200 from the automatically handled OPTIONS request, but I don't see my new headers (i.e. Access-Control-Allow-Origin) in the response.
Am I missing something?
Using Flask-CORS, it works:
from flask import Flask, request
from flask_restplus import Resource, Api, fields
from flask_cors import CORS
# configuration
DEBUG = True
# instantiate the app
app = Flask(__name__)
api = Api(app)
app.config.from_object(__name__)
# enable CORS
CORS(app, resources={r'/*': {'origins': '*'}})
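To verify that the headers are actually being set, here is a quick check with requests (the URL and port are placeholders for your local server; note that flask-cors only emits the CORS headers when the request carries an Origin header):

import requests

resp = requests.options(
    'http://localhost:5000/some/endpoint',
    headers={'Origin': 'http://example.com',
             'Access-Control-Request-Method': 'POST'})
print(resp.headers.get('Access-Control-Allow-Origin'))  # expect '*'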
When doing local testing on my laptop and in my browser, I was able to solve the CORS problem by adding the header to the response.

Before: return state, 200
After: return state, 200, {'Access-Control-Allow-Origin': '*'}
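In context, that looks like the following minimal sketch (the resource and payload names are made up); Flask accepts a (body, status, headers) tuple as a return value, and the third element becomes extra response headers:

class State(Resource):
    def get(self):
        state = {'status': 'ok'}
        # the headers dict is merged into the response headers
        return state, 200, {'Access-Control-Allow-Origin': '*'}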
I tested it, and the headers you are looking for are added to the response to the subsequent GET request. The decorator @cors.crossdomain has an option automatic_options, which is set to True by default. This means your OPTIONS request will still be handled automatically.
See this test to check how it should work.
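For illustration, a hedged sketch of what that looks like on a resource, with automatic_options spelled out explicitly (the keyword is taken from the underlying crossdomain decorator and may vary between versions):

from flask_restplus import Resource, cors

class MyResource(Resource):
    # OPTIONS preflights are still answered automatically;
    # the CORS headers are attached to the actual GET response
    @cors.crossdomain(origin='*', automatic_options=True)
    def get(self):
        return {'hello': 'world'}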
The flask_restplus.cors module is not documented, so I am not sure whether you should use it.
I had a CORS problem as well and solved it this way:
from flask import Flask
from flask_restplus import Api

app = Flask('name')
api = Api(app)

# your api code here

@app.after_request
def after_request(response):
    response.headers.add('Access-Control-Allow-Origin', '*')
    return response
The problem is that flask-restplus only assigns CORS headers to the GET request. When the browser makes an OPTIONS request, it isn't handled by a method on the Resource class, so you get a flickering CORS header: one request it works, the next it doesn't, and so on.
When a 404 is raised, the CORS decorator is also skipped.
A workaround for these bugs is using something like:
from functools import wraps


def exception_to_response(func):
    @wraps(func)
    def _exc_to_resp_decorator(self, *args, **kwargs):
        try:
            return func(self, *args, **kwargs)
        except Exception as e:
            return self.api.handle_error(e)
    return _exc_to_resp_decorator


@api.route("/your-endpoint")
class SomeEndpoint(Resource):
    @cors.crossdomain(origin='*')
    def options(self):
        # Make sure the OPTIONS request is also handled for CORS.
        # "automatic_options" will handle this; no need to define a return here.
        return

    @api.expect(my_schema)
    @api.response(200, "Success")
    @cors.crossdomain(origin='*')
    @exception_to_response
    def get(self):
        return {"json-fields": 123}, 200
I have a simple Twisted-Klein server, with HTTP Basic Auth enabled globally:
from klein import Klein
import attr
from zope.interface import implementer
from twisted.cred.portal import IRealm
from twisted.internet.defer import succeed
from twisted.cred.portal import Portal
from twisted.cred.checkers import FilePasswordDB
from twisted.web.resource import IResource
from twisted.web.guard import HTTPAuthSessionWrapper, BasicCredentialFactory
from werkzeug.datastructures import MultiDict
from bson import json_util
import json

app = Klein()


# health check
@app.route('/health', methods=['GET'])
def health_check(request):
    return ''


# dataset query API
@app.route('/query/<path:expression>', methods=['GET'])
def query(request, expression):
    response = evaluate_expression(expression)
    return response


@implementer(IRealm)
@attr.s
class HTTPAuthRealm(object):
    resource = attr.ib()

    def requestAvatar(self, avatarId, mind, *interfaces):
        return succeed((IResource, self.resource, lambda: None))


def resource():
    realm = HTTPAuthRealm(resource=app.resource())
    portal = Portal(realm, [FilePasswordDB('./configs/server-auth.db')])
    credential_factory = BasicCredentialFactory('Authentication required')
    return HTTPAuthSessionWrapper(portal, [credential_factory])
I want to disable auth for only specific API endpoints, for example the /health endpoint in this case. I've read the docs, but I just can't wrap my mind around it.
One way is to only wrap the part of the hierarchy that you want authentication for:
from twisted.web.resource import Resource


class Health(Resource):
    # ...


def resource():
    realm = HTTPAuthRealm(resource=app.resource())
    portal = Portal(realm, [FilePasswordDB('./configs/server-auth.db')])
    credential_factory = BasicCredentialFactory('Authentication required')
    guarded = HTTPAuthSessionWrapper(portal, [credential_factory])

    root = Resource()
    root.putChild(b"health", Health())
    root.putChild(b"this-stuff-requires-auth", guarded)
    return root
The normal resource traversal logic used for dispatching requests will start at root. If the request is for /health (or any child) then it goes to root's health child - which is the Health instance created in this example. Note how the HTTPAuthSessionWrapper doesn't get involved there. If the request is for /this-stuff-requires-auth (or any child) then traversal does go through the auth wrapper and so authentication is required.
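For completeness, here is a minimal sketch of serving that root resource with twisted.web directly (instead of Klein's app.run), assuming the resource() function defined above:

from twisted.internet import reactor
from twisted.web.server import Site

# Site wraps the root IResource in an HTTP factory; traversal starts at root
reactor.listenTCP(8080, Site(resource()))
reactor.run()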
Another approach is to vary your avatar based on the credentials. In this scheme, you actually still authenticate everyone but you authorize anonymous users to access some of the hierarchy.
from twisted.cred.checkers import ANONYMOUS


@implementer(IRealm)
@attr.s
class HTTPAuthRealm(object):
    def requestAvatar(self, avatarId, mind, *interfaces):
        avatar = Resource()
        avatar.putChild(b"health", Health())
        if avatarId is not ANONYMOUS:
            avatar.putChild(b"this-stuff-requires-auth", SecretResource())
        return succeed((IResource, avatar, lambda: None))
You'll also need to configure your portal with a credentials checker for anonymous credentials:
from twisted.cred.checkers import AllowAnonymousAccess

portal = Portal(
    realm, [
        FilePasswordDB('./configs/server-auth.db'),
        AllowAnonymousAccess(),
    ],
)
In this approach, HTTPAuthSessionWrapper is again your root resource.
Anonymous requests are associated with the ANONYMOUS avatar identifier and HTTPAuthRealm gives back an IResource which only knows about the resources that should be available to anonymous users.
Requests with valid user credentials are associated with a different avatar identifier (usually their username) and HTTPAuthRealm gives back an IResource with more children attached to it, granting more access.
I need some help setting up unit tests for Google Cloud Endpoints. Using WebTest, all requests fail with AppError: Bad response: 404 Not Found. I'm not really sure whether endpoints is compatible with WebTest.
This is how the application is generated:
application = endpoints.api_server([TestEndpoint], restricted=False)
Then I use WebTest this way:
client = webtest.TestApp(application)
client.post('/_ah/api/test/v1/test', params)
Testing with curl works fine.
Should I write tests for endpoints differently? What is the suggestion from the GAE Endpoints team?
After much experimenting and looking at the SDK code I've come up with two ways to test endpoints within python:
1. Using webtest + testbed to test the SPI side
You are on the right track with webtest, but just need to make sure you correctly transform your requests for the SPI endpoint.
The Cloud Endpoints API front-end and the EndpointsDispatcher in dev_appserver transforms calls to /_ah/api/* into corresponding "backend" calls to /_ah/spi/*. The transformation seems to be:
- All calls are application/json HTTP POSTs (even if the REST endpoint is something else).
- The request parameters (path, query and JSON body) are all merged together into a single JSON body message.
- The "backend" endpoint uses the actual Python class and method names in the URL, e.g. POST /_ah/spi/TestEndpoint.insert_message will call TestEndpoint.insert_message() in your code.
- The JSON response is only reformatted before being returned to the original client.
This means you can test the endpoint with the following setup:
from google.appengine.ext import testbed
import webtest
# ...

def setUp(self):
    tb = testbed.Testbed()
    # needed because endpoints expects a . in this value
    tb.setup_env(current_version_id='testbed.version')
    tb.activate()
    tb.init_all_stubs()
    self.testbed = tb

def tearDown(self):
    self.testbed.deactivate()

def test_endpoint_insert(self):
    app = endpoints.api_server([TestEndpoint], restricted=False)
    testapp = webtest.TestApp(app)
    msg = {...}  # a dict representing the message object expected by insert,
                 # to be serialised to JSON by webtest
    resp = testapp.post_json('/_ah/spi/TestEndpoint.insert', msg)
    self.assertEqual(resp.json, {'expected': 'json response msg as dict'})
The thing here is that you can easily set up appropriate fixtures in the datastore or other GAE services prior to calling the endpoint, so you can more fully assert the expected side effects of the call.
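For instance, a sketch of seeding the datastore before exercising the endpoint (the Greeting model and the TestEndpoint.list method are hypothetical stand-ins for whatever your API reads, and the response shape depends on your message definitions):

from google.appengine.ext import ndb

class Greeting(ndb.Model):
    content = ndb.StringProperty()

def test_endpoint_list(self):
    # entities written here go to the testbed's in-memory datastore stub,
    # so the endpoint under test will see them
    Greeting(content='hello').put()
    app = endpoints.api_server([TestEndpoint], restricted=False)
    testapp = webtest.TestApp(app)
    resp = testapp.post_json('/_ah/spi/TestEndpoint.list', {})
    self.assertEqual(len(resp.json.get('items', [])), 1)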
2. Starting the development server for full integration test
You can start the dev server within the same python environment using something like the following:
import sys
import os
import dev_appserver

sys.path[1:1] = dev_appserver._DEVAPPSERVER2_PATHS

from google.appengine.tools.devappserver2 import devappserver2
from google.appengine.tools.devappserver2 import python_runtime
# ...

def setUp(self):
    APP_CONFIGS = ['/path/to/app.yaml']
    python_runtime._RUNTIME_ARGS = [
        sys.executable,
        os.path.join(os.path.dirname(dev_appserver.__file__),
                     '_python_runtime.py')
    ]
    options = devappserver2.PARSER.parse_args([
        '--admin_port', '0',
        '--port', '8123',
        '--datastore_path', ':memory:',
        '--logs_path', ':memory:',
        '--skip_sdk_update_check',
        '--',
    ] + APP_CONFIGS)
    server = devappserver2.DevelopmentServer()
    server.start(options)
    self.server = server

def tearDown(self):
    self.server.stop()
Now you need to issue actual HTTP requests to localhost:8123 to run tests against the API, but again can interact with GAE APIs to set up fixtures, etc. This is obviously slow as you're creating and destroying a new dev server for every test run.
At this point I use the Google API Python client to consume the API instead of building the HTTP requests myself:
import apiclient.discovery
# ...

def test_something(self):
    apiurl = 'http://%s/_ah/api/discovery/v1/apis/{api}/{apiVersion}/rest' \
        % self.server.module_to_address('default')
    service = apiclient.discovery.build('testendpoint', 'v1', apiurl)
    res = service.testresource().insert({... message ...}).execute()
    self.assertEquals(res, {... expected response as dict ...})
This is an improvement over testing with CURL as it gives you direct access to the GAE APIs to easily set up fixtures and inspect internal state. I suspect there is an even better way to do integration testing that bypasses HTTP by stitching together the minimal components in the dev server that implement the endpoint dispatch mechanism, but that requires more research time than I have right now.
The webtest setup can be simplified to reduce naming bugs.
For the following TestApi:
import endpoints
import protorpc
import logging


class ResponseMessageClass(protorpc.messages.Message):
    message = protorpc.messages.StringField(1)


class RequestMessageClass(protorpc.messages.Message):
    message = protorpc.messages.StringField(1)


@endpoints.api(name='testApi', version='v1',
               description='Test API',
               allowed_client_ids=[endpoints.API_EXPLORER_CLIENT_ID])
class TestApi(protorpc.remote.Service):

    @endpoints.method(RequestMessageClass,
                      ResponseMessageClass,
                      name='test',
                      path='test',
                      http_method='POST')
    def test(self, request):
        logging.info(request.message)
        return ResponseMessageClass(message="response message")
the tests.py should look like this:
import webtest
import logging
import unittest

from google.appengine.ext import testbed
from protorpc.remote import protojson

import endpoints

from api.test_api import TestApi, RequestMessageClass, ResponseMessageClass


class AppTest(unittest.TestCase):
    def setUp(self):
        logging.getLogger().setLevel(logging.DEBUG)
        tb = testbed.Testbed()
        tb.setup_env(current_version_id='testbed.version')
        tb.activate()
        tb.init_all_stubs()
        self.testbed = tb

    def tearDown(self):
        self.testbed.deactivate()

    def test_endpoint_testApi(self):
        application = endpoints.api_server([TestApi], restricted=False)
        testapp = webtest.TestApp(application)
        req = RequestMessageClass(message="request message")
        response = testapp.post(
            '/_ah/spi/' + TestApi.__name__ + '.' + TestApi.test.__name__,
            protojson.encode_message(req),
            content_type='application/json')
        res = protojson.decode_message(ResponseMessageClass, response.body)
        self.assertEqual(res.message, 'response message')


if __name__ == '__main__':
    unittest.main()
I tried everything I could think of to allow these to be tested in the normal way. I tried hitting the /_ah/spi methods directly, and even tried creating a new protorpc app using service_mappings, to no avail. I'm not a Googler on the endpoints team, so maybe they have something clever to allow this to work, but it doesn't appear that simply using webtest will work (unless I missed something obvious).
In the meantime you can write a test script that starts the app engine test server with an isolated environment and just issues HTTP requests to it.
Example to run the server with an isolated environment (bash but you can easily run this from python):
DATA_PATH=/tmp/appengine_data

if [ ! -d "$DATA_PATH" ]; then
    mkdir -p $DATA_PATH
fi

dev_appserver.py \
    --storage_path=$DATA_PATH/storage \
    --blobstore_path=$DATA_PATH/blobstore \
    --datastore_path=$DATA_PATH/datastore \
    --search_indexes_path=$DATA_PATH/searchindexes \
    --show_mail_body=yes \
    --clear_search_indexes \
    --clear_datastore \
    .
You can then just use requests to test, as you would with curl:
requests.get('http://localhost:8080/_ah/...')
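For example, a small unittest wrapper around such calls (the endpoint path here assumes the TestApi example shown earlier and a dev server listening on port 8080):

import unittest
import requests


class EndpointSmokeTest(unittest.TestCase):
    def test_post(self):
        # POST to the REST-facing /_ah/api/ path of the running dev server
        resp = requests.post(
            'http://localhost:8080/_ah/api/testApi/v1/test',
            json={'message': 'hello'})
        self.assertEqual(resp.status_code, 200)
        self.assertEqual(resp.json()['message'], 'response message')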
If you don't want to test the full HTTP stack as described by Ezequiel Muns, you can also just mock out endpoints.method and test your API definition directly:
import unittest

from google.appengine.api.users import User
import endpoints


def null_decorator(*args, **kwargs):
    def decorator(method):
        def wrapper(*args, **kwargs):
            return method(*args, **kwargs)
        return wrapper
    return decorator


endpoints.method = null_decorator
# the decorator needs to be mocked out before you load your endpoint api definitions
from mymodule import api


class FooTest(unittest.TestCase):
    def setUp(self):
        self.api = api.FooService()

    def test_bar(self):
        # pass protorpc messages directly
        self.api.foo_bar(api.MyRequestMessage(some='field'))
My solution uses one dev_appserver instance for the entire test module, which is faster than restarting the dev_appserver for each test method.
By using Google's Python API client library, I also get the simplest and at the same time most powerful way of interacting with my API.
import unittest
import sys
import os

from apiclient.discovery import build

import dev_appserver
sys.path[1:1] = dev_appserver.EXTRA_PATHS

from google.appengine.tools.devappserver2 import devappserver2
from google.appengine.tools.devappserver2 import python_runtime

server = None


def setUpModule():
    # starting a dev_appserver instance for testing
    path_to_app_yaml = os.path.normpath('path_to_app_yaml')
    app_configs = [path_to_app_yaml]
    python_runtime._RUNTIME_ARGS = [
        sys.executable,
        os.path.join(os.path.dirname(dev_appserver.__file__),
                     '_python_runtime.py')
    ]
    options = devappserver2.PARSER.parse_args(['--port', '8080',
                                               '--datastore_path', ':memory:',
                                               '--logs_path', ':memory:',
                                               '--skip_sdk_update_check',
                                               '--',
                                               ] + app_configs)
    global server
    server = devappserver2.DevelopmentServer()
    server.start(options)


def tearDownModule():
    # shutting down dev_appserver instance after testing
    server.stop()


class MyTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # build a service object for interacting with the api;
        # dev_appserver must be running and listening on port 8080
        api_root = 'http://127.0.0.1:8080/_ah/api'
        api = 'my_api'
        version = 'v0.1'
        discovery_url = '%s/discovery/v1/apis/%s/%s/rest' % (api_root, api,
                                                             version)
        cls.service = build(api, version, discoveryServiceUrl=discovery_url)

    def setUp(self):
        # create a parent entity and store its key for each test run
        body = {'name': 'test parent'}
        response = self.service.parent().post(body=body).execute()
        self.parent_key = response['parent_key']

    def test_post(self):
        # test my post method
        # the tested method also requires a path argument "parent_key"
        # .../_ah/api/my_api/sub_api/post/{parent_key}
        body = {'SomeProjectEntity': {'SomeId': 'abcdefgh'}}
        parent_key = self.parent_key
        req = self.service.sub_api().post(body=body, parent_key=parent_key)
        response = req.execute()
        # etc.
After digging through the sources, I believe things have changed in endpoints since Ezequiel Muns's (excellent) answer from 2014. For method 1 you now need to request from /_ah/api/* directly, using the correct HTTP method, instead of relying on the /_ah/spi/* transformation. This makes the test file look like this:
from google.appengine.ext import testbed
import webtest
# ...

def setUp(self):
    tb = testbed.Testbed()
    # Setting current_version_id doesn't seem necessary anymore
    tb.activate()
    tb.init_all_stubs()
    self.testbed = tb

def tearDown(self):
    self.testbed.deactivate()

def test_endpoint_insert(self):
    app = endpoints.api_server([TestEndpoint])  # restricted is no longer required
    testapp = webtest.TestApp(app)
    msg = {...}  # a dict representing the message object expected by insert,
                 # to be serialised to JSON by webtest
    resp = testapp.post_json('/_ah/api/test/v1/insert', msg)
    self.assertEqual(resp.json, {'expected': 'json response msg as dict'})
For searching's sake: the symptom of using the old method is endpoints raising a ValueError with Invalid request path: /_ah/spi/whatever. Hope that saves someone some time!