Global dependency as an argument to each handler - python

Looking at the docs, I ended up using my app settings this way:
import fastapi
import config
...

@router.post('')
async def my_handler(
    ...
    settings: config.SettingsCommon = fastapi.Depends(config.get_settings),
):
    ...
But I am not satisfied with repeating import config and config.get_settings everywhere.
Is there a way to use settings in my handlers without repeating myself?
After all, FastAPI cares about helping you minimize code repetition.

You can use Class Based Views from the fastapi_utils package.
As an example:
from logging import Logger

from fastapi import APIRouter, Depends, FastAPI
from fastapi_utils.cbv import cbv
from starlette import requests

import config
from auth import my_auth

router = APIRouter(
    tags=['Settings test'],
    # injected into each request; return values are ignored, so raise an
    # exception from my_auth to reject a request
    dependencies=[Depends(my_auth)]
)

@cbv(router)
class MyQueryCBV:
    settings: config.SettingsCommon = Depends(config.get_settings)  # you can get settings here

    def __init__(self, r: requests.Request):  # called for each request, after its dependencies have been evaluated
        self.logger: Logger = self.settings.logger
        self.logger.warning(str(r.headers))

    @router.get("/cbv/{test}")
    def test_cbv(self, test: str):
        self.logger.warning(f"test_cbv: {test}")
        return "test_cbv"

    @router.get("/cbv2")
    def test_cbv2(self):
        self.logger.warning("test_cbv2")
        return "test_cbv2"

It's not currently possible to have the return values of global dependencies injected into your handlers. You can still declare them, and the code inside the dependencies will run as normal.
See the docs on global dependencies for reference.
Without any external package, I can think of three ways of sharing a dependency globally. First, you can keep the dependency in a private module-level variable and expose it through a getter function.
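A minimal sketch of that first option, reusing the config module and SettingsCommon class from the question (and assuming SettingsCommon can be constructed without arguments):
import config

_settings = None

def get_settings() -> config.SettingsCommon:
    # build the settings object on first use, then reuse the same instance
    global _settings
    if _settings is None:
        _settings = config.SettingsCommon()
    return _settings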
Second, you can get the same effect without the private variable by using a cache decorator (docs here).
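A sketch of the cache-decorator variant; functools.lru_cache is also the pattern the FastAPI settings docs recommend:
from functools import lru_cache

import config

@lru_cache
def get_settings() -> config.SettingsCommon:
    # lru_cache memoizes the result, so the object is built only once
    return config.SettingsCommon()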
Finally, if you use a class as a dependency, you can implement the singleton pattern.
Something like:
class Animal:
    _singleton = None

    @classmethod
    def singleton(cls) -> "Animal":
        if cls._singleton is None:
            cls._singleton = Animal()
        return cls._singleton
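A handler can then declare the dependency as animal: Animal = Depends(Animal.singleton), since Depends accepts any callable.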

Related

Fast API - set function type to same type as imported method?

I'm building a FastAPI server that serves code on behalf of my customers.
So my directory structure is:
project
|   main.py
|__ customer_code (mounted at runtime)
    |   blah.py
Within main.py I have:
from customer_code import blah
from fastapi import FastAPI

app = FastAPI()
...

@app.post("/do_something")
def bar():  # I want bar's parameters to have the same types as blah.foo()'s
    blah.foo()
and within blah.py I have:
from pydantic import BaseModel

class User(BaseModel):
    id: int
    name = 'John Doe'

def foo(data: User):
    # does something
    ...
I don't know a priori what types my customers' code (in blah.py) will expect, but I'd like to use FastAPI's built-in generation of OpenAPI schemas (rather than requiring my customers to accept and parse JSON inputs).
Is there a way to set the types of the arguments to bar to be the same as the types of the arguments to foo?
One way would seem to be exec with fancy string interpolation, but I worry that that's not really Pythonic, and I'd also have to sanitize my users' inputs. So if there's another option, I'd love to learn.
Does this answer your question?
def f1(a: str, b: int):
    print('F1:', a, b)

def f2(c: float):
    print('F2:', c)

def function_with_unknown_arguments_for_f1(*args, **kwargs):
    f1(*args, **kwargs)

def function_with_unknown_arguments_for_f2(*args, **kwargs):
    f2(*args, **kwargs)

def function_with_unknown_arguments_for_any_function(func_to_run, *args, **kwargs):
    func_to_run(*args, **kwargs)

function_with_unknown_arguments_for_f1("a", 1)
function_with_unknown_arguments_for_f2(1.1)
function_with_unknown_arguments_for_any_function(f1, "b", 2)
function_with_unknown_arguments_for_any_function(f2, 2.2)
Output:
F1: a 1
F2: 1.1
F1: b 2
F2: 2.2
Here is a detailed explanation of *args and **kwargs.
In other words, to be able to handle dynamically changing foos, the post handler should look like:
@app.post("/do_something")
def bar(*args, **kwargs):
    blah.foo(*args, **kwargs)
About OpenAPI: it should be possible to override the documentation generator classes or functions and set the payload type based on foo instead of bar for specific views.
Here are a couple of examples of how to extend OpenAPI. They are not directly related to your question, but they may help you understand how it works.
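As a rough sketch of that idea (not a full solution): FastAPI documents an "Extending OpenAPI" pattern where you replace app.openapi with your own generator. The title and version strings below are placeholders:
from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi

app = FastAPI()

def custom_openapi():
    # build the schema once, then cache it on the app
    if app.openapi_schema:
        return app.openapi_schema
    schema = get_openapi(title="My API", version="1.0.0", routes=app.routes)
    # e.g. rewrite schema["paths"]["/do_something"] to describe foo's payload
    app.openapi_schema = schema
    return schema

app.openapi = custom_openapi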
Ended up using this answer:
from inspect import signature

from fastapi import Depends, FastAPI
from pydantic import create_model

from customer_code import blah

sig = signature(blah.foo)
query_params = {}
for k in sig.parameters:
    query_params[k] = (sig.parameters[k].annotation, ...)
query_model = create_model("Query", **query_params)
So then the function looks like:
@app.post("/do_something")
def bar(params: query_model = Depends()):
    p_as_dict = params.dict()
    return blah.foo(**p_as_dict)
I don't quite get what I want (there's no nice text box, just a JSON field with example inputs), but it's close.
First consider the definition of a decorator. From Primer on Python Decorators:
a decorator is a function that takes another function and extends the behavior of the latter function without explicitly modifying it
app.post("/do_something") returns a decorator that receives the def bar(...) function in your example. About the usage of #, the same page mentioned before says:
#my_decorator is just an easier way of saying say_whee = my_decorator(say_whee)
So you could just use something like:
app.post("/do_something")(blah.foo)
And the foo function of blah will be exposed by FastAPI. That could be the end of it, unless you also want to perform some operations before foo is called or after it returns. In that case you need your own decorator.
Full example:
# main.py
import functools
import importlib
from typing import Any
from fastapi import FastAPI
app = FastAPI()
# Create your own decorator if you need to intercept the request/response
def decorator(func):
# Use functools.wraps() so that the returned function "look like"
# the wrapped function
#functools.wraps(func)
def wrapper_decorator(*args: Any, **kwargs: Any) -> Any:
# Do something before if needed
print("Before")
value = func(*args, **kwargs)
# Do something after if needed
print("After")
return value
return wrapper_decorator
# Import your customer's code
blah = importlib.import_module("customer_code.blah")
# Decorate it with your decorator and then pass it to FastAPI
app.post("/do_something")(decorator(blah.foo))
# customer_code/blah.py
from pydantic import BaseModel

class User(BaseModel):
    id: int
    name = 'John Doe'

def foo(data: User) -> User:
    return data
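Note that because foo's parameter is annotated with a Pydantic model, FastAPI derives the request body schema from User automatically, which also takes care of the OpenAPI generation the question asks about.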

How to build a good registration mechanism in python?

I want to build a well-modularized Python project, where all alternative modules are registered and accessed via a function named xxx_builder.
Taking a data class as an example:
register.py:
import logging

logger = logging.getLogger(__name__)

def register(key, module, module_dict):
    """Register and maintain the data classes."""
    if key in module_dict:
        logger.warning(
            'Key {} is already pre-defined, overwritten.'.format(key))
    module_dict[key] = module

data_dict = {}

def register_data(key, module):
    register(key, module, data_dict)
data.py:
from register import register_data
import ABCDEF

class MyData:
    """An alternative data class."""
    pass

def call_my_data(data_type):
    if data_type == 'mydata':
        return MyData

register_data('mydata', call_my_data)
builder.py:
import register

def get_data(type):
    """Obtain the corresponding data class."""
    for func in register.data_dict.values():
        data = func(type)
        if data is not None:
            return data
main.py:
from data import MyData
from builder import get_data

if __name__ == '__main__':
    data_type = 'mydata'
    data = get_data(type=data_type)
My problem
In main.py, to register the MyData class into register.data_dict before calling get_data, I need to import data.py in advance so that register_data('mydata', call_my_data) is executed.
That's okay while the project is small and all the data-related classes are placed according to some rule (e.g. all data-related classes live under the data directory), so that I can import them in advance.
However, this registration mechanism means that all data-related classes get imported, so I have to install every package even if I won't actually use it. For example, when the data_type indicator in main.py is not 'mydata', I still need to install the ABCDEF package for the MyData class.
So is there any good way to avoid importing all the packages?
Python's packaging tools come with a solution for this: entry points. There's even a tutorial about how to use entry points for plugins (which seems to be what you're doing), in conjunction with this Setuptools tutorial.
In other words, something like this (nb. untested): if a plugin package has defined
[options.entry_points]
myapp.data_class =
    someplugindata = my_plugin.data_classes:SomePluginData
in setup.cfg (or pyproject.toml or setup.py, with their respective syntaxes), you could register all of these plugin classes (shown here alongside a locally registered class).
from importlib.metadata import entry_points

data_class_registry = {}

def register(key):
    def decorator(func):
        data_class_registry[key] = func
        return func
    return decorator

@register("mydata")
class MyData:
    ...

def register_from_entrypoints():
    # entry_points(group=...) requires Python 3.10+; use the
    # importlib_metadata backport from PyPI on older versions
    for entrypoint in entry_points(group="myapp.data_class"):
        register(entrypoint.name)(entrypoint.load())

def get_constructor(type):
    return data_class_registry[type]

def main():
    register_from_entrypoints()
    get_constructor("mydata")(...)
    get_constructor("someplugindata")(...)

Mock an external service instantiated in python constructor

I'm working on unit tests for a service I made that uses confluent-kafka. The goal is to test successful function calls, exception errors, etc. The problem I'm running into is that, since I'm instantiating the client in the constructor of my service, the tests are failing and I'm unsure how to patch a constructor. My question is: how do I mock my service in order to properly test its functionality?
Example_Service.py:
from confluent_kafka.schema_registry import SchemaRegistryClient

class ExampleService:
    def __init__(self, config):
        self.service = SchemaRegistryClient(config)

    def get_schema(self):
        return self.service.get_schema()
Example_Service_tests.py:
from unittest import mock

from confluent_kafka.schema_registry import SchemaRegistryClient

from Example_Service import ExampleService

@mock.patch.object(SchemaRegistryClient, "get_schema")
def test_get_schema_success(mock_client):
    schema_Id = ExampleService.get_schema()
    mock_client.assert_called()
The problem is that you aren't creating an instance of ExampleService; __init__ never gets called.
You can avoid patching anything by allowing your class to accept a client factory as an argument (which can default to SchemaRegistryClient):
class ExampleService:
    def __init__(self, config, *, client_factory=SchemaRegistryClient):
        self.service = client_factory(config)
    ...
Then in your test, you can simply pass an appropriate stub as an argument:
from unittest.mock import Mock

def test_get_schema_success():
    mock_client = Mock()
    service = ExampleService(some_config, client_factory=mock_client)
    mock_client.assert_called()
Two ways:
mock the entire class using @mock.patch("confluent_kafka.schema_registry.SchemaRegistryClient") (note that mock.patch takes the target as a dotted-path string), OR
replace @mock.patch.object(SchemaRegistryClient, "get_schema") with
@mock.patch.object(SchemaRegistryClient, "__init__", return_value=None)
@mock.patch.object(SchemaRegistryClient, "get_schema")
(__init__ needs return_value=None, since Python requires __init__ to return None).

Access pytest session or arguments in pytest_runtest_logreport

I am trying to build a pytest plugin that uses pytest_runtest_logreport to invoke some code every time a test fails. I'd like to gate this plugin behind a CLI argument I added using the pytest_addoption hook. Unfortunately, I can't seem to figure out how to access the pytest session state or arguments inside the pytest_runtest_logreport hook. Is there a way to do this? I don't see it in the hookspec.
You can't get the session from the standard TestReport object. However, you can introduce a custom wrapper around the pytest_runtest_makereport hook (the one that creates the report object), where you can attach the session yourself. Example:
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    out = yield
    report = out.get_result()
    report.session = item.session

def pytest_runtest_logreport(report):
    print(report.session)
Another example of passing state between hooks is a plugin class. Example with accessing the config object in pytest_runtest_logreport:
import pytest

@pytest.mark.tryfirst
def pytest_configure(config):
    p = MyPlugin(config)
    config.pluginmanager.register(p, 'my_plugin')

class MyPlugin:
    def __init__(self, config):
        self.config = config

    def pytest_runtest_logreport(self, report):
        print(report, self.config)
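Putting the plugin-class approach together with the question's CLI gating, a minimal conftest.py sketch (the --notify-on-fail option name is hypothetical):
# conftest.py
def pytest_addoption(parser):
    parser.addoption("--notify-on-fail", action="store_true", default=False)

class MyPlugin:
    def __init__(self, config):
        self.config = config

    def pytest_runtest_logreport(self, report):
        # gate the plugin behind the CLI flag and only react to failures
        if self.config.getoption("--notify-on-fail") and report.failed:
            print(f"Test failed: {report.nodeid}")

def pytest_configure(config):
    config.pluginmanager.register(MyPlugin(config), "my_notify_plugin")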

Using flask-jwt-extended callbacks with flask-restful and create_app

I'm trying to create API tokens for my Flask API with flask-jwt-extended, and to initialize the token_in_blacklist_loader, but I can't figure out the right way to do it.
The problem is that token_in_blacklist_loader is implemented as a decorator. It is supposed to be used in the following way:
@jwt.token_in_blacklist_loader
def check_if_token_in_blacklist(decrypted_token):
    jti = decrypted_token['jti']
    return jti in blacklist
^ from the docs here
Where jwt is defined as:
jwt = JWTManager(app)
But when using the create_app pattern, the jwt variable is hidden inside a function and cannot be used in the global scope for decorators.
What is the right way to fix this / work around this?
What I ended up doing was putting the handler inside of create_app, like so:
def create_app(name: str, settings_override: dict = {}):
    app = Flask(name, ...)
    ...
    jwt = JWTManager(app)

    @jwt.token_in_blacklist_loader
    def check_token_in_blacklist(token_dict: dict) -> bool:
        ...
Put the JWTManager in a different file, and initialize it with the jwt.init_app function
As an example, see:
https://github.com/vimalloc/flask-jwt-extended/blob/master/examples/database_blacklist/extensions.py
and
https://github.com/vimalloc/flask-jwt-extended/blob/master/examples/database_blacklist/app.py
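A minimal sketch of that layout, in the spirit of the linked example (the blacklist set is a placeholder, and note that newer flask-jwt-extended versions rename this callback to token_in_blocklist_loader):
# extensions.py
from flask_jwt_extended import JWTManager

jwt = JWTManager()
blacklist = set()  # placeholder; the linked example uses a database

@jwt.token_in_blacklist_loader
def check_if_token_in_blacklist(decrypted_token):
    return decrypted_token['jti'] in blacklist

# app.py
from flask import Flask

from extensions import jwt

def create_app(name: str):
    app = Flask(name)
    # bind the already-configured JWTManager to this app instance
    jwt.init_app(app)
    return app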
