How to define nullable response attributes? - python

I'm using the Python library connexion in conjunction with my Swagger specification which features the x-nullable attribute in some of the definitions.
This x-nullable attribute is essentially a polyfill for the lack of nullable-attribute support in OAS 2.0; most frameworks support it to varying degrees.
connexion does appear to support this attribute in parameters but not responses.
So if you attempt to return a response containing null as the value of any x-nullable item, and validate_responses is set to True, it fails validation and produces "Response body does not conform to specification" in the response.
How best can I polyfill support for x-nullable in a connexion-based Python application, such as the ones generated by the swagger-codegen tool?

I managed to specify a custom validator that patches the JSON schemas in the definition of the operation passed to it.
from connexion.decorators.response import ResponseValidator


class CustomResponseValidator(ResponseValidator):
    def __init__(self, operation, mimetype):
        operation.definitions = {
            name: self.patch_nullable_in_definition(definition)
            for name, definition in operation.definitions.items()
        }
        ResponseValidator.__init__(self, operation, mimetype)

    def patch_nullable_in_definition(self, definition):
        definition['properties'] = {
            key: property_ if '$ref' in property_ else self.patch_nullable_in_property(property_)
            for key, property_ in definition['properties'].items()
        }
        return definition

    def patch_nullable_in_property(self, property_):
        if isinstance(property_, dict) and \
                'x-nullable' in property_ and \
                property_['x-nullable'] and \
                'type' in property_:
            if not isinstance(property_['type'], list):
                property_['type'] = [property_['type']]
            if 'null' not in property_['type']:
                property_['type'].append('null')
        return property_
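To sanity-check what the patch does, the property-level transformation can be run in isolation (a standalone sketch of the same logic, no connexion required):

```python
# Standalone sketch of the patch: a Swagger 2.0 property carrying x-nullable
# gets its "type" rewritten into a JSON-Schema type list that also allows null.
def patch_nullable_in_property(property_):
    if isinstance(property_, dict) and property_.get('x-nullable') and 'type' in property_:
        if not isinstance(property_['type'], list):
            property_['type'] = [property_['type']]
        if 'null' not in property_['type']:
            property_['type'].append('null')
    return property_

print(patch_nullable_in_property({'type': 'string', 'x-nullable': True}))
# {'type': ['string', 'null'], 'x-nullable': True}
```

To wire the validator in, connexion accepts a validator_map when registering the API, e.g. app.add_api('swagger.yaml', validator_map={'response': CustomResponseValidator}); check your connexion version's documentation for the exact hook.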


How can I obtain required arguments which are not explicitly stated in the signature of a class method?

I am using a python library (ccxt) in which one base exchange class is inherited by exchange-specific classes, to provide a unified interface to several exchanges (coinbase, binance etc.).
The function definition for a sub-class might look something like this (not necessarily exactly): def fetch_ledger(self, symbols = None, since = None, params = {}):
The thing is, for e.g. the coinbase class, this method calls another method, prepareAccountRequestWithCurrencyCode(), which raises the exception:
    raise ArgumentsRequired(self.id + ' prepareAccountRequestWithCurrencyCode() method requires an account_id(or accountId) parameter OR a currency code argument')
if "accountId" or "code" is not provided in the params dict. These arguments are not in the function signature, as they are to be provided in the params dict (e.g. params = {"accountId": "0x123"}).
I want to know that these arguments are required before I use the method, as I want to implement some automation and GUI-elements which can work across several exchanges (sub-classes). Some of these sub-classes have their own fetch_ledger methods which might not require e.g. the "accountId" argument to be provided in the params dict.
What is a good way to automatically obtain required arguments that are not in the function signature, for all exchanges?
I am providing the relevant ccxt code below since it's open-source:
def fetch_ledger(self, code=None, since=None, limit=None, params={}):
    self.load_markets()
    currency = None
    if code is not None:
        currency = self.currency(code)
    request = self.prepare_account_request_with_currency_code(code, limit, params)  # REQUIRES "accountId" in params
    query = self.omit(params, ['account_id', 'accountId'])
    response = self.v2PrivateGetAccountsAccountIdTransactions(self.extend(request, query))
    return self.parse_ledger(response['data'], currency, since, limit)

def prepare_account_request_with_currency_code(self, code=None, limit=None, params={}):
    accountId = self.safe_string_2(params, 'account_id', 'accountId')
    if accountId is None:
        if code is None:
            raise ArgumentsRequired(self.id + ' prepareAccountRequestWithCurrencyCode() method requires an account_id(or accountId) parameter OR a currency code argument')
        accountId = self.find_account_id(code)
        if accountId is None:
            raise ExchangeError(self.id + ' prepareAccountRequestWithCurrencyCode() could not find account id for ' + code)
    request = {
        'account_id': accountId,
    }
    if limit is not None:
        request['limit'] = limit
    return request
I've already thought of a few ways of doing it, such as running the function, catching the exception and dissecting the message string to prompt the user for any missing arguments at run-time. I've also thought about writing a source-code parser, and even making changes to the library code, but I'm currently not sure what is best. I'd prefer not to have to look at the documentation of each unified method for all 100 exchanges and do it manually.
I'm wondering if anyone knows of an elegant or best-practice way of obtaining such optionally provided, yet required arguments for such methods (or just for the library I am currently using).
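One pragmatic option is the run-time probe the question already hints at: call the method defensively, catch ArgumentsRequired, and record its message. A minimal sketch with a stub standing in for ccxt (StubExchange and its message text are invented for illustration; real code would import ArgumentsRequired from ccxt.base.errors):

```python
# Hypothetical run-time probe: call the method with no params, catch ccxt's
# ArgumentsRequired, and surface its message. StubExchange is invented here;
# real code would use ccxt.base.errors.ArgumentsRequired and a real exchange.
class ArgumentsRequired(Exception):
    pass

class StubExchange:
    id = 'coinbase'

    def fetch_ledger(self, code=None, since=None, limit=None, params={}):
        if 'accountId' not in params and 'account_id' not in params and code is None:
            raise ArgumentsRequired(self.id + ' requires an account_id (or accountId) '
                                    'parameter OR a currency code argument')
        return []

def probe_required_params(method, **kwargs):
    """Return None if the call succeeds, else the ArgumentsRequired message."""
    try:
        method(**kwargs)
        return None
    except ArgumentsRequired as exc:
        return str(exc)

print(probe_required_params(StubExchange().fetch_ledger))              # message about account_id
print(probe_required_params(StubExchange().fetch_ledger, code='BTC'))  # None
```

The obvious caveat is that probing performs a real call against the exchange, so it is only safe for read-only methods; for anything with side effects you would still need a per-exchange table of requirements.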

Set Optional params in PUT method using fastAPI/mongodb

I am trying to make some params Optional in a PUT method of my API.
Using FastAPI and MongoDB I've built a simple API to insert and delete students; now I'd like to be able to update an entry without all the "params" being mandatory.
I've checked this question, Fastapi: put method, and it looks like what I'm after for MongoDB.
And this response from art049 looks similar to what I already have in my @api_router.put('/update-student/{id}', tags=['Student']): MongoDb with FastAPI
As example for my question here I have this structure:
Models:
class Student(BaseModel):
    age: int
    name: str
    address: str


class UpdateStudent(BaseModel):
    age: Optional[int] = None
    name: Optional[str] = None
    address: Optional[str] = None
Schemas:
def serializeDict(a) -> dict:
    return {**{i: str(a[i]) for i in a if i == '_id'}, **{i: a[i] for i in a if i != '_id'}}


def serializeList(entity) -> list:
    return [serializeDict(a) for a in entity]
Routes:
@api_router.post('/create-student', tags=['Students'])
async def create_students(student: Student):
    client.collegedb.students_collection.insert_one(dict(student))
    return serializeList(client.collegedb.students_collection.find())
Also I know I can update the entry without problems in this way:
@api_router.put('/update-student/{id}', tags=['Student'])
async def update_student(id, ustudent: UpdateStudent):
    client.collegedb.students_collection.find_one_and_update({"_id": ObjectId(id)}, {
        "$set": dict(ustudent)
    })
    return serializeDict(client.collegedb.students_collection.find_one({"_id": ObjectId(id)}))
My problem, as you can see from my Models, is that I need a way to detect which params were actually sent and update only those:
if I update, say, only the age, then since the other params are not required, name and address will be stored as None (null actually), because that's the default I set in my model.
Maybe I can do something like this:
if ustudent.age != None:
    students_collection[ObjectId(id)] = ustudent.age
if ustudent.name != None:
    students_collection[ObjectId(id)] = ustudent.name
if ustudent.address != None:
    students_collection[ObjectId(id)] = ustudent.address
I know this works with a plain dictionary, but I've never tried it on a MongoDB collection, since Pydantic doesn't support ObjectId for iterations; that's why serializeDict was created.
I would really appreciate it if somebody could give me a hint about this.
You can use the exclude_unset=True argument, as suggested in the FastAPI documentation:
@api_router.put('/update-student/{id}', tags=['Student'])
async def update_student(id, ustudent: UpdateStudent):
    client.collegedb.students_collection.find_one_and_update({"_id": ObjectId(id)}, {
        "$set": ustudent.dict(exclude_unset=True)
    })
    return serializeDict(client.collegedb.students_collection.find_one({"_id": ObjectId(id)}))
Here is the documentation for exporting Pydantic models.
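The point of exclude_unset=True is that the resulting dict contains only the fields the client actually sent, so the $set stage never overwrites stored values with None. The merge semantics can be sketched with plain dicts (no Pydantic or MongoDB needed):

```python
# A plain-dict sketch of what $set with exclude_unset=True achieves:
# only the fields the client actually sent are merged into the stored document.
def apply_partial_update(document, update_fields):
    merged = dict(document)
    merged.update(update_fields)   # same spirit as Mongo's $set
    return merged

stored = {'age': 21, 'name': 'Ana', 'address': 'Main St'}
# A request body of {"age": 22} dumps (with exclude_unset=True) to just {'age': 22},
# so name and address survive untouched:
print(apply_partial_update(stored, {'age': 22}))
# {'age': 22, 'name': 'Ana', 'address': 'Main St'}
```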

How to pass **kwargs as params to FastAPI endpoint?

I have a function generating a dict template. It consists of several generators, requires one parameter (i.e., carrier), and accepts many optional parameters (keyword arguments, **kwargs).
def main_builder(carrier, **params):
    output = SamplerBuilder(DEFAULT_JSON)
    output.generate_flight(carrier)
    output.generate_airline_info(carrier)
    output.generate_locations()
    output.generate_passengers()
    output.generate_contact_info()
    output.generate_payment_card_info()
    output.configs(**params)
    result = output.input_json
    return result


# example of function call
examplex = main_builder("3M", proxy="5.39.69.171:8888", card=Visa,
                        passengers={"ADT": 2, "CHD": 1}, bags=2)
I want to deploy this function to a FastAPI endpoint. I managed to do it for carrier, but how can I pass **kwargs as params to the function?
@app.get("/carrier/{carrier_code}", response_class=PrettyJSONResponse)  # params/kwargs??
async def get_carrier(carrier_code):
    output_json = main_builder(carrier_code)
    return output_json
Using Pydantic Model
Since your function "..has many optional parameters" and the passengers parameter requires a dictionary as input, I would suggest creating a Pydantic model in which you define the parameters. This lets you send the data in JSON format and have it automatically validated by Pydantic as well. Once the endpoint is called, you can use Pydantic's dict() method to convert the model into a dictionary.
Example
from pydantic import BaseModel
from typing import Optional


class MyModel(BaseModel):
    proxy: Optional[str] = None
    card: Optional[str] = None
    passengers: Optional[dict] = None
    bags: Optional[int] = None


@app.post("/carrier/{carrier_code}")
async def get_carrier(carrier_code: str, m: MyModel):
    return main_builder(carrier_code, **m.dict())
Sending arbitrary JSON data
In case you had to send arbitrary JSON data, and hence, pre-defining the parameters of an endpoint wouldn't be possible, you could use an approach similar to the one described in this answer (see Options 3 and 4), as well as this answer and this answer.
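The core of the Pydantic-model approach is plain dict unpacking: the model becomes a dict, and ** forwards its entries as keyword arguments. A self-contained sketch (this main_builder is a stub that only echoes its inputs; on a real model, m.dict(exclude_none=True) would give you the None-filtering shown here directly):

```python
# Stub of the question's builder (hypothetical -- it just echoes its inputs)
# to show how a model dict is forwarded as **kwargs.
def main_builder(carrier, **params):
    return {'carrier': carrier, 'params': params}

# What m.dict() might return when only some fields were sent:
payload = {'proxy': '5.39.69.171:8888', 'card': None, 'passengers': {'ADT': 2}, 'bags': None}

# Drop the None entries (m.dict(exclude_none=True) yields this directly)
# so main_builder's own defaults are not clobbered:
cleaned = {k: v for k, v in payload.items() if v is not None}
result = main_builder('3M', **cleaned)
print(result)
# {'carrier': '3M', 'params': {'proxy': '5.39.69.171:8888', 'passengers': {'ADT': 2}}}
```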

validictory is not able to validate properly

I'm trying to validate the headers of a Flask request and it's failing. I'm using the code below to simulate the issue, and I can see that it fails to validate the headers properly even when I omit some of the mandatory headers.
The code below is expected to fail, but it passes.
import validictory
from werkzeug.datastructures import EnvironHeaders
obj = EnvironHeaders(environ={})
validictory.validate(obj,{'type': 'object', 'properties': {'test':{'required': True, 'type': 'any'}}})
If I convert the EnvironHeaders as dict then validation is happening properly.
import validictory
from werkzeug.datastructures import EnvironHeaders
obj = EnvironHeaders(environ={})
validictory.validate(dict(obj),{'type': 'object', 'properties': {'test':{'required': True, 'type': 'any'}}})
This properly raises the below error during validation. Any idea on the reason for improper validation happened in the first case?
validictory.validator.RequiredFieldValidationError: Required field 'test' is missing
I was able to find out the reason for this issue by going through the source code of validictory.
It was passing the type validation since EnvironHeaders has both the attributes 'keys' and 'items'.
def validate_type_object(self, val):
    return isinstance(val, Mapping) or (hasattr(val, 'keys') and hasattr(val, 'items'))
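The duck-typing gap can be reproduced without werkzeug: any object exposing keys() and items() passes validate_type_object, yet fails the isinstance(value, dict) test inside validate_properties, so its properties are never checked (HeadersLike below is a made-up stand-in for EnvironHeaders):

```python
from collections.abc import Mapping

# HeadersLike is a hypothetical stand-in for werkzeug's EnvironHeaders:
# it offers keys() and items() without being a dict or Mapping subclass.
class HeadersLike:
    def keys(self):
        return []

    def items(self):
        return []

obj = HeadersLike()
# validictory's duck-typed object check accepts it...
passes_type_check = isinstance(obj, Mapping) or (hasattr(obj, 'keys') and hasattr(obj, 'items'))
print(passes_type_check)      # True
# ...but property validation only inspects real dicts, so nothing gets validated:
print(isinstance(obj, dict))  # False
```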
Property validation is happening only for dict types and the validation is passing since the code doesn't raise any error if the input type is not a dictionary.
def validate_properties(self, x, fieldname, schema, path, properties=None):
    ''' Validates properties of a JSON object by processing the object's schema recursively '''
    value = x.get(fieldname)
    if value is not None:
        if isinstance(value, dict):
            if isinstance(properties, dict):
                if self.disallow_unknown_properties or self.remove_unknown_properties:
                    self._validate_unknown_properties(properties, value, fieldname,
                                                      schema.get('patternProperties'))
                for property in properties:
                    self.__validate(property, value, properties.get(property),
                                    path + '.' + property)
            else:
                raise SchemaError("Properties definition of field '{0}' is not an object"
                                  .format(fieldname))
Note: validictory is no longer maintained, so I'm not going to raise an issue in its Git repo. I will try the jsonschema package instead, as suggested.

How to compare sql vs json in python

I have the following problem.
I have a class User simplified example:
class User:
    def __init__(self, name, lastname, status, id=None):
        self.id = id
        self.name = name
        self.lastname = lastname
        self.status = status

    def set_status(self, status):
        # call to the api to change status
        ...

    def get_data_from_db_by_id(self):
        # select data from db where id = self.id
        ...

    def __eq__(self, other):
        if not isinstance(other, User):
            return NotImplemented
        return (self.id, self.name, self.lastname, self.status) == \
               (other.id, other.name, other.lastname, other.status)
And I have a database structure like:
id, name, lastname, status
1, Alex, Brown, free
And json response from an API:
{
    "id": 1,
    "name": "Alex",
    "lastname": "Brown",
    "status": "Sleeping"
}
My question is:
What's the best way to compare the JSON and SQL responses?
What for? It's only for testing purposes: I have to check that the API has changed the DB correctly.
How can I deserialize the JSON and the DB result to the same class? Are there any common best practices?
For now, I'm trying to use marshmallow for the JSON and SQLAlchemy for the DB, but have had no luck with it.
Convert the database row to a dictionary:
def row2dict(row):
    d = {}
    for column in row.__table__.columns:
        d[column.name] = str(getattr(row, column.name))
    return d
Then convert json string to a dictionary:
d2 = json.loads(json_response)
And finally compare:
d2 == d
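One caveat with this recipe: row2dict stringifies every value, so a straight d2 == d fails whenever the JSON carries non-string types such as the integer id. A self-contained sketch (FakeRow is a made-up stand-in for a SQLAlchemy row) showing the normalization needed:

```python
import json
from types import SimpleNamespace

# FakeRow is a hypothetical stand-in for a SQLAlchemy model instance:
# __table__.columns is an iterable of objects exposing a .name attribute.
class FakeRow:
    __table__ = SimpleNamespace(columns=[SimpleNamespace(name=n) for n in ('id', 'name', 'status')])

    def __init__(self, id, name, status):
        self.id, self.name, self.status = id, name, status

def row2dict(row):
    return {c.name: str(getattr(row, c.name)) for c in row.__table__.columns}

d = row2dict(FakeRow(1, 'Alex', 'free'))  # every value is a string, e.g. id -> '1'
d2 = json.loads('{"id": 1, "name": "Alex", "status": "free"}')
print(d == d2)                                  # False: '1' != 1
print(d == {k: str(v) for k, v in d2.items()})  # True after stringifying the JSON side
```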
If you are using SQLAlchemy for the database, then I would recommend using SQLAthanor (full disclosure: I am the library’s author).
SQLAthanor is a serialization and de-serialization library for SQLAlchemy that lets you configure robust rules for how to serialize / de-serialize your model instances to JSON. One way of checking your instance and JSON for equivalence is to execute the following logic in your Python code:
First, serialize your DB instance to JSON. Using SQLAthanor you can do that as simply as:
instance_as_json = my_instance.dump_to_json()
This will take your instance and dump all of its attributes to a JSON string. If you want more fine-grained control over which model attributes end up on your JSON, you can also use my_instance.to_json() which respects the configuration rules applied to your model.
Once you have your serialized JSON string, you can use the Validator-Collection to convert your JSON strings to dicts, and then check if your instance dict (from your instance JSON string) is equivalent to the JSON from the API (full disclosure: I’m also the author of the Validator-Collection library):
from validator_collection import checkers, validators
api_json_as_dict = validators.dict(api_json_as_string)
instance_json_as_dict = validators.dict(instance_as_json)
are_equivalent = checkers.are_dicts_equivalent(instance_json_as_dict, api_json_as_dict)
Depending on your specific situation and objectives, you can construct even more elaborate checks and validations as well, using SQLAthanor’s rich serialization and deserialization options.
Here are some links that you might find helpful:
SQLAthanor Documentation on ReadTheDocs
SQLAthanor on Github
.dump_to_json() documentation
.to_json() documentation
Validator-Collection Documentation
validators.dict() documentation
checkers.are_dicts_equivalent() documentation
Hope this helps!
