I have code to print out a dict as YAML, like so:
import yaml

yaml.dump(
    {
        "Properties": {
            "ImageId": "!Ref AParameter"
        }
    },
    new_template,
    default_flow_style=False
)
This creates:
Properties:
  ImageId: '!Ref AParameter'
Notice how the value for ImageId is inside quotes? I would like to print it without the quotes. How do I do that with PyYAML?
The ! has a special meaning: it is used to introduce an explicit tag, and therefore cannot appear at the beginning of a plain (unquoted) style scalar. Specifically, rule 126 of the YAML 1.2 specification indicates that the first character of such a plain scalar cannot be a c-indicator, which is what ! is.
Such a scalar has to be quoted (single or double), which PyYAML does automatically, or be put in a literal or folded block style.
You could dump valid YAML without quotes to a literal block style scalar:
Properties:
  ImageId: |
    !Ref AParameter
Without supporting code PyYAML cannot do this. You can use ruamel.yaml to do so (disclaimer: I am the author of that package) by making the value a PreservedScalarString instance: ruamel.yaml.scalarstring.PreservedScalarString("!Ref AParameter").
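A minimal sketch of that, assuming a reasonably recent ruamel.yaml (the block scalar output matches the example above):

import sys
from ruamel.yaml import YAML
from ruamel.yaml.scalarstring import PreservedScalarString

yaml = YAML()
data = {
    "Properties": {
        # PreservedScalarString forces the literal block style on dump
        "ImageId": PreservedScalarString("!Ref AParameter"),
    }
}
yaml.dump(data, sys.stdout)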
You can of course define a class that dumps using the !Ref tag, but the tag context will force quotes around the scalar AParameter:
import sys
import yaml

class Ref(str):
    @staticmethod
    def yaml_dumper(dumper, data):
        return dumper.represent_scalar('!Ref', u'{}'.format(data), style=None)

yaml.add_representer(Ref, Ref.yaml_dumper)

yaml.dump(
    {
        "Properties": {
            "ImageId": Ref("AParameter"),
        }
    },
    sys.stdout,
    default_flow_style=False,
)
which gives:
Properties:
  ImageId: !Ref 'AParameter'
This is so even though loading !Ref AParameter (without quotes) with an appropriate constructor is possible; i.e. the quotes are just added here to be on the safe side.
If you also want to suppress those quotes, you can e.g. do so using ruamel.yaml, by defining a special style 'x' for your node and providing emitter handling for that style:
from ruamel import yaml

class Ref(str):
    @staticmethod
    def yaml_dumper(dumper, data):
        return dumper.represent_scalar('!Ref', u'{}'.format(data), style='x')

    @staticmethod
    def yaml_constructor(loader, node):
        value = loader.construct_scalar(node)
        return Ref(value)

yaml.add_representer(Ref, Ref.yaml_dumper)
yaml.add_constructor('!Ref', Ref.yaml_constructor,
                     constructor=yaml.constructor.SafeConstructor)

def choose_scalar_style(self):
    if self.event.style == 'x':
        return ''
    return self.org_choose_scalar_style()

yaml.emitter.Emitter.org_choose_scalar_style = yaml.emitter.Emitter.choose_scalar_style
yaml.emitter.Emitter.choose_scalar_style = choose_scalar_style
data = {
    "Properties": {
        "ImageId": Ref("AParameter"),
    }
}

ys = yaml.dump(data, default_flow_style=False)
print(ys)
data_out = yaml.safe_load(ys)
assert data_out == data
The above doesn't throw an error on the assert, so the data round-trips, and the printed output is AFAICT exactly what you want:
Properties:
  ImageId: !Ref AParameter
I'm using the pytest framework to test an executable.
For this executable, I defined multiple test cases in a json file:
{
    "Tests": [
        {
            "name": "test1",
            "description": "writes hello world to file",
            "exe": "%path_to_exe%",
            "arguments": "--verbose",
            "expression": "test1.txt",
            "referencedir": "%path_to_referencedir%",
            "logdir": "%path_to_logdir%"
        },
        {
            "name": "test2",
            "description": "returns length of hello world string",
            "exe": "path_to_exe",
            "arguments": "--verbose",
            "expression": "test2.txt",
            "referencedir": "%path_to_referencedir%",
            "logdir": "%path_to_logdir%"
        }
    ]
}
For each of these test cases, the exe should start and execute the expression that is passed via the 'expression' attribute. Its output is written to the logdir (defined by the 'logdir' attribute), which should then be compared with the referencedir. Pytest should then indicate for each of the test cases whether the output file in the logdir is identical to the file in the referencedir.
I'm struggling with making pytest go over each test case one by one.
I'm able to loop over each test, but assertions don't indicate which test exactly is failing.
def test_xxx():
    with open('tests.cfg') as f:
        data = json.loads(f.read())
        for test in data['Tests']:
            assert test['name'] == "test1"
Furthermore, I tried to parametrize the input, but I cannot get it to work either:
def load_test_cases():
    # Opening JSON file
    f = open('tests.cfg')
    # returns JSON object as a dictionary
    data = json.load(f)
    f.close()
    return data

@pytest.mark.parametrize("test", load_test_cases())
def test_xxx(test):
    assert test['name'] == "test1"
Which returns 'test_json.py::test_xxx[Tests]: string indices must be integers', indicating that it's not really looping over the test objects (iterating over the top-level dict yields its keys, here the string 'Tests').
I would suggest that parametrizing is the better option here. parametrize expects an iterable; refer to the example and the comments in the code.
import json

import pytest

# method returns an iterator
def get_the_test():
    with open('tests.cfg') as f:
        data = json.loads(f.read())
    return iter(data['Tests'])

# use the iterator object to feed the parameters
@pytest.mark.parametrize("test", get_the_test())
def test_xxx(test):
    assert test['name'] == "test1"
Storing the individual test cases in a list did the trick:
import json

import pytest

def load_test_cases():
    # Opening JSON file
    f = open('tests.cfg')
    testlist = []
    # returns JSON object as a dictionary
    data = json.load(f)
    for test in data['Tests']:
        testlist.append(test)
    f.close()
    return testlist

@pytest.mark.parametrize("test", load_test_cases())
def test_xxx(test):
    assert test['name'] == "test1"
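As a side note, the original complaint was that a plain loop doesn't indicate which test case fails. parametrize accepts an ids argument, so each generated test can be labelled with its name field. A small sketch of that, assuming the same tests.cfg layout:

import json

import pytest

def load_test_cases():
    with open('tests.cfg') as f:
        return json.load(f)['Tests']

# failures are now reported per case, e.g. test_xxx[test1], test_xxx[test2]
@pytest.mark.parametrize("test", load_test_cases(), ids=lambda t: t["name"])
def test_xxx(test):
    assert test['name'] == "test1"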
I am looking at moving to cattrs / attrs from a completely manual process of typing out all my classes but need some help understanding how to achieve the following.
This is a single example but the data returned will be varied and sometimes not with all the fields populated.
data = {
    "data": [
        {
            "broadcaster_id": "123",
            "broadcaster_login": "Sam",
            "language": "en",
            "subscriber_id": "1234",
            "subscriber_login": "Dave",
            "moderator_id": "12345",
            "moderator_login": "Tom",
            "delay": "0",
            "title": "Weekend Events"
        }
    ]
}
@attrs.define
class PartialUser:
    id: int
    login: str

@attrs.define
class Info:
    language: str
    title: str
    delay: int
    broadcaster: PartialUser
    subscriber: PartialUser
    moderator: PartialUser
So I understand how you would construct this, and it works perfectly fine with 1:1 mappings, as expected. But how would you create the PartialUser objects dynamically, since the names are not identical to the JSON response from the API?
instance = cattrs.structure(data["data"][0], Info)
Is there some trick to using a converter?
This would need to be done for around 70 classes, which is why I thought maybe cattrs could modernise and simplify what I'm trying to do.
thanks
Here's one possible solution.
This is the strategy: we will customize the structuring hook by wrapping it. The default hook expects the keys in the input dictionary to match the structure of the class, but here this is not the case. So we'll substitute our own structuring hook that does a little preprocessing and then calls into the default hook.
The default hook for an attrs class cls can be retrieved like this:
from cattrs import Converter
from cattrs.gen import make_dict_structure_fn
c = Converter()
handler = make_dict_structure_fn(cls, c)
Knowing this, we can implement a helper function as follows:
from typing import Any

def group_by_prefix(cls: type, c: Converter, *prefixes: str) -> None:
    handler = make_dict_structure_fn(cls, c)

    def prefix_grouping_hook(val: dict[str, Any], _) -> Any:
        # gather "<prefix>_<rest>" keys into nested dicts, keyed by prefix
        by_prefix = {}
        for key in val:
            if "_" in key and (prefix := (parts := key.split("_", 1))[0]) in prefixes:
                by_prefix.setdefault(prefix, {})[parts[1]] = val[key]
        return handler(val | by_prefix, _)

    c.register_structure_hook(cls, prefix_grouping_hook)
This function takes an attrs class cls, a converter, and a list of prefixes. Then it creates a hook and registers it with the converter for the class cls. Inside, it does a little bit of preprocessing to beat the data into the shape cattrs expects.
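To make the preprocessing concrete: for the sample payload above, the hook merges the grouped keys on top of the flat ones before delegating, so handler effectively receives something like this (a hand-written illustration, not program output; the flat *_id/*_login keys are still present but ignored by the generated hook):

{
    "language": "en",
    "title": "Weekend Events",
    "delay": "0",
    # ...original broadcaster_id, broadcaster_login, etc. keys remain...
    "broadcaster": {"id": "123", "login": "Sam"},
    "subscriber": {"id": "1234", "login": "Dave"},
    "moderator": {"id": "12345", "login": "Tom"},
}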
Here's how you'd use it for the Info class:
>>> c = Converter()
>>> group_by_prefix(Info, c, "broadcaster", "subscriber", "moderator")
>>> print(c.structure(data["data"][0], Info))
Info(language='en', title='Weekend Events', delay=0, broadcaster=PartialUser(id=123, login='Sam'), subscriber=PartialUser(id=1234, login='Dave'), moderator=PartialUser(id=12345, login='Tom'))
You can use this approach to make the solution more elaborate as needed.
Here is the JSON response I get from an API request:
{
    "associates": [
        {
            "name": "DOE",
            "fname": "John",
            "direct_shares": 50,
            "direct_shares_details": {
                "shares_PP": 25,
                "shares_NP": 25
            },
            "indirect_shares": 50,
            "indirect_shares_details": {
                "first_type": {
                    "shares_PP": 25,
                    "shares_NP": 0
                },
                "second_type": {
                    "shares_PP": 25,
                    "shares_NP": 0
                }
            }
        }
    ]
}
However, on some occasions some values will be equal to None. In that case I handle it in my function for all the values that I know will be integers. But it doesn't work in this scenario for the nested keys inside indirect_shares_details:
{
    "associates": [
        {
            "name": "DOE",
            "fname": "John",
            "direct_shares": 50,
            "direct_shares_details": {
                "shares_PP": 25,
                "shares_NP": 25
            },
            "indirect_shares": None,
            "indirect_shares_details": None
        }
    ]
}
So when I run my function to get the API values and put them in a custom dict, I get an error because the keys simply don't exist in the response.
def get_shares_data(response):
    associate_from_api = []
    for i in response["associates"]:
        associate_data = {
            "PM_shares": round(company["Shares"], 2),
            "full_name": i["name"] + " " + i["fname"],
            "details": {
                "shares_in_PM": i["direct_shares"],
                "shares_PP_in_PM": i["direct_shares_details"]["shares_PP"],
                "shares_NP_in_PM": i["direct_shares_details"]["shares_NP"],
                "shares_directe": i["indirect_shares"],
                "shares_indir_PP_1": i["indirect_shares_details"]["first_type"]["shares_PP"],
                "shares_indir_NP_1": i["indirect_shares_details"]["first_type"]["shares_NP"],
                "shares_indir_PP_2": i["indirect_shares_details"]["second_type"]["shares_PP"],
                "shares_indir_NP_2": i["indirect_shares_details"]["second_type"]["shares_NP"],
            }
        }
        for key, value in associate_data["details"].items():
            if value is not None:
                associate_data["details"][key] = value * associate_data["PM_shares"] / 100
            else:
                associate_data["details"][key] = 0.0
        associate_from_api.append(associate_data)
    return associate_from_api
I've tried conditioning the access of the nested keys on the parent key not being None, but I ended up declaring three different dictionaries inside if/else conditions and it turned into a mess. Is there an efficient way to achieve this?
You can try accessing the values using dict.get('key') instead of accessing them directly, as in dict['key'].
Using the first approach, you will get None instead of a KeyError if the key is not there.
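For the nested keys, you can chain get calls and fall back to an empty dict, which also covers the case where the key exists but its value is None. A minimal sketch using the question's loop variable i:

# never raises: missing keys and None values both fall back to {}
details = i.get("indirect_shares_details") or {}
first_type = details.get("first_type") or {}
shares_indir_PP_1 = first_type.get("shares_PP")  # None if absent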
You can try pydantic
Install pydantic
pip install pydantic
# OR
conda install pydantic -c conda-forge
Define some models based on your response structure
from typing import List, Optional

from pydantic import BaseModel

# There are some common fields in your json response,
# so you can put them together.
class ShareDetail(BaseModel):
    shares_PP: int
    shares_NP: int

class IndirectSharesDetails(BaseModel):
    first_type: ShareDetail
    second_type: ShareDetail

class Associate(BaseModel):
    name: str
    fname: str
    direct_shares: int
    direct_shares_details: ShareDetail
    indirect_shares: int = 0  # Sets a default value for this field.
    indirect_shares_details: Optional[IndirectSharesDetails] = None

class ResponseModel(BaseModel):
    associates: List[Associate]
Use the ResponseModel.parse_xxx functions to parse the response. Here I use the parse_file function; you can also use the parse_raw function for a JSON string.
See: https://pydantic-docs.helpmanual.io/usage/models/#helper-functions
def main():
    res = ResponseModel.parse_file("./NullResponse.json",
                                   content_type="application/json")
    print(res.dict())

if __name__ == "__main__":
    main()
Then the response can be successfully parsed, and the input is automatically validated.
I have a multilang FastAPI connected to MongoDB. My document in MongoDB is duplicated in the two languages available and structured this way (simplified example):
{
    "_id": xxxxxxx,
    "en": {
        "title": "Drinking Water Composition",
        "description": "Drinking water composition expressed in... with pesticides.",
        "category": "Water",
        "tags": ["water", "pesticides"]
    },
    "fr": {
        "title": "Composition de l'eau de boisson",
        "description": "Composition de l'eau de boisson exprimée en... présence de pesticides....",
        "category": "Eau",
        "tags": ["eau", "pesticides"]
    }
}
I therefore implemented two models, DatasetFR and DatasetEN; each one makes references to language-specific external models (Enum) for category and tags.
class DatasetFR(BaseModel):
    title: str
    description: str
    category: CategoryFR
    tags: Optional[List[TagsFR]]

# same for DatasetEN, changing the lang tag to EN
In the route definitions I forced the language parameter to declare the corresponding model and get the corresponding validation.
@router.post("?lang=fr", response_description="Add a dataset")
async def create_dataset(request: Request, dataset: DatasetFR = Body(...), lang: str = "fr"):
    ...
    return JSONResponse(status_code=status.HTTP_201_CREATED, content=created_dataset)

@router.post("?lang=en", response_description="Add a dataset")
async def create_dataset(request: Request, dataset: DatasetEN = Body(...), lang: str = "en"):
    ...
    return JSONResponse(status_code=status.HTTP_201_CREATED, content=created_dataset)
But this seems to be in contradiction with the DRY principle. So I wonder if someone knows an elegant solution to, given the parameter lang, dynamically call the corresponding model.
Or, whether we can create a parent model Dataset that takes the lang argument and retrieves the child model.
This would incredibly ease building my API routes and calling my models, and would halve the amount of code to write...
There are 2 parts to the answer (API call and data structure).
For the API call, you could separate them into 2 routes like /api/v1/fr/... and /api/v1/en/... (separating resource representations!) and play with fastapi.APIRouter to declare the same route twice, changing for each route the validation schema to the one you want to use.
You could start by declaring a common BaseModel as an ABC, as well as an ABC for the Enums.
from abc import ABC

from pydantic import BaseModel

class MyModelABC(ABC, BaseModel):
    attribute1: MyEnumABC

class MyModelFr(MyModelABC):
    attribute1: MyEnumFR

class MyModelEn(MyModelABC):
    attribute1: MyEnumEn
Then you can select the appropriate model for the routes through a class factory:
my_class_factory: dict[str, type[MyModelABC]] = {
    "fr": MyModelFr,
    "en": MyModelEn,
}
Finally you can create your routes through a route factory:
def generate_language_specific_router(language: str, ...) -> APIRouter:
    router = APIRouter(prefix=f"/{language}")
    MySelectedModel: type[MyModelABC] = my_class_factory[language]

    @router.post("/")
    def post_something(my_model_data: MySelectedModel):
        ...  # My internal logic

    return router
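Wiring the factory into the application could then look like this (a sketch, ignoring the elided extra parameters above; the app name is an assumption):

from fastapi import FastAPI

app = FastAPI()
for language in ("fr", "en"):
    # mounts /fr/... and /en/... with the matching validation schema
    app.include_router(generate_language_specific_router(language))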
About the second part (internal computation and data storage): internationalisation is often done through hashmaps.
The standard-library gettext module could be investigated.
Otherwise, the original language can be explicitly used as the key/hash, mapping translations to it (also including the original language if you want consistency in your calls).
It can look like:
dictionnary_of_babel = {
    "word1": {
        "en": "word1",
        "fr": "mot1",
    },
    "word2": {
        "en": "word2",
    },
    "Drinking Water Composition": {
        "en": "Drinking Water Composition",
        "fr": "Composition de l'eau de boisson",
    },
}

my_arbitrary_object = {
    "attribute1": "word1",
    "attribute2": "word2",
    "attribute3": "Drinking Water Composition",
}

my_translated_object = {}
for attribute, english_sentence in my_arbitrary_object.items():
    if "fr" in dictionnary_of_babel[english_sentence].keys():
        my_translated_object[attribute] = dictionnary_of_babel[english_sentence]["fr"]
    else:
        my_translated_object[attribute] = dictionnary_of_babel[english_sentence]["en"]  # or without "en"

expected_translated_object = {
    "attribute1": "mot1",
    "attribute2": "word2",
    "attribute3": "Composition de l'eau de boisson",
}
assert expected_translated_object == my_translated_object
This code should run as is
A proposal for the MongoDB representation, if we don't want a separate table for translations, could be a data structure such as:

# normal:
my_attribute: "sentence"

# internationalized:
my_attribute_internationalized: {
    sentence: {
        original_lang: "sentence",
        lang1: "sentence_lang1",
        lang2: "sentence_lang2",
    }
}
A simple tactic to generalize string translation is to define a function named _() that embeds the translation, like:

CURRENT_MODULE_LANG = "fr"

def _(original_string: str) -> str:
    """Switch from original_string to translation"""
    return dictionnary_of_babel[original_string][CURRENT_MODULE_LANG]
Then call it everywhere a translation is needed:
>>> print(_("word1"))
mot1
You can find a reference to this practice in the Django documentation about internationalization in Python code.
For static translation (for example a website or documentation), you can use .po files and editors like poedit (see the French translation of the Python docs for a practical use case)!
Option 1
A solution would be the following. Define lang as a Query parameter and add a regular expression that the parameter should match. In your case, that would be ^(fr|en)$, meaning that only fr or en would be valid inputs. Thus, if no match is found, the request stops there and the client receives a "string does not match regex..." error.
Next, define the body parameter as a generic type of dict and declare it as a Body field, thus instructing FastAPI to expect a JSON body.
Then, create a dictionary of your models that you can use to look up a model by the lang attribute. Once you find the corresponding model, try to parse the JSON body using models[lang].parse_obj(body) (equivalent to using models[lang](**body)). If no ValidationError is raised, you know the resulting model instance is valid. Otherwise, return an HTTP_422_UNPROCESSABLE_ENTITY error, including the errors, which you can handle as desired.
If you would also like FR and EN to be valid lang values, adjust the regex to ignore case using ^(?i)(fr|en)$ instead, and make sure to convert lang to lower case when looking up a model (i.e., models[lang.lower()].parse_obj(body)).
import pydantic
from fastapi import FastAPI, Response, status, Body, Query
from fastapi.responses import JSONResponse
from fastapi.encoders import jsonable_encoder

models = {"fr": DatasetFR, "en": DatasetEN}

@router.post("/", response_description="Add a dataset")
async def create_dataset(body: dict = Body(...), lang: str = Query(..., regex="^(fr|en)$")):
    try:
        model = models[lang].parse_obj(body)
    except pydantic.ValidationError as e:
        return Response(content=e.json(), status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
                        media_type="application/json")
    return JSONResponse(content=jsonable_encoder(model.dict()), status_code=status.HTTP_201_CREATED)
Update
Since the two models have identical attributes (i.e., title and description), you could define a parent model (e.g., Dataset) with those two attributes, and have the DatasetFR and DatasetEN models inherit them.
class Dataset(BaseModel):
    title: str
    description: str

class DatasetFR(Dataset):
    category: CategoryFR
    tags: Optional[List[TagsFR]]

class DatasetEN(Dataset):
    category: CategoryEN
    tags: Optional[List[TagsEN]]
Additionally, it might be a better approach to move the logic from inside the route to a dependency function and have it return the model if it passes validation; otherwise, raise an HTTPException, as also demonstrated by @tiangolo. You can use jsonable_encoder, which is internally used by FastAPI, to encode the validation errors (e.errors()); the same function can also be used when returning the JSONResponse.
from fastapi.exceptions import HTTPException
from fastapi import Depends

models = {"fr": DatasetFR, "en": DatasetEN}

async def checker(body: dict = Body(...), lang: str = Query(..., regex="^(fr|en)$")):
    try:
        model = models[lang].parse_obj(body)
    except pydantic.ValidationError as e:
        raise HTTPException(detail=jsonable_encoder(e.errors()),
                            status_code=status.HTTP_422_UNPROCESSABLE_ENTITY)
    return model

@router.post("/", response_description="Add a dataset")
async def create_dataset(model: Dataset = Depends(checker)):
    return JSONResponse(content=jsonable_encoder(model.dict()), status_code=status.HTTP_201_CREATED)
Option 2
A further approach would be to have a single Pydantic model (let's say Dataset) and customise the validators for the category and tags fields. You can also define lang as part of Dataset, so there is no need to have it as a query parameter. You can use a set, as described here, to keep the values of each Enum class, so that you can efficiently check whether a value exists in the Enum, and have dictionaries to quickly look up a set using the lang attribute. In the case of tags, to verify that every element in the list is valid, use set.issubset, as described here. If an attribute is not valid, you can raise ValueError, as shown in the documentation, "which will be caught and used to populate ValidationError" (see the "Note" section here). Again, if you need the lang codes written in uppercase to be valid inputs, adjust the regex pattern, as described earlier.
P.S. You don't even need to use Enum with this approach. Instead, populate each set below with the permitted values. For instance:

categories_FR = {"Eau"}
categories_EN = {"Water"}
tags_FR = {"eau", "pesticides"}
tags_EN = {"water", "pesticides"}

Additionally, if you would like not to use a regex, but rather have a custom validation error for the lang attribute as well, you could add it to the same validator decorator and perform validation similar (and prior) to the other two fields.
from pydantic import validator

categories_FR = set(item.value for item in CategoryFR)
categories_EN = set(item.value for item in CategoryEN)
tags_FR = set(item.value for item in TagsFR)
tags_EN = set(item.value for item in TagsEN)
cats = {"fr": categories_FR, "en": categories_EN}
tags = {"fr": tags_FR, "en": tags_EN}

def raise_error(values):
    raise ValueError(f'value is not a valid enumeration member; permitted: {values}')

class Dataset(BaseModel):
    lang: str = Body(..., regex="^(fr|en)$")
    title: str
    description: str
    category: str
    tags: List[str]

    @validator("category", "tags")
    def validate_atts(cls, v, values, field):
        lang = values.get('lang')
        if lang:
            if field.name == "category":
                if v not in cats[lang]:
                    raise_error(cats[lang])
            elif field.name == "tags":
                if not set(v).issubset(tags[lang]):
                    raise_error(tags[lang])
        return v
@router.post("/", response_description="Add a dataset")
async def create_dataset(model: Dataset):
    return JSONResponse(content=jsonable_encoder(model.dict()), status_code=status.HTTP_201_CREATED)
Option 3
Another approach would be to use Discriminated Unions, as described in this answer.
As per the documentation:
When Union is used with multiple submodels, you sometimes know exactly which submodel needs to be checked and validated and want to enforce this. To do that you can set the same field - let's call it my_discriminator - in each of the submodels with a discriminated value, which is one (or many) Literal value(s). For your Union, you can set the discriminator in its value: Field(discriminator='my_discriminator').
Setting a discriminated union has many benefits:
validation is faster since it is only attempted against one model
only one explicit error is raised in case of failure
the generated JSON schema implements the associated OpenAPI specification
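A minimal sketch of what this could look like here, assuming pydantic v1.9+ and adding a lang literal to each model (field lists shortened):

from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field, parse_obj_as

class DatasetFR(BaseModel):
    lang: Literal["fr"]
    title: str
    description: str

class DatasetEN(BaseModel):
    lang: Literal["en"]
    title: str
    description: str

# validation is only attempted against the model whose lang matches
AnyDataset = Annotated[Union[DatasetFR, DatasetEN], Field(discriminator="lang")]

ds = parse_obj_as(AnyDataset, {"lang": "fr", "title": "t", "description": "d"})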
I have a YAML file from which I'd like to parse only the description variable; however, I know that the exclamation points in my CloudFormation template (YAML file) are giving PyYAML trouble.
I am receiving the following error:
yaml.constructor.ConstructorError: could not determine a constructor for the tag '!Equals'
The file has many !Ref and !Equals tags. How can I ignore these constructors and get the specific variable I'm looking for -- in this case, the description variable?
If you have to deal with a YAML document containing multiple different tags and are only interested in a subset of them, you should still handle them all. If the elements you are interested in are nested within other tagged constructs, you at least need to handle all of the "enclosing" tags properly.
There is however no need to handle all of the tags individually: you can write one constructor routine that handles mappings, sequences and scalars, and register it for all tags with PyYAML's SafeLoader:
import yaml

inp = """\
MyEIP:
  Type: !Join [ "::", [AWS, EC2, EIP] ]
  Properties:
    InstanceId: !Ref MyEC2Instance
"""

description = []

def any_constructor(loader, tag_suffix, node):
    if isinstance(node, yaml.MappingNode):
        return loader.construct_mapping(node)
    if isinstance(node, yaml.SequenceNode):
        return loader.construct_sequence(node)
    return loader.construct_scalar(node)

yaml.add_multi_constructor('', any_constructor, Loader=yaml.SafeLoader)

data = yaml.safe_load(inp)
print(data)
which gives:
{'MyEIP': {'Type': ['::', ['AWS', 'EC2', 'EIP']], 'Properties': {'InstanceId': 'MyEC2Instance'}}}
(inp can also be a file opened for reading.)
As you can see, the above will continue to work if an unexpected !Join tag shows up in your code, as well as any other tag like !Equals. The tags are just dropped.
Since there are no variables in YAML, it is a bit of guesswork what you mean by "like to parse the description variable only". If that value has an explicit tag (e.g. !Description), you can filter out the values by adding a few lines to any_constructor, matching on the tag_suffix parameter:
    if tag_suffix == u'!Description':
        description.append(loader.construct_scalar(node))
It is however more likely that there is some key in a mapping whose value is a scalar description, and that you are interested in the value associated with that key.
    if isinstance(node, yaml.MappingNode):
        d = loader.construct_mapping(node)
        for k in d:
            if k == 'description':
                description.append(d[k])
        return d
If you know the exact position in the data hierarchy, you can of course also walk the data structure and extract anything you need based on keys or list positions. Especially in that case you'd be better off using my ruamel.yaml, as it can load such tagged YAML in round-trip mode without extra effort (assuming the above inp):
from ruamel.yaml import YAML

with YAML() as yaml:
    data = yaml.load(inp)
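For example, here is a small recursive walker that collects every value stored under a description key, whatever the nesting (a sketch; it works on the plain dicts and lists produced by the safe_load approach above):

def find_descriptions(d, found=None):
    # walk nested dicts/lists, collecting values under 'description' keys
    if found is None:
        found = []
    if isinstance(d, dict):
        for k, v in d.items():
            if k == 'description':
                found.append(v)
            find_descriptions(v, found)
    elif isinstance(d, list):
        for item in d:
            find_descriptions(item, found)
    return found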
You can define custom constructors using a custom yaml.SafeLoader:
import yaml

doc = '''
Conditions:
  CreateNewSecurityGroup: !Equals [!Ref ExistingSecurityGroup, NONE]
'''

class Equals(object):
    def __init__(self, data):
        self.data = data
    def __repr__(self):
        return "Equals(%s)" % self.data

class Ref(object):
    def __init__(self, data):
        self.data = data
    def __repr__(self):
        return "Ref(%s)" % self.data

def create_equals(loader, node):
    value = loader.construct_sequence(node)
    return Equals(value)

def create_ref(loader, node):
    value = loader.construct_scalar(node)
    return Ref(value)

class Loader(yaml.SafeLoader):
    pass

yaml.add_constructor(u'!Equals', create_equals, Loader)
yaml.add_constructor(u'!Ref', create_ref, Loader)

a = yaml.load(doc, Loader)
print(a)
Outputs:
{'Conditions': {'CreateNewSecurityGroup': Equals([Ref(ExistingSecurityGroup), 'NONE'])}}