Replacing strings in YAML using Python

I have the following YAML:
instance:
  name: test
  flavor: x-large
  image: centos7
tasks:
  centos-7-prepare:
    priority: 1
    details::
      ha: 0
      args:
        template: &startup
          name: startup-centos-7
          version: 1.2
        timeout: 1800
  centos-7-crawl:
    priority: 5
    details::
      ha: 1
      args:
        template: *startup
        timeout: 0
The first task defines template name and version, which is then used by other tasks. Template definition should not change, however others especially task name will.
What would be the best way to change template name and version in Python?
I have the following regex for matching (using re.DOTALL):
template:.*name: (.*?)version: (.*?)\s
However did not figure out re.sub usage so far. Or is there any more convenient way of doing this?
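For reference, a text-level re.sub along the lines the question attempts could look like this (a fragile sketch with an adapted pattern; the YAML-aware answers below are the safer route):

import re

with open('input.yaml') as f:
    text = f.read()

# Capture the text around the two fields and splice in the new values;
# count=1 limits the change to the anchored template definition.
pattern = re.compile(r'(template: &startup\s+name: ).*?(\s+version: ).*?(\s)', re.DOTALL)
text = pattern.sub(r'\g<1>startup-centos-8\g<2>1.3\g<3>', text, count=1)

with open('input.yaml', 'w') as f:
    f.write(text)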

For this kind of round-tripping (load-modify-dump) of YAML you should be using ruamel.yaml (disclaimer: I am the author of that package).
If your input is in input.yaml, you can then relatively easily find
the name and version under key template and update them:
import sys
import ruamel.yaml
def find_template(d):
    if isinstance(d, list):
        for elem in d:
            x = find_template(elem)
            if x is not None:
                return x
    elif isinstance(d, dict):
        for k in d:
            v = d[k]
            if k == 'template':
                if 'name' in v and 'version' in v:
                    return v
            x = find_template(v)
            if x is not None:
                return x
    return None

yaml = ruamel.yaml.YAML()
# yaml.indent(mapping=4, sequence=4, offset=2)
yaml.preserve_quotes = True
with open('input.yaml') as ifp:
    data = yaml.load(ifp)
template = find_template(data)
template['name'] = 'startup-centos-8'
template['version'] = '1.3'
yaml.dump(data, sys.stdout)
which gives:
instance:
  name: test
  flavor: x-large
  image: centos7
tasks:
  centos-7-prepare:
    priority: 1
    'details:':
      ha: 0
      args:
        template: &startup
          name: startup-centos-8
          version: '1.3'
        timeout: 1800
  centos-7-crawl:
    priority: 5
    'details:':
      ha: 1
      args:
        template: *startup
        timeout: 0
Please note that the name of the anchor (startup) and its alias are preserved, that the odd details: key (with its trailing colon) is re-dumped with the quotes it needs, and that the new version is quoted because it was assigned as the string '1.3'.

I would parse the YAML file into a dictionary, then edit the field and write the dictionary back out to YAML.
See this question for discussion on parsing YAML in Python: How can I parse a YAML file in Python. I think you would end up with something like this:
from ruamel.yaml import YAML
from io import StringIO
yaml = YAML(typ='safe')
yaml.default_flow_style = False

# Parse from string (doc holds the YAML text, e.g. read from a file)
with open('input.yaml') as f:
    doc = f.read()
myConfig = yaml.load(doc)

# Example replacement code; note that the key in the question's YAML is
# literally 'details:' and that the anchor name is not part of the loaded value
for task in myConfig["tasks"]:
    template = myConfig["tasks"][task]["details:"]["args"]["template"]
    if template["name"] == "startup-centos-7":
        template["name"] = "new value"

# Convert back to string
buf = StringIO()
yaml.dump(myConfig, buf)
updatedYml = buf.getvalue()
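If you need the result in a file rather than a string, the same YAML instance can dump straight to a file handle (a minimal sketch; the output file name is illustrative):

with open('updated.yaml', 'w') as f:
    yaml.dump(myConfig, f)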

Related

Identify if yaml key is anchor or pointer

I use ruamel.yaml in order to parse YAML files and I'd like to identify if the key is the anchor itself or just a pointer. Given the following:
foo: &some_anchor
  bar: 1
baz: *some_anchor
I'd like to understand that foo is the actual anchor and baz is a pointer. From what I can see, there's an anchor property on the node (and also yaml_anchor method), but both baz and foo show that their anchor is some_anchor - meaning that I cannot differentiate.
How can I get this info?
Since PyYAML and ruamel.yaml load an alias node as a reference to the object loaded from the corresponding anchor node, you can traverse the object tree and check whether each node is a reference to a previously visited object or not.
The following is a simple example only checking dictionaries.
from ruamel.yaml import YAML
root = YAML().load('''
foo: &some_anchor
  bar: 1
baz: *some_anchor
''')

dict_ids = set()

def visit(parent):
    if isinstance(parent, dict):
        i = id(parent)
        print(parent, ', is_alias:', i in dict_ids)
        dict_ids.add(i)
        for k, v in parent.items():
            visit(v)
    elif isinstance(parent, list):
        for e in parent:
            visit(e)

visit(root)
This will output the following.
ordereddict([('foo', ordereddict([('bar', 1)])), ('baz', ordereddict([('bar', 1)]))]) , is_alias: False
ordereddict([('bar', 1)]) , is_alias: False
ordereddict([('bar', 1)]) , is_alias: True
In your example, &some_anchor is the anchor for the single-element mapping bar: 1, and *some_anchor is the alias. Writing "foo is the actual anchor and baz is a pointer" is, IMO, both incorrect terminology and a confusion of keys with their (anchored/aliased) values. If you had the YAML document:
- 3
- 5
- 9
- &some_anchor
  bar: 1
- 42
- *some_anchor
would you actually say, probably after carefully counting, that 4 is the anchor and 6 is the pointer (or 3 and 5, depending on where you start counting)?
If you want to test whether a key's value was an anchored node in the YAML, or an aliased node, you'll have to look at the value, and you'll find that it is the same Python data structure for the keys foo and baz respectively.
Which key's value gets the anchor on dumping, and which key's (or keys') value(s) are dumped as an alias, is entirely determined by which gets dumped first, as the YAML specification states that an anchor has to come before its use as an alias (an anchor can come after an alias only if it is re-defined).
As @relent95 describes, you should recursively walk over the data structure you loaded (to see which key gets there first) and, in both ruamel.yaml and PyYAML, look at the id().
But for PyYAML that only works for complex data (dict, list, objects) as it throws away anchoring information and will
not find the same id() on e.g. an anchored integer value.
The alternative to using the id is to look at the actual anchor name that ruamel.yaml stores in attribute/property anchor.
If you know up front that your YAML document is as simple as your example (anchored/aliased nodes are values of the root-level mapping), you can do:
import sys
import ruamel.yaml
yaml_str = """\
foo: &some_anchor
  bar: 1
baz: *some_anchor
oof: 42
"""
def is_keys_value_anchor(key, data, verbose=0):
    anchor_found = set()
    for k, v in data.items():
        res = None
        try:
            anchor = v.anchor.value
            if anchor is not None:
                res = anchor not in anchor_found
                anchor_found.add(anchor)
        except AttributeError:
            pass
        if k == key:
            break
    if verbose > 0:
        print(f'key "{key}" {res}')
    return res
yaml = ruamel.yaml.YAML()
data = yaml.load(yaml_str)
is_keys_value_anchor('foo', data, verbose=1)
is_keys_value_anchor('baz', data, verbose=1)
is_keys_value_anchor('oof', data, verbose=1)
which gives:
key "foo" True
key "baz" False
key "oof" None
But this is inefficient for root mappings with lots of keys, and won't find anchors/aliases that are nested deeply in the document. A more generic approach is to recursively walk the data structure once and create a dict with the anchor used as key, and as value a list of "paths", a path itself being a list of keys/indices with which you can traverse the data structure starting at the root. The first path in the list is the anchor, the rest are aliases:
import sys
import ruamel.yaml
yaml_str = """\
foo: &some_anchor
- bar: 1
- klm: &anchored_num 42
baz:
  xyz:
  - *some_anchor
oof: [1, 2, c: 13, magic: [*anchored_num]]
"""

def find_anchor_alias_paths(data, path=None, res=None):
    def check_add_anchor(d, path, anchors):
        # returns False when an alias is found, to prevent recursing into a node twice
        try:
            anchor = d.anchor.value
            if anchor is not None:
                tmp = anchors.setdefault(anchor, [])
                tmp.append(path)
                return len(tmp) == 1
        except AttributeError:
            pass
        return True

    if path is None:
        path = []
    if res is None:
        res = {}
    if isinstance(data, dict):
        for k, v in data.items():
            next_path = path.copy()
            next_path.append(k)
            if check_add_anchor(v, next_path, res):
                find_anchor_alias_paths(v, next_path, res)
    elif isinstance(data, list):
        for idx, elem in enumerate(data):
            next_path = path.copy()
            next_path.append(idx)
            if check_add_anchor(elem, next_path, res):
                find_anchor_alias_paths(elem, next_path, res)
    return res

yaml = ruamel.yaml.YAML()
data = yaml.load(yaml_str)
anchor_alias_paths = find_anchor_alias_paths(data)
for anchor, paths in anchor_alias_paths.items():
    print(f'anchor: "{anchor}", anchor_path: {paths[0]}, alias_path(s): {paths[1:]}')
print('value for last anchor/alias found', data.mlget(paths[-1], list_ok=True))
which gives:
anchor: "some_anchor", anchor_path: ['foo'], alias_path(s): [['baz', 'xyz', 0]]
anchor: "anchored_num", anchor_path: ['foo', 1, 'klm'], alias_path(s): [['oof', 3, 'magic', 0]]
value for last anchor/alias found 42
You can then test the paths you are interested in against the values returned by find_anchor_alias_paths, or the key against the final elements of such paths.
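For example, against the output above (a small usage sketch building on anchor_alias_paths):

paths = anchor_alias_paths['some_anchor']
print(paths[0] == ['foo'])              # True: the value at 'foo' carries the anchor
print(['baz', 'xyz', 0] in paths[1:])   # True: this path reaches the value via an alias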

python read dict with array value

I have a test.yaml:
exclude:
  - name: apple
    version: [3]
  - name: pear
    version: [2,4,5]
I have a function to check these values in the dict and compare them.
def do_something(fruit_name: str, data: dict):
    result = []
    versions = [2, 3, 5, 6, 7, 8]
    for version in versions:
        url = f"some.api.url/subjects/{fruit_name}/versions/{version}"
        response = sr_rest_api.session.post(url, json=data).json()
        config = read_yaml("test.yaml")  # not sure
        for schema in config['exclude']:  # not sure
            # I'm stuck here
            # if version and name exist in the yaml, skip
            # else, append to the list such as:
            pass
        else:
            result.append(response["is_fruit"])  # Boolean
    return result
I'm not sure how to unwrap the array from a dictionary.
Result from reading the yaml:
{'exclude': [{'name': 'apple', 'version': [3]},
             {'name': 'pear', 'version': [2, 4, 5]}]}
Try this:
def do_something(fruit_name: str, data: dict):
    result = []
    # do this once outside the loop
    config = read_yaml("test.yaml")
    # make the config exclusions easier to work with
    exclusions = {schema["name"]: schema for schema in config["exclude"]}
    versions = [2, 3, 5, 6, 7, 8]
    for version in versions:
        if fruit_name in exclusions and version in exclusions[fruit_name]["version"]:
            # this is an excluded fruit version, skip it
            continue
        else:
            # fetch the data and update result
            url = f"some.api.url/subjects/{fruit_name}/versions/{version}"
            response = sr_rest_api.session.post(url, json=data).json()
            result.append(response["is_fruit"])
    return result
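With the example test.yaml above, the comprehension builds a mapping like this (shown for illustration):

exclusions = {'apple': {'name': 'apple', 'version': [3]},
              'pear': {'name': 'pear', 'version': [2, 4, 5]}}

so the skip test is a plain dict lookup plus a list membership check.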

Clone Kubernetes objects programmatically using the Python API

The Python API is available to read objects from a cluster. By cloning we mean:

1. Get a copy of an existing Kubernetes object using kubectl get
2. Change the properties of the object
3. Apply the new object

The --export option of kubectl get, which used to help with step 1, was deprecated in 1.14. How can we use the Python Kubernetes API to do steps 1-3 described above?
There are multiple questions about how to extract YAML from Python API objects, but it's unclear how to transform one Kubernetes API object into another.
Just use to_dict() which is now offered by Kubernetes Client objects. Note that it creates a partly deep copy. So to be safe:
copied_obj = copy.deepcopy(obj.to_dict())
Dicts can be passed to create* and patch* methods.
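A minimal sketch of that read-modify-create flow (the deployment name and namespace are illustrative, and this clones the model object itself; the dict route via to_dict() works analogously):

import copy
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

obj = apps.read_namespaced_deployment('my-deploy', 'default')  # hypothetical names
clone = copy.deepcopy(obj)
# replace server-populated metadata and status before re-creating
clone.metadata = client.V1ObjectMeta(name='my-deploy-clone', namespace='default')
clone.status = None
apps.create_namespaced_deployment(namespace='default', body=clone)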
For convenience, you can also wrap the dict in Prodict.
copied_obj = Prodict.from_dict(copy.deepcopy(obj.to_dict()))
The final issue is getting rid of superfluous fields. (Unfortunately, Kubernetes sprinkles them throughout the object.) I use kopf's internal facility for getting the "essence" of an object. (It takes care of the deep copy.)
copied_obj = kopf.AnnotationsDiffBaseStorage().build(body=kopf.Body(obj.to_dict()))
copied_obj = Prodict.from_dict(copied_obj)
After looking at the requirement, I spent a couple of hours researching the Kubernetes Python API. Issue 340 and others ask about how to transform a Kubernetes API object into a dict, but the only workaround I found was to retrieve the raw data and then convert it to JSON.
The following code uses the Kubernetes API to get a deployment and its related hpa from the namespaced objects, retrieving their raw values as JSON.
Then, after transforming the data into a dict, you can optionally clean up the data by removing null references.
Once you are done, you can dump the dict as a YAML payload and save it to the file system.
Finally, you can apply it either using kubectl or the Kubernetes Python API.
Note:
Make sure to set KUBECONFIG=config so that you can point to a cluster
Make sure to adjust the values of origin_obj_name = "istio-ingressgateway" and origin_obj_namespace = "istio-system" with the name of the corresponding objects to be cloned in the given namespace.
import os
import logging
import json

import yaml
import crayons
from kubernetes import client, config
from kubernetes.client.rest import ApiException

logging.basicConfig(level=logging.INFO)
LOGGER = logging.getLogger(" IngressGatewayCreator ")

class IngressGatewayCreator:

    @staticmethod
    def clone_default_ingress(clone_context):
        # Clone the deployment
        IngressGatewayCreator.clone_deployment_object(clone_context)
        # Clone the deployment's HPA
        IngressGatewayCreator.clone_hpa_object(clone_context)

    @staticmethod
    def clone_deployment_object(clone_context):
        kubeconfig = os.getenv('KUBECONFIG')
        config.load_kube_config(kubeconfig)
        v1apps = client.AppsV1beta1Api()
        deployment_name = clone_context.origin_obj_name
        namespace = clone_context.origin_obj_namespace
        try:
            # gets an instance of the api without deserialization to model
            # https://github.com/kubernetes-client/python/issues/574#issuecomment-405400414
            deployment = v1apps.read_namespaced_deployment(deployment_name, namespace, _preload_content=False)
        except ApiException as error:
            if error.status == 404:
                LOGGER.info("Deployment %s not found in namespace %s", deployment_name, namespace)
                return
            raise
        # Clone the deployment object as a dict
        cloned_dict = IngressGatewayCreator.clone_k8s_object(deployment, clone_context)
        # Change additional objects
        cloned_dict["spec"]["selector"]["matchLabels"]["istio"] = clone_context.name
        cloned_dict["spec"]["template"]["metadata"]["labels"]["istio"] = clone_context.name
        # Save the deployment template in the output dir
        context.save_clone_as_yaml(cloned_dict, "deployment")

    @staticmethod
    def clone_hpa_object(clone_context):
        kubeconfig = os.getenv('KUBECONFIG')
        config.load_kube_config(kubeconfig)
        hpas = client.AutoscalingV1Api()
        hpa_name = clone_context.origin_obj_name
        namespace = clone_context.origin_obj_namespace
        try:
            # gets an instance of the api without deserialization to model
            # https://github.com/kubernetes-client/python/issues/574#issuecomment-405400414
            hpa = hpas.read_namespaced_horizontal_pod_autoscaler(hpa_name, namespace, _preload_content=False)
        except ApiException as error:
            if error.status == 404:
                LOGGER.info("HPA %s not found in namespace %s", hpa_name, namespace)
                return
            raise
        # Clone the HPA object as a dict
        cloned_dict = IngressGatewayCreator.clone_k8s_object(hpa, clone_context)
        # Change additional objects
        cloned_dict["spec"]["scaleTargetRef"]["name"] = clone_context.name
        # Save the HPA template in the output dir
        context.save_clone_as_yaml(cloned_dict, "hpa")

    @staticmethod
    def clone_k8s_object(k8s_object, clone_context):
        # Manipulate at the dict level, not via the k8s api, from the fetched raw object
        # https://github.com/kubernetes-client/python/issues/574#issuecomment-405400414
        cloned_obj = json.loads(k8s_object.data)
        labels = cloned_obj['metadata']['labels']
        labels['istio'] = clone_context.name
        cloned_obj['status'] = None
        # Scrub by removing the "null" and "None" values
        cloned_obj = IngressGatewayCreator.scrub_dict(cloned_obj)
        # Patch the metadata with the name and labels adjusted
        cloned_obj['metadata'] = {
            "name": clone_context.name,
            "namespace": clone_context.origin_obj_namespace,
            "labels": labels
        }
        return cloned_obj

    # https://stackoverflow.com/questions/12118695/efficient-way-to-remove-keys-with-empty-strings-from-a-dict/59959570#59959570
    @staticmethod
    def scrub_dict(d):
        new_dict = {}
        for k, v in d.items():
            if isinstance(v, dict):
                v = IngressGatewayCreator.scrub_dict(v)
            if isinstance(v, list):
                v = IngressGatewayCreator.scrub_list(v)
            if v not in (u'', None, {}):
                new_dict[k] = v
        return new_dict

    # https://stackoverflow.com/questions/12118695/efficient-way-to-remove-keys-with-empty-strings-from-a-dict/59959570#59959570
    @staticmethod
    def scrub_list(d):
        scrubbed_list = []
        for i in d:
            if isinstance(i, dict):
                i = IngressGatewayCreator.scrub_dict(i)
            scrubbed_list.append(i)
        return scrubbed_list

class IngressGatewayContext:
    def __init__(self, manifest_dir, name, hostname, nats, type):
        self.manifest_dir = manifest_dir
        self.name = name
        self.hostname = hostname
        self.nats = nats
        self.ingress_type = type
        self.origin_obj_name = "istio-ingressgateway"
        self.origin_obj_namespace = "istio-system"

    def save_clone_as_yaml(self, k8s_object, kind):
        try:
            # Just try to create it in case it doesn't exist
            os.makedirs(self.manifest_dir)
        except FileExistsError:
            LOGGER.debug("Dir already exists %s", self.manifest_dir)
        full_file_path = os.path.join(self.manifest_dir, self.name + '-' + kind + '.yaml')
        # Store in the file-system with the name provided
        # https://stackoverflow.com/questions/12470665/how-can-i-write-data-in-yaml-format-in-a-file/18210750#18210750
        with open(full_file_path, 'w') as yaml_file:
            yaml.dump(k8s_object, yaml_file, default_flow_style=False)
        LOGGER.info(crayons.yellow("Saved %s '%s' at %s: \n%s"), kind, self.name, full_file_path, k8s_object)

try:
    k8s_clone_name = "http2-ingressgateway"
    hostname = "my-nlb-awesome.a.company.com"
    nats = ["123.345.678.11", "333.444.222.111", "33.221.444.23"]
    manifest_dir = "out/clones"
    context = IngressGatewayContext(manifest_dir, k8s_clone_name, hostname, nats, "nlb")
    IngressGatewayCreator.clone_default_ingress(context)
except Exception as err:
    print("ERROR: {}".format(err))
Not python, but I've used jq in the past to quickly clone something with the small customisations required for each use case (usually cloning secrets into a new namespace).
kc get pod whatever-85pmk -o json \
  | jq 'del(.status, .metadata) | .metadata.name="newname"' \
  | kc apply -f - -o yaml --dry-run
This is really easy to do with Hikaru.
Here is an example from my own open source project:
from typing import List

from hikaru import HikaruBase

def duplicate_without_fields(obj: HikaruBase, omitted_fields: List[str]):
    """
    Duplicate a hikaru object, omitting the specified fields

    This is useful when you want to compare two versions of an object and first
    "cleanup" fields that shouldn't be compared.

    :param HikaruBase obj: A kubernetes object
    :param List[str] omitted_fields: List of fields to be omitted. Field name format
        should be '.' separated. For example: ["status", "metadata.generation"]
    """
    if obj is None:
        return None

    duplication = obj.dup()
    for field_name in omitted_fields:
        field_parts = field_name.split(".")
        try:
            if len(field_parts) > 1:
                parent_obj = duplication.object_at_path(field_parts[:-1])
            else:
                parent_obj = duplication
            setattr(parent_obj, field_parts[-1], None)
        except Exception:
            pass  # in case the field doesn't exist on this object
    return duplication
Dumping the object to YAML afterwards or re-applying it to the cluster is trivial with Hikaru.
We're using this to clean up objects so that we can show users a GitHub-style diff when objects change, without spammy fields that change often, like generation.
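A hedged usage sketch (the manifest file name and omitted fields are assumptions; load_full_yaml and get_yaml are Hikaru's load/dump helpers):

from hikaru import load_full_yaml, get_yaml

docs = load_full_yaml(path="deployment.yaml")  # hypothetical manifest
obj = docs[0]
cleaned = duplicate_without_fields(obj, ["status", "metadata.generation"])
print(get_yaml(cleaned))  # dump the cleaned duplicate back to YAML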

How do we convert a string into JSON

I want to convert an Ansible INI inventory file into JSON, so I use the code below.
The common_shared file:
[sql]
x.com
[yps_db]
y.com
[ems_db]
c.com
[scc_db]
d.com
[all:vars]
server_url="http://x.com/x"
app_host=abc.com
server_url="https://x.com"
[haproxy]
1.1.1.1 manual_hostname=abc instance_id=i-dddd
2.2.2.2 manual_hostname=xyz instance_id=i-cccc
For converting the Ansible INI file to JSON:
import json

options = {}
f = open('common_shared')
x = f.read()
config_entries = x.split()
for key, value in zip(config_entries[0::2], config_entries[1::2]):
    cleaned_key = key.replace("[", '').replace("]", '')
    options[cleaned_key] = value
print json.dumps(options, indent=4, ensure_ascii=False)
But it will print this result:
{
    "scc_db": "xxx",
    "haproxy": "x.x.x.x",
    "manual_hostname=xxx": "instance_id=xx",
    "ems_db": "xxx",
    "yps_db": "xxx",
    "all:vars": "yps_server_url=\"xxx\"",
    "1.1.1.5": "manual_hostname=xxx",
    "sql": "xxx",
    "xxx": "scc_server_url=xxxx\""
}
But I want to print the result in proper JSON format and am not able to understand how. I tried ConfigParser but couldn't get it to print in the desired format.
You can use ConfigParser to read in your file, and then do the conversion to a dict to dump.
from ConfigParser import ConfigParser
from collections import defaultdict

config = ConfigParser()
config.readfp(open('/path/to/file.ini'))

def convert_to_dict(config):
    config_dict = defaultdict(dict)
    for section in config.sections():
        for key, value in config.items(section):
            config_dict[section][key] = value
    return config_dict

print convert_to_dict(config)
EDIT
As you stated in your comment, some line items are just 'things' with no value, so the below might work for you.
import re
from collections import defaultdict

SECTION_HEADER_RE = re.compile('^\[.*\]$')
KEY_VALUE_RE = re.compile('^.*=.*$')

def convert_ansible_to_dict(filepath_and_name):
    ansible_dict = defaultdict(dict)
    with open(filepath_and_name) as input_file:
        section_header = None
        for line in input_file:
            if SECTION_HEADER_RE.findall(line.strip()):
                section_header = SECTION_HEADER_RE.findall(line.strip())[0]
            elif KEY_VALUE_RE.findall(line.strip()):
                if section_header:
                    # Make sure you have had a header section prior to the line
                    key, value = KEY_VALUE_RE.findall(line.strip())[0].split('=', 1)
                    ansible_dict[section_header][key] = value
            else:
                if line.strip() and section_header:
                    # As they're just attributes without value, assign the value None
                    ansible_dict[section_header][line.strip()] = None
    return ansible_dict
This is a naive approach and might not catch all corner cases for you, but perhaps it's a step in the right direction. If you have any bare attributes prior to your first section header, they will not be included in the dictionary, as it would not be known where to apportion them, and the regex for key=value pairs works on the assumption that there will be only one equals sign in the line. I'm sure there are many other cases I'm not seeing right now, but hopefully this helps.
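A quick usage sketch, assuming the common_shared file from the question:

import json

ansible_dict = convert_ansible_to_dict('common_shared')
print json.dumps(ansible_dict, indent=4)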
Christian's answer is the correct one: Use ConfigParser.
Your issue with his solution is that you have an incorrectly formatted INI file.
You need to change all your properties to either
key=value
or
key: value
e.g.
[sql]
aaaaaaa: true
https://wiki.python.org/moin/ConfigParserExamples
https://en.wikipedia.org/wiki/INI_file#Keys_.28properties.29

PyYaml Expect scalar but found Sequence

I am trying to load YAML with user-defined tags in my Python code, with PyYAML. I don't have much experience with PyYAML loaders, constructors, representers, parsers, resolvers and dumpers.
Below is the code I could come up with:
import yaml, os
from collections import OrderedDict

root = os.path.curdir

def construct_position_object(loader, suffix, node):
    return loader.construct_yaml_map(node)

def construct_position_sym(loader, node):
    return loader.construct_yaml_str(node)

yaml.add_multi_constructor(u"!Position", construct_position_object)
yaml.add_constructor(u"!Position", construct_position_sym)

def main():
    file = open('C:\calcWorkspace\\13.3.1.0\PythonTest\YamlInput\Exception_V5.yml', 'r')
    datafile = yaml.load_all(file)
    for data in datafile:
        yaml.add_representer(literal, literal_presenter)
        yaml.add_representer(OrderedDict, ordered_dict_presenter)
        d = OrderedDict(l=literal(data))
        print yaml.dump(data, default_flow_style=False)
    print datafile.get('abcd').get('addresses')

yaml.add_constructor('!include', include)

def include(loader, node):
    """Include another YAML file."""
    global root
    old_root = root
    filename = os.path.join(root, loader.construct_scalar(node))
    root = os.path.split(filename)[0]
    data = yaml.load(open(filename, 'r'))
    root = old_root
    return data

class literal(str): pass

def literal_presenter(dumper, data):
    return dumper.represent_scalar('tag:yaml.org,2002:str', data, style='|')

def ordered_dict_presenter(dumper, data):
    return dumper.represent_dict(data.items())

if __name__ == '__main__':
    main()
This is my Yaml file:
#sid: Position[SIK,sourceDealID,UTPI]
sid: Position[1232546, 0634.10056718.0.1096840.0,]
ASSET_CLASS: "Derivative"
SOURCE_DEAL_ID: "0634.10056718.0.1096840.0"
INSTR_ID: "UKCM.L"
PRODUCT_TYPE_ID: 0
SOURCE_PRODUCT_TYPE: "CDS"
NOTIONAL_USD: 14.78
NOTIONAL_CCY:
LOB:
PRODUCT_TYPE:
#GIM
UNDERLIER_INSTRUMENT_ID:
MTM_USD:
MTM_CCY:
TRADER_SID:
SALES_PERSON_SID:
CLIENT_SPN:
CLIENT_UCN:
CLIENT_NAME:
LE:
---
sid: Position[1258642, 0634.10056718.0.1096680.0,]
#sid: Position[1]
ASSET_CLASS: "Derivative"
SOURCE_DEAL_ID: "0634.10056718.0.1096840.0"
INSTR_ID: "UKCM.L"
PRODUCT_TYPE_ID: 0
SOURCE_PRODUCT_TYPE: "CDS"
NOTIONAL_AMT: 18.78
NOTIONAL_CCY: "USD"
LOB:
PRODUCT_TYPE:
UNDERLIER_INSTRUMENT_ID:
MTM_AMT:
MTM_CCY:
TRADER_SID:
SALES_PERSON_SID:
CLIENT_SPN:
CLIENT_UCN:
CLIENT_NAME:
LE:
---
# Excption documents to follow from here!!!
Exception:
  src_excp_id: 100001
  # CONFIGURABLE OBJECT, VALUE TO BE POPULATED RUNTIME (impact_obj COMES FROM CONFIG FILE)
  # VALUE STARTS FROM "!POSITION..." A USER DEFINED DATATYPE
  impact_obj: !Position [1232546, 0634.10056718.0.1096840.0,]
  # CONFIGURABLE OBJECT, VALUE TO BE POPULATED RUNTIME (rsn_obj COMES FROM CONFIG FILE)
  # VALUE STARTS FROM "_POSITION..." AN IDENTIFIER FOR CONFIGURABLE OBJECTS
  rsn_obj: !Position [1258642, 0634.10056718.0.1096680.0,]
  exception_txt: "Invalid data, NULL value provided"
  severity: "High"
It looks like my code is unable to identify the !Position user-defined data type.
Any help would be appreciated.
Regards.
Needed to change:

def construct_position_sym(loader, node):
    return loader.construct_yaml_str(node)

to:

def construct_position_sym(loader, node):
    return loader.construct_yaml_seq(node)

because the position object is a sequence:

!Position [something, something]

so the constructor had to be a sequence type. Works perfectly!
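As a minimal, self-contained illustration of the fix (this sketch uses PyYAML's construct_sequence, which returns the list directly, instead of the generator-based construct_yaml_seq; the values are taken from the question's document):

import yaml

def construct_position(loader, node):
    # the node is a SequenceNode, so a sequence constructor is required
    return loader.construct_sequence(node)

yaml.add_constructor(u"!Position", construct_position)

doc = "impact_obj: !Position [1232546, 0634.10056718.0.1096840.0]"
print(yaml.load(doc, Loader=yaml.Loader))
# {'impact_obj': [1232546, '0634.10056718.0.1096840.0']}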
