I am reading about "parameters" here and wondering whether I can define catalogue level parameters that I can later use in the definition of the catalogue's sources?
Consider a simple YAML-catalogue with two sources:
sources:
  data1:
    args:
      urlpath: "{{CATALOG_DIR}}/data/{{snapshot_date}}/data1.csv"
    description: ''
    driver: intake.source.csv.CSVSource
    metadata: {}
  data2:
    args:
      urlpath: "{{CATALOG_DIR}}/data/{{snapshot_date}}/data2.csv"
    description: ''
    driver: intake.source.csv.CSVSource
    metadata: {}
Note that both data sources (data1 and data2) make use of the snapshot_date parameter inside the urlpath argument. With this definition I can load the data sources with:
cat = intake.open_catalog("./catalog.yaml")
cat.data1(snapshot_date="latest").read() # reads from data/latest/data1.csv
cat.data2(snapshot_date="20211029").read() # reads from data/20211029/data2.csv
Please note that cat.data1().read() will not work, since snapshot_date defaults to an empty string, so the CSV driver cannot find the path "./data//data1.csv".
I can set the default value by adding a parameters section to every (!) source, as shown below.
sources:
  data1:
    parameters:
      snapshot_date:
        type: str
        default: "latest"
        description: ""
    args:
      urlpath: "{{CATALOG_DIR}}/data/{{snapshot_date}}/data1.csv"
    description: ''
    driver: intake.source.csv.CSVSource
    metadata: {}
  data2:
    parameters:
      snapshot_date:
        type: str
        default: "latest"
        description: ""
    args:
      urlpath: "{{CATALOG_DIR}}/data/{{snapshot_date}}/data2.csv"
    description: ''
    driver: intake.source.csv.CSVSource
    metadata: {}
But this looks complicated (too much repetitive code) and a little inconvenient for the end user: if a user wants to load all data sources from a given date, they have to explicitly provide the snapshot_date parameter to every (!) data source at initialization. IMO, it would be nice if the user could provide this value once when initializing the catalog.
Is there a way to define a snapshot_date parameter at the catalog level, so that:
- I can set a default value (e.g. "latest" in my example) in the YAML definition of the catalogue's parameter,
- or pass the parameter's value at runtime during the call intake.open_catalog("./catalog.yaml", snapshot_date="20211029"),
- and this value is accessible in the definitions of the catalog's data sources?
cat = intake.open_catalog("./catalog.yaml", snapshot_date="20211029")
cat.data1.read() # will return data from ./data/20211029/data1.csv
cat.data2.read() # will return data from ./data/20211029/data2.csv
cat.data2(snapshot_date="latest").read() # will return data from ./data/latest/data2.csv
cat = intake.open_catalog("./catalog.yaml")
cat.data1.read() # will return data from ./data/latest/data1.csv
cat.data2.read() # will return data from ./data/latest/data2.csv
Thanks in advance
This idea has been suggested before ( https://github.com/intake/intake/pull/562 , https://github.com/intake/intake/issues/511 ), and I have an inkling that maybe https://github.com/zillow/intake-nested-yaml-catalog supports something like what you are asking.
However, I fully support adding this functionality in Intake, either based on #562, above, or otherwise. Adding it to the base Catalog and YAML file(s) catalog should be easy, but doing it so that it works for all subclasses might be tricky.
Currently, you can achieve what you want using environment variables, e.g., "{{snapshot_date}}" -> "{{env(SNAPSHOT_DATE)}}", but you would need to communicate to the user that this variable should be set. In addition, if the value is not to be used within a string, you would still need a parameter definition to cast it to the right type.
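For illustration, a rough sketch of the environment-variable route (assuming the urlpath templates are rewritten to use {{env(SNAPSHOT_DATE)}}):

import os
import intake

# the user has to know to set this before opening the catalog
os.environ["SNAPSHOT_DATE"] = "20211029"

cat = intake.open_catalog("./catalog.yaml")
cat.data1.read()  # the urlpath template now resolves to data/20211029/data1.csv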
This is a bit of a hack, but consider a yaml file with this content:
global_params:
  snapshot_date: &global
    default: latest
    description: ''
    type: str
sources:
  data1:
    args:
      urlpath: '{{CATALOG_DIR}}/data/{{snapshot_date}}/data1.csv'
    description: ''
    driver: intake.source.csv.CSVSource
    metadata: {}
    parameters:
      snapshot_date: *global
  data2:
    args:
      urlpath: '{{CATALOG_DIR}}/data/{{snapshot_date}}/data2.csv'
    description: ''
    driver: intake.source.csv.CSVSource
    metadata: {}
    parameters:
      snapshot_date: *global
Now Intake will accept a snapshot_date keyword argument for the individual sources, with the shared default applied when it is omitted.
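With that file, usage should look roughly like this (the defaults now apply without repeating them per call):

import intake

cat = intake.open_catalog("./catalog.yaml")
cat.data1.read()                              # falls back to the shared default: data/latest/data1.csv
cat.data2(snapshot_date="20211029").read()    # per-source override: data/20211029/data2.csv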
Related
I am trying to run a YAML SSM document from a Python AWS Lambda, using boto3 ssm.send_command with parameters, but even if I'm just trying to run the sample "Hello World", I get:
"errorMessage": "An error occurred (InvalidParameters) when calling the SendCommand operation: document TestMessage does not support parameters.
JSON Run Documents work without an issue, so it seems the parameters are being passed in JSON format. However, the document I intend this for contains a relatively long PowerShell script; with JSON I would need to put it all on a single line, which would be awkward, and I am hoping to avoid running it from an S3 bucket. Can anyone suggest a way to run a YAML Run Document with parameters from the Lambda?
As far as I know, AWS Lambda always gets its events as JSON. My suggestion would be to declare a new variable in the lambda_handler.py file like this:
import json
import yaml

def handler_name(event, context):
    # the event argument is already a parsed dict; if you receive a JSON string instead, use json.loads first
    yaml_event = yaml.dump(event)
    # rest of the code...
This way the event will be in YAML format and you can use that variable instead of the event, which is in JSON format.
Here is an example of running a YAML Run Command document using boto3 ssm.send_command in a Lambda running Python 3.8. Variables are passed to the Lambda using either environment variables or SSM Parameter Store. The script is retrieved from S3 and accepts a single parameter formatted as a JSON string which is passed to the bash script running on Linux (sorry I don't have one for PowerShell).
The SSM Document is deployed using CloudFormation but you could also create it through the console or CLI. Based on the error message you cited, perhaps verify the Document Type is set as "Command".
SSM Document (wrapped in CloudFormation template, refer to the Content property)
Neo4jLoadQueryDocument:
  Type: AWS::SSM::Document
  Properties:
    DocumentType: "Command"
    DocumentFormat: "YAML"
    TargetType: "/AWS::EC2::Instance"
    Content:
      schemaVersion: "2.2"
      description: !Sub "Load Neo4j for ${AppName}"
      parameters:
        sourceType:
          type: "String"
          description: "S3"
          default: "S3"
        sourceInfo:
          type: "StringMap"
          description: !Sub "Downloads all files under the ${AppName} scripts prefix"
          default:
            path: !Sub 'https://{{resolve:ssm:/${AppName}/${Stage}/${AWS::Region}/DataBucketName}}.s3.amazonaws.com/config/scripts/'
        commandLine:
          type: "String"
          description: "These commands are invoked by a Lambda script which sets the correct parameters (Refer to documentation)."
          default: 'bash start_task.sh'
        workingDirectory:
          type: "String"
          description: "Working directory"
          default: "/home/ubuntu"
        executionTimeout:
          type: "String"
          description: "(Optional) The time in seconds for a command to complete before it is considered to have failed. Default is 3600 (1 hour). Maximum is 28800 (8 hours)."
          default: "86400"
      mainSteps:
        - action: "aws:downloadContent"
          name: "downloadContent"
          inputs:
            sourceType: "{{ sourceType }}"
            sourceInfo: "{{ sourceInfo }}"
            destinationPath: "{{ workingDirectory }}"
        - action: "aws:runShellScript"
          name: "runShellScript"
          inputs:
            runCommand:
              - ""
              - "directory=$(pwd)"
              - "export PATH=$PATH:$directory"
              - " {{ commandLine }} "
              - ""
            workingDirectory: "{{ workingDirectory }}"
            timeoutSeconds: "{{ executionTimeout }}"
Lambda function
import os
import json
import logging

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)
neo4j_load_query_document_name = os.environ["NEO4J_LOAD_QUERY_DOCUMENT_NAME"]
# neo4j_database_instance_id = os.environ["NEO4J_DATABASE_INSTANCE_ID"]
neo4j_database_instance_id_param = os.environ["NEO4J_DATABASE_INSTANCE_ID_SSM_PARAM"]
load_neo4j_activity = os.environ["LOAD_NEO4J_ACTIVITY"]
app_name = os.environ["APP_NAME"]
# Get SSM Document Neo4jLoadQuery
ssm = boto3.client('ssm')
response = ssm.get_document(Name=neo4j_load_query_document_name)
neo4j_load_query_document_content = json.loads(response["Content"])
# Get Instance ID
neo4j_database_instance_id = ssm.get_parameter(Name=neo4j_database_instance_id_param)["Parameter"]["Value"]
# Extract document parameters
neo4j_load_query_document_parameters = neo4j_load_query_document_content["parameters"]
command_line_default = neo4j_load_query_document_parameters["commandLine"]["default"]
source_info_default = neo4j_load_query_document_parameters["sourceInfo"]["default"]
def lambda_handler(event, context):
    params = {
        "params": {
            "app_name": app_name,
            "activity_arn": load_neo4j_activity,
        }
    }

    # Include params JSON as command line argument
    cmd = f"{command_line_default} '{json.dumps(params)}'"

    try:
        response = ssm.send_command(
            InstanceIds=[
                neo4j_database_instance_id,
            ],
            DocumentName=neo4j_load_query_document_name,
            Parameters={
                "commandLine": [cmd],
                "sourceInfo": [json.dumps(source_info_default)]
            },
            MaxConcurrency='1')
        if response['ResponseMetadata']['HTTPStatusCode'] != 200:
            # DatetimeEncoder is a custom JSON encoder (not shown here) for datetime values in the response
            logger.error(json.dumps(response, cls=DatetimeEncoder))
            raise Exception("Failed to send command")
        else:
            logger.info(f"Command `{cmd}` invoked on instance {neo4j_database_instance_id}")
    except Exception as err:
        logger.error(err)
        raise err
    return
Parameters in a JSON document are not necessarily JSON themselves; they can just as easily be string or numeric values (more likely, IMO). If you want to pass a parameter value in JSON format (not the same thing as a JSON document), pay attention to quotes and escaping.
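For example, a minimal sketch (document and instance names below are placeholders) of passing one parameter whose value happens to be a JSON-formatted string:

import json
import boto3

ssm = boto3.client('ssm')

payload = {"app_name": "demo"}          # data you want to hand to the script
value = json.dumps(payload)             # '{"app_name": "demo"}' -- quoting handled for you

ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],       # placeholder instance id
    DocumentName="MyYamlRunDocument",          # placeholder document name
    Parameters={"commandLine": [f"bash start_task.sh '{value}'"]},
)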
I have a task: I need to write Python code to generate a YAML file for Kubernetes. So far I have been using PyYAML and it works fine. Here is my generated YAML file:
apiVersion: v1
kind: ConfigMap
data:
  info:
    name: hostname.com
    aio-max-nr: 262144
    cpu:
      cpuLogicalCores: 4
    memory:
      memTotal: 33567170560
    net.core.somaxconn: 1024
...
However, when I try to create this ConfigMap, the error is that info expects a string, not a map. So I explored a bit and it seems the easiest way to resolve this is to add a pipe after info, like this:
apiVersion: v1
kind: ConfigMap
data:
  info: | # this will translate everything in data into a string but still keep the format in yaml file for readability
    name: hostname.com
    aio-max-nr: 262144
    cpu:
      cpuLogicalCores: 4
    memory:
      memTotal: 33567170560
    net.core.somaxconn: 1024
...
This way, my ConfigMap is created successfully. My struggle is that I don't know how to add that pipe from the Python code; here I added it manually, but I want to automate the whole process.
Part of the Python code I wrote is below (pretend data is a dict):
content = dict()
content["apiVersion"] = "v1"
content["kind"] = "ConfigMap"
data = {...}
info = {"info": data}
content["data"] = info
# Get all contents ready. Now write into a yaml file
fileName = "out.yaml"
with open(fileName, 'w') as outfile:
    yaml.dump(content, outfile, default_flow_style=False)
I searched online and found a lot of cases, but none of them fits my needs. Thanks in advance.
The pipe makes the contained values a string. That string is not processed by YAML, even if it contains data with YAML syntax. Consequently, you will need to give a string as value.
Since the string contains data in YAML syntax, you can create the string by processing the contained data with YAML in a previous step. To make PyYAML dump the scalar in literal block style (i.e. with |), you need a custom representer:
import yaml, sys
from yaml.resolver import BaseResolver

class AsLiteral(str):
    pass

def represent_literal(dumper, data):
    return dumper.represent_scalar(BaseResolver.DEFAULT_SCALAR_TAG,
                                   data, style="|")

yaml.add_representer(AsLiteral, represent_literal)

info = {
    "name": "hostname.com",
    "aio-max-nr": 262144,
    "cpu": {
        "cpuLogicalCores": 4
    }
}

info_str = AsLiteral(yaml.dump(info))

data = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "data": {
        "info": info_str
    }
}

yaml.dump(data, sys.stdout)
By putting the rendered YAML data into the type AsLiteral, the registered custom representer will be called which will set the desired style to |.
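For reference, the final dump should then produce output roughly like this (PyYAML sorts mapping keys alphabetically by default):

apiVersion: v1
data:
  info: |
    aio-max-nr: 262144
    cpu:
      cpuLogicalCores: 4
    name: hostname.com
kind: ConfigMap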
Using intersphinx and autodoc, having:
:param stores: Array of objects
:type stores: list[dict[str,int]]
Would result in an entry like:
stores (list[dict[str,int]]) - Array of objects.
Is there a way to convert list[dict[str,int]] outside of the autodoc :param: directive (or others like :rtype:), with raw RST (within the docstring) or programmatically, given a 'list[dict[str,int]]' string?
Additionally, is it possible to use external links within the aforementioned example?
Example
Consider a script.py file:
def some_func(arg1):
    """
    This is a head description.

    :param arg1: The type of this param is hyperlinked.
    :type arg1: list[dict[str,int]]

    Is it possible to hyperlink this, here: dict[str,list[int]]

    Or even add custom references amongst the classes: dict[int,ref]

    Where *ref* links to a foreign, external source.
    """
Now in the Sphinx conf.py file add:
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.intersphinx'
]

intersphinx_mapping = {
    'py': ('https://docs.python.org/3', None),
}
In your index.rst, add:
Title
=====

.. toctree::
   :maxdepth: 2

.. autofunction:: script.some_func
And now simply make the html for the page.
The list[dict[str,int]] next to :type arg1: will be hyperlinked as shown at the beginning of this question, but dict[str,list[int]] obviously won't. Is there a way to make the latter behave like the former?
I reached a solution after digging around sphinx's code.
Injecting External References (:param:)
I created a custom extension that connects to the missing-reference event and attempts to resolve unknown references.
Code of reflinks.py:
import docutils.nodes as nodes

_cache = {}

def fill_cache(app):
    _cache.update(app.config.reflinks)

def missing_reference(app, env, node, contnode):
    target = node['reftarget']
    try:
        uri = _cache[target]
    except KeyError:
        return
    newnode = nodes.reference('', '', internal=False, refuri=uri)
    if not node.get('refexplicit'):
        name = target.replace('_', ' ')  # style
        contnode = contnode.__class__(name, name)
    newnode.append(contnode)
    return newnode

def setup(app):
    app.add_config_value('reflinks', None, False)
    app.connect('builder-inited', fill_cache)
    app.connect('missing-reference', missing_reference, priority=1000)
Explanation
I consulted intersphinx's methodology for resolving unknown references and connected the function with high priority so it's hopefully only consulted as a last resort.
Followup
Include the extension.
Adding to conf.py:
reflinks = {'google': 'https://google.com'}
This allowed the following in script.py:
def some_func(arg1):
    """
    :param arg1: Google homepages.
    :type arg1: dict[str, google]
    """
Where dict[str, google] are now all hyperlinks.
Formatting Nested Types
There were instances where I wanted to use type structures like list[dict[str,myref]] outside of fields like :param:, :rtype:, etc. Another short extension did the trick.
Code of nestlinks.py:
import sphinx.domains.python as domain
import docutils.parsers.rst.roles as roles
_field = domain.PyTypedField('class')
def handle(name, rawtext, text, lineno, inliner, options={}, content=[]):
    refs = _field.make_xrefs('class', 'py', text)
    return (refs, [])

def setup(app):
    roles.register_local_role('nref', handle)
Explanation
After reading this guide on roles, and digging here and here I realised that all I needed was a dummy field to handle the whole reference-making work and pretend like it's trying to reference classes.
Followup
Include the extension.
Now script.py:
def some_func(arg1):
    """
    :param arg1: Google homepages.
    :type arg1: dict[str, google]

    Now this :nref:`list[dict[str,google]]` is hyperlinked!
    """
Notes
I am using intersphinx and autodoc to link to python's types and document my function's docstrings.
I am not well-versed in Sphinx's underlying mechanisms so take my methodology with a grain of salt.
The examples provided are adjusted for the sake of being reusable and generic, and have not been tested.
The usability of such features is obviously questionable and only necessary when libraries like extlinks don't cover your needs.
I tried to set up a very simple app. I wanted to create it as a fullstack app, as training for future projects. So I wrote a backend in Python which provides data from a DB (SQLite) via an API (Flask/Connexion). The API is documented via Swagger. The DB should have a table where each row has 2 values:
1. name
2. images
I quickly faced a problem: I actually don't know how to handle images in APIs. Therefore I built the backend with a placeholder: until now, images is just another string, which is mostly empty. Everything works fine. But now I want to be able to receive images via the API and save them in the DB. I have absolutely no idea how to do this. Hope one of you can help me.
Here is my Code so far:
SqlliteHandler.py
import sqlite3

conn = sqlite3.connect('sprint_name.db')
c = conn.cursor()

def connect_db():
    global conn
    global c
    conn = sqlite3.connect('sprint_name.db')
    c = conn.cursor()
    c.execute("CREATE TABLE if not exists sprint_names ( name text, image text)")

def make_db_call(execute_statement, fetch_smth=""):
    global c
    connect_db()
    print(execute_statement)
    c.execute(execute_statement)
    response = ""
    if fetch_smth == "one":
        response = transform_tuple_to_dict(c.fetchone())
    if fetch_smth == "all":
        response_as_tuples = c.fetchall()
        response = []
        for sug in response_as_tuples:
            response.append(transform_tuple_to_dict(sug))
    conn.commit()
    conn.close()
    return response

def transform_tuple_to_dict(my_tuple):
    # fetchone() returns None when there is no matching row
    if my_tuple is None:
        return None
    return {"name": my_tuple[0], "image": my_tuple[1]}

def add_name(suggestion):
    name = suggestion.get("name")
    image = "" if suggestion.get("image") is None else suggestion.get("image")
    execute_statement = "SELECT * FROM sprint_names WHERE name='" + name + "'"
    print(execute_statement)
    alreadyexists = False if make_db_call(execute_statement, "one") is None else True
    print(alreadyexists)
    if not alreadyexists:
        execute_statement = "INSERT INTO sprint_names VALUES ('" + name + "', '" + image + "')"
        make_db_call(execute_statement)

def delete_name(suggestion_name):
    execute_statement = "DELETE FROM sprint_names WHERE name='" + suggestion_name + "'"
    print(execute_statement)
    make_db_call(execute_statement)

def delete_all():
    make_db_call("DELETE FROM sprint_names")

def get_all_names():
    return make_db_call("SELECT * FROM sprint_names", "all")

def get_name(suggestion_name):
    print(suggestion_name)
    execute_statement = "SELECT * FROM sprint_names WHERE name='" + suggestion_name + "'"
    print(execute_statement)
    return make_db_call(execute_statement, "one")

def update_image(suggestion_name, suggestion):
    new_name = suggestion.get("name")
    new_image = "" if suggestion.get("image") is None else suggestion.get("image")
    execute_statement = "UPDATE sprint_names SET name='" + new_name + "', image='" + new_image + "' WHERE name='" \
                        + suggestion_name + "'"
    make_db_call(execute_statement)
RunBackEnd.py
from flask import render_template
import connexion

# Create the application instance
app = connexion.App(__name__, specification_dir='./')

# Read the swagger.yml file to configure the endpoints
app.add_api('swagger.yml')

# Create a URL route in our application for "/"
@app.route('/')
def home():
    """
    This function just responds to the browser URL
    localhost:5000/
    :return: the rendered template 'home.html'
    """
    return render_template('home.html')

# If we're running in stand alone mode, run the application
if __name__ == '__main__':
    app.run(port=5000)
Swagger.yml
swagger: "2.0"
info:
  description: This is the swagger file that goes with our server code
  version: "1.0.0"
  title: Swagger REST Article
consumes:
  - "application/json"
produces:
  - "application/json"
basePath: "/api"

# Paths supported by the server application
paths:
  /suggestions:
    get:
      operationId: SqlliteHandler.get_all_names
      tags:
        - suggestions
      summary: The names data structure supported by the server application
      description: Read the list of names
      responses:
        200:
          description: Successful read names list operation
          schema:
            type: array
            items:
              properties:
                name:
                  type: string
                image:
                  type: string
    post:
      operationId: SqlliteHandler.add_name
      tags:
        - suggestions
      summary: Create a name and add it to the names list
      description: Create a new name in the names list
      parameters:
        - name: suggestion
          in: body
          description: Suggestion you want to add to the sprint
          required: True
          schema:
            type: object
            properties:
              name:
                type: string
                description: Name you want to submit
              image:
                type: string
                description: path to the picture of that name
      responses:
        201:
          description: Successfully created name in list
  /suggestions/{suggestion_name}:
    get:
      operationId: SqlliteHandler.get_name
      tags:
        - suggestions
      summary: Read one name from the names list
      description: Read one name from the names list
      parameters:
        - name: suggestion_name
          in: path
          description: name of the sprint name to get from the list
          type: string
          required: True
      responses:
        200:
          description: Successfully read name from names list operation
          schema:
            type: object
            properties:
              name:
                type: string
              image:
                type: string
    put:
      operationId: SqlliteHandler.update_image
      tags:
        - suggestions
      summary: Update an image in the suggestion list via the name of the suggestions
      description: Update an image in the suggestion list
      parameters:
        - name: suggestion_name
          in: path
          description: Suggestion you want to edit
          type: string
          required: True
        - name: suggestion
          in: body
          schema:
            type: object
            properties:
              name:
                type: string
              image:
                type: string
      responses:
        200:
          description: Successfully updated suggestion in suggestion list
    delete:
      operationId: SqlliteHandler.delete_name
      tags:
        - suggestions
      summary: Delete a suggestion via its name from the suggestion list
      description: Delete a suggestion
      parameters:
        - name: suggestion_name
          in: path
          type: string
          required: True
      responses:
        200:
          description: Successfully deleted a suggestion from the list
To save an image in SQLite (not that it's recommended; it's better to save the image as a file and store the path in the DB), you save it as an array of bytes (storage class BLOB; note that the column does not have to be declared as BLOB).
In SQL you specify an array of bytes as a hex string, so you read your image and build a hex string.
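For instance, a rough sketch of building such a hex literal in Python (the file name is a placeholder):

# read the raw bytes of the image and turn them into an SQL hex literal
with open('myimage.png', 'rb') as f:
    image_bytes = f.read()

hex_literal = "x'" + image_bytes.hex() + "'"
sql = "INSERT INTO sprint_names (name, image) VALUES ('SPRINT001', " + hex_literal + ")"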
Noting
Maximum length of a string or BLOB
The maximum number of bytes in a string or BLOB in SQLite is defined
by the preprocessor macro SQLITE_MAX_LENGTH. The default value of this
macro is 1 billion (1 thousand million or 1,000,000,000). You can
raise or lower this value at compile-time using a command-line option
like this:
-DSQLITE_MAX_LENGTH=123456789
The current implementation will only support a string or BLOB length up to 2^31-1 or 2147483647. And some
built-in functions such as hex() might fail well before that point. In
security-sensitive applications it is best not to try to increase the
maximum string and blob length. In fact, you might do well to lower
the maximum string and blob length to something more in the range of a
few million if that is possible.
During part of SQLite's INSERT and SELECT processing, the complete
content of each row in the database is encoded as a single BLOB. So
the SQLITE_MAX_LENGTH parameter also determines the maximum number of
bytes in a row.
The maximum string or BLOB length can be lowered at run-time using the
sqlite3_limit(db,SQLITE_LIMIT_LENGTH,size) interface.
Also
Noting
Maximum Length Of An SQL Statement
The maximum number of bytes in the text of an SQL statement is limited
to SQLITE_MAX_SQL_LENGTH which defaults to 1000000. You can redefine
this limit to be as large as the smaller of SQLITE_MAX_LENGTH and
1073741824.
If an SQL statement is limited to be a million bytes in length, then
obviously you will not be able to insert multi-million byte strings by
embedding them as literals inside of INSERT statements. But you should
not do that anyway. Use host parameters for your data. Prepare short
SQL statements like this:
INSERT INTO tab1 VALUES(?,?,?);
Then use the sqlite3_bind_XXXX()
functions to bind your large string values to the SQL statement. The
use of binding obviates the need to escape quote characters in the
string, reducing the risk of SQL injection attacks. It also runs
faster since the large string does not need to be parsed or copied as
much.
The maximum length of an SQL statement can be lowered at run-time
using the sqlite3_limit(db,SQLITE_LIMIT_SQL_LENGTH,size) interface.
The resultant SQL would be along the lines of :-
INSERT INTO mytable (myimage) VALUES (x'fffe004577aabbcc33f1f8');
As a demo using your table (slightly modified to include the "correct" column type BLOB, which makes little difference) :-
DROP TABLE If EXISTS sprint_names;
CREATE TABLE if not exists sprint_names ( name text, image text, altimage BLOB);
INSERT INTO sprint_names VALUES
('SPRINT001',x'fffe004577aabbcc33f1f8',x'fffe004577aabbcc33f1f8'), -- obviously image would be larger
('SPRINT002',x'99008877665544332211f4d6e9c2aaa8b7b4',x'99008877665544332211f4d6e9c2aaa8b7b4')
;
SELECT * FROM sprint_names;
The result would be as follows (screenshot not reproduced here). Note: Navicat was used to test the above; BLOBs are inherently difficult to display, hence the way they are rendered. However, what is shown is that the above obviously stores and retrieves the data.
As previously stated it's much simpler to just store the path to the image file and when it boils down to it there is likely very little need for the image as data. You're unlikely to be querying the data that the image is comprised of, whilst using naming standards could allow useful searches/queries of a stored name/path.
However, in contradiction of the above, SQLite can in some circumstances (images with an average size of around 100k or less, maybe more) allow faster access than the file system; see 35% Faster Than The Filesystem.
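Putting the quoted advice about host parameters into the asker's sqlite3-based code, a minimal sketch (function name and path are placeholders) might look like this:

import sqlite3

def add_name_with_image(name, image_path):
    # read the raw image bytes; alternatively, store only image_path in the DB as recommended above
    with open(image_path, 'rb') as f:
        image_bytes = f.read()

    conn = sqlite3.connect('sprint_name.db')
    # binding the value avoids hex literals, quote escaping and most SQL injection risks
    conn.execute("INSERT INTO sprint_names (name, image) VALUES (?, ?)", (name, image_bytes))
    conn.commit()
    conn.close()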
I have the following YAML file:
heat_template_version: 2015-10-15
parameters:
  image:
    type: string
    label: Image name or ID
    default: CirrOS
  private_network_id:
    type: string
    label: Private network name or ID
  floating_ip:
    type: string
I want to add the key default to private_network_id and floating_ip (if default doesn't already exist), and as the value of that default key I want to set a value which I get from the user.
How can I achieve this in python?
The resulting YAML should look like:
heat_template_version: 2015-10-15
parameters:
  image:
    type: string
    label: Image name or ID
    default: CirrOS
  private_network_id:
    type: string
    label: Private network name or ID
    default: <private_network_id>
  floating_ip:
    type: string
    default: <floating_ip>
For this kind of round-tripping you should use ruamel.yaml (disclaimer: I am the author of the package).
Assuming your input is in a file input.yaml and the following program:
from ruamel.yaml import YAML
from pathlib import Path
yaml = YAML()
path = Path('input.yaml')
data = yaml.load(path)
parameters = data['parameters']
# replace assigned values with user input
parameters['private_network_id']['default'] = '<private_network_id>'
parameters['floating_ip']['default'] = '<floating_ip>'
yaml.dump(data, path)
After that your file will exactly match the output you requested.
Please note that comments in the YAML file, as well as the key ordering are automatically preserved (not guaranteed by the YAML specification).
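If you only want to add default where it is not already present (as the question suggests), a small variation of the assignments above, sketched here, does that:

# user-supplied values, applied only where a default is missing
user_defaults = {
    'private_network_id': '<private_network_id>',
    'floating_ip': '<floating_ip>',
}
for name, value in user_defaults.items():
    parameters[name].setdefault('default', value)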
If you are still using Python2 (which has no pathlib in the standard library) use from ruamel.std.pathlib import Path or rewrite the .load() and .dump() lines with appropriately opened, old style, file objects. E.g.
with open('input.yaml', 'w') as fp:
    yaml.dump(data, fp)