I have the following YAML file:
rule_groups:
  - name: "APPSTREAM"
    allowed-domains:
      - ".github.com"
      - ".google.com"
    source: "10.143.80.0/24"
  - name: "TEST"
    allowed-domains:
      - ".microsoft.com"
      - ".amazonaws.com"
    source: "10.143.70.0/24"
I am reading it inside a Python script:
#!/usr/bin/env python3
import yaml

with open('settings.yaml', 'rb') as f:
    config = yaml.safe_load(f)

core_config = config['rule_groups']
for workload in core_config:
    print(workload)

This prints:
{'name': 'APPSTREAM', 'allowed-domains': ['.github.com', '.google.com'], 'source': '10.143.80.0/24'}
{'name': 'TEST', 'allowed-domains': ['.microsoft.com', '.amazonaws.com'], 'source': '10.143.70.0/24'}
I am trying to create a dynamic string for every name and allowed-domain, with an incrementing sid, and write the output into a file, as follows:
$APPSTREAM,.google.com,sid:1
$APPSTREAM,.github.com,sid:2
$TEST,.microsoft.com,sid:3
$TEST,.amazonaws.com,sid:4
Any help will be appreciated.
Solution
#!/usr/bin/env python3
import yaml

with open('settings.yaml', 'rb') as f:
    config = yaml.safe_load(f)

core_config = config['rule_groups']
i = 0
for workload in core_config:
    for domain in workload['allowed-domains']:
        i = i + 1
        print(f"${workload['name']},{domain},sid:{i}")
Related
I have this YAML file:
data:
  - name: acme_aws1
    source: aws
    path: acme/acme_aws1.zip
  - name: acme_gke1
    source: gke
    path: acme/acme_gke1.zip
  - name: acme_oci
    source: oci
    path: acme/acme_oci1.zip
  - name: acme_aws2
    source: aws
    path: acme/acme_aws2.zip
  - name: acme_gke2
    source: gke
    path: acme/acme_gke2.zip
  - name: acme_oci2
    source: oci
    path: acme/acme_oci2.zip
I want to filter out the entries containing "source: gke" and, in a for loop, assign the value of each path to a variable. Can anyone please share how to do this in Python with PyYAML as the import module?
This code should do what you need. It just reads the YAML, then uses the standard filter function to return an iterable with the elements that pass a condition. Those elements are then put into a new list.
import yaml

# For files you can use:
# with open("data.yaml", "r") as file:
#     yaml_data = yaml.safe_load(file)

yaml_data = yaml.safe_load("""
data:
  - name: acme_aws1
    source: aws
    path: acme/acme_aws1.zip
  - name: acme_gke1
    source: gke
    path: acme/acme_gke1.zip
  - name: acme_oci
    source: oci
    path: acme/acme_oci1.zip
  - name: acme_aws2
    source: aws
    path: acme/acme_aws2.zip
  - name: acme_gke2
    source: gke
    path: acme/acme_gke2.zip
  - name: acme_oci2
    source: oci
    path: acme/acme_oci2.zip
""")

data = yaml_data['data']
filtered = list(filter(lambda x: x.get('source') == 'gke', data))
print(filtered)
It prints
[{'name': 'acme_gke1', 'source': 'gke', 'path': 'acme/acme_gke1.zip'}, {'name': 'acme_gke2', 'source': 'gke', 'path': 'acme/acme_gke2.zip'}]
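Since the question also asks to assign each path to a variable in a loop, you can continue from the filtered list above; a minimal sketch:

for item in filtered:
    path = item['path']  # e.g. 'acme/acme_gke1.zip'
    print(path)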
import yaml

# Read the file.
with open('your_file.yaml') as f:
    content = yaml.safe_load(f)

# Get rid of 'gke' elements. Note the entries are dicts,
# so they are accessed with ['key'], not as attributes.
not_gke_sources = [block for block in content['data'] if block['source'] != 'gke']

# Iterate over it to access all 'path's.
for block in not_gke_sources:
    path = block['path']
    # Some actions.
I want to add data inside the 'tags' key in this YAML file:
# Generated by Chef, local modifications will be overwritten
---
env: nonprod
api_key: 5d9k8h43124g40j9ocmnb619h762d458
hostname: ''
bind_host: localhost
additional_endpoints: {}
tags:
- application_name:testin123
- cloud_supportteam:eagles
- technical_applicationid:0000
- application:default
- lifecycle:default
- function:default-api-key
dogstatsd_non_local_traffic: false
histogram_aggregates:
- max
- median
- avg
- count
which should be like this,
tags:
- application_name:testing123
- cloud_supportteam:eagles
- technical_applicationid:0000
- application:default
- lifecycle:default
- function:default-api-key
- managed_by:Teams
So far I have created this script, but it appends the data at the end of the file, which is not the solution:
import yaml

data = {
    'tags': {
        '- managed_by': 'Teams'
    }
}

with open('test.yml', 'a') as outfile:
    yaml.dump(data, outfile, indent=2)
I figured it out like this, and it is working:
import yaml
from yaml.loader import SafeLoader

with open('test.yaml', 'r') as f:
    data = dict(yaml.load(f, Loader=SafeLoader))

data['tags'].append('managed_by:teams')
print(data['tags'])

with open('test.yaml', 'w') as write:
    yaml.dump(data, write, sort_keys=False, default_flow_style=False)
and the output was like this,
['application_name:testin123', 'cloud_supportteam:eagles', 'technical_applicationid:0000', 'application:default', 'lifecycle:default', 'function:default-api-key', 'managed_by:teams']
and the test.yaml file was updated,
tags:
- application_name:testing123
- cloud_supportteam:eagles
- technical_applicationid:0000
- application:default
- lifecycle:default
- function:default-api-key
- managed_by:teams
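Note that PyYAML's dump discards comments, so the "# Generated by Chef" header at the top of the file is lost on rewrite. If preserving comments and formatting matters, a round-trip loader such as ruamel.yaml can do it; a minimal sketch, assuming the same test.yaml:

from ruamel.yaml import YAML

yaml = YAML()  # round-trip mode: keeps comments, key order, and most formatting

with open('test.yaml') as f:
    data = yaml.load(f)

data['tags'].append('managed_by:teams')

with open('test.yaml', 'w') as f:
    yaml.dump(data, f)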
I have a list of YAML files which I would like to edit in the same way: put the following block under spec.template in each file, so that each one gains spec.template.imagePullSecrets and spec.template.container.imagePullPolicy:
imagePullSecrets:
  - dockerpullsecret
container:
  imagePullPolicy: IfNotPresent
This is an example of one file:
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: sensor-finanda-ci
  namespace: argo-events
spec:
  template:
    serviceAccountName: argo-events-sa
  dependencies:
    - name: eventsource-iv
      eventSourceName: eventsource-iv
      eventName: iv
This is my desired output:
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: sensor-finanda-ci
  namespace: argo-events
spec:
  template:
    serviceAccountName: argo-events-sa
    imagePullSecrets:
      - dockerpullsecret
    container:
      imagePullPolicy: IfNotPresent
  dependencies:
    - name: eventsource-iv
      eventSourceName: eventsource-iv
      eventName: iv
If your heart is set on a Python solution, something like this might do:
import os
import yaml

for yaml_filename in [filename for filename in os.listdir('.') if filename.endswith('.yaml')]:
    with open(yaml_filename, 'r') as yaml_file:
        yaml_obj = yaml.safe_load(yaml_file)

    # Create the intermediate mappings if they are missing.
    if 'spec' not in yaml_obj:
        yaml_obj['spec'] = {"template": {}}
    if 'template' not in yaml_obj['spec']:
        yaml_obj['spec']['template'] = {}
    if 'container' not in yaml_obj['spec']['template']:
        yaml_obj['spec']['template']['container'] = {}

    yaml_obj['spec']['template']['imagePullSecrets'] = ["dockerpullsecret"]
    yaml_obj['spec']['template']['container']['imagePullPolicy'] = 'IfNotPresent'

    # Note: yaml.dump sorts keys alphabetically and drops comments by default.
    with open(yaml_filename, 'w') as yaml_file:
        yaml.dump(yaml_obj, yaml_file)
If you're open to command-line utils, this is more concise:
Loop over the files and use yq to edit them in-place.
for file in *.yaml; do
yq -i e '.spec.template.imagePullSecrets = ["dockerpullsecret"] | .spec.template.container.imagePullPolicy = "IfNotPresent"' "$file"
done
I have exported my Google Maps points of interest (saved places/locations) via the takeout tool. How can I convert this to GPX so that I can import it into OSMAnd?
I tried using gpsbabel:
gpsbabel -i geojson -f my-saved-locations.json -o gpx -F my-saved-locations_converted.gpx
But this did not retain the title/name of each point of interest, and instead just used names like WPT001, WPT002, etc.
In the end I solved this by creating a small Python script to convert between the formats. It could easily be adapted for specific needs:
#!/usr/bin/env python3
import argparse
import json
import xml.etree.ElementTree as ET
from xml.dom import minidom


def ingestJson(geoJsonFilepath):
    poiList = []
    with open(geoJsonFilepath) as fileObj:
        data = json.load(fileObj)
        for f in data["features"]:
            poiList.append({'title': f["properties"]["Title"],
                            'lon': f["geometry"]["coordinates"][0],
                            'lat': f["geometry"]["coordinates"][1],
                            'link': f["properties"].get("Google Maps URL", ''),
                            'address': f["properties"]["Location"].get("Address", '')})
    return poiList


def dumpGpx(gpxFilePath, poiList):
    gpx = ET.Element("gpx", version="1.1", creator="", xmlns="http://www.topografix.com/GPX/1/1")
    for poi in poiList:
        wpt = ET.SubElement(gpx, "wpt", lat=str(poi["lat"]), lon=str(poi["lon"]))
        ET.SubElement(wpt, "name").text = poi["title"]
        ET.SubElement(wpt, "desc").text = poi["address"]
        ET.SubElement(wpt, "link").text = poi["link"]
    xmlstr = minidom.parseString(ET.tostring(gpx)).toprettyxml(encoding="utf-8", indent=" ")
    with open(gpxFilePath, "wb") as f:
        f.write(xmlstr)


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--inputGeoJsonFilepath', required=True)
    parser.add_argument('--outputGpxFilepath', required=True)
    args = parser.parse_args()

    poiList = ingestJson(args.inputGeoJsonFilepath)
    dumpGpx(args.outputGpxFilepath, poiList=poiList)


if __name__ == "__main__":
    main()
It can be called like so:
./convert-googlemaps-geojson-to-gpx.py \
--inputGeoJsonFilepath my-saved-locations.json \
--outputGpxFilepath my-saved-locations_converted.gpx
There is also an npm package called "togpx":
https://github.com/tyrasd/togpx
I didn't try it, but it claims to keep as much information as possible.
ConfigParser also reads comments. Why? Shouldn't ignoring inline comments be the default behavior?
I reproduce my problem with the following script:
import configparser

config = configparser.ConfigParser()
config.read("C:\\_SVN\\BMO\\Source\\Server\\PythonExecutor\\Resources\\visionapplication.ini")

for section in config.sections():
    for item in config.items(section):
        print("{}={}".format(section, item))
The ini file looks as follows:
[LPI]
reference_size_mm_width = 30 ;mm
reference_size_mm_height = 40 ;mm
print_pixel_pitch_mm = 0.03525 ; mm
eye_cascade = "TBD\haarcascade_eye.xml" #
The output:
C:\_Temp>python read.py
LPI=('reference_size_mm_width', '30 ;mm')
LPI=('reference_size_mm_height', '40 ;mm')
LPI=('print_pixel_pitch_mm', '0.03525 ; mm')
LPI=('eye_cascade', '"TBD\\haarcascade_eye.xml" #')
I don't want to read '30 ;mm'; I want to read just the number '30'.
What am I doing wrong?
PS: Python 3.7
Use inline_comment_prefixes when creating the ConfigParser object; see the example below:
config = configparser.ConfigParser(inline_comment_prefixes = (";",))
Here is the detailed documentation: https://docs.python.org/3/library/configparser.html
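Putting it together, a minimal sketch reading the same ini file (the path is shortened here for readability, and '#' is registered as well, since the file above also uses it as an inline comment marker):

import configparser

# Register both ';' and '#' as inline comment prefixes.
config = configparser.ConfigParser(inline_comment_prefixes=(";", "#"))
config.read("visionapplication.ini")  # shortened path for this sketch

width = config.getint("LPI", "reference_size_mm_width")
print(width)  # 30, without the trailing ';mm' comment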