Ansible - Looping over two lists

I've looked at similar questions and although I've solved the problem, I don't think it's the most elegant solution.
I'm trying to loop over a list of sites, each of which is an individual dictionary. For each site, I want to perform a series of replacements on a config file within the site using lineinfile (the path to the config file is determined from information in the site's dictionary).
I want to loop lineinfile over both these sites and a list of regex/replacements to try. The only problem is that the list of replacements needs to use a value found within the site's dictionary.
I'm currently achieving this the following way, in playbook.yml:
- name: Perform replacements with loop over sites
  ansible.builtin.include_tasks: replacements.yml
  tags: test
  loop: "{{ sites }}"
  loop_control:
    loop_var: site
  vars:
    sites:
      - apache_servername: "site1.com"
        apache_documentroot: /var/www/site1
      - apache_servername: "site2.com"
        apache_documentroot: /var/www/site2
And the contents of replacements.yml:
- name: Perform replacements
  ansible.builtin.lineinfile:
    path: "{{ site.apache_documentroot }}/config.txt"
    backrefs: yes
    regexp: "{{ item.regex }}"
    state: present
    line: "{{ item.replacement }}"
  loop:
    - {regex: "(public \\$tmp_path.*?')(?:.+)(';)", replacement: "\\1{{ site.apache_documentroot }}/tmp\\2"}
    - {regex: "(public \\$log_path.*?')(?:.+)(';)", replacement: "\\1{{ site.apache_documentroot }}/administrator/logs\\2"}
    - {regex: "(public \\$password.*?')(?:.+)(';)", replacement: "\\1{{ site.password }}\\2"}
  tags: test
This works fine, but it is a little inelegant split across multiple files, and those regex replacements are not the easiest to manage - it would be good if they could be held in a separate variable.
Is it possible to loop over these two lists of dictionaries together within the same task, whilst also allowing the regex replacements to reference a value of the first loop? I sort of imagine building a data structure that has all of these things created and then just looping over that.

Iterate with with_nested. For example, simplified for testing:
- name: Perform replacements with loop over sites
  debug:
    msg: |
      path: {{ item.0.apache_documentroot }}/config.txt
      regexp: {{ item.1.regex }}
      line: {{ item.1.replace }}
  with_nested:
    - "{{ sites }}"
    - "{{ regex_replace }}"
  vars:
    sites:
      - apache_servername: site1.com
        apache_documentroot: /var/www/site1
      - apache_servername: site2.com
        apache_documentroot: /var/www/site2
    regex_replace:
      - {regex: A, replace: X}
      - {regex: B, replace: Y}
gives (abridged):
  msg: |-
    path: /var/www/site1/config.txt
    regexp: A
    line: X
--
  msg: |-
    path: /var/www/site1/config.txt
    regexp: B
    line: Y
--
  msg: |-
    path: /var/www/site2/config.txt
    regexp: A
    line: X
--
  msg: |-
    path: /var/www/site2/config.txt
    regexp: B
    line: Y
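Applied back to the original task, the same pattern could look like the sketch below. It relies on Ansible templating variables lazily: the replacement strings in regex_replace are only rendered inside the loop, so they can reference item.0. The third replacement (the password one) would follow the same pattern, assuming each sites entry carries a password key.
- name: Perform replacements with loop over sites
  ansible.builtin.lineinfile:
    path: "{{ item.0.apache_documentroot }}/config.txt"
    backrefs: yes
    regexp: "{{ item.1.regex }}"
    state: present
    line: "{{ item.1.replacement }}"
  with_nested:
    - "{{ sites }}"
    - "{{ regex_replace }}"
  vars:
    # sites defined as above; regex_replace could equally live in a separate vars file
    regex_replace:
      - {regex: "(public \\$tmp_path.*?')(?:.+)(';)", replacement: "\\1{{ item.0.apache_documentroot }}/tmp\\2"}
      - {regex: "(public \\$log_path.*?')(?:.+)(';)", replacement: "\\1{{ item.0.apache_documentroot }}/administrator/logs\\2"}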

Related

Build Ansible dictionary from stdout

I am creating Ansible roles to install various software. Within these roles, I'm using ansible_pkg_mgr to determine whether I have to use apt or yum. This works as expected.
When retrieving certain repositories, like https://download.docker.com/linux/centos/7/x86_64/stable/repodata/, I want to use lsb_release -a to obtain the values needed to correctly populate the URL for the specific release.
The code below works but how would I loop to the end of the list and put the key/value pairs in a dictionary?
I'm always open to other suggestions or if there's a cleaner method. I'm not necessarily stuck and would appreciate another set of eyes. I think it's a good problem to solve as it'll be useful for future projects.
- hosts: localhost
  connection: local
  tasks:
    - name: check OS
      command: lsb_release -a
      register: var
    - name:
      set_fact:
        foo: "{{ var.stdout }}"
    - name:
      set_fact:
        bar: "{{ foo.split('\n') | replace('\\t','') }}"
    - name:
      set_fact:
        lsbs_release_attributes:
          - key: "{{ bar[0].split(':',1)[0] }}"
          - value: "{{ bar[0].split(':',1)[1] }}"
Q: "How would I loop to the end of the list and put the key/value pairs in a dictionary?"
A: Try
- set_fact:
    lsbs_release_attributes: "{{ lsbs_release_attributes | d({}) |
                                 combine({key: val}) }}"
  loop: "{{ bar }}"
  vars:
    _item: "{{ item.split(':',1) }}"
    key: "{{ _item.0 }}"
    val: "{{ _item.1 }}"

Ansible Setting fact with dynamic key/value

I am trying to set ansible facts from the stdout of a command task I call from another role.
Role A:
- name: example command
  command: client get -s {{ service }}
  register: vars_string
- name: set vars
  set_fact: vars={{ vars_string.stdout.split('\n') }}
  when:
    - vars_string.stdout | length > 0
- name: set vars as facts
  set_fact: "{{ item }}"
  with_items: "{{ vars }}"
vars output:
"vars": [
"tst=ansible",
"example=values"
]
Role B:
- debug:
    var: tst
Results from Role B:
Expectation: { "tst": "ansible" }
Reality: { "tst": "VARIABLE IS NOT DEFINED!" }
I have tried to split vars into a dict and use set_fact: "{{ item.key }}": "{{ item.value }}" as well. This returned the same results.
I want to be able to call by the variable name returned from the command in future roles. Any ideas?
Two points about your code snippet that may interest you:
There is already a split-by-newline version of your command's output: vars_string.stdout_lines
I can't tell if you chose that variable name by accident or were actually trying to assign to the vars built-in variable, but either way, don't do that
As best I can tell, there is no supported syntax for assigning arbitrary top-level host facts from within just a task.
You have two choices: write out those variables to a file and then use include_vars: to read them back in, which will assign them as host facts; or concede to the way set_fact: wants things and be content with those dynamic variables living underneath a known key in the host facts.
We'll show the latter first, because it's shorter:
- set_fact:
    my_facts: >-
      {{ "{" + (vars_string.stdout_lines
      | map('regex_replace', '^([^=]+)=(.+)', '"\1": "\2"')
      | join(",")) + "}"
      }}
  when:
    - vars_string.stdout | length > 0
Of course, be aware that this trickery won't work if your keys or values contain non-JSON-friendly characters, but if the simple version doesn't work, ask a follow-up question, because there are a lot more tricks in that same vein.
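With the example output above, my_facts would render to {"tst": "ansible", "example": "values"}, so Role B could read it like this (illustrative):
- debug:
    var: my_facts.tst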
The include_vars: way is:
- tempfile:
    state: file
    suffix: .json
  register: vars_filename
- copy:
    dest: '{{ vars_filename.path }}'
    content: >-
      {{ "{" + (vars_string.stdout_lines
      | map('regex_replace', '^([^=]+)=(.+)', '"\1": "\2"')
      | join(",")) + "}"
      }}
- include_vars:
    file: '{{ vars_filename.path }}'
- file:
    path: '{{ vars_filename.path }}'
    state: absent
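With the vars output from the question, the temporary JSON file would contain:
{"tst": "ansible", "example": "values"}
and after include_vars, tst and example become top-level host variables, so Role B's debug of tst works as expected.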

Ansible Special Characters in passwords

I read my root passwords from an encrypted ansible-vault file.
But when I use it for ansible_become_pass, the operation fails because the password contains a special character, in my example "#".
This is my yml:
- hosts: sirius
  remote_user: ansusr
  become: yes
  vars_files:
    - vault_vars.yml
  become_pass: "{{ root_pass_sirius }}"
ansible-playbook check.yml --ask-vault-pass
fatal: FAILED! => {"msg": "{{ TesT#1234 }}: template error while templating string: unexpected char '#' at 6. String: {{ TesT#1234 }}"}
How can I escape the # character?
Use set +H (which disables shell history expansion) before actually running that encryption command.
This might work.
become_pass: "{{ root_pass_sirius | regex_escape() }}"
Try single quotes instead of double:
become_pass: '{{ root_pass_sirius }}'
Another thing that you can try is the quote filter:
become_pass: "{{ root_pass_sirius | quote }}"
Try this "'"{{ }}"'"
or this $'{{ }}'
It's Jinja templating.
I had a different symbol, $, and when decrypting, this symbol disappeared (along with what came after it). The following solution helped:
Replace " with '. That is:
shell: "echo '{{ password }}'" - this works correctly, but here:
shell: 'echo "{{ password }}"' - it doesn't work.
Add a replace filter. That is:
- name: replace
  set_fact:
    password: "{{ password | replace('\n', '') | replace('\r', '') }}"
In sum, it looks like this:
- name: replace
  set_fact:
    password: "{{ password | replace('\n', '') | replace('\r', '') }}"
- name: echo
  shell: "echo '{{ password }}'"

yaml and jinja2 reader

I would like to read a YAML/Jinja2 configuration file in Python, using something like the PyYAML library, but I'm receiving errors:
{% set name = "abawaca" %}
{% set version = "1.00" %}
package:
  name: {{ name }}
  version: {{ version }}
source:
  fn: {{ name }}-{{ version }}.tar.gz
  url: https://github.com/CK7/abawaca/archive/v{{ version }}.tar.gz
  sha256: 57465bb291c3a9af93605ffb11d704324079036205e5ac279601c9e98c467529
build:
  number: 0
requirements:
  build:
    - gcc # [not osx]
    - llvm # [osx]
Your input is not valid YAML, as you can easily check with any YAML validator.
You should first expand the {% %} constructs and then process the YAML, or you should make your file into valid YAML.
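For the first option, a minimal sketch using jinja2 plus PyYAML (assuming the template shown above is saved as data.yaml):
import jinja2
import yaml  # PyYAML

# Render the Jinja2 constructs first; the result is plain, valid YAML.
with open('data.yaml') as fp:
    rendered = jinja2.Template(fp.read()).render()

data = yaml.safe_load(rendered)
print(data['package']['name'])  # -> abawaca
print(data['source']['fn'])     # -> abawaca-1.00.tar.gz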
The problem is partly a consequence of choosing jinja2, whose macro sequences {% ... %} start with a character ({) that has special meaning in YAML.
If you need to change the YAML and write it out again, you can define your own delimiters, choosing them so that they don't have special meaning in YAML.
The {% %} constructs should go inside YAML comments, since at the top level you have a mapping, which should contain only key-value pairs. One way to achieve that is by redefining the block start as #% and the end as %# (you don't necessarily have to change the end, but I prefer the symmetry).
Then, after updating, run the now-valid YAML through a small script that replaces those delimiters with ones jinja2 understands, or tweak the jinja2 Environment to change the actual delimiter definitions it uses.
corrected data.yaml:
#% set name = "abawaca" %#
#% set version = "1.00" %#
package:
  name: <{ name }>
  version: 42
source:
  fn: <{ name }>-<{ version }>.tar.gz
  url: https://github.com/CK7/abawaca/archive/v<{ version }>.tar.gz
  sha256: 57465bb291c3a9af93605ffb11d704324079036205e5ac279601c9e98c467529
build:
  number: 0
requirements:
  build:
    - gcc # [not osx]
    - llvm # [osx]
This can be processed by:
import jinja2
from ruamel import yaml

yaml_file = 'data.yaml'
tmp_file = 'tmp.yaml'

data = yaml.round_trip_load(open(yaml_file))
data['package']['version'] = '<{ version }>'
with open(tmp_file, 'w') as fp:
    yaml.round_trip_dump(data, fp)

environment = jinja2.Environment(
    loader=jinja2.FileSystemLoader(searchpath='.'),
    trim_blocks=True,
    block_start_string='#%', block_end_string='%#',
    variable_start_string='<{', variable_end_string='}>')
print(environment.get_template(tmp_file).render())
to give:
package:
  name: abawaca
  version: 1.00
source:
  fn: abawaca-1.00.tar.gz
  url: https://github.com/CK7/abawaca/archive/v1.00.tar.gz
  sha256: 57465bb291c3a9af93605ffb11d704324079036205e5ac279601c9e98c467529
build:
  number: 0
requirements:
  build:
    - gcc # [not osx]
    - llvm # [osx]
Please note that you have to use ruamel.yaml (disclaimer: I am the author of that package); you cannot do this as easily with PyYAML, as it throws away the comments when reading the YAML file. Since all of the jinja2 within comments occurs at the beginning of the file, you could work around that in this particular example, but in general that will not be the case.

How do I pass parameters to a salt state file?

I want to create a group and user using salt state files, but I do not know the group, gid, user, uid, or sshkey until I execute the salt state file, so I would like to pass them in as parameters.
I have read about Pillar to create the variable. How do I create pillars before execution?
/srv/salt/group.sls:
{{ name }}:
  group.present:
    - gid: {{ gid }}
    - system: True
Command line:
salt 'SaltStack-01' state.sls group name=awesome gid=123456
If you really want to pass in the data on the command line, you can also do it like this:
{{ pillar['name'] }}:
  group.present:
    - gid: {{ pillar['gid'] }}
    - system: True
Then on the command line you can pass in the data like this:
salt 'SaltStack-01' state.sls group pillar='{"name": "awesome", "gid": "123456"}'
You use Pillars to create "dictionaries" that you can reference in State files. I'm not sure if I'm understanding you correctly, but here's an example of what you can do:
mkdir /srv/pillar/
Create /srv/pillar/groups.sls and paste something like this into it:
groups:
  first: 1234
  second: 5678
These are names and GIDs of the groups you want to create.
Create /srv/pillar/top.sls so you can apply this pillar to your minions. This is very similar to a salt top file, so you can either apply it to all minions ('*') or just the one ('SaltStack-01'):
base:
  'hc01*':
    - groups
To check that this has worked, you can run salt '*' pillar.items and you should find the groups pillar somewhere in the output.
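Abridged, the output for a matching minion might look something like this (minion id illustrative):
hc01.example.com:
    ----------
    groups:
        ----------
        first:
            1234
        second:
            5678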
Now, your /srv/salt/group.sls file should look like this:
{% for group,gid in pillar.get('groups',{}).items() %}
{{ group }}:
  group.present:
    - gid: {{ gid }}
{% endfor %}
This is a for loop: for every group and gid in the pillar groups, do the rest. So basically, you can look at it as if the state file is running twice:
first:
  group.present:
    - gid: 1234
And then:
second:
  group.present:
    - gid: 5678
This was incorporated from this guide.
If you do not want to use Pillar, you can do it like this:
# /srv/salt/params.yaml
name: awesome
gid: 123456
and then:
# /srv/salt/groups.sls
{% import_yaml "params.yaml" as params %}

{{ params['name'] }}:
  group.present:
    - gid: {{ params['gid'] }}
    - system: True
More details: see the Salt documentation on import_yaml.
Another nice way (in case you don't want to use pillars or create a file as the other answers show) is to pass a local environment variable to salt and read it from within the sls file, like this:
Command:
MYVAR=world salt 'SaltStack-01' state.sls somesalt # Note the env variable passed at the beginning
sls file:
# /srv/salt/somesalt.sls
foo:
  cmd.run:
    - name: |
        echo "hello {{ salt['environ.get']('MYVAR') }}"
Will print to stdout:
hello world
Another good thing to know is that the env variable also gets passed on to any included salt states, as in the sketch below.
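For example, a state pulled in via an include: statement in somesalt.sls could read the same variable (a sketch; the file name included.sls is hypothetical):
# /srv/salt/included.sls (hypothetical; pulled in via "include: [included]" from somesalt.sls)
bar:
  cmd.run:
    - name: |
        echo "the included state also sees {{ salt['environ.get']('MYVAR') }}"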
