Ansible: setting facts with dynamic key/value pairs

I am trying to set Ansible facts from the stdout of a command task that I call from another role.
Role A:
- name: example command
  command: client get -s {{ service }}
  register: vars_string

- name: set vars
  set_fact: vars={{ vars_string.stdout.split('\n') }}
  when:
    - vars_string.stdout | length > 0

- name: set vars as facts
  set_fact: "{{ item }}"
  with_items: "{{ vars }}"
vars output:
"vars": [
"tst=ansible",
"example=values"
]
Role B:
- debug:
    var: tst
Results from Role B:
Expectation: { "tst": "ansible" }
Reality: { "tst": "VARIABLE IS NOT DEFINED!" }
I have tried to split vars into a dict and use set_fact: "{{ item.key }}": "{{ item.value }}" as well. This returned the same results.
I want to be able to call by the variable name returned from the command in future roles. Any ideas?

Two points about your code snippet that may interest you:
There is already a split-by-newline version of the output from your command: it's vars_string.stdout_lines.
I can't tell if you chose that variable name by accident or were actually trying to assign to the vars built-in variable, but either way, don't do that.
As best I can tell, there is no supported syntax for assigning arbitrary top-level host facts from within just a task.
You have two choices: write those variables out to a file and then use include_vars: to read them in, which will assign them as host facts, or concede to the way set_fact: wants things and be content with those dynamic variables living underneath a known key in the host facts.
We'll show the latter first, because it's shorter:
- set_fact:
    my_facts: >-
      {{ "{" + (vars_string.stdout_lines
         | map('regex_replace', '^([^=]+)=(.+)', '"\1": "\2"')
         | join(",")) + "}"
      }}
  when:
    - vars_string.stdout | length > 0
Of course, be aware that this trickery won't work if your keys or values contain non-JSON-friendly characters; if this simple version doesn't work, ask a follow-up question, because there are a lot more tricks in the same vein.
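The dynamic values then live underneath the my_facts key, so later roles reference them through that key rather than as top-level variables; a quick usage sketch (expected to print "ansible" given the example output above):

- debug:
    var: my_facts.tst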
The include_vars: way is:
- tempfile:
    state: file
    suffix: .json
  register: vars_filename

- copy:
    dest: '{{ vars_filename.path }}'
    content: >-
      {{ "{" + (vars_string.stdout_lines
         | map('regex_replace', '^([^=]+)=(.+)', '"\1": "\2"')
         | join(",")) + "}"
      }}

- include_vars:
    file: '{{ vars_filename.path }}'

- file:
    path: '{{ vars_filename.path }}'
    state: absent
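After the include_vars: step, the keys from the command output (tst and example in the sample) exist as ordinary top-level host variables, so Role B's debug: var: tst prints "ansible" as expected.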

Related

Ansible - Looping over two lists

I've looked at similar questions and although I've solved the problem, I don't think it's the most elegant solution.
I'm trying to loop over a list of sites, which are individual dictionaries. For each site, I want to perform a series of replacements on a config file within the site using lineinfile (the path to the config file is determined from information in the site's dictionary).
I want to perform a loop over lineinfile with both these sites, and a list of regex/replacements to try. The only problem is, the list of replacements needs to use a value found within the sites dictionary.
I'm currently achieving this the following way, in playbook.yml:
- name: Perform replacements with loop over sites
  ansible.builtin.include_tasks: replacements.yml
  tags: test
  loop: "{{ sites }}"
  loop_control:
    loop_var: site
  vars:
    sites:
      - apache_servername: "site1.com"
        apache_documentroot: /var/www/site1
      - apache_servername: "site2.com"
        apache_documentroot: /var/www/site2
And the contents of replacements.yml:
- name: Perform replacements
  ansible.builtin.lineinfile:
    path: "{{ site.apache_documentroot }}/config.txt"
    backrefs: yes
    regexp: "{{ item.regex }}"
    state: present
    line: "{{ item.replacement }}"
  loop:
    - {regex: "(public \\$tmp_path.*?')(?:.+)(';)", replacement: "\\1{{ site.apache_documentroot }}/tmp\\2"}
    - {regex: "(public \\$log_path.*?')(?:.+)(';)", replacement: "\\1{{ site.apache_documentroot }}/administrator/logs\\2"}
    - {regex: "(public \\$password.*?')(?:.+)(';)", replacement: "\\1{{ site.password }}\\2"}
  tags: test
This works fine, but it is a little inelegant split across multiple files, and those regex replacements aren't the easiest to manage; it would be good if they could be held in a separate variable.
Is it possible to loop over these two lists of dictionaries together within the same task, while also allowing the regex replacements to reference a value from the first loop? I imagine building a data structure that holds all of these things and then just looping over that.
Iterate with with_nested. For example, simplified for testing:
- name: Perform replacements with loop over sites
  debug:
    msg: |
      path: {{ item.0.apache_documentroot }}/config.txt
      regexp: {{ item.1.regex }}
      line: {{ item.1.replace }}
  with_nested:
    - "{{ sites }}"
    - "{{ regex_replace }}"
  vars:
    sites:
      - apache_servername: site1.com
        apache_documentroot: /var/www/site1
      - apache_servername: site2.com
        apache_documentroot: /var/www/site2
    regex_replace:
      - {regex: A, replace: X}
      - {regex: B, replace: Y}
gives (abridged)
msg: |-
  path: /var/www/site1/config.txt
  regexp: A
  line: X
--
msg: |-
  path: /var/www/site1/config.txt
  regexp: B
  line: Y
--
msg: |-
  path: /var/www/site2/config.txt
  regexp: A
  line: X
--
msg: |-
  path: /var/www/site2/config.txt
  regexp: B
  line: Y
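Applying the same nested loop to the real lineinfile task could look like the sketch below. To keep Jinja out of the loop data itself, the per-site part of the replacement is assembled in the line: argument; replacements and suffix are assumed names, and the password rule from the question would need its own task or an extra field:

- name: Perform replacements with nested loop over sites and replacements
  ansible.builtin.lineinfile:
    path: "{{ item.0.apache_documentroot }}/config.txt"
    backrefs: yes
    regexp: "{{ item.1.regex }}"
    state: present
    line: "\\1{{ item.0.apache_documentroot }}{{ item.1.suffix }}\\2"
  with_nested:
    - "{{ sites }}"
    - "{{ replacements }}"
  vars:
    replacements:
      - {regex: "(public \\$tmp_path.*?')(?:.+)(';)", suffix: /tmp}
      - {regex: "(public \\$log_path.*?')(?:.+)(';)", suffix: /administrator/logs}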

Build Ansible dictionary from stdout

I am creating Ansible roles to install various software. Within these roles, I'm using ansible_pkg_mgr to determine whether I have to use apt or yum. This works as expected.
When retrieving certain repositories like https://download.docker.com/linux/centos/7/x86_64/stable/repodata/ I want to use lsb_release -a to obtain the values needed to correctly populate the URL for the specific release.
The code below works but how would I loop to the end of the list and put the key/value pairs in a dictionary?
I'm always open to other suggestions or if there's a cleaner method. I'm not necessarily stuck and would appreciate another set of eyes. I think it's a good problem to solve as it'll be useful for future projects.
- hosts: localhost
  connection: local
  tasks:
    - name: check OS
      command: lsb_release -a
      register: var

    - name:
      set_fact:
        foo: "{{ var.stdout }}"

    - name:
      set_fact:
        bar: "{{ foo.split('\n') | replace('\\t','') }}"

    - name:
      set_fact:
        lsbs_release_attributes:
          - key: "{{ bar[0].split(':',1)[0] }}"
          - value: "{{ bar[0].split(':',1)[1] }}"
Q: "How would I loop to the end of the list and put the key/value pairs in a dictionary?"
A: Try
- set_fact:
    lsbs_release_attributes: "{{ lsbs_release_attributes | d({}) |
                                 combine({key: val}) }}"
  loop: "{{ bar }}"
  vars:
    _item: "{{ item.split(':',1) }}"
    key: "{{ _item.0 }}"
    val: "{{ _item.1 }}"

ansible jinja2 template output csv format

I am trying to add failsafe logic around output rendered as CSV, with columns separated by commas, via a Jinja2 template.
The failsafe logic is supposed to tell me if any of the items in Modules or tech is missing.
Any help figuring out the logic of the Jinja2 template is appreciated.
Variable
swproduct_list:
  header: Sw product,sw product module,technology
  details:
    - name: BASE PACKAGE
      Modules:
        - Polygon Manager
        - Common Manager
      tech:
        - SPRING CLOUD
        - SPRING CLOUD
    - name: DMA
      Modules:
        - KUA on demand
        - KUA parameters
      tech:
        - SPRING CLOUD
        - SPRING CLOUD
Desired Output
Sw product,sw product module,technology
DMA,KUA on demand,SPRING CLOUD
DMA,KUA parameters,SPRING CLOUD
BASE PACKAGE,Polygon Manager,SPRING CLOUD
BASE PACKAGE,Common Manager,SPRING CLOUD
Solution: Jinja2 template
{% for intf in swproduct_list.details -%}
{% for ll in intf.Modules -%}
{{ intf.name }},{{ ll }},{{ intf.tech[loop.index0] }}
{% endfor %}
{% endfor %}
Thanks to Zeitounator who suggested in a comment:
Create a task which checks that every element has the same number of modules and tech using the fail or assert module prior to rendering your template.
I applied the following assert before the Jinja template is rendered, and it works:
- set_fact:
    check_total: |
      {
        'modules_total': {{ (swproduct_list.details | selectattr('Modules', 'defined') | map(attribute='Modules') | flatten | list) | length }},
        'tech_total': {{ (swproduct_list.details | selectattr('tech', 'defined') | map(attribute='tech') | flatten | list) | length }},
      }

- debug:
    var: check_total

- assert:
    that:
      - check_total.modules_total == check_total.tech_total
    quiet: true
    fail_msg: >
      total no of modules should match with total no of tech
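An alternative that avoids indexing tech by loop.index0 is to zip the two lists inside the template; a sketch assuming Ansible's zip filter is available to the template (zip stops at the shorter list, so the assert above is still worth keeping):

{% for intf in swproduct_list.details -%}
{% for module, tech in intf.Modules | zip(intf.tech) -%}
{{ intf.name }},{{ module }},{{ tech }}
{% endfor %}
{% endfor %}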

Ansible Special Characters in passwords

I read my root passwords from an encrypted ansible-vault file.
But when I use one of them for ansible_become_pass, the operation fails because the password contains a special character, in my example "#".
This is my yml:
- hosts: sirius
  remote_user: ansusr
  become: yes
  vars_files:
    - vault_vars.yml
  become_pass: "{{ root_pass_sirius }}"
ansible-playbook check.yml --ask-vault-pass
fatal: FAILED! => {"msg": "{{ TesT#1234 }}: template error while templating string: unexpected char '#' at 6. String: {{ TesT#1234 }}"}
How can I escape the # character?
Use set +H before actually running that encryption command.
This might work.
become_pass: "{{ root_pass_sirius | regex_escape() }}"
Try single quotes instead of double:
become_pass: '{{ root_pass_sirius }}'
Another thing that you can try is the quote filter:
become_pass: "{{ root_pass_sirius | quote }}"
Try this: "'"{{ }}"'"
or this: $'{{ }}'
These are Jinja templates.
I had a different symbol, $, and when decrypting, this symbol disappeared (along with everything after it). The following two changes helped:
First, replace " with '. That is:
shell: "echo '{{ password }}'" - this works correctly, but this:
shell: 'echo "{{ password }}"' - does not.
Second, add a replace filter. That is:
- name: replace
  set_fact:
    password: "{{ password | replace('\n', '') | replace('\r', '') }}"
In sum, it looks like this:
- name: replace
  set_fact:
    password: "{{ password | replace('\n', '') | replace('\r', '') }}"

- name: echo
  shell: "echo '{{ password }}'"

How do I pass parameters to a salt state file?

I want to create a group and user using salt state files, but I do not know the group, gid, user, uid, or ssh key until I need to execute the salt state file, so I would like to pass them in as parameters.
I have read about using Pillar to create the variables. How do I create pillars before execution?
/srv/salt/group.sls:
{{ name }}:
  group.present:
    - gid: {{ gid }}
    - system: True
Command line:
salt 'SaltStack-01' state.sls group name=awesome gid=123456
If you really want to pass in the data on the command line, you can also do it like this:
{{ pillar['name'] }}:
  group.present:
    - gid: {{ pillar['gid'] }}
    - system: True
Then on the command line you can pass in the data like this:
salt 'SaltStack-01' state.sls group pillar='{"name": "awesome", "gid": "123456"}'
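The same pattern extends to the user mentioned in the question; a sketch with illustrative state and pillar key names (user, uid):

# /srv/salt/user.sls
{{ pillar['user'] }}:
  user.present:
    - uid: {{ pillar['uid'] }}
    - groups:
      - {{ pillar['name'] }}

and on the command line:

salt 'SaltStack-01' state.sls user pillar='{"user": "bob", "uid": "2345", "name": "awesome"}'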
You use Pillars to create "dictionaries" that you can reference from state files. I'm not sure if I'm understanding you correctly, but here's an example of what you can do:
mkdir /srv/pillar/
Create /srv/pillar/groups.sls and paste something like this into it:
groups:
  first: 1234
  second: 5678
These are names and GIDs of the groups you want to create.
Create /srv/pillar/top.sls so you can apply this pillar to your minions. This is very similar to a salt top file, so you can either apply it to all minions ('*') or just the one ('SaltStack-01'):
base:
  'hc01*':
    - groups
To test that that has worked, you can run salt '*' pillar.items and you should find the groups pillar somewhere in the output.
Now, your /srv/salt/group.sls file should look like this:
{% for group, gid in pillar.get('groups', {}).items() %}
{{ group }}:
  group.present:
    - gid: {{ gid }}
{% endfor %}
This is a for loop: for every group and gid in the pillar groups, do the rest. So basically, you can look at it as if the state file is running twice:
first:
  group.present:
    - gid: 1234
And then:
second:
  group.present:
    - gid: 5678
This was incorporated from this guide.
If you do not want to use Pillar, you can do it like this:
# /srv/salt/params.yaml
name: awesome
gid: 123456
and then:
# /srv/salt/groups.sls
{% import_yaml "params.yaml" as params %}

{{ params['name'] }}:
  group.present:
    - gid: {{ params['gid'] }}
    - system: True
More details: see the docs.
Another nice way to pass data (in case you don't want to use pillars or create a file as the other answers show): you can pass a local environment variable to salt and read it from within the sls file, like this:
Command:
MYVAR=world salt 'SaltStack-01' state.sls somesalt # Note the env variable passed at the beginning
sls file:
# /srv/salt/somesalt.sls
foo:
  cmd.run:
    - name: |
        echo "hello {{ salt['environ.get']('MYVAR') }}"
Will print to stdout:
hello world
Another good thing to know is that the env variable also gets passed on to any included salt states as well.
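As a small refinement, environ.get also accepts a default value, which avoids rendering an empty string when the variable is unset; for example:

echo "hello {{ salt['environ.get']('MYVAR', 'world') }}"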
