How can I run a playbook from a Python script? What is the equivalent of the following commands using the ansible module in Python:
ansible -i hosts dbservers -m setup
ansible-playbook -i hosts -vvvv -k site.yml
I was looking at their documentation at http://docs.ansible.com/developing_api.html, but it has very limited examples.
Deprecation Notice: This post doesn't work as of ansible 2. The API was changed.
This is covered in the Ansible documentation under "Python API."
For example, ansible -i hosts dbservers -m setup is implemented via:
import ansible.runner
runner = ansible.runner.Runner(
module_name='setup',
module_args='',
pattern='dbservers',
)
dbservers_get_facts = runner.run()
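run() returns a plain dict; in the pre-2.0 API it is keyed by 'contacted' (hosts that answered) and 'dark' (unreachable hosts), so the gathered facts can be pulled out roughly like this:
# Sketch for the pre-2.0 API: inspect the dict returned by run().
# 'contacted' holds per-host module results, 'dark' holds unreachable hosts.
for host, result in dbservers_get_facts['contacted'].items():
    facts = result.get('ansible_facts', {})
    print(host, facts.get('ansible_hostname'))
for host, error in dbservers_get_facts['dark'].items():
    print('unreachable:', host, error)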
There are a number of undocumented parameters in the __init__ method of Runner (from ansible.runner). There are too many to list inline, but I've included some of them below as a guess at what you're specifically looking for.
class Runner(object):
''' core API interface to ansible '''
# see bin/ansible for how this is used...
def __init__(self,
host_list=C.DEFAULT_HOST_LIST, # ex: /etc/ansible/hosts, legacy usage
module_path=None, # ex: /usr/share/ansible
module_name=C.DEFAULT_MODULE_NAME, # ex: copy
module_args=C.DEFAULT_MODULE_ARGS, # ex: "src=/tmp/a dest=/tmp/b"
...
pattern=C.DEFAULT_PATTERN, # which hosts? ex: 'all', 'acme.example.org'
remote_user=C.DEFAULT_REMOTE_USER, # ex: 'username'
remote_pass=C.DEFAULT_REMOTE_PASS, # ex: 'password123' or None if using key
remote_port=None, # if SSH on different ports
private_key_file=C.DEFAULT_PRIVATE_KEY_FILE, # if not using keys/passwords
sudo_pass=C.DEFAULT_SUDO_PASS, # ex: 'password123' or None
...
sudo=False, # whether to run sudo or not
sudo_user=C.DEFAULT_SUDO_USER, # ex: 'root'
module_vars=None, # a playbooks internals thing
play_vars=None, #
play_file_vars=None, #
role_vars=None, #
role_params=None, #
default_vars=None, #
extra_vars=None, # extra vars specified with the playbook(s)
is_playbook=False, # running from playbook or not?
inventory=None, # reference to Inventory object
...
su=False, # Are we running our command via su?
su_user=None, # User to su to when running command, ex: 'root'
su_pass=C.DEFAULT_SU_PASS,
vault_pass=None,
...
):
For instance, the same setup call with a remote user and password specified would be:
runner = ansible.runner.Runner(
module_name='setup',
module_args='',
pattern='dbservers',
remote_user='some_user',
remote_pass='some_pass_or_python_expression_that_returns_a_string',
)
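If you also need privilege escalation, the Runner signature quoted above suggests passing the sudo flags as well; here is a sketch using only parameter names from that signature, not re-verified against every 1.x release:
# Sketch: privilege escalation via the sudo_* parameters listed in the
# Runner signature above.
runner = ansible.runner.Runner(
    module_name='setup',
    module_args='',
    pattern='dbservers',
    remote_user='some_user',
    remote_pass='some_pass',
    sudo=True,                  # run the module through sudo
    sudo_user='root',           # escalate to this user
    sudo_pass='some_sudo_pass', # or None if sudo is passwordless
)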
For playbooks, look into playbook.PlayBook, which takes a similar set of initializers:
class PlayBook(object):
'''
runs an ansible playbook, given as a datastructure or YAML filename.
...
'''
# *****************************************************
def __init__(self,
playbook = None,
host_list = C.DEFAULT_HOST_LIST,
module_path = None,
....
and can be executed with the .run() method, e.g.:
from ansible.playbook import PlayBook
pb = PlayBook(playbook='/path/to/book.yml', --other initializers--)
pb.run()
More robust usage can be found in the ansible-playbook script.
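In practice that script also wires up callback and stats objects, which the pre-2.0 PlayBook constructor expects; here is a rough sketch of the usual wiring, with argument names taken from the 1.x source and therefore best treated as assumptions:
from ansible import callbacks, utils
from ansible.playbook import PlayBook

# Sketch of the usual pre-2.0 wiring, mirroring bin/ansible-playbook.
stats = callbacks.AggregateStats()
playbook_cb = callbacks.PlaybookCallbacks(verbose=utils.VERBOSITY)
runner_cb = callbacks.PlaybookRunnerCallbacks(stats, verbose=utils.VERBOSITY)

pb = PlayBook(
    playbook='/path/to/book.yml',
    host_list='hosts',              # or pass an Inventory object via inventory=
    callbacks=playbook_cb,
    runner_callbacks=runner_cb,
    stats=stats,
)
results = pb.run()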
As far as I know, translating playbooks into Python calls is a bit more involved, but the documentation listed above should have you covered, and you can reuse the YAML parser built into Ansible to turn playbooks into data structures.
Here's a quick code update that works on Ansible 2.8.3:
from ansible import context
from ansible.cli import CLI
from ansible.module_utils.common.collections import ImmutableDict
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.parsing.dataloader import DataLoader
from ansible.inventory.manager import InventoryManager
from ansible.vars.manager import VariableManager
loader = DataLoader()
context.CLIARGS = ImmutableDict(tags={}, listtags=False, listtasks=False, listhosts=False, syntax=False, connection='ssh',
module_path=None, forks=100, remote_user='xxx', private_key_file=None,
ssh_common_args=None, ssh_extra_args=None, sftp_extra_args=None, scp_extra_args=None, become=True,
become_method='sudo', become_user='root', verbosity=True, check=False, start_at_task=None)
inventory = InventoryManager(loader=loader, sources=('/xxx/inventory_file',))
variable_manager = VariableManager(loader=loader, inventory=inventory, version_info=CLI.version_info(gitinfo=False))
pbex = PlaybookExecutor(playbooks=['/xxx/playbook.yml'], inventory=inventory, variable_manager=variable_manager, loader=loader, passwords={})
results = pbex.run()
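Note that run() returns an integer exit code rather than raising when tasks fail, so it's worth checking it explicitly (a small sketch):
# pbex.run() returns an exit code; 0 means every play completed successfully.
if results != 0:
    raise RuntimeError('playbook run failed with exit code {0}'.format(results))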
I have answered the question here
Posting the code here because posting only links is discouraged in the community. Hope it helps.
The documentation is surprisingly lacking, and you'll have to get started here.
That being said, here is a quick script I hacked together that manages to run a playbook.
#!/usr/bin/env python
import os
import sys
from collections import namedtuple
from ansible.parsing.dataloader import DataLoader
from ansible.vars.manager import VariableManager
from ansible.inventory.manager import InventoryManager
from ansible.executor.playbook_executor import PlaybookExecutor
loader = DataLoader()
inventory = InventoryManager(loader=loader, sources='/home/slotlocker/hosts2')
variable_manager = VariableManager(loader=loader, inventory=inventory)
playbook_path = '/home/slotlocker/ls.yml'
if not os.path.exists(playbook_path):
print('[INFO] The playbook does not exist')
sys.exit()
Options = namedtuple('Options', ['listtags', 'listtasks', 'listhosts', 'syntax', 'connection','module_path', 'forks', 'remote_user', 'private_key_file', 'ssh_common_args', 'ssh_extra_args', 'sftp_extra_args', 'scp_extra_args', 'become', 'become_method', 'become_user', 'verbosity', 'check','diff'])
options = Options(listtags=False, listtasks=False, listhosts=False, syntax=False, connection='ssh', module_path=None, forks=100, remote_user='slotlocker', private_key_file=None, ssh_common_args=None, ssh_extra_args=None, sftp_extra_args=None, scp_extra_args=None, become=True, become_method='sudo', become_user='root', verbosity=None, check=False, diff=False)
variable_manager.extra_vars = {'hosts': 'mywebserver'}  # This can accommodate various other command line arguments.
passwords = {}
pbex = PlaybookExecutor(playbooks=[playbook_path], inventory=inventory, variable_manager=variable_manager, loader=loader, options=options, passwords=passwords)
results = pbex.run()
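If the SSH or sudo password can't come from keys or sudoers, the passwords dict passed to PlaybookExecutor appears to be the place for it; the key names below mirror how the ansible CLI builds that dict, so treat them as an assumption:
# Sketch: 'conn_pass' is the connection (SSH) password, 'become_pass' the
# privilege-escalation password; not re-verified against every 2.x release.
passwords = {'conn_pass': 'ssh_password', 'become_pass': 'sudo_password'}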
Ansible provides the ansible-runner Python package (PyPI, GitHub), which can be used for this.
Usage example from the documentation:
import ansible_runner
r = ansible_runner.run(private_data_dir='/tmp/demo', playbook='test.yml')
print("{}: {}".format(r.status, r.rc))
# successful: 0
for each_host_event in r.events:
print(each_host_event['event'])
print("Final status:")
print(r.stats)
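ansible_runner.run() also takes keyword arguments for an inventory and extra variables; a short sketch with placeholder paths and values:
import ansible_runner

# Sketch: inventory and extravars are keyword arguments of ansible_runner.run();
# the paths and values here are placeholders.
r = ansible_runner.run(
    private_data_dir='/tmp/demo',
    playbook='test.yml',
    inventory='/tmp/demo/inventory/hosts',
    extravars={'target_env': 'staging'},
)
print(r.status, r.rc)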
Based on vigilander's answer, I took the liberty of uploading a package to PyPI, so now you can use this simple ansible runner:
Just pip install ansible-playbook-runner and run:
from ansible_playbook_runner import Runner
Runner(['inventory_path'], 'playbook_path').run()
You're looking at something that isn't officially supported or recommended, hence the scarce documentation.
That said, if you really want to pursue this course, I would start by cracking open the ansible-playbook script in bin and reverse engineering what you want to do.
Related
I am trying to run a playbook from within Python code, but I do not know how to pass the --limit option.
It's similar to the following: How to set "--limit" option in Ansible playbook api. However, I think the API has changed a bit. I have the following inventory:
[rhosts]
host_a
host_b
host_c
Inside ping.yaml, hosts is set to rhosts, but I need to limit the run to only host_a and host_c.
This is my code to run the playbook:
from ansible import context
from ansible.cli import CLI
from ansible.module_utils.common.collections import ImmutableDict
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.parsing.dataloader import DataLoader
from ansible.inventory.manager import InventoryManager
from ansible.vars.manager import VariableManager
from simple_term_menu import TerminalMenu
INVENTORY_FILE = 'hosts'
loader = DataLoader()
context.CLIARGS = ImmutableDict(tags={}, listtags=False, listtasks=False,
listhosts=False, syntax=False, connection='ssh',
module_path=None, forks=100, private_key_file=None,
ssh_common_args=None, ssh_extra_args=None, sftp_extra_args=None,
scp_extra_args=None, become=True, become_method='sudo', become_user='root',
verbosity=True, check=False, start_at_task=None,)
inventory = InventoryManager(loader=loader, sources=[INVENTORY_FILE])
variable_manager = VariableManager(loader=loader, inventory=inventory, version_info=CLI.version_info(gitinfo=False))
pbex = PlaybookExecutor(playbooks=['ping.yml'],
inventory=inventory, variable_manager=variable_manager,
loader=loader, passwords={})
results = pbex.run()
I tried searching the Ansible code and only found:
def get_host_list(inventory, subset, pattern='all'):
no_hosts = False
if len(inventory.list_hosts()) == 0:
# Empty inventory
if C.LOCALHOST_WARNING and pattern not in C.LOCALHOST:
display.warning("provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'")
no_hosts = True
inventory.subset(subset)
hosts = inventory.list_hosts(pattern)
if not hosts and no_hosts is False:
raise AnsibleError("Specified hosts and/or --limit does not match any hosts")
return hosts
I can't work out what to pass and where, or whether that's even the right bit of code.
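Based on that helper, my best guess is that calling subset() on the InventoryManager before building the executor is the API equivalent of --limit, something like this (unverified):
# Guess based on get_host_list() above: restrict the inventory before running,
# using the same pattern syntax as --limit.
inventory = InventoryManager(loader=loader, sources=[INVENTORY_FILE])
inventory.subset('host_a,host_c')

variable_manager = VariableManager(loader=loader, inventory=inventory,
                                   version_info=CLI.version_info(gitinfo=False))
pbex = PlaybookExecutor(playbooks=['ping.yml'], inventory=inventory,
                        variable_manager=variable_manager, loader=loader,
                        passwords={})
results = pbex.run()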
I am trying to implement a hostname-like module, and my target machine is an amazon-ec2 instance. But when I run the script, it gives me the error below:
[ansible-user#ansible-master ~]$ ansible node1 -m edit_hostname.py -a node2
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
My module is like this:
#!/usr/bin/python
from ansible.module_utils.basic import *
try:
import json
except ImportError:
import simplejson as json
def write_to_file(module, hostname, hostname_file):
try:
with open(hostname_file, 'w+') as f:
try:
f.write("%s\n" %hostname)
finally:
f.close()
except Exception:
err = get_exception()
module.fail_json(msg="failed to write to the /etc/hostname file")
def main():
hostname_file = '/etc/hostname'
module = AnsibleModule(argument_spec=dict(name=dict(required=True, type=str)))
name = module.params['name']
write_to_file(module, name, hostname_file)
module.exit_json(changed=True, meta=name)
if __name__ == "__main__":
main()
I don't know where I am making the mistake. Any help will be greatly appreciated. Thank you.
When developing a new module, I would recommend using the boilerplate described in the documentation. This also shows that you'll need to use AnsibleModule to define your arguments.
In your main, you should add something like the following:
def main():
# define available arguments/parameters a user can pass to the module
module_args = dict(
name=dict(type='str', required=True)
)
# seed the result dict in the object
# we primarily care about changed and state
# change is if this module effectively modified the target
# state will include any data that you want your module to pass back
# for consumption, for example, in a subsequent task
result = dict(
changed=False,
original_hostname='',
hostname=''
)
module = AnsibleModule(
argument_spec=module_args,
supports_check_mode=False
)
# manipulate or modify the state as needed (this is going to be the
# part where your module will do what it needs to do)
result['original_hostname'] = module.params['name']
result['hostname'] = 'goodbye'
# use whatever logic you need to determine whether or not this module
# made any modifications to your target
result['changed'] = True
# in the event of a successful module execution, you will want to
# simply call AnsibleModule.exit_json(), passing the key/value results
module.exit_json(**result)
Then, you can call the module like so:
ansible node1 -m mymodule.py -a "name=myname"
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
As explained by your error message, an anonymous default parameter is only supported by a limited number of modules. In your custom module, the parameter you created is called name. Moreover, you should not include the .py extension in the module name. You have to call your module as an ad-hoc command like so:
$ ansible node1 -m edit_hostname -a name=node2
I did not test your module code so you may have further errors to fix.
Meanwhile, I still strongly suggest you use the default boilerplate from the Ansible documentation, as proposed in Simon's answer.
Using the Ansible 2 Python API I'm able to run playbooks and handle results with a custom callback handler (thanks to this question). Everything works well, but now I'd like to implement a simple retry loop for the PlaybookExecutor.
All my callback handler does is stuff any failed tasks into an array; if the array isn't empty, I count the run as a failure and try again.
I have another python module that uses this script to kick off the playbook. The call to run_playbook is nested in a try/except block and I'd like an exception to bubble up so I can properly handle the failure.
I'd like to give my playbook 3 attempts at running and if all fail then raise an exception.
Here is my code:
#! /usr/bin/python
# -*- coding: utf-8 -*-
from __future__ import print_function
import logging
import os
from collections import namedtuple
from ansible.parsing.dataloader import DataLoader
from ansible.vars.manager import VariableManager
from ansible.inventory.manager import InventoryManager
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.plugins.callback import CallbackBase
from ansible import constants as C  # needed for C.COLOR_SKIP below
class ResultsCallback(CallbackBase):
""" A callback plugin used for performing an action as results come in """
def __init__(self):
super(ResultsCallback, self).__init__()
# Store all failed results
self.failed = []
def v2_runner_on_failed(self, result, ignore_errors=False):
if ignore_errors:
self._display.display("...ignoring", color=C.COLOR_SKIP)
host = result._host
self.failed.append(result.task_name)
def create_inventory_file(hostnames):
inv_file = 'ansible_hosts.{0}'.format(os.getppid())
logging.print('\nCreating Ansible host file: {0}/{1}'.format(os.path.join(os.path.expanduser('~')), inv_file))
with open(os.path.join(os.path.expanduser('~'), inv_file), 'w') as host_file:
# If ec2, stuff into an '[ec2]' group.
# Otherwise don't use a group header
if 'ec2' in hostnames[0]:
host_file.write('[ec2]\n')
for host in hostnames:
host_file.write('{0}\n'.format(host))
return os.path.join(os.path.expanduser('~'), inv_file)
def run_playbook(hostnames, playbook, playbook_arguments, host_file=False):
# If user passes in the optional arg host_file, then just use that one.
if not host_file:
host_file = create_inventory_file(hostnames)
if not os.path.isfile(host_file):
logging.critical('Host file does not exist. Make sure absolute path is correct.\nInventory: {0}'.format(host_file))
raise RuntimeError('Host file does not exist')
loader = DataLoader()
inventory = InventoryManager(loader=loader, sources=host_file)
variable_manager = VariableManager(loader=loader, inventory=inventory)
# Add extra variables to use in playbook like so:
# variable_manager.extra_vars = {'name': 'value'}
if playbook_arguments:
variable_manager.extra_vars = playbook_arguments
Options = namedtuple('Options', ['listtags', 'listtasks', 'listhosts', 'syntax', 'connection','module_path', 'forks', 'remote_user', 'become', 'become_method', 'become_user', 'verbosity', 'check', 'diff', 'ask_sudo_pass'])
if 'superuser' in playbook_arguments:
remote_user = playbook_arguments['superuser']
else:
remote_user = 'ec2-user'
options = Options(listtags=None, listtasks=None, listhosts=None, syntax=None, connection='smart', module_path=None, forks=100, remote_user=remote_user, become=None, become_method='sudo', become_user='root', verbosity=None, check=False, diff=False, ask_sudo_pass=None)
pbex = PlaybookExecutor(playbooks=[playbook], inventory=inventory, variable_manager=variable_manager, loader=loader, options=options, passwords={})
callback = ResultsCallback()
pbex._tqm._stdout_callback = callback
logging.print('Provisioning cluster with Ansible...')
attempts = 3
for i in range(attempts):
try:
pbex.run()
failed = callback.failed
if failed:
logging.critical('Playbook failed!')
raise RuntimeError('{0} tasks failed'.format(len(failed)))
break
except:
if i < attempts - 1:
logging.critical('Attempting to re-try playbook')
continue
else:
raise
logging.print('\nRemoving Ansible Inventory file {0}'.format(host_file))
try:
os.remove(host_file)
except OSError:
pass
However, when I test the above code using a playbook that is guaranteed to fail, it fails with the following traceback:
Creating Ansible host file: /home/someuser/ansible_hosts.18600
Provisioning cluster with Ansible...
Playbook failed!
Attempting to re-try playbook
Exception during setup; tearing down all created instances
Traceback (most recent call last):
File "./manage_aws.py", line 486, in cmd_ec2_create
manage_ansible.run_playbook(hostnames, playbook, playbook_arguments)
File "/home/someuser/manage_ansible.py", line 88, in run_playbook
break
File "/usr/local/lib/python2.7/dist-packages/ansible/executor/playbook_executor.py", line 159, in run
result = self._tqm.run(play=play)
File "/usr/local/lib/python2.7/dist-packages/ansible/executor/task_queue_manager.py", line 296, in run
strategy.cleanup()
File "/usr/local/lib/python2.7/dist-packages/ansible/plugins/strategy/__init__.py", line 223, in cleanup
self._final_q.put(_sentinel)
File "/usr/lib/python2.7/multiprocessing/queues.py", line 100, in put
assert not self._closed
AssertionError
You'll notice that the exception is properly caught inside of the calling script manage_aws.py ("Exception during setup; tearing down all created instances") and we go to tear down the instances. That's great, but I'd like to properly re-try the playbook before deciding to do so.
I'm no Python master, so if anyone has any tips, or has accomplished something similar, then I would very much appreciate your advice.
Thanks in advance!
I found a solution, and while it isn't as graceful as I was hoping, it works.
The problem I was having seemed to be related to re-running the same PlaybookExecutor object without letting the spawned threads clean up properly.
To fix it, I just initialize a new PlaybookExecutor object when I notice that the first one failed. The current implementation allows only one retry, which is fine, but I'll most likely adapt it to do more if necessary.
Here is my adapted retry logic:
pbex = PlaybookExecutor(playbooks=[playbook], inventory=inventory, variable_manager=variable_manager, loader=loader, options=options, passwords={})
callback = ResultsCallback()
pbex._tqm._stdout_callback = callback
logging.print('Provisioning cluster with Ansible...')
pbex.run()
failed = callback.failed
if failed:
logging.critical('Playbook failed! Attempting retry...')
pbex_retry = PlaybookExecutor(playbooks=[playbook], inventory=inventory, variable_manager=variable_manager, loader=loader, options=options, passwords={})
callback_retry = ResultsCallback()
pbex_retry._tqm._stdout_callback = callback_retry
pbex_retry.run()
failed_retry = callback_retry.failed
if failed_retry:
logging.critical('Playbook failed again! Failed on task:\n{0}'.format(failed_retry[0]))
remove_inventory_file(host_file)
raise RuntimeError('Playbook failed to successfully configure the cluster.')
remove_inventory_file(host_file)
It's a super simple solution, but it's a shame my initial attempt didn't work as intended. Maybe I'll revisit it and try to properly clean up the executor on failure.
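For reference, generalizing it to more attempts just means moving the fresh-executor construction into the loop; a sketch along the lines of my retry code above:
# Sketch: build a fresh PlaybookExecutor (and callback) on every attempt,
# since re-running the same executor trips over its closed result queue.
attempts = 3
for attempt in range(attempts):
    pbex = PlaybookExecutor(playbooks=[playbook], inventory=inventory,
                            variable_manager=variable_manager, loader=loader,
                            options=options, passwords={})
    callback = ResultsCallback()
    pbex._tqm._stdout_callback = callback
    pbex.run()
    if not callback.failed:
        break
    logging.critical('Playbook failed! Attempting retry...')
else:
    remove_inventory_file(host_file)
    raise RuntimeError('Playbook failed to successfully configure the cluster.')
remove_inventory_file(host_file)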
ansible-playbook has a --list-hosts CLI switch that just outputs the hosts affected by each play in a playbook. I am looking for a way to access the same information through the Python API.
The (very) basic script I am using to test right now is:
#!/usr/bin/python
import ansible.runner
import ansible.playbook
import ansible.inventory
from ansible import callbacks
from ansible import utils
import json
# hosts list
hosts = ["127.0.0.1"]
# set up the inventory, if no group is defined then 'all' group is used by default
example_inventory = ansible.inventory.Inventory(hosts)
pm = ansible.runner.Runner(
module_name = 'command',
module_args = 'uname -a',
timeout = 5,
inventory = example_inventory,
subset = 'all' # name of the hosts group
)
out = pm.run()
print json.dumps(out, sort_keys=True, indent=4, separators=(',', ': '))
I just can't figure out what to add to ansible.runner.Runner() to make it output affected hosts and exit.
I'm not sure what you are trying to achieve, but ansible.runner.Runner actually runs a single task, not a playbook.
Your script is more akin to the ansible CLI than to ansible-playbook.
And ansible doesn't have a --list-hosts option, while ansible-playbook does.
You can see how listhosts is done here.
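If what you're after is just the resolved host list for a pattern, the inventory object you're already constructing can give you that; a sketch with the pre-2.0 API from your script:
import ansible.inventory

# Sketch: resolve a pattern (group name, 'all', etc.) to the matching hosts,
# similar to what --list-hosts prints per play.
example_inventory = ansible.inventory.Inventory(["127.0.0.1"])  # or a path to an inventory file
print(example_inventory.list_hosts('all'))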
How to specify a callback when calling ansible via its API?
I have a callback plugin database_write.py for ansible 2.0.0.2 that logs to a database when this is run:
ansible-playbook -i inventory.txt playbook.yml # callback is fired ok
This works OK because in my $PWD I have an ansible.cfg with this line:
callback_plugins = ./src/callback
Now I'm trying to make Ansible execute my playbook and my callback using the Python API. I've basically copied what the ansible-playbook CLI tool does:
# based on https://github.com/ansible/ansible/blob/v2.0.0.2-1/lib/ansible/cli/playbook.py
pbex = PlaybookExecutor(playbooks=['../playbook.yml'],
inventory=inventory,
variable_manager=variable_manager,
loader=loader,
options=options,
passwords=passwords)
results = pbex.run()
This executes the playbook fine, but the callback is not triggered.
I guess that when using the Python API, my ansible.cfg file is not taken into account?
How do I specify my callback plugin to the PlaybookExecutor?
(most of the documentation I've found works for ansible versions < 2.0)
thank you in advance!
The 2.0 API is very raw and generally not suited for end users. It's also likely to change in the future. See this discussion on the mailing list, where I posted a similar question and proposed my own answer, which apparently was correct: you can assign your callback to the TaskQueueManager instance of the executor (PlaybookExecutor._tqm._stdout_callback).
pbex = PlaybookExecutor(playbooks=playbooks, inventory=inventory,
variable_manager=variable_manager,
loader=loader, options=options,
passwords=passwords)
cb = ResultAccumulator()
pbex._tqm._stdout_callback = cb
results = pbex.run()
(ResultAccumulator is a callback plugin class from here. It should be derived from ansible.plugins.callback.CallbackBase.)
Ansible API docs are scarce. PlaybookExecutor does not accept any callback arguments.
However, you can create a custom callback from CallbackBase.
Here's an example which returns a list of all the TaskResults:
class SampleCallback(CallbackBase):
"""Sample callback"""
def __init__(self):
super(SampleCallback, self).__init__()
# store all results
self.results = []
def v2_runner_on_ok(self, result, **kwargs):
"""Save result instead of printing it"""
self.results.append(result)
pbe = PlaybookExecutor(
playbooks=playbooks,
inventory=inventory,
variable_manager=variable_manager,
loader=loader,
options=options,
passwords=passwords
)
callback = SampleCallback()
pbe._tqm._stdout_callback = callback
return_code = pbe.run()
results = callback.results
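Each collected item is a TaskResult; pulling the host and raw result dict out of them looks roughly like this (the attributes are internal, so they may change between releases):
# Sketch: TaskResult._host and TaskResult._result are internal attributes,
# so treat this as version-dependent.
for task_result in results:
    print(task_result._host.get_name(), task_result._result)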
With my setup (Ansible 2.0.0.2, RHEL 6, Python 2.6.6), I needed to have it set in /etc/ansible/ansible.cfg, and did it thusly:
callback_plugins = /usr/lib/python2.6/site-packages/ansible/plugins/callback
bin_ansible_callbacks = True
callback_whitelist = profile_tasks