"--limit" option when calling playbook in Ansible python api - python

I am trying to run a playbook from within Python code, but I do not know how to pass the --limit option.
This is similar to How to set "--limit" option in Ansible playbook api, however I think the API has changed a bit. I have the following inventory:
[rhosts]
host_a
host_b
host_c
Inside ping.yaml, hosts is set to rhosts, but I need to limit the run to only host_a and host_c.
This is my code to run the playbook:
from ansible import context
from ansible.cli import CLI
from ansible.module_utils.common.collections import ImmutableDict
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.parsing.dataloader import DataLoader
from ansible.inventory.manager import InventoryManager
from ansible.vars.manager import VariableManager
from simple_term_menu import TerminalMenu

INVENTORY_FILE = 'hosts'

loader = DataLoader()
context.CLIARGS = ImmutableDict(tags={}, listtags=False, listtasks=False,
                                listhosts=False, syntax=False, connection='ssh',
                                module_path=None, forks=100, private_key_file=None,
                                ssh_common_args=None, ssh_extra_args=None,
                                sftp_extra_args=None, scp_extra_args=None,
                                become=True, become_method='sudo', become_user='root',
                                verbosity=True, check=False, start_at_task=None)
inventory = InventoryManager(loader=loader, sources=[INVENTORY_FILE])
variable_manager = VariableManager(loader=loader, inventory=inventory,
                                   version_info=CLI.version_info(gitinfo=False))
pbex = PlaybookExecutor(playbooks=['ping.yml'],
                        inventory=inventory, variable_manager=variable_manager,
                        loader=loader, passwords={})
results = pbex.run()
I searched the Ansible source and only found:
def get_host_list(inventory, subset, pattern='all'):
    no_hosts = False
    if len(inventory.list_hosts()) == 0:
        # Empty inventory
        if C.LOCALHOST_WARNING and pattern not in C.LOCALHOST:
            display.warning("provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'")
        no_hosts = True
    inventory.subset(subset)
    hosts = inventory.list_hosts(pattern)
    if not hosts and no_hosts is False:
        raise AnsibleError("Specified hosts and/or --limit does not match any hosts")
    return hosts
I can't work out what to pass, or where to pass it, if that's even the right bit of code.
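For what it's worth, with the 2.8-era objects used above, the --limit pattern is normally applied by calling subset() on the InventoryManager before the executor runs. A minimal, unverified sketch reusing the setup from the question (host and file names come from the inventory above):

```python
# Sketch (Ansible 2.8-era API, not verified against every release):
# --limit is applied by restricting the inventory before the run.
from ansible import context
from ansible.cli import CLI
from ansible.module_utils.common.collections import ImmutableDict
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.parsing.dataloader import DataLoader
from ansible.inventory.manager import InventoryManager
from ansible.vars.manager import VariableManager

loader = DataLoader()
context.CLIARGS = ImmutableDict(tags={}, listtags=False, listtasks=False,
                                listhosts=False, syntax=False, connection='ssh',
                                module_path=None, forks=100, private_key_file=None,
                                ssh_common_args=None, ssh_extra_args=None,
                                sftp_extra_args=None, scp_extra_args=None,
                                become=True, become_method='sudo', become_user='root',
                                verbosity=True, check=False, start_at_task=None)

inventory = InventoryManager(loader=loader, sources=['hosts'])
# Programmatic equivalent of: ansible-playbook --limit host_a,host_c
inventory.subset('host_a,host_c')

variable_manager = VariableManager(loader=loader, inventory=inventory,
                                   version_info=CLI.version_info(gitinfo=False))
pbex = PlaybookExecutor(playbooks=['ping.yml'], inventory=inventory,
                        variable_manager=variable_manager, loader=loader,
                        passwords={})
pbex.run()
```

This is also what the get_host_list() snippet above does internally: the subset argument it forwards to inventory.subset() is the value of --limit.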

Related

FastAPI testing No ORM Databases

I don't use an ORM; I prefer plain text queries. I took the freeCodeCamp course on FastAPI, but almost the whole course revolves around ORMs, so I struggle to implement a number of concepts from it.
My main struggle is that when I write tests for my endpoints I need to use a test database rather than my development database. I am using testing.postgresql to create the testing database, and this is the code I am using:
client = TestClient(app)

class TestUser:
    def setup_method(self, method):
        self.postgres = testing.postgresql.Postgresql(cache_initialized_db=True)
        self.db = psycopg2.connect(**self.postgres.dsn())

    def teardown_method(self, method):
        self.db.close()

    def test_get_users(self):
        res = client.get("/users")
        assert res.status_code == 200
        assert res.headers["content-type"] == "application/json"
My question is how to make my tests and CRUD functions send queries to the test database, not the development database. Where is the link that I should add here?
Edit: this is the code I use for the database connection; I am not using a dependency for it:
while True:
    try:
        conn = psycopg2.connect(
            database=settings.database,
            user=settings.user,
            port=settings.port,
            password=settings.password,
            cursor_factory=RealDictCursor,
        )
        cr = conn.cursor()
        console.print(
            "[green bold]SUCCESS[/]: Connection To Database Established Successfully"
        )
        break
    except psycopg2.OperationalError:
        console.print(
            "[red bold]FAILED[/]: Connection To Database Failed, Trying Again"
        )
        time.sleep(2)
Edit 2: this is the settings segment:
load_dotenv()

class Settings(BaseSettings):
    database: str
    user: str
    port: int
    password: str
    expiray: int
    algorithm: str
    secret_key: str

settings = Settings()
If you want to use the test database rather than the development database, you should create the testing database and make the backend application able to find and use it.
Kareem, I suggest you create your Settings instance from one .env file or another, based on an ENVIRONMENT variable:
import os
import sys

import pydantic

class Settings(pydantic.BaseSettings):
    database: str
    user: str
    port: int
    password: str
    expiray: int
    algorithm: str
    secret_key: str

try:
    ENVIRONMENT = os.environ['ENVIRONMENT']
except KeyError:
    print('make sure to set an "ENVIRONMENT" env var')
    sys.exit()

settings = Settings(_env_file=f'{ENVIRONMENT}.env', _env_file_encoding='utf-8')
This way, if you want to work with the testing database, you can do $ export ENVIRONMENT=test and it will load the settings from a file called test.env.
Does that make sense? Here are the docs for pydantic's dotenv support for BaseSettings subclasses.
As an aside, since you're interested in doing as much as you can without an ORM and perhaps other dependencies, you don't really need pydantic for your purposes. You could simply change the above to something like:
import os
import sys
import types

import dotenv

try:
    ENVIRONMENT = os.environ['ENVIRONMENT']
except KeyError:
    print('make sure to set an "ENVIRONMENT" env var')
    sys.exit()

db_settings_dict = dotenv.dotenv_values(f"{ENVIRONMENT}.env")
# to get dot notation as in your example
settings = types.SimpleNamespace(**db_settings_dict)
where test.env contains things like
database=hey.there.com:8768
user=kareem
...
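If you want to drop the python-dotenv dependency as well, a simple KEY=value file like the one above can be parsed with nothing but the standard library. A rough sketch (the file name and keys are illustrative, and it ignores dotenv's quoting and interpolation rules):

```python
import types

def load_env_file(path):
    """Parse a simple KEY=value file into a dict.

    Blank lines and #-comments are skipped; values are kept as strings.
    A rough, minimal stand-in for dotenv.dotenv_values.
    """
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith('#') or '=' not in line:
                continue
            key, _, value = line.partition('=')
            values[key.strip()] = value.strip()
    return values

# To get dot notation, as above:
# settings = types.SimpleNamespace(**load_env_file('test.env'))
```

Note that everything comes back as a string this way; casting port to int (as pydantic would do) is up to you.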
Below is the script I wrote to control the ENV variable from the .env file. Before running the uvicorn server, you load the .env file to determine which mode you are working in, then use conditions to set up the corresponding DB path and name.
import os
import sys
from colored import fg
from os import environ, path
from dotenv import load_dotenv, set_key
import dotenv

def unknown(option, args=None):
    color = fg('red')
    if args:
        print(color + f'Unknown arguments: -{args}')
    else:
        print(color + f'Unknown option: -{option}')
    print("usage: python3 config.py [option] [arguments]")
    print("Try `python3 config.py --help` for more information")
    color = fg('white')
    print(color)

def version():
    basedir = path.abspath(path.dirname(__file__))
    load_dotenv(path.join(basedir, '.env'))
    color = fg('blue')
    print(color + '[VERSION]: ' + environ.get('VER'))
    color = fg('white')
    print(color)

def env():
    basedir = path.abspath(path.dirname(__file__))
    load_dotenv(path.join(basedir, '.env'))
    color = fg('blue')
    print(color + '[ENV]: ' + environ.get('ENV'))
    color = fg('white')
    print(color)

def set_env(env):
    dotenv_file = dotenv.find_dotenv()
    basedir = path.abspath(path.dirname(__file__))
    load_dotenv(path.join(basedir, '.env'))
    color = fg('blue')
    print(color + f"[Updating ENV] ... ... from {environ.get('ENV')} to {env}")
    os.environ["ENV"] = env
    set_key(dotenv_file, 'ENV', os.environ['ENV'])
    print("[Successfully Update the 'ENV' from .env]")
    color = fg('white')
    print(color)

def doc():
    color = fg('blue')
    print(color + "usage: python3 config.py [option] [arguments]")
    read_me = """
    -v             : current project version
    -env           : current project environment
    -set ENV={env} : set project env 'DEV' or 'PROD'
    """
    print(read_me)
    color = fg('white')
    print(color)

def action_handler(option, args):
    if option == '-set':
        args_list = args.split('=')
        if args_list[0] == 'ENV' and (args_list[1] == 'DEV' or args_list[1] == 'PROD'):
            set_env(args_list[1])
        else:
            unknown(option=option, args=args)
    else:
        unknown(option=option)

argvs = len(sys.argv)
if argvs == 1:
    doc()
elif argvs == 2:
    option = sys.argv[1]
    if option == '--help':
        doc()
    elif option == '-v':
        version()
    elif option == '-env':
        env()
    else:
        unknown(option)
elif argvs == 3:
    option, args = sys.argv[1], sys.argv[2]
    action_handler(option, args)
else:
    pass

How to run 'connect_get_namespaced_pod_exec' as root in python k8s client

from os import getenv, listdir, path
from kubernetes import client, config
from kubernetes.client.rest import ApiException  # needed for the except clause below
from kubernetes.stream import stream
import constants, logging
from pprint import pprint

def listdir_fullpath(directory):
    return [path.join(directory, file) for file in listdir(directory)]

def active_context(kubeConfig, cluster):
    config.load_kube_config(config_file=kubeConfig, context=cluster)

def kube_exec(command, apiInstance, podName, namespace, container):
    response = None
    execCommand = [
        '/bin/bash',
        '-c',
        command]
    try:
        response = apiInstance.read_namespaced_pod(name=podName,
                                                   namespace=namespace)
    except ApiException as e:
        if e.status != 404:
            print(f"Unknown error: {e}")
            exit(1)
    if not response:
        print("Pod does not exist")
        exit(1)
    try:
        response = stream(apiInstance.connect_get_namespaced_pod_exec,
                          podName,
                          namespace,
                          container=container,
                          command=execCommand,
                          stderr=True,
                          stdin=False,
                          stdout=True,
                          tty=False,
                          _preload_content=True)
    except Exception as e:
        print("error in executing cmd")
        exit(1)
    pprint(response)

if __name__ == '__main__':
    configPath = constants.CONFIGFILE
    kubeConfigList = listdir_fullpath(configPath)
    kubeConfig = ':'.join(kubeConfigList)
    active_context(kubeConfig, "ort.us-west-2.k8s.company-foo.net")
    apiInstance = client.CoreV1Api()
    kube_exec("whoami", apiInstance, "podname-foo", "namespace-foo", "container-foo")
When I run this code, the response I get from running whoami is 'java\n'.
How can I run as root? Also, I can't find good documentation for this client anywhere (the docs on the git repo are pretty poor); if you can link me to any, it would be awesome.
EDIT: I just tried on a couple of different pods and containers; it looks like some of them default to root. I would still like to be able to choose my user when I run a command, so the question is still relevant.
some of them default to root, would still like to be able to choose my user when I run a command so question is still relevant
You have influence over the UID (not the user directly, as far as I know) when you launch the Pod, but from that point forward there is no equivalent to docker exec -u in Kubernetes: you can attach to the Pod, running as whatever UID it was launched as, but you cannot change the UID.
I would hypothesize that's a security concern in locked-down clusters, since one would not want someone with kubectl access to be able to elevate privileges.
If you need to run as root in your container, then you should set securityContext: runAsUser: 0 and then drop privileges when running your main process. That way, new commands (spawned by your exec command) will run as root, just as your initial command: does.
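With the Python client, that securityContext is set when the Pod is built. A minimal sketch using the model classes from the official kubernetes package (the pod name, image, and namespace are placeholders, and whether UID 0 is allowed depends on the cluster's security policy):

```python
from kubernetes import client

# Sketch: a Pod spec whose container runs as UID 0 (root), so that any
# later connect_get_namespaced_pod_exec call also runs as root.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="root-demo"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="main",
                image="busybox",
                command=["sleep", "3600"],
                security_context=client.V1SecurityContext(run_as_user=0),
            )
        ]
    ),
)
# With a loaded kube config:
# client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```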

how to create vms with pyvmomi

I am trying to create a Python program that will create a provided number of identical virtual machines. I have used the community sample scripts to get as much as I can running, but I am completely stuck now.
#!/usr/bin/env python
"""
vSphere SDK for Python program for creating tiny VMs (1vCPU/128MB)
"""
import atexit
import hashlib
import json
import random
import time

import requests
from pyVim import connect
from pyVmomi import vim

from tools import cli
from tools import tasks
from add_nic_to_vm import add_nic, get_obj

def get_args():
    """
    Use the tools.cli methods and then add a few more arguments.
    """
    parser = cli.build_arg_parser()
    parser.add_argument('-c', '--count',
                        type=int,
                        required=True,
                        action='store',
                        help='Number of VMs to create')
    parser.add_argument('-d', '--datastore',
                        required=True,
                        action='store',
                        help='Name of Datastore to create VM in')
    parser.add_argument('--datacenter',
                        required=True,
                        help='Name of the datacenter to create VM in.')
    parser.add_argument('--folder',
                        required=True,
                        help='Name of the vm folder to create VM in.')
    parser.add_argument('--resource-pool',
                        required=True,
                        help='Name of resource pool to create VM in.')
    parser.add_argument('--opaque-network',
                        help='Name of the opaque network to add to the new VM')
    # NOTE (hartsock): as a matter of good security practice, never ever
    # save a credential of any kind in the source code of a file. As a
    # matter of policy we want to show people good programming practice in
    # these samples so that we don't encourage security audit problems for
    # people in the future.
    args = parser.parse_args()
    return cli.prompt_for_password(args)

def create_dummy_vm(vm_name, service_instance, vm_folder, resource_pool,
                    datastore):
    """Creates a dummy VirtualMachine with 1 vCpu, 128MB of RAM.
    :param vm_name: String Name for the VirtualMachine
    :param service_instance: ServiceInstance connection
    :param vm_folder: Folder to place the VirtualMachine in
    :param resource_pool: ResourcePool to place the VirtualMachine in
    :param datastore: Datastore to place the VirtualMachine on
    """
    datastore_path = '[' + datastore + '] ' + vm_name
    # bare minimum VM shell, no disks. Feel free to edit
    vmx_file = vim.vm.FileInfo(logDirectory=None,
                               snapshotDirectory=None,
                               suspendDirectory=None,
                               vmPathName=datastore_path)
    config = vim.vm.ConfigSpec(name=vm_name, memoryMB=128, numCPUs=1,
                               files=vmx_file, guestId='dosGuest',
                               version='vmx-07')
    print("Creating VM {}...".format(vm_name))
    task = vm_folder.CreateVM_Task(config=config, pool=resource_pool)
    tasks.wait_for_tasks(service_instance, [task])

A = 1

def main():
    """
    Simple command-line program for creating Dummy VM based on Marvel character
    names
    """
    name = "computer" + str(A)
    args = get_args()
    service_instance = connect.SmartConnectNoSSL(host=args.host,
                                                 user=args.user,
                                                 pwd=args.password,
                                                 port=int(args.port))
    if not service_instance:
        print("Could not connect to the specified host using specified "
              "username and password")
        return -1
    atexit.register(connect.Disconnect, service_instance)
    content = service_instance.RetrieveContent()
    datacenter = get_obj(content, [vim.Datacenter], args.datacenter)
    vmfolder = get_obj(content, [vim.Folder], args.folder)
    resource_pool = get_obj(content, [vim.ResourcePool], args.resource_pool)
    vm_name = name
    create_dummy_vm(vm_name, service_instance, vmfolder, resource_pool,
                    args.datastore)
    A + 1
    if args.opaque_network:
        vm = get_obj(content, [vim.VirtualMachine], vm_name)
        add_nic(service_instance, vm, args.opaque_network)
    return 0

# Start program
if __name__ == "__main__":
    main()
The error I get when running it is:
Creating VM computer1...
Traceback (most recent call last):
  File "create_vm.py", line 142, in <module>
    main()
  File "create_vm.py", line 133, in main
    args.datastore)
  File "create_vm.py", line 98, in create_dummy_vm
    task = vm_folder.CreateVM_Task(config=config, pool=resource_pool)
AttributeError: 'NoneType' object has no attribute 'CreateVM_Task'
I know that vm_folder must be None when CreateVM_Task is called, but I can't work out why.
The problem was with the config parameters. With the original code, the datacenter and vmfolder objects come back as None when printed. To fix this, I changed it to the following block:
content = service_instance.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]
vmfolder = datacenter.vmFolder
hosts = datacenter.hostFolder.childEntity
resource_pool = hosts[0].resourcePool
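Alternatively, looking objects up by name with a container view (rather than indexing childEntity) makes it easier to see why a lookup comes back as None. A sketch along the lines of the get_obj helper the question imports from add_nic_to_vm (assumes an existing ServiceInstance content object):

```python
def get_obj(content, vimtype, name):
    """Return the first managed object of the given type whose name
    matches, or None if nothing matches -- the pattern used by the
    pyvmomi community samples."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, vimtype, True)
    try:
        for obj in view.view:
            if obj.name == name:
                return obj
    finally:
        # Container views hold server resources; always release them.
        view.Destroy()
    return None

# usage (assuming pyVmomi's vim is imported and content comes from
# service_instance.RetrieveContent()):
# datacenter = get_obj(content, [vim.Datacenter], 'my-datacenter')
```

If this returns None, the name passed on the command line simply doesn't match any object of that type, which is exactly what produced the AttributeError above.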

Ansible Playbook Retry Logic Using Python API

Using the Ansible 2 Python API I'm able to run playbooks and handle results with a custom callback handler (thanks to this question). Everything works well, but now I'd like to implement a simple retry loop for the PlaybookExecutor.
All my callback handler does is stuff any failed tasks into an array; if the array isn't empty, I count the run as a failure and try again.
I have another python module that uses this script to kick off the playbook. The call to run_playbook is nested in a try/except block and I'd like an exception to bubble up so I can properly handle the failure.
I'd like to give my playbook 3 attempts at running and if all fail then raise an exception.
Here is my code:
#! /usr/bin/python
# -*- coding: utf-8 -*-
from __future__ import print_function

import logging
import os
from collections import namedtuple

from ansible import constants as C  # needed for C.COLOR_SKIP below
from ansible.parsing.dataloader import DataLoader
from ansible.vars.manager import VariableManager
from ansible.inventory.manager import InventoryManager
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.plugins.callback import CallbackBase

class ResultsCallback(CallbackBase):
    """ A callback plugin used for performing an action as results come in """
    def __init__(self):
        super(ResultsCallback, self).__init__()
        # Store all failed results
        self.failed = []

    def v2_runner_on_failed(self, result, ignore_errors=False):
        if ignore_errors:
            self._display.display("...ignoring", color=C.COLOR_SKIP)
        host = result._host
        self.failed.append(result.task_name)

def create_inventory_file(hostnames):
    inv_file = 'ansible_hosts.{0}'.format(os.getppid())
    logging.print('\nCreating Ansible host file: {0}/{1}'.format(
        os.path.join(os.path.expanduser('~')), inv_file))
    with open(os.path.join(os.path.expanduser('~'), inv_file), 'w') as host_file:
        # If ec2, stuff into an '[ec2]' group.
        # Otherwise don't use a group header
        if 'ec2' in hostnames[0]:
            host_file.write('[ec2]\n')
        for host in hostnames:
            host_file.write('{0}\n'.format(host))
    return os.path.join(os.path.expanduser('~'), inv_file)

def run_playbook(hostnames, playbook, playbook_arguments, host_file=False):
    # If user passes in the optional arg host_file, then just use that one.
    if not host_file:
        host_file = create_inventory_file(hostnames)
    if not os.path.isfile(host_file):
        logging.critical('Host file does not exist. Make sure absolute path '
                         'is correct.\nInventory: {0}'.format(host_file))
        raise RuntimeError('Host file does not exist')
    loader = DataLoader()
    inventory = InventoryManager(loader=loader, sources=host_file)
    variable_manager = VariableManager(loader=loader, inventory=inventory)
    # Add extra variables to use in playbook like so:
    # variable_manager.extra_vars = {'name': 'value'}
    if playbook_arguments:
        variable_manager.extra_vars = playbook_arguments
    Options = namedtuple('Options', ['listtags', 'listtasks', 'listhosts',
                                     'syntax', 'connection', 'module_path',
                                     'forks', 'remote_user', 'become',
                                     'become_method', 'become_user',
                                     'verbosity', 'check', 'diff',
                                     'ask_sudo_pass'])
    if 'superuser' in playbook_arguments:
        remote_user = playbook_arguments['superuser']
    else:
        remote_user = 'ec2-user'
    options = Options(listtags=None, listtasks=None, listhosts=None,
                      syntax=None, connection='smart', module_path=None,
                      forks=100, remote_user=remote_user, become=None,
                      become_method='sudo', become_user='root', verbosity=None,
                      check=False, diff=False, ask_sudo_pass=None)
    pbex = PlaybookExecutor(playbooks=[playbook], inventory=inventory,
                            variable_manager=variable_manager, loader=loader,
                            options=options, passwords={})
    callback = ResultsCallback()
    pbex._tqm._stdout_callback = callback
    logging.print('Provisioning cluster with Ansible...')
    attempts = 3
    for i in range(attempts):
        try:
            pbex.run()
            failed = callback.failed
            if failed:
                logging.critical('Playbook failed!')
                raise RuntimeError('{0} tasks failed'.format(len(failed)))
            break
        except:
            if i < attempts - 1:
                logging.critical('Attempting to re-try playbook')
                continue
            else:
                raise
    logging.print('\nRemoving Ansible Inventory file {0}'.format(host_file))
    try:
        os.remove(host_file)
    except OSError:
        pass
However, when I test the above code using a playbook that is guaranteed to fail, it fails with the following traceback:
Creating Ansible host file: /home/someuser/ansible_hosts.18600
Provisioning cluster with Ansible...
Playbook failed!
Attempting to re-try playbook
Exception during setup; tearing down all created instances
Traceback (most recent call last):
  File "./manage_aws.py", line 486, in cmd_ec2_create
    manage_ansible.run_playbook(hostnames, playbook, playbook_arguments)
  File "/home/someuser/manage_ansible.py", line 88, in run_playbook
    break
  File "/usr/local/lib/python2.7/dist-packages/ansible/executor/playbook_executor.py", line 159, in run
    result = self._tqm.run(play=play)
  File "/usr/local/lib/python2.7/dist-packages/ansible/executor/task_queue_manager.py", line 296, in run
    strategy.cleanup()
  File "/usr/local/lib/python2.7/dist-packages/ansible/plugins/strategy/__init__.py", line 223, in cleanup
    self._final_q.put(_sentinel)
  File "/usr/lib/python2.7/multiprocessing/queues.py", line 100, in put
    assert not self._closed
AssertionError
You'll notice that the exception is properly caught inside the calling script manage_aws.py ("Exception during setup; tearing down all created instances"), and we go on to tear down the instances. That's great, but I'd like to properly re-try the playbook before deciding to do so.
I'm no Python master, so if anyone has any tips, or has accomplished something similar, I would very much appreciate your advice.
Thanks in advance!
I found a solution; while it isn't as graceful as I was hoping, it works.
The problem seemed to be re-running the same PlaybookExecutor object without letting the spawned threads clean up properly.
What I did to fix it was initialize a new PlaybookExecutor object when I notice that the first one failed. The current implementation allows only one retry, which is fine, but I'll most likely adapt it to do more if necessary.
Here is my adapted retry logic:
pbex = PlaybookExecutor(playbooks=[playbook], inventory=inventory,
                        variable_manager=variable_manager, loader=loader,
                        options=options, passwords={})
callback = ResultsCallback()
pbex._tqm._stdout_callback = callback
logging.print('Provisioning cluster with Ansible...')
pbex.run()
failed = callback.failed
if failed:
    logging.critical('Playbook failed! Attempting retry...')
    pbex_retry = PlaybookExecutor(playbooks=[playbook], inventory=inventory,
                                  variable_manager=variable_manager,
                                  loader=loader, options=options, passwords={})
    callback_retry = ResultsCallback()
    pbex_retry._tqm._stdout_callback = callback_retry
    pbex_retry.run()
    failed_retry = callback_retry.failed
    if failed_retry:
        logging.critical('Playbook failed again! Failed on task:\n{0}'.format(
            failed_retry[0]))
        remove_inventory_file(host_file)
        raise RuntimeError('Playbook failed to successfully configure the cluster.')
remove_inventory_file(host_file)
A super simple solution, but it's a shame my initial attempt didn't work as intended. Maybe I'll revisit it and try to properly clean up the executor on failure.
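The "fresh executor per attempt" idea generalizes to a small loop if the executor is constructed inside each attempt. A sketch with a factory function (the factory and the callback's failed attribute follow the code above; none of this is Ansible API, it works with any executor-like object):

```python
def run_with_retries(make_executor, attempts=3):
    """Run a playbook up to `attempts` times, building a fresh
    executor (and results callback) for every attempt so the previous
    run's worker threads and queues are never reused.

    `make_executor` must return an (executor, callback) pair, where the
    callback exposes a `failed` list (as ResultsCallback does above).
    """
    last_failed = None
    for attempt in range(1, attempts + 1):
        executor, callback = make_executor()
        executor.run()
        if not callback.failed:
            return  # success, stop retrying
        last_failed = callback.failed
    raise RuntimeError('{0} tasks failed after {1} attempts: {2}'.format(
        len(last_failed), attempts, last_failed[0]))
```

A factory here would just wrap the PlaybookExecutor construction and the _stdout_callback assignment shown above, so every retry starts from a clean TaskQueueManager.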

Running ansible-playbook using Python API

How can I run a playbook from a Python script? What is the equivalent of the following using the ansible module in Python:
ansible -i hosts dbservers -m setup
ansible-playbook -i hosts -vvvv -k site.yml
I was looking at their documentation at http://docs.ansible.com/developing_api.html but they have very limited examples.
Deprecation Notice: This post doesn't work as of ansible 2. The API was changed.
This is covered in the Ansible documentation under "Python API."
For example, ansible -i hosts dbservers -m setup is implemented via:
import ansible.runner

runner = ansible.runner.Runner(
    module_name='setup',
    module_args='',
    pattern='dbservers',
)
dbservers_get_facts = runner.run()
There are a bunch of undocumented parameters in the __init__ method of Runner (from ansible.runner). There are too many to list inline, but I've included some of them in this post as a guess at what you're specifically looking for.
class Runner(object):
    ''' core API interface to ansible '''
    # see bin/ansible for how this is used...

    def __init__(self,
        host_list=C.DEFAULT_HOST_LIST,      # ex: /etc/ansible/hosts, legacy usage
        module_path=None,                   # ex: /usr/share/ansible
        module_name=C.DEFAULT_MODULE_NAME,  # ex: copy
        module_args=C.DEFAULT_MODULE_ARGS,  # ex: "src=/tmp/a dest=/tmp/b"
        ...
        pattern=C.DEFAULT_PATTERN,          # which hosts? ex: 'all', 'acme.example.org'
        remote_user=C.DEFAULT_REMOTE_USER,  # ex: 'username'
        remote_pass=C.DEFAULT_REMOTE_PASS,  # ex: 'password123' or None if using key
        remote_port=None,                   # if SSH on different ports
        private_key_file=C.DEFAULT_PRIVATE_KEY_FILE,  # if not using keys/passwords
        sudo_pass=C.DEFAULT_SUDO_PASS,      # ex: 'password123' or None
        ...
        sudo=False,                         # whether to run sudo or not
        sudo_user=C.DEFAULT_SUDO_USER,      # ex: 'root'
        module_vars=None,                   # a playbooks internals thing
        play_vars=None,                     #
        play_file_vars=None,                #
        role_vars=None,                     #
        role_params=None,                   #
        default_vars=None,                  #
        extra_vars=None,                    # extra vars specified with the playbook(s)
        is_playbook=False,                  # running from playbook or not?
        inventory=None,                     # reference to Inventory object
        ...
        su=False,                           # Are we running our command via su?
        su_user=None,                       # User to su to when running command, ex: 'root'
        su_pass=C.DEFAULT_SU_PASS,
        vault_pass=None,
        ...
        ):
For instance, the above command that specifies a sudo user and pass would be:
runner = ansible.runner.Runner(
    module_name='setup',
    module_args='',
    pattern='dbservers',
    remote_user='some_user',
    remote_pass='some_pass_or_python_expression_that_returns_a_string',
)
For playbooks, look into playbook.PlayBook, which takes a similar set of initializers:
class PlayBook(object):
    '''
    runs an ansible playbook, given as a datastructure or YAML filename.
    ...
    '''
    # *****************************************************

    def __init__(self,
        playbook=None,
        host_list=C.DEFAULT_HOST_LIST,
        module_path=None,
        ....
and can be executed with the .run() method, e.g.:
from ansible.playbook import PlayBook

pb = PlayBook(playbook='/path/to/book.yml', --other initializers--)
pb.run()
More robust usage can be found in the ansible-playbook file.
As far as I know, translating playbooks to Python modules is a bit more involved, but the documentation listed above should have you covered, and you can reuse the YAML parser built into Ansible to convert playbooks to variables.
Just a quick code update that works on 2.8.3:
from ansible import context
from ansible.cli import CLI
from ansible.module_utils.common.collections import ImmutableDict
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.parsing.dataloader import DataLoader
from ansible.inventory.manager import InventoryManager
from ansible.vars.manager import VariableManager

loader = DataLoader()
context.CLIARGS = ImmutableDict(tags={}, listtags=False, listtasks=False,
                                listhosts=False, syntax=False, connection='ssh',
                                module_path=None, forks=100, remote_user='xxx',
                                private_key_file=None, ssh_common_args=None,
                                ssh_extra_args=None, sftp_extra_args=None,
                                scp_extra_args=None, become=True,
                                become_method='sudo', become_user='root',
                                verbosity=True, check=False, start_at_task=None)
inventory = InventoryManager(loader=loader, sources=('/xxx/inventory_file',))
variable_manager = VariableManager(loader=loader, inventory=inventory,
                                   version_info=CLI.version_info(gitinfo=False))
pbex = PlaybookExecutor(playbooks=['/xxx/playbook.yml'], inventory=inventory,
                        variable_manager=variable_manager, loader=loader,
                        passwords={})
results = pbex.run()
I have answered the question here.
Posting this here because posting bare links is discouraged in the community. Hope it helps.
The documentation is surprisingly lacking and you'll have to get started here
That being said, here is a quick script I hacked together that manages to run a playbook.
#!/usr/bin/env python
import os
import sys
from collections import namedtuple

from ansible.parsing.dataloader import DataLoader
from ansible.vars.manager import VariableManager
from ansible.inventory.manager import InventoryManager
from ansible.executor.playbook_executor import PlaybookExecutor

loader = DataLoader()
inventory = InventoryManager(loader=loader, sources='/home/slotlocker/hosts2')
variable_manager = VariableManager(loader=loader, inventory=inventory)
playbook_path = '/home/slotlocker/ls.yml'

if not os.path.exists(playbook_path):
    print('[INFO] The playbook does not exist')
    sys.exit()

Options = namedtuple('Options', ['listtags', 'listtasks', 'listhosts', 'syntax',
                                 'connection', 'module_path', 'forks',
                                 'remote_user', 'private_key_file',
                                 'ssh_common_args', 'ssh_extra_args',
                                 'sftp_extra_args', 'scp_extra_args', 'become',
                                 'become_method', 'become_user', 'verbosity',
                                 'check', 'diff'])
options = Options(listtags=False, listtasks=False, listhosts=False,
                  syntax=False, connection='ssh', module_path=None, forks=100,
                  remote_user='slotlocker', private_key_file=None,
                  ssh_common_args=None, ssh_extra_args=None,
                  sftp_extra_args=None, scp_extra_args=None, become=True,
                  become_method='sudo', become_user='root', verbosity=None,
                  check=False, diff=False)

# This can accommodate various other command line arguments.
variable_manager.extra_vars = {'hosts': 'mywebserver'}
passwords = {}

pbex = PlaybookExecutor(playbooks=[playbook_path], inventory=inventory,
                        variable_manager=variable_manager, loader=loader,
                        options=options, passwords=passwords)
results = pbex.run()
Ansible provides the ansible-runner Python package (PyPI, GitHub), which can be used for this.
Usage example from the documentation:
import ansible_runner

r = ansible_runner.run(private_data_dir='/tmp/demo', playbook='test.yml')
print("{}: {}".format(r.status, r.rc))
# successful: 0
for each_host_event in r.events:
    print(each_host_event['event'])
print("Final status:")
print(r.stats)
Based on vigilander's answer, I took the liberty of uploading a package to PyPI, so now you can use this simple ansible runner:
Just pip install ansible-playbook-runner and run:
from ansible_playbook_runner import Runner
Runner(['inventory_path'], 'playbook_path').run()
You're looking at something that isn't officially supported or recommended, so there is little documentation to be had.
That said, if you really want to pursue this course, I would start by cracking open the ansible-playbook script in bin and reverse engineering what you want to do.
