How do I list hosts using Ansible 1.x API - python

Ansible-playbook has a --list-hosts CLI switch that just outputs the hosts affected by each play in a playbook. I am looking for a way to access the same information through the Python API.
The (very) basic script I am using to test right now is:
#!/usr/bin/python
import ansible.runner
import ansible.playbook
import ansible.inventory
from ansible import callbacks
from ansible import utils
import json

# hosts list
hosts = ["127.0.0.1"]
# set up the inventory; if no group is defined, the 'all' group is used by default
example_inventory = ansible.inventory.Inventory(hosts)

pm = ansible.runner.Runner(
    module_name='command',
    module_args='uname -a',
    timeout=5,
    inventory=example_inventory,
    subset='all'  # name of the hosts group
)

out = pm.run()
print json.dumps(out, sort_keys=True, indent=4, separators=(',', ': '))
I just can't figure out what to add to ansible.runner.Runner() to make it output affected hosts and exit.

I'm not sure what you are trying to achieve, but ansible.runner.Runner actually represents a single task, not a playbook.
Your script is more akin to the ansible CLI than to ansible-playbook.
And ansible doesn't have any kind of --list-hosts switch, while ansible-playbook does.
You can see how listhosts is done here.
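If you only need the hosts your Runner would target, Inventory already exposes this: example_inventory.list_hosts('all') returns the matching host names. For the playbook case, here is a rough sketch modeled on how ansible-playbook 1.x implements --list-hosts; the attributes used (pb.playbook, pb.play_basedirs) are internal and may differ between 1.x releases, so treat this as a starting point rather than a stable interface:

#!/usr/bin/python
import ansible.playbook
import ansible.inventory
from ansible import callbacks
from ansible import utils

stats = callbacks.AggregateStats()
playbook_cb = callbacks.PlaybookCallbacks(verbose=utils.VERBOSITY)
runner_cb = callbacks.PlaybookRunnerCallbacks(stats, verbose=utils.VERBOSITY)

pb = ansible.playbook.PlayBook(
    playbook='site.yml',  # path to the playbook you want to inspect
    inventory=ansible.inventory.Inventory(['127.0.0.1']),
    callbacks=playbook_cb,
    runner_callbacks=runner_cb,
    stats=stats,
)

# Walk the plays the same way the CLI does and print the matched hosts.
for (play_ds, play_basedir) in zip(pb.playbook, pb.play_basedirs):
    play = ansible.playbook.Play(pb, play_ds, play_basedir)
    print('play: %s' % play.name)
    for host in pb.inventory.list_hosts(play.hosts):
        print('  %s' % host)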


How can I launch an Android app on a device through Python?

I have consulted several topics on the subject, but I didn't see any related to launching an app on a device directly using a ppadb command.
I managed to write this code:
import ppadb
import subprocess
import time
from ppadb.client import Client as AdbClient

# Create the connect function
def connect():
    client = AdbClient(host='localhost', port=5037)
    devices = client.devices()
    for device in devices:
        print(device.serial)
    if len(devices) == 0:
        print('no device connected')
        quit()
    phone = devices[0]
    print(f'connected to {phone.serial}')
    return phone, client

if __name__ == '__main__':
    phone, client = connect()
    time.sleep(5)
    # Print each app (package) installed on the emulator
    packages = phone.list_packages()
    for package in packages:
        print(package)
    # Launch the desired app through phone.shell using the package name
    phone.shell(????????????????)
From there, I have access to each app package (com.package.name). I would like to launch one through a phone.shell() command, but I can't figure out the correct syntax.
I can execute a tap or a keyevent and it works perfectly, but I want to be sure my code won't be broken by any change in on-screen position.
From How to start an application using Android ADB tools, the shell command to launch an app is
am start -n com.package.name/com.package.name.ActivityName
Hence you would call
phone.shell("am start -n com.package.name/com.package.name.ActivityName")
A given package may have multiple activities. To find out what they are, you can use dumpsys package as follows:
def parse_activities(package, connection, retval):
    out = ""
    while True:
        data = connection.read(1024)
        if not data:
            break
        out += data.decode('utf-8')
    retval.clear()
    retval += [l.split()[-1] for l in out.splitlines()
               if package in l and "Activity" in l]
    connection.close()

activities = []
phone.shell("dumpsys package",
            handler=lambda c: parse_activities("com.package.name", c, activities))
print(activities)
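If you want to chain the two steps, a hypothetical glue snippet could feed the first discovered activity back into am start (how much of the component name ends up in activities depends on your device's dumpsys output format):

# Hypothetical glue: launch the first activity discovered above.
# Depending on the dumpsys output, the parsed entry may already be a
# full 'package/activity' component or just the activity class name.
if activities:
    phone.shell("am start -n %s" % activities[0])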
Here is the correct and easiest answer:
phone.shell('monkey -p com.package.name 1')
This method will launch the app without needing access to the ActivityName.
Using AndroidViewClient/cluebra, you can launch the MAIN Activity of a package as follows:
#! /usr/bin/env python3
# -*- coding: utf-8 -*-
from com.dtmilano.android.viewclient import ViewClient
ViewClient.connectToDeviceOrExit()[0].startActivity(package='com.example.package')
This connects to the device (waiting if necessary) and then invokes startActivity() just using the package name.
startActivity() can also receive a component, which is used when you know both the package and the activity.
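For completeness, a hedged sketch of the component form (the component string below is hypothetical; use your own package/activity pair):

#! /usr/bin/env python3
# -*- coding: utf-8 -*-
from com.dtmilano.android.viewclient import ViewClient

# connectToDeviceOrExit() returns a (device, serialno) tuple
device, serialno = ViewClient.connectToDeviceOrExit()
# Hypothetical component string: '<package>/<activity>'
device.startActivity(component='com.example.package/.MainActivity')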

Custom Ansible module is giving param extra params error

I am trying to implement a hostname-like module, and my target machine is an amazon-ec2 instance. But when I run the script it gives me the error below:
[ansible-user#ansible-master ~]$ ansible node1 -m edit_hostname.py -a node2
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
My module is like this:
#!/usr/bin/python
from ansible.module_utils.basic import *

try:
    import json
except ImportError:
    import simplejson as json

def write_to_file(module, hostname, hostname_file):
    try:
        with open(hostname_file, 'w+') as f:
            f.write("%s\n" % hostname)
    except Exception:
        err = get_exception()
        module.fail_json(msg="failed to write to the /etc/hostname file")

def main():
    hostname_file = '/etc/hostname'
    module = AnsibleModule(argument_spec=dict(name=dict(required=True, type='str')))
    name = module.params['name']
    write_to_file(module, name, hostname_file)
    module.exit_json(changed=True, meta=name)

if __name__ == "__main__":
    main()
I don't know where I am making the mistake. Any help will be greatly appreciated. Thank you.
When developing a new module, I would recommend using the boilerplate described in the documentation. It also shows that you'll need to use AnsibleModule to define your arguments.
In your main, you should add something like the following:
def main():
    # define available arguments/parameters a user can pass to the module
    module_args = dict(
        name=dict(type='str', required=True)
    )

    # seed the result dict in the object
    # we primarily care about changed and state
    # changed is whether this module effectively modified the target
    # state will include any data that you want your module to pass back
    # for consumption, for example, in a subsequent task
    result = dict(
        changed=False,
        original_hostname='',
        hostname=''
    )

    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=False
    )

    # manipulate or modify the state as needed (this is going to be the
    # part where your module will do what it needs to do)
    result['original_hostname'] = module.params['name']
    result['hostname'] = 'goodbye'

    # use whatever logic you need to determine whether or not this module
    # made any modifications to your target
    result['changed'] = True

    # in the event of a successful module execution, you will want to
    # simply call AnsibleModule.exit_json(), passing the key/value results
    module.exit_json(**result)
Then, you can call the module like so:
ansible node1 -m mymodule -a "name=myname"
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
As explained by your error message, an anonymous default parameter is only supported by a limited number of modules. In your custom module, the parameter you created is called name. Moreover, you should not include the .py extension in the module name. You have to call your module as an ad-hoc command like so:
$ ansible node1 -m edit_hostname -a name=node2
I did not test your module code, so you may have further errors to fix.
Meanwhile, I still strongly suggest you use the default boilerplate from the Ansible documentation as proposed in @Simon's answer.
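Putting the two answers together, a minimal untested sketch of the corrected module might look like this:

#!/usr/bin/python
from ansible.module_utils.basic import AnsibleModule

def main():
    # 'name' is the only parameter, matching the question's argument_spec
    module = AnsibleModule(argument_spec=dict(name=dict(type='str', required=True)))
    name = module.params['name']
    try:
        with open('/etc/hostname', 'w+') as f:
            f.write("%s\n" % name)
    except Exception as e:
        module.fail_json(msg="failed to write /etc/hostname: %s" % e)
    module.exit_json(changed=True, meta=name)

if __name__ == "__main__":
    main()

It would then be invoked without the extension, e.g. ansible node1 -m edit_hostname -a name=node2.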

ansible: local test new module with Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed

I'm new to Python. This is my first Ansible module; it is meant to delete the SimpleDB domain left behind by ChaosMonkey.
When I test it in my local venv on Mac OS X, it keeps saying:
Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed.
Here is the code:
#!/usr/bin/python
# Delete SimpleDB Domain
from ansible.module_utils.basic import *
import boto3

def delete_sdb_domain():
    fields = dict(
        sdb_domain_name=dict(required=True, type='str')
    )
    module = AnsibleModule(argument_spec=fields)
    client = boto3.client('sdb')
    # read the domain name from module.params rather than passing a literal string
    response = client.delete_domain(DomainName=module.params['sdb_domain_name'])
    module.exit_json(changed=False, meta=response)

def main():
    delete_sdb_domain()

if __name__ == '__main__':
    main()
I'm trying to pass in parameters from this file: /tmp/args.json,
and run the following command to make the local test:
$ python ./delete_sdb_domain.py /tmp/args.json
Please note I'm using a venv test environment on my Mac.
If you find any syntax errors in my module, please also point them out.
This is not how you should test your modules.
AnsibleModule expects specific JSON as stdin data.
So the closest thing you can try is:
python ./delete_sdb_domain.py < /tmp/args.json
But I bet your JSON file is in the wrong format (no ANSIBLE_MODULE_ARGS, etc.).
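For reference, recent Ansible versions wrap module parameters in an ANSIBLE_MODULE_ARGS envelope on stdin, so /tmp/args.json would need to look roughly like this (the exact shape can vary between releases):

{
    "ANSIBLE_MODULE_ARGS": {
        "sdb_domain_name": "zzz"
    }
}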
To debug your modules you can use test-module script from Ansible hacking pack:
./hacking/test-module -m delete_sdb_domain.py -a "sdb_domain_name=zzz"

Looking for a way to de-reference a bash var wrapped in a python command call

I'm trying to find a way to de-reference goldenClusterID so I can use it in an AWS CLI command to terminate my cluster. This program compensates for the dynamic job-flow numbers generated each day, so that a normal scheduled Data Pipeline shutdown is applicable. I can os.system("less goldenClusterID") all day and it gives me the right answer. However, it won't give up the goodies with a straight de-reference. Suggestions?
from __future__ import print_function
import json
import urllib
import boto3
import commands
import os
import re
import datetime
import awscli
foundCluster = ""
rawClusterNum = ""
mainClusterNum = ""
goldenClusterID = ""
# Next, we populate the list file with clusters currently active
os.system("aws emr list-clusters --active >> foundCluster")
# We search for a specific Cluster Name
os.system("fgrep 'AnAWSEMRCluster' foundCluster")
os.system("grep -B 1 DrMikesEMRCluster foundCluster >> rawClusterNum")
# Look for the specific Cluster ID in context with it's Cluster Name
os.system("fgrep 'j-' rawClusterNum >> mainClusterNum")
# Regex the Cluster ID from the line
os.system("grep -o '\j-[0-9a-zA-Z]*' mainClusterNum >> goldenClusterID")
# Read the Cluster ID from the file and run AWS Terminate on it
os.system("aws emr describe-cluster --cluster-id %s" % goldenClusterID")
os.system("aws emr terminate-clusters --cluster-ids goldenClusterID")
os.system("rm *")
Never mind, I figured it out. Too much coffee and not enough sleep. The answer is to use:
goldkeyID=open('goldenClusterID', 'r').read()
os.system("aws emr describe-cluster --cluster-id %s" % goldkeyID)

How to define more than one server environment in Fabric(Python)?

I need to use Fabric to do some operations on a website that uses one machine for the filesystem and another machine for the database server. I need to handle two hosts. How can I do that?
I have some code, but I cannot get the environment definitions to work.
The idea is to connect to the remote filesystem server and get the files, then connect to the remote database server and get the database schema.
The code that I have for now is something like this:
from __future__ import with_statement
from fabric.api import *
from fabric.contrib.console import confirm

'''
Here I define where my "aid"s file structure is
'''
local_root = '/home/andre/test'  # This is the root folder for the audits
code_location = '/remote_code'   # This is the root folder for the customer code inside each audit

#
# ENVIRONMENTS CONFIGURATIONS
#

'''
Here I configure where the remote file server is
'''
def file_server():
    env.user = 'andre'
    env.hosts = ['localhost']

'''
Here I configure where the database server is
'''
def database_server():
    env.user = 'andre'
    env.hosts = ['192.168.5.1']

#
# START SCRIPT
#
def get_install(remote_location, aid):
    ### I will get the files
    '''
    Here I need to load the file_server() definitions
    '''
    working_folder = local_root + '/%s' % aid             # define the working folder
    local('mkdir ' + working_folder)                      # create the working folder for this audit
    local('mkdir ' + working_folder + code_location)      # create the folder to receive the code
    get(remote_location, working_folder + code_location)  # download the code to my machine

    ### I will get the database
    '''
    Here I need to load the database_server() definitions
    '''
    local('dir')  # Just to test
How can I, inside get_install(), load the environment definitions from file_server() and database_server()?
Best Regards,
I don't understand exactly what you are trying to do, but maybe you can split your get_install function into two functions, one for each server.
Then limit those functions to the correct servers with the fabric.decorators.hosts(*host_list) decorator:
For example, the following will ensure that, barring an override on the command line, my_func will be run on host1, host2 and host3, and with specific users on host1 and host3:
@hosts('user1@host1', 'host2', 'user2@host3')
def my_func():
    pass
(For more info see http://readthedocs.org/docs/fabric/en/1.1.0/api/core/decorators.html#fabric.decorators.hosts)
And you can then call those two functions in one go by defining your get_install method as:
def get_install():
    func1()
    func2()
You should be able to do this with fab database_server get_install. Basically, fab [environment] [command] should do what you want.
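Applying that split to the question's own code, a rough sketch might look like this (function names are reused from the question, and Fabric task semantics changed across versions, so treat it as illustrative):

from fabric.api import get, hosts, local

local_root = '/home/andre/test'
code_location = '/remote_code'

@hosts('andre@localhost')  # the host/user from file_server()
def get_files(remote_location, aid):
    working_folder = local_root + '/%s' % aid
    local('mkdir ' + working_folder)
    local('mkdir ' + working_folder + code_location)
    get(remote_location, working_folder + code_location)

@hosts('andre@192.168.5.1')  # the host/user from database_server()
def get_database():
    local('dir')  # placeholder, as in the question

You would then run both in one go with something like fab get_files:remote_location=/some/path,aid=123 get_database.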
