Custom Ansible module is giving an "extra params" error - python

I am trying to implement a hostname-like module, and my target machine is an amazon-ec2 instance. But when I run the script it gives me the error below:
[ansible-user#ansible-master ~]$ ansible node1 -m edit_hostname.py -a node2
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
My module is like this:
#!/usr/bin/python
from ansible.module_utils.basic import *

try:
    import json
except ImportError:
    import simplejson as json

def write_to_file(module, hostname, hostname_file):
    try:
        with open(hostname_file, 'w+') as f:
            try:
                f.write("%s\n" % hostname)
            finally:
                f.close()
    except Exception:
        err = get_exception()
        module.fail_json(msg="failed to write to the /etc/hostname file")

def main():
    hostname_file = '/etc/hostname'
    module = AnsibleModule(argument_spec=dict(name=dict(required=True, type='str')))
    name = module.params['name']
    write_to_file(module, name, hostname_file)
    module.exit_json(changed=True, meta=name)

if __name__ == "__main__":
    main()
I don't know where I am making the mistake. Any help will be greatly appreciated. Thank you.

When developing a new module, I would recommend using the boilerplate described in the documentation. It also shows that you'll need to use AnsibleModule to define your arguments.
In your main, you should add something like the following:
def main():
    # define available arguments/parameters a user can pass to the module
    module_args = dict(
        name=dict(type='str', required=True)
    )

    # seed the result dict in the object
    # we primarily care about changed and state
    # changed is whether this module effectively modified the target
    # state will include any data that you want your module to pass back
    # for consumption, for example, in a subsequent task
    result = dict(
        changed=False,
        original_hostname='',
        hostname=''
    )

    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=False
    )

    # manipulate or modify the state as needed (this is going to be the
    # part where your module will do what it needs to do)
    result['original_hostname'] = module.params['name']
    result['hostname'] = 'goodbye'

    # use whatever logic you need to determine whether or not this module
    # made any modifications to your target
    result['changed'] = True

    # in the event of a successful module execution, you will want to
    # simply call AnsibleModule.exit_json(), passing the key/value results
    module.exit_json(**result)
Then, you can call the module like so:
ansible node1 -m mymodule.py -a "name=myname"

ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
As explained by your error message, an anonymous default parameter is only supported by a limited number of modules. In your custom module, the parameter you created is called name. Moreover, you should not include the .py extension in the module name. As an ad-hoc command, you have to call your module like so:
$ ansible node1 -m edit_hostname -a name=node2
I did not test your module code so you may have further errors to fix.
Meanwhile, I still strongly suggest you use the default boilerplate from the ansible documentation as proposed in Simon's answer.

Related

ansible: local test new module with Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed

I'm new to Python. This is my first Ansible module; it is intended to delete a SimpleDB domain as part of ChaosMonkey deletion.
When tested in my local venv on Mac OS X, it keeps saying
Module unable to decode valid JSON on stdin. Unable to figure out
what parameters were passed.
Here is the code:
#!/usr/bin/python
# Delete SimpleDB Domain
from ansible.module_utils.basic import *
import boto3

def delete_sdb_domain():
    fields = dict(
        sdb_domain_name=dict(required=True, type='str')
    )
    module = AnsibleModule(argument_spec=fields)
    client = boto3.client('sdb')
    response = client.delete_domain(DomainName=module.params['sdb_domain_name'])
    module.exit_json(changed=False, meta=response)

def main():
    delete_sdb_domain()

if __name__ == '__main__':
    main()
I'm trying to pass in parameters from this file, /tmp/args.json, and run the following command to make the local test:
$ python ./delete_sdb_domain.py /tmp/args.json
Please note I'm using a venv test environment on my Mac.
If you find any syntax error in my module, please also point it out.
This is not how you should test your modules.
AnsibleModule expects to have specific JSON as stdin data.
So the closest thing you can try is:
python ./delete_sdb_domain.py < /tmp/args.json
But I bet your json file is in the wrong format (no ANSIBLE_MODULE_ARGS, etc.).
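For reference, a minimal sketch of what such an args file is expected to look like on recent Ansible versions, with the module arguments wrapped under an ANSIBLE_MODULE_ARGS key (the domain name here is just a placeholder):
{
    "ANSIBLE_MODULE_ARGS": {
        "sdb_domain_name": "my-chaosmonkey-domain"
    }
}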
To debug your modules you can use test-module script from Ansible hacking pack:
./hacking/test-module -m delete_sdb_domain.py -a "sdb_domain_name=zzz"

Python - bulk promote variables to parent scope

In python 2.7, I want to run:
$ ./script.py initparms.py
This is a trick to supply a parameter file to script.py, since initparms.py contains several python variables, e.g.
Ldir = '/home/marzipan/jelly'
LMaps = True
# etc.
script.py contains:
X = __import__(sys.argv[1])
Ldir = X.Ldir
LMaps = X.LMaps
# etc.
I want to do a bulk promotion of the variables in X so they are available to script.py, without spelling out each one in the code by hand.
Things like
import __import__(sys.argv[1])
or
from sys.argv[1] import *
don't work. Almost there perhaps... Any ideas? Thanks!
Here's a one-liner:
globals().update(__import__(sys.argv[1]).__dict__)
You can use execfile:
execfile(sys.argv[1])
Of course, the usual warnings with exec or eval apply (Your script has no way of knowing whether it is running trusted or untrusted code).
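For what it's worth, a minimal sketch of how this could look in script.py, under the question's setup (Python 2 only, since execfile was removed in Python 3; the variable names come from the question):
# script.py - execute the parameter file directly in this namespace
import sys
execfile(sys.argv[1])   # defines Ldir, LMaps, ... as if they were written here
print Ldir, LMaps
Invoked as ./script.py initparms.py, every top-level name from the parameter file becomes available without listing them by hand.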
My suggestion would be to not do what you're doing and instead use ConfigParser, handling the configuration through that.
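A rough sketch of that approach, assuming the parameters are moved into an INI-style file; the file name, section names, and option names here are made up for illustration:
# script.py - ConfigParser sketch (the module is named configparser in Python 3)
import sys
import ConfigParser

config = ConfigParser.ConfigParser()
config.read(sys.argv[1])                      # e.g. ./script.py initparms.ini
Ldir = config.get('paths', 'Ldir')            # '/home/marzipan/jelly'
LMaps = config.getboolean('flags', 'LMaps')   # True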
You could do something like this:
import os
import imp
import sys

try:
    module_name = sys.argv[1]
    module_info = imp.find_module(module_name, [os.path.abspath(os.path.dirname(__file__))] + sys.path)
    module_properties = imp.load_module(module_name, *module_info)
except ImportError:
    pass
else:
    try:
        attrlist = module_properties.__all__
    except AttributeError:
        attrlist = dir(module_properties)
    for attr in attrlist:
        if attr.startswith('__'):
            continue
        globals()[attr] = getattr(module_properties, attr)
A little complicated, but it gets the job done.

Python - create object of class from one package in different package

I started using Python a few days back and I think I have a very basic question where I am stuck. Maybe I am not doing it correctly in Python, so I wanted some advice from the experts:
I have a config.cfg file & a class test in one package, lib, as follows:
myProj/lib/pkg1/config.cfg
[api_config]
url = https://someapi.com/v1/
username=sumitk
myProj/lib/pkg1/test.py
import ConfigParser

class test(object):
    def __init__(self, **kwargs):
        config = ConfigParser.ConfigParser()
        config.read('config.cfg')
        print config.get('api_config', 'username')
        # just printing here but will be using this as a class variable

    # ... some other methods
Now I want to create an object of test in some other module in a different package
myProj/example/useTest.py
from lib.pkg1.test import test

def temp(a, b, c):
    var = test()

def main():
    temp("", "", "")

if __name__ == '__main__':
    main()
Running useTest.py is giving me an error:
...
    print config.get('api_config', 'username')
  File "C:\Python27\lib\ConfigParser.py", line 607, in get
    raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'api_config'
Now if I place this useTest.py in the same package, it runs perfectly fine:
myProj/lib/pkg1/useTest.py
myProj/lib/pkg1/test.py
myProj/lib/pkg1/config.cfg
I guess there is some very basic package-access concept in Python that I am not aware of, or is there something I am doing wrong here?
The issue here is that you have a different working directory depending on which module is your main script. You can check the working directory by adding the following lines to the top of each script:
import os
print os.getcwd()
Because you just provide 'config.cfg' as your file name, it will attempt to find that file inside of the working directory.
To fix this, give an absolute path to your config file.
You should be able to figure out the absolute path with the following method since you know that config.cfg and test.py are in the same directory:
# inside of test.py
import os
config_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                           'config.cfg')
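Then read from that path instead of the bare file name; a small sketch based on the question's __init__:
# inside test.__init__
config = ConfigParser.ConfigParser()
config.read(config_path)   # works regardless of the current working directory
print config.get('api_config', 'username')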

How to properly install isapi_wsgi on IIS for Python2.7?

I've worked through installing Python as a CGI application on IIS on Windows 7. This is pretty straightforward, but I'd like to use the WSGI stuff, for better flexibility.
I downloaded the archive for isapi_wsgi, unzipped it, and then ran the install as per the instructions, like this:
\python27\python.exe setup.py install
This succeeded.
Then I coded a .py module that had the wsgi glue in it, and tried installing it. This failed.
It was a COM Moniker error, and I know that the IIS6-compatible management stuff is based on COM Monikers, which reminded me that the IIS6-compatible management tools are a prerequisite for isapi_wsgi. I ran \windows\system32\OptionalFeatures.exe and installed that feature, then re-ran the .py module and it installed correctly.
C:\dev\wsgi>\Python27\python.exe app1_wsgi.py
Configured Virtual Directory: /wsgi
Installation complete.
Ok, wonderful. Now when I look in the current directory, I see a new DLL named _app1_wsgi.dll, and when I look in IIS Manager I can see a new IIS vdir, and a scriptmap within that vdir for '*', which is mapped to the _app1_wsgi.DLL. All good. But! making a request to http://localhost/wsgi gives me a 500 error.
Through some trial-and-error I see that the .py module that defines my handlers must be in the site-packages directory. I am very surprised by this.
Can I avoid this? Can I simply put the .py module in the same directory as the generated .dll file? Or do I need to deploy all of my python logic to site-packages in order to run it from the WSGI mechanism?
The answer is:
The installation of isapi_wsgi as described in the question is correct.
With the basic boilerplate of app.py as shown in the example code accompanying isapi_wsgi, the python classes for the web app need to be in the site-packages directory.
It is possible to allow the python source modules to reside in the same directory as the generated *.dll file, but it requires some special handling in the *wsgi.py file.
A better way to run python on Windows for development purposes is to simply download the Google App Engine and use the builtin dedicated http server. The framework that comes with the GAE SDK handles reloading and allows the .py modules to be placed in particular directories.
If you don't want to download and install the GAE SDK, then you might try the following. Using this code, when a request arrives on isapi_wsgi, the handler looks in the home directory for a py module, and loads it. If the module is already loaded, it checks the file "last modified time" and reloads the module if the last mod time is later than the time from the prior load. It works for simplistic cases but I suppose it will be brittle when there are nested module dependencies.
import sys
import os
import win32file
from win32con import *

# dictionary of [mtime, module] tuples; uses the file path as key
loadedPages = {}

def request_handler(env, start_response):
    '''Demo app from wsgiref'''
    cr = lambda s='': s + '\n'
    if hasattr(sys, "isapidllhandle"):
        h = None
        # get the path of the ISAPI Extension DLL
        hDll = getattr(sys, "isapidllhandle", None)
        import win32api
        dllName = win32api.GetModuleFileName(hDll)
        p1 = repr(dllName).split('?\\\\')
        p2 = p1[1].split('\\\\')
        sep = '\\'
        homedir = sep.join(p2[:-1])
        # the name of the Python module is in the PATH_INFO
        moduleToImport = env['PATH_INFO'].split('/')[1]
        pyFile = homedir + sep + moduleToImport + '.py'
        fd = None
        try:
            fd = win32file.CreateFile(pyFile, GENERIC_READ, FILE_SHARE_DELETE, None, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0)
        except Exception as exc1:
            fd = None
        if fd is not None:
            # file exists, get mtime
            fd.close()
            mt = os.path.getmtime(pyFile)
        else:
            mt = None
        if mt is not None:
            h = None
            if not pyFile in loadedPages:
                # need a new import
                if homedir not in sys.path:
                    sys.path.insert(0, homedir)
                h = __import__(moduleToImport, globals(), locals(), [])
                # remember
                loadedPages[pyFile] = [mt, h]
            else:
                # retrieve handle to module
                h = loadedPages[pyFile][1]
                if mt != loadedPages[pyFile][0]:
                    # need to reload the page
                    reload(h)
                    loadedPages[pyFile][0] = mt
            if h is not None:
                if 'handler' in h.__dict__:
                    for x in h.handler(env, start_response):
                        yield x
                else:
                    start_response("400 Bad Request", [('Content-Type', 'text/html')])
            else:
                start_response("404 Not Found", [('Content-Type', 'text/html')])
                yield cr()
                yield cr("<html><head><title>Module not found</title>" \
                         "</head><body>")
                yield cr("<h3>404 Not Found</h3>")
                yield cr("<h3>No handle</h3></body></html>")
        else:
            start_response("404 Not Found", [('Content-Type', 'text/html')])
            yield cr()
            yield cr("<html><head><title>Module not found</title>" \
                     "</head><body>")
            yield cr("<h3>404 Not Found</h3>")
            yield cr("<h3>That module (" + moduleToImport + ") was not found.</h3></body></html>")
    else:
        start_response("500 Internal Server Error", [('Content-Type', 'text/html')])
        yield cr()
        yield cr("<html><head><title>Server Error</title>" \
                 "</head><body><h1>Server Error - No ISAPI Found</h1></body></html>")

# def test(environ, start_response):
#     '''Simple app as per PEP 333'''
#     status = '200 OK'
#     start_response(status, [('Content-type', 'text/plain')])
#     return ['Hello world from isapi!']

import isapi_wsgi

# The entry point(s) for the ISAPI extension.
def __ExtensionFactory__():
    return isapi_wsgi.ISAPISimpleHandler(request_handler)

def PostInstall(params, options):
    print "The Extension has been installed"

# Handler for our custom 'status' argument.
def status_handler(options, log, arg):
    "Query the status of the ISAPI?"
    print "Everything seems to be fine..."

if __name__ == '__main__':
    # This logic gets invoked when the script is run from the command-line.
    # In that case, it installs this module as an ISAPI.
    #
    # The API provided by isapi_wsgi for this is a bit confusing. There
    # is an ISAPIParameters object. Within that object there is a
    # VirtualDirs property, which itself is a list of
    # VirtualDirParameters objects, one per vdir. Each vdir has a set
    # of scriptmaps, usually this set of script maps will be a wildcard
    # (*) so that all URLs in the vdir will be served through the ISAPI.
    #
    # To configure a single vdir to serve Python scripts through an
    # ISAPI, create a scriptmap, and stuff it into the
    # VirtualDirParameters object. Specify the vdir path and other
    # things in the VirtualDirParameters object. Stuff that vdp object
    # into a sequence and set it into the ISAPIParameters thing, then
    # call the vaguely named "HandleCommandLine" function, passing that
    # ISAPIParameters thing.
    #
    # Clear as mud?
    #
    # Seriously, this thing could be so much simpler, if it had
    # reasonable defaults and a reasonable model, but I guess it will
    # work as is.
    from isapi.install import *
    # Setup the virtual directories -
    # To serve from root, set Name="/"
    sm = [ ScriptMapParams(Extension="*", Flags=0) ]
    vdp = VirtualDirParameters(Name="wsgi",   # name of vdir/IIS app
                               Description="ISAPI-WSGI Demo",
                               ScriptMaps=sm,
                               ScriptMapUpdate="replace"
                               )
    params = ISAPIParameters(PostInstall=PostInstall)
    params.VirtualDirs = [vdp]
    cah = {"status": status_handler}
    # from isapi.install, part of pywin32
    HandleCommandLine(params, custom_arg_handlers=cah)
Using this model, requesting http://foo/wsgi/bar will try loading bar.py from the home directory with the WSGI .dll file. If bar.py cannot be found, you get a 404. If bar.py has been updated since the last run, it reloads. If bar cannot be loaded, you get a 500.
bar.py must publicly export a method called handler. That method must be a generator, like so:
import time

def handler(env, start_response):
    start_response("200 OK", [('Content-Type', 'text/html')])
    cr = lambda s='': s + '\n'
    yield cr("<html><head><title>Hello world!</title></head><body>")
    yield cr("<h1>Bargle Bargle Bargle</h1>")
    yield cr("<p>From the handler...</p>")
    yield cr("<p>(bargle)</p>")
    yield cr("<p>The time is now: " + time.asctime() + " </p>")
    yield cr("</body></html>")

__all__ = ['handler']
But as I said, I think GAE is probably a better way to develop Python webapps using Windows.
Put this at the top of your script:
import site
site.addsitedir('path/to/your/site-packages')
The same problem you had was solved with these two lines.

Ignore ImportError when exec source code

I have an application that reads test scripts in python and sends them across the network for execution on a remote python instance. As the controlling program does not need to run these scripts, I do not want to have all the modules the test scripts use installed in the controller's python environment. However, the controller does need information from the test script to tell it how to run the test.
Currently what I do to read and import test script data is something like
import types

with open( 'test.py', 'r' ) as f:
    source = f.read()

m = types.ModuleType( "imported-temp", "Test module" )
co = compile( source, 'test.py', 'exec' )
exec co in m.__dict__
which yields a new module that contains the test. Unfortunately exec will raise ImportErrors if the test tries to import something the controller does not have. And worse, the module will not be fully imported.
If I can guarantee that the controller will not use the missing modules, is there someway I can ignore these exceptions? Or some other way to find out the names and classes defined in the test?
Example test:
from controller import testUnit
import somethingThatTheControllerDoesNotHave

_testAttr = ['fast', 'foo', 'function']

class PartOne( testUnit ):
    def run( self ):
        pass
What the controller needs to know is the data in _testAttr and the name of all class definitions inheriting from testUnit.
Write an import hook that catches the exception and returns a dummy module if the module doesn't exist.
import __builtin__
from types import ModuleType

class DummyModule(ModuleType):
    def __getattr__(self, key):
        return None
    __all__ = []   # support wildcard imports

def tryimport(name, globals={}, locals={}, fromlist=[], level=-1):
    try:
        return realimport(name, globals, locals, fromlist, level)
    except ImportError:
        return DummyModule(name)

realimport, __builtin__.__import__ = __builtin__.__import__, tryimport

import sys            # works as usual
import foo            # no error
from bar import baz   # also no error
from quux import *    # ditto
You could also write it to always return a dummy module, or to return a dummy module if the specified module hasn't already been loaded (hint: if it's in sys.modules, it has already been loaded).
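A hedged sketch of that second variant, reusing realimport and DummyModule from the snippet above: modules already present in sys.modules are imported for real, anything else gets a dummy without even attempting the import.
import sys

def tryimport_loaded_only(name, globals={}, locals={}, fromlist=[], level=-1):
    # already loaded: safe to import normally
    if name in sys.modules:
        return realimport(name, globals, locals, fromlist, level)
    # not loaded yet: hand back a dummy instead of trying (and possibly failing)
    return DummyModule(name)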
I think, based on what you're saying, that you can just say:
try:
    exec co in m.__dict__
except ImportError:
    pass
Does that help?
You could use the Python ast module: parse the script into an AST and then scan through the tree looking for elements of interest. That way you don't have to execute the script at all.
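A minimal sketch of that idea, pulling out _testAttr and the names of classes that list testUnit as a base, without executing the script (the file name and identifiers come from the question):
import ast

with open('test.py', 'r') as f:
    tree = ast.parse(f.read(), 'test.py')

test_attr = None
test_classes = []
for node in tree.body:
    if isinstance(node, ast.Assign):
        # look for a top-level assignment to _testAttr
        for target in node.targets:
            if isinstance(target, ast.Name) and target.id == '_testAttr':
                test_attr = ast.literal_eval(node.value)
    elif isinstance(node, ast.ClassDef):
        # collect classes whose base list mentions testUnit by name
        bases = [b.id for b in node.bases if isinstance(b, ast.Name)]
        if 'testUnit' in bases:
            test_classes.append(node.name)

print test_attr      # ['fast', 'foo', 'function']
print test_classes   # ['PartOne']
Note this only sees literal, top-level definitions; anything built dynamically at import time would still require executing the script.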
