I want to create a python library with a zero-argument function that my custom Robot Framework keywords can call. It needs to know the absolute path of the file where the keyword is defined, and the name of the keyword. I know how to do something similar with test cases using the robot.libraries.BuiltIn library and the ${SUITE SOURCE} and ${TEST NAME} variables, but I can't find anything similar for custom keywords. I don't care how complicated the answer is; maybe I have to dig into the guts of Robot Framework's internal classes and access that data somehow. Is there any way to do this?
Thanks to janne I was able to find the solution:
from robot.running.context import EXECUTION_CONTEXTS

def resource_locator():
    # The keyword stack; the last entry is the keyword that called us
    name = EXECUTION_CONTEXTS.current.keywords[-1].name
    # The name of the library/resource the keyword is defined in
    libname = EXECUTION_CONTEXTS.current.get_handler(name).libname
    # Dictionary of all included resources, keyed by absolute path
    resources = EXECUTION_CONTEXTS.current.namespace._kw_store.resources
    path = ""
    for key in resources._keys:
        if resources[key].name == libname:
            path = key
            break
    return {'name': name, 'path': path}
EXECUTION_CONTEXTS.current.keywords is the stack of keywords called, with the earliest first and the most recent last, so EXECUTION_CONTEXTS.current.keywords[-1] gets the last keyword, which is the keyword that called this function.
EXECUTION_CONTEXTS.current.get_handler(name).libname gets the name of the library in which the keyword name is defined. In the case of user defined keywords, it is the filename (not the full path) minus the extension.
EXECUTION_CONTEXTS.current.namespace._kw_store.resources is a dictionary of all included resources, where the key is the absolute path. Because the file path is the key, I have to search for the key whose value's name matches the name of the resource in which the keyword is defined (libname).
I took a relatively quick look through the sources, and it seems that the execution context does not have any reference to the currently executing keyword. So, the only way I can think of to resolve this is:
Your library also needs to be a listener, since listeners get events when a keyword is started.
You need to go through robot.libraries.BuiltIn.EXECUTION_CONTEXT._kw_store.resources to find out which resource file contains the keyword currently executing.
I did not do a POC, so I am not sure whether this is actually doable, but that's the solution that comes to my mind currently.
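If you want to experiment with that idea, here is a minimal sketch of the listener half, assuming listener API version 2 and a library that registers itself as its own listener (the class and method names are illustrative, beyond the documented ROBOT_LIBRARY_LISTENER hooks):

class KeywordTracker:
    # A library that doubles as its own listener so it can track keyword starts
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        # Registering the instance as a listener delivers start/end keyword events
        self.ROBOT_LIBRARY_LISTENER = self
        self._keyword_stack = []

    def _start_keyword(self, name, attrs):
        self._keyword_stack.append(name)

    def _end_keyword(self, name, attrs):
        self._keyword_stack.pop()

    def current_keyword(self):
        # The most recently started keyword, i.e. the one calling this function
        return self._keyword_stack[-1] if self._keyword_stack else None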
TLDR
I'm noticing a significant difference in the information presented by the official python docs compared to what I'm seeing in the PyCharm hover-over / quickdocs. I'm hoping someone can point me to where I can find the source of this quickdoc information such that I can use it outside of PyCharm as a general reference.
For example, in the python docs for os.makedirs I see:
os.makedirs(name, mode=0o777, exist_ok=False)
Recursive directory creation function. Like mkdir(), but makes all intermediate-level directories needed to contain the leaf directory.
The mode parameter is passed to mkdir() for creating the leaf directory; see the mkdir() description for how it is interpreted. To set the file permission bits of any newly created parent directories you can set the umask before invoking makedirs(). The file permission bits of existing parent directories are not changed.
If exist_ok is False (the default), a FileExistsError is raised if the target directory already exists.
Note
makedirs() will become confused if the path elements to create include pardir (e.g. ".." on UNIX systems).
This function handles UNC paths correctly.
Raises an auditing event os.mkdir with arguments path, mode, dir_fd.
New in version 3.2: The exist_ok parameter.
Changed in version 3.4.1: Before Python 3.4.1, if exist_ok was True and the directory existed, makedirs() would still raise an error if mode did not match the mode of the existing directory. Since this behavior was impossible to implement safely, it was removed in Python 3.4.1. See bpo-21082.
Changed in version 3.6: Accepts a path-like object.
Changed in version 3.7: The mode argument no longer affects the file permission bits of newly created intermediate-level directories.
But in the quickdocs I see:
os
def makedirs(name: str | bytes | PathLike[str] | PathLike[bytes],
             mode: int = ...,
             exist_ok: bool = ...) -> None
makedirs(name [, mode=0o777][, exist_ok=False]) Super-mkdir; create a leaf directory and all intermediate ones. Works like mkdir, except that any intermediate path segment (not just the rightmost) will be created if it does not exist. If the target directory already exists, raise an OSError if exist_ok is False. Otherwise no exception is raised. This is recursive.
Where is this quickdoc type hinting information coming from and where can I find a complete reference with all these type hints such that I can reference it outside of PyCharm?
Background
Coming mainly from a strongly typed language like Java, I struggle to make constructive use of the python documentation with regard to function input parameter types. I am hoping someone can elucidate a standard process for resolving this ambiguity, compared to my current trial-and-[lots of]-error approach.
For example, the os.makedirs function's first parameter is name.
os.makedirs(name, mode=0o777, exist_ok=False)
It is not apparent to me what sorts of things I can pass as name here. Could it be:
A str? If so, how should I create it: via a string literal or a double-quoted string? Does it accept / separators or \ separators, or is it system dependent?
A pathlib.Path?
Anything pathlike?
[Note the above are rhetorical questions and not the focus of this post]. These are all informed guesses, but if I were completely new to python and was trying to use this documentation to make some directories, I see two options:
Read the source code via some IDE or other indexing
Guess until I get it right
The first is fine for easier-to-understand functions like makedirs, but for more complicated functions this would require gaining expertise in a library that I don't necessarily want to reuse and just want to try out. I simply don't have enough time to become an expert in everything I encounter. This seems quite inefficient.
The second also seems to be quite inefficient, with the added demerit of not knowing how to write robust code to check for inappropriate inputs.
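To make the second option concrete, this is the kind of throwaway experiment I mean (the temporary directory just keeps the check self-contained):

import os
import pathlib
import tempfile

base = tempfile.mkdtemp()

# Guess 1: a plain str with os.path separators works
os.makedirs(os.path.join(base, "a", "b"))

# Guess 2: a pathlib.Path works too; makedirs accepts any path-like
# object since 3.6, per the changelog note at the bottom of the docs
os.makedirs(pathlib.Path(base) / "c" / "d")

# Repeating a call raises FileExistsError unless exist_ok=True is passed
os.makedirs(pathlib.Path(base) / "c" / "d", exist_ok=True)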
Now I don't want to bash the python docs, as they are LEAPS and BOUNDS better than those of a fair few other languages I've used, but is this dilemma just a case of unfinished/poor documentation, or is there a standard way of knowing/understanding what input parameters like name should be that I haven't outlined above?
To be fair, this may not be the optimal example: if you look towards the end of the docs for makedirs, you can see it does state:
Changed in version 3.6: Accepts a path-like object.
but this is not specifically referring to name. Yes, in this example it may seem rather obvious it is referring to name, but with the advent of type-hinting, why are the docs not type hinted like the quickdocs from PyCharm? Is this something planned for the future, or is it too large a can of worms to try to hint all possibilities in a flexible language like python?
Just as a comparison, take a look at Java's java.io.File.mkdirs, where the various File constructors definitely tell you all the options for specifying the path of the file:
File(File parent, String child)
// Creates a new File instance from a parent abstract pathname and a child pathname string.
File(String pathname)
// Creates a new File instance by converting the given pathname string into an abstract pathname.
File(String parent, String child)
// Creates a new File instance from a parent pathname string and a child pathname string.
File(URI uri)
// Creates a new File instance by converting the given file: URI into an abstract pathname.
Just reading this I already know exactly how to make a File object and create directories without running/testing anything. With the quickdoc in PyCharm I can do the same, so where is this type hint information in the official docs?
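(As an aside, using nothing beyond the standard library, the runtime signature is at least inspectable outside any IDE, though it shows only names and defaults, not the stub annotations PyCharm displays:)

import inspect
import os

# Prints the runtime signature: (name, mode=511, exist_ok=False)
# (511 is the decimal form of 0o777)
print(inspect.signature(os.makedirs))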
I have to convert the following Robot Framework keyword into Python code. Can you please help me out?
Robot framework sample keyword:
*** Variables ***
${locator}    xpath=(//div[@it="testID"])[2]

*** Keywords ***
sample keyword
    ${count}    Get Element Count    ${locator}
In the Python file, I used the following code:

from robot.libraries.BuiltIn import BuiltIn

def _helper_keyword(locator):
    count = BuiltIn.run_keyword(get_element_count, locator)
When I executed it, I received the following error message: NameError: name 'get_element_count' is not defined.
First, there's no need for the xpath= prefix in your locator; simply use:

*** Variables ***
${locator}    (//div[@it="testID"])[2]
Secondly, Get Element Count is a keyword that comes from SeleniumLibrary, not BuiltIn. Therefore, you need SeleniumLibrary imported to call it:
from robot.libraries.BuiltIn import BuiltIn

def get_element_count(locator):
    # Grab the running SeleniumLibrary instance and call its keyword directly
    context = BuiltIn().get_library_instance('SeleniumLibrary')
    return context.get_element_count(locator)
The method get_element_count actually comes from SeleniumLibrary; for the call to fail like this probably means you don't have the library imported in the context where you're running your function (e.g. the suite this function is called from doesn't have Library SeleniumLibrary in it, or in any of the resources it imports).
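For the record, if you do go through run_keyword, the keyword name is passed as a string, which sidesteps the NameError; a minimal sketch, reusing the question's hypothetical helper name and assuming SeleniumLibrary is imported in the calling suite:

from robot.libraries.BuiltIn import BuiltIn

def _helper_keyword(locator):
    # The keyword name is a string, not a bare Python identifier
    return BuiltIn().run_keyword('Get Element Count', locator)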
Once you resolve that, there's a slightly better way to call its methods: instead of going through run_keyword, you can use get_library_instance() and call its keywords directly:
from robot.libraries.BuiltIn import BuiltIn

se_lib = BuiltIn().get_library_instance('SeleniumLibrary')
cnt = se_lib.get_element_count(locator)
For this to work though, the library obviously needs to be imported - to have an instance to get.
P.S. Be wary of generic variable names like "count" - shadowing Python built-in names (think list, id, or sum) is an easy trap that leads to hidden bugs downstream ;)
I have a python file named getProperty.py, and in it I have only one method!
import configparser

def desiredCapability(platform, key):
    conf = configparser.RawConfigParser()
    if platform.lower() == "android":
        conf.read("somepath")
    elif platform.lower() == "ios":
        conf.read("some path")
    else:
        return None
    strr = conf.get("main", key)
    return strr
I have a robot file with a Variables section:
*** Settings ***
Library    getProperty.py

*** Variables ***
${deviceName}    # Here I want to call the method
When I try to call the method in that Variables section, it is taken as a string! I can call the method inside the Test Cases section with no trouble, but I want it in the Variables section!
You can't call keywords in the Variables table. From the Robot Framework User Guide:
The most common source for variables are Variable tables in test case files and resource files. Variable tables are convenient, because they allow creating variables in the same place as the rest of the test data, and the needed syntax is very simple. Their main disadvantages are that values are always strings and they cannot be created dynamically. If either of these is a problem, variable files can be used instead.
It seems to me that you want to be able to load variables with specific content given a certain variable element. With your chosen approach this isn't possible, as explained by @Bryan Oakley.
That said, it is possible to have variable data sets and most of us use them on a daily basis. Often the use-case is to have a single set of tests that can be run against multiple environments. Each environment having a different URL, Credentials and other properties.
One approach is to load a set of variables from the command line using a variable file. The Documentation on Variable files gives several approaches:
Defining Python variables in a Python file and referring to it.
Defining variables in a YAML variable file.
Defining variables in a Python Class or Python function.
The first two contain fixed variables, and the last one can take an argument and return values depending on the input. In the example below we use a Python function to give back the ${name} variable with a certain value given a particular input.
variable_file.py
def get_variables(platform=None, key=None):
    # Guard against platform=None before calling .lower()
    if platform and platform.lower() == "android":
        variables = {'name': 'android'}
    elif platform and platform.lower() == "ios":
        variables = {'name': 'ios'}
    else:
        variables = {'name': 'No Device'}
    return variables
variable_file.robot
*** Settings ***
Variables    ${EXECDIR}/variable_file.py    ios

*** Test Cases ***
TC
    Log To Console    ${name}
In this example the variable file is referenced as Variables ${EXECDIR}/variable_file.py ios, with the argument ios. This results in the ${name} variable holding the value ios.
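Tying this back to the question, a hypothetical variable_file.py reusing the original getProperty.py logic might look like this (the config paths stay elided exactly as in the question):

import configparser

def get_variables(platform=None, key=None):
    conf = configparser.RawConfigParser()
    if platform and platform.lower() == "android":
        conf.read("somepath")    # path elided in the original question
    elif platform and platform.lower() == "ios":
        conf.read("some path")   # path elided in the original question
    else:
        return {'deviceName': None}
    # Expose the looked-up value as ${deviceName}
    return {'deviceName': conf.get("main", key)}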
Is there a way to access host/group vars from within a custom written module? I would like to avoid passing all required vars as module parameters.
My module is written in Python and I use the boilerplate. I checked pretty much all available vars, but they are not stored anywhere:
from pprint import pprint

def main():
    pprint(dir())
    pprint(globals())
    pprint(locals())
    for name in vars().keys():
        print(name)
Now my only hope is they are somehow accessible through the undocumented module utils.
I guess it is not possible, since the module runs on the target machine and probably the facts/host/group vars are not transferred along with the module...
Edit: Found the module utils now and it doesn't look promising.
Is there a way to access host/group vars from within a custom written module?
Not built-in.
You will have to pass them yourself one way or the other:
Module args.
Serialize to local file system (with pickle or yaml.dump() or json or ...) and send the file over.
any other innovative ideas you can come up with.
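For what it's worth, a rough sketch of the serialize-and-send idea, with purely illustrative names and paths:

import json

# Controller side (e.g. in an action plugin or a pre-task) - dump the vars
# you need to a file that you then copy to the target host.
def dump_vars(needed_vars, path="/tmp/my_module_vars.json"):
    with open(path, "w") as f:
        json.dump(needed_vars, f)

# Target side, inside the custom module - read them back.
def load_vars(path="/tmp/my_module_vars.json"):
    with open(path) as f:
        return json.load(f)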
Unfortunately you can't just send over whole host/group var files as-is, because you would have to implement the variable scope/precedence resolution algorithm of Ansible, which is undefined (it's not the Zen philosophy of ansible to define such petty things :P ).
--edit--
I see they have some precedence defined now.
Ansible does apply variable precedence, and you might have a use for it. Here is the order of precedence from least to greatest (the last listed variables override all other variables):
command line values (for example, -u my_user, these are not variables)
role defaults (defined in role/defaults/main.yml) [1]
inventory file or script group vars [2]
inventory group_vars/all [3]
playbook group_vars/all [3]
inventory group_vars/* [3]
playbook group_vars/* [3]
inventory file or script host vars [2]
inventory host_vars/* [3]
playbook host_vars/* [3]
host facts / cached set_facts [4]
play vars
play vars_prompt
play vars_files
role vars (defined in role/vars/main.yml)
block vars (only for tasks in block)
task vars (only for the task)
include_vars
set_facts / registered vars
role (and include_role) params
include params
extra vars (for example, -e "user=my_user") (always win precedence)
In general, Ansible gives precedence to variables that were defined more recently, more actively, and with more explicit scope. Variables in the defaults folder inside a role are easily overridden. Anything in the vars directory of the role overrides previous versions of that variable in the namespace. Host and/or inventory variables override role defaults, but explicit includes such as the vars directory or an include_vars task override inventory variables.

Ansible merges different variables set in inventory so that more specific settings override more generic settings. For example, ansible_ssh_user specified as a group_var is overridden by ansible_user specified as a host_var. For details about the precedence of variables set in inventory, see How variables are merged.
Footnotes
[1] Tasks in each role see their own role's defaults. Tasks defined outside of a role see the last role's defaults.
[2] Variables defined in inventory file or provided by dynamic inventory.
[3] Includes vars added by 'vars plugins' as well as host_vars and group_vars which are added by the default vars plugin shipped with Ansible.
[4] When created with set_facts's cacheable option, variables have the high precedence in the play, but are the same as a host facts precedence when they come from the cache.
As per your suggestion in your answer here, I did manage to read host_vars and local play vars through a custom Action Plugin.
I'm posting this answer for completeness' sake and to give an explicit example of how one might go about this method, although you gave this idea originally :)
Note - this example is incomplete in terms of a fully functioning plugin. It just shows how to access variables.
from ansible.template import is_template
from ansible.plugins.action import ActionBase

class ActionModule(ActionBase):

    def run(self, tmp=None, task_vars=None):
        # some boilerplate ...
        # init
        result = super(ActionModule, self).run(tmp, task_vars)
        # more boilerplate ...

        # check the arguments passed to the task; if missing, returns None
        self._task.args.get('<TASK ARGUMENT NAME>', None)
        # or
        # check if the play has vars defined
        task_vars['vars']['<ARGUMENT NAME>']
        # or
        # check if the host vars have something defined
        task_vars['hostvars']['<HOST NAME FROM HOSTVARS>']['<ARGUMENT NAME>']

        # again boilerplate...
        # build arguments to pass to the module
        some_module_args = dict(
            arg1=arg1,
            arg2=arg2
        )
        # call the module with the above arguments...
In case your playbook variables contain Jinja2 templates, you can resolve these templates in the plugin as follows:

from ansible.template import is_template

# check if the variable is a template through 'is_template'
if is_template(var, self._templar.environment):
    # access the internal `_templar` object to resolve the template
    resolved_arg = self._templar.template(var)
Some words of caution:
If you have a variable defined in your playbook as follows
# things ...
#
vars:
- pkcs12_path: '{{ pkcs12_full_path }}'
- pkcs12_pass: '{{ pkcs12_password }}'
The variable pkcs12_path must not match the host_vars name.
For instance, if you had pkcs12_path: '{{ pkcs12_path }}', then resolving the template with the above code would cause a recursion exception... This might be obvious to some, but for me it was surprising that the host_vars variable and the playbook variable must not have the same name.
You can also access variables through task_vars['<ARG_NAME>'], but I'm not sure where that reads from. It's also less explicit than taking variables from task_vars['vars']['<ARG_NAME>'] or from the hostvars.
PS - at the time of writing this, the example follows the basic structure of what Ansible considers an Action Plugin. In the future, the run method might change its signature...
I think you pretty much hit the nail on the head with your thinking here:
I guess it is not possible, since the module runs on the target machine and probably the facts/host/group vars are not transferred along with the module...
However, having said that, if you really have a need for this then there might be a slightly messy way of doing it. As of Ansible 1.8 you can set up fact caching, which uses redis to cache facts between runs of plays. Since redis is pretty easy to use and has clients for most popular programming languages, you could have your module query the redis server for any facts you need. It's not exactly the cleanest way to do it, but it just might work.
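In case it helps, here is a rough sketch of that idea, assuming the redis fact-cache backend is enabled and the redis Python client is installed; the key layout ("ansible_facts" followed by the inventory hostname) matches the default cache plugin at the time of writing, but treat it as an assumption to verify against your version:

import json
import redis

def get_cached_facts(hostname, redis_host="localhost", redis_port=6379):
    # Ansible's redis fact cache stores JSON blobs keyed by
    # "ansible_facts" + inventory hostname (verify for your version)
    r = redis.StrictRedis(host=redis_host, port=redis_port)
    raw = r.get("ansible_facts" + hostname)
    return json.loads(raw) if raw else None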
I am building a very basic platform in the form of a Python 2.7 module. This module has a read-eval-print loop where entered user commands are mapped to function calls. Since I am trying to make it easy to build plugin modules for my platform, the function calls will be from my Main module to an arbitrary plugin module. I'd like a plugin builder to be able to specify the command that he wants to trigger his function, so I've been looking for a Pythonic way to remotely enter a mapping in the command->function dict in the Main module from the plugin module.
I've looked at several things:
Method name parsing: the Main module would import the plugin module and scan it for method names that match a certain format. For example, it might add the download_file_command(file) method to its dict as "download file" -> download_file_command. However, getting a concise, easy-to-type command name (say, "dl") requires that the function's name also be short, which isn't good for code readability. It also requires the plugin developer to conform to a precise naming format.
Cross-module decorators: decorators would let the plugin developer name his function whatever he wants and simply add something like @Main.register("dl"), but they would necessarily require that I both modify another module's namespace and keep global state in the Main module. I understand this is very bad.
Same-module decorators: using the same logic as above, I could add a decorator that adds the function's name to some command name->function mapping local to the plugin module and retrieve the mapping in the Main module with an API call. This requires that certain methods always be present or inherited, though, and - if my understanding of decorators is correct - the function would only register itself the first time it is run and would unnecessarily re-register itself every subsequent time thereafter.
Thus, what I really need is a Pythonic way to annotate a function with the command name that should trigger it, and that way can't be the function's name. I need to be able to extract the command name->function mapping when I import the module, and any less work on the plugin developer's side is a big plus.
Thanks for the help, and my apologies if there are any flaws in my Python understanding; I'm relatively new to the language.
Building on the first part of @ericstalbot's answer, you might find it convenient to use a decorator like the following.
################################################################################
import functools

def register(command_name):
    def wrapped(fn):
        @functools.wraps(fn)
        def wrapped_f(*args, **kwargs):
            return fn(*args, **kwargs)
        # Record the command name on the wrapper and note it in the docstring
        wrapped_f.__doc__ += "(command=%s)" % command_name
        wrapped_f.command_name = command_name
        return wrapped_f
    return wrapped

################################################################################
@register('cp')
def copy_all_the_files(*args, **kwargs):
    """Copy many files."""
    print "copy_all_the_files:", args, kwargs

################################################################################
print "Command Name: ", copy_all_the_files.command_name
print "Docstring   : ", copy_all_the_files.__doc__
copy_all_the_files("a", "b", keep=True)
Output when run:
Command Name: cp
Docstring : Copy many files.(command=cp)
copy_all_the_files: ('a', 'b') {'keep': True}
User-defined functions can have arbitrary attributes. So you could specify that plug-in functions have an attribute with a certain name. For example:
def a():
    return 1

a.command_name = 'get_one'
Then, in your module you could build a mapping like this:
import inspect  # from the standard library
import plugin

mapping = {}
for v in plugin.__dict__.itervalues():
    if inspect.isfunction(v) and hasattr(v, 'command_name'):
        mapping[v.command_name] = v
To read about arbitrary attributes for user-defined functions, see the docs.
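To tie this back to the read-eval-print loop in the question, a dispatch loop over such a mapping might look like the sketch below (Python 2 to match the question; the quit command and prompt are made up for the example):

while True:
    line = raw_input("> ").strip()
    if not line:
        continue
    command = line.split()[0]
    if command == "quit":
        break
    func = mapping.get(command)
    if func is None:
        print "unknown command:", command
    else:
        # the example function above takes no arguments
        print func()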
There are two parts in a plugin system:
Discover plugins
Trigger some code execution in a plugin
The proposed solutions in your question address only the second part.
There are many ways to implement both, depending on your requirements. For example, to enable plugins, they could be specified in a configuration file for your application:
plugins = some_package.plugin_for_your_app
          another_plugin_module
          # ...
To implement loading of the plugin modules:
import importlib

plugins = [importlib.import_module(name) for name in config.get("plugins")]
To get a dictionary: command name -> function:
commands = {name: func
            for plugin in plugins
            for name, func in plugin.get_commands().items()}
A plugin author can use any method to implement get_commands(), e.g., using prefixes or decorators - your main application shouldn't care, as long as get_commands() returns the command dictionary for each plugin.
For example, some_plugin.py (full source):
def f(a, b):
    return a + b

def get_commands():
    return {"add": f, "multiply": lambda x, y: x * y}
It defines two commands: add and multiply.
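And here is a hypothetical decorator-based plugin satisfying the same contract, for authors who prefer registration over a hand-written dict (all names are illustrative):

_commands = {}

def command(name):
    # Collect decorated functions into the module-level registry
    def decorator(fn):
        _commands[name] = fn
        return fn
    return decorator

@command("add")
def add(a, b):
    return a + b

@command("multiply")
def multiply(x, y):
    return x * y

def get_commands():
    # The only contract the main application relies on
    return dict(_commands)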