Python methods from Robot Framework Variables section

I have a Python file named getProperty.py, and in it I have only one method:

    import configparser

    def desiredCapability(platform, key):
        conf = configparser.RawConfigParser()
        if platform.lower() == "android":
            conf.read("somepath")
        elif platform.lower() == "ios":
            conf.read("some path")
        else:
            return None
        strr = conf.get("main", key)
        return strr
I have a robot file with a Variables section:

    *** Settings ***
    Library    getProperty.py

    *** Variables ***
    ${deviceName}    # Here I want to call the method
When I try to call the method in the Variables section, it is treated as a string! I can call the method inside the Test Cases section with no trouble, but I want it in the Variables section!

You can't call keywords in the Variables table. From the Robot Framework user guide:
The most common source for variables are Variable tables in test case files and resource files. Variable tables are convenient, because they allow creating variables in the same place as the rest of the test data, and the needed syntax is very simple. Their main disadvantages are that values are always strings and they cannot be created dynamically. If either of these is a problem, variable files can be used instead.

It seems to me that you want to be able to load variables with specific content given a certain variable element. With your chosen approach this isn't possible, as explained by @Bryan Oakley.
That said, it is possible to have variable data sets, and most of us use them on a daily basis. Often the use case is a single set of tests that can be run against multiple environments, each having a different URL, credentials, and other properties.
One approach is to load a set of variables from the command line using a variable file. The Documentation on Variable files gives several approaches:
Defining Python variables in a Python file and referring to it.
Defining variables in a YAML variable file.
Defining variables in a Python Class or Python function.
The first two contain fixed variables; the last one can take an argument and return values depending on the input. In the example below we use a Python function to give back the ${name} variable with a certain value given a particular input.
variable_file.py

    def get_variables(platform=None, key=None):
        # Guard against the default None so .lower() does not fail.
        platform = (platform or "").lower()
        if platform == "android":
            variables = {'name': 'android'}
        elif platform == "ios":
            variables = {'name': 'ios'}
        else:
            variables = {'name': 'No Device'}
        return variables
variable_file.robot

    *** Settings ***
    Variables    ${EXECDIR}/variable_file.py    ios

    *** Test Cases ***
    TC
        Log To Console    ${name}
In this example the variable file is referenced as Variables    ${EXECDIR}/variable_file.py    ios, with the argument ios. This results in the ${name} variable holding the value ios.
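
To tie this back to the question: the original configparser logic can live inside such a variable file. A minimal sketch, assuming the same placeholder paths and the "main" section from the question:

    import configparser

    def get_variables(platform=None, key="deviceName"):
        conf = configparser.RawConfigParser()
        if platform and platform.lower() == "android":
            conf.read("somepath")       # path placeholder from the question
        elif platform and platform.lower() == "ios":
            conf.read("some path")      # path placeholder from the question
        else:
            return {'deviceName': None}
        # Expose the looked-up value as ${deviceName} in the suite.
        return {'deviceName': conf.get("main", key)}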

API handling with Robot and Python Class

What is the best way to handle objects with Robot Framework? I am starting to write a Python class to handle API interactions, which I can then use as keywords in Robot Framework (RF). My question is: how does one pass data from one method to another? Do I have to pass the object back to every function to get the data?
In the example below, I call the class and it initializes, but can I reference an instance of the class if I wanted to? Or am I supposed to write every method to handle the entire object I get back from another method? Hopefully this makes sense; I basically want to use Python like I normally would, but inside of RF.
More specifically, is it feasible to distinguish between several instances if I call them all at once?
Test Python file foo.py:

    class foo:
        def initialize(self, api):
            self.api_item = api

        def get_api(self):
            return self.api_item

        def do_something_with_api(self):
            # doing something with an API, then return results
            pass

        def do_something_else_with_api(self):
            # doing something with an API, then return results
            pass
Test Robot file:

    *** Settings ***
    Library    /path/foo.py

    *** Variables ***
    ${api_url}    https://apiurl.com/

    *** Tasks ***
    Setup Initialize Settings
        ${session}=    MgsRestApiHandler.initialize    ${api_url}
In RF when a class is loaded as a Library there's always an instance object that's created for it. Thus if you have state variables within it, they'll be present for all class methods ("keywords") in your RF source.
In other words, in your example all methods will have access to self.api_item (after initialize() is called). By the way, why don't you add a normal constructor __init__() and define the variable there, even with a None value, so it's cleaner?
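A minimal sketch of that suggestion, keeping the question's class name:

    class foo:
        def __init__(self):
            # Declare the state up front; RF creates one instance of the
            # library, so this attribute is shared by all of its keywords.
            self.api_item = None

        def initialize(self, api):
            self.api_item = api
            return self.api_item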
is it feasible to distinguish between several instances
You can instantiate several instances of the same class ("library") by importing it multiple times and using the WITH NAME Robot Framework syntax:

    *** Settings ***
    Library    /path/foo.py    WITH NAME    o1
    Library    /path/foo.py    WITH NAME    o2
The "drawback" is you now have to prefix the method call with the instance name - otherwise the framework doesn't know for which object you want to call it:
    *** Tasks ***
    Setup Initialize Settings
        ${session1}=    o1.initialize    ${api_url}
        ${session2}=    o2.initialize    ${api_url2}
And if I understand one of your questions correctly (or if not, take this as general trivia :) - in RF, whatever a method/keyword returns - from primitives to complex objects (e.g. class instances) - is assigned to the variable in front of the call.
So if you have a method/keyword down the line that expects a complex object, you can pass that returned value - the framework will not mangle it in any way; it will be passing around a normal Python object.
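A small sketch of that behaviour, with hypothetical class and method names:

    class Session:
        def __init__(self, url):
            self.url = url

    class ApiLibrary:
        def open_session(self, url):
            # RF stores the returned instance, e.g. ${session}=    Open Session
            return Session(url)

        def use_session(self, session):
            # RF hands back the very same Session object, not a string.
            return session.url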

How to declare optional global variable in robot script depending upon command line arguments in robot framework

I have a scenario where I need to set global variables in my robot script depending upon the command-line arguments. In some cases I pass 2 arguments, and in others 3 arguments, from the command line.
sample.robot

    Set Global Variable    ${arg1}    ${ARG1}
    Set Global Variable    ${arg2}    ${ARG2}
    Set Global Variable    ${arg3}    ${ARG3}
Scenario I
Command-line arguments passed:

    robot --variable ARG1:arg1 --variable ARG2:arg2 sample.robot

During execution, the script throws the error "Variable '${ARG3}' not found."
Scenario II
Command-line arguments passed:

    robot --variable ARG1:arg1 --variable ARG2:arg2 --variable ARG3:arg3 sample.robot

During execution, everything works fine.
Requirement
What I need here is that even if I don't pass some command-line arguments, no error should be thrown - perhaps some way to declare some global variables as optional and others as mandatory. Similar functionality can be achieved in Python using the "argparse" module.
To set a global variable in Robot Framework, you can use the Set Global Variable built-in keyword; you can see the reference here.
To study more of the command-line options, you can go through this link.
To read a variable with a default value when no command-line argument is provided:

    ${value}=    Get Variable Value    ${ARG3}    default value    # set your default value here

You can also use a list or dict variable to dynamically hold a varying number of arguments, so nothing fails when a command-line argument is not provided for that suite.
It sounds to me like you want to insert a specific set of data given a particular test scenario. If this is the case, I'd recommend looking into Robot Framework variable files. In these external files you can craft all types of variables, and they can easily be replaced by a file that matches your test scenario.
If this all has to happen on the command line, then combining a single file name/path on the command line with the BuiltIn keyword Import Variables should give you the same sort of flexibility.
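
A minimal sketch of that idea, with hypothetical file and variable names: the defaults live in a Python variable file, and the command line only overrides what it needs to, since individual --variable options take precedence over variable files.

    # defaults.py - loaded with: robot --variablefile defaults.py sample.robot
    # Passing --variable ARG3:real_value on top of it overrides the default.
    ARG1 = 'default1'
    ARG2 = 'default2'
    ARG3 = 'default3'   # optional on the command line; this fills the gap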

Robot Framework location and name of keyword

I want to create a Python library with a zero-argument function that my custom Robot Framework keywords can call. It needs to know the absolute path of the file where the keyword is defined, and the name of the keyword. I know how to do something similar with test cases using the robot.libraries.BuiltIn library and the ${SUITE_SOURCE} and ${TEST NAME} variables, but I can't find anything similar for custom keywords. I don't care how complicated the answer is; maybe I have to dig into the guts of Robot Framework's internal classes and access that data somehow. Is there any way to do this?
Thanks to janne I was able to find the solution.
    from robot.running.context import EXECUTION_CONTEXTS

    def resource_locator():
        name = EXECUTION_CONTEXTS.current.keywords[-1].name
        libname = EXECUTION_CONTEXTS.current.get_handler(name).libname
        resources = EXECUTION_CONTEXTS.current.namespace._kw_store.resources
        path = ""
        for key in resources._keys:
            if resources[key].name == libname:
                path = key
                break
        return {'name': name, 'path': path}
EXECUTION_CONTEXTS.current.keywords is the stack of keywords called, with the earliest first and the most recent last, so EXECUTION_CONTEXTS.current.keywords[-1] gets the last keyword, which is the keyword that called this function.
EXECUTION_CONTEXTS.current.get_handler(name).libname gets the name of the library in which the keyword name is defined. In the case of user-defined keywords, it is the filename (not the full path) minus the extension.
EXECUTION_CONTEXTS.current.namespace._kw_store.resources is a dictionary of all included resources, where the key is the absolute path. Because the file path is the key, I have to search for the key whose value's name matches the name of the resource in which the keyword is defined (libname).
I took a relatively quick look through the sources, and it seems that the execution context does not have any reference to the currently executing keyword. So, the only way I can think of to resolve this is:
Your library also needs to be a listener, since listeners get events when a keyword is started (a sketch follows below).
You need to go through robot.libraries.BuiltIn.EXECUTION_CONTEXT._kw_store.resources to find out which resource file contains the keyword currently executing.
I did not do a POC, so I am not sure whether this is actually doable, but that's the solution that comes to my mind currently.
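
A minimal sketch of the library-as-listener idea, assuming listener interface version 2: the class registers itself as its own listener and records each keyword name as it starts.

    class KeywordTracker:
        ROBOT_LIBRARY_SCOPE = 'GLOBAL'
        ROBOT_LISTENER_API_VERSION = 2

        def __init__(self):
            # Register this library instance as its own listener.
            self.ROBOT_LIBRARY_LISTENER = self
            self.current_keyword = None

        def _start_keyword(self, name, attrs):
            # Called by RF whenever a keyword starts executing.
            self.current_keyword = name

        def get_current_keyword(self):
            return self.current_keyword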

Ansible: Access host/group vars from within custom module

Is there a way to access host/group vars from within a custom-written module? I would like to avoid passing all required vars as module parameters.
My module is written in Python and I use the boilerplate. I checked pretty much all available vars, but they are not stored anywhere:

    from pprint import pprint

    def main():
        pprint(dir())
        pprint(globals())
        pprint(locals())
        for name in vars().keys():
            print(name)
Now my only hope is that they are somehow accessible through the undocumented module utils.
I guess it is not possible, since the module runs on the target machine and probably the facts/host/group vars are not transferred along with the module...
Edit: Found the module utils now, and it doesn't look promising.
Is there a way how one can access host/group vars from within a custom
written module?
Not built-in.
You will have to pass them yourself one way or another:
Module args (a sketch follows this list).
Serialize to the local file system (with pickle or yaml.dump() or json or ...) and send the file over.
Any other innovative ideas you can come up with.
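
A minimal sketch of the first option, with hypothetical parameter names: the play passes the vars it needs explicitly (e.g. my_host_var: "{{ my_host_var }}"), and the module declares them as ordinary parameters.

    from ansible.module_utils.basic import AnsibleModule

    def main():
        module = AnsibleModule(
            argument_spec=dict(
                # Filled in from the play via Jinja2 templating.
                my_host_var=dict(type='str', required=True),
                my_group_var=dict(type='str', required=False, default=None),
            )
        )
        module.exit_json(changed=False,
                         host_var=module.params['my_host_var'],
                         group_var=module.params['my_group_var'])

    if __name__ == '__main__':
        main()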
Unfortunately you can't just send over whole host/group var files as-is, because you would have to implement Ansible's variable scope/precedence resolution algorithm, which is undefined (it's not the Zen philosophy of Ansible to define such petty things :P ).
--edit--
I see they have some precedence defined now.
Ansible does apply variable precedence, and you might have a use for
it. Here is the order of precedence from least to greatest (the last
listed variables override all other variables):
command line values (for example, -u my_user, these are not variables)
role defaults (defined in role/defaults/main.yml) [1]
inventory file or script group vars [2]
inventory group_vars/all [3]
playbook group_vars/all [3]
inventory group_vars/* [3]
playbook group_vars/* [3]
inventory file or script host vars [2]
inventory host_vars/* [3]
playbook host_vars/* [3]
host facts / cached set_facts [4]
play vars
play vars_prompt
play vars_files
role vars (defined in role/vars/main.yml)
block vars (only for tasks in block)
task vars (only for the task)
include_vars
set_facts / registered vars
role (and include_role) params
include params
extra vars (for example, -e "user=my_user")(always win precedence)
In general, Ansible gives precedence to variables that were defined
more recently, more actively, and with more explicit scope. Variables
in the defaults folder inside a role are easily overridden. Anything
in the vars directory of the role overrides previous versions of that
variable in the namespace. Host and/or inventory variables override
role defaults, but explicit includes such as the vars directory or an
include_vars task override inventory variables.
Ansible merges different variables set in inventory so that more
specific settings override more generic settings. For example,
ansible_ssh_user specified as a group_var is overridden by
ansible_user specified as a host_var. For details about the precedence
of variables set in inventory, see How variables are merged.
Footnotes
[1] Tasks in each role see their own role's defaults. Tasks defined outside of a role see the last role's defaults.
[2] Variables defined in inventory file or provided by dynamic inventory.
[3] Includes vars added by 'vars plugins' as well as host_vars and group_vars which are added by the default vars plugin shipped with Ansible.
[4] When created with set_facts's cacheable option, variables have the high precedence in the play, but are the same as a host facts precedence when they come from the cache.
As per your suggestion in your answer here, I did manage to read host_vars and local play vars through a custom action plugin.
I'm posting this answer for completeness' sake and to give an explicit example of how one might go about this method, although you gave this idea originally :)
Note: this example is incomplete in terms of a fully functioning plugin; it just shows how to access variables.
    from ansible.template import is_template
    from ansible.plugins.action import ActionBase

    class ActionModule(ActionBase):
        def run(self, tmp=None, task_vars=None):
            # some boilerplate ...
            # init
            result = super(ActionModule, self).run(tmp, task_vars)
            # more boilerplate ...

            # check the arguments passed to the task; if missing, get None back
            arg1 = self._task.args.get('<TASK ARGUMENT NAME>', None)
            # or check if the play has vars defined
            arg2 = task_vars['vars']['<ARGUMENT NAME>']
            # or check if the host vars have something defined
            arg3 = task_vars['hostvars']['<HOST NAME FROM HOSTVARS>']['<ARGUMENT NAME>']

            # again boilerplate...
            # build arguments to pass to the module
            some_module_args = dict(
                arg1=arg1,
                arg2=arg2
            )
            # call the module with the above arguments...
In case you have your playbook variables with Jinja2 templates, you can resolve these templates in the plugin as follows:

    from ansible.template import is_template

    # check if the variable is a template through 'is_template'
    if is_template(var, self._templar.environment):
        # access the internal `_templar` object to resolve the template
        resolved_arg = self._templar.template(var)
Some words of caution:
If you have a variable defined in your playbook as follows:

    # things ...
    vars:
      - pkcs12_path: '{{ pkcs12_full_path }}'
      - pkcs12_pass: '{{ pkcs12_password }}'

The variable pkcs12_path must not match the host_vars name.
For instance, if you had pkcs12_path: '{{ pkcs12_path }}', then resolving the template with the above code would cause a recursion exception... This might be obvious to some, but for me it was surprising that the host_vars variable and the playbook variable must not have the same name.
You can also access variables through task_vars['<ARG_NAME>'], but I'm not sure where that is read from; it is also less explicit than taking variables from task_vars['vars']['<ARG_NAME>'] or from the hostvars.
PS: at the time of writing, the example follows the basic structure of what Ansible considers an action plugin. In the future, the run method might change its signature...
I think you pretty much hit the nail on the head with your thinking here:
I guess it is not possible, since the module runs on the target machine and probably the facts/host/group vars are not transferred along with the module...
However, having said that, if you really have a need for this then there might be a slightly messy way of doing it. As of Ansible 1.8 you can set up fact caching, which uses redis to cache facts between runs of plays. Since redis is pretty easy to use and has clients for most popular programming languages, you could have your module query the redis server for any facts you need. It's not exactly the cleanest way to do it, but it just might work.
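
A rough sketch of that idea. The cache key layout ('ansible_facts' prefix plus the inventory hostname) is an assumption based on the defaults of the redis fact-cache plugin, so verify it against your Ansible version:

    import json
    import redis

    def lookup_cached_facts(hostname, host='localhost', port=6379):
        conn = redis.StrictRedis(host=host, port=port)
        # Assumed key format: default 'ansible_facts' prefix + inventory hostname.
        raw = conn.get('ansible_facts' + hostname)
        return json.loads(raw) if raw else {}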

Populating namespace within a module before loading it

I designed a configuration mechanism in Python where certain objects can operate in special ways to define problems in our domain.
The user specifies the problem by using these objects in a "config file" manner. For instance:
    # run configuration
    CASES = [
        ('Case 1', Item('item1') + Item('item2') + Item('item3')),
        ('Case 2', Item('item1') + Item('item4')),
    ]
    DATA = {
        'Case 1': {'Piece 1': 'path 1'},
        'Case 2': {'Piece 1': 'path 2'},
    }
The Item objects are, of course, defined in a specific module. In order to use them you have to issue an import statement: from models import Item (of course, my actual imports are more complex, not a single one).
I would like the user to simply write the configuration presented, without having to import anything (users can very easily forget this).
I thought of reading the file as text, creating a secondary text file with all the appropriate imports at the top, writing that to a file, and importing that file, but this seems clumsy.
Any advice?
Edit:
The workflow of my system is somewhat similar to Django, in that the user defines the "Settings" in a python file, and runs a script which imports that Settings file and does things with it. That is where I would like this functionality, to tell Python "given this namespace (where Item means something in particular), the user will provide a script - execute it and hand me the result so that I can spawn the different runs".
From the eval help:

    >>> help(eval)
    Help on built-in function eval in module __builtin__:

    eval(...)
        eval(source[, globals[, locals]]) -> value

        Evaluate the source in the context of globals and locals.
        The source may be a string representing a Python expression
        or a code object as returned by compile().
        The globals must be a dictionary and locals can be any mapping,
        defaulting to the current globals and locals.
        If only globals is given, locals defaults to it.
That is, you can pass in an arbitrary dictionary to use as the namespace for an eval call - or, since your configuration consists of assignment statements rather than a single expression, for an exec call, which takes the same globals/locals arguments:

    namespace = {'Item': Item}
    with open(source) as f:
        # exec rather than eval: eval handles only expressions, and the
        # config file contains assignments.
        exec(f.read(), namespace)
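
Afterwards the user's configuration is available as ordinary dictionary entries:

    cases = namespace['CASES']   # list of (name, combined Item) tuples
    data = namespace['DATA']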
Why have you decided that the user needs to write their configuration file in pure Python? There are many simple human-writable languages you could use instead. Have a look at ConfigParser, for instance, which reads basic configuration files of the sort Windows uses.
    [cases]
    case 1: item1 + item2 + item3
    case 2: item1 + item4

    [data]
    case 1: piece1 - path1
    case 2: piece1 - path2
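
A minimal sketch of reading such a file back into the domain objects; the file name run.cfg is hypothetical, and the split on '+' mirrors the example syntax:

    import configparser
    from models import Item   # the domain class from the question

    parser = configparser.ConfigParser()
    parser.read('run.cfg')    # hypothetical file name

    CASES = []
    for name, expr in parser.items('cases'):
        # Rebuild the Item sum from the "item1 + item2" notation.
        items = [Item(part.strip()) for part in expr.split('+')]
        combined = items[0]
        for item in items[1:]:
            combined = combined + item   # Item defines __add__ per the question
        CASES.append((name, combined))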
1) The first thing that comes to mind is to offer the user generation of your config file. How so?
You can add an argument to the script that launches your application:

    $ python application_run.py --generate_settings

This will generate a config file with a skeleton of the different imports that the user should not have to add every time, something like this:
    import sys
    from models import Item

    # Complete the information here please !!!
    CASES = []
    DATA = {}
2) A second way is to use execfile(); for this you can create a script that will read settings.py:

root_settings.py

    # All imports defined here.
    from models import Item
    ...
    execfile('settings.py')

And now, to read the settings file info, just import root_settings; all variables that have been defined in settings.py are now in the root_settings namespace.
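
Note that execfile() exists only in Python 2; under Python 3 the equivalent is to read and exec the file yourself:

    # Python 3 replacement for execfile('settings.py'):
    with open('settings.py') as f:
        exec(f.read(), globals())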
