How to call a Python method from Robot Framework

I need to call a Python method from Robot Framework.
def getRandomEmails():
    a = ''.join(random.choice(string.ascii_lowercase + string.digits) for i in range(16))
    email = a + '@' + 'gmail.com'
    return email
This function is written in the EnvVar.py file.
How can I use the value returned from this method in Robot Framework? I have tried many ways, but nothing works.
Please help.

Using Evaluate
Exactly how to do it on your system depends on how your files are organized and how you've configured robot, but in short, Evaluate from the BuiltIn library is the keyword that lets you run arbitrary methods from importable modules.
Example:
For this example I've created a file named EnvVar.py in the current working directory. It has the following contents:
import random, string

def getRandomEmails():
    a = ''.join(random.choice(string.ascii_lowercase + string.digits) for i in range(16))
    email = a + '@' + 'gmail.com'
    return email
I then created a file named "example.robot" that looks like this:
*** Test cases ***
Example
    ${result}=    evaluate    EnvVar.getRandomEmails()    modules=EnvVar
    log    result: ${result}
Since the current working directory isn't by default on my PYTHONPATH (your setup may be different), I have to tell robot to include the current directory on PYTHONPATH. I can do that with the --pythonpath option.
$ robot --pythonpath . example.robot
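If you only need the address and would rather not keep a helper file at all, the same expression can be passed straight to Evaluate. This is just a sketch of that variation, reusing the expression from the question:
*** Test cases ***
Inline Example
    ${email}=    evaluate    ''.join(random.choice(string.ascii_lowercase + string.digits) for i in range(16)) + '@gmail.com'    modules=random,string
    log    result: ${email}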
Creating a keyword library
Another solution is to create your own keyword library that exposes this function as a keyword.
For example, assuming you can import EnvVar, you could write a library named "Util" (Util.py) that creates a keyword that calls this function:
# Util.py
import EnvVar

def get_random_emails():
    return EnvVar.getRandomEmails()
You would then use this in a test like any other keyword library:
*** Settings ***
Library    Util.py

*** Test Cases ***
Example
    ${result}=    Get Random Emails
    log    result: ${result}

If it's the only method you want to expose, then you could add the keyword decorator, i.e.:
import random, string
from robot.api.deco import keyword

@keyword
def getRandomEmails():
    a = ''.join(random.choice(string.ascii_lowercase + string.digits) for i in range(16))
    email = a + '@' + 'gmail.com'
    return email
And obviously you should import this file in the Settings section as a library.
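For example, a minimal sketch of that import and call, assuming EnvVar.py sits next to the suite or is on PYTHONPATH (Robot Framework matches keyword names ignoring case and spaces, so getRandomEmails can be called as Get Random Emails):
*** Settings ***
Library    EnvVar.py

*** Test Cases ***
Example
    ${email}=    Get Random Emails
    log    result: ${email}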

Related

Custom Ansible module is giving an 'extra params' error

I am trying to implement a hostname-like module, and my target machine is an Amazon EC2 instance. But when I run the script it gives me the error below:
[ansible-user@ansible-master ~]$ ansible node1 -m edit_hostname.py -a node2
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
My module is like this:
#!/usr/bin/python
from ansible.module_utils.basic import *
try:
    import json
except ImportError:
    import simplejson as json

def write_to_file(module, hostname, hostname_file):
    try:
        with open(hostname_file, 'w+') as f:
            try:
                f.write("%s\n" % hostname)
            finally:
                f.close()
    except Exception:
        err = get_exception()
        module.fail_json(msg="failed to write to the /etc/hostname file")

def main():
    hostname_file = '/etc/hostname'
    module = AnsibleModule(argument_spec=dict(name=dict(required=True, type=str)))
    name = module.params['name']
    write_to_file(module, name, hostname_file)
    module.exit_json(changed=True, meta=name)

if __name__ == "__main__":
    main()
I don't know where I am making the mistake. Any help will be greatly appreciated. Thank you.
When developing a new module, I would recommend using the boilerplate described in the documentation. It also shows that you'll need to use AnsibleModule to define your arguments.
In your main, you should add something like the following:
def main():
    # define available arguments/parameters a user can pass to the module
    module_args = dict(
        name=dict(type='str', required=True)
    )

    # seed the result dict in the object
    # we primarily care about changed and state
    # change is if this module effectively modified the target
    # state will include any data that you want your module to pass back
    # for consumption, for example, in a subsequent task
    result = dict(
        changed=False,
        original_hostname='',
        hostname=''
    )

    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=False
    )

    # manipulate or modify the state as needed (this is going to be the
    # part where your module will do what it needs to do)
    result['original_hostname'] = module.params['name']
    result['hostname'] = 'goodbye'

    # use whatever logic you need to determine whether or not this module
    # made any modifications to your target
    result['changed'] = True

    # in the event of a successful module execution, you will want to
    # simply call AnsibleModule.exit_json(), passing the key/value results
    module.exit_json(**result)
Then, you can call the module like so:
ansible node1 -m mymodule.py -a "name=myname"
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
As explained by your error message, an anonymous default parameter is only supported by a limited number of modules. In your custom module, the parameter you created is called name. Moreover, you should not include the .py extension in the module name. You have to call your module as an ad-hoc command like so:
$ ansible node1 -m edit_hostname -a name=node2
I did not test your module code so you may have further errors to fix.
Meanwhile, I still strongly suggest you use the default boilerplate from the Ansible documentation as proposed in @Simon's answer.
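For reference, here is a rough, untested sketch of how the question's module might look once the boilerplate and the required name parameter are folded in (the error handling is simplified):
#!/usr/bin/python
from ansible.module_utils.basic import AnsibleModule

def write_to_file(module, hostname, hostname_file):
    # write the requested hostname and fail the module on any error
    try:
        with open(hostname_file, 'w+') as f:
            f.write("%s\n" % hostname)
    except Exception as exc:
        module.fail_json(msg="failed to write to %s: %s" % (hostname_file, exc))

def main():
    module_args = dict(
        name=dict(type='str', required=True)
    )
    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=False
    )
    name = module.params['name']
    write_to_file(module, name, '/etc/hostname')
    module.exit_json(changed=True, hostname=name)

if __name__ == '__main__':
    main()
It would then be called as an ad-hoc command exactly as shown above: ansible node1 -m edit_hostname -a name=node2.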

Robot Framework treats variables passed from a Python class as 'None'

I have a Python class in Wiresharking.py:
from subprocess import Popen, CREATE_NEW_CONSOLE, PIPE, STDOUT
import time
import subprocess
import datetime
import os
#import envSI

class Wiresharking:
    """Wireshark Server subclass"""
    def __init__(self, **kwargs):
        self.filters = ''
        self.window_ip = kwargs.get('ip')
        print type(self.window_ip)
        self.window_user = kwargs.get('username')
        self.window_password = kwargs.get('password')
        self.dest_path = kwargs.get('Target_path')
        self.interface = kwargs.get('interface')
        self.terminal = 'cmd'
        self.home = kwargs.get('Home_path')

    def test(self):
        print 'hi'
        return self.window_ip
I can call it from another Python file (env.py) like below:
from Wiresharking import Wiresharking

SERVER_01 = Wiresharking(
    name='WIRESHARK_ENV91',
    ip='192.168.1.16',
    username=r'INTRA\pmmm',  # always prepend r before giving your username and password
    password='jan#2018',
    prompt='$ ',
    autostart=False,
    unzip_capture=True,
    filter='',
    #interface=['ens2f0'],
    interface='Ethernet',
    Target_path=r'D:\Users\pankaj-m\Desktop\Test'
)
print SERVER_01.test()
output :
<type 'str'>
hi
192.168.1.16
However, the problem arises when I use the env.py file as a variable file with Robot Framework.
Command:
pybot -V env.py Check.robot
The Check.robot file is below:
*** Settings ***
Library    Wiresharking.py

*** Test Cases ***
Test
    check

*** Keywords ***
check
    ${abc} =    test
    log    ${abc}
The output I am getting here is 'None':
16:13:37.279 INFO None
Can anyone point out what I am doing wrong here?
Your env.py defines a single variable named ${SERVER_01}. However, Check.robot never uses that variable.
Check.robot imports Wiresharking.py without passing any arguments. That causes its self.window_ip to be None, and thus the keyword returns None.
If you want to see the values from env.py, you need to look at the ${SERVER_01} variable. For example:
log    ${SERVER_01.window_ip}
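For instance, a minimal test that only reads values from the variable file (assuming env.py is still passed with pybot -V env.py) could look like this:
*** Test Cases ***
Show Server Config
    log    ${SERVER_01.window_ip}
    log    ${SERVER_01.window_user}
    log    ${SERVER_01.dest_path}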
This is the way I was able to pass **kwargs and resolve the error.
I am still looking for a cleaner way to pass **kwargs.
*** Settings ***
Library    Wiresharking.py    ip=${SERVER_01.window_ip}    username=${SERVER_01.window_user}    password=${SERVER_01.window_password}    Target_path=${SERVER_01.dest_path}    interface=${SERVER_01.interface}    Home_path=${SERVER_01.home}    WITH NAME    Mylib

*** Variables ***
${window_ip }
#&{names} =    Create Dictionary    0=First Name    2=Email

*** Test Cases ***
Test
    check

*** Keywords ***
check
    ${abc} =    test
    log    ${abc}
Output
INFO 192.168.1.16

In Robot Framework, how do you create an object of a class and call the methods in the corresponding class?

In Robot Framework, how do you create an object of a class and call the methods in the corresponding class? This is the code snippet.
*** Settings ***
Documentation     A resource file with reusable keywords and variables.
...               Use keywords in this file in testcases directory.
Library           /home/kirti/src/Helper/utilities.py
Library           /home/kirti/src/Helper/config_parser.py
#Library          /home/kirti/qa/src/executor/cleanup.CleanUp
Library           /home/kirti/qa/src/executor/cleanup.py

*** Variables ***
${RESULT}    0

*** Keywords ***
Read Json Config Values
    Log To Console    "Setting up the config values globally"
    config_parser.Json Config Parser
    Import Variables    /home/kirti/src/Helper/variables.py
    Log Variables    INFO

Check Machines Reachability
    utilities.Check All Machines Status

Check SNMP Counter
    utilities.Get Snmp    192.178.1.2    PPSessionCount

Call Clean Up
    #${cleanupobj}=    cleanup.create cleanup
    #${name}=    ${cleanupobj.cc()}
    Import Library    /home/kirti/src/executor/cleanup.py
    ${cmp}=    Get library instance    CleanUp
    Log To Console    ${cmp}.__class__.__name__
    #${name}=    Call method    ${cmp}    Create cleanup
    ${name}=    Call method    ${cmp}    cc
    #${name}=    Call method    ${cleanupobj}    env cleanup
    #Log To Console    "${name}"
    #Log Variables    INFO
    utilities.Check All Machines Status
Here is a way you can achieve the desired result.
Let's take the example of demo.py, which has the class Sample.
The Sample class has __init__ and getting_path() as methods:
class Sample(object):
    def __init__(self, path, device):
        self.device = device
        self.path = path

    def getting_path(self):
        return self.path
Let's use these methods in the robot file:
*** Settings ***
# in the Library section you reference the python class in the below format
# (file.class_name), so the file is demo.py and the class is Sample
Library    demo.Sample    ${path}    ${device}    WITH NAME    obj
# path and device are the two arguments required by __init__; 'obj' will be used to
# access the methods in the python class
Library    Collections

*** Variables ***
${path}      c:
${device}    samsung

*** Test Cases ***
Test
    Test_python_class

*** Keywords ***
Test_python_class
    # with obj you now call the method of the python file
    ${result} =    obj.getting_path
    # if the method needs any arguments, they can be passed like
    # ${result} =    obj.getting_path    ${arg1}    ${arg2}
    log to console    ${result}
If you want to use a specific instance of a class you can use:
${instance} =    obj    arg1
log to console    ${instance.function(args)}
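Another option, already hinted at by the Call Clean Up keyword in the question, is to fetch the imported library object and call its methods through the BuiltIn keywords Get Library Instance and Call Method. A rough sketch, reusing the demo.Sample library from above:
*** Settings ***
Library    demo.Sample    ${path}    ${device}    WITH NAME    obj

*** Variables ***
${path}      c:
${device}    samsung

*** Keywords ***
Call Method Example
    ${lib}=    Get Library Instance    obj
    ${result}=    Call Method    ${lib}    getting_path
    log to console    ${result}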

How to properly install isapi_wsgi on IIS for Python 2.7?

I've worked through installing Python as a CGI application on IIS on Windows 7. This is pretty straightforward, but I'd like to use the WSGI stuff, for better flexibility.
I downloaded the archive for isapi_wsgi, unzipped it, and then ran the install as per the instructions, like this:
\python27\python.exe setup.py install
This succeeded.
Then I coded a .py module that had the WSGI glue in it, and tried installing it. This failed.
It's a COM Moniker error, and I know that the IIS6-compatible management stuff is based on COM Monikers, which reminded me that there is a pre-req for isapi_wsgi of the IIS6-compatible management stuff. I ran \windows\system32\OptionalFeatures.exe and installed that, then re-ran the .py module and it installed correctly.
C:\dev\wsgi>\Python27\python.exe app1_wsgi.py
Configured Virtual Directory: /wsgi
Installation complete.
Ok, wonderful. Now when I look in the current directory, I see a new DLL named _app1_wsgi.dll, and when I look in IIS Manager I can see a new IIS vdir, and a scriptmap within that vdir for '*', which is mapped to the _app1_wsgi.DLL. All good. But making a request to http://localhost/wsgi gives me a 500 error.
Through some trial-and-error I see that the .py module that defines my handlers must be in the site-packages directory. I am very surprised by this.
Can I avoid this? Can I simply put the .py module in the same directory as the generated .dll file? Or do I need to deploy all of my python logic to site-packages in order to run it from the WSGI mechanism?
The answer is:
- The installation of isapi_wsgi as described in the question is correct.
- With the basic boilerplate of app.py as shown in the example code accompanying isapi_wsgi, the Python classes for the web app need to be in the site-packages directory.
- It is possible to allow the Python source modules to reside in the same directory as the generated *.dll file, but it requires some special handling in the *wsgi.py file.
- A better way to run Python on Windows for development purposes is to simply download the Google App Engine SDK and use the built-in dedicated HTTP server. The framework that comes with the GAE SDK handles reloading and allows the .py modules to be placed in particular directories.
If you don't want to download and install the GAE SDK, then you might try the following. Using this code, when a request arrives on isapi_wsgi, the handler looks in the home directory for a py module, and loads it. If the module is already loaded, it checks the file "last modified time" and reloads the module if the last mod time is later than the time from the prior load. It works for simplistic cases but I suppose it will be brittle when there are nested module dependencies.
import sys
import os
import win32file
from win32con import *

# dictionary of [mtime, module] tuples; uses file path as key
loadedPages = {}

def request_handler(env, start_response):
    '''Demo app from wsgiref'''
    cr = lambda s='': s + '\n'
    if hasattr(sys, "isapidllhandle"):
        h = None
        # get the path of the ISAPI Extension DLL
        hDll = getattr(sys, "isapidllhandle", None)
        import win32api
        dllName = win32api.GetModuleFileName(hDll)
        p1 = repr(dllName).split('?\\\\')
        p2 = p1[1].split('\\\\')
        sep = '\\'
        homedir = sep.join(p2[:-1])
        # the name of the Python module is in the PATH_INFO
        moduleToImport = env['PATH_INFO'].split('/')[1]
        pyFile = homedir + sep + moduleToImport + '.py'
        fd = None
        try:
            fd = win32file.CreateFile(pyFile, GENERIC_READ, FILE_SHARE_DELETE, None, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0)
        except Exception as exc1:
            fd = None
        if fd is not None:
            # file exists, get mtime
            fd.close()
            mt = os.path.getmtime(pyFile)
        else:
            mt = None
        if mt is not None:
            h = None
            if not pyFile in loadedPages:
                # need a new import
                if homedir not in sys.path:
                    sys.path.insert(0, homedir)
                h = __import__(moduleToImport, globals(), locals(), [])
                # remember
                loadedPages[pyFile] = [mt, h]
            else:
                # retrieve handle to module
                h = loadedPages[pyFile][1]
                if mt != loadedPages[pyFile][0]:
                    # need to reload the page
                    reload(h)
                    loadedPages[pyFile][0] = mt
            if h is not None:
                if 'handler' in h.__dict__:
                    for x in h.handler(env, start_response):
                        yield x
                else:
                    start_response("400 Bad Request", [('Content-Type', 'text/html')])
            else:
                start_response("404 Not Found", [('Content-Type', 'text/html')])
                yield cr()
                yield cr("<html><head><title>Module not found</title>" \
                         "</head><body>")
                yield cr("<h3>404 Not Found</h3>")
                yield cr("<h3>No handle</h3></body></html>")
        else:
            start_response("404 Not Found", [('Content-Type', 'text/html')])
            yield cr()
            yield cr("<html><head><title>Module not found</title>" \
                     "</head><body>")
            yield cr("<h3>404 Not Found</h3>")
            yield cr("<h3>That module (" + moduleToImport + ") was not found.</h3></body></html>")
    else:
        start_response("500 Internal Server Error", [('Content-Type', 'text/html')])
        yield cr()
        yield cr("<html><head><title>Server Error</title>" \
                 "</head><body><h1>Server Error - No ISAPI Found</h1></body></html>")

# def test(environ, start_response):
#     '''Simple app as per PEP 333'''
#     status = '200 OK'
#     start_response(status, [('Content-type', 'text/plain')])
#     return ['Hello world from isapi!']

import isapi_wsgi

# The entry point(s) for the ISAPI extension.
def __ExtensionFactory__():
    return isapi_wsgi.ISAPISimpleHandler(request_handler)

def PostInstall(params, options):
    print "The Extension has been installed"

# Handler for our custom 'status' argument.
def status_handler(options, log, arg):
    "Query the status of the ISAPI?"
    print "Everything seems to be fine..."

if __name__ == '__main__':
    # This logic gets invoked when the script is run from the command-line.
    # In that case, it installs this module as an ISAPI.
    #
    # The API provided by isapi_wsgi for this is a bit confusing. There
    # is an ISAPIParameters object. Within that object there is a
    # VirtualDirs property, which itself is a list of
    # VirtualDirParameters objects, one per vdir. Each vdir has a set
    # of scriptmaps, usually this set of script maps will be a wildcard
    # (*) so that all URLs in the vdir will be served through the ISAPI.
    #
    # To configure a single vdir to serve Python scripts through an
    # ISAPI, create a scriptmap, and stuff it into the
    # VirtualDirParameters object. Specify the vdir path and other
    # things in the VirtualDirParameters object. Stuff that vdp object
    # into a sequence and set it into the ISAPIParameters thing, then
    # call the vaguely named "HandleCommandLine" function, passing that
    # ISAPIParameters thing.
    #
    # Clear as mud?
    #
    # Seriously, this thing could be so much simpler, if it had
    # reasonable defaults and a reasonable model, but I guess it will
    # work as is.
    from isapi.install import *

    # Setup the virtual directories -
    # To serve from root, set Name="/"
    sm = [ScriptMapParams(Extension="*", Flags=0)]
    vdp = VirtualDirParameters(Name="wsgi",  # name of vdir/IIS app
                               Description="ISAPI-WSGI Demo",
                               ScriptMaps=sm,
                               ScriptMapUpdate="replace"
                               )
    params = ISAPIParameters(PostInstall=PostInstall)
    params.VirtualDirs = [vdp]
    cah = {"status": status_handler}
    # from isapi.install, part of pywin32
    HandleCommandLine(params, custom_arg_handlers=cah)
Using this model, requesting http://foo/wsgi/bar will try loading bar.py from the home directory with the WSGI .dll file. If bar.py cannot be found, you get a 404. If bar.py has been updated since the last run, it reloads. If bar cannot be loaded, you get a 500.
bar.py must publicly export a method called handler. That method must be a generator, like so:
import time

def handler(env, start_response):
    start_response("200 OK", [('Content-Type', 'text/html')])
    cr = lambda s='': s + '\n'
    yield cr("<html><head><title>Hello world!</title></head><body>")
    yield cr("<h1>Bargle Bargle Bargle</h1>")
    yield cr("<p>From the handler...</p>")
    yield cr("<p>(bargle)</p>")
    yield cr("<p>The time is now: " + time.asctime() + " </p>")
    yield cr("</body></html>")

__all__ = ['handler']
But as I said, I think GAE is probably a better way to develop Python webapps using Windows.
Put this on top of your script:
import site
site.addsitedir('path/to/your/site-packages')
The same problem you had was solved with these two lines.

How to define more than one server environment in Fabric (Python)?

I need to use Fabric to do some operations on a website that uses one machine for the filesystem and another machine for the database server. I need to handle two hosts. How can I do that?
I have some code but I cannot get the environment definition to work.
The idea is to connect to the remote Filesystem server and get the files and then connect to the remote Database server and get the database schema.
The code that I have for now is something like this:
from __future__ import with_statement
from fabric.api import *
from fabric.contrib.console import confirm

'''
Here I define where my "aid"s file structure is
'''
local_root = '/home/andre/test'   # This is the root folder for the audits
code_location = '/remote_code'    # This is the root folder for the customer code inside each audit

#
# ENVIRONMENTS CONFIGURATIONS
#

'''
Here I configure where the remote file server is
'''
def file_server():
    env.user = 'andre'
    env.hosts = ['localhost']

'''
Here I configure where the database server is
'''
def database_server():
    env.user = 'andre'
    env.hosts = ['192.168.5.1']

#
# START SCRIPT
#

def get_install(remote_location, aid):
    ### I will get the files
    '''
    Here I need to load the file_server() definitions
    '''
    working_folder = local_root + '/%s' % aid             # I will define the working folder
    local('mkdir ' + working_folder)                      # I will create the working folder for this audit
    local('mkdir ' + working_folder + code_location)      # I will create the folder to receive the code
    get(remote_location, working_folder + code_location)  # I will download the code to my machine

    ### I will get the database
    '''
    Here I need to load the database_server() definitions
    '''
    local('dir')  # Just to test
How can I, inside get_install(), load the environment definitions from file_server() and database_server()?
Best Regards,
I don't understand exactly what you are trying to do, but maybe you can split up your get_install function into two functions, one for each server.
Then limit those functions to the correct servers with the fabric.decorators.hosts(*host_list) decorator:
For example, the following will ensure that, barring an override on the command line, my_func will be run on host1, host2 and host3, and with specific users on host1 and host3:
@hosts('user1@host1', 'host2', 'user2@host3')
def my_func():
    pass
(For more info see http://readthedocs.org/docs/fabric/en/1.1.0/api/core/decorators.html#fabric.decorators.hosts)
And you can then call those two functions in one go by defining your get_install method as:
def get_install():
    func1()
    func2()
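To make that concrete, here is a rough, untested sketch of how the fabfile from the question might be split, assuming Fabric 1.3+ so that execute() is available to run each task against its own host list (local_root and code_location are the module-level variables from the question):
from fabric.api import hosts, local, get, execute

local_root = '/home/andre/test'
code_location = '/remote_code'

@hosts('andre@localhost')
def fetch_files(remote_location, aid):
    # runs against the file server
    working_folder = local_root + '/%s' % aid
    local('mkdir ' + working_folder)
    local('mkdir ' + working_folder + code_location)
    get(remote_location, working_folder + code_location)

@hosts('andre@192.168.5.1')
def fetch_database(aid):
    # runs against the database server; replace this with the real schema dump
    local('dir')

def get_install(remote_location, aid):
    # execute() honours the @hosts list attached to each task
    execute(fetch_files, remote_location, aid)
    execute(fetch_database, aid)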
You should be able to do this with fab database_server get_install. Basically, fab [environment] [command] should do what you want.
