API handling with Robot Framework and a Python class

What is the best way to handle objects with Robot Framework? I am starting to write a Python class to handle API interactions, which I can then use as keywords in Robot Framework (RF). My question is: how does one pass data from one method to another? Do I have to pass the object back to every function to get the data?
In the example below, I call the class and it initializes, but can I reference an instance of the class if I wanted to? Or am I supposed to write every method to handle the entire object I get back from another method? Hopefully this makes sense; I basically want to use Python like I normally would, but inside of RF.
More specifically, is it feasible to distinguish between several instances if I call them all at once?
Test Python file, foo.py:
class foo:
    def initialize(self, api):
        self.api_item = api

    def get_api(self):
        return self.api_item

    def do_something_with_api(self):
        # do something with the API, then return the results
        pass

    def do_something_else_with_api(self):
        # do something with the API, then return the results
        pass
Test Robot file:
*** Settings ***
Library    /path/foo.py

*** Variables ***
${api_url}    https://apiurl.com/

*** Tasks ***
Setup Initialize Settings
    ${session}=    Initialize    ${api_url}

In RF, when a class is loaded as a Library, an instance object is always created for it. Thus, if you have state variables within it, they'll be available to all the class's methods ("keywords") in your RF source.
In other words, in your example all methods will have access to self.api_item (after initialize() is called). By the way, why don't you add a normal constructor __init__() and define the variable there, even with a None value, so it's cleaner?
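For instance, a minimal sketch of that suggestion, reusing the names from the question:
class foo:
    def __init__(self):
        # define the state up front, so every keyword can rely on
        # the attribute existing even before Initialize is called
        self.api_item = None

    def initialize(self, api):
        self.api_item = api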
is it feasible to distinguish between several instances
You can instantiate several instances of the same class ("Library") by importing it multiple times and using the WITH NAME Robot Framework syntax:
*** Settings ***
Library    /path/foo.py    WITH NAME    o1
Library    /path/foo.py    WITH NAME    o2
The "drawback" is you now have to prefix the method call with the instance name - otherwise the framework doesn't know for which object you want to call it:
*** Tasks ***
Setup Initialize Settings
    ${session1}=    o1.Initialize    ${api_url}
    ${session2}=    o2.Initialize    ${api_url2}
And if I understand one of your questions correctly (or if not, take this as general trivia :)), in RF whatever a method/keyword returns - from primitives to complex objects (e.g. class instances) - is assigned to the variable in front of the call.
So if you have a method/keyword down the line that expects a complex object, you can pass it that returned value - the framework will not mangle it in any way; it'll be passing around a normal Python object.
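For example, a minimal sketch (with hypothetical keyword names) of one keyword returning an object and another consuming it unchanged; the corresponding Robot calls are shown in the comments:
class ApiLibrary:
    def create_session(self, url):
        # in Robot:  ${session}=    Create Session    ${api_url}
        return {"url": url, "token": None}

    def get_resource(self, session, path):
        # in Robot:  ${result}=    Get Resource    ${session}    /users
        # the dict returned above arrives here untouched by RF
        return session["url"] + path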


Python abstract module possible?

I've built a module in Python in one single file without using classes. I do this so that using some API module becomes easier. Basically like this:
the_module.py
from some_api_module import some_api_call, another_api_call

def method_one(a, b):
    return some_api_call(a + b)

def method_two(c, d, e):
    return another_api_call(c * d * e)
I now need to build many similar modules, for different API modules, but I want all of them to have the same basic set of methods so that I can import any of these modules and call a function knowing that this function will behave the same in all the modules I built. To ensure they are all the same, I want to use some kind of abstract base module to build upon. I would normally grab the Abstract Base Classes module, but since I don't use classes at all, this doesn't work.
Does anybody know how I can implement an abstract base module on which I can build several other modules without using classes? All tips are welcome!
You are not using classes, but you could easily rewrite your code to do so.
A class is basically a namespace which contains functions and variables, as is a module.
It should not make a huge difference whether you call mymodule.method_one() or mymodule.myclass.method_one().
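For illustration, a minimal sketch (reusing the hypothetical some_api_module from the question) of the same module rewritten as a class namespace:
from some_api_module import some_api_call, another_api_call

class MyAPI:
    # the class is only used as a namespace here, so static methods suffice
    @staticmethod
    def method_one(a, b):
        return some_api_call(a + b)

    @staticmethod
    def method_two(c, d, e):
        return another_api_call(c * d * e)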
In Python there is no such thing as interfaces, which you might know from Java.
The paradigm in Python is duck typing; that means, more or less, that a given module can be said to implement your API if it provides the right methods.
Python does this, for example, to determine what to do if you call myobject[i] on an instance of your class myclass. It looks at whether the class has the method __getitem__, and if it does, it replaces myobject[i] with myobject.__getitem__(i).
You don't have to tell Python that your class supports this kind of access; Python just figures it out from the way you defined your class.
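A minimal illustration of that protocol:
class myclass:
    def __init__(self, items):
        self.items = list(items)

    def __getitem__(self, i):
        # this alone makes myobject[i] work; no interface declaration needed
        return self.items[i]

myobject = myclass(["a", "b", "c"])
print(myobject[1])  # Python turns this into myobject.__getitem__(1) -> "b"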
In the same way, you can determine whether a module implements your API.
You may want to look inside the hidden dictionary mymodule.__dict__ after import mymodule, which contains all the function names of your module and pointers to them. You could then check whether the right functions are present and raise an error otherwise:
import my_module_4

# check if my_module_4 implements the API
if all(func in my_module_4.__dict__ for func in ("method_one", "method_two")):
    print("API implemented")
else:
    print("Warning: Not all API functions found in my_module_4")

Global state in Python module

I am writing a Python wrapper for a C library using the cffi.
The C library has to be initialized and shut down. Also, the cffi needs some place to save the state returned from ffi.dlopen().
I can see two paths here:
Either I wrap this whole stateful business in a class like this:
class wrapper(object):
    def __init__(self):
        self.c = ffi.dlopen("mylibrary")
        self.c.initialize()

    def __del__(self):
        self.c.terminate()
Or I provide two global functions that hide the state in a global variable:
def initialize():
    global __library
    __library = ffi.dlopen("mylibrary")
    __library.initialize()

def terminate():
    global __library  # needed here too, since the name is deleted below
    __library.terminate()
    del __library
The first path is somewhat cumbersome in that it requires the user to always create an object that serves no purpose other than managing the library state. On the other hand, it makes sure that terminate() is actually called every time.
The second path seems to result in a somewhat easier API. However, it exposes some hidden global state, which might be a bad thing. Also, if the user forgets to call terminate(), the C library is not unloaded correctly (which is not a big problem on the C side).
Which one of these paths would be more pythonic?
Exposing a wrapper object only makes sense in Python if the library actually supports something like multiple instances in one application. If it doesn't support that, or it's not really relevant, go for kindall's suggestion: just initialize the library when imported and add an atexit handler for cleanup.
Adding wrappers around a stateless API, or even an API without support for keeping different sets of state, is not really Pythonic and would raise the expectation that different instances have some kind of isolation.
Example code:
import atexit
from cffi import FFI

ffi = FFI()

# Normal library initialization
__library = ffi.dlopen("mylibrary")
__library.initialize()

# Private library cleanup function
def __terminate():
    __library.terminate()

# register the function to be called on clean interpreter termination
atexit.register(__terminate)
This question has some more details about atexit, as does the Python documentation, of course.

Injecting arbitrary code into a Python SimpleXMLRPC Server

In the Python docs for the SimpleXMLRPC server, the following is mentioned:
Warning Enabling the allow_dotted_names option allows intruders to access your module’s global variables and may allow intruders to execute arbitrary code on your machine. Only use this option on a secure, closed network.
Now I have a server with the following code:
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.server import SimpleXMLRPCRequestHandler

# restrict RPC to a particular path (as in the docs example)
class RequestHandler(SimpleXMLRPCRequestHandler):
    rpc_paths = ('/RPC2',)

server = SimpleXMLRPCServer(("localhost", 8000),
                            requestHandler=RequestHandler)
server.register_introspection_functions()
server.register_function(pow)

def adder_function(x, y):
    return x + y
server.register_function(adder_function, 'add')

class MyFuncs:
    def mul(self, x, y):
        return x * y

server.register_instance(MyFuncs(), allow_dotted_names=True)
server.serve_forever()
Please explain how the vulnerability can be exploited to inject arbitrary code onto the server. If my above code is not vulnerable, then give an example of one which can be exploited, and the client code to do so.
MyFuncs().mul is not just a callable function, it is (like all Python functions) a first-class object with its own properties.
Apart from a load of __xxx__ magic methods (which you can't access, because SimpleXMLRPCServer blocks access to anything beginning with _), there are, in Python 2, internal method members: im_class (pointing to the class object), im_self (pointing to the MyFuncs() instance) and im_func (pointing to the function definition of mul). That function object itself has a number of accessible properties, most notably func_globals, which gives access to the variable scope dictionary of the containing file.
So, by calling mul.im_func.func_globals.get, an attacker would be able to read arbitrary global variables you had set in your script, or use update() on the dictionary to alter them. In the above example that's not exploitable, because you have nothing sensitive in the global variables. But that's probably not something you want to rely on always staying true.
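As a sketch of that attack against the Python 2 equivalent of this server (Python 3 renames these attributes, as noted below), and assuming the server script also defines a hypothetical global SECRET, a client could read it like this:
import xmlrpclib  # Python 2 client

proxy = xmlrpclib.ServerProxy("http://localhost:8000/RPC2")
# allow_dotted_names walks mul -> im_func -> func_globals -> .get
print(proxy.mul.im_func.func_globals.get("SECRET"))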
Full 'execute arbitrary code' is pretty unlikely, but you might imagine a writable global codeToExecute variable that gets evaled later, for example, or someone registering a whole module with register_instance, allowing all the modules it imported to be accessible (typical example: os and os.system).
In Python 3 this particular attack is no longer reachable, because the function/method internal properties were renamed to double-underscore versions (e.g. im_func became __func__), where they get blocked. But in general it seems like a bad idea to 'default open' and allow external access to any property on an instance just based on its name - there is no guarantee that no other non-underscore names will ever exist in the future, or that properties won't be added to the accessible built-in types (tuple, dict) that could be exploited in some way.
If you really need nested property access, it would seem safer to come up with a version of SimpleXMLRPCServer that requires something like an @rpc_accessible decorator to define what should be visible.
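A minimal sketch of that idea (the names are illustrative), relying on the documented behaviour that register_instance uses an instance's _dispatch() method when one is defined:
def rpc_accessible(func):
    # mark a method as intentionally exposed over XML-RPC
    func.rpc_accessible = True
    return func

class SafeFuncs:
    @rpc_accessible
    def mul(self, x, y):
        return x * y

    def _dispatch(self, method, params):
        # the server hands every call to _dispatch(); expose only
        # explicitly whitelisted methods, nothing else
        func = getattr(self, method, None)
        if func is None or not getattr(func, "rpc_accessible", False):
            raise Exception('method "%s" is not supported' % method)
        return func(*params)

server.register_instance(SafeFuncs())  # server as created above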

How to work with interactively-defined classes in IPython.parallel?

Context
In an interactive prototyping session on the notebook connected to a cluster, I would like to define a class that is both available in the client __main__ session and interactively updated on the cluster engine nodes, to be able to move instances of that class around by passing such instances as arguments to a LoadBalanced view. The following demonstrates the typical user session:
First setup the parallel clustering environment:
>>> from IPython.parallel import Client
>>> rc = Client()
>>> lview = rc.load_balanced_view()
>>> rc[:]
<DirectView [0, 1, 2]>
In a notebook cell let's define the code snippet of the component we are interactively editing:
>>> class MyClass(object):
...     def __init__(self, parameter):
...         self.parameter = parameter
...
...     def update_something(self, some_data):
...         # do something smart here with some_data & internal state
...         pass
...
...     def compute_something(self, other_data):
...         # do something smart here with other_data & internal state
...         return something
...
In the next cell, let's create a script that builds instances of this class and then use the load balanced view of the cluster environment to evaluate our component on a wide range of input parameters:
>>> def process(obj, some_data, other_data):
...     obj.update_something(some_data)
...     return obj.compute_something(other_data)
...
>>> tasks = []
>>> some_instances = [MyClass(i) for i in range(10)]
>>> for obj in some_instances:
...     for some_data in data_source_1:
...         for other_data in data_source_2:
...             ar = lview.apply_async(process, obj, some_data, other_data)
...             tasks.append(ar)
...
>>> # wait for computation to end
>>> results = [ar.get() for ar in tasks]
Problem
That will obviously not work, as the engines of the load balanced view will be unable to unpickle the instances passed as the first argument to the process function. The process function definition itself is passed successfully, as I assume that apply_async does bytecode introspection to pickle it (by accessing the .code attribute of the function) and then just does a simple pickle of the remaining arguments.
Possible solutions (that don't work for me)
One alternative solution would be to use the %%px cell magic on the cell holding the definition of the class MyClass. However, that would prevent me from building the class instances in the client script that also does the scheduling. I would need to copy and paste the cell content into another cell without the %%px magic (or execute the cell twice, once with the magic and once without), but this is tedious when I am still editing the methods of the class in an iterative development & evaluation setting.
An alternative solution would be to embed the class definition inside the process function, but I find this impractical, as I would like to reuse that class definition in other functions later in my notebook.
Alternatively, I could just stop using a class and only work with functions, which can be shipped over to the engines by passing them as the first argument to apply_async. However, I don't like that either, as I would like to prototype my code in an object-oriented way for later extraction from the notebook and inclusion of the resulting class in an object-oriented library, with the notebook session serving as a collaborative prototyping tool for exchanging ideas between developers using the http://nbviewer.ipython.org publisher.
The final alternative would be to write my class in a Python module in a file on the filesystem and ship that file to the engines' PYTHONPATH, using NFS for instance. That works, but it prevents me from working only in the notebook environment, which defeats the whole purpose of interactive prototyping in the notebook.
So basically, is there a way to define a class interactively and then ship its definition around to the engines?
It should be possible to pickle a class definition by using inspect.getsource in the client, then sending the source to the engines and using the eval builtin, but unfortunately source inspection does not work for classes defined inside the DummyMod built-in module:
TypeError: <IPython.core.interactiveshell.DummyMod object at 0x10c2c4e50> is a built-in class
Is there a way to inspect the bytecode of a class definition instead?
Or is it possible to use the %%px magic so as to both execute the content of the cell locally on the client and on each engine?
Thanks for the detailed question (and pinging me on Twitter).
First, maybe it should be considered a bug that you can't just push classes, because the simple solution should be:
rc[:]['MyClass'] = MyClass
but pickling interactively defined classes results only in a reference ('\x80\x02c__main__\nMyClass\nq\x01.'), giving you the DummyMod AttributeError.
This can probably be fixed internally in IPython's serialization.
On to an actual working solution, though.
Adding local execution to %%px is super easy, just:
def pxlocal(line, cell):
    ip = get_ipython()
    ip.run_cell_magic("px", line, cell)
    ip.run_cell(cell)

get_ipython().register_magic_function(pxlocal, "cell")
And now you have a %%pxlocal magic that runs %%px in addition to running the cell locally.
Then all you have to do is:
%%pxlocal
class MyClass(object):
    # etc
to define your class everywhere.
I will add a --local flag to %%px, so this extra step isn't necessary.
A complete, working example notebook.
I think you could use "dill" to pickle the interactively defined class, and not have to worry about %%pxlocal magic, using DummyMod, and faking of namespaces.
To pickle a class interactively, just do import dill and then build your class as you did at first. You should then be able to send it across any sane map or apply_async function.
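For instance, a minimal round-trip sketch with dill (independent of IPython.parallel, just to show that an interactively defined class survives serialization, unlike with plain pickle):
import dill

class MyClass(object):
    def __init__(self, parameter):
        self.parameter = parameter

# dill serializes the class definition itself, not just a
# reference to __main__.MyClass the way plain pickle does
payload = dill.dumps(MyClass(42))
obj = dill.loads(payload)
print(obj.parameter)  # 42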

function for getting instance of class VS calling class directly

http://doc.qt.io/qt-5/qdesktopwidget.html#obtaining-a-desktop-widget
The QApplication::desktop() function is used to get an instance of QDesktopWidget.
I don't understand why you should use QApplication::desktop() when you can just directly instantiate the QDesktopWidget class.
What is the difference between
desktop = QApplication.desktop()
and
desktop = QDesktopWidget()
They look the same, but Windows (the OS) throws a warning on exit when using QDesktopWidget(), so there should be some difference. How do they differ?
They may look the same, but they are not. On the C++ side, the static desktop() function uses a singleton pattern: there is only one desktop, and it is represented by a static variable which may (or may not) be created on request. QDesktopWidget() is a constructor, which would normally not be accessible to the "outside world", in order to guarantee the uniqueness of the singleton.
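A minimal Python sketch of that accessor pattern (illustrative only, not Qt's actual implementation):
class Desktop:
    _instance = None  # the single shared instance

    @classmethod
    def desktop(cls):
        # create the instance on first request, then always return it
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

d1 = Desktop.desktop()
d2 = Desktop.desktop()
print(d1 is d2)  # True: both calls return the same object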
