How can I call multiple methods of an object in a single statement in Python? In PHP I can do this:
class Client {
    public function ac1() { ... }
    public function ac2() { ... }
}
$client = new Client();
$client->ac1()->ac2(); // <-- I want to do it here
How would you do it in Python?
The PHP example you gave does not run the methods at the same time; it just combines the two calls in a single statement. This pattern is called a fluent interface.
You can do the same in Python as long as a method you're calling returns the instance of the object it was called on.
I.e.:
class Client:
    def ac1(self):
        ...
        return self

    def ac2(self):
        ...
        return self

c = Client()
c.ac1().ac2()
Note that ac1() gets executed first, returns the (possibly modified in-place) instance of the object and then ac2() gets executed on that returned instance.
Some major Python libraries are moving towards this type of interface; a good example is pandas. It has many methods that allow in-place operations, but the package is moving towards deprecating those in favour of chainable operations that return a modified copy of the original.
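For illustration, here is a minimal sketch of that chainable style in pandas (the DataFrame contents are made up for the example):

import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})

# Each method returns a new DataFrame rather than mutating in place,
# so the calls can be chained fluently:
result = (
    df.assign(c=lambda d: d["a"] + d["b"])  # add a derived column
      .query("c > 5")                       # keep rows where c > 5
      .rename(columns={"c": "total"})       # rename the new column
)
print(result)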
I've seen somewhere that there is a way to change some object methods in Python:
def decorable(cls):
    cls.__lshift__ = lambda objet, fonction: fonction(objet)
    return cls
I wondered if you could do things like in Ruby, e.g.:
number.times
Can we actually change some predefined classes by applying the function above to the class int, for example? If so, any idea how I could manage to do it? And could you link me to the Python documentation listing every function (like __lshift__) that can be changed?
Ordinarily not.
As a rule, Python types defined in native code (as the built-in types are in CPython) can't be monkey-patched to gain new methods. There are ways to do it with direct memory access that changes the C object structures, but that is not considered "clever" or "beautiful", much less usable (see https://github.com/clarete/forbiddenfruit).
That said, for class hierarchies you define in your own packages, it pretty much works: any magic "dunder" method that is set on a class changes the behavior for all objects of that class, across the whole process.
So, you can't do that to Python's int, but you can write:
class MyInt(int):
    pass

a = MyInt(10)
MyInt.__rshift__ = lambda self, other: MyInt(str(self) + str(other))
print(a >> 20)
This will print 1020.
The Python document that describes all the magic methods used by the language is the Data Model:
https://docs.python.org/3/reference/datamodel.html
Many languages support ad-hoc polymorphism (a.k.a. function overloading) out of the box. However, it seems that Python opted out of it. Still, I can imagine there might be a trick or a library that is able to pull it off in Python. Does anyone know of such a tool?
For example, in Haskell one might use this to generate test data for different types:
-- In some testing library:
class Randomizable a where
genRandom :: a
-- Overload for different types
instance Randomizable String where genRandom = ...
instance Randomizable Int where genRandom = ...
instance Randomizable Bool where genRandom = ...
-- In some client project, we might have a custom type:
instance Randomizable VeryCustomType where genRandom = ...
The beauty of this is that I can extend genRandom for my own custom types without touching the testing library.
How would you achieve something like this in Python?
Python is dynamically typed, so it really doesn't matter whether you have an instance of Randomizable or an instance of some other class that has the same methods.
One way to get the appearance of what you want could be this:
types_ = {}

def registerType(dtype, cls):
    types_[dtype] = cls

def RandomizableT(dtype):
    return types_[dtype]
Firstly, yes, I did define a function with a capital letter, but it's meant to act more like a class. For example:
registerType(int, TheLibrary.Randomizable)
registerType(str, MyLibrary.MyStringRandomizable)
Then, later:
dtype = ...  # get whatever type you want to randomize
randomizer = RandomizableT(dtype)()
print(randomizer.getRandom())
A Python function cannot be automatically specialised based on static compile-time typing. Therefore its result can only depend on its arguments received at run-time and on the global (or local) environment, unless the function itself is modifiable in-place and can carry some state.
Your generic function genRandom takes no arguments besides the typing information. Thus in Python it should at least receive the type as an argument. Since built-in classes cannot be modified, the generic function (instance) implementation for such classes should be somehow supplied through the global environment or included into the function itself.
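To make that concrete, here is a minimal sketch (all names are invented for the example) of supplying per-type implementations through a global registry that client code can extend, much like the Haskell instances above:

import random
import string

# Registry mapping a type to its genRandom implementation.
_randomizers = {}

def register_randomizer(dtype):
    def decorator(func):
        _randomizers[dtype] = func
        return func
    return decorator

def gen_random(dtype):
    # The type must be passed explicitly, since Python cannot
    # infer it from a static return type.
    return _randomizers[dtype]()

@register_randomizer(int)
def _random_int():
    return random.randint(0, 100)

@register_randomizer(str)
def _random_str():
    return "".join(random.choice(string.ascii_letters) for _ in range(8))

print(gen_random(int))  # e.g. 42
print(gen_random(str))  # e.g. 'qWeRtYui'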
I've found out that since Python 3.4 there is the functools.singledispatch decorator. However, it works only for functions which receive a type instance (an object) as the first argument, so it is not clear how it could be applied in your example. I am also a bit confused by its rationale:
In addition, it is currently a common anti-pattern for Python code to inspect the types of received arguments, in order to decide what to do with the objects.
I understand that anti-pattern is a jargon term for a pattern which is considered undesirable (and does not at all mean the absence of a pattern). The rationale thus claims that inspecting types of arguments is undesirable, and this claim is used to justify introducing a tool that will simplify ... dispatching on the type of an argument. (Incidentally, note that according to PEP 20, "Explicit is better than implicit.")
The "Alternative approaches" section of PEP 443 "Single-dispatch generic functions" however seems worth reading. There are several references to possible solutions, including one to "Five-minute Multimethods in Python" article by Guido van Rossum from 2005.
Does this count as ad hoc polymorphism?
class A:
    def __init__(self):
        pass

    def aFunc(self):
        print("In A")

class B:
    def __init__(self):
        pass

    def aFunc(self):
        print("In B")

f = A()
f.aFunc()
f = B()
f.aFunc()
output
In A
In B
Another version of polymorphism
from module import aName
If two modules use the same interface, you could import either one and use it in your code.
One example of this is from xml.etree.ElementTree import XMLParser
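A minimal sketch of that idea (simplejson is a real third-party drop-in for the standard json module, but any pair of modules sharing an interface works the same way):

# If simplejson is installed, use it; otherwise fall back to the
# standard library. The rest of the code is unaffected either way.
try:
    import simplejson as json
except ImportError:
    import json

print(json.dumps({"a": 1}))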
Context
While prototyping interactively in a notebook connected to a cluster, I would like to define a class that is both available in the client __main__ session and interactively updated on the cluster engine nodes, so that instances of that class can be moved around by passing them as arguments to a load-balanced view. The following demonstrates the typical user session:
First setup the parallel clustering environment:
>>> from IPython.parallel import Client
>>> rc = Client()
>>> lview = rc.load_balanced_view()
>>> rc[:]
<DirectView [0, 1, 2]>
In a notebook cell let's define the code snippet of the component we are interactively editing:
>>> class MyClass(object):
...     def __init__(self, parameter):
...         self.parameter = parameter
...
...     def update_something(self, some_data):
...         # do something smart here with some_data & internal state
...
...     def compute_something(self, other_data):
...         # do something smart here with other_data & internal state
...         return something
...
In the next cell, let's create a script that builds instances of this class and then use the load balanced view of the cluster environment to evaluate our component on a wide range of input parameters:
>>> def process(obj, some_data, other_data):
...     obj.update_something(some_data)
...     return obj.compute_something(other_data)
...
>>> tasks = []
>>> some_instances = [MyClass(i) for i in range(10)]
>>> for obj in some_instances:
...     for some_data in data_source_1:
...         for other_data in data_source_2:
...             ar = lview.apply_async(process, obj, some_data, other_data)
...             tasks.append(ar)
...
>>> # wait for computation to end
>>> results = [ar.get() for ar in tasks]
Problem
That will obviously not work, as the engines of the load-balanced view will be unable to unpickle the instances passed as the first argument to the process function. The process function definition itself is passed successfully; I assume that apply_async does bytecode introspection to pickle it (by accessing the __code__ attribute of the function) and then just does a simple pickle of the remaining arguments.
Possible solutions (that don't work for me)
One alternative solution would be to use the %%px cell magic on the cell holding the definition of the class MyClass. However, that would prevent me from building the class instances in the client script that also does the scheduling. I would need to copy and paste the cell content into another cell without the %%px magic (or execute the cell twice, once with the magic and once without), but this is tedious while I am still editing the methods of the class in an iterative development and evaluation setting.
An alternative solution would be to embed the class definition inside the process function itself, but I find this impractical, as I would like to reuse that class definition in other functions later in my notebook.
Alternatively, I could just stop using a class and work only with functions, which can be shipped over to the engines by passing them as the first argument to apply_async. However, I don't like that either, as I would like to prototype my code in an object-oriented way for later extraction from the notebook and inclusion of the resulting class in an object-oriented library. The notebook session serves as a collaborative prototyping tool for exchanging ideas between developers via the http://nbviewer.ipython.org publisher.
The final alternative would be to write my class in a Python module in a file on the filesystem and ship that file to the engines' PYTHONPATH, using NFS for instance. That works, but it prevents me from working only in the notebook environment, which defeats the whole purpose of interactive prototyping in the notebook.
So basically, is there a way to define a class interactively and then ship its definition around to the engines?
It should be possible to grab a class definition with inspect.getsource in the client, send the source to the engines, and rebuild it there with the exec builtin, but unfortunately source inspection does not work for classes defined inside the DummyMod built-in module:
TypeError: <IPython.core.interactiveshell.DummyMod object at 0x10c2c4e50> is a built-in class
Is there a way to inspect the bytecode of a class definition instead?
Or is it possible to use the %%px magic so as to both execute the content of the cell locally on the client and on each engine?
Thanks for the detailed question (and pinging me on Twitter).
First, maybe it should be considered a bug that you can't just push classes,
because the simple solution should be
rc[:]['MyClass'] = MyClass
but pickling interactively defined classes results only in a reference ('\x80\x02c__main__\nMyClass\nq\x01.'), which gives you the DummyMod AttributeError.
This can probably be fixed internally in IPython's serialization.
On to an actual working solution, though.
Adding local execution to %%px is super easy, just:
def pxlocal(line, cell):
    ip = get_ipython()
    ip.run_cell_magic("px", line, cell)
    ip.run_cell(cell)

get_ipython().register_magic_function(pxlocal, "cell")
And now you have a %%pxlocal magic that runs %%px in addition to running the cell locally.
Then all you have to do is:
%%pxlocal
class MyClass(object):
    # etc
to define your class everywhere.
I will add a --local flag to %%px, so this extra step isn't necessary.
A complete, working example notebook.
I think you could use dill to pickle the interactively defined class, and not have to worry about the %%pxlocal magic, DummyMod, or faking namespaces.
To pickle a class interactively, just do import dill and then build your class as you did before. You should then be able to send it across any sane map or apply_async function.
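A minimal sketch of the idea, using a toy class in place of the original MyClass:

import dill

class MyClass(object):
    def __init__(self, parameter):
        self.parameter = parameter

# dill can serialize the class definition itself, not just a reference
# to __main__.MyClass the way the standard pickle module does.
payload = dill.dumps(MyClass)
RebuiltClass = dill.loads(payload)
print(RebuiltClass(42).parameter)  # 42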
http://doc.qt.io/qt-5/qdesktopwidget.html#obtaining-a-desktop-widget
The QApplication::desktop() function is used to get an instance of QDesktopWidget.
I don't understand why you should use QApplication::desktop() when you can just directly instantiate the QDesktopWidget class.
What is the difference between
desktop = QApplication.desktop()
and
desktop = QDesktopWidget()
They look the same, but Windows (the OS) throws a warning on exit when QDesktopWidget() is used, so there must be some difference. How do they differ?
They may look the same, but they are not. On the C++ side, the static desktop() function uses a singleton pattern: there is only one desktop, and it is represented by a static variable which may (or may not) be created on request. QDesktopWidget() is a constructor, which should not be accessible to the "outside world", in order to guarantee the uniqueness of the singleton.
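For illustration, a minimal PyQt5 sketch of the preferred access pattern (assuming PyQt5; the general point is Qt's, not specific to Python):

import sys
from PyQt5.QtWidgets import QApplication

app = QApplication(sys.argv)

# Ask Qt for its single, internally owned QDesktopWidget instead of
# constructing a second one that Qt does not know it must clean up.
desktop = QApplication.desktop()
print(desktop.screenGeometry())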
Hello,
I have some trouble understanding Python reference counting.
What I want to do is return a tuple from c++ to python using the ctypes module.
C++:
PyObject* foo(...)
{
    ...
    return Py_BuildValue("(s, s)", value1, value2);
}
Python:
pointer = c_foo(...) # c_foo loaded with ctypes
obj = cast(pointer, py_object).value
I was not sure about the ref count of obj, so I tried sys.getrefcount()
and got 3. I think it should be 2 (the getrefcount function makes one reference itself).
Now I can't make Py_DECREF() before the return in C++ because the object gets deleted. Can I decrease the ref count in python?
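As a side note on reading those numbers, here is a minimal pure-Python sketch showing that sys.getrefcount reports one extra, temporary reference of its own:

import sys

obj = object()
# getrefcount's own argument temporarily holds one reference,
# so the reported count is always one higher than you might expect.
print(sys.getrefcount(obj))
alias = obj  # a second name adds one more reference
print(sys.getrefcount(obj))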
edit
What happens to the ref count when the cast function is called? I'm not really sure from the documentation below. http://docs.python.org/library/ctypes.html#ctypes.cast
ctypes.cast(obj, type)
This function is similar to the cast operator in C. It returns a new instance of type which points to the same memory block as obj. type must be a pointer type, and obj must be an object that can be interpreted as a pointer.
On further research I found out that one can specify the return type of the function.
http://docs.python.org/library/ctypes.html#callback-functions
This makes the cast obsolete and the ref count is no longer a problem.
import ctypes

clib = ctypes.cdll.LoadLibrary('some.so')
c_foo = clib.c_foo
c_foo.restype = ctypes.py_object
As no additional answers were given I accept my new solution as the answer.
Your C++ code seems to be a classic wrapper using the official C API, which is a bit unusual, since ctypes is normally used to work with plain C types in Python (like int, float, etc.).
Personally I use the C API "alone" (without ctypes), but in my experience you don't have to worry about the reference counter in this case, since you are returning a native Python type built with Py_BuildValue. When a function returns an object, ownership of the returned object is given to the calling function.
You have to worry about Py_XINCREF/Py_XDECREF (better than Py_INCREF/Py_DECREF because they accept NULL pointers) only when you want to change the ownership of an object:
For example, suppose you have created a Python wrapper of a map (let's call the wrapped object py_map). The elements are of a C++ class Foo, and you have created another Python wrapper for them (let's call it py_Foo). If you write a function that wraps the [] operator, it returns a py_Foo object to Python:
F = py_map["key"]
but since ownership is given to the calling function, the destructor will be called when you delete F, and the C++ map will then contain a pointer to a deallocated object!
The solution is to write, in the C++ wrapper of []:
...
PyObject* result;   // my py_Foo object
Py_XINCREF(result); // add a reference, so the caller gets its own owned reference
return result;
}
You should take a look at the notion of borrowed and owned references in Python; it is essential for understanding the reference counter properly.