In Python I can see what methods and fields an object has with:
print dir(my_object)
What's the equivalent of that in Groovy (assuming it has one)?
Looks particularly nice in Groovy (untested; taken from this link, so code credit should go there):
// Introspection, know all the details about classes :
// List all constructors of a class
String.constructors.each{println it}
// List all interfaces implemented by a class
String.interfaces.each{println it}
// List all methods offered by a class
String.methods.each{println it}
// Just list the methods names
String.methods.name
// Get the fields of an object (with their values)
d = new Date()
d.properties.each{println it}
The general term you are looking for is introspection.
As described here, to find all methods defined on a String object:
"foo".metaClass.methods*.name.sort().unique()
It's not as simple as the Python version; perhaps somebody else can show a better way.
Besides just using the normal Java reflection API, there's:
http://docs.codehaus.org/display/GROOVY/JN3535-Reflection
You can also play games with the metaclasses.
Related
I've seen somewhere that there was a way to change some object functions in python
def decorable(cls):
cls.__lshift__ = lambda objet, fonction: fonction(objet)
return cls
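To make the decorator concrete, here is a sketch of how it could be used; the Box class and the doubling function are made up for illustration:

```python
def decorable(cls):
    # adds a << operator that applies a function to the object
    cls.__lshift__ = lambda obj, fonction: fonction(obj)
    return cls

@decorable
class Box:
    def __init__(self, value):
        self.value = value

b = Box(3)
# b << f is now equivalent to f(b)
result = b << (lambda obj: obj.value * 2)
print(result)  # 6
```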
I wondered if you could do things like in ruby, with the :
number.times
Can we actually change some predefined classes by applying the function above to the class int, for example? If so, any ideas how I could manage to do it? And could you link me to the Python documentation that lists every function (like __lshift__) that can be changed?
Ordinarily, no.
As a rule, Python types defined in native code in CPython can't be monkey-patched with new methods. There are means to do it with direct memory access that changes the C object structures, but that is not considered "clever" or "beautiful", much less usable (check https://github.com/clarete/forbiddenfruit).
That said, for class hierarchies you define in your own packages, this pretty much works: any magic "dunder" method that is set changes the behavior for all objects of that class, across the whole process.
So, you can't do that to Python's "int" - but you can have a
class MyInt(int):
    pass

a = MyInt(10)
MyInt.__rshift__ = lambda self, other: MyInt(str(self) + str(other))
print(a >> 20)
Will result in 1020 being printed.
The Python document that tells about all the magic methods that are used by the language is the Data Model:
https://docs.python.org/3/reference/datamodel.html
Many languages support ad-hoc polymorphism (a.k.a. function overloading) out of the box. However, it seems that Python opted out of it. Still, I can imagine there might be a trick or a library that is able to pull it off in Python. Does anyone know of such a tool?
For example, in Haskell one might use this to generate test data for different types:
-- In some testing library:
class Randomizable a where
genRandom :: a
-- Overload for different types
instance Randomizable String where genRandom = ...
instance Randomizable Int where genRandom = ...
instance Randomizable Bool where genRandom = ...
-- In some client project, we might have a custom type:
instance Randomizable VeryCustomType where genRandom = ...
The beauty of this is that I can extend genRandom for my own custom types without touching the testing library.
How would you achieve something like this in Python?
Python is dynamically typed, so it really doesn't matter if you have an instance of Randomizable or an instance of some other class which has the same methods.
One way to get the appearance of what you want could be this:
types_ = {}

def registerType(dtype, cls):
    types_[dtype] = cls

def RandomizableT(dtype):
    return types_[dtype]
Firstly, yes, I did define a function with a capital letter, but it's meant to act more like a class. For example:
registerType(int, TheLibrary.Randomizable)
registerType(str, MyLibrary.MyStringRandomizable)
Then, later:
type = ... # get whatever type you want to randomize
randomizer = RandomizableT(type)()
print randomizer.getRandom()
A Python function cannot be automatically specialised based on static compile-time typing. Therefore its result can only depend on its arguments received at run-time and on the global (or local) environment, unless the function itself is modifiable in-place and can carry some state.
Your generic function genRandom takes no arguments besides the typing information. Thus in Python it should at least receive the type as an argument. Since built-in classes cannot be modified, the generic function (instance) implementation for such classes should be somehow supplied through the global environment or included into the function itself.
I've found out that since Python 3.4, there is the functools.singledispatch decorator. However, it works only for functions which receive a type instance (an object) as the first argument, so it is not clear how it could be applied in your example. I am also a bit confused by its rationale:
In addition, it is currently a common anti-pattern for Python code to inspect the types of received arguments, in order to decide what to do with the objects.
I understand that anti-pattern is a jargon term for a pattern which is considered undesirable (and does not at all mean the absence of a pattern). The rationale thus claims that inspecting types of arguments is undesirable, and this claim is used to justify introducing a tool that will simplify ... dispatching on the type of an argument. (Incidentally, note that according to PEP 20, "Explicit is better than implicit.")
The "Alternative approaches" section of PEP 443 "Single-dispatch generic functions" however seems worth reading. There are several references to possible solutions, including one to "Five-minute Multimethods in Python" article by Guido van Rossum from 2005.
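For reference, a minimal functools.singledispatch sketch (the function name and the per-type implementations are illustrative); note that it dispatches on the runtime type of the first argument, an instance rather than the type itself:

```python
from functools import singledispatch

@singledispatch
def describe(obj):
    # fallback for unregistered types
    return "unknown"

@describe.register(int)
def _(obj):
    return "an int"

@describe.register(str)
def _(obj):
    return "a string"

print(describe(42))    # an int
print(describe("hi"))  # a string
print(describe(3.5))   # unknown
```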
Does this count as ad hoc polymorphism?
class A:
def __init__(self):
pass
def aFunc(self):
print "In A"
class B:
def __init__(self):
pass
def aFunc(self):
print "In B"
f = A()
f.aFunc()
f = B()
f.aFunc()
output
In A
In B
Another version of polymorphism
from module import aName
If two modules use the same interface, you could import either one and use it in your code.
One example of this is from xml.etree.ElementTree import XMLParser
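As a concrete sketch of that idea: json and pickle both expose dumps/loads, so code written against that shared interface works with either module:

```python
import json
import pickle

def roundtrip(serializer, data):
    # works with any module exposing dumps/loads
    return serializer.loads(serializer.dumps(data))

data = {"a": 1}
print(roundtrip(json, data) == data)    # True
print(roundtrip(pickle, data) == data)  # True
```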
I'm trying to mimic methods.grep from Ruby which simply returns a list of available methods for any object (class or instance) called upon, filtered by regexp pattern passed to grep.
Very handy for investigating objects in an interactive prompt.
def methods_grep(self, pattern):
""" returns list of object's method by a regexp pattern """
from re import search
return [meth_name for meth_name in dir(self) \
if search(pattern, meth_name)]
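For example (restating the helper so this runs on its own), it picks out matching method names on ordinary objects:

```python
from re import search

def methods_grep(self, pattern):
    """returns a list of the object's method names matching a regexp pattern"""
    return [meth_name for meth_name in dir(self)
            if search(pattern, meth_name)]

print(methods_grep([], "^app"))   # ['append']
print(methods_grep("", "strip"))  # ['lstrip', 'rstrip', 'strip']
```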
Because of a Python limitation that's not quite clear to me, it unfortunately can't simply be inserted into object, the ancestor of all classes:
object.mgrep = classmethod(methods_grep)
# TypeError: can't set attributes of built-in/extension type 'object'
Is there some workaround how to inject all classes or do I have to stick with a global function like dir ?
There is a module called forbiddenfruit that enables you to patch built-in objects. It also allows you to reverse the changes. You can find it here https://pypi.python.org/pypi/forbiddenfruit/0.1.1
from forbiddenfruit import curse
curse(object, "methods_grep", classmethod(methods_grep))
Of course, using this in production code is likely a bad idea.
There is no workaround AFAIK. I find it quite annoying that you can't alter built-in classes. Personal opinion though.
One way would be to create a base object and force all your objects to inherit from it.
But I don't see the problem to be honest. You can simply use methods_grep(object, pattern), right? You don't have to insert it anywhere.
I am writing a file parser and I want to be able to determine which "data fields" it would return for me.
I am starting to learn Python and am still used to thinking like a Java programmer, so this question is more about how to design my module than about how specifically to parse the file.
Contextualizing, each line of the file has a fixed number of characters and each information is contained between specific indexes. Eg.:
XX20120101NAME1CITYA
XY20120101NAME2CITYB
In this fictional example, from index 0 to 2 you have one piece of information, from 2 to 10 another, and so on...
Using Java, I would normally create an enum representing the different pieces of information, each storing the start index and the end index. In my parsing class, I would then expose a method designed to accept n different enums. Eg.:
enum FileInformation {
    INFO01(0,2), INFO02(2,10), INFO03(10,15), INFO04(15,20);

    private final int startIndex;
    private final int endIndex;

    FileInformation(int si, int ei) {
        this.startIndex = si;
        this.endIndex = ei;
    }

    public int getStartIndex() { return startIndex; }
    public int getEndIndex() { return endIndex; }
}
public Whatever parse(FileInformation... infos) {
// Here I would iterate through infos[],
// using its start and end index to retrieve only what I need.
}
I know that I probably should not follow the same line of thought in Python, especially because the language wouldn't allow it (Python had no built-in enums before the enum module in 3.4) and because I imagine Python can be much less verbose, but I have no idea of a good design practice to achieve this same result.
It is worth mentioning that I don't want to expose the module's user to unnecessary complexity, or to force them to know the indexes for each piece of information. The module's user should preferably be able to determine which information they want, and in which order.
So, do you have any insights about solving these requirements in an elegant manner?
Thanks in advance
Python already has a builtin type that does what FileInformation does - check out slice.
Here's how your module might look:
# module dataparser.py
INFO01, INFO02, INFO03, INFO04 = (slice(*t) for t in ((0,2), (2,10), (10,15), (15,20)))
def parse(infos, data):
return [data[info] for info in infos]
And how a calling module might use it:
# module dataparser_user.py
import dataparser as dp
data = """\
XX20120101NAME1CITYA
XY20120101NAME2CITYB""".splitlines()
for d in data:
print d, dp.parse((dp.INFO01, dp.INFO03), d)
# or use partial to define a function object that takes your
# subset number of slices
from functools import partial
specific_parse = partial(dp.parse, (dp.INFO01, dp.INFO03))
for d in data:
print d, specific_parse(d)
If you were to implement your own enum analog in Python, I think namedtuple would be the closest thing (seeing as your Java enum has getters but no setters - namedtuples are likewise immutable):
from collections import namedtuple
FileInformation = namedtuple("FileInformation", "start end")
INFO01, INFO02, INFO03, INFO04 = (FileInformation(*t) for t in ((0,2), (2,10), (10,15), (15,20)))
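A short sketch of using these namedtuple fields to cut a line (the sample line follows the question's format):

```python
from collections import namedtuple

FileInformation = namedtuple("FileInformation", "start end")
INFO02 = FileInformation(2, 10)

line = "XX20120101NAME1CITYA"
# unlike a slice object, the namedtuple needs explicit indexing
print(line[INFO02.start:INFO02.end])  # 20120101
```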
For all intents and purposes, an Objective-C method declaration is
simply a C function that prepends two additional parameters (see
“Messaging” in the Objective-C Runtime Programming Guide ).
Thus, the structure of an Objective-C method declaration differs from the structure of a method that uses named or keyword parameters
in a language like Python, as the following Python example
illustrates:
In this Python example, Thing and NeatMode might be omitted or might have different values when called.
def func(a, b, NeatMode=SuperNeat, Thing=DefaultThing):
pass
What's the goal of showing this example in an Objective-C related book?
This is a (poor) example of how Objective-C does not support certain features that other languages (for example, Python) may. The text explains that while Objective-C has "named parameters" of the format
- (void)myMethodWithArgument:(NSObject *)argument andArgument:(NSObject *)another;
Those parameters do not support default values, which Python's do.
The mention of prepending two arguments hints at how message passing in Objective-C works under the hood, which is by prepending each method with a receiver object and a selector. You don't need to know this detail in order to write code in Objective-C, especially at a beginner level, but Apple explains this process here.
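A rough Python analogy for "prepending the receiver": a method is just a function whose first parameter is the object, so calling it through the class with the receiver passed explicitly is equivalent to the usual bound call (loosely mirroring how Objective-C passes self to the underlying C function):

```python
s = "hello"
# the bound call...
print(s.upper())     # HELLO
# ...is equivalent to the function call with the receiver prepended
print(str.upper(s))  # HELLO
```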
def func(a, b, NeatMode=SuperNeat, Thing=DefaultThing):
pass
NeatMode, Thing are optional named parameters
In Objective-C they would be:
- (void) func:(int)a :(int)b NeatMode:(object*)SuperNeat Thing:(object*)DefaultThing
Please read more about this subject:
http://www.diveintopython.net/power_of_introspection/optional_arguments.html
I think the point here is to differentiate between how you are used to receiving parameters in functions and how Objective-C does it. Normally:
public void accumulate(double value, double value1) {
}
And in objective-c:
-(void)accumulateDouble:(double)aDouble withAnotherDouble:(double)anotherDouble{
}