I know Python doesn't support overloading, but I'm not sure how to do the following task in Python without resorting to different method names.
I have two methods which require different sets of parameters:
def get_infobox_from_list(templates):
    for template in templates:
        if get_base_length(template[0]) >= 0:
            return template
    return None
def get_infobox(site, name):
    # first try box template
    infobox = get_infobox_from_list(get_templates(site, "{}/Box".format(name)))
    if infobox is None:
        infobox = get_infobox_from_list(get_templates(site, name))
    return infobox
Both methods do similar things (they get you a template), but their parameters are different. Now I've read that Python usually allows this via default arguments.
That might help sometimes, but the difference here is that the method needs either two parameters (site and name) or just one (templates), and no other combination (site and templates, only name, only site, name and templates, or all three).
In Java I could simply define two overloaded methods, so anyone calling either of them has to match its parameters exactly, passing neither too many nor too few. So my question is: how should this really be done in Python?
You could try using *args:
def get_infobox_from_list(*args):
    if len(args) == 1:
        return _get_infobox_from_list_template(*args)
    else:
        return _get_infobox_from_list_sitename(*args)
Then you can define the two similar sub-functions. But this is pretty awkward, and suggests that two separate methods with different names might be a better fit.
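For what it's worth, a minimal sketch of that separate-names approach, reusing the code from the question (the function names are just illustrative):
def get_infobox_from_templates(templates):
    for template in templates:
        if get_base_length(template[0]) >= 0:
            return template
    return None

def get_infobox_from_site(site, name):
    # first try the box template, then fall back to the plain name
    infobox = get_infobox_from_templates(get_templates(site, "{}/Box".format(name)))
    if infobox is None:
        infobox = get_infobox_from_templates(get_templates(site, name))
    return infobox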
You could use a "wrapper" method (not sure what the correct terminology here is) that passes the parameters along to the correct version of the get_infobox_... function.
def get_infobox(site=None, name=None, templates=None):
    if site is not None and name is not None and templates is None:
        return get_infobox_from_site_and_name(site, name)
    elif templates is not None and site is None and name is None:
        return get_infobox_from_list(templates)
    else:
        raise Exception  # or some more specific type of exception
However, I imagine there is a better way to accomplish what you want to do - I've never found a need to resort to a pattern like this in Python. I can't really suggest a better option without understanding why you want to do this in greater detail, though.
I have tried looking in the documentation and searching Google, but I am unable to find out the significance of the [clazz] at the end of the method. Could someone explain what it means? Thanks.
def get_context_setter(context, clazz):
    return {
        int: context.setFieldToInt,
        datetime: context.setFieldToDatetime
    }[clazz]
setFieldToInt and setFieldToDatetime are methods of the context class.
This function returns one of two things. It returns either context.setFieldToInt or context.setFieldToDatetime. It does so by using a dictionary as what would be a switch statement in other programming languages.
It checks whether clazz is a reference to the class int or a reference to the class datetime, and then returns the appropriate method.
It's identical to this code:
def get_context_setter(context, clazz):
    lookup_table = {int: context.setFieldToInt,
                    datetime: context.setFieldToDatetime}
    context_function = lookup_table[clazz]  # figure out which method to return
    return context_function
Using a dict instead of a switch statement is pretty popular; see Replacements for switch statement in Python?
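As a standalone illustration of the dict-dispatch pattern (the names here are made up):
def describe(value):
    # dict lookup on the value's type replaces an if/elif chain
    handlers = {
        int: lambda v: "integer: {}".format(v),
        str: lambda v: "string of length {}".format(len(v)),
    }
    return handlers[type(value)](value)

print(describe(42))    # integer: 42
print(describe("hi"))  # string of length 2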
More briefly:
The code presented expects the class of some object as a parameter, poorly named clazz.
It then uses that class as a dictionary key.
Essentially, it accepts two different types and dispatches to a method based on the object's type.
class is a keyword in Python, so it can't be used as a parameter name; the author of the code you show chose a strange spelling instead of a longer snake_case name like obj_class.
The parameters really should have been named obj, obj_class
Or
instance, instance_class
Even better, the class really need not be a separate parameter.
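A sketch of that last point: since the object is in hand, its class can be derived with type() instead of being passed separately (context here is assumed to be the same object as in the question):
from datetime import datetime

def get_context_setter(context, obj):
    # derive the class from the object itself
    return {
        int: context.setFieldToInt,
        datetime: context.setFieldToDatetime,
    }[type(obj)]

# get_context_setter(context, 42) would pick context.setFieldToInt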
I have this case in Python (with the Pyramid framework), where I'm checking a condition.
Here is the code:
if some_condition:
    value = self.__parent__.__parent__.__parent__.method()
else:
    value = self.__parent__.__parent__.method()
The question is: is there a more pythonic way (a syntactic-sugar shortcut) to represent __parent__.__parent__... dynamically?
I know that there is Python syntax like this:
value1, value2, value3 = (None,) * 3
Is there something similar and dynamic for my case?
I searched Google, the Python documentation, the Reddit source code, and the OpenStack source code, and I spent two days searching, so I decided to ask here.
If you don't like the parent chain you could always write a helper method to get a node at a given depth. Though this might be less legible.
e.g.
def get_parent(item, depth):
    original_depth = depth
    try:
        while depth:
            item = item.__parent__
            depth -= 1
        return item
    except AttributeError:
        # report the level at which the chain ran out of parents
        raise AttributeError("No parent node found at depth {}".format(
            original_depth - depth + 1))
Usage:
get_parent(self, 3).method()
As far as I know there is no such syntax in Python.
However, you may indeed implement a custom function for obtaining a list of parent resources:
def find_ancestors(resource):
    ancestors = [resource]
    while hasattr(ancestors[-1], '__parent__'):
        ancestors.append(ancestors[-1].__parent__)
    return ancestors
Or a method to iterate them:
def iter_ancestors(resource):
    yield resource
    while hasattr(resource, '__parent__'):
        resource = resource.__parent__
        yield resource
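For example, the iterator can be combined with itertools.islice to grab the ancestor at a given depth (just an illustration; self stands in for the resource from the question):
from itertools import islice

# element 0 is the resource itself, so index 3 is the third parent
value = next(islice(iter_ancestors(self), 3, None)).method()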
Also, I'm not sure this approach is the appropriate one. I think you should take a look at the find_interface(..) method and define appropriate interfaces for your resources so you can locate them. That way your code would look like:
value = find_interface(self, ResourceA if some_condition else ResourceB).method()
UPDATE: The code provided by @Dunes in his answer is another good approach to get ancestors by their index.
I want to use sqlalchemy in a way like this:
email1 = EmailModel(email="user@domain.com", account=AccountModel(name="username"))
email2 = EmailModel(email="otheruser@domain.com", account=AccountModel(name="username"))
Usually SQLAlchemy will create two entries for the account and link each email address to it. If I make the account name unique, SQLAlchemy throws an exception telling me that an entry with the same value already exists. This all makes sense and works as expected.
Now I've figured out a way of my own which allows the code above and creates the account only once, by overriding the __new__ constructor of the AccountModel class:
def __new__(*cls, **kw):
    if len(kw) and "name" in kw:
        x = session.query(cls.__class__).filter(cls[0].name == kw["name"]).first()
        if x: return x
    return object.__new__(*cls, **kw)
This is working perfectly for me. But the question is:
Is this the correct way?
Is there an sqlalchemy way of achieving the same?
I'm using the latest SQLAlchemy 0.8.x release and Python 2.7.x.
Thanks for any help :)
There is exactly this example on the wiki at http://www.sqlalchemy.org/trac/wiki/UsageRecipes/UniqueObject.
Though, more recently I've preferred to use a @classmethod for this instead of redefining the constructor, as explicit is better than implicit, and also simpler:
user.email = Email.as_unique('foo@bar.com')
(I'm actually going to update the wiki now to more fully represent the usage options here.)
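A rough sketch of the general idea (not the exact recipe): here the session is passed in explicitly, whereas the recipe arranges for it internally, and the address column is invented for the example:
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Email(Base):
    __tablename__ = 'email'
    id = Column(Integer, primary_key=True)
    address = Column(String, unique=True)

    @classmethod
    def as_unique(cls, session, address):
        # return the existing row if present, otherwise create and add it
        obj = session.query(cls).filter_by(address=address).first()
        if obj is None:
            obj = cls(address=address)
            session.add(obj)
        return obj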
I think it's not the best way to achieve this, since it creates a dependency of your constructor on the global variable session and also modifies the behaviour of the constructor in an unexpected way (__new__ is expected to return a new object, after all). If, for example, someone uses your classes with two sessions in parallel, the code will fail and they will have to dig into the code to find the error.
I'm not aware of any "sqlalchemy" way of achieving this, but I would suggest creating a function createOrGetAccountModel like:
def createOrGetAccountModel(**kw):
    if len(kw) and "name" in kw:
        x = session.query(AccountModel).filter_by(name=kw["name"]).first()
        if x: return x
    return AccountModel(**kw)
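With the models from the question, usage might then look like this (same assumption about a global session):
account = createOrGetAccountModel(name="username")
email1 = EmailModel(email="user@domain.com", account=account)
email2 = EmailModel(email="otheruser@domain.com", account=account)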
I create a class whose objects are initialized with a bunch of XML code. The class has the ability to extract various parameters out of that XML and to cache them inside the object's state variables. The potential number of these parameters is large and, most probably, the user will not need most of them. That is why I have decided to perform a "lazy" initialization.
In the following test case such a parameter is title. When the user tries to access it for the first time, the getter function parses the XML, properly initializes the state variable, and returns its value:
class MyClass(object):
    def __init__(self, xml=None):
        self.xml = xml
        self.title = None

    def get_title(self):
        if self.__title is None:
            self.__title = self.__title_from_xml()
        return self.__title

    def set_title(self, value):
        self.__title = value

    title = property(get_title, set_title, None, "Citation title")

    def __title_from_xml(self):
        # parse the XML and return the title
        return title
This looks nice and works fine for me. However, I am disturbed a little bit by the fact that the getter function is actually a "setter" one in the sense that it has a very significant side effect on the object. Is this a legitimate concern? If so, how should I address it?
This design pattern is called Lazy initialization and it has legitimate use.
While the getter certainly performs a side-effect, that's not traditionally what one would consider a bad side-effect. Since the getter always returns the same thing (barring any intervening changes in state), it has no user-visible side-effects. This is a typical use for properties, so there's nothing to be concerned about.
Quite some years later, but well: while lazy initialization is fine in itself, I would definitely not postpone XML parsing etc. until someone accesses the object's title. Computed attributes are supposed to behave like plain attributes, and a plain attribute access will never raise (assuming the attribute exists, of course).
FWIW, I had a very similar case in some project I took over, with XML parsing errors happening at the most unexpected places, due to the previous developer using properties the very same way as in the OP's example, and I had to fix it by moving the parsing and validation to instantiation time.
So, use properties for lazy initialization only if and when you know the first access will never ever raise. Actually, never use a property for anything that might raise (at least when getting; setting is a different situation). Otherwise, don't use a property: make the getter an explicit method and clearly document that it might raise this or that.
NB: using a property to cache something is not the problem here; that by itself is fine.
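A minimal sketch of that explicit-method alternative, reusing the names from the OP's example (the parsing body is a placeholder):
class MyClass(object):
    def __init__(self, xml=None):
        self.xml = xml
        self._title = None

    def get_title(self):
        # explicit method rather than a property: the name and docs can
        # warn callers that the first access may raise a parse error
        if self._title is None:
            self._title = self._title_from_xml()
        return self._title

    def _title_from_xml(self):
        # parse self.xml and return the title (placeholder)
        raise NotImplementedError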
We're considering using Python (IronPython, but I don't think that's relevant) to provide a sort of 'macro' support for another application, which controls a piece of equipment.
We'd like to write fairly simple functions in Python, which take a few arguments - these would be things like times and temperatures and positions. Different functions would take different arguments, and the main application would contain a user interface (something like a property grid) which allows the users to provide values for the Python function arguments.
So, for example function1 might take a time and a temperature, and function2 might take a position and a couple of times.
We'd like to be able to dynamically build the user interface from the Python code. Things which are easy to do are to find a list of functions in a module, and (using inspect.getargspec) to get a list of arguments to each function.
However, just a list of argument names is not really enough - ideally we'd like to be able to include some more information about each argument - for instance, its 'type' (a high-level type - time, temperature, etc., not a language-level type), and perhaps a 'friendly name' or description.
So, the question is: what are good 'pythonic' ways of adding this sort of information to a function?
The two possibilities I have thought of are:
Use a strict naming convention for arguments, and then infer stuff about them from their names (fetched using getargspec)
Invent our own docstring meta-language (could be little more than CSV) and use the docstring for our metadata.
Because Python seems pretty popular for building scripting into large apps, I imagine this is a solved problem with some common conventions, but I haven't been able to find them.
Decorators are a good way to add metadata to functions. Add one that takes a list of types to append to a .params property or something:
def takes(*args):
    def _takes(fcn):
        fcn.params = args
        return fcn
    return _takes

@takes("time", "temp", "time")
def do_stuff(start_time, average_temp, stop_time):
    pass
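The main application can then read the metadata back off the function when building the UI, for example:
print(do_stuff.params)  # ('time', 'temp', 'time')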
I would use some kind of decorator:
class TypeProtector(object):
    def __init__(self, fun, types):
        self.fun, self.types = fun, types
    def __call__(self, *args, **kwargs):
        # validate args with self.types
        pass
        # run function
        return self.fun(*args, **kwargs)

def types(*args):
    def decorator(fun):
        # validate args count with fun parameters count
        pass
        # return wrapped function
        return TypeProtector(fun, args)
    return decorator

@types(Time, Temperature)
def myfunction(foo, bar):
    pass

myfunction('21:21', '32C')
print myfunction.types
The 'pythonic' way to do this is function annotations (Python 3):
def DoSomething(critical_temp: "temperature", time: "time"):
    pass
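The annotations can then be read back at runtime from the function's __annotations__ mapping, for example:
print(DoSomething.__annotations__)
# {'critical_temp': 'temperature', 'time': 'time'}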
For Python 2.x, I like to use the docstring:
def my_func(txt):
    """{
        "name": "Justin",
        "age": 15
    }"""
    pass
and it can be automatically assigned to the function object with this snippet:
import json

# iterate over a copy of the names; the loop variables themselves
# would otherwise mutate globals() during iteration
for f in list(globals()):
    if not hasattr(globals()[f], '__call__'):
        continue
    try:
        meta = json.loads(globals()[f].__doc__)
    except:
        continue
    for k, v in meta.items():
        setattr(globals()[f], k, v)
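After the snippet runs, the metadata is available as plain attributes on the function, for example:
print(my_func.name)  # Justin
print(my_func.age)   # 15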