How to call Python functions dynamically [duplicate] - python

This question already has answers here:
Calling a function of a module by using its name (a string)
(18 answers)
Closed 4 months ago.
I have this code:
fields = ['name', 'email']

def clean_name():
    pass

def clean_email():
    pass

How can I call clean_name() and clean_email() dynamically?
For example:
for field in fields:
    clean_{field}()
I used the curly brackets because that's how I would do it in PHP, but it obviously doesn't work here.
How to do this with Python?

If you don't want to use globals or vars, and don't want to make a separate module and/or class to encapsulate the functions you want to call dynamically, you can call them as attributes of the current module:
import sys
...
getattr(sys.modules[__name__], "clean_%s" % fieldname)()
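For completeness, a minimal self-contained sketch of this pattern (the clean_* functions are placeholders from the question):
import sys

def clean_name():
    print("cleaning name")

def clean_email():
    print("cleaning email")

fields = ['name', 'email']
this_module = sys.modules[__name__]  # the module currently executing

for field in fields:
    # look up clean_<field> as an attribute of this module, then call it
    getattr(this_module, "clean_%s" % field)()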

Using globals() is a very, very bad way of doing this. You should be doing it this way:
fields = {'name': clean_name, 'email': clean_email}

for key in fields:
    fields[key]()
Map your functions to values in a dictionary.
Using vars() is wrong too.

It would be better to have a dictionary of such functions than to look in globals().
The usual approach is to write a class with such functions:
class Cleaner(object):
    def clean_name(self):
        pass
and then use getattr to get access to them:
cleaner = Cleaner()
for f in fields:
    getattr(cleaner, 'clean_%s' % f)()
You could even move further and do something like this:
class Cleaner(object):
    def __init__(self, fields):
        self.fields = fields

    def clean(self):
        for f in self.fields:
            getattr(self, 'clean_%s' % f)()
Then inherit from it and declare your clean_<name> methods in a subclass:
cleaner = Cleaner(['one', 'two'])
cleaner.clean()
This can be extended even further to make it cleaner. The first step would probably be adding a hasattr() check to verify that such a method exists in your class.
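For illustration, a sketch of that hasattr() guard inside clean() might look like this:
class Cleaner(object):
    def __init__(self, fields):
        self.fields = fields

    def clean(self):
        for f in self.fields:
            method_name = 'clean_%s' % f
            # skip fields that have no matching clean_<field> method
            if hasattr(self, method_name):
                getattr(self, method_name)()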

I have come across this problem twice now, and finally came up with a safe and not ugly solution (in my humble opinion).
RECAP of previous answers:
globals is the hacky, fast & easy method, but you have to be super consistent with your function names, and it can break at runtime if variables get overwritten. Also it's un-pythonic, unsafe, unethical, yadda yadda...
Dictionaries (i.e. string-to-function maps) are safer and easy to use... but it annoys me to no end that I have to spread dictionary assignments across my file, which are easy to lose track of.
Decorators made the dictionary solution come together for me. Decorators are a pretty way to attach side-effects & transformations to a function definition.
Example time
fields = ['name', 'email', 'address']

# set up our function dictionary
cleaners = {}

# this is a parameterized decorator
def add_cleaner(key):
    # this is the actual decorator
    def _add_cleaner(func):
        cleaners[key] = func
        return func
    return _add_cleaner
Whenever you define a cleaner function, add this to the declaration:
@add_cleaner('email')
def email_cleaner(email):
    # do stuff here
    return result
The functions are added to the dictionary as soon as their definition is parsed and can be called like this:
cleaned_email = cleaners['email'](some_email)
Alternative proposed by PeterSchorn:
def add_cleaner(func):
    cleaners[func.__name__] = func
    return func

@add_cleaner
def email():
    # clean email
    pass
This uses the function name of the cleaner method as its dictionary key.
It is more concise, though I think the method names become a little awkward.
Pick your favorite.
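Either way, a possible driver loop over the registry might look like this; raw_values is a made-up stand-in for whatever data you are cleaning, and fields without a registered cleaner are passed through unchanged:
raw_values = {'name': ' Alice ', 'email': 'ALICE@EXAMPLE.COM', 'address': '1 Main St'}

cleaned = {}
for field in fields:
    cleaner = cleaners.get(field)
    # fall back to the raw value if no cleaner was registered for this field
    cleaned[field] = cleaner(raw_values[field]) if cleaner else raw_values[field]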

globals() will give you a dict of the global namespace. From this you can get the function you want:
f = globals()["clean_%s" % field]
Then call it:
f()
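Put together in a loop, a small sketch (using dict.get so a field without a matching clean_<field> function is simply skipped):
for field in fields:
    func = globals().get("clean_%s" % field)
    if func is not None:
        func()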

Here's another way:
myscript.py:
def f1():
    print('f1')

def f2():
    print('f2')

def f3():
    print('f3')
test.py:
import myscript

for i in range(1, 4):
    getattr(myscript, 'f%d' % i)()

I had a requirement to call different methods of a class from a method of the same class, based on a list of method names passed as input (for running periodic tasks in FastAPI). For executing methods of Python classes, I have expanded on the answer provided by @khachik. Here is how you can achieve it from inside or outside of the class:
>>> class Math:
...     def add(self, x, y):
...         return x + y
...     def test_add(self):
...         print(getattr(self, "add")(2, 3))
...
>>> m = Math()
>>> m.test_add()
5
>>> getattr(m, "add")(2, 3)
5
Note how you can do it from within the class using self, like this:
getattr(self, "add")(2,3)
And from outside the class using an object of the class like this:
m = Math()
getattr(m, "add")(2,3)

Here's another way: define the functions then define a dict with the names as keys:
>>> z = [clean_email, clean_name]
>>> z = {"email": clean_email, "name": clean_name}
>>> z['email']()
>>> z['name']()
then you loop over the names as keys.
Or how about this one? Construct a string and use eval():
>>> field = "email"
>>> f = "clean_" + field + "()"
>>> eval(f)
then just loop and construct the strings for eval.
Note that any method that requires constructing a string for evaluation is regarded as kludgy.

for field in fields:
    vars()['clean_' + field]()

In case you have a lot of functions and a varying number of parameters:
class Cleaner:
    @classmethod
    def clean(cls, type, *args, **kwargs):
        getattr(cls, f"_clean_{type}")(*args, **kwargs)

    @classmethod
    def _clean_email(cls, *args, **kwargs):
        print("invoked _clean_email function")

    @classmethod
    def _clean_name(cls, *args, **kwargs):
        print("invoked _clean_name function")

for type in ["email", "name"]:
    Cleaner.clean(type)
Output:
invoked _clean_email function
invoked _clean_name function

I would use a dictionary which maps field names to cleaning functions. If some fields don't have a corresponding cleaning function, the for loop handling them can be kept simple by providing some sort of default function for those cases. Here's what I mean:
fields = ['name', 'email', 'subject']

def clean_name():
    pass

def clean_email():
    pass

# (one-time) field to cleaning-function map construction
def get_clean_func(field):
    try:
        return eval('clean_' + field)
    except NameError:
        return lambda: None  # do nothing

clean = dict((field, get_clean_func(field)) for field in fields)

# sample usage
for field in fields:
    clean[field]()
The code above constructs the function dictionary dynamically by determining whether a corresponding function named clean_<field> exists for each name in the fields list. You would likely only have to execute it once, since the result stays the same as long as the field list and the available cleaning functions don't change.
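As a side note, the same one-time map can be built without eval by looking the names up in globals(); a minimal sketch under the same assumptions:
clean = {
    field: globals().get('clean_' + field, lambda: None)  # default: do nothing
    for field in fields
}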

Related

How to access a dictionary value from within the same dictionary in Python? [duplicate]

I'm new to Python, and am sort of surprised I cannot do this.
dictionary = {
    'a': '123',
    'b': dictionary['a'] + '456'
}
I'm wondering what the Pythonic way to correctly do this in my script, because I feel like I'm not the only one that has tried to do this.
EDIT: Enough people were wondering what I'm doing with this, so here are more details for my use cases. Let's say I want to keep dictionary objects to hold file system paths. The paths are relative to other values in the dictionary. For example, this is what one of my dictionaries may look like:
dictionary = {
    'user': 'sholsapp',
    'home': '/home/' + dictionary['user']
}
It is important that at any point in time I may change dictionary['user'] and have all of the dictionary's values reflect the change. Again, this is an example of what I'm using it for, so I hope that it conveys my goal.
From my own research I think I will need to implement a class to do this.
No fear of creating new classes -
You can take advantage of Python's string formatting capabilities
and simply do:
class MyDict(dict):
    def __getitem__(self, item):
        return dict.__getitem__(self, item) % self

dictionary = MyDict({
    'user': 'gnucom',
    'home': '/home/%(user)s',
    'bin': '%(home)s/bin'
})

print(dictionary["home"])
print(dictionary["bin"])
The nearest I came up with without writing a class:
dictionary = {
    'user': 'gnucom',
    'home': lambda: '/home/' + dictionary['user']
}

print(dictionary['home']())
dictionary['user'] = 'tony'
print(dictionary['home']())
>>> dictionary = {
...     'a': '123'
... }
>>> dictionary['b'] = dictionary['a'] + '456'
>>> dictionary
{'a': '123', 'b': '123456'}
This works fine. In your version, dictionary hasn't been defined yet at the point you try to use it inside the literal (the whole literal has to be evaluated before the name is bound).
But be careful: this assigns to the key 'b' the value referenced by the key 'a' at the time of assignment, and it is not going to repeat the lookup every time. If that is what you are looking for, it's possible, but it takes more work.
What you're describing in your edit is how an INI config file works. Python has a built-in module for this, configparser (called ConfigParser in Python 2), which should work for what you're describing.
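Illustratively, in Python 3 the module is configparser, and its default BasicInterpolation expands %(...)s references between keys in the same section (the section and values here are made up from the question's example):
from configparser import ConfigParser

config = ConfigParser()  # BasicInterpolation is the default
config.read_string("""
[paths]
user = sholsapp
home = /home/%(user)s
""")

print(config['paths']['home'])  # /home/sholsapp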
This is an interesting problem. It seems like Greg has a good solution. But that's no fun ;)
jsbueno has a very elegant solution, but that only applies to strings (as you requested).
The trick to a 'general' self referential dictionary is to use a surrogate object. It takes a few (understatement) lines of code to pull off, but the usage is about what you want:
S = SurrogateDict(AdditionSurrogateDictEntry)
d = S.resolve({'user': 'gnucom',
               'home': '/home/' + S['user'],
               'config': [S['home'] + '/.emacs', S['home'] + '/.bashrc']})
The code to make that happen is not nearly so short. It lives in three classes:
import abc

class SurrogateDictEntry(object):
    __metaclass__ = abc.ABCMeta

    def __init__(self, key):
        """Record the key on the real dictionary that this will resolve to a
        value for.
        """
        self.key = key

    def resolve(self, d):
        """Return the actual value."""
        if hasattr(self, 'op'):
            # Any operation done on self stores its name in self.op.
            # If this is set, resolve it by calling the appropriate method
            # now that we can get self.value out of d.
            self.value = d[self.key]
            return getattr(self, self.op + 'resolve__')()
        else:
            return d[self.key]

    @staticmethod
    def make_op(opname):
        """A convenience helper. This will be the form of all op hooks for subclasses.
        The actual logic for the op is in __op__resolve__ (e.g. __add__resolve__).
        """
        def op(self, other):
            self.stored_value = other
            self.op = opname
            return self
        op.__name__ = opname
        return op
Next comes the concrete class. Simple enough.
class AdditionSurrogateDictEntry(SurrogateDictEntry):
    __add__ = SurrogateDictEntry.make_op('__add__')
    __radd__ = SurrogateDictEntry.make_op('__radd__')

    def __add__resolve__(self):
        return self.value + self.stored_value

    def __radd__resolve__(self):
        return self.stored_value + self.value
Here's the final class
class SurrogateDict(object):
    def __init__(self, EntryClass):
        self.EntryClass = EntryClass

    def __getitem__(self, key):
        """Record the key and return a surrogate entry for it."""
        return self.EntryClass(key)

    @staticmethod
    def resolve(d):
        """Resolve self references in d."""
        stack = [d]
        while stack:
            cur = stack.pop()
            # This just tries to pick an appropriate iterable of keys/indices
            it = xrange(len(cur)) if not hasattr(cur, 'keys') else cur.keys()
            for key in it:
                # Register your class with SurrogateDictEntry and you can
                # pass whatever entry types you like.
                while isinstance(cur[key], SurrogateDictEntry):
                    cur[key] = cur[key].resolve(d)
                # I'm just going to check for __iter__, but you can add other
                # checks here for items that we should loop over.
                if hasattr(cur[key], '__iter__'):
                    stack.append(cur[key])
        return d
In response to gnucom's question about why I named the classes the way that I did:
The word surrogate is generally associated with standing in for something else so it seemed appropriate because that's what the SurrogateDict class does: an instance replaces the 'self' references in a dictionary literal. That being said, (other than just being straight up stupid sometimes) naming is probably one of the hardest things for me about coding. If you (or anyone else) can suggest a better name, I'm all ears.
I'll provide a brief explanation. Throughout, S refers to an instance of SurrogateDict and d is the real dictionary.
1. A reference S[key] triggers S.__getitem__, which returns a SurrogateDictEntry(key).
2. When the SurrogateDictEntry(key) is constructed, it stores key. This will be the key into d for the value that this entry is acting as a surrogate for.
3. After S[key] is returned, it is either entered into d directly or has some operation(s) performed on it. An operation triggers the corresponding __op__ method, which simply stores the value that the operation is performed with and the name of the operation, and then returns itself. We can't actually resolve the operation yet because d hasn't been constructed.
4. After d is constructed, it is passed to S.resolve. This method loops through d, finding any instances of SurrogateDictEntry and replacing them with the result of calling the resolve method on the instance.
5. The SurrogateDictEntry.resolve method receives the now-constructed d as an argument and can use the key it stored at construction time to get the value that it is acting as a surrogate for. If an operation was performed on it after creation, the op attribute will have been set with the name of that operation. If the class has an __op__ method, then it has a matching __op__resolve__ method with the actual logic that would normally live in __op__. So now we have the logic (__op__resolve__) and all necessary values (self.value, self.stored_value) to finally get the real value of d[key]. We return that, which step 4 places in the dictionary.
6. Finally, the SurrogateDict.resolve method returns d with all references resolved.
That's a rough sketch. If you have any more questions, feel free to ask.
If you, just like me, were wondering how to make @jsbueno's snippet work with {}-style substitutions, below is example code (which is probably not very efficient, though):
import string

class MyDict(dict):
    def __init__(self, *args, **kw):
        super(MyDict, self).__init__(*args, **kw)
        self.itemlist = super(MyDict, self).keys()
        self.fmt = string.Formatter()

    def __getitem__(self, item):
        return self.fmt.vformat(dict.__getitem__(self, item), {}, self)

xs = MyDict({
    'user': 'gnucom',
    'home': '/home/{user}',
    'bin': '{home}/bin'
})

>>> xs["home"]
'/home/gnucom'
>>> xs["bin"]
'/home/gnucom/bin'
I tried to make it work with the simple replacement of % self with .format(**self), but it turns out that doesn't work for nested expressions (like 'bin' in the listing above, which references 'home', which has its own reference to 'user') because of the evaluation order: the ** expansion is done before the actual format call, and it isn't delayed as in the original % version.
Write a class, maybe something with properties:
class PathInfo(object):
    def __init__(self, user):
        self.user = user

    @property
    def home(self):
        return '/home/' + self.user

p = PathInfo('thc')
print(p.home)  # /home/thc
As sort of an extended version of @Tony's answer, you could build a dictionary subclass that calls its values if they are callables:
class CallingDict(dict):
    """Returns the result rather than the value of referenced callables.

    >>> cd = CallingDict({1: "One", 2: "Two", 'fsh': "Fish",
    ...                   "rhyme": lambda d: ' '.join((d[1], d['fsh'],
    ...                                                d[2], d['fsh']))})
    >>> cd["rhyme"]
    'One Fish Two Fish'
    >>> cd[1] = 'Red'
    >>> cd[2] = 'Blue'
    >>> cd["rhyme"]
    'Red Fish Blue Fish'
    """
    def __getitem__(self, item):
        it = super(CallingDict, self).__getitem__(item)
        if callable(it):
            return it(self)
        else:
            return it
Of course this would only be usable if you're not actually going to store callables as values. If you need to be able to do that, you could wrap the lambda declaration in a function that adds some attribute to the resulting lambda, and check for it in CallingDict.__getitem__, but at that point it's getting complex, and long-winded, enough that it might just be easier to use a class for your data in the first place.
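A rough sketch of that idea, purely illustrative (the lazy marker and the is_lazy attribute name are made up):
def lazy(func):
    # mark the callable so CallingDict knows to invoke it on lookup
    func.is_lazy = True
    return func

class CallingDict(dict):
    def __getitem__(self, item):
        it = super(CallingDict, self).__getitem__(item)
        if callable(it) and getattr(it, 'is_lazy', False):
            return it(self)
        return it

cd = CallingDict({
    'user': 'gnucom',
    'home': lazy(lambda d: '/home/' + d['user']),
    'greet': lambda: 'stored, not called',  # plain callables are returned untouched
})

print(cd['home'])   # /home/gnucom
print(cd['greet'])  # <function <lambda> at ...>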
This is very easy in a lazily evaluated language (haskell).
Since Python is strictly evaluated, we can do a little trick to turn things lazy:
Y = lambda f: (lambda x: x(x))(lambda y: f(lambda *args: y(y)(*args)))

d1 = lambda self: lambda: {
    'a': lambda: 3,
    'b': lambda: self()['a']()
}

# fix d1, and evaluate it
d2 = Y(d1)()

# to get a
d2['a']()  # 3

# to get b
d2['b']()  # 3
Syntax-wise this is not very nice. That's because we need to explicitly construct lazy expressions with lambda: ... and explicitly force them with ...(). It's the opposite of the problem in lazy languages, which need strictness annotations; here in Python we end up needing laziness annotations.
I think that with some more metaprogramming and some more tricks, the above could be made easier to use.
Note that this is basically how let-rec works in some functional languages.
The jsbueno answer in Python 3:
class MyDict(dict):
    def __getitem__(self, item):
        return dict.__getitem__(self, item).format(self)

dictionary = MyDict({
    'user': 'gnucom',
    'home': '/home/{0[user]}',
    'bin': '{0[home]}/bin'
})

print(dictionary["home"])
print(dictionary["bin"])
Here we use Python 3 string formatting with curly braces {} and the .format() method.
Documentation : https://docs.python.org/3/library/string.html

How to get list of arguments by name and value in Python

How can I dynamically get the names and values of all arguments to a class method? (For debugging).
The following code works, but it would need to be repeated a few dozen times (one for each method). Is there a simpler, more Pythonic way to do this?
import inspect

class Foo:
    def foo(self, a, b):
        myself = getattr(self, inspect.stack()[0][3])
        argnames = inspect.getfullargspec(myself).args[1:]
        d = {}
        for argname in argnames:
            d[argname] = locals()[argname]
        log.debug(d)
That's six lines of code for something that should be a lot simpler.
Sure, I can hardcode the debugging code separately for each method, but it seems easier to use copy/paste. Besides, it's way too easy to leave out an argument or two when hardcoding, which could make the debugging more confusing.
I would also prefer to assign local variables instead of accessing the values using a kwargs dict, because the rest of the code (not shown) could get clunky real fast, and is partially copied/pasted.
What is the simplest way to do this?
An alternative:
from collections import OrderedDict

class Foo:
    def foo(self, *args):
        argnames = 'a b'.split()
        kwargs = OrderedDict(zip(argnames, args))
        log.debug(kwargs)
        for argname, argval in kwargs.items():
            # note: assigning into locals() inside a function has no effect in CPython
            locals()[argname] = argval
This saves one line per method, but at the expense of IDE autocompete/intellisense when calling the method.
As wpercy wrote, you can reduce the last three lines to a single line using a dict comprehension. The caveat is that it only works in some versions of Python.
However, in Python 3, a dict comprehension has its own namespace and locals wouldn't work. So a workaround is to put the locals func after the in:
import inspect
from itertools import repeat

class Foo:
    def foo(self, a, b):
        myname = inspect.stack()[0][3]
        argnames = inspect.getfullargspec(getattr(self, myname)).args[1:]
        args = [(x, parent[x]) for x, parent in zip(argnames, repeat(locals()))]
        log.debug('{}: {!s}'.format(myname, args))
This saves two lines per method.

Storing a data for recalling functions Python

I have a project in which I run multiple data through a specific function that "cleans" them.
The cleaning function looks like this:
Misc.py
def clean(my_data):
    sys.stdout.write("Cleaning genes...\n")
    synonyms = FileIO("raw_data/input_data", 3, header=False).openSynonyms()
    clean_data = {}
    for g in my_data:
        if g in synonyms:
            # Found a data point which appears in the synonym list.
            # print synonyms[g]
            for synonym in synonyms[g]:
                if synonym in my_data:
                    del my_data[synonym]
                    clean_data[g] = synonym
                    sys.stdout.write("\t%s is also known as %s\n" % (g, clean_data[g]))
    return my_data
FileIO is a custom class I made to open files.
My question is: this function will be called many times throughout the program's life cycle. What I want to achieve is to not have to read input_data every time, since it's going to be the same every time. I know I can just return it and pass it as an argument, in this way:
def clean(my_data, synonyms=None):
    if synonyms is None:
        ...
    else:
        ...
But is there another, better looking way of doing this?
My file structure is the following:
lib/
    Misc.py
    FileIO.py
    __init__.py
    ...
raw_data/
runme.py
From runme.py, I do from lib import * and call all the functions I made.
Is there a Pythonic way to go about this? Like a 'memory' for the function?
Edit:
This line: synonyms = FileIO("raw_data/input_data", 3, header=False).openSynonyms() returns a collections.OrderedDict() built from input_data, using the 3rd column as the dictionary key.
The dictionary for the following dataset:
column1  column2  key  data
...      ...      A    B|E|Z
...      ...      B    F|W
...      ...      C    G|P
...
Will look like this:
OrderedDict([('A',['B','E','Z']), ('B',['F','W']), ('C',['G','P'])])
This tells my script that A is also known as B, E, Z; B as F, W; etc.
So these are the synonyms. Since the synonyms list will never change throughout the life of the code, I want to read it just once and re-use it.
Use a class with a __call__ method. You can call objects of this class and store data between calls in the object. Some data can probably best be set up in the constructor. What you've made this way is known as a 'functor' or 'callable object'.
Example:
class Incrementer:
    def __init__(self, increment):
        self.increment = increment

    def __call__(self, number):
        return self.increment + number

incrementerBy1 = Incrementer(1)
incrementerBy2 = Incrementer(2)

print(incrementerBy1(3))
print(incrementerBy2(3))
Output:
4
5
[EDIT]
Note that you can combine the answer of @Tagc with my answer to create exactly what you're looking for: a 'function' with built-in memory.
Name your class Clean rather than DataCleaner, name the instance clean, and name the method __call__ rather than clean.
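Combining the two answers as suggested might look roughly like this; FileIO and the file path are taken from the question and assumed to exist, and my_data is a placeholder:
class Clean:
    def __init__(self, synonym_file):
        # read the synonym file once, when the instance is created
        self.synonyms = FileIO(synonym_file, 3, header=False).openSynonyms()

    def __call__(self, data):
        # same cleaning logic as in the question, reusing self.synonyms
        ...

clean = Clean('raw_data/input_data')
clean(my_data)  # later calls reuse the already-loaded synonyms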
Like a 'memory' for the function
Half-way to rediscovering object-oriented programming.
Encapsulate the data cleaning logic in a class, such as DataCleaner. Make it so that instances read synonym data once when instantiated and then retain that information as part of their state. Have the class expose a clean method that operates on the data:
class FileIO(object):
    def __init__(self, file_path, some_num, header):
        pass

    def openSynonyms(self):
        return []

class DataCleaner(object):
    def __init__(self, synonym_file):
        self.synonyms = FileIO(synonym_file, 3, header=False).openSynonyms()

    def clean(self, data):
        for g in data:
            if g in self.synonyms:
                # ...
                pass

if __name__ == '__main__':
    dataCleaner = DataCleaner('raw_data/input_file')
    dataCleaner.clean('some data here')
    dataCleaner.clean('some more data here')
As a possible future optimisation, you can expand on this approach to use a factory method to create instances of DataCleaner which can cache instances based on the synonym file provided (so you don't need to do expensive recomputation every time for the same file).
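One way such a caching factory might look (purely a sketch; the function name is made up):
_cleaner_cache = {}

def get_data_cleaner(synonym_file):
    # reuse an existing DataCleaner for this file instead of re-reading it
    if synonym_file not in _cleaner_cache:
        _cleaner_cache[synonym_file] = DataCleaner(synonym_file)
    return _cleaner_cache[synonym_file]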
I think the cleanest way to do this would be to decorate your "clean" (pun intended) function with another function that provides the synonyms local for the function. This is, IMO, cleaner and more concise than creating another custom class, yet still allows you to easily change the "input_data" file if you need to (factory function):
def defineSynonyms(datafile):
    def wrap(func):
        # read the synonym file once, at decoration time, rather than on every call
        synonyms = FileIO(datafile, 3, header=False).openSynonyms()
        def wrapped(*args, **kwargs):
            kwargs['synonyms'] = synonyms
            return func(*args, **kwargs)
        return wrapped
    return wrap

@defineSynonyms("raw_data/input_data")
def clean(my_data, synonyms={}):
    # do stuff with synonyms and my_data...
    pass

how to dynamically generate a subclass in a function?

I'm attempting to write a function that creates a new subclass named with the string it gets passed as an argument. I don't know what tools would be best for this, but I gave it a shot in the code below and only managed to make a subclass named "x", instead of "MySubClass" as intended. How can I write this function correctly?
class MySuperClass:
    def __init__(self, attribute1):
        self.attribute1 = attribute1

def makeNewClass(x):
    class x(MySuperClass):
        def __init__(self, attribute1, attribute2):
            self.attribute2 = attribute2

x = "MySubClass"
makeNewClass(x)
myInstance = MySubClass(1, 2)
The safest and easiest way to do this would be to use the type built-in function. It takes an optional second argument (a tuple of base classes) and a third argument (a dict of attributes and methods). My recommendation would be the following:
def makeNewClass(x):
    def init(self, attribute1, attribute2):
        # make sure you call the base class constructor here
        self.attribute2 = attribute2
    # make a new type and return it
    return type(x, (MySuperClass,), {'__init__': init})

x = "MySubClass"
MySubClass = makeNewClass(x)
You will need to populate the third argument's dict with everything you want the new class to have. It's very likely that you are generating classes and will want to push them back into a list, where the names won't actually matter. I don't know your use case though.
Alternatively you could access globals and put the new class into that instead. This is a really strangely dynamic way to generate classes, but is the best way I can think of to get exactly what you seem to want.
def makeNewClass(x):
    def init(self, attribute1, attribute2):
        # make sure you call the base class constructor here
        self.attribute2 = attribute2
    globals()[x] = type(x, (MySuperClass,), {'__init__': init})
Ryan's answer is complete, but I think it's worth noting that there is at least one other nefarious way to do this besides using built-in type and exec/eval or whatever:
class X:
    attr1 = 'some attribute'
    def __init__(self):
        print('within constructor')
    def another_method(self):
        print('hey, im another method')

# black magics
X.__name__ = 'Y'
locals()['Y'] = X
del X

# using our class
y = locals()['Y']()
print(y.attr1)
y.another_method()
Note that I only used strings when creating class Y and when initializing an instance of Y, so this method is fully dynamic.

Python - Call an object from a list of objects

I have a class, and I would like to be able to create multiple objects of that class and place them in an array. I did it like so:
rooms = []
rooms.append(Object1())
...
rooms.append(Object4())
I then have a dict of functions, and I would like to pass the object to the function. However, I'm encountering some problems. For example, I have a dict:
dict = {'look': CallLook(rooms[i])}
I'm able to pass it into the function; however, in the function, if I try to call an object's method it gives me problems:
def CallLook(current_room):
    current_room.examine()
I'm sure that there has to be a better way to do what I'm trying to do, but I'm new to Python and I haven't seen a clean example on how to do this. Anyone have a good way to implement a list of objects to be passed into functions? All of the objects contain the examine method, but they are objects of different classes. (I'm sorry I didn't say so earlier)
The specific error states: TypeError: 'NoneType' object is not callable
Anyone have a good way to implement a list of objects to be passed into functions? All of the objects contain the examine method, but they are objects of different classes. (I'm sorry I didn't say so earlier)
This is Python's plain duck-typing.
class Room:
    def __init__(self, name):
        self.name = name
    def examine(self):
        return "This %s looks clean!" % self.name

class Furniture:
    def __init__(self, name):
        self.name = name
    def examine(self):
        return "This %s looks comfortable..." % self.name

def examination(l):
    for item in l:
        print(item.examine())

list_of_objects = [Room("Living Room"), Furniture("Couch"),
                   Room("Restrooms"), Furniture("Bed")]

examination(list_of_objects)
Prints:
This Living Room looks clean!
This Couch looks comfortable...
This Restrooms looks clean!
This Bed looks comfortable...
As for your specific problem: probably you have forgotten to return a value from examine()? (Please post the full error message (including full backtrace).)
I then have a dict of functions, and I would like to pass the object to the function. However, I'm encountering some problems..For example, I have a dict:
my_dict = {'look': CallLook(rooms[i])}  # this is not a dict of functions
The dict you have created may evaluate to {'look': None} (assuming your examine() doesn't return a value), which could explain the error you've observed.
If you wanted a dict of functions you would need to put in a callable, not an actual function call, e.g. like this:
my_dict = {'look': CallLook}  # this is a dict of functions
If you want to bind 'look' to a specific room, you could redefine CallLook:
def CallLook(current_room):
    return current_room.examine  # return the bound examine method

my_dict = {'look': CallLook(rooms[i])}  # this is also a dict of functions
Another issue with your code is that you are shadowing the built-in dict() by naming your local dictionary dict. You shouldn't do this; it yields nasty errors.
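For example (illustrative only), once the name dict is rebound, the built-in can no longer be called:
dict = {'look': None}  # shadows the built-in dict
d = dict(a=1)          # TypeError: 'dict' object is not callable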
Assuming you don't have basic problems (like syntax errors because the code you have pasted is not valid Python), this example shows you how to do what you want:
>>> class Foo():
...     def hello(self):
...         return 'hello'
...
>>> r = [Foo(), Foo(), Foo()]
>>> def call_method(obj):
...     return obj.hello()
...
>>> call_method(r[1])
'hello'
Assuming you have a class Room the usual way to create a list of instances would be using a list comprehension like this
rooms = [Room() for i in range(num_rooms)]
I think there are some things you may not be getting about this:
dict = {'look': CallLook(rooms[i])}
This creates a dict with just one entry: a key 'look', and a value which is the result of evaluating CallLook(rooms[i]) right at the point of that statement. It also then uses the name dict to store this object, so you can no longer use dict as a constructor in that context.
Now, the error you are getting tells us that rooms[i] is None at that point in the programme.
You don't need CallLook (which is also named non-standardly); you can just use the expression rooms[i].examine() or, if you want to evaluate the call later, rooms[i].examine.
You probably don't need the dict at all.
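If you do want a dict, store the bound method itself rather than the result of calling it; reusing the rooms list and index from the question, roughly:
actions = {'look': rooms[i].examine}  # store the bound method, not its result
actions['look']()                     # equivalent to rooms[i].examine()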
That is not a must, but in some cases, using hasattr() is good... getattr() is another way to get an attribute off an object...
So:
rooms = [Obj1(), Obj2(), Obj3()]

if hasattr(rooms[i], 'examine'):  # first check whether our object has the selected function or attribute
    getattr(rooms[i], 'examine')    # this just evaluates to the function without calling it, and equals rooms[i].examine
    getattr(rooms[i], 'examine')()  # by adding () to the end of getattr, we evaluate and then call the function
You may also pass parameters to the examine function, like:
getattr(rooms[i], 'examine')(param1, param2)
I'm not sure of your requirement, but you can use a dict to store multiple objects of a class.
Maybe this will help:
>>> class c1():
...     print "hi"
...
hi
>>> c = c1()
>>> c
<__main__.c1 instance at 0x032165F8>
>>> d = {}
>>> for i in range(10):
...     d[i] = c1()
...
>>> d[0]
<__main__.c1 instance at 0x032166E8>
>>> d[1]
<__main__.c1 instance at 0x032164B8>
>>>
It will create an object of class c1 and store it in the dict. Obviously, in this case you could use a list instead of a dict.
