Similar to ".format", I want to be able to automatically prefix the time before a string. I really have no idea how to do it, but I think it might look something like this.
>>> print("Function __init__ at CLASS myClass running...".log())
Prints:
[myPrefix] Function __init__ at CLASS myClass running...
I have no idea in the world how I would do this.
Sadly, you can't even monkey-patch attributes onto built-in types. This:
def log(self):
    print "logging " + self

str.log = log
str("hello")
print "hello".log()
Gives:
Traceback (most recent call last):
Line 3, in <module>
str.log = log
TypeError: can't set attributes of built-in/extension type 'str'
The best way to do this is to just write a logging method, like so:
def log(s):
    print("my-prefix -- " + s)

log("hello")
The advantage of this is that if, at a later stage, you decide not to print your logging statements but to send them to a file instead, you only need to change the log function, not the many places where you call it. For example:
def log(s):
    with open("my_log.txt", "a") as f:   # append, so earlier log lines are kept
        f.write("the time - " + s + "\n")
log("hello")
Now, all your logging statements go to the file, without having to change the actual logging call.
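If, as in the original question, the prefix should be the current time, a minimal sketch using the standard-library time module might look like this (the format string is only an example):
import time

def log(s):
    print("[%s] %s" % (time.strftime("%H:%M:%S"), s))

log("hello")   # e.g. [14:03:59] hello (the timestamp will vary)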
I realize that this may be a fragile approach, but I'm looking for a way to intercept global name lookups (and also to provide a value/binding when the lookup fails) under 'exec'.
Use case: I want to provide a restricted execution environment for some external scripts written by users. I am trying to tailor the script conventions and namespace construction to very unsophisticated users, so I'd like them to be able to call a bunch of functions as if they were "global" without having to construct the entire dictionary by hand ahead of time.
Ergo, I'd like to intercept the global/module namespace lookup of SomeIdentifierNameTheyMayUse, and to dynamically bind that name to something computed rather than something already bound in the namespace.
Is something like this possible in general?
I managed to get something sort-of working, but it has problems, as you can see below:
class mydict(dict):
    def __missing__(self, key):
        print "__missing__:", key
        return 99

d = mydict()
d['__builtins__'] = {}

code = """
# triggers __missing__ call as desired, prints 99
print this_bad_sym_is_ok

def action1():
    print 'action1!'
    # does not trigger __missing__. Why? And how can I fix it?
    print this_bad_sym_is_not
"""
exec code in d
print "d=", d
exec 'action1()' in d
which currently produces:
__missing__: this_bad_sym_is_ok
99
d= {'__builtins__': {}, 'action1': <function action1 at 0x107d6b2a8>}
action1!
Traceback (most recent call last):
File "t.py", line 25, in <module>
exec 'action1()' in d
File "<string>", line 1, in <module>
File "<string>", line 10, in action1
NameError: global name 'this_bad_sym_is_not' is not defined
Even if it's not possible to do something similar to this, I'd still like to understand why it's not working.
Thanks!
Maybe this helps: https://wiki.python.org/moin/SandboxedPython
It explains the restricted execution environment.
This is an implementation: https://pypi.python.org/pypi/pysandbox/
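If the main goal is just to let unsophisticated users call a fixed set of helpers as if they were globals, a simpler (if less dynamic) approach is to build the namespace dict up front from helpers you define yourself. A rough sketch, where the helper names are only placeholders:
def greet(name):
    print "Hello, %s" % name

def move_ship(x, y):
    print "moving to", x, y

# expose only these names to the user script
namespace = {"greet": greet, "move_ship": move_ship, "__builtins__": {}}

user_code = """
greet('player one')
move_ship(3, 4)
"""

exec user_code in namespace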
I was trying IPython with a module I created and it does not show the actual representation of class objects. Instead it shows something like
TheClass.__module__ + '.' + TheClass.__name__
I heavily use metaclasses in this module and I have really meaningful class representations that should be shown to the user.
Is there an IPython specific method I can change to make the right representation available instead of this namespace thingy that is quite useless in this application?
Or, if that's not possible, how can I customize my version of IPython to show the information I want?
EDIT
As complementary information, if I take a class and change its __module__ attribute to e.g. None, it blows up with this traceback when trying to show the representation:
Traceback (most recent call last):
... [Huge traceback] ...
File "C:\Python32\lib\site-packages\IPython\lib\pretty.py", line 599, in _type_pprint
name = obj.__module__ + '.' + obj.__name__
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
So my expectations were right and this function is used to show class objects:
def _type_pprint(obj, p, cycle):
I tried customizing it in my class but I don't think I'm doing it right. The IPython.lib.pretty module does have a big dictionary mapping type (the parent of all metaclasses) to this function.
EDIT 2
Things I tried:
Adding the _repr_pretty_ function to the metaclass. It does work with instances but not with classes...
Using the function IPython.lib.pretty.for_type(typ, func). It only changes the big dictionary I wrote about above, but not the copy of it made by the RepresentationPrinter instance... So this function has no use at all?!
Calling the magic function %pprint. It disables (or enables) this pretty print feature, using the default Python __repr__ for all objects. That's bad because the pretty printing of lists, dicts and many other objects is quite nice.
The first approach is more of what I want because it does not affect the environment and is specific for this class.
This is just an issue with IPython 0.12 and older versions. Now it is possible to do:
class A(type):
    def _repr_pretty_(cls, p, cycle):
        p.text(repr(cls))

    def __repr__(cls):
        return 'This Information'

class B:  # or for Py3K: class B(metaclass=A):
    __metaclass__ = A
and it'll show the desired representation for B.
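For Python 3, where the __metaclass__ attribute is ignored, the same idea would look roughly like this:
class A(type):
    def _repr_pretty_(cls, p, cycle):
        p.text(repr(cls))

    def __repr__(cls):
        return 'This Information'

class B(metaclass=A):
    pass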
I have a problem similar to the first problem in this question, which as far as I can see went unanswered.
I have a file "config.py" which contains a lot of parameters to be used by a class (this config.py file will change); however, I can't get these to propagate into the class via execfile.
In an example piece of code:
class Class():
    def __init__(self):
        execfile("config.py")
        print x

# config.py
x = "foo"
>>> t = Class()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in __init__
NameError: global name 'x' is not defined
Any help welcome, or any better methods of retrieving parameters from a file to be used in a class.
Many Thanks.
I don't get what you're trying to do (and I don't like it, but that's just me), but to fix your problem do this (tested in Python 2.6):
class Class():
    def __init__(self):
        execfile('config.py', locals())  # Not recommended; maybe you want globals().
        print x
But from the doc:
Note: The default locals act as described for function locals() below: modifications to the default locals dictionary should not be attempted. Pass an explicit locals dictionary if you need to see effects of the code on locals after function execfile() returns. execfile() cannot be used reliably to modify a function's locals.
And about:
Any help welcome, or any better methods of retrieving parameters from a file to be used in a class.
You can use import.
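For example, a minimal sketch of the import-based approach, assuming config.py sits on the import path next to your module:
# config.py
x = "foo"

# your module
import config

class Class(object):
    def __init__(self):
        print config.x   # prints foo

t = Class()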
Even though it might be convenient to keep configuration settings in a Python file, I would recommend against it. I think it opens up a whole set of problems that you don't really want to have to deal with. Anything could be placed in your configuration file, including malicious code.
I would use either the json module or the ConfigParser module to hold my configuration.
If you have trouble choosing between those two I would recommend the json module. Json is a simple yet flexible format for structured data.
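For example, a sketch using the json module, assuming a file config.json containing {"x": "foo"}:
import json

with open("config.json") as f:
    config = json.load(f)

print config["x"]   # prints foo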
I'm currently trying to write a multiple-file Python (2.6.5) game using PyGame. The problem is that one of the files, "pyconsole.py", needs to be able to call methods on instances of other objects imported by the primary file, "main.py". I have a list in the main file to hold instances of all of the game objects (player's ship, enemy ships, stations, etc.), yet I can't seem to call methods from that list within "pyconsole.py", despite the fact that I'm doing a from pyconsole import * in "main.py" before the main loop starts. Is this simply not possible, and should I instead use M4 to combine every file into one single file and then bytecode-compile and test/distribute that?
Example:
bash$ cat test.py
#!/usr/bin/python
import math, distancefrom00

foo = 5

class BarClass:
    def __init__(self):
        self.baz = 10

    def get(self):
        print "The BAZ is ", self.baz

    def switch(self):
        self.baz = 15
        self.get()

bar = BarClass()

def main():
    bar.switch()
    print distancefrom00.calculate([2, 4])

if __name__ == '__main__': main()

bash$ cat distancefrom00.py
#!/usr/bin/python
import math
import test

def calculate(otherpoint):
    return str(math.hypot(otherpoint[0], otherpoint[1])) + " (foo = " + str(test.foo) + "; " + test.bar.get() + ")"
bash$ python test.py
The BAZ is 15
The BAZ is 10
Traceback (most recent call last):
File "test.py", line 24, in <module>
if __name__ == '__main__': main()
File "test.py", line 22, in main
print distancefrom00.calculate([2, 4])
File "/home/archie/Development/Python/Import Test/distancefrom00.py", line 8, in calculate
return str(math.hypot(otherpoint[0], otherpoint[1]))+" (foo = "+str(test.foo)+"; "+test.bar.get()+")"
TypeError: cannot concatenate 'str' and 'NoneType' objects
If my somewhat limited understanding of Python names, classes, and all that stuff is correct here, the NoneType means that the name test.bar.get() - and thus, test.bar - is not assigned to anything.
The problem is that one of the files, "pyconsole.py", needs to be able to call methods on instances of other objects imported by the primary file, "main.py".
This just sounds like the dependencies are wrong. Generally nothing should be calling 'backwards' up to the main file. That main.py should be the glue that holds everything else together, and nothing should depend on it. Technically the dependencies should form a directed acyclic graph. As soon as you find a cycle in your dependency graph, move out the common aspects into a new file to break the cycle.
So, move the things in 'main.py' that are used by 'pyconsole.py' out into a new file. Then have 'main.py' and 'pyconsole.py' import that new file.
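A rough sketch of that restructuring, where the module name game_objects.py and the class and method names are only placeholders for whatever your real code uses:
# game_objects.py -- owns the shared list, imported by everyone
objects = []

# pyconsole.py
import game_objects

def damage_all(amount):
    for obj in game_objects.objects:
        obj.take_damage(amount)   # placeholder method on your game objects

# main.py
import game_objects
import pyconsole

game_objects.objects.append(PlayerShip())   # PlayerShip defined elsewhere
pyconsole.damage_all(5)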
In addition to the other answers, note that when you run test.py as a script it is the module __main__. When distancefrom00.py imports test, a second, separate test module is created. The bar in the main script and the test.bar accessible from distancefrom00.py are completely unrelated; they aren't even instances of the same class: one is a __main__.BarClass instance while the other is a test.BarClass instance.
That's why you get the two outputs 15 followed by 10: the main script bar has had its switch method called, but the test module bar has not been switched.
Circular imports aside, importing your main script into another module has its own level of badness.
Are you instantiating an object in pyconsole in main.py? If you've got a class called PyConsole in pyconsole, give its __init__ method a parameter that takes the list of game objects. That way your pyConsole object will have a reference to the objects.
Hope this helps. It seems like you've just misunderstood the way Python works with imported modules.
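A sketch of that idea, where PyConsole and the method names are placeholders for your actual console class:
class PyConsole(object):
    def __init__(self, game_objects):
        # keep a reference to the list owned by main.py
        self.game_objects = game_objects

    def list_objects(self):
        for obj in self.game_objects:
            print obj

# in main.py, after the game objects have been created:
# console = PyConsole(game_objects)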
The problem with the submitted code is that the get method of the BarClass class returns a value of None because the body of the method contains only a print statement. Therefore, in distancefrom00.py the result of the function calculate is:
str + str + str + str + None + str
Hence the TypeError: cannot concatenate 'str' and 'NoneType' objects.
You can solve this problem by returning a string from a call to get. For example,
def get(self):
    return "The BAZ is %s" % self.baz
Also, note that you have a circular import in your two files: test.py imports distancefrom00.py, and distancefrom00.py imports test.py. As Kylotan says, cyclic dependencies are bad.
I'm aware of using globals(), locals() and getattr to reference things in Python by string (as in this question), but unless I'm missing something obvious I can't seem to use this approach to get hold of types.
e.g.:
In [12]: locals()['int']
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
e:\downloads_to_access\<ipython console> in <module>()
KeyError: 'int'
In [13]: globals()['int']
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
e:\downloads_to_access\<ipython console> in <module>()
KeyError: 'int'
getattr(???, 'int')...
What's the best way of doing this?
There are locals, globals, and then builtins.
Perhaps you are looking for the builtin:
import __builtin__
getattr(__builtin__,'int')
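For example, a quick check (Python 2, where int lives in the __builtin__ module):
import __builtin__

int_type = getattr(__builtin__, 'int')
print int_type("42")   # prints 42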
You've already gotten a solution using builtins, but another worthwhile technique to hold in your toolbag is a dispatch table. If your CSV is designed to be used by multiple applications written in multiple languages, it might look like this:
Integer,15
String,34
Float,1.0
Integer,8
In such a case you might want something like this, where csv is a list of tuples containing the data above:
mapping = {
    'Integer': int,
    'String': str,
    'Float': float,
    'Unicode': unicode
}

def convert(csv):
    results = []
    for row in csv:
        datatype = row[0]
        val_string = row[1]
        results.append(mapping[datatype](val_string))
    return results
That gives you the flexibility of allowing arbitrary strings to map to useful types. You don't have to massage your data to give you the exact values python expects.
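For example, used on the rows above (assuming they have already been read into tuples by the csv module or similar):
rows = [('Integer', '15'), ('String', '34'), ('Float', '1.0'), ('Integer', '8')]
print convert(rows)   # [15, '34', 1.0, 8]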
getattr(__builtins__,'int')
The issue here is that int is part of the __builtins__ module, not just part of the global namespace. You can get a built-in type, such as int, using the following bit of code:
int_gen = getattr(globals()["__builtins__"], "int")
i = int_gen(4)
# >>> i = 4
Similarly, you can access any other (imported) module by passing the module's name as a string index to globals(), and then using getattr to extract the desired attributes.
Comments suggest that you are unhappy with the idea of using eval to generate data. Looking up a function in __builtins__, however, still lets you find eval.
The most basic solution given looks like this:
import __builtin__

def parseInput(typename, value):
    return getattr(__builtin__, typename)(value)
You would use it like so:
>>> parseInput("int", "123")
123
Cool, works pretty well. How about this one, though?
>>> parseInput("eval", 'eval(compile("print \'Code injection?\'","","single"))')
Code injection?
Does this do what you expect? Unless you explicitly want this, you need to do something to prevent untrustworthy inputs from poking about in your namespace. I'd strongly recommend a simple whitelist, gracefully raising some sort of exception in the case of invalid input, like so:
def parseInput(typename, value):
    return {"int": int, "float": float, "str": str}[typename](value)
but if you just can't bear that, you can still add just a bit of armor by verifying that the requested function is actually a type:
import __builtin__

def parseInput(typename, value):
    typector = getattr(__builtin__, typename)
    if type(typector) is type:
        return typector(value)
    else:
        return None
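As a quick check of that version (the second call is refused because eval is a function, not a type):
print parseInput("int", "3")     # 3
print parseInput("eval", "1+1")  # None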
If you have a string that is the name of a thing, and you want the thing, you can also use:
thing = 'int'
eval(thing)
Keep in mind though, that this is very powerful, and you need to understand what thing might contain, and where it came from. For example, if you accept user input as thing, a malicious user could do unlimited damage to your machine with this code.