I'm new to logging in Python, and I would like to save to the log file, in addition to the outcome of a long pipeline, the parameters/class attributes of some of the instances created in the pipeline.
Ideally this should not pollute the code where the class is implemented too much.
Even better if the solution considers only the instance of the class and writes its attributes to the log file without touching the class implementation at all.
Any suggestions, or good practice advice?
--- Edit:
An unpolished and simplified version of my initial attempt (as asked in the comments) is the most obvious one I could think of, and consists of adding a method that collects the attributes of the class into a string, which is returned when the method is called:
In a Python package with 2 modules, main.py and a_class.py, written as follows:
>> cat main.py
import logging
from a_class import MyClass
logging.basicConfig(filename='example.log',level=logging.DEBUG)
logging.warning('Print this to the console and save it to the log')
logging.info('Print this to the console')
o = MyClass()
o.attribute_1 = 1
o.attribute_2 = 3
o.attribute_3 = 'Spam'
logging.info(o.print_attributes())
and
>> cat a_class.py
class MyClass():
    def __init__(self):
        self.attribute_1 = 0
        self.attribute_2 = 0
        self.attribute_3 = 0

    def print_attributes(self):
        msg = '\nclass.attribute_1 {}\n'.format(self.attribute_1)
        msg += 'class.attribute_2 {}\n'.format(self.attribute_2)
        msg += 'class.attribute_3 {}\n'.format(self.attribute_3)
        return msg
The example.log contains what I wanted, which is:
WARNING:root:Print this to the console and save it to the log
INFO:root:Print this to the console
INFO:root:
class.attribute_1 1
class.attribute_2 3
class.attribute_3 Spam
To reformulate the question: is there a way of doing the same query of the class's attributes and sending it to the log without adding any kind of print_attributes method to the class itself?
Use the built-in __dict__:
class MyClass():
    def __init__(self):
        self.attribute_1 = 0
        self.attribute_2 = 0
        self.attribute_3 = 0

o = MyClass()
print(o.__dict__)
Outputs:
{'attribute_2': 0, 'attribute_3': 0, 'attribute_1': 0}
Use it in logging as you want to.
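For example, a minimal sketch reusing the setup from the question (example.log and MyClass are from the original post):
import logging
from a_class import MyClass

logging.basicConfig(filename='example.log', level=logging.DEBUG)

o = MyClass()
o.attribute_1 = 1
# vars(o) is equivalent to o.__dict__; this logs the full instance state in one line
logging.info('MyClass attributes: %s', vars(o))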
I'd suggest implementing __str__ or __repr__ for your class so that it nicely shows all the salient attribute values.
Then you can log instances as simple values: log.info("Now foo is %s", foo_instance).
A complete example:
class Donut(object):
    def __init__(self, filling, icing):
        self.filling = filling
        self.icing = icing

    def __repr__(self):
        return 'Donut(filling=%r, icing=%r)' % (self.filling, self.icing)

donut = Donut('jelly', 'glaze')

import logging
# format chosen to match the output shown below (an assumption; a bare
# basicConfig() would print "WARNING:root:..." instead)
logging.basicConfig(format='%(asctime)s %(process)d %(levelname)s %(message)s')
logging.getLogger().warning('Carbs overload: one %s too much', donut)
Output:
2017-10-25 10:59:05,302 9265 WARNING Carbs overload: one Donut(filling='jelly', icing='glaze') too much
I agree with @Iguananaut that there is no magical way of doing this. However, the following may do the trick. It is better than the print_attributes method you wrote, IMO.
import logging
logging.basicConfig()
logger = logging.getLogger('ddd')
logger.setLevel(logging.DEBUG)

class A(object):
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

    def __str__(self):
        return "\n".join(["{} is {}".format(k, v)
                          for k, v in self.__dict__.items()])

a = A(1, 2, 3)
logger.debug(a)
The result looks like this -
{12:43}~ ➭ python logging_attrib.py
DEBUG:ddd:a is 1
c is 3
b is 2
Please let me know what you think
Related
I have this class in which a variable c is used as an intermediate step of the operate() method.
class DumbAdder():
    def __init__(self, a: float, b: float):
        super(DumbAdder, self).__init__()
        self.a = a
        self.b = b

    def operate(self):
        c = self.a
        for step in range(self.b):
            c = c + 1
        result = c
        print(result)
After creating the object x by calling DumbAdder with arguments a and b, and then calling operate(), we obtain a result which is the sum of the two arguments.
x = DumbAdder(10,20)
x.operate()
In this case we get 30 printed on the screen as a result.
Now, let's say we have a new instance y:
y = DumbAdder(5, 10)
Now, my question: is there a way to access the values of c in y when calling operate(), for each step of the for loop, that is, to display 6, 7, 8 ... 13, 14, 15, without modifying the definition of operate()? Or with minimal modifications?
My goal is to be able to switch between a 'normal mode' and a 'debug mode' for my classes with iterations inside the methods. The debug mode would allow me to inspect the evolution of the intermediate values.
NOTE
While writing the question I came up with this solution. I post the question in case someone wants to share a more efficient or elegant way.
class DumbAdder():
    def __init__(self, a: float, b: float):
        super(DumbAdder, self).__init__()
        self.a = a
        self.b = b
        self.mode = 'normal'
        self.c_history = []

    def store_or_not(self, c):
        if self.mode == 'normal':
            pass
        elif self.mode == 'debug':
            self.c_history.append(c)

    def operate(self):
        c = self.a
        for x in range(self.b):
            c = c + 1
            self.store_or_not(c)
        result = c
        print(result)
The typical way to handle this is to log the value, e.g. using the logging module.
import logging

def operate(self):
    logger = logging.getLogger("operate")
    c = self.a
    for x in range(self.b):
        c = c + 1
        logger.debug(c)  # log each intermediate value of c
    result = c
    print(result)
What logging.debug does is not operate's concern; it's just a request to do something. What that something is, is defined by the logger's configuration and the handler(s) attached to the logger. Your application can do that without modifying your code at all, since all loggers live in a single application-wide namespace. For example,
# Only log errors by default...
logging.basicConfig(level=logging.ERROR)
# ... but log debugging information for this particular logger
logging.getLogger("operate").setLevel(logging.DEBUG)
This uses the same logging configuration for every instance of DumbAdder, rather than allowing per-instance debugging. I suspect that's all you really need, but if not, you can have __init__ create an instance specific logger and configure it at that time.
def __init__(self, a, b, debug=False):
    super().__init__()
    self.a = a
    self.b = b
    self.logger = logging.getLogger(f'{id(self)}')  # Logger specific to this object
    if debug:
        self.logger.setLevel(logging.DEBUG)

def operate(self):
    c = self.a
    for x in range(self.b):
        c = c + 1
        self.logger.debug(c)
    result = c
    print(result)
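Usage would then look something like this (a sketch; since the logger name is just id(self), each instance gets its own logger, and the debug one prints while the other stays silent):
import logging
logging.basicConfig(level=logging.ERROR)

quiet = DumbAdder(10, 20)             # stays silent below ERROR
noisy = DumbAdder(5, 10, debug=True)  # emits one DEBUG line per loop step
noisy.operate()
quiet.operate()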
Handlers can do much more than simply write to the screen. The default handler created by logging.basicConfig writes to standard error, but the logging.handlers module provides for writing to files, syslog, sockets, SMTP servers for e-mail, HTTP services, etc., and you can always write your own handlers to do really whatever you want with the message that the debug method constructs from the value of c it receives.
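For example, routing just this logger's debug output to its own file takes a few lines (a sketch; the filename and format string are arbitrary choices):
import logging

handler = logging.FileHandler('operate_debug.log')
handler.setFormatter(logging.Formatter('%(asctime)s %(name)s %(message)s'))

logger = logging.getLogger('operate')
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)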
I need some help regarding modules & classes and how to share data and methods.
I have been working on a Python program for a while and it has gotten too big to deal with in its original form which did not use classes. The program is about 6000 lines long and uses Tkinter with multiple screens. I figured the logical breakdown into classes would follow the user interface with each screen being its own class. Then I would have another class that stores all the "global" data.
So far, with everything still in one file, it is working. However, I am now trying to break these classes out into separate modules by saving them in their own files. This is where things got really bad for me. I have read many tutorials and texts on classes, and have seen a number of posts about the same problem here on Stack Overflow, but I still can't figure out what to do.
As my actual program is so big, I created a very simple program that shows what I need to do...
#main.py
class data:
    def __init__(self):
        self.A = "A"
        self.B = "B"
        self.C = "C"
        self.All = ""

class module_1:
    def __init__(self):
        place_holder = "something"
    def add_1_2(self):
        d.All = d.A + d.B
        print("new_string=", d.All)

class module_2:
    def __init__(self):
        place_holder = "something"
    def combine_it_all(self):
        m1.add_1_2()
        d.All = d.All + d.C

d = data()
m1 = module_1()
m2 = module_2()
m2.combine_it_all()
print("d.All = ", d.All)
print("END OF PROGRAM")
The program above shows how I want to access data in other classes and use their methods. However, I also need to break out the program into modules so that they are much smaller and easier to work with. So, I tried to break out each class and put it in its own file (module) and now I don't know how to access data or methods in other classes which come from modules. Here are the breakdowns of each file...
#data_p.py
class data:
    def __init__(self):
        self.A = "A"
        self.B = "B"
        self.C = "C"
        self.All = ""

#module_1_p.py
class module_1:
    def __init__(self):
        place_holder = "something"
    def add_1_2(self):
        d.All = d.A + d.B
        print("new_string=", d.All)

#module_2_p.py
class module_2:
    def __init__(self):
        place_holder = "something"
    def combine_it_all(self):
        m1.add_1_2()
        d.All = d.All + d.C

#main.py
from data_p import data
from module_1_p import module_1
from module_2_p import module_2

d = data()
m1 = module_1()
m2 = module_2()
m2.combine_it_all()
print("d.All = ", d.All)
print("END OF PROGRAM")
As you can see, there are problems with attributes and methods being referenced before they exist (d and m1 are not defined inside the modules). Unfortunately, I am getting up there in years, and certainly not the brightest programmer around, but I am hoping someone can show me how to make this simple example work so that I can fix my actual program.
The issue is that the classes are instantiated, but each instance has no idea about the others. One way to fix it is to pass references in the method calls.
It's probably possible to do better in the actual context, but with these changes, your example will work:
#module_1_p.py (line 5 changed)
class module_1:
    def __init__(self):
        place_holder = "something"
    def add_1_2(self, d):
        d.All = d.A + d.B
        print("new_string=", d.All)

#module_2_p.py (lines 5 and 6 changed)
class module_2:
    def __init__(self):
        place_holder = "something"
    def combine_it_all(self, m1, d):
        m1.add_1_2(d)
        d.All = d.All + d.C

#main.py (line 9 changed)
from data_p import data
from module_1_p import module_1
from module_2_p import module_2

d = data()
m1 = module_1()
m2 = module_2()
m2.combine_it_all(m1, d)
print("d.All = ", d.All)
print("END OF PROGRAM")
Is it possible to have serializable static class variables or methods in Python?
As an example suppose, I have the following code snippet:
import pickle

class Sample:
    count = 0  # class variable
    def __init__(self, a1=0, a2=0):
        self.a = a1
        self.b = a2
        Sample.count += 1

#MAIN
f = open("t1.dat", "wb")
d = dict()
for i in range(10):
    s = Sample(i, i*i)
    d[i] = s
pickle.dump(d, f)
print("Sample.count = " + str(Sample.count))
f.close()
The output is:
Sample.count = 10
Now, I have another reader program, similar to the one above:
import pickle

class Sample:
    count = 0  # class variable
    def __init__(self, a1=0, a2=0):
        self.a = a1
        self.b = a2
        Sample.count += 1

#MAIN
f = open("t1.dat", "rb")
d = pickle.load(f)
print("Sample.count = " + str(Sample.count))
The output is:
Sample.count = 0
My question is:
How do I load the class variable from my file? In other words, how do I serialize a class variable? If that's not directly possible, is there any alternative? Please suggest.
Since a class variable cannot be pickled, as an alternative I have used the following code snippet in the main part when reading from the file:
#MAIN
f = open("t1.dat", "rb")
d = pickle.load(f)
Sample.count = len(d.values())
print("Sample.count = " + str(Sample.count))
The output is now:
Sample.count = 10
Is this an acceptable solution? Any other alternatives?
Quoting the pickle documentation's section on "What can be pickled and unpickled?":
Similarly, classes are pickled by named reference, so the same restrictions in the unpickling environment apply. Note that none of the class’s code or data is pickled, so in the following example the class attribute attr is not restored in the unpickling environment:
class Foo:
    attr = 'a class attr'

picklestring = pickle.dumps(Foo)
So because attr, or in your case count, is part of the class definition, it never gets pickled. In your 'write' example, you're printing Sample.count which does exist but is not pickled in the first place.
You could store Sample.count in each instance as _count and restore it with Sample.count = self._count on unpickling. But remember that since your d is a dict, the instances may unpickle in any order. So essentially this won't work.
You'll need to add __getstate__/__setstate__ to your class to customize the way it pickles and unpickles, and put in some flag value (like _count) which you then manipulate (via whatever logic works consistently) as instances are restored. (Edit: this doesn't help with the given problem unless you store count in a global variable, access that in __setstate__, and update it each time an object is unpickled.)
Another potential workaround, but yuck: add an extra key to your dict d so that it also gets pickled. Before pickle.dump(d, f), do d['_count'] = Sample.count; when you read it back, restore with Sample.count = d['_count'].
Important caveat: this is not actually letting you pickle Sample.count itself, since what you're actually pickling (d) is a dictionary of Samples.
Edit: The Sample.count = len(d.values()) you've used as a workaround is very specific to your use case and does not apply to class attributes in general.
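For completeness, a sketch of that dict-based workaround using the names from the question (the '_count' key is just a sentinel; anything that doesn't collide with your integer keys works):
import pickle

# writer: stash the class variable alongside the data
d['_count'] = Sample.count
with open("t1.dat", "wb") as f:
    pickle.dump(d, f)

# reader: restore the class variable, then drop the sentinel entry
with open("t1.dat", "rb") as f:
    d = pickle.load(f)
Sample.count = d.pop('_count')
print("Sample.count = " + str(Sample.count))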
If I import a class and rename it by subclassing, it's fairly simple to find the new class name:
>>> from timeit import Timer
>>> class Test(Timer):
... pass
...
>>> test = Test()
>>> test.__class__.__name__
'Test'
However, if I alias the class as I import it, it retains the name from its host module:
>>> from timeit import Timer as Test2
>>> test2 = Test2()
>>> test2.__class__.__name__
'Timer'
Later, I want to provide user-facing output which is aware of the name they've given the class in their namespace. Consider:
def report_stats(timer):
    print("Runtime statistics for %s:" % timer.__class__.__name__)
    ...
Is there a way to get a string reading "Test2", short of iterating over variables in the namespace to test for an exact match?
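For reference, the namespace-iteration fallback mentioned above is short, if fragile (a sketch; it scans only the caller's globals and returns the first matching name, so multiple aliases or local bindings will confuse it):
import inspect

def alias_of(obj):
    # scan the calling frame's globals for a name bound to obj's class
    caller_globals = inspect.currentframe().f_back.f_globals
    for name, val in caller_globals.items():
        if val is obj.__class__:
            return name
    return obj.__class__.__name__  # fall back to the real class name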
There's a really terrible answer to my own question; I won't be accepting it, since it's probably pretty fragile (I only tested a limited set of call circumstances). I mostly just hunted this down for the challenge; I will most likely be using something more durable for my actual use case.
This assumes we have access to the __init__ function of the class we're trying to import as blah, and some sort of persistent external data store, at least for the more complicated edge cases:
import inspect, dis

class Idiom(object):
    description = None
    alias = None

    def __init__(self, desc):
        global data_ob
        self.description = desc
        if self.__class__.__name__ == 'Idiom':
            # cheat like hell to figure out who called us
            self.alias = data_ob.name_idiom(inspect.currentframe().f_back)
        else:
            self.alias = self.__class__.__name__

class DataOb(object):
    code = None
    locations = {}
    LOAD_NAME = 101
    codelen = None

    def name_idiom(self, frame):
        if not self.code:
            self.code = frame.f_code
            self.codelen = len(self.code.co_code)
            self.locations = {y: x for x, y in dis.findlinestarts(self.code)}
        target_line = frame.f_lineno
        addr_index = self.locations[target_line] + 1
        name_index = self.code.co_code[addr_index]
        # there's a chance we'll get called again this line,
        # so we want to seek to the next LOAD_NAME instance (101)
        addr_index += 1
        while addr_index < self.codelen:
            if self.code.co_code[addr_index] == self.LOAD_NAME:
                self.locations[target_line] = addr_index
                break
            addr_index += 1
        return self.code.co_names[name_index]
The short explanation of how this works is:
- we look up the previous frame from the __init__ function
- obtain the code object
- find bytecode locations for the start of every line in the code
- use the line number from the frame to grab the bytecode location for the start of that line
- locate a LOAD_NAME indicator in the bytecode for this line (I don't really follow this; my code assumes it'll be there)
- look in the next bytecode position for an index which indicates which position in the code.co_names tuple contains the "name" of the LOAD_NAME call
From here we can do something like:
>>> from rabbit_hole import Idiom as timer_bob
>>> with timer_bob("down the rabbit hole"):
... waste_some_time = list(range(50000))
...
timer_bob: down the rabbit hole
runtime: 0:00:00.001909, children: 0:00:00, overhead: 0:00:00.001909
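A more durable alternative (my own suggestion, not part of the code above) is simply to let callers pass the display name explicitly and fall back to the class name:
class Idiom(object):
    def __init__(self, desc, alias=None):
        self.description = desc
        # an explicit alias wins; otherwise use the (sub)class name
        self.alias = alias or self.__class__.__name__

timer = Idiom("down the rabbit hole", alias="timer_bob")
print(timer.alias)  # prints: timer_bob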
I have objects from various classes that work together to perform a certain task. The task requires a lot of parameters, provided by the user (through a configuration file). The parameters are used deep inside the system.
I have a choice of having the controller object read the configuration file and then allocate the parameters as appropriate to the next layer of objects, and so on in each layer. But only the objects themselves know which parameters they need, so the controller object would need to learn a lot of detail about every other object.
The other choice is to bundle all the parameters into a collection, and pass the whole collection into every function call (equivalently, create a global object that stores them, and is accessible to everyone). This looks and feels ugly, and would cause a variety of minor technical issues (e.g., I can't allow two objects to use parameters with the same name; etc.)
What to do?
I have used the "global collection" alternative in the past.
If you are concerned with naming: how would you handle this in your config file? The way I see it, your global collection is a data structure representing the same information you have in your config file, so if you have a way of resolving or avoiding name clashes in your config file, you can do the same in your global collection.
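A minimal sketch of that global-collection approach (the module name config.py, the JSON format, and the dotted key scheme are all just illustrative choices):
# config.py -- parsed once at startup, importable from anywhere
import json

params = {}

def load(path):
    with open(path) as f:
        raw = json.load(f)  # e.g. {"solver": {"tolerance": 1e-6}, ...}
    for section, entries in raw.items():
        for key, value in entries.items():
            # section-qualified keys avoid the name-clash problem
            params["%s.%s" % (section, key)] = value

# deep inside the system:
#   from config import params
#   tol = params["solver.tolerance"]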
I hope you don't feel like I'm thread-jacking you; what you're asking about is similar to what I was thinking about in terms of property aggregation, to avoid the models you want to avoid.
I also nicked a bit of the declarative vibe that Elixir has turned me onto.
I'd be curious what the Python gurus of Stack Overflow think of it, and what better alternatives there might be. I don't like big kwargs, and if I can avoid big constructors I prefer to.
#!/usr/bin/python
import inspect
from itertools import chain
from pprint import pprint
from abc import ABCMeta

class Property(object):
    def __init__(self, value=None):
        self._x = value
    def __repr__(self):
        return str(self._x)
    def getx(self):
        return self._x
    def setx(self, value):
        self._x = value
    def delx(self):
        del self._x
    value = property(getx, setx, delx, "I'm the property.")

class BaseClass(object):
    unique_baseclass_thing = Property()

    def get_prop_tree(self):
        mro = self.__class__.__mro__
        r = []
        for i in range(0, len(mro) - 1):
            child_prop_names = set(dir(mro[i]))
            parent_prop_names = set(dir(mro[i + 1]))
            l_k = list(chain(child_prop_names - parent_prop_names))
            l_n = [(x, getattr(mro[i], x, None)) for x in l_k]
            l_p = list(filter(lambda y: y[1].__class__ == Property, l_n))
            r.append((mro[i], dict(l_p)))
        return r

    def get_prop_list(self):
        return list(chain(*[x[1].items() for x in reversed(self.get_prop_tree())]))

class SubClass(BaseClass):
    unique_subclass_thing = Property(1)

class SubSubClass(SubClass):
    unique_subsubclass_thing_one = Property("blah")
    unique_subsubclass_thing_two = Property("foo")

if __name__ == '__main__':
    a = SubSubClass()
    for b in a.get_prop_tree():
        print('---------------')
        print(b[0].__name__)
        for prop in b[1].keys():
            print("\t", prop, "=", b[1][prop].value)
        print()
    for prop in a.get_prop_list():
        # loop body reconstructed from the output shown below
        print(prop[0], prop[1].value)
When you run it:
---------------
SubSubClass
	unique_subsubclass_thing_one = blah
	unique_subsubclass_thing_two = foo
---------------
SubClass
	unique_subclass_thing = 1
---------------
BaseClass
	unique_baseclass_thing = None

unique_baseclass_thing None
unique_subclass_thing 1
unique_subsubclass_thing_one blah
unique_subsubclass_thing_two foo