Log the request's journey (Python)

I want to log all the methods a single request has visited, once at the end of the request, for debugging purposes. I'm OK with starting with just one class at first.
Here is an example of my desired output:
logging full trace once
'__init__': ->
    'init_method_1' ->
        'init_method_1_1'
    'init_method_2'
'main_function': ->
    'first_main_function': ->
        'condition_method_3'
        'condition_method_5'
Here is my partial attempt:

import types

stacktrace_full = {}  # global store for the trace

class DecoMeta(type):
    def __new__(cls, name, bases, attrs):
        for attr_name, attr_value in attrs.items():
            if isinstance(attr_value, types.FunctionType):
                attrs[attr_name] = cls.deco(attr_value)
        return super(DecoMeta, cls).__new__(cls, name, bases, attrs)

    @classmethod
    def deco(cls, func):
        def wrapper(*args, **kwargs):
            name = func.__name__
            stacktrace_full.setdefault(name, [])
            sorted_functions = stacktrace_full[name]
            if len(sorted_functions) > 0:
                stacktrace_full[name].append(name)
            result = func(*args, **kwargs)
            print("after", func.__name__)
            return result
        return wrapper

class MyKlass(metaclass=DecoMeta):
    ...

Approaches
I think there are two different approaches worth considering for this problem:
1. A "simple" logging metaclass, or
2. A beefier metaclass to store call stacks
If you only need the method calls to be printed as they are made, and you don’t care about saving an actual record of the method call stack, then the first approach should do the trick.
I’m not certain which approach you’re looking for (if you had anything specific in mind), but if you know you need to store the method call stack, in addition to printing invocations, you might want to skip ahead to the second approach.
Note: All code hereafter assumes the presence of the following imports:
from types import FunctionType
1. Simple Logging Metaclass
This approach is far easier, and it doesn’t require too much extra work on top of your first attempt (depending on special circumstances we want to account for). However, as already mentioned, this metaclass is solely concerned with logging. If you definitely need to save a method call stack structure, consider skipping ahead to the second approach.
Changes to DecoMeta.__new__
With this approach, your DecoMeta.__new__ method remains mostly unchanged. The most notable change made in the code below is the addition of the “_in_progress_calls” list to namespace. DecoMeta.deco’s wrapper function will use this attribute to keep track of how many methods have been invoked, but not ended. With that information, it can appropriately indent the printed method names.
Also note the inclusion of staticmethod to the namespace attributes we want to decorate via DecoMeta.deco. However, you may not need this functionality. On the other hand, you may want to consider going further by accounting for classmethod and others, as well.
One other change you’ll notice is the creation of the cls variable, which is modified directly before being returned. However, your existing loop through the namespace, followed by both the creation and return of the class object should still do the trick here.
Changes to DecoMeta.deco
We set in_progress_calls to the current instance’s _in_progress_calls to be used later in wrapper
Next, we make a small modification to your first attempt to handle staticmethod — something you may or may not want, as mentioned earlier
In the “Log” section, we need to calculate pad for the following line, in which we print the name of the called method. After printing, we add the current method name to in_progress_calls, informing other methods of the in-progress method
In the “Invoke Method” section, we (optionally) handle staticmethod again.
Aside from this minor change, we make one small but significant change by adding the self argument to our func call. Without it, the normal methods of the class using DecoMeta would start complaining about not being given the positional self argument, which is kind of a big deal: func here is the plain, undecorated function, so it still needs to be handed the instance to which our method is bound.
The final change to your first attempt is to remove the last in_progress_calls value, since we have officially invoked the method and are returning result
Shut Up, and Show Me the Code
class DecoMeta(type):
    def __new__(mcs, name, bases, namespace):
        namespace["_in_progress_calls"] = []
        cls = super().__new__(mcs, name, bases, namespace)
        for attr_name, attr_value in namespace.items():
            if isinstance(attr_value, (FunctionType, staticmethod)):
                setattr(cls, attr_name, mcs.deco(attr_value))
        return cls

    @classmethod
    def deco(mcs, func):
        def wrapper(self, *args, **kwargs):
            in_progress_calls = getattr(self, "_in_progress_calls")

            try:
                name = func.__name__
            except AttributeError:  # Resolve `staticmethod` names
                name = func.__func__.__name__

            #################### Log ####################
            pad = " " * (len(in_progress_calls) * 3)
            print(f"{pad}`{name}`")
            in_progress_calls.append(name)

            #################### Invoke Method ####################
            try:
                result = func(self, *args, **kwargs)
            except TypeError:  # Properly invoke `staticmethod`-typed `func`
                result = func.__func__(*args, **kwargs)

            in_progress_calls.pop(-1)
            return result

        return wrapper
What Does It Do?
Here’s some code for a dummy class that I tried to model after your desired example output:
Setup
Don't pay too much attention to this block. It's just a silly class whose methods call other methods
class MyKlass(metaclass=DecoMeta):
    def __init__(self):
        self.i_1()
        self.i_2()

    #################### Init Methods ####################
    def i_1(self):
        self.i_1_1()

    def i_1_1(self): ...

    def i_2(self): ...

    #################### Main Methods ####################
    def main(self, x):
        self.m_1(x)

    def m_1(self, x):
        if x == 0:
            self.c_1()
            self.c_2()
            self.c_4()
        elif x == 1:
            self.c_3()
            self.c_5()

    #################### Condition Methods ####################
    def c_1(self): ...
    def c_2(self): ...
    def c_3(self): ...
    def c_4(self): ...
    def c_5(self): ...
Run
my_k = MyKlass()
my_k.main(1)
my_k.main(0)
Console Output
`__init__`
   `i_1`
      `i_1_1`
   `i_2`
`main`
   `m_1`
      `c_3`
      `c_5`
`main`
   `m_1`
      `c_1`
      `c_2`
      `c_4`
2. Beefy Metaclass to Store Call Stacks
Because I’m unsure whether you actually want this, and your question seems more focused on the metaclass part of the problem, rather than the call stack storage structure, I’ll focus on how to beef up the above metaclass to handle the required operations. Then, I’ll just make a few notes on the many ways you could store the call stack and “stub” out those parts of the code with a simple placeholder structure.
The obvious thing we need is a persistent call stack structure to extend the reach of the ephemeral _in_progress_calls attribute. So we can start by adding the following uncommented line to the top of DecoMeta.__new__:
namespace["full_stack"] = dict()
# namespace["_in_progress_calls"] = []
# cls = super().__new__(mcs, name, bases, namespace)
# ...
Unfortunately, the obviousness stops there, and things get tricky fairly quickly if you want to trace anything beyond very simple method call stacks.
Regarding how we need to save our call stack, there are a few things that might limit our options:
We can’t use a simple dict, with method names as keys, because in the resulting arbitrarily-complex call stack, it’s entirely possible that method X could call method Y multiple times
We can’t assume that every call to method X will invoke the same methods, as your example with “conditional” methods indicates. This means that we can’t say that any invocation of X will yield call stack Y, and neatly save that information somewhere
We need to limit the persistence of our new full_stack attribute, since we declare it on a class-wide basis in DecoMeta.__new__. If we don’t, then all instances of MyKlass will share the same full_stack, swiftly undermining its usefulness
Because the first two are highly dependent on your preferences/requirements and because I think your question is more concerned with the problem’s metaclass aspect, rather than call stack structure, I’ll start by addressing the third point.
To ensure each instance gets its own full_stack, we can add a new DecoMeta.__call__ method, which gets called whenever we make an instance of MyKlass (or anything using DecoMeta as a metaclass). Just drop the following into DecoMeta:
def __call__(cls, *args, **kwargs):
    setattr(cls, "full_stack", dict())
    return super().__call__(*args, **kwargs)
The last piece is to figure out how you want to structure full_stack and add the code to update it to the DecoMeta.deco.wrapper function.
A deeply-nested list of strings, naming the methods invoked in order, together with the methods invoked by those methods, and so on... should get the job done and sidestep the first two problems mentioned above, but that sounds messy, so I’ll let you decide if you actually need it.
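Just to make that shape concrete, for the Run block from the first approach such a nested structure might look something like the following (purely an illustration of the idea, not something any code below produces):

['__init__', ['i_1', ['i_1_1'], 'i_2'], 'main', ['m_1', ['c_3', 'c_5']], 'main', ['m_1', ['c_1', 'c_2', 'c_4']]]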
As an example, we can make full_stack a dict with keys of Tuple[str], and values of List[str]. Be warned that this will quietly fail under both of the aforementioned problem conditions; however, it does serve to illustrate the updates that would be necessary to DecoMeta.deco.wrapper should you decide to go further.
Only two lines need to be added:
First, immediately below the signature of DecoMeta.deco.wrapper, add the following uncommented line:
full_stack = getattr(self, "full_stack")
# in_progress_calls = getattr(self, "_in_progress_calls")
# ...
Second, in the “Log” section, right after the print call, add the following uncommented line:
# print(f"{pad}`{name}`")
full_stack.setdefault(tuple(in_progress_calls), []).append(name)
# in_progress_calls.append(name)
# ...
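To give a rough idea of the result (and of the quiet failure warned about above): after the Run block from the first approach, my_k.full_stack would look roughly like the following. Note how the condition methods from both main(1) and main(0) end up merged under the same ('main', 'm_1') key:

{
    (): ['__init__', 'main', 'main'],
    ('__init__',): ['i_1', 'i_2'],
    ('__init__', 'i_1'): ['i_1_1'],
    ('main',): ['m_1', 'm_1'],
    ('main', 'm_1'): ['c_3', 'c_5', 'c_1', 'c_2', 'c_4'],
}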
TL;DR
If I am correct in interpreting your question as asking for a metaclass that really does just log method calls, then the first approach (outlined above under the “Simple Logging Metaclass” heading) should work great. However, if you also need to save a full record of all method calls, you can start by following the suggestions under the “Beefy Metaclass to Store Call Stacks” heading.
Please let me know if you have any other questions or clarifications. I hope this was useful!

Related

Make python linter "understand" metaclass changes

I am playing around with Python metaclasses, and trying to write a metaclass that changes or adds methods dynamically for its subclasses.
For example, here is a metaclass whose purpose is to find async methods in the subclass (whose names also end with the string "_async") and add an additional "synchronized" version of each such method:
import asyncio

class AsyncClientMetaclass(type):
    @staticmethod
    def async_func_to_sync(func):
        # run_synchronized is a helper (defined elsewhere) that runs the coroutine to completion
        return lambda *_args, **_kwargs: run_synchronized(func(*_args, **_kwargs))

    def __new__(mcs, *args, **kwargs):
        cls = super().__new__(mcs, *args, **kwargs)
        _, __, d = args
        for key, value in d.items():
            if asyncio.iscoroutinefunction(value) and key.endswith('_async'):
                sync_func_name = key[:-len('_async')]
                if sync_func_name in d:
                    continue
                if isinstance(value, staticmethod):
                    value = value.__func__
                setattr(cls, sync_func_name, mcs.async_func_to_sync(value))
        return cls

# usage
class SleepClient(metaclass=AsyncClientMetaclass):
    async def sleep_async(self, seconds):
        await asyncio.sleep(seconds)
        return f'slept for {seconds} seconds'

c = SleepClient()
res = c.sleep(2)
print(res)  # prints "slept for 2 seconds"
This example works great; the only problem is that the Python linter warns about using the non-async method that the metaclass has created (for the example above, the warning is Unresolved attribute reference 'sleep' for class 'SleepClient').
For now, I am adding pylint: disable whenever I use a sync method created by the metaclass, but I am wondering whether there is any way to add a custom linter rule along with the metaclass, so the linter knows those methods will be created dynamically.
And do you think there is a better way to achieve this purpose than using a metaclass?
Thanks!
As put by Chepner: no static code analyser can know about these methods, not linters nor type-annotation checking tools like MyPy, unless you give them a hint.
Maybe there is one way out: static type annotators will consume a parallel ".pyi" stub file, placed side by side with the corresponding ".py" file, which can list class interfaces, and (I may be wrong) whatever they find there will supersede what the tool "sees" in the actual .py file.
So, you could instrument your metaclass to, aside from generating the actual methods, render their signatures and the signatures of the "real" methods and attributes of the class as source code, and record those in the proper ".pyi" file. You will have to run this code once before the linter can find its way, but it is the only workaround I can think of.
In other words, to be clear:
make a mechanism, called by the metaclass, that will check for the existence and timestamp of the appropriate ".pyi" file for the classes it is modifying, and generate it if needed. By checking the timestamp, or by generating the file only when some "--build" variable is active, there should be no runtime penalty, and static type checkers (and possibly some linters) should be pleased.
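For illustration, a hand-written stub for the SleepClient example above might look like the following; the file and module names (sleep_client.py / sleep_client.pyi) are just assumptions for the sketch:

# sleep_client.pyi -- hypothetical stub sitting next to sleep_client.py
class SleepClient:
    async def sleep_async(self, seconds: float) -> str: ...
    def sleep(self, seconds: float) -> str: ...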

How can I make a method available only from within the class

Good evening. I need some advice; googling, I couldn't find a proper direction.
I need to make a method available only from within the class (i.e. to other methods or functions of the class). If it is called from the program as a method of an object of the class, I want:
the method to be invisible/not available to IntelliSense
an error to be raised if I'm stubborn and code it anyway.
Attaching a screenshot to make it more clear.
Any advice is appreciated. Thank you.
Screenshot of the problem
There are no private methods in Python. Common usage dictates preceding a method that's only supposed to be used internally with one or two underscores, depending on the case. See here: What is the meaning of single and double underscore before an object name?
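As a quick illustration of that convention (and of the name mangling a double underscore triggers), here is a small sketch; the class and method names are made up:

class Example:
    def _internal(self):          # convention only: "please don't call this from outside"
        pass
    def __really_internal(self):  # name-mangled to _Example__really_internal
        pass

e = Example()
e._internal()                  # still possible, but signals you're going around the API
e._Example__really_internal()  # the mangled name is still reachable too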
As others have mentioned, there are no private methods in Python. I also don't know how to make one invisible to IntelliSense (probably there is some setting), but what you could theoretically do is this:
import re

def make_private(func):
    def inner(*args, **kwargs):
        name = func.__name__
        pattern = re.compile(fr'(.*)\.{name}')
        with open(__file__) as file:
            for line in file:
                lst = pattern.findall(line)
                if (lst and not line.strip().startswith('#')
                        and not all(g.strip() == 'self' for g in lst)):
                    raise Exception()
        return func(*args, **kwargs)
    return inner

class MyClass:
    @make_private
    def some_method(self):
        pass

    def some_other_method(self):
        self.some_method()

m = MyClass()
# m.some_method()
m.some_other_method()
make_private is a decorator: when you call the function it decorates, it first reads the entire file line by line and checks whether, anywhere in the file, this method is called without being prefixed with self.. If so, the call is considered to come from outside the class and an Exception is raised (probably add some message to it, though).
Issues could start once you have multiple files, and this wouldn't entirely prevent someone from calling it if they really wanted to, for example like this:
self = MyClass()
self.some_method()
But mostly this would raise an exception.
OK, solved: to hide the method from the IDE's IntelliSense I added the double underscore (works fine with PyCharm, not with VS Code), then I used the accessify module to prevent forced execution via myobj._myclass__somemethod():

from accessify import private

class myclass:
    @private
    def __somemethod(self):
        ...

Calling functions / class methods inside a for loop

I'm working on some classes, and for the testing process it would be very useful to be able to run the class methods in a for loop. I'm adding methods and changing their names, and I want this to be picked up automatically in the file where I run the class for testing.
I use the function below to get a list of the methods I need to run automatically (there were some other conditional statements, deleted for this example, that make sure I only run certain methods that require testing and that take only self as an argument):
def get_class_methods(class_to_get_methods_from):
    import inspect
    methods = []
    for name, type in inspect.getmembers(class_to_get_methods_from):
        if 'method' in str(type) and str(name).startswith('_') == False:
            methods.append(name)
    return methods
Is it possible to use the returned list methods to run the class methods in a for loop?
Or is there any other way to make sure I can run my class methods in my testing/running file without having to mirror every change I make in the class?
Thanks!
Looks like you want getattr(object, name[, default]):
class Foo(object):
    def bar(self):
        print("bar({})".format(self))

f = Foo()
method = getattr(f, "bar")
method()
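Combining this with the get_class_methods helper from the question, the for loop could look roughly like this (a sketch, assuming every listed method takes nothing beyond self; the instance is passed to the helper so its members show up as bound methods):

f = Foo()
for name in get_class_methods(f):
    method = getattr(f, name)  # look the bound method up by name
    method()                   # call it with no extra arguments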
As a side note: I'm not sure that dynamically generating lists of methods to test is such a good idea (it looks rather like an antipattern to me), but it's hard to tell without the whole project's context, so take this remark with the required grain of salt ;)

Prevent other classes' methods from calling my constructor

How do I make a Python "constructor" "private", so that objects of its class can only be created by calling static methods? I know there are no C++/Java-style private methods in Python, but I'm looking for another way to prevent others from calling my constructor (or another method).
I have something like:
class Response(object):
    @staticmethod
    def from_xml(source):
        ret = Response()
        # parse xml into ret
        return ret

    @staticmethod
    def from_json(source):
        # parse json
        pass
and would like the following behavior:
r = Response() # should fail
r = Response.from_json(source) # should be allowed
The reason for using static methods is that I always forget what arguments my constructors take - say JSON or an already parsed object. Even then, I sometimes forget about the static methods and call the constructor directly (not to mention other people using my code). Documenting this contract won't help with my forgetfulness. I'd rather enforce it with an assertion.
And contrary to some of the commenters, I don't think this is unpythonic - "explicit is better than implicit", and "there should be only one way to do it".
How can I get a gentle reminder when I'm doing it wrong? I'd prefer a solution where I don't have to change the static methods, just a decorator or a single line drop-in for the constructor would be great. A la:
class Response(object):
    def __init__(self):
        assert not called_from_outside()
I think this is what you're looking for - but it's kind of unpythonic as far as I'm concerned.
class Foo(object):
    def __init__(self):
        raise NotImplementedError()

    def __new__(cls):
        bare_instance = object.__new__(cls)
        # you may want to have some common initialisation code here
        return bare_instance

    @classmethod
    def from_whatever(cls, arg):
        instance = cls.__new__(cls)
        instance.arg = arg
        return instance
Given your example (from_json and from_xml), I assume you're retrieving attribute values from either a json or xml source. In this case, the pythonic solution would be to have a normal initializer and call it from your alternate constructors, i.e.:
class Foo(object):
    def __init__(self, arg):
        self.arg = arg

    @classmethod
    def from_json(cls, source):
        arg = get_arg_value_from_json_source(source)
        return cls(arg)

    @classmethod
    def from_xml(cls, source):
        arg = get_arg_value_from_xml_source(source)
        return cls(arg)
Oh and yes, about the first example: it will prevent your class from being instantiated in the usual way (calling the class), but the client code will still be able to call on Foo.__new__(Foo), so it's really a waste of time. Also it will make unit testing harder if you cannot instantiate your class in the most ordinary way... and quite a few of us will hate you for this.
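For the record, the bypass mentioned here is simply:

f = Foo.__new__(Foo)  # skips calling Foo(), so the NotImplementedError in __init__ never fires
f.arg = "whatever"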
I'd recommend turning the factory methods into module-level factory functions, then hiding the class itself from users of your module.
def one_constructor(source):
    return _Response(...)

def another_constructor(source):
    return _Response(...)

class _Response(object):
    ...
You can see this approach used in modules like re, where match objects are only constructed through functions like match and search, and the documentation doesn't actually name the match object type. (At least, the 3.4 documentation doesn't. The 2.7 documentation incorrectly refers to re.MatchObject, which doesn't exist.) The match object type also resists direct construction:
>>> type(re.match('',''))()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: cannot create '_sre.SRE_Match' instances
but unfortunately, the way it does so relies upon the C API, so it's not available to ordinary Python code.
Good discussion in the comments.
For the minimal use case you describe,
class Response(object):
    def __init__(self, construct_info=None):
        if construct_info is None:
            raise ValueError("must create instance using from_xml or from_json")
        # etc

    @staticmethod
    def from_xml(source):
        info = {}  # parse info into here
        return Response(info)

    @staticmethod
    def from_json(source):
        info = {}  # parse info into here
        return Response(info)
It can be gotten around by a user who passes in a hand-constructed info, but at that point they'll have to read the code anyway and the static method will provide the path of least resistance. You can't stop them, but you can gently discourage them. It's Python, after all.
This might be achievable through metaclasses, but is heavily discouraged in Python. Python is not Java. There is no first-class notion of public vs private in Python; the idea is that users of the language are "consenting adults" and can use methods however they like. Generally, functions that are intended to be "private" (as in not part of the API) are denoted by a single leading underscore; however, this is mostly just convention and there's nothing stopping a user from using these functions.
In your case, the Pythonic thing to do would be to default the constructor to one of the available from_foo methods, or even to create a "smart constructor" that can find the appropriate parser for most cases. Or, add an optional keyword arg to the __init__ method that determines which parser to use.
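As a rough sketch of that last suggestion (the keyword name and the _parse_json / _parse_xml helpers are hypothetical):

class Response(object):
    def __init__(self, source, fmt="json"):
        # pick the parser from an explicit keyword argument
        if fmt == "json":
            self.data = self._parse_json(source)
        elif fmt == "xml":
            self.data = self._parse_xml(source)
        else:
            raise ValueError("unknown format: %r" % fmt)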
An alternative API (and one I've seen far more in Python APIs) if you want to keep it explicit for the user would be to use keyword arguments:
class Foo(object):
    def __init__(self, *, xml_source=None, json_source=None):
        if xml_source and json_source:
            raise ValueError("Only one source can be given.")
        elif xml_source:
            from_xml(xml_source)
        elif json_source:
            from_json(json_source)
        else:
            raise ValueError("One source must be given.")
Here I'm using 3.x's * to signify keyword-only arguments, which helps enforce the explicit API. In 2.x this can be recreated with **kwargs.
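For completeness, a rough 2.x-compatible sketch of that idea would pop the expected keywords out of **kwargs and reject the rest:

class Foo(object):
    def __init__(self, **kwargs):
        xml_source = kwargs.pop('xml_source', None)
        json_source = kwargs.pop('json_source', None)
        if kwargs:
            raise TypeError("Unexpected arguments: %r" % kwargs)
        if xml_source and json_source:
            raise ValueError("Only one source can be given.")
        # ... same dispatch as above ...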
Naturally, this doesn't scale well to lots of arguments or options, but there are definitely cases where this style makes sense. (I'd argue bruno desthuilliers probably has it right for this case, from what we know, but I'll leave this here as an option for others).
The following is similar to what I ended up doing. It is a bit more general than what was asked in the question.
I made a function called guard_call, that checks if the current method is being called from a method of a certain class.
This has multiple uses. For example, I used the Command Pattern to implement undo and redo, and used this to ensure that my objects were only ever modified by command objects, and not random other code (which would make undo impossible).
In this concrete case, I place a guard in the constructor ensuring only Response methods can call it:
class Response(object):
    def __init__(self):
        guard_call([Response])

    @staticmethod
    def from_xml(source):
        ret = Response()
        # parse xml into ret
        return ret
For this specific case, you could probably make this a decorator and remove the argument, but I didn't do that here.
Here is the rest of the code. It's been a long time since I tested it, and I can't guarantee that it works in all edge cases, so beware. It is also still Python 2. Another caveat is that it is slow, because it uses inspect, so don't use it in tight loops or when speed is an issue; it might be useful when correctness is more important than speed.
Some day I might clean this up and release it as a library; I have a couple more of these functions, including one that asserts you are running on a particular thread. You may sneer at the hackishness (it is hacky), but I did find this technique useful to smoke out some hard-to-find bugs, and to ensure my code still behaves during refactorings, for example.
from __future__ import print_function

import inspect

# http://stackoverflow.com/a/2220759/143091
def get_class_from_frame(fr):
    args, _, _, value_dict = inspect.getargvalues(fr)
    # we check whether the first parameter of the frame's function is named 'self'
    if len(args) and args[0] == 'self':
        # in that case, 'self' will be referenced in value_dict
        instance = value_dict.get('self', None)
        if instance:
            # return its class
            return getattr(instance, '__class__', None)
    # return None otherwise
    return None

def guard_call(allowed_classes, level=1):
    stack_info = inspect.stack()[level + 1]
    frame = stack_info[0]
    method = stack_info[3]
    calling_class = get_class_from_frame(frame)
    # print("calling class:", calling_class)

    if calling_class:
        for klass in allowed_classes:
            if issubclass(calling_class, klass):
                return

    allowed_str = ", ".join(klass.__name__ for klass in allowed_classes)
    filename = stack_info[1]
    line = stack_info[2]

    stack_info_2 = inspect.stack()[level]
    protected_method = stack_info_2[3]
    protected_frame = stack_info_2[0]
    protected_class = get_class_from_frame(protected_frame)

    if calling_class:
        origin = "%s:%s" % (calling_class.__name__, method)
    else:
        origin = method

    print()
    print("In %s, line %d:" % (filename, line))
    print("Warning, call to %s:%s was not made from %s, but from %s!" %
          (protected_class.__name__, protected_method, allowed_str, origin))
    assert False
r = Response() # should fail
r = Response.from_json("...") # should be allowed

How to add new parameter to subclass of immutable type in a function chain?

I have a class inheriting from an immutable type, which uses __new__. How would I add a new parameter to one function and elegantly pass it to a second function that I know is in the chain of execution?
class ImmutableClass(ImmutableType):
    def functionA(self, param1=None, param2=None):
        # Do some stuff
        stuff = self.functionB(param2)
        # Do some more stuff
        return stuff

    def functionB(self, param2):
        # Do some stuff
        newObjects = [ImmutableClass() for x in range(len(param2))]
        stuff = self.functionC(newObjects)
        # Do some more stuff
        return stuff

    def functionC(self, param3=None):
        # Do some stuff
        return stuff

class SomeClass(ImmutableClass):
    def __new__(cls, *args, **kwargs):
        return ImmutableClass.__new__(SomeClass, *args, **kwargs)

    def functionA(self, newParameter=True, *args, **kwargs):
        # How do I pass newParameter to functionC?
        return super(ImmutableClass, self).functionA(*args, **kwargs)

    def functionC(self, newParameter=True, *args, **kwargs):
        # How do I get the parameter from functionA?
        if not newParameter:
            print('Success!')
        return super(ImmutableClass, self).functionC(*args, **kwargs)
I know I can add **kwargs to all of ImmutableClass's functions in the chain, but this feels a bit messy: callers of these functions could pass invalid arguments for quite some time before erroring out, or pass a flag to some unintended function. I'm hoping there's some obvious solution I'm not seeing.
I know ImmutableClass has functionA call functionB, which then calls functionC. However, the call in functionB is in the middle of the code, so I can't simply prepend/append the new call. I can't use a member variable (as far as I know), because __new__ re-initializes it halfway through the call. I have access to ImmutableClass's source, but I'd prefer not to alter it, as ImmutableClass should have no knowledge of SomeClass. I've thought of using a global variable, but I'm afraid that could do unexpected things if a section of SomeClass starts a second call of functionA.
