PyCharm: List all usages of all methods of a class - python

I'm aware that I can use 'Find Usages' to find what's calling a method in a class.
Is there a way of doing this for all methods on a given class (or indeed all methods in a file)?
Use Case: I'm trying to refactor a god class that should almost certainly be several classes. It would be nice to see which subset of the god class's methods each interacting class uses. It seems like PyCharm has done the hard bit of this but doesn't let me scale it up.
I'm using PyCharm 2016.1.2
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206666319-See-all-callers-of-all-methods-of-a-class

This is possible, but you have to deal with abstraction; otherwise PyCharm doesn't know that the method in question belongs to your specific class. In other words: type hinting.
Any call to that method made through an abstraction layer that lacks type hinting will not be found.
Example:
# The class that has the method you're searching for.
class Inst(object):
    def mymethod(self):
        return

# Not the class you're looking for, but it too has a method of the same name.
class SomethingElse(object):
    def mymethod(self):
        return

# Option 1 -- assert hinting
def foo(inst):
    assert isinstance(inst, Inst)
    inst.mymethod()

# Option 2 -- docstring hinting
def bar(inst):
    """
    :param inst:
    :type inst: Inst
    :return:
    :rtype:
    """
    inst.mymethod()
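A third option, not in the original answer but worth sketching, is a PEP 484 parameter annotation (assuming a Python 3 codebase and a PyCharm version that honours such hints), which gives PyCharm the same information:
# Option 3 -- PEP 484 parameter annotation (added sketch, not part of the original answer)
def baz(inst: Inst):
    inst.mymethod()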

Nowadays it would be rather easy for PyCharm to use Python 3.6 type hints and match function calls "correctly", since type hints are part of the Python 3.5/3.6 language. Of course, partial type hints in a big codebase force some compromises when resolving the targets of method calls.
Here is an example of how type hints make it easy to do type inference and resolve the correct target of a call.
def an_example():
    a: SoftagramAnalysisAction = SoftagramAnalysisAction(
        analysis_context=analysis_context,
        preprocessors=list(preprocessors),
        analyzers=list(analyzers),
        analysis_control_params=analysis_control_params)
    output = a.run()
In the above example, the local variable a is explicitly annotated with the type SoftagramAnalysisAction, which makes it clear that the run() call below targets the run method of that class (or of any of its subclasses).
The current version (2018.1) does not resolve these kinds of calls correctly, but I hope that will change in the future.

Related

Reusing method from another class without inheritance or delegation in Python

I want to use a method from another class.
Neither inheritance nor delegation is a good choice (to my understanding) because the existing class is too complicated to override and too expensive to instantiate.
Note that modifying the existing class is not allowed (legacy project, you know).
I came up with a way:
class Old:
    def a(self):
        print('Old.a')

class Mine:
    b = Old.a
and it shows
>>> Mine().b()
Old.a
>>> Mine().b
<bound method Old.a of <__main__.Mine object at 0x...>>
It seems fine.
And I tried some more complicated cases, including attribute modification (like self.foo = 'bar'), and everything seems okay.
My question:
What is actually happening when I define methods like that?
Will that safely do the trick for my need mentioned above?
Explanation
What's happening is that you are defining a callable class attribute of class Mine called b. However, this works:
m = Mine()
m.b()
But this won't:
Mine.b()
Why doesn't the second way work?
When you call a function on an instance of a class, Python expects the first argument to be the actual object upon which the function was called; the self argument is automatically passed into the function behind the scenes. Since we called Mine.b() without an instantiated instance of any object, no self was passed into b().
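For illustration, calling it on the class fails roughly like this (a sketch; the exact wording of the message varies by Python version):
>>> Mine.b()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: a() missing 1 required positional argument: 'self'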
Will this "do the trick"?
As for whether this will do the trick, that depends.
As long as Mine can behave the same way as Old, Python won't complain. This is because the Python interpreter does not care about the "type" of self. As long as it walks like a duck and quacks like a duck, it's a duck (see duck typing). However, can you guarantee this? What if someone goes and changes the implementation of Old.a? Most of the time, as a client of another system, we have no say in when the private implementation of its functions changes.
A simpler solution might be to pull out the functionality you are missing into a separate module. Yes, there is some code duplication but at least you can be confident the code won't change from under you.
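As a rough sketch of that extraction (hypothetical module and function names):
# helpers.py -- the needed behaviour, duplicated rather than borrowed
def a(obj):
    print('helpers.a')

# mine.py
import helpers

class Mine:
    def b(self):
        return helpers.a(self)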
Ultimately, if you can guarantee the behavior of Old and Mine will be similar enough for the purposes of Old.a, python really shouldn't care.

Python generic type wrapper

I am assuming this question has been asked a million times already, but I can't seem to make sense of a few things, so please bear with me here. I am trying to do generic inheritance in Python. This is what I want to accomplish: I have a function that takes in a generic type and returns a class that inherits from that parent class.
This is what the code looks like:
def make_foo(parent):
    class Relationship(parent):
        # def __init__(self):
        @staticmethod
        def should_see_this_method(self):
            print("Hello here")
            return True
    return Relationship
Now this is the piece of code I have:
NewType = make_foo(str)
another_test: NewType = "Hello"
another_test.should_see_this_method()
another_test.capitalize()
Now I am getting AttributeError: 'str' object has no attribute 'should_see_this_method'
I am not sure if this is an anti-pattern or not, but I am just curious to know how I can do this.
Thanks.
This line:
another_test: NewType = "Hello"
doesn't do what you think it does.
This is a type hint. Hints are used by static type checkers, linters, and the like to check if your code has obvious bugs or is being used incorrectly. It helps you at "compile time" to catch things that are possible sources of errors, but it has no impact on the runtime behavior of the code.
Importantly, it does not construct an object of type NewType. It constructs a str. You can see this easily by calling type(another_test), which indicates this is a str. (It's also in the message of the AttributeError in your question.)
To actually construct that object, you have to do the usual thing:
>>> another_test = NewType("Hello")
>>> isinstance(another_test, NewType)
True
An unrelated problem in your code: staticmethods should not take self as the first argument. They are not bound to any instance. You'll see an error once you actually get to the line which calls the method.
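For reference, here is a corrected sketch under the assumption that should_see_this_method is meant to be an ordinary instance method (names reused from the question):
def make_foo(parent):
    class Relationship(parent):
        def should_see_this_method(self):
            print("Hello here")
            return True
    return Relationship

NewType = make_foo(str)
another_test = NewType("Hello")        # actually constructs an instance of the subclass
another_test.should_see_this_method()  # prints "Hello here"
another_test.capitalize()              # str methods still work: 'Hello'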

What is dynamic dispatch and duck typing?

When using PyCharm, it often points out an error, saying:
Unresolved reference 'name'. This inspection detects names that should
resolve but don't. Due to dynamic dispatch and duck typing, this is
possible in a limited but useful number of cases. Top-level and
class-level items are supported better than instance items.
I've snooped around about this, but most of the questions and information I find are about suppressing the message. What I want to know is:
What is dynamic dispatch/duck typing?
What are (or an example of) these "useful number of cases"?
Python uses a duck typing convention. This means that you do not have to declare what type a name is, unlike in Java, for example, where you must declare explicitly that a variable is an int or an Object. Essentially, type checking is done at runtime.
"If it walks like a duck and it quacks like a duck, then it must be a duck."
In Python everything will seem to work until you try to manipulate an object in a way it is not designed for. Basically, an object may not have a certain method or attribute that another might, and you won't find this out until Python throws an error when you try it.
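A minimal illustration (an assumed example, not from the original answer):
class Duck:
    def quack(self):
        print("Quack")

class Person:
    pass

def make_it_quack(thing):
    thing.quack()          # no declared type; anything with a quack() method works

make_it_quack(Duck())      # fine
make_it_quack(Person())    # AttributeError, but only discovered at runtime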
Dynamic Dispatch is the practice of the compiler or environment choosing which version of a polymorphic function to use at runtime. If you have multiple implementations of a method, you can use them in different ways despite the methods having the same or similar properties/attributes. Here's an example:
class Foo:
    def flush(self):
        pass

class Bar:
    def flush(self):
        pass
Both classes have a flush() method, but the correct implementation is chosen at runtime, based on the type of the object it is called on.
Python is not the best example of this process since methods can take multiple parameters and don't have to be reimplemented. Java is a better example, but I'm not fluent enough in it to provide a correct example.
The warning means that you're using a variable that PyCharm doesn't recognise, but due to Python's dynamic nature it can't be sure if it's right or you're right.
For example you may have the following code:
class myClass():
    def myfunc(self):
        print(self.name)
PyCharm will probably complain that self.name can't be resolved. However, you may use the class like this:
my_class = myClass()
my_class.name = "Alastair"
my_class.myfunc()
which is perfectly valid (albeit brittle).
The message goes on to say that it's more confident about attribute and methods that are less ambiguous. For example:
class myClass():
    my_instance_var = "Al"

    def myfunc(self):
        print(self.my_instance_var)
As my_instance_var is defined in the source code (a class attribute), PyCharm can be confident it exists.
(Don't use class attributes unless you know what you're doing!)

Custom type hint annotation

I just wrote a simple @autowired decorator for Python that instantiates classes based on type annotations.
To enable lazy initialization of the class, the package provides a lazy(type_annotation: (Type, str)) function so that the caller can use it like this:
@autowired
def foo(bla, *, dep: lazy(MyClass)):
    ...
This works very well; under the hood, this lazy function just returns a function that returns the actual type and that has a lazy_init property set to True. Also, this does not break IDEs' (e.g., PyCharm's) code completion feature.
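A rough sketch of what such a lazy helper might look like (the question doesn't show the implementation, so this is an assumption based on the description above):
def lazy(type_annotation):
    def resolve():
        # Hand back the wrapped type (or type name) when the decorator asks for it.
        return type_annotation
    resolve.lazy_init = True
    return resolve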
But I want to enable the use of a subscriptable Lazy type instead of the lazy function.
Like this:
@autowired
def foo(bla, *, dep: Lazy[MyClass]):
    ...
This would behave very much like typing.Union. And while I'm able to implement the subscriptable type, IDEs' code completion feature will be rendered useless as it will present suggestions for attributes in the Lazy class, not MyClass.
I've been working with this code:
class LazyMetaclass(type):
    def __getitem__(lazy_type, type_annotation):
        return lazy_type(type_annotation)

class Lazy(metaclass=LazyMetaclass):
    def __init__(self, type_annotation):
        self.type_annotation = type_annotation
I tried redefining Lazy.__dict__ as a property to forward to the subscripted type's __dict__ but this seems to have no effect on the code completion feature of PyCharm.
I strongly believe that what I'm trying to achieve is possible, as typing.Union works well with IDEs' code completion. I've been trying to decipher what in the source code of typing.Union makes it behave well with code completion features, but with no success so far.
For the Container[Type] notation to work you would want to create a user-defined generic type:
from typing import TypeVar, Generic

T = TypeVar('T')

class Lazy(Generic[T]):
    pass
You then use
def foo(bla, *, dep: Lazy[MyClass]):
and Lazy is seen as a container that holds the class.
Note: this still means the IDE sees dep as an object of type Lazy. Lazy is a container type here, holding an object of type MyClass. Your IDE won't auto-complete for the MyClass type; you can't use it that way.
The notation also doesn't create an instance of the Lazy class; it creates a subclass instead, via the GenericMeta metaclass. The subclass has a special attribute __args__ to let you introspect the subscription arguments:
>>> a = Lazy[str]
>>> issubclass(a, Lazy)
True
>>> a.__args__
(<class 'str'>,)
If all you wanted was to reach into the type annotations at runtime but resolve the name lazily, you could just support a string value:
def foo(bla, *, dep: 'MyClass'):
This is a valid type annotation, and your decorator could resolve the name at runtime by using the typing.get_type_hints() function (at a deferred time, not at decoration time), or by wrapping strings in your lazy() callable at decoration time.
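As a rough illustration of that deferred resolution (hypothetical names; get_type_hints() resolves string annotations against the function's module globals):
import typing

class MyClass:
    pass

def foo(bla, *, dep: 'MyClass'):
    ...

# Later, when the dependency is actually needed:
hints = typing.get_type_hints(foo)
print(hints['dep'])   # <class '__main__.MyClass'> when run as a script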
If lazy() is meant to flag a type to be treated differently from other type hints, then you are trying to overload the type hint annotations with some other meaning; type hinting simply doesn't support such use cases, and using a Lazy[...] container can't make it work.

Python's equivalent of .Net's sealed class

Does python have anything similar to a sealed class? I believe it's also known as final class, in java.
In other words, in Python, can we mark a class so it can never be inherited from or expanded upon? Did Python ever consider having such a feature? Why?
Disclaimers
I'm actually trying to understand why sealed classes even exist. The answer here (and in many, many, many, many, many, really many other places) did not satisfy me at all, so I'm trying to look from a different angle. Please avoid theoretical answers to this question and focus on the title! Or, if you insist, at least please give one very good and practical example of a sealed class in C#, pointing out what would break big time if it were unsealed.
I'm no expert in either language, but I do know a bit of both. Just yesterday, while coding in C#, I got to know about the existence of sealed classes. And now I'm wondering if Python has anything equivalent to that. I believe there is a very good reason for its existence, but I'm really not getting it.
You can use a metaclass to prevent subclassing:
class Final(type):
    def __new__(cls, name, bases, classdict):
        for b in bases:
            if isinstance(b, Final):
                raise TypeError("type '{0}' is not an acceptable base type".format(b.__name__))
        return type.__new__(cls, name, bases, dict(classdict))

# Python 2 syntax; in Python 3 you would write: class Foo(metaclass=Final)
class Foo:
    __metaclass__ = Final

class Bar(Foo):
    pass
gives:
>>> class Bar(Foo):
...     pass
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 5, in __new__
TypeError: type 'Foo' is not an acceptable base type
The __metaclass__ = Final line makes the Foo class 'sealed' (in Python 3, the equivalent is class Foo(metaclass=Final)).
Note that you'd use a sealed class in .NET as a performance measure; since there won't be any subclassing, methods can be addressed directly. Python method lookups work very differently, and there is no advantage or disadvantage, when it comes to method lookups, to using a metaclass like the above example.
Before we talk Python, let's talk "sealed":
I, too, have heard that the advantage of .Net sealed / Java final / C++ entirely-nonvirtual classes is performance. I heard it from a .Net dev at Microsoft, so maybe it's true. If you're building a heavy-use, highly-performance-sensitive app or framework, you may want to seal a handful of classes at or near the real, profiled bottleneck. Particularly classes that you are using within your own code.
For most applications of software, sealing a class that other teams consume as part of a framework/library/API is kinda...weird.
Mostly because there's a simple work-around for any sealed class, anyway.
I teach "Essential Test-Driven Development" courses, and in those three languages, I suggest consumers of such a sealed class wrap it in a delegating proxy that has the exact same method signatures, but they're override-able (virtual), so devs can create test-doubles for these slow, nondeterministic, or side-effect-inducing external dependencies.
[Warning: below snark intended as humor. Please read with your sense of humor subroutines activated. I do realize that there are cases where sealed/final are necessary.]
The proxy (which is not test code) effectively unseals (re-virtualizes) the class, resulting in v-table look-ups and possibly less efficient code (unless the compiler optimizer is competent enough to in-line the delegation). The advantages are that you can test your own code efficiently, saving living, breathing humans weeks of debugging time (in contrast to saving your app a few million microseconds) per month... [Disclaimer: that's just a WAG. Yeah, I know, your app is special. ;-]
So, my recommendations: (1) trust your compiler's optimizer, (2) stop creating unnecessary sealed/final/non-virtual classes that you built in order to either (a) eke out every microsecond of performance at a place that is likely not your bottleneck anyway (the keyboard, the Internet...), or (b) create some sort of misguided compile-time constraint on the "junior developers" on your team (yeah...I've seen that, too).
Oh, and (3) write the test first. ;-)
Okay, yes, there's always link-time mocking, too (e.g. TypeMock). You got me. Go ahead, seal your class. Whatevs.
Back to Python: The fact that there's a hack rather than a keyword is probably a reflection of the pure-virtual nature of Python. It's just not "natural."
By the way, I came to this question because I had the exact same question. Working on the Python port of my ever-so-challenging and realistic legacy-code lab, and I wanted to know if Python had such an abominable keyword as sealed or final (I use them in the Java, C#, and C++ courses as a challenge to unit testing). Apparently it doesn't. Now I have to find something equally challenging about untested Python code. Hmmm...
Python does have classes that can't be extended, such as bool or NoneType:
>>> class ExtendedBool(bool):
...     pass
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: type 'bool' is not an acceptable base type
However, such classes cannot be created from Python code. (In the CPython C API, they are created by not setting the Py_TPFLAGS_BASETYPE flag.)
Python 3.6 will introduce the __init_subclass__ special method; raising an error from it will prevent creating subclasses. For older versions, a metaclass can be used.
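A minimal sketch of the __init_subclass__ approach (assuming Python 3.6+; not from the original answer):
class Sealed:
    def __init_subclass__(cls, **kwargs):
        raise TypeError("type 'Sealed' is not an acceptable base type")

class Sub(Sealed):   # raises TypeError at class-definition time
    pass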
Still, the most “Pythonic” way to limit usage of a class is to document how it should not be used.
Similar in purpose to a sealed class, and useful for reducing memory usage (see "Usage of __slots__?"), is the __slots__ attribute, which prevents monkey patching a class. Because it is too late to put a __slots__ into the class by the time the metaclass's __new__ is called, we have to put it into the namespace at the earliest possible point, i.e. during __prepare__. As a side effect, this raises the TypeError a little earlier. Using mcs for the isinstance comparison removes the need for the metaclass to hardcode its own name. The disadvantage is that all unslotted attributes are read-only, so if we want to set specific attributes during initialization or later, they have to be slotted explicitly. This is feasible, e.g., by using a dynamic metaclass that takes slots as an argument.
def Final(slots=[]):
    if "__dict__" in slots:
        raise ValueError("Having __dict__ in __slots__ breaks the purpose")

    class _Final(type):
        @classmethod
        def __prepare__(mcs, name, bases, **kwargs):
            for b in bases:
                if isinstance(b, mcs):
                    msg = "type '{0}' is not an acceptable base type"
                    raise TypeError(msg.format(b.__name__))
            namespace = {"__slots__": slots}
            return namespace

    return _Final

class Foo(metaclass=Final(slots=["_z"])):
    y = 1

    def __init__(self, z=1):
        self.z = z

    @property
    def z(self):
        return self._z

    @z.setter
    def z(self, val: int):
        if not isinstance(val, int):
            raise TypeError("Value must be an integer")
        else:
            self._z = val

    def foo(self):
        print("I am sealed against monkey patching")
Here, attempting to overwrite foo.foo throws AttributeError: 'Foo' object attribute 'foo' is read-only, and attempting to add foo.x throws AttributeError: 'Foo' object has no attribute 'x'. The limiting power of __slots__ would be broken by inheriting, but because Foo has the metaclass Final, you can't inherit from it. It would also be broken if __dict__ were in __slots__, so we throw a ValueError in that case. To conclude, defining getters and setters for slotted properties lets you limit how the user can overwrite them.
foo = Foo()

# attributes are accessible
foo.foo()
print(foo.y)

# changing slotted attributes is possible
foo.z = 2

# %%
# overwriting unslotted attributes won't work
foo.foo = lambda: print("Guerilla patching attempt")

# overwriting an accordingly defined property won't work
foo.z = foo.foo

# expanding won't work
foo.x = 1

# %% inheriting won't work
class Bar(Foo):
    pass
In that regard, Foo can be neither inherited from nor expanded upon. The disadvantage is that all attributes have to be explicitly slotted, or they are limited to being read-only class variables.
Python 3.8 has that feature in the form of the typing.final decorator (enforced by type checkers rather than at runtime):
from typing import final

class Base:
    @final
    def done(self) -> None:
        ...

class Sub(Base):
    def done(self) -> None:  # Error reported by type checker
        ...

@final
class Leaf:
    ...

class Other(Leaf):  # Error reported by type checker
    ...
See https://docs.python.org/3/library/typing.html#typing.final
