I'm trying to use, from Python, a COM server that exposes only the IDispatch interface and has neither an IDL file nor a type library. I do have documentation for the various methods and how to use them.
Trying to use the win32com package fails for me: when no type information is available, win32com falls back to assuming that any attribute access is a property get or set, never a method invocation.
That is, when I do the following:
import win32com.client
c = win32com.client.GetActiveObject(server_progid)
c.someServerMethod(arg1, arg2)
win32com tries to get the someServerMethod property on the server, ignoring arg1 and arg2 completely. Digging into the code, this seems to be because Python invokes self.__getattr__, which never sees arg1 and arg2.
I'm looking for a way to solve this:
Some syntax to tell win32com I'm actually calling a method;
Some other Python COM client that actually implements this behavior;
Other suggestions, except the obvious 'manually convert the documentation into a type library'.
Thanks!
A possible solution (which I'm currently implementing) is to wrap the usage of win32com.client with a proxy that calls _make_method_ for every method invocation, using some heuristic. Starting from a code recipe I found, I treat as a method every attribute whose name does not start with get_ or set_ (just an example; any heuristic that can tell properties from methods will do).
import new

class Proxy(object):
    def __init__(self, target):
        self._target = target

    def __getattr__(self, aname):
        target = self._target
        f = getattr(target, aname)  # fetch the attribute from the wrapped object
        ### Beginning of special logic ###
        if aname[:4] != 'set_' and aname[:4] != 'get_':
            ### End of special logic ###
            # Rebind the method to the target.
            return new.instancemethod(f.im_func, self, target.__class__)
        else:
            return f
You should be able to use
c._make_method_("someServerMethod")
to tell win32com to treat it as a method instead of a property.
Good evening, I need some advice; googling, I couldn't find a proper direction.
I need to make a method available only within the class (i.e. to other methods or functions). If it is called from the program as a method of an object of the class, I want:
the method to be invisible/not available to IntelliSense;
if I'm stubborn and code it anyway, it must raise an error.
Attaching a screenshot to make it clearer.
Any advice is appreciated. Thank you.
Screenshot of the problem
There are no private methods in Python. Common usage dictates prefixing a method that's only supposed to be used internally with one or two underscores, depending on the case. See here: What is the meaning of single and double underscore before an object name?
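A minimal sketch of the convention (the class and method names here are made up for illustration):

```python
class Greeter:
    def _helper(self):
        # Single leading underscore: internal by convention only.
        return "hello"

    def __hidden(self):
        # Double leading underscore: the name is mangled to _Greeter__hidden.
        return "secret"

    def greet(self):
        return self._helper()

g = Greeter()
print(g.greet())                 # public method using the internal one
print(hasattr(g, "__hidden"))    # False: name mangling hides the raw name
print(g._Greeter__hidden())      # still reachable, so this is convention, not security
```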
As others have mentioned, there are no private methods in Python. I also don't know how to make a method invisible to IntelliSense (there is probably some setting), but what you could theoretically do is this:
import re

def make_private(func):
    def inner(*args, **kwargs):
        name = func.__name__
        pattern = re.compile(fr'(.*)\.{name}')
        # Scan this source file for calls to the method not prefixed with 'self.'
        with open(__file__) as file:
            for line in file:
                lst = pattern.findall(line)
                if (lst and not line.strip().startswith('#')
                        and not all(g.strip() == 'self' for g in lst)):
                    raise Exception()
        return func(*args, **kwargs)
    return inner

class MyClass:
    @make_private
    def some_method(self):
        pass

    def some_other_method(self):
        self.some_method()

m = MyClass()
# m.some_method()
m.some_other_method()
make_private is a decorator: when you call the function it decorates, it first reads the entire file line by line and checks whether, anywhere in the file, the method is called without being prefixed with self.. If such a call is found, it is considered to come from outside the class and an Exception is raised (probably add some message to it, though).
Issues could start once you have multiple files, and this wouldn't entirely prevent someone from calling it if they really wanted to; for example, if they did it like this:
self = MyClass()
self.some_method()
But mostly this would raise an exception.
OK, solved: to hide the method from the IDE's IntelliSense I added the double underscore (works fine with PyCharm, not with VS Code); then I used the accessify module to prevent forced execution via myobj._myclass__somemethod():
from accessify import private

class myclass:
    @private
    def __somemethod(self):
        ...
I am studying Python. I am trying to understand how to design a library that exposes a public API. I want to avoid exposing internal methods that could change in the future. I am looking for a simple and Pythonic way to do it.
I have a library that contains a bunch of classes. Some methods of those classes are used internally among the classes. I don't want to expose those methods to client code.
Suppose that my library (e.g. mylib) contains a class C with two methods: a C.public() method intended to be used from client code, and a C.internal() method used to do some work inside the library code.
I want to commit to the public API (C.public()), but I expect to change the C.internal() method in the future, for example adding or removing parameters.
The following code illustrates my question:
mylib/c.py:
class C:
    def public(self):
        pass

    def internal(self):
        pass
mylib/f.py:
from mylib.c import C

class F:
    def build(self):
        c = C()
        c.internal()
        return c
mylib/__init__.py:
from mylib.c import C
from mylib.f import F
client/client.py:
import mylib
f = mylib.F()
c = f.build()
c.public()
c.internal() # I wish to hide this from client code
I have thought the following solutions:
document only the public API, warning users in the documentation not to use the private library API. Live in peace, hoping that clients will use only the public API. If the next library version breaks client code, it is the client's fault :).
use some form of naming convention, e.g. prefix each method with "_" (it is reserved for protected methods and raises a warning in the IDE); perhaps I can use other prefixes.
use object composition to hide internal methods. For example, the library could return to clients only a PC object that embeds C objects.
mylib/pc.py:
class PC:
    def __init__(self, c):
        self._c = c

    def public(self):
        self._c.public()
But this looks a little contrived.
Any suggestion is appreciated :-)
Update
It was suggested that this question is duplicated of Does Python have “private” variables in classes?
It is a similar question, but mine differs a bit in scope: my scope is a library, not a single class. I am wondering if there is some convention about marking (or enforcing) which are the public methods/classes/functions of a library. For example, I use the __init__.py to export the public classes and functions. I am wondering if there is some convention about exporting class methods, or if I can rely only on documentation.
I know I can use the "_" prefix for marking protected methods. As best as I know, protected methods are methods that can be used within the class hierarchy.
I have found a question about marking public methods with an @api decorator (Sphinx Public API documentation), but it is about 3 years old. Is there a commonly accepted solution, so that someone reading my code understands which methods are intended as the library's public API and which are intended to be used internally in the library?
Hope I have clarified my questions.
Thanks all!
You cannot really hide methods and attributes of objects. If you want to be sure that your internal methods are not exposed, wrapping is the way to go:
class PublicC:
    def __init__(self):
        self._c = C()

    def public(self):
        self._c.public()
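As a self-contained sketch of that wrapping approach (class names as in the question, method bodies made up for illustration):

```python
class C:
    def public(self):
        return "public result"

    def internal(self):
        return "internal result"

class PublicC:
    """Facade that exposes only the public part of C."""
    def __init__(self):
        self._c = C()

    def public(self):
        return self._c.public()

pc = PublicC()
print(pc.public())              # delegates to C.public
print(hasattr(pc, "internal"))  # False: internal() is simply not exposed
```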
A double underscore as a prefix is usually discouraged, as far as I know, to prevent collisions with Python internals.
What is discouraged are __myvar__ names with double-underscore prefix+suffix ...this naming style is used by many python internals and should be avoided -- Anentropic
If you prefer subclassing, you could overwrite internal methods and raise Errors:
class PublicC(C):
    def internal(self):
        raise Exception('This is a private method')
If you want to use some Python magic, you can have a look at __getattribute__. There you can check what the user is trying to retrieve (a function or an attribute) and raise AttributeError if the client goes for an internal/blacklisted method.
class C(object):
    def public(self):
        print "i am a public method"

    def internal(self):
        print "i should not be exposed"

class PublicC(C):
    blacklist = ['internal']

    def __getattribute__(self, name):
        if name in PublicC.blacklist:
            raise AttributeError("{} is internal".format(name))
        else:
            return super(PublicC, self).__getattribute__(name)

c = PublicC()
c.public()
c.internal()
# --- output ---
i am a public method
Traceback (most recent call last):
File "covering.py", line 19, in <module>
c.internal()
File "covering.py", line 13, in __getattribute__
raise AttributeError("{} is internal".format(name))
AttributeError: internal is internal
I assume this causes the least code overhead but also requires some maintenance. You could also reverse the check and whitelist methods.
...
    whitelist = ['public']

    def __getattribute__(self, name):
        if name not in PublicC.whitelist:
            ...
This might be better for your case since the whitelist will probably not change as often as the blacklist.
Ultimately, it is up to you. As you said yourself: it's all about documentation.
Another remark:
Maybe you also want to reconsider your class structure. You already have a factory class F for C. Let F have all the internal methods.
class F:
    def build(self):
        c = C()
        self._internal(c)
        return c

    def _internal(self, c):
        # work with c
        ...
In this case you do not have to wrap or subclass anything. If there are no hard design constraints to render this impossible, I would recommend this approach.
I have thought the following solutions:
document only the public API, warning users in the documentation not to use the private library API. Live in peace, hoping that clients will use only the public API. If the next library version breaks client code, it is the client's fault :).
use some form of naming convention, e.g. prefix each method with "_" (it is reserved for protected methods and raises a warning in the IDE); perhaps I can use other prefixes.
use object composition to hide internal methods. For example, the library could return to clients only a PC object that embeds C objects.
You got it pretty right with the first two points.
The Pythonic way is to name internal methods with a leading single underscore '_'; this way all Python developers know that the method is there, but its use is discouraged, and they won't use it. (Until they decide to do some monkey-patching, but you shouldn't care about that scenario.) For newbie developers you might want to state explicitly that methods starting with an underscore are off limits. Also, just don't provide public documentation for your "private" methods; use it for internal reference only.
You might want to take a look at "name mangling", but it's less common.
Hiding internals with object composition, or with methods like __getattribute__, is generally discouraged in Python.
You might want to look at source code of some popular libraries to see how they manage this, e.g. Django, Twisted, etc.
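A related convention at module level is __all__, which declares the names exported by from mylib import *. The sketch below builds a fake in-memory module just to keep the example self-contained; a real library would put the same __all__ in its __init__.py:

```python
import sys
import types

# Build a fake 'mylib' module in memory (stand-in for a real package on disk).
mylib = types.ModuleType("mylib")
exec(
    """
def public():
    return "public"

def _internal():
    return "internal"

__all__ = ["public"]
""",
    mylib.__dict__,
)
sys.modules["mylib"] = mylib

# Star-import only picks up the names listed in __all__.
ns = {}
exec("from mylib import *", ns)
print("public" in ns)      # True
print("_internal" in ns)   # False
```

Note that __all__ only affects star-imports; like the underscore prefix, it documents intent rather than enforcing privacy.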
I have a number of modules that need a database connection instance, and I would prefer that they share the same instance rather than each creating its own. My current way of doing this is to explicitly pass the instance to every function in every module, like so:
def func(arg1, arg2, database_connection):
    pass
This becomes quite ugly and somewhat redundant when there should be a better way: importing a separate module that contains the instance. But I'm not quite sure how to guarantee that it's actually one single instance, and not multiple instances.
That is, I'm looking for a way to do something like this:
import db_module
def func(arg1, arg2):
    database_connection = db_module.get_db_instance()
The solution you've described:
import db_module
def func(arg1, arg2):
    database_connection = db_module.get_db_instance()
is perfectly viable because Python imports each module exactly once: even if multiple import statements are executed, they all refer to the same module instance.
You can read more about modules and importing in the Python Tutorial and the
Python Language Reference.
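For instance, db_module could create the connection lazily on first use and hand out the same object ever after. This is a sketch using sqlite3; the function name get_db_instance is taken from the question:

```python
# db_module.py (sketch)
import sqlite3

_connection = None  # module-level: one per process, because modules are imported once

def get_db_instance():
    """Create the connection on first call; all later calls return the same object."""
    global _connection
    if _connection is None:
        _connection = sqlite3.connect(":memory:")
    return _connection
```

Every module that does import db_module and calls get_db_instance() receives the identical connection object.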
Is this what you mean? Create a module that builds the connection once, and import the result wherever it's needed:
db_module.py
my_connection = func(x, y)
and in the consuming code:
from db_module import my_connection
Here be dragons. You've been warned.
I'm thinking about creating a new library that will attempt to help write better test suites.
One of its features verifies that any object in use, other than the test runner and the system under test, has a test double (a mock object, a stub, a fake, or a dummy). If the tester wants the live object, and thus reduced test isolation, that has to be specified explicitly.
The only way I see to do this is to override the built-in type() function, which is the default metaclass.
The new default metaclass will check the test-double registry dictionary to see whether the class has been replaced with a test double, or whether the live object was specified.
Of course this is not possible through Python itself; attempting it fails with:
TypeError: can't set attributes of built-in/extension type 'type'
Is there a way to intervene in Python's metaclass lookup before the test suite (and probably Python itself) starts running?
Maybe using bytecode manipulation? But how exactly?
The following is not advisable, and you'll hit plenty of problems and corner cases implementing your idea, but on Python 3.1 and onwards you can hook into the custom class creation process by overriding the __build_class__ built-in hook:
import builtins

_orig_build_class = builtins.__build_class__

class SomeMockingMeta(type):
    # whatever
    pass

def my_build_class(func, name, *bases, **kwargs):
    if not any(isinstance(b, type) for b in bases):
        # a 'regular' class, not a metaclass
        if 'metaclass' in kwargs:
            if not isinstance(kwargs['metaclass'], type):
                # the metaclass is a callable, but not a class
                orig_meta = kwargs.pop('metaclass')
                class HookedMeta(SomeMockingMeta):
                    def __new__(meta, name, bases, attrs):
                        return orig_meta(name, bases, attrs)
                kwargs['metaclass'] = HookedMeta
            else:
                # There already is a metaclass, insert ours and hope for the best
                class SubclassedMeta(SomeMockingMeta, kwargs['metaclass']):
                    pass
                kwargs['metaclass'] = SubclassedMeta
        else:
            kwargs['metaclass'] = SomeMockingMeta
    return _orig_build_class(func, name, *bases, **kwargs)

builtins.__build_class__ = my_build_class
This is limited to custom classes only, but does give you an all-powerful hook.
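A stripped-down variant of the same hook, which merely records every class definition, shows the mechanism in isolation:

```python
import builtins

_orig_build_class = builtins.__build_class__
created = []

def recording_build_class(func, name, *bases, **kwargs):
    created.append(name)  # observe every 'class' statement executed
    return _orig_build_class(func, name, *bases, **kwargs)

builtins.__build_class__ = recording_build_class
try:
    class Example:
        pass
finally:
    builtins.__build_class__ = _orig_build_class  # always restore the real hook

print(created)  # ['Example']
```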
For Python versions before 3.1, you can forget about hooking class creation. The C-level build_class function uses the C type object directly if no metaclass has been defined; it never looks it up from the __builtin__ module, so you cannot override it.
I like your idea, but I think you're going slightly off course. What if the code calls a library function instead of instantiating a class? Your fake type() would never be called, and you would never be advised that you failed to mock that library function. There are plenty of utility functions, both in Django and in any real codebase.
I would advise you to write the interpreter-level support you need in the form of a patch to the Python sources. Or you might find it easier to add such a hook to PyPy's codebase, which is written in Python itself, instead of messing with Python's C sources.
I just realized that the Python interpreter includes a comprehensive set of tools to enable any piece of Python code to step through the execution of any other piece of code, checking what it does down to each function call, or even to each single Python line being executed, if needed.
sys.setprofile should be enough for your needs. With it you can install a hook (a callback) that will be notified of every function call being made by the target program. You cannot use it to change the behavior of the target program, but you can collect statistics about it, including your "mock coverage" metric.
Python's documentation about the Profilers introduces a number of modules built upon sys.setprofile. You can study their sources to see how to use it effectively.
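A toy profiler along those lines, assuming nothing beyond the standard library (the function names are made up for the demonstration):

```python
import sys

calls = []

def profiler(frame, event, arg):
    # 'call' fires on every Python-level function invocation.
    if event == "call":
        calls.append(frame.f_code.co_name)

def helper():
    return 42

def target():
    return helper()

sys.setprofile(profiler)
target()
sys.setprofile(None)  # uninstall the hook

print("target" in calls, "helper" in calls)  # True True
```

In your library, the hook would compare the called function's owner against the test-double registry instead of just recording names.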
If that turns out not to be enough, there is still sys.settrace, a heavy-handed approach that allows you to step through every line of the target program, inspect its variables and modify its execution. The standard module bdb.py is built upon sys.settrace and implements the standard set of debugging tools (breakpoints, step into, step over, etc.) It is used by pdb.py which is the commandline debugger, and by other graphical debuggers.
With these two hooks, you should be all right.
First, I have never used SWIG; I don't know what it does.
We have a Python library that, as far as I can tell, uses SWIG. When I want to use this library I have to put this in my Python code:
import pylib
Now if I open this vendor's pylib.py I see some classes, functions, and this header:
# This file was automatically generated by SWIG (http://www.swig.org).
# Version 1.3.33
#
# Don't modify this file, modify the SWIG interface instead.
# This file is compatible with both classic and new-style classes.
import _pylib
import new
new_instancemethod = new.instancemethod
Next, in the same directory as pylib.py there is a file called _pylib.pyd, which I think is a DLL.
My problem is the following:
Many classes in pylib.py look like this:
class PersistentCache(_object):
    __swig_setmethods__ = {}
    __setattr__ = lambda self, name, value: _swig_setattr(self, PersistentCache, name, value)
    __swig_getmethods__ = {}
    __getattr__ = lambda self, name: _swig_getattr(self, PersistentCache, name)
    __repr__ = _swig_repr
    def __init__(self, *args):
        this = _pylib.new_PersistentCache(*args)
        try: self.this.append(this)
        except: self.this = this
    __swig_destroy__ = _pylib.delete_PersistentCache
    __del__ = lambda self: None
    def setProperty(*args): return _pylib.PersistentCache_setProperty(*args)
    def getProperty(*args): return _pylib.PersistentCache_getProperty(*args)
    def clear(*args): return _pylib.PersistentCache_clear(*args)
    def entries(*args): return _pylib.PersistentCache_entries(*args)
PersistentCache_swigregister = _pylib.PersistentCache_swigregister
PersistentCache_swigregister(PersistentCache)
Say I want to use this class or its methods. With nothing but *args as the declared parameters, I can't know how many arguments I should pass, nor what they should be. With what I have, is it possible to find this out, so that I can use the library?
SWIG is a tool that automatically wraps a C/C++ library so it can be accessed from Python. The library proper is C/C++ code compiled into a DLL; the Python code is just pass-through glue, all autogenerated by SWIG, and you're right that it's not very helpful.
If you want to know what arguments to pass, don't look at the Python code; look at the C/C++ code it was generated from, if you have it, or at the documentation if not. If you have neither code nor documentation for the library, you're going to have a very difficult time figuring it out; you should contact the vendor for documentation.