I'm implementing custom test cases that are based on external files using the tutorial from https://docs.pytest.org/en/latest/example/nonpython.html.
I need to parametrise them with one bool flag. I'd like to be able to run pytest with a commandline option, in my case --use-real-api, which would turn using mocks off and do the real talking to a remote network API.
I've tried using the cmdopt tutorial and blending the two together, but can't find any way to read the parameter from within the custom pytest.Item subclass. Could you help? Here is a trivial example from the tutorial; I'd like it to change the test behaviour depending on the value of the cmdopt passed.
# content of conftest.py
import pytest

def pytest_collect_file(parent, path):
    if path.ext == ".yml" and path.basename.startswith("test"):
        return YamlFile(path, parent)

class YamlFile(pytest.File):
    def collect(self):
        import yaml

        raw = yaml.safe_load(self.fspath.open())
        for name, spec in sorted(raw.items()):
            yield YamlItem(name, self, spec)

class YamlItem(pytest.Item):
    def __init__(self, name, parent, spec):
        super().__init__(name, parent)
        self.spec = spec

    def runtest(self):
        for name, value in sorted(self.spec.items()):
            # some custom test execution (dumb example follows)
            if name != value:
                raise YamlException(self, name, value)

    def repr_failure(self, excinfo):
        """Called when self.runtest() raises an exception."""
        if isinstance(excinfo.value, YamlException):
            return "\n".join(
                [
                    "usecase execution failed",
                    "   spec failed: %r: %r" % excinfo.value.args[1:3],
                    "   no further details known at this point.",
                ]
            )

    def reportinfo(self):
        return self.fspath, 0, "usecase: %s" % self.name

class YamlException(Exception):
    """Custom exception for error reporting."""

def pytest_addoption(parser):
    parser.addoption(
        "--cmdopt", action="store", default="type1", help="my option: type1 or type2"
    )

@pytest.fixture
def cmdopt(request):
    return request.config.getoption("--cmdopt")
Each collection entity in pytest (File, Module, Function etc) is a subtype of the Node class which defines access to the config object. Knowing that, the task becomes easy:
def pytest_addoption(parser):
    parser.addoption('--run-yml', action='store_true')

def pytest_collect_file(parent, path):
    run_yml = parent.config.getoption('--run-yml')
    if run_yml and path.ext == ".yml" and path.basename.startswith("test"):
        return YamlFile(path, parent)
Running pytest --run-yml will now collect the YAML files; without the flag, they are ignored.
Same for accessing the config in custom classes, for example:

class YamlItem(pytest.Item):
    def runtest(self):
        run_yml = self.config.getoption('--run-yml')
        ...
etc.
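Applied to the question's --use-real-api flag, a minimal sketch of the relevant conftest.py pieces might look like this (RealApiClient and MockApiClient are hypothetical placeholders for the real/mocked API clients, not names from the question):

```python
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--use-real-api", action="store_true",
        help="talk to the real remote API instead of using mocks",
    )

class YamlItem(pytest.Item):
    def __init__(self, name, parent, spec):
        super().__init__(name, parent)
        self.spec = spec

    def runtest(self):
        # Every collection node carries the config object, so the
        # command-line flag is available directly on the item:
        if self.config.getoption("--use-real-api"):
            client = RealApiClient()   # hypothetical real client
        else:
            client = MockApiClient()   # hypothetical mock
        ...
```

Running pytest --use-real-api then flips every collected item to the real client, with no change to the YAML files themselves.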
I would like to avoid using the "test" prefix in class and function names and implement my own scheme of test parametrization.
I wrote the following code:
test.py

import pytest

# base class for inheritance, to avoid the "Test" prefix
class AtsClass:
    __ATS_TEST_CLASS__ = True

# decorator to mark functions as tests (to avoid the "test" prefix)
def ats_test(f):
    setattr(f, "__ATS_TEST_CLASS__", True)
    return f

def test_1():
    pass

@ats_test
def some_global_test():
    pass

class MyClass(AtsClass):
    def test_4(self):
        pass

    @ats_test
    def some_func(self):
        pass
conftest.py

import pytest
import inspect

# @pytest.hookimpl(hookwrapper=True)
def pytest_pycollect_makeitem(collector, name, obj):
    # outcome = yield
    # res = outcome.get_result()
    if inspect.isclass(obj) and obj.__name__ != "AtsClass" and hasattr(obj, "__ATS_TEST_CLASS__") and obj.__ATS_TEST_CLASS__ == 1:
        print("WE HAVE FOUND OUR CLASS")
        return pytest.Class(name, parent=collector)
        # outcome.force_result(pytest.Class(name, parent=collector))
    if inspect.isfunction(obj) and hasattr(obj, "__ATS_TEST_CLASS__") and obj.__ATS_TEST_CLASS__ == 1:
        print("WE HAVE FOUND OUR FUNCTION")
        return pytest.Function(name, parent=collector)
        # outcome.force_result([pytest.Function(name, parent=collector)])

def pytest_generate_tests(metafunc):
    print("-->Generate: {}".format(metafunc.function.__name__))
In this case the hook pytest_pycollect_makeitem creates a test for the function some_global_test, but the hook pytest_generate_tests is not executed for it.
I have found a workaround: call collector._genfunctions(name, obj) from my hook. But I don't think that is the right approach, because _genfunctions is a private, undocumented method.
Is there another way to solve my task?
So, since nobody knows the answer, I have decided to offer my solution (it may be useful for others):

class TestBaseClass:
    __test__ = True

def mark_test(f):
    setattr(f, "__test__", True)
    return f

# using the base class and the decorator
class MyTestClass(TestBaseClass):
    @mark_test
    def some_func(self):
        pass
Pytest uses the attribute __test__ to detect nose tests, so you can either use the nose library or just use a base class and decorator like the above.
If you only want to change the prefix of tests, you can set custom python_functions and python_classes options in pytest.ini.
For more information, see the pytest documentation on changing the naming conventions.
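For example, the matching options in pytest.ini could look like this (the Ats*/ats_* prefixes just mirror the naming scheme used in the question and are illustrative):

```ini
# pytest.ini -- collect Ats*-prefixed classes and ats_*-prefixed
# functions in addition to the defaults (patterns are space-separated globs)
[pytest]
python_classes = Test* Ats*
python_functions = test_* ats_*
```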
How can I use click.MultiCommand together with commands defined as classmethods?
I'm trying to setup a plugin system for converters where users of the library can provide their own converters. For this system I'm setting up a CLI like the following:
$ myproj convert {converter} INPUT OUTPUT {ARGS}
Each converter is its own class and all inherit from BaseConverter. The BaseConverter contains the simplest possible Click command, which only takes input and output.
Converters that don't need more than that don't have to override the method. If a converter needs more than that, or needs to provide additional documentation, then the method needs to be overridden.
With the code below, I get the following error when trying to use the cli:
TypeError: cli() missing 1 required positional argument: 'cls'
conversion/
├── __init__.py
└── backends/
├── __init__.py
├── base.py
├── bar.py
├── baz.py
└── foo.py
# cli.py
from pydoc import locate

import click

from proj.conversion import AVAILABLE_CONVERTERS

class ConversionCLI(click.MultiCommand):
    def list_commands(self, ctx):
        return sorted(list(AVAILABLE_CONVERTERS))

    def get_command(self, ctx, name):
        return locate(AVAILABLE_CONVERTERS[name] + '.cli')

@click.command(cls=ConversionCLI)
def convert():
    """Convert files using specified converter"""
    pass
# conversion/__init__.py
from django.conf import settings

AVAILABLE_CONVERTERS = {
    'bar': 'conversion.backends.bar.BarConverter',
    'baz': 'conversion.backends.baz.BazConverter',
    'foo': 'conversion.backends.foo.FooConverter',
}

extra_converters = getattr(settings, 'CONVERTERS', {})
AVAILABLE_CONVERTERS.update(extra_converters)
# conversion/backends/base.py
import click

class BaseConverter():
    @classmethod
    def convert(cls, infile, outfile):
        raise NotImplementedError

    @classmethod
    @click.command()
    @click.argument('infile')
    @click.argument('outfile')
    def cli(cls, infile, outfile):
        return cls.convert(infile, outfile)
# conversion/backends/bar.py
from proj.conversion.base import BaseConverter

class BarConverter(BaseConverter):
    @classmethod
    def convert(cls, infile, outfile):
        pass  # do stuff
# conversion/backends/foo.py
import click

from proj.conversion.base import BaseConverter

class FooConverter(BaseConverter):
    @classmethod
    def convert(cls, infile, outfile, extra_arg):
        pass  # do stuff

    @classmethod
    @click.command()
    @click.argument('infile')
    @click.argument('outfile')
    @click.argument('extra-arg')
    def cli(cls, infile, outfile, extra_arg):
        return cls.convert(infile, outfile, extra_arg)
To use a classmethod as a click command, you need to be able to populate the cls parameter when invoking the command. That can be done with a custom click.Command class like:
Custom Class:
import click

class ClsMethodClickCommand(click.Command):
    def __init__(self, *args, **kwargs):
        self._cls = [None]
        super(ClsMethodClickCommand, self).__init__(*args, **kwargs)

    def main(self, *args, **kwargs):
        self._cls[0] = args[0]
        return super(ClsMethodClickCommand, self).main(*args[1:], **kwargs)

    def invoke(self, ctx):
        ctx.params['cls'] = self._cls[0]
        return super(ClsMethodClickCommand, self).invoke(ctx)
Using the Custom Class:
class MyClassWithAClickCommand:

    @classmethod
    @click.command(cls=ClsMethodClickCommand)
    ....
    def cli(cls, ....):
        ....
And then in the click.MultiCommand class you need to populate the _cls attribute, since command.main is not called in this case:
def get_command(self, ctx, name):
    # this is hard coded in this example but presumably
    # would be done with a lookup via name
    cmd = MyClassWithAClickCommand.cli

    # Tell the click command which class it is associated with
    cmd._cls[0] = MyClassWithAClickCommand
    return cmd
How does this work?
This works because click is a well designed OO framework. The @click.command() decorator usually instantiates a click.Command object but allows this behavior to be overridden with the cls parameter. So it is a relatively easy matter to inherit from click.Command in our own class and override the desired methods.
In this case, we override click.Command.invoke() and add the containing class to the ctx.params dict as cls before invoking the command handler.
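As a minimal, self-contained illustration of that cls= mechanism (the command and parameter names here are invented for the example, not taken from the answer's code): a Command subclass injects an extra value into ctx.params from invoke(), and the callback receives it as a keyword argument even though it was never declared as a click option.

```python
import click

class InjectingCommand(click.Command):
    # Same idea as ClsMethodClickCommand above: override invoke() and
    # add an extra value to ctx.params before the callback runs.
    def invoke(self, ctx):
        ctx.params["injected"] = "from invoke()"
        return super().invoke(ctx)

# cls= tells @click.command() to build an InjectingCommand
# instead of a plain click.Command.
@click.command(cls=InjectingCommand)
@click.argument("name")
def greet(name, injected):
    click.echo("{}: {}".format(name, injected))
```

Invoking greet(["world"]) prints world: from invoke() prefixed with the argument, i.e. "world: from invoke()", even though injected is not a declared click parameter.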
Test Code:
class MyClassWithAClickCommand:

    @classmethod
    @click.command(cls=ClsMethodClickCommand)
    @click.argument('arg')
    def cli(cls, arg):
        click.echo('class: {}'.format(cls.__name__))
        click.echo('cli: {}'.format(arg))

class ConversionCLI(click.MultiCommand):
    def list_commands(self, ctx):
        return ['converter_x']

    def get_command(self, ctx, name):
        cmd = MyClassWithAClickCommand.cli
        cmd._cls[0] = MyClassWithAClickCommand
        return cmd

@click.command(cls=ConversionCLI)
def convert():
    """Convert files using specified converter"""

if __name__ == "__main__":
    commands = (
        'converter_x an_arg',
        'converter_x --help',
        'converter_x',
        '--help',
        '',
    )

    import sys, time

    time.sleep(1)
    print('Click Version: {}'.format(click.__version__))
    print('Python Version: {}'.format(sys.version))
    for cmd in commands:
        try:
            time.sleep(0.1)
            print('-----------')
            print('> ' + cmd)
            time.sleep(0.1)
            convert(cmd.split())
        except BaseException as exc:
            if str(exc) != '0' and \
                    not isinstance(exc, (click.ClickException, SystemExit)):
                raise
Results:

Click Version: 6.7
Python Version: 3.6.3 (v3.6.3:2c5fed8, Oct 3 2017, 18:11:49) [MSC v.1900 64 bit (AMD64)]
-----------
> converter_x an_arg
class: MyClassWithAClickCommand
cli: an_arg
-----------
> converter_x --help
Usage: test.py converter_x [OPTIONS] ARG

Options:
  --help  Show this message and exit.
-----------
> converter_x
Usage: test.py converter_x [OPTIONS] ARG

Error: Missing argument "arg".
-----------
> --help
Usage: test.py [OPTIONS] COMMAND [ARGS]...

  Convert files using specified converter

Options:
  --help  Show this message and exit.

Commands:
  converter_x
-----------
>
Usage: test.py [OPTIONS] COMMAND [ARGS]...

  Convert files using specified converter

Options:
  --help  Show this message and exit.

Commands:
  converter_x
@Stephen Rauch's answer was inspirational to me, but didn't quite do it either. While I think it's a more complete answer for the OP, it doesn't quite work the way I wanted insofar as making any arbitrary click command/group work like a classmethod.
It also doesn't work with click's built-in decorators like @click.pass_context and @click.pass_obj; that's not so much its fault, though, as that click is really not designed to work on methods--it always passes the context as the first argument, even if that argument should be self/cls.
My use case was that I already had a base class for microservices that provides a base CLI for starting them (which generally isn't overridden). But the individual services subclass the base class, so the default main() method on the class is a classmethod, and it instantiates an instance of the given subclass.
I wanted to convert the CLI to using click (to make it more extensible) while keeping the existing class structure, but click is really not particularly designed to work with OOP, though this can be worked around.
import click
import types
from functools import update_wrapper, partial

class BoundCommandMixin:
    def __init__(self, binding, wrapped, with_context=False, context_arg='ctx'):
        self.__self__ = binding
        self.__wrapped__ = wrapped
        callback = types.MethodType(wrapped.callback, binding)

        if with_context:
            def context_wrapper(*args, **kwargs):
                ctx = obj = click.get_current_context()
                if isinstance(with_context, type):
                    obj = ctx.find_object(with_context)
                kwargs[context_arg] = obj
                return ctx.invoke(callback, *args, **kwargs)

            self.callback = update_wrapper(context_wrapper, callback)
        else:
            self.callback = callback

    def __repr__(self):
        wrapped = self.__wrapped__
        return f'<bound {wrapped.__class__.__name__} {wrapped.name} of {self.__self__!r}>'

    def __getattr__(self, attr):
        return getattr(self.__wrapped__, attr)

class classcommand:
    _bound_cls_cache = {}

    def __new__(cls, command=None, **kwargs):
        if command is None:
            # Return a partially-applied classcommand for use as a decorator
            return partial(cls, **kwargs)
        else:
            # Being used directly as a decorator without arguments
            return super().__new__(cls)

    def __init__(self, command, with_context=False, context_arg='ctx'):
        self.command = command
        self.with_context = with_context
        self.context_arg = context_arg

    def __get__(self, obj, cls=None):
        if cls is None:
            cls = type(obj)

        cmd_type = type(self.command)
        bound_cls = self._bound_cls_cache.setdefault(cmd_type,
            type('Bound' + cmd_type.__name__, (BoundCommandMixin, cmd_type), {}))
        return bound_cls(cls, self.command, self.with_context, self.context_arg)
First it introduces the notion of a "BoundCommand", which is sort of an extension of the notion of a bound method. It just proxies a Command instance, replacing the command's original .callback attribute with a method bound to either a class or an instance, depending on what binding is.
Since click's @pass_context and @pass_obj decorators don't really work with methods, it also provides a replacement for the same functionality. If with_context=True, the original callback is wrapped in a wrapper that provides the context as a keyword argument ctx (instead of as the first argument). The name of the argument can also be overridden by specifying context_arg.
If with_context=<some type>, the wrapper works the same as click's make_pass_decorator factory for the given type. Note: IIUC, setting with_context=object is equivalent to @pass_obj.
The second part of this is the decorator class @classcommand, somewhat analogous to @classmethod. It implements a descriptor which simply returns BoundCommands for the wrapped Command.
Here's an example usage:
>>> class Foo:
...     @classcommand(with_context=True)
...     @click.group(no_args_is_help=False, invoke_without_command=True)
...     @click.option('--bar')
...     def main(cls, ctx, bar):
...         print(cls)
...         print(ctx)
...         print(bar)
...
>>> Foo.__dict__['main']
<__main__.classcommand object at 0x7f1b471df748>
>>> Foo.main
<bound Group main of <class '__main__.Foo'>>
>>> try:
...     Foo.main(['--bar', 'qux'])
... except SystemExit:
...     pass
...
<class '__main__.Foo'>
<click.core.Context object at 0x7f1b47229630>
qux
In this example you can still extend the command with sub-commands as simple functions:
>>> @Foo.main.command()
... @click.option('--fred')
... def subcommand(fred):
...     print(fred)
...
>>> try:
...     Foo.main(['--bar', 'qux', 'subcommand', '--fred', 'flintstone'])
... except SystemExit:
...     pass
...
<class '__main__.Foo'>
<click.core.Context object at 0x7f1b4715bb38>
qux
flintstone
One possible shortcoming to this is that the sub-commands are not tied to the BoundCommand, but just to the original Group object. So any subclasses of Foo will share the same subcommands as well, and could override each other. For my case this is not a problem, but it's worth considering. I believe a workaround would be possible, e.g. perhaps creating a copy of the original Group for each class it's bound to.
You could similarly implement an #instancecommand decorator for creating commands on instance methods. That's not a use case I have though so it's left as an exercise to the reader ^^
Update: I later came up with yet another solution to this problem, which is sort of a synthesis of my previous solutions, but I think a little bit simpler. I have packaged this solution as a new package objclick which can be used as a drop-in replacement for click like:
import objclick as click
I believe this can be used to solve the OP's problem. For example, to make a command from a "classmethod" you would write:
class BaseConverter():
    @classmethod
    def convert(cls, infile, outfile):
        raise NotImplementedError

    @click.classcommand()
    @click.argument('infile')
    @click.argument('outfile')
    def cli(cls, infile, outfile):
        return cls.convert(infile, outfile)
where objclick.classcommand provides classmethod-like functionality (it is not necessary to specify @classmethod explicitly; in fact, currently that will break).
Old answer:
I came up with a different solution to this that I think is much simpler than my previous answer. Since I primarily needed this for click.group(), rather than use click.group() directly I came up with the descriptor+decorator classgroup. It works as a wrapper to click.group(), but creates a new Group instance whose callback is in a sense "bound" to the class on which it was accessed:
import click

from functools import partial, update_wrapper

class classgroup:
    def __init__(self, *args, **kwargs):
        self.args = args
        self.kwargs = kwargs
        self.callback = None
        self.recursion_depth = 0

    def __call__(self, callback):
        self.callback = callback
        return self

    def __get__(self, obj, owner=None):
        # The recursion_depth stuff is to work around an oddity where
        # click.group() uses inspect.getdoc on the callback to get the
        # help text for the command if none was provided via help=.
        # However, inspect.getdoc winds up calling the equivalent of
        # getattr(owner, callback.__name__), causing a recursion back
        # into this descriptor; in this case we just return the
        # wrapped callback itself.
        self.recursion_depth += 1
        if self.recursion_depth > 1:
            self.recursion_depth -= 1
            return self.callback

        if self.callback is None:
            return self

        if owner is None:
            owner = type(obj)

        key = '_' + self.callback.__name__
        # The Group instance is cached in the class dict
        group = owner.__dict__.get(key)
        if group is None:
            def callback(*args, **kwargs):
                return self.callback(owner, *args, **kwargs)

            update_wrapper(callback, self.callback)
            group = click.group(*self.args, **self.kwargs)(callback)
            setattr(owner, key, group)

        self.recursion_depth -= 1
        return group
Additionally, I added the following decorator based on click's pass_context and pass_obj, which I think is a little more flexible:
def with_context(func=None, obj_type=None, context_arg='ctx'):
    if func is None:
        return partial(with_context, obj_type=obj_type, context_arg=context_arg)

    def context_wrapper(*args, **kwargs):
        ctx = obj = click.get_current_context()
        if isinstance(obj_type, type):
            obj = ctx.find_object(obj_type)
        kwargs[context_arg] = obj
        return ctx.invoke(func, *args, **kwargs)

    update_wrapper(context_wrapper, func)
    return context_wrapper
They can be used together like this:
>>> class Foo:
...     @classgroup(no_args_is_help=False, invoke_without_command=True)
...     @with_context
...     def main(cls, ctx):
...         print(cls)
...         print(ctx)
...         ctx.obj = cls()
...         print(ctx.obj)
...
>>> try:
...     Foo.main()
... except SystemExit:
...     pass
...
<class '__main__.Foo'>
<click.core.Context object at 0x7f8cf4056b00>
<__main__.Foo object at 0x7f8cf4056128>
Subcommands can easily be attached to Foo.main:
>>> @Foo.main.command()
... @with_context(obj_type=Foo, context_arg='foo')
... def subcommand(foo):
...     print('subcommand', foo)
...
>>> try:
...     Foo.main(['subcommand'])
... except SystemExit:
...     pass
...
<class '__main__.Foo'>
<click.core.Context object at 0x7f8ce7a45160>
<__main__.Foo object at 0x7f8ce7a45128>
subcommand <__main__.Foo object at 0x7f8ce7a45128>
Unlike my previous answer, this has the advantage that all subcommands are tied to the class through which they were declared:
>>> Foo.main.commands
{'subcommand': <Command subcommand>}
>>> class Bar(Foo): pass
>>> Bar.main.commands
{}
As an exercise, you could also easily implement a version in which the main on subclasses inherit sub-commands from parent classes, but I don't personally need that.
I'm creating some classes for dealing with filenames in various types of file shares (nfs, afp, s3, local disk) etc. I get as user input a string that identifies the data source (i.e. "nfs://192.168.1.3" or "s3://mybucket/data") etc.
I'm subclassing the specific filesystems from a base class that has common code. Where I'm confused is in the object creation. What I have is the following:
import os

class FileSystem(object):

    class NoAccess(Exception):
        pass

    def __new__(cls, path):
        if cls is FileSystem:
            if path.upper().startswith('NFS://'):
                return super(FileSystem, cls).__new__(Nfs)
            else:
                return super(FileSystem, cls).__new__(LocalDrive)
        else:
            return super(FileSystem, cls).__new__(cls, path)

    def count_files(self):
        raise NotImplementedError

class Nfs(FileSystem):
    def __init__(self, path):
        pass

    def count_files(self):
        pass

class LocalDrive(FileSystem):
    def __init__(self, path):
        if not os.access(path, os.R_OK):
            raise FileSystem.NoAccess('Cannot read directory')
        self.path = path

    def count_files(self):
        return len([x for x in os.listdir(self.path) if os.path.isfile(os.path.join(self.path, x))])

data1 = FileSystem('nfs://192.168.1.18')
data2 = FileSystem('/var/log')

print type(data1)
print type(data2)
print data2.count_files()
I thought this would be a good use of __new__, but most posts I read about its use discourage it. Is there a more accepted way to approach this problem?
I don't think using __new__() to do what you want is improper. In other words, I disagree with the accepted answer to this question which claims factory functions are always the "best way to do it".
If you really want to avoid using it, then the only options are metaclasses or a separate factory function/method (however, see the Python 3.6+ Update below). Given the choices available, making the __new__() method the factory — since it's static by default — is a perfectly sensible approach.
That said, below is what I think is an improved version of your code. I've added a couple of class methods to assist in automatically finding all the subclasses. These support the most important improvement: adding subclasses no longer requires modifying the __new__() method. This means it's now easily extensible, since it effectively supports what you could call virtual constructors.
A similar implementation could also be used to move the creation of instances out of the __new__() method into a separate (static) factory method — so in one sense the technique shown is just a relatively simple way of coding an extensible generic factory function regardless of what name it's given.
# Works in Python 2 and 3.
import os
import re

class FileSystem(object):

    class NoAccess(Exception): pass
    class Unknown(Exception): pass

    # Regex for matching "xxx://" where x is any non-whitespace character except for ":".
    _PATH_PREFIX_PATTERN = re.compile(r'\s*([^:]+)://')

    @classmethod
    def _get_all_subclasses(cls):
        """ Recursive generator of all class' subclasses. """
        for subclass in cls.__subclasses__():
            yield subclass
            for subclass in subclass._get_all_subclasses():
                yield subclass

    @classmethod
    def _get_prefix(cls, s):
        """ Extract any file system prefix at beginning of string s and
            return a lowercase version of it or None when there isn't one.
        """
        match = cls._PATH_PREFIX_PATTERN.match(s)
        return match.group(1).lower() if match else None

    def __new__(cls, path):
        """ Create instance of appropriate subclass using path prefix. """
        path_prefix = cls._get_prefix(path)

        for subclass in cls._get_all_subclasses():
            if subclass.prefix == path_prefix:
                # Using "object" base class method avoids recursion here.
                return object.__new__(subclass)
        else:  # No subclass with matching prefix found (& no default defined)
            raise FileSystem.Unknown(
                'path "{}" has no known file system prefix'.format(path))

    def count_files(self):
        raise NotImplementedError

class Nfs(FileSystem):
    prefix = 'nfs'

    def __init__(self, path):
        pass

    def count_files(self):
        pass

class LocalDrive(FileSystem):
    prefix = None  # Default when no file system prefix is found.

    def __init__(self, path):
        if not os.access(path, os.R_OK):
            raise FileSystem.NoAccess('Cannot read directory')
        self.path = path

    def count_files(self):
        return sum(os.path.isfile(os.path.join(self.path, filename))
                       for filename in os.listdir(self.path))

if __name__ == '__main__':

    data1 = FileSystem('nfs://192.168.1.18')
    data2 = FileSystem('c:/')  # Change as necessary for testing.

    print(type(data1).__name__)  # -> Nfs
    print(type(data2).__name__)  # -> LocalDrive
    print(data2.count_files())   # -> <some number>
Python 3.6+ Update
The code above works in both Python 2 and 3.x. However in Python 3.6 a new class method was added to object named __init_subclass__() which makes the finding of subclasses simpler by using it to automatically create a "registry" of them instead of potentially having to check every subclass recursively as the _get_all_subclasses() method is doing in the above.
I got the idea of using __init_subclass__() to do this from the Subclass registration section in the PEP 487 -- Simpler customisation of class creation proposal. Since the method will be inherited by all the base class' subclasses, registration will automatically be done for sub-subclasses, too (as opposed to only to direct subclasses) — it completely eliminates the need for a method like _get_all_subclasses().
# Requires Python 3.6+
import os
import re

class FileSystem(object):

    class NoAccess(Exception): pass
    class Unknown(Exception): pass

    # Pattern for matching "xxx://" -- x is any non-whitespace character except for ":".
    _PATH_PREFIX_PATTERN = re.compile(r'\s*([^:]+)://')
    _registry = {}  # Registered subclasses.

    @classmethod
    def __init_subclass__(cls, /, path_prefix, **kwargs):
        super().__init_subclass__(**kwargs)
        cls._registry[path_prefix] = cls  # Add class to registry.

    @classmethod
    def _get_prefix(cls, s):
        """ Extract any file system prefix at beginning of string s and
            return a lowercase version of it or None when there isn't one.
        """
        match = cls._PATH_PREFIX_PATTERN.match(s)
        return match.group(1).lower() if match else None

    def __new__(cls, path):
        """ Create instance of appropriate subclass. """
        path_prefix = cls._get_prefix(path)
        subclass = cls._registry.get(path_prefix)
        if subclass:
            return object.__new__(subclass)
        else:  # No subclass with matching prefix found (and no default).
            raise cls.Unknown(
                f'path "{path}" has no known file system prefix')

    def count_files(self):
        raise NotImplementedError

class Nfs(FileSystem, path_prefix='nfs'):
    def __init__(self, path):
        pass

    def count_files(self):
        pass

class Ufs(Nfs, path_prefix='ufs'):
    def __init__(self, path):
        pass

    def count_files(self):
        pass

class LocalDrive(FileSystem, path_prefix=None):  # Default file system.
    def __init__(self, path):
        if not os.access(path, os.R_OK):
            raise self.NoAccess(f'Cannot read directory {path!r}')
        self.path = path

    def count_files(self):
        return sum(os.path.isfile(os.path.join(self.path, filename))
                       for filename in os.listdir(self.path))

if __name__ == '__main__':

    data1 = FileSystem('nfs://192.168.1.18')
    data2 = FileSystem('c:/')  # Change as necessary for testing.
    data4 = FileSystem('ufs://192.168.1.18')

    print(type(data1))  # -> <class '__main__.Nfs'>
    print(type(data2))  # -> <class '__main__.LocalDrive'>
    print(f'file count: {data2.count_files()}')  # -> file count: <some number>

    try:
        data3 = FileSystem('c:/foobar')  # A non-existent directory.
    except FileSystem.NoAccess as exc:
        print(f'{exc} - FileSystem.NoAccess exception raised as expected')
    else:
        raise RuntimeError("Non-existent path should have raised Exception!")

    try:
        data4 = FileSystem('foobar://42')  # Unregistered path prefix.
    except FileSystem.Unknown as exc:
        print(f'{exc} - FileSystem.Unknown exception raised as expected')
    else:
        raise RuntimeError("Unregistered path prefix should have raised Exception!")
In my opinion, using __new__ in such a way is really confusing for other people who might read your code. It also requires somewhat hackish code to distinguish between guessing the file system from user input and creating Nfs and LocalDrive directly with their corresponding classes.
Why not make a separate function with this behaviour? It can even be a static method of FileSystem class:
class FileSystem(object):

    # other code ...

    @staticmethod
    def from_path(path):
        if path.upper().startswith('NFS://'):
            return Nfs(path)
        else:
            return LocalDrive(path)
And you call it like this:
data1 = FileSystem.from_path('nfs://192.168.1.18')
data2 = FileSystem.from_path('/var/log')
Edit [BLUF]: there is no problem with the answer provided by @martineau; this post is merely a follow-up for completion, to discuss a potential error encountered when using additional keywords in a class definition that are not managed by the metaclass.
I'd like to supply some additional information on the use of __init_subclass__ in conjunction with using __new__ as a factory. The answer @martineau has posted is very useful, and I have implemented an altered version of it in my own programs, as I prefer using the class creation sequence over adding a factory method to the namespace; very similar to how pathlib.Path is implemented.
To follow up on a comment trail with @martineau, I have taken the following snippet from his answer:
import os
import re

class FileSystem(object):

    class NoAccess(Exception): pass
    class Unknown(Exception): pass

    # Regex for matching "xxx://" where x is any non-whitespace character except for ":".
    _PATH_PREFIX_PATTERN = re.compile(r'\s*([^:]+)://')
    _registry = {}  # Registered subclasses.

    @classmethod
    def __init_subclass__(cls, /, **kwargs):
        path_prefix = kwargs.pop('path_prefix', None)
        super().__init_subclass__(**kwargs)
        cls._registry[path_prefix] = cls  # Add class to registry.

    @classmethod
    def _get_prefix(cls, s):
        """ Extract any file system prefix at beginning of string s and
            return a lowercase version of it or None when there isn't one.
        """
        match = cls._PATH_PREFIX_PATTERN.match(s)
        return match.group(1).lower() if match else None

    def __new__(cls, path):
        """ Create instance of appropriate subclass. """
        path_prefix = cls._get_prefix(path)
        subclass = FileSystem._registry.get(path_prefix)
        if subclass:
            # Using "object" base class method avoids recursion here.
            return object.__new__(subclass)
        else:  # No subclass with matching prefix found (and no default).
            raise FileSystem.Unknown(
                f'path "{path}" has no known file system prefix')

    def count_files(self):
        raise NotImplementedError

class Nfs(FileSystem, path_prefix='nfs'):
    def __init__(self, path):
        pass

    def count_files(self):
        pass

class LocalDrive(FileSystem, path_prefix=None):  # Default file system.
    def __init__(self, path):
        if not os.access(path, os.R_OK):
            raise FileSystem.NoAccess('Cannot read directory')
        self.path = path

    def count_files(self):
        return sum(os.path.isfile(os.path.join(self.path, filename))
                       for filename in os.listdir(self.path))

if __name__ == '__main__':

    data1 = FileSystem('nfs://192.168.1.18')
    data2 = FileSystem('c:/')  # Change as necessary for testing.

    print(type(data1).__name__)  # -> Nfs
    print(type(data2).__name__)  # -> LocalDrive
    print(data2.count_files())   # -> <some number>

    try:
        data3 = FileSystem('foobar://42')  # Unregistered path prefix.
    except FileSystem.Unknown as exc:
        print(str(exc), '- raised as expected')
    else:
        raise RuntimeError(
            "Unregistered path prefix should have raised Exception!")
This answer, as written works, but I wish to address a few items (potential pitfalls) others may experience through inexperience or perhaps codebase standards their team requires.
Firstly, for the decorator on __init_subclass__, per the PEP:
One could require the explicit use of #classmethod on the __init_subclass__ decorator. It was made implicit since there's no sensible interpretation for leaving it out, and that case would need to be detected anyway in order to give a useful error message.
Not a problem since its already implied, and the Zen tells us "explicit over implicit"; never the less, when abiding by PEPs, there you go (and rational is further explained).
In my own implementation of a similar solution, subclasses are not defined with an additional keyword argument, such as @martineau does here:
class Nfs(FileSystem, path_prefix='nfs'): ...
class LocalDrive(FileSystem, path_prefix=None): ...
When browsing through the PEP:
As a second change, the new type.__init__ just ignores keyword arguments. Currently, it insists that no keyword arguments are given. This leads to a (wanted) error if one gives keyword arguments to a class declaration if the metaclass does not process them. Metaclass authors that do want to accept keyword arguments must filter them out by overriding __init__.
Why is this (potentially) problematic? Well, there are several questions (notably this one) describing the problems surrounding additional keyword arguments in a class definition, the use of metaclasses (subsequently the metaclass= keyword), and the overridden __init_subclass__. However, that doesn't explain why it works in the currently given solution. The answer: kwargs.pop().
If we look at the following:
# code in CPython 3.7
import os
import re

class FileSystem(object):

    class NoAccess(Exception): pass
    class Unknown(Exception): pass

    # Regex for matching "xxx://" where x is any non-whitespace character except for ":".
    _PATH_PREFIX_PATTERN = re.compile(r'\s*([^:]+)://')
    _registry = {}  # Registered subclasses.

    def __init_subclass__(cls, **kwargs):
        path_prefix = kwargs.pop('path_prefix', None)
        super().__init_subclass__(**kwargs)
        cls._registry[path_prefix] = cls  # Add class to registry.

    ...

class Nfs(FileSystem, path_prefix='nfs'): ...
This will still run without issue, but if we remove the kwargs.pop():
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)  # throws TypeError
        cls._registry[path_prefix] = cls  # Add class to registry.
The error thrown is already known and described in the PEP:
In the new code, it is not __init__ that complains about keyword arguments, but __init_subclass__, whose default implementation takes no arguments. In a classical inheritance scheme using the method resolution order, each __init_subclass__ may take out it's keyword arguments until none are left, which is checked by the default implementation of __init_subclass__.
What is happening is the path_prefix= keyword is being "popped" off of kwargs, not just accessed, so then **kwargs is now empty and passed up the MRO and thus compliant with the default implementation (receiving no keyword arguments).
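The cooperative popping described in that quote can be sketched with two cooperating base classes (illustrative names, not from the answer), each consuming its own keyword before delegating up the MRO:

```python
class Registrable:
    def __init_subclass__(cls, **kwargs):
        cls.prefix = kwargs.pop('path_prefix', None)  # consume our keyword
        super().__init_subclass__(**kwargs)           # pass the rest up the MRO

class Versioned(Registrable):
    def __init_subclass__(cls, **kwargs):
        cls.version = kwargs.pop('version', 0)        # consume our keyword too
        super().__init_subclass__(**kwargs)           # kwargs is empty by the time object sees it

class Nfs(Versioned, path_prefix='nfs', version=3):
    pass

print(Nfs.prefix, Nfs.version)  # nfs 3
```

If either class forgot to pop its keyword, the leftover would reach object.__init_subclass__ and raise the TypeError described in the PEP.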
To avoid this entirely, I propose not relying on kwargs, but instead using what is already present in the call to __init_subclass__, namely the cls reference:
# code in CPython 3.7
import os
import re

class FileSystem(object):

    class NoAccess(Exception): pass
    class Unknown(Exception): pass

    # Regex for matching "xxx://" where x is any non-whitespace character except for ":".
    _PATH_PREFIX_PATTERN = re.compile(r'\s*([^:]+)://')
    _registry = {}  # Registered subclasses.

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls._registry[cls._path_prefix] = cls  # Add class to registry.

    ...

class Nfs(FileSystem):
    _path_prefix = 'nfs'
    ...
Adding the prior keyword as a class attribute also extends its use in later methods, if one needs to refer back to the particular prefix used by the subclass (via self._path_prefix). To my knowledge, you cannot refer back to keywords supplied in the class definition (without some complexity), and this seemed trivial and useful.
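As a brief sketch of that last point (condensed from the class above, with an illustrative describe method added), a method can read the prefix back at runtime through the class attribute:

```python
class FileSystem:
    _registry = {}
    _path_prefix = None

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls._registry[cls._path_prefix] = cls  # Class attribute is set by the time this runs.

    def describe(self):
        # Each instance reports the prefix its own subclass registered under.
        return 'prefix: %r' % self._path_prefix

class Nfs(FileSystem):
    _path_prefix = 'nfs'

print(Nfs().describe())  # prefix: 'nfs'
```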
So to @martineau: I apologize if my comments seemed confusing; there is only so much space to type them, and as shown, the point was more detailed.
I am able to set up nose tests to run with the @attr tag. I am now interested in knowing whether I can append the @attr tag to the end of the test name. What we are trying to do is add a tag if our tests run into an issue and we write up a defect for it; we would then put the defect number as an @attr tag. Then, when we run, we could easily identify which tests have open defects against them.
Just wondering if this is even possible, and where to go to see how to set it up?
EDIT RESULTS RUNNING WITH ANSWER:
Test Results:
So I sort of know what is going on: if I have the @fancyattr() at the class level, it picks it up and changes the name of the class. When I put the @fancyattr() at the test level, it is not changing the name of the test, which is what I need it to do.
For example - Changes the name of the class:
@dms_attr('DMSTEST')
@attr('smoke_login', 'smoketest', priority=1)
class TestLogins(BaseSmoke):
    """
    Just logs into the system and then logs off
    """
    def setUp(self):
        BaseSmoke.setUp(self)

    def test_login(self):
        print u"I can login -- taking a nap now"
        sleep(5)
        print u"Getting off now"

    def tearDown(self):
        BaseSmoke.tearDown(self)
This is what I need and it isn't working:
@attr('smoke_login', 'smoketest', priority=1)
class TestLogins(BaseSmoke):
    """
    Just logs into the system and then logs off
    """
    def setUp(self):
        BaseSmoke.setUp(self)

    @dms_attr('DMSTEST')
    def test_login(self):
        print u"I can login -- taking a nap now"
        sleep(5)
        print u"Getting off now"

    def tearDown(self):
        BaseSmoke.tearDown(self)
Updated screenshot with what I am seeing with __doc__:
Here is how to do it with args type attributes:
rename_test.py:
import unittest
from nose.tools import set_trace

def fancy_attr(*args, **kwargs):
    """Decorator that adds attributes to classes or functions
    for use with the Attribute (-a) plugin. It also renames functions!
    """
    def wrap_ob(ob):
        for name in args:
            setattr(ob, name, True)
            # using __doc__ instead of __name__ works for class method tests
            ob.__doc__ = '_'.join([ob.__name__, name])
            # ob.__name__ = '_'.join([ob.__name__, name])
        return ob
    return wrap_ob

class TestLogins(unittest.TestCase):
    @fancy_attr('slow')
    def test_method(self):
        assert True

@fancy_attr('slow')
def test_func():
    assert True
Running test:
$ nosetests rename_test.py -v
test_method_slow ... ok
test_func_slow ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.003s
OK
EDIT: For xunit reporting to work, the test renaming should take place before running the test. You can do it on import; here is an untested hack showing how to do it:
rename_test.py:
import unittest

def fancy_attr(*args, **kwargs):
    """Decorator that adds attributes to classes or functions
    for use with the Attribute (-a) plugin. It also renames functions!
    """
    def wrap_ob(ob):
        for name in args:
            setattr(ob, name, True)
            ob.__doc__ = '_'.join([ob.__name__, name])
        return ob
    return wrap_ob

class TestLogins(unittest.TestCase):
    @fancy_attr('slow')
    def test_method(self):
        assert True

def make_name(orig, attrib):
    return '_'.join([orig, attrib])

def rename(cls):
    methods = []
    for key in cls.__dict__:
        method = getattr(cls, key)
        if method:
            if hasattr(cls.__dict__[key], '__dict__'):
                if 'slow' in cls.__dict__[key].__dict__:
                    methods.append(key)
    print methods
    for method in methods:
        # Note: look up "method", not the leftover "key" from the loop above.
        setattr(cls, make_name(method, 'slow'), cls.__dict__[method])
        delattr(cls, method)

rename(TestLogins)

@fancy_attr('slow')
def test_func():
    assert True
I make use of PyCLIPS to integrate CLIPS into Python. Python methods are registered in CLIPS using clips.RegisterPythonFunction(method, optional-name). Since I have to register several functions and want to keep the code clear, I am looking for a decorator to do the registration.
This is how it is done now:
class CLIPS(object):
    ...
    def __init__(self, data):
        self.data = data
        clips.RegisterPythonFunction(self.pyprint, "pyprint")

    def pyprint(self, value):
        print self.data, "".join(map(str, value))
and this is how I would like to do it:
class CLIPS(object):
    ...
    def __init__(self, data):
        self.data = data
        # clips.RegisterPythonFunction(self.pyprint, "pyprint")

    @clips_callable
    def pyprint(self, value):
        print self.data, "".join(map(str, value))
It keeps the coding of the methods and registering them in one place.
NB: I use this in a multiprocessing set-up, in which CLIPS runs in a separate process, like this:
import clips
import multiprocessing

class CLIPS(object):
    def __init__(self, data):
        self.environment = clips.Environment()
        self.data = data
        clips.RegisterPythonFunction(self.pyprint, "pyprint")
        self.environment.Load("test.clp")

    def Run(self, cycles=None):
        self.environment.Reset()
        self.environment.Run()

    def pyprint(self, value):
        print self.data, "".join(map(str, value))

class CLIPSProcess(multiprocessing.Process):
    def run(self):
        p = multiprocessing.current_process()
        self.c = CLIPS("%s %s" % (p.name, p.pid))
        self.c.Run()

if __name__ == "__main__":
    p = multiprocessing.current_process()
    c = CLIPS("%s %s" % (p.name, p.pid))
    c.Run()
    # Now run CLIPS from another process
    cp = CLIPSProcess()
    cp.start()
It should be fairly simple to do it like this:
# mock clips for testing
class clips:
    @staticmethod
    def RegisterPythonFunction(func, name):
        print "register: ", func, name

def clips_callable(fnc):
    clips.RegisterPythonFunction(fnc, fnc.__name__)
    return fnc

@clips_callable
def test():
    print "test"

test()
edit: if used on a class method, it will register the unbound method only, so it won't work when the function is called without an instance of the class as the first argument. Therefore, this is usable for registering module-level functions, but not class methods. To do that, you'll have to register them in __init__.
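To make that pitfall concrete, here is a small Python 3 sketch (illustrative names, clips itself not involved): at decoration time the decorator sees only the raw function, with no instance bound to it, so registering it there registers something that cannot be called the way CLIPS would call it:

```python
registered = []

def clips_callable(fnc):
    # At class-definition time fnc is still a plain function: no instance is bound yet.
    registered.append(fnc)
    return fnc

class Demo(object):
    @clips_callable
    def pyprint(self, value):
        return value

# What got registered is the raw function from the class dict:
print(registered[0] is Demo.__dict__['pyprint'])  # True

try:
    registered[0]("hello")  # "hello" lands in self; value is missing
except TypeError:
    print("cannot be called like a registered free function")
```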
It seems that the elegant solution proposed by mata wouldn't work because the CLIPS environment should be initialized before registering methods to it.
I'm not a Python expert, but from some searching it seems that a combination of inspect.getmembers() and hasattr() will do the trick for you: you could loop over all members of your class and register the ones that carry the clips_callable attribute to CLIPS.
Got it working now by using a decorator to set an attribute on the method to be registered in CLIPS, and using inspect in __init__ to fetch the methods and register them. I could have used some naming strategy as well, but I prefer a decorator, which makes the registering more explicit. Python functions can be registered before initializing a CLIPS environment. This is what I have done.
import inspect

def clips_callable(func):
    from functools import wraps
    @wraps(func)
    def wrapper(*__args, **__kw):
        return func(*__args, **__kw)
    setattr(wrapper, "clips_callable", True)
    return wrapper

class CLIPS(object):
    def __init__(self, data):
        members = inspect.getmembers(self, inspect.ismethod)
        for name, method in members:
            try:
                if method.clips_callable:
                    clips.RegisterPythonFunction(method, name)
            except AttributeError:  # method was not marked by the decorator
                pass
    ...

    @clips_callable
    def pyprint(self, value):
        print self.data, "".join(map(str, value))
For completeness, the CLIPS code in test.clp is included below.
(defrule MAIN::start-me-up
    =>
    (python-call pyprint "Hello world")
)
If somebody knows a more elegant approach, please let me know.
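One possible refinement, offered only as a sketch (Python 3, with clips mocked out as in the earlier answer): mark the function with a plain attribute instead of wrapping it, and use getattr with a default instead of the try/except, which avoids silently swallowing unrelated errors:

```python
import inspect

# Mock clips module for illustration, as in the earlier test snippet.
class clips:
    registered = {}

    @staticmethod
    def RegisterPythonFunction(func, name):
        clips.registered[name] = func

def clips_callable(func):
    func.clips_callable = True  # mark the function; no wrapper needed
    return func

class CLIPS(object):
    def __init__(self):
        # Bound methods proxy attribute access to the underlying function,
        # so the marker set by the decorator is visible here.
        for name, method in inspect.getmembers(self, inspect.ismethod):
            if getattr(method, 'clips_callable', False):
                clips.RegisterPythonFunction(method, name)

    @clips_callable
    def pyprint(self, value):
        print(value)

CLIPS()
print(sorted(clips.registered))  # ['pyprint']
```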