How to dynamically share a package-wide config variable? - python

I am building a Python package which has a subpackage called "config", in which I have defined several files containing global variables that other modules use as configuration.
.
└── mypackage
    ├── base.py
    ├── config
    │   ├── completion.py
    │   ├── __init__.py
    │   ├── locals.py
    │   └── queries.py
    ├── encoders.py
    ├── exceptions.py
    ├── functions
    │   ├── actions.py
    │   ├── dummy.py
    │   ├── __init__.py
    │   ├── parsers.py
    │   └── seekers.py
    ├── __init__.py
    ├── query.py
    └── utils
        ├── dict.py
        ├── __init__.py
        └── retry.py
For example, the file mypackage/config/queries.py has the following content:
INCLUDE_PARENTS = False
Whereas in the main file mypackage/base.py, I have a function which takes this config variable as a default argument:
import mypackage.config.queries as conf
def query(include_parent_=conf.INCLUDE_PARENTS, **kwargs):
    # do stuff depending on include_parent_ argument
What I want, and what I haven't been able to find in other similar questions, is to be able to dynamically modify these variables in a Python/Ipython console session. That is, I should be able to do the following on Ipython:
In [1]: import mypackage as mp
In [2]: mp.config.INCLUDE_PARENTS = True # Its default value was False
In [3]: mp.query()
Out[3]: # result with include_parent_ argument set to True
In [4]: mp.config.INCLUDE_PARENTS = False # Now I set the value back to False
In [5]: mp.query()
Out[5]: # result with include_parent_ argument set to False
But I don't understand why I am not able to achieve it. I have tried importing the configuration variables in __init__.py with their associated namespace, but I never manage to change the global configuration variables dynamically, the way pandas does, for example.

The issue is that you are using conf.INCLUDE_PARENTS as a default parameter of a function. A default parameter is evaluated once, when the function is defined, not each time it is called. Thus, when you later assign a new value to conf.INCLUDE_PARENTS, the default baked into the function does not change. The following should work as you expect.
def query(include_parent_=None, **kwargs):
    if include_parent_ is None:
        include_parent_ = conf.INCLUDE_PARENTS
    # do stuff depending on include_parent_ argument

Related

Python: relatively import the containing package

I have the following package structure (drastically simplified from a real use case):
mypackage/
├── __init__.py
├── mod1.py
│   ├── def func1(): return 1
│   └── ...
│
├── mod2.py
│   ├── def func2(): return 2
│   └── ...
└── mod3.py
with __init__.py like this
from .mod1 import *
from .mod2 import *
Now, in mod3.py I want to access the package's complete namespace with one alias, like res = p.func1() + p.func2(), and I want to achieve this with a single relative import statement. Is this possible?
I don't want an absolute import like import mypackage as p (because the code should be indifferent on renaming the package).
Note, this is related to but different from this unanswered question from 2009.
Generally you do not want to use star imports; instead you can do this while still using a relative import:
from . import mod1
You can call the function like this:
mod1.func1()
PS: If you are using Python 3 (3.3+), you are no longer strictly required to use __init__.py to define packages.
Edit:
If you want a common namespace for all those imported functions, you could create a new module to act as a header file:
mypackage/
├── header.py
│   ├── from .mod1 import *
│   └── from .mod2 import *
├── mod1.py
│ ├── def func1(): return 1
│ └── ...
│
├── mod2.py
│ ├── def func2(): return 2
│ └── ...
└── mod3.py
And in mod3.py you can import header.py and use the functions in various ways:
from .header import *

val1 = func1()
val2 = func2()

from . import header

val1 = header.func1()
val2 = header.func2()

from . import header as p

val1 = p.func1()
val2 = p.func2()
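If the goal really is a single rename-proof alias for the containing package itself, one alternative (not a relative import statement, but indifferent to renaming) is to resolve __package__ with importlib. A self-contained sketch that builds a throwaway package on disk to demonstrate; all file and package names here are made up:

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway package to demonstrate; in real code only the two
# lines written into mod3.py below are needed.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypkg_demo")
os.mkdir(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .mod1 import *\nfrom .mod2 import *\n")
with open(os.path.join(pkg, "mod1.py"), "w") as f:
    f.write("def func1(): return 1\n")
with open(os.path.join(pkg, "mod2.py"), "w") as f:
    f.write("def func2(): return 2\n")
with open(os.path.join(pkg, "mod3.py"), "w") as f:
    f.write("import importlib\n"
            "p = importlib.import_module(__package__)  # the containing package, whatever its name\n"
            "res = p.func1() + p.func2()\n")

sys.path.insert(0, root)
importlib.invalidate_caches()

mod3 = importlib.import_module("mypkg_demo.mod3")
print(mod3.res)  # 3
```

Because `p` is looked up via __package__ at runtime, renaming the package requires no change to mod3.py.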

Making a python package, how do I pass object into submodule

I'm making my own python package with submodules to organize things. How can I create an object defined at the top level and pass the object to a lower submodule so that the submodule edits values?
File structure:
packageFolder
├── mypackage
│   ├── resources
│   │   └── MASTERLOOKUP.csv
│   ├── analysis
│   │   ├── __init__.py
│   │   ├── elffileextract.py
│   │   └── apkextraction.py
│   └── __init__.py
├── setup.py
└── README.md
Inside the top-level __init__.py I included quite a bit of code so that I don't need a long dotted path when creating objects, i.e. x = mypackage.m_file() is preferable to x = mypackage.mymodule.m_file()
#### __init__.py
class m_file():
    def __init__(self):
        self.FilePath = "*"
        self.FileSize = "*"
        self.FileType = "*"
        self.FileDependencies = "*"

    def fileoperation(self):
        analysis.elffileextract.process(self)  # want to pass in 'm_file' object
Inside a submodule file elffileextract.py I'll have something like this
#### analysis/elffileextract.py
from . import m_file

def elfextract(mFile: m_file):  # take object as argument
    filename = mFile.FileName
    mFile.FileDependencies = "glibc.so"
    return
Inside of the file analysis/__init__.py I have the following
#### analysis/__init__.py
import mypackage.analysis.elffileextract.py
import mypackage.analysis.apkextraction.py
How I want to be able to use the package is as follows:
>>> import mypackage
>>> x = mypackage.m_file()
>>> x.FileDependencies
'*'
>>> x.fileoperation()
>>> x.FileDependencies
'glibc.so'
I'm not sure what the technical term for passing objects into submodules is, but any answers to my problem or pointers in the right direction would be appreciated. I've tried various variations of imports in all the files, but I can't seem to figure it out. I may be overcomplicating this in a non-pythonic way.
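A minimal single-file sketch of the underlying mechanism, with elfextract standing in for the function in analysis/elffileextract.py: Python passes object references, so the submodule's function can mutate the instance in place with no special machinery; the real obstacle in a layout like the one above is usually the circular import between __init__.py and the analysis modules, which a call-time import avoids.

```python
# Single-file sketch; `elfextract` stands in for the function defined in
# analysis/elffileextract.py.
class m_file():
    def __init__(self):
        self.FilePath = "*"
        self.FileDependencies = "*"

    def fileoperation(self):
        # In the real package this would be a call-time import, e.g.
        #     from mypackage.analysis import elffileextract
        # deferred into the method body so that __init__.py and the
        # submodule do not import each other while still initializing.
        elfextract(self)

def elfextract(mFile):
    # Mutating an attribute is visible to the caller: both names refer
    # to the same object.
    mFile.FileDependencies = "glibc.so"

x = m_file()
print(x.FileDependencies)  # *
x.fileoperation()
print(x.FileDependencies)  # glibc.so
```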

How to use data obtained from DB as values in add_argument() when the Flask app starts?

I have the following project structure for a Flask app using flask-restx
.
├── app
│   ├── extensions.py
│   ├── __init__.py
│   └── pv_dimensioning
│       ├── controller.py
│       ├── __init__.py
│       ├── models
│       │   ├── dto.py
│       │   ├── __init__.py
│       │   └── vendor_models.py
│       ├── services
│       │   ├── calculator.py
│       │   ├── database.py
│       │   ├── data.py
│       │   ├── db_crud.py
│       │   ├── __init__.py
│       │   └── processor.py
│       └── utils
│           ├── decode_verify_jwt.py
│           ├── decorator.py
│           └── __init__.py
├── config.py
├── main.py
├── package.json
├── package-lock.json
├── Pipfile
├── Pipfile.lock
├── README.md
├── serverless.yml
└── tests
    ├── __init__.py
    ├── test_calculator.py
    ├── test_config.py
    └── test_processor.py
In controller.py, I add the add_argument() statements and parse them in the API routes. In one of the add_argument() calls, I would like to offer the user a set of choices. To get the choices, I query the database for the list of available values, convert that list to a tuple, assign it to a variable, and pass it as the choices parameter of add_argument()
My codes:
data.py
from ..models.vendor_models import AdminVendor
def data(app):
    values = AdminVendor.query.all()
    v = [value.name for value in values]
    return {'v': tuple(v)}
controller.py
from flask_restx import Resource, reqparse
parser = reqparse.RequestParser()
parser.add_argument(
    "vendor",
    choices=vendors,  # <--- The values of v should be added here
    help="Select the vendor"
)
@ns.route("/")
class UserOutput(Resource):
    @ns.doc(
        "Get calculated response",
        responses={
            200: "Values returned",
            400: "Validation Error",
            403: "Not authorized"
        },
    )
    @ns.expect(parser, validation=True)
    def get(self):
        args = parser.parse_args()
        return DimensionCalculator.inputs(**args), 200
where ns is the namespace.
My __init__.py file in the app folder is as follows:
from flask import Flask
from .extensions import cors, db, ma
def create_app(app_config):
    app = Flask(__name__)
    app.config.from_object(app_config)
    register_blueprints(app)
    register_extensions(app)
    return app

def register_extensions(app):
    cors.init_app(app)
    db.init_app(app)
    ma.init_app(app)

def register_blueprints(app):
    from .pv_dimensioning import dimensioning_blueprint
    app.register_blueprint(dimensioning_blueprint)
and the entry point to the app is main.py
import os
from app import create_app
from app.extensions import db
from app.pv_dimensioning.services.data import data
from config import config_by_name
config_name = os.getenv("FLASK_CONFIG") or "default"
app_config = config_by_name[config_name]
app = create_app(app_config)
db.create_all(app=app)

with app.app_context():
    v = data(app)

print(v)
The output of print(v) is as follows:
{'v': ('Solarmodul Canadian Solar HiKu CS3L-370MS 370Wp', 'Solarmodul Longi LR4-60HIH-370M, 370Wp', 'Solarmodul Solar Fabrik mono S3 - Halfcut 360Wp', 'Solarmodul Energetica e.Classic M HC black - 360Wp', 'Solarmodul Yingli YL280P-29b-YGE 60 Cell Series 2 - poly, 280Wp', 'Solarmodul Suntech Power STP370S-B60/Wnh, 370Wp', 'Solarmodul AXITEC AXIworldpremium X HC AC-340MH/120S, 340Wp', 'Solarmodul Longi LR4-72HIH-440M, 440Wp', 'Solarmodul Seraphim SRP-330-BMB-DG 330Wp', 'Solarmodul Sharp NU-JD 440Wp')}
I want these values of v to be used in controller.py in the 'vendor' argument.
I have tried getting the values of v from main.py by adding from main import v in the controller.py, but it shows the following error
ImportError: cannot import name 'v' from 'main'
What mistake am I making?
I'm not an expert on flask_restx, but from my understanding, the choices argument takes an iterable so you should simply be able to pass in the return value of your data function.
data.py
from ..models.vendor_models import AdminVendor
def data():
    values = AdminVendor.query.all()
    v = [value.name for value in values]
    return {'v': tuple(v)}
controller.py
from flask_restx import Resource, reqparse
from .services.data import data
parser = reqparse.RequestParser()
parser.add_argument(
    "vendor",
    choices=data()['v'],
    help="Select the vendor")
Regarding the import error, as Mindslave points out, that is most likely a circular import error; see this question for a bit more detail. Generally these can be avoided by moving the import from the top of the module to within a function/class, e.g.:
from flask_restx import Resource, reqparse
def load_parser():
    from .services.data import data  # avoid circular import
    parser = reqparse.RequestParser()
    parser.add_argument(
        "vendor",
        choices=data()['v'],
        help="Select the vendor")
    return parser

parser = load_parser()
As a side note, be aware that reqparse is scheduled to be removed from flask_restx, so might be worth considering a different option before you get too embedded with it:
Warning: The whole request parser part of Flask-RESTX is slated for removal and will be replaced by documentation on how to integrate with other packages that do the input/output stuff better (such as marshmallow). This means that it will be maintained until 2.0 but consider it deprecated. Don't worry, if you have code using that now and wish to continue doing so, it's not going to go away any time too soon.
source: https://flask-restx.readthedocs.io/en/latest/parsing.html
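Since reqparse deliberately mirrors argparse, the choices mechanics can be sketched with the stdlib alone; the data function here is a placeholder returning made-up vendor names:

```python
import argparse

def data():
    # Placeholder for services/data.py; the real code queries the database.
    return {'v': ("VendorA", "VendorB")}

parser = argparse.ArgumentParser()
# choices is evaluated once, when the parser is built; vendors added to
# the database later will not appear until the parser is rebuilt.
parser.add_argument("--vendor", choices=data()['v'], help="Select the vendor")

print(parser.parse_args(["--vendor", "VendorA"]).vendor)  # VendorA
```

That evaluate-once behavior is why wrapping parser construction in a function (as in load_parser above) also helps if the set of choices needs refreshing.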

pytest mocker fixture mock module from where it is defined and not where it is used

I have some utils function at src/utils/helper.py
Imagine I have a function called func_a in utils/helper.py and it is used at multiple places in my project.
And every time I use it, I import it like this
from src.utils.helper import func_a
Now I want to mock this func_a in my tests.
I want to create a fixture in conftest.py so that I don't need to write a mock function again and again for each test file.
The problem is that in my mock fixture I CANNOT write it like this.
https://pypi.org/project/pytest-mock/
mocker.patch('src.utils.helper.func_a', return_value="some_value", autospec=True)
I have to write it like this for each test file
mocker.patch('src.pipeline.node_1.func_a', return_value="some_value", autospec=True)
As per the docs https://docs.python.org/3/library/unittest.mock.html#where-to-patch
Since I am importing func_a like from src.utils.helper import func_a I have to mock where it is used and not where it is defined.
But the problem with this approach is that I can not define it in my fixture in conftest.py
Directory Structure
├── src
│   ├── pipeline
│   │   ├── __init__.py
│   │   ├── node_1.py
│   │   ├── node_2.py
│   │   └── node_3.py
│   └── utils
│       ├── __init__.py
│       └── helper.py
└── tests
    ├── __init__.py
    ├── conftest.py
    └── pipeline
        ├── __init__.py
        ├── test_node_1.py
        ├── test_node_2.py
        └── test_node_3.py
Well, as you wrote, you have to patch that way if you use from xxx import. Your first option is of course to use a full module import in the production code instead:
node_1.py
import src.utils.helper

def node_1():
    src.utils.helper.func_a()
I'm sure that you are aware of this, but I wanted to mention it anyway.
If you don't want to change the production code, you have to do the patching depending on the patched module, as you wrote. That basically means that you have to construct the patch location dynamically. Provided you have a symmetric naming of the tested functions and the test functions, you could do something like this:
conftest.py
@pytest.fixture
def mock_func_a(mocker, request):
    # we assume that the test function name is "test_node_1" for testing "node_1"
    node_name = request.node.name[5:]
    module_path = f'src.pipeline.{node_name}.func_a'
    mocked = mocker.patch(module_path,
                          return_value="some_value",
                          autospec=True)
    yield mocked
If you can't derive the patch path from the test itself, you have to add more information to the test function.
That probably only makes sense if you want to do more than just patch in the fixture; otherwise you could also just add a patch decorator directly.
You could add a custom mark that has the module path, or a part of the module path as an argument:
test_node_1.py
@pytest.mark.node("node_1")
def test_node(mock_func_a):
    node_1()
    mock_func_a.assert_called_once()
conftest.py
@pytest.fixture
def mock_func_a(mocker, request):
    mark = next((m for m in request.node.iter_markers()
                 if m.name == 'node'), None)  # find your custom mark
    if mark is not None:
        node_name = mark.args[0]
        module_path = f'src.pipeline.{node_name}.func_a'
        mocked = mocker.patch(module_path,
                              return_value="some_value",
                              autospec=True)
        yield mocked
Or, if you need to provide the full path:
test_node_1.py
@pytest.mark.path("src.pipeline.node_1")
def test_node(mock_func_a):
    ...
conftest.py
@pytest.fixture
def mock_func_a(mocker, request):
    mark = next((m for m in request.node.iter_markers()
                 if m.name == 'path'), None)  # find your custom mark
    if mark is not None:
        node_name = mark.args[0]
        module_path = f'{node_name}.func_a'
        ...

Python3 + pytest + pytest-mock: Mocks leaking into other test functions breaking assertions?

NOTE: All Details about my setup (python version, modules etc) listed at bottom of question.
Apologies in advance if this issue is blatant, but I've been wrestling with it for several days now. Hopefully someone can shed some new light.
I'm in the process of converting unit tests for my personal project from unittest -> pytest. Previously I was using the built-in unittest.mock module, but now i'm trying to use the pytest-mock plugin instead.
I have a sneaking feeling that my tests are leaking mock objects into one another.
Here's why:
High-level details:
# Python version
Python 3.5.2
# Pytest version ( and plugins )
pytest==3.0.7
pytest-benchmark==3.1.0a2
pytest-catchlog==1.2.2
pytest-cov==2.4.0
pytest-ipdb==0.1.dev2
pytest-leaks==0.2.2
pytest-mock==1.6.0
pytest-rerunfailures==2.1.0
pytest-sugar==0.8.0
pytest-timeout==1.2.0
python-dateutil==2.6.0
python-dbusmock==0.16.7
When I run my tests using the following command:
py.test --pdb --showlocals -v -R : -k test_subprocess.py
Everything is fine till we get to test_subprocess_check_command_type. At which point I get the following error:
# Set mock return types
# mock_map_type_to_command.return_value = int
# action
with pytest.raises(TypeError) as excinfo:
scarlett_os.subprocess.Subprocess(test_command,
name=test_name,
fork=test_fork,
> run_check_command=True)
E Failed: DID NOT RAISE <class 'TypeError'>
excinfo = <[AttributeError("'ExceptionInfo' object has no attribute 'typename'") raised in repr()] ExceptionInfo object at 0x7f8c380f9dc0>
mock_fork = <Mock name='mock_fork' id='140240122195184'>
mock_logging_debug = <Mock name='mock_logging_debug' id='140240128747640'>
mock_map_type_to_command = <Mock name='mock_map_type_to_command' id='140240122785112'>
mocker = <pytest_mock.MockFixture object at 0x7f8c329f07a8>
monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7f8c329f0810>
self = <tests.test_subprocess.TestScarlettSubprocess object at 0x7f8c32aaac20>
test_command = ['who', '-b']
test_fork = False
test_name = 'test_who'
tests/test_subprocess.py:267: Failed
tests/test_subprocess.py::TestScarlettSubprocess.test_subprocess_check_command_type ⨯ 100% ██████████
BUT!
If I filter out all of the other tests except for the problematic one then I get:
via py.test --pdb --showlocals -v -R : -k test_subprocess_check_command_type
pi#0728af726f1f:~/dev/bossjones-github/scarlett_os$ py.test --pdb --showlocals -v -R : -k test_subprocess_check_command_type
/usr/local/lib/python3.5/site-packages/_pdbpp_path_hack/pdb.py:4: ResourceWarning: unclosed file <_io.TextIOWrapper name='/usr/local/lib/python3.5/site-packages/pdb.py' mode='r' encoding='UTF-8'>
os.path.dirname(os.path.dirname(__file__)), 'pdb.py')).read(), os.path.join(
Test session starts (platform: linux, Python 3.5.2, pytest 3.0.7, pytest-sugar 0.8.0)
cachedir: .cache
benchmark: 3.1.0a2 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /home/pi/dev/bossjones-github/scarlett_os, inifile: setup.cfg
plugins: timeout-1.2.0, sugar-0.8.0, rerunfailures-2.1.0, mock-1.6.0, leaks-0.2.2, ipdb-0.1.dev2, cov-2.4.0, catchlog-1.2.2, benchmark-3.1.0a2
timeout: 60.0s method: signal
NOTE: DBUS_SESSION_BUS_ADDRESS environment var not found!
[DBUS_SESSION_BUS_ADDRESS]: unix:path=/tmp/dbus_proxy_outside_socket
tests/test_subprocess.py::TestScarlettSubprocess.test_subprocess_check_command_type ✓ 100% ██████████
Results (8.39s):
1 passed
190 deselected
pi#0728af726f1f:~/dev/bossjones-github/scarlett_os$
I also tried manually commenting out the following two tests, which allowed me to run all the tests successfully again:
test_subprocess_init
test_subprocess_map_type_to_command
Can anyone see anything blatantly wrong with my setup? I've read several blog posts on "where to mock" and looked at the docs themselves several times; I'm not sure what I'm missing. https://docs.python.org/3/library/unittest.mock.html
My Setup Details
Here is everything that might be required to solve this. Let me know if I need to provide any more information!
Also, please excuse how messy my code looks and all of the comment blocks. I'm a big note taker when I'm learning something new ... I'll make everything more pythonic and cleaner in the near future :)
My code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Scarlett Dbus Service. Implemented via MPRIS D-Bus Interface Specification."""

from __future__ import with_statement, division, absolute_import

import os
import sys
import logging

from scarlett_os.exceptions import SubProcessError
from scarlett_os.exceptions import TimeOutError

from scarlett_os.internal.gi import GObject
from scarlett_os.internal.gi import GLib

logger = logging.getLogger(__name__)


def check_pid(pid):
    """Check for the existence of a unix pid."""
    try:
        os.kill(pid, 0)
    except OSError:
        return False
    else:
        return True


class Subprocess(GObject.GObject):
    """
    GObject API for handling child processes.

    :param command: The command to be run as a subprocess.
    :param fork: If `True` this process will be detached from its parent and
                 run independently. This means that no exited-signal will be emitted.

    :type command: `list`
    :type fork: `bool`
    """

    __gtype_name__ = 'Subprocess'

    __gsignals__ = {
        'exited': (GObject.SignalFlags.RUN_LAST, None, (GObject.TYPE_INT, GObject.TYPE_INT))
    }

    def __init__(self, command, name=None, fork=False, run_check_command=True):
        """Create instance of Subprocess."""
        GObject.GObject.__init__(self)
        self.process = None
        self.pid = None
        if not fork:
            self.stdout = True
            self.stderr = True
        else:
            self.stdout = False
            self.stderr = False
        self.forked = fork
        # Verify that command is properly formatted
        # and each argument is of type str
        if run_check_command:
            self.check_command_type(command)
        self.command = command
        self.name = name
        logger.debug("command: {}".format(self.command))
        logger.debug("name: {}".format(self.name))
        logger.debug("forked: {}".format(self.forked))
        logger.debug("process: {}".format(self.process))
        logger.debug("pid: {}".format(self.pid))
        if fork:
            self.fork()

    # TODO: Add these arguments so we can toggle stdout
    # def spawn_command(self, standard_input=False, standard_output=False, standard_error=False):
    def spawn_command(self):
        # DO_NOT_REAP_CHILD:
        # Don't reap the process automatically so it is possible to detect when it is closed.
        return GLib.spawn_async(self.command,
                                flags=GLib.SpawnFlags.SEARCH_PATH | GLib.SpawnFlags.DO_NOT_REAP_CHILD)

    def map_type_to_command(self, command):
        """Return: Map after applying type to several objects in an array."""
        # NOTE: In python3, many functions that iterate over iterables return iterators themselves.
        # In most cases, this ends up saving memory and should make things go faster.
        # Because of that, we need to call list() on the map object.
        return list(map(type, command))

    def check_command_type(self, command):
        types = self.map_type_to_command(command)
        if type(types) is not list:
            raise TypeError("Variable types should return a list in python3. Got: {}".format(types))
        # NOTE: str is a built-in function (actually a class) which converts its argument to a string.
        # string is a module which provides common string operations.
        # source: http://stackoverflow.com/questions/2026038/relationship-between-string-module-and-str
        for t in types:
            if t is not str:
                raise TypeError("Executables and arguments must be str objects. types: {}".format(t))
        logger.debug("Running Command: %r" % " ".join(command))
        return True

    def run(self):
        """Run the process."""
        # NOTE: DO_NOT_REAP_CHILD: the child will not be automatically reaped;
        # you must use g_child_watch_add yourself (or call waitpid or handle `SIGCHLD` yourself),
        # or the child will become a zombie.
        # source:
        # http://valadoc.org/#!api=glib-2.0/GLib.SpawnFlags.DO_NOT_REAP_CHILD
        # NOTE: SEARCH_PATH: argv[0] need not be an absolute path, it will be looked for in the user's PATH
        # source:
        # http://lazka.github.io/pgi-docs/#GLib-2.0/flags.html#GLib.SpawnFlags.SEARCH_PATH
        self.pid, self.stdin, self.stdout, self.stderr = self.spawn_command()
        logger.debug("command: {}".format(self.command))
        logger.debug("stdin: {}".format(self.stdin))
        logger.debug("stdout: {}".format(self.stdout))
        logger.debug("stderr: {}".format(self.stderr))
        logger.debug("pid: {}".format(self.pid))
        # close file descriptor
        self.pid.close()
        print(self.stderr)
        # NOTE: GLib.PRIORITY_HIGH = -100
        # Use this for high priority event sources.
        # It is not used within GLib or GTK+.
        watch = GLib.child_watch_add(GLib.PRIORITY_HIGH,
                                     self.pid,
                                     self.exited_cb)
        return self.pid

    def exited_cb(self, pid, condition):
        if not self.forked:
            self.emit('exited', pid, condition)

    def fork(self):
        """Fork the process."""
        try:
            # first fork
            pid = os.fork()
            if pid > 0:
                logger.debug('pid greater than 0 first time')
                sys.exit(0)
        except OSError as e:
            logger.error('Error forking process first time')
            sys.exit(1)
        # Change the current working directory to path.
        os.chdir("/")
        # Description: setsid() creates a new session if the calling process is not a process group leader.
        # The calling process is the leader of the new session,
        # the process group leader of the new process group,
        # and has no controlling terminal.
        # The process group ID and session ID of the calling process are set to the PID of the calling process.
        # The calling process will be the only process in this new process group and in this new session.
        # Return Value: On success, the (new) session ID of the calling process is returned.
        # On error, (pid_t) -1 is returned, and errno is set to indicate the error.
        os.setsid()
        # Set the current numeric umask and return the previous umask.
        os.umask(0)
        try:
            # second fork
            pid = os.fork()
            if pid > 0:
                logger.debug('pid greater than 0 second time')
                sys.exit(0)
        except OSError as e:
            logger.error('Error forking process second time')
            sys.exit(1)
My Test:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
test_subprocess
----------------------------------
"""

import os
import sys

import pytest

import scarlett_os

# import signal
# import builtins
# import re


class TestScarlettSubprocess(object):
    '''Unit tests for Scarlett Subprocess, subclass of GObject.GObject.'''

    def test_check_pid_os_error(self, mocker):
        # Feels like mocks are leaking into other tests,
        # stop mocks before starting each test function
        mocker.stopall()
        # Setup mock objects
        kill_mock = mocker.MagicMock(name=__name__ + "_kill_mock_OSError")
        kill_mock.side_effect = OSError
        # patch things
        mocker.patch.object(scarlett_os.subprocess.os, 'kill', kill_mock)
        # When OSError occurs, return False
        assert not scarlett_os.subprocess.check_pid(4353634632623)
        # Verify that os.kill was only called once
        assert kill_mock.call_count == 1

    def test_check_pid(self, mocker):
        # Feels like mocks are leaking into other tests,
        # stop mocks before starting each test function
        mocker.stopall()
        # Setup mock objects
        kill_mock = mocker.MagicMock(name=__name__ + "_kill_mock")
        mocker.patch.object(scarlett_os.subprocess.os, 'kill', kill_mock)
        result = scarlett_os.subprocess.check_pid(123)
        assert kill_mock.called
        # NOTE: test against signal 0
        # sending the signal 0 to a given PID just checks if any
        # process with the given PID is running and you have the
        # permission to send a signal to it.
        kill_mock.assert_called_once_with(123, 0)
        assert result is True

    # FIXME: I THINK THIS GUY IS LEAKING MOCK OBJECTS
    def test_subprocess_init(self, mocker):
        # Feels like mocks are leaking into other tests,
        # stop mocks before starting each test function
        mocker.stopall()
        mock_check_command_type = mocker.MagicMock(name="mock_check_command_type")
        mock_check_command_type.return_value = True
        mock_fork = mocker.MagicMock(name="mock_fork")
        mock_logging_debug = mocker.MagicMock(name="mock_logging_debug")
        # mock
        mocker.patch.object(scarlett_os.subprocess.logging.Logger, 'debug', mock_logging_debug)
        mocker.patch.object(scarlett_os.subprocess.Subprocess, 'check_command_type', mock_check_command_type)
        mocker.patch.object(scarlett_os.subprocess.Subprocess, 'fork', mock_fork)
        # NOTE: On purpose this is an invalid cmd. Should be of type array
        test_command = ['who']
        test_name = 'test_who'
        test_fork = False
        s_test = scarlett_os.subprocess.Subprocess(test_command,
                                                   name=test_name,
                                                   fork=test_fork)
        # action
        assert s_test.check_command_type(test_command) is True
        mock_check_command_type.assert_called_with(['who'])
        assert not s_test.process
        assert not s_test.pid
        assert s_test.name == 'test_who'
        assert not s_test.forked
        assert s_test.stdout is True
        assert s_test.stderr is True
        mock_logging_debug.assert_any_call("command: ['who']")
        mock_logging_debug.assert_any_call("name: test_who")
        mock_logging_debug.assert_any_call("forked: False")
        mock_logging_debug.assert_any_call("process: None")
        mock_logging_debug.assert_any_call("pid: None")
        mock_fork.assert_not_called()

    # FIXME: I THINK THIS GUY IS LEAKING MOCK OBJECTS
    def test_subprocess_map_type_to_command(self, mocker):
        """Using the mock.patch decorator (removes the need to import builtins)"""
        # Feels like mocks are leaking into other tests,
        # stop mocks before starting each test function
        mocker.stopall()
        mock_check_command_type = mocker.MagicMock(name="mock_check_command_type")
        mock_check_command_type.return_value = True
        mock_fork = mocker.MagicMock(name="mock_fork")
        mock_logging_debug = mocker.MagicMock(name="mock_logging_debug")
        # mock
        mocker.patch.object(scarlett_os.subprocess.logging.Logger, 'debug', mock_logging_debug)
        mocker.patch.object(scarlett_os.subprocess.Subprocess, 'check_command_type', mock_check_command_type)
        mocker.patch.object(scarlett_os.subprocess.Subprocess, 'fork', mock_fork)
        # NOTE: On purpose this is an invalid cmd. Should be of type array
        test_command = ["who", "-b"]
        test_name = 'test_who'
        test_fork = False
        # create subprocess object
        s_test = scarlett_os.subprocess.Subprocess(test_command,
                                                   name=test_name,
                                                   fork=test_fork)
        mocker.spy(s_test, 'map_type_to_command')
        assert isinstance(s_test.map_type_to_command(test_command), list)
        assert s_test.map_type_to_command.call_count == 1
        assert s_test.check_command_type(test_command)
        assert s_test.check_command_type(
            test_command) == mock_check_command_type.return_value

    def test_subprocess_check_command_type(self, mocker):
        """Using the mock.patch decorator (removes the need to import builtins)"""
        # Feels like mocks are leaking into other tests,
        # stop mocks before starting each test function
        mocker.stopall()
        test_command = ["who", "-b"]
        test_name = 'test_who'
        test_fork = False
        # mock
        mock_map_type_to_command = mocker.MagicMock(name="mock_map_type_to_command")
        # mock_map_type_to_command.return_value = int
        mock_map_type_to_command.side_effect = [int, [int, int]]
        mock_fork = mocker.MagicMock(name="mock_fork")
        mock_logging_debug = mocker.MagicMock(name="mock_logging_debug")
        mocker.patch.object(scarlett_os.subprocess.logging.Logger, 'debug', mock_logging_debug)
        mocker.patch.object(scarlett_os.subprocess.Subprocess, 'map_type_to_command', mock_map_type_to_command)
        mocker.patch.object(scarlett_os.subprocess.Subprocess, 'fork', mock_fork)
        # action
        with pytest.raises(TypeError) as excinfo:
            scarlett_os.subprocess.Subprocess(test_command,
                                              name=test_name,
                                              fork=test_fork,
                                              run_check_command=True)
        assert str(
            excinfo.value) == "Variable types should return a list in python3. Got: <class 'int'>"
        with pytest.raises(TypeError) as excinfo:
            scarlett_os.subprocess.Subprocess(test_command,
                                              name=test_name,
                                              fork=test_fork,
                                              run_check_command=True)
        assert str(
            excinfo.value) == "Executables and arguments must be str objects. types: <class 'int'>"
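One unittest.mock detail I keep in mind while debugging this, given the list side_effect used above: a list side_effect is consumed one element per call, in order, and once it is exhausted further calls raise StopIteration rather than producing the values the assertions expect:

```python
from unittest import mock

# A list side_effect yields its elements on successive calls, then is spent.
m = mock.MagicMock(side_effect=[int, [int, int]])
print(m())  # <class 'int'>
print(m())  # [<class 'int'>, <class 'int'>]
try:
    m()
except StopIteration:
    print("side_effect list exhausted")
```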
My folder structure (note: I removed a couple of things since it was overly verbose):
pi#0728af726f1f:~/dev/bossjones-github/scarlett_os$ tree -I *.pyc
.
├── requirements_dev.txt
├── requirements_test_experimental.txt
├── requirements_test.txt
├── requirements.txt
├── scarlett_os
│   ├── automations
│   │   ├── __init__.py
│   │   └── __pycache__
│   ├── commands.py
│   ├── compat.py
│   ├── config.py
│   ├── const.py
│   ├── core.py
│   ├── emitter.py
│   ├── exceptions.py
│   ├── __init__.py
│   ├── internal
│   │   ├── debugger.py
│   │   ├── deps.py
│   │   ├── encoding.py
│   │   ├── formatting.py
│   │   ├── gi.py
│   │   ├── __init__.py
│   │   ├── path.py
│   │   ├── __pycache__
│   │   └── system_utils.py
│   ├── listener.py
│   ├── loader.py
│   ├── logger.py
│   ├── log.py
│   ├── __main__.py
│   ├── mpris.py
│   ├── player.py
│   ├── __pycache__
│   ├── receiver.py
│   ├── speaker.py
│   ├── subprocess.py
│   ├── tasker.py
│   ├── tools
│   │   ├── __init__.py
│   │   ├── package.py
│   │   ├── __pycache__
│   │   └── verify.py
│   └── utility
│       ├── audio.py
│       ├── dbus_runner.py
│       ├── dbus_utils.py
│       ├── distance.py
│       ├── dt.py
│       ├── file.py
│       ├── generators.py
│       ├── gnome.py
│       ├── __init__.py
│       ├── location.py
│       ├── __pycache__
│       ├── temperature.py
│       ├── threadmanager.py
│       ├── thread.py
│       ├── unit_system.py
│       └── yaml.py
├── setup.cfg
├── setup.py
├── tests
│   ├── common_integration.py
│   ├── common.py
│   ├── helpers
│   │   ├── __init__.py
│   │   ├── __pycache__
│   │   ├── test_config_validation.py
│   │   ├── test_entity.py
│   │   └── test_init.py
│   ├── __init__.py
│   ├── integration
│   │   ├── baseclass.py
│   │   ├── conftest.py
│   │   ├── __init__.py
│   │   ├── __pycache__
│   │   ├── README.md
│   │   ├── stubs.py
│   │   ├── test_integration_end_to_end.py
│   │   ├── test_integration_listener.py
│   │   ├── test_integration_mpris.py
│   │   ├── test_integration_player.py
│   │   ├── test_integration_tasker.py
│   │   ├── test_integration_tasker.py.enable_sound.diff
│   │   └── test_integration_threadmanager.py
│   ├── internal
│   │   ├── __init__.py
│   │   ├── __pycache__
│   │   ├── test_deps.py
│   │   ├── test_encoding.py
│   │   └── test_path.py
│   ├── performancetests
│   │   ├── baseclass.py
│   │   ├── __init__.py
│   │   └── __pycache__
│   ├── __pycache__
│   ├── run_all_tests
│   ├── run_dbus_tests.sh
│   ├── test_cli.py
│   ├── test_commands.py
│   ├── testing_config
│   │   └── custom_automations
│   │       ├── light
│   │       │   └── test.py
│   │       └── switch
│   │           └── test.py
│   ├── test_listener.py
│   ├── test_mpris.py
│   ├── test_player.py
│   ├── test_scarlett_os.py
│   ├── test_speaker.py
│   ├── test_subprocess.py
│   ├── test_tasker.py
│   ├── test_threadmanager.py
│   ├── tools_common.py
│   ├── unit_scarlett_os.py
│   └── utility
│       ├── __init__.py
│       ├── __pycache__
│       ├── test_dbus_utils.py
│       ├── test_distance.py
│       ├── test_dt.py
│       ├── test_gnome.py
│       ├── test_init.py
│       ├── test_location.py
│       ├── test_unit_system.py
│       └── test_yaml.py
67 directories, 256 files
pi@0728af726f1f:~/dev/bossjones-github/scarlett_os$
Other details (extended pip freeze, just in case of incompatibilities):
# Python version
Python 3.5.2
# Pytest version ( and plugins )
pytest==3.0.7
pytest-benchmark==3.1.0a2
pytest-catchlog==1.2.2
pytest-cov==2.4.0
pytest-ipdb==0.1.dev2
pytest-leaks==0.2.2
pytest-mock==1.6.0
pytest-rerunfailures==2.1.0
pytest-sugar==0.8.0
pytest-timeout==1.2.0
python-dateutil==2.6.0
python-dbusmock==0.16.7
# Pip Freeze ( Just in case )
alabaster==0.7.10
appdirs==1.4.3
argh==0.26.2
asn1crypto==0.22.0
astroid==1.5.2
Babel==2.4.0
bleach==2.0.0
bumpversion==0.5.3
cffi==1.10.0
click==6.7
click-plugins==1.0.3
colorama==0.3.7
colorlog==2.10.0
coverage==4.3.4
coveralls==1.1
cryptography==1.8.1
Cython==0.25.2
decorator==4.0.11
docopt==0.6.2
docutils==0.13.1
ecdsa==0.13
entrypoints==0.2.2
Fabric3==1.12.post1
fancycompleter==0.7
fields==5.0.0
flake8==3.3.0
flake8-docstrings==1.0.3
flake8-polyfill==1.0.1
freezegun==0.3.8
gnureadline==6.3.3
graphviz==0.6
html5lib==0.999999999
hunter==1.4.1
idna==2.5
imagesize==0.7.1
ipdb==0.10.2
ipykernel==4.6.1
ipython==6.0.0
ipython-genutils==0.2.0
ipywidgets==6.0.0
isort==4.2.5
jedi==0.10.2
Jinja2==2.9.6
jsonschema==2.6.0
jupyter==1.0.0
jupyter-client==5.0.1
jupyter-console==5.1.0
jupyter-core==4.3.0
lazy-object-proxy==1.2.2
MarkupSafe==1.0
mccabe==0.6.1
mistune==0.7.4
mock==2.0.0
mock-open==1.3.1
mypy-lang==0.4.6
nbconvert==5.1.1
nbformat==4.3.0
notebook==5.0.0
objgraph==3.1.0
ordereddict==1.1
packaging==16.8
pandocfilters==1.4.1
paramiko==1.18.2
pathtools==0.1.2
pbr==1.10.0
pdbpp==0.8.3
pexpect==4.2.1
pickleshare==0.7.4
pluggy==0.4.0
plumbum==1.6.3
prompt-toolkit==1.0.14
psutil==5.2.2
ptyprocess==0.5.1
py==1.4.33
py-cpuinfo==3.2.0
pyasn1==0.2.3
pycodestyle==2.3.1
pycparser==2.17
pycrypto==2.6.1
pydbus==0.6.0
pydocstyle==2.0.0
pyflakes==1.5.0
pygal==2.3.1
pygaljs==1.0.1
Pygments==2.2.0
pygobject==3.22.0
pylint==1.7.1
pyparsing==2.2.0
pystuck==0.8.5
pytest==3.0.7
pytest-benchmark==3.1.0a2
pytest-catchlog==1.2.2
pytest-cov==2.4.0
pytest-ipdb==0.1.dev2
pytest-leaks==0.2.2
pytest-mock==1.6.0
pytest-rerunfailures==2.1.0
pytest-sugar==0.8.0
pytest-timeout==1.2.0
python-dateutil==2.6.0
python-dbusmock==0.16.7
pytz==2017.2
PyYAML==3.12
pyzmq==16.0.2
qtconsole==4.3.0
requests==2.13.0
requests-mock==1.3.0
rpyc==3.3.0
-e git+git@github.com:bossjones/scarlett_os.git@c14ffcde608da12f5c2d4d9b81a63c7e618b3eed#egg=scarlett_os
simplegeneric==0.8.1
six==1.10.0
snowballstemmer==1.2.1
Sphinx==1.5.5
stevedore==1.18.0
termcolor==1.1.0
terminado==0.6
testpath==0.3
tornado==4.5.1
tox==2.7.0
traitlets==4.3.2
typing==3.6.1
virtualenv==15.0.3
virtualenv-clone==0.2.6
virtualenvwrapper==4.7.2
voluptuous==0.9.3
watchdog==0.8.3
wcwidth==0.1.7
webencodings==0.5.1
widgetsnbextension==2.0.0
wmctrl==0.3
wrapt==1.10.10
xdot==0.7
Edit: (One more detail: why didn't I just use the patch context manager or decorators?)
pytest-mock has a pretty good section on their design choice, and why they decided to move away from nested with statements and decorators piled on top of each other. Link is here, but let me mention a couple here just in case:
- excessive nesting of with statements breaks the flow of the test;
- receiving the mocks as parameters doesn't mix nicely with pytest's approach of naming fixtures as parameters, or with pytest.mark.parametrize;
So, if it is possible to make my code a bit cleaner using this plugin, I would like to make that happen. If that isn't possible, then maybe I need to reconsider things.
The error you get means that the code under test hit an AttributeError instead of the expected TypeError.
The detail is that some object was assumed to have a .typename member, and it didn't.
I suspect that once you solve that riddle, the rest will be just fine.
I see someone opened https://github.com/pytest-dev/pytest-mock/issues/84 (you?); let's wait for the pytest devs to analyse it in case there's an incompatibility between the two plugins.
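To make the failure mode concrete, here is a minimal sketch (all names hypothetical, standing in for the real code under test) of why a pytest.raises(TypeError) block fails when the code trips over a missing attribute first: any exception other than TypeError simply propagates out of the block and errors the test.

```python
class FakeArg:
    """Hypothetical argument object with no .typename attribute."""
    pass

def check_command_type(arg):
    # Hypothetical stand-in for the code under test: it assumes the
    # argument has a .typename member *before* it ever raises TypeError.
    if arg.typename != "str":          # AttributeError is raised here
        raise TypeError("Executables and arguments must be str objects.")

try:
    check_command_type(FakeArg())
except TypeError:
    result = "TypeError"               # what pytest.raises(TypeError) expected
except AttributeError:
    result = "AttributeError"          # what actually propagates

print(result)  # → AttributeError
```

Inside `with pytest.raises(TypeError):`, that AttributeError would not be caught, so the test errors out instead of passing.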
Why not run your mocks with function decorators or context managers to make sure they get closed out? For example, in test_subprocess_map_type_to_command,
instead of doing all of this to mock scarlett_os.subprocess.Subprocess.check_command_type:
mock_check_command_type = mocker.MagicMock(name="mock_check_command_type")
mock_check_command_type.return_value = True
mocker.patch.object(scarlett_os.subprocess.Subprocess, 'check_command_type', mock_check_command_type)
Why not instead just use a context manager and do:
with mock.patch.object(
        scarlett_os.subprocess.Subprocess,
        'check_command_type',
        return_value=True):
It'll be much terser, and will make sure your mock doesn't leak.
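As a self-contained sketch of the non-leaking behaviour (using a hypothetical stand-in class, since the real one lives inside scarlett_os), the patch is active only inside the with block and is automatically undone on exit:

```python
from unittest import mock

class Subprocess:
    # Hypothetical stand-in for scarlett_os.subprocess.Subprocess.
    def check_command_type(self, command):
        raise RuntimeError("real implementation should not run in this test")

# The context manager patches the method for exactly this block.
with mock.patch.object(Subprocess, 'check_command_type', return_value=True):
    patched_result = Subprocess().check_command_type(['fake-command'])

# After the block, the original method is restored: the mock cannot leak.
try:
    Subprocess().check_command_type(['fake-command'])
    restored = False
except RuntimeError:
    restored = True

print(patched_result, restored)  # → True True
```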
Even better, if your mocks apply to the whole function (I think some of them do), you can use a decorator at the top of the function:
@mock.patch('scarlett_os.subprocess.Subprocess.check_command_type',
            return_value=True)
