So I have read that the way to share globals across files is to create a module which holds the globals and import it in every Python file that needs access to them. However, it doesn't seem to work as expected for me (Python 3.6+).
Simple directory structure:
run.py
mypack/
-- globals.py
-- stuff.py
-- __init__.py
I have a global var in globals.py which I need to modify in the main file (run.py) and finally print out while exiting the program. It does not seem to work:
__init__.py:
from .stuff import *
globals.py:
test = 'FAIL'
stuff.py:
import atexit
from .globals import test
def cya():
    print("EXIT: test = " + test)
atexit.register(cya)
def hi():
    print('HI')
run.py:
import mypack
import mypack.globals as globals
mypack.hi()
globals.test = 'PASS'
print ("MAIN: test = " + globals.test)
Output on script execution:
HI
MAIN: test = PASS
EXIT: test = FAIL
Clearly the exit routine (cya) did not see the new value of the global that was modified in run.py. Not sure what I am doing wrong.
The Python documentation might help you on this one:
https://docs.python.org/3/faq/programming.html#how-do-i-share-global-variables-across-modules
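For reference, the pattern that FAQ entry describes, adapted to the layout above (a sketch, not tested against the exact package; note that globals also turns out to be a poor module name, as discussed below), is to import the module itself and read the value through a module attribute at call time instead of copying it with from .globals import test:
# mypack/stuff.py -- sketch of the FAQ-recommended pattern
import atexit
from . import globals  # bind the module, not the name inside it

def cya():
    # the attribute is looked up when cya() runs, so it sees the
    # assignment run.py makes later
    print("EXIT: test = " + globals.test)

atexit.register(cya)

def hi():
    print('HI')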
Thanks to @PeterTrcka for pointing out the issue. Also thanks to @buran for noting that globals is a bad name for a module, since it shadows the built-in function. Here is the working solution:
directory structure:
run.py
mypack/
-- universal.py
-- stuff.py
-- __init__.py
__init__.py:
from .stuff import *
universal.py:
class GlobalVars:
    test = 'FAIL'
stuff.py:
import atexit
from .universal import GlobalVars
def cya():
    print("EXIT: test = " + GlobalVars.test)
atexit.register(cya)
def hi():
    print('HI')
run.py:
import mypack
from mypack.universal import GlobalVars
mypack.hi()
GlobalVars.test = 'PASS'
print ("MAIN: test = " + GlobalVars.test)
Output on script execution:
HI
MAIN: test = PASS
EXIT: test = PASS
The issue was: at each import, all variables will be reinitialized to their original values. Use a singleton object:
universal.py
import logging
class GlobVars:
    _instances = {}

    def __new__(cls, logger, counter_start=0):
        if cls not in cls._instances:
            print("Creating Instance")
            cls._instances[cls] = super(GlobVars, cls).__new__(cls)
        return cls._instances[cls]

    def __init__(self, logger, counter_start=0):
        self.logger = logger
        self.counter = counter_start

glob_vars = GlobVars(logger=logging.getLogger("basic_logger"))
run.py
from universal import glob_vars
glob_vars.logger.info("One logger rules them all")
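To illustrate the point with a hypothetical second module (a sketch, not part of the original answer): every module that imports glob_vars gets the same instance, so a change made in one place is visible everywhere:
# stuff.py (hypothetical)
from universal import glob_vars

def bump():
    glob_vars.counter += 1

# run.py (continued)
from universal import glob_vars
import stuff

stuff.bump()
print(glob_vars.counter)  # 1 -- the same object that stuff.py just modified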
Edit: Thanks @eemz for the idea to redesign the structure and use from unittest.mock import patch, but the problem persists.
So I just recently stumbled into unittest. I have a program which I normally start like this: python run.py -config /path/to/config.file -y. I wanted to write a simple test in a separate test.py file: execute the script, pass the mentioned arguments, and capture all of its output. I pass a prepared config file which is missing certain things, so run.py will fail and log exactly this error using logging.error: "xyz was missing in Config file!" (see example below). I get a few lines from print() and then the logging instance kicks in and handles the output from there on. How do I capture its output so I can check it? Feel free to rewrite this, as I'm still learning; please bear with me.
Simplified example:
run.py
import logging
def run(args):
    < args.config = /path/to/config.file >
    cnfg = Config(args.config)
    cnfg.logger.info("Let's start with the rest of the code!")  # This is NOT in 'output' of the unittest
    < code >
if __name__ == "__main__":
    print("Welcome! Starting execution.")  # This is in 'output' of the unittest
    < code to parse arguments 'args' >
    run(args)
Config.py
import logging
class Config:
    def __init__(self):
        print("Creating logging instance, hold on ...")  # This is in 'output' of the unittest
        logger = logging.getLogger(__name__)
        console_handler = logging.StreamHandler()
        logger.addHandler(console_handler)
        logger.info("Logging activated, let's go!")  # This is NOT in 'output' of the unittest
        self.logger = logger
        if xyz not in config:
            self.logger.error("xyz was missing in Config file!")  # This is NOT in 'output' of the unittest
            exit(1)
test.py
import unittest
from io import StringIO
from unittest.mock import patch
from run import run

class TestConfigs(unittest.TestCase):
    def test_xyz(self):
        with patch('sys.stdout', new=StringIO()) as capture:
            with self.assertRaises(SystemExit) as cm:
                run("/p/to/f/missing/xyz/f", "", False, True)
        output = capture.getvalue().strip()
        self.assertEqual(cm.exception.code, 1)
        # Following is working, because the print messages are in output
        self.assertTrue("Welcome! Starting execution." in output)
        # Following is NOT working, because the logging messages are not in output
        self.assertTrue("xyz was missing in Config file!" in output)

if __name__ == "__main__":
    unittest.main()
I would restructure run.py like this:
import logging
def main():
    print("Welcome! Starting execution.")
    # etc. etc.

if __name__ == "__main__":
    main()
Then you can call the function run.main() in your unit test rather than forking a subprocess.
from io import StringIO
from unittest.mock import patch
import sys
import run
class etc etc
    def test_run etc etc:
        with patch('sys.stdout', new=StringIO()) as capture:
            sys.argv = ['run.py', '-flag', '-flag', '-flag']
            run.main()
        output = capture.getvalue().strip()
        assert output == <whatever you expect it to be>
If you’re new to unit testing then you might not have seen mocks before. Effectively I am replacing stdout with a fake one to capture everything that gets sent there, so that I can pull it out later into the variable output.
In fact a second patch around sys.argv would be even better because what I’m doing here, an assignment to the real argv, will actually change it which will affect subsequent tests in the same file.
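For example, a sketch of that combination (the arguments and the expected string are taken from the example above; the rest is illustrative):
from io import StringIO
from unittest.mock import patch

import run

def test_main_output():
    # patch sys.argv as well, so the real argv is restored when the test ends
    with patch('sys.argv', ['run.py', '-config', '/path/to/config.file', '-y']), \
            patch('sys.stdout', new=StringIO()) as capture:
        run.main()
    output = capture.getvalue().strip()
    assert "Welcome! Starting execution." in output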
I ended up instantiating the logger of the main program with a specific name, so I could get the same logger in test.py again and assert that it was called with a specific text. I didn't know that I could just get the logger by using logging.getLogger("name") with the same name. Simplified example:
test.py
import logging
import unittest
from io import StringIO
from run import run
from unittest.mock import patch

main_logger = logging.getLogger("main_tool")

class TestConfigs(unittest.TestCase):
    def test_xyz(self):
        with patch('sys.stdout', new=StringIO()) as capture, \
                self.assertRaises(SystemExit) as cm, \
                patch.object(main_logger, "info") as mock_log1, \
                patch.object(main_logger, "error") as mock_log2:
            run("/path/to/file/missing/xyz.file")
        output = capture.getvalue().strip()
        self.assertTrue("Creating logging instance, hold on ..." in output)
        mock_log1.assert_called_once_with("Logging activated, let's go!")
        mock_log2.assert_called_once_with("xyz was missing in Config file!")
        self.assertEqual(cm.exception.code, 1)

if __name__ == "__main__":
    unittest.main()
run.py
def run(path: str):
    cnfg = Config(path)
    < code >

if __name__ == "__main__":
    < code to parse arguments 'args' >
    path = args.file_path
    run(path)
Config.py
import logging
class Config:
    def __init__(self, path: str):
        print("Creating logging instance, hold on ...")
        logger = logging.getLogger("main_tool")
        console_handler = logging.StreamHandler()
        logger.addHandler(console_handler)
        logger.info("Logging activated, let's go!")
        self.logger = logger
        # Load file, simplified
        config = load(path)
        if xyz not in config:
            self.logger.error("xyz was missing in Config file!")
            exit(1)
This method seems to be very complicated, and I got to this point by reading through a lot of other posts and the docs. Maybe someone knows a better way to achieve this.
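For what it is worth, a possibly simpler alternative (a sketch based on the same simplified example; it assumes Python 3.4+ and the "main_tool" logger name used above) is unittest's assertLogs, which captures records sent to a named logger without patching its methods:
import unittest
from run import run

class TestConfigsWithAssertLogs(unittest.TestCase):
    def test_xyz_missing_logs_error(self):
        # assertLogs collects every record sent to the "main_tool" logger
        with self.assertRaises(SystemExit) as cm, \
                self.assertLogs("main_tool", level="ERROR") as logs:
            run("/path/to/file/missing/xyz.file")
        self.assertEqual(cm.exception.code, 1)
        self.assertIn("xyz was missing in Config file!", logs.output[0])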
I want to do the following:
I have a class which should provide several functions, each needing different inputs, and I would like to use these functions from within other scripts or directly from the command line.
e.g. I have the class "test". It has a function "quicktest" (which basically just prints something). From the command line, I want to be able to do:
$ python test.py quicktest "foo" "bar"
where quicktest is the name of the function, and "foo" and "bar" are the variables.
Also, from within another script, I want to do:
from test import test
# this
t = test()
t.quicktest(["foo1", "bar1"])
# or this
test().quicktest(["foo2", "bar2"])
I just can't get that to work. I managed to write a class for the first requirement and one for the second, but not for both. The problem is that I sometimes have to call the functions with self and sometimes not, and I also have to provide the given parameters every time, which is also kind of complicated.
So, does anybody have an idea for that?
This is what I already have:
Works only from commandline:
class test:
    def quicktest(params):
        pprint(params)

    if (__name__ == '__main__'):
        if (sys.argv[1] == "quicktest"):
            quicktest(sys.argv)
        else:
            print "Wrong call."
Works only from within other scripts:
class test:
    _params = sys.argv

    def quicktest(self, params):
        pprint(params)
        pprint(self._params)

    if (__name__ == '__main__'):
        if (sys.argv[1] == "quicktest"):
            quicktest()
        else:
            print "Wrong call"
Try the following (note the different indentation: the if __name__ part is not part of class test anymore):
import sys
from pprint import pprint

class test:
    def quicktest(self, params):
        pprint(params)

if __name__ == '__main__':
    if sys.argv[1] == "quicktest":
        testObj = test()
        testObj.quicktest(sys.argv)
    else:
        print "Wrong call."
from other scripts:
from test import test
testObj = test()
testObj.quicktest(...)
The if __name__ == '__main__': block needs to be at the top level:
import sys

class Test(object):  # Python class names are capitalized and should inherit from object
    def __init__(self, *args):
        # parse args here so you can import and call with options too
        self.args = args

    def quicktest(self):
        return 'ret_value'

if __name__ == '__main__':
    test = Test(*sys.argv[1:])
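A quick usage sketch from another script (my addition, assuming the class lives in test.py as in the question):
from test import Test

t = Test("foo", "bar")
print(t.args)         # ('foo', 'bar')
print(t.quicktest())  # 'ret_value'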
You can parse the command line with argparse and associate your class's methods with subcommands and their arguments.
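A sketch of that idea (illustrative only; the quicktest name comes from the question, everything else is an assumption):
import argparse

class Test:
    def quicktest(self, params):
        print(params)

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    subparsers = parser.add_subparsers(dest='command', required=True)
    quicktest_parser = subparsers.add_parser('quicktest')
    quicktest_parser.add_argument('params', nargs='*')
    args = parser.parse_args()

    t = Test()
    if args.command == 'quicktest':
        t.quicktest(args.params)
With this, python test.py quicktest foo bar prints ['foo', 'bar'] from the command line, while other scripts can still call Test().quicktest([...]) directly.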
My application has a structure similar to this one:
myapp.py
basemod.py
[pkg1]
__init__.py
mod1.py
[pkg2]
__init__.py
mod2.py
myapp.py:
import pkg1
import pkg2

if __name__ == '__main__':
    pkg1.main()
    pkg2.main()
basemod.py:
import pkg1

def get_msg():
    return pkg1.msg
pkg1/__init__.py:
import mod1

msg = None

def main():
    global msg
    mod1.set_bar()
    msg = mod1.bar
pkg1/mod1.py:
bar = None

def set_bar():
    global bar
    bar = 'Hello World'
pkg2/__init__.py:
import mod2

def main():
    mod2.print_foo()
pkg2/mod2.py:
import basemod

foo = basemod.get_msg()

def print_foo():
    print(foo)
If I run myapp.py I get:
None
While in my mind I'd expect:
Hello World
My goal is to keep the two packages completely independent from each other, and only communicating through basemod.py, which is a sort of API to pkg1.
I'm starting to think that I have not completely understood how imports among packages work. What am I doing wrong?
Thank you!
Took me a while to read through all that code, but it looks like your problem is in pkg2/mod2.py. The line foo = basemod.get_msg() is executed the first time that file is imported, and never again. So by the time you change the value of mod1.bar, this has already executed, and foo is None.
The solution should simply be to move that line into the print_foo function, so it is only executed when that function is called - which is after the code that sets the relevant value.
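In code, the suggested fix would look roughly like this:
# pkg2/mod2.py
import basemod

def print_foo():
    # evaluated when the function is called, i.e. after pkg1.main() has run
    foo = basemod.get_msg()
    print(foo)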
Say I have a module with the following:
def main():
    pass

if __name__ == "__main__":
    main()
I want to write a unit test for the bottom half (I'd like to achieve 100% coverage). I discovered the runpy builtin module that performs the import/__name__-setting mechanism, but I can't figure out how to mock or otherwise check that the main() function is called.
This is what I've tried so far:
import runpy
import mock
@mock.patch('foobar.main')
def test_main(self, main):
    runpy.run_module('foobar', run_name='__main__')
    main.assert_called_once_with()
I would choose another alternative, which is to exclude the if __name__ == '__main__' block from the coverage report. Of course, you can only do that if you already have a test case for your main() function in your tests.
As for why I choose to exclude rather than write a new test case for the whole script: if, as stated, you already have a test case for your main() function, then adding another test case for the script (just to reach 100% coverage) would only be a duplicate.
To exclude the if __name__ == '__main__' block, you can write a coverage configuration file and add this to the report section:
[report]
exclude_lines =
    if __name__ == .__main__.:
More info about the coverage configuration file can be found here.
Hope this can help.
You can do this using the imp module rather than the import statement. The problem with the import statement is that the test for '__main__' runs as part of the import statement before you get a chance to assign to runpy.__name__.
For example, you could use imp.load_source() like so:
import imp
runpy = imp.load_source('__main__', '/path/to/runpy.py')
The first parameter is assigned to __name__ of the imported module.
Whoa, I'm a little late to the party, but I recently ran into this issue and I think I came up with a better solution, so here it is...
I was working on a module that contained a dozen or so scripts all ending with this exact copypasta:
if __name__ == '__main__':
    if '--help' in sys.argv or '-h' in sys.argv:
        print(__doc__)
    else:
        sys.exit(main())
Not horrible, sure, but not testable either. My solution was to write a new function in one of my modules:
def run_script(name, doc, main):
    """Act like a script if we were invoked like a script."""
    if name == '__main__':
        if '--help' in sys.argv or '-h' in sys.argv:
            sys.stdout.write(doc)
        else:
            sys.exit(main())
and then place this gem at the end of each script file:
run_script(__name__, __doc__, main)
Technically, this function will be run unconditionally, whether your script was imported as a module or run as a script. This is OK, however, because the function doesn't actually do anything unless the script is being run as a script. So code coverage sees the function run and says "yes, 100% code coverage!" Meanwhile, I wrote three tests to cover the function itself:
@patch('mymodule.utils.sys')
def test_run_script_as_import(self, sysMock):
    """The run_script() func is a NOP when name != __main__."""
    mainMock = Mock()
    sysMock.argv = []
    run_script('some_module', 'docdocdoc', mainMock)
    self.assertEqual(mainMock.mock_calls, [])
    self.assertEqual(sysMock.exit.mock_calls, [])
    self.assertEqual(sysMock.stdout.write.mock_calls, [])

@patch('mymodule.utils.sys')
def test_run_script_as_script(self, sysMock):
    """Invoke main() when run as a script."""
    mainMock = Mock()
    sysMock.argv = []
    run_script('__main__', 'docdocdoc', mainMock)
    mainMock.assert_called_once_with()
    sysMock.exit.assert_called_once_with(mainMock())
    self.assertEqual(sysMock.stdout.write.mock_calls, [])

@patch('mymodule.utils.sys')
def test_run_script_with_help(self, sysMock):
    """Print help when the user asks for help."""
    mainMock = Mock()
    for h in ('-h', '--help'):
        sysMock.argv = [h]
        run_script('__main__', h*5, mainMock)
        self.assertEqual(mainMock.mock_calls, [])
        self.assertEqual(sysMock.exit.mock_calls, [])
        sysMock.stdout.write.assert_called_with(h*5)
Blam! Now you can write a testable main(), invoke it as a script, have 100% test coverage, and not need to ignore any code in your coverage report.
Python 3 solution:
import os
from importlib.machinery import SourceFileLoader
from importlib.util import spec_from_loader, module_from_spec
from importlib import reload
from unittest import TestCase
from unittest.mock import MagicMock, patch
class TestIfNameEqMain(TestCase):
    def test_name_eq_main(self):
        loader = SourceFileLoader('__main__',
                                  os.path.join(os.path.dirname(os.path.dirname(__file__)),
                                               '__main__.py'))
        with self.assertRaises(SystemExit) as e:
            loader.exec_module(module_from_spec(spec_from_loader(loader.name, loader)))
Using the alternative solution of defining your own little function:
# module.py
def main():
    if __name__ == '__main__':
        return 'sweet'
    return 'child of mine'
You can test with:
# Override the `__name__` value in your module to '__main__'
with patch('module_name.__name__', '__main__'):
    import module_name
    self.assertEqual(module_name.main(), 'sweet')

with patch('module_name.__name__', 'anything else'):
    reload(module_name)
    del module_name
    import module_name
    self.assertEqual(module_name.main(), 'child of mine')
I did not want to exclude the lines in question, so based on this explanation of a solution, I implemented a simplified version of the alternate answer given here...
I wrapped if __name__ == "__main__": in a function to make it easily testable, and then called that function to retain logic:
# myapp.module.py
def main():
    pass

def init():
    if __name__ == "__main__":
        main()

init()
I mocked the __name__ using unittest.mock to get at the lines in question:
from unittest.mock import patch, MagicMock
from myapp import module
def test_name_equals_main():
    # Arrange
    with patch.object(module, "main", MagicMock()) as mock_main:
        with patch.object(module, "__name__", "__main__"):
            # Act
            module.init()
    # Assert
    mock_main.assert_called_once()
If you are sending arguments into the mocked function, like so,
if __name__ == "__main__":
    main(main_args)
then you can use assert_called_once_with() for an even better test:
expected_args = ["expected_arg_1", "expected_arg_2"]
mock_main.assert_called_once_with(expected_args)
If desired, you can also add a return_value to the MagicMock() like so:
with patch.object(module, "main", MagicMock(return_value='foo')) as mock_main:
One approach is to run the modules as scripts (e.g. os.system(...)) and compare their stdout and stderr output to expected values.
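For instance, a sketch of that approach using subprocess.run instead of os.system (since os.system does not capture output; the script name and the expected text are placeholders):
import subprocess
import sys

def test_module_as_script():
    result = subprocess.run(
        [sys.executable, "foobar.py"],  # hypothetical script name
        capture_output=True, text=True,
    )
    assert result.returncode == 0
    assert "expected output" in result.stdout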
I found this solution helpful. It works well if you use a function to hold all your script code. The whole statement is handled as one line of code, so the coverage counter does not care whether the entire line was actually executed (though this is not what you would really expect from 100% coverage).
The trick is also accepted by pylint. ;-)
if __name__ == '__main__': \
    main()
If it's just to get the 100% and there is nothing "real" to test there, it is easier to ignore that line.
If you are using the regular coverage lib, you can just add a simple comment, and the line will be ignored in the coverage report.
if __name__ == '__main__':
    main()  # pragma: no cover
https://coverage.readthedocs.io/en/coverage-4.3.3/excluding.html
A comment by @Taylor Edmiston also mentions it.
My solution is to use imp.load_source() and force an exception to be raised early in main() by not providing a required CLI argument, providing a malformed argument, setting paths in such a way that a required file is not found, etc.
import imp
import os
import sys
def mainCond(testObj, srcFilePath, expectedExcType=SystemExit, cliArgsStr=''):
    sys.argv = [os.path.basename(srcFilePath)] + (
        [] if len(cliArgsStr) == 0 else cliArgsStr.split(' '))
    testObj.assertRaises(expectedExcType, imp.load_source, '__main__', srcFilePath)
Then in your test class you can use this function like this:
def testMain(self):
    mainCond(self, 'path/to/main.py', cliArgsStr='-d FailingArg')
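Note that the imp module is deprecated and was removed in Python 3.12. A rough importlib-based equivalent of the same helper (a sketch, not from the original answer) could look like this:
import os
import sys
from importlib.machinery import SourceFileLoader
from importlib.util import module_from_spec, spec_from_loader

def mainCond(testObj, srcFilePath, expectedExcType=SystemExit, cliArgsStr=''):
    sys.argv = [os.path.basename(srcFilePath)] + (
        [] if len(cliArgsStr) == 0 else cliArgsStr.split(' '))
    loader = SourceFileLoader('__main__', srcFilePath)
    module = module_from_spec(spec_from_loader(loader.name, loader))
    # executing the module under the name '__main__' runs its main block,
    # which is expected to fail (e.g. raise SystemExit) given the bad CLI args
    testObj.assertRaises(expectedExcType, loader.exec_module, module)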
To import your "main" code in pytest in order to test it, you can load the main module like any other module thanks to the native importlib package:
def test_main():
    import importlib.machinery
    loader = importlib.machinery.SourceFileLoader(
        "__main__", "src/glue_jobs/move_data_with_resource_partitionning.py")
    # loading the file under the name "__main__" executes its main block
    runpy_main = loader.load_module()
    assert runpy_main