How to append functions to current module from unrelated script in Python

I have two Python scripts in different locations. I import the functions of the second one so that they are integrated with the globals of the first one. That works fine. However, when I call functions of the first script from the second one, they can't be found.
foo.py
def run():
    glob = {}
    loc = {}
    execfile("/path/to/bar.py", glob, loc)

    currmodule = globals()
    currmodule["func_in_bar"] = glob["func_in_bar"]

    func_in_bar()

def func_in_foo_A():
    print("fooA")

def func_in_foo_B():
    print("fooB")

if __name__ == "__main__":
    run()
bar.py
def func_in_bar():
    func_in_foo_A()
    func_in_foo_B()
When foo is run it fails with: NameError: global name 'func_in_foo_A' is not defined

In bar.py you need to add an import foo and then reference the functions as foo.func_in_foo_A(), etc.
Alternatively, use the form: from foo import func_in_foo_A, func_in_foo_B
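For example, bar.py along those lines could look something like this (a sketch; it assumes foo.py is importable, i.e. on sys.path):

# bar.py -- sketch, assuming foo.py can be imported as a module
import foo

def func_in_bar():
    foo.func_in_foo_A()
    foo.func_in_foo_B()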

Related

Python 2.6 unittest - how to set a value to use for a global variable in a function that you're testing

I'm having trouble setting the value of a global variable used by a function that I'm writing a unit test for.
The function probably isn't written in a way that makes it easy to test, but I'm trying to work around that.
Here is an example of the function I'm trying to test:
def my_func_with_globals(filepath):
    spos = filepath.find(__my_global_var1)
    new_path = filepath[0:spos] + __my_global_var2
    return new_path

def some_function():
    ...
    my_func_with_globals(filepath)
    ...

if __name__ == '__main__':
    global __my_global_var1
    __my_global_var1 = 'value1'
    global __my_global_var2
    __my_global_var2 = 'value2'
    ...
    some_function()
And here is an example of my test:
import unittest
from my_module import *

class UnitTestMyModule(unittest.TestCase):
    def test_my_func_with_globals(self):
        self.assertEqual(my_func_with_globals('arbitrary/file/path'), 'valid output')
Another example of my test, using @kdopen's suggestion (it gives me the same error):
import unittest
import my_module

class UnitTestMyModule(unittest.TestCase):
    def test_my_func_with_globals(self):
        my_module.__my_global_var1 = 'some/value'
        my_module.__my_global_var2 = 'second_val'
        self.assertEqual(my_module.my_func_with_globals('arbitrary/file/path'), 'valid output')
I keep getting the error:
NameError: global name '__my_global_var1' is not defined.
I've tried a few different things, but I can't get anything to work. Using unittest.mock.patch looks like it would work perfectly, but I'm stuck with Python 2.6.4.
The globals are defined with a double leading underscore, so they are not imported by the from my_module import * statement.
You can make them accessible with the following:
from my_module import __my_global_var1, __my_global_var2
Alternatively, if you used import my_module you can access them as my_module.__my_global_var1 etc.
But I don't see any reference to the global variables in your sample test case.
Here's a simple example
a.py
__global1 = 1

def foo():
    return __global1
b.py:
import a
print "global1: %d" % a.__global1
print "foo: %d" % a.foo()
a.__global1 = 2
print "foo: %d" % a.foo()
And running b.py
$ python2.6 b.py
global1: 1
foo: 1
foo: 2
UPDATE:
Dang it, missed the obvious
You declare the variables inside the if __name__ == '__main__' test. That code doesn't run on import - only when you execute python my_module.py from the command line.
During import, __name__ is set to 'my_module', not '__main__'.
So, yes - they are undefined when you call your unit test.
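One way to make this testable (a sketch, not the original code: it renames the globals with a single leading underscore, which sidesteps both the import-* restriction and the name mangling that double-underscore attribute access gets inside a test class, and it gives them module-level defaults a test can override):

# my_module.py -- sketch with module-level defaults instead of assignments under __main__
_my_global_var1 = 'value1'
_my_global_var2 = 'value2'

def my_func_with_globals(filepath):
    spos = filepath.find(_my_global_var1)
    return filepath[0:spos] + _my_global_var2

# test_my_module.py
import unittest
import my_module

class UnitTestMyModule(unittest.TestCase):
    def test_my_func_with_globals(self):
        # override the module-level defaults before exercising the function
        my_module._my_global_var1 = 'file/'
        my_module._my_global_var2 = '_suffix'
        self.assertEqual(my_module.my_func_with_globals('arbitrary/file/path'),
                         'arbitrary/_suffix')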

python module __init__ function

Is there any way to make an implicit initializer for modules (not packages)?
Something like:
#file: mymodule.py
def __init__(val):
    global value
    value = val
And when you import it:
#file: mainmodule.py
import mymodule(5)
The import statement uses the builtin __import__ function, which provides no way to pass arguments through to the module being imported.
Therefore it's not possible to have a module __init__ function.
You'll have to call it yourself:
import mymodule
mymodule.__init__(5)
Questions like this often aren't closed as duplicates, so here's a really nice solution from Pass Variable On Import. TL;DR: use a config module, and configure it before importing your module.
[...] A cleaner way to do it which is very useful for multiple configuration
items in your project is to create a separate Configuration module
that is imported by your wrapping code first, and the items set at
runtime, before your functional module imports it. This pattern is
often used in other projects.
myconfig/__init__.py :
PATH_TO_R_SOURCE = '/default/R/source/path'
OTHER_CONFIG_ITEM = 'DEFAULT'
PI = 3.14
mymodule/__init__.py :
import myconfig

PATH_TO_R_SOURCE = myconfig.PATH_TO_R_SOURCE
robjects.r.source(PATH_TO_R_SOURCE, chdir = True) ## this takes time

class SomeClass:
    def __init__(self, aCurve):
        self._curve = aCurve

if myconfig.VERSION is not None:
    version = myconfig.VERSION
else:
    version = "UNDEFINED"

two_pi = myconfig.PI * 2
And you can change the behaviour of your module at runtime from the
wrapper:
run.py :
import myconfig
myconfig.PATH_TO_R_SOURCE = 'actual/path/to/R/source'
myconfig.PI = 3.14159
# we can even add a new configuration item that isn't present in the original myconfig:
myconfig.VERSION="1.0"
import mymodule
print "Mymodule.two_pi = %r" % mymodule.two_pi
print "Mymodule.version is %s" % mymodule.version
Output:
> Mymodule.two_pi = 6.28318
> Mymodule.version is 1.0

Is there a way in python to execute all functions in a file without explicitly calling them?

Is there a library or some Python magic that allows me to execute all functions in a file without explicitly calling them? Something very similar to what pytest does - running all functions that start with 'test_...' without ever registering them anywhere.
For example assume I have a file a.py:
def f1():
    print "f1"

def f2():
    print "f2"
and assume I have another file - my main file - main.py:
if __name__ == '__main__':
    some_magic()
so when I call:
python main.py
The output would be:
f1
f2
Here's a way:
def some_magic():
    import a
    for i in dir(a):
        item = getattr(a, i)
        if callable(item):
            item()

if __name__ == '__main__':
    some_magic()
dir(a) retrieves all the attributes of module a. If an attribute is a callable object, it gets called. This will call everything callable, so you may want to qualify the check with and i.startswith('f'), as in the sketch below.
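For example, a filtered variant might look like this (a sketch; it assumes the functions you want to run all follow the f prefix naming convention):

import types

def some_magic():
    import a
    for name in dir(a):
        item = getattr(a, name)
        # only call plain functions whose names match the convention,
        # skipping modules, classes and dunder attributes
        if isinstance(item, types.FunctionType) and name.startswith('f'):
            item()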
Here's another way:
#!/usr/local/bin/python3
import inspect
import sys

def f1():
    print("f1")

def f2():
    print("f2")

def some_magic(mod):
    all_functions = inspect.getmembers(mod, inspect.isfunction)
    for key, value in all_functions:
        if str(inspect.signature(value)) == "()":
            value()

if __name__ == '__main__':
    some_magic(sys.modules[__name__])
It will only call functions that don't have any parameters by using inspect.signature(function).
Have you tried callifile?
pip install callifile
and then, in your file:
from callifile.callifile import callifile as callifile
import sys
callifile(module=sys.modules[__name__], verbose=True)
Self-sufficient example:
In a file some_callify.py:
from callifile.callifile import callifile as callifile
import sys

def f_1():
    print "bla"

callifile(module=sys.modules[__name__], verbose=True)
Calling in terminal:
python some_callify.py
Gives me the terminal output:
Apply call_all_function_in_this_file on <module '__main__' from 'some_callify.py'>
See if should call callifile
See if should call f_1
Call f_1
bla

Python: perform relative import when using __import__?

Here are the files in this test:
main.py
app/
|- __init__.py
|- master.py
|- plugin/
|- |- __init__.py
|- |- p1.py
|- |_ p2.py
The idea is to have a plugin-capable app. New .py or .pyc files that adhere to my API can be dropped into the plugin directory.
I have a master.py file at the app level that contains global variables and functions that any and all plugins may need access to, as well as the app itself. For the purposes of this test, the "app" consists of a test function in app/__init__.py. In practice the app would probably be moved to separate code file(s), but then I'd just use import master in that code file to bring in the reference to master.
Here's the file contents:
main.py:
import app
app.test()
app.test2()
app/__init__.py:
import sys, os
from plugin import p1

def test():
    print "__init__ in app is executing test"
    p1.test()

def test2():
    print "__init__ in app is executing test2"
    scriptDir = os.path.join ( os.path.dirname(os.path.abspath(__file__)), "plugin" )
    print "The scriptdir is %s" % scriptDir
    sys.path.insert(0, scriptDir)
    m = __import__("p2", globals(), locals(), [], -1)
    m.test()
app/master.py:
myVar = 0
app/plugin/__init__.py:
<empty file>
app/plugin/p1.py:
from .. import master

def test():
    print "test in p1 is running"
    print "from p1: myVar = %d" % master.myVar
app/plugin/p2.py:
from .. import master

def test():
    master.myVar = 2
    print "test in p2 is running"
    print "from p2, myVar: %d" % master.myVar
Since I explicitly import the p1 module, everything works as expected. However, when I use __import__ to import p2, I get the following error:
__init__ in app is executing test
test in p1 is running
from p1: myVar = 0
__init__ in app is executing test2
The scriptdir is ....../python/test1/app/plugin
Traceback (most recent call last):
  File "main.py", line 4, in <module>
    app.test2()
  File "....../python/test1/app/__init__.py", line 17, in test2
    m = __import__("p2", globals(), locals(), [], -1)
  File "....../python/test1/app/plugin/p2.py", line 1, in <module>
    from .. import master
ValueError: Attempted relative import in non-package
Execution proceeds all the way through test() and errors out right as test2() executes its __import__ call, at which point p2 attempts its relative import (which does work when p1 is imported explicitly via the import statement, recall).
It's clear that __import__ is doing something different from the import statement. The Python docs state that import simply translates to an __import__ call internally, but there has to be more going on than meets the eye.
Since the app is plugin-based, coding explicit import statements in the main app would of course not be feasible, and the import statement itself can't be used because the plugin module names are only known at runtime.
What am I missing here? How can I get Python to behave as expected when manually importing modules using __import__? It seems maybe I'm not fully understanding the idea of relative imports, or that I'm just missing something with respect to where the import is occurring (i.e. inside a function rather than at the root of the code file).
EDIT: I found the following possible, but unsuccessful solutions:
m = __import__("p2",globals(),locals(),"plugin")
(returns the same exact error as above)
m = __import__("plugin",fromlist="p2")
(returns a reference to app.plugin, not to app.plugin.p2)
m = __import__("plugin.p2",globals(),locals())
(returns a reference to app.plugin, not to app.plugin.p2)
import importlib
m = importlib.import_module("plugin.p2")
(returns:)
Traceback (most recent call last):
  File "main.py", line 4, in <module>
    app.test2()
  File "....../python/test1/app/__init__.py", line 20, in test2
    m = importlib.import_module("plugin.p2")
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
ImportError: No module named plugin.p2
I've had a similar problem.
__import__ only imports submodules if all parent __init__.py files are empty.
You should use importlib instead:
import importlib
p2 = importlib.import_module('plugin.p2')
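Note that import_module('plugin.p2') as written is still an absolute import and relies on the plugin directory being importable. If plugin lives inside the app package, the relative form can be requested explicitly instead (a sketch, assuming the package layout from the question):

import importlib

# the leading dot makes the import relative to the 'app' package
p2 = importlib.import_module('.plugin.p2', package='app')
p2.test()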
Have you tried the following syntax:
How to use python's import function properly __import__()
It worked for me with a similar problem...
I never did find a solution, so I ended up deciding to restructure the program.
What I did was set up the main app as a class. I also changed each plugin into a class. Then, as I load each plugin with __import__, I instantiate the class with a predefined name (Plugin) inside it and pass in a reference to the main app class.
This means that each class can directly read and manipulate variables back in the host class simply by using the reference. It is totally flexible because anything that the host class exports is accessible by all the plugins.
This turns out to be more effective and doesn't depend on relative paths and any of that stuff. It also means one Python interpreter could in theory run multiple instances of the host app simultaneously (on different threads for example) and the plugins will still refer back to the correct host instance.
Here's basically what I did:
main.py:
import os, os.path, sys

class MyApp:
    _plugins = []

    def __init__(self):
        self.myVar = 0

    def loadPlugins(self):
        scriptDir = os.path.join ( os.path.dirname(os.path.abspath(__file__)), "plugin" )
        sys.path.insert(0, scriptDir)
        for plug in os.listdir(scriptDir):
            if (plug[-3:].lower() == ".py"):
                m = __import__(os.path.basename(plug)[:-3])
                self._plugins.append(m.Plugin(self))

    def runTests(self):
        for p in self._plugins:
            p.test()

if (__name__ == "__main__"):
    app = MyApp()
    app.loadPlugins()
    app.runTests()
plugin/p1.py:
class Plugin:
    def __init__(self, host):
        self.host = host

    def test(self):
        print "from p1: myVar = %d" % self.host.myVar
plugin/p2.py:
class Plugin:
    def __init__(self, host):
        self.host = host

    def test(self):
        print "from p2: variable set"
        self.host.myVar = 1
        print "from p2: myVar = %d" % self.host.myVar
There is some room to improve this, for example, validating each imported .py file to see if it's actually a plugin and so on. But this works as expected.
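As one example of that validation, loadPlugins() could check each imported module for a Plugin class before instantiating it (a minimal sketch of the idea, not part of the original code):

    def loadPlugins(self):
        scriptDir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "plugin")
        sys.path.insert(0, scriptDir)
        for plug in os.listdir(scriptDir):
            if plug[-3:].lower() == ".py":
                m = __import__(os.path.basename(plug)[:-3])
                # only accept modules that actually define a callable Plugin
                if hasattr(m, "Plugin") and callable(m.Plugin):
                    self._plugins.append(m.Plugin(self))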
I have managed to find a solution to the problem.
Taking your example, the following static import needs to be made dynamic:
from .plugin import p2
The "." before plugin means a relative import is needed rather than an absolute one.
I was able to do that with the following code snippet:
plugin = __import__('plugin', globals(), locals(), level=1, fromlist=['p2'])
p2 = getattr(plugin, 'p2')
level=1 - the relative-import parameter
fromlist - specifies which submodules to take from the plugin module
As you found, __import__ returns the reference to plugin itself, so the additional getattr is needed to pull p2 out of plugin.

python: I need to understand better imports and packages

My application has a structure similar to this one:
myapp.py
basemod.py
[pkg1]
__init__.py
mod1.py
[pkg2]
__init__.py
mod2.py
myapp.py:
import pkg1
import pkg2

if __name__ == '__main__':
    pkg1.main()
    pkg2.main()
basemod.py:
import pkg1

def get_msg():
    return pkg1.msg
pkg1/__init__.py:
import mod1

msg = None

def main():
    global msg
    mod1.set_bar()
    msg = mod1.bar
pkg1/mod1.py:
bar = None

def set_bar():
    global bar
    bar = 'Hello World'
pkg2/__init__.py:
import mod2

def main():
    mod2.print_foo()
pkg2/mod2.py:
import basemod

foo = basemod.get_msg()

def print_foo():
    print(foo)
If I run myapp.py I get:
None
While in my mind I'd expect:
Hello World
My goal is to keep the two packages completely independent from each other, and only communicating through basemod.py, which is a sort of API to pkg1.
I'm starting to think that I have not completely understood how imports among packages work. What am I doing wrong?
Thank you!
Took me a while to read through all that code, but it looks like your problem is in pkg2/mod2.py. The line foo = basemod.get_msg() is executed the first time that file is imported, and never again. So by the time you change the value of mod1.bar, this has already executed, and foo is None.
The solution should simply be to move that line into the print_foo function, so it is only executed when that function is called - which is after the code that sets the relevant value.
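In other words, pkg2/mod2.py would become something like the following (a sketch of the suggested fix):

import basemod

def print_foo():
    # look up the message at call time, after pkg1.main() has run
    foo = basemod.get_msg()
    print(foo)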
