Overview
I'm running some scientific simulations and I want to process the resulting data in Python. The simulation produces a custom data type that is not used outside of the chain of programs that the authors of the simulation produced, so unfortunately I need to use what they provide.
They want me to install two files:
A module called sdds.py that defines a class that provides all user functions and two demos
A compiled module called sddsdatamodule.so that only provides helper functions to sdds.py.
(I find it strange that they're offering me two modules that are so inextricably connected; it doesn't seem like good coding practice to me, but using their code is probably better than rewriting things from scratch.) I'd prefer not to install them directly onto my path, side by side: they come from the same company and are designed to do one specific task together, namely accessing and manipulating SDDS-type files.
So I thought I would put them in a package. I could install that on my path, it would be self-contained, and I could easily find and uninstall or upgrade the modules from one location. Then I could hide their un-Pythonic solution in a more-Pythonic package without significantly rewriting things. Seems elegant.
Details
The package I actually use is found here:
http://www.aps.anl.gov/Accelerator_Systems_Division/Accelerator_Operations_Physics/software.shtml#PythonBinaries
Unfortunately, they only support Windows and Mac OS X right now. Compiling the source code is quite onerous, and apparently they have no significant requests for Linux/Unix. I have a Mac, so thankfully this isn't a problem for me.
So my directory tree looks like this:
SDDSPython/            My top-level package
    __init__.py        Designed to only import the SDDS class
    sdds.py            Defines the SDDS class and two demo methods
    sddsdatamodule.so  Defines the sddsdata module used by the SDDS class
My __init__.py file literally only contains this:
from sdds import SDDS
The sdds.py file contains the class definition and the two demo definitions. The only other code in the sdds.py file is:
import sddsdata, sys, time

class SDDS:
    (lots of code here)

def demo(output):
    (lots of code here)

def demo2(output):
    (lots of code here)
I can then import SDDSPython and check, using dir:
>>> import SDDSPython
>>> dir(SDDSPython)
['SDDS', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', 'sdds', 'sddsdata']
So I can now access the SDDS class via SDDSPython.SDDS
Question
How on earth did SDDSPython.sdds and SDDSPython.sddsdata get loaded into the SDDSPython namespace??
>>> SDDSPython.sdds
<module 'SDDSPython.sdds' from 'SDDSPython/sdds.pyc'>
>>> SDDSPython.sddsdata
<module 'SDDSPython.sddsdata' from 'SDDSPython/sddsdatamodule.so'>
I thought by creating an __init__.py file I was specifically excluding the sdds and sddsdata modules from being loaded into the SDDSPython namespace. What is going on? I can only assume this is happening due to something in the sddsdatamodule.so file? But how can a module affect its parent's namespace like that? I'm rather lost, and I don't know where to start. I've looked at the C code, but I don't see anything suspicious. To be fair, I probably don't know what something suspicious would look like; I'm probably not familiar enough with programming C extensions for Python.
Curious question--I did some investigation for you using a similar test case.
XML/
    __init__.py    contains: from indent import XMLIndentGenerator
    indent.py      contains the classes XMLIndentGenerator and Xml
    Sink.py
It appears that when you import a class from a module, even though you are importing just a portion of it, the entire module becomes accessible in the way you described, that is:
>>> import XML
>>> XML.indent
<module 'XML.indent' from 'XML\indent.py'>
>>> XML.indent.Xml  # did not include this in the from import
<class 'XML.indent.Xml'>
>>> XML.Sink
Traceback (most recent call last):
AttributeError: yadayada no attribute 'Sink'
This is expected, since I did not import Sink in __init__.py.....BUT!
I added a line to indent.py:
import Sink

class XMLIndentGenerator(XMLGenerator):
    (code)
Now, since this class imports a module contained within the XML package, if I do:
>>> import XML
>>> XML.Sink
<module 'XML.Sink' from 'XML\Sink.pyc'>
So it appears that because your imported sdds module also imports sddsdata, you are able to access it. That answers the "how" portion of your question; as for "why" this is the case, I'm sure there's an answer somewhere in the docs :)
I hope this helps - I was literally doing this as I was typing the answer! A learning experience for me as well.
This happens because Python imports don't work the way you might think. They work like this (a rough sketch of these steps follows the list):
the import machinery looks for a file that should be the module requested from the import
a types.ModuleType instance is created, several attributes on it are set based on the corresponding file (__file__, __name__ and so on), and that object is inserted into sys.modules under the fully qualified module name it would have.
if this is a submodule import (i.e., sdds.py, which is a submodule of SDDSPython), the newly created module is attached as an attribute to the existing Python module of the parent package.
the file is "executed" with that module as its global scope; all names defined by that file appear as attributes of the module.
in the case of a from import, an attribute from the module may be returned to the importing script.
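To make those steps concrete, here is a minimal sketch that replays them by hand. It is illustrative only (toy_import is a made-up name, and the real machinery also handles caching, loaders, packages and error cases), but it shows why everything a file defines or imports ends up as an attribute of its module object:

import sys
import types

def toy_import(name, path):
    # step 1: assume the file has already been located at `path`
    module = types.ModuleType(name)              # step 2: create the module object
    module.__file__ = path
    sys.modules[name] = module                   # step 2: register it under its full name
    parent_name, _, child = name.rpartition(".")
    if parent_name and parent_name in sys.modules:
        # step 3: a submodule is attached to its (already imported) parent package
        setattr(sys.modules[parent_name], child, module)
    with open(path) as f:
        code = compile(f.read(), path, "exec")
    exec(code, module.__dict__)                  # step 4: run the file in the module's namespace
    return module                                # step 5: `from ... import x` then does getattr(module, "x")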
So that means if I import a module (say, foo.py) that has, as its source only:
import bar
then there is a global in foo, called bar, and I can access it as foo.bar.
There is no capacity in python for "only execute the part of this python script i want to use right now." The whole thing runs.
Related
I have the following project structure:
Package1
|--__init__.py
|--__main__.py
|--Module1.py
|--Module2.py
where Module1.py contains something like:
import dill as pickle
import Package1.Module2

# from https://stackoverflow.com/questions/52402783/pickle-class-definition-in-module-with-dill
def mainify(obj):
    import __main__
    import inspect
    import ast

    s = inspect.getsource(obj)
    m = ast.parse(s)
    co = compile(m, "<string>", "exec")
    exec(co, __main__.__dict__)

def Module1():
    """I hope the details of this class are not necessary for this example. I can add detail if necessary.
    """

obj_to_pickle = Module1()

def write_session():
    mainify(Module1)
    mainify(Module2)
    with FileHandler.open_file(...) as f:
        pickle.dump(obj_to_pickle, f)
I run the code as a module via python -m Package1 ..., thus __main__.py is the entry point to package execution, though I hope these details aren't relevant (I can improve my example if necessary).
Now, when I try to load the pickled object, I get ModuleNotFoundError: No module named Package1.
How can I tell dill in this situation to understand that Package1 is the package? The mainify function seems to be getting the modules' source code into the pickle, but I believe the import Package1.Module2 statement in Module1.py is causing the error. How can I tell dill to understand the reference to Package1?
NOTE: this reference can be fixed by adding the directory that Package1 is in via sys.path.append. But the whole point of pickling the package source alongside the instance is to make the pickled instance loadable without needing to do this.
Relevant posts:
Pickle class definition in module with dill
Why dill dumps external classes by reference, no matter what?
@courtyardz, I'm a contributor to dill and your question is similar to others that have been asked in the past.
First, let me explain that generally dill assumes that all the modules necessary to deserialize an object are importable in the "unpickling" environment. Therefore modules are almost always saved by reference, with the current exception of modules that are not properly installed, like local modules (e.g. located in the working directory) or modules at non-canonical paths added to sys.path. There's also a function that's able to save the complete state of a module, which can be restored afterwards, but not the module itself.
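(For reference, the module-state function mentioned above is, if I recall correctly, dump_module/load_module in recent dill releases; older releases called it dump_session/load_session. A hedged sketch, where Package1.Module1 stands in for the question's module:)

import dill
import Package1.Module1 as m

# save the module's namespace (its state), not the module object itself
dill.dump_module('module1_state.pkl', module=m)

# later, in another interpreter where Package1 is importable:
# dill.load_module('module1_state.pkl')   # restores that namespace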
That said, what exactly do you need? Is it to serialize an object alongside its class (including any objects in the module's namespace that it refers to), or is it really the whole module?
If you need to transfer the complete module to an interpreter session where it's not available, like in a different machine, this problem is under active discussion here: https://github.com/uqfoundation/dill/issues/123. There's no complete solution for this currently, but one possibility is to ship the module as a ZIP archive, and load it using the zipimport module (indirectly, by saving the zip file to disk, maybe in a temporary location, and adding its path to sys.path as described in Python's documentation).
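A rough sketch of that ZIP idea (the function name and paths below are placeholders, not dill API): the receiving side only has to write the archive to disk and put its path on sys.path, after which the built-in zipimport machinery resolves the package from inside the archive.

import os
import sys
import tempfile

def install_zipped_package(zip_bytes):
    # write the received archive somewhere on disk (a temporary directory here)...
    path = os.path.join(tempfile.mkdtemp(), "shipped_package.zip")
    with open(path, "wb") as f:
        f.write(zip_bytes)
    # ...and let zipimport pick it up via the normal sys.path search
    sys.path.insert(0, path)

# install_zipped_package(received_bytes)
# import Package1   # now resolved from inside the ZIP archive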
If you just need to serialize an object with its class, note that doing such has the limitation that objects of that class pickled by separate calls to dill.dump() or dill.dumps() will end up having different (although identical) classes when unpickled. This may or may not be a problem. There's also an open discussion about forcing the serialization of a class by value: https://github.com/uqfoundation/dill/issues/424.
The workaround you are trying to use should work because dill pickles classes defined in the __main__ module by value, as well as "orphaned" classes, i.e. classes that can't be found in the module where they were defined. However, for this to work the object must be created by the __main__.Module1 class (I suppose this is a class, even though you used def instead of class in your code example), not the Package1.Module1.Module1 class. If the class references global objects in Module1 in its methods, you may need to use the option recurse=True with dill.dump(s).
A simpler workaround, which may not work for your specific case since it involves multiple modules, is to temporarily change the __module__ attribute of the class. For example, in a module's body:
import dill

class X:
    pass

obj = X()

X.__module__ = None  # temporarily orphan the class
with open('/path/to/file.pkl', 'wb') as file:
    dill.dump(obj, file)  # X will be pickled by value because __module__ is None
X.__module__ = __name__  # de-orphan the class
Going back to your example, if you can't create the object with the "mainified" class, you may change the object's class temporarily too:
obj_to_pickle = Module1()

def write_session():
    mainify(Module1)
    mainify(Module2)
    obj_to_pickle.__class__ = __main__.Module1
    with FileHandler.open_file(...) as f:
        pickle.dump(obj_to_pickle, f)
    obj_to_pickle.__class__ = Module1
However, this won't work if the object has instance attributes of types defined in Package1.
Most of the tutorials and books about Django or Flask import specific classes from files instead of importing the whole file.
For example, importing the DataRequired validator from wtforms.validators is done via from wtforms import validators instead of importing it via import wtforms.validators as valids and then accessing DataRequired with valids.DataRequired.
My question is: is there a reason for this?
I thought it might be to avoid loading a whole module, as a computation/memory optimization (is that really relevant?), or is it simply to make the code more readable?
My question is: is there a reason for this?
from module_or_package import something is the canonical pythonic idiom (when you only want to import something in your current namespace of course).
Also, import module_or_package.something only works if module_or_package is a package and something a submodule; it raises an ImportError (No module named something) if something is a function, class or any other object defined in module_or_package, as can be seen in the stdlib with os.path (which is a submodule of the os package) vs datetime.date (which is a class defined in the datetime module):
>>> import os.path as p
>>> p
<module 'posixpath' from '/home/bruno/.virtualenvs/blook/lib/python2.7/posixpath.pyc'>
vs
>>> import datetime.date as d
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named date
I thought it might be to avoid loading a whole module, as a computation/memory optimization (is that really relevant?)
Totally irrelevant: importing a given name from a module requires importing the whole module. Actually, this:
from module_or_package import something
is only syntactic sugar for
import module_or_package
something = module_or_package.something
del module_or_package
EDIT: You mention in a comment that
Right, but importing the whole module means loading it to the memory, which can be a reason for importing only a submodule/class
so it seems I failed to make the point clear: in Python, you can not "import only a submodule/class", period.
In Python, import, class and def are all executable statements (and actually just syntactic sugar for operations you can do 'manually' with functions and classes). Importing a module actually consists of executing all the code at the module's top level (which will instantiate function and class objects) and creating a module object (an instance of the module type) whose attributes will be all names defined at the top level via import, def and class statements or via explicit assignment. It's only when all this has been done that you can access any name defined in the module, and this is why, as I stated above,
from module import obj
is only syntactic sugar for
import module
obj = module.obj
del module
But (unless you do something stupid like defining a terabyte-huge dict or list in your module) this doesn't actually take that much time nor eat much RAM, and a module is only effectively executed once per process, the first time it's imported; after that it's cached in sys.modules, so subsequent imports only fetch the existing module object from the cache.
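A tiny demonstration of that caching, assuming a throwaway module heavy.py whose only content is print("loading heavy"):

import sys

import heavy                            # prints "loading heavy": the module body runs
import heavy                            # prints nothing: fetched from the cache
assert sys.modules['heavy'] is heavy    # same object, not a new copy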
Also, unless you actively prevent it, Python will cache the compiled version of the module (the .pyc files) and only recompile it if the .pyc is missing or older than the source .py file.
wrt packages and submodules, importing a submodule will also execute the package's __init__.py and build a module instance from it (IOW, at runtime, a package is also a module). Package initializers are canonically rather short, and actually quite often empty, FWIW...
It depends; in the tutorial, that was probably done for readability.
Usually, if you use most of the classes in a file, you import the file. If the file contains many classes but you only need a few, just import those.
It's both a matter of readability and optimization.
I can make this code work, but I am still confused why it won't work the first way I tried.
I am practicing python because my thesis is going to be coded in it (doing some cool things with Arduino and PC interfaces). I'm trying to import a class from another file into my main program so that I can create objects. Both files are in the same directory. It's probably easier if you have a look at the code at this point.
#from ArduinoBot import *
#from ArduinoBot import ArduinoBot
import ArduinoBot
# Create ArduinoBot object
bot1 = ArduinoBot()
# Call toString inside bot1 object
bot1.toString()
input("Press enter to end.")
Here is the very basic ArduinoBot class
class ArduinoBot:
    def toString(self):
        print("ArduinoBot toString")
Either of the first two commented-out import statements will make this work, but not the last one, which to me seems the most intuitive and general. There's not a lot of code for stuff to go wrong here; it's a bit frustrating to be hitting these kinds of finicky language-specific quirks when I had heard so many good things about Python. Anyway, I must be doing something wrong, but why doesn't the simple 'import ClassName' or 'import FileName' work?
Thank you for your help.
consider a file (example.py):
class foo(object):
    pass

class bar(object):
    pass

class example(object):
    pass
Now in your main program, if you do:
import example
What should be imported from the file example.py? Just the class example? Should the class foo come along too? The meaning would be too ambiguous if import module pulled the whole module's namespace directly into your current namespace.
The idea is that namespaces are wonderful. They let you know where the class/function/data came from. They also let you group related things together (or equivalently, they help you keep unrelated things separate!). A module sets up a namespace and you tell python exactly how you want to bring that namespace into the current context (namespace) by the way you use import.
from ... import * says -- bring everything in that module directly into my namespace.
from ... import ... as ... says, bring only the thing that I specify directly into my namespace, but give it a new name here.
Finally, import ... simply says bring that module into the current namespace, but keep it separate. This is the most common form in production code, for (at least) two reasons:
It prevents name clashes. You can have a local class named foo which won't conflict with the foo in example.py -- You get access to that via example.foo
It makes it easy to trace down which module a class came from for debugging.
consider:
from foo import *
from bar import *
a = AClass() #did this come from foo? bar? ... Hmmm...
In this case, to get access to the class example from example.py, you could also do:
import example
example_instance = example.example()
but you can also get foo:
foo_instance = example.foo()
The simple answer is that modules are things in Python. A module has its own status as a container for classes, functions, and other objects. When you do import ArduinoBot, you import the module. If you want things in that module -- classes, functions, etc. -- you have to explicitly say that you want them. You can either import them directly with from ArduinoBot import ..., or access them via the module with import ArduinoBot and then ArduinoBot.ArduinoBot.
Instead of working against this, you should leverage the container-ness of modules to allow you to group related stuff into a module. It may seem annoying when you only have one class in a file, but when you start putting multiple classes and functions in one file, you'll see that you don't actually want all that stuff being automatically imported when you do import module, because then everything from all modules would conflict with other things. The modules serve a useful function in separating different functionality.
For your example, the question you should ask yourself is: if the code is so simple and compact, why didn't you put it all in one file?
Import doesn't work quite the way you think it does. It does work the way it is documented to work, so there's a very simple remedy for your problem, but nonetheless:
import ArduinoBot
This looks for a module (or package) on the import path, executes the module's code in a new namespace, and then binds the module object itself to the name ArduinoBot in the current namespace. This means a module global variable named ArduinoBot in the ArduinoBot module would now be accessible in the importing namespace as ArduinoBot.ArduinoBot.
from ArduinoBot import ArduinoBot
This loads and executes the module as above, but does not bind the module object to the name ArduinoBot. Instead, it looks for a module global variable ArduinoBot within the module, and binds whatever object that refers to, to the name ArduinoBot in the current namespace.
from ArduinoBot import *
Similarly to the above, this loads and executes a module without binding the module object to any name in the current namespace. It then looks for all module global variables, and binds them all to the same name in the current namespace.
This last form is very convenient for interactive work in the python shell, but generally considered bad style in actual development, because it's not clear what names it actually binds. Considering it imports everything global in the imported module, including any names that it imported at global scope, it very quickly becomes extremely difficult to know what names are in scope or where they came from if you use this style pervasively.
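One common mitigation, shown in the sketch below with made-up module names, is for the module author to declare __all__, which restricts what a star-import exports:

# helpers.py
import os                      # imported at the module's global scope...

__all__ = ["useful"]           # ...but a star-import will only export `useful`

def useful():
    return os.getcwd()

def _internal():
    pass

# client.py
# from helpers import *       # binds `useful` only; `os` and `_internal` are not pulled in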
The module itself is an object. The last approach does in fact work, if you access your class as a member of the module. Either of the following will work, and either may be appropriate, depending on what else you need from the imported items:
from my_module import MyClass
foo = MyClass()
or
import my_module
foo = my_module.MyClass()
As mentioned in the comments, your module and class usually don't have the same name in python. That's more a Java thing, and can sometimes lead to a little confusion here.
I am trying to design the package and module system for a programming language (Heron) which can be both compiled and interpreted, and from what I have seen I really like the Python approach. Python has a rich choice of modules, which seems to contribute largely to its success.
What I don't know is what happens in Python if a module is included in two different compiled packages: are there separate copies made of the data or is it shared?
Related to this are a bunch of side-questions:
Am I right in assuming that packages can be compiled in Python?
What are the pros and cons of the two approaches (copying or sharing of module data)?
Are there widely known problems with the Python module system, from the point of view of the Python community? For example is there a PEP under consideration for enhancing modules/packages?
Are there certain aspects of the Python module/package system which wouldn't work well for a compiled language?
Well, you asked a lot of questions. Here are some hints to get a bit further:
a. Python code is lexed and compiled into Python-specific bytecode instructions, but not compiled to machine-executable code. The ".pyc" file is automatically created whenever you run Python code whose source does not match the existing .pyc timestamp. This feature can be turned off. You might play with the dis module to see these instructions.
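For example, a quick way to look at those instructions (the exact output varies between Python versions):

import dis

def add(a, b):
    return a + b

dis.dis(add)
# prints the bytecode, roughly:
#   LOAD_FAST    a
#   LOAD_FAST    b
#   BINARY_ADD   (BINARY_OP on newer versions)
#   RETURN_VALUE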
b. When a module is imported, it is executed (top to bottom) in its own namespace, and that namespace is cached globally. When you import it from another module, the module is not executed again. Remember that def is just a statement. You may want to put a print('compiling this module') statement in your code to trace it.
It depends.
There were recent enhancements, mostly around specifying which module needs to be loaded. Modules can have relative paths, so that a huge project might have multiple modules with the same name.
Python itself won't work for a compiled language. Google for "unladen swallow blog" to see the tribulations of trying to speed up a language where "a = sum(b)" can change meanings between executions. Outside of corner cases, the module system forms a nice bridge between source code and a compiled library system. The approach works well, and Python's easy wrapping of C code (swig, etc.) helps.
Modules are the only truly global objects in Python, with all other global data based around the module system (which uses sys.modules as a registry). Packages are simply modules with special semantics for importing submodules. "Compiling" a .py file into a .pyc or .pyo isn't compilation as understood for most languages: it only checks the syntax and creates a code object which, when executed in the interpreter, creates the module object.
example.py:
print "Creating %s module." % __name__
def show_def(f):
print "Creating function %s.%s." % (__name__, f.__name__)
return f
#show_def
def a():
print "called: %s.a" % __name__
Interactive session:
>>> import example
# first sys.modules['example'] is checked
# since it doesn't exist, example.py is found and "compiled" to example.pyc
# (since example.pyc doesn't exist, same would happen if it was outdated, etc.)
Creating example module. # module code is executed
Creating function example.a. # def statement executed
>>> example.a()
called: example.a
>>> import example
# sys.modules['example'] found, local variable example assigned to that object
# no 'Creating ..' output
>>> d = {"__name__": "fake"}
>>> exec open("example.py") in d
# the first import in this session is very similar to this
# in that it creates a module object (which has a __dict__), initializes a few
# variables in it (__builtins__, __name__, and others---packages' __init__
# modules have their own as well---look at some_module.__dict__.keys() or
# dir(some_module))
# and executes the code from example.py in this dict (or the code object stored
# in example.pyc, etc.)
Creating fake module. # module code is executed
Creating function fake.a. # def statement executed
>>> d.keys()
['__builtins__', '__name__', 'a', 'show_def']
>>> d['a']()
called: fake.a
Your questions:
They are compiled, in a sense, but not as you would expect if you're familiar with how C compilers work.
If the data is immutable, copying is feasible, and should be indistinguishable from sharing except for object identity (is operator and id() in Python).
Imports may or may not execute code (they always assign a local variable to an object, but that poses no problems) and may or may not modify sys.modules. You must be careful not to import in threads, and generally it is best to do all imports at the top of every module: this leads to a cascading graph so all the imports are done at once, and then __main__ continues and does the Real Work™.
I don't know of any current PEP, but there's already a lot of complex machinery in place, too. For example packages can have a __path__ attribute (really a list of paths) so submodules don't have to be in the same directory, and these paths can even be computed at runtime! (Example mungepath package below.) You can have your own import hooks, use import statements inside functions, directly call __import__, and I wouldn't be surprised to find 2-3 other unique ways to work with packages and modules.
A subset of the import system would work in a traditionally-compiled language, as long as it was similar to something like C's #include. You could run the "first level" of execution (creating the module objects) in the compiler, and compile those results. There are significant drawbacks to this, however, and amounts to separate execution contexts for module-level code and functions executed at runtime (and some functions would have to run in both contexts!). (Remember in Python that every statement is executed at runtime, even def and class statements.)
I believe this is the main reason traditionally-compiled languages restrict "top-level" code to class, function, and object declarations, eliminating this second context. Even then, you have initialization problems for global objects in C/C++ (and others), unless managed carefully.
mungepath/__init__.py:
print __path__
__path__.append(".") # CWD, would be different in non-example code
print __path__
from . import example # this is example.py from above, and is NOT in mungepath/
# note that this is a degenerate case, in that we now have two names for the
# 'same' module: example and mungepath.example, but they're really different
# modules with different functions (use 'is' or 'id()' to verify)
Interactive session:
>>> import example
Creating example module.
Creating function example.a.
>>> example.__dict__.keys()
['a', '__builtins__', '__file__', 'show_def', '__package__',
'__name__', '__doc__']
>>> import mungepath
['mungepath']
['mungepath', '.']
Creating mungepath.example module.
Creating function mungepath.example.a.
>>> mungepath.example.a()
called: mungepath.example.a
>>> example is mungepath.example
False
>>> example.a is mungepath.example.a
False
Global data is scoped at the interpreter level.
"packages" can be compiled as a package is just a collection of modules which themselves can be compiled.
I am not sure I understand given the established scoping of data.
I am in a project where we are starting refactoring some massive code base. One problem that immediately sprang up is that each file imports a lot of other files. How do I in an elegant way mock this in my unit test without having to alter the actual code so I can start to write unit-tests?
As an example: the file with the functions I want to test imports ten other files which are part of our software and not Python core libs.
I want to be able to run the unit tests as separately as possible, and for now I am only going to test functions that do not depend on things from the files that are being imported.
Thanks for all the answers.
I didn't really know what I wanted to do from the start but now I think I know.
The problem was that some imports were only possible when the whole application was running, because of some third-party auto-magic. So I had to make some stubs for these modules in a directory which I pointed to with sys.path.
Now I can import the file which contains the functions I want to write tests for in my unit-test file without complaints about missing modules.
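In case it helps anyone else, the stub arrangement looks roughly like this (directory and module names are placeholders for the real third-party ones):

# stubs/thirdparty_automagic.py -- just enough API for the import to succeed
# def register(*args, **kwargs):
#     pass

# test_my_functions.py
import os
import sys

# make the stubs directory win the module search before anything else on the path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "stubs"))

# import module_under_test   # its `import thirdparty_automagic` now finds the stub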
If you want to import a module while at the same time ensuring that it doesn't import anything, you can replace the __import__ builtin function.
For example, use this class:
class ImportWrapper(object):
    def __init__(self, real_import):
        self.real_import = real_import

    def wrapper(self, wantedModules):
        def inner(moduleName, *args, **kwargs):
            if moduleName in wantedModules:
                print "IMPORTING MODULE", moduleName
                return self.real_import(moduleName, *args, **kwargs)
            else:
                print "NOT IMPORTING MODULE", moduleName
        return inner

    def mock_import(self, moduleName, wantedModules):
        __builtins__.__import__ = self.wrapper(wantedModules)
        try:
            __import__(moduleName, globals(), locals(), [], -1)
        finally:
            __builtins__.__import__ = self.real_import
And in your test code, instead of writing import myModule, write:
wrapper = ImportWrapper(__import__)
wrapper.mock_import('myModule', [])
The second argument to mock_import is a list of module names you do want to import inside the imported module.
This example can be modified further to, e.g., import a different module than the one requested instead of just not importing it, or even to mock the module object with some custom object of your own.
If you really want to muck around with the python import mechanism, take a look at the ihooks module. It provides tools for changing the behavior of the __import__ built-in. But it's not clear from your question why you need to do this.
"imports a lot of other files"? Imports a lot of other files that are part of your customized code base? Or imports a lot of other files that are part of the Python distribution? Or imports a lot of other open source project files?
If your imports don't work, you have a "simple" PYTHONPATH problem. Get all of your various project directories onto a PYTHONPATH that you can use for testing. We have a rather complex path; on Windows we manage it like this:
set Part1=c:\blah\blah\blah
set Part2=c:\some\other\path
set that=g:\shared\stuff
set PYTHONPATH=%part1%;%part2%;%that%
We keep each piece of the path separate so that we (a) know where things come from and (b) can manage change when we move things around.
Since the PYTHONPATH is searched in order, we can control what gets used by adjusting the order on the path.
Once you have "everything", it becomes a question of trust.
Either
you trust something (i.e., the Python code base) and just import it.
Or
You don't trust something (i.e., your own code) and you
test it separately and
mock it for stand-alone testing.
Would you test the Python libraries? If so, you've got a lot of work. If not, then, you should perhaps only mock out the things you're actually going to test.
No difficult manipulation is necessary if you want a quick-and-dirty fix before your unit-tests.
If the unit tests are in the same file as the code you wish to test, simply delete the unwanted module from the globals() dictionary.
Here is a rather lengthy example: suppose you have a module impp.py with contents:
value = 5
Now, in your test file you can write:
>>> import impp
>>> def printVal():
...     print impp.value
>>> print globals().keys()
['printVal', '__builtins__', '__file__', 'impp', '__name__', '__doc__']
Note that impp is among the globals, because it was imported. Calling the function printVal that uses the impp module still works:
>>> printVal()
5
But now, if you remove impp key from globals()...
>>> del globals()['impp']
>>> print globals().keys()
['printVal', '__builtins__', '__file__', '__name__', '__doc__']
...and try to call printVal(), you'll get:
>>> printVal()
Traceback (most recent call last):
File "test_imp.py", line 13, in <module>
printVal()
File "test_imp.py", line 5, in printVal
print impp.value
NameError: global name 'impp' is not defined
...which is probably exactly what you're trying to achieve.
To use it in your unit-tests, you can delete the globals just before running the test suite, e.g. in __main__:
if __name__ == '__main__':
    del globals()['impp']
    unittest.main()
In your comment above, you say you want to convince Python that certain modules have already been imported. This still seems like a strange goal, but if that's really what you want to do, in principle you can sneak around behind the import mechanism's back and change sys.modules. I'm not sure how this would work for package imports, but it should be fine for absolute imports.
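A small sketch of that sys.modules trick, with a placeholder module name (slow_dependency) standing in for whatever you want to pretend is already imported:

import sys
import types

fake = types.ModuleType("slow_dependency")
fake.connect = lambda *args, **kwargs: None   # just enough surface for the code under test
sys.modules["slow_dependency"] = fake

import slow_dependency                        # satisfied straight from the cache
assert slow_dependency is fake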