Python imported module is None

I have a module that imports fine (I print it at the top of the module that uses it):
from authorize import cim
print cim
Which produces:
<module 'authorize.cim' from '.../dist-packages/authorize/cim.pyc'>
However, later in a method call, it has mysteriously turned into None:
class MyClass(object):
    def download(self):
        print cim
which, when run, shows that cim is None. The module isn't ever directly assigned None anywhere in this module.
Any ideas how this can happen?

As you comment yourself, it is likely that some code is assigning None to the "cim" name in your module itself. One way to check for this would be to make your large module "read only" for other modules -- I think Python allows for this --
(20 min. of hacking later)
Here -- just put this snippet in a "protect_module.py" file, import it, and call
ProtectedModule() at the end of the module in which the name "cim" is vanishing --
it should give you the culprit:
"""
Protects a Module against naive monkey patching -
may be usefull for debugging large projects where global
variables change without notice.
Just call the "ProtectedModule" class, with no parameters from the end of
the module definition you want to protect, and subsequent assignments to it
should fail.
"""
from types import ModuleType
from inspect import currentframe, getmodule
import sys

class ProtectedModule(ModuleType):
    def __init__(self, module=None):
        if module is None:
            module = getmodule(currentframe(1))
        ModuleType.__init__(self, module.__name__, module.__doc__)
        self.__dict__.update(module.__dict__)
        sys.modules[self.__name__] = self

    def __setattr__(self, attr, value):
        frame = currentframe(1)
        raise ValueError("Attempt to monkey patch module %s from %s, line %d" %
                         (self.__name__, frame.f_code.co_filename, frame.f_lineno))

if __name__ == "__main__":
    from xml.etree import ElementTree as ET
    ET = ProtectedModule(ET)
    print dir(ET)
    ET.bla = 10
    print ET.bla

In my case, this was related to threading quirks: https://docs.python.org/2/library/threading.html#importing-in-threaded-code
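For reference, the pattern those docs recommend can be sketched like this (a minimal example using json as a stand-in for the real module):

import threading

# Do imports in the main thread, before any worker threads are started;
# in Python 2, importing inside a thread's target can deadlock or observe
# a partially initialized module.
import json

def worker():
    # Safe: json was fully imported before this thread existed.
    print(json.dumps({"ok": True}))

t = threading.Thread(target=worker)
t.start()
t.join()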


Keeping track of when Python modules are imported

Does the interpreter somehow keep a timestamp of when a module is imported? Or is there an easy way of hooking into the import machinery to do this?
The scenario is a long-running Python process that at various points imports user-provided modules. I would like the process to be able to check "should I restart to load the latest code changes?" by checking the module file's timestamps against the time the module was imported.
Here's a way to automatically have an attribute (named _loadtime in the example code below) added to modules when they're imported. The code is based on Recipe 10.12 titled "Patching Modules on Import" in the book Python Cookbook, by David Beazley and Brian Jones, O'Reilly, 2013, which shows a technique that I adapted to do what you want.
For testing purposes I created this trivial target_module.py file:
print('in target_module')
Here's the example code:
import importlib
import sys
import time
class PostImportFinder:
    def __init__(self):
        self._skip = set()  # To prevent recursion.

    def find_module(self, fullname, path=None):
        if fullname in self._skip:  # Prevent recursion
            return None
        self._skip.add(fullname)
        return PostImportLoader(self)

class PostImportLoader:
    def __init__(self, finder):
        self._finder = finder

    def load_module(self, fullname):
        importlib.import_module(fullname)
        module = sys.modules[fullname]
        # Add a custom attribute to the module object.
        module._loadtime = time.time()
        self._finder._skip.remove(fullname)
        return module

sys.meta_path.insert(0, PostImportFinder())

if __name__ == '__main__':
    import time
    try:
        print('importing target_module')
        import target_module
    except Exception as e:
        print('Import failed:', e)
        raise
    loadtime = time.localtime(target_module._loadtime)
    print('module loadtime: {} ({})'.format(
        target_module._loadtime,
        time.strftime('%Y-%b-%d %H:%M:%S', loadtime)))
Sample output:
importing target_module
in target_module
module loadtime: 1604683023.2491636 (2020-Nov-06 09:17:03)
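With that _loadtime attribute in place, the "should I restart?" check from the question could be sketched like this (is_stale is a hypothetical helper, not part of the recipe):

import os

def is_stale(module):
    # True if the module's source file was modified after it was imported.
    source = getattr(module, '__file__', None)
    loadtime = getattr(module, '_loadtime', None)
    if not source or loadtime is None:
        return False  # built-in/extension module, or not tracked by the finder
    return os.path.getmtime(source) > loadtime

# e.g. is_stale(target_module) becomes True once target_module.py is edited again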
I don't think there's any way to get around how hacky this is, but how about something like this every time you import? (I don't know exactly how you're importing):
import importlib
import time
from types import ModuleType

# create a dictionary to keep track
# filter globals to exclude things that aren't modules and aren't builtins
MODULE_TIMES = {k: None for k, v in globals().items()
                if not k.startswith("__") and not k.endswith("__")
                and type(v) == ModuleType}

for module_name in user_module_list:
    MODULE_TIMES[module_name] = time.time()
    # eval() cannot execute an import statement; bind the module explicitly instead
    globals()[module_name] = importlib.import_module(module_name)
And then you can reference this dictionary in a similar way later.
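For example, a later staleness check against that dictionary might look like this (a sketch; it assumes the imported modules are reachable through sys.modules and have a __file__):

import os
import sys

stale = [name for name, imported_at in MODULE_TIMES.items()
         if imported_at is not None
         and os.path.getmtime(sys.modules[name].__file__) > imported_at]
if stale:
    print("restart needed, changed modules:", stale)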

Validating Arbitrary Python Code

I have an application that will take in a string and later run it as arbitrary python code. I wish to validate this string before I attempt to run it and evaluate it for a few things:
Syntactically correct (this can be done via the compile(stringCode, "foo.py", "exec") builtin)
All imports are available locally
Whether a class in the arbitrary code string inherits from a specific class
Whether the class from #3 also implements a specifically named method (so I can later call foo.bar() on the arbitrary code without too much hassle)
I've looked around at code objects, but they don't seem to let me do any of this unless I run the code directly, whereas I would rather validate that it works beforehand.
You can use ast.parse to create a syntax tree of your string. Then you can iterate over the tree and validate whatever parse-time qualities you like.
As internet_user says, this will not tell you about the run-time qualities of your code; if modules are imported through a mechanism other than the usual import statement, those won't be validated. If your classes are dynamically changed to add or remove methods, you won't know that just from looking at the defs in their class definition.
Provided that you're not worried about any of that, here's a sample implementation:
import ast
import sys
import os
import imp

s = """
import math, gzip
from os import system
import numpy
import obviouslyFakeModuleName

class A(int):
    def troz(self):
        return 23

class B(str):
    def zort(self):
        return 42
"""

def can_be_imported(name):
    try:
        imp.find_module(name)
        return True
    except ImportError:
        return False

def iter_nodes_by_type(code, type_or_types):
    for node in ast.walk(code):
        if isinstance(node, type_or_types):
            yield node

def iter_imported_module_names(code):
    for node in iter_nodes_by_type(code, ast.Import):
        for alias in node.names:
            yield alias.name
    for node in iter_nodes_by_type(code, ast.ImportFrom):
        yield node.module

def iter_globally_defined_classes(code):
    for child in ast.iter_child_nodes(code):
        if isinstance(child, ast.ClassDef):
            yield child

def iter_methods(class_):
    for node in ast.iter_child_nodes(class_):
        if isinstance(node, ast.FunctionDef):
            yield node

try:
    code = ast.parse(s)
except SyntaxError:
    print("That string is not valid Python.")
    sys.exit(0)

# inspection of imports
for name in iter_imported_module_names(code):
    if can_be_imported(name):
        print("module {} is available for import.".format(name))
    else:
        print("module {} is not available for import.".format(name))

# inspection of classes
for class_ in iter_globally_defined_classes(code):
    class_name = class_.name
    base_class_names = [name.id for name in class_.bases]
    function_names = [func.name for func in iter_methods(class_)]
    print("Inspecting class {}...".format(class_name))
    # we want to know if this class inherits directly from int
    if "int" in base_class_names:
        print("    Does inherit from int.")
    else:
        print("    Does not inherit from int.")
    # and does it implement zort()?
    if "zort" in function_names:
        print("    Implements `zort`.")
    else:
        print("    Does not implement `zort`.")
Result:
module math is available for import.
module gzip is available for import.
module numpy is not available for import.
module obviouslyFakeModuleName is not available for import.
module os is available for import.
Inspecting class A...
    Does inherit from int.
    Does not implement `zort`.
Inspecting class B...
    Does not inherit from int.
    Implements `zort`.
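As a side note, the imp module is deprecated on Python 3; the same availability check can be written with importlib.util.find_spec (a sketch, not part of the original answer):

import importlib.util

def can_be_imported(name):
    try:
        return importlib.util.find_spec(name) is not None
    except (ImportError, ValueError):
        # e.g. a submodule whose parent package cannot be imported
        return False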

Does eventlet do monkey_patch for threading module?

The docs at http://eventlet.net/doc/patching.htm say "If no arguments are specified, everything is patched." and "thread, which patches thread, threading, and Queue".
But with a simple test:
#!/bin/env python
import threading
import eventlet
eventlet.monkey_patch()

if __name__ == '__main__':
    patched = eventlet.patcher.is_monkey_patched(threading)
    print('patched : %s' % patched)
The result is:
patched : False
It seems like threading is not patched at all.
Is the doc wrong?
I found the doc is right. The problem is with is_monkey_patched(): it can't detect some cases, such as the threading and Queue modules. Take a look at the source of this function; the behaviour is easy to understand.
def _green_thread_modules():
    from eventlet.green import Queue
    from eventlet.green import thread
    from eventlet.green import threading
    if six.PY2:
        return [('Queue', Queue), ('thread', thread), ('threading', threading)]
    if six.PY3:
        return [('queue', Queue), ('_thread', thread), ('threading', threading)]

if on['thread'] and not already_patched.get('thread'):
    modules_to_patch += _green_thread_modules()
    already_patched['thread'] = True
def is_monkey_patched(module):
    """Returns True if the given module is monkeypatched currently, False if
    not. *module* can be either the module itself or its name.

    Based entirely off the name of the module, so if you import a
    module some other way than with the import keyword (including
    import_patched), this might not be correct about that particular
    module."""
    return module in already_patched or \
        getattr(module, '__name__', None) in already_patched
And because the patch operation is implemented like this:
for name, mod in modules_to_patch:
    orig_mod = sys.modules.get(name)
    if orig_mod is None:
        orig_mod = __import__(name)
    for attr_name in mod.__patched__:
        patched_attr = getattr(mod, attr_name, None)
        if patched_attr is not None:
            setattr(orig_mod, attr_name, patched_attr)
We can check whether a module like threading/Queue is patched by using:
>>> import threading
>>> eventlet.monkey_patch()
>>> threading.current_thread.__module__
'eventlet.green.threading'
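Generalizing that check, a small helper (looks_green is a made-up name, not an eventlet API) can report whether a given attribute was replaced by one of eventlet's green modules:

import threading

import eventlet
eventlet.monkey_patch()

def looks_green(obj):
    # True if the object appears to come from an eventlet.green module.
    return getattr(obj, '__module__', '').startswith('eventlet.green')

# threading.current_thread was replaced by the green version, even though
# is_monkey_patched(threading) reports False.
print(looks_green(threading.current_thread))  # True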

Python package/module lazily loading submodules

Interesting use case today: I need to migrate a module in our codebase following code changes. The old mynamespace.Document will disappear, and I want to ensure a smooth migration by replacing this package with an object that will dynamically import the correct path and migrate the corresponding objects.
In short:
# instantiate a dynamic package, but do not
# statically load its submodules
mynamespace.Document = SomeObject()
assert 'submodule' not in mynamespace.Document.__dict__
# and later on, when importing it, the submodule
# is built if not already available in __dict__
from mynamespace.Document.submodule import klass
c = klass()
A few things to note:
I am not talking only of migrating code. A simple huge sed would in a sense be enough to change the code in order to migrate some imports, and I would not need a dynamic module. I am talking of objects. A website, holding some live/stored objects will need migration. Those objects will be loaded assuming that mynamespace.Document.submodule.klass exists, and that's the reason for the dynamic module. I need to provide the site with something to load.
We cannot, or do not want to change the way objects are unpickled/loaded. For simplicity, let's just say that we want to make sure that the idiom from mynamespace.Document.submodule import klass has to work. I cannot use instead from mynamespace import Document as container; klass = getattr(getattr(container, 'submodule'), 'klass')
What I tried:
import sys
from types import ModuleType

class VerboseModule(ModuleType):
    def __init__(self, name, doc=None):
        super(VerboseModule, self).__init__(name, doc)
        sys.modules[name] = self

    def __repr__(self):
        return "<%s %s>" % (self.__class__.__name__, self.__name__)

    def __getattribute__(self, name):
        if name not in ('__name__', '__repr__', '__class__'):
            print "fetching attribute %s for %s" % (name, self)
        return super(VerboseModule, self).__getattribute__(name)

class DynamicModule(VerboseModule):
    """
    This module generates a dummy class when asked for a component
    """
    def __getattr__(self, name):
        class Dummy(object):
            pass
        Dummy.__name__ = name
        Dummy.__module__ = self
        setattr(self, name, Dummy)
        return Dummy

class DynamicPackage(VerboseModule):
    """
    This package should generate dummy modules
    """
    def __getattr__(self, name):
        mod = DynamicModule("%s.%s" % (self.__name__, name))
        setattr(self, name, mod)
        return mod

DynamicModule("foobar")
# (the import prints:)
# fetching attribute __path__ for <DynamicModule foobar>
# fetching attribute DynamicModuleWorks for <DynamicModule foobar>
# fetching attribute DynamicModuleWorks for <DynamicModule foobar>
from foobar import DynamicModuleWorks
print DynamicModuleWorks

DynamicPackage('document')
# fetching attribute __path__ for <DynamicPackage document>
from document.submodule import ButDynamicPackageDoesNotWork
# Traceback (most recent call last):
#   File "dynamicmodule.py", line 40, in <module>
#     from document.submodule import ButDynamicPackageDoesNotWork
# ImportError: No module named submodule
As you can see, the DynamicPackage does not work. I do not understand what is happening, because document is not even asked for a ButDynamicPackageDoesNotWork attribute.
Can anyone clarify what is happening; and if/how I can fix this?
The problem is that Python will bypass the entry for document in sys.modules and load the file for submodule directly. Of course, that file doesn't exist.
Demonstration:
>>> import multiprocessing
>>> multiprocessing.heap = None
>>> import multiprocessing.heap
>>> multiprocessing.heap
<module 'multiprocessing.heap' from '/usr/lib/python2.6/multiprocessing/heap.pyc'>
We would expect heap to still be None because Python could just pull it out of sys.modules, but that doesn't happen. The dotted notation essentially maps directly to {something on the Python path}/document/submodule.py, and an attempt is made to load that file directly.
Update
The trick is to override Python's import system. The following code requires your DynamicModule class.
import sys

class DynamicImporter(object):
    """This class works as both a finder and a loader."""

    def __init__(self, lazy_packages):
        self.packages = lazy_packages

    def load_module(self, fullname):
        """This makes the class a loader. It is given the name of a module and is
        expected to return the module object."""
        print "loading {0}".format(fullname)
        components = fullname.split('.')
        components = ['.'.join(components[:i + 1])
                      for i in range(len(components))]
        for component in components:
            if component not in sys.modules:
                DynamicModule(component)
                print "{0} created".format(component)
        return sys.modules[fullname]

    def find_module(self, fullname, path=None):
        """This makes the class a finder. It is given the name of a module as well as
        the package that contains it (if applicable). It is expected to return a
        loader for that module if it knows of one, or None, in which case other
        methods will be tried."""
        if fullname.split('.')[0] in self.packages:
            print "found {0}".format(fullname)
            return self
        else:
            return None

# This is a list of finder objects, which is empty by default.
# It is tried before anything else when a request to import a module is encountered.
sys.meta_path = [DynamicImporter('foo')]
from foo.bar import ThisShouldWork

__getattr__ on a module

How can I implement the equivalent of a __getattr__ on a class, on a module?
Example
When calling a function that does not exist in a module's statically defined attributes, I wish to create an instance of a class in that module, and invoke the method on it with the same name as failed in the attribute lookup on the module.
class A(object):
    def salutation(self, accusative):
        print "hello", accusative

# note this function is intentionally on the module, and not the class above
def __getattr__(mod, name):
    return getattr(A(), name)

if __name__ == "__main__":
    # i hope here to have my __getattr__ function above invoked, since
    # salutation does not exist in the current namespace
    salutation("world")
Which gives:
matt@stanley:~/Desktop$ python getattrmod.py
Traceback (most recent call last):
  File "getattrmod.py", line 9, in <module>
    salutation("world")
NameError: name 'salutation' is not defined
There are two basic problems you are running into here:
(1) __xxx__ methods are only looked up on the class
(2) TypeError: can't set attributes of built-in/extension type 'module'
(1) means any solution would have to also keep track of which module was being examined, otherwise every module would then have the instance-substitution behavior; and (2) means that (1) isn't even possible... at least not directly.
Fortunately, sys.modules is not picky about what goes there, so a wrapper will work, but only for module access (i.e. import somemodule; somemodule.salutation('world')); for same-module access you pretty much have to yank the methods from the substitution class and add them to globals(), either with a custom method on the class (I like using .export()) or with a generic function (such as those already listed as answers). One thing to keep in mind: if the wrapper is creating a new instance each time, and the globals solution is not, you end up with subtly different behavior. Oh, and you don't get to use both at the same time -- it's one or the other.
Update
From Guido van Rossum:
There is actually a hack that is occasionally used and recommended: a
module can define a class with the desired functionality, and then at
the end, replace itself in sys.modules with an instance of that class
(or with the class, if you insist, but that's generally less useful).
E.g.:
# module foo.py

import sys

class Foo:
    def funct1(self, <args>): <code>
    def funct2(self, <args>): <code>

sys.modules[__name__] = Foo()
This works because the import machinery is actively enabling this
hack, and as its final step pulls the actual module out of
sys.modules, after loading it. (This is no accident. The hack was
proposed long ago and we decided we liked enough to support it in the
import machinery.)
So the established way to accomplish what you want is to create a single class in your module, and as the last act of the module replace sys.modules[__name__] with an instance of your class -- and now you can play with __getattr__/__setattr__/__getattribute__ as needed.
Note 1: If you use this functionality then anything else in the module, such as globals, other functions, etc., will be lost when the sys.modules assignment is made -- so make sure everything needed is inside the replacement class.
Note 2: To support from module import * you must have __all__ defined in the class; for example:
class Foo:
    def funct1(self, <args>): <code>
    def funct2(self, <args>): <code>
    __all__ = list(set(vars().keys()) - {'__module__', '__qualname__'})
Depending on your Python version, there may be other names to omit from __all__. The set() can be omitted if Python 2 compatibility is not needed.
A while ago, Guido declared that all special method lookups on
new-style classes bypass __getattr__ and __getattribute__. Dunder methods had previously worked on modules - you could, for example, use a module as a context manager simply by defining __enter__ and __exit__, before those tricks broke.
Recently some historical features have made a comeback, the module __getattr__ among them, so the existing hack (a module replacing itself with a class in sys.modules at import time) should no longer be necessary.
In Python 3.7+, you just use the one obvious way. To customize attribute access on a module, define a __getattr__ function at the module level which should accept one argument (name of attribute), and return the computed value or raise an AttributeError:
# my_module.py
from typing import Any

def __getattr__(name: str) -> Any:
    ...
This will also allow hooks into "from" imports, i.e. you can return dynamically generated objects for statements such as from my_module import whatever.
On a related note, along with the module getattr you may also define a __dir__ function at module level to respond to dir(my_module). See PEP 562 for details.
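To make that concrete, here is a minimal sketch of the question's example rewritten with the PEP 562 hook (Python 3.7+). Note that the module-level __getattr__ is consulted for attribute access on the module and for from-imports, but not for bare names used inside the module itself, so calling salutation("world") from within getattrmod.py would still raise NameError:

# getattrmod.py (Python 3.7+)
class A:
    def salutation(self, accusative):
        print("hello", accusative)

def __getattr__(name):
    # Called only when normal lookup on the module fails;
    # fall back to a fresh A() instance, as the question intends.
    return getattr(A(), name)

# In another module:
#   import getattrmod
#   getattrmod.salutation("world")     # prints "hello world"
#   from getattrmod import salutation  # also routed through __getattr__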
This is a hack, but you can wrap the module with a class:
import sys

class Wrapper(object):
    def __init__(self, wrapped):
        self.wrapped = wrapped

    def __getattr__(self, name):
        # Perform custom logic here
        try:
            return getattr(self.wrapped, name)
        except AttributeError:
            return 'default'  # Some sensible default

sys.modules[__name__] = Wrapper(sys.modules[__name__])
We don't usually do it that way.
What we do is this.
class A(object):
    ....

# The implicit global instance
a = A()

def salutation(*arg, **kw):
    a.salutation(*arg, **kw)
Why? So that the implicit global instance is visible.
For examples, look at the random module, which creates an implicit global instance to slightly simplify the use cases where you want a "simple" random number generator.
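For instance, random's module-level functions are simply bound methods of a hidden Random() instance, so both of the following work:

import random

print(random.random())     # uses the implicit, module-wide Random() instance
rng = random.Random(42)    # create your own instance when you need isolation
print(rng.random())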
Similar to what @Håvard S proposed, in a case where I needed to implement some magic on a module (like __getattr__), I would define a new class that inherits from types.ModuleType and put that in sys.modules (probably replacing the module where my custom ModuleType was defined).
See the main __init__.py file of Werkzeug for a fairly robust implementation of this.
This is hackish, but...
# Python 2.7
import types

class A(object):
    def salutation(self, accusative):
        print("hello", accusative)

    def farewell(self, greeting, accusative):
        print(greeting, accusative)

def AddGlobalAttribute(classname, methodname):
    print("Adding " + classname + "." + methodname + "()")
    def genericFunction(*args):
        return globals()[classname]().__getattribute__(methodname)(*args)
    globals()[methodname] = genericFunction

# set up the global namespace
x = 0   # X and Y are here to add them implicitly to globals, so
y = 0   # globals does not change as we iterate over it.

toAdd = []

def isCallableMethod(classname, methodname):
    someclass = globals()[classname]()
    something = someclass.__getattribute__(methodname)
    return callable(something)

for x in globals():
    print("Looking at", x)
    if isinstance(globals()[x], (types.ClassType, type)):
        print("Found Class:", x)
        for y in dir(globals()[x]):
            if y.find("__") == -1:  # hack to ignore default methods
                if isCallableMethod(x, y):
                    if y not in globals():  # don't override existing global names
                        toAdd.append((x, y))

# Returns:
# ('Looking at', 'A')
# ('Found Class:', 'A')
# ('Looking at', 'toAdd')
# ('Looking at', '__builtins__')
# ('Looking at', 'AddGlobalAttribute')
# ('Looking at', 'register')
# ('Looking at', '__package__')
# ('Looking at', 'salutation')
# ('Looking at', 'farewell')
# ('Looking at', 'types')
# ('Looking at', 'x')
# ('Looking at', 'y')
# ('Looking at', '__name__')
# ('Looking at', 'isCallableMethod')
# ('Looking at', '__doc__')
# ('Looking at', 'codecs')

for x in toAdd:
    AddGlobalAttribute(*x)

if __name__ == "__main__":
    salutation("world")
    farewell("goodbye", "world")

# Returns:
# hello world
# goodbye world
This works by iterating over all the objects in the global namespace. If the item is a class, it iterates over the class attributes. If an attribute is callable, it adds it to the global namespace as a function.
It ignores all attributes which contain "__".
I wouldn't use this in production code, but it should get you started.
Here's my own humble contribution -- a slight embellishment of @Håvard S's highly rated answer, but a bit more explicit (so it might be acceptable to @S.Lott, even though probably not good enough for the OP):
import sys

class A(object):
    def salutation(self, accusative):
        print "hello", accusative

class Wrapper(object):
    def __init__(self, wrapped):
        self.wrapped = wrapped

    def __getattr__(self, name):
        try:
            return getattr(self.wrapped, name)
        except AttributeError:
            return getattr(A(), name)

_globals = sys.modules[__name__] = Wrapper(sys.modules[__name__])

if __name__ == "__main__":
    _globals.salutation("world")
Create your module file that has your classes. Import the module. Run getattr on the module you just imported. You can do a dynamic import using __import__ and pull the module from sys.modules.
Here's your module some_module.py:
class Foo(object):
    pass

class Bar(object):
    pass
And in another module:
import some_module
Foo = getattr(some_module, 'Foo')
Doing this dynamically:
import sys
__import__('some_module')
mod = sys.modules['some_module']
Foo = getattr(mod, 'Foo')
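On current Python versions, importlib.import_module is the usual way to spell this kind of dynamic import; a roughly equivalent sketch:

import importlib

mod = importlib.import_module('some_module')
Foo = getattr(mod, 'Foo')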
