Populating Factory using Metaclasses in Python

Obviously, registering classes in Python is a major use-case for metaclasses. In this case, I've got a serialization module that currently uses dynamic imports to create classes and I'd prefer to replace that with a factory pattern.
So basically, it does this:
data = ...  # (generic object built from the serial data)
moduleName = data.getModule()
className = data.getClass()
aModule = __import__(moduleName)
aClass = getattr(aModule, className)
But I want it to do this:
data = ...  # (generic object built from the serial data)
classKey = data.getFactoryKey()
aClass = factory.getClass(classKey)
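For what it's worth, here is a minimal sketch of what I mean by a metaclass-backed factory (FactoryMeta, factory_key and Widget are illustrative names I made up for this post, not my real code):

class FactoryMeta(type):
    """Metaclass that records every class it creates in a shared registry."""
    _registry = {}

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        # Register under an explicit factory_key attribute if given, else the class name.
        FactoryMeta._registry[namespace.get("factory_key", name)] = cls
        return cls

def getClass(classKey):
    """The factory lookup: return the class registered under classKey."""
    return FactoryMeta._registry[classKey]

# Registration happens as a side effect of class creation, i.e. only once the
# defining module is actually imported (Python 2 spelling: __metaclass__ = FactoryMeta).
class Widget(metaclass=FactoryMeta):
    factory_key = "widget"

assert getClass("widget") is Widget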
However, there's a hitch: if I make the factory rely on metaclasses, the factory only learns about a class's existence after its module is imported (i.e., classes are registered at module import time). So to populate the factory, I'd have to either:
manually import all related modules (which would really defeat the purpose of having metaclasses automatically register things...) or
automatically import everything in the whole project (which strikes me as incredibly clunky and ham-fisted).
Out of these options, just registering the classes directly in a factory seems like the best one. Has anyone found a better solution that I'm just not seeing? One possibility would be to automatically generate the imports required in the factory module by traversing the project files, but unless you do that with a commit hook, you run the risk of your factory getting out of date.
Update:
I have posted a self-answer, to close this off. If anyone knows a good way to traverse all Python modules across nested subpackages in a way that will never hit a cycle, I will gladly accept that answer rather than this one. The main problem I see happening is:
\A.py (import Sub.S2)
\Sub\S1.py (import A)
\Sub\S2.py
\Sub\S3.py (import Sub.S2)
When you try to import S3, it first needs to import the root package (otherwise it won't know what Sub is). At that point, it tries to import A. While doing so, the __init__.py is run and tries to register A. At this point, A tries to import S1. Since the __init__.py in Sub is hit, it tries to import S1, S2, and S3. However, S1 wants to import A (which does not yet exist, as it is still in the process of being imported), so that import fails. You can switch how the traversal occurs (i.e., depth-first rather than breadth-first), but you hit the same issues. Any insight on a good traversal approach for this would be very helpful. A two-stage approach can probably solve it (i.e., traverse to collect all module references, then import them as a flat batch), but I am not quite sure of the best way to handle the final stage (i.e., knowing when you are done traversing so you can import everything). My big restriction is that I do not want a super-package to deal with (i.e., an extra directory containing Sub and A). If I had that, it could kick off the traversal, but everything would then need to import relative to it for no good reason (i.e., all imports would be longer by an extra directory). Thus far, adding a special function call to sitecustomize.py seems like my only option (I set the root directory for package development in that file anyway).

The solution I found was to base all imports on a particular base directory and to give a special __init__.py to every package that might contain modules with classes I'd want registered. So basically, if you import any module, it first has to import the base directory's package, which then proceeds to walk every package (i.e., folder) with a similar __init__.py file.
The downside of this approach is that the same modules are sometimes imported multiple times, which is annoying if anyone leaves code with side effects at module import time (though that's bad practice either way). Unfortunately, some major packages (cough, cough: Flask) behave badly in IDLE if you do this (IDLE just restarts, rather than doing anything). The other downside is that because modules import each other, it sometimes attempts to import a module that is already in the process of being imported (an easily caught error, but one I'm still trying to stamp out). It's not ideal, but it does get the job done. Additional details on the more specific issue are given above, and if anyone can offer a better answer, I will gladly accept it.
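For reference, here is a rough sketch of the kind of walk I mean, using only the standard library (pkgutil and importlib); base_package is a stand-in for whatever root package the __init__.py files hang off of:

import importlib
import pkgutil

def import_all_modules(package_name):
    """Import every module under package_name so metaclass registration runs."""
    package = importlib.import_module(package_name)
    for _finder, name, _is_pkg in pkgutil.walk_packages(package.__path__,
                                                        prefix=package.__name__ + "."):
        # Modules already in sys.modules are served from the cache, so repeated
        # calls don't re-execute module-level code.
        importlib.import_module(name)

# Called once, e.g. from the base package's __init__.py or from sitecustomize.py:
# import_all_modules("base_package")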

Related

What is the benefit of putting an import statement inside of a class? [duplicate]

I created a module named util that provides classes and functions I often use in Python.
Some of them need to import other modules. What are the pros and cons of importing the needed modules inside a class/function definition, rather than at the beginning of the module file? Is it a good idea?
It's the most common style to put every import at the top of the file. PEP 8 recommends it, which is a good reason to do it to start with. But it's not a whim: it has advantages (although not critical enough to make everything else a crime). It allows finding all imports at a glance, as opposed to looking through the whole file. It also ensures everything is imported before any other code (which may depend on some imports) is executed. NameErrors are usually easy to resolve, but they can be annoying.
There's no (significant) namespace pollution to be avoided by keeping the imported name in a smaller scope, since all you add is the actual module name (no, import * doesn't count and probably shouldn't be used anyway). Inside a function, the import statement would also execute again on every call (not really harmful, since the module is only actually loaded once, but uncalled for).
PEP 8, the Python style guide, states that:
Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants.
Of course this is no hard and fast rule, and imports can go anywhere you want them to. But putting them at the top is the best way to go about it. You can of course import within functions or a class.
But note you cannot do this:
def foo():
    from os import *
Because import * is only allowed at module level. In Python 2 this raised a SyntaxWarning; in Python 3 it is an outright error:
SyntaxError: import * only allowed at module level
Like flying sheep's answer, I agree that the others are right, but I put imports in other places, such as __init__() methods and function bodies, while I am DEVELOPING code. After my class or function has been tested and proven to work with the import inside it, I normally give it its own module with the imports following PEP 8 guidelines. I do this because sometimes I forget to delete imports after refactoring code or removing old code with bad ideas. By keeping the imports inside the class or function under development, I am specifying its dependencies in case I want to copy it elsewhere or promote it to its own module...
Only move imports into a local scope, such as inside a function definition, if it’s necessary to solve a problem such as avoiding a circular import or trying to reduce the initialization time of a module. This technique is especially helpful if many of the imports are unnecessary depending on how the program executes. You may also want to move imports into a function if the modules are only ever used in that function. Note that loading a module the first time may be expensive because of the one-time initialization of the module, but loading a module multiple times is virtually free, costing only a couple of dictionary lookups. Even if the module name has gone out of scope, the module is probably available in sys.modules.
https://docs.python.org/3/faq/programming.html#what-are-the-best-practices-for-using-import-in-a-module
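As a concrete illustration of that advice (render_report and the matplotlib dependency are stand-ins I picked, not something from the FAQ):

def render_report(points):
    # Deferred import: most callers never hit this code path, and matplotlib is
    # comparatively slow to import, so we only pay for it when we actually plot.
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    ax.plot(points)
    return fig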
I believe that it's best practice (according to some PEPs) to keep import statements at the beginning of a module. You can add import statements to an __init__.py file, which makes those modules available as attributes of the package.
So...it's certainly something you can do the way you're doing it, but it's discouraged and actually unnecessary.
While the other answers are mostly right, there is a reason why Python allows this.
It is wasteful to import things that aren't needed. So if you want to, e.g., parse XML into an element tree, but don't want to use the slow built-in XML parser when lxml is available, you need to perform that check at the point where you invoke the parser.
And instead of recording lxml's availability up front, I would prefer to try importing and using lxml and, if it's not there, fall back to the builtin xml module.
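The usual spelling of that fallback looks roughly like this (lxml.etree deliberately mirrors the ElementTree API, so the rest of the code works with either import):

try:
    from lxml import etree                     # fast C parser, if installed
except ImportError:
    import xml.etree.ElementTree as etree      # stdlib fallback

root = etree.fromstring("<doc><item>hello</item></doc>")
print(root.find("item").text)                  # -> hello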

What benefits or disadvantages would importing a module that contains 'import' commands bring?

Suppose I create a module called, for example, imp_mod.py that contains imports for all the (subjectively chosen) modules I frequently use.
Would importing this module into my main program give me access to the imports contained inside imp_mod.py?
If so, what disadvantages would this bring?
I guess a major advantage would be a reduction of time spent importing, even though it's only a couple of seconds saved...
Yes, it would allow you to access them. If you place these imports in imp_mod.py:
from os import listdir
from collections import defaultdict
from copy import deepcopy
Then, you could do this in another file, say, myfile.py:
import imp_mod
imp_mod.listdir
imp_mod.defaultdict
imp_mod.deepcopy
You're wrong about the reduction in import time; what happens is the opposite. Python will need to import imp_mod and then import the other modules afterwards, while the first import would not be needed if you imported those modules in myfile.py itself. If you do the same imports in another file, they will already be cached, so virtually no time is spent on the next import.
The real disadvantage here is less readability. Whoever looks at imp_mod.listdir, for example, will wonder what on earth this function is and why it has the same name as the one in the os module. When they have to open imp_mod.py just to find out that it's the same function, they probably won't be happy. I wouldn't.
As lucasnadalutti mentioned, you can access them by importing your module.
In terms of advantages, it can make your main program care less about where the imports are coming from if imp_mod handles them all. However, as your program gets more complex and starts to include more namespaces, this approach can get messy. You can handle some of this by using __init__.py within directories to do something similar, but as things get more complex, I personally feel it adds complexity. I'd rather just know where a module came from so I can look it up.

Import from parent directory for a test sub-directory without using packaging, Python 2.7

TL;DR
For a fixed and unchangeable non-package directory structure like this:
some_dir/
    mod.py
    test/
        test_mod.py
        example_data.txt
what is a non-package way to enable test_mod.py to import from mod.py?
I am restricted to using Python 2.7 in this case.
I want to write a few tests for the functions in mod.py. I want to create a new directory test that sits alongside mod.py and inside it there is one test file test/test_mod.py which should import the functions from mod.py and test them.
Because of the well-known limitations of relative imports, which rely on package-based naming, you can't do this in the straightforward way. Yet all the advice on the topic suggests building the code as a package and then using relative imports, which is impossible for my use case.
In my case, it is not allowable for mod.py to be built as a package and I cannot require users of mod.py to install it. They are instead free to merely check the file out from version control and just begin using it however they wish, and I am not able to change that circumstance.
Given this, what is a way to provide a simple, straightforward test directory?
Note: not just a test file that sits alongside mod.py, but an actual test directory since there will be other assets like test data that come with it, and the sub-directory organization is critical.
I apologize if this is a duplicate, but out of the dozen or so permutations of this question I've seen in my research before posting, I haven't seen a single one that addresses how to do this. They all say to use packaging, which is not a permissible option for my case.
Based on @mgilson's comment, I added a file import_helper.py to the test directory.
some_dir/
    mod.py
    test/
        test_mod.py
        import_helper.py
        example_data.txt
Here is the content of import_helper.py:
import sys as _sys
import os.path as _ospath
import inspect as _inspect
from contextlib import contextmanager as _contextmanager

@_contextmanager
def enable_parent_import():
    path_appended = False
    try:
        # Locate this file, then its directory (test/), then the parent (some_dir/).
        current_file = _inspect.getfile(_inspect.currentframe())
        current_directory = _ospath.dirname(_ospath.abspath(current_file))
        parent_directory = _ospath.dirname(current_directory)
        _sys.path.insert(0, parent_directory)
        path_appended = True
        yield
    finally:
        if path_appended:
            _sys.path.pop(0)
and then in the import section of test_mod.py, prior to an attempt to import mod.py, I have added:
import unittest

from import_helper import enable_parent_import

with enable_parent_import():
    from mod import some_mod_function_to_test
It is unfortunate to need to mangle sys.path by hand, but writing it as a context manager helps a little, and it restores sys.path to the state it was in before the parent-directory modification.
In order for this solution to scale across multiple instances of this problem (say tomorrow I am asked to write a widget.py module for some unrelated tasks and it also cannot be distributed as a package), I have to replicate my helper function and ensure a copy of it is distributed with any tests, or I have to write that small utility as a package, ensure it gets globally installed across my user base, and then maintain it going forward.
When you manage a lot of Python code internal to a company, one dysfunctional code-distribution pattern that often arises is that "installing" some Python code becomes equivalent to checking the new version out of version control.
Since the code is often extremely localized and specific to a small set of tasks for a small subset of a larger team, taking on the overhead of sharing code via packaging (even if it is a better idea in general) simply never happens.
As a result, I feel the use case I describe above is extremely common in real-world Python, and it would be nice if some import tooling added this functionality for modifying sys.path, with sensible default choices (like adding the parent directory) made very easy.
That way you could rely on this at least being part of the standard library, and not needing to roll your own code and ensure it's either shipped with your tests or installed across your user base.

Proper way to import across Python package

Let's say I have a couple of Python packages.
/package_name
    __init__.py
    /dohickey
        __init__.py
        stuff.py
        other_stuff.py
        shiny_stuff.py
    /thingamabob
        __init__.py
        cog_master.py
        round_cogs.py
        teethless_cogs.py
    /utilities
        __init__.py
        important.py
        super_critical_top_secret_cog_blueprints.py
What's the best way to utilize the utilities package? Say shiny_stuff.py needs to import important.py, what's the best way to go about that?
Currently I'm thinking
from .utilities import important
But is that the best way? Would it make more sense to add utilities to the path and import it that way?
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(__file__)))
import utilities.super_critical_top_secret_cog_blueprints
That seems clunky to add to each of my files.
I think the safest way is always to use absolute imports, so in your case:
from package_name.utilities import important
This way you won't have to change your code if you decide to move shiny_stuff.py to some other package (assuming that package_name will still be in your sys.path).
According to Nick Coghlan (who is a Python core developer):
"“Never add a package directory, or any directory inside a package, directly to the Python path.” (Under the heading "The double import trap")
Adding the package directory to the path gives two separate ways for the same module to be referred to. The quoted post is an excellent overview of the Python import system. Adding the directory to the path directly means you can end up with two copies of a single module, which you don't want. Your relative import is fine (though note that from shiny_stuff.py inside dohickey it needs two dots: from ..utilities import important), and an absolute import import package_name.utilities.important is also fine.
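A small sketch of that trap using the layout from the question (it assumes the directory containing package_name/ is already on sys.path, as it normally would be):

import os
import sys

import package_name
import package_name.utilities.important as via_package

# Putting the package directory itself on sys.path makes the same file
# reachable under a second top-level name...
sys.path.append(os.path.dirname(package_name.__file__))
import utilities.important as via_path

print(via_package is via_path)   # False: one source file, two distinct module objects

Any module-level state in important.py now exists twice, which is exactly the problem the quote warns about.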
A "best" out-of-context choice probably doesn't exist, but you can have some criteria choosing which is better for your use cases, and for such a judgment one should know are the different possible approaches and their characteristics. Probably the best source of information is the PEP 328 itself, which contains some rationale about declaring distinct possibilities for that.
A common approach is to use the "absolute import", in your case it would be something like:
from package_name.utilities import important
This way, you can even run the file as a script. The module is somewhat independent from other modules and packages, tied mainly to its location. If you have a package structure and need to move a single module to a different location, an absolute path lets that file stay unchanged, but every module that uses it has to change. Of course, you can also import the __init__.py files, as in:
from package_name import utilities
These imports have the same characteristics. Be careful: utilities.important tries to find a name important inside __init__.py, not the module important.py, so having an __init__.py that does from . import important (import important in Python 2) helps avoid mistakes arising from the distinction between file structure and namespace structure.
Another way to do that is the relative approach, by using:
from ..utilities import important
A single dot (from .stuff import ___ or from . import ___) means "a module in this [sub]package", or the __init__.py itself when there's only the dot. Each additional dot goes up one parent package. Generally, imports starting with dots aren't allowed in a script/executable, but you can read about explicit relative imports (PEP 366) if you care about scripts with relative imports.
A justification for relative import can be found on the PEP 328 itself:
With the shift to absolute imports, the question arose whether relative imports should be allowed at all. Several use cases were presented, the most important of which is being able to rearrange the structure of large packages without having to edit sub-packages. In addition, a module inside a package can't easily import itself without relative imports.
In either case, the modules are tied to their subpackages in the sense that package_name is imported first no matter which module the user tried to import, unless you use sys.path to search for subpackages as top-level packages (i.e., put the package root itself on sys.path)... but that sounds weird; why would one do that?
The __init__.py can also re-export names from its modules; for that, you should care about its namespace contents. For example, say important.py has an object called top_secret, which is a dictionary. To find it from anywhere you would need:
from package_name.utilities.important import top_secret
Perhaps you want to be less specific:
from package_name.utilities import top_secret
That would be done with an __init__.py with the following line inside it:
from .important import top_secret
That's perhaps mixing relative and absolute imports, but inside an __init__.py you probably know whether the subpackage makes sense as a subpackage, i.e., as an abstraction by itself. If it's just a bunch of files located in the same place and explicit module names are wanted, the __init__.py would probably be empty (or almost empty). But to spare the user from spelling out module names, the same idea can be applied in the root __init__.py, with:
from .utilities import top_secret
Completely indirect, but the namespace ends up flat this way while the files stay nested for internal organization. For example, the wx package (wxPython) does that: everything can be reached directly with from wx import ___.
You can also use some metaprogramming to find the contents if you want to follow this approach, for example using __all__ to detect all the names a module has, or looking at the file locations to know which modules/subpackages are available to import. However, some simpler code-completion utilities might get lost when you do that.
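A sketch of that kind of metaprogramming (my own illustration, not code from the packages above): an __init__.py that imports its submodules and hoists their public names.

# utilities/__init__.py (hypothetical)
import importlib
import pkgutil

for _finder, _mod_name, _is_pkg in pkgutil.iter_modules(__path__):
    _module = importlib.import_module("." + _mod_name, __name__)
    # Respect an explicit __all__ if the module defines one; otherwise take
    # every non-underscore name.
    _public = getattr(_module, "__all__",
                      [n for n in dir(_module) if not n.startswith("_")])
    for _name in _public:
        globals()[_name] = getattr(_module, _name)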
In some contexts you might have other kinds of constraints. For example, macropy does some "magic" with imports and doesn't work in the file you call as a script, so you'll need at least two modules just to use that package.
Anyhow, you should always ask whether nesting into subpackages is really needed for your code or API organization. PEP 20 tells us that "flat is better than nested", which isn't a law but a point of view suggesting you keep a flat package structure unless nesting is needed for some reason. Likewise, you don't need a module for each class or anything like that.
Use absolute imports in case you need to move a module to a different location.

Efficiently importing modules in Django views

I was wondering: how do people handle importing large numbers of commonly used modules within Django views? And what's the best method to do this efficiently?
For instance, I've got some views like,
admin_views.py
search_views.py
.
.
and from what I've seen, every one of them needs to use HttpResponse or other such commonly used modules. Moreover, some of them need things like BeautifulSoup, and others need other things (md5, auth, et al).
What I did when starting the project was to make an include_all.py which contained most of my common imports, and then added these specific things in the view itself. So, I had something like,
admin_views.py
from include_all import *
...
[list of specific module imports for admin]
...
search_views.py
from include_all import *
...
[list of specific module imports for search]
...
As time progressed, include_all.py became a miscellaneous file with anything that was needed dumped into it; as a result, a number of views end up importing modules they don't need.
Is this going to affect efficiency? That is, does Python (Django?) import all the modules once and cache them, such that any other view needing them doesn't have to import them again? Or is my method of pulling in this long file a very inefficient one, and would I be better off sticking to individually importing these modules in each view?
Are there any best practices for this sort of thing too?
Thanks!
Python itself guarantees that a module is loaded just once (unless reload is explicitly called, which is not the case here): after the first time, import of that module just binds its name directly from sys.modules[themodulename], an extremely fast operation. So Django does not have to do any further optimization, and neither do you.
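You can see the caching at work in a couple of lines (json is just an arbitrary stdlib module here):

import sys

import json                                # first import: the module is loaded and cached
print("json" in sys.modules)               # True
import json as json_again                  # later imports are just a sys.modules lookup
print(json_again is sys.modules["json"])   # True: same object, nothing re-executed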
Best practice is avoiding from ... import * in production code (making it clearer and more maintainable where each name is coming from, facilitating testing, etc, etc) and importing modules, "individually" as you put it, exactly where they're needed (by possibly binding fewer names that may save a few microseconds and definitely won't waste any, but "explicit is better than implicit" -- clarity, readability, maintainability -- is the main consideration anyway).
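In practice that just means each view module lists exactly what it uses; a hypothetical admin_views.py might look like this (the view itself is made up for illustration):

from hashlib import md5

from django.contrib.auth.decorators import login_required
from django.http import HttpResponse

@login_required
def dashboard(request):
    # The explicit imports above make it obvious this view needs auth, hashing and HTTP.
    token = md5(request.user.username.encode("utf-8")).hexdigest()
    return HttpResponse("admin dashboard %s" % token)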
I guess you could slap your frequently used imports into your __init__.py file.
Django isn't CGI (or PHP). Your app is a single (or a few) long-running Python process. It doesn't matter how long it takes to start; each HTTP request will simply call your (already loaded) view functions.
