Put imports at the top? Or where they get used? [duplicate] - python

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Python import coding style
When I write some code that requires an import, and the import is only needed by the bit of code I'm currently writing, should I:
1. Stick the import at the top of the file, where it is clear that the module needs these imports to work? The import is then detached from its usage, and should the code be removed later, the module may still import things it never actually uses. Or,
2. Keep the import immediately next to the code that uses it, so it is obvious what the import is for and when it can safely be removed? This risks importing the same libraries in multiple places and makes it harder to work out which libraries the module needs.
Best practice?
Put the import at the top? Or put it where it gets used?

Import_Statement_Overhead from the Python wiki states:
"import statements can be executed just about anywhere. It's often
useful to place them inside functions to restrict their visibility
and/or reduce initial startup time. Although Python's interpreter is
optimized to not import the same module multiple times, repeatedly
executing an import statement can seriously affect performance in some
circumstances."
I follow general stylistic conventions and put all of my import statements at the top of the program. PEP 8 states regarding imports:
"Imports are always put at the top of the file, just after any module
comments and docstrings, and before module globals and constants."

Related

Share a module across other modules in python [duplicate]

This question already has answers here:
How to share imports between modules?
(2 answers)
Closed 3 years ago.
Consider the following Python code setup:
main.py
MTS/
    optimization.py
    __init__.py
where __init__.py consists of one line, import optimization, and optimization.py uses several modules such as numpy (imported as np). Let main.py be:
from MTS import optimization
import numpy as np
a=np.load('data.npy')
b=optimization.minimize(a)
This code raises the following error:
NameError: global name 'np' is not defined
If I import numpy inside optimization.py, it works but I want to avoid importing the same module twice. How to share an imported module across other modules?
In python, imports are mainly about namespaces. Importing a module puts it into the current namespace, and if it's not already in memory it loads it into memory.
If you import a module that another module has already imported, then that module is already in memory so you just get a reference to the existing module.
So, in other words, just do import numpy as np inside every file you need to use it in. This will not consume any extra memory, as numpy will only be properly loaded once, and every subsequent import statement will just add it to the namespace of whoever calls it.
An additional benefit of this is that it's much clearer that your code is going to use numpy. It would be confusing if you just used np.load() halfway through your program with no indication that np had been defined anywhere (C has this problem, for example; pretty much every programming language since has adopted the idea of namespaces to avoid it, including even C++). Not doing this is considered poor practice in Python.
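A quick demonstration of the caching behaviour described above (using the stdlib's math module as a stand-in for numpy): importing the same module twice yields the very same object, because the second import is served from the sys.modules cache.

```python
import sys
import math

# Keep a reference to the module object created by the first import.
cached = sys.modules["math"]

import math  # the import statement runs again, but the module code does not

print(math is cached)              # True: same object, not a second copy
print(math is sys.modules["math"])  # True: served from the cache
```

So repeating import numpy as np in every file that needs it only adds a name to each file's namespace; the module itself is loaded once.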

What is the benefit of putting an import statement inside of a class? [duplicate]

I created a module named util that provides classes and functions I often use in Python.
Some of them need imports themselves. What are the pros and cons of importing needed things inside a class/function definition? Is it better than importing at the beginning of the module file? Is it a good idea?
It's the most common style to put every import at the top of the file. PEP 8 recommends it, which is a good reason to do it to start with. But that's not a whim, it has advantages (although not critical enough to make everything else a crime). It allows finding all imports at a glance, as opposed to looking through the whole file. It also ensures everything is imported before any other code (which may depend on some imports) is executed. NameErrors are usually easy to resolve, but they can be annoying.
There's no (significant) namespace pollution to be avoided by keeping the module in a smaller scope, since all you add is the actual module (no, import * doesn't count and probably shouldn't be used anyway). Inside functions, you'd import again on every call (not really harmful since everything is imported once, but uncalled for).
PEP8, the Python style guide, states that:
Imports are always put at the top of
the file, just after any module
comments and docstrings, and before module globals and constants.
Of course this is no hard and fast rule, and imports can go anywhere you want them to. But putting them at the top is the best way to go about it. You can of course import within functions or a class.
But note you cannot do this:
def foo():
    from os import *
because star imports are only allowed at module level (a SyntaxWarning in Python 2, and in Python 3 an outright error):
SyntaxError: import * only allowed at module level
Like flying sheep's answer, I agree that the others are right, but I put imports in other places, such as in __init__() routines and function calls, while I am DEVELOPING code. After my class or function has been tested and proven to work with the import inside it, I normally give it its own module with the import following PEP 8 guidelines. I do this because I sometimes forget to delete imports after refactoring code or removing old code with bad ideas. By keeping the imports inside the class or function under development, I am specifying its dependencies, should I want to copy it elsewhere or promote it to its own module...
Only move imports into a local scope, such as inside a function definition, if it’s necessary to solve a problem such as avoiding a circular import or are trying to reduce the initialization time of a module. This technique is especially helpful if many of the imports are unnecessary depending on how the program executes. You may also want to move imports into a function if the modules are only ever used in that function. Note that loading a module the first time may be expensive because of the one time initialization of the module, but loading a module multiple times is virtually free, costing only a couple of dictionary lookups. Even if the module name has gone out of scope, the module is probably available in sys.modules.
https://docs.python.org/3/faq/programming.html#what-are-the-best-practices-for-using-import-in-a-module
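A minimal illustration of the FAQ's point, using the stdlib json module: the import statement inside the function re-executes on every call, but after the first call it amounts to a sys.modules lookup plus a local name binding, which is cheap.

```python
def parse(data):
    # Re-executed on every call, but after the first call the module is
    # fetched from sys.modules rather than loaded from disk again.
    import json
    return json.loads(data)

print(parse('{"a": 1}'))   # {'a': 1}
print(parse('[1, 2, 3]'))  # [1, 2, 3]  -- second call pays only a lookup
```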
I believe it's best practice (according to PEP 8) to keep import statements at the beginning of a module. You can add import statements to an __init__.py file, which will import those modules for all modules inside the package.
So...it's certainly something you can do the way you're doing it, but it's discouraged and actually unnecessary.
While the other answers are mostly right, there is a reason why Python allows this.
It is not smart to import redundant stuff that isn't needed. So if you want to, e.g., parse XML into an element tree, but don't want to use the slow builtin XML parser if lxml is available, you need to check for it at the moment you invoke the parser.
And instead of memorizing the availability of lxml at the beginning, I would prefer to try importing and using lxml and, if it's not there, fall back to the builtin xml module.
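A sketch of that fallback pattern (lxml is assumed to be an optional third-party dependency; when it is not installed, the except branch keeps the code working with the stdlib parser):

```python
try:
    from lxml import etree  # fast third-party parser, if available
except ImportError:
    import xml.etree.ElementTree as etree  # stdlib fallback, same basic API

# Either parser supports fromstring()/find() for simple use cases.
root = etree.fromstring("<root><child>hello</child></root>")
print(root.find("child").text)  # hello
```

Because both libraries expose a compatible subset of the ElementTree API, the rest of the code does not need to know which one was imported.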

How does importing a module inside a function make code more modular? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
From Dive into Python:
You're used to seeing import statements at the top of a program, which
means that the imported module is available anywhere in the program.
But you can also import modules within a function, which means that
the imported module is only available within the function. If you have
a module that is only ever used in one function, this is an easy way
to make your code more modular.
The other alternative to importing module at the function level is to import it at the module level (of which the function is a part).
Since the module is the reusable unit, how does the former option increase modularity?
I'm not convinced that importing at the function level is more modular. Importing a function (or class) from a module within Python will resolve the library dependencies, and those imported functions work without having to import the same libraries.
(Note: I'm using library to avoid saying module every other word)
Example:
my_module.py
import sys

def printerr(message):
    """Use stderr for error messages"""
    sys.stderr.write('{0}\n'.format(message))
    sys.stderr.flush()
my_script.py
import os
from my_module import printerr

def main():
    """Derpy script"""
    if 'my_important_file.txt' in os.listdir():
        print("it's there")
    else:
        printerr("ERROR: Unable to find important file")
Granted, this is an extremely simplified example, but importing sys within printerr would not make it more modular. It's equally modular either way.
In my experience, the issues that can and do arise are related to tech-debt buildup when it comes to modularity.
At first, you have a simple module with a well defined idea of what it does. The libraries imported within that module make sense and are highly used across the functions/classes within it.
As time goes on, and features are added, we often just add the new functions/classes in a "best fit" module (i.e. "I guess it makes sense to put that here?").
More time goes on, and a better library comes into popularity, so I'll use that one, but I really don't want to spend several weeks replacing all that old code (like I have that kind of time, I got things to make!).
Before long you have a gnarly bit of code that imports like 20 things and does way too much. And of course there are no unit tests, so every time you touch it you break something, but it takes a week to find out...
Now you're sick of dealing with this, so screw it, I'll just import what I want within my functions that use it; this is easier than dealing with some 20 imports.
A very compelling reason not to import within functions is well demonstrated like this:
def doh():
    import nothing
    print("Homer says...")
That's a bug you just shipped, because doh never got called in your unit tests, but it gets called in the one weird way you didn't think of. "If only I had imported like PEP 8 said, I wouldn't have this problem."

Import conflict in Python? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Python: Circular (or cyclic) imports
I'm new to Python, and I'm having an issue, but I'm not exactly sure if this is my issue. I have two files, user.py and comments.py. In user.py, I do
from comments import Comment
and in comments.py I do
from user import User
My user loads fine, but when I load the URL that leads to comments, I get a server error. Commenting out the from comments import Comment fixes the problem. Am I doing something wrong?
Yes, you have a circular import, and those cause many, many problems. Think about what actually happens when you import: it is analogous to saying, "copy the code from file x into this file." But if you copy from x to y and then back from y to x, you've created an infinite loop where it's difficult or impossible for the interpreter to figure out which module is supposed to supersede or load which, and in which situations. However, if your program is architected properly, you should not have any. What reason do you have for this circular import? Chances are you don't actually need it at all if we look at the problem a little more carefully.
This kind of circular import does not work. Importing a module means essentially executing the statements in it. The import statements are executed in the moment they are encountered, so in at least one of the modules the other module has not yet been initialised, so the import will fail.
A circular dependency is considered an antipattern. There are situations where they somehow occur naturally, but in general they are a sign of a bad design.
You can probably make this work by moving one of the import statements to the end of the module or to function level, but I'd recommend against these "fixes".
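The failure can be reproduced in a self-contained way by writing the two modules from the question to a temporary directory and importing one of them (the file contents here are illustrative stand-ins for the real user.py and comments.py):

```python
import os
import sys
import tempfile

# Recreate the circular pair from the question in a temp directory.
d = tempfile.mkdtemp()
with open(os.path.join(d, "comments.py"), "w") as f:
    f.write("from user import User\n\nclass Comment:\n    pass\n")
with open(os.path.join(d, "user.py"), "w") as f:
    f.write("from comments import Comment\n\nclass User:\n    pass\n")

sys.path.insert(0, d)
try:
    import user  # runs comments.py, which tries to import user again
    failed = False
except ImportError as e:
    # user is only partially initialised when comments.py asks for User.
    failed = True
    print("circular import failed:", e)
```

Importing user starts executing comments.py before User is defined, so the inner from user import User raises ImportError, exactly as the answer above describes.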

Python Auto Importing [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Perl's AUTOLOAD in Python (getattr on a module)
I'm coming from a PHP background and attempting to learn Python, and I want to be sure to do things the "Python way" instead of how I've developed before.
My question comes from the fact that in PHP 5 you can set up your code so that if you attempt to use a class that doesn't exist in the namespace, a function runs first to load the class, and you can continue as if it were already loaded. The advantages of this are that classes aren't loaded unless they are used, and you don't have to worry about loading classes before using them.
In Python, there's a lot of emphasis on the import statement. Is it bad practice to attempt an auto-importing trick in Python to alleviate the need for import statements? I've found this module that offers auto importing; however, I don't know if that's the best way of doing it, or if auto importing of modules is something that is recommended. Thoughts?
Imports serve at least two other important purposes besides making the modules or contents of the modules available:
They serve as a sort of declaration of intent -- "this module uses services from this other module" or "this module uses services belonging to a certain class" -- e.g. if you are doing a security review for socket-handling code, you can begin by only looking at modules that import socket (or other networking-related modules)
Imports serve as a proxy for the complexity of a module. If you find yourself with dozens of lines of imports, it may be time to reconsider your separation of concerns within the module, or within your application as a whole. This is also a good reason to avoid "from foo import *"-type imports.
In Python, people usually avoid auto imports, simply because they are not worth the effort. You may slightly reduce startup costs, but otherwise there is no (or should be no) significant effect. If you have modules that are expensive to import and do a lot of stuff that doesn't need to be done, rewrite the module rather than delaying its import.
That said, there is nothing inherently wrong with auto imports. Because of their proxy nature, there may be some pitfalls (e.g. when looking at a thing that has not actually been imported yet). Several auto-importing libraries are floating around.
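As a sketch of how such a library might work (the LazyModule class below is purely illustrative, not any particular package's API): the real import is deferred until the first attribute access on the proxy.

```python
import importlib

class LazyModule:
    """Illustrative auto-import proxy: loads the real module lazily."""

    def __init__(self, name):
        self._name = name
        self._module = None

    def __getattr__(self, attr):
        # Only called for attributes not found on the proxy itself,
        # so _name and _module lookups above never recurse here.
        if self._module is None:
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)

json = LazyModule("json")    # nothing imported yet
print(json.dumps({"a": 1}))  # the real json module is loaded here
```

This is also where the "proxy pitfalls" mentioned above show up: until the first attribute access, tools that inspect json see the proxy, not the real module.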
If you are learning Python and want to do things the Python way, then just import the modules. It's very unusual to find autoimports in Python code.
You could auto-import the modules, but the most I have ever needed to import was about 10, and that is after I tacked features on top of the original program. You won't be importing a lot, and the names are very easy to remember.
