In first.py, I imported the datetime library and called a method defined in second.py. The method works fine without importing the datetime library in second.py.
first.py
from datetime import datetime
import second
def method1(time):
    return datetime.strptime(time, "%Y/%m/%d")
a = method1("2019/08/01")
b = second.method2(a)
second.py
def method2(para1):
    return para1.second
Output
0
Should second.py import datetime so that para1.second can work? Can someone explain the rationale behind this?
You only need to import modules explicitly when you need to use their names. In first.py, for example, you're using things from the datetime module directly and referring to them by name. So you do from datetime import datetime, and then call strptime() on the datetime class you imported.
In second.py, however, you don't have to import datetime. This is because of how Python handles attributes: when you do para1.second, Python doesn't need to know exactly what type para1 is; it just checks whether the object has an attribute called second. It does, so that attribute is returned. Nowhere in second.py do you refer to datetime directly, only indirectly, via a value that was created from it.
Also consider that the datetime module does a lot of stuff on its own, and almost certainly imports other dependencies that you're not aware of and you're not importing yourself. But you can still use the datetime module, because you don't need to explicitly refer to those modules it's using behind the scenes. They're still in memory somewhere, and if you call certain methods from datetime, that code will still get executed, but you don't need to be directly aware of it.
Python usually uses duck typing¹. This means that instead of requiring a particular type for an object, it looks at the actual attributes it has.
What this means in your case is that method2 does not care in the slightest whether you pass in a datetime object or not. All that's required is that the input para1 have a second attribute.
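To make that concrete, here's a minimal sketch; the Fake class is made up purely for illustration, and any object with a second attribute would do:

import second

class Fake:
    second = 42  # any object carrying a `second` attribute satisfies method2

print(second.method2(Fake()))  # prints 42, with no datetime in sight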
Importing datetime into second.py would be counter-productive. It wouldn't affect the operation of your method in any way, but it would pollute your namespace and set up an implication that isn't necessarily true.
¹ A notable counterexample is a sum of strings, e.g. sum(['a', 'b'], ''), which Python refuses outright. Aside from that, your own code can of course choose what to do as you see fit.
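You can check the counterexample in an interpreter:

sum(['a', 'b'], '')   # TypeError: sum() can't sum strings [use ''.join(seq) instead]
''.join(['a', 'b'])   # 'ab', the idiomatic way to concatenate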
Related
I know that from module import * will import all the functions into the current namespace, but that it is bad practice. I want to use two functions directly, and use module.function when I have to use any other function from the module. What I am doing currently is:
import module
from module import func1, func2
# DO REST OF MY STUFF
Is it a good practice? Does the order of first two statements matter?
Is there a better way using which I can use these two functions directly and use rest of the functions as usual with the module's name prepended to them?
Using just import module results in very long statements with a lot of repetition if I use the same function from the given module five times in a single statement. That's what I want to avoid.
The order doesn't matter, and it's not Pythonic. When you import the module, there is no need to import some of its functions separately again. If you are not sure how many of the functions you might need to use, just import the module and access the functions on demand with a simple attribute reference.
# The only import you need
import module
# Use module.funcX when you need any of its functions
That said, if you use some of the functions (much) more than the others, attribute access carries a small cost on every call, so you'd be better off importing those names directly, as you've done.
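If you want to measure that cost yourself, a quick (machine-dependent) comparison with timeit, using math.sqrt as a stand-in, looks like this:

import timeit

# Each call through the module pays for an extra attribute lookup.
print(timeit.timeit('math.sqrt(2.0)', setup='import math'))
print(timeit.timeit('sqrt(2.0)', setup='from math import sqrt'))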
And still, the order doesn't matter. You can do:
import module
from module import func1, func2
For more info read the documentation https://www.python.org/dev/peps/pep-0008/#imports
It is not good (though this may be opinion-based) to do:
import module
from module import func1, func2 # `func1` and `func2` are already part of module
Because you already hold a reference to module.
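You can verify that both forms hand you the very same object, using a real module such as math:

import math
from math import sqrt

print(sqrt is math.sqrt)  # True: the second import adds a name, not a copy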
If I were you, I would import it in the form import module. Since your issue is that module.func1() becomes too long, I would import the module and use as to create an alias for the name. For example:
import module as mo
# ^ for illustration purposes. The name of
# your actual module won't be `module`.
# Alias should also be self-explanatory
# For example:
import database_manager as db_manager
Now I may access the functions as:
mo.func1()
mo.func2()
Edit: based on the edit to the actual question
If you are calling the same function several times in the same line, there is a chance that you are already doing something wrong. It would be great if you could share what that function does.
For example, do you want the return values of those calls to be passed as arguments to another function, as in:
test_func(mo.func1(x), mo.func1(y), mo.func1(z))
could be done as:
params_list = [x, y, z]
results = [mo.func1(param) for param in params_list]  # apply func1 to each param
test_func(*results)
I'm building a Python module for a fairly specific purpose. What I'd like to do with this is get more functionality behind importing things from it.
I'd like to have a setup by which saying from my_module import foo would run a function and pass the string "foo". This function would return the object that should be imported.
For example, maybe I want to make a cloud-based import system. I'd like to store community scripts in the cloud, and then download them when a user tries to import them.
Maybe I use the code from cloud import test_module. This would check a cache to decide whether test_module had been downloaded. If so, it would return that module. If not, it would download the module before importing it.
How can I accomplish something like this in Python, by which a dynamic range of submodules could be seamlessly imported from the cloud?
Full featured support for what you ask probably requires a bunch of complicated code using importlib and hooking into various parts of the import machinery. However, a more limited solution can be implemented with just a single custom class that pretends to be a module.
When you import a module, Python first checks in the sys.modules dictionary to see if the module is a key. If so, it returns the value associated with the key. It does this regardless of what the value is, so you can put any kind of object in sys.modules and Python will treat it like a module. A module's code can even replace its own entry in sys.modules, and the replacement will be used even the first time it is imported!
So, to implement your fancy module that downloads other modules on demand, replace the module itself with an instance of a custom class, and give that class a __getattr__ or __getattribute__ method that does the work you want.
Here's a trivial example module that returns a string for any attribute you look for in it. The string will always be the same as the requested attribute name. In your code, you'd want to do your fancy web-cache lookups and downloading, and then return the fetched module object instead of just returning a string.
class FakeModule(object):
    def __getattribute__(self, name):
        return name
import sys
sys.modules[__name__] = FakeModule()
On my system I've saved that as fakemodule.py. Now if I do from fakemodule import foo, I get foo with the value 'foo' in my local namespace.
Note that this only works for one level deep imports. If you do from fakemodule.subpackage import name it will not work because there's no fakemodule.subpackage entry in sys.modules.
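For the cloud use case specifically, a rough sketch along the same lines might look like this; fetch_source is a hypothetical placeholder for your actual download logic, not an existing API:

import sys
import types

def fetch_source(name):
    # Hypothetical: download the script text for `name` from your cloud store.
    raise NotImplementedError("fetch %s from the cloud here" % name)

class CloudModule(object):
    def __init__(self):
        object.__setattr__(self, '_cache', {})

    def __getattribute__(self, name):
        cache = object.__getattribute__(self, '_cache')
        if name not in cache:
            module = types.ModuleType(name)
            exec(fetch_source(name), module.__dict__)  # run the downloaded code
            cache[name] = module
        return cache[name]

sys.modules[__name__] = CloudModule()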
I want to be able to access the strptime function directly (without datetime.datetime.strptime() or datetime.strptime()).
I can do this:
from datetime import datetime
strptime = datetime.strptime
But is there a way to accomplish the same thing on the import line?
Also, can you do multiple items on one line?
Here's pseudocode of what I really want to do:
from datetime.datetime import strftime, strptime
Datetime is just the example case, a similar thing would be useful for importing class methods in other libraries.
Those are methods of the datetime type and can't be imported directly. You can't directly import anything below the top-level namespace of a module. From the documentation:
The from form does not bind the module name: it goes through the list of identifiers, looks each one of them up in the module found in step (1) [i.e., the module being imported], and binds the name in the local namespace to the object thus found.
That is, the imported names must be names in the module namespace. They can't be any more deeply nested than that. So you can't, as you apparently want to do, import just certain methods of classes in the module.
The answer to the question "Can I do this on the import line?" is no.
See the definition of the import statement in Python 2. The statement imports things from modules. There is a datetime class inside the datetime module. The best you can do is
from datetime import datetime
You already understand well what this does, as you used it perfectly in your question. It looks like you wanted to do
from datetime import datetime.strptime
but that is a syntax error because datetime.strptime is not an identifier.
You can't say
from datetime.datetime import strptime
either because Python would look for a module named datetime.datetime.
The import statement just doesn't work the way you want it to.
Note that the author of the datetime module chose to make strptime a classmethod (using @classmethod) rather than a function. So if you want to use strptime without the class qualifier, you will have to do what you did, namely assign it to a variable named strptime.
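The assignment approach does let you grab several names on one line, which is about as close to your pseudocode as you can get:

from datetime import datetime
strptime, strftime = datetime.strptime, datetime.strftime

print(strptime("2019/08/01", "%Y/%m/%d"))  # works with no class qualifier
# strftime is an instance method, so it still needs a datetime as its first argument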
I have a PL/Python function which does some JSON magic. For this it obviously imports the json library.
Is the import called on every call to the function? Are there any performance implications I have to be aware of?
The import is executed on every function call. This is the same behavior you would get if you wrote a normal Python module with the import statement inside a function body as opposed to at the module level.
Yes, this will affect performance.
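In plain Python terms, the situation is equivalent to this sketch (with json.dumps standing in for the JSON magic):

def dump(obj):
    import json  # re-executed on every call, but after the first import
                 # it is just a lookup in the sys.modules cache
    return json.dumps(obj)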
You can work around this by caching your imports like this:
CREATE FUNCTION test() RETURNS text
LANGUAGE plpythonu
AS $$
    if 'json' in SD:
        json = SD['json']
    else:
        import json
        SD['json'] = json
    return json.dumps(...)
$$;
This is admittedly not very pretty, and better ways to do this are being discussed, but they won't happen before PostgreSQL 9.4.
The body of a PL/Python function eventually becomes the body of an ordinary Python function, and it behaves as such. When a Python function imports a module for the first time, the module is cached in the sys.modules dictionary (https://docs.python.org/3/reference/import.html#the-module-cache). Subsequent imports of the same module simply bind the imported name to the module object found in that dictionary. In a sense, this casts some doubt on the usefulness of the tip given in the accepted answer, since Python already does similar caching for you, making it somewhat redundant.
To sum things up, I'd say that if you import in the standard way, simply using the import or from [...] import constructs, then you need not worry about repeated imports, in functions or otherwise; Python has you covered.
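You can watch that cache at work from an interactive session:

import sys
import timeit

def with_import():
    import json  # a dict lookup once json is already in sys.modules
    return json

with_import()
print('json' in sys.modules)        # True
print(timeit.timeit(with_import))   # small constant per-call cost, no re-execution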
On the other hand, Python allows you to bypass its native import semantics and to implement your own (with the __import__() function and importlib module). If this is what you're doing, maybe you should review what's available in the toolbox (https://docs.python.org/3/reference/import.html).
When using many IDEs that support autocompletion with Python, things like this will show warnings, which I find annoying:
from eventlet.green.httplib import BadStatusLine
When switching to:
from eventlet.green.httplib import *
The warnings go away. What's the benefit of limiting imports to a specific set of types you'll use? Is the parsing faster? Does it reduce collisions? What other point is there? It seems the state of Python IDEs and the nature of the typing system make it hard for many IDEs to fully get right when a type import works and when it doesn't.
By typing from foo import *, you import all the names defined in foo into the global namespace. This is bad practice because you could have name clashes both with other modules and with built-ins.
For example, consider a module foo
# foo.py
def open(something):
    pass
and a module bar:
# bar.py
def open(something_else):
    pass
Now, from foo import * hides the built-in function open() which means that any calls to open() now refer to foo.open() rather than the built-in. Worse, if you then have from bar import *, the function open() in bar now hides both the built-in and the function imported from foo.
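Assuming foo.py and bar.py above are on the path, a short session makes the shadowing visible:

from foo import *
print(open)   # <function open at 0x...>: foo.open, the builtin is hidden
from bar import *
print(open)   # now bar.open; both foo.open and the builtin are shadowed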
In the example above, from foo import open would shadow the built-in function just the same, but at least one glance at the code tells you why you can't open files for I/O anymore.
This is why you should import only specific names, ensuring that you know what names are imported. Alternatively, you could use fully qualified names (import foo; foo.open(), which is perfectly safe).
EDIT: Just as a note, this can be horribly compounded if the module you're importing also uses from x import *. In this case, not only do you typically import all the stuff in the module foo, but also all the stuff in the module x into the global namespace. This can very quickly turn into an absolute mess.
It reduces collisions with user-defined types, it reduces coupling and it's self-documenting, since it makes clear from the outset of the module which classes are coming from libraries (so the rest must be user-defined). The parsing is not faster, at least not in CPython: an imported module must be read in its entirety to look for the classes/functions being imported.
(I must admit that I never use an IDE.)