print function is from which module - python

As a beginner, what I understood is that the Python Standard Library (PSL) provides a lot of modules with a lot of functionality, but if I want to use them I still have to import the module; for example, sys, os, etc. are PSL modules and they need to be imported.
Now I wonder: if that is the case, how am I able to use functions like print, list, len, etc. without importing anything? Is it that their "support is built-in into the interpreter"?

Yes. They're built-in functions (or, in the case of list, a built-in class). You can explicitly import the __builtin__ module (Python 2) or the builtins module (Python 3) if you want qualified access to the names, but by default that module is searched whenever a lookup of a global name doesn't find it in the module's globals. Explicit imports aren't normally needed though, per the docs:
This module is not normally accessed explicitly by most applications, but can be useful in modules that provide objects with the same name as a built-in value, but in which the built-in of that name is also needed.
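As a concrete sketch of the use case the docs describe (the wrapper below is purely hypothetical): a module can define its own open() and still reach the real built-in through the explicitly imported module.
import builtins  # on Python 2 this would be: import __builtin__

def open(path):
    # This module's own open() shadows the built-in within this module,
    # but the qualified name still reaches the real one.
    print("opening", path)
    return builtins.open(path)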

The print function comes from the builtins module.
You can find its documentation here.
Here is an example session.
I first check which module print comes from, which is stored in its __module__ attribute.
Then I import the builtins module and check whether its print function is the same object as the prefix-less print.
>>> print.__module__
'builtins'
>>> import builtins
>>> builtins.print("hello")
hello
>>> print is builtins.print
True

You should give the page on built-in functions a read.
Quote:
The Python interpreter has a number of functions and types built into it that are always available.
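If you are curious which names that page covers, a quick check from a Python 3 prompt (a sketch) is to look at the builtins module itself:
>>> import builtins
>>> 'print' in dir(builtins), 'len' in dir(builtins), 'list' in dir(builtins)
(True, True, True)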

Related

Are sys.modules[__name__].__doc__ and __doc__ always the same?

I am trying to access the module docstring from within the same module in which it is defined. Practical example:
#!/usr/bin/env python3
"Module docstring that explains what the script does."
import argparse as ap
parser = ap.ArgumentParser(description=__doc__)
...
I have always used __doc__, but I later stumbled upon some code using sys.modules[__name__].__doc__. The two objects appear to be the same, but I am wondering if they are always the same. In other words, is the following:
>>> sys.modules[__name__].__doc__ is __doc__
True
always true? Can I safely use __doc__ in my example instead of sys.modules[__name__].__doc__?
In practice they will be the same unless you go out of your way to make them different.
If you assign to __name__ then you can make the expression refer to a different module:
"""My doc string"""
import sys
print(__doc__)
__name__='sys'
print(sys.modules[__name__].__doc__)
will print two different doc strings.
Alternatively, you could leave __name__ alone but delete your module from sys.modules and then import another module with the same __name__ (or, in Python 2.x, use reload() to reload the module after editing the docstring). If you held onto a reference to a function in the old module, you could still call it, and __doc__ inside it would still be the old value while sys.modules[__name__].__doc__ is the new one.
So they could be different, but only if you work at it.
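For the unmodified case, a quick check at the interactive prompt (a sketch; in a fresh session __name__ is '__main__' and __doc__ is None) confirms they are the very same object:
>>> import sys
>>> sys.modules[__name__].__doc__ is __doc__
True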

Why is print listed as a builtin?

While reading through the objects listed under dir(__builtins__) in Python 2.7 I noticed that the keyword print is there.
Now under Python 3.5 I can see that print and exec are now objects.
What is the reasoning for this? Why even list it under __builtins__ to begin with? For this keyword in particular, what difference would it have made if it were never grouped in there at all? It seems import might have a case for being listed there as well?
print is in __builtins__ because there really is a print built-in function. If you do from __future__ import print_function, the print keyword is disabled and print refers to the built-in.
exec and import are not in __builtins__. You most likely mixed them up with the built-in functions eval and __import__.
As an aside: don't use __builtins__. It's an implementation detail, and its value is different in different contexts. If you want the module containing all built-in names, use import __builtin__, or import builtins in Python 3.
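For reference, here is a short Python 2.7 session (a sketch) showing the effect of the __future__ import described above:
>>> from __future__ import print_function
>>> print
<built-in function print>
>>> print("hello", end="!\n")
hello!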

__builtin__ module in Python

If I have a module Test and I need to list all the functions in it, I do this:
import Test
dir(Test)
Unless I import the module I won't be able to use the functions defined in it.
But all the functions in the __builtin__ module can be used without importing it. Yet without import __builtin__ I am not able to do dir(__builtin__). Does that mean we use the functions without importing the entire module?
from __builtin__ import zip
Is it something like the above? But if I do del zip, I get
NameError: name 'zip' is not defined
Can anyone please explain this behavior?
As explained in the Python language docs, names in Python are resolved by first looking them up in the local scope, then in any enclosing local scopes, then in the module-level (global) scope and finally in the namespace of the built-ins. So built-ins are special-cased: they are not imported into your module's scope, but if a name is not found anywhere else, Python looks it up in the built-ins namespace.
Note that you can access the contents of this scope without importing it. A portable way to do this is
import sys
print(dir(sys.modules["__builtin__"]))
In CPython, this also works
print(dir(__builtins__))
but this is considered an implementation detail and might not be available for other Python implementations or future versions.
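To see that lookup order, and the NameError from the question, in action, here is a minimal sketch you could run as a script:
zip = "shadowed"        # binds 'zip' in the module's globals, hiding the built-in
print(zip)              # shadowed
del zip                 # removes only the module-level binding
print(zip)              # the built-in zip again: lookup falls back to the built-ins namespace
try:
    del zip             # nothing left to delete in this module's scope...
except NameError as e:
    print(e)            # ...so this is the NameError from the question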
I'm by no means knowledgeable about python, but maybe dir(__builtins__), with an "s", is what you're after?
Works for me on plain Python 3.1.
When the Python interpreter starts, it effectively does something like
from __builtin__ import *
which is why you can use all the functions/attributes defined inside the __builtin__ module without importing anything.
However, to use the __builtin__ symbol itself, you need to do
import __builtin__
That is just how the import statement works.
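A short Python 2 session (a sketch) tying the two points together: the names are usable immediately, but the module object itself only becomes a usable name after import __builtin__.
>>> zip([1, 2], [3, 4])
[(1, 3), (2, 4)]
>>> import __builtin__
>>> __builtin__.zip is zip
True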

Why bother to limit the types imported from a python package?

When using many IDEs that support autocompletion with Python, things like this will show warnings, which I find annoying:
from eventlet.green.httplib import BadStatusLine
When switching to:
from eventlet.green.httplib import *
The warnings go away. What's the benefit of limiting imports to a specific set of types you'll use? Is the parsing faster? Does it reduce collisions? What other point is there? It seems the state of Python IDEs and the nature of the typing system make it hard for many IDEs to fully get right when a type import works and when it doesn't.
By typing from foo import *, you import all the names defined in foo into the global namespace. This is bad practice because you could have name clashes both with other modules and with built-ins.
For example, consider a module foo
#foo.py
def open(something):
    pass
and a module bar:
#bar.py
def open(something_else):
    pass
Now, from foo import * hides the built-in function open() which means that any calls to open() now refer to foo.open() rather than the built-in. Worse, if you then have from bar import *, the function open() in bar now hides both the built-in and the function imported from foo.
In the example above, from foo import open shadows the built-in function just as much, but one glance at the code tells you why you can no longer open files for IO.
This is why you should import only specific names, ensuring that you know what names are imported. Alternatively, you could use fully qualified names (import foo; foo.open(), which is perfectly safe).
EDIT: Just as a note, this can be horribly compounded if the module you're importing also uses from x import *. In this case, not only do you typically import all the stuff in the module foo, but also all the stuff in the module x into the global namespace. This can very quickly turn into an absolute mess.
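As a small sketch of the fully qualified alternative mentioned above (using the hypothetical foo.py from this answer):
import foo

foo.open("something")   # explicitly foo's open()
print(open)             # the built-in open() is untouched and still usable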
It reduces collisions with user-defined types, it reduces coupling and it's self-documenting, since it makes clear from the outset of the module which classes are coming from libraries (so the rest must be user-defined). The parsing is not faster, at least not in CPython: an imported module must be read in its entirety to look for the classes/functions being imported.
(I must admit that I never use an IDE.)

Having to evaluate a string to access a class from a module

I've loaded one of my modules <module my_modules.mymodule from .../my_modules/mymodule.pyc> with __import__.
Now I'm having the module saved in a variable, but I'd like to create an instance of the class mymodule. Thing is - I've gotten the module name passed as string into my script.
So I've got a variable containing the module, and I've got the module name but as string.
variable_containing_module.string_variable_with_correct_classname() doesn't work, because it says there is no such thing in the module as "string_variable_with_correct_classname"; the string isn't evaluated as a name. How can I do that?
Your problem is that __import__ is designed for use by the import internals, not really for direct usage. One symptom of this is that when importing a module from inside a package, the top-level package is returned rather than the module itself.
There are two ways to deal with this. The preferred way is to use importlib.import_module() instead of using __import__ directly.
If you're on an older version of Python that doesn't provide importlib, then you can create your own helper function:
import sys

def import_module(module_name):
    __import__(module_name)
    return sys.modules[module_name]
Also see http://bugs.python.org/issue9254
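On a current Python, the whole round trip might look like the sketch below; the module and class names are hypothetical stand-ins for the strings your script receives, and getattr() is the standard way to look an attribute up by its string name:
import importlib

module = importlib.import_module("my_modules.mymodule")  # hypothetical module path
cls = getattr(module, "MyClassName")                     # resolve the class from its string name
instance = cls()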
