Python: import symbolic link of a folder

I have a folder A which contains some Python files and __init__.py.
If I copy the whole folder A into some other folder B and create a file there containing "import A", it works. But when I remove the copy and put a symbolic link to the original folder A in its place, it no longer works, failing with "No module named foo".
Does anyone know how to use symlink for importing?

Python doesn't check whether your file is a symlink or not! Your problem probably lies in renaming the modules or not having them on your search path!
If ModuleA becomes ModuleB and you try to import ModuleA it can't find it, because it doesn't exist.
If you moved ModuleA into another directory and created a symlink with another name, which represents a new directory, then this new directory must be the common parent directory of your script and your module, or the symlink directory must be on the search path.
By the way, it's not clear whether you mean a module or a package. The directory containing the __init__.py file becomes a package, and every file with the extension .py residing in it becomes a module of that package.
Example
DIRA
+ __init__.py <-- makes DIRA a package
+ moduleA.py <-- module DIRA.moduleA
Moving and symlink
/otherplace/DIRA <--+
                    | points to DIRA
mylibraries/SYMA ---+ symbolic link
If SYMA has the same name as DIRA and your script is in the directory that contains SYMA, then it should just work. If not, then you have to:
import sys
sys.path.append('/path/to/your/package/root')
If you want to import a module from your package SYMA you must:
import SYMA.moduleA
A simple:
import SYMA
will import the package itself, but not the modules in the package into your namespace!
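Putting the pieces together, a minimal sketch (the /home/user/mylibraries path is hypothetical; the SYMA and moduleA names come from the example above):
import sys

# The directory that *contains* the symlinked package goes on the search path,
# not the symlink itself. (Hypothetical path, for illustration only.)
sys.path.append('/home/user/mylibraries')

import SYMA.moduleA        # the module, addressed through the package
from SYMA import moduleA   # or bind the module name directly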

This kind of behavior can happen if your symbolic links are not set up right. For example, if you created them using relative file paths. In this case the symlinks would be created without error but would not point anywhere meaningful.
If this could be the cause of the error, use the full path when creating the links, and check that they are correct by running ls on the link and confirming you see the expected directory contents.
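If you want to check the link programmatically rather than by eye, a small sketch (the link path is hypothetical):
import os

link = '/home/user/mylibraries/SYMA'  # hypothetical symlink location

print(os.path.islink(link))                    # True if it really is a symlink
print(os.readlink(link))                       # the raw target stored in the link
print(os.path.realpath(link))                  # the fully resolved target
print(os.path.isdir(os.path.realpath(link)))   # False means the link is dangling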

Related

How do implicit relative imports work in Python?

Assume I have the following files,
pkg/
pkg/__init__.py
pkg/main.py # import string
pkg/string.py # print("Package's string module imported")
Now, if I run main.py, it says "Package's string module imported".
This makes sense and it works as per this statement in this link:
"it will first look in the package's directory"
Assume I modified the file structure slightly (added a core directory):
pkg/
pkg/__init__.py
pkg/core/__init__.py
pkg/core/main.py # import string
pkg/string.py # print("Package's string module imported")
Now, if I run python core/main.py, it loads the built-in string module.
In the second case too, if it has to comply with the statement "it will first look in the package's directory" shouldn't it load the local string.py because pkg is the "package directory"?
My sense of the term "package directory" is that it specifically means the root folder of a collection of folders with __init__.py. So in this case, pkg is the "package directory". It applies to main.py and also to files in subdirectories like core/main.py, because they are part of this "package".
Is this technically correct?
PS: What follows after # in the code snippet is the actual content of the file (with no leading spaces).
Packages are directories with a __init__.py file, yes, and are loaded as a module when found on the module search path. So pkg is only a package that you can import and treat as a package if the parent directory is on the module search path.
But by running the pkg/core/main.py file as a script, Python added the pkg/core directory to the module search path, not the parent directory of pkg. You do have an __init__.py file on your module search path now, but that's not what defines a package. You merely have a __main__ module; there is no package relationship to anything else, and you can't rely on implicit relative imports.
You have three options:
Do not run files inside packages as scripts. Put a script file outside of your package, and have that import your package as needed. You could put it next to the pkg directory, or make sure the pkg directory is first installed into a directory already on the module search path, or by having your script calculate the right path to add to sys.path.
Use the -m command line switch to run a module as if it is a script. If you use python -m pkg.core Python will look for a __main__.py file and run that as a script. The -m switch will add the current working directory to your module search path, so you can use that command when you are in the right working directory and everything will work. Or have your package installed in a directory already on the module search path.
Have your script add the right directory to the module search path (based on os.path.abspath(__file__) to get a path to the current file); see the sketch below. Take into account that your script is always run as __main__, and importing pkg.core.main afterwards would add a second, independent module object; you'd have two separate namespaces.
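A minimal sketch of that third option, assuming the pkg/core/main.py layout from the question; the final import is only there to show the effect:
# pkg/core/main.py, run directly as a script
import os
import sys

# Climb from .../pkg/core/main.py up to the directory that *contains* pkg,
# which is what needs to be on the module search path.
parent_of_pkg = os.path.dirname(
    os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
sys.path.insert(0, parent_of_pkg)

import pkg.string   # pkg/string.py, reached explicitly through the package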
I also strongly advise against using implicit relative imports. You can easily mask top-level modules and packages by adding a nested package or module with the same name; pkg/time.py would be found before the standard-library time module if you tried to use import time inside the pkg package. Instead, use the Python 3 model of explicit relative module references: add from __future__ import absolute_import to all your files, and then use from . import <name> to be explicit about where your module is being imported from.
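For the layout above, the explicit style would look roughly like this (the __future__ line only matters on Python 2; it is a no-op on Python 3):
# pkg/main.py
from __future__ import absolute_import

from . import string   # explicitly pkg/string.py
import time             # explicitly the top-level (standard-library) time module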

Expanding sys.path via __init__.py

There are a lot of threads on importing modules from sibling directories, and the majority recommend either simply adding __init__.py to the source tree, or modifying sys.path from inside those __init__ files.
Suppose I have following project structure:
project_root/
__init__.py
wrappers/
__init__.py
wrapper1.py
wrapper2.py
samples/
__init__.py
sample1.py
sample2.py
All __init__.py files contain code which inserts the absolute path of the project_root/ directory into sys.path. I get "No module named x", no matter how I try to import the wrapperX modules into sampleX. And when I print sys.path from sampleX, it turns out it does not contain the path to project_root.
So how do I use __init__.py correctly to set up the project environment?
Do not run sampleX.py directly, execute as module instead:
# (in project root directory)
python -m samples.sample1
This way you do not need to fiddle with sys.path at all (which is generally discouraged). It also makes it much easier to use the samples/ package as a library later on.
Oh, and __init__.py is not run because it only gets run/imported (which is more or less the same thing) if you import the samples package, not if you run an individual file as a script.
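For instance, samples/sample1.py could look roughly like this when launched with python -m samples.sample1 from project_root (the print is just placeholder usage):
# samples/sample1.py
from wrappers import wrapper1   # found because project_root is the working directory

if __name__ == '__main__':
    print(wrapper1)             # placeholder; wrapper1's real contents aren't shown in the question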

Purpose of some boilerplate code in __main__.py

I've seen the following code in a couple of Python projects, in __main__.py. Could someone explain the purpose? Of course it puts the directory containing __main__.py at the head of sys.path, but why? And why the tests (__package__ is None and not hasattr(sys, 'frozen'))? Also, in the sys.path.insert, why is os.path.dirname called twice?
import sys
if __package__ is None and not hasattr(sys, 'frozen'):
    # direct call of __main__.py
    import os.path
    path = os.path.realpath(os.path.abspath(__file__))
    sys.path.insert(0, os.path.dirname(os.path.dirname(path)))
os.path.dirname(os.path.dirname(path)) - Gets the grandparent directory (the directory containing the directory of the given path); this is what gets inserted at the front of sys.path, Python's module search path.
os.path.realpath(os.path.abspath(__file__)) - Gets the real path (resolving any symbolic links) of the absolute path of the running file.
Through this, modules and packages that live in that grandparent directory become importable without any extra setup.
Sidenote: without the context of where you saw this code, it's hard to give more of an answer as to why it's used.
The test for __package__ lets the code run when package/__main__.py has been run with a command like python __main__.py or python package/ (naming the file directly or naming the package folder's path), not the more normal way of running the main module of a package python -m package. The other check (for sys.frozen) tests if the package has been packed up with something like py2exe into a single file, rather than being in a normal file system.
What the code does is put the parent folder of the package into sys.path. That is, if __main__.py is located at /some/path/to/package/__main__.py, the code will put /some/path/to in sys.path. Each call to dirname strips one component off the right side of the path ("/some/path/to/package/__main__.py" => "/some/path/to/package" => "/some/path/to").
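You can see the effect with a throwaway snippet (using the made-up path from above):
import os.path

path = '/some/path/to/package/__main__.py'
print(os.path.dirname(path))                   # /some/path/to/package
print(os.path.dirname(os.path.dirname(path)))  # /some/path/to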

pydev - how to avoid adding sub directories to python path in order to fix unresolved import issue

My project has a root src folder created by the pydev project wizard. The src folder is in the project's Python path. Underneath that folder I have a package (a folder with __init__.py) with two files: a.py and b.py. b.py is trying to import from a.py, but I'm getting an unresolved import error.
I was able to "fix" the error by explicitly adding that subfolder to the project's Python path as an additional src folder. Now I have two folders as src folders in the PYTHONPATH. What I don't understand is why pydev is not able to resolve the import, since the package/folder I'm talking about is directly underneath the root src folder, which is in the Python path. There are no Python files in the root src folder.
If I add __init__.py to the root src folder, the issue is still there. I simply have to add the subfolder to the PYTHONPATH to make the error go away.
Am I doing something wrong? This doesn't seem right.
EDIT:
I was wrong. My import syntax was incorrect. I should have done from package.module import something and not from module import something.
It's hard to tell from your description, and actual code would help, but I suspect what you're looking for is a relative import.
If you have a file pkg/a.py that just does this:
import b
That will look for a top-level module somewhere on your sys.path named b.py.
But if you do this:
from . import b
Then it will look within (and only within) pkg for a file named b.py.
Alternatively, you could use an absolute import, the same way you would in a module outside the package, like one of these:
import pkg.b
from pkg import b
Your attempted workaround of adding pkg to sys.path is a very bad idea, for multiple reasons. For example, b and pkg.b will become different modules as far as Python is concerned, so the top-level code can end up getting run twice, and you can end up with two separate copies of all of the globals (and even if you think "I'm not using globals", you probably are: classes and functions are globals), so you can easily end up with a situation where b.MyClass(3) != pkg.b.MyClass(3) unexpectedly, which is always fun to debug…
Adding an __init__.py to src is also a bad idea. That turns src into a package, meaning the proper qualified name for b is now src.pkg.b, rather than pkg.b, but there's no way to import it under the proper name (unless the parent directory of src happens to be on sys.path as well as src… in which case you have the exact same problem as the above paragraph).
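To see the duplicate-module problem for yourself, a quick sketch, assuming you run it from the directory that contains pkg and that pkg/b.py exists:
import os
import sys

sys.path.append(os.path.abspath('pkg'))   # the workaround being warned against

import b        # loaded as the top-level module "b"
import pkg.b    # loaded a second time as "pkg.b"

print(b is pkg.b)   # False: two independent module objects with separate globals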
See PEP 328 for more details, and the tutorial section on Packages for a simpler overview.

Import Python Issues

I've been writing in Python for a couple of months now and I've never found a thorough explanation of how import works. I downloaded a folder with subfolders containing Python files. I'm trying to use one of these files and I'm losing my mind. How do you properly import a folder with all the files in it?
Any help would be greatly appreciated.
As written in the python documentation on modules:
If you have a folder sound looking like this:
sound/ Top-level package
__init__.py Initialize the sound package
effects/ Subpackage for sound effects
__init__.py
echo.py
surround.py
reverse.py
To import all modules of the effects folder:
from sound.effects import *
Note that to be able to import modules from these directories, the directories have to contain an __init__.py file.
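One caveat from those same docs: from sound.effects import * only pulls in the submodules that effects/__init__.py lists in __all__ (otherwise it just imports whatever names __init__.py itself defines). A minimal sketch of that file:
# sound/effects/__init__.py
__all__ = ["echo", "surround", "reverse"]   # what "from sound.effects import *" should load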
First, check to see if the subdirectories have a file named __init__.py file in them. Python will not recognize directories that do not contain these files.
Then, you will have to manually change the PYTHONPATH, which you can find in sys.path. You can find a great example here.
Edit: I'm not 100% sure this is what you were asking for. If you want to import ALL the python files in a directory, you will have to import them one by one. For example, given a directory like so:
parent/
__init__.py
runner.py
example.py
language.py
you would have to type
from parent import runner, example, language
or
from parent import * # this imports whatever __init__.py defines (plus any submodules listed in __all__)
You have to create a __init__.py file in the directory to make it a package. In this file you import all the symbols from the underlying files.
See http://docs.python.org/tutorial/modules.html (especially part 6.4 Packages) for further notes on that.
The parent folder must either be on PYTHONPATH, or its path must be listed in a file with the extension .pth placed in a location already on your path, usually in site-packages.
Then your package, and all folders inside it from which you have to import, need to have a file named __init__.py. This file can be used for package initialization, but as a starting point it can be an empty file.
For example, my program folder, situated in C:\Python26, has the structure:
programas\
.....package1\
.........__init__.py
.........module1.py
.........subpackage1\
.............__init__.py
.............module2.py
.....package2\
.........__init__.py
.........module3.py
.....__init__.py
.....lonelyscript1.py
.....lonelyscript2.py
file site-packages\site.pth contains:
C:\Python26\programas
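With that .pth file in place, imports from anywhere on the system would look roughly like this (names taken from the tree above):
import package1.module1
from package1.subpackage1 import module2
import package2.module3
import lonelyscript1   # top-level scripts in programas are importable too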
