How to add a folder to Python path?

Basically, I can only reference my other files as modules when they are in a very specific location:
C:\Users\Dave\Desktop\Programming\Python.
If I want to create a new folder for a large project with multiple modules, say
C:\Users\Dave\Desktop\Programming\Python\Project1,
I can no longer import any modules and keep getting a ModuleNotFoundError. I've looked into it and it seems I need to add that folder to the Python Path somehow, but I couldn't find any answers on how to do it. My computer runs on Windows 10 if that matters.

I think the immediate solution to your problem/the answer to your question would be to use sys.path.append() at the top of your script.
import sys

# Tell Python to also search this folder when importing (use your real path):
sys.path.append("<ABSOLUTE/PATH/TO/YOUR/CUSTOM/MODULES/FOLDER>")

import custom_module
This isn't an ideal solution for any kind of prod use, however. Based on what you wrote in your question, this may not be what you're looking for. More info might help to craft a more stable solution:
Are you using virtual environments when running Python? Do you just run python script.py, or do you use a specific version of Python installed somewhere other than the usual C:\Program Files\Python?
When you say that you work on a project with multiple modules, does that mean there are custom modules that you or someone else wrote, or just that the project uses modules outside the standard library, i.e. ones you had to pip install?
Can we get an example of the code you're running and the folder structure of your project/where the modules are that you need?

Related

Using external modules in a Pycharm project

I am new to PyCharm and need some help. I am working on a project that makes use of a large library of modules (specifically, Schrodinger, which allows for a lot of cool chemistry programs). Schrodinger requires the use of Python 2.7, if that makes any difference.
There are too many modules to install to the project directory. When I move the project directory to the location of the modules, my script becomes stuck on 'initializing'. I have attempted to import it as a package to no avail.
I have also tried to use the sys.path command; however, a lot of the modules make use of other modules as well, so that has become a pain very quickly.
How can I use these modules within Pycharm? And if there is no easy way, do you have a recommendation for an IDE that does have this feature?
Thanks
PyCharm doesn't identify user-defined modules that haven't been added to the project.
If the modules are in the same project, I usually mark their folder as a Sources Root (right-click the folder -> Mark Directory as -> Sources Root).
Alternative way: if the modules are external, open them via File -> Open -> Open in current window -> Add to currently opened project; this looks like two different projects side by side. Now you can mark the root folder of the imported module (e.g. learning) as a Sources Root.
import stackoverflow
Now PyCharm can identify the user-defined modules.

Python: Is it possible to create a package out of multiple external libraries?

This issue has been driving me insane for the past few days.
So basically, I'm trying to port over a Pure Python project to a proper PyCharm project. This is to basically improve code quality and project structure.
I wish it was as simple as basically creating a virtualenv to house everything, but it isn't. This project will eventually be developed simultaneously by multiple developers with Git as source control, and the default libraries will be modified. I presume this means that the libraries should ideally be tracked by Git in the end. Virtualenv shouldn't help here as far as I know because it's not portable between systems (or at least that's still being tested).
This project will also, in the future, be deployed to a CentOS server.
So the only plan I can think of to pull this off successfully would be to simply bring all of the libraries (which was done using pip install -t Libraries <ExampleLibrary>) into a single folder, with an __init__.py inside, and use them from other Python files as a package within the PyCharm project.
Is this possible / recommended? I tried various methods to reference these libraries, but none of them works at runtime. Somehow, when the files in the library import something else from their own package, an ImportError is raised saying that there's no such module.
Will accept any other suggestions too.
Using Pycharm Community Edition.
EDIT: After having a good night's rest I think the crux of the issue is really just project organization. Before I ported it over to Pycharm the project worked as expected, but this had all of the python files in the root directory, and the libraries in a subfolder of the root, with every project file having the same boilerplate code:
import os
import sys

absFilePath = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(1, absFilePath + "/lib")
I was hoping that by using Pycharm to help me flesh out the packages, I could avoid having repeated boilerplate code.
Note: Not full solution.
The addition of the template code below forces the file containing the code to be in the same directory as the libs folder.
For Pycharm, all I had to do was mark the libs folder as a source folder. Even with the addition of the template code to the file, the modified libraries still work as expected.
For the Python Shell, this template code is still needed:
import os
import sys
absFilePath = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(1, absFilePath + "/lib")

What is the proper way to work with shared modules in Python development?

I'm working toward adopting Python as part of my team's development tool suite. With the other languages/tools we use, we develop many reusable functions and classes that are specific to the work we do. This standardizes the way we do things and saves a lot of wheel re-inventing.
I can't seem to find any examples of how this is usually handled with Python. Right now I have a development folder on a local drive, with multiple project folders below that, and an additional "common" folder containing packages and modules with re-usable classes and functions. These "common" modules are imported by modules within multiple projects.
Development/
    Common/
        Package_a/
        Package_b/
    Project1/
        Package1_1/
        Package1_2/
    Project2/
        Package2_1/
        Package2_2/
In trying to learn how to distribute a Python application, it seems that there is an assumption that all referenced packages are below the top-level project folder, not collateral to it. The thought also occurred to me that perhaps the correct approach is to develop common/framework modules in a separate project, and once tested, deploy those to each developer's environment by installing to the site-packages folder. However, that also raises questions re distribution.
Can anyone shed light on this, or point me to a resource that discusses this issue?
If you have common code that you want to share across multiple projects, it may be worth storing it in a physically separate project, which is then imported as a dependency into your other projects. This is easily achieved if you host the common code project on GitHub or Bitbucket, where you can use pip to install it in any other project. This approach not only helps you share common code across multiple projects, but also protects you from inadvertently creating bad dependencies (i.e. ones directed from your common code to your non-common code).
The link below provides a good introduction to using pip and virtualenv to manage dependencies, definitely worth a read if you and your team are fairly new to working with python as this is a very common toolchain used for just this kind of problem:
http://dabapps.com/blog/introduction-to-pip-and-virtualenv-python/
And the link below shows you how to pull in dependencies from github using pip:
How to use Python Pip install software, to pull packages from Github?
The must-read-first on this kind of stuff is here:
What is the best project structure for a Python application?
in case you haven't seen it (and follow the link in the second answer).
The key is that each major package be importable as if "." were the top-level directory, which means that it will also work correctly when installed in site-packages. What this implies is that major packages should all be flat within the top directory, as in:
myproject-0.1/
    myproject/
    framework/
    packageA/
        sub_package_in_A/
            module.py
    packageB/
    ...
Then both you (within your other packages) and your users can import as:
import myproject
import packageA.sub_package_in_A.module
etc
Which means you should think hard about @MattAnderson's comment, but if you want it to appear as a separately-distributable package, it needs to be in the top directory.
Note this doesn't stop you (or your users) from doing an:
import packageA.sub_package_in_A as sub_package_in_A
but it does stop you from allowing:
import sub_package_in_A
directly.
...it seems that there is an assumption that all referenced packages are below the top-level project folder, not collateral to it.
That's mainly because the current working directory is the first entry in sys.path by default, which makes it very convenient to import modules and packages below that directory.
If you remove it, you can't even import stuff from the current working directory...
$ touch foo.py
$ python
>>> import sys
>>> del sys.path[0]
>>> import foo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named foo
The thought also occurred to me that perhaps the correct approach is to develop common/framework modules in a separate project, and once tested, deploy those to each developer's environment by installing to the site-packages folder.
It's not really a major issue for development. If you're using version control, and all developers check out the source tree in the same structure, you can easily employ relative path hacks to ensure the code works correctly without having to mess around with environment variables or symbolic links.
However, that also raises questions re distribution.
This is where things can get a bit more complicated, but only if you're planning to release libraries independently of the projects which use them, and/or have multiple project installers share the same libraries. If that's the case, take a look at distutils.
If not, you can simply employ the same relative path hacks used in development to ensure your project works "out of the box".
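As a concrete illustration, here is a minimal sketch of such a relative path hack, assuming the Development/Common layout from the question (Package_a is taken from the tree above, and the file is assumed to live directly in Development/Project1/):

import os
import sys

# Reach the sibling Common/ folder relative to this file's location.
_here = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(1, os.path.join(_here, os.pardir, "Common"))

import Package_a  # shared code from Development/Common/Package_a/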
I think that this is the best reference for creating a distributable python package:
link removed as it leads to a hacked site.
Also, don't feel that you need to nest everything under a single directory. You can do things like

platform/
    core/
        coremodule
    api/
        apimodule

and then do things like from platform.core import coremodule, etc. (One caveat: a top-level package named platform shadows the standard library's platform module, so in practice you would pick a more distinctive name.)

Local collection of Python packages: best way to import them?

I need to ship a collection of Python programs that use multiple packages stored in a local Library directory: the goal is to avoid having users install packages before using my programs (the packages are shipped in the Library directory). What is the best way of importing the packages contained in Library?
I tried three methods, but none of them appears perfect: is there a simpler and robust method? or is one of these methods the best one can do?
In the first method, the Library folder is simply added to the library path:
import sys
import os
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'Library'))
import package_from_Library
The Library folder is put at the beginning so that the packages shipped with my programs have priority over the same modules installed by the user (this way I am sure that they have the correct version to work with my programs). This method also works when the Library folder is not in the current directory, which is good. However, this approach has drawbacks. Each and every one of my programs adds a copy of the same path to sys.path, which is a waste. In addition, all programs must contain the same three path-modifying lines, which goes against the Don't Repeat Yourself principle.
An improvement over the above problems consists in trying to add the Library path only once, by doing it in an imported module:
# In module add_Library_path:
import os
import sys
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'Library'))
and then to use, in each of my programs:
import add_Library_path
import package_from_Library
This way, thanks to the caching mechanism of CPython, the module add_Library_path is only run once, and the Library path is added only once to sys.path. However, a drawback of this approach is that import add_Library_path has an invisible side effect, and that the order of the imports matters: this makes the code less legible, and more fragile. Also, this forces my distribution of programs to include an add_Library_path.py program that users will not use.
Python modules from Library can also be imported by making it a package (empty __init__.py file stored inside), which allows one to do:
from Library import module_from_Library
However, this breaks for packages in Library: a package might do something like from xlutils.filter import …, which fails because xlutils itself is not found in sys.path. So this method works only for plain modules in Library, not for packages.
All these methods have some drawback.
Is there a better way of shipping programs with a collection of packages (that they use) stored in a local Library directory? or is one of the methods above (method 1?) the best one can do?
PS: In my case, all the packages from Library are pure Python packages, but a more general solution that works for any operating system is best.
PPS: The goal is that the user be able to use my programs without having to install anything (beyond copying the directory I ship them regularly), like in the examples above.
PPPS: More precisely, the goal is to have the flexibility of easily updating both my collection of programs and their associated third-party packages from Library by having my users do a simple copy of a directory containing my programs and the Library folder of "hidden" third-party packages. (I do frequent updates, so I prefer not forcing the users to update their Python distribution too.)
Messing around with sys.path leads to pain... The modern package template and Distribute contain a vast array of information and were in part set up to solve your problem.
What I would do is set up setup.py to install all your packages either to a specific site-packages location or, if you can, to the system's site-packages. In the former case, the local site-packages directory would then be added to the PYTHONPATH of the system/user; in the latter case, nothing needs to change.
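As a minimal sketch of that approach, assuming setuptools and treating the Library folder from the question as the package root (the distribution name is illustrative):

from setuptools import setup, find_packages

setup(
    name="my-program-collection",
    version="0.1",
    # Library/ is the directory that holds the importable packages.
    package_dir={"": "Library"},
    packages=find_packages("Library"),
)

Installed normally (for example with pip install .), the packages then land in site-packages and users no longer need any sys.path manipulation.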
You could use a batch file to set the Python path as well. Or change the Python executable to point to a shell script that contains a modified PYTHONPATH and then executes the Python interpreter. The latter, of course, means that you have to have access to the user's machine, which you do not. However, if your users only run scripts and do not import your own libraries, you could use your own wrapper for scripts:
#!/path/to/my/python
And the /path/to/my/python script would be something like:
#!/bin/sh
# Prepend the library path, then pass all arguments through to the interpreter.
PYTHONPATH=/whatever/lib/path:$PYTHONPATH /usr/bin/python "$@"
I think you should have a look at path import hooks, which allow you to modify how Python searches for modules.
For example, you could do something like KDE's script engine does for its Python plugins [1].
It adds a special token to sys.path (like "<plasmaXXXXXX>", with XXXXXX being a random number just to avoid name collisions), and when Python tries to import a module and can't find it on the other paths, it calls your importer, which can deal with it.
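For a flavor of what a custom importer looks like, here is a minimal sketch using the Python 3 importlib API (which postdates this answer); it uses a meta path finder rather than a literal path hook, and the Library folder name is an assumption carried over from the question:

import importlib.util
import os
import sys

LIBRARY = os.path.join(os.path.dirname(os.path.abspath(__file__)), "Library")

class LibraryFinder:
    # The import system calls find_spec() on every entry in sys.meta_path.
    def find_spec(self, fullname, path=None, target=None):
        if "." in fullname:
            return None  # submodules resolve through the package's own __path__
        pkg_init = os.path.join(LIBRARY, fullname, "__init__.py")
        mod_file = os.path.join(LIBRARY, fullname + ".py")
        if os.path.isfile(pkg_init):
            return importlib.util.spec_from_file_location(
                fullname, pkg_init,
                submodule_search_locations=[os.path.dirname(pkg_init)])
        if os.path.isfile(mod_file):
            return importlib.util.spec_from_file_location(fullname, mod_file)
        return None

sys.meta_path.append(LibraryFinder())  # consulted after the default finders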
A simpler alternative is to have a main script used as a launcher, which simply adds the path to sys.path and executes the target file (so that you can safely avoid putting the sys.path.append(...) line in every file).
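A minimal sketch of such a launcher, again assuming the Library folder from the question (runpy is in the standard library):

import os
import runpy
import sys

# Prepend the bundled Library folder, then run the requested program
# as if it had been executed directly: python launcher.py some_program.py
sys.path.insert(1, os.path.join(os.path.dirname(os.path.abspath(__file__)), "Library"))
runpy.run_path(sys.argv[1], run_name="__main__")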
Yet another alternative, which works on Python 2.6+, would be to install the library under the per-user site-packages directory.
[1] You can find the source code under /usr/share/kde4/apps/plasma_scriptengine_python in a linux installation with kde.

Why use sys.path.append(path) instead of sys.path.insert(1, path)?

Edit: based on Ulf Rompe's comment, it is important that you use "1" instead of "0"; otherwise you will break sys.path.
I have been doing python for quite a while now (over a year), and I am always confused as to why people recommend you use sys.path.append() instead of sys.path.insert(). Let me demonstrate.
Let's say I am working on a module named PyWorkbooks (that is installed on my computer), but I am simultaneously working on a different module (let's say PyJob) that incorporates PyWorkbooks. As I'm working on PyJob I find errors in PyWorkbooks that I am correcting, so I would like to import a development version.
There are multiple ways to work on both (I could put my PyWorkbooks project inside of PyJob, for instance), but sometimes I will still need to play with the path. However, I cannot simply do a sys.path.append() to the folder where PyWorkbooks is at. Why? Because python will find my installed PyWorkbooks first!
This is why you have to do a sys.path.insert(1, path_to_dev_pyworkbooks)
In summary:
sys.path.append(path_to_dev_pyworkbooks)
import PyWorkbooks # does NOT import dev pyworkbooks, imports installed one
or:
sys.path.insert(1, path_to_dev_pyworkbooks) # based on comments you should use **1 not 0**
import PyWorkbooks # imports correct file
This has caused a few hangups for me in the past, and I would really like it if we (as a community) started recommending sys.path.insert(1, path), as if you are manually inserting a path I think it is safe to say that that is the path you want to use!
Or do I have something wrong? It's a question that sometimes bothers me and I wanted it in the open!
If you really need to use sys.path.insert, consider leaving sys.path[0] as it is:
sys.path.insert(1, path_to_dev_pyworkbooks)
This could be important since 3rd party code may rely on sys.path documentation conformance:
As initialized upon program startup, the first item of this list, path[0], is the directory containing the script that was used to invoke the Python interpreter.
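A quick way to see that guarantee in action (the dev path is made up):

import sys

print(sys.path[0])                              # the directory containing this script
sys.path.insert(1, "/home/me/dev/PyWorkbooks")
print(sys.path[0])                              # unchanged: still the script's directory
print(sys.path[1])                              # the freshly inserted dev path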
If you have multiple versions of a package / module, you need to be using virtualenv (emphasis mine):
virtualenv is a tool to create isolated Python environments.
The basic problem being addressed is one of dependencies and versions, and indirectly permissions. Imagine you have an application that needs version 1 of LibFoo, but another application requires version 2. How can you use both these applications? If you install everything into /usr/lib/python2.7/site-packages (or whatever your platform’s standard location is), it’s easy to end up in a situation where you unintentionally upgrade an application that shouldn’t be upgraded.
Or more generally, what if you want to install an application and leave it be? If an application works, any change in its libraries or the versions of those libraries can break the application.
Also, what if you can’t install packages into the global site-packages directory? For instance, on a shared host.
In all these cases, virtualenv can help you. It creates an environment that has its own installation directories, that doesn’t share libraries with other virtualenv environments (and optionally doesn’t access the globally installed libraries either).
That's why people consider insert(0, ...) to be wrong -- it's an incomplete, stopgap solution to the problem of managing multiple environments.
You are confusing the concepts of appending and prepending. The following code is prepending:
sys.path.insert(1,'/thePathToYourFolder/')
It places the new entry at the beginning (well, second, to be precise) of the search sequence that your interpreter will go through. sys.path.append() puts things at the very end of the search sequence.
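A tiny illustration of the difference (the paths are made up):

import sys

sys.path.insert(1, "/home/me/dev/libs")  # searched early, right after the script's own directory
sys.path.append("/home/me/extra/libs")   # searched last, after everything else already on the path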
It is advisable that you use something like virtualenv instead of manually coding your package directories into the PYTHONPATH every time. For setting up various ecosystems that separate your site-packages and possible versions of Python, read these two blogs:
python ecosystems introduction
bootstrapping python virtual environments
If you do decide to move down the path to environment isolation, you would certainly benefit from looking into virtualenvwrapper: http://www.doughellmann.com/docs/virtualenvwrapper/
