How to make a Python package?

Here is my structure:

main.py
folder1\
    button.py
folder2\
    picturebutton.py
folder3\
    listbox.py
folder4\
    customlistbox.py
folder5\
    hyperlistbox.py
Now, I have a module called "widget.py" and I would like to make it accessible to all the modules here, so that each module can say import widget or something of the sort. After googling, it appears that I have to make a package to do this.
I could not make sense of the examples online, as I have no idea how they work, and I am hoping that one of you may be able to help me with my case.
Edit:
All the folders (except for the root one) have an __init__.py file.

Being able to import some other module does not require a package; it requires that the widget module be on your PYTHONPATH. You would typically do that by installing it (writing a setup.py file; see the standard library's distutils module).
If you did want a package though, every folder that needs to be a package needs to have an __init__.py file in it (empty is fine).
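As a rough illustration, a minimal setup.py for installing a single module might look like this (the name and version are placeholders):

from distutils.core import setup

setup(
    name="widget",          # distribution name; a placeholder
    version="0.1",
    py_modules=["widget"],  # installs widget.py onto the import path
)

Running python setup.py install would then make import widget work from anywhere.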

The proper way is to create a setup.py file for your package, but since that may take time, below is a shortcut.
If you use your module frequently, e.g. in scripts, an easy way is to export PYTHONPATH in your bashrc/zshrc file, giving the path to the directory containing your code.
For example:
export PYTHONPATH=$PYTHONPATH:$HOME/path/to/package
Check it in the terminal using
echo "$PYTHONPATH"
Happy Coding

Related

Listing the module requirements in a particular directory (package)

In my office we have a quite complex directory structure when it comes to our code.
One of the things we have is a libs module to drop "common" things used by other parts of our big application (or set of applications... that are all living under a common directory).
The code in that libs/ directory requires certain packages installed in order for it to work. In said libs/ directory we have a requirements.txt file that supposedly lists the dependencies required for the things (things being Python code) in it to work. We have been filling that requirements.txt file pretty manually, tracking that "if this .py file uses this module, we should add it to the requirements file", so it's almost certain that by now we have forgotten to add some required modules.
Because of the complex structure we have (some parts use pipenv, some others have their own requirements.txt...), it is very hard to know whether a required module is going to end up installed or not.
So I would like to make sure that this libs/ directory (cough, cough... module) has all its dependencies listed in its libs/requirements.txt.
Is that possible? Ideally it'd be "run this command passing /libs/ as an argument, it'll scan the directory and tell you what packages are needed by the py(s) found in it"
Thank you in advance.
Unfortunately, Python does not know whether its dependencies are satisfied until runtime. requirements.txt is just a helper file for pip and similar tools, and you have to update it manually.
That said, you could:
- use the os module to recursively get a list of all *.py files in the folder
- parse each one of them for lines having the format import aaa.bbb or from aaa import bbb
- keep a set of the imports
However, even in that case, the name of the imported module is not always the same as the name you need to pass to pip (e.g., import yaml requires pyyaml in requirements.txt), but at least it could be a hint of what's missing.
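A rough sketch of such a scan, using the standard-library ast module rather than matching raw text lines (the "libs" argument is illustrative):

import ast
import os

def find_imports(root):
    """Collect the top-level names imported by every .py file under root."""
    found = set()
    for dirpath, _dirnames, filenames in os.walk(root):
        for filename in filenames:
            if not filename.endswith(".py"):
                continue
            path = os.path.join(dirpath, filename)
            with open(path, encoding="utf-8") as fh:
                source = fh.read()
            try:
                tree = ast.parse(source, filename=path)
            except SyntaxError:
                continue  # skip files that do not parse
            for node in ast.walk(tree):
                if isinstance(node, ast.Import):
                    for alias in node.names:
                        found.add(alias.name.split(".")[0])
                elif isinstance(node, ast.ImportFrom) and node.module:
                    found.add(node.module.split(".")[0])
    return found

print(sorted(find_imports("libs")))  # prints imported names, not pip package names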

How to work with absolute imports when developing a package

I have been struggling with imports in packages for a while. When I develop a package, I read everywhere that it is preferable to use absolute imports in submodules of that package. I understand that, and I like it more as well. But I also read that you shouldn't use sys.path.append('/path/to/package') to make your package importable during development...
So my question is: how do you develop such a package from zero, directly using absolute imports? At the moment I develop the package using relative imports, since that way I am able to test the code I am writing before packaging and installing; then I change the imports once I have a release and build the package.
What is the correct way of doing such a thing? In PyCharm, for example, you would mark the folder as 'source root' and be able to work as if the package folder were on the path. Still, I read that this is not the proper way... What am I missing? How do you develop a package while testing its code?
Your mileage may vary but this is what I usually do:
Within a package (foo), absolute (import foo.bar) or relative (from . import bar) imports don't matter to me as long as they work. Sometimes I prefer relative, especially when the project is large and one day I might decide to move a number of source files into a subdirectory.
How do I test? My $PYTHONPATH usually has . in it, and my directory hierarchy is like this:

/path/to/foo_project
    setup.py
    foo/
        __init__.py
        bar.py
    test/
        test1.py
        test2.py
Then the script foo_project/test/test1.py uses the package just as a normal consumer would, with import foo.bar. When I test my code, I am in the directory foo_project and run python test/test1.py. Since I have . in my $PYTHONPATH, Python finds the directory foo and uses it as a package.
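For instance, test1.py could be as small as this (the contents of bar are placeholders):

# test/test1.py -- run from foo_project with:  python test/test1.py
import foo.bar  # found because "." is on PYTHONPATH

print(foo.bar)  # placeholder; real test code goes here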

python module layout

I'm just starting to get to the point in my Python projects where I need to start using multiple packages, and I'm a little confused about exactly how everything is supposed to work together. What exactly should go into the __init__.py of the package? Some projects I see just have blank __init__.py files and keep all of their code in modules in that package. Other projects implement what seems to be the majority of the package's classes and functions inside the __init__.py.
Is there a document or style guide or something that describes what the python authors had in mind for the use of packages and the __init__ file and such?
Edit:
I know the point of having the __init__.py file in the simplest sense: it makes a folder a package. But why would I put a function there instead of in a module in that same folder (package)?
__init__.py can be empty. What it really does is make sure Python treats your directory as a package. Beyond that, it can provide any initialization you might need when your package is imported (configuring the environment or something along those lines), or define __all__ so that Python knows what to do when someone uses from package import *.
Most everything you need to know is described in the docs on Packages. Dive Into Python also has a piece on packaging.
You already know, I guess, that __init__.py files are required to make Python treat the directories as containing packages.
In the above model, __init__.py can remain empty.
You can also execute initialization code for the package there.
You can also set the __all__ variable.
Edit: when you do from package import *, the __all__ variable determines which submodules get imported.
See : http://docs.python.org/tutorial/modules.html
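A small sketch tying those uses together (the submodule names bar and baz are illustrative):

# package/__init__.py
print("initializing package")  # runs once, on first import of the package

__all__ = ["bar", "baz"]       # what `from package import *` will pull in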

python: how/where to put a simple library installed in a well-known-place on my computer

I need to put a Python script somewhere on my computer so that I can use it in another file. How do I do this, and where do I put it? And where in the Python documentation do I learn how to do this? I'm a beginner and don't use Python much.
library file MyLib.py, put in a well-known place:

def myfunc():
    ...
The other file, SourceFile.py, is located elsewhere and doesn't need to know where MyLib.py is:

something = MyLib.myfunc()
Option 1:
Put your file at:
<Wherever your Python is>/Lib/site-packages/myfile.py
Add this to your code:
import myfile
Pros: Easy
Cons: Clutters site-packages
Option 2:
Put your file at:
<Wherever your Python is>/Lib/site-packages/mypackage/myfile.py
Create an empty text file called:
<Wherever your Python is>/Lib/site-packages/mypackage/__init__.py
Add this to your code:
from mypackage import myfile
Pros: Reduces clutter in site-packages by keeping your stuff consolidated in a single directory
Cons: Slightly more work; still some clutter in site-packages. This isn't bad for stable stuff, but may be regarded as inappropriate for development work, and may be impossible if Python is installed on a shared drive
Option 3
Put your file in any directory you like
Add that directory to the PYTHONPATH environment variable
Proceed as with Option 1 or Option 2, except substitute the directory you just created for <Wherever your Python is>/Lib/site-packages/
Pros: Keeps development code out of the site-packages directory
Cons: Slightly more setup.
This is the approach I usually use for development work.
In general, the Modules section of the Python tutorial is a good introduction for beginners on this topic. It explains how to write your own modules and where to put them, but I'll summarize the answer to your question below:
Your Python installation has a site-packages directory; any Python file you put in that directory will be available to any script you write. For example, if you put the file MyLib.py in the site-packages directory, then in your script you can say
import MyLib
something = MyLib.myfunc()
If you're not sure where Python is installed, the Stack Overflow question How do I find the location of my Python site-packages directory will be helpful to you.
Alternatively, you can modify sys.path, which is a list of directories where Python looks for libraries when you use the import statement. Your site-packages directory is already in this list, but you can add (or remove) entries yourself. For example, if you wanted to put your MyLib.py file in /usr/local/pythonModules, you could say
import sys
sys.path.append("/usr/local/pythonModules")
import MyLib
something = MyLib.myfunc()
Finally, you could use the PYTHONPATH environment variable to indicate the directory where your MyLib.py is located.
However, I recommend simply placing your MyLib.py file in the site-packages directory, as described above.
No one has mentioned using .pth files in site-packages to abstract away the location.
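For instance, a one-line text file dropped into site-packages (the file name and directory here are hypothetical):

# mylibs.pth -- each non-comment line names a directory to add to sys.path
/usr/local/pythonModules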
You will have to place your MyLib.py somewhere on your load path (the paths in your sys.path variable), and then you'll be able to import it fine. Your code would look like
import MyLib
MyLib.myfunc()
Generally speaking, you should distribute your packages using distutils so that they can be easily installed in the proper locations; it would help your users as well.
Also, you might not want to install packages in your global Python install. It's customary (and recommended) to use virtualenv which you can use to create small isolated Python environments that can hold local packages.
It's best your give the whole thing a shot and then ask further questions if you have them.
The private version, from my .profile:
export PYTHONPATH=${PYTHONPATH}:$HOME/lib/python
That directory has a subdirectory "msw", so import msw.primes is self-documenting. Alternatively, add your code to a local directory that is already in sys.path.
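The resulting layout is roughly this (primes.py is the example from the import above):

$HOME/lib/python/
    msw/
        __init__.py
        primes.py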
The Python tutorial section 6 talks about modules, and 6.1.2 talks about the PYTHONPATH, which determines where Python will look for modules you try to import. The tutorial: http://docs.python.org/tutorial/modules.html

Deploying a python application with shared package

I'm thinking about how to arrange a deployed Python application, which will have:
- an executable script located in /usr/bin/ that provides a CLI to functionality implemented in
- a library installed to wherever the current site-packages directory is.
Now, currently, I have the following directory structure in my sources:
foo.py
foo/
    __init__.py
    ...
which I guess is not the best way to do things. During development everything works as expected; however, when deployed, the from foo import FooObject code in foo.py seemingly attempts to import foo.py itself, which is not the behaviour I'm looking for.
So the question is what is the standard practice of orchestrating situations like this? One of the things I could think of is, when installing, rename foo.py to just foo, which stops it from importing itself, but that seems rather awkward...
Another part of the problem, I suppose, is that it's a naming challenge. Perhaps call the executable script foo-bin.py?
This article is pretty good and shows you a good way to do it. The second item from the Do list answers your question.
Shameless copy-paste:
Filesystem structure of a Python project
by Jp Calderone
Do:
- name the directory something related to your project. For example, if your project is named "Twisted", name the top-level directory for its source files Twisted. When you do releases, you should include a version number suffix: Twisted-2.5.
- create a directory Twisted/bin and put your executables there, if you have any. Don't give them a .py extension, even if they are Python source files. Don't put any code in them except an import of and call to a main function defined somewhere else in your projects.
- If your project is expressible as a single Python source file, then put it into the directory and name it something related to your project. For example, Twisted/twisted.py. If you need multiple source files, create a package instead (Twisted/twisted/, with an empty Twisted/twisted/__init__.py) and place your source files in it. For example, Twisted/twisted/internet.py.
- put your unit tests in a sub-package of your package (note - this means that the single Python source file option above was a trick - you always need at least one other file for your unit tests). For example, Twisted/twisted/test/. Of course, make it a package with Twisted/twisted/test/__init__.py. Place tests in files like Twisted/twisted/test/test_internet.py.
- add Twisted/README and Twisted/setup.py to explain and install your software, respectively, if you're feeling nice.
Don't:
- put your source in a directory called src or lib. This makes it hard to run without installing.
- put your tests outside of your Python package. This makes it hard to run the tests against an installed version.
- create a package that only has a __init__.py and then put all your code into __init__.py. Just make a module instead of a package, it's simpler.
- try to come up with magical hacks to make Python able to import your module or package without having the user add the directory containing it to their import path (either via PYTHONPATH or some other mechanism). You will not correctly handle all cases and users will get angry at you when your software doesn't work in their environment.
Distutils supports installing modules, packages, and scripts. If you create a distutils setup.py which refers to foo as a package and foo.py as a script, then foo.py should get installed to /usr/local/bin or whatever the appropriate script install path is on the target OS, and the foo package should get installed to the site-packages directory.
You should call the executable just foo, not foo.py; then attempts to import foo will not use it.
As for naming it properly: this is difficult to answer in the abstract; we would need to know what specifically it does. For example, if it configures and controls, calling it foo-config or fooctl might be appropriate. If it is a shell API for the library, it should have the same name as the library.
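Putting the last two answers together, a setup.py sketch for that layout might look like this (the version is a placeholder, and the script is assumed to have been renamed from foo.py to plain foo):

# setup.py -- a sketch, not a definitive recipe
from distutils.core import setup

setup(
    name="foo",
    version="1.0",        # placeholder
    packages=["foo"],     # the foo/ package is installed to site-packages
    scripts=["foo"],      # the CLI script is installed to the scripts directory
)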
Your CLI module is one thing; the package that supports it is another. Don't confuse the module foo (in a file foo.py) with the package foo (in a directory foo containing a file __init__.py).
You have two things named foo: a module and a package. What else do you want to name foo? A class? A function? A variable?
Pick a distinctive name for the foo module or the foo package. foolib, for example, is a popular package name.
