Define Classes in Packages - python

I'm learning Python and I have been playing around with packages. I wanted to know the best way to define classes in packages. It seems that the only way to define classes in a package is to define them in the __init__.py of that package. Coming from Java, I'd kind of like to define individual files for my classes. Is this a recommended practice?
I'd like to have my directory look somewhat like this:
recursor/
    __init__.py
    RecursionException.py
    RecursionResult.py
    Recursor.py
So I could refer to my classes as recursor.Recursor, recursor.RecursionException, and recursor.RecursionResult. Is this doable or recommended in Python?

Go ahead and define your classes in separate modules. Then make __init__.py do something like this (with explicit relative imports, which Python 3 requires):
from .RecursionException import RecursionException
from .RecursionResult import RecursionResult
from .Recursor import Recursor
That will import each class into the package's root namespace, so calling code can refer to recursor.Recursor instead of recursor.Recursor.Recursor.
I feel the need to echo some of the other comments here, though: Python is not Java. Rather than creating a new module for every class under the sun, I suggest grouping closely related classes into a single module. It's easier to understand your code that way, and calling code won't need a bazillion imports.
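To see the re-export pattern end to end, here is a self-contained sketch that builds the recursor package from the question in a temporary directory and imports it (the class bodies are placeholders invented for the demo; only the file layout and the __init__.py come from the question and answer):

```python
import os
import sys
import tempfile
import textwrap

# Build the example package on disk so this sketch is runnable as-is.
pkg_dir = os.path.join(tempfile.mkdtemp(), "recursor")
os.makedirs(pkg_dir)
files = {
    "Recursor.py": "class Recursor:\n    pass\n",
    "RecursionException.py": "class RecursionException(Exception):\n    pass\n",
    "RecursionResult.py": "class RecursionResult:\n    pass\n",
    "__init__.py": textwrap.dedent("""\
        from .RecursionException import RecursionException
        from .RecursionResult import RecursionResult
        from .Recursor import Recursor
    """),
}
for name, body in files.items():
    with open(os.path.join(pkg_dir, name), "w") as f:
        f.write(body)

sys.path.insert(0, os.path.dirname(pkg_dir))
import recursor

# Thanks to the re-exports, one level of qualification suffices:
r = recursor.Recursor()                       # not recursor.Recursor.Recursor()
err = recursor.RecursionException("too deep")
```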

This is perfectly doable. Just create a new module for each of those classes, and use exactly the structure you posted.
You can also make a Recursion.py module or something similar, and include all 3 classes in that file.
(I'm also new to Python from Java, and I haven't yet put anything in my __init__.py files...)

In Python you're not restricted to defining one class per file, and few do that. You can if you want to, though; it's totally up to you. A package in Python is just a directory with an __init__.py file. You don't have to put anything in that file, but you can use it to control what gets imported, etc.

Related

Fetching subclasses that are not present in current context

The primary problem I'm trying to solve is how to detect all the subclasses of a particular class. The reason I'm unable to use __subclasses__ is that the child classes aren't yet accessible in the context from which I'm attempting to access them.
The folder structure I'm working with looks like this:
main.py
projects/
    __init__.py
    project.py
    some_project_child.py
What I'd like to do is get a list of all subclasses of Project (defined in project.py) from main.py.
I'm able to do this by doing:
from projects.project import Project
from projects.some_project_child import SomeProjectChild
Project.__subclasses__()
The aspect of this approach I'd like to avoid is having to add an import line every time I add a new project file/class. I realize I can do this by iterating over the files in the directory and import each one's contents but is there a cleaner more pythonic manner of handling this?
It is in the nature of a dynamic language like Python that it is impossible to do anything like this without importing: there is simply no way for Python to know what subclasses a class has without executing the files they are defined in.
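That said, the iteration the asker mentions can be written once with the standard library's pkgutil, so no import line needs to be added per file. The sketch below builds the question's layout in a temporary directory purely so it is runnable; the reusable part is load_subclasses:

```python
import importlib
import os
import pkgutil
import sys
import tempfile

def load_subclasses(package, base_cls):
    # Import every module in the package so all subclasses of base_cls
    # defined there get registered, then collect them.
    for info in pkgutil.iter_modules(package.__path__):
        importlib.import_module(f"{package.__name__}.{info.name}")
    return base_cls.__subclasses__()

# Recreate the question's layout on disk (demo scaffolding only).
root = tempfile.mkdtemp()
pkg = os.path.join(root, "projects")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "project.py"), "w") as f:
    f.write("class Project:\n    pass\n")
with open(os.path.join(pkg, "some_project_child.py"), "w") as f:
    f.write("from projects.project import Project\n"
            "class SomeProjectChild(Project):\n    pass\n")

sys.path.insert(0, root)
import projects
from projects.project import Project

children = load_subclasses(projects, Project)
```

Adding a new project file under projects/ is then enough; no extra import line in main.py is required.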

Is it pythonic to use __init__.py modules of a package for generic, abstract classes?

I am developing a rather complex application for my company following the object-oriented model in python3. The application contains several packages and sub-packages, each - of course - containing an __init__.py module.
I mostly used those __init__.py modules to declare generic classes for the package inside them, which are intended to be used as abstract templates for their respective package only.
My question is now: Is this a "nice" / "correct" / "pythonic" way to use the __init__.py module(s)? Or shall I rather declare my generic classes somewhere else?
To give an example, let's assume a package mypkg:
mypkg.__init__.py:
class Foo(object):
    __some_attr = None

    def __init__(self, some_attr):
        self.__some_attr = some_attr

    @property
    def some_attr(self):
        return self.__some_attr

    @some_attr.setter
    def some_attr(self, val):
        self.__some_attr = val
mypkg.myfoo.py:
from . import Foo

class MyFoo(Foo):
    def __init__(self):
        super().__init__("this is my implemented value")

    def printme(self):
        print(self.some_attr)
It depends on what API you want to provide. For example, the collections module in the standard library defines all its classes in __init__.py [1].
And the standard library should be pretty "pythonic", whatever that means.
However it provides mostly a "module-like" interface. It's quite rare to see:
import collections.abc
If you already have subpackages, you are probably better off introducing a new subpackage.
If, currently, use of the package doesn't actually depend on subpackages, you might consider putting the code in the __init__.py. Or put the code in some private module and simply import the names inside __init__.py (this is what I'd prefer).
If you are only concerned with where it's best to put the abstract base classes: as the standard library's abc module shows (and collections.abc, which contains the abstract base classes of the collections package), it's common to define an abc.py submodule that contains them.
You may consider exposing them directly from the __init__.py by doing:
from .abc import *
from . import abc
# ...
__all__ = list_of_names_to_export + abc.__all__
inside your __init__.py.
[1] The actual implementation used is in C, however: _collectionsmodule.c
You can always put everything into one source file. The reason for splitting the more complex code into separate modules or packages is to separate the things that are mutually related from things that are unrelated. The separated things should be as independent as possible on the rest. And it applies to any level: data structures, functions, classes, modules, packages, applications.
The same rules apply inside a module and inside a package. I agree with Bakuriu that __init__.py should be closely related to the package infrastructure, but not necessarily to the functionality of the module. My personal opinion is that __init__.py should be as simple as possible. Firstly, everything should be as simple as possible, but not simpler. Secondly, people reading the code of your package tend to think that way: they may not expect to find functionality in __init__.py. It would probably be better to create a generic.py inside the package for that purpose. It is also easier to document the purpose of a module (say, via its top docstring).
The better the separation is from the beginning, the better the independent features can be combined in the future. You get better flexibility, both for use of the module inside the package and for future modifications.
It is indeed possible to use __init__.py for package-specific initialization, but I have never seen it used to define new functions. The only "correct" use I know of is what is discussed in this thread.
Anyway, since you have a complex application, you can use a separate module where you define all your needed functions instead of defining them directly in __init__.py. This gives you more readability, and it is easier to change afterwards.
Don't ever use __init__.py for anything except to define __all__. You will save so many lives if you avoid it.
Reason: it is common for developers to look through packages and modules. But there is a problem you can get stuck with sometimes. If you have a package, you assume that there are modules and code inside it. But you will rarely count __init__.py as one of them because, let's face it, most of the time it is just a requirement to make the directory's modules importable.
package/
    __init__.py
    mod1.py
    humans.py
    cats.py
    dogs.py
    cars.py
    commons.py
Where should class Family be located? It is a common class, and it depends on the others, because we can create a family of humans, of dogs, of cats, even of cars!
By my logic, and the logic of my friends, it should be placed in a separate file, but I will try to find it in commons, then in humans, and then... I will be embarrassed, because I don't really know where it is!
A silly example, sure, but it makes the point.

Nested modules (packages) with python c api?

How can I create nested modules (packages?) with the python c api?
I would like the client code (python) to be able do something like this:
import MainModuleName
import MainModuleName.SubModuleName
Instead of currently:
import MainModuleName
import MainModuleNameSubModuleName
Which imo looks ugly and clutters up the namespace.
Is it possible without having to mess around with file system directories?
You do not "mess around" with file system directories. File system directories are how you create submodules, unless you want to be really obscure and have a lot of needless pain.
You want to have a module called MainModuleName.SubModuleName and then MainModuleName should be a directory with an __init__.py file.
A common way of writing C extension modules is to give them names starting with an underscore, in this case _mainmodulename.c, and then import them from Python files. This is done so that you only need to implement in C the things that have to be in C; the rest you can do in Python. You can also provide pure-Python fallbacks that way. I suggest you do something similar: create the module structure in Python, then import the classes and functions from the underscore-named C modules.
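For instance, the package's __init__.py can try the C extension first and fall back to pure Python. This is a sketch; _mainmodulename and fast_sum are hypothetical names, not from the question:

```python
# MainModuleName/__init__.py (sketch): prefer the C extension if it was built,
# otherwise fall back to a pure-Python implementation.
try:
    from ._mainmodulename import fast_sum  # hypothetical C implementation
except ImportError:
    def fast_sum(values):
        # Pure-Python fallback with the same observable behavior.
        total = 0
        for v in values:
            total += v
        return total
```

Callers then just write `from MainModuleName import fast_sum` and never need to know which implementation they got.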

Python - best way to import modules in complex application?

I am refactoring a piece of Python 2 software. The code is currently in a single file, which contains 45 or so classes (and 2 lines outside of a class to bootstrap the application).
I'd like to have one class per file, with ideally files from related classes grouped in directories.
I typically like to write my Python imports as such:
from zoo.dog_classes.beagle_class import BeagleClass
from zoo.dog_classes.dalmatian_class import DalmatianClass
so that it is clear which modules are imported in the class, and that their name is as short as possible.
For this software, the logic is quite complex though, with classes referring to one another commonly in class methods, resulting in numerous circular imports, which rules out this approach.
I do not want to import modules in functions, as this is horrible for readability and code gets repeated everywhere.
It seems like my only option is to write imports in such a fashion:
import zoo.dog_classes.beagle_class
and later, when I need the class:
b = zoo.dog_classes.beagle_class.BeagleClass()
This is however extremely verbose and painful to write.
How should I deal with my imports?
import zoo.dog_classes.beagle_class as beagle
b = beagle.BeagleClass()
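The same aliasing works with any nested module; here it is with a standard-library package, purely to show the pattern is runnable as-is:

```python
# Alias the deeply nested module once at the top of the file,
# then use the short name everywhere below.
import json.decoder as dec

d = dec.JSONDecoder()
obj = d.decode('{"breed": "beagle"}')
```

Because the name binding happens at module level but attribute lookup happens at call time, this style also tolerates the circular-import situations the question describes better than `from ... import ClassName` does.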

Multiple classes in a Python module

I'm very new to Python (I'm coming from a Java background) and I'm wondering if anyone could help me with some of the Python standards. Is it normal or "proper" practice to put multiple classes in a module? I have been working with Django and started with the tutorials, and they place their database model classes in the same module. Is this something that is normally done, or should I stick with one class per module? Is there a reason I would do one over the other?
Hope I'm being clear and not too generic. Thanks to everyone in advance!
Here is a useful rule of thumb from what I have seen of typical Java projects:
The bottom-most package in Java should be a file in Python
What does that mean?
If your Java project was organized:
toplevel/
    subproject/
        Foo.java
        Bar.java
    subproject2/
        Baz.java
        Qux.java
Then your Python project should look like:
toplevel/
    subproject.py   <-- put classes Foo, Bar here
    subproject2.py  <-- put classes Baz, Qux here
Things to notice re: organization:
- Do not use inner classes. Just put classes in the same module.
- By convention, things that start with _ are "private".
- It's OK to have "public variables".
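A minimal sketch of those two conventions (the module and all names here are hypothetical, invented for illustration):

```python
# subproject.py -- several related classes in one module, Java-package style
class Foo:
    def __init__(self):
        self._cache = {}  # leading underscore: "private" by convention only

class Bar:
    wheels = 4  # a plain "public variable"; no getters/setters needed
```

Nothing stops a caller from touching `_cache`, but the underscore signals it is an implementation detail; `wheels` is simply read and written directly.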
Think of it this way: in Java, what you write is a class; in Python, you write a module, and a module can contain several classes.
Whenever you want to use a particular class, import the respective module first and then call the class to create objects.
Here's an example.
Classes.py (this is a module named 'Classes'):
class MyClass(object):
    def greet(self):
        print("Hello World")

class MyNextClass(object):
    def greetAgain(self):
        print("Hello again")
Now I can import this module from anywhere I wish:
import Classes

if __name__ == '__main__':
    a = Classes.MyClass()
    a.greet()
    b = Classes.MyNextClass()
    b.greetAgain()
When in doubt, just look at Python's standard libraries :)
For example, the standard calendar module contains 31 classes. So yes, it is ok.
It is absolutely proper to do so. A module groups related functionality. If that functionality is implemented in several classes (e.g., Tree, Node, Leaf) then it is appropriate to place them together.
A module is more closely associated with a Java package than a Java class. You can also implement a module as a folder, named for the module, with an __init__.py file inside (so Python can identify the module as such; the __init__.py may also optionally include initialization code and lists of classes, functions, and sub-packages to export.)
It is certainly a normal thing to do in Python. When and why you choose one over the other is partly a matter of taste, and partly convention.
If you're still getting to know Python, and therefore its conventions, reading the style guide is well worth your time.
