One class per file causes double names in Python

So I have a small project of my own in Python which I want to use regularly. It is, at the moment, composed of two classes, let's call them Aclass and Bclass.
The amount of code in the two classes combined is large enough that I do not want to keep them in one file, so I create one file for each: I put the first into aclass.py and the second into bclass.py, so my structure now looks something like this:
my_project/
    aclass.py
    bclass.py
    __init__.py
What I want is to be able to call my project from python and have the classes ready, like so:
import my_project
a_instance = my_project.Aclass()
b_instance = my_project.Bclass()
At the moment, what I have to do is
import my_project
a_instance = my_project.aclass.Aclass()
Which, if you ask me, is redundant. In the question:
Avoid double typing class names in python
I saw the advice to put
from .aclass import Aclass
from .bclass import Bclass
into the __init__.py file, but this causes confusion, because if I do that, I can create an instance of my class either by writing
import my_project
a_instance = my_project.Aclass()
or by writing
import my_project
a_instance = my_project.aclass.Aclass()
I find this solution ugly, because not only does it "double" the names (creating both aclass.Aclass() and Aclass()), but it also means that any instance of the class Aclass will now be, in Python's eyes,
<class my_project.aclass.Aclass at ******************>
instead of
<class my_project.Aclass at ******************>
which it is in the natural project structure.
Thus, my question is this:
Is there a solution to my problem in which Aclass will be located directly inside the my_project package without writing the class into the __init__ file?

No; you only have two choices, either:
Put everything in one file, and have only a single route to each object at the cost of an unwieldy codebase; or
Have multiple files and use the __init__.py to define what's available at the top level of your library, accepting that this will mean multiple routes to an object.
For any project beyond a handful of classes and functions, the latter is clearly preferable; it is widely used in the standard library and beyond. Take the json package, for example, whose __init__.py includes from .decoder import JSONDecoder:
>>> import json
>>> json.JSONDecoder
<class 'json.decoder.JSONDecoder'>
>>> json.JSONDecoder is json.decoder.JSONDecoder
True
In the overwhelming majority of cases, any user of your code will use the shorter name rather than the longer one but, as they point to the same object, it doesn't really matter if they mix and match, and you will rarely have any occasion to see what "Python thinks" the fully-qualified class name is.
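Concretely, for the layout in the question, the __init__.py approach is just two relative imports (a sketch using the question's own names; the __all__ line is optional):

# my_project/__init__.py
from .aclass import Aclass
from .bclass import Bclass

__all__ = ["Aclass", "Bclass"]  # controls what "from my_project import *" exposes

After that, import my_project followed by my_project.Aclass() works exactly as desired, while my_project.aclass.Aclass remains available as the longer route.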

Related

Using module as a singleton in Python - is that ok?

I've got a really complex singleton object. I've decided to modify it, so it'll be a separate module with module-wide global variables that store the data.
Are there some pitfalls of this approach? I just feel, like that's a little bit hacky, and that there may be some problems I cannot see now.
Maybe someone has done this or has an opinion :) Thanks in advance for any help.
# Minimal, Complete, and Verifiable example:
"""
This is __init__.py of the module, that could be used as a singleton:
I need to set and get value of IMPORTANT_VARIABLE from different places in my code.
Folder structure:
--singleton_module
|
-__init__.py
Example of usage:
import singleton_module as my_singleton
my_singleton.set_important_variable(3)
print(my_singleton.get_important_variable())
"""
IMPORTANT_VARIABLE = 0

def set_important_variable(value):
    global IMPORTANT_VARIABLE
    IMPORTANT_VARIABLE = value

def get_important_variable():
    return IMPORTANT_VARIABLE
Technically, Python modules ARE singletons, so from this point of view there's no particular issue (except the usual issues with singletons, that is) with your code. I'd just spell the variable in all_lower (ALL_UPPER denotes a pseudo-constant) and prefix it with either a single ("protected") or double ("really private") leading underscore to make clear it's not part of the public API (the standard Python naming convention).
Now whether singletons are a good idea is another debate but that's not the point here...
e.g. that in one potential situation I may lose data, or that the module could be imported in two different places in the code, so it would not be a singleton if imported inside the scope of a function or something like that.
A module is only instantiated once per process (the first time it's imported); subsequent imports will get it directly from sys.modules. The only case where you could have two distinct instances of the same module is when the module is imported via two different paths, which can only happen if you have a somewhat broken sys.path, i.e. something like this:
src/
    foo/
        __init__.py
        bar/
            __init__.py
            baaz/
                __init__.py
                mymodule.py
with both "src" and "foo" in sys.path, then importing mymodule once as from foo.bar.baaz import mymodule and a second time as from bar.baaz import mymodule
Needless to say, this is a degenerate case, but it can happen and lead to hard-to-diagnose bugs. Note that when you have this case, quite a few other things break too, like identity testing of anything from mymodule.
Also, I am not sure how would using object instead of module increase security
It doesn't.
And I am just asking whether that's a bad practice; maybe someone did this and found some problems. This is probably not a popular pattern.
Well, quite on the contrary, you'll often find advice to use modules as singletons instead of classes with only staticmethods, classmethods and class attributes (another way of implementing a singleton in Python). This most often concerns stateless classes used as namespaces, while your example does have state, but that doesn't make much practical difference.
Now what you won't get are all the nice OO features like computed attributes, inheritance, magic methods etc., but I assume you already understood this.
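For contrast, the class-based flavour alluded to above (a class holding only class attributes and classmethods) would look roughly like this; this is a sketch for comparison, not code from the question:

class Configuration:
    # the class attribute plays the role of the module-level global
    _important_variable = 0

    @classmethod
    def set_important_variable(cls, value):
        cls._important_variable = value

    @classmethod
    def get_important_variable(cls):
        return cls._important_variable

It behaves much like the module version (Configuration.set_important_variable(3)), just with an extra level of naming.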
As far as I'm concerned, depending on the context, I might rather use a plain class but only expose one single instance of the class as the module's API, i.e.:
# mymodule.py

__all__ = ["mysingleton"]

class __MySingletonLike(object):
    def __init__(self):
        self._variable = 42

    @property
    def variable(self):
        return self._variable

    @variable.setter
    def variable(self, value):
        check_value(value)  # imaginary validation
        self._variable = value

mysingleton = __MySingletonLike()
but that's only when I have special concerns about the class (implementation reuse, proper testability, other special features requiring a class etc).
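Client code then uses the exposed instance much like the original module-level functions; a small usage sketch (assuming the file above is importable as mymodule and that check_value is actually defined):

from mymodule import mysingleton

mysingleton.variable = 3      # goes through the validating setter
print(mysingleton.variable)   # 3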

Best way to import several classes

I have defined several classes in a single Python file. My wish is to create a library with these. I would ideally like to import the library in such a way that I can use the classes without a prefix (i.e. just myclass() as opposed to mylibrary.myclass()), if that's what you can call it; I am not entirely sure, as I am a beginner.
What is the proper way to achieve this, or otherwise the best result? Define all the classes in __init__.py? Define them all in a single file as I currently have, like AllMyClasses.py? Or should I have a separate file for every class in the library directory, like FirstClass.py, SecondClass.py, etc.?
I realize this is a question that should be easy enough to google, but since I am still quite new to Python and programming in general, I haven't quite figured out what the correct keywords are for a problem in this context (such as my uncertainty about "prefix").
More information can be found in the tutorial on modules (single files) or packages (when in a directory with an __init__.py file) on the python site.
The suggested way (according to the style guide) is to spell out each class import specifically.
from my_module import MyClass1, MyClass2
object1 = MyClass1()
object2 = MyClass2()
While you can also shorten the module name:
import my_module as mo
object = mo.MyClass1()
Using from my_module import * is best avoided, as it can be confusing (even if it is the recommended way for some things, like tkinter).
If it's for your personal use, you can just put all your classes Class1, Class2, ... in a file myFile.py and, to use them, call import myFile (without the .py extension)
import myFile
myVar1 = myFile.Class1()
myVar2 = myFile.Class2()
from within another script. If you want to be able to use the classes without the file name prefix, import the file like this:
from myFile import *
Note that the file you want to import should be in a directory where Python can find it (the same where the script is running or a directory in PYTHONPATH).
The __init__.py is needed if you want to create a Python package for distribution. Here are the instructions: Distributing Python Modules
EDIT after checking the Python's style guide PEP 8 on imports:
Wildcard imports (from <module> import *) should be avoided, as they make it unclear which names are present in the namespace, confusing both readers and many automated tools.
So in this example you should have used
from myFile import Class1, Class2
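If the collection of classes later grows beyond a single file, the same idea scales to a package whose __init__.py re-exports the classes, so users still get the short names. A sketch with invented names (the answers above only show the single-module case):

mylibrary/
    __init__.py        # contains: from .first_class import FirstClass
                       #           from .second_class import SecondClass
    first_class.py
    second_class.py

# in another script
from mylibrary import FirstClass, SecondClass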

Should Python class filenames also be camelCased?

I know that classes in Python are typically cased using camelCase.
Is it also the normal convention to have the file that contains the class also be camelCase'd especially if the file only contains the class?
For example, should class className also be stored in className.py instead of class_name.py?
The following answer is largely sourced from this answer.
If you're going to follow PEP 8, you should stick to all-lowercase names, with optional underscores.
To quote PEP 8's naming conventions for packages & modules:
Modules should have short, all-lowercase names. Underscores can be used in the module name if it improves readability.
And for classes:
Class names should normally use the CapWords convention.
See this answer for the difference between a module, class and package:
A Python module is simply a Python source file, which can expose classes, functions and global variables.
The official convention is to use all lower case for file names (as others have already stated). The reason, however, has not been mentioned...
Python works cross-platform (and it is common to use it in that manner), but file systems vary in how they handle case, so it is better to just avoid mixed case entirely. On Linux, for instance, it is possible to have MyClass.py and myclass.py in the same directory. That is not so on Windows!
On a related note, if you have MyClass.py and myclass.py in a git repo, or even just change the casing of the same file, git can act funky when you push/pull between Linux and Windows.
And, while barely on topic but in the same vein, SQL has these same issues, where different standards and configurations may or may not allow upper case in table names.
I, personally, find it more pleasant to read TitleCasing / camelCasing even on filenames, but when you do anything that can work cross platform it's safest not to.
There is a difference between the naming convention for the class name and for the file that contains this class. This misunderstanding might come from languages like Java, where it is common to have one file per class.
In Python you can have several classes per module (a simple .py file). The classes in this module/file should be named according to the class naming convention: class names should normally use the CapWords convention.
The file containing these classes should follow the module naming convention: modules should have short, all-lowercase names; underscores can be used in the module name if it improves readability.
=> The class CamelCase should go in the file camelcase.py (or camel_case.py if necessary)
My question is, is it also the normal convention to have the file that
contains the class also be camelCase'd especially if the file only
contains the class
Short answer: No.
Longer answer: file names should be all lowercase, with underscores as needed.
From PEP8 "Package and Module Names":
Modules should have short, all-lowercase names. Underscores can be
used in the module name if it improves readability. Python packages
should also have short, all-lowercase names, although the use of
underscores is discouraged.
If you're unclear what a module is:
A module is a file containing Python definitions and statements. The
file name is the module name with the suffix .py appended.
First of all, as mentioned above, class names should be CapWords, e.g.:
class SampleClass:
    ...
BEWARE: Having the same name for the file (module) and the class creates confusion.
Example 1: Say you have the following module structure:
src/
    __init__.py
    SampleClass.py
    main.py
Your SampleClass.py is:
class SampleClass:
    ...
Your main.py is:
from src import SampleClass
instance = SampleClass()
Will this code work? No, because you should have done either from src.SampleClass import SampleClass or instance = SampleClass.SampleClass(). Awkward code, isn't it?
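Spelled out, the two working variants from that sentence are (with src/__init__.py left empty):

# variant 1: import the class from the module
from src.SampleClass import SampleClass
instance = SampleClass()

# variant 2: import the module and qualify the class name
from src import SampleClass
instance = SampleClass.SampleClass()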
You can also fix it by adding the following content to __init__.py:
from .SampleClass import SampleClass
Which leads to the Example 2.
Example 2: Say you develop a module:
src/
    __init__.py
    BaseClass.py
    ConcreteClass.py
    main.py
BaseClass.py content:
class BaseClass:
    ...
ConcreteClass.py content:
from src import BaseClass
class ConcreteClass(BaseClass):
    ...
And your __init__.py content:
from .ConcreteClass import ConcreteClass
from .BaseClass import BaseClass
And main.py content is:
from src import ConcreteClass
instance = ConcreteClass()
The code fails with an error:
class ConcreteClass(BaseClass):
TypeError: module() takes at most 2 arguments (3 given)
It took me a while to understand the error and why I cannot inherit from the class, because in the previous example, when I added the exports to the __init__.py file, everything worked. Using snake_case file names does not fix the problem, but the error is a bit easier to understand:
ImportError: cannot import name 'BaseClass' from partially initialized module 'src'
To fix the code you need to fix the import in ConcreteClass.py to be: from .BaseClass import BaseClass.
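With that change, ConcreteClass.py simply becomes:

# ConcreteClass.py
from .BaseClass import BaseClass

class ConcreteClass(BaseClass):
    ...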
Last caveat: if in the original code you swap the order of the imports in __init__.py so it looks like:
from .BaseClass import BaseClass
from .ConcreteClass import ConcreteClass
the initial code works, but you really don't want anyone to write code that depends on the order of imports. If someone changes the order or applies the isort tool to organize the imports, good luck fixing those bugs.

Circular & nested imports in python

I'm having some real headaches right now trying to figure out how to import stuff properly. I had my application structured like so:
main.py
util_functions.py
widgets/
- __init__.py
- chooser.py
- controller.py
I would always run my applications from the root directory, so most of my imports would be something like this
from util_functions import *
from widgets.chooser import *
from widgets.controller import *
# ...
And my widgets/__init__.py was setup like this:
from widgets.chooser import Chooser
from widgets.controller import MainPanel, Switch, Lever
__all__ = [
    'Chooser', 'MainPanel', 'Switch', 'Lever',
]
It was all working fine, except that widgets/controller.py was getting kind of lengthy, and I wanted to split it up into multiple files:
main.py
util_functions.py
widgets/
- __init__.py
- chooser.py
- controller/
- __init__.py
- mainpanel.py
- switch.py
- lever.py
One of the issues is that the Switch and Lever classes have static members where each class needs to access the other one. Using imports with the from ___ import ___ syntax created circular imports. So when I tried to run my refactored application, everything broke at the imports.
My question is this: How can I fix my imports so I can have this nice project structure? I cannot remove the static dependencies of Switch and Lever on each other.
This is covered in the official Python FAQ under "How can I have modules that mutually import each other?".
As the FAQ makes clear, there's no silver bullet that magically fixes the problem. The options described in the FAQ (with a little more detail than is in the FAQ) are:
Never put anything at the top level except classes, functions, and variables initialized with constants or builtins, never from spam import anything, and then the circular import problems usually don't arise. Clean and simple, but there are cases where you can't follow those rules.
Refactor the modules to move the imports into the middle of the module, where each module defines the things that need to be exported before importing the other module. This can mean splitting classes into two parts, an "interface" class that can go above the line, and an "implementation" subclass that goes below the line.
Refactor the modules in a similar way, but move the "export" code (with the "interface" classes) into a separate module, instead of moving them above the imports. Then each implementation module can import all of the interface modules. This has the same effect as the previous one, with the advantage that your code is idiomatic, and more readable by both humans and automated tools that expect imports at the top of a module, but the disadvantage that you have more modules.
As the FAQ notes, "These solutions are not mutually exclusive." In particular, you can try to move as much top-level code as possible into function bodies, replace as many from spam import … statements with import spam as is reasonable… and then, if you still have circular dependencies, resolve them by refactoring into import-free export code above the line or in a separate module.
With the generalities out of the way, let's look at your specific problem.
Your switch.Switch and lever.Lever classes have "static members where each class needs to access the other one". I assume by this you mean they have class attributes that are initialized using class attributes or class or static methods from the other class?
Following the first solution, you could change things so that these values are initialized after import time. Let's assume your code looked like this:
class Lever:
    switch_stuff = Switch.do_stuff()
    # ...
You could change that to:
class Lever:
    @classmethod
    def init_class(cls):
        cls.switch_stuff = Switch.do_stuff()
Now, in the __init__.py, right after this:
from lever import Lever
from switch import Switch
… you add:
Lever.init_class()
Switch.init_class()
That's the trick: you're resolving the ambiguous initialization order by making the initialization explicit, and picking an explicit order.
Alternatively, following the second or third solution, you could split Lever up into Lever and LeverImpl. Then you do this (whether as separate lever.py and leverimpl.py files, or as one file with the imports in the middle):
class Lever:
    @classmethod
    def get_switch_stuff(cls):
        return cls.switch_stuff

from switch import Switch

class LeverImpl(Lever):
    switch_stuff = Switch.do_stuff()
Now you don't need any kind of init_class method. Of course you do need to change the attribute to a method, but if you don't like that, with a bit of work you can always change it into a "class property" (either by writing a custom descriptor, or by using @property in a metaclass).
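For illustration, a minimal sketch of the metaclass variant mentioned above; the LeverMeta name is invented here, while switch and Switch.do_stuff come from the example:

class LeverMeta(type):
    @property
    def switch_stuff(cls):
        # imported on first access, after both modules have finished loading
        from switch import Switch
        return Switch.do_stuff()

class Lever(metaclass=LeverMeta):
    pass

# Lever.switch_stuff now reads like a class attribute but is computed on access
# (note: the property is visible on the class itself, not on Lever instances).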
Note that you don't actually need to fix both classes to resolve the circularity, just one. In theory, it's cleaner to fix both, but in practice, if the fixes are ugly, it may be better to just fix the one that's less ugly to fix and leave the dependency in the opposite direction alone.

Python namespace elements visibility (vs proper package structure)

Python3
Tried to find an answer but failed. First I'll present the snippet, then I'll explain why I wanted to do it this way and what I wanted to achieve. Maybe it'll look like this approach is "the bad one".
Hence this semi-double topic, because first I'd like to know why this snippet isn't working, and second I'd like to know if this approach is right.
So:
class Namespace:
    def some_function():
        pass

    class SomeClass:
        fcnt = some_function
This won't work due to:
NameError: name 'some_function' is not defined
What I want to achieve is code and file structure readability.
The above example is a snippet like the one I use (not this exact one, but it looks like this) in a Pyramid project.
My project tree looks like this:
my_project
├── models
│   ├── __init__.py
│   └── some_model.py
├── schemas
│   ├── __init__.py
│   ├── some_schema.py
│   └── some_other_schema.py
...
├── views
│   ├── __init__.py
│   └── some_view.py
└── __init__.py
What I wanted to achieve is clean schema/model/view importing.
In some_schema.py file resides class SomeSchema, in some_other_schema.py class SomeOtherSchema.
With the above snippet I can write:
from my_project.schemas.some_schema import Schema
and use it like Schema.SomeSchema()
I've got a little bit lost with packages and imports. How could one make a clean structure (one schema per file) and still be able to use the Schema namespace? (In C++ I'd just put each of those classes in a Schema namespace; that's why I did this in the snippet above. But! What works in C++ maybe shouldn't be used in Python, right?)
Thanks for answer in advance.
EDIT:
Ok, I've done some testing (I thought I had already done it, but apparently not...).
Using from my_project.schemas.some_schema import Schema together with from my_project.schemas.some_other_schema import Schema causes the second import to shadow the first one. So if after the first import I was able to use x = Schema.SomeSchema(), then after the second import I'd be unable to do this, because the class Schema gets overridden. Right, so as Erik said - classes aren't namespaces. GOT IT!
In my very first snippet, yes, I should've used fcnt = Namespace.some_function. What's weird - it works. I have the same statement in my Pyramid code, with one difference: some_function has the @colander.deferred decorator. In fact it looks like this:
class Schema:
    @colander.deferred
    def deferred_some_function(node, kw):
        something = kw.get("something", [])
        return deform.widget.SelectWidget(values=something,
                                          multiple=True)

    class SomeSchema(colander.MappingSchema):
        somethings = colander.SchemaNode(colander.Set(),
                                         widget=Schema.deferred_some_function)
And I get NameError: name 'Schema' is not defined
Getting back to package format. With this:
### another/file.py
from foo.bar.schema import SomeSchema
# do something with SomeSchema:
smth = SomeSchema()
smth.fcnt()
I have to make one module foo/bar/schema.py into which I'd have to put all my SomeXSchema classes. And if I have lots of them, then there's the unreadability glitch which I wanted to get rid of by splitting SomeXSchema - one per file. Can I accomplish this somehow? I want to call this class, for example, User. And here's the THING. Maybe I do it wrong? I'd like to have a class named User in the schema namespace and a class named User in the model namespace. Shouldn't I? Maybe I ought to use a prefix? Like class SchemaUser and class ModelUser? I wanted to avoid that by the use of modules/packages.
If I used import foo.bar.schema then I'd have to use it like x = foo.bar.schema.User(), right? There is no way to use it like x = schema.User()? Sorry, I just got stuck, my brain got fixed. Maybe I need a little break to take a fresh look?
ANOTHER EDIT (FOR POINT 3 ONLY)
I did some more research. The answer here would be to make it like this:
## file: myproject/schemas/__init__.py
from .some_schema import SomeSchema
from .some_other_schema import SomeOtherSchema
then usage would be like this:
## some file using it
import myproject.schemas as schema
s1 = schema.SomeSchema()
s2 = schema.SomeOtherSchema()
Would it be lege artis?
If anyone thinks that topic should be changed - go ahead, give me something more meaningful, I'd appreciate it.
You are swimming upstream by trying to do what you are trying to do.
Classes are meant for defining new data types, not as a means to group related parts of code together. Modules are perfectly suited for that, and I presume you know that well because of the "(vs proper package structure)" part in the question title.
Modules can also be imported as objects, so to achieve what you want:
### foo/bar/schema.py
def some_function():
    pass

class SomeSchema:
    fcnt = some_function
### another/file.py
from foo.bar import schema
# do something with SomeSchema:
smth = schema.SomeSchema()
smth.fcnt()
...although it's also typical to import classes directly into the scope like this (i.e. being able to refer to SomeSchema after the import as opposed to schema.SomeSchema):
### another/file.py
from foo.bar.schema import SomeSchema
# do something with SomeSchema:
smth = SomeSchema()
smth.fcnt()
(Also note that module names should be lowercase, as suggested by PEP 8, and only class names should use PascalCase.)
This, by the way, applies to programming in general, not just Python. There are a few languages such as Java and C# which require that functions be declared inside of classes as statics because they disallow writing of code outside of classes for some weird reason, but even these languages have modules/proper namespaces for structuring your code; i.e. classes are not normally put inside other classes (they sometimes are, but for wholly different reasons/goals than yours).
So basically "class" means a "type" or "a set of objects having similar behavior"; once you ignore that principle/definition, you're writing bad code by definition.
PS. if you are using Python 2.x, you should be inheriting your classes from object so as to get new-style classes.
PPS. in any case, even technically speaking, what you are trying to do won't work cleanly in Python:
class fake_namespace:
    def some_function():
        pass

    class RealClass:
        some_function  # <-- that name is not even visible here;
                       #     you'd have to use fake_namespace.some_function instead
...and this is the reason for the exception reported in the question: NameError: name 'some_function' is not defined.
EDIT AS PER YOUR EDITS:
I'm not really sure why you're making it so complicated; also some of your statements are false:
If I'd use import foo.bar.schema then I'd have to use it like x = foo.bar.schema.User(), right?
No. Please learn how Python modules work.
I'd like to have class named User in Schema namespace and class named User in Model namespace. Shouldn't I? Maybe I ought to use prefix? Like class SchemaUser and class ModelUser
Please note that namespaces, a.k.a. modules, should be lowercase, not PascalCase.
And if I have lots of them, then there's the unreadability glitch which I wanted to get rid of by splitting SomeXSchema - one per file. Can I accomplish this somehow?
Yes; you can put your classes in individual submodules, e.g. schema/class1.py, schema/class2.py, etc.; then you can "collect" them in schema/__init__.py so that you can import them directly from schema:
# schema/__init__.py
from .class1 import Class1
from .class2 import Class2
__all__ = ["Class1", "Class2"]  # optional
General note: you can name your schema modules differently, e.g. schema1, schema2, etc; then you could just use them like this:
from somewhere import schema1
from somewhere_else import schema2
s1_user = schema1.User()
s2_user = schema2.User()
# etc
For more information on how Python modules work, refer to http://docs.python.org/2/tutorial/modules.html
Naming and binding
You can read the Python reference on naming and binding to understand how Python namespaces work.
A scope defines the visibility of a name within a block. If a local variable is defined in a block, its scope includes that block. If the definition occurs in a function block, the scope extends to any blocks contained within the defining one, unless a contained block introduces a different binding for the name. The scope of names defined in a class block is limited to the class block; it does not extend to the code blocks of methods; this includes generator expressions, since they are implemented using a function scope.
BTW, using globals() and locals() can help debug variable binding.
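For example, printing locals() inside the two class blocks of the original snippet makes the limited class-block scope visible (a small debugging sketch based on the question's code):

class Namespace:
    def some_function():
        pass

    print(sorted(locals()))       # includes 'some_function'

    class SomeClass:
        print(sorted(locals()))   # no 'some_function' here, hence the NameError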
The User Problem
You can try this instead:
from model import User as modelUser
from foo.bar.schema import User as schemaUser
