I am creating my first rather complex module, and like the wx module, I wish to provide a variety of constants to ease the programmer's task of setting styles.
Assuming the following directory structure:
Module/
+- __init__.py
+- Frame.py
and the contents of __init__.py:
__all__ = ["Frame"]
FRAME_DEFAULT_SIZE = (640, 480)
FRAME_DEFAULT_TITLE = "Some Simple Title"
then in Frame.py:
class Frame(object):
    """Some docstrings go here."""

    def __init__(self, parent, ID=-1,
                 title=HOW DO I REFERENCE MY CONSTANT,
                 size=HOW DO I REFERENCE THIS CONSTANT):
        # etc ...
If I were to import my module for use in a piece of program code, the constants would be available as Module.FRAME_DEFAULT_SIZE and Module.FRAME_DEFAULT_TITLE respectively.
But how do I reference them in a class definition that is supposed to be in the module's namespace, when they are defined in a separate file?
I realize that constants (if named well and used well) should apply only to a parent class and its children, so I could define them in the Frame.py file instead. But how does one do it that way?
Any help appreciated here.
Import Module and access the constants off that. That part doesn't change just because you're in a submodule.
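For example, a minimal sketch of Frame.py, assuming the package really is importable as Module; the relative import shown in the comment works too and avoids hard-coding the package name:

import Module   # or: from . import FRAME_DEFAULT_SIZE, FRAME_DEFAULT_TITLE

class Frame(object):
    """A frame that falls back to the package-level defaults."""

    def __init__(self, parent, ID=-1,
                 title=Module.FRAME_DEFAULT_TITLE,
                 size=Module.FRAME_DEFAULT_SIZE):
        self.parent = parent
        self.ID = ID
        self.title = title
        self.size = size

The default values are evaluated when the def statement runs, i.e. when Frame.py is first imported, and by that point the constants at the top of __init__.py already exist.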
I have a Vehicle class. I want to rename it to Rover. I know we can create another reference variable, but I don't want the name of the class to be the same anymore.
Played with __name__, but no success.
class Vehicle:
    pass

st = "Rover"
Vehicle.__name__ = st
obj1 = Rover()   # NameError: name 'Rover' is not defined
Expected:
Vehicle class renamed to Rover without file handling and within the program.
In the comments, you have a (very slightly mangled) quote from https://docs.python.org/3/tutorial/classes.html:
This provides semantic for importing and renaming
Here's the actual quote, with a bit more context:
As in Smalltalk, classes themselves are objects. This provides semantics for importing and renaming.
What this is talking about is the fact that we can have a Python library with a class in the library:
# lib_k.py
class Klass:
    ... definitions ...
Then, in some other Python module, we can write:
from lib_k import Klass as LibKClass
which is mainly useful if we're also going to do:
from lib_l import Klass as LibLClass
and then write code like:
def f(args):
    obj_k = LibKClass(...)
    obj_l = LibLClass(...)
I personally prefer to write:
import lib_k
import lib_l

def f(args):
    obj_k = lib_k.Klass(...)
    obj_l = lib_l.Klass(...)
but both ways are allowed, and which to use is something of a matter of taste, rather than correctness.
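The same fact answers the original Vehicle/Rover question: a class is just an object bound to a name, so you can rebind it to a new name within the same program, without touching any files. A minimal sketch:

class Vehicle:
    pass

Rover = Vehicle            # bind the class object to the new name
Rover.__name__ = "Rover"   # make the class report the new name as well
del Vehicle                # optionally drop the old binding

obj1 = Rover()
print(type(obj1).__name__)   # Rover

After the del, the name Vehicle no longer exists, which is what the question asked for.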
I'm new to Python; previously I've been using languages like Swift, where importing is not a big deal: you just define a new class and can access it from another part of your program.
I can't work that way in Python, because import behaves differently here: you can't have cyclic imports, where two files import each other. I understand that I'm facing this problem because I'm using the language in the wrong way, but I don't understand how to avoid it.
I mean, in most cases you can solve this problem by combining the two classes into a single file, but that doesn't feel right. I've also seen advice like "move your import statement to the end of the file", but that doesn't feel like a good idea either.
I would like to understand Python's philosophy, if you will. How should I organize my project, and what should guide me when deciding whether to put a class in a separate file?
You certainly can import child from parent and parent from child. The key to making this work is for child not to probe too deeply into parent from its module-level code, because the parent module is only partly loaded at the time that the Python runs the module-level code in child.
Here's what happens when you import child from parent and parent from child (assuming parent is loaded first):
Code at the module level of parent runs until it reaches a statement that loads child (either import child or from child import something). By "module level", I mean the statements that aren't within a class or function definition. The classes and functions defined at the module level will also be created as objects within the module. However, the functions and class methods themselves will not be run yet.
When Python gets to the import child statement (or equivalent) in parent's module-level code, it will stop running the parent code and begin running the module-level code in child. If child imports parent via import parent or from parent import something, it will get the parent module in its current, partially constructed state. So module-level code in child cannot access objects that are defined below import child in parent.py.
Once child's module-level code finishes running, control will return to parent below the import child statement, and Python will finish running all of the module-level code in parent.
This process will give you trouble if child's module-level code tries to access objects that are defined in parent after the import child statement (since parent is only partially finished when child loads). The solution to this is to import parent at the module level of child but defer accessing objects within parent from child until after child and parent have finished loading. In particular, instead of using from parent import something in child's module-level code, you may need to use import parent, then access parent.something from inside function or method code in child. This is safe to do because those functions and methods won't be run until after child and parent finish running, at which point all the elements of the parent module are correctly defined.
Here's an example of what I mean, using the setup you described in a comment. If you give more information on the code that is giving you problems, I could tailor this more closely.
Version 1 (won't work)
__main__.py:
from user import User
u = User()
user.py:
from data_manager import DataManager

...

class User:
    def __init__(self, data_manager=None):
        if data_manager is None:
            data_manager = DataManager(user=self)
        self.data_manager = data_manager
data_manager.py:
# next line will fail because user.py has been partly loaded
# and user.User doesn't exist yet
from user import User

...

class DataManager:
    def __init__(self, user=None):
        ...
        if user is None:
            user = User(data_manager=self)
        self.user = user
Version 2 (will work)
__main__.py:
from user import User
u = User()
user.py:
import data_manager as dm

...

class User:
    def __init__(self, data_manager=None):
        if data_manager is None:
            data_manager = dm.DataManager(user=self)
        self.data_manager = data_manager
data_manager.py:
import user as user_module

...

# this defines a class which will eventually create an instance
# of user.User, but it won't try to do that until control returns
# to __main__.py, by which point the `user` and `data_manager`
# modules are fully defined
class DataManager:
    def __init__(self, user=None):
        ...
        if user is None:
            user = user_module.User(data_manager=self)
        self.user = user
Note that references in your classes' __init__ methods don't get resolved until the class is actually instantiated. i.e., the user = user_module.User(data_manager=self) line does something like this: "look in the current module for an object called user_module; look in that object for a User attribute; construct an object of that class". The important thing is that data_manager can safely import the user module early on (the module exists already, even though it is only partly constructed), but the code above won't actually look for anything inside the user module until a DataManager object is instantiated, by which time user.User will be properly defined.
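If it helps to see that timing rule in isolation, here is a tiny self-contained illustration using a throwaway module object (the names partial, Thing and make_thing are made up for the demo):

import types

partial = types.ModuleType("partial")   # stands in for a half-loaded module

def make_thing():
    # this attribute lookup happens only when make_thing() is called,
    # not when the function is defined
    return partial.Thing()

partial.Thing = dict        # "finish loading" the module afterwards
print(make_thing())         # {} -- the late lookup succeeds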
I can't work that way in Python, because import behaves differently here: you can't have cyclic imports, where two files import each other.
Yes you can!
Say we have two files in the same working directory:
#file1.py
import file2
x = 22
y = 'spam'
print(x)
print(y)
and
#file2.py
import file1
print("file1")
Notice what happens when we import file1:
>>> import file1
file1
22
spam
file1.py imports file2.py and file2.py imports file1.py.
This problem happens because you're writing a Python program the way you would write a Swift or Java program. That approach never works well: every language is different and has different best practices. If you write unpythonic code, not only does it look ugly and undecipherable to other Python developers, you're also struggling with the language instead of enjoying it.
Just structure your code the way Python developers structure it. Group related classes in one module. In the rare case where an import cycle is genuinely hard to avoid (it almost never happens), import the offending dependency inside the function or method instead of at global scope, as sketched below.
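A minimal sketch of that last technique, with made-up module names a.py and b.py; only one side needs to defer its import for the cycle to disappear:

# a.py
NAME = "a"

def describe():
    import b            # deferred: runs at call time, not at load time
    return "a sees " + b.NAME

# b.py
import a                # safe: a.py no longer imports b at module level

NAME = "b"

def describe():
    return "b sees " + a.NAME

Either module can now be imported first, and both describe() functions work once loading has finished.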
I'm creating a digital version of the "triangle peg game" (the one commonly found at Cracker Barrel) using Python 3.6.0. I'm including a module called "stats" that allows the game to analyze all possible playthroughs, so that when the player presses the help button, it will give him the best possible move.
While this project is primarily an application, I want to have the source code available too, so that others can read it or use it if they want. To that end (and for my use in creating the game), I've included two graphics classes. The first, using turtle, will be used when someone actually plays the game, and the second, using ASCII to increase speed and efficiency, can be used for stats testing. I've created a subpackage called "graphics", and each class is in its own module inside "graphics". I decided, after much thought, to put the base class in the __init__ module. Also, since I want its __all__ variable to include everything useful to someone using the source code, I put the name of the base class in it.
Here's the __init__ module:
"""Initializes the 'graphics' subpackage."""
__all__ = ["Graphics", "ascii_", "turtle_"]
class Graphics:
"""Base class for all graphics classes.
instance attributes
-------------------
'_game' -- the instance of 'game.Game' that the object is assigned to.
This attribute is not assigned until the game is begun.
"""
def _construct(self):
"""Constructs the graphics for the game."""
pass
def _update(self):
"""Updates the graphics when a move is made."""
pass
# Move back to ASCIIGraphics if not necessary for other graphics
# classes.
def _erase(self):
"""Updates the graphics when a move is undone."""
pass
def _reset(self):
"""Resets the graphics when the game is restarted."""
pass
Should a package's __all__ variable exclusively contain module names, or can it contain the names of base classes/package-level globals defined in __init__ as well?
P.S.: I'm very much aware that "from [package|module] import *" is frowned upon by many, and I've already looked into whether base classes should be defined in __init__. That being said, this is the first time I've ever made a package in Python, and if you wish to give me advice as well as answer my question, please feel free to do so.
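For what it's worth, a quick way to see what that __all__ actually exposes is a throwaway script using the star import (assuming the graphics subpackage above is importable from wherever the script runs):

# demo sketch
from graphics import *

g = Graphics()          # bound because Graphics is defined in __init__ and listed in __all__
print(ascii_, turtle_)  # imported as submodules because their names appear in __all__ too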
I have a class-factory F that generates classes. It takes no arguments other than a name. I'd like to be able to wrap this method and use it like this:
from myproject.myfactory.virtualmodule import Foo
"myfactory" is a real module in the project, but I want virtualmodule to be something that pretends to be a module.
Whenever I import something from virtualmodule I want it to build a new class using my factory method and make it look as if that was imported.
Can this be done? Is there a pattern that will allow me to wrap a class-factory as a module?
Thanks!
--
UPDATE0: Why do I need this? It's to test a process that will be run on a grid, which requires that all classes be importable.
An instance of the auto-generated class will be serialized on my PC and then deserialized on each of the grid nodes. If the class cannot be imported on the grid node, deserialization will fail.
If I hijack the import mechanism as an interface for making my test-classes, then I know for sure that anything I can import on my PC can be re-created exactly the same on the grid. That will satisfy my test's requirements.
You can stuff an arbitrary object into the sys.modules structure:
import sys

class VirtualModule(object):
    def __init__(self, name):
        self.__name__ = name.rsplit('.', 1)[-1]
        self.__package__ = name
        self.__loader__ = None

    def __getattr__(self, name):
        if name is valid:   # pseudocode: substitute your own check for valid names
            # Return dynamic classes here
            return VirtualClass(name)
        raise AttributeError(name)

virtual_module_name = 'myproject.myfactory.virtualmodule'
sys.modules[virtual_module_name] = VirtualModule(virtual_module_name)
The Python import machinery will look up objects using attribute access, triggering the __getattr__ method on your VirtualModule instance.
Do this in the myproject/__init__.py or myproject/myfactory/__init__.py file and you are all set to go. The myproject.myfactory package does need to exist for the import machinery to find the myproject.myfactory.virtualmodule object.
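As a quick hedged sanity check that the hook fires (Foo is just an example attribute name, and VirtualClass stands for whatever your factory returns):

# run after myproject/myfactory/__init__.py has installed the fake module
from myproject.myfactory.virtualmodule import Foo   # ends up calling __getattr__('Foo')

obj = Foo()
print(type(obj))

For the pickling use case in the update, the generated classes also need their __module__ (and, on Python 3, __qualname__) pointing back at 'myproject.myfactory.virtualmodule', and the grid nodes must install the same VirtualModule before unpickling; otherwise pickle cannot find the class by name.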
I'm working on a project in Python, and I'm trying to follow a somewhat-strict MVC pattern for practice. My idea was to separate things out into packages named model, view and controller, plus have a Model, View and Controller class for each. Each package would have the supporting files for each piece.
My problem is that I want to be able to use them like:
from controller import Controller
And then in the file with the Controller class I can:
from controller.someclass import SomeClass
But if I put them in packages with the same name, I get problems. I read up on how modules work and realized I needed to name the files controller/__init__.py, model/__init__.py and view/__init__.py, but it seems weird to put the classes in those files, and it's kind of annoying that they all show up in gedit as __init__.py.
Is there any better way of doing this? Am I going about this the right way?
I've seen some black magic in the Django source that pulls classes from a base.py file into the __init__.py namespace. However, I'm not sure how that's done. (See the comments for how to do this.)
From what I do know, you can do one of two things.
A -
inside bar/controller/__init__.py
import os, sys

# Make sure the interpreter knows where your files are.
sys.path.append(os.path.join(os.path.dirname(__file__), '../'))

from bar.model import Model
from bar.view import View

class Controller(object):
    model = Model()
    view = View()
And now you make bar/model/__init__.py and bar/view/__init__.py
B -
inside bar/controller/__init__.py
class Model(object):
    pass

class View(object):
    pass

class Controller(object):
    model = Model()
    view = View()
Edit:...
After reading your comment, a third option comes to mind. A package doesn't literally translate into a module in Python. I think your desired result is to create a directory structure like this:
bar/
    __init__.py
    controller.py
    model.py
    view.py
Then inside controller.py
from bar.model import Model
from bar.view import View

class Controller(object):
    model = Model()
    view = View()
This was a huge hurdle for me coming from Java. Your class file names do not have to match the class name. Think of it as stepping down: you step into the folder (the package), then into the file (the .py module), and then you import your class (Model).
If I understand correctly, all you're interested in doing here is having this happen:
from controller import Controller
without having the Controller class defined in controller/__init__.py, is that right?
If so, then just do this:
In controller/base.py (the file can be called base.py or anything else):
class Controller(BaseClass):
    # define Controller here
In controller/__init__.py:
from .base import Controller
Now you can have the exact syntax you are looking for.
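With that in place, both spellings resolve to the same class; a quick usage sketch, keeping the package name from the question:

from controller import Controller        # via the re-export in controller/__init__.py
from controller.base import Controller   # the defining module is still reachable directly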