I am writing a program in Python that contains many constants. I would like to keep them all in one file, like a C .h file full of #define directives. I tried configparser, but I didn't find it easy or fun to use.
Do you know a better way?
Python does not allow constant declarations like C or C++.
Normally in Python, constants are written in all caps (per PEP 8), which tells the reader the value is meant to be a constant.
Ex. MY_CONSTANT = "Whatever"
Another valid way of doing it, which I don't use but have heard of, is using a function:
def MY_CONSTANT():
    return "Whatever"
Now in theory, calling MY_CONSTANT() acts just like a constant.
EDIT
As the comments say, someone can still change the value by writing
MY_CONSTANT = lambda: 'Something else'
but don't forget the same person could write MY_CONSTANT = "Something else" in the first example and change the initial value just as easily. In both cases it's unlikely, but possible.
Constants (in a sense) in Python 3.8+
Python 3.8 introduces the typing.Final type qualifier, which is used to indicate that a variable or attribute should not be reassigned, redefined, or overridden.
PEP 591 -- Adding a final qualifier to typing
from typing import Final

# Annotate module variables
# (with or without an explicit type, using the syntax Final[<type>];
# the type is auto-determined in the absence of an explicit one)
PI: Final[float] = 3.141592654
ANSWER_TO_EVERYTHING: Final = 42

# Annotate instance variables in class bodies
# (an explicit type is needed if no value is assigned)
class Point:
    x: Final[int]
    y: Final = 0

    def __init__(self, x: int):
        self.x = x

# Annotate instance variables directly
# (only allowed in __init__ methods)
class Person:
    def __init__(self, birth_year: int):
        self.birth_year: Final = birth_year
Linters and type checkers will show you warnings if you reassign or redefine Final variables. Note that there is no runtime check, so you can still run the code below.
ANSWER_TO_EVERYTHING: Final = 42
ANSWER_TO_EVERYTHING = 420 # shows warning
print(ANSWER_TO_EVERYTHING) # prints 420
There is also the typing.final decorator, which is used to restrict inheriting classes and overriding methods.
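A minimal sketch of @final (the class and method names here are illustrative, not from the question):

from typing import final

@final
class Config:  # type checkers flag any attempt to subclass Config
    pass

class Base:
    @final
    def get_id(self) -> int:  # type checkers flag any override in a subclass
        return 42

As with Final, this is enforced by type checkers only, not at runtime.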
There are no constants in Python the way they exist in C or Java. You can imitate them with functions:
def FOO():
    return "foo"
You can wrap the function call in a property, and thus make it look like a variable:
class Const:
    @property
    def FOO(self):
        return "foo"

CONST = Const()  # you need an instance
if something == CONST.FOO:
    ...
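Assigning through the property then fails at runtime (the exact message varies by Python version):

CONST.FOO = "bar"  # AttributeError: can't set attribute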
With a bit of meta stuff, one can get unsettable attributes with a terse syntax:
def const(cls):
    # Replace a class's attributes with properties,
    # and itself with an instance of its doppelganger.
    is_special = lambda name: (name.startswith("__") and name.endswith("__"))
    class_contents = {n: getattr(cls, n) for n in vars(cls) if not is_special(n)}

    def unbind(value):  # Get the value out of the lexical closure.
        return lambda self: value

    propertified_contents = {name: property(unbind(value))
                             for (name, value) in class_contents.items()}
    receptor = type(cls.__name__, (object,), propertified_contents)
    return receptor()  # Replace with an instance, so properties work.

@const
class Paths(object):
    home = "/home"
    null = "/dev/null"
Now you can access Paths.home as a normal value, but can't assign to it. You can define several classes decorated with @const, as you might use several .h files.
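For instance, the assignment fails because the generated class exposes home as a read-only property:

print(Paths.home)     # /home
Paths.home = "/root"  # AttributeError: can't set attribute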
You can use something like this:
Files structure:
myapp/
__init__.py
settings.py
main.py
settings.py
CONST_A = 'A'
CONST_B = 'B'
__init__.py
from . import settings as global_settings

class Settings:
    def __init__(self):
        for setting in dir(global_settings):
            if setting.isupper():
                setattr(self, setting, getattr(global_settings, setting))

    def __setattr__(self, attr, value):
        if not hasattr(self, attr):  # only attributes that don't exist yet may be set
            super().__setattr__(attr, value)
        else:
            raise TypeError("'constant' does not support item assignment")

settings = Settings()
main.py
from myapp import settings

print(settings.CONST_A)  # prints A
settings.CONST_A = 'C'   # raises TypeError
print(settings.CONST_A)  # prints A
settings.CONST_C = 'C'   # you can still add new constants
print(settings.CONST_C)  # prints C
The overridden __setattr__ in the Settings class makes all existing attributes read-only. The only requirement is that all the constants in your settings.py are written in capital letters.
But be aware that it won't work if you import the variables directly, which bypasses the Settings wrapper:

from myapp.settings import CONST_A

print(CONST_A)  # prints A
CONST_A = 'C'   # rebinds the name with no error
print(CONST_A)  # prints C
Just define a constants.py file and write all your constants there. There is no other magic trick in Python. Use all-caps names, as that is the general convention.
Python isn't preprocessed. You can just create a file constant.py
#!/usr/bin/env python
# encoding: utf-8
"""
constant.py
"""
MY_CONSTANT = 50
Import constant.py wherever you need the constant values, as in the example below.
#!/usr/bin/env python
# encoding: utf-8
"""
example.py
"""
import constant
print(constant.MY_CONSTANT * 2)
This way you can use the constants across a project.
You also have the option, if the constants are tied to a particular class and used privately within it, of making them specific to that class:

class Foo(object):
    GOOD = 0
    BAD = 1

    def __init__(self...
If you want to define them for use across an entire module, make them module-level names at the top of the module:

PIE = 3.47

class Foo(object):
    def __init__(self...
For example, I have a file myClassFile.py with the following code:

class myClass:
    def first(self):
        return 'tea'

    def second(self):
        print(f'drink {self.first()}')
Then I have a file run.py with the following code:
from myClassFile import myClass
class_ = myClass()
class_.second()
which, when I run it, outputs
drink tea
How do I prevent someone from writing the code below in run.py, or anywhere outside myClass?
class_.first()
so that using that method outside the myClass class raises an error of some sort?
You can add a level of protection around methods and attributes by prefixing them with __. But you can't make them totally private (as far as I know); there's always a way around, as shown in the example below.
class MyClass:
    def __init__(self):
        self.__a = 1

    def __method(self):
        return 2

obj = MyClass()
# obj.__a         # raises an exception
# obj.__method()  # raises an exception
print(dir(obj))   # you can see the method and attribute have been renamed!
print(obj._MyClass__a)         # 1
print(obj._MyClass__method())  # 2
Python has no support for this. You can mark methods private by convention by prefixing them with _, or make them harder to call by prefixing them with __.
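A minimal sketch of the two conventions (the class and method names are illustrative):

class Widget:
    def _internal(self):   # single underscore: private by convention only
        pass

    def __hidden(self):    # double underscore: name-mangled to _Widget__hidden
        pass

w = Widget()
w._internal()          # works, but signals "don't call me from outside"
# w.__hidden()         # AttributeError
w._Widget__hidden()    # the mangled name still works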
These are called private methods. To make a variable or method private in Python, just prefix it with __, so def second(self) turns into def __second(self). This also works for variables!
class Test:
    def test(self):
        self.__test()

    def __test(self):
        print('_test')

x = Test()
x.__test()
The last line gives an error, but x.test() prints _test successfully!
Note that the private method can still be run via its mangled name, x._Test__test().
How do I get the exact class object that is available under a given name in the current scope? I want to write a function like this:
from my_module import ClassA # A subclass of my_other_module.BenevolentClass
from my_other_module import my_function
a = 'ClassA'
cls_var = my_function(a)
o = cls_var()
So that I could supply any string to my_function, and as long as that string is available in the caller's namespace as a class name, it would produce the correct class, much as if I had pasted the string directly into the code. The reason is that I need to supply class names to a complex object-creation routine while avoiding eval where possible. My current implementation is like this:
def my_function(name):
    if name in globals():
        c = globals()[name]
        # Actually a complex class whitelist
        if issubclass(c, BenevolentClass):
            return c
        else:
            raise ValueError(f'Potentially malicious class {name}')
But that apparently uses globals() from my_other_module, which is not what I want. I want all the classes that are available at the exact line of code where my_function is called (which may be inside a completely different module, called from yet another one).
You can pass the global dict to my_function.
def my_function(name, g):
    if name in g:
        ...

cls_var = my_function(a, globals())
I think I've found a solution. I'm not sure it's perfect, so I'd rather hear some comments before accepting my own answer.
import inspect

def my_function(name):
    c = None  # defined up front so the check below always has a value
    for frame in inspect.getouterframes(inspect.currentframe()):
        if name in frame.frame.f_globals:
            c = frame.frame.f_globals[name]
            break
    if c and issubclass(c, BenevolentClass):
        return c
    else:
        raise ValueError(f'Potentially malicious class {name}')
As far as I understand it, inspect.getouterframes walks outward along the call stack, and I can check the globals at every step to see what's available. Neither local variables nor builtins appear in frame.f_globals, so it doesn't seem to have much potential for injecting malicious data.
Context: I'm making a Ren'Py game. The value is a Character(). Yes, I know this is a dumb idea outside of this context.
I need to create a variable, whose name comes from an input string, from inside a class, and have it live outside the class's scope:
class Test:
    def __init__(self):
        self.dict = {}  # used elsewhere to give the inputs for the function below

    def create_global_var(self, variable, value):
        # the equivalent of exec("global {0}; {0} = {1}".format(str(variable), str(value)))
        ...  # other functions in the class require this

Test().create_global_var("abc", "123")  # hence abc = 123
I have tried vars()[variable] = value, globals()[variable] = value, etc., and they simply did not work (they didn't even define anything). Edit: this was my problem.
I know that the following would work equally well, but I want the variables in the correct scope:
setattr(self.__class__, variable, value)  # d.abc = 123 now, but in the wrong scope
How can I create a variable in the global scope from within a class, using a string as the variable name, without using attributes or exec, in Python?
And yes, I'll be sanity-checking the input.
First things first: what we call the "global" scope in Python is actually the "module" scope (on the bright side, this diminishes the "evils" of using global vars).
Then, to create a global variable dynamically, although I still can't see why that would be better than using a module-level dictionary, just do:

globals()[variable] = value

This creates a variable in the current module. If you need to create a module variable in the module from which the method was called, you can peek at the caller's globals through its frame:

from inspect import currentframe
currentframe().f_back.f_globals[variable] = value

Now, this seems especially useless, since you can create a variable with a dynamic name but you can't access it dynamically (unless you use the globals dictionary again).
Even in your test example, you create the "abc" variable by passing the method a string, but then you have to access it with a hardcoded abc; the language itself is designed to discourage this (hence the difference from JavaScript, where array indexes and object attributes are interchangeable, while in Python you have distinct Mapping objects).
My suggestion is that you use an explicit module-level dictionary and create all your dynamic variables as key/value pairs there:

names = {}

class Test(object):
    def __init__(self):
        self.dict = {}  # used elsewhere to give the inputs for the function below

    def create_global_var(self, variable, value):
        names[variable] = value
(on a side note, in Python 2 always inherit your classes from "object")
You can use setattr(__builtins__, 'abc', '123') for this.
Mind you, this is most likely a design problem, and you should rethink the design.
I have a class whose instances need to format output as instructed by the user. There's a default format, which can be overridden. I implemented it like this:
class A:
    def __init__(self, params):
        # ...
        # by default, print all float values as percentages with 2 decimals
        self.format_functions = {float: lambda x: '{:.2%}'.format(x)}

    def __str__(self):
        # uses self.format_functions to format the output
        ...
a = A(params)
print(a)  # uses the default output formatting

# overriding the default output formatting:
# floats printed as percentages with 3 decimal digits; bools printed as Y / N
a.format_functions = {float: lambda x: '{:.3%}'.format(x),
                      bool: lambda x: 'Y' if x else 'N'}
print(a)
Is this OK? Let me know if there is a better way to design it.
Unfortunately, I need to pickle instances of this class. But only functions defined at the top level of a module can be pickled; lambda functions are unpicklable, so my format_functions instance attribute breaks the pickling.
I tried rewriting this to use a classmethod instead of lambda functions, but still no luck, for the same reason:
import pickle

class A:
    @classmethod
    def default_float_format(cls, x):
        return '{:.2%}'.format(x)

    def __init__(self, params):
        # ...
        # by default, print all float values as percentages with 2 decimals
        self.format_functions = {float: self.default_float_format}

    def __str__(self):
        # uses self.format_functions to format the output
        ...

a = A(params)
pickle.dumps(a)  # Can't pickle <class 'method'>: attribute lookup builtins.method failed
Note that pickling doesn't work here even if I don't override the defaults; just the fact that I assigned self.format_functions = {float: self.default_float_format} breaks it.
What should I do? I'd rather not pollute the namespace and break encapsulation by defining default_float_format at module level.
Incidentally, why in the world does pickle impose this restriction? It certainly feels like a gratuitous and substantial pain for the end user.
To pickle class instances or functions (and therefore methods), Python's pickle depends on their names being available as global variables; the reference to the method in your dictionary points to a name that is not available in the global namespace, or, better said, the "module namespace".
You could circumvent that by customizing the pickling of your class with __getstate__ and __setstate__ methods (there is a sketch of that after the example below). But I think you'd be better off defining the function outside the class scope, since the formatting function does not depend on any information from the object or the class itself (and even if some formatting function did, you could pass that in as a parameter).
This does work (Python 3.2):

import pickle

def default_float_format(x):
    return '{:.2%}'.format(x)

class A:
    def __init__(self, params):
        # ...
        # by default, print all float values as percentages with 2 decimals
        self.format_functions = {float: default_float_format}

    def __str__(self):
        # uses self.format_functions to format output
        pass

a = A(1)
pickle.dumps(a)
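For completeness, a minimal sketch of the __getstate__/__setstate__ route mentioned above; note that with this naive version any custom format_functions are dropped and the defaults are rebuilt on unpickling:

class A:
    def __init__(self):
        # by default, print all float values as percentages with 2 decimals
        self.format_functions = {float: lambda x: '{:.2%}'.format(x)}

    def __getstate__(self):
        # remove the unpicklable functions before pickling
        state = self.__dict__.copy()
        state.pop('format_functions', None)
        return state

    def __setstate__(self, state):
        # restore everything else, then rebuild the default formatters
        self.__dict__.update(state)
        self.format_functions = {float: lambda x: '{:.2%}'.format(x)}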
If you use the dill module, either of your two approaches will just work as is. dill can pickle lambdas as well as class instances and class methods. No need to pollute the namespace and break encapsulation, as you said you didn't want to do… though that is what the other answer does.
dill is basically ten years or so worth of finding the right copy_reg functions that register how to serialize the majority of objects in standard Python. Nothing special or tricky; it just takes time. So why doesn't pickle do this for us? Why does pickle have this restriction?
Well, if you look at the pickle docs, the answer is there:
https://docs.python.org/2/library/pickle.html#what-can-be-pickled-and-unpickled
Basically: Functions and classes are pickled by reference.
This means pickle does not work on objects defined in __main__, and it also doesn't work on many dynamically modified objects. dill registers __main__ as a module, so it has a valid namespace. dill also gives you the option not to pickle by reference, so you can serialize dynamically modified objects… and class instances, class methods (bound and unbound), and so on.
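Assuming dill is installed (pip install dill), usage is a drop-in replacement for pickle:

import dill

a = A(params)         # the original class from the question, lambdas and all
data = dill.dumps(a)  # works where pickle.dumps(a) raises
b = dill.loads(data)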
I want to modify some classes in the standard library to use a different set of globals from the ones that other classes in the same module use.
Example
This example is an example only:
# module_a.py
my_global = []

class A:
    def __init__(self):
        my_global.append(self)

class B:
    def __init__(self):
        my_global.append(self)
In this example, if I create an instance of A via A(), it will call append on the object named by my_global. But now I wish to create a new module, import B into it, and have B use my_global from the module it has been imported into, instead of the my_global from the module where B was originally defined.
# module_b.py
from module_a import B
my_global = []
Related
I'm struggling to explain my problem, here is my previous attempt which did in fact ask something completely different:
Clone a module and make changes to the copy
Update0
The example above is only an illustration of what I'm trying to achieve.
Since there is no variable scope for classes (unlike, say, C++), I think a reference to a globals mapping is not stored in the class, but is instead attached to every function when it is defined.
Update1
An example was requested from the standard library:
Many (maybe all?) of the classes in the threading module make use of globals such as _allocate_lock, get_ident, and _active. One cannot change these globals without changing them for all the classes in that module.
You can't change the globals without affecting all other users of the module, but what you sort of can do is create a private copy of the whole module.
I trust you are familiar with sys.modules: if you remove a module from there, Python forgets it was imported, but old objects referencing it will continue to do so. When it is imported again, a new copy of the module is made.
A hacky solution to your problem could be something like this:
import sys
import threading
# Remove the original module, but keep it around
main_threading = sys.modules.pop('threading')
# Get a private copy of the module
import threading as private_threading
# Cover up evidence by restoring the original
sys.modules['threading'] = main_threading
# Modify the private copy
private_threading._allocate_lock = my_allocate_lock  # your replacement function
And now, private_threading.Lock has globals entirely separate from threading.Lock!
Needless to say, the module wasn't written with this in mind, and especially with a system module such as threading you might run into problems. For example, threading._active is supposed to contain all running threads, but with this solution neither copy's _active will have them all. The code may also eat your socks and set your house on fire, etc. Test rigorously.
Okay, here's a proof of concept that shows how to do it. Note that it only goes one level deep: properties and nested functions are not adjusted. To implement that, as well as to make this more robust, each function's __globals__ should be compared to the globals that are to be replaced, and the substitution made only if they are the same.
from types import FunctionType

def migrate_class(cls, globals):
    """Recreates a class, substituting the passed-in globals for the
    globals already in the existing class. This proof-of-concept
    version only goes one level deep (i.e. properties and other nested
    functions are not changed)."""
    cls_name = cls.__name__  # kept separate so the loops below don't clobber it
    bases = cls.__bases__
    new_dict = dict()
    if hasattr(cls, '__slots__'):
        new_dict['__slots__'] = cls.__slots__
        for name in cls.__slots__:
            if hasattr(cls, name):
                attr = getattr(cls, name)
                if callable(attr):
                    closure = attr.__closure__
                    defaults = attr.__defaults__
                    func_code = attr.__code__
                    attr = FunctionType(func_code, globals, name, defaults, closure)
                new_dict[name] = attr
    if hasattr(cls, '__dict__'):
        od = getattr(cls, '__dict__')
        for name, attr in od.items():
            if callable(attr):
                closure = attr.__closure__
                defaults = attr.__defaults__
                kwdefaults = attr.__kwdefaults__
                func_code = attr.__code__
                attr = FunctionType(func_code, globals, name, defaults, closure)
                if kwdefaults:
                    attr.__kwdefaults__ = kwdefaults
            new_dict[name] = attr
    return type(cls_name, bases, new_dict)
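Hypothetical usage, reusing the module_a/module_b layout from the question:

# module_b.py
import module_a

my_global = []
B = migrate_class(module_a.B, globals())

B()               # appends to this module's my_global
print(my_global)  # [<B object at ...>]; module_a.my_global is untouched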
After having gone through this exercise, I am really curious as to why you need to do this.
"One cannot change these globals without changing it for all the classes in that module." That's the root of the problem, isn't it, and a good illustration of the trouble with global variables in general. The use of globals in threading tethers its classes to those global objects.
By the time you jerry-rig something to find and monkey-patch each use of a global variable within an individual class from the module, are you any further ahead than just reimplementing the code for your own use?
The only workaround that "might" be of use in your situation is something like mock. Mock's patch decorators/context managers (or something similar) could be used to swap out a global variable for the lifetime of a given object. That works well within the very controlled context of unit testing, but in any other circumstances I wouldn't recommend it, and I would think about just reimplementing the code to suit my needs.
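A rough sketch of that idea with unittest.mock, against the question's module_a example (the patch lasts only for the lifetime of the with block):

from unittest import mock
import module_a

with mock.patch.object(module_a, 'my_global', []):
    a = module_a.A()           # appends to the temporary list
    print(module_a.my_global)  # [a]
print(module_a.my_global)      # the original list, restored on exit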
Globals are bad for exactly this reason, as I am sure you know well enough.
I'd try to reimplement A and B (maybe by subclassing them) in my own module, with all references to my_global replaced by an injected dependency on A and B, which I'll call registry here:
class A(orig.A):
    def __init__(self, registry):
        self.registry = registry
        self.registry.append(self)
    # more updated methods
If you are creating all instances of A yourself, you are pretty much done. You might want to create a factory that hides away the new __init__ parameter:
my_registry = []

def A_in_my_registry():
    return A(my_registry)
If foreign code creates orig.A instances for you, and you would rather have new A instances, you have to hope the foreign code is customizable with factories. If not, derive from the foreign classes and update them to use (newly injected) A factories instead… and rinse and repeat for the creation of those updated classes. I realize this can range from tedious to almost impossible depending on the complexity of the foreign code, but most standard libraries are quite flat.
--
Edit: monkey-patching the std lib code.
If you don't mind monkey-patching the standard library, you could also try to modify the original classes to work with a level of redirection that defaults to the original globals but is customizable per instance:
import orig

class A(orig.A):
    def __init__(self, registry=orig.my_globals):
        self.registry = registry
        self.registry.append(self)
    # more updated methods

orig.A = A
As before, you will need to control the creation of the A instances that should use non-standard globals, but you won't have different A classes around, as long as you monkey-patch early enough.
If you use Python 3, you can subclass B and rebuild its __init__ method with new globals (a function's __globals__ attribute is read-only, so the function has to be recreated), like this:
from module_a import B

function = type(lambda: 0)  # similar to 'from types import FunctionType as function', but faster
my_global = []

class My_B(B):
    __init__ = function(B.__init__.__code__, globals(), '__init__',
                        B.__init__.__defaults__, B.__init__.__closure__)
IMHO it is not possible to override global variables...
Globals are rarely a good idea.
Implicit variables are rarely a good idea.
An implicitly-used global is easy to indict as "rarely good" as well.
Additionally, you don't want A.__init__() doing anything "class-level", like updating some mysterious collection that exists for the class as a whole. That's often a bad idea.
Rather than messing with an implicit class-level collection, you want a Factory in module_a that (1) creates A or B instances and (2) updates an explicit collection.
You can then use this factory in module_b, except with a different collection.
This can promote testability by exposing an implicit dependency.
module_a.py
class Factory(object):
    def __init__(self, collection):
        self.collection = collection

    def make(self, name, *args, **kw):
        obj = eval(name)(*args, **kw)
        self.collection.append(obj)
        return obj

module_collection = []
factory = Factory(module_collection)
module_b.py
import module_a

module_collection = []
factory = module_a.Factory(module_collection)
Now a client can do this:

import module_b

a = module_b.factory.make("A")
b = module_b.factory.make("B")
print(module_b.module_collection)
You can make the API a bit more fluent by making the factory callable (implementing __call__ instead of make).
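A sketch of that callable variant (same behavior, different spelling at the call site):

class Factory(object):
    def __init__(self, collection):
        self.collection = collection

    def __call__(self, name, *args, **kw):
        obj = eval(name)(*args, **kw)
        self.collection.append(obj)
        return obj

# clients then write module_b.factory("A") instead of module_b.factory.make("A")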
The point is to make the collection explicit via a factory class.