How to save frequently used physical constants in Python

I would like to have a place for my physical constants.
The following answer is already a starting point:
How-to import constants in many files
So I have a separate file called constants.py which I import into my projects.
Now, I would like to save and access additional information:
units
documentation
The resulting interface should be like:
import constants as c
print(c.R)        # 287.102
print(c.R.units)  # J/(kg K)
print(c.R.doc)    # ideal gas constant
Calculations should use c.R to access the value.
It is basically a class that behaves like the float class but holds two additional strings: units and documentation.
How can this be designed?

When inheriting from float, you have to override the __new__ method:
class Constant(float):
    def __new__(cls, value, units, doc):
        self = float.__new__(cls, value)
        self.units = units
        self.doc = doc
        return self

R = Constant(287.102, "J/(kg K)", "ideal gas constant")
print(R, R * 2)  # 287.102 574.204
print(R.units)   # J/(kg K)
print(R.doc)     # ideal gas constant
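One subtlety worth noting (my own observation, not part of the original answer): arithmetic on a Constant returns a plain float, because float's operators know nothing about the subclass, so the units and doc attributes are lost as soon as you compute with the value:

R = Constant(287.102, "J/(kg K)", "ideal gas constant")
x = R * 2        # float.__mul__ returns a plain float, not a Constant
print(type(x))   # <class 'float'>
# x.units would now raise AttributeError: 'float' object has no attribute 'units'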

I recommend using the json library, which will allow you to store your constant values in a readable and modifiable format.
Using @Daniel's Constant class, which inherits from float and adds your custom attributes, you can load all your constants at once into a new Constants object.
You can then access each constant as an attribute, e.g. c.R for its value.
Complete file:
#!/usr/bin/env python
import json

class Constant(float):
    def __new__(cls, value):
        self = float.__new__(cls, value["value"])  # KeyError if "value" is missing
        self.units = value.get("units", None)
        self.doc = value.get("doc", None)
        return self

class Constants:
    # load the json file into a dictionary of Constant objects
    def __init__(self):
        with open("constants.json") as fh:
            json_object = json.load(fh)
        # create a new dictionary
        self.constants_dict = {}
        for constant in json_object.keys():
            # put each Constant into it
            self.constants_dict[constant] = Constant(json_object[constant])

    # try to get the requested attribute
    def __getattr__(self, name):
        # missing keys are returned as None; use self.constants_dict[name]
        # if you want to raise a KeyError instead
        return self.constants_dict.get(name, None)

c = Constants()
print(c.R)        # 287.102
print(c.R.doc)    # ideal gas constant
print(c.R + 5)    # 292.102
print(c.F.units)  # C mol-1
print(c.missing)  # None
Example constants.json:
{
    "R": {
        "value": 287.102,
        "units": "J/(kg K)",
        "doc": "ideal gas constant"
    },
    "F": {
        "value": 96485.33,
        "units": "C mol-1",
        "doc": "Faraday constant"
    }
}

Related

Python Refactor JSON into different JSON Structure

I have a bunch of JSON data that I did mostly by hand. Several thousand lines. I need to refactor it into a totally different format using Python.
An overview of my 'stuff':
Column: The basic 'unit' of my data. Each Column has attributes. Don't worry about the meaning of the attributes, but the attributes need to be retained for each Column if they exist.
Folder: Folders group Columns and other Folders together. The folders currently have no attributes, they (currently) only contain other Folder and Column objects (Object does not necessarily refer to JSON objects here... more of an 'entity')
Universe: Universes group everything into big chunks which, in the larger scope of my project, are unable to interact with each other. That is not important here, but that's what they do.
Some limitations:
Columns cannot contain other Column objects, Folder objects, or Universe objects.
Folders cannot contain Universe objects.
Universes cannot contain other Universe objects.
Currently, I have Columns in this form:
"Column0Name": {
"type": "a type",
"dtype": "data type",
"description": "abcdefg"
}
and I need it to go to:
{
    "name": "Column0Name",
    "type": "a type",
    "dtype": "data type",
    "description": "abcdefg"
}
Essentially I need to convert the Column key-value pairs into an array of objects (I am new to JSON, so I don't know the exact terminology). I also need each Folder to end up with two new JSON arrays (in addition to the "name": "FolderName" key-value pair): a "folders": [] and a "columns": [] need to be added. So I have this for folders:
"Folder0Name": {
"Column0Name": {
"type": "a",
"dtype": "b",
"description": "c"
},
"Column1Name": {
"type": "d",
"dtype": "e",
"description": "f"
}
}
and need to go to this:
{
    "name": "Folder0Name",
    "folders": [],
    "columns": [
        {"name": "Column0Name", "type": "a", "dtype": "b", "description": "c"},
        {"name": "Column1Name", "type": "d", "dtype": "e", "description": "f"}
    ]
}
The folders will also end up in an array inside its parent Universe. Likewise, each Universe will end up with "name", "folders", and "columns" things. As such:
{
    "name": "Universe0",
    "folders": [a bunch of folders in a JSON array],
    "columns": [occasionally some columns in a JSON array]
}
Bottom line:
I'm going to guess that I need a recursive function to iterate through all the nested dictionaries after I import the JSON data with the json Python module.
I'm thinking some sort of usage of yield might help but I'm not super familiar yet with it.
Would it be easier to update the dicts as I go, or destroy each key-value pairs and construct an entirely new dict as I go?
Here is what I have so far. I'm stuck on getting the generator to return actual dictionaries instead of a generator object.
import json

class AllUniverses:
    """Container to hold all the Universes found in the json file"""
    def __init__(self, filename):
        self._fn = filename
        self.data = {}
        self.read_data()

    def read_data(self):
        with open(self._fn, 'r') as fin:
            self.data = json.load(fin)
        return self

    def universe_key(self):
        """Get the next universe key from the dict of all universes
        The key will be used as the name for the universe.
        """
        yield from self.data

class Universe:
    def __init__(self, json_filename):
        self._au = AllUniverses(filename=json_filename)
        self.uni_key = self._au.universe_key()
        self._universe_data = self._au.data.copy()
        self._col_attrs = ['type', 'dtype', 'description', 'aggregation']
        self._folders_list = []
        self._columns_list = []
        self._type = "Universe"
        self._name = ""
        self.uni = dict()
        self.is_folder = False
        self.is_column = False

    def output(self):
        # TODO: Pass this to json.dump?
        # TODO: Still need to get the actual folder and column dictionaries
        #       from the generators
        out = {
            "name": self._name,
            "type": "Universe",
            "folder": [f.me for f in self._folders_list],
            "columns": [c.me for c in self._columns_list]}
        return out

    def update_universe(self):
        """Get the next universe"""
        universe_k = next(self.uni_key)
        self._name = str(universe_k)
        self.uni = self._universe_data.pop(universe_k)
        return self

    def parse_nodes(self):
        """Process all child nodes"""
        nodes = [_ for _ in self.uni.keys()]
        for k in nodes:
            v = self.uni.pop(k)
            self._is_column(val=v)
            if self.is_column:
                fc = Column(data=v, key_name=k)
                self._columns_list.append(fc)
            else:
                fc = Folder(data=v, key_name=k)
                self._folders_list.append(fc)
        return self

    def _is_column(self, val):
        """Determine if val is a Column or Folder object"""
        self.is_folder = False
        self._column = False
        if isinstance(val, dict) and not val:
            self.is_folder = True
        elif not isinstance(val, dict):
            raise TypeError('Cannot handle inputs not of type dict')
        elif any([i in val.keys() for i in self._col_attrs]):
            self._column = True
        else:
            self.is_folder = True
        return self

    def parse_children(self):
        for folder in self._folders_list:
            assert(isinstance(folder, Folder)), f'bletch idk what happened'
            folder.parse_nodes()

class Folder:
    def __init__(self, data, key_name):
        self._data = data.copy()
        self._name = str(key_name)
        self._node_keys = [_ for _ in self._data.keys()]
        self._folders = []
        self._columns = []
        self._col_attrs = ['type', 'dtype', 'description', 'aggregation']

    @property
    def me(self):
        # maybe this should force the code to parse all children of this
        # Folder? Need to convert the generator into actual dictionaries
        return {"name": self._name, "type": "Folder",
                "columns": [(c.me for c in self._columns)],
                "folders": [(f.me for f in self._folders)]}

    def parse_nodes(self):
        """Parse all the children of this Folder
        Parse through all the node names. If it is detected to be a Folder
        then create a Folder obj. from it and add to the list of Folder
        objects. Else create a Column obj. from it and append to the list
        of Column obj.
        This should be appending dictionaries
        """
        for key in self._node_keys:
            _folder = False
            _column = False
            values = self._data.copy()[key]
            if isinstance(values, dict) and not values:
                _folder = True
            elif not isinstance(values, dict):
                raise TypeError('Cannot handle inputs not of type dict')
            elif any([i in values.keys() for i in self._col_attrs]):
                _column = True
            else:
                _folder = True
            if _folder:
                f = Folder(data=values, key_name=key)
                self._folders.append(f.me)
            else:
                c = Column(data=values, key_name=key)
                self._columns.append(c.me)
        return self

class Column:
    def __init__(self, data, key_name):
        self._data = data.copy()
        self._stupid_check()
        self._me = {
            'name': str(key_name),
            'type': 'Column',
            'ctype': self._data.pop('type'),
            'dtype': self._data.pop('dtype'),
            'description': self._data.pop('description'),
            'aggregation': self._data.pop('aggregation')}

    def __str__(self):
        # TODO: pretty sure this isn't correct
        return str(self.me)

    @property
    def me(self):
        return self._me

    def to_json(self):
        # This seems to be working? I think?
        return json.dumps(self, default=lambda o: str(self.me))  # o.__dict__)

    def _stupid_check(self):
        """If the key isn't in the dictionary, add it"""
        keys = [_ for _ in self._data.keys()]
        keys_defining_a_column = ['type', 'dtype', 'description', 'aggregation']
        for json_key in keys_defining_a_column:
            if json_key not in keys:
                self._data[json_key] = ""
        return self

if __name__ == "__main__":
    file = r"dummy_json_data.json"
    u = Universe(json_filename=file)
    u.update_universe()
    u.parse_nodes()
    u.parse_children()
    print('check me')
And it gives me this:
{
"name":"UniverseName",
"type":"Universe",
"folder":[
{"name":"Folder0Name",
"type":"Folder",
"columns":[<generator object Folder.me.<locals>.<genexpr> at 0x000001ACFBEDB0B0>],
"folders":[<generator object Folder.me.<locals>.<genexpr> at 0x000001ACFBEDB190>]
},
{"name":"Folder2Name",
"type":"Folder",
"columns":[<generator object Folder.me.<locals>.<genexpr> at 0x000001ACFBEDB040>],
"folders":[<generator object Folder.me.<locals>.<genexpr> at 0x000001ACFBEDB120>]
},
{"name":"Folder4Name",
"type":"Folder",
"columns":[<generator object Folder.me.<locals>.<genexpr> at 0x000001ACFBEDB270>],
"folders":[<generator object Folder.me.<locals>.<genexpr> at 0x000001ACFBEDB200>]
},
{"name":"Folder6Name",
"type":"Folder",
"columns":[<generator object Folder.me.<locals>.<genexpr> at 0x000001ACFBEDB2E0>],
"folders":[<generator object Folder.me.<locals>.<genexpr> at 0x000001ACFBEDB350>]
},
{"name":"Folder8Name",
"type":"Folder",
"columns":[<generator object Folder.me.<locals>.<genexpr> at 0x000001ACFBEDB3C0>],
"folders":[<generator object Folder.me.<locals>.<genexpr> at 0x000001ACFBEDB430>]
}
],
"columns":[]
}
If there is an existing tool for this kind of transformation so that I don't have to write Python code, that would be an attractive alternative, too.
Let's create the three classes needed to represent Columns, Folders and Universes. Before starting, there are some topics I want to talk about; I give a short description of each here, and if any of them is new to you I can go deeper:
I will use type annotations to make clear what type each variable is.
I am going to use __slots__. By telling the Column class that its instances are going to have name, ctype, dtype, description and aggregation attributes, each instance of Column will require less memory. The downside is that it will not accept any other attribute not listed there. That is, it saves memory but loses flexibility. As we are going to have several (maybe hundreds or thousands) of instances, the reduced memory footprint seems more important than the flexibility of being able to add arbitrary attributes.
Each class will have the standard constructor, where every argument has a default value except name, which is mandatory.
Each class will have another constructor called from_old_syntax. It is a class method that receives the string corresponding to the name and a dict corresponding to the data as its arguments, and outputs the corresponding instance (Column, Folder or Universe).
Universes are basically the same as Folders with different names (for now), so Universe will simply inherit from Folder (class Universe(Folder): pass).
from typing import List

class Column:
    __slots__ = 'name', 'ctype', 'dtype', 'description', 'aggregation'

    def __init__(
        self,
        name: str,
        ctype: str = '',
        dtype: str = '',
        description: str = '',
        aggregation: str = '',
    ) -> None:
        self.name = name
        self.ctype = ctype
        self.dtype = dtype
        self.description = description
        self.aggregation = aggregation

    @classmethod
    def from_old_syntax(cls, name: str, data: dict) -> "Column":
        column = cls(name)
        for key, value in data.items():
            # The old syntax used "type" for the column type but in the new
            # syntax it will have another meaning, so we use "ctype" instead
            if key == 'type':
                key = 'ctype'
            try:
                setattr(column, key, value)
            except AttributeError as e:
                raise AttributeError(f"Unexpected key {key} for Column") from e
        return column

class Folder:
    __slots__ = 'name', 'folders', 'columns'

    def __init__(
        self,
        name: str,
        columns: List[Column] = None,
        folders: List["Folder"] = None,
    ) -> None:
        self.name = name
        if columns is None:
            self.columns = []
        else:
            self.columns = [column for column in columns]
        if folders is None:
            self.folders = []
        else:
            self.folders = [folder for folder in folders]

    @classmethod
    def from_old_syntax(cls, name: str, data: dict) -> "Folder":
        columns = []  # type: List[Column]
        folders = []  # type: List["Folder"]
        for key, value in data.items():
            # Determine if it is a Column or a Folder
            if 'type' in value and 'dtype' in value:
                columns.append(Column.from_old_syntax(key, value))
            else:
                folders.append(Folder.from_old_syntax(key, value))
        return cls(name, columns, folders)

class Universe(Folder):
    pass
As you can see the constructors are pretty trivial: assign the arguments to the attributes and done. In the case of Folders (and thus Universes too), two of the arguments are lists of columns and folders. Their default value is None (in which case we initialize an empty list) because using mutable objects as default values has some issues, so it is good practice to use None as the default value for mutable arguments (such as lists).
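To illustrate that pitfall (a minimal sketch of my own, not part of the original answer): a mutable default is created once, at function definition time, and shared across every call:

def broken(items=[]):        # the same list object is reused on every call
    items.append(1)
    return items

print(broken())   # [1]
print(broken())   # [1, 1]  <- state leaked from the previous call

def safe(items=None):        # the None-default idiom used by Folder above
    if items is None:
        items = []           # a fresh list on each call
    items.append(1)
    return items

print(safe())   # [1]
print(safe())   # [1]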
Column's from_old_syntax class method creates an empty Column with the provided name. Afterwards we iterate over the data dict that was also provided and assign each key-value pair to the corresponding attribute. There is a special case where the "type" key is converted to "ctype", as "type" is going to be used for a different purpose in the new syntax. The assignment itself is done by setattr(column, key, value). We have wrapped it in a try ... except ... clause because, as we said above, only the items in __slots__ can be used as attributes, so if there is an attribute that you forgot, you will get an exception saying "AttributeError: Unexpected key 'NAME'" and you will only have to add that "NAME" to __slots__.
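To make that failure mode concrete (a small sketch of my own, using the Column class defined above):

col = Column('temperature')
col.ctype = 'numeric'   # fine: 'ctype' is listed in __slots__
try:
    col.unit = 'K'      # 'unit' is not in __slots__, so this raises
except AttributeError as e:
    print(e)            # 'Column' object has no attribute 'unit'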
Folder's (and thus Universe's) from_old_syntax class method is even simpler. Create a list of columns and folders, iterate over the data checking whether each item is a folder or a column, and use the appropriate from_old_syntax class method. Then use those two lists and the provided name to return the instance. Notice that Folder.from_old_syntax is used to create the folders instead of cls.from_old_syntax, because cls may be Universe. However, to create the instance itself we do use cls(...), as here we do want Universe or Folder.
Now you could do universes = [Universe.from_old_syntax(name, data) for name, data in json.load(f).items()], where f is the file, and you will get all your Universes, Folders and Columns in memory. So now we need to encode them back to JSON. For this we are going to extend json.JSONEncoder so that it knows how to turn our classes into dictionaries that it can encode normally. To do so, you just need to override the default method, check if the object passed is one of our classes and return a dict that will be encoded. If it is not one of our classes, we let the parent default method take care of it.
import json

# JSON fields with these values will be omitted
EMPTY_VALUES = "", [], {}

class CustomEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, (Column, Folder, Universe)):
            # Make a dict with every item in their respective __slots__
            data = {
                attr: getattr(obj, attr) for attr in obj.__slots__
                if getattr(obj, attr) not in EMPTY_VALUES
            }
            # Add the type field with the class name
            data['type'] = obj.__class__.__name__
            return data
        # Use the parent class function for any object not handled explicitly
        return super().default(obj)
Converting the classes to dictionaries is basically taking what is in __slots__ as the keys and the attributes' values as the values. We filter out values that are an empty string, an empty list or an empty dict, as we do not need to write them to JSON. Finally, we add the "type" key to the dict by reading the object's class name (Column, Folder or Universe).
To use it you have to pass the CustomEncoder as the cls argument to json.dump.
So the code will look like this (omitting the class definitions to keep it short):
import json
from typing import List

# JSON fields with these values will be omitted
EMPTY_VALUES = "", [], {}

class Column:
    # ...

class Folder:
    # ...

class Universe(Folder):
    pass

class CustomEncoder(json.JSONEncoder):
    # ...

if __name__ == '__main__':
    with open('dummy_json_data.json', 'r') as f_in, open('output.json', 'w') as f_out:
        universes = [Universe.from_old_syntax(name, data)
                     for name, data in json.load(f_in).items()]
        json.dump(universes, f_out, cls=CustomEncoder, indent=4)

Is there a Python equivalent for Swift's @dynamicMemberLookup?

In Swift, you can define @dynamicMemberLookup (see documentation) to get direct access to properties that are nested inside another type. Is there a Python equivalent?
Example of what I want to achieve with Python
Let's say I have a class with members, e.g.:
c = OuterClass()
c.inner_class = ClassWithManyMembers()
c.inner_class.member1 = "1"
c.inner_class.member2 = "2"
c.inner_class.member3 = "3"
I would like to be able to get/set those members without having to type the inner_class every time:
print(c.member1) # prints "1"
c.member1 = 3
print(c.member1) # prints "3"
Example in Swift (Source):
Dynamic member lookup by member name
@dynamicMemberLookup
struct DynamicStruct {
    let dictionary = ["someDynamicMember": 325,
                      "someOtherMember": 787]
    subscript(dynamicMember member: String) -> Int {
        return dictionary[member] ?? 1054
    }
}
let s = DynamicStruct()
// Use dynamic member lookup.
let dynamic = s.someDynamicMember
print(dynamic)
// Prints "325"
Dynamic member lookup by key path
struct Point { var x, y: Int }

@dynamicMemberLookup
struct PassthroughWrapper<Value> {
    var value: Value
    subscript<T>(dynamicMember member: KeyPath<Value, T>) -> T {
        get { return value[keyPath: member] }
    }
}
let point = Point(x: 381, y: 431)
let wrapper = PassthroughWrapper(value: point)
print(wrapper.x)
My only idea in Python would be to monkey-patch all nested properties directly to the outer class.
I would advise against nesting classes in one another, but if you must do it, try this:
class MetaOuter(type):
    def __getattr__(cls, attr):
        for member in cls.__dict__.values():
            if hasattr(member, attr):
                return getattr(member, attr)
        raise AttributeError(attr)

    def __setattr__(cls, attr, value):
        for member in cls.__dict__.values():
            if hasattr(member, attr):
                setattr(member, attr, value)
                return
        super().__setattr__(attr, value)

class Outer(metaclass=MetaOuter):
    a = 0
    class Inner:
        x = 1
        y = 2
Now any attributes of a nested class inside Outer are available (and can be written to) as an attribute of Outer:
>>> Outer.x, Outer.y
(1, 2)
>>> Outer.a # Accessing regular attributes still works as usual
0
>>> Outer.x = True
>>> Outer.Inner.x
True
If you need to nest more than one level, use the same meta class for any inner encapsulating classes:
class Outer(metaclass=MetaOuter):
    a = 0
    class Inner(metaclass=MetaOuter):
        x = 1
        y = 2
        class Innerer:
            z = 42
>>> Outer.a, Outer.x, Outer.y, Outer.z
(0, 1, 2, 42)
>>> Outer.z = -1
>>> Outer.z
-1
Note: Be aware that if you're trying to access an attribute that is found in multiple nested classes, you can't be sure of which class the attribute will come from. A more predictable implementation in this case would be to handle some kind of key path that will be looked up, but that's essentially the same as what Python provides by default (e.g., Outer.Inner.Innerer.z).
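As a rough sketch of what such a key-path lookup could look like (my own illustration; the helper name is made up):

from functools import reduce

def resolve(obj, key_path):
    # follow a dotted path like 'Inner.Innerer.z' through nested attributes
    return reduce(getattr, key_path.split('.'), obj)

print(resolve(Outer, 'Inner.Innerer.z'))  # 42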
Generally, you can just save a reference to the inner object when you want to make repeated accesses to it.
c = OuterClass()
c.inner_class = ClassWithManyMembers()
ic = c.inner_class
print(ic.member1)
print(ic.member2)
print(ic.member3)
ic.member1 = "5"
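Alternatively (a hedged sketch of my own, not from the answers above), the closest instance-level analogue to Swift's @dynamicMemberLookup is implementing __getattr__ and __setattr__ on the outer class so that unknown attributes are delegated to the inner object:

class ClassWithManyMembers:
    def __init__(self):
        self.member1 = "1"
        self.member2 = "2"
        self.member3 = "3"

class OuterClass:
    def __init__(self):
        self.inner_class = ClassWithManyMembers()

    def __getattr__(self, name):
        # only called when normal lookup fails: forward to the inner object
        inner = self.__dict__.get('inner_class')
        if inner is None:
            raise AttributeError(name)
        return getattr(inner, name)

    def __setattr__(self, name, value):
        # once inner_class exists, writes (other than to 'inner_class'
        # itself) are forwarded to the inner object
        if name != 'inner_class' and 'inner_class' in self.__dict__:
            setattr(self.inner_class, name, value)
        else:
            super().__setattr__(name, value)

c = OuterClass()
print(c.member1)  # prints "1"
c.member1 = 3
print(c.member1)  # prints "3"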

Python: unable to access generated functions in a python class

I have a class that contains a nested dictionary that I want to make getters and setters for. I use a depth first search to generate the functions and add them to the class's __dict__ attribute, but when I try to call any of the generated functions, I just get an AttributeError: 'MyClass' object has no attribute 'getA'.
import operator
from functools import reduce

class MyClass:
    def __init__(self):
        self.dictionary = {
            "a": {
                "b": 1,
                "c": 2
            },
            "d": {
                "e": {
                    "f": 3,
                    "g": 4
                }
            }
        }
        self.addGettersSetters()

    def addGettersSetters(self):
        def makegetter(self, keyChain):
            def func():
                return reduce(operator.getitem, keyChain, self.dictionary)
            return func

        def makesetter(self, keyChain):
            def func(arg):
                print("setter ", arg)
                path = self.dictionary
                for i in keyChain[:-1]:
                    path = path[i]
                path[keyChain[-1]] = arg
            return func

        # depth first search of dictionary
        def recurseDict(self, dictionary, keyChain=[]):
            for key, value in dictionary.items():
                keyChain.append(key)
                # capitalize the first letter of each part of the keychain
                # for the function name
                capKeyChain = [i.title().replace(" ", "")
                               for i in keyChain]
                # setter version
                print('set{}'.format("".join(capKeyChain)))
                self.__dict__['set{}'.format(
                    "".join(capKeyChain))] = makesetter(self, keyChain)
                # getter version
                print('get{}'.format("".join(capKeyChain)))
                self.__dict__['set{}'.format(
                    "".join(capKeyChain))] = makegetter(self, keyChain)
                # recurse down the dictionary chain
                if isinstance(value, dict):
                    recurseDict(self, dictionary=value,
                                keyChain=keyChain)
                # remove the last key for the next iteration
                while keyChain[-1] != key:
                    keyChain = keyChain[: -1]
                keyChain = keyChain[: -1]

        recurseDict(self, self.dictionary)
        print(self.__dict__)

if __name__ == '__main__':
    myclass = MyClass()
    print(myclass.getA())
If you run this code, it outputs the names of all of the generated functions as well as the state of __dict__ after generating the functions, then terminates with the AttributeError.
What has me puzzled is that I based this on another piece of code that uses essentially the same methodology as an example of how to generate getters and setters this way. That piece of code works just fine, but mine does not, and per my eyes and research I am at a loss as to why. What am I missing here?
For reference I am running Anaconda Python 3.6.3

JSON serialize a class and change property casing with Python

I'd like to create a JSON representation of a class and change the property names automatically from snake_case to lowerCamelCase, as I'd like to comply with PEP8 in Python and also the JavaScript naming conventions (and maybe even more importantly, the backend I'm communicating to uses lowerCamelCase).
I prefer to use the standard json module, but I have nothing against using another, open source library (e.g. jsonpickle might solve my issue?).
>>> class HardwareProfile:
...     def __init__(self, vm_size):
...         self.vm_size = vm_size
>>> hp = HardwareProfile('Large')
>>> hp.vm_size
'Large'

### What I want ###
>>> magicjson.dumps(hp)
'{"vmSize": "Large"}'

### What I have so far... ###
>>> json.dumps(hp, default=lambda o: o.__dict__)
'{"vm_size": "Large"}'
You just need to create a function to transform the snake_case keys to camelCase. You can easily do that using .split, .lower, and .title.
import json

class HardwareProfile:
    def __init__(self, vm_size):
        self.vm_size = vm_size
        self.some_other_thing = 42
        self.a = 'a'

def snake_to_camel(s):
    a = s.split('_')
    a[0] = a[0].lower()
    if len(a) > 1:
        a[1:] = [u.title() for u in a[1:]]
    return ''.join(a)

def serialise(obj):
    return {snake_to_camel(k): v for k, v in obj.__dict__.items()}

hp = HardwareProfile('Large')
print(json.dumps(serialise(hp), indent=4, default=serialise))
output
{
    "vmSize": "Large",
    "someOtherThing": 42,
    "a": "a"
}
You could put serialise in a lambda, but I think it's more readable to write it as a proper def function.
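If you do want it as a one-liner anyway, it would look something like this (equivalent to the serialise function above):

print(json.dumps(hp, default=lambda o: {snake_to_camel(k): v
                                        for k, v in o.__dict__.items()}))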

Dynamically subclass an Enum base class

I've set up a metaclass and base class pair for creating the line specifications of several different file types I have to parse.
I have decided to go with using enumerations because many of the individual parts of the different lines in the same file often have the same name. Enums make it easy to tell them apart. Additionally, the specification is rigid and there will be no need to add more members, or extend the line specifications later.
The specification classes work as expected. However, I am having some trouble dynamically creating them:
>>> C1 = LineMakerMeta('C1', (LineMakerBase,), dict(a = 0))
AttributeError: 'dict' object has no attribute '_member_names'
Is there a way around this? The example below works just fine:
class A1(LineMakerBase):
    Mode = 0, dict(fill=' ', align='>', type='s')
    Level = 8, dict(fill=' ', align='>', type='d')
    Method = 10, dict(fill=' ', align='>', type='d')
    _dummy = 20  # so that Method has a known length

A1.format(**dict(Mode='DESIGN', Level=3, Method=1))
# produces '  DESIGN 3         1'
The metaclass is based on enum.EnumMeta, and looks like this:
import enum

class LineMakerMeta(enum.EnumMeta):
    "Metaclass to produce formattable LineMaker child classes."

    def _iter_format(cls):
        "Iteratively generate formatters for the class members."
        for member in cls:
            yield member.formatter

    def __str__(cls):
        "Returns string line with all default values."
        return cls.format()

    def format(cls, **kwargs):
        "Create formatted version of the line populated by the kwargs members."
        # build resulting string by iterating through members
        result = ''
        for member in cls:
            # determine value to be injected into member
            try:
                try:
                    value = kwargs[member]
                except KeyError:
                    value = kwargs[member.name]
            except KeyError:
                value = member.default
            value_str = member.populate(value)
            result = result + value_str
        return result
And the base class is as follows:
class LineMakerBase(enum.Enum, metaclass=LineMakerMeta):
    """A base class for creating Enum subclasses used for populating lines of a file.
    Usage:
    class LineMaker(LineMakerBase):
        a = 0,  dict(align='>', fill=' ', type='f'), 3.14
        b = 10, dict(align='>', fill=' ', type='d'), 1
        c = 15, dict(align='>', fill=' ', type='s'), 'foo'
        #   ^-start ^---spec dictionary              ^--default
    """
    def __init__(member, start, spec={}, default=None):
        member.start = start
        member.spec = spec
        if default is not None:
            member.default = default
        else:
            # assume value is numerical for all provided types other than 's' (string)
            default_or_set_type = member.spec.get('type', 's')
            default = {'s': ''}.get(default_or_set_type, 0)
            member.default = default

    @property
    def formatter(member):
        """Produces a formatter in form of '{0:<format>}' based on the member.spec
        dictionary. The member.spec dictionary makes use of these keys ONLY (see
        the string.format docs):
            fill align sign width grouping_option precision type"""
        try:
            # get cached value
            return '{{0:{}}}'.format(member._formatter)
        except AttributeError:
            # add width to format spec if not there
            member.spec.setdefault('width', member.length if member.length != 0 else '')
            # build formatter using the available parts in the member.spec dictionary
            # any missing parts will simply not be present in the formatter
            formatter = ''
            for part in 'fill align sign width grouping_option precision type'.split():
                try:
                    spec_value = member.spec[part]
                except KeyError:
                    # missing part
                    continue
                else:
                    # add part
                    sub_formatter = '{!s}'.format(spec_value)
                    formatter = formatter + sub_formatter
            member._formatter = formatter
            return '{{0:{}}}'.format(formatter)

    def populate(member, value=None):
        "Injects the value into the member's formatter and returns the formatted string."
        formatter = member.formatter
        if value is not None:
            value_str = formatter.format(value)
        else:
            value_str = formatter.format(member.default)
        if len(value_str) > len(member) and len(member) != 0:
            raise ValueError(
                'Length of object string {} ({}) exceeds available'
                ' field length for {} ({}).'
                .format(value_str, len(value_str), member.name, len(member)))
        return value_str

    @property
    def length(member):
        return len(member)

    def __len__(member):
        """Returns the length of the member field. The last member has no length.
        Lengths are based on simple subtraction of starting positions."""
        # get cached value
        try:
            return member._length
        # calculate member length
        except AttributeError:
            # compare by member values because member could be an alias
            members = list(type(member))
            try:
                next_index = next(
                    i+1
                    for i, m in enumerate(type(member))
                    if m.value == member.value
                )
            except StopIteration:
                raise TypeError(
                    'The member value {} was not located in the {}.'
                    .format(member.value, type(member).__name__)
                )
            try:
                next_member = members[next_index]
            except IndexError:
                # last member defaults to no length
                length = 0
            else:
                length = next_member.start - member.start
            member._length = length
            return length
This line:
C1 = enum.EnumMeta('C1', (), dict(a = 0))
fails with exactly the same error message. The __new__ method of EnumMeta expects an instance of enum._EnumDict as its last argument. _EnumDict is a subclass of dict and provides an instance variable named _member_names, which of course a regular dict doesn't have. When you go through the standard mechanism of enum creation, this all happens correctly behind the scenes. That's why your other example works just fine.
This line:
C1 = enum.EnumMeta('C1', (), enum._EnumDict())
runs with no error. Unfortunately, the constructor of _EnumDict is defined as taking no arguments, so you can't initialize it with keywords as you apparently want to do.
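One workaround (a sketch based on the explanation above; _EnumDict is a private API, so details vary between Python versions) is to populate the _EnumDict item by item, since its __setitem__ is what maintains _member_names:

import enum

classdict = enum._EnumDict()
classdict._cls_name = 'C1'   # newer Pythons (3.11+) expect this attribute too
for key, value in dict(a=0).items():
    classdict[key] = value   # __setitem__ records key in _member_names

C1 = enum.EnumMeta('C1', (enum.Enum,), classdict)
print(C1.a)  # C1.a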
In the implementation of enum that's backported to Python3.3, the following block of code appears in the constructor of EnumMeta. You could do something similar in your LineMakerMeta class:
def __new__(metacls, cls, bases, classdict):
    if type(classdict) is dict:
        original_dict = classdict
        classdict = _EnumDict()
        for k, v in original_dict.items():
            classdict[k] = v
In the official implementation, in Python 3.5, the if statement and the subsequent block of code are gone for some reason. Therefore classdict must be an honest-to-god _EnumDict, and I don't see why this was done. In any case the implementation of Enum is extremely complicated and handles a lot of corner cases.
I realize this is not a cut-and-dried answer to your question but I hope it will point you to a solution.
Create your LineMakerBase class, and then use it like so:
C1 = LineMakerBase('C1', dict(a=0))
The metaclass was not meant to be used the way you are trying to use it. Check out this answer for advice on when metaclass subclasses are needed.
Some suggestions for your code:
the double try/except in format seems clearer as:
for member in cls:
    if member in kwargs:
        value = kwargs[member]
    elif member.name in kwargs:
        value = kwargs[member.name]
    else:
        value = member.default
this code:
# compare by member values because member could be an alias
members = list(type(member))
would be clearer with list(member.__class__)
has a false comment: listing an Enum class will never include the aliases (unless you have overridden that part of EnumMeta)
instead of the complicated __len__ code you have now, and as long as you are subclassing EnumMeta you should extend __new__ to automatically calculate the lengths once:
# untested
def __new__(metacls, cls, bases, clsdict):
    # let the main EnumMeta code do the heavy lifting
    enum_cls = super(LineMakerMeta, metacls).__new__(metacls, cls, bases, clsdict)
    # go through the members and calculate the lengths
    # (assumes length is a plain attribute rather than the read-only property above)
    canonical_members = [
        member
        for name, member in enum_cls.__members__.items()
        if name == member.name
    ]
    last_member = None
    for next_member in canonical_members:
        next_member.length = 0
        if last_member is not None:
            last_member.length = next_member.start - last_member.start
        last_member = next_member
    return enum_cls
The simplest way to create Enum subclasses on the fly is using Enum itself:
>>> from enum import Enum
>>> MyEnum = Enum('MyEnum', {'a': 0})
>>> MyEnum
<enum 'MyEnum'>
>>> MyEnum.a
<MyEnum.a: 0>
>>> type(MyEnum)
<class 'enum.EnumMeta'>
As for your custom methods, it might be simpler if you used regular functions, precisely because Enum implementation is so special.
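For example (a minimal sketch of that suggestion, with made-up names; the Enum holds only the data and a plain function does the formatting):

from enum import Enum

# Each member's value is (start_position, format_type); End is a sentinel
# so that the last real field has a known width.
A1 = Enum('A1', {'Mode': (0, 's'), 'Level': (8, 'd'),
                 'Method': (10, 'd'), 'End': (20, 's')})

def format_line(line_enum, **kwargs):
    """Right-align each value in the field between its start and the next start."""
    members = list(line_enum)
    parts = []
    for member, nxt in zip(members, members[1:]):
        start, ftype = member.value
        width = nxt.value[0] - start
        value = kwargs.get(member.name, '' if ftype == 's' else 0)
        parts.append('{0:>{1}{2}}'.format(value, width, ftype))
    return ''.join(parts)

print(format_line(A1, Mode='DESIGN', Level=3, Method=1))
# '  DESIGN 3         1'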
