My class looks like this:
class Person:
    def __init__(self, name=None, id_=None):
        self.name = name
        self.id_ = id_
# I'm currently doing this. member object is of Person type.
return template('index.html', name=member.name, id_=member.id_)
# What I want to do
return template('index.html', member=member)
The first way is fine when there aren't many attributes to deal with, but my class currently has around 10 attributes, and it doesn't look good to pass that many parameters to the template function. Now I want to pass an object of this class to a bottle template and use it there. How can I do it?
# What I want to do
return template('index.html', member=member)
Just do that. It should work fine. In your template, you'll simply reference member.name, member.id_, etc.
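For instance, here is a minimal self-contained sketch (it uses an inline template string instead of index.html, just to show the attribute access working; inside index.html you would write the same {{member.name}} expressions):
from bottle import template

class Person:
    def __init__(self, name=None, id_=None):
        self.name = name
        self.id_ = id_

member = Person('toto', '1')
print(template('Name: {{member.name}}, id: {{member.id_}}', member=member))
# prints: Name: toto, id: 1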
If you have Python 3.7+:
from dataclasses import dataclass, asdict

@dataclass
class Person:
    name: str
    id_: str

member = Person('toto', '1')
return template('index.html', **asdict(member))
But maybe it's more interesting to inject your object directly into your template.
class Phone:
def install():
...
class InstagramApp(Phone):
...
def install_app(phone: "Phone", app_name):
phone.install(app_name)
app = InstagramApp()
install_app(app, 'instagram') # <--- is that OK ?
install_app gets a Phone object.
Will it work with an InstagramApp object?
The inheritance works correctly: the install method is inherited from the Phone class. But your code doesn't work as written. When you run it, it will say:
TypeError: Phone.install() takes 0 positional arguments but 2 were given
What are these two arguments that have been passed?
The second one is obviously the 'instagram' string. You passed it, but no parameter accepts it.
The first one comes from Python itself: because you invoke install() on an instance, Python turns it into a bound method and automatically passes a reference to the instance as the first argument (this is how descriptors work). But again, you don't have any parameter to receive it.
To make that work:
class Phone:
def install(self, name):
print(self)
print(name)
class InstagramApp(Phone):
...
def install_app(phone: "Phone", app_name):
phone.install(app_name)
app = InstagramApp()
install_app(app, "instagram")
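As a quick check of the descriptor point above (assuming the fixed Phone class just shown), the call through the instance and the explicit call through the class are equivalent:
app = InstagramApp()
app.install("instagram")         # Python fills in self with app automatically
Phone.install(app, "instagram")  # the explicit, equivalent call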
Yes, methods are also inherited from classes. However, you will need to add a parameter to the install method so it can take the app name:
class Phone:
def install(self, app_name): # Allow the method to take an input app name
...
class InstagramApp(Phone):
...
def install_app(phone: "Phone", app_name):
phone.install(app_name)
app = InstagramApp()
install_app(app, 'instagram') # Yes, this will also work with the InstagramApp class
Yes - as long as InstagramApp doesn't override or delete any of the methods that install_app relies on, and those methods return the same types as the Phone class's do, it will work.
I am curious as to why you pass both the instance and the name as text - you could accomplish the same by simply accessing the __name__ attribute of the class - so, for instance:
app.__class__.__name__ will equal 'InstagramApp'
As others have pointed out you need to fix the various function calls.
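For illustration, a small sketch of that idea (the names here are illustrative, not from the original post): install_app derives the app name from the class of the object it receives instead of taking it as a separate string.
class Phone:
    def install(self, app_name):
        print(f"Installing {app_name}")

class InstagramApp(Phone):
    pass

def install_app(phone: Phone):
    # use the concrete subclass's name instead of a separately passed string
    phone.install(type(phone).__name__)

install_app(InstagramApp())  # Installing InstagramApp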
install_app gets a Phone object. Will it work with an InstagramApp object?
Yes, the inheriting class will have the method, but the method itself needs a self parameter:
class Phone:
    def install(self, app_name: str):  # add self; annotate app_name as str
        pass

class InstagramApp(Phone):
    pass

def install_app(phone: Phone, app_name):  # annotate with the Phone class directly
    phone.install(app_name)

app = InstagramApp()
install_app(app, 'instagram')
I am working with a codebase that has pairs of classes: always one dataclass and one execution class. The dataclass serves as a data collector (as the name suggests).
To "connect" the dataclass to the other class, I set a class variable in the other class to make clear what the relevant dataclass is. This works fine - I can use this class variable to instantiate the data class as I please. However, it is not clear to me how I can use this to specify for a given method that it will return an instance of the linked data class.
Take this example (executable):
from abc import ABC
from dataclasses import dataclass
from typing import ClassVar

@dataclass
class Name(ABC):
    name: str

class RelatedName(ABC):
    _INDIVIDUAL: ClassVar[Name]

    def return_name(self, **properties) -> Name:
        # There is a typing issue here too, but you can ignore that for now
        return self._INDIVIDUAL(**properties)

@dataclass
class BiggerName(Name):
    other_name: str

class RelatedBiggerName(RelatedName):
    _INDIVIDUAL: ClassVar[Name] = BiggerName

if __name__ == "__main__":
    biggie = RelatedBiggerName()
    biggiename = biggie.return_name(name="Alfred", other_name="Biggie").other_name
    print(biggiename)
The script works fine, but there is a typing problem. On the second-to-last line, the type checker complains that the attribute other_name is undefined for the Name class. This is to be expected, but I am not sure how to change the return type of return_name so that it uses the class defined in _INDIVIDUAL.
I tried def return_name(self, **properties) -> _INDIVIDUAL, but that naturally leads to name '_INDIVIDUAL' is not defined.
Perhaps what I am after is not possible. Is it at all possible to have typing within a class that depends on class variables? I'm interested in Python 3.8 and higher.
I agree with @cherrywoods that a custom generic base class seems like the way to go here.
I would like to add my own variation that should do what you want:
from abc import ABC
from dataclasses import dataclass
from typing import Any, Generic, Optional, Type, TypeVar, get_args, get_origin

T = TypeVar("T", bound="Name")

@dataclass
class Name(ABC):
    name: str

class RelatedName(ABC, Generic[T]):
    _INDIVIDUAL: Optional[Type[T]] = None

    @classmethod
    def __init_subclass__(cls, **kwargs: Any) -> None:
        """Identifies and saves the type argument"""
        super().__init_subclass__(**kwargs)
        for base in cls.__orig_bases__:  # type: ignore[attr-defined]
            origin = get_origin(base)
            if origin is None or not issubclass(origin, RelatedName):
                continue
            type_arg = get_args(base)[0]
            # Do not set the attribute for GENERIC subclasses!
            if not isinstance(type_arg, TypeVar):
                cls._INDIVIDUAL = type_arg
                return

    @classmethod
    def get_individual(cls) -> Type[T]:
        """Getter ensuring that we are not dealing with a generic subclass"""
        if cls._INDIVIDUAL is None:
            raise AttributeError(
                f"{cls.__name__} is generic; type argument unspecified"
            )
        return cls._INDIVIDUAL

    def __setattr__(self, name: str, value: Any) -> None:
        """Prevent instances from overwriting `_INDIVIDUAL`"""
        if name == "_INDIVIDUAL":
            raise AttributeError("Instances cannot modify `_INDIVIDUAL`")
        super().__setattr__(name, value)

    def return_name(self, **properties: Any) -> T:
        return self.get_individual()(**properties)

@dataclass
class BiggerName(Name):
    other_name: str

class RelatedBiggerName(RelatedName[BiggerName]):
    pass

if __name__ == "__main__":
    biggie = RelatedBiggerName()
    biggiename = biggie.return_name(name="Alfred", other_name="Biggie").other_name
    print(biggiename)
Works without problems or complaints from mypy --strict.
Differences
The _INDIVIDUAL attribute is no longer marked as a ClassVar because that (for no good reason) disallows type variables.
To protect it from being changed by instances, we use a simple customization of the __setattr__ method.
You no longer need to explicitly set _INDIVIDUAL on any specific subclass of RelatedName. This is taken care of automatically during subclassing by __init_subclass__. (If you are interested in details, I explain them in this post.)
Direct access to the _INDIVIDUAL attribute is discouraged. Instead there is the get_individual getter. If the additional parentheses annoy you, I suppose you can play around with descriptors to construct a property-like situation for _INDIVIDUAL. (Note: you can still just use cls._INDIVIDUAL or self._INDIVIDUAL, it's just that there will be the possible None-type issue.)
The base class is obviously a bit more complicated this way, but on the other hand the creation of specific subclasses is much nicer in my opinion.
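For completeness, a small check of the guard described above, assuming the classes from the snippet (the StillGeneric subclass here is hypothetical):
class StillGeneric(RelatedName[T]):
    """A subclass that is itself still generic, so no type argument is captured."""

try:
    StillGeneric.get_individual()
except AttributeError as exc:
    print(exc)  # StillGeneric is generic; type argument unspecified

print(RelatedBiggerName.get_individual())  # <class '__main__.BiggerName'>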
Hope this helps.
Can you use generics?
from abc import ABC
from dataclasses import dataclass
from typing import ClassVar, TypeVar, Generic, Type

T = TypeVar("T", bound="Name")

@dataclass
class Name(ABC):
    name: str

class RelatedName(ABC, Generic[T]):
    # This would resolve what juanpa.arrivillaga pointed out, but mypy says
    # "ClassVar cannot contain type variables", so I guess your use-case is unsupported:
    # _INDIVIDUAL: ClassVar[Type[T]]
    # One option:
    # _INDIVIDUAL: ClassVar
    # Second option, to demonstrate Type[T]:
    _INDIVIDUAL: Type[T]

    def return_name(self, **properties) -> T:
        return self._INDIVIDUAL(**properties)

@dataclass
class BiggerName(Name):
    other_name: str

class RelatedBiggerName(RelatedName[BiggerName]):
    # see above
    _INDIVIDUAL: Type[BiggerName] = BiggerName

if __name__ == "__main__":
    biggie = RelatedBiggerName()
    biggiename = biggie.return_name(name="Alfred", other_name="Biggie").other_name
    print(biggiename)
mypy reports no errors on this and I think conceptually this is what you want.
I tested on Python 3.10.
I am currently using a root_validator in my FastAPI project using Pydantic like this:
class User(BaseModel):
    id: Optional[int]
    name: Optional[str]

    @root_validator
    def validate(cls, values):
        if not values.get("id") and not values.get("name"):
            raise ValueError("It's an error")
        return values
The issue is that when I access the request body in FastAPI, because of return values, the request body ends up as a plain Python dictionary instead of an object of the User class. How do I get back an object of the User class?
So initially, when I printed my request body, it looked like this: id=0 name='string' (which is how I want it), and printing its type() showed <class 'User'>.
Here is what it looks like with return values: {'id': 0, 'name': 'string'}
I have tried returning just cls instead, but then printing it gives <class 'User'>, and printing its type() gives <class 'pydantic.main.ModelMetaclass'>.
How can I solve this?
I raised this issue in the FastAPI and Pydantic discussions and found the answer there from a community member: https://github.com/tiangolo/fastapi/discussions/4563
The solution is to rename the validate function to anything else, e.g. validate_all_fields.
The reason is that validate is a base method of Pydantic's BaseModel, so defining your own validator with that name shadows it.
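A minimal sketch of the fix, assuming Pydantic v1's root_validator as in the question (only the method name changes, so it no longer shadows BaseModel.validate):
from typing import Optional
from pydantic import BaseModel, root_validator

class User(BaseModel):
    id: Optional[int]
    name: Optional[str]

    @root_validator
    def validate_all_fields(cls, values):  # renamed from `validate`
        if not values.get("id") and not values.get("name"):
            raise ValueError("It's an error")
        return values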
1. I'm using Python 2.7, with the enum module installed through pip.
from enum import Enum

class Format(Enum):
    json = 0
    other = 1

    @staticmethod
    def exist(ele):
        if Format.__members__.has_key(ele):
            return True
        return False

class Weather(Enum):
    good = 0
    bad = 1

    @staticmethod
    def exist(ele):
        if Weather.__members__.has_key(ele):
            return True
        return False

Format.exist('json')
This works well, but I want to improve the code.
2. So I thought a better way might be something like this:
from enum import Enum

class BEnum(Enum):
    @staticmethod
    def exist(ele):
        if BEnum.__members__.has_key(ele):
            return True
        return False

class Format(Enum):
    json = 0
    other = 1

class Weather(Enum):
    good = 0
    bad = 1

Format.exist('json')
However this results in an error, because BEnum.__members__ is a class variable.
How can I get this to work?
There are three things you need to do here. First, you need to make BEnum inherit from Enum:
class BEnum(Enum):
Next, you need to make BEnum.exist a class method:
@classmethod
def exist(cls, ele):
    return cls.__members__.has_key(ele)
Finally, you need to have Format and Weather inherit from BEnum:
class Format(BEnum):
class Weather(BEnum):
With exist as a static method, it can only operate on one specific, hard-coded class, regardless of the class it is called from. By making it a class method, the class it is called on is passed in automatically as the first argument (cls) and can be used for member access.
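Putting the three changes together, here is a sketch (using `ele in cls.__members__`, which works on both Python 2 and 3, in place of has_key):
from enum import Enum

class BEnum(Enum):
    @classmethod
    def exist(cls, ele):
        # cls is whichever subclass this is called on
        return ele in cls.__members__

class Format(BEnum):
    json = 0
    other = 1

class Weather(BEnum):
    good = 0
    bad = 1

print(Format.exist('json'))   # True
print(Weather.exist('json'))  # False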
Here is a great description about the differences between static and class methods.
I'm using the base class constructor as a factory and changing the class in this constructor/factory to select the appropriate class -- is this approach good Python practice, or are there more elegant ways?
I've tried reading up on metaclasses, but without much success.
Here's an example of what I'm doing.
class Project(object):
"Base class and factory."
def __init__(self, url):
if is_url_local(url):
self.__class__ = ProjectLocal
else:
self.__class__ = ProjectRemote
self.url = url
class ProjectLocal(Project):
    def do_something(self):
        # do the stuff locally in the dir pointed to by self.url
        ...

class ProjectRemote(Project):
    def do_something(self):
        # do the stuff communicating with the remote server pointed to by self.url
        ...
With this code I can create an instance of ProjectLocal/ProjectRemote via the base class Project:
project = Project('http://example.com')
project.do_something()
I know that an alternative way is to use a factory function that returns the appropriate class based on the url; then the code would look like this:
def project_factory(url):
if is_url_local(url):
return ProjectLocal(url)
else:
return ProjectRemote(url)
project = project_factory(url)
project.do_something()
Is my first approach just a matter of taste, or does it have some hidden pitfalls?
You shouldn't need metaclasses for this. Take a look at the __new__ method. This will allow you to take control of the creation of the object, rather than just the initialisation, and so return an object of your choosing.
class Project(object):
    "Base class and factory."
    def __new__(cls, url):
        # object.__new__ accepts only the class, so don't forward url here;
        # __init__ below still receives url as usual
        if is_url_local(url):
            return super(Project, cls).__new__(ProjectLocal)
        else:
            return super(Project, cls).__new__(ProjectRemote)

    def __init__(self, url):
        self.url = url
I would stick with the factory function approach. It's very standard python and easy to read and understand. You could make it more generic to handle more options in several ways such as by passing in the discriminator function and a map of results to classes.
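For example, here is a sketch of that more generic form (the factory takes the discriminator function and a mapping of its results to classes; everything other than the names from the question is illustrative):
def project_factory(url, discriminator, class_map):
    """Pick a class based on the discriminator's result and instantiate it with url."""
    return class_map[discriminator(url)](url)

project = project_factory(
    'http://example.com',
    discriminator=is_url_local,                            # from the question
    class_map={True: ProjectLocal, False: ProjectRemote},  # result -> class
)
project.do_something()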
If the first example works it's more by luck than by design. What if you wanted to have an __init__ defined in your subclass?
The following links may be helpful:
http://www.suttoncourtenay.org.uk/duncan/accu/pythonpatterns.html#factory
http://code.activestate.com/recipes/86900/
In addition, since you are using new-style classes, using __new__ as the factory function (and not in the base class itself - a separate class is better) is what is usually done (as far as I know).
A factory function is generally simpler, as other people have already posted.
In addition, it isn't a good idea to set the __class__ attribute the way you have done.
I hope you find the answer and the links helpful.
All the best.
Yeah, as mentioned by @scooterXL, a factory function is the best approach in that case, but I'd like to note the case for factories as classmethods.
Consider the following class hierarchy:
class Base(object):
    def __init__(self, config):
        """Initialize Base object with config as dict."""
        self.config = config

    @classmethod
    def from_file(cls, filename):
        config = read_and_parse_file_with_config(filename)
        return cls(config)  # pass the parsed config, not the filename

class ExtendedBase(Base):
    def behaviour(self):
        pass  # do something specific to ExtendedBase
Now you can create Base objects from config dict and from config file:
>>> Base({"k": "v"})
>>> Base.from_file("/etc/base/base.conf")
But also, you can do the same with ExtendedBase for free:
>>> ExtendedBase({"k": "v"})
>>> ExtendedBase.from_file("/etc/extended/extended.conf")
So, this classmethod factory can also be considered an auxiliary constructor.
I usually have a separate factory class to do this. That way you don't have to use metaclasses or assignments to self.__class__.
I also try to avoid putting the knowledge about which classes are available for creation into the factory. Rather, I have all the available classes register themselves with the factory during module import. They give the factory their class and some information about when to select it (this could be a name, a regex, or a callable such as a class method of the registering class), as sketched below.
This works very well for me and also supports things like encapsulation and information hiding.
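Here is a minimal sketch of that registration idea, with illustrative names (not from my actual code): each class registers a predicate with the factory at import time, and the factory picks the first class whose predicate matches.
class ProjectFactory:
    _registry = []

    @classmethod
    def register(cls, predicate, project_cls):
        """Called by each project class at import time to announce itself."""
        cls._registry.append((predicate, project_cls))

    @classmethod
    def create(cls, url):
        for predicate, project_cls in cls._registry:
            if predicate(url):
                return project_cls(url)
        raise ValueError("No project class registered for %r" % url)

class ProjectLocal:
    def __init__(self, url):
        self.url = url

class ProjectRemote:
    def __init__(self, url):
        self.url = url

# Each class registers itself when its module is imported.
ProjectFactory.register(lambda url: not url.startswith('http'), ProjectLocal)
ProjectFactory.register(lambda url: url.startswith('http'), ProjectRemote)

project = ProjectFactory.create('http://example.com')  # -> ProjectRemote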
I think the second approach using a factory function is a lot cleaner than making the implementation of your base class depend on its subclasses.
Adding to @Brian's answer, the way __new__ works with *args and **kwargs would be as follows:
class Animal:
    def __new__(cls, subclass: str, name: str, *args, **kwargs):
        if subclass.upper() == 'CAT':
            return super(Animal, cls).__new__(Cat)
        elif subclass.upper() == 'DOG':
            return super(Animal, cls).__new__(Dog)
        raise NotImplementedError(f'Unsupported subclass: "{subclass}"')
class Dog(Animal):
def __init__(self, name: str, *args, **kwargs):
self.name = name
print(f'Created Dog "{self.name}"')
class Cat(Animal):
def __init__(self, name: str, *args, num_whiskers: int = 5, **kwargs):
self.name = name
self.num_whiskers = num_whiskers
print(f'Created Cat "{self.name}" with {self.num_whiskers} whiskers')
sir_meowsalot = Animal(subclass='Cat', name='Sir Meowsalot')
shadow = Animal(subclass='Dog', name='Shadow')