I'm trying to create a back-end app using FastAPI and SQLAlchemy. I have a lot of entities with database relations. My question is: how can I speed up development? Right now I write code like this for each entity:
#app.get("/holidays")
def getHolidays():
session = Session(bind=engine)
holidays: List[Holiday] = session.query(Holiday).all()
return [x.to_json() for x in holidays]
#app.get("/exclusive_operations")
def getExclusiveOperations():
session = Session(bind=engine)
exclusive_operations: List[ExclusiveOperation] = session.query(ExclusiveOperation).all()
return [x.to_json() for x in exclusive_operations]
#app.get('/category_descriptions')
def getCategoryDescr():
session = Session(bind=engine)
category_descrs: List[CategoryDescr] = session.query(CategoryDescr).all()
return [x.to_json() for x in category_descrs]
So if I want to create all CRUD operations, I need to write 12 near-identical methods for 3 entities. Is there another solution?
This is Python: as a dynamic language, functions and methods are created at runtime. The "@app.get" decorator is what registers your views with the application, not their existence at the top level of a module.
Therefore, you can create a for loop that simply recreates and registers the view for each of your entities - it can be done either at the module level or inside a function.
(it is nice to keep in mind that the "@xxxx" decorator syntax is just syntactic sugar for calling the decorator with the decorated function as its sole argument)
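In other words, these two forms are exactly equivalent:

@app.get("/holidays")
def view(): ...

# ...is the same as:
def view(): ...
view = app.get("/holidays")(view)

With that in mind, the loop looks like this: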
for Entity, name in [
    (Holiday, "holidays"),
    (ExclusiveOperation, "exclusive_operations"),
    (CategoryDescr, "category_descriptions"),
]:
    def freeze_var_wrapper(Entity, name):
        # this intermediary function is needed, otherwise the Entity and name
        # variables would always be up-to-date inside the view function
        # and would always point to the last value of the external for-loop
        # after it finished executing:
        def view():
            session = Session(bind=engine)
            entities = session.query(Entity).all()
            return [x.to_json() for x in entities]

        # optional, may facilitate debugging:
        view.__name__ = f"get{Entity.__name__}s"

        # actually registers the view function with the framework
        # (could be done in the same line, without the "view_registrar" var):
        view_registrar = app.get(f"/{name}")
        view_registrar(view)

    freeze_var_wrapper(Entity, name)
There are other ways of doing this that might remove the boilerplate and look more elegant - for example, class inheritance with an appropriate __init_subclass__ in a base class (even if the framework does not use "class views", we can register the bound method for each class, which is just a callable):
class BaseView:
    Entity: type
    view_name: str

    def __init_subclass__(cls, *args, **kw):
        super().__init_subclass__(*args, **kw)
        app.get(f"/{cls.view_name}")(cls.view)
        # above, cls.view is bound to the subclass being processed, therefore
        # the class attributes as defined in each class body are used inside the method.
        # this could easily register post, delete and detail views as well

    @classmethod
    def view(cls):
        session = Session(bind=engine)
        entities = session.query(cls.Entity).all()
        return [x.to_json() for x in entities]
class HolidayView(BaseView):
    Entity = Holiday
    view_name = "holidays"
    # and that's just it

class ExclusiveOperationView(BaseView):
    Entity = ExclusiveOperation
    view_name = "exclusive_operations"

class CategoryDescrView(BaseView):
    Entity = CategoryDescr
    view_name = "category_descriptions"
I've created decorators that wrap functions before, but in this instance I don't need to wrap anything, so I'm guessing I'm using the wrong paradigm. Maybe somebody can help me figure this out and solve my ultimate goal.
What I imagined was a decorator that, when called (at import time), takes 3 arguments:
The decorated function (that resides inside a Model class)
The name of a data member of the class (i.e. a database field, e.g. a name field of type CharField)
The name of a parent key data member in the class (e.g. parent of type ForeignKey)
My decorator code would register the function, the field, and related key associated with it in a global list variable.
I would then have a class that inherits from Model and overrides save() and delete(). It would cycle through the global list to update the associated fields using the output of each function, and then call the parent model's .save() method so that it would update its decorated fields as well.
I quickly realized, though, that the decorator isn't being passed the decorated function, because I get the exception I created for when no field or parent key is supplied to the decorator at import time.
In case this isn't clear, here's the code I have:
from typing import Dict, List

from django.conf import settings
from django.db.models import Model, ManyToManyField

updater_list: Dict[str, List] = {}

def field_updater_function(fn, update_field_name=None, parent_field_name=None):
    """
    This is a decorator for functions in a Model class that are identified to be used to update a supplied field and
    fields of any linked parent record (if the record is changed). The function should return a value compatible with
    the field type supplied. These decorators are identified by the MaintainedModel class, whose save and delete
    methods override the parent model and call the given functions to update the supplied field. It also calls the
    update methods of linked dependent models (if supplied).
    """
    if update_field_name is None and parent_field_name is None:
        raise Exception(
            "Either an update_field_name or parent_field_name argument is required."
        )
    # Get the name of the class the function belongs to
    class_name = fn.__qualname__.split(".")[0]
    func_dict = {
        "function": fn.__name__,
        "update_field": update_field_name,
        "parent_field": parent_field_name,
    }
    if class_name in updater_list:
        updater_list[class_name].append(func_dict)
    else:
        updater_list[class_name] = [func_dict]
    if settings.DEBUG:
        print(f"Added field_updater_function decorator to function {fn.__qualname__}")
    return fn
class MaintainedModel(Model):
    """
    This class maintains database field values for a django.models.Model class whose values can be derived using a
    function. If a record changes, the decorated function is used to update the field value. It can also propagate
    changes of records in linked models. Every function in the derived class decorated with the
    `@field_updater_function` decorator (defined above, outside this class) will be called and the associated field
    will be updated. Only methods that take no arguments are supported. This class overrides the class's save and
    delete methods as triggers for the updates.
    """

    def save(self, *args, **kwargs):
        # Set the changed value triggering this update
        super().save(*args, **kwargs)
        # Update the fields that change due to the above change (if any)
        self.update_decorated_fields()
        # Now save the updated values (i.e. save again)
        super().save(*args, **kwargs)
        # Percolate changes up to the parents (if any)
        self.call_parent_updaters()

    def delete(self, *args, **kwargs):
        # Delete the record triggering this update
        super().delete(*args, **kwargs)  # Call the "real" delete() method.
        # Percolate changes up to the parents (if any)
        self.call_parent_updaters()

    def update_decorated_fields(self):
        """
        Updates every field identified in each field_updater_function decorator that generates its value
        """
        for updater_dict in self.get_my_updaters():
            update_fun = getattr(self, updater_dict["function"])
            update_fld = updater_dict["update_field"]
            if update_fld is not None:
                setattr(self, update_fld, update_fun())

    def call_parent_updaters(self):
        parents = []
        for updater_dict in self.get_my_updaters():
            parent_fld = updater_dict["parent_field"]
            if parent_fld is not None and parent_fld not in parents:
                parents.append(parent_fld)
        for parent_fld in parents:
            parent_instance = getattr(self, parent_fld)
            if isinstance(parent_instance, MaintainedModel):
                parent_instance.save()
            elif isinstance(parent_instance, ManyToManyField):
                parent_instance.all().save()
            else:
                raise Exception(
                    f"Parent class {parent_instance.__class__.__name__} or {self.__class__.__name__} must inherit "
                    f"from {MaintainedModel.__name__}."
                )

    @classmethod
    def get_my_updaters(cls):
        """
        Convenience method to retrieve all the updater functions of the calling model.
        """
        if cls.__name__ in updater_list:
            return updater_list[cls.__name__]
        else:
            return []

    class Meta:
        abstract = True
And here's the first decorated function I applied it to, which triggers the exception at import time:
from django.db import models

class Tracer(models.Model, TracerLabeledClass):
    id = models.AutoField(primary_key=True)
    name = models.CharField(
        max_length=256,
        unique=True,
        help_text="A unique name or lab identifier of the tracer, e.g. 'lysine-C14'.",
    )
    compound = models.ForeignKey(
        to="DataRepo.Compound",
        on_delete=models.RESTRICT,
        null=False,
        related_name="tracer",
    )

    class Meta:
        verbose_name = "tracer"
        verbose_name_plural = "tracers"
        ordering = ["name"]

    def __str__(self):
        return str(self._name())

    @field_updater_function("name", "infusates")
    def _name(self):
        # format: `compound - [ labelname,labelname,... ]` (but no spaces)
        if self.labels is None or self.labels.count() == 0:
            return self.compound.name
        return (
            self.compound.name
            + "-["
            + ",".join(list(map(lambda l: str(l), self.labels.all())))
            + "]"
        )
And my exception:
...
  File ".../tracer.py", line 31, in Tracer
    @field_updater_function("name")
  File ".../maintained_model.py", line 19, in field_updater_function
    raise Exception(
Exception: Either an update_field_name or parent_field_name argument is required.
The basic idea is that we have a bunch of fields in the database that can be derived fully from other fields in the database. We originally started out with cached_properties, but they provided virtually no speedup, so we'd rather just save the computed values in the database.
I'd written a caching mechanism which auto-refreshes the cache using an override of .save and .delete, and that works great, but has various drawbacks.
We could custom-code an override of .save() that explicitly calls the function to update every field, but I wanted something that makes the overhead of maintaining field values as simple as applying decorators to the functions that perform the updates, supplying only the fields they compute values for and the links to other affected fields up the hierarchy. Such as:

@field_updater_function("name", "infusates")
def _name(self):
    ...
Is there something other than decorators I should be using to accomplish this? I could make a dummy decorator using functools.wraps that returns the supplied function as-is (I think), but that feels wrong.
You need to make a decorator factory. That is, a function you call with arguments that returns a decorator function that gets passed the function to be decorated.
A typical way to do that is with nested functions. A function defined within another function can access the variables in the enclosing function's namespace. Here's what I think it would look like for your code:
def field_updater_function(update_field_name=None, parent_field_name=None):  # factory
    # docstring omitted for brevity
    if update_field_name is None and parent_field_name is None:
        raise Exception(
            "Either an update_field_name or parent_field_name argument is required."
        )

    def decorator(fn):  # the actual decorator
        class_name = fn.__qualname__.split(".")[0]
        func_dict = {
            "function": fn.__name__,
            "update_field": update_field_name,  # you can still access variables
            "parent_field": parent_field_name,  # from the enclosing namespace
        }
        if class_name in updater_list:
            updater_list[class_name].append(func_dict)
        else:
            updater_list[class_name] = [func_dict]
        if settings.DEBUG:
            print(f"Added field_updater_function decorator to function {fn.__qualname__}")
        return fn

    return decorator  # here the factory returns the decorator
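With the factory in place, the original decoration works; the @-line is simply evaluated in two steps:

@field_updater_function("name", "infusates")
def _name(self):
    ...

# ...which is equivalent to:
decorator = field_updater_function("name", "infusates")
_name = decorator(_name)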
I'm building an HTTP API and I factored out a lot of code into a superclass that handles requests to a collection of objects. In my subclass, I specify what database models the operation should work on and the superclass takes care of the rest.
This means that I don't need to re-implement the get, post, etc. methods from the superclass. However, I want to change their docstrings in the subclass so that I can have documentation more specific to the actual model the endpoint is operating on.
What is the cleanest way to inherit the parent class's functionality but change the docstrings?
Example:
class CollectionApi(Resource):
    """Operate on a collection of something.
    """

    class Meta(object):
        model = None
        schema = None

    def get(self):
        """Return a list of collections.
        """
        # snip

    def post(self):
        """Create a new item in this collection.
        """
        # snip

class ActivityListApi(CollectionApi):
    """Operations on the collection of Activities.
    """

    class Meta(object):
        model = models.Activity
        schema = schemas.ActivitySchema
Specifically, I need ActivityListApi to have get and post run like in CollectionApi, but I want different docstrings (for automatic documentation's sake).
I can do this:
def get(self):
    """More detailed docs
    """
    return super(ActivityListApi, self).get()
But this seems messy.
class CollectionApi(Resource):
    """Operate on a collection of something.
    """

    def _get(self):
        """actual work... lotsa techy doc here!
        the get methods only serve to have something to hang
        their user docstrings onto
        """
        pass

    def get(self):
        """user-intended doc for CollectionApi"""
        return self._get()

class ActivityListApi(CollectionApi):
    def get(self):
        """user-intended doc for ActivityListApi"""
        return self._get()
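A quick way to confirm that documentation tooling will see the per-class docstrings:

print(CollectionApi.get.__doc__)     # user-intended doc for CollectionApi
print(ActivityListApi.get.__doc__)   # user-intended doc for ActivityListApi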
I'm writing a website generator with various classes that represent the content in the webpages such as Page, NewsPost, Tag, Category etc.
I'd like to be able to construct these objects plainly, and I don't have a problem with that.
However, I'd also like to construct these objects within a certain context - say, the context of a website with a particular root URL. Let's say I put this context into an instance of a class ContentManager. This is the code I ultimately hope to end up with:
page = Page(title='Test Page', content='hello world!')
assert page.cm == None
cm = ContentManager(root_url='//localhost')
page = cm.Page(title='Test Page', content='hello world!')
assert page.cm == cm
I can easily manage this if page.cm is a per-instance attribute set in __init__, but I need to call class methods on cm.Page which need access to the cm object, so it has to be a class-level attribute.
If I just set it as a class attribute on the Page class directly, it would end up affecting other ContentManagers' pages as well, which is not desirable.
How would I achieve this? Metaclasses? Or some sort of class factory function?
One solution could be creating a subclass of Page for every ContentManager instance:
class Page:
    cm = None

    def __init__(self, title, content):
        self.title = title
        self.content = content

class ContentManager:
    def __init__(self, root_url):
        class PerContentManagerPage(Page):
            cm = self

        self.Page = PerContentManagerPage

page0 = Page(title='Test Page', content='hello world!')

cm = ContentManager(root_url='//localhost')
page = cm.Page(title='Test Page', content='hello world!')

cm2 = ContentManager(root_url='//localhost')
page2 = cm2.Page(title='Test Page 2', content='hello world!')

assert page0.cm is None
assert page.cm == cm
assert page2.cm == cm2
In Python a class is also an object (an instance of its metaclass). This solution creates a new subclass of Page every time you instantiate ContentManager. This means that the cm.Page class isn't the same as the cm2.Page class, but both are subclasses of Page. This is why cm.Page.cm and cm2.Page.cm can have different values: they are two separate classes (class objects).
Note: although in Python this can be solved by creating subclass objects dynamically, problems usually have better solutions. Creating classes/subclasses dynamically is a warning sign (a hack).
I'm still convinced that you shouldn't create a page subclass for each content manager instance. Instead I would simply use instances of the global ContentManager and Page classes by connecting them with references to each other in a suitable way and putting the data and the code into instance attributes/methods.
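A minimal sketch of that composition-based approach (the create_page helper and attribute names here are illustrative, not from the original code):

class Page:
    def __init__(self, title, content, cm=None):
        self.title = title
        self.content = content
        self.cm = cm  # plain instance reference to the owning manager

class ContentManager:
    def __init__(self, root_url):
        self.root_url = root_url

    def create_page(self, title, content):
        # a factory method instead of a per-manager subclass
        return Page(title, content, cm=self)

cm = ContentManager(root_url='//localhost')
page = cm.create_page(title='Test Page', content='hello world!')
assert page.cm is cm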
Setting everything else aside, you'll just need to dynamically construct a class to tie to each instance of ContentManager; we can do this using the built-in type function, which can either, with one argument, give us the type of an object, or, with three arguments (class name, base classes, and class dictionary) construct a new class.
Here's a sample of how that might look in your situation:
class Page(object):
    # This is just a default value if we construct a Page
    # outside the context of a ContentManager
    cm = None

    def __init__(self, *args, **kwargs):
        self.args = args
        self.kwargs = kwargs

    @classmethod
    def do_class_thing(cls):
        return cls.cm

class ContentManager(object):
    def __init__(self, root_url):
        self.url = root_url
        # This is where the magic happens. We're telling type() to
        # construct a class, with the class name ContentManagerPage,
        # have it inherit from the above explicitly-declared Page
        # class, and then overriding its __dict__ such that the class
        # cm variable is set to be the ContentManager we're
        # constructing it from.
        self.Page = type(str('ContentManagerPage'), (Page,), {'cm': self})
Once you've got all this set up, it's simple enough to do exactly what you're trying to do, with cm as a class variable.
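For example, matching the assertions from the question:

cm = ContentManager(root_url='//localhost')

page = Page(title='Test Page', content='hello world!')
assert page.cm is None                 # a plain Page has no manager

page = cm.Page(title='Test Page', content='hello world!')
assert page.cm is cm                   # instances see their manager
assert cm.Page.do_class_thing() is cm  # and so do class methods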
I wrote unit tests first, then I made all the tests pass; now I am looking at how to refactor the code to avoid repetition.
I have a function which returns different values depending on the context. All context is extracted on-the-fly from the Django models.
Currently my code is structured like that:
from django.test import TestCase

class MyTest(TestCase):
    def test_case1(self):
        user = User.objects.create(username='user')
        tested_class = MyClass(user)
        Model1.objects.create(...)  # one type of context
        self.assertEqual(...)  # test the class method for this type of context

    def test_case2(self):
        user = User.objects.create(username='user')
        tested_class = MyClass(user)
        Model2.objects.create(...)  # another type of context
        self.assertEqual(...)  # test the class method for this type of context

    def test_case3(self):
        user = User.objects.create(username='user')
        tested_class = MyClass(user)
        Model1.objects.create(...)  # yet another type of context
        Model2.objects.create(...)
        self.assertEqual(...)  # test the class method for this type of context
Obviously, the code is quite repetitive: the first two lines are the same in each function.
My first idea was to use a shared setup function:
def setUp(self):
    self.user = User.objects.create(username='user')
    self.tested_class = MyClass(self.user)
but this solution didn't work: all model updates were shared, and tests became dependent on each other.
What I need instead is a clean state ("empty database") before starting each test.
What else can I try?
Why don't you just destroy all the objects you don't want in your teardown? Looks like Django allows you to do this type of thing pretty easily.
def tearDown(self):
    User.objects.all().delete()
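Combined with the shared setUp, every test method then starts from a known state. A sketch based on the code above (keeping the question's ... placeholders):

from django.test import TestCase

class MyTest(TestCase):
    def setUp(self):
        # runs before every test method
        self.user = User.objects.create(username='user')
        self.tested_class = MyClass(self.user)

    def tearDown(self):
        # runs after every test method; drop any leftover context
        Model1.objects.all().delete()
        Model2.objects.all().delete()
        User.objects.all().delete()

    def test_case1(self):
        Model1.objects.create(...)  # per-test context only
        self.assertEqual(...)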
I'm a web application developer, and in using SQLAlchemy I find it clumsy to do this in many of my controllers when I want a specific row from (say) the users table:
from model import dbsession  # SQLAlchemy SessionMaker instance
from model import User

user = dbsession().query(User).filter_by(**some_kw_args).first()
Or say I want to add a user to the table (assuming another controller):
from model import dbsession  # SQLAlchemy SessionMaker instance
from model import User

user = User("someval", "anotherval", "yanv")
dbsession().add(user)
So, because of that clumsiness (I won't go into some of my other personal idioms), I didn't like having to do all of that just to add a record to the table or to get a record from it. So I decided (after a lot of nasty hacking on SQLAlchemy, and deciding I was doing too many "magical" things) that this was appropriate for the proxy pattern.
I (at first) did something like this inside of the model module:
def proxy_user(delete=False, *args, **kwargs):
    session = DBSession()
    # Keyword args? Let's instantiate it...
    if (len(kwargs) > 0) and delete:
        obj = session.query(User).filter_by(**kwargs).first()
        session.delete(obj)
        return True
    elif len(kwargs) > 0:
        kwargs.update({'removed': False})
        return session.query(User).filter_by(**kwargs).first()
    else:
        # Otherwise, let's create an empty one and add it to the session...
        obj = User()
        session.add(obj)
        return obj
I did this for all of my models (nasty duplication of code, I know) and it works quite well. I can pass in keyword arguments to the proxy function and it handles all of the session querying for me (even providing a default filter keyword for the removed flag). I can initialize an empty model object and then add data to it by updating the object attributes and all of those changes are tracked (and committed/flushed) because the object has been added to the SQLAlchemy session.
So, to reduce duplication, I put the majority of the logic into a decorator function and am now doing this:
def proxy_model(proxy):
    """Decorator for the proxy_model pattern."""
    def wrapper(delete=False, *args, **kwargs):
        model = proxy()
        session = DBSession()
        # Keyword args? Let's instantiate it...
        if (len(kwargs) > 0) and delete:
            obj = session.query(model).filter_by(**kwargs).first()
            session.delete(obj)
            return True
        elif len(kwargs) > 0:
            kwargs.update({'removed': False})
            return session.query(model).filter_by(**kwargs).first()
        else:
            # Otherwise, let's create an empty one and add it to the session...
            obj = model()
            session.add(obj)
            return obj
    return wrapper

# The proxy_model decorator is then used like so:
@proxy_model
def proxy_user(): return User
So now, in my controllers I can do this:
from model import proxy_user

# Fetch a user
user = proxy_user(email="someemail@ex.net")  # Returns a user model filtered by that email

# Creating a new user; ZopeTransaction will handle the commit so I don't do it manually
new_user = proxy_user()
new_user.email = 'anotheremail@ex.net'
new_user.password = 'just an example'
If I need to do other, more complex queries, I will usually write a function that handles it if I use it often. If it is a one-time thing I will just import the dbsession instance and do the "standard" SQLAlchemy ORM query.
This is much cleaner and works wonderfully, but I still feel like it isn't quite "locked in". Can anyone (especially more experienced Python programmers) provide a better idiom that would achieve a similar amount of lucidity while being a clearer abstraction?
You mention not liking having to do "all of that", where "all of that" looks an awful lot like only 1-2 lines of code, so I don't feel this is really necessary. Basically, I don't think either statement you started with is all that verbose or confusing.
However, if I had to come up with a way to express this, I wouldn't use a decorator here, as you aren't really decorating anything; the proxy_user function doesn't do anything without the decorator applied, in my opinion. Since you need to provide the name of the model somehow, I think you're better off just using a function and passing the model class to it. I also think that rolling the delete functionality into your proxy is out of place. Finally, depending on how you've configured your Session, the repeated calls to DBSession() may be creating new, unrelated sessions, which will cause problems if you need to work with multiple objects in the same transaction.
Anyway, here's a quick stab at how I would refactor your decorator into a pair of functions:
def find_or_add(model, session, **kwargs):
    if len(kwargs) > 0:
        obj = session.query(model).filter_by(**kwargs).first()
        if not obj:
            obj = model(**kwargs)
            session.add(obj)
    else:
        # Otherwise, let's create an empty one and add it to the session...
        obj = model()
        session.add(obj)
    return obj

def find_and_delete(model, session, **kwargs):
    deleted = False
    obj = session.query(model).filter_by(**kwargs).first()
    if obj:
        session.delete(obj)
        deleted = True
    return deleted
Again, I'm not convinced this is necessary but I think I can agree that:
user = find_or_add(User, mysession, email="bob@localhost.com")
is perhaps nicer looking than the straight SQLAlchemy code necessary to find or create a user and add them to the session.
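For comparison, the straight SQLAlchemy version of that find-or-create would be something like:

user = mysession.query(User).filter_by(email="bob@localhost.com").first()
if not user:
    user = User(email="bob@localhost.com")
    mysession.add(user)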
I like the above functions better than your current decorator approach because:
The names clearly denote the intent, whereas I feel proxy_user doesn't really make it clear that you want a user object if it exists and otherwise want to create one.
The session is managed explicitly
They don't require me to wrap every model in a decorator
The find_or_add function always returns an instance of model instead of sometimes returning True, a query result set, or a model instance.
the find_and_delete function always returns a boolean indicating whether or not it was able to find and delete the record specified in kwargs.
Of course you might consider using a class decorator to add these functions as methods on your model classes, or perhaps deriving your models from a base class that includes this functionality so that you can do something like:
# let's add a classmethod to User or its base class:
class User(...):
    ...

    @classmethod
    def find_or_add(cls, session, **kwargs):
        if len(kwargs) > 0:
            obj = session.query(cls).filter_by(**kwargs).first()
            if not obj:
                obj = cls(**kwargs)
                session.add(obj)
        else:
            # Otherwise, let's create an empty one and add it to the session...
            obj = cls()
            session.add(obj)
        return obj

user = User.find_or_add(session, email="someone@tld.com")
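A class-decorator variant of the same idea might look like this - a sketch that reuses the module-level helpers defined earlier, where Base stands for whatever declarative base your models use:

def with_finders(model_cls):
    """Class decorator attaching the module-level helpers as classmethods."""
    def _find_or_add(cls, session, **kwargs):
        return find_or_add(cls, session, **kwargs)

    def _find_and_delete(cls, session, **kwargs):
        return find_and_delete(cls, session, **kwargs)

    model_cls.find_or_add = classmethod(_find_or_add)
    model_cls.find_and_delete = classmethod(_find_and_delete)
    return model_cls

@with_finders
class User(Base):
    ...

user = User.find_or_add(session, email="someone@tld.com")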