Does anyone have any insight on organizing SQLAlchemy-based projects? I have many tables and classes with foreign keys and relations. What is everyone doing in terms of separating classes, tables, and mappers? I am relatively new to the framework, so any help would be appreciated.
Example:
classA.py # table definition and class A definition
classB.py # table definition and class B definition
### model.py
import classA, classB
mapper(classA.classA, classA.table)
mapper(classB.classB, classB.table)
Including the mappers inside classA and classB works, but it poses cross-import issues when building relations. Maybe I am missing something :)
Take a look at how a Pylons project includes its SQLAlchemy setup:
meta.py includes engine and metadata objects
models package includes declarative classes (no mapper needed). Inside that package, structure your classes by relevance into modules.
Maybe a good example would be the reddit source code :)
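A minimal sketch of that layout, collapsed into one snippet for brevity (module names like meta.py and models/user.py follow Pylons conventions but are assumptions here; on SQLAlchemy older than 1.4, import declarative_base from sqlalchemy.ext.declarative instead):

```python
# meta.py -- hypothetical module holding the engine, Session, and Base
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import sessionmaker, declarative_base

engine = create_engine("sqlite:///:memory:")  # your real DB URL goes here
Session = sessionmaker(bind=engine)
Base = declarative_base()

# models/user.py -- one module per group of related classes
class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

# tables registered on Base are created from Base.metadata
Base.metadata.create_all(engine)
```

The point is that the declarative classes carry their own Table and mapper, so the per-class modules only need to import Base from meta.py.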
There are two features in SQLAlchemy's design that help avoid cross imports when defining relations:
The backref argument of relation() allows you to define the backward relation from the same side.
Using strings (model class and field names). Unfortunately this works for declarative only, which is not your case.
See this chapter in tutorial for more information.
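For the non-declarative case, here is a sketch of the backref approach (table and class names are invented for illustration; relationship() is the current spelling of relation(), and on SQLAlchemy older than 1.4, orm.mapper() plays the role of map_imperatively()):

```python
# Classical (non-declarative) mapping: backref lets the "many" side define
# both directions, so Parent's module never has to import Child's module.
from sqlalchemy import (Table, Column, Integer, String, ForeignKey,
                        MetaData, create_engine)
from sqlalchemy.orm import registry, relationship, sessionmaker

metadata = MetaData()
mapper_registry = registry(metadata=metadata)

parent_table = Table(
    "parent", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)
child_table = Table(
    "child", metadata,
    Column("id", Integer, primary_key=True),
    Column("parent_id", Integer, ForeignKey("parent.id")),
)

class Parent:  # plain classes, as in the question
    pass

class Child:
    pass

mapper_registry.map_imperatively(Parent, parent_table)
mapper_registry.map_imperatively(Child, child_table, properties={
    # backref creates Parent.children automatically, entirely from this side
    "parent": relationship(Parent, backref="children"),
})
```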
Related
I am working on a project, where all the MongoDB collections contains mandatory fields.
While modeling the same in FastAPI, I am trying to create ABC(Abstract Base Class) for mandatory fields and trying to inherit in child classes.
Issue is: Code is not considering fields in ABC class at all.
This URL says "Models can't be inherited".
My environment is: Python + FastAPI + MongoDB. I am using ODMantic for MongoDB operations.
Is there any workaround for this issue? Any help is much appreciated.
OK, they can't be inherited, but what do you need them for? I don't see the real question here.
I can only make assumptions on what you may be needing:
If you need to check the input, then FastAPI has you covered with pydantic. See https://fastapi.tiangolo.com/tutorial/body/?h=pydantic#create-your-data-model . You can then create the ODMantic model by passing the input as a dictionary (omodel(**model_name.dict()), or whatever names you use).
If you want to reduce the amount of copy-and-paste code, or want the two models to share a common base, the link you mentioned has docs on how to integrate it with FastAPI: https://art049.github.io/odmantic/usage_fastapi/
Apart from the two points above, I do not understand what other needs you could have. If this answer did not get you on the right path, let me know, but before please be more specific about your goal.
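To illustrate the shared-base idea on the input side only, here is a sketch using plain pydantic (the field names are invented, and the ODM model handed the dictionary is hypothetical):

```python
from pydantic import BaseModel

class MandatoryFields(BaseModel):
    # fields every collection must carry (names invented for illustration)
    created_by: str
    tenant_id: str

class PersonIn(MandatoryFields):
    # the request model inherits the mandatory fields from the base
    name: str
    age: int

payload = PersonIn(created_by="api", tenant_id="t1", name="Bob", age=30)
# hand the validated data to the ODM layer, e.g. PersonModel(**payload.dict())
data = payload.dict()
```

Pydantic models inherit normally, so the mandatory fields only need to be written once.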
I'm using SQLAlchemy Automap with reflection to connect to an existing database. Some of the relationships work properly and some do not. I'd like a way to audit the results of prepare() so I can better understand what I'm working with. How can I view the relationship objects produced after I run prepare()?
Base.classes.<classname>.__table__ shows the tables and included ForeignKey objects as described in the documentation, but no relationships or backreferences are included there, probably because it's at the Table level rather than the class level.
I'm not sure exactly what Automap produces, but inspect() may help:
from sqlalchemy.inspection import inspect
relations = inspect(Base.classes.<classname>).relationships.items()
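A fuller sketch of that audit against a throwaway SQLite schema (the table names are invented; on SQLAlchemy older than 1.4, use Base.prepare(engine, reflect=True) instead of autoload_with):

```python
# Reflect a small schema with Automap, then list what prepare() generated.
from sqlalchemy import create_engine, text
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.inspection import inspect

engine = create_engine("sqlite:///:memory:")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE parent (id INTEGER PRIMARY KEY)"))
    conn.execute(text(
        "CREATE TABLE child (id INTEGER PRIMARY KEY, "
        "parent_id INTEGER REFERENCES parent(id))"))

Base = automap_base()
Base.prepare(autoload_with=engine)

# audit every mapped class: relationship name and direction
for cls in Base.classes:
    for name, rel in inspect(cls).relationships.items():
        print(cls.__name__, "->", name, rel.direction.name)
```

Tables whose foreign keys Automap could not interpret will simply show no relationships here, which makes the gaps easy to spot.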
It's more of a general question but I am trying to implement this using Python on top of Peewee as ORM. What is a decent OO way of abstracting the DB out of a python program?
In Peewee, classes are defined that inherit from peewee.Model and have Peewee fields as attributes. For example:
class Person(peewee.Model):
    class Meta:
        database = db

    name = peewee.TextField()
    height = peewee.DecimalField()
In an OO implementation we would like to have methods such as grow(size), die(),.. to be part of the objects. Is it best to build a class on top of these Peewee models to contain such functionality or should this be put in the model itself?
I can remember in a Java EE program that we used to have a DAO (Data access object) and DTO (Data transfer object). The peewee model object is a DAO or can it be both? Is there some sort of pattern that can be applied here?
Peewee is an ActiveRecord ORM, so there's no distinction between the data access and the object representation. This means when you execute queries the data is returned to you as model instances. Given that this is the case, it's common to put methods on the model itself, since you're using it anyways.
Whether you want to build a service layer on top of your models is entirely up to you. If you have mutually-dependent models this may make sense.
I have a class, User, in my existing Python project that I would like to map to a table, but I'm not sure of the best way to do this.
Does this mean I can remove:
class User:
pass
from my model/__init__.py?
Or should I leave that in there, and have something like:
from project.model.user import User
class User:
pass
In essence, having two different classes with the same name?
Thanks.
You should not define a class that maps onto a table in the model's __init__.py file, nor should you have two different classes with the same name.
Classes that map onto tables belong in your project's model directory, grouped into modules. Then import the classes in __init__.py to make them available.
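As a sketch of that layout (the file and package names are illustrative):

```
project/
    model/
        __init__.py   # from project.model.user import User
        user.py       # class User mapped onto the users table
        profile.py    # related classes grouped by module
```

Code elsewhere can then write from project.model import User without caring which module the class lives in.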
I'm using SQLAlchemy, and I can create the tables I have defined in /model/__init__.py, but my classes, tables, and mappings are defined in other files in the /model directory.
For example I have a profile class and a profile table which are defined and mapped in /model/profile.py
To create the tables I run:
paster setup-app development.ini
But my problem is that the tables that I have defined in /model/__init__.py are created properly but the table definitions found in /model/profile.py are not created. How can I execute the table definitions found in the /model/profile.py so that all my tables can be created?
Thanks for the help!
I ran into the same problem with my first real Pylons project. The solution that worked for me was this:
Define tables and classes in your profile.py file
In your __init__.py, add from profile import * after your init_model definition
I then added all of my mapper definitions afterwards. Keeping them all in the init file solved some problems I was having relating tables defined in different files.
Also, I've since created projects using the declarative method and didn't need to define the mapping in the init file.
Just import your other table modules in your __init__.py, and use the metadata object from model.meta in the other files. Pylons' default setup_app function creates all tables found in model.meta's metadata object after importing it.
If you are using the declarative style, be sure to use Base.metadata for table generation.