I'm running into something, folks, and was hoping to get some ideas/help.
I have a database with a tree structure in which a leaf can participate in several parents via foreign keys. The typical example is a city, which belongs to both a country and a continent. Needless to say, countries and continents should not be duplicated, so before adding a city I need to look the object up in the DB. If it doesn't exist I have to create it; and if, say, the country doesn't exist yet, then I have to check for the continent, and if that doesn't exist either, I have to run the creation process for it as well.
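In other words, a chain of get-or-create lookups. A minimal sketch of the pattern I mean (the helper, the Country class, and the keyword-argument constructors are illustrative, not my actual code):

def get_or_create(session, model, **kwargs):
    # query first; create only when the lookup comes back empty
    instance = session.query(model).filter_by(**kwargs).first()
    if instance is None:
        instance = model(**kwargs)
        session.add(instance)
        session.flush()  # assign the primary key so children can reference it
    return instance

continent = get_or_create(session, Continent, name='Europe')
country = get_or_create(session, Country, name='France', continent_id=continent.id)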
So far I have gotten away with creating the whole bunch of items when I run it from a single file, but once I push the SQLAlchemy code into a module the story becomes different. For some reason the metadata scope becomes limited, and if a table doesn't exist yet, the code starts throwing ProgrammingError exceptions when I query for the foreign key presence (from the city for the country). I have intercepted it, and in the __init__ constructor of the class I am looking for (country) I check whether the table exists and create it if it doesn't. There are two things I have a problem with and need advice on:
1) Verifying the table is inefficient - I am working with the Base.metadata.sorted_tables array, which I have to loop through to figure out whether a table matches my class's __tablename__. Such as:
for table in Base.metadata.sorted_tables:
    # Find the right table in the list of tables
    if table.name == self.__tablename__:
        if __DEBUG__:
            print 'DEBUG: Found table {} that equals the class table {}'.format(table.name, self.__tablename__)
        if not table.exists():
            table.create(session.get_bind())
Needless to say, this takes time, and I am looking for a more efficient way to do the same.
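Something along these lines would avoid the loop entirely (a sketch; it assumes the table is registered in the Base.metadata.tables dictionary and reuses the engine-level has_table check the dialect provides):

table = Base.metadata.tables[self.__tablename__]  # dictionary lookup instead of scanning sorted_tables
bind = session.get_bind()
if not bind.dialect.has_table(bind, table.name):
    table.create(bind)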
2) The second issue is with inheriting from the declarative base (declarative_base()) with respect to OOP in Python. I want to remove some code repetition and pull it into one class from which the other classes will be derived. For instance, the code above can be taken out into a separate function to have something like this:
Base = declarative_base()

class OnDemandTables(Base):
    __tablename__ = 'no_table'
    # id = Column(Integer, Sequence('id'), nullable=False, unique=True, primary_key=True, autoincrement=True)

    def create_my_table(self, session):
        if __DEBUG__:
            print 'DEBUG: Creating tables for the class {}'.format(self.__class__)
            print 'DEBUG: Base.metadata.sorted_tables exists returns {}'.format(Base.metadata.sorted_tables)
        for table in Base.metadata.sorted_tables:
            # Find the right table in the list of tables
            if table.name == self.__tablename__:
                if __DEBUG__:
                    print 'DEBUG: Found table {} that equals the class table {}'.format(table.name, self.__tablename__)
                if not table.exists():
                    table.create(session.get_bind())
class Continent(OnDemandTables):
    __tablename__ = 'continent'
    id = Column(Integer, Sequence('id'), nullable=False, unique=True, primary_key=True, autoincrement=True)
    name = Column(String(64), unique=True, nullable=False)

    def __init__(self, session, continent_description):
        if not isinstance(continent_description, dict):
            raise AttributeError('Continent should be described by a dictionary!')
        self.create_my_table(session)
        if 'continent' not in continent_description:
            raise ReferenceError('No continent can be created without a name! Dictionary is {}'.format(continent_description))
        self.name = continent_description['continent']
        print 'DEBUG: Continent name is {}'.format(self.name)
The problem here is that the metadata tries to link unrelated classes together and requires a __tablename__ and some indexed column to be present in the parent OnDemandTables class, which doesn't make any sense to me.
Any ideas?
Cheers
Wanted to post the solution here for the rest of the gang to keep in mind. Apparently, SQLAlchemy doesn't see the classes in a module if they are not being used, so to speak. After a couple of days of trying to work around things, the simplest solution I found was to do it in a semi-manual way: not to rely on the ORM to construct and build up the database for you, but rather to do that part in a sort of manual approach using class methods. The code is:
__DEBUG__ = True

from sqlalchemy import String, Integer, Column, ForeignKey, BigInteger, Float, Boolean, Sequence
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
from sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound
from sqlalchemy.exc import ProgrammingError
from sqlalchemy import create_engine, schema
from sqlalchemy.orm import sessionmaker

Base = declarative_base()
engine = create_engine("mysql://test:test123@localhost/test", echo=True)
Session = sessionmaker(bind=engine, autoflush=False)
session = Session()
schema.MetaData.bind = engine
class TemplateBase(object):
    __tablename__ = None

    @classmethod
    def create_table(cls, session):
        if __DEBUG__:
            print 'DEBUG: Creating tables for the class {}'.format(cls)
            print 'DEBUG: Base.metadata.sorted_tables exists returns {}'.format(Base.metadata.sorted_tables)
        for table in Base.metadata.sorted_tables:
            # Find the right table in the list of tables
            if table.name == cls.__tablename__:
                if __DEBUG__:
                    print 'DEBUG: Found table {} that equals the class table {}'.format(table.name, cls.__tablename__)
                if not table.exists():
                    if __DEBUG__:
                        print 'DEBUG: Session is {}, engine is {}, table is {}'.format(session, session.get_bind(), dir(table))
                    table.create()

    @classmethod
    def is_provisioned(cls):
        for table in Base.metadata.sorted_tables:
            # Find the right table in the list of tables
            if table.name == cls.__tablename__:
                if __DEBUG__:
                    print 'DEBUG: Found table {} that equals the class table {}'.format(table.name, cls.__tablename__)
                return table.exists()
class Continent(Base, TemplateBase):
    __tablename__ = 'continent'
    id = Column(Integer, Sequence('id'), nullable=False, unique=True, primary_key=True, autoincrement=True)
    name = Column(String(64), unique=True, nullable=False)

    def __init__(self, session, provision, continent_description):
        if not isinstance(continent_description, dict):
            raise AttributeError('Continent should be described by a dictionary!')
        if 'continent' not in continent_description:
            raise ReferenceError('No continent can be created without a name! Dictionary is {}'.format(continent_description))
        self.name = continent_description['continent']
        if __DEBUG__:
            print 'DEBUG: Continent name is {}'.format(self.name)
It gives the following:
1. The class methods is_provisioned and create_table can be called during initial code start-up and will reflect the database state.
2. Class inheritance is done from a second class where these methods are kept, which does not interfere with the ORM classes and hence is not being linked to them.
As the result of the Base.metadata.sorted_tables loop is just the class's own table, the code can be optimized even further by removing the loop. The next step would be to organize the classes whose tables need to be checked (and possibly created) into a list, keeping their linkages in mind, and then loop through it calling is_provisioned and, if necessary, create_table.
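As a sketch, that start-up loop could look like this (Country and City are hypothetical classes defined the same way as Continent, listed in dependency order):

# check/create tables at start-up, in an order that respects the foreign keys
for cls in [Continent, Country, City]:
    if not cls.is_provisioned():
        cls.create_table(session)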
Hope it helps others.
Regards
I'm using sqlalchemy declarative and python2.7 to read asset information from an existing database. The database uses a number of foreign keys for constant values. Many of the foreign keys exist on a different database.
How can I specify a foreign key relationship where the data exists on a separate database?
I've tried to use two separate Base classes, with the models inheriting from them separately.
I've also looked into specifying the primaryjoin keyword in relationship, but I've been unable to understand how it would be done in this case.
I think the problem is that I can only bind one engine to a session object. I can't see any way to ask sqlalchemy to use a different engine when making a query on a nested foreign key item.
OrgBase = declarative_base()
CommonBase = declarative_base()

class SomeClass:
    def __init__(self, db_type, sql_user, sql_pass, sql_host, org_db, common_host):
        self.engine = create_engine("{type}://{user}:{password}@{url}/{name}".format(type=db_type,
                                                                                     user=sql_user,
                                                                                     password=sql_pass,
                                                                                     url=sql_host,
                                                                                     name=org_db))
        self.engine_common = create_engine("{type}://{user}:{password}@{url}/{name}".format(type=db_type,
                                                                                            user=sql_user,
                                                                                            password=sql_pass,
                                                                                            url=common_host,
                                                                                            name="common"))
        self.session = sessionmaker(bind=self.engine)()
        OrgBase.metadata.bind = self.engine
        CommonBase.metadata.bind = self.engine_common
models.py:
class FrameRate(CommonBase):
    __tablename__ = 'content_frame_rates'
    __table_args__ = {'autoload': True}

class VideoAsset(OrgBase):
    __tablename__ = 'content_video_files'
    __table_args__ = {'autoload': True}

    frame_rate_id = Column(Integer, ForeignKey('content_frame_rates.frame_rate_id'))
    frame_rate = relationship(FrameRate, foreign_keys=[frame_rate_id])
Error with this code:
NoReferencedTableError: Foreign key associated with column 'content_video_files.frame_rate_id' could not find table 'content_frame_rates' with which to generate a foreign key to target column 'frame_rate_id'
If I run:
asset = self.session.query(self.VideoAsset).filter_by(uuid=asset_uuid).first()
My hope is that the VideoAsset model can nest frame_rate properly, finding the value on the separate database.
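For reference, this is the general shape of the primaryjoin mapping I was attempting, rewriting the VideoAsset model above (a sketch with my guesses; foreign() marks the joining column, since a real ForeignKey can't point across databases):

from sqlalchemy.orm import foreign, relationship

class VideoAsset(OrgBase):
    __tablename__ = 'content_video_files'
    __table_args__ = {'autoload': True}

    frame_rate_id = Column(Integer)  # plain column, no cross-database ForeignKey
    frame_rate = relationship(
        FrameRate,
        primaryjoin=lambda: foreign(VideoAsset.frame_rate_id) == FrameRate.frame_rate_id,
        viewonly=True)

Even then, I suspect the lazy load would still be emitted on the session's single bound engine, which is really the heart of my question.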
Thank you!
I have read in the following link:
Sqlalchemy adding multiple records and potential constraint violation
that using the SQLAlchemy Core to perform inserts is a much faster option than the ORM's session.add() method,
i.e.:
session.add()
should be replaced with:
session.execute(Entry.__table__.insert(), params=inserts)
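where inserts is a list of per-row parameter dictionaries. A sketch against the Employee model below (column values assumed):

inserts = [
    {'name': 'John', 'department_id': 1},
    {'name': 'Jane', 'department_id': 1},
]
session.execute(Employee.__table__.insert(), params=inserts)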
In the following code I have tried to replace .add with .insert:
from sqlalchemy import Column, DateTime, String, Integer, ForeignKey, func
from sqlalchemy.orm import relationship, backref
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Department(Base):
    __tablename__ = 'department'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Employee(Base):
    __tablename__ = 'employee'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    # Use default=func.now() to set the default hiring time
    # of an Employee to be the current time when an
    # Employee record was created
    hired_on = Column(DateTime, default=func.now())
    department_id = Column(Integer, ForeignKey('department.id'))
    # Use cascade='delete,all' to propagate the deletion of a Department onto its Employees
    department = relationship(
        Department,
        backref=backref('employees',
                        uselist=True,
                        cascade='delete,all'))
from sqlalchemy import create_engine
engine = create_engine('postgres://blah:blah@blah:blah/blah')
from sqlalchemy.orm import sessionmaker
session = sessionmaker()
session.configure(bind=engine)
Base.metadata.create_all(engine)
d = Department(name="IT")
emp1 = Employee(name="John", department=d)
s = session()
s.add(d)
s.add(emp1)
s.commit()
s.delete(d) # Deleting the department also deletes all of its employees.
s.commit()
s.query(Employee).all()
# Insert Option Attempt
from sqlalchemy.dialects.postgresql import insert
d = insert(Department).values(name="IT")
d1 = d.on_conflict_do_nothing()
s.execute(d1)
emp1 = insert(Employee).values(name="John", department=d1)
emp1 = emp1.on_conflict_do_nothing()
s.execute(emp1)
The error I receive:
sqlalchemy.exc.CompileError: Unconsumed column names: department
I can't quite understand the syntax and how to do this the right way; I'm new to SQLAlchemy.
It looks like my question is similar to How to get primary key columns in pd.DataFrame.to_sql insertion method for PostgreSQL "upsert",
so potentially by answering either of our questions you could help two people at the same time ;-)
I am new to SQLAlchemy as well, but this is what I found:
Using your exact code, adding the department didn't work with s.execute(d1), so I changed it to the below and it does work:
with engine.connect() as conn:
    d = insert(Department).values(name="IT")
    d1 = d.on_conflict_do_nothing()
    conn.execute(d1)
I found in the SQLAlchemy documentation that in the past it was just a warning when you tried to use a virtual column that doesn't really exist; from version 0.8 it has been changed to an exception.
As a result, I am not sure you can do that with insert. I think SQLAlchemy does it behind the scenes in some other way when using session.add(). Maybe some experts can elaborate here.
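One workaround that might do it (an assumption on my part, not tested against your schema) is to skip the relationship and supply the real department_id column, since Core inserts only understand actual table columns:

from sqlalchemy import select

# insert the Department first; on_conflict_do_nothing keeps reruns idempotent
s.execute(insert(Department).values(name="IT").on_conflict_do_nothing())
dept_id = s.execute(select([Department.id]).where(Department.name == "IT")).scalar()

# pass the plain foreign-key column instead of the 'department' relationship
s.execute(insert(Employee).values(name="John", department_id=dept_id).on_conflict_do_nothing())
s.commit()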
I hope that will help.
I am using Flask, Python, and SQLAlchemy to connect to a huge DB where a lot of stats are saved. I need to create some useful insights with these stats, so I only need to read/get the data and never modify it.
The issue I have now is the following:
Before I can access a table I need to replicate it in my models file. For example, I see the table Login_Data in the DB, so I go into my models and recreate the exact same table:
class Login_Data(Base):
    __tablename__ = 'login_data'

    id = Column(Integer, primary_key=True)
    date = Column(Date, nullable=False)
    new_users = Column(Integer, nullable=True)

    def __init__(self, date=None, new_users=None):
        self.date = date
        self.new_users = new_users

    def get(self, id):
        if self.id == id:
            return self
        else:
            return None

    def __repr__(self):
        return '<%s(%r, %r, %r)>' % (self.__class__.__name__, self.id, self.date, self.new_users)
I do this because otherwise I can't query it using:
some_data = Login_Data.query.limit(10)
But this feels unnecessary; there must be a better way. What's the point of recreating the models if they are already defined? What shall I use here:
some_data = [SOMETHING HERE SO I DONT NEED TO RECREATE THE TABLE].query.limit(10)
Simple question, but I have not found a solution yet.
Thanks to Tryph for pointing me to the right sources.
To access the data of an existing DB with sqlalchemy you need to use automap. In your configuration file, where you load/declare your DB type, you need to use automap_base(). After that you can create your models and use the correct table names of the DB without specifying everything yourself:
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine
import stats_config
Base = automap_base()
engine = create_engine(stats_config.DB_URI, convert_unicode=True)
# reflect the tables
Base.prepare(engine, reflect=True)
# mapped classes are now created with names by default
# matching that of the table name.
LoginData = Base.classes.login_data
db_session = Session(engine)
After this is done you can now use all your known sqlalchemy functions on:
some_data = db_session.query(LoginData).limit(10)
You may be interested in reflection and automap.
Unfortunately, since I have never used either of those features, I am not able to tell you more about them. I just know that they allow you to use the database schema without explicitly declaring it in Python.
I'm using SQLAlchemy and have some schemas that are account specific. The name of the schema is derived from the account ID, so I don't have the name of the schema until I hit my application service or repository layer. I'm wondering if it's possible to run a query against an entity that has its schema set dynamically at runtime.
I know I need to set __table_args__['schema'] and have tried doing that using the type() built-in, but I always get the following error:
could not assemble any primary key columns for mapped table
I'm ready to give up and just write straight SQL, but I really hate to do that. Any idea how this can be done? I'm using SA 0.99 and I do have a PK mapped.
Thanks
As of SQLAlchemy 1.1, this can be done easily using the schema_translate_map execution option:
https://docs.sqlalchemy.org/en/11/changelog/migration_11.html#multi-tenancy-schema-translation-for-table-objects
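A short sketch of how that looks (the User model and the connection URL are placeholders):

from sqlalchemy import create_engine
from sqlalchemy.orm import Session

engine = create_engine("postgresql://user:pass@localhost/mydb")

# route tables that have no explicit schema (None) to the account's schema
account_engine = engine.execution_options(
    schema_translate_map={None: "account_1234"})

session = Session(bind=account_engine)
rows = session.query(User).all()  # now compiles with account_1234 as the schema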
One option would be to reflect the particular account-dependent tables. Here is the SQLAlchemy documentation on the matter.
Alternatively, you can create the table with a static schema attribute and update it as needed at runtime, then run the queries you need. I can't think of a non-messy way to do this, so here's the messy option:
Use a loop to update the schema property in each table definition whenever the account is switched.
Add all the tables that are account-specific to a list.
If the tables are expressed in the declarative syntax, you have to modify the DeclarativeName.__table__.schema attribute. I'm not sure whether you also need to modify DeclarativeName.__table_args__['schema'], but I guess it won't hurt.
If the tables are expressed in the old-style Table syntax, you have to modify the Table.schema attribute.
If you're using text for any relationships or foreign keys, that will break, and you have to inspect each table for such hard-coded usage and change it.
For example, user_id = Column(ForeignKey('my_schema.user.id')) needs to be written as user_id = Column(ForeignKey(User.id)). Then you can change the schema of User to my_new_schema. Otherwise, at query time SQLAlchemy will be confused, because the foreign key will point to my_schema.user.id while the query points to my_new_schema.user.
I'm not sure whether more complicated relationships can be expressed without the use of plain text, so I guess that's the limit of my proposed solution.
Here's an example I wrote up in the terminal:
>>> from sqlalchemy import Column, Table, Integer, String, select, ForeignKey
>>> from sqlalchemy.orm import relationship, backref
>>> from sqlalchemy.ext.declarative import declarative_base
>>> B = declarative_base()
>>>
>>> class User(B):
...     __tablename__ = 'user'
...     __table_args__ = {'schema': 'first_schema'}
...     id = Column(Integer, primary_key=True)
...     name = Column(String)
...     email = Column(String)
...
>>> class Posts(B):
...     __tablename__ = 'posts'
...     __table_args__ = {'schema': 'first_schema'}
...     id = Column(Integer, primary_key=True)
...     user_id = Column(ForeignKey(User.id))
...     text = Column(String)
...
>>> str(select([User.id, Posts.text]).select_from(User.__table__.join(Posts)))
'SELECT first_schema."user".id, first_schema.posts.text \nFROM first_schema."user" JOIN first_schema.posts ON first_schema."user".id = first_schema.posts.user_id'
>>> account_specific = [User, Posts]
>>> for Tbl in account_specific:
...     Tbl.__table__.schema = 'second_schema'
...
>>> str(select([User.id, Posts.text]).select_from(User.__table__.join(Posts)))
'SELECT second_schema."user".id, second_schema.posts.text \nFROM second_schema."user" JOIN second_schema.posts ON second_schema."user".id = second_schema.posts.user_id'
As you can see, the same query refers to second_schema after I update the table's schema attribute.
edit: Although you can do what I did here, using the schema translation map as shown in the other answer is the proper way to do it.
In my case the schemas are set statically, and foreign keys need the same treatment. I also have an additional issue in that I have multiple schemas containing multiple tables, so I did this:
from sqlalchemy.ext.declarative import declarative_base

staging_dbase = declarative_base()
model_dbase = declarative_base()

def adjust_schemas(staging, model):
    for vv in staging_dbase.metadata.tables.values():
        vv.schema = staging
    for vv in model_dbase.metadata.tables.values():
        vv.schema = model

def all_tables():
    return staging_dbase.metadata.tables.union(model_dbase.metadata.tables)
Then in my startup code:
adjust_schemas(staging=staging_name, model=model_name)
You can mod this for a single declarative base.
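For a single declarative base, the same idea collapses to something like this (a sketch):

def adjust_schema(base, schema_name):
    # repoint every table belonging to one declarative base at the given schema
    for table in base.metadata.tables.values():
        table.schema = schema_name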
I'm working on a project in which I have to create Postgres schemas and tables dynamically and then insert data into the proper schema. Here is something I have done; maybe it will help someone:
import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from app.models.user import User

engine_uri = "postgres://someusername:somepassword@localhost:5432/users"
engine = create_engine(engine_uri, pool_pre_ping=True)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

def create_schema(schema_name: str):
    """
    Creates a new postgres schema

    - **schema_name**: name of the new schema to create
    """
    if not engine.dialect.has_schema(engine, schema_name):
        engine.execute(sqlalchemy.schema.CreateSchema(schema_name))

def create_tables(schema_name: str):
    """
    Create new tables for postgres schema

    - **schema_name**: schema in which tables are to be created
    """
    if (
        engine.dialect.has_schema(engine, schema_name) and
        not engine.dialect.has_table(engine, str(User.__table__.name))
    ):
        User.__table__.schema = schema_name
        User.__table__.create(engine)

def add_data(schema_name: str):
    """
    Add data to a particular postgres schema

    - **schema_name**: schema in which data is to be added
    """
    if engine.dialect.has_table(engine, str(User.__table__.name)):
        db = SessionLocal()
        db.connection(execution_options={
            "schema_translate_map": {None: schema_name}},
        )

        user = User()
        user.name = "Moin"
        user.salary = 10000

        db.add(user)
        db.commit()
In an attempt to learn SQLAlchemy (and Python), I am trying to duplicate an already existing project, but am having trouble figuring out SQLAlchemy and inheritance with Postgres.
Here is an example of what our Postgres database does (obviously, this is simplified):
CREATE TABLE system (system_id SERIAL PRIMARY KEY,
                     system_name VARCHAR(24) NOT NULL);

CREATE TABLE file_entry(file_entry_id SERIAL,
                        file_entry_msg VARCHAR(256) NOT NULL,
                        file_entry_system_name VARCHAR(24) REFERENCES system(system_name) NOT NULL);

CREATE TABLE ops_file_entry(CONSTRAINT ops_file_entry_id_pkey PRIMARY KEY (file_entry_id),
                            CONSTRAINT ops_system_name_check CHECK ((file_entry_system_name = 'ops'::bpchar))) INHERITS (file_entry);

CREATE TABLE eng_file_entry(CONSTRAINT eng_file_entry_id_pkey PRIMARY KEY (file_entry_id),
                            CONSTRAINT eng_system_name_check CHECK ((file_entry_system_name = 'eng'::bpchar))) INHERITS (file_entry);

CREATE INDEX ops_file_entry_index ON ops_file_entry USING btree (file_entry_system_name);
CREATE INDEX eng_file_entry_index ON eng_file_entry USING btree (file_entry_system_name);
The inserts would then be done with a trigger, so that the rows are properly inserted into the child tables. Something like:
CREATE FUNCTION file_entry_insert_trigger() RETURNS "trigger"
    AS $$
DECLARE
BEGIN
    IF NEW.file_entry_system_name = 'eng' THEN
        INSERT INTO eng_file_entry(file_entry_id, file_entry_msg, file_entry_system_name)
        VALUES (NEW.file_entry_id, NEW.file_entry_msg, NEW.file_entry_system_name);
    ELSEIF NEW.file_entry_system_name = 'ops' THEN
        INSERT INTO ops_file_entry(file_entry_id, file_entry_msg, file_entry_system_name)
        VALUES (NEW.file_entry_id, NEW.file_entry_msg, NEW.file_entry_system_name);
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
In summary, I have a parent table with a foreign key to another table. Then I have two child tables, and the inserts are done based upon a given value. In my example above, if file_entry_system_name is 'ops', the row goes into the ops_file_entry table; 'eng' goes into eng_file_entry. We have hundreds of child tables in our production environment, and considering the amount of data, this really speeds things up, so I would like to keep the same structure. I can query the parent, and as long as I give it the right system_name, it immediately knows which child table to look into.
My desire is to emulate this with SQLAlchemy, but I can't find any examples that go into this much detail. I have looked at the SQL generated by SQLAlchemy in the examples, and I can tell it is not doing anything similar on the database side.
The best I can come up with is something like:
class System(_Base):
    __tablename__ = 'system'

    system_id = Column(Integer, Sequence('system_id_seq'), primary_key=True)
    system_name = Column(String(24), nullable=False)

    def __init__(self, name):
        self.system_name = name

class FileEntry(_Base):
    __tablename__ = 'file_entry'

    file_entry_id = Column(Integer, Sequence('file_entry_id_seq'), primary_key=True)
    file_entry_msg = Column(String(256), nullable=False)
    file_entry_system_name = Column(String(24), ForeignKey('system.system_name'), nullable=False)

    __mapper_args__ = {'polymorphic_on': file_entry_system_name}

    def __init__(self, msg, name):
        self.file_entry_msg = msg
        self.file_entry_system_name = name

class ops_file_entry(FileEntry):
    __tablename__ = 'ops_file_entry'

    ops_file_entry_id = Column(None, ForeignKey('file_entry.file_entry_id'), primary_key=True)

    __mapper_args__ = {'polymorphic_identity': 'ops_file_entry'}
In the end, what am I missing? How do I tell SQLAlchemy to associate anything inserted into FileEntry with a system name of 'ops' with the ops_file_entry table? Is my understanding way off?
Some insight into what I should do would be amazing.
You just create a new instance of ops_file_entry (shouldn't this be OpsFileEntry?), add it to the session, and upon flush one row will be inserted into table file_entry as well as into table ops_file_entry.
You don't need to set the file_entry_system_name attribute, nor the trigger.
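Roughly, the joined-table-inheritance mapping could look like this (a sketch; I'm assuming the polymorphic_identity values are meant to match the 'ops'/'eng' discriminator values):

from sqlalchemy import Column, ForeignKey, Integer, String

class FileEntry(Base):
    __tablename__ = 'file_entry'
    file_entry_id = Column(Integer, primary_key=True)
    file_entry_msg = Column(String(256), nullable=False)
    file_entry_system_name = Column(String(24), nullable=False)
    __mapper_args__ = {'polymorphic_on': file_entry_system_name}

class OpsFileEntry(FileEntry):
    __tablename__ = 'ops_file_entry'
    file_entry_id = Column(ForeignKey('file_entry.file_entry_id'), primary_key=True)
    __mapper_args__ = {'polymorphic_identity': 'ops'}

# one flush, two INSERTs: the ORM fills in file_entry_system_name = 'ops' itself
session.add(OpsFileEntry(file_entry_msg='hello'))
session.commit()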
I don't really know Python or SQLAlchemy, but I figured I'd give it a shot for old times' sake. ;)
Have you tried basically setting up your own trigger at the application level? Something like this might work:
from sqlalchemy import event

def my_after_insert_listener(mapper, connection, target):
    # set up your constraints to store the data as you want
    if target.file_entry_system_name == 'eng':
        pass  # do your child table insert
    elif target.file_entry_system_name == 'ops':
        pass  # do your child table insert
    # ...

# associate the listener function with FileEntry,
# to execute during the "after_insert" hook
event.listen(FileEntry, 'after_insert', my_after_insert_listener)
I'm not positive, but I think target (or perhaps mapper) should contain the data being inserted.
Events (esp. after_create) and mapper will probably be helpful.
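For example, one way to install the question's database-side trigger automatically would be DDL attached to after_create (a sketch; it assumes the plpgsql function from the question already exists in the database):

from sqlalchemy import DDL, event

trigger_ddl = DDL("""
CREATE TRIGGER file_entry_insert_trg
BEFORE INSERT ON file_entry
FOR EACH ROW EXECUTE PROCEDURE file_entry_insert_trigger();
""")

# emit the trigger DDL right after CREATE TABLE file_entry, on postgres only
event.listen(FileEntry.__table__, 'after_create',
             trigger_ddl.execute_if(dialect='postgresql'))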