Is Python Camelot tied to Elixir?

The docs for Camelot say that it uses Elixir models. Since SQLAlchemy has included declarative_base for a while, I had used that instead of Elixir for another app. Now I would like to use the SQLAlchemy/declarative models directly in Camelot.
There is a post on Stack Overflow that says Camelot is not tied to Elixir and that using different models would be possible, but it doesn't say how.
Camelot's original model.py only has this content:
import camelot.types
from camelot.model import metadata, Entity, Field, ManyToOne, OneToMany, Unicode, Date, Integer, using_options
from camelot.view.elixir_admin import EntityAdmin
from camelot.view.forms import *
__metadata__ = metadata
I added my SQLAlchemy model and changed model.py to this:
import camelot.types
from camelot.model import metadata, Entity, Field, ManyToOne, OneToMany, Unicode, Date, using_options
from camelot.view.elixir_admin import EntityAdmin
from camelot.view.forms import *
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
__metadata__ = metadata

class Test(Base):
    __tablename__ = "test"
    id = Column(Integer, primary_key=True)
    text = Column(String)
It didn't work. When I start main.py, I can see the GUI and Test in the sidebar, but can't see any rows. This is the tail of the traceback:
  File "/usr/lib/python2.6/dist-packages/camelot/view/elixir_admin.py", line 52, in get_query
    return self.entity.query
AttributeError: type object 'Test' has no attribute 'query'
This is the elixir_admin.py code for lines 46-52:
@model_function
def get_query(self):
    """:return: an sqlalchemy query for all the objects that should be
    displayed in the table or the selection view. Overwrite this method to
    change the default query, which selects all rows in the database.
    """
    return self.entity.query
If this code is causing the problem, how do I overwrite the method to change the default query to make it work?
How can you use SQLAlchemy/declarative models in Camelot?

Here is some sample code using Declarative to define a Movie model for Camelot; some explanation can be found here.
import sqlalchemy.types
from sqlalchemy import Column
from sqlalchemy.ext.declarative import ( declarative_base,
                                         _declarative_constructor )
from camelot.admin.entity_admin import EntityAdmin
from camelot.model import metadata
import camelot.types
from elixir import session

class Entity( object ):

    def __init__( self, **kwargs ):
        _declarative_constructor( self, **kwargs )
        session.add( self )

Entity = declarative_base( cls = Entity,
                           metadata = metadata,
                           constructor = None )

class Movie( Entity ):
    __tablename__ = 'movie'
    id = Column( sqlalchemy.types.Integer, primary_key = True )
    name = Column( sqlalchemy.types.Unicode(50), nullable = False )
    cover = Column( camelot.types.Image(), nullable = True )

    class Admin( EntityAdmin ):
        list_display = ['name']
        form_display = ['name', 'cover']

Which version of Camelot are you using?
With the current version of Camelot (11.12.30) it is possible to use Declarative through some
hacks. The upcoming version will make it much easier, and after that the examples will be
ported to Declarative as well.

Related

sqlalchemy mixin: after_create not firing in child

I am working on an ORM-style version of the pq library (a PostgreSQL-powered Python queue system) where users can have their own queue model. It also has added features such as bulk insert/get, asynchronous support and more (if all goes well I'll be able to publish it).
I am having difficulties creating a trigger (I use a PostgreSQL notification system) automatically after table creation (I want to make the usage as simple as possible, so that would be much better than adding an additional classmethod for creating the trigger).
This is similar to the answer in this post, however I cannot use that solution because I need to pass a connection (for escaping SQL identifiers by checking the dialect of the connection, and for checking if objects exist beforehand).
Here is my attempt at it, based on the post I mentioned earlier. I apologize for the long code, but I figured I had to include a bit of context.
Base model
from sqlalchemy import (BIGINT, Column, func, Index, nullslast,
                        nullsfirst, SMALLINT, TIMESTAMP)
from sqlalchemy.orm import declared_attr, declarative_mixin
from sqlalchemy.event import listens_for

# this is the function that returns the base model
def postgres_queue_base(schema:str='public', tz_aware:bool=True, use_trigger:bool=True) -> 'PostgresQueueBase':

    @declarative_mixin  # this is only for MyPy, it does not modify anything
    class PostgresQueueBase:
        __tablename__ = 'queue'

        @declared_attr
        def __table_args__(cls):
            return (Index(nullsfirst(cls.schedule_at), nullslast(cls.dequeued_at), postgresql_where=(cls.dequeued_at == None)),
                    {'schema':schema})

        id = Column('id', BIGINT, primary_key=True)
        internal_mapping = Column('internal_mapping', BIGINT, nullable=False)
        enqueued_at = Column('enqueued_at', TIMESTAMP(timezone=tz_aware), nullable=False, server_default=func.now())
        dequeued_at = Column('dequeued_at', TIMESTAMP(timezone=tz_aware))
        expected_at = Column(TIMESTAMP(timezone=tz_aware))
        schedule_at = Column(TIMESTAMP(timezone=tz_aware))
        status = Column(SMALLINT, index=True)

    @listens_for(PostgresQueueBase, "instrument_class", propagate=True)
    def instrument_class(mapper, class_):
        print('EVENT INSTRUMENT CLASS')
        if use_trigger and mapper.local_table is not None:
            trigger_for_table(table=mapper.local_table)

    def trigger_for_table(table):
        print('Registering after_create event')

        @listens_for(table, "after_create")
        def create_trigger(table, connection):
            print('AFTER CREATE EVENT')
            # code that creates triggers and logs that (here I'll just print something and put pseudo code in a comment)
            # trig = PostgresQueueTrigger(schema=get_schema_from_model(table), table_name=table.name, connection=connection)
            # trig.add_trigger()
            print('Creating notify function public.notify_job')
            # unique trigger name using hash of schema.table_name (avoids problems with long names and special chars)
            print('Creating trigger trigger_job_5d69fc3870b446d0a1f56a793b799ae3')

    return PostgresQueueBase
When I try the base model:
from sqlalchemy import Column, create_engine, INTEGER, TEXT
from sqlalchemy.orm import declarative_base

# IMPORTANT: inherit both a declarative base AND the postgres queue base
Base = declarative_base()
PostgresQueueBase = postgres_queue_base(schema='public')

# create custom queue model
class MyQueue(Base, PostgresQueueBase):
    # optional custom table name (by default it is "queue")
    __tablename__ = 'demo_queue'
    # custom columns
    operation = Column(TEXT)
    project_id = Column(INTEGER)

# create table in database
# change connection string accordingly!
engine = create_engine('postgresql://username:password@localhost:5432/postgres')
Base.metadata.create_all(bind=engine)
only this gets printed:
EVENT INSTRUMENT CLASS
Registering after_create event
I cannot see "AFTER CREATE EVENT" printed out 😟. How do I get the "after_create" event to be fired?
Thanks in advance for your help 👍!
Sorry, I finally figured it out... The table already existed, so the events were never firing. Also, the code above has some errors in the events (I could not test them since they were not being executed), and the composite index in __table_args__ somehow gets the name `" NULLS FIRST"`. I used a hash to get a better name and avoid problems with character limits or escaping.
import hashlib

from sqlalchemy import (BIGINT, Column, func, Index, nullslast,
                        nullsfirst, SMALLINT, TIMESTAMP)
from sqlalchemy.orm import declared_attr, declarative_mixin
from sqlalchemy.event import listens_for

# this is the function that returns the base model
def postgres_queue_base(schema:str='public', tz_aware:bool=True, use_trigger:bool=True) -> 'PostgresQueueBase':

    @declarative_mixin  # this is only for MyPy, it does not modify anything
    class PostgresQueueBase:
        __tablename__ = 'queue'

        @declared_attr
        def __table_args__(cls):
            # to prevent any problems such as escaping, SQL injection or limit of characters I'll just md5 the table name for the index
            md5 = hashlib.md5(cls.__tablename__.encode('utf-8')).hexdigest()
            return (Index(f'queue_prio_ix_{md5}', nullsfirst(cls.schedule_at), nullslast(cls.dequeued_at),
                          postgresql_where=(cls.dequeued_at == None)),
                    {'schema':schema})

        id = Column('id', BIGINT, primary_key=True)
        internal_mapping = Column('internal_mapping', BIGINT, nullable=False)
        enqueued_at = Column('enqueued_at', TIMESTAMP(timezone=tz_aware), nullable=False, server_default=func.now())
        dequeued_at = Column('dequeued_at', TIMESTAMP(timezone=tz_aware))
        expected_at = Column(TIMESTAMP(timezone=tz_aware))
        schedule_at = Column(TIMESTAMP(timezone=tz_aware))
        status = Column(SMALLINT, index=True)

    if use_trigger:
        @listens_for(PostgresQueueBase, "instrument_class", propagate=True)
        def class_instrument(mapper, class_):
            if mapper.local_table is not None:
                create_trigger_event(table=mapper.local_table)

        def create_trigger_event(table):
            @listens_for(table, "after_create")
            def create_trigger(target, connection, **kw):
                print('Create trigger')

    return PostgresQueueBase
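The after_create event only fires when the table is actually created by create_all(), so a quick way to verify the fix is to make sure the table does not exist beforehand. A sketch based on the question's demo code, with a hypothetical connection string:
from sqlalchemy import Column, create_engine, TEXT
from sqlalchemy.orm import declarative_base

Base = declarative_base()
PostgresQueueBase = postgres_queue_base(schema='public')

class MyQueue(Base, PostgresQueueBase):
    __tablename__ = 'demo_queue'
    operation = Column(TEXT)

# hypothetical connection string, adjust as needed
engine = create_engine('postgresql://username:password@localhost:5432/postgres')
Base.metadata.drop_all(bind=engine)    # make sure the table does not exist yet
Base.metadata.create_all(bind=engine)  # "Create trigger" is printed now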

SQLAlchemy: how to create dynamic columns multiple times

I succeeded in creating an SQL table with columns defined dynamically, thanks to Python class reflection.
But I cannot run the code more than once.
For instance, the following import_file should create a static table and a dynamic table with specific columns.
It works the first time I run it. But the second time it crashes and returns the following error:
Table 'dynamic' is already defined for this MetaData instance
Code example:
from sqlalchemy import Column, Integer, String, Float, Boolean, Table
from sqlalchemy import create_engine, MetaData
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, relationship
from sqlalchemy.orm import clear_mappers
import os

def import_file(filename, columns):
    path = filename
    if os.path.exists(path):
        os.remove(path)
    engine = create_engine(f"sqlite:///{path}", echo=True)
    clear_mappers()
    Base = declarative_base()

    class StaticTable(Base):
        __tablename__ = "static"
        id = Column(Integer, primary_key=True)
        name = Column(String)

    class DynamicTable(Base):
        __tablename__ = "dynamic"
        id = Column(Integer, primary_key=True)

    for c in columns:
        setattr(DynamicTable, c, Column(String))

    Base.metadata.create_all(engine)

import_file("test.db", columns=["age", "test"])              # WORKS
import_file("test2.db", columns=["id", "age", "foo", "bar"])  # NOT WORKING
I tried to use sqlalchemy.orm.clear_mappers, but unsuccessfully.
Any idea how I can resolve this?
I think this is not your full code, because the conflict is on Base.metadata.
Mappers link ORM classes to tables; what conflicts here are the table definitions already registered on the metadata, so clearing the mappers alone is not enough.
You can try something like this:
import sqlalchemy.orm

import_file("test.db", columns=["age", "test"])
Base.metadata.clear()
sqlalchemy.orm.clear_mappers()
import_file("test2.db", columns=["id", "age", "foo", "bar"])
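Equivalently, the cleanup can live at the top of import_file itself, so every call starts from a clean registry. A sketch of that, assuming Base is defined once at module level (which, as suspected above, is probably the case in the real code):
import os
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import clear_mappers

Base = declarative_base()  # assumed: one module-level Base shared by all calls

def import_file(filename, columns):
    # reset leftover table and mapper definitions from any previous call
    Base.metadata.clear()
    clear_mappers()
    if os.path.exists(filename):
        os.remove(filename)
    engine = create_engine(f"sqlite:///{filename}", echo=True)

    class StaticTable(Base):
        __tablename__ = "static"
        id = Column(Integer, primary_key=True)
        name = Column(String)

    class DynamicTable(Base):
        __tablename__ = "dynamic"
        id = Column(Integer, primary_key=True)

    for c in columns:
        setattr(DynamicTable, c, Column(String))

    Base.metadata.create_all(engine)

import_file("test.db", columns=["age", "test"])
import_file("test2.db", columns=["age", "foo", "bar"])  # no conflict: metadata was cleared (note: 'id' is left out, it is already the primary key)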

Should I use a single class to put SQLAlchemy table specifications and my business logic?

I have a class Contract to represent my contracts:
.../mypackage/Contract.py
class Contract:

    # setters and getters.

    def isValid( self, contract_number=None ):
        # code
        pass

    def cancelTheContract( self, contract_number=None ):
        # code
        pass
And my SQLAlchemy Contract class:
.../mypackage/orm.py
from sqlalchemy import create_engine
from sqlalchemy import MetaData
from sqlalchemy import Column, ForeignKey, Integer, String, Float, Table
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

db = create_engine( 'mysql://myuser:mypasswd@localhost/mydatabase' )
contracts = Table( 'contracts', MetaData( bind = None ) )
Base = declarative_base()

class Connection:
    def connect( self ):
        Session = sessionmaker( bind = db )
        session = Session()
        return session

class Contract( Base ):
    __tablename__ = 'contracts'
    id = Column( Integer, primary_key = True )
    type = Column( String )
    price = Column( Float )
So...
Would it be OK to merge both Contract classes into a single one?
If not, I have to instantiate one class specific to the database table and another class specific to the business logic, so whenever I have to deal with database data, manipulate it and put it back, I have to handle two objects that are basically the same thing.
Well... I guess I'm missing some important concept here.
What should I read to understand the implications of my question better?
Thanks!
Gio
Yes. The philosophy of ORMs is to map physical tables to business-entity objects, so it is best practice to combine your two classes. SQLAlchemy attributes manage the persistent fields of your entity, and you can encapsulate all the business logic in the same class, per standard object-oriented modeling techniques.
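As a sketch of what such a merged class can look like (the column names follow the question; the business-method bodies are hypothetical):
from sqlalchemy import Column, Integer, String, Float, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Contract(Base):
    """One class: persistent fields plus business logic."""
    __tablename__ = 'contracts'

    id = Column(Integer, primary_key=True)
    type = Column(String(50))
    price = Column(Float)

    # business logic lives on the same mapped class
    def is_valid(self):
        # hypothetical rule: a contract is valid if it has a positive price
        return self.price is not None and self.price > 0

    def cancel(self):
        # hypothetical behaviour: mark the contract as cancelled
        self.type = 'cancelled'

# usage sketch with an in-memory SQLite database
engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
contract = Contract(type='rental', price=100.0)
session.add(contract)
session.commit()
print(contract.is_valid())  # True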

Instantiating object automatically adds to SQLAlchemy Session. Why?

From my understanding of SQLAlchemy, in order to add a model to a session, I need to call session.add(obj). However, for some reason, in my code, SQLAlchemy seems to do this automatically.
Why is it doing this, and how can I stop it? Am I approaching session in the correct way?
example
>>> from database import Session as db
>>> import clients
>>> from instances import Instance
>>> from uuid import uuid4
>>> len(db.query(Instance).all())
0  # Note, no instances in database/session
>>> i = Instance(str(uuid4()), clients.get_by_code('AAA001'), [str(uuid4())])
>>> len(db.query(Instance).all())
1  # Why?? I never called db.add(i)!
database.py
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session
from sqlalchemy.ext.declarative import declarative_base

import config

Base = declarative_base()

class Database():
    def __init__(self):
        db_url = 'postgresql://{:s}:{:s}@{:s}:{}/{:s}'.format(
            config.database['user'],
            config.database['password'],
            config.database['host'],
            config.database['port'],
            config.database['dbname']
        )
        self.engine = create_engine(db_url)
        session_factory = sessionmaker(bind=self.engine)
        self.session = scoped_session(session_factory)

Database = Database()
Session = Database.session
instance.py
from sqlalchemy import Column, Text, ForeignKey
from sqlalchemy.orm import relationship
from sqlalchemy.dialects.postgresql import UUID, ARRAY

import database

Base = database.Base

class Instance(Base):
    __tablename__ = 'instances'
    uuid = Column(UUID, primary_key=True)
    client_code = Column(
        Text, ForeignKey('clients.code', ondelete='CASCADE'), nullable=False)
    mac_addresses = Column(ARRAY(Text, as_tuple=True),
                           primary_key=True)
    client = relationship("Client", back_populates="instances")

    def __init__(self, uuid, client, mac_addresses):
        self.uuid = uuid
        self.client = client
        self.mac_addresses = tuple(mac_addresses)
client.py
from sqlalchemy import Column, Text
from sqlalchemy.orm import relationship

import database
from database import Session as db

Base = database.Base

class Client(Base):
    __tablename__ = 'clients'
    code = Column(Text, primary_key=True)
    name = Column(Text)
    instances = relationship("Instance", back_populates='client')

    def __init__(self, code, name=None):
        self.code = code
        self.name = name

def get_by_code(code):
    client = db.query(Client).filter(Client.code == code).first()
    return client
When you create a SQLAlchemy object and link it directly to another SQLAlchemy object that is already in a session, both objects end up in the session; this is the save-update cascade that is on by default for relationships, combined with autoflush.
The reason is that SQLAlchemy needs to make sure you can query these objects.
Take, for example, a user with addresses.
If you create a user in code, with an address, both the user and the address end up in the session, because the address is linked to the user and SQLAlchemy needs to make sure you can query all addresses of a user using user.addresses.all().
In that case all (possibly) existing addresses need to be fetched, as well as the new address you just added. For that purpose the newly added address needs to be saved in the database.
To prevent this from happening (for example if you only need objects to just calculate with), you can link the objects with their IDs/Foreign Keys:
address.user_id = user.user_id
However, if you do this, you won't be able to access the SQLAlchemy properties anymore. So user.addresses or address.user will no longer yield results.
The reverse is also true; I asked a question myself a while back why linking two objects by ID will not result in SQLAlchemy linking these objects in the ORM:
relevant stackoverflow question
another description of this behavior
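A small sketch of the difference described above, using a hypothetical User/Address pair rather than the models from the question:
from sqlalchemy import Column, ForeignKey, Integer, Text, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    addresses = relationship('Address', back_populates='user')

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email = Column(Text)
    user_id = Column(Integer, ForeignKey('users.id'))
    user = relationship('User', back_populates='addresses')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

user = User()
session.add(user)
session.commit()

# linked through the relationship: the new Address is cascaded into the session
a1 = Address(email='a@example.com')
user.addresses.append(a1)          # in pre-2.0 SQLAlchemy, a1.user = user behaves the same way
session.commit()
print(session.query(Address).count())  # 1, although session.add(a1) was never called

# linked only through the foreign key: the session never sees the object
a2 = Address(email='b@example.com', user_id=user.id)
session.commit()
print(session.query(Address).count())  # still 1
print(a2.user)                         # None: the relationship is not populated either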

How to create an SQL View with SQLAlchemy?

Is there a "Pythonic" way (I mean, no "pure SQL" query) to define an SQL view with SQLAlchemy?
Update: SQLAlchemy now has a great usage recipe here on this topic, which I recommend. It covers different SQL Alchemy versions up to the latest and has ORM integration (see comments below this answer and other answers). And if you look through the version history, you can also learn why using literal_binds is iffy (in a nutshell: binding parameters should be left to the database), but still arguably any other solution would make most users of the recipe not happy. I leave the below answer mostly for historical reasons.
Original answer: Creating a (read-only non-materialized) view is not supported out of the box as far as I know. But adding this functionality in SQLAlchemy 0.7 is straightforward (similar to the example I gave here). You just have to write a compiler extension CreateView. With this extension, you can then write (assuming that t is a table object with a column id)
createview = CreateView('viewname', t.select().where(t.c.id > 5))
engine.execute(createview)

v = Table('viewname', metadata, autoload=True)
for r in engine.execute(v.select()):
    print r
Here is a working example:
from sqlalchemy import Table
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import Executable, ClauseElement

class CreateView(Executable, ClauseElement):
    def __init__(self, name, select):
        self.name = name
        self.select = select

@compiles(CreateView)
def visit_create_view(element, compiler, **kw):
    return "CREATE VIEW %s AS %s" % (
        element.name,
        compiler.process(element.select, literal_binds=True)
    )

# test data
from sqlalchemy import MetaData, Column, Integer
from sqlalchemy.engine import create_engine
engine = create_engine('sqlite://')
metadata = MetaData(engine)
t = Table('t',
          metadata,
          Column('id', Integer, primary_key=True),
          Column('number', Integer))
t.create()
engine.execute(t.insert().values(id=1, number=3))
engine.execute(t.insert().values(id=9, number=-3))

# create view
createview = CreateView('viewname', t.select().where(t.c.id > 5))
engine.execute(createview)

# reflect view and print result
v = Table('viewname', metadata, autoload=True)
for r in engine.execute(v.select()):
    print r
If you want, you can also specialize for a dialect, e.g.
@compiles(CreateView, 'sqlite')
def visit_create_view(element, compiler, **kw):
    return "CREATE VIEW IF NOT EXISTS %s AS %s" % (
        element.name,
        compiler.process(element.select, literal_binds=True)
    )
stephan's answer is a good one and covers most bases, but what left me unsatisfied was the lack of integration with the rest of SQLAlchemy (the ORM, automatic dropping, etc.). After hours of experimenting and piecing together knowledge from all corners of the internet, I came up with the following:
import sqlalchemy_views
from sqlalchemy import Table
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.ddl import DropTable

class View(Table):
    is_view = True

class CreateView(sqlalchemy_views.CreateView):
    def __init__(self, view):
        super().__init__(view.__view__, view.__definition__)

@compiles(DropTable, "postgresql")
def _compile_drop_table(element, compiler, **kwargs):
    if hasattr(element.element, 'is_view') and element.element.is_view:
        return compiler.visit_drop_view(element)
    # cascade seems necessary in case SQLA tries to drop
    # the table a view depends on, before dropping the view
    return compiler.visit_drop_table(element) + ' CASCADE'
Note that I am utilizing the sqlalchemy_views package, just to simplify things.
Defining a view (e.g. globally like your Table models):
from sqlalchemy import MetaData, text, Text, Column

class SampleView:
    __view__ = View(
        'sample_view', MetaData(),
        Column('bar', Text, primary_key=True),
    )

    __definition__ = text('''select 'foo' as bar''')

# keeping track of your defined views makes things easier
views = [SampleView]
Mapping the views (enable ORM functionality):
Do when loading up your app, before any queries and after setting up the DB.
from sqlalchemy import orm

for view in views:
    if not hasattr(view, '_sa_class_manager'):
        orm.mapper(view, view.__view__)
Creating the views:
Do when initializing the database, e.g. after a create_all() call.
for view in views:
    db.engine.execute(CreateView(view))
How to query a view:
results = db.session.query(SomeModel, SampleView).join(
    SampleView,
    SomeModel.id == SampleView.some_model_id
).all()
This would return exactly what you expect (a list of objects that each has a SomeModel object and a SampleView object).
Dropping a view:
SampleView.__view__.drop(db.engine)
It will also automatically get dropped during a drop_all() call.
This is obviously a very hacky solution, but in my eyes it is the best and cleanest one out there at the moment. I have tested it these past few days and have not had any issues. I'm not sure how to add in relationships (I ran into problems there), but it's not really necessary, as demonstrated above in the query.
If anyone has any input, finds any unexpected issues, or knows a better way to do things, please do leave a comment or let me know.
This was tested on SQLAlchemy 1.2.6 and Python 3.6.
These days there's a PyPI package for that: SQLAlchemy Views.
From its PyPI page:
>>> from sqlalchemy import Table, MetaData
>>> from sqlalchemy.sql import text
>>> from sqlalchemy_views import CreateView, DropView
>>> view = Table('my_view', metadata)
>>> definition = text("SELECT * FROM my_table")
>>> create_view = CreateView(view, definition, or_replace=True)
>>> print(str(create_view.compile()).strip())
CREATE OR REPLACE VIEW my_view AS SELECT * FROM my_table
However, you asked for a no "pure SQL" query, so you probably want the definition above to be created with SQLAlchemy query object.
Luckily, the text() in the example above makes it clear that the definition parameter to CreateView is such a query object. So something like this should work:
>>> from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
>>> from sqlalchemy.sql import select
>>> from sqlalchemy_views import CreateView, DropView
>>> metadata = MetaData()
>>> users = Table('users', metadata,
... Column('id', Integer, primary_key=True),
... Column('name', String),
... Column('fullname', String),
... )
>>> addresses = Table('addresses', metadata,
... Column('id', Integer, primary_key=True),
... Column('user_id', None, ForeignKey('users.id')),
... Column('email_address', String, nullable=False)
... )
Here is the interesting bit:
>>> view = Table('my_view', metadata)
>>> definition = select([users, addresses]).where(
... users.c.id == addresses.c.user_id
... )
>>> create_view = CreateView(view, definition, or_replace=True)
>>> print(str(create_view.compile()).strip())
CREATE OR REPLACE VIEW my_view AS SELECT users.id, users.name,
users.fullname, addresses.id, addresses.user_id, addresses.email_address
FROM users, addresses
WHERE users.id = addresses.user_id
SQLAlchemy-utils just added this functionality in 0.33.6 (available in pypi). It has views, materialized views, and it integrates with the ORM. It is not documented yet, but I am successfully using the views + ORM.
You can use their test as an example for both regular and materialized views using the ORM.
To create a view, once you install the package, use the following code from the test above as a base for your view:
class ArticleView(Base):
    __table__ = create_view(
        name='article_view',
        selectable=sa.select(
            [
                Article.id,
                Article.name,
                User.id.label('author_id'),
                User.name.label('author_name')
            ],
            from_obj=(
                Article.__table__
                .join(User, Article.author_id == User.id)
            )
        ),
        metadata=Base.metadata
    )
Where Base is the declarative_base, sa is the SQLAlchemy package, and create_view is a function from sqlalchemy_utils.view.
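Once the view is mapped like this, it can be queried like any other model. A small usage sketch, reusing the names from the example above and assuming a configured session:
# sketch: query the ORM-mapped view like a normal model
articles_by_alice = (
    session.query(ArticleView)
    .filter(ArticleView.author_name == 'alice')
    .all()
)
for article in articles_by_alice:
    print(article.name, article.author_name)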
Loosely based on https://github.com/sqlalchemy/sqlalchemy/wiki/Views
A complete, executable example using SQLAlchemy only; hopefully it saves you the hours it took to get it running.
import sqlalchemy as sa
import sqlalchemy.orm
import sqlalchemy.schema
import sqlalchemy.ext.compiler

engine = sa.create_engine('postgresql://localhost/postgres')
meta = sa.MetaData()
Session = sa.orm.sessionmaker(bind=engine)
session = Session()

class Drop(sa.schema.DDLElement):
    def __init__(self, name, schema):
        self.name = name
        self.schema = schema

class Create(sa.schema.DDLElement):
    def __init__(self, name, select, schema='public'):
        self.name = name
        self.schema = schema
        self.select = select
        sa.event.listen(meta, 'after_create', self)
        sa.event.listen(meta, 'before_drop', Drop(name, schema))

@sa.ext.compiler.compiles(Create)
def createGen(element, compiler, **kwargs):
    return 'CREATE OR REPLACE VIEW {schema}."{name}" AS {select}'.format(
        name = element.name,
        schema = element.schema,
        select = compiler.sql_compiler.process(
            element.select,
            literal_binds = True
        ),
    )

@sa.ext.compiler.compiles(Drop)
def dropGen(element, compiler, **kw):
    return 'DROP VIEW {schema}."{name}"'.format(
        name = element.name,
        schema = element.schema,
    )

if __name__ == '__main__':
    view = Create(
        name = 'myview',
        select = sa.select(sa.literal_column('1 AS col'))
    )
    meta.create_all(bind=engine, checkfirst=True)
    print(session.execute('SELECT * FROM myview').all())
    session.close()
I couldn't find a short and handy answer.
I don't need any extra functionality a view might offer, so I simply treat a view as an ordinary table, like my other table definitions.
So basically I have a.py, which defines all tables and views and the SQL-related stuff, and main.py, where I import those classes from a.py and use them.
Here's what I add in a.py, and it works:
class A_View_From_Your_DataBase(Base):
    __tablename__ = 'View_Name'
    keyword = Column(String(100), nullable=False, primary_key=True)
Notably, you need to add the primary_key property even though there's no primary key in the view.
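With that in place, the view can be queried like any ordinary mapped table (a sketch assuming a configured session):
rows = session.query(A_View_From_Your_DataBase).filter_by(keyword='something').all()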
SQL View without pure SQL?
You can create a class or a function that implements a defined view, e.g.:
def get_view(con):
    return Table.query.filter(Table.name == con.name).first()
