As far as I can tell from the research I've done, the typical way of defining a table using PonyORM in Python is the following:
from pony.orm import *
db = Database()
# Database connection ...
class SampleTable(db.Entity):
    sample_int_field = Required(int)
    sample_string_field = Required(str)
    # ...

db.generate_mapping(create_tables=True)
My problem: this uses db.Entity.
I wish to define the tables without referring to a specific Database instance, in an abstract, general manner, and connect them to an instance when I need to.
Is there a way to do so?
Concept (presumably not real, runnable code):
# SampleAbstractTable.py
from pony.orm import *
class SampleAbstractTable(Database):
    sample_int_field = Required(int)
    sample_string_field = Required(str)
    # ...
# main.py
from pony.orm import *
import SampleAbstractTable
db = Database()
# Database connection ...
db.connectTables((SampleAbstractTable.SampleAbstractTable, ...))
db.generate_mapping(create_tables=True)
EDIT:
One idea I have is to create a wrapper class for the database I wish to use with a certain group of tables, and define the tables in the __init__, because the whole point of wanting to define tables dynamically is to separate the Database instance creation from the table classes' definitions, namely:
from pony.orm import *
class sampleDatabase:
    def __init__(self):
        self._db = Database()
        # Database connection ...

        class TableA(self._db.Entity):
            # ...

        class TableB(self._db.Entity):
            # ...

        self._db.generate_mapping(create_tables=True)
but then I have issues accessing the database tables...
First of all, you're working with Entities, not Tables. They're not the same thing.
Your problem can be solved by defining a factory-like function:
def define_entities(db):
    class Entity1(db.Entity):
        attr1 = Required(str)
        # ... and so on
And then later, when you create your Database instance, you just call:
db = Database(...)
define_entities(db)
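Put together, a minimal sketch of the whole flow could look like this (the entities.py module name, the SQLite file name, and the entity/attribute names are placeholders of my own, not from your code):

# entities.py
from pony.orm import Required

def define_entities(db):
    class Entity1(db.Entity):
        attr1 = Required(str)

# main.py
from pony.orm import Database
from entities import define_entities

db = Database()
define_entities(db)  # attach the entity classes to this particular Database instance
db.bind(provider='sqlite', filename='sample.db', create_db=True)
db.generate_mapping(create_tables=True)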
Related
I'm working on a project in which I have different projects with the same database architecture, so I used a peewee Model like this:
dynamic_db = SqliteDatabase(None)

class BaseModel(Model):
    class Meta:
        database = dynamic_db

class KV(BaseModel):
    key = TextField()
    value = IntegerField()
And whenever a new project is created, I call a function:
dynamic_db.init(r'{}\database.db'.format(ProjectName.upper()))
dynamic_db.connect()
dynamic_db.create_tables([KV])
dynamic_db.close()
The problem is that once this database is created, I can't access it with peewee.
When I try to create a record:
KV.create(key = 'Saul', value = 123)
I get this error:
peewee.InterfaceError: Error, database must be initialized before opening a connection.
I would appreciate any help or a cookbook for peewee.
I believe something is incorrect, either in your question description, or in the error you are receiving. The call you are making to .init() is what initializes the database. After that, you should have no problems using it.
Full example which works fine:
from peewee import *
db = SqliteDatabase(None)
class Base(Model):
    class Meta:
        database = db

class KV(Base):
    key = TextField()
    value = IntegerField()
db.init('foo.db') # database is now initialized
db.connect()
db.create_tables([KV]) # no problems.
db.close()
I was finally able to create a record.
I didn't mention that I was trying to create the records in another file, but the procedure is the same as the one coleifer posted in the answer.
The file in which I create the peewee models is databases.py, so in the other file I do the following:
import databases
databases.db.init('foo.db')
databases.KV.create(key='Saul', value=123)
Thanks!
I need to use some ORM engine, like peewee, for handling an SQLite database within my Python application. However, most such libraries offer syntax like this to define models.py:
import peewee
db = peewee.SqliteDatabase('hello.sqlite')

class Person(peewee.Model):
    name = peewee.CharField()

    class Meta:
        database = db
However, in my application I cannot use this syntax, since the database file name is provided by outside code after import, from the module that imports my models.py.
How can I initialize the models from outside of their definition, given a dynamic database file name? Ideally, models.py should not mention the database at all, like a normal ORM.
Maybe you are looking for the proxy feature:
proxy - peewee
database_proxy = Proxy()  # Create a proxy for our db.

class BaseModel(Model):
    class Meta:
        database = database_proxy  # Use proxy for our DB.

class User(BaseModel):
    username = CharField()

# Based on configuration, use a different database.
if app.config['DEBUG']:
    database = SqliteDatabase('local.db')
elif app.config['TESTING']:
    database = SqliteDatabase(':memory:')
else:
    database = PostgresqlDatabase('mega_production_db')

# Configure our proxy to use the db we specified in config.
database_proxy.initialize(database)
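Applied to your case, models.py would reference only the proxy, and whichever module knows the real file name initializes it later. A rough sketch (the module and file names here are illustrative, not prescribed by peewee):

# models.py
from peewee import Proxy, Model, CharField

database_proxy = Proxy()

class Person(Model):
    name = CharField()

    class Meta:
        database = database_proxy

# somewhere else, once the file name is known
from peewee import SqliteDatabase
import models

db = SqliteDatabase('hello.sqlite')
models.database_proxy.initialize(db)
db.connect()
db.create_tables([models.Person])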
I have a database class in my module db.py that I can only call once. My design goal is to have one single database object and have it be used across all other modules. So db.py has the following layout:
# db.py
class DatabaseManager:
    def __init__(self):
        # initialize engine and database
        ...

db = DatabaseManager()
The problem is that, across multiple imports, db is reinitialized each time. What I want to be able to do is something along the lines of:
# polygon.py
from db import db

class Polygon:
    def something(self):
        db.commitChange(...)

# main.py
class GUIWindow:
    def something(self):
        db.getJSON(...)
How can I create one object for the entire program and have all other modules that import db use that one object? I was under the impression db would not be reinitialized, but I am receiving the engine initialization output twice. Here's an example and my output:
# db.py
class DatabaseManager(object):
    '''
    classdocs
    '''

    def __init__(self):
        '''
        Constructor
        '''
        print "hi"

db = DatabaseManager()
# polygon.py
from db import db
# main.py
from db import db
output:
hi
hi
Is there a way to create custom methods on the query object so you can do something like this?
User.query.all_active()
Where all_active() is essentially .filter(User.is_active == True)
And be able to filter off of it?
User.query.all_active().filter(User.age == 30)
You can subclass the base Query class to add your own methods:
from sqlalchemy.orm import Query

class MyQuery(Query):
    def all_active(self):
        return self.filter(User.is_active == True)
You then tell SQLAlchemy to use this new query class when you create the session (docs here). From your code it looks like you might be using Flask-SQLAlchemy, so you would do it as follows:
db = SQLAlchemy(session_options={'query_cls': MyQuery})
Otherwise you would pass the argument directly to the sessionmaker:
sessionmaker(bind=engine, query_cls=MyQuery)
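Putting it together outside Flask, usage would look roughly like this sketch (the engine URL is a placeholder, and User is assumed to have the is_active and age columns from the question):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite:///example.db')  # placeholder URL
Session = sessionmaker(bind=engine, query_cls=MyQuery)
session = Session()

# the custom method chains like any other Query method
active_users = session.query(User).all_active().filter(User.age == 30).all()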
As of right now, this new query object isn't that interesting because we hardcoded the User class in the method, so it won't work for anything else. A better implementation would use the query's underlying class to determine which filter to apply. This is slightly tricky but can be done as well:
from sqlalchemy.orm import Mapper, Query

class MyOtherQuery(Query):

    def _get_models(self):
        """Returns the query's underlying model classes."""
        if hasattr(self, 'attr'):
            # we are dealing with a subquery
            return [self.attr.target_mapper]
        else:
            return [
                d['expr'].class_
                for d in self.column_descriptions
                if isinstance(d['expr'], Mapper)
            ]

    def all_active(self):
        model_class = self._get_models()[0]
        return self.filter(model_class.is_active == True)
Finally, this new query class won't be used by dynamic relationships (if you have any). To let those also use it, you can pass it as argument when you create the relationship:
users = relationship(..., query_class=MyOtherQuery)
This works fine for me:
from sqlalchemy.orm import query
from flask_sqlalchemy import BaseQuery

class ParentQuery(BaseQuery):

    def _get_models(self):
        if hasattr(query, 'attr'):
            return [query.attr.target_mapper]
        else:
            return self._mapper_zero().class_

    def FilterByCustomer(self):
        model_class = self._get_models()
        return self.filter(model_class.customerId == int(g.customer.get('customerId')))

# using it like this
class AccountWorkflowModel(db.Model):
    query_class = ParentQuery
    # .................
To provide a custom method that will be used by all your models that inherit from a particular parent, first as mentioned before inherit from the Query class:
from flask_sqlalchemy import SQLAlchemy, BaseQuery
from sqlalchemy.inspection import inspect
class MyCustomQuery(BaseQuery):
    def all_active(self):
        # get the class
        modelClass = self._mapper_zero().class_
        # get the primary key column
        ins = inspect(modelClass)
        # get a list of passing objects
        passingObjs = []
        for modelObj in self:
            if modelObj.is_active == True:
                # add to passing object list
                passingObjs.append(modelObj.__dict__[ins.primary_key[0].name])
        # change to tuple
        passingObjs = tuple(passingObjs)
        # run a filter on the query object
        return self.filter(ins.primary_key[0].in_(passingObjs))
# add this to the constructor for your DB object
myDB = SQLAlchemy(query_class=MyCustomQuery)
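With that in place, any model defined through myDB picks up the custom query class automatically. A rough usage sketch (the model and its columns are made up for illustration):

class User(myDB.Model):
    id = myDB.Column(myDB.Integer, primary_key=True)
    is_active = myDB.Column(myDB.Boolean, default=True)

# returns only the rows whose is_active flag is True
active_users = User.query.all_active().all()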
This is for flask-sqlalchemy, for which people will still get here when looking for this answer.
I currently have a column that contains HTML markup. Inside that markup, there is a timestamp that I want to store in a new column (so I can query against it). My idea was to do the following in a single migration:
Create a new, nullable column for the data
Use the ORM to pull back the HTML I need to parse
For each row:
  parse the HTML to pull out the timestamp
  update the ORM object
But when I try to run my migration, it appears to be stuck in an infinite loop. Here's what I've got so far:
def _extract_publication_date(html):
    root = html5lib.parse(html, treebuilder='lxml', namespaceHTMLElements=False)
    publication_date_string = root.xpath("//a/@data-datetime")[0]
    return parse_date(publication_date_string)

def _update_tip(tip):
    tip.publication_date = _extract_publication_date(tip.rendered_html)
    tip.save()

def upgrade():
    op.add_column('tip', sa.Column('publication_date', sa.DateTime(timezone=True)))
    tips = Tip.query.all()
    map(_update_tip, tips)

def downgrade():
    op.drop_column('tip', 'publication_date')
After a bit of experimentation using @velochy's answer, I settled on something like the following pattern for using SQLAlchemy inside Alembic. This worked great for me and could probably serve as a general solution for the OP's question:
from sqlalchemy.orm.session import Session
from alembic import op

# Copy the model definitions into the migration script if
# you want the migration script to be robust against later
# changes to the models. Also, if your migration includes
# deleting an existing column that you want to access as
# part of the migration, then you'll want to leave that
# column defined in the model copies here.
class Model1(Base): ...
class Model2(Base): ...

def upgrade():
    # Attach a sqlalchemy Session to the env connection
    session = Session(bind=op.get_bind())

    # Perform arbitrarily-complex ORM logic
    instance1 = Model1(foo='bar')
    instance2 = Model2(monkey='banana')

    # Add models to Session so they're tracked
    session.add(instance1)
    session.add(instance2)

    # Apply a transform to existing data
    m1s = session.query(Model1).all()
    for m1 in m1s:
        m1.foo = transform(m1.foo)

    session.commit()

def downgrade():
    # Attach a sqlalchemy Session to the env connection
    session = Session(bind=op.get_bind())

    # Perform ORM logic in downgrade (e.g. clear tables)
    session.query(Model2).delete()
    session.query(Model1).delete()

    # Revert transform of existing data
    m1s = session.query(Model1).all()
    for m1 in m1s:
        m1.foo = un_transform(m1.foo)

    session.commit()
This approach appears to handle transactions properly. Frequently while working on this, I would generate DB exceptions and they would roll things back as expected.
What worked for me is to get a session by doing the following:
from alembic import op
import sqlalchemy as sa
import sqlalchemy.orm  # make sa.orm available

connection = op.get_bind()
Session = sa.orm.sessionmaker()
session = Session(bind=connection)
You can use the automap extension to automatically create ORM models of your database as they exist during the time of the migration, without copying them to the code:
import sqlalchemy as sa
from alembic import op
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

Base = automap_base()

def upgrade():
    # Add the new column
    op.add_column('tip', sa.Column('publication_date', sa.DateTime(timezone=True)))

    # Reflect ORM models from the database.
    # Note that this needs to be done *after* all needed schema migrations.
    bind = op.get_bind()
    Base.prepare(autoload_with=bind)  # SQLAlchemy 1.4 and later
    # Base.prepare(bind, reflect=True)  # SQLAlchemy before version 1.4
    Tip = Base.classes.tip  # "tip" is the table name

    # Query/modify the data as it exists during the time of the migration
    session = Session(bind=bind)
    tips = session.query(Tip).all()
    for tip in tips:
        # arbitrary update logic
        ...

def downgrade():
    op.drop_column('tip', 'publication_date')
Continuing from the comments, you can try something like this:
import sqlalchemy as sa

tip = sa.sql.table(
    'tip',
    sa.sql.column('id', sa.Integer),
    sa.sql.column('publication_date', sa.DateTime(timezone=True)),
)

def upgrade():
    mappings = [
        (x.id, _extract_publication_date(x.rendered_html))
        for x in Tip.query
    ]

    op.add_column('tip', sa.Column('publication_date', sa.DateTime(timezone=True)))

    exp = sa.sql.case(value=tip.c.id, whens=[
        (op.inline_literal(id), op.inline_literal(publication_date))
        for id, publication_date in mappings
    ])
    op.execute(tip.update().values({'publication_date': exp}))

def downgrade():
    op.drop_column('tip', 'publication_date')