sqlalchemy existing database query - python

I am using SQLAlchemy as the ORM for a Python project. I have created a few models/schemas and they are working fine. Now I need to query an existing MySQL database: no inserts or updates, just select statements.
How can I create a wrapper around the tables of this existing database? I have briefly gone through the SQLAlchemy docs and SO but couldn't find anything relevant. Everything suggests the execute method, where I would need to write raw SQL queries, while I want to use the SQLAlchemy query method the same way I do with my SA models.
For example, if the existing DB has a table named User, then I want to query it using the dbsession (only select operations, probably with joins).

You seem to be under the impression that SQLAlchemy can only work with a database structure created by SQLAlchemy (probably using MetaData.create_all()) - this is not correct. SQLAlchemy can work perfectly well with a pre-existing database; you just need to define your models to match the database tables. One way to do that is to use reflection, as Ilja Everilä suggests:
from sqlalchemy import Table
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class MyClass(Base):
    __table__ = Table('mytable', Base.metadata,
                      autoload=True, autoload_with=some_engine)
(which, in my opinion, would be totally fine for one-off scripts but may lead to incredibly frustrating bugs in a "real" application if there's a potential that the database structure may change over time)
Another way is to simply define your models as usual, taking care to match the database tables, which is not that difficult. The benefit of this approach is that you can map only a subset of the database tables to your models, and even only a subset of a table's columns to your model's fields. Suppose you have 10 tables in the database but are only interested in the users table, from which you only need the id, name and email fields:
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'

    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)
    email = sa.Column(sa.String)
(note how we didn't need to define some details which are only needed to emit correct DDL, such as the length of the String fields or the fact that the email field has an index)
SQLAlchemy will not emit INSERT/UPDATE queries unless you create or modify model instances in your code. If you want to ensure that your queries are read-only, you may create a special user in the database and grant that user SELECT privileges only. Alternatively, or in addition, you may experiment with rolling back the transaction in your application code.
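For illustration, here is a minimal sketch of the rollback idea; the connection URL, credentials, and the read_only_session helper are placeholders for this example, not SQLAlchemy API:

from contextlib import contextmanager

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# placeholder URL; a dedicated user with SELECT-only grants is even safer
engine = create_engine('mysql+pymysql://readonly_user:secret@localhost/mydb')
Session = sessionmaker(bind=engine)

@contextmanager
def read_only_session():
    """Yield a session and always roll back, so nothing is ever persisted."""
    session = Session()
    try:
        yield session
    finally:
        session.rollback()
        session.close()

with read_only_session() as session:
    first_user = session.query(User).filter(User.name == 'alice').first()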

You can access an existing table using the automap extension:
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
Base = automap_base()
Base.prepare(engine, reflect=True)
Users = Base.classes.users
session = Session(engine)
res = session.query(Users).first()
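Since the question mentions joins: the mapped classes can be joined like any other models. A rough sketch, assuming the database also contains an addresses table with a user_id foreign key (that table and its columns are hypothetical, not from the question):

Addresses = Base.classes.addresses  # hypothetical second reflected table

rows = (
    session.query(Users, Addresses)
    .join(Addresses, Addresses.user_id == Users.id)  # assumed FK column
    .all()
)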

Create a Table object with autoload enabled so that SQLAlchemy inspects the existing table. Some example code:
from sqlalchemy.sql import select
from sqlalchemy import create_engine, MetaData, Table

CONN_STR = '…'
engine = create_engine(CONN_STR, echo=True)
metadata = MetaData()
cookies = Table('cookies', metadata, autoload=True,
                autoload_with=engine)
cols = cookies.c

with engine.connect() as conn:
    query = (
        select([cols.created_at, cols.name])
        .order_by(cols.created_at)
        .limit(1)
    )
    for row in conn.execute(query):
        print(row)

Other answers don't mention what to do if you have a table with no primary key, so I thought I would address this. Assuming a table called Customers with columns CustomerId, CustomerName and CustomerLocation, you could do:
from sqlalchemy.ext.automap import automap_base
from sqlalchemy import create_engine, MetaData, Column, String, Table
from sqlalchemy.orm import Session

conn_str = '...'
engine = create_engine(conn_str)
metadata = MetaData()
Base = automap_base(metadata=metadata)

# You only need to define which column is the primary key.
# Automap can map the rest of the columns by itself.
customers = Table('Customers', metadata,
                  Column('CustomerId', String, primary_key=True),
                  autoload=True, autoload_with=engine)

Base.prepare()
Customers = Base.classes.Customers

session = Session(engine)
customer1 = session.query(Customers).first()
print(customer1.CustomerName)

Assume we have a PostgreSQL database named accounts, and we already have a table named users.
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

psw = "verysecret"
db = "accounts"

# create an engine
pengine = sa.create_engine('postgresql+psycopg2://postgres:' + psw + '@localhost/' + db)

# define declarative base
Base = declarative_base()

# reflect current database engine to metadata
metadata = sa.MetaData(pengine)
metadata.reflect()

# build your User class on the existing `users` table
class User(Base):
    __table__ = sa.Table("users", metadata)

# call the session maker factory
Session = sa.orm.sessionmaker(pengine)
session = Session()

# filter a record
session.query(User).filter(User.id == 1).first()
Warning: your table should have a primary key defined; otherwise SQLAlchemy won't be able to map it.

Related

How to update an item on a database table using SQLAlchemy (without the ORM Object)?

I would like to update (or add/delete) an item in a database table using SQLAlchemy, but the table has 1000+ columns and it wouldn't be feasible to create the ORM object using the declarative_base procedure. Ideally, I would like to be able to extract the structure of the item from the database and send a new one (or update, or delete it).
I tried using the sqlalchemy.Table object, but it only allows me to read data from the database and doesn't allow for editing.
For example:
from datetime import date
import sqlalchemy as sa
import sqlalchemy.orm as orm
engine = sa.create_engine('sqlite:///FullDatabase.db')
meta = sa.MetaData(engine)
history = sa.Table('history', meta, autoload=True)
session = orm.Session(engine)
entry = session.query(history).where(history.c.Date == date(2023, 1, 25)).first()
With this I can extract the information, but I can't edit it the same way I could if I had a History ORM class defined (and, as I mentioned, there are 1000+ columns to map).
For example (one of the columns is HousePrice):
entry.HousePrice = 100
AttributeError: can't set attribute
Is there a way to do this update?
A similar solution would be used to add/delete items as well, I'd assume.
Thanks in advance.
You can definitely reflect classes declaratively (see the docs or this answer), and then you can update in the way you've described above (see my example below).
However, I'd imagine that in your case the error is just that you're trying to access the attribute directly when you should be reaching it through the c columns:
entry.c.HousePrice = 100 # should work
Demo of reflecting a class.
Let's set up a simple table.

import sqlite3

con = sqlite3.connect("/tmp/so.db")
cur = con.cursor()
cur.execute("CREATE TABLE user (username VARCHAR PRIMARY KEY)")
cur.executemany(
    "INSERT INTO user (username) VALUES (?)",
    [("alice",), ("bob",), ("charlie",)],
)
con.commit()
con.close()
And then, reflect and update alice to ALICE.

from sqlalchemy import Table, create_engine, select
from sqlalchemy.orm import Session, declarative_base

engine = create_engine("sqlite:////tmp/so.db", future=True, echo=True)
Base = declarative_base()

class User(Base):
    __table__ = Table("user", Base.metadata, autoload_with=engine)

with Session(engine) as session:
    alice = session.scalars(select(User).filter(User.username == "alice")).first()
    alice.username = "ALICE"
    session.add(alice)
    session.commit()

with Session(engine) as session:
    results = session.scalars(select(User)).all()
    print([r.username for r in results])  # ['ALICE', 'bob', 'charlie']
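If mapping even a reflected declarative class is undesirable, a plain Core-style update against the reflected Table also works. A minimal sketch reusing the history table and Date/HousePrice columns from the question (untested against the actual schema):

from datetime import date

import sqlalchemy as sa
import sqlalchemy.orm as orm

engine = sa.create_engine('sqlite:///FullDatabase.db')
meta = sa.MetaData()
history = sa.Table('history', meta, autoload_with=engine)

session = orm.Session(engine)

# Core UPDATE: no ORM class needed, columns are addressed through .c
stmt = (
    sa.update(history)
    .where(history.c.Date == date(2023, 1, 25))
    .values(HousePrice=100)
)
session.execute(stmt)
session.commit()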

Django: create a separate set of models in an in-memory SQLite DB on the fly

What I'm trying to do:
I have a generally huge set of models in my Django (v2.3.3) application backed by a PostgreSQL DB, but now, for a specific task, I need to build a rather complicated aggregation of DB objects. It would be much easier to work with them if, only in the thread/process that handles this specific web request, I could create an in-memory SQLite DB, define a new set of model classes there (without foreign keys to the global set of models, of course), create some objects in that DB, do my calculations, and destroy that DB once the response has been served. For consistency I would like to use Django models for this small DB as well.
Is this possible? Or do you have better ideas for how to approach this?
Eventually I ended up with a SQLAlchemy + in-memory SQLite solution:

from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker, backref

Base = declarative_base()
engine = create_engine('sqlite:///:memory:')
Session = sessionmaker(bind=engine)
session = Session()

class Node(Base):
    """A node entity in a sankey diagram."""
    __tablename__ = 'node'

    id = Column(Integer, primary_key=True)
    name = Column(String)   # unique - might include parts from event.extra_data
    event = Column(String)  # not unique - totally equals event.event

Base.metadata.create_all(engine)
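A short usage sketch of the above (the sample node names and events are invented for illustration):

# create some throwaway objects, run the aggregation, then discard the DB
session.add_all([
    Node(name='signup:web', event='signup'),
    Node(name='signup:mobile', event='signup'),
    Node(name='purchase', event='purchase'),
])
session.commit()

signup_count = session.query(Node).filter(Node.event == 'signup').count()
print(signup_count)  # 2

# the in-memory DB goes away once the connections are disposed
session.close()
engine.dispose()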

How to recognise an SQLite database created with SQLAlchemy using pyDAL?

I created a very simple database with SQLAlchemy as follows:

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Person(Base):
    __tablename__ = 'person'
    id = Column(Integer, primary_key=True)
    name = Column(String(250), nullable=False)

engine = create_engine('sqlite:///sqlalchemy_example.db')

# Create all tables in the engine. This is equivalent to "Create Table"
# statements in raw SQL.
Base.metadata.create_all(engine)
Base.metadata.bind = engine

DBSession = sessionmaker(bind=engine)
session = DBSession()

# Insert a Person in the person table
new_person = Person(name='new person')
session.add(new_person)
session.commit()
and then I tried to read it using pyDAL:
from pydal import DAL, Field
db = DAL('sqlite://sqlalchemy_example.db', auto_import=True)
db.tables
>> []
db.define_table('person', Field('name'))
>> OperationalError: table "person" already exists
How do I access the table using pyDAL?
thank you
First, do not set auto_import=True, as that is only relevant if pyDAL *.table migration metadata files exist for the tables, which will not be the case here.
Second, pyDAL does not know the table already exists, and because migrations are enabled by default, it attempts to create the table. To prevent this, you can simply disable migrations:
# Applies to all tables.
db = DAL('sqlite://sqlalchemy_example.db', migrate_enabled=False)
or:
# Applies to this table only.
db.define_table('person', Field('name'), migrate=False)
If you would like pyDAL to take over migrations for future changes to this table, then you should run a "fake migration", which will cause pyDAL to generate a *.table migration metadata file for this table without actually running the migration. To do this, temporarily make the following change:
db.define_table('person', Field('name'), fake_migrate=True)
After leaving the above in place for a single request, the *.table file will be generated, and you can remove the fake_migrate=True argument.
Finally, note that pyDAL expects the id field to be an auto-incrementing integer primary key field.
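Putting it together, a minimal sketch (assuming sqlalchemy_example.db from the question is in the working directory; otherwise pass DAL's folder argument):

from pydal import DAL, Field

db = DAL('sqlite://sqlalchemy_example.db', migrate_enabled=False)
db.define_table('person', Field('name'))

# plain pyDAL select against the table created by SQLAlchemy
rows = db(db.person.id > 0).select()
for row in rows:
    print(row.id, row.name)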

Flask SQLAlchemy: Creating models for an existing database structure

I am new to SQLAlchemy and I wonder if there is a way to create a class that would be mapped to an existing table in the DB without specifying any of the table's columns (but with all columns accessible as attributes of the object)?
This is how I generated models for Flask-SQLAlchemy, which uses MS SQL:
flask-sqlacodegen --flask mssql+pymssql://<username>:<password>@<server>/<database> --tables <table_names> > db_name.py
You have to install flask-sqlacodegen and pymssql, though.
I recently came across this issue. Try the steps below:

from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.orm import mapper, sessionmaker

class User(object):  # class which can act as an ORM class
    pass

dbPath = 'places.sqlite'
engine = create_engine('sqlite:///%s' % dbPath, echo=True)  # create engine
metadata = MetaData(engine)
user_table = Table('user_existing_class', metadata, autoload=True)  # create a Table object
mapper(User, user_table)  # map the Table to the ORM class

Session = sessionmaker(bind=engine)
session = Session()
res = session.query(User).all()
res[1].name

SQLAlchemy printing raw SQL from create()

I am giving Pylons a try with SQLAlchemy, and I love it; there is just one thing: is it possible to print out the raw SQL CREATE TABLE statement generated by Table().create() before it's executed?
from sqlalchemy.schema import CreateTable
print(CreateTable(table))
If you are using declarative syntax:
print(CreateTable(Model.__table__))
Update:
Since I have the accepted answer and there is important information in klenwell's answer, I'll also add it here.
You can get the SQL for your specific database (MySQL, PostgreSQL, etc.) by compiling with your engine.
print(CreateTable(Model.__table__).compile(engine))
Update 2:
@jackotonye added in the comments a way to do it without an engine.
print(CreateTable(Model.__table__).compile(dialect=postgresql.dialect()))
You can set up your engine to dump the metadata creation sequence, using the following:

def metadata_dump(sql, *multiparams, **params):
    # print or write to log or file etc.
    print(sql.compile(dialect=engine.dialect))

engine = create_engine(myDatabaseURL, strategy='mock', executor=metadata_dump)
metadata.create_all(engine)
One advantage of this approach is that enums and indexes are included in the printout. Using CreateTable leaves this out.
Another advantage is that the order of the schema definitions is correct and (almost) usable as a script.
I needed to get the raw table SQL in order to set up tests for some existing models. Here's a successful unit test that I created for SQLAlchemy 0.7.4, based on Antoine's answer, as a proof of concept:

from sqlalchemy import create_engine
from sqlalchemy.schema import CreateTable
from model import Foo

sql_url = "sqlite:///:memory:"
db_engine = create_engine(sql_url)
table_sql = CreateTable(Foo.__table__).compile(db_engine)
self.assertTrue("CREATE TABLE foos" in str(table_sql))
Something like this? (from the SQLA FAQ)
http://docs.sqlalchemy.org/en/latest/faq/sqlexpressions.html
It turns out this is straightforward:

from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateTable
from sqlalchemy import Table, Column, String, MetaData

metadata = MetaData()
users = Table('users', metadata,
              Column('username', String)
)

statement = CreateTable(users)
print(statement.compile(dialect=postgresql.dialect()))

Outputs this:

CREATE TABLE users (
    username VARCHAR
)
Going further, it can even support bound parameters in prepared statements.
Reference
How do I render SQL expressions as strings, possibly with bound parameters inlined?
...
or without an Engine:
from sqlalchemy.dialects import postgresql
print(statement.compile(dialect=postgresql.dialect()))
SOURCE: http://docs.sqlalchemy.org/en/latest/faq/sqlexpressions.html#faq-sql-expression-string
Example: using SQLAlchemy to generate a user rename script

#!/usr/bin/env python
import csv

from sqlalchemy.dialects import postgresql
from sqlalchemy import bindparam, Table, Column, String, MetaData

metadata = MetaData()
users = Table('users', metadata,
              Column('username', String)
)

renames = []
with open('users.csv') as csvfile:
    for row in csv.DictReader(csvfile):
        renames.append({
            'from': row['sAMAccountName'],
            'to': row['mail']
        })

for rename in renames:
    stmt = (users.update()
            .where(users.c.username == rename['from'])
            .values(username=rename['to']))
    print(str(stmt.compile(dialect=postgresql.dialect(),
                           compile_kwargs={"literal_binds": True})) + ';')

When processing this users.csv:

sAMAccountName,mail
bmcboatface,boaty.mcboatface@example.com
ndhyani,naina.dhyani@contoso.com

it gives output like this:

UPDATE users SET username='boaty.mcboatface@example.com' WHERE users.username = 'bmcboatface';
UPDATE users SET username='naina.dhyani@contoso.com' WHERE users.username = 'ndhyani';

Why a research vessel has an email address is yet to be determined. I have been in touch with Example Inc's IT team and have had no response.
Maybe you mean the echo parameter of sqlalchemy.create_engine?
/tmp$ cat test_s.py

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Department(Base):
    __tablename__ = "departments"

    department_id = sa.Column(sa.types.Integer, primary_key=True)
    name = sa.Column(sa.types.Unicode(100), unique=True)
    chief_id = sa.Column(sa.types.Integer)
    parent_department_id = sa.Column(sa.types.Integer,
                                     sa.ForeignKey("departments.department_id"))
    parent_department = sa.orm.relation("Department")

engine = sa.create_engine("sqlite:///:memory:", echo=True)
Base.metadata.create_all(bind=engine)
/tmp$ python test_s.py

2011-03-24 15:09:58,311 INFO sqlalchemy.engine.base.Engine.0x...42cc PRAGMA table_info("departments")
2011-03-24 15:09:58,312 INFO sqlalchemy.engine.base.Engine.0x...42cc ()
2011-03-24 15:09:58,312 INFO sqlalchemy.engine.base.Engine.0x...42cc
CREATE TABLE departments (
    department_id INTEGER NOT NULL,
    name VARCHAR(100),
    chief_id INTEGER,
    parent_department_id INTEGER,
    PRIMARY KEY (department_id),
    UNIQUE (name),
    FOREIGN KEY(parent_department_id) REFERENCES departments (department_id)
)
2011-03-24 15:09:58,312 INFO sqlalchemy.engine.base.Engine.0x...42cc ()
2011-03-24 15:09:58,312 INFO sqlalchemy.engine.base.Engine.0x...42cc COMMIT
