I'm working with SQLAlchemy to run SQL queries against an Oracle database. I have read access to the database, but the user I have does not own any of the tables I'm working with.
The database is updated on a regular basis, so rather than declaring the MetaData explicitly, I was hoping to use reflection. I found this question, which describes an issue similar to mine. However, I can't change ownership of the tables or modify the database in any way; I only have read access.
Is there a way to reflect Oracle tables in SQLAlchemy if I don't have ownership of those tables?
(Edit)
Example Code:
from sqlalchemy import create_engine, MetaData, Table

engine = create_engine('ORACLE CONNECTION STRING')
metadata = MetaData()
students = Table('students', metadata, autoload=True, autoload_with=engine)
This raises an exception: sqlalchemy.exc.NoSuchTableError: students
However, when I run the following:
results = engine.execute('SELECT * FROM students')
for r in results:
    print(r)
I receive the output that I expected from the table, which is a tuple of all the fields for each row.
So instead of trying to reflect a single table, I try to reflect all of them:
metadata.reflect(bind=engine)
print(metadata.tables)
The output is immutabledict({}).
So essentially it's nothing. All of these tables are owned by user A, whereas I'm logging in as a read-only user B.
You might have better luck reflecting someone else's tables if you specify the schema (account) you're targeting:
metadata.reflect(bind=engine, schema='userA')
This way, you'll reflect all readable tables belonging to 'userA'. I'm not sure why you're able to query students using engine.execute, though.
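For example (the connection string and account name are placeholders):

from sqlalchemy import create_engine, MetaData, Table

engine = create_engine('ORACLE CONNECTION STRING')
metadata = MetaData()

# Single-table reflection works the same way once the schema is given.
students = Table('students', metadata, schema='userA',
                 autoload=True, autoload_with=engine)

# After metadata.reflect(bind=engine, schema='userA'), the table keys
# are schema-qualified, e.g.:
print(metadata.tables['userA.students'])

As for the raw SELECT succeeding: Oracle resolves an unqualified table name through the current schema and then synonyms, so a synonym pointing at userA's students table would explain it, whereas reflection without a schema argument only looks at objects owned by the connected user.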
Our current project relies heavily on SQLAlchemy for table creation/data insertion. We would like to switch to TimescaleDB's hypertables, but it seems the recommended way to create hypertables is by executing a create_hypertable command. I need to be able to dynamically create tables, so manually doing this for every table created is not really an option. One way of handling the conversion is to run a Python script sending psycopg2 commands to convert all newly created tables into hypertables, but this seems a little clumsy. Does TimescaleDB offer any integration with SQLAlchemy for creating hypertables?
We currently do not offer any specific integrations with SQLAlchemy (either broadly or specifically for creating hypertables). We are always interested in hearing new feature requests, so if you post your issue/use case on our GitHub, it will help us keep better track of it for future work.
One thing that might work for your use case is to create an event trigger that executes on table creation. You'd have to check that it fires in the correct schema, since TimescaleDB dynamically creates its own chunk tables and you don't want those converted to hypertables.
See this answer for more info on event triggers:
execute a trigger when I create a table
Here is a practical example of using SQLAlchemy's after_create event to create a hypertable:
from sqlalchemy import Column, Integer, DateTime, event, DDL, orm

Base = orm.declarative_base()

class ExampleModel(Base):
    __tablename__ = 'example_model'

    id = Column(Integer, primary_key=True)
    # TimescaleDB requires every unique index (including the primary key)
    # to include the partitioning column, hence 'time' is part of the key.
    time = Column(DateTime, primary_key=True)

# Run create_hypertable right after the table itself is created.
event.listen(
    ExampleModel.__table__,
    'after_create',
    DDL(f"SELECT create_hypertable('{ExampleModel.__tablename__}', 'time');")
)
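With the listener registered, creating the schema through SQLAlchemy emits the CREATE TABLE followed by the create_hypertable call. For example (the connection string is a placeholder; the database needs the timescaledb extension installed):

from sqlalchemy import create_engine

engine = create_engine('postgresql://user:password@localhost:5432/mydb')
Base.metadata.create_all(engine)  # fires the 'after_create' DDL above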
--- UPDATED to be more clear ---
I have quite the task ahead of me, and I am hoping Alembic and SQLAlchemy can do what I need.
I want to develop a desktop application that helps migrate data from one SQLite database to a completely different model and back again. So, migrations, right? That's all well and good, but I need to ensure that even columns/tables I haven't explicitly modeled are ported to a table that can be read back later to reconstruct the original database.
Example:

DB 1:
  Table names: ID, First Name, Last Name
  Table address: Street 1, Street 2, City, State

DB 2:
  Table givenName: ID, name
  Table surname: ID, name
Say with Alembic I have mapped the following:
DB1 names.firstname => DB2 givenName.name
DB1 names.lastname => DB2 surname.name
But say I want to use a migration to port DB1 to DB2, store the unknown data somewhere, and then reconstruct it properly when I go from DB2 -> DB1.
So how I'd envision this is a joiner table of sorts:

DB2:
  Table Joiner: table_name, column_name, data
The thing is, I want this all to be completely dynamic, so no piece of information is ever lost.
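Something like this, roughly (just a sketch; note I've added a row identifier column, since the reverse migration needs to know which row each stray value came from):

from sqlalchemy import Column, Integer, String, Text, orm

Base = orm.declarative_base()

class Joiner(Base):
    """Catch-all for source data that has no mapped column in the target."""
    __tablename__ = 'joiner'

    id = Column(Integer, primary_key=True)
    table_name = Column(String(255))   # source table, e.g. 'address'
    column_name = Column(String(255))  # source column, e.g. 'Street 1'
    row_key = Column(String(255))      # identifies the source row for the round trip
    data = Column(Text)                # raw value, stored as text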
Now let's add an extra complexity: I want to construct a generator of sorts, so I can simply pass down new XML/JSON declarations. These would define the mappings and any calls to translators already present in the code (date conversions, etc.).
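For instance, a declaration might look something like this (purely illustrative; the shape and the translator names are made up):

# A possible shape for a mapping declaration -- shown as a Python dict,
# but equally expressible as JSON or XML.
MAPPING = {
    "mappings": [
        {"source": "names.firstname", "target": "givenName.name"},
        {"source": "names.lastname", "target": "surname.name"},
    ],
    "translators": {
        # translator names refer to functions already registered in the code
        "names.dob": "date_iso_to_us",
    },
}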
For an example of two database formats that need to be migrated from one to the other see https://cloudup.com/cYzP2lCQjbo
My question is whether this is even possible or conceivable with SQLAlchemy and Alembic. How feasible is it? Any thoughts?
I need to write a complex query, which retrieves a lot of data from a bunch of tables. Basically I need to find all instances of the models
Customer
Payment
Invoice
where relationships intersect in a specific way. In SqlAlchemy, I would be able to do something like
for c, p, i in session.query(Customer, Payment, Invoice).\
        filter(Customer.id == Payment.customer_id).\
        filter(Invoice.id == Payment.invoice_id).\
        filter(Payment.date == ...).\
        filter(Customer.some_property == ...).\
        all():
    # Do stuff ...
This would allow me to set several constraints and retrieve it all at once. In Django, I currently do something stupid like
customers = Customer.objects.filter(...)
payments = Payment.objects.filter(customer__in=customers)
invoices = Invoice.objects.filter(customer__in=customers, payment__in=payments)
Now, we already have three different queries (some details are left out to keep it simple). Could I reduce it to one? Well, I could have done something like
customers = Customer.objects.filter(...).prefetch_related(
'payments', 'payments__invoices'
)
but now I have to traverse a crazy tree of data instead of having it all laid out neatly in rows, like with SqlAlchemy. Is there any way Django can do something like that? Or would I have to drop through to custom SQL directly?
After reading up on different solutions, I have decided to use SqlAlchemy on top of my Django models. Some people try to completely replace the Django ORM with SqlAlchemy, but that almost completely defeats the purpose of using Django, since most of the framework relies on its ORM.
Instead, I use SqlAlchemy simply for querying the tables defined by the Django ORM. I follow a recipe similar to this:
# Setup sqlalchemy bindings
import sqlalchemy as s
from sqlalchemy.orm import sessionmaker

# The Django models are assumed importable, e.g.: from myapp import models
engine = s.create_engine('postgresql://<user>:<password>@<host>:<port>/<db_name>')

# Automatically read the database tables and create metadata
meta = s.MetaData()
meta.reflect(bind=engine)
Session = sessionmaker(bind=engine)

# Create a session, which can query the tables
session = Session()

# Build table instances without hardcoding table names
s_payment = meta.tables[models.Payment()._meta.db_table]
s_allocation = meta.tables[models.Allocation()._meta.db_table]
s_customer = meta.tables[models.Customer()._meta.db_table]
s_invoice = meta.tables[models.Invoice()._meta.db_table]

report = session.query(s_payment.c.amount, ...).all()
There is room for a few improvements in this recipe; e.g., it is not very elegant to create an empty instance of each Django model just to find its table name (in fact, _meta is also accessible on the model class itself). Still, with a few lines of code, I get the full flexibility of SqlAlchemy without compromising the Django ORM layer, which means both can live happily alongside each other.
One caveat is that SqlAlchemy will not use the same connection as the Django ORM, which means that the view of things may not appear consistent if I use both approaches in the same context. This won't be a problem for me though, since I just want to read a bunch of data from the database.
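To tie this back to the original question, the three-model report can then be expressed as a single query against the reflected tables, along these lines (a sketch; the join column names are assumptions about the underlying schema):

report = (
    session.query(s_customer.c.id, s_payment.c.amount, s_invoice.c.id)
    .filter(s_customer.c.id == s_payment.c.customer_id)
    .filter(s_invoice.c.id == s_payment.c.invoice_id)
    .all()
)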
I have a relatively complex MySQL Database (60+ Tables) that I need to populate regularly. There are a lot of foreign key constraints on most of the tables. I started writing my import engine using SQL Alchemy.
Do I need to reconstruct my entire Database with SQL Alchemy classes in order to do this? Does anyone have any better suggestions? Only 8 of the tables actually accept new raw data, the rest are populated from these tables.
You can use SQLAlchemy reflection to build Table objects matching the MySQL table structure. See Reflecting Database Objects; there is a subchapter there showing how to reflect all tables at once.
Code copied from the above link to reflect one table:
messages = Table('messages', meta, autoload=True, autoload_with=engine)
All tables:
meta = MetaData()
meta.reflect(bind=someengine)
users_table = meta.tables['users']
addresses_table = meta.tables['addresses']
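For the population step itself you don't need mapped classes at all: reflected Table objects support inserts directly. A sketch, assuming one of your 8 input tables is called raw_input (the DSN, table, and column names are placeholders):

from sqlalchemy import MetaData, create_engine

engine = create_engine('mysql://<user>:<password>@<host>/<db_name>')
meta = MetaData()
meta.reflect(bind=engine)

raw_input = meta.tables['raw_input']
with engine.begin() as conn:  # one transaction, so FK checks see consistent data
    conn.execute(raw_input.insert(), [
        {'name': 'example', 'value': 42},
    ])

The tables you populate from these could be filled the same way, or with textual INSERT ... SELECT statements, still without writing any classes.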
I'm trying to reflect some tables from a legacy DB (have no control over how it is set up, can't change it). The schema has a ton of tables in it, most of which aren't relevant to me, and which I don't have access to. When I try to reflect from the tables I do have access to, SQLAlchemy errors out because it can't do a SHOW command on some table I don't care about. E.g.:
from sqlalchemy import MetaData, Table
from sqlalchemy.engine import reflection

meta = MetaData()
fooTable = Table('foo', meta)
insp = reflection.Inspector.from_engine(eng)  # eng is an existing Engine
insp.reflecttable(fooTable, include_columns=("id", "name"))
sqlalchemy.exc.OperationalError: (OperationalError) (1142, "SHOW command denied to user 'xxx'@'yyy' for table 'bar'") 'SHOW CREATE TABLE `schema`.`bar`' ()
There is an FK in fooTable to barTable (not on the id or name columns), but since I have no read access to barTable, and no interest in it, I'd really prefer that SQLAlchemy just ignore it. Is this possible? I would be OK with a solution that doesn't try to load any relations at all, since I just need to read the data in individual tables.
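One workaround that sidesteps reflection entirely (and with it the FK lookup) is to declare by hand only the columns you need; a Table definition does not have to be complete to read from it. A sketch, with guessed column types and a placeholder connection string:

from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

eng = create_engine('mysql://<user>:<password>@<host>/<db_name>')
meta = MetaData()

# Partial, hand-written definition: just the columns we read, and no
# foreign keys, so nothing ever touches barTable. Types are guesses.
fooTable = Table('foo', meta,
                 Column('id', Integer, primary_key=True),
                 Column('name', String(255)))

with eng.connect() as conn:
    for row in conn.execute(fooTable.select()):
        print(row)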