When I try to reflect all tables in my Sybase DB
metadata = MetaData()
metadata.reflect(bind=engine)
SQLAlchemy runs the following query:
SELECT o.name AS name
FROM sysobjects o JOIN sysusers u ON o.uid = u.uid
WHERE u.name = @schema_name
AND o.type = 'U'
I then try to print the contents of metadata.tables, and this yields nothing.
I've tried creating an individual Table object and using the autoload=True option, but this yields a TableDoesNotExist error.
accounts = Table('Accounts', metadata, autoload=True, autoload_with=engine)
I looked into this query, and it seems that @schema_name is being filled with my username. None of the tables coming from "sysobjects" belong to a user whose "name" is my username; they all belong to "dbo" (the database owner), so the query returns nothing and nothing is ever reflected. Is there any way to force SQLAlchemy to use something different as the schema name?
I've found two questions regarding table reflection with the Sybase dialect. Both were asked six years ago and seem to indicate that table reflection was unsupported for Sybase. However, SQLAlchemy does run a genuine Sybase reflection query, as shown above, so I don't think that is the case any more.
I've solved this by setting the schema parameter on the MetaData object. I had to set it to dbo. You can also specify this in the reflect function.
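A minimal runnable sketch of that fix. Since no Sybase server is at hand, an in-memory SQLite database with an attached schema named dbo stands in for it, and the accounts table is hypothetical:

```python
from sqlalchemy import MetaData, Table, create_engine, text
from sqlalchemy.pool import StaticPool

# Stand-in for the Sybase server: SQLite with an attached "dbo" schema.
engine = create_engine("sqlite://", poolclass=StaticPool)
with engine.begin() as conn:
    conn.execute(text("ATTACH DATABASE ':memory:' AS dbo"))
    conn.execute(text("CREATE TABLE dbo.accounts (id INTEGER PRIMARY KEY)"))

# Setting schema on the MetaData makes reflection look in "dbo"
# instead of the schema derived from the login name.
metadata = MetaData(schema="dbo")
metadata.reflect(bind=engine)
print(sorted(metadata.tables))  # ['dbo.accounts']

# The same keyword works on an individual Table as well.
accounts = Table("accounts", MetaData(), schema="dbo", autoload_with=engine)
```

Note that reflected tables are then keyed by their schema-qualified name ('dbo.accounts'), not the bare table name.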
I am using MySQL. I was able to find ways to do this using the SQLAlchemy ORM, but not using the expression language, so I am specifically looking for Core / expression-language solutions. The databases lie on the same server.
This is what my connection looks like:
connection = engine.connect().execution_options(
    schema_translate_map={current_database_schema: new_database_schema})
engine_1 = create_engine("mysql+mysqldb://root:user@*******/DB_1")
engine_2 = create_engine("mysql+mysqldb://root:user@*******/DB_2", pool_size=5)
metadata_1 = MetaData(engine_1)
metadata_2 = MetaData(engine_2)
metadata_1.reflect(bind=engine_1)
metadata_2.reflect(bind=engine_2)
table_1 = metadata_1.tables['table_1']
table_2 = metadata_2.tables['table_2']
query = select([table_1.c.name, table_2.c.name]).\
    select_from(outerjoin(table_1, table_2, table_1.c.id == table_2.c.id))
result = connection.execute(query).fetchall()
However, when I try to join tables from different databases it throws an error obviously because the connection belongs to one of the databases. And I haven't tried anything else because I could not find a way to solve this.
Another way to put the question (maybe) is 'how to connect to multiple databases using a single connection in sqlalchemy core'.
Applying the solution from here to Core only you could create a single Engine object that connects to your server, but without defaulting to one database or the other:
engine = create_engine("mysql+mysqldb://root:user@*******/")
and then using a single MetaData instance reflect the contents of each schema:
metadata = MetaData(engine)
metadata.reflect(schema='DB_1')
metadata.reflect(schema='DB_2')
# Note: use the fully qualified names as keys
table_1 = metadata.tables['DB_1.table_1']
table_2 = metadata.tables['DB_2.table_2']
You could also use one of the databases as the "default" and pass it in the URL. In that case you would reflect tables from that database as usual and pass the schema= keyword argument only when reflecting the other database.
Use the created engine to perform the query:
query = select([table_1.c.name, table_2.c.name]).\
    select_from(outerjoin(table_1, table_2, table_1.c.id == table_2.c.id))

with engine.begin() as connection:
    result = connection.execute(query).fetchall()
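For illustration, here is the two-schema reflection pattern from above in runnable form, with SQLite standing in for MySQL: the main database plays the role of the default database, an attached database plays DB_2, and the table names are hypothetical:

```python
from sqlalchemy import MetaData, create_engine, text
from sqlalchemy.pool import StaticPool

# Stand-in for the MySQL server: SQLite's "main" is the default
# database, and an attached database acts as DB_2.
engine = create_engine("sqlite://", poolclass=StaticPool)
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE table_1 (id INTEGER, name TEXT)"))
    conn.execute(text("ATTACH DATABASE ':memory:' AS DB_2"))
    conn.execute(text("CREATE TABLE DB_2.table_2 (id INTEGER, name TEXT)"))

metadata = MetaData()
metadata.reflect(bind=engine)                 # default database: plain keys
metadata.reflect(bind=engine, schema="DB_2")  # other database: qualified keys

table_1 = metadata.tables["table_1"]
table_2 = metadata.tables["DB_2.table_2"]
```

Both Table objects now live in one MetaData bound to a single server-level engine, which is what makes the cross-database join possible.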
I'm connecting to an Oracle database from SQLAlchemy and I want to know when the tables in the database were created. I can access this information through the SQL Developer application, so I know it is stored somewhere, but I don't know if it's possible to get it from SQLAlchemy.
If it's not possible, how should I be getting it?
SQLAlchemy doesn't provide anything to help you get that information. You have to query the database yourself.
Something like:
with engine.begin() as c:
    result = c.execute("""
        SELECT created
        FROM dba_objects
        WHERE object_name = <<your table name>>
          AND object_type = 'TABLE'
        """)
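Rather than splicing the table name into the string, the same query can take a bound parameter through text(). A sketch of the pattern, run here against SQLite's sqlite_master catalog since no Oracle instance is at hand; the Oracle form of the statement is shown in the comment:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
with engine.begin() as c:
    c.execute(text("CREATE TABLE demo (id INTEGER)"))
    # Against Oracle the statement would be:
    #   SELECT created FROM dba_objects
    #   WHERE object_name = :tab AND object_type = 'TABLE'
    # SQLite's own catalog table is queried the same way here.
    row = c.execute(
        text("SELECT name FROM sqlite_master "
             "WHERE name = :tab AND type = 'table'"),
        {"tab": "demo"},
    ).fetchone()
print(row[0])  # demo
```

The value is passed separately from the SQL text, so there is no quoting or injection concern.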
I want to use SQLAlchemy to create a view in my PostgreSQL database. I'm using the CreateView compiler from sqlalchemy-views. I'm using the answer to this question as a reference:
How to create an SQL View with SQLAlchemy?
My code for creating the view looks like this:
def create_view(self, myparameter):
    mytable = Table('mytable', metadata, autoload=True)
    myview = Table('myview', metadata)
    engine.execute(CreateView(
        myview,
        mytable.select().where(mytable.c.mycolumn == myparameter)))
However, when I attempt to run this query, the following exception is thrown:
KeyError: 'mycolumn_1'
Looking at the compiled query, it seems that a placeholder for my parameter value is not being replaced:
'\nCREATE VIEW myview AS SELECT mytable.mycolumn \nFROM mytable \nWHERE mytable.mycolumn = %(mycolumn_1)s\n\n'
Since the placeholder is not being replaced, the query obviously fails. However, I do not understand why the replacement does not happen, since my code does not differ much from the example.
My first suspicion was that maybe the type of the parameter and the column were incompatible. Currently, the parameter comes in as a unicode string, which should be mapped to a text column in my database. I have also tried mapping the parameter as a long to a bigint column with the same (failed) result.
Does anyone have another suggestion?
From the SQLAlchemy documentation, I can see that bindparam() is used when you want to pass the actual value at statement execution time. A nice example is also provided:
from sqlalchemy import bindparam
stmt = select([users_table]).\
    where(users_table.c.name == bindparam('username'))
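A runnable sketch of how the value is then supplied at execution time. The users table and sample rows are invented for illustration, and the 1.4+ select(table) calling style is used so the snippet runs on current SQLAlchemy:

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        bindparam, create_engine, select)

engine = create_engine("sqlite://")
metadata = MetaData()
users_table = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)
metadata.create_all(engine)

# The statement carries a named placeholder instead of a literal value.
stmt = select(users_table).where(users_table.c.name == bindparam("username"))

with engine.begin() as conn:
    conn.execute(users_table.insert(), [{"name": "alice"}, {"name": "bob"}])
    # The value is supplied at execution time under the bindparam's name.
    rows = conn.execute(stmt, {"username": "alice"}).fetchall()
print(len(rows))  # 1
```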
I am creating a SQLAlchemy table like so:
myEngine = self.get_my_engine() # creates engine
metadata = MetaData(bind=myEngine)
SnapshotTable = Table("mytable", metadata, autoload=False, schema="my schema")
I have to use autoload=False because the table may or may not exist (and that code has to run before the table is created).
The problem is that with autoload=False, when I try to query the table (after it was created by another process) with session.query(SnapshotTable), I get an
InvalidRequestError: Query contains no columns with which to SELECT from.
error, which is understandable, because the table metadata was never loaded.
My question is: how do I "load" the table metadata after it has been defined with autoload=False?
I looked at the schema.py code and it seems that I can do:
SnapshotTable._autoload(metadata, None, None)
but that doesn't look right to me...any other ideas or thoughts?
Thanks
First declare the table model:
class MyTable(Base):
    __table__ = Table('mytable', metadata)
Or directly:
MyTable = Table("mytable", metadata)
Then, once you are ready to load it, call this:
Table('mytable', metadata, autoload_with=engine, extend_existing=True)
Where the key to it all is extend_existing=True.
All credit goes to Mike Bayer on the SQLAlchemy mailing list.
I was dealing with this issue just last night, and it turns out that all you need to do is load the available table definitions from the database with the help of metadata.reflect. This is very similar to @fgblomqvist's solution. The major difference is that you do not have to recreate the table. In essence, the following should help:
SnapshotTable.metadata.reflect(extend_existing=True, only=['mytable'])
The unsung hero here is the extend_existing parameter. It makes sure that the schema and other info associated with SnapshotTable are reloaded. The only parameter limits how much information is retrieved, which will save you a tremendous amount of time if you are dealing with a large database.
I hope this serves a purpose in the future.
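Put together, the pattern looks like this (a self-contained sketch against in-memory SQLite; mytable and its columns are hypothetical):

```python
from sqlalchemy import MetaData, Table, create_engine, text
from sqlalchemy.pool import StaticPool

engine = create_engine("sqlite://", poolclass=StaticPool)
metadata = MetaData()

# Declared before the table exists, so it has no columns yet.
SnapshotTable = Table("mytable", metadata)
assert len(SnapshotTable.columns) == 0

# ...later, "another process" creates the actual table:
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE mytable (id INTEGER, name TEXT)"))

# Reload just this table's definition into the existing Table object.
metadata.reflect(bind=engine, extend_existing=True, only=["mytable"])
print([c.name for c in SnapshotTable.columns])  # ['id', 'name']
```

Because extend_existing=True updates the Table object in place, anything already holding a reference to SnapshotTable sees the new columns.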
I guess the problem is that the metadata was never reflected.
You could try loading the metadata with this method call before executing any query:
metadata.reflect()
It will reload the table definition, so the framework will know how to build a proper SELECT.
And then calling:
if SnapshotTable.exists():
    SnapshotTable._init_existing()
I'm somewhat new to SQLAlchemy ORM, and I'm trying to select and then store data from a column within a view that has a forward slash in the name of the column.
The databases are mapped using the following:
source_engine = create_engine("...")
base = automap_base()
base.prepare(source_engine, reflect=True)
metadata = MetaData(source_engine)
table_1 = Table("table_1", metadata, autoload=True)
The second destination table is mapped the same way.
Then, I connect to this database, and I'm trying to select information from columns to copy into a different database:
source_table_session = Session(source_engine)
dest_table_session = Session(dest_engine)
table_1_data = source_table_session.query(table_1)
for instance in table_1_data:
    newrow = dest_table.base.classes.dest_table()
    newrow.Column1 = instance.Column1  # This works fine; the column has a normal name
But then, the problem is that there's a column in the view with the name "Slot/Port"
With a direct query, you can do:
select "Slot/Port" from source_database;
But in ORM, you can't just type:
newrow.Slot/Port = instance.Slot/Port
or
newrow.'Slot/Port' = instance.'Slot/Port'
That isn't going to be correct, and the following doesn't work either:
newrow.SlotPort = instance.SlotPort
AttributeError: 'result' object has no attribute 'SlotPort'
I have no control over how columns are named in the source database.
I find the SQLAlchemy documentation generally fragmented (only showing small snippets of code) and confusing, so I'm not sure whether this kind of thing is addressed. Is there a way around this limitation, or are the columns perhaps already mapped to a valid name without a slash, or a way to do so?
Thanks to @DeepSpace for helping me find the answer.
Instead of
newrow.whatever = instance.whatever
I needed:
setattr(newrow, 'Slot/Port', getattr(instance, 'Slot/Port'))
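A self-contained sketch of the same trick against in-memory SQLite (table and data invented; Core reflection is used here for brevity, but getattr works the same way on automap instances):

```python
from sqlalchemy import MetaData, Table, create_engine, text
from sqlalchemy.pool import StaticPool

engine = create_engine("sqlite://", poolclass=StaticPool)
with engine.begin() as conn:
    conn.execute(text('CREATE TABLE ports ("Slot/Port" TEXT)'))
    conn.execute(text("INSERT INTO ports VALUES ('1/24')"))

metadata = MetaData()
ports = Table("ports", metadata, autoload_with=engine)

with engine.connect() as conn:
    row = conn.execute(ports.select()).fetchone()

# row.Slot/Port would parse as a division expression; getattr
# accepts any string, however awkward the column name.
value = getattr(row, "Slot/Port")
print(value)  # 1/24
```

In Core you can also spell this as row._mapping["Slot/Port"], but getattr/setattr is the form that carries over to ORM instances as in the answer above.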