SQLAlchemy - get date that table was created - python

I'm connecting to an Oracle database from SQLAlchemy and I want to know when the tables in the database were created. I can access this information through the SQL Developer application, so I know it is stored somewhere, but I don't know if it's possible to get it from SQLAlchemy.
Also, if it's not possible, how should I be getting it?

SQLAlchemy doesn't provide anything to help you get that information; you have to query the database yourself. Something like:

with engine.begin() as c:
    result = c.execute("""
        SELECT created
        FROM dba_objects
        WHERE object_name = <<your table name>>
        AND object_type = 'TABLE'
    """)

Related

Query data from schema.table in Postgres

I'm very new to SQLAlchemy and I've encountered a problem with a Postgres database. I can successfully connect to the PostgreSQL database, and I think I've directed the engine to my desired schema.
from sqlalchemy import create_engine, inspect

cstr = f"postgresql+psycopg2://{username}:{password}@{server}:{port}/{database}"
engine = create_engine(cstr, connect_args={'options': '-csearch_path={}'.format("schema_name")}, echo=True)
con = engine.connect()
insp = inspect(con)
print(insp.default_schema_name)
This prints out the correct schema_name.
print(con.execute('SELECT * FROM table_name'))
However, I still get error messages saying that the table does not exist.
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "table_name" does not exist
I also tried without the connect_args={'options': '-csearch_path={}'.format("schema_name")} clause and used schema_name.table_name in the SQL query. The same error occurs. It's not a local database, so I can't do anything to the database except get data from it. What should I do here?
It's interesting how I searched the answers for hours and decided to ask instead. And right after, I found the solution. Just in case anyone is interested in the answer. I got my solution from this answer
Selecting data from schema based table in Postgresql using psycopg2
print(con.execute("""SELECT DISTINCT "column_name" FROM schema_name."table_name";"""))
This is the way to do it, with a looot of quotation marks
I don't know about your framework (SQLAlchemy), but the correct query should be something like this:
SELECT table_name FROM information_schema.tables WHERE table_schema='public'
Reference docs
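
If you want to run that check through SQLAlchemy itself, here is a minimal sketch; the connection URL and schema name are placeholders:

from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://user:password@host:5432/database")  # placeholder URL

with engine.connect() as conn:
    rows = conn.execute(
        text("SELECT table_name FROM information_schema.tables WHERE table_schema = :schema"),
        {"schema": "schema_name"},  # assumed schema name
    )
    print([row[0] for row in rows])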
Rather than manually quoting the identifiers in queries you can let SQLAlchemy do the work for you.
Given this table and data:
test=# create table "Horse-Bus" (
test(#   id integer generated always as identity,
test(#   name varchar,
test(#   primary key(id)
test(# );
CREATE TABLE
test=#
test=# insert into "Horse-Bus" (name) values ('Alice'), ('Bob'), ('Carol');
INSERT 0 3
You can create a Table object and query it like this:
>>> import sqlalchemy as sa
>>> engine = sa.create_engine('postgresql:///test', echo=False, future=True)
>>> tbl = sa.Table('Horse-Bus', sa.MetaData(), autoload_with=engine)
>>> with engine.connect() as conn:
...     rows = conn.execute(sa.select(tbl))
...     for row in rows:
...         print(row)
...
(1, 'Alice')
(2, 'Bob')
(3, 'Carol')
>>>

How to set or select postgres' current_settings with peewee?

I have a postgres database that I want to query with peewee in python. If I connect to the database directly (psql or pgadmin) I can do something like
set my.setting='test'
or
select current_setting('my.setting')
How can I do this with peewee? The model I have contains only the tables I have in my database.
Thanks for the help!
You can execute raw SQL using the Database method execute_sql(). For example:

from peewee import PostgresqlDatabase

db = PostgresqlDatabase("test")
db.execute_sql("set my.setting to 'test'")
cur = db.execute_sql("show my.setting")
print(cur.fetchone()[0])
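
If you prefer the current_setting() form from your question, execute_sql() also accepts query parameters. A small sketch, assuming the setting was set earlier on the same connection:

cur = db.execute_sql("select current_setting(%s)", ("my.setting",))
print(cur.fetchone()[0])

Note that set my.setting only lives in the current session, so both statements have to go through the same open connection.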

SQLAlchemy table reflection with Sybase

When I try to reflect all tables in my Sybase DB
metadata = MetaData()
metadata.reflect(bind=engine)
SQLAlchemy runs the following query:
SELECT o.name AS name
FROM sysobjects o JOIN sysusers u ON o.uid = u.uid
WHERE u.name = @schema_name
AND o.type = 'U'
I then try to print the contents of metadata.tables, and this yields nothing.
I've tried creating an individual Table object and using the autoload=True option, but this yields a TableDoesNotExist error.
accounts = Table('Accounts', metadata, autoload=True, autoload_with=engine)
I looked into this query and it seems @schema_name is being bound to my username, and none of the tables coming from sysobjects appear to have a "name" attribute set to my username. They are all set to "dbo", which means the database owner, so the query returns nothing and nothing is ever reflected. Is there any way to force SQLAlchemy to use something different as the schema name?
I've found two questions regarding table reflection using the Sybase dialect. Both were asked 6 years ago and seem to indicate that table reflection with Sybase was unsupported. However, it seems that SQLAlchemy tries to run a genuine sybase reflection query as above, so I don't think this is the case now.
I've solved this by setting the schema parameter on the MetaData object. I had to set it to dbo. You can also specify this in the reflect function.
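A minimal sketch of both approaches; the connection URL is a placeholder, so substitute your own Sybase DSN and credentials:

from sqlalchemy import MetaData, Table, create_engine

engine = create_engine("sybase+pyodbc://user:password@mydsn")  # placeholder URL

# Option 1: set a default schema on the MetaData object
metadata = MetaData(schema="dbo")
metadata.reflect(bind=engine)

# Option 2: pass the schema to reflect() explicitly
metadata2 = MetaData()
metadata2.reflect(bind=engine, schema="dbo")

# Reflecting a single table works the same way once the schema is supplied
accounts = Table("Accounts", MetaData(), autoload_with=engine, schema="dbo")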

psycopg2.OperationalError: FATAL: database does not exist

I'm trying to populate a couple of databases with psycopg2 on a server where I am not the root user (I don't know if that's relevant or not). My code looks like
import json
from psycopg2 import connect

cors = connect(user='jungal01', dbname='course')
req = connect(user="jungal01", dbname='requirement')
core = cors.cursor()
reqs = req.cursor()

with open('gened.json') as gens:
    geneds = json.load(gens)
for i in range(len(geneds)):
    core.execute('''insert into course (number, description, title)
        values({0}, {1}, {2});'''.format(geneds[i]["number"], geneds[i]['description'], geneds[i]['title']))
    reqs.execute('''insert into requirement (fulfills)
        values({0});'''.format(geneds[i]['fulfills']))
db.commit()
When I execute the code, I get the above psycopg2 error. I know that these particular databases exist, but I just can't figure out why it won't connect to them. (Side question: I am also unsure about that commit statement. Should it be inside the for loop or outside of it? Is it supposed to be database-specific?)
First, db is not a defined variable, so your code shouldn't run to completion anyway.
\list on this server is a bunch of databases full of usernames, of which my username is one
Then the following is how you should connect: to a database, not a table. The regular pattern is to give the database name and then the user/password.
A "schema" is a loose term in relational databases. Both tables and databases have schemas, but you seem to be expecting to connect to a table, not a database.
So, try this code with an attempt at fixing your indentation and SQL injection problem -- See this documentation
Note that you first must have created the two tables in the database you are connecting to.
import json
from psycopg2 import connect

username = 'jungal01'
conn = connect(dbname=username, user=username)
cur = conn.cursor()

with open('gened.json') as gens:
    geneds = json.load(gens)

for g in geneds:
    cur.execute('''insert into course (number, description, title)
        values(%(number)s, %(description)s, %(title)s);''', g)
    cur.execute('''insert into requirement (fulfills)
        values(%(fulfills)s);''', g)

conn.commit()
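
If the two tables don't exist yet, here is a rough sketch of creating them first; the column types are assumptions based on the inserts above, so adjust them to your real data:

# Hypothetical table definitions -- the text column types are a guess.
cur.execute('''create table if not exists course (
    number text,
    description text,
    title text
);''')
cur.execute('''create table if not exists requirement (
    fulfills text
);''')
conn.commit()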
Allen, you said: "in postgres, tables are databases." That's wrong. Your error message results from this misunderstanding. You want to connect to a database, and insert into a table that exists in that database. You're trying to insert into a database -- a nonsensical operation.
Make sure you are giving the catalog name as the database name, not a schema under the catalog.
Catalog is a confusing and largely unnecessary term. More details here: What's the difference between a catalog and a schema in a relational database?

sqlalchemy database table is locked

I am trying to select all the records from a SQLite db I have with SQLAlchemy, loop over each one and do an update on it. I am doing this because I need to reformat every record in my name column.
Here is the code I am using to do a simple test:
def loadDb(name):
    sqlite3.connect(name)
    engine = create_engine('sqlite:///' + dbPath(), echo=False)
    metadata = MetaData(bind=engine)
    return metadata

db = database("dealers.db")
metadata = db.loadDb()
dealers = Table('dealers', metadata, autoload=True)
dealer = dealers.select().order_by(asc(dealers.c.id)).execute()

for d in dealer:
    u = dealers.update(dealers.c.id == d.id)
    u.execute(name="hi")
    break
I'm getting the error:
sqlalchemy.exc.OperationalError: (OperationalError) database table is locked u'UPDATE dealers SET name=? WHERE dealers.id = ?' ('hi', 1)
I'm very new to sqlalchemy and I'm not sure what this error means or how to fix it. This seems like it should be a really simple task, so I know I am doing something wrong.
With SQLite, you can't update the database while you are still performing the select. You need to force the select query to finish and store all of the data, then perform your loop. I think this would do the job (untested):
dealer = list(dealers.select().order_by(asc(dealers.c.id)).execute())
Another option would be to make a slightly more complicated SQL statement so that the loop executes inside the database instead of in Python. That will certainly give you a big performance boost.
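For instance, here is a sketch of that second approach; func.upper() is only a stand-in, since the actual reformatting rule for the name column isn't shown:

from sqlalchemy import func

# One UPDATE executed inside SQLite instead of a row-by-row Python loop.
# The implicit .execute() works because the MetaData above is bound to the engine.
dealers.update().values(name=func.upper(dealers.c.name)).execute()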
