I'm very new to SQLAlchemy and I've run into a problem regarding Postgres databases. I can successfully connect to the PostgreSQL database, and I think I've pointed the engine at my desired schema.
cstr = f"postgresql+psycopg2://{username}:{password}#{server}:{port}/{database}"
engine = create_engine(cstr,connect_args={'options': '-csearch_path={}'.format("schema_name")},echo=True)
con = engine.connect()
insp = inspect(con)
print(insp.default_schema_name)
This prints out the correct schema_name.
print(con.execute('SELECT * FROM table_name'))
However, I still get error messages saying that the table does not exist.
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "table_name" does not exist
I also tried without the connect_args={'options': '-csearch_path={}'.format("schema_name")} clause and used schema_name.table_name in the SQL query. The same error occurs. It's not a local database, so I can't do anything to it except read data. What should I do here?
It's funny how I searched for answers for hours, decided to ask instead, and found the solution right after. Just in case anyone is interested, I got my solution from this answer:
Selecting data from schema based table in Postgresql using psycopg2
print(con.execute("""SELECT DISTINCT "column_name" FROM schema_name."table_name";"""))
This is the way to do it, though it takes a lot of quotation marks: double quotes around each identifier, inside the triple-quoted Python string.
I don't know about your framework (SQLAlchemy), but the correct query should be something like this:
SELECT table_name FROM information_schema.tables WHERE table_schema='public'
Reference docs
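For example, a minimal sketch of running that catalog query through SQLAlchemy (the DSN here is a placeholder, not your real connection string):
import sqlalchemy as sa

# Placeholder DSN; substitute your real credentials.
engine = sa.create_engine("postgresql+psycopg2://user:pass@host:5432/db")
with engine.connect() as conn:
    rows = conn.execute(sa.text(
        "SELECT table_name FROM information_schema.tables "
        "WHERE table_schema = 'public'"
    ))
    for row in rows:
        print(row.table_name)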
Rather than manually quoting the identifiers in queries you can let SQLAlchemy do the work for you.
Given this table and data:
test=# create table "Horse-Bus" (
test(#   id integer generated always as identity,
test(#   name varchar,
test(#   primary key(id)
test(# );
CREATE TABLE
test=#
test=# insert into "Horse-Bus" (name) values ('Alice'), ('Bob'), ('Carol');
INSERT 0 3
You can create a Table object and query it like this:
>>> import sqlalchemy as sa
>>> engine = sa.create_engine('postgresql:///test', echo=False, future=True)
>>> tbl = sa.Table('Horse-Bus', sa.MetaData(), autoload_with=engine)
>>> with engine.connect() as conn:
...     rows = conn.execute(sa.select(tbl))
...     for row in rows:
...         print(row)
...
(1, 'Alice')
(2, 'Bob')
(3, 'Carol')
>>>
TL;DR / Summary:
How do I convert/cast a remote database (and its constituent tables), fetched via asyncpg, into sqlite3?
I have a remote database (the wrapper connecting to it is asyncpg) and I want to clone it locally into sqlite3 so that both the table structure and the data are cloned completely.
My initial idea was to do something like :
CREATE TABLE new_table LIKE ?;
INSERT INTO new_table SELECT * FROM ?;
but this runs into the problem of what object should be passed as an argument to the sqlite3 cursor. Since I couldn't resolve this dilemma myself, my next idea was to get the table name and the constituent column data, reconstruct the SQL query from them, and pass that on to sqlite3. If this worked correctly, we could at least clone the table structure and then pass in the appropriate data.
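As an aside on why the placeholder idea fails (a minimal sketch, not from the original post): sqlite3 parameters bind values only, never identifiers, so a table name can't be passed as ?:
import sqlite3

con = sqlite3.connect(":memory:")
# Placeholders bind values, not identifiers, so the table name
# cannot be parameterized; this raises sqlite3.OperationalError.
try:
    con.execute("CREATE TABLE ? (id INTEGER)", ("new_table",))
except sqlite3.OperationalError as e:
    print(e)  # near "?": syntax error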
To proceed with this line of thinking, I first got all the tables in the remote database:
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
(Admittedly this also returns some tables that I never defined, but it works.)
Now that we have the name of every table, let's say one of the fetched tables is named foo so we can work with it. We want to clone this table locally into sqlite3, and we attempt that by first getting the column info:
SELECT column_name, data_type, character_maximum_length, column_default, is_nullable
FROM INFORMATION_SCHEMA.COLUMNS WHERE table_name = 'foo'
Here is the code for "reconstructing" the table:
import asyncpg
import asyncio
import os
from asyncpg import Record
from typing import List
def make_sql(table_name: str, column_data: List[Record]) -> str:
    sql = f"CREATE TABLE {table_name} (\n"
    for i in column_data:
        sql += f"{i['column_name']} {i['data_type'].upper()} {'NOT NULL' if i['is_nullable'].upper() == 'NO' else ''} {'' if i['column_default'] is None else i['column_default']},\n"
    sql = sql[:-2] + "\n);"
    return sql

async def main():
    con = await asyncpg.connect(os.getenv("DATABASE_URL"))
    # In the actual case we will be getting every table name from the database;
    # for now let's only focus on the table foo
    response = await con.fetch(
        "SELECT * FROM information_schema.columns WHERE table_name = 'foo'"
    )
    print(make_sql("foo", response))
    await con.close()

asyncio.get_event_loop().run_until_complete(main())
Here is how the table foo was actually created:
CREATE TABLE foo (
    bar bigint NOT NULL,
    baz text NOT NULL,
    CONSTRAINT foo_pkey PRIMARY KEY (bar)
);
Here is how my code attempted to reconstruct the previous query:
CREATE TABLE foo (
    bar BIGINT NOT NULL ,
    baz TEXT NOT NULL
);
This has very clearly lost structure (namely the constraints) in the reconstruction, and I am sure there are other edge cases it doesn't cover. Sewing and stitching strings together like this is extremely fragile; some other approach would be better, because this seems fundamentally wrong.
Which leads me to the question: how do I clone the exact table structure and data into sqlite3 properly?
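For what it's worth, here is a minimal sketch of the data-copying half, assuming the structure has already been recreated locally and the sqlite3 table has compatible columns (the table name foo and the file local.db are placeholders):
import asyncio
import os
import sqlite3

import asyncpg

async def copy_rows(table_name: str, sqlite_path: str) -> None:
    # Fetch every row from the remote Postgres table.
    pg = await asyncpg.connect(os.getenv("DATABASE_URL"))
    rows = await pg.fetch(f"SELECT * FROM {table_name}")
    await pg.close()

    # Insert them into the already-created local sqlite3 table.
    lite = sqlite3.connect(sqlite_path)
    if rows:
        placeholders = ", ".join("?" for _ in rows[0].keys())
        lite.executemany(
            f"INSERT INTO {table_name} VALUES ({placeholders})",
            [tuple(r) for r in rows],
        )
        lite.commit()
    lite.close()

asyncio.get_event_loop().run_until_complete(copy_rows("foo", "local.db"))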
I'm trying to do something extremely simple that works, but not the way I expect it to. I have a database with various tables and for each of those tables, I'm trying to extract the column names from the information schema. I'm using the code below and everything works like a charm (python):
import psycopg2 as pgsql
import pandas as pd
# code to connect and generate cursor
table = 'some_table_name'
query = 'SELECT column_name FROM information_schema.columns WHERE table_name = %s'
cursor.execute(query, (table,))
result = pd.DataFrame(cursor.fetchall())
print(result)
So far, so good. The problem arises when I replace the query variable with the following:
import psycopg2 as pgsql
import pandas as pd
# code to connect and generate cursor
table = 'some_table_name'
query = 'SELECT column_name FROM information_schema.columns WHERE table_name=' + table
cursor.execute(query)
result = pd.DataFrame(cursor.fetchall())
print(result)
If I print the statement, it's correct:
SELECT column_name FROM information_schema.columns WHERE table_name=some_table_name
However, when I run the query, I'm getting this error message:
UndefinedColumn: column "some_table_name" does not exist
LINE 1: ... FROM information_schema.columns WHERE table_name=some_tabl...
some_table_name is a table name passed as a value to the WHERE clause, not a column name. How is this even possible?
Thanks!
Your problem is that you haven't put some_table_name in quotes, so it is treated as a column name rather than a string literal. Why not stick with the first method, which both works and is in line with the psycopg2 documentation?
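If you really do need to compose the query dynamically, a minimal sketch using psycopg2's sql module keeps the quoting safe (it assumes the same cursor as above; the table name is a placeholder):
from psycopg2 import sql

table = 'some_table_name'
# sql.Literal quotes the value as a string literal for you.
query = sql.SQL(
    "SELECT column_name FROM information_schema.columns WHERE table_name = {}"
).format(sql.Literal(table))
cursor.execute(query)  # cursor from the connection shown earlier
print(cursor.fetchall())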
I'm connecting to an Oracle database from sqlalchemy and I want to know when the tables in the database were created. I can access this information through the sql developer application so I know that it is stored somewhere, but I don't know if its possible to get this information from sqlalchemy.
Also, if it's not possible, how should I be getting it?
SQLAlchemy doesn't provide anything to help you get that information. You have to query the database yourself,
something like:
with engine.begin() as c:
    result = c.execute("""
        SELECT created
        FROM dba_objects
        WHERE object_name = <<your table name>>
        AND object_type = 'TABLE'
    """)
Is there a way to return the aliased column names from a sql query returned from JayDeBeApi?
For example, I have the following query:
sql = """ SELECT visitorid AS id_alias FROM table LIMIT 1 """
I then run the following (connect_to_vdm() establishes a connection to my DB):
curs = connect_to_vdm().cursor()
curs.execute(sql)
vals = curs.fetchall()
I normally retrieve column names like so:
desc = curs.description
column_names = [col[0] for col in desc]
This returns the original column name "visitorid" and not the alias specified in the query "id_alias".
I know I could swap the names for the values in Python, but I was hoping to have this handled within the query, since the alias is already defined in the SELECT statement. It behaves as expected in a SQL client, but I cannot seem to get the aliases back when using Python/JayDeBeApi. Is there a way to do this using JayDeBeApi?
EDIT:
I have discovered that structuring my query as a CTE seems to fix the problem, but I'm still wondering if there is a more straightforward solution out there. Here is how I rewrote the same query:
sql = """ WITH cte (id_alias) AS (SELECT visitorid AS id_alias FROM table LIMIT 1) SELECT id_alias from cte"""
I was able to fix this using a CTE (Common Table Expression)
sql = """ WITH cte (id_alias) AS (SELECT visitorid AS id_alias FROM table LIMIT 1) SELECT id_alias from cte"""
Hat tip to pybokeh on GitHub, but this worked for me.
According to IBM (here and here), the behavior of JDBC drivers changed at some point. Bizarrely, the column aliases display just fine in a tool like DbVisualizer, but not when querying through jaydebeapi.
To fix, add the following to the end of your DB URL:
:useJDBC4ColumnNameAndLabelSemantics=false;
Example:
jdbc:db2://[DBSERVER]:[PORT]/[DBNAME]:useJDBC4ColumnNameAndLabelSemantics=false;
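For example, a minimal jaydebeapi connection sketch with the property appended (the host, credentials, and jar path are all placeholders):
import jaydebeapi

conn = jaydebeapi.connect(
    "com.ibm.db2.jcc.DB2Driver",
    "jdbc:db2://dbserver:50000/mydb:useJDBC4ColumnNameAndLabelSemantics=false;",
    ["user", "password"],
    "db2jcc4.jar",  # path to the JDBC driver jar
)
curs = conn.cursor()
curs.execute("SELECT visitorid AS id_alias FROM table LIMIT 1")
print([col[0] for col in curs.description])  # should now report id_alias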
I would like to check whether a database table exists or not, but I don't know how to do it.
I wrote the following (using SQLite as an example, although I mainly use MySQL):
import sqlite3
table_name = "some_table"
connection = sqlite3.connect(db)
cursor = connection.cursor()
table_check = "SELECT name FROM sqlite_master WHERE type='table' AND name={};".format(table_name)
if not cursor.execute(table_check).fetchone():  # if the table doesn't exist
    # OR: if cursor.execute(table_check).fetchone() == "":
    create_table()
else:
    update_table()
But an error occurred and I cannot proceed.
sqlite3.OperationalError: no such column: some_table
I read several Q&As here, but I couldn't work it out from them.
Any advice would help me.
Thank you.
Python 3.5.1
The answer depends on which RDBMS product (MySQL, SQLite, MS SQL, etc.) you use.
You are getting this particular error because you did not enclose the value of the table_name variable in single quotes, so it is parsed as a column name.
In MySQL you can query the information_schema.tables table to check whether a table exists. A corrected version of your SQLite check is sketched below.
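A minimal sketch of that corrected check, binding the table name as a parameter so it is quoted as a string literal (example.db is a placeholder):
import sqlite3

table_name = "some_table"
connection = sqlite3.connect("example.db")  # placeholder database file
cursor = connection.cursor()

# The parameter is bound as a string literal, avoiding the quoting bug.
cursor.execute(
    "SELECT name FROM sqlite_master WHERE type='table' AND name=?;",
    (table_name,),
)
if cursor.fetchone() is None:
    create_table()  # as defined in the question
else:
    update_table()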