In SQLAlchemy, a query like
user = sess.query(User).first()
will emit SQL with each column in the SELECT clause qualified with the schema name:
select
    myschema.user.id, -- (vs user.id)
    ...
from
    myschema.user
For some dialects, such as Presto when querying views, that is a syntax error.
Is there any way to make SQLAlchemy skip the schema name on the columns of the SELECT statement (i.e. user.id instead of myschema.user.id) without using aliased on every table? Or is there a setting so that SQLAlchemy automatically uses an alias?
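For reference, the per-table aliased() workaround the question mentions looks like this; a minimal sketch, assuming User maps to myschema.user:

from sqlalchemy.orm import aliased

# Columns are qualified with the alias name rather than schema.table.
u = aliased(User, name="u")
user = sess.query(u).first()
# emits roughly: SELECT u.id, ... FROM myschema.user AS u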
Related
I am trying to delete all rows in a table, but leave the table itself as-is, using Python SQLAlchemy.
My table is attributes.charger_status (attributes being the schema, charger_status being the table name). When I try to do this I get an error message. Is there somewhere else I should be referencing my schema name?
dele = attributes.charger_status.delete()
engine.execute(dele)
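A minimal sketch of one way this is usually written, assuming the table is reflected with an explicit schema= argument (connection URL elided, as in the question):

from sqlalchemy import MetaData, Table, create_engine

engine = create_engine(...)
metadata = MetaData()
# The schema is given separately instead of dotting it into the table name.
charger_status = Table("charger_status", metadata,
                       schema="attributes",
                       autoload_with=engine)

with engine.begin() as conn:
    # Emits: DELETE FROM attributes.charger_status
    conn.execute(charger_status.delete())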
I'd like to execute a DDL statement (for example: create table test(id int, str varchar)) in different DB schemas.
In order to execute this DDL I was going to use the following code:
from sqlalchemy import DDL, text, create_engine
engine = create_engine(...)
ddl_cmd = "create table test(id int, str varchar)"
DDL(ddl_cmd).execute(bind=engine)
How can I specify in which DB schema to execute this DDL statement, not changing the DDL command itself?
I don't understand why such a basic parameter as schema is missing from the DDL().execute() method. I guess I'm missing some important concept, but I couldn't figure it out.
UPD: I've found the "schema_translate_map" execution option, but it didn't work for me: the table is still created in the default schema.
Here are my attempts:
conn = engine.connect().execution_options(schema_translate_map={None: "my_schema"})
Then I tried different variants:
# variant 1
conn.execute(ddl_cmd)
# variant 2
conn.execution_options(schema_translate_map={None: "my_schema"}).execute()
# variant 3
DDL(ddl_cmd).compile(bind=conn).execute()
# variant 4
DDL(ddl_cmd).compile(bind=conn).execution_options(schema_translate_map={None: "my_schema"})
but every time the table is created in the default schema. :(
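One likely explanation (an assumption on my part, but consistent with the schema_translate_map documentation): the translate map only rewrites schema names taken from SQLAlchemy constructs such as Table, and has no effect on raw DDL strings. A minimal sketch that expresses the same table as a Table object so the map can take effect:

from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

engine = create_engine(...)
metadata = MetaData()
test = Table("test", metadata,
             Column("id", Integer),
             Column("str", String))  # schema left as None on purpose

with engine.begin() as conn:
    conn = conn.execution_options(schema_translate_map={None: "my_schema"})
    metadata.create_all(conn)  # emits CREATE TABLE my_schema.test (...)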
How can I execute an SQL query where the schema and table name are passed in a function? Something like below?
def get(engine, schema: str, table: str):
    query = text("select * from :schema.:table")
    result = engine.connect().execute(query, schema=schema, table=table)
    return result
Two things going on here:
Avoiding SQL injection
Dynamically setting a schema with (presumably) PostgreSQL
The first question has a very broad scope; you might want to look at older questions about SQLAlchemy and SQL injection, such as this one: SQLAlchemy + SQL Injection
Your second question can be addressed in a number of ways, though I would recommend the following approach from SQLAlchemy's documentation: https://docs.sqlalchemy.org/en/13/dialects/postgresql.html#remote-schema-table-introspection-and-postgresql-search-path
PostgreSQL supports a "search path" command which sets the schema for all subsequent operations on the connection.
So your query code might look like:
qry_str = f"SET search_path TO {schema}";
Alternatively, if you use an SQLAlchemy declarative approach, you can use a MetaData object like in this question/answer SQLAlchemy support of Postgres Schemas
You could create a collection of existing table and schema names in your database and check inputs against those values before creating your query:
-- assumes we are connected to the correct *database*
SELECT table_schema, table_name
FROM information_schema.tables;
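Putting the two together, a sketch of the hypothetical get() function that checks both names against information_schema before interpolating them (identifiers cannot be bound as query parameters, only values can):

from sqlalchemy import text

def get(engine, schema: str, table: str):
    with engine.connect() as conn:
        exists = conn.execute(
            text("SELECT 1 FROM information_schema.tables "
                 "WHERE table_schema = :schema AND table_name = :table"),
            {"schema": schema, "table": table},
        ).first()
        if exists is None:
            raise ValueError(f"unknown table {schema}.{table}")
        # Both names are known to exist, so interpolation is safe here
        # (quote the identifiers if they may contain special characters).
        return conn.execute(text(f"SELECT * FROM {schema}.{table}")).fetchall()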
Is there any way I can get the datatypes of fields returned in a query in MySQL? Say I have a query:
SELECT a.*,b.* FROM tbl_name a LEFT JOIN other_tbl b ON a.id=b.first_id
Is there a command I can use in MySQL that will return the names of the fields this query will return, along with their datatypes? I know I can potentially create a view using this query and then DESCRIBE that view, but is there any other way I can do it on the fly?
I'm using SQLAlchemy to perform this raw query, and my tables are dynamically generated. Is there a SQLAlchemy way, if not a MySQL way?
You can get the datatypes of a table's columns in MySQL with this (note the real table name, not the alias a):
SELECT COLUMN_TYPE
FROM information_schema.COLUMNS
WHERE TABLE_NAME = 'tbl_name'
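The same lookup can be done through SQLAlchemy; a sketch, assuming the engine is connected to the database that owns the table:

from sqlalchemy import text

def column_types(engine, table_name):
    with engine.connect() as conn:
        rows = conn.execute(
            text("SELECT COLUMN_NAME, COLUMN_TYPE "
                 "FROM information_schema.COLUMNS "
                 "WHERE TABLE_NAME = :t AND TABLE_SCHEMA = DATABASE()"),
            {"t": table_name},
        )
        return {name: col_type for name, col_type in rows}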
I'm using Alembic to manage my database migrations. In my current migration I also need to populate a column based on a SELECT statement (basically copying a column from a different table).
With plain SQL I can do:
UPDATE foo_table
SET bar_id =
    (SELECT bar_table.id FROM bar_table
     WHERE bar_table.foo_id = foo_table.id);
However, I can't figure out how to do that with Alembic:
execute(
    foo_table.update().\
    values({
        u'bar_id': ???
    })
)
I tried to use plain SQLAlchemy expressions for the '???':
select([bar_table.columns['id']],
       bar_table.columns[u'foo_id'] == foo_table.columns[u'id'])
But that only generates bad SQL and a ProgrammingError during execution:
'UPDATE foo_table SET ' {}
Actually, it works exactly as I described above.
My problem was that the table definition for 'foo_table' in my Alembic script did not include the 'bar_id' column, so SQLAlchemy did not use it to generate the SQL...
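For reference, a sketch of the working migration step, using the 1.3-era select() API from the question and lightweight table() constructs (the column lists here are illustrative):

import sqlalchemy as sa
from alembic import op

foo_table = sa.table('foo_table',
                     sa.column('id', sa.Integer),
                     sa.column('bar_id', sa.Integer))  # bar_id must be present
bar_table = sa.table('bar_table',
                     sa.column('id', sa.Integer),
                     sa.column('foo_id', sa.Integer))

op.execute(
    foo_table.update().values({
        # The SELECT is coerced to a correlated scalar subquery.
        u'bar_id': sa.select([bar_table.c.id],
                             bar_table.c.foo_id == foo_table.c.id)
    })
)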