I am using SQLAlchemy to pull data from my database, specifically via the db.select method. I can pull out either only the column values or only the column names, but I need the results in the format NAME: VALUE. How can I do this?
connection = engine.connect()
metadata = db.MetaData()
report = db.Table('report', metadata, autoload=True, autoload_with=engine)
query = db.select([report])
ResultProxy = connection.execute(query)
ResultSet = ResultProxy.fetchall()
With SQLAlchemy 1.4+ we can use .mappings() to return results in a dictionary-like format:
import sqlalchemy as sa
from pprint import pprint

# …

t = sa.Table(
    "t",
    sa.MetaData(),
    sa.Column("id", sa.Integer, primary_key=True, autoincrement=False),
    sa.Column("txt", sa.String),
)
t.create(engine)

# insert some sample data
with engine.begin() as conn:
    conn.exec_driver_sql(
        "INSERT INTO t (id, txt) VALUES (1, 'foo'), (2, 'bar')"
    )

# test code
with engine.begin() as conn:
    results = conn.execute(sa.select(t)).mappings().fetchall()
    pprint(results)
    # [{'id': 1, 'txt': 'foo'}, {'id': 2, 'txt': 'bar'}]
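If you need plain dicts (or the NAME: VALUE pairs themselves), the RowMapping objects returned above can be converted directly; a minimal sketch, continuing from the results variable:

rows = [dict(m) for m in results]  # plain dicts, one per row
for name, value in rows[0].items():
    print(f"{name}: {value}")
# id: 1
# txt: foo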
As the docs state, ResultProxy.fetchall() returns a list of RowProxy objects. These behave like namedtuples, but can also be used like dictionaries:
>>> ResultSet[0]['column_name']
column_value
For more info, see https://docs.sqlalchemy.org/en/13/core/tutorial.html#coretutorial-selecting
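With the 1.3-style code in the question, each RowProxy can likewise be turned into a full NAME: VALUE mapping; a small sketch based on the ResultSet variable above:

rows = [dict(row) for row in ResultSet]
# e.g. [{'column_name': column_value, ...}, one dict per row]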
Related
I need to combine the results of a SQLAlchemy query and a psycopg query.
Currently I use psycopg to do most of my SQL selects in my code. This is done using a cursor and fetchall().
However, I have a separate microservice that returns some extra WHERE clauses I need for my statement, based on some variables. This is returned as a SQLAlchemy SELECT object. This is out of my control.
Example return:
select * from users where name = 'bar';
My current solution for this is to hardcode the results of the microservice (just the WHERE clauses) into an enum and add them to the psycopg statement using an f-string. This is a temporary solution.
Simplified example:
user_name = "bar"

sql_enum = {
    "foo": "name = 'foo'",
    "bar": "name = 'bar'",
}

with conn.cursor() as cur:
    cur.execute(f"select * from users where location = 'FOOBAR' and {sql_enum[user_name]}")
I am looking for a way to better join these two statements. Any suggestions are greatly appreciated!
Rather than mess with dynamic SQL (f-strings, etc.), I would just start with a SQLAlchemy Core select() statement and then add the whereclause from the statement returned by the microservice:
import sqlalchemy as sa

engine = sa.create_engine("postgresql://scott:tiger@192.168.0.199/test")

users = sa.Table(
    "users",
    sa.MetaData(),
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("name", sa.String(50)),
    sa.Column("location", sa.String(50)),
)
users.drop(engine, checkfirst=True)
users.create(engine)

# mock return from microservice
from_ms = sa.select(sa.text("*")).select_from(users).where(users.c.name == "bar")

base_query = sa.select(users).where(users.c.location == "FOOBAR")
full_query = base_query.where(from_ms.whereclause)

engine.echo = True
with engine.begin() as conn:
    result = conn.execute(full_query)
    """SQL emitted:
    SELECT users.id, users.name, users.location
    FROM users
    WHERE users.location = %(location_1)s AND users.name = %(name_1)s
    [generated in 0.00077s] {'location_1': 'FOOBAR', 'name_1': 'bar'}
    """
I'm trying to understand what set_ means in SQLAlchemy's on_conflict_do_update method. I have the following Table:
Table(
    "test",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("firstname", String(100)),
    Column("lastname", String(100)),
)
and I want to insert something like this (if I wrote it in psql):
INSERT INTO test (id, firstname, lastname) VALUES (1, 'John', 'Doe')
ON CONFLICT (id) DO UPDATE SET firstname = EXCLUDED.firstname, lastname = EXCLUDED.lastname
I did some due diligence and saw people write in the set_ like this:
from sqlalchemy import inspect
from sqlalchemy.dialects import postgresql

insert_stmt = postgresql.insert(target).values([{'id': 1, 'firstname': 'John', 'lastname': 'Doe'}])
primary_keys = [key.name for key in inspect(target).primary_key]
update_dict = {c.name: c for c in insert_stmt.excluded if not c.primary_key}
stmt = insert_stmt.on_conflict_do_update(index_elements=primary_keys, set_=update_dict)
engine.execute(stmt)
Is update_dict just looking at the EXCLUDED values (the ones I want to update with) that I set in my insert_stmt? If I str(update_dict) I get a dictionary of column information: {'firstname': Column('firstname', VARCHAR(length=100), table=<excluded>), 'lastname': Column('lastname', VARCHAR(length=100), table=<excluded>)}. Is the method above the only way to retrieve the data, or can you write it out manually?
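In case it helps to see it spelled out: set_ is just a mapping of target column names to the values they should take on conflict, and insert_stmt.excluded refers to PostgreSQL's EXCLUDED pseudo-table (the row that was proposed for insertion). A hand-written equivalent of the comprehension above might look like this (a sketch based on your table):

stmt = insert_stmt.on_conflict_do_update(
    index_elements=["id"],
    set_={
        "firstname": insert_stmt.excluded.firstname,
        "lastname": insert_stmt.excluded.lastname,
    },
)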
Consider the following database table:
ID  ticker  description
1   GDBR30  30YR
2   GDBR10  10YR
3   GDBR5   5YR
4   GDBR2   2YR
It can be replicated with this piece of code:
from sqlalchemy import (
    Column,
    Integer,
    MetaData,
    String,
    Table,
    create_engine,
    insert,
    select,
)

engine = create_engine("sqlite+pysqlite:///:memory:", echo=True, future=True)
metadata = MetaData()

# Creating the table
tickers = Table(
    "tickers",
    metadata,
    Column("id", Integer, primary_key=True, autoincrement=True),
    Column("ticker", String, nullable=False),
    Column("description", String(), nullable=False),
)
metadata.create_all(engine)

# Populating the table
with engine.connect() as conn:
    result = conn.execute(
        insert(tickers),
        [
            {"ticker": "GDBR30", "description": "30YR"},
            {"ticker": "GDBR10", "description": "10YR"},
            {"ticker": "GDBR5", "description": "5YR"},
            {"ticker": "GDBR2", "description": "2YR"},
        ],
    )
    conn.commit()
I need to filter tickers for some values:
search_list = ["GDBR10", "GDBR5", "GDBR30"]
records = conn.execute(
    select(tickers.c.description).where(tickers.c.ticker.in_(search_list))
)
print(records.fetchall())
# Result
# [('30YR',), ('10YR',), ('5YR',)]
However, I need the resulting list of tuples ordered in the way search_list has been ordered. That is, I need the following result:
print(records.fetchall())
# Expected result
# [('10YR',), ('5YR',), ('30YR',)]
Using SQLite, you could create a CTE with two columns (id and ticker); applying the following query leads to the expected result (see Maintain order when using SQLite WHERE-clause and IN operator). Unfortunately, I am not able to transfer the SQLite solution to SQLAlchemy.
WITH cte(id, ticker) AS (VALUES (1, 'GDBR10'), (2, 'GDBR5'), (3, 'GDBR30'))
SELECT t.*
FROM tbl t INNER JOIN cte c
ON c.ticker = t.ticker
ORDER BY c.id
Suppose I have search_list_tuple as follows; how am I supposed to write the SQLAlchemy query?
search_list_tuple = [(1, 'GDBR10'), (2, 'GDBR5'), (3, 'GDBR30')]
The following works and is effectively equivalent to the VALUES (...) construct on SQLite, albeit somewhat more verbose:
from sqlalchemy import literal, union_all

# construct the CTE
sub_queries = [
    select(literal(i).label("id"), literal(v).label("ticker"))
    for i, v in enumerate(search_list)
]
cte = union_all(*sub_queries).cte("cte")

# desired query
records = conn.execute(
    select(tickers.c.description)
    .join(cte, cte.c.ticker == tickers.c.ticker)
    .order_by(cte.c.id)
)
print(records.fetchall())
# [('10YR',), ('5YR',), ('30YR',)]
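If you are handed search_list_tuple instead (as in the question), the same CTE can be built from it directly; a sketch assuming the same imports as above:

sub_queries = [
    select(literal(i).label("id"), literal(v).label("ticker"))
    for i, v in search_list_tuple
]
cte = union_all(*sub_queries).cte("cte")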
The following uses the values() construct; unfortunately the resulting query fails on SQLite, but it works perfectly on PostgreSQL:
from sqlalchemy import column, values

cte = select(
    values(
        column("id", Integer), column("ticker", String), name="subq"
    ).data(list(zip(range(len(search_list)), search_list)))
).cte("cte")

qq = (
    select(tickers.c.description)
    .join(cte, cte.c.ticker == tickers.c.ticker)
    .order_by(cte.c.id)
)
records = conn.execute(qq)
print(records.fetchall())
I want to run a parametrized MySQL UPDATE ... SET statement in flask_sqlalchemy. Since I do not know in advance which columns should be updated, I wrote a helper function to build the statement.
My model
class User(db.Model):
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    username = db.Column(db.String(80), nullable=False)
    email = db.Column(db.String(120), nullable=False)

    def __repr__(self):
        return '<User %r>' % self.username
Helper function
def run_sql(params):
    # e.g. params = {'username': "testing", "email": "testing@testing.ts", "id": 1}
    id = params.pop("id")
    # params = {'username': "testing", "email": "testing@testing.ts"}
    sets = list(map(lambda col: f"{col}=:{col}", params.keys()))
    # sets = ["username=:username", "email=:email"]
    sets = ", ".join(sets)
    # sets = "username=:username, email=:email"
    params["id"] = id
    # params = {'username': "testing", "email": "testing@testing.ts", "id": 1}
    sql_statement = f"""UPDATE User SET {sets} WHERE id=:id LIMIT 1"""
    # sql_statement = UPDATE User SET username=:username, email=:email WHERE id=:id LIMIT 1
    return sql_statement
Calling helper function
if __name__ == "__main__":
    conn = engine.connect()
    params = {'username': "testing", "email": "testing@testing.ts", "id": 1}
    sql_statement = run_sql(params)
    conn.execute(sql_statement, params)
Running the previous code generates the following exception
"sqlalchemy.exc.ProgrammingError: (pymysql.err.ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ':username, email=:email WHERE id=:id LIMIT 1' at line 1") [SQL: 'UPDATE User SET username=:username, email=:email WHERE id=:id LIMIT 1'] [parameters: {'username': 'testing', 'email': 'testing#testing.ts', 'id': 1}] (Background on this error at: http://sqlalche.me/e/f405)"
The SQL statement looks fine to me, and so do the parameters. Am I missing something?
The bind params aren't MySQL style, and as you're passing in plain text to engine.execute(), SQLAlchemy isn't applying a dialect to the query before executing it.
Try this:
engine.execute("SELECT :val", {"val": 1}) # will fail, same as your query
...and then this:
engine.execute("SELECT %(val)s", {"val": 1}) # will execute
Wrapping the query with text() will let SQLAlchemy handle the proper bind style:
from sqlalchemy import text # or you can use db.text w/ flask-sqlalchemy
engine.execute(text("SELECT :val"), {"val": 1})
One other thing to note is that SQLAlchemy will automatically handle construction of the UPDATE query for you, respecting the values in the parameter dict, e.g.:
id_ = 1

params = {'username': "testing", "email": "testing@testing.ts"}
User.__table__.update().values(params).where(User.id == id_)
# UPDATE user SET username=%(username)s, email=%(email)s WHERE user.id = %(id_1)s

params = {'username': "testing"}
User.__table__.update().values(params).where(User.id == id_)
# UPDATE user SET username=%(username)s WHERE user.id = %(id_1)s

params = {'username': "testing", "unexpected": "value"}
User.__table__.update().values(params).where(User.id == id_)
# sqlalchemy.exc.CompileError: Unconsumed column names: unexpected
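To actually run one of those constructs, execute it on a connection; a minimal sketch, assuming the same engine as in the question:

with engine.begin() as conn:
    conn.execute(
        User.__table__.update()
        .values({"username": "testing", "email": "testing@testing.ts"})
        .where(User.id == id_)
    )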
If you want to use the :named_parameter form for your SQL statement you'll need to use SQLAlchemy's text method and then call the execute method of a Connection object:
import sqlalchemy as sa

# ...

with engine.begin() as conn:
    conn.exec_driver_sql("CREATE TEMPORARY TABLE temp (id varchar(10) PRIMARY KEY)")
    sql = sa.text("INSERT INTO temp (id) VALUES (:id)")
    params = {'id': 'foo'}
    conn.execute(sql, params)
    print(conn.execute(sa.text("SELECT * FROM temp")).fetchall())  # [('foo',)]
Using SQLAlchemy Core (not ORM), I'm trying to INSERT multiple rows using subqueries in the values. For MySQL, the actual SQL would look something like this:
INSERT INTO widgets (name, type) VALUES
('Melon', (SELECT type FROM widgetTypes WHERE type='Squidgy')),
('Durian', (SELECT type FROM widgetTypes WHERE type='Spiky'))
But I only seem to be able to use subqueries when using the values() method on an insert() clause which only allows me to do one insert at a time. I'd like to insert multiple values at once by passing them all to the Connection's execute() method as a list of bind parameters, but this doesn't seem to be supported.
Is it possible to do what I want in a single call to execute()?
Here's a self-contained demonstration. Note that it uses the SQLite engine, which doesn't support multiple inserts in the same way as MySQL, but the SQLAlchemy code still fails in the same way as the real MySQL app.
from sqlalchemy import *

if __name__ == "__main__":
    # Construct database
    metadata = MetaData()
    widgetTypes = Table('widgetTypes', metadata,
        Column('id', INTEGER(), primary_key=True),
        Column('type', VARCHAR(), nullable=False),
    )
    widgets = Table('widgets', metadata,
        Column('id', INTEGER(), primary_key=True),
        Column('name', VARCHAR(), nullable=False),
        Column('type', INTEGER(), nullable=False),
        ForeignKeyConstraint(['type'], ['widgetTypes.id']),
    )
    engine = create_engine("sqlite://")
    metadata.create_all(engine)

    # Connect and populate db for testing
    conn = engine.connect()
    conn.execute(widgetTypes.insert(), [
        {'type': 'Spiky'},
        {'type': 'Squidgy'},
    ])

    # Some select queries for later use.
    select_squidgy_id = select([widgetTypes.c.id]).where(
        widgetTypes.c['type'] == 'Squidgy'
    ).limit(1)
    select_spiky_id = select([widgetTypes.c.id]).where(
        widgetTypes.c['type'] == 'Spiky'
    ).limit(1)

    # One at a time works via values()
    conn.execute(widgets.insert().values(
        {'name': 'Tomato', 'type': select_squidgy_id},
    ))

    # And multiple values work if we avoid subqueries
    conn.execute(
        widgets.insert(),
        {'name': 'Melon', 'type': 2},
        {'name': 'Durian', 'type': 1},
    )

    # Check above inserts did actually work
    print(conn.execute(widgets.select()).fetchall())

    # But attempting to insert many at once with subqueries does not work.
    conn.execute(
        widgets.insert(),
        {'name': 'Raspberry', 'type': select_squidgy_id},
        {'name': 'Lychee', 'type': select_spiky_id},
    )
Run it and it dies on the last execute() call with:
sqlalchemy.exc.InterfaceError: (InterfaceError) Error binding
parameter 1 - probably unsupported type. u'INSERT INTO widgets (name,
type) VALUES (?, ?)' (('Raspberry', <sqlalchemy.sql.expression.Select
at 0x19f14d0; Select object>), ('Lychee',
<sqlalchemy.sql.expression.Select at 0x19f1a50; Select object>))
Instead of providing the subselect statement as a parameter value, you have to embed it into the INSERT statement itself; the subquery is then compiled into the SQL once, and each row only binds scalar parameters (here, the type_name used inside the subquery):
type_select = select([widgetTypes.c.id]).where(
    widgetTypes.c.type == bindparam('type_name')
)
insert_stmt = widgets.insert().values(type=type_select)
conn.execute(insert_stmt, [
    {'name': 'Melon', 'type_name': 'Squidgy'},
    {'name': 'Lychee', 'type_name': 'Spiky'},
])