Syntax for row_to_json with sqlalchemy - python

I would like to figure out how to use Postgres' (9.2) row_to_json with SQLAlchemy, but I haven't been able to come up with any working syntax.
details_foo_row_q = select([Foo.*]
    ).where(Foo.bar_id == Bar.id
    ).alias('details_foo_row_q')

details_foo_q = select([
        func.row_to_json(details_foo_row_q).label('details')
    ]).where(details_foo_row_q.c.bar_id == Bar.id
    ).alias('details_foo_q')
I would ideally like to not to have to type out each and every field from the table model if possible.
Got the answer from 'mn':
It should be something more like this:
details_foo_row_q = select([Foo]).where(Foo.bar_id == Bar.id).alias('details_foo_row_q')
details_foo_q = select([
        func.row_to_json(literal_column(details_foo_row_q.name)).label('details')
    ]).select_from(details_foo_row_q).where(
        details_foo_row_q.c.bar_id == Bar.id
    ).alias('details_foo_q')
Thank you mn, works great!

Your query generates incorrect SQL:
SELECT row_to_json(SELECT ... FROM foo) AS details
FROM (SELECT ... FROM foo) AS details_foo_row_q
It should be
SELECT row_to_json(details_foo_row_q) AS details
FROM (SELECT ... FROM foo) AS details_foo_row_q
You need to refer to the aliased select by its name, wrapped in literal_column:
from sqlalchemy.sql.expression import literal_column
details_foo_q = select([
        func.row_to_json(literal_column(details_foo_row_q.name)).label('details')
    ]).select_from(details_foo_row_q).where(
        details_foo_row_q.c.bar_id == Bar.id
    ).alias('details_foo_q')

Sounds like there is a solution if you are using SQLAlchemy 1.4 or newer:
# New in version 1.4.0b2.
>>> from sqlalchemy import table, column, func, select
>>> a = table( "a", column("id"), column("x"), column("y"))
>>> stmt = select(func.row_to_json(a.table_valued()))
>>> print(stmt)
SELECT row_to_json(a) AS row_to_json_1
FROM a
https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#table-types-passed-to-functions
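Applied to the original question, the 1.4 approach might look roughly like this (a sketch only, assuming SQLAlchemy 1.4+ and the Foo/Bar models from the question; not tested against the original schema):
from sqlalchemy import func, select

details_foo_row_q = (
    select(Foo)
    .where(Foo.bar_id == Bar.id)
    .subquery('details_foo_row_q')
)
details_foo_q = (
    # the whole subquery row is passed to row_to_json via .table_valued()
    select(func.row_to_json(details_foo_row_q.table_valued()).label('details'))
    .where(details_foo_row_q.c.bar_id == Bar.id)
)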

If other people are still struggling with the row_to_json function, I have good news for you.
Let's imagine we have a User class with fields email and id, and we want to receive the email and id fields as JSON.
This can be done using json_build_object function:
from sqlalchemy import func
session.query(func.json_build_object("email", User.email, "id", User.id))
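For reference, executing that query gives one JSON object per row; with psycopg2 the value usually comes back already deserialized as a Python dict (a small usage sketch, assuming the same User model):
from sqlalchemy import func

rows = session.query(
    func.json_build_object("email", User.email, "id", User.id)
).all()
for (payload,) in rows:
    # psycopg2's default json typecaster returns e.g. {'email': ..., 'id': ...}
    print(payload)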

Related

Using sqlalchemy with psycopg

I am in need of combining the results of a SQLAlchemy query and a psycopg query.
Currently I use psycopg to do most of my SQL selects in my code. This is done using a cursor and fetchall().
However, I have a separate microservice that returns some extra WHERE clauses I need for my statement, based on some variables. This is returned as a SQLAlchemy SELECT object. This is out of my control.
Example return:
select * from users where name = 'bar';
My current solution for this is to hardcode the results of the microservice (just the WHERE clauses) into an enum and add them into the psycopg statement using an f-string. This is a temporary solution.
Simplified example:
user_name = "bar"
sql_enum = {
"foo": "name = 'foo'"
"bar": "name = 'bar'"
}
with conn.cursor() as cur:
cur.execute(f"select * from users where location = 'FOOBAR' and {sql_enum[user_name]}")
I am looking for a way to better join these two statements. Any suggestions are greatly appreciated!
Rather than mess with dynamic SQL (f-strings, etc.), I would just start with a SQLAlchemy Core select() statement and then add the whereclause from the statement returned by the microservice:
import sqlalchemy as sa

engine = sa.create_engine("postgresql://scott:tiger@192.168.0.199/test")

users = sa.Table(
    "users", sa.MetaData(),
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("name", sa.String(50)),
    sa.Column("location", sa.String(50))
)
users.drop(engine, checkfirst=True)
users.create(engine)

# mock return from microservice
from_ms = sa.select(sa.text("*")).select_from(users).where(users.c.name == "bar")

base_query = sa.select(users).where(users.c.location == "FOOBAR")

full_query = base_query.where(from_ms.whereclause)

engine.echo = True
with engine.begin() as conn:
    result = conn.execute(full_query)

"""SQL emitted:
SELECT users.id, users.name, users.location
FROM users
WHERE users.location = %(location_1)s AND users.name = %(name_1)s
[generated in 0.00077s] {'location_1': 'FOOBAR', 'name_1': 'bar'}
"""

Flask-SQLAlchemy check if table exists in database

How do I check with Flask-SQLAlchemy whether a table exists in the database?
I have seen similar questions (e.g. "Flask-SQLAlchemy check if row exists in table"), but I haven't managed to solve my problem with them.
I have created a table object, like this:
<class 'flask_sqlalchemy.XXX'>
Now, how do I check whether that object exists in the database?
I have tried many things, e.g.:
for t in db.metadata.sorted_tables:
    print("tablename", t.name)
Some of the table objects were created earlier but don't exist in the database, yet they all get printed. For example, the output is:
tablename: table_1
tablename: table_2
tablename: table_3
but only table_1 exists in the database; table_2 and table_3 are created dynamically, and I only want to use table_1.
Many thanks.
I used these methods. Looking at the model like you did only tells you what SHOULD be in the database.
import sqlalchemy as sa
def database_is_empty():
    table_names = sa.inspect(engine).get_table_names()
    is_empty = table_names == []
    print('Db is empty: {}'.format(is_empty))
    return is_empty

def table_exists(name):
    ret = engine.dialect.has_table(engine, name)
    print('Table "{}" exists: {}'.format(name, ret))
    return ret
There may be a simpler method than this:
def model_exists(model_class):
    engine = db.get_engine(bind=model_class.__bind_key__)
    return model_class.metadata.tables[model_class.__tablename__].exists(engine)
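Note that Table.exists() is deprecated as of SQLAlchemy 1.4, so on newer versions a rough equivalent of model_exists using the inspector might be (a sketch, assuming the model lives on the default bind):
from sqlalchemy import inspect

def model_exists(model_class):
    # uses the default engine; adapt if the model has its own __bind_key__
    return inspect(db.engine).has_table(model_class.__tablename__)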
The solution is quite simple: just add these two lines to your code and it should work for you.
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import inspect
...
inspector = inspect(db.engine)
print(inspector.has_table("user")) # output: Boolean
have a nice day
SQLAlchemy's recommended way to check for the presence of a table is to create an inspector object and use its has_table() method.
The following example was copied from sqlalchemy.engine.reflection.Inspector.has_table, with the addition of an SQLite engine (in memory) to make it reproducible:
In [17]: from sqlalchemy import create_engine, inspect
...: from sqlalchemy import MetaData, Table, Column, Text
...: engine = create_engine('sqlite://')
...: meta = MetaData()
...: meta.bind = engine
...: user_table = Table('user', meta, Column("first_name", Text))
...: user_table.create()
...: inspector = inspect(engine)
...: inspector.has_table('user')
Out[17]: True
You can also check for the table using the name of the user_table metadata element:
inspector.has_table(user_table.name)

Insert and update with core SQLAlchemy

I have a database that I don't have metadata or orm classes for (the database already exists).
I managed to get the select stuff working by:
from sqlalchemy.sql.expression import ColumnClause
from sqlalchemy.sql import table, column, select, update, insert
from sqlalchemy.ext.declarative import *
from sqlalchemy.orm import sessionmaker
from sqlalchemy import create_engine
import pyodbc
db = create_engine('mssql+pyodbc://pytest')
Session = sessionmaker(bind=db)
session = Session()
list = []
list.append (column("field1"))
list.append (column("field2"))
list.append (column("field3"))
s = select(list)
s.append_from('table')
s.append_whereclause("field1 = 'abc'")
s = s.limit(10)
result = session.execute(s)
out = result.fetchall()
print(out)
So far so good.
The only way I can get an update/insert working is by executing a raw query like:
session.execute(<Some sql>)
I would like to make it so I can make a class out of that like:
u = Update("table")
u.Set("file1","some value")
u.Where(<some conditon>)
session.execute(u)
Tried (this is just one of the approaches I tried):
i = insert("table")
v = i.values([{"name":"name1"}, {"name":"name2"}])
u = update("table")
u = u.values({"name": "test1"})
I can't get that to execute on:
session.execute(i)
or
session.execute(u)
Any suggestion how to construct an insert or update without writing ORM models?
As you can see from the SQLAlchemy Overview documentation, SQLAlchemy is built with two layers: ORM and Core. Currently you are using only some constructs of the Core and building everything manually.
In order to use Core you should let SQLAlchemy know some meta information about your database in order for it to operate on it. Assuming you have a table mytable with columns field1, field2, field3 and a defined primary key, the code below should perform all the tasks you need:
from sqlalchemy import MetaData, Table
from sqlalchemy.sql import select, update, insert
# define meta information
metadata = MetaData(bind=engine)
mytable = Table('mytable', metadata, autoload=True)
# select
s = mytable.select() # or:
#s = select([mytable]) # or (if only certain columns):
#s = select([mytable.c.field1, mytable.c.field2, mytable.c.field3])
s = s.where(mytable.c.field1 == 'abc')
result = session.execute(s)
out = result.fetchall()
print(out)
# insert
i = insert(mytable)
i = i.values({"field1": "value1", "field2": "value2"})
session.execute(i)
# update
u = update(mytable)
u = u.values({"field3": "new_value"})
u = u.where(mytable.c.id == 33)
session.execute(u)
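As a side note, on SQLAlchemy 1.4+ bound metadata (MetaData(bind=engine)) and autoload=True are deprecated; the reflection step would look roughly like this instead (a sketch, same mytable assumed):
from sqlalchemy import MetaData, Table

metadata = MetaData()
mytable = Table('mytable', metadata, autoload_with=engine)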

sqlalchemy: union query few columns from multiple tables with condition

I'm trying to adapt part of a MySQLdb application to SQLAlchemy with the declarative base. I'm only beginning with SQLAlchemy.
The legacy tables are defined something like:
student: id_number*, semester*, stateid, condition, ...
choice: id_number*, semester*, choice_id, school, program, ...
We have 3 versions of each of them (student_tmp, student_year, student_summer, choice_tmp, choice_year, choice_summer), so each suffix (_tmp, _year, _summer) contains information for a specific moment. Each student table is joined to its choice table like this:
select *
from `student_tmp`
inner join `choice_tmp` using (`id_number`, `semester`)
The information that is important to me is the equivalent of the following select:
SELECT t.*
FROM (
(
SELECT st.*, ct.*
FROM `student_tmp` AS st
INNER JOIN `choice_tmp` as ct USING (`id_number`, `semester`)
WHERE (ct.`choice_id` = IF(right(ct.`semester`, 1)='1', '3', '4'))
AND (st.`condition` = 'A')
) UNION (
SELECT sy.*, cy.*
FROM `student_year` AS sy
INNER JOIN `choice_year` as cy USING (`id_number`, `semester`)
WHERE (cy.`choice_id` = 4)
AND (sy.`condition` = 'A')
) UNION (
SELECT ss.*, cs.*
FROM `student_summer` AS ss
INNER JOIN `choice_summer` as cs USING (`id_number`, `semester`)
WHERE (cs.`choice_id` = 3)
AND (ss.`condition` = 'A')
)
) as t
* is used to shorten the select; I'm actually only querying about 7 of the 50 available columns.
This information is used in many flavors... "Do I have new students? Do I still have all students from a given date? Which students are subscribed after the given date? etc..." The result of this select statement is to be saved in another database.
Would it be possible for me to achieve this with a single view-like class? The information is read-only, so I don't need to be able to modify/create/delete. Or do I have to declare a class for each table (ending up with 6 classes) and, every time I need to query, remember to filter?
Thanks for pointers.
EDIT: I don't have modification access to the database (I cannot create a view). Both databases may not be on the same server (so I cannot create a view on my second DB).
My concern is to avoid the full table scan before filtering on condition and choice_id.
EDIT 2: I've set up declarative classes like this:
class BaseStudent(object):
    id_number = sqlalchemy.Column(sqlalchemy.String(7), primary_key=True)
    semester = sqlalchemy.Column(sqlalchemy.String(5), primary_key=True)
    unique_id_number = sqlalchemy.Column(sqlalchemy.String(7))
    stateid = sqlalchemy.Column(sqlalchemy.String(12))
    condition = sqlalchemy.Column(sqlalchemy.String(3))

class Student(BaseStudent, Base):
    __tablename__ = 'student'

    choices = orm.relationship('Choice', backref='student')

#class StudentYear(BaseStudent, Base):...
#class StudentSummer(BaseStudent, Base):...

class BaseChoice(object):
    id_number = sqlalchemy.Column(sqlalchemy.String(7), primary_key=True)
    semester = sqlalchemy.Column(sqlalchemy.String(5), primary_key=True)
    choice_id = sqlalchemy.Column(sqlalchemy.String(1))
    school = sqlalchemy.Column(sqlalchemy.String(2))
    program = sqlalchemy.Column(sqlalchemy.String(5))

class Choice(BaseChoice, Base):
    __tablename__ = 'choice'
    __table_args__ = (
        sqlalchemy.ForeignKeyConstraint(['id_number', 'semester',],
                                        [Student.id_number, Student.semester,]),
    )

#class ChoiceYear(BaseChoice, Base): ...
#class ChoiceSummer(BaseChoice, Base): ...
Now, the query that gives me correct SQL for one set of tables is:
q = session.query(StudentYear, ChoiceYear) \
    .select_from(StudentYear) \
    .join(ChoiceYear) \
    .filter(StudentYear.condition == 'A') \
    .filter(ChoiceYear.choice_id == '4')
but it throws an exception...
"Could not locate column in row for column '%s'" % key)
sqlalchemy.exc.NoSuchColumnError: "Could not locate column in row for column '*'"
How do I use that query to create myself a class I can use?
If you can create this view on the database, then you simply map the view as if it was a table. See Reflecting Views.
# DB VIEW
CREATE VIEW my_view AS -- @todo: your select statements here

# SA
my_view = Table('my_view', metadata, autoload=True)

# define view object
class ViewObject(object):
    def __repr__(self):
        return "ViewObject %s" % str((self.id_number, self.semester,))

# map the view to the object
view_mapper = mapper(ViewObject, my_view)

# query the view
q = session.query(ViewObject)
for _ in q:
    print _
If you cannot create a VIEW on the database level, you could create a selectable and map the ViewObject to it. The code below should give you the idea:
student_tmp = Table('student_tmp', metadata, autoload=True)
choice_tmp = Table('choice_tmp', metadata, autoload=True)

# your SELECT part with the columns you need
qry = select([student_tmp.c.id_number, student_tmp.c.semester,
              student_tmp.c.stateid, choice_tmp.c.school])

# your INNER JOIN condition
qry = qry.where(student_tmp.c.id_number == choice_tmp.c.id_number).where(student_tmp.c.semester == choice_tmp.c.semester)

# other WHERE clauses
qry = qry.where(student_tmp.c.condition == 'A')
You can create 3 queries like this, then combine them with union_all and use the resulting query in the mapper:
view_mapper = mapper(ViewObject, my_combined_qry)
In both cases you have to ensure, though, that a primary key is properly defined on the view, and you might need to override the autoloaded view and specify the primary key explicitly (see the link above). Otherwise you will either receive an error or might not get proper results from the query.
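For completeness, the combining step might look roughly like this (a sketch, assuming the three selects are built as qry_tmp, qry_year and qry_summer, and that (id_number, semester) identifies a row):
from sqlalchemy import union_all
from sqlalchemy.orm import mapper

# combine the three selects into one selectable and map it
my_combined_qry = union_all(qry_tmp, qry_year, qry_summer).alias('my_view')

view_mapper = mapper(
    ViewObject, my_combined_qry,
    primary_key=[my_combined_qry.c.id_number, my_combined_qry.c.semester],
)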
Answer to EDIT-2:
qry = (session.query(StudentYear, ChoiceYear).
       select_from(StudentYear).
       join(ChoiceYear).
       filter(StudentYear.condition == 'A').
       filter(ChoiceYear.choice_id == '4')
       )
The result will be tuple pairs: (Student, Choice).
But if you want to create a new mapped class for the query, then you can create a selectable as the sample above:
student_tmp = StudentTmp.__table__
choice_tmp = ChoiceTmp.__table__
.... (see sample code above)
This is what I ended up doing; any comments are welcome.
class JoinedYear(Base):
    __table__ = sqlalchemy.select(
        [
            StudentYear.id_number,
            StudentYear.semester,
            StudentYear.stateid,
            ChoiceYear.school,
            ChoiceYear.program,
        ],
        from_obj=StudentYear.__table__.join(ChoiceYear.__table__),
    ) \
        .where(StudentYear.condition == 'A') \
        .where(ChoiceYear.choice_id == '4') \
        .alias('YearView')
and I will elaborate from there...
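Once mapped this way, JoinedYear can be queried like any other mapped class (a small usage sketch):
for row in session.query(JoinedYear).limit(10):
    # attributes follow the selected column names
    print(row.id_number, row.semester, row.school)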
Thanks @van

sqlalchemy exists for query

How do I check whether data in a query exists?
For example:
users_query = User.query.filter_by(email='x@x.com')
How can I check whether users with that email exist?
I can check this with
users_query.count()
but how to check it with exists?
The following solution is a bit simpler:
from sqlalchemy.sql import exists
print session.query(exists().where(User.email == '...')).scalar()
The most acceptable and readable option for me is
session.query(<Exists instance>).scalar()
like
session.query(User.query.filter(User.id == 1).exists()).scalar()
which returns True or False.
There is no way that I know of to do this using the orm query api. But you can drop to a level lower and use exists from sqlalchemy.sql.expression:
from sqlalchemy.sql.expression import select, exists
users_exists_select = select((exists(users_query.statement),))
print engine.execute(users_exists_select).scalar()
2021 Answer for SqlAlchemy 1.4
Refrain from calling .query and instead use select directly chained with .exists as follows:
from sqlalchemy import select
stmt = select(User).where(User.email == "x@x.com").exists()
Source: https://docs.sqlalchemy.org/en/14/core/selectable.html?highlight=exists#sqlalchemy.sql.expression.exists
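As written, stmt is only the EXISTS expression; to actually get a boolean back you still have to select it, roughly like this (a sketch, assuming a 1.4-style Session):
from sqlalchemy import select

stmt = select(User).where(User.email == "x@x.com").exists()
# wrap the EXISTS in a SELECT and fetch the single boolean value
user_exists = session.execute(select(stmt)).scalar()
print(user_exists)  # True or False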
For SQL Server, I had to do this:
from sqlalchemy.sql.expression import literal
result = session.query(literal(True)).filter(
    session.query(User)
    .filter_by(email='...')
    .exists()
).scalar()
print(result is not None)
# Resulting query:
# SELECT 1
# WHERE EXISTS (SELECT 1
# FROM User
# WHERE User.email = '...')
But it's much simpler without EXISTS:
result = (
    session.query(literal(True))
    .filter(User.email == '...')
    .first()
)
print(result is not None)
# Resulting query:
# SELECT TOP 1 1
# FROM User
# WHERE User.email = '...'
It can also be done like this:
from sqlalchemy import select

user = session.scalars(
    select(User).where(User.email == "x@x.com")
).first()

if user:
    pass
else:
    pass
