SQLAlchemy exists() for query - Python

How do I check whether data in a query exists?
For example:
users_query = User.query.filter_by(email='x@x.com')
How can I check whether users with that email exist?
I can check this with
users_query.count()
but how to check it with exists?

The following solution is a bit simpler:
from sqlalchemy.sql import exists
print(session.query(exists().where(User.email == '...')).scalar())

The most acceptable and readable option for me is
session.query(<Exists instance>).scalar()
like
session.query(User.query.filter(User.id == 1).exists()).scalar()
which returns True or False.

There is no way that I know of to do this using the ORM query API, but you can drop down a level and use exists from sqlalchemy.sql.expression:
from sqlalchemy.sql.expression import select, exists
users_exists_select = select([exists(users_query.statement)])
print(engine.execute(users_exists_select).scalar())

2021 Answer for SQLAlchemy 1.4
Refrain from calling .query and instead use select directly, chained with .exists, as follows:
from sqlalchemy import select
stmt = select(User).where(User.email == "x@x.com").exists()
Source: https://docs.sqlalchemy.org/en/14/core/selectable.html?highlight=exists#sqlalchemy.sql.expression.exists
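To actually get a True/False value back, the Exists construct can be wrapped in another select and executed. A minimal sketch, assuming an existing Session named session:
from sqlalchemy import select
exists_stmt = select(User).where(User.email == "x@x.com").exists()
user_exists = session.execute(select(exists_stmt)).scalar()
print(user_exists)  # True or False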

For SQL Server, I had to do this:
from sqlalchemy.sql.expression import literal
result = session.query(literal(True)).filter(
    session.query(User)
    .filter_by(email='...')
    .exists()
).scalar()
print(result is not None)
# Resulting query:
# SELECT 1
# WHERE EXISTS (SELECT 1
#               FROM User
#               WHERE User.email = '...')
But it's much simpler without EXISTS:
result = (
    session.query(literal(True))
    .filter(User.email == '...')
    .first()
)
print(result is not None)
# Resulting query:
# SELECT TOP 1 1
# FROM User
# WHERE User.email = '...'

It can also be done by simply selecting the row and checking the result:
from sqlalchemy import select
user = session.scalars(
    select(User).where(User.email == "x@x.com")
).first()
if user:
    pass  # a user with that email exists
else:
    pass  # no such user


Using SQLAlchemy with psycopg

I need to combine the results of a SQLAlchemy query and a psycopg query.
Currently I use psycopg to do most of my SQL selects. This is done using a cursor and fetchall().
However, I have a separate microservice that returns some extra WHERE clauses I need for my statement, based on some variables. This is returned as a SQLAlchemy SELECT object. This is out of my control.
Example return:
select * from users where name = 'bar';
My current solution is to hardcode the results of the microservice (just the WHERE clauses) into an enum and add them into the psycopg statement using an f-string. This is a temporary solution.
Simplified example:
user_name = "bar"
sql_enum = {
    "foo": "name = 'foo'",
    "bar": "name = 'bar'",
}
with conn.cursor() as cur:
    cur.execute(f"select * from users where location = 'FOOBAR' and {sql_enum[user_name]}")
I am looking for a way to better join these two statements. Any suggestions are greatly appreciated!
Rather than mess with dynamic SQL (f-strings, etc.), I would just start with a SQLAlchemy Core select() statement and then add the whereclause from the statement returned by the microservice:
import sqlalchemy as sa
engine = sa.create_engine("postgresql://scott:tiger@192.168.0.199/test")
users = sa.Table(
    "users",
    sa.MetaData(),
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("name", sa.String(50)),
    sa.Column("location", sa.String(50)),
)
users.drop(engine, checkfirst=True)
users.create(engine)

# mock return from microservice
from_ms = sa.select(sa.text("*")).select_from(users).where(users.c.name == "bar")

base_query = sa.select(users).where(users.c.location == "FOOBAR")
full_query = base_query.where(from_ms.whereclause)

engine.echo = True
with engine.begin() as conn:
    result = conn.execute(full_query)
"""SQL emitted:
SELECT users.id, users.name, users.location
FROM users
WHERE users.location = %(location_1)s AND users.name = %(name_1)s
[generated in 0.00077s] {'location_1': 'FOOBAR', 'name_1': 'bar'}
"""

Flask-SQLAlchemy check if table exists in database

Flask-SQLAlchemy: check if a table exists in the database.
I have seen similar questions, but my attempts have not succeeded, e.g.
Flask-SQLAlchemy check if row exists in table
I have created a table object, like this:
<class 'flask_sqlalchemy.XXX'>
Now, how do I check whether that object exists in the database?
I have tried many things, e.g.:
for t in db.metadata.sorted_tables:
    print("tablename", t.name)
Some of these table objects were created earlier but do not exist in the database, yet they all print. For example, the printed content is:
tablename: table_1
tablename: table_2
tablename: table_3
But only table_1 exists in the database; table_2 and table_3 are created dynamically, and for now I only want to use table_1.
Many thanks.
I used these methods. Looking at the model like you did only tells you what SHOULD be in the database.
import sqlalchemy as sa

def database_is_empty():
    table_names = sa.inspect(engine).get_table_names()
    is_empty = table_names == []
    print('Db is empty: {}'.format(is_empty))
    return is_empty

def table_exists(name):
    ret = engine.dialect.has_table(engine, name)
    print('Table "{}" exists: {}'.format(name, ret))
    return ret
There may be a simpler method than this:
def model_exists(model_class):
    engine = db.get_engine(bind=model_class.__bind_key__)
    # note: Table.exists() is a legacy API; newer SQLAlchemy releases use inspect() instead
    return model_class.metadata.tables[model_class.__tablename__].exists(engine)
The solution is simple: just add these two lines to your code and it should work (note that inspect is imported from sqlalchemy itself, not flask_sqlalchemy):
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import inspect
...
inspector = inspect(db.engine)
print(inspector.has_table("user"))  # output: Boolean
have a nice day
SQLAlchemy's recommended way to check for the presence of a table is to create an inspector object and use its has_table() method.
The following example was copied from sqlalchemy.engine.reflection.Inspector.has_table, with the addition of an SQLite engine (in memory) to make it reproducible:
In [17]: from sqlalchemy import create_engine, inspect
...: from sqlalchemy import MetaData, Table, Column, Text
...: engine = create_engine('sqlite://')
...: meta = MetaData()
...: meta.bind = engine
...: user_table = Table('user', meta, Column("first_name", Text))
...: user_table.create()
...: inspector = inspect(engine)
...: inspector.has_table('user')
Out[17]: True
You can also use the user_table metadata element's name to check whether it exists:
inspector.has_table(user_table.name)

Syntax for row_to_json with SQLAlchemy

I would like to figure out how to use Postgres' (9.2) row_to_json with SQLAlchemy, but I haven't been able to come up with any working syntax.
# (pseudocode - the syntax I have been trying)
details_foo_row_q = (
    select([Foo.*])
    .where(Foo.bar_id == Bar.id)
    .alias('details_foo_row_q')
)
details_foo_q = (
    select([func.row_to_json(details_foo_row_q).label('details')])
    .where(details_foo_row_q.c.bar_id == Bar.id)
    .alias('details_foo_q')
)
I would ideally like to not to have to type out each and every field from the table model if possible.
Got the answer from 'mn'. It should be something more like this:
details_foo_row_q = select([Foo]).where(Foo.bar_id == Bar.id).alias('details_foo_row_q')
details_foo_q = (
    select([func.row_to_json(literal_column(details_foo_row_q.name)).label('details')])
    .select_from(details_foo_row_q)
    .where(details_foo_row_q.c.bar_id == Bar.id)
    .alias('details_foo_q')
)
Thank you mn, works great!
Your query generates incorrect SQL:
SELECT row_to_json(SELECT ... FROM foo) AS details
FROM (SELECT ... FROM foo) AS details_foo_row_q
It should be
SELECT row_to_json(details_foo_row_q) AS details
FROM (SELECT ... FROM foo) AS details_foo_row_q
You need to use the subquery's name as a literal_column:
from sqlalchemy.sql.expression import literal_column
details_foo_q = (
    select([func.row_to_json(literal_column(details_foo_row_q.name)).label('details')])
    .select_from(details_foo_row_q)
    .where(details_foo_row_q.c.bar_id == Bar.id)
    .alias('details_foo_q')
)
Sounds like there may be a solution if you are using a recent SQLAlchemy:
# New in version 1.4.0b2.
>>> from sqlalchemy import table, column, func, select
>>> a = table( "a", column("id"), column("x"), column("y"))
>>> stmt = select(func.row_to_json(a.table_valued()))
>>> print(stmt)
SELECT row_to_json(a) AS row_to_json_1
FROM a
https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#table-types-passed-to-functions
If other people still struggle with the row_to_json function, I have good news for you.
Let's imagine we have a User class with fields email and id, and we want to receive the email and id fields as JSON.
This can be done using json_build_object function:
from sqlalchemy import func
session.query(func.json_build_object("email", User.email, "id", User.id))
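A hypothetical usage sketch (the values shown are placeholders): executing that query returns one row per User, each holding a dict, since psycopg deserializes the json value for you:
from sqlalchemy import func
row = session.query(func.json_build_object("email", User.email, "id", User.id)).first()
print(row[0])  # e.g. {'email': 'x@x.com', 'id': 1}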

Peewee ORM query result fn.COUNT is type unicode, not integer

Please help me understand the behavior of peewee 2.4.5 when talking to MySQL 5.5. I'm running a simple query to count children associated with a parent; in this case, documents at a path. As plain SQL it boils down to this:
select p.name, count(d.file) as child_count
from path as p, doc as d
where p.id = d.path_id
group by p.name
The Peewee code uses the fn.COUNT feature; see below for a self-contained example. The result comes back just fine and with the results I expect, with one exception: the query result object's "child_count" attribute is of type unicode instead of integer. In this little example there's one row, and I get back (essentially) the string '1' instead of the number 1.
I'm confused because in other queries I have done with fn.COUNT the result is of type integer. Is this a feature? Am I making a silly Python mistake here? Thanks in advance.
'''
Example of accessing MySQL from Python using Peewee.
Developed with peewee 2.4.5, pymysql 0.6.3, MySQL 5.5
'''
from __future__ import print_function

from peewee import MySQLDatabase, Model, CharField, ForeignKeyField, fn

db = MySQLDatabase(database="test", host="localhost", user="mumble", password="foo")

class MySQLModel(Model):
    '''
    Base class to associate the database object
    '''
    class Meta:
        database = db

class Path(MySQLModel):
    # peewee adds primary key field 'id'
    name = CharField()

class Doc(MySQLModel):
    # peewee adds primary key field 'id'
    path = ForeignKeyField(Path)
    file = CharField()

def main():
    db.connect()
    db.create_tables([Path, Doc], True)

    newpath = Path(name='ab/23')
    newpath.save()
    newdoc1 = Doc(path=newpath.id, file='file1.txt')
    newdoc1.save()
    newdoc2 = Doc(path=newpath.id, file='file2.txt')
    newdoc2.save()

    for row in Path.select():
        print("Path: id=%d, name=%s" % (row.id, row.name))
    for row in Doc.select():
        print("Doc: id=%d, file=%s" % (row.id, row.file))

    # query in plain old SQL:
    # select p.name, count(d.file) from path as p, doc as d where p.id = d.path_id group by p.name
    path_doc_result = (Path
                       .select(Path.name, fn.COUNT(Doc.file).alias('child_count'))
                       .join(Doc, on=(Path.id == Doc.path))
                       .group_by(Path.name))

    path_doc_count = len(list(path_doc_result))
    print("Path-doc parent-child result count is %d" % path_doc_count)
    if path_doc_count == 0:
        print("Programmer error, no results!")
    else:
        # get the first one
        d_row = path_doc_result[0]
        #### Why is the child_count attribute not integer? ###
        print("Type of child_count attribute is %s" % type(d_row.child_count))
        print("Path-Doc result: name=%s child_count=%d" % (d_row.name, int(d_row.child_count)))

    newdoc1.delete_instance()
    newdoc2.delete_instance()
    newpath.delete_instance()
    # order matters for foreign keys!
    db.drop_table(Doc)
    db.drop_table(Path)
    db.close()

if __name__ == "__main__":
    main()
Peewee functions look at the type of the first argument and attempt to coerce the return value to that type. This makes sense in most cases but I can see why it's causing an issue here.
To work around, just call fn.COUNT(Doc.file).coerce(False).alias('child_count')
path_doc_result = (Path
                   .select(Path.name, fn.COUNT(Doc.file).coerce(False).alias('child_count'))
                   .join(Doc, on=(Path.id == Doc.path))
                   .group_by(Path.name))

SQLAlchemy add new field to class and create corresponding column in table

I want to add a field to an existing mapped class; how would I update the SQL table automatically? Does SQLAlchemy provide a method to update the database with a new column when a field is added to the class?
Sometimes Migrate is too much work - you just want the column automatically added when you run your changed code. So here is a function that does that.
Caveats: it pokes around in the SQLAlchemy internals and tends to require small changes every time SQLAlchemy undergoes a major revision. (There's probably a much better way of doing this - I am not a SQLAlchemy expert.) It also doesn't handle constraints.
import logging
import re

import sqlalchemy
from sqlalchemy import MetaData, Table, exceptions
import sqlalchemy.engine.ddl

_new_sa_ddl = sqlalchemy.__version__.startswith('0.7')

def create_and_upgrade(engine, metadata):
    """For each table in metadata, if it is not in the database then create it.
    If it is in the database then add any missing columns and warn about any columns
    whose spec has changed"""
    db_metadata = MetaData()
    db_metadata.bind = engine

    for model_table in metadata.sorted_tables:
        try:
            db_table = Table(model_table.name, db_metadata, autoload=True)
        except exceptions.NoSuchTableError:
            logging.info('Creating table %s' % model_table.name)
            model_table.create(bind=engine)
        else:
            if _new_sa_ddl:
                ddl_c = engine.dialect.ddl_compiler(engine.dialect, None)
            else:
                # 0.6
                ddl_c = engine.dialect.ddl_compiler(engine.dialect, db_table)
            # else:
            #     # 0.5
            #     ddl_c = engine.dialect.schemagenerator(engine.dialect, engine.contextual_connect())

            logging.debug('Table %s already exists. Checking for missing columns' % model_table.name)

            model_columns = _column_names(model_table)
            db_columns = _column_names(db_table)

            to_create = model_columns - db_columns
            to_remove = db_columns - model_columns
            to_check = db_columns.intersection(model_columns)

            for c in to_create:
                model_column = getattr(model_table.c, c)
                logging.info('Adding column %s.%s' % (model_table.name, model_column.name))
                assert not model_column.constraints, \
                    'Arrrgh! I cannot automatically add columns with constraints to the database. ' \
                    'Please consider fixing me if you care!'
                model_col_spec = ddl_c.get_column_specification(model_column)
                sql = 'ALTER TABLE %s ADD %s' % (model_table.name, model_col_spec)
                engine.execute(sql)

            # It's difficult to reliably determine if the model has changed
            # a column definition. E.g. the default precision of columns
            # is None, which means the database decides. Therefore when I look at the model
            # it may give the SQL for the column as INTEGER but when I look at the database
            # I have a definite precision, therefore the returned type is INTEGER(11)
            for c in to_check:
                model_column = model_table.c[c]
                db_column = db_table.c[c]
                x = model_column == db_column

                logging.debug('Checking column %s.%s' % (model_table.name, model_column.name))
                model_col_spec = ddl_c.get_column_specification(model_column)
                db_col_spec = ddl_c.get_column_specification(db_column)

                model_col_spec = re.sub(r'[(][\d ,]+[)]', '', model_col_spec)
                db_col_spec = re.sub(r'[(][\d ,]+[)]', '', db_col_spec)
                db_col_spec = db_col_spec.replace('DECIMAL', 'NUMERIC')
                db_col_spec = db_col_spec.replace('TINYINT', 'BOOL')

                if model_col_spec != db_col_spec:
                    logging.warning('Column %s.%s has specification %r in the model but %r in the database' %
                                    (model_table.name, model_column.name, model_col_spec, db_col_spec))

                if model_column.constraints or db_column.constraints:
                    # TODO, check constraints
                    logging.debug('Column constraints not checked. I am too dumb')

            for c in to_remove:
                model_column = getattr(db_table.c, c)
                logging.warning('Column %s.%s in the database is not in the model' % (model_table.name, model_column.name))

def _column_names(table):
    # Autoloaded columns return unicode column names - make sure we treat them all as equal
    return set((unicode(i.name) for i in table.c))
SQLAlchemy itself doesn't support automatic schema updates, but there is a third-party tool, SQLAlchemy Migrate, to automate migrations. Look through the "Database schema versioning workflow" chapter to see how it works.
Alembic is the latest package that offers database migrations.
See the SQLAlchemy docs regarding migration here.
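As a concrete illustration, a minimal Alembic migration script might look like the sketch below (this assumes a project already set up with alembic init; the revision ids, table, and column names are placeholders):
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic (placeholder values)
revision = 'abc123'
down_revision = None

def upgrade():
    # add the new column that was added to the mapped class
    op.add_column('users', sa.Column('city', sa.String(60), nullable=True))

def downgrade():
    op.drop_column('users', 'city')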
# database.py has the definition for engine, e.g.:
# from sqlalchemy import create_engine
# engine = create_engine('mysql://......', convert_unicode=True)
from database import engine
from sqlalchemy import DDL

add_column = DDL('ALTER TABLE USERS ADD COLUMN city VARCHAR(60) AFTER email')
engine.execute(add_column)  # engine.execute() is the legacy 1.x API
It's possible to do this with sqlalchemy-migrate without actually using migrations:
import sqlalchemy

meta_data = sqlalchemy.MetaData(bind=dbinterface.db.engine)
table = sqlalchemy.schema.Table(table_name, meta_data)
try:
    col = sqlalchemy.Column('column_name', sqlalchemy.String)
    col.create(table)
except Exception as e:
    print("Error adding column: {}".format(e))
To use this with Python 3 I needed sqlalchemy-migrate==0.12.0.
You can also install 'DB Browser for SQLite', open your current database file, add or edit the table directly, and save it, then run your app (and add the corresponding field to your model after saving).
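As a hypothetical illustration (model and column names are placeholders), if you added a city column in DB Browser, you would mirror it on the mapped class:
class User(db.Model):
    __tablename__ = 'users'
    id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String(120))
    city = db.Column(db.String(60))  # column added via DB Browser, mirrored in the model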
