sqlalchemy + postgresql: casting a uuid to text and matching with like - python

How can I convert a UUID to text and match it using the LIKE operator? E.g. I want to do the following in SQLAlchemy:
SELECT * FROM user
WHERE id::text like '%0c%';
PS: The column definition is
from uuid import uuid4

from sqlalchemy.dialects.postgresql import UUID

id = Column(
    UUID(as_uuid=True),
    primary_key=True,
    index=True,
    nullable=False,
    default=uuid4,
)

Use the cast function:
import sqlalchemy as sa
...
with Session() as s:
    results = s.scalars(
        sa.select(MyModel).where(
            sa.cast(MyModel.id, sa.TEXT).like(f'%{fragment}%')
        )
    )
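As a quick check (a sketch, not part of the original answer), you can compile the expression against the PostgreSQL dialect to confirm it renders as a CAST ... LIKE. This assumes the MyModel class with its UUID id column from the question:
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql

stmt = sa.select(MyModel).where(
    sa.cast(MyModel.id, sa.TEXT).like('%0c%')
)
# Prints something along the lines of:
#   SELECT ... FROM my_model WHERE CAST(my_model.id AS TEXT) LIKE %(id_1)s
# (table and parameter names depend on your model; they are assumptions here)
print(stmt.compile(dialect=postgresql.dialect()))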
id::text in PostgreSQL is shorthand* for cast:
with x as (select gen_random_uuid() as uu)
select uu::text,
       cast(uu as text),
       uu::text = cast(uu as text) as "Equals?"
from x;
uu │ uu │ Equals?
══════════════════════════════════════╪══════════════════════════════════════╪═════════
b23efe1b-fb05-4d8a-92a3-59339d2ea0f4 │ b23efe1b-fb05-4d8a-92a3-59339d2ea0f4 │ t
* Well, mostly...

Related

SQLAlchemy 2.0 ORM Model DateTime Insertion

I am having some real trouble getting a created_date column working with SQLAlchemy 2.0 with the ORM model. The best answer so far I've found is at this comment: https://stackoverflow.com/a/33532154, but I haven't been able to make that solution work. In my (simplified) models.py file I have:
import datetime
from sqlalchemy import Integer, String, DateTime
from sqlalchemy.sql import func
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column

class Base(DeclarativeBase):
    pass

class MyTable(Base):
    __tablename__ = "my_table"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String, nullable=False)
    created_date: Mapped[datetime.datetime] = mapped_column(DateTime(timezone=True), server_default=func.now())
So far, so good, thinks I. In the simplified engine.py I have:
from sqlalchemy import create_engine
from sqlalchemy import select
from sqlalchemy.orm import Session
import models

def add_entry(engine, name_str):
    this_row = models.MyTable()
    this_row.name = name_str
    with Session(engine) as session:
        session.add(this_row)
        session.commit()
If I'm understanding correctly, the default value for created_date is supposed to be a SQL function, and SQLAlchemy maps now() to SQLite3's datetime(). With the engine set to echo=True, I get the following result when it tries to run this insert command (please note, this is data from the non-simplified form, but it's still pretty simple; it has 3 string columns instead of the one I described):
2023-02-06 09:47:07,080 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-02-06 09:47:07,080 INFO sqlalchemy.engine.Engine INSERT INTO coaches (d_name, bb2_name, bb3_name) VALUES (?, ?, ?) RETURNING id, created_date
2023-02-06 09:47:07,081 INFO sqlalchemy.engine.Engine [generated in 0.00016s] ('andy#1111', 'AndyAnderson', 'Killer Andy')
2023-02-06 09:47:07,081 INFO sqlalchemy.engine.Engine ROLLBACK
This causes an exception when it gets to the time function: IntegrityError: NOT NULL constraint failed: coaches.created_date
Some additional data (I have been using the rich library, which produces an enormous amount of debug information, so I'm trying to show just the best bits):
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │ exc_tb = <traceback object at 0x00000108BD2565C0> │ │
│ │ exc_type = <class 'sqlalchemy.exc.IntegrityError'> │ │
│ │ exc_value = IntegrityError('(sqlite3.IntegrityError) NOT NULL constraint failed: │ │
│ │ coaches.created_date') │ │
│ │ self = <sqlalchemy.util.langhelpers.safe_reraise object at 0x00000108BD1B79A0> │ │
│ │ traceback = None │ │
│ │ type_ = None │ │
│ │ value = None │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯
In any event, I feel like I've gotten the wrong end of the stick on how to make a table column automatically execute a SQL command via the func call. Any notions on this one? I haven't found any direct example in the SQLAlchemy 2.0 docs, and aside from the pretty awesome comment to a similar question, I haven't found any working solutions.
Thanks for considering!
I implemented a SQLAlchemy 2.0 mapped_column with a server_default of func.now(), expecting the column to automatically fill during an INSERT operation. During the insert operation, SQLAlchemy threw an exception claiming the column's NOT NULL constraint was violated -- thus it was not automatically filling.
Posting an answer to my own question to note what actually did work (the actual problem still exists, but a simplified variation does work just dandy, the way I expect it to).
import datetime
from sqlalchemy import Integer, String, DateTime
from sqlalchemy import create_engine
from sqlalchemy.sql import func
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import Session

class Base(DeclarativeBase):
    pass

class MyTable(Base):
    __tablename__ = "my_table"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String)
    created_date: Mapped[datetime.datetime] = mapped_column(
        DateTime(timezone=True), server_default=func.now()
    )

def initialize_engine(filename):
    return create_engine(f"sqlite+pysqlite:///{filename}", echo=True)

def initialize_tables(engine):
    Base.metadata.create_all(engine)

def add_row(engine, name):
    this_row = MyTable(name=name)
    print(this_row)
    with Session(engine) as session:
        session.add(this_row)
        session.commit()

my_file = "test.db"
my_engine = initialize_engine(my_file)
initialize_tables(my_engine)
add_row(my_engine, "Dave")
This produces the result:
python datetest.py
2023-02-06 11:02:41,157 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-02-06 11:02:41,158 INFO sqlalchemy.engine.Engine PRAGMA main.table_info("my_table")
2023-02-06 11:02:41,158 INFO sqlalchemy.engine.Engine [raw sql] ()
2023-02-06 11:02:41,158 INFO sqlalchemy.engine.Engine COMMIT
<__main__.MyTable object at 0x000002CC767ECD50>
2023-02-06 11:02:41,159 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-02-06 11:02:41,160 INFO sqlalchemy.engine.Engine INSERT INTO my_table (name) VALUES (?) RETURNING id, created_date
2023-02-06 11:02:41,160 INFO sqlalchemy.engine.Engine [generated in 0.00020s] ('Dave',)
2023-02-06 11:02:41,171 INFO sqlalchemy.engine.Engine COMMIT
The schema in the correctly working database reads:
sqlite> .schema my_table
CREATE TABLE my_table (
id INTEGER NOT NULL,
name VARCHAR NOT NULL,
created_date DATETIME DEFAULT (CURRENT_TIMESTAMP) NOT NULL,
PRIMARY KEY (id)
);
So... all I have to do is figure out why my original code isn't doing the simple variation!
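One hedged avenue for that investigation (not claimed in the thread): Base.metadata.create_all() only creates tables that do not exist yet, so if the original coaches table was created before server_default was added to the model, its live schema would still lack the DEFAULT, which would produce exactly the NOT NULL failure above. A quick sketch for inspecting the live schema; the database path is a placeholder, not from the post:
from sqlalchemy import create_engine, inspect

# Placeholder path -- point this at the database file used by the
# non-simplified code.
engine = create_engine("sqlite+pysqlite:///coaches.db")

# Print each column of the existing table together with its server default;
# created_date should show CURRENT_TIMESTAMP if the DEFAULT made it into the DDL.
for col in inspect(engine).get_columns("coaches"):
    print(col["name"], col.get("default"))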

SQLalchemy with column names starting and ending with underscores

Set the RDBMS_URI env var to a connection string like postgresql://username:password@host/database, then on Python 3.9 with PostgreSQL 15 and SQLAlchemy 1.4 run:
from os import environ

from sqlalchemy import Boolean, Column, Identity, Integer
from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Tbl(Base):
    __tablename__ = 'Tbl'
    __has_error__ = Column(Boolean)
    id = Column(Integer, primary_key=True, server_default=Identity())

engine = create_engine(environ["RDBMS_URI"])
Base.metadata.create_all(engine)
Checking the database:
=> \d "Tbl"
Table "public.Tbl"
Column | Type | Collation | Nullable | Default
--------+---------+-----------+----------+----------------------------------
id | integer | | not null | generated by default as identity
Indexes:
"Tbl_pkey" PRIMARY KEY, btree (id)
How do I force column names with double underscores to work?
I believe that the declarative machinery explicitly excludes attributes whose names start with a double underscore from the mapping process (based on this and this). Consequently your __has_error__ column is not created in the target table.
There are at least two possible workarounds. Firstly, you could give the model attribute a different name, for example:
_has_error = Column('__has_error__', Boolean)
This will create the database column __has_error__, accessed through Tbl._has_error*.
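As a brief, hedged usage sketch for that rename (assuming the engine from the question and a Session around it):
from sqlalchemy.orm import Session

with Session(engine) as session:
    # The Python-side attribute is _has_error; the database column is still "__has_error__".
    session.add(Tbl(_has_error=True))
    session.commit()

    flagged = session.query(Tbl).filter(Tbl._has_error.is_(True)).all()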
If you want the model's attribute to be __has_error__, then you can achieve this by using an imperative mapping.
import sqlalchemy as sa
from sqlalchemy import orm

mapper_registry = orm.registry()

tbl = sa.Table(
    'tbl',
    mapper_registry.metadata,
    sa.Column('__has_error__', sa.Boolean),
    sa.Column(
        'id', sa.Integer, primary_key=True, server_default=sa.Identity()
    ),
)

class Tbl:
    pass

mapper_registry.map_imperatively(Tbl, tbl)
mapper_registry.metadata.create_all(engine)
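With the imperative mapping in place, the mapped attribute really is named __has_error__ on Tbl (Python name mangling does not apply to names that also end with two underscores, nor outside class bodies). A hedged usage sketch, reusing engine from the question:
from sqlalchemy.orm import Session

with Session(engine) as session:
    session.add(Tbl(__has_error__=True))
    session.commit()

    # getattr() is just for readability; Tbl.__has_error__ also works at module level.
    stmt = sa.select(Tbl).where(getattr(Tbl, "__has_error__").is_(True))
    flagged = session.execute(stmt).scalars().all()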
* I tried using a synonym to map __has_error__ to _has_error but it didn't seem to work. It probably gets excluded in the mapper as well, but I didn't investigate further.

SqlAlchemy column_property or hybrid_property to be used in .filter()

I'd like to be able to use a computed column in a statement filter, but I don't know how to do it.
Here is what I've tried so far with a minimal reproducible example:
1. the base code
from typing import Any
from datetime import (
    date,
    timedelta,
)

from sqlalchemy import (
    Column,
    Integer,
    Date,
    select,
)
from sqlalchemy.ext.declarative import (
    as_declarative,
    declared_attr,
)
from sqlalchemy.orm import object_session
from sqlalchemy.sql import func
from sqlalchemy.ext.hybrid import hybrid_property

@as_declarative()
class BaseModel:
    id: Any
    __name__: str

    # Generate __tablename__ automatically
    @declared_attr
    def __tablename__(cls) -> str:
        return cls.__name__.lower()

class A(BaseModel):
    id: int = Column(Integer, primary_key=True, index=True)
    owner_id: int = Column(Integer, nullable=False)
    valid_from: date = Column(Date, server_default=func.current_date())
Now I want a computed field named valid_to. Inside SQL (e.g. a view on the table) my query would look like:
2. native sql solution
select
    (case
        when valid_from = max(valid_from) over (partition by owner_id)
            then '9999-12-31'
        else
            (lead(valid_from) over (partition by owner_id order by valid_from) - interval '1' day)
    end)::date as valid_to
from public.a
As you can notice, I'm using the window function lead. This is what makes it difficult for me, since SQLAlchemy's column_property and hybrid_property require only one output row. I've managed to implement a hybrid_property to be used in Python on the instance itself with the following code:
3. my partial success
@hybrid_property
def valid_to(self) -> date:
    """Calculates the valid_to date based on the user and the other existing entries."""
    statement = (
        select(A)
        .filter(A.owner_id == self.owner_id)
        .filter(A.valid_from > self.valid_from)
        .order_by('valid_from')
        .limit(1)
    )
    a_next = object_session(self).execute(statement).scalars().first()
    if a_next:
        return a_next.valid_from - timedelta(days=1)
    else:
        return date.max
Now this gives me the correct result when I have an instance of A. However, I'd like to use this property in filters on the class level, e.g.:
4. what I want, but can't do with the code from above
statement = (
    select(A)
    .filter(A.owner_id == 1)
    .filter(A.valid_from <= date(2022, 11, 15))
    .filter(A.valid_to >= date(2022, 11, 15))
)
I'm really stuck with the implementation of the @valid_to.expression, or, if it is easier, the column_property()...
Does anybody have an idea? Help is much appreciated!
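There is no accepted answer in this thread; purely as a hedged sketch of one possible direction, the class-level expression could mirror the instance-level logic with a correlated scalar subquery instead of the lead() window function. The alias a_next is introduced here only for illustration; select, func and date are already imported in the base code, aliased is not.
from sqlalchemy.orm import aliased

# Inside class A, below the hybrid_property from part 3.
@valid_to.expression
def valid_to(cls):
    a_next = aliased(cls)
    next_valid_from = (
        select(func.min(a_next.valid_from))
        .where(a_next.owner_id == cls.owner_id)
        .where(a_next.valid_from > cls.valid_from)
        .correlate(cls)
        .scalar_subquery()
    )
    # Subtracting an integer from a date subtracts days in PostgreSQL;
    # if there is no later row, fall back to date.max like the Python side.
    return func.coalesce(next_valid_from - 1, date.max)
If that expression compiles for your schema, the filter in part 4 should then render valid_to as the correlated subquery rather than calling the Python implementation.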

How to make a Select with SQLAlchemy?

I am using SQLAlchemy and Python, without Flask. When I run an infinite loop and invoke the select method that SQLAlchemy offers, it returns the values of my table, but when I change the value of a specific column from phpMyAdmin, the change is not reflected in the Python script. Could someone give me some advice or help? Thank you in advance.
PS: Here is the code for you to analyze:
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey, create_engine, Float, DateTime, update, Date, Time
from sqlalchemy.sql import select
import time
import os

metadata = MetaData()

ahora = Table('ahora', metadata,
    Column('id', Integer, primary_key=True),
    Column('temperatura', Float()),
    Column('humedad', Float()),
    Column('canal1', Float()),
    Column('canal2', Float()),
    Column('canal3', Float()),
    Column('canal4', Float()),
    Column('hora', Time()),
)

engine = create_engine('mysql+pymysql://javi:javiersolis12@10.0.0.20/Tuti')
connection = engine.connect()

while True:
    # Select the single entry in the Configuracion table
    # (the 'configuracion' Table is defined elsewhere in the original script)
    query = select([configuracion])
    confi_actual = connection.execute(query).fetchone()
    query_aux = select([ahora])
    datos_actuales = connection.execute(query_aux).fetchone()
    print(datos_actuales)
    time.sleep(8)
You can specify the components you need to select in a Query via your Session, and then use, for example, all() to get all the entries. Please look at the following example:
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()
...

class Ahora(Base):
    __tablename__ = 'ahora'
    id = sa.Column(sa.Integer, primary_key=True)
    temperatura = sa.Column(sa.Float)
    ...

...
engine = sa.create_engine('mysql+pymysql://javi:javiersolis12@10.0.0.20/Tuti')
session = Session(engine)
query = session.query(Ahora)
# some actions with query, for example filter(), limit(), etc.
ahora = query.all()
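Not from the original answer, but relevant to the polling loop in the question: an ORM Session caches loaded rows and keeps a transaction open, and under MySQL's default REPEATABLE READ isolation a long-lived transaction keeps seeing the same snapshot. A hedged sketch of polling for fresh data by ending the transaction between iterations, reusing the Ahora model and engine from above:
import time
from sqlalchemy.orm import Session

with Session(engine) as session:
    while True:
        fila = session.query(Ahora).first()
        if fila is not None:
            print(fila.id, fila.temperatura)
        # commit() ends the current transaction and expires cached state,
        # so the next iteration re-reads from the database.
        session.commit()
        time.sleep(8)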

SQLAlchemy Nested CTE Query

The SQLAlchemy core query builder appears to unnest and relocate CTE queries to the "top" of the compiled SQL.
I'm converting an existing Postgres query that selects deeply joined data as a single JSON object. The syntax is pretty contrived but it significantly reduces network overhead for large queries. The goal is to build the query dynamically using the sqlalchemy core query builder.
Here's a minimal working example of a nested CTE
with res_cte as (
    select
        account_0.name acct_name,
        (
            with offer_cte as (
                select offer_0.id
                from offer offer_0
                where offer_0.account_id = account_0.id
            )
            select array_agg(offer_cte.id)
            from offer_cte
        ) as offer_arr
    from account account_0
)
select
    acct_name::text, offer_arr::text
from res_cte
Result
acct_name, offer_arr
---------------------
oliver, null
rachel, {3}
buddy, {4,5}
(my incorrect use of) the core query builder attempts to unnest offer_cte and results in every offer.id being associated with every account_name in the result.
There's no need to re-implement this exact query in an answer, any example that results in a similarly nested CTE would be perfect.
I just implemented the nesting CTE feature. It should land with the 1.4.24 release.
Pull request: https://github.com/sqlalchemy/sqlalchemy/pull/6709
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

# Model declaration
Base = declarative_base()

class Offer(Base):
    __tablename__ = "offer"
    id = sa.Column(sa.Integer, primary_key=True)
    account_id = sa.Column(sa.Integer, nullable=False)

class Account(Base):
    __tablename__ = "account"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.TEXT, nullable=False)

# Query construction
account_0 = sa.orm.aliased(Account)

# Watch the nesting keyword set to True
offer_cte = (
    sa.select(Offer.id)
    .where(Offer.account_id == account_0.id)
    .select_from(Offer)
    .correlate(account_0)
    .cte("offer_cte", nesting=True)
)

offer_arr = sa.select(sa.func.array_agg(offer_cte.c.id).label("offer_arr"))

res_cte = sa.select(
    account_0.name.label("acct_name"),
    offer_arr.scalar_subquery().label("offer_arr"),
).cte("res_cte")

final_query = sa.select(
    sa.cast(res_cte.c.acct_name, sa.TEXT),
    sa.cast(res_cte.c.offer_arr, sa.TEXT),
)
It constructs this query that returns the result you expect:
WITH res_cte AS
(
    SELECT
        account_1.name AS acct_name
        , (
            WITH offer_cte AS
            (
                SELECT
                    offer.id AS id
                FROM
                    offer
                WHERE
                    offer.account_id = account_1.id
            )
            SELECT
                array_agg(offer_cte.id) AS offer_arr
            FROM
                offer_cte
        ) AS offer_arr
    FROM
        account AS account_1
)
SELECT
    CAST(res_cte.acct_name AS TEXT) AS acct_name
    , CAST(res_cte.offer_arr AS TEXT) AS offer_arr
FROM
    res_cte
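For completeness, a hedged sketch of executing the constructed final_query (the connection URL is a placeholder, not from the answer):
from sqlalchemy import create_engine
from sqlalchemy.orm import Session

engine = create_engine("postgresql://user:password@localhost/mydb")  # placeholder URL

with Session(engine) as session:
    for acct_name, offer_arr in session.execute(final_query):
        print(acct_name, offer_arr)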
