I have a Postgres query (via SQLAlchemy) that selects matching rows using complex criteria:
original_query = session.query(SomeTable).filter(*complex_filters)
I don't know exactly how the query is constructed, I only have access to the resulting Query instance.
Now I want to use this "opaque" query (black-box for the purposes of this question) to construct other queries, from the same table using the exact same criteria, but with additional logic on top of the matched original_query rows. For example with SELECT DISTINCT(column) on top:
another_query = session.query(SomeTable.column).distinct().?select_from_query?(original_query)
or
SELECT SUM(tab_value) FROM (
SELECT tab.key AS tab_key, tab.value AS tab_value -- inner query, fixed
FROM tab
WHERE tab.product_id IN (1, 2) -- simplified; the inner query is quite complex
) AS tbl
WHERE tab_key = 'length';
or
SELECT tab_key, COUNT(*) FROM (
SELECT tab.key AS tab_key, tab.value AS tab_value
FROM tab
WHERE tab.product_id IN (1, 2)
) AS tbl
GROUP BY tab_key;
etc.
How to implement that ?select_from_query? part cleanly, in SQLAlchemy?
Basically, how to do SELECT dynamic FROM (SELECT fixed) in SQLAlchemy?
Motivation: the inner Query object comes from a different part of code. I don't have control over how it is constructed, and want to avoid duplicating its logic ad-hoc for each SELECT that I have to run on top of it. I want to re-use that query, but add additional logic on top (as per the examples above).
original_query is just a SQLAlchemy query API object; you can apply additional filters and criteria to it. The query API is generative; each Query() instance operation returns a new (immutable) instance and your starting point (original_query) is unaffected.
This includes using Query.distinct() to add a DISTINCT() clause, Query.with_entities() to alter what columns are part of the query, and Query.values() to execute your query but return just specific single column values.
Use either .distinct(<column>).with_entities(<column>) to create a new query object (which can be further re-used):
another_query = original_query.distinct(SomeTable.column).with_entities(SomeTable.column)
or just use .distinct(<column>).values(<column>) to get an iterator of (column_value,) tuple results right there and then:
distinct_values = original_query.distinct(SomeTable.column).values(SomeTable.column)
Note that .values() executes the query immediately, like .all() would, while .with_entities() gives you back a new Query object with just the single column (and .all() or iteration or slicing would then execute and return the results).
Demo, using a contrived Foo model (executing against sqlite to make it easier to demo quickly):
>>> from sqlalchemy import *
>>> from sqlalchemy.ext.declarative import declarative_base
>>> from sqlalchemy.orm import sessionmaker
>>> Base = declarative_base()
>>> class Foo(Base):
... __tablename__ = "foo"
... id = Column(Integer, primary_key=True)
... bar = Column(String)
... spam = Column(String)
...
>>> engine = create_engine('sqlite:///:memory:', echo=True)
>>> session = sessionmaker(bind=engine)()
>>> Base.metadata.create_all(engine)
2019-06-10 13:10:43,910 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("foo")
2019-06-10 13:10:43,910 INFO sqlalchemy.engine.base.Engine ()
2019-06-10 13:10:43,911 INFO sqlalchemy.engine.base.Engine
CREATE TABLE foo (
id INTEGER NOT NULL,
bar VARCHAR,
spam VARCHAR,
PRIMARY KEY (id)
)
2019-06-10 13:10:43,911 INFO sqlalchemy.engine.base.Engine ()
2019-06-10 13:10:43,913 INFO sqlalchemy.engine.base.Engine COMMIT
>>> original_query = session.query(Foo).filter(Foo.id.between(17, 42))
>>> print(original_query) # show what SQL would be executed for this query
SELECT foo.id AS foo_id, foo.bar AS foo_bar, foo.spam AS foo_spam
FROM foo
WHERE foo.id BETWEEN ? AND ?
>>> another_query = original_query.distinct(Foo.bar).with_entities(Foo.bar)
>>> print(another_query) # print the SQL again, don't execute
SELECT DISTINCT foo.bar AS foo_bar
FROM foo
WHERE foo.id BETWEEN ? AND ?
>>> distinct_values = original_query.distinct(Foo.bar).values(Foo.bar) # executes!
2019-06-10 13:10:48,470 INFO sqlalchemy.engine.base.Engine SELECT DISTINCT foo.bar AS foo_bar
FROM foo
WHERE foo.id BETWEEN ? AND ?
2019-06-10 13:10:48,470 INFO sqlalchemy.engine.base.Engine (17, 42)
In the above demo, the original query would select certain Foo instances with a BETWEEN filter, but adding .distinct(Foo.bar).values(Foo.bar) then executes a query for just the DISTINCT foo.bar column, but with the same BETWEEN filter in place. Similarly, by using .with_entities(), we were given a new query object for just that single column, but the filter is still part of that new query.
Your added example works just the same way; you don't actually need to have a sub-select there, as the same query can be expressed as:
SELECT sum(tab.value)
FROM tab
WHERE tab.product_id IN (1, 2) AND tab.key = 'length';
which can be achieved simply by adding extra filters and then using .with_entities() to replace the selected columns with your SUM():
summed_query = (
    original_query
    .filter(Tab.key == 'length')          # add a filter
    .with_entities(func.sum(Tab.value))   # replace the selected columns with SUM()
)
or, in terms of the above Foo demo:
>>> print(original_query.filter(Foo.spam == 42).with_entities(func.sum(Foo.bar)))
SELECT sum(foo.bar) AS sum_1
FROM foo
WHERE foo.id BETWEEN ? AND ? AND foo.spam = ?
There are use-cases for sub-queries (such as limiting results from a specific table in a join), but this is not one of those.
If you do need a sub-query, then the query API has Query.from_self() (for simpler cases) and Query.subquery().
For example, if you needed to select only aggregated rows from the original query and filter on the aggregated values via HAVING, and then join the results with another table for the highest row id for each group and some further filtering, then you need a subquery:
summed_col = func.sum(SomeTable.some_column)
max_id = func.max(SomeTable.primary_key)
summed_results_by_eggs = (
original_query
.with_entities(max_id, summed_col) # only select highest id and the sum
.group_by(SomeTable.other_column) # per group
.having(summed_col > 10) # where the sum is high enough
.from_self(summed_col) # give us the summed value as a subselect
.join( # join these rows with another table
OtherTable,
OtherTable.foreign_key == max_id # using the highest id
)
.filter(OtherTable.some_column < 1000) # and filter some more
)
The above would only select the summed SomeTable.some_column values where that sum is greater than 10, using the highest SomeTable primary key value in each group to join against OtherTable and then filtering further on OtherTable.some_column. This query has to use a sub-query, because you want to limit the eligible SomeTable rows before joining against the other table.
To demo this, I added a second table Eggs:
>>> from sqlalchemy.orm import relationship
>>> class Eggs(Base):
... __tablename__ = "eggs"
... id = Column(Integer, primary_key=True)
... foo_id = Column(Integer, ForeignKey(Foo.id))
... foo = relationship(Foo, backref="eggs")
...
>>> summed_col = func.sum(Foo.bar)
>>> max_id = func.max(Foo.id)
>>> print(
... original_query
... .with_entities(max_id, summed_col)
... .group_by(Foo.spam)
... .having(summed_col > 10)
... .from_self(summed_col)
... .join(Eggs, Eggs.foo_id==max_id)
... .filter(Eggs.id < 1000)
... )
SELECT anon_1.sum_2 AS sum_1
FROM (SELECT max(foo.id) AS max_1, sum(foo.bar) AS sum_2
FROM foo
WHERE foo.id BETWEEN ? AND ? GROUP BY foo.spam
HAVING sum(foo.bar) > ?) AS anon_1 JOIN eggs ON eggs.foo_id = anon_1.max_1
WHERE eggs.id < ?
The Query.from_self() method takes new entities to use in the outer query; if you omit those, all columns are pulled out. In the above I pulled out the summed column value; without that argument the MAX(Foo.id) column would also be selected.
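As a concrete illustration, reusing the Foo demo objects defined above, calling from_self() with no entities wraps the same query but re-selects both the MAX and SUM columns in the outer SELECT; nothing is executed until you iterate, slice, or call .all():
wrapped = (
    original_query
    .with_entities(max_id, summed_col)
    .group_by(Foo.spam)
    .having(summed_col > 10)
    .from_self()   # no entities given: max(foo.id) and sum(foo.bar) are both re-selected
)
print(wrapped)     # prints the wrapped SELECT ... FROM (SELECT ...) without executing it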
Related
The SQL query I have can identify the Max Edit Time from the 3 tables that it is joining together:
Select Identity.SSN, Schedule.First_Class, Students.Last_Name,
(SELECT Max(v)
FROM (VALUES (Students.Edit_DtTm), (Schedule.Edit_DtTm),
(Identity.Edit_DtTm)) AS value(v)) as [MaxEditDate]
FROM Schedule
LEFT JOIN Students ON Schedule.stdnt_id=Students.Student_Id
LEFT JOIN Identity ON Schedule.std_id=Identity.std_id
I need this to be in SQLAlchemy so I can reference the columns being used elsewhere in my code. Below is the simplest version of what I'm trying to do, but it doesn't work. I've tried changing how I construct the query, but I either get a SQL error that I'm using VALUES incorrectly, or it doesn't join properly and gives me the actual highest value in those columns without matching it to the outer query.
max_edit_subquery = sa.func.values(Students.Edit_DtTm, Schedule.Edit_DtTm, Identity.Edit_DtTm)
base_query = (sa.select([Identity.SSN, Schedule.First_Class, Students.Last_Name,
                         sa.select([sa.func.max(max_edit_subquery)])])
              .select_from(Schedule.__table__
                           .join(Students, Schedule.stdnt_id == Students.Student_Id)
                           .join(Identity, Schedule.std_id == Identity.std_id)))
I am not an expert at SQLAlchemy but you could exchange VALUES with UNION ALL:
Select Identity.SSN, Schedule.First_Class, Students.Last_Name,
(SELECT Max(v)
FROM (SELECT Students.Edit_DtTm AS v
UNION ALL SELECT Schedule.Edit_DtTm
UNION ALL SELECT Identity.Edit_DtTm) s
) as [MaxEditDate]
FROM Schedule
LEFT JOIN Students ON Schedule.stdnt_id=Students.Student_Id
LEFT JOIN Identity ON Schedule.std_id=Identity.std_id;
Another approach is to use the GREATEST function (not available in T-SQL):
Select Identity.SSN, Schedule.First_Class, Students.Last_Name,
GREATEST(Students.Edit_DtTm, Schedule.Edit_DtTm,Identity.Edit_DtTm)
as [MaxEditDate]
FROM Schedule
LEFT JOIN Students ON Schedule.stdnt_id=Students.Student_Id
LEFT JOIN Identity ON Schedule.std_id=Identity.std_id;
I hope this helps you translate it to the ORM version; a rough sketch of that translation follows.
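This sketch assumes mapped classes named Schedule, Students and Identity with the column attributes used above, an existing session, and a backend that actually supports GREATEST (so not plain T-SQL); treat it as untested:
from sqlalchemy import func

max_edit = func.greatest(
    Students.Edit_DtTm, Schedule.Edit_DtTm, Identity.Edit_DtTm
).label("MaxEditDate")

query = (
    session.query(Identity.SSN, Schedule.First_Class, Students.Last_Name, max_edit)
    .select_from(Schedule)
    .outerjoin(Students, Schedule.stdnt_id == Students.Student_Id)
    .outerjoin(Identity, Schedule.std_id == Identity.std_id)
)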
I had a similar problem and solved it using the approach below. I have added the full code and the resulting query. The code was executed against MSSQL; I used different tables and masked them with the table and column names from your requirement in the snippet below.
from sqlalchemy import *
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.types import String
from sqlalchemy.sql.expression import FromClause
class values(FromClause):
    def __init__(self, *args):
        self.column_names = args

@compiles(values)
def compile_values(element, compiler, asfrom=False, **kwargs):
    values = "VALUES %s" % ", ".join(
        "(%s)" % compiler.render_literal_value(elem, String())
        for elem in element.column_names
    )
    if asfrom:
        values = "(%s)" % values
    return values
base_query = self.db_session.query(
    Schedule.Edit_DtTm.label("Schedule_Edit_DtTm"),
    Identity.Edit_DtTm.label("Identity_Edit_DtTm"),
    Students.Edit_DtTm.label("Students_Edit_DtTm"),
    Identity.SSN
).outerjoin(
    Students, Schedule.stdnt_id == Students.Student_Id
).outerjoin(
    Identity, Schedule.std_id == Identity.std_id
).subquery()

values_at_from_clause = values(
    "Students_Edit_DtTm", "Schedule_Edit_DtTm", "Identity_Edit_DtTm"
).alias('values(MaxEditDate)')

get_max_from_values = self.db_session.query(
    func.max(text('MaxEditDate'))
).select_from(values_at_from_clause)

output_query = self.db_session.query(
    get_max_from_values.subquery()
).label("MaxEditDate")
Printing output_query gives:
SELECT
anon_1.Schedule_Edit_DtTm AS anon_1_Schedule_Edit_DtTm,
anon_1.Students_Edit_DtTm AS anon_1_Students_Edit_DtTm,
anon_1.Identity_Edit_DtTm AS anon_1_Identity_Edit_DtTm,
anon_1.SSN AS anon_1_SSN,
(
SELECT
anon_2.max_1
FROM
(
SELECT
max( MaxEditDate ) AS max_1
FROM
(
VALUES (Students_Edit_DtTm),
(Schedule_Edit_DtTm),
(Identity_Edit_DtTm)
) AS values(MaxEditDate)
) AS anon_2
) AS MaxEditDate
FROM
(
SELECT
Schedule.Edit_DtTm AS Schedule_Edit_DtTm,
Students.Edit_DtTm AS Students_Edit_DtTm,
Identity.Edit_DtTm AS Identity_Edit_DtTm,
Identity.SSN AS SSN
FROM
Schedule WITH(NOLOCK)
LEFT JOIN Students WITH(NOLOCK) ON
Schedule.stdnt_id = Students.Student_Id
LEFT JOIN Identity WITH(NOLOCK) ON
Schedule.std_id = Identity.std_id
) AS anon_1
I'm trying to translate this SQL query into a Flask-SQLAlchemy call:
SELECT *
FROM "ENVOI"
WHERE "ID_ENVOI" IN (SELECT d."ID_ENVOI"
FROM "DECLANCHEMENT" d
WHERE d."STATUS" = 0
AND d."DATE" = (SELECT max("DECLANCHEMENT"."DATE")
FROM "DECLANCHEMENT"
WHERE "DECLANCHEMENT"."ID_ENVOI" = d."ID_ENVOI"))
As you can see, it uses subqueries and, most importantly, one of the subqueries is a correlated query (it uses the d table defined in an outer query).
I know how to use subqueries with subquery() function, but I can't find documentation about correlated queries with SQLAlchemy. Do you know a way to do it ?
Yes, we can.
Have a look at the following example (especially the correlate method call):
from sqlalchemy import select, func, table, Column, Integer
table1 = table('table1', Column('col', Integer))
table2 = table('table2', Column('col', Integer))
subquery = select(
[func.if_(table1.c.col == 1, table2.c.col, None)]
).correlate(table1)
query = (
select([table1.c.col,
subquery.label('subquery')])
.select_from(table1)
)
if __name__ == '__main__':
print(query)
will result in the following query
SELECT table1.col, (SELECT if(table1.col = :col_1, table2.col, NULL) AS if_1
FROM table2) AS subquery
FROM table1
As you can see, if you call correlate on a select, the given Table will not be added to its FROM clause.
You have to do this even when you specify select_from directly, as SQLAlchemy will happily add any table it finds in the columns.
Based on the link from univerio's comment, I've done this code for my request:
Declch = db.aliased(Declanchement)
maxdate_sub = db.select([db.func.max(Declanchement.date)])\
.where(Declanchement.id_envoi == Declch.id_envoi)
decs_sub = db.session.query(Declch.id_envoi)\
.filter(Declch.status == SMS_EN_ATTENTE)\
.filter(Declch.date < since)\
.filter(Declch.date == maxdate_sub).subquery()
envs = Envoi.query.filter(Envoi.id_envoi.in_(decs_sub)).all()
Note: this is a question about SQL Alchemy's expression language not the ORM
SQL Alchemy is fine for adding WHERE or HAVING clauses to an existing query:
q = select([bmt_gene.c.id]).select_from(bmt_gene)
q = q.where(bmt_gene.c.ensembl_id == "ENSG00000000457")
print q
SELECT bmt_gene.id
FROM bmt_gene
WHERE bmt_gene.ensembl_id = %s
However if you try to add a JOIN in the same way you'll get an exception:
q = select([bmt_gene.c.id]).select_from(bmt_gene)
q = q.join(bmt_gene_name)
sqlalchemy.exc.NoForeignKeysError: Can't find any foreign key relationships between 'Select object' and 'bmt_gene_name'
If you specify the columns it creates a subquery (which is incomplete SQL anyway):
q = select([bmt_gene.c.id]).select_from(bmt_gene)
q = q.join(bmt_gene_name, q.c.id == bmt_gene_name.c.gene_id)
(SELECT bmt_gene.id AS id FROM bmt_gene)
JOIN bmt_gene_name ON id = bmt_gene_name.gene_id
But what I actually want is this:
SELECT
bmt_gene.id AS id
FROM
bmt_gene
JOIN bmt_gene_name ON id = bmt_gene_name.gene_id
edit: Adding the JOIN has to be after the creation of the initial query expression q. The idea is that I make a basic query skeleton then I iterate over all the joins requested by the user and add them to the query.
Can this be done in SQL Alchemy?
The first error (NoForeignKeysError) means that your table lacks a foreign key definition. Fix this if you don't want to write join clauses by hand:
from sqlalchemy.types import Integer
from sqlalchemy.schema import MetaData, Table, Column, ForeignKey
meta = MetaData()
bmt_gene_name = Table(
'bmt_gene_name', meta,
Column('id', Integer, primary_key=True),
Column('gene_id', Integer, ForeignKey('bmt_gene.id')),
# ...
)
Joins in the SQLAlchemy expression language work a little differently from what you might expect. You need to create a Join object in which you join all the tables, and only then provide it to the Select object:
q = select([bmt_gene.c.id])
q = q.where(bmt_gene.c.ensembl_id == 'ENSG00000000457')
j = bmt_gene # Initial table to join.
table_list = [bmt_gene_name, some_other_table, ...]
for table in table_list:
j = j.join(table)
q = q.select_from(j)
The reason you see a subquery in your join is that a Select object is treated like a table (which essentially it is), and you asked to join that table to another one.
You can access the current select_from of a query with the froms attribute, and then join it with another table and update the select_from.
As explained in the documentation, calling select_from usually adds another selectable to the FROM list, however:
Passing a Join that refers to an already present Table or other selectable will have the effect of concealing the presence of that selectable as an individual element in the rendered FROM list, instead rendering it into a JOIN clause.
So you can add a join like this, for example:
q = select([bmt_gene.c.id]).select_from(bmt_gene)
q = q.select_from(
join(q.froms[0], bmt_gene_name,
bmt_gene.c.id == bmt_gene_name.c.gene_id)
)
I need to query multiple entities, something like session.query(Entity1, Entity2), only from a subquery rather than directly from the tables. The docs have something about selecting one entity from a subquery but I can't find how to select more than one, either in the docs or by experimentation.
My use case is that I need to filter the tables underlying the mapped classes by a window function, which in PostgreSQL can only be done in a subquery or CTE.
EDIT: The subquery spans a JOIN of both tables so I can't just do aliased(Entity1, subquery).
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class A(Base):
__tablename__ = "a"
id = Column(Integer, primary_key=True)
bs = relationship("B")
class B(Base):
__tablename__ = "b"
id = Column(Integer, primary_key=True)
a_id = Column(Integer, ForeignKey('a.id'))
e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)
s = Session(e)
s.add_all([A(bs=[B(), B()]), A(bs=[B()])])
s.commit()
# with_labels() here is to disambiguate A.id and B.id.
# without it, you'd see a warning
# "Column 'id' on table being replaced by another column with the same key."
subq = s.query(A, B).join(A.bs).with_labels().subquery()
# method 1 - select_from()
print s.query(A, B).select_from(subq).all()
# method 2 - alias them both. "subq" renders
# once because FROM objects render based on object
# identity.
a_alias = aliased(A, subq)
b_alias = aliased(B, subq)
print s.query(a_alias, b_alias).all()
I was trying to do something like the original question: join a filtered table with another filtered table using an outer join. I was struggling because it's not at all obvious how to:
create a SQLAlchemy query that returns entities from both tables. #zzzeek's answer showed me how to do that: get_session().query(A, B).
use a query as a table in such a query. #zzzeek's answer showed me how to do that too: aliased(A, A.query().filter(...).subquery()).
use an OUTER join between the two entities. Using select_from() after outerjoin() destroys the join condition between the tables, resulting in a cross join. From #zzzeek answer I guessed that if a is aliased(), then you can include a in the query() and also .outerjoin(a), and it won't be joined a second time, and that appears to work.
Following either of #zzzeek's suggested approaches directly resulted in a cross join (combinatorial explosion), because one of my models uses inheritance, and SQLAlchemy added the parent tables outside the inner SELECT without any conditions! I think this is a bug in SQLAlchemy. The approach that I adopted in the end was:
filtered_a = aliased(A, A.query().filter(...).subquery("filtered_a"))
filtered_b = aliased(B, B.query().filter(...).subquery("filtered_b"))
query = get_session().query(filtered_a, filtered_b)
query = query.outerjoin(filtered_b, filtered_a.relation_to_b)
query = query.order_by(filtered_a.some_column)
for a, b in query:
...
I'd like to know if it's possible to generate a SELECT COUNT(*) FROM TABLE statement in SQLAlchemy without explicitly asking for it with execute().
If I use:
session.query(table).count()
then it generates something like:
SELECT count(*) AS count_1 FROM
(SELECT table.col1 as col1, table.col2 as col2, ... from table)
which is significantly slower in MySQL with InnoDB. I am looking for a solution that doesn't require the table to have a known primary key, as suggested in Get the number of rows in table using SQLAlchemy.
Query for just a single known column:
session.query(MyTable.col1).count()
I managed to render the following SELECT with SQLAlchemy on both layers.
SELECT count(*) AS count_1
FROM "table"
Usage from the SQL Expression layer
from sqlalchemy import select, func, Integer, Table, Column, MetaData
metadata = MetaData()
table = Table("table", metadata,
Column('primary_key', Integer),
Column('other_column', Integer) # just to illustrate
)
print select([func.count()]).select_from(table)
Usage from the ORM layer
You just subclass Query (you probably have already anyway) and provide a specialized count method, like the count_star() here.
from sqlalchemy.orm import Query
from sqlalchemy.sql.expression import func
class BaseQuery(Query):
def count_star(self):
count_query = (self.statement.with_only_columns([func.count()])
.order_by(None))
return self.session.execute(count_query).scalar()
Please note that order_by(None) resets the ordering of the query, which is irrelevant to the counting.
Using this method you can have a count(*) on any ORM Query that will honor all the filter and join conditions already specified.
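A minimal usage sketch, assuming a sessionmaker wired up with this BaseQuery as its query class (engine and MyTable are placeholders here):
from sqlalchemy.orm import sessionmaker

# query_cls makes the session hand out BaseQuery instances instead of plain Query
Session = sessionmaker(bind=engine, query_cls=BaseQuery)
session = Session()

# count_star() issues SELECT count(*) with the query's WHERE/JOIN clauses intact
total = session.query(MyTable).filter(MyTable.col1 == 'foo').count_star()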
I needed to do a count of a very complex query with many joins. I was using the joins as filters, so I only wanted to know the count of the actual objects. count() was insufficient, but I found the answer in the docs here:
http://docs.sqlalchemy.org/en/latest/orm/tutorial.html
The code would look something like this (to count user objects):
from sqlalchemy import func
session.query(func.count(User.id)).scalar()
In addition to the "Usage from the ORM layer" part of the accepted answer: count(*) can be done for the ORM using query.with_entities(func.count()), like this:
session.query(MyModel).with_entities(func.count()).scalar()
It can also be used in more complex cases, when we have joins and filters - the important thing here is to place with_entities after joins, otherwise SQLAlchemy could raise the Don't know how to join error.
For example:
we have User model (id, name) and Song model (id, title, genre)
we have user-song data - the UserSong model (user_id, song_id, is_liked), where user_id + song_id is the primary key
We want to get the number of a user's liked rock songs:
SELECT count(*)
FROM user_song
JOIN song ON user_song.song_id = song.id
WHERE user_song.user_id = %(user_id)s
AND user_song.is_liked IS 1
AND song.genre = 'rock'
This query can be generated in the following way:
user_id = 1
query = session.query(UserSong)
query = query.join(Song, Song.id == UserSong.song_id)
query = query.filter(
and_(
UserSong.user_id == user_id,
UserSong.is_liked.is_(True),
Song.genre == 'rock'
)
)
# Note: important to place `with_entities` after the join
query = query.with_entities(func.count())
liked_count = query.scalar()
Complete example is here.
If you are using the SQL Expression style, there is another way to construct the count statement, provided you already have your table object.
Preparation to get the table object (there are different ways; here via reflection):
import sqlalchemy
database_engine = sqlalchemy.create_engine("connection string")
# Populate existing database via reflection into sqlalchemy objects
database_metadata = sqlalchemy.MetaData()
database_metadata.reflect(bind=database_engine)
table_object = database_metadata.tables.get("table_name") # This is just for illustration how to get the table_object
Issuing the count query on the table_object
query = table_object.count()
# This will produce something like, where id is a primary key column in "table_name" automatically selected by sqlalchemy
# 'SELECT count(table_name.id) AS tbl_row_count FROM table_name'
count_result = database_engine.scalar(query)
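Note that the Table.count() construct used above was deprecated and later removed from SQLAlchemy; if it is not available in your version, the same reflected table_object should work with the explicit form shown in the accepted answer:
from sqlalchemy import func, select

query = select([func.count()]).select_from(table_object)
count_result = database_engine.scalar(query)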
I'm not clear on what you mean by "without explicitly asking for it with execute()", so this might be exactly what you are not asking for.
OTOH, this might help others.
You can just run the textual SQL:
from sqlalchemy import text

your_query = """
SELECT count(*) from table
"""
the_count = session.execute(text(your_query)).scalar()
def test_query(val: str):
query = f"select count(*) from table where col1='{val}'"
rtn = database_engine.query(query)
cnt = rtn.one().count
You can inspect the returned object in a debug watch to find how to pull the count out.
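The engine object used above does not have a query() method in SQLAlchemy's public API, so treat that snippet as pseudocode; a minimal sketch of the same idea with a plain connection and a bound parameter (table and column names are placeholders) could look like this:
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///:memory:")  # any configured engine

def count_matching(val):
    # bound parameter instead of string formatting, to avoid SQL injection
    stmt = text("SELECT count(*) FROM some_table WHERE col1 = :val")
    with engine.connect() as conn:
        return conn.execute(stmt, {"val": val}).scalar()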
query = session.query(table.column).filter().with_entities(func.count(table.column.distinct()))
count = query.scalar()
This worked for me. It gives the query:
SELECT count(DISTINCT table.column) AS count_1
FROM table where ...
Below is a way to find the count of any query.
aliased_query = alias(query)
db.session.query(func.count('*')).select_from(aliased_query).scalar()
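If query here is an ORM Query object rather than a Core selectable, you may need to turn it into a subquery first; a hedged variant of the same idea:
sub = query.subquery()
count = db.session.query(func.count('*')).select_from(sub).scalar()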
See the SQLAlchemy reference documentation if you want to explore more options or read more details.