Reuse same query across multiple group-bys?

I have a DB query that matches the desired rows. Let's say (for simplicity):
select * from stats where id in (1, 2);
Now I want to extract several frequency statistics (count of distinct values) for multiple columns, across these matching rows:
-- `stats.status` is one such column
select status, count(*) from stats where id in (1, 2) group by 1 order by 2 desc;
-- `stats.category` is another column
select category, count(*) from stats where id in (1, 2) group by 1 order by 2 desc;
-- etc.
Is there a way to re-use the same underlying query in SQLAlchemy? Raw SQL works too.
Or even better, return all the histograms at once, in a single command?
I'm mostly interested in performance: I don't want Postgres to repeat the same row-matching once for each column. The only change from query to query is which column is used for the histogram grouping; otherwise it's the same set of rows.

I don't want Postgres to run the same row-matching many times
That's one of the motivations behind the GROUPING SETS functionality. Try this model:
SELECT category, status, count(*)
FROM stats
WHERE id IN (1, 2)
GROUP BY GROUPING SETS ((category), (status));

User Abelisto's comment and the other answer both have the correct SQL required to generate the histograms for multiple fields in a single query.
The only edit I would suggest is adding an ORDER BY clause, since it seems from the OP's attempts that the more frequent labels are desired at the top of the result. You might find that sorting the results in Python rather than in the database is simpler; in that case, disregard the complexity brought on by the ORDER BY clause (see the sketch after the query below).
Thus, the modified query would be:
SELECT category, status, count(*)
FROM stats
WHERE id IN (1, 2)
GROUP BY GROUPING SETS (
    (category), (status)
)
ORDER BY
    GROUPING(category, status), 3 DESC
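If you take the client-side sorting route mentioned above instead, drop the ORDER BY and sort the fetched rows in Python. A minimal sketch, assuming rows holds the (category, status, count) tuples returned by your driver, e.g. from cursor.fetchall():
# Sort the combined histogram by count, most frequent labels first.
rows = sorted(rows, key=lambda r: r[2], reverse=True)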
It is also possible to express the same query using SQLAlchemy.
from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Stats(Base):
    __tablename__ = 'stats'
    id = Column(Integer, primary_key=True)
    category = Column(Text)
    status = Column(Text)

stmt = select(
    [Stats.category, Stats.status, func.count(1)]
).where(
    Stats.id.in_([1, 2])
).group_by(
    func.grouping_sets(tuple_(Stats.category),
                       tuple_(Stats.status))
).order_by(
    func.grouping(Stats.category, Stats.status),
    func.count(1).desc()
)
Inspecting the output, we see that it generates the desired query (extra newlines added to the output for legibility):
print(stmt.compile(compile_kwargs={'literal_binds': True}))
# outputs:
SELECT stats.category, stats.status, count(1) AS count_1
FROM stats
WHERE stats.id IN (1, 2)
GROUP BY GROUPING SETS((stats.category), (stats.status))
ORDER BY grouping(stats.category, stats.status), count(1) DESC
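To consume the combined result, the rows can be split back into one histogram per column. A sketch (untested), assuming an engine connected to the database; in each output row, the column from the other grouping set comes back as None (data containing genuine NULLs would need GROUPING() to disambiguate):
from sqlalchemy import create_engine

engine = create_engine('postgresql:///mydb')  # assumed connection URL
with engine.connect() as conn:
    category_hist, status_hist = {}, {}
    for category, status, count in conn.execute(stmt):
        if status is None:
            # row produced by the (category) grouping set
            category_hist[category] = count
        else:
            # row produced by the (status) grouping set
            status_hist[status] = count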

Related

How do I nest these queries in one Replace Into query?

I have three queries and another table called output_table. This code works, but it needs to be executed as one REPLACE INTO query. I know this involves nesting and subqueries, but I have no idea if it's possible, since my key is the set of DISTINCT coin datapoints from target_currency.
How do I rewrite queries 2 and 3 so they execute within query 1, i.e. as part of the REPLACE INTO query instead of the individual UPDATE ones?
1. conn3.cursor().execute(
"""REPLACE INTO coin_best_returns(coin) SELECT DISTINCT target_currency FROM output_table"""
)
2. conn3.cursor().execute(
"""UPDATE coin_best_returns SET
highest_price = (SELECT MAX(ask_price_usd) FROM output_table WHERE coin_best_returns.coin = output_table.target_currency),
lowest_price = (SELECT MIN(bid_price_usd) FROM output_table WHERE coin_best_returns.coin = output_table.target_currency)"""
)
3. conn3.cursor().execute(
"""UPDATE coin_best_returns SET
highest_market = (SELECT exchange FROM output_table WHERE coin_best_returns.highest_price = output_table.ask_price_usd),
lowest_market = (SELECT exchange FROM output_table WHERE coin_best_returns.lowest_price = output_table.bid_price_usd)"""
)
You can do it with the help of some window functions, a subquery, and an inner join. The version below is pretty lengthy, but it is less complicated than it may appear. It uses window functions in a subquery to compute the needed per-currency statistics, and factors this out into a common table expression to facilitate joining it to itself.
Other than the inline comments, the main reason for the complication is original query number 3. Queries (1) and (2) could easily be combined as a single, simple, aggregate query, but the third query is not as easily addressed. To keep the exchange data associated with the corresponding ask and bid prices, this query uses window functions instead of aggregate queries. This also provides a vehicle different from DISTINCT for obtaining one result per currency.
Here's the bare query:
WITH output_stats AS (
    -- The ask and bid information for every row of output_table, every row
    -- augmented by the needed maximum ask and minimum bid statistics
    SELECT
        target_currency as tc,
        ask_price_usd as ask,
        bid_price_usd as bid,
        exchange as market,
        MAX(ask_price_usd) OVER (PARTITION BY target_currency) as high,
        ROW_NUMBER() OVER (
            PARTITION BY target_currency, ask_price_usd ORDER BY exchange DESC)
            as ask_rank,
        MIN(bid_price_usd) OVER (PARTITION BY target_currency) as low,
        ROW_NUMBER() OVER (
            PARTITION BY target_currency, bid_price_usd ORDER BY exchange ASC)
            as bid_rank
    FROM output_table
)
REPLACE INTO coin_best_returns(
    -- you must, of course, include all the columns you want to fill in the
    -- upsert column list
    coin,
    highest_price,
    lowest_price,
    highest_market,
    lowest_market)
SELECT
    -- ... and select a value for each column
    asks.tc,
    asks.ask,
    bids.bid,
    asks.market,
    bids.market
FROM output_stats asks
JOIN output_stats bids
    ON asks.tc = bids.tc
WHERE
    -- These conditions choose exactly one asks row and one bids row
    -- for each currency
    asks.ask = asks.high
    AND asks.ask_rank = 1
    AND bids.bid = bids.low
    AND bids.bid_rank = 1
Note well that unlike the original query 3, this will consider only exchange values associated with the target currency for setting the highest_market and lowest_market columns in the destination table. I'm supposing that that's what you really want, but if not, then a different strategy will be needed.
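Wiring this into the Python code from the question is then a single execute call in place of the three originals. A sketch, reusing the OP's conn3 connection (the query body is abbreviated here; paste in the full statement from above):
combined_sql = """
WITH output_stats AS ( ... )            -- the full CTE from above
REPLACE INTO coin_best_returns ( ... )  -- the full column list from above
SELECT ...
"""
cursor = conn3.cursor()
cursor.execute(combined_sql)
conn3.commit()  # assuming autocommit is disabled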

How to compute cumulative sum of a count field in Django

I have a model that registers some kind of event and the date on which it occurs. I need to calculate: 1) the count of events for each date, and 2) the cumulative count of events over time.
My model looks something like this:
class Event(models.Model):
    date = models.DateField()
    ...
Calculating 1) is pretty straightforward, but I'm having trouble calculating the cumulative sum. I tried something like this:
query_set = Event.objects.values("date") \
    .annotate(count=Count("date")) \
    .annotate(cumcount=Window(Sum("count"), order_by="date"))
But I'm getting this error:
Cannot compute Sum('count'): 'count' is an aggregate
Edit: Ideally, I'd like to have a query set equivalent to this SQL query, using Django's ORM:
SELECT date,
COUNT(date) as count,
SUM(COUNT(date)) OVER(ORDER BY date) acc_count
FROM event_event
GROUP BY date
In some cases, performing an aggregate of an aggregate, for example MAX(SUM(...)), is not valid in SQL, whether you're using the ORM or not.
In your case you can do it with a raw query (as already mentioned in the other answer(s) and in your question), or using the ORM as follows:
subquery = (
    Event.objects.filter(date=OuterRef("date"))  # we need this for the join
    .values("date")                              # this creates the GROUP BY
    .annotate(subcount=Count("date"))            # the aggregate function
)

qs = Event.objects.values("date").annotate(count=Count("date")).annotate(
    cumcount=Window(Sum(subquery.values("subcount")), order_by="date")
    # above we use Sum over the subquery;
    # it can be replaced with any aggregate function we want
).values("date", "count", "cumcount")
It will generate the following SQL:
SELECT
"app_event"."date",
COUNT("app_event"."date") AS "count",
SUM((SELECT
COUNT(U0."date") AS "subcount"
FROM
"app_event" U0
WHERE
U0."date" = ("app_event"."date")
GROUP BY
U0."date"
)) OVER ( ORDER BY "app_event"."date")
AS "cumcount"
FROM
"app_event"
GROUP BY
"app_event"."date"
It's surprisingly common to see developers wanting to convert a SQL query into a Django QuerySet.
In this case, since the OP already knows SQL, they might be better off just performing a raw SQL query.
There are different ways to go about it, like executing custom SQL directly.
from django.db import connection

def my_custom_sql(self):
    with connection.cursor() as cursor:
        cursor.execute("""
            SELECT date,
                   COUNT(date) AS count,
                   SUM(COUNT(date)) OVER (ORDER BY date) AS acc_count
            FROM event_event
            GROUP BY date
        """)
Then, call cursor.fetchone() or cursor.fetchall() to return the resulting rows.
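If you'd rather get the rows back as dictionaries than as tuples, the standard helper from the Django docs zips the cursor's column description with each row:
def dictfetchall(cursor):
    # Return all rows from a cursor as a list of dicts,
    # keyed by the column names in cursor.description.
    columns = [col[0] for col in cursor.description]
    return [dict(zip(columns, row)) for row in cursor.fetchall()]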

Multiple column_property which use the same query but return different columns in sqlalchemy

I have two column properties which use the same query but return different columns:
action_time = column_property(
    select([Action.created_at])
    .where(Action.id == id)
    .order_by(desc(Action.created_at))
    .limit(1)
)
action_customer = column_property(
    select([Action.customer_id])
    .where(Action.id == id)
    .order_by(desc(Action.created_at))
    .limit(1)
)
The SQL query that is produced has two subqueries, one for each of the properties. That means that if I'd like to add a few more similar properties, the SQL query will end up with N subqueries.
I am wondering whether it is possible to have one LEFT OUTER JOIN that is shared by multiple column_property attributes?

Sum numeric values from different tables in one query

In SQL, I can sum two counts like this:
SELECT (
(SELECT count(*) FROM a WHERE val=42)
+
(SELECT count(*) FROM b WHERE val=42)
)
How do I perform this query with the Django ORM?
The closest I got is
a.objects.filter(val=42).order_by().values_list('id', flat=True).union(
b.objects.filter(val=42).order_by().values_list('id', flat=True)
).count()
This works fine if the returned count is small, but it seems bad if there are a lot of rows that the database must hold in memory just to count them.
Your solution can be only slightly simplified, by using values('pk') instead of values_list('id', flat=True): that affects only the type of the rows in the output, and the underlying SQL of both querysets is the same:
SELECT id FROM a WHERE val=42 UNION SELECT id FROM b WHERE val=42
and the .count() method merely wraps it in an outer counting query:
SELECT COUNT(*) FROM (... subquery ...)
A database backend doesn't necessarily hold all the values in memory; it can count the rows as it produces them and discard them (not verified).
Similarly, if you run a simple SELECT COUNT(id) FROM a, it doesn't need to collect the id values.
Subqueries of the form SELECT count(*) FROM a WHERE val=42 inside a bigger query are not possible, because Django doesn't use lazy evaluation for aggregations and evaluates them immediately.
The evaluation can be postponed, e.g. by grouping by some expression that has only one possible value, such as GROUP BY (i >= 0) (or by an outer reference, if that would work), but the query plan can be worse.
Another problem is that a SELECT is not possible without a table. Therefore I will use an unimportant row of an unimportant table as the base of the query.
Example:
qs = Unimportant.objects.filter(pk=unimportant_pk).values('id').annotate(
    total_a=a.objects.filter(val=42).order_by().values('val')
        .annotate(cnt=models.Count('*')).values('cnt'),
    total_b=b.objects.filter(val=42).order_by().values('val')
        .annotate(cnt=models.Count('*')).values('cnt'),
)
It is not nice, but the two subqueries are independent and could easily be parallelized by the database:
SELECT
    id,
    (SELECT COUNT(*) AS cnt FROM a WHERE val=42 GROUP BY val) AS total_a,
    (SELECT COUNT(*) AS cnt FROM b WHERE val=42 GROUP BY val) AS total_b
FROM unimportant WHERE id = unimportant_pk
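Hypothetical usage of qs from above: exactly one row comes back, and since a COUNT subquery with GROUP BY yields no row (hence None) when nothing matches, guard the addition:
row = qs.get()  # one dict: {'id': ..., 'total_a': ..., 'total_b': ...}
total = (row['total_a'] or 0) + (row['total_b'] or 0)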
The Django docs confirm that a simple solution doesn't exist:
Using aggregates within a Subquery expression
...
... This is the only way to perform an aggregation within a Subquery, as using aggregate() attempts to evaluate the queryset (and if there is an OuterRef, this will not be possible to resolve).

Calculate Max of Sum of an annotated field over a grouped by query in Django ORM?

To keep it simple, I have four tables (A, B, Category and Relation). The Relation table stores the Intensity of A in B, and Category stores the type of B.
A <--- Relation ---> B ---> Category
(So the relation between A and B is n to n, while the relation between B and Category is n to 1.)
I need an ORM to group Relation records by Category and A, then calculate Sum of Intensity in each (Category, A) (seems simple till here), then I want to annotate Max of calculated Sum in each Category.
My code is something like:
A.objects.values('B_id').annotate(AcSum=Sum('Intensity')).annotate(Max('AcSum'))
Which throws the error:
django.core.exceptions.FieldError: Cannot compute Max('AcSum'): 'AcSum' is an aggregate
The django-group-by package fails with the same error.
For further information please also see this stackoverflow question.
I am using Django 2 and PostgreSQL.
Is there a way to achieve this using ORM, if there is not, what would be the solution using raw SQL expression?
Update
After lots of struggling I found out that what I wrote was indeed an aggregation; however, what I want is to find the maximum of AcSum of each A in each Category. So I suppose I have to group the result once more after the AcSum calculation. Based on this insight I found a Stack Overflow question which asks about the same concept (it was asked 1 year, 2 months ago without any accepted answer).
Chaining another values('id') onto the queryset functions neither as a group_by nor as a filter for output attributes; it removes AcSum from the set. Adding AcSum to values() is also not an option, due to the changes it causes in the grouped-by result set.
I think what I am trying to do is re-group the grouped-by query based on the fields inside a column (i.e. id).
Any thoughts?
You can't do an aggregate of an aggregate, Max(Sum()); it's not valid in SQL, whether you're using the ORM or not. Instead, you have to join the table to itself to find the maximum. You can do this using a subquery. The code below looks right to me, but keep in mind I don't have anything to run it on, so it might not be perfect.
from django.db.models import OuterRef, Q, Subquery, Sum

annotation = {
    'AcSum': Sum('intensity')
}
# The basic query is on Relation grouped by A and Category, annotated
# with the Sum of intensity
query = Relation.objects.values('a', 'b__category').annotate(**annotation)
# The subquery is joined to the outer query on the Category
sub_filter = Q(b__category=OuterRef('b__category'))
# The subquery is grouped by A and Category and annotated with the Sum
# of intensity, which is then ordered descending so that when a LIMIT 1
# is applied, you get the Max.
subquery = Relation.objects.filter(sub_filter).values(
    'a', 'b__category').annotate(**annotation).order_by(
    '-AcSum').values('AcSum')[:1]
query = query.annotate(max_intensity=Subquery(subquery))
This should generate SQL like:
SELECT a_id, category_id,
(SELECT SUM(U0.intensity) AS AcSum
FROM RELATION U0
JOIN B U1 on U0.b_id = U1.id
WHERE U1.category_id = B.category_id
GROUP BY U0.a_id, U1.category_id
ORDER BY SUM(U0.intensity) DESC
LIMIT 1
) AS max_intensity
FROM Relation
JOIN B on Relation.b_id = B.id
GROUP BY Relation.a_id, B.category_id
It may be more performant to eliminate the join in Subquery by using a backend specific feature like array_agg (Postgres) or GroupConcat (MySQL) to collect the Relation.ids that are grouped together in the outer query. But I don't know what backend you're using.
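For illustration, a sketch (untested) of the Postgres variant: collect the grouped Relation ids with ArrayAgg in the outer query, so that a subquery could filter on those ids instead of re-joining B:
from django.contrib.postgres.aggregates import ArrayAgg

query = Relation.objects.values('a', 'b__category').annotate(
    AcSum=Sum('intensity'),
    relation_ids=ArrayAgg('id'),  # the ids grouped into each (a, category) row
)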
Something like this should work for you. I couldn't test it myself, so please let me know the result:
from django.db.models import F, Max, Sum

Relation.objects.annotate(
    b_category=F('B__Category')
).values(
    'A', 'b_category'
).annotate(
    SumIntensityPerCategory=Sum('Intensity')
).values(
    'A', MaxIntensitySumPerCategory=Max('SumIntensityPerCategory')
)
