GROUP BY date bucket + week-over-week comparison
Branch 13's manager pinged me — teller activity feels off this week. I want a fast week-over-week sanity check before the regional huddle. From `retailops_branch_activity`, pull branch 13 only, two weeks of data: 2025-11-16 through 2025-11-29. Bucket the rows into `last_7d` (2025-11-23 onward) and `prior_7d` (everything before). Two columns of measures: total `foot_traffic` and total `deposits_taken_count`. Output four columns — `branch_id`, `bucket`, `foot_traffic`, `deposits` — sorted with `last_7d` first.
| Column | Type | Key |
|---|---|---|
| activity_id | INT | PK |
| branch_id | INT | FK → retail_branches |
| activity_date | TEXT | |
| teller_count | INT | |
| account_openings | INT | |
| foot_traffic | INT | |
| deposits_taken_count | INT | |
Each hint you reveal reduces the XP you can earn. Try the query first.
Two buckets, one branch, two weeks. The bucketing is a CASE expression on `activity_date` — anything from 2025-11-23 onward falls into the `last_7d` bucket, the rest into `prior_7d`. Because `activity_date` is stored as ISO-8601 TEXT, a plain string comparison (`>= '2025-11-23'`) sorts chronologically, so no date parsing is needed.
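A minimal sketch of that bucketing, run against a throwaway SQLite table (the three dates are made up for illustration):

```python
import sqlite3

# Because activity_date is ISO-8601 TEXT, a plain string comparison in the
# CASE expression buckets rows chronologically.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE retailops_branch_activity (activity_date TEXT)")
conn.executemany(
    "INSERT INTO retailops_branch_activity VALUES (?)",
    [("2025-11-18",), ("2025-11-23",), ("2025-11-27",)],
)
buckets = conn.execute("""
    SELECT activity_date,
           CASE WHEN activity_date >= '2025-11-23'
                THEN 'last_7d' ELSE 'prior_7d' END AS bucket
    FROM retailops_branch_activity
""").fetchall()
print(buckets)
# → [('2025-11-18', 'prior_7d'), ('2025-11-23', 'last_7d'), ('2025-11-27', 'last_7d')]
```

Note the boundary date 2025-11-23 itself lands in `last_7d`, matching the "2025-11-23 onward" wording.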
Filter to `branch_id = 13` and `activity_date BETWEEN '2025-11-16' AND '2025-11-29'` in WHERE — narrowing the dataset before the GROUP BY keeps the query cheap and the result easy to read.
GROUP BY `branch_id` and the bucket expression. SUM `foot_traffic` and `deposits_taken_count` for the totals, aliasing the latter as `deposits` to match the required output columns. Sort by the bucket ASC so `last_7d` lands above `prior_7d` alphabetically ('l' sorts before 'p') — the order the brief asked for.
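Putting the three hints together, here is a sketch of the full query against synthetic rows (all numbers below are invented; only the query shape matters). A row for branch 14 is included to show the WHERE filter doing its job:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE retailops_branch_activity (
        activity_id INT PRIMARY KEY,
        branch_id INT,
        activity_date TEXT,
        teller_count INT,
        account_openings INT,
        foot_traffic INT,
        deposits_taken_count INT
    )
""")
conn.executemany(
    "INSERT INTO retailops_branch_activity VALUES (?,?,?,?,?,?,?)",
    [
        (1, 13, "2025-11-17", 4, 2, 120, 35),  # prior_7d
        (2, 13, "2025-11-20", 4, 1, 140, 40),  # prior_7d
        (3, 13, "2025-11-24", 3, 0,  90, 22),  # last_7d
        (4, 13, "2025-11-28", 3, 1,  95, 25),  # last_7d
        (5, 14, "2025-11-24", 5, 2, 300, 80),  # other branch, filtered out
    ],
)
result = conn.execute("""
    SELECT branch_id,
           CASE WHEN activity_date >= '2025-11-23'
                THEN 'last_7d' ELSE 'prior_7d' END AS bucket,
           SUM(foot_traffic) AS foot_traffic,
           SUM(deposits_taken_count) AS deposits
    FROM retailops_branch_activity
    WHERE branch_id = 13
      AND activity_date BETWEEN '2025-11-16' AND '2025-11-29'
    GROUP BY branch_id, bucket
    ORDER BY bucket ASC
""").fetchall()
print(result)
# → [(13, 'last_7d', 185, 47), (13, 'prior_7d', 260, 75)]
```

One SQLite-specific convenience used here: grouping and ordering by the `bucket` alias works in SQLite (and MySQL/Postgres), but some engines require repeating the full CASE expression in GROUP BY.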