Compute a forecast accuracy metric: absolute difference as a percentage of forecast.
For Friday's S&OP review I need forecast accuracy by SKU for last week (week ending 2026-04-25). Pull from `fact_demand` joined to `dim_skus`. Aggregate units_sold and forecast_units across warehouses per SKU. Accuracy metric (the absolute percentage error per SKU, i.e. the per-SKU term of MAPE): `100 * ABS(units_sold - forecast_units) / forecast_units`. Round to 1 decimal. Sort by error DESC so the worst forecasts are at the top.
`fact_demand`

| Column | Type | Key |
|---|---|---|
| demand_id | INT | PK |
| sku_id | INT | FK → dim_skus |
| warehouse_id | INT | FK → dim_warehouses |
| week_ending | TEXT | |
| units_sold | INT | |
| forecast_units | INT | |
`dim_skus`

| Column | Type | Key |
|---|---|---|
| sku_id | INT | PK |
| sku_code | TEXT | |
| sku_name | TEXT | |
| category | TEXT | |
| unit_cost | REAL | |
| reorder_point | INT | |
Each hint you reveal reduces the XP you can earn. Try the query first.
Filter `week_ending = '2026-04-25'`. JOIN to `dim_skus` for sku_name. GROUP BY sku_name with SUM(units_sold) and SUM(forecast_units).
Compute the error AFTER the SUM: `100.0 * ABS(SUM(units_sold) - SUM(forecast_units)) / NULLIF(SUM(forecast_units), 0)`. The NULLIF guards against a zero forecast total, which would otherwise raise a division error.
Use 100.0 (not 100) to force float division. ROUND(..., 1) for readability. ORDER BY error DESC.
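Putting the hints together, here is a minimal sketch using Python's built-in `sqlite3` module. The table and column names follow the schemas above, but the sample rows are invented for illustration only:

```python
import sqlite3

# In-memory database with a few made-up rows (assumption: any SQLite-compatible
# engine; only the schema matches the exercise, the data does not).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_skus (
    sku_id INTEGER PRIMARY KEY, sku_code TEXT, sku_name TEXT,
    category TEXT, unit_cost REAL, reorder_point INTEGER);
CREATE TABLE fact_demand (
    demand_id INTEGER PRIMARY KEY, sku_id INTEGER, warehouse_id INTEGER,
    week_ending TEXT, units_sold INTEGER, forecast_units INTEGER);
INSERT INTO dim_skus VALUES
    (1, 'SKU-A', 'Widget', 'Hardware', 2.5, 100),
    (2, 'SKU-B', 'Gadget', 'Hardware', 4.0, 50);
INSERT INTO fact_demand VALUES
    (1, 1, 10, '2026-04-25', 90, 100),   -- SKU 1 spans two warehouses
    (2, 1, 11, '2026-04-25', 110, 100),
    (3, 2, 10, '2026-04-25', 40, 80),
    (4, 2, 10, '2026-04-18', 999, 1);    -- wrong week: must be filtered out
""")

# Aggregate across warehouses per SKU, then compute the error on the sums.
query = """
SELECT s.sku_name,
       SUM(f.units_sold)     AS units_sold,
       SUM(f.forecast_units) AS forecast_units,
       ROUND(100.0 * ABS(SUM(f.units_sold) - SUM(f.forecast_units))
             / NULLIF(SUM(f.forecast_units), 0), 1) AS pct_error
FROM fact_demand f
JOIN dim_skus s ON s.sku_id = f.sku_id
WHERE f.week_ending = '2026-04-25'
GROUP BY s.sku_name
ORDER BY pct_error DESC
"""
rows = list(conn.execute(query))
for row in rows:
    print(row)
# → ('Gadget', 40, 80, 50.0)
# → ('Widget', 200, 200, 0.0)
```

Note that the filter on `week_ending` removes the off-week row before aggregation, and `100.0` (a float literal) keeps the division from truncating to an integer.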