Career Change

SQL for Career Changers: No CS Degree Required

A 90-day plan for moving into a data analyst role from an unrelated field — with the portfolio moves, interview realities, and a first SQL exercise.

8 min read

If you’re reading this as a nurse, a teacher, a bartender, an ops coordinator, an accountant, a bench scientist, or anything else that isn’t "software engineer": good news — the data analyst path is open to you, and has been for years. The credential gatekeeping you may have been warned about applies almost entirely to software engineering, not analytics. What actually matters for analytics hiring is a mix of SQL literacy, judgment under ambiguity, and the ability to explain a number to a non-technical stakeholder. None of those require a CS degree. Two of them are easier to acquire if you already have another career under your belt.

You don’t need a CS degree to write SQL

SQL is declarative. You describe the result you want; the engine figures out how to get it. That sentence is the reason the language is learnable in weeks rather than years, and the reason analytics hiring doesn’t filter on the same signals software engineering does.
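To make "declarative" concrete, here is a complete, useful analysis in one statement: total spend per customer, biggest spenders first. No loops, no indexes, no execution order. (The orders table is invented for illustration.)

```sql
-- "Give me each customer's order count and lifetime spend, largest first."
-- The engine decides how to scan, group, and sort; you only describe the result.
SELECT customer_id,
       COUNT(*)    AS orders,
       SUM(amount) AS lifetime_spend
FROM orders
GROUP BY customer_id
ORDER BY lifetime_spend DESC
LIMIT 10;
```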

Compare that to a typical software role, where you’re expected to reason about memory, concurrency, testing frameworks, deployment pipelines, and a stack-specific language idiom. That stack is what the CS degree quietly teaches, and why bootcamp-to-SWE transitions take so long. None of it is in the analyst job description.

What’s actually in the analyst job description: write a query, understand the business context of the result, communicate the result to someone who doesn’t know SQL. If you came from a role that involved explaining complicated things to stakeholders — healthcare, education, operations, client services — you already have half the job. You just need the query-writing half.

What hiring managers actually look for

I’ve been on the interview panel for analyst roles at three companies. The signal hiring managers weight most heavily isn’t clever SQL. It’s trustworthy SQL. Two questions run in the background of every interview:

  1. If I give this person an ambiguous ticket, will they come back with a defensible answer or a guess?
  2. If they make a mistake, will they notice it themselves or will I find it in the dashboard three weeks later?

That’s it. Everything else — window functions, dialect trivia, query optimization — is a distant second. Which means your job as a career changer is to build evidence of judgment, not syntax mastery. A portfolio that shows "I can write a medium-complexity query, and here are the assumptions I wrote down next to it" beats a portfolio of 50 LeetCode-style SQL puzzles, every time.

A realistic 90-day learning plan

Fair warning: 90 days only works if you can sustain 8–10 hours per week of focused practice. If you can do more, it accelerates. If you can do less, just stretch the plan — the sequence matters more than the calendar.

Weeks 1–3: Core SQL syntax. SELECT, WHERE, GROUP BY, the main join types (INNER, LEFT, RIGHT, FULL OUTER, CROSS), subqueries, CTEs. Use one free resource all the way through rather than hopping — consistency compounds. By the end of week 3, you should be able to write a query that joins three tables, aggregates a numeric column, and filters the result, without looking things up.
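A sketch of that end-of-week-3 milestone, assuming a hypothetical shop schema (orders, customers, and regions are made-up table names): three tables joined, one aggregate, one filter.

```sql
-- Revenue by region for 2024, small regions excluded.
SELECT r.region_name,
       SUM(o.amount) AS total_revenue
FROM orders o
JOIN customers c ON c.customer_id = o.customer_id
JOIN regions   r ON r.region_id   = c.region_id
WHERE o.ordered_at >= '2024-01-01'
GROUP BY r.region_name
HAVING SUM(o.amount) >= 10000
ORDER BY total_revenue DESC;
```

If you can write this shape from memory, week 1–3 has done its job.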

Weeks 4–6: Window functions and dirty data. ROW_NUMBER, RANK, LAG, LEAD, running totals. Simultaneously, start practicing against a dataset that has real-world problems — NULLs, duplicates, bad dates. The caseSQL starter path is designed for exactly this window.
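These two skills combine naturally. A sketch against the fact_purchases table used in the exercises below: ROW_NUMBER collapses accidentally double-loaded rows, then a window SUM produces a running total over the clean data.

```sql
WITH deduped AS (
  SELECT customer_id, purchase_amount, purchased_at,
         -- identical (customer, timestamp, amount) rows are load duplicates
         ROW_NUMBER() OVER (
           PARTITION BY customer_id, purchased_at, purchase_amount
           ORDER BY purchased_at
         ) AS rn
  FROM fact_purchases
)
SELECT purchased_at,
       purchase_amount,
       SUM(purchase_amount) OVER (ORDER BY purchased_at) AS running_revenue
FROM deduped
WHERE rn = 1        -- keep one copy of each duplicate group
ORDER BY purchased_at;
```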

Weeks 7–9: Open-ended problems. Stop doing puzzle-style exercises. Start picking questions like "how is the business doing" and forcing yourself to write a full analysis — assumptions, query, result, one paragraph of commentary. Quality over quantity: one full analysis a day beats five puzzle solutions.

Weeks 10–12: Portfolio and interview prep. Pick 3 of your analyses and polish them into shareable form. Set up a GitHub repo, write a short README for each, put them in order. In parallel, start doing mock interviews — ideally with a working analyst friend, but even narrating your thought process out loud into a voice memo helps.

Exercise

Starter exercise: write a query that returns the monthly revenue trend for the last 6 months, so you have a graph-ready result. Include month, revenue, month-over-month delta, and MoM percent change. Round percent to 1 decimal.

Schema hint

fact_purchases(customer_id, purchase_amount, purchased_at). Use STRFTIME('%Y-%m', ...) to bucket to month.

Expected

Six rows, one per month, sorted chronologically. Columns: month, revenue, prev_month_revenue, mom_delta, mom_pct.

Show solution
WITH monthly AS (
  SELECT
    STRFTIME('%Y-%m', purchased_at) AS month,
    SUM(purchase_amount) AS revenue
  FROM fact_purchases
  -- Anchor to the start of the month so the result is exactly six
  -- calendar months (the current month will be partial).
  WHERE purchased_at >= DATE('now', 'start of month', '-5 months')
  GROUP BY month
)
SELECT
  month,
  revenue,
  LAG(revenue) OVER (ORDER BY month) AS prev_month_revenue,
  revenue - LAG(revenue) OVER (ORDER BY month) AS mom_delta,
  ROUND(
    (revenue - LAG(revenue) OVER (ORDER BY month)) * 100.0
    / NULLIF(LAG(revenue) OVER (ORDER BY month), 0),
    1
  ) AS mom_pct
FROM monthly
ORDER BY month;

How to build a portfolio without a job

The single biggest unlock for career changers is realizing that a portfolio doesn’t require a data job to build. You need three things: a public dataset, a notebook or markdown file that shows your work, and a GitHub repo that hosts it. That’s it.

Good public datasets for analyst portfolios: NYC TLC trip data (taxis), the Citibike dataset, the Nashville housing dataset, the Brazilian Olist e-commerce dataset, any of the Kaggle marketing or retail datasets. Pick one. Spend two weeks on it. Produce a portfolio piece.

What makes a portfolio piece senior-looking, not junior-looking:

  • A clearly stated question at the top (not a dataset description — an actual question someone would ask).
  • Assumptions explicitly listed before any SQL appears.
  • Data quality notes — what did you have to clean, why, and what did you leave alone on purpose.
  • The query, readable and commented where non-obvious.
  • A one-paragraph conclusion that a non-technical reader can follow.
  • A caveats section at the bottom — what would you check if you had another day on this.
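In a SQL-first portfolio, that whole structure can fit in one commented .sql file. A skeleton (every name, note, and threshold below is placeholder text to show the shape, not a real finding):

```sql
-- QUESTION: Is revenue growth coming from repeat customers or new ones?
-- ASSUMPTIONS: refunds excluded; "repeat" means 2+ lifetime purchases.
-- DATA QUALITY: collapsed exact-duplicate rows; left NULL campaign_ids
--               alone on purpose (they mark organic purchases, not dirt).

WITH labeled AS (
  SELECT STRFTIME('%Y-%m', purchased_at) AS month,
         purchase_amount,
         COUNT(*) OVER (PARTITION BY customer_id) > 1 AS is_repeat
  FROM fact_purchases
)
SELECT month,
       SUM(CASE WHEN is_repeat THEN purchase_amount ELSE 0 END) AS repeat_revenue,
       SUM(CASE WHEN is_repeat THEN 0 ELSE purchase_amount END) AS new_revenue
FROM labeled
GROUP BY month
ORDER BY month;

-- CONCLUSION: one paragraph a non-technical reader can follow.
-- CAVEATS: given another day, check whether "repeat" should be time-bounded,
--          and whether any dedupe rule dropped legitimate same-second purchases.
```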

That structure is the best signal you can send that you think like an analyst, not like a student. Three of those in a public repo outperform twenty LeetCode solutions in every screening conversation I’ve sat in.

The interview reality

Most analyst interviews have four stages: a phone screen (resume-walk + why-analytics), a SQL screen (live coding, 45–60 min), a case study (open-ended business question, take-home or 90-min on-site), and a behavioral round (collaboration, communication, handling ambiguity).

The SQL screen is the one people stress about, but it’s rarely the stage that kills candidates. The case study is. Case studies are where candidates trained on tutorial SQL fall apart — they can write the queries but can’t handle the ambiguity of the prompt.

Practice the case study explicitly. Give yourself a business prompt (search "data analyst case study interview" for sample datasets and prompts), set a 90-minute timer, and produce a deliverable: 1 summary slide, 2–3 supporting charts, 4–5 bullet points of insight. Do three of these before your first real interview and the format will stop feeling foreign.

Exercise

Case-study style warm-up: given the star schema in this post, produce three chart-ready query results that together tell the story "is the marketing function growing." Don’t just pick three queries — pick three that together form an argument.

Schema hint

Think: top-line revenue trend, efficiency trend (ROAS or CPA), and mix shift. The point is the argument, not the queries individually.

Expected

Three queries, each producing a chart-ready result shape, with commentary above each explaining why these three and not others.

Show solution
-- Argument: marketing is growing if (1) revenue trend is up,
-- (2) efficiency is stable-or-up, and (3) growth isn't concentrated
-- in a single channel.

-- 1. Top-line trend (monthly).
SELECT STRFTIME('%Y-%m', purchased_at) AS month,
       SUM(purchase_amount)           AS revenue
FROM fact_purchases
WHERE purchased_at >= DATE('now', '-6 months')
GROUP BY month
ORDER BY month;

-- 2. Efficiency trend — revenue per send, monthly.
SELECT STRFTIME('%Y-%m', s.sent_at) AS month,
       COUNT(*)                                        AS sends,
       COALESCE(SUM(p.purchase_amount), 0)             AS revenue,
       ROUND(COALESCE(SUM(p.purchase_amount), 0) * 1.0
             / NULLIF(COUNT(*), 0), 3)                 AS revenue_per_send
FROM fact_sends s
LEFT JOIN fact_purchases p
  ON p.customer_id = s.customer_id
 AND p.purchased_at BETWEEN s.sent_at AND DATE(s.sent_at, '+30 days')
WHERE s.sent_at >= DATE('now', '-6 months')
GROUP BY month
ORDER BY month;

-- 3. Channel mix — revenue share by campaign, last 30 days vs previous 30.
WITH windows AS (
  SELECT campaign_id,
         SUM(CASE WHEN purchased_at >= DATE('now', '-30 days')
                  THEN purchase_amount ELSE 0 END) AS last_30,
         -- Half-open window: a purchase exactly 30 days ago counts once.
         SUM(CASE WHEN purchased_at >= DATE('now', '-60 days')
                   AND purchased_at < DATE('now', '-30 days')
                  THEN purchase_amount ELSE 0 END) AS prev_30
  FROM fact_purchases
  GROUP BY campaign_id
)
SELECT c.name, w.last_30, w.prev_30,
       ROUND((w.last_30 - w.prev_30) * 100.0 / NULLIF(w.prev_30, 0), 1) AS pct_change
FROM windows w
JOIN dim_campaigns c ON c.id = w.campaign_id
WHERE w.last_30 > 0 OR w.prev_30 > 0
ORDER BY w.last_30 DESC;

The interview signal those three queries send together: you think about arguments not just queries, you understand trend + efficiency + mix as the three legs of a growth story, and you know when to compare windows rather than just aggregating one. That’s the career-changer advantage — you already know how to tell a story to a non-technical audience, you’re just learning a new medium.

Things career changers worry about that don’t matter

I’ve coached a few dozen career changers through analytics job searches, and the same three non-problems eat up 80% of their anxiety. None of them are worth the time:

  • "I don’t have a math background." Analytics isn’t statistics. You need arithmetic, percentages, the difference between mean and median, and a working feel for what’s a "reasonable" number in your business. You already have all of that from running a household budget.
  • "I’m too old." Every analytics team I’ve been on has had people in their 30s, 40s, and 50s — often ex-teachers, ex-nurses, ex-ops managers. Business context and communication skill compound; SQL is the new layer on top.
  • "I don’t know which tool to learn first." Learn SQL. If you have time left over, learn either Python or advanced Excel. Skip Tableau/Looker/Power BI until you have a job offer — most teams train on their specific BI tool in the first week.

What matters instead: you can write a query against a real schema without peeking at a tutorial, you can talk about a number without jargon, and you have one polished portfolio piece that demonstrates both. That’s the bar.

Build these reflexes against real data.

caseSQL runs 100+ missions against a realistic star schema with planted data-quality issues. Free, in-browser, no account needed.
