Prep guide

What entry-level data analyst interviews actually test

The four-round loop every analyst sees, the rubric interviewers quietly score on, the red flags that kill offers, and a focused four-week plan. If you’re preparing for a first or second analyst role and you want to know how interviewers are actually evaluating you — this is the reference.

Part 1

The interview loop

A typical loop runs four to six rounds over two to four weeks. The flesh on the skeleton varies sharply by employer type, but the skeleton itself is remarkably consistent.

  1. Round 1 · Recruiter screen

    ~30 minutes. Resume, salary expectations, why the company. Low technical content; high attrition from candidates who can’t articulate a coherent “why this role.”

  2. Round 2 · Technical screen

    45–60 minutes. Almost always SQL, sometimes with light Python or Excel. Most common reason candidates are rejected — SQL appears in roughly 95% of data-analyst interviews.

    Live coding

    Default at FAANG / tech. Shared CoderPad, HackerRank, or StrataScratch. Narration matters.

    Whiteboard / verbal

    Finance, consulting, compliance. Cares more about logic than syntax.

    Take-home

    Series-A to pre-IPO startups. 48–72 hrs on a CSV + loose prompt (“tell us something interesting about churn”).

  3. Round 3 · Case / domain round

    Take-home dataset (48–72 hrs) at data-forward companies, live business-case discussion elsewhere. Probes whether you can turn SQL output into a business recommendation.

  4. Round 4 · Behavioral / manager fit

    Behavioral and manager-fit are often combined into one round. FAANG loops add a data-sense round (metric design, A/B testing) and a bar-raiser behavioral. Amazon uniquely frontloads Leadership Principles across every round.

Entry-level vs mid-level

Entry-level (0–2 yrs) expectations: clean joins, GROUP BY with HAVING, basic window functions (ROW_NUMBER, running SUM), date extraction, and articulating a clear thought process. Mid-level (2–5 yrs) adds query optimization, data-model critique, experimentation statistics, and stakeholder-influence stories. The fastest way to signal mid-level as an entry-level candidate: narrate trade-offs out loud. “I could use a self-join here but a window function will scan the table once instead of twice.”
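That self-join vs. window-function trade-off is easy to demonstrate concretely. Here is a minimal sketch using Python's built-in sqlite3 with a hypothetical `sales` table (table name and data are invented for illustration; window functions require SQLite ≥ 3.25):

```python
import sqlite3

# In-memory demo: a running total computed two ways.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (day INTEGER, amount INTEGER);
    INSERT INTO sales VALUES (1, 10), (2, 20), (3, 30);
""")

# Self-join version: each row re-scans the table, O(n^2) comparisons.
self_join = conn.execute("""
    SELECT a.day, SUM(b.amount) AS running_total
    FROM sales a JOIN sales b ON b.day <= a.day
    GROUP BY a.day
    ORDER BY a.day
""").fetchall()

# Window-function version: a single ordered pass over the table.
window = conn.execute("""
    SELECT day, SUM(amount) OVER (ORDER BY day) AS running_total
    FROM sales
    ORDER BY day
""").fetchall()

assert self_join == window == [(1, 10), (2, 30), (3, 60)]
```

Both queries return the same running total; narrating why you'd pick the second is exactly the mid-level signal described above.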

Part 2

The hidden rubric

Many technically strong candidates fail the behavioral round. Hiring managers consistently report that behavioral is where they see the clearest signal on whether a junior analyst can be trusted with stakeholders. Below is what they actually grade on — synthesized from rubrics at Meta, Amazon, Google, and mid-market data teams.

  • Clarifying questions before diving in

    Candidates who jump into solutions signal junior-ness. Always ask at least two clarifying questions on scope, time window, or metric definition before you start writing.

  • Stating assumptions out loud

    “I’m going to assume we’re measuring weekly active users with at least one session of 60+ seconds — is that right?”

  • Quantifying impact in every story

    Not “improved the dashboard” but “cut report prep time from 8 to 1 hour weekly.” Every STAR story must end with a number.

  • Using "I" not "we"

    Amazon explicitly grades this; other companies do so implicitly. “We analyzed” is a red flag — “I built the model, partnered with our engineer on the pipeline, and presented to the VP.”

  • Acknowledging uncertainty and what you’d do differently

    Candidates who claim every project was a triumph lose credibility. Strong answers include “in retrospect I’d have validated X earlier.”

  • Altitude-appropriate communication

    When told "explain this to a non-technical VP", the expected register is plain-language analogies, not jargon.

  • Trade-off reasoning

    Saying "I chose median over mean because of outlier sensitivity" beats just doing it silently.
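The median-over-mean point can be shown in two lines. A minimal sketch with Python's standard `statistics` module and hypothetical order values (the numbers are invented for illustration):

```python
from statistics import mean, median

# Five typical orders plus one outlier.
order_values = [20, 22, 25, 23, 21, 500]

avg = mean(order_values)     # dragged up to ~101.8 by the single 500 outlier
mid = median(order_values)   # 22.5 — a robust summary of a typical order
```

Saying "the mean quadruples because of one order, so I'm reporting the median" is the kind of out-loud trade-off reasoning the rubric rewards.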

Part 3

15 red flags that kill offers

Reported directly by hiring managers across the research corpus. If you recognize your own behavior in more than two of these, rehearse the alternative before the next loop.

  1. Cannot explain why they chose one metric over another.
  2. Over-engineers a simple question (builds an ML model when SQL suffices).
  3. Never asks clarifying questions.
  4. Cannot quantify past impact in numbers.
  5. Blames former teammates or managers.
  6. Shows no evidence of learning from mistakes.
  7. Uses jargon without being able to define it.
  8. Claims expertise in a tool but fails a basic question on it.
  9. Cannot explain what stakeholders actually did with the analysis.
  10. Has no opinion on metric definitions.
  11. Doesn’t acknowledge data limitations or caveats.
  12. Cannot articulate the business model of their previous employer.
  13. Gives purely technical answers to business questions (and vice versa).
  14. Insists on the "correct" answer when the interviewer probes a trade-off.
  15. Fails to ask any questions at the end of the interview.

Part 4

A focused four-week prep plan

The single biggest gap between offers and rejections at entry-level is not SQL knowledge but structured business communication. Candidates who advance don’t know more — they communicate more clearly under pressure. Build around that.

Week 1

SQL foundation

Solve 40 easy/medium SQL problems covering the seven recurring patterns (top-N per group, running totals, self-joins, LEFT JOIN + NULL filter, dedup with ROW_NUMBER, conditional-aggregation pivot, date truncation). Build one personal project querying a public dataset (NYC Taxi, Airbnb, Stack Overflow dump) and write up three “insights” you found — you’ll use these in behavioral stories.
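The most commonly tested of the seven patterns is top-N per group. A minimal sketch using sqlite3 with a hypothetical `orders` table (names and data are invented; window functions require SQLite ≥ 3.25):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount INTEGER);
    INSERT INTO orders VALUES
        ('alice', 50), ('alice', 90), ('alice', 70),
        ('bob',   30), ('bob',   80);
""")

# Top 2 orders per customer, largest first: rank within each customer
# with ROW_NUMBER, then keep ranks 1-2.
top_n = conn.execute("""
    WITH ranked AS (
        SELECT customer, amount,
               ROW_NUMBER() OVER (
                   PARTITION BY customer ORDER BY amount DESC
               ) AS rn
        FROM orders
    )
    SELECT customer, amount
    FROM ranked
    WHERE rn <= 2
    ORDER BY customer, amount DESC
""").fetchall()
```

The same CTE-plus-`rn = 1` shape also handles the dedup-with-ROW_NUMBER pattern, so one rehearsed template covers two of the seven.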

Week 2

Domain depth

Pick your target industry and memorize its metrics cold. Build flashcards: formula on one side, common trap on the other. Read Ron Kohavi’s Trustworthy Online Controlled Experiments (first three chapters) if targeting product. Read Kimball’s dimensional-modeling primer if targeting data-warehouse-adjacent roles.

Week 3

Case practice and mocks

Do at least five timed mock cases (with a peer, or an interview platform). Record yourself. Listen back for whether you asked clarifying questions, stated assumptions, and quantified recommendations. This is the single highest-leverage prep activity most candidates skip.

Week 4

Behavioral stories

Write seven STAR stories covering: wrong analysis, stakeholder conflict, messy data, ambiguous scope, tight deadline, cross-functional win, and ethics / pushback. Each story must end with a number. Rehearse out loud until they run 90 seconds each. Prepare three thoughtful questions for every interviewer — questions about how the team measures its own impact tend to land best.