Evaluating RAG: Precision@K, Recall, and Practical KPIs

How to Evaluate and Benchmark Retrieval-Augmented Generation (RAG) Systems

Measure RAG quality and cost with clear metrics, tests, and checklists to boost relevance and user satisfaction—follow this practical guide to get started.

Evaluating RAG systems requires both relevance metrics and operational KPIs. This guide shows practical steps, example queries, and computations to compare models and retrieval components effectively.

  • Define clear evaluation objectives tied to user tasks and business outcomes.
  • Create representative test queries with curated ground truth and edge cases.
  • Measure Precision@K, Recall, NDCG, plus latency, cost, hallucination rate, and satisfaction.

Quick answer

Evaluate RAG by (1) defining objectives and user scenarios, (2) assembling representative queries and labeled ground truth, (3) computing Precision@K, Recall, and NDCG for retrieval and RAG outputs, and (4) tracking operational KPIs like latency, cost, hallucination rate, and user satisfaction to make deployment decisions.

Define evaluation objectives

Start with concrete questions: Are you optimizing for factual accuracy, answer completeness, or speed? Objectives drive dataset composition, metric choice, and acceptance thresholds.

  • Business goals: reduce support time, increase conversion, lower hallucinations.
  • User goals: fast concise answers, full coverage for research tasks, or high-precision citations.
  • System constraints: query latency budget, cost per request, and privacy/regulatory needs.

Build representative test queries and ground truth

Representative queries should reflect real user intent distribution and include difficulty strata and edge cases.

  • Collect production logs (anonymized) and augment with synthetic or curated queries for rare scenarios.
  • Label ground truth at two levels: relevant documents/snippets for retrieval and correct final answers for generation.
  • Include negative examples and ambiguous queries to test robustness and disambiguation.
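The two-level labeling above can be captured in a simple record per query. This is an illustrative sketch, not a standard schema; the field names (`relevant_doc_ids`, `reference_answer`, and so on) are assumptions chosen for clarity:

```python
# Sketch of a two-level ground-truth record: relevant documents for the
# retrieval stage plus a reference answer for the generation stage.
test_set = [
    {
        "query": "How do I reset my password?",
        "type": "faq",                        # difficulty/intent stratum
        "relevant_doc_ids": ["kb-101", "kb-204"],
        "reference_answer": "Use the 'Forgot password' link on the login page.",
    },
    {
        "query": "jaguar speed",              # ambiguous: animal vs. car
        "type": "ambiguous",
        "relevant_doc_ids": [],               # negative example: expect clarification
        "reference_answer": None,
    },
]

# Tally the stratum mix so you can check it against your target shares.
share_by_type = {}
for item in test_set:
    share_by_type[item["type"]] = share_by_type.get(item["type"], 0) + 1
```

Keeping the stratum label on every record makes it easy to report metrics per stratum later, not just overall.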

Example test-query breakdown:

Sample test set composition

  Type                       Share   Purpose
  FAQ-like                   40%     High-precision retrieval & short answers
  Exploratory / research     30%     Long-form synthesis and coverage
  Ambiguous                  20%     Disambiguation and clarification flow
  Edge cases / adversarial   10%     Hallucination & robustness tests

Select core metrics: Precision@K, Recall, NDCG

Choose metrics that match your objectives. Retrieval-focused tasks favor Precision@K and Recall; ranking and user satisfaction favor NDCG and human judgments.

  • Precision@K: proportion of top-K retrieved items that are relevant.
  • Recall: fraction of all relevant items retrieved (often at large K or corpus-wide).
  • NDCG: normalized discounted cumulative gain rewards correct items ranked higher and handles graded relevance.

Compute Precision@K: procedure and variations

Procedure for Precision@K:

  1. For each query, mark the top K retrieved documents as relevant (1) or not (0) using ground truth.
  2. Compute precision_i = (sum of relevant labels in top K) / K.
  3. Aggregate across queries: Precision@K = mean(precision_i).
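The three-step procedure above maps directly to a few lines of code. This sketch assumes binary relevance labels and mean (micro) averaging over queries:

```python
def precision_at_k(retrieved_ids, relevant_ids, k):
    # Step 1-2: fraction of the top-k retrieved documents found in the
    # ground-truth relevant set for this query.
    top_k = retrieved_ids[:k]
    hits = sum(1 for doc_id in top_k if doc_id in relevant_ids)
    return hits / k

def mean_precision_at_k(runs, k):
    # Step 3: runs is a list of (retrieved_ids, relevant_ids) pairs,
    # one per query; aggregate by taking the mean.
    return sum(precision_at_k(r, rel, k) for r, rel in runs) / len(runs)
```

Note that the denominator is always K, so a query with fewer than K relevant documents can never score 1.0 at that K; that is expected behavior for this metric.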

Variations and practical notes:

  • Choose K based on UI: K=1 for single-best suggestions, K=5 or 10 for a list view.
  • Use micro-averaging (mean over queries) or weighted averages by query importance.
  • Consider Precision@K with graded relevance by replacing binary labels with relevance scores and averaging.
  • Report confidence intervals or bootstrap samples to show statistical stability.
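For the last point, a percentile bootstrap over per-query scores is a simple way to get a confidence interval without distributional assumptions. A minimal sketch using only the standard library (the resample count and seed are arbitrary choices):

```python
import random

def bootstrap_ci(per_query_scores, n_resamples=2000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample queries with replacement, recompute the
    # mean each time, and take the alpha/2 and 1 - alpha/2 quantiles.
    rng = random.Random(seed)
    n = len(per_query_scores)
    means = []
    for _ in range(n_resamples):
        sample = [per_query_scores[rng.randrange(n)] for _ in range(n)]
        means.append(sum(sample) / n)
    means.sort()
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

Wide intervals are a signal that the test set is too small to distinguish the systems being compared.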

Evaluate recall and retrieval coverage

Recall matters when the system must surface any relevant source (e.g., legal or medical retrieval). It’s also useful to measure retrieval coverage across intents.

  • Recall@K: compute fraction of known relevant documents appearing in top K per query.
  • Corpus-wide coverage: percentage of queries for which at least one relevant document is retrieved within K.
  • Use pooling: if full relevance judgments are expensive, pool top results across systems to form ground truth.

Recall examples

  Metric      When to use
  Recall@10   High-recall tasks with moderate UI result lists
  Coverage    Measuring whether the system ever retrieves any relevant doc
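Recall@K and coverage can be sketched the same way as Precision@K, with the ground-truth set rather than K in the denominator. Assumed input shapes mirror the earlier examples:

```python
def recall_at_k(retrieved_ids, relevant_ids, k):
    # Fraction of the known relevant documents that appear in the top k.
    if not relevant_ids:
        return 0.0
    top_k = set(retrieved_ids[:k])
    return len(top_k & set(relevant_ids)) / len(relevant_ids)

def coverage_at_k(runs, k):
    # Share of queries for which at least one relevant doc is in the top k.
    # runs: list of (retrieved_ids, relevant_ids) pairs, one per query.
    hits = sum(1 for retrieved, relevant in runs
               if set(retrieved[:k]) & set(relevant))
    return hits / len(runs)
```

Coverage is deliberately coarse: it tells you whether retrieval fails outright on a query, which recall averaged over documents can mask.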

Track operational KPIs: latency, cost, hallucination, satisfaction

Combine offline relevance metrics with real-world KPIs to judge deployability and user impact.

  • Latency: measure end-to-end p50/p95/p99 latency for retrieval + generation.
  • Cost: track tokens, API calls, and retrieval throughput per successful query.
  • Hallucination rate: fraction of generated answers with unsupported or false claims (require human labels or automated fact-checkers).
  • User satisfaction: direct ratings, task completion, and downstream metrics like reduced support re-open rates.
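Latency percentiles can be computed from raw samples with a nearest-rank estimate; the sample values below are made up for illustration. In production, a streaming sketch (e.g. t-digest) is usually preferable to storing every measurement:

```python
def percentile(samples, p):
    # Nearest-rank percentile over a list of latency samples (ms).
    ordered = sorted(samples)
    idx = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[idx]

latencies_ms = [120, 150, 180, 210, 250, 300, 420, 650, 900, 1400]
p50 = percentile(latencies_ms, 50)
p95 = percentile(latencies_ms, 95)
```

Always report the tail percentiles alongside the median; a system with an excellent p50 can still frustrate users if p95 blows through the budget.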

Example KPI table:

Operational KPI targets (example)

  KPI                       Target     Why it matters
  Median latency            < 300 ms   Perceived responsiveness
  95th percentile latency   < 1.2 s    Consistent UX
  Hallucination rate        < 2%       Trust and compliance
  User satisfaction         > 4/5      Adoption and retention

Common pitfalls and how to avoid them

  • Overfitting to a small test set — remedy: enlarge the test set, use cross-validation, and keep a hidden holdout for final evaluation.
  • Using only Precision@K — remedy: combine with recall/NDCG and human judgments for generation quality.
  • Ignoring latency and cost — remedy: measure end-to-end performance under realistic load and include cost-per-query in comparisons.
  • Labeler inconsistency — remedy: write clear annotation guidelines, use adjudication, and measure inter-annotator agreement (e.g., Cohen’s kappa).
  • Pooling bias in relevance judgments — remedy: pool results from multiple systems and periodically refresh pools to reduce blind spots.
  • Counting citations as truth — remedy: verify cited sources and mark unsupported claims in hallucination assessments.
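For the labeler-consistency point, Cohen's kappa on binary relevance labels takes only a few lines. A sketch from its definition (observed agreement corrected for chance agreement), assuming two annotators and 0/1 labels:

```python
def cohens_kappa(labels_a, labels_b):
    # Agreement between two annotators on binary labels, corrected for
    # the agreement expected by chance given each annotator's label rates.
    n = len(labels_a)
    observed = sum(1 for a, b in zip(labels_a, labels_b) if a == b) / n
    p_a1 = sum(labels_a) / n
    p_b1 = sum(labels_b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    if expected == 1.0:
        return 1.0  # degenerate case: both annotators used a single label
    return (observed - expected) / (1 - expected)
```

A common rule of thumb treats kappa below roughly 0.6 as a sign the annotation guidelines need tightening before the labels are trusted.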

Implementation checklist

  • Define objectives and acceptance thresholds tied to user tasks.
  • Assemble a representative test set with graded ground-truth labels.
  • Compute Precision@K, Recall@K, and NDCG; report confidence intervals.
  • Measure end-to-end latency (p50/p95/p99) and cost per query.
  • Annotate hallucinations and capture user satisfaction signals.
  • Run A/B tests or shadow deployments before full rollout.
  • Automate periodic re-evaluation and dataset updates.

FAQ

Q: Which K should I use for Precision@K?
A: Pick K to match your UI: K=1 for single-answer widgets, K=5–10 for ranked lists; report multiple K values when possible.
Q: How do I measure hallucination reliably?
A: Use human annotation with clear criteria and spot-check with automated fact-checkers; measure both rate and severity.
Q: Can offline metrics predict user satisfaction?
A: They correlate but don’t fully predict satisfaction—combine offline metrics with user feedback and task completion rates.
Q: How often should I re-evaluate?
A: Re-evaluate whenever retrieval or model components change, and schedule periodic (e.g., monthly) checks for dataset drift.