
Re-Read Rate

Agent Behavior

25 sessions with data in past 30d (51 total)

Count: 25
Average: 2.60 (↑ 0.24 vs prior 30d)
P10: 1.82 (↑ 0.73 vs prior 30d)
P50: 2.50 (↑ 0.08 vs prior 30d)
P90: 3.60 (↑ 0.07 vs prior 30d)

Trend

Distribution

1.2–1.6: 2
1.6–2.0: 2
2.0–2.4: 6
2.4–2.8: 7
2.8–3.2: 2
3.2–3.6: 3
3.6–4.0: 3

Notable Sessions

Highest
fix/cors-preflight: 3.75
optimize/graphql-n1: 3.71
optimize/cold-start: 3.60
Lowest
feat/keyboard-shortcuts: 1.50
feat/prometheus: 1.50
chore/remove-deprecated: 1.82

About This Metric

What It Measures

The ratio of total file reads to unique files read across sessions correlated to a PR. A value of 1.0 means every file was read exactly once; higher values indicate files were read multiple times.

Why It Matters

When the model re-reads the same file multiple times in a session, it typically means one of the following: the file is too large to retain in context, the model lost track of information it read earlier, or the task required revisiting the same code repeatedly.

Re-reading burns tokens without adding new information. While some re-reading is normal (checking a file after modifying it), excessive re-reading suggests the model's context management could be improved — for instance, by providing better summaries in CLAUDE.md, using partial reads with offset/limit, or keeping files smaller.

How It's Calculated

re_read_rate = total_file_reads / unique_files_read

Where total_file_reads counts every Read tool invocation and unique_files_read counts distinct file paths read (already tracked as files_read_count). Both counts are summed across all correlated sessions before dividing. Returns null if no files were read.

Data Sources Required

  • Claude Code session data — Count of Read tool invocations (total) and unique file paths read.