
ContextBench

A Comprehensive Benchmark for Evaluating Context Retrieval in Code Agents


A collaboration between Nanjing University and University College London


Overview

LLM-based coding agents have shown strong performance on automated issue resolution benchmarks, yet existing evaluations largely focus on final task success, providing limited insight into how agents retrieve and use code context during problem solving.

We introduce ContextBench, a process-oriented evaluation of context retrieval in coding agents. ContextBench consists of 1,136 issue-resolution tasks from 66 repositories across eight programming languages, each augmented with human-annotated gold contexts. We further implement an automated evaluation framework that tracks agent trajectories and measures context recall, precision, and efficiency throughout issue resolution.

Using ContextBench, we evaluate four frontier LLMs and five coding agents. Our results show that sophisticated agent scaffolding yields only marginal gains in context retrieval ("The Bitter Lesson" of coding agents), LLMs consistently favor recall over precision, and substantial gaps exist between explored and utilized context.

ContextBench augments existing end-to-end benchmarks with intermediate gold-context metrics that unbox the issue-resolution process. These contexts offer valuable intermediate signals for guiding LLM reasoning in software tasks.

ContextBench Pipeline

The pipeline extracts file views and spans from agent trajectories, then computes coverage and precision metrics by comparing against human-annotated gold context at multiple granularities.
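
To make the span-level metrics concrete, here is a minimal Python sketch that scores retrieved line intervals against gold line intervals for a single file. It assumes both are given as non-overlapping (start_line, end_line) tuples; this is an illustration of the idea, not the package's actual implementation.

# Minimal sketch of span-level recall/precision over line intervals.
# Assumes non-overlapping spans; not ContextBench's internal API.

def overlap(a, b):
    """Number of lines shared by two (start, end) intervals (inclusive)."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)

def span_metrics(gold_spans, retrieved_spans):
    """Return (recall, precision) for one file's gold vs. retrieved spans."""
    gold_total = sum(e - s + 1 for s, e in gold_spans)
    retrieved_total = sum(e - s + 1 for s, e in retrieved_spans)
    covered = sum(overlap(g, r) for g in gold_spans for r in retrieved_spans)
    recall = covered / gold_total if gold_total else 0.0
    precision = covered / retrieved_total if retrieved_total else 0.0
    return recall, precision

# Example: gold context is lines 10-20; the agent viewed lines 5-15 and 40-60.
print(span_metrics([(10, 20)], [(5, 15), (40, 60)]))  # -> (~0.55 recall, ~0.19 precision)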

Leaderboard


πŸ† Live leaderboard and interactive results: https://contextbench.github.io/

Quickstart

Installation

# Install dependencies
pip install -r requirements.txt
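
If you prefer to keep dependencies isolated, a standard virtual environment works as well (optional; this is ordinary Python tooling, not a ContextBench-specific requirement):

# Optional: install inside a virtual environment
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt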

Download Dataset

Download the ContextBench dataset from Hugging Face:

from datasets import load_dataset

# Load full dataset (1,136 instances)
dataset = load_dataset("Contextbench/ContextBench", "default")

# Or load the verified subset (500 instances)
dataset_verified = load_dataset("Contextbench/ContextBench", "contextbench_verified")

# Save to parquet for evaluation
dataset['train'].to_parquet("data/full.parquet")

Or download directly from the πŸ€— Hugging Face dataset page: https://huggingface.co/datasets/Contextbench/ContextBench
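
To sanity-check the download, you can inspect the split size and column names of the dataset loaded above; the snippet below does not assume any particular schema:

# Quick inspection of the loaded dataset
print(len(dataset["train"]))          # expected: 1,136 instances for the full config
print(dataset["train"].column_names)  # list the available annotation fields
print(dataset["train"][0])            # peek at a single instance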

Run Evaluation

python -m contextbench.evaluate \
    --gold data/full.parquet \
    --pred path/to/trajectory.traj.json \
    --out results.jsonl

The evaluation automatically detects trajectory formats, clones repositories, extracts code symbols, and computes comprehensive metrics across file, symbol, span, and edit-location granularities.
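
The --out file is written in JSON Lines format (one JSON object per line). The exact field names depend on the evaluator version, so the sketch below only loads a record and lists its keys rather than assuming a schema:

import json

# Inspect the first evaluation record; field names are not assumed here
with open("results.jsonl") as f:
    record = json.loads(next(f))
print(sorted(record.keys()))  # see which metrics and granularities are reported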

Documentation

Online documentation: https://euniai.github.io/ContextBench/

Complete documentation is also available in the docs/ directory of this repository.

Repository Layout

ContextBench/                    # Repository root
β”œβ”€β”€ README.md                    # This file (project homepage)
β”œβ”€β”€ contextbench/                # Python package
β”‚   β”œβ”€β”€ agents/                  # Trajectory extractors for different agents
β”‚   β”œβ”€β”€ core/                    # Repo management, intervals, file I/O
β”‚   β”œβ”€β”€ extractors/              # Tree-sitter symbol extraction
β”‚   β”œβ”€β”€ metrics/                 # Metric computation
β”‚   β”œβ”€β”€ parsers/                 # Gold, trajectory, and diff parsers
β”‚   └── evaluate.py              # Main evaluation entrypoint
β”œβ”€β”€ data/                        # Benchmark datasets (Verified, Pro, Poly, Multi)
β”‚   β”œβ”€β”€ selected_500_instances.csv
β”‚   └── *.parquet
β”œβ”€β”€ docs/                        # Documentation and assets
β”‚   β”œβ”€β”€ source/                  # Sphinx RST documentation
β”‚   └── assets/                  # Images and figures
β”œβ”€β”€ scripts/                     # Utility scripts for running agents
β”œβ”€β”€ agent-frameworks/            # Agent implementation submodules
β”‚   β”œβ”€β”€ agentless/
β”‚   β”œβ”€β”€ mini-swe-agent/
β”‚   β”œβ”€β”€ openhands/
β”‚   └── swe-agent/
└── requirements.txt             # Python dependencies

For detailed metrics definitions and implementation, see the Sphinx documentation.

Citation

If you use ContextBench in your research, please cite our paper:

@misc{li2026contextbenchbenchmarkcontextretrieval,
  title={ContextBench: A Benchmark for Context Retrieval in Coding Agents}, 
  author={Han Li and Letian Zhu and Bohan Zhang and Rili Feng and Jiaming Wang and Yue Pan and Earl T. Barr and Federica Sarro and Zhaoyang Chu and He Ye},
  year={2026},
  eprint={2602.05892},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2602.05892}
}

πŸ“„ Paper: arXiv:2602.05892

Acknowledgements

ContextBench is a collaborative research project between:

  • Nanjing University (南京倧学)
  • University College London

We thank the developers of the agent frameworks evaluated in this benchmark: Agentless, SWE-agent, Mini-SWE-Agent, OpenHands, and Prometheus.

We gratefully acknowledge Mistral AI and Amazon Web Services (AWS) for providing API support that enabled large-scale experiments and evaluations.

License

This project is licensed under the Apache License 2.0. See the LICENSE file for details.

