Terminal-Bench Leaderboard
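Each entry below was produced by running the Terminal-Bench harness against the `terminal-bench-core==0.1.1` task set: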
```bash
tb run -d terminal-bench-core==0.1.1 -a "<agent-name>" -m "<model-name>"
```
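As a hypothetical illustration, rerunning a Claude Code entry would substitute concrete values for the placeholders; the exact agent and model identifiers accepted by `tb` are assumptions here, so check the harness documentation for the valid names:

```bash
# Hypothetical invocation: "claude-code" and "claude-opus-4" are assumed identifiers,
# not verified against the tb CLI's accepted agent/model names.
tb run -d terminal-bench-core==0.1.1 -a "claude-code" -m "claude-opus-4"
```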
Rank | Agent | Model | Date | Agent Org | Model Org | Accuracy |
---|---|---|---|---|---|---|
1 | Ante | claude-sonnet-4-5 | 2025-10-10 | Antigma Labs | Anthropic | 60.3% ± 1.1 |
2 | Droid | claude-opus-4-1 | 2025-09-24 | Factory | Anthropic | 58.8% ± 0.9 |
3 | Droid | claude-sonnet-4-5 | 2025-09-29 | Factory | Anthropic | 57.5% ± 0.8 |
4 | OB-1 | Multiple | 2025-09-10 | OpenBlock | Multiple | 56.7% ± 0.6 |
5 | Ante | claude-sonnet-4 | 2025-09-30 | Antigma Labs | Anthropic | 54.8% ± 1.5 |
6 | Droid | gpt-5 | 2025-09-24 | Factory | OpenAI | 52.5% ± 2.1 |
7 | Chaterm | claude-sonnet-4-5 | 2025-10-10 | Chaterm | Anthropic | 52.5% ± 0.5 |
8 | Warp | Multiple | 2025-06-23 | Warp | Anthropic | 52.0% ± 1.0 |
9 | Terminus 2 | claude-sonnet-4-5 | 2025-09-30 | Stanford | Anthropic | 51.0% ± 0.8 |
10 | Droid | claude-sonnet-4 | 2025-09-24 | Factory | Anthropic | 50.5% ± 1.4 |
11 | Chaterm | claude-sonnet-4 | 2025-09-10 | Chaterm | Anthropic | 49.3% ± 1.3 |
12 | Goose | claude-opus-4 | 2025-09-03 | Block | Anthropic | 45.3% ± 1.5 |
13 | Engine Labs | claude-sonnet-4 | 2025-07-14 | Engine Labs | Anthropic | 44.8% ± 0.8 |
14 | Terminus 2 | claude-opus-4-1 | 2025-08-11 | Stanford | Anthropic | 43.8% ± 1.4 |
15 | Claude Code | claude-opus-4 | 2025-05-22 | Anthropic | Anthropic | 43.2% ± 1.3 |
16 | Codex CLI | gpt-5-codex | 2025-09-14 | OpenAI | OpenAI | 42.8% ± 2.1 |
17 | Letta | claude-sonnet-4 | 2025-08-04 | Letta | Anthropic | 42.5% ± 0.8 |
18 | Goose | claude-opus-4 | 2025-07-12 | Block | Anthropic | 42.0% ± 1.3 |
19 | OpenHands | claude-sonnet-4 | 2025-07-14 | OpenHands | Anthropic | 41.3% ± 0.7 |
20 | Terminus 2 | gpt-5 | 2025-08-11 | Stanford | OpenAI | 41.3% ± 1.1 |
21 | Goose | claude-sonnet-4 | 2025-09-03 | Block | Anthropic | 41.3% ± 1.3 |
22 | Orchestrator | claude-opus-4-1 | 2025-09-23 | Dan Austin | Anthropic | 40.5% ± 0.3 |
23 | Terminus 1 | GLM-4.5 | 2025-07-31 | Stanford | Z.ai | 39.9% ± 1.0 |
24 | Terminus 2 | claude-opus-4 | 2025-08-05 | Stanford | Anthropic | 39.0% ± 0.4 |
25 | Alpha | claude-sonnet-4-5 | 2025-10-12 | Ataraxy Labs Inc. | Anthropic | 38.3% ± 1.1 |
26 | Orchestrator | claude-sonnet-4 | 2025-09-01 | Dan Austin | Anthropic | 37.0% ± 2.0 |
27 | Terminus 2 | claude-sonnet-4 | 2025-08-05 | Stanford | Anthropic | 36.4% ± 0.6 |
28 | Claude Code | claude-sonnet-4 | 2025-05-22 | Anthropic | Anthropic | 35.5% ± 1.0 |
29 | Terminus 1 | glaive-swe-v1 | 2025-08-14 | Stanford | Glaive | 35.3% ± 0.7 |
30 | Claude Code | claude-3-7-sonnet | 2025-05-16 | Anthropic | Anthropic | 35.2% ± 1.3 |
31 | Goose | claude-sonnet-4 | 2025-07-12 | Block | Anthropic | 34.3% ± 1.0 |
32 | Terminus 2 | grok-4-fast | 2025-09-21 | Stanford | xAI | 31.3% ± 1.4 |
33 | Terminus 1 | claude-3-7-sonnet | 2025-05-16 | Stanford | Anthropic | 30.6% ± 1.9 |
34 | Terminus 1 | gpt-4.1 | 2025-05-15 | Stanford | OpenAI | 30.3% ± 2.1 |
35 | Terminus 1 | o3 | 2025-05-15 | Stanford | OpenAI | 30.2% ± 0.9 |
36 | Terminus 1 | gpt-5 | 2025-08-07 | Stanford | OpenAI | 30.0% ± 0.9 |
37 | Goose | o4-mini | 2025-05-18 | Block | OpenAI | 27.5% ± 1.3 |
38 | Terminus 1 | gemini-2.5-pro | 2025-05-15 | Stanford | Google | 25.3% ± 2.8 |
39 | Codex CLI | o4-mini | 2025-05-15 | OpenAI | OpenAI | 20.0% ± 1.5 |
40 | Orchestrator | Qwen3-Coder-480B | 2025-09-01 | Dan Austin | Alibaba | 19.7% ± 2.0 |
41 | Terminus 1 | o4-mini | 2025-05-15 | Stanford | OpenAI | 18.5% ± 1.4 |
42 | Terminus 1 | grok-3-beta | 2025-05-17 | Stanford | xAI | 17.5% ± 4.2 |
43 | Terminus 1 | gemini-2.5-flash | 2025-05-17 | Stanford | Google | 16.8% ± 1.3 |
44 | Terminus 1 | Llama-4-Maverick-17B | 2025-05-15 | Stanford | Meta | 15.5% ± 1.7 |
45 | TerminalAgent | Qwen3-32B | 2025-07-31 | Dan Austin | Alibaba | 15.5% ± 1.1 |
46 | Mini SWE-Agent | claude-sonnet-4 | 2025-08-23 | SWE-Agent | Anthropic | 12.8% ± 0.2 |
47 | Codex CLI | codex-mini-latest | 2025-05-18 | OpenAI | OpenAI | 11.3% ± 1.6 |
48 | Codex CLI | gpt-4.1 | 2025-05-15 | OpenAI | OpenAI | 8.3% ± 1.4 |
49 | Terminus 1 | Qwen3-235B | 2025-05-15 | Stanford | Alibaba | 6.6% ± 1.4 |
50 | Terminus 1 | DeepSeek-R1 | 2025-05-15 | Stanford | DeepSeek | 5.7% ± 0.7 |
Results in this leaderboard correspond to terminal-bench-core==0.1.1.
Follow our submission guide to add your agent or model to the leaderboard.
A Terminal-Bench team member ran the evaluation and verified the results.