
AI BENCHY Compare

Anthropic: Claude Opus 4.6 vs Inception: Mercury 2


Last updated: 2026-03-05

| Metric | Anthropic: Claude Opus 4.6 (medium · Release: 2026-02-05) | Inception: Mercury 2 (none · Release: 2026-02-24) |
|---|---|---|
| Rank | #30 | #50 |
| Avg Score | 64 | 34 |
| Consistency | 89 | 89 |
| Cost per result | 14.411 | 0.147 |
| Total Cost | $1.297 | $0.006 |
| Response Time (avg) | 25.08s | 594ms |
| Response Time (max) | 83.40s | 1.27s |
| Response Time (total) | 200.67s | 8.91s |
| Tests Correct | — | — |
| Attempt pass rate | 64.4% | 33.3% |
| Flaky tests | 2 | 2 |
| Output Tokens | 26,066 | 1,144 |
| Reasoning Tokens | 17,071 | 0 |
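One way to read the headline table is to divide Avg Score by Total Cost to get a rough cost-efficiency ratio. This is a derived figure for illustration only, not a metric AI BENCHY itself reports; the numbers are copied from the table above:

```python
# Headline numbers copied from the comparison table above.
# "score per dollar" is a derived ratio, not an AI BENCHY metric.
models = {
    "Anthropic: Claude Opus 4.6": {"avg_score": 64, "total_cost_usd": 1.297},
    "Inception: Mercury 2": {"avg_score": 34, "total_cost_usd": 0.006},
}

for name, m in models.items():
    score_per_dollar = m["avg_score"] / m["total_cost_usd"]
    print(f"{name}: {score_per_dollar:,.1f} score points per dollar")
```

By this crude ratio Mercury 2 is far cheaper per score point, which is the usual trade-off the Score vs Total Cost chart visualizes: a much lower score delivered at a tiny fraction of the cost.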

Charts (not reproduced here): Top Models by Score; Response Time (avg); Score vs Total Cost; Avg Score vs Response Time (avg)

Category Breakdown

Anti-AI Tricks

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Anthropic: Claude Opus 4.6 | 40 | 44 | 55.6% | 2 | — | 11.88s | 897 | 1,000 |
| Inception: Mercury 2 | 100 | 100 | 0.0% | 0 | — | 466ms | 274 | 0 |

Combined

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Anthropic: Claude Opus 4.6 | 100 | 100 | 100.0% | 0 | — | 76.66s | 8,178 | 5,194 |
| Inception: Mercury 2 | 100 | 100 | 0.0% | 0 | — | 606ms | 131 | 0 |

Data parsing and extraction

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Anthropic: Claude Opus 4.6 | 99 | 100 | 100.0% | 0 | — | 7.37s | 691 | 757 |
| Inception: Mercury 2 | 55 | 59 | 83.3% | 1 | — | 667ms | 180 | 0 |

Domain specific

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Anthropic: Claude Opus 4.6 | 100 | 100 | 0.0% | 0 | — | 83.40s | 14,642 | 8,687 |
| Inception: Mercury 2 | 40 | 72 | 44.4% | 1 | — | 534ms | 46 | 0 |

Instructions following

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Anthropic: Claude Opus 4.6 | 100 | 100 | 100.0% | 0 | — | 2.43s | 266 | 467 |
| Inception: Mercury 2 | 55 | 100 | 50.0% | 0 | — | 551ms | 82 | 0 |

Puzzle Solving

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Anthropic: Claude Opus 4.6 | 70 | 100 | 66.7% | 0 | — | 4.60s | 531 | 637 |
| Inception: Mercury 2 | 100 | 100 | 0.0% | 0 | — | 533ms | 234 | 0 |

Tool Calling

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Anthropic: Claude Opus 4.6 | 100 | 100 | 100.0% | 0 | — | 9.73s | 861 | 329 |
| Inception: Mercury 2 | 100 | 100 | 100.0% | 0 | — | 1.27s | 197 | 0 |
