AI BENCHY Compare
Inception: Mercury 2 vs OpenAI: GPT-5 Mini
Last updated: 2026-03-05
| Metric | Inception: Mercury 2 (reasoning: none, released 2026-02-24) | OpenAI: GPT-5 Mini (reasoning: medium, released 2025-08-07) |
|---|---|---|
| Rank | #50 | #31 |
| Avg Score | 34 | 61 |
| Consistency | 89 | 89 |
| Cost per result | 0.147 | 1.401 |
| Total Cost | $0.006 | $0.113 |
| Response Time (avg) | 594ms | 25.92s |
| Response Time (max) | 1.27s | 88.15s |
| Response Time (total) | 8.91s | 388.79s |
| Tests Correct | | |
| Attempt pass rate | 33.3% | 62.2% |
| Flaky tests | 2 | 2 |
| Output Tokens | 1,144 | 5,477 |
| Reasoning Tokens | 0 | 46,912 |
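One cross-check the headline numbers allow: the response-time totals equal the averages multiplied by roughly 15 attempts per model (8.91s ≈ 15 × 594ms and 388.79s ≈ 15 × 25.92s). Below is a minimal sanity-check sketch, with the attempt count treated as an assumption rather than something the page states:

```python
# Sanity check: do the reported totals match avg response time x attempt count?
# ASSUMED_ATTEMPTS is an assumption inferred from total / avg; the page does not state it.

metrics = {
    "Inception: Mercury 2": {"avg_s": 0.594, "total_s": 8.91, "total_cost_usd": 0.006},
    "OpenAI: GPT-5 Mini":   {"avg_s": 25.92, "total_s": 388.79, "total_cost_usd": 0.113},
}

ASSUMED_ATTEMPTS = 15  # assumption: same test suite size for both models

for model, m in metrics.items():
    implied_attempts = m["total_s"] / m["avg_s"]          # ~15.0 for both models
    cost_per_attempt = m["total_cost_usd"] / ASSUMED_ATTEMPTS
    print(f"{model}: implied attempts ~{implied_attempts:.1f}, "
          f"cost/attempt ~${cost_per_attempt:.4f}")
```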
Charts: Response Time (avg) · Score vs Total Cost · Avg Score vs Response Time (avg)
Category Breakdown
| Anti-AI Tricks | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 100 | 100 | 0.0% | 0 | | 466ms | 274 | 0 |
| OpenAI: GPT-5 Mini | 70 | 96 | 66.7% | 0 | | 16.45s | 1,645 | 5,824 |
| Combined | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 100 | 100 | 0.0% | 0 | | 606ms | 131 | 0 |
| OpenAI: GPT-5 Mini | 100 | 100 | 100.0% | 0 | | 88.15s | 754 | 11,520 |
| Data parsing and extraction | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 55 | 59 | 83.3% | 1 | | 667ms | 180 | 0 |
| OpenAI: GPT-5 Mini | 99 | 100 | 100.0% | 0 | | 12.58s | 453 | 3,200 |
| Domain specific | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 40 | 72 | 44.4% | 1 | | 534ms | 46 | 0 |
| OpenAI: GPT-5 Mini | 100 | 72 | 22.2% | 1 | | 44.63s | 293 | 14,016 |
| Instructions following | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 55 | 100 | 50.0% | 0 | | 551ms | 82 | 0 |
| OpenAI: GPT-5 Mini | 75 | 66 | 83.3% | 1 | | 15.66s | 318 | 4,992 |
| Puzzle Solving | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 100 | 100 | 0.0% | 0 | | 533ms | 234 | 0 |
| OpenAI: GPT-5 Mini | 43 | 98 | 33.3% | 0 | | 14.09s | 1,527 | 5,760 |
| Tool Calling | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 100 | 100 | 100.0% | 0 | | 1.27s | 197 | 0 |
| OpenAI: GPT-5 Mini | 100 | 100 | 100.0% | 0 | | 18.64s | 487 | 1,600 |
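For a quick read of the breakdown above, the per-category scores can be tallied into head-to-head wins. The numbers are copied from the category tables; the tallying itself is only an illustrative sketch:

```python
# Per-category scores (Mercury 2, GPT-5 Mini) copied from the breakdown tables above.
category_scores = {
    "Anti-AI Tricks":              (100, 70),
    "Combined":                    (100, 100),
    "Data parsing and extraction": (55, 99),
    "Domain specific":             (40, 100),
    "Instructions following":      (55, 75),
    "Puzzle Solving":              (100, 43),
    "Tool Calling":                (100, 100),
}

# Count which model scores higher in each category.
wins = {"Mercury 2": 0, "GPT-5 Mini": 0, "tie": 0}
for category, (mercury, gpt5_mini) in category_scores.items():
    if mercury > gpt5_mini:
        wins["Mercury 2"] += 1
    elif gpt5_mini > mercury:
        wins["GPT-5 Mini"] += 1
    else:
        wins["tie"] += 1

print(wins)  # -> {'Mercury 2': 2, 'GPT-5 Mini': 3, 'tie': 2}
```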