AI BENCHY Compare
Inception: Mercury 2 vs OpenAI: GPT-5 Nano
Last updated: 2026-03-05
| Metric | Inception: Mercury 2 (reasoning: none), released 2026-02-24 | OpenAI: GPT-5 Nano (reasoning: medium), released 2025-08-07 |
|---|---|---|
| Rank | #50 | #34 |
| Avg Score | 34 | 57 |
| Consistency | 89 | 68 |
| Cost per result (¢) | 0.147 | 0.829 |
| Total Cost | $0.006 | $0.058 |
| Response Time (avg) | 594ms | 51.74s |
| Response Time (max) | 1.27s | 204.02s |
| Response Time (total) | 8.91s | 413.95s |
| Tests Correct | n/a | n/a |
| Attempt pass rate | 33.3% | 64.4% |
| Flaky tests | 2 | 6 |
| Output Tokens | 1,144 | 4,184 |
| Reasoning Tokens | 0 | 137,472 |
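These headline numbers are aggregates over repeated per-test attempts. As a rough illustration, here is a minimal Python sketch that recomputes the derived metrics from raw attempt records; the `Attempt` schema and the exact formulas (pass rate over all attempts, mean latency, total cents per passing result) are assumptions, since AI BENCHY does not publish its definitions on this page.

```python
from dataclasses import dataclass

@dataclass
class Attempt:
    test_id: str      # which benchmark test was run (hypothetical field)
    passed: bool      # whether this attempt produced a correct result
    latency_s: float  # wall-clock response time, seconds
    cost_usd: float   # API cost of this attempt, US dollars

def summarize(attempts: list[Attempt]) -> dict:
    """Recompute headline metrics from raw attempts.

    Assumed definitions (not confirmed by AI BENCHY): attempt pass rate
    is passes / attempts, response time (avg) is the mean latency over
    all attempts, and cost per result is total cost in cents divided by
    the number of passing attempts.
    """
    total = len(attempts)
    passes = sum(a.passed for a in attempts)
    total_cost_usd = sum(a.cost_usd for a in attempts)
    return {
        "attempt_pass_rate": passes / total if total else 0.0,
        "avg_response_time_s": sum(a.latency_s for a in attempts) / total if total else 0.0,
        "total_cost_usd": total_cost_usd,
        "cost_per_result_cents": 100 * total_cost_usd / passes if passes else float("inf"),
    }
```

Under these assumptions, a model that retries often but rarely passes will show a high cost per result even when its total spend is small.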
[Charts: Response Time (avg); Score vs Total Cost; Avg Score vs Response Time (avg)]
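The scatter charts above do not survive a text export, but the Score vs Total Cost view is easy to reproduce from the table. A minimal matplotlib sketch using the two headline data points transcribed above:

```python
import matplotlib.pyplot as plt

# Headline data points transcribed from the comparison table above.
models = {
    "Inception: Mercury 2": (0.006, 34),  # (Total Cost USD, Avg Score)
    "OpenAI: GPT-5 Nano": (0.058, 57),
}

fig, ax = plt.subplots()
for name, (cost, score) in models.items():
    ax.scatter(cost, score)
    ax.annotate(name, (cost, score), textcoords="offset points", xytext=(5, 5))
ax.set_xlabel("Total Cost (USD)")
ax.set_ylabel("Avg Score")
ax.set_title("Score vs Total Cost")
plt.show()
```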
Category Breakdown
| Anti-AI Tricks | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 100 | 100 | 0.0% | 0 | n/a | 466ms | 274 | 0 |
| OpenAI: GPT-5 Nano | 70 | 100 | 66.7% | 0 | n/a | 37.73s | 1,107 | 19,968 |
| Combined | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 100 | 100 | 0.0% | 0 | n/a | 606ms | 131 | 0 |
| OpenAI: GPT-5 Nano | 100 | 100 | 100.0% | 0 | n/a | 65.96s | 578 | 17,984 |
| Data parsing and extraction | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 55 | 59 | 83.3% | 1 | n/a | 667ms | 180 | 0 |
| OpenAI: GPT-5 Nano | 100 | 17 | 50.0% | 2 | n/a | 21.42s | 453 | 10,560 |
| Domain specific | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 40 | 72 | 44.4% | 1 | n/a | 534ms | 46 | 0 |
| OpenAI: GPT-5 Nano | 40 | 44 | 55.6% | 2 | n/a | 204.02s | 237 | 64,448 |
| Instructions following | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 55 | 100 | 50.0% | 0 | n/a | 551ms | 82 | 0 |
| OpenAI: GPT-5 Nano | 90 | 68 | 83.3% | 1 | n/a | 11.90s | 382 | 4,096 |
| Puzzle Solving | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 100 | 100 | 0.0% | 0 | n/a | 533ms | 234 | 0 |
| OpenAI: GPT-5 Nano | 40 | 72 | 44.4% | 1 | n/a | 19.81s | 869 | 13,440 |
| Tool Calling | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 100 | 100 | 100.0% | 0 | n/a | 1.27s | 197 | 0 |
| OpenAI: GPT-5 Nano | 100 | 100 | 100.0% | 0 | n/a | 33.30s | 558 | 6,976 |
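Across categories, "Flaky tests" counts tests whose repeated attempts disagree: the same test passes on some runs and fails on others. A minimal sketch of that classification, reusing the hypothetical `Attempt` records from the earlier sketch (AI BENCHY's actual flakiness rule is not documented here):

```python
from collections import defaultdict

def count_flaky(attempts) -> int:
    """Count tests that both passed and failed across repeated attempts.

    Assumes each attempt exposes .test_id and .passed, as in the
    hypothetical Attempt schema sketched earlier; the real AI BENCHY
    definition of flakiness may differ.
    """
    outcomes = defaultdict(set)  # test_id -> set of observed pass/fail outcomes
    for a in attempts:
        outcomes[a.test_id].add(a.passed)
    # A test that has produced both True and False outcomes is flaky.
    return sum(1 for seen in outcomes.values() if len(seen) == 2)
```

Under this reading, Mercury 2's two flaky tests in the headline table would be tests that flipped between pass and fail across its attempts.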