AI BENCHY Compare

Inception: Mercury 2 vs OpenAI: GPT-5 Nano

Last updated: 2026-04-04

| Metric                | Mercury 2 (medium; released 2026-02-24) | GPT-5 Nano (medium; released 2025-08-07) |
|-----------------------|-----------------------------------------|------------------------------------------|
| Score                 | 6.3                                     | 6.2                                      |
| Rank                  | #52                                     | #54                                      |
| Consistency           | 8.5                                     | 6.7                                      |
| Tests Correct         |                                         |                                          |
| Attempt pass rate     | 51.0%                                   | 58.8%                                    |
| Flaky tests           | 3                                       | 7                                        |
| Total Runs            | 51                                      | 51                                       |
| Cost per result       | 0.634                                   | 0.864                                    |
| Total Cost            | $0.045                                  | $0.061                                   |
| Input Price           | $0.250 / 1M tokens                      | $0.050 / 1M tokens                       |
| Output Price          | $0.750 / 1M tokens                      | $0.400 / 1M tokens                       |
| Output Tokens         | 3,723                                   | 4,500                                    |
| Reasoning Tokens      | 46,120                                  | 143,296                                  |
| Response Time (avg)   | 2.25 s                                  | 44.47 s                                  |
| Response Time (max)   | 14.63 s                                 | 204.02 s                                 |
| Response Time (total) | 35.99 s                                 | 444.74 s                                 |
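As a rough sanity check on the pricing figures above: assuming reasoning tokens are billed at the output rate (an assumption, since the page does not state this) and that the input-token counts, which are not shown, account for the remainder, the output-side spend can be recomputed from the table:

```python
# Sketch: recompute the output-side cost from the table's token counts
# and per-1M output prices. Input token counts are not published on the
# page, so this covers only part of each model's Total Cost.

def output_cost(output_tokens: int, reasoning_tokens: int,
                price_per_1m: float) -> float:
    """Dollar cost of output + reasoning tokens at a per-1M-token rate."""
    return (output_tokens + reasoning_tokens) * price_per_1m / 1_000_000

mercury = output_cost(3_723, 46_120, 0.750)   # ~$0.037 of the $0.045 total
nano = output_cost(4_500, 143_296, 0.400)     # ~$0.059 of the $0.061 total
print(f"Mercury 2: ${mercury:.3f}  GPT-5 Nano: ${nano:.3f}")
```

Both estimates land just under the reported Total Cost values, which is consistent with the small unshown input side making up the difference.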

[Charts: Top Models by Score; Score vs Total Cost; Response Time (avg); Score vs Response Time (avg); Total Output Tokens; Score vs Total Output Tokens]

Category Breakdown

| Category                    | Model      | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|-----------------------------|------------|-------|-------------|-------------------|-------------|---------------|---------------------|---------------|------------------|
| Anti-AI Tricks              | Mercury 2  | 6.9   | 9.9         | 50.0%             | 0           |               | 1.12 s              | 2,546         | 2,609            |
| Anti-AI Tricks              | GPT-5 Nano | 6.5   | 7.9         | 58.3%             | 1           |               | 25.50 s             | 1,221         | 21,184           |
| Combined                    | Mercury 2  | 10.0  | 10.0        | 100.0%            | 0           |               | 3.28 s              | 268           | 4,887            |
| Combined                    | GPT-5 Nano | 10.0  | 10.0        | 100.0%            | 0           |               | 65.96 s             | 578           | 17,984           |
| Data parsing and extraction | Mercury 2  | 7.3   | 5.9         | 83.3%             | 1           |               | 1.11 s              | 183           | 1,656            |
| Data parsing and extraction | GPT-5 Nano | 3.7   | 1.7         | 50.0%             | 2           |               | 21.42 s             | 453           | 10,560           |
| Domain specific             | Mercury 2  | 2.9   | 7.2         | 11.1%             | 1           |               | 6.48 s              | 41            | 30,754           |
| Domain specific             | GPT-5 Nano | 5.2   | 4.4         | 55.6%             | 2           |               | 204.02 s            | 237           | 64,448           |
| General Intelligence        | Mercury 2  | 4.8   | 10.0        | 0.0%              | 0           |               | 821 ms              | 137           | 542              |
| General Intelligence        | GPT-5 Nano | 4.1   | 10.0        | 0.0%              | 0           |               | 17.51 s             | 202           | 4,608            |
| Instructions following      | Mercury 2  | 10.0  | 10.0        | 100.0%            | 0           |               | 1.07 s              | 14            | 958              |
| Instructions following      | GPT-5 Nano | 8.5   | 6.8         | 83.3%             | 1           |               | 11.90 s             | 382           | 4,096            |
| Puzzle Solving              | Mercury 2  | 3.9   | 7.5         | 22.2%             | 1           |               | 934 ms              | 354           | 2,758            |
| Puzzle Solving              | GPT-5 Nano | 5.3   | 7.2         | 44.4%             | 1           |               | 19.81 s             | 869           | 13,440           |
| Tool Calling                | Mercury 2  | 10.0  | 10.0        | 100.0%            | 0           |               | 1.89 s              | 180           | 1,956            |
| Tool Calling                | GPT-5 Nano | 10.0  | 10.0        | 100.0%            | 0           |               | 33.30 s             | 558           | 6,976            |
