
AI BENCHY Compare

Inception: Mercury 2 vs OpenAI: GPT-5.4 Nano

Last updated: 2026-03-17

| Metric | Mercury 2 (medium); Release: 2026-02-24 | GPT-5.4 Nano (none); Release: 2026-03-17 |
|---|---|---|
| Rank | #42 | #73 |
| Score | 6.3 | 4.3 |
| Consistency | 8.5 | 7.3 |
| Cost per result | 0.634 | 0.404 |
| Total Cost | $0.045 | $0.009 |
| Tests Correct | — | — |
| Attempt pass rate | 51.0% | 29.4% |
| Flaky tests | 3 | 6 |
| Total Runs | 51 | 51 |
| Output Tokens | 3,723 | 2,185 |
| Reasoning Tokens | 46,120 | 0 |
| Response Time (avg) | 2.25s | 1.39s |
| Response Time (max) | 14.63s | 3.84s |
| Response Time (total) | 35.99s | 23.70s |
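The attempt pass rate above appears to be passing attempts divided by total runs. A minimal sketch of that calculation; the raw pass counts (26 and 15) are back-derived from the published percentages over 51 runs, so they are assumptions, not source data:

```python
# Sketch: attempt pass rate as passes / total runs, shown to one decimal.
# Pass counts 26 and 15 are back-derived from the published 51.0% and
# 29.4% over 51 runs -- assumed, not taken from the source page.

def attempt_pass_rate(passes: int, total_runs: int) -> float:
    """Return the pass rate as a percentage rounded to one decimal place."""
    return round(100 * passes / total_runs, 1)

print(attempt_pass_rate(26, 51))  # Mercury 2: 51.0
print(attempt_pass_rate(15, 51))  # GPT-5.4 Nano: 29.4
```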

[Charts: Top Models by Score; Score vs Total Cost; Response Time (avg); Score vs Response Time (avg); Total Output Tokens; Score vs Total Output Tokens]

Category Breakdown

Anti-AI Tricks

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Mercury 2 | 6.9 | 9.9 | 50.0% | 0 | — | 1.12s | 2,546 | 2,609 |
| GPT-5.4 Nano | 3.5 | 8.0 | 16.7% | 1 | — | 1.18s | 800 | 0 |

Combined

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Mercury 2 | 10.0 | 10.0 | 100.0% | 0 | — | 3.28s | 268 | 4,887 |
| GPT-5.4 Nano | 3.0 | 10.0 | 0.0% | 0 | — | 3.84s | 280 | 0 |

Data parsing and extraction

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Mercury 2 | 7.3 | 5.9 | 83.3% | 1 | — | 1.11s | 183 | 1,656 |
| GPT-5.4 Nano | 6.5 | 10.0 | 50.0% | 0 | — | 1.11s | 219 | 0 |

Domain specific

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Mercury 2 | 2.9 | 7.2 | 11.1% | 1 | — | 6.48s | 41 | 30,754 |
| GPT-5.4 Nano | 2.9 | 4.4 | 22.2% | 2 | — | 926ms | 52 | 0 |

General Intelligence

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Mercury 2 | 4.8 | 10.0 | 0.0% | 0 | — | 821ms | 137 | 542 |
| GPT-5.4 Nano | 3.8 | 2.5 | 33.3% | 1 | — | 1.31s | 180 | 0 |

Instructions following

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Mercury 2 | 10.0 | 10.0 | 100.0% | 0 | — | 1.07s | 14 | 958 |
| GPT-5.4 Nano | 5.0 | 6.8 | 33.3% | 1 | — | 787ms | 84 | 0 |

Puzzle Solving

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Mercury 2 | 3.9 | 7.5 | 22.2% | 1 | — | 934ms | 354 | 2,758 |
| GPT-5.4 Nano | 3.7 | 7.3 | 22.2% | 1 | — | 1.29s | 348 | 0 |

Tool Calling

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Mercury 2 | 10.0 | 10.0 | 100.0% | 0 | — | 1.89s | 180 | 1,956 |
| GPT-5.4 Nano | 10.0 | 10.0 | 100.0% | 0 | — | 3.40s | 222 | 0 |
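To see where the gap between the two models is concentrated, the per-category scores above can be diffed directly. A small sketch; the score values are copied from the category tables, but the dict layout and variable names are illustrative, not part of the benchmark's own tooling:

```python
# Per-category scores from the Category Breakdown above,
# as (Mercury 2, GPT-5.4 Nano) pairs. Layout is illustrative.
scores = {
    "Anti-AI Tricks":              (6.9, 3.5),
    "Combined":                    (10.0, 3.0),
    "Data parsing and extraction": (7.3, 6.5),
    "Domain specific":             (2.9, 2.9),
    "General Intelligence":        (4.8, 3.8),
    "Instructions following":      (10.0, 5.0),
    "Puzzle Solving":              (3.9, 3.7),
    "Tool Calling":                (10.0, 10.0),
}

# Sort categories by Mercury 2's score lead, largest first.
gaps = sorted(
    ((cat, round(m - g, 1)) for cat, (m, g) in scores.items()),
    key=lambda item: item[1],
    reverse=True,
)
for cat, gap in gaps:
    print(f"{cat}: {gap:+.1f}")
```

The largest leads land in Combined and Instructions following, while Domain specific and Tool Calling are ties.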
