AI BENCHY Compare

Inception: Mercury 2 vs OpenAI: GPT-5.4 Mini

Last updated: 2026-03-17

| Metric | Mercury 2 (medium, released 2026-02-24) | GPT-5.4 Mini (released 2026-03-17) |
|---|---|---|
| Rank | #42 | #66 |
| Score | 6.3 | 4.8 |
| Consistency | 8.5 | 8.6 |
| Cost per result | 0.634 | 0.737 |
| Total Cost | $0.045 | $0.030 |
| Tests Correct | | |
| Attempt pass rate | 51.0% | 31.4% |
| Flaky tests | 3 | 3 |
| Total Runs | 51 | 51 |
| Output Tokens | 3,723 | 2,085 |
| Reasoning Tokens | 46,120 | 0 |
| Response Time (avg) | 2.25s | 1.17s |
| Response Time (max) | 14.63s | 2.52s |
| Response Time (total) | 35.99s | 19.82s |
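The summary numbers above support a couple of simple derived ratios. A quick sketch; note that "cost per run" and "output tokens per second" are our own illustrative ratios computed from the table, not metrics reported by AI BENCHY:

```python
# Derived ratios from the summary table.
# NOTE: these ratios are illustrative, not official AI BENCHY metrics.

def cost_per_run(total_cost: float, runs: int) -> float:
    """Average dollar cost of a single benchmark run."""
    return total_cost / runs

def output_tokens_per_second(output_tokens: int, total_time_s: float) -> float:
    """Throughput over the whole benchmark: output tokens / total wall time."""
    return output_tokens / total_time_s

# Values copied from the table:
# Mercury 2:    $0.045 total, 51 runs, 3,723 output tokens, 35.99s total
# GPT-5.4 Mini: $0.030 total, 51 runs, 2,085 output tokens, 19.82s total
for name, cost, tokens, secs in [
    ("Mercury 2", 0.045, 3723, 35.99),
    ("GPT-5.4 Mini", 0.030, 2085, 19.82),
]:
    print(f"{name}: ${cost_per_run(cost, 51):.5f}/run, "
          f"{output_tokens_per_second(tokens, secs):.0f} tok/s")
```

By these ratios the two models are closer than the headline scores suggest: both cost well under a tenth of a cent per run, and their output throughput is similar; Mercury 2's much larger reasoning-token count (46,120 vs 0) is where the cost and latency gap comes from.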

Charts (interactive on the original page): Top Models by Score; Score vs Total Cost; Response Time (avg); Score vs Response Time (avg); Total Output Tokens; Score vs Total Output Tokens.

Category Breakdown

Anti-AI Tricks

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Mercury 2 | 6.9 | 9.9 | 50.0% | 0 | | 1.12s | 2,546 | 2,609 |
| GPT-5.4 Mini | 3.1 | 8.1 | 8.3% | 1 | | 929ms | 654 | 0 |

Combined

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Mercury 2 | 10.0 | 10.0 | 100.0% | 0 | | 3.28s | 268 | 4,887 |
| GPT-5.4 Mini | 3.0 | 10.0 | 0.0% | 0 | | 2.52s | 298 | 0 |

Data parsing and extraction

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Mercury 2 | 7.3 | 5.9 | 83.3% | 1 | | 1.11s | 183 | 1,656 |
| GPT-5.4 Mini | 10.0 | 10.0 | 100.0% | 0 | | 1.30s | 222 | 0 |

Domain specific

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Mercury 2 | 2.9 | 7.2 | 11.1% | 1 | | 6.48s | 41 | 30,754 |
| GPT-5.4 Mini | 3.5 | 4.4 | 33.3% | 2 | | 937ms | 88 | 0 |

General Intelligence

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Mercury 2 | 4.8 | 10.0 | 0.0% | 0 | | 821ms | 137 | 542 |
| GPT-5.4 Mini | 4.8 | 10.0 | 0.0% | 0 | | 1.82s | 174 | 0 |

Instructions following

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Mercury 2 | 10.0 | 10.0 | 100.0% | 0 | | 1.07s | 14 | 958 |
| GPT-5.4 Mini | 6.3 | 10.0 | 50.0% | 0 | | 728ms | 101 | 0 |

Puzzle Solving

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Mercury 2 | 3.9 | 7.5 | 22.2% | 1 | | 934ms | 354 | 2,758 |
| GPT-5.4 Mini | 5.4 | 10.0 | 33.3% | 0 | | 860ms | 293 | 0 |

Tool Calling

| Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Mercury 2 | 10.0 | 10.0 | 100.0% | 0 | | 1.89s | 180 | 1,956 |
| GPT-5.4 Mini | 3.0 | 10.0 | 0.0% | 0 | | 2.32s | 255 | 0 |
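The per-category scores above can be tallied head-to-head. A minimal sketch; the score pairs are copied from the category tables, but the win/loss tally itself is our own summary, not an AI BENCHY metric:

```python
# Per-category scores (Mercury 2, GPT-5.4 Mini), copied from the
# Category Breakdown tables above.
scores = {
    "Anti-AI Tricks": (6.9, 3.1),
    "Combined": (10.0, 3.0),
    "Data parsing and extraction": (7.3, 10.0),
    "Domain specific": (2.9, 3.5),
    "General Intelligence": (4.8, 4.8),
    "Instructions following": (10.0, 6.3),
    "Puzzle Solving": (3.9, 5.4),
    "Tool Calling": (10.0, 3.0),
}

# Count which model leads in each category.
mercury_wins = sum(m > g for m, g in scores.values())
gpt_wins = sum(g > m for m, g in scores.values())
ties = sum(m == g for m, g in scores.values())

print(f"Mercury 2 leads {mercury_wins} categories, "
      f"GPT-5.4 Mini leads {gpt_wins}, ties: {ties}")
# → Mercury 2 leads 4 categories, GPT-5.4 Mini leads 3, ties: 1
```

The tally matches the overall ranking: Mercury 2 leads more categories (and by wider margins in Combined, Instructions following, and Tool Calling), while GPT-5.4 Mini's wins are concentrated in Data parsing and extraction, Domain specific, and Puzzle Solving.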
