
AI BENCHY Compare

Inception: Mercury 2 vs OpenAI: GPT-4o-mini


Last updated: 2026-03-05

| Metric | Inception: Mercury 2 (medium; released 2026-02-24) | OpenAI: GPT-4o-mini (none; released 2024-07-18) |
|---|---|---|
| Rank | #35 | #46 |
| Avg Score | 54 | 41 |
| Consistency | 83 | 100 |
| Cost per result | 0.622 | 0.111 |
| Total Cost | $0.044 | $0.005 |
| Response Time (avg) | 2.47s | 2.21s |
| Response Time (max) | 14.63s | 7.58s |
| Response Time (total) | 34.56s | 17.69s |
| Tests Correct | | |
| Attempt pass rate | 57.8% | 26.7% |
| Flaky tests | 3 | 0 |
| Output Tokens | 3,571 | 1,528 |
| Reasoning Tokens | 45,379 | 0 |
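The headline numbers above can be put side by side as simple ratios. The sketch below hard-codes a few values copied verbatim from the summary table; the dictionary keys are names chosen here for illustration, not an AI BENCHY API:

```python
# Head-to-head ratios (Mercury 2 vs GPT-4o-mini) from the summary table.
# Values are copied from the dashboard; key names are ours, not AI BENCHY's.
metrics = {
    #                 (Mercury 2, GPT-4o-mini)
    "avg_score":      (54, 41),
    "total_cost_usd": (0.044, 0.005),
    "avg_response_s": (2.47, 2.21),
    "output_tokens":  (3571, 1528),
}

def ratio(name: str) -> float:
    """Mercury 2 value divided by GPT-4o-mini value for one metric."""
    a, b = metrics[name]
    return a / b

print(f"score advantage: {ratio('avg_score'):.2f}x")       # 54/41 ≈ 1.32x
print(f"cost multiple:   {ratio('total_cost_usd'):.1f}x")  # 0.044/0.005 = 8.8x
```

Read together, these say Mercury 2 scores roughly a third higher while costing nearly nine times as much on this run.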

Charts: Top Models by Score · Response Time (avg) · Score vs Total Cost · Avg Score vs Response Time (avg)

Category Breakdown

| Category | Model | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|---|
| Anti-AI Tricks | Inception: Mercury 2 | 73 | 98 | 66.7% | 0 | | 1.30s | 2,531 | 2,410 |
| Anti-AI Tricks | OpenAI: GPT-4o-mini | 40 | 100 | 33.3% | 0 | | 1.83s | 180 | 0 |
| Combined | Inception: Mercury 2 | 100 | 100 | 100.0% | 0 | | 3.28s | 268 | 4,887 |
| Combined | OpenAI: GPT-4o-mini | 100 | 100 | 0.0% | 0 | | 7.58s | 568 | 0 |
| Data parsing and extraction | Inception: Mercury 2 | 55 | 59 | 83.3% | 1 | | 1.11s | 183 | 1,656 |
| Data parsing and extraction | OpenAI: GPT-4o-mini | 99 | 100 | 100.0% | 0 | | 1.27s | 183 | 0 |
| Domain specific | Inception: Mercury 2 | 100 | 72 | 11.1% | 1 | | 6.48s | 41 | 30,754 |
| Domain specific | OpenAI: GPT-4o-mini | 100 | 100 | 0.0% | 0 | | 637ms | 15 | 0 |
| Instructions following | Inception: Mercury 2 | 100 | 100 | 100.0% | 0 | | 1.07s | 14 | 958 |
| Instructions following | OpenAI: GPT-4o-mini | 45 | 100 | 0.0% | 0 | | 1.27s | 69 | 0 |
| Puzzle Solving | Inception: Mercury 2 | 17 | 75 | 22.2% | 1 | | 934ms | 354 | 2,758 |
| Puzzle Solving | OpenAI: GPT-4o-mini | 23 | 100 | 0.0% | 0 | | 1.30s | 308 | 0 |
| Tool Calling | Inception: Mercury 2 | 100 | 100 | 100.0% | 0 | | 1.89s | 180 | 1,956 |
| Tool Calling | OpenAI: GPT-4o-mini | 100 | 100 | 100.0% | 0 | | 2.51s | 205 | 0 |
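One quick sanity check on the category scores above: a plain unweighted mean of the seven per-category scores does not reproduce the dashboard's Avg Score (54 and 41), so AI BENCHY presumably weights categories, likely by test count, though that is only an assumption; its aggregation method is not shown on this page. A minimal sketch using the scores listed above:

```python
# Unweighted mean of the seven per-category scores from the breakdown above.
# NOTE: this does not match the dashboard's "Avg Score" (54 and 41); AI BENCHY's
# actual weighting scheme is not documented here, so this is only a sanity check.
category_scores = {
    "Inception: Mercury 2": [73, 100, 55, 100, 100, 17, 100],
    "OpenAI: GPT-4o-mini":  [40, 100, 99, 100, 45, 23, 100],
}

for model, scores in category_scores.items():
    print(f"{model}: unweighted mean = {sum(scores) / len(scores):.1f}")
```

Both models' unweighted means land well above their reported Avg Scores, which suggests the weaker categories carry more tests (and therefore more weight) in the official aggregate.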
