Per-category results for MiniMax: MiniMax M2.5 (rank shown in parentheses is its position among the models compared: 27 overall, 17 for the Reasoning metric):

| Category | Passed | Score | Consistency | Pass rate | Flaky | Reasoning | Total Cost |
|---|---|---|---|---|---|---|---|
| Anti-AI Tricks | 2/2 | 10.00 (#11/27) | 10.00 (#16/27) | 100.0% (#11/27) | 0 (#16/27) | 7.58 (#11/17) | $0.00902 (#21/27) |
| Data parsing and extraction | 1/2 | 5.50 (#19/27) | 5.81 (#23/27) | 83.3% (#20/27) | 1 (#24/27) | 9.45 (#11/17) | $0.00774 (#17/27) |
| Domain specific | 0/3 | 1.00 (#25/27) | 4.41 (#26/27) | 22.2% (#23/27) | 2 (#26/27) | 6.06 (#9/17) | $0.16952 (#25/27) |
| Instructions following | 1/2 | 7.00 (#15/27) | 6.41 (#24/27) | 66.7% (#16/27) | 1 (#22/27) | 8.33 (#10/17) | $0.00307 (#17/27) |
| Puzzle Solving | 1/3 | 4.33 (#17/27) | 4.79 (#27/27) | 55.5% (#13/27) | 2 (#26/27) | 8.28 (#10/17) | $0.01205 (#17/27) |

Metric definitions:

- Score: summarizes broad quality across our full private benchmark suite, so ranking reflects consistent performance.
- Consistency: reflects repeat-to-repeat stability (10 = very consistent, even if consistently wrong).
- Pass rate: attempt pass rate = passed attempts / total attempts across repeats.
- Flaky: tests with mixed outcomes across repeats (at least one pass and one fail).
- Reasoning: measures reasoning clarity, efficiency, and consistency independent of final answer correctness.
- Total Cost: total cost of running the category.
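The pass-rate and flaky-test definitions above can be sketched in code. This is a minimal illustration of the stated formulas only, not the benchmark's actual pipeline; the `summarize` helper and the sample outcomes are hypothetical.

```python
def summarize(repeats: dict[str, list[bool]]) -> tuple[float, int]:
    """Derive attempt pass rate and flaky-test count from per-test repeat outcomes.

    repeats maps a test name to the pass/fail result of each repeated attempt.
    """
    attempts = sum(len(r) for r in repeats.values())
    passed = sum(sum(r) for r in repeats.values())
    # Attempt pass rate = passed attempts / total attempts across repeats.
    pass_rate = passed / attempts
    # A test is flaky if its repeats mix outcomes: at least one pass AND one fail.
    flaky = sum(1 for r in repeats.values() if any(r) and not all(r))
    return pass_rate, flaky

# Hypothetical outcomes shaped like the Puzzle Solving row:
# 3 tests x 3 repeats, 5/9 attempts passed, 2 tests with mixed outcomes.
outcomes = {
    "puzzle_1": [True, True, True],
    "puzzle_2": [True, False, False],
    "puzzle_3": [False, True, False],
}
rate, flaky = summarize(outcomes)
print(f"{rate:.1%} pass rate, {flaky} flaky")  # 55.6% pass rate, 2 flaky
```

Note that a test can fail every repeat without being flaky: flakiness measures outcome variance across repeats, while pass rate measures quality, which is why the two columns rank independently.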