Per-category results for OpenAI: GPT-5 Nano (ranks shown as position / number of models compared on that metric):

| Category | Tests passed | Quality | Consistency | Attempt pass rate | Flaky tests | Reasoning | Total cost |
|---|---|---|---|---|---|---|---|
| Anti-AI Tricks | 2/2 | 10.00 (#9/27) | 10.00 (#11/27) | 100.0% (#9/27) | 0 (#11/27) | 4.17 (#14/17) | $0.00351 (#16/27) |
| Data parsing and extraction | 1/2 | 5.50 (#17/27) | 5.81 (#21/27) | 83.3% (#18/27) | 1 (#22/27) | 7.83 (#15/17) | $0.00395 (#14/27) |
| Domain specific | 1/3 | 4.00 (#10/27) | 4.41 (#24/27) | 55.5% (#10/27) | 2 (#24/27) | 4.22 (#15/17) | $0.01355 (#15/27) |
| Instructions following | 1/2 | 7.00 (#14/27) | 6.41 (#23/27) | 83.3% (#14/27) | 1 (#21/27) | 6.45 (#13/17) | $0.00179 (#15/27) |
| Puzzle Solving | 1/3 | 4.67 (#16/27) | 4.90 (#26/27) | 55.5% (#12/27) | 2 (#25/27) | 6.72 (#15/17) | $0.00527 (#15/27) |

Metric definitions:

- Quality: summarizes broad quality across the full private benchmark suite, so the ranking reflects consistent performance.
- Consistency: repeat-to-repeat stability (10 = very consistent, even if consistently wrong).
- Attempt pass rate: passed attempts / total attempts across repeats.
- Flaky tests: tests with mixed outcomes across repeats (at least one pass and one fail).
- Reasoning: measures reasoning clarity, efficiency, and consistency independent of final-answer correctness; only 17 models were scored on this metric.
- Total cost: total cost across the category's attempts.
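The pass-rate and flaky-test columns fall straight out of per-repeat outcomes. Below is a minimal sketch of that bookkeeping, assuming each test is repeated three times and each attempt is recorded as a boolean; the function names, the sample `outcomes` dict, and the test labels are all illustrative, not part of the benchmark's own tooling.

```python
from typing import Dict, List

def attempt_pass_rate(outcomes: Dict[str, List[bool]]) -> float:
    """Pass rate = passed attempts / total attempts across all repeats."""
    total = sum(len(runs) for runs in outcomes.values())
    passed = sum(sum(runs) for runs in outcomes.values())
    return 100.0 * passed / total if total else 0.0

def flaky_tests(outcomes: Dict[str, List[bool]]) -> int:
    """A test is flaky if its repeats mix outcomes: at least one pass and one fail."""
    return sum(1 for runs in outcomes.values() if any(runs) and not all(runs))

# Hypothetical outcomes consistent with the "Puzzle Solving" row:
# 1 of 3 tests passes on every repeat, 5 of 9 attempts pass, 2 tests are flaky.
outcomes = {
    "puzzle_a": [True, True, True],    # stable pass
    "puzzle_b": [True, False, False],  # flaky
    "puzzle_c": [False, True, False],  # flaky
}
print(f"{attempt_pass_rate(outcomes):.1f}%")  # 55.6% (table truncates to 55.5%)
print(flaky_tests(outcomes))                  # 2
```

Under this reading, the table's pass rates land on the expected steps for three repeats per test: 83.3% corresponds to 5 of 6 attempts across a two-test category and 55.5% to 5 of 9 across a three-test category, which also lines up with their flaky-test counts of 1 and 2.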