Per-category results for Qwen: Qwen3 Coder Next (rank shown in parentheses; quality, consistency, and reasoning are scored 0–10):

| Category | Tests passed | Quality (rank) | Consistency (rank) | Attempt pass rate (rank) | Flaky tests (rank) | Reasoning (rank) | Total cost (rank) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Anti-AI Tricks | 0/2 | 1.00 (#25/27) | 10.00 (#22/27) | 0.0% (#25/27) | 0 (#22/27) | 1.00 (#17/17) | $0.00005 (#4/27) |
| Data parsing and extraction | 1/2 | 5.50 (#22/27) | 10.00 (#18/27) | 50.0% (#24/27) | 0 (#18/27) | 4.00 (#17/17) | $0.00105 (#6/27) |
| Domain specific | 1/3 | 4.00 (#16/27) | 10.00 (#12/27) | 33.3% (#20/27) | 0 (#12/27) | 5.00 (#14/17) | $0.00010 (#4/27) |
| Instructions following | 0/2 | 4.50 (#24/27) | 6.88 (#20/27) | 16.7% (#26/27) | 1 (#26/27) | 7.50 (#12/17) | $0.00014 (#5/27) |
| Puzzle Solving | 0/3 | 1.00 (#26/27) | 7.28 (#20/27) | 11.1% (#21/27) | 1 (#23/27) | 4.33 (#17/17) | $0.00058 (#5/27) |

Metric definitions:

- Quality: summarizes broad quality across the full private benchmark suite, so ranking reflects consistent performance.
- Consistency: reflects repeat-to-repeat stability (10 = very consistent, even if consistently wrong).
- Attempt pass rate: passed attempts / total attempts across repeats.
- Flaky tests: tests with mixed outcomes across repeats (at least one pass and one fail).
- Reasoning: measures reasoning clarity, efficiency, and consistency independent of final answer correctness.
- Total cost: total API cost for the category.
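The attempt pass rate and flaky-test count can be related with a small sketch. The Puzzle Solving row is the interesting case: 0/3 tests passed overall, yet the attempt pass rate is 11.1%, which is consistent with one flaky test passing on exactly one attempt. The three-repeats-per-test layout below is an assumption inferred from 11.1% ≈ 1/9; the benchmark's actual repeat count is not stated here.

```python
def attempt_pass_rate(outcomes):
    """Passed attempts / total attempts across repeats.

    `outcomes` is a list of tests; each test is a list of
    per-repeat results (1 = pass, 0 = fail).
    """
    total = sum(len(runs) for runs in outcomes)
    passed = sum(r for runs in outcomes for r in runs)
    return passed / total

def flaky_count(outcomes):
    """A test is flaky if its repeats include at least one pass AND one fail."""
    return sum(1 for runs in outcomes if 0 < sum(runs) < len(runs))

# Hypothetical Puzzle Solving outcomes: 3 tests x 3 repeats,
# one test passing a single attempt (the flaky one).
puzzle = [
    [0, 0, 0],
    [1, 0, 0],  # flaky: mixed outcomes across repeats
    [0, 0, 0],
]

print(round(attempt_pass_rate(puzzle) * 100, 1))  # 11.1
print(flaky_count(puzzle))                        # 1
```

Under this assumption the numbers reconcile: the test-level pass count stays 0/3 (no test passed on every repeat), while the attempt-level rate is 1/9 = 11.1% and the flaky count is 1, matching the row.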