AI BENCHY Category Failures
Domain specific
Wrong answer
See which AI models most often produce a Wrong answer on Domain specific tests, so you can spot weak points faster. Sorted by: Tests Correct ↓.
| Rank | Model | Company | Wrong Answer Count | Category Score | Tests Correct | Response Time (avg) |
|---|---|---|---|---|---|---|
| #2 | Gemini 3.1 Pro Preview medium | Google | 1 | 7.0 | 2/3 | 32.7s |
| #20 | Gemini 3 Flash Preview none | Google | 1 | 7.0 | 2/3 | 963ms |
| #25 | Claude Sonnet 4.6 none | Anthropic | 1 | 7.0 | 2/3 | 3.54s |
| #37 | Qwen3.5-Flash none | Qwen | 1 | 7.0 | 2/3 | 905ms |
| #42 | Qwen3.5-35B-A3B none | Qwen | 1 | 7.0 | 2/3 | 485ms |
| #49 | GLM 4.7 Flash none | Z.ai | 1 | 7.0 | 2/3 | 744ms |
| #3 | GPT-5.3-Codex medium | OpenAI | 2 | 4.0 | 1/3 | 64.3s |
| #4 | Qwen3.5 Plus 2026-02-15 medium | Qwen | 1 | 4.0 | 1/3 | 17.5s |
| #5 | Gemini 3 Flash Preview low | Google | 2 | 4.0 | 1/3 | 8.05s |
| #6 | Gemini 3 Pro Preview medium | Google | 2 | 4.0 | 1/3 | 7.01s |
| #7 | Qwen3.5-27B medium | Qwen | 1 | 4.0 | 1/3 | 79.5s |
| #8 | Gemini 3.1 Flash Lite Preview high | Google | 2 | 4.0 | 1/3 | 127.6s |
| #9 | GPT-5.4 medium | OpenAI | 2 | 4.0 | 1/3 | 74.3s |
| #13 | Step 3.5 Flash medium | Stepfun | 2 | 4.0 | 1/3 | 170.5s |
| #15 | GPT-5.2 Chat none | OpenAI | 2 | 4.0 | 1/3 | 17.8s |
| #16 | Gemini 2.5 Flash medium | Google | 2 | 4.0 | 1/3 | 37.3s |
| #17 | Gemini 3.1 Flash Lite Preview low | Google | 2 | 4.0 | 1/3 | 2.36s |
| #18 | DeepSeek V3.2 medium | DeepSeek | 1 | 4.0 | 1/3 | 39.3s |
| #21 | MiMo-V2-Flash medium | Xiaomi | 2 | 4.0 | 1/3 | 96.0s |
| #22 | Gemini 3.1 Flash Lite Preview none | Google | 2 | 4.0 | 1/3 | 942ms |
| #24 | Qwen3.5-Flash medium | Qwen | 1 | 4.0 | 1/3 | 146.5s |
| #27 | GPT-5.2 medium | OpenAI | 1 | 4.0 | 1/3 | 77.8s |
| #29 | Qwen3.5 Plus 2026-02-15 none | Qwen | 2 | 4.0 | 1/3 | 1.17s |
| #30 | Grok 4.1 Fast medium | X AI | 1 | 4.0 | 1/3 | 121.8s |
| #34 | GPT-5 Nano medium | OpenAI | 1 | 4.0 | 1/3 | 204.0s |
| #38 | Gemini 2.5 Flash none | Google | 2 | 4.0 | 1/3 | 495ms |
| #40 | Qwen3.5-122B-A10B none | Qwen | 2 | 4.0 | 1/3 | 465ms |
| #44 | GPT-5.4 none | OpenAI | 2 | 4.0 | 1/3 | 1.07s |
| #45 | Trinity Large Preview none | Arcee AI | 2 | 4.0 | 1/3 | 877ms |
| #46 | Kimi K2.5 none | Moonshot AI | 2 | 4.0 | 1/3 | 4.38s |
| #48 | Qwen3 Coder Next none | Qwen | 2 | 4.0 | 1/3 | 962ms |
| #50 | Qwen3 Coder Next medium | Qwen | 2 | 4.0 | 1/3 | 638ms |
| #51 | Mercury 2 none | Inception | 2 | 4.0 | 1/3 | 534ms |
| #53 | Grok 4.1 Fast none | X AI | 2 | 4.0 | 1/3 | 1.06s |
| #54 | MiMo-V2-Flash none | Xiaomi | 2 | 4.0 | 1/3 | 564ms |
| #55 | LFM2-24B-A2B none | Liquid | 1 | 4.0 | 1/3 | 287ms |
| #10 | Qwen3.5-122B-A10B medium | Qwen | 3 | 10.0 | 0/3 | 63.4s |
| #11 | Claude Sonnet 4.6 medium | Anthropic | 1 | 10.0 | 0/3 | 0ms |
| #12 | Gemini 3.1 Flash Lite Preview medium | Google | 3 | 10.0 | 0/3 | 4.21s |
| #14 | GLM 5 medium | Z.ai | 2 | 10.0 | 0/3 | 0ms |
| #19 | GPT-5.3 Chat none | OpenAI | 3 | 10.0 | 0/3 | 13.0s |
| #26 | Claude Opus 4.6 medium | Anthropic | 1 | 10.0 | 0/3 | 83.4s |
| #28 | Kimi K2.5 medium | Moonshot AI | 2 | 10.0 | 0/3 | 137.3s |
| #31 | GLM 5 none | Z.ai | 3 | 10.0 | 0/3 | 2.24s |
| #32 | GPT-5 Mini medium | OpenAI | 2 | 10.0 | 0/3 | 44.6s |
| #33 | DeepSeek V3.2 none | DeepSeek | 3 | 10.0 | 0/3 | 1.61s |
| #35 | Qwen3.5-35B-A3B medium | Qwen | 1 | 10.0 | 0/3 | 88.3s |
| #36 | Mercury 2 medium | Inception | 3 | 10.0 | 0/3 | 6.48s |
| #39 | gpt-oss-120b medium | OpenAI | 3 | 10.0 | 0/3 | 50.9s |
| #41 | Qwen3.5-27B none | Qwen | 3 | 10.0 | 0/3 | 540ms |
| #43 | MiniMax M2.5 medium | Minimax | 2 | 10.0 | 0/3 | 237.3s |
| #47 | GPT-4o-mini none | OpenAI | 3 | 10.0 | 0/3 | 637ms |
| #52 | GLM 4.7 Flash medium | Z.ai | 2 | 10.0 | 0/3 | 174.6s |