AI BENCHY: Combined Ranking
See which AI models perform best on the Combined benchmark, which ones stay reliable, and where the biggest gaps appear. Rows are sorted by Tests Correct, descending.
| Rank | Model | Reasoning Effort | Company | Combined Score | Score | Tests Correct | Avg Response Time |
|---|---|---|---|---|---|---|---|
| #91 | Mercury 2 | none | Inception | 3.0 | 4.8 | 0/1 | 0.606s |
| #92 | Qwen3 Coder Next | medium | Qwen | 3.0 | 4.7 | 0/1 | 4.28s |
| #93 | GLM 4.7 Flash | medium | Z.ai | 2.8 | 4.6 | 0/1 | 65.6s |
| #94 | MiMo-V2-Flash | none | Xiaomi | 3.0 | 4.5 | 0/1 | 2.87s |
| #95 | Grok 4.1 Fast | none | X AI | 3.0 | 4.5 | 0/1 | 3.33s |
| #96 | GPT-5.4 Nano | none | OpenAI | 3.0 | 4.5 | 0/1 | 3.84s |
| #97 | Qwen3.5-9B | medium | Qwen | 3.0 | 4.4 | 0/1 | 0s |
| #98 | LFM2-24B-A2B | none | Liquid | 3.0 | 4.1 | 0/1 | 0s |
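The ordering above can be reproduced with a simple sort. This is a minimal sketch, assuming the tie-break implied by the table: rows are ordered by Tests Correct descending, then by Score descending (the field names and the starting rank of #91 are taken from the table; the tie-break rule itself is an assumption, not documented by the site).

```python
# Sketch of the ranking logic implied by the table.
# Assumption: ties on Tests Correct are broken by Score, descending.
rows = [
    {"model": "Qwen3 Coder Next", "score": 4.7, "tests_correct": 0},
    {"model": "LFM2-24B-A2B", "score": 4.1, "tests_correct": 0},
    {"model": "Mercury 2", "score": 4.8, "tests_correct": 0},
]

# Negate both keys so that sorted() (ascending) yields descending order.
ranked = sorted(rows, key=lambda r: (-r["tests_correct"], -r["score"]))

for rank, row in enumerate(ranked, start=91):  # this slice of the table starts at #91
    print(f"#{rank} {row['model']} ({row['score']})")
```

With every model at 0/1 tests correct, the Score column alone decides the order in this slice of the leaderboard.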