AI BENCHY Category
Combined Ranking
See which AI models perform best on the Combined benchmark, which ones stay reliable, and where the biggest gaps appear. Rows are sorted by average response time, descending.
| Rank | Model | Company | Combined Score | Score | Tests Correct | Response Time (avg) |
|---|---|---|---|---|---|---|
| #83 | Mistral Small 4 none | Mistral | 3.0 | 5.2 | 0/1 | 1.72s |
| #91 | Mercury 2 none | Inception | 3.0 | 4.8 | 0/1 | 606ms |
| #14 | Gemma 4 31B medium | Google | 3.0 | 8.3 | 0/1 | 0ms |
| #48 | Gemma 4 31B none | Google | 3.0 | 6.9 | 0/1 | 0ms |
| #56 | Grok 4.20 Multi Agent Beta medium | xAI | 3.0 | 6.4 | 0/1 | 0ms |
| #84 | gpt-oss-120b none | OpenAI | 3.0 | 5.2 | 0/1 | 0ms |
| #97 | Qwen3.5-9B medium | Qwen | 3.0 | 4.4 | 0/1 | 0ms |
| #98 | LFM2-24B-A2B none | Liquid | 3.0 | 4.1 | 0/1 | 0ms |
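The response-time column mixes units (`1.72s` vs `606ms`), so reproducing the table's descending sort requires normalizing durations first. A minimal sketch, assuming a hypothetical `parse_ms` helper (not part of any benchmark tooling) and using only a few rows from the table above:

```python
def parse_ms(s: str) -> float:
    """Convert a duration string like '1.72s' or '606ms' to milliseconds."""
    s = s.strip()
    if s.endswith("ms"):
        return float(s[:-2])
    if s.endswith("s"):
        return float(s[:-1]) * 1000.0
    raise ValueError(f"unrecognized duration: {s!r}")

# A few (model, avg response time) pairs taken from the table.
rows = [
    ("Mercury 2 none", "606ms"),
    ("gpt-oss-120b none", "0ms"),
    ("Mistral Small 4 none", "1.72s"),
]

# Sort descending by average response time, matching the table order.
rows.sort(key=lambda r: parse_ms(r[1]), reverse=True)
```

Note that checking the suffix `ms` before the bare `s` matters, since every `ms` value also ends in `s`.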