AI BENCHY: Trivia Ranking
See which AI models perform best on Trivia, which ones stay reliable, and where the biggest gaps appear. Rows are sorted by average response time, descending.
| Overall Rank | Model | Company | Trivia Score | Overall Score | Tests Correct | Avg Response Time |
|---|---|---|---|---|---|---|
| #24 | Grok 4.3 medium | X AI | 3.0 | 8.0 | 0/1 | 44.5s |
| #39 | HY3 Preview low | Tencent | 3.0 | 7.7 | 0/1 | 41.7s |
| #49 | GLM 5V Turbo medium | Z.ai | 3.0 | 7.5 | 0/1 | 41.0s |
| #20 | GLM 5 Turbo medium | Z.ai | 3.0 | 8.1 | 0/1 | 40.2s |
| #34 | HY3 Preview medium | Tencent | 3.0 | 7.8 | 0/1 | 39.9s |
| #65 | DeepSeek V4 Pro high | DeepSeek | 3.0 | 6.9 | 0/1 | 39.1s |
| #4 | GPT-5.5 medium | OpenAI | 2.8 | 8.9 | 0/1 | 37.9s |
| #95 | Cobuddy medium | Baidu | 3.0 | 5.8 | 0/1 | 37.0s |
| #90 | Qwen3.5 Plus 2026-04-20 none | Qwen | 3.0 | 5.9 | 0/1 | 33.3s |
| #21 | Qwen3.6 35B A3B medium | Qwen | 3.0 | 8.0 | 0/1 | 32.9s |
| #60 | GPT-5.4 Mini medium | OpenAI | 3.0 | 7.2 | 0/1 | 30.1s |
| #35 | Claude Sonnet 4.6 medium | Anthropic | 3.0 | 7.8 | 0/1 | 30.1s |
| #47 | GLM 5.1 medium | Z.ai | 3.0 | 7.6 | 0/1 | 29.4s |
| #58 | GPT-5.2 medium | OpenAI | 3.0 | 7.2 | 0/1 | 28.2s |
| #99 | gpt-oss-120b medium | OpenAI | 3.0 | 5.7 | 0/1 | 26.5s |
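The ordering above (average response time, descending) can be sketched as a plain descending sort. The row subset and tuple layout below are illustrative only, copied from a few table rows; they are not from any real AI BENCHY data feed or API.

```python
# Illustrative subset of (model, avg_response_time_seconds) rows
# taken from the table above; names and layout are assumptions.
rows = [
    ("gpt-oss-120b medium", 26.5),
    ("Grok 4.3 medium", 44.5),
    ("GPT-5.2 medium", 28.2),
    ("HY3 Preview low", 41.7),
]

# Sort descending on response time, matching the table's order.
ranked = sorted(rows, key=lambda r: r[1], reverse=True)

for model, secs in ranked:
    print(f"{model}: {secs}s")
```

Ties (e.g. the two 30.1s rows in the full table) keep their relative input order, since Python's sort is stable.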