AI BENCHY: Trivia Ranking
See which AI models perform best on Trivia, which ones stay reliable, and where the biggest gaps appear. Sorted by average response time, descending.
| Rank | Model | Company | Trivia Score | Overall Score | Tests Correct | Response Time (avg) |
|---|---|---|---|---|---|---|
| #100 | Kimi K2.6 none | Moonshot AI | 3.0 | 5.7 | 0/1 | 1.36s |
| #30 | Gemini 3.1 Flash Lite Preview low | Google | 3.0 | 7.9 | 0/1 | 1.35s |
| #125 | GPT-5.4 Mini none | OpenAI | 3.0 | 5.0 | 0/1 | 1.33s |
| #79 | MiMo-V2-Omni none | Xiaomi | 3.0 | 6.3 | 0/1 | 1.30s |
| #64 | Gemma 4 31B none | Google | 3.0 | 6.9 | 0/1 | 1.25s |
| #81 | Gemini 2.5 Flash none | Google | 3.0 | 6.3 | 0/1 | 1.15s |
| #76 | Qwen3.5 Plus 2026-02-15 none | Qwen | 3.0 | 6.5 | 0/1 | 1.11s |
| #29 | Gemini 3 Flash Preview none | Google | 3.0 | 7.9 | 0/1 | 1.07s |
| #118 | Ling-2.6-flash none | inclusionAI | 3.0 | 5.3 | 0/1 | 1.06s |
| #98 | GPT-5.4 none | OpenAI | 3.0 | 5.7 | 0/1 | 990ms |
| #40 | Gemini 3.1 Flash Lite Preview none | Google | 3.0 | 7.7 | 0/1 | 814ms |
| #127 | GPT-4o-mini none | OpenAI | 3.0 | 4.9 | 0/1 | 794ms |
| #82 | Gemma 4 26B A4B none | Google | 3.0 | 6.3 | 0/1 | 778ms |
| #130 | Trinity Large Preview none | Arcee AI | 3.0 | 4.8 | 0/1 | 777ms |
| #135 | GPT-5.4 Nano none | OpenAI | 3.0 | 4.5 | 0/1 | 773ms |
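The Response Time column mixes units (seconds like `1.36s`, milliseconds like `990ms`), so sorting it programmatically requires normalizing to one unit first. A minimal sketch, assuming the rows have been extracted as `(model, time)` string pairs; `to_ms` is a hypothetical helper, not part of any AI BENCHY API, and `rows` is a small excerpt of the table above:

```python
def to_ms(value: str) -> float:
    """Convert a time string like '1.35s' or '990ms' to milliseconds."""
    value = value.strip()
    if value.endswith("ms"):
        return float(value[:-2])
    if value.endswith("s"):
        return float(value[:-1]) * 1000.0
    raise ValueError(f"unrecognized time format: {value!r}")

# A few rows from the table, as (model, avg response time) pairs.
rows = [
    ("GPT-5.4", "990ms"),
    ("Kimi K2.6", "1.36s"),
    ("GPT-5.4 Nano", "773ms"),
]

# Sort descending by average response time, matching the page's sort order.
rows.sort(key=lambda r: to_ms(r[1]), reverse=True)
```

Normalizing before sorting avoids the classic bug of comparing the raw strings, where `"990ms"` would sort after `"1.36s"` lexicographically.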