# AI BENCHY: Trivia Ranking
See which AI models perform best on Trivia, which ones stay reliable, and where the biggest gaps appear. The table below is sorted by Tests Correct, ascending.
| Rank | Model | Reasoning | Company | Trivia Score | Score | Tests Correct | Response Time (avg) |
|---|---|---|---|---|---|---|---|
| #119 | gpt-oss-120b | none | OpenAI | 3.0 | 5.2 | 0/1 | 47.3s |
| #120 | DeepSeek V4 Flash | none | DeepSeek | 3.0 | 5.2 | 0/1 | 3.07s |
| #121 | Qwen3 Coder Next | none | Qwen | 3.0 | 5.2 | 0/1 | 601ms |
| #122 | Nemotron 3 Super | none | NVIDIA | 3.0 | 5.2 | 0/1 | 8.94s |
| #123 | MiniMax M2.7 | medium | MiniMax | 3.0 | 5.1 | 0/1 | 22.8s |
| #124 | Mistral Small 4 | none | Mistral | 3.0 | 5.1 | 0/1 | 397ms |
| #125 | GPT-5.4 Mini | none | OpenAI | 3.0 | 5.0 | 0/1 | 1.33s |
| #126 | Qwen3.6 35B A3B | none | Qwen | 3.0 | 5.0 | 0/1 | 414ms |
| #127 | GPT-4o-mini | none | OpenAI | 3.0 | 4.9 | 0/1 | 794ms |
| #128 | MiMo-V2.5 | none | Xiaomi | 3.0 | 4.9 | 0/1 | 3.89s |
| #129 | Qwen3 Coder Next | medium | Qwen | 3.0 | 4.8 | 0/1 | 399ms |
| #130 | Trinity Large Preview | none | Arcee AI | 3.0 | 4.8 | 0/1 | 777ms |
| #131 | Mercury 2 | none | Inception | 3.0 | 4.7 | 0/1 | 548ms |
| #132 | Qwen3.5-9B | none | Qwen | 3.0 | 4.7 | 0/1 | 2.32s |
| #133 | HY3 Preview | none | Tencent | 3.0 | 4.6 | 0/1 | 2.71s |