# AI BENCHY: Trivia Ranking
See which AI models perform best on Trivia, which ones stay reliable, and where the biggest gaps appear. Rows are ranked by overall Score, descending.
| Rank | Model | Company | Trivia Score | Score | Tests Correct | Response Time (avg) |
|---|---|---|---|---|---|---|
| #1 | Gemini 3 Flash Preview medium | Google | 10.0 | 10.0 | 1/1 | 5.50s |
| #2 | Gemini 3.1 Pro Preview medium | Google | 10.0 | 9.6 | 1/1 | 6.27s |
| #3 | Claude Opus 4.7 medium | Anthropic | 3.0 | 8.9 | 0/1 | 2.25s |
| #4 | GPT-5.5 medium | OpenAI | 2.8 | 8.9 | 0/1 | 37.9s |
| #5 | Claude Opus 4.7 none | Anthropic | 3.0 | 8.9 | 0/1 | 1.46s |
| #6 | GPT-5.5 low | OpenAI | 3.0 | 8.9 | 0/1 | 10.1s |
| #7 | Gemini 3 Flash Preview low | Google | 10.0 | 8.8 | 1/1 | 2.75s |
| #9 | Qwen3.6 Max Preview medium | Qwen | 3.0 | 8.5 | 0/1 | 60.6s |
| #10 | Gemini 3 Pro Preview medium | Google | 0.0 | 8.4 | 0/0 | 0ms |
| #11 | Seed-2.0-Lite medium | Bytedance Seed | 3.0 | 8.3 | 0/1 | 48.3s |
| #12 | Qwen3.5 Plus 2026-02-15 medium | Qwen | 3.0 | 8.2 | 0/1 | 103.8s |
| #13 | GPT-5.3-Codex medium | OpenAI | 2.8 | 8.2 | 0/1 | 14.4s |
| #14 | Gemma 4 31B medium | Google | 3.0 | 8.2 | 0/1 | 90.1s |
| #15 | Qwen3.6 Plus Preview medium | Qwen | 0.0 | 8.2 | 0/0 | 0ms |
| #17 | Qwen3.5-27B medium | Qwen | 3.0 | 8.1 | 0/1 | 85.1s |
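The "ranked by Score, descending" ordering above can be sketched in a few lines. This is an illustrative example only: the row dictionaries use a handful of values from the table, and the data layout and sort logic are assumptions for demonstration, not AI BENCHY's actual implementation.

```python
# A few rows from the leaderboard above, as plain dicts (illustrative subset).
rows = [
    {"model": "Claude Opus 4.7 medium", "company": "Anthropic",
     "trivia": 3.0, "score": 8.9, "correct": "0/1", "avg_s": 2.25},
    {"model": "Gemini 3 Flash Preview medium", "company": "Google",
     "trivia": 10.0, "score": 10.0, "correct": "1/1", "avg_s": 5.50},
    {"model": "GPT-5.5 medium", "company": "OpenAI",
     "trivia": 2.8, "score": 8.9, "correct": "0/1", "avg_s": 37.9},
]

# Rank by overall score, highest first. Python's sort is stable, so rows
# with equal scores keep their input order (how the site breaks ties is
# not stated in the table).
ranked = sorted(rows, key=lambda r: -r["score"])

for rank, r in enumerate(ranked, start=1):
    print(f"#{rank} {r['model']} ({r['company']}): "
          f"score {r['score']}, {r['correct']} correct, {r['avg_s']}s avg")
```

Sorting on the negated score keeps the key function simple; `sorted(rows, key=lambda r: r["score"], reverse=True)` is equivalent.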