AI BENCHY

Trivia Ranking

See which AI models perform best on Trivia, which ones stay reliable, and where the biggest gaps appear.

Models Shown: 15
Average Trivia Score: 2.9

| Rank | Model | Reasoning | Company | Trivia Score | Score | Tests Correct | Response Time (avg) |
|---|---|---|---|---|---|---|---|
| #127 | GPT-4o-mini | none | OpenAI | 3.0 | 4.9 | 0/1 | 794ms |
| #128 | MiMo-V2.5 | none | Xiaomi | 3.0 | 4.9 | 0/1 | 3.89s |
| #129 | Qwen3 Coder Next | medium | Qwen | 3.0 | 4.8 | 0/1 | 399ms |
| #130 | Trinity Large Preview | none | Arcee AI | 3.0 | 4.8 | 0/1 | 777ms |
| #131 | Mercury 2 | none | Inception | 3.0 | 4.7 | 0/1 | 548ms |
| #132 | Qwen3.5-9B | none | Qwen | 3.0 | 4.7 | 0/1 | 2.32s |
| #133 | HY3 Preview | none | Tencent | 3.0 | 4.6 | 0/1 | 2.71s |
| #135 | GPT-5.4 Nano | none | OpenAI | 3.0 | 4.5 | 0/1 | 773ms |
| #136 | GLM 4.7 Flash | medium | Z.ai | 3.0 | 4.5 | 0/1 | 11.1s |
| #137 | MiMo-V2-Flash | none | Xiaomi | 3.0 | 4.5 | 0/1 | 1.82s |
| #139 | Grok 4.1 Fast | none | X AI | 3.0 | 4.4 | 0/1 | 731ms |
| #140 | Qwen3.5-9B | medium | Qwen | 3.0 | 4.3 | 0/1 | 177.0s |
| #142 | Granite 4.1 8B | none | IBM Granite | 3.0 | 4.1 | 0/1 | 306ms |
| #4 | GPT-5.5 | medium | OpenAI | 2.8 | 8.9 | 0/1 | 37.9s |
| #13 | GPT-5.3-Codex | medium | OpenAI | 2.8 | 8.2 | 0/1 | 14.4s |
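The summary stats above can be reproduced from the ranking rows. This is a minimal sketch using only the values shown in the table; how the site itself aggregates is an assumption (the listed "Average Trivia Score" of 2.9 likely averages over all ranked models, not just the 15 shown here, so the shown-rows average differs slightly).

```python
def parse_latency(text: str) -> float:
    """Convert a latency string like '794ms' or '3.89s' to seconds."""
    if text.endswith("ms"):
        return float(text[:-2]) / 1000.0
    return float(text.rstrip("s"))

# (model, trivia score, response time) taken from the table above;
# the second Qwen3.5-9B entry is the medium-reasoning variant.
rows = [
    ("GPT-4o-mini", 3.0, "794ms"),
    ("MiMo-V2.5", 3.0, "3.89s"),
    ("Qwen3 Coder Next", 3.0, "399ms"),
    ("Trinity Large Preview", 3.0, "777ms"),
    ("Mercury 2", 3.0, "548ms"),
    ("Qwen3.5-9B", 3.0, "2.32s"),
    ("HY3 Preview", 3.0, "2.71s"),
    ("GPT-5.4 Nano", 3.0, "773ms"),
    ("GLM 4.7 Flash", 3.0, "11.1s"),
    ("MiMo-V2-Flash", 3.0, "1.82s"),
    ("Grok 4.1 Fast", 3.0, "731ms"),
    ("Qwen3.5-9B (medium)", 3.0, "177.0s"),
    ("Granite 4.1 8B", 3.0, "306ms"),
    ("GPT-5.5", 2.8, "37.9s"),
    ("GPT-5.3-Codex", 2.8, "14.4s"),
]

avg_trivia = sum(score for _, score, _ in rows) / len(rows)
fastest = min(rows, key=lambda r: parse_latency(r[2]))
print(f"models shown: {len(rows)}")
print(f"average trivia score (shown rows): {avg_trivia:.2f}")
print(f"fastest responder: {fastest[0]}")
```

Averaging only the 15 visible rows gives roughly 2.97, consistent with the site-wide 2.9 once the lower-scoring hidden models are included.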

Charts (data not shown): Top Models by Trivia Score · Trivia Score vs Total Cost · Top Models by Response Time (avg)