AI BENCHY


Combined Ranking

See which AI models perform best on the Combined benchmark, which ones stay reliable, and where the biggest gaps appear. Rows are sorted by Tests Correct, ascending.

Models Shown: 15

Average Combined Score: 6.2

| Rank | Model | Reasoning | Company | Combined Score | Score | Tests Correct | Response Time (avg) |
|------|-------|-----------|---------|----------------|-------|---------------|---------------------|
| #89 | GPT-4o-mini | none | OpenAI | 3.0 | 4.9 | 0/1 | 7.58s |
| #90 | Qwen3.5-9B | none | Qwen | 3.0 | 4.8 | 0/1 | 5.91s |
| #91 | Mercury 2 | none | Inception | 3.0 | 4.8 | 0/1 | 606ms |
| #92 | Qwen3 Coder Next | medium | Qwen | 3.0 | 4.7 | 0/1 | 4.28s |
| #93 | GLM 4.7 Flash | medium | Z.ai | 2.8 | 4.6 | 0/1 | 65.6s |
| #94 | MiMo-V2-Flash | none | Xiaomi | 3.0 | 4.5 | 0/1 | 2.87s |
| #95 | Grok 4.1 Fast | none | X AI | 3.0 | 4.5 | 0/1 | 3.33s |
| #96 | GPT-5.4 Nano | none | OpenAI | 3.0 | 4.5 | 0/1 | 3.84s |
| #97 | Qwen3.5-9B | medium | Qwen | 3.0 | 4.4 | 0/1 | 0ms |
| #98 | LFM2-24B-A2B | none | Liquid | 3.0 | 4.1 | 0/1 | 0ms |
| #1 | Gemini 3 Flash Preview | medium | Google | 10.0 | 10.0 | 1/1 | 50.2s |
| #2 | Gemini 3.1 Pro Preview | medium | Google | 9.5 | 9.6 | 1/1 | 40.6s |
| #3 | Claude Opus 4.7 | medium | Anthropic | 10.0 | 9.2 | 1/1 | 21.4s |
| #4 | Claude Opus 4.7 | none | Anthropic | 9.5 | 9.2 | 1/1 | 18.3s |
| #6 | Seed-2.0-Lite | medium | Bytedance Seed | 10.0 | 8.6 | 1/1 | 37.7s |
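The summary stats above (sort order, average score across the models shown) can be reproduced from the table rows. A minimal sketch, assuming a simple record type; the field names here are illustrative, not AI BENCHY's actual data format:

```python
from dataclasses import dataclass

@dataclass
class Entry:
    rank: int            # leaderboard rank, e.g. 89 for "#89"
    model: str           # model name as shown in the table
    score: float         # the "Score" column
    tests_correct: int   # numerator of "Tests Correct", e.g. 0 for "0/1"

# A few rows transcribed from the table above
entries = [
    Entry(89, "GPT-4o-mini", 4.9, 0),
    Entry(1, "Gemini 3 Flash Preview", 10.0, 1),
    Entry(3, "Claude Opus 4.7", 9.2, 1),
]

# Sort by Tests Correct ascending, breaking ties by rank,
# matching the ordering the page displays
entries.sort(key=lambda e: (e.tests_correct, e.rank))

# Average score across the models shown
avg = sum(e.score for e in entries) / len(entries)
print(f"Average: {avg:.1f}")
```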

[Charts: Top Models by Combined Score · Combined Score vs Total Cost · Top Models by Response Time (avg)]