AI BENCHY Category
General Intelligence Ranking
See which AI models perform best on General Intelligence, which ones stay reliable, and where the biggest gaps appear. Rows are sorted by average response time, descending.
| Rank | Model | Company | General Intelligence Score | Overall Score | Tests Correct | Response Time (avg) |
|---|---|---|---|---|---|---|
| #22 | Gemini 3.1 Flash Lite Preview low | Google | 4.0 | 8.1 | 0/1 | 1.54s |
| #92 | Qwen3 Coder Next medium | Qwen | 6.3 | 4.7 | 0/1 | 1.39s |
| #87 | Qwen3 Coder Next none | Qwen | 10.0 | 5.1 | 1/1 | 1.34s |
| #96 | GPT-5.4 Nano none | OpenAI | 3.8 | 4.5 | 0/1 | 1.31s |
| #63 | Qwen3.5-35B-A3B none | Qwen | 6.5 | 6.1 | 0/1 | 1.19s |
| #55 | MiMo-V2-Omni none | Xiaomi | 4.5 | 6.5 | 0/1 | 1.19s |
| #21 | Gemini 3 Flash Preview none | Google | 10.0 | 8.1 | 1/1 | 1.13s |
| #70 | Qwen3.5-122B-A10B none | Qwen | 5.0 | 5.7 | 0/1 | 1.12s |
| #95 | Grok 4.1 Fast none | X AI | 4.4 | 4.5 | 0/1 | 1.08s |
| #81 | Elephant medium | Openrouter | 4.3 | 5.2 | 0/1 | 0.920s |
| #89 | GPT-4o-mini none | OpenAI | 4.0 | 4.9 | 0/1 | 0.909s |
| #85 | Elephant none | Openrouter | 4.0 | 5.2 | 0/1 | 0.854s |
| #54 | Mercury 2 medium | Inception | 4.8 | 6.5 | 0/1 | 0.821s |
| #59 | Qwen3.5-Flash none | Qwen | 10.0 | 6.2 | 1/1 | 0.803s |
| #75 | GLM 5.1 none | Z.ai | 5.0 | 5.6 | 0/1 | 0.790s |
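As a sanity check, the claimed sort order can be verified with a short script. The times below are transcribed from the table above and expressed in seconds; this is a quick sketch, not official AI BENCHY tooling.

```python
# Average response times from the leaderboard, in seconds, in the
# row order shown above. Note this differs from rank order, which
# follows the overall score column instead.
times_s = [1.54, 1.39, 1.34, 1.31, 1.19, 1.19,
           1.13, 1.12, 1.08, 0.920, 0.909, 0.854,
           0.821, 0.803, 0.790]

# The table is sorted by average response time, descending, so the
# sequence should be non-increasing from top to bottom.
assert all(a >= b for a, b in zip(times_s, times_s[1:]))
print(f"{len(times_s)} rows, sorted by response time (descending)")
```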