AI BENCHY Category
General Intelligence Ranking
See which AI models perform best on General Intelligence, which ones stay reliable, and where the biggest gaps appear. The table is sorted by Score, descending.
| Rank | Model | Company | General Intelligence Score | Score | Tests Correct | Response Time (avg) |
|---|---|---|---|---|---|---|
| #1 | Gemini 3 Flash Preview medium | Google | 10.0 | 10.0 | 1/1 | 4.09s |
| #2 | Gemini 3.1 Pro Preview medium | Google | 10.0 | 9.6 | 1/1 | 11.8s |
| #3 | Claude Opus 4.7 medium | Anthropic | 10.0 | 9.2 | 1/1 | 2.87s |
| #4 | Claude Opus 4.7 none | Anthropic | 10.0 | 9.2 | 1/1 | 3.47s |
| #5 | Gemini 3 Flash Preview low | Google | 10.0 | 8.8 | 1/1 | 3.68s |
| #6 | Seed-2.0-Lite medium | Bytedance Seed | 6.7 | 8.6 | 0/1 | 18.2s |
| #7 | GPT-5.3-Codex medium | OpenAI | 4.6 | 8.6 | 0/1 | 4.87s |
| #8 | Qwen3.5 Plus 2026-02-15 medium | Qwen | 4.7 | 8.5 | 0/1 | 79.9s |
| #9 | Qwen3.6 Plus Preview medium | Qwen | 5.1 | 8.5 | 0/1 | 27.1s |
| #10 | Qwen3.5-27B medium | Qwen | 6.1 | 8.4 | 0/1 | 101.4s |
| #11 | Gemini 3.1 Flash Lite Preview high | Google | 10.0 | 8.4 | 1/1 | 5.25s |
| #12 | Gemini 3 Pro Preview medium | Google | 10.0 | 8.4 | 1/1 | 9.34s |
| #13 | GLM 5 medium | Z.ai | 6.1 | 8.4 | 0/1 | 14.7s |
| #14 | Gemma 4 31B medium | Google | 10.0 | 8.3 | 1/1 | 9.57s |
| #15 | Gemini 2.5 Flash medium | Google | 4.8 | 8.2 | 0/1 | 4.86s |
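The ordering above follows the Score column in descending order. As a minimal sketch, the ranking can be reproduced from the raw rows like this (only a few rows from the table are included here for illustration; field names are assumptions, not part of the benchmark's own tooling):

```python
# Each row: (model, score, avg_response_time_seconds), values copied from the table.
rows = [
    ("Claude Opus 4.7 medium", 9.2, 2.87),
    ("Gemini 3 Flash Preview medium", 10.0, 4.09),
    ("Gemini 3.1 Pro Preview medium", 9.6, 11.8),
]

# Rank by Score, descending, as the leaderboard's sort note describes.
ranked = sorted(rows, key=lambda r: r[1], reverse=True)
for rank, (model, score, rt) in enumerate(ranked, start=1):
    print(f"#{rank} {model}: score {score}, avg {rt}s")
```

Note that ties in Score (e.g. ranks #3 and #4, both 9.2) are ordered by some secondary criterion the page does not state, so a stable sort on Score alone may not reproduce tied ranks exactly.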