AI BENCHY Category
Combined Ranking
See which AI models perform best on the Combined ranking, which ones stay reliable, and where the biggest gaps appear.
| Rank | Model | Company | Combined Score | Score | Tests Correct | Response Time (avg) |
|---|---|---|---|---|---|---|
| #1 | Gemini 3 Flash Preview medium | Google | 10.0 | 10.0 | 1/1 | 50.2s |
| #3 | Claude Opus 4.7 medium | Anthropic | 10.0 | 9.2 | 1/1 | 21.4s |
| #6 | Seed-2.0-Lite medium | Bytedance Seed | 10.0 | 8.6 | 1/1 | 37.7s |
| #7 | GPT-5.3-Codex medium | OpenAI | 10.0 | 8.6 | 1/1 | 19.6s |
| #8 | Qwen3.5 Plus 2026-02-15 medium | Qwen | 10.0 | 8.5 | 1/1 | 46.8s |
| #9 | Qwen3.6 Plus Preview medium | Qwen | 10.0 | 8.5 | 1/1 | 35.0s |
| #10 | Qwen3.5-27B medium | Qwen | 10.0 | 8.4 | 1/1 | 164.0s |
| #11 | Gemini 3.1 Flash Lite Preview high | Google | 10.0 | 8.4 | 1/1 | 280.5s |
| #13 | GLM 5 medium | Z.ai | 10.0 | 8.4 | 1/1 | 29.0s |
| #15 | Gemini 2.5 Flash medium | Google | 10.0 | 8.2 | 1/1 | 28.4s |
| #16 | GPT-5.4 medium | OpenAI | 10.0 | 8.2 | 1/1 | 20.6s |
| #17 | Gemini 3.1 Flash Lite Preview medium | Google | 10.0 | 8.2 | 1/1 | 14.9s |
| #18 | GLM 5 Turbo medium | Z.ai | 10.0 | 8.1 | 1/1 | 13.9s |
| #38 | GPT-5.4 Nano medium | OpenAI | 9.8 | 7.6 | 1/1 | 24.1s |
| #41 | MiMo-V2-Flash medium | Xiaomi | 9.8 | 7.5 | 1/1 | 75.7s |