AI BENCHY Categories
Instruction Following Ranking
See which AI models perform best at instruction following, which are most consistent, and where the gaps mainly lie.
| Rank | Model | Company | Instruction Following Score | Score | Tests Passed | Response Time (avg) |
|---|---|---|---|---|---|---|
| #1 | Gemini 3 Flash Preview medium | Google | 10.0 | 10.0 | 2/2 | 6.10s |
| #2 | Gemini 3.1 Pro Preview medium | Google | 10.0 | 9.6 | 2/2 | 9.56s |
| #3 | Claude Opus 4.7 medium | Anthropic | 10.0 | 9.2 | 2/2 | 1.57s |
| #4 | Claude Opus 4.7 none | Anthropic | 10.0 | 9.2 | 2/2 | 1.46s |
| #6 | Seed-2.0-Lite medium | Bytedance Seed | 10.0 | 8.6 | 2/2 | 7.26s |
| #7 | GPT-5.3-Codex medium | OpenAI | 10.0 | 8.6 | 2/2 | 3.04s |
| #8 | Qwen3.5 Plus 2026-02-15 medium | Qwen | 10.0 | 8.5 | 2/2 | 31.9s |
| #9 | Qwen3.6 Plus Preview medium | Qwen | 10.0 | 8.5 | 2/2 | 7.54s |
| #10 | Qwen3.5-27B medium | Qwen | 10.0 | 8.4 | 2/2 | 19.7s |
| #13 | GLM 5 medium | Z.ai | 10.0 | 8.4 | 2/2 | 7.25s |
| #14 | Gemma 4 31B medium | Google | 10.0 | 8.3 | 2/2 | 12.8s |
| #16 | GPT-5.4 medium | OpenAI | 10.0 | 8.2 | 2/2 | 3.11s |
| #17 | Gemini 3.1 Flash Lite Preview medium | Google | 10.0 | 8.2 | 2/2 | 1.91s |
| #18 | GLM 5 Turbo medium | Z.ai | 10.0 | 8.1 | 2/2 | 5.38s |
| #19 | Qwen3.5-122B-A10B medium | Qwen | 10.0 | 8.1 | 2/2 | 9.88s |