AI BENCHY Categories

General Knowledge Q&A Rankings

See which AI models perform best on general-knowledge Q&A, which are most consistent, and where the gaps mainly appear. Sort order: tests correct ↓.
| Rank | Model | Company | General Knowledge Q&A Score | Score | Tests Correct | Response Time (avg) |
|---|---|---|---|---|---|---|
| #35 | Claude Sonnet 4.6 medium | Anthropic | 3.0 | 7.8 | 0/1 | 30.1s |
| #36 | Step 3.5 Flash none | Stepfun | 3.0 | 7.8 | 0/1 | 114.1s |
| #37 | MiMo-V2-Pro medium | Xiaomi | 3.0 | 7.7 | 0/1 | 82.7s |
| #38 | Gemma 4 26B A4B medium | Google | 3.0 | 7.7 | 0/1 | 180.9s |
| #39 | HY3 Preview low | Tencent | 3.0 | 7.7 | 0/1 | 41.7s |
| #40 | Gemini 3.1 Flash Lite Preview none | Google | 3.0 | 7.7 | 0/1 | 814ms |
| #41 | GPT-5.2 Chat none | OpenAI | 3.0 | 7.6 | 0/1 | 6.89s |
| #42 | Kimi K2.6 medium | Moonshot AI | 3.0 | 7.6 | 0/1 | 130.3s |
| #43 | Step 3.5 Flash medium | Stepfun | 3.0 | 7.6 | 0/1 | 108.4s |
| #44 | Gemini 3.1 Flash Lite low | Google | 3.0 | 7.6 | 0/1 | 1.46s |
| #45 | Qwen3.5-Flash medium | Qwen | 3.0 | 7.6 | 0/1 | 49.0s |
| #46 | GPT-5.3 Chat none | OpenAI | 3.0 | 7.6 | 0/1 | 4.38s |
| #47 | GLM 5.1 medium | Z.ai | 3.0 | 7.6 | 0/1 | 29.4s |
| #48 | DeepSeek V4 Flash high | DeepSeek | 3.0 | 7.6 | 0/1 | 54.5s |
| #49 | GLM 5V Turbo medium | Z.ai | 3.0 | 7.5 | 0/1 | 41.0s |