Common-Sense Q&A Rankings
See which AI models perform best on common-sense Q&A, which are the most consistent, and where the gaps mainly appear. Sorted by: tests correct ↓ (a sketch of this ordering follows the table).
| Rank | Model | Company | Common-Sense Q&A Score | Overall Score | Tests Correct | Response Time (avg) |
|---|---|---|---|---|---|---|
| #50 | Qwen3.6 Flash medium | Qwen | 3.0 | 7.5 | 0/1 | 122.9s |
| #52 | Claude Opus 4.6 medium | Anthropic | 3.0 | 7.4 | 0/1 | 63.2s |
| #53 | GPT-5.4 Nano medium | OpenAI | 3.0 | 7.3 | 0/1 | 4.81s |
| #54 | Qwen3.6 Max Preview none | Qwen | 3.0 | 7.2 | 0/1 | 1.97s |
| #55 | MiMo-V2-Flash medium | Xiaomi | 3.0 | 7.2 | 0/1 | 1.96s |
| #56 | Seed-2.0-Mini medium | Bytedance Seed | 3.0 | 7.2 | 0/1 | 56.8s |
| #57 | Qwen3.5-35B-A3B medium | Qwen | 3.0 | 7.2 | 0/1 | 177.4s |
| #58 | GPT-5.2 medium | OpenAI | 3.0 | 7.2 | 0/1 | 28.2s |
| #59 | DeepSeek V3.2 medium | DeepSeek | 3.0 | 7.2 | 0/1 | 84.0s |
| #60 | GPT-5.4 Mini medium | OpenAI | 3.0 | 7.2 | 0/1 | 30.1s |
| #61 | Claude Sonnet 4.6 none | Anthropic | 3.0 | 7.2 | 0/1 | 4.67s |
| #62 | MiMo-V2-Omni medium | Xiaomi | 3.0 | 7.2 | 0/1 | 234.2s |
| #63 | Laguna M.1 medium | Poolside | 0.0 | 6.9 | 0/0 | 0ms |
| #64 | Gemma 4 31B none | Google | 3.0 | 6.9 | 0/1 | 1.25s |
| #65 | DeepSeek V4 Pro high | DeepSeek | 3.0 | 6.9 | 0/1 | 39.1s |
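
For readers curious how "tests correct ↓" resolves the many 0/1 ties above, here is a minimal sketch of the ordering. The record fields and the tie-break on overall score are assumptions inferred from the visible rows (rank tracks overall score within each tests-correct group), not AI BENCHY's published logic.

```python
# Hypothetical reconstruction of the leaderboard sort; field names are
# assumptions, not AI BENCHY's actual schema.
from dataclasses import dataclass

@dataclass
class Row:
    model: str
    company: str
    category_score: float   # "Common-Sense Q&A Score" column
    overall_score: float    # "Overall Score" column
    correct: int            # tests passed
    total: int              # tests run
    avg_latency_s: float    # "Response Time (avg)" in seconds

def leaderboard_order(rows: list[Row]) -> list[Row]:
    # Primary key: tests correct, descending (the page's "tests correct ↓").
    # Assumed tie-break: overall score, descending, which matches how the
    # 0/1 rows above run from 7.5 down to 6.9.
    return sorted(rows, key=lambda r: (r.correct, r.overall_score), reverse=True)
```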