AI BENCHY Category
Domain-Specific Ranking
See which AI models perform best on domain-specific tasks, which ones stay reliable, and where the biggest gaps appear.
| Rank | Model | Reasoning | Company | Domain-Specific Score | Overall Score | Tests Correct | Response Time (avg) |
|---|---|---|---|---|---|---|---|
| #27 | DeepSeek V3.2 | medium | DeepSeek | 5.3 | 8.0 | 1/3 | 39.3s |
| #30 | Step 3.5 Flash | medium | Stepfun | 5.3 | 7.9 | 1/3 | 170.5s |
| #31 | GLM 5V Turbo | medium | Z.ai | 5.3 | 7.8 | 1/3 | 38.1s |
| #32 | Qwen3.5-Flash | medium | Qwen | 5.3 | 7.8 | 1/3 | 146.5s |
| #34 | Kimi K2.6 | medium | Moonshot AI | 5.3 | 7.7 | 1/3 | 202.4s |
| #65 | MiMo-V2-Pro | none | Xiaomi | 5.3 | 6.0 | 1/3 | 1.78s |
| #66 | GPT-5.4 | none | OpenAI | 5.3 | 5.9 | 1/3 | 1.07s |
| #69 | Kimi K2.6 | none | Moonshot AI | 5.3 | 5.8 | 1/3 | 1.48s |
| #73 | Mistral Small 4 | medium | Mistral | 5.3 | 5.7 | 1/3 | 6.11s |
| #91 | Mercury 2 | none | Inception | 5.3 | 4.8 | 1/3 | 0.53s |
| #94 | MiMo-V2-Flash | none | Xiaomi | 5.3 | 4.5 | 1/3 | 0.56s |
| #57 | GPT-5 Nano | medium | OpenAI | 5.2 | 6.3 | 1/3 | 204.0s |
| #43 | Qwen3.5-35B-A3B | medium | Qwen | 4.1 | 7.4 | 0/3 | 88.3s |
| #44 | GPT-5.4 Mini | medium | OpenAI | 4.1 | 7.3 | 0/3 | 65.3s |
| #45 | GPT-5 Mini | medium | OpenAI | 3.6 | 7.0 | 0/3 | 44.6s |
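The rows above appear to be ordered by domain-specific score first, with ties broken by the higher overall score (which is why the overall rank numbers are not monotonic). A minimal sketch of that ordering, using a hand-copied sample of rows from the table (the data shape and tie-break rule are assumptions, not part of the published leaderboard):

```python
# Sample rows from the table: (model, domain-specific score, overall score).
# Reasoning setting omitted for brevity; values copied from the table above.
rows = [
    ("Qwen3.5-35B-A3B", 4.1, 7.4),
    ("Mercury 2", 5.3, 4.8),
    ("GPT-5 Nano", 5.2, 6.3),
    ("DeepSeek V3.2", 5.3, 8.0),
    ("GPT-5 Mini", 3.6, 7.0),
]

# Sort by domain-specific score descending, then by overall score descending
# to break ties -- the ordering the table seems to follow.
ranked = sorted(rows, key=lambda r: (-r[1], -r[2]))

for model, domain, overall in ranked:
    print(f"{model}: domain={domain}, overall={overall}")
```

With this key, the two 5.3 rows land first (DeepSeek V3.2 ahead of Mercury 2 on overall score), matching the table's order.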