AI BENCHY Category
Domain-Specific Leaderboard
See which AI models perform best on Domain-Specific tasks, which stay reliable, and where the gaps are widest. Sorted by: Response time (avg.) ↓.
| Rank | Model | Company | Domain-Specific score | Avg. score | Tests passed | Response time (avg.) |
|---|---|---|---|---|---|---|
| #43 | MiniMax M2.5 medium | Minimax | 10.0 | 4.7 | 0/3 | 237.3s |
| #34 | GPT-5 Nano medium | OpenAI | 4.0 | 5.5 | 1/3 | 204.0s |
| #52 | GLM 4.7 Flash medium | Z.ai | 10.0 | 3.1 | 0/3 | 174.6s |
| #13 | Step 3.5 Flash medium | Stepfun | 4.0 | 7.4 | 1/3 | 170.5s |
| #24 | Qwen3.5-Flash medium | Qwen | 4.0 | 6.9 | 1/3 | 146.5s |
| #28 | Kimi K2.5 medium | Moonshot AI | 10.0 | 6.4 | 0/3 | 137.3s |
| #8 | Gemini 3.1 Flash Lite Preview high | Google | 4.0 | 8.2 | 1/3 | 127.6s |
| #30 | Grok 4.1 Fast medium | X AI | 4.0 | 6.2 | 1/3 | 121.8s |
| #21 | MiMo-V2-Flash medium | Xiaomi | 4.0 | 7.2 | 1/3 | 96.0s |
| #35 | Qwen3.5-35B-A3B medium | Qwen | 10.0 | 5.5 | 0/3 | 88.3s |
| #26 | Claude Opus 4.6 medium | Anthropic | 10.0 | 6.6 | 0/3 | 83.4s |
| #7 | Qwen3.5-27B medium | Qwen | 4.0 | 8.2 | 1/3 | 79.5s |
| #27 | GPT-5.2 medium | OpenAI | 4.0 | 6.5 | 1/3 | 77.8s |
| #9 | GPT-5.4 medium | OpenAI | 4.0 | 8.0 | 1/3 | 74.3s |
| #3 | GPT-5.3-Codex medium | OpenAI | 4.0 | 8.4 | 1/3 | 64.3s |
| #10 | Qwen3.5-122B-A10B medium | Qwen | 10.0 | 7.7 | 0/3 | 63.4s |
| #39 | gpt-oss-120b medium | OpenAI | 10.0 | 5.1 | 0/3 | 50.9s |
| #32 | GPT-5 Mini medium | OpenAI | 10.0 | 6.0 | 0/3 | 44.6s |
| #18 | DeepSeek V3.2 medium | DeepSeek | 4.0 | 7.3 | 1/3 | 39.3s |
| #16 | Gemini 2.5 Flash medium | Google | 4.0 | 7.4 | 1/3 | 37.3s |
| #2 | Gemini 3.1 Pro Preview medium | Google | 7.0 | 9.4 | 2/3 | 32.7s |
| #1 | Gemini 3 Flash Preview medium | Google | 10.0 | 10.0 | 3/3 | 21.1s |
| #15 | GPT-5.2 Chat none | OpenAI | 4.0 | 7.4 | 1/3 | 17.8s |
| #4 | Qwen3.5 Plus 2026-02-15 medium | Qwen | 4.0 | 8.3 | 1/3 | 17.5s |
| #19 | GPT-5.3 Chat none | OpenAI | 10.0 | 7.3 | 0/3 | 13.0s |
| #5 | Gemini 3 Flash Preview low | Google | 4.0 | 8.2 | 1/3 | 8.05s |
| #6 | Gemini 3 Pro Preview medium | Google | 4.0 | 8.2 | 1/3 | 7.01s |
| #36 | Mercury 2 medium | Inception | 10.0 | 5.3 | 0/3 | 6.48s |
| #46 | Kimi K2.5 none | Moonshot AI | 4.0 | 4.1 | 1/3 | 4.38s |
| #12 | Gemini 3.1 Flash Lite Preview medium | Google | 10.0 | 7.5 | 0/3 | 4.21s |
| #25 | Claude Sonnet 4.6 none | Anthropic | 7.0 | 6.8 | 2/3 | 3.54s |
| #17 | Gemini 3.1 Flash Lite Preview low | Google | 4.0 | 7.3 | 1/3 | 2.36s |
| #31 | GLM 5 none | Z.ai | 10.0 | 6.0 | 0/3 | 2.24s |
| #33 | DeepSeek V3.2 none | DeepSeek | 10.0 | 5.5 | 0/3 | 1.61s |
| #29 | Qwen3.5 Plus 2026-02-15 none | Qwen | 4.0 | 6.2 | 1/3 | 1.17s |
| #44 | GPT-5.4 none | OpenAI | 4.0 | 4.5 | 1/3 | 1.07s |
| #53 | Grok 4.1 Fast none | X AI | 4.0 | 2.9 | 1/3 | 1.06s |
| #20 | Gemini 3 Flash Preview none | Google | 7.0 | 7.2 | 2/3 | 963ms |
| #48 | Qwen3 Coder Next none | Qwen | 4.0 | 4.0 | 1/3 | 962ms |
| #22 | Gemini 3.1 Flash Lite Preview none | Google | 4.0 | 7.1 | 1/3 | 942ms |
| #37 | Qwen3.5-Flash none | Qwen | 7.0 | 5.2 | 2/3 | 905ms |
| #45 | Trinity Large Preview none | Arcee AI | 4.0 | 4.2 | 1/3 | 877ms |
| #49 | GLM 4.7 Flash none | Z.ai | 7.0 | 3.9 | 2/3 | 744ms |
| #50 | Qwen3 Coder Next medium | Qwen | 4.0 | 3.5 | 1/3 | 638ms |
| #47 | GPT-4o-mini none | OpenAI | 10.0 | 4.0 | 0/3 | 637ms |
| #54 | MiMo-V2-Flash none | Xiaomi | 4.0 | 2.9 | 1/3 | 564ms |
| #41 | Qwen3.5-27B none | Qwen | 10.0 | 4.9 | 0/3 | 540ms |
| #51 | Mercury 2 none | Inception | 4.0 | 3.4 | 1/3 | 534ms |
| #38 | Gemini 2.5 Flash none | Google | 4.0 | 5.2 | 1/3 | 495ms |
| #42 | Qwen3.5-35B-A3B none | Qwen | 7.0 | 4.7 | 2/3 | 485ms |
| #40 | Qwen3.5-122B-A10B none | Qwen | 4.0 | 5.0 | 1/3 | 465ms |
| #55 | LFM2-24B-A2B none | Liquid | 4.0 | 2.6 | 1/3 | 287ms |
| #11 | Claude Sonnet 4.6 medium | Anthropic | 10.0 | 7.7 | 0/3 | 0ms |
| #14 | GLM 5 medium | Z.ai | 10.0 | 7.4 | 0/3 | 0ms |
| #23 | Seed-2.0-Mini medium | Bytedance Seed | 10.0 | 6.9 | 0/3 | 0ms |