AI BENCHY
Compare Charts Methodology
❤️ Made by XCS

AI BENCHY Category Failures

Category: Data parsing and extraction
Failure type: Wrong answer

This view shows which AI models are most likely to produce a Wrong answer on Data parsing and extraction tasks, so you can spot weak points faster. Rows are sorted by failure count, ascending.

Models Shown: 11
Total Failures: 14
Most Affected Model: DeepSeek V3.2 (1)
Rank | Model                   | Company     | Wrong answer Count | Category Score | Tests Correct | Response Time (avg)
#33  | DeepSeek V3.2 none      | DeepSeek    | 1 | 5.4  | 1/2 | 9.42 s
#36  | Mercury 2 medium        | Inception   | 1 | 5.5  | 1/2 | 1.11 s
#39  | gpt-oss-120b medium     | OpenAI      | 1 | 5.5  | 1/2 | 1.98 s
#46  | Kimi K2.5 none          | Moonshot AI | 1 | 5.4  | 1/2 | 42.1 s
#48  | Qwen3 Coder Next none   | Qwen        | 1 | 5.4  | 1/2 | 1.32 s
#49  | GLM 4.7 Flash none      | Z.ai        | 1 | 5.4  | 1/2 | 4.82 s
#50  | Qwen3 Coder Next medium | Qwen        | 1 | 5.4  | 1/2 | 81.8 s
#51  | Mercury 2 none          | Inception   | 1 | 5.5  | 1/2 | 667 ms
#34  | GPT-5 Nano medium       | OpenAI      | 2 | 10.0 | 0/2 | 21.4 s
#43  | MiniMax M2.5 medium     | Minimax     | 2 | 10.0 | 0/2 | 7.48 s
#55  | LFM2-24B-A2B none       | Liquid      | 2 | 10.0 | 0/2 | 714 ms
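The summary stats above (models shown, total failures, and the ascending failure-count ordering) can be recomputed directly from the per-model rows. A minimal sketch, assuming the table's counts as plain data — this is illustrative only, not AI BENCHY's actual pipeline:

```python
# (model, wrong-answer count) pairs taken from the table above.
rows = [
    ("DeepSeek V3.2 none", 1),
    ("Mercury 2 medium", 1),
    ("gpt-oss-120b medium", 1),
    ("Kimi K2.5 none", 1),
    ("Qwen3 Coder Next none", 1),
    ("GLM 4.7 Flash none", 1),
    ("Qwen3 Coder Next medium", 1),
    ("Mercury 2 none", 1),
    ("GPT-5 Nano medium", 2),
    ("MiniMax M2.5 medium", 2),
    ("LFM2-24B-A2B none", 2),
]

models_shown = len(rows)                   # 11 models in this view
total_failures = sum(c for _, c in rows)   # 14 total Wrong answer failures

# The page's "Failure Count" ascending sort: count-1 rows before count-2 rows.
ranked = sorted(rows, key=lambda r: r[1])
print(models_shown, total_failures, ranked[0][0])
```

Sorting by count alone leaves ties in input order; the dashboard presumably breaks ties by its overall rank column.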

Charts (data not captured in this export): Top Models by Wrong answer Count; Wrong answer Count vs Avg Score; Top Models by Response Time (avg); Top Models by Estimated Wasted Cost.