AI BENCHY
Compare Charts Methodology

AI BENCHY Category Failures

Category: Data parsing and extraction
Failure type: Wrong answer

This view shows which AI models most often produce a wrong answer on data parsing and extraction tasks, so you can spot weak points faster. Sorted by: Response Time (avg), ascending.

Models Shown: 11
Total Failures: 14
Most Affected Model: Mercury 2 (1)
Rank  Model                       Company      Wrong Answer Count  Category Score  Tests Correct  Response Time (avg)
#51   Mercury 2 (none)            Inception    1                   5.5             1/2            667ms
#55   LFM2-24B-A2B (none)         Liquid       2                   10.0            0/2            714ms
#36   Mercury 2 (medium)          Inception    1                   5.5             1/2            1.11s
#48   Qwen3 Coder Next (none)     Qwen         1                   5.4             1/2            1.32s
#39   gpt-oss-120b (medium)       OpenAI       1                   5.5             1/2            1.98s
#49   GLM 4.7 Flash (none)        Z.ai         1                   5.4             1/2            4.82s
#43   MiniMax M2.5 (medium)       Minimax      2                   10.0            0/2            7.48s
#33   DeepSeek V3.2 (none)        DeepSeek     1                   5.4             1/2            9.42s
#34   GPT-5 Nano (medium)         OpenAI       2                   10.0            0/2            21.4s
#46   Kimi K2.5 (none)            Moonshot AI  1                   5.4             1/2            42.1s
#50   Qwen3 Coder Next (medium)   Qwen         1                   5.4             1/2            81.8s
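The summary stats above (Models Shown, Total Failures) and the table's sort order can be reproduced directly from the rows. The sketch below is a hypothetical illustration, not AI BENCHY's actual code; the row tuples are copied from the table, and the latency parser assumes only the "ms"/"s" display formats seen here.

```python
# Hypothetical sketch: derive the dashboard's summary stats and sort order
# from the table rows (model, wrong-answer count, avg response time).

rows = [
    ("Mercury 2 (none)", 1, "667ms"),
    ("LFM2-24B-A2B (none)", 2, "714ms"),
    ("Mercury 2 (medium)", 1, "1.11s"),
    ("Qwen3 Coder Next (none)", 1, "1.32s"),
    ("gpt-oss-120b (medium)", 1, "1.98s"),
    ("GLM 4.7 Flash (none)", 1, "4.82s"),
    ("MiniMax M2.5 (medium)", 2, "7.48s"),
    ("DeepSeek V3.2 (none)", 1, "9.42s"),
    ("GPT-5 Nano (medium)", 2, "21.4s"),
    ("Kimi K2.5 (none)", 1, "42.1s"),
    ("Qwen3 Coder Next (medium)", 1, "81.8s"),
]

def to_seconds(display: str) -> float:
    """Convert a display latency like '667ms' or '1.11s' to seconds."""
    if display.endswith("ms"):
        return float(display[:-2]) / 1000.0
    return float(display[:-1])

models_shown = len(rows)                              # 11
total_failures = sum(count for _, count, _ in rows)   # 14

# Table order: ascending average response time.
by_latency = sorted(rows, key=lambda r: to_seconds(r[2]))

print(models_shown, total_failures, by_latency[0][0])
```

Running this confirms the stat cards: 11 models shown, 14 total failures, with Mercury 2 (none) fastest at 667ms.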

Charts (visualizations omitted in this extract):
Top Models by Wrong answer Count
Wrong answer Count vs Avg Score
Top Models by Response Time (avg)
Top Models by Estimated Wasted Cost