AI BENCHY


Tool Calling Ranking

See which AI models perform best on Tool Calling, which ones stay reliable, and where the biggest gaps appear.

Models Shown: 15

Average Tool Calling Score: 8.7

| Rank | Model | Reasoning | Company | Tool Calling Score | Score | Tests Correct | Response Time (avg) |
|------|-------|-----------|---------|--------------------|-------|---------------|---------------------|
| #55 | MiMo-V2-Omni | none | Xiaomi | 10.0 | 6.5 | 1/1 | 2.76s |
| #57 | GPT-5 Nano | medium | OpenAI | 10.0 | 6.3 | 1/1 | 33.3s |
| #58 | GLM 5V Turbo | none | Z.ai | 10.0 | 6.2 | 1/1 | 4.86s |
| #59 | Qwen3.5-Flash | none | Qwen | 10.0 | 6.2 | 1/1 | 3.67s |
| #60 | Gemma 4 26B A4B | none | Google | 10.0 | 6.2 | 1/1 | 57.1s |
| #61 | Seed-2.0-Lite | none | Bytedance Seed | 10.0 | 6.2 | 1/1 | 3.94s |
| #62 | Gemini 2.5 Flash | none | Google | 10.0 | 6.2 | 1/1 | 1.91s |
| #63 | Qwen3.5-35B-A3B | none | Qwen | 10.0 | 6.1 | 1/1 | 2.30s |
| #64 | DeepSeek V3.2 | none | DeepSeek | 10.0 | 6.1 | 1/1 | 11.8s |
| #65 | MiMo-V2-Pro | none | Xiaomi | 10.0 | 6.0 | 1/1 | 4.39s |
| #66 | GPT-5.4 | none | OpenAI | 10.0 | 5.9 | 1/1 | 2.75s |
| #67 | Qwen3.5-27B | none | Qwen | 10.0 | 5.9 | 1/1 | 3.54s |
| #69 | Kimi K2.6 | none | Moonshot AI | 10.0 | 5.8 | 1/1 | 4.46s |
| #70 | Qwen3.5-122B-A10B | none | Qwen | 10.0 | 5.7 | 1/1 | 2.04s |
| #71 | MiniMax M2.5 | medium | Minimax | 10.0 | 5.7 | 1/1 | 15.4s |
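The summary figures above can be recomputed from the visible rows. This is a hypothetical sketch (not the site's actual code): the row data is copied from the table, and the aggregation logic is an assumption about how such stats are derived. Note that the page-wide "Average Tool Calling Score" of 8.7 presumably spans the full leaderboard, not just the 15 rows shown here, where every Tool Calling Score is 10.0.

```python
# Hypothetical recomputation of the page's summary stats from the 15
# visible rows. Each tuple: (model, tool_calling_score, score, resp_s).
ROWS = [
    ("MiMo-V2-Omni",       10.0, 6.5,  2.76),
    ("GPT-5 Nano",         10.0, 6.3, 33.3),
    ("GLM 5V Turbo",       10.0, 6.2,  4.86),
    ("Qwen3.5-Flash",      10.0, 6.2,  3.67),
    ("Gemma 4 26B A4B",    10.0, 6.2, 57.1),
    ("Seed-2.0-Lite",      10.0, 6.2,  3.94),
    ("Gemini 2.5 Flash",   10.0, 6.2,  1.91),
    ("Qwen3.5-35B-A3B",    10.0, 6.1,  2.30),
    ("DeepSeek V3.2",      10.0, 6.1, 11.8),
    ("MiMo-V2-Pro",        10.0, 6.0,  4.39),
    ("GPT-5.4",            10.0, 5.9,  2.75),
    ("Qwen3.5-27B",        10.0, 5.9,  3.54),
    ("Kimi K2.6",          10.0, 5.8,  4.46),
    ("Qwen3.5-122B-A10B",  10.0, 5.7,  2.04),
    ("MiniMax M2.5",       10.0, 5.7, 15.4),
]

def average(values):
    """Plain arithmetic mean over a non-empty sequence."""
    return sum(values) / len(values)

models_shown = len(ROWS)                              # 15
tool_calling_avg = average([r[1] for r in ROWS])      # 10.0 for this page
score_avg = round(average([r[2] for r in ROWS]), 2)   # mean of the Score column
resp_avg = round(average([r[3] for r in ROWS]), 2)    # mean response time, seconds
```

Under these assumptions, the shown rows alone average 10.0 on Tool Calling Score, which is why the 8.7 figure must include lower-ranked models outside this view.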

Charts (not reproduced here): Top Models by Tool Calling Score; Tool Calling Score vs Total Cost; Top Models by Response Time (avg)