AI BENCHY


Tool Calling Ranking

See which AI models perform best on Tool Calling, which ones stay reliable, and where the biggest gaps appear.

Models shown: 15

Average Tool Calling Score: 8.7

| Rank | Model                     | Reasoning | Company | Tool Calling Score | Score | Tests Correct | Response Time (avg) |
|------|---------------------------|-----------|---------|--------------------|-------|---------------|---------------------|
| #94  | MiMo-V2-Flash             | none      | Xiaomi  | 10.0               | 4.5   | 1/1           | 2.28s               |
| #96  | GPT-5.4 Nano              | none      | OpenAI  | 10.0               | 4.5   | 1/1           | 3.40s               |
| #97  | Qwen3.5-9B                | medium    | Qwen    | 10.0               | 4.4   | 1/1           | 4.31s               |
| #68  | gpt-oss-120b              | medium    | OpenAI  | 9.8                | 5.8   | 1/1           | 6.91s               |
| #31  | GLM 5V Turbo              | medium    | Z.ai    | 7.0                | 7.8   | 0/1           | 12.5s               |
| #40  | GPT-5.2                   | medium    | OpenAI  | 4.7                | 7.5   | 0/1           | 10.3s               |
| #44  | GPT-5.4 Mini              | medium    | OpenAI  | 4.7                | 7.3   | 0/1           | 9.62s               |
| #80  | MiniMax M2.7              | medium    | Minimax | 4.7                | 5.3   | 0/1           | 12.0s               |
| #88  | Nemotron 3 Super          | none      | NVIDIA  | 4.7                | 5.1   | 0/1           | 16.0s               |
| #14  | Gemma 4 31B               | medium    | Google  | 3.0                | 8.3   | 0/1           | 0ms                 |
| #25  | Grok 4.20 Beta            | medium    | X AI    | 3.0                | 8.0   | 0/1           | 12.4s               |
| #33  | GLM 5.1                   | medium    | Z.ai    | 3.0                | 7.8   | 0/1           | 0ms                 |
| #47  | Grok 4.20                 | medium    | X AI    | 3.0                | 7.0   | 0/1           | 13.7s               |
| #48  | Gemma 4 31B               | none      | Google  | 3.0                | 6.9   | 0/1           | 0ms                 |
| #56  | Grok 4.20 Multi Agent Beta | medium   | X AI    | 3.0                | 6.4   | 0/1           | 0ms                 |
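As a sanity check on the summary numbers, here is a minimal Python sketch that recomputes statistics from the Tool Calling Score column of the rows shown above. Note that the mean over these 15 rows comes out near 5.57, not the headline 8.7 — the page's average is presumably computed over the full leaderboard rather than only the rows displayed here.

```python
# Tool Calling Scores transcribed from the table rows above,
# in the same order as displayed.
tool_calling_scores = [
    10.0, 10.0, 10.0,              # MiMo-V2-Flash, GPT-5.4 Nano, Qwen3.5-9B
    9.8,                           # gpt-oss-120b
    7.0,                           # GLM 5V Turbo
    4.7, 4.7, 4.7, 4.7,            # GPT-5.2, GPT-5.4 Mini, MiniMax M2.7, Nemotron 3 Super
    3.0, 3.0, 3.0, 3.0, 3.0, 3.0,  # Gemma 4 31B (x2), Grok 4.20 (x3), GLM 5.1
]

models_shown = len(tool_calling_scores)
average_shown = sum(tool_calling_scores) / models_shown

print(models_shown)             # 15
print(round(average_shown, 2))  # 5.57
```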

[Charts: Top Models by Tool Calling Score · Tool Calling Score vs Total Cost · Top Models by Response Time (avg)]