AI BENCHY


Tool Calling Ranking

See which AI models perform best at tool calling, which ones stay reliable, and where the biggest gaps appear. Sorted by: Tests Correct ↓.

Models Shown: 15
Average Tool Calling Score: 8.7

| Rank | Model | Reasoning | Company | Tool Calling Score | Score | Tests Correct | Response Time (avg) |
|------|-------|-----------|---------|--------------------|-------|---------------|---------------------|
| #71 | MiniMax M2.5 | medium | Minimax | 10.0 | 5.7 | 1/1 | 15.4s |
| #72 | Hunter Alpha | none | OpenRouter | 10.0 | 5.7 | 1/1 | 6.02s |
| #73 | Mistral Small 4 | medium | Mistral | 10.0 | 5.7 | 1/1 | 3.50s |
| #75 | GLM 5.1 | none | Z.ai | 10.0 | 5.6 | 1/1 | 10.7s |
| #76 | Kimi K2.5 | none | Moonshot AI | 10.0 | 5.5 | 1/1 | 14.0s |
| #77 | GLM 5 Turbo | none | Z.ai | 10.0 | 5.5 | 1/1 | 8.21s |
| #78 | Trinity Large Preview | none | Arcee AI | 10.0 | 5.3 | 1/1 | 6.67s |
| #79 | Grok 4.20 Beta | none | X AI | 10.0 | 5.3 | 1/1 | 4.79s |
| #82 | Grok 4.20 | none | X AI | 10.0 | 5.2 | 1/1 | 4.63s |
| #83 | Mistral Small 4 | none | Mistral | 10.0 | 5.2 | 1/1 | 1.40s |
| #87 | Qwen3 Coder Next | none | Qwen | 10.0 | 5.1 | 1/1 | 2.47s |
| #89 | GPT-4o-mini | none | OpenAI | 10.0 | 4.9 | 1/1 | 2.51s |
| #90 | Qwen3.5-9B | none | Qwen | 10.0 | 4.8 | 1/1 | 1.27s |
| #91 | Mercury 2 | none | Inception | 10.0 | 4.8 | 1/1 | 1.27s |
| #92 | Qwen3 Coder Next | medium | Qwen | 10.0 | 4.7 | 1/1 | 2.64s |
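The table reads as sorted by Tests Correct first (all ties at 1/1 here), with Score as the tie-breaker. A minimal sketch of that ordering, using a few rows from the table above; the field names and the exact tie-break rule are assumptions for illustration, not the site's actual code:

```python
# Illustrative sketch (assumption): order a leaderboard by tests-correct
# ratio first, then by score, both descending.
rows = [
    {"model": "GPT-4o-mini", "score": 4.9, "correct": 1, "total": 1},
    {"model": "MiniMax M2.5", "score": 5.7, "correct": 1, "total": 1},
    {"model": "GLM 5.1", "score": 5.6, "correct": 1, "total": 1},
]

def sort_key(row):
    # Higher pass ratio ranks first; ties broken by higher score.
    ratio = row["correct"] / row["total"]
    return (-ratio, -row["score"])

ranked = sorted(rows, key=sort_key)
for rank, row in enumerate(ranked, start=1):
    print(rank, row["model"], row["score"])
```

With every model at 1/1, the ordering here collapses to Score descending, which matches the column values in the table.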

[Charts: Top Models by Tool Calling Score; Tool Calling Score vs Total Cost; Top Models by Response Time (avg)]