AI BENCHY Compare
Inception: Mercury 2 vs Z.ai: GLM 5.1
Last updated at: 2026-04-07
| Metric | Mercury 2 (medium) | GLM 5.1 (none) |
|---|---|---|
| Score | 6.3 | 5.6 |
| Rank | #53 | #67 |
| Consistency | 8.5 | 8.2 |
| Attempt pass rate | 51.0% | 39.2% |
| Flaky tests | 3 | 4 |
| Total Runs | 51 | 51 |
| Cost per result (normalized) | 0.634 | 1.000 |
| Total Cost | $0.045 | $0.050 |
| Input Price | $0.250 / 1M | $1.000 / 1M |
| Output Price | $0.750 / 1M | $3.200 / 1M |
| Output Tokens | 3,723 | 3,219 |
| Reasoning Tokens | 46,120 | 0 |
| Response Time (avg) | 2.25s | 4.01s |
| Response Time (max) | 14.63s | 32.57s |
| Response Time (total) | 35.99s | 68.23s |
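The table's cost figures can be sanity-checked from its own numbers. Below is a minimal sketch, assuming per-token billing at the listed prices and reading "Cost per result" as total cost divided by attempt pass rate; the page does not document its exact formula, and the function names here are ours, not AI BENCHY's:

```python
def output_cost(output_tokens: int, reasoning_tokens: int, price_per_m: float) -> float:
    """Output-side spend: billable output tokens times the per-million-token price."""
    return (output_tokens + reasoning_tokens) * price_per_m / 1_000_000

def cost_per_correct(total_cost: float, pass_rate: float) -> float:
    """Estimated cost per passing attempt: total spend divided by pass fraction."""
    return total_cost / pass_rate

# Mercury 2: 3,723 output + 46,120 reasoning tokens at $0.75 / 1M
print(round(output_cost(3_723, 46_120, 0.75), 4))   # ~0.0374 of the $0.045 total
# GLM 5.1: 3,219 output tokens at $3.20 / 1M, no reasoning tokens
print(round(output_cost(3_219, 0, 3.20), 4))        # ~0.0103 of the $0.050 total

print(round(cost_per_correct(0.045, 0.510), 4))     # Mercury 2: ~$0.0882 per pass
print(round(cost_per_correct(0.050, 0.392), 4))     # GLM 5.1: ~$0.1276 per pass
```

The gap between output-side spend and total cost would be input spend, which the page does not break out; treat these as illustrative back-of-envelope checks, not the site's actual methodology.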
Charts (interactive on the original page, not reproduced here): Score vs Total Cost; Response Time (avg); Score vs Response Time (avg); Total Output Tokens; Score vs Total Output Tokens; Category Breakdown.