AI BENCHY Compare
Inception: Mercury 2 vs Z.ai: GLM 5
Last updated: 2026-03-05
| Metric | Inception: Mercury 2 (medium, released 2026-02-24) | Z.ai: GLM 5 (none, released 2026-02-12) |
|---|---|---|
| Rank | #35 | #32 |
| Avg Score | 54 | 58 |
| Consistency | 83 | 100 |
| Cost per result | 0.622 | 0.219 |
| Total Cost | $0.044 | $0.018 |
| Response Time (avg) | 2.47s | 4.13s |
| Response Time (max) | 14.63s | 11.07s |
| Response Time (total) | 34.56s | 33.03s |
| Tests Correct | | |
| Attempt pass rate | 57.8% | 53.3% |
| Flaky tests | 3 | 0 |
| Output Tokens | 3,571 | 1,445 |
| Reasoning Tokens | 45,379 | 0 |
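The headline numbers above pull in different directions (GLM 5 scores higher and costs less, Mercury 2 responds faster and has a higher attempt pass rate). Below is a minimal sketch, not part of AI BENCHY, that tallies which model leads on each metric; the `higher_is_better` flags are my assumption about how each metric should be read, and the values are copied from the table.

```python
# Minimal comparison sketch over the headline metrics (values copied from the
# table above). The higher_is_better flags are an assumption, not AI BENCHY's.
metrics = {
    # name: (Mercury 2, GLM 5, higher_is_better)
    "Avg Score":             (54,    58,    True),
    "Consistency":           (83,    100,   True),
    "Cost per result":       (0.622, 0.219, False),
    "Total Cost ($)":        (0.044, 0.018, False),
    "Response Time avg (s)": (2.47,  4.13,  False),
    "Attempt pass rate (%)": (57.8,  53.3,  True),
    "Flaky tests":           (3,     0,     False),
}

for name, (mercury, glm, higher_is_better) in metrics.items():
    if mercury == glm:
        winner = "tie"
    elif (mercury > glm) == higher_is_better:
        winner = "Mercury 2"
    else:
        winner = "GLM 5"
    print(f"{name:24s} Mercury 2={mercury:<8} GLM 5={glm:<8} -> {winner}")
```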
[Charts: Response Time (avg); Score vs Total Cost; Avg Score vs Response Time (avg)]
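The original page renders these as charts. As a rough reconstruction sketch of the "Avg Score vs Response Time (avg)" chart, using only the headline values from the table above (the axis and label choices are mine, and it assumes matplotlib is available):

```python
# Rebuild the "Avg Score vs Response Time (avg)" scatter from the headline table.
import matplotlib.pyplot as plt

models = ["Inception: Mercury 2", "Z.ai: GLM 5"]
avg_response_s = [2.47, 4.13]   # Response Time (avg), seconds
avg_score = [54, 58]            # Avg Score

fig, ax = plt.subplots()
ax.scatter(avg_response_s, avg_score)
for name, x, y in zip(models, avg_response_s, avg_score):
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(5, 5))
ax.set_xlabel("Response Time (avg, s)")
ax.set_ylabel("Avg Score")
ax.set_title("Avg Score vs Response Time (avg)")
plt.show()
```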
Category Breakdown
| Anti-AI Tricks | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 73 | 98 | 66.7% | 0 | | 1.30s | 2,531 | 2,410 |
| Z.ai: GLM 5 | 40 | 100 | 33.3% | 0 | | 3.39s | 272 | 0 |
| Combined | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 100 | 100 | 100.0% | 0 | | 3.28s | 268 | 4,887 |
| Z.ai: GLM 5 | 100 | 100 | 0.0% | 0 | | 4.98s | 406 | 0 |
| Data parsing and extraction | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 55 | 59 | 83.3% | 1 | | 1.11s | 183 | 1,656 |
| Z.ai: GLM 5 | 99 | 100 | 100.0% | 0 | | 5.78s | 203 | 0 |
| Domain specific | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 100 | 72 | 11.1% | 1 | | 6.48s | 41 | 30,754 |
| Z.ai: GLM 5 | 100 | 100 | 0.0% | 0 | | 2.24s | 19 | 0 |
| Instructions following | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 100 | 100 | 100.0% | 0 | | 1.07s | 14 | 958 |
| Z.ai: GLM 5 | 100 | 100 | 100.0% | 0 | | 1.48s | 61 | 0 |
| Puzzle Solving | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 17 | 75 | 22.2% | 1 | | 934ms | 354 | 2,758 |
| Z.ai: GLM 5 | 70 | 100 | 66.7% | 0 | | 2.05s | 264 | 0 |
| Tool Calling | Score | Consistency | Attempt pass rate | Flaky tests | Tests Correct | Response Time (avg) | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 100 | 100 | 100.0% | 0 | | 1.89s | 180 | 1,956 |
| Z.ai: GLM 5 | 100 | 100 | 100.0% | 0 | | 11.07s | 220 | 0 |
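The categories where the two models actually diverge are easier to see when the score gaps are sorted. A small sketch of that ranking is below; the per-category scores are copied from the tables above, but the gap metric and the sort order are mine, not AI BENCHY numbers.

```python
# Rank the category-level score gaps from the breakdown tables above.
category_scores = {
    # category: (Mercury 2 score, GLM 5 score)
    "Anti-AI Tricks":              (73, 40),
    "Combined":                    (100, 100),
    "Data parsing and extraction": (55, 99),
    "Domain specific":             (100, 100),
    "Instructions following":      (100, 100),
    "Puzzle Solving":              (17, 70),
    "Tool Calling":                (100, 100),
}

gaps = sorted(
    ((mercury - glm, cat) for cat, (mercury, glm) in category_scores.items()),
    key=lambda pair: abs(pair[0]),
    reverse=True,
)
for gap, cat in gaps:
    leader = "Mercury 2" if gap > 0 else "GLM 5" if gap < 0 else "tie"
    print(f"{cat:30s} gap={gap:+4d}  leader: {leader}")
```

Note that a simple unweighted mean of these category scores does not reproduce the headline Avg Score values (54 and 58), so the overall score presumably weights categories differently, likely by test count.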