AI BENCHY Compare
Inception: Mercury 2 vs Z.ai: GLM 4.7 Flash
Last updated: 2026-03-05
| Metric | Inception: Mercury 2 (reasoning: none, released 2026-02-24) | Z.ai: GLM 4.7 Flash (reasoning: medium, released 2026-01-19) |
|---|---|---|
| Rank | #50 | #52 |
| Avg Score | 34 | 33 |
| Consistency | 89 | 61 |
| Cost per result | 0.147 | 1.018 |
| Total Cost | $0.006 | $0.041 |
| Attempt pass rate | 33.3% | 44.4% |
| Flaky tests | 2 | 7 |
| Output Tokens | 1,144 | 38,664 |
| Reasoning Tokens | 0 | 62,814 |
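The site does not document how "Cost per result" is computed, so the only safe comparison is a ratio of the headline figures themselves. A minimal sketch (values hard-coded from the table above; no formula from the site is assumed):

```python
# Headline figures copied from the comparison table above.
mercury = {"total_cost": 0.006, "output_tokens": 1_144, "reasoning_tokens": 0}
glm = {"total_cost": 0.041, "output_tokens": 38_664, "reasoning_tokens": 62_814}

# Cost ratio: GLM 4.7 Flash spent several times more for a near-identical avg score.
cost_ratio = glm["total_cost"] / mercury["total_cost"]

# Token ratio: counting reasoning tokens, GLM emitted far more tokens than Mercury.
token_ratio = (glm["output_tokens"] + glm["reasoning_tokens"]) / mercury["output_tokens"]

print(f"cost ratio: {cost_ratio:.1f}x, token ratio: {token_ratio:.1f}x")
```

By these figures, GLM 4.7 Flash costs roughly 6.8× as much and emits roughly 89× as many tokens for a one-point difference in average score.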
[Chart: Score vs Total Cost]
Category Breakdown
| Anti-AI Tricks | Score | Consistency | Attempt pass rate | Flaky tests | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 100 | 100 | 0.0% | 0 | 274 | 0 |
| Z.ai: GLM 4.7 Flash | 40 | 45 | 55.6% | 2 | 1,085 | 5,597 |
| Combined | Score | Consistency | Attempt pass rate | Flaky tests | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 100 | 100 | 0.0% | 0 | 131 | 0 |
| Z.ai: GLM 4.7 Flash | 100 | 21 | 33.3% | 1 | 2,585 | 20,648 |
| Data parsing and extraction | Score | Consistency | Attempt pass rate | Flaky tests | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 55 | 59 | 83.3% | 1 | 180 | 0 |
| Z.ai: GLM 4.7 Flash | 50 | 100 | 50.0% | 0 | 584 | 2,755 |
| Domain specific | Score | Consistency | Attempt pass rate | Flaky tests | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 40 | 72 | 44.4% | 1 | 46 | 0 |
| Z.ai: GLM 4.7 Flash | 100 | 44 | 33.3% | 2 | 33,000 | 25,394 |
| Instructions following | Score | Consistency | Attempt pass rate | Flaky tests | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 55 | 100 | 50.0% | 0 | 82 | 0 |
| Z.ai: GLM 4.7 Flash | 50 | 58 | 66.7% | 1 | 388 | 2,181 |
| Puzzle Solving | Score | Consistency | Attempt pass rate | Flaky tests | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 100 | 100 | 0.0% | 0 | 234 | 0 |
| Z.ai: GLM 4.7 Flash | 100 | 72 | 11.1% | 1 | 798 | 5,225 |
| Tool Calling | Score | Consistency | Attempt pass rate | Flaky tests | Output Tokens | Reasoning Tokens |
|---|---|---|---|---|---|---|
| Inception: Mercury 2 | 100 | 100 | 100.0% | 0 | 197 | 0 |
| Z.ai: GLM 4.7 Flash | 100 | 100 | 100.0% | 0 | 224 | 1,014 |
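The category tables above are plain pipe-delimited Markdown, so they can be loaded programmatically for further analysis. A minimal, self-contained sketch (the `parse_md_table` helper is illustrative, not part of the site; the sample string is the Tool Calling table trimmed to three columns):

```python
def parse_md_table(text: str) -> list[dict]:
    """Parse a pipe-delimited Markdown table into a list of row dicts."""
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    split = lambda ln: [cell.strip() for cell in ln.strip("|").split("|")]
    header = split(lines[0])
    # lines[1] is the |---|---| separator row; data starts at lines[2].
    return [dict(zip(header, split(ln))) for ln in lines[2:]]

sample = """
| Tool Calling | Score | Consistency |
|---|---|---|
| Inception: Mercury 2 | 100 | 100 |
| Z.ai: GLM 4.7 Flash | 100 | 100 |
"""
rows = parse_md_table(sample)
print(rows[0]["Tool Calling"], rows[0]["Score"])
```

Cell values come back as strings; numeric columns would need an explicit conversion (and thousands separators such as `33,000` stripped) before arithmetic.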