AI BENCHY Compare
OpenAI: GPT-5.4 vs Z.ai: GLM 5.1
Last updated: 2026-04-07
| Metric | GPT-5.4 | GLM 5.1 |
|---|---|---|
| Score | 5.6 | 5.6 |
| Rank | #68 | #67 |
| Consistency | 9.0 | 8.2 |
| Tests Correct | | |
| Attempt pass rate | 39.2% | 39.2% |
| Flaky tests | 2 | 4 |
| Total Runs | 51 | 51 |
| Cost per result | 1.573 | 1.000 |
| Total Cost | $0.095 | $0.050 |
| Input Price | $2.500 / 1M | $1.000 / 1M |
| Output Price | $15.000 / 1M | $3.200 / 1M |
| Output Tokens | 1,837 | 3,219 |
| Reasoning Tokens | 0 | 0 |
| Response Time (avg) | 1.43s | 4.01s |
| Response Time (max) | 2.89s | 32.57s |
| Response Time (total) | 24.27s | 68.23s |
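Two of the table's figures can be cross-checked from the others. A minimal sketch, assuming "Attempt pass rate" is passing runs divided by total runs, and that output-token spend is output tokens times the per-million output price (function names here are illustrative, not from AI BENCHY):

```python
def pass_rate(passes: int, total_runs: int) -> float:
    """Fraction of runs that passed, as a percentage."""
    return 100 * passes / total_runs

def output_cost(output_tokens: int, price_per_million: float) -> float:
    """Dollar cost of the output tokens alone (input tokens excluded)."""
    return output_tokens * price_per_million / 1_000_000

# 20 passing runs out of 51 reproduces the table's 39.2% for both models.
print(round(pass_rate(20, 51), 1))        # 39.2

# Output-token spend per model; the rest of Total Cost is input tokens.
print(round(output_cost(1837, 15.0), 4))  # GPT-5.4: 0.0276
print(round(output_cost(3219, 3.2), 4))   # GLM 5.1: 0.0103
```

Under these assumptions, GLM 5.1's larger output (3,219 vs 1,837 tokens) still costs less than GPT-5.4's, because its output price is roughly a fifth as high.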
Charts (not reproduced here): Score vs Total Cost, Response Time (avg), Score vs Response Time (avg), Total Output Tokens, Score vs Total Output Tokens, Category Breakdown.