Laguna M.1 (rank #74)
Provider: Poolside
Release: 2026-04-28
Tested on: 2026-04-28 22:45
Model ID: poolside/laguna-m.1::medium
Effort: medium
Input Price: $0.000 / 1M tokens
Output Price: $0.000 / 1M tokens
Flaky tests: 3
Flaky tests had mixed outcomes across runs (at least one pass and at least one fail).
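The flakiness rule above (a test counts as flaky when its recorded runs contain at least one pass and at least one fail) can be sketched in a few lines of Python. This is an illustrative helper, not the benchmark's actual code; the function name `is_flaky` and the pass/fail encoding are assumptions.

```python
def is_flaky(run_results: list[bool]) -> bool:
    """Return True when run outcomes are mixed.

    Each element is one run of the same test: True = pass, False = fail.
    Flaky means at least one pass AND at least one fail across runs.
    """
    return any(run_results) and not all(run_results)

# Example outcomes across repeated runs of the same test
print(is_flaky([True, False, True]))  # mixed outcomes -> True (flaky)
print(is_flaky([True, True, True]))   # always passes  -> False
print(is_flaky([False, False]))       # always fails   -> False
```

Note that an empty run list is reported as not flaky, since `any([])` is `False`.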
Charts
Available charts: Score vs Total Cost, Response Time (avg), Score vs Response Time (avg), Total Output Tokens, Score vs Total Output Tokens.
Quick Compare
- Laguna M.1 (medium, free) vs Grok 4.1 Fast (medium)
- Laguna M.1 (medium, free) vs Mercury 2 (medium)
- Laguna M.1 (medium, free) vs DeepSeek V4 Pro (none)
- Laguna M.1 (medium, free) vs MiMo-V2-Omni (none)
- Laguna M.1 (medium, free) vs Nemotron 3 Super (medium, free)
- Laguna M.1 (medium, free) vs Gemini 3 Flash Preview (medium)
- Laguna M.1 (medium, free) vs Gemini 3.1 Pro Preview (medium)
- Laguna M.1 (medium, free) vs HY3 Preview (high, free)
Category Breakdown
| Category | Score | Consistency | Tests Correct |
|---|---|---|---|
| Anti-AI Tricks | 6.6 | 10.0 | |
| Coding | 4.3 | 1.1 | |
| Combined | 3.0 | 10.0 | |
| Data parsing and extraction | 10.0 | 10.0 | |
| Domain specific | 5.3 | 7.2 | |
| General Intelligence | 4.1 | 10.0 | |
| Instructions following | 10.0 | 10.0 | |
| Puzzle Solving | 3.6 | 7.2 | |
| Tool Calling | 10.0 | 10.0 | |