AI BENCHY

#74

Laguna M.1

Vendor: Poolside
Release: 2026-04-28
Tested on: 2026-04-28 22:45
Model: poolside/laguna-m.1::medium (medium)

Score: 6.3
Consistency: 8.6
Total Cost: $0.000
Total Output Tokens: 63,822
Input Price: $0.000 / 1M
Output Price: $0.000 / 1M

Tests Correct
Wrong tests: 10
Attempt pass rate: 53.7%

Flaky tests: 3
Flaky tests had mixed outcomes across runs (at least one pass and one fail).
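The flaky-test definition above can be sketched in a few lines. This is a minimal illustration with made-up test names and outcomes (not AI Benchy's actual data or code): a test is flaky when its repeated runs include at least one pass and at least one fail.

```python
# Hypothetical per-test outcomes across repeated runs (True = pass).
runs = {
    "test_parse_csv": [True, True, True],    # consistent pass: not flaky
    "test_tool_call": [False, False],        # consistent fail: not flaky
    "test_puzzle_07": [True, False, True],   # mixed outcomes: flaky
}

# Flaky = at least one pass (any) and at least one fail (not all).
flaky = [name for name, outcomes in runs.items()
         if any(outcomes) and not all(outcomes)]
print(flaky)  # ['test_puzzle_07']
```

By the same data, the attempt pass rate reported above would be the fraction of all individual run attempts that passed, pooled across tests.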

Response Time (avg): 13.90s
Response Time (max): 53.14s
Response Time (total): 250.28s

Charts
[Charts: Total Output Tokens; Score vs Total Output Tokens]

Category Breakdown

Category                       Score   Consistency
Anti-AI Tricks                   6.6          10.0
Coding                           4.3           1.1
Combined                         3.0          10.0
Data parsing and extraction     10.0          10.0
Domain specific                  5.3           7.2
General Intelligence             4.1          10.0
Instructions following          10.0          10.0
Puzzle Solving                   3.6           7.2
Tool Calling                    10.0          10.0
