# AI BENCHY Category Ranking: Data Parsing and Extraction

See which AI models perform best on data parsing and extraction, which ones stay reliable, and where the biggest gaps appear. Rows are sorted by average response time, ascending.
| Rank | Model | Reasoning Effort | Company | Category Score | Overall Score | Tests Correct | Avg. Response Time |
|---|---|---|---|---|---|---|---|
| #61 | Seed-2.0-Lite | none | Bytedance Seed | 10.0 | 6.2 | 2/2 | 1.82s |
| #49 | Qwen3.5 Plus 2026-02-15 | none | Qwen | 10.0 | 6.8 | 2/2 | 1.89s |
| #68 | gpt-oss-120b | medium | OpenAI | 6.4 | 5.8 | 1/2 | 1.98s |
| #4 | Claude Opus 4.7 | none | Anthropic | 10.0 | 9.2 | 2/2 | 2.15s |
| #36 | GPT-5.3 Chat | none | OpenAI | 10.0 | 7.7 | 2/2 | 2.21s |
| #48 | Gemma 4 31B | none | Google | 10.0 | 6.9 | 2/2 | 2.25s |
| #35 | MiMo-V2-Omni | medium | Xiaomi | 10.0 | 7.7 | 2/2 | 2.29s |
| #17 | Gemini 3.1 Flash Lite Preview | medium | Google | 10.0 | 8.2 | 2/2 | 2.29s |
| #3 | Claude Opus 4.7 | medium | Anthropic | 10.0 | 9.2 | 2/2 | 2.37s |
| #44 | GPT-5.4 Mini | medium | OpenAI | 10.0 | 7.3 | 2/2 | 2.43s |
| #77 | GLM 5 Turbo | none | Z.ai | 10.0 | 5.5 | 2/2 | 2.47s |
| #38 | GPT-5.4 Nano | medium | OpenAI | 10.0 | 7.6 | 2/2 | 2.54s |
| #22 | Gemini 3.1 Flash Lite Preview | low | Google | 10.0 | 8.1 | 2/2 | 3.00s |
| #28 | GPT-5.2 Chat | none | OpenAI | 10.0 | 7.9 | 2/2 | 3.05s |
| #7 | GPT-5.3-Codex | medium | OpenAI | 10.0 | 8.6 | 2/2 | 3.07s |
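Since this category is itself about data parsing, the leaderboard rows make a convenient worked example. The sketch below parses a few sample rows into records and sorts them by average response time (the ordering used here). The cell layout and field names are assumptions for illustration, not part of any AI BENCHY API.

```python
# Minimal sketch: parse markdown leaderboard rows into dicts, then sort
# by average response time (ascending). Sample rows are inlined; the
# column layout (rank, model, effort, company, scores, latency) is assumed.
ROWS = """\
| #61 | Seed-2.0-Lite | none | Bytedance Seed | 10.0 | 6.2 | 2/2 | 1.82s |
| #4 | Claude Opus 4.7 | none | Anthropic | 10.0 | 9.2 | 2/2 | 2.15s |
| #68 | gpt-oss-120b | medium | OpenAI | 6.4 | 5.8 | 1/2 | 1.98s |
"""

def parse_row(line: str) -> dict:
    # Strip the outer pipes, then split on the inner ones.
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    rank, model, effort, company, cat_score, overall, correct, latency = cells
    return {
        "rank": int(rank.lstrip("#")),          # "#61" -> 61
        "model": model,
        "effort": effort,
        "company": company,
        "category_score": float(cat_score),
        "overall_score": float(overall),
        "tests_correct": correct,               # keep "2/2" as-is
        "avg_response_s": float(latency.rstrip("s")),  # "1.82s" -> 1.82
    }

table = [parse_row(line) for line in ROWS.splitlines()]
table.sort(key=lambda r: r["avg_response_s"])   # fastest model first

for row in table:
    print(f'{row["model"]}: {row["avg_response_s"]}s')
```

Keeping the latency as a float (rather than the raw `"1.82s"` string) is what makes the sort key trivial; the same applies to any column you intend to rank on.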