Mental Model
Playground is organized around reusable sessions and chat panes, where each pane can target a different model.

Key Concepts

Session
A session is your working container for one or more chats. It stores prompt/response history and settings context.

Chat Pane
A pane is an individual conversation track. Multiple panes enable side-by-side model testing.

Model Binding
Each pane is bound to a selected model or endpoint. Rebinding lets you test alternatives quickly.

Prompt Iteration
Prompts evolve through edit/resend, retry, and parameter tuning. Keep prompts stable when benchmarking models.

Response Signals
Use output quality, style adherence, latency, and token usage to evaluate model fit.

Common Workflows
- Single model drafting: refine one prompt until output quality is acceptable.
- Comparative evaluation: run identical prompts in multiple panes.
- Preset validation: validate default generation settings for team-wide reuse.
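The comparative-evaluation workflow can also be sketched as a script. This is a minimal sketch, not Playground's actual API: `generate` below is a hypothetical stand-in for whatever endpoint each pane is bound to, and the stub only echoes its input, so latency and token figures are illustrative.

```python
import time

def generate(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a model endpoint call.
    Replace with your provider's real client."""
    return f"[{model}] response to: {prompt}"

def compare_models(models: list[str], prompt: str) -> list[dict]:
    """Send the identical prompt to each model, mirroring side-by-side
    panes, and record the response signals noted above."""
    results = []
    for model in models:
        start = time.perf_counter()
        output = generate(model, prompt)
        latency = time.perf_counter() - start
        results.append({
            "model": model,
            "output": output,
            "latency_s": round(latency, 3),
            "approx_tokens": len(output.split()),  # rough whitespace count
        })
    return results

rows = compare_models(["model-a", "model-b"],
                      "Summarize our release notes in one sentence.")
for row in rows:
    print(row["model"], row["latency_s"], row["approx_tokens"])
```

Keeping the prompt identical across models, as above, is what makes the latency and quality comparison meaningful.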
Operational Guidelines

- Prefer short, deterministic prompts first; add complexity after baseline checks.
- Record winning prompts and settings in team docs for repeatability.
- Use consistent evaluation criteria (accuracy, tone, latency, safety).
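Recording winning prompts and settings can be as simple as a small, versionable record your team pastes into shared docs. A sketch under assumed field names; nothing here reflects an actual Playground export format:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class PromptPreset:
    """Hypothetical team-docs record for a validated prompt + settings."""
    name: str
    model: str
    prompt: str
    temperature: float = 0.2   # deterministic-first, per the guideline above
    max_tokens: int = 256
    criteria: list[str] = field(
        default_factory=lambda: ["accuracy", "tone", "latency", "safety"]
    )

preset = PromptPreset(
    name="release-notes-summary",
    model="model-a",
    prompt="Summarize our release notes in one sentence.",
)
print(json.dumps(asdict(preset), indent=2))
```

Storing the evaluation criteria alongside the prompt keeps later comparisons consistent with the original validation.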
Related Pages

- Creating First Playground Workflow
- Guides for model comparison and prompt design
- Troubleshooting for common response and session issues