Mental Model

The Playground is organized around reusable sessions containing chat panes, where each pane can target a different model.

Key Concepts

Session

A session is your working container for one or more chats. It stores prompt/response history along with the generation settings in effect.

Chat Pane

A pane is an individual conversation track. Multiple panes enable side-by-side model testing.

Model Binding

Each pane is bound to a selected model or endpoint. Rebinding lets you test alternatives quickly.
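A minimal sketch of how a pane-to-model binding might be represented; `Pane` and `rebind` are hypothetical names for illustration, not the Playground's actual internals:

```python
from dataclasses import dataclass, field

@dataclass
class Pane:
    """One conversation track bound to a model identifier."""
    model: str
    history: list = field(default_factory=list)

    def rebind(self, model: str) -> None:
        # Swap the target model; history is kept so the same
        # conversation can continue against the alternative.
        self.model = model

pane = Pane(model="model-a")
pane.rebind("model-b")
```

The point of rebinding rather than opening a new pane is that the accumulated conversation carries over, so the alternative model is tested on identical context.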

Prompt Iteration

Prompts evolve through edit/resend, retry, and parameter tuning. Keep the prompt text fixed when benchmarking models so that differences in output reflect the model, not the prompt.
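Parameter tuning with a fixed prompt can be sketched as a simple sweep. The `send` function below is a stand-in for whatever call the Playground issues; only the sweep structure is the point:

```python
# Hypothetical stand-in for the Playground's generation call.
def send(prompt: str, model: str, temperature: float) -> str:
    return f"[{model} @ {temperature}] echo: {prompt}"

# The prompt is held fixed; only the parameter varies.
PROMPT = "Summarize the release notes in three bullets."

results = {t: send(PROMPT, "model-a", t) for t in (0.0, 0.4, 0.8)}
```

Sweeping one parameter at a time keeps the comparison interpretable: any change in output quality can be attributed to that parameter.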

Response Signals

Use output quality, style adherence, latency, and token usage to evaluate model fit.
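Two of these signals, latency and token usage, can be collected mechanically. A minimal sketch, assuming a whitespace split as a crude token proxy (real counts come from the model's tokenizer):

```python
import time

def rough_token_count(text: str) -> int:
    # Crude whitespace proxy; real token counts come from the tokenizer.
    return len(text.split())

def timed_call(fn, *args):
    # Wrap any generation call and report wall-clock latency.
    start = time.perf_counter()
    out = fn(*args)
    return out, time.perf_counter() - start

reply, latency = timed_call(lambda p: p.upper(), "hello world")
signals = {"latency_s": latency, "tokens": rough_token_count(reply)}
```

Quality and style adherence still need human or rubric-based judgment; the mechanical signals simply make the comparison table cheaper to fill in.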

Common Workflows

  1. Single model drafting: refine one prompt until output quality is acceptable.
  2. Comparative evaluation: run identical prompts in multiple panes.
  3. Preset validation: validate default generation settings for team-wide reuse.
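The comparative-evaluation workflow above can be sketched as one identical prompt fanned out across panes; the responders here are fakes standing in for each pane's bound endpoint:

```python
def fake_model(name: str):
    # Stand-in responder; a real run would call the pane's bound endpoint.
    return lambda prompt: f"{name}: {prompt[:20]}..."

# Each pane bound to a different model, as in side-by-side testing.
panes = {"pane-1": fake_model("model-a"), "pane-2": fake_model("model-b")}

PROMPT = "Explain idempotency in one paragraph."
outputs = {pane: call(PROMPT) for pane, call in panes.items()}
```

Because every pane receives the same prompt, the outputs differ only by model, which is exactly what a fair comparison requires.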

Operational Guidelines

  • Prefer short, deterministic prompts first; add complexity after baseline checks.
  • Record winning prompts and settings in team docs for repeatability.
  • Use consistent evaluation criteria (accuracy, tone, latency, safety).
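Consistent criteria are easiest to enforce with a shared rubric. A minimal sketch, assuming a 1-5 score per criterion averaged into one number (the criterion names come from the guideline above; the scoring scheme itself is an assumption):

```python
# Hypothetical rubric: score each criterion 1-5, then average.
CRITERIA = ("accuracy", "tone", "latency", "safety")

def score(response_scores: dict) -> float:
    # Missing criteria default to 0 so incomplete reviews stand out.
    return sum(response_scores.get(c, 0) for c in CRITERIA) / len(CRITERIA)

overall = score({"accuracy": 5, "tone": 4, "latency": 3, "safety": 5})  # 4.25
```

A fixed rubric means two reviewers comparing the same pane outputs produce numbers that can actually be compared.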
Next Steps

  • Creating First Playground Workflow
  • Guides for model comparison and prompt design
  • Troubleshooting for common response and session issues