

Run Workflow

Before You Run

  • Confirm the model target is valid for your environment.
  • Confirm the selected traits align with the decision you need to make.
  • Start with a focused dataset scope for faster feedback loops.
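The checklist above can be sketched as a small pre-run validation step. This is a minimal, hypothetical sketch: the config dict and its keys (`model`, `traits`, `dataset_scope`) and the model registry are assumptions for illustration, not the product's actual schema.

```python
# Hypothetical pre-run checklist; "model", "traits", and "dataset_scope"
# are illustrative keys, and SUPPORTED_MODELS is a placeholder registry.
SUPPORTED_MODELS = {"model-a", "model-b"}

def validate_config(config: dict) -> list[str]:
    """Return a list of problems; an empty list means the run can start."""
    problems = []
    if config.get("model") not in SUPPORTED_MODELS:
        problems.append(f"unknown model target: {config.get('model')!r}")
    if not config.get("traits"):
        problems.append("no traits selected; pick the traits that inform your decision")
    # A focused scope keeps feedback loops fast; flag very large scopes.
    if config.get("dataset_scope", 0) > 10_000:
        problems.append("dataset scope is large; consider a focused subset first")
    return problems

issues = validate_config({"model": "model-a", "traits": ["accuracy"], "dataset_scope": 500})
print(issues)  # → []
```

Running the check before launch surfaces configuration problems while they are still cheap to fix.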

During Execution

  • Watch status and timestamps on the experiment detail page.
  • Track the total number of evaluations and the cumulative duration.
  • Note failed runs and rerun them after correcting the configuration.
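The monitoring points above can be condensed into a small progress summary. The run-record shape (`status`, `duration_s`) is an assumption for illustration, not the product's API:

```python
from dataclasses import dataclass

@dataclass
class RunRecord:
    run_id: str
    status: str        # e.g. "completed", "failed", "running" (assumed values)
    duration_s: float

def summarize_runs(runs: list[RunRecord]) -> dict:
    """Aggregate the signals worth watching during execution."""
    failed = [r.run_id for r in runs if r.status == "failed"]
    return {
        "total_evaluations": len(runs),
        "cumulative_duration_s": sum(r.duration_s for r in runs),
        "failed_runs": failed,  # rerun these after correcting the configuration
    }

runs = [RunRecord("r1", "completed", 12.5), RunRecord("r2", "failed", 3.0)]
print(summarize_runs(runs))
```

Keeping the failed-run IDs in the summary makes the rerun step mechanical rather than a manual search.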

After Execution

  1. Check the benchmark summary for the aggregate score and duration.
  2. Review the current metrics for trait-level performance.
  3. Open the dataset detail page for leaderboard and explorer evidence.
  4. Export the results if review or compliance requires an artifact.
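Steps 1, 2, and 4 can be sketched as follows. The field names and the simple mean used for the aggregate score are assumptions, chosen only to show the shape of a post-run review artifact:

```python
import json

def benchmark_summary(trait_scores: dict[str, float], duration_s: float) -> dict:
    # Assumed aggregation: unweighted mean over trait-level scores.
    aggregate = sum(trait_scores.values()) / len(trait_scores)
    return {
        "aggregate_score": round(aggregate, 4),
        "duration_s": duration_s,
        "trait_scores": trait_scores,  # trait-level performance for review
    }

summary = benchmark_summary({"accuracy": 0.91, "robustness": 0.85}, duration_s=420.0)
artifact = json.dumps(summary, indent=2)  # export when review or compliance needs a record
print(summary["aggregate_score"])  # → 0.88
```

Serializing the summary to JSON gives compliance reviewers a self-contained artifact tied to a specific run.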

Rerun Strategies

| Strategy          | When to Use          | Benefit                      |
| ----------------- | -------------------- | ---------------------------- |
| Same config rerun | Validate consistency | Detect noisy results         |
| Model swap        | Compare candidates   | Faster selection             |
| Trait subset run  | Isolate regressions  | Focused debugging            |
| Dataset expansion | Increase confidence  | Better generalization signal |
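The rerun strategies above amount to small, deliberate edits to a base configuration. A hedged sketch, assuming a plain config dict with hypothetical keys (`model`, `traits`, `dataset_scope`):

```python
import copy

def model_swap(config: dict, new_model: str) -> dict:
    rerun = copy.deepcopy(config)       # never mutate the base config
    rerun["model"] = new_model          # compare candidates on identical traits/data
    return rerun

def trait_subset(config: dict, traits: list[str]) -> dict:
    rerun = copy.deepcopy(config)
    rerun["traits"] = traits            # isolate a suspected regression
    return rerun

def dataset_expansion(config: dict, factor: int) -> dict:
    rerun = copy.deepcopy(config)
    rerun["dataset_scope"] = config["dataset_scope"] * factor  # stronger generalization signal
    return rerun

base = {"model": "model-a", "traits": ["accuracy", "robustness"], "dataset_scope": 500}
print(model_swap(base, "model-b")["model"])  # → model-b
```

Deep-copying the base config keeps each rerun a single-variable change, so a score difference can be attributed to the one thing you edited.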

Best Practices

  • Keep experiment names outcome-focused (for example: “May release quality gate”).
  • Use tags to separate baseline, canary, and production candidates.
  • Store notes on why a rerun was triggered.