# Run Workflow
## Before You Run
- Confirm the target model is valid for your environment.
- Confirm the selected traits align with the decision you need to make.
- Use a focused dataset scope for faster feedback loops (see the preflight sketch after this list).
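If you queue runs from a script rather than the UI, these checks can be enforced before anything is submitted. The sketch below is a minimal preflight in Python; the field names (`model`, `traits`, `dataset_scope`) and the allowed trait set are illustrative assumptions, not the platform's actual schema.

```python
# Minimal preflight check before queueing an evaluation run.
# Field names and allowed values are illustrative assumptions, not the real schema.

ALLOWED_TRAITS = {"accuracy", "robustness", "safety"}  # assumed trait names


def preflight(config: dict) -> list[str]:
    """Return a list of problems; an empty list means the run can be submitted."""
    problems = []
    if not config.get("model"):
        problems.append("No target model selected.")
    unknown = set(config.get("traits", [])) - ALLOWED_TRAITS
    if unknown:
        problems.append(f"Traits not available in this environment: {sorted(unknown)}")
    if config.get("dataset_scope") == "full" and config.get("purpose") == "iteration":
        problems.append("Prefer a focused dataset scope for fast feedback loops.")
    return problems


if __name__ == "__main__":
    issues = preflight(
        {"model": "candidate-v2", "traits": ["accuracy"], "dataset_scope": "focused"}
    )
    print("OK to run" if not issues else "\n".join(issues))
```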
## During Execution
- Watch the status and timestamps on the experiment detail page.
- Track the total number of evaluations and the cumulative duration.
- Note failed runs and rerun them after correcting the configuration (a polling sketch follows this list).
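For unattended runs, the same signals can be watched by polling the experiment detail programmatically. The sketch below assumes a REST endpoint such as `GET /experiments/{id}` returning `status`, `total_evaluations`, and `duration_seconds`; the base URL, path, and field names are assumptions for illustration.

```python
import time

import requests

BASE_URL = "https://your-bud-instance.example/api"  # assumed base URL
EXPERIMENT_ID = "exp-123"                           # hypothetical experiment id

# Poll the assumed experiment-detail endpoint until the run finishes,
# printing status, evaluation count, and cumulative duration on each pass.
while True:
    detail = requests.get(f"{BASE_URL}/experiments/{EXPERIMENT_ID}", timeout=30).json()
    print(detail.get("status"), detail.get("total_evaluations"), detail.get("duration_seconds"))
    if detail.get("status") in {"completed", "failed"}:
        break
    time.sleep(60)  # check roughly once a minute
```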
## After Execution
- Check the benchmark summary for the aggregate score and duration.
- Review the current metrics for trait-level performance.
- Open the dataset detail page for leaderboard and explorer evidence.
- Export the results if review or compliance requires an artifact (see the export sketch after this list).
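If you need the summary and an exported artifact without opening the UI, a script along these lines can fetch both. The `/summary` and `/export` endpoints and their response fields are assumed here; check your instance's API reference for the real paths.

```python
import requests

BASE_URL = "https://your-bud-instance.example/api"  # assumed base URL
EXPERIMENT_ID = "exp-123"                           # hypothetical experiment id

# Fetch the aggregate summary and trait-level metrics (assumed endpoint and fields).
summary = requests.get(f"{BASE_URL}/experiments/{EXPERIMENT_ID}/summary", timeout=30).json()
print("Aggregate score:", summary.get("aggregate_score"))
print("Duration (s):", summary.get("duration_seconds"))
for trait in summary.get("traits", []):
    print(trait.get("name"), trait.get("score"))

# Export the results as a CSV artifact for review or compliance (assumed endpoint).
export = requests.get(
    f"{BASE_URL}/experiments/{EXPERIMENT_ID}/export", params={"format": "csv"}, timeout=60
)
with open(f"{EXPERIMENT_ID}-results.csv", "wb") as f:
    f.write(export.content)
```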
## Rerun Strategies
| Strategy | When to Use | Benefit |
|---|---|---|
| Same config rerun | Validate consistency | Detect noisy results |
| Model swap | Compare candidates | Faster selection |
| Trait subset run | Isolate regressions | Focused debugging |
| Dataset expansion | Increase confidence | Better generalization signal |
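Whichever strategy you pick from the table above, derive the rerun configuration from the baseline and change only the field that strategy calls for, so differences stay attributable to a single change. A minimal sketch, assuming a simple dictionary-based configuration; the field names are illustrative.

```python
import copy

# Baseline configuration for the experiment; field names are illustrative assumptions.
baseline = {
    "name": "May release quality gate",
    "model": "candidate-v1",
    "traits": ["accuracy", "robustness"],
    "dataset_scope": "focused",
}


def rerun_config(base: dict, strategy: str, **overrides) -> dict:
    """Copy the baseline, apply only the overrides the strategy needs, and record why."""
    cfg = copy.deepcopy(base)
    cfg.update(overrides)
    cfg["rerun_strategy"] = strategy  # note why this rerun was triggered
    return cfg


same_config = rerun_config(baseline, "same-config")                          # validate consistency
model_swap = rerun_config(baseline, "model-swap", model="candidate-v2")      # compare candidates
trait_subset = rerun_config(baseline, "trait-subset", traits=["robustness"])  # isolate a regression
dataset_expansion = rerun_config(baseline, "dataset-expansion", dataset_scope="full")  # increase confidence
```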
## Best Practices
- Keep experiment names outcome-focused (for example: “May release quality gate”).
- Use tags to separate baseline, canary, and production candidates.
- Store notes on why a rerun was triggered (a metadata sketch follows).
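Captured as data, these practices might look like the snippet below; the keys (`name`, `tags`, `notes`) are assumptions about how your instance records experiment metadata.

```python
# Illustrative experiment metadata; the keys are assumptions, not the real schema.
experiment = {
    "name": "May release quality gate",  # outcome-focused name
    "tags": ["baseline"],                # separate baseline, canary, and production candidates
    "notes": "Rerun triggered after correcting the dataset scope on the previous run.",
}
```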