Bud AI Foundry updates to Model Adapters make it easier to run many domain-specific model behaviors on nearly the same infrastructure footprint as a single base deployment.

Key capabilities

  • Dynamic adapter loading: add or remove adapters while the base model stays live, with no service interruption.
  • Multi-domain inference on shared infrastructure: run legal, finance, support, and document-focused behaviors on shared GPU capacity.
  • Adapter-level monitoring and analytics: track usage and behavior per adapter for governance and audits.
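The dynamic-loading behavior described above can be sketched as a small in-process registry: adapters are registered or removed at runtime, and each request is routed by name without restarting the base deployment. This is an illustrative model of the semantics, not Bud AI Foundry's actual implementation; all names here are hypothetical.

```python
class AdapterRegistry:
    """Illustrative sketch of dynamic adapter loading: adapters can be
    added or removed while the base model keeps serving requests."""

    def __init__(self, base_model: str):
        self.base_model = base_model
        self.adapters: dict[str, str] = {}  # adapter name -> weights path

    def add_adapter(self, name: str, weights_path: str) -> None:
        # Registering a new adapter does not restart the base deployment.
        self.adapters[name] = weights_path

    def remove_adapter(self, name: str) -> None:
        self.adapters.pop(name, None)

    def route(self, requested: str) -> str:
        # Unknown adapter names fall back to the shared base model.
        return requested if requested in self.adapters else self.base_model


registry = AdapterRegistry(base_model="base-llm")
registry.add_adapter("legal-v1", "/adapters/legal-v1")
registry.add_adapter("finance-v1", "/adapters/finance-v1")

print(registry.route("legal-v1"))    # served by the legal adapter
print(registry.route("support-v1"))  # not loaded yet -> base model
```

In this model, multi-domain inference is just per-request routing over one shared deployment, which is why adding a new domain costs far less than standing up another full model.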

Step-by-step guide

  1. Open Models and add your adapter model to the model catalog first.
  2. Go to Projects and select the project that contains your deployed base model.
  3. Open Deployments and then open the target deployment detail page.
  4. In deployment details, open the Adapters tab.
  5. Click Add Adapter.
  6. In Select Adapter Model, choose the adapter model that you already added in Models.
  7. Enter adapter deployment details (for example, adapter deployment name) and continue.
  8. Deploy the adapter and monitor its status until the deployment completes.
  9. Validate adapter behavior with use-case specific prompts.
  10. Use adapter-level metrics and logs for ongoing optimization.
Important: The Add Adapter flow in deployment details only lists adapter-compatible models from the catalog. If your adapter model is not visible, first add/publish it from the Models page.
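For step 9, validation can be scripted as a small suite of use-case prompts sent to the deployment. The sketch below only builds the request bodies; the convention of passing the adapter deployment name in the `model` field, the adapter name, and the prompts are all assumptions for illustration, not a confirmed Bud AI Foundry API.

```python
import json

ADAPTER_NAME = "legal-adapter-v1"  # hypothetical adapter deployment name

# Use-case specific prompts that exercise the adapter's domain behavior.
VALIDATION_PROMPTS = [
    "Summarize the indemnification clause in two sentences.",
    "List the governing-law jurisdictions mentioned in this contract.",
]

def build_request(prompt: str, adapter: str = ADAPTER_NAME) -> str:
    """Return a JSON body for one validation call."""
    return json.dumps({
        "model": adapter,  # assumed: selects the adapter on the shared base
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,  # deterministic output simplifies regression checks
    })

for prompt in VALIDATION_PROMPTS:
    body = build_request(prompt)
    # POST each body to your deployment's inference endpoint and compare
    # responses against the expected domain behavior.
    print(body)
```

Keeping the prompt suite in version control lets you re-run the same checks after every adapter update, and the per-adapter metrics from step 10 can confirm the validation traffic landed on the right adapter.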

Why this matters

  • Launch specialized AI capabilities faster.
  • Reduce infrastructure cost by reusing shared base deployments.
  • Improve control and visibility with adapter-level governance data.

Suggested rollout strategy

  1. Start with one high-value domain adapter.
  2. Compare quality and cost against deploying a separate full model.
  3. Expand to additional adapters once baseline SLAs are met.
  4. Establish adapter naming, ownership, and review standards.
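For step 2 of the rollout, the cost side of the comparison can be estimated with simple arithmetic. All figures below are illustrative assumptions; substitute your own GPU pricing, deployment footprints, and adapter overhead.

```python
# Back-of-the-envelope cost comparison: separate full deployments per
# domain vs. one shared base deployment plus lightweight adapters.
gpu_hour_cost = 4.0          # assumed $/GPU-hour
gpus_per_full_model = 2      # assumed footprint of one full deployment
hours_per_month = 730
domains = 4

# Option A: one full deployment per domain.
full_models_monthly = domains * gpus_per_full_model * gpu_hour_cost * hours_per_month

# Option B: one shared base deployment plus adapters. Adapters add some
# memory overhead, modeled here as a fraction of a GPU per adapter.
adapter_overhead_fraction = 0.05  # assumed 5% of a GPU per adapter
shared_monthly = (gpus_per_full_model
                  + domains * adapter_overhead_fraction) * gpu_hour_cost * hours_per_month

print(f"Separate full models: ${full_models_monthly:,.0f}/month")
print(f"Shared base + adapters: ${shared_monthly:,.0f}/month")
```

Under these assumed numbers the shared option is several times cheaper, but the quality half of the comparison still has to come from the validation prompts and adapter-level metrics, since an adapter may not match a fully fine-tuned model on every domain.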