Deploy your first model and make an API request in minutes.

Prerequisites

  • A Bud AI Foundry account with access to the platform.
  • A project created in your workspace.
  • A model deployed in the project.
  • An API key with permission to call the endpoint.

Step 1: Create an account

  1. Sign in or create a Bud AI Foundry account.
  2. Complete workspace setup and verify your organization details.
  3. Invite teammates if needed and assign roles.

Step 2: Create your first project

  1. Navigate to Projects and click Create Project.
  2. Name the project and add tags and a description.
  3. Generate an API key for the project and store it where your tools can read it, as shown below.
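
The examples in this guide assume the key is exported as the BUD_API_KEY environment variable, which the curl request in Step 5 reads. A minimal way to set it for your current shell session (the value below is a placeholder):
# Replace the placeholder with the key generated for your project;
# keep real keys out of version control.
export BUD_API_KEY="paste-your-project-api-key-here"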

Step 3: Add a model

  1. Go to Models and select +Model.
  2. Choose a model source (Cloud, Hugging Face, URL, or Disk).
  3. Provide metadata, tags, and approval details, then save the model.

Step 4: Deploy the model

  1. Open your project detail page and select Deploy Model.
  2. Choose a model from the catalog and confirm the model source.
  3. Select a cluster and hardware profile.
  4. Configure scaling and safety settings, then launch the deployment.

Step 5: Test your endpoint

Use your project API key to call the endpoint with an OpenAI-compatible request:
curl https://api.bud.studio/v1/chat/completions \
  -H "Authorization: Bearer $BUD_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-deployed-model",
    "messages": [{"role": "user", "content": "Hello from Bud AI Foundry"}]
  }'
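
If the response follows the standard OpenAI-compatible shape (an assumption worth verifying against your deployment), you can extract just the assistant's reply with jq:
# Pull the generated text out of the chat completion response
# (assumes the OpenAI-style choices[0].message.content field is present).
curl -s https://api.bud.studio/v1/chat/completions \
  -H "Authorization: Bearer $BUD_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-deployed-model",
    "messages": [{"role": "user", "content": "Hello from Bud AI Foundry"}]
  }' | jq -r '.choices[0].message.content'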

Step 6: Monitor and iterate

  1. Open Observability to review latency, token usage, and error trends; you can also spot-check token counts from individual responses, as shown below.
  2. Run evaluations or benchmarks to compare models and configurations.
  3. Update routing or scaling policies based on usage patterns.
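
As a quick complement to the Observability dashboards, you can inspect per-request token counts from the response itself, assuming the endpoint returns the OpenAI-style usage object (prompt_tokens, completion_tokens, total_tokens):
# Print only the token-usage block for a single test request.
curl -s https://api.bud.studio/v1/chat/completions \
  -H "Authorization: Bearer $BUD_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-deployed-model",
    "messages": [{"role": "user", "content": "ping"}]
  }' | jq '.usage'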

Next steps