Overview

Event-driven pipelines execute automatically when specific platform events occur. React to model deployments, cluster changes, or custom events without manual intervention.

Available Event Types

Event Type           When Triggered              Use Case
model.added          New model registered        Auto-deploy new models
model.updated        Model metadata changed      Update deployments
deployment.created   New deployment started      Configure monitoring
deployment.failed    Deployment failed           Send alerts, rollback
cluster.ready        Cluster becomes available   Deploy waiting models
cluster.unhealthy    Cluster health degraded     Trigger diagnostics

Creating Event Triggers

In the UI

  1. Open your pipeline
  2. Go to Triggers tab
  3. Click Add Event Trigger
  4. Configure:
    • Event Type: model.added
    • Filter (optional): JSON condition
    • Enabled: Toggle on
  5. Click Save
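The optional Filter field accepts a JSON object of field/value conditions that must all match the event's data for the pipeline to run. For example, a filter limiting the trigger to models imported from Hugging Face might look like this (the `model_source` field appears in the SDK examples on this page):

```json
{
  "model_source": "hugging_face"
}
```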

Using the SDK

from bud import BudClient

client = BudClient()

# Create event trigger
trigger = client.event_triggers.create(
    pipeline_id="pipe_abc123",
    event_type="model.added",
    filter={
        "model_source": "hugging_face"
    },
    enabled=True
)

print(f"Event trigger created: {trigger.id}")

Event Filtering

Filter events so a trigger fires only under specific conditions:
# Only trigger for production deployments
trigger = client.event_triggers.create(
    pipeline_id="pipe_monitoring",
    event_type="deployment.created",
    filter={
        "environment": "production",
        "cluster_id": "cluster_prod"
    }
)
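The matching behavior sketched below is an assumption based on the examples above: every field in the filter must equal the corresponding field in the event's data payload. The platform may support richer operators, but exact-match semantics can be illustrated in a few lines:

```python
def filter_matches(filter_cond: dict, event_data: dict) -> bool:
    """Return True when every filter key equals the matching field in the
    event's data payload. Assumes exact-match semantics; the platform's
    actual filter engine may support additional operators."""
    return all(event_data.get(key) == value for key, value in filter_cond.items())


event_data = {"environment": "production", "cluster_id": "cluster_prod"}

print(filter_matches({"environment": "production"}, event_data))  # → True
print(filter_matches({"environment": "staging"}, event_data))     # → False
```

An empty filter matches every event of the configured type, which is why the best practice below recommends always filtering for relevance.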

Event Data Access

Access event data in your pipeline using the event context:
# In your pipeline actions, reference event data
{
    "id": "notify",
    "action": "notification",
    "params": {
        "message": "Model {{event.model_name}} was added",
        "recipients": ["team@example.com"]
    }
}
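The exact templating engine behind `{{event.*}}` placeholders is not documented here; a minimal sketch of the substitution behavior, assuming placeholders resolve against the event's data payload, looks like this:

```python
import re


def render_template(template: str, event: dict) -> str:
    """Replace {{event.field}} placeholders with values from the event's
    data payload. A sketch of the documented behavior; the platform's
    actual templating engine may differ (e.g., in how missing fields
    are handled)."""
    def repl(match: re.Match) -> str:
        field = match.group(1)
        # Leave the placeholder untouched if the field is missing
        return str(event.get("data", {}).get(field, match.group(0)))

    return re.sub(r"\{\{event\.(\w+)\}\}", repl, template)


event = {"event_type": "model.added", "data": {"model_name": "Llama-3.2-1B"}}
print(render_template("Model {{event.model_name}} was added", event))
# → Model Llama-3.2-1B was added
```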

Example: Auto-Deploy on Model Add

Automatically deploy models when they’re added to the registry:
from bud import BudClient

client = BudClient()

# Create auto-deployment pipeline
pipeline = client.pipelines.create(
    name="Auto-Deploy New Models",
    definition={
        "steps": [
            {
                "id": "health_check",
                "action": "cluster_health",
                "params": {
                    "cluster_id": "cluster_prod"
                }
            },
            {
                "id": "deploy",
                "action": "deployment_create",
                "params": {
                    "model_id": "{{event.model_id}}",
                    "cluster_id": "cluster_prod",
                    "deployment_name": "{{event.model_name}}-auto"
                },
                "depends_on": ["health_check"]
            },
            {
                "id": "notify",
                "action": "notification",
                "params": {
                    "message": "Deployed {{event.model_name}} successfully",
                    "channel": "slack"
                },
                "depends_on": ["deploy"]
            }
        ]
    }
)

# Create event trigger
trigger = client.event_triggers.create(
    pipeline_id=pipeline.id,
    event_type="model.added",
    filter={
        "model_source": "hugging_face",
        "auto_deploy": True
    },
    enabled=True
)

Example: Failure Alert Pipeline

Send alerts when deployments fail:
pipeline = client.pipelines.create(
    name="Deployment Failure Alerts",
    definition={
        "steps": [
            {
                "id": "log_failure",
                "action": "log",
                "params": {
                    "message": "Deployment failed: {{event.deployment_id}}",
                    "level": "error"
                }
            },
            {
                "id": "send_alert",
                "action": "notification",
                "params": {
                    "message": "🚨 Deployment {{event.deployment_name}} failed: {{event.error_message}}",
                    "channel": "slack",
                    "priority": "high"
                }
            }
        ]
    }
)

trigger = client.event_triggers.create(
    pipeline_id=pipeline.id,
    event_type="deployment.failed",
    enabled=True
)

Managing Event Triggers

List All Triggers

# Get all event triggers for a pipeline
triggers = client.event_triggers.list(pipeline_id="pipe_abc123")

for trigger in triggers:
    print(f"{trigger.event_type}: {trigger.enabled}")

Disable a Trigger

# Temporarily disable
client.event_triggers.disable(trigger_id="trig_xyz789")

Update Filter

# Update event filter conditions
client.event_triggers.update(
    trigger_id="trig_xyz789",
    filter={
        "environment": "production",
        "priority": "high"
    }
)

Best Practices

  • Use Filters: Avoid triggering on every event; filter for relevance
  • Idempotent Actions: Ensure pipeline actions are safe to retry
  • Add Logging: Track which events triggered executions
  • Test Events: Manually trigger test events before enabling
  • Monitor Executions: Watch for unexpected trigger frequency
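The "idempotent actions" practice can be sketched with a deduplication wrapper keyed on the event ID. This is illustrative only: a real pipeline would persist seen IDs in durable storage, not process memory.

```python
def make_idempotent_handler(action):
    """Wrap a pipeline action so replayed or retried events are skipped.
    Sketch of the idempotency best practice; seen IDs are kept in memory
    here, whereas production code should use durable storage."""
    seen: set = set()

    def handler(event_id: str) -> bool:
        if event_id in seen:
            return False          # duplicate delivery: do nothing
        action(event_id)
        seen.add(event_id)
        return True

    return handler


deployed = []
handler = make_idempotent_handler(deployed.append)
print(handler("evt_abc123"))  # → True  (first delivery runs the action)
print(handler("evt_abc123"))  # → False (retry is a no-op)
print(deployed)               # → ['evt_abc123']
```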

Event Data Structure

Each event includes standard fields:
{
  "event_type": "model.added",
  "event_id": "evt_abc123",
  "timestamp": "2024-01-29T12:00:00Z",
  "source": "budmodel",
  "data": {
    "model_id": "model_xyz789",
    "model_name": "Llama-3.2-1B",
    "model_source": "hugging_face",
    "created_by": "user@example.com"
  }
}
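If you consume these payloads in your own tooling, the envelope above can be modeled with a small dataclass. The field names are taken from the sample payload; this is not an official SDK type:

```python
from dataclasses import dataclass, field


@dataclass
class PlatformEvent:
    """Minimal model of the documented event envelope (names taken from
    the sample payload above; not an official SDK type)."""
    event_type: str
    event_id: str
    timestamp: str
    source: str
    data: dict = field(default_factory=dict)


payload = {
    "event_type": "model.added",
    "event_id": "evt_abc123",
    "timestamp": "2024-01-29T12:00:00Z",
    "source": "budmodel",
    "data": {"model_id": "model_xyz789", "model_name": "Llama-3.2-1B"},
}
evt = PlatformEvent(**payload)
print(evt.data["model_name"])  # → Llama-3.2-1B
```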

Troubleshooting

Pipeline not triggering
  Cause: Event filter too restrictive or trigger disabled
  Solution: Check filter conditions and verify the trigger is enabled

Too many executions
  Cause: Events firing more frequently than expected
  Solution: Add more specific filters and check the event source

Missing event data
  Cause: Event payload doesn’t include expected fields
  Solution: Check the event schema and add fallback values in the pipeline

Next Steps