
Introduction

The Anthropic integration lets you connect Rootly to your organization’s Anthropic account, using your own API key to power AI-assisted incident workflows with Claude. This gives your team control over the model, data handling, and API usage under your organization’s specific agreements with Anthropic. Usage is billed directly to your Anthropic account — not through Rootly — making this a good fit for organizations with data residency requirements or custom API agreements. With the Anthropic integration, you can:
  • Generate AI-powered summaries, analyses, and responses within incident workflows
  • Send custom prompts to Claude with full Liquid template support for dynamic, incident-aware content
  • Use a system prompt to define Claude’s role, tone, or constraints for each workflow action
  • Choose which Claude model to use based on your account’s available models

Before You Begin

Before setting up the Anthropic integration, make sure you have:
  • A Rootly account with permission to manage integrations
  • An Anthropic API key with access to the models you want to use
We recommend creating a dedicated API key for Rootly rather than using a personal key; this makes it easier to rotate or revoke access independently. Your key is encrypted at rest in Rootly.
API keys are validated against the Anthropic API when you save the integration. If the key is inactive, expired, or over its usage limit, the integration will not save.
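As a rough sketch of what this validation involves, the snippet below builds a minimal request against Anthropic's public Messages API and maps common failure statuses to the cases above. This illustrates the kind of check Rootly performs on save; it is not Rootly's actual implementation, and the default model shown may not be available on every account.

```python
# Sketch: validate an Anthropic API key with a minimal request.
# Approximates the kind of check Rootly runs on save; not Rootly's actual code.
import json
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_validation_request(api_key: str, model: str = "claude-haiku-4-5"):
    """Build the cheapest possible request whose response reveals key status."""
    headers = {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",  # required API version header
        "content-type": "application/json",
    }
    body = {
        "model": model,
        "max_tokens": 1,  # smallest allowed completion, to keep the probe cheap
        "messages": [{"role": "user", "content": "ping"}],
    }
    return urllib.request.Request(
        API_URL, data=json.dumps(body).encode(), headers=headers, method="POST"
    )

def interpret_status(status: int) -> str:
    """Map common HTTP statuses to the failure modes described above."""
    if status == 200:
        return "key is valid"
    if status == 401:
        return "key is invalid or revoked"
    if status == 429:
        return "key is over its rate or usage limit"
    return f"unexpected status {status}"
```

A 401 response corresponds to an inactive or revoked key, and a 429 to a key over its usage limit, which is why the integration refuses to save in those cases.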

Installation

Open the integrations page in Rootly

Navigate to the integrations page in your Rootly workspace and select Anthropic.

Enter your Anthropic API key

Paste your Anthropic API key into the API Key field. You can generate or manage keys in the Anthropic Console.

Select a default model

Choose the Claude model you want to use for workflow actions. The available models are fetched dynamically from your Anthropic account. Common options include:
  • claude-opus-4-5 — most capable, best for complex analysis
  • claude-sonnet-4-5 — balanced performance and speed
  • claude-haiku-4-5 — fastest, best for high-volume workflows
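Because the dropdown is populated from your account, a model only appears if your key can access it. As a rough illustration of what that lookup involves, the sketch below lists model IDs via Anthropic's public models endpoint; it is not Rootly's actual code.

```python
# Sketch: fetch the models available to an API key.
# Endpoint and headers follow Anthropic's public API; not Rootly's actual code.
import json
import urllib.request

def parse_model_ids(payload: dict) -> list:
    """Pull model IDs (e.g. "claude-sonnet-4-5") out of a models-list response."""
    return [m["id"] for m in payload.get("data", [])]

def list_models(api_key: str) -> list:
    """Call GET /v1/models and return the model IDs this key can access."""
    req = urllib.request.Request(
        "https://api.anthropic.com/v1/models",
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",  # required version header
        },
    )
    with urllib.request.urlopen(req) as resp:
        return parse_model_ids(json.load(resp))
```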

Integration connected

Your Anthropic integration is active. The Create Anthropic Chat Completion workflow action is now available in your incident and action item workflows.

Workflow Actions

Create Anthropic Chat Completion

Sends a prompt to Claude and captures the response as a workflow output. The response can be used in subsequent workflow steps, posted to Slack, or written to incident fields. Rootly uses a maximum output of 4,000 tokens per request — for longer responses, consider breaking your workflow into multiple steps with more focused prompts.
| Field | Description | Required |
| --- | --- | --- |
| Model | The Claude model to use, fetched from your Anthropic account | Yes |
| Prompt | The user message sent to Claude; supports Liquid templating | Yes |
| System Prompt | Instructions for Claude's role or behavior; supports Liquid templating | No |
The System Prompt field is useful for setting Claude’s persona or output format. For example: “You are an incident response assistant. Respond in bullet points. Be concise.”
Use Liquid variables in your prompts to include live incident context — for example {{ incident.title }}, {{ incident.severity }}, and {{ incident.description }}. See the Liquid variables reference for all available fields.
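For example, a Prompt that combines these variables into a summary request might look like the template below. The exact fields available depend on your workspace; consult the Liquid variables reference for the full list.

```liquid
Summarize this incident for stakeholders in 3 bullet points.

Title: {{ incident.title }}
Severity: {{ incident.severity }}

Details:
{{ incident.description }}
```

At run time, each `{{ ... }}` expression is replaced with the live value from the incident before the prompt is sent to Claude.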

Troubleshooting

API key rejected on save

Rootly validates your API key by making a test request to the Anthropic API. If the key is rejected, confirm it is active and has not been revoked in the Anthropic Console. Also check that the key has not exceeded its usage limits.

Integration stopped working

If the integration was working and then stopped, the API key may have been rotated or revoked. Go to the integration settings in Rootly and update the API key; Rootly will re-validate on save.

Rate limit errors

Anthropic enforces rate limits based on your plan tier. If you are running many concurrent workflows, you may hit requests-per-minute or tokens-per-minute limits. Consider staggering workflows or upgrading your Anthropic plan. Rootly does not retry rate-limited requests automatically.

A model is missing from the dropdown

The model list is fetched dynamically from your Anthropic account. If a model is not appearing, confirm that your API key has access to that model. Some models require a specific Anthropic tier or may not yet be available on your account.

Unexpected prompt output

Check your Liquid syntax; unclosed tags or undefined variables can cause rendering failures or unexpected output. Use the Liquid variables reference to confirm correct variable names and test your template in a low-stakes workflow first.
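As a defensive pattern, Liquid's `default` filter substitutes a fallback when a variable is empty or undefined, which keeps prompts readable even when a field has not been filled in yet (the field shown is illustrative):

```liquid
Severity: {{ incident.severity | default: "not yet set" }}
```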

Incident Workflows

Build workflows that use Claude to analyze, summarize, or respond to incidents automatically.

Liquid Variables

Reference for all incident variables available in Liquid-templated prompts.

Rootly AI

Learn about Rootly’s built-in AI features for incident management.