Introduction
The Anthropic integration lets you connect Rootly to your organization’s Anthropic account, using your own API key to power AI-assisted incident workflows with Claude. This gives your team control over the model, data handling, and API usage under your organization’s specific agreements with Anthropic. Usage is billed directly to your Anthropic account, not through Rootly, making this a good fit for organizations with data residency requirements or custom API agreements.

With the Anthropic integration, you can:

- Generate AI-powered summaries, analyses, and responses within incident workflows
- Send custom prompts to Claude with full Liquid template support for dynamic, incident-aware content
- Use a system prompt to define Claude’s role, tone, or constraints for each workflow action
- Choose which Claude model to use based on your account’s available models
Before You Begin
Before setting up the Anthropic integration, make sure you have:

- A Rootly account with permission to manage integrations
- An Anthropic API key with access to the models you want to use
API keys are validated against the Anthropic API when you save the integration. If the key is inactive, expired, or over its usage limit, the integration will not save.
Installation
Open the integrations page in Rootly
Navigate to the integrations page in your Rootly workspace and select Anthropic.
Enter your Anthropic API key
Paste your Anthropic API key into the API Key field. You can generate or manage keys in the Anthropic Console.
Select a default model
Choose the Claude model you want to use for workflow actions. The available models are fetched dynamically from your Anthropic account. Common options include:
- claude-opus-4-5: most capable, best for complex analysis
- claude-sonnet-4-5: balanced performance and speed
- claude-haiku-4-5: fastest, best for high-volume workflows
Workflow Actions
Create Anthropic Chat Completion
Sends a prompt to Claude and captures the response as a workflow output. The response can be used in subsequent workflow steps, posted to Slack, or written to incident fields. Rootly uses a maximum output of 4,000 tokens per request; for longer responses, consider breaking your workflow into multiple steps with more focused prompts.

| Field | Description | Required |
|---|---|---|
| Model | The Claude model to use — fetched from your Anthropic account | Yes |
| Prompt | The user message sent to Claude — supports Liquid templating | Yes |
| System Prompt | Instructions for Claude’s role or behavior — supports Liquid templating | No |
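As an illustration, the System Prompt and Prompt fields for this action might be filled in like this. The wording is ours, not a Rootly default, and the Liquid variables shown are examples from the incident context:

```liquid
{% comment %} System Prompt field {% endcomment %}
You are an incident response assistant. Be concise and factual.

{% comment %} Prompt field {% endcomment %}
Summarize the following incident in two sentences for a stakeholder update.
Title: {{ incident.title }}
Severity: {{ incident.severity }}
Description: {{ incident.description }}
```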
Use Liquid variables in your prompts to include live incident context, for example {{ incident.title }}, {{ incident.severity }}, and {{ incident.description }}. See the Liquid variables reference for all available fields.

Troubleshooting
The API key is rejected when saving the integration
Rootly validates your API key by making a test request to the Anthropic API. If the key is rejected, confirm it is active and has not been revoked in the Anthropic Console. Also check that the key has not exceeded its usage limits.
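If you want to check the key outside Rootly, a small script like the one below can help isolate whether the key itself is the problem. It calls Anthropic’s public models endpoint with the standard authentication headers; the helper names and the plain-English interpretations of each status code are illustrative, not part of Rootly:

```python
# Sketch: verify an Anthropic API key by listing models, similar in spirit
# to the validation Rootly performs on save. Helper names are illustrative.
import os
import urllib.request
import urllib.error


def interpret_status(code: int) -> str:
    """Map common Anthropic API status codes to likely causes."""
    return {
        200: "key is valid",
        401: "key is invalid, revoked, or expired",
        403: "key lacks permission for this resource",
        429: "rate limit or usage limit exceeded",
    }.get(code, "unexpected response; check Anthropic status and your plan")


def check_key(api_key: str) -> str:
    """Send a GET to the models endpoint and interpret the result."""
    req = urllib.request.Request(
        "https://api.anthropic.com/v1/models",
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return interpret_status(resp.status)
    except urllib.error.HTTPError as err:
        return interpret_status(err.code)


# Only hits the network if a key is present in the environment.
if os.environ.get("ANTHROPIC_API_KEY"):
    print(check_key(os.environ["ANTHROPIC_API_KEY"]))
```

A 401 here points at the key itself; a 429 points at usage limits rather than authentication.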
The workflow action fails with an authentication error
If the integration was working and then stopped, the API key may have been rotated or revoked. Go to the integration settings in Rootly and update the API key — Rootly will re-validate on save.
The workflow action fails with a rate limit error
Anthropic enforces rate limits based on your plan tier. If you are running many concurrent workflows, you may hit requests-per-minute or tokens-per-minute limits. Consider staggering workflows or upgrading your Anthropic plan. Rootly does not retry rate-limited requests automatically.
The model I want is not appearing in the model selector
The model list is fetched dynamically from your Anthropic account. If a model is not appearing, confirm that your API key has access to that model. Some models require a specific Anthropic tier or may not yet be available on your account.
Liquid variables are not rendering correctly in the prompt
Check your Liquid syntax — unclosed tags or undefined variables can cause rendering failures or unexpected output. Use the Liquid variables reference to confirm correct variable names and test your template in a low-stakes workflow first.
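For example, each of the following is a common mistake; the variable names follow the examples earlier on this page, and the misspelled one is deliberately hypothetical:

```liquid
{% comment %} Broken: the closing braces are unbalanced {% endcomment %}
{{ incident.title }

{% comment %} Correct {% endcomment %}
{{ incident.title }}

{% comment %} Renders as empty text: misspelled / undefined variable {% endcomment %}
{{ incident.severty }}
```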
Related Pages
Incident Workflows
Build workflows that use Claude to analyze, summarize, or respond to incidents automatically.
Liquid Variables
Reference for all incident variables available in Liquid-templated prompts.
Rootly AI
Learn about Rootly’s built-in AI features for incident management.