# LLM actions: Generate data & decisions with AI

[Beta: This feature is new and we're actively working on it.](/beta-experimental-features/#beta-features)

An **LLM action** lets you prompt a Large Language Model (LLM) to generate and store data for use throughout a campaign. It’s how you use generative AI to enhance your workflows!

[![An LLM action in a campaign workflow](https://docs.customer.io/images/llm-action.png)](#bdf7c025ddf3432e082a572d86197f63-lightbox)

**Not seeing this AI feature?**

Make sure “Customer.io AI” is enabled in [Privacy, Data, & AI settings](https://fly.customer.io/settings/privacy). Reach out to an *Account Admin* if you can’t edit the toggle.

## How it works[](#how-it-works)

LLM actions let you prompt an AI model as a part of a campaign and store the output as attributes so you can use them later in the campaign. You can personalize messages, enrich data, and create conditions to help you reach the right audience.

By default, LLM actions store data as **[journey attributes](/journeys/set-journey-attributes/)**, which expire when people exit your campaign. If you want to use the LLM’s response outside of the campaign, you can change them to **[customer attributes](/journeys/attributes/)**, which persist on people’s profiles, instead.

## Billing: LLM actions use AI credits[](#billing-llm-actions-use-ai-credits)

Unlike other workflow blocks, LLM actions have their own currency: **AI credits**. Each time an LLM action calls a model, it uses AI credits. This includes when a person reaches the action in a campaign and when you use Preview response to test it. The number of credits consumed depends on the model you select, the size of the prompt, and the amount of context sent with the request. See [AI credits](/accounts-and-workspaces/ai-credits/) for details on pricing and what happens when credits run out.

## Ways to use LLM actions[](#ways-to-use-llm-actions)

You can use LLM actions to generate data for use across your workflows. Here are a few use cases you could consider:

*   **Personalized product recommendations**: Pass purchase history and browsing data to suggest relevant products for each person.
*   **Follow-up on purchase based on customer sentiment**: Create message content based on a customer’s experience from purchase to delivery. If sentiment is positive, request review. If sentiment is negative, send a follow-up asking what you could do better.
*   **Classify accounts**: Classify customers based on their companies’ data.

### Update data from the response of an LLM action[](#update-data-from-the-response-of-an-llm-action)

You can use LLM actions to analyze a customer’s behavior and generate insights that you store on attributes for use later on in your campaign.

To set or update data based on an LLM’s insights, you would follow these steps:

1.  Prompt the LLM to analyze specific customer attributes, trigger data, or data provided in the prompt.
2.  Store the output as a journey or customer attribute, depending on whether you want to use the data outside of the campaign.
3.  Create subsequent conditions that target the updated attribute or reference the data in messages using liquid.
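As a sketch of step 3, suppose the LLM stored its analysis in a hypothetical journey attribute called `sentiment`. A later message could branch on it with liquid like this:

```liquid
{% if journey.sentiment == "negative" %}
  We're sorry your order didn't go smoothly. What could we do better?
{% else %}
  Glad you're enjoying your purchase! Would you leave us a review?
{% endif %}
```

The `journey.sentiment` key matches whatever **Name** you give the output field; the attribute must be set before this message runs.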

### Send a message using content from an LLM action[](#send-a-message-using-content-from-an-llm-action)

**Don’t communicate sensitive information or updates with LLM actions**

If you’re looking to automate personalized messaging at scale, you can use LLM actions to create email content unique to each person moving through your workflow. However, you’ll be sending content that hasn’t been reviewed by your team.

Remember that LLMs can make mistakes, like not quite matching your tone or incorrectly categorizing your data. Don’t communicate sensitive matters with unreviewed, LLM-generated content. **Consider using our [Agent](/ai/ai-assistant/) to generate a template instead.**

To send a message using content from an LLM action, you would follow these steps:

1.  Prompt the LLM action to create copy based on your customer’s data and your content guidelines.
2.  Store the output as a journey attribute, like `body`.
3.  Reference the journey attribute in a subsequent message block.
    *   If the attribute value doesn’t contain liquid syntax, you can reference it as: `{{journey.body}}`.
    *   If the LLM-generated content contains liquid syntax—like `{{customer.first_name}}`—use [`{% render_liquid journey.body %}`](/journeys/liquid-tag-list/?version=latest#render_liquid-latest) so the liquid within the value renders dynamically. If you use `{{journey.body}}` instead, any liquid in the value displays as static text.
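For example, if the LLM stored the text `Hi {{customer.first_name}}!` in a hypothetical `body` journey attribute, the two approaches behave differently:

```liquid
{{ journey.body }}
{% comment %} Outputs the stored text as-is: Hi {{customer.first_name}}! {% endcomment %}

{% render_liquid journey.body %}
{% comment %} Renders the embedded liquid, e.g.: Hi Ada! {% endcomment %}
```

“Ada” here assumes the selected profile has `first_name` set; add a fallback in your prompt or template if it may be missing.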

## Set up an LLM action[](#set-up-an-llm-action)

LLM actions are available for campaigns. In the workflow builder, scroll down to *Data*, then drag the **Run LLM** action onto your campaign’s canvas.


1.  Click the block to open its configuration menu, and select **Edit Content** to get started.
    
    [![To the right of the LLM action is the configuration menu with options to edit content or add conditions.](https://docs.customer.io/images/llm-action-config.png)](#4b4d4488e3d5073fb170ae209ea79ed9-lightbox)
    
    (Optional) If you only want certain people who trigger the campaign to run the LLM action, you can add **Conditions** here to filter your audience.
    
2.  Add a **Prompt** to instruct the LLM on what to do and how. The more specific you are, the better the results will be. Learn more about [creating prompts](#prompts) below.
    
    [![The LLM action prompt is a text area where you can add your prompt.](https://docs.customer.io/images/llm-action-example.png)](#025b0a05091feb68b6fd720b773c4980-lightbox)
    
3.  Consider the type of task it should perform, then choose your **Model**. [Learn more about model types, credit usage, and costs below](#model-choose-the-right-model-for-the-task).
    
4.  Generate **Output Fields**—the [journey attributes](/journeys/set-journey-attributes/) you want to create to store data from the LLM response. Learn more about [setting and storing responses](#output-store-the-response-as-attributes) below.
    
5.  Click the **Response** tab to set **fallback values** for each attribute created by your output fields.
    
    If you want this data available outside the campaign, this is also where you can change a journey attribute to a **customer attribute**.
    
6.  Click **Preview Response** to test the LLM action and see an example of how the chosen LLM would interpret your prompt. This counts towards your AI credit usage. [Learn more about billing](#billing-llm-actions-use-ai-credits).
    

## Prompt: Tell the LLM what to do and how[](#prompts)

When you prompt an LLM action, you should include the following so the LLM has full context on your use case:

*   Define your goal. If you don’t know exactly what you want, the LLM won’t either.
*   Be direct, concise, and specific. Provide any context that’s necessary to achieve your goal, like how and why to evaluate data.
*   Include any attributes you want the LLM to use in its response. See [What data can an LLM action use](#llm-data) for more info.
*   Define the structure of your output.

Your [AI settings (compliance prompt, business context, etc.)](#review-your-ai-settings) influence the output of LLM actions too. If you want your responses to differ from these defaults, consider updating your settings or explicitly defining the tone, audience, and so on in the prompt of the LLM action.

You can learn more about best practices for prompts from the LLM providers.

*   If you choose a Google model to process your prompt, learn more in [Google’s Gemini documentation](https://ai.google.dev/gemini-api/docs/prompting-strategies).
*   If you choose an Anthropic model to process your prompt, learn more in [Anthropic’s Claude documentation](https://platform.claude.com/docs/en/build-with-claude/prompt-engineering/claude-prompting-best-practices).

### Prompt example[](#prompt-example)

Below is an example of how to improve a prompt. The bottom line: [preview responses](#preview-an-llm-action) to your prompt to gauge whether the output is what you want. But if you’re looking to improve your output quality and make it more consistent, here’s an example that highlights best practices.

| Prompt | Quality | Why |
| --- | --- | --- |
| Account upsell: Compare customer seat utilization to their current plan. | Low | The goal is not clear; there’s only an idea around upselling. The data to use is barely defined and the desired output is absent. |
| Analyze this account’s expansion readiness. Compare their seat utilization `{{customer.seats_used}}` to their current plan `{{customer.plan_name}}`. An account may expand if seat utilization is greater than 80% and they’re not on the highest plan. | Medium | The goal is stated. Some data is identified along with some criteria for evaluation. But the desired output is still absent. |
| [![The prompt gives the LLM action a persona, followed by a goal. Then there are three separate lists showing what data to use to make a decision, how to evaluate criteria, and what the output should look like.](https://docs.customer.io/images/llm-actions-prompt-3.png)](#12f62923ea25ab0d17ead124af19cb3a-lightbox) | High | The goal and the criteria for being expansion ready are defined. The prompt includes the data to use and the desired output format. |
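To illustrate the high-quality pattern in text form (the exact prompt in the screenshot may differ, and the `seat_limit` attribute below is hypothetical), a well-structured prompt might look like:

```text
You are a customer success analyst. Decide whether this account is ready
for an expansion conversation.

Data to use:
- Seats used: {{customer.seats_used}}
- Seat limit: {{customer.seat_limit}}
- Current plan: {{customer.plan_name}}

Criteria:
- Seat utilization is above 80%
- The account is not already on the highest plan

Output:
- expansion_ready: true or false
- reason: one sentence explaining the decision
```

Note how the prompt names a persona, states the goal, lists the exact data and criteria, and defines the output shape.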

### Review your AI settings[](#review-your-ai-settings)

In your account and workspace settings, you can add context about your company and audience to improve how AI generates responses across your workflows. These settings influence how the agent communicates with you, how AI features like segment generation and email content analysis work, and the data generated by LLM actions.

You can manage context given to LLM actions on these pages:

*   [Workspace settings > Business context](https://fly.customer.io/workspaces/last/settings/ai-business-profile)
*   [Account settings > Privacy, Data, & AI](https://fly.customer.io/settings/privacy)
    *   Gemini Safety Settings—Within Run LLM actions, these settings only apply if you’re using one of Google’s models, as indicated in the model dropdown menu. They don’t apply to Anthropic models.
    *   Compliance Prompt

If you want a single LLM action to differ from these defaults, make sure you include that in the prompt you give the LLM.

### What data can an LLM action use?[](#llm-data)

An LLM action bases responses on the text in the LLM action prompt, the [context from account and workspace settings](#review-your-ai-settings), and the workspace data it has access to.

| Data available | Data unavailable |
| --- | --- |
| Text and data provided in the LLM action prompt | Any media files, like images and videos |
| Context from account and workspace settings | Websites, articles, or other online content; it can’t crawl any sites |
| Customer attributes | **Events (unless they’re part of the trigger data)** |
| Journey attributes set earlier in the campaign | **Object or relationship attributes (unless they’re part of the trigger data)** |
| **Data that triggered the campaign** |  |

Any trigger data available through liquid is accessible to LLM actions; the LLM action can use the events, objects, webhooks, and so on that trigger campaigns to generate responses. However, LLM actions can’t access events or object relationships that didn’t trigger the campaign.

For instance, you could ask an LLM action to generate a message based on event data from the trigger, but you shouldn’t prompt it to analyze all event data for a person and save its findings to the customer’s profile; the LLM action can’t see the full breadth of a person’s activity across your platform.

## Model: Choose the right model for the task[](#model-choose-the-right-model-for-the-task)

When you configure an LLM action, you choose which model processes your data. Different models have different strengths—and different [costs](#billing-llm-actions-use-ai-credits).

*   **Reasoning models** produce higher-quality results for complex tasks but use more credits per run.
*   **Quick models** are faster and more cost-efficient, using fewer credits per run.

Consider the complexity of your task when choosing a model. If you’re doing simple categorization or translation, a quick model may work well. For nuanced analysis or creative content, a reasoning model may produce better results.

When you choose a model, you’ll see a multiplier beside the model name. This represents the credit burn rate compared to the base model. In this example, the Anthropic model uses 10x more credits than our base model—Google’s Gemini 2.5 Flash Lite. [Learn more about credit burn rates](/accounts-and-workspaces/ai-credits/#how-llms-consume-ai-credits).

[![The model dropdown shows three use cases. Quick answers is opened and shows two models where one is 10x more than the base model.](https://docs.customer.io/images/llm-action-model-cost.png)](#06a2641462119b663a662534af259c7a-lightbox)

## Output: Store the response as attributes[](#output-store-the-response-as-attributes)

After you add your prompt, you’ll generate the output—how the LLM will store its response. By default, the LLM action stores data as [journey attributes](/journeys/set-journey-attributes/), which you can use throughout a person’s journey in the campaign, but not once they exit. If you want to use this data outside the campaign, [change them to customer attributes in the Response tab](#move-to-customer).

You can use these attributes in a variety of ways in subsequent actions:

*   Personalize messages with liquid
*   Create branches in your workflow based on the attribute output from the model
*   Build conditions to filter people out of certain actions or messages
*   Use them as inputs for other LLM actions downstream

### Create outputs manually[](#create-outputs-manually)

1.  On the Content tab, click **Add field** under Output Fields.
    
    [![A filled in output field with a name, type and description. The checkbox for required is checked.](https://docs.customer.io/images/llm-action-add-field.png)](#ac42f7c949945cca9431b1e7996b1696-lightbox)
    
2.  Add a **Name**. This becomes the key used to reference the output through liquid syntax.
3.  Select a [**Type** of value you want to store](#types-of-values).
4.  Enter a **Description** so you know how to use the output. This is especially helpful if you’re setting customer attributes. This description will appear in your Data Index and help you audit your data in the future.
5.  Select whether the LLM action is required to generate the output.
6.  Click **Save**.

By default, output fields are journey attributes, which expire once a person exits the campaign. If you want to use these attributes outside the campaign, you can [change them to customer attributes in the Response tab](#move-to-customer).

### Generate outputs from your prompt[](#generate-outputs-from-your-prompt)

1.  On the Content tab, click **Generate from prompt** under Output Fields.
2.  Click **Replace** to view the latest output fields.
3.  Review the output: click to view the returned name, value type, and descriptions. Modify them as you see fit.
    
    [![A filled in output field with a name, type and description. The checkbox for required is checked.](https://docs.customer.io/images/llm-action-add-field.png)](#ac42f7c949945cca9431b1e7996b1696-lightbox)
    
    *   **Name**: The key used to reference the output through liquid syntax.
    *   **Type**: The [type of value you want to store](#types-of-values).
    *   **Description**: A description of the output. This is especially helpful if you’re setting customer attributes. This description will appear in your Data Index and help you audit your data in the future.
4.  Save your changes.

You can also [add fields manually](#create-outputs-manually) alongside generated outputs or [delete items](#delete-output-fields) you don’t want to store.

By default, output fields are journey attributes, which expire once a person exits the campaign. If you want to use these attributes outside the campaign, you can [change them to customer attributes in the Response tab](#move-to-customer).

### Types of values[](#types-of-values)

Each output field has a type of value that defines what the LLM action should store in your attributes.

| Type | Description | Example |
| --- | --- | --- |
| Text | A text string value | “Mark your calendars: the summer solstice is coming!” |
| Number | A number that can include decimals | `3.14` |
| Integer | A whole number (no decimals) | `42` |
| Boolean | A true/false value | `true` |
| Date | A date string (ISO 8601 format) | `"2026-03-31"` |
| Date and Time | A timestamp string (ISO 8601 format) | `"2026-03-31T14:30:00Z"` |
| Time | A time string | `"14:30:00"` |
| List | An array of generated text values | `["Subject line 1", "Subject line 2", "Subject line 3"]` |
| Single Select | One value picked from predefined options | `"positive"` (from options like `["positive", "negative", "neutral"]`) |
| Multi Select | Multiple values picked from predefined options | `["positive", "neutral"]` (from options like `["positive", "negative", "neutral"]`) |
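For example, a List output stored in a hypothetical `subject_lines` journey attribute could be iterated with liquid in a later message:

```liquid
{% for subject in journey.subject_lines %}
  - {{ subject }}
{% endfor %}
```

This assumes the output field is named `subject_lines` and the LLM action runs before the message that references it.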

### Delete output fields[](#delete-output-fields)

To remove output fields stored from an LLM action response, go to the Content tab and click beside the field you want to delete. The Response tab will update to reflect the changes.

### Change from journey to customer attributes[](#move-to-customer)

By default, the output fields generated in the Content tab are journey attributes, but you can change that in the Response tab. If you want to take action on the data outside the campaign, then you’ll want to change them to **customer attributes**.

Click beside an attribute to switch types.

[![Response tab. Under journey attributes, there's a field expansion_score. To the right, a menu is selected showing Move to customer attribute.](https://docs.customer.io/images/llm-action-switch-type-2.png)](#893664bd457fe701c6e4f98a11518c1a-lightbox)

You can’t set or modify events, objects, or relationships with LLM actions. However, you can use a [*Send event* action](/journeys/event-action/) to store events based on customer or journey attributes set by an LLM action.

### Respond to failed LLM actions[](#respond-to-failed-llm-actions)

An LLM action can fail for reasons including:

*   Your account [runs out of AI credits](/accounts-and-workspaces/ai-credits/#purchase-additional-credits)
*   The model returns an error
*   The action times out

**If an LLM action fails, your campaign will retry the action twice.** If the action fails after three attempts, the journey will continue without the attribute updates, which could impact subsequent workflow actions that rely on them.

You can set **fallback values** so any condition or content that references the attributes continues to be evaluated in a way that’s best for your customers. **By default, output attributes do not have fallback values, but you can set them in the Response tab.**

[![Response tab. To the right of each attribute name is a field labeled Fallback value.](https://docs.customer.io/images/llm-action-fallback.png)](#c6596b6e191d76136590118864fef3bf-lightbox)

Consider what’s best for your use case. How should people move through your campaign if the Run LLM action fails?

*   If the LLM action generates email copy, it might make sense to store fallback content so your customers still get the core of your message in a subsequent action, just with less personalization. Otherwise, the email would fail to send altogether, and they’d move on to the next action.
*   If the LLM action is meant to determine whether your customer is likely to upgrade their plan, you might leave the fallback blank so you know the action didn’t succeed, and send people down a different path in the workflow when the attribute doesn’t exist.

If a customer or journey attribute is already set and the LLM action should update them, the attributes will only update if the LLM action succeeds or has fallback values. If the LLM action fails and has no fallbacks set, the attributes remain unchanged; they won’t be cleared or unset.

## Preview your LLM action response[](#preview-an-llm-action)

You’ll see two preview options in an LLM action:

*   **Preview Response**—This shows you an example of how the model you selected will interpret your prompt. This uses [AI credits](#billing-llm-actions-use-ai-credits).
*   **Processed Prompt**—This renders any liquid in your prompt according to the sample data selected in the panel. Use this to make sure any liquid logic in your prompt works as expected. On smaller screens, click the *Preview* tab to see the processed prompt.

[![At the top of an LLM action prompt, there's a tab labeled Preview and a button labeled Preview Response.](https://docs.customer.io/images/llm-action-previews.png)](#4cbfb9f8c2ddcd81c28fadbe3b9c9be9-lightbox)

To use either preview, any liquid in your prompt must render. This means the keys must exist in the selected sample data or have fallback values. If the prompt preview doesn’t work, click **Review Errors** to find and fix the liquid that’s causing an issue.

Before you activate a campaign with an LLM action, test it to make sure it returns the results you expect.

1.  Search for and select a person from the Sample Data panel that would cause the LLM action to run.
    
2.  Click **Preview Response**. [Remember, each run uses AI credits.](#billing-llm-actions-use-ai-credits)
    
    [![A pop-up modal shows a response from the LLM action including the model used, credits used, and attributes that would update.](https://docs.customer.io/images/llm-action-preview-response.png)](#a5562224a54966c5222e9f49ba7f9734-lightbox)
    
3.  Review the model’s output to verify it meets your expectations.
    
    [Check your credit usage](#billing-llm-actions-use-ai-credits); does your account have enough credits to run the action considering the anticipated size of your audience?
    
    If a value is cut off, hover your cursor over it to view the full output.
    
4.  Adjust your prompt or model selection if needed and preview the response again.
    

**Test LLM actions with multiple people**

Try testing with several people to make sure your prompts handle a variety of inputs. Check edge cases like missing attributes or unusual values to make sure the LLM returns something useful.