LLM actions: Generate data & decisions with AI
Beta: This feature is new and we're actively working on it.

An LLM action lets you prompt a Large Language Model (LLM) to generate and store data for use throughout a campaign. It’s how you use generative AI to enhance your workflows!


How it works
LLM actions let you prompt an AI model as a part of a campaign and store the output as attributes so you can use them later in the campaign. You can personalize messages, enrich data, and create conditions to help you reach the right audience.
Person enters the campaign → LLM action runs → Response stored as attribute → Use attribute in messages and conditions
By default, LLM actions store data as journey attributes, which expire when people exit your campaign. If you want to use the LLM’s response outside of the campaign, you can change them to customer attributes (data stored on your customers’ profiles, like a person’s name) instead.
Billing: LLM actions use AI credits
Unlike other workflow blocks, LLM actions have their own currency: AI credits. Each time an LLM action calls a model, it uses AI credits. This includes when a person reaches the action in a campaign and when you use Preview response to test it. The number of credits consumed depends on the model you select, the size of the prompt, and the amount of context sent with the request. See AI credits for details on pricing and what happens when credits run out.
Ways to use LLM actions
You can use LLM actions to generate data for use across your workflows. Here are a few use cases you could consider:
- Personalized product recommendations: Pass purchase history and browsing data to suggest relevant products for each person.
- Follow up on purchases based on customer sentiment: Create message content based on a customer’s experience from purchase to delivery. If sentiment is positive, request a review. If sentiment is negative, send a follow-up asking what you could do better.
- Classify accounts: Classify customers based on their companies’ data.
Update data from the response of an LLM action
You can use LLM actions to analyze a customer’s behavior and generate insights that you store on attributes for use later on in your campaign.
To set or update data based on an LLM’s insights, you would follow these steps:
- Prompt the LLM to analyze specific customer attributes, trigger data, or data provided in the prompt.
- Store the output as a journey or customer attribute, depending on if you want to use the data outside of the campaign.
- Create subsequent conditions that target the updated attribute or reference the data in messages using liquid.
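For example, the last step above might look like this in a subsequent message block. This is a minimal sketch; churn_risk is a hypothetical journey attribute an LLM action could set:

```liquid
{% comment %} churn_risk is a hypothetical journey attribute set by an LLM action {% endcomment %}
{% if journey.churn_risk == "high" %}
We'd love to win you back: here's 20% off your next order.
{% else %}
Thanks for being with us! Here's what's new this month.
{% endif %}
```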
Send a message using content from an LLM action
Don’t communicate sensitive information or updates with LLM actions
If you’re looking to automate personalized messaging at scale, you can use LLM actions to create email content unique to each person moving through your workflow. However, you’ll be sending content that hasn’t been reviewed by your team.
Remember that LLMs can make mistakes, like not quite matching your tone or incorrectly categorizing your data. Don’t communicate sensitive matters with unreviewed, LLM-generated content. Consider using our Agent to generate a template instead.
To send a message using content from an LLM action, you would follow these steps:
- Prompt the LLM action to create copy based on your customer’s data and your content guidelines.
- Store the output as a journey attribute, like body.
- Reference the journey attribute in a subsequent message block: {{journey.body}}.
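Because the attribute comes from an LLM, it may be missing if the action fails and no fallback is set. One way to guard the reference in your message (using the body attribute from the steps above) is Liquid's default filter:

```liquid
{{ journey.body | default: "We have news for you! Log in to see what's new." }}
```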
Set up an LLM action
LLM actions are available for campaigns. In the workflow builder, scroll down to Data, then drag the Run LLM block onto your campaign’s canvas.


Click the block to open its configuration menu, and select Edit Content to get started.


- (Optional) If you only want certain people who trigger the campaign to run the LLM action, you can add Conditions here to filter your audience.
- Add a Prompt to instruct the LLM on what to do and how. The more specific you are, the better the results will be. Learn more about creating prompts below.


- Choose your Model. Read the descriptions to determine which one will best suit your needs.
- Generate Output Fields—the journey attributes you want to create to store data from the LLM response. Learn more about setting and storing responses below.
- Click the Response tab to set fallback values for each attribute created by your output fields. If you want this data available outside the campaign, this is also where you can change a journey attribute to a customer attribute.
- Click Preview Response to test the LLM action and see an example of how the chosen LLM would interpret your prompt. This counts towards your AI credit usage. Learn more about billing.
Prompt: Tell the LLM what to do and how
When you prompt an LLM action, you should include the following so the LLM has full context on your use case:
- Define your goal. If you don’t know exactly what you want, the LLM won’t either.
- Be direct, concise, and specific. Provide any context that’s necessary to achieve your goal, like how and why to evaluate data.
- Include any attributes you want the LLM to use in its response. See What data can an LLM action use for more info.
- Define the structure of your output.
Your AI settings (compliance prompt, business context, etc.) influence the output of LLM actions too. If you want your responses to differ from these defaults, consider updating your settings or explicitly defining the tone, audience, and so on in the prompt of the LLM action.
You can learn more about best practices for prompts from the LLM providers.
- If you choose a Google model to process your prompt, learn more in Google’s Gemini documentation.
- If you choose an Anthropic model to process your prompt, learn more in Anthropic’s Claude documentation.
Prompt example
Below is an example of how to improve a prompt. Bottom line: always preview responses to your prompt to gauge whether the output is what you want. But if you’re looking to improve your output quality and make it more consistent, this example highlights best practices.
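As a sketch of these practices, here’s what a prompt might look like once it defines the goal, supplies data through liquid, and specifies the output. The attribute names here are illustrative, not required:

```liquid
You are writing a short promotional email for {{ customer.first_name | default: "our customer" }}.
Goal: encourage them to use our 20% off sale before it ends on Friday.
Context: their most recent purchase category was {{ customer.last_purchase_category | default: "unknown" }}.
Tone: friendly and concise.
Output: a subject line under 50 characters and a two-sentence body.
```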
Review your AI settings
In your account and workspace settings, you can add context about your company and audience to improve how AI generates responses across your workflows. These settings influence how the Agent communicates with you, how AI features like segment generation and email content analysis work, and the data generated by LLM actions.
You can manage context given to LLM actions on these pages:
- Workspace settings > Business context
- Account settings > Privacy, Data, & AI
- Gemini Safety Settings—Within Run LLM actions, these settings only apply if you’re using one of Google’s models, as indicated in the model dropdown menu. They don’t apply to Anthropic models.
- Compliance Prompt
If you want a single LLM action to differ from these defaults, make sure you include that in the prompt you give the LLM.
What data can an LLM action use?
An LLM action bases responses on the text in the LLM action prompt, the context from account and workspace settings, and the workspace data it has access to.
| Data available | Data unavailable |
|---|---|
| Text and data provided in the LLM action prompt | Any media files like images and videos |
| Context from account and workspace settings | Websites, articles, or other online content; it can’t crawl any sites |
| Customer attributes | Events (unless they’re part of the trigger data) |
| Journey attributes set earlier in the campaign | Object or relationship attributes (unless they’re part of the trigger data) |
| Data that triggered the campaign | |
Any trigger data available through liquid is accessible to LLM actions; the LLM action can use the events, objects, webhooks, and so on that trigger campaigns to generate responses. However, LLM actions cannot access events or object relationships that did not trigger the campaign.
For instance, you could ask an LLM action to generate a message based on event data from the trigger, but you shouldn’t prompt the LLM action to analyze all event data for a person and save its findings to the customer’s profile. Because the action can only see the trigger’s data, the response wouldn’t capture the breadth of a person’s activity across your platform.
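For example, in a campaign triggered by an event, a prompt could pull trigger data in through liquid like this. The event properties shown are hypothetical:

```liquid
Classify the sentiment of this delivery feedback as positive, negative, or neutral:
Rating: {{ event.delivery_rating }}
Comment: {{ event.comment }}
```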
Model: Choose the right model for the task
When you configure an LLM action, you choose which model processes your data. Different models have different strengths—and different costs.
- Reasoning models produce higher-quality results for complex tasks but use more credits per run.
- Quick models are faster and more cost-efficient, using fewer credits per run.
Consider the complexity of your task when choosing a model. If you’re doing simple categorization or translation, a quick model may work well. For nuanced analysis or creative content, a reasoning model may produce better results. Check the descriptions in the model dropdown for more info.
Output: Store the response as attributes
After you add your prompt, you’ll generate the output—how the LLM will store its response. By default, the LLM action stores data as journey attributes, which you can use throughout a person’s journey in the campaign, but not once they exit. If you want to use this data outside the campaign, change them to customer attributes in the Response tab.
You can use these attributes in a variety of ways in subsequent actions:
- Personalize messages with liquid
- Create branches in your workflow based on the attribute output from the model
- Build conditions to filter people out of certain actions or messages
- Use them as inputs for other LLM actions downstream
Create outputs manually
- On the Content tab, click Add field under Output Fields.


- Add a Name. This becomes the key used to reference the output through liquid syntax.
- Select a Type of value you want to store.
- Enter a Description so you know how to use the output. This is especially helpful if you’re setting customer attributes. This description will appear in your Data Index and help you audit your data in the future.
- Select whether the LLM action is required to generate the output.
- Click Save.
By default, output fields are journey attributes, which expire once a person exits the campaign. If you want to use these attributes outside the campaign, you can change them to customer attributes in the Response tab.
Generate outputs from your prompt
- On the Content tab, click Generate from prompt under Output Fields.
- Click Replace to view the latest output fields.
- Review the output: click to view the returned name, value type, and descriptions. Modify them as you see fit.


- Name: The key used to reference the output through liquid syntax.
- Type: The type of value you want to store.
- Description: A description of the output. This is especially helpful if you’re setting customer attributes. This description will appear in your Data Index and help you audit your data in the future.
- Save your changes.
You can also add fields manually alongside generated outputs or delete items you don’t want to store.
By default, output fields are journey attributes, which expire once a person exits the campaign. If you want to use these attributes outside the campaign, you can change them to customer attributes in the Response tab.
Types of values
Each output field has a type of value that defines what the LLM action should store in your attributes.
| Type | Description | Example |
|---|---|---|
| Text | A text string value | “Mark your calendars: the summer solstice is coming!” |
| Number | A number that can include decimals | 3.14 |
| Integer | A whole number (no decimals) | 42 |
| Boolean | A true/false value | true |
| Date | A date string (ISO 8601 format) | “2026-03-31” |
| Date and Time | A timestamp string (ISO 8601 format) | “2026-03-31T14:30:00Z” |
| Time | A time string | “14:30:00” |
| List | An array of generated text values | ["Subject line 1", "Subject line 2", "Subject line 3"] |
| Single Select | One value picked from predefined options | “positive” (from options like ["positive", "negative", "neutral"]) |
| Multi Select | Multiple values picked from predefined options | ["positive", "neutral"] (from options like ["positive", "negative", "neutral"]) |
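For instance, if an output field named subject_lines stores a List, a subsequent message could use the first entry via Liquid's first filter. The field name is illustrative:

```liquid
{{ journey.subject_lines | first }}
```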
Delete output fields
To remove output fields stored from an LLM action response, go to the Content tab and click the delete icon beside the field you want to remove. The Response tab will update to reflect the changes.
Change from journey to customer attributes
By default, the output fields generated in the Content tab are journey attributes, but you can change that in the Response tab. If you want to take action on the data outside the campaign, then you’ll want to change them to customer attributes.
Click the icon beside an attribute to switch types.


You can’t set or modify events, objects, or relationships with LLM actions. However, you can use a Send event action to store events based on customer or journey attributes set by an LLM action.
Respond to failed LLM actions
An LLM action can fail for reasons including:
- Your account runs out of AI credits
- The model returns an error
- The action times out
If an LLM action fails, your campaign will retry the action twice. If the action fails after three attempts, the journey will continue without the attribute updates, which could impact subsequent workflow actions that rely on them.
You can set fallback values so any condition or content that references the attributes continues to be evaluated in a way that’s best for your customers. By default, output attributes do not have fallback values, but you can set them in the Response tab.


Consider what’s best for your use case. How should people move through your campaign if the Run LLM action fails?
- If the LLM action generates email copy, it might make sense to store fallback content so your customers still get the core of your message in a subsequent action, just with less personalization. Otherwise, the email would fail to send altogether, and people would move on to the next action.
- If the LLM action is meant to determine whether your customer is likely to upgrade their plan, you might leave the fallback blank so you know the attribute didn’t update, and send people down a different path in the workflow when the attribute does not exist.
If a customer or journey attribute is already set and the LLM action should update them, the attributes will only update if the LLM action succeeds or has fallback values. If the LLM action fails and has no fallbacks set, the attributes remain unchanged; they won’t be cleared or unset.
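Following the upgrade example above, a subsequent message or branch could check whether the attribute was ever set. The attribute name is illustrative:

```liquid
{% if journey.upgrade_likelihood %}
Upgrade likelihood: {{ journey.upgrade_likelihood }}
{% else %}
{% comment %} The attribute is missing: the LLM action failed and no fallback was set {% endcomment %}
{% endif %}
```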
Preview your LLM action response
You’ll see two preview options in an LLM action:
- Preview Response—This shows you an example of how the model you selected will interpret your prompt. This uses AI credits.
- Processed Prompt—This renders any liquid in your prompt according to the sample data selected in the panel. Use this to make sure any liquid logic in your prompt works as expected. On smaller screens, click the Preview tab to see the processed prompt.


To use either preview, any liquid in your prompt must render. This means the keys must exist in the selected sample data or have fallbacks. If the prompt preview doesn’t work, click Review Errors to find and fix the liquid that’s causing an issue.
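For example, a fallback on each liquid key keeps the prompt rendering even when the selected sample data lacks an attribute. The plan attribute is illustrative:

```liquid
This customer is on the {{ customer.plan | default: "free" }} plan.
```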
Before you activate a campaign with an LLM action, test it to make sure it returns the results you expect.
- Search for and select a person from the Sample Data panel that would cause the LLM action to run.
- Click Preview Response. Remember, each run uses AI credits.


- Review the model’s output to verify it meets your expectations.
- Check your credit usage; does your account have enough credits to run the action for the anticipated size of your audience?
- If a value is cut off, hover your cursor over it to view the full output.
- Adjust your prompt or model selection if needed and preview the response again.
Test LLM actions with multiple people
Try testing with several people to make sure your prompts handle a variety of inputs. Check edge cases like missing attributes or unusual values to make sure the LLM returns something useful.

