Snowflake

This feature is available on our Premium and Enterprise plans.

Send Customer.io data about messages, people, metrics, and more to your Snowflake warehouse by way of an Amazon S3 or Google Cloud Storage (GCS) bucket. This integration syncs up to every 15 minutes, helping you keep up to date on your audience’s message activities.

How it works

snowflake integration example

This integration exports individual parquet files for Deliveries, Metrics, Subjects, Outputs, Content, People, and Attributes to your storage bucket. Each parquet file contains data that changed since the last export.

Once the parquet files are in your storage bucket, you can import them into data platforms like Fivetran or data warehouses like Redshift, BigQuery, and Snowflake.

Note that this integration only publishes parquet files to your storage bucket. You must set up your data warehouse to ingest this data. There are many approaches to ingesting data, but it typically involves a COPY command that loads the parquet files from your bucket. After you load parquet files, you should set them to expire so that they’re deleted automatically.
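For example, here’s a minimal sketch of one possible ingestion step in Python, assuming you’ve already created an external stage over your bucket and a target table in Snowflake. The stage, table, and connection details are hypothetical, not something this integration creates for you:

```python
# Hypothetical ingestion sketch: load Customer.io parquet files into Snowflake
# with a COPY command. Assumes an external stage (cio_stage) already points at
# your storage bucket and a deliveries table already exists.
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="your_account",        # placeholder credentials
    user="your_user",
    password="your_password",
    warehouse="your_warehouse",
    database="your_database",
    schema="your_schema",
)
try:
    conn.cursor().execute("""
        COPY INTO deliveries
        FROM @cio_stage
        PATTERN = '.*deliveries_v4_.*[.]parquet'
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
finally:
    conn.close()
```

You could schedule a step like this (or use Snowpipe) and pair it with a lifecycle or expiration rule on the bucket so loaded files are deleted automatically.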

We attempt to export parquet files every 15 minutes, though actual sync intervals and processing times may vary. When you sync large data sets, or when Customer.io experiences a high volume of concurrent sync operations, it can take up to several hours to process and export data. This feature is not intended to sync data in real time.

```mermaid
sequenceDiagram
    participant a as Customer.io
    participant b as Storage Bucket
    participant c as Snowflake
    loop up to every 15 minutes
        a->>b: export parquet files
        b->>c: ingest
        c->>b: expire/delete files before next sync
    end
```

 Your initial sync includes historical data

During the first sync, you’ll receive a history of your Deliveries, Metrics, Subjects, and Outputs data. However, people who were deleted or suppressed before the first sync are not included in the People export, and their historical data in the other export files is anonymized.

The initial export vs incremental exports

Your initial sync is a set of files containing historical data to represent your workspace’s current state. Subsequent sync files contain changesets.

  • Metrics: The initial metrics sync is broken up into files with two sequence numbers: <name>_v4_<workspace_id>_<sequence1>_<sequence2> (see the file-name sketch after the diagram below).
  • Attributes: The initial Attributes sync includes a list of profiles and their current attributes. Subsequent files will only contain attribute changes, with one change per row.
```mermaid
flowchart LR
    a{is it the initial sync?}-->|yes|b[send all history]
    a-->|no|c{was the file already enabled?}
    c-->|yes|d[send changes since last sync]
    c-->|no|e{was the file ever enabled?}
    e-->|yes|f[send changeset since file was disabled]
    e-->|no|g[send all history]
```
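For illustration, here’s a small Python sketch that tells the initial metrics files (two sequence numbers) apart from ordinary incremental files (one sequence number) when you ingest them. The file names are made up, but they follow the formats described above:

```python
# Classify Customer.io export files by name. Initial metrics files carry two
# sequence numbers; other v4 files carry one. File names here are examples only.
import re

PATTERN = re.compile(
    r"^(?P<name>[a-z_]+)_v(?P<version>\d+)_(?P<workspace_id>\d+)"
    r"_(?P<seq1>\d+)(?:_(?P<seq2>\d+))?\.parquet$"
)

for filename in ["metrics_v4_123_1_7.parquet", "deliveries_v4_123_42.parquet"]:
    m = PATTERN.match(filename)
    if m and m.group("seq2"):
        print(filename, "-> initial metrics export (two sequence numbers)")
    elif m:
        print(filename, "-> incremental export, sequence", m.group("seq1"))
```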

For example, let’s say you’ve enabled the Attributes export. We will attempt to sync your data to your storage bucket every 15 minutes:

  1. 12:00pm We sync your Attributes Schema for the first time. This includes a list of profiles and their current attributes.
  2. 12:05pm User1’s email is updated to company-email@example.com.
  3. 12:10pm User1’s email is updated to personal-email@example.com.
  4. 12:15pm We sync your data again. In this export, you would only see attribute changes, with one change per row. User1 would have one row dedicated to their email changing.

Requirements

If you use a firewall or an allowlist, you must allow the following IP addresses to support traffic from Customer.io.

Make sure you use the correct IP addresses for your account region.

| US Region | EU Region |
| --- | --- |
| 34.71.192.245 | 34.76.81.2 |
| 35.184.88.76 | 35.187.55.80 |
| 34.72.101.57 | 104.199.99.65 |
| 34.123.199.33 | 34.77.146.181 |
| 34.67.167.190 | 34.140.234.108 |
| | 35.240.84.170 |

 Do you use other Customer.io features?

These IP addresses are specific to outgoing Data Warehouse integrations. If you use your own SMTP server or receive webhooks, you may also need to allow additional addresses. See our complete IP allowlist.

Set up Snowflake with Google Cloud Storage

Before you begin, make sure that you’re prepared to ingest relevant parquet files from Customer.io.

To use a GCS storage bucket, you must set up a service account key (JSON) that grants read/write permissions to the bucket. You’ll provide the contents of this key to Customer.io when you set up this integration.
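Before you paste the key into Customer.io, you may want to confirm that it can both read and write the bucket. A minimal sketch, assuming a hypothetical bucket name and key file path:

```python
# Sanity-check a GCS service account key: write, read back, and delete a test
# object in the bucket you'll use for the Customer.io sync.
from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client.from_service_account_json("service-account-key.json")
bucket = client.bucket("your-cio-export-bucket")   # hypothetical bucket name

blob = bucket.blob("cio-exports/permission_check.txt")
blob.upload_from_string("write check")   # requires write permission
print(blob.download_as_text())           # requires read permission
blob.delete()
```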

  1. Go to Data & Integrations > Integrations and select Snowflake, then click Sync Bucket for Google Cloud Storage.
  2. Enter information about your GCS bucket and click Validate & select data.
    • Enter the Name of your GCS bucket.
    • Enter the Path to your GCS bucket.
    • Paste the JSON of your Service Account Key.
dws-gcs-bucket.png
  3. Select the data that you want to export from Customer.io to your bucket. By default, we export all data, but you can disable the types that you aren’t interested in.

  4. Click Create and sync data.

Set up Snowflake with Amazon S3 or Yandex

Before you begin, make sure that you’re prepared to ingest relevant parquet files from Customer.io. For S3, you’ll need to set up your bucket with ListBucketVersions, ListBucket, GetObject, and PutObject before you can sync data from Customer.io.
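For reference, here’s a hedged sketch of granting those permissions to the IAM user whose keys you’ll create in step 1 below; the user, policy, and bucket names are hypothetical:

```python
# Attach the minimum S3 permissions the sync needs (ListBucket,
# ListBucketVersions, GetObject, PutObject) to a dedicated IAM user.
import json
import boto3  # pip install boto3

BUCKET = "your-cio-export-bucket"   # hypothetical bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # bucket-level permissions
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:ListBucketVersions"],
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {   # object-level permissions
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
    ],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="customerio-sync",        # hypothetical IAM user
    PolicyName="customerio-dw-sync",
    PolicyDocument=json.dumps(policy),
)
```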

  1. Create an Access Key and a Secret Key with read/write permissions to your S3 or Yandex bucket.
  2. Go to Data & Integrations > Integrations and select Snowflake, then click Sync Bucket.
  3. Enter information about your bucket and click Select data.
    • Enter the Name of your bucket.
    • Enter the path to your bucket.
    • Paste your Access and Secret keys in the appropriate fields.
    • Select the Region your bucket is in.
dws-s3-setup.png
  4. Select the data types that you want to export from Customer.io to your bucket. By default, we export all data types, but you can disable the types that you aren’t interested in.
  5. Click Create and sync data.

Pausing and resuming your sync

You can turn off files you no longer want to receive, or pause them momentarily while you update your integration, and then turn them back on. When you turn a file schema back on, we send files to catch you up from the last export. If you haven’t exported a particular file before—the file was never “on”—the initial sync contains your historical data.

You can also disable your entire sync, in which case we’ll stop sending files altogether. When you enable your sync again, we send all of your historical data as if you’re starting a new integration. Before you disable a sync, consider whether you simply want to disable individual files and resume them later.

 Delete old sync files before you re-enable a sync

Before you resume a sync that you previously disabled, you should clear any old files from your storage bucket so that there’s no confusion between your old files and the files we send with the re-enabled sync.
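If your bucket is on S3, one way to do that is to delete everything under the prefix the old sync wrote to. A minimal sketch, assuming hypothetical bucket and prefix names:

```python
# Clear previous Customer.io sync files before re-enabling a disabled sync.
import boto3  # pip install boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("your-cio-export-bucket")

# Delete every object under the prefix the old sync wrote to.
bucket.objects.filter(Prefix="cio-exports/").delete()
```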

Disabling and enabling individual export files

  1. Go to Data & Integrations > Integrations and select Snowflake.
  2. Select the files you want to turn on or off.

When you enable a file, the next sync contains baseline historical data: changes since your previous sync, or the complete history if you haven’t synced the file before. Subsequent syncs contain changesets.

 Turning the People file off

If you turn the People file off for more than 7 days, you will not be able to re-enable it. You’ll need to delete your sync configuration, purge all sync files from your destination storage bucket, and create a new sync to resume syncing people data.

Turn files on or off

Disabling your sync

If your sync is already disabled, you can enable it again with these instructions. But, before you re-enable your sync, you should clear the previous sync files from your data warehouse bucket first. See Pausing and resuming your sync for more information.

  1. Go to Data & Integrations > Integrations and select Snowflake.
  2. Click Disable Sync.

Manage your configuration

You can change settings for a bucket if your path changes or you need to rotate keys for security purposes.

  1. Go to Data & Integrations > Integrations and select Snowflake.
  2. Click Manage Configuration for your bucket.
  3. Make your changes. Whatever you change, you must enter your Service Account Key (GCS) or Secret Key (S3, Yandex) again.
  4. Click Update Configuration. Subsequent syncs will use your new configuration.
Edit your data warehouse bucket configuration

Update sync schema version

Before you update your data warehouse sync version, see the changelog. You’ll need to update your schemas to upgrade to the latest version (v4).

 When updating from v1 to a later version, you must:

  • Update ingestion logic to accept the new file name format: <name>_v<x>_<workspace_id>_<sequence>.parquet
  • Delete existing rows in your Subjects and Outputs tables (see the sketch after this list). When you update, we send all of your Subjects and Outputs data from the beginning of your history using the new file schema.
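One way to handle that second point is to truncate those tables right before you upgrade. A sketch, assuming hypothetical connection details and tables named subjects and outputs:

```python
# Pre-upgrade cleanup: empty Subjects and Outputs so the full historical re-send
# under the new schema doesn't duplicate existing rows.
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="your_password",
    warehouse="your_warehouse", database="your_database", schema="your_schema",
)
try:
    cur = conn.cursor()
    cur.execute("TRUNCATE TABLE IF EXISTS subjects")
    cur.execute("TRUNCATE TABLE IF EXISTS outputs")
finally:
    conn.close()
```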

dws-upgrade-version.png
  1. Go to Data & Integrations > Integrations and select Snowflake.
  2. Click Upgrade Schema Version.
  3. Follow the instructions to make sure that your ingestion logic is updated accordingly.
  4. Confirm that you’ve made the appropriate changes and click Upgrade sync. The next sync uses the updated schema version.

Parquet file schemas

This section describes the different kinds of files you can export from our Database-out integrations. Many schemas include an internal_customer_id: this is the cio_id, an identifier for a person that Customer.io generates automatically and that cannot be changed. It provides a complete, unbroken record of a person across changes to their other identifiers (id, email, etc.). You can use it to resolve the person associated with a subject, delivery, and so on.

These schemas represent the latest versions available. Check out our changelog for information about earlier versions.

Deliveries

Deliveries are individual email, in-app, push, SMS, Slack, and webhook records sent from your workspace. The first deliveries export file includes baseline historical data. Subsequent files contain rows for data that changed since the last export.

| Field Name | Primary Key | Foreign Key | Description |
| --- | --- | --- | --- |
| workspace_id | ✓ | | INTEGER (Required). The ID of the Customer.io workspace associated with the delivery record. |
| delivery_id | ✓ | | STRING (Required). The ID of the delivery record. |
| internal_customer_id | | People | STRING (Nullable). The cio_id of the person in question. Use the people parquet file to resolve this ID to an external customer_id or email address. |
| subject_id | | Subjects | STRING (Nullable). If the delivery was created as part of a Campaign or API Triggered Broadcast workflow, this is the ID for the path the person went through in the workflow. Note: This value refers to, and is the same as, the subject_name in the subjects table. |
| event_id | | Subjects | STRING (Nullable). If the delivery was created as part of an event-triggered Campaign, this is the ID for the unique event that triggered the workflow. Note that this is a foreign key for the subjects table, and not the metrics table. |
| delivery_type | | | STRING (Required). The type of delivery: email, push, in-app, sms, slack, or webhook. |
| campaign_id | | | INTEGER (Nullable). If the delivery was created as part of a Campaign or API Triggered Broadcast workflow, this is the ID for the Campaign or API Triggered Broadcast. |
| action_id | | | INTEGER (Nullable). If the delivery was created as part of a Campaign or API Triggered Broadcast workflow, this is the ID for the unique workflow item that caused the delivery to be created. |
| newsletter_id | | | INTEGER (Nullable). If the delivery was created as part of a Newsletter, this is the unique ID of that Newsletter. |
| content_id | | | INTEGER (Nullable). If the delivery was created as part of a Newsletter split test, this is the unique ID of the Newsletter variant. |
| trigger_id | | | INTEGER (Nullable). If the delivery was created as part of an API Triggered Broadcast, this is the unique trigger ID associated with the API call that triggered the broadcast. |
| created_at | | | TIMESTAMP (Required). The timestamp the delivery was created at. |
| transactional_message_id | | | INTEGER (Nullable). If the delivery occurred as a part of a transactional message, this is the unique identifier for the API call that triggered the message. |
| seq_num | | | INTEGER (Required). A monotonically increasing number indicating relative recency for each record: the larger the number, the more recent the record. |
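If you create the Snowflake table by hand, here’s a hedged DDL sketch that mirrors the fields above; types and NOT NULL choices follow the Required/Nullable notes, and the table name is up to you:

```python
# Create a deliveries table whose columns mirror the Deliveries parquet schema,
# so a COPY ... MATCH_BY_COLUMN_NAME load has somewhere to land.
import snowflake.connector  # pip install snowflake-connector-python

DDL = """
CREATE TABLE IF NOT EXISTS deliveries (
    workspace_id             INTEGER   NOT NULL,
    delivery_id              STRING    NOT NULL,
    internal_customer_id     STRING,
    subject_id               STRING,
    event_id                 STRING,
    delivery_type            STRING    NOT NULL,
    campaign_id              INTEGER,
    action_id                INTEGER,
    newsletter_id            INTEGER,
    content_id               INTEGER,
    trigger_id               INTEGER,
    created_at               TIMESTAMP NOT NULL,
    transactional_message_id INTEGER,
    seq_num                  INTEGER   NOT NULL
)
"""

conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="your_password",
    warehouse="your_warehouse", database="your_database", schema="your_schema",
)
try:
    conn.cursor().execute(DDL)
finally:
    conn.close()
```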