Snowflake (Advanced)
This feature is available for Premium and Enterprise plans.
About this integration
Snowflake is a cloud data platform that provides a data warehouse-as-a-service designed for the cloud. It allows you to unify, integrate, analyze, and share previously siloed data.
How it works
This integration sends CSV, JSON, or parquet files containing your data to your storage bucket. Then you can ingest the files from your storage bucket into your data warehouse of choice.
We write files for each type of incoming call to your storage bucket every 10 minutes. So you’ll have files for identify calls, track calls, and so on. Files are named with an incrementing number, so it’s easy to determine the sequence of files, and the order of incoming calls.
Sync frequency and file names
Syncs occur every 10 minutes, and each sync file contains data from the previous sync interval. For example, if the last sync occurred at 12:00 PM, the next sync sends only the data received from 12:00:00 PM through 12:09:59 PM.
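The interval logic above can be sketched as follows. This is only an illustration of the window boundaries, assuming a fixed 10-minute cadence; the actual scheduling is internal to Customer.io.

```python
from datetime import datetime, timedelta

def sync_window(last_sync: datetime) -> tuple[datetime, datetime]:
    """Return the (inclusive start, exclusive end) window covered by the
    sync that follows `last_sync`, assuming a fixed 10-minute cadence."""
    return last_sync, last_sync + timedelta(minutes=10)

# A sync at 12:00 PM covers 12:00:00 up to (but not including) 12:10:00.
start, end = sync_window(datetime(2024, 1, 1, 12, 0))
```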
Each sync generates new files for each data type in your storage bucket. Files are named in the format <integration id>.<integration action id>.<current position>.<type>.
- The integration ID and action ID are unique identifiers generated by Customer.io. You’ll see them with the first sync.
- `current position` is an incrementing number beginning at 1 that indicates the order of syncs. So your first sync is 1, the next one is 2, and so on.
- `type` is the type of incoming call—`identify`, `track`, `page`, `screen`, `alias`, or `group`.
So, if your file is called 2184.13699.1.track.json, it’s the first sync file for the track call type.
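If you process these files programmatically, the naming scheme makes it easy to recover the sync order. A minimal sketch, assuming numeric IDs and the extensions listed above; adjust the pattern if your files differ:

```python
import re

# Filenames follow <integration id>.<integration action id>.<position>.<type>.<ext>
FILENAME_RE = re.compile(
    r"^(?P<integration_id>\d+)\.(?P<action_id>\d+)\.(?P<position>\d+)"
    r"\.(?P<type>identify|track|page|screen|alias|group)\.(?P<ext>csv|json|parquet)$"
)

def parse_sync_filename(name: str) -> dict:
    """Split a sync filename into its components; `position` becomes an int
    so files can be sorted into the order the syncs occurred."""
    m = FILENAME_RE.match(name)
    if m is None:
        raise ValueError(f"unrecognized sync filename: {name}")
    parts = m.groupdict()
    parts["position"] = int(parts["position"])
    return parts

# Sort a batch of files into sync order using the position field.
files = ["2184.13699.2.track.json", "2184.13699.1.track.json"]
ordered = sorted(files, key=lambda f: parse_sync_filename(f)["position"])
```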
Getting started
To support Snowflake (Advanced), you’ll set up a Google Cloud Storage, Amazon S3, or Microsoft Azure Blob Storage bucket to store your data. Then, you’ll query and import data from your storage bucket to Snowflake (Advanced) either through a direct query or a product like Stitch.
As a part of this integration, we’ll create parquet, JSON, or CSV files in your storage bucket. See data warehouses for a list of data schemas.
1. Go to Data & Integrations > Integrations and select Snowflake (Advanced) in the Directory tab.
2. Connect to your storage bucket, filling in the fields for your storage provider described below.
3. Review your setup and click Finish to enable your integration.
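Once files land in your bucket, one common ingestion path into Snowflake is an external stage plus a COPY INTO statement. A minimal sketch that builds such a statement — the stage name (`my_gcs_stage`) and table name (`cio_track`) are hypothetical placeholders, and your file format options will likely differ:

```python
def copy_into_sql(table: str, stage: str, call_type: str) -> str:
    """Build a Snowflake COPY INTO statement that loads one call type's
    JSON sync files from an external stage. All names are placeholders."""
    return (
        f"COPY INTO {table} "
        f"FROM @{stage} "
        f"PATTERN = '.*\\.{call_type}\\.json' "
        f"FILE_FORMAT = (TYPE = 'JSON')"
    )

sql = copy_into_sql("cio_track", "my_gcs_stage", "track")
```

Because each sync file has a unique, incrementing name, COPY INTO's load-history tracking skips files it has already ingested, so you can run the statement on a schedule without duplicating rows.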
Google Cloud Storage (GCS)
Endpoint: Endpoint for the internal ETL API.
Token: Authentication token for the internal ETL API.
Format: Format of the data files that will be created.
Bucket Name: Name of the Google Cloud Storage Bucket where files will be written to. Learn more about GCS buckets and bucket naming rules.
Bucket Path: Optional folder inside the bucket where files will be written to.
Service Account: The JSON string of the Google Cloud Service Account with permissions to upload files to a bucket, which can be found in your Google Cloud Console. Learn more about Google Cloud Service Accounts.
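The Service Account field expects the raw JSON key string, not a file path. A quick sanity check before pasting it in — the required keys below are standard fields of a Google Cloud service-account key file:

```python
import json

# Standard fields present in every Google Cloud service-account key file.
REQUIRED_KEYS = {"type", "project_id", "private_key", "client_email"}

def check_service_account(raw: str) -> bool:
    """Return True if `raw` parses as JSON and looks like a
    service-account key (type == "service_account", core keys present)."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return data.get("type") == "service_account" and REQUIRED_KEYS <= data.keys()
```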
Amazon S3
Endpoint: Endpoint for the internal ETL API.
Token: Authentication token for the internal ETL API.
Format: Format of the data files that will be created.
Bucket Name: Name of an existing bucket. Learn more about S3 buckets and bucket naming rules.
Bucket Path: Optional folder inside the bucket where files will be written to.
Access Key: The AWS Access Key ID that will be used to connect to your S3 Bucket. Your Access Key ID can be found in the My Security Credentials section of your AWS Console. Learn more about AWS credentials.
Secret Key: The AWS Secret Access Key that will be used to connect to your S3 Bucket. Your Secret Access Key can be found in the My Security Credentials section of your AWS Console. Learn more about AWS credentials.
Region: The AWS Region where your S3 Bucket resides in. Learn more about AWS Regions.
Azure Blob Storage
Endpoint: Endpoint for the internal ETL API.
Token: Authentication token for the internal ETL API.
Format: Format of the data files that will be created.
Blob Sas Url: The SAS URL of the Azure Blob Storage container with permissions to upload files to a container. Learn how to generate an Azure SAS URL in our documentation.
Blob Path: Optional folder inside the container where files will be written to.
Schemas
The following schemas represent JSON for the different types of files we export to your storage bucket (identify, track, and so on). For CSV and parquet files, we stringify objects and arrays. For example, if identify calls contain a traits object with a first_name and last_name, CSV files output to your storage bucket will contain a traits column with data that looks like this for each row: "{ \"first_name\": \"Bugs\", \"last_name\": \"Bunny\" }".
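Once you've read a CSV or parquet file, a stringified column decodes back into an object with any standard JSON parser:

```python
import json

# The traits column of a CSV or parquet row holds a JSON object
# serialized to a string; decode it to work with the fields directly.
traits_cell = '{ "first_name": "Bugs", "last_name": "Bunny" }'
traits = json.loads(traits_cell)
traits["first_name"]  # "Bugs"
```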
identify
Identify files contain identify calls sent to Customer.io. The context and traits in the schema below are objects in JSON. In CSV and parquet files, these columns contain stringified objects.
- `createdAt` string (date-time): We recommend that you pass date-time values as ISO 8601 date-time strings. We convert this value to fit destinations where appropriate.
- `email` string: A person’s email address. In some cases, you can pass an empty `userId` and we’ll use this value to identify a person.
- Additional Traits (any type): Traits that you want to set on a person. These can take any JSON shape.
group
Group files contain group calls sent to Customer.io. If your integration outputs CSV or parquet files, the context and traits columns contain stringified objects.
- Additional Traits (any type): Traits can have any name, like `account_name` or `total_employees`. These can take any JSON shape.
track
Track files contain entries for the track calls you send to Customer.io. They show information about the events your users perform.
If your integration outputs CSV or parquet files, the context and properties columns contain stringified objects. If your integration outputs JSON files, the context and properties columns contain objects.
- `event` string: The slug of the event name, mapping to an event-specific table.
- `event_text` string: The name of the event.
- Event Properties (any type): Properties that you sent in the event.
page
Page files contain entries for the page calls sent to Customer.io. If your integration outputs CSV or parquet files, the context and properties columns contain stringified objects. If your integration outputs JSON files, the context and properties columns contain objects.
- `category` string: The category of the page. This might be useful if you have single-page routes or a flattened URL structure.
- `path` string: The path of the page. This defaults to `location.pathname`, but can be overridden.
- `referrer` string: The referrer of the page, if applicable. This defaults to `document.referrer`, but can be overridden.
- `search` string: The search query in the URL, if present. This defaults to `location.search`, but can be overridden.
- `title` string: The title of the page. This defaults to `document.title`, but can be overridden.
- `url` string: The URL of the page. This defaults to the canonical URL if available, and falls back to `document.location.href`.
- Page Properties (any type)
screen
Screen files contain entries for the screen calls sent to Customer.io. If your integration outputs CSV or parquet files, the context and properties columns contain stringified objects. If your integration outputs JSON files, the context and properties columns contain objects.
- Additional event properties (any type): Properties that you sent in the event. These can take any JSON shape.
