Snowflake

Overview

The Airbyte Snowflake destination allows you to sync data to Snowflake.

Sync overview

Output schema

Each stream will be output into its own table in Snowflake. Each table will contain 3 columns:
  • _airbyte_ab_id: a UUID assigned by Airbyte to each event that is processed. The column type in Snowflake is VARCHAR.
  • _airbyte_emitted_at: a timestamp representing when the event was pulled from the data source. The column type in Snowflake is TIMESTAMP WITH TIME ZONE.
  • _airbyte_data: a JSON blob representing the event data. The column type in Snowflake is VARIANT.
Note that Airbyte will create permanent tables. If you prefer to create transient tables (see Snowflake docs for a comparison), you will want to create a dedicated transient database for Airbyte (CREATE TRANSIENT DATABASE airbyte_database).
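Based on the column descriptions above, the raw table created for a stream looks roughly like the following sketch (the table name `_airbyte_raw_users` is illustrative; the exact DDL is generated by the connector):

```sql
-- Illustrative sketch of the raw table for a stream named "users";
-- the actual DDL is generated by the connector.
CREATE TABLE IF NOT EXISTS _airbyte_raw_users (
    _airbyte_ab_id      VARCHAR,                  -- UUID assigned per event
    _airbyte_emitted_at TIMESTAMP WITH TIME ZONE, -- when the event was pulled
    _airbyte_data       VARIANT                   -- the event payload as JSON
);
```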

Features

| Feature | Supported? (Yes/No) | Notes |
| :--- | :--- | :--- |
| Full Refresh Sync | Yes | |
| Incremental - Append Sync | Yes | |
| Incremental - Deduped History | Yes | |
| Namespaces | Yes | |

Getting started

We recommend creating an Airbyte-specific warehouse, database, schema, user, and role for writing data into Snowflake so it is possible to track costs specifically related to Airbyte (including the cost of running this warehouse) and control permissions at a granular level. Since the Airbyte user creates, drops, and alters tables, OWNERSHIP permissions are required in Snowflake. If you are not following the recommended script below, please limit the OWNERSHIP permissions to only the necessary database and schema for the Airbyte user.
We provide the following script to create these resources. Before running, you must change the password to something secure. You may change the names of the other resources if you desire.
```sql
-- set variables (these need to be uppercase)
set airbyte_role = 'AIRBYTE_ROLE';
set airbyte_username = 'AIRBYTE_USER';
set airbyte_warehouse = 'AIRBYTE_WAREHOUSE';
set airbyte_database = 'AIRBYTE_DATABASE';
set airbyte_schema = 'AIRBYTE_SCHEMA';

-- set user password
set airbyte_password = 'password';

begin;

-- create Airbyte role
use role securityadmin;
create role if not exists identifier($airbyte_role);
grant role identifier($airbyte_role) to role SYSADMIN;

-- create Airbyte user
create user if not exists identifier($airbyte_username)
password = $airbyte_password
default_role = $airbyte_role
default_warehouse = $airbyte_warehouse;

grant role identifier($airbyte_role) to user identifier($airbyte_username);

-- change role to sysadmin for warehouse / database steps
use role sysadmin;

-- create Airbyte warehouse
create warehouse if not exists identifier($airbyte_warehouse)
warehouse_size = xsmall
warehouse_type = standard
auto_suspend = 60
auto_resume = true
initially_suspended = true;

-- create Airbyte database
create database if not exists identifier($airbyte_database);

-- grant Airbyte warehouse access
grant USAGE
on warehouse identifier($airbyte_warehouse)
to role identifier($airbyte_role);

-- grant Airbyte database access
grant OWNERSHIP
on database identifier($airbyte_database)
to role identifier($airbyte_role);

commit;

begin;

USE DATABASE identifier($airbyte_database);

-- create schema for Airbyte data
CREATE SCHEMA IF NOT EXISTS identifier($airbyte_schema);

commit;

begin;

-- grant Airbyte schema access
grant OWNERSHIP
on schema identifier($airbyte_schema)
to role identifier($airbyte_role);

commit;
```

Set up the Snowflake destination in Airbyte

You should now have all the requirements needed to configure Snowflake as a destination in the UI. You'll need the following information to configure the Snowflake destination:
  • Host
  • Role
  • Warehouse
  • Database
  • Schema
  • Username
  • Password
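Before filling in the UI, you can optionally verify that the new role, warehouse, and schema are usable by logging in to Snowflake as the Airbyte user and running a quick check. The names below match the defaults from the setup script above; adjust them if you changed the resource names:

```sql
-- Run as the Airbyte user to confirm access; names match the script defaults.
use role AIRBYTE_ROLE;
use warehouse AIRBYTE_WAREHOUSE;
use database AIRBYTE_DATABASE;
use schema AIRBYTE_SCHEMA;
select current_role(), current_warehouse(), current_database(), current_schema();
```

If any of these statements fail, revisit the grants from the setup script before configuring the destination.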

Notes about Snowflake Naming Conventions

Unquoted Identifiers:

  • Start with a letter (A-Z, a-z) or an underscore (“_”).
  • Contain only letters, underscores, decimal digits (0-9), and dollar signs (“$”).
  • Are case-insensitive.
When an identifier is unquoted, it is stored and resolved in uppercase.

Quoted Identifiers:

  • The identifier is case-sensitive.
  • Delimited identifiers (i.e. identifiers enclosed in double quotes) can start with and contain any valid characters, including:
    • Numbers
    • Special characters (., ', !, @, #, $, %, ^, &, *, etc.)
    • Extended ASCII and non-ASCII characters
    • Blank spaces
When an identifier is double-quoted, it is stored and resolved exactly as entered, including case.

Note

  • Regardless of whether an identifier is unquoted or double-quoted, the maximum number of characters allowed is 255 (including blank spaces).
  • Identifiers can also be specified using string literals, session variables or bind variables. For details, see SQL Variables.
  • If an object is created using a double-quoted identifier, when referenced in a query or any other SQL statement, the identifier must be specified exactly as created, including the double quotes. Failure to include the quotes might result in an Object does not exist error (or similar type of error).
  • Also, note that the entire identifier must be enclosed in quotes when referenced in a query/SQL statement. This is particularly important if periods (.) are used in identifiers because periods are also used in fully-qualified object names to separate each object.
Therefore, the Airbyte Snowflake destination creates tables and schemas using unquoted identifiers when possible, falling back to quoted identifiers when the names contain special characters.
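The difference is easy to see in practice. The table names below are arbitrary examples:

```sql
-- Unquoted: stored and resolved as MY_TABLE; references are case-insensitive.
create table my_table (id integer);
select * from MY_TABLE;    -- resolves to the same table

-- Quoted: stored exactly as entered; quotes (and case) are required to reference it.
create table "my.table" (id integer);
select * from "my.table";  -- omitting the quotes raises "Object does not exist"
```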

Cloud Storage Staging

By default, Airbyte uses batches of INSERT commands to add data to a temporary table before copying it over to the final table in Snowflake. This is too slow for larger (multi-GB) replications. For those larger replications, we recommend configuring cloud storage staging to allow batched writes and bulk loading.

Internal Staging

Internal named stages are storage location objects within a Snowflake database/schema. Because they are database objects, the same security permissions apply as with any other database object, and no additional properties need to be provided for internal staging.
Operating on a stage also requires the USAGE privilege on the parent database and schema.
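Staged loading follows Snowflake's standard stage workflow. A simplified manual equivalent looks like the sketch below; the stage, file, and table names are illustrative, and Airbyte manages this lifecycle automatically:

```sql
-- Illustrative only: Airbyte creates and manages its stages itself.
create stage if not exists airbyte_stage;           -- internal named stage
put file:///tmp/records.csv @airbyte_stage;         -- upload a local file to the stage
copy into _airbyte_raw_users from @airbyte_stage;   -- bulk load staged files
remove @airbyte_stage;                              -- clean up staged files
```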

AWS S3

For AWS S3, you will need to create a bucket and provide credentials to access the bucket. We recommend creating a bucket that is only used for Airbyte to stage data to Snowflake. Airbyte needs read/write access to interact with this bucket.

Google Cloud Storage (GCS)

First you will need to create a GCS bucket.
Then you will need to run the script below:
  • You must run the script as the account admin for Snowflake.
  • Replace AIRBYTE_ROLE with the role you used for Airbyte's Snowflake configuration.
  • Replace YOURBUCKETNAME with your bucket name.
  • The stage name can be modified to any valid name.
  • The integration must be named gcs_airbyte_integration.
The script:
```sql
create storage INTEGRATION gcs_airbyte_integration
TYPE = EXTERNAL_STAGE
STORAGE_PROVIDER = GCS
ENABLED = TRUE
STORAGE_ALLOWED_LOCATIONS = ('gcs://YOURBUCKETNAME');

create stage gcs_airbyte_stage
url = 'gcs://YOURBUCKETNAME'
storage_integration = gcs_airbyte_integration;

GRANT USAGE ON integration gcs_airbyte_integration TO ROLE AIRBYTE_ROLE;
GRANT USAGE ON stage gcs_airbyte_stage TO ROLE AIRBYTE_ROLE;

DESC STORAGE INTEGRATION gcs_airbyte_integration;
```
The final query should show a STORAGE_GCP_SERVICE_ACCOUNT property with an email as the property value.
Finally, grant that service account email read/write access to your bucket.
| Version | Date | Pull Request | Subject |
| :--- | :--- | :--- | :--- |
| 0.4.2 | 2022-01-10 | #9141 | Fixed duplicate rows on retries |
| 0.4.1 | 2022-01-06 | #9311 | Updated schema creation during check |
| 0.4.0 | 2021-12-27 | #9063 | Updated normalization to produce permanent tables |
| 0.3.24 | 2021-12-23 | #8869 | Changed staging approach to byte-buffered |
| 0.3.23 | 2021-12-22 | #9039 | Added part_size configuration in UI for S3 loading method |
| 0.3.22 | 2021-12-21 | #9006 | Updated JDBC schema naming to follow Snowflake naming conventions |
| 0.3.21 | 2021-12-15 | #8781 | Updated check method to verify permissions to create/drop stage for internal staging; compatibility fix for Java 17 |
| 0.3.20 | 2021-12-10 | #8562 | Moved classes around for better dependency management; compatibility fix for Java 17 |
| 0.3.19 | 2021-12-06 | #8528 | Set internal staging as default choice |
| 0.3.18 | 2021-11-26 | #8253 | Snowflake internal staging support |
| 0.3.17 | 2021-11-08 | #7719 | Improved handling of wide rows by buffering records based on their byte size rather than their count |
| 0.3.15 | 2021-10-11 | #6949 | Each stream is split into files of 10,000 records each for copying via S3 or GCS |
| 0.3.14 | 2021-09-08 | #5924 | Fixed AWS S3 staging COPY writing records from different streams into the same raw table |
| 0.3.13 | 2021-09-01 | #5784 | Updated query timeout from 30 minutes to 3 hours |
| 0.3.12 | 2021-07-30 | #5125 | Enabled additionalProperties in spec.json |
| 0.3.11 | 2021-07-21 | #3555 | Partial success in BufferedStreamConsumer |
| 0.3.10 | 2021-07-12 | #4713 | Tagged traffic with airbyte label to enable optimization opportunities from Snowflake |