This page guides you through the process of setting up the Firebolt destination connector.
The Firebolt destination connector supports two replication strategies:

1. SQL: Replicates data directly into the destination tables with SQL INSERT statements. Requires only Firebolt credentials, but throughput is lower than the S3 strategy.
2. S3: Replicates data by first uploading it to an S3 bucket, creating an External Table, and writing into a final Fact Table. This is the recommended loading approach. Requires an S3 bucket and credentials in addition to Firebolt credentials.
Airbyte automatically picks a strategy based on the given configuration: if an S3 configuration is present, Airbyte uses the S3 strategy; otherwise it uses the SQL strategy.

For the SQL strategy:

- Engine (optional)
  - If connecting to a non-default engine, specify its name or URL here.
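The strategy selection described above can be sketched as follows. This is a simplified illustration, not the connector's actual code, and the configuration key names are assumptions:

```python
def pick_strategy(config: dict) -> str:
    """Pick a replication strategy the way the docs describe:
    use S3 loading when an S3 configuration is present,
    otherwise fall back to direct SQL inserts.
    Key names below are illustrative, not the connector's exact schema."""
    s3_keys = {"s3_bucket", "s3_region", "aws_key_id", "aws_key_secret"}
    if any(key in config for key in s3_keys):
        return "S3"
    return "SQL"

# A config that includes S3 details selects the S3 strategy:
print(pick_strategy({"database": "mydb", "s3_bucket": "my-staging-bucket"}))  # S3
# Without S3 details, the SQL strategy is used:
print(pick_strategy({"database": "mydb"}))  # SQL
```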
For the S3 strategy:

- S3 Bucket Name
  - See the AWS documentation on creating an S3 bucket.
- S3 Bucket Region
  - Create the S3 bucket in the same region as the Firebolt database.
- Access Key ID
  - ID of an AWS access key with permission to read, write, and delete objects in the bucket.
- Secret Access Key
  - The secret corresponding to the Access Key ID above.
- Host (optional)
  - Firebolt backend URL. Can be left blank for most use cases.
- Engine (optional)
  - If connecting to a non-default engine, specify its name or URL here.
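Putting the fields together, an S3-strategy destination configuration might look like the sketch below. All key names and values are illustrative placeholders, not the connector's exact JSON schema:

```python
# Illustrative S3-strategy configuration; key names and values are
# placeholders, not the connector's exact specification.
s3_strategy_config = {
    "username": "user@example.com",    # placeholder Firebolt credentials
    "password": "********",
    "database": "my_database",
    "host": "",                        # optional: Firebolt backend URL
    "engine": "",                      # optional: non-default engine name or URL
    "s3_bucket": "my-staging-bucket",  # must exist before the sync runs
    "s3_region": "us-east-1",          # same region as the Firebolt database
    "aws_key_id": "AKIA...",           # access key with bucket permissions
    "aws_key_secret": "********",      # secret for the key ID above
}

def uses_s3_strategy(config: dict) -> bool:
    """True when all the S3-specific fields are filled in."""
    return all(
        config.get(k)
        for k in ("s3_bucket", "s3_region", "aws_key_id", "aws_key_secret")
    )

print(uses_s3_strategy(s3_strategy_config))  # True
```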
- Create a Firebolt account following the guide.
- Follow the getting started tutorial to set up a database.
- Create a General Purpose (read-write) engine as described here.
- (Optional) Create a staging S3 bucket (for the S3 strategy).
- (Optional) Create an IAM user with programmatic access to read, write, and delete objects in the S3 bucket.
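The IAM permissions in the last step can be expressed as a policy document along these lines. This is a minimal sketch: the bucket name is a placeholder, and your security requirements may call for tighter scoping:

```python
import json

BUCKET = "my-staging-bucket"  # placeholder: your staging bucket name

# Minimal IAM policy sketch granting read, write, and delete on the
# staging bucket's objects, plus listing the bucket itself.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
    ],
}

print(json.dumps(policy, indent=2))
```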
Supported sync modes
The Firebolt destination connector supports the following sync modes:
- Full Refresh
- Incremental - Append Sync
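The difference between the two modes can be illustrated with a small sketch. This is a simplification of Airbyte's sync semantics, not connector code:

```python
# Illustrative semantics of the two supported sync modes
# (a simplification, not the connector's actual implementation).

def full_refresh(existing_rows, new_rows):
    """Full Refresh: the destination data is rebuilt from this sync's records."""
    return list(new_rows)

def incremental_append(existing_rows, new_rows):
    """Incremental - Append: new records are added to what is already there."""
    return list(existing_rows) + list(new_rows)

table = [{"id": 1}]
print(full_refresh(table, [{"id": 2}]))        # [{'id': 2}]
print(incremental_append(table, [{"id": 2}]))  # [{'id': 1}, {'id': 2}]
```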
Connector-specific features & highlights
Each stream will be output into its own raw Fact table in Firebolt. Each table will contain three columns:

- `_airbyte_ab_id`: a UUID assigned by Airbyte to each event that is processed. The column type in Firebolt is `VARCHAR`.
- `_airbyte_emitted_at`: a timestamp representing when the event was pulled from the data source. The column type in Firebolt is `TIMESTAMP`.
- `_airbyte_data`: a JSON blob representing the event data. The column type in Firebolt is `VARCHAR`, but it can be parsed with JSON functions.
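Since the JSON blob is stored as plain text, a row shaped like the raw table above can be handled as in this sketch. The values are made up, and inside Firebolt you would use its SQL JSON functions rather than Python:

```python
import json
import uuid
from datetime import datetime, timezone

# A row shaped like the raw Fact table described above (values are made up).
row = {
    "_airbyte_ab_id": str(uuid.uuid4()),
    "_airbyte_emitted_at": datetime(2022, 5, 18, tzinfo=timezone.utc),
    "_airbyte_data": '{"user_id": 42, "event": "signup"}',
}

# The _airbyte_data blob is plain text until parsed:
event = json.loads(row["_airbyte_data"])
print(event["event"])  # signup
```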
| Version | Date | Pull Request | Subject |
| :--- | :--- | :--- | :--- |
| 0.1.0 | 2022-05-18 | 13118 | New Destination: Firebolt |