This destination syncs data to a Databricks cluster. Each stream is written to its own table.
This connector requires a JDBC driver to connect to a Databricks cluster. The driver is developed by Simba. Before using the driver and the connector, you must agree to the JDBC/ODBC driver license. This means that you may only use this connector to connect third-party applications to Apache Spark SQL within a Databricks offering using the ODBC and/or JDBC protocols.
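For orientation, a JDBC URL for the Simba Spark driver generally follows the pattern below. This is a sketch rather than the connector's exact configuration; the hostname, HTTP path, and personal access token are placeholders taken from your own cluster's JDBC/ODBC settings:

```text
jdbc:spark://<server-hostname>:443/default;transportMode=http;ssl=1;httpPath=<http-path>;AuthMech=3;UID=token;PWD=<personal-access-token>
```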
Due to legal reasons, this is currently a private connector that is only available in Airbyte Cloud. We are working on making it publicly available. Please follow this issue for progress.
This connector supports the following sync modes:

- Full Refresh Sync
  - Warning: this mode deletes all previously synced data in the configured bucket path.
- Incremental - Append Sync
- Incremental - Deduped History
Databricks supports various cloud storage options as the data source. Currently, only Amazon S3 is supported.
⚠️ Please note that under "Full Refresh Sync" mode, data in the configured bucket and path will be wiped out before each sync. We recommend provisioning a dedicated S3 resource for this sync to prevent unexpected data deletion caused by misconfiguration. ⚠️
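One way to guard against accidental deletion is to give the connector credentials scoped to a dedicated bucket, so that a misconfigured path cannot touch anything else. Below is a minimal sketch of such an IAM policy, assuming a hypothetical bucket named `airbyte-databricks-staging`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::airbyte-databricks-staging",
        "arn:aws:s3:::airbyte-databricks-staging/*"
      ]
    }
  ]
}
```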
Staging Parquet Files
Data streams are first written as staging Parquet files on S3, and then loaded into Databricks tables. All the staging files will be deleted after the sync is done. For debugging purposes, here is the full path for a staging file:
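As an illustration only (the exact layout can vary by connector version), the staging location follows a pattern along these lines, with each placeholder drawn from the connector configuration:

```text
s3://<bucket-name>/<bucket-path>/<uuid>/<stream-name>
```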
Currently, all streams are synced into unmanaged Spark SQL tables. See the documentation for details. In short, you have full control over the location of the data underlying an unmanaged table. The full path of each data stream is:
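Along the same lines (again illustrative rather than authoritative), the table data for each stream lands under a path shaped like:

```text
s3://<bucket-name>/<bucket-path>/<database-schema>/<stream-name>
```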
Please keep these data directories on S3. Otherwise, the corresponding tables will have no data in Databricks.
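To make the unmanaged-table behavior concrete, here is a minimal PySpark sketch. The schema, table, and S3 path are hypothetical, and this illustrates the general mechanism rather than what the connector executes:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("CREATE DATABASE IF NOT EXISTS demo_schema")

# An unmanaged (external) table: Spark stores only the metadata, while the
# Parquet files under LOCATION stay under your control on S3.
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo_schema.users
    USING PARQUET
    LOCATION 's3://my-bucket/data_output_path/demo_schema/users'
""")

# Dropping an unmanaged table removes the metadata only; the S3 files remain.
# Conversely, deleting the S3 directory leaves an empty table behind, which
# is why the data directories must be kept in place.
spark.sql("DROP TABLE IF EXISTS demo_schema.users")
```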
Each table will have the following columns:

| Column | Type | Notes |
| :--- | :--- | :--- |
| `_airbyte_ab_id` | string | A UUID assigned to each processed record. |
| `_airbyte_emitted_at` | timestamp | Data emission timestamp. |
| Data fields from the source stream | various | All fields in the staging Parquet files are expanded in the table. |
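For instance, you could inspect the Airbyte bookkeeping columns of a synced stream from a Databricks notebook, where `spark` is predefined (the schema and table names here are hypothetical):

```python
# Show the most recently emitted records, using the Airbyte-added columns.
spark.sql("""
    SELECT _airbyte_ab_id, _airbyte_emitted_at
    FROM demo_schema.users
    ORDER BY _airbyte_emitted_at DESC
    LIMIT 10
""").show()
```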
Under the hood, an Airbyte data stream's JSON schema is first converted to an Avro schema, then each JSON record is converted to an Avro record, and finally the Avro records are written out in the Parquet format. Because the data stream can come from any data source, the JSON-to-Avro conversion process has its own rules and limitations. Learn more about how source data is converted to Avro and the current limitations here.
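The connector itself performs this conversion in Java. Purely to illustrate the shape of the JSON-to-Avro step, here is a small Python sketch using the `fastavro` library with a hypothetical stream schema:

```python
import io
import json

import fastavro

# A hypothetical Avro schema, as would be derived from the stream's JSON schema.
avro_schema = fastavro.parse_schema({
    "type": "record",
    "name": "users",
    "fields": [
        {"name": "id", "type": ["null", "long"], "default": None},
        {"name": "name", "type": ["null", "string"], "default": None},
    ],
})

# A JSON record emitted by the source stream...
record = json.loads('{"id": 1, "name": "alice"}')

# ...is serialized as an Avro record; the connector then writes such records
# out in the Parquet format for staging.
buffer = io.BytesIO()
fastavro.writer(buffer, avro_schema, [record])
```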