Features

| Feature | Supported? |
| :--- | :--- |
| Full Refresh Sync | Yes |
| Incremental Sync | No |
| Replicate Incremental Deletes | No |
| Replicate Folders (multiple Files) | No |
| Replicate Glob Patterns (multiple Files) | No |

This source produces a single table for the target file, as it replicates only one file at a time for now. Note that you should provide the dataset_name, which dictates how the table will be identified in the destination (since a URL can contain complex characters).
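
For illustration, a source configuration could look roughly like the sketch below, written here as a Python dict. The dataset_name, url, format, and provider/reader_options fields are the ones described on this page; the exact shape of the provider block is an assumption, not the authoritative spec.

```python
# A minimal sketch of a File source configuration (not the authoritative spec).
# dataset_name controls the table name in the destination, independent of the URL.
source_config = {
    "dataset_name": "epidemiology",          # table name in the destination
    "format": "csv",                         # file format to parse
    "url": "https://example.com/data.csv",   # hypothetical public file
    "provider": {"storage": "HTTPS"},        # provider shape is an assumption
}
```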

Storage Providers

| Storage Provider | Supported? |
| :--- | :--- |
| HTTPS | Yes |
| Google Cloud Storage | Yes |
| Amazon Web Services S3 | Yes |
| SFTP | Yes |
| Azure Blob Storage (AzBlob) | Yes |
| Local filesystem | Local use only (inaccessible for Airbyte Cloud) |

File / Stream Compression

| Compression | Supported? |
| :--- | :--- |
| Gzip | Yes |

File Formats

| Format | Supported? |
| :--- | :--- |
| CSV | Yes |
| JSON/JSONL | Yes |
| Excel Binary Workbook | Yes |

This connector does not support syncing unstructured data files such as raw text, audio, or videos.

Getting Started (Airbyte Cloud)

Setup through Airbyte Cloud is exactly the same as the open-source setup, except that local files are disabled.

Getting Started (Airbyte Open-Source)

  1. Once the File Source is selected, define the storage provider, the URL of the file, and its format.
  2. Depending on the chosen provider and the privacy of the data, you may have to configure additional options.

Provider Specific Information

  • In case of GCS, it is necessary to provide the content of the service account key file to access private buckets; see the settings of the BigQuery destination for an example of such a key.
  • In case of AWS S3, the pair of aws_access_key_id and aws_secret_access_key is necessary to access private S3 buckets.
  • In case of AzBlob, it is necessary to provide the storage_account in which the blob you want to access resides. Either a sas_token or a shared_key is necessary to access private blobs. (See the sketch after this list for how these fields fit together.)
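
As a sketch, the provider-specific fields above could be wired up as follows. The credential field names (aws_access_key_id, aws_secret_access_key, storage_account, sas_token, shared_key) come from this page; the service_account_json field name and the surrounding dict shapes are assumptions for illustration only.

```python
# Illustrative provider blocks (shapes are assumptions; values are placeholders).

gcs_provider = {
    "storage": "GCS",
    # Paste the full content of the service account key file as a string:
    "service_account_json": '{"type": "service_account", "private_key_id": "XXXXXXXX", ...}',
}

s3_provider = {
    "storage": "S3",
    "aws_access_key_id": "AKIAXXXXXXXXXXXXXXXX",   # placeholder
    "aws_secret_access_key": "<your secret key>",  # placeholder
}

azblob_provider = {
    "storage": "AzBlob",
    "storage_account": "mystorageaccount",
    "sas_token": "<sas token>",  # or use "shared_key": "<shared key>" instead
}
```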

Reader Options

The Reader in charge of loading the file format is currently based on Pandas IO Tools. It is possible to customize how the file is loaded into a Pandas DataFrame as part of this Source Connector. This is done via the reader_options, which must be valid JSON and whose available options depend on the chosen file format. See the pandas documentation for the selected format:

For example, if the CSV format is selected, options from the read_csv function are available.

  • It is therefore possible to customize the delimiter (or sep), for example setting it to \t for tab-separated files.
  • The header line can be ignored with header=0 and the column names customized with names.
  • etc.

We would therefore provide the following JSON in reader_options:

{ "sep" : "\t", "header" : 0, "names": "column1, column2"}

In case you select the JSON format, options from the read_json reader are available.

For example, you can use {"orient": "records"} to change how the orientation of the data is interpreted when loading (for data shaped like [{column -> value}, … , {column -> value}]).
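
As a sketch, that reader option corresponds to the pandas call below (the URL is hypothetical):

```python
import pandas as pd

# {"orient": "records"} expects a JSON array of row objects,
# i.e. [{column -> value}, ..., {column -> value}]:
df = pd.read_json("https://example.com/data.json", orient="records")
```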

Changing data types of source columns

Normally, Airbyte tries to infer the data type from the source, but you can use reader_options to force specific data types. If you input {"dtype":"string"}, all columns will be forced to be parsed as strings. If you only want a specific column to be parsed as a string, simply use {"dtype" : {"column name": "string"}}.
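
For instance, forcing a single column to string keeps values such as zip codes from losing their leading zeros. The snippet below shows the pandas equivalent; the column names and sample data are hypothetical.

```python
import pandas as pd
from io import StringIO

sample = StringIO("zipcode,amount\n01234,10.5\n98765,3.0\n")

# Equivalent of reader_options {"dtype": {"zipcode": "string"}}:
df = pd.read_csv(sample, dtype={"zipcode": "string"})
print(df["zipcode"].dtype)  # string  (leading zeros preserved)
print(df["amount"].dtype)   # float64 (still inferred)
```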


Here is a list of examples of possible file inputs:

| Dataset Name | Storage | URL | Reader Impl | Service Account | Description |
| :--- | :--- | :--- | :--- | :--- | :--- |
| epidemiology | HTTPS | | | | Public dataset on BigQuery |
| hr_and_financials | GCS | gs://airbyte-vault/financial.csv | smart_open or gcfs | {"type": "service_account", "private_key_id": "XXXXXXXX", ...} | Data from a private bucket; a service account is necessary |
| landsat_index | GCS | gcp-public-data-landsat/index.csv.gz | smart_open | | Using smart_open, we don't need to specify the compression (note the gs:// prefix is optional too, same for other providers) |

Examples with reader options:

| Dataset Name | Storage | URL | Reader Impl | Reader Options | Description |
| :--- | :--- | :--- | :--- | :--- | :--- |
| landsat_index | GCS | gs://gcp-public-data-landsat/index.csv.gz | GCFS | {"compression": "gzip"} | Additional reader options to pass a compression option to read_csv |
| GDELT | S3 | s3://gdelt-open-data/events/20190914.export.csv | | {"sep": "\t", "header": null} | TSV data separated by tabs, without a header row, from AWS Open Data |
| server_logs | local | /local/logs.log | | {"sep": ";"} | Requires that a local text file exists at /tmp/airbyte_local/logs.log, containing server logs delimited by ';' |

Example for SFTP:

| Dataset Name | Storage | User | Password | Host | URL | Reader Options | Description |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Test | SFTP | | | | | {"sep": "\r\n", "header": null, "names": ["text"], "engine": "python"} | We use the python engine for read_csv in order to handle a delimiter of more than one character while providing our own column names |

Please see (or add) more examples at airbyte-integrations/connectors/source-file/integration_tests/.

Performance Considerations and Notes

In order to read large files from a remote location, this connector uses the smart_open library. However, it is possible to switch to either the GCSFS or S3FS implementation, as both are natively supported by pandas. This choice is made through the optional reader_impl parameter.
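
As an illustration of the default code path, smart_open presents the remote object behind a file-like interface, so pandas never needs to know about the transport. This is a sketch with a hypothetical URL, not the connector's internal code.

```python
import pandas as pd
from smart_open import open as sopen  # the default reader implementation

# smart_open resolves the transport (HTTPS here; s3:// and gs:// schemes use
# boto3 / google-cloud-storage credentials) and decompresses .gz by extension,
# so pandas only ever sees decompressed text.
with sopen("https://example.com/logs/archive.csv.gz") as f:
    df = pd.read_csv(f)
```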

  • Note that for the local filesystem, the file has to be stored somewhere within the /tmp/airbyte_local folder, with the same limitations as the CSV Destination, so the URL should also start with /local/.
  • The JSON implementation needs to be tweaked in order to produce a more complex catalog and is still in an experimental state: simple JSON schemas should work at this point, but multiple layers of nesting may not be handled well.


Changelog

| Version | Date | Pull Request | Subject |
| :--- | :--- | :--- | :--- |
| 0.2.9 | 2022-02-01 | 9974 | Update airbyte-cdk 0.1.47 |
| 0.2.8 | 2021-12-06 | 8524 | Update connector fields title/description |
| 0.2.7 | 2021-10-28 | 7387 | Migrate source to CDK structure, add SAT testing. |
| 0.2.6 | 2021-08-26 | 5613 | Add support to xlsb format |
| 0.2.5 | 2021-07-26 | 4953 | Allow non-default port for SFTP type |
| 0.2.4 | 2021-06-09 | 3973 | Add AIRBYTE_ENTRYPOINT for Kubernetes support |
| 0.2.3 | 2021-06-01 | 3771 | Add Azure Storage Blob Files option |
| 0.2.2 | 2021-04-16 | 2883 | Fix CSV discovery memory consumption |
| 0.2.1 | 2021-04-03 | 2726 | Fix base connector versioning |
| 0.2.0 | 2021-03-09 | 2238 | Protocol allows future/unknown properties |
| 0.1.10 | 2021-02-18 | 2118 | Support JSONL format |
| 0.1.9 | 2021-02-02 | 1768 | Add test cases for all formats |
| 0.1.8 | 2021-01-27 | 1738 | Adopt connector best practices |
| 0.1.7 | 2020-12-16 | 1331 | Refactor Python base connector |
| 0.1.6 | 2020-12-08 | 1249 | Handle NaN values |
| 0.1.5 | 2020-11-30 | 1046 | Add connectors using an index YAML file |