# Upgrade to Helm chart V2 (Self-Managed Enterprise)
Airbyte has upgraded its Helm chart to a new version called "V2." Upgrading to Helm chart V2 is currently optional. At some future date the V2 Helm chart will become the standard, so we advise that you upgrade your existing deployment to use the new chart before the transition. If you're a new Airbyte customer, you can skip the upgrade altogether and start with the new chart.
If you're running Self-Managed Community, follow the Self-Managed Community guide instead.
## Why you should upgrade

Upgrading to the new Helm chart now has the following benefits.

- By upgrading in advance, you can schedule this upgrade for a convenient time. Avoid blocking yourself from upgrading Airbyte to a future version when the new chart is mandatory and you're busy.
- The new Helm chart doesn't require Keycloak. If you don't want to use Keycloak for authentication, or want to use generic OIDC, you must run Helm chart V2.
- The new Helm chart is more aligned with Helm's best practices for chart design.
- The new Helm chart has broader and more detailed options to customize your deployment. In most cases, it's no longer necessary to specify environment variables in your `values.yaml` file because the chart offers a more detailed interface for customization. If you do need to use environment variables, you can use fewer of them.
## Which versions can upgrade to Helm chart V2

The following versions of Airbyte can use Helm chart V2:

- Airbyte version 1.6.0 and later, if installed and managed with Helm

The following versions of Airbyte can't use Helm chart V2:

- Airbyte versions before 1.6.0
- Airbyte versions installed and managed with abctl
## Schedule time with Airbyte
Engage your Airbyte Solution Architect for help with this migration. We caution against doing it alone. Airbyte can provide guidance to help you manage uncommon customizations and verify the migration is successful.
## How to upgrade

In most cases, upgrading is straightforward. To upgrade to Helm chart V2, complete the following steps.

1. Ensure you have configured Airbyte to use an external database and external bucket storage.
2. Prepare to deploy a fresh installation of Airbyte.
3. Create a new `values.yaml` file.
4. Deploy a new version of Airbyte using your new `values.yaml` file and the new Helm chart version.
### Configure an external database and bucket storage

Airbyte's solutions team guides Enterprise customers through configuring their own external database and external bucket storage, as explained in the implementation guide. You almost certainly have these in place already, but verify that you do before upgrading.
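If you keep those credentials in a Kubernetes secret, the secret might look something like the sketch below. The secret name (`airbyte-config-secrets`) and key names are illustrative; whatever names you use here must match the `secretName` and `*SecretKey` values you set later in `values.yaml`.

```yaml
# Illustrative secret holding external database and storage credentials.
# Create it in the namespace where you plan to deploy Airbyte.
apiVersion: v1
kind: Secret
metadata:
  name: airbyte-config-secrets
type: Opaque
stringData:
  database-user: airbyte
  database-password: replace-with-your-database-password
  s3-access-key-id: replace-with-your-access-key-id
  s3-secret-access-key: replace-with-your-secret-access-key
```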
### Prepare a new namespace for Airbyte
When moving to Helm chart V2, deploy Airbyte with a new namespace and use a fresh values and secrets file. It is possible to do a straight upgrade, but different Airbyte users have different and sometimes complex configurations that could produce unique and unexpected situations during the upgrade. By doing a fresh install, you create a separate environment that's easier to troubleshoot if something in your values or secrets files acts unexpectedly.
```bash
kubectl create namespace airbyte-v2
```
### Add and index the repo

Helm chart V2 lives in a different repository than the V1 chart. In your command line tool, add this repo and index it.

```bash
helm repo add airbyte-v2 https://airbytehq.github.io/charts
helm repo update
```

You can browse all charts uploaded to your repository.

```bash
helm search repo airbyte-v2
```
### Update your values.yaml file

In most cases, the adjustments to `values.yaml` are small and involve changing keys and moving sections. This section walks you through the main updates you need to make. If you already know what to do, see the Values.yaml reference for the full V1 and V2 interfaces.

Airbyte recommends approaching this project in the following way:

1. Note the customizations in your V1 `values.yaml` file to ensure you don't forget anything.
2. Start with a basic V2 `values.yaml` to verify that it works. Then map your V1 settings to V2, transferring one set of configurations at a time.
3. Don't test in production.

Follow the steps below to start generating your `values.yaml`.
#### Create a `values.yaml` file and a global configuration

Create a new `values.yaml` file on your machine. In that file, create your basic global configuration.

```yaml
global:
  edition: enterprise
  enterprise:
    secretName: "" # Secret name where an Airbyte license key is stored
    licenseKeySecretKey: "" # The key within `licenseKeySecretName` where the Airbyte license key is stored
  airbyteUrl: "" # The URL where Airbyte will be reached; this should match your Ingress host
```
Optional: deploy Airbyte before you add additional configurations. If there are issues with your deployment, troubleshooting them is easier before you integrate additional services.
#### Add auth and single sign on

You can implement single sign on (SSO) with OIDC or the new generic OIDC option. For more help, see Single sign on (SSO). Use one of the following configurations, depending on which option you choose.

OIDC:
```yaml
global:
  auth:
    enabled: false # Set to false if you're using SSO
    # -- Admin user configuration
    instanceAdmin:
      firstName: ""
      lastName: ""
      emailSecretKey: "" # The key within `emailSecretName` where the initial user's email is stored
      passwordSecretKey: "" # The key within `passwordSecretName` where the initial user's password is stored
    # -- SSO Identity Provider configuration (requires Enterprise)
    identityProvider:
      secretName: "" # Secret name where the OIDC configuration is stored
      type: "oidc"
      oidc:
        # -- OIDC application domain
        domain: ""
        # -- OIDC application name
        appName: ""
        # -- The key within `clientIdSecretName` where the OIDC client id is stored
        clientIdSecretKey: ""
        # -- The key within `clientSecretSecretName` where the OIDC client secret is stored
        clientSecretSecretKey: ""
```
Generic OIDC:

```yaml
global:
  auth:
    enabled: false # Set to false if you're using SSO
    # -- Admin user configuration
    instanceAdmin:
      firstName: ""
      lastName: ""
      emailSecretKey: "" # The key within `emailSecretName` where the initial user's email is stored
      passwordSecretKey: "" # The key within `passwordSecretName` where the initial user's password is stored
    # -- SSO Identity Provider configuration (requires Enterprise)
    identityProvider:
      secretName: "" # Secret name where the OIDC configuration is stored
      type: "generic-oidc"
      genericOidc:
        clientId: ""
        audience: ""
        issuer: ""
        endpoints:
          authorizationServerEndpoint: ""
          jwksEndpoint: ""
        fields:
          subject: sub
          email: email
          name: name
          issuer: iss
```
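In the standard OIDC configuration, `clientIdSecretKey` and `clientSecretSecretKey` are key names inside the secret referenced by `identityProvider.secretName`. As a sketch, assuming a secret named `airbyte-config-secrets` with keys `client-id` and `client-secret`, that secret might look like this:

```yaml
# Illustrative secret referenced by identityProvider.secretName.
# The name and key names are examples; match them to the values above.
apiVersion: v1
kind: Secret
metadata:
  name: airbyte-config-secrets
type: Opaque
stringData:
  client-id: replace-with-your-oidc-client-id
  client-secret: replace-with-your-oidc-client-secret
```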
#### Add your database

Disable Airbyte's default Postgres database and add your own. The main difference in Helm chart V2 is that the `global.database.database` key has changed to `global.database.name`.
```yaml
global:
  database:
    # -- Secret name where database credentials are stored
    secretName: "" # e.g. "airbyte-config-secrets"
    # -- The database host
    host: ""
    # -- The database port
    port:
    # -- The database name - this key used to be "database" in Helm chart 1.0
    name: ""
    # Use EITHER user or userSecretKey, but not both
    # -- The database user
    user: ""
    # -- The key within `secretName` where the user is stored
    userSecretKey: "" # e.g. "database-user"
    # Use EITHER password or passwordSecretKey, but not both
    # -- The database password
    password: ""
    # -- The key within `secretName` where the password is stored
    passwordSecretKey: "" # e.g. "database-password"

postgresql:
  enabled: false
```
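As an illustration, a configuration that points at an external Postgres instance and pulls credentials from a secret might look like the following sketch. The host, database name, and key names are examples only.

```yaml
global:
  database:
    secretName: airbyte-config-secrets    # example secret name
    host: airbyte-db.internal.example.com # example external Postgres host
    port: 5432
    name: airbyte                         # this key was `database` in Helm chart V1
    userSecretKey: database-user          # example key within the secret
    passwordSecretKey: database-password  # example key within the secret

postgresql:
  enabled: false
```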
#### Add external log storage

```yaml
global:
  storage:
    secretName: ""
    type: minio # Default storage is minio. Set to s3, gcs, or azure, according to what you use.
    bucket:
      log: airbyte-bucket
      state: airbyte-bucket
      workloadOutput: airbyte-bucket
      activityPayload: airbyte-bucket

    # Set ONE OF the following storage types, according to your specification above.

    # S3
    s3:
      region: "" ## e.g. us-east-1
      authenticationType: credentials ## Use "credentials" or "instanceProfile"
      accessKeyId: ""
      secretAccessKey: ""

    # GCS
    gcs:
      projectId: <project-id>
      credentialsJson: <base64-encoded>
      credentialsJsonPath: /secrets/gcs-log-creds/gcp.json

    # Azure
    azure:
      # One of the following: connectionString, connectionStringSecretKey
      connectionString: <azure storage connection string>
      connectionStringSecretKey: <secret coordinate containing an existing connection-string secret>
```
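For example, a deployment that stores logs and state in S3 and authenticates with an instance profile might look like the following sketch; the bucket name and region are placeholders.

```yaml
global:
  storage:
    type: s3
    bucket:
      log: my-airbyte-bucket
      state: my-airbyte-bucket
      workloadOutput: my-airbyte-bucket
      activityPayload: my-airbyte-bucket
    s3:
      region: us-east-1
      authenticationType: instanceProfile # no static credentials needed
```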
#### Add external connector secret management

```yaml
global:
  secretsManager:
    enabled: false
    type: "" # One of: VAULT, GOOGLE_SECRET_MANAGER, AWS_SECRET_MANAGER, AZURE_KEY_VAULT, TESTING_CONFIG_DB_TABLE
    secretName: "airbyte-config-secrets"

    # Set ONE OF the following groups of configurations, based on your configuration in global.secretsManager.type.

    awsSecretManager:
      region: <aws-region>
      authenticationType: credentials ## Use "credentials" or "instanceProfile"
      tags: ## Optional - You may add tags to new secrets created by Airbyte.
        - key: ## e.g. team
          value: ## e.g. deployments
        - key: business-unit
          value: engineering
      kms: ## Optional - ARN for KMS decryption.

    # OR
    googleSecretManager:
      projectId: <project-id>
      credentialsSecretKey: gcp.json

    # OR
    azureKeyVault:
      tenantId: ""
      vaultUrl: ""
      clientId: ""
      clientIdSecretKey: ""
      clientSecret: ""
      clientSecretSecretKey: ""
      tags: ""

    # OR
    vault:
      address: ""
      prefix: ""
      authToken: ""
      authTokenSecretKey: ""
```
#### Add audit logging (version 1.7 or later)

If you're using version 1.7 or later, you can enable audit logging. Unlike Helm chart V1, it's no longer necessary to specify environment variables. For more help with audit logging, see Audit logging.

```yaml
server:
  auditLoggingEnabled: true

global:
  storage:
    bucket:
      auditLogging: your-audit-logging-bucket-name-here
```
#### Update syntax for other customizations

If you have further customizations in your V1 `values.yaml` file, move those over to your new `values.yaml` file and update key names where appropriate.

- Change hyphenated V1 keys to camel case in V2. For example, when copying over `workload-launcher`, change it to `workloadLauncher` (see the sketch after this list).
- Some keys have different names. For example, `orchestrator` is `containerOrchestrator` in V2.
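For example, a V1 block keyed by the hyphenated service name carries over to a camel-case key in V2; the `nodeSelector` value here is only an illustration.

```yaml
# Helm chart V1
workload-launcher:
  nodeSelector:
    type: static

# Helm chart V2
workloadLauncher:
  nodeSelector:
    type: static
```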
Here is the full list of changes.
Helm chart V1 | Helm chart V2 |
---|---|
global.database.database | global.database.name |
workload-launcher | workloadLauncher |
airbyte-bootloader | airbyteBootloader |
orchestrator | containerOrchestrator |
workload-launcher.extraEnvs[JOB_KUBE_NODE_SELECTORS] | global.jobs.kube.nodeSelector |
workload-launcher.extraEnvs[CHECK_JOB_KUBE_NODE_SELECTORS] | global.jobs.kube.scheduling.check.nodeSelectors |
workload-launcher.extraEnvs[DISCOVER_JOB_KUBE_NODE_SELECTORS] | global.jobs.kube.scheduling.discover.nodeSelectors |
worker.extraEnvs[MAX_SYNC_WORKERS] | worker.maxSyncWorkers |
worker.extraEnvs[MAX_CHECK_WORKERS] | worker.maxCheckWorkers |
server.extraEnvs[HTTP_IDLE_TIMEOUT] | server.httpIdleTimeout |
global.env_vars[TRACKING_STRATEGY] | global.tracking.strategy |
server.env_vars[AUDIT_LOGGING_ENABLED] | server.auditLoggingEnabled |
global.env_vars[STORAGE_BUCKET_AUDIT_LOGGING] | global.storage.bucket.auditLogging |
global.env_vars[JOB_MAIN_CONTAINER_CPU_REQUEST] | global.workloads.resources.mainContainer.cpu.request |
orchestrator.nodeSelector | global.jobs.kube.nodeSelector |
Individual bucket env vars (S3_LOG_BUCKET, GCS_LOG_BUCKET, etc.) | global.storage.bucket.log |
STORAGE_BUCKET_STATE | global.storage.bucket.state |
STORAGE_BUCKET_WORKLOAD_OUTPUT | global.storage.bucket.workloadOutput |
STORAGE_BUCKET_ACTIVITY_PAYLOAD | global.storage.bucket.activityPayload |
#### Convert extraEnv variables

In previous versions of your `values.yaml` file, you might have specified a number of environment variables through `extraEnv`. Many (but not all) of these variables have a dedicated interface in Helm chart V2. For example, look at the following configuration, which tells `workload-launcher` to run pods in the `jobs` node group.
```yaml
workload-launcher:
  nodeSelector:
    type: static
  ## Pods spun up by the workload launcher will run in the 'jobs' node group.
  extraEnv:
    - name: JOB_KUBE_NODE_SELECTORS
      value: type=jobs
    - name: SPEC_JOB_KUBE_NODE_SELECTORS
      value: type=jobs
    - name: CHECK_JOB_KUBE_NODE_SELECTORS
      value: type=jobs
    - name: DISCOVER_JOB_KUBE_NODE_SELECTORS
      value: type=jobs
```
You can specify these values directly without using environment variables, achieving the same effect.
```yaml
global:
  jobs:
    kube:
      nodeSelector:
        type: jobs
      scheduling:
        check:
          nodeSelectors:
            type: jobs
        discover:
          nodeSelectors:
            type: jobs
        spec:
          nodeSelectors:
            type: jobs

workloadLauncher:
  nodeSelector:
    type: static
```
Here is a complete list of environment variables with their Helm chart V2 equivalents. Some environment variables don't have direct V2 equivalents, so you can set these using the `extraEnvs` configuration in the appropriate service section, as shown in the sketch after the table.
Environment variable | Helm chart V2 equivalent |
---|---|
Core | |
AIRBYTE_VERSION | global.version |
AIRBYTE_EDITION | global.edition |
AIRBYTE_CLUSTER_TYPE | global.cluster.type |
AIRBYTE_CLUSTER_NAME | global.cluster.name |
AIRBYTE_URL | global.airbyteUrl |
AIRBYTE_API_HOST | global.api.host |
AIRBYTE_API_AUTH_HEADER_NAME | global.api.authHeaderName |
AIRBYTE_API_AUTH_HEADER_VALUE | global.api.authHeaderValue |
AIRBYTE_SERVER_HOST | global.server.host |
API_AUTHORIZATION_ENABLED | global.auth.enabled |
CONNECTOR_BUILDER_SERVER_API_HOST | global.connectorBuilderServer.apiHost |
DEPLOYMENT_ENV | global.deploymentEnv |
INTERNAL_API_HOST | global.api.internalHost |
LOCAL | global.local |
WEBAPP_URL | global.webapp.url |
SPEC_CACHE_BUCKET | Use extraEnvs |
Secrets | |
SECRET_PERSISTENCE | global.secretsManager.type |
SECRET_STORE_GCP_PROJECT_ID | global.secretsManager.googleSecretManager.projectId |
SECRET_STORE_GCP_CREDENTIALS | global.secretsManager.googleSecretManager.credentials |
VAULT_ADDRESS | global.secretsManager.vault.address |
VAULT_PREFIX | global.secretsManager.vault.prefix |
VAULT_AUTH_TOKEN | global.secretsManager.vault.token |
VAULT_AUTH_METHOD | global.secretsManager.vault.authMethod |
AWS_ACCESS_KEY | global.aws.accessKeyId |
AWS_SECRET_ACCESS_KEY | global.aws.secretAccessKey |
AWS_KMS_KEY_ARN | global.secretsManager.awsSecretManager.kmsKeyArn |
AWS_SECRET_MANAGER_SECRET_TAGS | global.secretsManager.awsSecretManager.tags |
AWS_ASSUME_ROLE_ACCESS_KEY_ID | global.aws.assumeRole.accessKeyId |
Database | |
DATABASE_USER | global.database.user |
DATABASE_PASSWORD | global.database.password |
DATABASE_URL | global.database.url |
DATABASE_HOST | global.database.host |
DATABASE_PORT | global.database.port |
DATABASE_DB | global.database.name |
JOBS_DATABASE_INITIALIZATION_TIMEOUT_MS | global.database.initializationTimeoutMs |
CONFIG_DATABASE_USER | global.database.user |
CONFIG_DATABASE_PASSWORD | global.database.password |
CONFIG_DATABASE_URL | global.database.url |
CONFIG_DATABASE_INITIALIZATION_TIMEOUT_MS | global.database.initializationTimeoutMs |
RUN_DATABASE_MIGRATION_ON_STARTUP | global.migrations.runAtStartup |
USE_CLOUD_SQL_PROXY | global.cloudSqlProxy.enabled |
Airbyte Services | |
TEMPORAL_HOST | temporal.host |
Jobs | |
SYNC_JOB_MAX_ATTEMPTS | Use extraEnvs |
SYNC_JOB_RETRIES_COMPLETE_FAILURES_MAX_SUCCESSIVE | Use extraEnvs |
SYNC_JOB_RETRIES_COMPLETE_FAILURES_MAX_TOTAL | Use extraEnvs |
SYNC_JOB_RETRIES_COMPLETE_FAILURES_BACKOFF_MIN_INTERVAL_S | Use extraEnvs |
SYNC_JOB_RETRIES_COMPLETE_FAILURES_BACKOFF_MAX_INTERVAL_S | Use extraEnvs |
SYNC_JOB_RETRIES_COMPLETE_FAILURES_BACKOFF_BASE | Use extraEnvs |
SYNC_JOB_RETRIES_PARTIAL_FAILURES_MAX_SUCCESSIVE | Use extraEnvs |
SYNC_JOB_RETRIES_PARTIAL_FAILURES_MAX_TOTAL | Use extraEnvs |
SYNC_JOB_MAX_TIMEOUT_DAYS | Use extraEnvs |
JOB_MAIN_CONTAINER_CPU_REQUEST | global.workloads.resources.mainContainer.cpu.request |
JOB_MAIN_CONTAINER_CPU_LIMIT | global.workloads.resources.mainContainer.cpu.limit |
JOB_MAIN_CONTAINER_MEMORY_REQUEST | global.workloads.resources.mainContainer.memory.request |
JOB_MAIN_CONTAINER_MEMORY_LIMIT | global.workloads.resources.mainContainer.memory.limit |
JOB_KUBE_TOLERATIONS | global.jobs.kube.tolerations |
JOB_KUBE_NODE_SELECTORS | global.jobs.kube.nodeSelector |
JOB_KUBE_ANNOTATIONS | global.jobs.kube.annotations |
JOB_KUBE_MAIN_CONTAINER_IMAGE_PULL_POLICY | global.jobs.kube.mainContainerImagePullPolicy |
JOB_KUBE_MAIN_CONTAINER_IMAGE_PULL_SECRET | global.jobs.kube.mainContainerImagePullSecret |
JOB_KUBE_SIDECAR_CONTAINER_IMAGE_PULL_POLICY | global.jobs.kube.sidecarContainerImagePullPolicy |
JOB_KUBE_SOCAT_IMAGE | global.jobs.kube.images.socat |
JOB_KUBE_BUSYBOX_IMAGE | global.jobs.kube.images.busybox |
JOB_KUBE_CURL_IMAGE | global.jobs.kube.images.curl |
JOB_KUBE_NAMESPACE | global.jobs.kube.namespace |
JOB_KUBE_SERVICEACCOUNT | global.jobs.kube.serviceAccount |
Jobs-specific | |
SPEC_JOB_KUBE_NODE_SELECTORS | global.jobs.kube.scheduling.spec.nodeSelectors |
CHECK_JOB_KUBE_NODE_SELECTORS | global.jobs.kube.scheduling.check.nodeSelectors |
DISCOVER_JOB_KUBE_NODE_SELECTORS | global.jobs.kube.scheduling.discover.nodeSelectors |
SPEC_JOB_KUBE_ANNOTATIONS | global.jobs.kube.scheduling.spec.annotations |
CHECK_JOB_KUBE_ANNOTATIONS | global.jobs.kube.scheduling.check.annotations |
DISCOVER_JOB_KUBE_ANNOTATIONS | global.jobs.kube.scheduling.discover.annotations |
Connections | |
MAX_FIELDS_PER_CONNECTION | Use extraEnvs |
MAX_DAYS_OF_ONLY_FAILED_JOBS_BEFORE_CONNECTION_DISABLE | Use extraEnvs |
MAX_FAILED_JOBS_IN_A_ROW_BEFORE_CONNECTION_DISABLE | Use extraEnvs |
Logging | |
LOG_LEVEL | global.logging.level |
GCS_LOG_BUCKET | global.storage.gcs.bucket |
S3_BUCKET | global.storage.s3.bucket |
S3_REGION | global.storage.s3.region |
S3_AWS_KEY | global.storage.s3.accessKeyId |
S3_AWS_SECRET | global.storage.s3.secretAccessKey |
S3_MINIO_ENDPOINT | global.storage.minio.endpoint |
S3_PATH_STYLE_ACCESS | global.storage.s3.pathStyleAccess |
Monitoring | |
PUBLISH_METRICS | global.metrics.enabled |
METRIC_CLIENT | global.metrics.client |
DD_AGENT_HOST | global.datadog.agentHost |
DD_AGENT_PORT | global.datadog.agentPort |
OTEL_COLLECTOR_ENDPOINT | global.metrics.otel.exporter.endpoint |
MICROMETER_METRICS_ENABLED | global.metrics.enabled |
Worker | |
MAX_CHECK_WORKERS | worker.maxCheckWorkers |
MAX_SYNC_WORKERS | worker.maxSyncWorkers |
TEMPORAL_WORKER_PORTS | worker.temporalWorkerPorts |
DISCOVER_REFRESH_WINDOW_MINUTES | Use extraEnvs |
Launcher | |
WORKLOAD_LAUNCHER_PARALLELISM | workloadLauncher.parallelism |
Data Retention | |
TEMPORAL_HISTORY_RETENTION_IN_DAYS | Use extraEnvs |
Server | |
AUDIT_LOGGING_ENABLED | server.auditLoggingEnabled |
STORAGE_BUCKET_AUDIT_LOGGING | server.auditLoggingBucket |
HTTP_IDLE_TIMEOUT | server.httpIdleTimeout |
READ_TIMEOUT | Use extraEnvs |
Authentication | |
AB_INSTANCE_ADMIN_PASSWORD | global.auth.instanceAdmin.password |
AB_AUTH_SECRET_CREATION_ENABLED | global.auth.secretCreationEnabled |
AB_KUBERNETES_SECRET_NAME | global.auth.managedSecretName |
AB_INSTANCE_ADMIN_CLIENT_ID | global.auth.instanceAdmin.clientId |
AB_INSTANCE_ADMIN_CLIENT_SECRET | global.auth.instanceAdmin.clientSecret |
AB_JWT_SIGNATURE_SECRET | global.auth.security.jwtSignatureSecret |
AB_COOKIE_SECURE | global.auth.security.cookieSecureSetting |
INITIAL_USER_FIRST_NAME | global.auth.instanceAdmin.firstName |
INITIAL_USER_LAST_NAME | global.auth.instanceAdmin.lastName |
INITIAL_USER_EMAIL | global.auth.instanceAdmin.email |
INITIAL_USER_PASSWORD | global.auth.instanceAdmin.password |
Tracking | |
TRACKING_ENABLED | global.tracking.enabled |
TRACKING_STRATEGY | global.tracking.strategy |
Enterprise | |
AIRBYTE_LICENSE_KEY | global.enterprise.licenseKey |
Feature Flags | |
FEATURE_FLAG_CLIENT | global.featureFlags.client |
LAUNCHDARKLY_KEY | global.featureFlags.launchDarkly.sdkKey |
Java | |
JAVA_TOOL_OPTIONS | global.java.opts |
Temporal | |
AUTO_SETUP | temporal.autoSetup |
TEMPORAL_CLI_ADDRESS | global.temporal.cli.address |
TEMPORAL_CLOUD_ENABLED | global.temporal.cloud.enabled |
TEMPORAL_CLOUD_HOST | global.temporal.cloud.host |
TEMPORAL_CLOUD_NAMESPACE | global.temporal.cloud.namespace |
TEMPORAL_CLOUD_CLIENT_CERT | global.temporal.cloud.clientCert |
TEMPORAL_CLOUD_CLIENT_KEY | global.temporal.cloud.clientKey |
Container Orchestrator | |
CONTAINER_ORCHESTRATOR_SECRET_NAME | global.workloads.containerOrchestrator.secretName |
CONTAINER_ORCHESTRATOR_SECRET_MOUNT_PATH | global.workloads.containerOrchestrator.secretMountPath |
CONTAINER_ORCHESTRATOR_DATA_PLANE_CREDS_SECRET_NAME | global.workloads.containerOrchestrator.dataPlane.credentialsSecretName |
CONTAINER_ORCHESTRATOR_IMAGE | global.workloads.containerOrchestrator.image |
Workload Launcher | |
WORKLOAD_LAUNCHER_PARALLELISM | workloadLauncher.parallelism |
CONNECTOR_PROFILER_IMAGE | workloadLauncher.connectorProfiler.image |
WORKLOAD_INIT_IMAGE | workloadLauncher.workloadInit.image |
Connector Registry | |
CONNECTOR_REGISTRY_SEED_PROVIDER | global.connectorRegistry.seedProvider |
CONNECTOR_REGISTRY_BASE_URL | global.connectorRegistry.baseUrl |
AI Assist | |
AI_ASSIST_URL_BASE | connectorBuilderServer.aiAssistUrlBase |
AI_ASSIST_API_KEY | connectorBuilderServer.aiAssistApiKey |
Connector Rollout | |
CONNECTOR_ROLLOUT_EXPIRATION_SECONDS | global.connectorRollout.expirationSeconds |
CONNECTOR_ROLLOUT_PARALLELISM | global.connectorRollout.parallelism |
CONNECTOR_ROLLOUT_GITHUB_AIRBYTE_PAT | connectorRolloutWorker.githubToken |
Customer.io | |
CUSTOMERIO_API_KEY | global.customerio.apiKey |
Shopify | |
SHOPIFY_CLIENT_ID | global.shopify.clientId |
SHOPIFY_CLIENT_SECRET | global.shopify.clientSecret |
Keycloak | |
KEYCLOAK_ADMIN_USER | keycloak.auth.adminUsername |
KEYCLOAK_ADMIN_PASSWORD | keycloak.auth.adminPassword |
KEYCLOAK_ADMIN_REALM | keycloak.auth.adminRealm |
KEYCLOAK_INTERNAL_REALM_ISSUER | keycloak.realmIssuer |
MinIO | |
MINIO_ROOT_USER | minio.rootUser |
MINIO_ROOT_PASSWORD | minio.rootPassword |
Micronaut | |
MICRONAUT_ENVIRONMENTS | global.micronaut.environments |
Topology | |
NODE_SELECTOR_LABEL | global.topology.nodeSelectorLabel |
QUICK_JOBS_NODE_SELECTOR_LABEL | global.topology.quickJobsNodeSelectorLabel |
Workloads | |
CONNECTOR_SPECIFIC_RESOURCE_DEFAULTS_ENABLED | global.workloads.resources.useConnectorResourceDefaults |
DATA_CHECK_TASK_QUEUES | global.workloads.queues.check |
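For the variables marked "Use extraEnvs" above, you can still set them as plain environment variables on the relevant service. The following is a hedged sketch, assuming the V2 chart accepts an `extraEnvs` list on each service as the table indicates; the variable, service, and value are illustrative.

```yaml
worker:
  extraEnvs:
    # A jobs setting with no dedicated V2 key, kept as an environment variable.
    - name: SYNC_JOB_MAX_ATTEMPTS
      value: "3"
```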
### Deploy Airbyte

Here is an example of how to deploy version 1.7.0 of Airbyte using the latest Helm chart V2 values. In V1, the Helm chart version is normally identical to the Airbyte version. Because Helm chart V2 is currently optional, the Helm chart and Airbyte have different, but compatible, versions.
```bash
# --namespace:       target Kubernetes namespace
# --values:          custom configuration values
# --version:         Helm chart version to use
# global.image.tag:  Airbyte version to use
helm install airbyte-enterprise airbyte-v2/airbyte \
  --namespace airbyte-v2 \
  --values ./values.yaml \
  --version 2.0.3 \
  --set global.image.tag=1.7.0
```