Agent connector tutorial: Pydantic AI
In this tutorial, you'll create a new Python project with uv, add a Pydantic AI agent, equip it with one of Airbyte's agent connectors, and use natural language to explore your data. This tutorial uses GitHub, but if you don't have a GitHub account you can swap in any other agent connector and perform different operations.
Your agent executes through Airbyte. Airbyte Agents owns the OAuth apps, stores your third-party tokens, and refreshes them for you. Your Python code only ever sees your Airbyte client ID and client secret.
Overview
This tutorial is for AI engineers and other technical users who work with data and AI tools. You can complete it in about 15 minutes.
The tutorial assumes basic knowledge of the following tools, but most software engineers should be able to follow along without trouble.
- Python and package management with uv
- Pydantic AI
- GitHub, or a different third-party service you want to connect to
Before you start
Before you begin this tutorial, ensure you have the following.
- Python version 3.10 or later
- uv
- An Airbyte Agents account. You can sign up for free.
- Your Airbyte API credentials. Copy `AIRBYTE_CLIENT_ID` and `AIRBYTE_CLIENT_SECRET` from the Profile page on app.airbyte.ai. See Manage your user profile for details.
- A GitHub personal access token. A classic token with `repo` scope is sufficient.
- An OpenAI API key. This tutorial uses OpenAI, but Pydantic AI supports other LLM providers if you prefer.
Part 1: Create a new Python project
In this tutorial you initialize a basic Python project to work in. However, if you have an existing project you want to work with, feel free to use that instead.
Create a new project using uv:
```bash
uv init my-ai-agent --app
cd my-ai-agent
```
This creates a project with the following structure:
```text
my-ai-agent/
├── .gitignore
├── .python-version
├── main.py
├── pyproject.toml
└── README.md
```
You create .env and uv.lock files in later steps, so don't worry about them yet.
Part 2: Install dependencies and create your agent file
1. Install the Airbyte agent SDK, Pydantic AI, and `python-dotenv`:

   ```bash
   uv add airbyte-agent-sdk pydantic-ai python-dotenv
   ```

   This command installs:

   - `airbyte-agent-sdk`: The Airbyte Agent SDK, which ships every connector as a typed submodule.
   - `pydantic-ai`: The AI agent framework, which includes support for multiple LLM providers including OpenAI, Anthropic, and Google.
   - `python-dotenv`: A library you can use to load environment variables from a `.env` file.

   **Note:** If you want a smaller installation with only OpenAI support, you can use `pydantic-ai-slim[openai]` instead of `pydantic-ai`. See the Pydantic AI installation docs for more options.

2. Create an `agent.py` file with the following imports:

   ```python
   from dotenv import load_dotenv
   from pydantic_ai import Agent
   from airbyte_agent_sdk import connect
   from airbyte_agent_sdk.connectors.github import GithubConnector
   ```

   These imports provide:

   - `load_dotenv`: Loads environment variables from your `.env` file.
   - `Agent`: The Pydantic AI agent class that orchestrates LLM interactions and tool calls.
   - `connect`: The Airbyte agent SDK entry point. One call returns a typed connector bound to your workspace.
   - `GithubConnector`: The connector class. You reference it when decorating the tool so the SDK can describe the connector's entities and actions to the agent.
Part 3: Add a .env file with your secrets
1. Create a `.env` file in your project root and add your secrets to it. Replace the placeholder values with your actual credentials.

   ```bash
   AIRBYTE_CLIENT_ID=your-airbyte-client-id
   AIRBYTE_CLIENT_SECRET=your-airbyte-client-secret
   OPENAI_API_KEY=your-openai-api-key
   ```

   Copy `AIRBYTE_CLIENT_ID` and `AIRBYTE_CLIENT_SECRET` from the Profile page on app.airbyte.ai.

   **Warning:** Never commit your `.env` file to version control. If you do this by mistake, rotate your secrets immediately.

2. Add the following line to `agent.py` after your imports to load the environment variables:

   ```python
   load_dotenv()
   ```

   This makes your secrets available via `os.environ`. Pydantic AI automatically reads `OPENAI_API_KEY` from the environment, and the agent SDK picks up `AIRBYTE_CLIENT_ID` and `AIRBYTE_CLIENT_SECRET` from the environment in the next step.
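Optionally, you can fail fast when a variable is missing rather than hitting an authentication error later. The helper below is a plain-Python sketch, not part of the Airbyte SDK or Pydantic AI; you could drop something like it into `agent.py` right after `load_dotenv()`:

```python
import os

# Hypothetical helper (not part of the SDK): returns the names of any
# required environment variables that are missing or empty.
REQUIRED_VARS = ("AIRBYTE_CLIENT_ID", "AIRBYTE_CLIENT_SECRET", "OPENAI_API_KEY")

def missing_env_vars(required=REQUIRED_VARS):
    return [name for name in required if not os.environ.get(name)]

missing = missing_env_vars()
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```

A check like this turns a vague 401 later on into an immediate, readable error message.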
Part 4: Add the GitHub connector
Before you can query GitHub data, add a GitHub connector to your Airbyte Agents workspace. You can do this through the web app, the API, or MCP.
- Web app: Open app.airbyte.ai, click Connectors, click Add Connector, search for GitHub, and complete the authentication flow with your GitHub personal access token. See Add a connector (UI) for a full walkthrough.
- API: Send a `POST` request to create the connector programmatically. See Add a connector (API) for request examples.
- MCP: If you run Airbyte's Agent MCP, you can add a new connector from your existing agent. See Agent MCP to learn how to use the MCP server.
You only need to add the connector once. After it exists in your workspace, you can skip this step when setting up new agents.
Part 5: Configure your connector and agent
Now add the following code to agent.py to connect to the GitHub connector and create the Pydantic AI agent.
Define the connector
Connect to GitHub through your Airbyte Agents workspace:
```python
github = connect("github")
```
One line does four things for you:
- Reads `AIRBYTE_CLIENT_ID` and `AIRBYTE_CLIENT_SECRET` from the environment.
- Defaults to the `"default"` workspace, which is where the web app stores credentials unless you change it.
- Returns a typed `GithubConnector` bound to the GitHub connector in your workspace.
- Routes every `github.execute(...)` call through Airbyte's hosted API, which holds the GitHub tokens and refreshes them for you.

If you want to connect to a different workspace or pass credentials explicitly, use `connect("github", workspace_name="my-workspace", client_id=..., client_secret=...)` or pass an `AirbyteAuthConfig`. See the SDK reference for details.
Define the agent
Create a Pydantic AI agent with a system prompt that describes its purpose:
```python
agent = Agent(
    "openai:gpt-4o",
    system_prompt=(
        "You are a helpful assistant that can access GitHub data through the "
        "github_execute tool. Be concise and accurate."
    ),
)
```
- The `"openai:gpt-4o"` string specifies the model to use. You can use a different model by changing the model string: for example, use `"openai:gpt-4o-mini"` to lower costs, or see the Pydantic AI models documentation for other providers like Anthropic or Google.
- The `system_prompt` parameter is where you encode any API idiosyncrasies the model can't see in the tool schema. The Airbyte agent SDK already exposes entity names, actions, and enum values through the tool description, so the prompt only needs to carry domain constraints (pagination defaults, date formats, preferred streams) as your agent grows.
- The prompt references a `github_execute` tool. You register that tool in the next part.
Part 6: Add a tool to your agent
Rather than one tool per GitHub endpoint, the Airbyte agent SDK exposes the entire GitHub API through a single `execute(entity, action, params)` entry point. The `tool_utils` decorator fills in the entity and action catalog as the tool description, so the model knows what's available without you writing a schema.
Add the following to agent.py:
```python
@agent.tool_plain
@GithubConnector.tool_utils
async def github_execute(entity: str, action: str, params: dict | None = None):
    return await github.execute(entity, action, params or {})
```
The decorator stack is the whole tool definition: no per-action docstrings, no `GITHUB_LIST_COMMITS` or `GITHUB_GET_PR` sprawl, just one entry point that covers the full connector. `@GithubConnector.tool_utils` appends the full entity and action catalog to the tool description and caps oversized responses. As the connector grows, the tool signature stays the same.
Each `execute` call returns a structured result with `data` (the records) and `meta` (pagination cursors). Pydantic AI serializes the dict for the model automatically, so you don't need to call `json.dumps` here. You can keep the result as-is, filter it in Python, or page through it using `meta.end_cursor`.
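To make the paging pattern concrete, here is a self-contained sketch. The `fake_execute` function, its two-page fixture data, and the `"cursor"` parameter name are all illustrative stand-ins, not the real SDK's API; the part that carries over is the loop shape: read the end cursor from each response's metadata and feed it back until it's empty.

```python
import asyncio

# Illustrative stand-in for github.execute: serves two pages of fake
# records keyed by cursor. Real responses come from the Airbyte SDK, and
# the "cursor" parameter name here is a hypothetical placeholder.
PAGES = {
    None: {"data": [{"id": 1}, {"id": 2}], "meta": {"end_cursor": "abc"}},
    "abc": {"data": [{"id": 3}], "meta": {"end_cursor": None}},
}

async def fake_execute(entity, action, params):
    return PAGES[params.get("cursor")]

async def fetch_all(execute, entity, action, params=None):
    """Collect every record by feeding each end_cursor back into the call."""
    params = dict(params or {})
    records = []
    while True:
        result = await execute(entity, action, params)
        records.extend(result["data"])
        cursor = result["meta"].get("end_cursor")
        if not cursor:
            return records
        params["cursor"] = cursor

records = asyncio.run(fetch_all(fake_execute, "issues", "list"))
print(len(records))  # 3 records collected across two pages
```

In your real agent, the model usually drives pagination itself through repeated tool calls, so a loop like this is only needed if you want to pre-fetch or post-process data in plain Python.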
Part 7: Run your project
Now that your agent is configured with a tool, update main.py and run your project.
1. Update `main.py`. This creates a simple chat interface that lets your agent remember your conversation history between prompts.

   ```python
   import asyncio
   from agent import agent

   async def main():
       print("GitHub Agent Ready! Ask questions about GitHub repositories.")
       print("Type 'quit' to exit.\n")
       history = None
       while True:
           prompt = input("You: ")
           if prompt.lower() in ('quit', 'exit', 'q'):
               break
           result = await agent.run(prompt, message_history=history)
           history = result.all_messages()
           print(f"\nAgent: {result.output}\n")

   if __name__ == "__main__":
       asyncio.run(main())
   ```

2. Run the project.

   ```bash
   uv run main.py
   ```
Chat with your agent
The agent waits for your input. Once you prompt it, the agent decides which entity and action to call based on your question, asks Airbyte to execute it, and returns a natural language response. Try prompts like:
- "List the 10 most recent open issues in airbytehq/airbyte"
- "What are the 10 most recent pull requests that are still open in airbytehq/airbyte?"
- "Are there any open issues that might be fixed by a pending PR?"
The agent keeps basic message history within each session, so you can ask follow-up questions based on its responses.
Troubleshooting
If your agent fails to retrieve GitHub data, check the following:
- HTTP 401/403 errors from Airbyte: Verify that `AIRBYTE_CLIENT_ID` and `AIRBYTE_CLIENT_SECRET` are copied correctly from your Profile page on app.airbyte.ai.
- "No connector found" or "connector not configured": Make sure you added the GitHub connector to your workspace before running `main.py`. `connect("github")` defaults to the `"default"` workspace; if you created the connector in a different workspace, pass `workspace_name="your-workspace-name"` to `connect()`.
- HTTP 401/403 errors from GitHub: The GitHub token stored in your connector is invalid or missing required scopes. Verify that the token you provided when adding the connector has `repo` scope.
- Empty `data=[]` responses from filtered queries: Most GitHub filters use case-sensitive values. Confirm the agent is sending uppercase values (for example, `states=["OPEN"]` rather than `states=["open"]`). Adding this constraint to your system prompt nudges the model to get it right by default.
- OpenAI errors: Verify that your `OPENAI_API_KEY` is valid, has available credits, and isn't exceeding rate limits.
See the GitHub agent connector page for more details.
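If prompt nudges aren't enough to keep filter values uppercase, another option is to normalize known enum-style keys in your tool wrapper before the request goes out. The helper below is hypothetical, not part of the SDK, and the set of keys it touches (just `states` here) is an assumption you'd tune for your connector:

```python
# Hypothetical guard (not part of the Airbyte SDK): uppercase the values
# of known enum-style filter keys before the call reaches the connector.
ENUM_KEYS = {"states"}

def normalize_params(params):
    normalized = dict(params or {})
    for key in ENUM_KEYS:
        value = normalized.get(key)
        if isinstance(value, list):
            normalized[key] = [v.upper() if isinstance(v, str) else v for v in value]
    return normalized

print(normalize_params({"states": ["open"], "per_page": 10}))
# → {'states': ['OPEN'], 'per_page': 10}
```

You could call a helper like this on `params` inside `github_execute` before passing them to `github.execute`.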
Summary
In this tutorial, you learned how to:
- Set up a new Python project with uv
- Add Pydantic AI and Airbyte's GitHub agent connector to your project
- Configure environment variables for your Airbyte Agents credentials
- Register a single tool that covers the entire GitHub API
- Run your project and use natural language to interact with GitHub data through Airbyte
Next steps
- Learn more about the SDK: See the full SDK interface tutorial and reference documentation.
- Let your AI assistant scaffold the next agent: The Airbyte agent SDK ships skills for Claude Code and Codex that carry the patterns above, so you can ask your assistant to build a new agent quickly. See the airbyte-agent-sdk repository for installation instructions.
- Reach the same connectors from any other interface: Airbyte Agents exposes the same connectors through all of its interfaces. Since you already added a connector, you can use it anywhere you use Airbyte Agents.