Agent connector tutorial: LangChain
In this tutorial, you'll create a new Python project with uv, add a LangChain agent, equip it to use one of Airbyte's agent connectors, and use natural language to explore your data. This tutorial uses GitHub, but if you don't have a GitHub account, you can use one of Airbyte's other agent connectors and perform different operations.
Overview
This tutorial is for AI engineers and other technical users who work with data and AI tools. You can complete it in about 15 minutes.
The tutorial assumes basic knowledge of the following tools, but most software engineers can complete it without trouble.
- Python and package management with uv
- LangChain and LangGraph
- GitHub, or a different third-party service you want to connect to
Before you start
Before you begin this tutorial, ensure you have the following.
- Python version 3.13 or later
- uv
- A GitHub personal access token. For this tutorial, a classic token with the `repo` scope is sufficient.
- An OpenAI API key. This tutorial uses OpenAI, but LangChain supports other LLM providers if you prefer.
Part 1: Create a new Python project
In this tutorial, you initialize a basic Python project to work in. However, if you have an existing project you want to work with, feel free to use that instead.
Create a new project using uv:
uv init my-langchain-agent --app
cd my-langchain-agent
This creates a project with the following structure:
my-langchain-agent/
├── .gitignore
├── .python-version
├── main.py
├── pyproject.toml
└── README.md
You create .env and uv.lock files in later steps, so don't worry about them yet.
Part 2: Install dependencies
Install the GitHub connector, LangChain with OpenAI support, and LangGraph for the agent runtime:
uv add airbyte-agent-github langchain langchain-openai langgraph
This command installs:
- `airbyte-agent-github`: The Airbyte agent connector for GitHub, which provides type-safe access to GitHub's API.
- `langchain`: The LangChain framework core.
- `langchain-openai`: LangChain's OpenAI integration for chat models.
- `langgraph`: The LangGraph agent runtime, which provides a `create_react_agent` function for building tool-calling agents.
The GitHub connector also includes `python-dotenv`, which you can use to load environment variables from a `.env` file.
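After the install, the dependencies appear in your `pyproject.toml`. A rough sketch of what that section looks like (uv pins actual version constraints, so your file will differ):

```toml
[project]
name = "my-langchain-agent"
requires-python = ">=3.13"
dependencies = [
    "airbyte-agent-github",
    "langchain",
    "langchain-openai",
    "langgraph",
]
```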
Part 3: Import LangChain and the GitHub agent connector
1. Create an `agent.py` file for your agent definition:

   touch agent.py

2. Add the following imports to `agent.py`:

   import os
   import json
   from dotenv import load_dotenv
   from langchain_core.tools import tool
   from langchain_openai import ChatOpenAI
   from langgraph.prebuilt import create_react_agent
   from airbyte_agent_github import GithubConnector
   from airbyte_agent_github.models import GithubPersonalAccessTokenAuthConfig

   These imports provide:

   - `os` and `json`: Access environment variables and serialize connector results.
   - `load_dotenv`: Load environment variables from your `.env` file.
   - `tool`: LangChain's decorator for converting a function into a tool.
   - `ChatOpenAI`: LangChain's OpenAI chat model integration.
   - `create_react_agent`: LangGraph's function for creating a ReAct agent that can call tools.
   - `GithubConnector`: The Airbyte agent connector that provides type-safe access to GitHub's API.
   - `GithubPersonalAccessTokenAuthConfig`: The authentication configuration for the GitHub connector using a personal access token.
Part 4: Add a .env file with your secrets
1. Create a `.env` file in your project root and add your secrets to it. Replace the placeholder values with your actual credentials.

   GITHUB_ACCESS_TOKEN=your-github-personal-access-token
   OPENAI_API_KEY=your-openai-api-key

   Warning: Never commit your `.env` file to version control. If you do this by mistake, rotate your secrets immediately.

2. Add the following line to `agent.py` after your imports to load the environment variables:

   load_dotenv()

   This makes your secrets available via `os.environ`. LangChain's `ChatOpenAI` automatically reads `OPENAI_API_KEY` from the environment, and you'll use `os.environ["GITHUB_ACCESS_TOKEN"]` to configure the connector in the next section.
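If you're curious what `load_dotenv` does under the hood, here's a rough stdlib-only stand-in that parses simple `KEY=VALUE` lines. The real python-dotenv handles quoting, interpolation, and more; the `load_env_file` helper and the `.env.demo` file name are purely illustrative.

```python
import os

def load_env_file(path: str = ".env") -> None:
    # Minimal stand-in for python-dotenv's load_dotenv: parse KEY=VALUE
    # lines into os.environ without overwriting existing values.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demonstrate with a throwaway file.
with open(".env.demo", "w") as f:
    f.write("# secrets\nGITHUB_ACCESS_TOKEN=demo-token\nOPENAI_API_KEY=demo-key\n")

load_env_file(".env.demo")
print(os.environ["GITHUB_ACCESS_TOKEN"])  # demo-token
```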
Part 5: Configure your connector and agent
Now that your environment is set up, add the following code to agent.py to create the GitHub connector and LangChain agent.
Define the connector
Define the agent connector for GitHub. It authenticates using your personal access token.
connector = GithubConnector(
    auth_config=GithubPersonalAccessTokenAuthConfig(
        token=os.environ["GITHUB_ACCESS_TOKEN"]
    )
)
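Note that `os.environ["GITHUB_ACCESS_TOKEN"]` raises a bare `KeyError` if the variable isn't set. If you'd like a friendlier failure, you can add a small optional guard; the `require_env` helper below is a suggestion, not part of the connector.

```python
import os

def require_env(name: str) -> str:
    # Exit with a readable message instead of a bare KeyError.
    value = os.environ.get(name)
    if not value:
        raise SystemExit(f"Set {name} in your .env file before running the agent.")
    return value

os.environ["GITHUB_ACCESS_TOKEN"] = "demo"  # simulate a configured environment
print(require_env("GITHUB_ACCESS_TOKEN"))  # demo
```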
Define the tool
Create an async function that wraps the connector's execute method as a LangChain tool. The @tool decorator converts the function into a LangChain tool, and @GithubConnector.tool_utils automatically generates a comprehensive tool description from the connector's metadata. This tells the agent what entities are available (issues, pull requests, repositories, etc.), what actions it can perform on each entity, and what parameters each action requires.
@tool
@GithubConnector.tool_utils
async def github_execute(entity: str, action: str, params: dict | None = None) -> str:
    """Execute GitHub connector operations."""
    result = await connector.execute(entity, action, params or {})
    return json.dumps(result, default=str)
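The `default=str` argument matters because connector results can contain values the `json` module can't serialize natively, such as timestamps. A quick illustration with made-up issue data:

```python
import json
from datetime import datetime

# Without default=str, json.dumps raises TypeError on the datetime value;
# with it, the datetime is converted via str() instead.
result = {"title": "Fix flaky test", "created_at": datetime(2024, 5, 1, 12, 0)}
print(json.dumps(result, default=str))
# {"title": "Fix flaky test", "created_at": "2024-05-01 12:00:00"}
```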
Define the agent
Create a LangChain chat model and a LangGraph ReAct agent:
llm = ChatOpenAI(model="gpt-4o")
agent = create_react_agent(llm, [github_execute])
- `ChatOpenAI(model="gpt-4o")` creates an OpenAI chat model. You can use a different model by changing the model string. For example, use `"gpt-4o-mini"` to lower costs. LangChain also supports other providers like Anthropic and Google.
- `create_react_agent` creates a ReAct agent that reasons about which tools to call based on the user's input.
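`create_react_agent` hides the reason-act loop from you. As a mental model only, here's a toy, stdlib-only sketch of that control flow, with a fake model and fake tool standing in for the real LLM and connector; none of these names come from LangGraph.

```python
def react_loop(model, tools, messages, max_turns=5):
    # Sketch of what a ReAct agent does: ask the model, run any tool it
    # requests, append the tool result, and repeat until a final answer.
    for _ in range(max_turns):
        reply = model(messages)
        messages.append(reply)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]  # final answer, no tool requested
        result = tools[call["name"]](**call["args"])
        messages.append({"role": "tool", "content": result})
    return "Stopped after too many tool calls."

# Fake model: requests a tool once, then answers from the tool's output.
def fake_model(messages):
    if messages[-1].get("role") == "tool":
        return {"role": "assistant",
                "content": f"Found: {messages[-1]['content']}",
                "tool_call": None}
    return {"role": "assistant", "content": "",
            "tool_call": {"name": "github_execute",
                          "args": {"entity": "issues", "action": "list"}}}

tools = {"github_execute": lambda entity, action: f"3 open {entity}"}
print(react_loop(fake_model, tools, [{"role": "user", "content": "List issues"}]))
# Found: 3 open issues
```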
Part 6: Run your project
Now that your agent is configured with tools, update main.py and run your project.
1. Update `main.py`. This code creates a simple chat interface in your command line tool and allows your agent to remember your conversation history between prompts.

   import asyncio
   from agent import agent

   async def main():
       print("GitHub Agent Ready! Ask questions about GitHub repositories.")
       print("Type 'quit' to exit.\n")
       history = []
       while True:
           prompt = input("You: ")
           if prompt.lower() in ("quit", "exit", "q"):
               break
           history.append({"role": "user", "content": prompt})
           result = await agent.ainvoke({"messages": history})
           response = result["messages"][-1].content
           history = result["messages"]
           print(f"\nAgent: {response}\n")

   if __name__ == "__main__":
       asyncio.run(main())

2. Run the project.

   uv run main.py
Chat with your agent
The agent waits for your input. Once you prompt it, the agent decides which tools to call based on your question, fetches the data from GitHub, and returns a natural language response. Try prompts like:
- "List the 10 most recent open issues in airbytehq/airbyte"
- "What are the 10 most recent pull requests that are still open in airbytehq/airbyte?"
- "Are there any open issues that might be fixed by a pending PR?"
The agent keeps basic message history within each session, so you can ask follow-up questions based on its responses.
Troubleshooting
If your agent fails to retrieve GitHub data, check the following:
- HTTP 401 errors: Your `GITHUB_ACCESS_TOKEN` is invalid or expired. Generate a new token and update your `.env` file.
- HTTP 403 errors: Your `GITHUB_ACCESS_TOKEN` doesn't have the required scopes. Ensure your token has the `repo` scope for accessing repository data.
- OpenAI errors: Verify your `OPENAI_API_KEY` is valid, has available credits, and won't exceed rate limits.
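To rule out the agent itself, you can sanity-check your token with a direct call to GitHub's REST API. This stdlib sketch only builds the request; uncomment the last lines to actually send it (a valid token returns HTTP 200 from the `/user` endpoint, an invalid one 401).

```python
import urllib.request

def token_check_request(token: str) -> urllib.request.Request:
    # Build a request against GitHub's /user endpoint to verify a token.
    return urllib.request.Request(
        "https://api.github.com/user",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )

req = token_check_request("your-token-here")
print(req.get_header("Authorization"))  # Bearer your-token-here
# with urllib.request.urlopen(req) as resp:
#     print(resp.status)
```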
Summary
In this tutorial, you learned how to:
- Set up a new Python project with uv
- Add LangChain, LangGraph, and Airbyte's GitHub agent connector to your project
- Configure environment variables and authentication
- Create a LangChain tool from the GitHub connector
- Build a ReAct agent with LangGraph and use natural language to interact with GitHub data
Next steps
- Add more agent connectors to your project. Explore other agent connectors in the Airbyte agent connectors catalog to give your agent access to more services like Stripe, HubSpot, and Salesforce.
- Consider how you might expand your agent's capabilities. For example, you might want to trigger effects like sending a Slack message or an email based on the agent's findings. You aren't limited to the capabilities of Airbyte's agent connectors. You can use other libraries and integrations to build an increasingly robust agent ecosystem.