# LangGraph MCP GitHub Token Sample
This sample shows how to wrap a LangGraph ReAct-style agent augmented with an MCP (Model Context Protocol) server that requires an API key / personal access token (GitHub), and how to expose it through the Azure AI Agents Adapter so it can be called via the unified `responses` endpoint.
Compared to `mcp_simple`, this version demonstrates adding authorization headers (Bearer token) for an MCP server (GitHub) that expects a token.
## What It Does
`mcp_apikey.py`:
1. Loads environment variables from a local `.env` file.
2. Creates an Azure OpenAI chat model deployment (defaults to `gpt-4o`, override with `AZURE_OPENAI_DEPLOYMENT`).
3. Reads a GitHub access token (`GITHUB_TOKEN`). This can be a classic or fine‑grained PAT (or an OAuth access token you obtained elsewhere).
4. Constructs a `MultiServerMCPClient` pointing at the public GitHub MCP endpoint and injects the token as an `Authorization: Bearer ...` header.
5. Fetches the available MCP tools exposed by the GitHub server.
6. Builds a LangGraph ReAct agent (`create_react_agent`) with those tools.
7. Hosts the agent with `from_langgraph(...).run_async()`, making it available over HTTP (default: `http://localhost:8088`).
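Step 4 is the part that differs from `mcp_simple`: the token is injected as an `Authorization: Bearer ...` header in the per-server configuration passed to `MultiServerMCPClient`. The stdlib-only sketch below shows only that configuration shape; the endpoint URL and key names are assumptions in the style of `langchain-mcp-adapters`, so defer to `mcp_apikey.py` for the actual values:

```python
def github_mcp_server_config(token: str) -> dict:
    """Build the per-server config dict for a token-authenticated MCP server.

    The URL below is illustrative; use the endpoint configured in
    mcp_apikey.py. The Authorization header is the key point here.
    """
    if not token:
        raise ValueError("GITHUB_TOKEN is required for the GitHub MCP server")
    return {
        "url": "https://api.githubcopilot.com/mcp/",  # assumed endpoint
        "transport": "streamable_http",
        "headers": {"Authorization": f"Bearer {token}"},
    }

# Typically the token comes from the environment after load_dotenv():
# config = {"github": github_mcp_server_config(os.environ["GITHUB_TOKEN"])}
```

The same pattern extends to any MCP server that expects a static token: only the `headers` entry changes per server.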
## Folder Contents
- `mcp_apikey.py` – Main script that builds and serves the token-authenticated MCP agent.
- `.env-template` – Template for required environment variables.
- `.env` – (User created) Actual secrets/endpoint values. Not committed.
## Prerequisites
Dependencies used by `mcp_apikey.py`:
- agents_adapter[langgraph]
- python-dotenv
- langchain-mcp-adapters
Install:
```bash
pip install -e container_agents_adapter/python[langgraph]
pip install python-dotenv langchain-mcp-adapters
```
Requires Python 3.11+, an Azure OpenAI deployment, and a `GITHUB_TOKEN`.
## Environment Variables
Copy `.env-template` to `.env` and fill in values:
```
AZURE_OPENAI_API_KEY=<azure-openai-key>
AZURE_OPENAI_ENDPOINT=https://<endpoint-name>.cognitiveservices.azure.com/
OPENAI_API_VERSION=2025-03-01-preview
# Optional if your deployment name differs from gpt-4o
AZURE_OPENAI_DEPLOYMENT=<your-deployment-name>
# GitHub MCP auth (required)
GITHUB_TOKEN=<your-github-token>
```
Notes:
- `AZURE_OPENAI_DEPLOYMENT` defaults to `gpt-4o` if omitted.
- Do NOT commit `.env`.
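A missing variable typically only surfaces later as a 401, so it can help to fail fast at startup. A hypothetical helper (not part of the sample) that checks the required names after `load_dotenv()` has run:

```python
REQUIRED = (
    "AZURE_OPENAI_API_KEY",
    "AZURE_OPENAI_ENDPOINT",
    "OPENAI_API_VERSION",
    "GITHUB_TOKEN",
)

def missing_vars(env) -> list:
    """Return the required variable names that are unset or empty in `env`."""
    return [name for name in REQUIRED if not env.get(name)]

# After load_dotenv(), fail fast:
# missing = missing_vars(os.environ)
# if missing:
#     raise SystemExit("Missing env vars: " + ", ".join(missing))
```

`AZURE_OPENAI_DEPLOYMENT` is deliberately excluded because it has a default (`gpt-4o`).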
## Run the Sample
From the `mcp-apikey` folder (or from anywhere, once dependencies are installed), run:
```bash
python mcp_apikey.py
```
The adapter starts an HTTP server (default `http://localhost:8088`).
## Test the Agent
Non-streaming example:
```bash
curl -X POST http://localhost:8088/responses \
-H "Content-Type: application/json" \
-d '{
"agent": {"name": "local_agent", "type": "agent_reference"},
"stream": false,
"input": "Use ONLY the GitHub MCP tools exposed by the connected MCP server (no built-in web search, no cached data). First list the available tools and record the exact tool names returned. Then use a search tool to find repositories matching \"model context protocol\" (limit 3) and summarize the top result."
}'
```
Streaming example (server will stream delta events):
```bash
curl -N -X POST http://localhost:8088/responses \
-H "Content-Type: application/json" \
-d '{
"agent": {"name": "local_agent", "type": "agent_reference"},
"stream": true,
"input": "Use ONLY the GitHub MCP tools exposed by the connected MCP server (no built-in web search, no cached data). First list the available tools and record the exact tool names returned. Then use a search tool to find repositories matching \"model context protocol\" (limit 3) and summarize the top result."
}'
```
Alternatively, you can send the richer structured message format:
```bash
curl -X POST http://localhost:8088/responses \
-H "Content-Type: application/json" \
-d '{
"agent": {"name": "local_agent", "type": "agent_reference"},
"stream": false,
"input": [{
"type": "message",
"role": "user",
"content": [{"type": "input_text", "text": "Use ONLY the GitHub MCP tools exposed by the connected MCP server (no built-in web search, no cached data). First list the available tools and record the exact tool names returned. Then use a search tool to find repositories matching \"model context protocol\" (limit 3) and summarize the top result."}]
}]
}'
```
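All three requests share the same envelope, and building it programmatically avoids shell-escaping headaches. A small stdlib sketch whose field names mirror the curl examples above:

```python
import json

def build_request(text: str, stream: bool = False) -> dict:
    """Assemble the structured `responses` payload used in the curl examples."""
    return {
        "agent": {"name": "local_agent", "type": "agent_reference"},
        "stream": stream,
        "input": [{
            "type": "message",
            "role": "user",
            "content": [{"type": "input_text", "text": text}],
        }],
    }

body = json.dumps(build_request("List the tools exposed by the GitHub MCP server."))
# POST `body` to http://localhost:8088/responses with Content-Type: application/json
```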
## Customization Ideas
- Add additional MCP endpoints (e.g., documentation, search, custom internal tools).
- Swap `create_react_agent` for a custom LangGraph graph with memory, guardrails, or ranking.
- Integrate tracing / telemetry (LangSmith, OpenTelemetry) by adding callbacks to the model / agent.
## Troubleshooting
| Issue | Likely Cause | Fix |
|-------|--------------|-----|
| 401 from MCP server | Missing/invalid `GITHUB_TOKEN` | Regenerate PAT; ensure env var loaded |
| 401 / auth from model | Azure key/endpoint incorrect | Re-check `.env` values |
| Model not found | Deployment name mismatch | Set `AZURE_OPENAI_DEPLOYMENT` correctly |
| No tools listed | GitHub MCP endpoint changed | Verify endpoint URL & token scopes |
| Import errors | Extras not installed | Re-run dependency install |
## Related Samples
See `samples/langgraph/mcp_simple` for a no-auth MCP example and `samples/langgraph/agent_calculator` for arithmetic tooling.
---
Extend this pattern to securely integrate additional authenticated MCP servers.