This project demonstrates how to build AI agents that can interact with real-world APIs using the Model Context Protocol (MCP). It features a complete burger ordering system with a serverless API, web interfaces, and an MCP server that enables AI agents to browse menus, place orders, and track order status. The agent uses LangChain.js to handle LLM reasoning and tool calling. The system consists of multiple interconnected services, as detailed in the Architecture section below.
You can test this application locally without deploying anything or incurring any cloud costs. The MCP server also works with popular AI tools like GitHub Copilot, Claude, and other MCP-compatible clients. Follow the instructions in the Local Development section to get started.
Key features
LangChain.js agent with tool calling via MCP (Streamable HTTP transport)
100% serverless architecture for cost-effective scaling
Single-command deployment using Infrastructure as Code (IaC)
Architecture
Building AI applications can be complex and time-consuming, but LangChain.js and Azure serverless technologies greatly simplify the process. This application is an AI agent that can be accessed through different interfaces (web app, CLI) and that calls tools through MCP to interact with a burger ordering API.
The application is made from these main components:
There are multiple ways to get started with this project. The quickest way is to use GitHub Codespaces that provides a preconfigured environment for you. Alternatively, you can set up your local environment following the instructions below.
Use GitHub Codespaces
You can run this project directly in your browser by using GitHub Codespaces, which will open a web-based VS Code:
A similar option to Codespaces is VS Code Dev Containers, which opens the project in your local VS Code instance using the Dev Containers extension.
You will also need to have Docker installed on your machine to run the container.
Open a terminal and navigate to the root of the project
Authenticate with Azure by running azd auth login
Run azd up to deploy the application to Azure. This will provision Azure resources and deploy all services
You will be prompted to select a base location for the resources
The deployment process will take a few minutes
Once deployment is complete, you'll see the URLs of all deployed services in the terminal.
Important
If you're using an Azure for Students or Free Trial account that you just created, you need to run the following command before running azd up or else the deployment will fail:
azd env set AZURE_OPENAI_MODEL_CAPACITY 1
This ensures that the deployed resources fit within the free tier limits. This limitation reduces the available capacity for AI model usage, so you'll also have to provide another OpenAI endpoint to use the application properly. To do that, use the following commands to set the OpenAI endpoint, API key, and model you want to use:
azd env set AZURE_OPENAI_ALT_ENDPOINT <your_openai_endpoint>
azd env set AZURE_OPENAI_API_KEY <your_openai_api_key>
azd env set AZURE_OPENAI_MODEL <your_openai_model>
For example, you can use the free GitHub Models for this, or any other OpenAI-compatible endpoint.
Cost estimation
Pricing varies per region and usage, so it isn't possible to predict exact costs for your usage. However, you can use the Azure pricing calculator with pre-configured estimations to get an idea of the costs.
Clean up resources
To clean up all the Azure resources created by this sample:
azd down --purge
Use Ollama for local development
If you have a machine with enough resources, you can run this sample entirely locally without using any cloud resources. To do that, first install Ollama and then run the following command to download a model on your machine:
ollama pull qwen3:8b
Tip
If you have a powerful machine with enough memory, you can try using gpt-oss:20b instead to get better results.
After that, create a .env file in the root of the project with the following content:
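The exact contents depend on how the project reads its configuration; the variable name below is an assumption for illustration, pointing the app at the local Ollama model pulled above:

```
# Hypothetical variable name -- check the project's documentation
# for the actual configuration keys.
OLLAMA_MODEL=qwen3:8b
```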
You can now continue to the run locally section to start the application.
Note
Local models may not work as well as cloud-hosted models, and may even fail to respond for some complex requests. This is a limitation of the local models and not of the application itself.
Run locally
After setting up your environment and either provisioning the Azure resources or setting up Ollama, you can run the entire application locally:
# Install dependencies for all services
npm install
# Start all services locally
npm start
Starting the different services may take some time; wait until you see the following message in the terminal: All services ready
When running locally without having deployed the application, the servers will use in-memory storage, so any data will be lost when you stop the servers.
After a successful deployment, the servers will use Azure Cosmos DB for persistent storage.
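This fallback can be pictured as a small storage abstraction. The sketch below is illustrative only; the interface, class, and environment variable names are assumptions, not the project's actual code:

```typescript
// Illustrative sketch: an API service falling back to in-memory
// storage when no Cosmos DB connection string is configured.
// All names here are assumptions, not the project's real code.

interface OrderStore {
  save(id: string, order: object): Promise<void>;
  get(id: string): Promise<object | undefined>;
}

// Used when running locally without a deployment: data lives in a Map
// and is lost when the process stops.
class InMemoryOrderStore implements OrderStore {
  private orders = new Map<string, object>();
  async save(id: string, order: object): Promise<void> {
    this.orders.set(id, order);
  }
  async get(id: string): Promise<object | undefined> {
    return this.orders.get(id);
  }
}

// A real deployment would return a Cosmos DB-backed store here instead.
function createStore(cosmosConnectionString?: string): OrderStore {
  if (!cosmosConnectionString) {
    return new InMemoryOrderStore();
  }
  throw new Error("Cosmos DB store not shown in this sketch");
}

const store = createStore(process.env.COSMOS_CONNECTION_STRING);
```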
You can then open the Agent web app and ask things like:
What spicy burgers do you have?
Order two Classic Cheeseburgers with extra bacon.
Show my recent orders
The agent will decide which MCP tool(s) to call, then come up with a response.
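In the real project this decision is delegated to the LLM through LangChain.js tool calling; the toy sketch below is entirely hypothetical, but shows the shape of that loop (a model picks a tool, the runtime invokes it, and the result feeds the answer):

```typescript
// Toy sketch of an agent tool-calling loop (hypothetical, not the
// project's code). The real app uses LangChain.js with an LLM to
// choose tools exposed by the Burger MCP server.

type ToolCall = { name: string; args: Record<string, unknown> };

// Stand-in for the LLM's tool-selection step.
function fakeModel(prompt: string): ToolCall {
  if (/spicy|menu|burger/i.test(prompt)) return { name: "get_burgers", args: {} };
  if (/order/i.test(prompt)) return { name: "get_orders", args: {} };
  return { name: "get_toppings", args: {} };
}

// Stand-in for the MCP client: the real tools live on the MCP server.
const tools: Record<string, (args: Record<string, unknown>) => string> = {
  get_burgers: () => JSON.stringify([{ id: 1, name: "Spicy Inferno" }]),
  get_orders: () => JSON.stringify([]),
  get_toppings: () => JSON.stringify([{ id: 1, name: "Bacon" }]),
};

function runAgent(prompt: string): string {
  const call = fakeModel(prompt);             // 1. model picks a tool
  const result = tools[call.name](call.args); // 2. runtime invokes it over MCP
  return `Called ${call.name}: ${result}`;    // 3. model composes the answer
}
```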
Available scripts
This project uses npm workspaces to manage multiple packages in a single repository. You can run scripts from the root folder that will apply to all packages, or you can run scripts for individual packages as indicated in their respective README files.
Common scripts (run from repo root):
| Action | Command |
| ---------------- | ------------------ |
| Start everything | `npm start` |
| Build all | `npm run build` |
| Lint | `npm run lint` |
| Fix lint | `npm run lint:fix` |
| Format | `npm run format` |
MCP tools
The Burger MCP server provides these tools for AI agents:
| Tool Name | Description |
| ------------------------ | ----------- |
| `get_burgers` | Get a list of all burgers in the menu |
| `get_burger_by_id` | Get a specific burger by its ID |
| `get_toppings` | Get a list of all toppings in the menu |
| `get_topping_by_id` | Get a specific topping by its ID |
| `get_topping_categories` | Get a list of all topping categories |
| `get_orders` | Get a list of all orders in the system |
| `get_order_by_id` | Get a specific order by its ID |
| `place_order` | Place a new order with burgers (requires userId, optional nickname) |
| `delete_order_by_id` | Cancel an order if it has not yet been started (status must be pending, requires userId) |
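Under the hood, an MCP client invokes these tools with JSON-RPC 2.0 `tools/call` messages. The sketch below builds such a request for place_order; the tool name and the userId/nickname parameters come from the list above, while the exact shape of the burgers argument is an assumption for illustration:

```typescript
// Sketch of the JSON-RPC 2.0 message an MCP client sends to invoke a
// tool on the Burger MCP server. The "burgers" argument shape is an
// assumption, not the server's documented schema.

const placeOrderRequest = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "place_order",
    arguments: {
      userId: "user-123",                // required by place_order
      nickname: "lunch run",             // optional
      burgers: [{ id: 1, quantity: 2 }], // assumed shape
    },
  },
};

// An MCP client would POST this to the server's /mcp endpoint
// (Streamable HTTP transport) and read the JSON-RPC response.
console.log(JSON.stringify(placeOrderRequest, null, 2));
```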
Testing the MCP Server
Using the MCP Inspector
You can test the MCP server using the MCP Inspector:
Install and start MCP Inspector:
npx -y @modelcontextprotocol/inspector
In your browser, open the MCP Inspector (the URL will be shown in the terminal)
Configure the connection:
Transport: Streamable HTTP
URL: http://localhost:3000/mcp
Click Connect and explore the available tools
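Connecting over Streamable HTTP means POSTing JSON-RPC messages to the /mcp endpoint, starting with an `initialize` request; this is what the Inspector does for you behind the Connect button. A sketch of that handshake, assuming the 2025-03-26 MCP protocol revision and an illustrative fetch call:

```typescript
// Sketch of the first message an MCP client sends over Streamable HTTP.
// After the server replies with its capabilities, tools can be listed
// with a "tools/list" request.

const initializeRequest = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2025-03-26", // MCP revision that defines Streamable HTTP
    capabilities: {},
    clientInfo: { name: "example-client", version: "0.1.0" },
  },
};

// Illustrative only: requires the MCP server running on localhost:3000.
async function connect(): Promise<Response> {
  return fetch("http://localhost:3000/mcp", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Streamable HTTP clients must accept both JSON and SSE responses
      Accept: "application/json, text/event-stream",
    },
    body: JSON.stringify(initializeRequest),
  });
}
```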
Using GitHub Copilot
To use the MCP server in local mode with GitHub Copilot, create a local .vscode/mcp.json configuration file in your project root:
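The repository's actual configuration isn't reproduced here; a minimal sketch of what such a file can look like, assuming VS Code's `servers` schema for MCP over HTTP and the local URL used above, is:

```json
{
  "servers": {
    "burger-mcp": {
      "type": "http",
      "url": "http://localhost:3000/mcp"
    }
  }
}
```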
Then, you can use GitHub Copilot in agent mode to interact with the MCP server. For example, you can ask questions like "What burgers are available?" or "Place an order for a vegan burger" and Copilot will use the MCP server to provide answers or perform actions.
Tip
Copilot models can behave differently regarding tool usage, so if you don't see it calling the burger-mcp tools, you can explicitly mention using the Burger MCP server by adding #burger-mcp in your prompt.
Resources
Here are some resources to learn more about the technologies used in this project: