Jun 12, 2025

MCP Servers and How LangGraph/LangChain Fit In

Model Context Protocol (MCP) servers provide a structured way for tools and data sources to talk to language models. Instead of wiring every tool directly into an app, MCP defines a standard interface for listing resources, executing actions, and injecting structured context.

This write‑up explains MCP servers at a high level and how LangGraph and LangChain can sit on top of them for orchestration.

What an MCP server does

An MCP server exposes:

  • Resources: structured data the model can read (files, database schemas, APIs).
  • Tools: actions the model can call (search, write, deploy, run jobs).
  • Prompts: reusable prompt templates the server offers to clients.

Think of it as a consistent “tool bus” that multiple agents can plug into.
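The "tool bus" idea can be sketched as a minimal interface. The class and method names below (MCPServer, list_tools, call_tool, read_resource) are illustrative stand-ins, not the actual MCP SDK API:

```python
# Illustrative sketch of what an MCP server exposes.
# Names are hypothetical, not the real MCP SDK.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class MCPServer:
    """A minimal 'tool bus': agents discover and call tools through one interface."""
    name: str
    tools: dict[str, Callable] = field(default_factory=dict)
    resources: dict[str, str] = field(default_factory=dict)

    def list_tools(self) -> list[str]:
        # Discovery: every agent sees the same catalog.
        return sorted(self.tools)

    def call_tool(self, tool_name: str, **kwargs):
        # Execution: a single entry point for all actions.
        return self.tools[tool_name](**kwargs)

    def read_resource(self, uri: str) -> str:
        # Structured data the model can read.
        return self.resources[uri]


server = MCPServer(name="demo")
server.tools["search"] = lambda query: f"results for {query!r}"
server.resources["db://schema"] = "users(id, name)"
print(server.list_tools())  # ['search']
print(server.call_tool("search", query="mcp"))
```

The point of the sketch: agents never import a tool directly; they ask the server what exists and call it by name, so adding a tool never changes agent code.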

Why MCP matters

MCP solves two common problems:

  1. Tool sprawl: every tool has its own API and auth flow.
  2. Context chaos: context assembled from ad hoc prompt strings is hard to reproduce, audit, or test.

With MCP, you can move to a single integration surface and keep tool access auditable.
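Why a single surface makes access auditable can be shown with a toy wrapper; everything here (the registry, audited_call) is hypothetical, but it captures the idea that one choke point turns logging into a one-line concern:

```python
# Hypothetical sketch: when every tool call goes through one surface,
# auditing is a single choke point instead of per-tool plumbing.
import time
from typing import Callable

audit_log: list[dict] = []


def audited_call(tools: dict[str, Callable], name: str, **kwargs):
    # Record every call before dispatching it.
    audit_log.append({"ts": time.time(), "tool": name, "args": kwargs})
    return tools[name](**kwargs)


tools = {"search": lambda query: f"results for {query!r}"}
audited_call(tools, "search", query="mcp")
print(audit_log[0]["tool"])  # search
```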

Where LangChain fits

LangChain provides:

  • Tool calling abstractions (tools, retrievers, memory).
  • Chains that sequence multiple calls.
  • Prompt templates and output parsers.

If you expose tools through MCP, LangChain can treat MCP as a single “tool provider,” reducing integration overhead.
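The "single tool provider" pattern looks roughly like the adapter below. This is a plain-Python sketch with invented names (Tool, tools_from_mcp), not LangChain's real classes; the ecosystem does ship an adapter package for MCP (langchain-mcp-adapters) that plays this role for real servers.

```python
# Sketch of treating an MCP catalog as one "tool provider".
# The Tool/tools_from_mcp shapes are illustrative, not real SDK classes.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    """LangChain-style tool: a name, a description, and a callable."""
    name: str
    description: str
    func: Callable


def tools_from_mcp(catalog: dict[str, Callable]) -> list[Tool]:
    # One adapter instead of one bespoke integration per backend.
    return [
        Tool(name=name, description=f"MCP tool {name!r}", func=func)
        for name, func in sorted(catalog.items())
    ]


catalog = {
    "search": lambda q: f"hits for {q}",
    "deploy": lambda app: f"deployed {app}",
}
tools = tools_from_mcp(catalog)
print([t.name for t in tools])  # ['deploy', 'search']
```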

Where LangGraph fits

LangGraph builds on LangChain and adds graph‑structured execution:

  • Multiple agents can run in parallel.
  • You can route based on conditions (e.g., “if low confidence, escalate”).
  • You get stateful workflows with checkpoints.

In MCP terms, LangGraph becomes the control plane that decides which MCP tools to call and when.
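The conditional-routing idea can be sketched as a tiny state machine. This is not LangGraph's API (which builds graphs from nodes and conditional edges); the node and router functions here are illustrative:

```python
# Minimal sketch of graph-style orchestration: nodes transform a state
# dict, a router picks the next edge. Names are illustrative, not LangGraph's.
def draft(state: dict) -> dict:
    return {**state, "answer": f"draft for {state['question']}"}


def escalate(state: dict) -> dict:
    return {**state, "answer": "escalated to human review"}


def route(state: dict) -> str:
    # Conditional edge: low confidence escalates instead of finishing.
    return "escalate" if state["confidence"] < 0.8 else "end"


def run(state: dict) -> dict:
    state = draft(state)
    if route(state) == "escalate":
        state = escalate(state)
    return state


print(run({"question": "q1", "confidence": 0.9})["answer"])  # draft for q1
print(run({"question": "q2", "confidence": 0.3})["answer"])  # escalated to human review
```

In a real graph the state would also be checkpointed between nodes, which is what makes retries and human-in-the-loop pauses possible.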

A simple integration pattern

  1. MCP server exposes tools and resources.
  2. LangChain wraps those tools into a chain (search → summarize → answer).
  3. LangGraph orchestrates the chain in a stateful workflow (retry, branch, human review).

This is how you scale from a single tool call to a robust agent system.
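The three steps above can be sketched end to end. All function names here are invented for illustration; the shape to notice is that the chain only sequences calls, while the outer loop owns the control flow:

```python
# Sketch of the three-layer pattern: tools (step 1), a chain that
# sequences them (step 2), and an outer retry loop (step 3).
# All names are illustrative.
def search(query: str) -> str:
    return f"raw docs about {query}"


def summarize(docs: str) -> str:
    return docs.replace("raw docs about", "summary of")


def answer(summary: str) -> str:
    return f"Answer based on {summary}"


def chain(query: str) -> str:
    # Step 2: LangChain-style sequencing (search -> summarize -> answer).
    return answer(summarize(search(query)))


def run_with_retry(query: str, attempts: int = 3) -> str:
    # Step 3: LangGraph-style stateful control (retry on failure).
    last_error: Exception | None = None
    for _ in range(attempts):
        try:
            return chain(query)
        except Exception as exc:  # in practice: transient tool errors
            last_error = exc
    raise RuntimeError("chain failed") from last_error


print(run_with_retry("MCP"))  # Answer based on summary of MCP
```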

Example scenario

  • A user asks for a report.
  • LangGraph starts a workflow:
    • Pulls data from MCP resources.
    • Runs a summarization chain.
    • Calls an MCP tool to generate a PDF.
    • Escalates to a human if confidence is low.

Everything is auditable because every resource read and tool call flows through the MCP layer, where it can be logged centrally.
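The report scenario, compressed into a sketch. The mcp_read/mcp_call helpers are hypothetical stand-ins for the MCP interface; the point is that every data and tool access passes through them, so the log falls out for free:

```python
# End-to-end sketch of the report workflow. mcp_read/mcp_call are
# hypothetical stand-ins for the MCP interface; every access they
# handle gets logged as a side effect.
log: list[str] = []


def mcp_read(uri: str) -> str:
    log.append(f"read {uri}")
    return "q3 sales figures"


def mcp_call(tool: str, payload: str) -> str:
    log.append(f"call {tool}")
    return f"{tool} output for: {payload}"


def report_workflow(confidence: float) -> str:
    data = mcp_read("db://sales/q3")        # pull data from MCP resources
    summary = f"summary of {data}"          # summarization chain
    if confidence < 0.8:                    # low confidence -> human
        return "escalated to human review"
    return mcp_call("generate_pdf", summary)  # MCP tool generates the PDF


result = report_workflow(confidence=0.95)
print(result)
print(log)  # ['read db://sales/q3', 'call generate_pdf']
```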

Tradeoffs

  • Pros: standardization, better observability, clean separation of tools vs logic.
  • Cons: extra infra layer, more moving parts, requires good access control design.

Final thought

MCP gives you a consistent “contract” for tools and context. LangChain and LangGraph then let you build agent logic on top of that contract. Together, they shift AI apps from ad‑hoc prompt glue to structured, testable systems.

