
Highlights

  • Contextual Data Injection: MCP lets you pull in external resources — like files, database rows, or API responses — right into the prompt or working memory. All of it comes through a standardized interface, so your LLM can stay lightweight and clean. (View Highlight)
  • Function Routing & Invocation: MCP also lets models call tools dynamically. You can register capabilities like searchCustomerData or generateReport, and the LLM can invoke them on demand. It’s like giving your AI access to a toolbox, but without hardwiring the tools into the model itself. (View Highlight)
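The dynamic registration and invocation described above can be sketched with a minimal in-process tool registry. The registry, decorator, and `invoke` dispatcher below are illustrative stand-ins, not the official MCP SDK; only the capability names `searchCustomerData` and `generateReport` come from the text.

```python
# Hypothetical sketch of MCP-style dynamic tool registration and invocation.
# Not the official MCP SDK -- just the core idea: tools are registered by
# name, and the model invokes them on demand instead of being hardwired.

from typing import Any, Callable, Dict

TOOLS: Dict[str, Callable[..., Any]] = {}

def register_tool(name: str):
    """Decorator that registers a callable under a capability name."""
    def wrapper(fn: Callable[..., Any]) -> Callable[..., Any]:
        TOOLS[name] = fn
        return fn
    return wrapper

@register_tool("searchCustomerData")
def search_customer_data(customer_id: str) -> dict:
    # In a real server this would query a database or API.
    return {"customer_id": customer_id, "tier": "gold"}

@register_tool("generateReport")
def generate_report(topic: str) -> str:
    return f"Report on {topic}"

def invoke(name: str, **kwargs: Any) -> Any:
    """The model asks for a tool by name; the server dispatches the call."""
    if name not in TOOLS:
        raise KeyError(f"Unknown capability: {name}")
    return TOOLS[name](**kwargs)

print(invoke("searchCustomerData", customer_id="c-42"))
```

Because tools live in a registry rather than in the model, adding a capability means registering one more function, with no change to the model itself.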
  • Prompt Orchestration: Rather than stuffing your prompt with every possible detail, MCP helps assemble just the context that matters. Think modular, on-the-fly prompt construction — smarter context, fewer tokens, better outputs. (View Highlight)
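A toy sketch of that modular assembly, assuming a hypothetical `build_prompt` helper and made-up context blocks: only the blocks relevant to the task are included, and the bulky ones are left out.

```python
# Illustrative sketch of modular, on-the-fly prompt construction:
# assemble only the context blocks the current task actually needs.

def build_prompt(task: str, context_blocks: dict, needed: list) -> str:
    """Assemble a prompt from just the named context blocks."""
    parts = [f"Task: {task}"]
    for key in needed:
        if key in context_blocks:
            parts.append(f"[{key}]\n{context_blocks[key]}")
    return "\n\n".join(parts)

context = {
    "customer_record": "Name: Ada, plan: enterprise",
    "recent_tickets": "3 open tickets about billing",
    "full_history": "... thousands of tokens we don't need ...",
}

# Pull in only what matters for this task; skip the bulky history.
prompt = build_prompt("Summarize the customer's billing issues",
                      context, ["customer_record", "recent_tickets"])
print(prompt)
```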
  • Implementation Characteristics • Operates over HTTP(S) with JSON-based capability descriptors • Designed to be model-agnostic — any LLM with a compatible runtime can leverage MCP-compliant servers • Compatible with API gateways and enterprise authentication standards (e.g., OAuth2, mTLS) (View Highlight)
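As a sketch of the JSON-based capability descriptors mentioned above, a tool might be advertised roughly like this; the field names are assumptions for illustration, not the official MCP schema.

```python
import json

# Illustrative JSON capability descriptor for a tool. The field names
# here are assumptions for the sketch, not the official MCP schema.
descriptor = {
    "name": "searchCustomerData",
    "description": "Look up a customer record by ID",
    "inputSchema": {
        "type": "object",
        "properties": {"customer_id": {"type": "string"}},
        "required": ["customer_id"],
    },
}

# Serialized form, as it would travel over HTTP(S) to a client.
payload = json.dumps(descriptor, indent=2)
print(payload)
```

Because the descriptor is plain JSON over HTTP(S), any model runtime that can parse it can use the capability, which is what makes the protocol model-agnostic.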
  • Engineering Use Cases ➀ LLM integrations for internal APIs: Enable secure, read-only or interactive access to structured business data without exposing raw endpoints. ➁ Enterprise agents: Equip autonomous agents with runtime context from tools like Salesforce, SAP, or internal knowledge bases. ➂ Dynamic prompt construction: Tailor prompts based on user session, system state, or task pipeline logic (View Highlight)
  • The Agent Communication Protocol (ACP) is an open standard originally proposed by BeeAI and IBM to enable structured communication, discovery, and coordination between AI agents operating in the same local or edge environment. (View Highlight)
  • Unlike cloud-oriented protocols such as A2A or context-routing protocols like MCP, ACP is designed for local-first, real-time agent orchestration with minimal network overhead and tight integration across agents deployed within a shared runtime. (View Highlight)
  • ACP defines a decentralized agent environment in which: • Each agent advertises its identity, capabilities, and state using a local broadcast/discovery layer. • Agents communicate through event-driven messaging, often using a local bus or IPC (inter-process communication) system. • A runtime controller (optional) can orchestrate agent behavior, aggregate telemetry, and enforce execution policies. ACP agents typically operate as lightweight, stateless services or containers with a shared communication substrate. (View Highlight)
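The discovery and event-driven messaging pattern above can be sketched as a single in-process bus. A real ACP deployment would sit on gRPC, ZeroMQ, or an OS-level IPC mechanism; the `LocalBus` class and all agent names here are hypothetical.

```python
# Minimal in-process sketch of ACP-style local discovery plus
# event-driven messaging. Illustrative only -- a real runtime would
# use gRPC, ZeroMQ, or a custom bus rather than a Python object.

from collections import defaultdict

class LocalBus:
    def __init__(self):
        self.agents = {}                      # advertised identity -> capabilities
        self.subscribers = defaultdict(list)  # topic -> handler callbacks

    def advertise(self, agent_id, capabilities):
        """An agent announces its identity and capabilities on the shared runtime."""
        self.agents[agent_id] = capabilities

    def discover(self, capability):
        """Find agents that advertise a given capability (used for task routing)."""
        return [a for a, caps in self.agents.items() if capability in caps]

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        """Event-driven messaging: fan a message out to all subscribers."""
        for handler in self.subscribers[topic]:
            handler(message)

bus = LocalBus()
bus.advertise("vision-agent", ["detect_objects"])
bus.advertise("planner-agent", ["plan_route"])

received = []
bus.subscribe("frame.analyzed", received.append)
bus.publish("frame.analyzed", {"objects": ["pallet", "forklift"]})

print(bus.discover("detect_objects"))
```

Note there is no central registry or cloud call anywhere: discovery, routing, and messaging all happen inside the shared runtime, which is the "local sovereignty" property the protocol emphasizes.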
  • Implementation Characteristics • Designed for low-latency environments (e.g., local orchestration, robotics, offline edge AI) • Can be implemented over gRPC, ZeroMQ, or custom runtime buses • Emphasizes local sovereignty — no cloud dependency or external service registration required • Supports capability typing and semantic descriptors for automated task routing (View Highlight)
  • Engineering Use Cases ➀ Multi-agent orchestration on edge devices (e.g., drones, IoT clusters, or robotic fleets) ➁ Local-first LLM systems coordinating model invocations, sensor inputs, and action execution ➂ Autonomous runtime environments where agents must coordinate without centralized cloud infrastructure (View Highlight)
  • In short, ACP offers a runtime-local protocol layer for modular AI systems — prioritizing low-latency coordination, resilience, and composability. It’s a natural fit for privacy-sensitive, autonomous, or edge-first deployments where cloud-first protocols are impractical. (View Highlight)
  • The Agent-to-Agent (A2A) Protocol, introduced by Google, is a cross-platform specification for enabling AI agents to communicate, collaborate, and delegate tasks across heterogeneous systems. (View Highlight)
  • Unlike ACP’s local-first focus or MCP’s tool integration layer, A2A addresses horizontal interoperability — standardizing how agents from different vendors or runtimes can exchange capabilities and coordinate workflows over the open web. (View Highlight)
  • A2A defines an HTTP-based communication model where agents are treated as interoperable services. Each agent exposes an “Agent Card” — a machine-readable JSON descriptor detailing its identity, capabilities, endpoints, and authentication requirements. Agents use this information to: • Discover each other programmatically • Negotiate tasks and roles • Exchange messages, data, and streaming updates A2A is transport-layer agnostic in principle, but currently specifies JSON-RPC 2.0 over HTTPS as its core mechanism for interaction. (View Highlight)
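A rough sketch of an Agent Card and the JSON-RPC 2.0 envelope A2A specifies. The card fields follow the shape described in the text (identity, capabilities, endpoint, auth), but the exact values and the `tasks/send` method name are illustrative assumptions.

```python
import itertools
import json

# Sketch of an A2A-style "Agent Card": a machine-readable JSON descriptor
# of identity, capabilities, endpoint, and auth. Values are illustrative.
agent_card = {
    "name": "report-agent",
    "description": "Generates summary reports",
    "url": "https://agents.example.com/report-agent",
    "capabilities": {"streaming": True},
    "authentication": {"schemes": ["oauth2"]},
}

_ids = itertools.count(1)

def jsonrpc_request(method: str, params: dict) -> str:
    """Build a JSON-RPC 2.0 request, the interaction mechanism A2A specifies."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,
        "params": params,
    })

# A client agent reads the card, then sends a task to the card's URL.
req = jsonrpc_request("tasks/send", {"message": "Q3 sales summary"})
print(req)
```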
  • Core Components **Agent Cards**: JSON documents describing an agent’s capabilities, endpoints, supported message types, auth methods, and runtime metadata. **A2A Client/Server Interface**: Each agent may function as a client (task initiator), a server (task executor), or both, enabling dynamic task routing and negotiation. **Message & Artifact Exchange**: Supports multipart tasks with context, streaming output (via SSE), and persistent artifacts (e.g., files, knowledge chunks). **User Experience Negotiation**: Agents can adapt message format, content granularity, and visualization to match downstream agent capabilities. (View Highlight)
  • Security Architecture • OAuth 2.0 and API key-based authorization • Capability-scoped endpoints — agents only expose functions required for declared interactions • Agents can operate in “opaque” mode — hiding internal logic while revealing callable services (View Highlight)
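A minimal sketch of the capability-scoped authorization idea above: the caller presents an OAuth 2.0 bearer token, and the server accepts only the methods it has declared for the interaction. The scope names and token here are hypothetical.

```python
# Sketch of capability-scoped authorization for an agent endpoint:
# a bearer token must be present, and only declared methods are exposed.
# Scope names and the token value are hypothetical.

ALLOWED_SCOPES = {"tasks/send", "tasks/get"}  # capability-scoped endpoints

def authorize(headers: dict, method: str) -> bool:
    """Accept the call only if a token is present and the method is in scope."""
    token = headers.get("Authorization", "")
    return token.startswith("Bearer ") and method in ALLOWED_SCOPES

headers = {"Authorization": "Bearer example-oauth-token"}
print(authorize(headers, "tasks/send"))    # declared capability
print(authorize(headers, "admin/delete"))  # never exposed
```

Scoping this way is also what enables the "opaque" mode the text mentions: internal functions simply never appear in the allowed set, so callers see only the declared services.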
  • Implementation Characteristics • Web-native by design: built on HTTP, JSON-RPC, and standard web security • Model-agnostic: works with any agent system (LLM or otherwise) that implements the protocol • Supports task streaming and multi-turn collaboration with lightweight payloads (View Highlight)
  • Engineering Use Cases ➀ Cross-platform agent ecosystems where agents from different teams or vendors need to interoperate securely ➁ Distributed agent orchestration in cloud-native AI environments (e.g., Vertex AI, LangChain, HuggingFace Agents) ➂ Multi-agent collaboration frameworks, such as enterprise AI workflows that span multiple systems (e.g., CRM, HR, IT agents) (View Highlight)
  • A2A + MCP A2A and MCP aren’t fighting each other — they’re solving totally different parts of the agentic AI puzzle, and they actually fit together pretty nicely. (View Highlight)
  • Think of MCP as the protocol that lets AI agents plug into the world. It gives them access to files, APIs, databases — basically, all the structured context they need to do something useful. Whether it’s pulling real-time sales data or generating a custom report, MCP handles the connection to tools and data. (View Highlight)
  • Now layer on A2A. This is where agents start collaborating. A2A gives them a shared language and set of rules to discover each other, delegate tasks, and negotiate how they’ll work together — even if they’re built by different vendors or running on different platforms. (View Highlight)
  • So here’s a simple way to think about it: ⟢ MCP connects AI to tools. ⟢ A2A connects AI to other AI.

    Together, they form a strong modular base for building smart, collaborative systems. (View Highlight)
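That division of labor can be caricatured in a few lines: one stand-in function plays the MCP role (agent to tool), another plays the A2A role (agent to agent). Every name here is made up for illustration.

```python
# Toy end-to-end sketch of the division of labor described above.
# All function and agent names are illustrative stand-ins.

def mcp_call_tool(name: str, args: dict):
    """Stand-in for an MCP tool invocation (AI -> tools)."""
    tools = {"fetch_sales": lambda region: {"region": region, "total": 1200}}
    return tools[name](**args)

def a2a_delegate(target_agent: str, task: str, payload: dict) -> dict:
    """Stand-in for an A2A task message (AI -> other AI)."""
    return {"to": target_agent, "task": task, "payload": payload}

# Step 1 (MCP): pull structured data from a tool.
sales = mcp_call_tool("fetch_sales", {"region": "EMEA"})

# Step 2 (A2A): hand the result to a collaborating agent.
message = a2a_delegate("report-agent", "summarize", sales)
print(message)
```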

  • Then there’s ACP, which takes a different approach altogether. It’s all about local-first agent coordination — no cloud required. Instead of using HTTP and web-based discovery, ACP enables agents to find and talk to each other right inside a shared runtime. (View Highlight)
  • This is perfect for situations where: • You have limited bandwidth or need low latency (like in robotics or on-device assistants), • Privacy matters and you want to keep everything offline, • Or you’re deploying in environments cut off from the internet (e.g., factory floors, edge nodes).

    ACP isn’t trying to compete with A2A — it just fills a different niche. But in some setups, especially in tightly controlled environments, ACP might replace A2A entirely, because it skips the overhead of web-native protocols and just gets the job done locally. (View Highlight)

  • Best case? We see convergence. Imagine a unified agent platform where A2A handles the back-and-forth between agents, MCP manages access to tools and data, and ACP-style runtimes plug in for edge or offline scenarios. Everything just works, and developers can build on top without worrying which protocol is doing what behind the scenes. (View Highlight)
  • Worst case? Things fragment. Different vendors push their own flavors of A2A or MCP, and we end up with a mess — like the early days of web services, when nothing talked to anything else without a lot of glue code. (View Highlight)
  • The middle ground? Open-source tools and middleware could save the day. These projects would sit between agents and protocols, abstracting the differences and giving devs a clean, unified API — while translating under the hood depending on where and how your agents run. (View Highlight)
  • In short: we’re early. But how we build and adopt these standards now will shape whether AI agents become a cohesive ecosystem — or a patchwork of silos. (View Highlight)