MCP Lets Java AI Agents Call Tools Across Microservices Without Handwritten HTTP Clients
MCP (Model Context Protocol) enables Java AI agents to discover and invoke tools exposed by microservices using JSON-RPC over HTTP/SSE, removing the need for bespoke HTTP clients.
Why @Tool Alone Breaks Down in Distributed Systems
When AI agent logic runs inside a single JVM, annotations like @Tool make it easy to expose methods directly to the model. That pattern works for monoliths, but it becomes cumbersome once functionality is split across multiple services. In the author’s saga orchestration example, business logic is distributed across five services — order-service, product-validation-service, payment-service, inventory-service and an orchestrator — with an additional ai-saga-agent that hosts the AI agents. Each service owns its own database and capabilities, and the ai-saga-agent needs to consult all of them at runtime.
Using @Tool for cross-service calls forces the agent service to carry custom HTTP clients, DTOs, retry and error-handling logic for every other service it talks to. Every time a service adds a new capability, the agent code must be updated. That tight coupling was the pain point that motivated adopting MCP as a different integration pattern.
What MCP Is and How It Changes Service Integration
MCP, or Model Context Protocol, is a lightweight way to expose service capabilities to LLM-driven agents via a common protocol. In the implementation described, MCP operates as a JSON-RPC layer transported over HTTP with Server-Sent Events (SSE) for the agent to subscribe to server messages. Services register “tools” — named, described operations with parameter schemas and handlers — and any agent that speaks the MCP transport can discover and call those tools at runtime.
The core benefit is decoupling. Rather than embedding a separate HTTP client for each service, an agent opens one or more MCP client connections, discovers the set of available tools across services, and invokes them using a standard request shape. Services keep full ownership of their data and business logic; the MCP layer is a thin, standardized exposure surface.
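Concretely, every invocation uses the same JSON-RPC request shape defined by the MCP specification, no matter which service owns the tool. A tools/call against the payment service's getPaymentStatus tool would look roughly like this (the transactionId value is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "getPaymentStatus",
    "arguments": { "transactionId": "abc-123" }
  }
}
```

Because the envelope is identical for every tool, the agent needs one client implementation rather than one per service.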
How the Example Microservices Are Organized
The example architecture used to evaluate MCP is a saga orchestration system composed of the following services and ports:
- order-service (port 3000): MongoDB-backed, manages orders and events
- product-validation-service (port 8090): PostgreSQL-backed, validates catalog entries
- payment-service (port 8091): PostgreSQL-backed, handles payments and fraud scoring
- inventory-service (port 8092): PostgreSQL-backed, manages stock
- orchestrator (port 8050): coordinates the saga via Kafka
- ai-saga-agent (port 8099): hosts AI agents that need to query the other services
The ai-saga-agent must consult all four business services during saga execution and analysis, which is where MCP becomes useful: it lets each service publish a small set of callable tools, and lets the agent discover those tools without bespoke client code.
Turning a Microservice into an MCP Server
Making an existing microservice available as an MCP server can be done without rewriting the underlying business logic. In the payment-service example, existing Spring beans like PaymentService and FraudValidationService remain the single source of truth; the MCP layer wraps selected methods from those beans and publishes them as tools.
Key steps in the example process are:
- Adding the MCP SDK dependency to the service build so the MCP server and transport classes are available.
- Configuring the MCP transport so the service exposes an SSE endpoint and a message endpoint to receive JSON-RPC-style calls. The transport registration maps the transport provider to routes such as /sse and /mcp/message.
- Creating and registering an McpSyncServer (MCP server instance) which includes server metadata, capability flags (for example, exposing tools), and a list of tool specifications built from existing service beans.
This approach reuses existing service methods rather than duplicating them. The MCP server registration simply supplies metadata and wiring so the LLM-aware client can discover and call the service’s functions.
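The registration pattern can be modeled in plain Java, independent of the MCP SDK's actual classes: each tool is a name, a description, an input schema, and a handler that bridges to an existing bean. The schemas and handler return values below are illustrative stand-ins, not the author's code.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Plain-Java model of MCP tool registration (not the SDK API): the server
// holds a registry of named, described, schema-carrying tool specifications
// whose handlers delegate to existing service beans.
public class ToolRegistryDemo {

    record ToolSpec(String name, String description, String inputSchema,
                    Function<Map<String, Object>, String> handler) {}

    static Map<String, ToolSpec> buildRegistry() {
        Map<String, ToolSpec> registry = new LinkedHashMap<>();
        registry.put("getPaymentStatus", new ToolSpec(
                "getPaymentStatus",
                "Returns the payment status for a transaction",
                "{\"type\":\"object\",\"properties\":{\"transactionId\":{\"type\":\"string\"}},\"required\":[\"transactionId\"]}",
                args -> "status=SUCCESS"));   // stand-in for paymentService.findByTransactionId(...)
        registry.put("getRefundRate", new ToolSpec(
                "getRefundRate",
                "Returns the observed refund rate",
                "{\"type\":\"object\",\"properties\":{}}",
                args -> "refundRate=0.02"));  // illustrative value
        registry.put("getFraudRiskScore", new ToolSpec(
                "getFraudRiskScore",
                "Returns a fraud risk score for a transaction",
                "{\"type\":\"object\",\"properties\":{\"transactionId\":{\"type\":\"string\"}}}",
                args -> "risk=LOW"));         // illustrative value
        return registry;
    }

    public static void main(String[] args) {
        Map<String, ToolSpec> registry = buildRegistry();
        // Discovery: an MCP client would see these names and schemas via tools/list.
        System.out.println(registry.keySet());
        System.out.println(registry.get("getPaymentStatus")
                .handler().apply(Map.<String, Object>of("transactionId", "tx-1")));
    }
}
```

The real McpSyncServer wiring adds server metadata and capability flags around the same idea: a list of tool specifications whose handlers call existing beans.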
What a Tool Specification Looks Like in Practice
Each published MCP tool in the example requires four elements:
- A human-friendly name and descriptive text so an LLM can understand when to call it.
- A JSON schema that defines the tool’s input parameters so callers can validate and assemble requests.
- A handler function (the bridge) that takes the parsed arguments and invokes the underlying business logic.
- A stable tool identity that appears in the service’s capabilities list.
The payment-service example exposes tools such as getPaymentStatus, getRefundRate and getFraudRiskScore. For instance, the getPaymentStatus tool describes the operation to the model, specifies a JSON schema requiring a transactionId string, and wraps a call to the existing paymentService.findByTransactionId(…) method. The handler returns either a formatted status string (including values such as status, totalAmount and totalItems) or a “not found” message if the transaction is missing.
Crucially, the example does not add new business logic — it surfaces what already exists inside the service under a standardized protocol.
What Each Service Exposes via MCP
Across the four business services the author implemented, the MCP tools were mapped as follows:
- order-service: getOrderById, listRecentEvents, getLastEventByOrder
- payment-service: getPaymentStatus, getRefundRate, getFraudRiskScore
- inventory-service: getStockByProduct, getLowStockAlert, checkReservationExists
- product-validation-service: checkProductExists, checkValidationExists, listCatalog
Each service retains full data ownership; the MCP interface is intentionally a thin translation layer from local method to remote tool call.
How an Agent Connects and Discovers Tools
On the agent side — in this example, the ai-saga-agent — an MCP tool provider aggregates MCP client connections to the various service endpoints. The agent builds a list of MCP clients, each pointing at a service’s SSE URL (for example, http://localhost:8092/sse for inventory). The tool provider automatically discovers the tools that each server publishes and makes them available to the agent runtime.
When constructing an agent instance in the example, the MCP tool provider is supplied to the agent builder so the agent’s tool catalog includes the discovered tools from all connected services. That means one agent instance can enumerate and invoke a dozen or more tools across multiple services without any custom HTTP client code.
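The aggregation the tool provider performs can be modeled in plain Java (this is not a specific MCP client API): one connection per service SSE endpoint, with every server's published tools merged into a single catalog handed to the agent.

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java model of the agent-side tool provider: it holds one MCP
// connection per service and merges each server's published tools into
// one catalog. Discovery would normally happen via a tools/list call.
public class McpToolCatalogDemo {

    record McpConnection(String sseUrl, List<String> publishedTools) {}

    static List<String> aggregate(List<McpConnection> connections) {
        List<String> catalog = new ArrayList<>();
        for (McpConnection c : connections) {
            catalog.addAll(c.publishedTools());
        }
        return catalog;
    }

    public static void main(String[] args) {
        List<McpConnection> connections = List.of(
                new McpConnection("http://localhost:8091/sse",
                        List.of("getPaymentStatus", "getRefundRate", "getFraudRiskScore")),
                new McpConnection("http://localhost:8092/sse",
                        List.of("getStockByProduct", "getLowStockAlert", "checkReservationExists")));
        System.out.println(aggregate(connections));
    }
}
```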
Testing and Debugging Tools Without an LLM
Because MCP here is just JSON-RPC over HTTP, with SSE for server-to-client messages, tools can be exercised manually during development. The example lists a simple three-step workflow:
- Open an SSE session at the service’s /sse endpoint to obtain a sessionId.
- POST a JSON-RPC request to the service’s message endpoint to list available tools.
- POST a JSON-RPC tools/call request with the tool name and JSON arguments to exercise a tool (for example, calling getStockByProduct with a productCode argument).
Using curl to open a session, list tools, and call a tool is valuable for debugging: when an agent behaves unexpectedly, you can determine whether the problem lies in the model prompt or in the tool implementation by invoking the tool directly.
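The JSON-RPC bodies POSTed in steps 2 and 3 can be assembled as plain strings; how the sessionId from step 1 is carried (query parameter versus header) depends on the server, and the productCode value below is illustrative.

```java
// Builds the JSON-RPC payloads for the manual tools/list and tools/call
// steps of the debugging workflow. These are the request bodies only; they
// would be POSTed to the service's message endpoint with curl or any client.
public class McpCurlPayloads {

    static String listToolsPayload() {
        return """
                {"jsonrpc":"2.0","id":1,"method":"tools/list"}""";
    }

    static String callToolPayload(String tool, String argsJson) {
        return """
                {"jsonrpc":"2.0","id":2,"method":"tools/call",
                 "params":{"name":"%s","arguments":%s}}""".formatted(tool, argsJson);
    }

    public static void main(String[] args) {
        System.out.println(listToolsPayload());
        System.out.println(callToolPayload("getStockByProduct",
                "{\"productCode\":\"BOOK-001\"}")); // illustrative product code
    }
}
```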
When to Use @Tool Versus MCP
From the implementer’s experience, the choice between @Tool and MCP is one of locality and ownership:
- Use @Tool when the logic is co-located in the same JVM as the agent. It avoids network overhead and keeps integration simple; the trade-off is that only that agent can use the annotated code.
- Use MCP when the logic lives in a separate service. MCP provides language- and platform-agnostic discovery and invocation using a JSON-RPC transport, and adding new tools does not require changes on the agent side.
Practically, the author’s agents rely on MCP for cross-service calls and reserve @Tool for small utility functions that don’t belong in any microservice (formatting helpers, date calculations, etc.).
How MCP Fits into a Saga-Oriented Architecture
The example system uses the Saga Pattern to coordinate distributed transactions without a two-phase commit. The flow is composed of a sequence of local actions and, when necessary, compensating rollback steps. The high-level execution path in the author’s system is:
Order Service → Orchestrator → Product Validation → Payment → Inventory → Success
If a later step fails, compensating messages flow back through Kafka topics to undo earlier steps. The orchestrator emits and consumes topics and follows a state transition table that maps (source, status) pairs to the next topic. For instance, a successful PRODUCT_VALIDATION step leads to a payment-success topic; a PAYMENT failure triggers a product-validation-fail rollback topic.
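The state transition table can be sketched as a map from (source, status) pairs to the next Kafka topic. Only the two transitions named above come from the text; the other entries are illustrative placeholders.

```java
import java.util.Map;

// Sketch of the orchestrator's transition table: (source, status) -> next topic.
// The PRODUCT_VALIDATION/SUCCESS and PAYMENT/FAIL rows follow the article;
// the remaining topic names are illustrative.
public class SagaTransitions {

    enum Source { PRODUCT_VALIDATION, PAYMENT, INVENTORY }
    enum Status { SUCCESS, FAIL }

    record Key(Source source, Status status) {}

    static final Map<Key, String> TRANSITIONS = Map.of(
            new Key(Source.PRODUCT_VALIDATION, Status.SUCCESS), "payment-success",
            new Key(Source.PAYMENT, Status.FAIL), "product-validation-fail",
            new Key(Source.PAYMENT, Status.SUCCESS), "inventory-success",   // illustrative
            new Key(Source.INVENTORY, Status.FAIL), "payment-fail");        // illustrative

    static String nextTopic(Source source, Status status) {
        return TRANSITIONS.get(new Key(source, status));
    }

    public static void main(String[] args) {
        System.out.println(nextTopic(Source.PRODUCT_VALIDATION, Status.SUCCESS));
        System.out.println(nextTopic(Source.PAYMENT, Status.FAIL));
    }
}
```

Records make natural composite keys here because they implement equals and hashCode by value.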
MCP augments this architecture by giving AI agents a standardized way to query each service’s state, metrics and domain-specific checks during orchestration, diagnosis and analytics — without hardwired clients.
Agents, Use Cases, and What Comes Next
With the MCP layer operational, the next phase is deploying agents that put those cross-service tools to work. The author teases three agents to be covered in a follow-up:
- OperationsAgent: listens for failed sagas on Kafka and performs automated diagnosis using retrieval-augmented generation (RAG) techniques.
- SagaComposerAgent: periodically refactors the saga execution plan based on observed failure data.
- DataAnalystAgent: answers natural-language queries such as “list the 5 most recent failed sagas and assess their fraud risk” by composing tool calls across services.
Those agents illustrate how MCP can shift AI agent design from embedding ad-hoc integration code to composing domain-aware workflows over a shared tool surface.
Broader Implications for Developers and Organizations
Standardizing service exposure with MCP has implications beyond this specific example:
- Reduced duplication: Teams no longer need to maintain parallel HTTP clients in each agent project. Services export their capabilities once and agents consume them.
- Clear ownership boundaries: Data and business logic remain inside the owning service; MCP provides a controlled surface for invocation.
- Language and platform neutrality: Because MCP uses JSON-RPC and HTTP/SSE, clients can be implemented in different languages while speaking the same protocol.
- Faster agent iteration: A tool added to a service automatically becomes available to any MCP-aware agent without agent-side code changes, enabling quicker experimentation with AI-driven automation and analytics.
For developers, this pattern encourages treating LLMs and agents as first-class consumers of service APIs, and it reframes “tool exposure” as an operational contract rather than an ad-hoc integration point.
Practical Considerations and Development Workflow
In the author’s workflow, adopting MCP reduced the amount of boilerplate and maintenance required in the ai-saga-agent. Integration shifted from a model where every new cross-service capability required a new client and DTOs in the agent to a lightweight pattern of publishing a tool specification and letting agents discover it.
Testing remains an essential step: because MCP tools are reachable via plain HTTP, teams can exercise and debug tool behavior with standard tooling before connecting agents. This reduces ambiguity when an agent decision is unexpected; developers can determine whether the issue is the model, the tool metadata, or the tool’s implementation.
The approach preserves observability as well: every saga event is stored and transitions are logged via Kafka, so agent-driven actions and tool invocations can be audited as part of the existing event trail.
A public repository containing the example implementation is available for reference, and the author indicates the code used for the saga orchestration and MCP integration is open-source.
The example’s next installment promises detailed walkthroughs of the three agent types that consume the published tools, showing how diagnosis, plan rewriting, and natural-language querying are implemented on top of MCP.
Looking ahead, publishing service capabilities through a lightweight JSON-RPC layer like MCP makes it simpler to evolve both services and agents independently while maintaining a discoverable, machine-friendly interface. As agents take on more operational responsibilities — from diagnosing failed sagas to dynamically altering execution plans — a concise, standardized tool surface reduces friction and helps organizations scale AI-driven orchestration across microservice landscapes.