Module 4 — Working with MCP in Postman

Teaching Your Agent New Tricks

In a previous course, you gave your agent a brain and a goal. Now we're giving it some hands — specifically, very specialised hands that can reach into Slack, Notion, Stripe, Google Maps, or whatever your workflow needs, without writing a single line of integration code.

What You'll Learn

- How the request–tool–response loop works when an MCP server is attached to an AI request
- Why the model, not you, decides whether to invoke a tool
- How to inspect a tool call end to end in Postman's response pane
How the Request–Tool–Response Loop Works

When you attach an MCP server to an AI request, the model doesn't just get a longer system prompt. It gets a menu. Every tool your MCP server exposes shows up as something the model can choose to invoke — or not. The key word there is choose.
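Concretely, that menu is the server's advertised tool list: each entry carries a name, a description, and a JSON Schema for its arguments. Here's a rough sketch of that shape as a Python dict — the tool names and fields are hypothetical examples for illustration, not the output of any real server:

```python
# Illustrative shape of an MCP server's tool list — the "menu" the model sees.
# Tool names and schemas here are hypothetical, not a real server's output.
tool_menu = {
    "tools": [
        {
            "name": "create_collection",
            "description": "Create a new Postman Collection from a description.",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "description": {"type": "string"},
                },
                "required": ["name"],
            },
        },
        {
            "name": "search_workspace",
            "description": "Search the current workspace for matching resources.",
            "inputSchema": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    ]
}

# The model receives these names, descriptions, and schemas alongside your
# prompt, and picks a tool (or none) based on what the prompt needs.
tool_names = [tool["name"] for tool in tool_menu["tools"]]
print(tool_names)  # ['create_collection', 'search_workspace']
```

The descriptions matter more than they look: they're the only thing the model has to go on when deciding which tool, if any, fits the prompt.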

Here's what that loop actually looks like, from the moment you hit Send to the moment the response lands:

Request flow — AI request + MCP server

1. You send a prompt via an AI request. Same as any AI request — enter your prompt, hit Send. The difference is what the model can see on the other side.
2. Model sees available tools from the attached MCP server. Postman passes the MCP server's tool list to the model as part of the request context. The model now knows what actions are available to it.
3. Model decides whether to invoke a tool (the interesting bit). This is the part that surprises people. The model makes its own judgement call. If the prompt is answerable from training data alone, it might skip the tool entirely. If it needs live data or an action, it'll reach for one.
4. Tool returns context. The MCP server executes the tool call and returns a result — a list of items, a created resource, a fetched document, whatever the tool does.
5. Model incorporates the result into its response. The tool output gets folded into the model's answer. In Postman, you can see the full chain — which tool was called, with what arguments, and what came back.
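Stripped of everything Postman handles behind the Send button, the five steps above can be sketched as a short loop. The model and MCP server below are stub functions so the sketch runs on its own — a real model weighs the prompt against tool descriptions, and a real server does actual work:

```python
# A minimal sketch of the request–tool–response loop.
# model_decide and mcp_execute are stubs, not real model or MCP APIs.

def model_decide(prompt, tools):
    """Step 3 stub: the model's own judgement call.

    A real model weighs the prompt against the tool descriptions; this
    stub triggers on a keyword just so the loop is runnable end to end.
    """
    if "list my collections" in prompt.lower():
        return {"tool": "search_workspace", "arguments": {"query": "collections"}}
    return None  # answerable without live data — skip the tools entirely

def mcp_execute(call):
    """Step 4 stub: the MCP server runs the tool and returns context."""
    return {"results": ["Payments API", "Internal Admin API"]}

def run_ai_request(prompt, tools):
    decision = model_decide(prompt, tools)   # step 3: invoke a tool, or not?
    if decision is None:
        return f"Answer (no tool used) to: {prompt!r}"
    context = mcp_execute(decision)          # step 4: tool returns context
    # Step 5: fold the tool output into the final answer.
    return f"Found via {decision['tool']}: {', '.join(context['results'])}"

tools = ["search_workspace"]                 # step 2: the advertised menu
print(run_ai_request("Please list my collections", tools))
print(run_ai_request("What does HTTP 404 mean?", tools))
```

Note that the branching lives inside the model's decision, not in your code: the second prompt takes the no-tool path without you scripting a condition for it.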
🤔
The model decides whether to use the tool — not you. You're not scripting "call this tool when X happens." You're giving the model agency and trusting its judgement. That's both the power of this approach and the reason you should test it thoroughly in Postman before pointing it at anything production-shaped.
 

See It in Action

The walkthrough below shows the full flow end-to-end: connecting a real MCP server to an AI request, sending a prompt, and watching the model decide to invoke a tool — then inspecting exactly what happened in the Postman response pane.

 
Arcade Demo — MCP Server Walkthrough in Postman
Replace this block with the embedded Arcade video when ready
 
🚀 Try This

Add the Postman MCP Server to one of your existing AI requests. Prompt the model to create a Collection based on a simple API you use regularly — then inspect what it built.

Does it match what you'd have made manually? What's missing? What surprised you? There are no wrong answers here — the goal is to develop an intuition for when the model reaches for a tool and when it doesn't.

🔧
Up Next

Building Your Own MCP Server