Building Your First AI Agent: A Step‑by‑Step Guide

1. Getting Started with StitchGrid
Building an AI agent doesn’t have to be complicated. With StitchGrid, you can create powerful workflows using a visual interface that stitches together LLMs, APIs, and custom logic.
2. Define Your Trigger
What should start your agent’s workflow?
- Webhook – Triggered by an external event (e.g., a new email).
- Schedule – Run at regular intervals (e.g., every 5 minutes).
- Manual – Triggered by a user click in the UI.
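As a sketch, a webhook trigger configuration might look like the following (the field names here are illustrative, not StitchGrid's actual schema):

```json
{
  "trigger": {
    "type": "webhook",
    "path": "/hooks/new-email",
    "method": "POST",
    "secret": "replace-with-signing-secret"
  }
}
```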
3. Choose Your LLM
Select the best model for your task.
- GPT‑4 – Best for reasoning, summarization, and complex dialogue.
- GPT‑3.5‑Turbo – Fast, cost‑effective for simpler queries.
- Custom fine‑tuned models – For domain‑specific language.
4. Add Tools via MCP
StitchGrid’s Model Context Protocol (MCP) support is the bridge that lets your LLM interact with external services—APIs, databases, web scraping, etc.—in a safe, sandboxed way.
What is MCP?
MCP is a lightweight, language‑agnostic protocol that:
| Feature | What it does |
|---|---|
| Tool discovery | The LLM can request the list of available tools (e.g., “search‑web”, “call‑API‑X”). |
| Tool invocation | The LLM sends a JSON payload describing the tool name and arguments. |
| Execution sandbox | The platform validates, authenticates, and runs the tool, then returns the result. |
| Context management | Each tool call’s output is added to the conversation context so the LLM can reason over it later. |
| Security | Only whitelisted tools can be called; data is logged and audited. |
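The round trip described in the table above can be sketched in a few lines of Python. The tool registry, payload shape, and whitelist check here are illustrative assumptions, not StitchGrid's actual implementation:

```python
import json

# Illustrative tool registry: only whitelisted tools are callable.
TOOLS = {
    "search-web": lambda args: f"Top results for {args['query']!r}",
}

def handle_tool_call(raw_payload: str, context: list) -> str:
    """Validate, execute, and record one MCP-style tool invocation."""
    payload = json.loads(raw_payload)           # the LLM emits a JSON payload
    name, args = payload["name"], payload["arguments"]
    if name not in TOOLS:                       # security: whitelist check
        raise ValueError(f"Tool {name!r} is not whitelisted")
    result = TOOLS[name](args)                  # sandboxed execution (stubbed here)
    # Context management: the output joins the conversation context
    context.append({"role": "tool", "name": name, "content": result})
    return result

context = []
out = handle_tool_call('{"name": "search-web", "arguments": {"query": "X"}}', context)
```

After the call, `context` holds the tool result, so a later LLM turn can reason over it.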
How to add an MCP tool
- Create a new MCP transport (if you haven’t already) – this is the “connector” that knows how to call external services.
- Register the tool – give it a name, description, and the function signature (JSON schema).
- Attach the tool to your agent – in the agent UI, toggle the tool on.
- Use it in your prompt – e.g., “Please search the web for the latest news on X.” The LLM will automatically format the call as an MCP request.
Example MCP Tool Definition
```json
{
  "name": "search-web",
  "description": "Search the web for a query and return the top 3 results.",
  "parameters": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "The search query."
      }
    },
    "required": ["query"]
  }
}
```
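Given a definition like this, the platform can validate the LLM's arguments before running the tool. A minimal sketch in Python, using hand-rolled checks rather than a full JSON Schema validator:

```python
# The "parameters" schema from the tool definition above.
SCHEMA = {
    "type": "object",
    "properties": {"query": {"type": "string"}},
    "required": ["query"],
}

# Map JSON Schema type names to Python types for the simple checks below.
JSON_TYPES = {"string": str, "object": dict, "number": (int, float), "boolean": bool}

def validate(args: dict, schema: dict) -> list:
    """Return a list of validation errors (an empty list means valid)."""
    errors = []
    for field in schema.get("required", []):
        if field not in args:
            errors.append(f"missing required field: {field}")
    for field, spec in schema.get("properties", {}).items():
        if field in args and not isinstance(args[field], JSON_TYPES[spec["type"]]):
            errors.append(f"{field} should be of type {spec['type']}")
    return errors
```

A payload like `{"query": "latest news on X"}` passes, while an empty payload or a non-string `query` comes back with errors.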
5. Craft Your System Prompt
The system prompt sets the behavior of the LLM. When you enable MCP, you can add a small instruction that tells the agent to “use MCP tools when needed”.
Example system prompt:
You are a helpful assistant that can call external tools via the Model Context Protocol (MCP). Whenever you need external data, format a JSON request as specified by the tool definitions. If you do not need to call a tool, simply answer the user’s question.
Paste this into the “System Prompt” field in the StitchGrid agent editor; the LLM will then use the registered tool definitions to decide when to invoke MCP tools.
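With that prompt and the search-web tool attached, a tool call emitted by the model would be a JSON request along these lines (the exact envelope may differ):

```json
{
  "name": "search-web",
  "arguments": {
    "query": "latest news on X"
  }
}
```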
6. Test Your Agent
- Run a dry‑run – Use the “Simulate” button to see how the agent behaves.
- Check logs – Verify that tool calls are logged and returned correctly.
- Iterate – Refine prompts or tool signatures based on the results.
7. Deploy
Once you’re satisfied:
- Publish the agent to your workspace.
- Expose it via a webhook or schedule it to run automatically.
- Monitor usage and logs in the StitchGrid dashboard.
Key takeaways:
- MCP lets your LLM safely call external services.
- Add tools by defining JSON schemas and attaching them to agents.
- A concise system prompt tells the LLM to use MCP when needed.
- Test thoroughly before deploying.
Happy building! 🚀