Code with Yoha

Building Custom MCP Agents with Nanobot: From Setup to Deployment

CodeWithYoha · 6 min read

Introduction

In the rapidly evolving landscape of AI, the ability to connect Large Language Models (LLMs) to real-world data and tools is a game-changer. The Model Context Protocol (MCP), introduced by Anthropic, provides an open standard for how AI models interact with external tools and data sources. While MCP servers handle the "how" of tool interaction, building a full-featured agent often requires a layer of orchestration, memory, and reasoning.

Nanobot (specifically the nanobot-ai framework) is an ultra-lightweight Python-based agent framework designed to transform MCP servers into autonomous agents. It abstracts the complexities of tool orchestration, provides built-in reasoning capabilities, and supports rich UI rendering. Whether you're automating DevOps workflows, building personal productivity bots, or managing complex IoT swarms, Nanobot offers a streamlined path from setup to deployment.

This guide will walk you through the lifecycle of building custom MCP agents with Nanobot, focusing on the actual configuration-driven workflow and practical integration strategies.

Prerequisites

Before we dive in, ensure you have the following ready:

  • Python 3.11+: Nanobot requires a modern Python environment.
  • API Key: An API key from a provider like OpenAI, Anthropic, or OpenRouter.
  • Basic CLI Knowledge: Comfort with terminal operations and Python virtual environments.
  • MCP Servers: Either local servers (stdio-based) or remote servers (HTTP-based) that you wish to integrate.

1. Understanding MCP and Nanobot

The Model Context Protocol (MCP)

MCP is an open standard that enables AI models to safely and efficiently access data and tools. It establishes a standard interface for:

  • Tools: Executable functions (e.g., searching the web, reading a file).
  • Resources: Static or dynamic data (e.g., documentation, log files).
  • Prompts: Pre-defined templates for agent behavior.
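Concretely, an MCP server advertises each tool as a small JSON descriptor that the model can inspect before calling it. The three fields below (name, description, inputSchema) come from the MCP specification; the `read_file` tool itself is an illustrative example, not a real server's listing:

```python
import json

# A tool descriptor, as an MCP server might return it from a tools/list call.
# name, description, and inputSchema are the spec-defined fields; the concrete
# tool shown here is made up for illustration.
read_file_tool = {
    "name": "read_file",
    "description": "Read the contents of a file from the workspace.",
    "inputSchema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}

print(json.dumps(read_file_tool, indent=2))
```

The `inputSchema` is plain JSON Schema, which is what lets any MCP client (including Nanobot) validate and construct tool calls without tool-specific code.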

Why Nanobot?

Nanobot sits on top of MCP servers to provide:

  • Agent Personas: System prompts that define how the agent reasons and acts.
  • State & Memory: Built-in conversational history and context management.
  • Multi-Channel Support: Deploy agents to CLI, Telegram, Discord, or web apps.
  • Ultra-Lightweight: Minimal dependencies and a configuration-first approach.

2. Setting Up Your Development Environment

Setting up Nanobot is a two-step process: installation and initialization.

Step 1: Installation

We recommend using a virtual environment to keep your dependencies clean.

# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: .\venv\Scripts\activate

# Install the official nanobot-ai package
pip install nanobot-ai

Step 2: Initialization

Run the onboarding command to initialize your configuration and set up your workspace.

nanobot onboard

This command will guide you through:

  1. Connecting your LLM provider (setting up the API key).
  2. Creating your initial configuration file, usually located at ~/.nanobot/config.json.

3. Configuring Your Custom Agent

Unlike traditional frameworks that require complex Python subclasses, Nanobot agents are primarily configuration-driven. You define your agents and the MCP servers they use in a workspace file (e.g., nanobot.yaml).

The Agent Configuration (nanobot.yaml)

Create a file named nanobot.yaml in your project directory:

# nanobot.yaml
agent:
  name: "system-engineer"
  model: "gpt-4-turbo"
  system_prompt: |
    You are a senior DevOps and System Engineer agent.
    You help users monitor system health, manage logs, and automate deployments.
    Use the provided tools to gather data before making recommendations.

tools:
  mcpServers:
    # Example: Integrating a local filesystem MCP server
    filesystem:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/logs"]
    
    # Example: Integrating a remote weather service
    weather:
      url: "https://mcp-weather-server.example.com/mcp"

In this setup:

  • agent: Defines the "brain" of your agent, its model, and its persona.
  • tools.mcpServers: Lists the MCP servers that provide the actual functionality. Nanobot will automatically discover and orchestrate these tools.

4. Building Custom Tools for Your Agent

If your agent needs specific capabilities (like a System Health Monitor), you can build a custom MCP server. Since Nanobot uses standard MCP, your server can be written in Python, TypeScript, or Go.

Example: A Simple Python MCP Server

If you wanted to build the health monitor mentioned earlier, you'd create a standalone MCP server (e.g., using the mcp[cli] Python library) and then add it to your nanobot.yaml:

# Adding your custom health monitor server
tools:
  mcpServers:
    health_monitor:
      command: "python"
      args: ["/path/to/your/health_mcp_server.py"]
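A minimal sketch of what `health_mcp_server.py` could contain. The metric-gathering function uses only the Python standard library (and is Unix-only because of `os.getloadavg`); the commented-out wiring follows the official `mcp` Python SDK's FastMCP interface, and the tool name and chosen metrics are illustrative:

```python
import json
import os
import shutil

def system_health() -> str:
    """Gather basic host metrics and return them as JSON for the agent to read."""
    load1, _load5, _load15 = os.getloadavg()  # 1/5/15-minute load averages (Unix)
    disk = shutil.disk_usage("/")
    return json.dumps({
        "load_avg_1m": round(load1, 2),
        "disk_used_pct": round(disk.used / disk.total * 100, 1),
    })

# With the official `mcp` Python SDK (pip install "mcp[cli]"), the same
# function can be exposed over stdio roughly like this:
#
#   from mcp.server.fastmcp import FastMCP
#   server = FastMCP("health-monitor")
#   server.tool()(system_health)
#   server.run()  # stdio transport, matching the command/args entry above
```

Because the server speaks stdio, the `command`/`args` entry in `nanobot.yaml` is all the integration Nanobot needs.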

5. Interacting with Your Agent

Once configured, you can run and interact with your agent directly from the CLI.

Running the Agent

Start a session with your custom agent:

nanobot run system-engineer

This opens an interactive chat where the agent can call the tools defined in your configuration. You can ask:

  • "What are the latest errors in the log directory?" (using the filesystem tool).
  • "Run the system health check and summarize the results." (using your health_monitor tool).

6. Deployment Strategies

Nanobot is designed to be flexible, whether you're running it locally or as a cloud-based gateway.

1. Gateway Deployment (Telegram/Discord)

Nanobot can run as a gateway, listening to chat messages and invoking the agent.

# Start the gateway
nanobot gateway

You'll need to configure your channel-specific credentials (like a Telegram Bot Token) in your config.json.

2. Containerization (Docker)

For persistent deployments on a VPS, use Docker. A simple Dockerfile for Nanobot looks like this:

FROM python:3.11-slim

WORKDIR /app

# Node.js and npm are needed for npx-based MCP servers (e.g. server-filesystem)
RUN apt-get update \
    && apt-get install -y --no-install-recommends nodejs npm \
    && rm -rf /var/lib/apt/lists/*

# Install nanobot
RUN pip install --no-cache-dir nanobot-ai

# Copy your configuration
COPY nanobot.yaml .
COPY config.json /root/.nanobot/config.json

# Run the agent in gateway mode
CMD ["nanobot", "gateway"]

3. Integrated UI with MCP-UI

Nanobot supports MCP-UI, allowing you to render custom interactive components (like graphs or dashboards) directly in the chat interface. This is perfect for the "System Health Monitor" scenario, where you can display CPU metrics as a live chart instead of just text.
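One way to picture this: in the MCP-UI convention, an interactive component travels back from a tool as an embedded MCP resource whose URI uses the `ui://` scheme, with renderable content as the payload. A rough, illustrative sketch of such a payload (the URI, markup, and values are made up for the example):

```python
# Sketch of an MCP-UI embedded resource: a ui:// URI plus renderable
# content (raw HTML here). Field values are illustrative only.
cpu_chart_resource = {
    "type": "resource",
    "resource": {
        "uri": "ui://health-monitor/cpu-chart",
        "mimeType": "text/html",
        "text": "<div id='cpu-chart'>CPU: 42%</div>",
    },
}

print(cpu_chart_resource["resource"]["uri"])
```

A UI-capable client recognizes the `ui://` scheme and renders the content inline, while plain-text clients can fall back to the tool's textual output.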

7. Best Practices

  • Granular Agents: Instead of one "do-it-all" agent, create specialized agents (e.g., git-helper, cloud-architect) and switch between them.
  • Tool Sanitization: Ensure your MCP servers handle errors gracefully. Nanobot will report server errors to the agent, but robust tool-side validation is better.
  • Environment Variables: Keep API keys and tokens out of your nanobot.yaml. Use .env files or system environment variables, which Nanobot can read.
  • Idempotency: Ensure that tools (like a "restart-server" tool) are idempotent to avoid accidental side effects during agent reasoning loops.
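For the environment-variable practice above, exporting the key in your shell keeps it out of version-controlled files. The variable name is an assumption here; use whichever one your provider setup expects:

```shell
# Keep secrets out of nanobot.yaml: export them in the environment instead.
# OPENAI_API_KEY is illustrative; substitute your provider's expected variable.
export OPENAI_API_KEY="sk-your-key-here"
```

The same line can live in a .env file or your shell profile for persistent sessions.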

Conclusion

Building custom MCP agents with Nanobot bridges the gap between raw LLM capabilities and practical system automation. By leveraging the Model Context Protocol, you can create a reusable ecosystem of tools that multiple agents can share. Whether you are using it for local productivity or at-scale engineering orchestration, Nanobot provides the ultra-lightweight glue needed to bring your AI agents to life.

Ready to start? Install nanobot-ai and run nanobot onboard today!