MCP Tools Integration
The MCP (Model Context Protocol) module enables seamless integration of MCP-compliant tools and servers with PraisonAI agents, supporting both stdio and SSE transport methods.
Overview
MCP (Model Context Protocol) is a standard for connecting AI assistants to external tools and data sources. The MCP module in PraisonAI provides:
Stdio Transport: Run MCP servers as subprocess commands
SSE Transport: Connect to HTTP/SSE-based MCP servers
Automatic Tool Discovery: Tools are automatically discovered and made available to agents
Flexible Integration: Support for NPX packages, Python scripts, and remote servers (both transports are sketched below)
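Both transport styles are constructed through the same MCP class; the sketch below simply mirrors the calls shown later on this page (the localhost URL is a placeholder):

from praisonaiagents import MCP

# Stdio transport: the MCP server runs as a local subprocess
local_tools = MCP("npx @modelcontextprotocol/server-memory")

# SSE transport: connect to an already-running HTTP/SSE endpoint (placeholder URL)
remote_tools = MCP("http://localhost:8080/sse")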
Quick Start
For example, an agent can use the official memory server from NPX:
from praisonaiagents import Agent, MCP

# Use the memory MCP server from NPX
agent = Agent(
    name="Memory Assistant",
    instructions="You can store and retrieve memories.",
    tools=MCP("npx @modelcontextprotocol/server-memory")
)

response = agent.start("Remember that my favourite colour is blue")
Transport Methods
Stdio Transport
The stdio transport runs MCP servers as subprocesses:
# Simple command string
tools = MCP("npx @modelcontextprotocol/server-github")

# Python script
tools = MCP("python3 mcp_weather_server.py")

# With environment variables
import os
os.environ["API_KEY"] = "your-api-key"
tools = MCP("node weather-server.js")
SSE Transport
The SSE transport connects to HTTP endpoints using Server-Sent Events:
# Simple SSE endpoint
tools = MCP("http://localhost:8080/sse")

# HTTPS endpoint
tools = MCP("https://api.example.com/mcp/sse")
Available MCP Servers
Official NPX Servers
npx @modelcontextprotocol/server-memory: Store and retrieve conversation memories
npx @modelcontextprotocol/server-filesystem: Read and write files with safety controls (see the invocation sketch below)
npx @modelcontextprotocol/server-github: Interact with GitHub repositories
npx @modelcontextprotocol/server-postgres: Query and manage PostgreSQL databases
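Some of these servers expect extra command-line arguments. The filesystem server, for instance, is normally given the directories it may access; the path below is a placeholder:

# Filesystem server scoped to a specific directory (placeholder path)
file_tools = MCP("npx -y @modelcontextprotocol/server-filesystem /path/to/project")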
Custom Python Servers
Create your own MCP server in Python:
# mcp_calculator.py
import asyncio
from mcp import Server, Tool

server = Server("calculator")

@server.tool
async def add(a: float, b: float) -> float:
    """Add two numbers"""
    return a + b

@server.tool
async def multiply(a: float, b: float) -> float:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    asyncio.run(server.run())
Then use it in an agent:
agent = Agent(
    tools=MCP("python3 mcp_calculator.py")
)
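Depending on which version of the MCP Python SDK is installed, the same calculator can also be written with the official mcp package's FastMCP helper; this is a minimal sketch assuming that API is available:

# mcp_calculator_fastmcp.py - same calculator using the official MCP Python SDK
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calculator")

@mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: float, b: float) -> float:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    mcp.run()  # stdio transport by default

It attaches to an agent the same way, e.g. MCP("python3 mcp_calculator_fastmcp.py").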
Tool Discovery
MCP automatically discovers the tools a server exposes:
# Create MCP instance
mcp = MCP("npx @modelcontextprotocol/server-filesystem")

# Tools are automatically discovered and can be iterated
for tool in mcp:
    print(f"Tool: {tool.name}")
    print(f"Description: {tool.description}")
    print(f"Parameters: {tool.parameters}")

# Assign to agent
agent = Agent(name="File Manager", tools=mcp)
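Because the MCP instance is iterable, an agent can also be given just a subset of the discovered tools. The tool names below are hypothetical and depend on the server, so inspect the discovery output first:

# Keep only selected tools (names are hypothetical; check what the server actually exposes)
selected = [tool for tool in mcp if tool.name in ("read_file", "list_directory")]
agent = Agent(name="Read-Only File Manager", tools=selected)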
Error Handling
MCP includes built-in error handling and retry logic for robust operation.
try:
    agent = Agent(
        tools=MCP("npx @modelcontextprotocol/server-memory")
    )
    response = agent.start("Store this information")
except Exception as e:
    print(f"MCP Error: {e}")
    # Fallback logic
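A simple fallback pattern is to treat the MCP server as optional, so the agent can still start with an empty tool list if the subprocess fails to launch. This is a sketch of application-level handling, not a built-in PraisonAI feature:

def memory_tools_or_empty():
    """Return the memory MCP tools, or an empty list if the server cannot be started."""
    try:
        return MCP("npx @modelcontextprotocol/server-memory")
    except Exception as e:
        print(f"MCP unavailable, continuing without memory tools: {e}")
        return []

agent = Agent(
    name="Resilient Assistant",
    instructions="Use memory tools when they are available.",
    tools=memory_tools_or_empty()
)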
Advanced Usage
Multiple MCP Servers
from praisonaiagents import Agent, MCP

# Combine multiple MCP servers
memory_tools = MCP("npx @modelcontextprotocol/server-memory")
file_tools = MCP("npx @modelcontextprotocol/server-filesystem")
custom_tools = MCP("python3 custom_mcp.py")

agent = Agent(
    name="Multi-Tool Assistant",
    instructions="You can manage files and memories.",
    tools=[*memory_tools, *file_tools, *custom_tools]
)
Environment Configuration
import os

# Set environment variables for MCP servers
os.environ["GITHUB_TOKEN"] = "your-github-token"
os.environ["DATABASE_URL"] = "postgresql://..."

# MCP servers can access these variables
github_tools = MCP("npx @modelcontextprotocol/server-github")
db_tools = MCP("python3 database_mcp.py")
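To avoid hardcoding secrets as in the snippet above, they can be loaded from a local .env file; this sketch assumes the python-dotenv package is installed:

# pip install python-dotenv
from dotenv import load_dotenv

# Populate os.environ with GITHUB_TOKEN, DATABASE_URL, etc. from a local .env file
load_dotenv()

github_tools = MCP("npx @modelcontextprotocol/server-github")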
Debugging MCP Connections
# Enable debug mode
mcp = MCP(
    "npx @modelcontextprotocol/server-memory",
    debug=True  # Prints detailed logs
)

# Check if tools are loaded
if not list(mcp):
    print("No tools discovered from MCP server")
Creating SSE MCP Servers
Example SSE server implementation:
# sse_mcp_server.py
from flask import Flask, Response
import json

app = Flask(__name__)

@app.route('/sse')
def sse():
    def generate():
        # Send tool definitions
        tools = [{
            "name": "get_weather",
            "description": "Get weather information",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                },
                "required": ["location"]
            }
        }]
        yield f"data: {json.dumps({'type': 'tools', 'tools': tools})}\n\n"
    return Response(generate(), mimetype="text/event-stream")

if __name__ == '__main__':
    app.run(port=8080)
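Once a server like the one above is running locally, an agent connects to it over the SSE transport in the usual way; the URL simply has to match the port the server listens on:

from praisonaiagents import Agent, MCP

# Assumes sse_mcp_server.py is already running on port 8080
agent = Agent(
    name="Weather Assistant",
    instructions="Answer weather questions using the available tools.",
    tools=MCP("http://localhost:8080/sse")
)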
Best Practices
Transport Choice
Use stdio for local tools and development
Use SSE for remote/cloud deployments
Consider latency and reliability needs
Reliability
Implement timeouts for long-running operations
Handle server disconnections gracefully
Provide fallback options
Security
Validate input/output from MCP servers
Use environment variables for secrets
Implement proper authentication for SSE
Performance
Reuse MCP instances when possible
Monitor subprocess resource usage
Implement connection pooling for SSE
Complete Example
The example below combines the filesystem, GitHub, and memory servers in a single multi-task workflow:
from praisonaiagents import Agent, Task, PraisonAIAgents, MCP
import os

# Set up environment
os.environ["GITHUB_TOKEN"] = "ghp_..."

# Create agent with multiple MCP tools
agent = Agent(
    name="DevOps Assistant",
    instructions="""You are a DevOps assistant that can:
    - Manage files and directories
    - Interact with GitHub repositories
    - Store and retrieve important information
    Use your tools wisely to help with development tasks.""",
    tools=[
        MCP("npx @modelcontextprotocol/server-filesystem"),
        MCP("npx @modelcontextprotocol/server-github"),
        MCP("npx @modelcontextprotocol/server-memory")
    ]
)

# Create tasks
tasks = [
    Task(
        description="Check the current directory structure",
        agent=agent,
        expected_output="Directory listing with key files identified"
    ),
    Task(
        description="Remember the project structure for future reference",
        agent=agent,
        expected_output="Confirmation that structure is memorised"
    )
]

# Run the system
agents = PraisonAIAgents(agents=[agent], tasks=tasks)
result = agents.start()
Next Steps