# Tool Orchestration
MCPOmni Connect provides sophisticated tool orchestration: tools from multiple MCP servers can be coordinated and executed in a single workflow, with intelligent routing and shared context management.
## Overview
Tool orchestration enables:
- **Cross-Server Coordination**: Use tools from multiple servers in a single workflow
- **Intelligent Routing**: Automatic selection of the best tool for each task
- **Context Sharing**: Pass data between tools seamlessly
- **Parallel Execution**: Run independent tools simultaneously
- **Error Handling**: Graceful handling of tool failures with fallbacks
## Core Concepts

### Tool Discovery
MCPOmni Connect automatically discovers tools from all connected servers.

**Output Example:**

```
Available tools across 4 servers:

📁 filesystem server:
  - read_file: Read contents of a file
  - write_file: Write content to a file
  - list_directory: List directory contents
  - search_files: Search for files matching patterns

🗃️ database server:
  - query_database: Execute SQL queries
  - get_schema: Get database schema information
  - backup_database: Create database backup
  - migrate_data: Migrate data between tables

📧 notifications server:
  - send_email: Send email notifications
  - create_alert: Create system alerts
  - send_slack: Send Slack messages
  - log_event: Log system events

🌐 api server:
  - http_request: Make HTTP requests
  - webhook_trigger: Trigger webhooks
  - api_auth: Authenticate with external APIs
```
### Tool Capabilities

Each tool provides metadata about its capabilities (see the sketch after this list):

- **Input Parameters**: Required and optional parameters
- **Output Format**: Type and structure of responses
- **Error Conditions**: Possible failure modes
- **Performance Characteristics**: Expected execution time
- **Dependencies**: Other tools or resources required
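As a rough illustration, this metadata could be modeled as a record like the following. The `ToolSpec` dataclass and its field names are hypothetical, not MCPOmni Connect's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ToolSpec:
    """Illustrative tool-metadata record (not MCPOmni Connect's actual schema)."""
    name: str
    server: str
    required_params: list[str]
    optional_params: list[str] = field(default_factory=list)
    output_format: str = "json"                           # type/structure of responses
    error_conditions: list[str] = field(default_factory=list)
    avg_latency_s: float = 1.0                            # expected execution time
    depends_on: list[str] = field(default_factory=list)   # other tools/resources required

backup_tool = ToolSpec(
    name="backup_database",
    server="database",
    required_params=["table"],
    optional_params=["compression"],
    error_conditions=["connection_lost", "disk_full"],
    avg_latency_s=42.1,   # matches the analytics example later in this page
)
```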
## Intelligent Tool Routing

### Automatic Tool Selection
The AI automatically selects appropriate tools based on the task:
```
> Analyze the database and create a backup, then notify the team

🤖 Planning task execution:

Step 1: 🔧 get_schema (database server)
   └─ Purpose: Analyze database structure

Step 2: 🔧 query_database (database server)
   └─ Purpose: Check database health

Step 3: 🔧 backup_database (database server)
   └─ Purpose: Create backup

Step 4: 🔧 send_slack (notifications server)
   └─ Purpose: Notify team of completion
```
### Tool Preference Logic

MCPOmni Connect weighs several factors when selecting a tool (see the sketch after this list):

- **Functional Match**: Does the tool perform the required function?
- **Server Availability**: Is the server currently accessible?
- **Parameter Compatibility**: Can the tool accept the available data?
- **Performance History**: Has this tool been reliable?
- **Context Relevance**: Does it fit the current workflow?
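A minimal sketch of how these factors might combine into a single score. The weights, dict shapes, and the `score_tool` helper are assumptions for illustration, not the actual selection algorithm:

```python
def score_tool(tool, task, available_servers, success_rate):
    """Combine the selection factors into one score (weights are illustrative)."""
    if tool["server"] not in available_servers:               # availability: hard filter
        return 0.0
    score = 0.0
    if tool["function"] == task["function"]:                  # functional match
        score += 3.0
    if set(tool["required_params"]) <= set(task["available_data"]):  # parameter compatibility
        score += 2.0
    score += success_rate.get(tool["name"], 0.5)              # performance history (0.0-1.0)
    if tool["server"] == task.get("context_server"):          # context relevance
        score += 1.0
    return score

tool = {"name": "backup_database", "server": "database",
        "function": "backup", "required_params": ["table"]}
task = {"function": "backup", "available_data": ["table"], "context_server": "database"}
print(score_tool(tool, task, {"database"}, {"backup_database": 0.98}))  # 6.98
```

Note that availability acts as a hard filter, while the remaining factors are merely weighed against each other.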
## Cross-Server Workflows

### Data Flow Between Servers
Tools can pass data seamlessly across servers:
```
> Read the config file, query the database for matching records, and send a summary email

🔧 Step 1: read_file (filesystem) → config.json content
📊 Step 2: query_database (database) ← config parameters
📧 Step 3: send_email (notifications) ← query results
```
### Context Preservation
Data flows between tools while maintaining context:
```python
# Conceptual data flow (extract_db_params and generate_summary are
# illustrative helpers, not tools from the servers above)
config_data = filesystem.read_file("config.json")    # filesystem server
query_params = extract_db_params(config_data)        # shape config into query input
results = database.query_database(query_params)      # database server
summary = generate_summary(results)                  # condense results for humans
notifications.send_email(                            # notifications server
    to="team@company.com", subject="Summary", body=summary
)
```
## Parallel Tool Execution

### Concurrent Operations
Independent tools can run simultaneously:
```
> Check system health: verify database connectivity, check file system space, and test API endpoints

🤖 Executing parallel health checks:

🔧 Parallel Execution Group 1:
├─ get_schema (database server) ⏱️ 2.1s ✅
├─ list_directory (filesystem server) ⏱️ 0.8s ✅
└─ http_request (api server) ⏱️ 1.5s ✅

All health checks completed in 2.1s (fastest possible)
```
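A sketch of how such a fan-out could be implemented with Python's `asyncio.gather`; the three check coroutines are stand-ins for real tool calls, with `asyncio.sleep` simulating the execution times above:

```python
import asyncio

async def check_database():      # stand-in for get_schema on the database server
    await asyncio.sleep(2.1)
    return "database: ok"

async def check_filesystem():    # stand-in for list_directory on the filesystem server
    await asyncio.sleep(0.8)
    return "filesystem: ok"

async def check_api():           # stand-in for http_request on the api server
    await asyncio.sleep(1.5)
    return "api: ok"

async def main():
    # Independent checks run concurrently: total time ≈ the slowest check (2.1s),
    # not the sum of all three (4.4s).
    results = await asyncio.gather(check_database(), check_filesystem(), check_api())
    print(results)

asyncio.run(main())
```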
### Dependency Management

Tools with dependencies execute in the proper order:
```
> Deploy application: build, test, deploy, then notify

🤖 Dependency-aware execution:

Phase 1 (Sequential):
🔧 build_application → ✅ (3.2s)
🔧 run_tests ← build artifacts → ✅ (5.7s)

Phase 2 (Parallel):
🔧 deploy_staging ← test results → ✅ (12.3s)
🔧 update_documentation → ✅ (4.1s)
🔧 prepare_notifications → ✅ (0.5s)

Phase 3 (Final):
🔧 send_deployment_notification ← all results → ✅ (1.2s)

Total time: 22.7s (vs 27.0s sequential)
```
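One way to derive such phases is a topological sort of the dependency graph. Here is a minimal sketch using Python's standard-library `graphlib`; the dependency mapping is an assumption read off the transcript above, and the sketch yields slightly finer-grained phases (it separates the two sequential build steps):

```python
import graphlib  # standard library, Python 3.9+

# Each task maps to the set of tasks it depends on (deploy example above).
deps = {
    "build_application": set(),
    "run_tests": {"build_application"},
    "deploy_staging": {"run_tests"},
    "update_documentation": {"run_tests"},
    "prepare_notifications": {"run_tests"},
    "send_deployment_notification": {
        "deploy_staging", "update_documentation", "prepare_notifications",
    },
}

ts = graphlib.TopologicalSorter(deps)
ts.prepare()
phase = 1
while ts.is_active():
    ready = list(ts.get_ready())      # everything in `ready` can run in parallel
    print(f"Phase {phase}: {ready}")
    ts.done(*ready)
    phase += 1
```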
## Tool Chaining and Composition

### Simple Tool Chains
Basic sequential tool execution:
```
# User request
> Create a backup of the user data and email me the results

# Tool chain
read_database → backup_data → compress_backup → send_email
```
### Complex Compositions
Advanced workflows with branching and merging:
```
# User request
> Analyze all log files, identify errors, and create both a summary report and individual notifications

# Complex composition
list_log_files →
├─ analyze_errors → summarize_errors → create_report
├─ extract_critical → send_alerts
└─ archive_logs → update_inventory
```
### Conditional Execution
Tools can execute based on conditions:
```
> Check database health and backup if needed

🤖 Conditional workflow:

🔧 check_database_health → Status: DEGRADED
├─ Condition: health < 90% → TRUE
├─ 🔧 backup_database → ✅ Backup created
├─ 🔧 send_alert → ✅ Team notified
└─ 🔧 schedule_maintenance → ✅ Maintenance scheduled

🔧 check_database_health → Status: HEALTHY
└─ Condition: health < 90% → FALSE (no backup needed)
```
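A sketch of the same health-gated branch in code; `call_tool` is a hypothetical dispatcher, and the 90% threshold comes from the transcript above:

```python
def run_conditional_maintenance(call_tool):
    """call_tool(name, **params) stands in for dispatching a tool to its MCP server."""
    health = call_tool("check_database_health")      # assume a 0-100 health score
    if health < 90:                                  # threshold from the transcript above
        call_tool("backup_database")
        call_tool("send_alert", message=f"DB health degraded: {health}%")
        call_tool("schedule_maintenance")
    # healthy path: condition is FALSE, nothing more to do

# Demo with a stub dispatcher that reports DEGRADED health:
run_conditional_maintenance(
    lambda name, **params: 85 if name == "check_database_health" else print(name)
)
```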
## Error Handling and Resilience

### Automatic Retries

Failed tool calls are automatically retried with exponential backoff:
```
🔧 http_request (api server) → ❌ Connection timeout
├─ Retry 1/3 (wait 2s) → ❌ Connection timeout
├─ Retry 2/3 (wait 4s) → ❌ Connection timeout
├─ Retry 3/3 (wait 8s) → ✅ Success
└─ Total time: 14.2s (with retries)
```
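A sketch of the retry loop implied above: the wait doubles on each attempt (2s, 4s, 8s). The `call_with_retries` helper and its `ConnectionError` handling are illustrative assumptions, not the actual implementation:

```python
import time

def call_with_retries(call, max_retries=3, base_wait=2.0):
    """Retry a failing call with waits that double each attempt (2s, 4s, 8s)."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except ConnectionError:
            if attempt == max_retries:
                raise                                # out of retries: surface the error
            time.sleep(base_wait * (2 ** attempt))   # 2s, then 4s, then 8s

# result = call_with_retries(lambda: flaky_http_request())  # flaky_http_request is a stand-in
```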
### Fallback Strategies

Alternative tools are used when the primary tool fails:
```
🔧 send_slack (notifications server) → ❌ Slack API unreachable
├─ Fallback: send_email (notifications server) → ❌ SMTP error
├─ Fallback: log_event (notifications server) → ✅ Logged to file
└─ Strategy: Notify via available channel
```
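The same strategy as a sketch: try each channel in priority order and stop at the first success. The channel functions are hypothetical stand-ins for the tools above:

```python
def notify_with_fallbacks(message, channels):
    """Try channels in priority order; return the name of the first that succeeds."""
    errors = []
    for name, send in channels:
        try:
            send(message)
            return name                          # notified via this channel
        except Exception as exc:                 # any failure triggers the next fallback
            errors.append((name, exc))
    raise RuntimeError(f"All notification channels failed: {errors}")

# Usage, with send_slack / send_email / log_event as stand-ins for the tools above:
# notify_with_fallbacks("Backup complete",
#                       [("slack", send_slack), ("email", send_email), ("log", log_event)])
```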
### Graceful Degradation

The workflow continues even when some tools fail:
```
> Generate system report with all subsystem status

🔧 Orchestrated reporting:
├─ database_status → ✅ Connected (healthy)
├─ filesystem_status → ❌ Permission denied
├─ api_status → ✅ All endpoints responding
└─ notification_status → ✅ Services operational

📊 Result: Partial report generated (3/4 subsystems)
⚠️ Warning: Filesystem status unavailable (permission issue)
```
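A sketch of the collect-what-you-can pattern: each subsystem check runs independently, and failures become warnings instead of aborting the report. `build_status_report` is an illustrative helper, not MCPOmni Connect's API:

```python
def build_status_report(checks):
    """checks: mapping of subsystem name -> zero-argument check function."""
    results, warnings = {}, []
    for name, check in checks.items():
        try:
            results[name] = check()
        except Exception as exc:
            warnings.append(f"{name} unavailable ({exc})")   # degrade, don't abort
    return {
        "results": results,
        "coverage": f"{len(results)}/{len(checks)} subsystems",
        "warnings": warnings,
    }

def filesystem_status():
    raise PermissionError("Permission denied")

report = build_status_report({"database": lambda: "healthy",
                              "filesystem": filesystem_status})
print(report["coverage"], report["warnings"])   # 1/2 subsystems, one warning
```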
## Performance Optimization

### Tool Caching
Frequently used tool results are cached:
```
🔧 get_schema (database server)
├─ Cache check → ❌ Not found
├─ Execute tool → ✅ Schema retrieved (2.3s)
└─ Cache stored → Valid for 5 minutes

🔧 get_schema (database server) [later request]
├─ Cache check → ✅ Found (age: 2m 15s)
└─ Return cached → Instant response (0.001s)
```
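A sketch of a small time-to-live cache like the one described; the 5-minute validity is taken from the example, and the `TTLCache` class is illustrative rather than the actual implementation:

```python
import time

class TTLCache:
    """Cache tool results for a fixed time-to-live (illustrative, not thread-safe)."""

    def __init__(self, ttl_seconds=300):             # 5 minutes, as in the example
        self.ttl = ttl_seconds
        self._store = {}                             # key -> (stored_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None                              # cache miss
        stored_at, value = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]                     # expired: treat as a miss
            return None
        return value

    def put(self, key, value):
        self._store[key] = (time.monotonic(), value)

cache = TTLCache()
cache.put("database:get_schema", {"tables": ["users", "orders"]})
print(cache.get("database:get_schema"))             # instant hit until the TTL lapses
```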
### Connection Pooling
Reuse server connections for better performance:
```
Connection Pool Status:
├─ filesystem server: 2 active connections
├─ database server: 1 active connection
├─ notifications server: 1 idle connection
└─ api server: 3 active connections

🔧 Tool execution using pooled connections:
├─ read_file → Reused connection #1 (0.1ms setup)
├─ query_database → Reused connection #1 (0.1ms setup)
└─ send_email → New connection #2 (15.2ms setup)
```
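The reuse idea in miniature: keep a live connection per server and hand it back out instead of reconnecting. `ConnectionPool` here is an illustrative sketch, not the real pool:

```python
class ConnectionPool:
    """Keep one live connection per server and reuse it (illustrative sketch)."""

    def __init__(self, connect):
        self._connect = connect                     # server name -> new connection
        self._pool = {}

    def acquire(self, server):
        if server not in self._pool:                # first use pays the full setup cost
            self._pool[server] = self._connect(server)
        return self._pool[server]                   # later uses are nearly free

pool = ConnectionPool(connect=lambda server: f"session-to-{server}")
pool.acquire("filesystem")                          # new connection (slow setup)
pool.acquire("filesystem")                          # reused connection (fast)
```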
### Batch Operations
Group similar operations for efficiency:
```
# Instead of individual file reads:
🔧 read_file("file1.txt") → 45ms
🔧 read_file("file2.txt") → 43ms
🔧 read_file("file3.txt") → 44ms
Total: 132ms

# Batch operation:
🔧 read_multiple_files(["file1.txt", "file2.txt", "file3.txt"]) → 52ms
Improvement: 60% faster
```
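The arithmetic behind the speedup: per-call overhead dominates small reads, and batching pays it once. The overhead figures below are assumptions chosen to reproduce the example's totals:

```python
def read_individually(paths, overhead_ms=40, per_file_ms=4):
    # One round trip per file: the fixed overhead is paid N times.
    return overhead_ms * len(paths) + per_file_ms * len(paths)

def read_batched(paths, overhead_ms=40, per_file_ms=4):
    # One round trip total: the fixed overhead is paid once.
    return overhead_ms + per_file_ms * len(paths)

paths = ["file1.txt", "file2.txt", "file3.txt"]
print(read_individually(paths))   # 132 (ms), as in the example
print(read_batched(paths))        # 52 (ms), roughly 60% faster
```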
## Tool Monitoring and Analytics

### Real-time Monitoring
Track tool performance during execution:
```
/debug  # Enable detailed monitoring

🔧 Tool Execution Monitor:
├─ query_database: 2.3s (normal)
├─ send_email: 1.1s (fast)
├─ http_request: 8.7s (slow) ⚠️
└─ backup_database: 45.2s (normal for size)

Performance Alert: http_request exceeding normal time (3s avg)
```
### Usage Analytics

Review aggregate usage with `/tool_stats`:

```
/tool_stats

📊 Tool Usage Analytics (Last 24h):
├─ Most Used: read_file (47 calls)
├─ Fastest Avg: list_directory (0.3s avg)
├─ Slowest Avg: backup_database (42.1s avg)
├─ Highest Success Rate: send_email (100%)
├─ Lowest Success Rate: http_request (87%)
└─ Total Tool Calls: 234
```
## Advanced Orchestration Features

### Dynamic Tool Loading

Add new tools at runtime:
```
/add_servers:new_tools.json

🔧 New tools discovered:
├─ image_processing server: 4 new tools
├─ ml_analysis server: 7 new tools
└─ Updated tool registry: 23 → 34 tools

Auto-integration complete: New tools available immediately
```
### Tool Composition Templates
Save common workflow patterns:
```
# Save workflow as template
/save_workflow:database_maintenance

# Template includes:
├─ backup_database
├─ analyze_performance
├─ optimize_indexes
├─ update_statistics
└─ send_completion_report

# Reuse template
/execute_workflow:database_maintenance
```
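As a guess at what such a template might store, assuming a simple name-plus-ordered-steps shape (the actual storage format is not documented here):

```python
import json

# Hypothetical on-disk shape for a saved workflow template.
template = {
    "name": "database_maintenance",
    "steps": [
        "backup_database",
        "analyze_performance",
        "optimize_indexes",
        "update_statistics",
        "send_completion_report",
    ],
}

print(json.dumps(template, indent=2))
```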
### Custom Tool Routing
Override automatic tool selection:
```
# Force specific tool selection
> Use the backup tool from the primary database server to backup the users table

🤖 Custom routing applied:
├─ Server: primary-database (forced)
├─ Tool: backup_database (forced)
├─ Parameters: table=users
└─ Alternative tools ignored
```
## Troubleshooting Tool Orchestration

### Common Issues
**Tool Not Found**

```
Error: Tool 'unknown_tool' not found
```

Solutions:

- Confirm the server that provides the tool is connected and appears in the tool registry
- Check the tool name against the discovered tools for each server
- Re-add or reconnect the server (e.g., `/add_servers:config.json`) to refresh tool discovery

**Tool Execution Timeout**

```
Error: Tool execution timeout after 30s
```

Solutions:

- Check the target server's load and network connectivity
- Allow more headroom for tools that are legitimately slow (large backups, big queries)
- Enable `/debug` to see which step of the workflow is stalling
### Performance Issues
**Slow Tool Execution**

```
Issue: Tools taking longer than expected
```

Diagnosis:

- Enable `/debug` to monitor per-tool execution times in real time
- Compare current timings against the averages reported by `/tool_stats`
- Check whether caching and connection pooling are being hit for repeated operations
## Best Practices

**Orchestration Best Practices**

- **Start Simple**: Begin with single-tool operations
- **Monitor Performance**: Use `/debug` for complex workflows
- **Handle Errors**: Plan for tool failures
- **Use Caching**: Enable caching for repeated operations
- **Optimize Parallel**: Identify independent operations
Next: Resource Management →