Swarm Advanced

Orchestrate specialized AI agents for complex distributed workflows


Advanced swarm orchestration patterns for research, development, testing, and complex distributed workflows


See It In Action


User Prompt

I need to research AI trends in 2025 comprehensively. Set up a research swarm with web researchers, academic researchers, data analysts, and a report writer.

Agent Response

Comprehensive research report with validated sources, trend analysis, and actionable insights generated through coordinated parallel research

Quick Start (3 Steps)

Get up and running in minutes

1. Install

claude-code skill install swarm-advanced

2. Config

3. First Trigger

@swarm-advanced help

Commands

| Command | Description | Required Args |
| --- | --- | --- |
| @swarm-advanced research-team-coordination | Deploy a mesh topology swarm for parallel research gathering, analysis, and synthesis | None |
| @swarm-advanced full-stack-development | Coordinate a hierarchical development team for building complete applications | None |
| @swarm-advanced quality-assurance-pipeline | Execute comprehensive testing through star topology coordination | None |

Typical Use Cases

Research Team Coordination

Deploy a mesh topology swarm for parallel research gathering, analysis, and synthesis

Full-Stack Development

Coordinate a hierarchical development team for building complete applications

Quality Assurance Pipeline

Execute comprehensive testing through star topology coordination

Overview

Advanced Swarm Orchestration

Master advanced swarm patterns for distributed research, development, and testing workflows. This skill covers comprehensive orchestration strategies using both MCP tools and CLI commands.

Quick Start

Prerequisites

```bash
# Ensure Claude Flow is installed
npm install -g claude-flow@alpha

# Add MCP server (if using MCP tools)
claude mcp add claude-flow npx claude-flow@alpha mcp start
```

Basic Pattern

```javascript
// 1. Initialize swarm topology
mcp__claude-flow__swarm_init({ topology: "mesh", maxAgents: 6 })

// 2. Spawn specialized agents
mcp__claude-flow__agent_spawn({ type: "researcher", name: "Agent 1" })

// 3. Orchestrate tasks
mcp__claude-flow__task_orchestrate({ task: "...", strategy: "parallel" })
```

Core Concepts

Swarm Topologies

Mesh Topology - Peer-to-peer communication, best for research and analysis

  • All agents communicate directly
  • High flexibility and resilience
  • Use for: Research, analysis, brainstorming

Hierarchical Topology - Coordinator with subordinates, best for development

  • Clear command structure
  • Sequential workflow support
  • Use for: Development, structured workflows

Star Topology - Central coordinator, best for testing

  • Centralized control and monitoring
  • Parallel execution with coordination
  • Use for: Testing, validation, quality assurance

Ring Topology - Sequential processing chain

  • Step-by-step processing
  • Pipeline workflows
  • Use for: Multi-stage processing, data pipelines
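The topology guidance above can be captured in a small lookup helper. The `chooseTopology` function and its task categories are illustrative names for this sketch, not part of the Claude Flow API.

```javascript
// Map a task category to the topology suggested above.
// Function name and categories are illustrative, not a Claude Flow API.
const TOPOLOGY_BY_TASK = {
  research: "mesh",
  analysis: "mesh",
  brainstorming: "mesh",
  development: "hierarchical",
  testing: "star",
  validation: "star",
  pipeline: "ring",
};

function chooseTopology(taskKind) {
  return TOPOLOGY_BY_TASK[taskKind] ?? "mesh"; // mesh is a flexible default
}

console.log(chooseTopology("testing")); // "star"
```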

Agent Strategies

  • Adaptive - Dynamic adjustment based on task complexity
  • Balanced - Equal distribution of work across agents
  • Specialized - Task-specific agent assignment
  • Parallel - Maximum concurrent execution
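As one illustration, the equal distribution promised by the balanced strategy can be sketched as round-robin assignment. This is a toy model of the idea, not the scheduler Claude Flow actually uses.

```javascript
// Round-robin assignment: a toy sketch of "balanced" work distribution.
// Not Claude Flow's real scheduler, just the equal-spread idea.
function assignBalanced(tasks, agents) {
  const assignments = new Map(agents.map((agent) => [agent, []]));
  tasks.forEach((task, i) => {
    const agent = agents[i % agents.length]; // cycle through agents
    assignments.get(agent).push(task);
  });
  return assignments;
}

const plan = assignBalanced(["t1", "t2", "t3", "t4", "t5"], ["A", "B"]);
console.log(plan.get("A")); // ["t1", "t3", "t5"]
```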

Pattern 1: Research Swarm

Purpose

Deep research through parallel information gathering, analysis, and synthesis.

Architecture

```javascript
// Initialize research swarm
mcp__claude-flow__swarm_init({
  "topology": "mesh",
  "maxAgents": 6,
  "strategy": "adaptive"
})

// Spawn research team
const researchAgents = [
  {
    type: "researcher",
    name: "Web Researcher",
    capabilities: ["web-search", "content-extraction", "source-validation"]
  },
  {
    type: "researcher",
    name: "Academic Researcher",
    capabilities: ["paper-analysis", "citation-tracking", "literature-review"]
  },
  {
    type: "analyst",
    name: "Data Analyst",
    capabilities: ["data-processing", "statistical-analysis", "visualization"]
  },
  {
    type: "analyst",
    name: "Pattern Analyzer",
    capabilities: ["trend-detection", "correlation-analysis", "outlier-detection"]
  },
  {
    type: "documenter",
    name: "Report Writer",
    capabilities: ["synthesis", "technical-writing", "formatting"]
  }
]

// Spawn all agents
researchAgents.forEach(agent => {
  mcp__claude-flow__agent_spawn({
    type: agent.type,
    name: agent.name,
    capabilities: agent.capabilities
  })
})
```

Research Workflow

Phase 1: Information Gathering

```javascript
// Parallel information collection
mcp__claude-flow__parallel_execute({
  "tasks": [
    {
      "id": "web-search",
      "command": "search recent publications and articles"
    },
    {
      "id": "academic-search",
      "command": "search academic databases and papers"
    },
    {
      "id": "data-collection",
      "command": "gather relevant datasets and statistics"
    },
    {
      "id": "expert-search",
      "command": "identify domain experts and thought leaders"
    }
  ]
})

// Store research findings in memory
mcp__claude-flow__memory_usage({
  "action": "store",
  "key": "research-findings-" + Date.now(),
  "value": JSON.stringify(findings),
  "namespace": "research",
  "ttl": 604800 // 7 days
})
```
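The `ttl` of 604800 above is simply the number of seconds in 7 days. Deriving such values from named units avoids magic numbers; the helper below is a generic sketch, not a Claude Flow utility.

```javascript
// Derive TTL values in seconds from named units instead of magic numbers.
const SECONDS = { minute: 60, hour: 3600, day: 86400 };

function ttl(amount, unit) {
  return amount * SECONDS[unit];
}

console.log(ttl(7, "day"));  // 604800 — the 7-day TTL used above
console.log(ttl(30, "day")); // 2592000 — the 30-day TTL used later
```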

Phase 2: Analysis and Validation

```javascript
// Pattern recognition in findings
mcp__claude-flow__pattern_recognize({
  "data": researchData,
  "patterns": ["trend", "correlation", "outlier", "emerging-pattern"]
})

// Cognitive analysis
mcp__claude-flow__cognitive_analyze({
  "behavior": "research-synthesis"
})

// Quality assessment
mcp__claude-flow__quality_assess({
  "target": "research-sources",
  "criteria": ["credibility", "relevance", "recency", "authority"]
})

// Cross-reference validation
mcp__claude-flow__neural_patterns({
  "action": "analyze",
  "operation": "fact-checking",
  "metadata": { "sources": sourcesArray }
})
```

Phase 3: Knowledge Management

```javascript
// Search existing knowledge base
mcp__claude-flow__memory_search({
  "pattern": "topic X",
  "namespace": "research",
  "limit": 20
})

// Create knowledge graph connections
mcp__claude-flow__neural_patterns({
  "action": "learn",
  "operation": "knowledge-graph",
  "metadata": {
    "topic": "X",
    "connections": relatedTopics,
    "depth": 3
  }
})

// Store connections for future use
mcp__claude-flow__memory_usage({
  "action": "store",
  "key": "knowledge-graph-X",
  "value": JSON.stringify(knowledgeGraph),
  "namespace": "research/graphs",
  "ttl": 2592000 // 30 days
})
```

Phase 4: Report Generation

```javascript
// Orchestrate report generation
mcp__claude-flow__task_orchestrate({
  "task": "generate comprehensive research report",
  "strategy": "sequential",
  "priority": "high",
  "dependencies": ["gather", "analyze", "validate", "synthesize"]
})

// Monitor research progress
mcp__claude-flow__swarm_status({
  "swarmId": "research-swarm"
})

// Generate final report
mcp__claude-flow__workflow_execute({
  "workflowId": "research-report-generation",
  "params": {
    "findings": findings,
    "format": "comprehensive",
    "sections": ["executive-summary", "methodology", "findings", "analysis", "conclusions", "references"]
  }
})
```
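A sequential strategy with a dependency list amounts to running phases strictly in order, each phase seeing the results of the previous ones. A minimal sketch in plain JavaScript; the phase handlers here are placeholders, not real agent work:

```javascript
// Run named phases strictly in order, passing accumulated results forward.
// Handler implementations are placeholders for the real agent work.
async function runSequential(phases, handlers) {
  const results = {};
  for (const phase of phases) {
    results[phase] = await handlers[phase](results); // each phase sees earlier results
  }
  return results;
}

const handlers = {
  gather: async () => ["finding-1", "finding-2"],
  analyze: async (r) => r.gather.length,
  synthesize: async (r) => `analyzed ${r.analyze} findings`,
};

runSequential(["gather", "analyze", "synthesize"], handlers)
  .then((r) => console.log(r.synthesize)); // "analyzed 2 findings"
```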

CLI Fallback

```bash
# Quick research swarm
npx claude-flow swarm "research AI trends in 2025" \
  --strategy research \
  --mode distributed \
  --max-agents 6 \
  --parallel \
  --output research-report.md
```

Pattern 2: Development Swarm

Purpose

Full-stack development through coordinated specialist agents.

Architecture

```javascript
// Initialize development swarm with hierarchy
mcp__claude-flow__swarm_init({
  "topology": "hierarchical",
  "maxAgents": 8,
  "strategy": "balanced"
})

// Spawn development team
const devTeam = [
  { type: "architect", name: "System Architect", role: "coordinator" },
  { type: "coder", name: "Backend Developer", capabilities: ["node", "api", "database"] },
  { type: "coder", name: "Frontend Developer", capabilities: ["react", "ui", "ux"] },
  { type: "coder", name: "Database Engineer", capabilities: ["sql", "nosql", "optimization"] },
  { type: "tester", name: "QA Engineer", capabilities: ["unit", "integration", "e2e"] },
  { type: "reviewer", name: "Code Reviewer", capabilities: ["security", "performance", "best-practices"] },
  { type: "documenter", name: "Technical Writer", capabilities: ["api-docs", "guides", "tutorials"] },
  { type: "monitor", name: "DevOps Engineer", capabilities: ["ci-cd", "deployment", "monitoring"] }
]

// Spawn all team members
devTeam.forEach(member => {
  mcp__claude-flow__agent_spawn({
    type: member.type,
    name: member.name,
    capabilities: member.capabilities,
    swarmId: "dev-swarm"
  })
})
```

Development Workflow

Phase 1: Architecture and Design

```javascript
// System architecture design
mcp__claude-flow__task_orchestrate({
  "task": "design system architecture for REST API",
  "strategy": "sequential",
  "priority": "critical",
  "assignTo": "System Architect"
})

// Store architecture decisions
mcp__claude-flow__memory_usage({
  "action": "store",
  "key": "architecture-decisions",
  "value": JSON.stringify(architectureDoc),
  "namespace": "development/design"
})
```

Phase 2: Parallel Implementation

```javascript
// Parallel development tasks
mcp__claude-flow__parallel_execute({
  "tasks": [
    {
      "id": "backend-api",
      "command": "implement REST API endpoints",
      "assignTo": "Backend Developer"
    },
    {
      "id": "frontend-ui",
      "command": "build user interface components",
      "assignTo": "Frontend Developer"
    },
    {
      "id": "database-schema",
      "command": "design and implement database schema",
      "assignTo": "Database Engineer"
    },
    {
      "id": "api-documentation",
      "command": "create API documentation",
      "assignTo": "Technical Writer"
    }
  ]
})

// Monitor development progress
mcp__claude-flow__swarm_monitor({
  "swarmId": "dev-swarm",
  "interval": 5000
})
```

Phase 3: Testing and Validation

```javascript
// Comprehensive testing
mcp__claude-flow__batch_process({
  "items": [
    { type: "unit", target: "all-modules" },
    { type: "integration", target: "api-endpoints" },
    { type: "e2e", target: "user-flows" },
    { type: "performance", target: "critical-paths" }
  ],
  "operation": "execute-tests"
})

// Quality assessment
mcp__claude-flow__quality_assess({
  "target": "codebase",
  "criteria": ["coverage", "complexity", "maintainability", "security"]
})
```

Phase 4: Review and Deployment

```javascript
// Code review workflow
mcp__claude-flow__workflow_execute({
  "workflowId": "code-review-process",
  "params": {
    "reviewers": ["Code Reviewer"],
    "criteria": ["security", "performance", "best-practices"]
  }
})

// CI/CD pipeline
mcp__claude-flow__pipeline_create({
  "config": {
    "stages": ["build", "test", "security-scan", "deploy"],
    "environment": "production"
  }
})
```

CLI Fallback

```bash
# Quick development swarm
npx claude-flow swarm "build REST API with authentication" \
  --strategy development \
  --mode hierarchical \
  --monitor \
  --output sqlite
```

Pattern 3: Testing Swarm

Purpose

Comprehensive quality assurance through distributed testing.

Architecture

```javascript
// Initialize testing swarm with star topology
mcp__claude-flow__swarm_init({
  "topology": "star",
  "maxAgents": 7,
  "strategy": "parallel"
})

// Spawn testing team
const testingTeam = [
  {
    type: "tester",
    name: "Unit Test Coordinator",
    capabilities: ["unit-testing", "mocking", "coverage", "tdd"]
  },
  {
    type: "tester",
    name: "Integration Tester",
    capabilities: ["integration", "api-testing", "contract-testing"]
  },
  {
    type: "tester",
    name: "E2E Tester",
    capabilities: ["e2e", "ui-testing", "user-flows", "selenium"]
  },
  {
    type: "tester",
    name: "Performance Tester",
    capabilities: ["load-testing", "stress-testing", "benchmarking"]
  },
  {
    type: "monitor",
    name: "Security Tester",
    capabilities: ["security-testing", "penetration-testing", "vulnerability-scanning"]
  },
  {
    type: "analyst",
    name: "Test Analyst",
    capabilities: ["coverage-analysis", "test-optimization", "reporting"]
  },
  {
    type: "documenter",
    name: "Test Documenter",
    capabilities: ["test-documentation", "test-plans", "reports"]
  }
]

// Spawn all testers
testingTeam.forEach(tester => {
  mcp__claude-flow__agent_spawn({
    type: tester.type,
    name: tester.name,
    capabilities: tester.capabilities,
    swarmId: "testing-swarm"
  })
})
```

Testing Workflow

Phase 1: Test Planning

```javascript
// Analyze test coverage requirements
mcp__claude-flow__quality_assess({
  "target": "test-coverage",
  "criteria": [
    "line-coverage",
    "branch-coverage",
    "function-coverage",
    "edge-cases"
  ]
})

// Identify test scenarios
mcp__claude-flow__pattern_recognize({
  "data": testScenarios,
  "patterns": [
    "edge-case",
    "boundary-condition",
    "error-path",
    "happy-path"
  ]
})

// Store test plan
mcp__claude-flow__memory_usage({
  "action": "store",
  "key": "test-plan-" + Date.now(),
  "value": JSON.stringify(testPlan),
  "namespace": "testing/plans"
})
```

Phase 2: Parallel Test Execution

```javascript
// Execute all test suites in parallel
mcp__claude-flow__parallel_execute({
  "tasks": [
    {
      "id": "unit-tests",
      "command": "npm run test:unit",
      "assignTo": "Unit Test Coordinator"
    },
    {
      "id": "integration-tests",
      "command": "npm run test:integration",
      "assignTo": "Integration Tester"
    },
    {
      "id": "e2e-tests",
      "command": "npm run test:e2e",
      "assignTo": "E2E Tester"
    },
    {
      "id": "performance-tests",
      "command": "npm run test:performance",
      "assignTo": "Performance Tester"
    },
    {
      "id": "security-tests",
      "command": "npm run test:security",
      "assignTo": "Security Tester"
    }
  ]
})

// Batch process test suites
mcp__claude-flow__batch_process({
  "items": testSuites,
  "operation": "execute-test-suite"
})
```

Phase 3: Performance and Security

```javascript
// Run performance benchmarks
mcp__claude-flow__benchmark_run({
  "suite": "comprehensive-performance"
})

// Bottleneck analysis
mcp__claude-flow__bottleneck_analyze({
  "component": "application",
  "metrics": ["response-time", "throughput", "memory", "cpu"]
})

// Security scanning
mcp__claude-flow__security_scan({
  "target": "application",
  "depth": "comprehensive"
})

// Vulnerability analysis
mcp__claude-flow__error_analysis({
  "logs": securityScanLogs
})
```

Phase 4: Monitoring and Reporting

```javascript
// Real-time test monitoring
mcp__claude-flow__swarm_monitor({
  "swarmId": "testing-swarm",
  "interval": 2000
})

// Generate comprehensive test report
mcp__claude-flow__performance_report({
  "format": "detailed",
  "timeframe": "current-run"
})

// Get test results
mcp__claude-flow__task_results({
  "taskId": "test-execution-001"
})

// Trend analysis
mcp__claude-flow__trend_analysis({
  "metric": "test-coverage",
  "period": "30d"
})
```

CLI Fallback

```bash
# Quick testing swarm
npx claude-flow swarm "test application comprehensively" \
  --strategy testing \
  --mode star \
  --parallel \
  --timeout 600
```

Pattern 4: Analysis Swarm

Purpose

Deep code and system analysis through specialized analyzers.

Architecture

```javascript
// Initialize analysis swarm
mcp__claude-flow__swarm_init({
  "topology": "mesh",
  "maxAgents": 5,
  "strategy": "adaptive"
})

// Spawn analysis specialists
const analysisTeam = [
  {
    type: "analyst",
    name: "Code Analyzer",
    capabilities: ["static-analysis", "complexity-analysis", "dead-code-detection"]
  },
  {
    type: "analyst",
    name: "Security Analyzer",
    capabilities: ["security-scan", "vulnerability-detection", "dependency-audit"]
  },
  {
    type: "analyst",
    name: "Performance Analyzer",
    capabilities: ["profiling", "bottleneck-detection", "optimization"]
  },
  {
    type: "analyst",
    name: "Architecture Analyzer",
    capabilities: ["dependency-analysis", "coupling-detection", "modularity-assessment"]
  },
  {
    type: "documenter",
    name: "Analysis Reporter",
    capabilities: ["reporting", "visualization", "recommendations"]
  }
]

// Spawn all analysts
analysisTeam.forEach(analyst => {
  mcp__claude-flow__agent_spawn({
    type: analyst.type,
    name: analyst.name,
    capabilities: analyst.capabilities
  })
})
```

Analysis Workflow

```javascript
// Parallel analysis execution
mcp__claude-flow__parallel_execute({
  "tasks": [
    { "id": "analyze-code", "command": "analyze codebase structure and quality" },
    { "id": "analyze-security", "command": "scan for security vulnerabilities" },
    { "id": "analyze-performance", "command": "identify performance bottlenecks" },
    { "id": "analyze-architecture", "command": "assess architectural patterns" }
  ]
})

// Generate comprehensive analysis report
mcp__claude-flow__performance_report({
  "format": "detailed",
  "timeframe": "current"
})

// Cost analysis
mcp__claude-flow__cost_analysis({
  "timeframe": "30d"
})
```

Advanced Techniques

Error Handling and Fault Tolerance

```javascript
// Setup fault tolerance for all agents
mcp__claude-flow__daa_fault_tolerance({
  "agentId": "all",
  "strategy": "auto-recovery"
})

// Error handling pattern
try {
  await mcp__claude-flow__task_orchestrate({
    "task": "complex operation",
    "strategy": "parallel",
    "priority": "high"
  })
} catch (error) {
  // Check swarm health
  const status = await mcp__claude-flow__swarm_status({})

  // Analyze error patterns
  await mcp__claude-flow__error_analysis({
    "logs": [error.message]
  })

  // Auto-recovery attempt
  if (status.healthy) {
    await mcp__claude-flow__task_orchestrate({
      "task": "retry failed operation",
      "strategy": "sequential"
    })
  }
}
```
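The recovery step above can be generalized into retry with exponential backoff. This is a plain JavaScript sketch independent of any MCP tool; the tiny `delayMs` is just to keep the example fast.

```javascript
// Generic retry with exponential backoff; delayMs kept tiny for the example.
async function withRetry(operation, { attempts = 3, delayMs = 10 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      // Back off exponentially before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, delayMs * 2 ** i));
    }
  }
  throw lastError;
}

let calls = 0;
withRetry(async () => {
  calls++;
  if (calls < 3) throw new Error("transient failure");
  return "ok";
}).then((result) => console.log(result, "after", calls, "attempts")); // ok after 3 attempts
```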

Memory and State Management

```javascript
// Cross-session persistence
mcp__claude-flow__memory_persist({
  "sessionId": "swarm-session-001"
})

// Namespace management for different swarms
mcp__claude-flow__memory_namespace({
  "namespace": "research-swarm",
  "action": "create"
})

// Create state snapshot
mcp__claude-flow__state_snapshot({
  "name": "development-checkpoint-1"
})

// Restore from snapshot if needed
mcp__claude-flow__context_restore({
  "snapshotId": "development-checkpoint-1"
})

// Backup memory stores
mcp__claude-flow__memory_backup({
  "path": "/workspaces/claude-code-flow/backups/swarm-memory.json"
})
```
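Conceptually, the memory tools above behave like a namespaced key-value store with per-key TTL. The toy in-memory model below illustrates that behavior only; it is not claude-flow's actual memory backend.

```javascript
// Toy in-memory model of a namespaced store with per-key TTL.
// Illustrates the behavior only; not claude-flow's real memory backend.
class MemoryStore {
  constructor() {
    this.data = new Map();
  }
  store(namespace, key, value, ttlSeconds, now = Date.now()) {
    this.data.set(`${namespace}:${key}`, { value, expires: now + ttlSeconds * 1000 });
  }
  get(namespace, key, now = Date.now()) {
    const entry = this.data.get(`${namespace}:${key}`);
    if (!entry || entry.expires <= now) return undefined; // missing or expired
    return entry.value;
  }
}

const mem = new MemoryStore();
mem.store("research", "findings", ["f1"], 604800);
console.log(mem.get("research", "findings")); // ["f1"]
```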

Neural Pattern Learning

```javascript
// Train neural patterns from successful workflows
mcp__claude-flow__neural_train({
  "pattern_type": "coordination",
  "training_data": JSON.stringify(successfulWorkflows),
  "epochs": 50
})

// Adaptive learning from experience
mcp__claude-flow__learning_adapt({
  "experience": {
    "workflow": "research-to-report",
    "success": true,
    "duration": 3600,
    "quality": 0.95
  }
})

// Pattern recognition for optimization
mcp__claude-flow__pattern_recognize({
  "data": workflowMetrics,
  "patterns": ["bottleneck", "optimization-opportunity", "efficiency-gain"]
})
```

Workflow Automation

```javascript
// Create reusable workflow
mcp__claude-flow__workflow_create({
  "name": "full-stack-development",
  "steps": [
    { "phase": "design", "agents": ["architect"] },
    { "phase": "implement", "agents": ["backend-dev", "frontend-dev"], "parallel": true },
    { "phase": "test", "agents": ["tester", "security-tester"], "parallel": true },
    { "phase": "review", "agents": ["reviewer"] },
    { "phase": "deploy", "agents": ["devops"] }
  ],
  "triggers": ["on-commit", "scheduled-daily"]
})

// Setup automation rules
mcp__claude-flow__automation_setup({
  "rules": [
    {
      "trigger": "file-changed",
      "pattern": "*.js",
      "action": "run-tests"
    },
    {
      "trigger": "PR-created",
      "action": "code-review-swarm"
    }
  ]
})

// Event-driven triggers
mcp__claude-flow__trigger_setup({
  "events": ["code-commit", "PR-merge", "deployment"],
  "actions": ["test", "analyze", "document"]
})
```
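At their core, automation rules like these form an event-to-action registry. A minimal sketch follows; the pattern matching is a simple `*.ext` suffix check, illustrative only, and the class is not part of claude-flow.

```javascript
// Minimal event→action registry sketching the automation rules above.
// Pattern matching is a simple "*.ext" suffix check, illustrative only.
class AutomationRules {
  constructor() {
    this.rules = [];
  }
  add(trigger, action, pattern = "*") {
    this.rules.push({ trigger, action, pattern });
  }
  matches(pattern, subject) {
    return pattern === "*" || subject.endsWith(pattern.replace(/^\*/, ""));
  }
  fire(trigger, subject = "") {
    return this.rules
      .filter((r) => r.trigger === trigger && this.matches(r.pattern, subject))
      .map((r) => r.action);
  }
}

const rules = new AutomationRules();
rules.add("file-changed", "run-tests", "*.js");
rules.add("PR-created", "code-review-swarm");
console.log(rules.fire("file-changed", "app.js")); // ["run-tests"]
```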

Performance Optimization

```javascript
// Topology optimization
mcp__claude-flow__topology_optimize({
  "swarmId": "current-swarm"
})

// Load balancing
mcp__claude-flow__load_balance({
  "swarmId": "development-swarm",
  "tasks": taskQueue
})

// Agent coordination sync
mcp__claude-flow__coordination_sync({
  "swarmId": "development-swarm"
})

// Auto-scaling
mcp__claude-flow__swarm_scale({
  "swarmId": "development-swarm",
  "targetSize": 12
})
```
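Auto-scaling targets like the `targetSize` above are typically derived from a load threshold. The toy policy below shows one way to compute such a target; the thresholds are made-up numbers, not Claude Flow defaults.

```javascript
// Toy auto-scaling policy: grow when queue depth per agent is high,
// shrink when low. Thresholds are illustrative, not Claude Flow defaults.
function targetSwarmSize(currentAgents, queuedTasks, { high = 4, low = 1, max = 16, min = 1 } = {}) {
  const perAgent = queuedTasks / currentAgents;
  if (perAgent > high) return Math.min(max, currentAgents * 2);        // overloaded: double
  if (perAgent < low) return Math.max(min, Math.ceil(currentAgents / 2)); // idle: halve
  return currentAgents;                                                // steady state
}

console.log(targetSwarmSize(4, 40)); // 8  (10 tasks/agent: scale up)
console.log(targetSwarmSize(8, 2));  // 4  (0.25 tasks/agent: scale down)
```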

Monitoring and Metrics

```javascript
// Real-time swarm monitoring
mcp__claude-flow__swarm_monitor({
  "swarmId": "active-swarm",
  "interval": 3000
})

// Collect comprehensive metrics
mcp__claude-flow__metrics_collect({
  "components": ["agents", "tasks", "memory", "performance"]
})

// Health monitoring
mcp__claude-flow__health_check({
  "components": ["swarm", "agents", "neural", "memory"]
})

// Usage statistics
mcp__claude-flow__usage_stats({
  "component": "swarm-orchestration"
})

// Trend analysis
mcp__claude-flow__trend_analysis({
  "metric": "agent-performance",
  "period": "7d"
})
```

Best Practices

1. Choosing the Right Topology

  • Mesh: Research, brainstorming, collaborative analysis
  • Hierarchical: Structured development, sequential workflows
  • Star: Testing, validation, centralized coordination
  • Ring: Pipeline processing, staged workflows

2. Agent Specialization

  • Assign specific capabilities to each agent
  • Avoid overlapping responsibilities
  • Use coordination agents for complex workflows
  • Leverage memory for agent communication

3. Parallel Execution

  • Identify independent tasks for parallelization
  • Use sequential execution for dependent tasks
  • Monitor resource usage during parallel execution
  • Implement proper error handling
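In plain JavaScript, independent tasks map naturally to `Promise.allSettled`, which also provides the per-task error handling recommended above: one failing task does not abort the others.

```javascript
// Run independent tasks concurrently; allSettled keeps failures isolated
// so one failing task does not abort the others.
async function runParallel(tasks) {
  const settled = await Promise.allSettled(tasks.map((task) => task()));
  return {
    ok: settled.filter((s) => s.status === "fulfilled").map((s) => s.value),
    failed: settled.filter((s) => s.status === "rejected").map((s) => s.reason.message),
  };
}

runParallel([
  async () => "unit-tests passed",
  async () => { throw new Error("e2e flaked"); },
  async () => "lint passed",
]).then((r) => console.log(r.ok.length, r.failed)); // 2 [ 'e2e flaked' ]
```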

4. Memory Management

  • Use namespaces to organize memory
  • Set appropriate TTL values
  • Create regular backups
  • Implement state snapshots for checkpoints

5. Monitoring and Optimization

  • Monitor swarm health regularly
  • Collect and analyze metrics
  • Optimize topology based on performance
  • Use neural patterns to learn from success

6. Error Recovery

  • Implement fault tolerance strategies
  • Use auto-recovery mechanisms
  • Analyze error patterns
  • Create fallback workflows

Real-World Examples

Example 1: AI Research Project

```javascript
// Research AI trends, analyze findings, generate report
mcp__claude-flow__swarm_init({ topology: "mesh", maxAgents: 6 })
// Spawn: 2 researchers, 2 analysts, 1 synthesizer, 1 documenter
// Parallel gather → Analyze patterns → Synthesize → Report
```

Example 2: Full-Stack Application

```javascript
// Build complete web application with testing
mcp__claude-flow__swarm_init({ topology: "hierarchical", maxAgents: 8 })
// Spawn: 1 architect, 2 devs, 1 db engineer, 2 testers, 1 reviewer, 1 devops
// Design → Parallel implement → Test → Review → Deploy
```

Example 3: Security Audit

```javascript
// Comprehensive security analysis
mcp__claude-flow__swarm_init({ topology: "star", maxAgents: 5 })
// Spawn: 1 coordinator, 1 code analyzer, 1 security scanner, 1 penetration tester, 1 reporter
// Parallel scan → Vulnerability analysis → Penetration test → Report
```

Example 4: Performance Optimization

```javascript
// Identify and fix performance bottlenecks
mcp__claude-flow__swarm_init({ topology: "mesh", maxAgents: 4 })
// Spawn: 1 profiler, 1 bottleneck analyzer, 1 optimizer, 1 tester
// Profile → Identify bottlenecks → Optimize → Validate
```

Troubleshooting

Common Issues

Issue: Swarm agents not coordinating properly
Solution: Check topology selection, verify memory usage, enable monitoring

Issue: Parallel execution failing
Solution: Verify task dependencies, check resource limits, implement error handling

Issue: Memory persistence not working
Solution: Verify namespaces, check TTL settings, ensure backup configuration

Issue: Performance degradation
Solution: Optimize topology, reduce agent count, analyze bottlenecks

Related Skills

  • sparc-methodology - Systematic development workflow
  • github-integration - Repository management and automation
  • neural-patterns - AI-powered coordination optimization
  • memory-management - Cross-session state persistence

Version: 2.0.0
Last Updated: 2025-10-19
Skill Level: Advanced
Estimated Learning Time: 2-3 hours


Environment Matrix

Dependencies

Node.js 16+
npm or yarn
claude-flow@alpha (npm install -g claude-flow@alpha)

Framework Support

  • Claude Flow MCP ✓ (recommended)
  • CLI tools ✓
  • Any development framework ✓

Context Window

Token Usage: ~5K-15K tokens for complex swarm orchestration

Information

Author: ruvnet
Updated: 2026-01-30
Category: architecture-patterns