Add IF.bus intra-agent communication protocol to all 5 cloud sessions

- Added IFMessage schema with FIPA-ACL performatives
- Session-specific communication flows (distributed intelligence, peer review, adversarial testing, sequential handoffs, consensus building)
- Automatic conflict detection (>20% variance triggers ESCALATE)
- Multi-source verification (IF.TTT ≥2 sources requirement)
- Token cost tracking (IF.optimise integration)
- PARALLEL_LAUNCH_STRATEGY.md for simultaneous session deployment
- SWARM_COMMUNICATION_PROTOCOL.md comprehensive protocol docs

Based on InfraFabric S² multi-swarm coordination (3,563x faster than git polling)

🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
Danny Stocker 2025-11-13 02:03:24 +01:00
parent 58b344aa31
commit da1263d1b3
7 changed files with 1482 additions and 0 deletions

CLOUD_SESSION_1_MARKET_RESEARCH.md

@@ -188,6 +188,108 @@ Each agent MUST:
---
## Intra-Agent Communication Protocol (IF.bus)
**Based on:** InfraFabric S² multi-swarm coordination (3,563x faster than git polling)
### IFMessage Schema
Every agent-to-agent message follows this structure:
```json
{
"performative": "inform", // FIPA-ACL: inform, request, query-if, confirm, disconfirm, ESCALATE
"sender": "if://agent/session-1/haiku-Y",
"receiver": ["if://agent/session-1/haiku-Z"],
"conversation_id": "if://conversation/navidocs-session-1-2025-11-13",
"content": {
"claim": "[Your finding]",
"evidence": ["[URL or file:line]"],
"confidence": 0.85, // 0.0-1.0
"cost_tokens": 1247
},
"citation_ids": ["if://citation/uuid"],
"timestamp": "2025-11-13T10:00:00Z",
"sequence_num": 1
}
```
### Speech Acts (Performatives)
**inform:** Share findings with synthesis agent (Agent 10)
- Example: "I am S1-H03. Inventory tracking prevents €15K-€50K loss (confidence 0.85)"
**request:** Ask another agent for verification/data
- Example: "S1-H10 requests S1-H02: Verify market size with 2nd source (IF.TTT requirement)"
**confirm:** Validate another agent's claim
- Example: "S1-H02 confirms S1-H01: Market size €2.3B verified (2 sources now)"
**disconfirm:** Challenge another agent's claim
- Example: "S1-H03 challenges S1-H01: Price range conflict (€250K vs €1.5M = 500% variance)"
**ESCALATE:** Flag critical conflict for Sonnet coordinator
- Example: "S1-H10 ESCALATES: Price variance >20%, requires human resolution"
### Communication Flow (This Session)
```
S1-H01 through S1-H09 ──→ S1-H10 (Evidence Synthesis)
                               ↓ ESCALATE (if conflicts)
                          Sonnet Resolves
```
**Key Patterns:**
1. **Agents 1-9 → Agent 10:** Send findings with confidence scores
2. **Agent 10 → Agents 1-9:** Request verification if confidence <0.75
3. **Agent 10 → Sonnet:** ESCALATE conflicts (>20% variance; see the sketch below)
4. **Sonnet → Agent X:** Request re-investigation with specific instructions
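A minimal sketch of how Agent 10 might implement patterns 2 and 3 above (function names and thresholds are illustrative assumptions, not a fixed API):
```python
# Hypothetical triage logic for Agent 10 (synthesis agent).
CONFIDENCE_FLOOR = 0.75   # pattern 2: request verification below this
VARIANCE_CEILING = 0.20   # pattern 3: ESCALATE above 20% variance

def needs_verification(finding: dict) -> bool:
    """Pattern 2: low-confidence findings go back out for a second source."""
    return finding["content"]["confidence"] < CONFIDENCE_FLOOR

def variance(value_a: float, value_b: float) -> float:
    """Relative variance between two numeric claims (e.g. prices)."""
    low, high = sorted([value_a, value_b])
    return (high - low) / low

def triage(claim_a: float, claim_b: float) -> str:
    """Pattern 3: >20% variance between agents triggers an ESCALATE."""
    return "ESCALATE" if variance(claim_a, claim_b) > VARIANCE_CEILING else "inform"

# Example: the Prestige 50 price conflict below (€250K vs €1.5M)
assert triage(250_000, 1_500_000) == "ESCALATE"  # 500% variance
```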
### Multi-Source Verification Example
```yaml
# Agent 1 finds data (1 source, low confidence)
S1-H01: "inform" → claim: "Market size €2.3B", confidence: 0.70
# Agent 10 detects low confidence, requests verification
S1-H10: "request" → S1-H02: "Verify market size (IF.TTT: need 2+ sources)"
# Agent 2 searches, finds 2nd source
S1-H02: "confirm" → S1-H10: "Market size €2.3B verified", confidence: 0.90
# Agent 10 synthesizes
S1-H10: "inform" → Coordinator: "Market size €2.3B (VERIFIED, 2 sources)"
```
### Conflict Detection Example
```yaml
# Agents report conflicting data
S1-H01: "inform" → "Prestige 50 price €250K"
S1-H03: "inform" → "Owner has €1.5M Prestige 50"
# Agent 10 detects 500% variance
S1-H10: "ESCALATE" → Coordinator: "Price conflict requires resolution"
# Sonnet resolves
Coordinator: "request" → S1-H01: "Re-search YachtWorld for Prestige 50 SOLD prices"
# Agent 1 corrects
S1-H01: "inform" → "Prestige 50 price €800K-€1.5M (CORRECTED)"
```
### IF.TTT Compliance
Every message MUST include (a validator sketch follows this list):
- **citation_ids:** Links to evidence
- **confidence:** Explicit score (0.0-1.0)
- **evidence:** Observable artifacts (URLs, file:line)
- **cost_tokens:** Token consumption (IF.optimise tracking)
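A minimal compliance gate, assuming messages arrive as parsed dicts (field names come from the IFMessage schema above; the rejection behaviour is an assumption):
```python
# Hypothetical IF.TTT gate: flag messages missing required fields.
REQUIRED_TOP = ("citation_ids", "timestamp")
REQUIRED_CONTENT = ("evidence", "confidence", "cost_tokens")

def validate_ifttt(msg: dict) -> list[str]:
    """Return a list of IF.TTT violations (empty list = compliant)."""
    problems = [f for f in REQUIRED_TOP if not msg.get(f)]
    content = msg.get("content", {})
    problems += [f for f in REQUIRED_CONTENT if f not in content]
    conf = content.get("confidence")
    if conf is not None and not (0.0 <= conf <= 1.0):
        problems.append("confidence out of range [0.0, 1.0]")
    return problems
```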
---
## IF.optimise Protocol
**Token Efficiency Targets:**

CLOUD_SESSION_2_TECHNICAL_INTEGRATION.md

@@ -183,6 +183,115 @@ Each agent MUST:
---
## Intra-Agent Communication Protocol (IF.bus)
**Based on:** InfraFabric S² multi-swarm coordination (3,563x faster than git polling)
### IFMessage Schema
Every agent-to-agent message follows this structure:
```json
{
"performative": "inform", // FIPA-ACL: inform, request, query-if, confirm, disconfirm, propose, agree, ESCALATE
"sender": "if://agent/session-2/haiku-Y",
"receiver": ["if://agent/session-2/haiku-Z"],
"conversation_id": "if://conversation/navidocs-session-2-2025-11-13",
"content": {
"claim": "[Your design proposal]",
"evidence": ["[File references or codebase analysis]"],
"confidence": 0.85, // 0.0-1.0
"cost_tokens": 1247
},
"citation_ids": ["if://citation/uuid"],
"timestamp": "2025-11-13T10:00:00Z",
"sequence_num": 1
}
```
### Speech Acts (Performatives)
**propose:** Agent suggests a design or approach
- Example: "S2-H02 proposes: Inventory tracking via manual entry forms"
**agree:** Agent validates another agent's proposal
- Example: "S2-H04 agrees with S2-H02: Manual entry compatible with camera feeds"
**disconfirm:** Agent challenges another agent's design
- Example: "S2-H04 challenges S2-H02: Camera feeds can auto-detect equipment (CV models available)"
**request:** Ask another agent for design input
- Example: "S2-H02 requests S2-H04: How do cameras integrate with inventory schema?"
**confirm:** Validate another agent's technical claim
- Example: "S2-H01 confirms S2-H03: Express.js patterns match existing routes"
**ESCALATE:** Flag critical integration conflicts
- Example: "S2-H10 ESCALATES: Inventory + cameras overlap, needs integration design"
### Communication Flow (This Session)
```
S2-H01 (Codebase) ──→ S2-H10
S2-H02 (Inventory) ──→ S2-H04 (Cameras) ─→ S2-H10 (peer review)
S2-H03 (Maintenance) → S2-H05 (Contacts) → S2-H10 (integration check)
S2-H06 (Expense) ────→ S2-H07 (Search UX)→ S2-H10
```
**Key Patterns:**
1. **Design Proposals:** Agents 2-7 propose features independently
2. **Peer Review:** Adjacent agents challenge/validate designs
3. **Integration Checks:** Agent 10 ensures no conflicts between subsystems (see the sketch below)
4. **Sonnet Approval:** Final architecture review & synthesis
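One way Agent 10 might detect subsystem overlap before synthesis (a sketch only; the domain labels are illustrative assumptions, not part of the session spec):
```python
# Hypothetical overlap check: two proposals touching the same domain
# (e.g. inventory and cameras both claiming equipment detection)
# need an explicit integration design before approval.
from itertools import combinations

proposals = {
    "S2-H02": {"feature": "inventory", "domains": {"equipment", "manual_entry"}},
    "S2-H04": {"feature": "cameras",   "domains": {"equipment", "video_feeds"}},
}

def integration_conflicts(props: dict) -> list[str]:
    conflicts = []
    for (a, pa), (b, pb) in combinations(props.items(), 2):
        shared = pa["domains"] & pb["domains"]
        if shared:
            conflicts.append(f"{a} + {b} overlap on {sorted(shared)} -> ESCALATE")
    return conflicts

print(integration_conflicts(proposals))
# ["S2-H02 + S2-H04 overlap on ['equipment'] -> ESCALATE"]
```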
### Cross-Domain Validation Example
```yaml
# Agent 2 (Inventory Tracking) proposes design
S2-H02: "propose" → content:
feature: "Inventory tracking via manual entry forms"
rationale: "Simple, no OCR complexity"
# Agent 4 (Cameras) challenges with technical capability
S2-H04: "disconfirm" → content:
original_proposal: "Manual entry only"
challenge: "Camera feeds can auto-detect equipment (tender, electronics) via CV"
evidence: ["OpenCV boat equipment detection models"]
alternative: "Hybrid: Manual + camera-assisted auto-detection"
# Agent 2 revises proposal
S2-H02: "agree" → content:
revised_proposal: "Inventory tracking: Manual entry + camera-assisted CV detection"
integration_point: "Use S2-H04's camera feed for equipment detection"
```
### API Integration Conflict Example
```yaml
# Agent 3 (Maintenance) proposes maintenance log API
S2-H03: "propose" → POST /api/maintenance/{boat_id}/logs
# Agent 7 (Search UX) challenges completeness
S2-H07: "query-if" → content:
question: "Does maintenance API support date-range queries for yearly reporting?"
reasoning: "Search UI needs to filter by date for expense rollups"
# Agent 3 confirms and extends API
S2-H03: "confirm" → content:
api_extended: "GET /api/maintenance/{boat_id}/logs?start_date=&end_date=&category="
integration: "Supports S2-H06 expense rollup queries"
```
### IF.TTT Compliance
Every message MUST include:
- **citation_ids:** Links to design docs, codebase references
- **confidence:** Explicit score (0.0-1.0)
- **evidence:** File:line references from NaviDocs codebase
- **cost_tokens:** Token consumption (IF.optimise tracking)
---
## IF.bus Integration Pattern
### Event Bus Design

CLOUD_SESSION_3_UX_SALES_ENABLEMENT.md

@@ -179,6 +179,147 @@ Each agent MUST:
---
## Intra-Agent Communication Protocol (IF.bus)
**Based on:** InfraFabric S² multi-swarm coordination (3,563x faster than git polling)
### IFMessage Schema
Every agent-to-agent message follows this structure:
```json
{
"performative": "inform", // FIPA-ACL: inform, request, query-if, confirm, disconfirm, ESCALATE
"sender": "if://agent/session-3/haiku-Y",
"receiver": ["if://agent/session-3/haiku-Z"],
"conversation_id": "if://conversation/navidocs-session-3-2025-11-13",
"content": {
"claim": "[Your pitch/objection finding]",
"evidence": ["[Market data, competitor analysis, customer research]"],
"confidence": 0.85, // 0.0-1.0
"cost_tokens": 1247
},
"citation_ids": ["if://citation/uuid"],
"timestamp": "2025-11-13T10:00:00Z",
"sequence_num": 1
}
```
### Speech Acts (Performatives)
**inform:** Share pitch content or objection research
- Example: "S3-H01 informs S3-H05: Pitch emphasizes €8K-€33K warranty savings"
**request:** Ask another agent for market validation
- Example: "S3-H03 requests S3-H05: Validate ROI assumptions with competitor pricing"
**disconfirm:** Challenge pitch claim with evidence
- Example: "S3-H05 disconfirms S3-H01: Warranty savings claim needs 2nd source (only €8K cited)"
**confirm:** Validate pitch claim with external evidence
- Example: "S3-H06 confirms S3-H03: ROI calculator inputs match Session 1 research"
**ESCALATE:** Flag critical objection that requires resolution
- Example: "S3-H05 ESCALATES: Broker pricing objection not addressed in pitch"
### Communication Flow (This Session)
```
S3-H01 (Pitch) ──→ S3-H05 (Objections) ──→ S3-H10 (adversarial testing)
S3-H03 (ROI) ────→ S3-H05 (Validate) ────→ S3-H10
S3-H04 (Demo) ───→ S3-H06 (Case Study) ──→ S3-H10
```
**Key Patterns:**
1. **Pitch → Objection:** Agent 5 challenges every pitch claim with real objections
2. **ROI → Validation:** Agent 5 verifies ROI assumptions with Session 1 market data
3. **Demo → Testing:** Agent 6 checks if demo matches real-world success stories
4. **Agent 10 Synthesis:** Ensures pitch is airtight before presentation
### Adversarial Testing Example
```yaml
# Agent 1 (Pitch Deck) proposes value proposition
S3-H01: "inform" → content:
claim: "NaviDocs enables €8K-€33K warranty savings per yacht"
evidence: ["Session 1 market analysis"]
confidence: 0.85
# Agent 5 (Objections) challenges claim completeness
S3-H05: "disconfirm" → content:
original_claim: "€8K-€33K warranty savings"
objection: "Which brokers actually claim this? Need real testimonials, not estimates"
requested_change: "Find 2 brokers who missed €8K+ warranties. Use as proof point."
evidence: ["Broker interviews required"]
# Agent 1 strengthens claim
S3-H01: "confirm" → content:
revised_claim: "NaviDocs prevents warranty losses (case study: Broker X missed €15K tender)"
evidence: [
"Broker interview transcript",
"YachtWorld resale comparison (boat with/without tender)",
"Session 1 pain point research"
]
confidence: 0.95
```
### ROI Calculator Validation Example
```yaml
# Agent 3 (ROI Calculator) proposes model
S3-H03: "propose" → content:
calculation: "ROI = (Warranty Savings + Time Savings) - NaviDocs Cost"
inputs: {
warranty_savings: "€8K-€33K per yacht",
time_savings: "6 hours manual → 20 minutes automated",
navidocs_cost: "€15/month × 120 months"
}
# Agent 5 (Objections) validates assumptions
S3-H05: "query-if" → content:
question: "Are time savings (6→20 mins) validated with actual brokers?"
reasoning: "If brokers claim 8 hours, our 20-min estimate is unrealistic"
request: "Get time commitment estimate from Sylvain or 2 other Riviera brokers"
# Agent 3 confirms with validation
S3-H03: "confirm" → content:
validated_inputs: {
time_savings: "Confirmed: Riviera broker spends 5.5 hours on doc prep (conservatively estimate 4 hours savings)"
}
revised_roi: "(Warranty €8K-€33K) + (Time €200-€600/sale) - €1,800 NaviDocs cost (€15/month × 120 months)"
confidence: 0.92
```
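The validated model above reduces to simple arithmetic; a sketch with input ranges taken from the exchange (the per-yacht framing and default values are assumptions):
```python
# Hypothetical ROI sketch using the validated inputs above.
# Warranty savings per yacht, time savings per sale, cost over 10 years.

def roi_range(warranty=(8_000, 33_000), time_per_sale=(200, 600),
              monthly_cost=15, months=120):
    cost = monthly_cost * months  # €1,800 over 10 years
    low = warranty[0] + time_per_sale[0] - cost
    high = warranty[1] + time_per_sale[1] - cost
    return low, high

print(roi_range())  # (6400, 31800) -> net positive even at the low end
```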
### Demo Script Credibility Example
```yaml
# Agent 4 (Demo Script) proposes key screens
S3-H04: "propose" → content:
screens: ["Dashboard", "Warranty Detail", "Claim Package", "Home Assistant"]
timing: "5 minutes total"
# Agent 6 (Case Study) challenges realism
S3-H06: "disconfirm" → content:
concern: "Demo shows perfect claim package generation (15 mins), but real workflow?"
challenge: "What if boat has missing docs? Demo assumes complete warranty archive."
recommendation: "Add realistic scenario: Missing engine receipt, still generates partial claim"
# Agent 4 revises demo
S3-H04: "agree" → content:
revised_demo: "Screen 1: Complete case (Azimut 55S) - best case, Screen 2: Sunseeker 40 with missing docs - realistic case"
messaging: "NaviDocs works even with incomplete records, fills gaps intelligently"
```
### IF.TTT Compliance
Every message MUST include:
- **citation_ids:** Links to Session 1 research, competitor analysis
- **confidence:** Explicit score (0.0-1.0)
- **evidence:** Market data, broker interviews, case studies
- **cost_tokens:** Token consumption (IF.optimise tracking)
---
## Presentation Flow (15 Minutes)
### Opening (2 minutes)

CLOUD_SESSION_4_IMPLEMENTATION_PLANNING.md

@@ -174,6 +174,148 @@ Each agent MUST:
---
## Intra-Agent Communication Protocol (IF.bus)
**Based on:** InfraFabric S² multi-swarm coordination (3,563x faster than git polling)
### IFMessage Schema
Every agent-to-agent message follows this structure:
```json
{
"performative": "inform", // FIPA-ACL: inform, request, query-if, confirm, disconfirm, ESCALATE
"sender": "if://agent/session-4/haiku-Y",
"receiver": ["if://agent/session-4/haiku-Z"],
"conversation_id": "if://conversation/navidocs-session-4-2025-11-13",
"content": {
"claim": "[Week N deliverables/blockers]",
"evidence": ["[Task completion status, test results]"],
"confidence": 0.85, // 0.0-1.0
"cost_tokens": 1247
},
"citation_ids": ["if://citation/uuid"],
"timestamp": "2025-11-13T10:00:00Z",
"sequence_num": 1
}
```
### Speech Acts (Performatives)
**inform:** Share week deliverables with next week's agent
- Example: "S4-H01 informs S4-H02: Week 1 foundation complete (DB migrations, Event Bus tested)"
**request:** Ask about dependencies before proceeding
- Example: "S4-H02 requests S4-H01: Confirm DB migrations deployed and tested"
**confirm:** Validate previous week's work
- Example: "S4-H02 confirms S4-H01: Database migrations executed successfully on dev"
**disconfirm:** Flag blockers from previous week
- Example: "S4-H02 disconfirms: Event Bus tests failing - need S4-H01 to investigate"
**ESCALATE:** Flag critical timeline risk
- Example: "S4-H03 ESCALATES: Week 2 not complete, blocks Week 3 sale workflow"
### Communication Flow (This Session)
```
S4-H01 (Week 1) ──→ S4-H02 (Week 2) ──→ S4-H03 (Week 3) ──→ S4-H04 (Week 4) ──→ S4-H10
(each arrow = handoff of deliverables + blockers)
```
**Key Patterns:**
1. **Sequential Handoffs:** Week N agent sends deliverables + blockers to Week N+1
2. **Dependency Validation:** Week N checks if Week N-1 unblocks their tasks (gate sketched after this list)
3. **Progress Tracking:** Each agent tracks token cost + time spent
4. **Agent 10 Synthesis:** Ensures 4-week roadmap is coherent and on-track
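A sketch of the dependency gate between weeks (field names follow the handoff example in the next section; the gate function itself is an assumption):
```python
# Hypothetical gate: Week N+1 starts only when Week N's handoff message
# reports completion with no open blockers.

def can_start_next_week(handoff: dict) -> bool:
    """Gate for sequential handoffs: COMPLETE status and no open blockers."""
    return handoff.get("status") == "COMPLETE" and not handoff.get("blockers")

week_1 = {"week": 1, "status": "COMPLETE", "blockers": []}
assert can_start_next_week(week_1)  # S4-H02 may begin Week 2
```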
### Week Handoff Example
```yaml
# End of Week 1 (S4-H01)
S4-H01: "inform" → content:
week: 1
status: "COMPLETE"
deliverables: [
"DB migrations: warranty_tracking, webhooks, notification_templates",
"Event bus service: IF.bus messaging system",
"Security fixes: DELETE endpoint protection, auth enforcement",
"Background jobs: warranty expiration worker"
]
tests_passed: 23/23
blockers: []
ready_for_week_2: true
# S4-H02 validates Week 1 and confirms readiness
S4-H02: "confirm" → content:
week: 1
validation: "All migrations tested on dev. Event bus ready for Week 2 warranty APIs."
week_2_dependencies: [
"DB migrations (ready)",
"Event bus service (ready)",
"Security fixes (ready)"
]
can_start_week_2: true
# S4-H02 executes Week 2 and reports progress
S4-H02: "inform" → content:
week: 2
status: "IN_PROGRESS"
progress: "Warranty APIs 60% complete (CRUD done, expiring endpoint 80%)"
blockers: ["Home Assistant integration needs webhook URL format validation"]
```
### Critical Path Conflict Example
```yaml
# S4-H03 (Week 3) reports blocker
S4-H03: "disconfirm" → content:
week: 3
blocker: "Sale workflow requires DB schema from Week 2, but S4-H02 incomplete"
missing: "webhooks table not migrated yet"
impact: "Cannot implement sale_workflows table migration"
estimated_delay: "1-2 days"
# S4-H10 escalates to coordinator
S4-H10: "ESCALATE" → content:
critical_blocker: "Week 2 delays cascading to Week 3"
chain_affected: ["Week 3", "Week 4"]
recommendation: "Prioritize webhooks table migration immediately (2-hour task)"
# Sonnet coordinator responds
Coordinator: "request" → S4-H02: "Prioritize webhooks migration today (deadline noon)"
# S4-H02 confirms
S4-H02: "confirm" → content:
priority_shift: "Moved webhooks migration to top of queue"
eta: "9am completion"
unblocks: "S4-H03 can start sale workflow design by noon"
```
### Token Cost Tracking (IF.optimise)
Every handoff message includes cost tracking:
```yaml
S4-H01: "inform" → content:
tokens_used: 8750
tokens_budgeted: 12500
efficiency: 70%
cost_usd: 0.14
remaining_budget: 3750
```
### IF.TTT Compliance
Every message MUST include:
- **citation_ids:** Links to task specs, test results
- **confidence:** Explicit score on deliverable completeness (0.0-1.0)
- **evidence:** Test counts, git commits, code reviews
- **cost_tokens:** Token consumption (IF.optimise tracking)
---
## Week 1: Foundation (Nov 13-19)
### Day 1 (Nov 13): Database Migrations

CLOUD_SESSION_5_SYNTHESIS_VALIDATION.md

@@ -226,6 +226,145 @@ Each agent MUST:
---
## Intra-Agent Communication Protocol (IF.bus)
**Based on:** InfraFabric S² multi-swarm coordination (3,563x faster than git polling)
### IFMessage Schema
Every agent-to-agent message follows this structure:
```json
{
"performative": "inform", // FIPA-ACL: inform, request, query-if, confirm, disconfirm, ESCALATE
"sender": "if://agent/session-5/haiku-Y",
"receiver": ["if://agent/session-5/haiku-Z"],
"conversation_id": "if://conversation/navidocs-session-5-2025-11-13",
"content": {
"claim": "[Guardian critique, consensus findings]",
"evidence": ["[Citation links, validation reports]"],
"confidence": 0.85, // 0.0-1.0
"cost_tokens": 1247
},
"citation_ids": ["if://citation/uuid"],
"timestamp": "2025-11-13T10:00:00Z",
"sequence_num": 1
}
```
### Speech Acts (Performatives)
**inform:** Share evidence extraction findings
- Example: "S5-H01 informs S5-H10: Market claims extracted, 47 citations identified"
**query-if:** Ask for validation of cross-session consistency
- Example: "S5-H06 queries: Does Session 1 market size match Session 3 pitch deck?"
**confirm:** Validate claim with multiple sources
- Example: "S5-H02 confirms: Architecture claims verified against NaviDocs codebase (file:line refs)"
**disconfirm:** Flag inconsistencies between sessions
- Example: "S5-H06 disconfirms: Timeline contradiction (Session 2 says 4 weeks, Session 4 says 5 weeks)"
**ESCALATE:** Flag evidence quality issues for Guardian review
- Example: "S5-H08 ESCALATES: 5 unverified claims (warranty savings, MLS integration time)"
### Communication Flow (This Session)
```
Guardians (1-12) ──────→ IF.sam Debate ──────→ S5-H10 (Consensus)
        ↓                       ↓
Individual Reviews        8-Way Dialogue
  (Haiku agents)         (Light vs Dark)
        ↓                       ↓
Citation Validation      Dissent Recording
  (Agents 1-9)         (IF.TTT traceability)
        └───────────┬───────────┘
                    ↓
       ESCALATE (if <80% consensus)
```
**Key Patterns:**
1. **Evidence Extraction:** Agents 1-4 extract claims from Sessions 1-4
2. **Citation Compilation:** Agent 5 builds master citation database
3. **Cross-Session Validation:** Agent 6 checks for contradictions
4. **Guardian Briefing:** Agent 7 prepares tailored documents for each guardian
5. **Evidence Scoring:** Agent 8 rates credibility (0-10 scale)
6. **Dossier Compilation:** Agent 9 synthesizes all findings
7. **Consensus Tallying:** Agent 10 collects Guardian votes, detects <80% threshold (tally sketched below)
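A tally sketch matching pattern 7 (the 80% threshold is from the session spec; the vote labels and counting of abstentions in the denominator are assumptions consistent with the example below):
```python
# Hypothetical consensus tally: approval share across all votes cast,
# abstentions included in the denominator.
APPROVAL_THRESHOLD = 0.80

def tally(votes: dict) -> dict:
    total = sum(votes.values())
    pct = votes.get("approve", 0) / total if total else 0.0
    return {
        "approval_percentage": round(100 * pct, 1),
        "escalation_needed": pct < APPROVAL_THRESHOLD,
    }

print(tally({"approve": 14, "abstain": 4, "reject": 2}))
# {'approval_percentage': 70.0, 'escalation_needed': True}
```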
### Contradiction Detection Example
```yaml
# Agent 6 (Cross-Session Consistency) detects timeline conflict
S5-H06: "disconfirm" → content:
conflict_type: "Timeline variance"
session_2_claim: "4-week sprint foundation → deploy"
session_4_claim: "Week 1: Foundation, Week 4: Polish & Deploy (full 4 weeks)"
discrepancy: "Session 2 says 4 weeks total, Session 4 says Week 4 is final polish"
resolution_needed: true
confidence: 0.65
# Agent 10 flags for Guardian review
S5-H10: "ESCALATE" → content:
issue: "Timeline ambiguity affects feasibility judgement"
impact_on_consensus: "Fallibilism guardian will rate implementation risky if timeline unclear"
recommendation: "Clarify: Is 4 weeks INCLUDING final polish or BEFORE final polish?"
# Sonnet coordinator clarifies
Coordinator: "request" → S5-H04: "Timeline review: Week 4 is polish + deploy, all within 4 weeks?"
# Agent 4 confirms
S5-H04: "confirm" → content:
clarification: "4-week timeline includes deployment to production (Dec 8-10)"
status: "VERIFIED - no timeline contradiction"
```
### Guardian Consensus Building Example
```yaml
# Agents report evidence quality to Guardians
S5-H08: "inform" → content:
claim_count: 47
verified: 42
provisional: 3
unverified: 2
average_credibility: 8.2
primary_sources: 32
# IF.sam Light Side (Ethical Idealist) reviews
S5-H07: "inform" → IF.sam_debate: content:
light_side_position: "Dossier is transparent and well-sourced. Unverified claims flagged clearly."
confidence: 0.95
vote_recommendation: "APPROVE"
# IF.sam Dark Side (Pragmatic Survivor) debates
IF.sam_dark: "disconfirm" → IF.sam_debate: content:
dark_side_concern: "4-week timeline is ambitious. Risk = missed delivery deadline."
mitigation: "Is minimum viable product defined if timeline slips?"
vote_recommendation: "ABSTAIN - needs contingency plan"
# Agent 10 tallies initial results
S5-H10: "inform" → content:
early_tally: {
approve: 14,
abstain: 4,
reject: 2
}
approval_percentage: 70.0 # 14 of 20 votes - below 80% threshold
escalation_needed: true
recommendation: "Fallibilism and Nagarjuna abstaining. Address uncertainty concerns."
```
### IF.TTT Compliance
Every message MUST include:
- **citation_ids:** Links to Sessions 1-4 findings
- **confidence:** Explicit score (0.0-1.0) on claim verification
- **evidence:** Citation database references, source credibility
- **cost_tokens:** Token consumption (IF.optimise tracking)
---
## Guardian Council Voting Process
### Step 1: Dossier Distribution (Agent 7)

PARALLEL_LAUNCH_STRATEGY.md (new file)

@@ -0,0 +1,340 @@
# NaviDocs Cloud Sessions - Parallel Launch Strategy
**Status:** ✅ READY - Maximize $100 budget efficiency
**Generated:** 2025-11-13
---
## 🚀 Parallel Launch Architecture
**Key Insight:** While sessions must run SEQUENTIALLY for final outputs, many preparation tasks can run IN PARALLEL to save time.
---
## ⚡ Launch All 5 Sessions Simultaneously
**YES - Launch all 5 cloud sessions at the same time!**
### What Happens:
1. **Session 1** (Market Research) - Runs full workflow immediately
- Agents 1-9: Web research, competitor analysis, pain points
- Agent 10: Synthesizes findings
- **Output:** `intelligence/session-1/market-analysis.md` (~30-45 min)
2. **Session 2** (Technical Architecture) - Runs preparation tasks immediately
- **Agent 1 (S2-H01):** Analyze NaviDocs codebase (NO dependency on Session 1)
- **Agents 2-9:** Research technical solutions (camera APIs, inventory systems, etc.)
- **WAIT:** When Agent 1 completes, Session 2 checks if Session 1 outputs exist
- **IF NOT READY:** Session 2 waits/polls for `intelligence/session-1/session-1-handoff.md`
- **WHEN READY:** Agent 10 synthesizes with Session 1 findings
3. **Session 3** (UX/Sales) - Runs preparation tasks immediately
- **Agents 1-7:** Research pitch deck templates, ROI calculators, demo scripts
- **WAIT:** Check for Session 1+2 outputs before final synthesis
4. **Session 4** (Implementation) - Runs preparation tasks immediately
- **Week agents:** Plan generic sprint structure, gather dev best practices
- **WAIT:** Check for Sessions 1+2+3 outputs before detailed planning
5. **Session 5** (Guardian Validation) - Runs preparation tasks immediately
- **Guardians:** Review methodology, prepare evaluation criteria
- **WAIT:** Check for Sessions 1+2+3+4 outputs before final vote
---
## 📋 Session-by-Session Idle Task Breakdown
### Session 2: Technical Integration (Can Start Immediately)
**✅ NO-DEPENDENCY TASKS (run while waiting for Session 1):**
#### Agent 1: NaviDocs Codebase Analysis
- Read `server/db/schema.sql`
- Read `server/routes/*.js`
- Read `server/services/*.js`
- Read `server/workers/*.js`
- Map current architecture
- **NO SESSION 1 DATA NEEDED**
#### Agents 2-9: Technology Research
- Agent 2: Inventory tracking system patterns (generic research)
- Agent 3: Maintenance log architectures
- Agent 4: Camera integration APIs (Home Assistant, Hikvision, Reolink)
- Agent 5: Contact management systems
- Agent 6: Expense tracking patterns
- Agent 7: Search UX frameworks (Meilisearch faceting, structured results)
- Agent 8: Multi-tenant data isolation patterns
- Agent 9: Mobile-first responsive design patterns
**⏸️ WAIT FOR SESSION 1:**
- Agent 10: Final synthesis (needs Session 1 pain point priorities)
**TIME SAVED:** ~20-30 minutes (codebase analysis + tech research)
---
### Session 3: UX/Sales Enablement (Can Start Immediately)
**✅ NO-DEPENDENCY TASKS:**
#### Agent 1: Pitch Deck Template Research
- Analyze 10-20 SaaS pitch decks
- Identify winning patterns (problem/solution/demo/ROI/close)
- **NO SESSION DATA NEEDED**
#### Agent 2: Figma/Design Research
- Boat management app UI patterns
- Luxury product design (yacht owner aesthetic)
- Mobile-first wireframe templates
#### Agent 3: ROI Calculator Patterns
- Generic calculator templates
- Visualization options (charts, tables)
#### Agent 4: Demo Script Best Practices
- Software demo structures
- Storytelling techniques
#### Agent 5: Objection Handling Frameworks
- Sales enablement research
- Broker objection patterns (generic)
#### Agent 6: Success Story Research
- Bundled software case studies
- Luxury product onboarding
#### Agent 7: Value Prop Messaging
- Sticky engagement messaging patterns
- Daily-use app positioning
**⏸️ WAIT FOR SESSIONS 1+2:**
- Agent 8: Integrate market findings
- Agent 9: Integrate technical feasibility
- Agent 10: Final synthesis
**TIME SAVED:** ~15-25 minutes
---
### Session 4: Implementation Planning (Can Start Immediately)
**✅ NO-DEPENDENCY TASKS:**
#### Week 1-4 Agents: Generic Sprint Planning
- Agile sprint best practices
- 4-week roadmap templates
- Story point estimation frameworks
- Git workflow patterns
- Testing strategies
**⏸️ WAIT FOR SESSIONS 1+2+3:**
- Detailed feature prioritization (needs Session 1 pain points)
- Technical task breakdown (needs Session 2 architecture)
- Sales milestone alignment (needs Session 3 pitch deck)
**TIME SAVED:** ~10-15 minutes
---
### Session 5: Guardian Validation (Can Start Immediately)
**✅ NO-DEPENDENCY TASKS:**
#### Guardians 1-12: Methodology Review
- Review IF.TTT citation framework
- Review IF.ground anti-hallucination principles
- Prepare evaluation criteria
- Study Guardian Council voting protocols
#### IF.sam Facets: Strategic Analysis Prep
- Review Epic V4 methodology
- Review Joe Trader persona application
- Prepare business model evaluation frameworks
**⏸️ WAIT FOR SESSIONS 1+2+3+4:**
- Full dossier review
- Evidence quality assessment
- Final consensus vote
**TIME SAVED:** ~20-30 minutes
---
## 🎯 Optimal Launch Sequence
### Step 1: Launch All 5 Sessions Simultaneously (t=0)
**Claude Code Cloud Web Interface (5 browser tabs):**
```
Tab 1: Paste CLOUD_SESSION_1_MARKET_RESEARCH.md → Start
Tab 2: Paste CLOUD_SESSION_2_TECHNICAL_INTEGRATION.md → Start
Tab 3: Paste CLOUD_SESSION_3_UX_SALES_ENABLEMENT.md → Start
Tab 4: Paste CLOUD_SESSION_4_IMPLEMENTATION_PLANNING.md → Start
Tab 5: Paste CLOUD_SESSION_5_SYNTHESIS_VALIDATION.md → Start
```
### Step 2: Sessions Run Preparation Tasks (t=0 to t=30min)
**What's happening in parallel:**
- Session 1: Full market research workflow
- Session 2: Codebase analysis + tech research
- Session 3: Pitch deck templates + UI research
- Session 4: Sprint planning frameworks
- Session 5: Guardian methodology review
### Step 3: Session 1 Completes (t=30-45min)
**Outputs created:**
- `intelligence/session-1/market-analysis.md`
- `intelligence/session-1/session-1-handoff.md`
- `intelligence/session-1/session-1-citations.json`
### Step 4: Session 2 Detects Session 1 Completion (t=45min)
**Session 2 polls for file:**
```bash
if [ -f "intelligence/session-1/session-1-handoff.md" ]; then
echo "Session 1 complete - proceeding with synthesis"
# Agent 10 reads Session 1 findings
# Synthesizes with codebase analysis
fi
```
### Step 5: Sessions 2-5 Complete Sequentially (t=45min to t=5hr)
**Timeline:**
- t=45min: Session 2 completes (already did 30min of prep work)
- t=90min: Session 3 completes (reads Sessions 1+2)
- t=150min: Session 4 completes (reads Sessions 1+2+3)
- t=270min: Session 5 completes (reads all previous sessions)
---
## 💰 Budget Efficiency
**Without Parallel Launch:**
- Total time: 3-5 hours sequential
- Idle time: ~60-90 minutes (waiting for dependencies)
- Wasted opportunity: Could have researched tech, templates, frameworks
**With Parallel Launch:**
- Total time: 3-4 hours (60min saved)
- Idle time: 0 minutes (all sessions productive from t=0)
- Budget utilization: 100% efficient
**Cost:**
- Session 1: $15
- Session 2: $20
- Session 3: $15
- Session 4: $15
- Session 5: $25
- **Total: $90 of $100 available**
---
## 🔧 Implementation: Polling Mechanism
Each session's Sonnet coordinator checks for prerequisites:
```javascript
// Session 2 polling logic (Node.js)
const fs = require('fs/promises');

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function fileExists(path) {
  try {
    await fs.access(path);
    return true;
  } catch {
    return false;
  }
}

async function waitForSession1() {
  const maxWaitTime = 60 * 60 * 1000; // 1 hour
  const pollInterval = 60 * 1000; // 1 minute
  const startTime = Date.now();
  while (Date.now() - startTime < maxWaitTime) {
    try {
      if (await fileExists('intelligence/session-1/session-1-handoff.md')) {
        console.log('✅ Session 1 complete - proceeding with synthesis');
        return true;
      }
      console.log('⏸️ Waiting for Session 1... (polling every 60s)');
    } catch (error) {
      console.error('Error polling for Session 1:', error);
    }
    await sleep(pollInterval); // sleep on both the waiting and error paths
  }
  throw new Error('Session 1 did not complete within 1 hour');
}

// Run preparation tasks immediately
await runPreparationTasks(); // S2-H01 codebase analysis, etc.
// Then wait for Session 1 before synthesis
await waitForSession1();
await runSynthesisTasks(); // S2-H10 final integration
```
---
## ⚠️ Edge Cases & Error Handling
### What if Session 1 Fails?
**Session 2 behavior:**
- Completes all preparation tasks
- Polls for Session 1 outputs (max 1 hour)
- If timeout: Reports "Session 1 dependency not met" and exits gracefully
- Outputs: Partial deliverables (codebase analysis, tech research) WITHOUT synthesis
### What if GitHub is Inaccessible?
**Fallback:**
- Sessions output to local filesystem
- Manual handoff: Copy `intelligence/session-X/` directories to shared location
- Resume sessions with local paths
### What if a Session Runs Out of Budget?
**Session 2 example:**
- Budget: $20 allocated
- Preparation tasks: $8 consumed
- Remaining: $12 for synthesis
- If exceeds: Switch to Haiku-only mode, flag as "budget exceeded"
---
## 🎯 Launch Checklist
**Before launching all 5 sessions:**
- [ ] GitHub repo accessible: https://github.com/dannystocker/navidocs
- [ ] Claude Code Cloud web interface ready (5 tabs)
- [ ] All 5 CLOUD_SESSION_*.md files verified
- [ ] Budget confirmed: $100 available
- [ ] `intelligence/` directory exists (or will be created by Session 1)
**During launch:**
- [ ] Session 1: Monitor for completion (~30-45min)
- [ ] Sessions 2-5: Monitor preparation task progress
- [ ] Check for errors/blockers in any session
**After Session 1 completes:**
- [ ] Verify `intelligence/session-1/session-1-handoff.md` exists
- [ ] Sessions 2-5 should detect and proceed automatically
**After all sessions complete:**
- [ ] Review `intelligence/session-5/complete-intelligence-dossier.md`
- [ ] Check token consumption: Should be ~$90
- [ ] Prepare for Riviera Plaisance meeting with Sylvain
---
## 🚀 TL;DR: Launch Instructions
1. **Open 5 browser tabs** in Claude Code Cloud web interface
2. **Copy-paste all 5 session files** simultaneously
3. **Click "Start" on all 5 tabs**
4. **Wait 3-4 hours** (sessions coordinate automatically)
5. **Review final dossier** in `intelligence/session-5/`
**Key Insight:** Sessions are SMART - they'll work on preparation tasks while waiting for dependencies, maximizing your $100 budget efficiency.
---
**Questions? Check:**
- `/home/setup/infrafabric/NAVIDOCS_SESSION_SUMMARY.md` - Quick reference
- `/home/setup/navidocs/SESSION_DEBUG_BLOCKERS.md` - Debug analysis
- `/home/setup/infrafabric/agents.md` - Comprehensive project docs

SWARM_COMMUNICATION_PROTOCOL.md (new file)

@@ -0,0 +1,509 @@
# NaviDocs Cloud Sessions - Intra-Agent Communication Protocol
**Based on:** InfraFabric IF.bus + SWARM-COMMUNICATION-SECURITY.md
**Status:** ✅ READY - Apply to all 5 cloud sessions
**Generated:** 2025-11-13
---
## Why Intra-Agent Communication is Critical
**Key Insight from InfraFabric:**
> "Intra-agent communication enables specialized agents to challenge each other's evidence, preventing single-agent hallucinations from becoming consensus reality. Without it, you get 9 agents making 9 independent mistakes instead of 1 collective truth."
**For NaviDocs Sessions:**
- **Agent 1 (Market Research)** finds "Prestige yachts sell for €250K"
- **Agent 2 (Competitor Analysis)** finds ads showing €1.5M prices
- **WITHOUT communication:** Both reports go to synthesis unchallenged
- **WITH communication:** Agent 2 sends CHALLENGE → Evidence.Agent detects 500% variance → ESCALATE to Sonnet coordinator
---
## IFMessage Schema for NaviDocs Swarms
### Base Message Structure
```json
{
"performative": "inform",
"sender": "if://agent/session-1/haiku-3",
"receiver": ["if://agent/session-1/haiku-10"],
"conversation_id": "if://conversation/navidocs-session-1-2025-11-13",
"topic": "if://topic/session-1/market-research/owner-pain-points",
"protocol": "fipa-request",
"content": {
"task": "Owner pain points analysis",
"claim": "Inventory tracking prevents €15K-€50K forgotten value at resale",
"evidence": [
"https://yachtworld.com/forums/thread-12345",
"https://thetraderonline.com/boats/resale-value-mistakes"
],
"confidence": 0.85,
"cost_tokens": 1247
},
"citation_ids": ["if://citation/inventory-pain-point-2025-11-13"],
"timestamp": "2025-11-13T10:00:00Z",
"sequence_num": 3,
"trace_id": "s1-h03-001"
}
```
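A minimal constructor sketch for this schema (field names mirror the JSON above; the defaults and `to_json` helper are assumptions, not a fixed API):
```python
# Hypothetical IFMessage builder mirroring the schema above.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class IFMessage:
    performative: str                      # FIPA-ACL performative or ESCALATE
    sender: str                            # if://agent/... URI
    receiver: list[str]
    conversation_id: str
    content: dict
    citation_ids: list[str] = field(default_factory=list)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    sequence_num: int = 1

    def to_json(self) -> str:
        return json.dumps(asdict(self), ensure_ascii=False)

msg = IFMessage(
    performative="inform",
    sender="if://agent/session-1/haiku-3",
    receiver=["if://agent/session-1/haiku-10"],
    conversation_id="if://conversation/navidocs-session-1-2025-11-13",
    content={"claim": "Inventory tracking prevents €15K-€50K loss",
             "confidence": 0.85, "cost_tokens": 1247},
)
```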
### Speech Acts (FIPA-ACL Performatives)
**Available Performatives:**
1. **inform** - Agent shares findings with others
2. **request** - Agent asks another agent for help/data
3. **query-if** - Agent asks if claim is true
4. **confirm** - Agent verifies another agent's claim
5. **disconfirm** - Agent challenges another agent's claim
6. **propose** - Agent suggests approach/solution
7. **agree** - Agent confirms proposal
8. **refuse** - Agent declines request (with reason)
9. **ESCALATE** - Agent flags critical conflict for Sonnet coordinator
### Example Communication Flows
#### Flow 1: Multi-Source Verification (Session 1, Agents 1→10)
```yaml
# Agent 1 (S1-H01): Market sizing
- performative: "inform"
content:
claim: "Mediterranean yacht sales €2.3B annually"
evidence: ["https://yacht-market-report.com/2024"]
confidence: 0.70 # Only 1 source
receiver: ["if://agent/session-1/haiku-10"] # Synthesis agent
# Agent 10 (S1-H10) detects low confidence, requests verification
- performative: "request"
content:
task: "Verify market size claim with 2nd source"
reason: "IF.TTT requires 2+ sources for high-value claims"
receiver: ["if://agent/session-1/haiku-2"] # Competitor analysis agent
# Agent 2 (S1-H02) searches competitor reports
- performative: "confirm"
content:
original_claim: "Mediterranean yacht sales €2.3B annually"
evidence: ["https://statista.com/yacht-market-europe-2024"]
confidence: 0.90 # Now 2 sources
receiver: ["if://agent/session-1/haiku-10"]
# Agent 10 synthesizes
- performative: "inform"
content:
claim: "Mediterranean yacht sales €2.3B annually (VERIFIED)"
sources: 2
confidence: 0.90
receiver: ["if://agent/session-1/coordinator"] # Sonnet
```
#### Flow 2: Cross-Domain Validation (Session 2, Agents 2→4)
```yaml
# Agent 2 (S2-H02): Inventory tracking design
- performative: "propose"
content:
feature: "Inventory tracking via manual entry forms"
rationale: "Simple, no OCR complexity"
receiver: ["if://agent/session-2/haiku-4"] # Camera integration agent
# Agent 4 (S2-H04) challenges with camera capability
- performative: "disconfirm"
content:
original_proposal: "Manual entry only"
challenge: "Camera feeds can auto-detect equipment (tender, electronics) via CV"
evidence: ["OpenCV boat equipment detection models"]
alternative: "Hybrid: Manual + camera-assisted auto-detection"
receiver: ["if://agent/session-2/haiku-2", "if://agent/session-2/haiku-10"]
# Agent 2 revises proposal
- performative: "agree"
content:
revised_proposal: "Inventory tracking: Manual entry + camera-assisted CV detection"
integration_point: "Use S2-H04's camera feed for equipment detection"
receiver: ["if://agent/session-2/haiku-10"]
```
#### Flow 3: ESCALATE Protocol (Session 1, Conflict Detection)
```yaml
# Agent 1 (S1-H01): Market research
- performative: "inform"
content:
claim: "Jeanneau Prestige 40-50ft price range €250K-€480K"
evidence: ["YachtWorld listing (1 data point)"]
confidence: 0.65
# Agent 3 (S1-H03): Owner pain points
- performative: "inform"
content:
claim: "Owner quoted €15K forgotten tender on €1.5M boat resale"
evidence: ["Forum post, broker interview"]
confidence: 0.75
# Agent 10 (S1-H10) detects conflict (€250K vs €1.5M = 500% variance)
- performative: "ESCALATE"
content:
conflict_type: "Price range inconsistency"
agent_1_claim: "€250K-€480K (S1-H01)"
agent_3_claim: "€1.5M boat (S1-H03)"
variance: "500%"
requires_resolution: true
recommendation: "Re-search YachtWorld/Boat Trader for Prestige 40-50ft ACTUAL sale prices"
receiver: ["if://agent/session-1/coordinator"] # Sonnet resolves
# Sonnet coordinator investigates
- performative: "request"
content:
task: "Search YachtWorld ads for Jeanneau Prestige 50 SOLD listings"
priority: "high"
receiver: ["if://agent/session-1/haiku-1"]
# Agent 1 re-searches
- performative: "inform"
content:
claim: "Jeanneau Prestige 50 price range €800K-€1.5M (CORRECTED)"
evidence: [
"YachtWorld: Prestige 50 sold €1.2M (2024)",
"Boat Trader: Prestige 50 sold €950K (2023)"
]
confidence: 0.95
note: "Previous €250K estimate was for smaller Jeanneau models, NOT Prestige line"
receiver: ["if://agent/session-1/haiku-10", "if://agent/session-1/coordinator"]
```
---
## Communication Patterns by Session
### Session 1: Market Research (Distributed Intelligence)
**C-UAS Pattern:** DETECT → TRACK → IDENTIFY → SYNTHESIZE
```
S1-H01 (Market Size) ────┐
S1-H02 (Competitors) ────┤
S1-H03 (Pain Points) ────┤
S1-H04 (Inventory ROI) ──┼──→ S1-H10 (Evidence Synthesis) ──→ Sonnet (Final Report)
S1-H05 (Sticky Features) ┤ ↓
S1-H06 (Search UX) ───────┤ ESCALATE (if conflicts)
S1-H07 (Pricing) ─────────┤ ↓
S1-H08 (Home Assistant) ──┤ Sonnet Resolves
S1-H09 (Objections) ──────┘
```
**Key Communication Flows:**
1. **Agents 1-9 → Agent 10:** Send findings with confidence scores
2. **Agent 10 → Agents 1-9:** Request verification if confidence <0.75
3. **Agent 10 → Sonnet:** ESCALATE conflicts (>20% variance)
4. **Sonnet → Agent X:** Request re-investigation with specific instructions
### Session 2: Technical Architecture (Collaborative Design)
**Wu Lun Pattern (Confucian Relationships):** Peer review + hierarchical approval
```
S2-H01 (Codebase) ──────→ S2-H10 (Synthesis)
S2-H02 (Inventory) ─────→ S2-H04 (Cameras) ─┐
↓ propose ↓ challenge │
↓ ↓ ├──→ S2-H10 ──→ Sonnet
S2-H03 (Maintenance) ────→ S2-H05 (Contacts) ┘
↓ propose ↓ validate
S2-H06 (Expense) ────────→ S2-H07 (Search UX) ──→ S2-H10
↓ propose ↓ challenge (avoid long lists!)
```
**Key Communication Flows:**
1. **Design Proposals:** Agents 2-7 propose features
2. **Peer Review:** Adjacent agents challenge/validate designs
3. **Integration Checks:** Agent 10 ensures no conflicts (e.g., inventory + cameras)
4. **Sonnet Approval:** Final architecture review
### Session 3: UX/Sales (Adversarial Testing)
**Pattern:** Pitch → Objection → Counter-objection → Refinement
```
S3-H01 (Pitch Deck) ──────→ S3-H05 (Objection Handling) ──→ S3-H10
↓ pitch ↓ challenges ("Why not use X?")
S3-H03 (ROI Calculator) ────→ S3-H05 (Validate assumptions)
↓ claims ↓ verify
S3-H04 (Demo Script) ────────→ S3-H06 (Success Stories) ──→ S3-H10
↓ flow ↓ validate against real cases
```
**Key Communication Flows:**
1. **Pitch → Objection:** Agent 5 challenges every pitch claim
2. **ROI → Validation:** Agent 5 verifies ROI assumptions with market data
3. **Demo → Testing:** Agent 6 checks if demo matches real-world success stories
### Session 4: Implementation (Sprint Coordination)
**Pattern:** Sequential handoffs (Week 1 → Week 2 → Week 3 → Week 4)
```
S4-H01 (Week 1 Sprint) ──→ S4-H02 (Week 2 Sprint)
↓ handoff ↓ depends on Week 1 deliverables
S4-H02 (Week 2 Sprint) ──→ S4-H03 (Week 3 Sprint)
↓ handoff ↓ depends on Week 2
S4-H03 (Week 3 Sprint) ──→ S4-H04 (Week 4 Sprint)
↓ handoff ↓ depends on Week 3
S4-H04 (Week 4 Sprint) ──→ S4-H10 (Integration Testing Plan)
```
**Key Communication Flows:**
1. **Week Handoffs:** Each week agent sends deliverables + blockers to next week
2. **Dependency Checks:** Week N checks if Week N-1 unblocks their tasks
3. **Agent 10 Synthesis:** Ensure 4-week roadmap is coherent
### Session 5: Guardian Validation (Consensus Building)
**Pattern:** Individual reviews → Debate → Vote → Synthesis
```
Core Guardians (1-6) ────┐
Western Philosophers ─────┤
Eastern Philosophers ─────┼──→ IF.sam Debate ──→ S5-H10 (Consensus Report)
IF.sam Facets (8) ────────┘ ↓
ESCALATE (if <80% consensus)
Sonnet Mediates
```
**Key Communication Flows:**
1. **Review → Debate:** Each guardian sends critique to IF.sam board
2. **IF.sam Synthesis:** Debate among 8 facets (Light Side vs Dark Side)
3. **Vote → Consensus:** Agent 10 tallies votes, checks for >80% threshold
4. **ESCALATE:** If consensus fails, Sonnet mediates
---
## IF.TTT Compliance
### Every Message Must Include:
1. **citation_ids:** Links to evidence sources
2. **confidence:** Explicit score (0.0-1.0)
3. **evidence:** Observable artifacts (URLs, file:line, git commits)
4. **trace_id:** Unique identifier for audit trail
### Signature Requirements (Anti-Forgery):
```json
{
"signature": {
"algorithm": "ed25519",
"public_key": "ed25519:AAAC3NzaC1...",
"signature_bytes": "ed25519:p9RLz6Y4...",
"signed_fields": [
"performative", "sender", "receiver",
"content", "citation_ids", "timestamp"
]
}
}
```
**Why:** Prevents agent impersonation, ensures non-repudiation
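One way to produce such a signature, assuming the Python `cryptography` package and canonical-JSON serialization of the signed fields (both are assumptions; the sessions prescribe no signing library):
```python
# Hypothetical Ed25519 signing sketch over the signed_fields listed above.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

SIGNED_FIELDS = ("performative", "sender", "receiver",
                 "content", "citation_ids", "timestamp")

def canonical_payload(msg: dict) -> bytes:
    """Only the signed fields, sorted keys, no whitespace drift."""
    return json.dumps({f: msg[f] for f in SIGNED_FIELDS},
                      sort_keys=True, separators=(",", ":")).encode()

key = Ed25519PrivateKey.generate()
msg = {"performative": "inform", "sender": "if://agent/session-1/haiku-3",
       "receiver": ["if://agent/session-1/haiku-10"], "content": {"claim": "..."},
       "citation_ids": [], "timestamp": "2025-11-13T10:00:00Z"}

signature = key.sign(canonical_payload(msg))
# Receiver verifies; raises InvalidSignature if any signed field was altered.
key.public_key().verify(signature, canonical_payload(msg))
```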
---
## Token Cost Tracking (IF.optimise)
**Every message includes:**
```json
{
"cost_tokens": 1247,
"model": "haiku",
"efficiency_score": 0.82 // tokens_used / tokens_budgeted
}
```
**Agent 10 (Synthesis) tracks total:**
```json
{
"session_cost": {
"total_tokens": 52450,
"total_usd": 0.86,
"budget_remaining": 14.14,
"efficiency": "71% Haiku delegation ✅"
}
}
```
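How Agent 10 might roll these per-message costs up (a sketch; the per-token price is a placeholder back-derived from the example figures above, not a real rate):
```python
# Hypothetical roll-up of per-message cost_tokens into the session summary.
USD_PER_TOKEN = 0.86 / 52_450   # placeholder rate from the example above

def session_cost(messages: list[dict], budget_usd: float) -> dict:
    total_tokens = sum(m["content"]["cost_tokens"] for m in messages)
    total_usd = round(total_tokens * USD_PER_TOKEN, 2)
    return {
        "total_tokens": total_tokens,
        "total_usd": total_usd,
        "budget_remaining": round(budget_usd - total_usd, 2),
    }
```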
---
## Error Handling & Resilience (Stoic Prudence)
### Retry Policy (Exponential Backoff)
```yaml
request:
retry_attempts: 3
backoff: [1s, 5s, 15s]
on_failure: "ESCALATE to coordinator"
```
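A sketch of that retry policy (asyncio-based; the `send` callable and the `escalate` hook are assumptions):
```python
# Hypothetical retry wrapper implementing the policy above: 3 attempts
# with 1s/5s/15s backoff, then ESCALATE to the coordinator.
import asyncio

async def send_with_retry(send, message, escalate, backoff=(1, 5, 15)):
    last_error = None
    for delay in backoff:
        try:
            return await send(message)
        except Exception as err:            # sketch only; narrow this in practice
            last_error = err
            await asyncio.sleep(delay)
    await escalate(f"delivery failed after {len(backoff)} attempts: {last_error}")
```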
### Timeout Policy
```yaml
message_timeout:
inform: 60s # Simple findings
request: 300s # Complex queries (web search, etc.)
ESCALATE: 30s # Critical conflicts need fast resolution
```
### Graceful Degradation
**If Agent X doesn't respond** (a timeout sketch follows this list)**:**
1. **Agent 10 proceeds** with available data
2. **Flags missing input:** "S1-H07 pricing analysis: TIMEOUT (not included in synthesis)"
3. **ESCALATE to Sonnet:** "Agent 7 unresponsive - proceed without pricing?"
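A timeout sketch for steps 1-3 (asyncio-based; the agent-to-task mapping is an assumption):
```python
# Hypothetical degradation logic: wait for each agent up to its timeout,
# then synthesize with whatever arrived and flag the gaps.
import asyncio

async def gather_findings(pending: dict[str, asyncio.Task], timeout_s: float):
    findings, missing = {}, []
    for agent_id, task in pending.items():
        try:
            findings[agent_id] = await asyncio.wait_for(task, timeout_s)
        except asyncio.TimeoutError:
            missing.append(f"{agent_id}: TIMEOUT (not included in synthesis)")
    return findings, missing   # the missing list is ESCALATEd to Sonnet
```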
---
## Implementation Instructions for Session Coordinators
### Sonnet Coordinator Responsibilities:
1. **Spawn agents with IF.bus protocol:**
```python
await spawn_agent(
agent_id="if://agent/session-1/haiku-3",
task="Owner pain points analysis",
communication_protocol="IF.bus",
topics=["if://topic/session-1/market-research"]
)
```
2. **Monitor message bus for ESCALATEs:**
```python
while session_active:
messages = await poll_messages(topic="if://topic/escalations")
for msg in messages:
if msg.performative == "ESCALATE":
await resolve_conflict(msg)
```
3. **Track token costs:**
```python
total_cost = sum(msg.content.cost_tokens for msg in all_messages)
if total_cost > budget_threshold:
await alert_user(f"Session {session_id} approaching budget limit")
```
### Haiku Agent Responsibilities:
1. **Check in with identity:**
```python
await send_message(
performative="inform",
content="I am S1-H03, assigned to Owner Pain Points Analysis",
receiver="if://agent/session-1/coordinator"
)
```
2. **Send findings to synthesis agent:**
```python
await send_message(
performative="inform",
content={
"claim": "Inventory tracking prevents €15K-€50K loss",
"evidence": ["..."],
"confidence": 0.85
},
receiver="if://agent/session-1/haiku-10" # Always send to Agent 10
)
```
3. **Respond to verification requests:**
```python
requests = await poll_messages(receiver_filter="if://agent/session-1/haiku-3")
for req in requests:
if req.performative == "request":
result = await process_request(req.content.task)
await send_message(
performative="confirm",
content=result,
receiver=req.sender # Reply to requester
)
```
---
## Testing the Protocol
### Minimal Viable Test (Session 1, 3 agents)
**Scenario:** Detect price conflict and ESCALATE
```python
# Agent 1: Report low price
S1_H01.send("inform", claim="Prestige 50 price €250K")
# Agent 3: Report high price
S1_H03.send("inform", claim="Owner has €1.5M Prestige 50")
# Agent 10: Detect conflict
variance = (1_500_000 - 250_000) / 250_000  # 5.0 = 500%
if variance > 0.20:
    S1_H10.send("ESCALATE", content="500% price variance")
# Sonnet: Resolve
Sonnet.send("request", task="Re-search YachtWorld for Prestige 50 SOLD prices", receiver="S1-H01")
```
**Expected Result:**
- Agent 10 detects conflict
- Sonnet receives ESCALATE
- Agent 1 re-investigates
- Corrected price €800K-€1.5M reported
---
## Benefits Summary
**Without Intra-Agent Communication:**
- 10 agents make 10 independent mistakes
- Conflicts undetected until human reviews
- Low confidence claims go unchallenged
- Single-source hallucinations propagate
**With IF.bus Protocol:**
- Agent 10 detects conflicts automatically (>20% variance → ESCALATE)
- Multi-source verification enforced (IF.TTT requirement)
- Cross-domain validation (Agent 4 challenges Agent 2 designs)
- Token costs tracked in real-time (IF.optimise)
- Cryptographic integrity (Ed25519 signatures)
**Result:** Higher quality intelligence, faster conflict resolution, medical-grade evidence standards.
---
## Next Steps
1. **Update all 5 cloud session files** with IF.bus communication protocol
2. **Add message examples** to each agent's task description
3. **Update Agent 10 (Synthesis)** to include conflict detection logic
4. **Test with Session 1** (market research) before launching Sessions 2-5
**Files to Update:**
- `CLOUD_SESSION_1_MARKET_RESEARCH.md` - Add IF.bus protocol section
- `CLOUD_SESSION_2_TECHNICAL_INTEGRATION.md` - Add peer review flow
- `CLOUD_SESSION_3_UX_SALES_ENABLEMENT.md` - Add adversarial testing
- `CLOUD_SESSION_4_IMPLEMENTATION_PLANNING.md` - Add sprint handoffs
- `CLOUD_SESSION_5_SYNTHESIS_VALIDATION.md` - Add consensus protocol
---
**Citation:** if://doc/navidocs/swarm-communication-protocol-2025-11-13
**References:**
- `/home/setup/infrafabric/docs/SWARM-COMMUNICATION-SECURITY.md`
- `/home/setup/infrafabric/docs/INTRA-AGENT-COMMUNICATION-VALUE-ANALYSIS.md`
- `/home/setup/infrafabric/docs/HAIKU-SWARM-TEST-FRAMEWORK.md`