Add IF.DAVE v1.8 bible + quality gates
parent: 8e7d8f603d
commit: 3a11a286d7
5 changed files with 897 additions and 6 deletions
AGENTS.md (new file, 46 lines)

@@ -0,0 +1,46 @@
# Agent Notes (InfraFabric Shadow Dossier Generator)

This repo generates **Shadow Dossiers** by applying versioned style bibles (e.g., IF.DAVE) to extracted source documents.

## Current “Dave” baseline

- Latest bible: `style_bibles/IF_DAVE_BIBLE_v1.8.md` (`if://bible/dave/v1.8`)
- Public static copy: https://infrafabric.io/static/hosted/bibles/IF_DAVE_BIBLE_v1.8.md
- v1.8 generator behavior (implemented in `src/revoice/generate.py`; usage sketch below):
  - Adds `MIRROR COMPLETENESS: OK|DEGRADED` (and optional hard fail via `REVOICE_QUALITY_GATE=1`)
  - Adds `## Claims Register (source-attributed)` for measurable claims (numbers, %, tiers, retention windows)
  - Defaults Action Pack ON for v1.8 (disable via `REVOICE_NO_ACTION_PACK=1`)
  - Domain-aware Action Pack gates: hardware/identity, sensors/enforcers, detection/analysis, automation/agentic
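
A minimal usage sketch (the `revoice.generate` import path and the file handling are assumptions; the keyword arguments and environment variables come from this commit):

```python
# Hypothetical driver for the v1.8 generator; adjust the import path to your checkout.
import os

from revoice.generate import generate_shadow_dossier

os.environ["REVOICE_QUALITY_GATE"] = "1"  # optional: hard-fail (ValueError QUALITY_GATE_FAILED:INSUFFICIENT_MIRROR) on thin mirrors

source_path = "sources/example-source.md"  # hypothetical extracted source document
with open(source_path, encoding="utf-8") as fh:
    source_text = fh.read()

dossier_md = generate_shadow_dossier(
    style_id="if.dave.v1.8",  # or the URI form "if://bible/dave/v1.8"
    source_text=source_text,
    source_path=source_path,
    action_pack=False,  # v1.8 still defaults the Action Pack ON unless REVOICE_NO_ACTION_PACK=1
)
print(dossier_md)
```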

## Static hosting (critical trap)

Public static mirror is served from `pct 210:/srv/hosted-static/public` at:
- `https://infrafabric.io/static/hosted/…`

There is a sync job that mirrors `https://git.infrafabric.io/danny/hosted.git` into `/srv/hosted-static/public` every ~5 minutes.

**Important:** The sync uses `rsync --delete`, so anything not in the mirrored repo would normally be removed. To keep operator-generated review artifacts stable, the sync script now excludes (see the sketch below):
- `bibles/`
- `review/`
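
A rough sketch of what that sync amounts to (the real job runs on pct 210 and is not in this repo; the local checkout path and the `-a` flag are assumptions, only `--delete` and the two excludes are documented above):

```python
# Hypothetical equivalent of the hosted-static sync job (illustrative only).
import subprocess

subprocess.run(
    [
        "rsync", "-a", "--delete",
        "--exclude=bibles/",  # operator-generated bibles survive the mirror
        "--exclude=review/",  # operator-generated review packs survive the mirror
        "/srv/hosted-static/checkout/hosted/",  # hypothetical local clone of hosted.git
        "/srv/hosted-static/public/",
    ],
    check=True,
)
```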

So **publish operator-generated bibles/review packs under**:
- `/srv/hosted-static/public/bibles/…`
- `/srv/hosted-static/public/review/…`

## Week review packs (v1.8)

Week v1.8 packs are published here:
- Index: `https://infrafabric.io/static/hosted/review/week-v1.8/2025-12-27/index.md`
- Full single-file bundle: `https://infrafabric.io/static/hosted/review/week-v1.8/2025-12-27/week.pack.md`

Each day also has:
- `/<day>.pack.md` (offline-friendly; embeds source + dossier + trace + marketing thread)
- `/<day>.shadow.md`
- `/<day>.trace.json`
- `/<day>.marketing.md`

## OpSec / sharing rules

- Do not leak internal hostnames, paths, container IDs, or pipeline errors into outputs.
- For external reviewers, prefer **static** review packs (`/static/hosted/review/...`) or red-team app bundles (`/static/pack/<id>.md`).
- Avoid deep-linking Forgejo for public review; use `infrafabric.io/static/...` mirrors instead.

docs/FEEDBACK_WEEK_2025-12-27.md (new file, 45 lines)

@@ -0,0 +1,45 @@
# Week Feedback Summary (LLM Panel) — 2025-12-27

Source: internal CSV export (`@ShadowRT-LLM-Feedback`)

This is a synthesis of cross-model feedback (Grok, Gemini 1.5 Pro/Flash, GPT-5.2) over the **Mon–Sun TV-week stress test** packs. It is intended to drive patches to the generator + bible without widening scope.

## Themes (cross-day)

- **P0: Ensure every dossier has usable “body” sections** (some HTML→MD sources collapsed into “cover + inferred mermaids only”, losing mirror integrity and Action Pack utility).
- **P0: Control Card / header hygiene**: extracted headings sometimes become paragraph-length; this breaks scanability and Jira/backlog export.
- **P0: Edition isolation**: Action Pack logic can “bleed” across domains (e.g., SaaS controls reused for hardware tokens) unless gates/owners/evidence are domain-aware.
- **P1: Mirror payload completeness**: tables/licensing tiers and high-signal numeric claims should be preserved and turned into enforceable questions/gates, not summarized away.
- **P1: Operational concreteness**: “telemetry” and “machine-checkable prerequisites” land well, but reviewers want minimum schemas (event type, freshness window, owner) to reduce hand-waving.
- **P2: Prioritization**: add lightweight severity ranking so “all Dave Factors” don’t read equally critical.

## Day-specific P0s (from structured reviewer notes)

- **MON (Enterprise / Microsoft Defender page mirror)**: missing Action Pack and missing Dave blocks; licensing tier/table not mirrored; turn “3 minute” claims into enforceable gates.
- **TUE (Cloud / Aqua SaaS)**: paragraph blobs leaked into Control Card titles; add hard character limits and summarization.
- **WED (Endpoint / SentinelOne)**: headings conflated with descriptions; enforce short headings; critique “AI analyst” as black box evidence.
- **THU (COMSEC-ish / YubiKey FIPS brief)**: control logic looked SaaS-shaped; require hardware lifecycle / chain-of-custody controls.
- **FRI (Startup / Torq page mirror)**: Action Pack dropout; require stronger scrutiny when sources claim autonomy/agentic behavior.
- **SAT (Recap)**: ensure recap output includes a “what to steal” meta action pack (policy templates).
- **SUN (Deep dive / NIST SP 800-207 mirror)**: reduce abstractness by translating prose into “policy-as-code” style gates.

## Implemented fixes (generator + lint)

Implemented in `re-voice/src/revoice/generate.py` and `re-voice/src/revoice/lint.py`:

- **Robust section extraction fallback** for HTML→MD / weakly structured sources:
  - Markdown heading parsing fallback.
  - Last-resort “cover + body” shape, so `sections[1:]` is never empty.
- **Action Pack title hygiene**:
  - New `_compact_title()` used for Control Card headings and backlog items to avoid paragraph-length titles.
- **Hardware-aware gating**:
  - New Action Pack gate: `Hardware / identity` with owner/stop condition/evidence artifacts when the source contains FIPS/PIV/FIDO + token/hardware cues.
- **Lint exemption for Action Pack boilerplate**:
  - Ignore repeated `- Acceptance:` lines so Action Pack backlog doesn’t fail `_lint_repeated_lines`.

## Remaining backlog (proposed next patches)

- Add **recap_mode** to generate a meta “What to steal” action pack from Mon–Fri without requiring the source to include it.
- Add **government_standard_mode** translation table (standard prose → gates/owners/evidence), with explicit tagging as operationalization (not new source claims).
- Add **high-signal table retention** rule to the extractor for common PDF table layouts (licensing tiers, side-by-side comparisons).
- Add **lightweight severity ranking** (P0/P1/P2 per section) without changing mirror order.

src/revoice/generate.py

@@ -29,6 +29,7 @@ def generate_shadow_dossier(*, style_id: str, source_text: str, source_path: str
        "if.dave.v1.3",
        "if.dave.v1.6",
        "if.dave.v1.7",
        "if.dave.v1.8",
        "if.dave.fr.v1.2",
        "if.dave.fr.v1.3",
        "dave",
@@ -38,11 +39,19 @@ def generate_shadow_dossier(*, style_id: str, source_text: str, source_path: str
        "if://bible/dave/v1.3",
        "if://bible/dave/v1.6",
        "if://bible/dave/v1.7",
        "if://bible/dave/v1.8",
        "if://bible/dave/fr/v1.2",
        "if://bible/dave/fr/v1.3",
    }:
        style = style_id.lower()
        locale = "fr" if style in {"if.dave.fr.v1.2", "if.dave.fr.v1.3", "if://bible/dave/fr/v1.2", "if://bible/dave/fr/v1.3"} else "en"
        if style in {"if.dave.v1.8", "if://bible/dave/v1.8"}:
            return _generate_dave_v1_8_mirror(
                source_text=source_text,
                source_path=source_path,
                action_pack=action_pack,
                locale=locale,
            )
        if style in {"if.dave.v1.7", "if://bible/dave/v1.7"}:
            return _generate_dave_v1_7_mirror(
                source_text=source_text,
@@ -747,8 +756,111 @@ def _extract_sections(source_text: str) -> list[_SourceSection]:
    for _page_no, page_text in pages:
        if page_text.strip():
            sections.extend(_parse_sections_from_page(page_text))
    if sections:
        return sections

    # Fallback: lightweight Markdown heading parsing for HTML→MD mirrors and other non-PDF inputs
    # where page/outline heuristics fail (e.g., long navigation-heavy pages).
    sections = _extract_sections_markdown_headings(source_text)
    if sections:
        return sections

    # Last-resort: keep the document reviewable (and Action Pack-able) even when structure is poor.
    fallback_title = _first_non_empty_line(source_text) or "Source"
    fallback_body = _compact_body(source_text, max_chars=12000)
    return [_SourceSection(title=fallback_title, body=fallback_body, why_it_matters=None)]


def _first_non_empty_line(text: str) -> str | None:
    for ln in text.splitlines():
        s = ln.strip()
        if s:
            return s
    return None


def _compact_body(text: str, *, max_chars: int) -> str:
    if max_chars <= 0:
        return ""
    s = "\n".join([ln.rstrip() for ln in text.splitlines()]).strip()
    if len(s) <= max_chars:
        return s
    # Keep a clean boundary so downstream renderers don't inherit half-glyph garbage.
    cut = s.rfind("\n", 0, max_chars)
    if cut < int(max_chars * 0.6):
        cut = max_chars
    return s[:cut].rstrip() + "\n\n…"


def _extract_sections_markdown_headings(source_text: str) -> list[_SourceSection]:
    lines = [ln.rstrip("\n") for ln in source_text.splitlines()]
    if not lines:
        return []

    sections: list[_SourceSection] = []
    cur_title: str | None = None
    cur_body: list[str] = []

    def flush() -> None:
        nonlocal cur_title, cur_body
        if cur_title is None:
            return
        body = "\n".join(cur_body).strip()
        sections.append(_SourceSection(title=cur_title.strip(), body=body, why_it_matters=None))
        cur_title = None
        cur_body = []

    def is_underline_heading(idx: int) -> bool:
        if idx + 1 >= len(lines):
            return False
        title = lines[idx].strip()
        underline = lines[idx + 1].strip()
        if not title or not underline:
            return False
        if set(underline) == {"="} or set(underline) == {"-"}:
            return len(underline) >= 3 and len(underline) >= max(3, int(len(title) * 0.5))
        return False

    i = 0
    while i < len(lines):
        raw = lines[i]
        s = raw.strip()
        if not s:
            if cur_title is not None and (cur_body and cur_body[-1] != ""):
                cur_body.append("")
            i += 1
            continue

        if s.startswith("#"):
            title = s.lstrip("#").strip()
            if title:
                flush()
                cur_title = title
                cur_body = []
            i += 1
            continue

        if is_underline_heading(i):
            flush()
            cur_title = lines[i].strip()
            cur_body = []
            i += 2
            continue

        if cur_title is None:
            # Ignore leading navigation / boilerplate until we see a heading.
            i += 1
            continue

        cur_body.append(raw.rstrip())
        i += 1

    flush()

    # Filter out empty shells (heading with no body) but keep at least one section if any exists.
    non_empty = [s for s in sections if (s.body or "").strip()]
    return non_empty or sections


def _owasp_clean_lines(lines: list[str]) -> list[str]:
    cleaned: list[str] = []
@@ -2501,6 +2613,19 @@ def _truthy_env(name: str) -> bool:
    return os.getenv(name, "").strip().lower() in {"1", "true", "yes", "on"}


def _compact_title(value: str, *, max_chars: int = 80) -> str:
    s = " ".join((value or "").split()).strip()
    if not s:
        return "Untitled"
    if len(s) <= max_chars:
        return s
    window = s[: max_chars + 1]
    cut = window.rfind(" ")
    if cut < int(max_chars * 0.6):
        cut = max_chars
    return s[:cut].rstrip(" -:—") + "…"


def _action_pack_sections(sections: list[_SourceSection]) -> list[_SourceSection]:
    blacklist = {"TABLE OF CONTENTS", "LICENSE AND USAGE", "REVISION HISTORY", "PROJECT SPONSORS"}
    selected = [s for s in sections if s.title.strip().upper() not in blacklist]
@@ -2511,6 +2636,22 @@ def _action_pack_gate(section: _SourceSection) -> str:
    title_upper = section.title.upper()
    excerpt = f"{section.title}\n{section.why_it_matters or ''}\n{section.body}".lower()

    if ("enforcer" in excerpt or "sensor" in excerpt) and ("health" in excerpt or "heartbeat" in excerpt or "version" in excerpt):
        return "Sensors / enforcers"

    if ("ai analyst" in excerpt or "purple ai" in excerpt or "natural language" in excerpt) and (
        "hunting" in excerpt or "query" in excerpt or "forensic" in excerpt
    ):
        return "Detection / analysis"

    if "agentic" in excerpt or "autonomous" in excerpt or "hyperautomation" in excerpt:
        return "Automation / agentic"

    if ("fips" in excerpt or "piv" in excerpt or "fido" in excerpt) and (
        "yubikey" in excerpt or "hardware" in excerpt or "token" in excerpt or "smart card" in excerpt
    ):
        return "Hardware / identity"

    if "PULL REQUEST" in title_upper or "PR CHECK" in excerpt:
        return "PR"
    if "SHIFTING LEFT" in title_upper or "IDE" in excerpt or "LOCAL" in excerpt:
@@ -2535,6 +2676,10 @@ def _action_pack_owner(gate: str) -> str:
        "PR": "Engineering + AppSec",
        "IDE / local": "Developer Enablement + AppSec",
        "Access": "Security Platform + IT",
        "Sensors / enforcers": "Platform + SecOps",
        "Detection / analysis": "Detection Engineering + SecOps",
        "Automation / agentic": "SecOps + Platform",
        "Hardware / identity": "IAM + IT + Security",
        "Training / enablement": "Security Enablement + Engineering Leads",
        "Compliance / audit": "GRC + Security",
        "Runtime / app": "Platform + AppSec",
@@ -2549,6 +2694,10 @@ def _action_pack_stop_condition(gate: str) -> str:
        "PR": "Block merge on high severity (or unknown) findings; exceptions require owner + expiry.",
        "IDE / local": "Block/deny assistant enablement when local scan signals are missing for the developer/device.",
        "Access": "Deny access until prerequisites are met; exceptions auto-expire and require explicit owner.",
        "Sensors / enforcers": "Fail closed when enforcers are stale/unhealthy; block claims of coverage when sensors are missing.",
        "Detection / analysis": "Do not accept natural-language summaries as forensic evidence; require queries + raw event linkage.",
        "Automation / agentic": "Block auto-closure without sampling; require minimum HITL audit rate for agentic decisions.",
        "Hardware / identity": "Block access unless hardware-backed auth is enforced; exceptions require owner + expiry and auto-revoke on expiry.",
        "Training / enablement": "Deny access until training completion is verified (not self-attested).",
        "Compliance / audit": "Fail audit-readiness if evidence is missing/freshness expired; trigger remediation with owners.",
        "Runtime / app": "Block tool-use/output execution unless allowlists and validation checks pass.",
@@ -2563,6 +2712,10 @@ def _action_pack_evidence(gate: str) -> str:
        "PR": "scan_event_id + policy_version + exception_record(expiry, owner)",
        "IDE / local": "device_baseline + local_scan_signal + attestation_id",
        "Access": "access_grant_event + prerequisite_check + exception_record(expiry, owner)",
        "Sensors / enforcers": "enforcer_heartbeat + enforcer_version + last_seen_timestamp",
        "Detection / analysis": "query_id + raw_event_ids + analyst_decision_log",
        "Automation / agentic": "agent_decision_log + sample_audit_record + override_rate",
        "Hardware / identity": "device_inventory + chain_of_custody_event + fips_validation_id + auth_event_log + exception_record(expiry, owner)",
        "Training / enablement": "training_completion_id + quiz_result + access_grant_event",
        "Compliance / audit": "evidence_bundle_hash + freshness_timestamp + decision_record",
        "Runtime / app": "allowlist_version + execution_log_id + output_validation_event",
@@ -2582,18 +2735,20 @@ def _render_action_pack(sections: list[_SourceSection]) -> str:
        "",
        "This appendix turns the mirror into Monday-morning work: owners, gates, stop conditions, and evidence artifacts.",
        "Keep it generic and auditable; adapt to your tooling without inventing fake implementation details.",
        "Minimum telemetry schema (when you claim “verifiable signals”): event_type, emitter, freshness_window, owner.",
        "",
        "### Control Cards",
    ]

    for section in selected:
        display_title = _compact_title(section.title, max_chars=72)
        gate = _action_pack_gate(section)
        out.extend(
            [
                "",
                f"#### {section.title}",
                f"#### {display_title}",
                "",
                f'- **Control objective:** Prevent the dilution risk described in "{section.title}" by turning guidance into an enforceable workflow.',
                f'- **Control objective:** Prevent the dilution risk described in "{display_title}" by turning guidance into an enforceable workflow.',
                f"- **Gate:** {gate}",
                f"- **Owner (RACI):** {_action_pack_owner(gate)}",
                f"- **Stop condition:** {_action_pack_stop_condition(gate)}",
@@ -2604,9 +2759,10 @@ def _render_action_pack(sections: list[_SourceSection]) -> str:
    out.extend(["", "### Backlog Export (Jira-ready)", ""])
    for idx, section in enumerate(selected, 1):
        gate = _action_pack_gate(section)
        display_title = _compact_title(section.title, max_chars=72)
        out.extend(
            [
                f"{idx}. [{gate}] {section.title}: define owner, gate, and stop condition",
                f"{idx}. [{gate}] {display_title}: define owner, gate, and stop condition",
                f" - Acceptance: owner assigned; stop condition documented and approved.",
                f" - Acceptance: evidence artifact defined and stored (machine-generated where possible).",
                f" - Acceptance: exceptions require owner + expiry; expiry is enforced automatically.",
@@ -3031,6 +3187,15 @@ def _generate_dave_v1_7_mirror(*, source_text: str, source_path: str, action_pac
    sections = _extract_sections(normalized)
    if not sections:
        raise ValueError("No content extracted from source")
    if len(sections) == 1:
        # Some sources (notably HTML→Markdown mirrors) do not have reliable in-document structure.
        # Keep the output reviewable by forcing a (cover + body) shape so downstream rendering,
        # Action Pack generation, and per-section critique still work.
        only = sections[0]
        sections = [
            _SourceSection(title=only.title, body="", why_it_matters=None),
            _SourceSection(title="Overview" if not locale.lower().startswith("fr") else "Aperçu", body=only.body, why_it_matters=None),
        ]

    cover_lines = [ln.strip() for ln in sections[0].body.splitlines() if ln.strip() and ln.strip().lower() != "snyk"]
    cover_h1 = sections[0].title.strip() or ("DOSSIER DE L’OMBRE" if locale.lower().startswith("fr") else "SHADOW DOSSIER")
@@ -3188,3 +3353,306 @@ def _generate_dave_v1_7_mirror(*, source_text: str, source_path: str, action_pac
    )

    return "\n".join(out).strip() + "\n"


def _extract_claim_lines(*, normalized_text: str, max_items: int = 12) -> list[str]:
    lines = [ln.strip() for ln in normalized_text.splitlines()]
    claims: list[str] = []
    seen: set[str] = set()

    def keep(s: str) -> bool:
        if not s or len(s) < 14:
            return False
        # Avoid internal extraction artifacts and navigation noise.
        lower = s.lower()
        if "trace id" in lower:
            return False
        if lower.startswith("http://") or lower.startswith("https://"):
            return False
        if lower in {"markdown content:", "url source:"}:
            return False
        # Avoid pure page numbers.
        if s.isdigit() and len(s) <= 4:
            return False
        return True

    for ln in lines:
        if not keep(ln):
            continue
        if not re.search(r"\d", ln) and "%" not in ln and "$" not in ln:
            continue
        # Skip obviously broken glyph runs.
        if sum(1 for ch in ln if " " <= ch <= "~") < max(8, int(len(ln) * 0.5)):
            continue
        norm = " ".join(ln.split()).strip()
        norm_key = norm.lower()
        if norm_key in seen:
            continue
        seen.add(norm_key)
        claims.append(norm)
        if len(claims) >= max_items:
            break
    return claims


def _looks_like_government_standard(*, normalized_text: str, source_basename: str) -> bool:
    s = f"{source_basename}\n{normalized_text}".lower()
    return any(
        kw in s
        for kw in [
            "nist sp",
            "special publication",
            "800-207",
            "zero trust",
            "nvlpubs.nist.gov",
        ]
    )


def _render_translation_table(*, normalized_text: str, locale: str) -> str:
    # Red-team synthesis: only include rows for terms that actually appear in the source text.
    candidates: list[tuple[str, str]] = [
        ("Policy Decision Point (PDP)", "Gate: policy evaluation; Stop: deny when policy cannot be evaluated per-request"),
        ("Policy Enforcement Point (PEP)", "Gate: enforcement path; Stop: deny when enforcement is bypassable or unaudited"),
        ("Continuous diagnostics", "Gate: posture checks; Stop: deny when posture signals are stale/missing"),
        ("Least privilege", "Gate: authorization scope; Stop: deny when scopes exceed role baseline"),
        ("Micro-segmentation", "Gate: network access; Stop: deny lateral movement outside declared paths"),
        ("Implicit trust", "Gate: network admission; Stop: deny if access is granted by location/ownership alone"),
    ]
    present: list[tuple[str, str]] = []
    hay = normalized_text.lower()
    for term, mapping in candidates:
        if term.lower().split(" (")[0] in hay or term.lower() in hay:
            present.append((term, mapping))

    if not present:
        return ""

    if locale.lower().startswith("fr"):
        title = "## Table de traduction (source → portes de contrôle)"
        note = "_Synthèse InfraFabric Red Team : transformer la prose normative en portes opposables (sans nouvelles affirmations factuelles)._"
        col_a = "Terme (source)"
        col_b = "Traduction opérationnelle (porte)"
    else:
        title = "## Translation Table (source → gates)"
        note = "_InfraFabric Red Team synthesis: translate standard prose into opposable gates (no new factual claims)._"
        col_a = "Source term"
        col_b = "Operational translation (gate)"

    out = [title, "", note, "", f"| {col_a} | {col_b} |", "| --- | --- |"]
    for term, mapping in present[:12]:
        out.append(f"| {term} | {mapping} |")
    return "\n".join(out).strip()


def _generate_dave_v1_8_mirror(*, source_text: str, source_path: str, action_pack: bool, locale: str) -> str:
    today = _dt.date.today().isoformat()
    normalized = _normalize_ocr(source_text)
    extract_sha = _sha256_text(normalized)
    source_file_sha = _sha256_file(source_path) if Path(source_path).exists() else "unknown"
    ctx = _RenderContext(seed=extract_sha, locale=locale, voice="v1.8")

    # v1.8 defaults Action Pack ON unless explicitly disabled.
    action_pack_enabled = (not _truthy_env("REVOICE_NO_ACTION_PACK")) or bool(action_pack) or _truthy_env("REVOICE_ACTION_PACK")

    sections = _extract_sections(normalized)
    if not sections:
        raise ValueError("No content extracted from source")
    if len(sections) == 1:
        only = sections[0]
        sections = [
            _SourceSection(title=only.title, body="", why_it_matters=None),
            _SourceSection(title="Overview" if not locale.lower().startswith("fr") else "Aperçu", body=only.body, why_it_matters=None),
        ]

    # Minimum content contract: mark degraded (and optionally gate-fail) instead of silently shipping emptiness.
    non_empty_sections = [s for s in sections[1:] if (s.body or "").strip()]
    total_body_chars = sum(len((s.body or "").strip()) for s in non_empty_sections)
    mirror_ok = len(non_empty_sections) >= 3 and total_body_chars >= 3000
    mirror_status = "OK" if mirror_ok else "DEGRADED"
    mirror_reason = "" if mirror_ok else "INSUFFICIENT_MIRROR"
    if _truthy_env("REVOICE_QUALITY_GATE") and not mirror_ok:
        raise ValueError(f"QUALITY_GATE_FAILED:{mirror_reason}")

    cover_lines = [ln.strip() for ln in sections[0].body.splitlines() if ln.strip()]
    cover_h1 = sections[0].title.strip() or ("DOSSIER DE L’OMBRE" if locale.lower().startswith("fr") else "SHADOW DOSSIER")
    cover_h2 = " ".join(cover_lines[:2]).strip() if cover_lines else ""

    y, m, d = today.split("-")
    report_id = f"IF-RT-DAVE-{y}-{m}{d}"
    source_basename = Path(source_path).name
    project_slug = _slugify(Path(source_basename).stem + "-mirror")
    source_slug = _slugify(source_basename)
    filename_title = Path(source_basename).stem.replace("-", " ").replace("_", " ").strip() or source_basename

    if (
        not cover_h1
        or cover_h1.upper() == "COUVERTURE"
        or _looks_like_site_footer(cover_h1)
        or len(cover_h1) > 96
        or "." in cover_h1
    ):
        cover_h1 = filename_title

    vertical_line = _infer_vertical_line(normalized_text=normalized, source_basename=source_basename, locale=locale)

    out: list[str] = [
        "---",
        "BRAND: InfraFabric.io",
        "UNIT: RED TEAM (STRATEGIC OPS)" if not locale.lower().startswith("fr") else "UNIT: RED TEAM (OPÉRATIONS STRATÉGIQUES)",
        "DOCUMENT: SHADOW DOSSIER" if not locale.lower().startswith("fr") else "DOCUMENT: DOSSIER DE L’OMBRE",
        "CLASSIFICATION: EYES ONLY // DAVE" if not locale.lower().startswith("fr") else "CLASSIFICATION: CONFIDENTIEL // DAVE",
        "---",
        "",
        "# [ RED TEAM DECLASSIFIED ]" if not locale.lower().startswith("fr") else "# [ DÉCLASSIFIÉ – ÉQUIPE ROUGE ]",
        f"## PROJECT: {project_slug}" if not locale.lower().startswith("fr") else f"## PROJET : {project_slug}",
        f"### SOURCE: {source_slug}" if not locale.lower().startswith("fr") else f"### SOURCE : {source_slug}",
        f"**INFRAFABRIC REPORT ID:** `{report_id}`" if not locale.lower().startswith("fr") else f"**ID DE RAPPORT INFRAFABRIC :** `{report_id}`",
        "",
        "> NOTICE: This document is a product of InfraFabric Red Team."
        if not locale.lower().startswith("fr")
        else "> AVIS : ce document est un produit de l’InfraFabric Red Team.",
        "> It exposes socio-technical frictions where incentives turn controls into theater."
        if not locale.lower().startswith("fr")
        else "> Il expose les frictions socio-techniques : là où les incitations transforment les contrôles en théâtre.",
        "",
        f"**MIRROR COMPLETENESS:** {mirror_status}" if not locale.lower().startswith("fr") else f"**COMPLÉTUDE DU MIROIR :** {mirror_status}",
    ]
    if mirror_reason:
        out.append(f"**MIRROR NOTE:** {mirror_reason}" if not locale.lower().startswith("fr") else f"**NOTE MIROIR :** {mirror_reason}")
    if vertical_line:
        out.extend([vertical_line])

    out.extend(
        [
            "",
            "**[ ACCESS GRANTED: INFRAFABRIC RED TEAM ]**"
            if not locale.lower().startswith("fr")
            else "**[ ACCÈS AUTORISÉ : INFRAFABRIC ÉQUIPE ROUGE ]**",
            "**[ STATUS: OPERATIONAL REALISM ]**"
            if not locale.lower().startswith("fr")
            else "**[ STATUT : RÉALISME OPÉRATIONNEL ]**",
            "",
            f"## {cover_h1}",
        ]
    )
    if cover_h2:
        out.extend([f"### {cover_h2}", ""])
    else:
        out.append("")

    out.extend(
        [
            "> Shadow dossier (mirror-first)." if not locale.lower().startswith("fr") else "> Dossier de l’ombre (miroir d’abord).",
            ">",
            "> Protocol: IF.DAVE.v1.8" if not locale.lower().startswith("fr") else "> Protocole : IF.DAVE.v1.8",
            "> Citation: `if://bible/dave/v1.8`"
            if not locale.lower().startswith("fr")
            else "> Citation : `if://bible/dave/fr/v1.8`",
            f"> Source: `{source_basename}`" if not locale.lower().startswith("fr") else f"> Source : `{source_basename}`",
            f"> Generated: `{today}`" if not locale.lower().startswith("fr") else f"> Généré le : `{today}`",
            f"> Source Hash (sha256): `{source_file_sha}`"
            if not locale.lower().startswith("fr")
            else f"> Empreinte source (sha256) : `{source_file_sha}`",
            "",
        ]
    )

    for section in sections[1:]:
        if section.title.strip().upper() == "INTRODUCTION":
            out.append(_render_intro(section, ctx=ctx))
        else:
            out.append(_render_section(section, ctx=ctx))
        out.append("")

    # Claims Register (source-attributed): verbatim lines only (no new claims).
    claims = _extract_claim_lines(normalized_text=normalized, max_items=12)
    if claims:
        if locale.lower().startswith("fr"):
            out.extend(["## Registre des affirmations (attribuées à la source)", "", "_La source affirme :_"])
        else:
            out.extend(["## Claims Register (source-attributed)", "", "_The source claims:_"])
        out.append("")
        for c in claims:
            out.append(f"- The source claims: “{c}”" if not locale.lower().startswith("fr") else f"- La source affirme : « {c} »")
        out.append("")

    if _looks_like_government_standard(normalized_text=normalized, source_basename=source_basename):
        table = _render_translation_table(normalized_text=normalized, locale=locale)
        if table:
            out.extend([table, ""])

    if action_pack_enabled:
        out.append(_render_action_pack(sections[1:]))
        out.append("")

    # v1.8 requires >=2 Mermaid diagrams; add supplemental inferred diagrams only when needed.
    if locale.lower().startswith("fr"):
        mermaid_section_title = "## Annexes (diagrammes inférés)"
        mermaid_note = "_Diagrammes inférés : synthèse InfraFabric Red Team (sans nouvelles affirmations factuelles)._"
        evidence_label = "Boucle de dérive de preuve (inférée)"
        exception_label = "Stase d’exception (inférée)"
    else:
        mermaid_section_title = "## Annex (inferred diagrams)"
        mermaid_note = "_Inferred diagrams: InfraFabric Red Team synthesis (no new factual claims)._"
        evidence_label = "Evidence drift loop (inferred)"
        exception_label = "Exception stasis (inferred)"

    current_md = "\n".join(out)
    mermaid_count = len(re.findall(r"```mermaid\b", current_md))
    if mermaid_count < 2:
        # Try to anchor a diagram label to a source keyword so it doesn't look like filler.
        anchor_kw = None
        for kw in ["fips", "fido2", "aal3", "retention", "enforcer", "zero trust", "pdp", "pep", "agentic", "autonomous"]:
            if kw in normalized.lower():
                anchor_kw = kw.upper() if kw.isalpha() else kw
                break

        out.extend([mermaid_section_title, "", mermaid_note, ""])
        if mermaid_count < 1:
            out.extend(
                [
                    f"### {evidence_label}",
                    "",
                    "```mermaid",
                    "flowchart TD",
                    f" A[Control intent] --> B[Evidence requested ({anchor_kw or 'signal'})]",
                    " B --> C[Artifact produced]",
                    " C --> D[Dashboard goes green]",
                    " D --> E[Exceptions accumulate]",
                    " E --> F[Definition of compliance shifts]",
                    " F --> B",
                    "```",
                    "",
                ]
            )
        out.extend(
            [
                f"### {exception_label}",
                "",
                "```mermaid",
                "stateDiagram-v2",
                " [*] --> Requested",
                " Requested --> PendingReview: needs_alignment",
                " PendingReview --> PendingReview: renewal",
                " PendingReview --> Approved: silence",
                " Approved --> Approved: temporary_extension",
                "```",
                "",
            ]
        )

    out.extend(
        [
            "---",
            "",
            "*InfraFabric Red Team Footer:* **RED-TEAM Shadow Dossiers** for socio-technical friction analysis: https://infrafabric.io"
            if not locale.lower().startswith("fr")
            else "*InfraFabric Red Team Footer:* **RED-TEAM Shadow Dossiers** (analyse socio-technique des frictions) : https://infrafabric.io",
            "*Standard Dave Footer:* This document is intended for the recipient only. If you are not the recipient, please delete it and forget you saw anything. P.S. Please consider the environment before printing this email."
            if not locale.lower().startswith("fr")
            else "*Standard Dave Footer:* Ce document est destiné au seul destinataire. Si vous n’êtes pas le destinataire, veuillez le supprimer et oublier que vous l’avez vu. P.S. Veuillez considérer l’environnement avant d’imprimer ce document.",
        ]
    )

    return "\n".join(out).strip() + "\n"

src/revoice/lint.py

@@ -19,6 +19,7 @@ def lint_markdown(*, style_id: str, markdown: str) -> list[str]:
        "if.dave.v1.3",
        "if.dave.v1.6",
        "if.dave.v1.7",
        "if.dave.v1.8",
        "if.dave.fr.v1.2",
        "if.dave.fr.v1.3",
        "dave",
@@ -26,10 +27,11 @@ def lint_markdown(*, style_id: str, markdown: str) -> list[str]:
        "if://bible/dave/v1.3",
        "if://bible/dave/v1.6",
        "if://bible/dave/v1.7",
        "if://bible/dave/v1.8",
        "if://bible/dave/fr/v1.2",
        "if://bible/dave/fr/v1.3",
    }
    min_mermaid = 2 if style_id.lower() in {"if.dave.v1.7", "if://bible/dave/v1.7"} else (1 if require_mermaid else 0)
    min_mermaid = 2 if style_id.lower() in {"if.dave.v1.7", "if://bible/dave/v1.7", "if.dave.v1.8", "if://bible/dave/v1.8"} else (1 if require_mermaid else 0)
    if style_id.lower() in {
        "if.dave.v1",
        "if.dave.v1.1",
@@ -37,6 +39,7 @@ def lint_markdown(*, style_id: str, markdown: str) -> list[str]:
        "if.dave.v1.3",
        "if.dave.v1.6",
        "if.dave.v1.7",
        "if.dave.v1.8",
        "if.dave.fr.v1.2",
        "if.dave.fr.v1.3",
        "dave",
@@ -46,6 +49,7 @@ def lint_markdown(*, style_id: str, markdown: str) -> list[str]:
        "if://bible/dave/v1.3",
        "if://bible/dave/v1.6",
        "if://bible/dave/v1.7",
        "if://bible/dave/v1.8",
        "if://bible/dave/fr/v1.2",
        "if://bible/dave/fr/v1.3",
    }:
@@ -59,6 +63,7 @@ def lint_markdown_with_source(*, style_id: str, markdown: str, source_text: str)
        "if.dave.v1.3",
        "if.dave.v1.6",
        "if.dave.v1.7",
        "if.dave.v1.8",
        "if.dave.fr.v1.2",
        "if.dave.fr.v1.3",
        "dave",
@@ -66,10 +71,11 @@ def lint_markdown_with_source(*, style_id: str, markdown: str, source_text: str)
        "if://bible/dave/v1.3",
        "if://bible/dave/v1.6",
        "if://bible/dave/v1.7",
        "if://bible/dave/v1.8",
        "if://bible/dave/fr/v1.2",
        "if://bible/dave/fr/v1.3",
    }
    min_mermaid = 2 if style_id.lower() in {"if.dave.v1.7", "if://bible/dave/v1.7"} else (1 if require_mermaid else 0)
    min_mermaid = 2 if style_id.lower() in {"if.dave.v1.7", "if://bible/dave/v1.7", "if.dave.v1.8", "if://bible/dave/v1.8"} else (1 if require_mermaid else 0)
    if style_id.lower() in {
        "if.dave.v1",
        "if.dave.v1.1",
@@ -77,6 +83,7 @@ def lint_markdown_with_source(*, style_id: str, markdown: str, source_text: str)
        "if.dave.v1.3",
        "if.dave.v1.6",
        "if.dave.v1.7",
        "if.dave.v1.8",
        "if.dave.fr.v1.2",
        "if.dave.fr.v1.3",
        "dave",
@@ -86,6 +93,7 @@ def lint_markdown_with_source(*, style_id: str, markdown: str, source_text: str)
        "if://bible/dave/v1.3",
        "if://bible/dave/v1.6",
        "if://bible/dave/v1.7",
        "if://bible/dave/v1.8",
        "if://bible/dave/fr/v1.2",
        "if://bible/dave/fr/v1.3",
    }:
@@ -213,6 +221,9 @@ def _lint_repeated_lines(md: str) -> list[str]:
            continue
        if stripped.startswith(">"):
            continue
        # Action Pack backlog uses consistent acceptance criteria by design.
        if stripped.startswith("- Acceptance:"):
            continue
        if len(stripped) < 18:
            continue
        counts[stripped] = counts.get(stripped, 0) + 1

style_bibles/IF_DAVE_BIBLE_v1.8.md (new file, 321 lines)

@@ -0,0 +1,321 @@
# IF.DAVE.BIBLE v1.8 (mirror-first, quality-gated, source-anchored)

**Author:** InfraFabric Red Team
**Status:** SATIRE / SOCIOTECHNICAL RED TEAM TOOL
**Citation:** `if://bible/dave/v1.8`
**Changes from v1.7:** Adds a **minimum content contract** (no hollow dossiers), requires a **Claims Register** (source-attributed numeric claims), strengthens **source-anchored Mermaid** guidance, and hardens **domain-aware Action Packs** (hardware/standards/agentic sources get the right gates).

> This is satire. “Dave” is a pattern, not a person.
> Use it to expose rollout dilutions, not to make decisions.

---

## 0) InfraFabric Red Team branding (required)

Frame the output as an **InfraFabric Red Team** artifact, not “internet satire.”

At the top of the document, include a “declassified” header block (plain Markdown):

```text
---
BRAND: InfraFabric.io
UNIT: RED TEAM (STRATEGIC OPS)
DOCUMENT: SHADOW DOSSIER
CLASSIFICATION: EYES ONLY // DAVE
---

# [ RED TEAM DECLASSIFIED ]
## PROJECT: <PROJECT_SLUG>
### SOURCE: <SOURCE_SLUG>
**INFRAFABRIC REPORT ID:** `IF-RT-DAVE-<YYYYMMDD>`

> NOTICE: This document is a product of InfraFabric Red Team.
> It exposes socio-technical frictions where incentives turn controls into theater.
```

Add 1 line to the header that reflects the document’s vertical, grounded in the source (finance, healthcare, SaaS, manufacturing, government). Use a sector-relevant risk phrase (e.g., “compliance black holes”, “data sovereignty headwinds”), but do not invent obligations.

Optional “stamp” lines (use sparingly near section breaks):

```text
**[ ACCESS GRANTED: INFRAFABRIC RED TEAM ]**
**[ STATUS: OPERATIONAL REALISM ]**
```

v1.8 note: keep it cold. “Vendors promise speed. Dave delivers the stall.”

## 0b) OpSec (required)

The dossier must not leak internal implementation details.

- Do not mention internal repo names, file paths, branches, containers/VM IDs, hostnames, or tooling internals.
- Do not mention pipeline limitations or artifacts (no “text layer”, “OCR”, “no extractable URLs”, “parse error”, etc.). If something is missing, omit it without explanation.
- Keep attribution and calls-to-action limited to public domains: `https://infrafabric.io` and `https://red-team.infrafabric.io`.
- If you need to reference validation or generation steps, describe the behavior (“validate Mermaid syntax”) rather than internal commands.

## 0c) Vertical adaptability (required)

Dossiers must adapt to verticals without fluff.

Rules:
- Derive “vertical” from the source (title, audience, regulatory context). If unclear, keep it generic; do not guess.
- Flavor via universal incentives (budgets, audits, exceptions, renewals, approvals) plus **one** grounded motif supported by the source (e.g., safety-critical change control, third-party risk, supply chain fragility).
- Do not emit literal placeholders. Resolve them before output.
- Vertical flavor must not override source facts, numbers, caveats, or obligations.

## 0d) Evidence Artifacts (required)

Treat “evidence” as a first-class failure surface: it’s where controls die quietly.

Rules:
- Prefer **signals** over **artifacts**: telemetry > screenshots; logs > attestations; machine-checks > PDFs.
- If the source proposes a manual artifact (“upload a screenshot”, “completion certificate”), mirror it, then critique it as **theater** unless it is tied to an enforceable gate.
- Never publish unusable code/config snippets as “evidence”. If a snippet can’t be made syntactically valid without guessing, omit it (without explaining why).

Operational concreteness (generic; do not fabricate vendor APIs):
- When you propose “verifiable telemetry”, make it minimally opposable by naming a **signal shape** (see the sketch after this list):
  - **event type** (e.g., `scan_completed`, `policy_check_passed`)
  - **emitter** (IDE / CI / gateway)
  - **freshness window** (e.g., “must be newer than 14 days”)
  - **owner** (who is paged when it goes dark)
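
A minimal sketch of that signal shape (illustrative only; the `TelemetrySignal` name and field spellings are not mandated by this bible):

```python
# Illustrative shape for a "verifiable telemetry" signal: event type, emitter, freshness window, owner.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class TelemetrySignal:
    event_type: str              # e.g. "scan_completed", "policy_check_passed"
    emitter: str                 # "ide", "ci", or "gateway"
    emitted_at: datetime         # timestamp used to evaluate freshness
    freshness_window: timedelta  # e.g. "must be newer than 14 days"
    owner: str                   # who is paged when the signal goes dark

    def is_fresh(self, now: datetime | None = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return now - self.emitted_at <= self.freshness_window


signal = TelemetrySignal(
    event_type="scan_completed",
    emitter="ci",
    emitted_at=datetime.now(timezone.utc),
    freshness_window=timedelta(days=14),
    owner="appsec-oncall",
)
assert signal.is_fresh()
```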

Also consider (when the source is about scanning/guardrails):
- **Noise is a bypass engine:** if the control is too noisy (false positives, flaky rules), developers will route around it. Do not claim this is true for a specific tool unless the source states it; treat it as a rollout failure mode to test for.

## 0e) TV Series Mode (optional)

When `series_mode=true`, the generator must additionally emit a **Thread Pack** distribution layer (without rewriting the dossier).

Thread Pack (daily) structure (suggested):
1. Evening “Next On” teaser (previous day 8:00 PM EST)
2. Day-of Pre-Show promo (6:00 AM EST) with one hero diagram
3. Main Episode thread (5–7 posts: hook + visuals + short quotes + links + poll + next-day tease)

Constraints:
- Thread Pack must preserve classification framing and edition branding.
- Thread Pack must not exceed the quoting budget (see 1c).
- Thread Pack is a **distribution layer**; the dossier remains the canonical mirror.

## 0f) Thread Pack Sponsor Bumper (optional, `series_mode` only)

When `series_mode=true`, you may insert a single mid-thread post (position 3 or 4) as a “sponsor bumper”.

Constraints (strict):
- Exactly 1–2 lines.
- No external vendor names or endorsements.
- No product performance claims.
- Tone: cold, cynical, vendor-neutral.
- Reinforce gating thesis only.
- InfraFabric.io link allowed once per bumper.
- Optional — omit if it risks template feel.

Preferred variants (rotate; no repeat within week):
1. “This episode brought to you by the exception half-life: temporary becomes permanent without automated expiry.”
2. “Underwritten by the laws of incentives: dashboards observe, gates enforce. See verifiable traces at https://infrafabric.io”
3. “Sponsored by operational realism: the roadmap is not the territory.”
4. “A message from the gating problem: visibility without stop conditions is theater.”
5. “This critique made possible by InfraFabric Red Team — publishing the gates your org must own. https://infrafabric.io”

---

## 0g) Source ingestion reliability (required)

Never ship a hollow dossier.

Rules:
- If a web landing page extraction yields insufficient body text (thin mirrors, heavy navigation), the dossier must explicitly mark **MIRROR COMPLETENESS: DEGRADED**.
- Optional hard gate for automation: fail the run with `QUALITY_GATE_FAILED:INSUFFICIENT_MIRROR` instead of publishing a shell.

Standards documents (NIST, etc.):
- Default to **Operational** tone.
- Require a translation surface (see “Translation Table” guidance under 5c) before heavy satire.

---

## 1c) Quoting Budget (required for Thread Pack)

Thread Pack constraints (do not change the dossier itself):
- Max **4** short verbatim quotes per main thread; each must be attributed (“the source claims …”).
- Heavy mirroring belongs in the dossier + pack, not in thread posts.
- If the source is vendor/copyrighted collateral, default to: **summary + short quotes** in Thread Pack.

## 1d) Minimum Content Contract (required)

Every dossier must contain:
- At least **3 mirrored source sections** (preserving order/headings) *or* be explicitly marked **MIRROR COMPLETENESS: DEGRADED**.
- At least **1** `> **The Dave Factor:**` callout (tied to a prominent mirrored point).
- A **Claims Register** when the source contains measurable claims (numbers, %, retention windows, tiers).
- An **Action Pack** by default (see 5c), unless explicitly disabled for the run.
- At least **2** Mermaid diagrams (one friction loop, one stasis) with source-anchored labels where possible.

Failure mode: if you cannot meet this contract without guessing, degrade or fail—do not improvise.
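
For reference, the v1.8 generator in this commit enforces that floor with roughly the following check (a sketch restating the thresholds from `_generate_dave_v1_8_mirror` above; they may change in later versions):

```python
# Sketch of the minimum-content gate: at least 3 non-empty mirrored sections totalling >= 3000 characters.
def mirror_is_ok(section_bodies: list[str]) -> bool:
    non_empty = [b.strip() for b in section_bodies if b.strip()]
    return len(non_empty) >= 3 and sum(len(b) for b in non_empty) >= 3000
```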

---

## 1) Prime directive: mirror the source dossier

The output must **track the source document section-by-section**.

Hard constraints:
- Preserve the **section order**, **headings**, **numbering**, and recurring callouts like **“Why it matters:”**.
- Preserve obvious in-section subheadings when present.
- Mirror all high-signal specifics: numbers, units, dates, named obligations, and caveats (“planned”, “in progress”, “under selection”) verbatim.
- Mirror lists/tables fully (no truncation). If a table is long, keep it; that’s the persuasion payload.
- Do **not** skip sections. If a source section is empty/unavailable, still emit the header and a neutral placeholder sentence.
- Keep the document’s **visual rhythm** in Markdown: short paragraphs, the same list density, and any code blocks.
- Keep diagrams as diagrams. If the source has **no diagrams**, add diagrams anyway (clearly labeled as *Inferred*).
- Do not fabricate URLs. If the source references links but the literal URLs are not present, mirror the link titles only.

---

## 4) Emoji policy (strict)

- Do **not** introduce emojis.
- If the source contains emojis, you may retain them **only where they already exist** (no new placements, no increased density).

---

## 4b) Mermaid policy (required)

- Include at least **two** Mermaid diagrams per dossier:
  - one early *friction loop* (how the control degrades)
  - one late *evidence/gate stasis* (how “pending review” becomes policy)
- If the source lacks diagrams, label diagrams as **“Inferred”** (InfraFabric Red Team synthesis).
- Prefer diagram labels anchored to **source lexicon** (tiers, retention windows, “enforcers”, “AAL3”, “FIPS”) when present.
- Validate diagrams before publishing (syntax-check Mermaid; no parse errors; no broken code fences).
- Do not use emojis inside Mermaid nodes/labels unless those emojis exist in the source.

---

## 4c) Anti-repetition (cross-doc rule)

The dossier should feel *tailored*, not like a template ran in a loop.

Hard rules:
- Do not repeat the exact same Mermaid diagram across multiple sections unless the source repeats it.
- Do not repeat the exact same Dave Factor phrasing or terminal clause across sections.
- Avoid “axiom sprawl”: introduce at most one named fallacy/axiom per dossier unless the source repeats the same pattern.

Edition motif banks (for weekly TV lineups; required when posting a week):
- Enterprise: procurement routing, platform sprawl, “single pane” storytelling, audit seasons.
- Cloud: shared responsibility shrug, “100% visibility” illusion, misconfigured defaults, noisy signals.
- Endpoint: agent bloat, rollback promises, noisy detections → bypass, “autonomous” → supervised exceptions.
- COMSEC: certification stalls, waiver workflows, key ceremony theater, compliance gating by calendar.
- Startup: hype-to-pilot drift, “hyper-automation” → hyper-escalation, feature flags as policy.

Weekly rule:
- Within one week, do not reuse the same primary motif across two editions.

---

## 5) Humor guidelines (cold, specific, vendor-neutral)

The humor is a sociotechnical threat model: the rational, self-preserving middle manager optimizing for plausible deniability.

Guidelines:
- Aim at **systems and incentives**, not individuals.
- Keep it **cold**: forwardable internally without an apology.
- Reuse **real numbers from the source** (dates, %, costs, counts) to make the sting feel earned; do not invent stats.

---

## 5b) Red Team callout template (short)

Inside each mirrored source section, include at most one primary callout:

> **The Dave Factor:** Where does this control become untestable? What artifact becomes “proof” while the actual signal disappears?

Optional (when it adds clarity):

> **Countermeasure (stub):** One line: gate + stop condition + expiry (full details belong in the Action Pack).

---

## 5c) Operationalization pack (default appendix)

Append an **Action Pack** after the mirrored content.

Required outputs:

### Output A: Control Cards (per major section)

- **Control objective**
- **Gate:** IDE / PR / CI / access / runtime / identity / sensors
- **Owner (RACI)**
- **Stop condition**
- **Evidence signal:** what’s logged/signed/hashed + where it lives

### Output B: Backlog export (Jira-ready)

- Ticket title
- Acceptance criteria
- Evidence/telemetry requirement

### Output C: Policy-as-code appendix (pseudo-YAML)

Keep it generic and auditable; avoid fake implementation details.

### Translation Table (standards sources; recommended)

If the source is a standard (e.g., NIST):
- Extract a small set of **terms that appear in the source** (e.g., PDP/PEP, least privilege, continuous diagnostics).
- Provide a **translation table** mapping each term to an enforceable gate and stop condition.
- Label this as **InfraFabric Red Team synthesis** (not source text).

---

## 5d) Vendor-safe conclusion (recommended)

End by critiquing incentives rather than vendors.

Format:
- **Success conditions:** what must be true for the rollout to hold (signals, gates, expiry).
- **Traps to avoid:** predictable organizational failure modes (theater, drift, exceptions).
- **Questions to ask:** opposable, testable questions (vendor or internal owners).

Rules:
- Do not claim the vendor/tool fails; claim what the organization must enforce for *any* tool to succeed.
- Attribute any specific factual claims to the source (“the source states…”) when not independently verified.

---

## 6) Claims Register (required when the source contains measurable claims)

When the source includes measurable claims (numbers, %, retention windows, tiers), include:

## Claims Register (source-attributed)

- `The source claims: “<verbatim line>”`

Do not “normalize” or “improve” claims. If the extracted line is unusable, omit it rather than rewriting it.

---

## 7) Required footer (always)

*InfraFabric Red Team Footer:* **RED-TEAM Shadow Dossiers** for socio-technical friction analysis: https://infrafabric.io

*Standard Dave Footer:* This document is intended for the recipient only. If you are not the recipient, please delete it and forget you saw anything. P.S. Please consider the environment before printing this email.

---

## 8) Format correctness (non-negotiable)

If you emit structured artifacts, they must be copy/pasteable:

- JSON/YAML/code blocks must be syntactically valid.
- Mermaid blocks must render.
- Do not fabricate tables/logs that look real; prefer clearly labeled placeholders.

---

## 9) Tone modes (optional)

Support three tone levels without changing mirror structure:

- **Full Satire (default):** Dave is loud; commentary is pointed.
- **Operational:** fewer jokes; more “failure mode → control → stop condition.”
- **Executive:** minimal snark; focus on risk framing, owners, and gating.

Never introduce emojis unless present in source, regardless of tone.