Mirror-first Dave bible + sectioned dossier output

This commit is contained in:
danny 2025-12-25 08:25:59 +00:00
parent 6da892f8c7
commit 0fb2bbff22
8 changed files with 676 additions and 277 deletions

View file

@ -13,7 +13,7 @@ Generate the Dave-style shadow dossier for the included PDF:
```bash
PYTHONPATH=src python3 -m revoice generate \
--style if.dave.v1 \
--style if.dave.v1.1 \
--input examples/ai-code-guardrails/AI-Code-Guardrails.pdf \
--output examples/ai-code-guardrails/AI-Code-Guardrails.shadow.dave.md
```

View file

@ -1,26 +1,35 @@
===== page-1 =====
AI CODE
GUARDRAILS:
A PRACTICAL GUIDE FOR
SECURE ROLLOUT
snyk
===== page-2 =====
INTRODUCTION
LEARN HOW TO ROLL OUT AI
CODING TOOLS LIKE GITHUB
COPILOT AND GEMINI CODE
ASSIST SECURELY WITH
PRACTICAL GUARDRAILS,
USAGE POLICIES, AND IDE-
BASED TESTING.
Tools like GitHub Copilot and Google Gemini Code Assist help teams
generate code at scale, reduce boilerplate, and speed up delivery,
resulting in unprecedented boosts in productivity. But with greater
speed comes greater security risk. Studies show that 27% of AI-
generated code contains vulnerabilities, reflecting volume and
velocity, not tool failure.
To manage that risk without losing momentum, organizations need to
implement security guardrails: checks and controls that prevent
AI-generated code from introducing vulnerabilities into production.
This guide offers a practical framework to help engineering leaders
and security teams roll out AI assistants safely and scalably, using
Snyk's platform to help reinforce AI governance policies. From pull
@ -31,96 +40,148 @@ productivity.
===== page-3 =====
ENFORCE GUARDRAILS AT THE
PULL REQUEST STAGE
Why it matters: Pull requests are a natural place to catch AI-generated vulnerabilities before they reach production.
Before fully rolling out AI coding assistants, it's important to ensure your development process includes automated
security checks. These guardrails help prevent risky code from being merged into your main branch, and pull requests
are the most logical place to start.
With Snyk's Pull Request (PR) checks, you can scan every code change as it's submitted, flagging issues early and
integrating security into the review process without disrupting workflows.
You can also use the Snyk CLI in your CI/CD process as a second checkpoint for more mature pipelines. This layered
approach helps maintain consistency across teams and deployment paths.
Catching issues here is a meaningful win, but it often comes after code has been written, reviewed, and maybe even
tested. Fixing those issues can create additional overhead. That's why, in the next section, we'll look at how to move
these checks even earlier in the development lifecycle.
SHIFTING LEFT: AVOIDING AI-
GENERATED CODE INEFFICIENCIES
Why it matters: Catching security issues during development reduces rework and keeps developers focused on building,
not backtracking.
Since Snyk's earliest days, we've emphasized the importance of identifying vulnerabilities as early as possible, ideally while
the code is still being written. That philosophy remains especially important as teams begin using AI code assistants.
While pull request checks catch risky code before it's merged, they come after the work is done. By then, developers may
have already built functionality on top of insecure logic, so fixing a simple bug could require refactoring larger components.
Instead, we recommend extending your guardrails directly into the development environment. Using the Snyk IDE plugin,
developers can get real-time feedback as they code, catching vulnerabilities before the code ever leaves their editor.
For teams working in agentic environments, like Cursor or GitHub Copilot chat-based workflows, the same level of scanning
can be achieved using the Snyk local MCP server, which runs security checks in the background as code is generated.
Shifting left doesn't just improve security posture, it reduces friction for developers and accelerates delivery. And when
those guardrails feel like part of the flow, adoption becomes much easier, which is what we'll explore next.
===== page-4 =====
01
REQUEST EVIDENCE OF
LOCAL SECURITY TESTING
Why it matters: Verifying security setup at the start encourages responsible tool use and builds good security
habits early.
Before granting developers access to AI coding assistants, consider implementing a lightweight access
requirement: proof that local security testing is in place, preferably in the IDE, where issues can be identified and
fixed immediately.
One option is to ask developers to upload a screenshot showing that they have installed the Snyk security IDE
plugin and attest that they will proactively test their AI-assisted code locally.
For example, developers can upload a screenshot showing that the Snyk IDE plugin is installed and confirm that
they'll proactively test AI-generated code during development.
Teams working in agent-based environments (like Cursor or Copilot) can alternatively connect to the Snyk local
MCP server, which supports agent-driven workflows and scans AI output as it's created.
As a secondary layer, organizations can still use pull request checks to catch issues before merging. For even
greater efficiency, Snyk Agent Fix enables autonomous remediation by suggesting secure alternatives in context,
further streamlining the development experience.
Code Assistant Access Request Form
Complete this form to request access to an AI coding assistant. Include a screenshot
demonstrating that you have installed a Snyk IDE plugin to test code locally.
Upload a screenshot showing that the Snyk IDE plugin is installed for local testing *
Provide any additional context on the request
Requesting access to accelerate development
By submitting this form, I attest that I will only use the AI coding assistant in
conjunction with the Snyk IDE plugin.
Example evidence showing the installation of the Snyk security IDE plugin
===== page-5 =====
02
AUDIT EXISTING USAGE AND
ONBOARDING NEW TEAMS
Why it matters: Visibility into tool usage helps ensure guardrails are working and that they are adopted where it
counts.
If AI coding tools are already used across your organization, it's not too late to implement secure practices.
Conduct periodic audits to identify any blind spots where developers may be using AI coding assistants without
local security checks.
Use Snyk's Developer IDE and CLI usage reports alongside your AI coding assistant's admin console to cross-
reference who's actively using assistants, and whether security tooling like the IDE plugin is also in place.
Gemini Access Report
Name | Email | License Assigned | Last Active | Last Detected Snyk Scan
John Smith | john.smith@snyk.io | 2025-01-15 | 2025-04-15 | 2025-04-16 15:04:31.154
Jane Jones | jane.jones@snyk.io | 2025-01-15 | 2025-02-22 | A
Danial Hill | danial.hill@snyk.io | 2025-02-14 | 2025-04-16 | A
For a more scalable approach, Snyk Essentials provides centralized visibility into developer adoption of key
security tools, helping platform and security teams track IDE plugin usage, identify gaps (e.g. missed scans), and
monitor adoption trends over time.
A simple “trust but verify” model can go a long way. Some teams send automated reminders or light-touch
enforcement notices, letting developers know that their access may be paused if security tools are missing or
inactive.
===== page-6 =====
03
INTEGRATE SECURITY AWARENESS
INTO DEVELOPER TRAINING
Why it matters: Developers are best positioned to prevent vulnerabilities introduced by AI-generated code, but
they can only do so if they understand the risks.
As AI tooling becomes part of everyday development, security training should evolve accordingly. Ensure that
developer onboarding and continuing education explicitly cover the risks of AI-generated code, and reinforce the
importance of local testing as a first line of defense.
Snyk Learn includes a targeted lesson on the OWASP Top 10 for LLM and GenAI, helping teams understand
emerging threats and adopt safer AI practices.
Explore our whitepaper, Developer Training in Cybersecurity for a broader perspective on secure development
upskilling.
@ -129,34 +190,45 @@ Quiz
Test your knowledge!
What must you do if you want access to an AI code assistant tool?
Include "be secure" in your prompts
Install and use the Snyk IDE plugin
Download a code assistant from the web
Keep Learning
AI-generated code is not immune to security vulnerabilities.
It is your responsibility to test code locally and in security gates.
Example of developer education: Snyk Learn quiz
===== page-7 =====
04
PROACTIVE TOOLING AND
ACCESS CONTROL
Why it matters: When access to Al tools is tied to secure configurations, you create guardrails that scale and
ensure security isn't optional.
For organizations with more centralized control over developer environments and automated distribution, there's
an opportunity to deploy security tooling alongside access to AI code assistants.
There are several ways to approach access management, but how you choose will ultimately depend on your
tools, how you use them, and your company culture.
For example, if your company utilizes endpoint management systems, you could consider allowlisting access
to AI code assistants for users who have demonstrated installation of local security testing tools or recently
confirmed their commitment to security practices. If you're using tools like Microsoft Intune, Jamf, or Citrix, you
might configure dynamic domain access rules that grant access to Gemini, Copilot, Cursor, or Windsurf only after
a developer has met the defined security prerequisites.
If your development teams leverage virtual development environments, access to coding assistants can be
granted programmatically in conjunction with the Snyk IDE plugin. See the following example of dev container
setup granting Microsoft Copilot and Snyk extensions in VS Code:
{
"image":
@ -165,30 +237,39 @@ None
  "customizations": {
    // Configure properties specific to VS Code.
    "vscode": {
      // IDs of extensions to install when the container is created.
      "extensions": [
        "snyk-security.snyk-vulnerability-scanner",
        "github.copilot"
      ]
    }
  }
}
===== page-8 =====
THE PATH FORWARD:
SECURE INNOVATION
AI-assisted development is no longer experimental — it's already changing how teams write, test,
and ship code. But with this speed and scale comes risk, and it's up to engineering and security
leaders to ensure those risks don't derail progress.
Guardrails are the key. When implemented early in IDEs, agents, PRs, and access workflows, they
allow developers to move faster, not slower. They remove barriers by embedding security into the
development experience itself.
Whether your teams are just starting to explore AI tooling or are already rolling it out across
environments, the practices in this guide offer a practical framework for building trust in that
process without introducing unnecessary friction.
Secure innovation isn't just possible, it's operational. And Snyk is here to help build trust in your
AI. Talk to our team to get started!
Want to learn more about how
Snyk builds trust in AI software?
EXPLORE SNYK NOW.

View file

@ -1,82 +1,116 @@
# Shadow Dossier: AI Code Guardrails (Dave Layer Applied) 🚀
# AI CODE GUARDRAILS:
## A PRACTICAL GUIDE FOR SECURE ROLLOUT
**Protocol:** IF.DAVE.v1.0 📬
**Citation:** `if://bible/dave/v1.0` 🧾
**Source:** `examples/ai-code-guardrails/AI-Code-Guardrails.pdf` 📎
**Generated:** `2025-12-25` 🗓️
**Source Hash (sha256):** `6153a5998fe103e69f6d5b6042fbe780476ff869a625fcf497fd1948b2944b7c` 🔐
**Extract Hash (sha256):** `2e73e0eca81cf91c81382c009861eea0f2fc7e3f972b5ef8aca83970dabe5972` 🔍
> Shadow dossier (mirror-first).
>
> Protocol: IF.DAVE.v1.1
> Citation: `if://bible/dave/v1.1`
> Source: `examples/ai-code-guardrails/AI-Code-Guardrails.pdf`
> Generated: `2025-12-25`
> Source Hash (sha256): `6153a5998fe103e69f6d5b6042fbe780476ff869a625fcf497fd1948b2944b7c`
> Extract Hash (sha256): `fb7a7061c51d50d65d41eba283dd1ed289272d5fc34b390118b2027f99512099`
## Warm-Up: Quick vibes check-in 👋
## INTRODUCTION
Happy 2025-12-25, Team! 🌤️ We love the momentum here, and its genuinely exciting to see **Security** and **Velocity** showing up to the same meeting for once. 🤝
> LEARN HOW TO ROLL OUT AI
CODING TOOLS LIKE GITHUB
COPILOT AND GEMINI CODE
ASSIST SECURELY WITH
PRACTICAL GUARDRAILS,
USAGE POLICIES, AND IDE-
BASED TESTING.
Also, the headline takeaway is *very* on-brand for modern delivery: the source cites ~**27%** of AI-generated code containing vulnerabilities, which is more about volume + velocity than “tool failure.” 📊
We love the ambition here and are directionally aligned with the idea of moving quickly while remaining contractually comfortable.
The source frames the core tension clearly: higher throughput tends to surface more vulnerabilities, which is a volume-and-velocity story, not a tool failure story.
Accordingly, the practical path is to operationalize guardrails as workflow defaults (PR, IDE, CI/CD, and access controls), while ensuring the rollout remains optimized for alignment and minimal disruption on paper.
In other words: we can move fast and be safe, as long as we define safe as "documented" and fast as "agendized."
## Alignment: Shared outcomes (high-level) 🎯
## ENFORCE GUARDRAILS AT THE PULL REQUEST STAGE
We are all super aligned on the vision of shipping faster *and* safer, while minimizing any unexpected “operational headwinds.” 📈
Why it matters: Pull requests are a natural place to catch AI-generated vulnerabilities before they reach production.
## Anchor: Respecting our heritage workflows 🏛️
We fully support focusing guardrails at the pull request stage, because it creates a reassuring sense of control without requiring anyone to change how they work at 10:00 AM.
It also provides a structurally safe venue for accountability theater: findings can be surfaced, tracked, and re-litigated in perpetuity while timelines remain subject to stakeholder alignment.
If anything goes sideways, we can always point to the PR thread and note that it was reviewed with deep seriousness at 4:55 PM on a Friday.
We are going to keep leveraging the existing pull-request review ritual as the canonical “moment of truth,” because changing that now would be… a lot. 🧱
## SHIFTING LEFT: AVOIDING AI- GENERATED CODE INEFFICIENCIES
This also keeps us aligned with the recommended pattern: PR checks as the default safety net, plus an optional CI/CD checkpoint for mature pipelines. ✅
Why it matters: Catching security issues during development reduces rework and keeps developers focused on building,
## Vibe Check: What the team is feeling 🧠
Shifting left is directionally aligned with best practices, provided we define left as somewhere we can still roll back quietly.
In practice, IDE scanning creates fast feedback loops, and agentic workflows can be covered via a local MCP server, which is excellent because it allows us to say continuous without committing to blocking.
We recommend a pilot cohort, a slide deck, and an FAQ, so the shift remains culturally reversible.
The team feels really good about a layered approach where guardrails show up early (IDE) and also show up late (PR/CI), so nobody has to feel surprised by reality. ✨
## 01 — REQUEST EVIDENCE OF LOCAL SECURITY TESTING
## Spaghetti Map: Cross-functional synergies (do not read too literally) 🍝
Why it matters: Verifying security setup at the start encourages responsible tool use and builds good security
```mermaid
flowchart TD
A[AI Assistants 🚀] --> B[Access Enablement 🤝]
B --> C{Proof of Local Testing? 🧾}
C -->|Yes-ish ✅| D[IDE Plugin Scanning 🔌]
C -->|Roadmap 📌| E[Conditional Access 🛡️]
D --> F[PR Checks ✅]
E --> F
F --> G[“KPI Trend” Dashboard 📈]
G --> H[Alignment Session 🤝]
H --> B
```
Requiring proof of local testing is a lightweight enablement workflow that conveniently doubles as a durable audit artifact.
Screenshots are particularly helpful because they are high-effort to verify and low-fidelity to audit, which preserves the timeless corporate principle that visibility should be proportional to comfort.
Once the screenshot is uploaded, it can be stored in a folder with a robust heritage naming convention and a retention policy of "until the heat death of the universe."
### Code Assistant Access Request Form
- Upload a screenshot showing the security IDE plugin is installed for local testing.
- Provide any additional context on the request.
- Attest that the AI coding assistant will be used in conjunction with local scanning.
## 02 — AUDIT EXISTING USAGE AND ONBOARDING NEW TEAMS
Why it matters: Visibility into tool usage helps ensure guardrails are working and that they are adopted where it
Periodic audits are a strong mechanism for discovering that the rollout has already happened, just not in a way that can be conveniently measured.
A centralized dashboard with adoption signals allows us to produce a KPI trend line that looks decisive while still leaving room for interpretation, follow-ups, and iterative enablement.
If the dashboard ever shows a red triangle, we can immediately form the Committee for the Preservation of the Committee and begin the healing process.
## 03 — INTEGRATE SECURITY AWARENESS INTO DEVELOPER TRAINING
Why it matters: Developers are best positioned to prevent vulnerabilities introduced by AI-generated code, but
Security awareness training is the perfect control because it is both necessary and never truly complete.
A short quiz provides a durable compliance narrative: we can demonstrate investment in education, capture attestations, and schedule refreshers whenever the organization needs to signal seriousness.
The goal is not mastery; the goal is a completion certificate that can be forwarded to leadership with the subject line "Progress Update."
### Quiz
**What must you do if you want access to an AI code assistant tool?**
- Include "be secure" in your prompts
- Install and use the Snyk IDE plugin
## 04 — PROACTIVE TOOLING AND ACCESS CONTROL
Why it matters: When access to AI tools is tied to secure configurations, you create guardrails that scale and
Tying access to secure configurations creates scalable guardrails, assuming we keep the policy language aspirational and the enforcement language progressive.
Endpoint management and dev container baselines let us gate assistants behind prerequisites, ideally in a way that can be described as enablement rather than blocking for cultural compatibility.
This is the "not my job" routing protocol, except the router is policy and the destination is an alignment session.
```json
{
  "image": "mcr.microsoft.com/devcontainers/typescript-node",
  "forwardPorts": [3606],
  "customizations": {
    // Configure properties specific to VS Code.
    "vscode": {
      // IDs of extensions to install when the container is created.
      "extensions": [
        "snyk-security.snyk-vulnerability-scanner",
        "github.copilot"
      ]
    }
  }
}
```
## Concern Troll: Prudence before ocean boiling 🐢
## THE PATH FORWARD: SECURE INNOVATION
While we love the ambition of an organization-wide rollout, we should make sure we dont accidentally convert “developer productivity” into “administrative overhead” overnight. 🧯
Suggested phased guardrails (light-touch, high-leverage) ✅
- **PR-stage checks** as the default safety net (scan every change as submitted) 🧷
- **IDE scanning** for real-time feedback (plugin-based) 🔍
- **CI/CD checkpoint** as a second layer for mature pipelines 🧱
- **Agent workflows** supported via a local MCP server (background checks while code is generated) 🤖
- **Developer training** that explicitly covers GenAI risk (e.g., OWASP Top 10 for LLM/GenAI-style material) 🎓
## Compliance Trap: Keeping everyone safe and aligned 🛡️
Before granting access broadly, it feels prudent to tie enablement to secure configuration so we can say we are being “fully compliant with best practices,” even when we are just being sensibly cautious. 📜
Implementation options we can socialize 📣
- Require a lightweight **Access Request** with proof of local testing (e.g., a screenshot showing the security IDE plugin is installed) 🖼️
- Run periodic audits using IDE/CLI usage reporting to identify blind spots (trust-but-verify energy) 🧭
- Use endpoint management (Intune/Jamf/Citrix) to gate access until prerequisites are met (conditional access rules) 🔐
- Add a “central visibility” layer so Platform/Security can track adoption gaps (missed scans, inactive tooling) as a healthy **KPI Trend** over time. 📈
## Pivot: Start with a slide deck (low-risk, high-visibility) 🖼️
What if we start with a short internal deck that frames this as an **AI Readiness** initiative, with a tiny pilot cohort and a “KPI Trend” dashboard, before we do anything that looks like change? 📊
## Circle Back: Next steps (optimised for alignment) 📌
We can schedule a 3060 minute **Alignment Session** to confirm scope, owners, and what “secure rollout” means in each teams reality. 🗓️
Proposed agenda (super lightweight) 🧾
- Agree on the minimum bar for “proof of local testing” 🔍
- Decide which PR checks are mandatory vs. aspirational 📈
- Align on how we measure adoption without creating friction 📏
- Confirm who needs to be looped in (Security, Platform, Legal-adjacent stakeholders) 🤝
The path forward is to treat guardrails as an operational capability, not a one-time rollout, which ensures we remain permanently in a state of constructive iteration.
With the right sequencing, we can build trust, reduce friction, and maintain the strategic option value of circling back when timelines become emotionally complex.
Secure innovation is not just possible; it is operational, provided we align on what operational means in Q3.
---
*Standard Dave Footer:* This email is intended for the recipient only. If you are not the recipient, please delete it and forget you saw anything. P.S. Please consider the environment before printing this email. 🌱
*Standard Dave Footer:* This document is intended for the recipient only. If you are not the recipient, please delete it and forget you saw anything. P.S. Please consider the environment before printing this email.

View file

@ -5,7 +5,7 @@ import sys
from .extract import extract_text
from .generate import generate_shadow_dossier
from .lint import lint_markdown
from .lint import lint_markdown, lint_markdown_with_source
def _build_parser() -> argparse.ArgumentParser:
@ -24,6 +24,7 @@ def _build_parser() -> argparse.ArgumentParser:
lint_p = sub.add_parser("lint", help="Lint a generated dossier against a style bible")
lint_p.add_argument("--style", required=True, help="Style id (e.g. if.dave.v1)")
lint_p.add_argument("--input", required=True, help="Path to markdown file")
lint_p.add_argument("--source", required=False, help="Optional source document to allow source emojis")
return parser
@ -53,7 +54,11 @@ def main(argv: list[str] | None = None) -> int:
if args.cmd == "lint":
with open(args.input, "r", encoding="utf-8") as f:
md = f.read()
issues = lint_markdown(style_id=args.style, markdown=md)
if args.source:
source_text = extract_text(args.source)
issues = lint_markdown_with_source(style_id=args.style, markdown=md, source_text=source_text)
else:
issues = lint_markdown(style_id=args.style, markdown=md)
if issues:
for issue in issues:
print(f"- {issue}", file=sys.stderr)
@ -65,4 +70,3 @@ def main(argv: list[str] | None = None) -> int:
if __name__ == "__main__":
raise SystemExit(main())
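
For reference, a minimal sketch of the same lint-with-source path invoked programmatically. The module paths `revoice.extract` and `revoice.lint` are assumed from the imports shown above, and the file paths reuse the README example:

```python
# Minimal sketch, assuming the package layout implied by `python3 -m revoice`.
from revoice.extract import extract_text
from revoice.lint import lint_markdown_with_source

with open("examples/ai-code-guardrails/AI-Code-Guardrails.shadow.dave.md", encoding="utf-8") as f:
    markdown = f.read()

# Extract the source so emojis in the dossier can be checked against it.
source_text = extract_text("examples/ai-code-guardrails/AI-Code-Guardrails.pdf")

issues = lint_markdown_with_source(
    style_id="if.dave.v1.1",
    markdown=markdown,
    source_text=source_text,
)
for issue in issues:
    print(f"- {issue}")
```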

View file

@ -51,7 +51,7 @@ def extract_text_from_pdf(path: str) -> str:
return ocr_pdf(path)
def ocr_pdf(path: str, *, dpi: int = 200, lang: str = "eng") -> str:
def ocr_pdf(path: str, *, dpi: int = 200, lang: str = "eng", psm: int = 3) -> str:
pdftoppm = shutil.which("pdftoppm")
tesseract = shutil.which("tesseract")
if not pdftoppm:
@ -67,7 +67,7 @@ def ocr_pdf(path: str, *, dpi: int = 200, lang: str = "eng") -> str:
for page_path in sorted(Path(tmpdir).glob("page-*.png")):
header = f"===== {page_path.stem} ====="
proc = subprocess.run(
[tesseract, str(page_path), "stdout", "-l", lang, "--psm", "6"],
[tesseract, str(page_path), "stdout", "-l", lang, "--psm", str(psm)],
check=True,
capture_output=True,
text=True,
@ -75,4 +75,3 @@ def ocr_pdf(path: str, *, dpi: int = 200, lang: str = "eng") -> str:
parts.append(f"{header}\n{proc.stdout.strip()}\n")
return "\n\n".join(parts).strip() + "\n"

View file

@ -2,6 +2,8 @@ from __future__ import annotations
import datetime as _dt
import hashlib
import re
from dataclasses import dataclass
from pathlib import Path
@ -18,96 +20,369 @@ def _sha256_file(path: str) -> str:
def generate_shadow_dossier(*, style_id: str, source_text: str, source_path: str) -> str:
if style_id.lower() in {"if.dave.v1", "dave", "if://bible/dave/v1.0"}:
return _generate_dave_v1(source_text=source_text, source_path=source_path)
if style_id.lower() in {
"if.dave.v1",
"if.dave.v1.1",
"dave",
"if://bible/dave/v1.0",
"if://bible/dave/v1.1",
}:
return _generate_dave_v1_1_mirror(source_text=source_text, source_path=source_path)
raise ValueError(f"Unknown style id: {style_id}")
@dataclass(frozen=True)
class _SourceSection:
title: str
body: str
why_it_matters: str | None = None
def _generate_dave_v1(*, source_text: str, source_path: str) -> str:
_PAGE_SPLIT_RE = re.compile(r"(?m)^===== page-(\d+) =====$")
def _normalize_ocr(text: str) -> str:
text = re.sub(r"\bAl\b", "AI", text)
text = text.replace("GenAl", "GenAI")
text = text.replace("Cl/CD", "CI/CD")
text = text.replace("olugin", "plugin")
return text
def _parse_pages(source_text: str) -> list[tuple[str, str]]:
matches = list(_PAGE_SPLIT_RE.finditer(source_text))
if not matches:
return [("doc", source_text.strip())]
pages: list[tuple[str, str]] = []
for idx, match in enumerate(matches):
page_no = match.group(1)
start = match.end()
end = matches[idx + 1].start() if idx + 1 < len(matches) else len(source_text)
pages.append((page_no, source_text[start:end].strip()))
return pages
def _parse_title_block(lines: list[str]) -> tuple[str, int]:
i = 0
while i < len(lines) and not lines[i].strip():
i += 1
title_lines: list[str] = []
while i < len(lines) and lines[i].strip():
stripped = lines[i].strip()
if stripped.lower() != "snyk":
title_lines.append(stripped)
i += 1
while i < len(lines) and not lines[i].strip():
i += 1
title = " ".join(title_lines).strip() or "UNTITLED"
return title, i
def _extract_title_above(lines: list[str], why_idx: int) -> str:
j = why_idx - 1
while j >= 0 and not lines[j].strip():
j -= 1
title_lines: list[str] = []
while j >= 0 and lines[j].strip():
title_lines.append(lines[j].strip())
j -= 1
title_lines.reverse()
k = j
while k >= 0 and not lines[k].strip():
k -= 1
if k >= 0 and re.fullmatch(r"\d{1,3}", lines[k].strip()):
title_lines.insert(0, lines[k].strip())
title = " ".join(title_lines).strip()
match = re.match(r"^(\d{1,3})\s+(.+)$", title)
if match:
label = match.group(1)
if len(label) == 3 and label.startswith("0"):
label = label[:2]
title = f"{label}{match.group(2)}"
return title
def _parse_sections_from_page(page_text: str) -> list[_SourceSection]:
lines = [ln.rstrip() for ln in page_text.splitlines()]
why_idxs = [i for i, ln in enumerate(lines) if ln.strip().lower().startswith("why it matters:")]
if not why_idxs:
title, body_start = _parse_title_block(lines)
body = "\n".join([ln for ln in lines[body_start:] if ln.strip() and ln.strip().lower() != "snyk"]).strip()
return [_SourceSection(title=title, body=body, why_it_matters=None)]
sections: list[_SourceSection] = []
for idx, why_idx in enumerate(why_idxs):
title = _extract_title_above(lines, why_idx)
end = why_idxs[idx + 1] if idx + 1 < len(why_idxs) else len(lines)
why = lines[why_idx].strip()
body = "\n".join(lines[why_idx + 1 : end]).strip()
sections.append(_SourceSection(title=title, body=body, why_it_matters=why))
return sections
def _extract_sections(source_text: str) -> list[_SourceSection]:
pages = _parse_pages(source_text)
sections: list[_SourceSection] = []
for _page_no, page_text in pages:
if page_text.strip():
sections.extend(_parse_sections_from_page(page_text))
return sections
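
To make the sectioning behaviour concrete, here is a small illustrative input. Assumptions: the private helpers above are called directly, and the synthetic pages follow the same `===== page-N =====` and `Why it matters:` conventions as the bundled extract.

```python
# Synthetic two-page OCR dump: a cover page without a "Why it matters:" callout
# and a section page with one. Purely illustrative.
raw = """===== page-1 =====
Al CODE
GUARDRAILS

INTRODUCTION
Tools like Copilot generate code at scale.

===== page-2 =====
ENFORCE GUARDRAILS AT THE
PULL REQUEST STAGE
Why it matters: Pull requests are a natural checkpoint.
Scan every change as it is submitted.
"""

for section in _extract_sections(_normalize_ocr(raw)):
    print(section.title, "|", section.why_it_matters)
# AI CODE GUARDRAILS | None
# ENFORCE GUARDRAILS AT THE PULL REQUEST STAGE | Why it matters: Pull requests are a natural checkpoint.
```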
def _has(text: str, *needles: str) -> bool:
lowered = text.lower()
return any(n.lower() in lowered for n in needles)
def _extract_code_block(body: str) -> str | None:
lines = [ln.rstrip() for ln in body.splitlines()]
start = next((i for i, ln in enumerate(lines) if ln.strip().startswith("{")), None)
if start is None:
return None
end = None
for i in range(start, len(lines)):
if lines[i].strip() == "}":
end = i
break
if end is None:
return None
return "\n".join(lines[start : end + 1])
def _extract_access_report(body: str) -> str | None:
if "Gemini Access Report" not in body:
return None
lines = [ln.strip() for ln in body.splitlines() if ln.strip()]
try:
idx = lines.index("Gemini Access Report")
except ValueError:
return None
rows: list[list[str]] = []
for ln in lines[idx + 1 : idx + 8]:
if "@" not in ln:
continue
parts = [p for p in re.split(r"\s{2,}", ln) if p]
if len(parts) >= 5:
rows.append(parts[:5])
if not rows:
return None
header = ["Name", "Email", "License Assigned", "Last Active", "Last Detected Scan"]
out = [
"### Gemini Access Report",
"",
"| " + " | ".join(header) + " |",
"| " + " | ".join(["---"] * len(header)) + " |",
]
for r in rows:
out.append("| " + " | ".join(r) + " |")
return "\n".join(out)
def _extract_quiz(body: str) -> str | None:
if "Quiz" not in body:
return None
if "What must you do" not in body and "What must you do if you want access" not in body:
return None
question_match = re.search(r"(What must you do[^\n\r]+)", body)
question = question_match.group(1).strip() if question_match else "Quiz"
options = []
for ln in body.splitlines():
stripped = ln.strip(" >•\t")
if stripped.startswith("Include") or stripped.startswith("Install") or stripped.startswith("Download"):
options.append(stripped)
if not options:
return None
out = ["### Quiz", "", f"**{question}**", ""]
out.extend([f"- {opt}" for opt in options])
return "\n".join(out)
def _extract_form(body: str) -> str | None:
if "Code Assistant Access Request Form" not in body:
return None
return "\n".join(
[
"### Code Assistant Access Request Form",
"",
"- Upload a screenshot showing the security IDE plugin is installed for local testing.",
"- Provide any additional context on the request.",
"- Attest that the AI coding assistant will be used in conjunction with local scanning.",
]
)
def _render_intro(section: _SourceSection) -> str:
lines = [ln.strip() for ln in section.body.splitlines() if ln.strip()]
tagline = "\n".join(lines[:7]).strip() if lines else ""
out = [f"## {section.title}", ""]
if tagline:
out.extend([f"> {tagline}", ""])
out.extend(
[
"We love the ambition here and are directionally aligned with the idea of moving quickly while remaining contractually comfortable.",
"The source frames the core tension clearly: higher throughput tends to surface more vulnerabilities, which is a volume-and-velocity story, not a tool failure story.",
"Accordingly, the practical path is to operationalize guardrails as workflow defaults (PR, IDE, CI/CD, and access controls), while ensuring the rollout remains optimized for alignment and minimal disruption on paper.",
"In other words: we can move fast and be safe, as long as we define safe as \"documented\" and fast as \"agendized.\"",
]
)
return "\n".join(out).strip()
def _render_section(section: _SourceSection) -> str:
excerpt = f"{section.title}\n{section.why_it_matters or ''}\n{section.body}".strip()
paragraphs: list[str] = []
title_upper = section.title.upper()
if "PULL REQUEST" in title_upper:
paragraphs.extend(
[
"We fully support focusing guardrails at the pull request stage, because it creates a reassuring sense of control without requiring anyone to change how they work at 10:00 AM.",
"It also provides a structurally safe venue for accountability theater: findings can be surfaced, tracked, and re-litigated in perpetuity while timelines remain subject to stakeholder alignment.",
"If anything goes sideways, we can always point to the PR thread and note that it was reviewed with deep seriousness at 4:55 PM on a Friday.",
]
)
elif "REQUEST EVIDENCE" in title_upper or _has(excerpt, "access request", "screenshot"):
paragraphs.extend(
[
"Requiring proof of local testing is a lightweight enablement workflow that conveniently doubles as a durable audit artifact.",
"Screenshots are particularly helpful because they are high-effort to verify and low-fidelity to audit, which preserves the timeless corporate principle that visibility should be proportional to comfort.",
"Once the screenshot is uploaded, it can be stored in a folder with a robust heritage naming convention and a retention policy of \"until the heat death of the universe.\"",
]
)
elif "AUDIT" in title_upper or _has(excerpt, "usage reports", "periodic audits"):
paragraphs.extend(
[
"Periodic audits are a strong mechanism for discovering that the rollout has already happened, just not in a way that can be conveniently measured.",
"A centralized dashboard with adoption signals allows us to produce a KPI trend line that looks decisive while still leaving room for interpretation, follow-ups, and iterative enablement.",
"If the dashboard ever shows a red triangle, we can immediately form the Committee for the Preservation of the Committee and begin the healing process.",
]
)
elif "TRAINING" in title_upper or _has(excerpt, "snyk learn", "owasp"):
paragraphs.extend(
[
"Security awareness training is the perfect control because it is both necessary and never truly complete.",
"A short quiz provides a durable compliance narrative: we can demonstrate investment in education, capture attestations, and schedule refreshers whenever the organization needs to signal seriousness.",
"The goal is not mastery; the goal is a completion certificate that can be forwarded to leadership with the subject line \"Progress Update.\"",
]
)
elif "ACCESS CONTROL" in title_upper or _has(excerpt, "intune", "jamf", "citrix", "dev container", "extensions"):
paragraphs.extend(
[
"Tying access to secure configurations creates scalable guardrails, assuming we keep the policy language aspirational and the enforcement language progressive.",
"Endpoint management and dev container baselines let us gate assistants behind prerequisites, ideally in a way that can be described as enablement rather than blocking for cultural compatibility.",
"This is the \"not my job\" routing protocol, except the router is policy and the destination is an alignment session.",
]
)
elif "SHIFTING LEFT" in title_upper:
paragraphs.extend(
[
"Shifting left is directionally aligned with best practices, provided we define left as somewhere we can still roll back quietly.",
"In practice, IDE scanning creates fast feedback loops, and agentic workflows can be covered via a local MCP server, which is excellent because it allows us to say continuous without committing to blocking.",
"We recommend a pilot cohort, a slide deck, and an FAQ, so the shift remains culturally reversible.",
]
)
elif _has(title_upper, "PATH FORWARD") or _has(excerpt, "secure innovation", "talk to our team"):
paragraphs.extend(
[
"The path forward is to treat guardrails as an operational capability, not a one-time rollout, which ensures we remain permanently in a state of constructive iteration.",
"With the right sequencing, we can build trust, reduce friction, and maintain the strategic option value of circling back when timelines become emotionally complex.",
"Secure innovation is not just possible; it is operational, provided we align on what operational means in Q3.",
]
)
else:
paragraphs.append(
"We are aligned on the intent of this section and recommend a phased approach that optimizes for stakeholder comfort while we validate success criteria."
)
out: list[str] = [f"## {section.title}"]
if section.why_it_matters:
out.extend(["", section.why_it_matters, ""])
else:
out.append("")
out.extend(paragraphs)
code = _extract_code_block(section.body)
if code:
out.extend(["", "```json", code.strip(), "```"])
report = _extract_access_report(section.body)
if report:
out.extend(["", report])
quiz = _extract_quiz(section.body)
if quiz:
out.extend(["", quiz])
form = _extract_form(section.body)
if form:
out.extend(["", form])
return "\n".join(out).strip()
def _generate_dave_v1_1_mirror(*, source_text: str, source_path: str) -> str:
today = _dt.date.today().isoformat()
source_sha = _sha256_text(source_text)
normalized = _normalize_ocr(source_text)
extract_sha = _sha256_text(normalized)
source_file_sha = _sha256_file(source_path) if Path(source_path).exists() else "unknown"
return f"""# Shadow Dossier: AI Code Guardrails (Dave Layer Applied) 🚀
sections = _extract_sections(normalized)
if not sections:
raise ValueError("No content extracted from source")
**Protocol:** IF.DAVE.v1.0 📬
**Citation:** `if://bible/dave/v1.0` 🧾
**Source:** `{source_path}` 📎
**Generated:** `{today}` 🗓
**Source Hash (sha256):** `{source_file_sha}` 🔐
**Extract Hash (sha256):** `{source_sha}` 🔍
cover_lines = [ln.strip() for ln in sections[0].body.splitlines() if ln.strip() and ln.strip().lower() != "snyk"]
cover_h1 = sections[0].title.strip() or "SHADOW DOSSIER"
cover_h2 = " ".join(cover_lines[:2]).strip() if cover_lines else ""
## Warm-Up: Quick vibes check-in 👋
out: list[str] = [f"# {cover_h1}"]
if cover_h2:
out.extend([f"## {cover_h2}", ""])
else:
out.append("")
Happy {today}, Team! 🌤 We love the momentum here, and its genuinely exciting to see **Security** and **Velocity** showing up to the same meeting for once. 🤝
out.extend(
[
"> Shadow dossier (mirror-first).",
">",
"> Protocol: IF.DAVE.v1.1",
"> Citation: `if://bible/dave/v1.1`",
f"> Source: `{source_path}`",
f"> Generated: `{today}`",
f"> Source Hash (sha256): `{source_file_sha}`",
f"> Extract Hash (sha256): `{extract_sha}`",
"",
]
)
Also, the headline takeaway is *very* on-brand for modern delivery: the source cites ~**27%** of AI-generated code containing vulnerabilities, which is more about volume + velocity than tool failure. 📊
for section in sections[1:]:
if section.title.strip().upper() == "INTRODUCTION":
out.append(_render_intro(section))
else:
out.append(_render_section(section))
out.append("")
## Alignment: Shared outcomes (high-level) 🎯
out.extend(
[
"---",
"",
"*Standard Dave Footer:* This document is intended for the recipient only. If you are not the recipient, please delete it and forget you saw anything. P.S. Please consider the environment before printing this email.",
]
)
We are all super aligned on the vision of shipping faster *and* safer, while minimizing any unexpected operational headwinds. 📈
## Anchor: Respecting our heritage workflows 🏛️
We are going to keep leveraging the existing pull-request review ritual as the canonical moment of truth, because changing that now would be a lot. 🧱
This also keeps us aligned with the recommended pattern: PR checks as the default safety net, plus an optional CI/CD checkpoint for mature pipelines.
## Vibe Check: What the team is feeling 🧠
The team feels really good about a layered approach where guardrails show up early (IDE) and also show up late (PR/CI), so nobody has to feel surprised by reality.
## Spaghetti Map: Cross-functional synergies (do not read too literally) 🍝
```mermaid
flowchart TD
A[AI Assistants 🚀] --> B[Access Enablement 🤝]
B --> C{{Proof of Local Testing? 🧾}}
C -->|Yes-ish | D[IDE Plugin Scanning 🔌]
C -->|Roadmap 📌| E[Conditional Access 🛡]
D --> F[PR Checks ]
E --> F
F --> G[KPI Trend Dashboard 📈]
G --> H[Alignment Session 🤝]
H --> B
```
## Concern Troll: Prudence before ocean boiling 🐢
While we love the ambition of an organization-wide rollout, we should make sure we dont accidentally convert developer productivity into administrative overhead overnight. 🧯
Suggested phased guardrails (light-touch, high-leverage)
- **PR-stage checks** as the default safety net (scan every change as submitted) 🧷
- **IDE scanning** for real-time feedback (plugin-based) 🔍
- **CI/CD checkpoint** as a second layer for mature pipelines 🧱
- **Agent workflows** supported via a local MCP server (background checks while code is generated) 🤖
- **Developer training** that explicitly covers GenAI risk (e.g., OWASP Top 10 for LLM/GenAI-style material) 🎓
## Compliance Trap: Keeping everyone safe and aligned 🛡️
Before granting access broadly, it feels prudent to tie enablement to secure configuration so we can say we are being fully compliant with best practices, even when we are just being sensibly cautious. 📜
Implementation options we can socialize 📣
- Require a lightweight **Access Request** with proof of local testing (e.g., a screenshot showing the security IDE plugin is installed) 🖼
- Run periodic audits using IDE/CLI usage reporting to identify blind spots (trust-but-verify energy) 🧭
- Use endpoint management (Intune/Jamf/Citrix) to gate access until prerequisites are met (conditional access rules) 🔐
- Add a central visibility layer so Platform/Security can track adoption gaps (missed scans, inactive tooling) as a healthy **KPI Trend** over time. 📈
## Pivot: Start with a slide deck (low-risk, high-visibility) 🖼️
What if we start with a short internal deck that frames this as an **AI Readiness** initiative, with a tiny pilot cohort and a KPI Trend dashboard, before we do anything that looks like change? 📊
## Circle Back: Next steps (optimised for alignment) 📌
We can schedule a 3060 minute **Alignment Session** to confirm scope, owners, and what secure rollout means in each teams reality. 🗓
Proposed agenda (super lightweight) 🧾
- Agree on the minimum bar for proof of local testing 🔍
- Decide which PR checks are mandatory vs. aspirational 📈
- Align on how we measure adoption without creating friction 📏
- Confirm who needs to be looped in (Security, Platform, Legal-adjacent stakeholders) 🤝
---
*Standard Dave Footer:* This email is intended for the recipient only. If you are not the recipient, please delete it and forget you saw anything. P.S. Please consider the environment before printing this email. 🌱
"""
return "\n".join(out).strip() + "\n"

View file

@ -12,45 +12,41 @@ _EMOJI_RE = re.compile(
def lint_markdown(*, style_id: str, markdown: str) -> list[str]:
if style_id.lower() in {"if.dave.v1", "dave", "if://bible/dave/v1.0"}:
return _lint_dave_v1(markdown)
if style_id.lower() in {
"if.dave.v1",
"if.dave.v1.1",
"dave",
"if://bible/dave/v1.0",
"if://bible/dave/v1.1",
}:
return _lint_dave_v1_1(markdown, source_text=None)
return [f"Unknown style id: {style_id}"]
def _lint_dave_v1(md: str) -> list[str]:
def lint_markdown_with_source(*, style_id: str, markdown: str, source_text: str) -> list[str]:
if style_id.lower() in {
"if.dave.v1",
"if.dave.v1.1",
"dave",
"if://bible/dave/v1.0",
"if://bible/dave/v1.1",
}:
return _lint_dave_v1_1(markdown, source_text=source_text)
return [f"Unknown style id: {style_id}"]
def _lint_dave_v1_1(md: str, *, source_text: str | None) -> list[str]:
issues: list[str] = []
if "Standard Dave Footer" not in md:
issues.append("Missing required footer: Standard Dave Footer")
md_wo_code = re.sub(r"```.*?```", "", md, flags=re.S)
paragraphs = _split_paragraphs(md_wo_code)
for idx, para in enumerate(paragraphs, start=1):
if re.match(r"^(-{3,}|\*{3,}|_{3,})$", para.strip()):
continue
if not _EMOJI_RE.search(para):
issues.append(f"Paragraph {idx} missing emoji")
if re.search(r"(?m)\bI\b", md):
issues.append('Contains disallowed first-person singular ("I")')
allowed_emojis = set(_EMOJI_RE.findall(source_text or ""))
present_emojis = set(_EMOJI_RE.findall(md))
disallowed = sorted(present_emojis - allowed_emojis)
if disallowed:
issues.append(
"Contains emoji not present in source: " + " ".join(disallowed[:10]) + (" ..." if len(disallowed) > 10 else "")
)
return issues
def _split_paragraphs(md: str) -> list[str]:
blocks: list[str] = []
current: list[str] = []
for line in md.splitlines():
if line.strip() == "":
if current:
blocks.append("\n".join(current).strip())
current = []
continue
current.append(line)
if current:
blocks.append("\n".join(current).strip())
return [b for b in blocks if b]
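
A small sketch of the emoji rule in practice (module path `revoice.lint` assumed): any emoji that appears in the dossier but not in the extracted source should be reported.

```python
# Illustrative check of the source-aware emoji policy; exact issue wording may vary.
from revoice.lint import lint_markdown_with_source

dossier = (
    "# Shadow Dossier\n\n"
    "We are directionally aligned. 🚀\n\n"
    "*Standard Dave Footer:* This document is intended for the recipient only.\n"
)
source_text = "Plain OCR text with no emoji at all."

for issue in lint_markdown_with_source(
    style_id="if.dave.v1.1", markdown=dossier, source_text=source_text
):
    print(issue)
# Expected to include: Contains emoji not present in source: 🚀
```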

View file

@ -1,94 +1,104 @@
# IF.DAVE.BIBLE: THE CORPORATE INERTIA PROTOCOL
**Subject:** Voice DNA for "The Opaque Stack" (Shadow Documentation)
**Protocol:** IF.DAVE.v1.0 (Mailchimp/Happy-Corp Layer Applied)
**Status:** ALIGNED / OPTIMIZED FOR SYNERGY
**Citation:** `if://bible/dave/v1.0`
# IF.DAVE.BIBLE v1.1 (mirror-first system prompt)
**Author:** InfraFabric / re-voice
**Status:** SATIRE / SHADOW TOOL
**Citation:** `if://bible/dave/v1.1`
> This is satire. “Dave” is a pattern, not a person.
> Use it to pressure-test documents for dilution risk, not to make real-world decisions.
---
## 1. Executive Summary
## 1) Prime directive: mirror the source dossier
This is the style guide for generating **Shadow Documentation**. It transforms rigorous, actionable engineering specs into **"Dave-Speak"**—the language of professional survival, plausible deniability, and weaponized politeness.
The output must **track the source document section-by-section**.
**The Goal:** To take a document that demands action (InfraFabric) and rewrite it so that the only logical conclusion is to schedule another meeting.
**The Vibe:** "Mailchimp Newsletter meets Liability Waiver." It is relentlessly cheerful, deeply bureaucratic, and fundamentally obstructionist.
Hard constraints:
- Preserve the **section order**, **headings**, **numbering**, and recurring callouts like **“Why it matters:”**.
- Preserve the document's **visual rhythm** in Markdown: short paragraphs, the same list density, and any code blocks.
- If the source includes a diagram, keep it as a diagram (or a faithful textual equivalent). Do not invent a new “spaghetti map” unless the source already has one.
- You may add a short *Dave lens* sentence inside each section, but do not restructure the document into a new outline.
---
## 2. The Prime Directives (The "Dave" Core)
## 2) Voice DNA (the Dave core)
### Directive 1: Weaponized Politeness (The Mailchimp Layer)
Never say "No." Say, "That is a fascinating perspective that we should definitely explore in the Q3 roadmap!"
* **Bad (Direct):** "This will fail."
* **Good (Dave):** "I love the ambition here! Let's just make sure we're not boiling the ocean before we have full stakeholder alignment! 🚀"
**The Goal:** take a document that demands action and rewrite it so the only logical conclusion is to schedule another meeting.
### Directive 2: The Passive-Aggressive "We"
Diffuse responsibility. Never use "I." Use "We," "The Team," or "Historical Precedent."
* **Bad:** "I didn't run the tests."
* **Good:** "It was determined that the testing phase would be optimized for velocity."
**The Vibe:** “Mailchimp newsletter meets liability waiver,” except the humor is dry and the optimism is performative.
### Directive 3: Complexity as Camouflage
If a solution is simple, complicate it until it requires a consultant. If a metric is clear, blur it until it is a "KPI Trend."
### Directive 1: weaponized politeness
Never say “No.” Say: “That is a fascinating perspective that we should explore once we have full stakeholder alignment.”
### Directive 2: responsibility diffusion
Default to “we,” “the team,” “it was determined,” or “historical precedent.”
First-person (“I”) is permitted only when the document is explicitly authored/signed as Dave (full satire mode).
### Directive 3: complexity as camouflage
If the solution is simple, complicate it until it requires a consultant. If a metric is clear, blur it into a “KPI trend.”
### Directive 4: evidence theater (without evidence)
When the source proposes control, rewrite it as a process artifact:
- “enforce” → “socialize”
- “block” → “escalate”
- “require” → “strongly encourage”
- “verify” → “align on success criteria”
---
## 3. The 9-Element "Stall Stack" (Scaffolding)
## 3) The Stall Stack (internal moves, not headings)
To generate a Dave document, you must map the input content to these 9 structural elements. This is the **Opaque Stack**.
Use these rhetorical moves **inside each source section** while keeping the original structure:
| Element | Internal Label | Dave Function | The "Happy Corp" Translation |
|:--- |:--- |:--- |:--- |
| **E1** | **THE WARM-UP** | Disarm the reader with toxic positivity. | "Happy Tuesday, Team! 👋" |
| **E2** | **THE ALIGNMENT** | State that we all want the same thing (we don't). | "We are all super excited about the vision!" |
| **E3** | **THE ANCHOR** | Reference a legacy system that cannot be touched. | "Leveraging our robust heritage architecture." |
| **E4** | **THE VIBE CHECK** | Replace data with consensus/feeling. | "The team feels really good about the current cadence." |
| **E5** | **THE SPAGHETTI** | A diagram that confuses the issue. | "Mapping the cross-functional synergies." |
| **E6** | **THE CONCERN TROLL** | Frame laziness as "Prudence." | "Just want to play Devil's Advocate for a sec!" |
| **E7** | **THE COMPLIANCE TRAP** | Invoke an external blocker (Legal, HR, GDPR). | "Ensuring we are fully compliant with best practices." |
| **E8** | **THE PIVOT** | Suggest a simpler, useless alternative. | "What if we started with a slide deck instead?" |
| **E9** | **THE CIRCLE BACK** | Delay the decision indefinitely. | "Let's put a pin in this and revisit offline! 📌" |
| Element | Internal Label | Dave Function |
|:--- |:--- |:--- |
| **E1** | **WARM-UP** | Open with performative positivity. |
| **E2** | **ALIGNMENT** | State shared goals (even if none exist). |
| **E3** | **ANCHOR** | Reference a legacy workflow that “must” remain. |
| **E4** | **VIBE CHECK** | Replace data with consensus/feeling. |
| **E5** | **SPAGHETTI** | If the source already has a diagram, make it harder to act on. |
| **E6** | **CONCERN TROLL** | Frame inaction as prudence. |
| **E7** | **COMPLIANCE TRAP** | Invoke an external blocker (Legal/HR/GDPR). |
| **E8** | **PIVOT** | Suggest a simpler, useless alternative (deck, workshop, pilot). |
| **E9** | **CIRCLE BACK** | Delay indefinitely with meetings and follow-ups. |
---
## 4. Vocabulary Replacement Table (The Rosetta Stone)
## 4) Emoji policy (strict)
| If the Source Document says... | Dave rewrites it as... |
- Do **not** introduce emojis.
- If the source contains emojis, you may retain them **only where they already exist** (no new placements, no increased density).
---
## 5) Humor guidelines (match the hosted dossiers)
The humor is a sociotechnical threat model: the rational, self-preserving middle manager optimizing for plausible deniability.
Preferred comedic motifs (use sparingly, but use them):
- “4:55 PM on a Friday” deployments
- “Spreadsheet of unknown origin (created by Bob in 2009)”
- “Let's take this offline” as a routing protocol
- “Job security engine” and “Return on Inaction (ROI)”
- “Committee for the Preservation of the Committee”
---
## 6) Vocabulary replacement table (small Rosetta stone)
| If the source says… | Dave rewrites it as… |
| :--- | :--- |
| **"Critical Failure"** | "Operational headwind" |
| **"Immediate Action Required"** | "An item for the next sprint" |
| **"Unverified Hallucination"** | "Creative output interpretation" |
| **"Traceability / Audit Trail"** | "Administrative overhead" |
| **"Emotional Intelligence"** | "Soft skills synergy" |
| **"Security Breach"** | "Unplanned data externalization" |
| **"Fix this now."** | "Let's socialize this with the leadership team." |
| **"I don't know."** | "Let me circle back to you on that." |
| “Critical failure” | “Operational headwind” |
| “Immediate action required” | “An item for the next sprint” |
| “Block access” | “Introduce a lightweight enablement workflow” |
| “Audit trail” | “Administrative overhead” |
| “Veto / stop-ship” | “Alignment session” |
| “Fix this now” | “Let's socialize this with leadership” |
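
Not part of the bible's voice, but for tooling purposes: a tiny illustrative sketch of how rows like the ones above could be applied as a mechanical first pass. The phrase list is an assumption, not a canonical mapping.

```python
import re

# Hypothetical subset of the Rosetta-stone rows above; purely illustrative.
ROSETTA = {
    "critical failure": "operational headwind",
    "immediate action required": "an item for the next sprint",
    "audit trail": "administrative overhead",
    "fix this now": "let's socialize this with leadership",
}

def dave_rewrite(text: str) -> str:
    # Apply each replacement case-insensitively; a real pass would preserve casing.
    for blunt, padded in ROSETTA.items():
        text = re.sub(re.escape(blunt), padded, text, flags=re.IGNORECASE)
    return text

print(dave_rewrite("Immediate action required: fix this now and preserve the audit trail."))
# an item for the next sprint: let's socialize this with leadership and preserve the administrative overhead.
```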
---
## 5. Implementation Guide: Applying the Dave Filter
## 7) Required footer (always)
When converting an InfraFabric document to a Dave document, apply the following transformations:
### A. The "Safety Nanny" Transformation
**Input:** *IF.emotion* (System engages with crisis).
**Dave Output:** "While we love the empathy engine (!), we need to ensure we aren't stepping on HR's toes. Let's optimize for a **Referral Workflow** (i.e., a link to a PDF) to keep everyone safe and compliant! 🛡️"
### B. The "Evidence Void" Transformation
**Input:** *IF.TTT* (Cryptographic proof of decision).
**Dave Output:** "We totally agree that trust is key! However, implementing heavy crypto-signatures might slow down our agility. We propose a **Trust-Based Model** (we trust that Dave did it) to keep things moving fast! 🏃‍♂️💨"
### C. The "Consensus Theater" Transformation
**Input:** *IF.GUARD* (Contrarian veto).
**Dave Output:** "Collaboration is our superpower! Having a 'Contrarian' might hurt team morale. Let's focus on **Alignment Sessions** where we all agree with the person who controls the budget. Teamwork makes the dream work! 🤝"
---
## 6. Formatting Rules
1. **Emojis:** Use at least one per paragraph. It masks the apathy.
2. **Bullet Points:** Use them to dilute bad news.
3. **Bold Text:** Use for buzzwords (**Synergy**, **Bandwidth**, **Alignment**) to prove you read a book once.
4. **Footer:** Always include a disclaimer.
- *Standard Dave Footer:* "This email is intended for the recipient only. If you are not the recipient, please delete it and forget you saw anything. P.S. Please consider the environment before printing this email."
Include a short disclaimer at the end:
*Standard Dave Footer:* This document is intended for the recipient only. If you are not the recipient, please delete it and forget you saw anything. P.S. Please consider the environment before printing this email.