AI Adoption Policy and Governance

Acceptable use of AI, IP and security, code review policy, and how to frame risk so the org can move forward without unnecessary exposure.

Defining how AI is used in engineering reduces risk and confusion: engineers know where the boundaries are, and leadership knows the exposure is managed rather than ignored.

Acceptable use

Spell out what’s allowed and what’s not:

  • Allowed: Code generation and completion for non-sensitive code; docs and tests; refactors and boilerplate; internal tooling and experiments within guardrails.
  • Not allowed (examples): Pasting proprietary or customer data into public or unapproved AI tools; using AI for security-sensitive or compliance-critical logic without review; bypassing review by claiming “AI wrote it.”

Publish a short policy (one page) and point to it from onboarding and the engineering handbook.

IP and security

  • Data. What data can be sent to which tools? Prefer vendors with clear data-use terms and options to opt out of training. For customer or regulated data, default to “no” unless explicitly approved.
  • Credentials and secrets. Never put secrets or credentials into AI prompts or into tools that send data off-prem.
  • Supply chain. If you depend on AI-generated code or dependencies, treat it like any third-party code: review, pin versions, and monitor for vulnerabilities.

Align with legal and security; get policy in writing so engineers know the boundaries.
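The "never put secrets into AI prompts" rule can be backed by tooling rather than trust alone. Below is a minimal sketch of a pre-send guard that rejects prompts matching common credential shapes. The function names and patterns are illustrative assumptions, not a standard; a real deployment should use a maintained secrets scanner rather than a hand-rolled list.

```python
import re

# Illustrative patterns only -- not exhaustive. Use a maintained
# secrets-detection tool in production rather than this list.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"),  # key=value secrets
]

def contains_secret(text: str) -> bool:
    """Return True if the text looks like it contains a credential."""
    return any(p.search(text) for p in SECRET_PATTERNS)

def guard_prompt(text: str) -> str:
    """Refuse to forward prompts that appear to contain secrets."""
    if contains_secret(text):
        raise ValueError("Prompt appears to contain a secret; redact before sending.")
    return text
```

A guard like this can sit in an internal proxy or CLI wrapper in front of approved AI tools, so the policy is enforced at the point of use instead of relying on every engineer remembering it.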

Code review policy

Define what “reviewed” means when AI is involved:

  • AI-generated code is still owned by the author and must go through the same review as hand-written code.
  • Reviewers should look for correctness, security, and maintainability—not just style. AI can hide subtle bugs.
  • If your tooling can attribute or tag AI-generated sections, use it to focus review; don’t use it to skip review.

See AI and engineering strategy for adoption and measurement; see Developer experience for how tooling and flow support safe use.
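One lightweight way to "attribute or tag AI-generated sections" is a commit-message trailer that review tooling can read. The sketch below assumes a hypothetical `AI-assisted: yes` trailer convention (the trailer name is an assumption, not a standard) and shows how a CI step might surface those commits for extra reviewer attention, never to skip review.

```python
def is_ai_assisted(commit_message: str) -> bool:
    """Check whether the final paragraph of a commit message carries
    the (hypothetical) 'AI-assisted: yes' trailer."""
    trailer_block = commit_message.rstrip().split("\n\n")[-1]
    return any(
        line.lower().startswith("ai-assisted:")
        and line.split(":", 1)[1].strip().lower() == "yes"
        for line in trailer_block.splitlines()
    )

def review_focus(commits: list[tuple[str, str]]) -> list[str]:
    """Given (sha, message) pairs, return the SHAs that should get
    extra review attention in this PR."""
    return [sha for sha, msg in commits if is_ai_assisted(msg)]
```

For example, `review_focus([("a1c", "Add retry logic\n\nAI-assisted: yes"), ("b2d", "Fix typo")])` returns only the first SHA. A CI job could post that list as a PR comment so reviewers know where to look harder.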

Risk framing

Frame risk in terms execs and legal understand: data exposure, IP leakage, compliance, and reputational harm. Then tie a control to each risk: acceptable use, vendor selection, and review policy. "We're not blocking AI; we're using it safely" is a message that usually lands better than "we're not sure yet."
