Module 34: Professional Engineering

Working with AI as an Engineer

Here's the irony at the heart of vibe coding: the engineers who understand software best use AI most effectively. Not because they blindly trust it more, but because they can evaluate its output faster, know which tasks are safe to delegate and which require careful review, and understand exactly what went wrong when the AI makes a mistake. This module is about becoming the engineer who uses AI as a force multiplier rather than a crutch.

AI coding tools have genuine capabilities and genuine limitations. They're excellent at boilerplate, documentation, test generation, code review, and explaining unfamiliar code. They're unreliable for security-sensitive code, complex algorithms, code that requires deep context, and anything involving recent APIs. Knowing where the capabilities end and the risks begin is the judgment that separates engineers who use AI productively from those who ship AI-generated vulnerabilities.

This module gives you a complete framework for engineering-grade AI usage: when to reach for it, how to prompt it effectively, how to validate its output before shipping, how to use it as a pair programmer for design exploration, and how to build AI-powered features responsibly, including handling prompt injection, managing costs, and evaluating model outputs in production.

What You'll Learn

  1. AI Capabilities and Limitations — What LLMs actually are, hallucinations, stale training data
  2. When to Use AI and When Not To — A judgment framework for high-stakes code
  3. Prompt Engineering for Code Generation — Context, constraints, few-shot, chain-of-thought
  4. Validating AI-Generated Code — Read before running, security review, testing first
  5. AI as Pair Programmer — Rubber-duck debugging, design exploration, brainstorming edge cases
  6. AI for Code Review and Refactoring — Targeted reviews, comparing suggestions to your own judgment
  7. AI for Testing and Documentation — Edge-case generation, API docs, the quality-floor problem
  8. Building AI-Powered Features Responsibly — Prompt injection, cost management, evals
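The prompting techniques in item 3 compose well: a single code-generation prompt can layer project context, explicit constraints, and few-shot examples. Here is a minimal sketch of that structure in Python; the function name, section labels, and example content are illustrative, not any particular tool's API:

```python
# Illustrative sketch: assembling a code-generation prompt from parts.
# The structure (context, then constraints, then few-shot examples,
# then the task) is the point; all wording here is hypothetical.

def build_prompt(task: str, context: str, constraints: list[str],
                 examples: list[tuple[str, str]]) -> str:
    """Combine project context, explicit constraints, and few-shot
    examples into one prompt for a code-generation model."""
    parts = [f"Context:\n{context}", "Constraints:"]
    parts += [f"- {c}" for c in constraints]
    for request, solution in examples:
        parts.append(f"Example request:\n{request}\n"
                     f"Example solution:\n{solution}")
    parts.append(f"Task:\n{task}")
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Write a function that parses ISO-8601 dates.",
    context="Python 3.12 service; stdlib only; existing code uses type hints.",
    constraints=["No third-party dependencies",
                 "Raise ValueError on malformed input"],
    examples=[("Parse a US phone number into digits.",
               "def parse_phone(s: str) -> str: ...")],
)
print(prompt)
```

The ordering is deliberate: context and constraints come before the task so the model reads the ground rules first, and the few-shot examples show the expected shape of an answer rather than describing it.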

Capstone Project: Take a Complex Project and Use AI as a Co-Pilot — Document Every Interaction

Build a complex feature — a real-time notification system, a data pipeline, or a custom DSL — using AI as a co-pilot, documenting every AI interaction: what you asked for, what the AI produced, what you changed and why, what it got wrong, and where your engineering judgment overrode or improved the AI's output. The deliverable is both the working feature and a written analysis of where AI added genuine value versus where it introduced risks or errors that required your intervention.

Why This Matters for Your Career

AI tools are not going away. Engineers who use them effectively will consistently outproduce those who don't, and engineers who use them irresponsibly will consistently introduce more bugs, security issues, and technical debt. The goal isn't to avoid AI; it's to develop the engineering judgment to use it well. That judgment requires understanding software deeply enough to evaluate AI output, which is exactly what the rest of this curriculum builds.

Prompt injection is one of the most underappreciated security risks in AI-powered features. An application that interpolates user-provided text into an LLM prompt is potentially vulnerable to users who craft inputs that hijack the prompt's intent. Engineers who build AI-powered features without understanding this attack vector are shipping applications with a new class of security vulnerability. Understanding prompt injection, data exfiltration via LLMs, and the corresponding defense patterns is essential for any engineer building AI-powered products.

The "quality floor problem" (AI-generated tests or documentation that appear complete but miss the important cases) is a subtler risk in AI-assisted engineering. AI can generate 20 test cases quickly, but if they all exercise the happy path and miss the edge cases that actually matter, the test suite provides false confidence. Engineers who understand this use AI to generate a first draft and then critically review it for completeness, not as a replacement for thinking about what needs to be tested.