How-to Guides

Practical guides focused on achieving specific tasks or solving common problems that arise when using LLMs in the spec-to-code workflow. These guides assume the user knows what they want to do and provide steps on how to do it effectively.
  • Goals: Users will learn how to perform key actions like setting up tools, validating output, managing changes, integrating LLMs into their workflow, and applying effective prompting strategies.

1 - Setting Up Your LLM Coding Environment

Step-by-step instructions for getting started with different LLM coding platforms and integrating them into your development setup.

Key Points:

  • Installing and configuring popular IDE extensions (e.g., GitHub Copilot in VSCode and JetBrains IDEs).
  • Getting started with AI-native IDEs (e.g., Cursor and Windsurf), covering installation and basic usage.
  • Setting up and running local models via Ollama for privacy or offline use (see the example commands after this list).
  • Managing API keys, authentication, and usage limits for cloud-based tools.
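
For the local-model route, a minimal sketch of an Ollama setup, assuming Ollama is already installed and using `llama3` as a stand-in for whichever model you choose:

```bash
# Download the model weights once.
ollama pull llama3

# Start an interactive session for ad-hoc spec-to-code prompts.
ollama run llama3

# Or expose the local HTTP API (default: http://localhost:11434)
# so editor extensions and scripts can call the model.
ollama serve
```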

Assets:

  • Configuration code snippets (e.g., the VSCode settings JSON sketched below, and the Ollama run commands shown after the Key Points above).
  • Callout (Inline Tip): “Tip: Consider starting with an IDE extension for minimal workflow disruption.”
  • Callout (Inline Note): “Note: Local models offer privacy benefits but may require more setup and computational resources.”
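
A sketch of the VSCode settings snippet mentioned above, assuming the GitHub Copilot extension is installed; setting keys can change between extension versions, so verify against the extension docs:

```jsonc
// settings.json
{
  // Enable Copilot completions broadly, but keep them out of
  // plain-text and Markdown files where suggestions add noise.
  "github.copilot.enable": {
    "*": true,
    "plaintext": false,
    "markdown": false
  },
  // Show inline (ghost-text) suggestions as you type.
  "editor.inlineSuggest.enabled": true
}
```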

2 - Validating and Verifying LLM-Generated Code

Practical steps and recommended practices for ensuring the quality, correctness, security, and performance of code produced by LLMs before integrating it into your codebase.

Key Points:

  • Implementing a structured manual code review process specifically for LLM output.
  • Leveraging static analysis tools (linters, formatters like Black, ESLint, Prettier) to enforce code style and catch basic errors.
  • Running and extending automatically generated tests (as shown in Tutorial 2.4).
  • Using security scanning tools (SAST) on generated code snippets.
  • Basic performance considerations and profiling generated code if necessary.
  • Reusable Template: Code Review Checklist for LLM Output (Markdown template; a sketch follows this list).
  • Mapping output back to spec acceptance criteria
  • Writing lightweight validation prompts
  • Using checklists to catch missing functionality
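
A minimal sketch of the review checklist template mentioned above; adapt the items to your project's spec format:

```markdown
## Code Review Checklist: LLM Output

- [ ] Maps to a specific spec requirement or acceptance criterion (link it)
- [ ] No hallucinated APIs: every import and call exists in the pinned dependency versions
- [ ] Error handling and edge cases named in the spec are covered
- [ ] No obviously insecure patterns (injection, hardcoded secrets, unsafe deserialization)
- [ ] Generated tests were run and extended, not just accepted as-is
- [ ] Passes the project's linter and formatter without manual overrides
```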

Assets:

  • Example configurations for linters/formatters (one sketched below).
  • Callout (Inline Warning): “Warning: LLMs can confidently generate insecure code patterns. Security review is non-negotiable.”
  • Callout (Inline Tip): “Tip: Integrate linting and formatting into a pre-commit hook to automate checks.”
  • 📄 Mini flowchart: “Spec ➔ Code ➔ Review ➔ Confirm”
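
One way to wire the pre-commit tip above into practice, sketched with the pre-commit framework and Black; pin `rev` to the versions your project actually uses:

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/psf/black
    rev: 24.3.0   # pin to your project's Black version
    hooks:
      - id: black
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: check-merge-conflict
```

Run `pre-commit install` once per clone so the hooks fire on every commit.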

3 - Managing Changes and Spec Evolution

Strategies and techniques for handling updates and changes to specifications over time and keeping them aligned with previously generated (and potentially human-modified) code. Addresses the ‘lifecycle management challenge’ and offers strategies for working under uncertainty.

Key Points:

  • Identifying and assessing the impact of changes in an updated specification.
  • Deciding whether to attempt regenerating the affected code section or manually modifying the existing code.
  • Using version control (Git) and diffing tools effectively to understand changes; see the sketch after this list.
  • Strategies for merging LLM-generated code updates with existing human-written code, minimizing conflicts.
  • Documenting which parts of the codebase originated from LLM generation and which were human-modified.
  • Triaging missing information
  • Hypothesis-driven prompting
  • Annotating assumptions
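
A sketch of the version-control habits from the bullets above; the `src/orders/` path and spec version are hypothetical, and the `AI-assisted` trailer is a suggested team convention, not a Git standard:

```bash
# Inspect exactly what a regenerated section changed before deciding
# between keeping the regeneration and manually editing the old code.
git diff main -- src/orders/

# Record provenance in the commit message so reviewers know
# which changes were LLM-generated.
git commit -m "Regenerate order validation from spec v2.1" \
           -m "AI-assisted: yes (generated from updated spec, manually reviewed)"
```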

Assets:

  • Flowchart: “Spec Change Handling Workflow with LLMs” (Placeholder for a visual diagram).
  • Callout (Inline Note): “Note: Regenerating large sections of code can be faster initially but may lead to more complex merges later.”
  • Callout (Inline Tip): “Tip: Use clear commit messages indicating when code was LLM-generated vs. human-written.”
  • ⚡ Callout: Risk of hallucinations when specs are weak
  • 📄 “Assumption Tracker” Template (a sketch follows)
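
A sketch of the “Assumption Tracker” template mentioned above, as a Markdown table; the filled row is a hypothetical example:

```markdown
| # | Assumption             | Source of uncertainty     | How it was handled          | Status  |
|---|------------------------|---------------------------|-----------------------------|---------|
| 1 | Currency is always USD | Spec omits currency field | Stated explicitly in prompt | Pending |
```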

4 - Crafting Effective Prompts for Spec-to-Code

Advanced techniques and best practices for writing prompts that maximize the chances of getting accurate, relevant, and usable code from specifications. This is core ‘prompt engineering’ for this specific use case.

Key Points:

  • Structuring your prompts logically: Role/Persona, Context (project, libraries, relevant code), Instruction (the task), Examples (few-shot prompting), Format (desired output structure); see the worked example after this list.
  • Providing sufficient and relevant context without overwhelming the model.
  • Translating abstract spec requirements into concrete coding tasks and constraints.
  • Using negative constraints effectively (“Do not use library X”, “Avoid pattern Y”).
  • Techniques for iterative prompting and refining output through conversation.
  • Handling ambiguity in specifications through clarifying prompts.
  • Decomposing specs into promptable units
  • Using “show, don’t tell” in prompts
  • Prompt chaining for large features
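
To make the structure bullet concrete, a before/after example for a hypothetical spec line (“Users can reset their password via email”); the `send_email` helper is assumed to exist in the project:

```markdown
**Ineffective:** "Write the password reset code."

**Effective:**
Role: You are a senior Python developer on a Flask 3 codebase.
Context: We use SQLAlchemy for persistence and itsdangerous for signed tokens.
Instruction: Implement the password-reset request endpoint for the spec
requirement "Users can reset their password via email". Generate a
time-limited token and send it with our existing helper
`send_email(to, subject, body)`.
Constraints: Do not store the raw token; store only a hash. No new dependencies.
Format: A single Flask blueprint module with docstrings, no prose explanations.
```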

Assets:

  • Prompt Engineering Template: General Structure for Spec-to-Code Prompts (Reusable Markdown template).
  • Examples: Side-by-side comparison of ineffective vs. effective prompts for a given spec snippet.
  • Callout (Inline Tip): “Tip: Start with a clear, concise instruction and add context/constraints iteratively if the initial output is insufficient.”
  • Callout (Inline Misconception): “Common Misconception: More detail is always better. Focus on relevant detail and clear structure.”
  • 📄 Prompt Skeleton Template (sketched below)
  • ⚡ Callout: Common Mistakes (e.g., vague verbs)
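
A sketch of the prompt skeleton, following the Role/Context/Instruction/Examples/Format structure from the Key Points; fill the bracketed slots per task:

```markdown
## Prompt Skeleton: Spec-to-Code

**Role:** You are a [language/framework] developer working on [project].
**Context:** [Relevant modules, library versions, existing interfaces the code must call.]
**Instruction:** Implement [spec requirement ID or quote] as [function/class/endpoint].
**Examples:** [One or two input/output pairs, or an existing snippet to imitate.]
**Constraints:** [Negative constraints: "Do not use X"; performance or security limits.]
**Format:** [Single file? Diff? Tests included? No prose explanations?]
```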

5 - Writing LLM-Friendly Specifications

A guide to creating machine-parsable specifications that work well with language models.

Key Points:

  • Atomic user stories
  • Testable acceptance criteria
  • API contract syntax examples (OpenAPI/YAML; sketched below)

Content: Hands-on exercise converting vague requirements into structured prompts.
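
A minimal sketch of an LLM-friendly API contract in OpenAPI/YAML; the `/orders/{id}` endpoint is a hypothetical example:

```yaml
openapi: 3.0.3
info:
  title: Orders API   # hypothetical example service
  version: 1.0.0
paths:
  /orders/{id}:
    get:
      summary: Fetch a single order
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: string, format: uuid }
      responses:
        "200":
          description: The order exists and is returned as JSON.
        "404":
          description: No order with this id exists (testable acceptance criterion).
```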

6 - Integrating LLM Assistance into Your Development Workflow

Guidance on incorporating LLM tools seamlessly into existing development practices, including IDE usage, version control workflows, and potential (cautious) integration into CI/CD pipelines.

Key Points:

  • Leveraging IDE features for inline code completion and chat-based assistance.
  • Establishing team conventions for using and reviewing LLM-generated code within a version control system (e.g., specific branch naming, required reviews).
  • Exploring possibilities for automating boilerplate or test generation in CI/CD, emphasizing the need for rigorous validation steps afterward (a CI sketch follows the Assets list).
  • Strategies for team collaboration when some members are using LLM tools and others are not.

Assets:

  • Diagram: “LLM-Assisted Development Workflow” (Placeholder for a visual diagram showing integration points).
  • Callout (Inline Warning): “Warning: Fully automating code generation and deployment in CI/CD without human oversight is risky and not recommended for critical systems.”
  • Callout (Inline Note): “Note: Consistency in how LLM tools are used and reviewed is key for team effectiveness.”
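
A sketch of a cautious CI gate, assuming GitHub Actions and a Python project; it only validates generated code, and merging and deployment stay with humans:

```yaml
# .github/workflows/validate.yml
name: validate-llm-assisted-changes
on: pull_request   # runs on every PR; human review and merge still required
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install black pytest
      - run: black --check .   # style gate
      - run: pytest            # behavior gate: generated tests must pass
```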

7 - Handling Platform Differences

Adjusting practices between Copilot, Cursor, Windsurf, and local LLMs.

Key Points:

  • Specific prompt tuning for each platform
  • Copilot vs. Cursor context window differences
  • Benefits of local models (privacy, control)

Assets:

  • 📄 Comparison Table: LLM IDE Platforms
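
A qualitative starting point for the comparison table; context limits and features change quickly, so verify specifics against each platform's current docs:

```markdown
| Platform       | Type                         | Context handling                    | Notable trade-off                  |
|----------------|------------------------------|-------------------------------------|------------------------------------|
| GitHub Copilot | IDE extension                | Open files and nearby code          | Minimal workflow disruption        |
| Cursor         | AI-native IDE (VS Code fork) | Indexes the whole repo for chat     | Requires switching editors         |
| Windsurf       | AI-native IDE                | Agentic, multi-file context         | Requires switching editors         |
| Local (Ollama) | Local model runtime          | Bounded by local model and hardware | Privacy and control vs. capability |
```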