Tutorials

Step-by-step guides walking the user through specific spec-to-code tasks using LLMs. Each tutorial focuses on achieving a concrete, tangible outcome by following a guided path from a given specification snippet to functional code.
  • Learning Goals: Users will gain practical experience in applying LLMs to generate code from various specification types and integrating the output into a project, understanding the iterative nature of the process.

1 - First Steps: From Spec to First Code

Build a basic web app from a short functional spec using GitHub Copilot.

Key Points:

  • Introduction to LLMs and their role in coding
  • Setting up your development environment
  • Selecting the right spec chunks
  • Basic prompt crafting
  • Managing output iterations
  • Hands-on exercise: Generating a simple function from a specification

Assets:

  • 📄 Starter Spec Template (Markdown)
  • 📄 Prompt Template: “Draft a React component from this spec…”
  • 📄 Prompt Engineering Flowchart (Spec ➔ Prompt ➔ Output ➔ Test)
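
To make the hands-on exercise concrete, here is a minimal sketch of what "generating a simple function from a specification" can produce. The spec snippet and the `cart_total` function are hypothetical examples, not part of the starter template; treat the function as a first LLM draft that you would still review and iterate on.

```python
# Hypothetical spec snippet (assumed for illustration):
#   "The system shall compute the total price of a cart as the sum of
#    item prices, applying a 10% discount when the subtotal exceeds 100."

def cart_total(prices: list[float]) -> float:
    """Return the cart total per the spec: 10% off subtotals over 100."""
    subtotal = sum(prices)
    if subtotal > 100:
        # Discount branch from the spec's "exceeds 100" condition.
        return round(subtotal * 0.9, 2)
    return round(subtotal, 2)
```

A quick sanity check against the spec: `cart_total([60, 50])` has a subtotal of 110, so the discount applies and the result is 99.0, while `cart_total([10, 20])` stays at 30.0. Checking edge cases like this is exactly the "manage output iterations" step.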

2 - Building a Simple REST API Endpoint from an Interface Spec

Guide on using an LLM (e.g., via Copilot or Cursor) to generate code for a basic CRUD API endpoint based on a simplified Interface/API Specification snippet. Focuses on Python (Flask/FastAPI) or TypeScript (Node.js/Express).

Key Points:

  • Analyzing the Interface/API Specification to identify its key elements (endpoints, methods, request/response formats).
  • Breaking the specification down into actionable implementation tasks and mapping requirements to code components.
  • Crafting the initial prompt to request the API endpoint code, providing necessary context (language, framework, spec details).
  • Iterative prompting to add details such as input validation, basic error handling, and database interaction placeholders.
  • Integrating the generated code into a minimal project structure.
  • Basic manual testing, or a simple script, to verify the endpoint’s response structure.

Assets:

  • Example Interface/API Specification snippet (Markdown or simple text format).
  • Prompt Template: “Generate API Endpoint from Spec” (Reusable Markdown template).
  • Example LLM interaction flow (showing initial prompt, refinement prompts, and responses).
  • Code Snippets: Generated endpoint code, minimal application setup code.
  • Callout (Inline Warning): “Common Pitfall: Over-reliance on the first LLM output. Always expect to iterate and refine based on the spec.”
  • Callout (Inline Tip): “Be explicit about the desired libraries and framework versions in your prompt.”
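
As a sketch of what the spec-to-code mapping looks like, here are create/read handlers for a hypothetical `/items` resource (assumed spec: JSON items with `id` and `name`). The example is deliberately framework-free so the mapping stays visible; in the tutorial itself these functions would back Flask or FastAPI route handlers, and the in-memory dict stands in for the database interaction placeholder.

```python
# In-memory store as a database placeholder (assumed for illustration).
_items: dict[int, dict] = {}
_next_id = 1

def create_item(payload: dict) -> tuple[int, dict]:
    """POST /items — returns (status_code, response_body)."""
    global _next_id
    if "name" not in payload:            # basic input validation
        return 400, {"error": "name is required"}
    item = {"id": _next_id, "name": payload["name"]}
    _items[_next_id] = item
    _next_id += 1
    return 201, item

def get_item(item_id: int) -> tuple[int, dict]:
    """GET /items/{id} — returns (status_code, response_body)."""
    if item_id not in _items:            # basic error handling
        return 404, {"error": "not found"}
    return 200, _items[item_id]
```

Returning `(status, body)` tuples keeps the handlers trivially testable with a simple script, which is the verification step the tutorial ends on.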

3 - Creating a React Component from a UI/Functional Spec

Guide on using an LLM to generate a React component based on a UI description or Functional Specification snippet, emphasizing styling with Tailwind CSS.

Key Points:

  • Translating visual descriptions and functional requirements from the spec into prompt requirements (component name, props, state, behavior, styling).
  • Specifying desired libraries (React) and styling framework (Tailwind CSS) explicitly.
  • Prompting for component structure, handling state/props as defined in the spec.
  • Refining generated JSX and applying/correcting Tailwind classes based on the spec’s visual requirements.
  • Integrating the component into a sample React application and verifying its appearance and basic functionality.

Assets:

  • Example UI/Functional Specification snippet (could include text descriptions, simple wireframe ASCII art, or reference to a diagram).
  • Prompt Template: “Generate React Component from Spec” (Reusable Markdown template).
  • Example LLM interaction flow.
  • Code Snippets: Generated React component code (JSX with Tailwind classes).
  • Callout (Inline Tip): “Tip: For complex UI, break it down into smaller components and generate them individually.”
  • Callout (Inline Warning): “Warning: LLMs might not perfectly replicate complex layouts from text descriptions alone. Visual review is essential.”
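
To show how the spec's visual and functional requirements translate into a prompt, here is one way the reusable template might be filled in. The component name, props, and styling details are hypothetical placeholders, not part of any shipped template.

```text
Generate a React function component named PricingCard.

Spec excerpt:
- Props: title (string), price (number), features (string[]).
- Renders the title as a heading, the price below it, and the features
  as a bulleted list.
- Styling: Tailwind CSS only — rounded card, subtle shadow, centered text.

Constraints:
- React 18 function component with typed props.
- Tailwind utility classes only; no inline styles or separate CSS files.
- Return only the component code.
```

Note how the prompt names the component, enumerates props, and pins the styling framework explicitly, covering each requirement category listed above.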

4 - Generating Database Schema and CRUD Operations from a Data Model Spec

Guide on using an LLM to create database schema definitions (SQL CREATE TABLE statements) and basic CRUD functions (e.g., Python with SQLAlchemy or TypeScript with Mongoose) based on a Data Model or relevant section of a Software Requirements Specification.

Key Points:

  • Identifying entities, attributes, relationships, and data types from the spec.
  • Representing data relationships (one-to-one, one-to-many, many-to-many) clearly in prompts.
  • Prompting for SQL schema definitions or ORM model classes.
  • Generating basic functions for Create, Read, Update, and Delete operations for one or more entities.
  • Choosing between generating raw SQL or ORM code based on project needs and prompting accordingly.

Assets:

  • Example Data Model snippet from an SRS or a dedicated Data Specification.
  • Prompt Template: “Generate Database Schema/CRUD from Spec” (Reusable Markdown template).
  • Example LLM interaction flow.
  • Code Snippets: Generated SQL schema, generated ORM models, generated CRUD function stubs.
  • Callout (Inline Warning): “Warning: Always review generated schema definitions and raw SQL for potential security vulnerabilities (e.g., lack of proper escaping, injection risks).”
  • Callout (Inline Note): “Note: LLMs are good at generating standard CRUD patterns, but complex query logic often requires significant human guidance or writing.”
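
The schema-plus-CRUD pattern above can be sketched with the standard library alone. The `users` entity and its attributes below are assumed for illustration; an ORM version would define a SQLAlchemy model class instead of raw SQL, but the spec-to-schema mapping is the same.

```python
import sqlite3

# In-memory database standing in for a real one (assumed for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY AUTOINCREMENT,
        email TEXT NOT NULL UNIQUE,
        name  TEXT NOT NULL
    )
""")

def create_user(email: str, name: str) -> int:
    # Parameterized query — avoids the SQL injection risk the warning
    # callout flags for generated raw SQL.
    cur = conn.execute(
        "INSERT INTO users (email, name) VALUES (?, ?)", (email, name))
    conn.commit()
    return cur.lastrowid

def read_user(user_id: int):
    return conn.execute(
        "SELECT id, email, name FROM users WHERE id = ?", (user_id,)
    ).fetchone()

def delete_user(user_id: int) -> None:
    conn.execute("DELETE FROM users WHERE id = ?", (user_id,))
    conn.commit()
```

The Update operation follows the same shape as Delete with a parameterized `UPDATE` statement; it is omitted here for brevity.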

5 - Writing Unit Tests Based on Acceptance Criteria

Guide on using an LLM to generate unit tests (e.g., Jest for JS/TS, Pytest for Python) for existing code based on User Stories and their associated Acceptance Criteria.

Key Points:

  • Understanding how Acceptance Criteria translate into test cases (Given-When-Then).
  • Providing the LLM with the code to be tested and the relevant User Story/Acceptance Criteria.
  • Prompting for specific test functions or methods covering each criterion.
  • Refining generated test code, adding necessary setup/teardown, and ensuring assertions are correct.
  • Running the generated tests and verifying their correctness and coverage against the criteria.

Assets:

  • Example User Story and Acceptance Criteria.
  • Example code snippet (e.g., a function or class) to be tested.
  • Prompt Template: “Generate Unit Tests from Acceptance Criteria” (Reusable Markdown template).
  • Example LLM interaction flow.
  • Code Snippets: Generated unit test code.
  • Callout (Inline Tip): “Tip: Provide the LLM with the signature or interface of the code being tested for better results.”
  • Callout (Inline Note): “Note: LLMs can generate test structures and ideas based on criteria, but human expertise is vital for ensuring comprehensive test coverage and correct assertions.”
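
A minimal sketch of the criteria-to-test mapping, using hypothetical acceptance criteria and an assumed `apply_coupon` function as the code under test. Each Pytest-style test function covers one criterion, with the Given-When-Then text kept as a comment so coverage against the criteria is easy to audit.

```python
# Code under test (assumed for illustration): SAVE10 takes 10% off;
# unknown codes leave the total unchanged.
def apply_coupon(total: float, code: str) -> float:
    if code == "SAVE10":
        return round(total * 0.9, 2)
    return total

# AC1: Given a valid coupon, When it is applied, Then the total is
#      reduced by 10%.
def test_valid_coupon_reduces_total_by_ten_percent():
    assert apply_coupon(50.0, "SAVE10") == 45.0

# AC2: Given an unknown coupon, When it is applied, Then the total is
#      unchanged.
def test_unknown_coupon_leaves_total_unchanged():
    assert apply_coupon(50.0, "BOGUS") == 50.0
```

In practice you would paste the real function (or at least its signature) into the prompt alongside the User Story, then run the generated tests under Pytest and review the assertions yourself.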

6 - Evolving Code as Specs Evolve

Iteratively update a Python backend when specs change mid-development.

Key Points:

  • Version control practices
  • Updating LLM prompts with deltas
  • Regression testing LLM changes

Assets:

  • 📄 Prompt Template: “Update existing code to reflect these changes…”
  • ⚡ Callout: Warning about code drift
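
Here is one way the "update existing code" prompt might be filled in with a spec delta. The version numbers, status names, and endpoint are hypothetical; the point is to hand the LLM only the changes, not the whole spec again.

```text
The spec for the orders service has changed. Update the existing code
to reflect these changes — do not rewrite unrelated parts.

Spec delta (v1.2 → v1.3):
- CHANGED: order status "shipped" is renamed to "dispatched".
- ADDED: orders over 500 require a manager_approved flag before dispatch.
- REMOVED: the legacy /orders/export endpoint.

Existing code:
<paste the current module here>

Return the changes as a unified diff, and list any tests that need
updating.
```

Asking for a diff rather than a full rewrite keeps the change reviewable in version control and limits the code drift the callout warns about; rerunning the regression suite afterwards closes the loop.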

7 - Cross-Spec Challenge: Multi-Spec Integration

Combine a Product Requirements Doc and a System Design Spec to build a feature.

Key Points:

  • Reconciling different spec types
  • Layering prompts
  • Managing ambiguity

Assets:

  • 📄 Example PRD + System Design Spec
  • 📄 Diagram: “Spec Flow Merging”
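
One hypothetical shape for the prompt-layering technique: establish the structural constraints first, then layer the product behavior on top, so conflicts between the two specs surface one at a time.

```text
Prompt 1 (from the System Design Spec):
  "Here are the architectural constraints for the notifications feature:
   <paste System Design Spec excerpt>. Propose the module and interface
   structure only — no business logic yet."

Prompt 2 (from the PRD, after reviewing the structure):
  "Using the structure we just agreed on, implement the behavior from
   this PRD excerpt: <paste PRD excerpt>. Where the PRD and the design
   spec conflict or are ambiguous, stop and list the conflicts instead
   of guessing."
```

Explicitly instructing the model to surface conflicts rather than resolve them silently is one practical way to manage the ambiguity this tutorial covers.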