Lesson 1: Access controls and role-based permissions, least privilege, privileged access monitoring

This section explains how to design access controls and role-based permissions for AI systems, enforce least privilege, and monitor privileged access so that sensitive data and administrative functions remain tightly governed.
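The role-based model this lesson describes can be sketched as a deny-by-default permission check. The role and permission names below are illustrative assumptions, not a prescribed scheme:

```python
# Minimal RBAC sketch: each role carries only the permissions it needs
# (least privilege). Role and permission names are illustrative.
ROLE_PERMISSIONS = {
    "ai_user": {"model:query"},
    "ai_admin": {"model:query", "model:configure"},
    "security_auditor": {"logs:read"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: unknown roles or permissions get no access."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Because the check denies by default, a plain `ai_user` cannot reach administrative functions such as `model:configure` unless a role explicitly grants them.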
- Defining AI-specific roles and permissions
- Implementing least privilege for AI admins
- Strong authentication for privileged users
- Session recording and just-in-time access
- Periodic access review and recertification

Lesson 2: Logging, audit trails, and immutable logging for data access and model query records

This section covers logging strategies for AI systems, including detailed audit trails and immutable logs for data access and model queries, enabling investigations, accountability, and evidence for regulatory or internal reviews.
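One common way to make an audit trail tamper-evident is hash chaining, where each entry embeds the hash of its predecessor. This is a minimal sketch of that idea, not a full immutable-logging product:

```python
import hashlib
import json

class HashChainedLog:
    """Append-only audit log: each entry embeds the hash of the previous
    one, so any later modification breaks the chain on verification."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev, "hash": digest})

    def verify(self) -> bool:
        """Recompute the chain; any edited or reordered entry fails."""
        prev = "0" * 64
        for entry in self.entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```

In practice the chain head would also be anchored externally (for example in write-once storage) so an attacker cannot simply rebuild the whole chain.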
- Defining AI logging scope and granularity
- Capturing user, admin, and system actions
- Immutable logging and tamper resistance
- Log minimisation and pseudonymisation
- Log review, alerting, and investigations

Lesson 3: Data minimisation and pre-processing: techniques for reducing PII before sending to an LLM

This section explains data minimisation and preprocessing techniques that reduce personal data before it is sent to AI models, using redaction, aggregation, and transformation to lower risk while preserving utility for business use cases.
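Redaction and generalisation can be sketched with a few lines of preprocessing run before any text leaves the trust boundary. The patterns below are illustrative; real deployments need broader, locale-aware rules:

```python
import re

# Illustrative identifier patterns; production rules must be far broader.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def minimise(text: str) -> str:
    """Redact direct identifiers before the text is sent to a model."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

def generalise_age(age: int) -> str:
    """Replace an exact age with a coarse band (generalisation)."""
    low = (age // 10) * 10
    return f"{low}-{low + 9}"
```

Running `minimise` at the edge, before the API call, keeps raw identifiers out of prompts, logs, and any downstream model retention.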
- Identifying unnecessary personal data fields
- Redaction and masking of free-text inputs
- Aggregation and generalisation techniques
- Edge preprocessing before API submission
- Balancing utility with minimisation duties

Lesson 4: Input filtering and prompt engineering: removing sensitive data, pattern-based scrubbing, NLP-based classifiers

This section focuses on input filtering and prompt engineering to remove sensitive data before processing, using pattern-based scrubbing and NLP classifiers to detect risky content and enforce organisational policies at the boundary.
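A boundary filter typically combines pattern-based scrubbing with a block decision for inputs that remain risky. The patterns and keyword list below are assumptions standing in for an organisation's real policy (and for an NLP classifier):

```python
import re

# Illustrative patterns; a real filter would add many more categories
# and back them with an NLP classifier, not just a keyword list.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")
BLOCK_KEYWORDS = {"password", "secret key"}

def filter_input(prompt: str) -> tuple[str, bool]:
    """Return (scrubbed_prompt, allowed): scrub known identifier patterns
    first, then block outright if a high-risk keyword remains."""
    scrubbed = SSN.sub("[SSN]", prompt)
    scrubbed = CARD.sub("[CARD]", scrubbed)
    allowed = not any(k in scrubbed.lower() for k in BLOCK_KEYWORDS)
    return scrubbed, allowed
```

Blocked inputs are a natural point to show user guidance, explaining why the request was refused and what may be submitted instead.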
- Pattern-based scrubbing of identifiers
- NLP classifiers for sensitive categories
- Prompt templates that avoid PII capture
- Real-time input validation and blocking
- User guidance and consent at input time

Lesson 5: Governance: DPIA integration, Data Processing Agreements (DPAs), record updates and change control

This section describes governance structures for AI, including integrating DPIAs, managing Data Processing Agreements, and maintaining records and change control so that system modifications remain transparent, assessed, and compliant.
- When and how to run AI-focused DPIAs
- Key DPA clauses for AI processing
- Maintaining records of processing for AI
- Change control for models and datasets
- Governance forums and approval workflows

Lesson 6: Pseudonymisation and tokenisation approaches for free-text data and structured fields

This section explores pseudonymisation and tokenisation strategies for both free-text and structured data, showing how to replace identifiers with reversible or irreversible tokens while managing re-identification and key-separation risks.
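Reversible tokenisation is often built around a token vault: identifiers are swapped for random tokens, and the mapping is stored separately from the data it protects. A minimal sketch, with the vault held in memory purely for illustration:

```python
import secrets

class TokenVault:
    """Reversible tokenisation: values map to random tokens, and the
    mapping lives only in the vault, kept separate from the data."""

    def __init__(self):
        self._to_token = {}
        self._to_value = {}

    def tokenise(self, value: str) -> str:
        """Return a stable random token for this value."""
        if value not in self._to_token:
            token = "tok_" + secrets.token_hex(8)
            self._to_token[value] = token
            self._to_value[token] = value
        return self._to_token[value]

    def detokenise(self, token: str) -> str:
        """Reverse lookup, restricted to authorised callers in practice."""
        return self._to_value[token]
```

Because tokens are random rather than derived from the value, re-identification requires access to the vault itself, which is why vault and key-management controls (Lesson 6's bullet on key and token vaults) carry so much weight.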
- Pseudonymisation versus anonymisation limits
- Tokenisation for structured identifiers
- Handling names and IDs in free-text data
- Key and token vault management controls
- Re-identification risk assessment methods

Lesson 7: Output filtering and post-processing: sensitivity detection, hallucination detection, confidence scoring

This section covers mechanisms that inspect and adjust AI outputs to detect sensitive data, identify hallucinations, and apply confidence scoring so that risky responses are blocked, flagged, or routed for review before reaching end users.
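The block/flag/route decision can be sketched as a small post-processing gate. The thresholds and the single email pattern below are assumptions; a real gate would combine several detectors and tuned, per-deployment thresholds:

```python
import re

# Illustrative sensitivity detector; real gates combine many detectors.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def route_output(text: str, confidence: float,
                 block_below: float = 0.3, review_below: float = 0.7) -> str:
    """Decide a model response's fate: 'block', 'review', or 'pass'.
    Thresholds are illustrative, not recommended values."""
    if EMAIL.search(text):
        return "block"      # sensitive data detected in the output
    if confidence < block_below:
        return "block"      # likely hallucination; suppress
    if confidence < review_below:
        return "review"     # route to a human review queue
    return "pass"
```

Responses routed to `review` feed the human review workflow, and reviewer decisions in turn supply the feedback loop that refines the filters.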
- Detecting personal and sensitive data in model outputs
- Hallucination detection rules and model ensembles
- Designing confidence scores and thresholds
- Human review workflows for risky responses
- User feedback loops to refine output filters

Lesson 8: Retention policies, automated deletion, and backup retention alignment with purpose limitation

This section explains how to define retention schedules for AI data, configure automated deletion, and align backups with purpose limitation so that training data, logs, and prompts are not stored longer than necessary or used for incompatible purposes.
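A scheduled deletion job reduces to mapping each data category to a retention period and purging anything older. A minimal sketch, with illustrative periods:

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention schedule (days) per data category.
RETENTION_DAYS = {"prompt": 30, "audit_log": 365}

def expired(category: str, created_at: datetime, now: datetime) -> bool:
    """True when a record has outlived its retention period."""
    return now - created_at > timedelta(days=RETENTION_DAYS[category])

def purge(records: list[dict], now: datetime) -> list[dict]:
    """Keep only records still within retention; the scheduled deletion
    job would delete the rest (subject to any legal holds)."""
    return [r for r in records
            if not expired(r["category"], r["created_at"], now)]
```

Legal holds would be checked before deletion, and each purge run logged so retention decisions can be evidenced in audits.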
- Mapping data categories to retention periods
- Automated deletion of prompts and logs
- Backup retention and restore testing
- Handling legal holds and exceptions
- Documenting retention decisions for audits

Lesson 9: Sandboxing and rate-limiting API calls; throttling, request validation, and queuing

This section explains how to isolate AI services, control traffic volume, and validate incoming requests using sandboxing, rate limits, throttling, and queuing so that systems remain stable, secure, and resistant to abuse or denial of service.
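Rate limiting with burst control is commonly implemented as a token bucket: requests spend tokens that refill at a fixed rate, and an empty bucket means the caller is throttled. A minimal sketch with illustrative rates:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: tokens refill at `rate` per second up
    to `capacity` (the allowed burst); each request spends one token."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Admit the request if a token is available, else throttle."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Throttled requests can be rejected with a retry hint or placed on a queue, which is where the queuing strategies in this lesson take over.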
- Designing API rate limits and burst controls
- Sandbox environments for testing AI features
- Request validation and schema enforcement
- Queuing strategies for high-volume workloads
- Abuse detection and automated blocking rules

Lesson 10: Vendor due diligence: security questionnaires, SOC/ISO reports, penetration test requirements

This section details how to evaluate AI vendors using structured due diligence, including security questionnaires, SOC and ISO reports, and penetration testing requirements, ensuring processors meet legal, security, and resilience expectations.
- Building AI-specific security questionnaires
- Reviewing SOC 2, ISO 27001, and similar reports
- Penetration testing scope for AI integrations
- Assessing data residency and subcontractors
- Ongoing vendor monitoring and reassessment

Lesson 11: Operational measures: staff training, privacy-by-design, incident response playbooks, breach notification procedures

This section focuses on operational safeguards such as staff training, privacy-by-design practices, incident response playbooks, and breach notification procedures that ensure AI operations remain compliant, resilient, and well documented.
- AI-specific security and privacy training
- Embedding privacy by design in AI projects
- Incident detection and triage for AI systems
- AI incident response and communication plans
- Breach notification timelines and content

Lesson 12: Encryption in transit and at rest; key management and envelope encryption for model inputs/outputs

This section covers encryption in transit and at rest for AI data, including key management and envelope encryption patterns that protect prompts, outputs, and logs while supporting access control, rotation, and regulatory expectations.
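The envelope pattern itself is simple: a fresh data-encryption key (DEK) encrypts each payload, and the key-encryption key (KEK, normally held in a KMS or HSM) encrypts only the DEK. The sketch below shows only that structure; its XOR-keystream cipher is a deliberately toy stand-in for AES-GCM and must never be used for real encryption:

```python
import hashlib
import os

def _toy_cipher(key: bytes, data: bytes) -> bytes:
    """Placeholder cipher (SHA-256 counter keystream XOR), illustrative
    only. A real system uses AES-GCM from a vetted library."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def envelope_encrypt(kek: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    """A fresh DEK encrypts the payload; the KEK wraps only the DEK,
    so bulk data never touches the master key."""
    dek = os.urandom(32)
    ciphertext = _toy_cipher(dek, plaintext)
    wrapped_dek = _toy_cipher(kek, dek)
    return wrapped_dek, ciphertext

def envelope_decrypt(kek: bytes, wrapped_dek: bytes,
                     ciphertext: bytes) -> bytes:
    dek = _toy_cipher(kek, wrapped_dek)
    return _toy_cipher(dek, ciphertext)
```

Storing the wrapped DEK alongside each ciphertext is what makes rotation tractable: rotating the KEK means re-wrapping small DEKs, not re-encrypting every prompt and output.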
- TLS configuration for AI APIs and services
- Disk, database, and object storage encryption
- Envelope encryption for prompts and outputs
- Key lifecycle, rotation, and segregation
- HSMs and cloud KMS integration options