Lesson 1: Access controls and role-based permissions, least privilege, privileged access monitoring

This section explains how to set up access controls and role-based permissions for AI systems, enforce least privilege, and monitor privileged access so that sensitive data and administrative tasks remain tightly controlled.
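The deny-by-default, role-based model this lesson describes can be sketched in a few lines. The role and permission names below are illustrative assumptions, not a prescribed scheme:

```python
# Minimal RBAC sketch: map roles to explicit permissions and deny by default.
# Role and permission names are illustrative, not a prescribed scheme.
ROLE_PERMISSIONS = {
    "model_user":   {"model:query"},
    "data_curator": {"dataset:read", "dataset:label"},
    "ai_admin":     {"model:query", "model:deploy", "dataset:read"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Least privilege: anything not explicitly granted is denied."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("model_user", "model:query"))   # True
print(is_allowed("model_user", "model:deploy"))  # False
```

An unknown role gets the empty permission set, so misconfiguration fails closed rather than open.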
- Defining AI-specific roles and permissions
- Implementing least privilege for AI admins
- Strong authentication for privileged users
- Session recording and just-in-time access
- Periodic access review and recertification

Lesson 2: Logging, audit trails, and immutable logging for data access and model query records

This section covers logging strategies for AI systems, including full audit trails and immutable logs of data access and model queries, enabling verification, accountability, and evidence for regulators or internal reviews.
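One common way to make an audit trail tamper-evident is to hash-chain its entries, so that altering any record invalidates every later hash. A minimal sketch of that idea (the entry fields are illustrative):

```python
import hashlib
import json
import time

def append_entry(log: list, actor: str, action: str) -> dict:
    """Append a log entry whose hash chains to the previous entry,
    so any later tampering breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": time.time(), "actor": actor, "action": action, "prev": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    """Recompute every hash; returns False if any entry was altered."""
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("ts", "actor", "action", "prev")}
        if e["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, "alice", "model:query")
append_entry(log, "bob", "dataset:read")
print(verify_chain(log))        # True
log[0]["action"] = "dataset:delete"
print(verify_chain(log))        # False: the chain detects the edit
```

In production the same property is usually obtained from append-only or WORM storage; the chain here just illustrates why tampering is detectable.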
- Defining AI logging scope and granularity
- Capturing user, admin, and system actions
- Immutable logging and tamper resistance
- Log minimization and pseudonymization
- Log review, alerting, and investigations

Lesson 3: Data minimisation and pre-processing: techniques for reducing PII before sending to LLM

This section explains data minimisation and pre-processing techniques for reducing personal data before it is sent to AI models, using redaction, aggregation, and generalisation to lower risk while preserving utility for business needs.
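Redaction before submission can be as simple as replacing matched identifiers with labelled placeholders. The patterns below are illustrative only; real deployments need locale-specific and format-specific rules:

```python
import re

# Illustrative patterns only; production redaction needs locale- and
# format-specific rules, and pattern order matters (SSN before the broader
# phone pattern, or the phone rule would swallow SSN-formatted strings).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?\d[\d\s-]{7,}\d)\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labelled placeholders before the
    text is sent to an external model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or 555-867-5309."))
# Contact [EMAIL] or [PHONE].
```

Keeping the label in the placeholder (rather than deleting the match) preserves some utility for the model while removing the identifier itself.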
- Identifying unnecessary personal data fields
- Redaction and masking of free-text inputs
- Aggregation and generalization techniques
- Edge preprocessing before API submission
- Balancing utility with minimization duties

Lesson 4: Input filtering and prompt engineering: removing sensitive data, pattern-based scrubbing, NLP-based classifiers

This section examines input filtering and prompt engineering for removing sensitive data before processing, using pattern-based scrubbing and NLP-based classifiers to detect risky content and enforce organisational policy at the edge.
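An edge input filter can be sketched as a classify-then-decide gate. The keyword lists below are a stand-in assumption for a trained NLP classifier, and the block-on-any-hit policy is illustrative:

```python
# Sketch of an edge input filter: classify the prompt into sensitive
# categories, then apply policy. The keyword lists stand in for a trained
# NLP classifier; real systems would use a model, not substring matching.
SENSITIVE_KEYWORDS = {
    "health":    {"diagnosis", "prescription", "medical record"},
    "financial": {"iban", "credit card", "account number"},
}

def classify(prompt: str) -> list:
    """Return the sensitive categories the prompt appears to touch."""
    lowered = prompt.lower()
    return [cat for cat, words in SENSITIVE_KEYWORDS.items()
            if any(w in lowered for w in words)]

def filter_input(prompt: str):
    """Returns (allowed, categories). Policy here: any hit blocks."""
    cats = classify(prompt)
    return (len(cats) == 0, cats)

print(filter_input("Summarise this meeting agenda"))        # (True, [])
print(filter_input("Store my credit card number, please"))  # (False, ['financial'])
```

Returning the matched categories alongside the decision lets the UI explain the block to the user at input time, which supports the guidance-and-consent topic below.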
- Pattern-based scrubbing of identifiers
- NLP classifiers for sensitive categories
- Prompt templates that avoid PII capture
- Real-time input validation and blocking
- User guidance and consent at input time

Lesson 5: Oversight: DPIA linking, Data Processing Agreements (DPAs), record updates and change control

This section describes oversight arrangements for AI, including linking DPIAs, managing Data Processing Agreements, and maintaining records and change control so that system changes remain transparent, reviewed, and compliant.
- When and how to run AI-focused DPIAs
- Key DPA clauses for AI processing
- Maintaining records of processing for AI
- Change control for models and datasets
- Governance forums and approval workflows

Lesson 6: Pseudonymisation and tokenisation approaches for free-text data and structured fields

This section examines pseudonymisation and tokenisation for free-text and structured data, showing how to replace identifiers with reversible or irreversible tokens while managing re-identification and key-separation risks.
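Reversible tokenisation with a vault can be sketched as two maps: value to token and token to value. The token format is an assumption; the key point is that the vault is the only path back to the original value and should live in a separate trust zone:

```python
import secrets

class TokenVault:
    """Reversible tokenisation sketch: random tokens stored in a vault.
    The vault must be segregated from the tokenised data, since holding
    both together recreates the re-identification risk."""

    def __init__(self):
        self._forward = {}   # value -> token
        self._reverse = {}   # token -> value

    def tokenize(self, value: str) -> str:
        """Same value always maps to the same token (deterministic)."""
        if value not in self._forward:
            token = "tok_" + secrets.token_hex(8)   # no relation to the value
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def detokenize(self, token: str) -> str:
        """Reversal is only possible via the vault."""
        return self._reverse[token]

vault = TokenVault()
t = vault.tokenize("DE89 3704 0044 0532 0130 00")   # e.g. a structured IBAN field
print(t)                     # tok_<random hex>, different on every run
print(vault.detokenize(t))   # original value, recoverable only via the vault
```

Because tokens are random rather than derived from the value, losing the vault makes the tokens irreversible, which is exactly the pseudonymisation-versus-anonymisation boundary the first topic below addresses.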
- Pseudonymization versus anonymization limits
- Tokenization for structured identifiers
- Handling names and IDs in free-text data
- Key and token vault management controls
- Re-identification risk assessment methods

Lesson 7: Output filtering and post-processing: sensitivity detection, hallucination detection, confidence scoring

This section covers controls that inspect and adjust AI outputs to detect sensitive data, flag hallucinations, and apply confidence scores so that risky responses are blocked, flagged, or routed for review before users see them.
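The block / flag / release decision this lesson describes can be sketched as threshold routing. The thresholds and the upstream sensitivity and confidence signals are illustrative assumptions:

```python
# Sketch of output post-processing: route a model response based on a
# confidence score and a sensitivity check. The 0.5 / 0.8 thresholds are
# illustrative and would be tuned per use case.
def route_response(text: str, confidence: float,
                   contains_sensitive: bool) -> str:
    """Returns one of 'release', 'flag_for_review', 'block'."""
    if contains_sensitive:
        return "block"             # never release detected personal data
    if confidence < 0.5:
        return "block"             # likely hallucination, suppress outright
    if confidence < 0.8:
        return "flag_for_review"   # human review before release
    return "release"

print(route_response("The policy allows 30 days' leave", 0.92, False))  # release
print(route_response("Invented citation [1]", 0.41, False))             # block
```

The sensitivity check is deliberately evaluated first: a confident answer that leaks personal data is still blocked, which matches the precedence implied by the topics below.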
- Detecting personal and sensitive data in model outputs
- Hallucination detection rules and model ensembles
- Designing confidence scores and thresholds
- Human review workflows for risky responses
- User feedback loops to refine output filters

Lesson 8: Retention policies, automated deletion, and backup retention alignment with purpose limitation

This section explains how to set retention periods for AI data, implement automated deletion, and align backups with purpose limitation so that training data, logs, and prompts are not kept longer than needed or reused improperly.
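Mapping data categories to retention periods and testing records against them can be sketched as a simple schedule plus an expiry check. The categories and periods below are illustrative, not recommended values:

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention schedule mapping data categories to periods;
# actual periods come from the legal basis and purpose for each category.
RETENTION = {
    "prompt_logs":   timedelta(days=30),
    "audit_logs":    timedelta(days=365),
    "training_data": timedelta(days=730),
}

def expired(category: str, created: datetime, now: datetime = None) -> bool:
    """True when a record has outlived its retention period and, absent a
    legal hold, should be picked up by the automated deletion job."""
    now = now or datetime.now(timezone.utc)
    return now - created > RETENTION[category]

created = datetime(2024, 1, 1, tzinfo=timezone.utc)
check = datetime(2024, 3, 1, tzinfo=timezone.utc)
print(expired("prompt_logs", created, check))  # True: 60 days > 30-day period
print(expired("audit_logs", created, check))   # False: well within 365 days
```

A deletion job would run this check per record, skip anything under legal hold, and log each deletion decision for the audit topic below.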
- Mapping data categories to retention periods
- Automated deletion of prompts and logs
- Backup retention and restore testing
- Handling legal holds and exceptions
- Documenting retention decisions for audits

Lesson 9: Sandboxing and rate-limiting API calls; throttling, request validation, and queuing

This section explains how to isolate AI services, control traffic volume, and validate incoming requests using sandboxing, rate limits, throttling, and queuing so that systems remain stable, secure, and resilient to abuse or overload.
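Rate limiting with burst control is commonly implemented as a token bucket: capacity bounds the burst size, and the refill rate bounds sustained throughput. A minimal sketch with illustrative parameters:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter sketch: `capacity` bounds bursts,
    `refill_rate` bounds sustained throughput. Values are illustrative."""

    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity
        self.refill_rate = refill_rate          # tokens added per second
        self.tokens = capacity                  # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; otherwise reject."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False    # caller should queue, throttle, or reject

bucket = TokenBucket(capacity=3, refill_rate=1.0)
print([bucket.allow() for _ in range(5)])   # burst of 3 allowed, then rejected
```

Rejected requests can be queued rather than dropped, which is where the queueing strategies topic below picks up.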
- Designing API rate limits and burst controls
- Sandbox environments for testing AI features
- Request validation and schema enforcement
- Queueing strategies for high-volume workloads
- Abuse detection and automated blocking rules

Lesson 10: Supplier due diligence: security questionnaires, SOC/ISO reports, penetration test requirements

This section shows how to assess AI suppliers through structured due diligence, including security questionnaires, SOC and ISO reports, and penetration test requirements, ensuring processors meet legal, security, and resilience standards.
- Building AI-specific security questionnaires
- Reviewing SOC 2, ISO 27001, and similar reports
- Penetration testing scope for AI integrations
- Assessing data residency and subcontractors
- Ongoing vendor monitoring and reassessment

Lesson 11: Operational measures: staff training, privacy-by-design, incident response playbooks, breach notification procedures

This section examines operational safeguards such as staff training, privacy-by-design practices, incident response playbooks, and breach notification procedures that keep AI operations compliant, resilient, and well documented.
- AI-specific security and privacy training
- Embedding privacy by design in AI projects
- Incident detection and triage for AI systems
- AI incident response and communication plans
- Breach notification timelines and content

Lesson 12: Encryption in transit and at rest; key management and envelope encryption for model inputs/outputs

This section covers encryption in transit and at rest for AI data, including key management and envelope encryption for prompts, outputs, and logs, while supporting access control, key rotation, and regulatory requirements.
- TLS configuration for AI APIs and services
- Disk, database, and object storage encryption
- Envelope encryption for prompts and outputs
- Key lifecycle, rotation, and segregation
- HSMs and cloud KMS integration options
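The envelope pattern from Lesson 12 can be sketched with the third-party `cryptography` package: each record gets a fresh data key, and only the data key is encrypted ("wrapped") under the long-lived key-encryption key. Holding the KEK in a local variable is purely for illustration; in practice it stays inside a KMS or HSM:

```python
# Envelope encryption sketch using the third-party `cryptography` package.
# Each prompt is encrypted under a fresh data key; only the data key is
# wrapped under the long-lived key-encryption key (KEK). In practice the
# KEK lives in a KMS or HSM and never appears in application memory.
from cryptography.fernet import Fernet

kek = Fernet(Fernet.generate_key())          # stands in for a KMS-held KEK

def encrypt_prompt(plaintext: bytes):
    """Returns (wrapped_data_key, ciphertext); only these are stored."""
    data_key = Fernet.generate_key()         # fresh data key per record
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped_key = kek.encrypt(data_key)      # plaintext data key is discarded
    return wrapped_key, ciphertext

def decrypt_prompt(wrapped_key: bytes, ciphertext: bytes) -> bytes:
    """Unwrap the data key via the KEK, then decrypt the record."""
    data_key = kek.decrypt(wrapped_key)
    return Fernet(data_key).decrypt(ciphertext)

wrapped, ct = encrypt_prompt(b"user prompt with PII")
print(decrypt_prompt(wrapped, ct))           # b'user prompt with PII'
```

Because every record has its own data key, rotating the KEK only requires re-wrapping the small stored keys, not re-encrypting the data itself, which is the operational payoff behind the key lifecycle topic above.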