Lesson 1: Access controls and role-based permissions, least privilege, privileged access monitoring

This lesson explains how to design access controls and role-based permissions for AI systems, apply the principle of least privilege, and monitor privileged access so that sensitive data and administrative operations remain tightly controlled.
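The least-privilege idea above can be sketched as a deny-by-default permission check. The role names and permission strings below are invented for illustration; a real deployment would map them to your identity provider's groups.

```python
# Minimal sketch of role-based access control with least-privilege defaults.
# Role and permission names are hypothetical examples, not a fixed scheme.
ROLE_PERMISSIONS = {
    "ai_viewer": {"query_model"},
    "ai_operator": {"query_model", "view_logs"},
    "ai_admin": {"query_model", "view_logs", "manage_prompts", "rotate_keys"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles or unlisted actions get no access."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The key design choice is the empty-set fallback: anything not explicitly granted is refused, which is what least privilege requires.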
- Defining AI-specific roles and permissions
- Implementing least privilege for AI admins
- Strong authentication for privileged users
- Session recording and just-in-time access
- Periodic access review and recertification

Lesson 2: Logging, audit trails, and immutable logging for data access and model query records

This lesson covers logging strategies for AI systems, with complete audit trails and immutable logs for data access and model queries, supporting audits, accountability, and evidence for regulatory or internal reviews.
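One common way to make a log tamper-evident, as a minimal sketch of the immutability goal described above, is a hash chain: each record includes the hash of its predecessor, so altering any earlier entry breaks verification. The record layout here is illustrative.

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> list:
    """Append an event, chaining each record to the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })
    return log

def verify_chain(log: list) -> bool:
    """Recompute every hash; any edited entry invalidates the whole chain."""
    prev = "0" * 64
    for rec in log:
        payload = json.dumps({"event": rec["event"], "prev": prev}, sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True
```

In practice the chain head would be anchored in write-once storage (WORM buckets, a transparency log) so the whole chain cannot be silently rewritten.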
- Defining AI logging scope and granularity
- Capturing user, admin, and system actions
- Immutable logging and tamper resistance
- Log minimization and pseudonymization
- Log review, alerting, and investigations

Lesson 3: Data minimization and pre-processing: techniques for reducing PII before sending to LLMs

This lesson explains data minimization and pre-processing techniques that reduce personal data before it is sent to AI models, using redaction, aggregation, and generalization to lower risk while preserving utility for business needs.
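Two of the techniques named above, field-level minimization and generalization, can be sketched in a few lines. The allow-list and the decade-wide age bands are illustrative choices, not fixed rules.

```python
# Hypothetical allow-list: only fields the downstream task actually needs.
ALLOWED_FIELDS = {"ticket_id", "issue_summary", "product"}

def minimize(record: dict) -> dict:
    """Keep only allow-listed fields; an allow-list fails safe, a
    block-list fails open when a new sensitive field appears."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def generalize_age(age: int) -> str:
    """Replace an exact age with a coarse band (generalization)."""
    low = (age // 10) * 10
    return f"{low}-{low + 9}"
```

Both transformations run before the API call, so the model provider never receives the dropped or exact values.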
- Identifying unnecessary personal data fields
- Redaction and masking of free-text inputs
- Aggregation and generalization techniques
- Edge preprocessing before API submission
- Balancing utility with minimization duties

Lesson 4: Input filtering and prompt engineering: removing sensitive data, pattern-based scrubbing, NLP-based classifiers

This lesson focuses on input filtering and prompt engineering that remove sensitive data before processing, using pattern-based scrubbing and NLP classifiers to detect risky content and enforce organizational policies at the edge.
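Pattern-based scrubbing, as described above, typically means regex substitution of known identifier shapes before the prompt leaves your boundary. The patterns below are simplified examples; production rules need locale-aware, well-tested expressions, and pattern order matters (the SSN rule must run before the broader phone rule).

```python
import re

# Illustrative patterns only; real deployments need broader, tested rules.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # checked before PHONE
    "PHONE": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def scrub(text: str) -> str:
    """Replace each matched identifier with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Regexes catch structured identifiers; free-text names and context-dependent PII are what the NLP classifiers in this lesson are for.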
- Pattern-based scrubbing of identifiers
- NLP classifiers for sensitive categories
- Prompt templates that avoid PII capture
- Real-time input validation and blocking
- User guidance and consent at input time

Lesson 5: Governance: DPIA integration, Data Processing Agreements (DPAs), record updates and change control

This lesson describes governance structures for AI, including integrating DPIAs, managing Data Processing Agreements, and maintaining records of processing and change control so that system changes remain transparent, auditable, and compliant.
- When and how to run AI-focused DPIAs
- Key DPA clauses for AI processing
- Maintaining records of processing for AI
- Change control for models and datasets
- Governance forums and approval workflows

Lesson 6: Pseudonymization and tokenization approaches for free-text data and structured fields

This lesson examines pseudonymization and tokenization approaches for free text and structured fields, showing how to replace identifiers with reversible or irreversible tokens while managing re-identification and key-separation risks.
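A reversible tokenization scheme of the kind described above can be sketched with a deterministic keyed token plus a separate lookup vault for authorized detokenization. The class and token format are illustrative; the essential control is that the key and vault live apart from the tokenized data.

```python
import hashlib
import hmac

class TokenVault:
    """Sketch of reversible tokenization. The HMAC key must be stored
    separately from the tokenized records (key separation), and vault
    access is the re-identification control point."""

    def __init__(self, key: bytes):
        self._key = key
        self._vault = {}  # token -> original value, for authorized reversal

    def tokenize(self, value: str) -> str:
        # Deterministic: the same input always yields the same token,
        # which preserves joins across datasets.
        digest = hmac.new(self._key, value.encode(), hashlib.sha256).hexdigest()
        token = "tok_" + digest[:16]
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]
```

Dropping the vault (keeping only the HMAC) turns this into irreversible pseudonymization, though it remains personal data wherever the key exists.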
- Pseudonymization versus anonymization limits
- Tokenization for structured identifiers
- Handling names and IDs in free-text data
- Key and token vault management controls
- Re-identification risk assessment methods

Lesson 7: Output filtering and post-processing: sensitivity detection, hallucination detection, confidence scoring

This lesson covers mechanisms that inspect and correct AI outputs, detecting sensitive data, flagging hallucinations, and applying confidence scores so that risky responses are blocked, flagged, or routed for review before users see them.
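The block/flag/review routing described above can be sketched as a small decision function combining a sensitivity check with confidence thresholds. The thresholds and the email-only sensitivity check are illustrative assumptions.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")  # stand-in sensitivity check

def route_output(text: str, confidence: float,
                 block_below: float = 0.3, review_below: float = 0.7) -> str:
    """Decide whether a model response is released, queued for human
    review, or blocked. Thresholds are example values to be tuned."""
    if EMAIL.search(text):
        return "blocked"        # sensitive data detected in the output
    if confidence < block_below:
        return "blocked"        # too uncertain to show at all
    if confidence < review_below:
        return "human_review"   # plausible but needs a second look
    return "released"
```

The same three-way routing works whatever produces the score, whether a model's own logprobs, an ensemble agreement rate, or a separate verifier.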
- Detecting personal and sensitive data in model outputs
- Hallucination detection rules and model ensembles
- Designing confidence scores and thresholds
- Human review workflows for risky responses
- User feedback loops to refine output filters

Lesson 8: Retention policies, automated deletion, and backup retention alignment with purpose limitation

This lesson explains how to set retention periods for AI data, configure automated deletion, and align backups with purpose limitation so that training data, logs, and prompts are not kept longer than necessary or used for other purposes.
- Mapping data categories to retention periods
- Automated deletion of prompts and logs
- Backup retention and restore testing
- Handling legal holds and exceptions
- Documenting retention decisions for audits

Lesson 9: Sandboxing and rate-limiting API calls; throttling, request validation, and queuing

This lesson explains how to sandbox AI services, control traffic volume, and validate incoming requests, using sandboxing, rate limits, throttling, and queuing so that systems remain stable, secure, and resistant to abuse or denial of service.
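A standard way to implement the rate limiting and burst control discussed above is the token-bucket algorithm: steady refill at `rate` tokens per second, with bursts capped at `capacity`. This is a minimal single-process sketch; a shared service would keep the bucket state in something like Redis.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter sketch: refills continuously at `rate`
    tokens/second, allows bursts up to `capacity` requests."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Rejected calls can be dropped with a 429 response or, per the queuing topic in this lesson, parked in a queue and retried when tokens refill.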
- Designing API rate limits and burst controls
- Sandbox environments for testing AI features
- Request validation and schema enforcement
- Queueing strategies for high-volume workloads
- Abuse detection and automated blocking rules

Lesson 10: Vendor due diligence: security questionnaires, SOC/ISO reports, penetration test requirements

This lesson details how to assess AI vendors through consistent due diligence, including security questionnaires, SOC and ISO reports, and penetration testing requirements, ensuring processors meet legal, security, and resilience expectations.
- Building AI-specific security questionnaires
- Reviewing SOC 2, ISO 27001, and similar reports
- Penetration testing scope for AI integrations
- Assessing data residency and subcontractors
- Ongoing vendor monitoring and reassessment

Lesson 11: Operational measures: staff training, privacy-by-design, incident response playbooks, breach notification procedures

This lesson focuses on operational safeguards such as staff training, privacy-by-design practices, incident response playbooks, and breach notification procedures that keep AI operations compliant, resilient, and well documented.
- AI-specific security and privacy training
- Embedding privacy by design in AI projects
- Incident detection and triage for AI systems
- AI incident response and communication plans
- Breach notification timelines and content

Lesson 12: Encryption in transit and at rest; key management and envelope encryption for model inputs/outputs

This lesson covers encryption in transit and at rest for AI data, including key management and envelope encryption patterns that protect prompts, outputs, and logs while supporting access control, key rotation, and compliance requirements.
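The envelope pattern named above has a simple structure: each payload is encrypted with a fresh data encryption key (DEK), and only the DEK is encrypted ("wrapped") with the key encryption key (KEK), typically held in a KMS or HSM. The sketch below uses a toy XOR stream purely to show that structure; a real implementation would use an authenticated cipher such as AES-GCM and a KMS wrap call.

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a real cipher (e.g. AES-GCM). NOT secure;
    it exists only to illustrate the envelope structure."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def envelope_encrypt(plaintext: bytes, kek: bytes) -> dict:
    """Fresh DEK per payload; only the KEK-wrapped DEK is stored."""
    dek = os.urandom(32)
    return {
        "ciphertext": xor_cipher(plaintext, dek),
        "wrapped_dek": xor_cipher(dek, kek),
    }

def envelope_decrypt(blob: dict, kek: bytes) -> bytes:
    dek = xor_cipher(blob["wrapped_dek"], kek)
    return xor_cipher(blob["ciphertext"], dek)
```

Rotation becomes cheap with this layout: re-wrapping the small DEKs under a new KEK avoids re-encrypting the stored prompts and outputs themselves.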
- TLS configuration for AI APIs and services
- Disk, database, and object storage encryption
- Envelope encryption for prompts and outputs
- Key lifecycle, rotation, and segregation
- HSMs and cloud KMS integration options