Lesson 1. Vendor and client contracts for AI features: data processing agreements, joint controllership, liability allocation, and security requirements

This section explains how to structure vendor and client contracts for AI features, focusing on data processing agreements, joint controllership, liability allocation, and security clauses that reflect regulatory and ethical requirements in high-stakes environments.
- Defining controller and processor roles
- Key data processing agreement clauses
- Joint controllership and shared duties
- Liability caps, indemnities, and insurance
- Security and incident response obligations
- Audit, oversight, and termination rights

Lesson 2. Core data protection regimes and obligations relevant to AI (principles: purpose limitation, data minimisation, lawful basis, transparency)

This section reviews the core data protection regimes relevant to AI, emphasising principles such as purpose limitation, data minimisation, lawful basis, and transparency, and shows how to operationalise them across AI development and deployment.
- Purpose limitation in AI training and use
- Data minimisation and feature selection
- Choosing and documenting lawful bases
- Transparency and meaningful notices
- Accuracy, storage limits, and integrity
- Accountability and governance structures

Lesson 3. Data Protection Impact Assessments (DPIAs) and AI Impact Assessments (AIAs): structure, key questions, and remediation plans

This section explains how to design and run DPIAs and AIAs, from scoping and risk identification to stakeholder engagement, documentation, and remediation planning, ensuring AI systems meet legal, ethical, and organisational expectations.
- Scoping AI systems and processing activities
- Identifying stakeholders and affected groups
- Cataloguing risks to rights and freedoms
- Designing mitigation and remediation plans
- Documenting outcomes and sign-off
- Integrating DPIAs into the product lifecycle

Lesson 4. Algorithmic fairness and bias: sources of bias, measurement methods, and mitigation techniques

This section analyses algorithmic bias and fairness in AI, explaining sources of bias, fairness metrics, and mitigation strategies across data, modelling, and deployment, with attention to legal expectations in strict regulatory environments.
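As a concrete taste of the measurement methods covered in this lesson, the sketch below computes one widely used screening metric, the demographic parity (selection-rate) ratio, on hypothetical decision data. The function names, the data, and the 0.8 threshold convention are illustrative assumptions, not a legal standard.

```python
# Hypothetical sketch: demographic parity (selection-rate) ratio.
# Data and the "0.8" rule of thumb are illustrative, not legal standards.

def selection_rate(decisions: list[int]) -> float:
    """Fraction of positive outcomes (1 = e.g. 'approved')."""
    return sum(decisions) / len(decisions)

def demographic_parity_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Lower selection rate divided by the higher one.
    Values near 1.0 suggest similar treatment across groups;
    a common screening threshold (context-dependent) is 0.8."""
    lo, hi = sorted([selection_rate(group_a), selection_rate(group_b)])
    return lo / hi if hi > 0 else 1.0

# Hypothetical approval decisions for two demographic groups:
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # selection rate 0.75
group_b = [1, 0, 1, 0, 0, 1, 0, 0]   # selection rate 0.375

print(demographic_parity_ratio(group_a, group_b))  # → 0.5
```

This ratio is only one lens on fairness; the lesson also treats the trade-offs between competing metrics, which a single number like this cannot capture.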
- Types and sources of algorithmic bias
- Fairness metrics and trade-offs
- Bias in data collection and labelling
- Model training and evaluation strategies
- Mitigation during deployment and monitoring
- Documentation of fairness decisions

Lesson 5. Operational playbooks for product compliance reviews and cross-functional escalation (Product, Legal, Privacy, Compliance)

This section provides practical playbooks for product compliance reviews, defining roles, workflows, and escalation paths among Product, Legal, Privacy, and Compliance teams to manage AI risks and document defensible decisions.
- Intake and triage of AI product changes
- Risk-based review levels and criteria
- Roles of Product, Legal, Privacy, and Compliance
- Escalation paths for high-risk AI use cases
- Decision documentation and approval records
- Feedback loops into product roadmaps

Lesson 6. Model risk management for AI features: documentation (model cards), validation, testing, performance monitoring, and explainability

This section covers model risk management for AI features, including documentation, validation, testing, monitoring, and explainability, aligning model governance with regulatory expectations and internal risk appetite frameworks.
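To illustrate the performance-monitoring topic, here is a minimal sketch of one common drift statistic, the Population Stability Index (PSI), computed over pre-binned score distributions. The bin fractions and the usual 0.1/0.25 rules of thumb are industry conventions assumed for illustration, not regulatory thresholds.

```python
# Minimal sketch: Population Stability Index (PSI) for drift monitoring.
# Bin fractions and thresholds are illustrative conventions.
import math

def psi(expected: list[float], actual: list[float], eps: float = 1e-6) -> float:
    """PSI over matched score bins (each list holds per-bin
    fractions summing to 1). Common rule of thumb: < 0.1 stable,
    0.1-0.25 investigate, > 0.25 significant drift."""
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)   # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # score distribution at validation
current  = [0.10, 0.20, 0.30, 0.40]   # same bins observed in production

print(f"PSI = {psi(baseline, current):.3f}")  # → PSI = 0.228
```

A value of 0.228 would land in the "investigate" band, typically triggering the escalation and change-management processes this lesson describes.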
- Model inventory and classification
- Model cards and documentation standards
- Validation and independent challenge
- Performance, drift, and stability monitoring
- Explainability methods and limitations
- Model change management and decommissioning

Lesson 7. Ethical frameworks for AI decisions: stakeholder mapping, proportionality, contestability, human oversight, and redress mechanisms

This section introduces ethical frameworks for AI decision-making, covering stakeholder mapping, proportionality, contestability, human oversight, and redress, and shows how to embed these principles into governance processes and product design.
- Stakeholder and impact mapping for AI
- Proportionality and necessity assessments
- Designing contestability and appeal channels
- Human-in-the-loop and human-on-the-loop models
- Redress and remedy mechanisms for harm
- Embedding ethics reviews into governance

Lesson 8. Privacy-preserving design: data minimisation, differential privacy, anonymisation, pseudonymisation, and secure multi-party computation basics

This section explores privacy-preserving design strategies for AI, including data minimisation, anonymisation, pseudonymisation, differential privacy, and secure multi-party computation, with guidance on use cases and implementation trade-offs.
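As a concrete taste of one technique covered in this lesson, here is a minimal sketch of the Laplace mechanism applied to a counting query. The dataset, the epsilon value, and the helper names are illustrative assumptions, not a production implementation.

```python
# Minimal sketch: epsilon-differentially-private count via the
# Laplace mechanism. Records and epsilon are hypothetical.
import random

def laplace_noise(scale: float, rng=random) -> float:
    """Sample Laplace(0, scale): the difference of two i.i.d.
    exponential draws is Laplace-distributed."""
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon: float) -> float:
    """A counting query has sensitivity 1 (adding or removing one
    person changes it by at most 1), so Laplace noise with scale
    1/epsilon gives epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical records: which users opted in to analytics?
records = [{"opted_in": i % 3 == 0} for i in range(300)]  # true count: 100
noisy = dp_count(records, lambda r: r["opted_in"], epsilon=0.5)
print(f"noisy count: {noisy:.1f}")   # close to, but not exactly, 100
```

Smaller epsilon means stronger privacy but noisier answers; choosing it is as much a policy decision as a technical one, which is where the "selecting appropriate privacy techniques" topic comes in.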
- Data minimisation in AI feature design
- Anonymisation and re-identification risks
- Pseudonymisation and tokenisation methods
- Differential privacy for analytics and ML
- Secure multi-party computation basics
- Selecting appropriate privacy techniques

Lesson 9. Technical controls: access control, logging, encryption, retention policies, and a secure development lifecycle (SDLC) for ML

This section details technical safeguards for AI systems, including access control, logging, encryption, retention, and secure ML development, showing how engineering choices support regulatory compliance and ethical risk reduction in practice.
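To ground the retention topic, the sketch below flags records that have outlived a per-class retention period, the core of deletion automation. The data classes, periods, and record shape are hypothetical; real schedules must come from legal and records-management policy.

```python
# Minimal sketch of retention-policy enforcement: flag records past
# their class's retention period. All classes/periods are hypothetical.
from datetime import datetime, timedelta, timezone

RETENTION = {
    "access_logs": timedelta(days=90),
    "training_data": timedelta(days=365),
}

def due_for_deletion(records, now: datetime):
    """Yield ids of records older than their class's retention period."""
    for rec in records:
        if now - rec["created_at"] > RETENTION[rec["data_class"]]:
            yield rec["id"]

now = datetime(2025, 1, 1, tzinfo=timezone.utc)
records = [
    {"id": "log-1", "data_class": "access_logs",
     "created_at": now - timedelta(days=120)},   # past the 90-day limit
    {"id": "td-1", "data_class": "training_data",
     "created_at": now - timedelta(days=200)},   # still within 365 days
]
print(list(due_for_deletion(records, now)))      # → ['log-1']
```

In a real pipeline this check would run on a schedule, write its decisions to the audit log, and trigger verified deletion rather than just printing ids.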
- Role-based and attribute-based access control
- Security logging and audit trail design
- Encryption in transit and at rest for AI data
- Data retention and deletion automation
- Secure coding and code review for ML
- Security testing and hardening of AI services

Lesson 10. Assessing lawful bases and consent limits for workplace surveillance and employee data processing

This section examines lawful bases and the limits of consent for workplace surveillance and employee data, addressing monitoring tools, transparency duties, power imbalances, and safeguards that protect dignity and labour rights.
- Common workplace surveillance scenarios
- Assessing legitimate interest and necessity
- Consent limits in employment contexts
- Transparency and worker information duties
- Safeguards for monitoring technologies
- Engaging works councils and unions

Lesson 11. Regulatory trends in high-regulation jurisdictions and compliance pathways for novel AI products

This section surveys regulatory trends in high-regulation jurisdictions, outlining emerging AI laws, guidance, and enforcement patterns, and maps practical compliance pathways for novel AI products and cross-border operations.
- Overview of major AI regulatory regimes
- Sector-specific AI rules and guidance
- Supervisory expectations and enforcement
- Regulatory sandboxes and innovation hubs
- Designing risk-based compliance programmes
- Cross-border data and AI compliance issues

Lesson 12. Human rights frameworks applicable to data and AI: the UN Guiding Principles, GDPR as a rights-based model, and national human rights implications

This section links human rights law to data and AI governance, explaining the UN Guiding Principles, GDPR's rights-based approach, and how national human rights duties shape corporate responsibilities for AI design and deployment.
- UN Guiding Principles and corporate duties
- GDPR as a rights-based regulatory model
- National human rights laws affecting AI
- Salient human rights risks in AI use
- Human rights due diligence for AI
- Remedy and accountability expectations