Lesson 1: Vendor and client contracts for AI features: data processing agreements, joint controllership, liability allocation, and security requirements

This lesson explains how to structure contracts with vendors and clients for AI features, covering data processing agreements, joint controllership, liability allocation, and security requirements that align with regulatory and ethical expectations.
- Defining controller and processor roles
- Key data processing agreement clauses
- Joint controllership and shared duties
- Liability caps, indemnities, and insurance
- Security and incident response obligations
- Audit, oversight, and termination rights

Lesson 2: Core data protection regimes and obligations relevant to AI (principles: purpose limitation, data minimization, lawful basis, transparency)

This lesson reviews the core data protection regimes relevant to AI, focusing on the principles of purpose limitation, data minimization, lawful basis, and transparency, and on how to apply them in practice when building and deploying AI systems.
- Purpose limitation in AI training and use
- Data minimization and feature selection
- Choosing and documenting lawful bases
- Transparency and meaningful notices
- Accuracy, storage limits, and integrity
- Accountability and governance structures

Lesson 3: Data Protection Impact Assessments (DPIAs) / AI Impact Assessments (AIAs): structure, key questions, and remediation plans

This lesson explains how to structure and run DPIAs and AIAs, from scoping and risk identification through stakeholder involvement, documentation, and remediation planning, so that AI systems meet legal, ethical, and organizational expectations.
- Scoping AI systems and processing activities
- Identifying stakeholders and affected groups
- Cataloging risks to rights and freedoms
- Designing mitigation and remediation plans
- Documenting outcomes and sign-off
- Integrating DPIAs into the product lifecycle

Lesson 4: Algorithmic fairness and bias: sources of bias, measurement methods, and mitigation techniques

This lesson analyzes bias and fairness in AI algorithms, explaining where bias originates, how it is measured, and how to mitigate it across data, models, and deployment, with a focus on legal requirements in highly regulated jurisdictions.
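
As a concrete illustration of the measurement topic, the sketch below computes two common group-fairness metrics (demographic parity difference and equal opportunity difference) with plain NumPy. The predictions, labels, and binary group attribute are hypothetical; real reviews use the metrics chosen for the use case.

```python
# A minimal sketch of two group-fairness metrics; data and group labels are hypothetical.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups (0/1 coded)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute gap in true-positive rates between two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
    return abs(tpr[0] - tpr[1])

# Toy example: group 0 receives positive predictions 80% of the time,
# group 1 only 40% of the time, so the parity gap is about 0.4.
y_pred = [1, 1, 1, 0, 1, 0, 1, 0, 0, 1]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # ≈ 0.4
```
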
- Types and sources of algorithmic bias
- Fairness metrics and trade-offs
- Bias in data collection and labeling
- Model training and evaluation strategies
- Mitigation during deployment and monitoring
- Documentation of fairness decisions

Lesson 5: Operational playbooks for product compliance reviews and cross-functional escalation (Product, Legal, Privacy, Compliance)

This lesson provides practical playbooks for product compliance reviews, defining roles, workflows, and escalation paths across Product, Legal, Privacy, and Compliance teams to manage AI risks and document defensible decisions.
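
One way to make risk-based intake concrete is to encode triage criteria directly in tooling. The sketch below is a hypothetical rule; the factor names and review tracks are assumptions for illustration, not a prescribed playbook.

```python
# Hypothetical triage rule for AI product changes; factors and tracks are illustrative only.
HIGH_RISK_FACTORS = {
    "automated_decision_with_legal_effect",
    "processes_special_category_data",
    "monitors_employees",
    "new_jurisdiction_or_vendor",
}

def review_level(change_factors: set) -> str:
    """Map a proposed product change's risk factors to a review track."""
    hits = change_factors & HIGH_RISK_FACTORS
    if "automated_decision_with_legal_effect" in hits:
        return "full review with Legal/Privacy escalation"
    if hits:
        return "standard cross-functional review"
    return "lightweight self-assessment"

print(review_level({"ui_copy_change"}))                        # lightweight self-assessment
print(review_level({"processes_special_category_data"}))       # standard cross-functional review
print(review_level({"automated_decision_with_legal_effect"}))  # full review with escalation
```
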
- Intake and triage of AI product changes
- Risk-based review levels and criteria
- Roles of Product, Legal, Privacy, and Compliance
- Escalation paths for high-risk AI use cases
- Decision documentation and approval records
- Feedback loops into product roadmaps

Lesson 6: Model risk management for AI features: documentation (model cards), validation, testing, performance monitoring, and explainability

This lesson covers model risk management for AI features, including documentation, validation, testing, performance monitoring, and explainability, aligning model governance with regulatory expectations and internal risk appetite.
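
Model cards are often kept as structured, machine-readable records alongside the model inventory. The sketch below is a minimal illustration with hypothetical field names and values, not a standard schema.

```python
# A minimal, hypothetical model-card record; fields and values are illustrative.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    training_data: str = ""
    evaluation_metrics: dict = field(default_factory=dict)
    known_limitations: str = ""
    owner: str = ""
    approved_by: str = ""

card = ModelCard(
    model_name="churn-predictor",
    version="1.3.0",
    intended_use="Prioritizing customer-retention outreach",
    out_of_scope_uses=["pricing decisions", "credit or employment decisions"],
    training_data="CRM events 2021-2024; see internal dataset sheet",
    evaluation_metrics={"auc": 0.79, "demographic_parity_diff": 0.02},
    known_limitations="Not validated for newly launched markets",
    owner="ml-platform-team",
    approved_by="Model Risk Committee",
)
print(json.dumps(asdict(card), indent=2))  # machine-readable card for the model inventory
```
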
- Model inventory and classification
- Model cards and documentation standards
- Validation and independent challenge
- Performance, drift, and stability monitoring
- Explainability methods and limitations
- Model change management and decommissioning

Lesson 7: Ethical frameworks for AI decisions: stakeholder mapping, proportionality, contestability, human oversight, and redress mechanisms

This lesson introduces ethical frameworks for AI decisions, covering stakeholder mapping, proportionality, contestability, human oversight, and redress mechanisms, and shows how to embed them in governance processes and product design.
- Stakeholder and impact mapping for AI
- Proportionality and necessity assessments
- Designing contestability and appeal channels
- Human-in-the-loop and human-on-the-loop models
- Redress and remedy mechanisms for harm
- Embedding ethics reviews into governance

Lesson 8: Privacy-preserving design: data minimization, differential privacy, anonymization, pseudonymization, and secure multi-party computation basics

This lesson explores privacy-preserving design techniques for AI, including data minimization, anonymization, pseudonymization, differential privacy, and secure multi-party computation, with guidance on appropriate uses and implementation trade-offs.
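
To make the differential-privacy topic concrete, the sketch below applies the Laplace mechanism to a simple counting query; the epsilon value and the example count are illustrative, not recommended settings.

```python
# Minimal sketch of the Laplace mechanism for a counting query.
import numpy as np

def noisy_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query changes by at most 1 when any one person's record is
    added or removed, so its sensitivity is 1 and the Laplace noise scale
    is sensitivity / epsilon.
    """
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: 1,240 users triggered the feature this week; release a noisy figure at epsilon = 0.5.
print(noisy_count(1240, epsilon=0.5))
```
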
- Data minimization in AI feature design
- Anonymization and re-identification risks
- Pseudonymization and tokenization methods
- Differential privacy for analytics and ML
- Secure multi-party computation basics
- Selecting appropriate privacy techniques

Lesson 9: Technical controls: access control, logging, encryption, retention policies, and secure development lifecycle (SDLC) for ML

This lesson details technical safeguards for AI systems, including access control, logging, encryption, retention policies, and a secure ML development lifecycle, showing how engineering choices support regulatory compliance and reduce ethical risk.
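
Retention and deletion can be partially automated. The sketch below is a hypothetical check that flags records past a per-category retention period; the category names and periods are assumptions for illustration, not legal retention schedules.

```python
# Hypothetical retention check; categories and periods are illustrative assumptions.
from datetime import datetime, timedelta, timezone

RETENTION_PERIODS = {
    "inference_request_logs": timedelta(days=90),
    "model_training_snapshots": timedelta(days=365),
    "support_transcripts": timedelta(days=730),
}

def records_due_for_deletion(records, now=None):
    """records: iterable of (record_id, category, created_at) tuples."""
    now = now or datetime.now(timezone.utc)
    due = []
    for record_id, category, created_at in records:
        limit = RETENTION_PERIODS.get(category)
        if limit is not None and now - created_at > limit:
            due.append(record_id)
    return due

sample = [
    ("req-001", "inference_request_logs", datetime.now(timezone.utc) - timedelta(days=200)),
    ("req-002", "inference_request_logs", datetime.now(timezone.utc) - timedelta(days=10)),
]
print(records_due_for_deletion(sample))  # ['req-001']
```
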
- Role-based and attribute-based access control
- Security logging and audit trail design
- Encryption in transit and at rest for AI data
- Data retention and deletion automation
- Secure coding and code review for ML
- Security testing and hardening of AI services

Lesson 10: Assessing lawful bases and consent limits for workplace surveillance and employee data processing

This lesson examines lawful bases and the limits of consent for workplace surveillance and employee data processing, covering monitoring tools, transparency duties, power imbalances, and safeguards for dignity and worker rights.
- Common workplace surveillance scenarios
- Assessing legitimate interest and necessity
- Consent limits in employment contexts
- Transparency and worker information duties
- Safeguards for monitoring technologies
- Engaging works councils and unions

Lesson 11: Regulatory trends in high-regulation jurisdictions and compliance pathways for novel AI products

This lesson surveys regulatory trends in highly regulated jurisdictions, outlining emerging AI laws, guidance, and enforcement approaches, and mapping practical compliance pathways for novel AI products and cross-border operations.
- Overview of major AI regulatory regimes
- Sector-specific AI rules and guidance
- Supervisory expectations and enforcement
- Regulatory sandboxes and innovation hubs
- Designing risk-based compliance programs
- Cross-border data and AI compliance issues

Lesson 12: Human rights frameworks applicable to data and AI: UN Guiding Principles, GDPR as a rights-based model, and national human-rights implications

This lesson connects human rights law to data and AI governance, explaining the UN Guiding Principles, GDPR's rights-based model, and how national human rights obligations shape corporate responsibility for AI design and use.
- UN Guiding Principles and corporate duties
- GDPR as a rights-based regulatory model
- National human rights laws affecting AI
- Salient human rights risks in AI use
- Human rights due diligence for AI
- Remedy and accountability expectations