Lesson 1: Vendor and client contracts for AI features: data processing agreements, joint controllership, liability allocation, and security requirements

This lesson explains how to structure vendor and client contracts for AI features, focusing on data processing agreements, joint controllership, liability allocation, and security clauses that meet regulatory and ethical requirements.
- Defining controller and processor roles
- Key data processing agreement clauses
- Joint controllership and shared duties
- Liability caps, indemnities, and insurance
- Security and incident response obligations
- Audit, oversight, and termination rights

Lesson 2: Core data protection regimes and obligations relevant to AI (principles: purpose limitation, data minimization, lawful basis, transparency)

This lesson reviews the core data protection regimes relevant to AI, emphasizing principles such as purpose limitation, data minimization, lawful basis, and transparency, and how to operationalize them in AI development and deployment.
- Purpose limitation in AI training and use
- Data minimization and feature selection
- Choosing and documenting lawful bases
- Transparency and meaningful notices
- Accuracy, storage limits, and integrity
- Accountability and governance structures

Lesson 3: Data Protection Impact Assessments (DPIAs) / AI Impact Assessments (AIA): structure, key questions, and remediation plans

This lesson explains how to structure and run DPIAs and AIAs, from scoping and risk identification to stakeholder engagement, documentation, and remediation planning, ensuring AI systems meet legal, ethical, and organizational expectations.
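As an illustration of the risk-cataloging and remediation-planning steps a DPIA involves, here is a minimal, hypothetical sketch of a risk register entry and a triage rule. The field names, the 1-5 scoring scale, and the thresholds are assumptions for illustration, not a prescribed DPIA format:

```python
from dataclasses import dataclass, field

@dataclass
class DpiaRisk:
    description: str              # risk to rights and freedoms
    likelihood: int               # assumed scale: 1 (rare) .. 5 (almost certain)
    severity: int                 # assumed scale: 1 (minimal) .. 5 (severe)
    mitigations: list = field(default_factory=list)

    def score(self) -> int:
        # Simple likelihood x severity scoring, a common (assumed) convention.
        return self.likelihood * self.severity

def triage(risk: DpiaRisk) -> str:
    """Map a raw risk score to a review outcome (thresholds are illustrative)."""
    s = risk.score()
    if s >= 15:
        return "escalate"    # e.g. consult the DPO before proceeding
    if s >= 8:
        return "mitigate"    # remediation plan required before launch
    return "accept"          # document the decision and monitor

risk = DpiaRisk("Re-identification of users in training data", 4, 4,
                mitigations=["pseudonymize identifiers", "restrict access"])
print(triage(risk))  # escalate
```

In practice the register would also record owners, deadlines, and residual risk after mitigation, so sign-off and lifecycle integration (covered below) have something concrete to attach to.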
- Scoping AI systems and processing activities
- Identifying stakeholders and affected groups
- Cataloging risks to rights and freedoms
- Designing mitigation and remediation plans
- Documenting outcomes and sign-off
- Integrating DPIAs into product lifecycle

Lesson 4: Algorithmic fairness and bias: sources of bias, measurement methods, and mitigation techniques

This lesson analyzes AI bias and fairness, explaining sources of bias, measurement methods, and mitigation strategies across data, models, and deployment, with a focus on legal requirements in highly regulated jurisdictions.
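To make the measurement side concrete, here is a minimal sketch of one widely used fairness metric, the demographic parity difference (the gap in favourable-outcome rates between groups). The example decisions and group labels are invented for illustration:

```python
def positive_rate(decisions, groups, group):
    """Share of favourable outcomes (1s) received by one group."""
    picks = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picks) / len(picks)

def demographic_parity_diff(decisions, groups):
    """Largest gap in favourable-outcome rates across all groups."""
    rates = {g: positive_rate(decisions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

decisions = [1, 0, 1, 1, 0, 0, 1, 0]                    # 1 = favourable outcome
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]    # protected attribute
print(demographic_parity_diff(decisions, groups))       # 0.5 (0.75 for a vs 0.25 for b)
```

Demographic parity is only one of several competing metrics; the lesson's topics below cover the trade-offs between them and why the chosen metric should be documented.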
- Types and sources of algorithmic bias
- Fairness metrics and trade-offs
- Bias in data collection and labeling
- Model training and evaluation strategies
- Mitigation during deployment and monitoring
- Documentation of fairness decisions

Lesson 5: Operational playbooks for product compliance reviews and cross-functional escalation (Product, Legal, Privacy, Compliance)

This lesson provides practical playbooks for product compliance reviews, defining roles, workflows, and escalation paths among Product, Legal, Privacy, and Compliance teams to manage AI risks and produce defensible decision records.
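A risk-based triage rule of the kind such playbooks describe can be sketched in a few lines. The criteria and the routing below are assumptions chosen for illustration, not a standard intake process:

```python
def review_path(uses_personal_data: bool,
                automated_decision: bool,
                affects_vulnerable_groups: bool) -> list:
    """Route an AI product change to reviewers based on simple risk criteria."""
    reviewers = ["Product"]                      # every change gets a product review
    if uses_personal_data:
        reviewers.append("Privacy")
    if automated_decision or affects_vulnerable_groups:
        reviewers += ["Legal", "Compliance"]     # high-risk escalation path
    return reviewers

print(review_path(True, True, False))
# ['Product', 'Privacy', 'Legal', 'Compliance']
```

Real intake forms capture many more signals, but the value of encoding the rule is the same: triage decisions become consistent, auditable, and easy to feed back into the roadmap.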
- Intake and triage of AI product changes
- Risk-based review levels and criteria
- Roles of Product, Legal, Privacy, Compliance
- Escalation paths for high-risk AI use cases
- Decision documentation and approval records
- Feedback loops into product roadmaps

Lesson 6: Model risk management for AI features: documentation (model cards), validation, testing, performance monitoring, and explainability

This lesson covers model risk management for AI features, including documentation, validation, testing, monitoring, and explainability, aligning model governance with regulatory expectations and internal risk appetite.
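The model-card idea mentioned above is essentially structured documentation with mandatory fields. A minimal sketch follows; the field set and example values are assumptions, loosely inspired by common model-card templates rather than any fixed standard:

```python
from dataclasses import dataclass, asdict

@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    out_of_scope_uses: str       # explicitly ruled-out applications
    training_data: str
    evaluation_metrics: dict     # e.g. accuracy plus fairness metrics
    known_limitations: str
    owner: str                   # accountable contact for the model

card = ModelCard(
    name="credit-risk-scorer",                                   # hypothetical model
    version="2.1.0",
    intended_use="Pre-screening of consumer credit applications",
    out_of_scope_uses="Employment or insurance decisions",
    training_data="Internal loan book, 2018-2023, EU customers",
    evaluation_metrics={"auc": 0.81, "demographic_parity_diff": 0.04},
    known_limitations="Not validated for thin-file applicants",
    owner="model-risk@example.com",
)
print(asdict(card)["evaluation_metrics"]["auc"])
```

Treating the card as data rather than free text makes it easy to enforce completeness at registration time and to feed the fields into a model inventory.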
- Model inventory and classification
- Model cards and documentation standards
- Validation and independent challenge
- Performance, drift, and stability monitoring
- Explainability methods and limitations
- Model change management and decommissioning

Lesson 7: Ethical frameworks for AI decisions: stakeholder mapping, proportionality, contestability, human oversight, and redress mechanisms

This lesson introduces ethical frameworks for AI decisions, covering stakeholder mapping, proportionality, contestability, human oversight, and redress mechanisms, and shows how to embed them in governance processes and product design.
- Stakeholder and impact mapping for AI
- Proportionality and necessity assessments
- Designing contestability and appeal channels
- Human-in-the-loop and on-the-loop models
- Redress and remedy mechanisms for harm
- Embedding ethics reviews into governance

Lesson 8: Privacy-preserving design: data minimization, differential privacy, anonymization, pseudonymization, and secure multi-party computation basics

This lesson examines privacy-preserving design for AI, including data minimization, anonymization, pseudonymization, differential privacy, and secure multi-party computation, with guidance on applications and trade-offs.
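Of the techniques above, differential privacy is the most directly expressible in code. Here is a minimal sketch of the Laplace mechanism applied to a counting query; the epsilon value is an assumed privacy budget, and sensitivity is 1 because adding or removing one individual changes a count by at most 1:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one Laplace(0, scale) variate via inverse-CDF sampling."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with noise scaled to sensitivity / epsilon."""
    return true_count + laplace_noise(sensitivity / epsilon)

random.seed(0)                                  # fixed seed only for reproducibility
print(round(dp_count(1000, epsilon=0.5), 1))    # a noisy count close to 1000
```

The trade-off the lesson discusses is visible in the formula: a smaller epsilon (stronger privacy) means a larger noise scale and therefore less accurate released statistics.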
- Data minimization in AI feature design
- Anonymization and re-identification risks
- Pseudonymization and tokenization methods
- Differential privacy for analytics and ML
- Secure multi-party computation basics
- Selecting appropriate privacy techniques

Lesson 9: Technical controls: access control, logging, encryption, retention policies, and secure development lifecycle (SDLC) for ML

This lesson details technical controls for AI systems, including access control, logging, encryption, retention, and a secure development lifecycle for ML, showing how engineering choices support compliance and reduce ethical risk.
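Two of these controls, role-based access control and an audit trail, fit naturally together: every access decision is logged, granted or not. The roles and permissions below are illustrative assumptions, not a reference model:

```python
# Hypothetical role-to-permission mapping for an ML platform.
ROLE_PERMISSIONS = {
    "ml_engineer":  {"read_features", "train_model"},
    "data_steward": {"read_features", "approve_dataset", "delete_data"},
    "auditor":      {"read_logs"},
}

AUDIT_LOG = []  # append-only audit trail of access decisions

def check_access(user: str, role: str, permission: str) -> bool:
    """Decide an access request and record the decision in the audit trail."""
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    AUDIT_LOG.append({"user": user, "role": role,
                      "permission": permission, "allowed": allowed})
    return allowed

print(check_access("alice", "ml_engineer", "train_model"))   # True
print(check_access("alice", "ml_engineer", "delete_data"))   # False
print(len(AUDIT_LOG))                                        # 2
```

Logging denials as well as grants matters for the audit-trail design the lesson covers: refused attempts are often the most informative entries during an investigation.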
- Role-based and attribute-based access control
- Security logging and audit trail design
- Encryption in transit and at rest for AI data
- Data retention and deletion automation
- Secure coding and code review for ML
- Security testing and hardening of AI services

Lesson 10: Assessing lawful bases and consent limits for workplace surveillance and employee data processing

This lesson assesses lawful bases and the limits of consent for workplace surveillance and employee data processing, addressing monitoring tools, transparency duties, power imbalances, and safeguards to protect dignity and worker rights.
- Common workplace surveillance scenarios
- Assessing legitimate interest and necessity
- Consent limits in employment contexts
- Transparency and worker information duties
- Safeguards for monitoring technologies
- Engaging works councils and unions

Lesson 11: Regulatory trends in high-regulation jurisdictions and compliance pathways for novel AI products

This lesson surveys regulatory trends in highly regulated jurisdictions, outlining emerging AI laws, guidance, and enforcement practices, and mapping practical compliance pathways for novel AI products and cross-border operations.
- Overview of major AI regulatory regimes
- Sector-specific AI rules and guidance
- Supervisory expectations and enforcement
- Regulatory sandboxes and innovation hubs
- Designing risk-based compliance programs
- Cross-border data and AI compliance issues

Lesson 12: Human rights frameworks applicable to data and AI: UN Guiding Principles, GDPR as a rights-based model, and national human-rights implications

This lesson connects human rights law to data and AI governance, explaining the UN Guiding Principles, GDPR's rights-based approach, and how national human rights obligations shape corporate duties in AI design and deployment.
- UN Guiding Principles and corporate duties
- GDPR as a rights-based regulatory model
- National human rights laws affecting AI
- Salient human rights risks in AI use
- Human rights due diligence for AI
- Remedy and accountability expectations