Lesson 1: Prioritization for automation — which tests to automate first (API, critical flows, regression), why, and criteria for automation ROI

This lesson explains how to choose which tests to automate first, focusing on API tests, critical user flows, and regression checks, and establishes criteria for automation ROI, maintenance cost, and risk reduction to guide investment decisions.
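The ROI criterion described above can be sketched numerically. This is a minimal illustration, not a prescribed model: the break-even comparison and all effort figures are invented assumptions.

```python
# Rough automation ROI: compare cumulative manual cost against the
# one-time build cost plus per-run maintenance of an automated test.
# All figures are illustrative assumptions, measured in effort hours.

def automation_roi(manual_hours_per_run: float,
                   runs: int,
                   build_hours: float,
                   maint_hours_per_run: float) -> float:
    """Hours saved (positive) or lost (negative) by automating."""
    manual_cost = manual_hours_per_run * runs
    automated_cost = build_hours + maint_hours_per_run * runs
    return manual_cost - automated_cost

# A regression check run every sprint pays back quickly:
savings = automation_roi(manual_hours_per_run=0.5, runs=100,
                         build_hours=8, maint_hours_per_run=0.05)
print(savings)  # 50.0 - (8 + 5.0) = 37.0 hours saved
```

A test run only twice, by contrast, produces a negative result — one quantitative way to decide when not to automate.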
- Identifying high-value tests for automation
- Automating API tests before brittle UI paths
- Automating critical paths and happy-path journeys
- Building a stable regression automation base
- Calculating and tracking automation ROI
- Knowing when not to automate a test

Lesson 2: Release gating and test exit criteria tied to acceptance criteria and metrics

This lesson defines release gates and test exit criteria, showing how to tie them to acceptance criteria, risks, and metrics such as defect rates, coverage, and velocity, so that release decisions are transparent and evidence-based.
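A release gate of this kind can be expressed as a simple check over measured metrics. The thresholds below are hypothetical examples, not recommended values:

```python
# Minimal go/no-go gate over release metrics. Thresholds are
# illustrative assumptions; real gates come from the team's exit criteria.

def release_gate(metrics: dict) -> tuple:
    """Return (go, reasons): go is True only if every threshold passes."""
    failures = []
    if metrics["pass_rate"] < 0.98:
        failures.append("pass rate below 98%")
    if metrics["coverage"] < 0.80:
        failures.append("requirement coverage below 80%")
    if metrics["open_critical_defects"] > 0:
        failures.append("open critical defects remain")
    return (not failures, failures)

ok, reasons = release_gate({"pass_rate": 0.99, "coverage": 0.85,
                            "open_critical_defects": 1})
print(ok, reasons)  # False ['open critical defects remain']
```

Returning the list of failed criteria, not just a boolean, supports the documentation and sign-off step: the reasons become part of the release record.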
- Defining clear entry and exit criteria
- Tying exit criteria to acceptance criteria
- Quality metrics for go/no-go decisions
- Defect severity thresholds and open-defect caps
- Handling risk-based exceptions and waivers
- Documenting release decisions and sign-offs

Lesson 3: Traceability — mapping requirements to tests and reporting coverage

This lesson covers mapping requirements to tests, including building and maintaining traceability matrices, linking tests to user stories and risks, and reporting coverage gaps that inform planning and release decisions.
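At its simplest, a traceability matrix is a mapping from requirement IDs to test IDs, from which coverage gaps fall out directly. The IDs here are made up for illustration:

```python
# Tiny traceability matrix: requirement IDs mapped to the test cases
# that verify them. IDs are hypothetical.

requirements = ["REQ-1", "REQ-2", "REQ-3"]
matrix = {
    "REQ-1": ["TC-101", "TC-102"],
    "REQ-2": ["TC-201"],
    # REQ-3 has no linked tests yet.
}

def coverage_gaps(reqs, matrix):
    """Requirements with no linked test cases."""
    return [r for r in reqs if not matrix.get(r)]

print(coverage_gaps(requirements, matrix))  # ['REQ-3']
```

In practice the same mapping usually lives in an agile or test-management tool, but the gap report is the same query: every requirement with an empty test list.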
- Building a simple traceability matrix
- Linking user stories, risks, and test cases
- Traceability in agile and test management tools
- Measuring coverage beyond raw numbers
- Identifying and prioritizing coverage gaps
- Using traceability in audits and compliance

Lesson 4: Types of testing required — functional, regression, smoke/sanity, E2E, performance/load/stress, security, accessibility, cross-browser and responsive, localization and data validation

This lesson surveys the test types a web application needs — functional checks, regression suites, smoke and sanity checks, end-to-end tests, performance/load/stress testing, security, accessibility, cross-browser and responsive testing, localization, and data validation — with guidance on when to apply each.
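One practical consequence of distinguishing test types is suite selection: a fast smoke run versus a full regression run from the same inventory. A minimal sketch, with hypothetical test names and a simple tagging scheme:

```python
# Selecting a suite by test type using tags. Test names and the
# tagging convention are invented for illustration.

tests = [
    {"name": "login_works", "tags": {"smoke", "regression"}},
    {"name": "checkout_discount_codes", "tags": {"regression"}},
    {"name": "homepage_renders", "tags": {"smoke"}},
]

def select(tests, tag):
    """Names of tests carrying the given tag."""
    return [t["name"] for t in tests if tag in t["tags"]]

print(select(tests, "smoke"))       # ['login_works', 'homepage_renders']
print(select(tests, "regression"))  # ['login_works', 'checkout_discount_codes']
```

Most test frameworks offer the same idea natively (markers, tags, or groups); the point is that type labels make "fast feedback" and "broad coverage" two runs of one suite.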
- Functional and regression suites for core flows
- Smoke and sanity checks for fast feedback
- Performance, load, and stress test objectives
- Security checks for common web vulnerabilities
- Accessibility, cross-browser, and responsive testing
- Localization and data validation considerations

Lesson 5: Manual testing strategy — exploratory, usability, ad-hoc, session-based testing, edge case validation

This lesson examines manual testing strategies that complement automation, including exploratory, usability, ad-hoc, and session-based testing, plus techniques for identifying edge cases and documenting high-quality test notes and charters.
- Planning and scoping exploratory test charters
- Heuristics and tours for uncovering hidden defects
- Usability evaluation for web flows and UI patterns
- Session-based test management and note-taking
- Ad-hoc testing for quick risk probes and testing bursts
- Defining edge case scenarios and boundary checks

Lesson 6: Test environments, staging setup, data masking, service virtualization for third-party payments and real-time channels

This lesson details how to set up and manage web test environments, including production-like staging, masked but realistic data, and service virtualization for third-party payments and real-time channels, enabling safe, repeatable testing.
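The "masked but realistic" idea above means hiding real values while preserving the shape of the data. A minimal sketch covering only simple email and 16-digit card formats; the patterns are illustrative, not production-grade masking:

```python
import re

# Data-masking sketch: keep the data's shape realistic while hiding
# real values. Handles only simple cases, as an illustration.

def mask_email(email: str) -> str:
    """Keep the first character and the domain; hide the rest."""
    local, _, domain = email.partition("@")
    return local[0] + "***@" + domain

def mask_card(text: str) -> str:
    """Keep the last four digits of a 16-digit card number, as
    receipts commonly do; mask the rest."""
    return re.sub(r"\b(?:\d[ -]?){12}(\d{4})\b", r"****-\1", text)

print(mask_email("alice@example.com"))        # a***@example.com
print(mask_card("card 4111 1111 1111 1111"))  # card ****-1111
```

Real masking pipelines also preserve referential integrity (the same input always masks to the same output across tables), which simple per-value substitution like this does not address.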
- Configuring staging to mirror production risks
- Approaches to synthetic and masked test data
- Managing configuration changes and environment drift
- Service virtualization for payment integrations
- Simulating real-time channels and webhooks
- Monitoring environment health and readiness

Lesson 7: Test automation strategy — selecting frameworks, test pyramid, CI/CD integration, test data and environment management

This lesson establishes a durable automation strategy for web applications, covering framework selection, the test pyramid, CI/CD integration, and robust practices for test data and environment management that keep suites fast, stable, and maintainable.
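A recurring stabilization technique for flaky, asynchronous behavior is to replace fixed sleeps with a polling wait. A minimal sketch, assuming a callable condition; the simulated async job is invented for illustration:

```python
import time

# Instead of a fixed sleep, poll for a condition with a timeout —
# a common way to stabilize tests around asynchronous behavior.

def wait_until(condition, timeout=5.0, interval=0.05):
    """Return True once condition() is truthy, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return bool(condition())  # one last check at the deadline

# Simulate an async job that only reports done after a few polls.
state = {"polls": 0}
def job_done():
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_until(job_done, timeout=2.0))  # True
```

UI frameworks ship the same pattern as explicit waits; the design point is that the test spends only as long as the system actually needs, and fails with a bounded, predictable timeout otherwise.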
- Criteria for choosing UI and API automation frameworks
- Designing a maintainable test pyramid for web products
- Integrating automated tests into CI/CD pipelines
- Managing test data: seeding, factories, masking
- Stabilizing flaky tests and handling asynchronous behavior
- Versioning tests alongside application code

Lesson 8: Overview of test strategy components — scope, levels, types, environments, schedule, roles

This lesson breaks down the core components of a test strategy — scope, levels, types, environments, schedule, and roles — and shows how to document them clearly so teams share a common understanding of quality goals.
- Defining in-scope and out-of-scope features
- Selecting appropriate test levels for each component
- Choosing test types based on product risks
- Planning environments and required configurations
- Documenting roles, ownership, and responsibility (RACI) charts
- Maintaining and versioning the strategy document

Lesson 9: Test scheduling and resource allocation for a beta timeline

This lesson explains how to schedule testing activities and allocate people, environments, and tools across a beta timeline, balancing risk, scope, and constraints while keeping stakeholders informed with realistic, data-driven plans.
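Risk-weighted estimation can be reduced to a small calculation. The weights, base hours, and feature list below are invented placeholders, not recommendations:

```python
# Risk-and-complexity-weighted effort sketch. Weights and base hours
# are illustrative assumptions a team would calibrate from its own data.

RISK_WEIGHT = {"low": 1.0, "medium": 1.5, "high": 2.0}

def estimate_hours(features, base_hours_per_feature=4.0):
    """features: iterable of (name, risk, complexity 1-3) tuples."""
    total = 0.0
    for _name, risk, complexity in features:
        total += base_hours_per_feature * RISK_WEIGHT[risk] * complexity
    return total

features = [("login", "high", 2), ("search", "medium", 1), ("footer", "low", 1)]
print(estimate_hours(features))  # 16.0 + 6.0 + 4.0 = 26.0
```

Even a toy model like this makes the trade-offs discussable with stakeholders: halving scope or dropping a high-risk feature changes the number in a visible, explainable way.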
- Defining testing phases within a beta timeline
- Estimating effort using risk and complexity
- Allocating testers, tools, and environments
- Aligning test milestones with release milestones
- Buffers, contingencies, and handling slippage
- Communicating schedules and changes to stakeholders

Lesson 10: Testing levels — unit, integration, component, system, end-to-end: goals and example deliverables for each

This lesson explains each testing level for web systems — unit, integration, component, system, and end-to-end — clarifying goals, ownership, example deliverables, and how the levels combine to provide layered quality feedback.
- Unit tests: scope, isolation, and code-level contracts
- Integration tests for services and data layers
- Component tests for UI components and modules
- System tests for full web application behavior
- End-to-end tests for critical user journeys
- Assigning ownership and tooling per level
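The layering across these levels is often summarized as the test pyramid: many cheap unit tests, fewer integration tests, and only a handful of end-to-end tests. A minimal sketch of checking a suite against that shape; the counts are invented examples:

```python
# Checking that a suite roughly follows a pyramid shape: more unit
# tests than integration tests, more integration than end-to-end.
# Counts are hypothetical.

def is_pyramid(counts: dict) -> bool:
    """True if unit > integration > e2e test counts."""
    return counts["unit"] > counts["integration"] > counts["e2e"]

print(is_pyramid({"unit": 400, "integration": 80, "e2e": 12}))   # True
print(is_pyramid({"unit": 50, "integration": 80, "e2e": 120}))   # False (inverted)
```

An inverted result like the second one (the "ice-cream cone") usually signals slow, brittle feedback concentrated at the most expensive level.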