
Outline plan for the testing work to be completed as part of MVP1. TBC whether we keep this page up to date with status or move tracking into PractiTest, but starting here for flexibility.

The table of contents shows how we are dividing up the testing: a series of MVP stories, plus other key themes.

Note: MVP2, MVP7, MVP19 & MVP21 have been de-scoped from the MVP1 release.

Notes on Tracking & Execution

My expectation is that each item flagged below will become the responsibility of a single tester (with support from others as they need it).

I’m still considering the best way to track this activity…

  • Jira sub-tasks

  • PractiTest

  • A Trello Board

  • Confluence

Where testers want to build out explicit lists of test cases to follow, I’d advocate PractiTest. But in many instances I don’t think that will be necessary. For more exploratory approaches, a test report in Confluence might be adequate.

It will be useful to have a single place we can go to that shows ownership, status, and detailed references for any one of these items. I am leaning towards Jira sub-tasks for that (we already have Epics for each of the MVP items), but either PractiTest or Trello could work OK - or we could simply master the content in this page & edit it to show progress…

Decision to be taken once this content has had some review, and we have buy-in that these are approximately the right atoms of ownership.

FWIW we have 67 “TO DO”, 7 “IN PROGRESS” and 11 “BLOCKER” items, but I expect the list will grow with input from Adam. I think that understates our status, as we have already done plenty of informal testing around many of the “TO DO” items… Nevertheless, there is lots to get on with!

Given the level of bugs we are fixing / not fixing, and the pressure to enable Beta testing of MVP1 at customers, I expect that we will not complete all this testing before MVP1 ships - but having a record of everything we’d like to do will help us make judgments about when we have good enough information to ship, and where the gaps are.

MVP1: GPS logging gaps

Android

  • Many test iterations with Dev on a single device to get to the point where we are recording > 90% of data points. Reports here: 1.5d testing with another GPS Update - 9 June

  • TO DO: Testing across a wider range of devices.

  • TO DO: Testing to isolate causes of remaining gaps.

  • TO DO: Regression test on final build.

  • BLOCKER: Ability to output JSON data for analysis (lost with the implementation of e2e data transmission; to be restored under a Dev Feature Flag).
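
Once the JSON export returns under the Dev Feature Flag, gap analysis can be scripted rather than eyeballed. A minimal sketch, assuming the export is an array of points with an epoch-millisecond `time` field logged at a nominal 5-minute interval - both assumptions to verify against the actual export format:

```typescript
// gap-analysis.ts - rough analysis of an exported GPS JSON file.
// ASSUMPTIONS: the export is an array of points with an epoch-ms
// "time" field, logged at a nominal 5-minute interval. Verify both
// against the actual Dev Feature Flag export before relying on this.
import { readFileSync } from "fs";

interface LocationPoint {
  time: number; // epoch milliseconds (assumed field name)
  latitude: number;
  longitude: number;
}

const INTERVAL_MS = 5 * 60 * 1000; // assumed logging interval

const points: LocationPoint[] = JSON.parse(readFileSync(process.argv[2], "utf8"));
points.sort((a, b) => a.time - b.time);

// Expected point count over the recorded span, at the nominal interval.
const spanMs = points[points.length - 1].time - points[0].time;
const expected = Math.floor(spanMs / INTERVAL_MS) + 1;

// Report every gap longer than twice the nominal interval.
for (let i = 1; i < points.length; i++) {
  const gap = points[i].time - points[i - 1].time;
  if (gap > 2 * INTERVAL_MS) {
    const at = new Date(points[i - 1].time).toISOString();
    console.log(`Gap of ${(gap / 60000).toFixed(1)} min starting ${at}`);
  }
}
console.log(`Recorded ${points.length}/${expected} points (${((100 * points.length) / expected).toFixed(1)}%)`);
```

This gives a repeatable way to check the “> 90% of data points” bar across the wider device range.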

iOS

  • Known major issues remaining with data sometimes missing for hours or days.

  • Dev team still working on analysis & fixing

  • Testing blocked until we have something from Dev.

  • TO DO: Test changes coming from Dev.

  • BLOCKER: update from Dev.

  • RISK: Not having reliable GPS recording on iOS is a major risk to MVP1.

MVP3: Encryption on Device

Adam Leon Smith tested this when it was implemented back in May, following OWASP guidelines.

Testing documented here - not yet complete.

Secure Data Store - Android - INCOMPLETE

  • TO DO: Complete this testing

  • TO DO: Any regression testing needed, given there has been a lot of change since the testing was done.
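
One cheap spot check to run alongside the fuller OWASP-based work: pull the app’s data files from a debug device (e.g. via adb) and confirm that coordinates logged during the session do not appear in plaintext. A sketch - the file path and coordinate strings are placeholders, and absence of plaintext is necessary but not sufficient evidence of encryption:

```typescript
// plaintext-scan.ts - crude encryption-at-rest spot check. Scans a
// data file pulled from a debug device (e.g. via adb) for location
// values known to have been logged during the test session.
// ASSUMPTIONS: the file path and coordinate strings are placeholders;
// a clean result is necessary, not sufficient.
import { readFileSync } from "fs";

const file = process.argv[2]; // e.g. a database file pulled off the device
const knownValues = ["51.5074", "-0.1278"]; // coordinates logged during the session

const text = readFileSync(file).toString("latin1"); // byte-preserving view

let failed = false;
for (const value of knownValues) {
  if (text.includes(value)) {
    console.error(`FAIL: found plaintext value ${value} in ${file}`);
    failed = true;
  }
}
if (!failed) console.log("No known plaintext values found.");
process.exitCode = failed ? 1 : 0;
```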

MVP4: Clarify limitations via copy

See SAF-431, though in fact the work was all implemented under SAF-482.

Specifically, this covers the new onboarding screens.

Testing needed:

  • TO DO: Careful review of onboarding screens vs. Figma designs

  • TO DO: Careful review of onboarding screens in themselves - do they meet the intended purpose? Any issues?

  • TO DO: Stress test of onboarding screens - error flows, bail out & restart etc.

  • TO DO: All of the above on both iOS and Android

MVP5: Feature Flags

Testing done: basic exercise of all available feature flags. This was done informally, with no detailed documentation. Since feature flags are widely used by the test team, and are not user-deliverable functionality, this is likely to be sufficient.

TO DO: Regression test to ensure feature flags in final MVP1 build all work correctly.

MVP6: Clarify Exposure Notification

SAF-283 / SAF-284. Detailed Figma doc in SAF-284.

Testing needed:

  • IN PROGRESS: Careful review of exposure screens vs. Figma designs - see SAF-705.

  • TO DO: Test usability of design, for common scenarios

  • TO DO: Stress Test Exposure Notification with use cases that are likely to be problematic for the UI.

  • See also: Accessibility, MVP 18 / MVP 20

MVP8: Improved user consents

Testing needed:

  • DONE: Careful review of consent screens vs. Figma designs.

  • IN PROGRESS: Test clarity of text vs. our actual consent model - See SAF-689

  • TO DO: Careful review vs. Privacy principles.

  • See also: Privacy

MVP9: Internationalization

Non-English languages to be supported:

  • Haitian Creole

  • Latin American Spanish

  • Japanese

  • Tagalog

  • Chamorro

Testing needed (for each language):

  • TO DO: Basic language testing by a non-speaker - check for text formatting issues, text not fitting the available space, etc. (see the key-coverage sketch after this list).

  • TO DO: Language testing by native speaker

  • BLOCKER: Translations blocked on finalization of English language text.
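
While translations are blocked on the English text, the mechanical side of per-language checking can be prepared. A sketch that diffs each locale’s strings against English, flagging missing keys, apparently untranslated strings, and translations long enough to risk breaking layouts - the file paths and locale codes are illustrative assumptions, not the project’s actual ones:

```typescript
// locale-diff.ts - compare each locale's strings against English,
// flagging missing keys, apparently untranslated strings, and
// translations long enough to risk breaking layouts.
// ASSUMPTIONS: flat or nested JSON string files per locale; the
// paths and locale codes below are illustrative, not the project's.
import { readFileSync } from "fs";

function flatten(obj: object, prefix = ""): Map<string, string> {
  const out = new Map<string, string>();
  for (const [k, v] of Object.entries(obj)) {
    const key = prefix ? `${prefix}.${k}` : k;
    if (typeof v === "string") out.set(key, v);
    else if (v && typeof v === "object") for (const e of flatten(v, key)) out.set(e[0], e[1]);
  }
  return out;
}

const load = (p: string) => flatten(JSON.parse(readFileSync(p, "utf8")));
const en = load("locales/en.json");

for (const locale of ["ht", "es_419", "ja", "tl", "ch"]) { // assumed locale codes
  const other = load(`locales/${locale}.json`);
  for (const [key, source] of en) {
    const t = other.get(key);
    if (t === undefined) console.log(`${locale}: MISSING       ${key}`);
    else if (t === source) console.log(`${locale}: UNTRANSLATED? ${key}`);
    else if (t.length > source.length * 1.5)
      console.log(`${locale}: LONG (${t.length} vs ${source.length}) ${key}`);
  }
}
```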

MVP10: Designate Privacy Policy in Settings

Testing needed:

  • IN PROGRESS: Basic end-to-end test: specify Privacy Policy - see PLACES-350

  • TO DO: Stress test problematic scenarios in Safe Places

  • TO DO: Stress test problematic scenarios in the Mobile App


MVP11: Safe Places UX

This is an expansive line-item covering a wide range of Safe Places functionality.

Will ask Adam L-S to articulate the full set of testing needed here.

MVP12: Manually Enter Points as a Contact Tracer

Testing needed:

  • IN PROGRESS: Basic end-to-end testing of manually entered points leading to a match on the mobile device if & only if they match a time/place where that device was.

  • TO DO: Exploration of scenarios that are problematic for the UI

  • TO DO: Exploration of scenarios that are problematic for the system - e.g. large volumes of data points, hard-to-handle data points, etc.

  • TO DO: Security considerations. What new risks does this introduce (especially for “bad actor” Contact Tracers)? Do we have sufficient mitigation?

  • Adam - anything to add?

MVP13: Load, edit, save aggregated data

Testing needed:

  • IN PROGRESS: Basic end-to-end testing of loading, editing and saving aggregated data, confirming that saved changes are correctly reflected.

  • TO DO: Exploration of scenarios that are problematic for the UI

  • TO DO: Exploration of scenarios that are problematic for the system - e.g. large volumes of data points, hard-to-handle data points, etc.

  • TO DO: Security considerations: what are the risks for a user with these access privileges?

  • Adam - anything to add?

MVP14: Configure Retention Policy

Testing needed

  • DONE: Test user interface

  • TO DO: Test actual deletion of data when retention period expires

  • TO DO: Stress test - e.g. massive volumes of data, failure cases etc.

  • BLOCKER: Need a way to populate N days of synthetic historical data, for testing of data discard without huge elapsed-time test cycles (a generation sketch follows this list).

  • Adam - anything to add?
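
For the synthetic-history blocker, a sketch of a generator producing N days of 5-minute points as a random walk around a fixed base (arbitrarily near Port-au-Prince). The output shape, and the route for injecting it into the app’s location store, are assumptions to settle with Dev:

```typescript
// gen-history.ts - generate N days of synthetic 5-minute location
// points, as a random walk around a fixed base (arbitrarily near
// Port-au-Prince). ASSUMPTIONS: the output shape and the injection
// route into the app's location store need to be agreed with Dev.
import { writeFileSync } from "fs";

const days = Number(process.argv[2] ?? 14);
const INTERVAL_MS = 5 * 60 * 1000;
const end = Date.now();
const start = end - days * 24 * 60 * 60 * 1000;

const points = [];
let lat = 18.5944; // arbitrary base latitude
let lon = -72.3074; // arbitrary base longitude
for (let t = start; t <= end; t += INTERVAL_MS) {
  lat += (Math.random() - 0.5) * 0.001; // small random walk step
  lon += (Math.random() - 0.5) * 0.001;
  points.push({ time: t, latitude: lat, longitude: lon });
}

writeFileSync("synthetic-history.json", JSON.stringify(points));
console.log(`Wrote ${points.length} points spanning ${days} days.`);
```

The same tool would unblock the 2-week data needs under MVP16 and MVP17.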


MVP15: Publisher Tool

  • DONE: Basic end-to-end flow: publish data, confirm that it can be retrieved and trigger exposure notifications

  • TO DO: Scale test. Test publishing large numbers of cases, and large cases.

  • TO DO: Robustness test. What failures can occur during the publishing process? Do we handle them gracefully?

  • TO DO: Security considerations: what are the risks for a user with these access privileges?

  • BLOCKER: Need a way to populate large volumes of case data, for large-scale publishing test cases (a generation sketch follows this list).

  • Adam - anything to add?
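
For the case-data blocker, a sketch that builds a large batch of synthetic cases - many cases, each with roughly two weeks of 5-minute points. The case shape is a placeholder to align with whatever Safe Places actually ingests:

```typescript
// gen-cases.ts - build a large batch of synthetic cases for
// publisher scale tests: many cases, each with roughly two weeks
// of 5-minute points. ASSUMPTION: the case shape is a placeholder
// to be aligned with what Safe Places actually ingests.
import { writeFileSync } from "fs";

const caseCount = Number(process.argv[2] ?? 200);
const pointsPerCase = Number(process.argv[3] ?? 4032); // 14 days at 5-minute intervals

const cases = [];
for (let c = 0; c < caseCount; c++) {
  const points = [];
  for (let p = 0; p < pointsPerCase; p++) {
    points.push({
      time: Date.now() - p * 5 * 60 * 1000,
      latitude: 18.59 + Math.random() * 0.05,
      longitude: -72.31 + Math.random() * 0.05,
    });
  }
  cases.push({ caseId: `synthetic-${c}`, points });
}

const json = JSON.stringify(cases);
writeFileSync("synthetic-cases.json", json);
console.log(`Wrote ${caseCount} cases (${(json.length / 1e6).toFixed(1)} MB).`);
```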

MVP16: Secure transmission of location data

  • DONE: Basic end-to-end test where a user shares location data with a Contact Tracer.

  • TO DO: Testing with a full 2-week data set from an end-user.

  • TO DO: Testing on both Android & iOS

  • TO DO: Robustness - what happens in common error scenarios? Are they handled well from the user & CT perspectives (mis-typed code, network not available, mobile data switched off, etc.)?

  • TO DO: Test server discard of data without a suitable authentication code, including doing so at scale (i.e. a high volume of requests).

  • TO DO: Test case where an attacker does manage to guess an authentication code - both CT & infected user experience.

  • BLOCKER: Ability to put 2 weeks of location data on a phone, to allow a push of 2 weeks of location data.

  • BLOCKER: Tooling to generate a large volume of requests with invalid authentication codes (a flood sketch follows this list).

  • Adam - anything to add?
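
For the invalid-code tooling blocker, a sketch that fires many upload attempts with random 6-digit codes and tallies the response classes; we expect essentially 100% rejections, and the status breakdown also gives a first look at rate-limiting behaviour. The endpoint URL and payload shape are placeholders - point this only at a staging server:

```typescript
// invalid-code-flood.ts - fire many upload attempts with random
// 6-digit codes and tally response classes. We expect essentially
// 100% rejections; the status breakdown also shows whether any
// rate limiting kicks in. ASSUMPTIONS: endpoint URL and payload
// shape are placeholders. Staging environments only.
const ENDPOINT = "https://staging.example.org/upload"; // placeholder URL
const TOTAL = 1000;

async function attempt(): Promise<number> {
  const code = String(Math.floor(Math.random() * 1_000_000)).padStart(6, "0");
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ accessCode: code, concernPoints: [] }), // assumed payload
  });
  return res.status;
}

(async () => {
  const counts = new Map<number, number>();
  for (let i = 0; i < TOTAL; i++) {
    const status = await attempt().catch(() => 0); // 0 = network/connection error
    counts.set(status, (counts.get(status) ?? 0) + 1);
  }
  // Any 2xx here means a code was guessed (~TOTAL-in-1e6 odds) or validation failed.
  console.log(Object.fromEntries(counts));
})();
```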


MVP17: Scalability

  • TO DO: Test all different dimensions of scalability of infection data for a single HD: number of pages, size of each page.

  • TO DO: Test scale with multiple HDs (all at max supported scale)

  • TO DO: Test intersections with a full 2 weeks of data on the phone.

  • TO DO: Test scalability on a range of different phones, including very low-end phones

  • TO DO: Test scalability on both Android & iOS

  • BLOCKER: Ability to create 2 weeks of historical data in a phone’s location DB (since re-installing the app loses all location data) - the synthetic-history sketch under MVP14 may help here.

MVP18: Encryption of Points of Concern

  • TO DO: End-to-end test: ensure that downloaded data generates Exposure Notifications if & only if the data matches locations where a user has been.

  • TO DO: Validate that generated location data matches expected values based on the hashing algorithm (see the sketch after this list).

  • TO DO: Security. Explore ways in which the data shared in this solution might be compromised. Have we properly understood & articulated the weaknesses of our approach?
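
For the hash validation item, a sketch of the shape such a check could take. The scheme shown (scrypt over a geohash concatenated with a time window) is an assumption for illustration; the real algorithm, encoding, salt and parameters must be copied from the implementation before the comparison means anything:

```typescript
// hash-check.ts - recompute a point-of-concern hash and compare it
// with a value captured from the app's generated data.
// ASSUMPTION: the scheme sketched here (scrypt over a geohash
// concatenated with a time window) is illustrative only; the real
// algorithm, encoding, salt and cost parameters must be copied from
// the implementation before this comparison means anything.
import { scryptSync } from "crypto";

function hashPoint(geohash: string, timeWindow: number): string {
  const message = `${geohash}${timeWindow}`; // assumed encoding
  const salt = "salt"; // placeholder; must match the app's salt
  return scryptSync(message, salt, 8, { N: 4096, r: 8, p: 1 }).toString("hex");
}

const expected = process.argv[2]; // value captured from app output
const actual = hashPoint("u4pruydqqvj", 1591632000);
console.log(actual === expected ? "MATCH" : `MISMATCH: computed ${actual}`);
```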

MVP20: Tunable GPS intersection logic

Agreed restriction: there is no Safe Places interface to change these settings; an HD that wants to do this will need to modify code.

  • TO DO: Test a variety of patterns that HDs may wish to use, from Exposure Notifications on a single point up to 1 or 2 hours of overlap, and validate that Exposure Notifications occur as expected (see the overlap model after this list)

  • TO DO: Test with extreme values, and confirm nothing breaks in the Mobile App.
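
To build expected results for these patterns, a toy model of threshold-based matching helps. The sketch below only captures the tunable “minimum overlap duration” idea, assuming 5-minute time bins; the app’s real matching (distance checks, hashing, bin widths) is what is actually under test:

```typescript
// overlap-check.ts - toy model of threshold-based exposure logic,
// used to build expected results for the test patterns above.
// ASSUMPTION: 5-minute time bins; the app's real matching (distance
// checks, hashing, bin widths) is what is actually under test.
const BIN_MS = 5 * 60 * 1000;

// Count time bins in which the device coincides with a concern point.
function overlappingBins(deviceTimes: number[], concernTimes: number[]): number {
  const concernBins = new Set(concernTimes.map((t) => Math.floor(t / BIN_MS)));
  return deviceTimes.filter((t) => concernBins.has(Math.floor(t / BIN_MS))).length;
}

function shouldNotify(deviceTimes: number[], concernTimes: number[], minOverlapMs: number): boolean {
  return overlappingBins(deviceTimes, concernTimes) * BIN_MS >= minOverlapMs;
}

// A single matching point triggers at the single-point threshold,
// but not when the HD requires an hour of overlap.
const now = Date.now();
console.log(shouldNotify([now], [now], BIN_MS));         // true
console.log(shouldNotify([now], [now], 60 * 60 * 1000)); // false
```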

MVP22: Safe Paths → Path Check

  • TO DO: Carefully review all text in the app, and linked text such as the Privacy Policy, for replacement of Safe Paths with PathCheck.

  • TO DO: Review related collateral such as website, marketing materials etc. for replacement of Safe Paths with PathCheck.

  • BLOCKER: Do we have a decision internally on what we are doing with things like the Safe Paths repo name?

MVP23: Rework HA YAML file

  • TO DO: Use the Feature Flag ability to override the YAML file to show that we can support a custom YAML file with some different HAs in it.

  • TO DO: Test Scale of YAML file. How does the app cope with 100s of HAs?

  • TO DO: Robustness testing of YAML, particularly for the case where an error is introduced into the file by accident.

  • TO DO: Testing actual process of adding an HD with a Git PR on the staging YAML file.

  • TO DO: Testing with production build of the App pointed to the production YAML file.

  • BLOCKER: Tooling to build out large YAML files (a generator sketch follows this list)

  • BLOCKER: Sorting out the production YAML file to contain sensible content (i.e. not just Mairie de PAP and HA for Testing).
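
For the YAML tooling blocker, a sketch of a generator for large HA files. The field names are guesses - copy the exact schema from the real staging file before using this:

```typescript
// gen-ha-yaml.ts - build a large healthcare-authority YAML file for
// MVP23 scale testing. ASSUMPTION: the field names below are guesses;
// copy the exact schema from the real staging YAML file before use.
import { writeFileSync } from "fs";

const count = Number(process.argv[2] ?? 500);
const lines: string[] = ["authorities:"];
for (let i = 0; i < count; i++) {
  lines.push(`  - name: "Test HA ${i}"`);
  lines.push(`    bounds:`);
  lines.push(`      ne: { latitude: ${(10 + i * 0.01).toFixed(4)}, longitude: 10.0 }`);
  lines.push(`      sw: { latitude: ${(9 + i * 0.01).toFixed(4)}, longitude: 9.0 }`);
  lines.push(`    data_url: "https://example.org/ha-${i}/points.json"`);
}

writeFileSync("ha-large.yaml", lines.join("\n") + "\n");
console.log(`Wrote ${count} authorities to ha-large.yaml`);
```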

MVP24: Tab redesign

  • TO DO: Careful review of tab screens vs. Figma designs

  • TO DO: Careful review of the tab design in itself - is it usable? Any issues?

  • TO DO: Stress test - try to break the UI, create problems.

  • TO DO: Test on both Android and iOS

Accessibility

Mobile App:

  • IN PROGRESS: Scan all app screens with Google Accessibility Scanner

  • TO DO: Scan all app screens with Apple Accessibility Inspector

  • TO DO: Accessibility Expert to assess the Mobile app on iOS and Android against WCAG 2.1 levels A and AA

Safe Places

  • Adam - can you fill this in?

Privacy

  • TO DO: Assess the product against all Privacy Unit Tests, with explicit testing of the product where appropriate.

  • What else? For discussion with Adam.

Security

  • TO DO: Static Analysis of latest app builds on iOS and Android using ImmuniWeb

  • TO DO: Dynamic Testing of Mobile Apps by ImmuniWeb or another tool.

  • TO DO: Dynamic Testing of Safe Places by ImmuniWeb or another tool.

  • TO DO: Documented Threat Model for the entire GPS ecosystem, with all risks & threats identified, logged in Jira, and assessed.

  • TO DO: Fuzz testing of any interfaces exposed to the Internet - e.g. data upload UI. Don’t assume that the 6-digit access code will protect us, as it could be guessed or brute-forced.

  • What else? To discuss with Adam…

XXXity - Other “non-functional requirements”

Anything else we should include here?

  • Ethical Assessment?

  • Scalability / Reliability / Robustness should be covered in specific MVP items above and/or regression test.

  • Testability? I had some initial thoughts on this in a section here: Safe Paths Quality Map. We are not very testable right now, but we don’t have a clear position on what we need here, or what the key next steps should be. Should we try to create such a thing?

  • Demonstrability? Something to support Implementation with Demos?

  • Beta Simulation readiness?

  • M&E readiness / TiP readiness?

  • ?????

Regression Test

Mobile App

  • IN PROGRESS: Run current Regression Test Plan (in PractiTest) https://stage.practitest.com/p/5023/sets/26387/edit

  • TO DO: Update Regression Test Plan to include all functions added in MVP1

  • TO DO: Run Regression Test Plan over final release candidate build.

  • TO DO: All above for both iOS & Android.

Safe Places

  • Adam to fill in
