...

My expectation is that each item flagged below will become the responsibility of a single tester (with support from others as they need it).
I’m still considering the best way to track this activity…
To make a fast start on tracking this, I have created a Trello board here - you should be able to access via this link:
https://trello.com/invite/b/Qe0eioYq/2545baab562e045ff0035b1f3c9bdea5/path-check-mvp1
(but reach out to Diarmid on Slack if any issues with access).

Trello board: https://trello.com/b/Qe0eioYq/path-check-mvp1

Other solutions might be better, but getting started with Trello was very low cost, so that is what I did. Options considered:

  • Jira sub-tasks

  • PractiTest

  • A Trello Board

  • Confluence

Where testers want to build out explicit lists of test cases to follow, I’d advocate PractiTest, but in many cases I don’t think that will be necessary. For more exploratory approaches, a test report in Confluence might be adequate.

It will be useful to have a single place we can go to that shows ownership, status, and detailed references for any one of these items. I am leaning towards Jira sub-tasks for that (we already have Epics for each of the MVP items), but either PractiTest or Trello could work OK - or we could simply master the content in this page and edit it to show progress.

Decision to be taken once we have some review of this content, and a buy-in that we have approximately the right atoms of ownership here.
FWIW we have: 67 “TO DO”, 7 “IN PROGRESS”, 11 “BLOCKERS”, but I expect the list will grow with input from Adam. I think that understates our status, as we have already done plenty of informal testing around a lot of the “TO DO” items. Nevertheless, there is lots to get on with!

In all cases, when you move a card to “Done”, please add a comment indicating where your test notes are.

Given the level of bugs we are fixing / not fixing, and the pressure to enable Beta testing of MVP1 at customers, I expect that we will not complete all this testing before MVP1 ships - but having a record of everything we’d like to do will help us make judgments about when we have good-enough info to ship, and where the gaps are.

...

Secure Data Store - Android - INCOMPLETE

TO DO: Complete this testing
TO DO: Regression testing, given there has been a lot of change since the original testing was done, and submit to privacy-security repo for review

MVP4: Clarify limitations via copy

...

MVP10: Designate Privacy Policy in Settings

Testing needed:

  • COMPLETE: Basic end-to-end test: specify Privacy Policy - see PLACES-350

  • TO DO: Stress test Safe Places problematic scenarios

  • TO DO: Stress test Mobile App problematic scenarios

...

This is an expansive line-item covering a wide range of Safe Places function. Will ask Adam L-S to articulate the full set of testing needed here.

  • Exploratory testing of the UI

  • Boundary testing of the fields

MVP12: Manually Enter Points as a Contact Tracer

...

  • IN PROGRESS: Basic end-to-end testing of manually entered points leading to a match on the mobile device if & only if they match a time/place where that device was.

  • DONE: Scale test with 14 days of data

  • TO DO: Exploration of problematic scenarios for the UI

  • TO DO: Exploration of the UI for problematic scenarios for the system - e.g. large volume of data points, hard-to-handle data points etc.

  • TO DO: Security considerations: what are the risks for a user with these access privileges? Adam - anything to add?

  • TO DO: Functional test of 1 data item, multiple data items, and verifying the right data items actually are transmitted through the flow

MVP14: Configure Retention Policy

...

  • DONE: Test user interface

  • TO DO: Test actual deletion of data when retention period expires

  • TO DO: Stress test - e.g. massive volumes of data, failure cases etc.

  • BLOCKER: Need a way to populate N days of synthetic historical data, for testing of data discard without huge elapsed-time test cycles. Adam - anything to add?
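The synthetic-data blocker above is scriptable. Below is a minimal sketch, assuming a simple JSON location record with time/latitude/longitude fields - the real schema of the app's location DB may differ and must be checked with the dev team before use:

```python
import json
import random
import time

# Hypothetical sketch: generate N days of synthetic location points at
# 5-minute intervals, backdated from "now". Field names ("time",
# "latitude", "longitude") are assumptions, not the confirmed schema.
def synthetic_history(days, lat=42.36, lon=-71.06, interval_s=300):
    now_ms = int(time.time() * 1000)
    points = []
    for i in range(days * 24 * 3600 // interval_s):
        points.append({
            "time": now_ms - i * interval_s * 1000,          # ms epoch, newest first
            "latitude": lat + random.uniform(-0.01, 0.01),   # jitter around a base point
            "longitude": lon + random.uniform(-0.01, 0.01),
        })
    return points

history = synthetic_history(14)
print(len(history))  # 4032 points for 14 days at 5-minute resolution
with open("synthetic_history.json", "w") as f:
    json.dump(history, f)
```

The resulting JSON could then be loaded into the app's location store via whatever debug hook the dev team can expose; that loading mechanism is the part of the blocker a script alone cannot solve.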


MVP15: Publisher Tool

  • DONE: Basic end-to-end flow: publish data, confirm that it can be retrieved and trigger exposure notifications

  • FAILS: Test that data outside a jurisdiction’s GPS bounds is not published.

  • TO DO: Scale test. Test publishing large numbers of cases, and large cases.

  • TO DO: Robustness test. What failures can occur during the publishing process? Do we handle them gracefully?

  • TO DO: Security considerations: what are the risks for a user with these access privileges?

  • BLOCKER: Need a way to populate large volumes of case data, for large-scale publishing test cases. Adam - anything to add?

MVP16: Secure transmission of location data

  • DONE: Basic end-to-end test where a user shares location data with a contact tracer

  • FAIL: Security testing of endpoints

  • TO DO: Testing with a full 2-week data set from an end-user.

  • TO DO: Testing on both Android & iOS

  • TO DO: Robustness - what happens in common error scenarios? Are they handled well from both the user & CT perspective (mis-typed error code, network not available, mobile data switched off, etc.)?

  • TO DO: Test server discard of data without a suitable authentication code, including doing so at scale (i.e. a high volume of requests).

  • TO DO: Test case where an attacker does manage to guess an authentication code - both CT & infected user experience.

  • BLOCKER: Ability to put 2 weeks of location data on a phone, to allow a push of 2 weeks of location data.

  • BLOCKER: Tooling to generate a large volume of requests with invalid authentication codes.

  • Adam - anything to add?
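For the invalid-authentication-code blocker, a small load-generation script may be enough. The sketch below only builds a batch of POST requests carrying random 6-digit codes; the endpoint URL and JSON field names are placeholders, not the real API:

```python
import json
import random
import string
import urllib.request

# Hypothetical load-generation sketch: prepare a burst of upload requests
# with random (almost certainly invalid) 6-digit access codes, to verify
# the server discards them even under volume.
UPLOAD_URL = "https://example.org/upload"  # placeholder, not the real endpoint

def random_code():
    return "".join(random.choices(string.digits, k=6))

def build_request(code):
    # "accessCode" / "concernPoints" are assumed field names for illustration.
    body = json.dumps({"accessCode": code, "concernPoints": []}).encode()
    return urllib.request.Request(
        UPLOAD_URL, data=body,
        headers={"Content-Type": "application/json"}, method="POST")

# Build the batch; actually sending (urllib.request.urlopen, ideally from a
# thread pool) is left to the test harness so this sketch stays side-effect free.
batch = [build_request(random_code()) for _ in range(1000)]
print(len(batch))  # 1000
```

A thread pool or async client would be needed to make the burst genuinely concurrent; the point here is only that the tooling blocker is a small script, not a product.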


MVP17: Scalability

  • TO DO: Test all different dimensions of scalability of infection data for a single HD: number of pages, size of each page.

  • TO DO: Test scale with multiple HDs (all at max supported scale)

  • TO DO: Test intersections with a full 2 weeks of data on the phone.

  • TO DO: Test scalability on a range of different phones, including very low-end phones

  • TO DO: Test scalability on both Android & iOS

  • BLOCKER: Ability to create 2 weeks of historical data on a phone’s location DB (since re-installing the app loses all location data)

...

  • IN PROGRESS: Scan all app screens with Google Accessibility Scanner

  • TO DO: Scan all app screens with Apple Accessibility Inspector

  • TO DO: Accessibility Expert to assess Mobile app on iOS and Android against A and AA WCAG 2.1

Safe Places

  • FAIL: Run Odin Axe extension on all pages to check for issues

Privacy

  • TO DO: Assess the product against all Privacy Unit Tests, with explicit testing of the product where appropriate.

  • What else? For discussion with Adam.

...

  • TO DO: Static Analysis of latest app builds on iOS and Android using ImmuniWeb

  • FAIL: Static Analysis of latest webapp build using ImmuniWeb

  • TO DO: Dynamic Testing of Mobile Apps by Immuniweb or other.

  • DESCOPED: Dynamic Testing of Safe Places by Immuniweb or other.

  • TO DO: Documented Threat Model for the entire GPS ecosystem, with all risks & threats identified, logged in Jira, and assessed.

  • TO DO: Fuzz testing of any interfaces exposed to the Internet - e.g. data upload UI. Don’t assume that the 6 digit access code will protect us, as an attacker may guess a valid code.

  • What else? To discuss with Adam…
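For the fuzz-testing item above, one low-cost starting point is mutation-based fuzzing of a known-good upload payload. The field names in this sketch are illustrative assumptions, not the real upload schema:

```python
import copy
import json
import random

# Illustrative mutation-based fuzzing sketch: take a known-good payload and
# produce variants with dropped keys, oversized values, and wrong types, to
# throw at any Internet-exposed endpoint such as the data-upload UI.
# The seed payload shape is an assumption for illustration only.
SEED = {"accessCode": "123456",
        "concernPoints": [{"time": 0, "latitude": 0.0, "longitude": 0.0}]}

def mutations(payload, n=100, rng=random.Random(1)):
    out = []
    for _ in range(n):
        p = copy.deepcopy(payload)
        key = rng.choice(list(p))
        kind = rng.choice(["drop", "huge", "wrong_type"])
        if kind == "drop":
            del p[key]                 # missing required field
        elif kind == "huge":
            p[key] = "A" * 1_000_000   # oversized value
        else:
            p[key] = [None]            # wrong type
        out.append(json.dumps(p))
    return out

cases = mutations(SEED)
print(len(cases))  # 100 mutated payloads
```

A real campaign would also mutate nested fields and raw bytes (malformed JSON, bad encodings), and record server responses to flag anything other than a clean 4xx rejection.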

...

  • Ethical Assessment?

  • Scalability / Reliability / Robustness should be covered in specific MVP items above and/or regression test.

  • Testability? I had some initial thoughts on this in a section here: Safe Paths Quality Map. We are not very testable right now, but we don’t have a clear position on what we need here, and what the key next steps should be. Should we try to create such a thing?

  • Demonstrability? Something to support Implementation with Demos?

  • Beta Simulation readiness?

  • M&E readiness / TiP readiness?

  • ?????

Automation

Safe Places Back-end

  • Adam can you fill in?

Safe Places Front-end

  • Adam can you fill in?

Mobile App

  • We are building a suite of tests in 21labs for both iOS and Android

  • Tests are slow to run, but still add value if they can be implemented as an extension to the Dev pipeline (and as long as they are reliable).

  • For MVP1, we hope to have 5-10 tests covering the key mainline flows/screens. We don’t yet have a design for how we handle the interactions with Safe Places for location sharing and exposure notification.

  • Given the short-term pressure on MVP1, and the slow RoI on Automated Tests, we don’t expect to do a lot more on Mobile Automation until after MVP1 has shipped.

Non-Automated Regression Test

Mobile App

  • IN PROGRESS: Run current Regression Test Plan (in PractiTest) https://stage.practitest.com/p/5023/sets/26387/edit

  • TO DO: Update Regression Test Plan to include all functions added in MVP1

  • TO DO: Run Regression Test Plan over final release candidate build.

  • TO DO: All above for both iOS & Android.

...