Test Plan: Safe Places

How To Contribute

You can contribute to this testing by doing the following. Depending on your skills, pick the highest item in this list that you can work on:

 

  1. Reviewing and improving this test plan - particularly by integrating aspects of https://docs.google.com/document/d/1hJNOaElk9aP9SNgnHA_eZOynAMb3Kvo_pzCwofbRHEQ/edit and ensuring this plan covers that document

  2. Documenting the test objectives as requirements in PractiTest [currently, this is done for the MVP using the tags: mvp, safe_places]

  3. Designing tests in PractiTest (mapped to requirements)

  4. Executing tests and raising issues in GitHub

  5. Determining the right automation tools

  6. Implementing automation tests

Functionality Under Test

This feature allows the import of location data exported from Private Kit into a user interface, the editing/filtering of the data points, and the export of a reduced dataset (i.e. the redaction of some location history).
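
For reference, a minimal sketch of the assumed shape of the data moving through the tool - the field names are an assumption based on the PrivKit export and should be confirmed against a real export:

  # Assumed PrivKit-style location history: a JSON array of points with an
  # epoch-millisecond timestamp and a lat/long pair (field names unconfirmed).
  location_history = [
      {"time": 1585228800000, "latitude": 51.5074, "longitude": -0.1278},
      {"time": 1585229100000, "latitude": 51.5080, "longitude": -0.1290},
  ]

  # Redaction removes points, so the exported dataset is a subset of the
  # imported one.
  redacted_export = [p for p in location_history if p["time"] != 1585229100000]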

The following user stories are relevant and copied from the MVP scope.


Diagnosed Patient User Story

The Contact Tracer interviews me to discuss the locations on my travel path.



Assumptions and Preconditions: The locations on this map are not perfectly accurate, but they are close enough to jog my memory

Detailed Steps in the Current UI: See below

Current Technical Details: Browser compatibility
Prioritized direction for future UI and backlogged features:

UI/UX: Yellow Flags suggest that I was traveling. Because I was alone in my moving car, there would be no contact details transmission with another person. Orange Flags indicate a “transient” location -- meaning a single stop. Pink Flags indicate a “recurring” location -- meaning one which was visited multiple times.





Process: The Contact Tracer eliminates the yellow flags (self-contained travel). The Contact Tracer deletes my personally identifiable locations -- my home and workplace(s).

The Contact Tracer saves my location information as a redacted file.




At regular intervals, the Healthcare Authority loads my redacted data files, along with other patients’ redacted data files, into the Safe Paths publishing tool.



Assumptions and Preconditions

Detailed Steps in the Current UI: See below

Current Technical Details: Browser compatibility
Prioritized direction for future UI and backlogged features:

My Healthcare Authority publishes the group location data with no personally identifiable information attached.

Assumptions and Preconditions

Detailed Steps in the Current UI: See below

Current Technical Details: Browser compatibility
Prioritized direction for future UI and backlogged features:

Under the hood: “Publish” produces a .json file, which the Healthcare Authority will then host on its web server.





Contact Tracer User Story

  1. Contact Tracer loads the Patient’s SafePaths log file into the SafePaths Redaction Tool.

  2. CT uses a map with historical location data to jog the Patient’s memory and facilitate the interview.

  3. CT and Patient identify likely travel data points.  CT deletes those points if travel is done by car.

  4. CT adjusts duration settings based on HA’s policy for exposure.

  5. CT and Patient identify private homes and redact those locations.

  6. CT and Patient identify moderated locations (office buildings, closed conferences) and redact them.

  7. CT confirms all remaining locations pose a public health risk.

  8. CT instructs Patient to inform friends/family/individuals of their exposure risk.

  9. CT documents at-risk locations for follow-up by HA.

  10. CT saves the patient’s info as a redacted file.

Test Objectives

Based on the functionality and the quality map, the following objectives have been identified.

Location Viewer/Scrubber



  • Verify that data that should be redacted is in fact redacted, and that the unredacted data is not persisted (see the sketch after this list)

  • Verify that the application does not hang or crash when processing a large amount of data, and performs reasonably, including:

    • 14 days of data points. If Google location history is used, this is expected to be around 650 data points every 24 hours, which means ~9,100 data points covering a 14-day period.

    • TODO: We need to confirm that Google location data remains as granular as it currently is during the conversion to PrivKit format.

    • TODO: How often is iOS location stored?

    • The usability of the user interface when processing a significant cluster of data points in the same period

    • Performance of the redaction of 9,000 data points in the user interface

  • Verify that none of the data is sent to any 3rd party endpoints

  • Verify the functionality is the same in popular modern browsers (Chrome, Safari, Edge). TODO: We should determine what the requirements are. Healthcare providers may use some surprisingly old stuff - e.g. the NHS was stuck on Windows XP, I recall? Also, what assumptions should we make about the specs of the device running the browser? Bandwidth to the server? (Imagine a healthcare office in Haiti. What spec of PC do they have?)

  • Verify the accuracy of data presentation and filters, including:  (a) color labelling; (b) accuracy of filters

  • Verify that data points can be selected individually and deleted

  • Verify that multiple data points can be selected based on a geometric area

  • Verify that data points are correctly classified as travel (yellow) or transient stop (orange) or recurring location (red)

  • Validate that common errors a CT may make result in sensible error messages

  • Validate that the filters are appropriate for HA guidance and are usable for their contact tracing processes.

  • Validate the automatic zoom functionality
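
As a sketch of how the first objective above could be automated, assuming the exported file is a flat JSON list of PrivKit-style points and given a hypothetical list of the points the CT marked for deletion (adjust the parsing if the real export wraps the points in metadata):

  import json

  def assert_points_redacted(exported_path, points_marked_for_deletion):
      """Fail if any point the CT marked for deletion survives in the export."""
      with open(exported_path) as f:
          exported = {(p["time"], p["latitude"], p["longitude"])
                      for p in json.load(f)}
      for point in points_marked_for_deletion:
          key = (point["time"], point["latitude"], point["longitude"])
          assert key not in exported, f"Redacted point still present: {key}"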



Publisher

  • Verify the ability to combine files in the publisher (see the sketch after this list).

  • Verify that the publisher does not crash / hang / become unresponsive [Note: I don’t believe there are volumetrics for this; I would assume that an HA could be combining hundreds of files with hundreds of data points]

  • Verify that none of the data is sent to any 3rd party endpoints

  • Verify the functionality is the same in popular modern browsers (Chrome, Safari, Edge)

  • Verify the accuracy of data presentation and time filters

  • Verify that the application will not crash if invalid or very short/long values are entered in free text fields

  • Validate that common errors a CT may make result in sensible error messages

  • Validate the automatic zoom functionality
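
A sketch of how the combine-files objective could be checked, again assuming the redacted inputs and the published .json are flat lists of points (the real published format should be confirmed first):

  import json

  def published_matches_inputs(input_paths, published_path):
      """Check that the published file is exactly the union of the redacted
      input files: nothing dropped and nothing added."""
      expected = set()
      for path in input_paths:
          with open(path) as f:
              expected.update((p["time"], p["latitude"], p["longitude"])
                              for p in json.load(f))
      with open(published_path) as f:
          published = {(p["time"], p["latitude"], p["longitude"])
                       for p in json.load(f)}
      return published == expected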



Test Environment

The application can be run locally in Docker. This is described in the README. It is necessary to input a Google Maps API key from a Google Cloud plan with billing enabled - this one can be used: AIzaSyBvm-T7hqlAtAcQqPy0nOS1CSmXJQeZSPI

An environment has been set up to host the MVP and future React versions of the web app. This is linked to CI, so it should be the latest development version.

dev_mvp goes to http://safeplaces.extremesolution.com
dev_react goes to http://react.safeplaces.extremesolution.com

Test Data

The data flow is to load PrivKit data into the redaction tool, export it, and then load it into the publishing tool.

Using Your Own Personal Data

It is necessary to install the Private Kit app from the relevant app store to create data, either natively or by importing your location history from Google. As the mobile application is not fully stable, you can also get the JSON export directly from Google, and a Python script exists here to convert from Google to PrivKit format. A second script (convert_and_split.py) will split years of location history data into many datasets that cover March 2020.
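
For orientation, the conversion performed by that script looks roughly like the sketch below; the existing script should be used in practice, and the Takeout field names (timestampMs/latitudeE7/longitudeE7) and PrivKit field names here are assumptions based on the 2020-era formats:

  import json

  def google_takeout_to_privkit(takeout_path, output_path):
      """Convert a Google Takeout Location History export to PrivKit-style points."""
      with open(takeout_path) as f:
          locations = json.load(f)["locations"]
      converted = [
          {
              "time": int(loc["timestampMs"]),        # epoch milliseconds
              "latitude": loc["latitudeE7"] / 1e7,    # E7 fixed point -> degrees
              "longitude": loc["longitudeE7"] / 1e7,
          }
          for loc in locations
      ]
      with open(output_path, "w") as f:
          json.dump(converted, f)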

Using your own data is useful because you have the context of your own travel to act as an implied test oracle. This is particularly useful when looking at the classification of data points as travel, transient stop, or persistent location.

Synthetic Data

An approach using machine learning to generate synthetic data is in place. Four test files are attached to this page and many more are available from Adam Leon Smith.
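
If more volume is needed than the attached files provide, a simple random walk can also generate datasets of the size discussed in the performance objectives (~650 points per day, ~9,100 over 14 days), although it will not reproduce realistic travel/stop patterns. A sketch, using the assumed PrivKit field names:

  import json
  import random
  import time

  def generate_synthetic_trace(days=14, points_per_day=650,
                               start_lat=51.5074, start_lon=-0.1278):
      """Generate a random-walk trace in the assumed PrivKit format."""
      now_ms = int(time.time() * 1000)
      interval_ms = 24 * 60 * 60 * 1000 // points_per_day
      lat, lon = start_lat, start_lon
      points = []
      for i in range(days * points_per_day):
          lat += random.uniform(-0.0005, 0.0005)
          lon += random.uniform(-0.0005, 0.0005)
          points.append({"time": now_ms - i * interval_ms,
                         "latitude": round(lat, 6),
                         "longitude": round(lon, 6)})
      return points

  with open("synthetic_trace.json", "w") as f:
      json.dump(generate_synthetic_trace(), f)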

Test Automation

There are two elements to the automation.

  • Verifying the functionality of the redaction process based on inputs and outputs, and manipulation of user interface objects

  • Identifying visual regression

It is recommended that the functionality of the redaction process is the priority, and that open source tools are used where possible.
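
For the visual regression element, one lightweight open source option is a pixel diff of screenshots captured before and after a change; Pillow is used below purely as an illustration, and a dedicated visual testing tool may be a better fit:

  from PIL import Image, ImageChops

  def screenshots_differ(baseline_path, current_path, pixel_threshold=0):
      """Return True if the two screenshots differ in more pixels than the threshold."""
      baseline = Image.open(baseline_path).convert("RGB")
      current = Image.open(current_path).convert("RGB")
      if baseline.size != current.size:
          return True
      diff = ImageChops.difference(baseline, current)
      changed = sum(1 for pixel in diff.getdata() if pixel != (0, 0, 0))
      return changed > pixel_threshold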

Non-Functional Testing

The following non-functional testing areas should be in scope for the testing stream. However, they may be managed and owned by other teams.

Non-functional testing identified for the MVP:

  • Performance testing

  • Security testing

  • Resilience

  • Disaster recovery (docker node DR)

  • Multi-geography language, time zones, global scope (i.e. ISO character sets)

  • Data sovereignty and cross border data transfer

  • Auditing, visibility and traceability

  • Accessibility

Performance testing


The performance of the platform is its ability to upload JSON payloads and/or process data within acceptable time frames.

The performance of the platform can be measured in the terms of:

  • Response time for a transaction (e.g. response in seconds/hours and a percentile target to achieve that response) - see the sketch after this list

  • Throughput (e.g. transactions per second or usage profile of a query, e.g. number of users, number of executions)

  • Capacity (e.g. the number of transactions the architecture needs to accommodate)

  • Scalability (the ability of the system to handle increased future volumes)
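
As a sketch of the response time measurement, assuming there is an HTTP endpoint that accepts the JSON upload (the URL and payload passed in are placeholders to be replaced once the deployment model is defined):

  import statistics
  import time

  import requests

  def measure_upload_response_times(url, payload, runs=20):
      """Post the same JSON payload repeatedly and report median and 95th
      percentile response times in seconds."""
      timings = []
      for _ in range(runs):
          start = time.perf_counter()
          requests.post(url, json=payload, timeout=60)
          timings.append(time.perf_counter() - start)
      cut_points = statistics.quantiles(timings, n=100)   # 99 percentile cut points
      return {"median": statistics.median(timings), "p95": cut_points[94]}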

Security Testing

There's currently no auth, as data is uploaded/exported within the browser session, so the confidentiality impact is limited. I'm not sure there is a requirement to be resilient to DoS, so the availability impact is limited too. It would be good to ensure XSS mitigations are in place, and to try to determine whether any local data is stored in the Docker container. The current deployment model needs to be defined before security requirements can be implied.
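
One check that can already be automated is confirming that the usual XSS mitigation headers are present on responses from the dev environments; the header list below is a common baseline, not a confirmed requirement:

  import requests

  EXPECTED_HEADERS = [
      "Content-Security-Policy",
      "X-Content-Type-Options",
      "X-Frame-Options",
  ]

  def missing_security_headers(url):
      """Return the expected security headers that are absent from the response."""
      response = requests.get(url, timeout=30)
      return [h for h in EXPECTED_HEADERS if h not in response.headers]

  print(missing_security_headers("http://safeplaces.extremesolution.com"))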

Localisation Testing

TODO: Unix timestamps and lat/long, to my knowledge, do not vary by locale. I think time zones are important, though: if I start a journey in GMT and end it in CET, is that reflected properly? It would also be good to understand the approach to localisation - I have seen some translation work going on for the app at least.
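
As a quick illustration of the time zone point: a stored Unix timestamp identifies a single instant, so any GMT/CET discrepancy would have to come from the display layer rather than the data itself.

  from datetime import datetime, timezone, timedelta

  # One stored instant, as epoch milliseconds like the location data
  timestamp_ms = 1585228800000
  moment = datetime.fromtimestamp(timestamp_ms / 1000, tz=timezone.utc)

  print(moment)                                            # GMT/UTC rendering
  print(moment.astimezone(timezone(timedelta(hours=1))))   # CET rendering of the same instant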