Testing Status - who is working on what?

Author: Diarmid Mackenzie

Last updated by: Diarmid Mackenzie, Friday 22 May

Testing Activity

Overall

  • We are working on the MVP1 delivery of Safe Paths & Safe Places

  • This will ship to HAs on 1 June, but more testing will be needed before it is ready for launch; we hope to complete that over the following week (including getting fixes for any problems found).

  • The goal for 1 June should be that all features are code complete and the basic function of the solution works.

  • In parallel, we have kicked off Project Aurora to build a GAEN Bluetooth app (without GPS). That will be coming into test soon as well.

  • Loads of activity around Privacy, Security & Transparency - we hope to have a robust public position launched on all these points by 1 June. Diarmid & Adam are both heavily involved in this.


Safe Paths Mobile App:

Safe Places

  • Adam Leon Smith has been developing a Test Plan: Safe Places - this includes notes on how to contribute.

  • We have a team of about 5 testers engaged in a mix of manual testing and automated tests (using Postman and Selenium); see the sketch after this list for the kind of automated check involved.

  • Adam Leon Smith is test lead - talk to him if you have any questions.
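
As a concrete illustration of the automated testing mentioned above, here is a minimal Selenium sketch in TypeScript (using selenium-webdriver). The URL and selector below are illustrative placeholders, not the real Safe Places test instance or the contents of the Test Plan.

```typescript
// Minimal browser-level smoke check of the kind run against Safe Places.
// NOTE: the URL and selector are placeholders for illustration only.
import { Builder, By, until } from "selenium-webdriver";

async function safePlacesSmokeTest(): Promise<void> {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    // Placeholder URL for a Safe Places test instance.
    await driver.get("https://safeplaces.example.org/login");

    // Wait for the login form to render, then record the page title.
    await driver.wait(until.elementLocated(By.css("form")), 10000);
    const title = await driver.getTitle();
    console.log(`Loaded page with title: ${title}`);
  } finally {
    await driver.quit();
  }
}

safePlacesSmokeTest().catch((err) => {
  console.error("Smoke test failed:", err);
  process.exit(1);
});
```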

 

Security / Privacy

  • Lots of progress going on here

  • Much of it is at the level of principles / reqs / spec, which has fed into MVP1.

  • A lot of the MVP1 features are intended to improve Privacy and Security.

  • Still some items where we are behind (e.g. a proper Threat Model, penetration testing, etc.).

  • Anyone with experience in these areas who is able to contribute - please talk to Diarmid or Adam.


Testing in Production

  • We are looking at a Beta Trial in the Boston area from 1 June.

  • Work in progress here: Boston Beta Trial

  • Jonathon Wright is taking the lead on Testing in Production - figuring out what we need to do and how to make it happen: Testing in Production (TiP)

    • Working with Todd DeCapua on Testing in Production (TiP), telemetry data (Firebase), and data visualisation (Splunk for Good): Splunk - APM / Data Visualization

    • The way forward is expected to be primarily based on analytics from Safe Places rather than Safe Paths, for privacy reasons (see the sketch after this list).

    • MVP1 story MVP1#19 covers some work in this area.
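
To make the telemetry idea concrete, here is a hedged sketch of how a Safe Places web client could emit an anonymised analytics event to Firebase (namespaced v8 JS SDK) for later visualisation in Splunk. The event name, parameters, and config values are invented for illustration; the real MVP1 telemetry work is tracked under MVP1#19.

```typescript
// Illustrative only: emit a coarse-grained, non-identifying telemetry event
// from Safe Places (no location or case data), assuming the Firebase v8 SDK.
import firebase from "firebase/app";
import "firebase/analytics";

// Placeholder config; a real deployment would inject this from its environment.
firebase.initializeApp({
  apiKey: "PLACEHOLDER",
  projectId: "placeholder-project",
  appId: "PLACEHOLDER",
  measurementId: "G-PLACEHOLDER",
});

// Log an aggregate usage event for later dashboarding.
firebase.analytics().logEvent("redaction_tool_opened", {
  ha_region: "demo-region", // coarse region label, not user data
});
```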

 

Testing Community support

  • @Sherry Heinze continues to be a point of contact for any new testers who want support working out what to work on, or for anyone who wants to find something different to work on. Sherry will be looking for new volunteers and will probably reach out to you, but if she doesn’t, please do reach out to her. She’s on Mountain Time.

Key impediments for the team

Key issues right now.

  1. We need code to test! Loads of functionality is coming in for MVP1, but it is not ready for test yet.

  2. Not enough volunteers. Although we are keeping up now, when the wave of MVP1 work and the GAEN BT app becomes ready for test, we will struggle to keep up.

    1. It seems we have a large number of volunteers who are each only able to contribute a small amount of time.

    2. We’re not managing to create a workflow that works with this volunteer team, leading to poor distribution of work.

  3. Test Automation for Mobile clients - we are working on this with 21labs, to try to build a range of automated regression tests.

Updates on previously reported items - mostly solved.

  1. (4/19) Too many gaps in documentation of requirements, and detailed product behaviour

    1. MVP1 spreadsheet here: https://docs.google.com/spreadsheets/d/1VTSnUOrfKBKXLkButvZ4B8MJzfV8gw-4JYv4lkG6XHA/edit#gid=0

    2. More detailed specs are linked from there / Jira.

    3. Also some good info on details in the UI / UX space: https://pathcheck.atlassian.net/wiki/spaces/UIUX

  2. (4/19) No available domain expert for Contact Tracing

    1. Kyle has been collecting lots of info from HAs

    2. We have some info here: How are Health Authorities actually going to use Safe Paths?

    3. We are still hoping to get an epidemiological adviser onboard within the project (with Kyle, I think).

  3. (4/19) No signatory to make a contract with Applause

    1. All sorted - though we have not yet come up with a good use for Applause (the Haiti field trial is not yet a priority).

  4. (4/19) No clear path for escalation of very high-level issues identified in Test - e.g. Privacy / Security concerns

    1. All resolved, and loads of good progress on Privacy / Security / Transparency / Ethics, which should all bear fruit in the MVP1 timeframe.

  5. (4/19) PractiTest not yet embedded as a tool.

    1. Limited test cases documented in PractiTest

    2. No clear patterns for using PractiTest for more exploratory forms of testing or for overall Requirements Tracing. It is not clear whether we want to invest in this; much of it is happening in Confluence today, and that may be good enough.

  6. (4/19) No current process for raising reqs/spec issues uncovered in Test

    1. All covered in Jira: How to raise bugs found in Testing

  7. (4/19) It’s still proving difficult for even experienced testers to engage with this project when they first arrive, despite the broad set of resources now available in Confluence. We are trying to get a better understanding of what else we can do. Are further resources needed? If so, what? Do we need to run in a much more explicitly directed manner?

    1. @Sherry Heinze acting as community liaison. Feedback is that the resources available are mostly good (or at least OK). Reasons for individuals not engaging are highly mixed & personal.

 

Other useful docs:

Testing Resources