Testing Status - who is working on what?
Author: Diarmid Mackenzie
Last updated by: Diarmid Mackenzie, Friday 22 May
Testing Activity
Overall
We are working on the MVP1 delivery of Safe Paths & Safe Places
This will ship to HAs on 1 June, but more testing will be needed before it is ready for launch; we hope to complete that over the following week (including fixes for any problems found).
The goal for 1 June is that all features are code complete and the basic function of the solution works.
In parallel, we have kicked off Project Aurora to build a GAEN Bluetooth app (without GPS). That will be coming into test soon as well.
Loads of activity around Privacy, Security & Transparency - we hope to have a robust public position launched on all these points by 1 June. Diarmid & Adam are both heavily involved in this.
Safe Paths Mobile App:
Right now we are in a big Dev push for MVP1, but there is not much code to test yet - we expect lots of new function to come into Test over the next week or so.
Follow this link to see what’s in MVP1.
There are quite a few detailed items still to be specced, but some big changes are coming soon that will need significant testing. Lots of planning work to be done for these…
MVP1#1: Location logging reliability
MVP1#2: Location logging accuracy
MVP1#16: Secure transmission of location data to HAs.
MVP1#17: Scalability
MVP1#18: Moving points of concern from plain text to hashes (a minimal sketch of the idea follows this list)
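To make the intent of MVP1#18 concrete, here is a minimal sketch of hashing a location point. This is illustrative only: the coordinate rounding, 5-minute time buckets, and choice of SHA-256 are assumptions for the example, not the confirmed MVP1 design.

```python
import hashlib

def hash_point(lat: float, lon: float, timestamp: int) -> str:
    """Hash one location point so it can be matched without plain-text exposure.

    Illustrative only: rounding to 4 decimal places (~10 m), 5-minute time
    buckets, and SHA-256 are assumptions, not the confirmed MVP1#18 scheme.
    """
    quantised = f"{round(lat, 4)},{round(lon, 4)},{timestamp // 300}"
    return hashlib.sha256(quantised.encode("utf-8")).hexdigest()

# App and HA would hash points the same way, then compare hashes
# instead of exchanging raw coordinates.
print(hash_point(42.3601, -71.0589, 1590969600))
```

The point of the change is that matching happens on hashes, so neither side needs to publish or transmit plain-text points of concern.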
Other areas of testing that we can be driving forward in the meantime.
Regression testing - Dev is ongoing, and we have access to regular Dev builds via TestFlight & Google Beta. It is helpful for us to keep doing some testing with these. There is a defined set of test cases in PractiTest, but we have not yet run through them: https://stage.practitest.com/p/5023/sets/26387/edit
Deeper testing - There are some existing functions that have not been very deeply tested yet. Some examples: Accessibility, early Android versions, some aspects of Security - talk to Diarmid if you are interested in exploring any of these. Some known gaps here: Test Session Charters
Test Automation. We have some automated test scripts from 21labs and Eggplant, and we are looking to extend these.
Safe Places
Adam Leon Smith has been developing a Test Plan: Safe Places - this includes notes on how to contribute.
We have a team of about 5 testers engaged in a mix of manual testing and automated tests (using Postman and Selenium) - see the sketch after this list for a flavour of the latter.
Adam Leon Smith is test lead - talk to him if you have any questions.
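For anyone new to the automated side, here is a minimal sketch of the kind of Selenium check involved. The URL and element locators are hypothetical placeholders, not the real Safe Places markup - see the Test Plan above for the actual cases.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Illustrative smoke test only: URL and element IDs below are placeholders.
driver = webdriver.Chrome()
try:
    driver.get("https://safeplaces.example.org/login")
    driver.find_element(By.ID, "username").send_keys("tester@example.org")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login-button").click()
    # A real test would use an explicit wait; a title check keeps this short.
    assert "Safe Places" in driver.title
finally:
    driver.quit()
```

Postman covers the equivalent checks at the API layer; the Selenium suite exercises the web UI.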
Security / Privacy
Lots of progress going on here
Much of it at the level of principles / reqs / spec - which has fed into MVP1
A lot of the MVP1 features are intended to improve Privacy and Security.
Still some items where we are behind (e.g. a proper Threat Model, penetration testing, etc.).
Anyone with experience in these areas who is able to contribute - please talk to Diarmid or Adam.
Testing in Production
We are looking at a Beta Trial in the Boston area from 1 June.
Work in progress here: Boston Beta Trial
Jonathon Wright is taking the lead on Testing in Production - figuring out what we need to do, and how to make it happen. Testing in Production (TiP)
Working with Todd DeCapua around Testing in Production (TiP) / Telemetry Data (Firebase) / Data Visualisation (Splunk for Good): Splunk - APM / Data Visualization. A sketch of what the telemetry path could look like follows this list.
Way forward is expected to be primarily based on analytics from Safe Places rather than Safe Paths, for privacy reasons.
MVP1 story MVP1#19 covers some work in this area.
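As one illustration of what the telemetry path could look like, here is a sketch of posting an anonymised event to Splunk's HTTP Event Collector. The host, token, and event fields are placeholders; the actual Firebase-to-Splunk pipeline for TiP is still being worked out, as noted above.

```python
import requests

# Illustrative only: host, token, and event fields are placeholders.
SPLUNK_HEC_URL = "https://splunk.example.org:8088/services/collector/event"
SPLUNK_TOKEN = "00000000-0000-0000-0000-000000000000"

event = {
    "event": {"component": "safe_places", "action": "upload_completed"},
    "sourcetype": "_json",
}

resp = requests.post(
    SPLUNK_HEC_URL,
    json=event,
    headers={"Authorization": f"Splunk {SPLUNK_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # HEC replies with {"text": "Success", "code": 0} on success
```

Keeping events at this level (component + action, no location data) is consistent with the privacy-driven decision to base analytics on Safe Places rather than Safe Paths.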
Testing Community support
@Sherry Heinze is continuing to be a point of contact for any new testers who want some support working out what to work on; or for anyone who wants to find something different to work on. Sherry will be looking for new volunteers, and will probably reach out to you, but if she doesn’t, please do reach out to her. She’s on Mountain Time.
Key impediments for the team
Key issues right now:
We need code to test! Loads of function coming in for MVP1 but not ready for test yet.
Not enough volunteers. Although we are keeping up now, when the wave of MVP1 work plus the GAEN BT app becomes ready for test, we will struggle to keep up.
It seems we have a large number of volunteers who are each only able to contribute a small amount of time.
We're not managing to create a workflow that works with this volunteer team, leading to poor distribution of work.
Test Automation for Mobile clients - we are working on this with 21labs, to try to build a range of automated regression tests.
Updates on previously reported items - mostly solved.
(4/19) Too many gaps in documentation of requirements, and detailed product behaviour
MVP1 spreadsheet here: https://docs.google.com/spreadsheets/d/1VTSnUOrfKBKXLkButvZ4B8MJzfV8gw-4JYv4lkG6XHA/edit#gid=0
More detailed specs are linked from there and from Jira.
Also some good info on details in the UI / UX space: https://pathcheck.atlassian.net/wiki/spaces/UIUX
(4/19) No available domain expert for Contact Tracing
Kyle has been collecting lots of info from HAs
We have some info here: How are Health Authorities actually going to use Safe Paths?
We are still hoping to get an epidemiological adviser on board within the project (working with Kyle, I think).
(4/19) No signatory to make a contract with Applause
All sorted - though we have not yet come up with a good use for Applause (the Haiti field trial is not yet a priority).
(4/19) No clear path for escalation of very high-level issues identified in Test - e.g. Privacy / Security concerns
All resolved, and loads of good progress on Privacy / Security / Transparency / Ethics, which should all bear fruit in the MVP1 timeframe.
(4/19) PractiTest not yet embedded as a tool.
Limited test cases documented in PractiTest
No clear patterns yet for using PractiTest for more exploratory forms of testing, or for overall Requirements Tracing - it is not clear whether we want to invest in this; much of it happens in Confluence today, and that may be good enough.
(4/19) No current process for raising reqs/spec issues uncovered in Test
All covered in Jira: How to raise bugs found in Testing
It's still proving difficult for even experienced testers to engage with this project when they first arrive, despite us now having a broad set of resources available in Confluence. We are trying to get a better understanding of what else we can do. Are further resources needed? What? Do we need to run in a much more explicitly directed manner?
@Sherry Heinze acting as community liaison. Feedback is that the resources available are mostly good (or at least OK). Reasons for individuals not engaging are highly mixed & personal.
Other useful docs: