...

Last updated by: Diarmid Mackenzie, Friday 22 May

Testing Activity

Safe Paths Android: v0.9.4 in test

  • Various individual volunteers

  • Luca Dusi has a team of ~20 Italian testers, with a focus on:

    • Italian language

    • A wide range of devices & form factors

    • Error conditions

    • Usability

Safe Paths iOS: Testing is limited until we get through Beta App Review, which will enable Open Beta (hopefully by Monday)

  • Ali Raizin testing

  • We have limited spaces for other iOS testers.  Contact Diarmid Mackenzie if you are keen to start immediately.

  • Jonathan Wright can repackage / resign the IPA file to deploy to any iOS device (we just need people's device IDs (UDIDs) and Apple IDs).

  • Luca Dusi's team of ~20 testers is lined up to test iOS when it is available.

Safe Places

  • Adam Leon Smith has been developing an initial test plan

  • From Monday, we should have a team from PQA Test starting testing.

    • Lead contact TBC

Safe Paths Automation

  • Team from PQA Test starting tomorrow.

    • Lead contact: TBC

Safe Places Automation 

  • Team from PQA Test starting tomorrow.

    • Lead contact: TBC

Test Data / Location Data

  • Adam Leon Smith is looking at a number of solutions for Test Data

    • His own Google location history. 5 years of data rebased into 86 different versions of March 2020. He has tools to do this manipulation if you want to do the same (see the test plan doc); a rough sketch of this kind of rebasing appears at the end of this section.

      • If you are using your own long-term location history for testing, be mindful of privacy concerns, and keep the data to yourself (do not share with the project) unless you have explicit agreement otherwise.

    • He has an ML contact who can use this as seed data to generate many more similar trails.

      • Huw Price previously solved this problem for a smart cities project; Jonathan Wright has reached out about the 5 billion historical journeys previously generated (GPX format).

  • We have a lead on a tech that allows a user to program alternate GPS coordinates into their phone, to simulate being in different locations.  Could be a powerful enabler, as testers will be able to use test data from anywhere in the world.

    • Jonathon Wright is qualifying this opportunity, working with Eran Kinsbruner on using the same stack we used to test smart city mobile apps (https://youtu.be/wuQt97rTa1Y).

  • Test Data / mock HAs for Automation. This is a problem that will need solving; we will probably give it to PQA Test as part of their Automation mission. Diarmid to brief them on Monday.
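
To illustrate the kind of rebasing described above, here is a minimal Python sketch. It assumes the Google Takeout location history format (a top-level "locations" array with "timestampMs" / "latitudeE7" / "longitudeE7" fields) and uses hypothetical file names; Adam's actual tooling and the 86 generated variants are described in the test plan doc.

  import json
  from datetime import datetime, timezone

  # Hedged sketch: shift an entire Takeout location history so that a chosen
  # source day lands on a chosen day in March 2020. Field names assume the
  # "timestampMs" (milliseconds since epoch, as strings) export format.
  def rebase(points, source_day, target_day):
      offset_ms = int((target_day - source_day).total_seconds() * 1000)
      rebased = []
      for p in points:
          q = dict(p)
          q["timestampMs"] = str(int(p["timestampMs"]) + offset_ms)
          rebased.append(q)
      return rebased

  with open("Location History.json") as f:                    # hypothetical input file
      history = json.load(f)["locations"]

  source_day = datetime(2015, 3, 1, tzinfo=timezone.utc)      # a day in the source history
  target_day = datetime(2020, 3, 1, tzinfo=timezone.utc)      # rebase onto March 2020

  with open("rebased-march-2020.json", "w") as f:             # one of many possible variants
      json.dump({"locations": rebase(history, source_day, target_day)}, f)

Repeating this with different source/target day pairs is one way to produce many distinct March 2020 trails from a single long history; the privacy caveat above still applies if the source data is your own.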

Testing in Production

Overall

  • We are working on the MVP1 delivery of Safe Paths & Safe Places

  • This will ship to HAs on 1 June, but there will be more testing to complete before it is ready for launch, which we hope to do over the following week (including getting fixes for any problems).

  • Goal for 1 June should be that all features are code complete, and the basic function of the solution works.

  • In parallel, we have kicked off Project Aurora to build a GAEN Bluetooth app (without GPS). That will be coming into test soon as well.

  • Loads of activity around Privacy, Security & Transparency - we hope to have a robust public position launched on all these points by 1 June. Diarmid & Adam are both heavily involved in this.


Safe Paths Mobile App:

  • Right now we are in a big Dev push for MVP1, but there is not much code to test yet - we expect lots of new function to be coming into Test over the next week or so.

  • Follow this link to see what’s in MVP1.

  • There are quite a few detailed items still to be specced, but here are some of the big changes coming soon that will need significant testing. Lots of planning work still to be done for these…

  • MVP1#1: Location logging reliability

  • MVP1#2: Location logging accuracy

  • MVP1#16: Secure transmission of location data to HAs.

  • MVP1#17: Scalability

  • MVP1#18: Moving points of concern from plain text to hashes (an illustrative sketch of the general technique appears at the end of this section)

  • Other areas of testing that we can be driving forward in the meantime.

  • Regression testing - Dev is ongoing, and we have access to regular Dev builds via TestFlight & Google Beta. It is helpful for us to keep doing some testing with these. There is a defined set of test cases in PractiTest, but we have not yet run through them: https://stage.practitest.com/p/5023/sets/26387/edit

  • Deeper testing - There are some existing functions that have not been very deeply tested yet. Some examples: Accessibility, early Android versions, some aspects of Security - talk to Diarmid if you are interested in exploring any of these. Some known gaps here: Test Session Charters

  • Test Automation. We have some automated test scripts from 21labs and Eggplant, and we are looking to extend these.
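
MVP1#18 above covers moving points of concern from plain text to hashes. The exact scheme is still being specced; purely as an illustration of the general technique (and not the project's design), here is a hedged Python sketch that quantises a GPS point in space and time and hashes the result, so matching can be done on digests rather than raw coordinates. The rounding granularity, time window and salt handling are all assumptions.

  import hashlib

  # Illustrative only - not the actual Safe Paths hashing scheme.
  # Quantise a location/time point and hash it, so two parties that were at the
  # same (rounded) place in the same (rounded) time window produce the same
  # digest without ever exchanging raw coordinates.
  def hash_point(lat, lon, timestamp_s, salt=b""):
      lat_q = round(lat, 4)                    # ~10 m of latitude (assumed granularity)
      lon_q = round(lon, 4)
      time_q = int(timestamp_s // 300) * 300   # 5-minute windows (assumed)
      payload = f"{lat_q:.4f},{lon_q:.4f},{time_q}".encode()
      return hashlib.sha256(salt + payload).hexdigest()

  # Two nearby readings a couple of minutes apart fall into the same bucket:
  a = hash_point(42.36013, -71.09411, 1590000123)
  b = hash_point(42.36009, -71.09414, 1590000290)
  print(a == b)  # True

From a test point of view, the interesting cases are boundary points: readings that sit either side of a rounding or time-window boundary hash differently even when they are physically close, which any matching logic has to allow for.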

Safe Places

  • Adam Leon Smith has been developing a Test Plan: Safe Places - this includes notes on how to contribute.

  • We have a team of about 5 testers engaged in a mix of manual testing and automated tests (using Postman and Selenium) - see the illustrative Selenium sketch at the end of this section.

  • Adam Leon Smith is test lead - talk to him if you have any questions.
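
For anyone joining the automation work here, the sketch below shows the rough shape of a Selenium check in Python. The URL, credentials and element locators are placeholders, not the real suite - the actual scripts, and how to contribute to them, are covered in the Safe Places test plan above.

  from selenium import webdriver
  from selenium.webdriver.common.by import By

  # Hedged sketch of a login smoke check against a hypothetical local instance.
  driver = webdriver.Chrome()
  try:
      driver.get("http://localhost:3000/login")                      # placeholder URL
      driver.find_element(By.NAME, "username").send_keys("tester@example.org")
      driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
      driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
      assert "Safe Places" in driver.title                           # placeholder post-login check
  finally:
      driver.quit()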

Security / Privacy

  • Lots of progress going on here

  • Much of it at the level of principles / reqs / spec - which has fed into MVP1

  • A lot of the MVP1 features are intended to improve Privacy and Security.

  • Still some items where we are behind (e.g. a proper Threat Model, penetration testing, etc.).

  • Anyone with experience in these areas who is able to contribute - please talk to Diarmid or Adam.

Testing in Production

Overall MVP Quality Readout

  • Diarmid owns this for now, and is trying to present a position based on the Quality Map document.

What aren’t we doing?

Here are the things that we probably ought to be doing more of, plus why we aren't doing them:

  • Testing Community support (Not Blocked - needs someone’s time & energy!  Who can step up?)

    • Somebody taking an explicit “Scrum master”-like role in how we are all working together, what’s effective, what’s ineffective & how we can improve

    • Identifying & resolving team impediments

    • Of course, everyone can do this, but it would be great to have one or more people making this a real focus.

  • Security / Privacy (Blocked)

    • No active testing in this space.

    • Issues we are aware of are not getting prioritized by Dev, so it is not clear it is worth putting effort into finding more.

    • Diarmid Mackenzie has a string of 30 Privacy & Security concerns about the project that need to be channeled to the right people for review. He's figuring out how to handle this.

    • Adam Leon Smith has raised a feature request for a brief architectural deployment/security note on this for Safe Places

Key impediments for the team

...

    • / Data Visualisation (Splunk for Good)

    • Way forward is expected to be primarily based on analytics from Safe Places rather than Safe Paths, for privacy reasons.

    • MVP1 story MVP1#19 covers some work in this area.

Testing Community support

  • Sherry Heinze is continuing to be a point of contact for any new testers who want some support working out what to work on; or for anyone who wants to find something different to work on. Sherry will be looking for new volunteers, and will probably reach out to you, but if she doesn’t, please do reach out to her. She’s on Mountain Time.

Key impediments for the team

Key issues right now.

  1. We need code to test! Loads of function coming in for MVP1 but not ready for test yet.

  2. Not enough volunteers. Although we are keeping up now, when the wave of MVP1 work + the GAEN BT app becomes ready for test, we will struggle to keep up.

    1. It seems we have a large number of volunteers, who are only able to contribute a small amount of time.

    2. We're not managing to create a workflow that works with this volunteer team, leading to poor distribution of work.

  3. Test Automation for Mobile clients - we are working on this with 21labs, to try to build a range of automated regression tests.

Updates on previously reported items - mostly solved.

  1. (4/19) Too many gaps in documentation of requirements, and detailed product behaviour

    1. Diarmid is hoping to drive something here, but anyone else who could take this on would be amazing. MVP1 spreadsheet here: https://docs.google.com/spreadsheets/d/1VTSnUOrfKBKXLkButvZ4B8MJzfV8gw-4JYv4lkG6XHA/edit#gid=0

    2. More detailed specs are linked from there / Jira.

    3. Also some good info on details in the UI / UX space: https://pathcheck.atlassian.net/wiki/spaces/UIUX/overview

  2. (4/19) No available domain expert for Contact Tracing

    1. Kyle is working on a contact through Alina, and has been collecting lots of info from HAs. Is one enough, or do we need more?

    2. We have some info here: How are Health Authorities actually going to use Safe Paths?

    3. We are still hoping to get an epidemiological adviser onboard within the project (with Kyle, I think).

  3. (4/19) No signatory to make a contract with Applause

    1. Christian is trying to find us somebody - until we have this in place we can’t use Applause

    2. Diarmid to share PoC - Du Tri will take it forward for Partnership to action

  4. (4/19) #430 means we still don't have the ability to import data from Health Agencies. This blocks a lot of important App testing.

    1. Escalated to Abhishek Singh who is making sure it gets the attention it needs.

    1. All sorted - though we have not yet come up with a good use for Applause (Haiti field trial not yet a priority).

  5. (4/19) No clear path for escalation of very high-level issues identified in Test - e.g. Privacy / Security concerns

    1. Diarmid is asking Yasaman, Christian & Kyle for guidance, but no path forward yet.

    2. Du Tri will take forward - he has a link to the doc. 

  6. (4/19) Dev bandwidth & MVP 1.0 crunch is a blocker in terms of engaging more in areas that are likely to need significant time from Dev

    1. E.g. Security / Privacy

  7. (4/19) Very limited testing for iOS until we get into Open Beta

    1. We are in Beta Review right now - hopefully just a few days. All resolved, and loads of good progress on Privacy / Security / Transparency / Ethics, which should all bear fruit in the MVP1 timeframe.

  8. (4/19) PractiTest not yet embedded as a tool.

    Outdated documentation points to spreadsheets rather than PractiTest

    1. Limited test cases documented in PractiTest

    2. No clear patterns for using PractiTest for more exploratory forms of testing

    (4/19) Onboarding has a 3-day backlog

    1. Diarmid has permission to bypass onboarding by directly sharing a link to the key info. This has been key to getting certain people on board fast. And for overall Requirements Tracing - it is not clear whether or not we want to invest in this; much of this is happening in Confluence today, and that may be good enough.

  9. (4/19) No current process for raising reqs/spec issues uncovered in Test

    1. We’ve been asked not to file these in GitHub issues.

    2. We believe we’ll be moving to Jira, but not quite ready yet

    3. We need to update guidance for testers. All covered in Jira: How to raise bugs found in Testing

  10. (4/19) No clear map for new arrivals about what works, what doesn't work, or where to focus testing attention

    1. Diarmid envisages a coverage / status dashboard based on the Quality Map. There is work to do to make this a reality.

  11. ??? what else - please add your impediments here

...

  1. It's still proving difficult for even experienced testers to engage with this project when they first arrive, despite us now having a broad set of resources available in Confluence. We are trying to get a better understanding of what else we can do. Are further resources needed? What? Do we need to run in a much more explicitly directed manner?

    1. Sherry Heinze acting as community liaison. Feedback is that the resources available are mostly good (or at least OK). Reasons for individuals not engaging are highly mixed & personal.

Other useful docs:

Testing Resources

...