The outline plan is to run a beta trial of the app in Boston.
...
Users are notified when they have been in contact with an infected person, with few false positives and few false negatives.
The app provides clear information to users about what has been detected and what steps they should take.
The app provides a high-quality user experience: slick, attractive, usable.
The app does not cause frustration.
The app does not inconvenience the user (battery usage, data costs, unhelpful notifications, other problems).
The user trusts the app.
App users are often very privacy conscious - the app's behavior should match their expectations.
Asymptomatic vs. symptomatic users - what is the impact on contact tracing?
Diagnosed users:
The contact tracing experience is clear, straightforward, and informative.
Contact tracing based on data from the app is superior to contact tracing without the app.
The user continues to have a positive experience after the contact tracing is complete.
Is the user able to identify where they got the infection?
Are they able to consult family/friends before sharing any details with contact tracers?
Contact tracer:
The contact tracing process is clear and straightforward
It is straightforward to publish points of concern in Safe Places.
Contact tracing based on data from the app is superior to contact tracing without the app.
It is straightforward to redact data to meet a user’s privacy needs.
Does it considerably reduce the load on the contact tracer - are they able to handle more patients? (How many more in the same time?)
Health authority:
The app supports contact tracing efforts
The app helps to reduce the spread of COVID-19
Could it mean health facilities require fewer staff?
…and how might we measure it?
...
Drive predictable volumes of “contact trace” interviews
Tech validation
Collect complete location data from non-infected participants to assess what matched and what didn't (see the trail-matching sketch after this list).
Test location contexts from a wide area.
Test moving users along moving paths (both parties in motion)?
Test location-related interactive behaviors between multiple mobile users and objects.
Impact of location services being toggled on and off by different test users.
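Below is a minimal sketch of what the trail matching in this validation step could look like, assuming location trails are lists of (unix_timestamp, lat, lon) points; the function names, the 20 m distance threshold, and the 300 s time window are illustrative placeholders, not the app's actual matching parameters.

```python
# Hypothetical trail-matching sketch: flag moments where two users'
# logged points were close in both space and time.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))

def matches(trail_a, trail_b, max_m=20, max_s=300):
    """Pairs of timestamps where the two trails co-locate within
    max_m meters and max_s seconds (placeholder thresholds)."""
    return [
        (ta, tb)
        for ta, lat_a, lon_a in trail_a
        for tb, lat_b, lon_b in trail_b
        if abs(ta - tb) <= max_s
        and haversine_m(lat_a, lon_a, lat_b, lon_b) <= max_m
    ]
```

Comparing these flagged pairs against participants' own accounts of where they were gives a first read on false positives and false negatives.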
Social system validation
Get feedback from people who participated in the contact-tracing interviews.
Get feedback from non-infected participants who matched a given location.
Volunteer base to be used (open question):
Harvard students - would this put them at risk?
Health workers on daily duty, including with actual COVID patients (Mayo Clinic staff, or something similar).
FedEx / logistics-company delivery people, since they move around the city.
Consider a small geographical area (Boston? or smaller?) to ensure paths cross often.
Tech Aspects to consider
Run detailed analytics/diagnostics on individual phones: Firebase, Crashlytics, etc.
Collect daily(?) location logs from every phone and check the reliability of logging (see the gap-check sketch after this list).
Get qualitative feedback from individuals
Set up lots of simulated infections to get a much higher rate of notifications than we could otherwise.
Push up the scale: number of infections / data points to download.
Which features do we want to test - only crossed paths?
Check the difference in mapping when we use Bluetooth only vs. Bluetooth and GPS - possibly once MVP2 is ready to test?
Contact tracer's perspective - Safe Places - new.
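As one way to run the reliability check above: a sketch assuming each phone uploads the day's log timestamps, with a guessed 5-minute logging interval (the real interval may differ).

```python
# Hypothetical daily gap check: find stretches where logging stopped.
def logging_gaps(timestamps, expected_s=300, tolerance=2.0):
    """(start, end) pairs where the gap between consecutive log points
    exceeds tolerance * expected_s, i.e. likely missed logs."""
    ts = sorted(timestamps)
    return [(a, b) for a, b in zip(ts, ts[1:]) if b - a > tolerance * expected_s]

def coverage(timestamps, day_start_s, day_end_s, expected_s=300):
    """Fraction of the expected log points actually present that day."""
    expected = (day_end_s - day_start_s) / expected_s
    return min(1.0, len(timestamps) / expected) if expected else 0.0
```

Tracking coverage per phone per day would show whether logging reliability varies by device, OS version, or connectivity (WiFi vs. 3G vs. 4G, per the use-case list below).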
Risks related to experimentation: product dependencies that must be in place before we start.
...
How do we measure "effectiveness"? Define KPIs that make this prototype/simulation successful (tech as well as non-tech).
Compare users' accounts of their movements vs. what the location trails tell us vs. what notifications were triggered. Use this to estimate the numbers of false negatives, true positives, and false positives (see the confusion-matrix sketch after this list)…
Epidemiological view of effectiveness - independent review of the data in the previous bullet?
User feedback on messaging - how seriously do they take it? Do they know what to do? Other feedback?
Contact tracer feedback.
Member-of-public (MoP) feedback on the contact tracing experience.
60% vs. 20% app penetration for contact tracing to work - is there a way to test that through simulation? (See the penetration sketch after this list.)
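A minimal sketch of the false-positive/false-negative accounting from the bullet above, assuming we can assemble, per participant, a ground-truth set of contact events (from directed plans plus the user's own account) and the set of contacts the app notified about; the function and field names are ours, not the app's.

```python
# Hypothetical confusion-matrix accounting for notification quality.
def confusion(ground_truth: set, notified: set) -> dict:
    tp = len(ground_truth & notified)   # real contacts the app caught
    fp = len(notified - ground_truth)   # notifications with no real contact
    fn = len(ground_truth - notified)   # real contacts the app missed
    return {
        "tp": tp, "fp": fp, "fn": fn,
        "precision": tp / (tp + fp) if tp + fp else 0.0,  # "few false positives"
        "recall": tp / (tp + fn) if tp + fn else 0.0,     # "few false negatives"
    }
```

Precision maps to the "few false positives" goal and recall to "few false negatives" from the user-experience list at the top.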
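On the penetration question: a contact is traceable only if both people in it run the app, so coverage of contacts scales roughly with the square of penetration - about 0.6² = 36% of contacts at 60% penetration vs. 0.2² = 4% at 20%. A toy Monte Carlo sketch of this (our own illustration, not a validated epidemiological model):

```python
# Toy simulation: fraction of random contacts where both parties
# happen to have the app installed.
import random

def simulated_contact_coverage(penetration, n_contacts=100_000, seed=0):
    rng = random.Random(seed)
    traced = sum(
        rng.random() < penetration and rng.random() < penetration
        for _ in range(n_contacts)
    )
    return traced / n_contacts

for p in (0.2, 0.6):
    print(f"penetration {p:.0%}: ~{simulated_contact_coverage(p):.1%} of contacts traceable")
```

A real test of the 20% vs. 60% thresholds would also need contact-network structure and transmission dynamics on top of this, which is where the independent epidemiological review could help.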
How many people?
Depends on how active/mobile/engaged people are going to be.
Directed interactions need only a small number of people (<10) to do some effective work (a controlled experiment).
Larger groups of people enable more non-directed learnings; much more unexpected behavior will start to happen as we get to 50+ people.
Much more tech/automation is needed to process data from 50+ people than from 5-10 people - with 5-10, a lot could be done manually.
We suspect we should aim for ~10 people for a week, then grow by ~20 people/week so we are at 50 people after 3 weeks. Scaling above 50 people is not obviously going to bring much more benefit, and it will become increasingly challenging to organize…
Two groups:
10 people - directed learnings with pre-determined KPIs.
They should follow plans and deliberately cross paths to validate our estimates of false positives and false negatives (public transportation, cafes, office buildings).
50 people - second group - non-directed - gather unexpected feedback (extra feedback on what information they trust, and how important privacy is to them at all levels; develop a questionnaire). This is the non-technological part.
Use cases for directed learnings -
Cafes
Grocery stores
Public transportation - potential for false positives/negatives.
Office buildings - different floors (a likely source of false positives, since GPS cannot distinguish floors).
Difference in logging - WiFi vs. 3G vs. 4G.
User journey aspects to consider:
App perspective - user experience
Different iOS versions vs. Android versions - a range of phone devices to be checked. Select and test a set of popular mobile devices.
User behavior in surroundings
Demographic behavior - age 60+, parents and kids, essential workers.
Whether social distancing is maintained - people practicing it, people not practicing it.
Mobility impacts - on foot, by bicycle, by car, by metro/train, by bus/tram.
Contact tracer/interview:
Dashboard - what helps? (Generate feature ideas through the interviews.)