The outline plan is to run some sort of beta trial in Boston.
What are we trying to achieve?
What can we do with this group that we can’t do with regular people?
Drive predictable volumes of “contact trace” interviews
Get feedback from the person who participated in the interview
Collect complete location data from non-infected patients to assess what matched & what didn’t
Get feedback from non-infected patients as to who matched a given location.
Run detailed analytics / diagnostics on individual phones: Firebase, Crashlytics etc.
Collect (daily?) location logs from every phone & check the reliability of logging.
Get qualitative feedback from individuals
Set up lots of infections, and get a much higher rate of notifications than we could do otherwise.
Push up scale / number of infections / data points to download.
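One item above - checking the reliability of daily location logs - could largely be automated. A minimal sketch below; the entry shape (timestamp, lat, lon) and the 10-minute gap threshold are our assumptions, not anything from the spec:

```python
from datetime import datetime, timedelta

def logging_gaps(entries, max_gap=timedelta(minutes=10)):
    """Return time gaps in a day's GPS log that exceed max_gap.

    entries: list of (timestamp, lat, lon) tuples, any order.
    A long gap suggests the phone stopped logging (app killed,
    battery optimisation, etc.) rather than the user standing still.
    """
    entries = sorted(entries, key=lambda e: e[0])
    gaps = []
    for prev, cur in zip(entries, entries[1:]):
        delta = cur[0] - prev[0]
        if delta > max_gap:
            gaps.append((prev[0], cur[0], delta))
    return gaps
```

Run daily over every phone's log, a report of gap counts and durations would give a quick reliability score per device.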
Product dependencies - must be in place before we start
GPS logging reliability - else all we will learn is that GPS reliability is not good enough!!
(not sure there is much else that is really essential…
… secure transport of location data is nice but not essential
… ditto hashing of location data on HA JSON server…
… Diarmid to read through full MNVP1 spec & decide what else is a “must have” here - suspect not much…
(maybe some of the consent stuff; chance to review redacted trail before it is published…?)
Key tech enablers (non-product)
Firebase / crashlytics - how easy to set up?
Rig to consume daily GPS data for analysis
Analysis engine to determine what should have matched vs. what did.
Pre-production Path Check environment to direct to Mock HA Server.
Mock HA server to receive encrypted data transmissions
Mock HA server into which we can feed data
Safe Places server for contact tracers
Synthetic data generator to generate large data sets
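The "analysis engine to determine what should have matched vs. what did" could start as a brute-force co-location check between an infected person's trail and a volunteer's trail. A sketch, assuming trails are lists of (timestamp, lat, lon); the 20 m / 5 min thresholds are illustrative, not the app's real matching parameters:

```python
import math
from datetime import datetime, timedelta

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6_371_000  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def expected_matches(infected_trail, mop_trail,
                     max_dist_m=20, max_dt=timedelta(minutes=5)):
    """Points where two trails are close in both space and time.

    Returns (infected_timestamp, mop_timestamp) pairs that *should*
    have produced a notification; compare against what actually did.
    """
    hits = []
    for t1, la1, lo1 in infected_trail:
        for t2, la2, lo2 in mop_trail:
            if abs(t1 - t2) <= max_dt and haversine_m(la1, lo1, la2, lo2) <= max_dist_m:
                hits.append((t1, t2))
    return hits
```

At 5-10 people this O(n*m) pass is fine; a spatial index would only matter at the 50+ stage.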
People enablers
Volunteer MoPs (members of the public), willing to share their location data & a daily report.
Volunteer MoPs to participate in contact trace interviews
Volunteer contact tracers
Data controller who can monitor how we use PII
People to run the analysis daily
Someone to direct the experiments
PM/Dev engagement to learn from this & fix problems.
What to measure?
How do we measure “effectiveness” ?
User accounts of their movements vs. what the location trails tell us vs. what notifications triggered. Use this info to try to account for the numbers of false negatives, true positives & false positives…
Epidemiological view of effectiveness - independent review of stuff in previous bullet?
User feedback on messaging - how seriously do they take it? Do they know what to do? Other feedback?
Contact tracer feedback.
MoP feedback on contact tracing experience
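The false-negative / true-positive / false-positive counts mentioned above reduce to standard precision and recall. A trivial helper (the function and argument names are ours, for illustration):

```python
def effectiveness(true_pos, false_pos, false_neg):
    """Summarise notification effectiveness from counted outcomes.

    true_pos:  notifications that matched a real contact (per interviews)
    false_pos: notifications with no corresponding real contact
    false_neg: real contacts that triggered no notification
    """
    precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
    recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
    return {"precision": precision, "recall": recall}
```

Precision tells us how seriously users should take a notification; recall is the epidemiologically important number (missed contacts).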
How many people?
Depends how active / mobile / engaged people are going to be.
Directed interactions only need a small number of people (<10) to be able to do some effective stuff.
Larger groups of people enable more non-directed learnings; much more unexpected stuff will start to happen as we get to 50+ people.
Much more tech/automation needed to process info from 50+ people than from 5-10 people - with 5-10, lots could be manual.
Suspect we should aim for ~10 people for a week, then grow by ~20 people/week so we are at 50 people after 3 weeks. Not obviously going to get lots more benefit from scaling above 50 people, and will become increasingly challenging to organize….