...

  • Drive predictable volumes of “contact trace” interviews

  • Get feedback from the person who participated in the interview

  • Collect complete location data from non-infected patients to assess what matched & what didn’t

  • Get feedback from non-infected patients as to who matched a given location.

  • Run detailed analytics / diagnostics on individual phones: Firebase, Crashlytics, etc.

  • Collect daily? location logs from every phone & check for reliability of logging (see the sketch after this list).

  • Get qualitative feedback from individuals

  • Set up lots of infections, and get a much higher rate of notifications than we otherwise could.

  • Push up scale / number of infections / data points to download.
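
To make the log-reliability check concrete, here is a minimal sketch of the kind of daily check the analysis rig could run. It assumes each phone's log arrives as a time-sorted list of records with a "time" field at a nominal 5-minute interval; the cadence, field name and scoring are illustrative assumptions, not from any spec.

    from datetime import timedelta

    NOMINAL_INTERVAL = timedelta(minutes=5)  # assumed logging cadence

    def log_reliability(points):
        """Score one day's log from one phone.

        points: time-sorted list of dicts, each with a 'time' datetime.
        Returns (coverage, max_gap): the fraction of expected samples
        that arrived, and the longest gap between consecutive samples.
        """
        if len(points) < 2:
            return 0.0, None
        times = [p["time"] for p in points]
        expected = (times[-1] - times[0]) / NOMINAL_INTERVAL + 1
        coverage = min(1.0, len(points) / expected)
        max_gap = max(b - a for a, b in zip(times, times[1:]))
        return coverage, max_gap

A phone whose coverage sits well below 1.0, or whose max_gap runs to hours, is exactly the failure mode the GPS-reliability dependency below is about.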

Product dependencies - must be in place before we start

  • GPS logging reliability - otherwise all we will learn is that GPS reliability is not good enough!!

  • (not sure there is much else that is really essential…)

  • … secure transport of location data is nice but not essential

  • … ditto hashing of location data on the HA JSON server… (see the sketch after this list)

  • … Diarmid to read through the full MNVP1 spec & decide what else is a “must have” here - suspect not much…

  • (maybe some of the consent stuff; chance to review redacted trail before it is published…?)
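
On the hashing point, a minimal sketch of one plausible scheme, assuming each GPS point is coarsened to a grid cell and time window and hashed before storage on the HA JSON server, so the server holds only opaque tokens rather than raw coordinates; the cell size, window length and choice of SHA-256 are illustrative assumptions, not the MNVP1 spec.

    import hashlib

    CELL_DEG = 0.0001   # assumed grid size, roughly 10 m of latitude
    WINDOW_SECS = 300   # assumed 5-minute time bucket

    def hash_point(lat, lon, epoch_secs):
        """Coarsen a GPS point to a grid cell and time window, then
        hash it, so the server stores tokens, not raw coordinates."""
        cell_lat = round(lat / CELL_DEG)
        cell_lon = round(lon / CELL_DEG)
        window = epoch_secs // WINDOW_SECS
        token = f"{cell_lat}:{cell_lon}:{window}".encode()
        return hashlib.sha256(token).hexdigest()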

Key tech enablers (non-product)

  • Firebase / Crashlytics - how easy are they to set up?

  • Rig to consume daily GPS data for analysis

  • Analysis engine to determine what should have matched vs. what did (see the first sketch after this list).

  • Pre-production Path Check environment to direct to the Mock HA Server.

  • Mock HA server to receive encrypted data transmissions

  • Mock HA server into which we can feed data

  • Safe Places server for contact tracers

  • Synthetic data generator to produce large data sets (see the second sketch after this list)
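
First, a minimal sketch of the analysis engine's "should have matched" side, assuming a match means two phones within some distance of each other at close-enough times; both thresholds are placeholder assumptions to be tuned, not values from the spec.

    from math import radians, sin, cos, asin, sqrt

    DIST_M = 20.0      # assumed proximity threshold in metres
    WINDOW_SECS = 300  # assumed max time difference in seconds

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points in metres."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371000 * asin(sqrt(a))

    def expected_matches(trail_a, trail_b):
        """Pairs of points from two trails that should have matched:
        close in time and within the distance threshold.
        Each point is an (epoch_secs, lat, lon) tuple."""
        hits = []
        for ta, lat_a, lon_a in trail_a:
            for tb, lat_b, lon_b in trail_b:
                if (abs(ta - tb) <= WINDOW_SECS
                        and haversine_m(lat_a, lon_a, lat_b, lon_b) <= DIST_M):
                    hits.append((ta, tb))
        return hits

Diffing this ground-truth set against the notifications the app actually fired gives the false-negative / false-positive counts discussed under "Other considerations" below.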
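
Second, a minimal sketch of the synthetic data generator, producing random-walk trails at the nominal logging interval; the step size, start date and volunteer count are arbitrary assumptions, just enough to push the number of data points up.

    import random
    from datetime import datetime, timedelta

    def synthetic_trail(start_lat, start_lon, hours=24, step_mins=5):
        """Generate a random-walk GPS trail: one (time, lat, lon) point
        per logging interval, drifting up to roughly 20 m per step."""
        t = datetime(2020, 6, 1)
        lat, lon = start_lat, start_lon
        trail = []
        for _ in range(hours * 60 // step_mins):
            trail.append((t, lat, lon))
            lat += random.uniform(-0.0002, 0.0002)
            lon += random.uniform(-0.0002, 0.0002)
            t += timedelta(minutes=step_mins)
        return trail

    # e.g. a thousand volunteers' worth of one-day trails
    trails = [synthetic_trail(51.5 + random.random(), -0.1 + random.random())
              for _ in range(1000)]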

...

  • Volunteer MoPs willing to share their location data & a daily report.

  • Volunteer MoPs to participate in contact trace interviews

  • Volunteer contact tracers

  • Data controller who can monitor how we use PII

  • People to run the analysis daily

  • Someone to direct the experiments

  • PM/Dev engagement to learn from this & fix problems.

Other considerations

  • How do we measure “effectiveness”?

  • User accounts of their movements vs. what the location trails tell us vs. what notifications were triggered? Use this info to try to account for numbers of false negatives, true positives & false positives… (see the sketch after this list)

  • Epidemiological view of effectiveness - independent review of the data in the previous bullet?

  • User feedback on messaging - how seriously do they take it? Do they know what to do? Other feedback?

  • Contact tracer feedback.

  • MoP feedback on contact tracing experience
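
One way to turn that reconciliation into numbers: a minimal sketch assuming each potential contact can be labelled a true positive, false positive or false negative from the user accounts and trails (the labelling itself is the hard part; the arithmetic below is standard precision/recall).

    def effectiveness(tp, fp, fn):
        """Precision/recall over contact notifications: precision is the
        share of notifications that were real contacts; recall is the
        share of real contacts that actually got notified."""
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return precision, recall

    # e.g. 40 correct notifications, 10 spurious, 25 missed contacts
    print(effectiveness(40, 10, 25))  # -> (0.8, 0.615...)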