28 July 2020
Consensus that we won’t move forward with Eggplant. Significant investment would be needed to bring the model up to date with the latest app, and we don’t see much value in return.
21labs also needs significant work to update its models to work with the latest app. We think there is more value here, but we’re unsure whether we want to move forward with 21labs, or whether Native Appium would be better.
21labs vs. Native Appium
21labs is more accessible for folks who can’t code. But for people comfortable with coding, it’s slower, fiddlier, and harder to learn than coding with Appium (we suspect).
21labs’ automatic maintenance of scripts is a neat feature, but if we structure our Appium code well, with good re-use, the problem it solves shouldn’t arise in the first place.
Currently we have three people (Liz, Jeri, Kallie) who are all comfortable with coding and happy to learn Appium - and who have found 21labs frustrating, fiddly, and slow.
We agreed to run a one-week spike writing test scripts directly in Appium (running against Perfecto). After a week, we’ll reassess. If things go well, we’ll likely stick with Appium. If they go badly, we may pivot back to 21labs.
Native Appium may run a bit faster than 21labs, which may help with CI costs - see the CI/CD point below.
Plan for the week
Goal for next Tues (4 August) was to have a basic Appium script that we could run that gets us through onboarding.
(once we can do that, the rest of the app is just more button pushes, so should be pretty easy).
(we didn’t explicitly specify iOS or Android. Either would do. Both would be awesome!)
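A minimal sketch of what the onboarding walkthrough might look like in Appium’s Python client (Android, against Perfecto). The hub URL, security token, app package, and accessibility IDs below are all placeholders, not the real values for our app:

```python
# Placeholder Perfecto cloud hub URL - replace with our tenant's URL.
PERFECTO_HUB = "https://example.perfecto.com/nexperience/perfectomobile/wd/hub"

def build_caps(token):
    """Desired capabilities for a Perfecto-hosted Android device (placeholder values)."""
    return {
        "platformName": "Android",
        "appPackage": "com.example.app",   # placeholder package name
        "securityToken": token,            # Perfecto auth token
        "autoGrantPermissions": True,      # skip OS permission dialogs
    }

def run_onboarding(token):
    """Tap through each onboarding screen until the home screen appears."""
    from appium import webdriver  # appium-python-client

    driver = webdriver.Remote(PERFECTO_HUB, build_caps(token))
    try:
        # Hypothetical accessibility IDs - replace with the app's real ones.
        for screen_id in ["get_started", "next", "next", "done"]:
            driver.find_element_by_accessibility_id(screen_id).click()
        # If we get here without a NoSuchElementException, onboarding is done.
        driver.find_element_by_accessibility_id("home_screen")
    finally:
        driver.quit()
```

Once this runs, covering the rest of the app should mostly be more of the same loop with different element IDs.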
Other points: e2e testing
For e2e testing, we’ll need to figure out how to extract codes for location data sharing from Safe Places.
Various options:
Direct access to Safe Places Back End API - simplest if we can.
Using Appium + browser on the Mobile App to access Safe Places Front End.
Invoke some Selenium scripts (which will probably be developed anyway by the Selenium team)
Solving this problem is a priority for next week.
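If direct back-end access pans out, the extraction might look like this sketch. The endpoint path, auth scheme, and JSON response shape are all assumptions to be confirmed against the real Safe Places API:

```python
import json
import urllib.request

# Placeholder base URL - substitute our Safe Places deployment.
SAFE_PLACES_URL = "https://safeplaces.example.org/api"

def fetch_access_code(case_id, token):
    """Request a location-data-sharing code for a case (hypothetical endpoint)."""
    req = urllib.request.Request(
        f"{SAFE_PLACES_URL}/cases/{case_id}/access-code",
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return parse_access_code(resp.read())

def parse_access_code(body):
    """Pull the code out of the (assumed) JSON response body."""
    payload = json.loads(body)
    return payload["accessCode"]
```

The e2e test would then feed the returned code into the mobile app via Appium, avoiding any UI automation of Safe Places itself.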
Other points: CI/CD
Once we have some working scripts, we’ll want to integrate into the CI/CD pipeline.
Let’s do this as soon as we have something of even minimal value.
TBD how often this runs. Bare minimum would be once a week, manually.
Ideally it would run much more frequently, potentially every build.
If the tests take a long time, this could increase our GitHub CI costs, since the VM that does the builds sits idle waiting for the automated test results to return. This is a future challenge we’ll need to address if we want these tests running on a very frequent basis.
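The bare-minimum cadence above could be a scheduled GitHub Actions workflow with a manual trigger - this fragment is a sketch (workflow name, schedule, and job body are placeholders until the Appium scripts exist):

```yaml
name: appium-e2e
on:
  schedule:
    - cron: "0 6 * * 1"    # weekly, Mondays 06:00 UTC
  workflow_dispatch: {}    # allow manual runs from the Actions tab
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run Appium suite against Perfecto
        run: echo "TODO - invoke the test runner here"
```

Moving to per-build runs would just mean adding a `push` trigger, at which point the cost question above kicks in.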